entry_id | published | title | authors | primary_category | categories | text | introduction | background | method | results | discussion | conclusion
---|---|---|---|---|---|---|---|---|---|---|---|---
http://arxiv.org/abs/1701.07510v1 | 20170125223506 | Nitrogen Fractionation in Protoplanetary Disks from the H13CN/HC15N Ratio | ["V. V. Guzmán", "K. I. Öberg", "J. Huang", "R. Loomis", "C. Qi"] | astro-ph.GA | ["astro-ph.GA"] |
[email protected]
Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA
⋆Currently at Joint Atacama Large Millimeter/submillimeter Array (ALMA) Observatory, Avenida Alonso de Córdova 3107, Vitacura, Santiago, Chile.
Nitrogen fractionation is commonly used to assess the thermal history
of Solar System volatiles. With ALMA it is for the first time possible
to directly measure ^14N/^15N ratios in common molecules during the
assembly of planetary systems. We present ALMA observations of the
H^13CN and HC^15N J=3-2 lines at 0”.5 angular resolution,
toward a sample of six protoplanetary disks, selected to span a range
of stellar and disk structure properties. Adopting a typical
^12C/^13C ratio of 70, we find comet-like ^14N/^15N ratios of 80-160 in 5/6 of
the disks (3 T Tauri and 2 Herbig Ae disks) and lack constraints for
one of the T Tauri disks (IM Lup). There are no systematic differences
between T Tauri and Herbig Ae disks, or between full and transition
disks within the sample. In addition, no correlation is observed
between disk-averaged D/H and ^14N/^15N ratios in the sample. One of
the disks, V4046 Sgr, presents unusually bright HCN isotopologue
emission, enabling us to model the radial profiles of H^13CN and
HC^15N. We find tentative evidence of an increasing ^14N/^15N ratio
with radius, indicating that selective photodissociation in the inner
disk is important in setting the ^14N/^15N ratio during planet
formation.
§ INTRODUCTION
The origin of Solar System organics is a fundamental and highly
debated topic. It is unclear whether the organics in the different
Solar System bodies were inherited from the cold and dense molecular
parent cloud of our Sun, or if they are the result of chemical
processing within the Solar nebula protoplanetary disk. The isotopic
composition of present day organics may help to shed light on their
origins, since isotopic fractionation chemistry is highly environment
specific and can leave a permanent imprint. Comets are especially
interesting in this context, since they should preserve the isotopic
compositions in different molecules during the assembly of the Solar
System.
Among the different methods used to trace the origin of molecules, the
^14N/^15N isotopic ratio is one of the most popular ones. Nitrogen
isotopic ratios span at least an order of magnitude between different
Solar System bodies. A low nitrogen fractionation (high ^14N/^15N) is
observed toward the Sun and Jupiter <cit.>, while a high fractionation is observed in the rocky
planets, comets and meteorites
<cit.>. The origin of these
variations is not well understood, but it suggests that different Solar
System bodies obtained their nitrogen from different nitrogen
reservoirs <cit.>. Based on observations of the
Interstellar Medium (ISM), comets and chemical models, there are three
major nitrogen reservoirs in dense interstellar and circumstellar media,
N_2, NH_3 and HCN. These species have different fractionation
pathways <cit.>, and it is thus important to
measure the ^14N/^15N ratio in molecules representative of these nitrogen
reservoirs.
This study focuses on HCN. HCN is readily detected in the ISM
<cit.>, comets
<cit.> and protoplanetary disks
<cit.>. While several studies have
been made toward prestellar cores and protostars, the
characterization of isotopic ratios in protoplanetary disks is rather
new due to the intrinsically weak line emission. <cit.>
presented the first detection of H^13CN and HC^15N in the disk
around the Herbig Ae star MWC 480, and provided the first measurement of
the ^14N/^15N ratio in a disk. They found an isotopic ratio of 200±100,
which is similar to what is observed in the cold ISM and in comets.
The low ^14N/^15N value in the MWC 480 disk implies either inheritance
of HCN from the ISM, or the presence of an active fractionation
chemistry in the disk. There are two potentially active fractionation
channels in disks and in the ISM. The first is through isotope
exchange reactions, such as
HC^14NH^+ + ^15N → HC^15NH^+ + ^14N + hν
which favor the incorporation of ^15N into molecules at low
temperatures (<20 K). HC^15NH^+ can later recombine with
free electrons to produce HC^15N. Observations of HCN and HNC
fractionation toward protostars present a tentative trend of the
^14N/^15N ratio with temperature, supporting this scenario
<cit.>. The second mechanism is selective
photo-dissociation of ^14N^15N over ^14N_2, due to
self-shielding of ^14N_2 <cit.>. In the surface layers
of protoplanetary disks, which are directly illuminated by the
radiation field of the central star, the dominant formation pathways
leading to HCN and HC^15N are
^14N + CH_2 → HC^14N
^15N + CH_2 → HC^15N
Both mechanisms can reduce the HCN/HC^15N ratio in protoplanetary
disks <cit.>. Distinguishing between these
different origins of nitrogen fractionation levels in disks and
between inheritance and in situ disk fractionation chemistry (and
further in comets and planets) requires more disk measurements, and
constraints on the radial profiles of the ^14N/^15N ratio in disks
with different structures and around stars with different radiation
fields. A constant ^14N/^15N ratio across disks would favor a scenario
where disks inherit their organics from the natal cloud, while disk
chemistry should result in a radial gradient, since the disk
environment is dramatically different at different radii.
To begin to address this long-term goal, we present observations of
H^13CN and HC^15N in a diverse sample of 6 protoplanetary
disks. Because the HCN lines may be optically thick, we use the
H^13CN line as a proxy of HCN to derive the ^14N/^15N ratio. In
Section 2 we present the observations and describe the data reduction
process. The disk-averaged isotopic flux ratios derived from the
observations are presented in Section 3. In Section 4, we model the
disk abundance profiles of H^13CN and HC^15N in V4046 Sgr, the
source with the highest signal-to-noise ratio detection. In Section 5,
we discuss the results and compare with observations in our Solar
System and in the cold ISM. A summary is presented in Section 6.
§ OBSERVATIONS AND DATA REDUCTION
The H^13CN and HC^15N J=3-2 lines were observed with ALMA during
Cycle 2 as part of project ADS/JAO.ALMA#2013.1.00226. The Band 6
observations included two spectral settings, at 1.1 and 1.4 mm. The
correlator setup of the 1.1 mm and 1.4 mm settings were configured
with 14 and 13 narrow spectral windows, respectively, targeting
different molecular lines. The main targets of the observations were
deuterated species, which are presented in <cit.>. They include an independent analysis of the data in
the context of D/H fractionation. A focused study on the DCO^+
emission in IM Lup was presented earlier by <cit.>. The
data in MWC 480 were also used for the study of CH_3CN by
<cit.>. In addition, <cit.> presented the
H^13CN and HC^15N data in MWC 480. In this paper, we focus on the
H^13CN and HC^15N lines for the full sample. For consistency, we
present the data reduction and imaging performed independently of the
previous papers. For one of the sources, V4046 Sgr, we also use CO J=2-1
isotopologue data from the same survey.
The observations are described in detail by <cit.>. In
short, the Band 6 observations were carried out between 2014 and 2015
with baseline lengths spanning between 18 and 650 m. The total
on-source time was 20 min, on average. Quasars were observed to
calibrate the temporal variations of amplitude and phase, as well as
the frequency bandpass. The absolute flux scale was
derived by observing Titan for about half the observations, or a
quasar otherwise. The HCN isotopologue lines were covered by two spectral
windows of 59 MHz bandwidth and 61 kHz channel width in the 1.1 mm
spectral setting. The CO isotopologue lines were covered by three
spectral windows in the 1.4 mm spectral setting, with the same
bandwidth and channel width.
The data calibration was performed by the ALMA staff using standard
procedures. We took advantage of the bright continuum emission of the
sources to improve the signal-to-noise ratio, by further
self-calibrating the HCN isotopologue data. The self-calibration
solutions were derived on individual spectral windows when possible
(AS 209, LkCa 15, MWC 480 and V4046 Sgr) and on averaged spectral
windows for the weaker sources (IM Lup and HD 163296), and then
applied to each spectral window. The continuum was then subtracted
from the visibilities to produce the spectral line cubes. The clean
images were obtained by deconvolving the visibilities in CASA, using
the CLEAN algorithm with Briggs weighting. The HCN isotopologue data
were regridded to a spectral resolution of 0.5 km s^-1 for the full
sample. The robust parameter was set to 1.0 for H^13CN, except for
IM Lup where we used a value of 2.0. For HC^15N the robust parameter
was set to 2.0 to improve the signal-to-noise ratio, except for
V4046 Sgr and MWC 480 for which a value of 1.0 was used because of the
bright line emission. To help the cleaning process we created
elliptical masks, identical for all channels, centered on the dust
continuum centroid, using the disk inclination and position angle
listed in Table <ref>. The mask radius was chosen to cover
the stronger H^13CN line emission in all channels, or to cover the
dust continuum emission if the HC^15N line was not detected. The
resulting beam, channel rms and moment zero rms values are listed in
Table <ref>. For the CO isotopologues the robust parameter was
set to 0.5, and a Keplerian mask was used to help the cleaning
process, created by selecting emission consistent with the expected
Keplerian rotation of the disk in each channel.
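As an illustration of such a Keplerian mask, the sketch below selects, for each channel, the sky pixels whose projected Keplerian velocity matches the channel velocity; the stellar mass, geometry, systemic velocity and tolerance defaults are placeholders, not the values of Table <ref>:

```python
import numpy as np

G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11

def keplerian_mask(x_au, y_au, v_kms, m_star=1.75, incl_deg=33.5,
                   pa_deg=76.0, v_sys=2.93, tol=0.3):
    """Boolean (n_chan, ny, nx) mask: True where the channel velocity is
    within `tol` km/s of the projected Keplerian velocity.  All defaults
    are placeholders, not the values used for the actual imaging."""
    inc, pa = np.radians(incl_deg), np.radians(pa_deg)
    xr = x_au*np.cos(pa) + y_au*np.sin(pa)                   # along major axis
    yr = (-x_au*np.sin(pa) + y_au*np.cos(pa))/np.cos(inc)    # deprojected
    r = np.maximum(np.hypot(xr, yr)*AU, 0.1*AU)              # disk radius [m]
    cos_th = xr*AU/r                                         # azimuth cosine
    v_kep = np.sqrt(G*m_star*M_SUN/r)                        # [m/s]
    v_los = v_sys + v_kep*cos_th*np.sin(inc)/1e3             # [km/s]
    return np.abs(v_kms[:, None, None] - v_los[None]) < tol

# toy usage on a 64x64 grid of sky offsets (AU) and 60 channels
x = np.linspace(-300.0, 300.0, 64)
xx, yy = np.meshgrid(x, x)
mask = keplerian_mask(xx, yy, np.linspace(0.0, 6.0, 60))
```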
§ SAMPLE STATISTICS
The stellar and disk properties of the sample are summarized in
Table <ref>. The sample includes 4 T Tauri stars and 2
Herbig Ae stars. The stellar masses range between 0.9 M_⊙ (AS 209) and
2.3 M_⊙ (HD 163296), corresponding to luminosities that span an
order of magnitude. Two of the sources, namely LkCa 15 and V4046 Sgr,
are transitional disks, with inner holes resolved at millimeter
wavelengths. The sample is biased toward large disks with known
rich molecular emission. However, given the very different physical
properties, in particular the gas temperature, the source selection
allows us to determine the disk-averaged ^14N/^15N ratio in HCN in a
diverse sample of disks.
Figure <ref> shows the observations for the full sample of
disks. The dust continuum images are shown in the left column. The
1.1 mm continuum images were produced by averaging 1.1 mm line-free
spectral windows. The four middle panels display the H^13CN and
HC^15N velocity-integrated maps, for the full line (color images)
and for two velocity ranges, the blue- and red-shifted parts of the
line, to demonstrate the Keplerian rotation of the disk (blue and red
contours). The right column of the figure shows the disk-integrated
line profiles of the H^13CN and HC^15N lines. The spectra were
extracted using the same elliptical masks used to clean the
data. Figs. <ref> to <ref> show
channel maps of the H^13CN and HC^15N lines in each source, with
the elliptical masks overlaid on top.
The lines are classified as detected if emission consistent with the
expected Keplerian rotation of the disk is observed in at least three
channels at a 3σ level. From the inspection of the channel maps
we find that H^13CN is clearly detected toward all disks except for
the T Tauri disk IM Lup. HC^15N is clearly detected toward
V4046 Sgr, MWC 480 and HD 163296, weakly detected toward AS 209 and
LkCa 15, and not detected toward IM Lup. We note that for the two
disks with weak HC^15N line emission, while emission is detected in 3
or more individual channels, the emission is almost washed out in the
integrated intensity maps.
The H^13CN emission is generally compact compared to the extent of
the dust disk. It appears centrally peaked toward MWC 480 and V4046
Sgr, and possibly toward HD 163296, but presents clear rings toward
the remaining two sources: LkCa 15 and AS 209. The latter is
unexpected, since AS 209 is not a transition disk. The HC^15N line
shows a similar behavior, although the signal-to-noise ratio is lower.
We extracted the disk-integrated fluxes from the unclipped moment-zero
maps using the same elliptical mask created to clean the data and
extract the spectra. The uncertainty in the flux was estimated by
simulating integrated flux measurements from signal-free regions using
the same elliptical mask but centered at random positions. The
integrated disk fluxes and their associated uncertainties are listed
in Table <ref>.
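The random-aperture uncertainty estimate can be sketched as follows; this is a schematic reimplementation assuming a moment-zero map already converted to integrated-flux units, not the script actually used:

```python
import numpy as np

def flux_uncertainty(mom0, mask, n_trials=500, seed=0):
    """Repeat the disk-integrated flux measurement with the same elliptical
    aperture placed at random signal-free positions of the moment-zero map;
    the scatter of those sums estimates the flux error.  Beam-to-flux unit
    conversion is assumed to be folded into `mom0` by the caller."""
    rng = np.random.default_rng(seed)
    ny, nx = mom0.shape
    iy, ix = np.where(mask)
    dy, dx = iy - int(iy.mean()), ix - int(ix.mean())   # aperture footprint
    sums = []
    while len(sums) < n_trials:
        cy, cx = rng.integers(0, ny), rng.integers(0, nx)
        yy, xx = dy + cy, dx + cx
        if yy.min() < 0 or xx.min() < 0 or yy.max() >= ny or xx.max() >= nx:
            continue                  # aperture falls off the map
        if mask[yy, xx].any():
            continue                  # overlaps the source aperture
        sums.append(mom0[yy, xx].sum())
    return np.std(sums)
```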
We use the extracted line flux ratios to estimate the abundance ratios
of the two isotopologues. This is a reasonable first approximation
when comparing HCN isotopologues, as the H^13CN and HC^15N line
emission is expected to be optically thin, to arise in the same region,
and this region is dense enough for the molecular rotational
population to be in LTE <cit.>. We compute the
H^13CN/HC^15N abundance ratio in the LTE case and for
T_ex = 15 K. For MWC 480, we obtain a lower H^13CN/HC^15N ratio
(1.8±0.3) than the value of 2.8±1.4 reported by
<cit.>, although both values are consistent within the
errors. This is due to the different method implemented in this paper
to extract the fluxes. The resulting H^13CN/HC^15N abundance ratios
span from 1.2 to 2.2 with an average of 1.8. Given the almost
identical upper energies, Einstein coefficients and ratio Q/g_u
(partition function over upper state degeneracy) for the J=3-2
transitions (see Table <ref>), the H^13CN/HC^15N abundance
ratios are almost identical to the flux ratios. We note that a higher
excitation temperature of 100 K changes the inferred abundances by
less than 1%.
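This insensitivity can be illustrated with the optically thin LTE relation N ∝ F Q(T_ex) e^{E_u/T_ex}/(A_ul g_u ν); the sketch below uses approximate, indicative spectroscopic constants rather than the exact values of Table <ref>:

```python
import numpy as np

H, K = 6.626e-34, 1.381e-23

def n_per_flux(nu_ghz, a_ul, b_ghz, eu_k, j_up, t_ex):
    """Optically thin LTE: N ∝ F * Q(T) * exp(Eu/T) / (A_ul * g_u * nu).
    Constant prefactors cancel in the isotopologue ratio."""
    q = K*t_ex/(H*b_ghz*1e9) + 1.0/3.0      # linear-rotor partition function
    return q*np.exp(eu_k/t_ex)/(a_ul*(2*j_up + 1)*nu_ghz)

# indicative spectroscopic constants for the J=3-2 lines (approximate)
h13cn = dict(nu_ghz=259.012, a_ul=7.7e-4, b_ghz=43.17, eu_k=24.9, j_up=3)
hc15n = dict(nu_ghz=258.157, a_ul=7.6e-4, b_ghz=43.03, eu_k=24.8, j_up=3)

for t_ex in (15.0, 100.0):
    conv = n_per_flux(**h13cn, t_ex=t_ex)/n_per_flux(**hc15n, t_ex=t_ex)
    print(f"T_ex = {t_ex:5.1f} K: N ratio = {conv:.3f} x flux ratio")
```

The conversion factor stays within a fraction of a percent of unity between 15 and 100 K, consistent with the statement above.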
In order to derive the nitrogen fractionation in HCN, we adopt an
isotopic ratio of ^12C/^13C = 70. Because the carbon isotopic ratio depends
on the physical conditions of the gas <cit.>, we
include a 30% uncertainty in this value to convert the
H^13CN/HC^15N ratio into a ^14N/^15N ratio. The inferred HCN/HC^15N
ratios span from 83 to 156 with an average of 124. All disks
present low, comet-like ^14N/^15N ratios. The resulting
H^13CN/HC^15N abundance ratios and the inferred HCN/HC^15N
ratios are listed in Table <ref>. Fig. <ref>
shows the nitrogen fractionation ratios for the full sample and
compares them with the DCN/HCN ratios derived by <cit.>. There
is no indication of a correlation between the disk-averaged nitrogen
and hydrogen fractionation in these disks, as might have been expected
if both originated from a cold fractionation pathway (see also
Section <ref>). We note that the spread in ^14N/^15N
is small compared to the errors and we cannot rule out that
there is a trend that is washed out by the noise.
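In practice the conversion and its uncertainty amount to multiplying by 70 and adding the relative errors in quadrature; a minimal sketch (the input flux ratio below is illustrative, not a table value):

```python
import numpy as np

def n14n15_from_flux(flux_ratio, flux_ratio_err, c13=70.0, c13_rel_err=0.30):
    """14N/15N = (12C/13C) x (H13CN/HC15N ratio); the relative errors
    of the two factors are added in quadrature."""
    r = c13*flux_ratio
    return r, r*np.hypot(c13_rel_err, flux_ratio_err/flux_ratio)

print(n14n15_from_flux(1.8, 0.3))   # illustrative input -> about (126, 43)
```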
§ THE H^13CN/HC^15N PROFILE IN V4046 SGR
The HCN isotopologue emission observed toward V4046 Sgr is sufficiently
bright to provide constraints on the H^13CN and HC^15N abundance
profiles. In this section, we model the emission profiles of the
observed lines in order to retrieve the underlying H^13CN/HC^15N
abundance ratio across the disk.
§.§ Disk physical structure
In order to investigate possible variations of the abundance ratio
across the disk, a detailed model of the line emission is needed. We
build a parametric model to describe the physical structure of the
disk based on the model described in <cit.>, which was
constructed to reproduce the emission of the dust continuum and the CO
isotopologues. We first parametrize the dust surface density as
Σ_dust(r) =
  Σ_c (r/r_c)^{-γ} exp[-(r/r_c)^{2-γ}],   r ≥ r_cav
  Σ_cav,   0.2 AU < r < r_cav
where Σ_c is a normalization factor, r_c is a characteristic
radius, γ=1 is the power-law index of the viscosity, and
r_cav is the radius of the inner cavity. We include two dust
populations, one for the atmosphere and another for the midplane grains.
The dust volume density is computed assuming a vertical Gaussian distribution
of each dust grain population:
ρ_dust = ∑_{i=0,1} Σ_i(r)/(√(2π) H_i(r)) exp(-z^2/(2 H_i(r)^2)),
The midplane grains, which are larger than the atmospheric ones,
comprise the bulk of the dust mass (Σ_mid = 0.9
Σ_dust, Σ_atm=0.1 Σ_dust). The atmospheric dust
grains are vertically Gaussian distributed with a scale height:
H_atm(r) = H_10 ( r/100 AU )^h
and the midplane dust grains are concentrated closer to the midplane,
with a scale height that is half that of the atmospheric grains:
H_mid(r) = (1/2) H_atm(r)
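A minimal numerical transcription of this dust structure might read as follows; all parameter defaults are placeholders rather than the adopted V4046 Sgr values:

```python
import numpy as np

def sigma_dust(r_au, sig_c=1.0, r_c=45.0, gamma=1.0, r_cav=29.0, sig_cav=1e-4):
    """Tapered power-law surface density with an inner cavity (equation
    above); sig_c, r_c, r_cav and sig_cav are placeholders, not fit values."""
    taper = sig_c*(r_au/r_c)**(-gamma)*np.exp(-(r_au/r_c)**(2.0 - gamma))
    return np.where(r_au >= r_cav, taper, sig_cav)

def rho_dust(r_au, z_au, h100=10.0, h_exp=1.1):
    """Two vertical Gaussians: atmosphere grains carry 10% of the mass with
    scale height H_atm, midplane grains 90% with H_atm/2 (placeholder H_atm)."""
    h_atm = h100*(r_au/100.0)**h_exp            # atmosphere scale height [AU]
    rho = 0.0
    for frac, h in ((0.1, h_atm), (0.9, 0.5*h_atm)):
        norm = frac*sigma_dust(r_au)/(np.sqrt(2.0*np.pi)*h)
        rho = rho + norm*np.exp(-z_au**2/(2.0*h**2))
    return rho
```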
With the dust density described above, the radiative transfer code
RADMC-3D <cit.> was then used to compute the dust
temperature throughout the disk. The dust absorption and scattering
opacities, which are needed to solve the thermal balance, were
computed using the OpacityTool[https://dianaproject.wp.st-andrews.ac.uk/data-results-downloads/fortran-package/]
from the DIANA project <cit.>. The code assumes a mixture
of amorphous laboratory silicates with amorphous carbon and 25%
porosity for the grain composition. The grain size distribution
follows a power-law of index -3.5. The minimum grain size was set to
5 nm, and the maximum size was set to 10 μm and 1 cm, for the
atmosphere and midplane grain populations, respectively.
The gas temperature is parametrized as
T_gas(r,z) =
  T_a + (T_m - T_a) [cos(π z / (2 z_q))]^{2δ},   z < z_q
  T_a,   z ≥ z_q
following <cit.>. Here, the atmospheric temperature is
given by a power-law (T_a = T_a,0 (r/10 AU)^{q_atm}), and the
midplane temperature is fixed to the dust temperature
(T_m = T_dust(z=0)). The fiducial scale height at which the gas
temperature is allowed to vary vertically, z_q, is fixed to z_q = 2
H_gas, where H_gas = 2 c_s/Ω is the hydrostatic gas scale
height evaluated at the midplane, z=0.
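The corresponding gas-temperature parametrization can be transcribed directly; the atmospheric normalization and exponents below are placeholders:

```python
import numpy as np

def t_gas(r_au, z_au, t_mid, z_q, t_a0=40.0, q_atm=-0.5, delta=2.0):
    """Gas temperature of the equation above: midplane value t_mid (the dust
    temperature), atmosphere power law t_a0*(r/10 AU)^q_atm, and a cosine
    connection below z_q.  t_a0, q_atm and delta are placeholder values."""
    t_a = t_a0*(r_au/10.0)**q_atm
    t_low = t_a + (t_mid - t_a)*np.cos(np.pi*z_au/(2.0*z_q))**(2.0*delta)
    return np.where(np.abs(z_au) < z_q, t_low, t_a)
```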
Once the gas temperature is obtained, the hydrostatic equation is
solved to derive the gas density across the disk. For this we assume a
vertically integrated gas-to-dust ratio of 100 at each radius. The
adopted parameters for the model are listed in
Table <ref>. The resulting gas density and temperature
structures are shown in Figure <ref>. This model
reproduces the main features of the ^12CO, ^13CO and C^18O
emission (see Fig. <ref>) well enough for the purpose of
this study, assuming standard isotopic ratios. In this model, the CO
abundance is kept constant throughout the disk, except in the cold
midplane (<19 K) where the abundance is reduced by a factor of
10^3 due to freeze-out onto dust grains, and in the disk atmosphere
where the CO abundance is reduced by a factor 10^8 due to
photodissociation.
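A sketch of the vertical hydrostatic integration, under the assumption that only the vertical component of the stellar gravity and the temperature gradient enter, is given below; the stellar mass and test temperature are placeholders:

```python
import numpy as np

G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11
MU, MH, KB = 2.37, 1.673e-27, 1.381e-23

def vertical_density_shape(r_au, z_au, temp_k, m_star=1.75):
    """Integrate d ln(rho)/dz = -g_z*mu*m_H/(k*T) - d ln(T)/dz on a z grid.
    Returns a unit-column shape; the caller rescales it so the column
    matches the gas surface density (100 x dust).  m_star is a placeholder."""
    z = z_au*AU
    gz = G*m_star*M_SUN*z/((r_au*AU)**2 + z**2)**1.5      # vertical gravity
    dln = -gz*MU*MH/(KB*temp_k) - np.gradient(np.log(temp_k), z)
    steps = 0.5*(dln[1:] + dln[:-1])*np.diff(z)           # trapezoid steps
    rho = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return rho/np.sum(0.5*(rho[1:] + rho[:-1])*np.diff(z))  # unit column

# toy call at r = 50 AU with a vertically isothermal 25 K slab
zg = np.linspace(0.0, 30.0, 200)
shape = vertical_density_shape(50.0, zg, np.full(200, 25.0))
```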
§.§ Abundance fitting
The molecular abundances for H^13CN and HC^15N were defined as
power-laws,
X = X_0 (r/R_0)^α
where X_0 is the abundance with respect to total hydrogen at
R_0 = 100 AU and α is a power-law index. We also include an
outer cut-off radius R_out. This parametrization is a common
approach when modeling molecular abundances in disks
<cit.>.
In order to find the model that best reproduces the HCN isotopologue
observations and the associated uncertainties, we use a Bayesian
approach. In short, we first create a synthetic observation of the
line emission for each species separately. Taking advantage of the
bright HCN isotopologue emission in V4046 Sgr, we produced observed
visibilities and cleaned cubes at a higher spectral resolution of
0.2 km s^-1 for the line modeling, and include 60 channels. We
use the vis_sample Python
package[https://pypi.python.org/pypi/vis_sample] to compute
the Fourier Transform of the synthetic model and obtain visibilities
that are correctly re-projected on the u-v points of the
observations. The likelihood function is then computed in the u-v
plane, by computing the weighted difference between model and
observations, for the real and imaginary parts of the complex
visibility. We sample the posterior distribution with the MCMC method
implemented in the emcee package by <cit.>.
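Schematically, the u-v-plane likelihood and the sampling step look as follows; the toy forward model below stands in for the actual RADMC-3D plus vis_sample computation, and every number is illustrative:

```python
import numpy as np
import emcee

def log_prior(theta):
    """Flat priors quoted in the text: -3 < alpha < 2, 1e-20 < X_0 < 1e-8."""
    log_x0, alpha = theta
    return 0.0 if (-20.0 < log_x0 < -8.0 and -3.0 < alpha < 2.0) else -np.inf

def log_likelihood(theta, u, v, data, weights, model_vis):
    """Weighted chi^2 over the real and imaginary parts of the visibilities;
    `model_vis` stands in for the RADMC-3D + vis_sample forward model."""
    m = model_vis(theta, u, v)
    chi2 = np.sum(weights*((data.real - m.real)**2 + (data.imag - m.imag)**2))
    return -0.5*chi2

def log_prob(theta, *args):
    lp = log_prior(theta)
    return lp + log_likelihood(theta, *args) if np.isfinite(lp) else -np.inf

# toy demonstration with a Gaussian stand-in model on fake u-v points
rng = np.random.default_rng(1)
u, v = rng.normal(size=(2, 400))
toy = lambda th, u, v: (10**th[0]/1e-13)*np.exp(-abs(th[1])*(u**2 + v**2))
data = toy((-12.0, 0.7), u, v) + rng.normal(0.0, 0.1, 400)
sampler = emcee.EnsembleSampler(16, 2, log_prob,
                                args=(u, v, data, np.full(400, 100.0), toy))
sampler.run_mcmc(np.array([-12.0, 0.7]) + 1e-3*rng.normal(size=(16, 2)), 500)
```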
We include two free parameters in the line modeling, that is, X_0 and
α, which are associated with the molecular abundances of H^13CN
and HC^15N. The outer radius, R_out, was fixed to 100 AU (chosen
from the extent of the emission in the moment-zero map), but we
checked that a larger radius of 200 AU gave the same result. The disk
physical structure, that is, the gas density and temperature, is fixed
in the line fitting. We adopt the disk inclination, position angle,
stellar mass (including both stars) and systemic velocity listed in
Table <ref>. The level populations were computed using
RADMC-3D, assuming the gas is in LTE. We checked that non-LTE
effects are not important for these lines using the non-LTE radiative
transfer code LIME <cit.> to re-calculate the level
populations for the best-fit model. When generating a new sample, we
included a flat prior for the power-law index, -3 < α < 2, and for
the molecular abundance, 10^-20 < X_0 < 10^-8.
The best-fit model corresponds to X_0 = (8.94±0.30)×10^-13 and
α = -0.69±0.03 for H^13CN, and
X_0 = (3.37±0.19)×10^-13 and α = -1.08±0.04 for
HC^15N. Our model suggests an increasing H^13CN/HC^15N ratio as a
function of radius, i.e., higher fractionation in the inner disk
compared to the outer disk. Fig. <ref> shows the
deprojected radial profiles of the dust continuum and HCN isotopologue
emission in V4046 Sgr (left panel) as well as the observed and modeled
H^13CN/HC^15N flux ratio (right panel). We note that beyond 60 AU,
the ratio is highly uncertain because the signal-to-noise ratio
becomes too low, in particular for HC^15N. Figure <ref>
shows the posterior probability distribution for the fitted
parameters. Fig. <ref> shows the residuals between our
best-fit model and the data for selected channels. Our simple model
is able to reproduce the observations well.
The inferred H^13CN/HC^15N abundance ratios at 10 and 50 AU are
1.08±0.14 and 2.02±0.27, respectively. Assuming
^12C/^13C = 70±21, we infer HCN/HC^15N abundance ratios of 76±25
and 142±47, at 10 and 50 AU respectively.
We note that the derived disk-integrated HCN/HC^15N flux ratio of
115±43 falls in between the inferred abundance ratios at 10 and
50 AU. Given the consistent nitrogen fractionation levels inferred from the
observations and line modeling in V4046 Sgr, we expect the observed
flux ratios in the other sources to be representative of their HCN/HC^15N
abundance ratios in the comet-forming region.
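These values follow directly from the two best-fit power laws, whose ratio is itself a power law, (X_0,1/X_0,2)(r/100 AU)^(α_1-α_2); a quick check:

```python
x0_1, a_1 = 8.94e-13, -0.69     # H13CN best fit
x0_2, a_2 = 3.37e-13, -1.08     # HC15N best fit

def h13cn_hc15n(r_au):
    """Abundance ratio implied by the two fitted power laws."""
    return (x0_1/x0_2)*(r_au/100.0)**(a_1 - a_2)

for r in (10.0, 50.0):
    print(r, round(h13cn_hc15n(r), 2), round(70*h13cn_hc15n(r)))
# -> 10.0: 1.08 and 76;  50.0: 2.02 and 142, matching the quoted values
```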
§ DISCUSSION
§.§ Disk-averaged nitrogen fractionation in protoplanetary disks
We have shown that HCN isotopologues are abundant in disks. Both
H^13CN and HC^15N are detected toward 5/6 disks in our sample –
the one exception being the disk around the T Tauri star IM Lup. The disk
around IM Lup is very massive (M_disk = 0.1 M_⊙), very cold and also
very young <cit.> compared to the rest of the
disks in the sample. The non-detection of H^13CN was surprising
considering that IM Lup is quite bright in the main HCN isotopologue
<cit.>. Given the observed HCN flux density of
3.5 Jy km s^-1, we could expect an H^13CN flux density of
50 mJy km s^-1 if HCN is optically thin and ^12C/^13C = 70. This is
consistent with the observed 3σ upper limit of 51 mJy km s^-1.
The inferred disk-averaged nitrogen fractionation ratios range from 83±37
to 156±78. Despite the different physical conditions of the disks
in the sample, the observed HCN/HC^15N ratios are consistent with
sampling a constant disk-averaged fractionation level. In particular,
we find no difference in the nitrogen fractionation level between disks
around T Tauri and Herbig Ae stars, which have an order of magnitude
difference in the stellar radiation field. No difference in the
disk-averaged ^14N/^15N ratio is observed between full and transitional
disks, either. The age of the star does not seem to play an important
role either – we target young (∼1 Myr) and old (>10 Myr)
sources – suggesting either that the ^14N/^15N ratio is inherited from the
parent cloud and is not modified in the disk, or that the disk chemistry
sets the global, disk-averaged nitrogen fractionation level early
(≲1 Myr) in the life of the protoplanetary disk.
Although we do not observe differences in the nitrogen fractionation ratio
between the sources, the data suggest that there may be a difference
in the nitrogen abundance between the disks around V4046 Sgr,
MWC 480 and HD 163296, and the disks around the T Tauri stars AS 209 and
LkCa 15. The old disk around the binary T Tauri system V4046 Sgr and
the two Herbig Ae disks in the sample are enriched in HCN compared to
the disks around the young T Tauri stars. Disk models have shown that dust migration
and carbon and oxygen depletion (mainly due to CO and H_2O
freeze-out) can increase the column density of cyanides by up to two
orders of magnitude in the outer disk <cit.>. Future
observations toward a larger sample of disks will show if this
corresponds to an evolutionary trend.
§.§ Comparison between nitrogen fractionation in disks, Solar System and ISM
Fig. <ref> shows the observed ^14N/^15N ratios in
different Solar System bodies, the cold ISM and the diffuse medium.
There are large variations in the ^14N/^15N ratio among different
Solar System bodies, in particular between the rocky and gaseous
bodies. The Sun has the highest ^14N/^15N value in this comparison
(441±5), measured by the Genesis mission that sampled solar wind
ions, N^+ among them <cit.>. An almost identical value
was found in the atmosphere of Jupiter through the NH_3 observations
carried out by the Cassini spacecraft <cit.>. Both
measurements are expected to trace the lack of nitrogen fractionation
in N_2, the main nitrogen reservoir of the protosolar nebula. The
^15N-depleted Solar value is thus considered to be representative
of the conditions of the gas when the Sun formed. All the other Solar
System bodies are enriched in ^15N compared to the Sun. A value of
^14N/^15N = 272 is found in the Earth's atmosphere, measured in
N_2. The ^14N/^15N ratio has also been measured in several comets. An
isotopic ratio of ∼150 was found in C/1995 O1 (Hale-Bopp) and
17P/Holmes, a value which was consistent for both HCN and CN
<cit.>. Observations of 18 comets from both the Oort
cloud and the Kuiper Belt all show consistently low
HCN/HC^15N ≃ 100-250 ratios <cit.>.
The cold interstellar medium is also enriched in ^15N.
<cit.> measured the HCN/HC^15N ratio toward two
prestellar cores, L183 and L1544, and found values of
140-250 and 140-360, respectively. The ^14N/^15N ratio in CN was later
measured toward L1544, resulting in a surprisingly high CN/C^15N
ratio of 500±75 <cit.>. The authors were able to
reproduce the observed difference in CN and HCN with chemical models
of cold gas. The fact that CN and HCN present similar fractionation
levels in comets could be explained if CN is produced in the coma
from photo-dissociation of HCN
<cit.>. <cit.> also found a low
HCN/HC^15N ratio of 151±16 toward the prestellar core
L1521E.
Finally, the averaged ^15N enrichment observed for HCN in
protoplanetary disks is similar to that in comets and the cold ISM. While the
similar nitrogen fractionation ratios found in the cold ISM, comets
and disks are consistent with an inheritance scenario for the origin of
organics in the Solar System, as we discuss in the next section, the
increasing ^14N/^15N ratio in the disk of V4046 Sgr suggests that in
situ disk chemistry also contributes to the observed fractionation
patterns.
§.§ Resolved nitrogen fractionation chemistry in disks
The bright emission of the HCN isotopologues in V4046 Sgr allows us to
trace the ^14N/^15N profile across a disk for the first time. In
general, there are three possibilities for what could be observed: a
flat, decreasing, or increasing ^14N/^15N ratio as a function of disk
radius. If HCN and HC^15N are inherited from the prestellar gas
and no further chemical processing occurs in the disk, then a constant
^14N/^15N ratio is expected across the disk. On the other hand, if the
nitrogen chemistry is altered by in-situ chemical fractionation in the
disk, then a varying ^14N/^15N ratio is expected. In this case, if chemical
fractionation dominates the nitrogen fractionation, then a low
^14N/^15N ratio is expected to occur in the outer disk, where the gas
temperature is low (i.e., a ^14N/^15N ratio decreasing with radius). In
contrast, if selective photodissociation is the dominant pathway to
fractionate HCN, then an increasing ^14N/^15N profile would be
observed, because this pathway is most important in regions exposed to
UV photons, i.e., the inner disk, which is illuminated by the central
star.
The observations toward V4046 Sgr show that both H^13CN and
HC^15N are best reproduced by decreasing abundance
profiles. However, the inferred HC^15N emission profile is slightly
steeper than that of H^13CN, pointing to a higher fractionation in
the inner disk than in the outer disk. The varying ^14N/^15N ratio observed
in V4046 Sgr shows that there is active nitrogen fractionation in
the disk that changes the original fractionation pattern. The fact
that the ^14N/^15N ratio is lower in the inner disk suggests that selective
photodissociation is indeed an important pathway to fractionate HCN in
the inner disk. Higher signal-to-noise ratio observations are needed
to determine if there is also an active N fractionation chemistry in
the outer disk.
There is additional evidence that cold ion-molecule fractionation does
not alone regulate the ^14N/^15N ratio in
disks. <cit.> measured the D/H isotopic ratio in DCO^+
and HCN toward the same sample of disks. They found enhanced D/H
ratios compared to the elemental ratio in the local ISM
(∼2×10^-5) in all disks, with DCN/HCN ratios ranging from
0.005-0.08. If the cold pathway dominates the fractionation for both
species in these disks, we could expect a correlation between the D/H
and ^14N/^15N ratios. However, we do not see such a correlation in the
sample (see Fig. <ref>).
§.§ Future directions
We have shown that the ^14N/^15N ratio increases with radius in the
disk around the T Tauri binary V4046 Sgr. However, the angular resolution
of the current data (0”.5, corresponding to ∼36 AU at 73 pc)
prevents us from resolving ^14N/^15N variations at smaller scales, and
the low signal-to-noise ratio prevents us from constraining the
chemistry beyond 60 AU. Future observations at higher angular
resolution should allow us to measure the radial dependence of
^14N/^15N at Solar System scales, and determine how the fractionation
ratio changes from the inner (<15 AU) to the outer (>30 AU) disk.
High-angular-resolution observations toward more disks are needed to
determine whether an increasing ^14N/^15N ratio is a general characteristic
of disks or unique to V4046 Sgr. A larger sample is also needed to
draw more general conclusions on the dominant fractionation pathways
in disks, and how the ^14N/^15N pattern depends on the physical
conditions of the disk. In this respect, new chemical models that
include all nitrogen fractionation pathways (selective
photodissociation and cold isotope exchange reactions) as well as
inheritance from the parent cloud are key to interpreting the
observations. In addition, it is desirable to include the
fractionation of both carbon and nitrogen in the models, since most
^14N/^15N measurements in both disks and the cold ISM rely on observations
of the ^13C isotopologues to infer the contribution of the main
isotopologues.
In contrast to HCN, hydrides (i.e., N_2H^+ and NH_3) are found
to be ^15N-depleted in the ISM. Toward L1544,
<cit.> found a nitrogen isotopic ratio in N_2H^+ of
1000. Toward the class 0 protostar B1b, <cit.> found
^14N/^15N ratios of 260-355 for NH_3 and a lower limit of >600
for N_2H^+. <cit.> proposed that the difference in
nitrogen fractionation between cyanides and hydrides in the cold ISM
is the result of their different chemical origins: HCN derives from
atomic N, while NH_3 and N_2H^+ derive from molecular
nitrogen. This hypothesis, however, is challenged by the recent
measurement of the ^14N/^15N ratio in NH_2, a photodissociation product of
NH_3 in cometary comae, toward comet C/2012 S1 (ISON), where a
fractionation ratio of 139±38 was found <cit.>. A
similarly low value (∼130) was found previously by
<cit.> based on the averaged spectrum of 12
comets. Toward comet ISON, CN and HCN were also found to be highly
enriched in ^15N (∼150). As explained by
<cit.>, one possibility to obtain similar HCN and NH_3
fractionation levels in comets, despite very different fractionation
levels in the cold ISM, is through grain surface chemistry. Indeed,
the ISM values represent the ^14N/^15N ratio in the gas phase, while
cometary values represent the ^14N/^15N ratio in the ices. It is also
possible that the measured cometary values are not representative of
the composition of the nucleus. Measurements of the isotopic ratio in
NH_3 and N_2H^+ in disks would provide additional clues to
answer these questions. In particular, they would reveal whether the
cyanide/hydride dichotomy observed in the cold ISM holds for disks as well.
Finally, more observations toward comets targeting different
molecular parent species (i.e., species tracing the cometary
nucleus and not daughter species which are produced in the coma) will
be important to compare with observations in the ISM and disks, and to
elucidate the origins of ^15N enhancements, and ultimately the
origins of organics, across the Solar System.
§ CONCLUSIONS
We have presented ALMA observations at ∼0”.5 angular resolution
of the HCN isotopologues in a diverse sample of six protoplanetary
disks. The sample contains 4 T Tauri and 2 Herbig Ae stars, which
span an order of magnitude in radiation fields. Both H^13CN and
HC^15N are detected toward all the sources, except for IM Lup,
which is the most massive and coldest disk, and likely the youngest,
in the sample. Adopting a standard ^12C/^13C ratio of 70 (with a 30%
uncertainty), we infer disk-averaged ^14N/^15N ratios of 80-160 for
the sources. Despite the different physical conditions of the disks,
the ^14N/^15N ratio in HCN is similar for all sources. No differences are
observed between T Tauri and Herbig Ae stars, or between the full
disks and the transitional disks, which feature large dust
cavities. Also, no correlation is observed between disk-averaged D/H
and ^14N/^15N ratios in the sample. The observed disk-averaged
^14N/^15N ratios are similar to what is observed in comets and in the
cold ISM, which is consistent with the inheritance scenario for the
origin of organics in the Solar System. However, chemical processing
within the protoplanetary disk phase cannot be ruled out based on these
ratios alone. Indeed, in the one disk where we could resolve
the H^13CN/HC^15N ratio as a function of radius, we find a slightly
steeper profile for HC^15N, i.e., an increasing ^14N/^15N ratio with
radius. The higher nitrogen fractionation level in the inner disk
compared to the outer disk suggests that selective photodissociation is
an important fractionation pathway in the inner disk.
This paper makes use of ALMA data, project code:
ADS/JAO.ALMA#2013.1.00226. ALMA is a partnership of ESO (representing
its member states), NSF (USA) and NINS (Japan), together with NRC
(Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic
of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and
NAOJ. The National Radio Astronomy Observatory is a facility of the
National Science Foundation operated under cooperative agreement by
Associated Universities, Inc. VVG thanks support from the Chilean
Government through the Becas Chile program. KIÖ also acknowledges
funding from the Packard Foundation and an investigator award from
Simons Collaboration on the Origins of Life (SCOL). JH and RL
acknowledge support from the National Science Foundation (Grant
No. DGE-1144152).
[Du et al.(2015)]du2015 Du, F., Bergin, E. A., & Hogerheijde, M. R. 2015, , 807, L32
[Bizzocchi et al.(2013)]bizzocchi2013 Bizzocchi, L., Caselli, P., Leonardo, E., & Dore, L. 2013, , 555, A109
[Bockelée-Morvan et al.(2008)]bockelee2008
Bockelée-Morvan, D., Biver, N., Jehin, E., et al. 2008, , 679,
L49
[Brinch & Hogerheijde(2010)]brinch2010 Brinch, C.,
& Hogerheijde, M. R. 2010, , 523, A25
[Chapillon et al.(2012)]chapillon2012 Chapillon, E., Guilloteau, S., Dutrey, A., Piétu, V., & Guélin, M. 2012, , 537, A60
[Cleeves et al.(2016)]cleeves2016 Cleeves, L. I., Öberg, K. I., Wilner, D. J., et al. 2016, arXiv:1610.00715
[Daniel et al.(2013)]daniel2013 Daniel, F., Gérin, M., Roueff, E., et al. 2013, , 560, A3
[Dartois et al.(2003)]dartois2003 Dartois, E., Dutrey, A.,
& Guilloteau, S. 2003, , 399, 773
[Dullemond(2012)]dullemond2012 Dullemond, C. P. 2012, Astrophysics Source Code Library, ascl:1202.015
[Foreman-Mackey et al.(2013)]foreman2013
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013,
, 125, 306
[Fouchet et al.(2004)]fouchet2004 Fouchet, T., Irwin, P. G. J., Parrish, P., et al. 2004, , 172, 50
[Guzmán et al.(2015)]guzman2015 Guzmán, V. V., Öberg, K. I., Loomis, R., & Qi, C. 2015, , 814, 53
[Heays et al.(2014)]heays2014 Heays, A. N., Visser, R.,
Gredel, R., et al. 2014, , 562, A61
[Hily-Blant et al.(2013)]hily-blant2013 Hily-Blant, P.,
Bonal, L., Faure, A., & Quirico, E. 2013, , 223, 582
[Hily-Blant et al.(2013b)]hily-blant2013b Hily-Blant, P., Pineau des Forêts, G., Faure, A., Le Gal, R., & Padovani, M. 2013, , 557, A65
[Huang et al.(2017)]huang2017 Huang, J., Oberg, K. I., Qi, C., et al. 2017, arXiv:1701.01735
[Ikeda et al.(2002)]ikeda2002 Ikeda, M., Hirota, T., & Yamamoto, S. 2002, , 575, 250
[Lyons et al.(2009)]lyons2009 Lyons, J. R., Bergin, E. A., Ciesla, F. J., et al. 2009, , 73, 4998
[Lucas & Liszt(1998)]lucas1998 Lucas, R., & Liszt,
H. 1998, , 337, 246
[Liszt & Lucas(2001)]liszt2001 Liszt, H., & Lucas, R. 2001, , 370, 576
[Marty et al.(2011)]marty2011 Marty, B., Chaussidon, M.,
Wiens, R. C., Jurewicz, A. J. G., & Burnett, D. S. 2011, Science,
332, 1533
[Mumma & Charnley(2011)]mumma2011 Mumma, M. J., & Charnley, S. B. 2011, , 49, 471
[Öberg et al.(2015)]oberg2015a Öberg, K. I., Furuya, K., Loomis, R., et al. 2015, , 810, 112
[Öberg et al.(2015)]oberg2015b Öberg, K. I., Guzmán, V. V., Furuya, K., et al. 2015, , 520, 198
[Öberg et al.(2010)]oberg2010 Öberg, K. I., Qi,
C., Fogel, J. K. J., et al. 2010, , 720, 480
[Öberg et al.(2011)]oberg2011 Öberg, K. I., Qi,
C., Fogel, J. K. J., et al. 2011, , 734, 98
[Pavlyuchenkov et al.(2007)]pavlyuchenkov2007 Pavlyuchenkov, Y., Semenov, D., Henning, T., et al. 2007, , 669, 1262
[Piétu et al.(2007)]pietu2007 Piétu, V., Dutrey, A., & Guilloteau, S. 2007, , 467, 163
[Rosenfeld et al.(2013)]rosenfeld2013 Rosenfeld, K. A.,
Andrews, S. M., Wilner, D. J., Kastner, J. H., & McClure,
M. K. 2013, , 775, 136
[Roueff et al.(2015)]roueff2015 Roueff, E., Loison, J. C., & Hickson, K. M. 2015, , 576, A99
[Rousselot et al.(2014)]rousselot2014 Rousselot, P., Pirali, O., Jehin, E., et al. 2014, , 780, L17
[Shinnaka et al.(2014)]shinnaka2014 Shinnaka, Y., Kawakita, H., Kobayashi, H., Nagashima, M., & Boice, D. C. 2014, , 782, L16
[Thi et al.(2004)]thi2004 Thi, W.-F., van Zadelhoff, G.-J., & van Dishoeck, E. F. 2004, , 425, 955
[Qi et al.(2013)]qi2013 Qi, C., Öberg, K. I., & Wilner, D. J. 2013, , 765, 34
[Qi et al.(2008)]qi2008 Qi, C., Wilner, D. J., Aikawa, Y., Blake, G. A., & Hogerheijde, M. R. 2008, , 681, 1396-1407
[Wampfler et al.(2014)]wampfler2014 Wampfler, S. F.,
Jørgensen, J. K., Bizzarro, M., & Bisschop, S. E. 2014, ,
572, A24
[Woitke et al.(2016)]woitke2016 Woitke, P., Min,
M., Pinte, C., et al. 2016, , 586, A103
§ CHANNEL MAPS
http://arxiv.org/abs/1701.07708v1 | 20170126140836 | The EUSO@TurLab Project | ["H. Miyamoto", "M. Bertaina", "G. Cotto", "R. Forza", "M. Manfrin", "M. Mignone", "G. Suino", "A. Youssef", "R. Caruso", "G. Contino", "S. Bacholle", "P. Gorodetzky", "A. Jung", "E. Parizot", "G. Prevôt", "P. Barrillon", "S. Dagoret-Campagne", "J. Rabanal Reina", "S. Blin"] | astro-ph.IM | ["astro-ph.IM", "astro-ph.HE"] |
Università di Torino/INFN Torino,
via Pietro Giuria 1, 10125 Torino, Italy
Università di Catania/INFN Catania,
via Santa Sofia 64, 95123 Catania, Italy
Laboratoire APC
10, rue Alice Domon et Léonie Duquet, 75013 Paris, France
LAL/IN2P3/CNRS/Université Paris-Sud
Centre Scientifique d'Orsay, Bâtiment 200 - BP 34
91898 Orsay cedex, France
OMEGA/CNRS/IN2P3,
Ecole Polytechnique
91128 Palaiseau Cedex, France
The TurLab facility is a laboratory, equipped with a 5 m diameter and 1 m depth rotating tank,
located in the Physics Department of the University of Turin.
The tank has been built mainly to study problems where system rotation plays a key role in the fluid behaviour such as in atmospheric and oceanic flows at different scales.
The tank can be filled with different fluids of variable density, which enables studies in layered conditions such as sea waves.
The tank can also be used to simulate the terrestrial surface with the optical characteristics of different environments such as snow, grass, ocean, land with soil and stones, as well as fog and clouds.
As it is located in an extremely dark place, the light intensity can be controlled artificially.
Such capabilities of the TurLab facility are applied to perform experiments related to the observation of Extreme Energy Cosmic Rays (EECRs) from space using the fluorescence technique, as in the case of the JEM-EUSO mission, where the diffuse night brightness and artificial light sources can vary significantly in time and space inside the Field of View (FoV) of the telescope.
Here we will report the currently ongoing activity at the TurLab facility in the framework of the JEM-EUSO mission (EUSO@TurLab).
The EUSO@TurLab Project
for the JEM-EUSO Collaboration
§ JEM-EUSO AND ITS PATHFINDERS
JEM-EUSO <cit.> is the concept of a space-borne fluorescence telescope to be hosted on the International Space Station (ISS).
One of its main goals is the high-statistics observation of EECRs with primary particle energies above 5×10^19 eV.
The telescope consists of three Fresnel lenses and a focal surface of 0.3M pixels of UV-sensitive detectors with a fast readout system, which enables single-photon-counting measurements over a wide FoV of ±30^∘ with a spatial resolution of 500×500 m^2 on the ground, covering roughly 60% of the entire surface of the Earth during its flight on the ISS orbit.
Looking toward the Earth from space, JEM-EUSO will reveal these particles as well as very-high-energy neutrinos by observing the fluorescence signal from the generated Extensive Air Showers (EAS) during their passage through the atmosphere.
It will also contribute to the investigation of atmospheric phenomena such as Transient Luminous Events (TLEs) and meteors.
The JEM-EUSO project is carried out by an international collaboration currently consisting of 88 institutions from 16 countries.
In parallel to the JEM-EUSO development, several pathfinders have been developed, such as EUSO-Balloon <cit.>, launched in August 2014, and the ground-based pathfinder TA-EUSO <cit.>, currently in operation at the Black Rock Mesa in Utah, US.
In addition, the space-borne pathfinder MINI-EUSO, as well as an advanced balloon-borne pathfinder on a NASA Super Pressure Balloon, EUSO-SPB, are currently being developed to be launched in 2017.
The ISS flies at a speed of 7.5 km/s, completing 15.5 orbits per day, and every 30 to 45 minutes it passes over the night side of the Earth.
While on orbit at an altitude of 400 km, it passes over many kinds of scenery, such as oceans, clouds, city lights, airglow, forests and lightning.
Fig. <ref> shows the UV intensity measured during the five hours of an EUSO-Balloon flight in August 2014.
The balloon was flying from left to right, from Timmins airport to the dark region where mostly lakes and forests are located.
The plot shows the averaged intensity in a reference area, in logarithmic scale, as a function of time (UTC).
The high intensities at the beginning of the flight (left part of the plot) appeared when the balloon was flying above cities such as Timmins and its neighbourhoods, mines and airports, which are full of artificial lights.
There the UV background level increases by a factor of 10 compared to the other areas, which are mainly covered by forests and lakes.
§.§ JEM-EUSO Focal Surface and Data Acquisition Chain (DAQ)
The JEM-EUSO focal surface consists of Hamamatsu 64-channel Multi-Anode PhotoMultiplier Tubes (MAPMTs).
One Elementary Cell unit (EC_unit) consists of 4 PMTs, and 9 EC_units form a Photo-Detector Module (PDM).
In total, the focal surface comprises 137 PDMs, i.e. about 5,000 PMTs with 0.3M pixels.
The outputs of the PMTs are read out by an Elementary Cell ASIC board (EC_ASIC), which consists of 6 SPACIROC ASICs <cit.>.
A PDM board is an interface board between a PDM and the downstream data processing system; it sends slow-control commands and processes First Level Triggers (FLTs).
§.§ First Level Trigger (FLT)
JEM-EUSO will have to deal with a huge amount of data.
It is therefore essential to develop an efficient FLT which detects UHECRs in a continuously varying background.
Fig. <ref> shows a conceptual sketch of the EUSO FLT (top) together with an example of an EUSO-Balloon laser event <cit.> (bottom), i.e., an image integrated over 320 μs (left) and the counts of a 3×3 pixel box as a function of frame (Gate Time Unit, GTU = 2.5 μs, right).
The FLT is a persistence trigger which scans the counts in every 3×3 pixel box of a PMT, checking the excess against a pixel threshold which is automatically adjusted every 320 μs.
As indicated by the blue dotted line in the plot, the FLT sets a proper threshold automatically, so that it does not trigger on the background but only on EAS-like signals.
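The following Python sketch illustrates the FLT logic just described. It is a simplified offline model, not the flight code: the 3×3 box scan, the 2.5 μs GTU and the 320 μs (= 128 GTU) adjustment window follow the text, while the Poissonian n-sigma margin and the persistence requirement of two consecutive GTUs are illustrative assumptions.

```python
import numpy as np

def offline_flt(frames, n_sigma=3.0, window=128, persistence=2):
    """Toy offline first-level trigger for one 64-pixel (8x8) MAPMT.

    frames: array of shape (n_gtu, 8, 8), photon counts per GTU.
    The pixel threshold is re-estimated every `window` GTUs from the
    mean background; a trigger fires when some 3x3 pixel box exceeds
    the threshold in `persistence` consecutive GTUs.
    """
    triggers = []
    streak = np.zeros((6, 6), dtype=int)       # one counter per 3x3 box
    threshold = np.inf
    for t, frame in enumerate(frames):
        if t % window == 0:                    # periodic auto-adjustment
            bg = frames[max(0, t - window):t + 1].mean()
            threshold = 9 * bg + n_sigma * np.sqrt(9 * bg)  # Poisson margin
        boxes = np.array([[frame[i:i + 3, j:j + 3].sum() for j in range(6)]
                          for i in range(6)])  # all 3x3 box sums
        streak = np.where(boxes > threshold, streak + 1, 0)
        if (streak >= persistence).any():
            triggers.append(t)
            streak[:] = 0                      # re-arm after a trigger
    return triggers

# Poisson-like background with an injected EAS-like moving spot.
rng = np.random.default_rng(0)
frames = rng.poisson(1.0, size=(1000, 8, 8)).astype(float)
for k in range(5):
    frames[500 + k, 2 + k % 3, 3] += 30.0
print(offline_flt(frames))                     # triggers near frame 500
```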
§ THE TURLAB@PHYSICS DEPARTMENT - TORINO UNIVERSITY
To test our electronics, as well as to study and develop the FLT under such dynamic conditions, we conceived the idea of using the large rotating tank of TurLab <cit.>, located in the fourth basement of the Physics Department building of the University of Turin.
TurLab is a laboratory for geo-fluid-dynamics studies in which rotation is a key parameter (Coriolis force, Rossby number).
Using inks or tracer particles, key fluid-dynamical phenomena such as planetary atmospheric and fluid instabilities can be reproduced in the TurLab water tank.
The tank is 5 m in diameter and can rotate with a period ranging from 3 s to 20 min.
Also, as it is located in a very dark environment, the light intensity can be controlled artificially.
§ THE EUSO@TURLAB PROJECT
By means of the rotating tank and the capabilities mentioned above, we have been testing the EUSO electronics, i.e. its basic performance as well as the FLT response to cosmic rays, against various backgrounds and sceneries which transition from one to another.
With an EUSO EC_unit camera, a 2×2 array of 64-channel MAPMTs, mounted on the ceiling 2 m above the TurLab tank, past measurements show that each pixel watches a FoV of 5×5 mm^2.
This FoV corresponds to a solid angle of 6.25×10^-6 sr, which is comparable to that of JEM-EUSO.
Therefore, considering the altitude and speed of the ISS, the JEM-EUSO observation conditions can be reproduced at TurLab.
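The scaling argument can be made explicit with a few lines of Python. The solid angles follow directly from the quoted geometries; the pixel-crossing time and the matching tank rotation period are illustrative estimates of ours which assume a target placed about 2 m from the rotation axis.

```python
import numpy as np

# Solid angle per pixel: ~(side / distance)^2 in the small-angle limit.
omega_turlab = (5e-3 / 2.0) ** 2        # 5x5 mm^2 seen from 2 m -> 6.25e-6 sr
omega_euso   = (500.0 / 400e3) ** 2     # 500x500 m^2 from 400 km -> 1.56e-6 sr

# Time for the ISS ground track (7.5 km/s) to cross one pixel footprint,
# and the tank rotation period that reproduces the same crossing time for
# a target assumed to sit ~2 m from the rotation axis.
t_cross = 500.0 / 7.5e3                 # ~67 ms per pixel
period = 2 * np.pi * 2.0 / (5e-3 / t_cross)
print(omega_turlab, omega_euso, period)  # period ~ 168 s, within 3 s - 20 min
```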
Fig. <ref> shows the TurLab tank, light sources and materials used to mimic the various kinds of phenomena and albedos that JEM-EUSO will encounter.
We use an EUSO EC_unit with a lens as a camera, read out by the EUSO front-end electronics with a test board, and observe several materials, mimicking for instance meteors, cosmic rays, clouds and ground glass, passing from one to another through the FoV under a constant background light produced by a high-power LED lamp diffused on the ceiling.
High-voltage and DC power supplies, function generators and monitoring oscilloscopes sit on a desk by the side of the tank, together with a PC running a LabView interface for the test board (slow control and data acquisition) and ROOT programs for monitoring and analysis.
Fig. <ref> shows examples of UV images obtained by the EC_unit camera during a full rotation of the tank.
§.§ FLT analysis (Offline)
The top panel of Fig. <ref> shows the UV intensities recorded during a full rotation of the tank with the various materials.
The plot shows the summed counts of one PMT (= 64 pixels) as a function of time, in units of frames.
We then analysed the data and processed the FLT offline.
The middle panel shows the averaged counts per pixel, which are used to set the FLT threshold, while the bottom panel shows when FLTs were issued based on the signals in that PMT, as a function of frame (GTU).
Almost all triggers coincide, as they should, with the passage over the ARDUINO-driven LED chain, which mimics cosmic-ray-like events; the single exception is due to a specific location near one of the two bridges crossing the tank, where the variations of reflected light were still too fast to be compensated by the slower rotation of the tank.
§ SUMMARY
JEM-EUSO and its pathfinders will observe various cosmic and atmospheric phenomena in UV from space and from the atmosphere.
TurLab is a unique facility in Turin with interdisciplinary expertise (waves, geophysics, atmospheric science, meteors, astroparticle/cosmic-ray physics, etc.), capable of providing ideal conditions to test the EUSO electronics in a controlled environment against the various sceneries which the telescope will encounter.
The TurLab system has been used to verify and implement the FLT used in EUSO-SPB; analysis and development of the FLT for Mini-EUSO are currently ongoing.
This work has been partially funded by the Italian Ministry of Foreign Affairs and International Cooperation, the European High-Performance Infrastructures in Turbulence (EuHIT).
9
jemeuso
J. H. Adams et al. (JEM-EUSO Coll.), Exp. Astronomy 40 (2015) 3.
eusoballoon
J. H. Adams et al. (JEM-EUSO Coll.), Exp. Astronomy 40 (2015) 281.
taeuso
J. H. Adams et al. (JEM-EUSO Coll.), Exp. Astronomy 40 (2015) 301.
spaciroc
H. Miyamoto et al., PoS TIPP2014 (2015) 362.
laser
J. Eser et al. (JEM-EUSO Coll.), Proc. 34th ICRC, Den Haag, ♯0860 (2015)
turlab
M. Bertaina et al. (JEM-EUSO Coll.), EPJ Web of Conferences 89 (2015) 03003.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07895v2 | 20170126224320 | Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity | [
"Adarsh Barik",
"Jean Honorio",
"Mohit Tawarmalani"
] | cs.LG | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity
Adarsh Barik
Krannert School of Management
Purdue University
West Lafayette, Indiana 47906
Email: [email protected]
Jean Honorio
Department of Computer Science
Purdue University
West Lafayette, Indiana 47906
Email: [email protected]
Mohit Tawarmalani
Krannert School of Management
Purdue University
West Lafayette, Indiana 47906
Email: [email protected]
========================================================================================================================================================================================================================================================================================================================================================================================
We analyze the necessary number of samples for sparse vector recovery in a noisy linear prediction setup. This model includes problems such as linear regression and classification. We focus on structured graph models. In particular, we prove that the sufficient number of samples for the weighted graph model proposed by Hegde and others <cit.> is also necessary. We use Fano's inequality <cit.> on well-constructed ensembles as our main tool in establishing information-theoretic lower bounds.
Keywords:
Compressive sensing, Linear Prediction, Classification, Fano's Inequality, Mutual Information, Kullback Leibler divergence.
§ INTRODUCTION
Sparse vectors are widely used in fields related to high-dimensional data analytics, such as machine learning, compressed sensing and statistics.
This makes the estimation of sparse vectors an important field of research. In a compressive sensing setting, the problem is to closely approximate a d-dimensional signal by an s-sparse vector without losing much information. For regression, this is usually done by observing the inner product of the signal with a design matrix. It is well known that if the design matrix satisfies the Restricted Isometry Property (RIP), then estimation can be done efficiently with a sample complexity of O(s log(d/s)). Many algorithms, such as CoSaMP <cit.>, Subspace Pursuit (SP) <cit.> and Iterative Hard Thresholding (IHT) <cit.>, provide high-probability performance guarantees. Baraniuk and others <cit.> proposed a model-based sparse recovery framework, under which the sufficient number of samples for correct recovery is logarithmic in the cardinality of the sparsity model.
A major issue with the model-based framework is that it does not provide any recovery algorithm on its own. In fact, it is sometimes very hard to come up with an efficient recovery algorithm. Addressing this issue, Hegde and others <cit.> proposed a weighted graph model for graph-structured sparsity and provided a nearly linear time recovery algorithm. They also analyzed the sufficient number of samples for efficient recovery. In this paper, we provide the necessary condition on the sample complexity for sparse recovery on a weighted graph model. We also note that our information-theoretic lower bound applies not only to linear regression but also to other linear prediction tasks such as classification.
The paper is organized as follows. We describe our setup in Section <ref>. Then we briefly describe the weighted graph model in Section <ref>. We state our results in Section <ref>. In Section <ref>, we apply our technique to some specific examples. Finally, we provide concluding remarks in Section <ref>.
§ LINEAR PREDICTION MODEL
In this section, we introduce the observation model for linear prediction and later specify how to use it for specific problems such as linear regression and classification.
Formally, the problem is to estimate an s-sparse vector β̅ from noisy observations of the form,
z = f(X β̅ + e) ,
where z ∈ ℝ^n is the observed output, X ∈ ℝ^n×d is the design matrix, e ∈ ℝ^n is a noise vector, and f: ℝ^n → ℝ^n is a fixed function. Our task is to recover β̅ ∈ ℝ^d from the observations z.
§.§ Linear Regression
Linear regression is the special case of the above obtained by choosing f(x) = x. Then we simply have
z = X β̅ + e .
Prior work analyzes the sample complexity of sparse recovery in the linear regression setup. In particular, if the design matrix X satisfies the Restricted Isometry Property (RIP), then algorithms such as CoSaMP <cit.>, Subspace Pursuit (SP) <cit.> and Iterative Hard Thresholding (IHT) <cit.> can recover β̅ efficiently and stably with a sample complexity of O(s log(d/s)). Furthermore, it is known that Gaussian (more generally, sub-Gaussian) random matrices satisfy the RIP <cit.>. If we choose a Gaussian design matrix and have a good sparsity model that incorporates extra information on the sparsity structure, then we can reduce the sample complexity to O(log m_s), where m_s is the number of possible supports in the sparsity model, i.e., the cardinality of the sparsity model <cit.>. In the same line of work, Hegde and others <cit.> proposed a weighted-graph-based sparsity model to efficiently learn β̅.
§.§ Classification
We can model binary classification problems by choosing f(x) = sign(x); in other words, we have
z = sign(X β̅ + e) .
Similar to the linear regression setup, there is also prior work <cit.>, <cit.>, <cit.> analyzing the sample complexity of sparse recovery for the binary classification problem (also known as 1-bit compressed sensing).
Since the arguments establishing information-theoretic lower bounds are not algorithm specific, our basic argument extends to both settings mentioned above. For comparison, we will use the results by Hegde and others <cit.> for the linear regression setup.
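For concreteness, the following Python snippet simulates the two observation models above for an s-sparse β̅ with a Gaussian design matrix; all dimensions and the noise level are arbitrary illustrative choices, not values from the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s, sigma = 50, 200, 5, 0.1

beta = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
beta[support] = rng.choice([-1.0, 1.0], size=s)   # an s-sparse signal

X = rng.standard_normal((n, d))                   # i.i.d. Gaussian design
e = sigma * rng.standard_normal(n)                # noise vector

z_regression = X @ beta + e                       # f(x) = x
z_onebit = np.sign(X @ beta + e)                  # f(x) = sign(x)
```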
§ WEIGHTED GRAPH MODEL (WGM)
In this section, we introduce the Weighted Graph Model (WGM) and formally state the sample complexity results from <cit.>. The WGM is defined on an underlying graph G = (V,E) whose vertices correspond to the coefficients of the unknown s-sparse vector β̅ ∈ ℝ^d, i.e., V = [d] = {1, 2, …, d}. Moreover, the graph is weighted, and thus we introduce a weight function w : E → ℕ. Borrowing notation from <cit.>, for a forest F ⊆ G we write w(F) = ∑_e∈F w_e. B denotes the weight budget, s the sparsity (number of non-zero coefficients) of β̅, and g the number of connected components of F. The weight-degree ρ(v) of a node v ∈ V is the largest number of adjacent nodes connected by edges with the same weight, i.e.,
ρ(v) = max_b∈ℕ |{ (v', v) ∈ E | w(v',v) = b }| .
We define the weight-degree of G, ρ(G), to be the maximum weight-degree of any v ∈ V. Next, we define the Weighted Graph Model on the coefficients of β̅ as follows:
The (G, s, g, B)-WGM is the set of supports defined as
𝕄 = { S ⊆ [d] : |S| = s and ∃ F ⊆ G with V_F = S, γ(F) = g, w(F) ≤ B } ,
where γ(F) is the number of connected components of the forest F. The authors of <cit.> provide the following sample complexity result for linear regression under their model:
Let β̅ ∈ ℝ^d be in the (G, s, g, B)-WGM. Then
n = O(s(log ρ(G) + log(B/s)) + g log(d/g))
i.i.d. Gaussian observations suffice to estimate β̅. More precisely, let e ∈ ℝ^n be an arbitrary noise vector in equation (<ref>) and let X be an i.i.d. Gaussian matrix. Then we can efficiently find an estimate β̂ such that
‖β̅ - β̂‖ ≤ C‖e‖ ,
where C is a constant independent of all the variables above.
Notice that in the noiseless case (e = 0), we recover the exact β̅. We will prove that information-theoretically, the bound on the sample complexity is tight and thus the algorithm of <cit.> is statistically optimal.
§ MAIN RESULTS
In this section, we state our results for both the noiseless and the noisy case. We establish an information-theoretic lower bound for the linear prediction problem defined on a WGM. We use Fano's inequality <cit.> to prove our result by carefully constructing an ensemble, i.e., a WGM; any algorithm which infers β̅ on this particular WGM requires a minimum number of samples. Note that the use of restricted ensembles is customary for information-theoretic lower bounds <cit.> <cit.>. It follows that, in the case of linear regression, the upper bound on the sample complexity by Hegde and others <cit.> is indeed tight.
§.§ Noiseless Case
Here, we provide a necessary condition on the sample complexity for exact recovery in the noiseless case. More formally,
There exists a particular (G, s, g, B)-WGM, and a particular set of weights for the entries in the support of β̅, such that if we draw β̅ ∈ ℝ^d uniformly at random and we have a data set 𝒮 of n ∈ o((s-g)(log ρ(G) + log(B/(s-g))) + g log(d/g) + (s-g) log(g/(s-g)) + s log 2) i.i.d. observations as defined in equation (<ref>) with e = 0, then ℙ(β̅ ≠ β̂) ≥ 1/2, irrespective of the procedure we use to infer β̂ on the (G,s,g,B)-WGM from 𝒮.
We use Fano's inequality <cit.> on a carefully chosen restricted ensemble to prove our theorem. A detailed proof can be found in the appendix.
§.§ Noisy Case
A similar result can be stated for the noisy case. However, in this case recovery is not exact but is sufficiently close in l_2-norm relative to the noise in the signal. Another thing to note is that in <cit.> the inferred β̂ may come from a slightly larger WGM, whereas here we infer β̂ from the same WGM.
There exists a particular (G, s, g, B)-WGM, and a particular set of weights for the entries in the support of β̅, such that if we draw β̅ ∈ ℝ^d uniformly at random and we have a data set 𝒮 of n ∈ o((s-g)(log ρ(G) + log(B/(s-g))) + g log(d/g) + (s-g) log(g/(s-g)) + s log 2) i.i.d. observations as defined in equation (<ref>) with e_i iid∼ 𝒩(0, σ²), ∀ i ∈ {1 … n}, then ℙ(‖β̅ - β̂‖ ≥ C‖e‖) ≥ 1/10 for all 0 < C ≤ C_0, irrespective of the procedure we use to infer β̂ on the (G,s,g,B)-WGM from 𝒮.
Note that when s ≫ g and B ≥ s-g, the quantity Ω((s-g)(log ρ(G) + log(B/(s-g))) + g log(d/g) + (s-g) log(g/(s-g)) + s log 2) is roughly Ω(s(log ρ(G) + log(B/s)) + g log(d/g)).
We will prove this result in three steps. First, we carefully construct an underlying graph G for the WGM. Second, we bound the mutual information between β̅ and 𝒮 by bounding the Kullback-Leibler (KL) divergence. Third, we bound the size of a properly defined restricted ensemble to complete the proof.
Constructing an underlying graph G for the WGM We construct an underlying graph for the WGM using the following steps (a small instance is sketched in code after the requirements below):
* Divide the d nodes equally into g groups, each group having d/g nodes.
* For each group j, we denote a node by N_i^j, where j is the group index and i is the node index. Group j contains the nodes N_1^j to N_d/g^j.
* We allow for circular indexing, i.e., a node N_i^j with i > d/g is the same as node N_i-d/g^j.
* For each p = 1, …, B/(s-g), node N_i^j has an edge to each of the nodes N_i+(p-1)ρ(G)/2+1^j through N_i+pρ(G)/2^j, all with weight p.
* Cross edges between nodes in two different groups are allowed as long as they have edge weights greater than B/(s-g) and do not affect ρ(G).
Figure <ref> shows an example of a graph constructed using the above steps. Furthermore, parameters for our WGM satisfy the following requirements:
R1. d/g ≥ ρ(G)B/(s-g) + 1 ,
R2. ρ(G)B/(2(s-g)) ≥ s/g - 1 ,
R3. B ≥ s-g .
These are quite mild requirements (see appendix) on the parameters and are easy to fulfill. Figure <ref> shows one graph which follows our construction and also fulfills R1, R2 and R3.
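The construction above is purely combinatorial and easy to instantiate. The following Python sketch (an illustration of ours, not code from the original work) builds the weighted edge set of one such graph G for a small parameter choice satisfying R1–R3; the optional cross-group edges are omitted.

```python
import itertools

def build_wgm_graph(d, g, s, B, rho):
    """Weighted edges of the underlying graph G described above.

    Nodes are pairs (j, i): group j in 0..g-1, index i in 0..d/g-1,
    with circular indexing within each group.  For p = 1..B/(s-g),
    node i is linked to the next rho/2 nodes of "ring" p with weight p,
    so each node touches at most rho edges of any fixed weight.
    """
    m = d // g                      # nodes per group
    edges = {}
    for j, i in itertools.product(range(g), range(m)):
        for p in range(1, B // (s - g) + 1):
            for k in range((p - 1) * rho // 2 + 1, p * rho // 2 + 1):
                u, v = (j, i), (j, (i + k) % m)
                edges[frozenset((u, v))] = p
    return edges

# A tiny instance: R1: d/g = 10 >= rho*B/(s-g) + 1 = 3,
# R2: rho*B/(2(s-g)) = 1 >= s/g - 1 = 1, R3: B = 2 >= s - g = 2.
edges = build_wgm_graph(d=20, g=2, s=4, B=2, rho=2)
print(len(edges))                   # 10 weight-1 edges per group -> 20
```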
We define our restricted ensemble ℱ on G as:
ℱ = { β | β_i = 0 if i ∉ S; β_i ∈ { C_0σ√(d)/√(2(1-ϵ)), C_0σ√(d)/√(2(1-ϵ)) + C_0σ√(d)/√(1-ϵ) } if i ∈ S; S ∈ 𝕄 } ,
for some 0 < ϵ < 1 and 𝕄 is as in Definition <ref>.
Our true β̅ is picked uniformly at random from the above restricted ensemble. We will prove that on this restricted ensemble our Theorem <ref> holds. We will make use of the following lemmas in our proof:
Given the restricted ensemble ℱ,
β̅ ≠ β̂ ⟹ ‖β̅ - β̂‖ ≥ C_0σ√(d)/√(1-ϵ) .
We are dealing with the high-dimensional regime; hence, moving forward, we will assume that n < d. We state another lemma:
For some 0 < ϵ < 1,
ℙ(‖e‖² ≤ σ²n/(1-ϵ)) ≥ 1 - exp(-ϵ²n/4) .
From Lemma <ref>, Lemma <ref>, and the facts that d > n and C ≤ C_0, the corollary below follows:
ℙ(‖β̅ - β̂‖ ≥ C‖e‖ | β̅ ≠ β̂) ≥ 1 - exp(-ϵ²n/4) .
Bound on the mutual information We will assume that the elements of the design matrix X are drawn independently from 𝒩(0, 1). The linear prediction problem from Section <ref> can be described by the following Markov chain:
β̅ → y = X β̅ + e → z = f(y) → β̂ .
Let us say that 𝒮 contains n i.i.d. observations of z and 𝒮' contains n i.i.d. observations of y. Then, using the data processing inequality <cit.>, we can say that
𝕀(β̅, 𝒮) ≤ 𝕀(β̅, 𝒮') .
Hence, for our purpose it suffices to upper bound 𝕀(β̅, 𝒮'). We can bound the mutual information as follows <cit.>:
𝕀(β̅, 𝒮') ≤ (1/|ℱ|²) ∑_β∈ℱ ∑_β'∈ℱ 𝕂𝕃(ℙ_𝒮'|β ‖ ℙ_𝒮'|β') ,
where 𝕂𝕃 is the Kullback-Leibler divergence. Since 𝒮' consists of n i.i.d. observations of y,
𝕀(β̅, 𝒮') ≤ (n/|ℱ|²) ∑_β∈ℱ ∑_β'∈ℱ 𝕂𝕃(ℙ_y_i|β ‖ ℙ_y_i|β') .
Furthermore, from equation (<ref>), and noting that the elements of X are drawn independently from 𝒩(0, 1),
y_i = X_i β + e_i ,
y_i|β ∼ 𝒩(0, ‖β‖² + σ²) ,
y_i|β' ∼ 𝒩(0, ‖β'‖² + σ²) .
We can bound the Kullback-Leibler divergence between ℙ_y_i|β and ℙ_y_i|β' as follows. Writing a := C_0σ√(d)/√(2(1-ϵ)) and noting that every non-zero entry of an element of ℱ takes the value a or (1+√2)a,
𝕂𝕃(ℙ_y_i|β ‖ ℙ_y_i|β') = (1/2)( (‖β‖² + σ²)/(‖β'‖² + σ²) - 1 - log[(‖β‖² + σ²)/(‖β'‖² + σ²)] )
≤ (1/2)( (‖β‖² + σ²)/(‖β'‖² + σ²) + (‖β'‖² + σ²)/(‖β‖² + σ²) - 2 )
≤ (1/2)( 2((1+√2)²a²s + σ²)/(a²s + σ²) - 2 )
≤ (1/2)( 2(1+√2)²(a²s + σ²)/(a²s + σ²) - 2 )
= (√2+1)² - 1 ≤ 5 .
The first inequality holds because 1 - 1/x ≤ log x for all x > 0; the second holds by taking the largest possible numerator and the smallest possible denominator; the remaining steps follow by simple algebra. Substituting this bound on 𝕂𝕃(ℙ_y_i|β ‖ ℙ_y_i|β') in equation (<ref>), we get
𝕀(β̅, 𝒮') ≤ 5 n .
Bound on |ℱ| Now we count the elements of ℱ to complete our proof. The following counting argument establishes a lower bound on the number of possible supports of our restricted ensemble:
* We choose one node from each of the g groups of the underlying graph G to be the root of a connected component. Each group has d/g candidates for the root, so the roots can be chosen in (d/g)^g possible ways.
* Since we are only interested in a lower bound on |ℱ|, we only consider the cases where each connected component has s/g nodes. Moreover, given a root node N_i^j in group j, we choose the remaining s/g - 1 nodes connected with the root only from the nodes N_i+1^j to N_i+ρ(G)B/(2(s-g))^j (using circular indices if needed); the construction of the graph G allows this. Up to the last ρ(G)B/(2(s-g)) nodes, we always include node N_i^j and we never include N_r^j for r ≤ i-1 in our selection. Furthermore, R1 guarantees that we have enough nodes to avoid any repetitions due to circular indices among the last ρ(G)B/(2(s-g)) nodes, and R2 ensures that we have enough nodes to form a connected component. This guarantees that all the supports are unique. Hence, given a root node N_i^j, we have (ρ(G)B/(2(s-g)) choose s/g-1) choices, which across all g groups gives (ρ(G)B/(2(s-g)) choose s/g-1)^g.
* Each entry in the support of β can take one of two values, either C_0σ√(d)/√(2(1-ϵ)) or C_0σ√(d)/√(2(1-ϵ)) + C_0σ√(d)/√(1-ϵ).
It should be noted that any support chosen using the above steps satisfies the weight budget constraint, i.e., w(F) ≤ B, since the maximum edge weight in any connected component is always at most B/(s-g). Combining all the above steps, we get:
|ℱ| ≥ 2^s (d/g)^g (ρ(G)B/(2(s-g)) choose s/g-1)^g
≥ 2^s (d/g)^g (ρ(G)Bg/(2(s-g)²))^(s-g) .
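Combined with the Fano argument below, this count gives a concrete numerical handle on the necessary sample size. The following Python snippet (illustrative; the parameter values are arbitrary choices of ours satisfying R1–R3) evaluates the implied lower bound n ≥ (1/10) log|ℱ| - (1/5) log 2 in natural logarithms, matching the KL bound above:

```python
from math import comb, log

def fano_lower_bound(d, g, s, B, rho):
    """Necessary n implied by
    |F| >= 2^s (d/g)^g C(rho*B/(2(s-g)), s/g - 1)^g
    via n >= log|F|/10 - log(2)/5 (natural log)."""
    size_log = (s * log(2) + g * log(d / g)
                + g * log(comb(rho * B // (2 * (s - g)), s // g - 1)))
    return size_log / 10 - log(2) / 5

print(fano_lower_bound(d=10**4, g=10, s=100, B=400, rho=50))
```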
Using Fano's inequality <cit.> and the results of equations (<ref>) and (<ref>), it is easy to prove the following lemma:
If n ∈ o(log |ℱ|) then ℙ(β̂ ≠ β̅) ≥ 1/2.
Using Bayes' theorem and combining Corollary <ref> and Lemma <ref>,
ℙ(‖β̅ - β̂‖ ≥ C‖e‖) ≥ ℙ(‖β̅ - β̂‖ ≥ C‖e‖, β̅ ≠ β̂)
= ℙ(‖β̅ - β̂‖ ≥ C‖e‖ | β̅ ≠ β̂) ℙ(β̅ ≠ β̂)
≥ (1 - exp(-ϵ²n/4)) · (1/2) .
The last inequality holds when n ∈ o(log |ℱ|). We also know that n ≥ 1, and if we choose ϵ ≥ √(-4 log 0.8) ≈ 0.9448, then we can rewrite inequality (<ref>) as
ℙ(‖β̅ - β̂‖ ≥ C‖e‖) ≥ 1/10 .
§ SPECIFIC EXAMPLES
Here, we provide counting arguments for some well-known sparsity structures, namely the tree-sparsity and block-sparsity models. It should be noted that, apart from the count of possible supports in the specific model, our technique can be used to prove sample-complexity lower bounds for other sparsity structures as well.
§.§ Tree-structured sparsity model
The tree-sparsity model <cit.>, <cit.> is used in many applications, such as wavelet decompositions of piecewise smooth signals and images. In this model, the coefficients of the s-sparse signal form a k-ary tree, and the support of the sparse signal forms a rooted, connected subtree on s nodes of this k-ary tree. The arrangement is such that if a node is part of this subtree, then so is its parent. Here, we discuss the case of a binary tree, which generalizes to k-ary trees. In particular, the following proposition provides a lower bound on the number of possible supports of an s-sparse signal in the binary tree-structured sparsity model.
In a binary tree-structured sparsity model ℱ, log |ℱ| ≥ cs for some c > 0.
The proof of Proposition <ref> follows from the fact that there are at least 2^s different choices of β̅ in our restricted ensemble.
From the above, and following the same proof technique as before, it is easy to prove the following corollary for the noisy case (a similar result holds for the noiseless case as well).
In a binary tree-structured sparsity model, if n ∈ o(s) then ℙ(‖β̅ - β̂‖ ≥ C‖e‖) ≥ 1/10.
Essentially, Corollary <ref> proves that the O(s) sample complexity achieved in <cit.> is optimal for the tree-sparsity model.
§.§ Block sparsity model
In the block sparsity model <cit.>, an s-sparse signal β ∈ ℝ^J×N can be represented as a matrix with J rows and N columns. The support of β is contained in K columns of this matrix, so that s = JK. More precisely,
𝒮_K = { β = [β_1 … β_N] ∈ ℝ^J×N such that β_n = 0 for n ∉ L, L ⊆ {1,…,N}, |L| = K } .
The above can be cast as a graph model. In particular, we construct a graph G over all the entries of β by treating the nodes in each column of the matrix as a connected component (see Fig. <ref>), and the problem is then to choose K connected components out of N.
It is easy to see that the number of possible supports in this model satisfies |ℱ| = 2^KJ (N choose K) ≥ 2^KJ (N/K)^K. Correspondingly, the necessary number of samples for efficient signal recovery comes out to be Ω(KJ + K log(N/K)). An upper bound of O(KJ + K log(N/K)) was derived in <cit.>, which matches our lower bound.
§ CONCLUDING REMARKS
We proved that the number of samples necessary to efficiently recover a sparse vector in the weighted graph model is of the same order as the sufficient number of samples provided by Hegde and others <cit.>. Moreover, our results pertain not only to linear regression but to linear prediction problems in general.
1
baraniuk2010model
R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based
compressive sensing,” IEEE Transactions on Information Theory,
vol. 56, no. 4, pp. 1982–2001, 2010.
hegde2015nearly
C. Hegde, P. Indyk, and L. Schmidt, “A nearly-linear time framework for
graph-structured sparsity,” in Proceedings of the 32nd International
Conference on Machine Learning (ICML-15), pp. 928–937, 2015.
blumensath2009iterative
Thomas Blumensath and Mike E Davies.
Iterative hard thresholding for compressed sensing.
Applied and Computational Harmonic Analysis, 27(3):265–274,
2009.
dai2008subspace
Wei Dai and Olgica Milenkovic.
Subspace pursuit for compressive sensing: Closing the gap between
performance and complexity.
Technical report, DTIC Document, 2008.
needell2009cosamp
Deanna Needell and Joel A Tropp.
Cosamp: Iterative signal recovery from incomplete and inaccurate
samples.
Applied and Computational Harmonic Analysis, 26(3):301–321,
2009.
baraniuk2008simple
Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin.
A simple proof of the restricted isometry property for random
matrices.
Constructive Approximation, 28(3):253–263, 2008.
hegde2014fast
Chinmay Hegde, Piotr Indyk, and Ludwig Schmidt.
A fast approximation algorithm for tree-sparse recovery.
In 2014 IEEE International Symposium on Information Theory,
pages 1842–1846. IEEE, 2014.
Yu97
B. Yu.
Assouad, Fano, and Le
Cam.
In Torgersen E. Pollard D. and Yang G., editors, Festschrift for
Lucien Le Cam: Research Papers in Probability and Statistics, pages
423–435. Springer New York, 1997.
gupta2010sample
Ankit Gupta, Robert D Nowak, and Benjamin Recht.
Sample complexity for 1-bit compressed sensing and sparse
classification.
In ISIT, pages 1553–1557, 2010.
gopi2013one
Sivakant Gopi, Praneeth Netrapalli, Prateek Jain, and Aditya V Nori.
One-bit compressed sensing: Provable support and vector recovery.
In ICML (3), pages 154–162, 2013.
Cover06
T. Cover and J. Thomas.
Elements of Information Theory.
John Wiley & Sons, 2nd edition, 2006.
ai2014one
Albert Ai, Alex Lapanowski, Yaniv Plan, and Roman Vershynin.
One-bit compressed sensing with non-gaussian measurements.
Linear Algebra and its Applications, 441:222–239, 2014.
santhanam2012information
Narayana P Santhanam and Martin J Wainwright.
Information-theoretic limits of selecting binary graphical models in
high dimensions.
IEEE Transactions on Information Theory, 58(7):4117–4134,
2012.
wang2010information
Wei Wang, Martin J Wainwright, and Kannan Ramchandran.
Information-theoretic bounds on model selection for gaussian markov
random fields.
In Information Theory Proceedings (ISIT), 2010 IEEE
International Symposium on, pages 1373–1377. IEEE, 2010.
§ APPENDIX
§ PROOF OF LEMMA <REF>
First note that when β̅ = β̂, Lemma <ref> holds trivially. Here, we prove that for any two arbitrarily chosen β_1, β_2 ∈ ℱ with β_1 ≠ β_2, we have ‖β_1 - β_2‖ ≥ C_0σ√(d)/√(1-ϵ), where ℱ is as defined in equation (<ref>).
Case 1: β_1 and β_2 have the same support. Since β_1 ≠ β_2, they must differ in at least one position of their common support, say position i. Then,
‖β_1 - β_2‖ ≥ |β_1i - β_2i| = C_0σ√(d)/√(1-ϵ) .
Case 2: β_1 and β_2 have different supports. Then we can always find i and j such that i ∈ S_1, i ∉ S_2 and j ∉ S_1, j ∈ S_2, where S_1 and S_2 are the supports of β_1 and β_2 respectively. Then,
‖β_1 - β_2‖ ≥ √(β_1i² + β_2j²) ≥ √((C_0σ√(d)/√(2(1-ϵ)))² + (C_0σ√(d)/√(2(1-ϵ)))²) = C_0σ√(d)/√(1-ϵ) .
Since this is true for any two distinct elements of ℱ, it holds in particular for β̅ and β̂. This proves the lemma.
§ PROOF OF LEMMA <REF>
ℙ(‖e‖² ≥ σ²n/(1-ϵ)) = ℙ( exp(λ‖e‖²/(2σ²)) ≥ exp(λn/(2(1-ϵ))) )
≤ 𝔼[exp(λ‖e‖²/(2σ²))] / exp(λn/(2(1-ϵ)))
= exp(-λn/(2(1-ϵ))) (1/(1-λ))^(n/2) .
The first equality holds for any λ > 0; we take 0 < λ < 1. The inequality is Markov's inequality. The last equality follows since e_i/σ iid∼ 𝒩(0, 1). Now, taking λ = ϵ,
ℙ(‖e‖² ≥ σ²n/(1-ϵ)) ≤ exp( -(n/2)( ϵ/(1-ϵ) + log(1-ϵ) ) ) ≤ exp(-ϵ²n/4) .
The last inequality holds because ϵ/(1-ϵ) + log(1-ϵ) ≥ ϵ²/2 for 0 < ϵ < 1. This proves our lemma:
ℙ(‖e‖² ≤ σ²n/(1-ϵ)) ≥ 1 - exp(-ϵ²n/4) .
§ PROOF OF LEMMA <REF>
Using Fano's inequality <cit.>, we can say that
ℙ(β̂ ≠ β̅) ≥ 1 - (𝕀(β̅, 𝒮) + log 2)/log |ℱ|
≥ 1 - (𝕀(β̅, 𝒮') + log 2)/log |ℱ|
≥ 1 - (5n + log 2)/log |ℱ| .
The first inequality follows from equation (<ref>) and the second from the upper bound on the mutual information established in equation (<ref>). Now, for ℙ(β̂ ≠ β̅) ≤ 1/2 to be possible, n must satisfy
n ≥ (1/10) log |ℱ| - (1/5) log 2 .
This proves the lemma.
§ PROOF OF THEOREM <REF>
*
Constructing an underlying graph G We assume that our underlying graph G fulfills all the properties used in the proof of Theorem <ref>. On this underlying graph G, we define our restricted ensemble ℱ as:
ℱ = { β | β_i ∈ {1, -1} if i ∈ S, β_i = 0 otherwise, S ∈ 𝕄 } ,
where 𝕄 is as in Definition <ref>.
Bound on the mutual information We will assume that the elements of the design matrix X are drawn independently from 𝒩(1/(s√2), 1/s). As in the proof of Theorem <ref>, the noiseless linear prediction problem is described by the following Markov chain:
β̅ → y = X β̅ → z = f(y) → β̂ .
Let us say that 𝒮 contains n i.i.d. observations of z and 𝒮' contains n i.i.d. observations of y. Then, using the data processing inequality <cit.>, we can say that
𝕀(β̅, 𝒮) ≤ 𝕀(β̅, 𝒮') .
Hence, for our purpose it suffices to upper bound 𝕀(β̅, 𝒮'). Using results from <cit.>,
𝕀(β̅, 𝒮') ≤ (1/|ℱ|²) ∑_β∈ℱ ∑_β'∈ℱ 𝕂𝕃(ℙ_𝒮'|β ‖ ℙ_𝒮'|β') ,
where 𝕂𝕃 is the Kullback-Leibler divergence. Since 𝒮' consists of n i.i.d. observations of y,
𝕀(β̅, 𝒮') ≤ (n/|ℱ|²) ∑_β∈ℱ ∑_β'∈ℱ 𝕂𝕃(ℙ_y_i|β ‖ ℙ_y_i|β') .
Furthermore, from equation (<ref>), and noting that the elements of X are drawn independently from 𝒩(1/(s√2), 1/s),
y_i = X_i β ,
y_i|β ∼ 𝒩( ∑_k=1^d β_k/(s√2) , 1 ) ,
y_i|β' ∼ 𝒩( ∑_k=1^d β'_k/(s√2) , 1 ) .
We can then bound the Kullback-Leibler divergence between ℙ_y_i|β and ℙ_y_i|β' by
𝕂𝕃(ℙ_y_i|β ‖ ℙ_y_i|β') = (1/2)( ∑_k=1^d (β_k - β'_k)/(s√2) )² ≤ 1 ,
since each element of ℱ has exactly s non-zero entries in {1,-1}, so that |∑_k (β_k - β'_k)| ≤ 2s. Substituting this bound in equation (<ref>), we get
𝕀(β̅, 𝒮') ≤ n .
Bound on |ℱ| Using the same counting argument as in the proof of Theorem <ref>, we get:
|ℱ| ≥ 2^s (d/g)^g (ρ(G)Bg/(2(s-g)²))^(s-g) .
We prove the theorem by substituting the mutual information bound from equation (<ref>) and the bound on |ℱ| from equation (<ref>) into Fano's inequality <cit.>.
§ DISCUSSION ON THE REQUIREMENTS FOR THE UNDERLYING GRAPH G
We mentioned before that R1, R2 and R3 are quite mild requirements on the parameters. In fact, it is easy to see that:
Given any values of s, g and B ≥ s-g, there are infinitely many choices of ρ(G) and d that satisfy R1 and R2, and hence there are infinitely many (G, s, g, B)-WGMs which follow our construction.
R3 is readily satisfied if each edge has at least unit weight and we are not forced to choose isolated nodes in the support; most graph-structured sparsity models fulfill this requirement. R2 gives us a lower bound on the choice of ρ(G),
ρ(G) ≥ 2(s-g)²/(Bg) .
Similarly, given a value of ρ(G), R1 merely provides a lower bound on the choice of d,
d ≥ gρ(G)B/(s-g) + g .
Clearly, there is an infinite number of combinations of ρ(G) and d.
Clearly, there is an infinite number of combinations for ρ(G) and d.
The paper is organized as follows. We describe our setup in Section <ref>. Then we briefly describe the weighted graph model in Section <ref>. We state our results in Section <ref>. In Section <ref>, we apply our technique to some specific examples. At last, we provide some concluding remarks in Section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/1701.07712v2 | 20170126141730 | The role of T-helper/T-suppressor ratio in the adaptive immune response: a dynamical model | [
"Alessia Annibale",
"Louise A Dziobek-Garrett",
"Haider Tari"
] | q-bio.CB | [
"q-bio.CB",
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.08150v1 | 20170127185454 | Grioli's Theorem with weights and the relaxed-polar mechanism of optimal Cosserat rotations | [
"Andreas Fischle",
"Patrizio Neff"
] | math-ph | [
"math-ph",
"math.MP"
] |
Abstract
Let F ∈ GL^+(3) and consider the right polar decomposition F = polar(F) U
into an orthogonal factor polar(F) ∈ SO(3) and a symmetric, positive definite
factor U = √(F^TF) ∈ PSym(3). In 1940 Giuseppe Grioli proved that
argmin_R ∈ SO(3) ‖R^TF - 𝕀‖² = { polar(F) } = argmin_R ∈ SO(3) ‖F - R‖² .
This variational characterization of the orthogonal factor polar(F) ∈ SO(n)
holds in any dimension n ≥ 2 (a result due to Martins and Podio-Guidugli).
In a similar spirit, we characterize the optimal rotations
rpolar_μ,μ_c(F) := argmin_R ∈ SO(n) { μ‖sym(R^TF - 𝕀)‖² + μ_c‖skew(R^TF - 𝕀)‖² }
for given weights μ > 0 and μ_c ≥ 0. We identify a
classical parameter range μ_c ≥ μ > 0 for which Grioli's
theorem is recovered and a non-classical parameter range
μ > μ_c ≥ 0 giving rise to a new type of globally
energy-minimizing rotations which can substantially deviate
from polar(F). In mechanics, the weighted energy subject
to minimization appears as the shear-stretch contribution in
any geometrically nonlinear, quadratic, and isotropic Cosserat
theory.
Key words:
Cosserat,
Grioli's theorem,
micropolar,
polar media,
zero Cosserat couple modulus,
euclidean distance to (n)
AMS 2010 subject classification:
15A24, 22E30, 74A30, 74A35, 74B20, 74G05, 74G55, 74G65, 74N15.
§ INTRODUCTION
In 1940 Giuseppe Grioli proved a variational characterization of
the orthogonal factor of the polar decomposition <cit.>.
In order to state this result, let polar(F) ∈ SO(n) be the
unique rotation arising as the orthogonal factor of the
right polar decomposition of
F = polar(F) U(F), F ∈ GL^+(n) ,
where U(F) = polar(F)^TF = √(F^TF) ∈ PSym(n) denotes
the symmetric positive definite factor (which, in mechanics, is
referred to as the Biot stretch tensor).
Grioli's original result[An exposition of the original
contribution of Grioli in modernized notation has been recently
made available in <cit.>.] is the important special
case of space dimension n = 3 of the following
[Grioli's theorem <cit.>]
Let n ≥ 2 and let ‖X‖² := tr(X^TX) denote the squared Frobenius
matrix norm. Then for any F ∈ GL^+(n), it holds
argmin_R ∈ SO(n) ‖R^TF - 𝕀‖² = { polar(F) } ,
and thus min_R ∈ SO(n) ‖R^TF - 𝕀‖² = ‖U - 𝕀‖² .
The polar factor polar(F) ∈ SO(n) is the unique energy-minimizing
rotation for any given F ∈ GL^+(n) in any dimension n ≥ 2,
see, e.g., <cit.>. This optimality property has an interesting
geometric interpretation following from the orthogonal invariance of the
Frobenius norm,
‖R^TF - 𝕀‖² = ‖F - R‖² = dist²_euclid(F, R) ,
which reveals a connection to the problem class of matrix distance
(or nearness) problems. In elasticity, the distance of a deformation
gradient (Jacobian matrix) F := ∇φ ∈ GL^+(n) to
a rotation R ∈ SO(n) is of interest as a measure for the energy
induced by local changes in length.
In this contribution, we consider a weighted analog of Grioli's
theorem motivated by Cosserat theory and present the energy-minimizing
(optimal) rotations characterized by
[Weighted optimality]
Let n ≥ 2. Compute the set of optimal rotations
argmin_R ∈ SO(n) W_μ,μ_c(R ;F) := argmin_R ∈ SO(n) { μ‖sym(R^TF - 𝕀)‖² + μ_c‖skew(R^TF - 𝕀)‖² }
for given F ∈ GL^+(n) and weights μ > 0, μ_c ≥ 0.
Here, sym(X) := (1/2)(X + X^T) and
skew(X) := (1/2)(X - X^T) denote the symmetric
and skew-symmetric parts of X ∈ ℝ^n×n, respectively.
Note that Grioli's theorem stated above is recovered for the
case of equal weights μ = μ_c > 0. In order to express the
connection to the variational characterization of the polar
factor polar(F) ∈ SO(n), we have introduced the following notation
[Relaxed polar factor(s)]
Let μ > 0 and μ_c ≥ 0. We denote the set-valued mapping
that assigns to a given parameter F ∈ GL^+(n) its associated
set of energy-minimizing rotations by
rpolar_μ,μ_c(F) := argmin_R ∈ SO(n) W_μ,μ_c(R ;F) .
In the weighted case, the polar factor polar(F) is always critical
but not always optimal. In general the global minimizers
rpolar_μ,μ_c(F) depend on the parameters μ > 0 and
μ_c ≥ 0 and can substantially deviate from polar(F).
The optimal rotations rpolar_μ,μ_c(F) in the weighted case
have been worked out in two and three space dimensions by the present
authors in a series of papers <cit.>;
cf. also <cit.> and <cit.>
for earlier related work. A visualization of the mechanism of optimal
Cosserat rotations in dimension n = 3 for an idealized nano-indentation
was given in <cit.> and shows that the optimal rotations
can produce interesting non-classical patterns. A final proof of
optimality in any dimension n ≥ 2 has been obtained by Borisov
and the authors in <cit.>; it is based on a new
characterization of the real square roots of real symmetric matrices.
This contribution presents an overview of these results, omitting
the proofs, for which we refer to the original contributions.
Our study of the energy-minimizing rotations rpolar_μ,μ_c(F)
is motivated by a particular Cosserat (micropolar)
theory <cit.>, i.e., a continuum theory
with additional rotational degrees of freedom R ∈ SO(n). In this
context, the objective function W_μ,μ_c(R ;F) subject to
minimization in Problem <ref> determines the
shear-stretch contribution to the strain energy in any
nonlinear, quadratic, and isotropic Cosserat theory, see also <cit.>.
The arguments of the shear-stretch energy W_μ,μ_c(R ;F) are
the deformation gradient field
F := ∇φ: Ω → GL^+(n)
and the microrotation field R: Ω → SO(n), evaluated at a
given point of the domain Ω. A full Cosserat continuum model
furthermore contains an additional curvature energy
term <cit.> and a volumetric
energy term, see, e.g., <cit.> or <cit.>.
It is always possible to express the local energy contribution in a
Cosserat model as W = W(U̅), where U̅ := R^TF
is the first Cosserat deformation tensor. This reduction follows
from objectivity requirements and has already been observed by the
Cosserat brothers <cit.>,
see also <cit.> and <cit.>.
Since U̅ is in general non-symmetric, the most general isotropic
and quadratic local energy contribution which is zero at the reference
state is given by
μ‖sym(U̅ - 𝕀)‖² + μ_c‖skew(U̅ - 𝕀)‖²  (“shear-stretch energy”)  +  (λ/2) tr²(U̅ - 𝕀)  (“volumetric energy”) .
The last term will be discarded in the following, since it couples
the rotational and volumetric response, a feature not present in the
well-known isotropic linear Cosserat models.[The Cosserat brothers never
proposed any specific expression for the local energy W = W(U̅).
The chosen quadratic ansatz for W = W(U̅) is motivated by a direct
extension of the quadratic energy in the linear theory of Cosserat
models, see, e.g., <cit.>. We always consider a true volumetric-isochoric
split in our applications.]
From the perspective of Cosserat theory, the optimal rotations
rpolar_μ,μ_c(F) yield insight into the important limit case
of vanishing characteristic length L_c = 0.[This
identification requires that the volume term decouples from
the microrotation R, e.g.,
W^vol(U̅) := (λ/4)[(det U̅ - 1)² + (1/det U̅ - 1)²] .
This requirement is quite natural and is satisfied by all linear
Cosserat models <cit.>.] In this context, we can interpret the solutions
of (<ref>) as an energetically optimal mechanical
response of the field of Cosserat microrotations R ∈ SO(n) to
a given deformation gradient F := ∇φ ∈ GL^+(n).
The correct choice of the so-called Cosserat couple modulus
μ_c ≥ 0 for specific materials and boundary value problems is
an interesting open question. There are indications that a
non-vanishing μ_c > 0 has never been experimentally observed
and that such a choice is at least debatable <cit.>.
The limit case μ_c = 0 is hence of particular interest.
We want to stress that although the term W_μ,μ_c(R ;F) subject to
minimization in (<ref>) is quadratic in the nonsymmetric
microstrain tensor U̅ - 𝕀 = R^TF - 𝕀, see,
e.g., <cit.>, the associated minimization
problem with respect to R is nonlinear due to the multiplicative
coupling R^TF and the geometry of SO(n).
The energy W_μ,μ_c(R ;F) is a polynomial in the matrix entries,
hence W_μ,μ_c ∈ C^∞(SO(n), ℝ). Further, since the Lie
group SO(n) is compact and ∂SO(n) = ∅, the
global extrema of W_μ,μ_c are attained at interior points.
The previous remark hints at a possible solution strategy for
Problem <ref>. If all the critical points
R_crit(F) ∈ SO(n) of W_μ,μ_c(R ;F) can be
computed[The smooth manifold SO(n) has empty boundary.
This implies that a critical point for given F ∈ GL^+(n)
satisfies (d/dt) W_μ,μ_c(R(t) ;F)|_t=0 = 0 for
every smooth curve of rotations R(t): (-ϵ, ϵ) → SO(n) passing through R(0) = R_crit.],
then a direct comparison of the associated critical energy levels
W_μ,μ_c(R_crit ;F) allows us to determine the
critical branches which are energy-minimizing.
Clearly, any minimizing critical branch realizes the
reduced Cosserat shear-stretch energy defined as
W^red_μ,μ_c: GL^+(n) → ℝ ,
W^red_μ,μ_c(F) := min_R ∈ SO(n) W_μ,μ_c(R ;F) .
At first, a solution of Problem <ref> in three space
dimensions was out of reach (let alone the n-dimensional problem).
Therefore, we first restrict our attention to the planar case, where
we can base our computations on the standard parametrization
R: [-π, π] → SO(2) ⊂ ℝ^2×2, R(α) := [ cos α  -sin α ; sin α  cos α ]
by a rotation angle.[Note that π and -π are mapped
to the same rotation. In this text, we implicitly choose π over
-π for the rotation angle whenever uniqueness is an issue.]
It turns out that there are at most two optimal planar rotations
rpolar^±_μ,μ_c(F) in the non-classical parameter
range μ > μ_c ≥ 0, and we distinguish these by a sign.
The corresponding optimal rotation angles of
rpolar^±_μ,μ_c(F) are denoted by
α^±_μ,μ_c(F). The non-classical minimizers coincide
with the polar factor polar(F) in the compressive regime
of F ∈ GL^+(2), but deviate otherwise.
The computation of the global minimizers in dependence of F
is not completely obvious even in the planar case.
Hence, the following simplifications of the minimization problem
are helpful.
First, it is useful to introduce
[Parameter rescaling]
Let μ > μ_c ≥ 0. We define the singular radius by
ρ_μ,μ_c := 2μ/(μ - μ_c) > 0 ,
and further define λ_μ,μ_c := ρ_μ,μ_c/ρ_1,0 = μ/(μ - μ_c) ,
as the induced scaling parameter. Note that ρ_1,0 = 2 and
λ_1,0 = 1. Further, we define the parameter rescaling
given by
F_μ,μ_c := λ^-1_μ,μ_c F = ((μ - μ_c)/μ) F ∈ GL^+(n) .
For μ > 0 and μ_c = 0, we obtain F_μ,0 = F, i.e.,
the rescaling is only effective for μ_c > 0.
Regarding the material parameters, we proved
in <cit.> that for any dimension
n ≥ 2 it is in fact sufficient to restrict
our attention to two parameter pairs:
(μ,μ_c) = (1,1), the classical case, and
(μ,μ_c) = (1,0), the non-classical case.
Hence, somewhat surprisingly, the solutions for arbitrary
μ > 0 and μ_c ≥ 0 can be recovered from these
two limit cases. This is the content of
Let n ≥ 2 and let F ∈ GL^+(n), then
μ_c ≥ μ > 0 ⟹ W_μ,μ_c(R ;F) ∼ W_1,1(R ;F) , and
μ > μ_c ≥ 0 ⟹ W_μ,μ_c(R ;F) ∼ W_1,0(R ;F_μ,μ_c) .
Here, the equivalence notation means that the energies give rise to
the same global minimizers, which we can also state as
rpolar_μ,μ_c(F) = rpolar_1,1(F) = { polar(F) } , if μ_c ≥ μ > 0 ,
rpolar_μ,μ_c(F) = rpolar_1,0(F_μ,μ_c) , if μ > μ_c ≥ 0 .
Another important observation can be made by introducing the relative rotation
R̂ := Q^T R^T polar(F) Q
which measures R against the polar factor polar(F) in the
coordinate system given by the columns of Q, which span a positively
oriented frame of principal directions of U. Writing F = polar(F) Q D Q^T,
this allows us to transform
Q^T(sym(R^TF) - 𝕀)Q = Q^T(sym(R^T polar(F) Q D Q^T) - 𝕀)Q
= sym(Q^T R^T polar(F) Q D Q^TQ - Q^TQ) = sym((Q^T R^T polar(F) Q) D - 𝕀) = sym(R̂D - 𝕀) .
For a fixed choice of Q ∈ SO(n), the inverse transformation allows
us to reconstruct the absolute rotation uniquely:
R = (Q R̂ Q^T polar(F)^T)^T = polar(F) Q R̂^T Q^T .
Hence, in the non-classical parameter range represented by the
limit case (μ,μ_c) = (1,0), the minimization problem can
be reduced to the following problem for the optimal relative
rotations.
Let n ≥ 2. Compute the set of energy-minimizing relative
rotations
rpolar_1,0(D) := argmin_R̂ ∈ SO(n) W_1,0(R̂ ;D) = argmin_R̂ ∈ SO(n) ‖sym(R̂D - 𝕀)‖² ⊆ SO(n) .
The decisive point in the solution of Problem <ref>
in dimensions n ≥ 3 is the characterization of the set
of relative rotations R̂ ∈ SO(n) satisfying the
particular symmetric square condition
(R̂D - 𝕀)² ∈ Sym(n) ,
which is equivalent to the Euler-Lagrange equations.
After having set the stage for the optimization problem on SO(n),
this overview is now structured as follows: in Section 2,
we consider the planar problem in some detail; it allows for a
complete solution by elementary techniques and already exhibits
the essential geometry which unfolds in dimensions
n ≥ 3. In Section 3, we provide the complete solution for
the three-dimensional case as well as the corresponding reduced
energy expression in terms of the singular values of F. We also
provide a geometrical interpretation that allows us to view the
minimization problem for μ_c = 0 as a distance problem.
Furthermore, we discuss for which deformation
gradients only the classical response polar(F) can occur.
Finally, in Section 4, we present our results for the general
n-dimensional case.
§ OPTIMAL ROTATIONS IN TWO SPACE DIMENSIONS
In this section, we consider
[The planar minimization problem]
Let F ∈ GL^+(2), μ > 0 and μ_c ≥ 0. The task is to
compute the set of optimal microrotation angles
argmin_α ∈ [-π,π] { μ‖sym(R(α)^TF - 𝕀_2)‖² + μ_c‖skew(R(α)^TF - 𝕀_2)‖² } ,
where
R(α) := [ cos α  -sin α ; sin α  cos α ] ∈ SO(2)
and F := [ F_11 F_12; F_21 F_22 ] ∈ GL^+(2) .
In this case, we can compute explicit representations of the optimal
planar rotations for the Cosserat shear-stretch energy by
elementary means. The parameter reduction strategy described
in Lemma <ref> allows us to concentrate
our efforts for the construction of explicit solutions
to Problem <ref> on two representative pairs of parameter
values μ and μ_c. The classical
regime is characterized by the limit case (μ,μ_c) = (1,1), and
the unique minimizer is given by the polar factor polar(F) in
any dimension n ≥ 2.
The non-classical case represented by (μ,μ_c) = (1,0) turns
out to be much more interesting, and we compute all global non-classical
minimizers rpolar_1,0(F) for n = 2. This is the main contribution
of this section. Furthermore, we derive the associated
reduced energy levels W^red_1,1(F) and W^red_1,0(F)
which are realized by the corresponding optimal Cosserat microrotations.
Finally, we reconstruct the minimizing rotation angles for general values
of μ and μ_c from the classical and non-classical limit cases.
§.§ Explicit solution for the classical parameter range: μ_c ≥μ > 0
The polar factor polar(F) is uniquely optimal in the classical
parameter range in any dimension n ≥ 2. Let us give an explicit
representation for n = 2 in terms of the optimal rotation angle
α^polar ∈ (-π,π]. In view of the parameter reduction, distilled
in Lemma <ref>, it suffices to compute the set of
optimal rotation angles for the representative limit case
(μ,μ_c) = (1,1).
Thus, to obtain an explicit representation of the angle α^polar ∈ (-π,π] which
characterizes the polar factor polar(F) in dimension n = 2, we
consider
argmin_α ∈ [-π,π] W_1,1(R(α) ;F)
= argmin_α ∈ [-π,π] ‖ [ cos α  -sin α ; sin α  cos α ]^T [ F_11 F_12; F_21 F_22 ] - [ 1 0; 0 1 ] ‖² .
Let us introduce the rotation
J := [ 0 -1; 1 0 ] ∈ SO(2) .
Its application to a vector v ∈ ℝ² corresponds to multiplication
with the imaginary unit i ∈ ℂ. In what follows, the quantities
tr F = F_11 + F_22 and tr(JF) = F_12 - F_21
play a particular role, and we note the identity
tr²F + tr²(JF) = ‖F‖² + 2 det F = tr²U .
The reduced energy
W^red_1,1(F) := min_R ∈ SO(n) W_1,1(R ;F)
realized by the polar factor polar(F) can be shown to be the squared euclidean
distance of an arbitrary F ∈ ℝ^n×n to SO(n). For n = 2,
we obtain
[Euclidean distance to planar rotations]
Let F ∈ GL^+(2), then
W^red_1,1(F) = dist²_euclid(F, SO(2)) = ‖U - 𝕀‖²
= ‖F‖² - 2√(‖F‖² + 2 det F) + 2 .
The unique optimal rotation angle realizing this minimal
energy level satisfies the equation
[ sin α^polar ; cos α^polar ] = (1/tr U) [ -tr(JF) ; tr F ] .
In particular, we have
α^polar(F) = -sign(tr(JF)) · arccos(tr F / tr U) ∈ [-π,π] .
Let F ∈^+(2), then the polar factor (F) has the
explicit representation
(F) =
R() [ cos -sin; sin cos ]
= 1/U[ -F JF; -JF F ] .
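As a sanity check of the closed-form representation above, the following numpy sketch (illustrative; the test matrix is an arbitrary choice) evaluates α^polar from tr F, tr(JF) and tr U = √(‖F‖² + 2 det F) and compares R(α^polar) with the polar factor obtained from an SVD. The sign convention assumes tr(JF) ≠ 0 or tr F > 0.

```python
import numpy as np

def alpha_polar(F):
    """Rotation angle of polar(F) for F in GL+(2), via tr F and tr(JF)."""
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    tr_F, tr_JF = np.trace(F), np.trace(J @ F)
    tr_U = np.sqrt(np.sum(F * F) + 2.0 * np.linalg.det(F))   # tr U
    return -np.sign(tr_JF) * np.arccos(tr_F / tr_U)

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

F = np.array([[1.0, 0.7], [-0.2, 1.3]])          # det F = 1.44 > 0
W, S, Vt = np.linalg.svd(F)
assert np.allclose(R(alpha_polar(F)), W @ Vt)    # matches polar(F) from SVD
```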
§.§ The limit case (μ,μ_c) = (1,0) for μ > μ_c ≥ 0
We now approach the more interesting non-classical limit case
(μ, μ_c) = (1,0) and compute the optimal rotations for
W_μ,μ_c(R ;F). Note that, due to Lemma <ref>,
this limit case represents the entire non-classical parameter
range μ > μ_c ≥ 0.
[The formally reduced energy W^red_1,0(F)]
Let F ∈ GL^+(2). Then, the formally reduced energy
W^red_1,0(F) := min_R ∈ SO(2) W_1,0(R ;F)
:= min_R ∈ SO(2) ‖sym(R^TF - 𝕀)‖²
is given by
W^red_1,0(F) = ‖U - 𝕀‖² = ‖sym(U - 𝕀)‖² = dist²_euclid(F, SO(2)) , if tr U < 2 ,
W^red_1,0(F) = (1/2)‖F‖² - det F = (1/2) tr²U - 2 det U , if tr U ≥ 2 .
It is well-known that any orthogonally invariant energy density
W(F) admits a representation in terms of the singular values of F, i.e.,
in the eigenvalues of U. Let us give this representation.
Let F ∈ GL^+(2) and denote its singular values by ν_i, i = 1,2.
The representation of W^red_1,0(F) in the singular values of F
is given by
W^red_1,0(F) = W^red_1,0(ν_1,ν_2) = (ν_1 - 1)² + (ν_2 - 1)² , if ν_1 + ν_2 < 2 ,
W^red_1,0(F) = W^red_1,0(ν_1,ν_2) = (1/2)(ν_1 - ν_2)² , if ν_1 + ν_2 ≥ 2 .
Note that the previous formulae are independent of the enumeration
of the singular values.
Note that the previous formulae are independent of the enumeration
of the singular values.
§.§.§ Optimal relative rotations for (μ,μ_c) = (1,0)
Our next goal is to compute explicit representations of the rotations
rpolar^±_1,0(F) which realize the minimal energy level in the
non-classical limit case (μ,μ_c) = (1,0). This is the content
of the next theorem, for which we now prepare the stage with the
following
Let D = diag(σ_1,σ_2) > 0, i.e., a diagonal matrix with
strictly positive diagonal entries. Then, assuming tr D ≥ 2,
the equation tr(R(β)D) = 2 has the following solutions:
β^± = ± arccos(2/tr D) ∈ [-π,π] .
For tr D < 2, there exists no solution, but we can define
β := β^± := 0 by continuous extension.
For D < 2, there exists no solution, but we can define
β = β^± 0 by continuous extension.
Fig. <ref> shows a plot of the optimal
relative rotation angle β(tr U).
In the classical parameter range 0 < tr U ≤ 2,
polar(F) is uniquely optimal and β vanishes identically.
At tr U = 2, a classical pitchfork bifurcation occurs. In
particular, due to tr U(𝕀_2) = tr 𝕀_2 = 2,
the identity matrix is a bifurcation point of β^±(F).
Further, we note that the branches β^±(tr U) = ±arccos(2/tr U)
are not differentiable at tr U = 2. This has implications
for the interaction of the Cosserat shear-stretch energy with the
Cosserat curvature energy W_curv.
[Optimal non-classical microrotation angles _1,0^±]
Let F ∈^+(2) and consider (μ,μ_c) = (1,0). The optimal
rotation angles for are given by
_1,0^±(F) =
(F) = arccos(F/U)
, if U < 2
(F) ± arccos(2/U)
, if U≥ 2 .
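The two non-classical branches can be verified by brute force. The numpy sketch below (illustrative; the matrix F and the grid resolution are arbitrary choices) minimizes W_1,0(R(α) ;F) = ‖sym(R(α)^TF - 𝕀)‖² over a fine grid of angles and compares the numerical minimizer with α^polar(F) ± arccos(2/tr U); it also checks the reduced energy value (1/2) tr²U - 2 det U from the lemma above.

```python
import numpy as np

def energy_10(alpha, F):
    """W_{1,0}(R(alpha); F) = || sym(R(alpha)^T F - I) ||^2."""
    c, s = np.cos(alpha), np.sin(alpha)
    Rt = np.array([[c, s], [-s, c]])            # R(alpha)^T
    M = Rt @ F - np.eye(2)
    return np.sum(((M + M.T) / 2.0) ** 2)

F = np.array([[2.0, 0.9], [0.0, 1.5]])
tr_U = np.sqrt(np.sum(F * F) + 2.0 * np.linalg.det(F))   # ~3.61 > 2

W, S, Vt = np.linalg.svd(F)
Rp = W @ Vt
a_pol = np.arctan2(Rp[1, 0], Rp[0, 0])                   # angle of polar(F)
a_plus = a_pol + np.arccos(2.0 / tr_U)
a_minus = a_pol - np.arccos(2.0 / tr_U)

grid = np.linspace(-np.pi, np.pi, 40001)
vals = [energy_10(a, F) for a in grid]
a_num = grid[np.argmin(vals)]
print(a_num, a_plus, a_minus)        # a_num agrees with one of the branches
print(min(vals), 0.5 * tr_U**2 - 2.0 * np.linalg.det(F))  # reduced energy
```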
§.§ Expressions for general non-classical parameter choices
The reduction for μ and μ_c in Lemma <ref>
asserts that the optimal rotations for arbitrary values of
μ > 0 and μ_c ≥ 0
can be reconstructed from the limit cases (μ, μ_c) = (1,1) and
(μ, μ_c) = (1,0). We now detail this procedure, which essentially
exploits Definition <ref>.
Note first that the rescaled deformation gradient F_μ,μ_c := λ^-1_μ,μ_c F induces a rescaled stretch tensor
U_μ,μ_c = √((F_μ,μ_c)^T F_μ,μ_c) = λ^-1_μ,μ_c · U .
The right polar decomposition takes the form
F_μ,μ_c = polar(F_μ,μ_c) U_μ,μ_c.
From polar(F_μ,μ_c) = F_μ,μ_c U^-1_μ,μ_c follows the scaling invariance polar(F_μ,μ_c) = polar(F).
For the non-classical parameter range μ > μ_c ≥ 0, the quantity
tr U_μ,μ_c = λ_μ,μ_c^-1 · tr U = (ρ_1,0/ρ_μ,μ_c) tr U
plays an essential role. This leads us to
tr U_μ,μ_c ≥ 2 = ρ_1,0 ⟺ (ρ_1,0/ρ_μ,μ_c) · tr U ≥ ρ_1,0 ⟺ tr U ≥ ρ_μ,μ_c .
In particular, this implies that the bifurcation in tr U
allowing for non-classical optimal planar rotations is
characterized by the singular radius ρ_μ,μ_c.
Let F ∈ GL^+(2). For μ_c ≥ μ > 0,
the optimal microrotation angle is given by
α_μ,μ_c(F) = α^polar(F_μ,μ_c) = α^polar(F) = -sign(tr(JF)) arccos(tr F/tr U) .
For μ > μ_c ≥ 0, the two optimal rotation angles
are given by
α^±_μ,μ_c(F) = α^±_1,0(F_μ,μ_c) = α^polar(F) , if tr U < ρ_μ,μ_c ,
α^±_μ,μ_c(F) = α^polar(F) ± arccos(ρ_μ,μ_c/tr U) , if tr U ≥ ρ_μ,μ_c .
§.§ Optimal rotations for planar simple shear
We now apply our previous optimality results to simple shear
deformations
F_γ := [ 1 γ; 0 1 ] , γ ∈ ℝ .
The energy-minimizing rotation angles
α_μ,μ_c(γ) := α_μ,μ_c(F_γ)
for simple shear can be computed explicitly; see also <cit.>
for previous results.
In the classical parameter range μ_c ≥ μ > 0, represented
by the limit case (μ,μ_c) = (1,1), the polar rotation
polar(F_γ) is uniquely optimal.
Let us collect some properties of simple shear F_γ. We have
‖F_γ‖² = 2 + γ² and det F_γ = 1, i.e.,
simple shear is volume preserving for any amount of shear γ. This
allows us to compute
tr U_γ = √(‖F_γ‖² + 2 det F_γ) = √(4 + γ²) ≥ 2 .
Thus, we have
Let (μ,μ_c) = (1,0) and let F_γ ∈ GL^+(2) be a simple
shear of amount γ ∈ ℝ. Then,
γ ≠ 0 ⟹ rpolar^±_1,0(F_γ) ≠ polar(F_γ) .
A simple shear F_γ by a non-zero amount γ≠ 0
automatically generates an optimal microrotational response
^±(F_γ) which deviates from the continuum
rotation (F). This implies that the associated first
Cosserat deformation tensor ^±_1,0(F_γ) _1,0^±(F_γ)^TF_γ is not symmetric for any
γ≠ 0.
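As a concrete illustration (a worked example we add here; the signs depend on the chosen orientation convention for R(α)), consider γ = 1. Then tr U_γ = √5 and β = arccos(2/√5) = arctan(1/2) ≈ 26.57°, while the polar angle evaluates to α_polar(F_1) = -arctan(1/2). The two optimal microrotation angles are therefore α^+ = 0 and α^- = -2 arctan(1/2) ≈ -53.13°; in particular, the identity rotation is itself optimal for simple shear, and both minimizers realize the reduced energy (1/2)(σ_1 - σ_2)^2 = γ^2/2 = 1/2, strictly below the value ≈ 0.53 realized by the polar factor.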
§ OPTIMAL ROTATIONS IN THREE SPACE DIMENSIONS
In this section, we discuss the three-dimensional case of the weighted optimality problem.
[Weighted optimality in dimension n = 3]
Let μ > 0 and μ_c ≥ 0. Compute the set of optimal rotations
argmin_{R ∈ SO(3)} W_μ,μ_c(R ;F) := argmin_{R ∈ SO(3)} { μ ‖sym(R^TF - 𝕀)‖^2
+ μ_c ‖skew(R^TF - 𝕀)‖^2 }
for given parameter F ∈ GL^+(3) with distinct singular values
ν_1 > ν_2 > ν_3 > 0.
The polar factor polar(F) is the unique minimizer for
W_μ,μ_c(R ;F) in the classical parameter range
μ_c ≥ μ > 0, in all dimensions n ≥ 2,
see <cit.>.
Since the classical parameter domain μ_c ≥μ > 0 is very well
understood, we focus entirely on the non-classical parameter
range μ > μ_c ≥ 0. Furthermore, due to the parameter
reduction described by lem:parameter_reduction, which holds
for all dimensions n ≥ 2, it suffices to solve the non-classical
limit case (μ,μ_c) = (1,0), since
argmin_{R ∈ SO(3)} W_μ,μ_c(R ;F) = argmin_{R ∈ SO(3)} W_1,0(R ; F_μ,μ_c) .
On the right hand side, we notice a rescaled deformation gradient
F_μ,μ_c := λ^-1_μ,μ_c · F ∈ GL^+(3)
which is obtained from F ∈ GL^+(3) by multiplication with the inverse of
the induced scaling parameter
λ_μ,μ_c := μ/(μ - μ_c) > 0. We note that we use
the previous notation throughout the text and further introduce
the singular radius ρ_μ,μ_c := 2μ/(μ - μ_c).
It follows that the set of optimal Cosserat rotations can be described
by
rpolar_μ,μ_c(F) = rpolar_1,0(F_μ,μ_c)
for the entire non-classical parameter range μ > μ_c ≥ 0.
We are therefore mostly concerned with the case μ_c = 0 in the
present text. Note that for all μ > 0, we have the equality
rpolar^±_μ,0(F)
= rpolar^±_1,0(F) .
§.§ The locally energy-minimizing Cosserat rotations rpolar^±_μ,μ_c(F)
We briefly present the geometric characterization of the optimal
Cosserat rotations rpolar^±_μ,μ_c(F) obtained
in <cit.>. Let R ∈ SO(3) and
let S^2 ⊂ ℝ^3 denote the unit 2-sphere.
We make use of the well-known angle-axis parametrization of
rotations which we write as [α, r][The angle-axis parametrization
is singular, but this is not an issue for our exposition.],
where α ∈ (-π,π] denotes the rotation angle
and r ∈ S^2 specifies the oriented rotation axis.
We recall that it is sufficient to solve for the relative
rotation, i.e., we consider
[Diagonal form of weighted optimality in n = 3]
Let μ > 0 and μ_c ≥ 0 and let
D = diag(ν_1,ν_2,ν_3) with
ν_1 > ν_2 > ν_3 > 0. Compute the set of
optimal relative rotations
argmin_{R̂ ∈ SO(3)} W_μ,μ_c(R̂^T ;D) := argmin_{R̂ ∈ SO(3)} { μ ‖sym(R̂D - 𝕀)‖^2
+ μ_c ‖skew(R̂D - 𝕀)‖^2 } .
We stress that the rotation angle of the
relative rotation R̂ is implicitly reversed
due to the correspondence R^T ↔ R̂.
The computation of the solutions to intro:prob_wmm_reduced
by computer algebra together with a statistical verification are
the core results obtained in <cit.> which we
present next.
[Energy-minimizing relative rotations
for (μ,μ_c) = (1,0)]
Let ν_1 > ν_2 > ν_3 > 0 be the singular values of
F ∈ GL^+(3). Then the energy-minimizing relative rotations
solving intro:prob_wmm_reduced are given by
R̂_1,0^±(F) :=
[ cos β̂^±_1,0 -sin β̂^±_1,0 0; sin β̂^±_1,0 cos β̂^±_1,0 0; 0 0 1 ] ,
where the optimal rotation angles
β̂^±_1,0 ∈ (-π,π]
are given by
β̂^±_1,0(F) :=
0 , if ν_1 + ν_2 ≤ 2 ,
± arccos(2/(ν_1 + ν_2)) , if ν_1 + ν_2 ≥ 2 .
Thus, in the non-classical regime ν_1 + ν_2 ≥ 2,
we obtain the explicit expression
R̂_1,0^±(F) =
[ 2/(ν_1 + ν_2) ∓ √(1-(2/(ν_1 + ν_2))^2) 0; ± √(1-(2/(ν_1 + ν_2))^2) 2/(ν_1 + ν_2) 0; 0 0 1 ] .
In the classical regime ν_1 + ν_2 ≤ 2, we simply
obtain the relative rotation R̂_1,0^±(F) = 𝕀,
and there is no deviation from the polar factor polar(F)
at all.
Note that, due to the parameter reduction lem:parameter_reduction,
it is always possible to recover the optimal rotations
rpolar^±_μ,μ_c(F)
for general non-classical parameter choices μ > μ_c ≥ 0
from the non-classical limit case
from the non-classical limit case
(μ,μ_c) = (1,0); cf. <cit.>
and <cit.> for details.
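The explicit matrix representation can also be checked numerically. The Python sketch below (ours, not part of the original text; the sample singular values are an ad hoc choice) evaluates the energy of R̂^±_1,0 and verifies statistically that random rotations never do better:

import numpy as np
rng = np.random.default_rng(0)

def energy(Rhat, D):
    # relative form of the energy: W_{1,0}(Rhat; D) = ||sym(Rhat D - I)||^2
    X = Rhat @ D - np.eye(3)
    S = 0.5 * (X + X.T)
    return np.sum(S * S)

nu = np.array([4.0, 2.0, 0.5])              # nu1 > nu2 > nu3 with nu1 + nu2 >= 2
D = np.diag(nu)
beta = np.arccos(2.0 / (nu[0] + nu[1]))     # optimal relative rotation angle
c, s = np.cos(beta), np.sin(beta)
Rhat = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
W_red = 0.5 * (nu[0] - nu[1])**2 + (nu[2] - 1.0)**2
print(energy(Rhat, D), W_red)               # both evaluate to 2.25

def random_rotation():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q   # det(-Q) = -det(Q) in odd dimension

best_sampled = min(energy(random_rotation(), D) for _ in range(100000))
print(best_sampled >= W_red - 1e-12)        # True: no sampled rotation beats the formula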
§.§ Geometric and mechanical aspects of optimal Cosserat
rotations
It seems natural to introduce
[Maximal mean planar stretch and strain]
Let F ∈ GL^+(3) with singular values
ν_1 ≥ ν_2 ≥ ν_3 > 0. We introduce
the maximal mean planar stretch u_mp and
the maximal mean planar strain s_mp as follows:
u_mp(F) := (ν_1 + ν_2)/2 , and
s_mp(F) := ((ν_1 - 1) + (ν_2 - 1))/2 = u_mp(F) - 1 .
In order to describe the bifurcation behavior of rpolar_μ,μ_c^±(F)
as a function of the parameter F ∈ GL^+(3), it is helpful to
partition the parameter space GL^+(3).
[Classical and non-classical domain]
To any pair of material parameters (μ,μ_c) in the non-classical
range μ > μ_c ≥ 0, we associate a classical domain
C_μ,μ_c and a non-classical domain NC_μ,μ_c.
Here,
C_μ,μ_c := { F ∈ GL^+(3) | s_mp(F_μ,μ_c) ≤ 0 } ,
and
NC_μ,μ_c := { F ∈ GL^+(3) | s_mp(F_μ,μ_c) ≥ 0 } ,
respectively.
It is straight-forward to derive the following equivalent characterizations
C_μ,μ_c = { F ∈ GL^+(3) | u_mp(F) ≤ λ_μ,μ_c }
= { F ∈ GL^+(3) | ν_1 + ν_2 ≤ 2μ/(μ - μ_c) } ,
NC_μ,μ_c = { F ∈ GL^+(3) | u_mp(F) ≥ λ_μ,μ_c }
= { F ∈ GL^+(3) | ν_1 + ν_2 ≥ 2μ/(μ - μ_c) } .
On the intersection
C_μ,μ_c ∩ NC_μ,μ_c = { F ∈ GL^+(3) | s_mp(F_μ,μ_c) = 0 },
the minimizers rpolar_μ,μ_c^±(F) coincide with the polar
factor polar(F). This can be seen from the form of the optimal
relative rotations in propo:rhat. More explicitly, in
dimension n = 3 and in the non-classical limit case
(μ,μ_c) = (1,0), we have:
C_1,0 = { F ∈ GL^+(3) | s_mp(F) ≤ 0 } ,
and NC_1,0 = { F ∈ GL^+(3) | s_mp(F) ≥ 0 } .
Since the maximal mean planar strain s_mp(F) measures a
strain, this indicates a particular (possibly new) type of
tension-compression asymmetry.
Towards a geometric interpretation of the energy-minimizing Cosserat
rotations rpolar^±_1,0(F) in the non-classical limit case
(μ,μ_c) = (1,0), we reconsider the spectral decomposition of
U = QDQ^T from the principal axis transformation in sec:intro.
Let us denote the columns of Q ∈ SO(3) by q_i ∈ S^2, i = 1,2,3.
Then q_1 and q_2 are orthonormal eigenvectors of U which correspond
to the largest two singular values ν_1 and ν_2 of F ∈ GL^+(3).
More generally, we introduce the following
[Plane of maximal stretch]
The plane of maximal stretch is the linear subspace
P^ms(F) := span{ q_1, q_2 } ⊂ ℝ^3
spanned by the two eigenvectors q_1, q_2 of U associated with the two
largest singular values ν_1 > ν_2 > ν_3 > 0
of the deformation gradient F ∈ GL^+(3).
We recall that, due to the parameter
reduction lem:parameter_reduction, it is always possible to
recover the optimal rotations
rpolar_μ,μ_c(F) := argmin_{R ∈ SO(3)} W_μ,μ_c(R ;F)
for a general choice of non-classical parameters μ > μ_c ≥ 0 from the
non-classical limit case (μ,μ_c) = (1,0). However, we defer
the explicit procedure for a bit, since it is quite instructive
to interpret this distinguished non-classical limit case first.
Figure: Action of rpolar^±_1,0(F) in
axes of principal stretch for a stretch ellipsoid with half-axes
(ν_1,ν_2,ν_3) = (4,2,1/2). The plane of maximal stretch
P^ms(F) is depicted in blue. The cylinder along
q_3 ⊥ P^ms(F) illustrates that the axis
of rotation is the eigenvector q_3 of U associated with the
smallest singular value ν_3 = 1/2 of F. The thin cylinder
[blue] bisecting the opening represents the relative rotation angle
β̂ = 0 and corresponds to polar(F). The outer two
cylinders [red] correspond to the two non-classical minimizers
rpolar_1,0^±(F). The enclosed angles
β̂_1,0^± = ± arccos(2/(ν_1 + ν_2)) are
the optimal relative rotation angles. This reveals the major
symmetry of the non-classical minimizers.
For s_mp(F) ≤ 0 the maximal mean planar strain is non-expansive.
By definition, we then have F ∈ C_1,0 in the
classical domain, for which the energy-minimizing relative rotation
is given by R̂_1,0(F) = 𝕀 and there is no deviation
from the polar factor. In short, rpolar_1,0^±(F) = polar(F).
Let us now turn to the more interesting non-classical case F ∈ NC_1,0.
If F ∈ NC_1,0, then by definition s_mp(F) > 0
and the maximal mean planar strain is expansive. The deviation of
the non-classical energy-minimizing rotations rpolar^±_1,0(F)
from the polar factor is measured by a rotation
in the plane of maximal stretch P^ms(F) given
by polar(F)^T rpolar^±_1,0(F) = Q(F) R̂_1,0^∓(F) Q(F)^T.
The rotation axis is the eigenvector q_3 associated with the smallest
singular value ν_3 > 0 of F and the relative rotation angle is
given by β̂_1,0^∓(F) = ∓ arccos(1/u_mp(F)).
The rotation angles increase monotonically towards the asymptotic limits
lim_{u_mp(F) → ∞} β̂_1,0^±(F)
= ± π/2 .
In axis-angle representation, we obtain
R̂_1,0^±(F) ≡ [± arccos(1/u_mp(F)), (0, 0, 1)^T] , and
polar(F)^T rpolar^±_1,0(F) ≡ [∓ arccos(1/u_mp(F)), q_3 ] .
For the non-classical limit case (μ,μ_c) = (1,0) we have
the following formula for the energy-minimizing Cosserat rotations:
rpolar^±_1,0(F) =
polar(F) , if F ∈ C_1,0 ,
polar(F) Q(F) R̂_1,0^∓(F) Q(F)^T , if F ∈ NC_1,0 .
For general values of the weights in the non-classical range
μ > μ_c ≥ 0, we obtain
rpolar^±_μ,μ_c(F) := rpolar^±_1,0(F_μ,μ_c) ,
where F_μ,μ_c := λ^-1_μ,μ_c F is obtained
by rescaling the deformation gradient with the inverse of the induced scaling
parameter λ_μ,μ_c := μ/(μ - μ_c) > 0.
Note that the previous definition is relative to a fixed choice
of the orthogonal factor Q(F) ∈ SO(3) in the spectral
decomposition of U = QDQ^T. Further, right from their variational
characterization, one easily deduces that the energy-minimizing
rotations satisfy
rpolar^±_μ,μ_c(Q F) = Q rpolar^±_μ,μ_c(F),
for any Q ∈ SO(3), i.e., they are objective
functions; cf. rem:polar_vs_rpolar.
The domains of the piecewise definition of rpolar_1,0^±(F)
in cor:rpolar_formula indicate a certain tension-compression
asymmetry in the material model characterized by the Cosserat
shear-stretch energy W_1,0(R ;F). We can also make a second
important observation. To this end, consider
a smooth curve F(t): (-ϵ,ϵ) → GL^+(3). If the
eigenvector q_3(t) ∈ S^2 associated with the smallest singular
value ν_3(t) changes its orientation along this curve, then
the rotation axis of rpolar_1,0^±(F) flips as well. Effectively,
the sign of the relative rotation angle β̂_1,0^±(F) is
negated, which may lead to jumps. This can happen, e.g., if F(t)
passes through a deformation gradient with a non-simple singular
value, but it may also depend on details of the specific algorithm
used for the computation of the eigenbasis.
For the classical range μ_c ≥ μ > 0, the
polar factor and the relaxed polar factor(s) coincide and trivially
share all properties. This is no longer true for the non-classical parameter
range μ > μ_c ≥ 0, and we compare the properties for that range
in our next remark. More precisely, we present a detailed comparison
with the well-known features of the polar factor which are of
fundamental importance in the context of mechanics.
Let n ≥ 2 and F ∈ GL^+(n).
The polar factor polar(F) ∈ SO(n) obtained from the polar decomposition
F = polar(F) U is always unique and satisfies:
(Objectivity) polar( Q · F ) = Q · polar(F) (∀ Q ∈ SO(n)) ,
(Isotropy) polar( F · Q ) = polar(F) · Q (∀ Q ∈ SO(n)) ,
(Scaling invariance) polar( λ · F ) = polar(F) (∀ λ > 0) ,
(Inversion symmetry) polar(F^-1) = polar(F)^-1 .
The relaxed polar factor(s) rpolar_μ,μ_c(F) ⊂ SO(n) is
in general multi-valued and, due to its variational
characterization, satisfies:
(Objectivity) rpolar_μ,μ_c( Q · F ) = Q · rpolar_μ,μ_c(F) (∀ Q ∈ SO(n)) ,
(Isotropy) rpolar_μ,μ_c( F · Q ) = rpolar_μ,μ_c(F) · Q (∀ Q ∈ SO(n)) .
For the particular dimensions k = 2,3, our explicit formulae imply
that there exist particular instances λ^* > 0
and F^* ∈ GL^+(k), for which we have
(Broken scaling invariance) rpolar^±_μ,μ_c(λ^* · F^*) ≠ rpolar^±_μ,μ_c(F^*)
, and
(Broken inversion symmetry) rpolar^±_μ,μ_c((F^*)^-1) ≠ rpolar^±_μ,μ_c(F^*)^-1 .
This can be directly inferred from the partitioning of
GL^+(k) = C_μ,μ_c ∪ NC_μ,μ_c
and the respective piecewise definition of the relaxed
polar factor(s), see cor:rpolar_formula.
We interpret these broken symmetries as a (generalized) tension-compression
asymmetry.
§.§ The reduced Cosserat shear-stretch energy
We now introduce the notion of a reduced energy as the energy level
realized by the energy-minimizing rotations rpolar_μ,μ_c(F).
[Reduced Cosserat shear-stretch energy]
The reduced Cosserat shear-stretch energy is defined as
W_μ,μ_c^red: GL^+(n) → ℝ ,
W_μ,μ_c^red(F) := min_{R ∈ SO(n)} W_μ,μ_c(R ;F) .
Besides the previous definition, we also have the following equivalent
means for the explicit computation of the reduced energy
W_μ,μ_c^red(F) = W_μ,μ_c(rpolar^±_μ,μ_c(F) ;F) , and
W_μ,μ_c^red(F)
=
W_μ,μ_c^red(D)
:= min_{R̂ ∈ SO(n)} W_μ,μ_c(R̂ ; D)
=
W_μ,μ_c(R̂_μ,μ_c^± ;D) .
Let F ∈ GL^+(3) and ν_1 > ν_2 > ν_3 > 0 the
ordered singular values of F. Then the reduced Cosserat shear-stretch
energy W_1,0^red(F) admits the following piecewise representation
W_1,0^red(F) = (ν_1 - 1)^2 + (ν_2 - 1)^2 + (ν_3 - 1)^2 = ‖U - 𝕀‖^2
, if ν_1 + ν_2 ≤ 2, i.e., F ∈ C_1,0 ,
(1/2)(ν_1 - ν_2)^2 + (ν_3 - 1)^2
, if ν_1 + ν_2 ≥ 2, i.e., F ∈ NC_1,0 .
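For instance, for the stretch ellipsoid (ν_1,ν_2,ν_3) = (4,2,1/2) shown in the figure above, the non-classical branch gives W_1,0^red(F) = (1/2)(4 - 2)^2 + (1/2 - 1)^2 = 2.25, whereas the polar factor realizes the strictly larger classical value ‖U - 𝕀‖^2 = 3^2 + 1^2 + (1/2)^2 = 10.25.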
Our next step is to reveal the form of the reduced energy for the entire
non-classical parameter range μ > μ_c ≥ 0 which involves the
parameter reduction lemma, but we have to be a bit careful.
The parameter reduction in lem:parameter_reduction is
the key step in the computation of the minimizers for general
non-classical material parameters μ > μ_c ≥ 0.
It might be tempting, but we have to stress that the general form of
the reduced energy cannot be obtained by rescaling the singular values
ν_i ↦λ_μ,μ_c^-1ν_i in the singular value
representation of W^ red_1,0.
[W_μ,μ_c^red as a function of the singular values]
Let F ∈ GL^+(3) with ordered singular values ν_1 > ν_2 > ν_3 > 0
and let μ > μ_c ≥ 0, i.e., a non-classical
parameter set. Then the reduced Cosserat shear-stretch energy
W_μ,μ_c^red: GL^+(3) → ℝ
admits the following explicit representation
W_μ,μ_c^red(F) = μ ((ν_1 - 1)^2 + (ν_2 - 1)^2 + (ν_3 - 1)^2) = μ ‖U - 𝕀‖^2
, F ∈ C_μ,μ_c ,
(μ/2)(ν_1 - ν_2)^2
+ μ (ν_3 - 1)^2
+ (μ/2)(ρ_μ,μ_c - 2)^2
+ (μ_c/2)((ν_1 + ν_2)^2 - ρ_μ,μ_c^2)
, F ∈ NC_μ,μ_c .
Let us consider the contribution of the skew-term to
W_μ,μ_c^red
given by
(μ_c/2)((ν_1 + ν_2)^2 - ρ_μ,μ_c^2)
as a penalty term for F ∈ GL^+(3) arising for material parameters
in the non-classical parameter range μ > μ_c ≥ 0. This leads to
a simple but interesting observation for strictly positive μ_c > 0.
On the non-classical domain, the minimizers F ∈ GL^+(3) of the penalty term satisfy
the bifurcation criterion
ν_1 + ν_2 = ρ_μ,μ_c
for rpolar^±_μ,μ_c(F). In this case
R̂_μ,μ_c^± = 𝕀 which implies that
R̂_μ,μ_c^± D - 𝕀 ∈ Sym(3), i.e.,
it is symmetric. Hence, the skew-part vanishes entirely, which
minimizes the penalty. In numerical applications, a rotation
field R approximating rpolar^±(F) can be expected to
be unstable in the vicinity of the branching point
ν_1 + ν_2 ≈ ρ_μ,μ_c.
Hence, a penalty which explicitly rewards an approximation to the
bifurcation point seems to be a delicate property. In strong contrast,
for the case when the Cosserat couple modulus is zero, i.e.,
μ_c = 0, the penalty term vanishes entirely. This hints at a
possibly more favorable qualitative behavior of the model in that
case; cf. <cit.>.
We recall that the tangent bundle T SO(n) is isomorphic to the product
SO(n) × so(n) as a vector bundle. This is commonly referred
to as the left trivialization, see, e.g., <cit.>.
With this we can minimize over the tangent bundle in the following
Let F ∈ ℝ^{n×n}. Then
inf_{R ∈ SO(n), A ∈ so(n)} ‖R^TF - 𝕀 - A‖^2 = min_{R ∈ SO(n)} ‖sym(R^TF - 𝕀)‖^2 := min_{R ∈ SO(n)} W_1,0(R ;F) .
Indeed, for fixed R, the infimum over A ∈ so(n) is attained at A = skew(R^TF - 𝕀), which removes the skew-symmetric part entirely.
In the non-classical limit case (μ,μ_c) = (1,0), the preceding
lemma yields a geometric characterization of the reduced
Cosserat shear-stretch energy as a distance which we find remarkable.
Let n ≥ 2 and consider F ∈ GL^+(n) with singular values
ν_1 ≥ ν_2 ≥ … ≥ ν_n > 0, i.e., not
necessarily distinct. Then the reduced Cosserat shear-stretch energy
W_1,0^red: GL^+(n) → ℝ
admits the following characterization as a distance
W_1,0^red(F) = dist^2_euclid(F, SO(n)·(𝕀 + so(n))) .
Here, dist_euclid denotes the Euclidean distance function.
§.§ Alternative criteria for the existence of non-classical solutions
For μ > μ_c > 0, i.e., for strictly positive μ_c > 0,
the singular radius satisfies ρ_μ,μ_c = 2μ/(μ - μ_c) > 2.
We now define a quite similar constant, namely
ζ_μ,μ_c := ρ_μ,μ_c - ρ_1,0
= 2μ_c/(μ - μ_c) > 0 .
Furthermore, we define the ϵ-neighborhood of a set
𝒳 ⊆ ℝ^{n×n} relative to the Euclidean
distance function as
N_ϵ(𝒳) := { Y ∈ ℝ^{n×n} | dist_euclid(Y, 𝒳) < ϵ } .
Let μ > μ_c > 0, F ∈ GL^+(3) and
ζ_μ,μ_c := 2μ_c/(μ - μ_c) > 0.
Then we have the following inclusion
N_{ζ_μ,μ_c/√2}(SO(3)) ⊂ C_μ,μ_c .
In other words, for all F ∈ GL^+(3) satisfying
dist^2_euclid(F, SO(3)) = ‖U - 𝕀‖^2 < (1/2)ζ_μ,μ_c^2,
the polar factor is the unique minimizer of
W_μ,μ_c(R ;F).
Let F ∈ SL(3), i.e., det F = ν_1 ν_2 ν_3 = 1,
where ν_1 ≥ ν_2 ≥ ν_3 > 0 are the ordered
singular values of F, not necessarily distinct. Then
SL(3) ⊂ NC_1,0 ,
i.e., F induces a non-classical minimizer. Equivalently,
det F = 1 implies the estimate ν_1 + ν_2 ≥ 2; indeed,
ν_1 ν_2 = 1/ν_3 ≥ 1 and hence
ν_1 + ν_2 ≥ 2√(ν_1 ν_2) ≥ 2 by the arithmetic-geometric mean inequality.
If we make the stronger assumption ν_1 > ν_2 > ν_3 > 0,
we obtain a strict inequality ν_1 + ν_2 > 2. In that case,
F ∈ NC_1,0 ∖ C_1,0 is strictly non-classical.
Let μ > 0,
F ∈ SL(3) and assume
that ν_1 > ν_2 > ν_3 > 0. Then
F ∈ NC_μ,0 ∖ C_μ,0 ,
i.e., the minimizers rpolar_μ,0^±(F) ≠ polar(F) are strictly
non-classical.
§ OPTIMAL ROTATIONS IN GENERAL DIMENSION
The key insight for the solution of the minimization problem
in general dimension n ≥ 2 is a new approach to the
analysis of the critical points. The Euler-Lagrange equations for
W_1,0(R̂ ;F) are equivalent to
(R̂D - 𝕀)^2 ∈ Sym(n) .
This is a symmetric square condition for the relative rotation
R̂, since
(X(R̂))^2 = S ∈ Sym(n),
where
X(R̂) := R̂D - 𝕀 ∈ ℝ^{n×n} .
As it is sufficient to compute the optimal relative rotation
R̂, we simply set R = R̂ for the rest
of this section.
One might suspect that the critical points of W_1,0(R ;D)
are connected to real matrix square roots of real symmetric
matrices. And indeed, the structure of the set of critical
points of W_1,0(R ;D) can be revealed quite elegantly by a specific
characterization of the set of real matrix square roots of real
symmetric matrices. Note that this characterization <cit.>, which is similar in spirit to the standard representation
theorem for orthogonal matrices O(n) as block matrices, seems not
to be known in the literature. Due to this representation, the square
roots of interest can always be orthogonally transformed into a
block-diagonal representation which reduces the minimization problem
from arbitrary dimension n > 2 into decoupled one- and two-dimensional
subproblems. These can then be solved independently.
From this point of view, a non-classical minimizer in n=3,
simultaneously solves a one-dimensional and a two-dimensional
subproblem. The one-dimensional problem determines the rotation axis
of the optimal rotations, while the two-dimensional subproblem
determines the optimal rotation angles.
The degenerate cases of optimal Cosserat rotations arising for
recurring parameter values ν_i, i = 1,2,3, in the diagonal
parameter matrix D ∈ Diag(n) have not been treated previously
in <cit.>, but they are also accessible with the
general approach. Note that this case corresponds to the special
general approach. Note that this case corresponds to the special
case of two or more equal principal stretches ν_i
which is an important highly symmetric corner case in mechanics.
Combining the results of the two preceding sections, we can now
describe the critical values of the Cosserat shear-stretch energy
W_1,0(R ;D) which are attained at the critical points. The main result
of this section is a procedure (algorithm) which traverses the set of
critical points in a way that reduces the energy at every step of the
procedure and finally terminates in the subset of global minimizers.
Technically, we label the critical points by certain partitions
of the index set {1,…,n} containing only subsets I with one
or two elements. In the last section, we have seen that the subsets
I and a choice of sign for R_I uniquely characterize a critical
point R ∈(n).
Let us give an outline of the energy-decreasing traversal strategy
starting from a given labeling partition (i.e., critical point):
* Choose the positive sign det R_I = +1 for each subset
of the partition.
* Disentangle all overlapping blocks for n > 3
(cf. lem:overlap).
* Successively shift all 2×2-blocks to the lowest
possible index, i.e., collect the blocks of size two
as close to the upper left corner of the matrix R
as possible (cf. lem:comparison).
* Introduce as many additional 2× 2-blocks by joining
adjacent blocks of size 1 as the constraint ν_i + ν_j > 2
allows (cf. lem:comparison).
The next theorem connects the value of W_1,0(R ;D) realized by
a critical point with its labeling partition and the choice of
determinants det R_I which characterize it.
[Characterization of critical points and values]
Let ν_1 > ν_2 > … > ν_n > 0 be the entries of D ∈ Diag(n).
Then the critical points R ∈ SO(n) can be classified according
to partitions of the index set {1,…,n} into subsets of size
one or two and choices of signs for the determinant det R_I for
each subset I. The subsets of size two I = {i,j} satisfy
ν_i + ν_j > 2 , if det R_I = +1 , and
ν_i - ν_j > 2 , if det R_I = -1 .
The corresponding critical values are given by
W_1,0(R ;D) = ∑_{I = {i}, det R_I = +1} (ν_i - 1)^2
+ ∑_{I = {i}, det R_I = -1} (ν_i + 1)^2
+ ∑_{I = {i,j}, det R_I = +1} (1/2)(ν_i - ν_j)^2
+ ∑_{I = {i,j}, det R_I = -1} (1/2)(ν_i + ν_j)^2 .
If we allow
ν_1 ≥ν_2 ≥…≥ν_n > 0
for the entries of D, then the D- and R-invariant
subspaces V_i are not necessarily coordinate subspaces.
This produces non-isolated critical points but does not
change the formula for the critical values.
In order to compute the global minimizers R ∈ SO(n) for the
Cosserat shear-stretch energy W_1,0(R ;D), we have to compare
all the critical values which correspond to the different
partitions and choices of the signs of the determinants det R_I
in the statement of theo:critical_values. We may, however,
assume that det R_I = +1 for all subsets I,
see <cit.> for further details.
The following lemma shows that blocks of size two are always
favored whenever they exist.
If ν_i + ν_j > 2 then the difference between
the critical values of W_1,0(R ;D) corresponding to the
choice of a size two subset I = {i,j} as compared
to the choice of two size one subsets {i}, {j}
is given by
-1/2(ν_i+ν_j-2)^2.
Let us rewrite W_1,0(R ;D) in a slightly different form in order to
distill the contributions of the size two blocks in the partition.
For the choices det R_I = +1 there holds
W_1,0(R ;D) = ‖sym(RD - 𝕀)‖^2 = ∑_{i=1}^n (ν_i - 1)^2
- (1/2) ∑_{I = {i,j}} (ν_i + ν_j - 2)^2 .
To study the global minimizers for the Cosserat shear-stretch energy
in arbitrary dimension n ≥ 4, we need to investigate the relative
location of the size two subsets of the partition.
Let R ∈ SO(n) be a global minimizer for W_1,0(R ;D).
Then R cannot contain overlapping size two subsets, i.e.,
I = {i_1,i_4}, J = {i_2,i_3}, with i_1 < i_2 < i_3 < i_4.
We are now ready to state the result in the general n-dimensional case.
Let ν_1 > ν_2 > … > ν_n > 0 be the entries of D.
Let us fix the maximal k for which ν_2k-1 + ν_2k > 2.
Any global minimizer R ∈ SO(n) corresponds to the
partition of the form
{1,2} ⊔ {3,4} ⊔ … ⊔ {2k-1, 2k} ⊔ {2k+1} ⊔ … ⊔ {n}
and the global minimum of W_1,0(R ;D) is given by
W_1,0^red(D) := min_{R ∈ SO(n)} ‖sym(RD - 𝕀)‖^2
= ∑_{i=1}^n (ν_i - 1)^2 - (1/2) ∑_{i=1}^k (ν_2i-1 + ν_2i - 2)^2
= (1/2) ∑_{i=1}^k (ν_2i-1 - ν_2i)^2 + ∑_{i=2k+1}^n (ν_i - 1)^2 .
The number of global minimizers in the above theorem is
2^k, where k is the number of blocks of size two in
the preceding characterization of a global minimizer as
a block diagonal matrix. All global minimizers are block
diagonal, similar to the previously discussed n = 3 case.
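The partition formula lends itself to a direct implementation. The following Python sketch (ours, with ad hoc function names) computes W_1,0^red(D) and checks it against the energy realized by the corresponding block-diagonal minimizer:

import numpy as np

def reduced_energy(nu):
    # W^red_{1,0}(D) for D = diag(nu) via the partition formula of the theorem
    nu = np.sort(np.asarray(nu, dtype=float))[::-1]
    k = 0
    while 2 * k + 1 < len(nu) and nu[2 * k] + nu[2 * k + 1] > 2.0:
        k += 1
    head = 0.5 * np.sum((nu[0:2 * k:2] - nu[1:2 * k:2]) ** 2)
    tail = np.sum((nu[2 * k:] - 1.0) ** 2)
    return head + tail

def block_minimizer(nu):
    # a global minimizer: 2x2 rotation blocks on the first k pairs, identity tail
    nu = np.sort(np.asarray(nu, dtype=float))[::-1]
    R = np.eye(len(nu))
    i = 0
    while i + 1 < len(nu) and nu[i] + nu[i + 1] > 2.0:
        c = 2.0 / (nu[i] + nu[i + 1])
        s = np.sqrt(1.0 - c * c)
        R[i:i + 2, i:i + 2] = [[c, -s], [s, c]]
        i += 2
    return R

nu = [4.0, 2.0, 1.5, 0.9, 0.5]
R, D = block_minimizer(nu), np.diag(sorted(nu, reverse=True))
X = R @ D - np.eye(len(nu))
S = 0.5 * (X + X.T)
print(reduced_energy(nu), np.sum(S * S))   # both evaluate to 2.43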
| In 1940 Giuseppe Grioli proved a variational characterization of
the orthogonal factor of the polar decomposition <cit.>.
In order to state this result, let polar(F) ∈ SO(n) be the
unique rotation characterized as the orthogonal factor of the
right polar decomposition of
F = polar(F) U(F), F ∈ GL^+(n) ,
where U(F) = polar(F)^T F = √(F^TF) ∈ PSym(n) denotes
the symmetric positive definite factor (which, in mechanics, is
referred to as the Biot stretch tensor).
Grioli's original result[An exposition of the original
contribution of Grioli in modernized notation has been recently
made available in <cit.>.] is the important special
case of space dimension n = 3 of the following
[Grioli's theorem <cit.>]
Let n ≥ 2 and ‖X‖^2 := tr(X^TX) the squared Frobenius
norm. Then for any F ∈ GL^+(n), it holds
argmin_{R ∈ SO(n)} ‖R^TF - 𝕀‖^2 = {polar(F)},
and thus min_{R ∈ SO(n)} ‖R^TF - 𝕀‖^2 = ‖U - 𝕀‖^2 .
The polar factor polar(F) ∈ SO(n) is the unique energy-minimizing
rotation for any given F ∈ GL^+(n) in any dimension n ≥ 2,
see, e.g., <cit.>. This optimality property has an interesting
geometric interpretation following from the orthogonal invariance of the
Frobenius norm
‖R^TF - 𝕀‖^2 = ‖F - R‖^2 = dist^2_euclid(F, R)
which reveals a connection to the problem class of matrix distance
(or nearness) problems. In elasticity, the distance of a deformation
gradient (Jacobian matrix) F := ∇φ ∈ GL^+(n) to
a rotation in SO(n) is of interest as a measure for the energy
induced by local changes in length.
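Numerically, the polar factor is conveniently obtained from the singular value decomposition, and Grioli's optimality is easy to probe by sampling; the following Python sketch (our illustration, not part of the original text) does both:

import numpy as np
rng = np.random.default_rng(1)

def polar_factor(F):
    # polar(F) from the SVD F = W diag(s) V^T: polar(F) = W V^T
    W, s, Vt = np.linalg.svd(F)
    return W @ Vt

F = rng.normal(size=(3, 3))
if np.linalg.det(F) < 0:                   # ensure F lies in GL^+(3)
    F[:, 0] *= -1.0

P = polar_factor(F)
d_polar = np.sum((P.T @ F - np.eye(3))**2)  # ||polar(F)^T F - I||^2 = ||U - I||^2

def random_rotation():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q if np.linalg.det(Q) > 0 else -Q

best_sampled = min(np.sum((random_rotation().T @ F - np.eye(3))**2)
                   for _ in range(50000))
print(d_polar <= best_sampled)             # True: no sampled rotation comes closer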
In this contribution, we consider a weighted analog of Grioli's
theorem motivated by Cosserat theory and present the energy-minimizing
(optimal) rotations characterized by
[Weighted optimality]
Let n ≥ 2. Compute the set of optimal rotations
argmin_{R ∈ SO(n)} W_μ,μ_c(R ;F) := argmin_{R ∈ SO(n)} { μ ‖sym(R^TF - 𝕀)‖^2
+ μ_c ‖skew(R^TF - 𝕀)‖^2 }
for given F ∈ GL^+(n) and weights μ > 0, μ_c ≥ 0.
Here, sym(X) := (1/2)(X + X^T) and
skew(X) := (1/2)(X - X^T) denote the symmetric
and skew-symmetric parts of X ∈ ℝ^{n×n}, respectively.
Note that Grioli's theorem stated above is recovered for the
case of equal weights μ = μ_c > 0. In order to express the
connection to the variational characterization of the polar
factor polar(F), we have introduced the following notation
[Relaxed polar factor(s)]
Let μ > 0 and μ_c ≥ 0. We denote the set-valued mapping
that assigns to a given parameter F ∈ GL^+(n) its associated
set of energy-minimizing rotations by
rpolar_μ,μ_c(F) := argmin_{R ∈ SO(n)} W_μ,μ_c(R ;F) .
In the weighted case, the polar factor polar(F) is always critical
but not always optimal. In general, the global minimizers
rpolar_μ,μ_c(F) depend on the parameters μ > 0 and
μ_c ≥ 0 and can substantially deviate from polar(F).
The optimal rotations in the weighted case rpolar_μ,μ_c(F)
have been worked out in two and three space dimensions by the present
authors in a series of papers <cit.>;
cf. also <cit.> and <cit.>
for earlier related work. A visualization of the mechanism of optimal
Cosserat rotations in dimension n = 3 for an idealized nano-indentation
was given in <cit.> and shows that the optimal rotations
can produce interesting non-classical patterns. A final proof of
optimality in any dimension n ≥ 2 has been obtained by Borisov
and the authors in <cit.> and is based on a new
characterization of real square roots of real symmetric matrices.
This contribution presents an overview of these results omitting
the proofs for which we refer to the original contributions.
Our study of the energy-minimizing rotations rpolar_μ,μ_c(F)
is motivated by a particular Cosserat (micropolar)
theory <cit.>, i.e., a continuum theory
with additional degrees of freedom R ∈(n). In this
context, the objective function W_μ,μ_c(R ;F) subject to
minimization in intro:prob:weighted determines the
shear-stretch contribution to the strain energy in any
nonlinear, quadratic, and isotropic Cosserat theory, see also <cit.>.
The arguments to the shear-stretch energy W_μ,μ_c(R ;F) are
the deformation gradient field
F := ∇φ: Ω → GL^+(n)
and the microrotation field R: Ω → SO(n) evaluated at a
given point of the domain Ω. A full Cosserat continuum model
furthermore contains an additional curvature energy
term <cit.> and a volumetric
energy term, see, e.g., <cit.> or <cit.>.
It is always possible to express the local energy contribution in a
Cosserat model as W = W(U̅), where U̅ := R^TF
is the first Cosserat deformation tensor. This reduction follows
from objectivity requirements and has already been observed by the
Cosserat brothers <cit.>,
see also <cit.> and <cit.>.
Since U̅ is in general non-symmetric, the most general isotropic
and quadratic local energy contribution which is zero at the reference
state is given by
μ ‖sym(U̅ - 𝕀)‖^2
+ μ_c ‖skew(U̅ - 𝕀)‖^2 [“shear-stretch energy”] + (λ/2) (tr(U̅ - 𝕀))^2 [“volumetric energy”] .
The last term will be discarded in the following, since it couples
the rotational and volumetric response, a feature not present in the
well-known isotropic linear Cosserat models.[The Cosserat brothers never
proposed any specific expression for the local energy W = W(U̅).
The chosen quadratic ansatz for W = W(U̅) is motivated by a direct
extension of the quadratic energy in the linear theory of Cosserat
models, see, e.g. <cit.>. We always consider a true volumetric-isochoric
split in our applications.]
From the perspective of Cosserat theory, the optimal rotations
rpolar_μ,μ_c(F) yield insight into the important limit case
of vanishing characteristic length L_ c = 0.[This
identification requires that the volume term decouples from
the microrotation R, e.g.,
W^vol(det U̅) := (λ/4)[(det U̅ - 1)^2 + (1/det U̅ - 1)^2] ,
which depends only on det U̅ = det(R^TF) = det F.
This requirement is quite natural and is satisfied by all linear
Cosserat models <cit.>.] In this context, we can interpret the solutions
of (<ref>) as an energetically optimal mechanical
response of the field R ∈ SO(n) of Cosserat microrotations to
a given deformation gradient F := ∇φ ∈ GL^+(n).
The correct choice of the so-called Cosserat couple modulus
μ_c ≥ 0 for specific materials and boundary value problems is
an interesting open question. There are indications that a
non-vanishing μ_c > 0 has never been experimentally observed
and that such a choice is at least debatable <cit.>.
The limit case μ_c = 0 is hence of particular interest.
We want to stress that although the term W_μ,μ_c(R ;F) subject to
minimization in (<ref>) is quadratic in the non-symmetric
microstrain tensor U̅ - 𝕀 = R^TF - 𝕀, see,
e.g., <cit.>, the associated minimization
problem with respect to R is nonlinear due to the multiplicative
coupling R^TF and the geometry of SO(n).
The energy W_μ,μ_c(R ;F) is a polynomial in the matrix entries,
hence W_μ,μ_c(· ;F) ∈ C^∞(SO(n), ℝ). Further, since the Lie
group SO(n) is compact and ∂SO(n) = ∅, the
global extrema of W_μ,μ_c are attained at interior points.
The previous remark hints at a possible solution strategy for
intro:prob:weighted. If all the critical points
R_crit(F) ∈ SO(n) of W_μ,μ_c(R ;F) can be
computed[The smooth manifold SO(n) has empty boundary.
This implies that a critical point for given F ∈ GL^+(n)
satisfies (d/dt) W_μ,μ_c(R(t) ; F) |_{t=0} = 0 for
every smooth curve of rotations R(t): (-ϵ, ϵ) → SO(n) passing through R(0) = R_crit.],
then a direct comparison of the associated critical energy levels
W_μ,μ_c(R_crit ;F) allows to determine the
critical branches which are energy-minimizing.
Clearly, any minimizing critical branch realizes the
reduced Cosserat shear-stretch energy defined as
W_μ,μ_c^red: GL^+(n) → ℝ ,
W_μ,μ_c^red(F) := min_{R ∈ SO(n)} W_μ,μ_c(R ;F) .
At first, a solution of intro:prob:weighted in three space
dimensions was out of reach (let alone the n-dimensional problem).
Therefore, we first restrict our attention to the planar case, where
we can base our computations on the standard parametrisation
R: [-π, π] → SO(2) ⊂ ℝ^{2×2}, R(α) := [ cos α -sin α; sin α cos α ]
by a rotation angle.[Note that π and -π are mapped
to the same rotation. In this text, we implicitly choose π over
-π for the rotation angle whenever uniqueness is an issue.]
It turns out that there are at most two optimal planar rotations
rpolar_μ,μ_c^±(F) in the non-classical parameter
range μ > μ_c ≥ 0 and we distinguish these by a sign.
The corresponding optimal rotation angles of
rpolar^±_μ,μ_c(F) are denoted by
α^±_μ,μ_c(F). The non-classical minimizers coincide
with the polar factor polar(F) in the compressive regime
of F ∈ GL^+(2), but deviate otherwise.
The computation of the global minimizers in dependence of F
is not completely obvious even for the planar case.
Hence, the following simplifications of the minimization problem
are helpful.
First, it is useful to introduce
[Parameter rescaling]
Let μ > μ_c ≥ 0. We define the singular radius by
ρ_μ,μ_c := 2μ/(μ - μ_c) > 0 ,
and further define λ_μ,μ_c := ρ_μ,μ_c/ρ_1,0 = μ/(μ - μ_c)
as the induced scaling parameter. Note that ρ_1,0 = 2 and
λ_1,0 = 1. Further, we define the parameter rescaling
given by
F_μ,μ_c := λ^-1_μ,μ_c F = ((μ - μ_c)/μ) F ∈ GL^+(n) .
For μ > 0 and μ_c = 0, we obtain F_μ,0 = F, i.e.,
the rescaling is only effective for μ_c > 0.
Regarding the material parameters, we proved
in <cit.> that for any dimension
n ≥ 2, it is in fact sufficient to restrict
our attention to two parameter pairs:
(μ,μ_c) = (1,1), the classical case, and
(μ,μ_c) = (1,0), the non-classical case.
Hence, somewhat surprisingly, the solutions for arbitrary
μ > 0 and μ_c ≥ 0 can be recovered from these
two limit cases. This is the content of
Let n ≥ 2 and let F ∈ GL^+(n), then
μ_c ≥ μ > 0 ⟹ W_μ,μ_c(R ;F)
∼
W_1,1(R ;F) , and
μ > μ_c ≥ 0 ⟹ W_μ,μ_c(R ;F)
∼ W_1,0(R ;F_μ,μ_c) .
Here, the equivalence notation means that the energies give rise to
the same global minimizers, which we can also state as
rpolar_μ,μ_c(F) =
rpolar_1,1(F) = {polar(F)}, if μ_c ≥ μ > 0 ,
rpolar_1,0(F_μ,μ_c), if μ > μ_c ≥ 0 .
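The parameter reduction can be verified numerically in the planar case; the following Python sketch (ours, with an ad hoc test gradient) checks that the weighted energy and the rescaled limit-case energy share their grid minimizer:

import numpy as np

def R2(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def W(R, F, mu, mu_c):
    X = R.T @ F - np.eye(2)
    return mu * np.sum((0.5 * (X + X.T))**2) + mu_c * np.sum((0.5 * (X - X.T))**2)

mu, mu_c = 2.0, 0.5                        # non-classical range: mu > mu_c >= 0
lam = mu / (mu - mu_c)                     # induced scaling parameter
F = np.array([[3.0, 1.0], [0.0, 2.0]])     # hypothetical test gradient
alphas = np.linspace(-np.pi, np.pi, 40001)

a_weighted = alphas[np.argmin([W(R2(a), F, mu, mu_c) for a in alphas])]
a_limit = alphas[np.argmin([W(R2(a), F / lam, 1.0, 0.0) for a in alphas])]
print(a_weighted, a_limit)                 # the two grid minimizers coincide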
Another important observation can be made by introducing the relative rotation
R̂ := Q^T R^T polar(F) Q ,
which acts relative to the polar factor polar(F) in the
coordinate system given by the columns of Q, which span a positively
oriented frame of principal directions of U. This allows us to
transform
Q^T(sym(R^TF) - 𝕀)Q = Q^T(sym(R^T polar(F) QDQ^T) - 𝕀)Q
= sym(Q^TR^T polar(F) Q D) - 𝕀 = sym(R̂D - 𝕀) .
For a fixed choice of Q ∈ SO(n), the inverse transformation allows
us to reconstruct the absolute rotation uniquely:
R = (Q R̂ Q^T polar(F)^T)^T = polar(F) Q R̂^T Q^T .
Hence, in the non-classical parameter range represented by the
limit case (μ,μ_c) = (1,0), the minimization problem can
be reduced to the following problem for the optimal relative
rotations.
Let n ≥ 2. Compute the set of energy-minimizing relative
rotations
rpolar_1,0(D) := argmin_{R̂ ∈ SO(n)} W_1,0(R̂ ;D) = argmin_{R̂ ∈ SO(n)} ‖sym(R̂D - 𝕀)‖^2 ⊆ SO(n) .
The decisive point in the solution of prob:opt
in dimensions n ≥ 3 is the characterization of the set
of relative rotations R̂ ∈ SO(n) satisfying the
particular symmetric square condition
(R̂D - 𝕀)^2 ∈ Sym(n)
which is equivalent to the Euler-Lagrange equations.
After having set the stage of the optimization problem on SO(n),
this overview is now structured as follows: in Section 2,
we consider in some detail the planar problem, which allows for a
complete solution by elementary techniques and which already presents
the essential geometry that unfolds in dimensions
n ≥ 3. In Section 3, we provide the complete solution for
the three-dimensional case as well as the corresponding reduced
energy expression in terms of the singular values of F. We also
provide a geometrical interpretation that allows us to view the
minimization problem for μ_c = 0 as a distance problem.
Furthermore, we discuss for which deformation
gradients we can only have the classical response polar(F).
Finally, in Section 4, we present our results for the general
n-dimensional case. |
http://arxiv.org/abs/1701.07632v1 | 20170126094916 | Redshift, metallicity and size of two extended dwarf Irregular galaxies. A link between dwarf Irregulars and Ultra Diffuse Galaxies? | [
"M. Bellazzini",
"V. Belokurov",
"L. Magrini",
"F. Fraternali",
"V. Testa",
"G. Beccari",
"A. Marchetti",
"R. Carini"
] | astro-ph.GA | [
"astro-ph.GA"
] |
We present the results of the spectroscopic and photometric follow-up of two field galaxies that were selected as possible stellar counterparts of local high velocity clouds. Our analysis shows that the
two systems are distant (D>20 Mpc) dwarf irregular galaxies unrelated to the local
H I clouds. However, the newly derived distance and structural parameters reveal
that the two galaxies have luminosities and effective radii very similar to the recently identified ultra diffuse galaxies (UDGs). At odds with classical UDGs, they are remarkably isolated, having no known giant galaxy within ∼ 2.0 Mpc.
Moreover, one of them has a very high gas content compared to galaxies of similar stellar mass, with a H I
to stellar mass ratio
M_ HI/M_⋆∼ 90, typical of almost-dark dwarfs.
Expanding on this finding, we show that extended dwarf irregulars overlap the distribution of UDGs in the M_V vs. log r_ e plane and that the sequence including dwarf spheroidals, dwarf irregulars and UDGs appears as continuously populated in this plane.
This may suggest an evolutionary link between dwarf irregulars and UDGs.
ISM: Hii regions — galaxies: dwarf — galaxies: star formation
§ INTRODUCTION
The complete census of dwarf galaxies in the Local Group (LG, and in the Local Volume) is a key observational enterprise in these decades, closely tied to the solution of the long-standing missing satellites problem <cit.>.
The recent discovery of the nearby (D=1.7 Mpc) faint (M_V=-9.4) star-forming dwarf galaxy Leo P <cit.> has opened a new road for the identification of local dwarfs.
Leo P was found as the stellar counterpart of a very compact high velocity cloud (CHVC) of neutral Hydrogen identified in the ALFALFA HI survey <cit.>, thus suggesting that some of the missing dwarfs in the LG and its surroundings could be hidden within similar CHVCs.
These dwarfs may be the gas-rich star-forming counterparts of the quiescent ultra faint dwarfs (UFD) that have been found in relatively large numbers as stellar overdensities in panoramic imaging surveys <cit.>.
Indeed there are models within the Λ-cold dark matter (CDM) scenario predicting that a large number of small DM haloes <cit.> should have had their star formation inhibited or quenched by global or local feedback effects (e.g., re-ionization, supernova feedback, ram-pressure stripping), thus leading to a population of gas-rich dwarfs with low or null stellar content <cit.>.
The only possibility to confirm these systems as real galaxies and to gauge their distances is to find a stellar population associated with the H I clouds. Indeed, several teams followed up the CHVCs proposed by the ALFALFA <cit.> and GALFA-HI <cit.> surveys as candidate local (D≤ 3.0 Mpc) mini-haloes.
<cit.>, within the SECCO survey[http://www.bo.astro.it/secco/], obtained deep and homogeneous imaging of 25 of the ALFALFA candidates from A13, finding only one confirmed stellar counterpart, the very faint star-forming system SECCO 1, likely located in the Virgo cluster <cit.>. <cit.>, searching several public image archives, were able to confirm SECCO 1 and discovered four additional counterparts in the GALFA-HI sample, all of them with D3.0 Mpc <cit.>.
<cit.> adopted a different approach, searching for small groupings of blue stars within the SDSS catalogue and identifying ∼ 100 interesting candidates.
The follow-up of 12 of them revealed a population of faint, blue, metal-poor low surface brightness (LSB) dwarfs in the distance range 5 MpcD 120 Mpc, six of them associated with HI clouds listed in <cit.>. J15 defined the newly found systems as blue diffuse dwarf (BDD) galaxies.
Apparently we are beginning to scratch the surface of a population of LSB star-forming dwarfs that went undetected until now, although they are not found in the Local Volume <cit.>.
Within this context, we have selected mini-halo candidates from the GASS HI survey <cit.>.
Unlike previous searches, which only looked at velocities very different from Galactic emission (HVCs with |v_ dev|>90 km s^-1, v_ dev being the deviation velocity with respect to a regularly rotating Galactic disc), we have explored the range of lower velocities, typical of intermediate velocity clouds (IVCs, 30<|v_ dev|<90 km s^-1).
We detected HI sources using the code ^ 3DBAROLO <cit.> and applying selection criteria on their size and velocity width to minimize the contamination from Galactic clouds.
This left us with a sample of about one thousand best candidates, presumably with a very high degree of contamination by Galactic sources, which we searched for stellar counterparts in SDSS <cit.>, ATLAS <cit.> and DES <cit.> images.
The process of visual inspection of available images around the positions of the clouds led to the selection of two promising candidates. These are blue LSB galaxies whose apparent diameters are fully compatible with being located within
∼ 3.0 Mpc from us; moreover they are not completely unresolved, displaying a few blue compact sources resembling HII regions.
Unfortunately, the spatial resolution of GASS is about 16 arcmin and it does not allow an association with certainty, hence spectroscopic follow-up is required.
Here we present the results of this follow-up, ultimately resulting in the rejection of the association of both the candidate stellar counterparts with the local gas clouds, since they are located at distances larger than 40 Mpc. Still, our observations provide the first redshift and metallicity estimates for these galaxies, which are useful for future studies, and reveal their remarkably large size, given their total luminosity. The latter feature lead us to note that the most luminous dwarf Irregular galaxies (dIrr) display structural parameters (sizes, integrated magnitudes, Sérsic indices, and surface brightnesses) overlapping the range inhabited by the newly discovered ultra diffuse galaxies <cit.>, suggesting a possible relation between the two classes of stellar systems.
While these observations are not part of the SECCO survey, they are strongly related and for this reason we adopt the SECCO nomenclature to name the two dwarfs considered here. In particular, following <cit.> we call them SECCO-dI-1 and SECCO-dI-2 (where dI = dwarf Irregular), abbreviated as SdI-1
and SdI-2.
§ OBSERVATIONS AND DATA REDUCTION
All the observations have been obtained under clear sky, during the night of March 3, 2016 with the Large Binocular Telescope (LBT) on Mt Graham (AZ), used in
pseudo-binocular mode, i.e. with different instruments operating simultaneously in the two channels of the telescope.
The main programme was performed using the low resolution spectrograph MODS-1 <cit.>, aligning a 1.2 arcsec-wide long-slit along the major axes of the two targets. With this set-up and a dichroic, MODS-1 provides a blue spectrum covering 3200Å<λ<5650Å, and a red spectrum covering 5650Å<λ<10000Å at a spectral resolution λ/Δλ∼ 1100. Three t_ exp=1200 s exposures per target were acquired.
SDSS images were used to define the slit position and orientation.
The spectra were corrected for bias and flat-field, sky-subtracted, wavelength calibrated, then extracted and combined into flux-calibrated summed spectra using the pipeline developed at the Italian LBT Spectroscopic Reduction Center[http://lbt-spectro.iasf-milano.inaf.it].
Simultaneously, we got deep r_ SDSS and i_ SDSS (hereafter r and i, for brevity) band imaging with LBC-R, the red channel of a pair of twin wide field (≃ 23×23 arcmin) cameras <cit.>. For each target, we got 9× 200 s exposures per filter, dithered to minimise the effect of bad pixels and to cover the inter-chip gaps of LBC-R.
The reduction of the LBC images was performed with the specific pipeline developed at INAF-OAR (Paris et al. in preparation). The individual raw images were first corrected for bias and flat-field, and then background-subtracted. After astrometric calibration, they were combined into single r- and i-band stacked images with the SWarp software <cit.>. In the following we will analyse these stacked and sky-subtracted images. The 5-σ level over the background measured on the images corresponds to an i-band surface brightness of ≃27.8 mag/arcsec^2, in line with the limits typically obtained with LBC images reduced in the same way <cit.>.
The photometric calibration was obtained with hundreds of stars
in common with the Sloan Digital Sky Survey - Data Release 9 <cit.>.
In Fig. <ref> we show postage-stamp images of the two target galaxies from our LBC-R images.
§.§ The targets
The main properties of the target galaxies are summarised in Table <ref>. SdI-1 has a determination of redshift z=0.025988 from H I by <cit.> that places it far beyond the realm of local mini-haloes <cit.>. However, a negative redshift (z=-0.05545) was also reported from optical spectroscopy <cit.>[In the Millennium Galaxy Catalog http://www.hs.uni-hamburg.de/jliske/mgc/]. While extremely implausible, this may suggest an improper association between the H I detection by <cit.> and the stellar counterpart. Given the identification of an IVC along the same line of sight, a new attempt to get a reliable velocity from an optical spectrum was worth doing. <cit.> also reported an integrated magnitude of B=19.2 and an effective radius r_ e=4 (no uncertainty reported) from single-band photometry much shallower than the one we obtained with LBC.
SdI-2 lacks any redshift, size and optical magnitude estimate. There are two GALEX sources projected onto the main body of the galaxy, GALEXMSC J114433.79-005200.0 classified as a UV source, and GALEXASC J114433.60-005203.0 classified as a galaxy.
Our deep images (see Fig. <ref>) reveal two remarkably elongated irregular blue galaxies, with some compact knots superimposed. SdI-1 may be interacting with some smaller companions, located ∼ 15 arcsec westward of its center. The main body of SdI-2 is surrounded by a very low SB asymmetric halo, more extended in the North-East direction. A fluffy nucleus is visible at the very center of SdI-2, which can also be perceived in the light profile (as a slight change of slope in the innermost 2 arcsec, see below). While SdI-2 displays an overall elliptical shape, the apparent morphology of SdI-1 is remarkably irregular, independently of the adopted image cuts.
According to the NASA Extragalactic Datasystem NED[http://ned.ipac.caltech.edu] there are eight galaxies within 1 degree of SdI-1 with recessional velocities in the range 6000 km s^-1≤ V_r≤10000 km s^-1. The most remarkable one is the radio galaxy IC 753 that has V_r=6220 km s^-1 and lies ≃ 1 degree apart in the plane of the sky, corresponding to a projected distance of ≃ 2.0 Mpc at the distance of SdI-1.
Two galaxies (SDSS J115406.35+000158.5 and SDSS J115254.30-001408.5) have velocities within 200 km s^-1 of SdI-1. They are relatively faint (r≥ 17.7) and lie at a projected distance of 0.9 Mpc and 1.6 Mpc, respectively.
There are eighteen galaxies listed in NED within 1 degree of SdI-2 with recessional velocities in the range 1000.0 km s^-1≤ V_r≤4000.0 km s^-1, but only three within ≃ 500 km s^-1 of SdI-2, SDSS J114428.88-012335.7, SDSS J114640.78-011749.2, and SDSS J114325.36-013742.5. All these three are significantly fainter than SdI-2 (r≥ 20.8) and lie more than half a degree apart, corresponding to a projected distance larger than 350 kpc at the distance of SdI-2.
In conclusion, both our target dwarfs do not lie in the vicinity of large galaxies and seem to be remarkably isolated.
§ PHYSICAL PROPERTIES OF SDI-1 AND SDI-2
In Fig. <ref> we show relevant portions of the spectra we obtained from our observations. In SdI-1 this corresponds to a bright and extended H II region ≃ 5.2 arcsec to the south-west of the center of the galaxy. In SdI-2 we got the spectra of two such H II regions (sources A and B, ≃ 5.3 arcsec and ≃ 8.4 arcsec to the south-south-west of the center of the galaxy, respectively) and the absorption spectrum of the central nucleus (source C).
To get a rough estimate of the mean age of the nucleus we compared the observed spectrum of source C with a set of low-resolution synthetic spectra of simple stellar populations from the BASTI repository <cit.>, finding a satisfactory fit with a model having metallicity Z=0.002 and age=200 Myr.
We estimated the redshift by fitting the centroid of identified atomic lines and averaging the various measures. The uncertainty in the derived radial velocities is dominated by the uncertainty in the zero point which, comparing the velocities from the blue and red spectra, is about 50 km s^-1.
The match between our new optical redshift for SdI-1 and the H I estimate by <cit.> implies that the association between the distant cloud and the stellar counterpart is correct.
Both SdI-1 and SdI-2 have velocities clearly incompatible with association with our GASS candidate mini-haloes. Since these were the best candidates for counterparts selected from a sample of ∼ 1000 IVCs, we conclude that it is very unlikely that a local dwarf is associated with these clouds.
We report the radial velocities in the 3K reference frame and then we derive the distance adopting H_0=73 km s^-1 Mpc^-1 (the value adopted by NED). We obtain a distance of 112± 8 Mpc and 40± 1 Mpc for SdI-1 and SdI-2, respectively.
§.§ Metallicity
Using the iraf task splot we measured the fluxes of recombination lines of H (Hα and Hβ) and collisional lines of a few ions ([Oii], [Oiii], [Nii], [Sii], see Table <ref>). All measured line intensities were corrected for extinction by computing the ratio between the observed and theoretical Balmer decrement for the typical conditions of an Hii region
<cit.>. For the metallicity estimates in SdI-2, described below, we adopt the weighted average of the estimates of the two individual H II regions, as their abundances are indistinguishable, within the uncertainties.
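The following Python sketch illustrates a standard Balmer-decrement correction of the kind described above (our illustration: the case-B ratio 2.86 and the extinction coefficients are common textbook values, not taken from this paper):

import numpy as np

# hedged sketch: assumes the case-B theoretical ratio (Halpha/Hbeta)_0 = 2.86 and
# Cardelli-like extinction coefficients k(Halpha) ~ 2.53, k(Hbeta) ~ 3.61
K_HA, K_HB, R0 = 2.53, 3.61, 2.86

def ebv_from_balmer(f_ha, f_hb):
    # color excess from the observed Halpha/Hbeta flux ratio
    return max(0.0, 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / R0))

def deredden(flux, k_lambda, ebv):
    # extinction-corrected line flux
    return flux * 10.0 ** (0.4 * k_lambda * ebv)

ebv = ebv_from_balmer(320.0, 100.0)        # illustrative line fluxes
print(ebv, deredden(100.0, K_HB, ebv))     # E(B-V) ~ 0.11 and corrected Hbeta flux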
Due to the absence of electron-temperature diagnostic lines, the gas-phase oxygen abundance of each source was determined with the following strong-line ratios (also depending on the available lines in each spectrum):
N2=[Nii]/Hα, O3N2=([Oiii]/Hβ)/([Nii]/Hα), O2=([Oii/Hβ), and S2=([Sii](λ6717+λ6730))/Hα. For N2 and O3N2 we adopt the calibration by <cit.>, while for O2 and S2 we adopt the calibration by <cit.>. The final abundance estimates, listed in Table <ref>, are the average of the values obtained from N2 and O3N2 and the average of the values obtained from O2 and S2. For SdI-2 the two estimates are in good agreement, within the uncertainties.
For SdI-1 we were able to obtain only an upper limit from N2 and O3N2. For both galaxies, the derived abundances are within the range covered by dwarf galaxies of similar luminosity <cit.>.
The nitrogen-to-oxygen ratio is about log(N/O)≃ -1.7 in SdI-2, indicating that the star formation efficiency is quite high in this galaxy <cit.>.
§.§ Surface Photometry
In Fig. <ref> we show the i-band surface brightness profiles of our target galaxies, obtained by photometry on elliptical apertures performed with the APT software <cit.>. The axis ratio and position angle were estimated by eye, superposing ellipses to the images of the galaxies. r-band profiles were obtained in the same way and are very similar in shape, but we prefer to analyse i-band profiles because they should be minimally affected by the light of bright H II regions. Indeed slightly larger effective radii are obtained from r-band profiles.
In spite of some noise due to the presence of fore/background sources or compact sources within the galaxies, the overall profiles emerge very clearly, and in both cases they can be satisfactorily fitted by a simple exponential model.
Contamination from nearby but unrelated sources especially affects the outer part of the profile of SdI-1, which flattens beyond R=10 mainly due to the contribution from the possible companion lying to the west of the galaxy (see Sect. <ref>). We limited our fit to the nearly uncontaminated inner regions to avoid overestimating the size of the galaxies.
The effective radii (r_ e) along the major axis, the central surface brightness (μ_ V,0) and the associated uncertainties have been estimated by performing a linear regression on the points within R=10.0 arcsec, for SdI-1, and within R=16.0 arcsec, for SdI-2, with the macro lm of the package R[ www.r-project.org]. This simple approach seems fully adequate in this context, is straightforward, and provides robust estimates of these parameters.
We note that our r_ e value for SdI-1 is in reasonable agreement with the estimate by <cit.> from much shallower images. To minimise the effect of fore/background sources we derived the integrated magnitudes and the mean surface brightness within r_ e
from the best fitting exponential profiles, using the equations provided by <cit.>. All the photometric and structural parameters derived in this way are listed in Table <ref>.
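For an exponential (Sérsic n=1) profile, the conversions from the fitted parameters to integrated quantities are standard; the sketch below (ours, assuming an idealized circular profile with central surface brightness mu0 in mag/arcsec^2 and scale length h in arcsec, whereas the text uses elliptical apertures) illustrates them:

import numpy as np

def exp_profile_photometry(mu0, h):
    # effective radius of an exponential profile
    r_e = 1.678 * h
    # total magnitude from the total flux 2*pi*Sigma0*h^2 of an exponential disc
    m_tot = mu0 - 2.5 * np.log10(2.0 * np.pi * h**2)
    # mean surface brightness within r_e (half the light over the area pi*r_e^2)
    mu_e_mean = m_tot + 2.5 * np.log10(2.0 * np.pi * r_e**2)
    return r_e, m_tot, mu_e_mean

print(exp_profile_photometry(22.0, 8.0))    # illustrative values only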
The (r-i)_0 colors of SdI-1 and SdI-2 (-0.1 and +0.1, respectively) are typical of star-forming galaxies and on the blue side of the range spanned by blue UDGs from <cit.>.
Following these authors we used the relations by <cit.> to derive a rough estimate of the stellar mass from the integrated i-band luminosity and the (r-i)_0 colors; both galaxies have M_⋆≃ 1×10^7 M_ (see Tab. <ref>). For SdI-1 we have also a reliable estimate of the H I mass from <cit.> that allows us to compute the ratio of the H I mass to V-band luminosity (from Tab. 1) to be M_ HI/L_V≃ 6.2, larger than the typical values of gas-rich dwarfs in the Local Group <cit.>. The ratio of H I mass to stellar mass, M_ HI/M_⋆≃ 90 places SdI-1 in the realm of almost-dark galaxies <cit.>.
§ A LINK BETWEEN DWARF IRREGULARS AND UDGS?
Having at our disposal newly derived structural parameters and distances of our target galaxies, we noted that they have absolute magnitude, surface brightness (see Table <ref>), and, in particular,
physical sizes (r_ e=2.6 kpc and r_ e=1.3 kpc, for SdI-1 and SdI-2, respectively) in the range covered by the recently identified class of ultra diffuse galaxies[There is not yet a generally accepted definition of the class, hence similarity must be intended in a broad sense. For example, SdI-1 fulfills the main size criterion (r_ e>1.5 kpc) adopted by <cit.> in their definition of UDGs, independently of adopting the major axis or the circularised effective radius, SdI-2 fails by a small amount, and both galaxies fail to fulfill the μ_ V,0>24.0 mag/arcsec^2 criterion. However both galaxies are consistent with all the <cit.> criteria once the fading by passive evolution is taken into account (see below). Moreover, both SdI-1 and SdI-2 overlap with Fornax UDGs identified by <cit.> in luminosity, radius and ⟨μ⟩_ e.] <cit.>.
UDGs are roundish amorphous galaxies having “...the sizes of giants but the luminosity of dwarfs...” <cit.> that have been recently discovered in large numbers in clusters of galaxies <cit.>, with some examples in other environments <cit.>. In general UDGs lie on the red sequence of galaxies and their light profiles are well approximated by exponential laws <cit.>. The fact that they survive in dense environments without obvious signs of tidal distortions, as well as the first analyses of the kinematics of individual UDGs, strongly suggest that they are dark matter dominated systems <cit.>. On the other hand it is still not established if, e.g., they are failed giant galaxies <cit.> or extended and quenched dwarfs <cit.>.
In Fig. <ref> we compare the sample of Coma cluster UDGs by <cit.> with SdI-1, SdI-2 and the dwarf galaxies in the Local Volume, from the compilation by <cit.>, in the absolute magnitude vs. effective radius plane.
Local dIrr galaxies are plotted in a darker tone of grey with respect to dSphs and dwarf ellipticals. NGC 300 and NGC 55 have been plotted for reference although they do not fit the definition of dwarf galaxies. To expand our view we included also the BDDs of <cit.> and the H I-selected sample of gas-rich LSB galaxies within ≃ 250 Mpc by <cit.>. These authors derived new, more reliable estimates of integrated magnitudes and effective radii by re-analysing SDSS images. Absolute g- and
r-band integrated magnitudes from <cit.> were transformed into V-band magnitudes by using Lupton (2005)[http://www.sdss3.org/dr9/algorithms/sdssUBVRITransform.php] equations.
Also g-band magnitudes of <cit.> galaxies were converted into V-band with the same equation, adopting the mean color of red sequence galaxies at that luminosity, (g-r)_0=0.6, from <cit.>. UDGs from <cit.> have been included in Fig. <ref> with the same transformation, adopting mean colors of (g-r)_0=0.25 and (g-r)_0=0.6, for blue and red UDGs, respectively (see their Fig. 4).
In Fig. <ref> we show two slightly different versions of the plot. In the lower panel we use the effective radius measured along the major axis of the galaxies (r_ e,maj), while in the upper panel we use the circularised radius (r_ e,circ=r_ e,maj√(b/a)), since this has been adopted in several comparisons of the same kind including UDGs <cit.>.
We note that, independently of the version of the plot, not only SdI-1 and SdI-2, but also a few local large dwarf irregulars partly overlap the distribution of UDGs (WLM, IC 1613, in particular, but also NGC 3109 and IC 3104, if r_ e,maj is considered). Indeed, in their extensive literature search for previously identified UDGs, <cit.> found six galaxies satisfying all their UDG criteria in the catalog of nearby dwarf irregulars by <cit.>, concluding that an overlap between the two classes may indeed exist.
The adoption of r_ e,circ makes the overlapping range slightly narrower, due to the fact that UDGs have, on average, a much rounder shape than dIrrs.
It is also important to stress that while it was generally recognised that in the M_V vs. log r_ e plane, UDGs lie at the tip of the sequence of dSphs and dIrrs, the inclusion of the latter class of galaxies in the plot (usually not performed) makes the overall distribution much more continuous: there seems to be no gap between dSphs+dIrrs and UDGs. Expanding the view to non-local systems suggests that large dIrrs have indeed effective radii in the range 1-4 kpc <cit.>.
The sharp edge of the distribution of <cit.> galaxies toward low SB values in Fig. <ref> strongly suggests that the lack of a more substantial overlap with the UDGs may be merely due to incompleteness: a population of star-forming dwarfs with ⟨μ_V⟩_ e>24.5 mag/arcsec^2 may still be waiting to be uncovered by future more sensitive surveys[It is interesting to note that <cit.> stated that the data-reduction strategy they adopted implies a bias against the detection of extended dwarf irregulars.]. This hypothesis is confirmed by the fact that the sample of BDDs by <cit.>, which has been selected from the same source (SDSS images), shows the same cut in surface brightness. Fig. <ref> shows also that <cit.> and <cit.> galaxies and local dIrrs are indistinguishable in this plane.
We do not know if the similarity between large dIrrs and UDGs[The similarities include also the typical shape of the light profiles (nearly exponential), the incidence of stellar nuclei <cit.>, and the presence of globular cluster systems <cit.>. Note that at least one UDG has been found hosting a globular cluster population significantly larger than typical dwarfs of the same luminosity <cit.>.] is hinting at an evolutionary link between the two classes, but it is certainly worth noting, since a clear explanation for the origin of UDGs is still lacking and they can also be the end-product of different evolutionary channels <cit.>. For instance, it seems to support the hypothesis that UDGs may be “quenched Large Magellanic Cloud-like systems”, recently put forward by <cit.>. It may be conceived that the tidal stirring process that is supposed to transform small gas-rich disc dwarfs into dSph around Milky Way-sized galaxies <cit.>, acting on a larger scale, can also transform large dIrrs into UDGs within galaxy clusters, removing the gas, stopping the star formation and redistributing the stars into an amorphous spheroid (see also <cit.> for a possible relation between disc galaxies and UDGs, and <cit.> for shape arguments not supporting this relation). Interestingly, the only Local Group
quiescent galaxy overlapping the distribution of UDGs in Fig. <ref> <cit.>, the Sagittarius dSph, is believed to have evolved to its present amorphous and gas-less state from a star-forming galaxy of mass similar to the Small Magellanic Cloud, mainly driven by the tidal interaction with the Milky Way that is disrupting it <cit.>.
However, it must be noted, in this context, that typical UDGs do not show signs of ongoing disruption.
In any case, the continuity of the sequence including dIrrs of any size and dwarf spheroids of any size (dSphs and UDGs) in the M_V vs. log r_ e plane may suggest similar progenitors for all these LSB systems over more than six orders of magnitude in luminosity.
In this context, the very recent work by <cit.> is particularly relevant. These authors identified a population of possible progenitors of UDGs (blue UDGs, from their colors significantly bluer than classical red UDGs) in the outskirts of galaxy groups containing classical UDGs. RT16 demonstrated that a few Gyrs of passive evolution would
transform their blue UDGs into classical red ones, by reddening their colors and fading their surface brightness and total luminosity while keeping their large size nearly unchanged. SdI-1 and SdI-2 have size, stellar mass, luminosity, surface brightness, ellipticity and color very similar to the RT16 blue UDGs (see, e.g., Fig. <ref>), hence the RT16 results on the evolutionary path of blue UDGs apply also to our galaxies, as well as to other local dIrrs plotted in Fig. <ref>. In particular, according to canonical solar-scaled BASTI[http://basti.oa-teramo.inaf.it] stellar evolutionary models <cit.> for a simple stellar population with metallicity Z=0.002, the passive evolution from an age=0.5 Gyr to age=6.0 Gyr would lead to a fading by 1.84 magnitudes in V-band, driving the central surface brightness of SdI-1 and SdI-2 down to μ_ V,0≃ 25.0 mag/arcsec^2, fully in the realm of classical UDGs, as displayed by the arrows plotted in the upper panel of Fig. <ref>.
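Spelled out, the arithmetic behind this step is simply (a sketch; the only inputs are the quoted BASTI fading value and the assumption, made above, that the size stays nearly unchanged, so that the surface brightness fades by the same amount as the integrated luminosity):
μ_ V,0(6.0 Gyr) = μ_ V,0(0.5 Gyr) + ΔV_ fade = μ_ V,0(0.5 Gyr) + 1.84 ≃ 25.0 mag/arcsec^2,
i.e., present central surface brightnesses of about 25.0 - 1.84 ≃ 23.2 mag/arcsec^2 for the two galaxies.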
SdI-1 and SdI-2 seem to be even more isolated than their RT16 siblings, since they lie at more than 900 kpc and 350 kpc from their nearest known neighbours, respectively, while RT16 blue UDGs are within 250-550 kpc from the centre of the galaxy groups they are associated with. Moreover their nearest neighbours are dwarf galaxies, while the groups where RT16 UDGs live host also giant galaxies. SdI-1 and SdI-2 are perhaps the most isolated UDGs (or UDG progenitors) identified until now, indicating that isolated field dwarfs can indeed evolve into UDGs.
Hence, SdI-1 and SdI-2 lend additional and independent support to the scenario proposed by <cit.>, in which the progenitors of classical UDGs were dwarfs born in the field and then processed within galaxy groups and, finally, in galaxy clusters.
The properties of SdI-1 and SdI-2 also fit nicely with those of the dwarfs identified by <cit.> as counterparts of UDGs in cosmological simulations including feedback processes. Also in the <cit.> scenario UDGs are born as dwarf galaxies, but their evolution is driven by internal processes (feedback-driven gas flows, in particular), hence it is independent of the environment and it is expected to take place also in the field.
A specific prediction of the simulations by <cit.> is that UDGs evolved in isolation should have larger gas content than regular dwarfs of similar stellar mass. It is intriguing to note that this prediction is vindicated by the very high H I/M_* ratio observed in SdI-1, the most extended among our two isolated, star-forming UDGs.
§ ACKNOWLEDGEMENTS
We are grateful to an anonymous referee for a very careful reading of the original manuscript and for useful suggestions that allowed us to make a more thorough analysis.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
[Abbott et al.(2005)]des
Abbott, T., Aldering, G., Annis, J., et al., 2005, arXiv:astro-ph/0510346
[Adams et al.(2013)]adams Adams, E.K., Giovanelli, R., Haynes, M.P., 2013, , 768, 77 (A13)
[Ahn et al.(2012)]dr9
Ahn, C.P., Alexandroff, R., Allende Prieto, C., et al. 2012, ,
203, 21
[Amorisco & Loeb(2016)]amorisco
Amorisco, N. C., Loeb, A., 2016, , 459, L51
[Beasley et al.(2016)]beasley
Beasley, M.A., Romanowsky, A., Pota, V., Navarro, I.M., Martinez Delgado, D., Neyer, F.,
Deich, A.L., 2016, , 819, L20
[Beasley & Trujillo(2016)]beas2
Beasley, M.A., Trujillo, I., 2016, , 830, 23
[Beccari et al. (2016)]pap2 Beccari, G., Bellazzini, M., Battaglia, G., et al., 2016, , 591, 56 (B16)
[Beccari et al.(2017)]secco_muse
Beccari, G., Bellazzini, M., Magrini, L., et al., 2017, ,
465, 2189
[Bellazzini et al.(2015a)]pap1 Bellazzini, M., Beccari, G., Battaglia, et al., 2015a, , 575, 126 (B15a)
[Bellazzini et al.(2015b)]secco1 Bellazzini, M., Magrini, L., Mucciarelli, A., et al., 2015b, , 800, L15 (B15b)
[Belokurov(2013)]belo
Belokurov, V., 2013, New Astronomy Rev., 57, 100
[Bertin et al.(2002)]swarp
Bertin, E., Mellier, Y., Radovich, M., Missonnier, G., Didelon, P.,
Morin, B., 2002 in Astronomical Data Analysis Software and Systems XI,
San Francisco: Astronomical Society of the Pacific, ASP Conf. Series, 281, 228
[Blanton & Moustakas(2009)]blanton
Blanton, M.R., Moustakas, J., 2009, , 47, 159
[Burkert(2016)]burk
Burkert, A., 2016, , submitted (arXiv:1609.00052)
[Cannon et al.(2015)]almostdark
Cannon, J.M., et al., 2015, , 149, 72
[Di Cintio et al.(2017)]dicint
Di Cintio, A., Brook, C.B., Dutton, A.A., Macciò, A.V., Obreja, A., Dekel, A., 2017, ,
466, L1
[Di Teodoro & Fraternali(2015)]barolo
Di Teodoro, E., Fraternali, F., 2015, , 451, 3021
[Du et al.(2015)]du
Du, W., Wu, H., Lam, M.I., Zhu, Y., Lei, F., Zhou, Z., 2015, , 149, 199
[Freudling et al.(2013)]fre13 Freudling, W., Romaniello, M., Bramich, D. M., et al. 2013, , 559, A96
[Giallongo et al.(2008)]lbc
Giallongo, E., Ragazzoni, R., Grazian, A., 2008, 482, 349
[Giovanelli et al.(2013)]leop_1
Giovanelli, R., Haynes, M.P., Adams, E., 2013, , 146, 15
[Giovanelli et al.(2007)]giova07
Giovanelli, R., Haynes, M.P., Kent, B.R., et al., 2007, , 133, 2583
[Graham & Driver(2005)]GD05
Graham, A.W., Driver, S.P., 2005, , 22, 118
[James et al.(2015)]bdd
James, B., Koposov, S., Stark, D.P., Belokurov, V., Pettini, M.,
Olszewski, E.W., 2015, , 448, 2687 (J15)
[James et al.(2016)]bdd2
James, B., Koposov, S., Stark, D.P., Belokurov, V., Pettini, M.,
Olszewski, E.W., McQuinn, K.B.W., 2016, , in press (arXiv:1611.05888)
[Hunter & Elmegreen(2006)]HE06
Hunter, D.A., Elmegreen, B.G., 2006, , 162, 49
[Koda et al.(2015)]koda
Koda, J., Yagi, M., Yamanoi, H., & Komiyama, Y. 2015, , 807, L2
[Koposov et al.(2015)]kopo15
Koposov, S.E., Belokurov, V., Torrealba, G., Evans, N.W., 2015, , 805, 130
[Kormendy & Freeman(2016)]korfree
Kormendy, J., Freeman, K.C., 2016, , 817, 84
[Laher et al.(2012)]apt
Laher, R.R., Gorjian, V., Rebull, L.M., et al., 2012, , 124, 737
[Lange et al.(2016)]lange
Lange, R., Moffett, A., Driver, S., et al., 2016, , 462, 1470
[Lee et al.(2003)]lee03 Lee, H., Grebel, E.K., Hodge, P.W., 2003, , 401, 141
[Liske et al.(2003)]liske03
Liske, J., Lemon, D.J., Driver, S.P., Cross, N.J.G., Couch, W.J., 2003, , 344, 307
[Liske et al.(2015)]liske15
Liske, J., Baldry, I.K., Driver, S.P., et al. 2015, , 452, 2087
[Maddox et al.(1990)]apmuk
Maddox, S.J., Sutherland, W.J., Efstathiou, G., Loveday, G., 1990, , 243, 692
[Marino et al.(2013)]marino
Marino, R. A., Rosales-Ortega, F. F., Sánchez, S. F., et al., 2013, , 559, 114
[Martínez-Delgado(2016)]david
Martínez-Delgado, D., Laskër, R., Sharina, M., et al., 2016, , 151, 96
[Mayer et al.(2007)]lucionat
Mayer, L., Kazantzidis, S., Mastropietro, C., Wadsley, J., 2007, Nature, 445, 738
[Majewski et al.(2003)]maj
Majewski, S.R., Skrutskie, M.F., Weinberg, M.D., Ostheimer, J.C., 2003, , 599, 1082
[McClure-Griffiths et al.(2009)]McClure
McClure-Griffiths, N. M. et al., 2009, , 181, 398
[McConnachie(2012)]mcc
McConnachie, A.W., 2012, , 144, 4
[McQuinn et al.(2013)]leop_lbt
McQuinn, K.D.W., Skillman, E.D., Berg, D.A., et al., 2013, , 146, 145
[Mihos et al.(2015)]mihos
Mihos, J.C., Durrell, P.R., Ferrarese, L., et al., 2015, , 809, L21
[Moore et al.(1999)]moore
Moore, B., Ghigna, S., Governato, F., et al., 1999, , 524, L19
[Muñoz et al.(2015)]munoz
Muñoz, R.O., Eigenthaler, P., Puzia, T.H., et al., 2015, , 813, L15
[Niederste-Ostholt et al.(2010)]NO
Niederste-Ostholt, M., Belokurov, V., Evans, N.W., Peñarrubia, J., 2010, , 712, 516
[Osterbrock & Ferland(2006)]os06 Osterbrock, D.E., Ferland, G.J., 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei, 2nd ed. (Sausalito, CA: University Science Books)
[Pagel et al.(1978)]pagel Pagel, B.E.J., Edmunds, M.G., Fosbury, R.A., Webster, B.L., 1978, , 184, 569
[Percival et al.(2009)]basti_synt
Percival, S.M., Salaris, M., Cassisi, S., Pietrinferni, A., 2009, ,
690, 426
[Pettini & Pagel(2004)]pp04 Pettini, M., & Pagel, B. E. J. 2004, , 348, L59
[Pietrinferni et al.(2004)]basti
Pietrinferni, A., Cassisi, S., Salaris, M., Castelli, F., 2004, ,
612, 168
[Pilyugin & Mattsson(2011)]pm11 Pilyugin, L. S., & Mattsson, L. 2011, , 412, 1145
[Pilyugin & Grebel(2016)]pg16
Pilyugin, L.S., Grebel, E.K., 2016, , 457, 3678
[Pogge et al.(2010)]mods
Pogge, R. W., Atwood, B., Brewer, D. F., et al. 2010, Proc. SPIE, 7735, 9
[Ricotti(2009)]ricotti
Ricotti, M., 2009, , 392, L45
[Roberts et al.(2004)]rob04
Roberts, S., Davies, J., Sabatini, S., et al., 2004, , 352, 478
[Roediger & Courteau(2015)]rc15
Roediger, J.C., Courteau, S., 2015, , 452, 3209
[Roman & Trujillo(2016)]rt16
Roman, J., Trujillo, I., 2016, , in press (arXiv:1610.08980) (RT16)
[Sand et al.(2015)]sand15
Sand, D.J., Crnojević, D., Bennet, P., et al., 2015, , 806, 95
[Saul et al.(2012)]galfa
Saul, D. R., Peek, J. E. G., Grcevich, J., et al. 2012, , 758, 44
[Sawala et al.(2016)]sawa16
Sawala, T., Frenk, C.S., Fattahi, A., et al., 2016, , 457, 1931
[Schlafly & Finkbeiner(2011)]schlafy
Schlafly, E.F., Finkbeiner, D.P., 2011, , 737, 103
[Shanks et al.(2015)]atlas
Shanks T. et al., 2015, , 451, 4238
[Stetson(2005)]stet
Stetson, P.B., 2005, , 117, 563
[Tollerud et al.(2016)]tolle
Tollerud, E.J., Geha, M.C., Grcevich, J., et al., 2016, , 827, 89
[van der Burg(2016)]burg
van der Burg, R.F.J., Muzzin, A., Hoekstra, H., 2016, , 590, 20
[van Dokkum et al.(2015a)]udg
van Dokkum, P. G., Abraham, R., Merritt, A., et al. 2015a, , 798, L45
[van Dokkum et al.(2015b)]udg2
van Dokkum, P. G., Romanowsky, A.J., Abraham, R., et al., 2015b, , 804, L45
[van Dokkum et al.(2016)]udg3
van Dokkum, P. G., Abraham, R., Brodie, J., et al., 2016, , 828, L6
[Vincenzo et al.(2016)]vincenzo
Vincenzo, F., Belfiore, F., Maiolino, R., Matteucci, F., Ventura, P., 2016, , 458, 3466
[Yagi et al.(2016)]yagi
Yagi, M., Koda, J., Komiyama, Y., Yamanoi, H., 2016, , 225, 11
[Zaritsky(2017)]zar
Zaritsky, D., 2017, , 464, L110
| The complete census of dwarf galaxies in the Local Group (LG, and in the Local Volume) is a key observational enterprise in these decades, closely tied to the solution of the long-standing missing satellites problem <cit.>.
The recent discovery of the nearby (D=1.7 Mpc) faint (M_V=-9.4) star-forming dwarf galaxy Leo P <cit.> has opened a new road for the identification of local dwarfs.
Leo P was found as the stellar counterpart of a very compact high velocity cloud (CHVC) of neutral Hydrogen identified in the ALFALFA HI survey <cit.>, thus suggesting that some of the missing dwarfs in the LG and its surroundings could be hidden within similar CHVCs.
These dwarfs may be the gas-rich star-forming counterparts of the quiescent ultra faint dwarfs (UFD) that have been found in relatively large numbers as stellar overdensities in panoramic imaging surveys <cit.>.
Indeed there are models within the Λ-cold dark matter (CDM) scenario predicting that a large number of small DM haloes <cit.> should have had their star formation inhibited or quenched by global or local feedback effects (e.g., re-ionization, supernova feedback, ram-pressure stripping), thus leading to a population of gas-rich dwarfs with low or null stellar content <cit.>.
The only possibility to confirm these systems as real galaxies and to gauge their distances is to find a stellar population associated with the HI clouds and indeed several teams followed up the CHVCs proposed by the ALFALFA <cit.> and GALFA-HI <cit.> surveys as candidate local (D≤ 3.0 Mpc) mini-haloes.
<cit.>, within the SECCO survey, obtained deep and homogeneous imaging of 25 of the ALFALFA candidates from A13, finding only one confirmed stellar counterpart, the very faint star-forming system SECCO 1, likely located in the Virgo cluster <cit.>. <cit.>, searching several public image archives, were able to confirm SECCO 1 and discovered four additional counterparts in the GALFA-HI sample, all of them with D ≳ 3.0 Mpc <cit.>.
<cit.> adopted a different approach, searching for small groupings of blue stars within the SDSS catalogue and identifying ∼ 100 interesting candidates.
The follow-up of 12 of them revealed a population of faint, blue, metal-poor low surface brightness (LSB) dwarfs in the distance range 5 Mpc ≲ D ≲ 120 Mpc, six of them associated with H I clouds listed in <cit.>. J15 defined the newly found systems as blue diffuse dwarf (BDD) galaxies.
Apparently we are beginning to scratch the surface of a population of LSB star-forming dwarfs that went undetected until now, although they are not found in the Local Volume <cit.>.
Within this context, we have selected mini-halo candidates from the GASS HI survey <cit.>.
Unlike previous searches, which only looked at velocities very different from Galactic emission (HVCs with |v_ dev|>90 km s^-1, v_ dev being the deviation velocity with respect to a regularly rotating Galactic disc), we have explored the range of lower velocities, typical of intermediate velocity clouds (IVCs, 30<|v_ dev|<90 km s^-1).
We detected HI sources using the code ^ 3DBAROLO <cit.> and applying selection criteria on their size and velocity width to minimize the contamination from Galactic clouds.
This left us with a sample of about one thousand best candidates, presumably with a very high degree of contamination by Galactic sources, around which we searched for stellar counterparts in SDSS <cit.>, ATLAS <cit.> and DES <cit.> images.
The process of visual inspection of available images around the positions of the clouds led to the selection of two promising candidates. These are blue LSB galaxies whose apparent diameters are fully compatible with being located within
∼ 3.0 Mpc from us; moreover they are not completely unresolved, displaying a few blue compact sources resembling HII regions.
Unfortunately, the spatial resolution of GASS is about 16 arcmin and it does not allow an association with certainty, hence spectroscopic follow-up is required.
Here we present the results of this follow-up, ultimately resulting in the rejection of the association of both the candidate stellar counterparts with the local gas clouds, since they are located at distances larger than 40 Mpc. Still, our observations provide the first redshift and metallicity estimates for these galaxies, which are useful for future studies, and reveal their remarkably large size, given their total luminosity. The latter feature led us to note that the most luminous dwarf Irregular galaxies (dIrr) display structural parameters (sizes, integrated magnitudes, Sérsic indices, and surface brightnesses) overlapping the range inhabited by the newly discovered ultra diffuse galaxies <cit.>, suggesting a possible relation between the two classes of stellar systems.
While these observations are not part of the SECCO survey, they are strongly related and for this reason we adopt the SECCO nomenclature to name the two dwarfs considered here. In particular, following <cit.> we call them SECCO-dI-1 and SECCO-dI-2 (where dI = dwarf Irregular), abbreviated as SdI-1
and SdI-2. | null | null | null | null | null |
http://arxiv.org/abs/1701.07482v2 | 20170125204302 | Switched control for quantized feedback systems: invariance and limit cycle analysis | [
"Alessandro Vittorio Papadopoulos",
"Federico Terraneo",
"Alberto Leva",
"Maria Prandini"
] | cs.SY | [
"cs.SY"
] |
Switched control for quantized feedback systems:
invariance and limit cycle analysis
Alessandro Vittorio Papadopoulos, Member, IEEE,
Federico Terraneo,
Alberto Leva, Member, IEEE,
Maria Prandini, Senior Member, IEEEA.V. Papadopoulos is with Mälardalen University, Västerås, Sweden, (e-mail: [email protected]), and F. Terraneo, A. Leva, and M. Prandini are with the Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, 20133, Italy, e-mail: {federico.terraneo, alberto.leva, maria.prandini}@polimi.it).This work was done when the first author was a post-doctoral researcher at Politecnico di Milano, and is supported by the European Commission under the project UnCoVerCPS with grant number 643921.
We study feedback control for discrete-time linear time-invariant systems in the presence of quantization both in the control action and in the measurement of the controlled variable.
While in some applications the quantization effects can be neglected, when high-precision control is needed, they have to be explicitly accounted for in control design. In this paper we propose a switched control solution for minimizing the effect of quantization of both the control and controlled variables in the case of a simple integrator with unitary delay, a model that is quite common in the computing systems domain, for example in thread scheduling, clock synchronization, and resource allocation.
We show that the switched solution outperforms the one without switching, designed by neglecting quantization, and analyze necessary and sufficient conditions for the controlled system to exhibit periodic solutions in the presence of an additive constant disturbance affecting the control input.
Simulation results provide evidence of the effectiveness of the approach.
quantized feedback control, switched control, practical stability, computing system design, limit cycle.
§ INTRODUCTION
This paper deals with quantized feedback control for discrete-time linear time-invariant control systems. In particular, we consider the effect of quantization of both the measurements and the control actions.
In general, any digital implementation of a control system entails input and output quantization. This is typically the case when the output measurements used for feedback and the control actions applied to the controlled process are transmitted via a digital communication channel, <cit.>.
Depending on the specific application, quantization effects can become relevant and significantly affect the control system performance. While in some applications the quantization effects can be neglected, when high-precision control is needed, quantization has to be explicitly accounted for in control design.
Given a system that is stabilized by a standard linear time-invariant feedback controller when there is no quantization, the problem addressed herein is to find a switched controller that steers the system towards the smallest possible invariant set that includes the origin when its control input and output are quantized. We focus, in particular, on a discrete time linear system described by an integrator with a one time-unit delay. The system is affected by an additive constant bias on the control input, and both control input and controlled output measurements are quantized via a rounding operator.
Despite its simplicity, this system structure appears in several problems pertaining to the domain of computing systems. For example it represents the dynamics from reservation to cumulative CPU time in task scheduling, a typical source of disturbance being the latency of the preemption interrupt <cit.>. It models the disturbance to error dynamics in clock synchronization for wireless sensor networks, where the most relevant source of disturbance is given by temperature variations in the oscillator crystals <cit.>. It plays a role in server systems <cit.>, queuing systems <cit.>, and so forth, as can be observed from the variety of problems mentioned in <cit.>. Quantizers are present in virtually the totality of these applications, and dealing with their effect is important when high-performance is required.
In fact, several of the problems just listed require zero error in the presence of constant inputs, hence the relevance of quantization becomes apparent. Constant (or practically constant) are for example thermal disturbances experienced by wireless nodes in a climatized environment. In such an application context, temperature variations are very small and slow, because they are smoothed by the environment thermal dynamics and counteracted by temperature control, and abrupt variations may occur but only sporadically, for example when turning the air conditioners on once per day or week.
The considered linear system is stabilizable, and in the absence of quantization, one can introduce a standard proportional-integral (PI) controller to compensate for a constant load disturbance and bring the state trajectories to the zero equilibrium. The presence of input and output quantizers degrades the
PI controller performance, introducing oscillations in the quantized output with an excursion that is equal to twice the quantizer resolution. Such oscillations may be not admissible when dealing with high precision computing systems. Our goal in this paper is to design a better performing controller, while maintaining a PI-like structure in order to ease implementation and tuning. Invariant set and reachability analysis are the methods adopted to assess the properties of the designed control scheme.
More precisely, we propose a switched variant of the PI controller to address quantization and minimize its effect on the feedback control system performance. We then show that when the disturbance is constant, the switched control solution presents an invariant set for the quantized control input and output variables such that the quantized output is either zero or has a unitary amplitude (corresponding to the least significant bit, hence to the minimum representable quantity). A numerical reachability analysis study shows that, if the PI controller is suitably tuned, this invariant set is a global attractor. Necessary and sufficient conditions for the existence of a periodic solution in the (unquantized) control input and output variables are given as well.
Many papers in the literature address control of quantized linear systems.
Most of them focus on stabilization at the zero equilibrium in absence of disturbances.
Contributions can be classified based on the characteristics of the adopted quantizer. If the quantizer has a finite resolution, like in our paper where uniform quantization is adopted, then, <cit.> shows that classical stability cannot be achieved and introduces the practical stability
notion for quantized systems. More specifically, <cit.> proves that,
given an unstable discrete time system that is stabilizable, if the state measurements are quantized, then, there is no control strategy that makes all trajectories of the quantized state-feedback system asymptotically converge to zero, and only convergence to an invariant set around zero can be obtained.
Classical results on asymptotic stability of the origin are recovered in <cit.> by changing the resolution of the quantizer depending on the state behavior, and hence making the resolution higher and higher while approaching the origin. This approach has been extended to input to state and l_2 stabilization in presence of a disturbance input in <cit.> and <cit.>, respectively.
When a logarithmic quantizer with (countably) infinite quantization levels is adopted,
the resolution of the quantizer is infinite close to the origin, and global asymptotic stability
can be achieved, <cit.>. However, when finite-level logarithmic quantizers
are used, practical stability results can only be proven. Analysis of practical stability and constructive results on how to design finite-level logarithmic state quantizers guaranteeing practical stability are given in, e.g., <cit.>.
It is worth noticing that most papers in the literature consider quantization of either the control input (see, e.g., <cit.>) or the controlled output (see, e.g., <cit.>), whereas only a few address the set-up considered in this paper, where both control input and controlled output are quantized. This is the case in <cit.>.
Whereas logarithmic quantizers with infinite quantization levels are considered in <cit.>, in <cit.>, input and output quantizers are assumed to have a finite number of quantization levels. Practical stabilization of a double integrator system is studied in <cit.>, showing how the parameters defining the quantizers should be set for the practical stability result to hold. Extension to higher order integrator models is outlined as well, focusing however on stabilization without disturbances acting on the system.
The work closest to the present paper is <cit.>, where pre-defined finite resolution quantizers on both input and output are given and a feedback controller is designed to achieve some control goal. More precisely, in <cit.>, practical stabilization of unstable discrete time linear systems is addressed, and a quantized static state-feedback controller is designed that brings the state of the system to some invariant set around the origin in a finite number of steps. Our approach differs from <cit.> in that we address disturbance compensation, and we introduce a switched output-feedback controller to make the state of the controlled system reach an invariant set around the origin. Disturbance compensation and dynamic state/output-feedback control are not addressed in <cit.> and related work. In turn, while the methodology in <cit.> is of general applicability, our design is tailored to a simple system model and not easily extendable to different higher dimensional models.
The rest of the paper is organized as follows. Section <ref> first describes the control scheme without switching, and highlights how quantization deteriorates the performance of the control system. The switched solution that allows for minimizing the effect of quantization is then presented in the same section. Section <ref> provides necessary and sufficient conditions for entering the invariant set. A numerical reachability analysis study is performed in Section <ref> for identifying the controller parameter tuning that makes such an invariant set a global attractor. Section <ref> gives necessary and sufficient conditions for the existence of periodic solutions. Finally, Section <ref> provides evidence of the effectiveness of the approach via a simulation study, while Section <ref> concludes the paper.
§ BASIC CONTROL SCHEME AND ITS SWITCHED VARIANT
§.§ Notation
We now introduce some notation that will be used in the paper developments.
The sign function of a real number z is defined as:
σ(z) :=
1, z > 0
0, z = 0
-1, z < 0
The integer part of a real number z is defined as:
⟨z⟩ :=
⌊ z ⌋, z ≥ 0
⌈ z ⌉, z < 0
where ⌊ z ⌋ is the largest signed integer smaller than or equal to z and
⌈ z ⌉ is the smallest signed integer larger than or equal to z.
The fractional part of a real number z is defined as:
{z} := z - ⟨z⟩
A quantizer maps a real-valued function into a piecewise constant function taking values in a discrete set, and here it is defined as the rounding operator.
Given a real number z, its rounding ρ: ℝ→ℤ is defined as:
ρ(z) :=
σ(z)·⟨|z|⟩, 0 ≤ {|z|} < 1/2
σ(z)·(⟨|z|⟩ + 1), 1/2 ≤ {|z|} < 1
Given a real number z, its rounding error is:
Δ_z := z - ρ(z).
Notice that according to the provided definitions, the rounding error of a real number z is always bounded as |Δ_z| ≤ 1/2.
Finally, note that given a real number a ∈ ℝ and an integer b ∈ ℤ, we have that ρ(a+b) = ρ(a) + b.
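As a concrete reference for the developments that follow, the operators above can be coded in a few lines. The sketch below (in Python, with function names of our choosing) implements σ(·), ⟨·⟩, {·}, ρ(·) and the rounding error, and checks the tie-breaking rule of ρ(·):

import math

def sgn(z):
    # sign function sigma(z)
    return (z > 0) - (z < 0)

def int_part(z):
    # integer part: floor for z >= 0, ceiling for z < 0 (truncation toward zero)
    return math.floor(z) if z >= 0 else math.ceil(z)

def frac_part(z):
    # fractional part: z minus its integer part
    return z - int_part(z)

def rnd(z):
    # rounding operator rho: nearest integer, with |fractional part| = 1/2
    # rounded away from zero
    return int_part(z) if abs(frac_part(z)) < 0.5 else int_part(z) + sgn(z)

def rounding_error(z):
    # Delta_z = z - rho(z), always bounded by 1/2 in absolute value
    return z - rnd(z)

assert rnd(1.5) == 2 and rnd(-1.5) == -2 and rnd(0.4) == 0
assert abs(rounding_error(math.pi)) <= 0.5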
§.§ The basic scheme
We consider a system with control input u and output e, which is governed by the following equation
e(k+1) = e(k) + ρ(u(k)) + d(k),
where d is an additive, constant but unknown disturbance acting on top of the quantized control action ρ(u).
The output e represents some error signal and should be driven to zero by compensating the disturbance d through the control input u. To this purpose, quantized measurements ρ(e) of the output are available for feedback.
Due to the quantization of both u and e, the disturbance might not be exactly compensated, and the goal is to design an output feedback compensator so that e is kept below the minimum resolution as defined by the quantizer (ρ(e) = 0).
The transfer function between the residual disturbance ρ(u)+d and the controlled variable e is given by
P(z) = 1/(z-1),
which is a discrete time integrator with a one time unit delay.
Suppose that disturbance d is constant, and neglect the quantization for the time being.
Then, a discrete-time Proportional Integral (PI) controller described via the transfer function:
R(z) = (1 - α z)/(z - 1),
would suffice to drive e to zero with a rate of convergence that can be set via the parameter α.
Indeed, if we neglect the quantizers, the effect of the disturbance d on the output e can be described via the (closed-loop) transfer function
F(z) = P(z)/(1 - R(z)P(z)) = (z-1)/(z(z + α - 2)),
which corresponds to an asymptotically stable linear system if 1 < α < 3.
Hence, in the absence of quantization effects, the PI controller guarantees that the error converges to zero in the presence of a constant disturbance, with a rate of convergence that depends on the parameter α. If α = 2, the output e would be brought to zero in two time units.
Figure <ref> shows the resulting control scheme, including the quantizers.
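Before discussing quantization, the nominal behavior can be reproduced with a few lines of simulation; a minimal sketch of the loop of Figure <ref> without the quantizers, with the deadbeat tuning α = 2 and a constant disturbance value of our choosing:

alpha, d = 2.0, 1.2             # alpha = 2: deadbeat, e reaches zero in two steps
e, u = 2.0, 0.0
for k in range(8):
    e_next = e + u + d          # plant: integrator with one-step delay
    u = u + e - alpha * e_next  # PI controller R(z) = (1 - alpha z)/(z - 1)
    e = e_next
    print(k + 1, round(e, 6))   # e(1) = 3.2, then e ~ 0 from the second step on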
§.§ The effect of quantization
As anticipated in the introduction, whenever high-precision control is needed, quantization can significantly deteriorate the performance of the control system. Indeed, quantization effects are not negligible in almost all the applications where a digital implementation is in place.
In particular, in the case of the scheme in Figure <ref>, a constant disturbance may cause the system to end up in a limit cycle where the excursion in amplitude of the quantized error ρ(e) is 2. An example is shown in Figure <ref>, with α = 1.4, d(k) = d = 1.2, and the control system initialized as e(0) = 2, u(0) = 0.
This figure, and, more precisely, the behavior of the error signal e, shows that the system with transfer function P(z) integrates over time the residual between the disturbance d and the quantized control input ρ(u). Due to the quantization of the system output e, the PI controller keeps its control action constant as long as ρ(e) is zero. It then reacts when the integrated residual disturbance exceeds the threshold 1/2 in amplitude, making the quantized output ρ(e) change value from 0 to either 1 or -1, depending on its sign. The control signal reverses the sign of the residual disturbance, thus causing the quantized output ρ(e) to change sign as well. As a result, ρ(e) is brought to a limit cycle where it keeps commuting between -1 and 1, with an excursion in amplitude that is equal to 2.
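The behavior just described is easy to reproduce numerically; the following sketch simulates the quantized PI loop of Figure <ref> with the same parameters as in Figure <ref>:

import math

def rnd(z):
    # rounding operator rho (ties away from zero)
    i = math.floor(z) if z >= 0 else math.ceil(z)
    return i if abs(z - i) < 0.5 else i + (1 if z > 0 else -1)

alpha, d = 1.4, 1.2
e, u = 2.0, 0.0
qe_prev = rnd(e)
for k in range(40):
    e = e + rnd(u) + d            # plant integrates rho(u) + d
    qe = rnd(e)
    u = u + qe_prev - alpha * qe  # PI acting on the quantized error rho(e)
    qe_prev = qe
    print(k + 1, qe)

After a transient, the printed sequence of ρ(e) settles into a pattern that visits both 1 and -1, i.e., an excursion in amplitude of 2.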
§.§ The proposed switched control scheme
In this section, we propose a switched control scheme that reduces the effect of quantization, steering the system to a limit cycle of an amplitude that is half of the one obtained with the control scheme in Figure <ref>. The proposed solution has the advantage of still adopting simple controllers, which leads to a system easily implementable in an embedded device, with very low overhead.
The controller is composed of a linear part with transfer function
R̃(z) = (α z - 1)/z,
and a switched part where the control action ũ computed by R̃(z) is set as the input to the following modified integrator:
u(k+1) = u(k) + ũ(k+1), if ρ(e(k+1)) ≠ 0
u(k+1) = ρ(u(k) + ũ(k+1)), if ρ(e(k+1)) = 0
that finally computes the actual control input u, based on the quantized error measurements ρ(e).
Figure <ref> shows the resulting switched control scheme.
Note that if ρ(e(k+1)) ≠ 0, then the effect of ρ(e) on u is described by the transfer function R(z) of the PI controller previously presented. Furthermore, in the absence of quantization, the two schemes in Figures <ref> and <ref> coincide.
The switched control system dynamics is characterized by the state variables u and e, and can be expressed as follows:
* if ρ(e(k+1)) = ρ(e(k) + ρ(u(k)) + d(k)) = 0, then:
e(k+1) = e(k) + ρ(u(k)) + d(k)
u(k+1) = ρ(u(k)) + ρ(e(k))
* if ρ(e(k+1)) = ρ(e(k) + ρ(u(k)) + d(k)) ≠ 0, then:
e(k+1) = e(k) + ρ(u(k)) + d(k)
u(k+1) = u(k) + ρ(e(k))
- α ρ(e(k) + ρ(u(k)) + d(k))
§ INVARIANT SET ANALYSIS
In this section we prove that, for a constant disturbance d(k) = d, the proposed control scheme admits an invariant set in the quantized state variables ρ(e) and ρ(u), and that within that set the amplitude of the quantized error oscillations is 1.
We characterize the conditions under which the control system enters this invariant set. To this purpose it is convenient to express the control input as the quantized disturbance compensation term -ρ(d) plus a residual x:
u(k) = -ρ(d) + x(k),
and let
Δ_d = d - ρ(d),
be the rounding error of the disturbance. We can then rewrite the control system dynamics in the state variables e and x as:
* if ρ(e(k+1)) = ρ(e(k) + ρ(x(k)) + Δ_d) = 0, then:
e(k+1) = e(k) + ρ(x(k)) + Δ_d
x(k+1) = ρ(x(k)) + ρ(e(k))
* if ρ(e(k+1)) = ρ(e(k) + ρ(x(k)) + Δ_d) ≠ 0, then:
e(k+1) = e(k) + ρ(x(k)) + Δ_d
x(k+1) = x(k) + ρ(e(k))
- α ρ(e(k) + ρ(x(k)) + Δ_d)
which better shows that the rounding error of the disturbance is integrated by the process dynamics.
Let 1 < α < 3/2, and consider the system described by (<ref>) and (<ref>). If, at some time k,
-1/2 < e(k) < 1/2
1 ≤ α - x(k) σ(Δ_d) < 3/2
-1/2 < x(k) < 1/2
then, for all the subsequent time steps k+h, h>0:
(ρ(e(k+h)), ρ(x(k+h))) ∈
{ (0,0), (σ(Δ_d), -σ(Δ_d)) }.
Moreover, { (0,0), (σ(Δ_d), -σ(Δ_d)) } is the smallest invariant set for ρ(e) and ρ(x), when the system evolves starting from (<ref>).
Let us first consider the case where Δ_d = 0. Given the error evolution in (<ref>)-(<ref>), we get from (<ref>) that:
e(k+1) = e(k) + ρ(x(k)) + Δ_d = e(k).
Then ρ(e(k+1)) = ρ(e(k)) = 0, and by (<ref>) the system evolves according to (<ref>):
e(k+1) = e(k) + ρ(x(k)) + Δ_d
x(k+1) = ρ(x(k)) + ρ(e(k)) ⇒
e(k+1) = e(k)
x(k+1) = 0
The first equation satisfies (<ref>), and the second equation satisfies both (<ref>) and (<ref>), so that the corresponding system keeps evolving according to (<ref>).
In addition, (ρ(e(k+1)), ρ(x(k+1))) is equal to (0,0), and the system will keep staying in (0,0) for all times k+h, with h>0. This concludes the proof for the case when Δ_d = 0.
We now consider the case when 0 < Δ_d ≤ 1/2. Derivations for the case -1/2 ≤ Δ_d < 0 are analogous, and hence omitted. Given (<ref>), we have:
e(k+1) = e(k) + ρ(x(k)) + Δ_d = e(k) + Δ_d.
Since -1/2 < e(k) < 1/2 in (<ref>), and 0 < Δ_d ≤ 1/2, then
-1/2 < e(k) + Δ_d < 1,
and
ρ(e(k+1)) = ρ(e(k) + Δ_d)
=
0, |e(k) + Δ_d| < 1/2
1, 1/2 ≤ e(k) + Δ_d < 1
We can then distinguish the following two cases:
* ρ(e(k+1)) = ρ(e(k) + ρ(x(k)) + Δ_d) = 0
* ρ(e(k+1)) = ρ(e(k) + ρ(x(k)) + Δ_d) = 1
Case <ref>): The system evolves according to (<ref>):
e(k+1) = e(k) + ρ(x(k)) + Δ_d
x(k+1) = ρ(x(k)) + ρ(e(k)) ⇒
e(k+1) = e(k) + Δ_d
x(k+1) = 0,
so that in one step the quantized state is brought to zero: (ρ(e(k+1)), ρ(x(k+1))) = (0,0). Since the first equation in (<ref>) satisfies (<ref>), and the second satisfies both (<ref>) and (<ref>), we are then back to (<ref>).
Case <ref>): The system evolves according to (<ref>):
e(k+1) = e(k) + ρ(x(k)) + Δ_d
x(k+1) = x(k) + ρ(e(k))
- α ρ(e(k) + ρ(x(k)) + Δ_d) ⇒
e(k+1) = e(k) + Δ_d
x(k+1) = x(k) - α
By (<ref>), we have:
-3/2 < x(k) - α ≤ -1,
hence
ρ(x(k+1)) = ρ(x(k) - α) = -1,
so that (ρ(e(k+1)), ρ(x(k+1))) = (1,-1).
If we next compute:
e(k+2) = e(k+1) + ρ(x(k+1)) + Δ_d
= e(k+1) - 1 + Δ_d,
since e(k+1) = e(k) + Δ_d, and in this case 1/2 ≤ e(k) + Δ_d < 1:
-1/2 < e(k+1) - 1 + Δ_d < 1/2,
we then have
ρ(e(k+2)) = 0.
The dynamics therefore evolves according to (<ref>), i.e.,
e(k+2) = e(k+1) + ρ(x(k+1)) + Δ_d
x(k+2) = ρ(x(k+1)) + ρ(e(k+1)) ⇒
e(k+2) = e(k+1) - 1 + Δ_d
x(k+2) = -1 + 1 = 0
so that (ρ(e(k+2)), ρ(x(k+2))) = (0,0). In two steps the quantized state is brought to zero. The first equation in (<ref>) satisfies hypothesis (<ref>), the second satisfies both (<ref>) and (<ref>), hence we are back to (<ref>).
All the above shows that starting from (<ref>), the system ends up evolving in the invariant set {(0,0),(1,-1)} for (ρ(e), ρ(x)). Now we need to prove that this is the smallest invariant set.
Note that we have just shown that from (<ref>) the system either enters the invariant set in (0,0) or in (1,-1), and in this latter case it evolves to (0,0) in one time step. Also, in both cases the system is back to set (<ref>), with x = 0 (see equations (<ref>) and (<ref>)). We then need to show that the quantized state cannot keep being in (0,0) indefinitely, but will eventually switch to (1,-1).
This is indeed the case because, according to equation (<ref>), the system keeps being in (<ref>) with x = 0 and keeps integrating the rounding error until e (necessarily) exceeds 1/2. Then, we are in case 2 since ρ(e) = 1, and the quantized state switches to (1,-1).
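The theorem also lends itself to a direct numerical check; a sketch (α fixed to a value of our choosing within (1, 3/2), with Δ_d and the initial conditions randomly drawn inside region (<ref>)):

import math, random

def rnd(z):
    i = math.floor(z) if z >= 0 else math.ceil(z)
    return i if abs(z - i) < 0.5 else i + (1 if z > 0 else -1)

def step(e, x, dd, alpha):
    # (e, x) dynamics of the switched loop, with x(k) = u(k) + rho(d)
    e1 = e + rnd(x) + dd
    x1 = rnd(x) + rnd(e) if rnd(e1) == 0 else x + rnd(e) - alpha * rnd(e1)
    return e1, x1

random.seed(0)
alpha = 1.3
for trial in range(1000):
    dd = random.uniform(-0.49, 0.49)   # Delta_d
    s = (dd > 0) - (dd < 0)            # sigma(Delta_d)
    while True:                        # rejection-sample inside region (<ref>)
        e, x = random.uniform(-0.49, 0.49), random.uniform(-0.49, 0.49)
        if 1 <= alpha - x * s < 1.5:
            break
    for k in range(200):
        e, x = step(e, x, dd, alpha)
        assert (rnd(e), rnd(x)) in {(0, 0), (s, -s)}
print("invariant set check passed")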
Let 1 < α < 3/2, and consider the switched control system described by (<ref>) and (<ref>). If, at some time k, the state satisfies (<ref>), then, for all the time steps k+h, h>1:
e(k+h) = e(k+h-1) + ρ(x(k+h-1)) + Δ_d
x(k+h) = -α ρ(e(k+h))
Equation (<ref>) follows immediately from the system dynamics in (<ref>)-(<ref>). Based on the proof of Theorem <ref>, (<ref>) is trivially satisfied when Δ_d = 0 since in this case ρ(e(k)) = 0, and the system evolves according to (<ref>). Let Δ_d ≠ 0. If ρ(e(k+1)) = 0, then x(k+1) = 0 (see equation (<ref>)). If instead ρ(e(k+1)) = σ(Δ_d), then x(k+1) = x(k) - α σ(Δ_d), and in one time step x(k+2) = 0 (see equations (<ref>) and (<ref>)).
After time k+2, x keeps its value at 0 when ρ(e) = 0. It becomes -α σ(Δ_d) as soon as ρ(e) = σ(Δ_d), and then gets back to 0 in one time step.
As a consequence, it is possible to express x(k+h), with h>1, as:
x(k+h) = -α ρ(e(k+h)),
thus concluding the proof.
A possible evolution of the system is shown in Figure <ref>, for α = 1.1, Δ_d = 0.4, when the switched control system (<ref>) and (<ref>) is initialized at e(0) = 0.2 and x(0) = 0.6. The green square in the figure indicates the initial condition, while the red area indicates the region (<ref>). The top graph in Figure <ref> shows the phase plot of the system. After the state enters the red area, it ends up in the invariant set characterized in Theorem <ref>. The central and bottom graphs represent the time evolution of the state variables e and x and of their quantized versions.
Theorem <ref> provides conditions under which the system ends up in an invariant set where the quantized state variables ρ(e(k)) and ρ(x(k)) commute between the values 0 and σ(Δ_d), and 0 and -σ(Δ_d), respectively, with an excursion of amplitude equal to 1. However, depending on the values of α and Δ_d, the system may end up in a different invariant set.
This is studied in the following section.
§ NUMERICAL ANALYSIS OF REACHABILITY AND GLOBAL ATTRACTIVENESS
The purpose of this section is to study the global attractiveness of the invariant set identified in Theorem <ref>. To this end, we exploit the fact that once the system has entered the region (<ref>), in one step it ends up in the invariant set.
Therefore, we only need to study the reachability of region (<ref>).
Providing an analytical reachability analysis for the considered system is quite involved and far from being trivial, due to the quantization effect. In addition, most of the available tools for performing such an analysis (e.g., SpaceEx <cit.>, Flow* <cit.>, KeYmaera <cit.>, or Ariadne <cit.>) are meant for continuous time dynamical systems <cit.>.
This analysis is parametric in the (α, Δ_d) pair.
To carry it out numerically, α and Δ_d were made variable in the sets [1.001,1.499] and [-0.5,0.5] taking 500 and 1000 equally spaced values, respectively.
For each considered pair (α, Δ_d), system (<ref>)-(<ref>) was initialized with (e(0), x(0)) ∈ [-10,10]^2, taking 1000 equally spaced values per coordinate. Note that [-10,10]^2 can be taken as representative of the whole state space because for larger values of (e, x) the quantization errors become negligible. Outside that set one can therefore assume the system to behave linearly, causing any trajectory to end up in the set itself.
The region delimited by the closed curve in Figure <ref> includes all pairs (α, Δ_d) in the grid for which all the considered initial conditions cause the trajectory to end up in region (<ref>), and therefore in the invariant set identified in Theorem <ref>. Note that the values Δ_d=± 0.5 are not included in that region.
This leads to the following statement, which is not a theorem since it is based on a numerical analysis, not on a formal proof.
If 5/4 < α < 3/2 and |Δ_d|<0.5, the invariant set in Theorem <ref> is globally attractive.
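A coarse version of the grid study described above can be sketched as follows (much smaller grids and a finite simulation horizon of our choosing; the full study used the finer grids listed before):

import math

def rnd(z):
    i = math.floor(z) if z >= 0 else math.ceil(z)
    return i if abs(z - i) < 0.5 else i + (1 if z > 0 else -1)

def reaches_region(alpha, dd, e0, x0, horizon=400):
    # simulate the (e, x) dynamics; report whether region (<ref>) is entered
    e, x, s = e0, x0, (dd > 0) - (dd < 0)
    for _ in range(horizon):
        if abs(e) < 0.5 and abs(x) < 0.5 and 1 <= alpha - x * s < 1.5:
            return True
        e1 = e + rnd(x) + dd
        x = rnd(x) + rnd(e) if rnd(e1) == 0 else x + rnd(e) - alpha * rnd(e1)
        e = e1
    return False

alphas = [1.05 + 0.10 * i for i in range(5)]   # 1.05 ... 1.45
dds = [-0.45 + 0.15 * j for j in range(7)]     # -0.45 ... 0.45
grid = [-10 + 2.5 * g for g in range(9)]       # initial e and x values
for alpha in alphas:
    for dd in dds:
        ok = all(reaches_region(alpha, dd, e0, x0) for e0 in grid for x0 in grid)
        print(f"alpha={alpha:.2f} Delta_d={dd:+.2f} attractive={ok}")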
In the case when |Δ_d|=0.5, the numerical analysis revealed the existence of an invariant set where the excursion in amplitude of the quantized error is equal to 2.
In particular, for Δ_d = -0.5 we get
(ρ(e), ρ(x)) ∈ { (-1,2), (1,-1) },
whereas for Δ_d = 0.5
(ρ(e), ρ(x)) ∈ { (-1,1), (1,-2) }.
The invariant sets (<ref>) and (<ref>) can be reached only from a subset of initial conditions, since Theorem <ref> holds for any Δ_d.
It is worth stressing that invariant sets with amplitude 2 for the quantized error excursion only appeared when |Δ_d| = 0.5. An example is shown in Figure <ref>.
For |Δ_d| ≠ 0.5, if α<5/4 our numerical study showed the existence of two invariant sets, both with unitary excursion amplitude, one of them being that in Theorem <ref>.
Figure <ref> shows an example of an invariant set that is different from the one in Theorem <ref> (but still has a quantized error excursion of amplitude 1). Such an invariant set
(ρ(e), ρ(x)) ∈ { (0,1), (1,0) },
is obtained for α = 1.1 (< 5/4), Δ_d = -0.3, when the system (<ref>) and (<ref>) is initialized at e(0) = -0.2 and x(0) = 0.6. Note that the non-quantized control input behavior shown in Figure <ref> is not easy to predict. On the contrary, the non-quantized control input behavior for the invariant set in Theorem <ref> can be easily predicted based on α (see Proposition <ref>).
Since α is a design parameter, we can choose it so as to enforce the presence only of the invariant set that is fully characterized in Theorem <ref>, for all disturbances except for those with |Δ_d| = 0.5.
§ LIMIT CYCLE ANALYSIS
In this section, we analyze the evolution of the switched control system within the invariant set in Theorem <ref>, and determine possible periodic solutions for the error e and the residual control variable x, jointly with their period p. In particular, we show in Theorem <ref> that a necessary and sufficient condition for the presence of periodic solutions is that the disturbance rounding error, hence the disturbance, is a rational number. When dealing with applications in the computing systems domain, rational disturbances can indeed occur due to the inherently discrete nature of the signals and processes involved.
Note also that Theorem <ref> provides a necessary and sufficient condition for the existence of a periodic solution so that we can state that for any irrational disturbance, no periodic solution exists, thus further characterizing the behavior of the switched control system.
We can now start the analysis by defining the notion of n-periodic limit cycle of period p.
An n-periodic limit cycle of period p, with n, p ∈ ℕ, is a solution of the switched control system (<ref>)-(<ref>) such that
e(k+p) = e(k)
x(k+p) = x(k)
for all k ≥ k̄, for some k̄ ≥ 0, and such that the quantized state (ρ(e), ρ(x)) switches n times per period.
A necessary and sufficient condition for the switched control system to evolve according to an n-periodic limit cycle of period m within the invariant set in Theorem <ref> is that the disturbance rounding error is rational and satisfies
|Δ_d| =n/m, with 1 ≤ n < m, and n, m ∈ℕ.
Note that when the system is within the invariant set of Theorem <ref>, the algebraic relation (<ref>) holds. Therefore, we just need to show that the state variable e evolves on the n-periodic limit cycle of period m.
We start by showing that a necessary condition for this to hold is that |Δ_d| is rational.
Suppose that at a certain time step h the system is within the (minimal) invariant set of Theorem <ref>. Assume also, without loss of generality, that (ρ(e(h)), ρ(x(h))) = (0,0). This entails that |e(h)| < 0.5 and that the input ρ(u(h)) + d to the process is equal to Δ_d, since ρ(u(h)) = -ρ(d) from equation (<ref>).
Indeed, the input to the process keeps constant and equal to Δ_d for k time steps, until e(h+k) becomes larger than or equal to 0.5 if Δ_d > 0, or smaller than or equal to -0.5 if Δ_d < 0. At time h+k, then, ρ(e(h+k)) ≠ 0 and the pair (ρ(e(h+k)), ρ(x(h+k))) switches to (σ(Δ_d), -σ(Δ_d)) in the invariant set. The number of steps k is given by the following formula
k = λ(Δ_d, x^+(0)) := ⌈ (0.5 σ(Δ_d) - x^+(0)) / Δ_d ⌉,
where we set e(h) = x^+(0).
Observe that λ(Δ_d, x^+(0)) approaches infinity as Δ_d tends to zero, in accordance with Theorem <ref>, where the invariant set is composed only of the value 0 if Δ_d = 0.
The value x^+(1) taken by e(h+k+1) can be obtained as
x^+(1) = x^+(0) + λ(Δ_d, x^+(0)) Δ_d + Δ_d - σ(Δ_d),
since the process integrates an input that is constant and equal to Δ_d for k = λ(Δ_d, x^+(0)) steps, and then receives as input ρ(u(h+k)) + d = ρ(x(h+k)) + Δ_d = -σ(Δ_d) + Δ_d at time h+k.
If x^+(1) is equal to x^+(0), then the evolution of the state e of the system is periodic with period λ(Δ_d, x^+(0)) + 1, and we have a 1-periodic limit cycle of period k+1, because one single switch is needed within the invariant set to reset the state of the process to its original value, and this required k+1 steps.
If x^+(1) x^+(0), we can further iterate the same reasoning by considering i>1 switches within the invariant set and computing x^+(i), i>1.
If there exists some integer N>1 such that x^+(N+h)=x^+(h), for some h≥ 0, then, the state of the process evolves according to an N-periodic limit cycle.
More specifically, we need to compute
x^+(N+h) = x^+(h) +
+ ∑_i=0^N-1λ(Δ_d, x^+(i+h)) Δ_d + N(Δ_d - σ(Δ_d)),
and set x^+(N+h)=x^+(h), which reduces to solving
(∑_i=0^N-1λ(Δ_d,x^+(i+h)) + N) |Δ_d|=N.
For this equation to admit a solution we must have
|Δ_d|=N/L,
where we set L= (∑_i=0^N-1λ(Δ_d,x^+(i+h)) + N).
Note that since L is an integer larger than N, for a periodic trajectory of the state process e to exist, the absolute value of disturbance quantization error |Δ_d| must be a rational number of the form n/m with n<m. Irrational values for |Δ_d| are then incompatible with periodic solutions.
We now show that the condition |Δ_d| = n/m being a rational number is sufficient to have an n-periodic limit cycle of period m.
Observe that, by definition of λ as the minimum number of steps needed to have ρ(e(h+k)) ≠ 0 starting from e(h) = x^+(0), we have that
e(h+k) = x^+(0) + λ(Δ_d, x^+(0)) Δ_d
∈
[0.5, 0.5 + Δ_d), Δ_d > 0
(-0.5 + Δ_d, -0.5], Δ_d < 0.
This entails that
x^+(1) in (<ref>) satisfies
x^+(1) ∈
[-0.5 + Δ_d, -0.5 + 2Δ_d), Δ_d > 0
(0.5 + 2Δ_d, 0.5 + Δ_d], Δ_d < 0
irrespective of x^+(0). And this holds true for every value x^+(i) taken by e after i switches within the invariant set, with i ≥ 1.
Let |Δ_d| = n/m, where n and m are coprime integers with m > n ≥ 1; we next show that, after at least one switch has occurred within the invariant set, the switched control system starts evolving according to an n-periodic limit cycle of period m. We refer to the case when Δ_d > 0. The same reasoning applies to Δ_d < 0.
If there were no further switches after time h+k+1, when e(h+k+1) = x^+(1), then e(h+k+1+m) would take values in [x^+(1), x^+(1)+mΔ_d] = [x^+(1), x^+(1)+n], since the system would integrate a constant input equal to Δ_d for m steps. However, as soon as e becomes larger than or equal to the threshold 0.5, its value is decreased by 1, so that if there were exactly n switches in the time frame [h+k+1, h+k+1+m], then e(h+k+1+m) = x^+(1) = e(h+k+1) and a periodic solution would be in place. Now, in order to show that there are exactly n switches in the time frame [h+k+1, h+k+1+m], one should simply check that [x^+(1), x^+(1)+n] contains {0.5+i, i=0,1, …, n-1} and does not contain 0.5+n.
Clearly, 0.5+i is contained in [x^+(1), x^+(1)+n] for i=0 and i=n-1, since x^+(1)>-0.5.
Now we need to show that x^+(1)+n<0.5+n to conclude that [x^+(1), x^+(1)+n] does not contain 0.5+n. Indeed, since x^+(1)<-0.5 +2Δ_d, we have that x^+(1)+n<n+2Δ_d-0.5, which entails x^+(1)+n<n+0.5 given that Δ_d ≤ 0.5.
This concludes the proof.
Figure <ref> plots the evolution of the state of the control system for Δ_d = √(2)/3, α = 1.1, e(0) = 0.2, and x(0) = 0.6. Notice that since Δ_d is irrational, the obtained trajectory is not periodic.
Figure <ref> shows an example of a 1-periodic limit cycle of period 5 obtained for Δ_d = 0.2 = 1/5, starting from the initial condition e(0) = -0.4, x(0) = 0.2. Figure <ref> shows a 2-periodic limit cycle of period 5 for Δ_d = -0.4 = -2/5, starting from the same initial condition e(0) = -0.4, x(0) = 0.2.
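The periods predicted by Theorem <ref> can be checked in exact arithmetic; a sketch using rational numbers (α = 1.1 and initial conditions consistent with the examples above; helper names are of our choosing):

import math
from fractions import Fraction

def rnd(z):
    i = math.floor(z) if z >= 0 else math.ceil(z)
    return i if abs(z - i) < 0.5 else i + (1 if z > 0 else -1)

def steady_period(dd, alpha=Fraction(11, 10), transient=500, horizon=200):
    # run the (e, x) dynamics with exact rationals, then measure the period
    # of the steady-state error sequence
    e, x = Fraction(-2, 5), Fraction(1, 5)     # e(0) = -0.4, x(0) = 0.2
    seq = []
    for k in range(transient + horizon):
        e1 = e + rnd(x) + dd
        x = rnd(x) + rnd(e) if rnd(e1) == 0 else x + rnd(e) - alpha * rnd(e1)
        e = e1
        if k >= transient:
            seq.append(e)
    for p in range(1, horizon // 2):
        if all(seq[i] == seq[i + p] for i in range(horizon - p)):
            return p
    return None

print(steady_period(Fraction(1, 5)))    # |Delta_d| = 1/5 -> period 5
print(steady_period(Fraction(-2, 5)))   # |Delta_d| = 2/5 -> period 5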
The following corollary directly follows from Theorem <ref> and Theorem <ref>, and summarizes the results of the limit cycle analysis.
If 1 < α < 3/2 and |Δ_d| = n/m, where n, m ∈ ℕ, 1 ≤ n < m, and |Δ_d| < 1/2, then the switched control system (<ref>)-(<ref>) admits a limit cycle where the error e is kept within [-0.5+Δ_d, 0.5+Δ_d) if Δ_d > 0, and within (-0.5+Δ_d, 0.5+Δ_d] if Δ_d < 0, with a corresponding excursion of the quantized error equal to 1.
We only need to show that e is kept within [-0.5+Δ_d, 0.5+Δ_d) if Δ_d > 0, and within (-0.5+Δ_d, 0.5+Δ_d] if Δ_d < 0. Suppose that Δ_d > 0. By Theorem <ref> and Proposition <ref>, we have that at some time k > 1 after entering the invariant set ρ(e(k)) = ρ(x(k)) = 0, and x(k) = 0. Then, the error evolves starting from |e(k)| < 1/2, according to (<ref>), which becomes:
e(k+h) = e(k+h-1) + Δ_d
(since ρ(x) = x = 0), until 1/2 ≤ e(k+h) < 1/2 + Δ_d, when ρ(e(k+h)) = 1 and hence x(k+h) = -α ρ(e(k+h)) = -α. At time k+h+1, the error is reset to e(k+h+1) = e(k+h) + Δ_d - 1, so that -1/2 + Δ_d ≤ e(k+h+1) < -1/2 + 2Δ_d, and we are back to the integral dynamics (<ref>) because ρ(e) = 0 and x = 0 again. From this analysis it follows that -1/2 + Δ_d ≤ e < 1/2 + Δ_d. Analogous derivations can be carried out for the case Δ_d < 0.
The reachability numerical analysis in Section <ref> shows that the limit cycle in Corollary <ref> is globally attractive if we restrict α to the range 5/4< α < 3/2.
§ SIMULATION RESULTS
We first present some simulation results comparing the three cases when no quantization is present in the control scheme, and when quantization is present and either the PI or its switched extension is implemented. Notice that in the absence of quantization the PI controller and its switched extension coincide. Figure <ref> reports the simulation runs for the three cases for a finite horizon of 30 time units. In all three plots the error is normalized, i.e., a unitary resolution is assumed. The value used for α is 11/8, and Δ_d = √(2)-1, while the system state is initialized at e(0) = 0, and u(0)= 0.
While in the absence of quantization the error converges to 0 with the designed controller, when quantization is in place it is no longer possible to guarantee convergence to zero. In the case of PI control, the quantized error oscillates in the range [-1,1], while in the case of its switched extension, it ends up oscillating in the range [0,1], according to Statement <ref> and Theorem <ref>. It is worth noticing that for the chosen value of Δ_d the evolution of the control system state cannot be periodic by Theorem <ref>. This is reflected in the evolution of e, which oscillates in the gray area but always assumes different values in the set.
We now consider a time-varying disturbance, which is initially constant and takes the value d = d_1 = 2.6 (Δ_d = -0.4 <0), then, starts decreasing linearly at time k=20 till it hits the value d = d_2 = 2.4 at k=40 (Δ_d = 0.4>0), and finally keeps constant.
The results of the simulation with the switched controller are shown in Figure <ref>, with the error e, the control signal u, and the disturbance d on the left column, and their quantized versions on the right column. The system is initialized with e(0) = 0, u(0) = 0, and we set α = 11/8.
Note that the abrupt change of sign of Δ_d when the disturbance crosses the threshold 2.5 at time k=30 causes a transient which can be seen from the error behavior, and it is reflected in the quantized version only later, at time k=37, when the quantized error starts oscillating within [-1,1] and, correspondingly, the quantized control input oscillates within [-4,-1]. Such oscillations stop when the (new) invariant set described in Theorem <ref> is reached, according to Statement <ref>. The quantized error then exceeds the minimum resolution only temporarily during the (delayed) transient caused by the threshold crossing.
In the case of the standard PI controller, the quantized error and the quantized control input keep oscillating within [-1,1] and [-4,-1], respectively, for the whole time horizon, irrespective of the fact that the disturbance crosses the threshold (see Figure <ref>).
If we change d_2 to 2.501, the threshold 2.5 is not crossed by the disturbance and the system keeps evolving in the same invariant set (see Figure <ref>).
The results presented next refer to a simulation campaign aimed at investigating the effect of the disturbance magnitude on the control performance, with and without the proposed switched extension.
The campaign was carried out by choosing the values of d reported in Table <ref>. For each value of d, two models – one with bare PI control and the other with switched PI – were initialized to e(0) = 0 and u(0) = 0, and then subjected to a constant disturbance of the selected amplitude. Data were collected from the two simulated experiments just described over a finite horizon of H = 1000 time units.
We assess performance by computing the Root Mean Square (RMS) value of the quantized error, which is defined as:
RMS_e = √( (1/H) ∑_i=0^H-1 ρ(e(i))^2 ),
where H is the length of the simulation.
Table <ref> summarizes the results and shows that the proposed switched scheme decreases the RMS_e by 30%.
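A sketch of the simulation campaign follows (the specific disturbance values below are illustrative, not those of Table <ref>):

import math

def rnd(z):
    i = math.floor(z) if z >= 0 else math.ceil(z)
    return i if abs(z - i) < 0.5 else i + (1 if z > 0 else -1)

def rms_quantized_error(d, alpha=1.375, switched=True, H=1000):
    # closed-loop run from e(0) = u(0) = 0; returns the RMS of rho(e)
    e, u, qe_prev, acc = 0.0, 0.0, 0, 0
    for _ in range(H):
        e = e + rnd(u) + d
        qe = rnd(e)
        ut = qe_prev - alpha * qe
        u = rnd(u + ut) if (switched and qe == 0) else u + ut
        qe_prev = qe
        acc += qe * qe
    return math.sqrt(acc / H)

for d in (1.2, 2.4, 3.7):   # illustrative disturbance amplitudes
    print(d, rms_quantized_error(d, switched=False),
          rms_quantized_error(d, switched=True))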
§ CONCLUSIONS AND FUTURE WORK
A switched control scheme was proposed for reducing the degradation effect due to the quantization of both control and controlled variables in a system described as an integrator with unit delay. Set-invariance and limit-cycle analyses were performed, jointly with a numerical reachability study, to assess the performance of the switched control scheme and provide guidelines for controller tuning. In particular, necessary and sufficient conditions for the presence of limit cycles of period p were discussed. Finally, simulation results confirm the effectiveness of the proposed solution.
Future work will concern the evaluation of the proposed approach in specific types of applications, where the quantization effect is relevant.
The results are confined to a specific class of systems. Further investigations are also needed to extend the proposed approach to a larger class of problems.
| This paper deals with quantized feedback control for discrete-time linear time-invariant control systems. In particular, we consider the effect of quantization of both the measurements and the control actions.
In general, any digital implementation of a control system entails input and output quantization. This is typically the case when the output measurements used for feedback and the control actions applied to the controlled process are transmitted via a digital communication channel, <cit.>.
Depending on the specific application, quantization effects can become relevant and significantly affect the control system performance. While in some applications the quantization effects can be neglected, when high-precision control is needed, quantization has to be explicitly accounted for in control design.
Given a system that is stabilized by a standard linear time-invariant feedback controller when there is no quantization, the problem addressed herein is to find a switched controller that steers the system towards the smallest possible invariant set that includes the origin when its control input and output are quantized. We focus, in particular, on a discrete time linear system described by an integrator with a one time-unit delay. The system is affected by an additive constant bias on the control input, and both control input and controlled output measurements are quantized via a rounding operator.
Despite its simplicity, this system structure appears in several problems pertaining to the domain of computing systems. For example, it represents the dynamics from reservation to cumulative CPU time in task scheduling, a typical source of disturbance being the latency of the preemption interrupt <cit.>. It models the disturbance-to-error dynamics in clock synchronization for wireless sensor networks, where the most relevant source of disturbance is given by temperature variations in the oscillator crystals <cit.>. It plays a role in server systems <cit.>, queuing systems <cit.>, and so forth, as can be observed from the variety of problems mentioned in <cit.>. Quantizers are present in virtually all of these applications, and dealing with their effect is important when high performance is required.
In fact, several of the problems just listed require zero error in the presence of constant inputs, hence the relevance of quantization becomes apparent. Practically constant disturbances are, for example, the thermal disturbances experienced by wireless nodes in a climate-controlled environment. In such an application context, temperature variations are very small and slow, because they are smoothed by the environment's thermal dynamics and counteracted by temperature control; abrupt variations may occur, but only sporadically, for example when turning the air conditioners on once per day or week.
The considered linear system is stabilizable and, in the absence of quantization, one can introduce a standard proportional-integral (PI) controller to compensate for a constant load disturbance and bring the state trajectories to the zero equilibrium. The presence of input and output quantizers degrades the
PI controller performance, introducing oscillations in the quantized output with an excursion equal to twice the quantizer resolution. Such oscillations may not be admissible when dealing with high-precision computing systems. Our goal in this paper is to design a better-performing controller, while maintaining a PI-like structure in order to ease implementation and tuning. Invariant-set and reachability analyses are the methods adopted to assess the properties of the designed control scheme.
More precisely, we propose a switched variant of the PI controller to address quantization and minimize its effect on the feedback control system performance. We then show that when the disturbance is constant, the switched control solution presents an invariant set for the quantized control input and output variables such that the quantized output is either zero or has a unitary amplitude (corresponding to the least significant bit, hence to the minimum representable quantity). A numerical reachability analysis study shows that, if the PI controller is suitably tuned, this invariant set is a global attractor. Necessary and sufficient conditions for the existence of a periodic solution in the (unquantized) control input and output variables are given as well.
Many papers in the literature address control of quantized linear systems.
Most of them focus on stabilization at the zero equilibrium in the absence of disturbances.
Contributions can be classified based on the characteristics of the adopted quantizer. If the quantizer has a finite resolution, like in our paper where uniform quantization is adopted, then, <cit.> shows that classical stability cannot be achieved and introduces the practical stability
notion for quantized systems. More specifically, <cit.> proves that,
given an unstable discrete time system that is stabilizable, if the state measurements are quantized, then, there is no control strategy that makes all trajectories of the quantized state-feedback system asymptotically converge to zero, and only convergence to an invariant set around zero can be obtained.
Classical results on asymptotic stability of the origin are recovered in <cit.> by changing the resolution of the quantizer depending on the state behavior, and hence making the resolution higher and higher while approaching the origin. This approach has been extended to input to state and l_2 stabilization in presence of a disturbance input in <cit.> and <cit.>, respectively.
When a logarithmic quantizer with (countably) infinite quantization levels is adopted,
the resolution of the quantizer becomes arbitrarily fine close to the origin, and global asymptotic stability
can be achieved, <cit.>. However, when finite-level logarithmic quantizers
are used, practical stability results can only be proven. Analysis of practical stability and constructive results on how to design finite-level logarithmic state quantizers guaranteeing practical stability are given in, e.g., <cit.>.
It is worth noticing that most papers in the literature consider quantization of either the control input (see, e.g., <cit.>) or the controlled output (see, e.g., <cit.>), whereas only a few address the set-up considered in this paper, where both control input and controlled output are quantized. This is the case in <cit.>.
Whereas logarithmic quantizers with infinite quantization levels are considered in <cit.>, in <cit.>, input and output quantizers are assumed to have a finite number of quantization levels. Practical stabilization of a double integrator system is studied in <cit.>, showing how the parameters defining the quantizers should be set for the practical stability result to hold. Extension to higher order integrator models is outlined as well, focusing however on stabilization without disturbances acting on the system.
The work closest to the present paper is <cit.>, where pre-defined finite resolution quantizers on both input and output are given and a feedback controller is designed to achieve some control goal. More precisely, in <cit.>, practical stabilization of unstable discrete time linear systems is addressed, and a quantized static state-feedback controller is designed that brings the state of the system to some invariant set around the origin in a finite number of steps. Our approach differs from <cit.> in that we address disturbance compensation, and we introduce a switched output-feedback controller to make the state of the controlled system reach an invariant set around the origin. Disturbance compensation and dynamic state/output-feedback control are not addressed in <cit.> and related work. In turn, while the methodology in <cit.> is of general applicability, our design is tailored to a simple system model and not easily extendable to different higher dimensional models.
The rest of the paper is organized as follows. Section <ref> first describes the control scheme without switching, and highlights how quantization deteriorates the performance of the control system. The switched solution that allows for minimizing the effect of quantization is then presented in the same section. Section <ref> provides necessary and sufficient conditions for entering the invariant set. A numerical reachability analysis study is performed in Section <ref> for identifying the controller parameter tuning that makes such an invariant set a global attractor. Section <ref> gives necessary and sufficient conditions for the existence of periodic solutions. Finally, Section <ref> provides evidence of the effectiveness of the approach via a simulation study, while Section <ref> concludes the paper. | null | null | null | null | null |
http://arxiv.org/abs/1701.07628v2 | 20170126093816 | Second law of thermodynamics with quantum memory | [
"Li-Hang Ren",
"Heng Fan"
] | quant-ph | [
"quant-ph"
] |
Beijing National Laboratory for Condensed Matter Physics, Institute of
Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
[email protected]
Beijing National Laboratory for Condensed Matter Physics, Institute of
Physics, Chinese Academy of Sciences, Beijing 100190, China
School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100190, China
Collaborative Innovation Center of Quantum Matter, Beijing 100190, China
We design a heat engine with multiple heat reservoirs,
an ancillary system, and a quantum memory. We then derive an inequality related to the second law
of thermodynamics, and give a new limit on the work gain from the engine by analyzing the changes of the entropy and
of the quantum mutual information during the process.
In addition, and remarkably, by combining two independent engines and with the help of the entropic uncertainty relation
with quantum memory, we find that the total maximum work gained from the two heat engines should be larger than a quantity related to the quantum
entanglement between the ancillary system and the quantum memory. This result provides a lower bound for the maximum work extracted,
in contrast with the upper bound in the conventional second law of thermodynamics. However, the validity of this inequality depends on whether the
maximum work can achieve the upper bound.
03.67.-a, 05.70.-a, 89.70.Cf, 03.65.Ta
Second law of thermodynamics with quantum memory
Heng Fan
Received December 14, 2016; accepted January 23, 2017
=========================================================
§ INTRODUCTION
Maxwell's demon plays an important role in the history of thermodynamics and information theory <cit.>.
Maxwell first proposed that a powerful demon might conduct microscopic operations to break the
second law of thermodynamics.
However, according to Landauer's principle, the erasure of information is inevitably accompanied by energy consumption <cit.>, which saves the second law; see also <cit.>.
Additionally, with this demon, one can relax the restrictions imposed by the second law on the energy exchanged between a system and its surroundings,
and new thermodynamic inequalities
have been studied; see <cit.> and a review <cit.>.
In conventional thermodynamics, the second law gives
W_ext≤-△ F, where W_ext is the extractable work from the system and
F=U-TS is the free energy during the isothermal process.
Due to Maxwell's demon, this thermodynamic expression can be extended to a more favorable form
with discrete quantum feedback control <cit.>:
W_ext≤-△ F+k_B T I,
in which k_B is the Boltzmann constant and I is the quantum-classical mutual information
describing the mutual information between a fixed quantum system
and the classical outcome information obtained by a quantum measurement.
This quantum-classical mutual information is an extension of the standard quantum mutual information originally defined between
two subsystems. One may then observe that the maximum work that can be extracted may exceed that in conventional
thermodynamics; however, the marginal part is restricted by the quantum-classical mutual information term.
This inequality constitutes an extension of the second law of thermodynamics.
The above statement shows that information can be exploited to extract physical work,
which may be called an information heat engine <cit.>. Szilard first explicitly pointed out the
significance of information in thermodynamics, who proposed the so-called Szilard engine (SZE) to realize Maxwell's demon <cit.>.
This SZE involves a single-molecule gas in a box, immersed in a thermal reservoir at temperature T,
and an external demon, see also <cit.>. The demon inserts a partition into the middle of the box,
then measures on which side the molecule is trapped and performs expansion to extract work W_ext=k_B T ln2.
In the SZE, the information about whether the molecule is on the left or the right is exploited to extract physical work.
Both theoretical and experimental studies on the information heat engine, as well as its extension to the quantum case,
have been performed <cit.>.
Quantum resources for quantum information processing such as quantum entanglement,
quantum discord or quantum coherence may be converted to extractable work.
It is proved that the work gain may result from the entanglement between subsystems <cit.>
because of deep connections between thermodynamics and the theory of entanglement <cit.>.
Also one can devise a heat engine which can be driven by purely quantum information
such as the quantum discord <cit.>. Recently, an experimental investigation showed that quantum discord is necessary for energy transport
at a nanoscale aluminum-sapphire interface <cit.>.
For a quantum system, a quantum memory may be available, and
quantum entanglement or some quantumness of correlations may play a critical
role in quantum information processing <cit.>. With a quantum memory,
entropic uncertainty relations generalizing the Heisenberg uncertainty principle <cit.>
have been studied <cit.>.
We notice that entanglement and quantum discord appear to be resources
for work extraction in thermodynamics. It is therefore desirable to construct a heat engine
in which quantum correlations or entanglement are involved in the process.
In this paper, we design a heat engine which includes the system S in contact
with independent heat baths at possibly different temperatures, an ancillary
system A, and a quantum memory B; see Fig. <ref>. This heat-engine
setup is similar to that in Ref. <cit.>,
but with an ancillary system A and a quantum memory B.
With the help of the quantum memory, we can then characterize the role of quantum
correlations in the thermodynamic cycle. Thus, a new second-law-of-thermodynamics
inequality can be obtained.
This paper is organized as follows.
In Section II,
we design a physical model realizing the information heat engine
and describe its thermodynamic process in detail.
In Section III,
we derive the extractable work from this engine and
discuss two important cases: the isothermal process and a Carnot-like cycle.
In Section IV,
we consider the operating processes of two independent engines with different measurement bases.
Based on the entropic uncertainty relation with quantum memory,
a lower bound on the maximum extractable work with two measurements is presented.
In Section V, we give the conclusion and discussion.
§ SET UP OF THE HEAT ENGINE AND ITS PROCESS
The heat engine involves four parts: the system S, a set of thermal reservoirs R={R_1, ⋯, R_n}, and
the composite quantum system M consisting of the ancillary system A and the quantum memory B; see Fig. <ref>.
The total Hamiltonian is written as
H(t)=H_SR(t)+H^int_SM_AB(t)+H_M_AB(t),
where H_SR(t)=H_S(t)+∑_m=1^n[H_SR_m^int(t)+H_R_m],
describing the Hamiltonian of the system, reservoirs and their interaction.
During the operating process, system S can contact R_1, R_2,…, R_n
which are at respective temperatures T_1, T_2,…, T_n.
By contacting S with R_1 at the start and the end of the process,
system S is in thermodynamic equilibrium in the initial and final state.
However, the system may not be in thermodynamic equilibrium between the initial and final state.
For convenience, we denote the temperature of S in the initial and final states as T=T_1.
At the initial and final times, we assume H_SR_m^int(t_i)=H_SR_m^int(t_f)=0,
H_S(t_i)=H_S^(i) and H_S(t_f)=H_S^(f).
The general process of the engine is divided into four stages, similar to that
in Ref. <cit.> but with different operations, since we have the additional quantum
systems A and B. This heat engine is also similar to that in Ref. <cit.> but
with multiple reservoirs at temperatures T_1,T_2,...,T_n.
Stage (i). At time t_i, the system and reservoirs are in thermodynamic equilibrium respectively, that is,
they are in canonical distribution. The initial state reads
ρ^(i) = exp(-β H_S^(i))/Z_S^(i)⊗exp(-β_1 H_R_1)/Z_R_1⊗⋯
⊗ exp(-β_n H_R_n)/Z_R_n⊗ρ^(i)_AB
≡ ρ^(i)_SR⊗ρ^(i)_AB
where β_n=1/(k_B T_n) is related with temperature T_n,
and the partition functions are Z_S^(i)=[exp(-β H_S^(i))] and Z_R_n=[exp(-β_n H_R_n)].
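For illustration, such canonical states can be constructed numerically for any small Hamiltonian; the sketch below uses a hypothetical two-level H only to demonstrate the construction of exp(−βH)/Z.

import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta):
    """Canonical state exp(-beta*H)/Z with Z = tr[exp(-beta*H)]."""
    rho_unnorm = expm(-beta * H)
    Z = np.trace(rho_unnorm).real
    return rho_unnorm / Z, Z

H = np.diag([0.0, 1.0])            # hypothetical two-level Hamiltonian (energies 0 and 1)
rho, Z = gibbs_state(H, beta=1.0)
print(np.diag(rho).real, Z)        # Boltzmann populations, approximately [0.731, 0.269]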
Stage (ii). System S begins to interact with the surroundings to extract work.
In general, any thermodynamic process between the system and the reservoirs
can be expressed by a unitary evolution.
With unitary operation U^(1)=I_AB⊗ U^(1)_SR as shown in Fig. <ref>, the initial state becomes
ρ^(1)=U^(1)ρ^(i)U^(1)†,
in which I_AB is the identity operator for M consisting of ancillary state A and memory B.
During this process, the composite system M is left unchanged.
In stage (iii), the composite system M, which plays the role of Maxwell's demon, starts to work.
In order to make use of quantum information to extract work, we let the system S
interact with the ancillary system A so that quantum information is exchanged,
and then we perform a measurement on A described by positive operator valued measures (POVMs).
Specifically, this measurement process is performed on A with the rank-1 projectors {Π_A^k}.
So the measurement process is implemented by performing a unitary transformation U^(2) on SA
followed by a projection measurement {Π_A^k:|k⟩⟨ k|} on A.
The density matrix is given by
ρ^(2) = ∑_kΠ_A^k U^(2)ρ^(1)U^(2)†Π_A^k
= ∑_k p_k |k⟩_A⟨ k|⊗ρ_BSR^(2)k
The measurement outcome p_k= tr[Π_A^kU^(2)ρ^(1)U^(2)†Π_A^k] is registered by the memory
and the post-measurement state is ρ_BSR^(2)k= tr_A[Π_A^kU^(2)ρ^(1)U^(2)†Π_A^k/p_k].
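A minimal numerical sketch of this measurement step (rank-1 projectors Π_A^k = |k⟩⟨k| applied to a generic density matrix; the single-qubit state below is a hypothetical example, not the engine's actual ρ^(1)):

import numpy as np

def projective_measurement(rho, projectors):
    """Return (p_k, post-measurement state) pairs for rank-1 projectors."""
    results = []
    for P in projectors:
        p = np.trace(P @ rho @ P).real
        results.append((p, (P @ rho @ P) / p if p > 0 else None))
    return results

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # hypothetical qubit state
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])        # computational-basis projectors
for p, post in projective_measurement(rho, [P0, P1]):
    print(p)  # outcome probabilities 0.7 and 0.3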
Stage (iv) is the feedback control. It is performed discretely, depending on the measurement outcome, by
applying corresponding operations on the system S and the multi-heat-reservoir R.
Feedback control is also a quantum process, the unitary operator is written as,
U^(3)=I_B⊗∑_k|k⟩_A⟨ k|⊗ U^k_SR.
The whole process of stage (iii) and the feedback control of stage (iv) are schematically
presented in Fig. <ref>.
Now, the final state becomes
ρ^(f)=U^(3)ρ^(2)U^(3)†.
In the last equilibration process,
the system and heat reservoirs evolve to reach thermodynamic equilibrium
at temperatures T and T_1,...,T_n, respectively, similarly to <cit.>. It
can be described by a unitary transformation U^(f) on ρ^(f).
That is, the final state is U^(f)ρ^(f)U^(f)†.
In the following, entropy calculations will be performed.
Since S(U^(f)ρ^(f)U^(f)†)=S(ρ^(f)),
for simplicity we sometimes write ρ^(f) for the final state instead of
U^(f)ρ^(f)U^(f)† when there is no risk of confusion.
Although, from a macroscopic point of view, the system and reservoirs
are in equilibrium at the end,
the final state U^(f)ρ^(f)U^(f)† need not
be of the rigorous canonical form. In order
to evaluate the energy of the system, we introduce the standard canonically distributed
state as the reference state,
ρ^(ref)_SR = exp(-β H_S^(f))/Z_S^(f)⊗exp(-β_1 H_R_1)/Z_R_1⊗⋯
⊗ exp(-β_n H_R_n)/Z_R_n,
where Z_S^(f)=[exp(-β H_S^(f))].
The reference state will be used for comparison with the final equilibrium state
U^(f)ρ^(f)U^(f)† at the same temperatures T and T_1,...,T_n.
§ MAXIMUM WORK EXTRACTED FROM THE HEAT ENGINE
We will proceed to calculate the net work gain from the heat engine by
analyzing the entropy change during the process.
The von Neumann entropy of the state ρ is defined as S(ρ)=-tr(ρlnρ).
The following discussion will involve techniques from quantum information <cit.>.
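A numerical sketch of this definition (natural logarithm, as in the text; zero eigenvalues are handled via the convention 0 ln 0 = 0):

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # apply the 0*ln(0) = 0 convention
    return float(-np.sum(evals * np.log(evals)))

print(von_neumann_entropy(np.eye(2) / 2))  # ln 2 ~ 0.693 for the maximally mixed qubit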
The difference between states ρ^(i) and ρ^(1) is a unitary transformation,
so the entropy remains invariant.
However, in the third stage, a measurement is performed.
Since projective measurements increase entropy,
one obtains
S[ρ^(i)]≤ S[ρ^(2)].
According to equalities (<ref>) and (<ref>), the state
ρ^(i) is a product state, so
S[ρ^(i)]=S[ρ^(i)_SR]+S[ρ^(i)_AB],
while the entropy of the state ρ^(2) takes the following form,
S[ρ^(2)]=H(p_k)+∑_k p_k S[ρ_BSR^(2)k].
With the help of the subadditivity of von Neumann entropy, S[ρ_BSR^(2)k]≤ S[ρ_SR^(2)k]+S[ρ_B^(2)k],
and the above facts, the initial inequality (<ref>) now takes the form,
S[ρ^(i)_SR]-∑_k p_k S[ρ_SR^(2)k]≤ H(p_k)+∑_k p_k S[ρ_B^(2)k]-S[ρ^(i)_AB].
By considering the form of U^(3) in Eq. (<ref>) and tracing out the subsystems AB, the final state can be extracted as,
ρ^(f)_SR= tr_AB[ρ^(f)]=∑_k p_k U^k_SRρ_SR^(2)kU^k†_SR.
Considering the concavity of the von Neumann entropy, we have
S[ρ^(f)_SR]≥∑_k p_k S[ρ_SR^(2)k]
By taking partial traces as in equation (<ref>), we can easily obtain
ρ^(f)_A, ρ^(f)_B and ρ_AB^(2); the results are
S[ρ^(f)_A]=S[ρ^(2)_A], S[ρ^(f)_B]=S[ρ^(2)_B],
S[ρ^(2)_AB]=H(p_k)+∑_k p_k S[ρ_B^(2)k].
We then substitute equations (<ref>), (<ref>) and (<ref>) into Eq. (<ref>). It is simple to show,
S[ρ^(i)_SR]-S[ρ^(f)_SR] ≤ S[ρ^(i)_SR]-∑_k p_k S[ρ_SR^(2)k]
≤ H(p_k)+∑_k p_k S[ρ_B^(2)k]-S[ρ^(i)_AB]
= S[ρ_AB^(2)]-S[ρ^(i)_AB]
≡ △ S_A+△ S_B-△ I .
Explicitly, the entropy change between the initial and final states of the system S and the multi-heat-reservoir R
satisfies
S[ρ^(i)_SR]-S[ρ^(f)_SR] ≤ △ S_A+△ S_B-△ I,
where △ S_A=S[ρ_A^(f)]-S[ρ_A^(i)] denotes the entropy
change for ancillary system A, and △ S_B denotes the
entropy change of the quantum memory B, and
I denotes the quantum mutual information defined as
I(X:Y)=S(ρ _X)+S(ρ _Y)-S(ρ _XY),
here △ I=I(A^(2):B^(2))-I(A^(i):B^(i)) represents the change of the quantum
mutual information of the composite system M, including both A and B, during the operation of the heat engine.
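These quantities are straightforward to evaluate numerically; the self-contained sketch below (two-qubit case only) computes I(A:B) for a Bell state, for which I = 2 ln 2 and the conditional entropy S(A|B) = −ln 2 is negative.

import numpy as np

def entropy(rho):
    """S(rho) = -tr(rho ln rho) via eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def partial_trace(rho, keep):
    """Partial trace of a two-qubit density matrix; keep=0 keeps A, keep=1 keeps B."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho_ab):
    return entropy(partial_trace(rho_ab, 0)) + entropy(partial_trace(rho_ab, 1)) - entropy(rho_ab)

psi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)
print(mutual_information(rho))                   # 2 ln 2 ~ 1.386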
According to Klein's inequality S(ρ‖σ)= tr(ρlnρ)- tr(ρlnσ)≥0,
where S(ρ‖σ) is the relative entropy, we have
tr[ρ_SR^(f)lnρ_SR^(ref)]≤ - S[ρ_SR^(f)].
Therefore with the help of relation (<ref>),
S[ρ^(i)_SR]+ [ρ_SR^(f)lnρ_SR^(ref)]≤△ S-△ I,
with △ S≡△ S_A+△ S_B.
From Eq.(<ref>) and Eq.(<ref>), we know that ρ^(i)_SR and ρ_SR^(ref) are canonical distributions.
Then, substituting their specific expressions into inequality (<ref>), we obtain
E_S^(i)-E_S^(f)+∑^n_m=1T/T_m(E_R_m^(i)-E_R_m^(f))
≤ F_S^(i)-F_S^(f)
+k_B T [△ S-△ I],
where E_S^(i)=tr(ρ^(i) H_S^(i)), E_R_m^(i)=tr(ρ^(i) H_R_m), F_S^(i)=-k_B Tln Z_S^(i),
E_S^(f)=tr(ρ^(f) H_S^(f)), E_R_m^(f)=tr(ρ^(f) H_R_m) by
comparing the final state to the reference state, and F_S^(f)=-k_B Tln Z_S^(f).
Among them E_S^(f)-E_S^(i)≡△ U_S is the change of the internal energy,
E_R_m^(i)-E_R_m^(f)≡ Q_m is the heat exchange between system and reservoir R_m and
F_S^(f)-F_S^(i)≡△ F_S is the difference in the Helmholtz free energy of system.
Then the above inequality becomes
-△ U_S +∑_m=1^n T/T_mQ_m≤-△ F_S +k_B T[ △ S-△ I].
This inequality is one of the main results in the present work.
Before interpreting this extension of the second-law inequality in more detail,
two special cases will first be illustrated.
If n=1, there is only one reservoir with temperature T.
As the work extractable from the engine is defined as
W_ext=-△ U_S+Q=-(E_S^(f)-E_S^(i))+E_R^(i)-E_R^(f),
the final result (<ref>) reduces to the simple form
W_ext≤ -△ F_S+k_B T △ S-k_B T △ I.
This inequality is the same as the result of Ref.<cit.>.
This inequality means that the extracted work can exceed the free-energy difference, which
generalizes the conventional second law of thermodynamics; however,
the marginal part is constrained by the difference between the changes of the entropy and of the quantum mutual
information, k_B T (△ S-△ I).
When n=2, the heat engine becomes an analogue of the Carnot cycle.
We consider two heat baths at respective temperatures T_H and T_L, with T_H>T_L.
After a cycle, we assume that △ U_S=△ F_S=0. Since W_ext=Q_H+Q_L,
we find,
W_ext≤(1-T_L/T_H)Q_H+k_B T_L(△ S-△ I)
The efficiency of the heat engine is
η=W_ext/Q_H=1-T_L/T_H+k_B T_L(△ S-△ I)/Q_H.
In contrast with the efficiency of the conventional Carnot cycle, η_carnot=1-T_L/T_H, the
heat engine presented here can exceed the conventional Carnot heat engine, but a new restriction
is still imposed by the last term. On the other hand, as discussed in <cit.>,
the engine does not form a true cycle because the final memories are not reset to their initial states.
If the reset is performed, the effect of the last term should vanish.
The heat engine for general n can be considered a simple extension of the n=1,2 cases, and the inequality
can be regarded as the second law of thermodynamics with quantum correlations in quantum information science.
We comment that the mutual information can be divided into two parts, the classical correlation and the quantum discord
<cit.>, so △ I=△ J+△δ.
From Eq.(<ref>), we have ρ_AB^(2)= ∑_kp_k|k⟩_A⟨ k|⊗ρ_B^(2)k,
which is a post-measurement density matrix, so the discord δ(B^(2)|A^(2))=0. Thus, for example for n=1, Eq.(15) becomes
W_ext≤ -△ F_S+k_B TC,
where we use the notation C=△ S-△ J+δ(B^(i)|A^(i)).
This shows that the initial quantum discord can be exploited to acquire physical work, in agreement
with the results in <cit.>. A similar form can also be obtained
for general n from Eq.(<ref>).
§ LOWER BOUND FOR WORK GAINED WITH DIFFERENT MEASUREMENT BASES
The heat engine in this work includes a measurement process at the third stage.
It is well known that Heisenberg asserted a fundamental limit to the precision of the outcomes
for a pair of incompatible observables <cit.>.
For a quantum system which can be entangled with a quantum memory, there exists
an entanglement-assisted entropic uncertainty relation for two incompatible measurements <cit.>,
and more generally for multiple measurements <cit.>.
Next, differently from the previous results, we study the heat engine
by considering two measurements at stage (iii).
For convenience, we consider only the single-reservoir case.
We emphasize that we next concentrate on the measurement process. The heat engine will be run twice, each time with one set of measurement operators,
K≡{Π_A^k: |k⟩⟨ k|} or M≡{Π_A^α _m: |α _m⟩⟨α _m|}; the whole system should be reset before
the second run begins. Equivalently, from a different point of view, we can consider two independent copies of the whole system, with a
single measurement performed on each. In the following we consider this setting.
We denote the maximal overlap of the two sets of projective operators as c= max_k,m|⟨ k|α _m⟩ |^2.
For state ρ^(i)_AB, we have the entropic uncertainty relation,
S(K|B)+S(M|B)≥log_2 1/c+S(A^(i)|B^(i)),
in which S(A^(i)|B^(i))=S(ρ^(i)_AB)-S(ρ^(i)_B) is the conditional entropy of the initial state
ρ _AB^(i); we point out that S(A^(i)|B^(i)) can be negative for an entangled initial state.
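A standard worked example from the entropic-uncertainty literature illustrates this: take ρ_AB^(i) to be a maximally entangled two-qubit (Bell) state, and let K and M be the Pauli-Z and Pauli-X eigenbases, so that c = 1/2. In bits (log base 2),

% Bell state with mutually unbiased qubit bases (c = 1/2), entropies in bits:
\[
S(A^{(i)}|B^{(i)}) = S(\rho^{(i)}_{AB}) - S(\rho^{(i)}_{B}) = 0 - 1 = -1,
\qquad \log_2(1/c) = 1,
\]
\[
S(K|B) + S(M|B) \;\ge\; 1 + (-1) \;=\; 0 .
\]

Indeed, with a maximally entangled memory both outcomes are perfectly predictable, S(K|B) = S(M|B) = 0, so the relation is saturated.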
The quantity S(K|B) is the conditional von Neumann entropy
of the post-measurement state after performing the measurement {Π_A^k} on A,
S(K|B) = S[∑_k(Π_A^k⊗ I_B)ρ^(i)_AB(Π_A^k⊗ I_B)]-S(ρ^(i)_B)
= S[∑_k p_k|k⟩⟨ k|⊗ρ_B^(i)k] -S(ρ^(i)_B),
where p_k= tr(Π_A^kρ_AB^(i)Π_A^k), ρ_B^(i)k= tr_A(Π_A^k ρ_AB^(i)Π_A^k)/p_k.
Similarly, S(M|B) can also be defined after measurement {Π_A^α _m} as follows,
S(M|B) = S[∑_m q_m|α _m⟩⟨α _m|⊗ρ_B^(i)α _m] -S(ρ^(i)_B),
where q_m= tr(Π_A^α _mρ_AB^(i)Π_A^α _m) and ρ_B^(i)α _m= tr_A(Π_A^α _mρ_AB^(i)Π_A^α _m)/q_m.
We then substitute these equalities into relation (<ref>),
that is, the outcomes should satisfy the uncertainty relation,
S(∑_k p_k|k⟩⟨ k|⊗ρ_B^(i)k) -S(ρ^(i)_B)
+S(∑_m q_m|α _m⟩⟨α _m|⊗ρ_B^(i)α _m)-S(ρ^(i)_B)
≥ log_21/c+S(ρ^(i)_AB)-S(ρ^(i)_B).
By considering the working process of the heat engine, we know that
ρ_AB^(2)= tr_SRρ^(2)= ∑_kp_k|k⟩_A⟨ k|⊗ρ_B^(2)k.
For the partial traces,
ρ_B^(i)k= tr_A(Π_A^k ρ_AB^(i)Π_A^k) and
ρ_B^(2)k= tr_SR[ tr_A(Π_A^k ρ^(1)'Π_A^k)]
= tr_A(Π_A^k ρ_AB^(i)Π_A^k).
Then we have ρ_B^(i)k=ρ_B^(2)k.
Thus the uncertainty relation (<ref>) can be rewritten as,
S(ρ^(2)_AB)+S(ρ^(2)'_AB)≥log_21/c+S(ρ^(i)_AB)+S(ρ^(i)_B),
where ρ^(2)'_AB denotes the corresponding state when we use the
measurement basis {Π_A^α _m}.
Here we find that if we use two independent engines, each with its own measurement,
the combination of the two results constitutes an
associated bound.
The extractable work has been illustrated in Eq.(<ref>).
Note that the relation (<ref>) gives
△ S-△ I=S[ρ_AB^(2)]-S[ρ^(i)_AB].
Next, we denote by W^K_max the maximum extractable work of the first engine with the projectors K={Π_A^k: |k⟩⟨ k|},
and similarly by W^M_max that of the other engine with the second measurement; then we have,
W^K_max = -△ F_S+k_B T[S(ρ_AB^(2))-S(ρ_AB^(i))],
W^M_max = -△ F_S^'+k_B T[S(ρ_AB^(2)')-S(ρ_AB^(i))].
Due to the limitation of the entropic uncertainty relation, S(ρ^(2)_AB)+S(ρ^(2)'_AB)
satisfies the inequality (<ref>).
With the help of the results for W^K_max+W^M_max,
the inequality now reads,
W^K_max+W^M_max≥ -△ F_S-△ F_S^' +k_B T[log_21/c-S(A^(i)|B^(i))].
The conditional-entropy term S(A^(i)|B^(i)) appearing on the right-hand side can be considered
as quantifying the amount of entanglement between A and B in the initial state. If it is negative, we know that
A and B are entangled.
We remark that the quantity S(A^(i)|B^(i)) plays a significant role
in quantum information science, see for example <cit.> and the references therein.
The inequality (<ref>) plays a completely different role compared with the second-law
inequalities (<ref>)-(<ref>). Remarkably, the entropic uncertainty relation, which is a representation of the Heisenberg uncertainty
principle, implies that there exists a lower bound, instead of the upper bound shown in (<ref>)-(<ref>),
for the heat engines presented in this work.
That is to say, the maximum total work gain can be larger than a bound for two measurement bases,
implying that superposition, which is the reason for the no-cloning theorem <cit.>, is related
to the extractable work.
A negative S(A^(i)|B^(i)), signaling the existence of entanglement,
results in a higher lower bound and thus allows more work to be extracted
in the case of two measurements, each performed on an individual engine. These results can be generalized straightforwardly to multiple measurements;
we still assume several independent engines, each with a different measurement.
On the other hand, the validity of the above interpretation of the inequality (<ref>) depends on crucial conditions.
The extractable work is upper bounded by the relation (<ref>), and we assume that the maximum extractable work W_max
can saturate this bound. For this saturation, there are very strong requirements on the choice of measurement basis and the scheme
of feedback control <cit.>. If the saturation cannot be achieved, the inequality (<ref>) in general does not hold.
§ CONCLUSION AND DISCUSSION
We have designed a specific heat engine with both an ancillary system and a quantum memory, and obtained a new inequality
related to the second law of thermodynamics. For simple cases, our results extend
those for the isothermal process and the Carnot cycle. The changes of the entropy and of the quantum mutual information
set a new limit on the marginal part of the work that exceeds the conventional second law of thermodynamics.
In addition, if two measurements are performed on two independent engines,
by combining the results of the two engines we find a new inequality due to the entropic uncertainty relation with the assistance of quantum
memory.
We also discuss the possibility that the sum of the maximum extractable work of the two engines is
larger than a lower bound. The validity of this inequality depends on whether the maximum extractable work
can saturate its upper bound.
In the study of various extensions of the second law of thermodynamics,
the resources of quantum information such as entanglement and discord can be
shown to play important roles. Experimentally, quantum correlations can be
measured well; however, the work or energy of a few-particle microscopic system generally depends on
the adopted definition. It is then challenging to directly check the inequalities involving both
quantum correlations and work or energy related to the second law of thermodynamics.
Exploration, both theoretical and experimental, is necessary for studying rigorous,
testable extensions of the second law in the context of quantum information.
Acknowledgments:
This work was supported by MOST of China (Grant Nos. 2016YFA0302104 and
2016YFA0300600), the National Natural Science Foundation of China (NSFC) (Grant Nos. 91536108, 11774406)
and Chinese Academy of Sciences (Grant No. XDB21030300). We thank Zheng-An Wang for helping us in drawing the pictures.
[demon] J. C. Maxwell, Theory of Heat (Appleton, London, 1871).
[demon2] H. S. Leff and A. F. Rex, Maxwell's Demons 2 (IOP Publishing, Bristol, 2003).
[landauer] R. Landauer, IBM J. Res. Dev. 5, 183 (1961).
[Bennett] C. H. Bennett, Int. J. Theor. Phys. 21, 905 (1982).
[Lloyd] S. Lloyd, Phys. Rev. A 56, 3374 (1997).
[Vedral] K. Maruyama, F. Nori, and V. Vedral, Rev. Mod. Phys. 81, 1 (2009).
[discrete] T. Sagawa and M. Ueda, Phys. Rev. Lett. 100, 080403 (2008).
[minimal] T. Sagawa and M. Ueda, Phys. Rev. Lett. 102, 250602 (2009).
[main] J. J. Park, K.-H. Kim, T. Sagawa, and S. W. Kim, Phys. Rev. Lett. 111, 230402 (2013).
[sze] L. Szilard, Z. Phys. 53, 840 (1929).
[entangle] K. Funo, Y. Watanabe, and M. Ueda, Phys. Rev. A 88, 052319 (2013).
[Winter] S. Popescu, A. J. Short, and A. Winter, Nat. Phys. 2, 754 (2006).
[Plenio] F. G. S. L. Brandao and M. B. Plenio, Nat. Phys. 4, 873 (2008).
[LloydChenGang] S. Lloyd, V. Chiloyan, Y. J. Hu, S. Huberman, Z. W. Liu, and G. Chen, arXiv:1510.05035.
[Ueda] S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki, and M. Sano, Nat. Phys. 6, 988 (2010).
[review] J. M. R. Parrondo, J. M. Horowitz, and T. Sagawa, Nat. Phys. 11, 131 (2015).
[PNAS] F. G. S. L. Brandao, M. Horodecki, H. H. Y. Ng, J. Oppenheim, and S. Wehner, Proc. Natl. Acad. Sci. U.S.A. 112, 3725 (2015).
[Oppenheim] J. Oppenheim and S. Wehner, Science 330, 1072 (2010).
[Wehner] E. Hanggi and S. Wehner, Nat. Commun. 4, 1670 (2013).
[RenLiHang] L. H. Ren and H. Fan, Phys. Rev. A 90, 052110 (2014).
[book] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[uncertainty] M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, Nat. Phys. 6, 659 (2010).
[LiuShang] S. Liu, L. Z. Mu, and H. Fan, Phys. Rev. A 91, 042133 (2015).
[heisenberg] W. Heisenberg, Z. Phys. 43, 172 (1927).
[Discord] K. Modi, A. Brodutch, H. Cable, T. Paterek, and V. Vedral, Rev. Mod. Phys. 84, 1655 (2012).
[partialWinter] M. Horodecki, J. Oppenheim, and A. Winter, Nature 436, 673 (2005).
[MingLiang] M. L. Hu and H. Fan, Phys. Rev. A 87, 022314 (2013).
[CerfAdam] N. J. Cerf and C. Adami, Phys. Rev. Lett. 79, 5194 (1997).
[Coherence] A. Streltsov, E. Chitambar, S. Rana, M. N. Bera, A. Winter, and M. Lewenstein, Phys. Rev. Lett. 116, 240405 (2016).
[FanRep] H. Fan, Y. N. Wang, L. Jing, J. D. Yue, H. D. Shi, Y. L. Zhang, and L. Z. Mu, Phys. Rep. 544, 241 (2014).
[Kurt] K. Jacobs, Phys. Rev. A 80, 012322 (2009).
[Kurt1] K. Jacobs, Phys. Rev. A 67, 030301 (2003).
[LiuYXRep] J. Zhang, Y. X. Liu, R. B. Wu, K. Jacobs, and F. Nori, Phys. Rep. 679, 1 (2017).
| null | null | null | null | null | null
http://arxiv.org/abs/1701.08083v4 | 20170127153801 | Ensemble Estimation of Generalized Mutual Information with Applications to Genomics | [
"Kevin R. Moon",
"Kumar Sricharan",
"Alfred O. Hero III"
] | cs.IT | [
"cs.IT",
"math.IT",
"math.ST",
"stat.TH"
] |
Ensemble Estimation of Generalized Mutual Information with Applications
to Genomics
Kevin R. Moon1, Kumar Sricharan2, Alfred O. Hero III3
1Dept. of Mathematics and Statistics, Utah State University, [email protected]
2Intuit Inc., [email protected]
3EECS Dept., University of Michigan, [email protected] This work was supported in part by the US Army Research Office under grants W911NF1910269 and W911NF1510479, and by the National Nuclear Security Administration in the US Department of Energy under grant DE-NA0003921. This paper appeared in part in the Proceedings of the 2017 IEEE Intl. Symposium on Information Theory (ISIT) <cit.>.
December 30, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Mutual information is a measure of the dependence between random variables
that has been used successfully in myriad applications in many fields.
Generalized mutual information measures that go beyond classical Shannon
mutual information have also received much interest in these applications.
We derive the mean squared error convergence rates of kernel density-based
plug-in estimators of general mutual information measures between
two multidimensional random variables 𝐗 and 𝐘 for two cases:
1) 𝐗 and 𝐘 are continuous; 2) 𝐗 and 𝐘 may have a mixture
of discrete and continuous components. Using the derived rates, we
propose an ensemble estimator of these information measures called
GENIE by taking a weighted sum of the plug-in estimators with varied
bandwidths. The resulting ensemble estimators achieve the 1/N parametric
mean squared error convergence rate when the conditional densities
of the continuous variables are sufficiently smooth. To the best of
our knowledge, this is the first nonparametric mutual information
estimator known to achieve the parametric convergence rate for the
mixture case, which frequently arises in applications (e.g. variable
selection in classification). The estimator is simple to implement
and it uses the solution to an offline convex optimization problem
and simple plug-in estimators. A central limit theorem is also derived
for the ensemble estimators and minimax rates are derived for the
continuous case. We demonstrate the ensemble estimator for the mixed
case on simulated data and apply the proposed estimator to analyze
gene relationships in single cell data.
mutual information; nonparametric estimation; central limit theorem; single cell data; feature selection; minimax rate
§ INTRODUCTION
Mutual information (MI) is a measure of the amount
of shared information between a pair of random variables 𝐗 and
𝐘. MI estimation is related to the problem of estimating functionals
of probability distributions, which has received deserved attention
in recent years <cit.>.
Many statistical problems rely in some form upon accurate estimation
of functionals of probability distributions including estimating the
decay rates of error probabilities <cit.>, estimating
bounds on the Bayes error rate <cit.>,
and hypothesis testing <cit.>.
MI estimation, in particular, also has many applications in information
theory and machine learning including independent subspace analysis <cit.>,
structure learning <cit.>, fMRI data processing <cit.>,
forest density estimation <cit.>, clustering <cit.>,
neuron classification <cit.>, blind source
separation <cit.>, intrinsically motivated reinforcement
learning <cit.>, as well
as other data science applications such as sociology <cit.>,
computational biology <cit.>,
and improving neural network models <cit.>. A particularly
common application is feature selection or extraction where features
are chosen to maximize the MI between the chosen features (represented
by 𝐗) and the outcome variables (represented by 𝐘)
<cit.>.
In many of these applications, the variables 𝐗 and 𝐘 may have
any mixture of discrete and continuous components. In feature selection,
for example, the labels to be predicted may have discrete components (e.g.
classification labels) while the input variables may have a mixture
of discrete and continuous features. To the best of our knowledge,
there are currently no nonparametric MI estimators that are known
to achieve the parametric mean squared error (MSE) convergence rate
1/N (N is the number of samples) in this setting where 𝐗
and/or 𝐘 contain a mixture of discrete and continuous components.
Instead, most existing estimators of MI focus on the cases where both
𝐗 and 𝐘 are either purely discrete or purely continuous. Also,
while many nonparametric estimators of MI exist, most have not been
generalized beyond Shannon or Rényi information. Furthermore,
minimax convergence rates are currently unknown for the continuous
and the mixture cases.
In this paper, we provide a framework for nonparametric estimation
of a large class of MI measures when only a finite number of i.i.d.
samples is available. This framework can be applied
to accurately estimate general MI measures both when 𝐗 and 𝐘
are purely continuous and in the mixed case when 𝐗 and 𝐘 may contain
a mixture of discrete and continuous components. We derive an MI estimator
for these cases that achieves the parametric MSE rate when the conditional
densities of the continuous variables are sufficiently smooth, thus
achieving the minimax rate (which we also derive) in this setting.
We call this estimator the Generalized ENsemble
Information Estimator (GENIE).
Our estimation method applies to other MI measures in addition to
Shannon information, which have been the focus of much recent interest.
An information measure based on a quadratic divergence was defined
in <cit.>. A density-resampled version of MI
was introduced in <cit.> to better measure
gene relationships in single-cell data when sampling may not be uniform.
A MI measure based on the Pearson divergence was considered in <cit.>.
Minimal spanning tree <cit.> and generalized nearest-neighbor
graph <cit.> approaches have been developed for
estimating Rényi information <cit.>,
which has been used in many applications (e.g. <cit.>).
§.§ Related Work
Many estimators for MI have been previously developed. Nearly all
of these estimators ignore the mixed case and focus on the case where
both 𝐗 and 𝐘 are either purely continuous or purely discrete.
A popular k-nearest neighbor (nn)-based estimator was proposed
in <cit.> which is a modification of the entropy
estimator derived in <cit.>. However, these
estimators have only been shown to achieve the parametric convergence
rate when the dimension of each of the random variables is less than
3 <cit.>. Furthermore, these estimators focus
only on estimating the Shannon MI between purely continuous random
variables. Similarly, the estimators in <cit.>
do not achieve the parametric rate and focus on the purely continuous
case. An adaptation of the Shannon MI estimator in <cit.>
was recently proposed to handle the discrete-continuous mixture case <cit.>.
While this estimator has been proven to be consistent, its convergence
rate is currently unknown. Central limit theorems have also been derived for several entropy estimators <cit.>, which can then be applied to Shannon MI. However, it is not clear if these results can be extended to more general MI functionals.
A neural network-based estimator of Shannon MI was proposed in <cit.>.
While this estimator is computationally efficient, its statistical
properties are largely unknown as the authors only prove convergence
in probability rates. It is also unclear how to extend this estimator
to other MI measures such as the Rényi information. A jackknife
approach to estimating Shannon MI was also recently proposed <cit.>.
This approach provides an automatic selection of the kernel bandwidth
for a plug-in kernel density estimator (KDE) and does not require
boundary correction, which is generally a major difficulty in estimating
functionals of probability distributions. However, the MSE convergence
rate of this estimator is also unknown.
Much work has focused on the problem of estimating the entropy of
purely discrete random variables <cit.>.
Shannon MI can then be estimated by estimating the joint and marginal
entropies of 𝐗 and 𝐘. However, it is not clear if discrete
methods can be extended successfully to the mixed-case. Quantizing
the continuous components of the data is one potential approach that
has been shown to be consistent for some quantization schemes in the
purely continuous case <cit.> but it is currently
unknown if similar approaches can be applied in the mixed-case. Also,
extending these estimators to general MI measures like Rényi
information is not straightforward.
Recent work has focused on nonparametric divergence estimation for
continuous random variables. One approach <cit.>
uses an optimal KDE to achieve the parametric convergence rate when
the densities are at least d <cit.>
or d/2 <cit.>
times differentiable where d is the dimension of the data. These
methods, like ours, assume that the densities are bounded away from
zero as this simplifies the analysis. However, this induces a boundary
on the densities' support set. For accurate estimation, the optimal
KDE approaches require knowledge of the density support boundary and
are difficult to construct near the boundary. Numerical integration
may also be required for estimating some divergence functionals under
this approach, which can be computationally expensive. In contrast,
our approach to MI estimation does not require numerical integration
and can be performed without knowledge of the support boundary.
Some methods for estimating distributional functionals have relaxed
the boundedness assumption on the densities <cit.>.
These approaches typically assume that the tails of the densities
decay at a sufficiently fast rate (e.g. sub-exponential or sub-Gaussian).
In <cit.>, the authors only consider densities
with up to 2 derivatives as it is difficult to exploit higher smoothness when the densities are not lower-bounded.
More closely related work <cit.>
uses an ensemble approach to estimate entropy or divergence functionals
for continuous random variables. These works construct an ensemble
of simple plug-in estimators by varying the neighborhood size of density
estimators. They then take a weighted average of the estimators where
the weights are chosen to decrease the bias with only a small increase
in the variance. The parametric rate of convergence is achieved when
the densities are either d <cit.>
or d/2 <cit.> times
differentiable. These approaches are simple to implement as they only
require simple plug-in estimates and the solution of an offline convex
optimization problem. The ensemble approach also automatically corrects
for bias at the boundary of the densities' support set.
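A schematic of this ensemble construction is sketched below (an illustration only: the generic constraints Σ_l w(l) = 1 and Σ_l w(l) l^{i/d} = 0, which cancel low-order bias terms while minimizing the weight norm, stand in for the estimator-specific conditions derived in the cited works):

import numpy as np
from scipy.optimize import minimize

def ensemble_weights(L, d, bias_orders):
    """Weights over bandwidth parameters l in L: sum to 1, cancel l^(i/d) bias terms."""
    L = np.asarray(L, dtype=float)
    cons = [{'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0}]
    for i in bias_orders:
        cons.append({'type': 'eq', 'fun': lambda w, i=i: np.dot(w, L ** (i / d))})
    res = minimize(lambda w: np.sum(w ** 2), np.ones(len(L)) / len(L),
                   constraints=cons)
    return res.x

w = ensemble_weights(np.arange(1, 11), d=4, bias_orders=[1, 2, 3])
# Ensemble estimate: weighted sum of the plug-in estimates computed at each l,
# i.e. I_ens = sum_j w[j] * I_plugin(l_j).
print(w.round(3), w.sum())

Since the weights depend only on the parameter grid and the dimension, this optimization can be solved offline, as noted above.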
Finally, <cit.> showed that k-nn or KDE based
approaches underestimate the MI when the MI is large. As MI increases,
the dependencies between random variables increase, which results in
less smooth densities. Thus, this is not an issue when the densities
are smooth <cit.>.
For the mixture setting, we focus on the important special case where
the components of each observation are assumed to decompose into discrete
and continuous dimensions. This enables the density to be factored:
f_X(x)=f_X_D(x_D)f_X_C|X_D(x_C|x_D)
where x_C and x_D are the continuous and discrete components
of x. We note that this excludes the more general case considered
by <cit.> where one or more components can have
discrete and continuous values simultaneously. However, our setting
is a common occurrence in many machine learning and statistical problems.
For example, a search within the UCI Machine Learning Repository <cit.>
yields many datasets with such structure. Many statistical models
have also focused on similar settings <cit.>.
Thus we believe that this special case warrants its own treatment
and retain the more general case for future work. Despite the importance
of this mixed setting, no other MI estimators have been derived or
analyzed that achieve the parametric MSE convergence rate.
§.§ Contributions
In the context of this related work, we make the following novel contributions
in this paper:
* For purely continuous random variables, we derive the asymptotic bias
and variance of kernel density plug-in MI estimators for general MI
measures without boundary correction <cit.>
(Section <ref>).
* We leverage the results for the purely continuous case to derive the
bias and variance of general kernel density plug-in MI estimators
when and/or contain a mixture of discrete and continuous
components by reformulating the densities as a mixture of the conditional
density of the continuous variables given the discrete variables (Section
<ref>). Note that this is a special case of the mixture
setting where discrete and continuous components are separated into
different dimensions.
* We leverage this theory for the mixed case described above in conjunction
with the generalized theory of ensemble estimators <cit.>
to derive GENIE. To the best of our knowledge, this is the first non-parametric
estimator of general MI measures that achieves a parametric MSE convergence
rate of O(1/N), when the densities are sufficiently smooth, for any mixed case (Section
<ref>), let alone the special case we consider;
here N is the number of samples available from each distribution.
* We prove a minimax lower bound for the convergence rate of MI estimators
in the purely continuous case (Section <ref>). This
unifies the minimax theory for estimating continuous entropy <cit.>
and divergence functionals <cit.>.
Neither of these approaches are directly extendable to the MI case
due to the dependence of the marginal distributions on the joint distribution
and the integral relationship between the joint and the marginals.
Therefore, we have tailored the proof to the MI estimation case. We
also show that the MI ensemble estimator achieves the minimax rate
when the densities are sufficiently smooth.
* We derive a central limit theorem for the ensemble estimators (Section
<ref>).
* We apply the method to single-cell RNA-sequencing feature selection
problems (Section <ref>).
We note that KDE plug-in approaches to estimating functionals such
as entropy and MI are well-known and perhaps the simplest approach <cit.>.
Applying the generalized theory of ensemble estimation to the KDE
plug-in estimator does not raise the complexity of the estimators
substantially, either computationally or conceptually. Yet by employing
these simple methods, the resulting ensemble estimator is able to
achieve the minimax convergence rate for sufficiently smooth densities
without employing more complicated von-Mises expansions (as in <cit.>)
or boundary correction (as in <cit.>)
to reduce the bias.
§ MUTUAL INFORMATION FUNCTIONALS
We first define a family of MI functionals based on f-divergence
functionals which are defined as follows. Let P and Q be probability
measures on the Euclidean space 𝒮. Let g:(0,∞)→ℝ be the f-divergence shaping function.
The f-divergence functional associated with g is <cit.>
D_g(P||Q):=𝔼_Q[g(dP/dQ)],
where dP/dQ is the Radon-Nikodym derivative and 𝔼_Q
indicates the expectation with respect to the measure Q. To obtain a true
divergence, we require g to be convex and g(1)=0. However, we
consider more general functionals and so we do not place these restrictions
on g.
A generalized MI functional can be derived from (<ref>).
Let 𝐗 and 𝐘 be (potentially multivariate) random variables
with respective marginal probability measures P_X and P_Y
and joint probability measure P_XY. Let g be as before. Then
the MI functional associated with g is
I(𝐗;𝐘):=D_g(P_XP_Y‖ P_XY).
Shannon MI can be obtained from (<ref>) by setting
g(t)=-log t.
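For concreteness, the two shaping functions used later in this paper can be written down directly; the following minimal Python sketch (the function names are our own, not part of the formal development) covers Shannon MI here and the Rényi-α MI integral used in the simulations.

import numpy as np

# Shaping functions g for the generalized MI functional:
# Shannon MI corresponds to g(t) = -log(t); the Renyi-alpha MI
# integral corresponds to g(t) = t**alpha.
def g_shannon(t):
    return -np.log(t)

def g_renyi(alpha):
    # returns the shaping function t -> t**alpha for a fixed alpha
    return lambda t: t ** alpha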
If 𝐗 and 𝐘 are purely continuous random variables with respective
marginal probability densities f_X and f_Y and joint probability
density f_XY, then (<ref>) can be written as
I(𝐗;𝐘)=∫ g(f_X(x)f_Y(y)/f_XY(x,y))f_XY(x,y)dxdy.
However, we are also interested in the case where 𝐗 or 𝐘 may
have a mixture of discrete and continuous components. In this special
case, the distributions can be factored into a product of the conditional
density and the probability mass functions. The MI can then be expressed
as a sum of integrals which can then be individually estimated. To
do this, denote the continuous and discrete components of 𝐗 as
𝐗_C and 𝐗_D, respectively. Denote 𝐘_C and 𝐘_D
similarly. Let 𝐙=(𝐗,𝐘)^T and let
𝐙_C and 𝐙_D be the respective continuous and discrete
components of 𝐙. Consider the probability distributions f_XY,
f_X, f_Y and the corresponding densities that are obtained
by conditioning on 𝐗_D and 𝐘_D, e.g. f_XY(x_C,x_D,y_C,y_D)=f_X_CY_C|X_DY_D(x_C,y_C|x_D,y_D)f_X_DY_D(x_D,y_D).
Then after factoring the distributions, (<ref>) can
be written as
I(𝐗;𝐘) =∑_x_D,y_D∫ g(f_X(x_C,x_D)f_Y(y_C,y_D)/f_XY(x_C,x_D,y_C,y_D))dF_XY(x_C,x_D,y_C,y_D)
=∑_z_Df_Z_D(z_D)∫ g(R_1(z_C)R_2(z_D))f_Z_C|Z_D(z_C|z_D)dz_C,
where
R_1(z_C) =f_X_C|X_D(x_C|x_D)f_Y_C|Y_D(y_C|y_D)/f_X_CY_C|X_DY_D(x_C,y_C|x_D,y_D),
R_2(z_D) =f_X_D(x_D)f_Y_D(y_D)/f_X_DY_D(x_D,y_D).
The expression R_1 is the ratio of the product of the conditional
densities f_X_C|X_D and f_Y_C|Y_D to the conditional
density f_X_CY_C|X_DY_D. It is a continuous function
of z_C. Similarly, the expression R_2 is the ratio of the
product of the probability mass functions (pmf) f_X_D and f_Y_D
to the pmf f_X_DY_D and is a discrete function of z_D.
In the following sections, we will obtain MSE convergence rates of
KDE plug-in estimators of the general MI measures described above.
We first focus on the case when 𝐗 and 𝐘 are purely continuous
(Equation (<ref>)). We then generalize to the case where 𝐗
and/or 𝐘 may have any mixture of continuous and discrete components
(Equation (<ref>)). The derived convergence rates can
then be used to derive ensemble estimators that achieve the parametric
MSE rate.
§ CONTINUOUS RANDOM VARIABLES
For this section, we define KDE plug-in estimators
of general MI measures under the assumption that 𝐗 and 𝐘 are
purely continuous. Thus 𝐗_C=𝐗 and 𝐘_C=𝐘 and we can write
I(𝐗;𝐘)=∫ g(f_X(x)f_Y(y)/f_XY(x,y))f_XY(x,y)dxdy.
We then derive the MSE convergence rate of the KDE plug-in estimator.
We also present a minimax lower bound for MI estimation in this continuous
setting.
To more easily generalize our results to the mixture case, we consider
a modified version of (<ref>) where the densities are
weighted as follows. Let ν be a 3-dimensional vector with 0<ν_i≤1
for each i∈{1,2,3}. We can then write
I_ν(𝐗;𝐘)=∫ g(f_X(x)f_Y(y)ν_1ν_2/f_XY(x,y)ν_3)f_XY(x,y)dxdy.
The expression in (<ref>) reduces to that in (<ref>)
when ν_i=1 for each i∈{1,2,3}. When we generalize to
the mixture case, the pmf estimators will be substituted into ν.
§.§ The KDE Plug-in Estimator
Let f_X(x), f_Y(y), and f_XY(x,y) be d_X, d_Y,
and d_X+d_Y=d-dimensional probability densities. Since we are
assuming for now that 𝐗 and 𝐘 are continuous
with marginal densities f_X and f_Y, the MI functional I_ν(𝐗;𝐘)
can be estimated using KDEs. Assume that N i.i.d. samples {𝐙_1,…,𝐙_N}
are available from the joint density f_XY with 𝐙_i=(𝐗_i,𝐘_i)^T.
Let h_X, h_Y be kernel bandwidths. Let K_X(·) and
K_Y(·) be symmetric kernel functions with ∫ K_X(x)dx=∫ K_Y(y)dy=1,
||K_X||_∞, ||K_Y||_∞<∞ where ||K||_∞=sup_x|K(x)|.
The KDEs for f_X, f_Y, and f_XY=f_Z, respectively,
are
f̂_X(x) = 1/Nh_X^d_X∑_i=1^NK_X(x-𝐗_i/h_X),
f̂_Y(y) = 1/Nh_Y^d_Y∑_i=1^NK_Y(y-𝐘_i/h_Y),
f̂_Z(x,y) = 1/Nh_X^d_Xh_Y^d_Y∑_i=1^NK_X(x-𝐗_i/h_X)K_Y(y-𝐘_i/h_Y),
where h_Z=(h_X,h_Y). Then I_ν(𝐗;𝐘) can be estimated
with a KDE plug-in estimator:
Ĩ_h_X,h_Y=1/N∑_i=1^Ng(f̂_X(𝐗_i)f̂_Y(𝐘_i)ν_1ν_2/f̂_Z(𝐗_i,𝐘_i)ν_3).
Note that in this estimator we evaluate the KDEs at each of the data
points. In practice, this is done using a leave-one-out KDE. This
enables us to avoid evaluating a high-dimensional integral and instead
estimate the integral with the empirical average in eq. (<ref>).
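As an illustration, the following self-contained Python sketch implements this leave-one-out plug-in estimator with a uniform product kernel and ν_1=ν_2=ν_3=1; the function names, the scalar bandwidths, and the small floor guarding against zero density estimates are our own simplifications, not part of the formal development.

import numpy as np

def loo_kde(pts, h):
    # Leave-one-out KDE with a uniform product kernel: K(u) = 1 on [-1/2, 1/2]^d.
    # pts: (N, d) samples; h: scalar or per-dimension bandwidths.
    pts = np.atleast_2d(pts)
    N, d = pts.shape
    h = np.broadcast_to(np.asarray(h, dtype=float), (d,))
    # point j lies inside the kernel window centered at point i
    inside = np.all(np.abs(pts[:, None, :] - pts[None, :, :]) <= h / 2.0, axis=2)
    np.fill_diagonal(inside, False)                 # leave one out
    return inside.sum(axis=1) / ((N - 1) * np.prod(h))

def mi_plugin(X, Y, h_X, h_Y, g=lambda t: -np.log(t)):
    # Plug-in estimate: average of g(fX*fY/fZ) over the N sample points.
    fX = loo_kde(X, h_X)
    fY = loo_kde(Y, h_Y)
    h_Z = np.concatenate([np.full(X.shape[1], h_X), np.full(Y.shape[1], h_Y)])
    fZ = np.maximum(loo_kde(np.hstack([X, Y]), h_Z), 1e-300)   # guard /0
    ratio = np.maximum(fX * fY / fZ, 1e-300)
    return np.mean(g(ratio))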
§.§ MSE Convergence Rate of the Continuous Plug-in Estimator
We are interested in the MSE convergence rate of the KDE plug-in estimator
in eq. (<ref>). The MSE of an estimator can be expressed
as the sum of the squared bias and the variance of the estimator.
We first focus on the bias of the estimator Ĩ_h_X,h_Y. The bias of nonparametric
estimators typically depends on the smoothness of the functions that
are being estimated. In our case, we have multiple functions including
the joint and marginal densities and the function g. We quantify
the smoothness of the densities using the Hölder class Σ(s,H):
Let
𝒳⊂ℝ^d be a compact space. For q=(q_1,…,q_d),
q_i∈ℕ, define |q|=∑_i=1^dq_i and D^q=∂^|q|/∂ x_1^q_1…∂ x_d^q_d.
The Hölder class Σ(s,H) of functions on L_2(𝒳)
consists of the functions f that satisfy
|D^qf(x)-D^qf(y)|≤ H‖ x-y‖ ^s-|q|,
for all x, y∈𝒳 and for all q s.t. |q|≤⌊ s⌋.
A key fact that comes from Definition <ref> is that if
a function f belongs to Σ(s,H), then it is r=⌊ s⌋
times differentiable. Given this definition, the full assumptions
we make to derive bias convergence rates are:
* (𝒜.0): The kernels K_X and K_Y are symmetric
product kernels with bounded support.
* (𝒜.1): There exist constants ϵ_0, ϵ_∞
such that 0<ϵ_0≤ f_X(x)≤ϵ_∞<∞
∀ x∈𝒮_X, ϵ_0≤ f_Y(y)≤ϵ_∞
∀ y∈𝒮_Y, and ϵ_0≤ f_XY(x,y)≤ϵ_∞
∀(x,y)∈𝒮_X×𝒮_Y.
* (𝒜.2): Each of the densities belong to Σ(s,H)
in the interior of their support sets with s≥2.
* (𝒜.3): g(t_1/t_2) has an infinite number of mixed
derivatives wrt t_1 and t_2.
* (𝒜.4): |∂^k+lg(t_1/t_2)/∂ t_1^k∂ t_2^l|/(k!l!),
k,l=0,1,… are strictly upper bounded for ϵ_0≤ t_1,t_2≤ϵ_∞.
* (𝒜.5): Let K be either K_X or K_Y, 𝒮
either 𝒮_X or 𝒮_Y, h either h_X
or h_Y, f either f_X or f_Y, and d either d_X
or d_Y. Let q=(q_1,…,q_d) with q_i∈ℕ and |q|≤ r=⌊ s⌋. Then we assume for any positive integers t and l that
∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮K^l(u)u^q D^q f(x) du)^tdx = v_t(h),
where v_t(h) admits the expansion
v_t(h)=∑_i=1^r-|q|e_i,q,t,lh^i+o(h^r-|q|),
for some constants e_i,q,t,l.
These assumptions can largely be summarized as follows: 1) f_X,
f_Y, f_XY, and g are smooth (𝒜.2-𝒜.4)
; 2) f_X and f_Y have bounded support sets 𝒮_X
and 𝒮_Y with respective dimensions d_X and d_Y
(𝒜.1); 3) f_X, f_Y, and f_XY are strictly
lower bounded on their support sets (𝒜.1); and 4) the
boundary of the support set is smooth (𝒜.5). More specifically,
assumption 𝒜.5 states that the support of the density
is smooth with respect to the kernel K in the sense that the expected value of a polynomial with coefficients consisting of the densities and their derivatives near the boundary is a smooth function of the bandwidth h. The inner integral in (<ref>) captures this
expectation while the outer integral averages this inner integral
over all points near the boundary of the support. The v_t(h)
term captures the fact that the smoothness of this expectation is
proportional to the smoothness of the function D^q f(x).
While these assumptions may appear highly technical, they are satisfied
for relatively simple support sets and for common kernels, functions
g, and densities and thus are widely applicable <cit.>.
These assumptions are also comparable to those in similar studies
on asymptotic convergence analysis <cit.>.
Some studies consider the case where the densities are not strictly lower bounded, which makes the problem different <cit.> with different minimax rates (see <cit.> for the entropy estimation case).
In particular, assumption 𝒜.5 is satisfied if the kernel K is smooth, has either circular or rectangular support (which includes product kernels), and the density support set consists of the unit cube. See Appendix <ref> for details. The unit cube assumption is common in the nonparametric density functional estimation literature <cit.> as the results can then be extended to density support sets that are isomorphic to the unit cube.
To derive the convergence rates of many state-of-the art distributional functional estimators, authors commonly assume that the derivatives of the density f(x) vanish near the boundary <cit.>. Note that in this assumption, the density itself is not required to vanish near the boundary. Thus densities such as the uniform distribution satisfy this common assumption. However, this assumption is stronger than 𝒜.5 as formalized in Proposition <ref> below. Our weaker assumption 𝒜.5 comes at a small cost as we require the f-divergence shaping function g to be infinitely differentiable. In contrast, the authors in <cit.> assume that the shaping function has a finite number of derivatives. In practice, this tradeoff does not have a major practical impact as most shaping functions of interest are either infinitely differentiable everywhere (e.g. Shannon and Renyi information) or not differentiable on a set of measure zero (e.g. the total variation distance and the Bayes error rate in the divergence case).
Let the density support set 𝒮 be the unit cube. Let the derivatives of the density f up to order r vanish at the boundary of the density support set. Assume that ||K||_∞<∞ and the support of K is bounded with either rectangular or circular support. Then assumption 𝒜.5 is satisfied.
The proof is given in Appendix <ref>. The assumption of vanishing density derivatives at the boundary is strictly weaker than assumption 𝒜.5. As an example, consider a standard Gaussian distribution truncated to the support [-1,1]^d. Clearly, the derivatives of this density do not vanish at the boundary. However, we show in Appendix <ref> that this density satisfies 𝒜.5.
We note that the boundary assumption 𝒜.5 does not directly
result in parametric convergence rates for the plug-in estimator Ĩ_h_X,h_Y,
which is in contrast with the boundary assumptions in <cit.>.
The estimators in <cit.>
perform boundary correction, which requires knowledge of the density
support boundary and complex calculations at the boundary in addition
to the boundary assumptions, to achieve the parametric convergence
rates. In contrast, we use ensemble methods to improve the resulting
convergence rates of Ĩ_h_X,h_Y without boundary correction, greatly simplifying
our estimator.
Under assumptions 𝒜.0-𝒜.5,
the bias of Ĩ_h_X,h_Y is
𝔹[Ĩ_h_X,h_Y] = ∑_i,j=0, i+j≠0^r c_10,i,j(ν_1ν_2,ν_3)h_X^ih_Y^j+c_11/(Nh_X^d_Xh_Y^d_Y)
+O(h_X^s+h_Y^s+1/(Nh_X^d_Xh_Y^d_Y)),
where the constants in (<ref>) are independent of the bandwidths
h_X and h_Y and depend on the
densities and their derivatives, the functional g and its derivatives,
and the kernels. They also include polynomial terms of ν_1ν_2
and ν_3 when ν_i≠ 1.
Expressions for
the constants in (<ref>) are not given in this paper due to their complexity. These constants are not needed as the bias rates in Theorem <ref> are sufficient to implement ensemble bias reduction. The resultant ensemble estimator achieves the parametric MSE convergence
rate O(1/N) (see Section <ref> for the mixed case and
Appendix <ref> for the continuous case).
We also derive a refined expression for the bias that enables
us to achieve the parametric convergence rate under less restrictive
smoothness assumptions on the densities (s>(d_X+d_Y)/2 compared
to s≥ d_X+d_Y for (<ref>)). However, the resulting
expansion has more terms and the ensemble estimator is more complicated to implement. Thus we have chosen
to present the simpler case here. The more complex expansion and estimator are presented
in Appendix <ref>.
Having obtained an expression for the bias of Ĩ_h_X,h_Y, we now present
an upper bound on its variance to complete the derivation of its MSE.
If the functional g
is Lipschitz continuous in both of its arguments with Lipschitz constant
C_g, then the variance of Ĩ_h_X,h_Y is
𝕍[Ĩ_h_X,h_Y]≤22C_g^2||K_X· K_Y||_∞^2/N.
The Lipschitz assumption on g for the variance result is comparable
to assumptions made by others for nonparametric estimation of distributional
functionals <cit.>
and is satisfied for Shannon and Renyi informations when the densities
are bounded above and below. Note that Theorem <ref>
requires much less strict assumptions than Theorem <ref>.
The proofs of Theorems <ref> and <ref> are
given in Appendix <ref> and <ref>,
respectively.
Theorems <ref> and <ref> indicate that for
the MSE to go to zero, we require h_X,h_Y→0 and Nh_X^d_Xh_Y^d_Y→∞.
In Section <ref>, we will use Theorems <ref>
and <ref> to derive bias and variance expressions for
the MI plug-in estimators under the more general cases where
𝐗 and/or 𝐘 may contain a mixture of discrete and continuous components.
We will then use these convergence rate results to derive MI ensemble
estimators for both cases (purely continuous random variables and
mixed random variables) that achieve the parametric MSE convergence
rate regardless of the dimension as long as the densities are sufficiently
smooth.
§.§ Minimax Rate for MI estimation
We wrap up this section with a minimax lower
bound on the MSE rate of convergence for the continuous MI estimation
problem.
Assume that g is
at least twice differentiable and that given ϵ>0, |g''(ϵ)|>0.
Define the set of functions Σ(s,H,ϵ_0,ϵ_∞)
to be the set of Hölder continuous functions Σ(s,H) that
are bounded between ϵ_0 and ϵ_∞. Then
with γ=min{ 8s/(4s+d_X+d_Y),1},
there exists a strictly positive constant c such that
lim inf_N→∞inf_Ĝ_Nsup_f_XY∈Σ(s,H,ϵ_0,ϵ_∞)𝔼[(Ĝ_N-I(𝐗;𝐘))^2]≥ cN^-γ.
The proof uses Le Cam's method <cit.> and is
given in Appendix <ref>. Theorem <ref>
indicates that the minimax rate is the parametric rate N^-1 as
long as s≥(d_X+d_Y)/4. This is consistent with
minimax rates for divergence <cit.>
and entropy <cit.> functional estimation, thus
expanding the previous theory on minimax estimation of information
theoretic functionals.
In Section <ref> and Appendix <ref>,
we derive MI estimators that achieve the minimax rate when s≥ d_X+d_Y and s>(d_X+d_Y)/2, respectively.
While estimators have been derived for the divergence estimation problem
that achieve the minimax rate for less smooth densities, they require
numerical integration and are thus computationally slow <cit.>.
Deriving estimators of these functionals (e.g. MI and divergence)
that are known to achieve the minimax rate in this less smooth regime
and that are computationally reasonable thus remains an open problem.
§ MIXED RANDOM VARIABLES
In this section, we extend the results of Section <ref>
to general MI estimation when 𝐗 and 𝐘 may
have a mixture of discrete and continuous components. We focus on
the most complex case: 𝐗 and 𝐘 both have discrete and continuous
components. The MI between 𝐗 and 𝐘 is written in (<ref>).
§.§ KDE Plug-in Estimator
We first define the KDE plug-in estimator of (<ref>).
Let 𝒮_X_C and 𝒮_Y_C be the respective
supports of the densities of 𝐗_C and 𝐘_C,
and let 𝒮_X_D and 𝒮_Y_D be the respective
supports of the probability mass functions of 𝐗_D
and 𝐘_D. Suppose we have N i.i.d. samples of (𝐗,𝐘) drawn
from f_XY where the ith sample is denoted as (𝐗_i,𝐘_i)=(𝐗_i,C,𝐗_i,D,𝐘_i,C,𝐘_i,D).
Define the following random variables:
𝐍_y =∑_i=1^N1_{𝐘_i,D=y},
𝐍_x =∑_i=1^N1_{𝐗_i,D=x},
𝐍_xy =∑_i=1^N1_{𝐗_i,D=x,𝐘_i,D=y},
where x∈𝒮_X_D, y∈𝒮_Y_D, and
1_{·} is the indicator function. These will be used to
estimate the pmfs of the discrete components of 𝐗 and 𝐘.
For the continuous components, we will condition on the discrete components
and construct KDEs for the conditional probability density functions.
Let 𝒮_X_C and 𝒮_Y_C be the respective
supports of the marginal densities f_X_C and f_Y_C with
corresponding dimensions of d_X and d_Y. As before, let
K_X(·) and K_Y(·) be kernel functions with ∫ K_X(x)dx=∫ K_Y(y)dy=1,
||K_X||_∞, ||K_Y||_∞<∞ where ||K||_∞=sup_x|K(x)|.
Consider the following sets:
𝒳_x ={𝐗_i,C∈{𝐗_1,C,…,𝐗_N,C}|𝐗_i,D=x},
𝒴_y ={𝐘_i,C∈{𝐘_1,C,…,𝐘_N,C}|𝐘_i,D=y}.
The set 𝒳_x is the set of the continuous data
points where the corresponding discrete component is equal to x.
The set 𝒴_y is defined similarly. The KDEs for f_X_C|X_D,
f_Y_C|Y_D, and f_X_CY_C|X_DY_D at x∈𝒮_X_D
and y∈𝒮_Y_D are, respectively,
f̂_X_C|x(𝐗_i,C) =1/𝐍_xh_X_C|x^d_X∑_𝐗_j,C∈𝒳_x, j≠ i K_X(𝐗_i,C-𝐗_j,C/h_X_C|x),
f̂_Y_C|y(𝐘_i,C) =1/𝐍_yh_Y_C|y^d_Y∑_𝐘_j,C∈𝒴_y, j≠ i K_Y(𝐘_i,C-𝐘_j,C/h_Y_C|y),
f̂_Z_C|z(𝐗_i,C,𝐘_i,C) =1/𝐍_xyh_X_C|x^d_Xh_Y_C|y^d_Y∑_𝐗_j,C∈𝒳_x AND 𝐘_j,C∈𝒴_y, j≠ i K_X(𝐗_i,C-𝐗_j,C/h_X_C|x)K_Y(𝐘_i,C-𝐘_j,C/h_Y_C|y),
where 𝐙_C=(𝐗_C,𝐘_C) and h_Z_C|z=(h_X_C|x,h_Y_C|y).
Note that we allow the bandwidths to depend on the discrete components
of 𝐗 and 𝐘. The reason for this is that the
bandwidth is generally chosen as a function of the number of data
points, which will differ for these conditional distributions as the
discrete components of 𝐗 and 𝐘 differ.
The MI I(𝐗;𝐘) can then be estimated by plugging in the conditional
KDEs. Note that the MI in eq. (<ref>) is written as
a weighted sum of integral functionals. We therefore first define
an intermediate estimator of the integral functionals:
Ĩ_h_X_C|x,h_Y_C|y=1/𝐍_xy∑_𝐗_C∈𝒳_x AND 𝐘_C∈𝒴_y g(f̂_X_C|x(𝐗_C)f̂_Y_C|y(𝐘_C)/f̂_Z_C|z(𝐗_C,𝐘_C)×𝐍_x𝐍_y/N𝐍_xy).
Again in practice, we evaluate the KDEs at each of the data points
using a leave-one-out KDE, enabling us to avoid evaluating a high-dimensional
integral. We then define a plug-in KDE estimator of I(𝐗;𝐘):
Ĩ_h_X_C|X_D,h_Y_C|Y_D=∑_x∈𝒮_X_D,y∈𝒮_Y_D𝐍_xy/N Ĩ_h_X_C|x,h_Y_C|y.
The quality of the conditional density estimates in terms of bias
and variance depends on the choice of bandwidths h_X_C|x and
h_Y_C|y. That is, for the KDE f̂_X_C|x to converge in
MSE, it is necessary that h_X_C|x→0 and 𝐍_xh_X_C|x^d_X→∞
as 𝐍_x→∞ (a similar result holds for h_Y_C|y) <cit.>.
Furthermore, we will see when we derive the bias and variance of Ĩ_h_X_C|X_D,h_Y_C|Y_D
that these conditions are also necessary for Ĩ_h_X_C|X_D,h_Y_C|Y_D
to converge in MSE. Thus, when deriving the MSE convergence rate of
Ĩ_h_X_C|X_D,h_Y_C|Y_D, we will assume that h_X_C|x
is a function of 𝐍_x and h_Y_C|y is a function of 𝐍_y.
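A minimal sketch of Ĩ_h_X_C|X_D,h_Y_C|Y_D follows, assuming for simplicity that 𝐗_D and 𝐘_D are each one-dimensional label vectors; the helper kde_eval evaluates a uniform-kernel KDE without the leave-one-out correction, and all names and floors are our own illustrative choices.

import numpy as np

def kde_eval(train, query, h):
    # Uniform product-kernel KDE built from `train`, evaluated at `query`.
    train, query = np.atleast_2d(train), np.atleast_2d(query)
    h = np.broadcast_to(np.asarray(h, dtype=float), (train.shape[1],))
    inside = np.all(np.abs(query[:, None, :] - train[None, :, :]) <= h / 2.0, axis=2)
    return inside.sum(axis=1) / (len(train) * np.prod(h))

def mi_plugin_mixed(Xc, Xd, Yc, Yd, beta, alpha, lX=1.0, lY=1.0,
                    g=lambda t: -np.log(t)):
    # Mixed plug-in estimator: condition on each (x, y), estimate the conditional
    # densities, and weight the conditional terms by N_xy / N.
    N = len(Xd)
    Zc = np.hstack([Xc, Yc])
    total = 0.0
    for x in np.unique(Xd):
        for y in np.unique(Yd):
            mx, my = Xd == x, Yd == y
            mxy = mx & my
            Nx, Ny, Nxy = mx.sum(), my.sum(), mxy.sum()
            if Nxy == 0:
                continue
            hX, hY = lX * Nx ** (-beta), lY * Ny ** (-alpha)  # data-driven bandwidths
            fX = kde_eval(Xc[mx], Xc[mxy], hX)     # estimate of f_{Xc | Xd = x}
            fY = kde_eval(Yc[my], Yc[mxy], hY)     # estimate of f_{Yc | Yd = y}
            hZ = np.concatenate([np.full(Xc.shape[1], hX), np.full(Yc.shape[1], hY)])
            fZ = np.maximum(kde_eval(Zc[mxy], Zc[mxy], hZ), 1e-300)
            # the factor Nx*Ny/(N*Nxy) is the plug-in estimate of the pmf ratio R_2
            ratio = np.maximum(fX * fY / fZ * (Nx * Ny) / (N * Nxy), 1e-300)
            total += (Nxy / N) * np.mean(g(ratio))
    return total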
§.§ MSE Convergence Rates of the Mixed Plug-in Estimator
Here we derive the MSE convergence rate
of a plug-in estimator of MI when the random variables have a mixture
of discrete and continuous components. We will need the following:
Let 𝐍_y, 𝐍_x, and 𝐍_xy
be defined as in (<ref>). Assume that their corresponding
probability mass functions are bounded away from zero. If α∈ℝ\{0,1}
and λ+β+γ∈ℝ\{0,1}, then
𝔼[𝐍_xy^α] =(Nf_X_DY_D(x,y))^α+O(N^α-1)
𝔼[𝐍_xy^λ𝐍_x^β𝐍_y^γ] =N^λ+β+γ(f_X_DY_D(x,y))^λ(f_X_D(x))^β(f_Y_D(y))^γ+O(N^λ+β+γ-1).
The proof is in Appendix <ref> and uses the
generalized binomial theorem, Taylor series expansions, and known
results about the central moments of binomial random variables <cit.>.
Lemma <ref> provides key results on moments
of products of the binomial random variables 𝐍_xy, 𝐍_x,
and 𝐍_y. These results can be used to derive the bias and variance
of a plug-in estimator of MI with mixed components in (<ref>)
as long as the bias and variance of the corresponding plug-in estimator
for the continuous weighted case in (<ref>) is known.
This is demonstrated in the following theorems for the KDE plug-in
estimator Ĩ_h_X_C|X_D,h_Y_C|Y_D.
Assume that assumptions 𝒜.0-𝒜.5
hold with respect to the functional g, the kernels K_X and
K_Y, and the densities f_X_C|X_D, f_Y_C|Y_D
and f_X_CY_C|X_DY_D. Assume that |𝒮_X_D|,|𝒮_Y_D|<∞.
Assume that h_X_C|x=l_X𝐍_x^-β and h_Y_C|y=l_Y𝐍_y^-α
with 0<β<1/d_X, 0<α<1/d_Y, and
l_X,l_Y>0. Then the bias of Ĩ_h_X_C|X_D,h_Y_C|Y_D
is
𝔹[Ĩ_h_X_C|X_D,h_Y_C|Y_D] =∑_i,j=0, i+j≠0^r c_13,i,j l_X^il_Y^jN^-iβ-jα+O(N^-sα+N^-sβ+N^β d_X+α d_Y-1).
The constants depend on the underlying densities, the chosen kernels,
the functional g, and the probability mass functions and are independent
of l_X, l_Y, and N. Furthermore, these rates are asymptotically
tight.
Assume that h_X_C|x=l_X𝐍_x^-β
and h_Y_C|y=l_Y𝐍_y^-α with 0<β<1/d_X,
0<α<1/d_Y, β d_X+α d_Y≤1, and
l_X,l_Y>0. Assume that |𝒮_X_D|,|𝒮_Y_D|<∞.
If the shaping function g is Lipschitz continuous in both of its arguments,
then the variance of Ĩ_h_X_C|X_D,h_Y_C|Y_D is O(1/N).
These theorems provide the necessary information for applying the
theory of optimally weighted ensemble estimation to obtain MI estimators
with improved rates (see Section <ref>).
§.§ Proof Sketches of Theorems <ref> and <ref>
For Theorem <ref>, the proof splits the bias term
into two terms by adding and subtracting g(𝒯(x,y)𝐍_x𝐍_y/(N𝐍_xy))
for each pair (x,y), where 𝒯(x,y) is independent
of the data samples and is defined in Eq. (<ref>). It can
be shown that the newly added term has bias O(1/N). The other term
is handled by conditioning on the discrete components of the data
samples to obtain the conditional bias terms 𝔼[Ĩ_h_X_C|x,h_Y_C|y|𝐗_1,D,…,𝐗_N,D,𝐘_1,D,…,𝐘_N,D]
for each pair (x,y). Theorem <ref> can then be applied
to each of these terms to obtain expressions of the random variables
𝐍_x, 𝐍_y, and 𝐍_xy with terms of the form given
in Lemma <ref>. Lemma <ref>
can be applied to these terms to obtain the final result, where care
is taken to ensure that all relevant terms have been handled properly.
The full proof is given in Appendix <ref>.
To prove Theorem <ref>, we use the law of total variance
to split the variance into two terms: the expected value of the variance
conditioned on the discrete components of the data samples and the
variance of the conditional expectation. Theorem <ref>
is applied to the conditional variance term. For the conditional expectation
term, we use results obtained in the proof of Theorem <ref>
combined with the Efron-Stein inequality <cit.>
to obtain expressions of the random variables 𝐍_x, 𝐍_y,
and 𝐍_xy. Lemma <ref> can be applied
again to these terms to obtain the final result. The full proof is
given in Appendix <ref>.
§ ENSEMBLE ESTIMATION OF GENERALIZED MI
If no bias correction is performed, then Theorems <ref>
and <ref> show that the optimal bias rate of the KDE
plug-in estimators Ĩ_h_X,h_Y and Ĩ_h_X_C|X_D,h_Y_C|Y_D
is O(1/N^1/(d_X+d_Y+1)), which converges very
slowly to zero when either d_X or d_Y are not small. Thus
the standard KDE plug-in estimators will perform poorly in higher-dimensional
settings. We use the theory of optimally weighted ensemble estimation
developed in <cit.> to improve this rate. For brevity,
we focus on the case where 𝐗 and 𝐘 both contain a mixture
of discrete and continuous components. The purely continuous case
is described in Appendix <ref>.
An ensemble of estimators is first formed by choosing different bandwidth
values for the plug-in estimators as follows. Let ℒ be
a set of real positive numbers with |ℒ|=L<∞. This
set will parameterize the bandwidths h_X_C|x and h_Y_C|y for f̂_X_C|x
and f̂_Y_C|y, respectively, resulting in L estimators in
the ensemble. In other words, we set h_X_C|x(l)=l𝐍_x^-β and
h_Y_C|y(l)=l𝐍_y^-α with l∈ℒ. While different
parameter sets for h_X_C|x and h_Y_C|y can be chosen, we only use one
set here for simplicity of exposition. To achieve the parametric rate,
we need to ensure that the final terms in (<ref>)
are O(1/√(N)). Thus we require the following conditions to
be met:
sα ≥1/2,
sβ ≥1/2,
1-β d_X-α d_Y ≥1/2.
For all of these conditions to hold, it is necessary that s≥ d_X+d_Y.
Thus for each estimator in the ensemble we choose h_X_C|x(l)=l𝐍_x^-1/(2(d_X+d_Y))
and h_Y_C|y(l)=l𝐍_y^-1/(2(d_X+d_Y))
where l∈ℒ. Define w to be a weight vector parameterized
by l∈ℒ with ∑_l∈ℒw(l)=1 and define
Ĩ_w=∑_l∈ℒw(l)∑_x∈𝒮_X_D,y∈𝒮_Y_D𝐍_xy/N Ĩ_h_X_C|x(l),h_Y_C|y(l).
This is the weighted ensemble estimator. From Theorem <ref>,
the bias of Ĩ_w is
𝔹[Ĩ_w] =∑_l∈ℒ∑_i=1^rθ(w(l)l^iN^-i/2(d_X+d_Y))
+O(√(L)||w||_2(N^-s/2(d_X+d_Y)+N^-1/2)),
where we use θ notation to omit the constants.
We use the general theory of optimally weighted ensemble estimation
in <cit.> to improve the MSE convergence
rate of the plug-in estimator by choosing the appropriate weights
to cancel the lower order terms in (<ref>):
Let ℒ be a set of
real positive numbers with |ℒ|=L<∞ and let J={1,2,…,d_X+d_Y}.
Assume the same conditions in Theorems <ref> and
<ref> hold with h_X_C|x(l)=l𝐍_x^-1/(2(d_X+d_Y))
and h_Y_C|y(l)=l𝐍_y^-1/(2(d_X+d_Y)).
Assume that s≥ d_X+d_Y and define Ĩ_w as in (<ref>).
Then the MSE of Ĩ_w_0 attains the parametric rate of convergence
of O(1/N) where w_0 is the solution to the following
offline convex optimization problem:
[ min_w ||w||_2; subject to ∑_l∈ℒw(l)=1,; ∑_l∈ℒw(l)l^i=0, i∈ J. ]
To summarize, if the weights are chosen using eq. (<ref>),
then the weighted ensemble estimator Ĩ_w_0 achieves the parametric
MSE rate. In practice, the optimization problem in (<ref>)
typically results in a very large increase in variance for finite
samples. Thus we use a relaxed version of (<ref>):
[ min_w ϵ; subject to ∑_l∈ℒw(l)=1,; |∑_l∈ℒw(l)l^iN^1/2-i/2(d_X+d_Y)|≤ϵ, i∈ J,; ‖ w‖ _2^2≤ηϵ. ]
The parameter η is chosen to achieve a trade-off between bias
and variance. As shown in <cit.>,
the ensemble estimator Ĩ_w_0 using the resulting weight vector
from the optimization problem in (<ref>) still achieves
the parametric MSE convergence rate under the same assumptions as
described previously. We denote this estimator as GENIE; Algorithm <ref>
summarizes it.
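As an illustration, the sketch below computes the weights from the exact optimization problem in the preceding theorem in closed form (np.linalg.lstsq returns the minimum-ℓ_2-norm solution of the underdetermined constraint system) and then combines the ensemble, reusing mi_plugin_mixed from the earlier sketch; a faithful implementation of Algorithm <ref> would instead solve the relaxed problem above, so this is an illustration rather than the definitive procedure.

import numpy as np

def genie_weights(L_vals, J):
    # Minimum-norm w with sum_l w(l) = 1 and sum_l w(l) * l**i = 0 for i in J.
    L_vals = np.asarray(L_vals, dtype=float)
    A = np.vstack([np.ones_like(L_vals)] + [L_vals ** i for i in J])
    b = np.zeros(A.shape[0]); b[0] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def genie(Xc, Xd, Yc, Yd, L_vals, g=lambda t: -np.log(t)):
    # Ensemble of mixed plug-in estimators, one bandwidth parameter l per member.
    dX, dY = Xc.shape[1], Yc.shape[1]
    expo = 1.0 / (2 * (dX + dY))                # h(l) = l * N_x ** (-expo)
    w0 = genie_weights(L_vals, J=range(1, dX + dY + 1))
    ests = np.array([mi_plugin_mixed(Xc, Xd, Yc, Yd, beta=expo, alpha=expo,
                                     lX=l, lY=l, g=g) for l in L_vals])
    return float(w0 @ ests)

Feasibility of the weight system requires more than d_X+d_Y+1 distinct values of l, in line with the recommendation of 30-60 values in the parameter selection subsection below.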
A similar approach can be used to derive an ensemble estimator for
the case when 𝐗 and 𝐘 are purely continuous. Furthermore,
we can define ensemble estimators for both the continuous and the
mixed cases that achieve the parametric MSE rate if s>(d_X+d_Y)/2,
although the optimization problem is more complicated. See Appendix <ref>
for details.
The weights obtained in (<ref>) are optimal in two senses.
First, they are the optimal solution to the problem in (<ref>).
This contrasts with other popular ensemble methods such as random
forests, where the ensemble of learners are equally weighted, and
AdaBoost, where the weights are assigned to different regions of the
feature space based on the training data. The weights are also optimal
in an asymptotic sense. It can be shown that the variance of the ensemble
estimator is bounded by a multiple of ||w||_2 <cit.>.
By minimizing the norm of the weights (or an upper bound on it), we
choose a weight vector that reduces the bias (due to the constraints)
while controlling the variance. Thus the weights are also optimal
in the sense that the bias is reduced to the parametric rate while
the variance is controlled as much as possible given the information
that we have. Since the parametric rate is minimax optimal, this is
also asymptotically optimal for sufficiently smooth densities.
We note that the ensemble estimation approach given here can be compared
to the Jackknife bias correction method <cit.>.
Both approaches use a linear combination of estimators to obtain a
less-biased estimator. However, the standard Jackknife approach uses uniform
weights for the linear combination while the ensemble approach presented
here obtains weights from an optimization problem. This results in
a more computationally efficient procedure as only L estimators
are required for the ensemble approach where L is on the order
of 30-50. The weights can also be computed offline and so solving
the optimization problem contributes little to the total computation
time. In contrast, the standard Jackknife approach requires N different
estimators which is less computationally efficient.
A more general Jackknife approach such as that in <cit.> shares more similarities with our ensemble method. In this particular work, the authors similarly compute the weights based on an asymptotic bias expansion. However, they do not control the variance via the norm of the weights as we do. Additionally, the Jackknife approach uses a linear combination of estimators with different samples sizes while we use estimators with different bandwidths. Finally, this general Jackknife approach is still more computationally intensive than our ensemble method which computes the weights offline.
At first glance, the weighted ensemble approach discussed in this
section appears to be quite similar to the optimal kernel approaches
used in <cit.>.
However, the weighted ensemble estimation theory we use is applied
to an ensemble of MI estimators after plugging in an ensemble of KDEs
with different bandwidths. So in some sense, we are optimizing the
ensemble of kernels (whose shape is determined by the bandwidth and
the fixed kernel) for the MI estimation problem. In contrast, the
optimal KDE approach first optimizes the kernel for the KDE problem,
and then plugs in the optimized KDE for MI estimation. It is possible
that a proper modification of the ensemble estimation theory could
be applied to a KDE to obtain an optimal KDE and unify these approaches.
This extension is left for future work.
§.§ Parameter Selection
Asymptotically, the theoretical results of the previous sections hold
for any choice of the bandwidth vectors as determined by ℒ.
In practice, we find that the following rules-of-thumb for tuning
the parameters lead to high-quality estimates in the finite sample
regime.
* Select the minimum and maximum bandwidth parameter to produce density
estimates that satisfy the following: first the minimum bandwidth
should not lead to a zero-valued density estimate at any sample point;
second the maximum bandwidth should be smaller than the diameter of
the support.
* Ensure the bandwidths are sufficiently distinct. Similar bandwidth
values lead to a negligible decrease in the bias and many bandwidth
values may increase ||w_0||_2 resulting in an increase in variance
<cit.>.
* Select L=|ℒ|>|J|=I to obtain a feasible solution for
the optimization problems in (<ref>) and (<ref>).
We find that choosing a value of 30≤ L≤60, and setting ℒ
to be L linearly spaced values between the minimum and maximum
values described above works well in practice.
The resulting ensemble estimators are robust in the sense that they
are not sensitive to the exact choice of the bandwidths or the number
of estimators as long as the rough rules-of-thumb given above
are followed. Moon et al. <cit.> gives more details
on ensemble estimator parameter selection for continuous divergence
estimation. These details also apply to the continuous parts of the
mixed cases for MI estimation in this paper. In particular, the minimum
and maximum bandwidth parameters can be efficiently selected based
on the k nearest neighbor distances of all data points.
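A rough sketch of this k-nn based selection follows; the factor of two (which keeps a nonempty uniform-kernel window around every point) and the cap on the maximum are our own heuristics, not prescriptions from the theory.

import numpy as np
from scipy.spatial import cKDTree

def bandwidth_grid(X, L=40, k=3):
    # Distance from each point to its k-th nearest neighbor
    # (index 0 of the query result is the point itself).
    d_knn, _ = cKDTree(X).query(X, k=k + 1)
    h_min = 2.0 * d_knn[:, -1].max()    # every LOO density estimate stays positive
    h_max = max(0.5 * np.ptp(X, axis=0).max(), 2.0 * h_min)  # below support diameter
    return np.linspace(h_min, h_max, L)  # L linearly spaced bandwidth values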
Since the optimal weight w_0 can be calculated offline, the computational
complexity of the estimators is dominated by the construction of the
KDEs which has a complexity of O(N^2) using the standard
implementation. For very large datasets, more efficient KDE implementations
(e.g. <cit.>) can be used to reduce the computational
burden.
§.§ Central Limit Theorem
We finish this section with central limit theorems
for the ensemble estimators. This enables us to perform hypothesis
testing on the MI measure.
Let Ĩ_w^cont be a weighted
KDE ensemble estimator of I_ν(𝐗;𝐘) when 𝐗 and 𝐘 are
continuous with bandwidths h_X(l) and h_Y(l) for each estimator
in the ensemble. Assume that the shaping function g is Lipschitz in both
arguments with Lipschitz constant C_g and that h_X(l), h_Y(l)→0,
N→∞, and Nh_X^d_X(l), Nh_Y^d_Y(l)→∞
for each l∈ℒ. Then for fixed ℒ, and if
𝐒 is a standard normal random variable,
Pr((Ĩ_w^cont-𝔼[Ĩ_w^cont])/√(𝕍[Ĩ_w^cont])≤ t)→Pr(𝐒≤ t).
The proof is based on an application of Slutsky's Theorem preceded
by an application of the Efron-Stein inequality (see Appendix <ref>).
For the mixed component case, if 𝒮_X_D and 𝒮_Y_D
are finite, then the corresponding ensemble estimators also obey a
central limit theorem. The proof follows by an application of Slutsky's
Theorem combined with Theorem <ref>.
Let Ĩ_w be a weighted KDE ensemble
estimator of I(𝐗;𝐘) when 𝐗 and 𝐘 contain both continuous
and discrete components. Let the bandwidths for the conditional estimators
be h_X_C|x(l) and h_Y_C|y(l) for each estimator in the ensemble. Assume
that the shaping function g is Lipschitz in both arguments and that h_X_C|x(l), h_Y_C|y(l)→0,
N→∞, and 𝐍_xh_X_C|x^d_X(l), 𝐍_yh_Y_C|y^d_Y(l)→∞
for each l∈ℒ and ∀(x,y)∈𝒮_X_D×𝒮_Y_D
with |𝒮_X_D|,|𝒮_Y_D|<∞.
Then for fixed ℒ,
Pr((Ĩ_w-𝔼[Ĩ_w])/√(𝕍[Ĩ_w])≤ t)→Pr(𝐒≤ t).
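As one illustrative use of this asymptotic normality, the sketch below tests independence by comparing a GENIE estimate against a permutation null via a z-statistic; the permutation scheme and the helper names are our own and are not part of the theorems above.

import numpy as np
from scipy.stats import norm

def mi_z_test(estimate, null_estimates):
    # `estimate`: GENIE estimate on the original data; `null_estimates`: GENIE
    # estimates recomputed after permuting Y (which destroys any dependence).
    # The z-statistic is justified by the central limit theorems above.
    mu = np.mean(null_estimates)
    sd = np.std(null_estimates, ddof=1)
    z = (estimate - mu) / sd
    return z, 1.0 - norm.cdf(z)      # one-sided p-value for I > 0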
§ APPLICATIONS
§.§ Simulations
In this section, we validate our theory by estimating the Rényi-α
MI integral (i.e. g(x)=x^α in (<ref>); see <cit.>)
where 𝐗 is a mixture of truncated Gaussian random variables
restricted to the unit cube and 𝐘 is a categorical random
variable indicating which component of the mixture 𝐗 is drawn
from. In this setting, 𝐘
can be viewed as a classification variable and 𝐗 contains
the chosen features, which are all continuous in this case. Since 𝐗
is purely continuous and 𝐘 is purely discrete, the MI integral
reduces to the following:
I(𝐗;𝐘)=∑_y∈ S_Yf_Y_D(y)∫(f_X_C(x_C)/f_X_C|Y_D(x_C|y))^αf_X_C|Y_D(x_C|y)dx_C.
We illustrate with Rényi MI as it has received recent interest and the
estimation problem does not reduce to entropy estimation, in contrast
to Shannon MI. Thus this is a clear case where there are no other
nonparametric estimators that are known to achieve the parametric
MSE rate. In fact, to the best of our knowledge, there are no other
nonparametric estimators of Rényi MI that are known to be consistent
in this mixed setting.
We consider two cases. In the first case, 𝐘 has three
possible outcomes (i.e. |𝒮_Y|=3) and respective probabilities
Pr(𝐘=0)=Pr(𝐘=1)=2/5 and Pr(𝐘=2)=1/5.
The conditional covariance matrices are all 0.1× I_d and
the conditional means are, respectively, μ̅_0=0.25×1̅_d,
μ̅_1=0.75×1̅_d, and μ̅_2=0.5×1̅_d,
where I_d is the d× d identity matrix and 1̅_d
is a d-dimensional vector of ones. This experiment can be viewed
as the problem of estimating MI (e.g. for feature selection or Bayes
error bounds) of a classification problem where each discrete value
corresponds to a distinct class, the distribution of each class overlaps
slightly with others, and the class probabilities are unequal. We
use α=0.5. We set ℒ to be 40 linearly spaced
values between 1.2 and 3. The bandwidth in the KDE plug-in estimator
is also set to 2.1N^-1/(2d).
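For reproducibility, a sketch of this data-generating process is given below; implementing the truncation to the unit cube by rejection sampling is our own choice, as the text specifies only that the Gaussians are truncated.

import numpy as np

def sample_case1(N, d, seed=0):
    # Y ~ Categorical(2/5, 2/5, 1/5); X | Y=k ~ N(mu_k, 0.1*I_d) truncated to [0,1]^d.
    rng = np.random.default_rng(seed)
    means = {0: 0.25, 1: 0.75, 2: 0.5}
    Y = rng.choice(3, size=N, p=[0.4, 0.4, 0.2])
    X = np.empty((N, d))
    for i, y in enumerate(Y):
        x = rng.normal(means[y], np.sqrt(0.1), size=d)
        while ((x < 0) | (x > 1)).any():     # rejection step enforces the truncation
            x = rng.normal(means[y], np.sqrt(0.1), size=d)
        X[i] = x
    return X, Y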
Figure <ref> shows the MSE (200 trials) of the plug-in
KDE estimator of the MI integral using a uniform kernel and the optimally
weighted ensemble estimator GENIE for various sample sizes
and for d=4, 6, 9, respectively. The ensemble estimator GENIE
outperforms the standard plug-in estimator, especially for larger
sample sizes and larger dimensions. This demonstrates that while an
individual kernel estimator performs poorly, an ensemble of estimators
including the individual estimator performs well.
For the second case, 𝐘 has six possible outcomes (i.e.
|𝒮_Y|=6) and respective probabilities Pr(𝐘=0)=0.35,
Pr(𝐘=1)=0.2, Pr(𝐘=2)=Pr(𝐘=3)=0.15, Pr(𝐘=4)=0.1, and
Pr(𝐘=5)=0.05. We chose α=0.5 and d=6. The conditional
covariance matrices are again 0.1× I_d and the conditional
means are, respectively, μ̅_0=0.25×1̅_d, μ̅_1=0.75×1̅_d,
and μ̅_2=0.5×1̅_d, μ̅_3=(0.25×1̅_4^T,0.5×1̅_2^T)^T,
μ̅_4=(0.75×1̅_2^T,0.375×1̅_4^T)^T,
and μ̅_5=(0.5×1̅_4^T,0.25×1̅_2^T)^T.
The results are again given in Figure <ref>. The parameters
for the ensemble estimator and the KDE plug-in estimators are the
same as in the other three plots in Figure <ref>. The
ensemble estimator also outperforms the plug-in estimator in this
setting.
The estimated negative slopes of the log-log plots in Figure <ref>
are given in Table <ref>. In all settings, both the plug-in
and ensemble estimators outperform their theoretical rates in this
finite-sample regime. However, the rates are generally approaching
the theoretical rates as the dimension increases. It is also clear
from these slopes that the ensemble estimators greatly outperform
the plug-in estimators. We expect the rates to converge to the theoretical
rates as the sample size increases.
§.§ Application to Single-Cell RNA-Sequencing Data
A common application of MI estimation is to measure the strength of
relationships between different variables, especially in a feature
selection setting. Model aggregation, which includes ensemble methods,
for model selection is a classical problem in statistics <cit.>.
Here we use the GENIE estimator on two different single-cell RNA-sequencing
(scRNA-seq) datasets to demonstrate the estimator's utility for feature
selection.
Information theory has been used previously in many genomics applications <cit.>.
Single-cell RNA-sequencing data is obtained by measuring the RNA expression
levels in individual (i.e. single) cells <cit.>.
Thousands of genes are typically measured in thousands of cells. This
allows the data to capture the heterogeneity of cell types within
a sample, in contrast with bulk RNA-sequencing methods which effectively
measure the average RNA expression levels within a sample. To correct
for undersampling that is present in scRNA-seq data, we first performed
imputation on both datasets <cit.>.
For these datasets, we estimated two MI measures: the Rényi MI
and DREMI <cit.>. We define the Rényi
MI to be equal to the Rényi divergence between the joint distribution
of 𝐗 and 𝐘 and the product of the marginal distributions.
The DREMI score is a weighted MI developed specifically for analyzing
single-cell data <cit.>. See Appendix <ref>
for further details. Note that no other estimator has been defined
for I_DREMI when the dimension of the continuous component or
components are greater than 1.
§.§.§ Mouse bone marrow data
We applied GENIE to scRNA-seq data measured from developing mouse
bone marrow cells <cit.>. Estimating mutual
information is commonly done in feature selection, where features (in
this case the gene expression levels) are selected based on the
estimated mutual information between the features and the response
variable (in this case the cell type classification). Features with higher MI are chosen
as they provide more information about the response variable. After
preprocessing, the data contained 10,738 genes measured in 2,730 cells.
In <cit.>, the authors assigned each of the
cells to one of 19 different cell types based on its gene expression
profile. Examples of cell types in this data include erythrocytes,
basophils, and monocytes.
For this data, we estimated the two different MI measures between
the cell type classification (discrete) and selected groups of genes
(continuous). We estimated the MI for different combinations of genes
selected from the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways
associated with the hematopoietic cell lineage <cit.>.
Each of these collections contained 8-10 genes. Since the number of
cell types is discrete and the gene expression levels are continuous,
the estimation problem corresponds to estimating the MI between 𝐗
and 𝐘 for the case where 𝐘 is discrete and 𝐗 is continuous.
In this problem, |𝒮_Y|=19 and d_X is the number
of genes in the chosen collection.
Table <ref> gives the results. The mean and standard deviation
of the estimated MI (calculated from 1000 bootstrap samples) are reported
for each gene collection including all genes from the four selected
KEGG pathways. Note that the scores for DREMI and Rényi MI are
not directly comparable due to different scaling. The estimated Rényi
MI for these collections is higher than when selecting 8 genes at
random. This is corroborated by classification accuracies obtained
using either a linear SVM classifier or random forests: the classification
accuracies using the KEGG pathways genes are significantly higher
than those obtained using a random set of genes. This suggests the
genes in KEGG pathways associated with the hematopoietic lineage do
provide some information about cell type in this data. Additionally,
the combined genes from all four pathways have the largest estimated
MI for both measures and classification accuracy, which is expected
as genes from different pathways contain information about different
cell types and are thus necessary for distinguishing between cell
types.
In general, the estimated DREMI when using the KEGG pathways is higher
than the estimated DREMI obtained using random genes. However, several
of these scores are within a standard deviation of the score obtained
from the random genes. Of the four KEGG pathways collections, the
Erythrocyte pathway genes has the largest estimated Rényi MI
and smallest estimated DREMI. Yet, the classification accuracy is
essentially the same as that of the Platelets pathway geneset. These
results highlight the different use cases of these two MI measures.
The Erythrocyte cells are the largest group, containing 1,095 cells.
This suggests that the estimated Rényi MI is biased high for
features relevant for overrepresented groups. In contrast, the DREMI
score appears to be biased low in this case. These results indicate
that the DREMI score may be more appropriate than the Rényi MI
when analyzing less common populations. On the other hand, when less
common populations are not relevant to the analysis, DREMI may not
be as appropriate as other MI measures. These different use cases
highlight the utility of the GENIE estimator in estimating different
MI measures.
§.§.§ Human embryoid body data
We applied GENIE to scRNA-seq data measured from human embryoid bodies
(EB) collected over a 27-day time course <cit.>. Cells
were sampled at 3-day intervals and then pooled resulting in 5 different
sample collections over time. Thus sample 1 contains cells from days
0 and 3, sample 2 contains cells from days 6 and 9, etc. After preprocessing,
the data contained 17,580 genes measured in 16,825 cells, with each
of the five time samples containing about 2,400 to 4,100 cells. In <cit.>,
the authors identified and analyzed several branches of cells. We
used GENIE to identify genes associated with a neural progenitor (NP)
branch and a neural crest (NC) branch by estimating the Rényi
MI and the DREMI score between the gene expression levels of the cells
in each branch (𝐗) and the timecourse variable (𝐘). This again
corresponds to the case where 𝐘 is discrete and 𝐗 is continuous.
For this problem, |𝒮_Y|=5 and d_X is allowed to
vary as described below. Figure <ref> shows PHATE visualizations
of the data highlighted by time sample, and the two branches.
We performed three experiments with each of the branches. For all
experiments, we limited ourselves to genes that are on average nondecreasing
in the branch as time goes on. Thus in each branch, we only considered
the genes such that the correlation between the gene expression level
and time is greater than zero.
For the first experiment, we estimated the MI scores between the time
course variable and a single gene for all genes in the data (i.e.,
d_X=1). Table <ref> contains the estimated MI scores
of the top 10 genes for each of the measures and branches. Several
of these genes are known to be associated with their respective tissues.
For example, CX3CL1 is often expressed in the brain <cit.>,
SEPT6 has been found to be important for the developing neural tube
in zebra fish <cit.>, SREBF2 is necessary for normal
brain development in mice <cit.>, NR2E1 is predominantly
expressed in the developing brain <cit.>, and ZNF804A
may help regulate early brain development <cit.>.
For the NC branch, multiple HOX genes are listed as having high Rényi
MI, all of which are known to be important in the NC <cit.>.
Additionally, RBP1 has been found in enteric nerve NC cells <cit.>,
SHC4 is involved in melanocyte (an NC derivative) development <cit.>,
and PRAME is involved in further differentiation of NC cells <cit.>.
For comparison, we also used the sure independence screening (SIS)
approach described in <cit.>. This approach reduces
to selecting the genes with the largest correlation with 𝐘. Table <ref>
shows the top 10 genes for each of the branches and the corresponding
correlation coefficient. Note that only 1/10 of the SIS-selected genes
match with the Rényi MI-selected genes in the NP branch and only
3/10 in the NC branch. None of the DREMI-selected genes match the
SIS-selected genes. Since the SIS approach focuses on linear relationships,
this suggests that our MI estimator is able to effectively detect
strong relationships that are not strictly linear.
None of the DREMI-selected genes match the Rényi MI-selected
genes in either branch. Visualizing the gene expression levels of
the selected genes using PHATE indicates that genes with high DREMI
scores tend to be more localized to a branch while genes with high
Rényi MI may be spread out more (see Figure <ref>
for some examples). This suggests that the DREMI score may be better
than the Rényi MI when the goal is to identify genes that are
uniquely expressed in specific branches. Again, these different use
cases highlight the utility of the GENIE estimator in estimating different
MI measures.
For the second experiment, we used a greedy forward-selection approach
with the GENIE estimator to identify relevant genes for the two branches.
For the third experiment, we used the same forward-selection approach
as in the second experiment except we started by including three or
four relevant genes identified in <cit.> per branch.
In these experiments, we were able to identify several genes with
known associations with these cell types. See Appendix <ref>
for details.
Our results here indicate that GENIE can be useful in identifying
relevant features under multiple settings, even when the variables
are not purely continuous or purely discrete. In particular, since
GENIE accurately identifies previously known gene relationships, we
propose that GENIE can be used to identify unknown gene relationships
for biological discovery. This use can also be extended to other domains
for scientific discovery.
§ CONCLUSION
We derived the MSE convergence rates for general plug-in KDE-based
estimators of general MI measures between 𝐗 and 𝐘
when they have only continuous components and for the case where
𝐗 and/or 𝐘 contain a mixture of discrete and continuous components.
Using these rates, we defined an ensemble estimator GENIE that achieves
an MSE rate of O(1/N) when the densities are sufficiently smooth.
To the best of our knowledge, this is the first nonparametric MI estimator
that achieves the MSE convergence rate of O(1/N) in this setting
of mixed random variables (i.e. 𝐗 and 𝐘 are not both purely
discrete or purely continuous). We also derived a minimax lower bound
on the convergence rate for estimating MI in the continuous case,
derived the asymptotic distribution of the estimator, validated the
superior convergence rate of the ensemble estimator via experiments,
and applied the estimator to analyze feature relevance in single cell
data. We show that the ensemble estimators for the continuous case
achieve the minimax rate for sufficiently smooth densities. Future
work includes extending this approach to k-nn based estimators
which are generally computationally easier than KDE estimators and
extending the minimax rate for the mixed case considered here. We
conjecture that the minimax rates in the mixed case are at least as
slow as those for the continuous case as the mixed case can be decomposed
into a random sum of continuous MI estimators.
§ EXTENDED GENOMICS DETAILS AND RESULTS
Here we provide further details on the genomics experiments.
§.§ Imputation and DREMI
Single-cell RNA-sequencing data typically suffers
from undersampling. Therefore, we perform imputation on both datasets
using MAGIC <cit.> prior to estimating the MI
measures.
Typically, MI measures are weighted by the joint probability density
of 𝐗 and 𝐘. In DREMI, the measure is instead weighted by the
conditional probability density of 𝐘|𝐗. This allows DREMI to
measure the strength of the relationship between 𝐗 and 𝐘 regardless
of differences in population density that often arise in single-cell
data. Since 𝐗 is continuous and 𝐘 is discrete for both genomics
applications, DREMI can be defined mathematically as
I_DREMI(𝐗;𝐘) =∑_y∈ S_Y∫ f_Y_D|X_C(y|x_C)log(f_X_CY_D(x_C,y)/f_X_C(x_C)f_Y_D(y))dx_C
=∑_y∈ S_Yf_Y_D(y)∫log(f_X_C|Y_D(x_C|y)/f_X_C(x_C))f_X_C|Y_D(x_C|y)/f_X_C(x_C)dx_C.
This measure differs from standard Shannon MI with the inclusion of
the weight 1/f_X_C(x_C) within the integral. While
this does not fit our standard definition of a generalized MI, our
estimation approach allows us to include the inverse of the KDE of
f_X_C when estimating the integral. The proof techniques are
unaffected and therefore our theoretical results still hold.
§.§ EB Data Extended Results
For the second experiment, we used a greedy forward-selection
approach with the GENIE estimator to identify relevant genes. We first
selected the gene with the highest estimated MI in a given branch
(d_X=1). We then identified the gene that gave the largest MI
when included with the first gene (d_X=2). We then repeated this
to obtain the top 10 genes. The results are shown in Table <ref>.
Rényi MI should never decrease as we add more genes, and we indeed
see this in Table <ref>. Thus the relative increase in
estimated Rényi MI can be used as a measure of the amount of
information each gene adds. Note that for both branches, the largest
increase in Rényi MI occurs within the first four genes and the
inclusion of each subsequent gene adds a decreasing amount of Rényi
MI. However, several of these genes have known associations with their
respective branches. Mutations of HFE are associated with neurological
disorders <cit.> while DOC2A is mainly expressed in
the brain <cit.>. For the NC branch, RGR
is associated with eye development which comes partially from the
neural crest, ITPKB is associated with neurulation <cit.>,
and DPYSL4 is associated with the development of the nervous system <cit.>.
While the Rényi MI does not decrease with the addition of genes,
DREMI may decrease due to the reweighting caused by using the conditional
distribution instead of the joint. Thus the change in score when adding
genes is less informative for DREMI. For a fixed dimension, however,
the relative DREMI scores are informative and thus can be used to
identify relevant genes using the forward-selection approach. Using
this approach with DREMI, we identified several genes with known associations
such as HFE (also identified with Rényi MI) and BRD9 <cit.>
with the NP branch, and CFL1 <cit.> and BMP8B <cit.>
with the NC branch.
We also performed a forward-selection variant on SIS. We first selected
the gene with the highest SIS score (correlation coefficient in this
case). We then performed regression with this gene and the time course
variable . We then calculated the SIS score between all of the
other genes individually and the regression residuals to select the
next gene. This process was repeated to obtain a list of the top ten
genes in Table <ref>. Since the SIS criteria is scale-invariant,
this can sometimes result in an increase in the correlation coefficient
as more genes are included, although generally we expect the correlation
to decrease. Thus it is somewhat difficult to assess using SIS the
amount of information added by including each gene. In this case,
the MI and SIS approaches identified unique genes with no shared overlap
in either branch, again suggesting that our MI approaches are identifying
nonlinear relationships.
For the third experiment, we used the same forward-selection approach
as in the second experiment except we started by including three or
four relevant genes identified in <cit.>. These genes
were NKX2-8, EN2, and SOX1 for the NP branch, and PAX3, FOXD3, SOX9,
and SOX10 for the NC branch. The results are presented in Table <ref>.
Interestingly, including these “preset” genes results in a larger
overall Rényi MI and DREMI in the NP branch than when using a
purely greedy approach (Table <ref>) while the opposite
is true for the NC branch. Additionally, the identified genes are
all different from the purely greedy approach. However, many of them
are known to be associated with their respective tissues. PTK6 affects
neurite extension <cit.>, SIRT3 regulates
mitochondria in the brain during development <cit.>,
HEY1 is expressed in neural precursor cells <cit.>,
BMPR1B is important for brain development <cit.>,
FAM72C is enriched in cortical neural progenitors <cit.>,
PPFIA4 is involved in neural development <cit.>,
LRSAM1 is related to enteric NC cells <cit.>,
LINC00327 is associated with regulating neuroblasts <cit.>,
and GRHPR is associated with human eye development <cit.>.
§ MI ENSEMBLE ESTIMATION EXTENSIONS
In this appendix, we present several extensions
of the ensemble estimation approach to MI. First, we show how to apply
the theory to the purely continuous case. We then show how the theory
can be applied to obtain estimators that achieve the parametric rate
for less smooth densities.
§.§ Continuous Random Varables
We can apply Theorem 3 in <cit.>
to obtain a version of the GENIE MI estimator that achieves the parametric
rate for the case when 𝐗 and 𝐘 are purely
continuous. For completeness, we repeat the theorem and its proof here. For a general
estimation problem, let N be the number of available samples and
let ℒ={ l_1,…,l_L} be a set of
index values. For an indexed ensemble of estimators {𝐄̂_l} _l∈ℒ
of a parameter E, the weighted ensemble estimator with weights
w={ w(l_1),…,w(l_L)}
satisfying ∑_l∈ℒw(l)=1 is defined as
𝐄̂_w=∑_l∈ℒw(l)𝐄̂_l.
Consider the following conditions on {𝐄̂_l} _l∈ℒ:
* 𝒞.1 The bias is expressible as
𝔹[𝐄̂_l]=∑_i∈ Jc_iψ_i(l)ϕ_i,d(N)+O(1/√(N)),
where c_i are constants depending on the underlying density and
are independent of N and l, J={ i_1,…,i_I}
is a finite index set with I<L, and ψ_i(l) are basis functions
depending only on the parameter l and not on the sample size N.
* 𝒞.2 The variance is expressible as
𝕍[𝐄̂_l]=c_v(1/N)+o(1/N).
Assume conditions 𝒞.1
and 𝒞.2 hold for an ensemble of estimators {𝐄̂_l} _l∈ℒ.
Then there exists a weight vector w_0 such that the MSE of the
weighted ensemble estimator attains the parametric rate of convergence:
𝔼[(𝐄̂_w_0-E)^2]=O(1/N).
The weight vector w_0 is the solution to the following convex
optimization problem:
[ min_w ||w||_2; subject to ∑_l∈ℒw(l)=1,; γ_w(i)=∑_l∈ℒw(l)ψ_i(l)=0, i∈ J. ]
Due to condition 𝒞.1, the bias of the weighted ensemble estimator is
𝔹[𝐄̂_w]=∑_i∈ Jc_iγ_w(i)ϕ_i,d(N)+O(√(L)||w||_2/√(N)).
Denote the covariance matrix of the ensemble of estimators as Σ_L. By the Cauchy-Schwarz inequality and condition 𝒞.2, the entries of Σ_L are O(1/N). The variance of the weighted estimator is then bounded above as
𝕍[𝐄̂_w]=w^TΣ_Lw≤Trace(Σ_L)||w||_2^2=c_vL||w||_2^2/N+o(1/N).
The optimization problem in (<ref>) zeroes out the lower-order bias terms and limits the ℓ_2 norm of the weight vector w to prevent the variance from exploding. This results in an MSE of O(1/N) as long as the dimension d is fixed and L is fixed and independent of the sample size N. A solution to (<ref>) is guaranteed as long as L>I and the vectors a_i=[ψ_i(l_1),…,ψ_i(l_L)] are linearly independent.
As before, (<ref>) typically results in an ensemble
estimator with a large variance. We can relax this optimization problem
and obtain an estimator that still obtains the parametric rate:
[ min_w ϵ; subject to ∑_l∈ℒw(l)=1,; |γ_w(i)N^1/2ϕ_i,d(N)|≤ϵ, i∈ J,; ‖ w‖ _2^2≤ηϵ. ]
We can use (<ref>) to obtain a GENIE estimator for the
purely continuous case. Theorem <ref> indicates that we
need h_X^d_Xh_Y^d_Y∝ N^-1/2 for the O(1/(Nh_X^d_Xh_Y^d_Y))
terms to be O(1/√(N)). We consider the more general case where
the parameters may differ for h_X and h_Y. Let ℒ_X
and ℒ_Y be sets of real, positive numbers with |ℒ_X|=L_X
and |ℒ_Y|=L_Y. For each estimator in the ensemble,
choose l_X∈ℒ_𝒳 and l_Y∈ℒ_Y and
set h_X(l_X)=l_XN^-1/(2(d_X+d_Y)) and h_Y(l_Y)=l_YN^-1/(2(d_X+d_Y)).
Define the matrix w s.t. ∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)=1.
From Theorems <ref> and <ref>, conditions
𝒞.1 and 𝒞.2 are satisfied if s≥ d_X+d_Y
with ψ_i,j(l_X,l_Y)=l_X^il_Y^j and ϕ_i,j(N)=N^-(i+j)/(2(d_X+d_Y))
for 0≤ i,j≤ d_X+d_Y s.t. 0<i+j≤ d_X+d_Y. The
optimal weight w_0 is calculated using (<ref>).
The resulting estimator
Ĩ_w_0^cont=∑_l_X∈ℒ_X,l_Y∈ℒ_Yw_0(l_X,l_Y)Ĩ_h_X(l_X),h_Y(l_Y)
achieves the parametric MSE rate when s≥ d_X+d_Y. We denote
this estimator as GENIE^cont.
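As a concrete illustration of the construction (a simplified sketch, not the authors' reference implementation), the following Python code builds GENIE^cont for the Shannon case g(u)=-log u. It uses a product Gaussian kernel and omits the ν correction factors for readability; the weight matrix w is assumed to be precomputed, e.g. with the min_norm_weights routine sketched earlier.

```python
import numpy as np

def kde(points, data, h):
    """Product-kernel Gaussian KDE with per-dimension bandwidth vector h."""
    diffs = (points[:, None, :] - data[None, :, :]) / h          # (m, n, d)
    kern = np.exp(-0.5 * (diffs ** 2).sum(axis=2))
    return kern.mean(axis=1) / ((2 * np.pi) ** (data.shape[1] / 2) * np.prod(h))

def plugin_mi(X, Y, hX, hY):
    """Plug-in estimate (1/N) sum_i log( fXY(Zi) / (fX(Xi) fY(Yi)) )."""
    Z = np.hstack([X, Y])
    fX, fY = kde(X, X, hX), kde(Y, Y, hY)
    fXY = kde(Z, Z, np.concatenate([hX, hY]))
    return float(np.mean(np.log(fXY) - np.log(fX) - np.log(fY)))

def genie_cont(X, Y, ls_X, ls_Y, w):
    """Weighted ensemble over bandwidths h(l) = l * N^(-1/(2(dX+dY)))."""
    N, dX = X.shape
    dY = Y.shape[1]
    scale = N ** (-1.0 / (2 * (dX + dY)))
    est = 0.0
    for i, lx in enumerate(ls_X):
        for j, ly in enumerate(ls_Y):
            est += w[i, j] * plugin_mi(X, Y, np.full(dX, lx * scale),
                                       np.full(dY, ly * scale))
    return est
```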
§.§ Less Smooth Densities
The GENIE estimators GENIE and GENIE^cont
are guaranteed to achieve the parametric convergence rate as long
as s≥ d_X+d_Y. Here we derive ensemble estimators of MI
that achieve the parametric rate under less strict smoothness assumptions
on the densities. To derive the ensemble estimators for less smooth
densities, we need a different expansion of the bias that includes
some higher order terms. We present those results here for the continuous
and mixed cases and show how to apply the ensemble estimation theory
to obtain the parametric MSE convergence rate when the densities are
less smooth (s>(d_X+d_Y)/2).
§.§.§ Continuous Random Variables
We first consider the case where 𝐗 and 𝐘 are both purely continuous.
Consider the following result on the bias of the plug-in estimator:
Assume that assumptions
𝒜.0-𝒜.5 hold. Then for λ≥2 a
positive integer, the bias of 𝐆̂_h_X,h_Y is
Bias[𝐆̂_h_X,h_Y] = ∑_m,n=0; i+j+m+n≠0^λ∑_i,j=0^rc_11,i,j,m,nh_X^ih_Y^j/(Nh_X^d_X)^m(Nh_Y^d_Y)^n
+∑_m=1^λ∑_i=0^r∑_j=0^rc_13,i,j,mh_X^ih_Y^j/(Nh_X^d_Xh_Y^d_Y)^m
+O(h_X^s+h_Y^s+1/(Nh_X^d_Xh_Y^d_Y)^λ).
The proof is given in Appendix <ref>. Note that no
extra assumptions are required to achieve this result compared to
Theorem <ref>. However, we have relegated these results to the
appendix to simplify the presentation in the main paper.
We now use these results to define a new ensemble estimator. Set δ>0
and let ℒ_X and ℒ_Y be sets of real,
positive numbers with |ℒ_X|=L_X and |ℒ_Y|=L_Y.
For each estimator in the ensemble, choose l_X∈ℒ_𝒳
and l_Y∈ℒ_Y and set h_X(l_X)=l_XN^-1/(d_X+d_Y+δ)
and h_Y(l_Y)=l_YN^-1/(d_X+d_Y+δ). Then conditions
𝒞.1 and 𝒞.2 are satisfied if s≥(d_X+d_Y+δ)/2
and λ≥(d_X+d_Y+δ)/δ with ψ_1,i,j,m,n(l_X,l_Y)=l_X^i-md_Xl_Y^j-nd_Y
and ϕ_1,i,j,m,n(N)=N^-i+j+m(d_Y+δ)+n(d_X+δ)/d_X+d_Y+δ
for 0<i+j+m(d_Y+δ)+n(d_X+δ)≤d_X+d_Y+δ/2
and the terms ψ_2,i,j,m(l_X,l_Y)=l_X^i-md_Xl_Y^j-md_Y
and ϕ_2,i,j,m(N)=N^-i+j+mδ/d_X+d_Y+δ
for m≥1 and i+j+mδ≤d_X+d_Y+δ/2.
These all correspond to terms that converge to zero slower than N^-1/2
when left uncorrected. The optimal weight w_0 is again calculated
using (<ref>) and the resulting ensemble estimator achieves
the parametric MSE convergence rate when s≥(d_X+d_Y+δ)/2.
Since δ can be chosen arbitrarily close to zero, the parametric
rate can be achieved theoretically as long as s>(d_X+d_Y)/2.
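The bookkeeping of which (ψ,ϕ) pairs enter the optimization can be automated. The sketch below is our own illustration of the enumeration just described; the truncation orders r and λ (lam) are assumptions passed in by the caller.

```python
import itertools

def correction_terms(dX, dY, delta, lam, r):
    """Enumerate bias terms decaying slower than N^(-1/2) for the less-smooth
    continuous ensemble. Each entry is ((a, b), rate): psi(lx, ly) = lx^a * ly^b
    and the corresponding bias term decays as N^(-rate)."""
    D, bound, terms = dX + dY + delta, (dX + dY + delta) / 2.0, []
    for i, j, m, n in itertools.product(range(r + 1), range(r + 1),
                                        range(lam + 1), range(lam + 1)):
        e = i + j + m * (dY + delta) + n * (dX + delta)
        if 0 < e <= bound:                                   # first family
            terms.append(((i - m * dX, j - n * dY), e / D))
    for m, i, j in itertools.product(range(1, lam + 1),
                                     range(r + 1), range(r + 1)):
        if i + j + m * delta <= bound:                       # second family
            terms.append(((i - m * dX, j - m * dY), (i + j + m * delta) / D))
    return terms
```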
§.§.§ Mixed Random Variables
We now consider the case where 𝐗 and 𝐘 may
have any mixture of continuous and discrete components. We have a
similar result on the bias as in Theorem <ref>.
Here we assume that h_X_C|X_D=l_X𝐍_x^-β and h_Y_C|Y_D=l_Y𝐍_y^-α
with 0<β<1/d_X, 0<α<1/d_Y, and
l_X,l_Y>0.
Assume that the same assumptions hold as in
Theorem <ref>. Then for λ≥2 a positive
integer, the bias of 𝐆̂_h_X_C|X_D,h_Y_C|Y_D is
Bias[𝐆̂_h_X_C|X_D,h_Y_C|Y_D] =∑_m,n=0; i+j+m+n≠0^λ∑_i,j=0^rc_14,i,j,m,nl_X^il_Y^jN^-iβ-jα/(l_X^d_XN^1-β d_X)^m(l_Y^d_YN^1-α d_Y)^n
+∑_m=1^λ∑_i=0^r∑_j=0^rc_15,i,j,ml_X^il_Y^jN^-iβ-jα/(l_X^d_Xl_Y^d_YN^1-β d_X-α d_Y)^m
+O(N^-sβ+N^-sα+1/(N^1-β d_X-α d_Y)^λ).
The proof is given in Appendix <ref>.
We now use these results to define a new ensemble estimator in the
mixed case. The procedure is similar to the continuous case. Set δ>0
and let ℒ_X and ℒ_Y be sets of real,
positive numbers with |ℒ_X|=L_X and |ℒ_Y|=L_Y.
For each estimator in the ensemble, choose l_X∈ℒ_X
and l_Y∈ℒ_Y and set h_X_C|X_D(l_X)=l_X𝐍_x^-1/(d_X+d_Y+δ)
and h_Y_C|Y_D(l_Y)=l_Y𝐍_y^-1/(d_X+d_Y+δ). Conditions
𝒞.1 and 𝒞.2 are satisfied if s≥(d_X+d_Y+δ)/2
and λ≥(d_X+d_Y+δ)/δ. The first set of terms
in the optimization problem are ψ_1,i,j,m,n(l_X,l_Y)=l_X^i-md_Xl_Y^j-nd_Y
and ϕ_1,i,j,m,n(N)=N^-i+j+m(d_Y+δ)+n(d_X+δ)/d_X+d_Y+δ
for 0<i+j+m(d_Y+δ)+n(d_X+δ)≤d_X+d_Y+δ/2.
The second set of terms are ψ_2,i,j,m(l_X,l_Y)=l_X^i-md_Xl_Y^j-md_Y
and ϕ_2,i,j,m(N)=N^-i+j+mδ/d_X+d_Y+δ
for m≥1 and i+j+mδ≤d_X+d_Y+δ/2.
The optimal weight w_0 is again calculated using (<ref>)
and the resulting ensemble estimator achieves the parametric MSE convergence
rate when s≥(d_X+d_Y+δ)/2. Since δ can be chosen
arbitrarily close to zero, the parametric rate can be achieved theoretically
as long as s>(d_X+d_Y)/2.
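A minimal sketch of the corresponding mixed-case plug-in estimator for the Shannon case is given below, reusing the kde helper from the continuous sketch. It assumes scalar discrete components Xd and Yd, skips near-empty discrete cells, and omits the ensemble averaging over l_X and l_Y for brevity.

```python
import numpy as np

def mixed_plugin_mi(Xc, Xd, Yc, Yd, lX, lY, delta):
    """Plug-in MI for mixed X = (Xc, Xd), Y = (Yc, Yd), with g(u) = -log u."""
    N, dX = Xc.shape
    dY = Yc.shape[1]
    expo = -1.0 / (dX + dY + delta)     # bandwidth exponent from the text
    Zc = np.hstack([Xc, Yc])
    est = 0.0
    for x in np.unique(Xd):
        mx = Xd == x
        hX = np.full(dX, lX * mx.sum() ** expo)
        for y in np.unique(Yd):
            my = Yd == y
            cell = mx & my
            Nxy = int(cell.sum())
            if Nxy < 2:
                continue   # sketch only: tiny cells need special handling
            hY = np.full(dY, lY * my.sum() ** expo)
            # conditional marginals use all samples sharing the discrete value;
            # the conditional joint uses only the (x, y) cell
            fX = kde(Xc[cell], Xc[mx], hX)
            fY = kde(Yc[cell], Yc[my], hY)
            fXY = kde(Zc[cell], Zc[cell], np.concatenate([hX, hY]))
            ratio = (mx.mean() * my.mean()) / (Nxy / N)      # Nx*Ny / (N*Nxy)
            est += (Nxy / N) * np.mean(np.log(fXY) - np.log(fX * fY * ratio))
    return float(est)
```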
The modified estimators defined in this section have better statistical
properties than the original GENIE estimators defined in Section <ref>
and Appendix <ref> as the parametric rate is
guaranteed under less restrictive smoothness assumptions on the densities.
On the other hand, the number of parameters required for the optimization
problem in (<ref>) is larger for the modified estimator.
In theory, this could lead to larger variance although this is not
necessarily true in practice according to divergence estimation experiments
in <cit.>.
§ THE BOUNDARY CONDITION
Here we prove that under certain smoothness assumptions on a kernel
with either rectangular or circular support, the boundary assumption 𝒜.5 is
satisfied for densities with the unit cube as its support set. We will
prove the following, more general result:
Let K be a kernel function with rectangular or circular support,
i.e., K(u)=0 for ‖ u‖_1 >1 or ‖ u‖_2 >1, respectively. Let K∈Σ(s,H_K) in the interior of its support. Let p_x(u):ℝ^d→ℝ
be a polynomial in u of order |q|≤ r=⌊ s⌋
whose coefficients are a function of x and are r-|q| times differentiable.
Then for 𝒮=[0,1]^d and any positive integers t and
m, we have that
∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮K^l(u)p_x(u)du)^tdx=v_t(h),
where v_t(h) admits the expansion
v_t(h)=∑_i=1^r-|q|e_i,q,t,lh^i+o(h^r-|q|),
for some constants e_i,q,t,l.
Before proving this, we will relate this result to assumption 𝒜.5.
Assumption 𝒜.5 is satisfied under the same conditions as in Theorem <ref>, i.e., K∈Σ(s,H_K) is a kernel function with either rectangular or circular support and 𝒮=[0,1]^d.
This follows immediately from assumption 𝒜.2 that f is in the Hölder class Σ(s,H), which implies that D^q f(x) is r-|q| times differentiable. Thus equation (<ref>) implies 𝒜.5.
§.§ Proof of Theorem <ref>: Rectangular Support Kernels
We will first consider points that are boundary points due to a single coordinate and then extend to the general case where multiple coordinates are close to the boundary.
*Single Coordinate Boundary Point Since K∈Σ(s,H_K), we can take a Taylor series expansion of K^l(u) around zero to obtain
K^l(u)=p_K,l(u)+o(||u||_2^r),
where p_K,l(u) is a polynomial function of u with degree r.
Consider points x that are boundary points by virtue
of a single coordinate x_i such that x_i+u_ih∉𝒮.
Without loss of generality, assume that x_i+u_ih>1. After performing the above Taylor series expansion of K^l, the inner
integral in (<ref>) can then be evaluated first with respect to
all coordinates other than i. Since all of these coordinates lie
within the support, the inner integral over these coordinates will
amount to integration of the polynomial p_x(u) over a symmetric
d-1 dimensional rectangular region |u_j|≤ 1
for all j≠ i. This yields a function ∑_m=1^|q|+rp̃_m(x)u_i^m+o(u_i^r+1)
where the coefficients p̃_m(x) are each r-|q| times differentiable
wrt x.
With respect to the u_i coordinate, the inner integral will have
limits from 1-x_i/h to 1 for some 1>x_i>1-h.
Consider the p̃_m(x)u_i^m monomial term. The inner
integral wrt this term yields (ignoring the o(·) term for now)
∑_m=1^|q|+rp̃_m(x)∫_1-x_i/h^1u_i^mdu_i=∑_m=1^|q|+rp̃_m(x)1/m+1(1-(1-x_i/h)^m+1).
Raising the right hand side of (<ref>) to the power of t
results in an expression of the form
∑_j=0^(|q|+r)tp̌_j(x)(1-x_i/h)^j,
where the coefficients p̌_j(x) are r-|q| times differentiable
wrt x. Integrating (<ref>) over all the coordinates in
x other than x_i results in an expression of the form
∑_j=0^(|q|+r)tp̅_j(x_i)(1-x_i/h)^j,
where again the coefficients p̅_j(x_i) are r-|q| times
differentiable wrt x_i. Note that since the other coordinates
of x other than x_i are far away from the boundary, the coefficients
p̅_j(x_i) are independent of h. To evaluate the integral
of (<ref>), consider the r-|q| term Taylor series expansion
of p̅_j(x_i) around x_i=1, where we use a smooth extension of the function and its derivatives to the boundary. This will yield terms
of the form
∫_1-h^1(1-x_i)^j+k/h^kdx_i = .-(1-x_i)^j+k+1/h^k(j+k+1)|_x_i=1-h^x_i=1
= h^j+1/(j+k+1),
for 0≤ j≤ r-|q|, and 0≤ k≤ (|q|+r)t. By a similar analysis, the o(·) terms from the Taylor expansion of the kernel result in o(h^r). Combining terms results
in the expansion v_t(h)=∑_i=1^r-|q|e_i,q,t,lh^i+o(h^r-|q|).
*Multiple Coordinate Boundary Point The case where multiple coordinates of the point x are near the
boundary is a straightforward extension of the single boundary point
case so we only sketch the main ideas here. As an example, consider
the case where 2 of the coordinates are near the boundary. Assume
for notational ease that they are x_1 and x_2 and that x_1+u_1h>1
and x_2+u_2h>1. The inner integral in (<ref>)
can again be evaluated first wrt all coordinates other than 1 and
2. This yields a function ∑_m,j=1^|q|+rp̃_m,j(x)u_1^mu_2^j
where the coefficients p̃_m,j(x) are each r-|q| times
differentiable wrt x. Integrating this wrt x_1 and x_2
and then raising the result to the power of t yields a double sum
similar to (<ref>). Integrating this over all the coordinates
in x other than x_1 and x_2 gives a double sum similar
to (<ref>). Then a Taylor series expansion of the coefficients
and integration over x_1 and x_2 yields the result.
§.§ Proof of Theorem <ref>: Circular Support Kernels
The case where the kernel K has a circular support is more complex than when the support is rectangular. We will again first consider points that are boundary points due to a single coordinate. We will then extend to the general case where multiple coordinates are close to the boundary.
*Single Coordinate Boundary Point Consider points x that are
boundary points due to a single coordinate x_i s.t. x_i+u_ih∉𝒮.
Without loss of generality, assume that x_i+u_ih>1. We focus
first on the inner integral in (<ref>). We will use the
following lemma:
Let D_d(ρ) be a d-sphere with radius
ρ and let ∑_i=1^dn_i=q. Then
∫_D_d(ρ)u_1^n_1u_2^n_2… u_d^n_ddu_1… du_d=Cρ^d+q,
where C is a constant that depends on the n_is and d.
We convert to d-dimensional spherical coordinates to handle the
integration. Let r be the distance of a point u from the origin.
We have d-1 angular coordinates ϕ_i where ϕ_d-1
ranges from 0 to 2π and all other ϕ_i range from 0
to π. The conversion from the spherical coordinates to Cartesian
coordinates is then
u_1 = rcos(ϕ_1)
u_2 = rsin(ϕ_1)cos(ϕ_2)
u_3 = rsin(ϕ_1)sin(ϕ_2)cos(ϕ_3)
⋮
u_d-1 = rsin(ϕ_1)⋯sin(ϕ_d-2)cos(ϕ_d-1)
u_d = rsin(ϕ_1)⋯sin(ϕ_d-2)sin(ϕ_d-1).
The spherical volume element is then
r^d-1sin^d-2(ϕ_1)sin^d-3(ϕ_2)⋯sin(ϕ_d-2)dr dϕ_1 dϕ_2⋯ dϕ_d-1.
Combining these results gives
∫_D_d(ρ)u_1^n_1u_2^n_2… u_d^n_ddu_1… du_d
= ∫_0^ρ∫_0^2π∫_0^π⋯∫_0^πr^q+d-1[sin^q-n_1+d-2(ϕ_1)sin^q-n_1-n_2+d-3(ϕ_2)⋯sin^n_d+n_d-1+1(ϕ_d-2)sin^n_d(ϕ_d-1)][cos^n_1(ϕ_1)⋯cos^n_d-1(ϕ_d-1)]dϕ_1⋯ dϕ_d-1dr
= Cρ^q+d.
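The ρ^q+d scaling in this lemma is easy to confirm numerically. The following Monte Carlo sketch is our own check (even exponents are chosen so that the constant C is nonzero); the printed ratio is approximately constant in ρ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_moment(d, ns, rho, m=400_000):
    """MC estimate of the integral of prod_i u_i^{n_i} over the d-ball of radius rho."""
    u = rng.uniform(-rho, rho, size=(m, d))
    inside = (u ** 2).sum(axis=1) <= rho ** 2
    return (np.prod(u ** np.asarray(ns), axis=1) * inside).mean() * (2 * rho) ** d

d, ns = 3, [2, 0, 2]                       # q = 4, so the lemma predicts C * rho^7
for rho in (0.5, 1.0, 2.0):
    print(rho, sphere_moment(d, ns, rho) / rho ** (d + sum(ns)))
```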
Since K∈Σ(s,H_K), we can take a Taylor series expansion of K^l(u) around zero to obtain
K^l(u)=p_K,l(u)+o(||u||_2^r),
where p_K,l(u) is a polynomial function of u with degree r.
The region of integration for the inner integral in (<ref>)
corresponds to a hyperspherical cap with radius 1 and height
1-(1-x_i)/h. The inner integral can be calculated using an
approach similar to that used in <cit.> to calculate
the volume of a hyperspherical cap. It is obtained by integrating
the polynomial p_x(u) over a d-1-sphere with radius sinθ
and height element dcosθ. This is done using Lemma <ref>.
We then integrate over θ which has a range of 0 to ϕ=cos^-1(1-x_i/h).
Thus we have
∫_u:x+uh∉𝒮K^l(u)p_x(u)du = ∫_u:||u||_2≤1,x+uh∉𝒮(p_K,l(u)+o(||u||_2^r))p_x(u)du
= ∑_m=0^|q|+rp̃_m,l(x)∫_0^ϕsin^m+d-1(θ)sinθcos^mθ dθ+o( ∫_u:||u||_2≤1,x+uh∉𝒮||u||_2^rdu)
= ∑_m=0^|q|+rp̃_m,l(x)∫_0^ϕsin^m+d(θ)cos^mθ dθ+o( ∫_u:||u||_2≤1,x+uh∉𝒮||u||_2^rdu),
where p̃_m(x) is the polynomial coefficient corresponding to u_i^m after the Taylor series expansion of K^l and integrating over the d-1-sphere.
We will focus on the first term in Eq. <ref> as the o(·) term will follow similarly as a polynomial function of u. From standard integral tables, we get that for n≥2 and m≥0
∫_0^ϕsin^nθcos^mθ dθ=-sin^n-1ϕcos^m+1ϕ/n+m+n-1/n+m∫_0^ϕsin^n-2θcos^mθ dθ.
If n=1, then we get
∫_0^ϕsinθcos^mθ dθ=1/m+1-cos^m+1ϕ/m+1.
Since ϕ=cos^-1(1-x_i/h), we have
cosϕ = 1-x_i/h,
sinϕ = √(1-(1-x_i/h)^2).
Therefore, if n is odd, we obtain
∫_0^ϕsin^nθcos^mθ dθ=∑_ℓ=0^(n-1)/2c_ℓ(√(1-(1-x_i/h)^2))^2ℓ(1-x_i/h)^m+1+c,
where the constants depend on m and n.
If n is even and m>0, then the final term in the recursion in
(<ref>) reduces to
∫_0^ϕcos^mθ dθ=cos^m-1ϕsinϕ/m+m-1/m∫_0^ϕcos^m-2θ dθ.
If m=2, then
∫_0^ϕcos^2θ dθ = ϕ/2+1/4sin(2ϕ)
= ϕ/2+1/2sinϕcosϕ.
Therefore, if n and m are both even, then this gives
∫_0^ϕsin^nθcos^mθ dθ = ∑_ℓ=0^(n-2)/2c_ℓ^'(√(1-(1-x_i/h)^2))^2ℓ+1(1-x_i/h)^m+1+c^'cos^-1(1-x_i/h)
+∑_ℓ=0^(m-2)/2c_ℓ^”(√(1-(1-x_i/h)^2))(1-x_i/h)^2ℓ+1.
On the other hand, if n is even and m is odd, we get
∫_0^ϕsin^nθcos^mθ dθ = ∑_ℓ=0^(n-2)/2c_ℓ^”'(√(1-(1-x_i/h)^2))^2ℓ+1(1-x_i/h)^m+1
+∑_ℓ=0^(m-1)/2c_ℓ^””(√(1-(1-x_i/h)^2))(1-x_i/h)^2ℓ.
If d is odd, then combining (<ref>) and (<ref>)
with (<ref>) gives
∑_m=0^|q|+rp̃_m,l(x)∫_0^ϕsin^m+d(θ)cos^mθ dθ = ∑_m=0^|q|+r∑_ℓ=0^d+|q|p_m,ℓ,l(x)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m,
where the coefficients p_m,ℓ,l(x) are r-|q| times differentiable
wrt x. Similarly, if d is even, then
∑_m=0^|q|+rp̃_m,l(x)∫_0^ϕsin^m+d(θ)cos^mθ dθ = ∑_m=0^|q|+r∑_ℓ=0^d+|q|p_m,ℓ,l^'(x)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m
+p^'(x)cos^-1(1-x_i/h),
where again the coefficients p_m,ℓ,l^'(x) and p^'(x)
are r-|q| times differentiable wrt x. Raising (<ref>)
and (<ref>) to the power of t gives respective expressions
of the form
∑_m=0^(|q|+r)t∑_ℓ=0^(d+|q|)tp̌_m,ℓ,l(x)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m,
∑_m=0^(|q|+r)t∑_ℓ=0^(d+|q|)t∑_n=0^tp̌_m,ℓ,l,n(x)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m(cos^-1(1-x_i/h))^n,
where the coefficients p̌_m,ℓ,l(x) and p̌_m,ℓ,l,n(x)
are all r-|q| times differentiable wrt x. Integrating (<ref>)
and (<ref>) over all the coordinates in x except
for x_i affects only the p̌_m,ℓ,l(x) and p̌_m,ℓ,l,n(x)
coefficients, resulting in respective expressions of the form
∑_m=0^|q|t∑_ℓ=0^(d+|q|)tp̅_m,ℓ,l(x_i)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m,
∑_m=0^|q|t∑_ℓ=0^(d+|q|)t∑_n=0^tp̅_m,ℓ,l,n(x_i)(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m(cos^-1(1-x_i/h))^n.
The coefficients p̅_m,ℓ,l(x_i) and p̅_m,ℓ,l,n(x_i)
are r-|q| times differentiable wrt x_i. Since the other coordinates
of x other than x_i are far away from the boundary, the coefficients
are independent of h. For the integral wrt x_i of (<ref>),
taking a Taylor series expansion of p̅_m,ℓ(x_i) around
x_i=1 (again using a smooth extension of the function and its derivatives to the boundary) yields terms of the form
∫_1-h^1(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m+jh^jdx_i = h^j+1∫_0^1(1-y_i)^ℓ/2y_i^m+j-1/2dy_i
= h^j+1B(ℓ+2/2,m+j+1/2),
where 0≤ j≤ r-|q|, 0≤ℓ≤(d+|q|)t, 0≤ m≤ |q|t,
and B(x,y) is the beta function. Note that the first step uses
the substitution of y_i=(1-x_i/h)^2.
If d is even (i.e. (<ref>)), a simple closed-form
expression is not easy to obtain due to the cos^-1(1-x_i/h)
terms. However, by similarly applying a Taylor series expansion to
p̅_m,ℓ,l,n(x_i) and substituting y_i=1-x_i/h
gives terms of the form of
∫_1-h^1(√(1-(1-x_i/h)^2))^ℓ(1-x_i/h)^m+j(cos^-1(1-x_i/h))^nh^jdx_i
= h^j+1∫_0^1(1-y_i^2)^ℓ/2y_i^m+j(cos^-1y_i)^ndy_i
= h^j+1c_ℓ,m,j,n,
for 0≤ j≤ r-|q|, 0≤ℓ≤(d+|q|)t, 0≤ m≤ (|q|+r)t,
and 0≤ n≤ t. By a similar analysis, the o(·) term in Eq. <ref> results in o(h^r). Combining terms results in the expansion v_t(h)=∑_i=1^r-|q|e_i,q,t,lh^i+o(h^r-|q|).
*Multiple Coordinate Boundary Point
The case where multiple coordinates
of the point x are near the boundary is a fairly straightforward
extension of the single boundary point case. Consider the case where
2 of the coordinates are near the boundary, e.g., x_1 and x_2
with x_1+u_1h>1 and x_2+u_2h>1. The region of integration
for the inner integral can be decomposed into two parts: a hyperspherical
cap wrt x_1 and the remaining area (denoted, respectively, as
A_1 and A_2). The remaining area A_2 can be decomposed
further into two other areas: a hyperspherical cap wrt x_2 (denoted
B_1) and a height chosen s.t. B_1 just intersects A_1
on their boundaries. Integrating over the remainder of A_2 is
achieved by integrating along x_2 over d-1-dimensional hyperspherical
caps from the boundary of B_1 to the boundary of A_2. Thus
integrating over these regions yields an expression similar to (<ref>).
Following a similar procedure will then yield the result.
§.§ Proof of Proposition <ref>
Recall that we assume that the derivatives of the density f vanish near the boundary of the density support set. We will use the following lemma, which is modified from equation (8) in Singh and Poczos <cit.>:
<cit.> Let the density support set 𝒮 be the unit cube with d≥2 and let the derivatives of the density f vanish near the boundary of the support set, which is denoted as ∂𝒮. Assume that f belongs to the Hölder set of order s, i.e. f∈Σ(H,s). Assume that q with 0≤|q|≤ r and x∈ℬ:={x∈𝒮|dist(x,∂𝒮)≤ h }. Then if h≤ 1/2,
| D^q f(x)|≤ Hh^s-|q|.
We include a proof here for completeness. We will use an induction argument on |q| as |q| decreases from r to 0. Let y∈∂𝒮 such that ||x-y||≤ h. For |q|=r, we have
|D^qf(x)| =|D^qf(x)-D^qf(y)|
≤ H||x-y||^s-|q|
≤ Hh^s-|q|,
where the first step comes from the fact that D^qf(y)=0, the second step uses the Hölder condition, and the last step follows from the choice of y. Suppose we have the desired bound for derivatives of order |q|+1. Let x∈ℬ and let u=(0,…,0,± 1,0,…,0)∈ℝ^d, where u_j=± 1 for some j∈[d]. Then we can write x=y+hu for some y∈ℬ. Furthermore, the point y+tu∈ℬ for t∈[0,h]. To see how this is true, note that if x∈ℬ then there exists at least one coordinate x_i such that either 0≤ x_i≤ h or 1-h≤ x_i≤ 1. Pick any other coordinate x_j. Then choose the sign of u_j such that y=x-hu∈𝒮. This is possible since h≤ 1/2. Since y is constructed by moving from x∈ℬ in parallel with the boundary, then y∈ℬ as well as y+tu∈ℬ.
Based on this construction of y and u, we have
|D^q f(y+hu)| ≤∫_0^h | ∂/∂ x_jD^q f(y+tu)|dt
≤∫_0^h Hh^s-(|q|+1)dt
=Hh^s-|q|.
The desired result then follows by induction on |q|.
From Lemma <ref>,
|∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮K^l(u)u^q D^q f(x) du)^tdx| ≤(Hh^s-|q|||K||_∞)^t| ∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮u^q du)^tdx|.
Consider the case when K has rectangular support from (without loss of generality) -1 to 1 in each dimension. We consider the setting when x is close to the 1 boundary in a single coordinate x_i. The inner integral in (<ref>) then reduces to
(C∫_1-x_i/h^1 u_i^m du_i)^t = (C/m+1(1-(1-x_i/h)^m+1))^t
= (C/m+1)^t∑_j=0^t (-1)^j([ t; j ])(1-x_i/h)^(m+1)j,
where C is a constant that comes from integrating the other terms of u^q and m≤ |q| is the exponent of u_i in u^q. Based on equation (<ref>), taking the integral of this result with respect to x will then yield terms of the form of
h^j+1 for j≥ 0. The extension to the case with multiple boundary coordinates is similar to that in the proof of Theorem <ref>.
Combining this with the results in equations (<ref>) and (<ref>) gives that
|∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮K^l(u)u^q D^q f(x) du)^tdx|=O(h^s-|q|+1)=o(h^r-|q|).
Therefore, assumption 𝒜.5 is satisfied with the coefficients in the expansion being equal to zero.
The proof for the circular support kernel follows similar arguments also adapted from the proof of Theorem <ref>.
§.§ Assumption 𝒜.5 and the Truncated Gaussian Distribution
Here we show that the truncated Gaussian distribution satisfies assumption 𝒜.5. For simplicity, we will consider the univariate case for s=2 and the uniform kernel. We restrict the standard normal distribution to the interval [-1,1]. We focus on the +1 boundary as the -1 boundary follows a similar procedure. This gives
∫_x∈𝒮(∫_u:K(u)>0, x+uh∉𝒮K^l(u)u^q D^q f(x) du)^tdx =∫_1-h^1(∫_1-x/h^1 d^q/dx^qf(x)u^q du)^tdx
= ∫_1-h^1( (1-(1-x/h)^q+1)d^q/dx^qf(x)/q+1)^tdx.
We can extend the derivatives of the function f to the boundary by simply using the derivatives prior to truncation. By a Taylor series expansion around x=1, we have
f(x) =f(1)+f'(1)(1-x)+f”(1)/2(1-x)^2+o((1-x)^2),
d/dxf(x) =f'(1)+f”(1)(1-x)+o(1-x),
d^2/dx^2f(x) =f”(1)+o(1).
Therefore, the integral with respect to x in (<ref>) has terms of the form of (1-x)^j+k/h^k which integrates to h^j+1/j+k+1 (see (<ref>)). Combining these results gives the desired expansion. As an example, suppose t=1 and q=1. Then (<ref>) becomes
1/2∫_1-h^1(1-(1-x/h)^2)(f'(1)+f”(1)(1-x)+o(1-x))dx.
Distributing the expression for d/dxf(x) through and evaluating the integrals separately gives
1/2∫_1-h^1(f'(1)+f”(1)(1-x)+o(1-x))dx=1/2(f'(1)h+f”(1)h^2/2)+o(h^2),
1/2∫_1-h^1(1-x/h)^2(f'(1)+f”(1)(1-x)+o(1-x))dx=1/2( f'(1)h/3+f”(1)h^2/4)+o(h^2).
Taking the difference between these expressions gives the final expansion:
f'(1)h/3+f”(1)h^2/8+o(h^2).
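This final expansion can be checked numerically; the sketch below is our own verification using scipy. Note that f''(1)=0 for the truncated standard normal (since φ''(x)=(x^2-1)φ(x)), so the agreement here is in fact O(h^3).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

Zc = norm.cdf(1) - norm.cdf(-1)                 # truncation constant
fp = lambda x: -x * norm.pdf(x) / Zc            # f'(x) for the truncated density
fpp = lambda x: (x**2 - 1) * norm.pdf(x) / Zc   # f''(x)

for h in (0.2, 0.1, 0.05):
    inner = lambda x: 0.5 * (1 - ((1 - x) / h) ** 2) * fp(x)
    exact, _ = quad(inner, 1 - h, 1)
    approx = fp(1) * h / 3 + fpp(1) * h**2 / 8
    print(h, exact, approx)
```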
§ PROOF OF THEOREM <REF> (CONTINUOUS BIAS)
Here we prove the results shown in Theorem <ref>,
which includes the results for Theorem <ref>. In this section
and throughout all other proofs, let Z[·] denote the conditional
expectation given 𝐙.
The bias of is
[]=[g( X(𝐗) Y(𝐘)ν_1ν_2/ Z(𝐗,𝐘)ν_3)-g(f_X(𝐗)f_Y(𝐘)ν_1ν_2/f_XY(𝐗,𝐘)ν_3)],
where 𝐗 and 𝐘 are drawn jointly from f_XY.
We will derive an expression for this in terms of the bandwidths by
applying Taylor series expansions to both the functional g and
the densities.
With a slight abuse of notation, define g(t_1,t_2)=g(t_1/t_2). Define
the following terms:
Z^q(𝐙) =ν_3^q( Z(𝐙)-f_XY(𝐗,𝐘))^q,
XY^q(𝐙) =(ν_1ν_2)^q( X(𝐗) Y(𝐘)-f_X(𝐗)f_Y(𝐘))^q.
Then the Taylor series expansion of g( X(𝐗) Y(𝐘)ν_1ν_2/ Z(𝐗,𝐘)ν_3)
around f_X(𝐗)f_Y(𝐘)ν_1ν_2 and f_XY(𝐗,𝐘)ν_3
gives
g( X(𝐗) Y(𝐘)ν_1ν_2/ Z(𝐗,𝐘)ν_3)=∑_i=0^∞∑_j=0^∞(.∂^i+jg(t_1,t_2)/∂ t_1^i∂ t_2^j|_[ t_1=f_X(𝐗)f_Y(𝐘)ν_1ν_2; t_2=f_XY(𝐗,𝐘)ν_3 ]) Z^j(𝐙) XY^i(𝐙)/i!j!.
To simplify this, we focus on the terms in (<ref>). Note
that if ν_i=1, then the terms in (<ref>) are unaffected.
For other values, ν_i^j decreases to zero as j→∞
since 0<ν_i<1.
By the binomial theorem, we obtain
Z^q(𝐙) =ν_3^q∑_k=0^q( Z(𝐙))^k(f_XY(𝐗,𝐘))^q-k(-1)^q-k,
XY^q(𝐙) =(ν_1ν_2)^q∑_k=0^q( X(𝐗) Y(𝐘))^k(f_X(𝐗)f_Y(𝐘))^q-k(-1)^q-k.
To derive the bias, we will take the conditional expectation given
of these terms.
Since we are not doing explicit boundary correction, we need to consider
separately the cases when 𝐙 is in the interior of the
support 𝒮_X×𝒮_Y and when 𝐙
is close to the boundary of the support. For precise definitions,
a point Z=(X,Y)∈𝒮_X×𝒮_Y is in the
interior of 𝒮_X×𝒮_Y if for all Z^'∉𝒮_X×𝒮_Y,
K_X(X-X^'/h_X)K_Y(Y-Y^'/h_Y)=0,
and a point Z∈𝒮_X×𝒮_Y is near the
boundary of the support if it is not in the interior.
§.§ Interior Points
We first consider the case where 𝐙=(𝐗,𝐘)
is drawn from f_XY in the interior of 𝒮_X×𝒮_Y.
It can be shown (see <cit.>) by Taylor series expansions
of the probability densities that
X[ X(𝐗)] = f_X(𝐗)+∑_j=1^⌊ s/2⌋c_X,j(𝐗)h_X^2j+O(h_X^s),
Y[ Y(𝐘)] = f_Y(𝐘)+∑_j=1^⌊ s/2⌋c_Y,j(𝐘)h_Y^2j+O(h_Y^s),
[ Z(𝐙)] = f_XY(𝐗,𝐘)+∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j(𝐗,𝐘)h_X^2ih_Y^2j+O(h_X^s+h_Y^s).
The constants in the above expressions are independent of the bandwidths
h_X and h_Y and only depend on the densities and their derivatives.
We will also need an expression for Z[ X(𝐗) Y(𝐘)]:
Z[ X(𝐗) Y(𝐘)] =[1/N^2h_X^d_Xh_Y^d_Y∑_i=1^N∑_j=1^NK_X(𝐗-𝐗_i/h_X)K_Y(𝐘-𝐘_j/h_Y)]
= Z[1/N^2h_X^d_Xh_Y^d_Y∑_i=1^NK_X(𝐗-𝐗_i/h_X)K_Y(𝐘-𝐘_i/h_Y)]
+ Z[1/N^2h_X^d_Xh_Y^d_Y∑_i,j=1; i≠ j^NK_X(𝐗-𝐗_i/h_X)K_Y(𝐘-𝐘_j/h_Y)]
=1/N Z[ Z(𝐙)]+N^2-N/N^2 X[ X(𝐗)] Y[ Y(𝐘)]
=N^2-N/N^2f_X(𝐗)f_Y(𝐘)+∑_i,j=0; i+j≠0^⌊ s/2⌋c_X,Y,i,j(𝐗,𝐘)h_X^2ih_Y^2j+O(h_X^s+h_Y^s+1/N),
where we have used the fact that 𝐙_i and 𝐙_j are independent
when i≠ j.
We will also need expressions for the conditional expectation of powers
of the KDEs to simplify (<ref>). Consider first ( Z(𝐙))^2.
Note that
( Z(𝐙))^2=1/N^2h_X^2d_Xh_Y^2d_Y∑_i=1^N∑_j=1^NK_X(-_i/h_X)K_Y(-_i/h_Y)K_X(-_j/h_X)K_Y(-_j/h_Y).
The above double sum can be split into two cases: i=j (N terms)
and i≠ j (N^2-N terms). When i=j, we have by Taylor
series expansions of the joint density f_XY:
1/Nh_X^2d_Xh_Y^2d_Y Z[K_X^2(-_i/h_X)K_Y^2(-_i/h_Y)] =1/Nh_X^d_Xh_Y^d_Y∑_i=0^⌊ s/2⌋∑_j=0^⌊ s/2⌋c_XY,i,j,1(𝐗,𝐘)h_X^2ih_Y^2j+O(h_X^s+h_Y^s).
When i≠ j, recall that 𝐙_i is independent of 𝐙_j.
Therefore, we obtain for these terms
N^2-N/N^2h_X^2d_Xh_Y^2d_Y[K_X(-_i/h_X)K_Y(-_i/h_Y)K_X(-_j/h_X)K_Y(-_j/h_Y)]
=N^2-N/N^2 Z[ Z(𝐙)]^2.
Combining these results gives
Z[( Z(𝐙))^2] =N^2-N/N^2f_XY(𝐗,𝐘)^2+∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j,2(𝐗,𝐘)h_X^2ih_Y^2j
+∑_i,j=0^⌊ s/2⌋c_XY,i,j,3(𝐗,𝐘)h_X^2ih_Y^2j/(Nh_X^d_Xh_Y^d_Y)+O(h_X^s+h_Y^s).
For the cross term (k=1) in (<ref>) when q=2, we
obtain
Z[ Z(𝐙)]f_XY(𝐗,𝐘)=f_XY(𝐗,𝐘)^2+f_XY(𝐗,𝐘)∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j(𝐗,𝐘)h_X^2ih_Y^2j+O(h_X^s+h_Y^s).
Combining these results gives
Z[ Z^2(𝐙)] =∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j,4(𝐗,𝐘)h_X^2ih_Y^2j+∑_i,j=0^⌊ s/2⌋c_XY,i,j,5(𝐗,𝐘)h_X^2ih_Y^2j/(Nh_X^d_Xh_Y^d_Y)
-f_XY(𝐗,𝐘)^2/N+O(h_X^s+h_Y^s).
By following similar procedures, it can be shown that for q≥2
Z[ Z^q(𝐙)] =∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j,q,1(𝐗,𝐘)h_X^2ih_Y^2j+∑_i,j=0^⌊ s/2⌋c_XY,i,j,q,2(𝐗,𝐘)h_X^2ih_Y^2j/(Nh_X^d_Xh_Y^d_Y)^q-1
+O(1/N^q-1+h_X^s+h_Y^s).
In particular, the terms related to f_XY(𝐗,𝐘)^q all combine
to be O(1/N). In the q=2 example above, we end up with
f_XY(𝐗,𝐘)^2-2f_XY(𝐗,𝐘)^2+N^2-N/N^2f_XY(𝐗,𝐘)^2=-f_XY(𝐗,𝐘)^2/N.
As another example, for q=3, we end up with
(N(N-1)(N-2)/N^3-3(N^2-N)/N^2+3-1)f_XY(𝐗,𝐘)^3 =(-3N+2+3N)/N^2f_XY(𝐗,𝐘)^3
=O(1/N^2).
A similar pattern holds for higher values of q.
Then by following a similar process, we obtain for q≥2
Z[ XY^q(𝐙)] =∑_i,j=0; i+j≠0^⌊ s/2⌋c_XY,i,j,q,3(𝐗,𝐘)h_X^2ih_Y^2j+∑_i,j=0^⌊ s/2⌋c_XY,i,j,q,4(𝐗,𝐘)h_X^2ih_Y^2j/(Nh_X^d_X)^q-1
+∑_i,j=0^⌊ s/2⌋c_XY,i,j,q,5(𝐗,𝐘)h_X^2ih_Y^2j/(Nh_Y^d_Y)^q-1+O(1/N^q-1+h_X^s+h_Y^s).
§.§ Points Near the Boundary
For a point near the boundary of the support, we extend the expectation
beyond the support of the density. As an example if 𝐗
is near the boundary of 𝒮_X, then we get
X[ X(𝐗)]-f_X() =1/h_X^d_X∫_V:V∈𝒮_XK_X(𝐗-V/h_X)f_X(V)dV-f_X(𝐗)
=[1/h_X^d_X∫_V:K_X(𝐗-V/h_X)>0K_X(𝐗-V/h_X)f_X(V)dV-f_X(𝐗)]
-[1/h_X^d_X∫_V:V∉𝒮_XK_X(𝐗-V/h_X)f_X(V)dV]
=T_1,X(𝐗)-T_2,X(𝐗).
Note that we are technically evaluating the density f_X at points
outside of the support in T_1,X(). However, to obtain an expression
for this integral, we take a Taylor series expansion of f_X at
the point which is inside the support. Thus the exact manner
in which we define the extension of f_X does not matter as long
as the Taylor series remains the same and as long as the extension
is smooth. Thus the expected value of T_1,X() gives an expression
similar to that of the interior point case in (<ref>).
For the T_2,X(𝐗) term, we can use multi-index notation
on the expansion of f_X to show that
T_2,X(𝐗) =[1/h_X^d_X∫_V:V∉𝒮_XK_X(𝐗-V/h_X)f_X(V)dV]
=∫_u:h_Xu+𝐗∉𝒮_X,K_X(u)>0K_X(u)f_X(𝐗+h_Xu)du
=∑_|α|≤ rh_X^|α|/α!∫_u:h_Xu+𝐗∉𝒮_X,K_X(u)>0K_X(u)D^αf_X(𝐗)u^αdu+o(h_X^r).
Then since the |α|th derivative of f_X is r-|α|
times differentiable, we apply the condition in assumption 𝒜.5
to obtain
[T_2,X(𝐗)]=∑_i=1^re_ih_X^i+o(h_X^r).
Similar expressions can be obtained for Y, Z, and the
product Y X.
The above results consider Z^q(𝐙) and XY^q(𝐙)
for q=1. We now consider when q≥2. We follow a similar procedure
where we extend the density beyond the support, but only evaluate
the densities and their derivatives at points within the support.
Thus by the binomial theorem, we can write
Z[ Z^q(𝐙)] =ν_3^q∑_k=0^q Z[( Z(𝐙))^k](f_XY(𝐗,𝐘))^q-k(-1)^q-k
=ν_3^q∑_k=0^q Z[( Z(𝐙))^k]_extended(f_XY(𝐗,𝐘))^q-k(-1)^q-k
-ν_3^q∑_k=1^q Z[( Z(𝐙))^k]_outside(f_XY(𝐗,𝐘))^q-k(-1)^q-k
=T_1,q,Z()-T_2,q,Z().
As before, T_1,q,Z() corresponds to the case where we have
extended the density beyond the support and results in terms of the
form in (<ref>). T_2,q,Z() corresponds to the case where
we integrate outside of the boundary. The additional powers applied
to the KDE simply result in terms with the kernel raised to a power
or Z[ Z(𝐙)] raised to a power. By applying
assumption 𝒜.5, we obtain [T_2,q,X(𝐗)]=∑_i=1^r∑_j=1^re_q,i,jh_X^ih_Y^j+o(h_X^r+h_Y^r).
Similar results are obtained for XY^q(𝐙).
Combining the results for the interior points and points near the
boundary completes the proof.
§ PROOF OF THEOREM <REF> (CONTINUOUS VARIANCE)
Here we prove Theorem <ref>. The
proof uses the Efron-Stein inequality <cit.>:
Let 𝐗_1,…,𝐗_n,𝐗_1^',…,𝐗_n^'
be independent random variables on the space 𝒮. Then
if f:𝒮×…×𝒮→ℝ,
we have that
Var[f(𝐗_1,…,𝐗_n)]≤1/2∑_i=1^n𝔼[(f(𝐗_1,…,𝐗_n)-f(𝐗_1,…,𝐗_i^',…,𝐗_n))^2].
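For intuition, the bound is tight for the sample mean, where resampling one coordinate changes f by (𝐗_1-𝐗_1^')/n. A quick simulation (our own illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 50, 100_000
X = rng.exponential(1.0, size=(trials, n))
f = X.mean(axis=1)
Xp = X.copy()
Xp[:, 0] = rng.exponential(1.0, size=trials)   # resample one coordinate
delta2 = (f - Xp.mean(axis=1)) ** 2
# variance of f vs. the Efron-Stein bound (n identical coordinate terms)
print(f.var(), 0.5 * n * delta2.mean())        # both are ~ 1/n = 0.02
```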
In this case we consider the samples {𝐙_1,…,𝐙_N}
and {𝐙_1^',_2…,𝐙_N}
and the respective estimators and ^'. By the triangle
inequality,
|-^'| ≤ 1/N|g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^')ν_3)|
+1/N∑_j=2^N|g( X(𝐗_j) Y(𝐘_j)ν_1ν_2/ Z(𝐗_j,𝐘_j)ν_3)-g( X^'(𝐗_j) Y^'(𝐘_j)ν_1ν_2/ Z^'(𝐗_j,𝐘_j)ν_3)|.
By the Lipschitz condition on g, the first term in (<ref>)
can be decomposed into terms of the form of
ν_3| Z(𝐙_1)- Z(𝐙_1^')|,
ν_1ν_2| X(𝐗_1) Y(𝐘_1)- X(𝐗_1^') Y(𝐘_1^')|.
For the Z(𝐙_1) term, we first apply Jensen's inequality:
[| Z(𝐙_1)- Z(𝐙_1^')|^2] =[1/N^2h_X^2d_Xh_Y^2d_Y(∑_i=1^N(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_i/h_Y)-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_i/h_Y)))^2]
≤1/Nh_X^2d_Xh_Y^2d_Y∑_i=1^N[(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_i/h_Y)-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_i/h_Y))^2].
By making the substitutions 𝐮_i=𝐗_1-𝐗_i/h_X,
𝐯_i=𝐘_1-𝐘_i/h_Y, 𝐮_i^'=𝐗_1^'-𝐗_i/h_X,
and 𝐯_i^'=𝐘_1^'-𝐘_i/h_Y in the expectation,
we obtain
[1/h_X^2d_Xh_Y^2d_Y(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_i/h_Y)-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_i/h_Y))^2]
=1/h_X^2d_Xh_Y^2d_Y∫(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_i/h_Y)-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_i/h_Y))^2f_Z(𝐙_i)f_Z(𝐙_1^')f_Z(𝐙_1)d𝐙_id𝐙_1d𝐙_1^'
≤2||K_X· K_Y||_∞^2.
This gives
[ν_3^2| Z(𝐙_1)- Z(𝐙_1^')|^2]≤2||K_X· K_Y||_∞^2,
where we have used the fact that ν_3≤1.
For the product of the marginal KDEs, we have that
X(𝐗_1) Y(𝐘_1) = 1/N^2h_X^d_Xh_Y^d_Y∑_i=2^N∑_j=2^NK_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_j/h_Y)
= 1/N Z(𝐙_1)+1/N^2h_X^d_Xh_Y^d_Y∑_i≠ jK_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_j/h_Y).
By applying the triangle inequality, Jensen's inequality, and similar
substitutions, we get
[ν_1^2ν_2^2| X(𝐗_1) Y(𝐘_1)- X(𝐗_1^') Y(𝐘_1^')|^2] ≤ [2/N^2| Z(𝐙_1)- Z(𝐙_1^')|^2]
+2(N-1)/N^3h_X^2d_Xh_Y^2d_Y×
∑_i≠ j[(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_j/h_Y)..
..-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_j/h_Y))^2]
≤ 4+2(N-1)^2/N^2||K_X· K_Y||^2.
For the second term in (<ref>), it can be shown that
(see <cit.>)
[ν_3^2| Z(𝐙_i)- Z^'(𝐙_i)|^2] = ν_3^2/N^2h_X^2d_Xh_Y^2d_Y[(K_X(𝐗_1-𝐗_i/h_X)K_Y(𝐘_1-𝐘_j/h_Y)..
..-K_X(𝐗_1^'-𝐗_i/h_X)K_Y(𝐘_1^'-𝐘_j/h_Y))^2]
≤ 2||K_X· K_Y||_∞^2/N^2.
By a similar approach,
X(𝐗_i) Y(𝐘_i)- X^'(𝐗_i) Y^'(𝐘_i)
= Z(𝐙_i)- Z^'(𝐙_i)+1/N^2h_X^d_Xh_Y^d_Y(∑_n=2; n≠ i^NK_Y(𝐘_i-𝐘_n/h_Y)(K_X(𝐗_i-𝐗_1/h_X)-K_X(𝐗_i-𝐗_1^'/h_X))
+∑_n=2; n≠ i^NK_X(𝐗_i-𝐗_n/h_X)(K_Y(𝐘_i-𝐘_1/h_Y)-K_Y(𝐘_i-𝐘_1^'/h_Y))),
[ν_1^2ν_2^2| X(𝐗_i) Y(𝐘_i)- X^'(𝐗_i) Y^'(𝐘_i)|^2]≤6||K_X· K_Y||_∞^2(1/N^2+(N-2)^2/N^4).
We can then apply the Cauchy Schwarz inequality to bound the square
of the second term in (<ref>) to get
[(∑_i=2^N|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)|)^2]≤14C_g^2||K_X· K_Y||_∞^2.
Applying Jensen's inequality in conjunction with these results gives
[|-^'|^2]≤44C_g^2||K_X· K_Y||_∞^2/N^2.
Applying the Efron-Stein inequality finishes the proof.
§ PROOF OF MINIMAX RATES (THEOREM <REF>)
Here we present a proof of the minimax lower
bound on MI estimation convergence rates given in Theorem <ref>.
We follow a similar approach to that given in <cit.>,
which uses the standard approach of Le Cam's method <cit.>.
However, the minimax theory previously derived in these references
are not directly applicable to the MI estimation setting. On the one
hand, MI estimation could be considered to be similar to the entropy
estimation problem as MI is technically a functional of just the joint
distribution as the marginals are derived from the joint. However,
this viewpoint results in a very complicated functional of the joint
distribution, and it is not at all obvious if the theory derived in
<cit.> is applicable. On the other hand, MI estimation
could be viewed as a divergence estimation problem between the joint
and the product of marginals. However, perturbing the joint density,
as is done in Le Cam's method, should result in a perturbation of
the marginals as well. This is not accounted for in existing approaches
for the divergence estimation problem <cit.>.
Therefore, we tailor Le Cam's method directly to the MI estimation
case to avoid these problems.
Le Cam's method reduces the estimation problem to a testing problem
which can then be used to characterize the minimax rate. We will need
the squared Hellinger distance between two densities p and q,
which is defined as
H^2(P,Q)=∫(√(p(x,y))-√(q(x,y)))^2dxdy.
For this proof, we present a shift in notation for clarity. We can
write
I(𝐗;𝐘)=I(f_XY,f_Xf_Y).
We now present Le Cam's method adapted to the continuous MI estimation
setting:
Let I be a functional defined on some subset
of a parameter space Θ×Θ which contains (f_XY,f_Xf_Y)
and (f_XY,λ,f_Xf_Y) for all λ in
some index set Λ. Denote the distributions of f_XY,f_Xf_Y,
and f_XY,λ as F_XY, F_XF_Y, and F_XY,λ,
respectively. Define F̅_XY^N=1/|Λ|∑_λ∈ΛF_XY,λ^N.
Consider the following two conditions for γ<2 and β>0:
(i) H^2(F̅_XY^N×(F_XF_Y)^N,F_XY^N×(F_XF_Y)^N)≤γ<2,
(ii) I(f_XY,f_Xf_Y)≥2β+I(f_XY,λ,f_Xf_Y) ∀λ∈Λ
OR I(f_XY,λ,f_Xf_Y)≥2β+I(f_XY,f_Xf_Y) ∀λ∈Λ.
Then
inf_Ĝ_Nsup_f_XY∈Θ[|Ĝ_N-I(f_XY,f_Xf_Y)|>β]≥1/2(1-√(γ(1-γ/4))).
Lemma <ref> is nearly identical to Lemma 7 in Krishnamurthy
et al. <cit.> which is itself a modification
of Theorem 2.2 in Tsybakov <cit.>. The
main change is the addition of the second option to condition (ii).
Thus the only required addition to the proof is to check that the
second option is indeed sufficient. Since we consider the probability
of the absolute value of Ĝ_N-I(f_XY,f_Xf_Y)
being greater than β, it is clear that following the original
proof with the second option will give the same result as the first
option in condition (ii).
Note that we are only required to perturb the joint density f_XY
to derive the minimax rate. In general, perturbing the joint density
will result in perturbed marginal densities as well. However, the
following Lemma shows that it is possible to construct perturbation
functions that do not affect the marginal densities.
Let R_1,…,R_ℓ be a partition of
[0,1]^d_X+d_Y, each being cubes with side length ℓ^-1/(d_X+d_Y).
Then there exists functions u_1,…,u_ℓ such that,
supp(u_j)⊂{ z|B(z,ϵ)⊂ R_j} ,
∫ u_j^2(x,y)dxdy∈Θ(ℓ^-1),
∫ u_j(x,y)dx=∫ u_j(x,y)dy=0,
∫ g'(f_X(x)f_Y(y)/f_XY(x,y))f_X(x)f_Y(y)u_j(x,y)/f_XY(x,y)dxdy=0,
‖ D^ru_j‖ _∞≤ℓ^r/(d_X+d_Y) ∀ r s.t. ∑_jr_j≤ s+1,
where B(z,ϵ) denotes an L_2 ball around z=[[ x; y ]] with radius ϵ∈(0,1).
Let ϵ>0. We can construct two orthonormal systems of q>3
functions. Construct the first system on [0,1]^d_X such that
ϕ_X,1=1, supp(ϕ_X,j)⊂[ϵ,1-ϵ]^d_X,
and ‖ D^rϕ_X,j‖ _∞≤ J_X<∞
for all j. The second system is constructed on [0,1]^d_Ysuch
that ϕ_Y,1=1, supp(ϕ_Y,j)⊂[ϵ,1-ϵ]^d_Y,
and ‖ D^rϕ_Y,j‖ _∞≤ J_Y<∞
for all j. We can then construct a combined orthonormal system
ϕ_i,j=ϕ_X,iϕ_Y,j which has q^2 functions. It
is clear that this is an orthonormal system since ∫ϕ_i,j(x,y)ϕ_m,n(x,y)dxdy=1
when i=m and j=n and zero otherwise. Also, ϕ_1,1=1,
supp(ϕ_i,j)⊂[ϵ,1-ϵ]^d_X+d_Y,
and there exists some J<∞ such that ‖ D^rϕ_i,j‖ _∞≤ J
for all i,j.
Now for any given function f∈ L_2([0,1]^d_X+d_Y),
we can find a unit-normed function v∈span({ϕ_i,j})
such that v⊥ϕ_1,i for all i, v⊥ϕ_i,1 for
all i, and v⊥ f. We can write v=∑_i,j=1^qa_ib_jϕ_i,j.
Then D^rv=∑_i,j=1^qa_ib_jD^rϕ_i,j which implies
that
‖ D^rv‖ _∞≤ J∑_i,j|a_ib_j|≤ Jq,
where the last inequality comes from the fact that v is unit-normed.
Now define ν=1/Jqv. Then clearly ∫ν^2(x,y)dxdy
is upper and lower bounded and we have that ‖ D^rν‖ _∞≤1.
We can now construct the functions u_j. First map R_j to
[0,1]^d_X+d_Yby scaling it appropriately. Then set
u_j(x,y)=ν(ℓ^1/(d_X+d_Y)([[ x; y ]]-𝐣)),
where 𝐣 is the point in R_j that is mapped to 0
after scaling. This maps [0,1]^d_X+d_Y back to R_j while
inheriting the properties derived from the construction of ν.
Now let f be g'(f_X(x)f_Y(y)/f_XY(x,y))f_X(x)f_Y(y)/f_XY(x,y)
constrained to R_j and scaled to fit [0,1]^d_X+d_Y.
Conditions 1, 3, and 4 above are then fulfilled by construction. Also
∫_R_ju_j^2(x,y)dxdy=1/ℓ∫ν^2(x,y)dxdy∈Θ(ℓ^-1),
which is condition 2 above. It is also clear that ‖ D^ru_j‖ _∞≤ℓ^r/(d_X+d_Y),
which is condition 5, completing the proof.
We now can prove Theorem <ref>. We will construct the
conditions necessary to apply Lemma <ref>. Apply Lemma <ref>
to obtain an index set Λ̃={-1,1}^ℓ and functions
u_1,…,u_ℓ. Define the following set of perturbed functions
around f_XY:
Λ={ f_XY,λ=f_XY+H_1∑_j=1^ℓλ_ju_j|λ_j∈Λ̃} .
This will form our set of alternatives. Due to the fact that ∫ u_j(x,y)dx=∫ u_j(x,y)dy=0,
we have that
∫ f_XY,λ(x,y)dx=f_Y(y),
∫ f_XY,λ(x,y)dy=f_X(x).
That is, the perturbations on f_XY are chosen so that the resulting
marginal distributions are unperturbed.
The perturbation functions u_j in Lemma <ref>
are restricted to the small R_j bins and thus violate the Hölder
class assumption. However, by scaling H_1 appropriately, we can
ensure that f_XY,λ∈Σ(s,H). We show this by following
the same argument as in Krishnamurthy et al. <cit.>,
which we repeat here for completeness. Define u_λ=H_1∑_j=1^ℓλ_ju_j.
We will first show that u_λ is Hölder smooth. Then
f_XY,λ is Hölder smooth by the triangle inequality.
For u_λ, fix two points v,z∈ℝ^d_X+d_Y
and fix r with ∑_jr_j=s. Define z_1 as the boundary
point of the R_j bin containing z along the line between z
and v and define v_1 similarly as the boundary point for the
bin containing v along the same line. We then have the following:
|D^ru_λ(z)-D^ru_λ(v)| ≤|D^ru_λ(z)-D^ru_λ(z_1)|+|D^ru_λ(z_1)-D^ru_λ(v_1)|+|D^ru_λ(v_1)-D^ru_λ(v)|
=|D^ru_λ(z)-D^ru_λ(z_1)|+|D^ru_λ(v_1)-D^ru_λ(v)|
=∫_γ(z,z_1)∇ D^ru_λ(t)dt+∫_γ(v,v_1)∇ D^ru_λ(t)dt
≤ H_1‖ D^r+1u_j‖ _∞(‖ z-z_1‖ _2+‖ v-v_1‖ _2)
≤ H_1ℓ^r+1/d_X+d_Y(‖ z-z_1‖ _2^1-(s-r)‖ z-z_1‖ _2^s-r+‖ v-v_1‖ _2^1-(s-r)‖ v-v_1‖ _2^s-r)
≤ H_1ℓ^r+1/d_X+d_Y√(d_X+d_Y)ℓ^-1-(s-r)/d_X+d_Y(‖ z-z_1‖ _2^s-r+‖ v-v_1‖ _2^s-r)
≤ H_1ℓ^s/d_X+d_Y√(d_X+d_Y)‖ z-v‖ _2^s-r.
The first line is an application of the triangle inequality. The second
line follows from the fact that u_λ and all of its derivatives
are zero on the boundaries of the cubes R_j as u_j is not
supported in the band around the border of R_j. The third line
follows from the fundamental theorem of calculus where γ(z,z_1)
is the path between z and z_1. The fourth line is an application
of Hölder's inequality where we replace each derivative with
its supremum, leaving just the path integral which simplifies to the
length of the path. The fifth line follows from the assumption that
‖ D^ru_j‖ _∞≤ℓ^r/(d_X+d_Y)
when ∑_jr_j≤ s+1. For the sixth line, since z and
z_1 are in the same bin, then ‖ z-z_1‖ _2≤√(d_X+d_Y)ℓ^-1/(d_X+d_Y)
as there are ℓ boxes with side length ℓ^-1/(d_X+d_Y).
Finally, the last line follows since z_1 and v_1 are on
the line segment between z and v.
This indicates that u_λ, and therefore f_XY,λ,
is guaranteed to be Hölder smooth if H_1ℓ^s/d_X+d_Y√(d_X+d_Y)≤ H.
Thus we require that H_1=O(ℓ^-s/d_X+d_Y).
We will set ℓ later on.
Note that for any f_XY,λ∈Λ, by a second order Taylor
series approximation in the first argument we have
I(f_XY,λ,f_Xf_Y) =I(f_XY,f_Xf_Y)-∫ g'(f_X(x)f_Y(y)/f_XY(x,y))f_X(x)f_Y(y)u_λ(x,y)/f_XY(x,y)dxdy
+1/2∫ g”(f_X(x)f_Y(y)/f_XY^*(x,y))f_X^2(x)f_Y^2(y)u_λ^2(x,y)/(f_XY^*(x,y))^2dxdy,
where f_XY^* is the function from Taylor's remainder theorem.
By construction (see Lemma <ref>), the first order
term vanishes. For the second order term, note that f_XY^*
lies on the line segment between f_XY and f_XY,λ and
is therefore upper and lower bounded. Similarly the density f_XY,λ
will be upper and lower bounded for N sufficiently large as f_XY,λ∈[f_XY-H_1,f_XY+H_1]
due to the fact that ‖ D_0u_j‖ _∞=‖ u_j‖ _∞≤1,
and H_1 will be chosen to decrease as N increases. Assume
without loss of generality that given ϵ>0, g”(ϵ)>0.
Thus there exists a constant c_0 such that
1/2∫ g”(f_X(x)f_Y(y)/f_XY^*(x,y))f_X^2(x)f_Y^2(y)u_λ^2(x,y)/(f_XY^*(x,y))^2dxdy≥ c_0H_1^2∑_j=1^ℓ‖ u_j‖ _2^2≥ c_1H_1^2,
where we have used the facts that the u_j functions are orthogonal
to each other and ‖ u_j‖ _2^2∈Θ(ℓ^-1).
Therefore,
I(f_XY,λ,f_Xf_Y)-I(f_XY,f_Xf_Y)≥ c_1H_1^2,
providing us with the necessary separation of 2β where β=c_1H_1^2/2.
Note that if g”(ϵ)<0, we can simply consider I(f_XY,f_Xf_Y)-I(f_XY,λ,f_Xf_Y)
instead.
We now focus on bounding the squared Hellinger distance H^2(F̅_XY^N×(F_XF_Y)^N,F_XY^N×(F_XF_Y)^N)
where F̅_XY^N=1/|Λ|∑_λ∈ΛF_XY,λ^N.
The Hellinger distance decomposes across product measures resulting
in:
H^2(F̅_XY^N×(F_XF_Y)^N,F_XY^N×(F_XF_Y)^N) =2(1-(1-H^2(F̅_XY^N,F_XY^N)/2)(1-H^2((F_XF_Y)^N,(F_XF_Y)^N)))
=H^2(F̅_XY^N,F_XY^N).
To bound this, we will use the following result from Birge and Massart <cit.>:
Consider a set of densities p and p_λ=p(1+∑_jλ v_j)
for λ∈Λ={-1,1}^ℓ. Suppose that (i) ‖ v_j‖ _∞≤1,
(ii) ‖ 1_{R_j^C}v_j‖ _1=0, (iii)
∫ pv_j=0, and (iv) ∫ pv_j^2=α_j>0 all hold.
Define P̅^N=1/|Λ|∑_λ∈ΛP_λ^N.
Then
H^2(P̅^N,P^N)≤N^2/3∑_j=1^ℓα_j^2.
To apply Lemma <ref>, define v_j(z)=H_1u_j(z)/f_XY(z).
Then f_XY,λ=f_XY(1+∑_jλ_jv_j).
Requirements (i)-(iii) are immediately satisfied based on the properties
of u_j (see Lemma <ref>). Furthermore,
α_j=∫ v_j^2f_XY=H_1^2∫ u_j^2/f_XY≤H_1^2C/ℓ,
for some constant C. Therefore
H^2(F̅_XY^N,F_XY^N)≤N^2/3∑_j=1^ℓα_j^2≤N^2H_1^4C^2/ℓ∈Θ(N^2ℓ^-4s+d_X+d_Y/d_X+d_Y).
Set ℓ=N^2(d_X+d_Y)/4s+d_X+d_Y resulting
in H_1=N^-2s/4s+d_X+d_Y. Then the Hellinger distance
is bounded by a constant. Additionally, the error is larger than β∈Θ(N^-4s/4s+d_X+d_Y)
allowing us to apply Lemma <ref> when s<(d_X+d_Y)/4.
Markov's inequality then finishes the proof.
For s>(d_X+d_Y)/4, we get a lower bound of O(N^-1)
which is the parametric rate. In general, we cannot do any better
than this <cit.>
thus establishing the lower bound in this regime. In particular, Krishnamurthy
et al. <cit.> use a contradiction approach
to establish this for divergence estimation which can be extended
to the MI estimation problem.
§ THEORY FOR MIXED RANDOM VARIABLES
Here we provide proofs of the theory that extends the MI estimators
for the continuous case to the mixed case.
§.§ Proof of Lemma <ref>
For (<ref>), note that 𝐍_xy is a binomial random
variable with parameter f_X_DY_D(x,y), N trials,
and mean Nf_X_DY_D(x,y). Thus (<ref>)
is the (potentially) fractional moment of a binomial random variable.
By the generalized binomial theorem, we have that
𝐍_xy^α = (𝐍_xy-Nf_X_DY_D(x,y)+Nf_X_DY_D(x,y))^α
= ∑_i=0^∞([ α; i ])(Nf_X_DY_D(x,y))^α-i(𝐍_xy-Nf_X_DY_D(x,y))^i,
[𝐍_xy^α] = ∑_i=0^∞([ α; i ])(Nf_X_DY_D(x,y))^α-i[(𝐍_xy-Nf_X_DY_D(x,y))^i].
From <cit.>, the i-th central moment of 𝐍_xy
has the form of
[(𝐍_xy-Nf_X_DY_D(x,y))^i]=∑_n=0^⌊ i/2⌋c_n,i(f_X_DY_D(x,y))N^n.
Combining this with (<ref>) gives
[_xy^α] =∑_i=0^∞∑_n=0^⌊ i/2⌋([ α; i ])(f_X_DY_D(x,y))^α-ic_n,i(f_X_DY_D(x,y))N^α-i+n
=(Nf_X_DY_D(x,y))^α+O(N^α-1).
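This expansion is straightforward to check by simulation; in the sketch below (our own check), the scaled error N^1-α(𝔼[𝐍_xy^α]-(Np)^α) stays bounded as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p, alpha = 0.3, 0.5
for N in (100, 1_000, 10_000):
    counts = rng.binomial(N, p, size=500_000).astype(float)
    err = np.mean(counts ** alpha) - (N * p) ** alpha
    print(N, err, err * N ** (1 - alpha))      # scaled error stays O(1)
```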
For (<ref>), we apply a Taylor series expansion
to obtain
_xy^λ_x^β_y^γ =N^λ+β+γp^λp_x^βp_y^γ+(_xy-Np)p^λ-1(N^λ+β+γ-1p_x^βp_y^γ+N^λ+β+γ-2(p_x^β-1p_y^γ(_x-Np_x)+p_x^βp_y^γ-1(_y-Np_y)))
+N^λ+β+γ-1p^λ(p_x^β-1p_y^γ(_x-Np_x)+p_x^βp_y^γ-1(_y-Np_y))+O(N^λ+β+γ-2((_x-Np_x)(_y-Np_y))),
where we set p=f_X_DY_D(x,y), p_x=f_X_D(x),
and p_y=f_Y_D(y) for notational convenience. By taking the
expected value with respect to _x, _y, and _xy,
we obtain
[_xy^λ_x^β_y^γ] =N^λ+β+γp^λp_x^βp_y^γ+N^λ+β+γ-2p^λ-1(p_x^β-1p_y^γ(_xy,_x)+p_x^βp_y^γ-1(_xy,_y))
+O(N^β+γ-1(_x,_y))
=N^λ+β+γp^λp_x^βp_y^γ+O(N^λ+β+γ-1),
where the last step follows from the Cauchy-Schwarz inequality and
the variance of a binomial random variable.
§.§ Proof of Theorem <ref> (Bias)
For notational ease, let
𝒯(,)=f_X_C|X_D(_C|_D)f_Y_C|Y_D(_C|_D)/f_X_CY_C|X_DY_D(_C,_C|_D,_D).
We have that
[h_X_C|X_D,h_Y_C|Y_D] =[h_X_C|X_D,h_Y_C|Y_D]-I(;)
=[∑_x∈𝒮_X_D,y∈𝒮_Y_D_xy/Nh_X_C|x,h_Y_C|y-g(𝒯(,)×f_X_D(_D)f_Y_D(_D)/f_X_DY_D(_D,_D))]
=[∑_x∈𝒮_X_D,y∈𝒮_Y_D_xy/N(h_X_C|x,h_Y_C|y-g(𝒯(,)×_x_y/N_xy))]
+[∑_x∈𝒮_X_D,y∈𝒮_Y_D(_xy/Ng(𝒯(,)×_x_y/N_xy)-f_X_DY_D(x,y)g(𝒯(,)×f_X_D(x)f_Y_D(y)/f_X_DY_D(x,y)))].
We consider the second term in (<ref>) first. A Taylor
series expansion of g(𝒯(,)×_x_y/N_xy)
evaluated at 𝒯(,)×f_X_D(x)f_Y_D(y)/f_X_DY_D(x,y)
gives terms of the form of
(f_X_C|X_D(_C|x)f_Y_C|Y_D(_C|y)(_x_y/N^2-f_X_D(x)f_Y_D(y)))^i,
(f_X_CY_C|X_DY_D(_C,_C|x,y)(_xy/N-f_X_DY_D(x,y)))^i,
where i is a positive integer. For notational ease, set p=f_X_DY_D(x,y).
By applying the binomial theorem and (<ref>), we obtain
_xy/N(p-_xy/N)^i =∑_k=0^iikp^i-k(_xy/N)^k+1(-1)^k
[_xy/N(p-_xy/N)^i] =p^i+1∑_k=0^iik(-1)^k+O(1/N)
=O(1/N).
Using a similar approach with (<ref>), it can be
shown that
[_xy/N(_x_y/N^2-f_X_D(x)f_Y_D(y))^i]=O(1/N).
Thus the second term in (<ref>) reduces to O(1/N).
By conditioning on _1,D,…,_N,D,_1,D,…,_N,D,
the first term in (<ref>) can be written as
[∑_x∈𝒮_X_D,y∈𝒮_Y_D_xy/N[.h_X_C|x,h_Y_C|y|_1,D,…,_N,D,_1,D,…,_N,D]].
The conditional bias of h_X_C|x,h_Y_C|y given _1,D,…,_N,D,_1,D,…,_N,D
can be obtained from Theorem <ref> as
[.h_X_C|x,h_Y_C|y|_1,D,…,_N,D,_1,D,…,_N,D] =∑_i,j=0; i+j≠0^rc_10,i,j(_x_y/N^2,_xy/N)h_X_C|x^ih_Y_C|y^j
+O(h_X_C|x^s+h_Y_C|y^s+1/(_xyh_X_C|x^d_Xh_Y_C|y^d_Y)).
This expression provides the motivation for our choice of h_X_C|x and
h_Y_C|y. Since h_X_C|x∝_x^-β and h_Y_C|y∝_y^-α,
then (<ref>) gives terms with the form of _xy_x^-β i_y^-α j/N
with i+j≥1. From Lemma <ref>, taking
the expected value of these terms gives
[_xy_x^-β i_y^-α j/N]=N^-β i-α jf_X_DY_D(x,y)(f_X_D(x))^-β i(f_Y_D(y))^-α j+o(1/N).
Similarly, taking the expectation of _xy_x^β d_X_y^α d_Y/N^2
gives O(N^β d_X+α d_Y-1). Note that the
polynomial terms of _x_y/N^2 and _xy/N in the
constants in (<ref>) do not contribute to the
bias rate as the _x_y and _xy terms in the numerator
are cancelled by the N^2 and N terms in the denominator, respectively,
after taking the expectation. Combining all of these results completes
the proof.
§.§ Proof of Theorem <ref> (Variance)
By the law of total variance, which can
be derived from the Pythagorean theorem, we have
[h_X_C|X_D,h_Y_C|Y_D] =[[.h_X_C|X_D,h_Y_C|Y_D|_1,D,…,_N,D,_1,D,…,_N,D]]
+[[.h_X_C|X_D,h_Y_C|Y_D|_1,D,…,_N,D,_1,D,…,_N,D]].
Given all of the _i,D and _i,D random variables, the
estimators h_X_C|x,h_Y_C|y are all conditionally independent
since they use different sets of 𝐗_i,C's and _i,C's
for each pair (x,y). Thus from Theorem <ref>, we
get
[.h_X_C|X_D,h_Y_C|Y_D|_1,D,…,_N,D,_1,D,…,_N,D] =O(∑_x∈𝒮_X_D,y∈𝒮_Y_D(_xy^2/N^2)(1/_xy))
=O(∑_x∈𝒮_X_D,y∈𝒮_Y_D_xy/N^2).
Taking the expectation yields O(1/N).
For the second term in (<ref>), we know from (<ref>)
that
[.h_X_C|x,h_Y_C|y|_1,D,…,_N,D,_1,D,…,_N,D] =O(∑_i,j=0; i+j≠0^r_x^-iβ_y^-jα+_x^-sβ+_y^-sα+_x^β d_X_y^α d_Y/_xy)
=O(f(_x,_y,_xy)).
Let _xy', _x', and _y' be independent and identically
distributed realizations of _xy, _x, and _y,
respectively. Then by the Efron-Stein inequality,
[∑_x∈𝒮_X_D,y∈𝒮_Y_D_xy/Nf(_x,_y,_xy)] ≤1/2N^2∑_x∈𝒮_X_D,y∈𝒮_Y_D[(_xyf(_x,_y,_xy)-_xy'f(_x',_y',_xy'))^2],
where since _x, _y, and _xy are not independent,
we consider the effect of resampling all three simultaneously. Note
that
(_xyf(_x,_y,_xy)-_xy'f(_x',_y',_xy'))^2 =O((∑_i,j=0; i+j≠0^r(_xy_x^-iβ_y^-jα-_xy'(_x')^-iβ(_y')^-jα)
+(_xy_x^-sβ-_xy'(_x')^-sβ)+(_xy_y^-sα-_xy'(_y')^-sα)
+(_x^β d_X_y^α d_Y-(_x')^β d_X(_y')^α d_Y))^2).
By Jensen's inequality, we can consider separately each of the squared
differences in (<ref>). Then since (_xy,_x,_y)
is independent of (_xy',_x',_y') and they
are identically distributed, then the expected squared difference
is proportional to the variance. For example, applying Lemma <ref>
gives
[(_xy_x^-sβ-_xy'(_x')^-sβ)^2] =2Var[_xy_x^-sβ]
=2N^2-2sβ(f_X_DY_D(x,y))^2(f_X_D(x))^-2sβ-2(N^1-sβf_X_DY_D(x,y)(f_X_D(x))^-sβ)^2
+O(N^1-2sβ)
=O(N^1-2sβ).
By a similar procedure, we obtain
[(_xy_y^-sα-_xy'(_y')^-sα)^2] =O(N^1-2sα),
[(_x^β d_X_y^α d_Y-(_x')^β d_X(_y')^α d_Y)^2] =O(N^2β d_X+2α d_Y-1),
[(∑_i,j=0; i+j≠0^r(_xy_x^-iβ_y^-jα-_xy'(_x')^-iβ(_y')^-jα))^2] =O(N^1-2β)+O(N^1-2α).
Combining these results with (<ref>) and (<ref>)
completes the proof.
§ PROOF OF THEOREM <REF> (CLT)
We will first find the asymptotic distribution
of
√(N)(-[]) =1/√(N)∑_i=1^N(g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-Z_i[g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)])
+1/√(N)∑_i=1^N(Z_i[g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)]-[g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)]).
By the standard central limit theorem <cit.>,
the second term converges in distribution to a Gaussian random variable
with variance
[ Z[g( X(𝐗) Y(𝐘)ν_1ν_2/ Z(𝐗,𝐘)ν_3)]].
All that remains is to show that the first term converges in probability
to zero as Slutsky's theorem <cit.> can then be
applied. Denote this first term as _N and note that [_N]=0.
We will use Chebyshev's inequality combined with the Efron-Stein inequality
to bound the variance of _N. Consider the samples {_1,…,_N}
and {_1^',_2,…,_N} and the respective
sequences _N and _N^'. This gives
_N-_N^' =1/√(N)(g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-Z_1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)])
-1/√(N)(g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^'))-Z_1^'[g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^'))])
+1/√(N)∑_i=2^N(g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)).
Note that
[(g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-Z_1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)])^2]=[__1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)]].
We will use the Efron-Stein inequality to bound __1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)].
We thus need to bound the conditional expectation of the term
|g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-g( X^'(𝐗_1) Y^'(𝐘_1)ν_1ν_2/ Z^'(𝐗_1,𝐘_1)ν_3)|^2,
where _i is replaced with _i^' in the KDEs for some
i≠1. Using similar steps as in Appendix <ref>,
we have that
[|g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-g( X^'(𝐗_1) Y^'(𝐘_1)ν_1ν_2/ Z^'(𝐗_1,𝐘_1)ν_3)|^2]=O(1/N^2).
Then by the Efron-Stein inequality, __1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)]=O(1/N).
Therefore
[1/N(g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-Z_1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)])^2]=O(1/N^2).
A similar result holds for the g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^')ν_3)
term in (<ref>).
For the third term in (<ref>),
[(∑_i=2^N|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)|)^2]
=∑_i,j=2^N[|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)|.
×.|g( X(𝐗_j) Y(𝐘_j)ν_1ν_2/ Z(𝐗_j,𝐘_j)ν_3)-g( X^'(𝐗_j) Y^'(𝐘_j)ν_1ν_2/ Z^'(𝐗_j,𝐘_j)ν_3)|]
For the N-1 terms where i=j, we know from Appendix <ref>
that
[|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)|^2]=O(1/N^2).
Thus these terms contribute O(1/N). For the (N-1)^2-(N-1)
terms where i≠ j, we can do multiple substitutions of the form
𝐮_j=_j-_1/h_X resulting in
[|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i))-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i))|.
×.|g( X(𝐗_j) Y(𝐘_j)ν_1ν_2/ Z(𝐗_j,𝐘_j)ν_3)-g( X^'(𝐗_j) Y^'(𝐘_j)ν_1ν_2/ Z^'(𝐗_j,𝐘_j)ν_3)|] =O(h_X^2d_Xh_Y^2d_Y/N^2).
Since h_X^d_Xh_Y^d_Y=o(1),
[(∑_i=2^N|g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)|)^2]=o(1).
Combining all of these results with Jensen's inequality gives
[(_N-_N^')^2] ≤3/N[(g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)-Z_1[g( X(𝐗_1) Y(𝐘_1)ν_1ν_2/ Z(𝐗_1,𝐘_1)ν_3)])^2]
+3/N[(g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^')ν_3)-Z_1^'[g( X(𝐗_1^') Y(𝐘_1^')ν_1ν_2/ Z(𝐗_1^',𝐘_1^')ν_3)])^2]
+3/N[(∑_i=2^N(g( X(𝐗_i) Y(𝐘_i)ν_1ν_2/ Z(𝐗_i,𝐘_i)ν_3)-g( X^'(𝐗_i) Y^'(𝐘_i)ν_1ν_2/ Z^'(𝐗_i,𝐘_i)ν_3)))^2]
=o(1/N).
Applying the Efron-Stein inequality gives that [_N]=o(1).
Then by Chebyshev's inequality, _N converges to zero in probability.
This completes the proof for the plug-in estimator.
For the weighted ensemble estimator, we present a more general result
where we have different parameters l_X∈ℒ_X and
l_Y∈ℒ_Y for and , respectively. We
can then write
√(N)( w-[ w]) =1/√(N)∑_i=1^N∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)(g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)-Z_i[g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)])
+1/√(N)∑_i=1^N(Z_i[∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)]-[∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)]).
By the central limit theorem, the second term converges in distribution
to a zero-mean Gaussian random variable with variance
[Z_i[∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)]].
From the previous results, the first term converges to zero in probability
as it can be written as
∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)1/√(N)∑_i=1^N(g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)-Z_i[g( XX(𝐗_i) YY(𝐘_i)ν_1ν_2/ ZZ(𝐗_i,𝐘_i)ν_3)]) =∑_l_X∈ℒ_X,l_Y∈ℒ_Yw(l_X,l_Y)o_P(1)
=o_P(1),
where o_P(1) denotes convergence to zero in probability and we
use the fact that linear combinations of random variables that converge
in probability individually to constants converge in probability to
the linear combination of the constants. The proof is finished with
Slutsky's theorem.
Note that the proof of Corollary <ref> follows a similar
procedure as the extension to the ensemble case.
Mutual information (MI) is a measure of the amount
of shared information between a pair of random variables 𝐗 and
𝐘. MI estimation is related to the problem of estimating functionals
of probability distributions, which has received deserved attention
in recent years <cit.>.
Many statistical problems rely in some form upon accurate estimation
of functionals of probability distributions including estimating the
decay rates of error probabilities <cit.>, estimating
bounds on the Bayes error rate <cit.>,
and hypothesis testing <cit.>.
MI estimation, in particular, also has many applications in information
theory and machine learning including independent subspace analysis <cit.>,
structure learning <cit.>, fMRI data processing <cit.>,
forest density estimation <cit.>, clustering <cit.>,
neuron classification <cit.>, blind source
separation <cit.>, intrinsically motivated reinforcement
learning <cit.>, as well
as other data science applications such as sociology <cit.>,
computational biology <cit.>,
and improving neural network models <cit.>. A particularly
common application is feature selection or extraction where features
are chosen to maximize the MI between the chosen features (represented
by 𝐗) and the outcome variables (represented by 𝐘)
<cit.>.
In many of these applications, the variables 𝐗 and 𝐘 may have
any mixture of discrete and continuous components. In feature selection,
for example, the predictor labels may have discrete components (e.g.
classification labels) while the input variables may have a mixture
of discrete and continuous features. To the best of our knowledge,
there are currently no nonparametric MI estimators that are known
to achieve the parametric mean squared error (MSE) convergence rate
1/N (N is the number of samples) in this setting where
and/or contain a mixture of discrete and continuous components.
Instead, most existing estimators of MI focus on the cases where both
and are either purely discrete or purely continuous. Also,
while many nonparametric estimators of MI exist, most have not been
generalized beyond Shannon or Rényi information. Furthermore,
minimax convergence rates are currently unknown for the continuous
and the mixture cases.
In this paper, we provide a framework for nonparametric estimation
of a large class of MI measures where we only have available a finite
population of i.i.d. samples. This framework can be applied
to accurately estimate general MI measures when either 𝐗 and 𝐘
are purely continuous or the mixed case when 𝐗 and 𝐘 may contain
a mixture of discrete and continuous components. We derive an MI estimator
for these cases that achieves the parametric MSE rate when the conditional
densities of the continuous variables are sufficiently smooth, thus
achieving the minimax rate (which we also derive) in this setting.
We call this estimator the Generalized ENsemble
Information Estimator (GENIE).
Our estimation method applies to other MI measures in addition to
Shannon information, which have been the focus of much recent interest.
An information measure based on a quadratic divergence was defined
in <cit.>. A density-resampled version of MI
was introduced in <cit.> to better measure
gene relationships in single-cell data when sampling may not be uniform.
A MI measure based on the Pearson divergence was considered in <cit.>.
Minimal spanning tree <cit.> and generalized nearest-neighbor
graph <cit.> approaches have been developed for
estimating Rényi information <cit.>,
which has been used in many applications (e.g. <cit.>).
§.§ Related Work
Many estimators for MI have been previously developed. Nearly all
of these estimators ignore the mixed case and focus on the case where
both 𝐗 and 𝐘 are either purely continuous or purely discrete.
A popular k-nearest neighbor (nn)-based estimator was proposed
in <cit.> which is a modification of the entropy
estimator derived in <cit.>. However, these
estimators have only been shown to achieve the parametric convergence
rate when the dimension of each of the random variables is less than
3 <cit.>. Furthermore, these estimators focus
only on estimating the Shannon MI between purely continuous random
variables. Similarly, the estimators in <cit.>
do not achieve the parametric rate and focus on the purely continuous
case. An adaptation of the Shannon MI estimator in <cit.>
was recently proposed to handle the discrete-continuous mixture case <cit.>.
While this estimator has been proven to be consistent, its convergence
rate is currently unknown. Central limit theorems have also been derived for several entropy estimators <cit.>, which can then be applied to Shannon MI. However, it is not clear if these results can be extended to more general MI functionals.
A neural network-based estimator of Shannon MI was proposed in <cit.>.
While this estimator is computationally efficient, its statistical
properties are largely unknown as the authors only prove convergence
in probability rates. It is also unclear how to extend this estimator
to other MI measures such as the Rényi information. A jackknife
approach to estimating Shannon MI was also recently proposed <cit.>.
This approach provides an automatic selection of the kernel bandwidth
for a plug-in kernel density estimator (KDE) and does not require
boundary correction, which is generally a major difficulty in estimating
functionals of probability distributions. However, the MSE convergence
rate of this estimator is also unknown.
Much work has focused on the problem of estimating the entropy of
purely discrete random variables <cit.>.
Shannon MI can then be estimated by estimating the joint and marginal
entropies of 𝐗 and 𝐘. However, it is not clear if discrete
methods can be extended successfully to the mixed-case. Quantizing
the continuous components of the data is one potential approach that
has been shown to be consistent for some quantization schemes in the
purely continuous case <cit.> but it is currently
unknown if similar approaches can be applied in the mixed-case. Also,
extending these estimators to general MI measures like Rényi
information is not straightforward.
Recent work has focused on nonparametric divergence estimation for
continuous random variables. One approach <cit.>
uses an optimal KDE to achieve the parametric convergence rate when
the densities are at least d <cit.>
or d/2 <cit.>
times differentiable where d is the dimension of the data. These
methods, like ours, assume that the densities are bounded away from
zero as this simplifies the analysis. However, this induces a boundary
on the densities' support set. For accurate estimation, the optimal
KDE approaches require knowledge of the density support boundary and
are difficult to construct near the boundary. Numerical integration
may also be required for estimating some divergence functionals under
this approach, which can be computationally expensive. In contrast,
our approach to MI estimation does not require numerical integration
and can be performed without knowledge of the support boundary.
Some methods for estimating distributional functionals have relaxed
the boundedness assumption on the densities <cit.>.
These approaches typically assume that the tails of the densities
decay at a sufficiently fast rate (e.g. sub-exponential or sub-Gaussian).
In <cit.>, the authors only consider densities
with up to 2 derivatives as it is difficult to exploit higher smoothness when the densities are not lower-bounded.
More closely related work <cit.>
uses an ensemble approach to estimate entropy or divergence functionals
for continuous random variables. These works construct an ensemble
of simple plug-in estimators by varying the neighborhood size of density
estimators. They then take a weighted average of the estimators where
the weights are chosen to decrease the bias with only a small increase
in the variance. The parametric rate of convergence is achieved when
the densities are either d <cit.>
or d/2 <cit.> times
differentiable. These approaches are simple to implement as they only
require simple plug-in estimates and the solution of an offline convex
optimization problem. The ensemble approach also automatically corrects
for bias at the boundary of the densities' support set.
Finally, <cit.> showed that k-nn or KDE based
approaches underestimate the MI when the MI is large. As MI increases,
the dependencies between random variables increase which results in
less smooth densities. This is therefore not an issue when the densities
are smooth <cit.>.
For the mixture setting, we focus on the important special case where
the components of each observation are assumed to decompose into discrete
and continuous dimensions. This enables the density to be factored:
f_X(x)=f_X_D(x_D)f_X_C|X_D(x_C|x_D)
where x_C and x_D are the continuous and discrete components
of x. We note that this excludes the more general case considered
by <cit.> where one or more components can have
discrete and continuous values simultaneously. However, our setting
is a common occurrence in many machine learning and statistical problems.
For example, a search within the UCI Machine Learning Repository <cit.>
yields many datasets with such structure. Many statistical models
have also focused on similar settings <cit.>.
Thus we believe that this special case warrants its own treatment
and retain the more general case for future work. Despite the importance
of this mixed setting, no other MI estimators have been derived or
analyzed that achieve the parametric MSE convergence rate.
§.§ Contributions
In the context of this related work, we make the following novel contributions
in this paper:
* For purely continuous random variables, we derive the asymptotic bias
and variance of kernel density plug-in MI estimators for general MI
measures without boundary correction <cit.>
(Section <ref>).
* We leverage the results for the purely continuous case to derive the
bias and variance of general kernel density plug-in MI estimators
when 𝐗 and/or 𝐘 contain a mixture of discrete and continuous
components by reformulating the densities as a mixture of the conditional
density of the continuous variables given the discrete variables (Section
<ref>). Note that this is a special case of the mixture
setting where discrete and continuous components are separated into
different dimensions.
* We leverage this theory for the mixed case described above in conjunction
with the generalized theory of ensemble estimators <cit.>
to derive GENIE. To the best of our knowledge, this is the first non-parametric
estimator of general MI measures that achieves a parametric MSE convergence
rate of O(1/N), where N is the number of samples available from each
distribution, when the densities are sufficiently smooth; no previous
nonparametric estimator achieves this rate in any mixed case (Section
<ref>), let alone the special case we consider.
* We prove a minimax lower bound for the convergence rate of MI estimators
in the purely continuous case (Section <ref>). This
unifies the minimax theory for estimating continuous entropy <cit.>
and divergence functionals <cit.>.
Neither of these approaches is directly extendable to the MI case
due to the dependence of the marginal distributions on the joint distribution
and the integral relationship between the joint and the marginals.
Therefore, we have tailored the proof to the MI estimation case. We
also show that the MI ensemble estimator achieves the minimax rate
when the densities are sufficiently smooth.
* We derive a central limit theorem for the ensemble estimators (Section
<ref>).
* We apply the method to single-cell RNA-sequencing feature selection
problems (Section <ref>).
We note that KDE plug-in approaches to estimating functionals such
as entropy and MI are well known and are perhaps the simplest option <cit.>.
Applying the generalized theory of ensemble estimation to the KDE
plug-in estimator does not raise the complexity of the estimators
substantially, either computationally or conceptually. Yet by employing
these simple methods, the resulting ensemble estimator is able to
achieve the minimax convergence rate for sufficiently smooth densities
without employing more complicated von Mises expansions (as in <cit.>)
or boundary correction (as in <cit.>)
to reduce the bias. | null | null | null | null | We derived the MSE convergence rates for general plug-in KDE-based
estimators of general MI measures between 𝐗 and 𝐘
when they have only continuous components and for the case where
and/or contain a mixture of discrete and continuous components.
Using these rates, we defined an ensemble estimator GENIE that achieves
an MSE rate of O(1/N) when the densities are sufficiently smooth.
To the best of our knowledge, this is the first nonparametric MI estimator
that achieves the MSE convergence rate of O(1/N) in this setting
of mixed random variables (i.e. and are not both purely
discrete or purely continuous). We also derived a minimax lower bound
on the convergence rate for estimating MI in the continuous case,
derived the asymptotic distribution of the estimator, validated the
superior convergence rate of the ensemble estimator via experiments,
and applied the estimator to analyze feature relevance in single cell
data. We show that the ensemble estimators for the continuous case
achieve the minimax rate for sufficiently smooth densities. Future
work includes extending this approach to k-nn based estimators
which are generally computationally easier than KDE estimators and
extending the minimax rate for the mixed case considered here. We
conjecture that the minimax rates in the mixed case are at least as
slow as those for the continuous case as the mixed case can be decomposed
into a random sum of continuous MI estimators.
|
http://arxiv.org/abs/1701.07484v1 | 20170125205813 | Monitoring and Intervention: Concepts and Formal Models | [
"Kenneth Johnson",
"John V. Tucker",
"Victoria Wang"
] | cs.CY | [
"cs.CY"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07686v2 | 20170126132524 | Pulse length of ultracold electron bunches extracted from a laser cooled gas | [
"J. G. H. Franssen",
"T. L. I. Frankort",
"E. J. D. Vredenbregt",
"O. J. Luiten"
] | physics.atom-ph | [
"physics.atom-ph",
"physics.acc-ph",
"physics.plasm-ph"
] |
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Institute for Complex Molecular Systems, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Institute for Complex Molecular Systems, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
[email protected]
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Institute for Complex Molecular Systems, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
We present measurements of the pulse length of ultracold electron bunches generated by near-threshold two-photon photoionization of a laser-cooled gas. The pulse length has been measured using a resonant 3 GHz deflecting cavity in TM_110 mode.
We have measured the pulse length in three ionization regimes. The first is direct two-photon photoionization using only a 480 nm femtosecond laser pulse, which results in short (∼ 15ps) but hot (∼ 10^4K) electron bunches.
The second regime is just-above-threshold femtosecond photoionization employing the combination of a continuous-wave 780 nm excitation laser and a tunable 480 nm femtosecond ionization laser which results in both ultracold (∼ 10K) and ultrafast (∼ 25ps) electron bunches. These pulses typically contain ∼ 10^3 electrons and have an rms normalized transverse beam emittance of 1.5±0.1nm·rad. The measured pulse lengths are limited by the energy spread associated with the longitudinal size of the ionization volume, as expected.
The third regime is just-below-threshold ionization which produces Rydberg states which slowly ionize on microsecond time scales.
37.10, 37.20, 41.75, 41.85
§ INTRODUCTION
Ultrafast electron diffraction has developed into a powerful technique for studying structural dynamics<cit.>. Pumping samples with femtosecond laser pulses and probing them with high energy electron bunches can easily lead to sample damage, which is particularly true for biological molecules<cit.>. This means that diffraction patterns preferably have to be captured with a single electron bunch: single-shot electron diffraction. This requires electron bunches with 10^6-10^7 electrons<cit.> per pulse which are prone to space charge explosions, resulting in loss of temporal resolution and degradation of transverse beam quality. The effect of space charge forces may be mitigated by using an ultracold electron source since it allows for larger source sizes and thus lower bunch densities for the same beam quality<cit.>.
To obtain high quality diffraction images the relative energy spread of the electron bunch needs to be much smaller than unity and the transverse coherence length larger than the lattice spacing of the structure under investigation. Previous work demonstrated that the ultracold electron source is capable of producing high quality diffraction images<cit.>, even with electron bunches created by femtosecond photoionization<cit.>.
The transverse beam quality of the ultracold electron source has been investigated extensively previously<cit.>. However, the longitudinal electron beam characteristics have not been investigated in great detail. Recently it was shown that the ultracold electron source can be used to create ultracold electron bunches with a root-mean-square (rms) pulse length of 250 ps<cit.>. In this paper we present pulse length measurements with sub-ps resolution of ultracold electron bunches with an rms pulse duration of 25 ps containing ∼ 10^3 electrons per pulse. Up to ∼ 10^6 electrons can be extracted in a single shot from the ultracold electron source<cit.> but then strong space charge effects come into play. These space charge effects can be minimized by shaping the initial electron distribution <cit.>. In this paper we stick to maximally ∼ 10^3 electrons per bunch, thus avoiding the complication of space charge effects.
For UED, the electron pulse length has to be shorter than the shortest timescale associated with the process under investigation. This means that often the electron bunch length should preferably be much shorter than one picosecond. This can be achieved by compressing longer electron pulses using a resonant RF cavity<cit.> in TM_010 mode, but for this to work the electron pulse should not be longer than a few tens of ps<cit.> for a cavity operated at 3GHz. In this paper we show that such electron bunches can indeed be extracted from the ultracold electron source.
This paper is organized as follows: In Section <ref> we introduce the ultracold electron source and the relevant photoionization schemes (Section <ref>A) that were used. In Section <ref>B and <ref>C we address the transverse electron beam quality (transverse emittance) and the longitudinal beam quality (longitudinal emittance). In Section <ref>D we discuss a model of the rubidium ionization process allowing us to make an estimate of the shortest electron pulse lengths that can be expected. In Section <ref> the experimental setup is described. Finally, in Section <ref>, the results will be presented.
We show that we can produce electron bunches which are both ultracold and ultrafast, with rms pulse durations of ∼ 20 ps, short enough to be compressed to sub-ps bunch lengths. In Section <ref> we will finish with a conclusion and an outlook.
§ ULTRACOLD ELECTRON SOURCE
Ultracold electron bunches are created by near-threshold photoionization of laser-cooled and trapped rubidium-85 atoms, as is illustrated in Fig. <ref>. First, Fig. <ref>a, rubidium atoms are laser-cooled and trapped in a magneto-optical trap (MOT), with an atom density of ∼ 10^16m^-3.
Second, Fig. <ref>b, the trapping laser is switched off 1 μs before the ionization laser pulse, so all atoms in the MOT relax to the ground state. Then the 780 nm excitation laser beam is switched on, creating a small cylinder of excited atoms in the 5P_3/2 F=4 state which has an rms radius of 35 μm. Subsequently a small volume of rubidium atoms is ionized by a femtosecond 480 nm ionization laser beam, intersecting the excitation laser beam at right angles, resulting in a cloud of cold electrons and ions typically less than 100 μm in diameter. This all occurs in a static electric field<cit.>, see Fig. <ref>c, which immediately accelerates all charged particles created.
Figure <ref> shows a schematic picture of the accelerator. The electrons are created in the center of the accelerating structure<cit.> which has a potential 2 F d_acc applied across the electrodes, with F the electric field strength and d_acc=13.5mm the length over which the electrons are accelerated. Electrons created in the back of the ionization volume will be accelerated over a longer distance and thus acquire more energy than electrons created in the front. This results in a correlated momentum spread at the exit of the accelerator (Fig. <ref>b). At a distance 2d_acc behind the accelerator the electrons created in the back catch up with the electrons created in the front, creating a self focus (Fig. <ref>c). After this self focus the electron pulse stretches due to the energy spread (Fig. <ref>d).
In the absence of space charge forces, the initial longitudinal energy spread is dominated by the width of the ionization laser beam σ_ion in the direction of the acceleration field. The initial energy spread due to thermal motion is 1-10 meV for all three degrees of freedom<cit.>, which is at least an order of magnitude smaller than the energy spread due to the finite ionization laser beam size, typically 0.1-1 eV. Therefore the Boersch effect can be neglected. As a result, the rms relative energy spread σ_U/U is to a good approximation directly proportional to the ionization laser beam size,
σ_U/U=σ_ion/d_acc,
with U=eFd_acc the average bunch energy. The spot size of the diffraction limited ionization laser beam focus at the position of the ionization volume results in σ_ion≈ 30 μm so the expected rms relative energy spread σ_U/U≳ 3·10^-3.
Depending on the intensity and the central wavelength of the 480nm femtosecond laser ionization may occur by different mechanisms. We may distinguish three different ionization schemes, schematically indicated in Fig. <ref>, which are described in more detail below. In principle all three ionization processes always occur but their relative importance is determined by the intensity and central wavelength of the ionization laser.
§.§ Ionization schemes
The first ionization regime is photoionization of rubidium atoms from the ground state (5S_1/2) using two blue photons (480-480), see Fig. <ref>a. This will result in hot electrons (∼ 10^4 K) due to large excess energies. The ionization probability of this non-linear process scales with the intensity of the ionization laser field squared, which therefore results in an ionization volume that is smaller in length by a factor 1/√(2) than it would be by 780-480 nm photoionization, described below.
The second regime is near-threshold photoionization using a red and a blue laser (780-480), which results in ultracold electron bunches suitable for high quality electron diffraction. Atoms are excited from the ground state to the 5P_3/2 state using a 1 μs excitation pulse. In the mean time, an ultrafast blue laser ionizes the excited atoms, see Fig. <ref>b. In all (780-480) measurements the intensity of the femtosecond ionization laser has been decreased to minimize the contribution of two-photon (480-480) ionization. The ionization probability per 480nm photon of this process is at least an order of magnitude larger than that in the direct photoionization regime, resulting in more electrons per pulse.
When the center wavelength of the ionization laser is tuned below the ionization threshold, slowly decaying Rydberg states are formed (780-480-Ry), the third regime. For a given ionization laser wavelength λ and electric field strength F, the excess energy<cit.> is given by:
E_exc=E_λ+E_F≡ hc(1/λ-1/λ_0)+2E_h√(F/F_0),
where λ_0=479.06 nm is the zero-field ionization laser wavelength threshold, E_h=27.2 eV the Hartree energy, F_0=5.14 · 10^11 V/m the atomic unit of electric field strength, h Planck's constant and c the speed of light.
The part of the femtosecond laser spectrum lying in the positive excess energy regime will still be able to produce fast electron pulses, see Fig. <ref>c. Figure <ref>d depicts the Stark-shifted rubidium potential, with E_λ the contribution to the excess energy due to the laser wavelength and E_F the contribution to the excess energy due to the Stark shift.
The tail below threshold will create long lived Rydberg states that slowly ionize. We have scanned the center wavelength of the ionization laser across the zero excess energy point to investigate the ionization dynamics of the Rydberg gas.
§.§ Transverse phase space
The transverse beam quality of an electron beam can be described by the normalized root-mean-squared (rms) transverse emittance<cit.>
ϵ̂_x=σ_xσ_p_x/mc=σ_x√(k_bT_x/m c^2)
with σ_x the rms source size, m the electron mass and σ_p_x the rms momentum spread at the source, which can be expressed in terms of an effective transverse source temperature T_x.
The beam quality of the electrons extracted from the ultracold source has been investigated extensively in previous work<cit.>. Waist scan measurements resulted in a normalized rms transverse beam emittance of ϵ̂_x= 1.5 nm·rad<cit.>. This is equivalent to a relative transverse coherence length of C_⊥ = L_⊥/σ_x= λ̄_c/ϵ̂_x= 2.5 · 10^-4 with λ̄_c≡ħ/mc the reduced Compton wavelength, ħ Dirac's constant and L_⊥ the transverse coherence length.
We have repeated our previous measurements<cit.> with a higher resolution electron detector (TVips TemCam-F216) allowing for more accurate measurements. Figure <ref> shows the spot size as measured on the detector as a function of magnetic lens current together with our beam line model<cit.> fit which is used to determine the transverse beam emittance. These measurements result in a rms transverse normalized emittance of ϵ̂_x= 1.5 ± 0.1 nm·rad for λ=490nm and F=0.813MV/m. Using the value of the independently measured source size σ_x=35 μm<cit.>, this emittance corresponds to an effective transverse source temperature T_x=10K confirming our earlier results<cit.>.
§.§ Longitudinal phase space
The normalized longitudinal beam emittance of the ultracold electron source is, analogous to the transverse beam emittance (see Eq. (<ref>)), described by:
ϵ̂_z=σ_zσ_p_z/mc=σ_z√(k_bT_z/m c^2).
with σ_p_z the longitudinal rms momentum spread, σ_z the rms size of the ionization volume in the acceleration (ẑ) direction and T_z the effective longitudinal source temperature. In the most favorable case T_x=10K, as measured, and T_z=100K determined by the bandwidth (σ_λ=4nm) of the ionization laser. This results in a normalized rms longitudinal emittance of ϵ̂_z=4nm·rad.
The longitudinal beam emittance is better known as the product of the rms pulse duration τ_w in a waist and the rms energy spread σ_U,
ϵ̂_z=τ_wσ_U/mc,
resulting in ϵ̂_z· mc=7ps·eV.
The longitudinal beam emittance is an important parameter since it determines to what extent we can compress our electron pulse for a given energy spread, see Eq. (<ref>). Using Eq. (<ref>),(<ref>), (<ref>) and U=eFd_acc we can show that the rms pulse length in the longitudinal waist (See Fig. <ref>c) is given by
τ_w=√(mk_bT_z)/(eF).
with e the elementary charge. For T_z=100 K and F=0.813 MV/m we thus find τ_w=270 fs. Strictly speaking, Eq. (<ref>) only holds when all electrons are created instantaneously. If the electrons are created over a time span τ_ion, the longitudinal beam emittance becomes
ϵ̂_z=σ_U/mc√(τ_w^2+τ_ion^2).
This equation shows that the longitudinal beam emittance is influenced by the duration of the ionization process τ_ion, which will be treated in the next section.
Generally, the pulse length of the freely propagating bunch is determined by four processes: first, the duration of the ionization laser pulse τ_l; second, the time τ_ion it takes an electron to escape the rubidium potential; third, electron pulse lengthening due to the beam energy spread σ_U, as illustrated in Fig. <ref>; fourth, pulse lengthening due to space charge forces in the electron bunch. The latter process has been avoided in this work by using low charge densities.
The rms temporal bunch length τ_U due to the energy spread at a distance z behind the MOT is given by,
τ_U≅√(mσ^2_ion/(2U))(1-(z-d_acc)/(2 d_acc)).
Here we approximated the accelerating field by a uniform electric field F⃗=F ẑ that extends up to z=d_acc and is zero for z > d_acc. This approximation holds for positions z-d_acc≫ a, with a the size of the aperture in the anode. This equation shows that there is a longitudinal focus at z=3 d_acc. Here the fast electrons created in the back of the ionization volume catch up with the slower electrons created in the front, as illustrated in Fig. <ref>.
§.§ Ionization process
To predict the ionization time constant τ_ion we make use of a classical model of the ionization process. This model<cit.> has previously been used successfully to predict the transverse beam quality.
In the model describing the rubidium atom there are no closed electron orbits<cit.>, which means that all electrons with an excess energy E_exc (Eq. (<ref>)) larger than zero will eventually leave the ion. The particle trajectories are calculated with the General Particle Tracer code<cit.>. The electrons are started with a velocity directed radially outwards under an angle θ with the acceleration field. All simulation parameters are described in Ref. <cit.>. From the simulated trajectories, the arrival time of the electrons is calculated at z = 10 μm, where the ion potential is negligible.
This is done for various starting angles and excess energies. To calculate the electron pulse shape for femtosecond laser ionization we have to convolve the simulation results with the broadband laser spectrum, with an rms width σ_λ=4 nm, and with the emission angle distribution associated with the polarization of the laser field. For polarization parallel to the acceleration field we assume an angular distribution ∼cos^2(θ) and for perpendicular polarization we assume an angular distribution ∼sin^2(θ).
The arrival-time distribution for F=0.237 MV/m, when using a femtosecond ionization laser pulse is depicted in Fig. <ref> for ionization laser wavelengths λ=470 and λ=489.8nm and ionization laser polarizations both parallel and perpendicular to the acceleration field. Figure <ref>a and b show the temporal electron bunch charge distribution for a laser polarization perpendicular to the acceleration direction.
Figure <ref>a shows an electron pulse for a center wavelength of the ionization laser well above (λ=470nm) the ionization threshold, resulting in a relatively large excess energy and a fast pulse. Figure <ref>b shows an electron pulse for a center photon energy of the ionization laser pulse below (λ=489.8) the ionization threshold; the zero excess energy wavelength is at λ=486nm. Electrons created with negative excess energies cannot escape (in the classical model) which means that we are effectively narrowing the bandwidth of the broadband laser pulse. This results in a train of electron pulses leaving the atom<cit.>.
Figure <ref>c and d show the temporal electron bunch charge distribution for a laser polarization parallel to the acceleration direction. Figure <ref>c shows a fast pulse which is split in two. The first pulse is due to the electrons that immediately exit the potential; the second pulse is due to the electrons that are launched in the uphill direction (see Fig. <ref>d) and subsequently first make a round trip inside the potential before leaving. Figure <ref>d results in an electron pulse train, similar to Fig. <ref>b. The intensities of the second and third electron pulses relative to the first pulse are much smaller than for perpendicular polarization. This is caused by the fact that there is a maximum ejection angle θ_c<cit.> for a given λ and F, so that for ∥ polarization almost half of the initial angles satisfy θ≤θ_c.
In our experiments the ionization laser pulse length τ_l≈ 100fs; the pulse length contribution due to the laser pulse length can therefore be neglected as τ_ion is much larger.
§ EXPERIMENTAL
Figure <ref> shows a schematic representation of the entire beam line. The electrons are created at the center of a DC accelerating structure<cit.>. The accelerated electrons pass a set of steering coils and a magnetic solenoid lens schematically indicated by a single magnetic lens. These electron optical elements allow us to control the beam position and size. Before the electron beam reaches the detector, which consists of a dual micro-channel plate (MCP) and a phosphor screen, it passes through a 3 GHz RF deflecting cavity which will be used to measure the electron bunch lengths. Note that the detector used for the pulse length measurements (dual MCP) is different from the detector that has been used for the waist scan measurements described in section <ref>.
§.§ RF cavity
The electric and magnetic fields present in an RF cavity operated in the TM_110 mode, exert a mainly transverse force on the electrons whose strength depends on the RF phase at the moment they pass through. The electrons in a bunch of finite length, shorter than half an RF oscillation period will therefore acquire a transverse momentum kick while traveling through the cavity whose magnitude depends on their arrival time. The bunch will then be streaked across a detector downstream, as illustrated in Fig. <ref>. To enable measurements of sub-picosecond bunch lengths the cavity needs to be operated in the GHz regime. A 3GHz streak cavity was recently developed in our group<cit.>, optimized for a 30 keV beam. We will use this cavity to probe the length of the electron bunches extracted from the MOT.
The phase of the electromagnetic fields inside the RF cavity is synchronized<cit.> to the femtosecond laser pulse which ionizes the rubidium atoms with an accuracy of a few hundred fs, guaranteeing that the center of every electron pulse experiences nearly the same electromagnetic field every time it passes through the cavity. For a pillbox cavity with small entrance and exit holes at z=±L_cav/2 and transverse positions close to the cavity axis, i.e. x,y ≪c/ω, the magnetic field can be approximated by:
B⃗(x,y,z,t) ≈ B_0 sin(ϕ+ω t) x̂,
with ω the angular frequency, ϕ a phase offset and B_0 the amplitude of the oscillating magnetic field inside the cavity. We can show that the rms length of the resulting streak on the detector is given by:
σ_screen^2=σ_off^2 + (2 ω_cσ_t (d_det-d_cav) sin(ζ)cos(ϕ))^2,
while the average deflection angle of the electron pulse is given by
Δ v_y/v_z=2ω_c/ωsin(ζ)sin(ϕ),
where ζ≡ω L_cav/2 v_z, σ_off is the transverse rms beam size when the cavity is turned off and ω_c=eB_0/m the cyclotron frequency. To maximize the streak length the bunch has to stay either one (ζ=π/2) or three half periods (ζ=3π/2) inside the cavity. The cavity length L_cav=16.7mm was optimized to ζ=π/2 for a 30 keV beam<cit.>. Since the energy in this setup is ≤ 20keV, the electron beam energy is fixed at U=3.2keV, resulting in ζ=3π/2.
The 3 GHz deflecting cavity is positioned at a distance d_cav=0.68 m from the magneto-optical trap while the detector is at a distance d_det=1.9m. The pulse length σ_t at the position of the cavity is dominated by the energy spread of the electron beam. From Eq. (<ref>) we can estimate σ_t≈ 18 ps. Here we have assumed that the ionization laser beam is at its diffraction limit (σ_ion=30 μm) and the pulse length τ_l≪ 1 ps. Furthermore, we assume that the average beam energy is optimized for maximum streak (ζ=3π/2) and that the duration of the ionization process τ_ion≲ 10 ps, as discussed in section <ref>.
The pulse length measurements are calibrated by measuring the term 2 ω_c/ωsin(ζ) in Eq. (<ref>). This is done by scanning the phase ϕ while measuring the position on the screen Δ v_y/v_z (d_det-d_cav). The phase ϕ can be scanned with a phase shifter (Mini-Circuits JSPHS-446) by applying a relative voltage v_phase=V_phase/V_max with 0≤ v_phase≤ 1.
§.§ Ionization laser
The femtosecond laser consists of an amplified Ti:Sapphire laser system (Coherent Legend Elite) that pumps an optical parametric amplifier (Coherent OPerA Solo) generating tunable 480 nm femtosecond pulses. The Ti:Sapph system produces 800 nm 35 fs pulses with an energy of 2.5mJ per pulse at a repetition frequency of 1 kHz. The 480 nm laser pulse length τ_l at the MOT is estimated to be 100 fs. The pulse energy of the ionization laser increases as a function of wavelength, from ∼ 75 μJ for λ=470nm to ∼ 150 μJ for λ=490nm.
§ RESULTS
An example of a streak as measured on the detector is depicted in Fig. <ref>a. This figure is recorded with an ionization laser wavelength λ=483 nm and using the (480-480) ionization scheme. Every streak measurement consists of ∼ 10^3 electron pulses. For every wavelength λ of the ionization laser, the phase ϕ of the RF cavity was scanned over one entire period while the electron beam was imaged by the detector.
Figure <ref> shows a false color plot of the electron spot as measured on the detector as function of the relative phase voltage v_phase. The figure clearly shows that the electron spot is swept across the detector, as is predicted by Eq. (<ref>).
The position of the electron pulse with respect to the center of the streak for relative phase shifter voltages v_phase ranging from 0.45 to 0.95 is depicted in the top plot of Fig. <ref>. This figure nicely shows that scanning the phase of the RF cavity will shift the position of the electron spot across the detector, as shown in Fig. <ref>. The position of the example electron spot (see Fig. <ref>a) is indicated by the grey dot.
The bottom plot of Fig. <ref> shows the rms size of the electron spot, which was obtained by fitting with a gaussian function. This shows that the cavity is most sensitive to arrival-time spread when the electron pulse is at the center of the streak, as predicted by Eq. (<ref>) and clearly visible in Fig. <ref>. Knowing the total length of the streak (used to determine 2 ω_c/ωsin(ζ)) we can calibrate the time axis separately for each wavelength. The right plot of Fig. <ref> shows the electron pulse in the time domain together with a gaussian fit resulting in an rms pulse length of 15ps.
In the next section we first present the streak data of the direct photoionization scheme (480-480) and the just-above-threshold photoionization scheme (780-480). Subsequently we show that we can make a pulse train by shaping the ionization laser beam profile. Finally we present pulse length measurements of slowly ionizing Rydberg states using the just-below-threshold photoionization scheme (780-480-Ry).
§.§ Direct photoionization (480-480)
Figure <ref> shows the rms pulse length of electron pulses created by direct ionization from the rubidium ground state (Fig. <ref>a). The measurement has been done both for laser polarization parallel and perpendicular to the acceleration field.
The rms pulse lengths measured here are shorter than can be explained by the diffraction-limited rms size of the ionization laser, which is in agreement with the fact that direct ionization scales with the square of the intensity of the laser field, effectively narrowing the rms size of the ionization volume by a factor of 1/√(2). A diffraction limited ionization laser beam σ_ion=30 μm should result in 18/√(2)≈ 13ps, which is confirmed by the measurement presented in Fig. <ref>. We note that the number of electrons per pulse is smaller for ∥ polarization than for ⊥ polarization. We also find that the measured rms pulse lengths are shorter for ∥ than for ⊥ polarization and that the pulse length increases with the ionization wavelength. These experimental findings are not yet fully understood and require further investigation, which is outside the scope of this paper.
The ∼ 1ps variation in the data points in Fig. <ref> can be explained by a ∼ 1 μm pointing instability of the ionization laser beam.
§.§ Just-above-threshold photoionization (780-480)
Figure <ref> shows the measured rms pulse length of an electron pulse created by just-above-threshold ionization of excited rubidium atoms (Fig. <ref>b).
The pulse length at the position of the cavity is predominantly determined by the energy spread of the electron bunch, see Eq.(<ref>). Convolving the temporal electron pulse distributions, see Fig. <ref>, with a gaussian energy spread given by a gaussian ionization laser beam with a rms width of σ_ion=32 μm we can calculate the expected pulse length for various ionization laser wavelengths and polarizations. The results are represented by the solid lines in Fig. <ref>.
We see that the measured rms pulse length is in agreement with the pulse length determined by the energy spread. We also see that the rms pulse length increases as a function of wavelength but the increase is relatively small with respect to the magnitude of the pulse lengths, as expected from the simulations. The data shows a stronger growth than expected. Similar to the direct ionization scheme (480-480) the measurement resolution is limited due to pointing instabilities of the ionization laser beam. Additionally, the gaussian fits are less reliable due to deviation from perfect gaussian behavior, as will be discussed below, see Fig. <ref>.
The electron pulse shapes for various ionization laser wavelengths are depicted in Fig. <ref>, together with their gaussian fits which were used to determine the rms pulse lengths presented in Fig. <ref>. Figures <ref>b, c and d show sharp features around ±40 ps. These are probably due to a deviation from a perfect gaussian ionization laser beam profile. The features do not change position as a function of wavelength and are too sharp to be explained by a pulse train emitted by the rubidium atom (see Section <ref>) since this temporal information is washed out due to the energy spread of the beam, resulting in features with at least an rms width of 20 ps. The pulse train predicted by the simulations (see Fig. <ref>) cannot be measured since the time difference between the pulses is smaller than the pulse broadening due to the energy spread.
Figure <ref>c and d show a gaussian arrival time distribution with a very sharp peak in the center. This peak can be attributed to slowly ionizing Rydberg atoms. This effect will be discussed in Section <ref>.
These are measurements of both ultracold and ultrafast electron bunches containing ∼ 10^3 electrons per pulse. The waist scan presented in Section <ref> has been performed on the same beam with the cavity retracted from the beam line with λ=490nm and F=0.814MV/m. This shows that we can produce electron pulses containing ∼ 10^3 electrons with an rms width of ∼ 25 ps and a normalized rms transverse emittance of 1.5±0.1 nm·rad.
§.§ Pulse shaping
We will now show how transversely shaping the ionization laser beam profile allows us to temporally shape an electron pulse. Figure <ref>a shows an example in which the ionization laser profile was shaped such that the distance between the two peaks is 90 μm. This distribution will lead to a similarly shaped longitudinal energy distribution and thus to a similarly shaped arrival time distribution at the cavity. Using Eq. (<ref>) we can estimate that the temporal electron pulse length at the position of the cavity is ∼ 65ps. Figure <ref>b shows the streak as measured on the MCP detector. The peaks in this figure are ∼ 6.5mm apart. A full streak is equal to 14mm, see top plot of Fig. <ref>, which is equivalent to half an RF period (∼ 160ps). This means that the measured pulse length is ∼ 75ps, roughly in agreement with the expected pulse length. We thus show that the intensity distribution of the ionization laser profile is indeed imprinted on the temporal charge distribution we measure.
This opens the possibility to produce well defined pulse trains by shaping the intensity profile of the ionization laser.
§.§ Just-below-threshold photoionization (780-480-Ry)
We have seen the first evidence of slowly decaying Rydberg atoms in Fig. <ref>c and d. Here we will discuss how these slowly ionizing Rydberg atoms are formed when part of the ionization laser spectrum is below the ionization threshold.
The RF power to the cavity is switched on 10 μs before the ionization laser pulse reaches the MOT, to make sure that the electromagnetic fields inside the cavity are stable. In all the above mentioned measurements the RF power to the cavity was switched off 1 μs after the 480nm ionization laser pulse had reached the MOT. This was done to prevent unnecessary heating of the RF cavity thus reducing phase instabilities. By this time all the fast electrons have reached the detector (Time of flight is ∼ 50ns).
Due to the relatively low quality factor of the cavity the time it takes for the fields inside the cavity to build up and decay is ∼100 ns. Electrons traversing the cavity after the RF power was switched off will not be deflected and will therefore pass right through. Varying the time at which the RF power is switched off allows us to measure the fraction of electrons that pass the cavity at timescales ∼μs after the ionization laser pulse.
Figure <ref> shows the streak of an electron pulse when the RF power was switched off 1 μs after the laser pulse had reached the MOT. The left peak (between -5 and 5mm) shows the fast electron pulse that is streaked by the cavity; the right peak (between 5 and 10mm) shows electrons that have passed the cavity after the RF power was switched off. The solid line indicates a gaussian fit through the undisturbed electron peak.
We have measured the streak as depicted in Fig. <ref> as a function of the time after which the RF power was switched off. Figure <ref> shows the intensity of the peak going straight through versus the time the RF power was on after the ionization laser pulse hit the MOT, for various ionization laser wavelengths. The lowest lying data points in Fig. <ref> represent a center ionization laser wavelength close to the ionization threshold (λ=480nm). Increasing the ionization laser wavelength increases the intensity of the Rydberg signal. The intensity of the peaks decays exponentially and is fitted with exp(-t/τ), indicated by the solid curves. The decay constant τ as a function of wavelength is depicted in the inset plot of Fig. <ref>. The observed decay times are in agreement with earlier observations<cit.>. Note that the larger the wavelength of the ionization laser, the more Rydberg atoms are created, as expected. In addition, the time constant of the ionization process increases when the ionization laser wavelength is increased. This is attributed to the fact that lower-lying Rydberg states take longer to ionize.
§ CONCLUSIONS AND OUTLOOK
An RF cavity has been used to measure the pulse length of electron bunches produced by femtosecond laser photoionization of a laser-cooled and trapped ultracold atomic gas. Pulse lengths have been measured in three photoionization regimes: direct photoionization using only the femtosecond laser, two-step just-above-threshold photoionization, and two-step just-below-threshold photoionization. Both direct and just-above-threshold ionization produce ultrashort electron pulses with rms pulse durations ranging from 10 to 30 ps. Direct ionization produces few (∼10^2) and hot (∼10^4K) electrons while just-above-threshold ionization produces many (∼10^3) and ultracold (∼10K) electrons with a normalized transverse emittance ϵ̂_x=1.5±0.1 nm·rad. The measured pulse lengths can be explained by the energy spread associated with the length of the ionization volume. These bunches are sufficiently short to be compressed to sub-ps pulse lengths using established RF compression techniques.
Just-below-threshold ionization produces a Rydberg gas which ionizes on ∼μs timescales.
The method described here can be used to investigate the effect of space charge forces which is the main limitation of the achievable temporal resolution in single-shot UED. Measuring the pulse length in the longitudinal waist should allow the direct measurement of the influence of space charge forces on the longitudinal phase space distribution.
The temporal distribution of the predicted train of electron pulses leaving a rubidium atom could also be investigated by measuring the pulse length in the longitudinal waist.
This research is supported by the Institute of Complex Molecular Systems (ICMS) at Eindhoven University of Technology. The authors would like to thank Eddy Rietman and Harry van Doorn for expert technical assistance.
[King et al. (2005)] W. King, G. Campbell, A. Frank, B. Reed, J. F. Schmerge, B. J. Siwick, B. C. Stuart, and P. M. Weber, "Ultrafast electron microscopy in materials science, biology, and chemistry," J. Appl. Phys. 97 (2005).
[Sciaini and Miller (2011)] G. Sciaini and R. Miller, "Femtosecond electron diffraction: heralding the era of atomically resolved dynamics," Rep. Prog. Phys. 74, 096101 (2011).
[Zewail (2010)] A. H. Zewail, "Four-dimensional electron microscopy," Science 328, 187–193 (2010). doi:10.1126/science.1166135
[Vanacore et al. (2015)] G. M. Vanacore, A. W. P. Fitzpatrick, and A. H. Zewail, "Four-dimensional electron microscopy: ultrafast imaging, diffraction and spectroscopy in materials science and biology," Nano Today 11, 228–249 (2015). doi:10.1016/j.nantod.2016.04.009
[Egerton (2015)] R. F. Egerton, "Outrun radiation damage with electrons," Adv. Struct. Chem. Imaging 1, 5 (2015).
[Engelen et al. (2014)] W. J. Engelen, E. P. Smakman, D. J. Bakker, O. J. Luiten, and E. J. D. Vredenbregt, "Effective temperature of an ultracold electron source based on near-threshold photoionization," Ultramicroscopy 136, 73–80 (2014).
[Engelen (2013)] W. J. Engelen, "Coherent Electron Bunches from Laser-Cooled Gases," Ph.D. thesis, Eindhoven University of Technology (2013).
[McCulloch et al. (2013)] A. J. McCulloch, D. V. Sheludko, M. Junker, and R. E. Scholten, "High-coherence picosecond electron bunches from cold atoms," Nat. Commun. 4, 1692 (2013).
[van Mourik et al. (2014)] M. W. van Mourik, W. J. Engelen, E. J. D. Vredenbregt, and O. J. Luiten, "Ultrafast electron diffraction using an ultracold source," Struct. Dyn. 1, 034302 (2014).
[Speirs et al. (2015)] R. W. Speirs, C. T. Putkunz, A. J. McCulloch, K. A. Nugent, B. M. Sparkes, and R. E. Scholten, "Single-shot electron diffraction using a cold atom electron source," J. Phys. B: At. Mol. Opt. Phys. 48, 214002 (2015). doi:10.1088/0953-4075/48/21/214002
[Engelen et al. (2013)] W. J. Engelen, M. van der Heijden, D. J. Bakker, E. J. D. Vredenbregt, and O. J. Luiten, "High-coherence electron bunches produced by femtosecond photoionization," Nat. Commun. 4, 1693 (2013).
[Sparkes et al. (2016)] B. M. Sparkes, D. Murphy, R. J. Taylor, R. W. Speirs, A. J. McCulloch, and R. E. Scholten, "Stimulated Raman adiabatic passage for improved performance of a cold-atom electron and ion source," Phys. Rev. A 94, 023404 (2016). doi:10.1103/PhysRevA.94.023404
[Luiten et al. (2004)] O. J. Luiten, S. B. van der Geer, M. J. de Loos, F. B. Kiewiet, and M. J. van der Wiel, "How to realize uniform three-dimensional ellipsoidal electron bunches," Phys. Rev. Lett. 93, 094802 (2004). doi:10.1103/PhysRevLett.93.094802
[Thompson et al. (2016)] D. J. Thompson, D. Murphy, R. W. Speirs, R. M. W. van Bijnen, A. J. McCulloch, R. E. Scholten, and B. M. Sparkes, "Suppression of emittance growth using a shaped cold atom electron and ion source," Phys. Rev. Lett. 117, 193202 (2016). doi:10.1103/PhysRevLett.117.193202
[van Oudheusden et al. (2010)] T. van Oudheusden, P. L. E. M. Pasmans, S. B. van der Geer, M. J. de Loos, M. J. van der Wiel, and O. J. Luiten, "Compression of subrelativistic space-charge-dominated electron bunches for single-shot femtosecond electron diffraction," Phys. Rev. Lett. 105, 264801 (2010). doi:10.1103/PhysRevLett.105.264801
[Taban et al. (2008)] G. Taban, M. P. Reijnders, S. C. Bell, S. B. van der Geer, O. J. Luiten, and E. J. D. Vredenbregt, "Design and validation of an accelerator for an ultracold electron source," Phys. Rev. ST Accel. Beams 11 (2008).
[van der Geer et al. (2009)] S. B. van der Geer, M. J. de Loos, E. J. D. Vredenbregt, and O. J. Luiten, "Ultracold electron source for single-shot, ultrafast electron diffraction," Microsc. Microanal., 282–289 (2009). doi:10.1017/S1431927603030617
[Engelen et al. (2014b)] W. J. Engelen, E. J. D. Vredenbregt, and O. J. Luiten, "Analytical model of an isolated single-atom electron source," Ultramicroscopy 147, 61–69 (2014). doi:10.1016/j.ultramic.2014.07.001
[van der Geer and de Loos] S. B. van der Geer and M. J. de Loos, General Particle Tracer, http://www.pulsar.nl.
[Robicheaux and Shaw (1997)] F. Robicheaux and J. Shaw, "Calculated electron dynamics in an electric field," Phys. Rev. A 56, 278–289 (1997). doi:10.1103/PhysRevA.56.278
[Bordas et al. (2003)] C. Bordas, F. Lépine, C. Nicole, and M. J. J. Vrakking, "Semiclassical description of photoionization microscopy," Phys. Rev. A 68, 012709 (2003). doi:10.1103/PhysRevA.68.012709
[Lankhuijzen and Noordam (1996)] G. Lankhuijzen and L. Noordam, "Streak-camera probing of rubidium Rydberg wave packet decay in an electric field," Phys. Rev. Lett. 76, 1784–1787 (1996).
[Lassise et al. (2012)] A. Lassise, P. H. A. Mutsaers, and O. J. Luiten, "Compact, low power radio frequency cavity for femtosecond electron microscopy," Rev. Sci. Instrum. 83 (2012). doi:10.1063/1.3703314
[Lassise (2012)] A. C. Lassise, "Miniaturized RF Technology for Femtosecond Electron Microscopy," Ph.D. thesis, Eindhoven University of Technology (2012).
[Brussaard et al. (2013)] G. J. H. Brussaard, A. Lassise, P. L. E. M. Pasmans, P. H. A. Mutsaers, M. J. van der Wiel, and O. J. Luiten, "Direct measurement of synchronization between femtosecond laser pulses and a 3 GHz radio frequency electric field inside a resonant cavity," Appl. Phys. Lett. 103 (2013). doi:10.1063/1.4823590
[Robinson et al. (2000)] M. P. Robinson, B. L. Tolra, M. W. Noel, T. F. Gallagher, and P. Pillet, "Spontaneous evolution of Rydberg atoms into an ultracold plasma," Phys. Rev. Lett. 85, 4466–4469 (2000). doi:10.1103/PhysRevLett.85.4466
|
To obtain high quality diffraction images the relative energy spread of the electron bunch needs to be much smaller than unity and the transverse coherence length larger than the lattice spacing of the structure under investigation. Previous work demonstrated that the ultracold electron source is capable of producing high quality diffraction images<cit.>, even with electron bunches created by femtosecond photoionization<cit.>.
The transverse beam quality of the ultracold electron source has been investigated extensively previously<cit.>. However, the longitudinal electron beam characteristics have not been investigated in great detail. Recently it was shown that the ultracold electron source can be used to create ultracold electron bunches with a root-mean-square (rms) pulse length of 250 ps<cit.>. In this paper we present pulse length measurements with sub-ps resolution of ultracold electron bunches with an rms pulse duration of 25 ps containing ∼ 10^3 electrons per pulse. Up to ∼ 10^6 electrons can be extracted in a single shot from the ultracold electron source<cit.> but then strong space charge effects come into play. These space charge effects can be minimized by shaping the initial electron distribution <cit.>. In this paper we stick to maximally ∼ 10^3 electrons per bunch, thus avoiding the complication of space charge effects.
For UED, the electron pulse length has to be shorter than the shortest timescale associated with the process under investigation. This means that the electron bunch length should often preferably be much shorter than one picosecond. This can be achieved by compressing longer electron pulses using a resonant RF cavity<cit.> in TM_010 mode, but for this to work the electron pulse should not be longer than a few tens of ps<cit.> for a cavity operated at 3 GHz. In this paper we show that such electron bunches can indeed be extracted from the ultracold electron source.
This paper is organized as follows: In Section <ref> we introduce the ultracold electron source and the relevant photoionization schemes (Section <ref>A) that were used. In Section <ref>B and <ref>C we address the transverse electron beam quality (transverse emittance) and the longitudinal beam quality (longitudinal emittance). In Section <ref>D we discuss a model of the rubidium ionization process allowing us to make an estimate of the shortest electron pulse lengths that can be expected. In Section <ref> the experimental setup is described. Finally, in Section <ref>, the results will be presented.
We show that we can produce electron bunches which are both ultracold and ultrafast, with rms pulse durations of ∼ 20 ps, short enough to be compressed to sub-ps bunch lengths. In Section <ref> we will finish with a conclusion and an outlook. | null | null | An example of a streak as measured on the detector is depicted in Fig. <ref>a. This figure was recorded with an ionization laser wavelength λ=483 nm using the (480-480) ionization scheme. Every streak measurement consists of ∼ 10^3 electron pulses. For every wavelength λ of the ionization laser, the phase ϕ of the RF cavity was scanned over one entire period while the electron beam was imaged by the detector.
Figure <ref> shows a false color plot of the electron spot as measured on the detector as a function of the relative phase voltage v_phase. The figure clearly shows that the electron spot is swept across the detector, as predicted by Eq. (<ref>).
The position of the electron pulse with respect to the center of the streak for relative phase shifter voltages v_phase ranging from 0.45 to 0.95 is depicted in the top plot of Fig. <ref>. This figure nicely shows that scanning the phase of the RF cavity will shift the position of the electron spot across the detector, as shown in Fig. <ref>. The position of the example electron spot (see Fig. <ref>a) is indicated by the grey dot.
The bottom plot of Fig. <ref> shows the rms size of the electron spot, obtained by fitting a gaussian function. This shows that the cavity is most sensitive to arrival-time spread when the electron pulse is at the center of the streak, as predicted by Eq. (<ref>) and clearly visible in Fig. <ref>. Knowing the total length of the streak (used to determine 2 ω_c/ωsin(ζ)) we can calibrate the time axis separately for each wavelength. The right plot of Fig. <ref> shows the electron pulse in the time domain together with a gaussian fit, resulting in an rms pulse length of 15 ps.
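The position-to-time conversion at the streak center can be illustrated with a short calculation. The sketch below is a minimal illustration, not part of the measurement code; the 3 GHz drive frequency and the ∼14 mm full streak are the values quoted in the pulse-shaping subsection below, while the spot size sigma_x is an assumed example value:

```python
import numpy as np

# Minimal sketch of the time-axis calibration at the center of the streak.
# The spot position follows x(t) = A*sin(w*t + phase), so at the streak center
# the sweep speed is A*w, and an rms spot size sigma_x maps to an rms pulse
# length sigma_t = sigma_x / (A*w).

f_rf = 3e9                   # cavity drive frequency [Hz]
w = 2 * np.pi * f_rf         # angular frequency [rad/s]
A = 7e-3                     # streak amplitude [m], half of the ~14 mm full streak
sigma_x = 2e-3               # rms spot size at the streak center [m] (assumed)

sigma_t = sigma_x / (A * w)  # rms pulse length [s]
print(f"rms pulse length: {sigma_t * 1e12:.1f} ps")  # ~15 ps for these inputs
```

For these illustrative inputs the result reproduces the 15 ps rms pulse length quoted above.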
In the following subsections we first present the streak data of the direct photoionization scheme (480-480) and the just-above-threshold photoionization scheme (780-480). Subsequently we show that we can make a pulse train by shaping the ionization laser beam profile. Finally we present pulse length measurements of slowly ionizing Rydberg states using the just-below-threshold photoionization scheme (780-480-Ry).
§.§ Direct photoionization (480-480)
Figure <ref> shows the rms pulse length of electron pulses created by direct ionization from the rubidium ground state (Fig. <ref>a). The measurement has been done for laser polarizations both parallel and perpendicular to the acceleration field.
The rms pulse lengths measured here are shorter than can be explained by the diffraction-limited rms size of the ionization laser. This is consistent with the fact that direct ionization scales with the square of the laser intensity, which effectively narrows the rms size of the ionization volume by a factor of 1/√(2). A diffraction-limited ionization laser beam with σ_ion=30 μm should then result in 18/√(2) ≈ 13 ps, which is confirmed by the measurement presented in Fig. <ref>. We note that the number of electrons per pulse is smaller for ∥ polarization than for ⊥ polarization. We also find that the measured rms pulse lengths are shorter for ∥ than for ⊥ polarization and that the pulse length increases with the ionization wavelength. These experimental findings are not yet fully understood and require further investigation, which is outside the scope of this paper.
The ∼ 1ps variation in the data points in Fig. <ref> can be explained by a ∼ 1 μm pointing instability of the ionization laser beam.
§.§ Just-above-threshold photoionization (780-480)
Figure <ref> shows the measured rms pulse length of an electron pulse created by just-above-threshold ionization of excited rubidium atoms (Fig. <ref>b).
The pulse length at the position of the cavity is predominantly determined by the energy spread of the electron bunch, see Eq. (<ref>). Convolving the temporal electron pulse distributions (see Fig. <ref>) with a gaussian energy spread, given by a gaussian ionization laser beam with an rms width of σ_ion=32 μm, we can calculate the expected pulse length for various ionization laser wavelengths and polarizations. The results are represented by the solid lines in Fig. <ref>.
We see that the measured rms pulse length is in agreement with the pulse length determined by the energy spread. We also see that the rms pulse length increases as a function of wavelength, but the increase is relatively small with respect to the magnitude of the pulse lengths, as expected from the simulations. The data, however, shows a somewhat stronger growth than expected. As with the direct ionization scheme (480-480), the measurement resolution is limited by pointing instabilities of the ionization laser beam. Additionally, the gaussian fits are less reliable due to deviations from perfect gaussian behavior, as will be discussed below (see Fig. <ref>).
The electron pulse shapes for various ionization laser wavelengths are depicted in Fig. <ref>, together with their gaussian fits, which were used to determine the rms pulse lengths presented in Fig. <ref>. Figures <ref>b, c and d show sharp features around ±40 ps. These are probably due to a deviation from a perfect gaussian ionization laser beam profile. The features do not change position as a function of wavelength and are too sharp to be explained by a pulse train emitted by the rubidium atoms (see Section <ref>), since this temporal information is washed out by the energy spread of the beam, resulting in features with an rms width of at least 20 ps. The pulse train predicted by the simulations (see Fig. <ref>) cannot be resolved, since the time difference between the pulses is smaller than the pulse broadening due to the energy spread.
Figures <ref>c and d show a gaussian arrival time distribution with a very sharp peak in the center. This peak can be attributed to slowly ionizing Rydberg atoms. This effect will be discussed in Section <ref>.
These measurements demonstrate electron bunches that are both ultracold and ultrafast, containing ∼ 10^3 electrons per pulse. The waist scan presented in Section <ref> has been performed on the same beam with the cavity retracted from the beam line, with λ=490 nm and F=0.814 MV/m. This shows that we can produce electron pulses containing ∼ 10^3 electrons with an rms width of ∼ 25 ps and a normalized rms transverse emittance of 1.5±0.1 nm·rad.
§.§ Pulse shaping
We will now show how transversely shaping the ionization laser beam profile allows us to temporally shape an electron pulse. Figure <ref>a shows an example in which the ionization laser profile was shaped such that the distance between the two peaks is 90 μm. This distribution leads to a similarly shaped longitudinal energy distribution, and thus to a similarly shaped arrival time distribution at the cavity. Using Eq. (<ref>) we can estimate that the temporal electron pulse length at the position of the cavity is ∼ 65 ps. Figure <ref>b shows the streak as measured on the MCP detector. The peaks in this figure are ∼ 6.5 mm apart. A full streak is equal to 14 mm (see the top plot of Fig. <ref>), which is equivalent to half an RF period, ∼ 160 ps. This means that the measured pulse length is ∼ 75 ps, roughly in agreement with the expected pulse length; a quick numerical check is given below. We thus show that the intensity distribution of the ionization laser profile is indeed imprinted on the temporal charge distribution we measure.
This opens the possibility to produce well defined pulse trains by shaping the intensity profile of the ionization laser.
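As a quick numerical check of the calibration (a sketch using only the numbers quoted above):

```python
# The full streak (14 mm) corresponds to half an RF period (~160 ps), so a
# 6.5 mm peak separation on the detector maps to a time separation of
dt_ps = 6.5 / 14 * 160
print(f"{dt_ps:.0f} ps")  # ~74 ps, roughly matching the expected ~65 ps
```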
§.§ Just-below-threshold photoionization (780-480-Ry)
We saw the first evidence of slowly ionizing Rydberg atoms in Fig. <ref>c and d. Here we will discuss how these slowly ionizing Rydberg atoms are formed when part of the ionization laser spectrum is below the ionization threshold.
The RF power to the cavity is switched on 10 μs before the ionization laser pulse reaches the MOT, to make sure that the electromagnetic fields inside the cavity are stable. In all the above-mentioned measurements the RF power to the cavity was switched off 1 μs after the 480 nm ionization laser pulse had reached the MOT. This was done to prevent unnecessary heating of the RF cavity, thus reducing phase instabilities. By this time all the fast electrons have reached the detector (the time of flight is ∼ 50 ns).
Due to the relatively low quality factor of the cavity the time it takes for the fields inside the cavity to build up and decay is ∼100 ns. Electrons traversing the cavity after the RF power was switched off will not be deflected and will therefore pass right through. Varying the time at which the RF power is switched off allows us to measure the fraction of electrons that pass the cavity at timescales ∼μs after the ionization laser pulse.
Figure <ref> shows the streak of an electron pulse when the RF power was switched off 1 μs after the laser pulse had reached the MOT. The left peak (between -5 and 5mm) shows the fast electron pulse that is streaked by the cavity; the right peak (between 5 and 10mm) shows electrons that have passed the cavity after the RF power was switched off. The solid line indicates a gaussian fit through the undisturbed electron peak.
We have measured the streak as depicted in Fig. <ref> as a function of the time after which the RF power was switched off. Figure <ref> shows the intensity of the peak going straight through versus the time the RF power was on after the ionization laser pulse hit the MOT, for various ionization laser wavelengths. The lowest-lying data points in Fig. <ref> represent a center ionization laser wavelength close to the ionization threshold (λ=480 nm). Increasing the ionization laser wavelength increases the intensity of the Rydberg signal. The peak intensities show an exponential decay and are fitted with exp(-t/τ), indicated by the solid curves. The decay constant τ as a function of wavelength is depicted in the inset plot of Fig. <ref>. The observed decay times are in agreement with earlier observations<cit.>. Note that the larger the wavelength of the ionization laser, the more Rydberg atoms are created, as expected. In addition, the time constant of the ionization process increases when the ionization laser wavelength is increased. This is attributed to the fact that lower-lying Rydberg states take longer to ionize. | null | null |
http://arxiv.org/abs/1701.08027v1 | 20170127122519 | LocDyn: Robust Distributed Localization for Mobile Underwater Networks | [
"Cláudia Soares",
"João Gomes",
"Beatriz Ferreira",
"João Paulo Costeira"
] | cs.MA | [
"cs.MA",
"math.OC",
"stat.ML"
] |
LocDyn: Robust Distributed Localization for Mobile Underwater
Networks
Cláudia Soares, Member, IEEE,
João Gomes, Member, IEEE,
Beatriz Ferreira, Student Member, IEEE,
and João Paulo Costeira
This research was partially supported by EU-H2020 WiMUST project (grant agreement No. 645141) and Fundação para a Ciência e Tecnologia (project UID/EEA/50009/2013).
How to self-localize large teams of underwater nodes using only
noisy range measurements? How to do it in a distributed way, and
incorporating dynamics into the problem? How to reject outliers and
produce trustworthy position estimates? The stringent acoustic
communication channel and the accuracy needs of our geophysical
survey application demand faster and more accurate localization
methods. We approach dynamic localization as a MAP estimation
problem where the prior encodes dynamics, and we devise a convex
relaxation method that takes advantage of previous estimates at each
measurement acquisition step; the algorithm converges at an optimal
rate for first order methods. LocDyn is distributed: there is no
fusion center responsible for processing acquired data and the same
simple computations are performed for each node. LocDyn is accurate:
experiments attest to a smaller positioning error than a comparable
Kalman filter. LocDyn is robust: it rejects outlier noise, while the competing methods suffer large positioning errors.
Range-based localization, Distributed localization, Autonomous underwater
vehicles, Mobile location estimation, Robust network localization.
§ INTRODUCTION
The development of networked systems of agents that can interact with
the physical world and carry out complex tasks in various contexts is
currently a major driver for research and technological development <cit.>. This trend is also seen in contemporary ocean
applications and has propelled research projects on multi-vehicle systems
like MORPH (Kalwa et al. <cit.>) and, recently, WiMUST
(Al-Khatib et al. <cit.>).
Coordinated operation of vehicles requires a communication network to
share data, most critically, those data related to navigation and
positioning, as further explored in Abreu et
al. <cit.>. Our work concerns
localization of (underwater) vehicles, a key subsystem needed in the
absence of GPS to properly georeference any acquired data and also
used in cooperative control algorithms. This paper presents research
results within the scope of EU H2020 project WiMUST, aiming at
advanced control, communication and signal processing tools to enable
a team of marine robots, either on the surface or submerged, to
jointly conduct geoacoustic surveys.
Today, geophysicists reveal sub-bottom structures using powerful sound
sources and hydrophones. During surveys, a towed source produces
acoustic waves that penetrate the sea bottom, and its layers
are inferred from the pattern of echoes observed at the towed
hydrophones, over a long period of time and a wide geographic
area. Such surveys are routinely carried out to characterize the sea
bottom prior to underwater construction, to monitor pipelines and
submerged structures, and for the operation of offshore oil and gas
fields.
As depicted in Figure <ref>, a single vessel
tows very long arrays of streamers and, thus, operation of a
traditional geophysical survey at sea means we cannot change
trajectories to recheck interesting findings; also, maneuvering
between rectilinear transects while keeping the streamers untangled
is challenging. The vision of WiMUST is to replace the monolithic
setup with a more flexible one where multiple heterogeneous underwater
vehicles tow smaller arrays while retaining a precise spatial
alignment. These are easier to maneuver, and the absence of long
physical ties between the surface ship and the data acquisition
devices enables new capabilities such as operating at variable depths
or adaptively changing the shape of the ensemble of
hydrophones.
Self-localization is a cornerstone for multi-vehicle cooperative
control in general and for WiMUST in particular, as the acoustic
signals must be georeferenced to high precision to enable an accurate
inference of deep sub-bottom layers. Our specific goal in this paper
is to accurately localize a network of moving agents from noisy
inter-vehicle ranges and from the positions of a few anchors or
landmarks.
*Related work
The signal processing and control communities studied the network
localization problem in many variants, like static or dynamic network
localization, centralized or distributed computations,
maximum-likelihood methods, approximation algorithms, or outlier
robust methods.
The control community's mainstream approach to localization
relies on the robust and strong properties of the Kalman
filter to dynamically compensate noise and bias. Recent approaches can
be found in Pinheiro et al. <cit.>
and Rad et al. <cit.>. In the first very
recent paper, position and velocity are estimated from ranges,
accelerometer readings and gyroscope measurements with an Extended
Kalman filter. The authors of the second paper linearize the dynamic
network localization problem, solving it with a linear Kalman filter.
This last method is comparable with our range-only problem, although
the method requires knowledge of the noise's standard deviation.
The signal processing community traditionally studies static network
localization from a centralized perspective, like Keller and
Gur <cit.>, that formulate the problem as a regression
over adaptive bases; but the authors use squared distances, prone to
outlier noise amplification. Shang et
al. <cit.> follow a multidimensional
scaling approach, but multidimensional scaling works well only in
networks with high connectivity — a property not encountered in
practice in large-scale geometric networks. Biswas et
al. <cit.> and more recently Oğuz-Ekim et
al. <cit.> proposed semi-definite and
second order cone relaxations of the maximum likelihood
estimator. Although more precise, these convexified problems get
intractable even for a small number of nodes. Recently, we have
witnessed increasing interest from signal processing in distributed
static network localization. Papers by Shi et
al. <cit.>, Srirangarajan et
al. <cit.>, Chan and So <cit.>,
Khan et al. <cit.>, Simonetto and
Leus <cit.> and recently Soares et
al. <cit.> use different convex approximations to the
nonconvex optimization costs to devise scalable and distributed
algorithms for network localization. But for scenarios where
approximate solutions are not enough, researchers optimized the
maximum likelihood directly, obtaining solutions that depend on the
initialization of the algorithm. The methods in Calafiore et
al. <cit.> and Soares et
al. <cit.> increase the precision of a
relaxation-based solution, but are prone to local minima if wrongly
initialized. Lately, signal processing researchers produced solutions
for dynamic network localization; Schlupkothen et
al. <cit.> incorporated velocity
information from past position estimates to bias the solution of a
static localization problem via a regularization term.
*Our approach
In this paper we deal with the network localization problem from an
optimization-based standpoint. We formalize the network localization
problem under the maximum a posteriori framework considering
white Gaussian noise and we tightly relax the nonconvex estimator to a
convex unconstrained program. We optimize the approximated problem
with a scalable and fast first order method, achieving smooth
trajectories for a small number of distributed iterations. We define
distributed operation as requiring no central or fusion node, and
where all nodes perform the same types of computations. Distributed
operation of vehicles requires the existence of a communication
network to share navigation and positioning data.
We propose a distributed algorithm for network localization of
underwater mobile nodes with the main properties of following a
principled maximum a posteriori approach, distributed iterations at
each agent, robustness to outlier measurements, and fast convergence.
We call our algorithm localization under dynamics, or LocDyn for
short.
While LocDyn supports distributed operation in the classic sense, it
may also be viewed in a more restricted way simply as an efficient
parallel algorithm when run at a central location that collects all
required range measurements through an appropriate forwarding protocol
(discussed, e.g., in Ludovico et
al. <cit.>). Depending on the capacity of
the shared transmission medium the latter solution may be preferable
from a practical standpoint, but it does not impact the derivations
below.
Our approach is most closely related to one-shot network localization
methods in signal processing (i.e., starting anew when repeated over
time), but it adds a temporal dimension that enables filtering to
regularize position estimates and thus improve their
accuracy. Contrary to many Kalman-filtering-based approaches to
localization in the control literature, we assume an extremely simple
dynamic model for mobile nodes and do not rely on navigation
information that could be provided by an inertial measurement unit
(IMU). The rationale for this is that mobile nodes in our scenarios
are not necessarily AUVs; provisions could be made to install some
range-measurement and communication devices in streamers, whose
dynamics are not as well characterized as those of the towing AUVs,
and where the presence of high-quality IMUs seems far-fetched at
present.
§.§ Contributions
We introduce LocDyn, the first optimization-based dynamic network
localization estimator that is fully distributed and has optimal
convergence rate. LocDyn tightly approximates the MAP estimator, with
the nodes' dynamics as priors, so Bayesian estimation properties
are to be expected. We use a position predictor with information from
previous position estimates via a low-pass velocity approximation.
Our method is more accurate than a Kalman filter implementation by
more than 30cm per trajectory point in all our experiments.
In our companion UCOMMS'16 paper (Ferreira et al. <cit.>) we focused on demonstrating
benefits for collaborative localization using hybrid range/bearing
measurements and time-domain filtering. We explore the same
fundamental idea for time-recursive processing here, but the method
for predicting velocities is now considerably improved, and we propose
an efficient distributed localization algorithm, whereas in
Ferreira et al. the dynamic optimization
problem was solved using a general-purpose (centralized) convex
solver. Also, the algorithm is carefully characterized and benchmarked
using numeric simulations.
The hybrid setup of Ferreira et al.
adopts the FLORIS/CLORIS least-squares framework (from a previous work
by Ferreira et al. <cit.>), which in turn relies
on a so-called disk-based relaxation presented in Soares at
al. <cit.> to attain a high-precision convex
formulation that is amenable to distributed/parallel processing. In
the present paper we consider exclusively range measurements to
streamline the technical content, but we emphasize that accommodating
bearing measurements in a hybrid localization scheme involves only
minor adaptations in the optimization problem and distributed solution
algorithm.
§ STATIC NETWORK LOCALIZATION
The network of range-measurement and communication devices (nodes),
installed on AUVs and conceivably on streamers as well, is represented
as an undirected connected graph
𝒢 = (𝒱,ℰ). The node set
𝒱 = {1,2, …, n} denotes the agents with unknown
positions. There is an edge i ∼ j ∈ℰ between i
and j if a noisy range measurement between nodes i and j is
available at both, and if i and j can communicate with each other.
The set of landmarks with known positions[These are
considered constant for convenience, but could also be surface-bound
mobile devices with permanently known positions obtained through
GPS.], named anchors, is denoted by
𝒜 = { 1, …, m }. For each i ∈𝒱, we
let 𝒜_i ⊂𝒜 be the subset of anchors (if
any) relative to which node i also possesses a noisy range
measurement.
Let ℝ^p be the space of interest (p=2 for planar networks, and p=3 in the volumetric scenarios of greater interest here), x_i ∈ℝ^p the position of sensor i, and d_ij the noisy
range measurement between sensors i and j, known by both i and
j. Without loss of generality, we assume d_ij = d_ji. Anchor
positions are denoted by a_k ∈ℝ^p. Similarly, r_ik
is the noisy range measurement between sensor i and anchor k,
available at sensor i.
The distributed network localization problem addressed in this work
consists in estimating the sensors' positions x = { x_i : i ∈𝒱}, from the available measurements { d_ij : i
∼ j }∪{ r_ik : i ∈𝒱, k ∈𝒜_i }, through collaborative message passing between neighboring
agents in the communication graph 𝒢.
Under the assumption of zero-mean, independent and
identically-distributed, additive Gaussian measurement noise, the
maximum-likelihood estimator for the nodes' positions is the solution
of the optimization problem
minimize_x f(x),
where
f(x) = ∑_i ∼ j 1/2(‖x_i - x_j‖ - d_ij)^2 + ∑_i ∑_k ∈𝒜_i 1/2(‖x_i - a_k‖ - r_ik)^2.
Problem (<ref>) is nonconvex and difficult
to solve. Even in the centralized setting (, all measurements are
available at a central node) currently available iterative techniques
don't claim convergence to the global optimum. Also, even with
noiseless measurements, multiple solutions might exist due to
ambiguities in the network topology
itself <cit.>.
We can address this problem by optimizing a convex approximation
to (<ref>), amenable to distributed
implementation, as in Soares et al. <cit.>. The
convex approximation f̂ is tight at each term of f and can be
optimized by a first order method with optimal convergence rate. The
approximated problem is
minimize_x f̂(x).
The convex surrogate function f̂ is defined as
f̂(x) = ∑_i ∼ j 1/2 d^2_B_ij(x_i-x_j) + ∑_i ∑_k ∈𝒜_i 1/2 d^2_Ba_ik(x_i),
where d^2_B_ij and d^2_Ba_ik are the squared distances to a ball B_ij = {y : ‖y‖ ≤ d_ij}, and a ball Ba_ik = {y : ‖y - a_k‖ ≤ r_ik}, respectively. The
convexification strategy underlying (<ref>) is to relax
spheres in the constraint sets of squared distance functions to balls
(disks) B_ij, Ba_ik, hence the name disk-based
relaxation. In the next section we will use function f̂ and
a modified version of problem (<ref>) to
localize underwater moving nodes.
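The squared distance to a ball has a simple closed form via Euclidean projection, which is what makes f̂ cheap to evaluate and differentiate. Below is a minimal sketch of the two building blocks (the function names are our choices, not taken from a released implementation):

```python
import numpy as np

def proj_ball(y, center, radius):
    """Euclidean projection of y onto the ball {z : ||z - center|| <= radius}."""
    diff = y - center
    nrm = np.linalg.norm(diff)
    if nrm <= radius:
        return y
    return center + radius * diff / nrm

def sq_dist_ball(y, center, radius):
    """Squared distance d^2_B(y) from y to the ball B(center, radius)."""
    return np.linalg.norm(y - proj_ball(y, center, radius)) ** 2
```

Each edge term d^2_B_ij(x_i - x_j) is sq_dist_ball(x_i - x_j, 0, d_ij), and each anchor term d^2_Ba_ik(x_i) is sq_dist_ball(x_i, a_k, r_ik).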
§.§ Assumptions
Range-only position estimation needs at least
p+1 anchors, or an equivalent set of physical constraints, to avoid
spatial ambiguities <cit.>.
Consequently, all range-only methods assume that the number
of anchors, or landmarks, is greater than the dimension of the
deployment space — 3 anchors for planar deployment and 4 for a
volumetric one.
§ MOTION-AWARE LOCALIZATION
One naive approach to localize a network of moving agents would be to
estimate the vehicles' positions solving (<ref>)
at each time step. Although it is possible, it does not take advantage
of the knowledge of previously estimated positions, so something
will be lost in processing time or communication bandwidth.
To bring motion into play we invoke the concept of prior
knowledge in Bayesian statistics and assume a Gaussian prior on the
nodes' positions, now understood as random variables. Each position's
distribution depends on the Gaussian distribution of the noisy range
measurements, its own prior and distributions of neighboring nodes'
positions.
The prior is the predicted position
x̃_i(k+1) = x̂_i(k) + v_i(k) Δ T,
where x̂_i(k) is the estimated position at time
step k, and v_i(k) is the measured or estimated velocity of i.
As measurements are not taken continuously, we model time in discrete
steps t = k Δ T, where t is continuous time, k is the time
step, and Δ T is the sampling period. Without loss of
generality we consider Δ T fixed.
The distance measurement between vehicles i and j at positions
x_i^⋆ and x_j^⋆ is modeled as
d_ij = ‖x_i^⋆ - x_j^⋆‖ + 𝒩(0,σ^2),
and, similarly, the range measurement between vehicle i and
anchor k is
r_ik = ‖x_i^⋆ - a_k‖ + 𝒩(0,σ^2).
For each node i the prior distribution is also Gaussian, centered
on x̃_i with variance ς^2. Assuming
independency, the posterior distribution of the positions at a given
time step is, up to a normalization constant, p(x|{d}) ∝
p({d}|x)p(x). This evaluates to
p(x|{d}) ∝ ∏_i ∼ j p(‖x_i-x_j‖ - d_ij) ∏_i ( ∏_k ∈𝒜_i p(‖x_i-a_k‖ - r_ik) p(x_i - x̃_i) ),
where all densities on the right-hand side are Gaussian.
We cast network localization as a maximum a posteriori
estimation problem. After applying the logarithm, we get
minimize_x 1/σ^2 f(x) + 1/ς^2 ∑_i ‖x_i - x̃_i‖^2,
equivalently written as
minimize_x f(x) + λ‖x - (x̂ + v Δ T)‖^2,
where we multiplied by σ^2 and
adopted λ = σ^2/ς^2. Thus, the
parameter λ has a physical interpretation: it is the ratio of
the uncertainty in the measurements to the richness of the trajectory.
The concatenated vehicles'
velocities at time k, v(k), can be measured or approximated from
the previous location estimates.
As we have seen, this problem is nonconvex, so we convexify it using
the approach from section <ref> to obtain the problem
minimize_x g_λ(x) = f̂(x) + λ‖x - (x̂ + v Δ T)‖^2.
We can also interpret (<ref>) as a regularized
network localization problem. The regularization parameter λ
controls how much we want to bias our estimate towards the predicted
position.
For example, if a node moves linearly as in
Figure <ref>, at k=2 our formulation will
use velocity v(1) to predict the position of the vehicle, and bias
the static localization problem towards the predicted solution.
We introduced problem (<ref>) in
Ferreira et al. <cit.> in the context
of hybrid collaborative localization based on range and bearing
measurements.
§ THE LOCDYN ALGORITHM
Problem (<ref>) has properties that allow fast
optimization: it is a sum of convex functions and, thus, convex. It is
actually strongly convex[Recall that a function g is
m-strongly convex if and only if g(x) - m/2 x^⊤x is
convex for all x.], meaning that at any point x the function is
lower bounded by a quadratic and thus possesses a unique minimum on
compact sets (c.f. Boyd and
Vandenberghe <cit.>). Also, our objective
function in (<ref>) is L-smooth, meaning that there
is a quadratic upper bound to g_λ(x) for all x,
so g_λ does not grow too
fast. Appendix <ref> holds proofs and
computation of the L and m constants.
If g_λ is strongly convex and has a Lipschitz continuous
gradient, a first-order minimization algorithm can be maximally
accelerated (c.f. Nesterov <cit.>).
The gradient of g_λ is
∇ g_λ = ∇f̂ (x) + 2 λ (x-x̃),
where ∇f̂ is, as defined in (15) of Soares et
al. <cit.>:
∇f̂(x) = ℒx - A^⊤P_B(Ax) + [ ∑_k ∈𝒜_1 (x_1 - P_Ba_1k(x_1)); ⋮; ∑_k ∈𝒜_n (x_n - P_Ba_nk(x_n)) ],
where A=C ⊗ I_p, C is the arc-node incidence matrix of
𝒢, I_p is the identity matrix of size p, and
B is the Cartesian product of the
balls B_ij = {y : y≤ d_ij} corresponding to
all the edges in ℰ. Similarly,
Ba_ik = {y : y-a_k≤ r_ik}. Also,
ℒ= A^⊤A = L ⊗ I_p, with L being the
Laplacian matrix of graph 𝒢.
Algorithm <ref> specifies LocDyn as detailed this far, with
its regularization term. As discussed previously, k indexes the time steps at which range data and anchor positions are acquired. In the interval between acquisitions we run the algorithm, whose iterations are indexed by κ. The procedure inherits the distributed properties of the static method in Soares et al.<cit.>. Step <ref> computes
the extrapolated points w_i in a standard application of Nesterov's
method <cit.>. Step <ref>
corresponds to the i-th entry of ∇f̂ and an affine term
on x_i dependent only on each node's unknown coordinates,
velocity, and the position estimated in the previous time
step. Constants c_(i ∼ j,i) denote the entry (i ∼ j,i) in
the arc-node incidence matrix C, and δ_i is the degree of
node i. The i-th entry of ℒx can be computed by
node i from its current position estimate and the position estimates
of the neighbors,
as (ℒx)_i = δ_ix_i - ∑_j ∈
N_ix_j. As further detailed in Soares et
al. <cit.>,
(A^⊤P_B(Ax))_i = ∑_j ∈ N_i c_(i ∼ j,i) P_B_ij(x_i-x_j),
as presented in Step <ref>.
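To make the iteration concrete, the following sketch implements one accelerated-gradient step on g_λ for the whole network at once (a centralized transcription for readability; in the distributed algorithm each node i evaluates only its own rows, using messages from its neighbors). The names, the dictionary-based measurement storage, and the momentum schedule are our illustrative choices, not taken from a released implementation:

```python
import numpy as np

def proj_ball(y, center, radius):
    # Euclidean projection onto {z : ||z - center|| <= radius}.
    diff = y - center
    nrm = np.linalg.norm(diff)
    return y if nrm <= radius else center + radius * diff / nrm

def locdyn_step(x, x_prev, kappa, edges, d, anchors, heard, r,
                x_hat, v, dT, lam, L_const):
    """One Nesterov step on g_lambda (a sketch).

    x, x_prev: (n, p) current and previous iterates; edges: list of (i, j);
    d[i, j]: inter-node ranges; anchors: (m, p); heard[i]: anchor indices
    heard by node i; r[i, k]: node-anchor ranges; x_hat, v: previous-step
    position estimates and velocity estimates; lam: regularization weight;
    L_const: the Lipschitz constant L computed in the appendix.
    """
    beta = (kappa - 1.0) / (kappa + 2.0)        # one common momentum schedule
    w = x + beta * (x - x_prev)                 # extrapolated points w_i

    grad = 2.0 * lam * (w - (x_hat + v * dT))   # gradient of the prior term
    for (i, j) in edges:                        # pairwise disk terms
        diff = w[i] - w[j]
        g = diff - proj_ball(diff, np.zeros_like(diff), d[i, j])
        grad[i] += g
        grad[j] -= g
    for i, ks in heard.items():                 # anchor disk terms
        for k in ks:
            grad[i] += w[i] - proj_ball(w[i], anchors[k], r[i, k])

    return w - grad / L_const                   # gradient step with size 1/L
```

Note that the gradient of each term 1/2 d^2_B(u) is simply u - P_B(u), so a node only ever needs its neighbors' current estimates, the measured ranges, and its own prediction x̂_i + v_i Δ T.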
Next, we deal with how to approximate the velocity of a node in the
global reference frame only with data collected so far.
§.§ Velocity estimation
To include vehicle dynamics in network localization, we penalize
discrepancies between the predicted and estimated positions. The
predicted positions are computed based on the previous estimated
position and the vehicle's velocity in world coordinates. This
velocity can be measured by the AUV, or, as we consider next, it can
be estimated from the moving pattern so far. In a previous paper
(Ferreira et al. <cit.>), inspired by
the recent work by Schlupkothen et
al. <cit.>, we estimated the velocity
of each vehicle in the global reference frame by averaging the norm
and the angle over a sliding time window. But prediction by averaging
is accurate only if the averaged quantities are nearly constant
through time — meaning linear constant motion. This is not
necessarily the case in the projected futuristic scenarios for
geoacoustic surveying; in this type of application, richer
trajectories such as the one depicted in Figure <ref>
are meant to densely cover the geographic area under study. In this
paper we estimate each vehicle's velocity v̂_i by taking
Taylor expansions of the derivative of the position. We start by
approximating velocity by central finite differences. Unlike the
causal backward Euler difference approximation, that converges
linearly, the centered difference approximation converges
quadratically as Δ T → 0 and is defined as:
v̂_i(k) Δ T = x_i(k+1) - x_i(k-1)/2.
Higher order approximations have even faster convergence rates, and
they are more robust to noise in the position estimates. Nevertheless,
to use them in causal estimators like LocDyn, we have to introduce a
time lag that covers the higher order time shifts. As communication in
the underwater acoustic channel has low bandwidth and slow propagation
speed, networking protocols may entail considerable latency that
invalidates the estimation of velocities with too large time shifts, so there
is a tradeoff between accuracy and timeliness in choosing the order
of approximation. A causal sixth-order approximation is
v̂_i(k) Δ T = ( 45 (x_i(k-3)-x_i(k-5)) - 9 (x_i(k-2)-x_i(k-6)) + (x_i(k-1)-x_i(k-7)) ) / 60.
However, computing differences amplifies noise. To reduce the impact
of noise in velocity estimation, we take the derivative approximation
as an anti-symmetric FIR filter with a defined accuracy order. The
derivative approximation's transfer function can then be designed to
match the transfer function of the continuous derivative
operator. Holoborodko <cit.> proposed a method to
generate differentiator filters that also cut high frequencies,
rejecting noise both in measurements and in the estimation process,
while preserving the differentiation behavior at low frequencies. This
method was successfully used, for example, by Khong et
al. <cit.> or Hosseini and
Plataniotis <cit.>. We use the smooth low noise
differentiator fitted from a second-degree polynomial with a lag of 7
samples. Although it uses the same past estimates as (<ref>), the low-pass designed coefficients reject high-frequency noise. The expression for the product between the estimated velocity and the sampling period is
v̂_i(k) Δ T = ( 5 (x_i(k-3)-x_i(k-5)) + 4 (x_i(k-2)-x_i(k-6)) + (x_i(k-1)-x_i(k-7)) ) / 32.
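In code, the estimator above is a seven-tap anti-symmetric FIR filter applied to the stored position estimates. A minimal sketch (the function name and array layout are our choices):

```python
import numpy as np

# Smooth noise-robust differentiator taps (Holoborodko), acting on the seven
# most recent estimates [x(k-1), x(k-2), ..., x(k-7)]; dividing by dT yields
# the causal velocity estimate used in the position predictor.
TAPS = np.array([1, 4, 5, 0, -5, -4, -1]) / 32.0

def velocity_estimate(history, dT):
    """history: array of shape (7, p), with history[0] = x(k-1), ...,
    history[6] = x(k-7); returns the velocity estimate v_hat(k)."""
    return TAPS @ np.asarray(history, dtype=float) / dT
```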
§.§ Convergence
The accelerated gradient method implemented in Alg. <ref>
for function g_λ with constants m and L specified
in (<ref>) and (<ref>) converges at the optimal rate
O( κ^-2) as proved by Nesterov
<cit.>,<cit.>. Also, the distance to the
unique global optimum g_λ^⋆ at iteration κ is
theoretically bounded by
g_λ(x(κ)) - g_λ^⋆ ≤ 4/(2+κ√(m/L))^2 ( g_λ(x(0)) - g_λ^⋆ + m/2 ‖x(0) - x^⋆‖^2 ).
§ NUMERICAL EVALUATION
We evaluate LocDyn in trajectories used in (not necessarily
geoacoustic) surveys: a lap (Figure <ref>), descending
3D spiral (Figure <ref>), and the lawn mower
(Figure <ref>).
In all experiments, we contaminate distance measurements at each time
step k with zero-mean white Gaussian noise with standard deviation
σ = 1m. This value for the standard deviation of
measurement noise is an upper bound on the real-world noise observed in the
WiMUST vehicles, where ranging errors are less than 1m.
We generated synthetic data according to
d_ij = |‖x_i^⋆ - x_j^⋆‖ + 𝒩(0,σ^2)|,
for inter-sensor distances and
r_ik = |‖x_i^⋆ - a_k‖ + 𝒩(0,σ^2)|,
for sensor-anchor distances. We emphasize that measurements are in
mismatch with the data model considered in Sections <ref>
and <ref>, but the discrepancy is not serious, as the
likelihood of d_ij, r_ik being nonpositive in (<ref>) and (<ref>) is typically very small.
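For reference, this measurement generation takes only a few lines (a sketch; the seeding and function signature are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # measurement noise standard deviation [m]

def noisy_range(p, q):
    """Synthetic range between true positions p and q, as in the equations above."""
    return abs(np.linalg.norm(np.asarray(p) - np.asarray(q)) + rng.normal(0.0, sigma))
```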
We benchmark LocDyn against static localization in Soares et
al. <cit.> and against the linear Kalman filter
solution proposed by Rad et al. <cit.>. We
compare LocDyn with the Kalman filter in the lap and the lawn mower
trajectories and not in the spiral, because the
implementation that the authors kindly made available to us works only
in 2D. We provided the true σ to the Kalman filter, whereas
LocDyn and static localization do not use it. All the other Kalman
filter parameters were not altered.
We ran 100 Monte Carlo trials for each experiment, and measured the
empirical error as
Error = 1/K ∑_k=1^K ‖x̂(k) - x^⋆(k)‖,
where K is the total number of steps in the
trajectory, and x^⋆(k) is the concatenation of the true
positions x_i^⋆.
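The metric is straightforward to compute from the stacked trajectories (a sketch):

```python
import numpy as np

def trajectory_error(x_est, x_true):
    """Empirical error: x_est and x_true have shape (K, n*p), each row being
    the concatenated node positions at one time step."""
    return np.mean(np.linalg.norm(x_est - x_true, axis=1))
```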
We initialize LocDyn and the Kalman filter in the position
marked with a magenta cross. Static localization doesn't require
initialization. Anchors are placed so that the trajectory lies in their convex hull. In 2D, we placed 6 anchors for the lawn mower and 12 for
the lap. For 3D, we used 16 anchors.
§.§ Lap trajectory
Lap trajectories are frequent in ocean surveying. They are also rich
since they combine linear parts with curved ones. We tested LocDyn,
the Kalman filter and static localization in the lap shown in
Figure <ref> and one of the Monte Carlo runs is
depicted in Figure <ref>. We perceive the LocDyn
trajectory as the most natural of the three. There is an intentional
slowdown of the vehicles around (-15,-5) and LocDyn is the least
affected by it. We observed these behaviors in all our visualizations
of the lap estimated trajectories.
We can see in more detail the behavior of the Kalman filter estimates
in Figure <ref>, where we displayed only the center
vehicle. It transitions well from the linear to the circular part, but it drifts when entering the next linear one. When the first slowdown starts it loses track, and the same happens again after the second slowdown.
But what about statistically relevant behavior? To answer this, we
simulated the lap for 100 Monte Carlo trials and computed the
empirical error of the trajectories. Figure <ref>
displays the resulting empirical cumulative distributions (CDF) for
the three algorithms. Not only is LocDyn more accurate, but it also
shows less variance in the error. The Kalman filter lags behind static
localization, although it delivers smoother trajectories. This
increase in the error is due to the bad accuracy near the slowdown
points discussed previously.
§.§ Descending spiral trajectory
Descending spirals are useful for monitoring the water column. They
are difficult trajectories to follow so, although they are not
associated with geophysical surveying activities, we tested LocDyn in
them.
Figure <ref> shows an example run of the
descending spiral trajectory. LocDyn has the smoothest trajectory,
although both LocDyn and static localization deviate towards the center of the spiral. This is due to the small number of anchors in this setup, given the additional degrees of freedom in passing from 2D to 3D. Nevertheless, including dynamic information drastically improves the localization accuracy.
We documented the experimental accuracy increase in
Figure <ref>. Here, we see the CDF of the
empirical error from 100 Monte Carlo simulations. Noticeably, LocDyn
confirms the intuition from the example trajectory of
Figure <ref>: the average accuracy gain of
using LocDyn is of 30cm — about one third of the 1m of measurement
noise standard deviation.
§.§ Lawn mower trajectory
In this experiment we test the robustness of the algorithms to
outlier noise. Outliers are due, for example, to
reflection or multi-path of the acoustic wave. At each step k with
probability 1% the noisy range measurement of the vehicle to the anchor
on the SW corner is doubled.
Figure <ref> depicts one example run of the
algorithms. At the outlier-contaminated steps we see that the Kalman
filter loses the lawn mower path. Less frequently, static
localization also increases the localization error.
Figure <ref> displays only LocDyn estimates, and
shows that the trajectory was not particularly affected.
The empirical CDF demonstrates the impact of outlier noise in the
positioning error, and confirms that LocDyn is not only the most
accurate by far, but also the one with the least variance in the error.
§ CONCLUSION
We explored self-localization of a network of underwater vehicles from
no more than noisy range measurements in a principled MAP estimation
framework. We produced a fast and distributed algorithm, LocDyn, which tops a comparable Kalman filter estimator in accuracy. We showed
the advantage of encoding the dynamic behavior of moving vehicles, by
comparing LocDyn with a static network localization estimator.
Also, we gave physical meaning to LocDyn's only parameter, as a ratio
of variability in measurement noise to variability of trajectories. An
important open end of this work is to devise a way to eliminate the
parameter altogether so to have a parameter-free solution for
motion-aware network localization.
§ ACKNOWLEDGMENT
The authors would like to thank António Pascoal and Jorge Ribeiro
from DSOR-ISR for the information regarding AUV missions, and Hadi
Jamali Rad, for kindly providing code for his method.
§ L-SMOOTHNESS AND M-STRONG CONVEXITY OF G_Λ
L-smoothness implies that the gradient of g_λ is Lipschitz
continuous with Lipschitz constant L.
A function g is Lipschitz continuous if
there exists a Lipschitz constant L such that
‖g(x) - g(y)‖ ≤ L ‖x-y‖,
for all x and y. We now prove that the gradient of g_λ is
Lipschitz continuous and a Lipschitz constant L can be identified by
‖∇g_λ(x) - ∇g_λ(y)‖ = ‖∇f̂(x) - ∇f̂(y) + 2λ(x-y)‖
≤ ‖∇f̂(x) - ∇f̂(y)‖ + 2λ‖x-y‖
≤ (L_f̂ + 2λ) ‖x-y‖,
so we may take L = L_f̂ + 2λ.
The inequality (<ref>) refers to the constant L_f̂
in (16) of Soares et al. <cit.>.
A strong convexity
modulus m for g_λ can be computed from
g_λ(x) - m/2 x^⊤x = f̂(x) + (λ - m/2) x^⊤x - 2λx̃^⊤x + λx̃^⊤x̃,
noticing that the left-hand side is convex if all terms in the
right-hand side are convex in x. This entails that g_λ is
strongly convex with modulus
m ≤ 2λ.
As we want
robustness of g_λ to errors in x, we choose m to have
the smallest condition number L/m for
function g_λ; thus, from now on we take the largest
possible
m = 2λ.
http://arxiv.org/abs/1701.07745v1 | 20170126154422 | Pseudo-$R^2$ statistics under complex sampling | [
"Thomas Lumley"
] | stat.ME | [
"stat.ME"
] |
Model summaries based on the ratio of fitted and null likelihoods have been proposed for generalised linear models, reducing to the familiar R^2 coefficient of determination in the Gaussian model with identity link. In this note I show how to define the Cox–Snell and Nagelkerke summaries under arbitrary probability sampling designs, giving a design-consistent estimator of the population model summary. I also show that for logistic regression models under case–control sampling the usual Cox–Snell and Nagelkerke R^2 are not design-consistent, but are systematically larger than would be obtained with a cross-sectional or cohort sample, even in settings where the weighted and unweighted logistic regression estimators are similar or identical.
Keywords: likelihood, logistic regression, case-control study, sampling weights
§ BACKGROUND
The coefficient of determination for a linear regression model is the proportional reduction in squared prediction error from using the model prediction instead of the mean. That is, if we have n observations, a vector X of p predictors and a model
E[Y|X=x]=μ=α+xβ
we define
R^2 = 1 - ( ∑_i=1^n (Y_i-μ̂_i)^2 ) / ( ∑_i=1^n (Y_i-Y̅)^2 ).
<cit.>, in an exercise, proposed a definition for binary Y in terms of the likelihood ratio
R^2_CS = 1 - ( L(0) / L(β̂) )^{2/n}
where L(0) is the maximised likelihood for an intercept-only model
E[Y]=α
and L(β̂) is the likelihood maximised over α and β.
This definition reduces to the coefficient of determination in a linear-Normal model with Y∼ N(μ,σ^2) if the likelihood is also maximised over the variance parameter (so -2log L = n + nlog(2πσ̂^2)), though not if L(0)/L(β̂) is the likelihood ratio from a model with fixed σ (as is common for generalised linear models).
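To see the reduction explicitly (a short derivation filling in the intermediate step): with $\sigma$ profiled out, $L(\theta)\propto \hat\sigma^{-n}e^{-n/2}$, so

$$\left(\frac{L(0)}{L(\hat\beta)}\right)^{2/n} = \frac{\hat\sigma^2(\hat\beta)}{\hat\sigma^2(0)} = \frac{\sum_{i=1}^n (Y_i-\hat\mu_i)^2}{\sum_{i=1}^n (Y_i-\bar Y)^2},$$

and one minus this ratio is exactly the coefficient of determination above.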
<cit.> noted that for Bernoulli data the maximum possible value of the likelihood is 1, and so the maximum possible value of R^2_CS is 1-(L(0))^2/n. He defined a rescaled pseudo-R^2
R^2 = R^2_CS / ( 1 - L(0)^{2/n} ).
which attains the value 1 when prediction is perfect. Nagelkerke's R^2 has become popular for logistic regression.
As maintainer of the survey package for R<cit.>, I have been asked on more than one occasion how to compute Nagelkerke's R^2 under complex sampling.
In this note I propose a definition in terms of a superpopulation parameter, and a design-based estimator. I show the estimator consistently estimates the population value and reduces exactly to Nagelkerke's statistic under simple random sampling. I also compare the proposed definition to the estimates obtained by naive use of the standard formula in unweighted logistic regression under case–control sampling, the standard analysis method in epidemiology and biostatistics.
Code and data for the examples are at <github.com/tslumley/pseudorsq>, and the estimators are available in the `survey' package starting from version 3.31–7.
§ DEFINITION
Following <cit.>, I will assume a finite population of size N is an independent and identically distributed draw of vectors (X, Y) from some (unknown) probability model with density g(x,y), and that a probability sample of size n is taken from it with known sampling probabilities π_i for the ith individual. I will write E_p[·] for expectations with respect to the superpopulation model and E_π[·] for expectations with respect to finite sampling, and write w_i for the sampling weights 1/π_i.
The analysis goal is to fit parametric regression models f(y|x; θ) for the marginal densities of Y given X, where the parameter vector θ includes the regression intercept and slopes (α, β) and possibly other nuisance parameters. I do not assume this parametric family necessarily contains the true model; the aim is to estimate the `least false' parameter θ^*, the value minimising the Kullback–Leibler divergence between f(y|x; θ) and the true g(·). Where there is a possibility of confusion, I will refer to f(y|x; θ) as the `working model' and the likelihood based on it as the `working likelihood'. Let θ̃_N be the `census parameter', the value obtained by fitting the model f(y|x; θ) to the whole population. By standard results on maximum likelihood estimation, θ̃_N →_p θ^*.
The parametric model is fitted to the complex sample by maximising the weighted loglikelihood
ℓ̂(θ) = ∑_i∈sample w_i ℓ_i(θ) = ∑_i∈sample w_i log f(y_i|x_i; θ).
The weighted likelihood is unbiased for the population working loglikelihood:
E_π[ℓ̂(θ)]=∑_i=1^N ℓ_i(θ)
and its derivative is unbiased for the population working score function.
Under mild conditions on the superpopulation distribution g(·) and the sampling design, the resulting estimator θ̂ is √(n)-consistent for θ̃_N, and so also for θ^*. Because the main interest of the analyst is usually in the regression parameters β I will abuse notation by writing ℓ̂(β) for the value of ℓ̂(θ) with any other parameters estimated by maximum weighted likelihood.
We can now consider the Cox–Snell summary, R^2_CS. First, let ℓ_1(β) be the working loglikelihood for a single random observation from the superpopulation distribution, and define the superpopulation summary ρ^2_CS by
-1/2 log (1- ρ^2_CS) = E_p[ℓ_1(β^*)-ℓ_1(0)].
The ordinary Cox–Snell R^2_CS for an iid sample is an estimator of ρ^2_CS obtained by replacing the expectation with a population or sample average, and under iid sampling the consistency of R^2_CS for ρ^2_CS follows from the law of large numbers, the consistency of maximum likelihood, and the smoothness of ℓ(·).
The finite population has been constructed as an iid sample, so I can define the census (population) parameter R̃^2_CS by
-1/2 log (1-R̃^2_CS) = 1/N (ℓ(β̃)-ℓ(0)) = 1/N ∑_i=1^N (ℓ_i(β̃)-ℓ_i(0))
and obtain a design-based plug-in estimator
-1/2 log (1-R̂^2_CS) = 1/N̂ (ℓ̂(β̂)-ℓ̂(0)) = ∑_i∈sample w_i (ℓ_i(β̂)-ℓ_i(0)) / ∑_i∈sample w_i
That is, under complex sampling, the likelihood should be replaced by the weighted (pseudo)likelihood, and the n in the Cox–Snell and Nagelkerke formulas should be replaced by the sum of the weights.
The analysis so far has not assumed the model is correctly specified. If the fitted parametric model family f(·; θ) happened to include the true superpopulation distribution g(·), the right-hand side of equation <ref> would be the mutual information between X and Y, or equivalently, the difference between the marginal and conditional entropy of Y. This relationship to information theory gives another reason to regard ρ^2_CS as a meaningful model summary.
The design-based Cox–Snell statistic R̂^2_CS can now be rescaled to an upper limit of 1, obtaining a design-based version of the Nagelkerke pseudo-R^2.
R̂^2 = R̂^2_CS / ( 1 - exp( 2ℓ̂(0)/N̂ ) )
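As a concrete illustration (this is a minimal numpy sketch of the estimator just defined, not the survey package implementation; the IRLS fitting routine and all function names are mine, and the design information needed for standard errors is ignored):

```python
import numpy as np
from scipy.special import expit

def logit_loglik(X, y, beta):
    """Per-observation Bernoulli loglikelihood contributions l_i(beta)."""
    eta = X @ beta
    return y * eta - np.logaddexp(0.0, eta)

def fit_weighted_logit(X, y, w, n_iter=25):
    """Maximise the weighted pseudolikelihood sum_i w_i l_i(beta) by IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = expit(X @ beta)
        score = X.T @ (w * (y - p))                     # weighted score
        info = X.T @ (X * (w * p * (1 - p))[:, None])   # weighted information
        beta += np.linalg.solve(info, score)
    return beta

def design_based_r2(X, y, w):
    """Design-based Cox-Snell and Nagelkerke pseudo-R^2: the usual formulas
    with n replaced by the estimated population size Nhat = sum of weights."""
    X1 = np.column_stack([np.ones(len(y)), X])   # intercept + predictors
    X0 = np.ones((len(y), 1))                    # intercept-only (null) model
    ll_full = w @ logit_loglik(X1, y, fit_weighted_logit(X1, y, w))
    ll_null = w @ logit_loglik(X0, y, fit_weighted_logit(X0, y, w))
    Nhat = w.sum()
    r2_cs = 1.0 - np.exp(-2.0 * (ll_full - ll_null) / Nhat)
    r2_nag = r2_cs / (1.0 - np.exp(2.0 * ll_null / Nhat))
    return r2_cs, r2_nag
```

With unit weights, N̂ = n and the sketch reduces exactly to the usual Cox–Snell and Nagelkerke statistics, as claimed above.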
§ COMPARISONS
In this section I compare the proposed estimator either to a known or simulated population value or to the estimator obtained by ignoring the sampling.
§.§ A large multistage survey
NHANES, the National Health and Nutrition Examination Survey, is a series of large national surveys of the US civilian population conducted by the National Center for Health Statistics (NCHS). The survey has been run continuously since 1999, with data released for each two-year wave. Each wave samples about 10,000 individuals, in a complex multistage, multiphase design, which is approximated for the purpose of public-use datasets by a two-stage sample. In the approximate design, two city or county sampling units are taken from each of about 16 geographical strata, and a total of about 10,000 individuals are sampled. The survey oversamples people under 18 and over 60, and also oversamples racial/ethnic minority groups. We will use data on blood pressure from the 2003–4 and 2005–6 waves of NHANES, in 18323 individuals for whom both dietary data and blood pressure were available <cit.>.
Isolated Systolic Hypertension (ISH) is the form of hypertension most common in older people. It is defined by an elevated systolic blood pressure (>140 mmHg) with normal or low diastolic blood pressure (<90 mmHg), and indicates stiffening of walls of major arteries<cit.>. Following <cit.> we fit a sequence of logistic regression models. The base model used age as a linear spline with interior knots at 50 and 65 years. Successive models then added race/ethnicity as five categories, gender, a gender by age interaction, and reported dietary sodium intake. The R^2 statistics for the models are shown in Table <ref>; for both the Cox–Snell and Nagelkerke statistics the design-based estimator has a slightly lower value than the estimator assuming simple random sampling.
§.§ Case–control sampling
Logistic regression in case–control designs is probably the most frequently-used example of a generalised linear model under complex sampling. Case–control sampling is used in the study of rare diseases: the small number of individuals with Y=1 are all sampled, but only a small fraction of those with Y=0. The standard analysis of a case–control sample in epidemiology is by unweighted logistic regression; the bias that would be expected from ignoring the sampling is confined to the intercept and the estimate of β is the semiparametric MLE. The unweighted analysis is sometimes, but not always, substantially more efficient than the weighted analysis.
§.§.§ Heuristics
A heuristic analysis is useful. Suppose that P(Y=1) is very small in the population, and that π_0≪ 1 is chosen to give m controls per case. The efficiency of this design (with unweighted analysis) relative to using the entire population is m/(m+1), which can be chosen close to 1. Almost all the (Fisher) information in the population is now present in the sample, so the likelihood ratio between the null model and the model fitted by maximum likelihood will be approximately the same in the sample as in the population. The ordinary Cox–Snell R-squared in the population, R̃^2_CS, satisfies
-1/2 log (1-ρ^2_CS) ≈ -1/2 log (1-R̃^2_CS) = 1/N (ℓ(β̃)-ℓ(0))
so in the unweighted model in the sample it will satisfy
-1/2 log (1-R^2_CS,sample) ≈ 1/n (ℓ(β̃)-ℓ(0))
different by a factor of N/n. That is, case–control sampling will dramatically deflate log(1-R^2_CS) and so inflate R^2_CS relative to prospective sampling from the same population. In a less-ideal case–control scenario we would expect the sample likelihood ratio to be smaller than the population likelihood ratio — but if it were smaller by a factor of n/N, to cancel the bias, a case–control design would have no advantage over simple random sampling.
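To put an (invented but representative) number on this: if the population value satisfies -1/2 log(1-R̃^2_CS) = 0.01, i.e. R̃^2_CS ≈ 0.02, then a case–control sample with N/n = 50 gives -1/2 log(1-R^2_CS,sample) ≈ 0.5 and hence R^2_CS,sample ≈ 1 - e^{-1} ≈ 0.63, a roughly thirtyfold inflation from the sampling design alone.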
The Nagelkerke correction will mitigate the sampling bias, because the rescaling factor is also affected by sampling bias and the biases partly cancel. However,
the sampling bias is multiplicative on the scale of the log likelihood ratio and the correction is multiplicative on the scale of R^2, a concave function of the log likelihood ratio. By Jensen's inequality, the ordinary Nagelkerke R^2 in the sample will still be biased upwards relative to the population value except when R^2=0 or R^2=1. The design-based estimator R̂^2 does not have these biases; it will estimate the same quantity under case–control sampling as under prospective sampling.
§.§.§ Simulation
Table <ref> demonstrates this bias in a simulation. A population of size 10^5 was simulated with a single N(0,1) predictor, X, and a binary outcome Y satisfying
logit P(Y=1|X=x)=-6+x.
There were 389 cases in the simulated population, and control samples were drawn with 1, 2, 5, 10, or 20 controls per case, giving control sampling fractions from 0.4% to 7.8%. As Table <ref> shows, the design-based estimators accurately reproduced the population R^2 statistics under all sampling fractions. The unweighted estimators displayed a substantial upwards bias that decreased with increasing sampling fraction, as expected from our heuristic argument.
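A compact version of this simulation, reusing design_based_r2 from the sketch in the previous section (the seed is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of 10^5 with logit P(Y=1|X=x) = -6 + x
N = 100_000
x = rng.standard_normal(N)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(6.0 - x)))
X = x[:, None]

print("population:", design_based_r2(X, y, np.ones(N)))

cases = np.flatnonzero(y == 1)
controls_pop = np.flatnonzero(y == 0)
for m in (1, 2, 5, 10, 20):                  # controls per case
    ctrl = rng.choice(controls_pop, size=m * cases.size, replace=False)
    idx = np.concatenate([cases, ctrl])
    # weights 1/pi_i: 1 for cases, (population controls)/(sampled controls) otherwise
    w = np.where(y[idx] == 1, 1.0, controls_pop.size / ctrl.size)
    print(m, "design:", design_based_r2(X[idx], y[idx], w),
             "naive:", design_based_r2(X[idx], y[idx], np.ones(idx.size)))
```

The "naive" column (unit weights) corresponds to the usual unweighted statistics and reproduces the upward bias just described.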
§.§.§ A practical example
To illustrate the practical relevance of this bias, consider the case–control study of oesophageal cancer in men from Ille-et-Vilaine, France, reported by <cit.> and later used by <cit.>, a study that is close to the heuristic ideal. There are two publicly-available data sets: one with discrete variables for age (five groups), tobacco consumption (four groups), and alcohol consumption (four groups), and the other with continuous variables for age and reported alcohol and tobacco consumption. The study recruited between four and five controls per case; the control sampling fraction is not given explicitly but can be estimated from population data to be about 1/440.
For both data sets, a model with main effects of each variable fits reasonably well. We consider this base model and a model adding a linear by linear interaction between alcohol and tobacco consumption. When using the grouped data, the estimated variances of β̂ are very similar for the unweighted and design-weighted approaches; when using the continuous data, the unweighted estimate is substantially more precise.
The R^2 estimates are shown in Table <ref>. As the heuristic analysis indicated, the ordinary pseudo-R^2 statistics are much larger than a design-based version, and so can be importantly biased relative to population or cohort statistics. The bias exists both for the continuous-data models where the unweighted analysis is substantially more efficient, and the grouped-data models where there is little efficiency difference.
§.§ Informative two-phase sampling from a cohort
Wilms' Tumour is a rare, largely treatable childhood cancer of the kidney, and the National Wilms' Tumor Study Group has run a series of clinical trials aiming to reduce the long-term side-effects of treatment while maintaining the cure rate <cit.>. From a biostatistical viewpoint, one of the interesting aspects of Wilms' Tumour is that the histological classification (roughly, `cell abnormality') is difficult, and the study group central pathologist appears to be better at it than anyone else. That is, given the central lab histology, the local-hospital histology is not predictive of relapse, and the local-hospital histology can be treated as the central-lab histology plus random error. There was obvious interest in studying Wilms' Tumour relapse with central-lab analysis of only a subset, rather than of all cases, and one of the data sets from the NWTG studies has become a standard example of two-phase sampling (e.g., <cit.>).
Table <ref> compares the results for the full cohort to two sampling schemes: case–control sampling with equal numbers of cases and controls, and balanced two-phase sampling with the same number sampled from all four cells of a 2× 2 local-histology by relapse table. The design-based estimator is always close to the full cohort estimator; however, the unweighted estimator under case–control sampling is biased upwards. The bias is less dramatic than in the previous example because the control sampling fraction is larger: about 10% versus less than 1%.
§ DISCUSSION
There is some controversy about the usefulness of pseudo-R^2 measures for generalised linear models, but they are quite widely used. In this note I have shown how to construct design-based versions of the Nagelkerke and Cox–Snell pseudo-R^2, which should be of use to survey statisticians. I have also shown that the standard versions of these statistics when used for logistic regression on case–control samples do not estimate the same model summary as they would under prospective or cross-sectional sampling — a fact that does not seem to be well known in biostatistics and epidemiology.
http://arxiv.org/abs/1701.07870v1 | 20170126203108 | Implementation of the iFREDKIN gate in scalable superconducting architecture for the quantum simulation of Fermionic systems | [
"Per J. Liebermann",
"Pierre-Luc Dallaire-Demers",
"Frank K. Wilhelm"
] | quant-ph | [
"quant-ph"
] |
Theoretical physics, Saarland University, 66123 Saarbrücken, Germany
Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA 02138, USA
Theoretical physics, Saarland University, 66123 Saarbrücken, Germany
We present a Superconducting Planar ARchitecture for Quantum Simulations (SPARQS) intended to implement a scalable qubit layout for quantum simulators. To this end, we describe the iFREDKIN gate as a controlled entangler for the simulation of Fermionic systems that is advantageous if it can be directly implemented. Using optimal control, we show how this gate can be efficiently implemented in the SPARQS circuit, making it a promising platform and control scheme for quantum simulations. Such a quantum simulator can be built with current quantum technologies to advance the design of molecules and quantum materials.
§ INTRODUCTION
Richard Feynman proposed building computers that process information based on the rules of quantum mechanics to solve the difficult problem of simulating quantum systems <cit.>.
Motivated by Shor's algorithm to factor large composite numbers <cit.> and its potential in cryptography, considerable efforts have been invested in building a universal fault-tolerant quantum computer <cit.>.
However, it was
recently argued that such a universal machine may not be necessary for the purpose of simulating Fermionic systems beyond the reach of modern supercomputers
<cit.>, provided sufficiently long coherence times (quantum memories) are available.
Following advances in quantum technologies from the past decades, it can be suggested that Feynman's machine could possibly be built in a near future.
It is entirely possible that quantum simulations of
Fermionic systems such as Fermi-Hubbard-like models and molecular models will be a main application of quantum computers in the coming years.
In this paper, we propose a superconducting circuit architecture for simulating general Fermionic systems which can be cast in a second-quantized formulation.
It is based on a superconducting circuit implementation called the RezQu architecture, chosen for the tunability of its components and the simplicity of its implementation <cit.>.
The layout presented in sec. <ref> is extensible and planar and it can be implemented with current quantum technologies.
The properties of this Superconducting Planar ARchitecture for Quantum Simulations (SPARQS) are such that it can be used to prepare a molecular or cluster state, measure its energy and correlation functions <cit.>.
It was also previously shown that for quantum simulations, the number of gates that need to be tuned and benchmarked scales linearly with the size of the simulated system <cit.>. For example, in the case of the Fermi-Hubbard model, the size of the system in a hybrid simulation method <cit.> corresponds to the number of spin orbitals in an exactly solved sub-lattice of the full infinite lattice.
From the RezQu literature, we assume that single-qubit and two-qubit gates can be implemented straightforwardly based on known results <cit.>.
Quantum simulations also benefit from iFREDKIN gates to efficiently implement the time evolution of Fermionic Hamiltonians.
The iFREDKIN gate is a new entangling gate in the family of three-qubit gates which includes the TOFFOLI and FREDKIN gates. However, unlike the latter two, it has no classical analog. It performs an entangler, the iSWAP <cit.>, on two target qubits conditioned on a control qubit (conditional iSWAP). This conditional evolution is naturally adapted to the need to interfere an entangled manybody state in the target qubit to a reference state as a key step in phase estimation.
We expect it to be the most costly gate in quantum simulations, as it is used between chains of qubits to implement hopping and interaction terms of Fermionic Hamiltonians <cit.>.
Therefore, the remaining challenge is to show that the iFREDKIN gates can be implemented between the probe qubit P and neighboring system qubits S.
In sec. <ref>, as a proof of principle, we use GRadient Ascent Pulse Engineering (GRAPE) <cit.>
to show that the iFREDKIN gate can be implemented in a time comparable to a simple iSWAP gate between neighboring system qubits, even when leakage is included in a SPARQS circuit. Thus, we conclude that SPARQS circuits with an appropriate iFREDKIN control scheme provide a natural platform for the simulation of Fermionic systems.
§ CIRCUIT ARCHITECTURE
In Ref. <cit.> we highlighted that a dual-rail qubit with a highly connected central qubit is well-suited for the purpose of simulating clusters of the Fermi-Hubbard model and other Fermionic systems.
As shown in fig. <ref>, the layout consists of a register S which encodes a system Hamiltonian and a bath register B used in the procedure of creating a Gibbs state of the system Hamiltonian in S. The Gibbs state preparation <cit.> or phase estimation <cit.> digital register R also requires a line of qubits whose size depends on the desired precision of prepared or measured energies. Interactions between registers S+B and R and possible subsequent correlation function measurements are mediated through a probe qubit P between the digital and analog registers. Lines between registers indicate where multi-qubit interactions are used in simulation algorithms. An advantage of using a middle qubit P is that all-to-all connectivity is not required for the implementation of useful algorithms. The triangles formed by interaction lines between neighboring qubits in S and register P are meant to indicate that iFREDKIN gates have to be used with P as the control qubit.
Here, we will describe a possible superconducting circuit architecture for this purpose. Within this, we will show in sec. <ref> how to implement a fast and direct iFREDKIN gate using optimal control methods.
The basic architecture for such a SPARQS circuit is shown in fig. <ref>, a modified RezQu architecture <cit.>.
Qubits with tunable frequency are connected through a superconducting cavity which acts as a bus for quantum information <cit.>.
The qubits do not interact with each other unless they are brought in resonance with the cavity. This architecture can be fabricated in a planar way and is expected to be extensible based on current quantum technologies. The exponential increase in coherence times of superconducting qubits <cit.> is a good indication that this architecture could be tested with minimal quantum error correction.
It is known that single-qubit and two-qubit gates can be efficiently implemented in RezQu-like circuits <cit.>. However, iFREDKIN gates have not been studied yet. In the next section, we show using optimal control that the iFREDKIN gate can be implemented efficiently in a SPARQS processor for realistic circuit parameters.
§ IFREDKIN
Analogous to the FREDKIN gate (conditional swap), the iFREDKIN gate is an entangling three-qubit gate, which performs an iSWAP operation on two qubits, depending on the state of the first qubit, i.e., a conditional iSWAP:
U_±iFREDKIN = |0⟩⟨0| ⊗ 1_4 + |1⟩⟨1| ⊗ (±iSWAP)
= [ 1 0 0 0 0 0 0 0; 0 1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 ±i 0; 0 0 0 0 0 ±i 0 0; 0 0 0 0 0 0 0 1 ] .
It is used in the context of quantum simulations to perform the time evolution of Fermionic Hamiltonians <cit.> in a Jordan-Wigner basis <cit.>, which maps indistinguishable particles with antisymmetric exchange properties to a register of distinguishable qubits. The iFREDKIN gate is entangling since it will map a separable state (|001⟩+|101⟩)/√(2) to a generalized GHZ state (|001⟩+i|110⟩)/√(2) <cit.>. Specifically, it executes a two-qubit iSWAP gate, which is a perfect two-qubit entangler <cit.>, conditional on a control qubit. When the control qubit is in a superposition, it accumulates a phase i if and only if the state of the system qubits are different. Hence, the iFREDKIN gate can be used to characterize the interaction between qubits as if they where indistinguishable particles with antisymmetric exchange properties. In a SPARQS circuit, the control qubit is always P and the conditional iSWAP is performed between neighboring S qubits.
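A few lines of numpy make the conditional-iSWAP structure and the entangling action just described explicit (a sketch; the basis ordering |c t_1 t_2⟩ with the control as the most significant bit is our own convention here):

```python
import numpy as np

# Conditional iSWAP: identity when the control is |0>, iSWAP when it is |1>.
iswap = np.array([[1, 0,  0, 0],
                  [0, 0, 1j, 0],
                  [0, 1j, 0, 0],
                  [0, 0,  0, 1]], dtype=complex)
U = np.eye(8, dtype=complex)       # basis |c t1 t2> = |000>, |001>, ..., |111>
U[4:, 4:] = iswap                  # control qubit c = 1 block

# Entangling action quoted above: (|001> + |101>)/sqrt(2) -> (|001> + i|110>)/sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0b001] = psi[0b101] = 1 / np.sqrt(2)
target = np.zeros(8, dtype=complex)
target[0b001], target[0b110] = 1 / np.sqrt(2), 1j / np.sqrt(2)
assert np.allclose(U @ psi, target)
```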
Here we want to implement the iFREDKIN gate on a cluster of the SPARQS processor as shown in fig. <ref>. Pulse shapes found by numerical methods, such as GRAPE <cit.>, have proven to be faster than analytical control pulses on this architecture <cit.>, and optimal control methods have been demonstrated successfully on three-qubit gates <cit.>. Additionally, a direct implementation avoids approaching the decoherence limit, in contrast to a decomposition of the multi-qubit gate into elementary gates.
The following Hamiltonian describes the architecture, where the coupling between each qubit is mediated by a common bus
H = ω_B a^† a + ∑_i [ ( ω_i(t) - Δ_i/2 ) b_i^† b_i + Δ_i/2 ( b_i^† b_i )^2 ]
+ ∑_i g_i ( a^† b_i + a b_i^† ) .
a and b_i are the bus and qubit annihilation operators, respectively. We pick realistic parameters for the architecture in order to proceed with the proof-of-principle. With a bus at ω_B/2π=6.5 GHz, the off-resonant frequencies are set to ω_P/2π=7.5 GHz, ω_S1/2π=8.0 GHz and ω_S2/2π=8.5 GHz, with anharmonicities Δ_P/2π=-200 MHz, Δ_S1/2π=-300 MHz and Δ_S2/2π=-400 MHz. The coupling strengths are g_BP/2π=30 MHz, g_BS1/2π=45 MHz and g_BS2/2π=60 MHz, keeping the ratio g_i/Δ_i=-0.15 fixed. In all runs the controls have a time resolution of 1ns, typical for arbitrary waveform generators (AWGs), with fine steps of 0.1 ns for the simulated time evolution. Additionally, the pulse shapes are filtered by a Gaussian window with a bandwidth of 331 MHz (standard deviation σ=0.4 ns). For the optimization, we work in the rotating frame with angular frequency ω_R=ω_B. Therefore the implemented Hamiltonian reads
H = ∑_i [ ( δ_i(t) - Δ_i/2 ) b_i^† b_i + Δ_i/2 ( b_i^† b_i )^2 ]
+ ∑_i g_i ( a^† b_i + a b_i^† ) .
δ_i=ω_i-ω_B is the detuning of qubit i from the bus. For each qubit, the first three energy levels are taken into account. Since we are only interested in the correct evolution of the computational subspace, the fidelity function only measures the overlap of the projected total time evolution with the target gate <cit.>
Φ = 1/d^2 | Tr( U_F^† P_Q U(t_g) P_Q ) |^2 , with P_Q the projector onto the computational subspace and d = 8 its dimension,
and global phases are omitted.
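As an illustration of the forward model underlying such a GRAPE optimisation, the following sketch builds the rotating-frame Hamiltonian above with three levels per mode, propagates piecewise-constant detuning controls and evaluates the projected fidelity; the circuit parameters are transcribed from the text, while the Gaussian filtering, the gradient computation and all function names are our own simplifications:

```python
import numpy as np
from scipy.linalg import expm

nlev = 3                                      # levels kept per mode
b = np.diag(np.sqrt(np.arange(1, nlev)), 1)   # single-mode annihilation operator
I = np.eye(nlev)

def emb(op, slot):
    """Embed a single-mode operator into the (bus, P, S1, S2) product space."""
    out = np.array([[1.0]])
    for k in range(4):
        out = np.kron(out, op if k == slot else I)
    return out

a = emb(b, 0)                                               # bus mode
Delta = 2*np.pi * np.array([-0.200, -0.300, -0.400])        # anharmonicities (GHz)
g = 2*np.pi * np.array([0.030, 0.045, 0.060])               # qubit-bus couplings (GHz)

def H(delta):
    """Rotating-frame Hamiltonian for detunings delta = (dP, dS1, dS2) in GHz."""
    Ht = np.zeros((nlev**4, nlev**4), dtype=complex)
    for i in range(3):
        bi = emb(b, i + 1)
        ni = bi.conj().T @ bi
        Ht += (2*np.pi*delta[i] - Delta[i]/2) * ni + Delta[i]/2 * (ni @ ni)
        Ht += g[i] * (a.conj().T @ bi + a @ bi.conj().T)
    return Ht

def propagate(pulse, dt=1.0):
    """Piecewise-constant controls: one detuning triple per dt = 1 ns slice."""
    U = np.eye(nlev**4, dtype=complex)
    for delta in pulse:
        U = expm(-1j * H(delta) * dt) @ U
    return U

# computational subspace: bus in |0>, each qubit restricted to {|0>, |1>}
comp = [((0*nlev + qP)*nlev + q1)*nlev + q2
        for qP in (0, 1) for q1 in (0, 1) for q2 in (0, 1)]

def fidelity(U, U_target):        # U_target: the 8x8 iFREDKIN built earlier
    """Phi = |Tr(U_target^dag P_Q U P_Q)|^2 / d^2 on the projected subspace."""
    M = U[np.ix_(comp, comp)]
    return abs(np.trace(U_target.conj().T @ M))**2 / len(comp)**2
```

A GRAPE loop would update the 1 ns detuning samples along the gradient of this fidelity; leakage out of the computational subspace enters automatically through the third levels kept in the model.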
In fig. <ref> we show the optimized qubit-bus detuning parameter for the implementation of an iFREDKIN gate. The gate can be implemented in 56 ns for the chosen parameters with a realistic control sequence with 99.99% fidelity. The time evolution of the populations is shown in fig. <ref>. As can be seen, the control qubit gets de-excited and re-excited during the process, hence allowing the dynamics to be interpreted as a two-excitation interference experiment, consistent with a speed limit corresponding to a small multiple of the periods induced by the various g couplings. Leakage into the second level of the qubits plays an important role in the gate implementation of the numerical pulse. Also, the pulse shapes are highly symmetric, as are the resulting time evolutions of the populations. The speed limit shown in fig. <ref> proves that the iFREDKIN gate can easily be implemented below a gate duration of t_g=55 ns. This time scale is typical for analytic two-qubit pulse shapes, i.e., the simultaneous version of the Strauch sequence <cit.>, and compares to the implementation of a traditional iSWAP on the same architecture as shown in fig. <ref>, setting the P qubit to an off-resonant parking frequency of ω_P/2π=10 GHz.
§ CONCLUSION
Quantum simulations could be one of the main applications for future quantum computers where they could outperform their classical counterparts.
We outlined the SPARQS circuit as an explicit superconducting implementation for a quantum simulator.
We showed how the most expensive gate in quantum simulations of Fermionic systems, the three-qubit iFREDKIN, can be efficiently implemented in such a device using GRAPE, a standard optimal control method.
For reasonable parameters of the SPARQS qubits, the iFREDKIN can be implemented in a time slightly longer than a typical iSWAP gate.
As coherence times for superconducting qubits keep increasing to date, it is realistic to expect that a large number of those gates could be reliably used for the purpose of performing quantum simulations beyond what can be done on classical computers.
§ ACKNOWLEDGMENTS
We acknowledge support from the European Union under the ScaleQIT integrated project.
http://arxiv.org/abs/1701.08095v1 | 20170127160031 | Experimental observations and modelling of intrinsic rotation reversals in tokamaks | [
"Y. Camenen",
"C. Angioni",
"A. Bortolon",
"B. P. Duval",
"E. Fable",
"W. A. Hornsby",
"R. M. Mcdermott",
"D. H. Na",
"Y-S. Na",
"A. G. Peeters",
"J. E. Rice"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Experimental observations and modelling of intrinsic rotation reversals in tokamaks
^1 CNRS, Aix-Marseille Univ., PIIM UMR7345, Marseille, France
^2 Max Planck Institut für Plasmaphysik, Garching, Germany
^3 Princeton Plasma Physics Laboratory, Princeton, USA
^4 EPFL, Swiss Plasma Center (SPC), Lausanne, Switzerland
^5 Departement of Nuclear Engineering, Seoul National University, Seoul, Korea
^6 Physics Department, University of Bayreuth, Bayreuth, Germany
^7 PSFC, MIT, Cambridge, Massachusetts, USA
The progress made in understanding spontaneous toroidal rotation reversals in tokamaks is reviewed and current ideas to solve this ten-year-old puzzle are explored. The paper includes a synthesis of the experimental observations in the AUG, C-Mod, KSTAR, MAST and TCV tokamaks, the reasons why turbulent momentum transport is thought to be responsible for the reversals, a review of the theory of turbulent momentum transport and suggestions for future investigations.
§ INTRODUCTION
During the 1980s, it was shown that stationary toroidal flows that reach up to 20% of the thermal velocity can develop in tokamak plasmas in the absence of externally applied torque (a summary of these early observations is available in Table 1 of <cit.>).
This phenomenon, dubbed intrinsic rotation, has practical implications for future low torque devices like ITER owing to the potential stabilising impact of plasma flows on turbulence and deleterious MHD instabilities.
Following initial measurements, experiments were performed to explore the physics of intrinsic rotation. The observations have been regularly summarised in review articles <cit.>.
Momentum transport, reconnection events (sawteeth and ELMs), non-axisymmetric magnetic fields (from magnetic perturbation coils, error fields or large MHD modes), orbit losses and interactions with neutrals were all observed to impact intrinsic rotation. The underlying physics is surprisingly rich, involving many competing mechanisms that determine the final rotation profile. These include collisional momentum transport, fluctuation-induced Reynolds and Maxwell stresses (turbulent transport), charge exchange with neutrals, J_∥×δ B torque induced by resonant magnetic perturbations, J_r× B torque induced by non-axisymmetric magnetic fields (neoclassical toroidal viscosity NTV), ionisation currents and orbit losses. A convenient set of transport equations has been proposed in <cit.> to describe the evolution of toroidal flows in tokamak plasmas resulting from these various mechanisms in a consistent fluid moment framework. This approach highlights the complexity of intrinsic rotation inherent to the number of mechanisms at play.
The present paper focuses on a specific puzzle within intrinsic rotation: spontaneous toroidal rotation reversals.
This intriguing phenomenon was reported 10 years ago on the TCV tokamak where the core toroidal rotation was observed to flip from counter-current to co-current when a threshold in density was exceeded in Ohmic L-mode plasmas <cit.>. Rotation reversals have since been demonstrated in C-Mod <cit.>, AUG <cit.>, MAST <cit.> and KSTAR <cit.>.
In parallel, the theory of intrinsic rotation has undergone considerable development and many possible physical mechanisms have been identified.
In spite of this progress, understanding toroidal rotation reversals still eludes us and predicting the direction of the core rotation in Ohmic L-modes remains a challenge. Toroidal rotation reversals do not directly affect plasma performance, but they represent a critical test for the theory of intrinsic rotation. The purpose of the present work is to survey the observations and the theoretical framework with the goal of presenting the current understanding of this research and explaining current ideas and approaches to its resolution.
The definitions and conventions adopted in this paper are introduced in Sec. <ref>, followed by a summary of the experimental observations in Sec. <ref>. The constraints these observations put on the theory and, in particular, the reasons why turbulent momentum transport is thought to be responsible for rotation reversals are discussed in Sec. <ref>.
The theory of turbulent momentum transport is briefly reviewed in Sec. <ref> before summarising the current status of the modelling activities in Sec. <ref>. Finally, in Sec. <ref> future work and open issues are discussed.
§ DEFINITIONS AND CONVENTIONS
Throughout this paper intrinsic rotation refers to the toroidal rotation that develops in the absence of externally applied torque. The toroidal rotation is denoted v_φ and its direction is given with respect to the plasma current: v_φ>0 for co-current rotation and v_φ<0 for counter-current rotation.
The rotation profile is said to be peaked (hollow) when v_φ increases (decreases) from the edge to the magnetic axis. Note that other definitions exist in the literature, where for instance |v_φ| is used in place of v_φ to define peaked and hollow profiles. The present definition is deemed more appropriate to describe profiles that cross v_φ=0.
Three regions are distinguished in the rotation profile following <cit.>: the sawtooth region 0≤ r/a ≲ r_ inv/a, with r_ inv the sawtooth inversion radius and a the plasma minor radius, the gradient region r_ inv/a ≲ r/a ≲ 0.8 (typically) and the edge region 0.8 ≲ r/a ≤ 1. The separation between the gradient and edge regions relies on the different dependencies of the toroidal rotation gradient on plasma parameters in these two regions. The plasma core includes the sawtooth and gradient regions.
Toroidal rotation reversals are defined as a large change (≳ 100%) of the intrinsic toroidal rotation gradient over the whole gradient region triggered by minor changes (≲ 20%) in the control plasma parameters.
It is important to notice that this definition does not require a change in the direction of the central toroidal rotation, nor of the toroidal rotation gradient, which is somewhat at odds with the reversal qualifier. Toroidal rotation bifurcation would be a more accurate description of this phenomenology. For consistency with the past literature, however, we retain toroidal rotation reversal and broaden its definition to include cases where the sign of the toroidal rotation and/or of its gradient do not change.
§ EXPERIMENTAL OBSERVATIONS
The experimental observations of toroidal rotation reversals collected over the last ten years in AUG <cit.>, C-Mod <cit.>, MAST <cit.>, KSTAR <cit.> and TCV <cit.> are summarised in this section. In some cases, direct reference is made to figures in the published works.
§.§ Measurements
* Toroidal rotation reversals were reported observing impurity ions (boron, carbon and argon) with a variety of diagnostics: X-ray imaging crystal spectroscopy (XICS) in C-Mod and KSTAR (argon), charge exchange recombination spectroscopy (CXRS) with a diagnostic neutral beam in TCV (carbon), CXRS using short pulses of a heating neutral beam in AUG (boron) and KSTAR (carbon).
* Reversals have also been inferred from Doppler back-scattering measurements of the perpendicular velocity of electron density fluctuations in AUG and MAST (assuming a dominant E× B velocity).
* The measurements were mostly performed in Ohmic L-modes, but reversals were also reported in the presence of ion cyclotron heating <cit.> and electron cyclotron heating <cit.>, still in L-mode.
§.§ Triggers
* Toroidal rotation reversals have been triggered by density ramps, plasma current ramps, toroidal magnetic field ramps, impurity injection and by switching on/off electron cyclotron heating, see e.g. <cit.>.
* The reversals appear to be highly reproducible and weakly sensitive to machine conditioning <cit.>.
§.§ Reversal direction
The reversal direction is discussed here as a function of increasing density, as density ramps are the most common trigger of reversals.
* Type I. Co-current to counter-current reversals (or more precisely bifurcations from peaked/flat to hollow profiles in the gradient region) have been observed in AUG, C-Mod, MAST and TCV (diverted configuration) and in KSTAR (limited configuration) above a critical density.
AUG: Fig. 4 in <cit.>, C-Mod Fig. 13 in <cit.>, KSTAR: Fig. 15 in <cit.>, MAST: Fig. 4 in <cit.>, TCV: Fig. 6 in <cit.>.
* Type II. Counter-current to co-current reversals (transition from hollow to peaked profiles) have been observed in TCV limited plasmas for q_95≲3 (Type II.a reversals), in AUG and TCV diverted plasmas at very high density (Type II.b reversals) and in MAST low current and low density diverted plasmas (Type II.c reversals).
AUG: Fig. 4 in <cit.> and Fig. 7 in <cit.>, TCV: Fig. 1 in <cit.>
Note that the distinction made above is purely phenomenological and does not exclude that all reversals be manifestations of the same physical mechanism observed in different plasma conditions.
In particular, in Ohmic plasmas, T_e, T_i, n_e and q are strongly coupled and a unique threshold identified as a combination of these parameters may be traversed several times in a density ramp, with a trajectory possibly dependent on the operation mode (limited versus diverted for instance).
§.§ Initial and final states
Typical pre- and post-reversal toroidal rotation profiles are shown in Fig. <ref> for AUG, C-Mod and TCV plasmas.
* In the sawtooth region, the toroidal rotation profile is mostly flat with a bulge in the co-current direction, independent of the reversal state (measurement integrated over several sawtooth cycles)
* In the gradient region, the toroidal rotation gradient has a wide range of values and often a different sign before and after the reversal.
* In the edge region, the toroidal rotation profiles are similar just before and just after the reversal.
* In C-Mod, the modification of the rotation profile in Type I reversals occurs in the region q ≲ 3/2.
C-Mod: Fig 16 in <cit.>
§.§ Dynamics
The dynamics of reversals have been investigated during density ramp experiments for Type I reversals in C-Mod and Type II.a reversals in TCV. A similar behaviour is reported in the two cases.
* In C-Mod and TCV, the reversal process appears as a clear break in slope of the toroidal rotation response to an increase in density.
C-Mod: Fig. 1 in <cit.>, TCV: Fig. 5.3 in <cit.>
* After the reversal process commences, the temporal dynamics of the central toroidal rotation is rather well described by an exponential fit of the form exp[-t/τ_ rev]. The characteristic time of the reversal τ_rev is comparable to, or longer than, the energy confinement time. In TCV, for the Type II.a reversals shown in Fig. 5.3, 5.4 and 5.6 of <cit.>, τ_ rev ranges from 40 to 120ms.
* When the reversal is triggered by a density ramp, the time scale of the reversal is independent of the density ramp-rate.
C-Mod: Fig. 2 in <cit.>, TCV: Fig. 5.4 in <cit.>
* Even for small increases of the line-averaged density, it has never been possible to stabilise a rotation profile intermediate between the initial and final states of a reversal. Fig. 6 in <cit.> shows the typical gap between the two stationary states.
* During the reversal, there is a transient evolution of the edge rotation (0.8≲ r/a ≲ 1) in the direction opposite to that of the core (edge recoil). The edge rotation then relaxes to its pre-reversal value.
C-Mod: Fig. 19, 20 in <cit.>, TCV: Fig. 5.7 and 5.8 in <cit.>
* The reversal is itself reversible, with a hysteresis in density of more than 10%. There may also be a hysteresis in the plasma current, but this is not as clear due to a relatively slow current diffusion time.
C-Mod: Fig. 4 and 7 in <cit.>, TCV: Fig. 5.5 in <cit.>
§.§ Critical density for the reversal
* The density threshold for Type I reversals increases with increasing plasma current (C-Mod). This was also indirectly observed in experiments where the plasma current was scanned at constant density (TCV).
C-Mod: Fig. 10 in <cit.>, TCV: Fig. 6.5 in <cit.> and Fig. 1 in <cit.>.
* The density threshold for Type I reversals decreases with increasing toroidal magnetic field.
C-Mod: Fig. 12 in <cit.>.
* In C-Mod, the I_p and B_T dependencies of the density threshold can be unified by a critical density proportional to 1/q_95. For the cases investigated, a critical collisionality of the form ν_ rev∝ n_eZ_ eff/T_e^2 works equally well as the factor Z_ eff/T_e^2 was nearly constant at the reversal <cit.>. In AUG, a critical collisionality better unifies the data than a critical density, see Fig 7 in <cit.>.
* The density threshold for Type II.a reversals (backwards with respect to Type I reversals) decreases with increasing plasma current and, therefore, has an opposite dependence on I_p compared to Type I reversals.
TCV: Fig. 5.6 in <cit.>.
* The density threshold for Type II.a reversals increases with increasing ECH power.
TCV: Fig. 5.17 and 5.18 in <cit.>.
* The scaling of the density threshold for Type II.b and Type II.c reversals with respect to plasmas parameters is, to date, unexplored.
§.§ Poloidal rotation
Poloidal rotation profiles before and after Type II.a reversals have been measured in TCV.
* The pre- and post- reversal profiles are similar and no large excursions of poloidal rotation are observed during the reversal (on the measurement timescale).
TCV: Fig 5.9 in <cit.> and Fig 8 in <cit.>.
* No departure from the neoclassical theory prediction is observed within the measurement uncertainties (≲ 2km/s) <cit.>.
§.§ Plasma shape
* In TCV, Type II.a reversals vanished for negative triangularity: the rotation profile is peaked even at low density (with the edge rotating in the counter-current direction) and shifts rigidly towards more counter-current rotation when the density is increased <cit.>.
* In KSTAR, no sharp evolution of the toroidal rotation is observed when ramping the plasma density in low elongation limited plasmas. The toroidal rotation remains counter-current and displays a mild U-curve behaviour with density <cit.>.
§.§ Sawteeth and MHD activity
* Toroidal rotation reversals are often observed in plasmas that also display sawtooth phenomena. However, the presence of sawteeth appears to be a consequence of the constrained operational space. Reversals have been triggered in plasmas exhibiting a wide variety of sawtooth characteristics and no correlation has been established between the reversals and the sawtooth frequency, amplitude or inversion radius <cit.>. In AUG, hollow rotation profiles with the core rotating in the counter-current direction (high density branch of Type I reversals) were observed in the absence of detectable sawteeth <cit.>.
* Low amplitude MHD activity is sometimes observed during toroidal rotation reversal experiments. For instance, in TCV, a (2,1) mode and a (1,1) sawtooth precursor, whose amplitude increases with the plasma density, are often detected by magnetic probes. As for the sawteeth, no clear correlation is found between the MHD activity and the rotation direction, with co- and counter- rotation observed for similar MHD spectrograms <cit.>.
§.§ LOC/SOC transition and turbulence changes
* Type I reversals generally occur close to, but not necessarily at, the transition from linear Ohmic confinement (LOC) to saturated Ohmic confinement (SOC) <cit.> and to the non-local heat/cold pulse cut-off <cit.>.
* The prediction of the turbulence regime from linear stability calculations is a delicate exercise due to the sensitivity of the TEM/ITG transition to input parameters that are difficult to measure precisely (temperature, density and rotation gradients, collisionality, magnetic shear, etc.) and to the choice of the collision operator in the numerical simulations <cit.>. In addition, the linear stability does not necessarily reflect the non-linear state. Experimentally, the characterisation of the turbulence regime from temperature and density fluctuation measurements is not straightforward either. With these caveats in mind, linear stability calculations <cit.> for AUG and C-Mod plasmas indicate that toroidal rotation reversals often occur close to the boundary between the TEM and ITG instabilities, and fluctuation measurements on C-Mod <cit.> show changes in the fluctuation spectra across toroidal rotation reversals. These changes in the turbulence characteristics do not appear, however, to trigger the reversal as they can occur in a region where the toroidal rotation gradient is not modified <cit.>, or the rotation profile can experience a reversal without a change of the predicted dominant instability <cit.>.
§.§ Dependence of the toroidal rotation gradient on plasmas parameters
* Multi-variable regressions performed for a large database of AUG Ohmic L-modes <cit.> and for a reduced set of AUG and TCV Ohmic L-modes <cit.> show that the toroidal rotation gradient is mostly correlated with the normalised density gradient R/L_n and effective collisionality ν_ eff. Larger R/L_n increases the hollowness of the rotation profile. Large variations of the toroidal rotation gradient (including a change of sign) can, nevertheless, be observed at nearly constant R/L_n <cit.>.
* Interestingly, in AUG the strong dependence of the toroidal rotation gradient on R/L_n is also observed for a wide operational range including Ohmic and electron cyclotron heated L-modes and H-modes <cit.>. Strongly hollow rotation profiles are only observed at large R/L_n values.
§ WHY IS MOMENTUM TRANSPORT CONSIDERED THE KEY TO EXPLAINING THE REVERSALS?
The mechanisms invoked to explain toroidal rotation reversals need to be consistent with all the experimental observations summarised in Sec. <ref>. This includes the observed dynamics (time scale, edge recoil, hysteresis), the parametric dependencies of the critical density and the constancy of the pre/post-reversal rotation profiles in the edge region.
The main mechanisms expected to impact the intrinsic rotation profile in the gradient region, where the reversal takes place, are the neoclassical and turbulent momentum transport (momentum redistribution), the neoclassical toroidal viscosity due to field ripple or a strong MHD mode (damping towards a diamagnetic level offset), a torque due to resonant non-axisymmetric fields (locking to the wall) and sawteeth (momentum redistribution and possibly transient torque).
Sawteeth and strong MHD modes certainly affect the intrinsic rotation profile, but a causal link between sawteeth/MHD and toroidal rotation reversal has yet to be established. As described in Sec. <ref>, toroidal rotation reversals are observed with little to no MHD activity and for a variety of MHD spectrograms and sawtooth behaviour, independent of the reversal state.
The magnitude of the toroidal rotation in the pre- and post-reversal states is often larger than a diamagnetic level rotation and, therefore, somewhat incompatible with NTV or resonant non-axisymmetric fields as the dominant process. In addition, rotation reversals are observed in tokamaks with very low ripple like KSTAR but not in tokamaks with high ripple like Tore Supra <cit.>. In Tore Supra, a slightly hollow profile of counter-current rotation is measured in Ohmic L-modes that is satisfactorily described by NTV theory. Close to the LOC/SOC transition, a small departure from NTV predictions is observed, reminiscent of a toroidal rotation reversal but far from significantly affecting the rotation profile: when NTV dominates, toroidal rotation reversals are hampered <cit.>.
Summing up these various considerations, momentum transport is the only viable candidate left to explain rotation reversals. An additional strong argument in this direction is brought by the transient acceleration of the plasma edge in the direction opposite to that of the core observed during a reversal (edge recoil, see Sec. <ref>). The edge recoil cannot easily be produced by a localised torque or damping term: this would require us to invoke several radially localised contributions of opposite directions and different temporal behaviour. In contrast, a sudden (i.e. faster that the momentum confinement time) change of momentum transport in the plasma core at constant edge momentum transport produces an edge recoil as a consequence of momentum conservation. The profile evolves on a time scale dictated by momentum diffusion, i.e. τ_ rev∼ a^2/χ_φ with χ_φ the momentum diffusivity.
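For illustrative TCV-like values, a ≈ 0.25 m and an assumed momentum diffusivity χ_φ ≈ 1 m^2/s, this estimate gives
τ_ rev∼ a^2/χ_φ ≈ (0.25 m)^2 / (1 m^2 s^-1) ≈ 60 ms ,
which falls within the 40-120 ms range quoted in Sec. <ref>.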
§ MOMENTUM TRANSPORT THEORY
§.§ Momentum conservation and momentum flux
Assuming momentum transport to be the only mechanism at play, the toroidal rotation in the core of an axisymmetric tokamak is governed by the redistribution of toroidal angular momentum, e.g. <cit.>:
∂/∂ t∑_s < n_s m_s R v_φ,s> + 1/V'∂/∂ r[ V' Π_φ] = 0
with n_s, m_s and v_φ,s the density, mass and toroidal velocity of species s, respectively, R the local major radius, <.> the flux surface average, r a radial coordinate (flux surface label), V the flux surface volume, V'=∂ V /∂ r (radial derivative) and Π_φ=<Π_φ·∇ r> the flux surface averaged radial component of the toroidal momentum density flux.
Eq. (<ref>) is obtained from the flux surface average of the momentum conservation equation and simply states that, in the absence of sources and sinks, the evolution of the toroidal angular momentum density is driven by the divergence of the momentum flux Π_φ.
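The link between momentum conservation and the edge recoil discussed in the previous section can be made explicit with a toy finite-volume solution of Eq. (<ref>) in cylindrical geometry (V' ∝ r); the diffusivity, the shape of the core-localised residual stress and the stiff edge damping layer are all illustrative assumptions:

```python
import numpy as np

# dL/dt + (1/r) d(r Pi)/dr = 0,  Pi = -chi dL/dr + Pi_rs(r),
# with a core-localised residual stress Pi_rs flipping sign at the "reversal".
nr, a, chi = 100, 0.25, 1.0                  # grid, minor radius (m), chi_phi (m^2/s)
r = np.linspace(0.0, a, nr); dr = r[1] - r[0]
rf = 0.5 * (r[1:] + r[:-1])                  # cell faces
shape = np.exp(-((rf - 0.3 * a) / (0.2 * a)) ** 2)   # assumed stress profile

def step(L, sign, dt):
    Pi = -chi * np.diff(L) / dr + sign * shape
    div = np.zeros_like(L)
    div[0] = 4.0 * Pi[0] / dr                              # axis cell, r -> 0
    div[1:-1] = (rf[1:] * Pi[1:] - rf[:-1] * Pi[:-1]) / (r[1:-1] * dr)
    L = L - dt * div
    L[-1] = 0.0                                            # rotation pinned at r = a
    L[r > 0.9 * a] *= 1.0 - dt / 2e-3                      # stiff edge layer
    return L

dt = 0.2 * dr**2 / chi
L = np.zeros(nr)
for _ in range(int(0.2 / dt)):               # relax to the pre-reversal state
    L = step(L, +1.0, dt)
edge0 = edge_extreme = L[85]
for _ in range(int(0.2 / dt)):               # flip the core residual stress
    L = step(L, -1.0, dt)
    edge_extreme = min(edge_extreme, L[85])
# In this toy model the core gradient reverses on a time ~ a^2/chi while the
# point at r = 0.85 a transiently swings opposite to the core
# (edge_extreme < edge0) before relaxing back, i.e. the edge recoil expected
# from momentum conservation.
```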
Up to first order in ρ_*=ρ_i/R_0, the species flow entering Eq. (<ref>) lies within a flux surface and is given by the sum of the parallel streaming along the magnetic field lines, of the E× B drift and of the diamagnetic flow
𝐯_s = v_∥𝐛 + 𝐯_ E + 𝐯_ dia + 𝒪(ρ_*^2)
Here, ρ_i=m_iv_ thi/(eB_0) is the main ion Larmor radius, R_0 and B_0 are a reference major radius and magnetic field, respectively, and v_ thi=√(2T_i/m_i) is the thermal velocity.
The parallel flow can be split into three components v_∥=v_∥^ E+v_∥^ dia+v_∥^θ so that v_∥^ E𝐛 + 𝐯_ E and v_∥^ dia𝐛 + 𝐯_ dia are purely toroidal whereas the remaining contribution to the total flow, v_∥^θ𝐛, has finite poloidal and toroidal components. The two purely toroidal flows are given by:
v_∥^ E𝐛 + 𝐯_ E = Rω_Φ𝐞_φ =- R ∂Φ/∂ψ𝐞_φ
v_∥^ dia𝐛 + 𝐯_ dia = R ω_p,s𝐞_φ = - R 1/Z_s en_s∂ p_s/∂ψ𝐞_φ
with Φ the electrostatic potential, ψ the poloidal magnetic flux, 𝐞_φ the unit vector in the toroidal direction and Z_s and p_s the species charge number and pressure, respectively. The parallel flow v_∥^θ is constrained by neoclassical physics.
Combining Eqs. (<ref>), (<ref>) and (<ref>), yields the customary expression of the toroidal flow <cit.>:
v_φ,s = Rω_Φ + Rω_p,s + v_θ,sB_t/B_p
The first term, related to the E× B flow, is the lowest order contribution. It is species independent and can assume arbitrarily large values in an axisymmetric tokamak. The lowest order momentum transport theory is formulated with respect to ω_Φ. The next contributions, related to the diamagnetic and poloidal flows, are first order in ρ_* for a neoclassical level poloidal flow and roughly scale as 1/2ρ_*B_t/B_pv_ thiR/L_T_i, with R/L_T_i=-R_0∂ln T_i/∂ r the normalised temperature gradient. For ρ_*=1/600, B_t/B_p=10 and R/L_T_i=6, the first order toroidal flow in Eq. (<ref>) is about 0.05 v_ thi and, therefore, not negligible compared to the total toroidal flow, which is often less than 0.2 v_ thi for intrinsic rotation. When dealing with intrinsic rotation, the distinction between the total toroidal flow v_φ,s, that is the measured quantity, and the lowest order toroidal flow Rω_Φ therefore needs to be taken into account.
The momentum flux entering the transport equation, Eq. (<ref>), is now decomposed into diagonal (diffusive), pinch (convective) and residual stress components:
Π_φ = n m R_0 v_ thi[ χ_φ u' + R_0V_φu + C_φ]
Here, the decomposition is performed with respect to the lowest order flow, i.e. the diagonal part components with respect to u'=- R_0/v_ thi∂ω_Φ/∂ r and the pinch components with respect to u=R_0ω_Φ/v_ thi.
In the expression above, nm=∑ n_sm_s is the species averaged mass density and the momentum transport coefficients have also been species averaged using A = ∑ n_s m_s A_s/∑ n_s m_s where A represents the momentum diffusivity χ_ϕ, pinch velocity V_φ or residual stress coefficient C_φ.
In stationary state, Π_φ=0 so the intrinsic rotation profile is determined by the balance between the diagonal flux, which tends to flatten the profile, and the non-diagonal flux (pinch and residual stress) which tends to sustain a finite gradient. The sign and magnitude of the resulting rotation gradient is dictated by the ratio of the pinch and residual stress components to the momentum diffusivity:
u' = - R_0V_φ/χ_φu - C_φ/χ_φ
The fundamental difference between the pinch and residual stress is that only the pinch requires a finite rotation to sustain a gradient. A residual stress contribution is, therefore, required to describe rotation profiles crossing zero, as observed in Fig. <ref> for the impurity rotation v_φ,s or in <cit.> for the E× B angular frequency ω_Φ.
In the core of an axisymmetric tokamak, the neoclassical momentum flux is typically an order of magnitude smaller than the gyro-Bohm momentum flux <cit.> and negligible compared to the turbulent momentum flux. The following discussion, therefore, focuses on turbulent momentum transport. The main mechanisms are briefly outlined in the framework of gyrokinetic theory, emphasising their potential link with rotation reversals. For a more comprehensive description of the theory of turbulent momentum transport and further references to the original work, the reader is referred to published reviews <cit.>.
§.§ Lowest order contributions
To lowest order, with respect to the gyrokinetic ordering (local limit, ρ_*→ 0), five mechanisms that can generate a momentum flux are described. The parallel <cit.> and perpendicular <cit.> components of the toroidal flow shear give rise to a diagonal flux. For positive magnetic shear, these two contributions have opposite sign and the perpendicular component of the toroidal flow shear acts to reduce toroidal momentum diffusivity <cit.>. The pinch also has two contributions: the Coriolis pinch <cit.> and the momentum carried by any particle flux. In the stationary state, the second contribution vanishes if no particle source remains.
Finally, the only contribution to the lowest order residual stress arises from the up-down asymmetry of the magnetic flux surfaces C_φ^ FS <cit.>.
The ratio of the toroidal momentum diffusivity and ion heat diffusivity, the Prandtl number, is typically predicted as Pr = χ_φ/χ_i∼ 0.7 <cit.>, but values in the range 0.4 to 1.5 are possible depending on the plasma parameters <cit.>. The Coriolis pinch is generally directed inward and acts to increase the absolute value of the rotation. The pinch to diffusivity ratio R_0V_φ/χ_φ typically ranges from -1 to -4 with a marked dependence on the normalised density gradient R/L_n. The Coriolis pinch tends to be smaller in the TEM regime <cit.> and can be directed outward close to the kinetic ballooning mode threshold <cit.>. The ratio of the residual stress from the flux surface asymmetry to the momentum diffusivity is typically |C_φ^ FS/χ_φ|≲ 1 near the edge where the flux surface shaping is the highest and |C_φ^ FS/χ_φ|≲ 0.3 in the core <cit.>. The sign of C_φ^ FS is determined by the flux surface asymmetry and the direction of the magnetic field.
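For orientation, the stationary gradient u' = -(R_0V_φ/χ_φ)u - C_φ/χ_φ introduced above can be evaluated with representative numbers picked from these quoted ranges; the values below are illustrative choices, not predictions for any particular discharge:

# Stationary intrinsic rotation gradient,
# u' = -(R0*V_phi/chi_phi)*u - C_phi/chi_phi,
# with representative values from the ranges quoted in the text.
pinch_ratio = -2.0     # R0*V_phi/chi_phi, typically between -1 and -4
residual_ratio = 0.3   # C_phi/chi_phi from flux-surface asymmetry, |.| <~ 0.3 in the core
u = 0.05               # normalised rotation u = R0*omega_Phi/v_thi

u_prime = -pinch_ratio * u - residual_ratio
print(f"u' = {u_prime:+.2f}")  # the inward pinch amplifies |u|; the residual stress sets the offset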
The momentum diffusivity, pinch and up-down asymmetry residual stress have all been identified experimentally and found to be in fair agreement with lowest order gyrokinetic theory predictions <cit.>.
As the intrinsic rotation gradient in the vicinity of u=0 is typically between -1.5≲ u' ≲ 1.5, including up-down symmetric plasmas for which C_φ^ FS=0, intrinsic rotation can clearly not be described by the lowest order theory: it lacks residual stress contributions.
§.§ First order contributions
To next order in ρ_*, new contributions to the residual stress arise from:
* the impact of a poloidally inhomogeneous turbulence on the parallel symmetry <cit.>
* profile shearing, i.e. the shear in the drifts and parallel motion due to first and second order derivatives of the magnetic equilibrium, density and temperature profiles <cit.>
* the generic impact of a radially inhomogeneous turbulence on the parallel symmetry <cit.>
* the impact of a radially inhomogeneous turbulence on passing ions with different orbit shifts <cit.>
* the deviation of the equilibrium distribution function from a Maxwellian, i.e. the impact of the neoclassical equilibrium <cit.>, which includes the pressure gradient contributions to the E× B shear <cit.>.
For the parameter dependencies, all contributions that rely on coupling by parallel compression between density and parallel velocity fluctuations increase in magnitude with R/L_n. This is the case for contributions (i-iii) and a part of (v). The dependence already appears in reduced fluid models when considering a generic parallel symmetry breaking, see e.g. <cit.>. Another robust feature of residual stress is that its magnitude tends to be smaller for TEM dominated turbulence than for ITG dominated turbulence, typically by a factor ≳ 2, consistently with the more symmetric mode structure with respect to the horizontal midplane obtained in the TEM regime.
The residual stress contributions related to the radial inhomogeneity of turbulence <cit.>, to the shear in the perturbed E× B drift advection of the background <cit.> and to the neoclassical equilibrium <cit.> all strongly depend on the second order derivatives of the temperature and/or density profiles. This dependence can extend as far as to change their sign. The neoclassical equilibrium residual stress also strongly depends on, and can change sign with, the ion-ion collisionality. The contribution related to the shear of the parallel motion and of the curvature and ∇ B drift <cit.> does not depend on second order derivatives.
Interestingly, contribution (iv) alone strongly depends on the radial position of the X-point. Its impact on the rotation gradient is limited to the edge region <cit.> and therefore not directly relevant for toroidal rotation reversals.
Of course, for all first order contributions, C_φ/χ_φ is expected to depend on ρ_*.
This dependence is linear in ρ_* for (iii) and (v) according to analytical calculations, but may be more complicated for the other contributions. For instance, it has been shown in global non-linear simulations that for profile shearing, C_φ/χ_φ is first linear in ρ_* but saturates at high ρ_* values <cit.>. Overall, the ρ_* scaling of residual stress is still debated and deserves further investigation. What is certainly true, however, is that the exponent on any ρ_* scaling is between 0 and 1 and could quite possibly depend on the plasma parameters.
§.§ Numerical simulations
All the contributions listed above combine to generate the total residual stress. Unfortunately, many tend to be of comparable magnitude, at least from simple scaling arguments, with various signs, making the prediction of their sum a delicate exercise. A quantitative prediction of intrinsic rotation therefore requires numerical simulations.
The lowest order momentum flux can be computed in gyrokinetic δ f flux-tube codes provided that the background E× B toroidal flow and an arbitrary flux surface geometry are included.
The first order contribution (i) can also be computed within the flux-tube approach but requires the inclusion of higher order parallel derivatives.
The contributions (ii) to (iv) require a radially global approach.
Contribution (v) can be treated in a δ f flux-tube code by adding the neoclassical correction to the background Maxwellian distribution function. It can also be treated by solving the coupled neoclassical and turbulent problem. The second option requires an accurate collision operator and is considerably more computationally expensive as the simulations must cover several ion-ion collision times to reach a stationary state with respect to the neoclassical physics. This method includes, however, the impact of turbulence on neoclassical physics, which may also be relevant for momentum transport <cit.>.
Finally, all simulations that aim for a quantitative prediction of the momentum flux must treat the electrons kinetically as an adiabatic electron approximation has a dramatic impact on the parallel symmetry <cit.>.
The first principle prediction of the intrinsic rotation profile resulting from momentum transport including the interplay between neoclassical and turbulence physics represents a formidable challenge. Only one example of a global simulation with kinetic electrons (at reduced ion to electron mass ratio) including all the lowest and first order contributions to momentum transport and evolving the rotation profile over a confinement time has been reported <cit.>. It remains an extremely important result as it demonstrates that a rotation profile reaching 0.15 v_ thi can be sustained by the internal redistribution of momentum by turbulence, see Fig 10 in <cit.>, lending further support to the interpretation of intrinsic rotation in this framework.
It also confirms the critical role of including kinetic electrons as the toroidal rotation was shown to develop in the opposite direction when the electrons were described adiabatically.
At some point, the issue was raised as to whether the conventional gyrokinetic ordering was sufficient to properly describe turbulent momentum transport, in particular in full-f global simulations <cit.>. It was first proven in the context of the gyrokinetic field theory that momentum conservation is guaranteed to any order provided the approximations are made at the level of the Hamiltonian <cit.> and it was then demonstrated numerically that the conventional gyrokinetic ordering is sufficient to describe momentum transport in the long wavelength approximation that is valid for ITG and TEM turbulence <cit.>.
§ ON-GOING MODELLING ACTIVITIES
At present, numerical simulations as reported in <cit.> are far too costly to be systematically compared to experimental observations. Modelling activities therefore focus on the residual stress contributions separately in the hope that one of these contributions dominates the others in magnitude.
The collisionality dependence of the neoclassical equilibrium residual stress C_φ^ NC is appealing and was invoked to explain Type I rotation reversals in MAST <cit.>. A simplified qualitative model was used to predict the reversal state. According to this model, the rotation profile is predicted to be peaked in the plasma region where the collisionality is lower than a threshold value and hollow in the region above, with the transition between the two regions moving radially inward across a density ramp. While the prediction of the reversal state was reasonably successful, the model did not appear compatible with the experimental observation that the rotation profile is strongly modified at the critical density but relatively independent of the density before and after the reversal.
More recently, the impact of C_φ^ NC has been investigated for the AUG database assembled in <cit.> and covering pre- and post- reversal profiles. The focus of this work was the prediction of the toroidal rotation gradient around mid-radius with the modelling based on a quasi-linear approach supported by a few non-linear simulations <cit.>. The gyrokinetic simulations included all the lowest order terms and the residual stress driven by the neoclassical equilibrium. The latter appeared comparable in magnitude to the up-down asymmetry residual stress and the Coriolis pinch. The predicted toroidal rotation gradient was up to an order of magnitude smaller than measured, demonstrating the need to invoke other contributions to explain the measurements.
The impact of profile shearing was investigated for a DIII-D plasma in which the toroidal rotation was nearly zeroed out by counter-current neutral beam injection <cit.>. In these conditions, the Coriolis pinch is negligible and the momentum input balances the residual stress. The simulations were performed in the non-linear regime, including all the lowest order contributions to the momentum flux and the profile shearing residual stress (from first and second order derivatives). Around mid-radius, the predicted momentum flux was comparable in magnitude to experiment, but could differ, even in sign, depending on the chosen second order derivatives of the temperature and density profiles. Further studies are required to better quantify the relevance of this contribution.
Finally, the impact of a generic symmetry breaking term was explored for the AUG databases of <cit.> and <cit.> by imposing a finite ballooning angle shift θ_0 in the linear gyrokinetic simulations. A unique θ_0 value was chosen for all the cases in the TEM regime and another for those in the ITG regime. After these two ad-hoc values are chosen, the experimental toroidal rotation gradient around mid-radius is surprisingly well reproduced by the quasi-linear prediction across the whole database capturing the strong R/L_n dependence of the rotation gradient. This suggests that the residual stress mechanism sustaining the intrinsic rotation profile relies on the coupling, by parallel compression, between density and parallel velocity fluctuations, as this coupling directly engenders a R/L_n dependence of residual stress.
§ SUMMARY AND DISCUSSION
Based on the experimental observations gathered in the last ten years in AUG, C-Mod, MAST, KSTAR and TCV and the developments in the theory of intrinsic rotation, turbulent momentum transport appears to be the most likely candidate to explain toroidal rotation reversals in the core of Ohmic L-modes.
Concerning the stationary rotation profiles, the lowest order contributions in the turbulent momentum flux (the diagonal part, the Coriolis pinch and the up-down asymmetry residual stress) cannot account for the experimental observations and higher order contributions to the residual stress are required.
The first order contributions are now identified <cit.> and in the process of being tested against the experimental observations. Combining one of these first order contributions, the neoclassical equilibrium residual stress, with the lowest order contributions was recently demonstrated to be insufficient to reproduce the experimentally measured intrinsic rotation gradient for a database of AUG Ohmic L-modes <cit.>.
The focus is now moving to another first order contribution, profile shearing residual stress, which was shown to be sufficiently large to reproduce the required momentum flux in a zeroed-rotation DIII-D case <cit.>. One difficulty of this validation exercise arises from the dependence of several residual stress contributions on the second derivatives of the temperature and density profiles, which are unlikely to be ever sufficiently well measured experimentally. There are two main ways to minimise the impact of this issue: either perform the modelling at multiple radial positions as in <cit.> to account for the constraint engendered by the value of the first derivatives or, when possible, use a sufficiently complete simulation to compare the magnitude of the different residual stress terms.
At present, it remains unclear whether a first order contribution dominates for specific experimental conditions.
For the reversals dynamics, the rotation is observed to be a very sharp function of the plasma density at the reversal, at least in C-Mod and TCV. Such a sharp variation could be the signature of:
* a continuous but sharp dependence of C_φ/χ_φ on a plasma parameter that varies in the density ramp, e.g. density, collisionality, T_e/T_i, etc.
* a bifurcation in momentum transport triggered by the density increase
* a moderate dependence of C_φ/χ_φ on a plasma parameter that exhibits a strong variation (continuous or bifurcation) close to the critical density
Conceptually, a bifurcation is very different from a continuous transition as it requires unstable states and some direct feedback of the rotation profile on momentum transport. Some features of the reversals in C-mod and TCV (the insensitivity of the dynamics to the density ramp rate, the hysteresis and the gap observed in the stationary profiles) suggest a bifurcation rather than a continuous transition. This aspect would deserve further investigation, in particular in AUG and KSTAR, as the choice between a bifurcation and a continuous transition not only impacts the way data should be handled in multi-variable regressions (one or two sets?) but also provides a strong constraint on the theoretical solution.
From a theoretical perspective, a mechanism that supports hypothesis (i) is not directly offered by current theories, since the dependence of turbulent transport on plasma parameters is predicted to be rather mild in general. A change of sign of one of the residual stress components at the TEM/ITG transition <cit.> could be invoked but the TEM/ITG transition is, itself, not a particularly sharp function of collisionality (it occurs at a different collisionality for the different wavevectors). For hypothesis (ii), a bifurcation in momentum transport could be triggered if the momentum diffusivity becomes locally negative, i.e. the momentum flux locally decreases as the rotation gradient increases. This could, in principle, occur if the contribution to the toroidal momentum diffusivity from the perpendicular dynamics overcomes the parallel one, which requires large values of ϵ/q. Whether such a mechanism can be at play for realistic plasma conditions remains, however, to be demonstrated. Hypothesis (iii) can probably be dismissed as no plasma parameter except the toroidal rotation has so far been observed to strongly vary at the reversal and this despite an exhaustive search.
To summarise, in spite of considerable progress, there is, to date, no modelling that quantitatively predicts the core intrinsic rotation gradient over a large scale database encompassing pre- and post- reversal profiles, nor the dynamics of a reversal. Possible routes to progress are suggested below.
* Is there a single type of reversal or several? Is there a common threshold on a local parameter that unifies the different types of reversals?
Important parameters for turbulent transport are the normalised gradients R/L_T_e, R/L_T_i and R/L_n, the safety factor, the magnetic shear, T_e/T_i, the collisionality, the local plasma β and the magnetic equilibrium (elongation, triangularity, etc.).
* In the same vein, further characterisation of the scalings of the threshold(s) in terms of local plasma parameters, in particular for Type II reversals would be helpful. Here, an important issue is whether a critical collisionality better unifies the data than a critical density. Is that the case in all devices? Dedicated experiments with electron cyclotron heating power ramps may help decouple density and collisionality effects.
* Does the strong correlation between the intrinsic rotation gradient and R/L_n observed in AUG hold across C-Mod, KSTAR and the full TCV databases? This would hint at a residual stress mechanism that relies on parallel compression and would merit examination over as wide an operational range as possible.
* An interesting observation from C-Mod is that the region where the toroidal rotation reverses is typically restricted to q≲ 3/2. In TCV, Type II.a reversals were only observed for a sufficiently low q_95 value and in KSTAR no reversals are observed for low elongation high q_95 plasmas, which may or may not be related.
From the theory standpoint, the toroidal projection of the perpendicular flow becomes larger at low q and a marked q dependence could suggest an enhanced role of the perpendicular dynamics (E× B shearing and radial-perpendicular Reynolds and Maxwell stress) close to the reversal.
Again, a more systematic characterisation of the radial region where the reversal is observed, including different q_95 values, would bring new elements to this issue.
* In TCV, toroidal rotation reversals do not occur for negative triangularity (in the sense that a sharp transition is not observed). This should be better understood, in particular whether this is connected to the stabilisation of TEM turbulence at negative triangularity <cit.>. More generally, plasma shaping offers a convenient tool to help decouple the plasma current and the edge safety factor, which could be used in limited and diverted plasmas to broaden the operational domain, as in KSTAR experiments <cit.>.
* Finally, as mentioned in <cit.>, there may be a similarity worth investigating between toroidal rotation reversals in Ohmic L-modes and the impact of electron cyclotron heating in H-modes with and without NBI injection <cit.>.
Fruitful discussions with J. Hillesheim and O. Sauter are warmly acknowledged.
| During the 1980s, it was shown that stationary toroidal flows that reach up to 20% of the thermal velocity can develop in tokamak plasmas in the absence of externally applied torque (a summary of these early observations is available in Table 1 of <cit.>).
This phenomenon, dubbed intrinsic rotation, has practical implications for future low torque devices like ITER owing to the potential stabilising impact of plasma flows on turbulence and deleterious MHD instabilities.
Following initial measurements, experiments were performed to explore the physics of intrinsic rotation. The observations have been regularly summarised in review articles <cit.>.
Momentum transport, reconnection events (sawteeth and ELMs), non-axisymmetric magnetic fields (from magnetic perturbation coils, error fields or large MHD modes), orbit losses and interactions with neutrals were all observed to impact intrinsic rotation. The underlying physics is surprisingly rich, involving many competing mechanisms that determine the final rotation profile. These include collisional momentum transport, fluctuation-induced Reynolds and Maxwell stresses (turbulent transport), charge exchange with neutrals, J_∥×δ B torque induced by resonant magnetic perturbations, J_r× B torque induced by non-axisymmetric magnetic fields (neoclassical toroidal viscosity NTV), ionisation currents and orbit losses. A convenient set of transport equations has been proposed in <cit.> to describe the evolution of toroidal flows in tokamak plasmas resulting from these various mechanisms in a consistent fluid moment framework. This approach highlights the complexity of intrinsic rotation inherent to the number of mechanisms at play.
The present paper focuses on a specific puzzle within intrinsic rotation: spontaneous toroidal rotation reversals.
This intriguing phenomenon was reported 10 years ago on the TCV tokamak where the core toroidal rotation was observed to flip from counter-current to co-current when a threshold in density was exceeded in Ohmic L-mode plasmas <cit.>. Rotation reversals have since been demonstrated in C-Mod <cit.>, AUG <cit.>, MAST <cit.> and KSTAR <cit.>.
In parallel, the theory of intrinsic rotation has undergone considerable development and many possible physical mechanisms have been identified.
In spite of this progress, understanding toroidal rotation reversals still eludes us and predicting the direction of the core rotation in Ohmic L-modes remains a challenge. Toroidal rotation reversals do not directly affect plasma performance, but they represent a critical test for the theory of intrinsic rotation. The purpose of the present work is to survey the observations and the theoretical framework with the goal of presenting the current understanding of this research and explaining current ideas and approaches to its resolution.
The definitions and conventions adopted in this paper are introduced in Sec. <ref>, followed by a summary of the experimental observations in Sec. <ref>. The constraints these observations put on the theory and, in particular, the reasons why turbulent momentum transport is thought to be responsible for rotation reversals are discussed in Sec. <ref>.
The theory of turbulent momentum transport is briefly reviewed in Sec. <ref> before summarising the current status of the modelling activities in Sec. <ref>. Finally, in Sec. <ref> future work and open issues are discussed. | null | null | null | null | null |
http://arxiv.org/abs/1701.08076v2 | 20170127152013 | Structural scale $q-$derivative and the LLG-Equation in a scenario with fractionality | [
"José Weberszpil",
"José Abdalla Helayël-Neto"
] | math-ph | [
"math-ph",
"cond-mat.stat-mech",
"hep-th",
"math.MP",
"quant-ph"
] |
[email protected]
Universidade Federal Rural do Rio de Janeiro, UFRRJ-IM/DTL
Av. Governador Roberto Silveira s/n- Nova Iguaçú, Rio de Janeiro,
Brasil, 695014.
[email protected]
Centro Brasileiro de Pesquisas Físicas-CBPF-Rua Dr Xavier Sigaud
150,
22290-180, Rio de Janeiro RJ Brasil.
In the present contribution, we study the Landau-Lifshitz-Gilbert
equation with two versions of structural derivatives recently proposed:
the scale-q-derivative in the non-extensive statistical mechanics
and the axiomatic metric derivative, which presents Mittag-Leffler
functions as eigenfunctions. The use of structural derivatives aims
to take into account long-range forces, possible non-manifest or hidden
interactions and the dimensionality of space. Having this purpose
in mind, we build up an evolution operator and a deformed version
of the LLG equation. Damping in the oscillations naturally shows up
without an explicit Gilbert damping term.
Structural scale q-derivative and the LLG-Equation in a scenario
with fractionality
J. Weberszpil and J. A. Helayël-Neto
Keywords: Structural Derivatives, Deformed Heisenberg Equation, LLG
Equation, Non-extensive Statistics, Axiomatic Deformed Derivative
§ INTRODUCTION
In recent works, we have developed connections and a variational formalism
to treat deformed or metric derivatives, considering the relevant
space-time/ phase space as fractal or multifractal <cit.>
and presented a variational approach to dissipative systems, contemplating
also cases of a time-dependent mass <cit.>.
The use of deformed-operators was justified based on our proposition
that there exists an intimate relationship between dissipation, coarse-grained
media and a limit energy scale for the interactions. Concepts and
connections like open systems, quasi-particles, energy scale and the
change in the geometry of space–time at its topological
level, nonconservative systems, noninteger dimensions of space–time
connected to a coarse-grained medium, have been discussed. With this
perspective, we argued that deformed or, we should say, Metric or
Structural Derivatives, similarly to the Fractional Calculus (FC),
could allow us to describe and emulate certain dynamics without explicit
many-body, dissipation or geometrical terms in the dynamical governing
equations. Also, we emphasized that the paradigm we adopt was different
from the standard approach in the generalized statistical mechanics
context <cit.>, where the
modification of entropy definition leads to the modification of the
algebra and, consequently, the concept of a derivative <cit.>.
This was set up by mapping into a continuous fractal space <cit.>
which naturally yields the need of modifications in the derivatives,
that we named deformed or, better, metric derivatives <cit.>.
The modification of the derivatives, in accordance with the metric,
brings about a change in the algebra involved, which, in turn, may lead
to a generalized statistical mechanics with some adequate definition
of entropy.
The Landau-Lifshitz-Gilbert (LLG) equation sets out as a fundamental
approach to describe physics in the field of Applied Magnetism. It
exhibits a wide spectrum of effects stemming from its non-linear structure,
and its mathematical and physical consequences open up a rich field
of study. We pursue the investigation of the LLG equation in a scenario
where complexity may play a role. The connection between LLG and fractionality,
represented by an α-deformation parameter in the deformed
differential equations, has not been exploited with due attention.
Here, the use of metric derivatives aims to take into account long-range
forces, possible non-manifest or hidden interactions and/or the dimensionality
of space.
In this contribution, considering intrinsically the presence of complexity
and possible dissipative effects, and aiming to tackle these issues,
we apply our approach to study the LLG equation with two metric or
structural derivatives, the recently proposed scale -q-derivative
<cit.> in the nonextensive statistical mechanics
and, as an alternative, the axiomatic metric derivative (AMD) that
has the Mittag-Leffler function as eigenfunction and where deformed
Leibniz and chain rule hold - similarly to the standard calculus -
but in the regime of low-level of fractionality. The deformed operators
here are local. We actually focus our attention to understand whether
the damping in the LLG equation can be connected to some entropic
index, the fractionality or even dimensionality of space; in a further
step, we go over into anisotropic Heisenberg spin systems in (1+1)
dimensions with the purpose of modeling the weak anisotropy effects
by means of some representative parameter, that depends on the dimension
of space or the strength of the interactions with the medium. Some
considerations about an apparent paradox in the magnetization or angular
damping are given.
Our paper is outlined as follows: In Section 2, we briefly present
the scale-q-derivative in a nonextensive context, building up the
q- deformed Heisenberg equation and applying to tackle the problem
of the LLG equation; in Section 3, we apply the axiomatic derivative
to build up the α-deformed Heisenberg equation and to tackle
again the problem of LLG equation. We finally present our Conclusions
and Outlook in Section 4.
§ APPLYING SCALE-Q-DERIVATIVE IN A NONEXTENSIVE CONTEXT
In this Section, we briefly recall the main forms of the scale-q-derivative; the reader may see Ref. <cit.> for more details.
Some initial claims here coincide with our work of Refs. <cit.>
and the approaches here are in fact based on local operators <cit.>.
The local differential equation,
dy/dx=y^q,
with convenient initial condition, yields the solution given by the q-exponential, y=e_q(x):=[1+(1-q)x]^1/(1-q) <cit.>.
The key object of our work here is the scale-q-derivative (Sq-D), which we have recently defined as
D_(q)^λf(λ x)≡ [1+(1-q)λ x]df(λ x)/dx.
The eigenvalue equation holds for this derivative operator, as the
reader can verify:
D_(q)^λf(λ x)=λ f(λ x).
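The eigenvalue property can indeed be verified symbolically. The minimal sympy sketch below assumes the standard Tsallis form of the q-exponential recalled above and checks that the Sq-D returns λ times e_q(λx):

import sympy as sp

x, lam, q = sp.symbols('x lambda q', positive=True)

# q-exponential on lambda*x: e_q(u) = [1 + (1-q) u]^(1/(1-q)), -> exp(u) as q -> 1
e_q = (1 + (1 - q) * lam * x) ** (1 / (1 - q))

# scale-q-derivative: D_(q)^lambda f = [1 + (1-q) lambda x] * df/dx
Df = (1 + (1 - q) * lam * x) * sp.diff(e_q, x)

print(sp.simplify(Df / e_q))  # -> lambda, i.e. e_q(lambda x) is an eigenfunction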
§.§ q- deformed Heisenberg Equation in the Nonextensive
Statistics Context
With the aim to obtain a scale-q- deformed Heisenberg equation, we now consider
the scale-q- derivative <cit.>
d^q/dt^q=(1+(1-q)λ t)d/dt
and the Scale-q-Deformed Schrödinger Equation <cit.>,
iħ D_q,t^λψ=-ħ^2/2m∇^2ψ-Vψ=Hψ,
that, as we have shown in <cit.>, is related
to the nonlinear Schrödinger equation referred to in Refs. <cit.>
as NRT-like Schrödinger equation (with q=q'-2 compared to the q-index
of the reference) and can be thought as resulting from a time-scale-q-deformed-derivative
applied to the wave function ψ.
Considering, in eq.(<ref>), ψ(r⃗,t)=U_q(t,t_0)ψ(r⃗,t_0),
the q- evolution operator naturally emerges if we take into account
a time-scale-q- deformed-derivative (do not confuse with formalism
of discrete scale time derivative):
U_q(t,t_0)=e_q(-(i/ħ M_q)ℋ_q t).
Here, M_q is a constant for dimensional regularization reasons.
Note that the q-deformed evolution operator is neither Hermitian nor
unitary; a notion of q-unitarity, U_q^†(t,t_0)⊗_qU_q(t,t_0)=1, could be invoked to overcome these facts. In this work, we assume
the case where the commutativity of U_q and ℋ holds, but the q-unitarity is also a possibility.
Now, we follow reasonings similar to those found in Ref. <cit.>,
considering the Sq-D.
So, with these considerations, we can now write a nonlinear Scale-q-deformed
Heisenberg Equation as
D_t,q^λÂ(t)=-i/ħ M_q[Â,ℋ],
where we supposed that U_q and ℋ commute and M_q
is some factor only for dimensional equilibrium.
§.§ q-deformed LLG Equation
To build up the scale-q- deformed Landau-Lifshitz-Gilbert Equation, we consider eq.(<ref>), with Â(t)=Ŝ_q
D_t,q^λŜ_q(t)=-i/ħ M_q[Ŝ_q,ℋ],
where we supposed that U_q and ℋ commute.
ℋ=-g_qμ_B/ħ M_qŜ_q∘H⃗_eff.
Here, H⃗_eff is an effective field, whose form
we shall write down clearly in the sequel.
The scale-q-deformed momentum operator is here defined as p_q'^λ=-iħ M_q'[1+λ(1-q')x]∂^q/∂ x^q.
Considering this operator, we obtain a deformed algebra, here in terms of commutation relation between coordinate and momentum
[x̂_i^q,p̂_j^q]=[1+λ(1-q')x]ħ M_q'δ_i j I
and, for angular momentum components, as
[L̂_i^q,L̂_j^q]=[1+λ(1-q')x] iħ M_qε_i j kL̂_k^q.
The q' factor in x̂_i^q',p̂_j^q',L̂_i^q',L̂_j^q',M_q'
is only an index and q is not necessarily equal to q'.
The resulting scale-q-deformed LLG equation can now be written as
D_t,q^λŜ_q(t)=-[1+λ(1-q')x]g_qμ_B/ħ M_qŜ_q×H⃗_eff.
Take m̂_q≡γ_qŜ_q, γ_q'≡[1+λ(1-q')x]g_qμ_B/ħ M_q.
If we consider that the spin algebra is not affected by any emergent
effects, we can take q'=1.
Considering eq.(<ref>) with Â(t)=Ŝ_q
and m̂_q=|γ_q|Ŝ_q and q'=1,
we obtain the q-time deformed LLG dynamical equation for magnetization
as
D_t,q^λm̂_q(t)=-|γ|m̂_q×H⃗_eff.
Considering H⃗_eff=H_0k̂, we have the solution:
m_x,q=ρcos_q(θ_0)cos_q(γ H_0t)+ρsin_q(θ_0)sin_q(γ H_0t).
In the figure, θ_0=0.
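A numerical look at this solution can be built from the representation of the q-trigonometric functions through the q-exponential of an imaginary argument, cos_q(x)+i sin_q(x)=e_q(ix), which is standard in the nonextensive literature. In the sketch below the values of q and γH_0 are illustrative; note how the amplitude decays for q>1, without any explicit Gilbert term:

import numpy as np

def e_q(z, q):
    # Tsallis q-exponential, e_q(z) = [1 + (1-q) z]^(1/(1-q)); exp(z) at q = 1
    if abs(q - 1.0) < 1e-12:
        return np.exp(z)
    return (1.0 + (1.0 - q) * z) ** (1.0 / (1.0 - q))

def cos_q(x, q):
    return e_q(1j * x, q).real   # cos_q(x) = Re e_q(i x)

q, gamma_H0, rho = 1.1, 1.0, 1.0          # illustrative values, theta_0 = 0
for t in np.linspace(0.0, 30.0, 7):
    m_xq = rho * cos_q(gamma_H0 * t, q)   # m_x,q = rho cos_q(gamma H_0 t)
    print(f"t = {t:5.1f}   m_x,q = {m_xq:+.4f}")   # amplitude decays for q > 1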
§ APPLYING AXIOMATIC DERIVATIVE AND THE Α-DEFORMED HEISENBERG
EQUATION
Now, to compare results with two different local operators, we apply
the axiomatic metric derivative.
Following the steps on <cit.> and considering the axiomatic
MD <cit.>, there holds the eigenvalue equation D_x^αE_α(λ x^α)=λ E_α(λ x^α),
where E_α(λ x^α) is the Mittag-Leffler function
that is of crucial importance to describe the dynamics of complex
systems. It involves a generalization of the exponential function
and several trigonometric and hyperbolic functions. The eigenvalue
equation above is only valid if we consider α very close to
1. This is what we call low-level fractionality <cit.>.
Our proposal is to allow the use of the Leibniz rule, even if it would
result in an approximation.
So, we can build up an evolution operator:
U_α(t,t_0)=E_α(-i/ħ^αℋt^α),
and for the deformed Heisenberg Equation
D_t^αA_α^H(t)=-i/ħ^α[A_α^H,ℋ],
where we supposed that U_α and ℋ commute.
To build up the deformed Landau-Lifshitz-Gilbert Equation, we use
eq. (<ref>), now for a spin operator Ŝ_α(t),
in such a way that we can write a deformed Heisenberg equation as
D_t^αŜ_α(t)=-i/ħ^α[Ŝ_α,ℋ],
with
ℋ=-g_αμ_B/ħ^αŜ_α∘H⃗_eff.
Here, H⃗_eff is an effective field, whose form will become clear below.
Now, consider the deformed momentum operator as <cit.>
p^α=-i(ħ)^αM_x,α∂^α/∂ x^α.
Taking this operator, we obtain a deformed algebra, here in terms
of commutation relation for coordinate and momentum
[x̂_i^α,p̂_j^α]=Γ(α+1)ħ^αM_αδ_i j I
and for angular momentum components as
[L̂_i^α,L̂_j^α]=Γ(α+1) iħ^αM_αε_i j kL̂_k^α.
The resulting α-deformed LLG equation can now be written as
_0^JD_t^αŜ_α(t)=-M_αΓ(α+1)g_αμ_B/ħ^αŜ_α×H⃗_eff.
If we take m̂_α≡γ_αŜ_α, γ_α≡M_αΓ(α+1)g_αμ_B/ħ^α, we can re-write the equation as the α-deformed LLG
_0^JD_t^αm̂_α(t)=-|γ_α|m̂_α×H⃗_eff,
with H⃗_eff=H_0k̂.
We have the solution of eq.(<ref>):
m_α x=Acosθ_0E_2α(-ω_0^2t^2α)+Asinθ_0 ω_0t^αE_2α,1+α(-ω_0^2t^2α).
In the figure below, the reader may notice the behavior of the magnetization,
considering θ_0=0.
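The behaviour referred to above can be reproduced with a truncated series for the Mittag-Leffler function, E_α,β(z)=∑_k≥0 z^k/Γ(αk+β), which converges for all finite z. A minimal numerical sketch, with illustrative values of α and ω_0 and with θ_0=0 so that only the first term of the solution survives:

import math

def mittag_leffler(z, alpha, beta=1.0, n_terms=200):
    # Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta);
    # log-Gamma avoids overflow of Gamma(alpha*k + beta) at large k.
    if z == 0.0:
        return 1.0 / math.gamma(beta)
    s = 0.0
    for k in range(n_terms):
        log_mag = k * math.log(abs(z)) - math.lgamma(alpha * k + beta)
        if log_mag < -700.0:          # remaining terms are numerically zero
            break
        sign = -1.0 if (z < 0 and k % 2 == 1) else 1.0
        s += sign * math.exp(log_mag)
    return s

alpha, omega0, A = 0.9, 1.0, 1.0      # illustrative values, theta_0 = 0
for t in (0.0, 1.0, 2.0, 4.0, 8.0):
    m = A * mittag_leffler(-(omega0 ** 2) * t ** (2 * alpha), 2 * alpha)
    print(f"t = {t:4.1f}   m_alpha,x = {m:+.4f}")   # alpha = 1 recovers A cos(omega0 t)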
For α=1, the solution reduces to m_x=Acos(ω_0t+θ_0),
the standard Simple Harmonic Oscillator solution for the precession
of magnetization.
The presence of complex interactions and dissipative effects that
are not explicitly included in the Hamiltonian can be captured with
the use of deformed metric derivatives. Without explicitly adding
the Gilbert damping term, the damping in the oscillations could
reproduce the damping described by the Gilbert term, or it could disclose
some new extra damping effect. Also, depending on the relevant parameter
(the entropic index q or the fractionality α), the increasing oscillations
can signal that it is sensible to expect fractionality to interfere
with the effects of polarized currents, as the Slonczewski term describes.
We point out that there are qualitative similarities in both cases,
as the damping or the increasing of the oscillations, depending on
the relevant control parameters. Despite that, there are also some
interesting differences, as the change in phase for axiomatic derivative
application case.
Here, we cast some comments about an apparent paradox: if we take,
as is usually done in the literature for the LLG equation, the scalar product of eq.
(<ref>) with m̂_q, we obtain the
apparent paradox that the modulus of m̂_q does not change.
On the other hand, if instead we take the scalar product
with H⃗_eff, we obtain
the indication that the angle between m̂_α and H⃗_eff
does not change either. How, then, to explain the damping in the oscillations of
m̂_q? This can be explained by the following
argument. Even the usual LLG equation, with the Gilbert term,
can be rewritten in a form similar to the LLG equation without the Gilbert term;
see eq. (2.7) in Ref. <cit.>.
The effective field H⃗_eff then stores the information about
the interactions that cause damping. In our case, when carrying out
the simulations, we have taken H⃗_eff as a constant effective
field. Since the damping term, eq. (2.8) in Ref.
<cit.>, is small, the effective field H⃗_eff=H(t)+k(S×H)
is approximately H(t), and the H(t) contribution
dominates the scalar product over the explicit dissipation term.
This could, therefore, explain the possible inconsistency.
§ CONCLUSIONS AND OUTLOOK
In short:
Here, we tackle the problem of LLG equations considering the presence
of complexity and dissipation or other interactions that give rise
to the term proposed by Gilbert or the one by Slonczewski.
With this aim, we have applied the scale-q-derivative and the axiomatic
metric derivative to build up deformed Heisenberg equations. The evolution
operator naturally emerges with the use of each case of the structural
derivatives. The deformed LLG equations are solved for a simple case,
with both structural or metric derivatives.
Also, in connection with the LLG equation, we can cast some final
considerations for future investigations:
Does fractionality simply reproduce the damping described by the Gilbert
term, or could it disclose some new extra damping effect?
Is it sensible to expect fractionality to interfere with the effects
of polarized currents as the Slonczewski term describes?
These two points are relevant in connection with fractionality and
the recent high precision measurements in magnetic systems may open
up a new venue to strengthen the relationship between the fractional
properties of space-time and Condensed Matter systems.
The authors wish to express their gratitude to FAPERJ-Rio de Janeiro
and CNPq-Brazil for the partial financial support.
[1] J. Weberszpil, M. J. Lazo and J. A. Helayël-Neto, Physica A 436 (2015) 399-404.
[2] J. Weberszpil and J. A. Helayël-Neto, Physica A 450 (2016) 217-227; arXiv:1511.02835 [math-ph].
[3] C. Tsallis, J. Stat. Phys. 52 (1988) 479-487.
[4] C. Tsallis, Brazilian Journal of Physics 39, 2A (2009) 337-356.
[5] C. Tsallis, Introduction to Nonextensive Statistical Mechanics - Approaching a Complex World (Springer, New York, 2009).
[6] A. S. Balankin and B. Espinoza Elizarraraz, Phys. Rev. E 85 (2012) 056314.
[7] A. S. Balankin and B. Espinoza, Phys. Rev. E 85 (2012) 025302(R).
[8] A. Balankin, J. Bory-Reyes and M. Shapiro, Physica A (2015), in press, doi:10.1016/j.physa.2015.10.035.
[9] J. Weberszpil and J. A. Helayël-Neto, Advances in High Energy Physics (2014) 1-12.
[10] F. D. Nobre, M. A. Rego-Monteiro and C. Tsallis, Phys. Rev. Lett. 106 (2011) 140601.
[11] J. Weberszpil, C. F. L. Godinho, A. Cherman and J. A. Helayël-Neto, in: Proceedings of the 7th Conference Mathematical Methods in Physics - ICMP 2012, Rio de Janeiro; Proceedings of Science (PoS), SISSA, Trieste, 2012, pp. 1-19.
[12] J. Weberszpil and J. A. Helayël-Neto, J. Adv. Phys. 7, 2 (2015) 1440-1447, ISSN 2347-3487.
[13] J. Weberszpil and J. A. Helayël-Neto, arXiv:1605.08097 [math-ph].
[14] M. Lakshmanan, The fascinating world of the Landau-Lifshitz-Gilbert equation: an overview, Phil. Trans. R. Soc. A 369 (2011) 1280-1300, doi:10.1098/rsta.2010.0319.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08089v1 | 20170127155347 | Comments on the temperature dependence of the gauge topology | [
"Edward Shuryak"
] | hep-lat | [
"hep-lat"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.07893v2 | 20170126223341 | Microstate Counting of $AdS_4$ Hyperbolic Black Hole Entropy via the Topologically Twisted Index | [
"Alejandro Cabo-Bizet",
"Victor I. Giraldo-Rivera",
"Leopoldo A. Pando Zayas"
] | hep-th | [
"hep-th"
] |
MCTP-17-01
Microstate Counting of AdS_4 Hyperbolic Black Hole Entropy
via the Topologically Twisted Index
Alejandro Cabo-Bizet^a, Victor I. Giraldo-Rivera^b and Leopoldo A. Pando Zayas^c
^a Instituto de Astronomía y Física del Espacio (CONICET-UBA), Ciudad Universitaria, C.P. 1428, Buenos Aires, Argentina
^b International Centre for Theoretical Sciences (ICTS-TIFR), Shivakote, Hesaraghatta Hobli, Bengaluru 560089, India
^b The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy
^c Michigan Center for Theoretical Physics, Randall Laboratory of Physics, The University of Michigan, Ann Arbor, MI 48109-1120
We compute the topologically twisted index for general N=2 supersymmetric field theories on ℍ_2× S^1.
We also discuss asymptotically AdS_4 magnetically charged black holes with hyperbolic horizon, in four-dimensional N=2 gauged supergravity. With certain assumptions, put forward by Benini, Hristov and Zaffaroni, we find precise agreement between the black hole entropy and the topologically twisted index, for ABJM theories.
§ INTRODUCTION
Black holes have an entropy that fits neatly in a thermodynamics framework as originally established in the works of Bekenstein and Hawking in the early 1970's. The microscopic origin, that is, the nature of the degrees of freedom that this entropy counts, has been an outstanding challenge for many decades. Any candidate to a theory of quantum gravity must provide an answer to this fundamental question. String theory, in the works of Strominger and Vafa, has successfully passed this test for a particular type of black holes <cit.>. In the context of the AdS/CFT correspondence, the original work of Strominger and Vafa can be interpreted as an instance of AdS_3/CFT_2. A natural question pertains higher dimensional versions of the AdS/CFT correspondence. Recent work by Benini, Hristov and Zaffaroni addresses the microscopic counting of the entropy of certain black holes from the point of view of AdS_4/CFT_3<cit.>.
In this manuscript we explore the topologically twisted index, originally introduced by Benini and Zaffaroni in the framework of N=2 supersymmetric three-dimensional field theories in S^2× S^1<cit.> (see also <cit.>), for the case of supersymmetric theories in ℍ_2× S^1, where ℍ_2 is the hyperbolic plane. Although we provide the ingredients for arbitrary N=2 supersymmetric theories, we will particularize our results for a specific deformation of ABJM theory. The holographic dual of such deformation is thought to be a hyperbolic black hole. In this work, our main motivation comes from the prospect of understanding the D=3 SCFT representation of the appropriate AdS_4 black hole microstates. With this aim we are driven to explore four dimensional N=2 gauged supergravity and find black hole solutions with ℍ_2 horizon. Hyperbolic black holes have been discussed in the context of AdS/CFT in, for example, <cit.>.
Asymptotically AdS_4 black holes in 𝒩=2 gauged supergravity, which are sourced by magnetic fluxes, have been widely studied <cit.>. Roughly speaking, from the bulk perspective, the presence of fluxes allows one to define the black hole as interpolating from the UV AdS_4 to the near horizon AdS_2× S^2. As a result of our study we are able to identify the role of such fluxes from the dual SCFT perspective. These flavor fluxes, together with a continuum of color fluxes, generate a one-parameter hierarchy of Landau levels on ℍ_2, that determines the value of the ABJM index. What we set out to explore in this paper is whether the leading behavior in the large N limit of the topologically twisted index of a specific deformation of ABJM, evaluated on the Hilbert space composed by the aforementioned Landau levels, coincides with the Bekenstein-Hawking expression for the semiclassical entropy of the black holes in question. We will find that indeed both results coincide.
Another important motivation for our work is the intrinsically interesting field theory problem of localization of supersymmetric field theories in non-compact spaces. This problem naturally appears in the context of localization of supergravity theories, for an understanding of exact black hole entropy counting <cit.>. The same problem appears in holographic approaches to Wilson loops where the world volume of the classical configuration contains an AdS_2 factor. For example, the excitations on a D3 brane which is dual to a Wilson loop in the totally symmetric rank k representation <cit.> were identified to correspond to an N=4 vector multiplet in ℍ_2 × S^2 <cit.>. Localization in non-compact spaces has recently been addressed in <cit.> and <cit.>; our work constitutes an extension to the topologically twisted case.
The manuscript is organized as follows. In section <ref> we discuss the preliminary ingredients we need, for example, our guidance principle on the field theory side: supersymmetric localization <cit.>, the background metric, spin connection, and supersymmetric structure of the actions needed to compute BPS observables in a generic three-dimensional 𝒩=2 Chern-Simons-Matter theory on ℍ_2× S^1. To complete section <ref>, we discuss the boundary conditions to be used in the manuscript. In section <ref> we present the space of square and delta-normalizable functions that will be used to integrate upon, and their respective discrete and continuous spectrum. In section <ref> we compute the one loop super-determinants. In section <ref> we assemble our results to write down the ABJM index on ℍ_2× S^1, and then move on to compute its leading contribution in the large N expansion, by following the procedure pioneered in <cit.>. In section <ref> we find what we believe to be the dual AdS_4 black holes and compare its Bekenstein-Hawking entropy to the leading contribution in the large N expansion of the ABJM index on ℍ_2× S^1. In section <ref> we conclude with a short summary of our results and comment on interesting open and related problems. In a series of appendices we discuss more technical aspects such as, for instance, the construction of square integrable modes in appendix <ref>.
§ TOWARDS THE INDEX ON ℍ_2× S^1
In this section we summarize the building blocks that will be needed in order to compute the topologically twisted index of a generic 𝒩=2 Chern-Simons-Matter theory. The zero locus will be parametrized by a continuum of color fluxes and holonomies. On ℍ_2, these flux BPS configurations are non-normalizable but they are part of the zero locus: the localizing term Q_ϵ V, which is constructed to be semi-positive definite, will vanish at them.
First, we review the SUSY localization method to compute the partition function of 3d Chern-Simons-Matter theories defined over a Euclidean space ℳ with off-shell supersymmetry charge 𝒬. The space ℳ is usually taken to be compact. The localization principle is well known and has been elegantly summarized and interpreted in various reviews, for example, <cit.>. However, given some of the intricacies we face in the case we discuss, we review it here, with the goal of setting up our guiding principle and notation, and of highlighting some of the points on which we place particular emphasis.
To close this section we elaborate on the specific set of boundary conditions that we shall use for background and fluctuations.
§.§ SUSY localization principle
The SUSY localization method is summarized in the following steps
* Select a "middle dimensional" section Γ in the space of complex fields, such as a 3d vector multiplet {A_μ,σ,D,λ,λ̅} of your theory. The path integral Z[Γ] defining the SUSY partition function of a classical action S_cl is to be performed over Γ. The path Γ must be a consistent path of integration of S_cl.
*
The contour Γ intersects a set of Q_ϵ-BPS configurations that will be denoted as BPS[Γ] and that is better known as the localization locus.
*
For each Γ there should exist a Q_ϵ V local functional of fields whose bosonic part is semi-positive definite at Γ and vanishes at BPS[Γ].
*
Given the previous conditions, the strict limit τ→∞ can be taken in such a way that the final result for the partition function is guaranteed to be
Z[Γ] = ∑_X^(0)∈ BPS[Γ] e^-S_cl[X^(0)]Z_X^(0)[Γ],
Z_X^(0)[Γ] := ∫_Γ e^- δ^(2)( Q_ϵ V, X^(0)) ,
where δ^(2)( Q_ϵ V,X^(0)) is the quadratic expansion of Q_ϵ V about X^(0).
We have omitted the integration over the space ℳ to ease the reading, but remember it is there.
Let us review the semiclassical reduction (<ref>).
The starting point is to notice that the partition function Z[Γ] does not change if the initial classical action S_cl is deformed by an arbitrary Q_ϵ-exact deformation τ Q_ϵ V:
∂_τ Z[Γ] = ∂_τ∫_Γ e^-S_cl[X]-τ Q_ϵ V
= -∫_ΓQ_ϵ( V e^-S_cl[X]-τ Q_ϵ V)=0,
provided the measure of integration in field configuration space is Q_ϵ invariant and that there are no contributions from the boundary of the latter.
Under the aforementioned conditions, we can choose a deformation term Q_ϵ V with semi-positive definite bosonic part and thereafter perform a field redefinition
X→ X^(0)+1/√(τ) X^(1).
As Z[Γ] is independent of τ we are free to take the limit τ→∞ and proceed as follows
∫_X e^-S_cl[X^(0)]- τ QV = e^-S_cl[X^(0)]∫_X^(1) e^-τ QV
→ e^-S_cl[X^(0)]-τ QV[X^(0)]∫_X^(1) e^- δ^(2)( Q_ϵ V ,X^(0)) .
Because of the suppression factor e^- τ Q V[X^(0)] and semi-positive definiteness of the bosonic part of Q_ϵ V only classical configurations X^(0)∈Γ, namely X^(0)∈ BPS[Γ], solutions of the zero locus of Q_ϵ V (Q_ϵ V_bos=0) contribute in this limit and (<ref>) is recovered.
§.§ Background geometry and supersymmetry
In this subsection we introduce the basic elements needed for the evaluation of the localization formula for the topologically twisted index of a generic 𝒩=2 Chern-Simons-Matter theory. Specifically, we are interested in U(N)_k× U(N)_-k Chern-Simons theory coupled to matter in the bi-fundamental representation: ABJM<cit.>, living in the non-compact space ℳ=ℍ_2× S^1 whose metric we will represent as
ds^2 = -dt^2+ds_2d^2,
ds_2d^2 := - dθ^2-sinh^2( θ) dφ^2,
φ∼φ+2π, t ∼ t+1.
We shall use in this paper the following signature (-,-,-) on the 3d boundary theory. The flat space metric is η=diag(-1,-1,-1).
In the conventions used in this section, the non trivial spin connection component is
ω^21_φ= -coshθ.
The 2d space ℍ_2 has infinite volume. When dealing with extensive quantities on ℍ_2 we will use a cut-off at large θ and drop out the dependence on such cut-off in the very end. More precisely, this recipe has been used in the context of black hole entropy in <cit.> and, in the context of holographic computations for Wilson loops it was discussed in <cit.>; it amounts to defining the volume of ℍ_2 as:
vol_ℍ_2=-2π.
As general principle, we will consider background configurations that grow asymptotically as the volume element, or slower. As for extensive quantities constructed out of such non normalizable backgrounds, we shall apply the previous regularization recipe [
For example, to work with boundary objects - like the boundary action (<ref>) - with finite limit in the cut-off θ_0 →∞, we follow <cit.>. The idea is to use coordinates (ϱ:=θ_0-θ, φ̃=1/2e^θ_0φ), in such a way that the metric
dθ^2+sinh^2θ dφ^2,
transforms to
dϱ^2+(e^-ϱ-e^-2θ_0+ϱ)^2 dφ̃^2,
where φ̃ is a periodic coordinate with period π e^θ_0 and 0<ϱ<θ_0.
].
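The finite value vol_ℍ_2=-2π can be made explicit with a short symbolic computation: the volume inside the cut-off θ_0 is 2π(coshθ_0-1), and discarding the coshθ_0 piece, which grows like the length of the boundary circle, leaves exactly -2π. A minimal sympy check:

import sympy as sp

theta, theta0 = sp.symbols('theta theta_0', positive=True)

# volume of H_2 up to the cut-off theta_0, metric dtheta^2 + sinh^2(theta) dphi^2
vol = 2 * sp.pi * sp.integrate(sp.sinh(theta), (theta, 0, theta0))
print(sp.simplify(vol))                      # -> 2*pi*(cosh(theta_0) - 1)

# drop the boundary-sized divergence and keep the finite part
finite = sp.simplify(vol - 2 * sp.pi * sp.cosh(theta0))
print(finite)                                # -> -2*pi, the regularised volume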
The results of these sections allow us to compute the topologically twisted index of any 𝒩=2 Chern-Simons theory coupled to matter. As mentioned before, we are interested in the particular case of ABJM. The ABJM theory is composed of two vector multiplets and four matter multiplets in the bi-fundamental of the gauge group. Specifically
Chern-Simons ± k : {A_μ, σ, D, λ_q=1, λ̅_q=1}_± k,
matter : {ϕ_q^a,ϕ̅_q^a, ψ^a_q-1, ψ̅^a_q-1, F^a, F̅^a}, a=1,2,3,4.
where [The supersymmetry transformation rules are defined over the complex conjugated of (,j̅,F̅), which are denoted as (^†,j̅^†,F̅^†).]q is the charge of the corresponding field under the R-symmetry flux (<ref>).
We can represent ABJM theories by the standard quiver diagram: two gauge nodes N_k and N_{-k}, connected by a pair of bifundamental arrows ϕ^1, ϕ^2 from the first node to the second, and a pair ϕ^3, ϕ^4 pointing back.
ABJM theories have 𝒩=8 superconformal symmetry for level k=1,2 and 𝒩=6 for level k≥3. The global symmetry that is manifest in the 𝒩=2 notation is SU(2)_{1,2}× SU(2)_{3,4}× U(1)_T× U(1)_R, where each SU(2) acts upon the doublet composed of the corresponding labels <cit.>.
We are interested in a specific deformation of ABJM. Part of such deformation is a classical background for the R-symmetry potential
V_μ dx^μ=1/2coshθ dφ.
The background (<ref>) is non-normalizable. However, V grows like the volume element of ℍ_2 for large θ. The deformation (<ref>) has non-trivial consequences for the final result of the localization technique.
The R-symmetry background allows for the presence of a Killing spinor ϵ with R-charge q=1. The Killing spinor equation (KSE) being
(∂_μ +1/4ω^a b_ μσ_a b- i V_μ) ϵ = 0,
with σ_a b:=[σ_a, σ_b]/2 and σ_a, σ_b being Pauli matrices. As we are using negative signature it is important to keep in mind that
σ^a=-σ_a.
In fact, the algebras and actions that will be defined later on are obtained from the results in <cit.> by the appropriate change of signature, together with (<ref>).
The most general normalized solution to the KSE (<ref>) is proportional to
ϵ= ([ 1; 0 ]).
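It is instructive to verify the KSE explicitly. Using σ_{21}=[σ_2,σ_1]/2=-iσ_3, the φ-component of the KSE acting on ϵ gives
1/4 ω^{ab}_φ σ_{ab} ϵ = 1/2 ω^{21}_φ σ_{21} ϵ = 1/2(-coshθ)(-iσ_3)ϵ = i/2 coshθ ϵ = iV_φ ϵ,
which cancels the -iV_φ ϵ term, while the θ and t components simply require ϵ to be constant.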
Out of ϵ we can construct an off-shell supercharge Q_ϵ. Before dealing with the construction of Q_ϵ, it is convenient to perform the following field redefinition
Â_3:=A_3+i σ.
In terms of the new variables, the offshell algebra takes the following form for the vector
Q_ϵÂ_{θ,φ} = -i/2(-λ̅^†σ_{θ,φ}ϵ), Q_ϵÂ_t = 0,
Q_ϵσ = 1/2(-λ̅^†ϵ),
Q_ϵλ = -1/2 σ^μνϵF̂_μν + D ϵ + i σ^3 ϵ𝒟̂_3σ,
Q_ϵλ̅^† = 0,
Q_ϵ D = i/2 (𝒟̂_μλ̅)^†σ^μϵ,
and for the matter multiplet [Here ϵ^c := C ϵ^* with C=-i σ_2; notice that the conjugation matrix C is real.]
Q_ϵϕ = 0, Q_ϵϕ̅^† = -ψ̅^†ϵ,
Q_ϵψ = i σ^μϵ𝒟̂_μϕ,
Q_ϵψ̅^† = F̅^†(ϵ^c)^†,
Q_ϵ F = i (ϵ^c)^†σ^μ𝒟̂_μψ + i (ϵ^c)^†λ ϕ,
Q_ϵF̅^† = 0.
The gauge covariant derivative acts as
𝒟̂_μ:={[ ( ∂_μ+1/4σ_a bω^a b_ μ-i Â_μ - i q_sp V_μ) on spinors,; ( ∂_μ-i Â_μ - i q_sc V_μ) on scalars. ].
It can be shown that the Chern-Simons Lagrangian
ℒ_CS = -i k/4π( ϵ^μνβ(Â_μ∂_νÂ_β - 2i/3 Â_μÂ_νÂ_β) - λ̅^† (1-σ_3)/2 λ),
is annihilated by (<ref>), up to a total derivative
-i k/4 π𝒟̂_μ( ϵ^μνβ(Q_ϵÂ_ν) Â_β).
There can also be a mixed CS term whenever we have several Abelian factors:
ℒ_mCS = -i k_ij/4π( ϵ^μνβ Â^(i)_μ ∂_ν Â^(j)_β + λ̅^{(i)†}(1-σ_3)/2 λ^{(j)}),
where k_ij is symmetric and i≠ j. In this case one similarly gets the boundary piece
δℒ_mCS = -i k_ij/4π 𝒟̂_μ( ϵ^μνβ (Q_ϵÂ^(i)_ν) Â^(j)_β).
The discussion of the topological current in <cit.> is valid for any Chern-Simons theory. In the case of ABJM, the topological U(1)_T global symmetry is generated by the conserved current J^μ_T = tr(*F̂ - *F̂̃)^μ. One can couple background U(1)_T gauge potentials Â^T_μ to the current J^μ_T. The supersymmetric completion of such a term is a particularization of the action (<ref>), given by picking k_ij=k_ji=1
and keeping just a couple of indices (i=1,2). The index “1” labels a background Q_ϵ-spurion vector multiplet, and the index “2” labels a U(1) dynamical vector multiplet. In this way, we obtain the corresponding mixed supersymmetric Chern-Simons action out of (<ref>). For instance, in the case of gauge group U(N), there is a unique dynamical U(1), and the bosonic term of the latter action is
ℒ^Bos_T = -i/4π ϵ^μνβ( Â^T_μ ∂_ν tr[Â_β] + tr[Â_μ] ∂_ν Â^T_β).
In the very end, we will fix the v.e.v. of the spurion vector supermultiplet to specific Q_ϵ-BPS values
[Namely the family Â^T_3=u^T, D^T = iF^T_{12}, λ^T=0.].
At this point, we must select a “middle dimensional” contour of integration in field space. Let us introduce a contour Γ consistent with the one of <cit.>:
Γ_vector: Bosonic Fields = (Bosonic Fields)^*, e.g. D = -(D)^*.
The contour Γ_vector will cross a specific family of Q_ϵ-BPS configurations.
BPS[Γ_vector] : {[ F_12=-𝔪/2, D=-i 𝔪/2, fermions=0.; ; Â_3=u=u^*∈ [0,2π), ].
where 𝔪 and u are Cartan valued arbitrary constants. The u are the Coulomb moduli and parametrize the Coulomb branch of the theory. Expression (<ref>) is the most general solution, single-valued on the S^1 factor and without fermionic zero modes, to the BPS equation
Q_ϵλ = -1/2σ^μνϵF̂_μν+D ϵ+i σ^3 ϵ𝒟̂_3 σ=0,
along the contour (<ref>).
As for the matter multiplet we define
Γ_matter: ϕ̅=ϕ, F̅=F.
In our case, the zero locus of matter is
BPS[Γ_matter]: ϕ̅=ϕ=F̅=F=fermions=0.
Finally, we define Q_ϵ exact terms. The Q_ϵ exact terms must be semi-positive definite along Γ, as already stressed. In the case of the vector multiplet and the choice of Γ(<ref>), such a term is
Q_ϵ V^vector := -Q_ϵ((∙Q_ϵλ) λ),
(∙Q_ϵλ) := (Q_ϵλ)^* |_Â^*→Â, σ^*→σ, D^* →- D.
The bosonic and fermionic part of (<ref>) are
Q_ϵ V^vector_B := (F̂_{12}+ 𝒟̂_3 σ +i D)^2+(F̂_{13})^2+ (F̂_{23})^2,
Q_ϵ V^vector_F := - i λ̅^†_2 𝒟̂_t λ_2.
where λ_2 is the lower component of the gaugino λ = (λ_1, λ_2)^T.
For the matter multiplet, and given the choice of Γ in (<ref>) and (<ref>), such a term is
Q_ϵ V^matter :=
-Q_ϵ( -i ϵσ^μψ𝒟̂_μϕ̅^†+ F ψ̅^†ϵ^c + i ϕ̅^†ϵλϕ).
The bosonic and fermionic part of (<ref>) are
-Q_ϵ V^matter_B = (𝒟̂^μϕ̅)^† 𝒟̂_μϕ + ϕ̅^†(𝒟̂_3σ+i D- ϵ^μν_ βv^β(q V_μν+W_μν))ϕ
+F̅^† F+ 𝒟̂_μ( i ϵ^μν_ βv^βϕ̅^†𝒟̂_νϕ),
-Q_ϵ V^matter_F = -i ψ̅^†σ^μ𝒟̂_μψ- i ψ̅^†λ ϕ- i ϕ̅^† λ̅^† P^- ψ
+ i 𝒟̂_μ( ψ̅^† P^+ σ^μψ),
where P^∓ := (1∓σ_3)/2 and V_μν, W_μν are the field strengths of the R- and flavor symmetry backgrounds, respectively.
The term Q_ϵ V^matter_B is semi-positive definite when expanded around BPS[Γ_vector] and over Γ_matter. As shall be shown in due time, this last statement is implied by the requirement of square integrability over ℍ_2. Square integrability over ℍ_2 imposes bounds on the spectrum of eigenvalues of the relevant magnetic Laplacian. The aforementioned bounds imply the convergence of the Gaussian path integral ∫_X^(1) e^- δ^(2)( Q_ϵ V ,X^(0)) in (<ref>).
Chern-Simons, being a gauge theory, requires gauge fixing, which we choose to be the axial condition
Â_3=const.
In contradistinction to 3d pure Yang-Mills theory, in 3d Chern-Simons coupled to Yang-Mills and/or matter the constraint (<ref>) fixes the gauge degeneracy completely. In the latter theory there are 3-1=2 physical off-shell vector degrees of freedom (DoF), while in 3d pure Yang-Mills there is 3-2=1 massless off-shell vector DoF. For a nice review on the canonical quantization of 3d Chern-Simons theory, see for instance <cit.>.
To implement the gauge fixing, we use the BRST method <cit.> and enlarge the vector multiplet by adding the ghost fields (c,c̅,b̅). We enlarge the algebra (<ref>) by the following transformation rules
Q_ϵ c=0, Q_ϵc̅=0, Q_ϵb̅=0.
Any gauge invariant functional of the physical fields is BRST invariant. The BRST transformations Q_B are
Q_BÂ_μ = 𝒟̂_μ c, Q_Bc̅ = b̅, Q_B c = i/2{c,c}, Q_Bλ = i{c, λ},
Q_Bλ̅^† = i{c,λ̅^†}, Q_Bσ = i[c,σ], Q_B D = i[c,D],
Q_Bϕ = i[c,ϕ], Q_Bϕ̅^† = i[c,ϕ̅^†],
Q_Bψ = i{c,ψ}, Q_Bψ̅^† = i{c,ψ̅^†},
Q_B F = i[c,F], Q_BF̅^† = i[c,F̅^†],
from (<ref>), (<ref>) and (<ref>) it can be shown that
(Q_ϵ+Q_B)^2={Q_ϵ,Q_B}=0.
As the V's in the Q_ϵ V localizing terms (<ref>) and (<ref>) are gauge invariant objects, it is easy to check from the corresponding algebra (<ref>) that they are Q_B invariant, and consequently (<ref>) and (<ref>) are (Q_ϵ+Q_B)-exact.
On top of the localizing actions (<ref>) and (<ref>), a gauge fixing term must be added. To our purposes the most convenient choice is the following (Q_ϵ+Q_B)-exact term
Q_BTr(c̅(Â_t-const))= c̅𝒟̂_t c+ b̅(Â_t-const).
From (<ref>) and Q_ϵÂ_3=0, it follows that (<ref>) is (Q_ϵ+Q_B)-exact. In (<ref>) we wrote the gauge index trace Tr only on the LHS, but the reader should keep in mind that by default we are working with gauge-invariant Lagrangian densities.
Our BRST construction is conceptually the same as that of Pestun <cit.>, and it has been previously presented in the 3d case by Kapustin, Willett and Yaakov <cit.>.
§.§ Boundary conditions
On non-compact manifolds like ℍ_2× S^1, or manifolds with boundary, appropriate boundary conditions must be imposed in order to have a well-defined variational (Lagrangian) problem. Once a proper classical theory has been defined, quantization is in order. Let
X^(0) = {A^(0)_μ, σ^(0), D^0,…}∈ BPS[Γ]
and
X^(1)={δ A_μ, δσ, δ D, δλ, δλ̅, δ c, δc̅, b̅, δϕ, δϕ̅, δψ, δψ̅, δ F, δF̅},
be the non trivial zero locus background fields and offshell fluctuations respectively. As for the X^(0) we define the following boundary condition
e^μ_a A^(0)_μ, D^(0), σ^(0) θ→∞∼ O(1).
As for offshell fluctuations X^(1), we define Dirichlet boundary conditions
e^μ_a δ A_μ, δσ, δ D, δλ, δλ̅, δ c, δc̅, δb̅, δϕ, δϕ̅, δψ, δψ̅, δ F, δF̅ θ→∞∼ O(e^-κθ),
e^μ_a δ A_μ, δσ, δ D, δλ, δλ̅, δ c, δc̅, δb̅, δϕ, δϕ̅, δψ, δψ̅, δ F, δF̅ θ→ 0∼ O(1),
with κ≥1/2. The value of κ determines important features of the spectrum of the associated S^1 quantum mechanics, when a dimensional reduction of this kind can be performed.
The following table sketches the relation between the boundary conditions (<ref>) and the results reported in the next section:
κ | Spectrum | Norm
1/2 | Continuous: λ∈ [0,∞) | Delta-function normalizable
>1/2 | Discrete: j=|s|-1, |s|-2, … > -1/2 | Square integrable
The total derivative part of the offshell variation of the Chern-Simons Lagrangian (<ref>), multiplied by the volume element √(-g) is
-i k/4 π𝒟̂_μ(√(-g)ϵ^μνβ(δÂ_ν) Â_β), with ϵ^θφ t:= 1/√(-g).
After integration and imposition of the gauge fixing condition δÂ_3=0 (see (<ref>)), the total derivative (<ref>) becomes the boundary term
-i k/4 π∫_φ=0^2πdφ∫_t=0^1dt (δÂ_φ)Â_t |^θ=∞_θ=0.
Boundary conditions (<ref>) do not imply the vanishing of (<ref>) at θ=∞, due to the non-compactness of ℍ_2: since e^φ_2 = 1/sinhθ → 0 as θ→∞, the decay of the frame components in (<ref>) does not guarantee the decay of δÂ_φ. The contribution from θ=0 vanishes.
To have a well-defined variational principle, we redefine the classical action from Chern-Simons to
∫√(-g)ℒ_cl= ∫√(-g)ℒ_CS + S_bdy, S_bdy=+i k/4π∫_φ=0^2πdφ∫_t=0^1dt Tr ( Â_φÂ_t ) at θ=∞.
Note that S_bdy is gauge invariant, provided we restrict the derivatives of the gauge transformation parameters to vanish at θ→∞. In this paper, we will assume the latter condition.
It is immediate to check that the supersymmetric transformation of ℒ_cl is trivial by construction: the supersymmetry variation of S_bdy cancels the integration of the total derivative term (<ref>), as it should.
The classical action evaluated on the zero locus is
∫√(-g)ℒ_cl[BPS[Γ]]
=-ik/2 u ·𝔪
where u ·𝔪 := u_i 𝔪_i = 1/2 Tr(u 𝔪). In our conventions h_i and h_j are Cartan generators in the Chevalley basis, and consequently Tr[h_i h_j]=2δ_{ij}.
The contributions proportional to coshθ_Max cancel out, θ_Max being the large cut-off in θ. The divergent terms coming from the integral over ℍ_2 and the boundary term (<ref>) cancel each other.
Whenever we have contributions which diverge like the volume, we regulate them as we regulate the volume in (<ref>), and boundary terms are regulated as explained in the footnote on page 4.
It is convenient to write down the exponential of minus (<ref>):
x_i^{k 𝔪_i/2},
with x_i=e^{i u_i}. Expression (<ref>) is the contribution to the index of a Chern-Simons term with level k.
The total derivative part of the variation with respect to ϕ̅^† of the bosonic localizing action of matter is
+∫_ℳ𝒟̂_μ(√(-g)(δϕ̅^† 𝒟̂^μϕ+i ϵ^μν_ βv^βδϕ̅^†𝒟̂_νϕ) )
Under off-shell boundary conditions (<ref>) and (<ref>), the integration of (<ref>) gives
+i ∫^1_0 dt ∫^2π_0 dφ [ (δϕ̅^† 𝒟̂_φϕ) ]_{θ=0}.
The term (<ref>) vanishes when evaluated in the functional space we are going to integrate over. The explanation of this fact will be given at the beginning of subsection <ref>.
Notice, that the ghosts vanish at the boundary, due to (<ref>). Consequently, BRST gauge transformations do not affect the boundary.
Having established the localization locus, the next step is to compute the one loop determinant contributions Z_X^(0). In order to do that, we need to define an appropriate functional space to integrate upon. That will be the scope of the next section. Thereafter, we can compute Z_X^(0) and use equation (<ref>) to evaluate our final result for the topologically twisted index.
§ THE SPECTRUM ON ℍ_2 WITH FLUX
The spectrum of the Laplace operator on ℍ_2 has a long history (<cit.> and references therein).
Even though this section might seem a technical remark, the result of its analysis is very relevant to our conclusions. We therefore choose to include it in the main body of the text and provide more details in an appendix.
The eigenvalue problem solved in appendix <ref> is related to the propagation of a scalar particle in the presence of a flux on ℍ_2. The hierarchy of modes to be reported in this section can be interpreted as a series of Landau levels on ℍ_2 that emerge due to the presence of a flux s <cit.>. These alternative viewpoints deserve further attention, and we intend to pursue them in forthcoming work.
In this section, we present the outcome of the analysis reported in appendix <ref>. We encourage the reader looking for a detailed understanding to go through that appendix.
The Laplacian in the presence of a flux s, coming from a potential
A=s coshθ dφ,
is given by
□_s := -∂_θ^2 - cothθ ∂_θ + 1/sinh^2θ (j_3-s coshθ)^2,
with j_3=-i∂_φ.
The equation that defines the functional space upon which we will compute
determinants is
( □_s+Δ) f_Δ,j_3=0.
The boundary conditions that will define our functional space are
(<ref>) and (<ref>) with κ > 1/2.
§.§ The discrete spectrum
First, we parametrize the eigenvalue as:
Δ=j(j+1)-s^2.
The quantization conditions
j_3-s ∈ℤ, j-|s| ∈ℤ, together with equation (<ref>), define a finite (resp. infinite) dimensional space of square integrable functions on ℍ_2, which we denote as
Ξ_j^(1)(s) := { f_Δ,j_3^(1) }_{j_3},
respectively
Ξ_j^(2)(s) := { f_Δ,j_3^(2) }_{j_3}.
The explicit form of the eigenfunctions f_Δ,j_3^(1) and f_Δ,j_3^(2), is defined in appendix <ref> and <ref>, respectively, for the case s>1/2. The case s<-1/2 can be worked out analogously.
The range of j_3 is given by the relations
s ≥ j_3 ≥ max(|j|,|j+1|) if s>+1/2, -j_3 ≥ -s ≥ max(|j|,|j+1|) if s<-1/2,
respectively
j_3 ≥ s ≥ max(|j|,|j+1|) if s>+1/2, -s ≥ -j_3 ≥ max(|j|,|j+1|) if s<-1/2.
There are additional constraints on the value of j. Indeed, the eigenfunctions f^(1,2)_Δ,j_3, and hence the spectrum Δ=j(j+1)-s^2, are invariant under the transformation
j→-(j+1).
Therefore we must restrict j to be either
j>-1/2 or j<-1/2.
In order to have square integrable functions
j≠-1/2.
For the choice j>-1/2, restrictions (<ref>) and (<ref>), imply an upper bound for j
j=|s|-1, |s|-2,... > -1/2.
Interestingly, for
0≤|s| ≤1/2,
there are no square integrable modes. A particular conclusion of this last statement is the known fact that on ℍ_2 there are no square integrable scalar modes. In the presence of a flux s with |s|>1/2, square integrable modes emerge.
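The top modes can be checked by hand. In the coordinate x = -sinh^2(θ/2) implied by the measure written below (our reading of the appendix), the mode with j=s-1 and j_3=s is proportional to (1-x)^{-s} = cosh^{-2s}(θ/2), and a short symbolic computation confirms that it solves the defining equation above with Δ = j(j+1)-s^2 = -s, i.e. □_s f = s f. A minimal sympy sketch, with the arbitrary sample value s=3:

# Check that f = cosh(theta/2)^(-2s) obeys (Box_s + Delta) f = 0 with
# Delta = j(j+1) - s^2 at j = s-1 and j_3 = s, i.e. Box_s f = s f.
import sympy as sp

theta = sp.symbols('theta', positive=True)
s = sp.Integer(3)                  # arbitrary flux with s > 1/2
f = sp.cosh(theta/2)**(-2*s)
box_f = (-sp.diff(f, theta, 2)
         - sp.cosh(theta)/sp.sinh(theta)*sp.diff(f, theta)
         + (s - s*sp.cosh(theta))**2/sp.sinh(theta)**2 * f)   # j_3 = s

print(sp.simplify((box_f - s*f).rewrite(sp.exp)))             # -> 0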
In appendix <ref>, it is proven that in the case |s|=1 our square integrable eigenmodes match the well known discrete modes of the vector Laplace-Beltrami operator on ℍ_2 with helicity s=±1. This last statement suggests exploring the possibility that our spectrum encodes the full tower of higher spin square integrable eigenmodes of the Laplace-Beltrami operator on ℍ_2. We hope to come back to this point in the future.
The relevant scalar product is
⟨f,g⟩ := ∫_0^∞ dθ ∫_0^2π dφ sinhθ f^*(θ,φ) g(θ,φ).
As already mentioned, and proven in appendix <ref>, square integrability of f^(1)_Δ,j_3 (resp. f^(2)_Δ,j_3) is tied to the specific bounds on j_3 and j written above.
Different states f^(1,2)_Δ,j_3 e^{i j_3 φ} in Ξ_j^(1,2)(s) are orthogonal with respect to the scalar product (<ref>). Spaces Ξ_j^(1,2)(s) with different label j>-1/2 (or j<-1/2) are orthogonal. This is because □_s is Hermitian in Ξ_j(s), and spaces with different label j>-1/2 (or j<-1/2) have different eigenvalues Δ under □_s.
Summarizing, the space of square integrable modes for a given s is
Ξ(s)= ⊕^|s|-1_j>-1/2(Ξ^(1)_j(s) ⊕Ξ^(2)_j(s)).
In the next section, we will refer to the following spaces
Ξ_j(s) := Ξ_j^(1)(s) ⊕ Ξ_j^(2)(s).
The spaces (<ref>) are subspaces of (<ref>).
§.§ The continuous spectrum
The continuous spectrum is a direct generalization of the spectrum reported by Higuchi and Camporesi <cit.> to the case where there is a constant flux on ℍ_2. The corresponding eigenmodes solve the defining equation
(
□_s+Δ_(λ,s)) f_Δ_(λ, s),j_3=0,
with
Δ_(λ,s):=-λ^2-s^2-1/4, λ∈ℝ,λ≥0,
and boundary conditions
f_Δ_(λ,s),j_3(x) ∼ c_1(λ,j_3,s) x^{+iλ}/x^{1/2} + c_2(λ,j_3,s) x^{-iλ}/x^{1/2} as x→ -∞,
f_Δ_(λ,s),j_3(x) ∼ O(1) as x→ 0.
Conditions (<ref>) and (<ref>) are given in coordinates x, but they are equivalent to the particularization κ=1/2 of (<ref>).
The final solution to the boundary problem just presented is obtained by imposing (<ref>) on the most general solution (<ref>). The result is
f^(1)_Δ_(ł,s)(x) if j_3 ≥ s,
f^(2)_Δ_(ł,s)(x) if j_3 < s.
The norm of f_Δ_(λ,s),j_3 under the scalar product (<ref>) is infinite. By choosing the remaining integration constant appropriately one can set
⟨ f_Δ_(λ,s),j_3, f_Δ_(λ',s),j'_3 ⟩ = δ(λ-λ') δ_{j_3 j'_3}.
Comments:
Our thermal cycle is not the S^1 inside the ℍ_2, but the trivially fibered one. The latter fact is related to an important conceptual difference between the physical framework of our approach and that of, for instance, <cit.>. In physical terms, our ℍ_2 modes are not probing the near horizon limit of a black hole in the presence of electric flux <cit.>, but the boundary dynamics of a magnetically charged hyperbolic AdS_4 black hole.
That said, if we interpret our φ-cycle as the thermal one, our hierarchy of square integrable modes is certainly probing Euclidean AdS_2 in the presence of an electric flux deformation. That is closely related to the problem addressed in <cit.>. To have a self-consistent approach, coming from supersymmetric localization, to the problem studied in <cit.>, one should try to localize an appropriate off-shell supercharge on the quantum gravity side. In that spirit, in <cit.>, AdS_2× S^2 was shown to be the unique ungauged BPS localizing solution to 4d 𝒩=2 Super-Conformal gravity in a convenient gauge fixing. It is plausible that other electric or magnetic AdS_2×Σ localizing solutions can be found by relaxing some of the conditions used in <cit.>, as suggested by the results in <cit.>. If that is the case, it would be quite interesting to explore what supersymmetric localization can say about the problems addressed in <cit.>.
§ ONE-LOOP DETERMINANTS
Having clarified the structure of the spectrum for the flux Laplacian on ℍ_2 (see appendix <ref>), we have all the ingredients to address the computation of one-loop determinants.
§.§ Bosonic localizing operator
For square integrable modes, the total derivatives of the quadratic expansion of the localizing
terms integrate to zero. In the case of the matter multiplet, the total derivative term is
+∫_ℳ𝒟̂_μ(√(-g)(ϕ̅^† 𝒟̂^μϕ+i ϵ^μν_ βv^βϕ̅^†𝒟̂_νϕ) )
= +i ∫ dt dφ [ ϕ̅^† 𝒟̂_φ ϕ ]_{θ=0},
where the result in the second line follows from the asymptotic behavior (<ref>) and (<ref>). This boundary term vanishes because, as proven in appendix <ref>, the only modes of ϕ that do not vanish at the contractible cycle θ=0 have the following angular dependence
e^{i s φ}, with s := -(ρ(𝔪)-q_R)/2,
and they are annihilated by
𝒟̂_φ:=∂_φ -i s.
Due to a careful choice of boundary conditions, total derivatives are irrelevant for the current
discussion, as they do not contribute to the 1-loop determinant.
After integration by parts, the quadratic expansion of the bosonic part of the Lagrangian density of the matter localizing term (<ref>) takes the form
ϕ̅^† O_B ϕ := ϕ̅^†( (ρ(u)+i∂_t)^2+( □_s-s) ) ϕ.
Notice that the operator (□_s-s) is semi-positive definite on the “representations” Ξ_j(s), labeled by j running down in integer steps from |s|-1 while staying larger than -1/2: for -1/2 < j ≤ |s|-1 (or -|s| ≤ j < -1/2) its eigenvalues obey -j(j+1)+s(s-1) ≥ 0, the bound being saturated at j=|s|-1 (or j=-|s|). Consequently, O_B is positive definite provided (ρ(u)+k)^2>0. This last condition is guaranteed as long as we avoid points in the Coulomb branch such that ρ(u) ∈ 2πℤ.
Having in mind the particular case j, j_3, s ∈ℤ, at some stages we will denote the aforementioned set of j's as follows:
j: 0 ≤ j ≤ |s|-1.
The union of the aforementioned Ξ_j(s) is the maximal space of square integrable modes (<ref>).
For latter convenience, let us define
□_s^± := □_s± s.
Should we select ϕ in the vector space spanned by Ξ_j(s), then for s>1/2
det_{Ξ_j^(1)(s)}(O_B) = ∏_{j_3=j+1}^{s} ∏_k ( (ρ(u)+k)^2-j(j+1)+s(s-1) ),
det_{Ξ_j^(2)(s)}(O_B) = ∏_{j_3=s+1}^{∞} ∏_k ( (ρ(u)+k)^2-j(j+1)+s(s-1) ),
where k=i ∂_t. The result for s<-1/2 is obtained analogously.
As ϕ is a complex scalar, the functional integration
∫[Dϕ^† Dϕ] exp[ -∫_{ℍ_2× S^1} ϕ^† O_B ϕ ]
is proportional to 1/det(O_B), where
det_{Ξ(s)}(O_B) = { ∏_{j=0}^{s-1} ∏_{j_3=j+1}^{∞} ∏_{ρ^*,k} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) if s>1/2; ∏_{j=0}^{-s-1} ∏_{j_3=-∞}^{-j-1} ∏_{ρ^*,k} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) if s<-1/2 }.
§.§ Fermionic localizing operator
To compute the fermionic determinant, we use the square of the kinetic operator that appears in the quadratic expansion of the fermionic part (<ref>) of the localizing term; we also specify the space of functions on which each component acts:
diag( O_B , (u+i∂_t)^2+□_{s-1}^+ ), with ψ^+ ∈ Span(Ξ(s)), ψ^- ∈ Span(Ξ(s-1)).
While reproducing the computations that will be reported in section <ref>, it will be convenient to use the following identity
□_s-1^+f^(1,2)_Δ(s-1),j_3=(
-j(j+1)+s(s-1)) f^(1,2)_Δ(s-1),j_3.
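This identity follows at once from the definitions: □_{s-1} f^(1,2)_Δ(s-1),j_3 = (-j(j+1)+(s-1)^2) f^(1,2)_Δ(s-1),j_3, and since □_{s-1}^+ = □_{s-1}+(s-1), the eigenvalue is -j(j+1)+(s-1)^2+(s-1) = -j(j+1)+s(s-1).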
§.§ ζ-function regularization: s> 1/2
We use ζ-function regularization to compute the determinants of O_B upon the functional space
Ξ_j(s) := Ξ_j^(1)(s) ⊕ Ξ_j^(2)(s), j=s-1 (or -s),
which is the space of zero modes of □_s^- (the space of eigenstates of O_B with eigenvalue (ρ(u)+k)^2). We stress that in the case s>1/2, and after cohomological cancellations, these zero modes are the only ones that contribute to the one loop super-determinant.
In order to compute the heat kernel K(0,0) associated to the eigenspaces in question, we need to analyze the relevant case j_3=s. In the latter case, and after particularizing to j=s-1 or -s, the square integrable modes f^(1) and f^(2) drastically simplify to
f^(1,2)_Δ(s), j_3=s = χ( x-1)^{-s}.
The constant χ is determined from the normalization condition
|χ|^2 2π vol_{S^1} ∫_{-∞}^0 dx (2) |x-1|^{-2s} = |χ|^2 2π vol_{S^1}/(s-1/2) = 1.
The factor of 2 on the LHS of (<ref>) is the line element in the coordinate x.
From the value of |χ|^2, we obtain the heat kernel at the origin
K(t;0,0) = (s-1/2)/(2π vol_{S^1}) e^{-t(ρ(u)+k)^2}.
From (<ref>) we obtain the zeta function
ζ(z) = vol_{ℍ_2} vol_{S^1} (s-1/2)/(2π vol_{S^1}) ((ρ(u)+k)^2)^{-z} = -(s-1/2)/((ρ(u)+k)^2)^z,
with
vol_ℍ_2=-2π, vol_S^1=1.
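Explicitly, writing A := (ρ(u)+k)^2, the zeta function reads ζ(z) = -(s-1/2) A^{-z}, so that ζ'(0) = (s-1/2) ln A and e^{-ζ'(0)} = A^{-(s-1/2)}.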
After using the ζ-function method, we obtain the desired determinant
det_{Ξ_{j=-s}(s)}(O_B) = det_{Ξ_{j=s-1}(s)}(O_B) = e^{-ζ'(0)} = ((ρ(u)+k)^2)^{-s+1/2} = |(ρ(u)+k)|^{-2s+1}.
In appendix <ref> we obtain the same result by using an alternative procedure. Notice that in (<ref>) we could also have written
= (-|(ρ(u)+k)|)^{-2s+1}.
It is possible that such a change in the choice of sign changes the value of the partition function. From now on, we will ignore this second choice, except for specific steps where having it in mind will be useful.
§.§ ζ-function regularization: s<1/2
In the case s<1/2, the contribution to the super-determinant comes from the following set of eigenfunctions
Ξ_j(s-1) := Ξ_j^(1)(s-1) ⊕ Ξ_j^(2)(s-1), j=-s (or s-1),
which is the space of zero modes of □_{s-1}^+. In the case s<1/2, and after cohomological cancellations, these zero modes are the only modes that contribute to the one loop super-determinant.
Following the very same steps described in the previous section, we focus on the solutions obtained for j_3=s-1 and j=-s (or s-1). In this case, the zero mode solutions reduce to
f^(1,2)_Δ(s-1),j_3=s-1 = χ( x-1)^{s-1},
from which it is straightforward to compute the ζ-function.
The constant χ is determined from the normalization condition
|χ|^2 2π vol_{S^1} ∫_{-∞}^0 dx (2) |x-1|^{2(s-1)} = |χ|^2 2π vol_{S^1}/(-s+1/2) = 1.
From the value of |χ|^2, we obtain the heat kernel at the origin
K(t;0,0) = (-s+1/2)/(2π vol_{S^1}) e^{-t(ρ(u)+k)^2}.
From (<ref>), we obtain the zeta function
ζ(z) = -(-s+1/2)/((ρ(u)+k)^2)^z.
Finally, we obtain the desired determinant
_Ξ_j=-s(s-1)( (u+i∂_t)^2+□_s-1^+)
= _Ξ_j=s-1(s-1)( (u+i∂_t)^2+□_s-1
^+)
= e^-ζ^'(0)=((ρ(u)+k)^2)^s-1/2
= |(ρ(u)+k)|^2s-1.
§.§ Super-determinant
In the computation presented in this section we will assume j,j_3,s∈ℤ; the other cases can be worked out in complete analogy. The super-determinant in the case s>1/2 is
√( det_{Ξ(s)}(O_B) det_{Ξ(s-1)}((ρ(u)+k)^2+□_{s-1}^+) )/det_{Ξ(s)}(O_B) = √( det_{Ξ(s-1)}((ρ(u)+k)^2+□_{s-1}^+)/det_{Ξ(s)}(O_B) )
= √( ∏_{j=0}^{s-2} ∏_{j_3=j+1}^{∞} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) / ∏_{j=0}^{s-2} ∏_{j_3=j+1}^{∞} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) ) × √( 1/det_{Ξ_{j=s-1}(s)}(O_B) )
= √( 1/det_{Ξ_{j=s-1}(s)}(O_B) ) = |(ρ(u)+k)|^{s-1/2}.
Notice that on the RHS of the second line we have a quotient of two identical infinite products. This cancellation occurs due to the supersymmetric pairing of eigenmodes: cohomological cancellations.
Let us comment on the particular case s=1. In that case, there are no cohomological cancellations. The reason is that when s=1 only the space of zero modes j=s-1 (or j=-s) for the scalar ϕ and the “chiral” spinor ψ^+ exists. In the case s=1 there is no “anti-chiral” square integrable mode ψ^- on ℍ_2, because for such spinors the effective flux is s-1=0.
Next, let us compute the super-determinant in the case s<1/2
√( det_{Ξ(s)}(O_B) det_{Ξ(s-1)}((ρ(u)+k)^2+□_{s-1}^+) )/det_{Ξ(s)}(O_B) = √( det_{Ξ(s-1)}((ρ(u)+k)^2+□_{s-1}^+)/det_{Ξ(s)}(O_B) )
= √( ∏_{j=0}^{-s-1} ∏_{j_3=-∞}^{-j-1} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) / ∏_{j=0}^{-s-1} ∏_{j_3=-∞}^{-j-1} ( (ρ(u)+k)^2-j(j+1)+s(s-1) ) ) × √( det_{Ξ_{j=-s}(s-1)}( (u+i∂_t)^2+□_{s-1}^+ ) )
= √( det_{Ξ_{j=-s}(s-1)}( (u+i∂_t)^2+□_{s-1}^+ ) ) = |(ρ(u)+k)|^{s-1/2}.
The final expression coincides with the one of the case s>1/2. However, in the case s<1/2 the unpaired modes are the “anti-chiral” square integrable modes ψ^-, labeled by the radial number j=-s and perceiving a flux s-1 on ℍ_2.
Let us treat separately the case s=0. In that case, there are no square integrable “chiral” modes ψ^+, nor scalar ones ϕ. However, there exist “anti-chiral” square integrable modes ψ^-, perceiving an effective magnetic flux of -1 and labeled by the radial number j=0. In this case the regularized super-determinant is given by
√( ∏_{j=0}^{0} det_{Ξ(-1)}((ρ(u)+k)^2+□_{-1}^+) ) = |(ρ(u)+k)|^{-1/2}.
Collecting partial results, not only for the case j,j_3,s ∈ℤ but for the most general spectrum j,j_3 and s = -(ρ(𝔪)-q_R)/2 obeying the “discreteness" conditions (<ref>), it is straightforward to obtain the final result for the one loop super-determinant part of the index on ℍ_2:
Z_1-loop^matter(ℍ_2,𝔪,q_R) = ∏_{ρ: s≠1/2} [ |C_reg sin( ρ(u)/2)| ]^{(-ρ(𝔪)+q_R-1)/2},
where C_reg=-2i.
The restriction to s ≠1/2 is a necessary condition to have square integrable modes. However, at s=1/2 one has |(ρ(u)+k)|^{s-1/2}=1, so the restriction s≠1/2 can very well be ignored and the result for Z_1-loop^matter(ℍ_2,𝔪,q_R) will be the same.
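The sine in the formula above is the ζ-regularized product over the Kaluza-Klein momenta k ∈ 2πℤ of the factors |ρ(u)+k| found in the previous subsections. The classical identity behind this regularization, 2 sin(u/2) = u ∏_{n≥1}(1-u^2/(2πn)^2), is easy to test numerically (the value of u and the truncation below are arbitrary):

# Numerical check of the product identity underlying the regularized
# KK determinant: prod_{k in 2*pi*Z} (u+k) ~ 2*sin(u/2).
import numpy as np

u = 1.3                           # arbitrary, 0 < u < 2*pi
n = np.arange(1, 200001)          # truncation of the infinite product
approx = u*np.prod(1.0 - (u/(2*np.pi*n))**2)
print(approx, 2*np.sin(u/2))      # agree to ~6 significant digits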
Interestingly, under GNO conditions
s∈ℤ or s∈ℤ+1/2,
the one loop result for the index on ℍ_2× S^1
Z_1-loop^matter(ℍ_2,𝔪,q_R) = ∏_ρ [ |C_reg sin( ρ(u)/2)| ]^{(-ρ(𝔪)+q_R-1)/2},
coincides with the square root of the analogous result on S^2× S^1, under the identification of s_{ℍ_2} with s_{S^2} for each mode (ρ,k).
Namely, under GNO quantization conditions
Z_1-loop^matter(ℍ_2,𝔪,q_R) = ∏_ρ [ |C_reg sin( ρ(u)/2)| ]^{(-ρ(𝔪)+q_R-1)/2} = √( Z_1-loop^matter(S^2,𝔪,q_R) ).
Let us remark that we are not forced to impose GNO conditions on ℍ_2. Consequently, we shall not impose GNO quantization conditions.
The one loop determinant of a matter multiplet in the adjoint representation of the gauge group, with R-charge q_R=2 (which coincides with
the vector multiplet super-determinant, see appendix <ref>) in the presence of flux is
Z_1-loop^vector(ℍ_2,𝔪) = ∏_α [ |C_reg sin( α(u)/2)| ]^{(-α(𝔪)+1)/2} ∼ [ ∏_{α>0} sin( α(u)/2)^2 ]^{1/2} = √( Z_1-loop^vector(S^2,𝔪) ).
Notice that the result for the vector multiplet (<ref>), which is independent of 𝔪, matches the result of <cit.> in their flat space limit L→∞ and under the transformation u → i u. Notice that their transformation of A_μ under ϵ, δ_ϵ A_μ, matches ours if and only if A_μ is substituted by i A_μ.
We tried to obtain the cohomological cancellations from ζ-regularization of the non-zero modes. However, the heat kernel method for spinors of <cit.> does not respect supersymmetry (in the case of the discrete spectrum, where the absolute value of the total flux felt by the mode must be larger than 1/2) and breaks the cancellations between fermions and bosons, unless the normalization associated to the heat kernel of anti-chiral modes ψ^- is modified in an inelegant way.
§.§ What about the continuous spectrum?
So far, we have focused on the contribution of the discrete spectrum to the index. It is time to find out what is the contribution of the continuous spectrum.
The eigenvalues of the relevant differential operators when acting upon the eigenfunctions of the continuous spectrum are
O_B f_Δ_(λ,s),j_3 = ( (ρ(u)+k)^2+λ^2+1/4+s(s-1) ) f_Δ_(λ,s),j_3,
( (ρ(u)+k)^2+□_{s-1}^+ ) f_Δ_(λ,s-1),j_3 = ( (ρ(u)+k)^2+λ^2+1/4+s(s-1) ) f_Δ_(λ,s-1),j_3.
The super-determinant to compute is
√( det(O_B) det((ρ(u)+k)^2+□_{s-1}^+) )/det(O_B).
The determinants in (<ref>) are computed by the heat kernel method. Once the f_Δ_(λ,s),j_3 are normalized as in (<ref>), the heat kernel of an operator with eigenvalues E[λ,s], when acting upon f_Δ_(λ,s),j_3, is defined as
K[p,p';τ] = ∫_0^∞ dλ ∑_{j_3 ∈ℤ} f^*_Δ_(λ,s),j_3(p) f_Δ_(λ,s),j_3(p') e^{-τ E[λ,s]},
where p={θ,φ,t} and p' label the sets of coordinates of given points.
We do not need the full heat kernel, since for the ζ-function all we need is its value at the origin p=p'=0.
As in the case of square integrable modes, all the eigenmodes f_Δ_(λ,s),j_3 vanish at the origin, except for those with j_3=s. After some work, the spectral function μ^(s)(λ) is found to be
1/(vol_{ℍ_2} vol_{S^1}) μ^(s)(λ) := ∑_{j_3 ∈ℤ} f^*_Δ_(λ,s),j_3(0) f_Δ_(λ,s),j_3(0) = f^*_Δ_(λ,s),s(0) f_Δ_(λ,s),s(0)
= 1/(2π)^2 λ sinh(2πλ)/( cosh(2πλ)+cos(2π s) ).
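As a consistency check, using sinh(2πλ) = 2 sinh(πλ)cosh(πλ) one finds that at s=0 the measure reduces to μ^(0)(λ) ∝ λ tanh(πλ), and at s=1/2 to μ^(1/2)(λ) ∝ λ coth(πλ): the familiar scalar and spinor Plancherel measures on ℍ_2.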
Having the spectral function (<ref>) we are ready to compute the ζ-functions by using the following definition
ζ(z;s) = ∫_0^∞ dλ μ^(s)(λ)/(E[λ,s])^z.
From the definition of the ζ-function (<ref>) we compute the relevant determinants
det = e^{-∂_z ζ(0;s)}.
Notice that the spectral function obeys the property
μ^(s) = μ^(s-1),
since cos(2πs) is invariant under s→ s-1, and the eigenvalues of the operators O_B and ((ρ(u)+k)^2+□_{s-1}^+) upon the respective eigenfunctions f_Δ_(λ,s),j_3 and f_Δ_(λ,s-1),j_3 are the same, see equations (<ref>). From the latter facts and the definitions (<ref>) and (<ref>), the triviality of (<ref>) follows:
√( det(O_B) det((ρ(u)+k)^2+□_{s-1}^+) )/det(O_B) = 1.
In conclusion, the continuous spectrum provides a trivial contribution to the topologically twisted index.
§.§ GNO condition?
The gauge potential representative (<ref>) is singular at the contractible cycle θ=0. There are ways to solve this issue. One of them is to impose the holonomy of the gauge potential at θ=0 to be in the centre of the group; we shall not resort to this option. A second way is to simply perform a non-trivial gauge transformation with parameter
Λ(φ) = -s φ.
The new potential
A= s(coshθ -1 )d φ,
is regular at θ=0 and has the same behavior at the boundary θ→∞ as (<ref>). In fact (<ref>) is the analytic continuation of the section on the north chart of the magnetic monopole bundle on S^2 to the single chart that covers ℍ_2.
To appreciate the consequences of the non-triviality of (<ref>) at θ=0, let us comment on its effect on matter. Out of the hierarchy of eigenmodes,
the only modes that do not vanish at the contractible cycle are those with j_3=s, see equation (<ref>). When cycling around the contractible cycle, these modes exhibit an Aharonov-Bohm phase of
2π s,
due to the non-triviality of (<ref>) at θ=0. Should we impose the scalars to be periodic at θ=0, the phase (<ref>) must be an integer multiple of 2π. In consequence, periodicity of the scalars does not imply the GNO quantization conditions.
The GNO conditions <cit.> are
α(𝔪)∈ℤ,
where α is any element of the root lattice of the gauge group 𝒢. Condition (<ref>) states that
𝔪 is in the co-root lattice of 𝒢.
If we particularize (<ref>) to ρ=α and q_R=0 then (<ref>) implies
s ∈ℤ or ℤ+1/2.
As already explained, (<ref>) is consistent with square integrability. However, if s∈ℤ+1/2, the Aharonov-Bohm phase is an odd multiple of π and the scalar modes are multivalued at θ=0. Notice that with the smooth representative (<ref>) such an issue is no longer present. In this representation the only modes that do not vanish at θ=0 are those with j_3=0. This is because, by performing a gauge transformation to a regular gauge potential, j_3 gets substituted by j_3-s. In fact, in this smooth representation, the Aharonov-Bohm phase along any φ-cycle becomes an integer multiple of 2π by virtue of our quantization conditions (<ref>).
Conditions (<ref>) are less restrictive than GNO conditions as they include not just (<ref>), but a continuous family of flux configurations. Consequently, we must not restrict our zero locus BPS[Γ] by the GNO quantization conditions.
Notice that in the case of S^2, the monopole bundle consists of two charts. In that case, the GNO condition comes from imposing single-valuedness of the structure group transformation that relates the sections at north and south <cit.>. In the case of ℍ_2, there is not such a feature.
We must say, however, that in our case relaxing the GNO conditions has consequences for global gauge invariance (see for instance section 2.1 of <cit.> for a related discussion). Let us analyze the case of ABJM. In that case, indeed, under a large gauge transformation u→ u+2π and ũ→ũ+2π, the Chern-Simons terms x_i^{k 𝔪_i/2} and x̃_i^{-k 𝔪̃_i/2} change by phases e^{π i k 𝔪_i} and e^{-π i k 𝔪̃_i}, respectively. Those phases can be absorbed by a couple of topological U(1)_T holonomies [These terms arise from a couple of mixed Chern-Simons terms of the form (<ref>). Specifically, from the coupling of the U(1)_{L,R} dynamical vector multiplets and Q_ϵ-spurion vector multiplets: U(1)_L-spurion_L, U(1)_R-spurion_R. To obtain (<ref>) we have considered the following non-trivial v.e.v.'s: Â_{L3}^T=π p and Â^T_{R3}=-πp̃ for the L and R spurion multiplets. Throughout our discussion, we will fix the spurion fluxes to zero: 𝔱_L=𝔱̃_R=0.
] of the form
∏^N_{i=1} ξ_p(𝔪_i/2) := e^{π i p ∑^N_{i=1}𝔪_i},
∏^N_{i=1} ξ̃_{p̃}(𝔪̃_i/2) := e^{-π i p̃ ∑^N_{i=1}𝔪̃_i},
after the change of labels
p → p-k,
p̃ → p̃-k.
Transformation (<ref>) is a symmetry of the measure ∑_{p,p̃∈ℤ} if k∈ℤ, and consequently symmetry under large gauge transformations is restored if we perform an average over p and p̃. The one loop contributions do not spoil the previous procedure, because the determinants in the case of ABJM are invariant under u→ u+2π.
Comment. In this section, it was proven that if we discard the discrete modes and consider only the continuous spectrum, the index is trivial. In contradistinction, the index becomes non-trivial when evaluated on square integrable eigenfunctions. The index is somehow encoding information about the tower of normalizable modes.
Indeed, its one loop contribution is determined by the zeta-regularized number of zero modes <cit.> of the operators □_s^- (if s>1/2) and □_{s-1}^+ (if s<1/2).
§ THE ABJM INDEX ON ℍ_2× S^1
It is time to analyze the ABJM index on ℍ_2× S^1. In this section, we borrow notation and strategy from section 2.1 of <cit.>. The final goal is to obtain the leading large-N behavior of the index in terms of flavor fluxes and holonomies. In order to do that, we will show that the corresponding large-N Bethe Ansatz equations (BAE) are equivalent to the ones defined in <cit.>. In fact, the leading large-N solution presented in <cit.> will be a solution to our BAE, and consequently can be used to evaluate the leading large-N result for the ABJM index on ℍ_2× S^1.
Let us start by writing down the localization formula for the ABJM index. The Chern-Simons plus boundary term contribution is
∏_{i=1}^N x_i^{k 𝔪_i/2} x̃_i^{-k 𝔪̃_i/2}.
After collecting classical and 1-loop contributions, we can write down the expression of the ABJM index on ℍ_2× S^1
Z_{ℍ_2× S^1} := c ∑_{p,p̃∈ℤ} (1/N!)^2 ∫_{|x|=|x̃|=1} ∏_{i=1}^N dx_i/(2π i x_i) dx̃_i/(2π i x̃_i) ∫^{M_+}_{-M_-} ∫^{M̃_+}_{-M̃_-} ∏_{i=1}^N d𝔪_i d𝔪̃_i
× ∏_{i=1}^N x_i^{k 𝔪_i/2} x̃_i^{-k 𝔪̃_i/2} ξ_p(𝔪_i/2) ξ̃_{p̃}(𝔪̃_i/2) ∏_{i≠ j}^N √( ( 1-x_i/x_j)( 1-x̃_i/x̃_j) )
× ∏_{i,j=1}^N ∏_{a=1,2}( ±| √( x_i/x̃_j y_a )/( 1-x_i/x̃_j y_a ) | )^{(𝔪_i-𝔪̃_j-𝔫_a+1)/2} ∏_{b=3,4}( ±| √( x̃_j/x_i y_b )/( 1-x̃_j/x_i y_b ) | )^{(𝔪̃_j-𝔪_i-𝔫_b+1)/2},
where c := 1/∑_{p,p̃∈ℤ} 1 and the 𝔫_a are flavor fluxes. Notice that we have written back the sign degeneracy mentioned below equation (<ref>). If we suppose
N ∈ 2ℕ,
the former signs and the absolute values between parentheses become spurious on the contour of integration to be defined below, and consequently we drop them from now on. As we are interested in the large-N limit, (<ref>) is enough for our purposes.
The integration over fluxes and eigenvalues is dictated by the localization principle: they are the zero locus BPS[Γ] associated to our contour of field-integration Γ and their values are not fixed by boundary conditions. The color holonomies x_i=e^i u_i, x̃_j=e^i ũ_̃j̃ are integrated along S^1 as follows from our reality conditions on Γ: Im[u_i]=0 and 2π-periodicity of the integrand dependence on u and ũ.
The general idea is to pick up certain residues in the large-N limit. In order to find the position of the relevant simple poles, we will need to compute the large-N solution to the very same Bethe ansatz equation of <cit.>. In <cit.> it was suggested that in the large-N limit the set of Bethe ansatz eigenvalues, which is the set of simple poles enclosed by our contour, condenses to a support included in the region [Even though the evidence presented in <cit.> is quite convincing, it would be nice to have a proof of the absence of extra eigenvalues outside the region 𝕏(Δ_a). We believe this is a point that deserves further understanding, but we will leave that analysis for future work.]
𝕏(Δ_a) := {(ũ_j-u_i)∈ℂ: 0<-ℝe[ũ_j-u_i]+Δ_a<2π if a=1,2; 0<ℝe[ũ_j-u_i]+Δ_a<2π if a=3,4}.
The region 𝕏(Δ_a) is the union of the regions covered by angles of the (i,j)-complex planes with coordinate x̃_ji:=e^i (ũ_j -u_i). Each (i,j)-angle is defined as
max( -Δ_3,-Δ_4, max(Δ_1,Δ_2)-2π) < arg(x̃_ji) < min(Δ_1,Δ_2, 2π-max(Δ_3,Δ_4)).
To compute our residues we have two possibilities. Either we deform the (ij)-S^1: |x̃_ji|=1, to the perimeter of the inner region of the (i,j)-angle including the origin, or we deform it to the perimeter of the outer region, and close the contour at infinity x̃_ji=∞[By inner (resp. outer) region we mean the region of the (i,j)-angle that is in (resp. out) of the (i,j)-S_1.]. Both choices are equivalent, provided we include the respective “boundary contribution” at x̃_ji=0 or x̃_ji=∞, depending on the case. The “inner” choice corresponds to selecting poles with 𝕀m(ũ_j-u_i)>0. The “outer” choice corresponds to poles with 𝕀m(ũ_j-u_i)<0. Notice that for the large N solution of our interest, an outer (i,j)-pole implies the presence of an inner (j,i)-pole and vice versa (see equation (2.39) of <cit.>). In fact in the limit N→∞ all of such poles will condense either at 0 or ∞, except for the case i=j. In the particular case i=j, the poles condense to an arc of (i,i)-S^1. For each (i,j), we will choose to deform the (i,j)-S^1 contour in the way that encloses the poles at the “bulk” (these, are the poles whose positions are the eigenvalues in equation (2.39) of <cit.>).
The projection of the domain 𝕏(Δ_a) upon our integration contour, which is obtained by demanding 𝕀m[u]=𝕀m[ũ]=0 on the former, is a parametrization of the maximally connected region without Coulomb branch singularities [In fact in integrating x_i over S_1 we should avoid colliding with such singularities, either by slightly deforming the contour, or by turning on an infinitesimal mass regulator in the matter one loop determinant.].
Actually, the intersection of the boundary of the complex region 𝕏(Δ_a), ∂𝕏(Δ_a) with our contour of integration S^1, is a parametrization of the domain of such singularities. By a singularity of the Coulomb branch we mean a point (u_i,ũ_j) such that the quantity that defines the one loop contributions
∏_{a=1,2}( √( x_i/x̃_j y_a )/( 1-x_i/x̃_j y_a ) )^{-1} ∏_{b=3,4}( √( x̃_j/x_i y_b )/( 1-x̃_j/x_i y_b ) ) = sin((-ũ_j+u_i+Δ_1)/2) sin((-ũ_j+u_i+Δ_2)/2) / [ sin((ũ_j-u_i+Δ_3)/2) sin((ũ_j-u_i+Δ_4)/2) ],
Notice that, as we are not imposing the GNO conditions, we have to integrate over the values of fluxes 𝔪_i and 𝔪̃_i along the Cartan directions. As we have already stated, the fluxes 𝔪 and 𝔪̃ are non-normalizable modes, even though they are in BPS[Γ]. In that respect, our approach is reminiscent of the one advocated in <cit.>. In <cit.>, integration over non-normalizable modes belonging to the zero locus of the relevant supercharge, was suggested for the localization on ℍ_2× S^1. Although the localization performed there, was on the branched sphere, the integration over the Coulomb branch parameter completes the nice picture suggested by (4.15) of <cit.>[A second possibility we will not explore in this work, is to fix the values of non normalizable modes in Q_ϵ[Γ] to specific values. However, in order to match the final result to the supergravity dual, an extremization procedure should be engineered for those values. In some sense, integration over these non normalizable color - not flavor- modes is such sort of extremization. In this second, more open, line of thought, perhaps one could relax our reality condition on u_i and simply define the latter extremization as integration over the complex and more abstract Jeffrey-Kirwan (JK) contour. It would be quite remarkable if with such an alternative approach, one obtains the same result for the index. In that case, the approach followed in this manuscript, would provide a less abstract viewpoint of the JK contour. We suspect this is indeed the case, but as it is not the final goal of our study, we shall not check so in this manuscript. ].
We use a couple of very large cut-offs M_±, M̃_±>0, because the volume of the moduli space of fluxes (𝔪, 𝔪̃) is infinite. After computing the integral over the holonomies for fixed values of M_± and M̃_±, we are free to send one and only one, of either M_+(resp. M̃_+) or M_-(resp. M̃_-), to infinity. The other one, remains as a regulator that we will redefine as M (resp. M̃). Thereafter, we pick up the residues of the analytical continuation of the regulated integrand, the final result will be independent on M and M̃, and we are free to take M, M̃→∞ on such residues
[The consequences of using this regularization procedure are somehow reminiscent of the consequences of applying the Jeffrey-Kirwan recipe in <cit.>. ].
Next, we shall show independence of the regulated expression on the cutoffs M and M̃; but before proceeding, let us recall a couple of conditions that we have implicitly used so far. The topological twisting condition is
∑_a 𝔫_a=2.
To understand how (<ref>) is the topological twisting condition, please, refer to section 2.1 of <cit.>.
From conservation of flavor symmetry, it follows that
∏_a y_a=1.
For clarity, it is convenient to re-organize the RHS of (<ref>) as follows
c ∑_{p,p̃∈ℤ} (1/N!)^2 ∫ ∏_{i=1}^N dx_i/(2π i x_i) dx̃_i/(2π i x̃_i) ∫_{-M_-}^{M_+} ∫_{-M̃_-}^{M̃_+} ∏_{i=1}^N d𝔪_i d𝔪̃_i ( ∏_{i≠ j}^N √( ( 1-x_i/x_j)( 1-x̃_i/x̃_j) ) A )
× ( ∏_{i=1}^N exp[ Υ_i(x,x̃) 𝔪_i ] ) ( ∏_{j=1}^N exp[ Υ̃_j(x,x̃) 𝔪̃_j ] ),
where
Υ_i(x,x̃) = log( x_i^k e^{2π i p} ∏_{j=1}^N √( x_i/x̃_j y_1 · x_i/x̃_j y_2 ) / [ ( 1-x_i/x̃_j y_1)( 1-x_i/x̃_j y_2) ] · ( 1-x̃_j/x_i y_3)( 1-x̃_j/x_i y_4) / √( x̃_j/x_i y_3 · x̃_j/x_i y_4 ) )^{1/2}
= log( x_i^k e^{2π i p} ∏_{j=1}^N ( 1-y_3 x̃_j/x_i)( 1-y_4 x̃_j/x_i) / [ ( 1-y_1^{-1} x̃_j/x_i)( 1-y_2^{-1} x̃_j/x_i) ] )^{1/2}
:= 1/2 log( e^{i B_i} ),
and
Υ̃_j(x,x̃) = 1/2 log( e^{i B̃_j} ),
with
e^{i B̃_j} := x̃_j^k e^{2π i p̃} ∏_{i=1}^N ( 1-y_3 x̃_j/x_i)( 1-y_4 x̃_j/x_i) / [ ( 1-y_1^{-1} x̃_j/x_i)( 1-y_2^{-1} x̃_j/x_i) ],
A := ∏_{i,j=1}^N ( ∏_{a=1,2}( √( x_i/x̃_j y_a )/( 1-x_i/x̃_j y_a ) )^{(1-𝔫_a)/2} ∏_{b=3,4}( √( x̃_j/x_i y_b )/( 1-x̃_j/x_i y_b ) )^{(1-𝔫_b)/2} ).
Definitions (<ref>), (<ref>) and (<ref>) are the ones in equations (2.21) and (2.20) of <cit.>.
After evaluating the integral over fluxes, we obtain
Z_{ℍ_2×S^1}(𝔫,y,M) = c ∑_{p,p̃∈ℤ} (1/N!)^2 ∫_{|x|=|x̃|=1} ∏_{i=1}^N dx_i/(2π i x_i) dx̃_i/(2π i x̃_i) ( ∏_{i≠ j}^N √( ( 1-x_i/x_j)( 1-x̃_i/x̃_j) ) A )
× ∏_{i=1}^N s_{Υ_i} exp[ s_{Υ_i} Υ_i(x,x̃) M ] / ( 1/2 log( e^{i B_i} ) ) × ∏_{j=1}^N s_{Υ̃_j} exp[ s_{Υ̃_j} Υ̃_j(x,x̃) M̃ ] / ( 1/2 log( e^{i B̃_j} ) ),
where
s_{Υ_i} := sign ℝe Υ_i(x,x̃), s_{Υ̃_j} := sign ℝe Υ̃_j(x,x̃).
In (<ref>) we have already taken M_{- or +}→∞ and M̃_{- or +}→∞.
The next step is to evaluate the residues of the analytical continuation of the regulated integrand in (<ref>) at the simple poles enclosed by our contour. Such poles are located at positions x_* and x̃_*, defined by the eigenvalues of the BAE
e^{i B_i(x_*,x̃_*)} = 1, e^{i B̃_j(x_*,x̃_*)} = 1.
Solutions of (<ref>) are not on our integration contour [This is because the imaginary part of both eigenvalues u and ũ is different from 0 (see equation (2.39) of <cit.>).], but are enclosed by it.
In this way we solve the remaining integrals in (<ref>).
Notice that we are naively focusing on the contribution coming from the simple poles generated by the BAE (<ref>). Next, we shall see that the solution to (<ref>) is independent of the topological holonomies p and p̃. The integrand of (<ref>) is also independent of p and p̃. Consequently, the average over p and p̃ will be trivial.
The final result for the index is
Z_{ℍ_2 × S^1}(𝔫,y) = ∏_{a=1}^4 y_a^{-N^2 𝔫_a/4} ∑_{I ∈ BAE} (2^{2N}/𝔹) ( ∏_{i=1}^N x_{*i}^N x̃_{*i}^N ∏_{i≠ j}^N ( 1-x_{*i}/x_{*j})( 1-x̃_{*i}/x̃_{*j}) / [ ∏_{i≠ j}^N ∏_{a=1}^2 ( x̃_{*j}-y_a x_{*i})^{1-𝔫_a} ∏_{a=3}^4 ( x_{*i}-y_a x̃_{*j})^{1-𝔫_a} ] )^{1/2},
where
𝔹 := ∂( e^{i B_j}, e^{i B̃_j} )/∂( log x_l, log x̃_l ) = [ x_l ∂ e^{i B_j}/∂ x_l , x̃_l ∂ e^{i B_j}/∂ x̃_l ; x_l ∂ e^{i B̃_j}/∂ x_l , x̃_l ∂ e^{i B̃_j}/∂ x̃_l ]_{2N× 2N}.
Notice that the cut-off dependence disappeared in (<ref>) due to the BAE.
We are interested in the large-N limit of Z_{ℍ_2× S^1}. In that limit, there is a unique BAE solution, hence the summation over the label I becomes spurious.
It is convenient to define
D(z) := ( 1-z y_3)( 1-z y_4) / [ ( 1-z y_1^{-1})( 1-z y_2^{-1}) ].
In terms of D(z), the LHS of the BAE are
e^{i B_i(x_*,x̃_*)} = x_i^k e^{2π i p} ∏_{j=1}^N D( x̃_j/x_i ), e^{i B̃_j(x_*,x̃_*)} = x̃_j^k e^{2π i p̃} ∏_{i=1}^N D( x̃_j/x_i ).
In terms of the quantity
G_ij := ∂log D/∂log z |_{z = x̃_j/x_i},
the matrix 𝔹 takes the form
𝔹|_BAE = [ δ_{jl}( k-∑_{m=1}^N G_{jm} ) , G_{jl} ; -G_{lj} , δ_{jl}( k+∑_{m=1}^N G_{mj} ) ].
The next step is to write down the BAE in “angular” coordinates u_i, ũ_i, and Δ_a, which are defined from
x_i = e^{i u_i}, x̃_j = e^{i ũ_j}, y_a = e^{i Δ_a}.
In these coordinates, the constraint ∏_a y_a=1 reads ∑_a Δ_a=0 (mod 2π).
In “angular” coordinates, the BAE (<ref>) are
0 = k u_i + i∑_{j=1}^N [ ∑_{a=3,4} Li_1( e^{i( ũ_j-u_i+Δ_a)} ) - ∑_{a=1,2} Li_1( e^{i( ũ_j-u_i-Δ_a)} ) ] - 2π( n_i - p ),
0 = k ũ_j + i∑_{i=1}^N [ ∑_{a=3,4} Li_1( e^{i( ũ_j-u_i+Δ_a)} ) - ∑_{a=1,2} Li_1( e^{i( ũ_j-u_i-Δ_a)} ) ] - 2π( ñ_j - p̃ ),
with n_i, ñ_j ∈ℤ. Equations (<ref>) are the same BAE given in (2.32) of <cit.>.
To fix the values of n_i and ñ_j, we use the identity
Li_1( e^{iu} ) - Li_1( e^{-iu} ) = -iu+iπ,
together with the assumption of absence of “long range interactions” <cit.>. The latter condition implies
2π n_i = 2π p + ( ∑_a Δ_a-4π ) ∑_j Θ(𝕀m( u_i-ũ_j )),
2π ñ_j = 2π p̃ + ( ∑_a Δ_a-4π ) ∑_i Θ(𝕀m( u_i-ũ_j )).
We can solve the constraints (<ref>) and (<ref>) for n_i and ñ_j with the choice
∑_aΔ _a=2π .
It is key to observe that (<ref>) and (<ref>) imply that the solutions to the BAE are independent of the topological holonomies p, p̃. Additionally, (<ref>) is also independent of p and p̃, and we conclude that the average over p and p̃ can be substituted by one.
The solution of (<ref>) obeying (<ref>) is precisely the one used in <cit.> to obtain their result (2.89) in the large-N limit. In the next section, for completeness, we evaluate the aforementioned solution. We have tried other choices, such as ∑_a Δ_a=0; however, as pointed out in <cit.>, there are always some issues with the potential solutions.
§.§ Large-N behavior of the index
From now on, we take the limit N→∞, assume Chern-Simons level k=1, and introduce the density of eigenvalues ρ(t) = 1/N ∑_{i=1}^N δ(u_i-t) and the quantity δ v(t), precisely as done in section 2.3 of <cit.>. In this continuous limit, the BAE arise from the variations of the auxiliary Lagrangian
𝒱/(iN^{3/2}) = ∫ dt [ tρ(t)δ v(t)+ρ(t)^2( ∑_{a=3,4} g_+(δ v(t)+Δ_a)-∑_{a=1,2} g_-(δ v(t)-Δ_a) ) ]
-μ[ ∫ dtρ(t)-1 ] - i/N^{1/2} ∫ dtρ(t) [ ∑_{a=3,4} Li_2( e^{i( δ v(t)+Δ_a)} ) -∑_{a=1,2} Li_2( e^{i( δ v(t)-Δ_a)} ) ],
where
g_±(u):=u^3/6∓π/2 u^2+π^2/3u.
It is easy to follow the steps in <cit.>. Indeed, for ∑_a Δ_a=2π and under the assumptions
μ >0, ∃ t̃: δ v(t̃)=0, Δ_1<Δ_2<Δ_3<Δ_4,
together with (<ref>)
0<-δ v(t)+Δ_a<2π if a=1,2,
0< δ v(t)+Δ_a<2π if a=3,4,
the large-N relevant part of the solution to the continuous limit of the BAE, coming from the Lagrangian (<ref>), is
ρ(t) := { -(μ +Δ_3 t)/[(Δ_1+Δ_3)(Δ_2+Δ_3)(Δ_3-Δ_4)] if t_0<t<t_1; (2πμ +t(Δ_3Δ_4-Δ_1Δ_2))/[(Δ_1+Δ_3)(Δ_1+Δ_4)(Δ_2+Δ_3)(Δ_2+Δ_4)] if t_1<t<t_2; (Δ_1 t-μ)/[(Δ_1-Δ_2)(Δ_1+Δ_3)(Δ_1+Δ_4)] if t_2<t<t_3 }.
δ v(t) := { -Δ_3 + e^{-N^{1/2} Y_3(t)} if t_0<t<t_1; [μ(Δ_1Δ_2-Δ_3Δ_4)+t(Δ_1Δ_2Δ_3+Δ_1Δ_2Δ_4+Δ_1Δ_3Δ_4+Δ_2Δ_3Δ_4)]/[2πμ + t(Δ_3Δ_4-Δ_1Δ_2)] if t_1<t<t_2; Δ_1 - e^{-N^{1/2} Y_1(t)} if t_2<t<t_3 }.
Y_1(t) = { (Δ_1+Δ_4)(μ +Δ_3 t)/[(Δ_2+Δ_3)(Δ_3-Δ_4)]+t if t_0<t<t_1; 0 if t_1<t<t_2; (μ -Δ_2 t)/(Δ_1-Δ_2) if t_2<t<t_3 },
Y_2(t) = { (Δ_2+Δ_4)(μ +Δ_3 t)/[(Δ_1+Δ_3)(Δ_3-Δ_4)]+t if t_0<t<t_1; 0 if t_1<t<t_2; t-(Δ_2+Δ_3)(Δ_2+Δ_4)(Δ_1 t-μ)/[(Δ_1-Δ_2)(Δ_1+Δ_3)(Δ_1+Δ_4)] if t_2<t<t_3 },
Y_3(t) = { (μ +Δ_4 t)/(Δ_3-Δ_4) if t_0<t<t_1; 0 if t_1<t<t_2; -(Δ_2+Δ_3)(Δ_1 t-μ)/[(Δ_1-Δ_2)(Δ_1+Δ_4)]-t if t_2<t<t_3 },
Y_4(t) = { (Δ_1+Δ_4)(Δ_2+Δ_4)(μ +Δ_3 t)/[(Δ_1+Δ_3)(Δ_2+Δ_3)(Δ_3-Δ_4)]-t if t_0<t<t_1; 0 if t_1<t<t_2; -(Δ_2+Δ_4)(Δ_1 t-μ)/[(Δ_1-Δ_2)(Δ_1+Δ_3)]-t if t_2<t<t_3 },
with
t_0=-μ/Δ_3, t_1=-μ/Δ_4 ,t_2=μ/Δ_2, t_3= μ/Δ_1 .
From (<ref>) the ordering of the transition times follows:
t_0<t_1<t_2<t_3, ρ >0.
From the normalization condition ∫_t_0^t_3dtρ (t)=1 it follows that
μ =√(2Δ _1Δ _2Δ _3Δ _4).
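As a numerical sanity check of this normalization (with an arbitrary choice of Δ_a obeying the ordering and ∑_a Δ_a=2π):

# Verify that the piecewise density rho(t) integrates to 1 precisely
# when mu = sqrt(2*D1*D2*D3*D4).  The Delta_a below are arbitrary.
import numpy as np
from scipy.integrate import quad

D1, D2, D3, D4 = 0.5, 1.0, 1.8, 2*np.pi - 3.3   # increasing, sum = 2*pi
mu = np.sqrt(2*D1*D2*D3*D4)
t0, t1, t2, t3 = -mu/D3, -mu/D4, mu/D2, mu/D1

rho1 = lambda t: -(mu + D3*t)/((D1+D3)*(D2+D3)*(D3-D4))
rho2 = lambda t: (2*np.pi*mu + t*(D3*D4 - D1*D2))/((D1+D3)*(D1+D4)*(D2+D3)*(D2+D4))
rho3 = lambda t: (D1*t - mu)/((D1-D2)*(D1+D3)*(D1+D4))

norm = sum(quad(f, a, b)[0] for f, a, b in
           [(rho1, t0, t1), (rho2, t1, t2), (rho3, t2, t3)])
print(norm)                                      # -> 1.0 up to quadrature error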
To obtain the leading free energy in the limit N →∞, one evaluates (<ref>) at the BAE solution (<ref>). The final result can be easily inferred from the fact that our BAE solution is the same one found in <cit.> for the case k=1, ∑_a Δ_a=2π. In the latter case, and since the summand in (<ref>) is the square root of the one in eq. (2.24) of <cit.>, it follows that
ℝe log Z_{ℍ_2×S^1}|_{large-N BAE solution} = 1/2 ℝe log Z_{S^2×S^1}|_{large-N BAE solution} + sub. terms.
Finally, one arrives at the result
ℝe log Z_{ℍ_2× S^1}^{k=1} = -F_{ℍ_2×S^1}^{k=1}(𝔫,Δ) = -( 1/2 ) × N^{3/2}/3 √(2 Δ_1Δ_2Δ_3Δ_4) ∑_{a=1}^4 𝔫_a/Δ_a + sub. terms,
∑_a Δ_a = 2π.
After extremizing with respect to Δ_1, Δ_2 and Δ_3, we obtain the following relation between fluxes and holonomies
𝔫_1 = Δ_1(Δ_1-π)/[Δ_1^2+Δ_1(Δ_2+Δ_3)-2π(Δ_1+Δ_2+Δ_3)+Δ_2^2+Δ_2Δ_3+Δ_3^2+π^2],
𝔫_2 = Δ_2(Δ_2-π)/[Δ_1^2+Δ_1(Δ_2+Δ_3)-2π(Δ_1+Δ_2+Δ_3)+Δ_2^2+Δ_2Δ_3+Δ_3^2+π^2],
𝔫_3 = Δ_3(Δ_3-π)/[Δ_1^2+Δ_1(Δ_2+Δ_3)-2π(Δ_1+Δ_2+Δ_3)+Δ_2^2+Δ_2Δ_3+Δ_3^2+π^2],
that will prove to be useful later on, when comparing with the conjectured AdS/CFT dual quantity.
§.§ Comments on the index
The index computed in this work, which follows closely <cit.>, has the canonical interpretation of a Witten index counting ground states according to Z(𝔫_a, Δ_a) = Tr (-1)^F e^{-β H} e^{iJ_a Δ_a}. From the 3d perspective, we are simply counting operators with the corresponding relation among quantum numbers. Now, assuming that the deformed ABJM theory flows to an effective quantum mechanics in the IR, the index computes the degeneracy of ground states in that quantum mechanics. Since the index is an invariant of the flow, we connect directly the 3d and 1d perspectives.
On the gravity side, we have, similarly, the possibility of viewing the counting from the 4d or 2d perspectives. The better formulated one, at the moment, turns out to be the 2d perspective, which Sen has developed in the framework of AdS_2/CFT_1<cit.>. In this context, the ground state degeneracy in the quantum mechanics is computed by an AdS_2 partition function with specific boundary conditions, which leads precisely to the quantum black hole entropy. There are, of course, some open issues with the application of Sen's proposal, in the context of asymptotically AdS black holes, but it certainly provides a solid starting point.
§ THE HYPERBOLIC ADS_4 BLACK HOLE
In this section we construct what we believe is the holographic dual to the ABJM configuration discussed thus far. Namely, we construct magnetically charged, asymptotically AdS_4 black holes with non-compact ℍ_2 horizon that are embedded in M-theory. Our construction follows similar spherical solutions in 𝒩=2 gauged supergravity, see, for example, <cit.>. We will comment on some similarities and differences with the solutions with spherical horizon in subsection <ref>. We shall focus on the case of n_V=3 vector multiplets. In this way the n_V+1 vector fields, counting also the graviphoton, are identified as dual to the global charges of ABJM.
§.§ A brief summary of 4d 𝒩=2 gauged SUGRA with n_V=3
For completeness, let us briefly introduce the concepts that we shall use. The central object is the pre-potential
ℱ = ℱ(X^Λ),
which is a holomorphic function of the holomorphic sections X^Λ(z^i), Λ=1,2,3,4. The symplectic sections are functions of the physical scalars z^i with i=1,2,3.
Another important object is the Kähler potential
𝒦 = -log i( X̅^Λ ℱ_Λ - X^Λ ℱ̅_Λ ), ℱ_Λ := ∂ℱ/∂X^Λ.
We will also need to use the period matrix
N_{ΛΣ} := ℱ̅_{ΛΣ} + 2i 𝕀m(ℱ_{ΛΓ}) X^Γ 𝕀m(ℱ_{ΣΔ}) X^Δ / [ X^Γ 𝕀m(ℱ_{ΓΔ}) X^Δ ], ℱ_{ΓΔ} := ∂ℱ_Γ/∂X^Δ,
and the following auxiliary variables
( L^Λ, M_Λ ) := e^{𝒦/2}( X^Λ, ℱ_Λ ), ( f_i^Λ, h_{Λ,i} ) := e^{𝒦/2}( D_i X^Λ, D_i ℱ_Λ ),
where the covariant derivative D_i is defined as D_i := ∂_{z^i}+𝒦_i, with 𝒦_i := ∂_{z^i}𝒦.
In our case we will be interested in real holomorphic sections
X̅^Λ = X^Λ, z̅^i = z^i.
To construct black holes, we shall set the fermions, and fermionic variations, to zero. The supersymmetry variation of the gravitino is
δψ_{μ A} := ∇_μ ε_A + 2i F^{-Λ}_{μν} I_{ΛΣ} L^Σ γ^ν ϵ_{AB} ε^B - g/2 σ^3_{AB} ξ_Λ L^Λ γ_μ ε^B,
where the covariant derivative of the Killing spinor ε_A is
∇_μ ε_A = ( ∂_μ - 1/4 ω_μ^{ab} γ_{ab} ) ε_A + 1/4( 𝒦_i ∂_μ z^i - 𝒦_ī ∂_μ z̅^ī ) ε_A + i/2 g ξ_Λ A_μ^Λ σ^{3 B}_A ε_B.
The supersymmetry variation of the gaugino
δλ^{iA} = i ∂_μ z^i γ^μ ε^A - g^{i j̄} f̄_{j̄}^Λ I_{ΛΣ} F_{μν}^{Σ-} γ^{μν} ϵ^{AB} ε_B + i g g^{i j̄} f̄_{j̄}^Λ ξ_Λ σ^{3,AB} ε_B,
will be used too. In equations (<ref>) and (<ref>) we are discarding higher order terms in fermions. These terms are not relevant to our discussion.
To reproduce our results it will be useful to have the following definitions <cit.>
F_Λ^- := 1/2( F_Λ - i * F_Λ ), (* F_Λ)_{μν} := 1/2 ϵ_{μναβ} F_Λ^{αβ}, (1+γ_5)/2 ε_A = (1-γ_5)/2 ε^A = 0.
§.§ Hyperbolic black holes
To avoid confusion, the index Λ={1,2,3,4} is equivalent to the index a={1,2,3,4} that will be introduced in the next subsection.
We are interested in the STU model. Thence, we fix the Fayet-Iliopoulos parameters in an isotropic manner
ξ_0=ξ_1=ξ_2=ξ_3=ξ_V.
The relevant pre-potential will be
ℱ(X)=-2 i √(X_1 X_2 X_3 X_4 ).
We consider real sections, with the following parametrization
X^Λ = X̅^Λ = { -z^1/(z^1+z^2+z^3+3), -z^2/(z^1+z^2+z^3+3), -z^3/(z^1+z^2+z^3+3), -1/(z^1+z^2+z^3+3) },
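For these data the Kähler potential defined in the previous subsection is readily evaluated: ℱ_Λ = -i√(X_1X_2X_3X_4)/X_Λ implies X^Λ ℱ_Λ = -4i√(X_1X_2X_3X_4), so that for real sections
𝒦 = -log( 8√(X_1X_2X_3X_4) ).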
and propose the following static ansatz, with hyperbolic horizon, for the metric and sections
ds^2 = -U^-2(r)dr^2-h^2(r)dθ ^2-h^2(r)sinh ^2θ dφ
^2+U^2(r)dt^2,
X^Λ = α+ β_Ł/r.
The non-trivial components of the spin connection are [In this section we use conventions different from those of section <ref>. We have used standard conventions of four-dimensional 𝒩=2 gauged supergravity, namely those given in <cit.> (see also <cit.>). For instance, the definition of the spin connection is minus the one used in section <ref>. Consequently, in the covariant derivatives there is a relative minus sign in front of the term proportional to the spin connection between this section and section 2.]
ω _t^14 = -U(r)U^'(r), ω _θ^12=-U(r)h^'(r),
ω _φ^13 = -U(r)h^'(r)sinhθ , ω _φ^23=-coshθ .
In this section
ϵ _4123 = 1,
η _ab = (-1,-1,-1,1),
γ _5 = iγ ^4γ^1γ ^2γ ^3.
We use the following Ansätze for the functions U(r) and h(r)
U(r) := e^{𝒦/2}( g r - c/(2g r) ),
h(r) := d e^{-𝒦/2} r,
where g, c and d are constants.
The corresponding black holes are sourced by magnetic fluxes p_Λ:
A_{Λφ} = -p_Λ coshθ, F_{Λθφ} = -p_Λ/2 sinhθ.
The non trivial components of the anti-selfdual field strength are
F_Λ _θφ^-=-F_Λ _θφ^-=-
p_Λ/4sinhθ , F_Λ _rt^-=-F_Λ
_tr^-=ip_Λ/4h^2(r)·
The chiral and anti-chiral Killing spinors ε_A and ε^B have to obey the following relations, obtained from the vanishing of the gravitino supersymmetry transformation:
ε_A = ϵ_AB γ^4 ε^B , ε_A = ± σ^3_AB γ^1 ε^B .
The most general solution to (<ref>) is
ε_A = [ ±ϰ(r) , ∓i ϰ̅(r) ; ±i ϰ(r) , ±ϰ̅(r) ; -i ϰ(r) , ϰ̅(r) ; -ϰ(r) , -i ϰ̅(r) ] , ε^B = [ ϰ̅(r) , i ϰ(r) ; -i ϰ̅(r) , ϰ(r) ; ±i ϰ̅(r) , ±ϰ(r) ; ∓ϰ̅(r) , ±i ϰ(r) ] .
Solving the BPS conditions leads to the relations
α = ∓ 1/(4 ξ_V) ,
c = [ 1 + 8 d² g² ξ_V² ( β_4² + β_1² + β_2² + β_3² ) ] / d² ,
0 = β_4 + β_1 + β_2 + β_3 ,
1 = ± g ξ_V ( p_4 + p_1 + p_2 + p_3 ) .
Notice that the constant c is positive.
The relation between the fluxes and the parameters β_a is also obtained from the BPS conditions:
p_1 = ± [ 1 + 16 d² g² ξ_V² ( -β_1² + β_2² + β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) ,
p_2 = ± [ 1 + 16 d² g² ξ_V² ( +β_1² - β_2² + β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) ,
p_3 = ± [ 1 + 16 d² g² ξ_V² ( +β_1² + β_2² - β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) .
The warping of the Killing spinor is also fixed by the BPS conditions,
ϰ = ϰ_0 √(U(r)) , ϰ̅ = ϰ̅_0 √(U(r)) .
We have thus completely solved the BPS conditions and constructed our hyperbolic AdS_4 black holes.
§.§ Spherical black holes
A prevalent intuition in the context of supergravity states that changing the horizon from spherical to hyperbolic leads from black holes to naked singularities, and vice versa <cit.>. In this section we explore the details of this intuition in the context of the magnetically charged black holes.
Let us first solve the BPS equations for the spherical black hole ansatz,
ds² = -U^-2(r) dr² - h²(r) dθ² - h²(r) sin²θ dφ² + U²(r) dt² , X^Λ = α + β_Λ/r ,
with U(r) and h(r) defined in (<ref>) and (<ref>).
The non-vanishing components of the spin connection are
ω_t^14 = -U(r) U'(r) , ω_θ^12 = -U(r) h'(r) , ω_φ^13 = -U(r) h'(r) sinθ , ω_φ^23 = -cosθ .
For technical convenience, let us parametrize the gauge potential as follows [The definition of the field strength used in this section differs from the one used in section <ref>.]:
A_Λφ = -p_Λ cosθ , F_Λθφ = (p_Λ/2) sinθ .
The non-trivial components of the anti-self-dual field strength are
F^-_Λθφ = -F^-_Λφθ = (p_Λ/4) sinθ , F^-_Λ rt = -F^-_Λ tr = i p_Λ/(4 h²(r)) .
After solving the BPS equations, we arrive at
α = ∓ 1/(4 ξ_V) ,
c = [ -1 + 8 d² g² ξ_V² ( β_4² + β_1² + β_2² + β_3² ) ] / d² ,
0 = β_4 + β_1 + β_2 + β_3 ,
-1 = ± g ξ_V ( p_4 + p_1 + p_2 + p_3 ) ,
and
p_1 = ∓ [ 1 + 16 d² g² ξ_V² ( -β_1² + β_2² + β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) ,
p_2 = ∓ [ 1 + 16 d² g² ξ_V² ( +β_1² - β_2² + β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) ,
p_3 = ∓ [ 1 + 16 d² g² ξ_V² ( +β_1² + β_2² - β_3² + β_1β_2 + β_2β_3 + β_1β_3 ) ] / (4 g ξ_V) .
The warping of the Killing spinor is again fixed by the BPS conditions,
ϰ = ϰ_0 √(U(r)) , ϰ̅ = ϰ̅_0 √(U(r)) .
In contradistinction to the hyperbolic solution, in the spherical case c can be negative; see equation (<ref>).
The position of the curvature singularity is
r_s = -(1/α) max{β_1, β_2, β_3, β_4} > 0 if α < 0 , r_s = -(1/α) min{β_1, β_2, β_3, β_4} > 0 if α > 0 .
If β_a = 0, the curvature singularity sits at r_s = 0. In that case the hyperbolic solution has c > 0 and therefore a horizon, whereas the spherical solution has c < 0 and exhibits a naked singularity. It is straightforward to check that, when β_a = 0, the change
(r, θ, t, p_a, c) → (i r, i θ, i t, -p_a, -c)
transforms the hyperbolic black hole BPS solutions of the previous subsection into the spherical BPS solutions of this subsection, particularized to β_a = 0; the latter have a naked singularity. Actually, the exchange p_a → -p_a can be cancelled by an exchange of Killing spinor (only because β_a = 0), by which we mean changing the choice of sign in the constraint (<ref>). Such a change has physical meaning, as it leads to a configuration that is BPS with respect to a different supercharge.
We emphasize that the intuition emanating from <cit.> is restricted to the case β_a = 0, that is, to constant sections; it is relaxed when β_a ≠ 0.
§.§ The Bekenstein-Hawking entropy: ℍ_2 vs S^2
In this subsection, we compare the entropy of hyperbolic and spherical black holes, starting with the case of isotropic fluxes. In the end, we will find that the entropy density of the hyperbolic solution coincides with the entropy density of the spherical solutions discussed in <cit.>.
From now on, we particularize our hyperbolic solutions to the following case
α =-1/4, ξ _V=1, g=1/√(2).
A sufficient (not necessary) condition for the existence of hyperbolic AdS_4 black holes is
β_1, β_2, β_3 > 0 , r_h = √c > r_s = 4 max(β_1, β_2, β_3) ,
where the domain of the radial coordinate is r > r_s. The constant r_s is the radial position of the singularity, which is covered by the horizon at r_h = √c.
A particular solution to these conditions is
β _1= β _2= β _3=β >0.
In that case, the fluxes are
p_1 = p_2 = p_3 = p = (1 + 32 d² β²)/(2√2) > 0 .
The regularized area density of the horizon is
A_ℍ_2(β)/vol_ℍ_2 = (1/2) √( 1 + 512 d³ β³ ( -6 d β + √(1 + 48 d² β²) ) )
= (1/2) √( -3 (1 - 2√2 p)² + 2 (2√2 p - 1)^3/2 √(6√2 p - 1) + 1 ) ,
with the large-flux behavior
A_ℍ_2(p)/vol_ℍ_2 ∼ √(4√3 - 6) p as p → +∞ , p > 0 .
Next, we compare the entropy density (<ref>) to that of the spherical black holes used in <cit.>. The isotropic solution presented in section 4.1 of <cit.> is
𝔫_1 = 𝔫_2 = 𝔫_3 = √2 p' , 𝔫_4 = 2 - 3√2 p' , p' < 0 .
From the quantities <cit.>
F_2(p') := -( 12 p'² - 6√2 p' + 1 ) , Θ(p') := 192 p'⁴ - 160√2 p'³ + 96 p'² - 12√2 p' + 1 ,
one arrives at the following expression for the area density of the spherical horizon in terms of the flux p':
A_S²(p')/vol_S² := √(F_2 + √Θ)/√2 = √( √( 4 p' ( 8 p' ( 6 p'² - 5√2 p' + 3 ) - 3√2 ) + 1 ) + 6 (√2 - 2 p') p' - 1 )/√2 ,
with the large-flux behavior
A_S²(p')/vol_S² ∼ -√(4√3 - 6) p' as p' → -∞ , p' < 0 .
We could carry out the comparison with our own spherical solutions obtained from scratch; however, in order to match our results with those in <cit.>, we report the comparison using theirs. Notice that the large-flux limits (<ref>) and (<ref>) coincide. In fact, it can be checked that the entropy densities agree for any value of u,
A_ℍ_2(|u|)/vol_ℍ_2 = A_S²(|u|)/vol_S² .
Equation (<ref>) can be checked by comparing (<ref>) and (<ref>) order by order in Taylor expansions in u about 0 and ∞, or simply by working out the expressions.
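The claimed equality of the two entropy densities is also easy to test numerically. The short numpy sketch below evaluates both closed forms over an arbitrary sampling of the common domain 2√2 u > 1 and checks the shared large-flux slope √(4√3-6); if the identity holds, the two expressions should agree to rounding error.

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def area_H2(p):
    # hyperbolic-horizon area density as a function of the flux p > 1/(2*sqrt(2))
    return 0.5*np.sqrt(-3*(1 - 2*SQ2*p)**2
                       + 2*(2*SQ2*p - 1)**1.5*np.sqrt(6*SQ2*p - 1) + 1)

def area_S2(p):
    # spherical-horizon area density, sqrt(F_2 + sqrt(Theta))/sqrt(2)
    F2 = -(12*p**2 - 6*SQ2*p + 1)
    Theta = 192*p**4 - 160*SQ2*p**3 + 96*p**2 - 12*SQ2*p + 1
    return np.sqrt(F2 + np.sqrt(Theta))/SQ2

u = np.linspace(0.4, 50.0, 2000)          # arbitrary sampling of 2*sqrt(2)*u > 1
rel = np.abs(area_H2(u) - area_S2(u))/area_S2(u)
print(rel.max())                          # ~ machine precision if the densities coincide
print(area_H2(u[-1])/u[-1], np.sqrt(4*np.sqrt(3) - 6))   # common large-flux slope
```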
§.§ Matching results
In this final section, we compare the results on the two sides of the AdS/CFT duality. On one side, we have the result for the ABJM index on ℍ_2 × S^1 (<ref>); on the other, the entropy of the hyperbolic magnetic AdS_4 black holes (<ref>). The first thing to do is to compute the Bekenstein-Hawking entropy of (<ref>). Thereafter, we check the relation between the classical entropy and the value of the holomorphic sections X_a (denoted X^Λ in the previous subsection) at the horizon r_h. This relation is identical, up to a relabelling of variables, to the relation between the logarithm of the ABJM index and the holonomies Δ_a (<ref>). Finally, we prove that, under the appropriate relabelling of variables and extremization of the logarithm of the ABJM index (<ref>) with respect to the holonomies Δ_a, the bulk and SCFT results coincide, as was the case in <cit.>.
For generic p_1, p_2, and p_3, we have checked that the classical entropy
S_BH = A_ℍ_2/(4 G_4d) = -( π/(4 G_4d) ) √( (Ψ - 4dβ_1)(Ψ - 4dβ_2)(Ψ - 4dβ_3)(4d(β_1+β_2+β_3) + Ψ) ) ,
where
Ψ(β_1,β_2,β_3) := √( 8 d² ( β_1² + β_1β_2 + β_1β_3 + β_2β_3 + β_2² + β_3² ) + 1 ) ,
coincides with the expression
( 2π/(4 G_4d) ) √( X_1(r_h) X_2(r_h) X_3(r_h) X_4(r_h) ) Σ_a=1^4 √2 p_a / X_a(r_h) ,
where
X_a(r_h) = d β_a / √( 4 d² ( β_1² + β_2² + β_3² + β_4² ) + 1 ) - 1/4 .
The value of β_4 is determined by (<ref>). The p_a's as functions of the β_a's were given in equation (<ref>), which follows from the BPS equations.
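The equality between the closed-form entropy and the horizon-section expression can be spot-checked numerically. The sketch below sets g = 1/√2 and ξ_V = 1 as above, builds β_4 and the fluxes √2 p_a from the BPS relations, and samples random β_a (kept small enough that all horizon sections X_a(r_h) are negative, so both square roots are real); the sampling ranges are an assumption of convenience.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    d = rng.uniform(0.05, 0.3)                 # small d*beta keeps all X_a < 0
    b1, b2, b3 = rng.uniform(0.01, 1.0, 3)
    b4 = -(b1 + b2 + b3)                       # BPS constraint: sum of beta_a = 0
    Psi = np.sqrt(8*d**2*(b1**2 + b2**2 + b3**2 + b1*b2 + b1*b3 + b2*b3) + 1)

    # closed form, stripped of the common prefactor pi/(4 G_4d)
    S1 = -np.sqrt((Psi - 4*d*b1)*(Psi - 4*d*b2)*(Psi - 4*d*b3)
                  *(4*d*(b1 + b2 + b3) + Psi))

    # horizon sections and fluxes sqrt(2) p_a from the BPS relations
    beta = np.array([b1, b2, b3, b4])
    X = d*beta/np.sqrt(4*d**2*np.sum(beta**2) + 1) - 0.25
    cross = b1*b2 + b1*b3 + b2*b3
    n = np.empty(4)
    n[0] = 0.5 + 4*d**2*(-b1**2 + b2**2 + b3**2 + cross)
    n[1] = 0.5 + 4*d**2*(+b1**2 - b2**2 + b3**2 + cross)
    n[2] = 0.5 + 4*d**2*(+b1**2 + b2**2 - b3**2 + cross)
    n[3] = 2.0 - n[:3].sum()                   # the sqrt(2) p_a sum to 2
    S2 = 2*np.sqrt(np.prod(X))*np.sum(n/X)

    print(S1, S2)                              # the two columns should agree
```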
We will prove next that equation (<ref>), which comes from the analysis in the bulk, is equal to the extremal value of the SCFT topologically twisted index (<ref>) under the specific dictionary
√2 p_a ↔ 𝔫_a , -2π X_a(r_h) ↔ Δ̅_a , a = 1, 2, 3, 4 ,
where the Δ̅_a are the solutions for the variables Δ_a that come out of the inversion of equation (<ref>).
There are many ways to prove that (<ref>) is equivalent to the extremal value of (<ref>); the simplest one is to evaluate (<ref>) on
Δ_a = Δ_a(β_1,β_2,β_3) = -2π X_a(r_h) ,
to obtain
𝔫_1 = 1/2 + 4 d² ( -β_1² + β_2² + β_3² + β_1β_2 + β_1β_3 + β_2β_3 ) ,
𝔫_2 = 1/2 + 4 d² ( +β_1² - β_2² + β_3² + β_1β_2 + β_1β_3 + β_2β_3 ) ,
𝔫_3 = 1/2 + 4 d² ( +β_1² + β_2² - β_3² + β_1β_2 + β_1β_3 + β_2β_3 ) .
Notice that equation (<ref>) equals √2 times equation (<ref>), under (<ref>) and the identification (<ref>).
The relation (<ref>) implies that the positions Δ̅_a of the saddle points of (<ref>) coincide with the values of the sections X_a at the horizon (<ref>), under the identification (<ref>). Since the logarithm of the ABJM index (<ref>) and the Bekenstein-Hawking entropy (<ref>) are the same under the identification (<ref>) and
1/G_4d = (2√2/3) N^3/2 ,
we have thence proven that, under the aforementioned identifications, the boundary "degeneracy of states" and the bulk black hole entropy coincide.
As a final comment, we notice that the identifications (<ref>) are not directly obtained from the AdS/CFT dictionary, which is naturally formulated in the UV, where the value of the holomorphic sections X_a is -1/4. To obtain agreement, the use of the extremization principle on the SCFT result is crucial <cit.>. We believe there is a proper way to clarify some of these ad hoc issues, but we leave that discussion for future work.
§ CONCLUSIONS
In this manuscript we have first studied topologically twisted localization of N=2 supersymmetric field theories in ℍ_2× S^1. Our work differs in various important points from recent work on localization on this space by <cit.>. In particular, we have crucially considered topologically twisted theories and extended the type of theories under consideration beyond vector multiplets, to include, for example, matter multiplets.
At a technical level, we have also discussed explicitly subtle aspects of the eigenvalue problem corresponding to the Laplacian in the presence of a background magnetic field and we expect that such results could have wide application in the general context of localization. Quite interestingly, we have found a hierarchy of normalizable modes and its corresponding discrete spectrum. A particular sub-family of the aforementioned hierarchy, corresponds to the vector zero modes of the Laplace-Beltrami operator on ℍ_2, that were introduced in <cit.> and figure prominently in, for example, <cit.> and more recently in <cit.>. The full hierarchy of normalizable modes exists due to the presence of magnetic fluxes s over a specific threshold: |s|>1/2. We strongly suspect, that the discrete spectrum is encoding the full hierarchy of higher spin normalizable modes of the Laplace-Beltrami operator on ℍ_2. If this is indeed the case, it would be very interesting to pursue a study of the potential traces of 2d higher spin symmetry, on the set of black holes microstates <cit.>. As a first step toward formulating such a problem, it would be useful to start by identifying the square integrable modes in the language of <cit.>.
We have also studied N=2 gauged supergravity and found magnetically charged supersymmetric solutions with hyperbolic horizon. We have shown that under assumptions similar to those advanced in <cit.> the entropy of these solutions coincides with the real part of the logarithm of the topologically twisted index of the dual field theory. In conclusion, we have provided evidence in favor of identifying the set of square integrable modes in the presence of a constant flux on ℍ_2(× S^1), precisely speaking a very restricted set of zero modes out of the maximal set, as the boundary microstates responsible for the Bekenstein-Hawking entropy of the AdS_4 hyperbolic black holes presented in section <ref>. One important further test for this identification, would be to compute quantum corrections to the Bekenstein-Hawking entropy, on both sides of the duality.
On the 3d SCFT side we have made crucial use of the extremization approach advocated in <cit.>. The result of this approach is consistent with the constraints from the BPS equations on the gravity side and has been argued to be equivalent to the attractor mechanism. Under these conditions we have found precise agreement between the leading large-N results on the two sides. However, it would be important to elucidate the role of extremization intrinsically in the field theory but also from the gravity perspective. This is particularly important because in some cases the attractor mechanism has been shown to apply away from the strictly supersymmetric context.
Another natural generalization of this work, following <cit.>, is to extend the analysis to dyonic black holes. More generally, it would be interesting to consider mapping the full space of deformations on both sides of the correspondence and, in particular, its modifications on the free energy and the entropy. Another interesting direction concerns potential factorizations of the index on S^2× S^1, introduced in <cit.>, in terms of blocks given by the partition functions in ℍ_2× S^1. A similar factorization principle has been uncovered in various theories and in different dimensions, see, for example, <cit.>. In this manuscript we have found a particular relation but it should be pointed out that we have set all fermionic zero modes to zero and have integrated over a particular set of modes. Clearly, to achieve a bona fide factorization formula we will need to consider more general boundary conditions and contemplate retracing some of the steps suggested in <cit.>. Indeed, such an approach with general boundary conditions has been implemented for GLSM's in <cit.>.
Finally, it would be interesting to discuss the microstate counting of magnetically charged strings in asymptotically AdS_5 spacetimes. Such magnetically charged solutions have a long history in supergravity dating back to explorations in <cit.>. It is logical to expect that the microscopic explanation should be found within 4d topologically twisted field theories on S^2× T^2 or possibly ℍ_2× T^2. Indeed, as a natural starting point along these lines, the topologically twisted index introduced by Benini and Zaffaroni in <cit.> for supersymmetric field theories on S^2× S^1, was briefly discussed for 4d theories in S^2× T^2 in their original work, and was also addressed in <cit.>. It has recently been shown that, in the high temperature limit, the index produces a central charge that matches the supergravity answer <cit.> therefore providing a strong argument in favor of the identification. We hope to report on some of these interesting directions soon.
§ ACKNOWLEDGMENTS
We are first and foremost grateful to the Abdus Salam Centre for Theoretical Physics where this work originated and was largely conducted; L. PZ., in particular, acknowledges sabbatical support. The work of ACB is supported by CONICET.
We would like to thank Junya Yagi for collaboration in the early stages of this project. We are especially thankful to E. Gava and K. Narain for discussions and clarifications at various stages of this work. We are thankful to G. Bonelli and A. Tanzini for comments on a preliminary presentation by V. G-R. of parts of this work in summer 2016. We are also grateful to R.K. Gupta, K. Intriligator, U. Kol, J. Liu, S. J. Rey, A. Sen and V. Rathee for discussions. ACB is very grateful to Carmen Núñez for her support and encouragement.
§ COMMENTS ON THE DISCRETE SPECTRUM
In this appendix we report details on the construction of the square integrable modes defined in section <ref>.
The general solution to the defining equation (<ref>) is
f = χ_1 ( (coshθ-1)^s / sinh^j_3+sθ ) _2F_1(a_1, b_1, c_1; -sinh²(θ/2)) + χ_2 ( (coshθ-1)^j_3 / sinh^j_3+sθ ) _2F_1(a_2, b_2, c_2; -sinh²(θ/2)) ,
where χ_1,2 are arbitrary integration constants.
where χ_1,2 are arbitrary integration constants. It is very useful to work in the following coordinates
x :=-sinh^2θ/2, -∞<x≤0.
In x coordinate, (<ref>) takes the form
f = χ_1( x^s( x( x-1) )
^-j_3+s/2_2 F_1(a_1,b_1,c_1;x))
+χ_2( x^j_3( x( x-1) )
^-j_3+s/2_2 F_1(a_2,b_2,c_2;x)) ,
where the parameters a_1,2, b_1,2, and c_1,2 are
a_1 = 1/2 - j_3 - √(1/4 + Δ + s²) , b_1 = 1/2 - j_3 + √(1/4 + Δ + s²) , c_1 = 1 - j_3 + s ,
a_2 = 1/2 - s - √(1/4 + Δ + s²) , b_2 = 1/2 - s + √(1/4 + Δ + s²) , c_2 = 1 - s + j_3 .
Let us particularize Δ to
Δ = j(j+1) - s² , with j ∈ ℝ .
This choice completes the square inside the square roots in (<ref>).
The asymptotic behavior of the linearly independent solutions proportional to χ_1 and χ_2 (from now on, f_1 and f_2) is
f_1(x) ∼ χ_1^- x^-1-j (1 + O(1/x)) + χ_1^+ x^j (1 + O(1/x)) as x → -∞ , f_1(x) ∼ x^(s-j_3)/2 as x → 0 ,
and
f_2(x) ∼ χ_2^- x^-1-j (1 + O(1/x)) + χ_2^+ x^j (1 + O(1/x)) as x → -∞ , f_2(x) ∼ x^(j_3-s)/2 as x → 0 .
Regularity at the contractible cycle x = 0 forces us to pick
f_1 if j_3 ≤ s , f_2 if j_3 > s .
We demand 𝒞^∞-differentiability at the contractible cycle x = 0. In the vicinity of x = 0, f_1 and f_2 go like x^(s-j_3)/2 and x^(j_3-s)/2, respectively. Thus 𝒞^∞-differentiability at x = 0 implies
j_3 - s ∈ ℤ ,
together with the appropriate choice among (<ref>) and (<ref>). Notice that from (<ref>) it follows that j_3 ∈ ℤ implies s ∈ ℤ. However, we are not forced to impose integrality of j_3, j, or s.
For the time being, let us assume s ≥ 0; in due time, we extend the analysis to generic s.
It is important to stress that the square integrability condition is equivalent to imposing
χ_1^+ = 0 if j_3 ≤ s and j > -1/2 , χ_1^- = 0 if j_3 ≤ s and j < -1/2 ,
χ_2^+ = 0 if j_3 > s and j > -1/2 , χ_2^- = 0 if j_3 > s and j < -1/2 .
§.§ The quantization conditions: f_1
Let us find the quantization conditions that guarantee (<ref>). Our starting point in this subsection is
j_3 ≤ s and s ≥ 0 .
For pedagogical reasons, let us assume for the time being that
j, j_3, s ∈ ℤ or ℤ + 1/2 .
We shall see in due time that assumption (<ref>) is not necessary. As for j, let us not assume anything else at this point; along the way, we will comment on the restrictions that arise for it.
Condition (<ref>) selects the solution f_1,
f^(1)_Δ(s),j_3 := χ_1 x^s ( x(x-1) )^-(j_3+s)/2 _2F_1(-j_3-j, 1-j_3+j, 1-j_3+s; x) .
Notice that this solution is invariant under the transformation
j → -(j+1) ,
and in consequence we have to restrict j to be either j > -1/2 or j < -1/2, as preferred. The value j = -1/2 is left invariant by the transformation above; for j = -1/2 both independent solutions have the same asymptotic behavior, x^j and x^-j-1, and they are not square integrable. Square integrability therefore requires
j ≠ -1/2 ,
and we exclude this particular case.
Before writing down the quantization conditions, let us comment on the strategy. It turns out that the quantization conditions are precisely the conditions for which the hypergeometric factor in (<ref>) truncates to a polynomial. The sum of the degree of that polynomial and the leading power of the prefactor in (<ref>) in the limit x → -∞ must equal
-1-j if j > -1/2 , j if j < -1/2 .
The conditions achieving this are
1 - j_3 + j ≤ 0 if j > -1/2 , -j_3 - j ≤ 0 if j < -1/2 .
Together with (<ref>), these conditions are compactly written as
max(|j|, |j+1|) ≤ j_3 ≤ s .
It is straightforward to check that, if we assume (<ref>) together with (<ref>), the desired truncation holds:
_2F_1(-j_3-j, 1-j_3+j, 1-j_3+s; x) = 1 + Σ_n=0^∞ [ (a)_n+1 (b)_n+1 / (c)_n+1 ] x^n+1/(n+1)!
= 1 + { 0 if d^(1) = 0 ; Σ_n=0^d^(1)-1 [ (a)_n+1 (b)_n+1 / (c)_n+1 ] x^n+1/(n+1)! if d^(1) > 0 } ,
where (a)_n+1 := ∏_i=0^n (a+i) is the Pochhammer symbol. The degree of the polynomial is
d^(1) = { j_3 - j - 1 if -1/2 < j < j_3 ; j_3 + j if -j_3 ≤ j < -1/2 } .
At this point it is easy to check that the aforementioned asymptotic behavior of (<ref>) about x = 0 and x = -∞ indeed holds.
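The truncation mechanism is elementary to verify by direct summation of the hypergeometric series. The snippet below uses exact rational arithmetic and an integer example (j = 2, j_3 = 5, s = 7, chosen to satisfy the quantization condition; the choice is otherwise arbitrary) to confirm that the series terminates at degree d^(1) = j_3 - j - 1.

```python
from fractions import Fraction

def hyp2f1_coeffs(a, b, c, nmax=25):
    """Taylor coefficients (a)_n (b)_n / ((c)_n n!) of 2F1; a truncating
    series shows up as an exactly vanishing tail."""
    out, term = [], Fraction(1)
    for n in range(nmax):
        out.append(term)
        term = term*Fraction((a + n)*(b + n), (c + n)*(n + 1))
    return out

# Integer example satisfying max(|j|,|j+1|) <= j_3 <= s.
j, j3, s = 2, 5, 7
coeffs = hyp2f1_coeffs(-j3 - j, 1 - j3 + j, 1 - j3 + s)
degree = max(n for n, cn in enumerate(coeffs) if cn != 0)
print(degree, j3 - j - 1)     # both equal d^(1) = 2
```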
§.§ The quantization conditions: f_2
In this subsection we analyze the case
j_3 > s and s ≥ 0 ,
again assuming (<ref>). Condition (<ref>) selects the solution f_2,
f^(2)_Δ(s),j_3 := χ_2 x^j_3 ( x(x-1) )^-(j_3+s)/2 _2F_1(-s-j, 1-s+j, 1-s+j_3; x) .
As in the previous case, this solution is invariant under the transformation j → -(j+1), and in consequence at some stage we shall be forced to assume (<ref>); for now, let us only assume (<ref>).
The quantization conditions are
1 - s + j ≤ 0 if j > -1/2 , -s - j ≤ 0 if j < -1/2 .
Together with (<ref>), these conditions are compacted into
max(|j|, |j+1|) ≤ s < j_3 .
It is straightforward to check that, if we assume (<ref>) together with (<ref>), the desired truncation holds:
_2F_1(-s-j, 1-s+j, 1-s+j_3; x) = 1 + { 0 if d^(2) = 0 ; Σ_n=0^d^(2)-1 [ (a)_n+1 (b)_n+1 / (c)_n+1 ] x^n+1/(n+1)! if d^(2) > 0 } ,
with polynomial degree
d^(2) = { s - j - 1 if -1/2 ≤ j < s ; s + j if -s ≤ j < -1/2 } ,
and f^(2)_Δ(s),j_3 is square integrable. Notice that there are no square integrable modes for s = 0, as is well known.
§.§ The case of negative flux s<0
So far, we have focused on the case of positive magnetic flux s>0, or being more specific on the case s>1/2. However, there are square integrable
modes when s<0 too –as parity preservation dictates–. To find those, it is convenient to use the identity
_2F_1(a,b,c;x)=(1-x)^c-a-b_2F_1(c-a,c-b,c;x)
upon the previously written solutions f_Δ(s),j_3^(1) and f_Δ
(s),j_3^(2), to obtain
f_Δ(s),j_3^(1) =χ_1 ( x^-j_3( x( x-1) ) ^j_3+s/2_2F_1(s-j,1+s+j,1+s-j_3
;x)) .
f_Δ(s),j_3^(2) =χ_2 ( x^-s( x( x-1) ) ^j_3+s/2_2F_1(j_3-j,1+j_3
+j,1+j_3-s;x)) .
Again, these eigenfunctions are invariant under the change (<ref>), and in consequence
j > -1/2 or j < -1/2 .
The hypergeometric factors written above truncate to polynomials (rendering f^(1)_Δ(s),j_3 and f^(2)_Δ(s),j_3 square integrable) provided the following quantization conditions hold:
max(|j|, |j+1|) ≤ -s ≤ -j_3 for f^(1)_Δ(s),j_3 , max(|j|, |j+1|) ≤ -j_3 ≤ -s for f^(2)_Δ(s),j_3 ,
together, for the time being, with (<ref>). The explicit form of these square integrable modes can be obtained by repeating the analysis done for the case s > 1/2, and they exist if and only if
s < -1/2 .
§.§ Generalized conditions
So far we have been assuming
j_3, j, s ∈ ℤ or ℤ + 1/2 .
However, the aforementioned GNO conditions (see subsection <ref>) can be relaxed. As already stated, regularity at the contractible cycle x = 0 requires the necessary condition
j_3 - s ∈ ℤ .
To have a discrete spectrum there are necessary conditions too:
for s > +1/2 : -s + j ∈ ℤ (from f^(2)) and -j_3 + j ∈ ℤ (from f^(1)) ;
for s < -1/2 : s + j ∈ ℤ (from f^(1)) and j_3 + j ∈ ℤ (from f^(2)) .
Notice that the conditions on the right (resp. left) follow from a linear combination of the regularity condition at the contractible cycle and the corresponding conditions on the left (resp. right). Hence, we can write down the more compact and equivalent statement
j_3 - s ∈ ℤ and j - |s| ∈ ℤ .
In the table below, we write down the explicit form of the spectrum. For simplicity of presentation, but without loss of generality, let us take j > -1/2. In that case, the relevant spectrum is

for s > 1/2 : j_3 = j+1, j+2, ..., j+k, ..., ∞ ; j = s-1, s-2, ..., s-k, ... > -1/2 ;
for s < -1/2 : j_3 = -1-j, -2-j, ..., -k-j, ..., -∞ ; j = -s-1, -s-2, ..., -s-k, ... > -1/2 .

A particular case is j, j_3, s ∈ ℤ + 1/2, for which the table above reduces to

for s > 1/2 : j_3 = j+1, j+2, ..., ∞ ; j = s-1, s-2, ..., 1/2 ;
for s < -1/2 : j_3 = -1-j, -2-j, ..., -∞ ; j = -s-1, -s-2, ..., 1/2 .

The corresponding eigenfunctions can be recovered from the summary presented next and from the results in previous sections.
§.§ Collecting the eigenfunctions
The maximal functional space of square integrable modes is
Ξ(s) := ⊕_-1/2<j<|s| ( Ξ_j^(1)(s) ⊕ Ξ_j^(2)(s) ) ,
where the subspace Ξ_j^(1)(s) is defined as
Ξ_j^(1)(s) := { f^(1)_Δ(s),j_3 }_j_3 , with Δ := j(j+1) - s² ,
together with conditions (<ref>) and
max(|j|, |j+1|) ≤ j_3 ≤ s if s > 1/2 , max(|j|, |j+1|) ≤ -s ≤ -j_3 if s < -1/2 .
The subspace Ξ_j^(2)(s) is defined as
Ξ_j^(2)(s) := { f^(2)_Δ(s),j_3 }_j_3 , with Δ := j(j+1) - s² ,
together with conditions (<ref>) and
max(|j|, |j+1|) ≤ s ≤ j_3 if s > 1/2 , max(|j|, |j+1|) ≤ -j_3 ≤ -s if s < -1/2 .
Of special interest will be the following limiting spaces:
Ξ^(1,2)_s-1 (or -s)(s) := { f^(1,2)_Δ,j_3 } , with j := s-1 (or -s) and Δ = -s .
These spaces are the ones that contribute to the super-determinant that concerns us when s > 1/2, once cohomological cancellations are performed.
It will be useful to keep in mind that, for every j, in the direct sum space Ξ_j(s) := Ξ_j^(1)(s) ⊕ Ξ_j^(2)(s) the angular number j_3 ranges in steps of 1, departing from the lower (resp. upper) bound given below:
max(|j|, |j+1|) < j_3 < ∞ if s > 1/2 , -∞ < j_3 < -max(|j|, |j+1|) if s < -1/2 .
Comment: Notice that upon the square integrable "representations" Ξ_j(s), which are labeled by j running in steps of 1 down from |s|-1 while staying greater than -1/2, namely
-1/2 < j ≤ |s| - 1 ,
the bosonic operator O_B (<ref>) is positive definite provided (ρ(u)+k)² > 0. Indeed, that operator needs to be positive definite in order to have convergence of the functional integral of the exponential of the quadratic expansion of the bosonic localizing term.
§.§ Normalizable modes from asymptotics
One can also find the discrete spectrum by looking at the asymptotic expansion of the general solution (<ref>); the procedure can be repeated for the other solution. We choose to focus on f_1 and on values s > 1/2; from regularity and smoothness at x = 0 it follows that j_3 ≤ s and that the difference s - j_3 ∈ ℤ_+.
As before, we define Δ := j(j+1) - s².
At x → -∞, f_1 behaves as
f_1 ∼ χ_1^- x^-1-j (1 + O(1/x)) + χ_1^+ x^j (1 + O(1/x)) ,
with coefficients
χ_1^- ∼ Γ[1+s-j_3] Γ[-1-2j] / ( Γ[-j_3-j] Γ[s-j] ) , χ_1^+ ∼ Γ[1+s-j_3] Γ[1+2j] / ( Γ[1-j_3+j] Γ[s+j] ) .
Suppose j > -1/2. To cancel the x^j behavior of f_1 while preserving the x^-1-j piece, we need χ_1^+ to vanish; this is achieved when either of the arguments of the Γ's in the denominator is zero or a negative integer. Then
1 - j_3 + j = -n or s + j = -n with n ∈ ℤ_+ ;
the second choice is excluded by our assumptions (s > 1/2, j > -1/2), therefore 1 - j_3 + j = -n. When replacing this value in χ_1^- one has to be careful, since the arguments of the Γ's in the denominator might also be negative integers:
χ_1^- ∼ Γ[1+s-j_3] Γ[1+2n-2j_3] / ( Γ[-2j_3+n+1] Γ[s-j] ) ∼ Γ[1+2n-2j_3] / Γ[-2j_3+n+1] .
We have not replaced the value of j in terms of n and j_3 in the remaining Γ's; under our assumptions they are finite and non-zero and play no role. Looking at the denominator of the last expression, one would naively conclude that there are values of j_3 and n for which the argument is a negative integer (since j_3 > n+1), and therefore that χ_1^- also vanishes. This is not the case: for each of these values, the argument of the Γ in the numerator is also a negative integer, and the divergences cancel. One can recast the ratio above as
Γ[1+2n-2j_3] / Γ[-2j_3+n+1] = Γ[-m+n] / Γ[-m] , where m > n ,
and use
Γ[ε - m] = (-1)^m-1 Γ[-ε] Γ[ε+1] / Γ[m-ε+1] , where ε is very small.
Applying this relation to both numerator and denominator and taking ε → 0,
Γ[-m+n] / Γ[-m] → (-1)^n Γ[m+1] / Γ[m-n+1] ;
therefore, for j > -1/2 we conclude
χ_1^- = O(1) , χ_1^+ = 0 .
Proceeding analogously for j < -1/2, we get
χ_1^- = 0 , χ_1^+ = O(1) .
We then have, for s > 1/2:
j < j_3 ≤ s if j > -1/2 , -s ≤ -j_3 ≤ j if j < -1/2 .
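The Γ-ratio limit used above is easy to check with arbitrary-precision arithmetic. The snippet below evaluates both sides of the ε → 0 identity at a small ε for a few integer pairs with m > n (the value of ε and the sampled pairs are arbitrary choices).

```python
import mpmath as mp

mp.mp.dps = 40
eps = mp.mpf('1e-20')
for m, n in [(5, 2), (7, 3), (9, 1)]:          # arbitrary pairs with m > n
    lhs = mp.gamma(eps - m + n)/mp.gamma(eps - m)
    rhs = (-1)**n*mp.gamma(m + 1)/mp.gamma(m - n + 1)
    print(m, n, lhs, rhs)                      # lhs -> rhs as eps -> 0
```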
§.§ The relation between spin-1 discrete modes and ours
Let ∇_μ be the covariant derivative of diffeomorphisms. The Laplace-Beltrami operator is defined as ∇^μ∇_μ, and acting upon a covariant vector field X of components (X_θ, X_φ) it has the explicit form
∇^μ∇_μ X := ( [ □_s=0 + coth²(θ) , 2 coth(θ)/sinh²(θ) ∂_φ ; -2 coth(θ) ∂_φ , □_s=0 - 2 coth(θ) ∂_θ + 1 ] ) ( [ X_θ ; X_φ ] ) ,
where □_s=0 is the scalar Laplacian. We have added the subscript s = 0 as a reminder that it can be obtained from the magnetic Laplacian previously defined by particularizing to s = 0. Consider the eigenvector
X_0 = ∇Φ , with Φ := ( sinh(θ)/(1+cosh(θ)) )^|j_3| e^i j_3 φ , j_3 = ±1, ±2, ... .
One can check, first, that
□_s=0 Φ = 0 ,
and second, that
∇^μ∇_μ X_0 = X_0 .
In words, X_0 is a rank-one eigen-tensor of the Laplace-Beltrami operator ∇^μ∇_μ with eigenvalue 1.
More importantly for our purpose, we have checked that
( -∂_θ² - coth(θ) ∂_θ + (1/sinh²θ)(|j_3| - coshθ)² ) X_0θ = X_0θ .
Notice that the operator on the left-hand side coincides with our □_s=1 if and only if
j_3 > 0 .
In fact, the equation above for X_0θ implies that X_0θ obeys our defining equation
( □_s + Δ ) X_0θ = 0
if and only if
0 = j (or j = -1) < s = 1 ≤ j_3 ∈ ℤ .
It then follows that
{ X_0θ }_j_3∈ℕ = { (coshθ-1)^j_3/sinh^j_3+1θ e^i j_3 φ = (1/sinhθ) tanh^j_3(θ/2) e^i j_3 φ }_j_3∈ℕ = Ξ^(2)_j=0(s=1) .
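This identification can be verified symbolically; the short sympy sketch below applies the scalar operator -∂_θ² - cothθ ∂_θ + (|j_3| - coshθ)²/sinh²θ to X_0θ for the first few positive j_3 and confirms the unit eigenvalue.

```python
import sympy as sp

th = sp.symbols('theta', positive=True)
for j3 in (1, 2, 3):
    X = (sp.cosh(th) - 1)**j3/sp.sinh(th)**(j3 + 1)
    op = (-sp.diff(X, th, 2) - sp.coth(th)*sp.diff(X, th)
          + (j3 - sp.cosh(th))**2/sp.sinh(th)**2*X)
    print(j3, sp.simplify(op - X))    # 0 for each j3: eigenvalue 1, i.e. Delta = -1
```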
The remaining θ-components of the vector discrete modes, { X_0θ }_j_3∈-ℕ, solve our defining equation (□_s + Δ) X_0θ = 0 if and only if
j_3 ≤ -1 = s < j = 0 ∈ ℤ .
We have thence proven that { X_0θ }_j_3≠0∈ℤ are included in our set of square integrable modes,
{ X_0θ }_j_3∈-ℕ = { (coshθ-1)^-j_3/sinh^-j_3+1θ e^i j_3 φ = (1/sinhθ) tanh^-j_3(θ/2) e^i j_3 φ }_j_3∈-ℕ = Ξ^(1)_j=0(s=-1) ,
and correspond to the two possible unit-flux (spin-one) "helicities" s = ±1.
§ ON 1 LOOP DETERMINANTS
§.§ Alternative regularization
In this appendix we report a second approach to regularizing the determinant of O_B in the subspace Ξ_j=s-1(s). We present the case s > 1/2; the case s < -1/2 is analogous:
det_Ξ_j=s-1(s) O_B = ∏_k∈ℤ ∏_j_3=s^∞ (ρ(u)+k)²
= ∏_k∈ℤ ( (ρ(u)+k)² )^Σ_j_3=s 1 = ∏_k∈ℤ ( (ρ(u)+k)² )^Σ_j_3=1^∞ 1 - Σ_j_3=1^s-1 1
= ∏_k∈ℤ ( (ρ(u)+k)² )^ζ(0)-(s-1) = ∏_k∈ℤ |ρ(u)+k|^-2s+1 ,
where we use the basic definition of the Riemann zeta function, ζ(t) = Σ_n=1^∞ 1/n^t, and the value ζ(0) = -1/2.
§.§ Vector multiplet
In this appendix, we prove that the index of a vector multiplet coincides with the index of a matter multiplet with R-charge q_R = 2.
The quadratic actions coming from the localizing terms (<ref>) and (<ref>), along the complex path (<ref>) and after imposing the gauge fixing condition (<ref>), are
ℒ^B_quadratic := (i δD̃)² + (𝒟_t δA_1)² + (𝒟_t δA_2)² , ℒ^F_quadratic := i δλ̅_2^† 𝒟̂_t δλ_2 ,
where
i δD̃ := i δD + δF_12 + δ𝒟̂_3 σ .
The field δD̃ integrates trivially. The functional spaces over which the vector and ghost degrees of freedom are integrated are
( δA_1, δA_2, δσ, δc̅, δc ) → ( Ξ_(s-1), Ξ_(s-1), Ξ_(s), Ξ_(s), Ξ_(s) ) .
The integration of δA_1 and δA_2 yields
∏_k ∏_j=0^|s-1|-1 ∏_j_3=j+1^∞ |ρ(u)+k| × ∏_k ∏_j=0^|s-1|-1 ∏_j_3=j+1^∞ |ρ(u)+k| .
In obtaining this we have used √((ρ(u)+k)²) = |ρ(u)+k|; on our contour of integration, ρ(u)+k is real.
The functional space over which the gaugino degrees of freedom are integrated is
( δλ_2, δλ̅_2^† ) → ( Ξ_(s-1), Ξ_(s-1) ) .
The integration of δλ_2 and δλ̅_2^†, multiplied by the integration of δc̅^† and δc following from the BRST action (<ref>), gives
∏_k ∏_j=0^|s-1|-1 ∏_j_3=j+1^∞ |ρ(u)+k| × ∏_k ∏_j=0^|s|-1 ∏_j_3=j+1^∞ |ρ(u)+k| .
As already mentioned, we do not integrate over the zero modes δλ_1 and δλ̅_1, in order not to obtain vanishing results. The super-determinant to compute is the ratio
(<ref>)/(<ref>) .
The result (<ref>) is a divergent quantity. To regularize it we use the zeta-regularization procedure, but only after the cohomological cancellations are performed.
If s > 1/2 the only contribution to the quotient (<ref>) comes from the integration of the ghost degrees of freedom (c̅, c) with quantum number j = s-1. The divergent contribution from this functional space is
∏_j_3=s^∞ |ρ(u)+k| ,
at fixed KK mode k. These degrees of freedom live in Ξ_j=s-1(s) and are coupled to s units of flux. As a formal object, (<ref>) equals
√( det_Ξ_j=s-1(s) O_B ) ,
as can be straightforwardly checked by taking the product of equations (<ref>) and (<ref>) and particularizing the result to j = s-1.
We have already computed the zeta-regularized determinant of the operator O_B on Ξ_j=s-1(s). In this space, and for a given S^1 KK mode k, O_B has a unique eigenvalue, (ρ(u)+k)², and the square root of its zeta-regularized determinant is
√( det_Ξ_j=s-1(s) O_B ) = |ρ(u)+k|^-s+1/2 .
The value of the parameter s here is
s = -ρ(𝔪)/2 ,
because the ghosts have q_R = 0.
From (<ref>), after taking the products over roots and KK modes, and after regularization, we obtain
Z_1-loop^vector(ℍ_2, 𝔪) = ∏_ρ^* [ |C_reg sin( ρ(u)/2 )| ]^(ρ(𝔪)+1)/2 = ∏_ρ^* [ |C_reg sin( ρ(u)/2 )| ]^(-ρ(𝔪)+1)/2 ,
where
C_reg = -2i .
The equality (<ref>) proves the statement made below equation (<ref>), namely the independence of the GNO conditions; in the second equality, we have performed the inversion of roots ρ → -ρ. For s < -1/2 the same result (<ref>) is obtained by following analogous steps.
§ CONVENTIONS: 4D 𝒩=2 GAUGED SUPERGRAVITY
In this appendix we summarize our conventions for 4d 𝒩=2 gauged supergravity. The construction of the black holes reported in section <ref> was implemented in a Mathematica notebook, which we are happy to share with interested readers upon request.
The 4d gamma matrices
γ^1=(
[ i 0 0 0; 0 -i 0 0; 0 0 i 0; 0 0 0 -i ]), γ^2=(
[ 0 0 0 i; 0 0 -i 0; 0 -i 0 0; i 0 0 0 ]), γ^3=(
[ 0 -i 0 0; -i 0 0 0; 0 0 0 -i; 0 0 -i 0 ]),
γ^4=(
[ 0 0 0 -i; 0 0 i 0; 0 -i 0 0; i 0 0 0 ]). γ_a b=1/2[γ_a,γ_b], γ_5=i γ^4γ^1γ^2γ^3.
The SU(2)_R R-symmetry invariant tensors are
ϵ_AB=ϵ^A B=(
[ 0 1; -1 0 ]).
The SU(2)_R generators
σ^1_AB = (
[ 1 0; 0 -1 ]), σ^2_AB=(
[ -i 0; 0 -i ]), σ^3_AB=(
[ 0 -1; -1 0 ]),
σ^1 AB = (
[ -1 0; 0 1 ]), σ^2 AB=(
[ -i 0; 0 -i ]), σ^3 AB=(
[ 0 1; 1 0 ]).
The σ^I B_A with I=1,2,3 are the Pauli matrices.
The ordering of coordinates is
(1,2,3,4) ↔ (r, θ, φ, t).
For hyperbolic solutions:
F^Λ_μν=(
[ 0 0 0 0; 0 0 1/2sinh (θ ) p_Λ 0; 0 -1/2sinh (θ ) p_Λ 0 0; 0 0 0 0 ]),
F_μ,ν^- Λ=(
[ 0 0 0 i p_Λ/4 h(r)^2; 0 0 1/4sinh (θ ) p_Λ 0; 0 -1/4sinh (θ ) p_Λ 0 0; -i p_Λ/4 h(r)^2 0 0 0 ]).
§.§ Parametrization in terms of scalars
In this subsection we collect a series of useful parametrizations in terms of the physical scalars z^1, z^2, z^3. The results reported here are consistent with the BPS equations obtained for the choice of the + sign in equation (<ref>).
𝒦=-log(8 √(z^1 z^2 z^3)/(z^1+z^2+z^3+3)^2),
ℱ=(ℱ̅)^*=-2 i √(z^1 z^2 z^3/(z^1+z^2+z^3+3)^4),
ℱ_Λ=(i √(z^1 z^2 z^3)/z^1+z^2+z^3+3,i √(z^2
z^3/z^1)/z^1+z^2+z^3+3,i √(z^1
z^3/z^2)/z^1+z^2+z^3+3,i √(z^1
z^2/z^3)/z^1+z^2+z^3+3),
ℱ_ΛΣ=(ℱ̅_ΛΣ)^*=(
[ 1/2 i √(z^1 z^2 z^3) -1/2 i √(z^2 z^3/z^1) -1/2 i √(z^1 z^3/z^2) -1/2 i √(z^1
z^2/z^3); -1/2 i √(z^2 z^3/z^1) 1/2 i √(z^2 z^3/z^1^3) -i z^3/2 √(z^1 z^2 z^3) -i z^2/2 √(z^1 z^2 z^3); -1/2 i √(z^1 z^3/z^2) -i z^3/2 √(z^1 z^2 z^3) 1/2 i √(z^1 z^3/z^2^3) -i z^1/2 √(z^1 z^2 z^3); -1/2 i √(z^1 z^2/z^3) -i z^2/2 √(z^1 z^2 z^3) -i z^1/2 √(z^1 z^2 z^3) 1/2 i √(z^1 z^2/z^3^3) ]),
f̅^Λ_j=(
[ 1/8 √(2)√(z^1^ 5 z^2 z^3) -3/8 √(2)√(z^1 z^2
z^3) z^2/8 √(2)√(z^1^ 5 z^2 z^3) z^3/8 √(2)√(z^1^5 z^2 z^3); 1/8 √(2)√(z^1 z^2^5 z^3) z^1/8 √(2)√(z^1
z^2^5 z^3) -3/8 √(2)√(z^1 z^2 z^3) z^3/8 √(2)√(z^1 z^2^5 z^3); 1/8 √(2)√(z^1 z^2 z^3^5) z^1/8 √(2)√(z^1 z^2
z^3^5) z^2/8 √(2)√(z^1 z^2 z^3^5) -3/8 √(2)√(z^1 z^2 z^3) ]),
𝒩_ΛΣ= i (
[ -√(z^1 z^2 z^3) 0 0 0; 0 -√(z^2 z^3/z^1^3) 0 0; 0 0 -√(z^1 z^3/z^2^3) 0; 0 0 0 -√(z^1 z^2/z^3^3) ]),
g_z z̅=(
[ 3/161/z^1^2 -1/161/ z^1 z^2 -1/16 1/z^1 z^3; -1/161/ z^1 z^2 3/161/z^2^2 -1/161/ z^2 z^3; -1/161/ z^1 z^3 -1/161/ z^2 z^3 3/161/z^3^2 ]).
| Black holes have an entropy that fits neatly in a thermodynamics framework as originally established in the works of Bekenstein and Hawking in the early 1970's. The microscopic origin, that is, the nature of the degrees of freedom that this entropy counts, has been an outstanding challenge for many decades. Any candidate to a theory of quantum gravity must provide an answer to this fundamental question. String theory, in the works of Strominger and Vafa, has successfully passed this test for a particular type of black holes <cit.>. In the context of the AdS/CFT correspondence, the original work of Strominger and Vafa can be interpreted as an instance of AdS_3/CFT_2. A natural question pertains higher dimensional versions of the AdS/CFT correspondence. Recent work by Benini, Hristov and Zaffaroni addresses the microscopic counting of the entropy of certain black holes from the point of view of AdS_4/CFT_3<cit.>.
In this manuscript we explore the topologically twisted index, originally introduced by Benini and Zaffaroni in the framework of N=2 supersymmetric three-dimensional field theories in S^2× S^1<cit.> (see also <cit.>), for the case of supersymmetric theories in ℍ_2× S^1, where ℍ_2 is the hyperbolic plane. Although we provide the ingredients for arbitrary N=2 supersymmetric theories, we will particularize our results for a specific deformation of ABJM theory. The holographic dual of such deformation is thought to be a hyperbolic black hole. In this work, our main motivation comes from the prospect of understanding the D=3 SCFT representation of the appropriate AdS_4 black hole microstates. With this aim we are driven to explore four dimensional N=2 gauged supergravity and find black hole solutions with ℍ_2 horizon. Hyperbolic black holes have been discussed in the context of AdS/CFT in, for example, <cit.>.
Asymptotically AdS_4 black holes in 𝒩=2 gauged supergravity, which are sourced by magnetic fluxes, have been widely studied <cit.>. Roughly speaking, from the bulk perspective, the presence of fluxes allows one to define the black hole as interpolating from the UV AdS_4 to the near horizon AdS_2× S^2. As a result of our study we are able to identify the role of such fluxes from the dual SCFT perspective. These flavor fluxes, together with a continuum of color fluxes, generate a one-parameter hierarchy of Landau levels on ℍ_2, which determines the value of the ABJM index. What we set out to explore in this paper is whether the leading behavior in the large N limit of the topologically twisted index of a specific deformation of ABJM, evaluated on the Hilbert space composed of the aforementioned Landau levels, coincides with the Bekenstein-Hawking expression for the semiclassical entropy of the black holes in question. We will find that indeed both results coincide.
Another important motivation for our work is the intrinsically interesting field theory problem of localization of supersymmetric field theories in non-compact spaces. This problem naturally appears in the context of localization of supergravity theories, for an understanding of exact black hole entropy counting <cit.>. The same problem appears in holographic approaches to Wilson loops where the world volume of the classical configuration contains an AdS_2 factor. For example, the excitations on a D3 brane which is dual to a Wilson loop in the totally symmetric rank k representation <cit.> were identified to correspond to an N=4 vector multiplet in ℍ_2 × S^2 <cit.>. Localization in non-compact spaces has recently been addressed in <cit.> and <cit.>; our work constitutes an extension to the topologically twisted case.
The manuscript is organized as follows. In section <ref> we discuss the preliminary ingredients we need: for example, our guiding principle on the field theory side, supersymmetric localization <cit.>, as well as the background metric, spin connection, and supersymmetric structure of the actions needed to compute BPS observables in a generic three-dimensional 𝒩=2 Chern-Simons-Matter theory on ℍ_2× S^1. To complete section <ref>, we discuss the boundary conditions to be used in the manuscript. In section <ref> we present the space of square and delta-normalizable functions that will be used to integrate upon, and their respective discrete and continuous spectra. In section <ref> we compute the one-loop super-determinants. In section <ref> we assemble our results to write down the ABJM index on ℍ_2× S^1, and then move on to compute its leading contribution in the large N expansion, by following the procedure pioneered in <cit.>. In section <ref> we find what we believe to be the dual AdS_4 black holes and compare their Bekenstein-Hawking entropy to the leading contribution in the large N expansion of the ABJM index on ℍ_2× S^1. In section <ref> we conclude with a short summary of our results and comment on interesting open and related problems. In a series of appendices we discuss more technical aspects, such as the construction of square integrable modes in appendix <ref>.
http://arxiv.org/abs/1701.07541v1 | 20170126013912 | An ALMA and MagAO Study of the Substellar Companion GQ Lup B | [
"Ya-Lin Wu",
"Patrick D. Sheehan",
"Jared R. Males",
"Laird M. Close",
"Katie M. Morzinski",
"Johanna K. Teske",
"Asher Haug-Baltzell",
"Nirav Merchant",
"Eric Lyons"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.EP"
] |
[∗]This paper includes data gathered with the 6.5 m Magellan Clay Telescope at Las Campanas Observatory, Chile.
^1Steward Observatory, University of Arizona, Tucson, AZ 85721, USA; [email protected]
^2Department of Terrestrial Magnetism, Carnegie Institute of Washington, 5241 Broad Branch Road, NW, Washington, DC 20015, USA
^3CyVerse, University of Arizona, Tucson, AZ 85721, USA
^4Bio5 Institute, University of Arizona, Tucson, AZ 85721, USA
Accepted for publication in ApJ
Multi-wavelength observations provide a complementary view of the formation of young directly-imaged planet-mass companions. We report the ALMA 1.3 mm and Magellan adaptive optics (MagAO) Hα, i', z', and Y_S observations of the GQ Lup system, a classical T Tauri star with a 10–40 M_Jup substellar companion at ∼110 AU projected separation. We estimate the accretion rates for both components from the observed Hα fluxes. In our ∼0″.05 resolution ALMA map, we resolve GQ Lup A's disk in dust continuum, but no signal is found from the companion. The disk is compact, with a radius of ∼22 AU, a dust mass of ∼6 M_⊕, an inclination angle of ∼56°, and a very flat surface density profile indicative of a radial variation in dust grain sizes. No gaps or inner cavity are found in the disk, so there is unlikely to be a massive inner companion to scatter GQ Lup B outward. Thus, GQ Lup B might have formed in situ via disk fragmentation or prestellar core collapse. We also show that GQ Lup A's disk is misaligned with its spin axis, and possibly with GQ Lup B's orbit. Our analysis of the tidal truncation radius of GQ Lup A's disk suggests that GQ Lup B's orbit might have a low eccentricity.
§ INTRODUCTION
In recent years, high-contrast imaging surveys have discovered many wide-orbit substellar companions, which are located at tens to hundreds of AU from their host stars and have masses of a few to tens of M_Jup. Some of these companions have features indicative of accretion disks, including optical and near-infrared emission lines such as Hα, Br-γ, and Pa-β (e.g., ; ; ; , ; ; ; ; ), high dust extinction (e.g., ; , ), and infrared excess from dust emission (e.g., ; ; ; ). It is expected that disks could be common among young substellar companions because the main formation mechanisms—collapse of prestellar cores, fragmentation of circumstellar disks, and core accretion plus subsequent scattering—can all produce disk-bearing companions. Each mechanism, however, can leave distinct imprints on disk properties. For instance, objects formed by disk fragmentation may have higher disk masses and accretion rates compared to those formed by prestellar core collapse <cit.>. On the other hand, scattering can be destructive to disks (e.g., ), and unlike stars, low-mass objects are not efficient at accreting new disks from the natal molecular clouds after scattering <cit.>. Characterizing disks of substellar companions therefore provides a new avenue for studying wide companions' mass assembly history.
In addition, if gas emission lines such as CO can be spatially and spectrally resolved, the dynamical mass of the central object can be determined assuming a Keplerian velocity field. Since masses are usually derived by comparing observables to theoretical predictions, this dynamical approach has great potential to calibrate evolutionary models (e.g., ). Finally, disk masses, sizes, structure, and lifetimes ultimately regulate satellite formation and satellite-disk interaction. As wide companions are well separated from their host stars, they offer a clear view to the relevant physical processes.
Still, imaging disks around very low-mass companions remains challenging (e.g., ). Simulations have shown that they tend to be compact because they are tidally truncated at ∼1/3 of the Hill radius (e.g., ). For objects in nearby star-forming regions (∼100 to 150 pc), their disks are probably not larger than ∼5 to 30 AU in radii, which in turn requires a <0″.1 resolution to resolve them in the (sub)millimeter. With the advent of ALMA, it is now possible to directly image and characterize these disks. <cit.> showed that GSC 6214-210 B has a dust mass of <0.15 M_⊕ in its disk. <cit.> also found that only <0.04 M_⊕ of dust is present in the disk around GQ Lup B. Most notably, <cit.> and <cit.> detected FW Tau C's accretion disk in 1.3 mm dust continuum and ^12CO (2–1) emission, respectively. <cit.> inferred a dust mass of 1–2 M_⊕, sufficient to form satellites analogous to the Galilean moons.
Here we present the ALMA 1.3 mm map of the GQ Lup system, a pre-main-sequence star with a 10–40 M_Jup companion at ∼110 AU projected separation. Both components have been shown to exhibit accretion signatures (e.g., ; ). With a ∼0″.05 resolution, we resolve the primary star's accretion disk. The companion's disk is, however, not detected. We also present 0.6–1 μm imaging of the system using the Magellan adaptive optics system (MagAO; ; ; ), and derive mass accretion rates from Hα intensities.
§ GQ LUP A AND B
GQ Lup A is a classical T Tauri star (CTTS) with a spectral type of K7eV (, ) in the Lupus I cloud (∼150 pc; ; ). Spectropolarimetric observations suggested that it is 2–5 Myr old, with a photospheric temperature of 4300 K and a mass of 1.05 M_⊙ <cit.>. Photometric monitoring indicated that it has an inclination of 27° and a rotation period of 8.45 days <cit.>. It is one of the most studied CTTSs due to many features indicative of active accretion onto the star from a circumstellar disk, such as brightness variations (e.g., ; ; ; ), inverse P-Cygni profiles (e.g., ; ; ), optical veiling (e.g., ; ), a very intense magnetic field (e.g., ; ), and strong excess emission from the near- to far-infrared (e.g., ; ; ; ; ). The accretion disk was first imaged in 1.3 mm dust continuum emission by <cit.> using the Submillimeter Array. The disk is compact, low-mass, and possibly tidally truncated. <cit.> presented far-infrared spectra taken with Herschel PACS and proposed that the 63 μm excess may come from crystalline water ice in the outer disk. Recently, <cit.> presented the ALMA 870 μm and CO (3–2) imaging of the disk and derived a gas-to-dust ratio well below typical ratios in the interstellar medium (ISM).
The substellar companion GQ Lup B was discovered by <cit.>, and its common proper motion was soon confirmed <cit.>. Mass estimates are very model-dependent, but most studies overlap in the range of ∼10 to 40 M_Jup (see Table 1 of for published results). Like the primary star, GQ Lup B is believed to harbor an accretion disk. Lines of evidence include red K'-L' colors compared to young free-floating objects <cit.>, the 1.28 μm Pa-β emission line (; but also see and ), and overluminosity in the HST F606W flux <cit.>. The Hα emission, along with optical continuum excess, was detected by <cit.>. A relatively high accretion rate, Ṁ∼10^-9.3 M_⊙ yr^-1, was derived by modeling the continuum excess as a hot hydrogen slab <cit.>. The dust mass in the disk was shown to be <0.04 M_⊕ <cit.>.
Recently, <cit.> measured the spin and the barycentric radial velocity (RV) of GQ Lup B using high-dispersion spectroscopy. They showed that compared to the giant planet β Pic b, GQ Lup B spins slowly because it is still contracting and gaining angular momentum from its disk. Their new RV estimate, together with the astrometric monitoring in <cit.>, has provided constraints on its orbit. <cit.> also detected CO and H_2O in GQ Lup B's atmosphere.
We list the properties of GQ Lup A and B in Table <ref>.
Properties of GQ Lup

Parameter | GQ Lup A | GQ Lup B | References
Distance (pc) | ∼150 | ∼150 | 1, 2
Separation (″) | ⋯ | 0.721±0.003 | 3
PA (°) | ⋯ | 277.6±0.4 | 3
Age (Myr) | 2–5 | 2–5 | 4
SpT | K7eV | L1±1 | 5, 6
A_V (mag) | 0.4±0.2 | ⋯ | 7
log(L/L_⊙) | 0.0±0.1 | -2.47±0.28 | 4, 6
T_eff (K) | 4300±50 | 2400±100 | 4, 6
Radius | 1.7±0.2 R_⊙ | 3.4±1.1 R_Jup | 4, 8
Mass | 1.05±0.07 M_⊙ | ∼10–40 M_Jup | 4, 6, 9, 10, 11, 12
log Ṁ (M_⊙ yr^-1) | -9 to -7 | -12 to -9 | 3, 4, 13, 14, 15
log g | 3.7±0.2 | 4.0±0.5 | 4, 6
Inclination (°) | 27±5 | ⋯ | 16
v sin(i) (km s^-1) | 5±1 | 5.3^+0.9_-1.0 | 4, 17
Rotation Period (d) | 8.45±0.20 | ⋯ | 16
(1) . (2) . (3) This work. (4) . (5) . (6) . (7) . (8) GQ Lup B's radius is derived from the adopted L and T_eff. (9) . (10) . (11) . (12) . (13) . (14) . (15) . (16) . (17) .
§ METHODOLOGY
§.§ ALMA 1.3 mm
GQ Lup was observed with ALMA in Cycle 3 on UT 2015 November 1 with the Band 6 receiver and 41 12-m antennas, reaching a maximum baseline of 14969.3 meters. Three of the four available basebands were configured for continuum observations to search for dust emission, each with 128 15.625 MHz channels, for a total of 2 GHz continuum bandwidth, and centered at 233.0, 246.0, and 248.0 GHz. The final baseband was centered at 230.538 GHz with 3840 0.122 MHz channels (Hanning smoothed to a resolution of 0.244 MHz, or 0.32 km s^-1) to search for ^12CO (2–1) emission from our targets. Scans on GQ Lup were interleaved with the phase calibrator QSO J1534-3526. The total on-source time was 11.19 minutes.
The data were reduced in the standard way with the standard reduction software package, using the water vapor radiometry data and QSO J1534-3526 for gain calibration, QSO J1427-4206 for bandpass calibration, and QSO J1337-1257 for flux calibration. The calibrated data were Fourier inverted and deconvolved from the beam using the MSMFS-CLEAN algorithm <cit.> with no frequency dependence, with CLEAN components that are point sources and 1, 2, 4, and 8 times the size of the beam, and with natural weighting for the best sensitivity. We produced a continuum map from all four basebands, excluding channels in the -15 to 15 km s^-1 range of the CO baseband to avoid contaminating our map with CO emission. The final 1.3 mm continuum map was produced after four iterations of phase-only self-calibration using a model produced from the CLEAN algorithm. We show the 1.3 mm continuum map in Figure <ref>. The continuum map has an rms of 39 μJy beam^-1, with a synthesized beam of 0″.054 × 0″.031 at a position angle of 68°.7.
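For readers who want to reproduce a reduction along these lines, a schematic CASA call sequence is sketched below. The file names, image geometry, threshold, reference antenna, and solution interval are illustrative placeholders, not the values used for the actual reduction.

```python
# Schematic CASA call sequence; all file names and numerical values below
# are illustrative placeholders, not the parameters used in this work.
tclean(vis='gqlup_band6.ms', imagename='gqlup_cont',
       spw='0,1,2',                   # continuum basebands (CO channels excluded)
       specmode='mfs', deconvolver='multiscale',
       scales=[0, 5, 10, 20, 40],     # point source plus multiples of the beam
       weighting='natural', niter=10000, threshold='0.1mJy',
       imsize=[2048, 2048], cell='0.006arcsec')

# One phase-only self-calibration iteration (repeated four times in the text):
gaincal(vis='gqlup_band6.ms', caltable='selfcal_p1.tb',
        solint='30s', calmode='p', refant='DA49', gaintype='G')
applycal(vis='gqlup_band6.ms', gaintable=['selfcal_p1.tb'])
```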
§.§ Sources in the ALMA Image
In the continuum map there is a clear detection of a source located at 15^h49^m12^s.09, -35°39'05″.43. Accounting for GQ Lup A's proper motion (-15.1±2.8 mas yr^-1, -23.4±2.5 mas yr^-1; ), this is coincident with its reported position of 15^h49^m12^s.10, -35°39'05″.12 (J2000). We measured a flux of 27.5±0.6 mJy for this source.
As we know the position of GQ Lup B, we can search a smaller region of the image with a lower detection threshold for emission. Noise in our ALMA map is highly Gaussian, so we would expect 68% of the peaks to be within 1σ of zero, 95% to be within 2σ, and so on. In a 0″.1 diameter region around the known position of GQ Lup B there are 4 beams, so we would expect ∼0.2 noise peaks above 2σ, but ≪1 noise peak above 3σ, so any peak above 3σ is likely real. However, we did not detect any emission from GQ Lup B.
§.§ GQ Lup B's Disk Mass
To place an upper limit on the disk mass of GQ Lup B, we inserted fake sources into our image in a 0″.1 diameter area around the known position of GQ Lup B, and used our source finding routine to search for them. We varied both the disk size and the flux of the input sources, and for each disk size/flux combination we calculated the percentage of the fake sources that were recovered in the map. For each disk size, we set the upper limit on disk flux to be the minimum flux for which 99.7% (3σ) of the input fake sources were detected.
Flux upper limits can be converted into dust mass upper limits by assuming the dust is optically thin and using the standard prescription <cit.>,
M_disk = D² F_disk / ( κ_ν B_ν(T) ) .
We used the standard assumption of a characteristic dust temperature of T=20K and a 1.3 mm dust opacity of κ_ν = 2.3 cm^2 g^-1. We used a distance of 150 pc to GQ Lup.
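For reference, this prescription can be evaluated with a few lines of Python. The sketch below assumes ν = 230 GHz for the 1.3 mm band (an assumption; only the wavelength is quoted above) and takes the 3σ point-source flux as three times the 39 μJy beam^-1 map rms.

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def planck(nu, T):
    """Planck function B_nu in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2*h*nu**3/c**2/np.expm1(h*nu/(k*T))

def dust_mass(F_mJy, D_pc=150.0, nu=230e9, kappa=2.3, T=20.0):
    """Optically thin dust mass M = D^2 F / (kappa B_nu(T)), in Earth masses."""
    D = D_pc*3.086e18                              # pc -> cm
    F = F_mJy*1e-26                                # mJy -> erg s^-1 cm^-2 Hz^-1
    return D**2*F/(kappa*planck(nu, T))/5.972e27   # g -> M_Earth

print(dust_mass(27.5))       # GQ Lup A's 27.5 mJy -> roughly 18 M_Earth of dust
print(dust_mass(3*0.039))    # a 3-sigma point source at the 39 uJy/beam rms
```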
In Figure <ref> we show a plot of the upper limit on GQ Lup B's disk mass as a function of disk radius. The calculation ran from a point source disk to a disk with a radius of 50 mas. This radius corresponds to a third of the Hill radius for GQ Lup B, which is expected to be the upper limit on the size of the disk. This calculation also assumed that the projected separation of 0″.721 is equal to the semi-major axis of the companion's orbit. The Hill radius, and therefore the expected disk radius upper limit, would be larger if the system has a larger separation.
§.§ GQ Lup A Disk Modeling
As shown in Figure <ref>, the disk around the primary star is strongly detected at 1.3 mm continuum. We fit the visibility data for GQ Lup A with a series of two disk models to constrain the parameters of the disk.
We modeled the disk with a detailed radiative transfer modeling scheme, using the RADMC-3D code <cit.> to produce disk models. Our model included a central protostar with a temperature of 4300 K and luminosity of 1 L_⊙, consistent with the measured values (see Table <ref>). It also included a dusty protoplanetary disk, for which we used two different density prescriptions. Model A used the density profile of a flared power-law disk,
Σ = Σ_0 (R/1 AU)^-γ , ρ = ( Σ(R)/(√(2π) h(R)) ) exp( -(1/2) [z/h(R)]² ) , h(R) = h_0 (R/1 AU)^β .
We allowed the disk mass (M_disk), inner and outer disk radii (R_in and R_disk), the surface density index (γ), scale height index (β), and scale height at 1 AU (h_0), to vary as free parameters. Model B used the surface density profile of a flared accretion disk with a radial power-law distribution of viscosity <cit.>,
Σ = Σ_0 (R/R_c)^-γ exp[ -(R/R_c)^2-γ ] .
The parameters for Model B were the same as those of Model A, with the exception of the critical radius R_c, beyond which the surface density drops exponentially. This replaced the disk radius, beyond which the density drops to zero in Model A. We also supplied dust opacities to the models. We assumed that dust grains are 70% astronomical silicate and 30% graphite <cit.> following a power-law distribution of dust grain sizes, with a minimum size of 5 nm, a maximum size of 3 mm, and a power law exponent of -3.5. This produced a 1.3 mm opacity of 2.25 cm^2 g^-1, in good agreement with the typical value assumed for disk mass calculations.
We used RADMC-3D to calculate the temperature throughout the density distribution. Following this we produced synthetic images of the protostar model with raytracing, and Fourier transformed the images to produce model 1.3 mm visibility profiles. We fit these models directly to the visibility data using a Markov Chain Monte Carlo code <cit.>, which employs an affine-invariant MCMC ensemble sampler with a series of walkers that step through parameter space and converge on the best fit. We positioned 200 walkers throughout a large region of parameter space and allowed them to move towards regions of lower χ². We show the best fit parameters for each model in Table <ref>, and images, residuals, and visibilities for the best-fit models in Figure <ref>.
§.§ MagAO Hα Photometry
GQ Lup was observed on UT 2015 April 16 in the simultaneous differential imaging (SDI) mode <cit.> of the VisAO camera (8″×8″ field of view, 7.9 mas plate scale; ). The rotator was off to facilitate angular differential imaging (ADI) <cit.>. Weather was photometric with low ground-level winds. Seeing varied from 0″.7 to 0″.8. AO parameters and exposure time are listed in Table <ref>. Images in the Hα (656 nm; Δλ = 6.3 nm) and continuum (643 nm; Δλ = 6.1 nm) channels were separated, dark subtracted, registered and centered, and then the median radial profile of each image was subtracted from itself.
After these basic reduction steps, we then proceeded to employ the KLIP algorithm <cit.> for PSF subtraction with ADI. Our implementation of the ADI+KLIP algorithm allows for the selection of many parameters, including the size of the search region, a minimum rotation requirement, and the number of modes <cit.>. The following sequence of steps was taken to find a set of signal-to-noise (S/N) optimizing reduction parameters in an unbiased way.
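The core of the KLIP step can be summarized in a few lines of numpy. The sketch below implements only the Karhunen-Loève projection and subtraction for a single target frame, omitting the search-region, rotation-threshold, and de-rotation bookkeeping described in the text; it assumes the number of modes is well below the number of reference frames.

```python
import numpy as np

def klip_subtract(target, refs, n_modes):
    """Subtract the projection of a target frame onto the leading n_modes
    Karhunen-Loeve modes built from the reference frames.  'target' is a
    1-D vector of search-region pixels; 'refs' is (n_refs, n_pix)."""
    refs = refs - refs.mean(axis=1, keepdims=True)
    target = target - target.mean()
    cov = refs @ refs.T                         # (n_refs, n_refs) covariance
    evals, evecs = np.linalg.eigh(cov)
    top = np.argsort(evals)[::-1][:n_modes]     # keep the strongest modes
    kl = (evecs[:, top].T @ refs)/np.sqrt(evals[top][:, None])
    return target - kl.T @ (kl @ target)        # residual after projection

# schematic usage with hypothetical variable names:
# residual = klip_subtract(frame_i, frames_with_enough_rotation, 8)
```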
First, an initial reduction was carried out to locate the companion. This employed a search annulus from 50 to 150 pixels (0″.4 to 1″.2), a minimum of 2.5 pixels of rotation at the inner edge between the image being reduced and the basis images, and modes ranging from 5 to 30. After PSF subtraction, images were de-rotated and the final image was formed as the 5σ-clipped mean. The final image was then unsharp-masked with a 20-pixel Gaussian kernel, and then smoothed with a 5-pixel Gaussian. Unsharp masking acts as a high-pass filter, removing the stellar halo and the residual low-spatial-frequency noise remaining after PSF subtraction. Given the low-quality correction, which yielded an FWHM much larger than λ/D, the PSF was oversampled by the diffraction-limited plate scale of the VisAO camera. Gaussian smoothing (a low-pass filter) hence smooths pixel-to-pixel noise. The companion was readily seen in the Hα channel with S/N ∼ 5 using 8 modes, but no detection was evident in the continuum channel (upper panels of Figure <ref>).
Next, we injected negative planets <cit.> with the same search region, using 8 modes to form an initial estimate of the planet flux, finding ΔHα ∼ 8.7 mag as the planet brightness which minimized the standard deviation in an aperture with radius of 1 FWHM at the location of the companion.
It is difficult to estimate uncertainties using the negative planet technique (e.g., ), and optimizing the reduction parameters on the companion itself risks biasing results due to speckles. To address these issues, we conducted a grid search over the parameters, testing on an ensemble of positive fake planets injected at the same separation but at a range of position angles. To conduct this search in a reasonable amount of time we employed the “Findr” distributed computing (cloud-based) data reduction system <cit.>. In all trials, a negative planet with the above estimated brightness was injected at the location of the detection to avoid cross-talk. Planet injection was performed on the dark-subtracted/registered/centered images, and then the radial profile was subtracted. The parameters tested are given in Table <ref>, and the KLIP algorithm was applied for each possible combination. The final combined image at each combination was filtered as above. The optimum parameters were determined as those which maximized the mean S/N on the ensemble of positive fake planets injected at 8.7 mag brightness. The flux of the companion was then determined by comparing its photometry when reduced with those optimum parameters to the positive fake planet results. Uncertainties were estimated from the standard deviation of the results from the fake planets.
At the optimum parameters, GQ Lup B was detected in Hα at S/N ∼ 6.3 with a contrast of 8.60 ± 0.16 mag. For comparison, the Hα contrast in <cit.> is ∼7.1 mag (Zhou, Y. 2016, private communication). The contrast at 643 nm is >8.81 mag (3σ upper limit). We also found that GQ Lup A's Hα flux is ∼0.60 mag brighter than its 643 nm continuum due to active accretion.
§.§ Derivation of Accretion Rates from Hα Fluxes
We derived the mass accretion rates for GQ Lup A and B following <cit.> and <cit.>. In brief, we computed the Hα line luminosity L_Hα, used it to derive the accretion luminosity L_acc, and applied the energy equation L_acc∝ GM_⋆Ṁ/R_⋆ to determine Ṁ.
Since we did not observe a standard star at R band, we estimated GQ Lup A's average R brightness to be 11.0 mag by averaging the literature fluxes in <cit.>, <cit.>, <cit.>, and <cit.>, with no attempt to homogenize photometric systems. The uncertainty was taken to be 0.7 mag, as <cit.> found that GQ Lup A's R flux can change by 1.4 mag over its 8.45-day rotation period. To recover the true L_Hα, we also have to correct for dust extinction, which is A_V∼0.4 mag toward the star <cit.> but unknown toward the companion. As a result, our Ṁ estimate for GQ Lup B should be considered a lower limit.
We thus estimated L_Hα∼ 10^-2.3 to 10^-1.8 L_⊙ for A, and ∼10^-5.9 to 10^-5.4 L_⊙ for B. Substituting L_Hα into the empirical relation log(L_acc) = 2.99 + 1.49 × log(L_Hα) in <cit.>, we found L_acc∼ 0.3 to 2.3 L_⊙ for A, and ∼ 1.4×10^-6 to 9.7×10^-6 L_⊙ for B. The resulting accretion rates for GQ Lup A and B are Ṁ∼10^-8 to 10^-7 M_⊙ yr^-1, and ∼10^-12 to 10^-11 M_⊙ yr^-1, respectively. Our measurement for A is similar to previous results, Ṁ∼10^-9 to 10^-7 M_⊙ yr^-1 <cit.>. On the other hand, for the companion we obtained a lower value compared to 10^-9.3 M_⊙ yr^-1 in <cit.>. Possible causes include the unknown dust extinction or an inactive period of accretion during the time of our observations. Our Ṁ estimates and literature values are also shown in Table <ref>.
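A compact sketch of this chain, with illustrative stellar parameters for GQ Lup A (∼1 M_⊙, ∼1.7 R_⊙; assumed here for the example, not taken from this section) and the proportionality in the energy equation treated as an equality:

```python
import numpy as np

G, Lsun, Msun, Rsun = 6.674e-8, 3.828e33, 1.989e33, 6.957e10   # cgs
yr = 3.156e7                                                   # s

def mdot(log_LHa_solar, m_star_msun, r_star_rsun):
    # log L_acc = 2.99 + 1.49 log L_Ha (both in Lsun), then
    # Mdot = L_acc R_* / (G M_*), treating the proportionality as an equality
    L_acc = 10.0 ** (2.99 + 1.49 * log_LHa_solar) * Lsun       # erg s^-1
    mdot_gs = L_acc * r_star_rsun * Rsun / (G * m_star_msun * Msun)
    return mdot_gs * yr / Msun                                 # Msun yr^-1

# GQ Lup A with the illustrative parameters above
print(mdot(-2.3, 1.0, 1.7), mdot(-1.8, 1.0, 1.7))
# -> ~2e-8 and ~1e-7 Msun/yr, consistent with the range quoted in the text
```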
§.§ MagAO i', z', Y_S Photometry
We observed GQ Lup with the broad-band filters i' (0.77 μm; Δλ = 0.13 μm), z' (0.91 μm; Δλ = 0.12 μm), and Y_S (0.98 μm; Δλ = 0.09 μm) on UT 2014, April 5. Weather was partially cloudy with ∼1.3'' seeing. Data reduction and photometry were carried out with IRAF (, ; IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation) and MATLAB. Raw data were dark-subtracted, registered, de-rotated, and median-combined. We then subtracted the median radial profile of the combined image from itself. For i', we further filtered out the residuals using a 15-pixel Gaussian kernel, and smoothed the resulting high-pass filtered image with a 6-pixel Gaussian to better bring out the companion. Our i', z', and Y_S detections of GQ Lup B are also shown in Figure <ref>.
We estimated GQ Lup B's fluxes by injecting fake planets. For z' and Y_S, the primary star in the unsaturated data was used to create fake planets. For i', since we did not acquire any unsaturated data, we estimated the peak height of the primary star in the saturated data from its beam splitter optical ghost <cit.>. Then, we flagged the saturated core, assigned the peak value to the center, and fit a two-component Gaussian to create a PSF template. To compensate for flux loss from data reduction, including filtering, we injected fake planets at the same separation from the star but with different position angles, and repeated the data reduction procedures. We found that throughputs were >98% for z' and Y_S, but only ∼32% for i' as it involved more aggressive spatial filtering and so higher losses of low spatial frequency flux. Compensating for these flux losses, we derived contrasts of 8.13±0.23 mag, 6.63±0.05 mag, and 6.45±0.05 mag for i', z', and Y_S, respectively.
To perform absolute photometry, we compared GQ Lup A to the standard star GJ 440. We used an 80-pixel aperture to include most of the flux, and adopted an 8% photometric uncertainty recommended for absolute photometry with the VisAO camera by <cit.>. Finally, we obtained i'=10.76±0.08 mag, z'=9.77±0.08 mag, and Y_S=9.43±0.08 mag for GQ Lup A, and i'=18.89±0.24 mag, z'=16.40±0.10 mag, and Y_S=15.88±0.10 mag for GQ Lup B. For comparison, our z' flux is similar to the F850LP flux of 16.2 mag in <cit.>, but our i' flux is ∼1 mag fainter than their F775W flux of 17.8 mag. It is possible that GQ Lup B was in a quiescent accretion state during our observations. Table <ref> summarizes our photometric measurements.
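As a consistency check, the companion's apparent magnitudes follow directly from the primary's photometry plus the measured contrasts, with uncertainties combined in quadrature:

```python
import numpy as np

for band, m_A, s_A, dm, s_dm in [("i'", 10.76, 0.08, 8.13, 0.23),
                                 ("z'", 9.77, 0.08, 6.63, 0.05),
                                 ("Y_S", 9.43, 0.08, 6.45, 0.05)]:
    # companion magnitude = primary magnitude + contrast; errors in quadrature
    print(f"{band}: {m_A + dm:.2f} ± {np.hypot(s_A, s_dm):.2f} mag")
# -> 18.89 ± 0.24, 16.40 ± 0.10, 15.88 ± 0.10, the GQ Lup B values above
```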
§.§ MagAO Astrometry of GQ Lup B
Following the calibrations in <cit.> and <cit.>, in our MagAO data we found that GQ Lup B is 0.721'' ± 0.003'' from its host star, at a position angle of 277.6° ± 0.4°. Figure <ref> shows the astrometric monitoring over the last ∼20 years, and GQ Lup B's orbital motion is evident. Our results are consistent with the trends derived by <cit.>, Δρ∼-1.4 mas yr^-1 and ΔPA ∼+0.16° yr^-1. Table <ref> also lists our astrometric measurements.
GQ Lup A Disk Properties

Parameter            Model A          Model B
Dust Mass (M_⊕)      5.9 ± 1.0        5.5 ± 0.8
Total Mass† (M_⊕)    77.2 ± 8.4       76.8 ± 8.3
Inner Radius (AU)    1.5 ± 0.8        1.7 ± 1.1
Radius (AU)          23.8 ± 1.6       19.5 ± 1.4
h_0                  0.084 ± 0.065    0.075 ± 0.039
γ                    0.10 ± 0.22      -0.21 ± 0.20
β                    1.26 ± 0.19      1.45 ± 0.25
Inclination (°)      56.2 ± 4.8       55.3 ± 6.0
PA (°)               348.8 ± 4.8      348.6 ± 5.0

†Total mass is calculated by adding our dust mass to the gas mass of 71.3 ± 8.3 M_⊕ measured by <cit.>.
MagAO Observations

Filter        AO speed   AO modes   t_sat       t_unsat
643 nm        300 Hz     120        ⋯           25 s × 143
656 nm (Hα)   300 Hz     120        ⋯           25 s × 143
i'            625 Hz     120        10 s × 20   ⋯
z'            625 Hz     120        5 s × 13    0.283 s × 54
Y_S           625 Hz     120        ⋯           15 s × 14
Parameters of Hα KLIP Reduction

Parameter                          Grid                        Optimum   Notes
Minimum radius of region (pixel)   40–80, steps of 10          50
Maximum radius of region (pixel)   110–140, steps of 10        110
Minimum rotation (pixel)           0.0, 0.25, 0.5, 1.0, 2.0    1.0
Number of modes                    2–20, steps of 2            6
Fake planet PA (°)                 7.15–357.15, steps of 10    ⋯         33 total, skipped ±10° from planet
Fake planet contrast               1.65, 3.30, 4.95 × 10^-4    ⋯         ±50% from negative planet result
MagAO Photometry

Contrast/Filter   GQ Lup A       GQ Lup B
Δ643 nm†          ⋯              >8.81
ΔHα               ⋯              8.60 ± 0.16
i'                10.76 ± 0.08   18.89 ± 0.24
z'                9.77 ± 0.08    16.40 ± 0.10
Y_S               9.43 ± 0.08    15.88 ± 0.10

†3σ upper limit.
§ RESULTS
§.§ GQ Lup A's Disk
We list the best-fit parameters for our modeling of the GQ Lup A disk in Table <ref>, and show our best-fit models in Figure <ref>. Both models match the data well, and produce similar best-fit parameter values. We find that GQ Lup A's disk has a radius of ∼22 AU, an inclination angle of ∼56°, and a position angle of ∼349°. The mass in dust in the disk is ∼6 M_⊕, lower than ∼9.5 M_⊕ found by <cit.> and ∼15 M_⊕ found by <cit.>. This difference in dust mass likely comes from the adopted temperature profiles in the disk. In this study, we use radiative transfer to calculate the local temperature. Alternatively, if we assume a constant temperature of 20 K throughout the entire disk and use Equation <ref> to calculate the dust mass from the measured flux of 27.5 mJy, we obtain a higher value of ∼18 M_⊕.
We find that the disk size we measure (R∼22 AU) is smaller than the size measured by <cit.> (R∼30 AU from 870 μm continuum, and R∼46.5 AU from CO (3–2) emission). This may be because dust grain growth is expected to occur preferentially in the inner disk, where densities are higher, and radial drift will tend to concentrate large particles at smaller radii. Our 1.3 mm map is more sensitive to large dust grains than the 870 μm map in <cit.>, so we may expect to measure a smaller radius at longer wavelengths.
Our map does not show any structures in the GQ Lup A disk, such as holes or gaps, which can be signposts of additional companions. The best-fit disk models also have very small inner disk radii of ∼1.6 AU. This is consistent with no inner clearing, although the resolution of our observations does not allow us to constrain the inner radius well below ∼4.5 AU. Since there are unlikely to be any gaps or companions hidden within the disk, GQ Lup B was probably not scattered to its current orbit, but instead formed in situ like binary stars, as we discuss further in Section <ref>.
Our models show that the surface density profile of the disk can be very flat. The flat profile is similar to some brown dwarf disks in ρ Ophiuchus <cit.>, but in contrast to brown dwarf disks in Taurus <cit.>, which have rather steep profiles and smooth edges. As <cit.> argued, if the dust has a radial variation in its size distribution, assuming uniform dust properties across the disk can result in a shallower profile. This has been confirmed by <cit.>, who showed that GQ Lup A's disk does have a radial gradient in both dust composition and grain size, with larger grains in the inner disk and submicron grains in the outer disk. The smaller disk size measured at 1.3 mm compared to 870 μm provides further evidence that larger grains are present in the inner disk.
§.§ GQ Lup B's Disk Mass
The disk around GQ Lup B is undetected in our map. As shown in Figure <ref>, our data put an upper limit on the disk mass for GQ Lup B of <0.25–1 M_⊕ from a point source to 1/3 of the Hill radius, although this is not as strong as the upper limit of <0.04 M_⊕ by <cit.>. Unlike another wide-orbit substellar companion, FW Tau C, which has 1 to 2 M_⊕ of dust in its disk <cit.>, GQ Lup B's disk appears to have little dust, similar to the dust-depleted disk around GSC 6214-210 B (<0.15 M_⊕ of dust; ). This may arise from different evolutionary stages: FW Tau C is younger (∼2 Myr) and has a more massive accretion disk, while GQ Lup B (2–5 Myr) and GSC 6214-210 B (5–10 Myr) are more evolved and their disks are rather depleted.
§.§ Orbital Constraints from Disk Size
GQ Lup B's orbital motion was first detected by <cit.>, who showed that the best-fit orbits were eccentric. With new RV measurements, <cit.> showed that the semi-major axis a, eccentricity e, and inclination i of GQ Lup B's orbit fall into three groups:
1. a∼ 100 AU, e∼ 0.15, i∼57°,
2. a< 185 AU, 0.2<e<0.75, 28°<i<63°, and
3. a> 300 AU, e> 0.8, 52°<i<63°.
As was argued by <cit.>, Group 3 is a priori unlikely as its high eccentricity and long orbital period would mean that we are observing GQ Lup B close to periastron. Since the circumstellar disk may be disrupted if the companion goes too close to the star, we can calculate the truncation radius for each of the orbit groups to determine whether they are consistent with the size of the GQ Lup A disk. We adopt a disk radius of 46.5 AU determined by <cit.> from CO (3–2) emission, as it more likely represents the full extent of the disk compared with our 1.3 mm size measurement.
Semi-analytic approximations for the tidal truncation radius of the primary star as a function of orbital parameters find that the disk should be truncated at a radius of
R_t ≈ 0.36 [(1-e)^{1.2} ϕ^{2/3} μ^{0.07} / (0.6 ϕ^{2/3} + ln(1 + ϕ^{1/3}))] a,
where ϕ is the ratio of the mass of the primary to the mass of the secondary, and μ≡ 1/(1+ϕ) <cit.>. This equation is relatively insensitive to whether the mass of GQ Lup B is closer to 10 M_Jup or 40 M_Jup (because it is at most 4% of the primary's mass), but is very sensitive to eccentricity and semi-major axis. We hence arbitrarily use 25 M_Jup for this calculation.
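A short numerical evaluation of this expression, assuming a primary mass of ∼1 M_⊙ (a value not quoted in this section) together with the 25 M_Jup secondary adopted above, reproduces the comparison against the 46.5 AU CO disk radius:

```python
import numpy as np

def r_trunc(a, e, m_primary, m_secondary):
    # Tidal truncation radius of the primary's disk (same units as a),
    # using the semi-analytic approximation quoted above
    phi = m_primary / m_secondary
    mu = 1.0 / (1.0 + phi)
    num = (1.0 - e) ** 1.2 * phi ** (2.0 / 3.0) * mu ** 0.07
    den = 0.6 * phi ** (2.0 / 3.0) + np.log(1.0 + phi ** (1.0 / 3.0))
    return 0.36 * num / den * a

m_A, m_B = 1.0, 25.0 / 1047.6          # Msun; 25 Mjup as adopted in the text
for label, a, e in [("Group 1", 100.0, 0.15),
                    ("Group 2 (wide, low-e)", 180.0, 0.25),
                    ("Group 3", 300.0, 0.80)]:
    print(f"{label}: R_t = {r_trunc(a, e, m_A, m_B):.1f} AU")
# -> ~31, ~49, and ~17 AU; only the wide, low-eccentricity corner of
# Group 2 yields R_t >= the 46.5 AU CO disk radius.
```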
We show the truncation radius as a function of semi-major axis and eccentricity in Figure <ref>. We find that the truncation radii for Group 1 and Group 3 are all less than 40 AU, not compatible with the measured disk size. Most of the parameter space for Group 2 is also excluded; the remaining solutions are those with a∼ 160–180 AU and e∼ 0.2–0.3. As a result, the size of the GQ Lup A disk suggests that GQ Lup B's orbit probably has a low eccentricity. However, it is important to note that this equation may break down if the primary's disk and the secondary's orbit are substantially inclined (as may be the case for GQ Lup; see Section <ref>). In this scenario, it is likely that larger disk radii than we conclude here would be acceptable. Indeed, the strong constraints we place here based on this analysis may instead be an indication of a high degree of inclination for the orbit.
§.§ Geometry of the System
Since the inclination of GQ Lup A's disk, ∼56°, is consistent with the orbital solutions in <cit.> and <cit.>, here we investigate whether the star's disk and the companion's orbit can possibly lie in the same plane.
In Figure <ref>, we plot the three best-fit orbits in <cit.> (see their Table 4 for orbital parameters) as well as the GQ Lup A disk at two viewing angles, one along the line of sight and the other with the disk viewed edge-on. We also show one representative orbit from <cit.>, for which the semi-major axis and eccentricity are constrained by our tidal truncation analysis in Section <ref>, and other parameters, including the longitude of the ascending node and the argument of periastron, are extracted from unpublished astrometric fitting (Ginski, C. 2016, private communication). None of these orbits is likely to lie in the plane of the disk. Although A's disk and B's orbit may share similar inclinations, they probably have very different orientations in space.
In Figure <ref> we plot a possible geometry of the GQ Lup system. The circumstellar disk is not aligned with the star's spin axis either, because they have different inclinations: ∼56° for the disk and ∼27° for the spin axis <cit.>. This is not unusual among T Tauri stars. <cit.> showed that although the stellar rotation angle is correlated with the disk inclination, the two are not identical, with a mean difference of ∼19° in T Tauri systems. We note that this misalignment might be induced by a torque from GQ Lup B (e.g., ).
We caution that the results presented here and in Section <ref> are preliminary. As <cit.> and <cit.> stressed, many orbital solutions share similar χ^2 in their orbital fitting. Future astrometric monitoring is essential to lift degeneracies and ascertain GQ Lup B's orbit.
§.§ SED of GQ Lup B
Figure <ref> compares the spectral energy distribution (SED) of GQ Lup B to the 2400 K, log g = 4.0 BT-Settl model <cit.>. The temperature and surface gravity are chosen to be consistent with previous estimates (e.g., ; ; ). The model is normalized at K band to minimize the effects of dust emission and extinction. We include the 0.3 to 3.7 μm literature photometry, the 3σ upper limit at 0.643 μm, and the fluxes of Hα, i', z', and Y_S in the figure.
Overall, the 2400 K model gives a reasonable fit longward of 0.7 μm. At shorter wavelengths, the observed fluxes are much higher than the photospheric level due to excess continuum emission from accretion <cit.>. The 0.656 μm Hα emission is especially prominent. As recently simulated by <cit.>, the Hα emission likely comes from the extended shock front on the surface of the circumsubstellar disk. Our measured Hα flux, albeit with a large uncertainty, is about 10 times fainter than that of <cit.>. Our 0.643 μm non-detection also indicates a much weaker accretion continuum. Therefore, accretion onto GQ Lup B seems to be very variable; we probably observed a more quiescent phase than did Zhou et al.
§.§ Accretion Rate and Disk Lifetime
Disk lifetimes can be roughly estimated assuming a constant accretion rate, but such estimates should be taken with caution as accretion can vary significantly. Our measured Ṁ for GQ Lup A, 10^-8 to 10^-7 M_⊙ yr^-1, is typical of T Tauri stars (e.g., ). With M_disk∼ 2×10^-4 M_⊙, GQ Lup A's disk may be depleted in a few thousand years. For the companion, with a disk mass upper limit of ∼10^-5 M_⊙ found by <cit.>, and an accretion rate of Ṁ∼ 10^-12 to 5×10^-10 M_⊙ yr^-1 derived by <cit.> and our Hα photometry, GQ Lup B's disk can perhaps continue for tens of thousands of years. The disk around GQ Lup B might therefore outlive the disk around the host star.
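These timescales are simply t ∼ M_disk/Ṁ:

```python
# Order-of-magnitude depletion timescales, t ~ M_disk / Mdot
print(2e-4 / 1e-7)    # GQ Lup A: ~2e3 yr at the upper end of its accretion rate
print(1e-5 / 5e-10)   # GQ Lup B: ~2e4 yr
```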
§ DISCUSSION
§.§ Formation of GQ Lup B: Scattering
<cit.> argued that scattering might be the most favorable scenario for GQ Lup B, in which it originally formed close to the star but was scattered outward by a more massive body. Searches for a close-in massive companion to the star have not yielded positive results. Deep AO imaging in <cit.> excluded any object as bright as GQ Lup B outside ∼18 AU. Similarly, RV monitoring rejected objects more massive than 0.1 M_⊙ within 2.6 AU <cit.>, but <cit.> speculated that a massive brown dwarf might reside only a few AU from the star in order to explain their observed 0.4 km s^-1 RV change in two years. However, if such an inner object exists, the morphology of GQ Lup A's disk can be used to infer its presence, since it may sculpt a gap or hole. The inner edge of a circumbinary disk can be truncated at approximately 2 to 3 times the binary separation <cit.>, corresponding to a ∼10 AU hole for the brown dwarf companion posited by <cit.>. Nevertheless, neither the 0.3 to 1.3 mm disk SED <cit.> nor our ALMA 1.3 mm disk-resolved map finds evidence of a gap or central clearing in the disk. Thus, it is quite unlikely that another massive body lies very close to the primary star to serve as the scatterer.
In addition, scattering often induces a very eccentric orbit (e.g., ), but for GQ Lup B, orbital solutions with low eccentricities are more probable, as shown in Section <ref>. Therefore, all lines of observational evidence suggest that in situ formation via disk fragmentation or prestellar core collapse is the more likely formation pathway for GQ Lup B. This is in line with the null result of the dedicated AO search for scatterers in other systems <cit.>; thus, core accretion plus subsequent scattering is perhaps not responsible for most substellar companions on wide orbits.
§.§ Formation of GQ Lup B: in situ
Observationally distinguishing prestellar core collapse from disk fragmentation requires high-resolution imaging toward the earliest phases of star formation. Recent studies suggest that prestellar core collapse can be effective in forming very wide (>1000 AU) binary or multiple star systems (e.g., ), while disk fragmentation may form more compact systems with separations of tens to hundreds of AU between the components (e.g., ). At ∼110 AU from the host star, GQ Lup B seems to fit nicely into the disk fragmentation scenario.
For fragmentation to occur, GQ Lup A's disk must have been very massive and presumably more extended than 50 to 100 AU, since circumstellar disks are expected to become gravitationally unstable beyond that radius (e.g., ). It is sometimes suggested that the disk plane and the companion's orbital plane should be coplanar because the companion formed in the disk. However, dynamical interactions with other fragments in the parent disk can gradually alter the initial configuration, thereby creating inclined systems <cit.>. As a result, even though in Section <ref> we have shown that GQ Lup B's orbital plane is probably not coplanar with GQ Lup A's disk, disk fragmentation remains a possibility.
Recently, <cit.> proposed that the properties of circumsubstellar disks provide an observational diagnostic to distinguish disk fragmentation from prestellar core collapse. They predicted higher disk masses and accretion rates for objects formed via disk fragmentation, because such objects have had more time to accrete and thus retain more massive disks. The discrepancy between the two scenarios is more pronounced for very low-mass companions, especially <10 M_Jup. The authors also predicted that, under the disk fragmentation framework, low-viscosity circumsubstellar disks tend to have higher masses and accretion rates than high-viscosity ones, because higher viscosity facilitates angular momentum transport and disk dissipation.
In Figure <ref> we overplot the substellar companions FW Tau C, GSC 6214-210 B, and GQ Lup B on Figure 7 of <cit.>. Objects formed by disk fragmentation have a roughly constant disk mass, in drastic contrast to the monotonic correlation M_disk∝ M_star for prestellar core collapse. While FW Tau C has a rather massive disk, considering the large dispersion in the M_disk∝ M_star correlation (±0.7 dex; ), it is still consistent with both formation scenarios. The very low-mass disks around GQ Lup B and GSC 6214-210 B are more in line with formation via prestellar core collapse. Alternatively, if they formed via disk fragmentation, their disks might have high viscosities that allowed them to dissipate quickly.
§.§ Formation of Satellites
Disks around wide-orbit substellar companions provide clues to the formation and population of exomoons, which are challenging to detect with current facilities. The simulations of <cit.> found that Jupiter-mass satellites are unlikely to form around brown dwarfs, because no rocky core grows fast enough to accrete a gaseous envelope before the disk dissipates, while Earth-like satellites can be common if the disk mass is a few M_Jup. Nonetheless, even Earth-mass satellites are rare if the disk mass is only a fraction of M_Jup. Thus, it appears that GQ Lup B has no gaseous moons, while a few Earth-like moons may have formed at early times when the disk was more massive. As <cit.> and our ALMA observations find that GQ Lup B's disk is deficient in dust, forming Earth-like satellites is no longer possible. Only tiny rocky moons analogous to the Moon (∼0.012 M_⊕) may still form out of the remaining material (<0.04 M_⊕). Satellite formation around GQ Lup B is probably in its late stage and may have already ceased.
§ CONCLUSIONS
We observe the 2–5 Myr GQ Lup system with ALMA at 1.3 mm and MagAO at 0.6 to 1 μm. With an unprecedented 0.054''×0.031'' resolution at 1.3 mm, we resolve GQ Lup A's accretion disk. Our observations, however, are not deep enough to detect GQ Lup B's disk. The main results are as follows.
* GQ Lup A's disk has a radius of ∼22 AU, a dust mass of ∼6 M_⊕, an inclination angle of ∼56°, a position angle of ∼349°, and a very flat surface density profile. The flat profile is indicative of radial variation of dust sizes, with larger grains growing in the inner disk. This is also supported by the larger disk size measured at a shorter wavelength of 870 μm <cit.>.
* GQ Lup A's disk is not aligned with the star's spin axis (i∼56° versus ∼27°), and it is unlikely to be coplanar with GQ Lup B's orbit. We use the size of the GQ Lup A disk to demonstrate that GQ Lup B's orbit might have a low eccentricity e∼ 0.2–0.3 with semi-major axis a∼ 160–180 AU. Highly eccentric orbits have tidal truncation radii incompatible with the measured disk size.
* Both components show Hα emission, indicating active accretion. We derive accretion rates of Ṁ∼10^-8 to 10^-7 M_⊙ yr^-1 for GQ Lup A, and Ṁ∼10^-12 to 10^-11 M_⊙ yr^-1 for GQ Lup B. This implies that GQ Lup A's disk may be depleted in a few thousand years, while GQ Lup B's disk may persist longer.
* Both our disk modeling and the more sensitive observations by <cit.> suggest that GQ Lup B's disk is rather dust-depleted, similar to GSC 6214-210 B (<0.15 M_⊕ of dust; ), but in contrast to the dust-abundant disk around FW Tau C (1–2 M_⊕ of dust; ). This may be due to age differences, as GQ Lup and GSC 6214-210 are old compared with FW Tau.
* Since there are no gaps or an inner cavity in GQ Lup A's disk, the chance of there being another inner companion more massive than GQ Lup B is low. Therefore, scattering is unlikely to be responsible for GQ Lup B's formation; in situ formation via disk fragmentation or prestellar core collapse is favored. The very low-mass disk of GQ Lup B is more consistent with prestellar core collapse based on the simulations in <cit.>.
* Based on the results of <cit.>, GQ Lup B probably has no gaseous satellites. With very little dust remaining in the disk, only tiny rocky moons might form around GQ Lup B.
We thank the referee for helpful comments. We are grateful to Christian Ginski and Henriette Schwarz for providing their new astrometric fitting, and Yifan Zhou for the Hα contrast in the HST data. We thank Kaitlin Kratter, Min-Kai Lin, Yu-Cian Hong, and Jing-Hua Lin for discussions. We are also grateful to the MagAO development team and the Magellan Observatory staff for their support. This material is based upon work supported by the National Science Foundation under Grant No. 1506818 (PI Males) and NSF AAG Grant No. 1615408 (PI Close). Y.-L.W. and L.M.C. are supported by the NASA Origins of Solar Systems award and the TRIF fellowship. J.R.M. and K.M.M. were supported under contract with the California Institute of Technology (Caltech) funded by NASA through the Sagan Fellowship Program. K.M.M's and L.M.C's work is supported by the NASA Exoplanets Research Program (XRP) by cooperative agreement NNX16AD44G. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2015.1.00773.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Results from distributed computing were obtained using the Chameleon testbed supported by the National Science Foundation.
[Allard et al.(2011)]A11Allard, F., Homeier, D., & Freytag, B. 2011, in ASP Conf. Ser. 448, XVI Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, ed. C. Johns-Krull, M. K. Browning, & A. A. West (San Francisco, CA: ASP), 91
[Andrews et al.(2013)]A13Andrews, S. M., Rosenfeld, K. A., Kraus, A. L., & Wilner, D. J. 2013, , 771, 129
[Appenzeller & Bertout(2013)]AB13Appenzeller, I., & Bertout, C. 2013, , 558, A83
[Appenzeller et al.(1978)]A78Appenzeller, I., Mundt, R., & Wolf, B. 1978, , 63, 289
[Artymowicz & Lubow(1994)]AL94Artymowicz, P., & Lubow, S. H. 1994, , 421, 651
[Ayliffe & Bate(2009)]AB09Ayliffe, B. A., & Bate, M. R. 2009, , 397, 657
[Bailey et al.(2013)]Ba13Bailey, V., Hinz, P. M., Currie, T., et al. 2013, , 767, 31
[Batalha et al.(2001)]B01Batalha, C., Lopes, D. F., & Batalha, N. M. 2001, , 548, 377
[Bate(2012)]B12Bate, M. R. 2012, , 419, 3115
[Batygin(2012)]Batygin12Batygin, K. 2012, , 491, 418
[Beckwith et al.(1990)]B90Beckwith, S. V. W., Sargent, A. I., Chini, R. S., & Guesten, R. 1990, , 99, 924
[Bertout et al.(1982)]B82Bertout, C., Carrasco, L., Mundt, R., & Wolf, B. 1982, A&AS, 47, 419
[Bonnefoy et al.(2011)]Bonnefoy11Bonnefoy, M., Lagrange, A.-M., Boccaletti, A., et al. 2011, , 528, L15
[Bowler et al.(2015)]B15Bowler, B. P., Andrews, S. M., Kraus, A. L., et al. 2015, , 805, L17
[Bowler et al.(2014)]B14Bowler, B. P., Liu, M. C., Kraus, A. L., & Mann, A. W. 2014, , 784, 65
[Bowler et al.(2011)]Bowler11Bowler, B. P., Liu, M. C., Kraus, A. L., Mann, A. W., & Ireland, M. J. 2011, , 743, 148
[Broeg et al.(2007)]B07Broeg, C., Schmidt, T. O. B., Guenther, E., et al. 2007, , 468, 1039
[Bryan et al.(2016)]Br16Bryan, M. L., Bowler, B. P., Knutson, H. A., et al. 2016, , 827, 100
[Caceres et al.(2015)]C15Caceres, C., Hardy, A., Schreiber, M. R., et al. 2015, , 806, L22
[Clarke(2009)]C09Clarke, C. J. 2009, , 396, 1066
[Close et al.(2014)]C14Close, L. M., Follette, K. B., Males, J. R., et al. 2014, , 781, L30
[Close et al.(2012)]C12Close, L. M., Males, J. R., Kopon, D., et al. 2012, Proc. SPIE, 8447, 84470X
[Close et al.(2013)]C13Close, L. M., Males, J. R., Morzinski, K., et al. 2013, , 774, 94
[Covino et al.(1992)]C92Covino, E., Terranegra, L., Franchini, M., Chavarría-K., C., & Stalio, R. 1992, A&AS, 94, 273
[Crawford(2000)]C00Crawford, I. A. 2000, , 317, 996
[Currie et al.(2015)]Cu15Currie, T., Cloutier, R., Brittain, S., et al. 2015, , 814, L27
[Czekala et al.(2015)]Cz15Czekala, I., Andrews, S. M., Jensen, E. L. N., et al. 2015, , 806, 154
[Czekala et al.(2016)]Cz16Czekala, I., Andrews, S. M., Torres, G., et al. 2016, , 818, 156
[Dai et al.(2010)]D10Dai, Y., Wilner, D. J., Andrews, S. M., & Ohashi, N. 2010, , 139, 626
[Debes & Sigurdsson(2006)]DS06Debes, J. H., & Sigurdsson, S. 2006, , 451, 351
[Donati et al.(2012)]D12Donati, J.-F., Gregory, S. G., Alencar, S. H. P., et al. 2012, , 425, 2948
[Dullemond(2012)]Dullemond12Dullemond, C. P. 2012, RADMC-3D: A multi-purpose radiative transfer tool, Astrophysics Source Code Library, ascl:1202.015
[Eggleton(1983)]E83Eggleton P. P., 1983, , 268, 368
[Foreman-Mackey et al.(2013)]FM13Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, , 125, 306
[Franco(2002)]F02Franco, G. A. P. 2002, , 331, 474
[Friedrich & Schöffel(1971)]FS71Friedrich, D., & Schöffel, E. 1971, Inf. Bull. Var. Stars No. 558
[Ginski et al.(2014)]G14Ginski, C., Schmidt, T. O. B., Mugrauer, M., et al. 2014, , 444, 2280
[Haug-Baltzell et al.(2016)]HB16Haug-Baltzell, A., Males, J. R., Morzinski, K. M., et al. 2016, Proc. SPIE 9913, 9913–134
[Herbig(1962)]H62Herbig, G. H. 1962, AdA&A, 1, 47
[Herbig(1977)]H77Herbig, G. H. 1977, , 214, 747
[Huber et al.(2013)]H13Huber, D., Carter, J. A., Barbieri, M., et al. 2013, Science, 342, 331
[Hügelmeyer et al.(2009)]H09Hügelmeyer, S. D., Dreizler, S., Hauschildt, P. H. et al. 2009, , 498, 793
[Hughes et al.(1994)]H94Hughes, J., Hartigan, P., Krautter, J., & Kelemen, J. 1994, , 108, 1071
[Isella et al.(2014)]I14Isella, A., Chandler, C. J., Carpenter, J. M., Pérez, L. M., & Ricci, L. 2014, , 788, 129
[Janson et al.(2006)]J06Janson, M., Brandner, W., Henning, T., & Zinnecker, H. 2006, , 453, 609
[Johns-Krull et al.(2013)]J13Johns-Krull, C. M., Chen, W., Valenti, J. A., et al. 2013, , 765, 11
[Kardopolov & Filipev(1985)]KF85Kardopolov, V. I., & Filipev, G. K. 1985, Peremennye Zvezdy, 22, 103
[Kessler-Silacci et al.(2006)]KS06Kessler-Silacci, J., Augereau, J.-C., Dullemond, C. P., et al. 2006, , 639, 275
[Kopon et al.(2013)]K13Kopon, D., Close, L. M., Males, J. R., & Gasho, V. 2013, , 125, 966
[Kraus et al.(2015)]K15Kraus, A. L., Andrews, S. M., Bowler, B. P., et al. 2015, , 798, L23
[Kraus et al.(2014)]K14Kraus, A. L., Ireland, M. J., Cieza, L. A., et al. 2014, , 781, 20
[Lachapelle et al.(2015)]L15Lachapelle, F.-R., Lafrenière, D., Gagné, J., et al. 2015, , 802, 61
[Lavigne et al.(2009)]L09Lavigne, J.-F., Doyon, R., Lafrenière, D., Marois, C., & Barman, T. 2009, , 704, 1098
[Lynden-Bell & Pringle(1974)]LBP74Lynden-Bell, D., & Pringle, J. E. 1974, , 168, 603
[MacGregor et al.(2017)]M16MacGregor, M. A., Wilner, D. J., Czekala, I., et al. 2017, , 835, 17
[Males(2016)]Males16Males, J. R. 2016, MAOP-710: VisAO Photometric Calibration (University of Arizona, Steward Observatory), <https://visao.as.arizona.edu/wp-content/uploads/2016/09/nd_cal_2016.09_21.pdf>
[Males et al.(2014)]Males14 Males, J. R., Close, L. M., Morzinski, K., et al. 2014, , 786, 32
[Marois et al.(2006)]M06Marois, C., Lafrenière, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, , 641, 556
[Marois et al.(2007)]MMB07Marois, C., Macintosh, B., & Barman, T. 2007, , 654, L151
[McClure et al.(2012)]MC12McClure, M. K., Manoj, P., Calvet, N., et al. 2012, , 759, L10
[McElwain et al.(2007)]M07McElwain, M. W., Metchev, S. A., Larkin, J. E., et al. 2007, , 656, 505
[Mendoza(1968)]M68Mendoza, E. E. 1968, , 151, 977
[Morales et al.(2012)]M12Morales, F. Y., Padgett, D. L., Bryden, G., Werner, M. W., & Furlan, E. 2012, , 757, 7
[Morzinski et al.(2014)]M14Morzinski, K. M., Close, L. M., Males, J. R., et al. 2014, Proc. SPIE, 9148, 914804
[Morzinski et al.(2015)]M15Morzinski, K. M., Males, J. R., Skemer, A. J., et al. 2015, , 815, 108
[Mugrauer & Neuhäuser(2005)]MN05Mugrauer, M., & Neuhäuser, R. 2005, AN, 326, 701
[Nagasawa & Ida(2011)]NI11Nagasawa M., & Ida S., 2011, , 742, 72
[Natta et al.(2006)]N06Natta, A., Testi, L., & Randich, S. 2006, , 452, 245
[Neuhäuser et al.(2005)]N05Neuhäuser, R., Guenther, E. W., Wuchterl, G., Mugrauer, M., Bedalov, A., & Hauschildt, P. H. 2005, , 435, L13
[Neuhäuser et al.(2008)]N08Neuhäuser, R., Mugrauer, M., Seifahrt, A., Schmidt, T. O. B., & Vogt, N. 2008, , 484, 281
[Patience et al.(2012)]P12Patience, J., King, R. R., De Rosa, R. J., et al. 2012, , 540, A85
[Payne & Lodato(2007)]PL07Payne, M. J. & Lodato, G. 2007, , 381, 1597
[Pichardo et al.(2005)]P05Pichardo, B., Sparke, L. S., & Aguilar, L. A. 2005, , 359, 521
[Pineda et al.(2015)]P15Pineda, J. E., Offner, S. S. R., Parker, R. J., et al. 2015, , 518, 213
[Quillen & Trilling(1998)]QT98Quillen, A. C., & Trilling, D. E. 1998, , 508, 707
[Rau & Cornwell(2011)]RC11Rau, U., & Cornwell, T. J. 2011, , 532, A71
[Reipurth & Clarke(2001)]RC01Reipurth, B., & Clarke, C. 2001, , 122, 432
[Ricci et al.(2014)]R14Ricci, L., Testi, L., Natta, A., et al. 2014, , 791, 20
[Rigliaco et al.(2012)]R12Rigliaco, E., Natta, A., Testi, L., et al. 2012, , 548, A56
[Sallum et al.(2015)]S15Sallum, S., Follette, K. B., Eisner, J. A., et al. 2015, , 527, 342
[Schmidt et al.(2008)]S08Schmidt, T. O. B., Neuhäuser, R., Seifahrt, A., et al. 2008, , 491, 311
[Schwartz & Noah(1978)]SN78Schwartz, R. D., & Noah, P. 1978, , 83, 785
[Schwarz et al.(2016)]S16Schwarz, H., Ginski. C, de Kok, R. J., et al. 2016, , 593, 74
[Seifahrt et al.(2007)]S07Seifahrt A., Neuhäuser R., & Hauschildt P. H., 2007, , 463, 309
[Seperuelo Duarte et al.(2008)]SD08Seperuelo Duarte, E., Alencar, S. H. P., Batalha, C., & Lopes, D. 2008, , 489, 349
[Shabram & Boley(2013)]SB13Shabram, M., & Boley, A. C. 2013, , 767, 63
[Soummer et al.(2012)]S12Soummer, R., Pueyo, L., & Larkin, J. 2012, , 755, L28
[Stamatellos & Herczeg(2015)]SH15Stamatellos, D. & Herczeg, G. J. 2015, , 449, 3432
[Stamatellos & Whitworth(2009)]SW09Stamatellos, D., & Whitworth, A. P., 2009, , 392, 413
[Szulágyi & Mordasini(2017)]SM17Szulágyi, J., & Mordasini, C. 2017, , 465, L64
[Testi et al.(2016)]T16Testi, L., Natta, A., Scholz, A., et al. 2016, , 593, 111
[Tobin et al.(2016)]Tobin16Tobin, J. J., Kratter, K. M., Persson, M. V., et al. 2016, , 538, 483
[Tody(1986)]T86Tody, D. 1986, Proc. SPIE, 627, 733
[Tody(1993)]T93Tody, D. 1993, in ASP Conf. Ser. 52, Astronomical Data Analysis Software and Systems II, ed. R. J. Hanisch, R. J. V. Brissenden, & J. Barnes (San Francisco, CA: ASP), 173
[Uyama et al.(2017)]U17Uyama, T., Hashimoto, J., Kuzuhara, M., et al. 2017, arXiv:1604.04697
[Weingartner & Draine(2001)]WD01Weingartner, J. C., & Draine, B. T. 2001, , 548, 296
[White & Ghez(2001)]WG01White, R. J., & Ghez, A. M. 2001, , 556, 265
[Wu et al.(2015a)]W15aWu, Y.-L., Close, L. M., Males, J. R., et al. 2015a, , 801, 4
[Wu et al.(2015b)]W15bWu, Y.-L., Close, L. M., Males, J. R., et al. 2015b, , 807, L13
[Zacharias et al.(2010)]Z10Zacharias, N., Finch, C., Girard, T., et al., 2010, , 139, 2184
[Zhou et al.(2014)]Z14Zhou, Y., Herczeg, G. J., Kraus, A. L., Metchev, S., & Cruz, K. L., 2014, , 783, L17
| In recent years, high-contrast imaging surveys have discovered many wide-orbit substellar companions, which are located at tens to hundreds of AU from their host stars and have masses of a few to tens of M_Jup. Some of these companions have features indicative of accretion disks, including optical and near-infrared emissions such as Hα, Br-γ, and Pa-β (e.g., ; ; ; , ; ; ; ; ), high dust extinction (e.g., ; , ), and infrared excess from dust emission (e.g., ; ; ; ). It is expected that disks could be common among young substellar companions because the main formation mechanisms—collapse of prestellar cores, fragmentation of circumstellar disks, and core accretion plus subsequent scattering—can all produce disk-bearing companions. Each mechanism, however, can leave distinct imprints on disk properties. For instance, objects formed by disk fragmentation may have higher disk masses and accretion rates compared to those formed by prestellar core collapse <cit.>. On the other hand, scattering can be destructive to disks (e.g., ), and unlike stars, low-mass objects are not efficient to accrete new disks from the natal molecular clouds after scattering <cit.>. Characterizing disks of substellar companions therefore provides a new avenue for studying wide companions' mass assembly history.
In addition, if gas emission lines such as CO can be spatially and spectrally resolved, the dynamical mass of the central object can be determined assuming a Keplerian velocity field. Since masses are usually derived by comparing observables to theoretical predictions, this dynamical approach has great potential to calibrate evolutionary models (e.g., ). Finally, disk masses, sizes, structure, and lifetimes ultimately regulate satellite formation and satellite-disk interaction. As wide companions are well separated from their host stars, they offer a clear view of the relevant physical processes.
Still, imaging disks around very low-mass companions remains challenging (e.g., ). Simulations have shown that they tend to be compact because they are tidally truncated at ∼1/3 of the Hill radius (e.g., ). For objects in nearby star-forming regions (∼100 to 150 pc), their disks are probably not larger than ∼5 to 30 AU in radius, which in turn requires a <0.1'' resolution to resolve them in the (sub)millimeter. With the advent of ALMA, it is now possible to directly image and characterize these disks. <cit.> showed that GSC 6214-210 B has a dust mass of <0.15 M_⊕ in its disk. <cit.> also found that only <0.04 M_⊕ of dust is present in the disk around GQ Lup B. Most notably, <cit.> and <cit.> detected FW Tau C's accretion disk in 1.3 mm dust continuum and ^12CO (2–1) emission, respectively. <cit.> inferred a dust mass of 1–2 M_⊕, sufficient to form satellites analogous to the Galilean moons.
Here we present the ALMA 1.3 mm map of the GQ Lup system, a pre-main-sequence star with a 10–40 M_Jup companion at ∼110 AU projected separation. Both components have been shown to exhibit accretion signatures (e.g., ; ). With a ∼005 resolution, we resolve the primary star's accretion disk. The companion's disk is, however, not detected. We also present the 0.6–1 imaging of the system using the Magellan adaptive optics (MagAO; ; ; ), and derive mass accretion rates from Hα intensities. | null | §.§ ALMA 1.3 mm
GQ Lup was observed with ALMA in Cycle 3 on UT 2015, November 1 with the Band 6 receiver and 41 12-m antennas, reaching a maximum baseline of 14969.3 meters. Three of the 4 available basebands were configured for continuum observations to search for dust emission, each with 128 15.625 MHz channels for a total of 2 GHz continuum bandwidth, centered at 233.0, 246.0, and 248.0 GHz. The final baseband was centered at 230.538 GHz with 3840 channels of 0.122 MHz width (Hanning smoothed to a resolution of 0.244 MHz, or 0.32 km s^-1) to search for ^12CO (2–1) emission from our targets. Scans on GQ Lup were interleaved with the phase calibrator QSO J1534-3526. The total on-source time was 11.19 minutes.
The data were reduced in the standard way with the CASA software package, using the water vapor radiometry data and QSO J1534-3526 for gain calibration, QSO J1427-4206 for bandpass calibration, and QSO J1337-1257 for flux calibration. The calibrated data were Fourier inverted and deconvolved from the beam using the MSMFS-CLEAN algorithm <cit.> with no frequency dependence, CLEAN components that are point sources and scales of 1, 2, 4, and 8 times the size of the beam, and natural weighting for the best sensitivity. We produced a continuum map from all four basebands, excluding channels in the -15 to 15 km s^-1 range of the CO baseband to avoid contaminating our map with CO emission. The final 1.3 mm continuum map was produced after four iterations of phase-only self-calibration using a model produced from the CLEAN algorithm. We show the 1.3 mm continuum map in Figure <ref>. The continuum map has an rms of 39 μJy beam^-1, with a synthesized beam of 0.054''×0.031'' at a position angle of 68.7°.
§.§ Sources in the ALMA Image
In the continuum map there is a clear detection of a source located at 15^h49^m12.09^s, -35°39'05.43''. Accounting for GQ Lup A's proper motion (-15.1±2.8 mas yr^-1, -23.4±2.5 mas yr^-1; ), this is coincident with its reported J2000 position of 15^h49^m12.10^s, -35°39'05.12''. We measured a flux of 27.5±0.6 mJy for this source.
As we know the position of GQ Lup B, we can search a smaller region of the image with a lower detection threshold for emission. The noise in our ALMA map is highly Gaussian, so we would expect 68% of the peaks to be within 1σ of zero, 95% to be within 2σ, and so on. In a 0.1'' diameter region around the known position of GQ Lup B there are 4 beams, so we would expect ∼0.2 noise peaks above 2σ, but ≪1 noise peak above 3σ; any peak above 3σ is therefore likely real. However, we did not detect any emission from GQ Lup B.
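These expectations follow from Gaussian statistics; a quick check, assuming the quoted rates count both positive and negative excursions:

```python
from scipy.stats import norm

n_beams = 4                     # independent beams in the 0.1'' search region
for n_sigma in (2, 3):
    expected = n_beams * 2 * norm.sf(n_sigma)   # two-sided tail per beam
    print(f"above {n_sigma} sigma: {expected:.2f} expected noise peaks")
# -> ~0.18 and ~0.01, matching the ~0.2 and <<1 quoted above
```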
§.§ GQ Lup B's Disk Mass
To place an upper limit on the disk mass of GQ Lup B, we inserted fake sources into our image in a 0.1'' diameter area around the known position of GQ Lup B, and used our source finding routine to search for them. We varied both the disk size and the flux of the input sources, and for each disk size/flux combination we calculated the percentage of the fake sources that were recovered in the map. For each disk size, we set the upper limit on the disk flux to be the minimum flux for which 99.7% (3σ) of the input fake sources were detected.
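The injection-recovery logic can be illustrated with a deliberately simplified, zero-dimensional sketch, in which a fake point source counts as recovered if its noise-perturbed peak clears the detection threshold; the real procedure injects sources of varying sizes into the image itself:

```python
import numpy as np

rng = np.random.default_rng(1)
rms = 39e-6                                      # map rms, Jy/beam

def recovery_fraction(peak_jy, thresh=3.0, n_trials=10000):
    # A fake source is 'recovered' if its noise-perturbed peak still
    # exceeds the detection threshold of thresh * rms
    return np.mean(peak_jy + rng.normal(0.0, rms, n_trials) > thresh * rms)

for f_mjy in (0.15, 0.20, 0.25, 0.30):
    print(f"{f_mjy} mJy: {recovery_fraction(f_mjy * 1e-3):.3f} recovered")
# The adopted flux limit is the smallest input flux recovered 99.7% of the time.
```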
Flux upper limits can be converted into dust mass upper limits by assuming the dust is optically thin and using the standard prescription <cit.>,
M_disk = D^2 F_disk / [κ_ν B_ν(T)].
We used the standard assumption of a characteristic dust temperature of T=20K and a 1.3 mm dust opacity of κ_ν = 2.3 cm^2 g^-1. We used a distance of 150 pc to GQ Lup.
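A direct evaluation of this prescription, e.g. for the 27.5 mJy flux of GQ Lup A, reproduces the isothermal estimate of ∼18 M_⊕ quoted later:

```python
import numpy as np

h, k, c = 6.626e-27, 1.381e-16, 2.998e10         # cgs
pc, M_earth = 3.086e18, 5.972e27

def dust_mass_mearth(F_mJy, D_pc=150.0, T=20.0, kappa=2.3, nu=230.6e9):
    # M = D^2 F_nu / (kappa_nu B_nu(T)), optically thin dust, in Earth masses
    B_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    F_cgs = F_mJy * 1e-26                        # mJy -> erg s^-1 cm^-2 Hz^-1
    return (D_pc * pc) ** 2 * F_cgs / (kappa * B_nu) / M_earth

print(dust_mass_mearth(27.5))                    # -> ~18 for GQ Lup A at 20 K
```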
In Figure <ref> we show a plot of the upper limit on GQ Lup B's disk mass as a function of disk radius. The calculation ran from a point-source disk to a disk with a radius of 50 mas. This radius corresponds to a third of the Hill radius for GQ Lup B, which is expected to be the upper limit on the size of the disk. This calculation also assumed that the projected separation of 0.721'' is equal to the semi-major axis of the companion's orbit. The Hill radius, and therefore the expected disk radius upper limit, would be larger if the system has a larger separation.
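For reference, the corresponding Hill radius under illustrative masses (M_B ∼ 25 M_Jup, M_A ∼ 1 M_⊙; neither value is adopted from this paragraph) recovers the 50 mas scale:

```python
a_au = 0.721 * 150.0                        # projected separation -> ~108 AU
q = (25.0 / 1047.6) / 1.0                   # M_B / M_A, illustrative masses
r_hill = a_au * (q / 3.0) ** (1.0 / 3.0)    # ~21 AU
print(r_hill / 3.0, "AU")                   # ~7 AU, i.e. ~48 mas at 150 pc
```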
§.§ GQ Lup A Disk Modeling
As shown in Figure <ref>, the disk around the primary star is strongly detected in the 1.3 mm continuum. We fit the visibility data for GQ Lup A with two disk models to constrain the parameters of the disk.
We modeled the disk with a detailed radiative transfer modeling scheme, using the RADMC-3D code <cit.> to produce disk models. Our model included a central protostar with a temperature of 4300 K and luminosity of 1 L_⊙, consistent with the measured values (see Table <ref>). It also included a dusty protoplanetary disk, for which we used two different density prescriptions. Model A used the density profile of a flared power-law disk,
Σ = Σ_0 (R/1 AU)^-γ, and
ρ = [Σ(R) / (√(2π) h(R))] exp(-[z/h(R)]^2 / 2), with
h(R) = h_0 (R/1 AU)^β.
We allowed the disk mass (M_disk), inner and outer disk radii (R_in and R_disk), the surface density index (γ), scale height index (β), and scale height at 1 AU (h_0), to vary as free parameters. Model B used the surface density profile of a flared accretion disk with a radial power-law distribution of viscosity <cit.>,
Σ = Σ_0 (R/R_c)^-γexp[-(R/R_c)^2-γ].
The parameters for Model B were the same as those of Model A, with the exception of the critical radius R_c, beyond which the surface density drops exponentially. This replaced the disk radius, beyond which the density drops to zero in Model A. We also supplied dust opacities to the models. We assumed that dust grains are 70% astronomical silicate and 30% graphite <cit.> following a power-law distribution of dust grain sizes, with a minimum size of 5 nm, a maximum size of 3 mm, and a power law exponent of -3.5. This produced a 1.3 mm opacity of 2.25 cm^2 g^-1, in good agreement with the typical value assumed for disk mass calculations.
We used RADMC-3D to calculate the temperature throughout the density distribution. Following this we produced synthetic images with raytracing of the protostar model, and Fourier transformed the images to produce model 1.3 mm visibility profiles. We fit these models directly to the visibility data using the Markov Chain Monte Carlo code <cit.>. uses an affine-invariant MCMC ensemble sampler, which employs a series of walkers that step through parameter space and converge on the best fit. We positioned 200 walkers throughout a large region of parameter space and allowed them to move towards regions of lower χ^2. We show the best fit parameters for each model in Table <ref>, and images, residuals, and visibilities for the best-fit models in Figure <ref>.
§.§ MagAO Hα Photometry
GQ Lup was observed on UT 2015, April 16 in the simultaneous differential imaging (SDI) mode <cit.> of the VisAO camera (8×8 field of view, 79 mas plate scale; ). The rotator was off to facilitate angular differential imaging (ADI) <cit.>. Weather was photometric with low ground level winds. Seeing varied from 07 to 08. AO parameters and exposure time are listed in Table <ref>. Images in the Hα (656 nm; Δλ = 6.3 nm) and continuum (643 nm; Δλ = 6.1 nm) channels were separated, dark subtracted, registered and centered, and then the median radial profile of each image was subtracted from itself.
After these basic reduction steps, we then proceeded to employ the KLIP algorithm <cit.> for PSF subtraction with ADI. Our implementation of the ADI+KLIP algorithm allows for the selection of many parameters, including the size of the search region, a minimum rotation requirement, and the number of modes <cit.>. The following sequence of steps was taken to find a set of signal-to-noise (S/N) optimizing reduction parameters in an unbiased way.
First, an initial reduction was carried out to locate the companion. This employed a search annulus from 50 to 150 pixels (04 to 12), minimum of 2.5 pixels of rotation at the inner edge between the image being reduced and basis images, and modes ranging from 5 to 30. After PSF subtraction, images were de-rotated and the final image was formed as the 5σ-clipped mean. The final image was then unsharp-masked with a 20-pixel Gaussian kernel, and then smoothed with a 5-pixel Gaussian. Unsharp masking acts a high-pass filter, removing the stellar halo and the residual low-spatial frequency noise remaining after PSF subtraction. Given the low quality correction which yielded an FWHM much larger than λ/D, the PSF was oversampled by the diffraction limited plate scale of the VisAO camera. Gaussian smoothing (low-pass filter) hence smooths pixel-to-pixel noise. The companion was readily seen in the Hα channel with S/N ∼ 5 using 8 modes, but no detection was evident in the continuum channel (upper panels of Figure <ref>).
Next, we injected negative planets <cit.> with the same search region, using 8 modes to form an initial estimate of the planet flux, finding ΔHα ∼ 8.7 mag as the planet brightness which minimized the standard deviation in an aperture with radius of 1 FWHM at the location of the companion.
It is difficult to estimate uncertainties using the negative planet technique (e.g., ), and optimizing the reduction parameters on the companion itself risks biasing results due to speckles. To address these issues, we conducted a grid search over the parameters, testing on an ensemble of positive fake planets injected at the same separation but at a range of position angles. To conduct this search in a reasonable amount of time we employed the “Findr” distributed computing (cloud-based) data reduction system <cit.>. In all trials, a negative planet with the above estimated brightness was injected at the location of the detection to avoid cross-talk. Planet injection was performed on the dark-subtracted/registered/centered images, and then the radial profile was subtracted. The parameters tested are given in Table <ref>, and the KLIP algorithm was applied for each possible combination. The final combined image at each combination was filtered as above. The optimum parameters were determined as those which maximized the mean S/N on the ensemble of positive fake planets injected at 8.7 mag brightness. The flux of the companion was then determined by comparing its photometry when reduced with those optimum parameters to the positive fake planet results. Uncertainties were estimated from the standard deviation of the results from the fake planets.
At the optimum parameters, GQ Lup B was detected in Hα at S/N ∼ 6.3 with a contrast of 8.60 ± 0.16 mag. For comparison, the Hα contrast in <cit.> is ∼7.1 mag (Zhou, Y. 2016, private communication). The contrast at 643 nm is >8.81 mag (3σ upper limit). We also found that GQ Lup A's Hα flux is ∼0.60 mag brighter than its 643 nm continuum due to active accretion.
§.§ Derivation of Accretion Rates from Hα Fluxes
We derived the mass accretion rates for GQ Lup A and B following <cit.> and <cit.>. In brief, we computed the Hα line luminosity L_Hα, used it to derive the accretion luminosity L_acc, and applied the energy equation L_acc∝ GM_⋆Ṁ/R_⋆ to determine Ṁ.
Since we did not observe a standard star at R band, we estimated GQ Lup A's average R brightness to be 11.0 mag by averaging literature fluxes in <cit.>, <cit.>, <cit.>, and <cit.>, with no attempt to homogenize photometric systems. The uncertainty was taken to be 0.7 mag as <cit.> found that GQ Lup A's R flux can change by 1.4 mag over its 8.45-day rotation period. To recover the true L_Hα, we also have to correct for dust extinction, which is A_V∼0.4 mag to the star <cit.> but unknown to the companion. As a result, our Ṁ estimate for GQ Lup B should be considered a lower limit.
We thus estimated L_Hα∼ 10^-2.3 to 10^-1.8 L_ for A, and ∼10^-5.9 to 10^-5.4 L_ for B. Substituting L_Hα into the empirical relation, log(L_acc) = 2.99 + 1.49 × log(L_Hα), in <cit.>, we found L_acc∼ 0.3 to 2.3 L_ for A, and ∼ 1.4×10^-6 to 9.7×10^-6 L_ for B. The resulting accretion rates for GQ Lup A and B are Ṁ∼10^-8 to 10^-7 M_ yr^-1, and ∼10^-12 to 10^-11 M_ yr^-1, respectively. Our measurement for A is similar to previous results, Ṁ∼10^-9 to 10^-7 M_ yr^-1 <cit.>. On the other hand, for the companion we obtained a lower value compared to 10^-9.3 M_ yr^-1 in <cit.>. Possible causes include the unknown dust extinction or an inactive period of accretion during the time of our observations. Our Ṁ estimates and literature values are also shown in Table <ref>.
§.§ MagAO i', z', Y_S Photometry
We observed GQ Lup at broad-band filters i' (0.77, Δλ = 0.13 ), z' (0.91 ; Δλ = 0.12 ), and Y_S (0.98 ; Δλ = 0.09 ) on UT 2014, April 5. Weather was partially cloudy with ∼13 seeing. Data reduction and photometry were carried out with IRAF[1][1]IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.(, ) and MATLAB. Raw data were dark-subtracted, registered, de-rotated, and median-combined. We then subtracted the median radial profile of the combined image from itself. For i', we further filtered out the residual using a 15-pixel Gaussian kernel, and smoothed the resulting high-pass filtered image with a 6-pixel Gaussian to better bring out the companion. Our i', z', and Y_S detections of GQ Lup B are also shown in Figure <ref>.
We estimated GQ Lup B's fluxes by injecting fake planets. For z' and Y_S, the primary star in the unsaturated data was used to create fake planets. For i', since we did not acquire any unsaturated data, we estimated the peak height of the primary star in the saturated data from its beam splitter optical ghost <cit.>. Then, we flagged the saturated core, assigned the peak value to the center, and fit a two-component Gaussian to create a PSF template. To compensate for flux loss from data reduction, including filtering, we injected fake planets at the same separation from the star but with different position angles, and repeated the data reduction procedures. We found that throughputs were >98% for z' and Y_S, but only ∼32% for i' as it involved more aggressive spatial filtering and so higher losses of low spatial frequency flux. Compensating for these flux losses, we derived contrasts of 8.13±0.23 mag, 6.63±0.05 mag, and 6.45±0.05 mag for i', z', and Y_S, respectively.
To perform absolute photometry, we compared GQ Lup A to the standard star GJ 440. We used an 80-pixel aperture to include most of the flux, and adopted an 8% photometric uncertainty recommended for absolute photometry with the VisAO camera by <cit.>. Finally, we obtained i'=10.76±0.08 mag, z'=9.77±0.08 mag, and Y_S=9.43±0.08 mag for GQ Lup A, and i'=18.89±0.24 mag, z'=16.40±0.10 mag, and Y_S=15.88±0.10 mag for GQ Lup B. For comparison, our z' flux is similar to the F850LP flux of 16.2 mag in <cit.>, but our i' flux is ∼1 mag fainter than their F775W flux of 17.8 mag. It is possible that GQ Lup B was in a quiescent accretion state during our observations. Table <ref> summarizes our photometric measurements.
§.§ MagAO Astrometry of GQ Lup B
Following the calibrations in <cit.> and <cit.>, in our MagAO data we found that GQ Lup B is 0721 ± 0003 from its host star, with a position angle of 2776 ± 04. Figure <ref> shows the astrometric monitoring in the last ∼20 years, and GQ Lup B's orbital motion is evident. Our results are consistent with the trends derived by <cit.>, Δρ∼-1.4 mas yr^-1 and ΔPA ∼+016 yr^-1. Table <ref> also lists our astrometric measurements.
@lcc@
GQ Lup A Disk Properties
Parameter
Model A
Model B
Dust Mass (M_) 5.9 ± 1.0 5.5 ± 0.8
Total Mass† (M_) 77.2 ± 8.4 76.8 ± 8.3
Inner Radius (AU) 1.5 ± 0.8 1.7 ± 1.1
Radius (AU) 23.8 ± 1.6 19.5 ± 1.4
h_0 0.084 ± 0.065 0.075 ± 0.039
γ 0.10 ± 0.22 -0.21 ± 0.20
β 1.26 ± 0.19 1.45 ± 0.25
Inclination () 56.2 ± 4.8 55.3 ± 6.0
PA () 348.8 ± 4.8 348.6 ± 5.0
†Total mass is calculated by adding our dust mass to the gas mass of 71.3 ± 8.3 M_ measured by <cit.>.
@lcccc@
MagAO Observations
Filter
AO speed
AO modes
t_ sat
t_ unsat
643 nm 300 Hz 120 ⋯ 25 s × 143
656 nm (Hα) 300 Hz 120 ⋯ 25 s × 143
i' 625 Hz 120 10 s × 20 ⋯
z' 625 Hz 120 5 s × 13 0.283 s × 54
Y_S 625 Hz 120 ⋯ 15 s × 14
@lccc@
Parameters of Hα KLIP Reduction
Parameter
Grid
Optimum
Notes
Minimum radius of region (pixel) 40–80, steps of 10 50
Maximum radius of region (pixel) 110–140, steps of 10 110
Minimum rotation (pixel) 0.0, 0.25, 0.5, 1.0, 2.0 1.0
Number of modes 2–20, steps of 2 6
Fake Planet PA 7.15–357.15, steps of 10 ⋯ 33 total, skipped ±10 from planet
Fake Planet Contrast 1.65, 3.30, 4.95 × 10^-4 ⋯ ±50% from negative planet result
@lccc@
MagAO Photometry
Contrast/Filter
GQ Lup A
GQ Lup B
Δ643 nm† >8.81
ΔHα 8.60±0.16
i' 10.76±0.08 18.89±0.24
z' 9.77±0.08 16.40±0.10
Y_S 9.43±0.08 15.88±0.10
†3σ upper limit. | §.§ GQ Lup A's Disk
We list the best-fit parameters for our modeling of the GQ Lup A disk in Table <ref>, and show our best-fit models in Figure <ref>. Both models match the data well, and produce similar best-fit parameter values. We find that GQ Lup A's disk has a radius of ∼22 AU, an inclination angle of ∼56, and a position angle of ∼349. The mass in dust in the disk is ∼6 M_, lower than ∼9.5 M_ found by <cit.> and ∼15 M_ found by <cit.>. This difference in dust mass likely comes from the adopted temperature profiles in the disk. In this study, we use radiative transfer to calculate the local temperature. Alternatively, if we assume a constant temperature of 20 K throughout the entire disk and use Equation <ref> to calculate the dust mass from the measured flux of 27.5 mJy, we obtain a higher value of ∼18 M_.
We find that the disk size we measure (R∼22 AU) is smaller than the sizes measured by <cit.> (R∼30 AU from 870 μm continuum, and R∼46.5 AU from CO (3–2) emission). This may be because dust grain growth is expected to occur preferentially in the inner disk, where densities are higher, and radial drift will tend to concentrate large particles at smaller radii. Our 1.3 mm map is more sensitive to large dust grains than the 870 μm map in <cit.>, so we may expect to measure a smaller radius at longer wavelengths.
Our map does not show any structures in the GQ Lup A disk, such as holes or gaps, which can be the signposts of additional companions. The best-fit disk models also have very small inner disk radii, of ∼1.6 AU. This is consistent with no inner clearing, because the resolution of our observations does not allow us to constrain the inner radius well below ∼4.5 AU. Since it is unlikely that any gaps or companions are hidden within the disk, GQ Lup B was probably not scattered to its current orbit, but instead formed in situ like binary stars, as we discuss further in Section <ref>.
Our models show that the surface density profile of the disk can be very flat. The flat profile is similar to some brown dwarf disks in ρ Ophiuchus <cit.>, but in contrast to brown dwarf disks in Taurus <cit.>, which have rather steep profiles and smooth edges. As <cit.> argued, if the dust has a radial variation in its size distribution, assuming uniform dust properties across the disk can result in a shallower profile. This has been confirmed by <cit.>, who showed that GQ Lup A's disk does have a radial gradient in both the dust composition and grain size, with larger grains in the inner disk and submicron grains in the outer disk. The smaller disk size measured at 1.3 mm compared to 870 μm provides further evidence that larger grains are present in the inner disk.
§.§ GQ Lup B's Disk Mass
The disk around GQ Lup B is undetected in our map. As shown in Figure <ref>, our data put an upper limit on the disk mass for GQ Lup B of <0.25–1 M_⊕, for source sizes ranging from a point source to 1/3 of the Hill radius, although this is not as strong as the upper limit of <0.04 M_⊕ by <cit.>. Unlike another wide-orbit substellar companion, FW Tau C, which has 1 to 2 M_⊕ of dust in its disk <cit.>, GQ Lup B's disk appears to have little dust, similar to the dust-depleted disk around GSC 6214-210 B (<0.15 M_⊕ of dust; <cit.>). This may arise from different evolutionary stages: FW Tau C is younger (∼2 Myr) and has a more massive accretion disk, while GQ Lup B (2–5 Myr) and GSC 6214-210 B (5–10 Myr) are more evolved and their disks are rather depleted.
§.§ Orbital Constraints from Disk Size
GQ Lup B's orbital motion was first detected by <cit.>, who showed that the best-fit orbits were eccentric. With new RV measurements, <cit.> showed that the semi-major axis a, eccentricity e, and inclination i of GQ Lup B's orbit fall into three groups:
1. a∼ 100 AU, e∼ 0.15, i∼57°,
2. a< 185 AU, 0.2<e<0.75, 28°<i<63°, and
3. a> 300 AU, e> 0.8, 52°<i<63°.
As was argued by <cit.>, Group 3 is a priori unlikely as its high eccentricity and long orbital period would mean that we are observing GQ Lup B close to periastron. Since the circumstellar disk may be disrupted if the companion goes too close to the star, we can calculate the truncation radius for each of the orbit groups to determine whether they are consistent with the size of the GQ Lup A disk. We adopt a disk radius of 46.5 AU determined by <cit.> from CO (3–2) emission, as it more likely represents the full extent of the disk compared with our 1.3 mm size measurement.
Semi-analytic approximations for the tidal truncation radius of the primary's disk as a function of orbital parameters suggest that the disk should be truncated at a radius of
R_t ≈ 0.36 [(1-e)^1.2 ϕ^2/3 μ^0.07/0.6 ϕ^2/3 + ln(1 + ϕ^1/3)] a,
where ϕ is the ratio of the mass of the primary to the mass of the secondary, and μ≡ 1/(1+ϕ) <cit.>. This equation is relatively insensitive to whether the mass of GQ Lup B is closer to 10 M_Jup or 40 M_Jup (because it is at most 4% of the primary's mass), but is very sensitive to eccentricity and semi-major axis. We hence arbitrarily use 25 M_Jup for this calculation.
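The truncation radius is straightforward to map over the (a, e) plane; in the minimal Python sketch below, the primary mass of 1.0 M_⊙ is our assumption (the text fixes only the companion at 25 M_Jup):

import numpy as np

def R_trunc(a, e, M1=1.0, M2=25.0/1047.0):
    # Tidal truncation radius of the primary's disk, Eq. (<ref>);
    # masses in M_sun (1 M_sun ~ 1047 M_Jup), a in AU.
    phi = M1 / M2
    mu = 1.0 / (1.0 + phi)
    return (0.36 * (1.0 - e)**1.2 * phi**(2.0/3.0) * mu**0.07
            / (0.6 * phi**(2.0/3.0) + np.log(1.0 + phi**(1.0/3.0))) * a)

print(R_trunc(100.0, 0.15))   # Group 1: ~32 AU, well below the 46.5 AU disk
print(R_trunc(170.0, 0.25))   # Group 2, a~160-180 AU, e~0.2-0.3: ~46 AU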
We show the truncation radius as a function of semi-major axis and eccentricity in Figure <ref>. We find that the truncation radii for Group 1 and Group 3 are all less than 40 AU, incompatible with the measured disk size. Most of the parameter space for Group 2 is also excluded; the remaining solutions are those with a∼ 160–180 AU and e∼ 0.2–0.3. As a result, the size of the GQ Lup A disk suggests that GQ Lup B's orbit probably has a low eccentricity. However, it is important to note that this equation may break down if the primary's disk and the secondary's orbit are substantially inclined (as may be the case for GQ Lup; see Section <ref>). In this scenario, it is likely that larger disk radii than we conclude here would be acceptable. Indeed, the strong constraints we place here based on this analysis may instead be an indication of a high degree of inclination for the orbit.
§.§ Geometry of the System
Since the inclination of GQ Lup A's disk, ∼56°, is consistent with orbital solutions in <cit.> and <cit.>, here we investigate whether the star's disk and the companion's orbit can possibly lie in the same plane.
In Figure <ref>, we plot the three best-fit orbits in <cit.> (see their Table 4 for orbital parameters) as well as the GQ Lup A disk in two viewing angles, one along the line of sight and the other with the disk viewed edge-on. We also show one representative orbit from <cit.>, for which the semi-major axis and eccentricity are constrained by our tidal truncation analysis in Section <ref>, and other parameters, including the longitude of the ascending node and the argument of periastron, are extracted from unpublished astrometric fitting (Ginski, C. 2016, private communication). None of these orbits is likely to lie in the plane of the disk. Although A's disk and B's orbit may share similar inclinations, they probably have very different orientations in space.
In Figure <ref> we plot a possible geometry of the GQ Lup system. The circumstellar disk is not aligned with the star's spin axis either, because they have different inclinations: ∼56° for the disk and ∼27° for the spin axis <cit.>. This is not unusual among T Tauri stars. <cit.> showed that although the stellar rotation axis is correlated with the disk inclination, the two are not identical, with a mean difference of ∼19° in T Tauri systems. We note that this misalignment might be induced by a torque from GQ Lup B (e.g., <cit.>).
We caution that the results presented here and in Section <ref> are preliminary. As <cit.> and <cit.> stressed, many orbital solutions share similar χ^2 in their orbital fitting. Future astrometric monitoring is essential to lift degeneracies and ascertain GQ Lup B's orbit.
§.§ SED of GQ Lup B
Figure <ref> compares the spectral energy distribution (SED) of GQ Lup B to the 2400 K, log g = 4.0 BT-Settl model <cit.>. The temperature and surface gravity are chosen to be consistent with previous estimates (e.g., <cit.>; <cit.>; <cit.>). The model is normalized at K band to minimize the effects from dust emission and extinction. We include the 0.3 to 3.7 μm literature photometry, the 3σ upper limit at 0.643 μm, and our fluxes at Hα, i', z', and Y_S in the figure.
Overall, the 2400 K model gives a reasonable fit longward of 0.7 μm. At shorter wavelengths, the observed fluxes are much higher than photospheric due to excess continuum emission from accretion <cit.>. The 0.656 μm Hα emission is especially prominent. As recently simulated by <cit.>, the Hα emission likely comes from the extended shock front on the surface of the circumsubstellar disk. Our measured Hα flux, albeit with a large uncertainty, is about 10 times fainter than that of <cit.>. Our 0.643 μm non-detection also indicates a much weaker accretion continuum. Therefore, accretion onto GQ Lup B seems to be very variable; we probably observed a more quiescent phase than did Zhou et al.
§.§ Accretion Rate and Disk Lifetime
Disk lifetime can be roughly estimated assuming a constant accretion rate, but the estimate should be taken with caution as accretion can vary significantly. Our measured Ṁ for GQ Lup A, 10^-8 to 10^-7 M_⊙ yr^-1, is typical of T Tauri stars (e.g., <cit.>). With M_disk∼ 2×10^-4 M_⊙, GQ Lup A's disk may be depleted in a few thousand years (t∼ M_disk/Ṁ∼ 2×10^-4/10^-7∼ 2×10^3 yr for the higher accretion rate). For the companion, with a disk mass upper limit of ∼10^-5 M_⊙ found by <cit.>, and an accretion rate of Ṁ∼ 10^-12 to 5×10^-10 M_⊙ yr^-1 derived by <cit.> and our Hα photometry, GQ Lup B's disk may persist for tens of thousands of years. The disk lifetime of GQ Lup B might thus be longer than that of its host star. | §.§ Formation of GQ Lup B: Scattering
<cit.> argued that scattering might be the most favorable scenario for GQ Lup B, in which it originally formed close to the star but was scattered outward by a more massive body. Searches for a close-in massive companion to the star have not yielded positive results. Deep AO imaging in <cit.> excluded any object as bright as GQ Lup B outside ∼18 AU. Similarly, RV monitoring rejected objects more massive than 0.1 M_⊙ within 2.6 AU <cit.>, but <cit.> speculated that a massive brown dwarf might reside only a few AU from the star in order to explain their observed 0.4 km s^-1 RV change in two years. However, if such an inner object exists, the morphology of GQ Lup A's disk can be used to infer its presence, since the object would sculpt a gap or hole. The inner edge of a circumbinary disk can be truncated at approximately 2 to 3 times the binary separation <cit.>, corresponding to a ∼10 AU hole for the brown dwarf companion posited by <cit.>. Nevertheless, neither the 0.3 to 1.3 mm disk SED <cit.> nor our ALMA 1.3 mm disk-resolved map finds evidence of a gap or central clearing in the disk. Thus, it is quite unlikely that another massive body lies very close to the primary star to serve as the scatterer.
In addition, scattering often induces a very eccentric orbit (e.g., <cit.>), but for GQ Lup B, orbital solutions with low eccentricities are more probable, as shown in Section <ref>. Therefore, all lines of observational evidence suggest that in situ formation, via disk fragmentation or prestellar core collapse, is the more likely formation pathway of GQ Lup B. This is in line with the null result of the dedicated AO searches for scatterers in other systems <cit.>; thus, core accretion plus subsequent scattering is perhaps not responsible for most substellar companions on wide orbits.
§.§ Formation of GQ Lup B: in situ
Observationally distinguishing prestellar core collapse from disk fragmentation requires high-resolution imaging of the earliest phases of star formation. Recent studies suggest that prestellar core collapse can efficiently form very wide (>1000 AU) binary or multiple star systems (e.g., <cit.>), while disk fragmentation may form more compact systems with separations of tens to hundreds of AU between the components (e.g., <cit.>). At ∼110 AU from the host star, GQ Lup B fits nicely into the disk fragmentation scenario.
For fragmentation to occur, GQ Lup A's disk must have been very massive and presumably more extended than 50 to 100 AU, since circumstellar disks are expected to become gravitationally unstable beyond that radius (e.g., <cit.>). It is sometimes suggested that the disk plane and the companion's orbital plane should be coplanar because the companion formed in the disk. However, dynamical interactions with other fragments in the parent disk can gradually alter the initial configuration, thereby creating inclined systems <cit.>. As a result, even though in Section <ref> we have shown that GQ Lup B's orbital plane is probably not coplanar with GQ Lup A's disk, disk fragmentation remains a possibility.
Recently, <cit.> proposed that the properties of circumsubstellar disks provide an observational diagnostic to distinguish disk fragmentation from prestellar core collapse. They predicted higher disk masses and accretion rates for objects formed via disk fragmentation, because such objects have more time to accrete and thus retain more massive disks. The discrepancy between the two scenarios is more profound for very low-mass companions, especially those below 10 M_Jup. The authors also predicted that, within the disk fragmentation framework, low-viscosity circumsubstellar disks tend to have higher masses and accretion rates than high-viscosity ones, because higher viscosity facilitates angular momentum transport and disk dissipation.
In Figure <ref> we overplot the substellar companions FW Tau C, GSC 6214-210 B, and GQ Lup B on Figure 7 of <cit.>. Objects formed by disk fragmentation have a roughly constant disk mass, in drastic contrast to the monotonic correlation M_disk∝ M_star for prestellar core collapse. While FW Tau C has a rather massive disk, considering the large dispersion in the M_disk∝ M_star correlation (±0.7 dex; <cit.>), it is still consistent with both formation scenarios. The very low-mass disks around GQ Lup B and GSC 6214-210 B are more in line with formation via prestellar core collapse. Alternatively, if they formed via disk fragmentation, their disks might have a high viscosity and thus dissipate quickly.
§.§ Formation of Satellites
Disks around wide-orbit substellar companions provide clues to the formation and population of exomoons, which are challenging to detect with current facilities. The simulations of <cit.> found that Jupiter-mass satellites are unlikely to form around brown dwarfs, because no rocky cores grow fast enough to accrete a gaseous envelope before the disk dissipates, while Earth-like satellites can be common if the disk mass is a few M_Jup. Nonetheless, even Earth-mass satellites are rare if the disk mass is only a fraction of M_Jup. Thus, it appears that GQ Lup B has no gaseous moons, while a few Earth-like moons may have formed at early times when the disk was more massive. As <cit.> and our ALMA observations find that GQ Lup B's disk is deficient in dust, forming Earth-like satellites is no longer possible. Only tiny rocky moons analogous to the Moon (∼0.012 M_⊕) may still form out of the remaining material (<0.04 M_⊕). Satellite formation around GQ Lup B is probably in its late stages and may have already ceased. |
http://arxiv.org/abs/1701.07437v3 | 20170125190002 | Muon Beam Experiments to Probe the Dark Sector | [
"Chien-Yi Chen",
"Maxim Pospelov",
"Yi-Ming Zhong"
] | hep-ph | [
"hep-ph"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08093v2 | 20170127155630 | A local diagnostic energy to study diabatic effects on a class of degenerate Hamiltonian systems with application to mixing in stratified flows | [
"Alberto Scotti",
"Pierre-Yves Passaggia"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
^1 Dept. of Marine Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599
Oceanography, Fluid dynamics
A. Scotti
[email protected]
In Hamiltonian systems characterized by a degenerate Poisson algebra, we show how to construct a local energy-like quantity that can be used to study diabatic effects on the evolution of the Available Energy of the system, the latter concept formalizing the original idea of Margules'.
We calculate the local diagnostic energy for geophysically relevant flows. For the particular case of stratified Boussinesq flows, we show that under moderately general conditions, in inertial frames where the initial distribution of potential vorticity is even around the origin, our framework recovers the Available Potential Energy introduced by Holliday and McIntyre <cit.>, and as such depends only on the mass distribution of the flow.
In non-inertial frames, we show that the local diagnostic energy of flows which are, in an appropriate sense, characterized by a low-Rossby number Ro ground state, has to lowest order in Ro, a universal character.
§ INTRODUCTION
A major challenge in contemporary oceanography is to understand the role of small scale turbulence in regulating the oceanic Meridional Overturning Circulation (MOC), the process that over
millennial time scales exchanges surface with deep water <cit.>. Given the large storage capacity of the deep ocean for heat and greenhouse gases,
understanding the drivers of the MOC is essential for climate prediction <cit.>.
<cit.> used energetics arguments in an attempt to quantify the amount of energy required to sustain the MOC,
but significant gaps remain in quantifying the pathways that energy injected
at large scales by winds and tides takes to reach the small scales at which mixing occurs <cit.>.
Particularly vexing is the problem of estimating the energetic cost incurred when turbulent processes irreversibly mix the stratifying agents
<cit.>, that is, how to estimate the energy input necessary to sustain a given rate of dissipation of the variance of the stratifying agents.
For a fluid in the Boussinesq approximation, <cit.> used the concept of Available Potential Energy (APE) as a diagnostic tool to achieve such a relation. APE has a long history, starting with the pioneering work of <cit.> and <cit.>, later developed by <cit.> into a working tool, the so-called Lorenz Energy Cycle, which is still used today <cit.>. Essentially, Margules' idea was to calculate a minimum-energy state, compatible with certain constraints, and use the energy of such state to "gauge" the amount of potential energy effectively available to produce mechanical work, yielding a definition of the APE of the system.
Winters et al. considered the global effects of mixing (i.e., diabatic effects) on a closed system. For this purpose, they only needed a definition of APE for the system as a whole. There are however situations that call for a definition of APE that apply to localized regions of the domain, e.g. when measuring the energy carried by nonlinear internal waves <cit.>, when partitioning energy between mean and fluctuating components in a cyclone <cit.>, in determining the efficiency of different mixing systems <cit.>, and in studying mixing and turbulence in spatially inhomogeneous systems <cit.>. In all these studies, the starting point was the local definitions of APE developed in the early 1980's by <cit.> for incompressible flows and by <cit.> for compressible flows, based on a reference state that depends only on the mass distribution, though <cit.> already suggested that more general reference states can be considered. Indeed, in the atmospheric literature, more general reference states have been considered <cit.>, whereas in the oceanographic applications the Lorenz paradigm still dominates <cit.>.
For a recent review see <cit.>.
The inadequacy of the standard definition of energy as the sum of kinetic, potential and, for compressible fluids, internal energy to quantify the capacity of the system to do actual work, is connected to the degeneracy of the Poisson algebra in the Hamiltonian formulation of the problem <cit.>. By degeneracy, we mean that the center of the Poisson algebra contains non-constant functionals of the phase space, the so-called Casimir functionals, or Casimirs for short. The set of Casimirs is an ideal of the algebra, and thus can be used to define a notion of equivalence
on the set of Hamiltonians: Two Hamiltonians are equivalent, in the sense that they give the same dynamics, if they differ by a Casimir. In other words, the Hamiltonian possesses a local (in phase space) gauge symmetry.
From this point of view,
the local APE formulations can be seen as selecting, out of a specific equivalence class, a Hamiltonian that satisfies one or more additional conditions <cit.>, given by a gauge-fixing condition. As we shall clarify later, the Casimir includes the effects of constraints on the system.
In this paper, we aim to revisit the issue of constructing a local diagnostic energy that can be used to diagnose the effect of diabatic processes on the energetics of fluid systems that, in the adiabatic limit, are described by a Hamiltonian with a degenerate Poisson algebra. In particular we are seeking a quantity with the following properties:
* Accounts for relevant constraints.
* Is local, and satisfies (up to diabatic effects) suitable conservation laws.
* Can be connected to Margules' intuitive notion of Available Energy when the latter is properly formalized.
* Is convex in phase space, so that it can be meaningfully partitioned into a mean and eddy (or turbulent) component.
* Its evolution under diabatic conditions reflects the loss of Available Energy.
Following <cit.>, we first lay down in sec. <ref> the general theoretical framework that applies to generic systems that in the adiabatic limit have a Hamiltonian description characterized by a degenerate Poisson algebra.
From there, we introduce the specific gauge-fixing condition that, when applied to the equivalence class of Hamiltonians, identifies the one whose density is the local diagnostic energy that we seek.
In practice, the gauge-fixing condition identifies which Casimir needs to be added to the energy. At the same time, we also obtain the equations that specify the appropriate reference state.
We then consider two models which are commonly used in oceanography and which have an adiabatic limit described by a degenerate Hamiltonian structure: the incompressible shallow water equations and the incompressible Euler equations in the Boussinesq approximation for a continuously stratified flow. Both models are considered in inertial and non-inertial (i.e., rotating) frames. For these systems, we give general properties for both the gauge-fixed Casimir and the reference state. In simple geometric configurations, we calculate analytically the solution or a suitable approximation. An interesting result that applies to both shallow water and Boussinesq equations is that, in rotating frames, the local diagnostic energy associated to low Rossby number reference states has a universal character.
§ THEORETICAL BACKGROUND
The phase space which describes a Hamiltonian mechanical system characterized by a finite number of degrees of freedom typically, but not always <cit.>, has a non-degenerate Poisson algebra. In this case, if no other constraints exist, two Hamiltonians are dynamically indistinguishable, and thus belong to the same equivalence class, if they differ by at most a time-dependent constant. We can think of this as a global symmetry.
If, however, the Poisson algebra is degenerate, then the equivalence class, as pointed out in the introduction, is wider. If that is the case, the global symmetry, manifested by the invariance of the dynamics under the addition of a (time varying) constant to the energy at every point in phase space, widens to a local (in phase space) symmetry, i.e. the energy can be altered by the addition of a Casimir that can depend on the location in phase space, and still yield the same dynamical equations. From a field-theoretical point of view, the Hamiltonian in this case possesses a local gauge symmetry <cit.>.
It is important to remark that the existence of non-trivial Casimirs depends on the degeneracy of the Poisson algebra, rather than on the particular choice of the Hamiltonian. They represent conserved quantities whose existence does not depend via Noether's theorem on symmetries of the Hamiltonian. If the algebra is degenerate, equilibrium solutions need not be (and in general, are not) extrema of the Hamiltonian.
For notational purposes, here and hereafter we will denote with H[q] the "naive" Hamiltonian, i.e. the sum of kinetic, potential and, when needed, internal energy, while H[q] will denote a member of the class of equivalent Hamiltonians, which can be written as H[q]=H[q]+C[q], the sum of the naive Hamiltonian and a Casimir.
It is also convenient to introduce the following notation: if F[q] is a functional over the phase space which can be represented as the integral of an appropriate density n-form 𝔉(q) over the n-dimensional oriented manifold 𝔻, then F(q)≡⋆𝔉(q) is the corresponding scalar density (⋆ is the Hodge-star operator), i.e.
F[q]=∫_𝔻𝔉(q)=∫_𝔻 F(q)𝔙,
where 𝔙 is the volume n-form. To summarize, square brackets [ ] denote a volume integrated quantity, with the corresponding local density indicated by the use of ().
In fluid systems <cit.>, we find that
the Casimirs are generally underdetermined. A set of Casimirs can be constructed as follows <cit.>: Let s_1(𝐱),⋯,s_p(𝐱) be p Lagrangian invariants, by which we mean they are intensive quantities (0-forms) that are constant along Lagrangian trajectories, i.e.
∂ s_i/∂ t+ L_ v(s_i)=0, i=1,…,p,
with L_𝐯 being the Lie derivative w.r.t. the flow induced by 𝐯.
We also need a conserved density 𝔇, that is an n-form
representing a materially conserved extensive property, i.e. such that
∂𝔇/∂ t+ L_𝐯(𝔇)=0.
For our purposes, we will use the mass n-form or, for incompressible flows, the volume n-form.
Then
C[q]=∫_𝔻 C(s_1(𝐱),⋯,s_p(𝐱))𝔇,
where C(s_1,⋯,s_p) is any real-valued function which depends on its arguments algebraically (i.e., we are not allowing C to depend on derivatives of the s_i's).
It is trivial to verify that dC/dt≅ 0 (here and hereafter ≅ means equality up to boundary terms, or, in the case of n-forms, equality up to an exact form). Note that there may be more general classes of Casimirs (e.g., which may depend on derivatives of the conserved quantities). Here we restrict ourselves to Casimirs that can be written as (<ref>).
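For completeness, the verification runs as follows. Using (<ref>) and (<ref>),
dC/dt=∫_𝔻(∑_i C_,s_i∂ s_i/∂ t𝔇+C∂𝔇/∂ t)=-∫_𝔻(∑_i C_,s_i L_ v(s_i)𝔇+C L_ v(𝔇))=-∫_𝔻 L_ v(C𝔇)=-∫_𝔻 d(i_ v(C𝔇))≅ 0,
where the last two equalities use Cartan's formula L_ v=d i_ v+i_ v d, the fact that d(C𝔇)=0 because C𝔇 is an n-form, and Stokes' theorem.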
§.§ Holonomic brakes and Available Energy
It is straightforward to reformulate Margules' intuitive idea of "Available Energy" within the framework of a Hamiltonian system with a degenerate Poisson algebra.
Consider a system described at time t=0 by a point q_0 in phase space. Under adiabatic conditions, the system is described by a trajectory q(t) in phase space such that along the trajectory H[q(t)]= H[q_0], where H[q] is any member of the class of equivalent Hamiltonian functionals.
While <cit.> modified the Hamiltonian dynamics to dissipate the Casimirs, here we introduce into the system what we may call a holonomic brake, whose purpose is to extract energy from the system (hence a brake) in such a way that the s_i's
remain Lagrangian invariants (holonomic), and 𝔇 remains an integral invariant, i.e. (<ref>) and (<ref>) are still satisfied. Thus, the holonomic brake leaves the Casimirs invariant. Later, we will consider how holonomic brakes can be realized for particular classes of flows.
With the brake turned on, d H[q(t)]/dt≤ 0 and, since the brake is holonomic, any decrease in H[q(t)] is due, entirely, to a decrease in the "naive" Hamiltonian H[q(t)]. We assume that, in the limit t→∞, the application of the brake causes the system to relax to a stationary "ground" state q_*. If such a ground state exists and is unique, the energy extracted by the holonomic brake from the system, initially at q_0, is the difference E_ AE[q_0]≡ H[q_0]-H[q_*]. We identify E_ AE with the Available Energy envisioned by Margules. We have thus the following definition:
The Available Energy E_ AE is the energy of the system referenced to the energy of the ground state, the latter being the state that the system settles to when subject to a holonomic brake.
We shall see presently that, in the process of determining q_*, a gauge-fixing condition will naturally arise which selects an element H_* of the equivalence class of Hamiltonians. The rate of change in time of H_*[q(t)] measures the loss of Available Energy when q(t) evolves under full diabatic conditions.
§.§ A special family of Casimirs: the Mass Distribution Function
The main postulate of our approach is that the q_* to which the system relaxes after the holonomic brake is turned on, is the point that minimizes the naive Hamiltonian subject to a set of constraints required by the conservation of the s_i's along trajectories, which are not affected by the brake. Thus, the extremal point of H[q] must be sought among the points that satisfy an extra set of conditions, the definition of which is the subject of this section.
Consider a Lagrangian set of coordinates α=(α^1,…,α^n).
A Lagrangian particle characterized by a p-tuple s=(s_1,…,s_p) of Lagrangian invariants retains its identity. The mass of all Lagrangian particles whose p-tuple falls between s and s+d s is given by
Δ M( s)=(∫_𝔻[∏_i=1^pδ_D(s_i(α)-s_i)]𝔇(α))d s,
where 𝔇(α) is the mass n-form expressed in Lagrangian coordinates, and δ_D is the Dirac delta (not to be confused with the δ used to denote variations). Integrating Δ M over the p-tuples, from s_i to its maximum value S_i≡max{s_i(α), α∈𝔻} for each i, we obtain the Mass Distribution Function (MDF)
M( s)=∫_s_1^S_1⋯∫_s_p^S_pΔ M( s')=∫_𝔻[∏_i=1^pθ(s_i(α)-s_i)]𝔇(α),
where
θ(s)=∫_-∞^sδ_D(t)dt,
is the Heaviside function. It represents the mass of all Lagrangian particles with values of the p-tuple greater than (s_1,⋯,s_p).[<cit.> used a similar approach to construct a background state in an atmospheric context.]
If we wish to express the theory in Eulerian coordinates, we just need to replace Eulerian coordinates x^i=x^i(α,t) in (<ref>) and integrate over the corresponding Eulerian mass density form
𝔇( x)=(ρ(α)∂(α^1,⋯,α^n)/∂(x^1,⋯,x^n)(t))𝔙( x),
where 𝔙( x) is the volume form in Eulerian coordinates and ρ(α) the scalar mass density in Lagrangian coordinates.
By construction, the MDF is a p-parameter family of Casimirs, one for each p-tuple. MDFs introduce a phase space "foliation": with the holonomic brake turned off, q(t) moves on a leaf of the foliation along isolines H[q]; when the brake is turned on, q(t) still moves on the same leaf, but crosses the isolines of H[q] on its way to q_*, which, by our hypothesis, is where the naive Hamiltonian attains its minimum on the leaf (see figure <ref>).
Two remarks are in order: While it may be possible to use (<ref>) to quantify the constraints, the integral version (<ref>) has the advantage of including bulk constraints, such as conservation of total mass; also, for incompressible flows in the Boussinesq approximation, we can replace the mass n-form with the volume n-form, in which case we should properly speak of a Volume Distribution Function. For simplicity, we will still refer to M( s) as the MDF regardless of what n-form is used.
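Numerically, evaluating the MDF of a discretized state amounts to summing cell masses over threshold sets; the following minimal Python sketch treats a single Lagrangian invariant (p=1), with illustrative array names of our own choosing (for several invariants, the single comparison becomes a logical AND over the p thresholds):

import numpy as np

def mass_distribution_function(s_field, cell_mass, s_values):
    # M(s): total mass of all cells whose Lagrangian invariant exceeds s,
    # i.e. the discrete counterpart of the MDF definition with p = 1.
    s_flat, m_flat = s_field.ravel(), cell_mass.ravel()
    return np.array([m_flat[s_flat > s].sum() for s in s_values])

# two states on the same leaf share (up to discretization errors) the same MDF
rng = np.random.default_rng(0)
s = rng.standard_normal((64, 64))
m = np.full(s.shape, 1.0 / s.size)        # uniform cell masses, total mass 1
print(mass_distribution_function(s, m, np.linspace(-2.0, 2.0, 5)))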
To solve the minimization problem, we introduce a suitable Lagrange multiplier ψ( s), defined on the support of the MDF M^0( s) that specifies the leaf. That is,
we seek
to render the Lagrange functional
L[q,ψ]≡ H[q]+∫ψ( s)[∫_𝔻∏_i=1^pθ(s_i( x)-s_i)𝔇( x)-M^0( s)]d s
stationary,
by varying both q and ψ (recall that s_i(𝐱) is the value of the i-th Lagrangian invariant at 𝐱 calculated from the fields in q), that is we seek the solution[Though we should admit the possibility that more than one extremal (minimal) solution exists, for simplicity we assume that the minimal solution is unique.] q_*,ψ_* of
δ L/δ q≅ 0, δ L/δψ=0.
If all the s_i's depend algebraically on the field in q, then the ≅ 0 can be replaced by equality stricto sensu.
From now on, q_* will be referred to as the "ground state".
§.§ The gauge-fixing condition
Let
Ψ[q]≡∫_𝔻[∫ψ( s)(∏_i=1^pθ(s_i( x)-s_i))d s]𝔇( x),
be the Casimir associated to the Lagrange multiplier and let
H[q]=H[q]+Ψ[q].
By switching the integral over the p-tuples s with the integral over the manifold 𝔻, the Lagrange functional can be rewritten as
L[q,ψ]=
H[q]-∫ψ( s)M^0( s)d s.
Since the last term in (<ref>) does not depend on q, the Lagrange multiplier ψ_* associated to the ground state q_*
can be used to define the following gauge-fixing condition on the equivalence class of Hamiltonians
δ H/δ q|_q_*≅ 0,
which is satisfied when H[q]= H_*[q]≡ H[q]+Ψ_*[q], where the Casimir Ψ_*[q] is related to the Lagrange multiplier ψ_* by (<ref>).
We leave it to the reader to verify that if two states q_1 and q_2 belong to the same leaf (i.e., they have the same MDF), then
Ψ_*[q_1]=Ψ_*[q_2].
We are now in a position to define the local diagnostic energy as follows:
The local diagnostic energy E is the scalar density of the Hamiltonian which satisfies the gauge-fixing condition (<ref>), with the ground state q_* obtained from (<ref>), i.e.
E(q)=H(q)+Ψ_*(q).
In terms of the local diagnostic energy,
the Available Energy is
E_AE[q]≡ E[q]- E[q_*]=H[q]-H[q_*],
where the last equality follows from (<ref>), since q and q_* belong to the same leaf. Thus, the gauge-fixing condition selects, out of the class of dynamically equivalent Hamiltonians, the one that returns the same Available Energy as the naive Hamiltonian, when referenced to the ground state. Of course, the naive Hamiltonian is physically interpreted as being the energy of the system.
We also make the following conjecture: There exists a neighborhood of q_* over which E[q] is a convex function (we implicitly assume that the phase space has a vector space structure).
Note that (<ref>) is a necessary condition for the conjecture to hold.
What makes the local diagnostic energy interesting is how the evolution of the Available Energy under full diabatic conditions is related to the local diagnostic energy.
§.§ Evolution of the Available Energy under diabatic conditions
The Hamiltonian selected by the gauge-fixing condition (<ref>) acquires a special role when we apply non-holonomic forces which break the invariance of the s_i's along Lagrangian trajectories. Without brakes, the motion occurs on a leaf along lines of constant energy. With a holonomic brake, the motion still occurs within the leaf, crossing the isolines of the naive Hamiltonian. Finally, with the full non-holonomic brake on, the MDF becomes time dependent and q(t) drifts across leaves. At each time, we can still calculate a ground state solving (<ref>-<ref>) with M^0( s) replaced by its instantaneous value M^t( s) calculated from q(t). Thus, starting from a time series of q(t) states, we obtain a sequence of ground states q_*(t) with the corresponding Lagrange multipliers ψ_*( s,t). Note that (<ref>) still holds on the pairs (q(t),q_*(t)).
Based on our definition of local diagnostic energy,
E[q_*(t)]=H[q_*(t)]+Ψ_*[q_*(t),t],
where the second argument in Ψ_*[·,t] is a reminder that the time dependence of the Casimir is due both to the time dependence of the ground state as well as to the intrinsic time dependence of ψ_*. The rate of change of the integrated local diagnostic energy of the ground state is
d E[q_*(t)]/dt=(δ H/δ q+δΨ_*/δ q)|_(q_*(t),t)[d q_*/dt]+∂Ψ_*/∂ t|_(q_*(t),t)≅∂Ψ_*/∂ t|_(q_*(t),t),
where the last equality is made possible by the gauge-fixing condition.
Note that the last partial derivative only applies to the explicit dependence on time in the gauge-fixed Casimir[In the following, when we speak of the Casimir, we will mean the gauge-fixed Casimir, as opposed to a generic Casimir. The context will usually suffice to make the distinction clear.]. Since q(t) and q_*(t) are on the same leaf, we have
d E[q_*(t)]/dt≅∂Ψ_*/∂ t|_(q(t),t),
where now the Casimir is evaluated on q(t), rather than on the ground state q_*(t).
Therefore, the rate of change of the Available Energy
dE_AE[q(t)]/dt≅δ E/δ q|_(q(t),t)[dq/dt]≅(δ H/δ q+δΨ_*/δ q)|_(q(t),t)[d q/dt],
can be expressed solely in terms of the rate of change of the local diagnostic energy evaluated on q.
The rate of change of the naive Hamiltonian reflects the action of the holonomic and non-holonomic components of the diabatic processes, while the change in the Casimir is due solely to the non-holonomic component. Interestingly, the non-holonomic component may actually increase the Available Energy, at least temporarily and/or locally.
The geometric structure of the leaf provides the "maximal" constraint: the evolution of the system in the presence of non-holonomic forces can relax the constraint, and in so doing frees up energy that would have been frozen in the ground state.
§ SPECIFIC FLUID MODELS
The theory developed in the preceding sections is quite general, and applies to any type of flow regime that can be described in the adiabatic limit with a Hamiltonian structure having a degenerate Poisson algebra.
In this section, we apply the theory to two types of flows: Shallow water flows, and
continuously stratified flows that can be studied in the Boussinesq approximation.
The first is chosen because it is often used as a first approximation to describe large-scale geophysical flows. Since it has only one Lagrangian invariant, it provides a good pedagogical introduction. The second is commonly used to study low-Mach number flows in which vertical oscillations are confined to vertical scales much smaller than a suitably defined compressibility scale.
While there are issues with the Boussinesq approximation, especially regarding the proper relation to thermodynamics <cit.>, it nonetheless provides a good model to study flows in which the internal energy reservoir, in an approximate sense, merely acts as a sink of energy dissipated via diabatic effects.
Most oceanographic models are based on the Boussinesq approximation, and therefore it is worthwhile to provide energy-based diagnostic tools applicable to Boussinesq flows.
The main result is that, in the flows considered here, the Casimir is related via a simple transformation to the Bernoulli invariant of the ground state, and that in strongly rotating systems, to leading order, the Casimir is independent of the details of the MDF. These results can be obtained relatively easily in the shallow water case. The generalization to Boussinesq flows is more involved. For this reason, we will isolate the main results as theorems and corollaries to allow readers who are not interested in the derivation to skip over the mathematical details.
§.§ Shallow water flows
Shallow water flows provide a relatively simple system where we can apply the ideas outlined in the previous section. While relevant in its own right from a Geophysical Fluid Dynamics point of view, it is mostly used here as a propaedeutic tool, having the advantage of possessing a single Lagrangian invariant, the potential vorticity. We will keep the discussion informal.
A more formal approach will be reserved for the more interesting case of continuously stratified flows in the Boussinesq approximation.
Shallow water flows are described by the pair (h,λ), where h is the local surface elevation, measured relative to a surface of constant geopotential height[In a rotating system, the geopotential height includes the "potential" of the centrifugal force.], and λ= v^♭=v_idx^i is the 1-form associated via the ♭ musical isomorphism to the velocity vector v=v^i∂/∂ x^i. Here and hereafter, the Einstein convention of summation over repeated indexes is applied to Latin indexes.
∂λ/∂ t=-i_ vΩ-d(gh+E_k),
∂ h𝔄/∂ t+ L_ v(h𝔄)=0,
where 𝔄 is the area element (a 2-form) of the manifold which contains the flow, E_k is the kinetic energy, and
the total vorticity Ω=dλ+Ξ is the sum of the relative vorticity dλ and the frame vorticity Ξ. In an inertial frame Ξ=0. Because of (<ref>), the role of 𝔇 is played by h𝔄.
For a flow in the shallow water approximation, the only Lagrangian invariant (up to a constant multiplicative factor) is the potential vorticity
p≡ω/h,
where ω=⋆Ω is the Hodge-dual of the vorticity Ω.
The naive Hamiltonian is
H[q]=∫_𝔻 h(E_k+gh/2)𝔄.
The Casimir that satisfies the gauge fixing condition takes the form
Ψ[q]=∫_𝔻 hΨ(p)𝔄.
A holonomic brake can be obtained by adding to the r.h.s. of (<ref>) a term proportional to d(⋆ d(hΦ)). Physically, it represents a conservative (so as not to affect the vorticity) force field that extracts energy from surface waves.
§.§.§ Ground state and gauge-fixed Casimir
The ground state on a leaf characterized by the MDF M(s), and the Casimir that specifies the gauge-fixed Hamiltonian are found solving
δ H/δ h=(E_k+g h+Ψ-pΨ_,p)𝔄=0,
δ H/δλ≅ h(⋆λ)+dΨ_,p=0,
∫_𝔻 hθ(p-s)𝔄=M(s), p=⋆(dλ+Ξ)/h,
for the three unknowns h_*, λ_*, and Ψ_*(p). To avoid notational clutter, we denote with (·)_,s_1… s_p the p-th derivative w.r.t. s_1,…,s_p (note that the indexes are to the right of the comma). We assume that M is differentiable, and that M_,s<0. In other words, we assume that there is a continuous distribution of potential vorticity, so that with the introduction of a suitable second coordinate t, (p,t) can be used as local[It is important to realize that more than one chart may be needed to cover the entire manifold 𝔻.] coordinates over the manifold.
Before considering actual solutions to (<ref>–<ref>),
we point out a few general features.
* Eq. (<ref>-<ref>) is a system of three equations relating 4 quantities (λ_*,h_*,Ψ_*,M). The first two equations define a steady state solution of the shallow water equations in terms of Ψ_*.
Indeed, from (<ref>) we have, with Φ_*≡-⋆λ_* being the flux form,
d(h_*Φ_*)=d(d(Ψ_*,p))=0,
by Poincare's lemma, thus (<ref>) is satisfied.
Substituting dλ_*=-⋆ ph_*=-ph_*𝔄 in the l.h.s. of (<ref>) and using (<ref>) we have
-i_ v(ph_*𝔄)+d(pΨ_*,p-Ψ_*)=-ph_*Φ_*+pΨ_*,ppdp=-pd(Ψ_*,p)+pΨ_*,ppdp=0,
thus satisfying (<ref>).
* If Ψ_* is a solution, so is Ψ_*+ap, where a is any constant. This indeterminacy simply reflects the fact that to fully determine the solution we need to specify the circulation on the boundary of the manifold. Indeed, the Casimir associated to ap is
a∫_𝔻 hp𝔄=a∫_𝔻 dλ=a∫_∂𝔻λ=aΓ,
where Γ is the circulation along the boundary of the manifold. In particular, if the manifold is closed (i.e., has no boundary) Γ=0.
* The first equation tells us that the Casimir is related to the quantity E_*k+gh_* familiar from Bernoulli's theorem. Though it does not appear to have an agreed-upon name, for convenience we will refer to it as the Bernoulli invariant.
Thus, if we know h_* and λ_*, say by integrating (<ref>-<ref>) with a holonomic brake until a steady state is reached, we obtain the Casimir via
Ψ_*=p∫_p_ min^pE_*k(s)+gh_*(s)/s^2ds.
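In practice, once a braked simulation has converged, (<ref>) reduces to a one-dimensional quadrature; a minimal Python sketch (the ground-state profiles passed in below are hypothetical placeholders) is:

import numpy as np

def casimir_from_bernoulli(p_grid, Ek_star, h_star, g=9.81):
    # Psi(p) = p * int_{p_min}^{p} (E_k*(s) + g h_*(s)) / s^2 ds,
    # evaluated with a cumulative trapezoidal rule; defined up to a term a*p.
    integrand = (Ek_star + g * h_star) / p_grid**2
    I = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p_grid))))
    return p_grid * I

# e.g. the geostrophic profile h_* = f/p of the low-Rossby example below (f = 1):
p = np.linspace(0.5, 1.5, 101)
Psi = casimir_from_bernoulli(p, Ek_star=np.zeros_like(p), h_star=1.0/p)
# Psi matches -g f/(2p) up to a term proportional to p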
§.§.§ The natural coordinates of the ground state
The form of the equations, and the considerations just made, suggest that the ground state is more conveniently expressed with a "natural" set of coordinates, which in the present case are the potential vorticity and a second, time-like coordinate t. The latter is defined so that the relationship between "geometric" (x^1,x^2) and natural coordinates (p,t) is locally given by integrating along the trajectories
dx^i=v_*^idt,
to which we associate the space-time 1-forms ϕ^i≡ dx^i-v_*^idt (the subscript _* reminds us that these are ground state fields).
To calculate the volume element in natural coordinates, consider the 2-form defined in space-time
𝔔=√(g)ϕ^1∧ϕ^2=𝔄-Φ∧ dt.
Here √(g) is the square root of the determinant of the metric in geometric coordinates. Let ζ: (p,t)→(x^1,x^2) be the (local) map from natural to geometric coordinates, and let ζ̂: (p,t)→(ζ(p,t),t) be the map that embeds space into space-time. Under the pullback, ζ̂^*𝔔=0, or
𝔄=Φ∧ dt=Ψ_,pp/hdp∧ dt,
which gives the volume form in natural coordinates. Alternatively, we can define t as the second coordinate such that the r.h.s. of (<ref>) is the volume form of the manifold (from now on, when multiplying simple differentials, we will follow standard practice and omit the exterior product symbol ∧).
Incidentally, note that (<ref>) is integrable since
dϕ^i∧𝔔=0, i=1,2.
In natural coordinates, v=∂/∂ t, thus if Ω=ω dpdt is the vorticity, from (<ref>) we have the following integrability condition
0=d(i_ vΩ_*)=-d(ω_* dp)=ω_*,t dpdt.
We want to consider under what conditions the ground state coincides with a constant geopotential surface, i.e. when dh=0 (recall that h is measured relative to the geopotential). Since the total volume is conserved, such a ground state has the minimum attainable potential energy consistent with conservation of volume. Setting h_*=const. in (<ref>) we obtain
0=ω_* dp-dEk_*.
The MDF is frame-invariant, by which we mean that observers in different frames of reference (inertial or otherwise) will measure the same MDF.
Thus, we can always take the point of view of an
inertial observer, for which the vorticity is given by
dλ=d(⋆Φ)=d(g_ptdp+g_ttdt)=(g_tt,p-g_pt,t)dpdt,
where the g_XY=g_ijx^i_,Xx^j_,Y are the covariant components of the metric in natural coordinates.
Given that g_tt=g_ijx^i_,tx^j_,t=2E_k*, we have
ω_*=Ek_*,p-x^i_,pg_ij(x^j_,tt+Γ^j_knx^k_,tx^n_,t),
where the Γ^j_km's are the Christoffel symbols describing the Levi-Civita connection of the manifold 𝔻. Substituted in (<ref>) we obtain the following two conditions
E_k*,t=0,
x^i_,pg_ij(x^j_,tt+Γ^j_knx^k_,tx^n_,t)=0.
The first condition states that the kinetic energy must be constant along streamlines. A sufficient condition for the second condition to be satisfied is that the streamlines are geodesics.
§.§.§ The low-Rossby number ground state of shallow flows in rotating channels
A particularly interesting application of the observations made so far is to the case of a shallow water flow contained in a rotating channel, with Coriolis frequency f. We seek a geostrophic ground state where the contribution of the relative vorticity dλ to the total vorticity is O( Ro) relative to the frame vorticity Ξ, the latter in Cartesian coordinates being fdx^1dx^2, where Ro is the Rossby number.
In a geostrophic ground state, the kinetic energy must be O( Ro^2), and thus to leading order the Bernoulli invariant is simply gh_*(p)=gfp^-1+O( Ro).
The MDF of such a ground state is narrowly distributed around a non-zero p̄, i.e.
M(p)=h̄L_1L_2 m((p-p̄)/Δ p),
with Ro≡Δ p/p̄ and h̄ an average depth. Vice versa, it is possible to show that if the MDF is narrowly distributed, there is a rotating frame in which the ground state is geostrophic. The interesting aspect of geostrophic ground states is that to leading order the Casimir is given by
Ψ_*=-fg/2p+O( Ro),
and thus to O( Ro) the local diagnostic energy does not depend on the details of the MDF.
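Indeed, substituting the leading-order Bernoulli invariant gh_*(s)=gfs^-1 into (<ref>) gives
Ψ_*=p∫_p_ min^pgf/s^3ds+O( Ro)=-fg/2p+fg/2p_ min^2p+O( Ro),
and the term linear in p can be dropped by virtue of the gauge freedom discussed above, since it merely fixes the circulation on the boundary.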
§.§.§ Non-holonomic dynamics on geostrophic leaves
The evolution of the Casimir of shallow-water geostrophic leaves under the influence of non-holonomic forces provides an interesting perspective on how the erosion of the potential vorticity constraint affects the Available Energy of a system. For simplicity, we choose a simple model for the non-holonomic dynamics, which includes the effects of the bottom frictional Ekman layers on the vorticity dynamics <cit.>. Denoting with T the spin-down time of an Ekman layer, the potential vorticity satisfies
∂ p/∂ t+ L_ v(p)=-⋆ dλ/Th,
i.e., the potential vorticity changes along trajectories by an amount proportional to the relative vorticity scaled with the local depth (Note that in a geostrophic flow, the r.h.s. is O( Ro pT^-1).). Thus, the change of the Casimir density along trajectories is to leading order
∂ (hΨ_*)/∂ t𝔄+ L_ v(hΨ_*𝔄)=h𝔄Ψ_*,p(∂ p/∂ t+ L_ v(p))=-fg/2p^2⋆ dλ/T𝔄.
Therefore, Ekman layers with cyclonic relative vorticity act as a sink of Available Energy along trajectories, whereas layers with anticyclonic relative vorticity are a source of Available Energy. Of course, both cyclonic and anticyclonic layers act as a net sink on the naive energy.
§.§ Stratified Boussinesq flows
A continuously stratified flow in the Boussinesq approximation is described by the pair (b,λ), where b≡ g(ρ_0-ρ)/ρ_0 is the buoyancy, and, as before, λ≡ v^♭=v_idx^i satisfying
∂λ/∂ t=-i_ vΩ-d(P+E_k)+bdZ,
∂ b/∂ t+i_ vdb=0,
dΦ=0,
where P is the pressure, Φ=⋆λ=i_ v𝔙 is the flux form, and Z the geopotential height <cit.>.
As before, the total vorticity Ω≡ dλ+Ξ is the sum of the relative vorticity dλ and the frame vorticity Ξ. Also, we assume that the geometry of the manifold 𝔻 that contains the fluid is Euclidean. Thus, there exist geometric coordinates (x^1,x^2,x^3) where the metric tensor assumes the simple form g_ij=1 if i=j and g_ij=0 if i≠ j.
Also note that (<ref>) implicitly assumes that the volume form 𝔙 is constant in time. While the extension to non-inertial frames whose volume form is not constant in time is possible, in the interest of simplicity we do not pursue it here.
In a continuously stratified Boussinesq fluid there are two independent Lagrangian invariants, the buoyancy b itself and the potential vorticity p≡⋆ (db∧Ω).
Note that if f(b) is any (smooth) function of b, then f(b) is also a Lagrangian invariant, and so is ⋆(df(b)∧Ω). This will be reflected in certain degrees of freedom available in the definition of the gauge-fixed Casimir.
The mass form 𝔇 can be replaced by the volume form 𝔙 of the manifold that contains the flow.
A holonomic brake is provided by adding a term proportional to (i_ vdb)db to the r.h.s. of (<ref>), which extracts energy from the internal wave field without affecting the component of the vorticity aligned along the buoyancy gradient.
The naive Hamiltonian is
H[q]=∫_𝔻(E_k-bZ)𝔙,
where E_k is the kinetic energy per unit mass,
while Casimirs have the general form
Ψ[q]=∫_𝔻Ψ(b,p)𝔙.
In terms of the Lagrange multiplier, the density of the gauge-fixed Casimir is
Ψ(b,p)=∫_b_ min^b(∫_p_ min^pψ(q,s)dq)ds.
§.§.§ The gauge-fixing condition
The Fréchet derivatives of the naive energy w.r.t. λ and b are trivial. We concentrate here on the variations of the Casimir written as in (<ref>).
The variation w.r.t. b is given by
δΨ/δ bδ b=(Ψ_,bδ b+Ψ_,pδ p)𝔙=(Ψ_,bδ b+Ψ_,p(⋆(d(δ b)∧Ω)))𝔙=
Ψ_,bδ b𝔙+Ψ_,pd(δ b)∧Ω≅Ψ_,bδ b𝔙-d(Ψ_,pΩ)δ b=
[(Ψ-pΨ_,p)_,b𝔙-Ψ_,ppdp∧Ω]δ b
while the variation w.r.t. λ is given by
δΨ/δλ∧δλ=Ψ_,pδ p𝔙=Ψ_,pdb∧ d(δλ)≅ d(Ψ_,pdb)∧δλ.
Given the MDF V(p,b), the ground state and the gauge-fixed Casimir are found solving
δ H/δ b≅[-Z+(Ψ-pΨ_,p)_,b]𝔙-Ψ_,ppdp∧Ω= 0,
δ H/δλ≅Φ+d(Ψ_,pdb)= 0,
δ H/δψ=∫_𝔻[θ(p( x)-p)θ(b( x)-b)]𝔙=V(p,b)
An attentive reader may have wondered why we did not enforce the incompressibility condition (<ref>) via an explicit Lagrange multiplier. This is not necessary, as (<ref>) guarantees that the flux form of the ground state is closed. Also, as before, we assume that the MDF is a continuous and differentiable function, i.e. buoyancy and potential vorticity are continuously distributed. More complicated cases, consisting of layers where buoyancy and/or potential vorticity are interleaved with layers where one or the other is constant, will not be considered here.
§.§.§ The gauge-fixed Casimir
The purpose of this section is to prove the following
Up to an inconsequential constant, the gauge-fixed Casimir is given by
Ψ_*=p∫_ p_ min^pB_*(s,b)/s^2ds,
where B_*=Ek_*(p,b)-bZ_*(p,b)+P_*(p,b) is the Bernoulli invariant
of the ground state expressed in natural coordinates.
As we did for the shallow water case, we introduce the natural set of coordinates, which now include the buoyancy, the potential vorticity and the time-like coordinate, in terms of which the volume 3-form is 𝔙=Ψ_,pp dbdpdt.
In natural coordinates, we write the total vorticity as
Ω_*=Ω^bdpdt+Ω^pdtdb+Ω^tdbdp.
Since Ω is closed by definition
(Ω^b)_,b+(Ω^p)_,p+(Ω^t)_,t=0.
From the definition of potential vorticity,
p=⋆(db∧Ω)=Ω^b⋆(dbdpdt)=Ω^b(Ψ_,pp)^-1.
We introduce Ψ̃_*≡ pΨ_*,p-Ψ_*.
Taking the Hodge star of (<ref>) and using (<ref>) we get
Ψ̃_*,b=-(Z_*+Ω^p),
Ψ̃_*,p=Ω^b,
where we have used the fact that Ψ̃_*,p=pΨ_*,pp and (<ref>). These two equations can be combined into a single equation
d(Ψ̃_*+Z_*b)=bdZ_*-Ω^pdb+Ω^bdp=bdZ_*-i_ v_*Ω_*.
We have derived the above expression in natural coordinates, where v_* has the simple expression v_*=∂/∂ t, but
(<ref>) is coordinate-free. Thus, comparing (<ref>) with (<ref>), we see immediately that Ψ̃_* is, up to a constant, the Bernoulli invariant of the ground state, and thus we have proven the theorem <cit.>.
§.§.§ The apedic theorem and its corollaries
Let us introduce a general definition:
An apedic[From the Greek ἄπεδος, meaning level, flat.] ground state is a ground state such that b_*=b_*(Z). By extension, the leaf in phase space to which an apedic ground state belongs is called an apedic leaf.
To characterize under what conditions a ground state is apedic we have the following
(Apedic theorem) Let the absolute vorticity of a ground state be
Ω_*=Ω^tdbdp+Ω^bdpdt+Ω^pdtdb.
The ground state is apedic if and only if
Ω^p_,t=0,
Ω^t_,t=0,
that is, the absolute circulation over loops embedded in surfaces of constant potential vorticity and the circulation over loops embedded in surfaces of constant "time" must be independent on "time" for a ground state to be apedic.
This follows from the integrability condition for (<ref>). Indeed, taking the exterior derivative of (<ref>) and leveraging the closedness of Ω, we obtain after some simple algebra
dZdb+Ω^t_,tdbdp+Ω^p_,tdtdb=0.
The ground state must satisfy (<ref>). Since dbdp and dtdb are independent bases of the module of 2-forms over 𝔻, and on an apedic manifold dbdZ=0 by definition, we have the proof of the apedic theorem.
Let us now explore some consequences of the apedic theorem. From (<ref>) we know that in an inertial system, the vorticity of the ground state
Ω_*=dλ_*=d⋆(Ψ_*,ppdbdp)=d(g_btdb+g_ptdp+g_ttdt).
For a manifold to be apedic, (<ref>) substituted in (<ref>-<ref>) requires that the metric satisfy
g_pt,bt-g_bt,pt=0,
g_bt,tt-g_tt,bt=0.
In terms of Cartesian coordinates, the apedic condition becomes
x^i_,px^i_,btt-x^i_,bx^i_,ptt=0,
x^i_,bx^i_,ttt-x^i_,tx^i_,btt=0.
which provides the proof to the first apedic corollary
A sufficient condition for a ground state to be apedic is that streamlines are geodesics and the velocity is uniform along streamlines.
Note that the Cartesian nature of the coordinates is of the essence.
Consider, for example, x^i's that are cylindrical coordinates (z,r,ϕ) and a ground state flow such that in its natural coordinates the x^i's at most depend linearly on t. Then, for example, we have from (<ref>) that
Ω^t_,t=2ϕ_,tr(r_,bϕ_,pt-r_,pϕ_,bt),
which in general will not be equal to zero (and indeed, the trajectories are not geodesics!). In other words, the fluid parcels are accelerating relative to an inertial frame. Consequently, we can expect that when the system is non-inertial, a generic ground state will not be apedic.
Also, it is worth remarking that the notion of apedicity, depends on the local direction of the plumb line, and thus it is not frame-independent. Thus, a fluid in solid body rotation will appear apedic to an observer in solid body rotation with the flow, but not to an inertial observer.
§.§.§ The Casimir of inertial apedic ground states
Under the assumption that the ground state is apedic in an inertial system, we can make further analytic inroads. Let E_k=x^i_,tx^i_,t/2=g_tt/2 be the kinetic energy of the ground state. Assuming that apedicity is due to the trajectories being inertial (recall that the first apedic corollary proves only the sufficiency condition), we have
Ω^b=g_tt,p-g_pt,t=2Ek_,p-Ek_,p=Ek_,p,
Ω^p=-g_tt,b+g_bt,t=-2Ek_,b+Ek_,b=-Ek_,b,
Then (<ref>-<ref>) can be combined
d(Ψ̃-Ek+Zb)=bdZ=d(∫^Zb dZ),
and since we know that Ψ̃ is the Bernoulli function of the ground state, we immediately deduce that the pressure distribution in apedic ground states is hydrostatic.
The definitions of local Available Potential Energy proposed in the past that rely on a Casimir <cit.>, usually <cit.>, include only the potential and pressure terms, combined in ∫^bZdb. This is not surprising, since the traditional Available Energy approach is to define the Background Potential Energy as the energy of the isochorically restratified flow. In our formalism, it amounts to enforcing conservation of buoyancy but not of potential vorticity, which is to say we are minimizing the energy on a wider leaf. If we want to enforce both buoyancy and potential vorticity conservation, the corresponding ground state in general must have kinetic energy, and thus the system will have a lower Available Energy.
This, however, does not exclude the existence of interesting classes of flows for which the traditional approach is valid. They will be discussed in the next section,
where we apply the theory to simple channel geometries, both inertial and rotating.
§.§.§ Boussinesq Channel flows
For arbitrary MDFs and geometries, (<ref>-<ref>) constitute a formidable nonlinear problem for the ground state. In these situations, to calculate the Casimir it is better to solve by other means (more likely, numerically) the Euler equations with a holonomic brake to find the ground state. Once the pair b_*,λ_* is known, the Casimir can be calculated from its Bernoulli invariant.
Analytic progress can be made if we are willing to sacrifice geometric complexity. Thus, here we consider simple "open" channel geometries, which geometrically can be described by Cartesian coordinates. For definiteness, let 0≤ x^1≡ Z≤ L_z be the vertical direction, 0≤ x^2≡ y≤ L_y the spanwise direction and 0≤ x^3≡ x≤ L_x the streamwise direction. The bottom, top and side walls are free-slip boundaries, while periodicity is assumed in the streamwise direction.
We seek what we may call 2-1/2 dimensional ground states: these are states such that x^i=x^i(b,p), i=1,2, and such that the velocity, which depends on the two dimensions (b,p), is normal to the x^1,x^2 plane (the 1/2 dimension).
Inertial channels and the second apedic corollary.
In inertial channels, we have the following:
Let V be the MDF of a flow contained in an inertial channel which can be described by a 2-1/2 dimensional ground state as defined above. Then the ground state is apedic.
Further, if, for any value of b, V_,pb is an even function of p, the
kinetic energy of the ground state is zero.
The method to construct an apedic ground state for simple flow geometries extends the technique introduced by <cit.> and consists of a series of steps:
– Determine the resorted height Z_*(b) solving (<ref>) in the limit p→ p_ min. This coincides with the isochoric restratification of the fluid.
– Compute the resorted distribution of potential vorticity along the spanwise direction y_*(p).
– Finally, determine the kinetic energy E_k* of the ground state.
Later, we will modify this recipe to calculate the ground state and associated Casimir of rotating flows characterized by a low-Rossby number ground state, with the Rossby number being defined as Ro=O(ω_*/f) where ω_* is the relative vorticity and f is the Coriolis frequency.
It is straightforward to verify that 2-1/2 dimensional ground states in inertial channels are apedic. Indeed, in geometric coordinates, the components of the velocity vector are [0,0,v_*(b(Z,y),p(Z,y))], thus dx=v(b,p)dt and dy=dZ=0, hence the velocity is uniform along geodesics. Rather than working in "natural" coordinates, we work with "hybrid" coordinates (b,p,x).
Since the ground state is apedic, Z_*=Z(b) (here, as we have done before, quantities used as coordinates will not carry the _* subscript), which can be calculated from the limit p→ p_min of (<ref>)
∫θ(b_*(Z)-s)dZdydx=-L_yL_x∫_b_ min^b_ Maxθ(b-s)dZ_*/dbdb=V(p_ min,s),
whose solution is
Z_*(b)=L_z(1-V(p_ min,b)/V(p_ min,b_ min)).
Its functional inverse b_*(Z) is the buoyancy distribution of the ground state. From this, we can calculate the hydrostatic pressure of the ground state.
Also, for later convenience, we define N^-2_*≡ Z_*,b(b).
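As a concrete illustration of the first step of the recipe, the resorted height can be evaluated directly from a tabulated one-argument MDF. The sketch below is purely illustrative: the profile V_pmin is a hypothetical model of V(p_min,b), not the MDF of any particular flow.

```python
import numpy as np

# Buoyancy grid and an illustrative V(p_min, b): the volume of fluid with
# buoyancy exceeding b, decreasing from the total volume at b_min to zero
# at b_Max.  Any monotonically decreasing model will do for the example.
L_z = 1.0
b = np.linspace(0.0, 1.0, 201)           # b_min = 0, b_Max = 1 (arbitrary units)
V_pmin = 1.0 - b**2                      # hypothetical one-argument MDF

# Resorted height: Z_*(b) = L_z (1 - V(p_min, b)/V(p_min, b_min))
Z_star = L_z * (1.0 - V_pmin / V_pmin[0])

# Ground-state buoyancy b_*(Z) is the functional inverse; N_*^{-2} = dZ_*/db
b_star = np.interp(np.linspace(0.0, L_z, 201), Z_star, b)
N2inv_star = np.gradient(Z_star, b)
```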
Next, consider the volume element
𝔙=dZdydx=|∂(Z,y)/∂(b,p)|dbdpdx.
Using again (<ref>)
|∂(Z,y)/∂(b,p)|=V_,pb/L_x,
from which y=y(p,b) is obtained by solving
V_,pb/L_x=|∂(Z,y)/∂(b,p)|=|V_,b(p_ min,b)/L_xL_y∂ y/∂ p|.
The solution that spans the domain is
y_*=L_yV_,b(p_ min,b)-V_,b(p,b)/V_,b(p_ min,b).
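The second step can be sketched in the same spirit; the separable toy MDF below, V(p,b)=(1-b^2)(1-Φ(p)) with Φ the normal distribution function, is again an assumption made only for the sake of the example.

```python
import numpy as np
from scipy.stats import norm

L_y = 1.0
p = np.linspace(-4.0, 4.0, 401)          # potential-vorticity grid
Phi = norm.cdf(p)

# For the toy MDF, V_,b(p_min, b) = -2b and V_,b(p, b) = -2b (1 - Phi(p)),
# so the resorted spanwise coordinate above collapses to y_* = L_y Phi(p),
# independent of b because the toy MDF is separable.
y_star = L_y * Phi
```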
Note that in this apedic ground state isopycnals are flat, whereas surfaces of constant potential vorticity are
not, and that the relationship between natural and geometric coordinates is invariant under the dilation
V→ e^s V.
At this point, all we have left is to calculate the kinetic energy of the ground state. By construction, we must have
λ_*= Fdx,
and since Ω=dλ=F_,bdbdx+F_,pdpdx, using the definition of potential vorticity (<ref>) and (<ref>) we may try as a first Ansatz a solution to
F_,p=pV_,pb/L_x,
with the constant of integration set along planes of constant buoyancy so that the total momentum along the x direction is conserved.
In the special case
V_,pb=δ(p)Ṽ_,b,
which corresponds to a leaf characterized by a uniformly zero distribution of potential vorticity,
F and the kinetic energy of the ground state are zero. The Casimir then reduces (up to a factor pf(b) which does not concern us) to
Ψ_*=∫_b_ min^bZ(s)ds,
and the ground state to a quiescent fluid isochorically restratified according to (<ref>).
Two-dimensional flows contained in a vertical plane have zero potential vorticity, and thus satisfy (<ref>). In this case, we recover the same definition of Available Energy used by Winters et al.
Consider a flow at time t which evolved from a purely two-dimensional flow at t=0, i.e. at t=0 the MDF was V_,p=V_,bδ(p). As instabilities develop, the delta-like distribution in the MDF develops tails. Let us assume that in the absence of any mechanism that breaks the symmetry, the potential vorticity that the diabatic processes generate ex nihilo is equally distributed around the origin. In other words, we assume that whatever mechanism generates potential vorticity, it is not biased against either positive or negative potential vorticity.
Then, without loss of generality, we can write the MDF at time t as
V_,pb≃L_yL_xN_*^-2(b)/σ(b)w(p/σ(b)),
where w(x) is a suitable even function. Analysis of numerical experiments indicates that a stretched exponential appears to give a good fit in different configurations (figure <ref>). As long as it decays sufficiently rapidly, we can replace p_min with -∞ and p_Max with ∞. σ(b) measures the spread (i.e. the variance if the distribution were Gaussian) in potential vorticity near a given buoyancy level. Denoting with W(x)=∫_-∞^x tw(t)dt and with W_n=∫_-∞^∞W^n(t)dt, integration of (<ref>) yields for the velocity the following expression (setting the integration constant to zero for the moment)
F(p,b)=∫_-∞^ppV_,pb/L_xdp=L_y W(p/σ(b))N_*^-2(b)σ(b),
from which the kinetic energy per unit volume of the ground state can be easily calculated
Ek_*=∫_-∞^+∞(∫_b_min^b_Max F^2/2 V_,bp db )dp/∫_-∞^+∞(∫_b_min^b_Max V_,bp db )dp=
L_y^2/2(W_2/L_z)∫_0^L_z(N_*^-2(b_*(Z))σ(b_*(Z)))^2 dZ,
the inescapable consequence of which is that the kinetic energy per unit volume of this point on the leaf grows as the square of the channel width L_y. While we have derived this result assuming that the MDF has the particular form given by (<ref>), it fundamentally rests on the fact that under the dilation V→ e^sV we have F→ e^s F and Ek→ e^2s Ek. Therefore, what we have derived cannot be the ground state that we are seeking. To understand what went wrong, and to find a cure for it, in figure <ref> we plot F calculated using (<ref>), where the MDF was obtained from a DNS of stratified Couette flow <cit.> (indicated with the label 1).
In this setup, the Lagrangian parcels with high negative potential vorticity are shunted to one side of the domain, most of the domain is filled by the (many) parcels with little potential vorticity, while the parcels with large positive vorticity are pushed to the other side of the physical domain. We can immediately see two problems with this approach: first, having separated the large-negative from the large-positive potential vorticity parcels, we have created a large, sustained shear that forces the velocity to reach a large value in the center of the domain; and second, while this setup does not violate a free-slip boundary condition at the side walls, it cannot accommodate periodic boundary conditions.
Indeed, no matter what the details of the MDF are, a single chart (b,p,x)→(Z,y,x) cannot cover the manifold if the latter is periodic in the y direction.
Let us explore then what happens if we allow two charts to cover the manifold. The first chart maps the (b,p,x) space to the half of the channel such that 0≤ y≤ L_y/2, while the second covers the other half. Further, since the MDF is an extensive quantity by definition, we require that each half of the physical domain contributes exactly one half to the total MDF. In each half, we proceed exactly as we did before, and since F(p,b)p→±∞⟶0 it is possible to stitch together the velocity at the edge of every subdomain to obtain the profile shown on the right panel of figure <ref> labelled (2). Because in each half the MDF is half of the original one, the energy per unit volume in each half (an intensive quantity) is now 25% of the energy per unit volume obtained using only one chart. As a bonus, the velocity field is now periodic. Again, what makes this possible is the fact that V_,pb is an even function of p.
Of course, there is no reason to stop at two charts. The procedure can be repeated with 4,8,…, charts, each covering 1/4,1/8,… of the channel, each time obtaining a field with a kinetic energy per unit volume reduced by a further factor of four, i.e. 1/16,1/64,… of the single-chart value (figure <ref> shows the profiles obtained using one, two and four charts). Thus, we generate a sequence of velocity fields in which potential vorticity is interleaved at finer and finer scales, while at the same time reducing the overall kinetic energy by a factor 4 with each iteration. Barring the existence of a cut-off scale, we have proven the second apedic corollary.
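The factor-of-four reduction at each doubling can be checked numerically. The sketch below is a one-dimensional caricature with a Gaussian spread of potential vorticity and the prefactor N_*^{-2}σ set to one (assumptions made only to keep the example minimal): each of the n charts carries 1/n of the MDF, and the kinetic energy per unit volume falls by a factor of four at every doubling.

```python
import numpy as np

p = np.linspace(-6.0, 6.0, 4001)
dp = p[1] - p[0]
w = np.exp(-p**2) / np.sqrt(np.pi)           # even PV distribution, unit area
W = -np.exp(-p**2) / (2.0 * np.sqrt(np.pi))  # W(x) = int_{-inf}^x t w(t) dt

L_y = 1.0
for n in (1, 2, 4, 8):
    F = (L_y / n) * W                        # each of n charts carries 1/n of the MDF
    Ek = np.sum(0.5 * F**2 * w) * dp / (np.sum(w) * dp)   # KE per unit volume
    print(n, Ek)                             # drops by a factor of 4 per doubling
```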
Naturally, it is possible to consider initial conditions in which the symmetry in p is broken ab initio, even in the absence of rotation. For example, consider filling a channel, which, as before, has dimensions (L_z,L_y,L_x), with a fluid whose buoyancy
has a uniform gradient along the spanwise direction db=N_0^2dy, and flowing such that λ=Δ v_0 (Z/L_z)dx, so that Ω=-Δ v_0/L_zdxdz. Thus, the potential vorticity is uniform and equal to -N^2_0Δ v_0/L_z. We can handle this singular limit by "smearing" the potential vorticity distribution around its constant value by an amount O(ϵ), carrying out the calculation, and finally taking the ϵ→0 limit. In this case, the buoyancy of the ground state is such that db_*=N^2_0(L_y/L_z)dZ and λ_*=Δ v_0(y/L_y)dx. Interestingly, the ground state has as much kinetic energy as the initial state, and thus the Available Energy per unit volume is simply the difference between the potential energy of the initial and ground state (Δ b L_z/12).
Rotating channels: geostrophic ground states.
We want to consider here the ground states of flows contained in rotating channels where the frame vorticity is much larger than the relative vorticity. In other words, we look for ground states characterized by a small value of the Rossby number Ro≡ O(ω/f), where ω is the order of magnitude of the relative vorticity and f the Coriolis frequency. As we did in the shallow-water case, we call low-Ro ground states geostrophic ground states. We work with natural (b,p,t) coordinates assuming that the ground state is 2-1/2 dimensional.
Under these assumptions, the main source of non-inertiality along trajectories in the ground state is due to the frame vorticity, which we write as
Ξ=fdydx=Ξ^tdbdp+Ξ^bdpdt+Ξ^pdtdb,
with
Ξ^t=f(y_,bx_,p-y_,px_,b),
Ξ^b=fy_,px_,t,
Ξ^p=-fy_,bx_,t,
where f is the Coriolis parameter.
Under the 2-1/2 assumption, x_,t=√(2Ek), and (<ref>) becomes
dΨ=f√(2Ek) dy-Zdb+O( Ro),
where we made use of (<ref>-<ref>).
To O( Ro) the integrability condition
f(√(2Ek))_,Z dZdy=b_,y dZdy
shows that the ground state is in the so-called thermal wind balance <cit.>. As we did for flows in inertial channels, we endeavor to construct
the ground state explicitly by seeking first the coordinate maps Z_*=Z(b,p) and y_*=y(b,p).
Along a streamline extending from one end of the domain to the other, ∫ dt=L_x/√(2Ek). From the definition of MDF
L_x/√(2Ek)Ψ_,pp= V_,pb,
which, combined with (<ref>) gives
fy_*,p=pV_,pb/L_x.
A second equation involving the coordinates is (<ref>),
|∂(Z_*,y_*)/∂(b,p)|=Z_*,by_*,p-Z_*,py_*,b=V_,pb/L_x.
In choosing the sign, we assume that f>0.
We seek solutions of (<ref>-<ref>) in power series of (b-b_ min) as follows
Z_*=∑_n=1^∞ Z_n(p)(b-b_ min)^n,
y_*=∑_n=0^∞ y_n(p)(b-b_ min)^n,
where we assume that the level of zero motion is the flat isopycnal b=b_ min. We also write
V_,pb/L_x=∑_n=0^∞F_n(p)(b-b_ min)^n.
as power series.[From a computational point of view, it may be more advantageous to use orthogonal polynomials. Here, we use a simple power series for convenience.] Inserting (<ref>) into (<ref>) we obtain the coefficients of the power series for y as
y_n=f^-1∫^p_p_ minsF_n(s)ds,
while combining (<ref>) and (<ref>)
∑_l=0^∞∑_n=1^l+1(nZ_ny_l+1-n,p-(l+1-n)Z_n,py_l+1-n)(b-b_ min)^l=∑_l=0^∞ F_l(b-b_ min)^l,
which yields the following upper triangular set of algebraic equations for the coefficients Z_n of the series for Z
∑_n=1^l+1(nZ_ny_l+1-n,p-(l+1-n)Z_n,py_l+1-n)=F_l.
The equation for the first term
Z_1y_0,p=F_0
can be readily solved using (<ref>)
Z_1=f/p,
while the other terms satisfy the following recurrence relation
Z_l+1=[(l+1)y_0,p]^-1[lZ_1,py_l-∑_n=2^l(nZ_ny_l+1-n,p-(l+1-n)Z_n,py_l+1-n) ], l≥ 1.
Note how Z_l+1 depends on the y_n's with n≥ 1.
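Because the system is upper triangular, the series is straightforward to evaluate numerically. The sketch below assumes the coefficients F_l(p) are supplied as arrays on a grid with p>0 and F_0>0, and approximates the p-derivatives by finite differences; it illustrates the recurrences above and is not production code.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def ground_state_series(F, p, f, lmax):
    """Series coefficients y_n(p) and Z_n(p) of the geostrophic ground state.

    F is an array of shape (lmax+1, len(p)) holding F_0(p), ..., F_lmax(p);
    p is the potential-vorticity grid (assumed positive); f is the Coriolis
    parameter.  Derivatives in p are taken with np.gradient.
    """
    y = np.zeros((lmax + 1, len(p)))
    for n in range(lmax + 1):                # y_n = f^{-1} int_{p_min}^p s F_n(s) ds
        y[n] = cumulative_trapezoid(p * F[n], p, initial=0.0) / f
    yp = np.gradient(y, p, axis=1)           # y_{n,p}

    Z = np.zeros((lmax + 2, len(p)))
    Zp = np.zeros_like(Z)
    Z[1] = f / p                             # leading coefficient Z_1 = f/p
    Zp[1] = np.gradient(Z[1], p)
    for l in range(1, lmax + 1):             # upper triangular recurrence
        acc = l * Zp[1] * y[l]
        for n in range(2, l + 1):
            acc -= n * Z[n] * yp[l + 1 - n] - (l + 1 - n) * Zp[n] * y[l + 1 - n]
        Z[l + 1] = acc / ((l + 1) * yp[0])
        Zp[l + 1] = np.gradient(Z[l + 1], p)
    return y, Z
```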
Having determined the map from natural to physical coordinates, we now calculate the kinetic energy of the ground state, by solving
the thermal wind equation (<ref>), which expressed in (b,p) coordinates reads
(√(2Ek))_,py_,b-(√(2Ek))_,by_,p=f^-1Z_,p.
Once again, we look for a power series solution
√(2Ek)=∑_l=0^∞v_l(p)(b-b_ min)^l,
and, since we have assumed that b=b_ min is the level of no-motion, v_0=0.
Substituting into (<ref>) and rearranging we again obtain a triangular algebraic set of equations for the v_l's (no surprise here, as we are dealing with the same nonlinear operator)
v_1=0,
v_l+1=[ -Z_l,p+f∑_n=1^l (nv_l+1-n,py_n-(l+1-n)v_l+1-ny_n,p)]/[f(l+1)y_0,p],l≥ 1.
As before, the v_l's with l≥ 3 depend on the y_n's with n≥ 1.
In particular,
fy_0,pv_2=f/(2p^2).
A special case is when F_n=0, n>0, i.e. the VDF is linear in the buoyancy. In this case, the solution has the very simple form
y_*=f^-1∫^p_p_min sF_0(s)ds, Z_*=f(b-b_min)/p, √(2Ek_*)=f(b-b_min)^2/(2p^2)[pF_0(p)]^-1.
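This closed-form solution can be verified symbolically: with y_{*,b}=0 and fy_{*,p}=pF_0, it satisfies the Jacobian condition and the thermal wind equation exactly. A short check with sympy, leaving F_0 unspecified:

```python
import sympy as sp

b, p, f, bmin = sp.symbols('b p f b_min', positive=True)
F0 = sp.Function('F_0')(p)

Z  = f*(b - bmin)/p                          # Z_* of the special case
v  = f*(b - bmin)**2/(2*p**2)/(p*F0)         # sqrt(2 Ek_*)
yp = p*F0/f                                  # y_{*,p}; here y_{*,b} = 0

# Jacobian condition: Z_,b y_,p - Z_,p y_,b = F_0 (linear case)
print(sp.simplify(sp.diff(Z, b)*yp - F0))                   # -> 0
# Thermal wind: v_,p y_,b - v_,b y_,p = f^{-1} Z_,p
print(sp.simplify(-sp.diff(v, b)*yp - sp.diff(Z, p)/f))     # -> 0
```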
With the complete solution in hand, it is now time to revisit the small-Ro assumption made at the beginning.
The goal is to characterize the MDFs that lead to small-Ro ground states.
It is straightforward to calculate the relative vorticity of the ground state
Ω_*=-(2Ek)_,bdtdb+(2Ek)_,pdpdt.
Let us start from the special case that leads to (<ref>). The most stringent condition is
Ro=O((√(2Ek_*))_,Z/f)=L_z/(f^2pF_0(p))≪ 1
We can estimate
O(pF_0)=pL_yL_z/(Δ pΔ b)=pL_y/(Δ p N^2),
where N^2=Δ b/L_z. Here Δ p is a measure of the spread of potential vorticity, e.g. the rms of the fluctuations, Δ b is a measure of the spread of buoyancy and p the mean potential vorticity (which, when needed, can be estimated as fN^2).
Thus, the ground state satisfies the low-Ro condition if the following condition on the spread of PV
Δ p/p≪L_yL_z/L_R^2,
where L_R≡ L_zN/f is the internal Rossby radius of deformation, is satisfied.
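For orientation, take illustrative mid-latitude values (assumed here only for the sake of the example): f=10^-4 s^-1, N=10^-2 s^-1, L_z=100 m and L_y=100 km. Then L_R=10 km and the admissible fractional spread of potential vorticity is Δ p/p≪ 0.1.

```python
f, N, L_z, L_y = 1.0e-4, 1.0e-2, 100.0, 1.0e5   # SI units, illustrative values only
L_R = L_z * N / f                                # internal Rossby radius: 1.0e4 m
print(L_y * L_z / L_R**2)                        # bound on Delta p / p: 0.1
```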
Let us now consider the more general case of a MDF which is not linear in the buoyancy, but which still satisfies (<ref>). We have
O(pF_n)= Ro^-1(L_R^2/Δ b)(Δ b)^-n,
whence
O(fy_n)=O(Δ p pF_n)=fL_y(Δ b)^-n,
O(fy_n,p)=O(pF_n)= Ro^-1(L_R^2/Δ b)(Δ b)^-n.
Using (<ref>), we have
O(Z_2(b-b_min)^2)= Ro(L_yL_z/L_R^2)L_z.
Given the nature of the recurrence relation (<ref>), it follows that all the other terms in the series
for Z are O( Ro), and thus
Z_*=f/p(b-b_ min)+O( Ro).
Along essentially the same lines we have that √(2Ek_*)=O( Ro), and thus to O( Ro) the pressure in the ground state is hydrostatic.
Hence
Ψ=-f/2(b-b_ min)^2/p+O( Ro),
which shows that the Casimir of geostrophic leaves, defined as leaves characterized by a geostrophic ground state (i.e., a low-Ro state) does not depend, to leading order, on the details of the MDF that defines the leaf to which the ground state belongs.
A field on a geostrophic leaf does not need to be in geostrophic equilibrium, but the ageostrophic component must be such that the overall spread in potential vorticity is small when viewed over the entire field.
As long as the diabatic dynamics does not cause a significant spread in potential vorticity, the flow remains on a geostrophic leaf and the gauge-fixed Hamiltonian is, to leading order, independent of the MDF. Thus, the local diagnostic energy has, to O( Ro), a universal character.
§.§ The non-holonomic dynamics of Casimirs in Boussinesq flows
Once the gauge-fixed Casimir is known, non-holonomic effects on the Available Energy can be analyzed by considering how the Casimir evolves along Lagrangian trajectories. In apedic manifolds characterized by an even distribution of potential vorticity, our theory recovers the standard SBB local APE formulation (here, we reinstate the qualifier Potential, since the ground state has no kinetic energy). In particular, under the Boussinesq approximation, the sink of APE is quantified by -κψ_,bb|∇ b|^2, where κ is the diffusivity of the stratifying agent. Archetypal flows for small-scale mixing (e.g., shear layers, Couette flows,…) fall into this category. On geostrophic manifolds, the evolution of the Casimir depends on the details of the non-holonomic brake. Assuming a standard diffusion term in the buoyancy equation, the sink term due to mixing of the stratifying agent -κψ_,bb|∇ b|^2 is equal to κ f/p|∇ b|^2+O( Ro). However, turbulent momentum fluxes will change the potential vorticity along trajectories and thus modify the Available Energy in either direction, just as we saw for the shallow-water case. The specific way depends in general on how the turbulent fluxes are modeled, and the details are left for future work.
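The geostrophic form of the sink term quoted above follows in one line from the Casimir of geostrophic leaves; a quick symbolic check:

```python
import sympy as sp

b, p, f, bmin = sp.symbols('b p f b_min', positive=True)
psi = -f*(b - bmin)**2/(2*p)      # Casimir of geostrophic leaves, to O(Ro)
print(sp.diff(psi, b, 2))         # -> -f/p, so -kappa psi_,bb |grad b|^2 = kappa (f/p) |grad b|^2
```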
§ RELATION TO OTHER DEFINITIONS OF LOCAL AND GLOBAL AVAILABLE ENERGY
Examples of global <cit.> as well as local Available Energy definitions <cit.> applied to study the effects of mixing on the energetics of three-dimensional flows have been previously considered in the literature.
At the level of a global definition of Available Energy, the novelty of the approach followed in sec. <ref> lies in setting a framework that allows one, at least in principle, to calculate the ground state from the MDF that describes a generic state, though it can also be used to calculate the MDF associated with a prescribed ground state. While the framework is amenable to analytic treatment in some special cases, it suggests a way to calculate numerically the ground state via the application of suitably defined holonomic brakes. Once the ground state is known, the gauge-fixed Casimir can be calculated, and thus the local diagnostic energy is obtained.
Out of the cases that can be treated analytically, we show that for an important class of flows, the isochoric restratification proposed by Winters et al. coincides with the ground state obtained with our framework.
The major departure from previous local formulations of APE
<cit.>, aside from the way the Casimir is calculated, resides in not subtracting the contribution of the ground state at the local level. If we were to follow Scotti and White's "recipe", we would define the local diagnostic energy as E^ SW(q)= H_*(q)- H_*(q_*), with H_* given by the gauge-fixing condition (<ref>).
Subtracting the gauge-fixed Hamiltonian density of the ground state in the local definition has an intuitive appeal, because the global Available Energy would coincide with E^ SW[q], whereas in the approach followed here the integral of the local diagnostic energy does not have an immediate physical interpretation. However, the quantity of interest is the rate of change of the Available Energy under diabatic conditions, which coincides with the rate of change of E[q]. Thus, the local diagnostic energy allows one to explore at the local level the processes that are responsible for energy degradation, and at the local level there are reasons to question the inclusion of the ground state energy in the local definition.
From a functional point of view, the inclusion of H_*(q_*) amounts to no more than a "constant" term, in the sense that its variation w.r.t. q is zero, and it is needed neither to impose the gauge condition nor to ensure the convexity property.
There is also a subtler reason why it is not advisable to include q_* in the definition at the local level. Consider the local evolution of H_*(q_*) under the flow v associated to the point q in phase space. For simplicity, we consider the stratified Boussinesq case, where the volume form is an integral invariant. Then H_*(q_*)= H_*(λ_*,b_*,Z) (we do not need to worry about the explicit time dependence of H_*) and
∂( H_*𝔙)/∂ t+ L_ v( H_*𝔙)=
δ ( H_*𝔙)/δλ_*(∂λ_*/∂ t+ L_ v(λ_*))
+δ ( H_*𝔙)/δ b_*(∂b_*/∂ t+ L_ v(b_*))
+δ ( H_*𝔙)/δ Z L_ v(Z)
≅δ ( H_*𝔙)/δ Z L_ v(Z),
where, as before, ≅ means equal up to an exact form. For an apedic manifold,
δ ( H_*𝔙)/δ Z L_ v(Z)=-b_*(Z)d(ZΦ)=-d(P_*(Z)Φ)≅ 0, Φ=i_ v𝔙, P_*=∫^Z b_*(s)ds,
so that in the adiabatic limit, E^ SW is, up to boundary terms, an integral invariant.
However, under more general ground states (e.g., geostrophic), E^ SW is not an integral invariant, whereas E is. Therefore, referencing the local diagnostic energy to the local background state does not contribute to the desirable properties of the local diagnostic energy, while, aside from special circumstances, it adds terms of dubious physical interpretation to the local evolution equation.
§ CONCLUSION
In this paper, we have developed a framework to diagnose diabatic effects on systems that in the adiabatic limit are described by a degenerate Hamiltonian structure. The degeneracy has two important consequences. The first is that the phase space foliates into a collection of leaves, each labeled by a particular volumetric distribution of the Lagrange invariants associated to the Casimirs. Trajectories in phase space are contained within a leaf. The Available Energy on a leaf is thus the energy referenced to the minimum energy attainable on the leaf. In other words, it is the energy that can be extracted from the system without disturbing the dynamics of the Lagrangian invariants.
The second consequence is that the Hamiltonian that describes the flow in the adiabatic limit possesses a local (in phase space) gauge symmetry. Thus, there exists a class of dynamically equivalent Hamiltonians.
We show that a specific gauge-fixing condition can be imposed to select a specific Hamiltonian that can be used to define a local diagnostic energy with the following property: the temporal rate of change of the Available Energy under diabatic conditions is given by the diabatic evolution of the gauge-fixed Hamiltonian.
The present study provides expressions of the gauge-fixed Hamiltonians for a number of geophysically relevant flows, together with general properties of the reference state.
For non-rotating flows in the Boussinesq limit, we have shown that the local diagnostic energy based on the gauge-fixed Hamiltonian recovers the Available Potential Energy of <cit.>, which forms the basis of the <cit.> and <cit.> applications of APE to mixing, provided the distribution of potential vorticity in the flow is even around the origin. Most archetypal flows used to study small-scale mixing start as two-dimensional flows contained in a vertical plane. For such flows, under the assumption that the diabatic dynamics does not favor negative over positive potential vorticity, the standard approach remains correct.
Our definition of local diagnostic energy extends naturally to rotating flows: under the effect of rotation, the conservation of potential vorticity imposes a strong constraint, which is not accounted for in Winters et al.'s definition of APE. In particular, when the flow is obtained from a general-form perturbation which preserves both the potential vorticity and the buoyancy distribution of a low-Rossby number state in near-geostrophic equilibrium,
to lowest order in the Rossby number, we show that the expression for the local diagnostic energy has a universal character.
Once the functional form of the local diagnostic energy is known, it can be used to study the effect of diabatic processes. The effect of these processes, which in general break the conservation of the Lagrangian invariants on the change of the Available Energy, can be studied following the evolution of the local diagnostic energy along Lagrangian trajectories.
Using a simple model for diabatic processes in shallow-water flows, we have shown that such processes can locally increase the Available Energy.
This is not too surprising, since diabatic processes erode the potential vorticity constraint, releasing energy otherwise locked up in the ground state field.
The results were derived under the assumption that the ground state is unique. Of course, we can envision situations where the naive Hamiltonian has multiple minima on a given leaf. In this case, it cannot be ruled out that the application of different holonomic brakes may lead to different ground states, especially if the system is close to the separatrix between the basins of attraction. On the other hand, if the system is not too far from a minimum, then it is not unreasonable to assume that most holonomic brakes will lead to the same extremal point, even though there may exist other ground states with overall less energy.
We considered applications of the general formalism developed in section <ref> to flows in which thermodynamic effects are neglected.
However, the formalism is quite general, and can be applied to any flow regime described by a Hamiltonian on a phase space with a degenerate Poisson algebra. In particular, it could be applied to flows where the internal energy plays a more active role, such as low Mach number applications <cit.>.
In conclusion, the formalism presented in this paper extends Margules' original idea to highlight the role that multiple constraints play in the available energetics framework.
Since the Mass Distribution Function, or, for incompressible flows, the Volume Distribution Function plays a central role in defining the gauge-fixing condition, our approach applies to flows contained in a finite volume.
However, the possibility should not be discarded that other ways to enforce the constraints exist which do not depend on the system having a finite volume.
Since the applicability of any local diagnostic tool is constrained by the data available for the analysis, it can rarely be considered independent of the flow data itself.
The research described in this paper did not involve any living or deceased organism.
AS conceived the study and drafted the manuscript. AS and PYP worked together on the calculations, PYP heavily edited the manuscript and contributed some of the figures.
We have no competing interests.
The authors acknowledge the support by the National Science Foundation Grant N^o OCE-1155558.
We would like to thank Dr. K. Lamb, Dr. A. Hoggs and Dr. R. Tailleux for numerous engaging discussions on the subject.
AS would like to thank Dr. E. Santilli for opening his eyes to the beauty of exterior calculus.
| A major challenge in contemporary oceanography is to understand the role of small scale turbulence in regulating the oceanic Meridional Overturning Circulation (MOC), the process that over
millennial time scales exchanges surface with deep water <cit.>. Given the large storage capacity of the deep ocean for heat and greenhouse gases,
understanding the drivers of the MOC is essential for climate prediction <cit.>.
<cit.> used energetics arguments in an attempt to quantify the amount of energy required to sustain the MOC,
but significant gaps remain in quantifying the pathways that energy injected
at large scales
by winds and tides take to reach the small
scales at which mixing occurs <cit.>.
Particularly vexing is the problem of estimating the energetic cost incurred when turbulent processes irreversibly mix the stratifying agents
<cit.>, that is, how to estimate the energy input necessary to sustain a given rate of dissipation of the variance of the stratifying agents.
For a fluid in the Boussinesq approximation, <cit.> used the concept of Available Potential Energy (APE) as a diagnostic tool to achieve such a relation. APE has a long history, starting with the pioneering work of <cit.> and <cit.>, later developed by <cit.> into a working tool, the so-called Lorenz Energy Cycle, which is still used today <cit.>. Essentially, Margules' idea was to calculate a minimum-energy state, compatible with certain constraints, and use the energy of such state to "gauge" the amount of potential energy effectively available to produce mechanical work, yielding a definition of the APE of the system.
Winters et al. considered the global effects of mixing (i.e., diabatic effects) on a closed system. For this purpose, they only needed a definition of APE for the system as a whole. There are however situations that call for a definition of APE that apply to localized regions of the domain, e.g. when measuring the energy carried by nonlinear internal waves <cit.>, when partitioning energy between mean and fluctuating components in a cyclone <cit.>, in determining the efficiency of different mixing systems <cit.>, and in studying mixing and turbulence in spatially inhomogeneous systems <cit.>. In all these studies, the starting point was the local definitions of APE developed in the early 1980's by <cit.> for incompressible flows and by <cit.> for compressible flows, based on a reference state that depends only on the mass distribution, though <cit.> already suggested that more general reference states can be considered. Indeed, in the atmospheric literature, more general reference states have been considered <cit.>, whereas in the oceanographic applications the Lorenz paradigm still dominates <cit.>.
For a recent review see <cit.>.
The inadequacy of the standard definition of energy as the sum of kinetic, potential and, for compressible fluids, internal energy to quantify the capacity of the system to do actual work is connected to the degeneracy of the Poisson algebra in the Hamiltonian formulation of the problem <cit.>. By degeneracy, we mean that the center of the Poisson algebra contains non-constant functionals of the phase space, the so-called Casimir functionals, or Casimirs for short. The set of Casimirs is an ideal of the algebra, and thus can be used to define a notion of equivalence
on the set of Hamiltonians: Two Hamiltonians are equivalent, in the sense that they give the same dynamics, if they differ by a Casimir. In other words, the Hamiltonian possesses a local (in phase space) gauge symmetry.
From this point of view,
the local APE formulations can be seen as selecting, out of a specific equivalence class, a Hamiltonian that satisfies one or more additional conditions <cit.>, given by a gauge-fixing condition. As we shall clarify later, the Casimir includes the effects of constraints on the system.
In this paper, we aim to revisit the issue of constructing a local diagnostic energy that can be used to diagnose the effect of diabatic processes on the energetics of fluid systems that, in the adiabatic limit, are described by a Hamiltonian with a degenerate Poisson algebra. In particular we are seeking a quantity with the following properties:
* Accounts for relevant constraints.
* Locality, and such that it satisfies (up to diabatic effects) suitable conservation laws.
* Can be connected to Margules' intuitive notion of Available Energy when the latter is properly formalized.
* Convexity in phase space, so that it can be meaningfully partitioned into a mean and eddy (or turbulent) component.
* Its evolution under diabatic conditions reflects the loss of Available Energy.
Following <cit.>, we first lay down in sec. <ref> the general theoretical framework that applies to generic systems that in the adiabatic limit have a Hamiltonian description characterized by a degenerate Poisson algebra.
From there, we introduce the specific gauge-fixing condition that, when applied to the equivalence class of Hamiltonians, identifies the one whose density is the local diagnostic energy that we seek.
In practice, the gauge-fixing condition identifies which Casimir needs to be added to the energy. At the same time, we also obtain the equations that specify the appropriate reference state.
We then consider two models which are commonly used in oceanography and which have an adiabatic limit described by a degenerate Hamiltonian structure: the incompressible shallow water equations and the incompressible Euler equations in the Boussinesq approximation for a continuously stratified flow. Both models are considered in inertial and non-inertial (i.e., rotating) frames. For these systems, we give general properties for both the gauge-fixed Casimir and the reference state. In simple geometric configurations, we calculate analytically the solution or a suitable approximation. An interesting result that applies to both shallow water and Boussinesq equations is that, in rotating frames, the local diagnostic energy associated with low Rossby number reference states has a universal character.
http://arxiv.org/abs/1701.07889v1 | 20170126221133 | Effect of tetrahedral shapes in heavy and superheavy nuclei | [
"P. Jachimowicz",
"M. Kowal",
"J. Skalski"
] | nucl-th | [
"nucl-th"
] |
Institute of Physics,
University of Zielona Góra, Szafrana 4a, 65-516 Zielona
Góra, Poland
[email protected]
National Centre for Nuclear Research, Hoża 69,
00-681 Warsaw, Poland
National Centre for Nuclear Research, Hoża 69,
00-681 Warsaw, Poland
We search for effects of tetrahedral deformation β_32
over a range of ∼ 3000 heavy and superheavy nuclei, 82≤ Z ≤ 126,
using a microscopic-macroscopic model based on the deformed Woods-Saxon
potential, well tested in the region.
We look for the energy minima with a non-zero tetrahedral distortion, both
absolute and conditional - with the quadrupole distortion constrained to
zero. In order to ensure the reliability of our results,
we include the 10 most important deformation parameters in the energy
minimization. We could not find any cases of stable tetrahedral shapes.
The only sizable - up to 0.7 MeV - lowering of the ground state
occurs in superheavy nuclei Z≥ 120 for N=173-188, as a result of
a combined action of two octupole deformations: β_32 and
β_30, in the ratio β_32/β_30≈√(3/5).
The resulting shapes are moderately oblate, with the superimposed distortion
β_33 with respect to the oblate axis,
which makes the equator of the oblate spheroid slightly triangular.
Almost all found conditional minima are excited and not protected by any
barrier; a handful of them are degenerate with the axial minima.
PACS number(s): 21.10.-k, 21.60.-n, 27.90.+b
§ INTRODUCTION
The idea of intrinsic shape of a nucleus turned out instrumental for
understanding many features of the nuclear structure and spectroscopy.
In particular, specific nuclear shapes were related to the prominent shell
effects in both proton and neutron systems exhibited by the nuclear
binding, and to the observed patterns of collective excitations.
Besides the axial quadrupole distortion which is the nuclear
deformation of primary importance, the secondary effects of hexadecapole
<cit.> and, in some regions, octupole
<cit.> distortion are clearly recognized.
Additionally, there are theoretical predictions of quadrupole triaxial
equilibrium shapes in some nuclei, e.g. <cit.>, but
rather limited experimental evidence for them, see e.g.
<cit.>.
From the theoretical point of view even more exotic shapes are possible,
characterized by a high rank symmetry group which would lead to an extra
degeneracy of s.p. energy levels. One such possibility is the tetrahedral
symmetry. It is well known that many quantum
objects, like molecules, fullerenes and alkali metal clusters, prefer such a
shape in their ground state.
Due to these facts a hypothesis of tetrahedral symmetry of an atomic nucleus
was put forward as early as the 1970s for ^16O
<cit.> in relation to its expected four-α cluster
structure.
Since the 1990s, this concept has also been extended to heavier systems,
e.g. <cit.> and then intensively studied, both within
microscopic-macroscopic (MM) <cit.> and
selfconsistent models <cit.>.
Generally,
these studies are inconclusive since: a) the occurrence of global tetrahedral
minima was rare and model-dependent; b) contradictory results were obtained
within the same models. A similar ambiguity also occurs in experiments, which
so far either did not give clear evidence <cit.> or even gave
strong evidence against tetrahedral symmetry <cit.>.
For example, negative-parity bands in ^156Dy, observed quite recently
<cit.>, are most likely related to the octupole excitations
rather than the exotic tetrahedral symmetry.
Here we summarize the results of a search for tetrahedral minima in heavy and
super-heavy nuclei obtained within the MM model based on the deformed
Woods-Saxon potential with parameters used many times before, therefore well
tested in this region.
The present work is a much improved version of <cit.>, extended to
odd-A and odd-odd nuclei, with an expanded space of deformations used for
searching ground-state (absolute) tetrahedral minima.
§ CALCULATIONS
The microscopic-macroscopic results were obtained with the
deformed Woods-Saxon potential. The nuclear deformation enters via a
definition of the nuclear surface <cit.>:
R(θ,φ) = c({β}) R_0 { 1+∑_λ>1β_λ 0
Y_λ 0(θ,φ)+
∑_λ>1, μ>0, evenβ_λμ Y^c_λμ (θ,φ)} ,
where c({β}) is the volume-fixing factor. The real-valued spherical
harmonics Y^c_λμ, with even μ>0, are defined in terms of the
usual ones as: Y^c_λμ=(Y_λμ+Y_λ -μ)/ √(2).
In other words, we consider shapes with two symmetry planes. Note, that
traditional quadrupole deformations β and γ are related to
β_20 and β_22 by:
β_20=βcosγ and β_22=βsinγ.
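For concreteness, the surface parametrization above is easy to evaluate numerically. In the sketch below the volume-fixing factor c({β}) is set to one, an approximation made only to keep the example short, and scipy's spherical harmonics are used with their (order, degree, azimuth, polar) argument convention.

```python
import numpy as np
from scipy.special import sph_harm

def radius(theta, phi, betas, R0=1.0):
    """Nuclear radius R(theta, phi); betas maps (lam, mu) -> beta_{lam mu}.

    mu = 0 terms use Y_{lam 0}; even mu > 0 terms use the real combination
    Y^c_{lam mu} = (Y_{lam mu} + Y_{lam -mu})/sqrt(2) = sqrt(2) Re Y_{lam mu}.
    The volume-fixing factor c({beta}) is set to 1 in this sketch.
    """
    r = np.ones_like(theta, dtype=float)
    for (lam, mu), beta in betas.items():
        Y = sph_harm(mu, lam, phi, theta)     # scipy order: (m, l, azimuth, polar)
        r += beta * (Y.real if mu == 0 else np.sqrt(2.0) * Y.real)
    return R0 * r

# Example: an oblate shape with a superimposed beta_32 distortion
theta, phi = np.meshgrid(np.linspace(0.0, np.pi, 91), np.linspace(0.0, 2*np.pi, 181))
R = radius(theta, phi, {(2, 0): -0.20, (3, 2): 0.10})
```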
The n_p=450 lowest proton levels and n_n=550 lowest
neutron levels from the N_max=19 lowest shells of the deformed oscillator
were taken into account in the diagonalization procedure.
For the macroscopic part we used the Yukawa plus
exponential model <cit.>.
All parameters used in the present work,
determining the s.p. potential, the pairing strength, and the
macroscopic energy, are equal to those used previously in the calculations
of masses <cit.> and fission barriers <cit.> in
actinides and the heaviest nuclei. In particular, we took the
"universal set" of potential parameters and the pairing strengths
G_n=(17.67-13.11· I)/A for neutrons, G_p=(13.40+44.89· I)/A
for protons (I=(N-Z)/A).
As always within this model, N neutron and
Z proton s.p. levels have been included when solving BCS equations.
For systems with odd proton or neutron (or both) we use blocking. We assume
the g.s. configuration consisting of an odd particle
occupying one of the levels close to the Fermi level and the rest of the
particles forming a paired BCS state on the remaining levels.
Any minimum, including the ground state, is found by minimizing over
configurations, blocking particles on levels from the 10-th below to 10-th
above the Fermi level.
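The BCS step just described amounts to solving the coupled gap and particle-number equations on the included levels, with the blocked level removed from the pairing sums while holding the odd particle. The sketch below uses a toy equidistant spectrum; the strength parametrization quoted above, e.g. G_n=(17.67-13.11· I)/A, belongs with a realistic Woods-Saxon spectrum, so here G is simply chosen to support a paired solution.

```python
import numpy as np
from scipy.optimize import fsolve

def bcs_blocked(eps, G, n_particles, blocked=None):
    """BCS gap Delta and Fermi level lam on levels eps, with one blocked level.

    The blocked level is excluded from the pairing sums and holds the odd particle.
    """
    e = np.delete(eps, blocked) if blocked is not None else np.asarray(eps)
    n_pair = n_particles - (0 if blocked is None else 1)

    def eqs(x):
        Delta, lam = x
        Eqp = np.sqrt((e - lam)**2 + Delta**2)           # quasiparticle energies
        return [1.0 - 0.5 * G * np.sum(1.0 / Eqp),       # gap equation (Delta != 0)
                np.sum(1.0 - (e - lam) / Eqp) - n_pair]  # particle-number equation

    return fsolve(eqs, x0=[0.5, e[n_pair // 2]])

# Toy spectrum of 20 levels; 11 particles with the odd one blocked on level 5
eps = np.linspace(0.0, 10.0, 20)
Delta, lam = bcs_blocked(eps, G=0.4, n_particles=11, blocked=5)
```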
We performed three types of calculations looking for both conditional
and ground-state tetrahedral minima in nearly 3000 heavy and superheavy
nuclei with Z≥ 82.
1) Conditional tetrahedral minima were found by
fixing quadrupole deformations at zero: β_20=β_22=0, and
calculating total energy with the step 0.02 in β_32
by minimization over the other seven deformation parameters:
β_30, β_40, β_42, β_50, β_60, β_70, β_80.
The occurrence of a minimum at β_32≠ 0 in such an energy plot
(after additional interpolation of the energy to the step 0.01 in β_32) signals the conditional
minimum, usually excited above the g.s.; a schematic version of this scan is sketched at the end of this section.
The rationale behind this procedure is that, as known from other studies,
quadrupole deformation does not cooperate with the tetrahedral one <cit.>,
and switching off the effects of the quadrupole might help to locate a prominent
tetrahedral shell effect at sizable deformation β_32.
2) The ground states in all nuclei were found initially by the minimization
over seven axial deformations β_λ 0, λ=2 - 8.
3) Finally, the ground states were found for the second time, by the
minimization over ten deformations: the axial ones from 2) plus β_22,
β_32, and β_42.
In additional calculations, for a restricted region of SH nuclei in which
the minima found in 3) were ∼ 0.5 MeV deeper than those resulting
from 2), we minimized energy with respect to nine deformations, excluding
β_32. The aim was to see whether the effect is driven by
β_32.
In all calculations we used one non-axial version of the WS code to
eliminate possible numerical differences which could follow from the different
imposed spatial symmetries.
When looking for ground state minima, a minimization for each nucleus
was repeated at least 30 times, with various starting points,
in order to ensure that the proper ground state was found.
As, especially for superheavy nuclei,
the minimization can end behind the fission barrier, only minima within the
fission barrier were accepted.
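Schematically, procedure 1) is a constrained scan: for each grid value of β_32 the energy is minimized over the remaining shape variables and the resulting curve is interpolated. In the sketch below the function energy is a stand-in for the full microscopic-macroscopic energy, which is not reproduced here; only the scan logic is illustrated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import interp1d

def energy(beta32, other):
    """Stand-in for the microscopic-macroscopic energy (illustrative only)."""
    return -0.5 * beta32**2 + 4.0 * beta32**4 + np.sum((other - 0.05 * beta32)**2)

n_free = 7                                   # beta_30, 40, 42, 50, 60, 70, 80
grid = np.linspace(0.0, 0.30, 16)            # step 0.02 in beta_32; beta_20 = beta_22 = 0
E = [minimize(lambda x: energy(b32, x), np.zeros(n_free), method='Nelder-Mead').fun
     for b32 in grid]

fine = np.linspace(0.0, 0.30, 31)            # interpolate the curve to step 0.01
E_fine = interp1d(grid, E, kind='cubic')(fine)
print('conditional minimum at beta_32 = %.2f' % fine[np.argmin(E_fine)])
```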
§ RESULTS AND DISCUSSION
§.§ Tetrahedral minima
The map of tetrahedral deformations in the obtained conditional
minima is shown in Fig. 1. We emphasize that in these minima the quadrupole
deformation was forced to vanish in order to exhibit large tetrahedral shell
effects. As may be seen, the largest β_32 reach ∼0.2.
The conditional tetrahedral minima with sizable β_32>0.1 occur
in three regions: a wide region around Z=94, N=136, and two very exotic
regions: Z≈ 98, N≈ 192, and Z=126, N≈ 192.
This, however, should be confronted with the excitation energies
of the conditional minima above the axially symmetric g.s. minima [found in
calculations no. 2)], shown in Fig. 2. As may be seen there, the low excitation
energies in the a priori interesting first region occur in neutron-rich
Z=84-94 nuclei and in the very neutron-deficient Z=92-106 isotopes
which probably cannot be reached in experiment.
There are some very exotic nuclei in which the conditional tetrahedral
minima lie lower than the axially symmetric ones, but the largest difference
is only 0.25 MeV. The conclusion from this part of the study is that in the
whole investigated region there are no prominent low-energy tetrahedral minima.
Among the group of neutron-rich Z=84 - 94 nuclei, there are altogether
fourteen conditional minima below 2 MeV excitation energy: nine in Po isotopes,
four in Rn isotopes, and in ^223Np.
An example of the energy landscape of the nucleus
^219Po, with two coexisting shallow, nearly degenerate minima, one with
β_32≈ 0.1 and one wide prolate-β_30, is shown in
Fig. 3, in three
projections: (β_20, β_22), (β_20, β_30), and
(β_20, β_32). These maps were obtained within the full space of
10 deformations, by minimizing over the 8 remaining ones at each point. The unusual
landscape may be interpreted as two competing minima with a slight barrier
between them.
Concerning the excited minima, the important question is whether they
are protected by a barrier from the transition to the axially symmetric
g.s. minimum.
The typical situation is shown in Fig. 4, for the nucleus ^222Rn.
The landscape in the (β_20, β_32) plane was obtained by the
minimization over the 8 remaining deformations. One can see that the conditional
minimum with β_20=0 is not a minimum after lifting the constraint on
β_20: the very shallow real tetrahedral minimum occurs at
β_20=-0.04 and β_32=0.04 which is smaller than
β_32=0.10 of the conditional minimum. The barrier between the
tetrahedral and the
axial prolate g.s. minimum is less than 300 keV. One has to notice though,
that the presented picture involves the minimization over other deformations,
while finding the height of the saddle would require another method, like,
for example, the imaginary water flow (e.g. <cit.>), applied in the whole deformation
hypercube.
§.§ Minima including tetrahedral deformation
In the next step we found all nuclei in which the energy minimization
over 10 deformations, including nonaxial β_22, β_32 and
β_42, led to the g.s. lying lower than the axially symmetric
minimum. They are shown in Fig. 5.
In many of them, the effect comes entirely from the quadrupole and
hexadecapole nonaxiality (β_22 and β_42). Such is the
situation in nuclei with Z<118, forming vertical lines in Fig. 5:
at N=121, 179 (nuclei with small oblate deformation β_20 > -0.1),
and N=137, 153 (well deformed prolate nuclei with β_20≈ 0.25).
Among the last group, there are only a few cases in which a small deformation
β_32≈ 0.02 - 0.03 occurs in the g.s. On the other hand,
in many nearly spherical N=185 isotones a small distortion
β_32≈ 0.03 results from the energy minimization.
The energy differences greater than 200 keV between non-axial and axial
minima occur in rather exotic nuclei.
For example, the purely tetrahedral effect occurs in the neutron-poor
Es isotope with N=128 neutrons and in ultra-neutron-rich
Es isotopes with N=185-192, and also in a few nuclei around them.
The largest effect occurs for SH nuclei with Z>118, especially around
Z=123, N=173.
In Fig. 6 are shown nuclei from this region in which the tetrahedral
deformation β_32 lowers the ground state by more than 150 keV.
This effect is calculated as the difference between energies in the g.s.
minimum from the minimizations including nine (excluding
β_32) and ten (including β_32) deformations. Although
this could be named a "pure" β_32 effect, the reality is more
intricate. It turns out that including the tetrahedral deformation also induces
an oblate quadrupole and the axial octupole β_30.
The obtained minima, corresponding to moderately oblate shapes with
octupole distortions in the ratio β_32/β_30≈√(3/5),
are equivalent to the octupole deformation β_33 superimposed
on the oblate shape along its symmetry axis. A result of this superposition
is an oblate spheroid with a slightly triangular equator. The minimum
corresponding to such a nuclear shape was previously reported for
^308126 in <cit.>. In contrast to the case of ^308126,
some of the oblate-β_33 minima in nuclei depicted in Fig. 6 lie
significantly lower than
the oblate minima obtained when assuming the axial symmetry.
The landscapes around the g.s. minima in nuclei ^296123 and ^305124
are shown in Fig. 7 in three different projections: (β_20,β_22),
(β_20,β_30) and (β_20,β_32). As previously,
these maps are obtained by minimizing over the remaining eight deformation
parameters. The oblate-β_33 minima are lower by 720 and 530 keV,
respectively, than the axially symmetric oblate minima. As there is no
barrier between both, the previously found axially symmetric minima were
spurious.
One has to mention that the depth of the oblate-β_33 minima
diminishes with increasing pairing strengths, which is especially relevant
in odd-A and odd-odd nuclei.
As we have checked, at least some of these minima survive even after
a 10% increase in pairing strengths, which corresponds to a considerable
increase in rather weak g.s. pairing correlations of the original model.
For example, such a change leads to the β_33 g.s. in ^296123
still lying by 450 keV lower than the axially symmetric minimum.
§ SUMMARY AND CONCLUSIONS
The results obtained for about 3000 heavy and superheavy nuclei by the
microscopic-macroscopic model based on the deformed Woods-Saxon potential
and the Yukawa-plus-exponential energy within the ten-dimensional space of deformations may be
summarized as follows:
- We could not find any deep minima of large tetrahedral deformation.
The conditional minima, found under the restriction of zero quadrupole
distortion, have mostly a large excitation and are not protected by
any substantial barrier. The g.s. minima relatively
soft with respect to the tetrahedral coordinate β_32 occur
in Po isotopes with N≈ 136 and in a few very exotic (off β -
stability) systems.
- The tetrahedral deformation β_32 appears in the g.s. minima when
one combines it with β_30 and allows simultaneously for the
quadrupole nonaxiality β_22. Then it turns out that
in ∼ 40 superheavy nuclei with Z=119-126, N=173-188, the ground
states have a combined oblate and octupole deformation of the
β_33 symmetry with respect to the axis of the oblate shape. The
maximal g.s. lowering by this deformation, by 730 keV, occurs for the
nucleus ^296123.
The effect, although reduced (to 450 keV in ^296123), survives in the
calculation with 10% larger pairing strengths. This suggests
some robustness of the prediction of oblate-β_33 ground states.
Summarizing, one may thus say that our search for tetrahedral minima led us
instead to finding a combined oblate-plus-β_33 g.s. deformation
in a restricted region of superheavy nuclei.
§ ACKNOWLEDGEMENTS
M. K. and J. S. were co-financed by the National Science Centre under Contract No. UMO-2013/08/M/ST2/00257
(LEA COPIGAL). One of the authors (P. J.) was co-financed by Ministry of Science and Higher Education: Iuventus Plus
grant nr IP2014 016073. This research was also supported by an allocation of advanced computing resources provided by
the Świerk Computing Centre (CIŚ) at the National Centre for Nuclear Research (NCBJ) (http://www.cis.gov.pl).
99
B40exp1
R. C. Lemmon et al., Phys. Lett. B 316, 32 (1993).
B40exp2
J. R. Leigh et al., Phys. Rev. C 52, 3151 (1995).
B30exp1
L. P. Gaffney et al., Nature 497, 199 (2013).
B30exp2
B. Bucher et al., Phys. Rev. Lett. 116, 112503 (2016).
B30theory
P. A. Butler and W. Nazarewicz, Rev. Mod. Phys. 68, 349 (1996).
B22theory1
P. Möller, R. Bengtsson, B. G. Carlsson, P. Olivius, and T. Ichikawa, Phys.
Rev. Lett. 97, 162502 (2006).
B22theory2
J. Skalski, S. Mizutori, and W. Nazarewicz, Nucl. Phys. A 617, 282 (1997).
Ober2009
A. Obertelli et al., Phys. Rev. C 80, 031304 (2009)
B22exp
Y. Toh at al., Phys Rev. C 87, 041304(R) (2013)
B32_light1
N. Onishi and R. K. Sheline, Nucl. Phys. A 165, 180 (1971).
B32_light2
D. Robson, Phys. Rev. Lett. 42, 876 (1979).
first1
I. Hamamoto, B. Mottelson, H. Xie, and X. Z. Zhang, Z. Phys. D 21, 163 (1991).
first2
X. Li and J. Dudek, Phys. Rev. C 49, R1250 (1994).
MMandSF1
N. Schunck, J. Dudek, A. Góźdź, and P. H. Regan, Phys. Rev. C 69, 061305(R) (2004).
MMandSF2
J. Dudek et al., Phys. Rev. Lett. 97, 072501 (2006).
MM1
J. Dudek, A. Góźdź, N. Schunck, and M. Miskiewicz, Phys. Rev. Lett. 88, 252502 (2002).
MM2
K. Mazurek, J. Dudek, A. Góźdź, D. Curien, M. Kmiecik, A. Maj, Acta Phys. Pol. B 40, 731 (2009).
SF0
S. Takami, K. Yabana, and M. Matsuo, Phys. Lett. B 431, 242 (1998).
SF1
M. Yamagami, K. Matsuyanagi, M. Matsuo, Nucl. Phys. A 693, 579 (2001).
SF2
K. Zberecki, P. Magierski, P.-H. Heenen, and N. Schunck, Phys. Rev. C 74, 051302(R) (2006).
SF3
K. Zberecki, P.-H. Heenen, and P. Magierski, Phys. Rev. C 79, 014319 (2009).
SF4
J. Zhao, B. N. Lu, E. G. Zhao, and S.-G. Zhou, Phys. Rev. C 86, 057304 (2012).
SF5
J. Zhao, B. N. Lu, E. G. Zhao, and S.-G. Zhou, Phys. Rev. C 95, 014320 (2017).
JRKSS
P. Jachimowicz, P. Rozmej, M. Kowal, J. Skalski, and A. Sobiczewski, Int. J. Mod. Phys. E 20, 514 (2011).
expB32_1
T. Sumikama et al., Phys. Rev. Lett. 106, 202501 (2011).
expB32_2
R. A. Bark et al., Phys. Rev. Lett. 104, 022501 (2010).
expB32_3
M. Jentschel et al., Phys. Rev. Lett. 104, 222502 (2010).
expB32_4
D. J. Hartley et al., Phys. Rev. C 95, 014321 (2017).
WS
S. Ćwiok, J. Dudek, W. Nazarewicz, J. Skalski, and T. Werner, Comput. Phys. Commun. 46, 379 (1987).
KN
H. J. Krappe, J. R. Nix and A. J. Sierk, Phys. Rev. C 20, 992 (1979).
WSpar
I. Muntian, Z. Patyk and A. Sobiczewski, Acta Phys. Pol. B 32, 691 (2001).
Kow
M. Kowal, P. Jachimowicz, A. Sobiczewski, Phys. Rev. C 82, 014303 (2010).
2bar
P. Jachimowicz, M. Kowal and J. Skalski, Phys. Rev. C 85, 034305 (2012).
bar2017
P. Jachimowicz, M. Kowal and J. Skalski, Phys. Rev. C 95, 014303 (2017).
MoBar
P. Möller, A. J. Sierk, T. Ichikawa, A. Iwamoto and R. Bengtsson, Phys. Rev. C 79, 064304 (2009).
JKS2010
P. Jachimowicz, M. Kowal, and J. Skalski, Int. J. Mod. Phys. E 19, 508 (2010).
| The idea of an intrinsic shape of a nucleus turned out to be instrumental for
understanding many features of the nuclear structure and spectroscopy.
In particular, specific nuclear shapes were related to the prominent shell
effects in both proton and neutron systems exhibited by the nuclear
binding, and to the observed patterns of collective excitations.
Besides the axial quadrupole distortion which is the nuclear
deformation of primary importance, the secondary effects of hexadecapole
<cit.> and, in some regions, octupole
<cit.> distortion are clearly recognized.
Additionally, there are theoretical predictions of quadrupole triaxial
equilibrium shapes in some nuclei, e.g. <cit.>, but
rather limited experimental evidence for them, see e.g.
<cit.>.
From the theoretical point of view even more exotic shapes are possible,
characterized by a high rank symmetry group which would lead to an extra
degeneracy of s.p. energy levels. One such possibility is the tetrahedral
symmetry. It is well known that many quantum
objects, like molecules, fullerenes and alkali metal clusters prefer such a
shape in their ground state.
Due to these facts a hypothesis of tetrahedral symmetry of an atomic nucleus
was put forward as early as in the 1970s for ^16O
<cit.> in relation to its expected four-α cluster
structure.
Since the 1990s, such a concept has also been extended to heavier systems,
e.g. <cit.>, and then intensively studied, both within
microscopic-macroscopic (MM) <cit.> and
selfconsistent models <cit.>.
Generally,
these studies are inconclusive since: a) the existence of global tetrahedral
minima was rare and model-dependent, and b) contradictory results were obtained
within the same models. A similar ambiguity also occurs in experiments, which
so far either did not give clear evidence <cit.> or even gave
strong evidence against tetrahedral symmetry <cit.>.
For example, negative-parity bands in ^156Dy, observed quite recently
<cit.>, are most likely related to the octupole excitations
rather than the exotic tetrahedral symmetry.
Here we summarize the results of a search for tetrahedral minima in heavy and
super-heavy nuclei obtained within the MM model based on the deformed
Woods-Saxon potential with parameters used many times before and therefore well
tested in this region.
The present work is a much improved version of <cit.>, extended to
odd-A and odd-odd nuclei, with an expanded space of deformations used for
searching ground-state (absolute) tetrahedral minima. | null | null | null | null | null |
http://arxiv.org/abs/1701.07678v3 | 20170126125102 | Markov Chain Monte Carlo technics applied to Parton Distribution Functions determination: proof of concept | [
"Yémalin Gabin Gbedo",
"Mariane Mangin-Brinet"
] | hep-ph | [
"hep-ph"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08021v2 | 20170127115701 | Random walks in random conductances: decoupling and spread of infection | [
"Peter Gracar",
"Alexandre Stauffer"
] | math.PR | [
"math.PR"
] |
Let (G,μ) be a uniformly elliptic random conductance graph on ℤ^d with a Poisson point process of particles at time t=0 that perform independent simple random walks. We show that inside a cube Q_K of side length K, if all subcubes of side length ℓ<K inside Q_K have sufficiently many particles, the particles return to stationarity after cℓ^2 time with a probability close to 1. We also show this result for percolation clusters on locally finite graphs. Using this mixing result and the results of <cit.>, we show that in this setup, an infection spreads with positive speed in any direction. Our framework is robust enough to allow us to also extend the result to infection with recovery, where we show positive speed and that the infection survives indefinitely with positive probability.
Keywords and phrases: mixing, decoupling, spread of infection, heat kernel
§ INTRODUCTION
We consider the graph G=(ℤ^d,E), d≥ 2, to be the d-dimensional square lattice, with edges between nearest neighbors: for x,y∈ℤ^d we have (x,y)∈ E iff ‖x-y‖_1=1. Let {μ_x,y}_(x,y)∈ E be a collection of i.i.d. non-negative weights, which we call conductances. In this paper, edges will always be undirected, so μ_x,y=μ_y,x for all (x,y)∈ E. We also assume that the conductances are uniformly elliptic: that is,
there exists C_M>0, such that μ_x,y∈[C_M^-1,C_M] for all (x,y)∈ E.
We say x∼ y if (x,y)∈ E and define μ_x=∑_y∼ xμ_x,y. At time 0, consider a Poisson point process of particles on ℤ^d, with intensity measure λ(x)=λ_0μ_x for some constant λ_0>0 and all x∈ℤ^d. That is, for each x∈ℤ^d, the number of particles at x at time 0 is an independent Poisson random variable of mean λ_0μ_x. Then, let the particles perform independent continuous-time simple random walks on the weighted graph so that a particle at x∈ℤ^d jumps to a neighbor y∼ x at rate μ_x,y/μ_x. It follows from the thinning property of Poisson random variables that the system of particles is in stationarity; thus, at any time t, the particles are distributed according to a Poisson point process with intensity measure λ.
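For concreteness, the following Python sketch simulates this particle system on a small two-dimensional torus, which serves as a finite stand-in for ℤ^d; the side length L, the density λ_0 and the ellipticity constant C_M are arbitrary illustrative choices, not quantities fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of the particle system on a small 2-d torus (a finite
# stand-in for Z^d); L, lam0 and C_M below are arbitrary choices.
L, lam0, C_M = 20, 1.0, 4.0

# i.i.d. uniformly elliptic conductances on horizontal and vertical edges
mu_h = rng.uniform(1 / C_M, C_M, size=(L, L))  # edge between (i,j) and (i+1,j)
mu_v = rng.uniform(1 / C_M, C_M, size=(L, L))  # edge between (i,j) and (i,j+1)

def mu_site(i, j):
    # mu_x = sum of the conductances of the four incident edges
    return (mu_h[i, j] + mu_h[(i - 1) % L, j]
            + mu_v[i, j] + mu_v[i, (j - 1) % L])

# Poisson initial condition: Poisson(lam0 * mu_x) particles at each site x
counts = np.array([[rng.poisson(lam0 * mu_site(i, j)) for j in range(L)]
                   for i in range(L)])
print("mean particles per site:", counts.mean())

def jump(i, j):
    # Since sum_y mu_{x,y}/mu_x = 1, the walk jumps at total rate 1, and a
    # jump from x goes to neighbour y with probability mu_{x,y}/mu_x.
    w = np.array([mu_h[i, j], mu_h[(i - 1) % L, j],
                  mu_v[i, j], mu_v[i, (j - 1) % L]])
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.choice(4, p=w / w.sum())]
    return (i + di) % L, (j + dj) % L
```

By the thinning property recalled above, evolving every particle with `jump` at rate-1 exponential times leaves the Poisson law of `counts` invariant.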
We study the spread of an infection among the particles. Assume that at time 0 there is an infected particle at the origin, and all other particles are uninfected. Then an uninfected particle gets infected as soon as it shares a site with an infected particle. Our first result establishes that the infection spreads with positive speed.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). For any time t≥ 0, let I_t be the position of the infected particle that is furthest away from the origin. Then
lim inf_t→∞‖I_t‖_1/t>0 almost surely.
The above result has been established on the square lattice (i.e., μ_x,y=1 for all (x,y)∈ E) by Kesten and Sidoravicius <cit.> via an intricate multi-scale analysis; see also <cit.> for a shape theorem. In a companion paper <cit.>, we develop a framework which can be used to analyze processes on this setting without the need of carrying out a multi-scale analysis from scratch. We prove our <Ref> via this framework, showing the applicability of our technique from <cit.>. We also apply this technique to analyze the spread of an infection with recovery.
Let the setup be as before, but now each infected particle independently recovers and becomes uninfected at rate γ for some fixed parameter γ>0. After recovering, a particle becomes again susceptible to the infection and gets infected again whenever it shares a site with an infected particle. Our next result shows that if γ is small enough, then with positive probability there will be at least one infected particle at all times. When this happens, we also obtain that the infection spreads with positive speed.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). For any λ_0>0, there exists γ_0>0 such that, for all γ∈(0,γ_0), with positive probability, the infection does not die out. Furthermore, there are constants c_1,c_2>0 such that
ℙ[‖I_t‖_1≥ c_1t for all t≥0]≥ c_2,
where I_t is the position of the infected particle that is furthest away from the origin at time t.
The challenge in this setup comes from the heavily dependent structure of the model. Though particles move independently of one another, dependencies do arise over time. For example, if a ball of radius R centered at some vertex x of the graph turns out to have no particles at time 0, then the ball B(x,R/2) of radius R/2 centered at x, will continue to be empty of particles up to time R^2, with positive probability. This means that the probability that the (d+1)-dimensional, space-time cylinder B(x,R/2)×[0,R^2] has no particle is at least exp{-cR^d} for some constant c, which is just a stretched exponential in the volume of the cylinder.
On the other hand, one expects that, after time t≫ R^2, the set of particles inside the ball will become “close” to stationarity.
To deal with dependencies, one often resorts to a decoupling argument, showing that two local events behave roughly independently of each other, provided they are measurable with respect to regions in space-time that are sufficiently far apart. We will obtain such an argument by extending a technique which we call local mixing, and which was introduced in <cit.>. The key observation is the following. Consider a cube Q⊆ℤ^d, tessellated into subcubes of side length ℓ>0. For simplicity assume for the moment that μ_x,y=1 for all (x,y)∈ E. Suppose that at some time t, the configuration of particles inside Q is dense enough, in the sense that inside each subcube there are at least cℓ^d particles, for some constant c>0. Regardless of how the particles are distributed inside Q, as long as the subcubes are dense, we obtain that at some time t+c'ℓ^2, not only have the particles had enough time to move out of the subcubes they were in at time t, but the configuration of particles inside “the core” of Q (i.e., away from the boundary of Q) stochastically dominates a Poisson point process of intensity (1-ϵ)cℓ^d that is independent of the configuration of particles at time t. Moreover, the value ϵ can be made arbitrarily close to 0 by setting c' large enough. In words, we obtain a configuration at time t+c'ℓ^2 inside the core of Q that is roughly independent of the configuration at time t, and is close to the stationary distribution. To the best of our knowledge, the idea of local mixing in such settings originated in the work of Sinclair and Stauffer <cit.>, and was later applied in <cit.>. This idea was then extended with the introduction of soft local times by Popov and Teixeira <cit.> (see also <cit.>), and applied to other processes, such as random interlacements.
Our second main goal in this paper is to show that this local mixing result can be obtained in a larger setting, in which a local CLT result, which plays a crucial role in the proof[The results of <cit.> are in the setting of Brownian motions on ℝ^d, but can be adapted in a straightforward way to random walks on ℤ^d with μ_x,y=1 for all (x,y)∈ E by using the local CLT.] of <cit.>, might not hold or only holds in the limit as time goes to infinity, with no good control on the convergence rate. This is precisely the situation of our setting, where the weights μ_x,y are not all identical to 1. To work around that, we will show that local mixing can be obtained whenever a so-called Parabolic Harnack Inequality holds, and we have some good estimates on the displacement of random walks.
For the result below, we can impose slightly weaker conditions on μ_x,y. Let p_c be the critical probability for bond percolation on ℤ^d. Assume that μ_x,y are i.i.d. and that, for each (x,y)∈ E, we have
ℙ[μ_x,y=0]< p_c and μ_x,y satisfies (<ref>) whenever μ_x,y>0.
For two regions Q'⊆ Q⊂ℤ^d, we say that Q' is x away from the boundary of Q if the distance between Q' and Q^c is at least x.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). There exist positive constants c_1, c_2, c_3, c_4, c_5 such that the following holds.
Fix K>ℓ>0 and ϵ∈(0,1). Consider a cube Q of side length K, tessellated into subcubes (T_i)_i of side length ℓ.
Assume each subcube T_i contains at least β∑_x∈ T_iμ_x particles for some β>0, and let Δ≥ c_1ℓ^2ϵ^-c_2.
If ℓ is large enough, then after the particles move for time Δ, we obtain that within a region Q'⊆ Q that is at least c_3ℓϵ^-c_4 away from the boundary of Q, the particles dominate an independent Poisson point process of intensity measure ν(x)=(1-ϵ)βμ_x, x∈ Q', with probability at least
1-∑_y∈ Q'exp{-c_5βμ_yϵ^2Δ^d/2}.
We will prove a more detailed version of this theorem in <Ref> (see <Ref>). Although we only prove the result for the case of conductances on the square lattice, <Ref> holds for more general graphs. The theorem holds for any graph G and any region Q of G that can be tessellated into subregions of diameter at most ℓ whenever each such subregion is dense enough, the so-called parabolic Harnack inequality holds for G and we have estimates on the displacement of random walks on G. We discuss some extensions in <Ref>.
The structure of this paper is as follows. In <Ref>, we formally define the family of graphs we consider for local mixing and present results concerning the parabolic Harnack inequality, heat kernel bounds and exit times for random walks on such graphs. In <Ref>, we state a more precise version of <Ref> and prove it. In <Ref> we prove an extension of the local mixing result to random walks whose displacement is conditioned to be bounded, which is particularly useful in applications <cit.>. In <Ref>, we use the local mixing result and results from our companion paper <cit.> to prove Theorems <ref> and <ref> for graphs satisfying (<ref>).
§ HEAT KERNEL ESTIMATES AND EXIT TIMES
In this section, we consider a general graph G=(V,E), with uniformly bounded degrees. For x,y∈ V, let |x-y| denote the distance between x and y in G. For x∈ V, let B(x,r)={y∈ V: |x-y|≤ r} be the ball of radius r centered at x. We consider non-negative weights (conductances) (μ_x,y)_(x,y)∈ E, that are symmetric. As in <Ref>, we denote by x∼ y whenever x,y∈ V are neighbors in G, and define μ_x=∑_y∼ xμ_x,y. We also extend μ to a measure on V. For simplicity, the reader may think of V as ℤ^d and μ_x,y being i.i.d. random variables satisfying (<ref>).
We keep our notation in greater generality as we want to highlight the exact conditions we need for our results.
Assume the existence of d≥ 1 and C_U such that
μ(B(x,r))≤ C_Ur^d, for all r≥1, and x∈ V.
We consider a continuous time simple random walk on the weighted graph 𝒢:=(G,μ), which jumps from vertex x to vertex y at rate μ_x,y/μ_x. More formally, for any function f:V→ℝ, let
ℒf(x)=μ_x^-1∑_y∼ xμ_x,y(f(y)-f(x)),
and define the random walk started at vertex x as the Markov process Y=(Y_t,t∈[0,∞),ℙ_x,x∈ V) with generator ℒ. Its heat kernel on the graph is defined as
q_t(x,y)=ℙ_x(Y_t=y)/μ_y, for any x,y∈ V.
We will say that a particle walks along 𝒢 if it is a Markov process with generator ℒ as defined above.
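As an illustration of these definitions, the sketch below computes the heat kernel q_t numerically on a toy weighted path graph by exponentiating the generator ℒ; the graph, conductances and time t are arbitrary choices, and the final assertion checks the symmetry q_t(x,y)=q_t(y,x), which follows from reversibility with respect to μ and is used several times below.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Toy weighted path graph with n vertices; conductances are arbitrary choices.
n = 6
mu_edge = rng.uniform(0.5, 2.0, size=n - 1)   # mu_{i,i+1} on the path

mu = np.zeros(n)                              # mu_x = sum of incident edges
mu[:-1] += mu_edge
mu[1:] += mu_edge

A = np.zeros((n, n))                          # weighted adjacency, A[x,y] = mu_{x,y}
for i, w in enumerate(mu_edge):
    A[i, i + 1] = A[i + 1, i] = w

# Generator (Lf)(x) = mu_x^{-1} sum_{y~x} mu_{x,y}(f(y)-f(x)), i.e. L = D^{-1}A - I
Lgen = np.diag(1.0 / mu) @ A - np.eye(n)

t = 2.5
P_t = expm(t * Lgen)                          # P_t[x, y] = P_x(Y_t = y)
q_t = P_t / mu[None, :]                       # q_t(x, y) = P_x(Y_t = y) / mu_y

# Reversibility with respect to mu makes the heat kernel symmetric.
assert np.allclose(q_t, q_t.T)
print(q_t.round(4))
```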
We now state several definitions from <cit.> which we use throughout the paper.
[Very good balls]
Let C_V, C_P and C_W≥ 1 be fixed constants. We say B(x,r) is (C_V,C_P,C_W)-good if:
μ(B(x,r))≥ C_Vr^d,
and the weak Poincaré inequality
∑_y∈ B(x,r)(f(y)-f̅_B(x,r))^2μ_y≤ C_Pr^2∑_y,z∈ B(x,C_Wr), z∼ y(f(y)-f(z))^2μ_y,z
holds for every f:B(x,C_Wr)→ℝ, where f̅_B(x,r)=μ(B(x,r))^-1∑_y∈ B(x,r)f(y)μ_y is the weighted average of f in B(x,r). Furthermore, we say B(x,R) is (C_V,C_P,C_W)- very good if there exists N_B=N_B(x,R)≤ R^1/(d+2) such that for all r≥ N_B, B(y,r) is good whenever B(y,r)⊆ B(x,R). We assume that N_B≥ 1.
For the remainder of the paper we assume that d≥ 2, fix C_U, C_V, C_P and C_W and take 𝒢=(V,E,μ) to satisfy (<ref>).
We are now ready to present some key results from <cit.> that control the variation of the random walk density function. We will also present a result about random walk exit times which was initially shown in <cit.> for Bernoulli percolation clusters and then generalized to our setup in <cit.>. The first result gives Gaussian upper and lower bounds for the heat kernel for very good balls.
<cit.>
Assume the weights μ_x,y are i.i.d. and (<ref>) holds. Fix a vertex x∈ V. Suppose there exists R_1=R_1(x) such that B(x,R) is very good with N_B(x,R)^3(d+2)≤ R for every R≥ R_1. Then there exist positive constants c_1, c_2, c_3, c_4 such that if t≥ R_1^2/3, we obtain
q_t(x,y)≤ c_1t^-d/2e^-c_2|x-y|^2/t, for all y∈ V with |x-y|≤ t
and
q_t(x,y)≥ c_3t^-d/2e^-c_4|x-y|^2/t, for all y∈ V with |x-y|^3/2≤ t.
Now define the space-time regions
Q(x,R,T) = B(x,R)× (0,T],
Q_-(x,R,T) = B(x,R/2)×[T/4,T/2]
and
Q_+(x,R,T) = B(x,R/2)×[3T/4,T].
We denote by t+Q(x,R,T)=B(x,R)× (t,t+T). We call a function u:V×ℝ→ℝ caloric on Q if it is defined on Q=Q(x,R,T) and
∂/∂ tu(x,t)=ℒu(x,t) for all (x,t)∈ Q.
We say the parabolic Harnack inequality (PHI) holds with constant C_H for Q=Q(x,R,T) if whenever u=u(x,t) is non-negative and caloric on Q, then
sup_(x,t)∈ Q_-(x,R,T)u(x,t)≤ C_Hinf_(x,t)∈ Q_+(x,R,T)u(x,t).
It is well known that the heat kernel of a random walk on 𝒢 started at x is a caloric function; in fact taking x=0 and u(x,t)=q_t(0,x) we have
d/dt q_t(0,x) = lim_dt→0 1/μ_x·(∑_y≠ xℙ_0(Y_t=y)ℙ_y(Y_dt=x)-ℙ_0(Y_t=x)ℙ_x(Y_dt≠ x))/dt
= 1/μ_x(∑_y∼ xℙ_0(Y_t=y)μ_y,x/μ_y-∑_y∼ xμ_x,y/μ_xℙ_0(Y_t=x))
= 1/μ_x∑_y∼ xμ_x,y(q_t(0,y)-q_t(0,x))=ℒq_t(0,x).
The main result from <cit.> shows that the PHI holds in regions that are very good according to <Ref>.
<cit.>
Let x_0∈ V. Suppose that R_1≥ 16 and B(x_0,R_1) is (C_V,C_P,C_W)- very good with N_B(x_0,R_1)^2d+4≤ R_1/(2log R_1). Then there exists a constant C_H>0 such that the PHI holds for Q(x_1,R,R^2) for any x_1∈ B(x_0,R_1/3) and for R such that Rlog R=R_1.
A direct consequence of the PHI is the following known proposition, which when applied to the caloric function u(x,t)=q_t(0,x) gives that q_t(0,x) and q_t(0,y) are very similar to each other when x and y are close by. This property will be crucial for our proof of local mixing, so we give the proof of this proposition for completeness.
Let x_0∈ V. Suppose that there exists s(x_0)≥ 0 so that for all R≥ s(x_0), the PHI holds with constant C_H for Q(x_0,R,R^2) and that the ball B(x_0,R) is (C_V,C_P,C_W)- very good. Let Θ=log_2(C_H/(C_H-1)), and for x,y∈ V define
ρ(x_0,x,y)=s(x_0)∨ |x_0-x|∨ |x_0-y|.
There exists a constant c>0 such that the following holds. Let r_0≥ s(x_0) and suppose that u=u(x,t) is caloric in Q=Q(x_0,r_0,r_0^2). Then for any x_1,x_2∈ B(x_0,r_0/2) and any t_1,t_2 such that r_0^2-ρ(x_0,x_1,x_2)^2≤ t_1,t_2≤ r_0^2 we have
|u(x_1,t_1)-u(x_2,t_2)|≤ c(ρ(x_0,x_1,x_2)/r_0)^Θsup_(t,x)∈ Q_+(x_0,r_0,r_0^2)|u(t,x)|.
For any integer k≥0, set r_k=2^-kr_0, and let
Q(k) = (r_0^2-r_k^2)+Q(x_0,r_k,r_k^2),
Q_+(k) = (r_0^2-r_k^2)+Q_+(x_0,r_k,r_k^2)
and
Q_-(k) = (r_0^2-r_k^2)+Q_-(x_0,r_k,r_k^2).
This gives that Q_+(k)=Q(k+1). Now take k≥ 1 small enough, so that r_k≥ s(x_0). If we apply the PHI to the non-negative caloric functions -u+sup_Q(k)u and u-inf_Q(k)u, we get the inequalities
sup_Q(k)u-inf_Q_-(k)u ≤ C_H(sup_Q(k)u-sup_Q_+(k)u)
and
sup_Q_-(k)u-inf_Q(k)u ≤ C_H(inf_Q_+(k)u-inf_Q(k)u).
Adding them together and using sup_Q_-(k)u-inf_Q_-(k)u≥ 0 gives
sup_Q(k)u-inf_Q(k)u≤ C_H(sup_Q(k)u-inf_Q(k)u)-C_H(sup_Q_+(k)u-inf_Q_+(k)u)
Denoting by Osc(u,A)=sup_Au-inf_Au and setting δ=C_H^-1, this gives
Osc(u,Q_+(k))≤(1-δ)Osc(u,Q(k)).
Next, take the largest m such that r_m≥ρ(x_0,x_1,x_2). Then, applying (<ref>) repeatedly on Q(1)⊃ Q(2)⊃… Q(m) yields, since (x_i,t_i)∈ Q(m),
|u(t_1,x_1)-u(t_2,x_2)|≤Osc(u,Q(m))≤ (1-δ)^m-1Osc(u,Q(1)).
Since
(1-δ)^m=2^-mΘ≤(2ρ(x_0,x_1,x_2)/r_0)^Θ,
the result follows.
We will also need to control the exit time of the random walk out of a ball of radius r, which we define as
τ(x,r)=inf{t: Y_t∉B(x,r)}.
Let x_0∈ V and let B(x_0, R) be (C_V,C_P,C_W)- very good with N_B^d+2<R. Let x∈ B(x_0,5/9R). There exist positive constants c_1, c_2, c_3, c_4 such that if t, r satisfy
0<r≤ R and c_1 N_B^d(log N_B)^1/2r≤ t≤ c_2 R^2/log R,
then we have
ℙ_x(τ(x,r)<t)≤ c_3exp{-c_4r^2/t}.
The proposition was proven for percolation clusters in <cit.>. The proof for more general 𝒢 is similar and can be found in <cit.>.
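For intuition about the shape of this bound, one can estimate the exit probability by simulation. The sketch below does this for the rate-1 continuous-time simple random walk on ℤ^2, i.e. the special case μ≡1 of (<ref>); the values of r, t and the number of trials are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def exit_prob(r, t, trials=5_000):
    """Monte Carlo estimate of P_0(tau(0, r) < t) for the rate-1 continuous-time
    simple random walk on Z^2 (the uniformly elliptic case mu = 1)."""
    hits = 0
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(trials):
        pos = np.zeros(2, dtype=int)
        clock = rng.exponential(1.0)           # time of the first jump
        while clock <= t:
            pos += steps[rng.integers(4)]
            if np.abs(pos).sum() > r:          # walk has left the ball B(0, r)
                hits += 1
                break
            clock += rng.exponential(1.0)
    return hits / trials

# The bound predicts decay like exp(-c r^2 / t) as r grows with t fixed.
for r in (10, 20, 30):
    print(f"r={r:2d}, t=25:  P(tau < t) ≈ {exit_prob(r, 25.0):.4f}")
```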
Since Propositions <ref>, <ref> and <ref> rely on very good balls and the related value N_B, we can assume a lower bound S such that if R>S, then the conditions of all three are satisfied. More formally, we assume the following.
The graph G has polynomial growth; i.e., it satisfies (<ref>). Furthermore, there exists a function S: V↦ℝ such that for all R_1 with R_1log R_1≥ S(x_0), the ball B(x_0,R_1) is (C_V,C_P,C_W)-very good with N_B(x_0,R_1)^2d+4≤ R_1.
As a consequence, Propositions <ref>, <ref>, <ref> and <ref> all hold for any R>S(x).
For i.i.d. weights as defined in <Ref>, we obtain the following.
If V=ℤ^d and the weights μ_x,y are i.i.d. and satisfy (<ref>) or (<ref>), then <Ref> holds. Furthermore, we have that there exist constants c,γ>0 such that
ℙ[S(x)≥ n]≤ cexp{-cn^γ} for all x∈ℤ^d and n≥ 0.
If the weights μ_x,y are i.i.d. and satisfy (<ref>), then <Ref> holds with S(x)= 1 for all x∈ V.
This has been shown in <cit.>, following from the framework developed in <cit.>. The tail estimate of S(x) was obtained in <cit.>. For the case where μ_x,y satisfy (<ref>), the function S can be set to 1 by <cit.> and the results of <cit.>.
In <cit.> it has been shown that when the weights μ_x,y are i.i.d. but can assume values arbitrarily close to zero, so neither (<ref>) nor (<ref>) hold, it is possible to find distributions (at least in dimensions d≥ 5) for which <Ref> does not hold. Hence, even though we do not explicitly use uniform ellipticity of μ_x,y in our proofs, this property has a fundamental role in our analysis. Recent results (see, for example, <cit.>) have been derived to relax assumption (<ref>), but they do not establish all properties we need.
§ DECOUPLING VIA LOCAL MIXING
In this section, we will restrict to the case V=ℤ^d and (x,y)∈ E if and only if x-y_1=1, but we do not assume the μ_x,y are i.i.d. We define a cube of side length z>0 as Q_z:=[-z/2,z/2]^d.
In the remainder of the paper, we will work with the heat kernel q_t as defined in (<ref>). Since we allow μ_x,y=0, it is possible for two sites not to be connected. To address this we require the existence of an infinite component. Formally, we assume the following.
For each (x,y)∈ E, either μ_x,y=0 or it satisfies (<ref>) for a uniform constant C_M. Moreover, the weights μ_x,y are such that an infinite connected component of edges of positive weight within 𝒢 exists and contains the origin.
With this let 𝒞_∞ be the infinite connected component of 𝒢 that contains the origin and define
Q̃_z:=Q_z∩𝒞_∞.
We note that if μ_x,y satisfy (<ref>), then <Ref> is automatically satisfied. We will continue to call Q̃_z as a “cube”. We are now ready to state the more detailed version of <Ref>.
Let μ_x,y satisfy Assumptions <ref> and <ref>. There exist constants c_0, c_1, C>0 such that the following holds.
Fix K>ℓ>0 and ϵ∈(0,1). Consider the cube Q_K tessellated into subcubes (T_i)_i of side length ℓ and assume that ℓ>S^d+1(x) for all x∈Q̃_K.
Let (x_j)_j⊂Q̃_K be the locations at time 0 of a collection of particles, such that each subcube T̃_i contains at least ∑_y∈T̃_iβμ_y particles for some β>0.
Let Δ≥ c_0ℓ^2ϵ^-4/Θ where Θ is as in <Ref>.
For each j denote by Y_j the location of the j-th particle at time Δ.
Fix K'>0 such that K-K'≥√(Δ)c_1ϵ^-1/d. Then there exists a coupling ℚ of an independent Poisson point process ψ with intensity measure ζ(y)=β(1-ϵ)μ_y, y∈𝒞_∞, and (Y_j)_j such that within Q̃_K'⊂Q̃_K, ψ is a subset of (Y_j)_j with probability at least
1-∑_y∈Q̃_K'exp{-Cβμ_yϵ^2Δ^d/2}.
Note that, due to <Ref>, <Ref> is a special case of <Ref>, which we prove below. In order to do so, we will use something called soft local times, which was introduced in <cit.> to analyze random interlacements, following the introduction of local mixing in <cit.>; see also <cit.> for an application of this technique to random walks on ℤ^d.
Let (Z_j)_j≤ J be a collection of J independent random particles on V distributed according to a family of density functions g_j:V→ℝ, j≤ J. Define for all y∈ V the soft local time function H_J(y)=∑_j=1^Jξ_jg_j(y), where the ξ_j are i.i.d. exponential random variables of mean 1. Let ψ be a Poisson point process on V with intensity measure ρ:V→ℝ and define the event
E={ψ is a subset of (Z_j)_j≤ J}.
Then there exists a coupling such that,
ℙ[E]≥ℙ[H_J(y)≥ρ(y), ∀ y∈ V].
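The lower bound in the lemma is easy to evaluate numerically in toy cases. The following sketch estimates ℙ[H_J(y)≥ρ(y) for all y] by Monte Carlo on a small finite state space; the densities g_j, the intensity ρ and all sizes are arbitrary illustrative choices rather than quantities from the proof.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Monte Carlo evaluation of the soft-local-times lower bound.
m, J = 50, 200
g = rng.uniform(0.5, 1.5, size=(J, m))
g /= g.sum(axis=1, keepdims=True)            # each g_j is a probability density

rho = 0.6 * (J / m) * np.ones(m)             # intensity of the dominated process

trials = 10_000
xi = rng.exponential(1.0, size=(trials, J))  # i.i.d. mean-1 exponentials xi_j
H = xi @ g                                   # H[k, y] = sum_j xi_j g_j(y)
print("P[H_J >= rho everywhere] ≈", (H >= rho).all(axis=1).mean())
```

When the densities are dense relative to ρ, as in this toy setup, the estimated probability is close to 1, mirroring how the theorem below is applied.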
We are now ready to prove <Ref>.
By <Ref>, there exists a coupling ℚ of an independent Poisson point process ψ with intensity measure ζ(y)=β(1-ϵ)μ_y1_{y∈Q̃_K'} and the locations of the particles Y_j, which are distributed according to the density functions f_Δ(x_j,y):=q_Δ(x_j,y)μ_y, y∈𝒞_∞, such that ψ is a subset of (Y_j)_j with probability at least
ℚ[H_J(y)≥βμ_y(1-ϵ), ∀ y∈Q̃_K'],
where H_J(y)=∑_j=1^Jξ_j f_Δ(x_j,y) and (ξ_j)_j≤ J are i.i.d. exponential random variables with parameter 1. We first observe that the probability of the converse event is
ℚ[∃ y∈Q̃_K': H_J(y)<βμ_y(1-ϵ)] ≤ ∑_y∈Q̃_K'ℚ[H_J(y)<βμ_y(1-ϵ)]
≤ ∑_y∈Q̃_K'e^κμ_yβ(1-ϵ)𝔼^ℚ[exp{-κ H_J(y)}],
where we used Markov's inequality in the last step, which is valid for any κ>0. Let c_1 be a positive constant which we will fix later and let
R=√(Δ)c_1ϵ^-1/d.
Let J' be a subset of {1,2,…,J} such that for each T̃_i, J' contains exactly ⌈∑_y∈T̃_iβμ_y⌉ particles that are inside T̃_i.
Define J'(y)⊆ J' to be the set of j∈ J' such that |x_j-y|≤ R and define H'(y) as H_J(y) but with the sum restricted to j∈ J'(y). Since H_J(y)≥ H'(y) we get that
𝔼^ℚ[exp{-κ H_J(y)}]≤𝔼^ℚ[exp{-κ H'(y)}].
Next, we use that the ξ_j in the definition of H are independent exponential random variables to obtain
𝔼^ℚ[exp{-κ H'(y)}] = ∏_j∈ J'(y)𝔼^ℚ[exp{-κξ_jf_Δ(x_j,y)}]
= ∏_j∈ J'(y)(1+κ f_Δ(x_j,y))^-1.
Using Taylor's expansion we have that log(1+x)≥ x-x^2 for |x|≤1/2. Since ℓ≥ S(x), we can apply <Ref> to get q_Δ(x,y)≤ c_2Δ^-d/2 for a constant c_2>0, all y∈Q̃_K' and all x_j with j∈ J'(y).
Hence if κ = CϵΔ^d/2 for the constant C=(4C_Uc_2)^-1, then
sup_x∈ B(y,R+√(d)ℓ)κ f_Δ(x,y)=sup_x∈ B(y,R+√(d)ℓ)κμ_y q_Δ(x,y)≤ C_Uc_2κΔ^-d/2<ϵ/4.
For such a value of κ we have
∏_j∈ J'(y)(1+κ f_Δ(x_j,y))^-1 ≤ ∏_j∈ J'(y)exp{-κ f_Δ(x_j,y)(1-κ f_Δ(x_j,y))}
≤ exp{-∑_j∈ J'(y)κ f_Δ(x_j,y)(1-sup_x∈ B(y,R+√(d)ℓ)κ f_Δ(x,y))}
≤ exp{-κ∑_j∈ J'(y) f_Δ(x_j,y)(1-ϵ/4)}.
We claim that
∑_j∈ J'(y)f_Δ(x_j,y)≥βμ_y(1-ϵ/2),
which together with (<ref>), (<ref>) and (<ref>) give that
ℚ[∃ y∈Q̃_K': H_J(y)<βμ_y(1-ϵ)] ≤ ∑_y∈Q̃_K'exp{κμ_yβ(1-ϵ)-κβμ_y(1-ϵ/2)(1-ϵ/4)}
≤ ∑_y∈Q̃_K'exp{-κβμ_yϵ/4}.
Using the value of κ gives the theorem.
It remains to show (<ref>). For each T̃_i and each particle x_j∈T̃_i, let x_j'∈T̃_i be such that f_Δ(x_j',y)=max_w∈T̃_if_Δ(w,y).
Then, write
∑_j∈ J'(y)f_Δ(x_j,y) ≥ ∑_j∈ J'(y)(f_Δ(x_j',y)-|f_Δ(x_j',y)-f_Δ(x_j,y)|).
We have for each T̃_i
∑_j∈ J'(y): x_j∈T̃_i f_Δ(x_j',y) = max_w∈T̃_if_Δ(w,y)∑_j∈ J'(y): x_j∈T̃_i 1
≥ max_w∈T̃_if_Δ(w,y)∑_z∈T̃_iβμ_z
≥ ∑_z∈T̃_iβμ_z f_Δ(z,y).
Set R(y) to be the set of all sites z such that |z-y|≤ R-√(d)ℓ. Note that if z∈ R(y) then for all particles x_j with x'_j=z and j∈ J' we have j∈ J'(y).
We observe that since μ_z f_Δ(z,y)=μ_y f_Δ(y,z), we have by using (<ref>) for each T̃_i that
∑_j∈ J'(y)f_Δ(x_j',y) ≥ ∑_z∈ R(y)βμ_z f_Δ(z,y)
= βμ_y∑_z∈ R(y)f_Δ(y,z).
Then, since ℓ>S^d+1(x) we have by <Ref> that there exist constants c_4 and c_5 such that
∑_j∈ J'(y)f_Δ(x_j',y) ≥ βμ_yℙ_y(τ(y,R-√(d)ℓ)≥Δ)
≥ βμ_y(1-c_4exp{-c_5c_1^2ϵ^-2/d})
≥ βμ_y(1-ϵ/4),
where we set c_1 large enough with respect to c_4 and c_5 for the last inequality to hold.
Now it remains to obtain an upper bound for the term ∑_j∈ J'(y)|f_Δ(x_j',y)-f_Δ(x_j,y)|.
We define I to be the set of all i such that T̃_i contains a particle x_j from the set (x_j)_j∈ J'(y). Then, since ℓ>S(x), there exist positive constants C_PHI and C_BH such that if we apply the PHI (cf. <Ref>) with
r_0^2=Δ≥ c_0ℓ^2ϵ^-4/Θ
for some constant c_0>d, we obtain
∑_j∈ J'(y)|f_Δ(x_j',y)-f_Δ(x_j,y)| = ∑_i∈ I∑_j∈ J'(y): x_j∈T̃_i|f_Δ(x_j',y)-f_Δ(x_j,y)|
= μ_y∑_i∈ I∑_j∈ J'(y): x_j∈T̃_i|q_Δ(x_j',y)-q_Δ(x_j,y)|
≤ μ_y∑_i∈ I∑_j∈ J'(y): x_j∈T̃_i C_PHIℓ^ΘΔ^-Θ/2C_BHΔ^-d/2
≤ μ_y∑_i∈ I∑_x∈T̃_i 2βμ_x C_PHIℓ^ΘΔ^-Θ/2C_BHΔ^-d/2,
where in the first inequality we replaced the supremum term coming from <Ref> by its upper bound C_BHΔ^-d/2 from <Ref>, and used that r_0=√(Δ) in the bound from <Ref>. Then
∑_j∈ J'(y)|f_Δ(x_j',y)-f_Δ(x_j,y)| ≤ 2βμ_y C_PHIC_BH∑_i∈ I∑_x∈T̃_iμ_xℓ^ΘΔ^-(d+Θ)/2
≤ 2βμ_y C_PHIC_BHC_UR^dℓ^ΘΔ^-(d+Θ)/2
≤ βμ_yϵ/4,
where the last inequality holds by using Δ≥ c_0ℓ^2ϵ^-4/Θ and setting c_0>(2C_PHIC_BHC_Uc_1^d)^2/Θ. Note that in order to use <Ref>, we need to have that each pair x_j,x_j' is contained in some ball B(x_0,r_0/2). This is satisfied since |x_j-x_j'|≤√(d)ℓ and r_0 is set sufficiently large by (<ref>).
Plugging (<ref>) and (<ref>) into (<ref>) proves (<ref>).
§ EXTENSIONS
Although the estimate derived in <Ref> does not depend on the particles outside of Q_K at time 0 when K-K' is sufficiently large, it still depends on the geometry of the entire graph outside of Q_K. In some applications, as in our companion paper <cit.>, one needs to apply this coupling in many different regions of the graph simultaneously. In such cases, in order to control dependencies between different regions, it is important that the coupling procedure depends only on the local structure of the graph. In order to do this, we will condition the particles to be inside some large enough but finite region while they move for time Δ. Recall that, for any ρ>0, Q_ρ=[-ρ/2,ρ/2]^d is the cube of side length ρ. For any ρ>0, we say that a random walk has displacement in Q_ρ during [0,Δ] if the random walk never exits x+Q_ρ during the time interval [0,Δ], where x is the starting vertex of the random walk.
Let μ_x,y satisfy Assumptions <ref> and <ref>. There exist constants c_1 and c_2 so that the following holds. Let V=ℤ^d, ℓ>0 and consider the cube Q_ℓ. Assume ℓ>S(x) for all x∈ Q_ℓ. Let Δ>c_1ℓ^2 and ρ≥ c_2√(ΔlogΔ). Consider a random walk Y that moves along 𝒢 for time Δ conditioned on having its displacement in Q_ρ during the time interval [0,Δ]. Let x,y∈ Q_ℓ with x being the starting point of the walk, and define
g(x,y):=ℙ_x[Y_Δ=y | Y has displacement in Q_ρ during [0,Δ]].
Then there exists a constant C>2 such that for x,y,z∈ Q_ℓ we have
|g(x,y)/μ_y-g(z,y)/μ_y|≤ Cℓ^ΘΔ^-(d+Θ)/2.
Note that the above bound has the same form as the one for the heat kernel of unconditioned random walks in <Ref>, with the supremum being bounded above by the heat kernel bound from <Ref>. This allows us to extend <Ref> to random walks conditioned to have a bounded displacement during [0,Δ].
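One simple, if inefficient, way to realize the conditioned walks appearing in <Ref> is rejection sampling: run unconditioned walks and discard every path whose displacement leaves Q_ρ. The sketch below does this for the simple random walk on ℤ^2 (the case μ≡1); the function name and all parameter values are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def conditioned_endpoint(delta, rho, max_tries=10_000):
    """Rejection-sampling sketch: endpoint displacement Y_Delta - Y_0 of a
    rate-1 simple random walk on Z^2 run for time delta, conditioned on its
    displacement staying in Q_rho = [-rho/2, rho/2]^2 throughout [0, delta]."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(max_tries):
        pos = np.zeros(2, dtype=int)
        clock, ok = rng.exponential(1.0), True
        while clock <= delta:
            pos += steps[rng.integers(4)]
            if np.abs(pos).max() > rho / 2:   # left x + Q_rho: reject the path
                ok = False
                break
            clock += rng.exponential(1.0)
        if ok:
            return pos
    raise RuntimeError("no path accepted; increase rho or max_tries")

print(conditioned_endpoint(delta=25.0, rho=30.0))
```

Sampling repeatedly from this function gives an empirical version of the density g(x,·) defined above.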
Denote by p_E(ρ) the probability that a random walk started at x has displacement in Q_ρ during [0,Δ]. From <Ref>, we have that if Δ is sufficiently large, then
1-p_E(ρ) ≤ℙ_x[Y exits B(x,ρ/2) during [0,Δ]]
=ℙ_x(τ(x,ρ/2)<Δ)
≤ c_aexp{-c_bρ^2/Δ}.
Next, using h(x,y):=ℙ_x[Y_Δ=y | Y exits x+Q_ρ during [0,Δ]] and f_Δ(x,y)=ℙ_x[Y_Δ=y], we can write
f_Δ(x,y)=g(x,y)p_E(ρ)+h(x,y)(1-p_E(ρ)).
With this we have
g(x,y)≤ f_Δ(x,y)/p_E(ρ).
Then, we can write
|g(x,y)/μ_y-g(z,y)/μ_y| = 1_{g(x,y)>g(z,y)}(g(x,y)/μ_y-g(z,y)/μ_y)
+1_{g(x,y)<g(z,y)}(g(z,y)/μ_y-g(x,y)/μ_y)
≤ 1_{g(x,y)>g(z,y)}(f_Δ(x,y)/(μ_y p_E(ρ))-f_Δ(z,y)/(μ_y p_E(ρ))+h(z,y)(1-p_E(ρ))/(p_E(ρ)μ_y))
+1_{g(x,y)<g(z,y)}(f_Δ(z,y)/(μ_y p_E(ρ))-f_Δ(x,y)/(μ_y p_E(ρ))+h(x,y)(1-p_E(ρ))/(p_E(ρ)μ_y))
≤ |q_Δ(y,x)-q_Δ(y,z)|/p_E(ρ)+max{h(x,y),h(z,y)}(1-p_E(ρ))/(p_E(ρ)μ_y).
Note that h(x,y) can be written as f_Δ-τ(w,y), where τ is the first time Y exits x+Q_ρ and w is the random vertex at the boundary of x+Q_ρ where Y is at time τ. Since the weights μ_x,y satisfy (<ref>) by <Ref>, we have that f_Δ-τ(w,y)/μ_y is at most some positive constant c. This holds because either Δ-τ is larger than |w-y|, which allows us to apply heat kernel bounds from <Ref>, or Δ-τ is smaller than |w-y| so f_Δ-τ(w,y) is bounded above by the probability that a random walk jumps at least |w-y| steps in time Δ-τ, which is small enough since |w-y| is large. This gives that max{h(x,y),h(z,y)}/μ_y is at most c.
max{h(x,y),h(z,y)}(1-p_E(ρ))/(μ_y p_E(ρ)) ≤ c c_a/p_E(ρ)·exp{-c_bρ^2/Δ}
≤ c c_a/p_E(ρ)·exp{-c_b c_2^2logΔ}.
By (<ref>) we can simply bound p_E(ρ) from below by 1/2. Then, applying <Ref> to |q_Δ(y,x)-q_Δ(y,z)|, and using <Ref> to bound the resulting supremum term, concludes the proof.
The next theorem is an adaptation of <Ref> for conditioned random walks. Note that we need a stronger condition on K-K' below than in <Ref>.
Let μ_x,y satisfy Assumptions <ref> and <ref>. There exist constants c_0, c_1, C>0 such that the following holds.
Fix K>ℓ>0 and ϵ∈(0,1). Consider the cube Q_K tessellated into subcubes (T_i)_i of side length ℓ and assume that ℓ>S^d+1(x) for all x∈Q̃_K.
Let (x_j)_j⊂Q̃_K be the locations at time 0 of a collection of particles, such that each subcube T̃_i contains at least ∑_y∈T̃_iβμ_y particles for some β>0.
Let Δ≥ c_0ℓ^2ϵ^-4/Θ, where Θ is as in <Ref>.
Fix K'>0 such that K-K'≥ c_1√(ΔlogΔ).
For each j, denote by Y_j the location of the j-th particle at time Δ, conditioned on having displacement in Q_K-K' during [0,Δ].
Then there exists a coupling ℚ of an independent Poisson point process ψ with intensity measure ζ(y)=β(1-ϵ)μ_y, y∈Q̃_K, and (Y_j)_j such that within Q̃_K'⊂Q̃_K, ψ is a subset of (Y_j)_j with probability at least
1-∑_y∈Q̃_K'exp{-Cβμ_yϵ^2Δ^d/2}.
Using <Ref> and (<ref>) when setting κ, the proof goes in the same way as the proof of <Ref>. The independence from G outside of Q̃_K follows from the fact that we only consider particles which have displacement in Q_K-K' and ended in Q̃_K', so that they never left Q̃_K during [0,Δ].
§.§ Extension to other graphs
We have shown that the local mixing result of Theorems <ref> and <ref> work for ℤ^d, but they can easily be extended to the more general graphs defined in <Ref>, as long as Assumptions <ref> and <ref> hold.
We start with a region A⊆𝒞_∞ around the origin of G and tessellate it into tiles (T_i)_i∈ I of diameter at most ℓ. Let Δ be as in <Ref>. Let A'⊂ A be all the sites in A that are at least √(Δ)c_1ϵ^-1/d+cℓ away from the boundary of A. Then, if A' is not empty, using the same steps as in the proof of <Ref>, if each tile T_i of A contains at least β∑_y∈ T_iμ_y particles at time 0, it holds that within the region A' there is a coupling with an independent Poisson point process ψ of intensity measure ζ(y)=β(1-ϵ)μ_y such that at time Δ, ψ is a subset of the particles inside A' with probability at least
1-∑_y∈ A'exp{-Cβμ_yϵ^2Δ^d/2},
for some constant C>0.
Furthermore, <Ref> can be extended analogously, if we require that A' contains only sites that are at least c_1√(ΔlogΔ) away from the boundary of A, for some constant c_1, and if we condition the random walks to have their displacement limited to a ball of radius c_1√(ΔlogΔ).
§ SPREAD OF THE INFECTION
Our goal in this section will be to use <Ref> in order to show that on the graph G=(V,E) with V=ℤ^d and E={(x,y):‖x-y‖_1=1}, and with μ_x,y, (x,y)∈ E being i.i.d. and satisfying (<ref>), the infection spreads with positive speed in any direction, as claimed in Theorems <ref> and <ref>. In this setting, <Ref> guarantees that <Ref> holds with S(x)≡ 1 and, since μ_x,y≠ 0 for all (x,y)∈ E, we also have that <Ref> holds.
Recall that we assume d≥ 2. Tessellate ℤ^d into cubes of side length ℓ, indexed by i∈ℤ^d. Next, tessellate time into intervals of length β, indexed by τ∈ℤ. With this we denote by the space-time cell (i,τ)∈ℤ^d+1 the region ∏_j=1^d[i_jℓ,(i_j+1)ℓ]×[τβ,(τ+1)β]. In the following, β is set as a function of ℓ so that the ratio β/ℓ^2 is fixed first to be a small constant, and then ℓ is set sufficiently large.
We will use a result from <cit.> that gives the existence of a Lipschitz connected surface (cf. Definitions <ref> and <ref> below) that surrounds the origin and which is composed of space-time cells, for which a certain local event holds. This will allow us to obtain an infinite sequence of space-time cells, such that the infection spreads from one cell to the next.
In order to obtain this result, we will need to consider overlapping space-time cells. Let η≥ 1 be an integer which will represent the amount of overlap between cells. For each cube i=(i_1,…,i_d) and time interval τ, define the super cube i as ∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] and the super interval τ as [τβ,(τ+η)β]. We define the super cell (i,τ) as the Cartesian product of the super cube i and the super interval τ.
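In code, locating the cell and super cell of a given space-time point is a simple floor computation, as in the following hypothetical helpers (ℓ, β and η are passed as free parameters):

```python
import math

def cell_of(x, t, ell, beta):
    """Space-time cell (i, tau) containing the point (x, t), i.e. the index with
    x in prod_j [i_j*ell, (i_j+1)*ell] and t in [tau*beta, (tau+1)*beta]."""
    i = tuple(math.floor(xj / ell) for xj in x)
    tau = math.floor(t / beta)
    return i, tau

def in_super_cell(x, t, i, tau, ell, beta, eta):
    """Membership of (x, t) in the super cell (i, tau): the super cube
    prod_j [(i_j-eta)*ell, (i_j+eta+1)*ell] times the super interval
    [tau*beta, (tau+eta)*beta]."""
    in_cube = all((ij - eta) * ell <= xj <= (ij + eta + 1) * ell
                  for xj, ij in zip(x, i))
    return in_cube and tau * beta <= t <= (tau + eta) * beta
```

Note that a point can belong to many super cells but only one cell, which is exactly the overlap that η controls.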
In the following we will say a particle has displacement inside X' during a time interval [t_0,t_0+t_1], if the location of the particle at all times during [t_0,t_0+t_1] is inside x+X', where x is the location of the particle at time t_0. For each time s≥ 0, let Π_s be a point process on V, which represents the locations of the particles at time s. We say an event E is increasing for (Π_s)_s≥ 0 if the fact that E holds for (Π_s)_s≥ 0 implies that it holds for all (Π'_s)_s≥ 0 for which Π_s'⊇Π_s for all s≥ 0. We say an event E is restricted to a region X⊂ℤ^d and a time interval [t_0,t_1] if it is measurable with respect to the σ-field generated by all the particles that are inside X at time t_0 and their positions during [t_0,t_1]. For an increasing event E that is restricted to a region X and time interval [t_0,t_1], we have the following definition.
ν_E is called the probability associated to an increasing event E that is restricted to X and a time interval [t_0, t_0+t_1] if, for an intensity measure ζ, ν_E(ζ,X,X',t_1) is the probability that E happens given that, at time t_0, the particles in X are given by a Poisson point process of intensity measure ζ and their motions from t_0 to t_0+t_1 are independent continuous time random walks on the weighted graph (G,μ), where the particles are conditioned to have displacement inside X'.
For each (i,τ)∈ℤ^d+1, let E_st(i,τ) be an increasing event restricted to the super cube i and the super interval τ. Here the subscript st refers to space-time. We say that a cell (i,τ) is bad if E_st(i,τ) does not hold and good otherwise.
We will need a different way to index space-time cells, which we refer to as the base-height index. In the base-height index, we pick one of the d spatial dimensions and denote it as height, using index h∈ℤ, while the remaining d space-time dimensions form the base, which we index by b∈ℤ^d. In this way, for each space-time cell (i,τ) there will be (b,h)∈ℤ^d+1 such that the base-height cell (b,h) corresponds to the space-time cell (i,τ).
We analogously define the base-height super cell (b,h) to be the space-time super cell (i,τ), for which the base-height cell (b,h) corresponds to the space-time cell (i,τ). Similarly, we define E_bh(b,h), the increasing event restricted to the super cell (b,h) that is the same as the event E_st(i,τ) for the space-time super cell (i,τ) that corresponds to the base-height super cell (b,h). Here, the subscript bh refers to the base-height index.
In order to prove Theorems <ref> and <ref>, we will need a theorem from <cit.>, which gives the existence of a two-sided Lipschitz surface F.
A function F:ℤ^d→ℤ is called a Lipschitz function
if |F(x)-F(y)|≤ 1 whenever ‖x-y‖_1 = 1.
A two-sided Lipschitz surface F is a set of base-height cells (b,h)∈ℤ^d+1 such that for all b∈ℤ^d there are exactly two (possibly equal) integer values F_+(b)≥ 0 and F_-(b)≤0 for which (b,F_+(b)),(b,F_-(b))∈ F and, moreover, F_+ and F_- are Lipschitz functions.
We say a space-time cell (i,τ) belongs to F if there exists a base-height cell (b,h)∈ F that corresponds to (i,τ). We say a two-sided Lipschitz surface F is finite, if for all b∈ℤ^d, we have F_+(b)<∞ and F_-(b)>-∞. For a positive integer D, we say a two-sided Lipschitz surface surrounds a cell (b',h') at distance D if any path (b',h')=(b_0,h_0),(b_1,h_1),…,(b_n,h_n) for which ‖(b_i,h_i)-(b_{i-1},h_{i-1})‖_1=1 for all i∈{1,… n} and ‖(b_n,h_n)-(b_0,h_0)‖_1>D, intersects with F.
We now present the main result from our paper <cit.>, which holds for graphs where a local mixing result, such as the one in <Ref>, hold. More precisely, for a graph satisfying Assumption <ref> and (<ref>) (which implies <Ref> holds) we have that <Ref> holds (with S(x)=1 for all x∈ V), which in turn gives that the following result from <cit.> holds. Recall that, for any ρ≥ 2, Q_ρ stands for the cube [-ρ/2,ρ/2]^d, and that λ is the intensity measure of the Poisson point process of particles as defined in <Ref>.
Let 𝒢=(G,μ) be a graph satisfying Assumption <ref> and (<ref>) on the lattice ℤ^d for d≥ 2. There exist positive constants c_1 and c_2 such that the following holds. Tessellate G in space-time cells and super cells as described above for some ℓ,β,η>0 such that the ratio β/ℓ^2 is small enough.
Let E_st(i,τ) be an increasing event, restricted to the space-time super cell (i,τ).
Fix ϵ∈(0,1) and fix w such that
w≥√((ηβ/(c_2ℓ^2))log(8c_1/ϵ)).
Then, there exists a positive number α_0 that depends on ϵ, η and the ratio β/ℓ^2 so that if
min{C_M^-1ϵ^2λ_0ℓ^d,log(1/(1-ν_E_st((1-ϵ)λ,Q_(2η+1)ℓ,Q_wℓ,ηβ)))}≥α_0,
a two-sided Lipschitz surface F where E_st(i,τ) holds for all (i,τ)∈ F exists. Furthermore, the surface is finite almost surely and surrounds the origin at a finite distance almost surely.
Recall that we want to show that the infection spreads with positive speed. Given a space-time tessellation of G and a local increasing event E_st, <Ref> gives the existence of a Lipschitz surface F on which E_st holds. Let T=ℓ^5/3. We will define the increasing event E_st(i,τ) to represent a single infected particle in the middle of the super cube i at time τβ infecting a large number of particles in that super cube by time τβ+T, after which the infected particles move up to time (τ+1)β, spreading to all of the cubes contained in the super cube.
Let (i,τ) be a space-time cell as defined previously. We consider that there is an infected particle in the center cube of the super cube i at time τβ, that is, the particle is inside ∏_j=1^d[i_jℓ,(i_j+1)ℓ]. Starting from time τβ, we let the infected particle move and infect sufficiently many other particles by time τβ+T. This is given in the lemma below.
There exists a positive constant C_1 such that the following holds for all large enough ℓ.
Let Q^*=∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] and let (ρ(t))_τβ≤ t≤τβ+T be the path of an infected particle that starts in ∏_j=1^d[i_jℓ,(i_j+1)ℓ] and stays inside ∏_j=1^d[(i_j-η+1)ℓ,(i_j+η)ℓ] during [τβ,τβ+T]. Assume that at time τβ, the number of particles at each vertex x∈ Q^*∖ρ(τβ) is a Poisson random variable of mean λ_0/2μ_x.
Let Υ be the subset of those particles that do not leave Q^* during [τβ,τβ+T], and let Υ'⊂Υ be the particles colliding with the path ρ, that is, for each particle of Υ' there exists a time t∈[τβ,τβ+T] such that the particle is located at ρ(t). Then, Υ' is a Poisson random variable of mean at least
C_1λ_0ℓ^1/3.
For each time t∈ [τβ,τβ+T], let Ψ_t be the Poisson point process on V giving the locations at time t of the particles that belong to Υ. Since the particles that start in Q^* move around and can leave Q^*, we need to find a lower bound for the intensity of Ψ_t for times in [τβ,τβ+T]. Note that the infected particle we are tracking is not part of Ψ, since Ψ does not include particles located at ρ(τβ) at time τβ.
We will need to apply heat kernel bounds from <Ref> to the particles in Q^*, so we need to ensure that the time intervals we consider are large enough for the proposition to hold.
We will only consider times t∈[ℓ^4/3,T], so that for large enough ℓ we have t≥sup_{x,y∈ Q^*}‖x-y‖_1 and the heat kernel bounds from <Ref> hold. Then, we have that for all sites x∈ Q^* that are at least ℓ away from the boundary of Q^* and at any such time t, the intensity of Ψ_τβ+t at vertex x∈ V is at least
ψ(x,τβ+t)≥∑_{y∈ Q^*: y≠ρ(τβ)}(λ_0/2)μ_y·μ_x q_t(y,x)
= (λ_0/2)μ_x∑_{y∈ Q^*: y≠ρ(τβ)}ℙ_x[Y_t=y],
where we used in the last step that the heat kernel q_t is symmetric. We now use the exit time bound from <Ref> to get that
∑_y∈ Q^*ℙ_x[Y_t=y]≥ 1-c_3exp{-c_4ℓ^2/t}.
Next, we use that ℙ_x[Y_t=y]=μ_y q_t(x,y)≤ C_M q_t(x,y), and use <Ref> to account for the particles at ρ(τβ), yielding
∑_{y∈ Q^*: y≠ρ(τβ)}ℙ_x[Y_t=y]≥ 1-c_3exp{-c_4ℓ^2/t}-C_Mc_5t^-d/2.
This gives that for any t∈[ℓ^4/3,T], the intensity of Ψ_τβ+t is at least
ψ(x,τβ+t)≥(λ_0/2)μ_x(1-c_3exp{-c_4ℓ^2/T}-C_Mc_5ℓ^-2d/3).
Let [τβ,τβ+T] be divided into subintervals of length W∈(0,T], where we set W=ℓ^4/3 so that it is large enough to allow the use of the heat kernel bounds from <Ref>. Let J={1,…, ⌊ T/W⌋} and t_j:=τβ+jW. Then the intensity of particles that share a site with the initially infected particle only at one time among {t_1, t_2, …, t_⌊ T/W⌋} is at least
∑_j∈ Jψ(ρ(t_j),t_j)ℙ_ρ(t_j)[X_{r-t_j}≠ρ(r) ∀ r∈{t_{j+1},…,t_{⌊ T/W⌋}}]
≥ (λ_0/2)C_M^-1(1-c_3exp{-c_4ℓ^2/T}-C_Mc_5ℓ^-2d/3)∑_j∈ J(1-∑_z>jℙ_ρ(t_j)[X_{t_z-t_j}= ρ(t_z)]).
We want to make all of the terms of the sum over J positive, so we consider the term ∑_z>jℙ_ρ(t_j)[X_{t_z-t_j}= ρ(t_z)] and show that it is smaller than 1/2 for large enough ℓ. To do this, we use that ℙ_x[Y_t=y]=μ_y q_t(x,y) with the heat kernel bounds from <Ref>, which hold when W≥ℓ^4/3 and ℓ is large enough, to bound it from above by
∑_z>jℙ_ρ(t_j)[X_{t_z-t_j}= ρ(t_z)]
≤∑_z>jC_MC_HK(t_z-t_j)^-d/2
≤ C_M C_HKW^-d/2∑_{z=1}^{T/W-j}z^-d/2,
where C_HK is the constant coming from <Ref>. Then, (<ref>) can be bounded from above by
C_M C_HK W^-d/2(2+∑_{z=3}^{T/W-j}z^-d/2)
≤ C_M C_HKW^-d/2(2+∫_2^{T/W}z^-d/2 dz).
Let C be a constant that may depend on C_HK, C_M and d. Then for d=2, the expression in (<ref>) is smaller than CW^-1log(T/W), and for d≥ 3 the expression in (<ref>) is smaller than CW^-d/2. Thus, setting ℓ large enough, both terms are smaller than 1/2.
Then, as a sum of Poisson random variables, we get that Υ' is a Poisson random variable with mean at least
(λ_0/2)C_M^-1(1-c_3exp{-2c_4ℓ^2/T}-C_Mc_5ℓ^-2d/3)·T/(2W).
Using that T=ℓ^5/3 and setting ℓ large enough establishes the lemma, with C_1 being any constant satisfying C_1<C_M^-1/4.
Next we show that the particles from <Ref> move to nearby cells, spreading the infection.
Let z=(z_1,…,z_d) with z_j∈{-η,-η+1,…,η} for all j∈{1,… d}, and fix the ratio β/ℓ^2. Let A(i,τ,N,z) be the event that given a set of N>0 particles in ∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] at time τβ+T,
at least one of them is in ∏_j=1^d[(i_j+z_j)ℓ,(i_j+z_j+1)ℓ] at time (τ+1)β. Then, if ℓ is sufficiently large while keeping β/ℓ^2 fixed, we obtain
ℙ[A(i,τ,N,z)]≥ 1-exp{-Nc_p},
where c_p is a positive constant that is bounded away from 0 and depends only on d, η and the ratio β/ℓ^2.
Let Q^*=∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] and Q^**=∏_j=1^d[(i_j+z_j)ℓ,(i_j+z_j+1)ℓ]. For t^2/3≥sup_{x∈ Q^*, y∈ Q^**}‖x-y‖_1, define p_t:=inf_x∈ Q^*∑_y∈ Q^**ℙ_x[Y_t=y]. Then, if we define bin(N,p_t) to be a binomial random variable of parameters N∈ℕ and p_t∈[0,1], it directly follows that
ℙ[A(i,τ,N,z)]≥ℙ[bin(N,p_t)≥1]≥1-exp{-Np_t}.
It remains to show that for t=β-T, we have that p_t≥ c_p>0 for some constant c_p. We will use the heat kernel bounds for the pair x,y, which hold if ‖x-y‖_1^3/2≤β-T for all x∈ Q^*,y∈ Q^**. Given the ratio β/ℓ^2, d and η, this is satisfied if ℓ is large enough. Then we have that
p_β-T = inf_x∈ Q^*∑_y∈ Q^**ℙ_x[Y_β-T=y]
≥ inf_x∈ Q^*C_M^-1∑_y∈ Q^**q_β-T(x,y)
≥ inf_x∈ Q^*C_M^-1∑_y∈ Q^**c_1 β^-d/2exp{-c_2‖x-y‖_1^2/(β-T)}.
Now we use that x and y can be at most c_ηℓ apart, where c_η is a constant depending on d and η only, and that β-T≥β/2 for ℓ large enough. Hence,
p_β-T ≥ inf_x∈ Q^*C_M^-1∑_y∈ Q^**c_1 β^-d/2exp{-2c_2(c_ηℓ)^2/β}
= C_M^-1c_1ℓ^dβ^-d/2exp{-2c_2(c_ηℓ)^2/β}
≥ c_p.
In the next lemma, we will tie together the results from <Ref> and <Ref>. In order to precisely describe the behavior of the particles involved, we say a particle x collides with particle y during a time interval [t_0,t_1], if for at least one t∈[t_0,t_1], x and y are at the same site.
Consider the super cell (i,τ). Assume that at each site x∈∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] the number of particles at x at time τβ is a Poisson random variable of intensity (λ_0/2)μ_x, and let Υ be the collection of such particles. Assume that, at time τβ, there is at least one infected particle x_0 inside ∏_j=1^d[i_jℓ,(i_j+1)ℓ]. Let E_st(i,τ) be the event that at time (τ+1)β, for all i'∈ℤ^d with ‖i-i'‖_∞≤η, there is at least one particle from Υ in ∏_j=1^d[(i_j')ℓ,(i_j'+1)ℓ] that collided with x_0 during [τβ,τβ+T]. If ℓ is sufficiently large for Lemmas <ref> and <ref> to hold,
then there exists a positive constant C such that
ℙ[E_st(i,τ)]≥1-exp{-Cλ_0ℓ^1/3}.
We note that, by definition, the event E_st(i,τ) is restricted to the super cube ∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] and time interval [τβ,(τ+1)β].
We define the following 3 events.
F_1: The initial infected particle x_0 never leaves ∏_j=1^d[(i_j-η+1)ℓ,(i_j+η-1)ℓ] during [τβ,τβ+ T].
F_2: Let C_1 be the constant from <Ref>. During the time interval [τβ,τβ+T] the initial infected particle x_0 collides with at least C_1λ_0ℓ^1/3/2 different particles from Υ that are in the supercube Q^**=∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] at time τβ+T.
F_3: Out of the C_1λ_0ℓ^1/3/2 or more particles from F_2, at least one of them is in the cube ∏_j=1^d[(i_j+k_j)ℓ,(i_j+k_j+1)ℓ] at time (τ+1)β, for all k=(k_1,…,k_d) for which ∏_j=1^d[(i_j+k_j)ℓ,(i_j+k_j+1)ℓ]⊂ Q^**.
By definition of the events, we clearly have that ℙ[E_st(i,τ)]≥ℙ[F_1∩ F_2∩ F_3].
Using <Ref> we have
ℙ[F_1]≥ 1-C_2exp{-C_3ℓ^2/T}=1-C_2exp{-C_3ℓ^1/3}
for some positive constants C_2 and C_3. We observe that F_1 is restricted to the super cube ∏_j=1^d[(i_j-η)ℓ,(i_j+η+1)ℓ] and the time interval [τβ,τβ+T].
For the event F_2, we apply <Ref> to get that the intensity of the Poisson point process of particles that are in Q^** at time τβ and collide with x_0 during [τβ,τβ+T] is at least λ_0 C_1ℓ^1/3 for some positive constant C_1. Since every particle that collides with x_0 enters ∏_j=1^d[(i_j-η+1)ℓ,(i_j+η)ℓ] during [τβ,τβ+T], we can use <Ref> to bound the probability that the particle is inside of Q^** at time τβ+T by
1-C_aexp{-C_bℓ^2/T}=1-C_aexp{-C_bℓ^1/3},
for some positive constants C_a and C_b. This term can be made as close to 1 as possible by having ℓ sufficiently large. We assume ℓ is large enough so that this term is larger than 2/3. This gives that the intensity of the process of particles from Υ that collided with x_0 during [τβ,τβ+T] and are in Q^** at time τβ+T is at least
2λ_0 C_1ℓ^1/3/3.
Using Chernoff's bound (see <Ref>) we have that
ℙ[F_2]≥ 1-exp{-(2/3)^2C_1λ_0ℓ^1/3}.
Note that, by construction, F_2 is restricted to the super cube Q^** and the time interval [τβ,τβ+T]. Furthermore, F_2 is clearly an increasing event.
We now turn to F_3. Using <Ref>, and a uniform bound across the number of cubes inside a super cube, we have that
ℙ[F_3]≥ 1-(2η+1)^dexp{-C_1λ_0ℓ^1/3/2c_p},
where c_p is a small but positive constant. Again, the event is restricted to the super cube Q^** and the time interval [τβ+T,(τ+1)β] and is an increasing event. Taking the product of the probability bounds in (<ref>), (<ref>) and (<ref>), we see that the probability that E_st(i,τ) holds is at least
1- exp{-Cλ_0ℓ^1/3}
for some constant C and all large enough ℓ.
We start by using <Ref>. Set η∈ℕ such that η≥ d and set ϵ=1/2. Fix the ratio β/ℓ^2 small enough so that the lower bound for w is at most 2η+1, and then set w=2η+1.
Assume ℓ is large enough so that <Ref> holds.
For each (i,τ)∈ℤ^d+1, define E_st(i,τ) as in <Ref>. This event is increasing in the number of particles, is restricted to the super cube i and time interval [τβ,(τ+1)β], and satisfies
ℙ[E_st(i,τ)]≥ 1-exp{-Cλ_0ℓ^1/3},
for some constant C. Hence, letting λ/2 stand for the measure λ/2(x)=λ_0μ_x/2, we have
log(1/(1-ν_E_st(λ/2,Q_(2η+1)ℓ,Q_(2η+1)ℓ,ηβ)))≥ Cλ_0ℓ^1/3,
which increases with ℓ, as does the term ϵ^2λ_0ℓ^d in the condition of <Ref>.
Thus, setting ℓ large enough, we apply <Ref> which gives the existence of a two-sided Lipschitz surface F, on which the event E_st(i,τ) holds. We also get that the surface is almost surely finite and that it surrounds the origin.
We now proceed to argue that the existence of the surface F implies that the infection spreads with positive speed.
Since the two-sided Lipschitz surface F is finite and surrounds the origin, we have that in almost surely finite time, an infected particle started from the origin will enter some cube ∏_j=1^d[i_jℓ,(i_j+1)ℓ] for which (i,τ) is in F. We call this the central cube of (i,τ). Once that holds, the starting assumption of E_st(i,τ) from <Ref> is satisfied for the super cell (i,τ), and the event E_st(i,τ) holds. By the definition of E_st(i,τ) this means that the initial infected particle for the super cell (i,τ) infects a large number of other particles, which spread the infection to the central cube of (i',τ+1) for all i'∈ℤ^d such that ‖i'-i‖_∞≤η.
Let (b,h) be the base-height index of the cell (i,τ)∈ F. Recall that h is one of the spatial dimensions. We will also select one of the d-1 spatial dimensions from b and denote it b_1. Let b'∈ℤ^d be obtained from b by increasing the time dimension from τ to τ+1, and by increasing the chosen spatial dimension from b_1 to b_1+1. Since ‖b-b'‖_1=2, we can choose h'∈ℤ such that (b',h')∈ F and |h-h'|≤ 2, where the latter holds by the Lipschitz property of F. Therefore, there must exist i'∈ℤ^d such that (i',τ+1) is the space-time super cell corresponding to (b',h') and ‖i-i'‖_∞≤ 1. Hence, at time (τ+1)β, there is an infected particle in the central cube of the super cell i'.
We can then recursively repeat this procedure for the super cell (i',τ+1), since E_st(i',τ+1) holds. Repeating this process we obtain that the infection spreads by a distance of at least ℓ in time β in the chosen spatial direction. Consequently
lim inf_t→∞‖I_t‖_1/t>0 almost surely.
In order to prove <Ref>, we can follow the same steps as in the proof of <Ref>, with the additional consideration that we have to ensure that the relevant infected particles do not recover too quickly. For that, we will require that all the particles involved do not recover for time at least β.
Recall the definition of Υ and ρ from <Ref> and of E_st(i,τ) from <Ref>. Let E_st'(i,τ) be the event that E_st(i,τ) holds, and that the particles in Υ and the initial infected particle whose path is ρ do not recover during [τβ,(τ+1)β]. Since each such particle does not recover during [τβ,(τ+1)β] with probability exp{-γβ}, for <Ref> we consider that for each x∈ Q^*∖ρ(τβ) the number of particles at x at time τβ that do not recover during [τβ, (τ+1)β] is a Poisson random variable of intensity (λ_0/2)μ_x exp{-γβ}. Thus,
once η, β and ℓ are fixed, setting γ small enough gives that E'_st(i,τ) holds with probability at least
1-(1-exp{-γβ})-exp{-Cλ_0exp{-γβ}ℓ^1/3}
for some positive constant C, where the term inside the parentheses accounts for the probability that the initial infected particle recovers during [τβ,(τ+1)β].
We now follow the same steps as in the proof of <Ref> to get that the two-sided Lipschitz surface F on which the increasing event E'_st(i,τ) holds exists, is finite and surrounds the origin almost surely. This gives that an initially infected particle that is at the origin at time 0 has a strictly positive probability of surviving long enough to enter a cell of the two-sided Lipschitz surface. Once on the surface, the infection survives indefinitely by the definition of E'_st(i,τ). Hence
ℙ[‖I_t‖_1≥ c_1t for all t≥0]≥ c_2.
§ APPENDIX: STANDARD LARGE DEVIATION RESULTS
Let P be a Poisson random variable with mean λ. Then, for any 0<ϵ<1,
ℙ[P<(1-ϵ)λ] < exp{-λϵ^2/2}
and
ℙ[P > (1 + ϵ)λ] < exp{-λϵ^2/4}.
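These bounds are easy to sanity-check numerically; in the sketch below, the values of λ, ϵ and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Quick numerical check of the Poisson Chernoff bounds stated above.
lam, eps, n = 40.0, 0.3, 200_000
P = rng.poisson(lam, size=n)

lower = (P < (1 - eps) * lam).mean()
upper = (P > (1 + eps) * lam).mean()
print(f"P[P < (1-eps)lam] ≈ {lower:.4f}  vs bound {np.exp(-lam * eps**2 / 2):.4f}")
print(f"P[P > (1+eps)lam] ≈ {upper:.4f}  vs bound {np.exp(-lam * eps**2 / 4):.4f}")
```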
| We consider the graph G=(ℤ^d,E), d≥ 2 to be the d-dimensional square lattice, with edges between nearest neighbors: for x,y∈ℤ^d we have (x,y)∈ E iff x-y_1=1. Let {μ_x,y}_(x,y)∈ E be a collection of i.i.d. non-negative weights, which we call conductances. In this paper, edges will always be undirected, so μ_x,y=μ_y,x for all (x,y)∈ E. We also assume that the conductances are uniformly elliptic: that is,
there exists C_M>0, such that μ_x,y∈[C_M^-1,C_M] for all (x,y)∈ E.
We say x∼ y if (x,y)∈ E and define μ_x=∑_y∼ xμ_x,y. At time 0, consider a Poisson point process of particles on ℤ^d, with intensity measure λ(x)=λ_0μ_x for some constant λ_0>0 and all x∈ℤ^d. That is, for each x∈ℤ^d, the number of particles at x at time 0 is an independent Poisson random variable of mean λ_0μ_x. Then, let the particles perform independent continuous-time simple random walks on the weighted graph so that a particle at x∈ℤ^d jumps to a neighbor y∼ x at rate μ_x,y/μ_x. It follows from the thinning property of Poisson random variables that the system of particles is in stationarity; thus, at any time t, the particles are distributed according to a Poisson point process with intensity measure λ.
We study the spread of an infection among the particles. Assume that at time 0 there is an infected particle at the origin, and all other particles are uninfected. Then an uninfected particle gets infected as soon as it shares a site with an infected particle. Our first result establishes that the infection spreads with positive speed.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). For any time t≥ 0, let I_t be the position of the infected particle that is furthest away from the origin. Then
lim inf_t→∞I_t_1/t>0 almost surely.
The above result has been established on the square lattice (i.e., μ_x,y=1 for all (x,y)∈ E) by Kesten and Sidoravicius <cit.> via an intricate multi-scale analysis; see also <cit.> for a shape theorem. In a companion paper <cit.>, we develop a framework which can be used to analyze processes on this setting without the need of carrying out a multi-scale analysis from scratch. We prove our <Ref> via this framework, showing the applicability of our technique from <cit.>. We also apply this technique to analyze the spread of an infection with recovery.
Let the setup be as before, but now each infected particle independently recovers and becomes uninfected at rate γ for some fixed parameter γ>0. After recovering, a particle becomes again susceptible to the infection and gets infected again whenever it shares a site with an infected particle. Our next result shows that if γ is small enough, then with positive probability there will be at least one infected particle at all times. When this happens, we also obtain that the infection spreads with positive speed.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). For any λ_0>0, there exists γ_0>0 such that, for all γ∈(0,γ_0), with positive probability, the infection does not die out. Furthermore, there are constants c_1,c_2>0 such that
ℙ[‖I_t‖_1≥ c_1t for all t≥0]≥ c_2,
where I_t is the position of the infected particle that is furthest away from the origin at time t.
The challenge in this setup comes from the heavily dependent structure of the model. Though particles move independently of one another, dependencies do arise over time. For example, if a ball of radius R centered at some vertex x of the graph turns out to have no particles at time 0, then the ball B(x,R/2) of radius R/2 centered at x, will continue to be empty of particles up to time R^2, with positive probability. This means that the probability that the (d+1)-dimensional, space-time cylinder B(x,R/2)×[0,R^2] has no particle is at least exp{-cR^d} for some constant c, which is just a stretched exponential in the volume of the cylinder.
On the other hand, one expects that, after time t≫ R^2, the set of particles inside the ball will become “close” to stationarity.
To deal with dependences, one often resorts to a decoupling argument, showing that two local events behave roughly independently of each other, provided they are measurable according to regions in space time that are sufficiently far apart. We will obtain such an argument by extending a technique which we call local mixing, and which was introduced in <cit.>. The key observation is the following. Consider a cube Q⊆ℤ^d, tessellated into subcubes of side length ℓ>0. For simplicity assume for the moment that μ_x,y=1 for all (x,y)∈ E. Suppose that at some time t, the configuration of particles inside Q is dense enough, in the sense that inside each subcube there are at least cℓ^d particles, for some constant c>0. Regardless of how the particles are distributed inside Q, as long as the subcubes are dense, we obtain that at some time t+c'ℓ^2, not only particles had enough time to move out of the subcubes they were in at time t, but also we obtain that the configuration of particles inside “the core” of Q (i.e., away from the boundary of Q) stochastically dominates a Poisson point process of intensity (1-ϵ)cℓ^d that is independent of the configuration of particles at time t. Moreover, the value ϵ can be made arbitrarily close to 0 by setting c' large enough. In words, we obtain a configuration at time t+c'ℓ^2 inside the core of Q that is roughly independent of the configuration at time t, and is close to the stationary distribution. To the best of our knowledge, the idea of local mixing in such settings originated in the work of Sinclair and Stauffer <cit.>, and was later applied in <cit.>. This idea was then extended with the introduction of soft local times by Popov and Teixeira <cit.> (see also <cit.>), and applied to other processes, such as random interlacements.
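As a toy illustration of the local-mixing idea (a sketch only: it uses unit conductances in d=1 on a cycle to avoid boundary effects, which is not the setting of our theorems), the following Python snippet starts from an atypical but locally dense configuration, lets the particles perform continuous-time simple random walks, and checks that the per-site counts become approximately Poisson, i.e. mean ≈ variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, T = 300, 3.0, 500.0          # cycle length, target intensity, evolution time

# Atypical but locally dense start: all particles packed on the even sites
counts0 = np.zeros(n, dtype=int)
counts0[::2] = rng.poisson(2 * lam, n // 2)
pos = np.repeat(np.arange(n), counts0)

# Continuous-time SRW on the cycle: Poisson(T) jumps per particle, each +-1 w.p. 1/2
jumps = rng.poisson(T, pos.size)
disp = 2 * rng.binomial(jumps, 0.5) - jumps
final = np.bincount((pos + disp) % n, minlength=n)

# For an (approximately) Poisson field, mean and variance per site should agree
print(final.mean(), final.var())     # both should be close to lam = 3.0
```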
Our second main goal in this paper is to show that this local mixing result can be obtained in a larger setting, in which a local CLT result, which plays a crucial role in the proof[The results of <cit.> are in the setting of Brownian motions on ℝ^d, but can be adapted in a straightforward way to random walks on ℤ^d with μ_x,y=1 for all (x,y)∈ E by using the local CLT.] of <cit.>, might not hold or only holds in the limit as time goes to infinity, with no good control on the convergence rate. This is precisely the situation of our setting, where the weights μ_x,y are not all identical to 1. To work around that, we will show that local mixing can be obtained whenever a so-called Parabolic Harnack Inequality holds, and we have some good estimates on the displacement of random walks.
For the result below, we can impose slightly weaker conditions on μ_x,y. Let p_c be the critical probability for bond percolation on ℤ^d. Assume that μ_x,y are i.i.d. and that, for each (x,y)∈ E, we have
ℙ[μ_x,y=0]< p_c and μ_x,y satisfies (<ref>) whenever μ_x,y>0.
For two regions Q'⊆ Q⊂ℤ^d, we say that Q' is x away from the boundary of Q if the distance between Q' and Q^c is at least x.
Let {μ_x,y}_(x,y)∈ E be i.i.d. satisfying (<ref>). There exist positive constants c_1, c_2, c_3, c_4, c_5 such that the following holds.
Fix K>ℓ>0 and ϵ∈(0,1). Consider a cube Q of side-length K, tessellated into subcubes (T_i)_i of side length ℓ.
Assume each subcube T_i contains at least β∑_x∈ T_iμ_x particles for some β>0, and let Δ≥ c_1ℓ^2ϵ^-c_2.
If ℓ is large enough, then after the particles move for time Δ, we obtain that within a region Q'⊆ Q that is at least c_3ℓϵ^-c_4 away from the boundary of Q, the particles dominate an independent Poisson point process of intensity measure ν(x)=(1-ϵ)βμ_x, x∈ Q', with probability at least
1-∑_y∈ Q'exp{-c_5βμ_yϵ^2Δ^d/2}.
We will prove a more detailed version of this theorem in <Ref> (see <Ref>). Although we only prove the result for the case of conductances on the square lattice, <Ref> holds for more general graphs. The theorem holds for any graph G and any region Q of G that can be tessellated into subregions of diameter at most ℓ, whenever each such subregion is dense enough, the so-called parabolic Harnack inequality holds for G, and we have estimates on the displacement of random walks on G. We discuss some extensions in <Ref>.
The structure of this paper is as follows. In <Ref>, we formally define the family of graphs we consider for local mixing and present results concerning the parabolic Harnack inequality, heat kernel bounds and exit times for random walks on such graphs. In <Ref>, we state a more precise version of <Ref> and prove it. In <Ref> we prove an extension of the local mixing result to random walks whose displacement is conditioned to be bounded, which is particularly useful in applications <cit.>. In <Ref>, we use the local mixing result and results from our companion paper <cit.> to prove Theorems <ref> and <ref> for graphs satisfying (<ref>). | null | null | null | null | null |
http://arxiv.org/abs/1701.07516v1 | 20170125231257 | Non-colocated Time-Reversal MUSIC: High-SNR Distribution of Null Spectrum | [
"D. Ciuonzo",
"P. Salvo Rossi"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Non-colocated Time-Reversal MUSIC:
High-SNR Distribution of Null Spectrum
D. Ciuonzo, Senior Member, IEEE, and P. Salvo Rossi, Senior Member, IEEE. Manuscript received 2nd December 2016; accepted 25th January 2017.
D. Ciuonzo is with DIETI, University of Naples “Federico II”,
Naples, Italy.
P. Salvo Rossi is with the Dept. of Electronics and Telecommunications,
NTNU, Trondheim, Norway.
E-mail: {domenico.ciuonzo, salvorossi}@ieee.org.
We derive the asymptotic distribution of the null spectrum of the
well-known Multiple Signal Classification (MUSIC) in its computational
Time-Reversal (TR) form. The result pertains to a single-frequency
non-colocated multistatic scenario and several TR-MUSIC variants are
here investigated. The analysis builds upon the 1st-order perturbation
of the singular value decomposition and allows a simple characterization
of null-spectrum moments (up to the 2nd order). This enables a comparison
in terms of spectrum stability. Finally, a numerical analysis is
provided to confirm the theoretical findings. Notation - Lower-case (resp. Upper-case) bold letters denote column vectors (resp. matrices), with a_n (resp. a_n,m) being the nth (resp. the (n,m)th) element of a (resp. A); 𝔼{·}, var{·}, (·)^T, (·)^†, (·)^*, Tr[·], vec(·), (·)^-, ℜ(·), δ(·), ‖·‖ _F and ‖·‖ denote expectation, variance, transpose, Hermitian, conjugate, matrix trace, vectorization, pseudo-inverse, real part, Kronecker delta, Frobenius and ℓ_2 norm operators, respectively; j denotes the imaginary unit; 0_N× M (resp. I_N) denotes the N× M null (resp. identity) matrix; 0_N (resp. 1_N) denotes the null (resp. ones) column vector of length N; diag(a) denotes the diagonal matrix obtained from the vector a; x_1:M≜[ x_1^T ⋯ x_M^T ]^T denotes the vector concatenation; 𝒩_ℂ(μ,Σ) denotes a proper complex Gaussian pdf with mean vector μ and covariance Σ; 𝒞χ_N^2 denotes a complex chi-square distribution with N (complex) Degrees of Freedom (DOFs); finally the symbol ∼ means “distributed as”.
Time-Reversal (TR), Radar imaging, Null-spectrum, Resolution, TR-MUSIC.
§ INTRODUCTION
Time-Reversal (TR) refers to all those methods which
exploit the invariance of the wave equation (in lossless and stationary
media) by re-transmitting a time-reversed version of the scattered
(or radiated) field measured by an array to focus on a scattering
object (or radiating source), by physical <cit.> or synthetic
<cit.> means. In the latter case (computational
TR), it consists in numerically back-propagating the
field data by using a known Green’s function, representative
of the propagation medium. Since the employed Green function depends
on the object (or source) position, an image is formed by varying
the probed location (this procedure is referred to as “imaging”).
Computational TR has been successfully applied in different contexts
such as subsurface prospecting <cit.>, through-the-wall
<cit.> and microwave imaging <cit.>.
The key entity in TR-imaging is the Multistatic Data Matrix (MDM),
whose entries are the scattered field due to each Transmit-Receive
(Tx-Rx) pair. Two popular methods for TR-imaging are the decomposition
of TR operator (DORT) <cit.> and the TR Multiple Signal
Classification (TR-MUSIC) <cit.>. DORT imaging exploits
the MDM spectrum by back-propagating each eigenvector of the so-called
signal subspace, thus allowing to selectively focus on each
(well-resolved) scatterer. On the other hand, TR-MUSIC imaging is
based on a complementary point of view and relies on the noise
subspace (viz. orthogonal-subspace[Such term underlines that it is orthogonal to the signal subspace.]),
leading to satisfactory performance as long as the data space dimension
exceeds the signal subspace dimension and sufficiently high Signal-to-Noise-Ratio
(SNR) is present. TR-MUSIC was first introduced for a Born Approximated
(BA) linear scattering model <cit.> and, later, successfully
applied to the Foldy-Lax (FL) non-linear model <cit.>.
Also, it became popular mainly due to: (a) algorithmic efficiency;
(b) no need for approximate scattering models; and (c) finer
resolution than the diffraction limits (especially in scenarios with
few scatterers). Recently, TR-MUSIC has been expanded to extended
scatterers in <cit.>.
Though a vast literature on performance analysis of MUSIC <cit.>
for Direction-Of-Arrival (DOA) estimation exists (see <cit.>
for resolution studies and <cit.>
for asymptotic Mean Squared Error (MSE) derivation, with more advanced
studies presented in <cit.>),
such results cannot be directly applied to TR-MUSIC. Indeed, in TR
framework scatterers/sources are generally assumed deterministic and
more importantly a single snapshot is used, whereas MUSIC results
for DOA refer to a different asymptotic condition (i.e. a large number
of snapshots). Also, to our knowledge, no corresponding theoretical
results have been proposed in the literature for TR-MUSIC, except
for <cit.>, providing the asymptotic (high-SNR)
localization MSE for point-like scatterers. Yet, a few works have
tackled achievable theoretical performance both for BA and
FL models via the Cramér-Rao lower bound <cit.>.
In this letter we provide a null-spectrum[We underline that the MUSIC imaging function is commonly referred
to as “pseudo-spectrum” in DOA literature. Though less used, in
this paper we will instead adopt the term “null-spectrum” employed
in <cit.>, as the latter work represents the closest counterpart
in DOA estimation to the present study.] analysis of TR-MUSIC for point-like scatterers, via a 1st-order perturbation
of Singular Value Decomposition (SVD) <cit.>, thus having
asymptotic validity (i.e. meaning a high SNR regime). The present
results are based on a homogeneous background assumption and neglecting
mutual coupling, as well as polarization or antenna pattern effects.
Here we build upon <cit.> (tackling the simpler
colocated case) and consider a general non-colocated multistatic
setup with BA/FL models where several TR-MUSIC variants, proposed
in the literature, are here investigated. The obtained results complement
those found in DOA literature <cit.> and allow us to obtain both the mean and the variance of each null-spectrum, as well as to derive its pdf. Also, they highlight the dependence of null-spectrum performance on the scatterer/array configuration and compare TR-MUSIC variants
in terms of spectrum stability. We recall that stability property
is important for TR-MUSIC, and has been investigated by numerical
means <cit.> or using compressed-sensing based
approaches <cit.>. Finally, a few numerical examples,
for a 2-D geometry with scalar scattering, are presented to confirm
our findings.
The letter is organized as follows: Sec. <ref>
describes the system model and reviews classic results on SVD perturbation
analysis. Sec. <ref> presents the theoretical
characterization of TR-MUSIC null-spectrum, whereas its validation
is shown in Sec. <ref> via simulations. Finally,
conclusions are in Sec. <ref>.
§ SYSTEM MODEL
We consider localization of M point-like scatterers[The number of scatterers M is assumed to be known, as usually done
in array-processing literature <cit.>.] at unknown positions {x_k}_k=1^M in ℝ^p
with unknown scattering potentials {τ_k}_k=1^M in ℂ.
The Tx (resp. Rx) array consists of N_T (resp. N_R) isotropic
point elements (resp. receivers) located at {r̃_i}_i=1^N_T
in ℝ^p (resp. {r̅_j}_j=1^N_R
in ℝ^p). The illuminators first send signals to the
probed scenario (in a known homogeneous background with wavenumber
κ) and the transducer array records the received signals.
The (single-frequency) measurement model is then <cit.>:
K_n = K(x_1:M,τ)+W
= G_r(x_1:M) M(x_1:M,τ) G_t(x_1:M)^T+W
where K_n∈ℂ^N_R× N_T (resp. K(x_1:M,τ))
denotes the measured (resp. noise-free) MDM. Differently W∈ℂ^N_R× N_T
is a noise matrix s.t. vec(W)∼𝒩_ℂ(0_N,σ_w^2 I_N),
where N≜ N_TN_R. Additionally, we have denoted: (i)
the vector of scattering coefficients as τ≜[ τ_1 ⋯ τ_M ]^T∈ℂ^M×1; (ii) the Tx (resp. Rx) array matrix as G_t(x_1:M)∈ℂ^N_T× M
(resp. G_r(x_1:M)∈ℂ^N_R× M),
whose (i,j)th entry equals 𝒢(r̃_i,x_j)
(resp. 𝒢(r̅_i,x_j)), where 𝒢(·,·)
denotes the (scalar) background Green function <cit.>.
Also, jth column g_t(x_j) (resp. g_r(x_j))
of G_t(x_1:M) (resp. G_r(x_1:M))
denotes the Tx (resp. Rx) Green's function vector evaluated at x_j.
In Eq. (<ref>) the scattering matrix
M(x_1:M,τ)∈ℂ^M× M equals
M(x_1:M,τ)≜diag(τ)
for BA model <cit.>, while M(x_1:M,τ)≜[diag^-1(τ)-S(x_1:M)]^-1
in the case of FL model <cit.>, where the (m,n)th entry
of S(x_1:M) equals 𝒢(x_m,x_n)
when m≠ n and zero otherwise. We recall that our null-spectrum
analysis of TR-MUSIC is general and can be applied to both
scattering models.
Finally, we define the SNR≜‖K(x_1:M,τ)‖ _F^2/(σ_w^2 N_TN_R)
and, for notational convenience, N_Rdof≜(N_R-M)
and N_Tdof≜(N_T-M) as the dimensions of the
left and right orthogonal subspaces, whereas N_dof≜(N_Rdof+N_Tdof).
§.§ TR-MUSIC Spatial Spectrum
Several TR-MUSIC variants have been proposed in the literature for
the non co-located setup <cit.>. A first approach consists
in using the so-called Rx mode TR-MUSIC, which evaluates the
null (or spatial) spectrum (assuming M<N_R):
𝒫_r(x;Û_n)≜g̅_r(x)^† P̂_r,n g̅_r(x)=‖Û_n^† g̅_r(x)‖ ^2 ,
where Û_n∈ℂ^N_R× N_Rdof is the matrix of left singular vectors of K_n spanning the noise subspace, g̅_r(x)≜g_r(x)/‖g_r(x)‖ is the unit-norm Rx Green vector function and P̂_r,n≜(Û_nÛ_n^†) (i.e. the “noisy” projector onto the left noise subspace). A dual approach, denoted as Tx mode TR-MUSIC, constructs the null spectrum (assuming M<N_T):
𝒫_t(x;V̂_n)≜g̅_t(x)^T P̂_t,n g̅_t(x)^*=‖V̂_n^† g̅_t^*(x)‖ ^2 ,
where V̂_n∈ℂ^N_T× N_Tdof is the matrix of right singular vectors of K_n spanning the noise subspace, g̅_t(x)≜g_t(x)/‖g_t(x)‖ is the unit-norm Tx Green vector function and P̂_t,n≜(V̂_nV̂_n^†) (i.e. the “noisy” projector onto the right noise subspace). Finally, a combined version of the two modes, named generalized TR-MUSIC, is built as (assuming M<min{N_T,N_R}) <cit.>:
𝒫_tr(x;Û_n,V̂_n)≜𝒫_t(x;V̂_n)+𝒫_r(x;Û_n).
Usually, the M largest local maxima of 𝒫_r(x;Û_n)^-1, 𝒫_t(x;V̂_n)^-1 and 𝒫_tr(x;Û_n,V̂_n)^-1 are chosen as the estimates {x̂_k}_k=1^M. Indeed, it can be shown that Eq. (<ref>) (resp. Eq. (<ref>)) equals zero when x equals one among {x_k}_k=1^M in the noise-free case, since then Û_n=U_n (resp. V̂_n=V_n), i.e. the eigenvector matrix spanning the left (resp. right) noise subspace of K(x_1:M,τ) <cit.>. Similar conclusions hold for 𝒫_tr(x;Û_n,V̂_n) in a noise-free condition.
§.§ Review of Results on SVD Perturbation
We consider a rank deficient matrix A∈ℂ^R× T
with rank δ<min{R,T}, whose SVD A=U Σ V^†
is rewritten as:
A=([ U_s U_n ])([ Σ_s 0_δ×δ̌; 0_δ̅×δ 0_δ̅×δ̌ ])([ V_s^†; V_n^† ]) ,
where δ̅≜(R-δ) and δ̌≜(T-δ),
respectively. Also, U_s∈ℂ^R×δ and
V_s∈ℂ^T×δ (resp. U_n∈ℂ^R×δ̅
and V_n∈ℂ^T×δ̌) denote the
left and right singular vectors of signal (resp. orthogonal) subspaces
in Eq. (<ref>), while Σ_s∈ℝ^δ×δ
collects the (>0) singular values of the signal subspace. Then,
consider Â≜(A+N), where N is a perturbing term. Similarly to (<ref>), the SVD Â=ÛΣ̂V̂^† is rewritten as
Â=([ Û_s Û_n ])([ Σ̂_s 0_δ×δ̌; 0_δ̅×δ Σ̂_n ])([ V̂_s^†; V̂_n^† ]) ,
showing the effect of N on the spectral representation[Indeed, as opposed to Eq. (<ref>), Â may be full-rank in general.] of A, highlighting the change of the left and right principal directions. We are here concerned with the perturbations pertaining to Û_n and V̂_n, stressed as Û_n=U_n+Δ U_n and V̂_n=V_n+Δ V_n, where
Δ(·) terms are generally complicated functions of
N. However, when N has a “small magnitude” compared
to A (see <cit.>), a 1st-order perturbation (i.e.
Δ(·) are approximated as linear with N),
will be accurate <cit.>. The key result is that the perturbed orthogonal left subspace Û_n (resp. right subspace V̂_n) is spanned by U_n+U_sB (resp. V_n+V_sB̅), where the norm (any sub-multiplicative one, such as the ℓ_2 or ‖·‖ _F norm) of B (resp. B̅) is of the same order as that of N. Intuitively, a small perturbation is observed
at high-SNR. The expressions for Δ U_n and Δ V_n,
at 1st-order, are[We notice that in obtaining Eq. (<ref>), “in-space”
perturbations (e.g. the contribution to Δ U_n depending
on U_n) are not considered, though they have been shown
to be linear with N (and thus not negligible at first-order)
<cit.>. The reason is that these terms do not affect performance
analysis of TR-MUSIC null-spectrum when evaluated at scatterers positions
{x_k}_k=1^M, due to the null spectrum orthogonality
property.] <cit.>:
Δ U_n=-(A^-)^† N^† U_n; Δ V_n=-(A^-) N V_n;
where we have exploited A^-=V_s Σ_s^-1 U_s^†
<cit.>.
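The first-order expressions above are easy to verify numerically. The sketch below is our own illustration (arbitrary sizes and rank): it builds a random rank-deficient A, applies a small perturbation N, and checks that the projectors onto the subspaces spanned by U_n+Δ U_n and V_n+Δ V_n match the exact perturbed noise-subspace projectors up to O(‖N‖^2).

```python
import numpy as np

rng = np.random.default_rng(0)
R, T, delta = 8, 6, 2                               # arbitrary sizes, rank delta

cg = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
A = cg(R, delta) @ cg(delta, T)                     # random rank-deficient matrix
U, s, Vh = np.linalg.svd(A)
Un, Vn = U[:, delta:], Vh.conj().T[:, delta:]       # left/right orthogonal subspaces
A_pinv = np.linalg.pinv(A, rcond=1e-10)             # truncate numerically-zero singular values

eps = 1e-4                                          # "high-SNR": small perturbation
N = eps * cg(R, T)
dUn = -A_pinv.conj().T @ N.conj().T @ Un            # first-order perturbation of U_n
dVn = -A_pinv @ N @ Vn                              # first-order perturbation of V_n

Up, sp, Vph = np.linalg.svd(A + N)                  # exact perturbed SVD
for exact, approx in [(Up[:, delta:], Un + dUn), (Vph.conj().T[:, delta:], Vn + dVn)]:
    err = np.linalg.norm(exact @ exact.conj().T - approx @ approx.conj().T)
    print(err)                                      # should shrink like eps**2
```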
§ NULL-SPECTRUM ANALYSIS
First, we observe that the null spectra at the scatterer positions, 𝒫_r(x_k;Û_n), 𝒫_t(x_k;V̂_n) and 𝒫_tr(x_k;Û_n,V̂_n), k∈{1,…,M}, in Eqs. (<ref>), (<ref>) and (<ref>) can be simplified, using Û_n=U_n+Δ U_n and V̂_n=V_n+Δ V_n and exploiting
the properties[Such conditions directly follow from orthogonality between left (resp.
right) signal and orthogonal subspaces U_s and U_n
(resp. V_s and V_n).] U_n^† g̅_r(x_k)=0_N_Rdof
and V_n^†g̅_t^*(x_k)=0_N_Tdof,
as
𝒫_r(x_k;Û_n)=‖ξ_r,k‖ ^2 , 𝒫_t(x_k;V̂_n)=‖ξ_t,k‖ ^2,
where ξ_r,k≜Δ U_n^† g̅_r(x_k)∈ℂ^N_Rdof×1
and ξ_t,k≜Δ V_n^† g̅_t^*(x_k)∈ℂ^N_Tdof×1,
respectively. Similarly,
𝒫_tr(x_k;Û_n,V̂_n)=‖ξ_t,k‖ ^2+‖ξ_r,k‖ ^2=‖ξ_k‖ ^2,
where ξ_k≜[ ξ_r,k^T ξ_t,k^T ]^T∈ℂ^N_dof×1.
Thus, to characterize 𝒫_r(x_k;Û_n), 𝒫_t(x_k;V̂_n) and 𝒫_tr(x_k;Û_n,V̂_n),
it suffices to study the random vector ξ_k. Indeed, the
marginal pdfs of ξ_r,k and ξ_t,k
are easily drawn from that of ξ_k. As a byproduct, the definition of ξ_k also allows an elegant and simpler MSE analysis with respect to <cit.>, as it can be shown that the position errors of the estimates with Tx mode (Δx_T,k), Rx mode (Δx_R,k) and generalized (Δx_TR,k) TR-MUSIC can be expressed as Δx_T,k≈-Γ_T,k^-1 ℜ{J_T,k^T V_n ξ_t,k}, Δx_R,k≈-Γ_R,k^-1 ℜ{J_R,k^† U_n ξ_r,k} and Δx_TR,k≈-Γ_TR,k^-1 ℜ{[ (J_R,k^† U_n) (J_T,k^T V_n) ]ξ_k}, respectively, where J_T,k, J_R,k, Γ_T,k, Γ_R,k and Γ_TR,k are suitably defined known matrices (see <cit.>). Clearly, finding
the exact pdf of ξ_k is hard, as Δ U_n
and Δ V_n are generally complicated functions of the
unknown perturbing matrix W.
However, Δ U_n and Δ V_n assume a (tractable)
closed form with a 1st-order approximation (see Eq. (<ref>)).
This approximation holds tightly at high-SNR, as W will be
statistically “small” compared to noise-free MDM K(x_1:M,τ).
Hence, at high-SNR, ξ_k is (approximately) expressed in
terms of W as:
ξ_k=[ ξ_r,k; ξ_t,k ]≈[ -U_n^† W t_r,k; -V_n^† W^† t_t,k ] ,
where t_r,k≜K^-(x_1:M,τ) g̅_r(x_k)∈ℂ^N_T×1
and t_t,k≜K^-(x_1:M,τ)^† g̅_t^*(x_k)∈ℂ^N_R×1
are deterministic. Since the vector ξ_k is linear[In the remainder of the letter we will implicitly mean that the results
hold “approximately” in the high-SNR regime.] with the noise matrix W, it will be Gaussian distributed;
thus we only need to evaluate its moments up to the 2nd order to characterize
it completely. Hereinafter we only sketch the main steps and provide
the detailed proof as supplementary material. First, the mean vector
𝔼{[ ξ_r,k^T ξ_t,k^T ]^T}=0_N_dof,
exploiting 𝔼{W} =0_N_R× N_T.
Secondly, the covariance matrix Ξ_k≜𝔼{ξ_kξ_k^†}
(since 𝔼{ξ_k}=0_N_dof) is
given in closed-form as:
Ξ_k=[ σ_w^2 ‖t_r,k‖ ^2 I_N_Rdof 0_N_Rdof× N_Tdof; 0_N_Tdof× N_Rdof σ_w^2 ‖t_t,k‖ ^2 I_N_Tdof ] .
The above result is based on circularity of the entries of W,
along with their mutual independence. Thirdly, aiming at completing
the statistical characterization, we evaluate the pseudo-covariance
matrix Ψ_k≜𝔼{ξ_kξ_k^T}
(since 𝔼{ξ_k}=0_N_dof), whose
closed-form is Ψ_k=0_N_dof× N_dof.
The latter result is based on circularity of the entries of W,
along with their mutual independence and exploiting the results V_n^† t_r,k=0_N_Tdof
and U_n^† t_t,k=0_N_Rdof,
arising from subspaces orthogonality V_n^†V_s=0_N_Tdof× M
and U_n^†U_s=0_N_Rdof× M.
Therefore, in summary ξ_k∼𝒩_ℂ(0_N_dof, Ξ_k),
i.e. a proper complex Gaussian vector <cit.>.
Similarly, it is readily inferred that ξ_r,k∼ 𝒩_ℂ(0_N_Rdof, σ_w^2 ‖t_r,k‖ ^2 I_N_Rdof)
and ξ_t,k∼ 𝒩_ℂ(0_N_Tdof, σ_w^2 ‖t_t,k‖ ^2 I_N_Tdof),
respectively, i.e. they are independent proper Gaussian
vectors. Clearly, since ξ_r,k and ξ_t,k
have zero mean and scaled-identity covariance, the corresponding variance-normalized
energies ‖ξ_r,k‖ ^2/(σ_w^2‖t_r,k‖ ^2)∼𝒞χ_N_Rdof^2
and ‖ξ_t,k‖ ^2/(σ_w^2‖t_t,k‖ ^2)∼𝒞χ_N_Tdof^2,
respectively (i.e. they are chi-square distributed). Interestingly
these DOFs coincide with those available for TR-MUSIC localization
through Rx and Tx modes, respectively.
Based on these considerations, the means of the null-spectrum for
Tx and Rx modes are 𝔼{‖ξ_r,k‖ ^2}=σ_w^2 ‖t_r,k‖ ^2 N_Rdof
and 𝔼{‖ξ_t,k‖ ^2}=σ_w^2 ‖t_t,k‖ ^2 N_Tdof,
respectively, whereas for generalized null-spectrum 𝔼{‖ξ_k‖ ^2}=𝔼{‖ξ_r,k‖ ^2}+𝔼{‖ξ_t,k‖ ^2}
(by linearity). By similar reasoning, the variances for Tx and Rx
modes are given by var{‖ξ_r,k‖ ^2}=σ_w^4 ‖t_r,k‖ ^4 N_Rdof
and var{‖ξ_t,k‖ ^2}=σ_w^4 ‖t_t,k‖ ^4 N_Tdof,
respectively, whereas for the generalized null-spectrum var{‖ξ_k‖ ^2}=var{‖ξ_r,k‖ ^2}+var{‖ξ_t,k‖ ^2}
(by independence of ξ_r,k and ξ_t,k).
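These moment formulas lend themselves to a direct Monte Carlo check. The sketch below is our own illustration: the Green matrices are replaced by random complex matrices (an assumption for compactness, not the Hankel-function model of the next section), and the empirical mean of the Rx-mode null spectrum at a scatterer location is compared with the predicted σ_w^2 ‖t_r,k‖^2 N_Rdof.

```python
import numpy as np

rng = np.random.default_rng(2)
NT, NR, M, sig_w, trials = 11, 17, 2, 1e-3, 2000   # array sizes as in Sec. IV

cg = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
Gt, Gr = cg(NT, M), cg(NR, M)                 # random stand-ins for the Green matrices
K0 = Gr @ np.diag([3.0, 4.0]) @ Gt.T          # noise-free MDM under the BA model

g1 = Gr[:, 0] / np.linalg.norm(Gr[:, 0])      # unit-norm Rx Green vector at x_1
t_r1 = np.linalg.pinv(K0, rcond=1e-10) @ g1   # t_{r,k} = K^- gbar_r(x_k)

vals = []
for _ in range(trials):
    W = sig_w * np.sqrt(0.5) * cg(NR, NT)     # vec(W) ~ CN(0, sig_w^2 I)
    U = np.linalg.svd(K0 + W)[0]
    vals.append(np.linalg.norm(U[:, M:].conj().T @ g1) ** 2)  # Rx-mode null spectrum

pred = sig_w**2 * np.linalg.norm(t_r1)**2 * (NR - M)
print(np.mean(vals) / pred)                   # ~1 in the high-SNR regime
```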
Hence, once we have obtained the mean and the variance of 𝒫_r(x_k;Û_n), 𝒫_t(x_k;V̂_n) and 𝒫_tr(x_k;Û_n,V̂_n),
respectively, we can consider the Normalized Standard Deviation
(NSD), generically defined as
NSD_k≜√(var{𝒫(x_k;·)}) / 𝔼{𝒫(x_k;·)}.
Clearly, the lower the NSD, the higher the null-spectrum stability
at x_k <cit.>. For Rx and Tx modes
it follows that NSD_r,k=1/√(N_Rdof)
and NSD_t,k=1/√(N_Tdof), respectively.
It is apparent that in both cases the NSD does not depend (at
high SNR) on the scatterers and measurement setup, as well as σ_w^2,
but only on the (complex) DOFs, being equal to N_Rdof
and N_Tdof, respectively. Thus, the NSD becomes (asymptotically)
small only when the number of scatterers is small compared to the number of Rx (resp. Tx) array elements. These results are analogous to the
case of MUSIC null-spectrum for DOA, whose NSD depends on the DOFs,
namely the difference between the (Rx) array size and the number of
sources <cit.>. Differently, the NSD for generalized null
spectrum equals
NSD_k=√(‖t_r,k‖ ^4N_Rdof+‖t_t,k‖ ^4N_Tdof)/(‖t_r,k‖ ^2N_Rdof+‖t_t,k‖ ^2N_Tdof) .
Eq. (<ref>) underlines (i) a clear dependence
of generalized null-spectrum NSD on scatterers and measurement setup
and (ii) independence from the noise level σ_w^2. Also,
it is apparent that when ‖t_r,k‖≈0
(resp. ‖t_t,k‖≈0) the
expression reduces to NSD_k≈1/√(N_Tdof)
(resp. NSD_k≈1/√(N_Rdof)), i.e.
the NSD is dominated by Tx (resp. Rx) mode stability. Finally,
the same equation is exploited to obtain the conditions ensuring that
generalized spectrum is “more stable” than Tx and Rx modes (NSD_k≤NSD_t,k
and NSD_k≤NSD_r,k, respectively),
expressed as the pair of inequalities
1/2[1-N_Rdof/N_Tdof]≤(‖t_t,k‖ /‖t_r,k‖)^2 (Tx)
1/2[1-N_Tdof/N_Rdof]≤(‖t_r,k‖ /‖t_t,k‖)^2 (Rx)
Clearly, when N_R>N_T (resp. N_T>N_R) the inequality
regarding the Tx (resp. Rx) mode is always verified as the left-hand
side is always negative. Also, in the special case N_T=N_R
the left-hand side is always zero for both inequalities.
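As a numerical companion to Eq. (<ref>) and the above inequalities, the following sketch evaluates the NSD expressions and their limiting cases; the array sizes are those adopted in the next section, while the ‖t‖^2 values are arbitrary placeholders.

```python
import numpy as np

def nsd_gen(tr2, tt2, NRdof, NTdof):
    """Generalized-TR-MUSIC NSD; tr2 = ||t_{r,k}||^2, tt2 = ||t_{t,k}||^2."""
    return np.sqrt(tr2**2 * NRdof + tt2**2 * NTdof) / (tr2 * NRdof + tt2 * NTdof)

NTdof, NRdof = 11 - 2, 17 - 2                 # N_T=11, N_R=17, M=2 (Sec. IV setup)
print(1/np.sqrt(NTdof), 1/np.sqrt(NRdof))     # Tx/Rx-mode NSDs: ~0.33 and ~0.26
print(nsd_gen(1.0, 1e-12, NRdof, NTdof))      # ||t_t|| -> 0: tends to 1/sqrt(N_Rdof)
print(nsd_gen(1.0, 1.0, NRdof, NTdof))        # comparable norms: below both single modes
```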
§ NUMERICAL RESULTS
In this section we confirm our findings through simulations, focusing
on 2-D localization, with Green function[We discard the irrelevant constant term j/4.]
being 𝒢(x',x)=H_0^(1)(κ‖x'-x‖).
Here H_n^(1)(·) and κ=2π/λ denote the nth
order Hankel function of the 1st kind and the wavenumber (λ
is the wavelength), respectively. First, we consider a setup with
λ/2-spaced Tx/Rx arrays (N_T=11 and N_R=17, respectively,
see Fig. <ref>). Secondly, to quantify
the level of multiple scattering (as in <cit.>) we define
the index η≜‖K_f(x_1:M,τ)-K_b(x_1:M,τ)‖ _F/‖K_b(x_1:M,τ)‖ _F,
where K_b(x_1:M,τ) and K_f(x_1:M,τ)
denote the MDMs generated via BA and FL models, respectively. Finally,
for simplicity we consider M=2 targets located at (x_1/λ)=[ -1 -6 ]^T and (x_2/λ)=[ +1 -6 ]^T and having scattering coefficients τ=[ 3 4 ]^T; thus η=0.7445.
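For reference, the quantities entering this setup can be coded compactly; the sketch below (SciPy-based, with the constant j/4 dropped as in the text) gives the 2-D Green function and the multiple-scattering index η.

```python
import numpy as np
from scipy.special import hankel1

lam = 1.0
kappa = 2 * np.pi / lam                       # wavenumber

def green(xp, x):
    """2-D scalar background Green function H_0^(1)(kappa*|x'-x|), j/4 dropped."""
    return hankel1(0, kappa * np.linalg.norm(np.asarray(xp) - np.asarray(x)))

def eta(K_f, K_b):
    """Multiple-scattering index: ||K_f - K_b||_F / ||K_b||_F."""
    return np.linalg.norm(K_f - K_b) / np.linalg.norm(K_b)
```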
Then, we compare the asymptotic NSD (Eq. (<ref>),
solid lines) with the true ones obtained via Monte Carlo (MC) simulation
(dashed lines, 10^5 runs), focusing only on the generalized null-spectrum
for brevity. To this end, Fig. <ref> depicts
the null-spectrum NSD vs. SNR for the two targets being considered,
both for FL and BA models. It is apparent that, as the SNR increases,
the theoretical results tightly approximate the MC-based ones, with
approximations deemed accurate above SNR≈10 dB.
Differently, in Fig. <ref>, we plot the
asymptotic NSD of the three TR-MUSIC variants vs. d, where (x_1/λ)=[ (-1-d) -6 ]^T and (x_2/λ)=[ (1-d) -6 ]^T (i.e. a rigid shift of the two scatterers), in order to investigate
the potentially improved asymptotic stability (viz. NSD) of the generalized
spectrum in comparison to Tx and Rx modes. It is apparent that the
gain is significant when d∈(-5,5), while outside this interval
the NSD expression is dominated by either the Tx or the Rx mode, which for the present case give NSD_t,k=1/√(11-2)≈0.33 and NSD_r,k=1/√(17-2)≈0.26, with
the generalized NSD never above that of NSD_t,k
(as dictated from Eq. (<ref>)).
§ CONCLUSIONS
We provided an asymptotic (high-SNR) analysis of TR-MUSIC null-spectrum
in a non-colocated multistatic setup, by taking advantage of the 1st-order
perturbation of the SVD of the MDM. Three different variants of TR-MUSIC
were analyzed (i.e. Tx mode, Rx mode and generalized), based on the
characterization of a certain complex-valued Gaussian vector. This
allowed us to obtain the asymptotic NSD (a measure of null-spectrum stability) for all three imaging procedures. While results similar to the DOA setup were obtained for the Tx and Rx modes, a clear dependence of the generalized null-spectrum NSD on the scatterer and measurement setup was shown. Finally, its potential stability advantage was investigated in comparison to the Tx and Rx modes. Future work will analyze mutual
coupling, antenna pattern and polarization effects <cit.>,
and propagation in inhomogeneous (random) media <cit.>.
http://arxiv.org/abs/1701.07759v1 | 20170126162141 | Evidence of Significant Energy Input in the Late Phase of a Solar Flare from NuSTAR X-Ray Observations | [
"Matej Kuhar",
"Säm Krucker",
"Iain G. Hannah",
"Lindsay Glesener",
"Pascal Saint-Hilaire",
"Brian W. Grefenstette",
"Hugh S. Hudson",
"Stephen M. White",
"David M. Smith",
"Andrew J. Marsh",
"Paul J. Wright",
"Steven E. Boggs",
"Finn E. Christensen",
"William W. Craig",
"Charles J. Hailey",
"Fiona A. Harrison",
"Daniel Stern",
"William W. Zhang"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE"
] |
1University of Applied Sciences and Arts Northwestern Switzerland, Bahnhofstrasse 6, 5210 Windisch, Switzerland
2Institute for Particle Physics, ETH Zürich, 8093 Zürich, Switzerland
3Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450, USA
4SUPA School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
5School of Physics and Astronomy, University of Minnesota - Twin Cities , Minneapolis, MN 55455, USA
6Cahill Center for Astrophysics, 1216 E. California Blvd, California Institute of Technology, Pasadena, CA 91125, USA
7School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
8Air Force Research Laboratory, Albuquerque, NM, USA
9Physics Department and Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
10DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby, Denmark
11Lawrence Livermore National Laboratory, Livermore, CA 94550, USA
12Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA
13Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
14NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
We present observations of the occulted active region AR12222 during the third NuSTAR solar campaign on 2014 December 11, with concurrent SDO/AIA and FOXSI-2 sounding rocket observations. The active region produced a medium size solar flare one day before the observations, at ∼18UT on 2014 December 10, with the post-flare loops still visible at the time of NuSTAR observations. The time evolution of the source emission in the SDO/AIA 335Å channel reveals the characteristics of an extreme-ultraviolet late phase event, caused by the continuous formation of new post-flare loops that arch higher and higher in the solar corona. The spectral fitting of NuSTAR observations yields an isothermal source, with temperature 3.8-4.6 MK, emission measure 0.3-1.8 × 10^46 cm^-3, and density estimated at 2.5-6.0 × 10^8 cm^-3. The observed AIA fluxes are consistent with the derived NuSTAR temperature range, favoring temperature values in the range 4.0-4.3 MK. By examining the post-flare loops' cooling times and energy content, we estimate that at least 12 sets of post-flare loops were formed and subsequently cooled between the onset of the flare and NuSTAR observations, with their total thermal energy content an order of magnitude larger than the energy content at flare peak time. This indicates that the standard approach of using only the flare peak time to derive the total thermal energy content of a flare can lead to a large underestimation of its value.
§ INTRODUCTION
The Nuclear Spectroscopic Telescope ARray (NuSTAR) is a focusing hard X-ray (HXR) telescope operating in the energy range from 3 to 79 keV <cit.>. While primarily designed to observe far, faint astrophysical sources such as active galactic nuclei (AGN), black holes and supernova remnants, it is also capable of observing the Sun. With its focusing optics system, it can directly observe HXRs from previously undetected sources on the Sun due to its ten-times higher effective area and orders of magnitude reduced background when compared to state-of-the-art solar HXR instruments such as the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI, <cit.>). However, because it is optimized for observations of astrophysical objects, NuSTAR experiences some technical challenges when observing the Sun; these include ghost-rays and low throughput. Ghost-rays are unfocused, single-bounced photons (in contrast to properly focused photons, which reflect twice off the Wolter-I mirrors) coming from sources outside the field-of-view <cit.>. The throughput of NuSTAR's focal plane detector electronics, with a maximum of 400 counts per second per telescope, can effectively diminish the hard X-ray sensitivity in the presence of extremely bright sources <cit.>, making detections of fainter spectral components (such as a non-thermal component) difficult.
Despite these challenges, NuSTAR has begun to provide critical new observations of faint X-ray sources on the Sun <cit.>, giving us new insights into the coronal heating problem and particle energization in solar flares. In that respect, occulted active regions are priority targets in the planning of NuSTAR observations. With the brightest emission from the footpoints and low corona hidden, NuSTAR can search for faint coronal signatures of heated material and particle acceleration. In order to maximize NuSTAR livetime and minimize ghost-rays during these observations, they should be carried out during low-activity periods (preferably with no other active sources on disk).
In this paper, we analyze the occulted active region AR12222, which produced a C5.9 GOES (Geostationary Operational Environmental Satellite) class flare ∼24 hours before the NuSTAR observations. AR12222 was observed in the third NuSTAR solar campaign on 2014 December 11. The active region was also observed by the Solar TErrestrial RElations Observatory (STEREO), the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA) and the second launch of the Focusing Optics X-ray Solar Imager (FOXSI-2) sounding rocket. The goal of this paper is to analyze the time evolution of the X-ray and extreme-ultraviolet (EUV) emission of the observed source above the solar limb in the context of the flare evolution scenario proposed by <cit.> and <cit.>. In these papers, the authors argue that flares may have four distinct phases in their evolution: (1) an impulsive phase (best seen in HXRs), (2) a gradual phase seen in SXR/EUV from the post-flare loops, (3) coronal dimming, best seen in the 171Å line and (4) an EUV late phase, best seen as a second peak in the 335 Å line a few (up to 6) hours after the flare onset. The explanation of the EUV late-phase emission lies in the formation of subsequent flare loops, overlying the original flare loops, which result from the reconnection of magnetic fields higher than those that reconnected during the flare's impulsive phase. Similar observations of “giant post-flare loops" and “giant arches" can be found in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, among others; a theoretical model of the subsequent magnetic reconnections (and its successful description of the flare SOL1973-07-29T13) is given in <cit.>. More recently, <cit.> proposed that the subsequent loop system(s) is produced by magnetic reconnection of the overlying active region magnetic field lines and the loop arcade produced by the flare, adding more complexity to the theoretical description of these events.
This paper is structured as follows. In Section 2 we give an overview of NuSTAR, SDO/AIA, STEREO and FOXSI-2 observations of AR12222. We present the results of NuSTAR spectroscopy in Section 3, along with the comparison of NuSTAR derived parameters with observations in other wavelengths. The discussion of the results, as well as possible future studies, is presented in Section 4.
§ OBSERVATIONS
The data presented in this paper come from the third set of solar observations with NuSTAR, which were carried out on 2014 December 11. The observations consisted of a pointing at the north pole region (quiet Sun observations) and a pointing at the solar limb (from 18:39:00 to 19:04:00 UT); the latter is discussed in this paper.
The target of the limb pointing, and of this study, is the active region AR12222, located ∼35 degrees behind the south-west solar limb at the time of the NuSTAR observations. AR12222 produced a GOES C5.9 flare one day before the NuSTAR observations, at 18UT on 2014 December 10. Figure 1 presents the time evolution of the GOES flux, the 7 SDO/AIA EUV channels and the AIA-derived Fe xvi and Fe xviii fluxes from flare onset until more than a day later. The NuSTAR observing period is indicated with vertical dashed lines. Smaller spikes in the GOES curve between the flare and the NuSTAR observations represent various fainter flares coming from other active regions (AR12233, AR12230, AR12235) on the solar disk. Due to the high occultation, the GOES class given above for the flare SOL2014-12-10T18 is a severe lower limit on the actual GOES class. The STEREO satellites can generally be used to give a prediction of the actual GOES class, as they view the Sun from a different angle <cit.>. Even though STEREO A was at the right location, at an angle of ∼175^∘ with respect to the Earth, it was not observing during the main and gradual phases of the flare; therefore, we cannot give an accurate GOES class estimate for this flare.
The time evolution of fluxes in different AIA channels reveals two main characteristics of an EUV late-phase event, as described in <cit.> and <cit.>: a second (in this case weaker) peak in the 335Å line a few hours after the flare, and coronal dimming in the 171Å line with the local minimum ∼5 hours after the flare. As previously noted <cit.>, there is a strong correlation between coronal dimming and coronal mass ejection (CME) events; indeed, a strong CME with the velocity of ∼1000 km s^-1 was associated with this nominally C-class flare[Data taken from the LASCO CME Catalog: <http://cdaw.gsfc.nasa.gov/CME_list/>.].
The olive curve in Figure 1 presents the time evolution of the Fe xviii line flux. An estimate of the emission in the Fe xviii line can be constructed from the 94Å line, by subtracting the lower temperature responses from the 171Å, 193Å and/or 211Å channels <cit.>. In obtaining the Fe xviii flux, we followed the approach of <cit.>, using the formula
F(Fe xviii) ≈ F(94 Å)-F(211Å)/120-F(171Å)/450,
where F(Fe xviii) is the Fe xviii flux, F(94 Å), F(211 Å) and F(171 Å) are the fluxes in the 94 Å, 211 Å and 171 Å channels, respectively.
The Fe xviii line has a strong response in the temperature range from ∼3 to ∼10 MK, with the peak around 6.5 MK. The Fe xviii time evolution shows a strong peak due to the flare, with a long decay phase lasting past the NuSTAR observations.
Similar to the Fe xviii line, a lower-temperature Fe xvi line can be constructed from the 335 Å and 171 Å lines <cit.>:
F(Fe xvi) ≈ F(335Å)-F(171Å)/70.
Similar to Fe xviii, the above formula is just an approximation of the Fe xvi flux. The Fe xvi line has a temperature response of similar shape to the Fe xviii line, with its peak at a lower temperature of ∼2.5 MK. The time evolution of the Fe xvi flux is also shown in Figure <ref>. It is characterized by a strong dip followed by the initial rise, soon after which a decrease is observed, due to the fact that the flare becomes weaker. After ∼8UT on 2014 December 11, the time evolution of Fe xvi flux is determined by fore- and background emission along the line-of-sight, making the post-flare loops no longer observable in this line.
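Both proxies are simple linear combinations of AIA channel fluxes and can be computed directly; a minimal sketch (assuming co-aligned, exposure-normalized flux arrays in DN s^-1 pix^-1) is given below.

```python
def fe18_proxy(f94, f211, f171):
    """Fe XVIII estimate from the AIA 94, 211 and 171 A fluxes (first formula above)."""
    return f94 - f211 / 120.0 - f171 / 450.0

def fe16_proxy(f335, f171):
    """Fe XVI estimate from the AIA 335 and 171 A fluxes (second formula above)."""
    return f335 - f171 / 70.0
```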
The evolution of 5-minute integrated NuSTAR fluxes (blue dots) and Fe xviii fluxes (olive line) is given in the inset of Figure <ref>. The NuSTAR and Fe xviii time evolutions show similar behaviour, with the (slow) decay rate of the two agreeing within the error bars and the only difference being the steeper decay of NuSTAR flux towards the end of the observation, which is likely an instrumental effect. The NuSTAR focal plane consists of a 2 × 2 array of CdZnTe detectors, which are divided into quadrants by a chip gap <cit.>. As the telescope pointing drifted slowly during the observations, the gap covered part of the area used for calculating the flux. Therefore, it is probable that the steeper decay of the NuSTAR emission towards the end of the observation is not due to solar variability, but rather a consequence of the telescope drift. This might also have some effect on the determination of the temperature and emission measure of the source, which will be discussed in the following sections.
Due to the slow decay of Fe xviii emission, we were able to make Fe xviii images even at the time of the NuSTAR observations, one day after the flare onset (see Figure <ref>). The upper row presents the Fe xviii maps of the flare onset, the post-flare loops 6 hours after the flare, and the remaining features 20 hours after the flare. The left and central panels in the bottom row present 25-minute integrated NuSTAR images above 2 keV from focal plane modules A (FPMA) and B (FPMB). Dashed lines denote the area covered by the gap during the observations, which is further enlarged due to the drift of the telescope. As the drift was dominantly along the x-direction (45 arcsecs in total) and negligible in the y-direction, the area affected by the gap is much larger in the x-direction. The region of interest for the analysis that will be presented in the next section, with an area of 50”× 50” = 2500 arcsec^2, is marked by the white box. The last image in the bottom right corner is the 25-minute integrated (same time range as NuSTAR) Fe xviii map of the source together with the 30, 50, 70 and 90% contours of NuSTAR emission in blue. As the uncertainty in NuSTAR absolute pointing accuracy is relatively large (see <cit.>), the NuSTAR image was shifted by -100” and 25” in the x and y directions, respectively, in order to match the Fe xviii source location. The NuSTAR and Fe xviii maps show the same sources, such as the top parts of the coronal loops, and the high emission source above them <cit.>.
In Figure <ref> we present the STEREO A image of active region AR12222 an hour before the NuSTAR observation. The orange line shows the solar limb as viewed from the Earth, while the red line is a projection of the line-of-sight from the Earth to the NuSTAR source, passing right above AR12222 located at ∼[-730”, -330”] in the STEREO A 195Å image. The NuSTAR source is not evident in this image as the 195Å channel is sensitive only to lower temperatures. From STEREO images, it is possible to calculate the height of the post-flare loops, defined as the distance between AR12222 and the mid-point of the line that minimizes the distance between the Earth-Sun line-of-sight and the radial extension above the active region. We estimate this height to be ∼300”. If we assume the height of the original loops at the flare onset to be 50” (as there are no STEREO observations of this active region immediately after the flare, we assume this height as a common value for ordinary flares), this yields a radial velocity of ∼ 2 km s^-1 when averaged over the whole day. This is similar to typical speeds of rising post-flare loops very late in an event <cit.>, giving further evidence that the NuSTAR source is indeed associated with the flare that occurred a day earlier.
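The quoted loop-rise speed follows from simple geometry; a short sketch (using the assumed initial height of 50” and the approximate conversion 1 arcsec ≈ 727 km at 1 AU) reproduces it.

```python
h0, h1 = 50.0, 300.0            # assumed initial and observed loop heights [arcsec]
dt = 24.5 * 3600.0              # ~one day between flare onset and observation [s]
v = (h1 - h0) * 727.0 / dt      # 1 arcsec ~ 727 km at 1 AU
print(v)                        # ~2 km/s, as quoted in the text
```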
§ ANALYSIS OF THE HIGH CORONAL SOURCE
§.§ Spectral fitting
We fitted the NuSTAR count spectrum inside the region of interest from Figure <ref> separately for FPMA and FPMB, following the approach of Hannah et al. 2016, using SolarSoft/OSPEX[ <http://hesperia.gsfc.nasa.gov/ssw/packages/spex/doc/>.]. The counts were binned with 0.2 keV energy resolution, while the integration time was 25 minutes (full NuSTAR observing time of the active region). As the livetime was around 1% during the whole observation period, this is roughly equal to 15 seconds of exposure at full livetime. In order to investigate the influence of the adopted energy range on the fitted temperature and emission measure, we fitted CHIANTI 7.1 isothermal models <cit.> to our data for different energy ranges: 2.5–5.2, 3.0–5.2, 3.5–5.2, 4.0–5.2 keV. These fits are presented in Figure <ref>. The lower limit of 2.5 keV was chosen as the lowest energy for which the calibration is still completely understood and reliable <cit.>, while the upper limit of 5.2 keV was chosen as the highest energy with a significant number of counts (>3 counts per bin). Both focal plane modules give consistent results, with temperature 3.8-4.6 MK and emission measure 0.3 × 10^46 cm^-3 - 1.8 × 10^46 cm^-3, depending on the lower limit of the energy range used in the fitting. The temperature gets higher and the emission measure gets lower as we go to higher energies. The 67% confidence ranges of temperature and emission measure were calculated using the standard Monte Carlo procedure in OSPEX and are given in Table 1 together with the best-fit values. A point to note is that our region of interest is located very close to the gap between the detectors, which leads to fewer counts, especially in later phases of the integration interval. The reason for this is the slow drift of the spacecraft pointing with time, resulting in covering a part of the region of interest by the gap. The missing counts could lead to an underestimation of the emission measure, but do not change the value of the determined temperature (as it is determined by the slope in the counts spectrum). A single temperature component is enough to fit the observations, similar to the results of <cit.>. We determine the density of the source to be (assuming a volume of 50×50×50 arcsec^3) in the range 2.5-6.0 × 10^8 cm^-3 (roughly 10-100 times the density of the quiet Sun corona at this height; see e.g., ), suggesting the density of late-phase loops to be significantly higher than that of the quiet Sun corona.
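The density estimate follows from n_e ≈ √(EM/V) for the assumed cubic volume; a short sketch (using the approximate conversion 1 arcsec ≈ 727 km at 1 AU) reproduces the quoted range.

```python
import numpy as np

arcsec_cm = 7.27e7                        # ~727 km per arcsec at 1 AU
V = (50 * arcsec_cm) ** 3                 # assumed 50"x50"x50" volume [cm^3]
for EM in (0.3e46, 1.8e46):               # fitted emission measure range [cm^-3]
    print(f"EM={EM:.1e} cm^-3 -> n_e ~ {np.sqrt(EM / V):.1e} cm^-3")
# prints ~2.5e8 and ~6.1e8, i.e. the 2.5-6.0e8 cm^-3 range quoted above
```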
§.§ Comparison of NuSTAR to SDO/AIA
§.§.§ Comparison to Fe xviii
In order to investigate the extent of the agreement between NuSTAR and Fe xviii sources, we compare the Fe xviii loci curve with the NuSTAR loci curves in different energy channels. For reference, the results of NuSTAR spectral fitting from the previous section for both focal plane modules are presented in Figure <ref> with different symbols for different energy ranges, together with the Fe xviii and NuSTAR loci curves. The Fe xviii loci curve is extracted from the temperature response functions <cit.> and the observed fluxes using the following formula
EM=F · S/R(T),
where EM is the emission measure [cm^-3], F is the flux [DN s^-1 pix^-1], S is the area of the region [cm^2] and R(T) is the temperature response function of the Fe xviii line [DN cm^5 s^-1 pix^-1]. The NuSTAR loci curves are extracted in a similar way from the NuSTAR temperature response function, determined by folding the generated photon spectra for different temperatures through the NuSTAR response matrix. The good agreement of our results is best seen in the inset of Figure <ref>, where we plot the loci curves and the determined EM-T pairs on a linear scale. The intersection of the Fe xviii loci curve with the NuSTAR loci curves in the temperature range 4.0-4.3 MK is consistent with the EM-T pairs shown in Figure <ref>, except for the fit including the lowest energies. Part of these low-energy counts might originate from cooler post-flare loops, which will also be discussed in more detail in the next sections.
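The loci-curve construction amounts to a pointwise division of the observed flux by the tabulated response; a minimal sketch is below (the flux, area and response values are hypothetical placeholders, not the measured ones).

```python
def loci_em(flux, area_cm2, response):
    """EM(T) loci curve, EM = F*S/R(T); flux in DN s^-1 pix^-1,
    area in cm^2, response R(T) in DN cm^5 s^-1 pix^-1."""
    return flux * area_cm2 / response

# Hypothetical example: a 50"x50" region is ~1.3e19 cm^2
print(loci_em(0.8, 1.32e19, 1e-27))   # EM at one tabulated temperature, ~1e46 cm^-3
```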
§.§.§ Comparison to other AIA channels
It is also possible to investigate the results of NuSTAR fitting to other AIA channels by calculating the expected count rates in different AIA channels from the source with the emission measure and temperature as given by NuSTAR, and compare them to the observed fluxes in AIA maps. The difficulty of this comparison is that the fraction of the cold background emission (in the temperature range below ∼3 MK) in these channels is unknown and non-removable. This is not an issue for the derived Fe xviii channel, which is not sensitive to this cooler plasma. The expected AIA fluxes are calculated by inverting Equation <ref>. This is a NuSTAR-predicted AIA flux coming from the NuSTAR source alone, without any additional contribution from the cooler plasma. The comparison between NuSTAR-predicted and observed fluxes is presented in Figure <ref>. The circles are the predicted fluxes for NuSTAR spectral fitting in the range 2.5-5.2 keV, and the stars for 4.0-5.2 keV. We use the fitted values of FPMB in both ranges, as they represent the two extreme T-EM fits. The full and dashed lines represent 1, 5, 10, 50 and 100% ratios of NuSTAR-predicted and observed fluxes in different AIA channels. The area where the predicted AIA flux from the NuSTAR source is larger than the total observed flux is shown with the red lines. If the NuSTAR -predicted flux for a given AIA channel is close to the observed flux (e.g., region between 50% and 100% lines in the plot), the emission in that AIA channel is dominated by the same plasma that NuSTAR observes. Unsurprisingly, this is best achieved for the 94Å channel and, consequently, the Fe xviii channel. For the first T-EM fit, the NuSTAR-predicted flux for the Fe xviii channel is greater than the observed flux. This result indicates that a single temperature fit is not enough to fit the observations at the lowest energies, as some of the low-energy counts are produced by a lower temperature plasma. The ratio for the Fe xviii channel for the fit at higher energies (second T-EM fit) lies in the range between 50% and 100%, while the 335 Å channel and its derived Fe xvi channel have ratios in the range 5-10 %. These results are in agreement with the fact that the Fe xviii source showed the same spatial features as the NuSTAR source, while we were not able to detect the Fe xvi source. Cooler lines at 171Å, 211Å and 193Å have ratios of NuSTAR-predicted fluxes to the observed fluxes at a percent level, which is expected as these lines are sensitive to plasma cooler than NuSTAR can observe.
§.§ Comparison of NuSTAR to FOXSI
The FOXSI <cit.> sounding rocket also uses direct focusing HXR optics, but is optimized especially for solar purposes. FOXSI has about one fifth of NuSTAR's effective area with a higher spatial resolution (FWHM of 9 arcsec). The main difference for solar observations between the two telescopes is the different low energy threshold. While NuSTAR detects photons down to ∼2 keV, the FOXSI entrance window intentionally blocks the large number of low energy photons, giving a typical peak in the count spectrum around 5 keV. The entrance window largely reduces the number of incoming photons, keeping the livetime high for the faint, higher-energy components. For example, a 25 minute observation by NuSTAR at 1% livetime and five times the effective area is equal to a FOXSI observation of 75 s at full livetime. However, this also means that FOXSI is not sensitive to low temperature plasmas that are best seen below 4 keV.
The FOXSI-2 rocket flew for a 6.5-minute observation interval during the NuSTAR solar pointing discussed here. FOXSI-2 targeted AR12222 for 35.2 seconds, though 12 minutes after the NuSTAR observation finished. As the NuSTAR/AIA source has a slow time variation, the time difference between the observations is of minor importance, at least for the order-of-magnitude estimate discussed here. Using the temperature and emission measure derived from NuSTAR (T=3.8 MK and EM=1.7 × 10^46 cm^-3), the expected FOXSI count rate is ∼1.6 counts for the FOXSI-2's most sensitive optics/detector pair D6. This value is computed above 5 keV and with the integration time of 35.2 seconds (integrating during the whole observation period). In total, 4 counts were observed by D6. This is a reasonable value given that the estimated non-solar background flux is 1.8 counts, while the expected count rate due to ghost-rays from sources outside of the FOV is unknown. Given the small-number statistics and the uncertainty of the ghost-ray background, the observed FOXSI-2 measurement is consistent with the values expected for the plasma observed with NuSTAR, but does not provide any further diagnostics for this event.
§ DISCUSSION AND CONCLUSIONS
In this paper, we have presented the first observations of the EUV late phase of a solar flare in X-rays with NuSTAR. NuSTAR has provided a unique opportunity to perform spectroscopy on X-rays from a coronal source a full day after the flare onset. With knowledge of the location of this faint source from NuSTAR, we were also able to find it in Fe xviii by eliminating the lower-temperature response of the AIA 94Å channel and integrating for 25 minutes (adding together 125 maps to obtain a higher signal-to-noise ratio). Here, NuSTAR played a crucial role in providing the information needed for extracting the very faint signal, which was far from evident in the 94Å maps.
The fact that the post-flare loops have been observed so late in the flare evolution points to continuing energy input in the later phases of the solar flare evolution. To quantify this statement, we estimate the cooling times of subsequent post-flare loops and compare them to the flare duration. We follow the approach of <cit.>, with the following formula for the cooling time of post-flare loops:
τ_cool=2.35·10^-2· L^5/6· n_e^-1/6· T_e^-1/6,
where τ_cool [s] is the cooling time (the time needed for post-flare loops to cool down to ∼ 10^5 K) and L [cm], n_e [cm^-3] and T_e [K] are the loop length, density and temperature at the start time. The temperature estimate of the original post-flare loops from the GOES observations is 10.5 MK, while the emission measure is 5 × 10^48 cm^-3. Even though the above estimates might only be a rough approximation because of the high occultation of the flare, we are in any case making only an approximate calculation of the cooling time. By assuming the length of the original post-flare loops to be ∼50”, we estimate the density to be 9 × 10^9 cm^-3. This gives us a cooling time of ∼1 hour, indicating that the original post-flare loops are long gone at the time of the NuSTAR observations and that the additional heating took place during the evolution of the post-flare system. The most probable explanation is the previously mentioned scenario of subsequent magnetic reconnections, resulting in reconnected loops being produced higher and higher in the corona.
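As a sanity check, this cooling-time estimate can be reproduced numerically; the cubic loop volume used to turn the GOES emission measure into a density is our own rough assumption.

```python
import numpy as np

# Order-of-magnitude check of the cooling time quoted above,
# tau_cool = 2.35e-2 * L^(5/6) * n_e^(-1/6) * T_e^(-1/6)  (cgs units).
arcsec_cm = 7.25e7                    # ~725 km per arcsecond
L  = 50 * arcsec_cm                   # assumed loop length [cm]
EM = 5e48                             # GOES emission measure [cm^-3]
T  = 10.5e6                           # GOES temperature [K]

n_e = np.sqrt(EM / L**3)              # density for an assumed cubic volume L^3
tau = 2.35e-2 * L**(5/6) * n_e**(-1/6) * T**(-1/6)
print(f"n_e ~ {n_e:.1e} cm^-3, tau_cool ~ {tau:.0f} s ~ {tau/3600.0:.1f} h")
# -> n_e ~ 1e10 cm^-3 (close to the quoted 9e9) and tau_cool ~ 1 hour
```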
The above results are in agreement with the original Skylab and SMM results, and the recent observations of a large post-flare loop system between 2014 October 14-16 by <cit.>. They conclude that the giant late-phase arches are similar in structure to the ordinary post-flare loops, and formed by magnetic reconnection. Their reasoning follows the work of <cit.>, in which it is pointed out that the reconnection rate may not depend only on the magnetic field (in which case, it would decrease with height), but possibly on the local Alfven speed, which is proportional to B/√(ρ), where B is the magnetic field strength and ρ the density. So, if the density decreases sufficiently fast, the reconnection rate could remain constant out to 0.5 R_⊙ despite the decreasing magnetic field strength, and thus produce the giant post-flare loops analyzed by <cit.> or in this study.
From NuSTAR and GOES data, it is possible to estimate the additional energy input needed to form the subsequent, rising post-flare loops. The total thermal energy of the loop system is proportional to the density, temperature and volume <cit.>:
E_th=3NkT=3k· nVT,
where k is the Boltzmann constant. We have obtained all the above parameters for the original flare loops from GOES and for the post-flare loops a day after from NuSTAR. We estimate that the thermal energy content in the NuSTAR loops is 5% of the thermal energy content of the original flare loops, indicating that there is still significant energy release even a full day after the flare onset. Next, by assuming linearity in the change of density, loop length and temperature over time (for simplicity), it is possible to calculate the change in cooling times of all the post-flare loops formed in between. Although the above assumption might not be accurate for all (or any) of the parameters, we are only interested in calculating an order-of-magnitude estimate here. The other assumption we use is that new loop systems are only produced when the old ones vanish. This assumption is in principle not valid, as new systems are produced while the old ones persist, but it gives us an approximate lower limit on the total thermal energy content in all the loop systems. The sequence is as follows: the original post-flare loops vanish after ∼1 hour, and during this time the density, temperature and volume change as well, and a new loop system with a different cooling time is produced. We calculate that this sequence repeats about 12 times during the 24 hours between the flare onset and the NuSTAR observations, with the total energy content in those 12 cycles of reconnection and cooling estimated at a factor of ∼13 larger than the one released during the impulsive phase of the flare only.
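The energy bookkeeping can be sketched as follows. With equal loop volumes the ratio comes out near 2 %, so the factor-of-a-few larger volume assumed below for the higher, day-old loops (our assumption, not a fitted value) is what brings it to the quoted ∼5 %; the 12-cycle summation depends on the linear-interpolation details and is not reproduced here.

```python
import numpy as np

k_B = 1.38e-16                                  # Boltzmann constant [erg/K]

def thermal_energy(EM, V, T):
    """E_th = 3 N k T with N = n V and n = sqrt(EM / V)."""
    return 3 * k_B * np.sqrt(EM * V) * T

V0 = (3.6e9) ** 3                               # ~50'' cubic flare-loop volume [cm^3] (assumption)
E_flare  = thermal_energy(5e48,   V0,     10.5e6)   # GOES loops at flare time
E_nustar = thermal_energy(1.7e46, 5 * V0, 3.8e6)    # day-old loops; the 5x larger
                                                    # volume is our assumption
print(f"E_nustar / E_flare ~ {E_nustar / E_flare:.2f}")   # -> ~0.05
```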
Previous estimates of the additional energy input during the decay phase of solar flares were derived using radiative losses at specific wavelength ranges. <cit.> calculate the total radiated energy in the EUV band during the late phase to be between 0.4 and 3.7 times the flare energy in the X-rays during the peak. <cit.> conclude in their statistical study of 38 solar flares that, on average, the total energy radiated from hot SXR-emitting plasma exceeds the peak thermal energy content by a factor of ∼3. It is important to note that the above studies used non-overlapping wavelength ranges, thus missing the contribution to total energy content from the wavelength range of the other study (and the rest of the wavelength spectrum). Our results for a single event are consistent with these statistical studies, especially as we compare our value with statistical averages that miss significant energy contributions.
In summary, all results indicate that the impulsive energy release is only a fraction of the energy release in the late phase of the flare evolution, at least for events with clearly observable late phase emission. This statement calls for re-examining the approach of using just the peak energy content or the non-thermal emission during the impulsive phase of the flare as the estimate of the total energy content of the flare. In order to assess this in more detail, a statistical study of similar events should be carried out. However, NuSTAR is not a solar-dedicated observatory, and therefore the observations are few and sporadic, making statistical studies difficult. Additionally, it is most likely that faint signals such as presented in this study can only be observed when the flare (and the active region) is occulted or at least over the limb, as the emission from these kinds of coronal sources on the disk would likely be masked by the much stronger emission of the active region beneath. Nevertheless, a statistical search for SDO/AIA Fe xviii sources in above-the-limb flares could give us new insights about the influence of the long-lasting decay phase on flare energetics.
This work made use of data from the NuSTAR mission, a project led by the California Institute of Technology, managed by the Jet Propulsion Laboratory, and funded by NASA. We thank the NuSTAR Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research made use of the NuSTAR Data Analysis Software (NuSTARDAS), jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). M.K. and S.K. acknowledge funding from the Swiss National Science Foundation (200021-140308). Funding for this work was also provided under NASA grants NNX12AJ36G and NNX14AG07G. A.J.M.'s participation was supported by NASA Earth and Space Science Fellowship award NNX13AM41H. I.G.H. is supported by a Royal Society University Research Fellowship. P. J. W. is supported by an EPSRC-Royal Society fellowship engagement grant. FOXSI was funded by NASA LCAS grant NNX11AB75G.
| The Nuclear Spectroscopic Telescope ARray (NuSTAR) is a focusing hard X-ray (HXR) telescope operating in the energy range from 3 to 79 keV <cit.>. While primarily designed to observe distant, faint astrophysical sources such as active galactic nuclei (AGN), black holes and supernova remnants, it is also capable of observing the Sun. With its focusing optics system, it can directly observe HXRs from previously undetected sources on the Sun due to its ten times higher effective area and orders-of-magnitude reduced background when compared to state-of-the-art solar HXR instruments such as the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). However, because it is optimized for observations of astrophysical objects, NuSTAR experiences some technical challenges when observing the Sun; these include ghost-rays and low throughput. Ghost-rays are unfocused, single-bounced photons (in contrast to properly focused photons, which reflect twice off the Wolter-I mirrors) coming from sources outside the field of view <cit.>. The throughput of NuSTAR's focal plane detector electronics, with a maximum of 400 counts per second per telescope, can effectively diminish the hard X-ray sensitivity in the presence of extremely bright sources <cit.>, making detections of fainter spectral components (such as a non-thermal component) difficult.
Despite these challenges, NuSTAR has begun to provide critical new observations of faint X-ray sources on the Sun <cit.>, giving us new insights into the coronal heating problem and particle energization in solar flares. In that respect, occulted active regions are priority targets in the planning of NuSTAR observations. With the brightest emission from the footpoints and low corona hidden, NuSTAR can search for faint coronal signatures of heated material and particle acceleration. In order to maximize NuSTAR livetime and minimize ghost-rays during these observations, they should be carried out during low-activity periods (preferably with no other active sources on disk).
In this paper, we analyze the occulted active region AR12222, which produced a C5.9 GOES (Geostationary Operational Environmental Satellite) class flare ∼24 hours before the NuSTAR observations. AR12222 was observed in the third NuSTAR solar campaign on 2014 December 11. The active region was also observed by the Solar TErrestrial RElations Observatory (STEREO), the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA) and the second launch of the Focusing Optics X-ray Solar Imager (FOXSI-2) sounding rocket. The goal of this paper is to analyze the time evolution of the X-ray and extreme-ultraviolet (EUV) emission of the observed source above the solar limb in the context of the flare evolution scenario proposed by <cit.> and <cit.>. In these papers, the authors argue that flares may have four distinct phases in their evolution: (1) an impulsive phase (best seen in HXRs), (2) a gradual phase seen in SXR/EUV from the post-flare loops, (3) coronal dimming, best seen in the 171Å line, and (4) an EUV late phase, best seen as a second peak in the 335 Å line a few (up to 6) hours after the flare onset. The explanation of the EUV late-phase emission lies in the formation of subsequent flare loops, overlying the original flare loops, which result from the reconnection of magnetic fields higher than those that reconnected during the flare's impulsive phase. Similar observations of “giant post-flare loops" and “giant arches" can be found in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, among others; a theoretical model of the subsequent magnetic reconnections (and its successful description of the flare SOL1973-07-29T13) is given in <cit.>. More recently, <cit.> proposed that the subsequent loop system(s) is produced by magnetic reconnection of the overlying active region magnetic field lines and the loop arcade produced by the flare, adding more complexity to the theoretical description of these events.
This paper is structured as follows. In Section 2 we give an overview of NuSTAR, SDO/AIA, STEREO and FOXSI-2 observations of AR12222. We present the results of NuSTAR spectroscopy in Section 3, along with the comparison of NuSTAR derived parameters with observations in other wavelengths. The discussion of the results, as well as possible future studies, is presented in Section 4. | null | null | null | null | null |
http://arxiv.org/abs/1701.07905v1 | 20170126235800 | Multi-Year X-ray Variations of Iron-K and Continuum Emissions in the Young Supernova Remnant Cassiopeia A | [
"Toshiki Sato",
"Yoshitomo Maeda",
"Aya Bamba",
"Satoru Katsuda",
"Yutaka Ohira",
"Ryo Yamazaki",
"Kuniaki Masai",
"Hironori Matsumoto",
"Makoto Sawada",
"Yukikatsu Terada",
"John P. Hughes",
"Manabu Ishida"
] | astro-ph.HE | [
"astro-ph.HE"
] |
1Department of Physics, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397
2Department of High Energy Astrophysics, Institute of Space and Astronautical Science (ISAS),Japan Aerospace Exploration Agency (JAXA), 3-1-1 Yoshinodai, Sagamihara, 229-8510, Japan;
3Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
4Research Center for the Early Universe, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
5Department of Physics, Faculty of Science & Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo, Tokyo 112-8551, Japan
6Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8602, Japan
7Saitama University, Shimo-Okubo 255, Sakura, Saitama 338-8570, Japan
8Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ. 08854-8019, USA
We found a simultaneous decrease of the Fe-K line and 4.2-6 keV continuum fluxes
of Cassiopeia A in the monitoring data taken by Chandra in 2000–2013.
The flux change rates in the whole remnant are -0.65±0.02 % yr^-1 in the
4.2–6.0 keV continuum and -0.6±0.1 % yr^-1 in the Fe-K line. In the
eastern region, where the thermal emission is considered to dominate, the
variations show the largest values: -1.03±0.05 % yr^-1 (4.2-6 keV
band) and -0.6±0.1 % yr^-1 (Fe-K line). In this region,
both the emission measure and the temperature show a decreasing trend with time.
This could be interpreted as adiabatic cooling with an expansion
of m = 0.66. On the other hand, in the non-thermal-emission-dominated
regions, the variations of the 4.2–6 keV continuum show smaller
rates: -0.60±0.04 % yr^-1 in the southwestern region,
-0.46±0.05 % yr^-1 in the inner region and +0.00±0.07 % yr^-1
in the forward shock region. In particular, the flux does not change
significantly in the forward shock region. These results imply
that a strong braking of the shock velocity has not been occurring in
Cassiopeia A (< 5 km s^-1 yr^-1). All of our results
support the conclusion that the X-ray flux decay in the remnant is mainly caused by the
thermal components.
§ INTRODUCTION
Supernova remnants (SNRs) are known to be among the most dynamic phenomena in
the Universe. The spectral and image evolutions are quicker when the remnant
is younger, since the shock velocity is faster and its
braking is larger. Recently, there have been several reports of X-ray
spectral variations in young SNRs
<cit.>.
Mainly, the variable component is considered to be synchrotron X-rays
(non-thermal X-rays) produced by high-energy electrons in the amplified
magnetic field (∼mG). Also, time series of images have revealed
the expansion of shell structures in young SNRs
<cit.>.
These facts tell us that SNRs are undergoing an extreme evolution, and
that we can detect this evolution on our observational time-scale (∼10 yr).
Such information is very useful for understanding how the
remnants evolve and affect the ambient medium.
Cas A, a young Galactic remnant ∼340 yr old <cit.>,
has been found to display several X-ray time variations through intensive observations
with the Chandra observatory. <cit.> found year-scale
X-ray variability in thermal and non-thermal knots using the Chandra data
taken in 2000, 2002, and 2004. Across the entire face of the remnant, they
identified six time-varying structures, four of which show count-rate
increases from ∼10 % to over 90 %. <cit.> analyzed
the same dataset and found year-scale time variation in the X-ray intensity
for a number of non-thermal X-ray filaments or knots associated with the
reverse-shocked regions. They found that variable non-thermal features are
much more prevalent than thermal ones. <cit.> found
a steady ∼1.5–2 % yr^-1 decline in the 4.2–6.0 keV band of the
overall X-ray emission of Cas A. They discussed a deceleration of the forward
shock velocity as a possible cause of this decline. A strong braking
of ≈30–70 km s^-1 yr^-1 was necessary to explain the decay
of the X-ray flux.
On the other hand, we cannot completely ignore the contribution of the thermal
X-rays to the time evolution, because the continuum emission below 4 keV contains
not only the non-thermal component but also a thermal bremsstrahlung component.
<cit.> estimated that the fraction of the non-thermal component
is ∼54 % in the 4.2–6 keV band. At lower energies,
no significant X-ray variability in the soft X-ray band (1–3 keV) has been found
<cit.>. However, we note the possibility
that the 4.2–6 keV and the 1–3 keV band emission may originate
from different plasmas. The spectrum of Cas A can be well fitted with a two-temperature
thermal model <cit.>
in addition to the non-thermal component. Of the two thermal components, the higher-temperature
one occupies a significant fraction of the observed 4.2–6 keV spectrum,
and explains the entire Fe-K line. In addition, the spatial distribution of iron in
Cassiopeia A is not similar to the hard X-ray intensity distribution
<cit.>. The thermal component thus has a different spatial
distribution from the non-thermal component, and hence we believe it is worthwhile
to investigate the time variation of the thermal component.
In this paper, keeping in mind the possibility of time variation of the thermal component,
we aimed to identify the variable component of the 4.2–6 keV emission of Cas A in
more detail. We investigated the time variations in the 4.2–7.3 keV band, including
the Fe-K emission lines, by using Chandra ACIS for the first time. The time
variations in thermal-emission-dominated and non-thermal-emission-dominated regions
were also investigated.
§ OBSERVATION AND DATA REDUCTION
§.§ Chandra ACIS-S
For our study of the year-scale variability in flux, data from the Chandra X-ray Observatory
were utilized. The Chandra observations of Cas A have been carried out several
times since the launch in 1999
<cit.>.
The data used in our analysis are listed in Table <ref>. The archived data
taken with ACIS-S in the Timed Exposure (TE) mode were gathered. ACIS-S3 is the back-illuminated CCD
chip with enhanced soft X-ray response and fairly constant spectral resolution during the course of the mission
compared to the ACIS-I array. The satellite and instrument are
described by <cit.>.
We reprocessed the event files (from level 1 to level 2) to remove pixel randomization and
to correct for CCD charge transfer efficiencies using CIAO version 4.6 and CalDB 4.6.3.
The bad grades were filtered out and good time intervals were retained. Cas A is so
bright that the data were all taken in the single-chip operation mode of S3 to avoid
telemetry loss. Cassiopeia A was usually placed near the center of the ACIS-S3 chip
of 1024 × 1024 pixel CCDs, each with 0^''.5 × 0^''.5
pixels and a field of view of 8'.4 × 8'.4. The pointing position is moderately
shifted from the aim-point and its roll angle varies from observation to observation.
The effective areas (arf) of individual observations were then calculated for each ObsID
using the Chandra standard analysis software package mkwarf in CIAO.
§.§ Suzaku XIS0 & XIS3
For tracing the non-thermal emission, the Suzaku data were utilized. A deep
observation with Suzaku was made in 2012. The exposure time was 205 ksec long
(XIS0+XIS3). Suzaku has four X-ray CCD cameras
<cit.>. One of the four XIS
detectors (XIS 1) is back-side illuminated (BI) and the other three (XIS 0, XIS 2 and XIS 3)
are front side illuminated (FI). In the XIS data taken with the Spaced-row Charge Injection
(SCI) option with the normal exposure mode, the gap columns due to the injected charges
appear every 54 lines. The column widths of the FIs are three pixels, smaller
than the five pixels of the BI. To minimize the flux uncertainty due to the gaps, we used only
the FI data. Among the FI CCDs, XIS 2 was not operated in 2012. Data screening
was performed with the standard criteria provided by the Suzaku processing team.
§ ANALYSIS AND RESULTS
§.§ Region Selection
X-ray emissions from Cas A are known to be a composite of thermal and non-thermal
components <cit.>. The thermal component is characterized
by emission lines from highly ionized ions of heavy elements such as iron, accompanied
by continuum emission from thermal bremsstrahlung. The non-thermal emission is known
to be traced by hard-band continuum emission above 10 keV <cit.>.
Figure <ref> shows the hard X-ray image (10–12 keV band) with Suzaku.
We can see a concentrated hard X-ray distribution extending from the western region toward
the center of Cassiopeia A. Figure <ref> shows the three-color images of
Chandra in the 4.2–6.0 keV, the 6.54–6.92 keV (Fe-K line) and the 1.75–1.95
keV (Si-K line) bands overlaid with the Suzaku contour in the 10-12 keV band.
The Fe-K emission is believed to originate from the optically thin thermal plasma.
Using the Suzaku contour map and the Chandra lines, we can segregate the distributions
of the thermal and non-thermal X-rays. Based on this information, five local regions
and one whole SNR region were selected from the image for our analysis.
We defined the “East" and the “North West" regions as the “Thermal dominant" regions
(magenta ellipses in Figure <ref>). The East region has the most abundant
X-ray flux of the Fe-K line. Therefore, this region is the best one in which
to discuss the time evolution of the Fe-K line with the least contribution from non-thermal
emission. The North West region is the second most luminous region in Fe-K
line emission. This region is separated both from the hard X-ray peak and
from the continuum-X-ray-dominant region <cit.>.
On the other hand, we defined the “South West", “Inner" and “Forward Shock" regions as
“Non-thermal dominant" regions (light blue regions in Figure <ref>).
<cit.> and <cit.> show the area dominated
by a harder spectrum. As a matter of fact, these regions manifest themselves in hard
X-rays. Our image also shows that the Inner region is bright in hard X-rays. In addition,
the forward shock has a featureless non-thermal emission
<cit.>. In the forward shock,
it was found that the average proper motion of Cassiopeia A is 0.30^'' yr^-1
<cit.>, so we corrected the Forward Shock region of each year
with this value (see Appendix).
§.§ Spectra
With the regions defined in Section <ref>, we extracted the spectra using
a custom pipeline based on CIAO scripts. Here, the background spectra
were extracted from outside the Whole SNR region defined in Figure <ref>.
Since the X-ray emission from Cassiopeia A is very strong, the background contribution is
almost negligible for an estimation of the time variation. The background fraction of the whole
SNR is only ∼3 % in the 4.2–7.3 keV band, and it is almost constant from 2002 to
2013. In 2000, this fraction shows a larger value (∼7 %), presumably because of an increase
in the charged-particle flux experienced during this observation.
we fitted the 4.2-7.3 keV band spectra
at each epoch with a power-law model and a Gaussian line with XSPEC version 12.8.2. The best-fit
parameters are summarized in Table <ref>.
Figure <ref> shows the spectra in the 4.2–7.3 keV band taken from the six regions.
As described in Section <ref>, the two bright non-thermal-dominant regions,
South West and Inner, are remarkable in the hard X-ray continuum flux (Figures <ref>
and <ref>). Accordingly, the photon indices of these regions are ∼2.6–3.0, while
those of the thermal regions are 3.0–3.4. In the Forward Shock and the Whole SNR, the photon indices and
their evolution are slightly different from the results in <cit.>.
This is probably due to the difference in the energy band used in the spectral fitting. The fit residuals
are larger for the thermal-dominant regions than for the non-thermal ones, because weak thermal
lines such as Cr-K (5.6 keV) appear in this band.
As shown in Figure <ref> and Table <ref>, we also found an increase of
the equivalent width of the Fe-K line of Cassiopeia A for the first time. In the Thermal dominant
regions, we can see a ∼10 % increase of the equivalent width over 13 years; it is notable
that such a large evolution of the equivalent width is the first detection among all
supernova remnants. In addition, we found that the Fe-K line centroid varies among the observations.
However, since there is a calibration uncertainty for Fe-K line centroids of ∼0.3 % (or ∼20
eV at Fe-K)[Available at http://cxc.harvard.edu/cal/docs/cal_present_status.html],
it is difficult to discuss its evolution.
§.§ Time Variation of 4.2-6.0 keV and Fe-K
From the fitting, we investigated the time variation of the 4.2-6.0 keV band and Fe-K line fluxes.
The flux evolution of Cas A was well reproduced by a linear decline <cit.>.
Therefore, we also fitted the time variation of the fluxes with a linear model. The fitting results
for each region are summarized in Figure <ref> and Table <ref>. In the
Whole SNR, the 4.2–6 keV band and the Fe-K line fluxes show a significant decline over these
∼10 years with a similar rate (∼ -0.6 % yr^-1).
There is a discrepancy between the variation of the 4.2-6 keV band in this work, -0.65 ± 0.02 % yr^-1,
and that in <cit.>, -1.5 ± 0.17 % yr^-1, in spite of nearly the same data set.
Although we tried several analysis methods (see Appendix), we could not identify the cause of the discrepancy.
In the local regions, the time variations of the 4.2-6.0 keV continuum and Fe-K line fluxes are different
from those of the Whole SNR. In Figure <ref>, we can see a larger time variation of the 4.2-6 keV
band in the regions that have a higher equivalent width of the Fe-K line and a softer photon index.
In addition, we found that the Forward Shock region shows no significant change in either the 4.2-6 keV band or the Fe-K line.
From these results, we can interpret that Cassiopeia A is undergoing a flux change in the reverse-shock
region, which has a larger contribution from the Fe-K line.
§.§ Fitting the spectra of the East region with the bremsstrahlung model
As shown in Figure <ref>, we found that the East region has the largest decay rate and
the largest equivalent width of the Fe-K line. Since the Fe-K line is a tracer of the thermal plasma
emission, we evaluated time variation of the emission from the East region via a thermal model.
Table <ref> shows the results of fitting with a thermal bremsstrahlung instead
of the power-law model. We then drew time histories of the resultant temperature and normalization
(= emission measure), and fitted them with a linear model. As a result, the time
evolutions of the temperature and the emission measure are found to be -(0.4±0.3) % yr^-1 and
-(0.5±0.5) % yr^-1 (90 % confidence level), respectively.
§ DISCUSSION
<cit.> already reported the decay of the intensity of the 4.2-6 keV continuum
from the whole remnant of Cas A. They discussed the cause of the decay by assuming that all the emission
originates from the non-thermal emission. As shown in Figure <ref>, we found that the time
variation in the whole remnant is also observed in the Fe-K line. Moreover, if we pick out the local
regions, the variabilities in the continuum and the Fe-K line differ greatly from region to
region. Therefore, the cause of the time variation in the 4.2-6 keV continuum and the Fe-K lines
must be revisited. We make positive use of the fact that the Fe-K line is evidence of thermal emission.
We found that the Forward Shock region, which has faint Fe-K emission, shows the smallest decay rate,
while the East region, which has bright Fe-K emission, shows the largest decay rate. This result naturally
supports the idea that these regions have different variable components (of thermal or non-thermal origin).
Using these regions, we discuss the origin of the decay with the thermal-dominant scenario
(section <ref>) and the non-thermal-dominant scenario (section <ref>)
individually.
§.§ A decay scenario of the thermal components
Young SNRs like Cassiopeia A are experiencing a drastic expansion due to the high speed of their
ejecta. The plasma formed by the shock heating should then expand adiabatically. The adiabatic
expansion also cools the plasma. Therefore, the change of the X-ray flux from the thermal
component must first be examined in terms of adiabatic cooling. <cit.> suggested
that Cassiopeia A is currently transitioning from the ejecta-dominated to the Sedov–Taylor phase,
and hence it is natural to assume that Cassiopeia A is experiencing an adiabatic evolution
without radiative cooling.
Here, we evaluate the time decay of the regions where the thermal emission is dominant by assuming
that their entire emission is of purely thermal origin. The bremsstrahlung X-ray flux from a thermal
plasma is described as
F_ν∝ EM · T^-1/2 exp(-hν/k_BT) ·g̅_ff
where EM, h, k_B and g̅_ff are the emission measure, the Planck constant, the Boltzmann
constant and the Gaunt factor, respectively. The Gaunt factor is given by <cit.>,
g̅_ff = ( 3/πk_BT/hν)^1/2 for k_BT/hν<1,
g̅_ff = √(3)/π ln( 4/ζk_BT/hν) for k_BT/hν>1,
where the constant ζ = 1.781. In the case of Cassiopeia A, a typical electron temperature
is in the range of 1-3 keV <cit.>. Therefore, we can use equation (<ref>),
because the energy band we chose (4.2-6.0 keV) is above the spectral cut-off.
First, we calculated the time evolution of the emission measure. We assume that the number of particles nV = constant.
Using the expansion parameter m (r ∝ t^m), we can write V ∝ r^3 ∝ t^3m and
n ∝ V^-1 ∝ t^-3m around the dynamical time scale or at the beginning of the Sedov–Taylor
phase, and then
EM = n^2 V ∝ t^-3m
where we assumed n_e ∼ n_i. In the case of Cassiopeia A, the change rate of emission measure is
1/EM dEM/dt = -3m/t ⇒ -0.58 (m/0.66) (t/340 yr)^-1 % yr^-1.
where we normalized m by 0.66 <cit.> and t by the remnant age of 340 yr.
Next, we calculate the thermal decay rate by taking adiabatic cooling into account.
For an adiabatic gas, PV^γ = const.
In the same way, we can estimate the temperature evolution with the plasma volume as described below,
TV^γ-1 = const. ⇒ T ∝ V^1-γ ∝ t^-2m
where γ = 5/3 is the heat capacity ratio. Thus, we can estimate the rate of decline of the
temperature as
1/T dT/dt = -2m/t ⇒ -0.39 (m/0.66) (t/340 yr)^-1 % yr^-1.
Here we can describe EM(t)=EM_0(t/t_0)^-3m and T(t)=T_0(t/t_0)^-2m. In order to calculate the
flux evolution including both of these effects, F_ν∝ EM(t) T(t)^-1/2 exp(-hν/k_B T(t)) (k_BT(t)/hν)^1/2,
and then
dF_ν/dt = -F_ν( 3m/t + 2m/thν/k_B T).
We measured a typical electron temperature of Cassiopeia A of k_BT ∼ 2.5 keV, which is
about half the mean photon energy of the 4.2-6.0 keV band. Thus we can estimate the flux change rate
of the thermal component as
1/F_νdF_ν/dt = -7m/t⇒ -1.36 (m/0.66) (t/340 yr)^-1 % yr^-1.
The variation in the East region (-1.03 % yr^-1; see Table <ref>) is the closest to this.
Also, the predicted rates of the emission measure and temperature (see eq.(<ref>) and eq.(<ref>))
are consistent with the observational values in the East region: -(0.4±0.3) % yr^-1 for
the emission measure and -(0.5±0.5) % yr^-1 for the temperature. Thus we conclude that the
flux variation in the East region of Cassiopeia A could be explained by the thermal variation due to
the adiabatic expansion.
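The chain of rates used in this subsection is easy to verify numerically; the minimal sketch below evaluates the expressions above for the Cas A values m = 0.66, t = 340 yr and hν/k_BT ≃ 2.

```python
# Numerical check of the adiabatic-expansion rates derived above.
m, t = 0.66, 340.0        # expansion parameter and remnant age [yr]
x = 2.0                   # hnu / k_B T for ~5 keV photons and k_B T ~ 2.5 keV

rates = {
    "EM":            -3 * m / t,                 # d(ln EM)/dt
    "T":             -2 * m / t,                 # d(ln T)/dt
    "F (4.2-6 keV)": -(3 * m + 2 * m * x) / t,   # d(ln F)/dt = -7m/t for x = 2
}
for name, r in rates.items():
    print(f"{name}: {100 * r:+.2f} % / yr")
# -> -0.58, -0.39 and -1.36 % / yr, matching the equations above
```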
On the other hand, the change rates in the other regions are much smaller than the value predicted by eq.(<ref>).
In particular, the variations in the non-thermal dominant regions (shallower than -0.6 % yr^-1) cannot be
explained by the adiabatic expansion.
§.§ A decay scenario of the non-thermal components
The cosmic-ray electrons are considered to be accelerated at the shock front by diffusive shock acceleration
<cit.>. In the Sedov phase, the blast wave is decelerated
by sweeping up ambient interstellar matter. This effect causes a flux decay of the synchrotron X-rays. In addition
to this, the evolution of the magnetic field and the electron injection rate also changes the flux of the synchrotron
X-rays. Here, we investigated whether the evolution of these parameters could explain the time variation of
Cassiopeia A or not.
The X-ray synchrotron emission is well approximated analytically. The energy spectrum of electrons is generally
given by
N_e(E) = AE^-p(1 + E/E_b)^-1 exp[-(E/E_e^ max)^2]
where A, E_b and E_e^ max are a normalization factor, the break energy and the maximum energy of
electrons, respectively. In the case of Cassiopeia A, the radio index is α = (p-1)/2 = 0.77
<cit.>. The break energy characterizes the spectral shape affected by synchrotron cooling.
During the acceleration, electrons with E > E_b lose their energies via synchrotron cooling, which produces
a steepened energy spectrum. From this electron distribution, we can calculate the approximate formula for the
X-ray luminosity, as shown in equation (5) of <cit.>,
L_ν∝ AB_d^(p+1)/2ν_b^-(p-1)/2 (ν/ν_b)^-p/2 exp(- √(ν/ν_ roll))
∝ AB_d^(p-2)/2ν^-p/2 exp(- √(ν/ν_ roll)),
where B_d, ν_b and ν_ roll are the downstream magnetic field, the break frequency and the
roll-off frequency, respectively. In this calculation, <cit.> assumed the photon energy
is larger than the break photon energy (ν > ν_b), and the break frequency depends on the downstream
magnetic field (ν_b ∝ B_d^-3). This assumption can be adopted for young SNRs (t_ age≲ 10^3 yr)
due to their amplified magnetic fields. Here, we attempt to estimate the time variation of the synchrotron X-rays
in the case of Cassiopeia A by recasting this equation in our time-evolution framework.
First, we considered the time evolution of the normalization factor: A in eq.(<ref>). We assumed
that the amount of accelerated particles is proportional to the product of the fluid ram pressure and the SNR
volume as assumed in <cit.>. Then, we can describe
A ∝ (ρ v_s^2)r^3 ∝ t^3m-2, where v_s is the shock velocity and we assumed the shock is moving
through the progenitor wind of the supernova: ρ∝ r^-2. Then the decay of this term is sensitive to the value of m.
In the case of Cassiopeia A (m = 0.66), we can find that this normalization is almost constant with time.
Second, we considered the time evolution of the term B_d^(p-2)/2 in eq.(<ref>).
The magnetic energy density is amplified to a constant fraction of ρ v_s^2: case (a) or ρ v_s^3:
case (a^') <cit.>, and we
can interpret the magnetic energy density evolution as a function of time below,
B_d^2 ∝ρ v_s^2 ∝1/r^2(dr/dt)^2 ∝ t^-2⇒ B_d ∝ t^-1,
B_d^2 ∝ρ v_s^3 ∝1/r^2(dr/dt)^3 ∝ t^(m-3)⇒ B_d ∝ t^1/2(m-3).
Hereafter we denote B_d^(p-2)/2∝ t^X. If we neglect the time evolution of ν_ roll for the time being,
the discussion so far results in the synchrotron intensity at the forward shock as L_ν∝ t^X. Thus, we can
estimate the time variation of the synchrotron radiation due to the evolution of the magnetic field as 1/L_ν· dL_ν/dt = X · t^-1. In the case of
Cassiopeia A, m = 0.66 <cit.>, α = 0.77 <cit.> and t_ age = 340
yr predict a variation of -0.08 % yr^-1 for case (<ref>) and -0.09 % yr^-1 for case (<ref>);
the difference is quite small. From this result, we found that the contribution of the magnetic field evolution
to the X-ray variation is small.
Finally, we considered the time evolution of the term ν^-p/2 exp(- √(ν/ν_ roll))
in eq.(<ref>). Assuming that ν_ roll is determined by a balance between the acceleration rate and the synchrotron
loss <cit.>, we obtain the following.
E^ max_e ∝ B_d^-1/2v_s ⇒ν_ roll∝(E_e^ max)^2 B_d ∝ v_s^2 ∝ t^2m-2≡ t^Y
From the time evolution of all the parameters (A, B_d and ν_roll), the logarithmic derivative of eq.(<ref>)
results in <cit.>,
dL_ν/dt = L_ν(Z/t + Y/2t√(ν/ν_roll)) ; Z = X+3m-2
where we write Z for the combined index, rather than reusing p, to avoid confusion with the electron index.
In the case of Cassiopeia A, the roll-off energy hν_roll is suggested to be ∼2.3 keV in the outer shock filaments
<cit.>. This implies (ν/ν_roll) ≃ 2 for the 4.2-6 keV band. Thus, we can estimate
the variation in the outer filaments as 1/L_ν· dL_ν/dt = 1/t · (Z + Y/√(2)), which gives a change rate of about
-0.26 % yr^-1 for both cases (a) and (a').
Consequently, the flux variability is sensitive to the value of the expansion parameter m. If we adopt m = 0.66, the variation in
the 4.2-6 keV band is estimated to be -0.26 % yr^-1.
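A short numerical sketch of this estimate is given below. It reproduces the magnetic-field-only rates (-0.08 and -0.09 % yr^-1) and gives totals of about -0.23 to -0.24 % yr^-1, consistent within rounding with the -0.26 % yr^-1 quoted above.

```python
import numpy as np

# Sketch of the synchrotron decay-rate estimate for the forward shock.
m, t     = 0.66, 340.0         # expansion parameter, age [yr]
alpha    = 0.77                # radio spectral index
p        = 2 * alpha + 1       # electron index, p = 2.54
Y        = 2 * m - 2           # nu_roll ~ t^Y
nu_ratio = 2.0                 # nu / nu_roll for 4.2-6 keV vs the ~2.3 keV roll-off

# B_d^{(p-2)/2} ~ t^X for the two amplification scalings:
cases = {"(a)":  -(p - 2) / 2,                 # B_d ~ t^-1
         "(a')": (p - 2) / 2 * (m - 3) / 2}    # B_d ~ t^{(m-3)/2}

for label, X in cases.items():
    Z = X + 3 * m - 2                          # combined index of A * B_d^{(p-2)/2}
    total = (Z + 0.5 * Y * np.sqrt(nu_ratio)) / t
    print(f"case {label}: B-field term {100 * X / t:+.3f} % / yr, "
          f"total {100 * total:+.2f} % / yr")
```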
Figure <ref> shows a comparison between the predicted rate and the observed time variation of the 4.2-6 keV continuum in the
forward shock region. We found that the model rate fits the observations well from 2000 to 2010. The parameter m can be interpreted in terms of
a deceleration of the shock
velocity. If we assume 5,000 km s^-1 as the shock velocity of Cassiopeia A, m = 0.66 implies a deceleration of
∼ 5 km s^-1 yr^-1. In Fig. <ref>, however, the data points after 2010 do not follow the m = 0.66
line. For reference, we draw another line with m = 0.8 that is closer to the data after 2010. In this case the
deceleration is ∼ 3 km s^-1 yr^-1. This means that a strong braking of the shock velocity has not been occurring in
the Forward Shock region (at most ∼ 5 km s^-1 yr^-1). We can see a flux jump between 2010 and 2012. Several
non-thermal filaments in Cassiopeia A show flickering of the X-ray flux on a time scale of ∼1 yr, and this jump might
also be explained by that feature.
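The quoted decelerations follow directly from r ∝ t^m, for which v = m r/t and hence dv/dt = (m-1) v/t; a one-line check:

```python
# Deceleration implied by r ~ t^m: dv/dt = (m - 1) * v / t.
v_s, t = 5000.0, 340.0                 # assumed shock speed [km/s], age [yr]
for m in (0.66, 0.8):
    print(f"m = {m}: dv/dt = {(m - 1) * v_s / t:+.1f} km/s per yr")
# -> about -5 and -3 km/s per yr, i.e. no strong braking
```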
The particle acceleration and the synchrotron cooling at the reverse shock are very complicated. The continuum emission seems
to be decreasing gradually in the South West and the Inner regions. However, these regions have a number of flickering
filaments and a large contribution from thermal X-rays. In addition, the dynamical evolution of the reverse shock moving inward in the
remnant is not well understood. <cit.> investigated the particle acceleration in the forward and
reverse shocks of Cassiopeia A using numerical calculations. They predicted that the change rate of the synchrotron X-rays at the
reverse shock is at least ∼ -0.9 % yr^-1, because the ejecta density drops in proportion to t^-3 (-3/t = -0.9 % yr^-1).
However, we cannot see such a large decrease in the South West and the Inner regions, and cannot explain the details of the
acceleration at the reverse shock at present.
§ CONCLUSION
Our work shows, for the first time, that the Fe-K flux of Cassiopeia A is decreasing together with the 4.2-6 keV continuum emission.
By using the hard X-ray distribution above 10 keV as a good indicator of non-thermal emission, we separated “Thermal dominant" and
“Non-thermal dominant" regions in the whole SNR, and investigated their time variations. We found clear correlations
of the decay rates in the 4.2-6 keV band with the photon indices and with the equivalent width of the Fe-K line. The correlations show
that the flux in the regions with softer spectra and richer Fe-K emission is decreasing more drastically.
We found that the East region, which is considered to be a “Thermal dominant" region and has the softest spectrum (Γ∼ 3.2), shows
the most rapid decline. The flux change rates of the Fe-K line and the 4.2-6 keV continuum are -0.6±0.1 % yr^-1 and -1.03±0.05 % yr^-1,
respectively. In this region, the time evolutions of the continuum flux, emission measure and temperature are well explained by
adiabatic cooling with an expansion of r ∝ t^m with m = 0.66.
On the other hand, “Non-thermal dominant” regions show smaller decay rates. In particular, the Forward Shock region,
which has the hardest spectrum (Γ∼ 2.6), shows no large decay. This implies that the blast wave of Cassiopeia A
does not experience a strong deceleration <cit.>.
From the decay rate, we conclude that the deceleration is ∼ 5 km s^-1 yr^-1 at most.
It is interesting to note that the time evolution of the East region and the Forward Shock region, where the thermal emission
and the non-thermal emission respectively dominate the most among all the selected regions, can be represented by the power-law
expansion r ∝ t^m with a common index of m = 0.66. The emission from the other regions is a mixture of
the thermal and non-thermal emission. Even though m = 0.66 is common, the resulting intensity decay rate is larger for the
thermal emission, while the intensity of the non-thermal continuum is nearly constant, if m has not changed in the last couple
of decades. A different mixing ratio probably results in a decay rate of the emission that differs from region to region.
Accordingly, we conclude that the decay of the X-ray intensity above ∼4 keV of the whole remnant is probably caused
by the thermal emission component.
T.S. is grateful for the travel support from HAYAKAWA FOUNDATION.
This work was supported by the Japan Society for the Promotion of Science (JSPS)
KAKENHI Grant Number 16J03448, 15K05107, 15K17657, 15K05088, 25105516 and 23540280.
We thank Jacco Vink, Takayuki Hayashi and Ryo Iizuka for helpful discussion
and suggestions in preparing this paper. We thank the anonymous referee for
his/her comments that helped us to improve the manuscript.
§ DIFFERENCE OF ARF AND SOURCE REGION
We found a discrepancy in X-ray flux between our results and <cit.>. Table <ref> shows a comparison
of our results with other results analyzed with different methods. In this table, we calculated the X-ray flux and count rate
in the 4.2-6 keV band with several arfs and source regions to investigate the cause of this discrepancy.
In CIAO, we can calculate two kinds of arfs (a weighted arf for extended sources or an imaging arf for point-like sources). We checked
whether the type of arf influences the estimated X-ray flux or not. The second and third columns in Table <ref>
show the 4.2-6 keV fluxes calculated with a weighted arf and an imaging arf, respectively. In this comparison, we cannot find a
large difference, and cannot see a large decay from 2000 to 2010 like Patnaude et al. (see the fifth column). Therefore, a difference
of arf type is not likely to be the cause of the inconsistency. The fourth column in Table <ref> shows the fluxes calculated with an
imaging arf in a different source region (r = 2.5'). The region within the 2.5' circle in Cassiopeia A does not include a part of
the forward shock filaments (see Figure <ref> left). In this case, the whole flux shows a smaller value than in the 3.5' circle
region; however, the flux decay from 2000 to 2010 does not change that much. In the count rate, we can see a larger decay than in the flux,
as listed in the 6th and 7th columns of Table 5. This is because the effective area is decreasing with time. The decay of the count rate
within the 2.5' circle is very similar to that of
<cit.>.
§ CORRECTION OF PROPER MOTION EFFECT
The proper motion of the forward shock of Cassiopeia A has been well studied in X-rays <cit.>, and its expansion rate
is 0.30^” yr^-1 on average. To discuss the flux variation in the forward shock, we defined a region
that shifts with the Forward Shock at this rate. We adopt a polygon as the shape of the Forward Shock
region, and then shifted each vertex in the expansion direction at 0.30^” yr^-1 (see Figure <ref> right).
If we do not apply this region shift, we find that the flux in 2013 is ∼5 % higher than that in 2000, because a component that was
outside the region 13 years ago leaked into the region. In the reverse-shock region, there is less contribution
from such leakage than in the forward shock, since the proper motion of the reverse shock is small. When we apply the region shift to the East
region, the decay rate does not change. Therefore, we did not apply the region shift in the reverse-shock regions.
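A minimal sketch of this correction is given below; the expansion center and vertex coordinates are placeholders, and each vertex is simply pushed radially outward at the mean proper motion.

```python
import numpy as np

def shift_region(vertices, center, years, rate=0.30):
    """Push polygon vertices radially away from `center` by rate*years arcsec."""
    v = np.asarray(vertices, dtype=float)
    d = v - np.asarray(center, dtype=float)
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return v + d / r * (rate * years)

center   = (0.0, 0.0)                                       # expansion center (placeholder)
vertices = [(150.0, 20.0), (155.0, -10.0), (148.0, -40.0)]  # arcsec offsets (placeholders)
print(shift_region(vertices, center, years=13))             # epoch 2000 -> 2013
```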
[Aharonian & Atoyan(1999)]1999A A...351..330A Aharonian, F. A., & Atoyan, A. M. 1999, , 351, 330
[Atoyan et al.(2000)]2000A A...355..211A Atoyan, A. M., Aharonian, F. A., Tuffs, R. J., Völk, H. J. 2000, , 355, 211
[Baars et al.(1977)]1977A A....61...99B Baars, J. W. M., Genzel, R., Pauliny-Toth, I. I. K., & Witzel, A. 1977, , 61, 99
[Bamba et al.(2005)]2005ApJ...621..793B Bamba, A., Yamazaki, R., Yoshida, T., Terasawa, T., & Koyama, K. 2005, , 621, 793
[Bell(1978)]1978MNRAS.182..443B Bell, A. R. 1978, , 182, 443
[Bell(2004)]2004MNRAS.353..550B Bell, A. R. 2004, , 353, 550
[Blandford & Eichler(1987)]1987PhR...154....1B Blandford, R., & Eichler, D. 1987, , 154, 1
[DeLaney et al.(2004)]2004ApJ...613..343D DeLaney, T., Rudnick, L., Fesen, R. A., et al. 2004, , 613, 343
[Fesen et al.(2006)]2006ApJ...645..283F Fesen, R. A., Hammell, M. C., Morse, J., et al. 2006, , 645, 283
[Grefenstette et al.(2014)]2014Natur.506..339G Grefenstette, B. W., Harrison, F. A., Boggs, S. E., et al. 2014, , 506, 339
[Grefenstette et al.(2015)]2015ApJ...802...15G Grefenstette, B. W., Reynolds, S. P., Harrison, F. A., et al. 2015, , 802, 15
[Harrison et al.(2013)]2013ApJ...770..103H Harrison, F. A., Craig, W. W., Christensen, F. E., et al. 2013, , 770, 103
[Haug(2004)]2004A A...423..793H Haug, E. 2004, , 423, 793
[Helder & Vink(2008)]2008ApJ...686.1094H Helder, E. A., & Vink, J. 2008, , 686, 1094
[Hughes et al.(2000)]2000ApJ...528L.109H Hughes, J. P., Rakowski, C. E., Burrows, D. N., & Slane, P. O. 2000, , 528, L109
[Hwang et al.(2000)]2000ApJ...537L.119H Hwang, U., Holt, S. S., & Petre, R. 2000, , 537, L119
[Hwang et al.(2004)]2004ApJ...615L.117H Hwang, U., Laming, J. M., Badenes, C., et al. 2004, , 615, L117
[Hwang & Laming(2009)]2009ApJ...703..883H Hwang, U., & Laming, J. M. 2009, , 703, 883
[Hwang & Laming(2012)]2012ApJ...746..130H Hwang, U., & Laming, J. M. 2012, , 746, 130
[Katsuda et al.(2008)]2008ApJ...678L..35K Katsuda, S., Tsunemi, H., & Mori, K. 2008, , 678, L35
[Katsuda et al.(2010)]2010ApJ...723..383K Katsuda, S., Petre, R., Mori, K., et al. 2010, , 723, 383
[Koyama et al.(2007)]2007PASJ...59S..23K Koyama, K., Tsunemi, H., Dotani, T., et al. 2007, , 59, 23
[Laming & Hwang(2003)]2003ApJ...597..347L Laming, J. M., & Hwang, U. 2003, , 597, 347
[Lee et al.(2014)]2014ApJ...789....7L Lee, J.-J., Park, S., Hughes, J. P., & Slane, P. O. 2014, , 789, 7
[Maeda et al.(2009)]2009PASJ...61.1217M Maeda, Y., Uchiyama, Y., Bamba, A., et al. 2009, , 61, 1217
[Masai(1994)]1994ApJ...437..770M Masai, K. 1994, , 437, 770
[Nakamura et al.(2012)]2012ApJ...746..134N Nakamura, R., Bamba, A., Dotani, T., et al. 2012, , 746, 134
[Patnaude & Fesen(2007)]2007AJ....133..147P Patnaude, D. J., & Fesen, R. A. 2007, , 133, 147
[Patnaude & Fesen(2009)]2009ApJ...697..535P Patnaude, D. J., & Fesen, R. A. 2009, , 697, 535
[Patnaude et al.(2011)]2011ApJ...729L..28P Patnaude, D. J., Vink, J., Laming, J. M., & Fesen, R. A. 2011, , 729, L28
[Patnaude & Fesen(2014)]2014ApJ...789..138P Patnaude, D. J., & Fesen, R. A. 2014, , 789, 138
[Pérez-Rendón et al.(2009)]2009A A...506.1249P Pérez-Rendón, B., García-Segura, G., & Langer, N. 2009, , 506, 1249
[Reynolds & Chevalier(1981)]1981ApJ...245..912R Reynolds, S. P., & Chevalier, R. A. 1981, , 245, 912
[Reynolds(1998)]1998ApJ...493..375R Reynolds, S. P. 1998, , 493, 375
[Rutherford et al.(2013)]2013ApJ...769...64R Rutherford, J., Dewey, D., Figueroa-Feliciano, E., et al. 2013, , 769, 64
[Rybicki & Lightman(1979)]1979rpa..book.....R Rybicki, G. B., & Lightman, A. P. 1979, New York, Wiley-Interscience, 1979. 393 p.,
[Uchiyama et al.(2007)]2007SPIE.6686E..0PU Uchiyama, H., Hyodo, Y., Yamaguchi, H., et al. 2007, , 6686, 66860P
[Uchiyama et al.(2007)]2007Natur.449..576U Uchiyama, Y., Aharonian, F. A., Tanaka, T., Takahashi, T., & Maeda, Y. 2007, , 449, 576
[Uchiyama & Aharonian(2008)]2008ApJ...677L.105U Uchiyama, Y., & Aharonian, F. A. 2008, , 677, L105
[Vink et al.(1999)]1999A A...344..289V Vink, J., Maccarone, M. C., Kaastra, J. S., et al. 1999, , 344, 289
[Vink(2006)]2006ESASP.604..319V Vink, J. 2006, The X-ray Universe 2005, 604, 319
[Vink(2008)]2008A A...486..837V Vink, J. 2008, , 486, 837
[Vink(2008)]2008AIPC.1085..169V Vink, J. 2008, American Institute of Physics Conference Series, 1085, 169
[Weisskopf et al.(2002)]2002PASP..114....1W Weisskopf, M. C., Brinkman, B., Canizares, C., et al. 2002, , 114, 1
[Willingale et al.(2002)]2002A A...381.1039W Willingale, R., Bleeker, J. A. M., van der Heyden, K. J., Kaastra, J. S., & Vink, J. 2002, , 381, 1039
[Yamazaki et al.(2006)]2006MNRAS.371.1975Y Yamazaki, R., Kohri, K., Bamba, A., et al. 2006, , 371, 1975
[Zirakashvili et al.(2014)]2014ApJ...785..130Z Zirakashvili, V. N., Aharonian, F. A., Yang, R., Oña-Wilhelmi, E., & Tuffs, R. J. 2014, , 785, 130
| Supernova remnants (SNRs) are known to be among the most dynamic phenomena in
the Universe. The spectral and image evolutions are quicker when the remnant
is younger, since the shock velocity is faster and its
braking is larger. Recently, there have been several reports of X-ray
spectral variations in young SNRs
<cit.>.
Mainly, the variable component is considered to be synchrotron X-rays
(non-thermal X-rays) produced by high-energy electrons in the amplified
magnetic field (∼mG). Also, time series of images have revealed
the expansion of shell structures in young SNRs
<cit.>.
These facts tell us that SNRs are undergoing an extreme evolution, and
that we can detect this evolution on our observational time-scale (∼10 yr).
Such information is very useful for understanding how the
remnants evolve and affect the ambient medium.
Cas A, a young Galactic remnant ∼340 yr old <cit.>,
has been found to display several X-ray time variations through intensive observations
with the Chandra observatory. <cit.> found year-scale
X-ray variability in thermal and non-thermal knots using the Chandra data
taken in 2000, 2002, and 2004. Across the entire face of the remnant, they
identified six time-varying structures, four of which show count-rate
increases from ∼10 % to over 90 %. <cit.> analyzed
the same dataset and found year-scale time variation in the X-ray intensity
for a number of non-thermal X-ray filaments or knots associated with the
reverse-shocked regions. They found that variable non-thermal features are
much more prevalent than thermal ones. <cit.> found
a steady ∼1.5–2 % yr^-1 decline in the 4.2–6.0 keV band of the
overall X-ray emission of Cas A. They discussed a deceleration of the forward
shock velocity as a possible cause of this decline. A strong braking
of ≈30–70 km s^-1 yr^-1 was necessary to explain the decay
of the X-ray flux.
On the other hand, we cannot completely ignore the contribution of the thermal
X-rays to the time evolution, because the continuum emission below 4 keV contains
not only the non-thermal component but also a thermal bremsstrahlung component.
<cit.> estimated that the fraction of the non-thermal component
is ∼54 % in the 4.2–6 keV band. At lower energies,
no significant X-ray variability in the soft X-ray band (1–3 keV) has been found
<cit.>. However, we note the possibility
that the 4.2–6 keV and the 1–3 keV band emission may originate
from different plasmas. The spectrum of Cas A can be well fitted with a two-temperature
thermal model <cit.>
in addition to the non-thermal component. Of the two thermal components, the higher-temperature
one occupies a significant fraction of the observed 4.2–6 keV spectrum,
and explains the entire Fe-K line. In addition, the spatial distribution of iron in
Cassiopeia A is not similar to the hard X-ray intensity distribution
<cit.>. The thermal component thus has a different spatial
distribution from the non-thermal component, and hence we believe it is worthwhile
to investigate the time variation of the thermal component.
In this paper, keeping in mind the possibility of time variation of the thermal component,
we aimed to identify the variable component of the 4.2–6 keV emission of Cas A in
more detail. We investigated the time variations in the 4.2–7.3 keV band, including
the Fe-K emission lines, by using Chandra ACIS for the first time. The time
variations in thermal-emission-dominated and non-thermal-emission-dominated regions
were also investigated. | null | null | null | <cit.> already reported the decay of the intensity of the 4.2-6 keV continuum
from the whole remnant of Cas A. They discussed the cause of the decay by assuming that all the emission
originates from the non-thermal emission. As shown in Figure <ref>, we found that the time
variation in the whole remnant is also observed in the Fe-K line. Moreover, if we pick out the local
regions, the variabilities in the continuum and the Fe-K line differ greatly from region to
region. Therefore, the cause of the time variation in the 4.2-6 keV continuum and the Fe-K lines
must be revisited. We make positive use of the fact that the Fe-K line is evidence of thermal emission.
We found that the Forward Shock region, which has faint Fe-K emission, shows the smallest decay rate,
while the East region, which has bright Fe-K emission, shows the largest decay rate. This result naturally
supports the idea that these regions have different variable components (of thermal or non-thermal origin).
Using these regions, we discuss the origin of the decay with the thermal-dominant scenario
(section <ref>) and the non-thermal-dominant scenario (section <ref>)
individually.
§.§ A decay scenario of the thermal components
Young SNRs like Cassiopeia A are experiencing a drastic expansion due to the high speed of their
ejecta. The plasma formed by the shock heating should then expand adiabatically. The adiabatic
expansion also cools the plasma. Therefore, the change of the X-ray flux from the thermal
component must first be examined in terms of adiabatic cooling. <cit.> suggested
that Cassiopeia A is currently transitioning from the ejecta-dominated to the Sedov–Taylor phase,
and hence it is natural to assume that Cassiopeia A is experiencing an adiabatic evolution
without radiative cooling.
Here, we evaluate the time decay of the regions where the thermal emission is dominant by assuming
that their entire emission is of purely thermal origin. The bremsstrahlung X-ray flux from a thermal
plasma is described as
F_ν∝ EM · T^-1/2 exp(-hν/k_BT) ·g̅_ff
where EM, h, k_B and g̅_ff are the emission measure, the Planck constant, the Boltzmann
constant and the Gaunt factor, respectively. The Gaunt factor is given by <cit.>,
g̅_ff = ( 3/πk_BT/hν)^1/2 for k_BT/hν<1,
g̅_ff = √(3)/π ln( 4/ζk_BT/hν) for k_BT/hν>1,
where the constant ζ = 1.781. In the case of Cassiopeia A, a typical electron temperature
is in the range of 1-3 keV <cit.>. Therefore, we can use equation (<ref>),
because the energy band we chose (4.2-6.0 keV) is above the spectral cut-off.
First, we calculated the time evolution of the emission measure. We assume that the number of particles nV = constant.
Using the expansion parameter m (r ∝ t^m), we can write V ∝ r^3 ∝ t^3m and
n ∝ V^-1 ∝ t^-3m around the dynamical time scale or at the beginning of the Sedov–Taylor
phase, and then
EM = n^2 V ∝ t^-3m
where we assumed n_e ∼ n_i. In the case of Cassiopeia A, the change rate of emission measure is
1/EM dEM/dt = -3m/t ⇒ -0.58 (m/0.66) (t/340 yr)^-1 % yr^-1.
where we normalized m by 0.66 <cit.> and t by the remnant age of 340 yr.
Next, we calculate the thermal decay rate by taking adiabatic cooling into account.
For an adiabatic gas, PV^γ = const.
In the same way, we can estimate the temperature evolution with the plasma volume as described below,
TV^γ-1 = const. ⇒ T ∝ V^1-γ ∝ t^-2m
where γ = 5/3 is the heat capacity ratio. Thus, we can estimate the rate of decline of the
temperature as
1/T dT/dt = -2m/t ⇒ -0.39 (m/0.66) (t/340 yr)^-1 % yr^-1.
Here we can write EM(t)=EM_0(t/t_0)^-3m and T(t)=T_0(t/t_0)^-2m. To calculate the
flux evolution including both of these effects, we use F_ν∝ EM(t) T(t)^-1/2 exp(-hν/k_B T(t)) (k_BT(t)/hν)^1/2,
and then
dF_ν/dt = -F_ν( 3m/t + 2m/thν/k_B T).
We measured a typical electron temperature of Cassiopeia A to be k_BT ∼ 2.5 keV. This is about half
of the mean photon energy of the 4.2-6.0 keV band. Thus we can estimate the flux change rate of the thermal
component as follows,
1/F_νdF_ν/dt = -7m/t⇒ -1.36 (m/0.66) (t/340 yr)^-1 % yr^-1.
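The rates quoted in the three equations above follow from simple arithmetic; a short sketch with the representative values assumed in the text (m = 0.66, t = 340 yr, kT = 2.5 keV):

m, t, kT = 0.66, 340.0, 2.5   # expansion index, age [yr], temperature [keV]
hnu = 5.0                      # representative photon energy of the 4.2-6 keV band

rate_em = -3.0 * m / t                         # (1/EM) dEM/dt
rate_T = -2.0 * m / t                          # (1/T) dT/dt
rate_flux = -(m / t) * (3.0 + 2.0 * hnu / kT)  # (1/F) dF/dt; -7m/t for h nu / k_B T = 2

print(f"EM:   {100 * rate_em:+.2f} % / yr")    # about -0.58 % / yr
print(f"T:    {100 * rate_T:+.2f} % / yr")     # about -0.39 % / yr
print(f"Flux: {100 * rate_flux:+.2f} % / yr")  # about -1.36 % / yr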
The variation in the East region (-1.03 % yr^-1; see Table <ref>) is closest to this value.
Also, the predicted rates of the emission measure and temperature (see eq.(<ref>) and eq.(<ref>))
are consistent with the observational values in the East region: -(0.4±0.3) % yr^-1 for
the emission measure and -(0.5±0.5) % yr^-1 for the temperature. Thus we conclude that the
flux variation in the East region of Cassiopeia A could be explained by the thermal variation due to
the adiabatic expansion.
On the other hand, the change rates in the other regions are much smaller than the value predicted by eq.(<ref>).
In particular, the variations in the non-thermal-dominant regions (< -0.6 % yr^-1) cannot be
explained by the adiabatic expansion.
§.§ A decay scenario of the non-thermal components
Cosmic-ray electrons are considered to be accelerated at the shock front by diffusive shock acceleration
<cit.>. In the Sedov phase, the blast wave is decelerated
as it sweeps up ambient interstellar matter. This effect causes a flux decay of synchrotron X-rays. In addition,
the evolution of the magnetic field and of the electron injection rate also changes the flux of synchrotron
X-rays. Here, we investigate whether the evolution of these parameters can explain the time variation of
Cassiopeia A.
The X-ray synchrotron emission is well approximated analytically. The energy spectrum of electrons is generally
given by
N_e(E) = AE^-p(1 + E/E_b)^-1 exp[-(E/E_e^ max)^2]
where A, E_b and E_e^ max are a normalization factor, the break energy and the maximum energy of
electrons, respectively. In the case of Cassiopeia A, the radio index is α = (p-1)/2 = 0.77
<cit.>. The break energy characterises the part of the spectrum affected by synchrotron cooling.
During the acceleration, electrons with E > E_b lose energy via synchrotron cooling, which
steepens the energy spectrum. From this electron distribution, we can calculate an approximate formula for the
X-ray luminosity, as shown in equation (5) of <cit.>,
L_ν∝ AB_d^(p+1)/2ν_b^-(p-1)/2 (ν/ν_b)^-p/2 exp(- √(ν/ν_ roll))
∝ AB_d^(p-2)/2ν^-p/2 exp(- √(ν/ν_ roll)),
where B_d, ν_b and ν_ roll are the downstream magnetic field, the break frequency and the
roll-off frequency, respectively. In this calculation, <cit.> assumed that the photon energy
is larger than the break photon energy (ν > ν_b), and that the break frequency depends on the downstream
magnetic field (ν_b ∝ B_d^-3). This assumption can be adopted for young SNRs (t_ age≲ 10^3 yr)
owing to their amplified magnetic fields. Here, we estimate the time variation of the synchrotron X-rays
of Cassiopeia A by recasting this equation in our time-evolution framework.
First, we considered the time evolution of the normalization factor: A in eq.(<ref>). We assumed
that the amount of accelerated particles is proportional to the product of the fluid ram pressure and the SNR
volume as assumed in <cit.>. Then, we can describe
A ∝ (ρ v_s^2)r^3 ∝ t^3m-2, where v_s is the shock velocity and we assumed that the shock is moving
through the progenitor wind of the supernova, ρ∝ r^-2. The decay of this term is therefore sensitive to the value of m.
In the case of Cassiopeia A (m = 0.66), this normalization is almost constant with time.
Second, we considered the time evolution of the term B_d^(p-2)/2 in eq.(<ref>).
The magnetic energy density is amplified to a constant fraction of ρ v_s^2 (case (a)) or of ρ v_s^3
(case (a^')) <cit.>, and we
can express the evolution of the magnetic energy density as a function of time as follows:
B_d^2 ∝ρ v_s^2 ∝1/r^2(dr/dt)^2 ∝ t^-2⇒ B_d ∝ t^-1,
B_d^2 ∝ρ v_s^3 ∝1/r^2(dr/dt)^3 ∝ t^(m-3)⇒ B_d ∝ t^1/2(m-3).
Hereafter we denote B_d^(p-2)/2∝ t^X. If we neglect the time evolution of ν_ roll for the time being,
the discussion so far results in the synchrotron intensity at the forward shock as L_ν∝ t^X. Thus, we can
estimate the time variation of the synchrotron radiation by the evolution of the magnetic field as 1/L_ν· dL_ν/dt = X · t^-1. In the case of
Cassiopeia A, using m = 0.66 <cit.>, α = 0.77 <cit.> and t_ age = 340
yr, this predicts a variation of -0.08 % yr^-1 for case (<ref>) and -0.09 % yr^-1 for case (<ref>),
whose difference is quite small. From this result, we found that the contribution of the magnetic field evolution
to the X-ray variation is small.
Finally, we considered the time evolution of the term ν^-p/2 exp(- √(ν/ν_ roll))
in eq.(<ref>). Assuming that ν_ roll is determined by a balance between the acceleration rate and the synchrotron
loss <cit.>, we obtain
E^ max_e ∝ B_d^-1/2v_s ⇒ν_ roll∝(E_e^ max)^2 B_d ∝ v_s^2 ∝ t^2m-2≡ t^Y
From the time evolution of all the parameters (A, B_d and ν_ roll), the logarithmic derivative of eq.(<ref>)
results in <cit.>,
dL_ν/dt = L_ν(X'/t + Y/2t√(ν/ν_ roll)) ; X' = X+3m-2
In the case of Cassiopeia A, the roll-off energy hν_ roll is suggested to be ∼ 2.3 keV in the outer shock filaments
<cit.>. This implies (ν/ν_ roll) ≃ 2 for the 4.2-6 keV band. Thus, we can estimate
the variation in the outer filaments as 1/L_ν· dL_ν/dt = 1/t · (X' + Y/√(2)), and the resulting change rate is about
-0.26 % yr^-1 for both cases (a) and (a^').
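These estimates can be reproduced with a few lines; the sketch below assumes m = 0.66, α = 0.77 and ν/ν_ roll ≃ 2, as in the text, and prints both the field-evolution term alone and the total rate.

import numpy as np

m, alpha, t = 0.66, 0.77, 340.0
p = 2.0 * alpha + 1.0                     # electron index from the radio slope

cases = {"(a)  B_d ~ t^-1": -1.0,
         "(a') B_d ~ t^((m-3)/2)": 0.5 * (m - 3.0)}
for label, b_index in cases.items():
    X = 0.5 * (p - 2.0) * b_index         # B_d^((p-2)/2) ~ t^X
    Y = 2.0 * m - 2.0                     # nu_roll ~ v_s^2 ~ t^Y
    rate_field = X / t                    # magnetic-field term alone
    rate_total = (X + 3.0 * m - 2.0 + Y / np.sqrt(2.0)) / t
    print(f"{label}: field {100 * rate_field:+.3f} %/yr, "
          f"total {100 * rate_total:+.2f} %/yr")  # ~ -0.08/-0.09 and ~ -0.25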
Consequently, the flux variability is sensitive to the value of the expansion parameter m. If we adopt m = 0.66, the variation in
the 4.2-6 keV band is estimated to be -0.26 % yr^-1.
Figure <ref> shows a comparison between the predicted rate and the observed time variation of the 4.2-6 keV continuum in the
Forward Shock region. We found that the model rate fits the observations from 2000 to 2010 well. The parameter m can be interpreted as
a deceleration of the shock
velocity. If we assume 5,000 km s^-1 as the shock velocity of Cassiopeia A, m = 0.66 implies a deceleration of
∼ 5 km s^-1 yr^-1. In Fig. <ref>, however, the data points after 2010 do not follow the m = 0.66
line. For reference, we draw another line with m = 0.8, which is closer to the data after 2010. In this case the
deceleration is ∼ 3 km s^-1 yr^-1. This means that strong braking of the shock velocity has not been occurring in
the Forward Shock region (at most ∼ 5 km s^-1 yr^-1). We can see a flux jump between 2010 and 2012. Several
non-thermal filaments in Cassiopeia A show flickering of the X-ray flux on a time scale of ∼1 yr, and this jump might
be explained by that feature.
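The quoted decelerations follow directly from v = dr/dt = m r / t, which gives dv/dt = (m-1) v / t; for instance:

v_s, t = 5000.0, 340.0                 # assumed shock velocity [km/s] and age [yr]
for m in (0.66, 0.8):
    print(m, (m - 1.0) * v_s / t)      # about -5 and -3 km/s per yr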
The particle acceleration and the synchrotron cooling at the reverse shock are very complicated. The continuum emission seems
to be decreasing gradually in the South West and the Inner regions. However, these regions contain a number of flickering
filaments and have a large contribution from thermal X-rays. In addition, the dynamical evolution of the reverse shock toward the
interior of the remnant is not well understood. <cit.> investigated the particle acceleration at the forward and
reverse shocks of Cassiopeia A using numerical calculations. They predicted that the change rate of the synchrotron X-rays at the
reverse shock is at least ∼ -0.9 % yr^-1, because the ejecta density drops in proportion to t^-3 (-3/t ≈ -0.9 % yr^-1 at t = 340 yr).
However, we do not see such a large decrease in the South West and the Inner regions, and we cannot explain the details of the
acceleration at the reverse shock at present. | Our work shows, for the first time, that the flux of the Fe-K line in Cassiopeia A is decreasing together with the continuum emission in the 4.2-6 keV band.
By using the hard X-ray distribution above 10 keV as a good indicator of non-thermal emission, we separated “Thermal dominant" and
“Non-thermal dominant" regions within the whole SNR, and investigated their time variations. We then found clear correlations
of the decay rates in the 4.2-6 keV band with the photon indices and with the equivalent width of the Fe-K line. The correlations show
that the flux in regions with softer spectra and richer Fe-K emission is decreasing more rapidly.
We found that the East region, which is considered to be a “Thermal dominant" region and has the softest spectrum (Γ∼ 3.2), shows
the most rapid decline. The flux change rates of the Fe-K line and of the 4.2-6 keV continuum are -0.6±0.1 % yr^-1 and -1.03± 0.05 % yr^-1,
respectively. In this region, the time evolution of the continuum flux, emission measure and temperature is well explained by
adiabatic cooling with an expansion of r ∝ t^m with m = 0.66.
On the other hand, the “Non-thermal dominant” regions show smaller decay rates. In particular, the Forward Shock region,
which has the hardest spectrum (Γ∼ 2.6), shows no large decay. This implies that the blast wave of Cassiopeia A
does not seem to experience a strong deceleration <cit.>.
From the decay rate, we conclude that the deceleration is at most ∼ 5 km s^-1 yr^-1.
It is interesting to note that the time evolution of the East region and of the Forward Shock region, where the thermal emission
and the non-thermal emission, respectively, dominate the most among all selected regions, can be represented by the power-law
expansion r ∝ t^m with a common index of m = 0.66. The emission from the other regions is a mixture of
thermal and non-thermal emission. Even though m = 0.66 is common, the resulting intensity decay rate is larger for the
thermal emission, while the intensity of the non-thermal continuum is nearly constant, provided m has not changed over the last couple
of decades. Different mixing ratios probably result in decay rates of the emission that differ from region to region.
Accordingly, we conclude that the decay of the X-ray intensity above ∼4 keV of the whole remnant is probably caused
by the thermal emission component.
T.S. is grateful for the travel support from HAYAKAWA FOUNDATION.
This work was supported by the Japan Society for the Promotion of Science (JSPS)
KAKENHI Grant Number 16J03448, 15K05107, 15K17657, 15K05088, 25105516 and 23540280.
We thank Jacco Vink, Takayuki Hayashi and Ryo Iizuka for helpful discussion
and suggestions in preparing this paper. We thank the anonymous referee for
his/her comments that helped us to improve the manuscript. |
http://arxiv.org/abs/1701.07873v3 | 20170126210106 | Study of the $D^0 p$ amplitude in $Λ_b^0\to D^0 p π^-$ decays | [
"LHCb collaboration",
"R. Aaij",
"B. Adeva",
"M. Adinolfi",
"Z. Ajaltouni",
"S. Akar",
"J. Albrecht",
"F. Alessio",
"M. Alexander",
"S. Ali",
"G. Alkhazov",
"P. Alvarez Cartelle",
"A. A. Alves Jr",
"S. Amato",
"S. Amerio",
"Y. Amhis",
"L. An",
"L. Anderlini",
"G. Andreassi",
"M. Andreotti",
"J. E. Andrews",
"R. B. Appleby",
"F. Archilli",
"P. d'Argent",
"J. Arnau Romeu",
"A. Artamonov",
"M. Artuso",
"E. Aslanides",
"G. Auriemma",
"M. Baalouch",
"I. Babuschkin",
"S. Bachmann",
"J. J. Back",
"A. Badalov",
"C. Baesso",
"S. Baker",
"V. Balagura",
"W. Baldini",
"R. J. Barlow",
"C. Barschel",
"S. Barsuk",
"W. Barter",
"F. Baryshnikov",
"M. Baszczyk",
"V. Batozskaya",
"B. Batsukh",
"V. Battista",
"A. Bay",
"L. Beaucourt",
"J. Beddow",
"F. Bedeschi",
"I. Bediaga",
"A. Beiter",
"L. J. Bel",
"V. Bellee",
"N. Belloli",
"K. Belous",
"I. Belyaev",
"E. Ben-Haim",
"G. Bencivenni",
"S. Benson",
"A. Berezhnoy",
"R. Bernet",
"A. Bertolin",
"C. Betancourt",
"F. Betti",
"M. -O. Bettler",
"M. van Beuzekom",
"Ia. Bezshyiko",
"S. Bifani",
"P. Billoir",
"T. Bird",
"A. Birnkraut",
"A. Bitadze",
"A. Bizzeti",
"T. Blake",
"F. Blanc",
"J. Blouw",
"S. Blusk",
"V. Bocci",
"T. Boettcher",
"A. Bondar",
"N. Bondar",
"W. Bonivento",
"I. Bordyuzhin",
"A. Borgheresi",
"S. Borghi",
"M. Borisyak",
"M. Borsato",
"F. Bossu",
"M. Boubdir",
"T. J. V. Bowcock",
"E. Bowen",
"C. Bozzi",
"S. Braun",
"M. Britsch",
"T. Britton",
"J. Brodzicka",
"E. Buchanan",
"C. Burr",
"A. Bursche",
"J. Buytaert",
"S. Cadeddu",
"R. Calabrese",
"M. Calvi",
"M. Calvo Gomez",
"A. Camboni",
"P. Campana",
"D. H. Campora Perez",
"L. Capriotti",
"A. Carbone",
"G. Carboni",
"R. Cardinale",
"A. Cardini",
"P. Carniti",
"L. Carson",
"K. Carvalho Akiba",
"G. Casse",
"L. Cassina",
"L. Castillo Garcia",
"M. Cattaneo",
"G. Cavallero",
"R. Cenci",
"D. Chamont",
"M. Charles",
"Ph. Charpentier",
"G. Chatzikonstantinidis",
"M. Chefdeville",
"S. Chen",
"S. -F. Cheung",
"V. Chobanova",
"M. Chrzaszcz",
"X. Cid Vidal",
"G. Ciezarek",
"P. E. L. Clarke",
"M. Clemencic",
"H. V. Cliff",
"J. Closier",
"V. Coco",
"J. Cogan",
"E. Cogneras",
"V. Cogoni",
"L. Cojocariu",
"G. Collazuol",
"P. Collins",
"A. Comerma-Montells",
"A. Contu",
"A. Cook",
"G. Coombs",
"S. Coquereau",
"G. Corti",
"M. Corvo",
"C. M. Costa Sobral",
"B. Couturier",
"G. A. Cowan",
"D. C. Craik",
"A. Crocombe",
"M. Cruz Torres",
"S. Cunliffe",
"R. Currie",
"C. D'Ambrosio",
"F. Da Cunha Marinho",
"E. Dall'Occo",
"J. Dalseno",
"P. N. Y. David",
"A. Davis",
"K. De Bruyn",
"S. De Capua",
"M. De Cian",
"J. M. De Miranda",
"L. De Paula",
"M. De Serio",
"P. De Simone",
"C. T. Dean",
"D. Decamp",
"M. Deckenhoff",
"L. Del Buono",
"M. Demmer",
"A. Dendek",
"D. Derkach",
"O. Deschamps",
"F. Dettori",
"B. Dey",
"A. Di Canto",
"H. Dijkstra",
"F. Dordei",
"M. Dorigo",
"A. Dosil Suárez",
"A. Dovbnya",
"K. Dreimanis",
"L. Dufour",
"G. Dujany",
"K. Dungs",
"P. Durante",
"R. Dzhelyadin",
"A. Dziurda",
"A. Dzyuba",
"N. Déléage",
"S. Easo",
"M. Ebert",
"U. Egede",
"V. Egorychev",
"S. Eidelman",
"S. Eisenhardt",
"U. Eitschberger",
"R. Ekelhof",
"L. Eklund",
"S. Ely",
"S. Esen",
"H. M. Evans",
"T. Evans",
"A. Falabella",
"N. Farley",
"S. Farry",
"R. Fay",
"D. Fazzini",
"D. Ferguson",
"A. Fernandez Prieto",
"F. Ferrari",
"F. Ferreira Rodrigues",
"M. Ferro-Luzzi",
"S. Filippov",
"R. A. Fini",
"M. Fiore",
"M. Fiorini",
"M. Firlej",
"C. Fitzpatrick",
"T. Fiutowski",
"F. Fleuret",
"K. Fohl",
"M. Fontana",
"F. Fontanelli",
"D. C. Forshaw",
"R. Forty",
"V. Franco Lima",
"M. Frank",
"C. Frei",
"J. Fu",
"W. Funk",
"E. Furfaro",
"C. Färber",
"A. Gallas Torreira",
"D. Galli",
"S. Gallorini",
"S. Gambetta",
"M. Gandelman",
"P. Gandini",
"Y. Gao",
"L. M. Garcia Martin",
"J. García Pardiñas",
"J. Garra Tico",
"L. Garrido",
"P. J. Garsed",
"D. Gascon",
"C. Gaspar",
"L. Gavardi",
"G. Gazzoni",
"D. Gerick",
"E. Gersabeck",
"M. Gersabeck",
"T. Gershon",
"Ph. Ghez",
"S. Gianì",
"V. Gibson",
"O. G. Girard",
"L. Giubega",
"K. Gizdov",
"V. V. Gligorov",
"D. Golubkov",
"A. Golutvin",
"A. Gomes",
"I. V. Gorelov",
"C. Gotti",
"R. Graciani Diaz",
"L. A. Granado Cardoso",
"E. Graugés",
"E. Graverini",
"G. Graziani",
"A. Grecu",
"P. Griffith",
"L. Grillo",
"B. R. Gruberg Cazon",
"O. Grünberg",
"E. Gushchin",
"Yu. Guz",
"T. Gys",
"C. Göbel",
"T. Hadavizadeh",
"C. Hadjivasiliou",
"G. Haefeli",
"C. Haen",
"S. C. Haines",
"B. Hamilton",
"X. Han",
"S. Hansmann-Menzemer",
"N. Harnew",
"S. T. Harnew",
"J. Harrison",
"M. Hatch",
"J. He",
"T. Head",
"A. Heister",
"K. Hennessy",
"P. Henrard",
"L. Henry",
"E. van Herwijnen",
"M. Heß",
"A. Hicheur",
"D. Hill",
"C. Hombach",
"H. Hopchev",
"W. Hulsbergen",
"T. Humair",
"M. Hushchyn",
"D. Hutchcroft",
"M. Idzik",
"P. Ilten",
"R. Jacobsson",
"A. Jaeger",
"J. Jalocha",
"E. Jans",
"A. Jawahery",
"F. Jiang",
"M. John",
"D. Johnson",
"C. R. Jones",
"C. Joram",
"B. Jost",
"N. Jurik",
"S. Kandybei",
"M. Karacson",
"J. M. Kariuki",
"S. Karodia",
"M. Kecke",
"M. Kelsey",
"M. Kenzie",
"T. Ketel",
"E. Khairullin",
"B. Khanji",
"C. Khurewathanakul",
"T. Kirn",
"S. Klaver",
"K. Klimaszewski",
"S. Koliiev",
"M. Kolpin",
"I. Komarov",
"R. F. Koopman",
"P. Koppenburg",
"A. Kosmyntseva",
"A. Kozachuk",
"M. Kozeiha",
"L. Kravchuk",
"K. Kreplin",
"M. Kreps",
"P. Krokovny",
"F. Kruse",
"W. Krzemien",
"W. Kucewicz",
"M. Kucharczyk",
"V. Kudryavtsev",
"A. K. Kuonen",
"K. Kurek",
"T. Kvaratskheliya",
"D. Lacarrere",
"G. Lafferty",
"A. Lai",
"G. Lanfranchi",
"C. Langenbruch",
"T. Latham",
"C. Lazzeroni",
"R. Le Gac",
"J. van Leerdam",
"A. Leflat",
"J. Lefrançois",
"R. Lefèvre",
"F. Lemaitre",
"E. Lemos Cid",
"O. Leroy",
"T. Lesiak",
"B. Leverington",
"T. Li",
"Y. Li",
"T. Likhomanenko",
"R. Lindner",
"C. Linn",
"F. Lionetto",
"X. Liu",
"D. Loh",
"I. Longstaff",
"J. H. Lopes",
"D. Lucchesi",
"M. Lucio Martinez",
"H. Luo",
"A. Lupato",
"E. Luppi",
"O. Lupton",
"A. Lusiani",
"X. Lyu",
"F. Machefert",
"F. Maciuc",
"O. Maev",
"K. Maguire",
"S. Malde",
"A. Malinin",
"T. Maltsev",
"G. Manca",
"G. Mancinelli",
"P. Manning",
"J. Maratas",
"J. F. Marchand",
"U. Marconi",
"C. Marin Benito",
"M. Marinangeli",
"P. Marino",
"J. Marks",
"G. Martellotti",
"M. Martin",
"M. Martinelli",
"D. Martinez Santos",
"F. Martinez Vidal",
"D. Martins Tostes",
"L. M. Massacrier",
"A. Massafferri",
"R. Matev",
"A. Mathad",
"Z. Mathe",
"C. Matteuzzi",
"A. Mauri",
"E. Maurice",
"B. Maurin",
"A. Mazurov",
"M. McCann",
"A. McNab",
"R. McNulty",
"B. Meadows",
"F. Meier",
"M. Meissner",
"D. Melnychuk",
"M. Merk",
"A. Merli",
"E. Michielin",
"D. A. Milanes",
"M. -N. Minard",
"D. S. Mitzel",
"A. Mogini",
"J. Molina Rodriguez",
"I. A. Monroy",
"S. Monteil",
"M. Morandin",
"P. Morawski",
"A. Mordà",
"M. J. Morello",
"O. Morgunova",
"J. Moron",
"A. B. Morris",
"R. Mountain",
"F. Muheim",
"M. Mulder",
"M. Mussini",
"D. Müller",
"J. Müller",
"K. Müller",
"V. Müller",
"P. Naik",
"T. Nakada",
"R. Nandakumar",
"A. Nandi",
"I. Nasteva",
"M. Needham",
"N. Neri",
"S. Neubert",
"N. Neufeld",
"M. Neuner",
"T. D. Nguyen",
"C. Nguyen-Mau",
"S. Nieswand",
"R. Niet",
"N. Nikitin",
"T. Nikodem",
"A. Nogay",
"A. Novoselov",
"D. P. O'Hanlon",
"A. Oblakowska-Mucha",
"V. Obraztsov",
"S. Ogilvy",
"R. Oldeman",
"C. J. G. Onderwater",
"J. M. Otalora Goicochea",
"A. Otto",
"P. Owen",
"A. Oyanguren",
"P. R. Pais",
"A. Palano",
"M. Palutan",
"A. Papanestis",
"M. Pappagallo",
"L. L. Pappalardo",
"W. Parker",
"C. Parkes",
"G. Passaleva",
"A. Pastore",
"G. D. Patel",
"M. Patel",
"C. Patrignani",
"A. Pearce",
"A. Pellegrino",
"G. Penso",
"M. Pepe Altarelli",
"S. Perazzini",
"P. Perret",
"L. Pescatore",
"K. Petridis",
"A. Petrolini",
"A. Petrov",
"M. Petruzzo",
"E. Picatoste Olloqui",
"B. Pietrzyk",
"M. Pikies",
"D. Pinci",
"A. Pistone",
"A. Piucci",
"V. Placinta",
"S. Playfer",
"M. Plo Casasus",
"T. Poikela",
"F. Polci",
"A. Poluektov",
"I. Polyakov",
"E. Polycarpo",
"G. J. Pomery",
"A. Popov",
"D. Popov",
"B. Popovici",
"S. Poslavskii",
"C. Potterat",
"E. Price",
"J. D. Price",
"J. Prisciandaro",
"A. Pritchard",
"C. Prouve",
"V. Pugatch",
"A. Puig Navarro",
"G. Punzi",
"W. Qian",
"R. Quagliani",
"B. Rachwal",
"J. H. Rademacker",
"M. Rama",
"M. Ramos Pernas",
"M. S. Rangel",
"I. Raniuk",
"F. Ratnikov",
"G. Raven",
"F. Redi",
"S. Reichert",
"A. C. dos Reis",
"C. Remon Alepuz",
"V. Renaudin",
"S. Ricciardi",
"S. Richards",
"M. Rihl",
"K. Rinnert",
"V. Rives Molina",
"P. Robbe",
"A. B. Rodrigues",
"E. Rodrigues",
"J. A. Rodriguez Lopez",
"P. Rodriguez Perez",
"A. Rogozhnikov",
"S. Roiser",
"A. Rollings",
"V. Romanovskiy",
"A. Romero Vidal",
"J. W. Ronayne",
"M. Rotondo",
"M. S. Rudolph",
"T. Ruf",
"P. Ruiz Valls",
"J. J. Saborido Silva",
"E. Sadykhov",
"N. Sagidova",
"B. Saitta",
"V. Salustino Guimaraes",
"C. Sanchez Mayordomo",
"B. Sanmartin Sedes",
"R. Santacesaria",
"C. Santamarina Rios",
"M. Santimaria",
"E. Santovetti",
"A. Sarti",
"C. Satriano",
"A. Satta",
"D. M. Saunders",
"D. Savrina",
"S. Schael",
"M. Schellenberg",
"M. Schiller",
"H. Schindler",
"M. Schlupp",
"M. Schmelling",
"T. Schmelzer",
"B. Schmidt",
"O. Schneider",
"A. Schopper",
"K. Schubert",
"M. Schubiger",
"M. -H. Schune",
"R. Schwemmer",
"B. Sciascia",
"A. Sciubba",
"A. Semennikov",
"A. Sergi",
"N. Serra",
"J. Serrano",
"L. Sestini",
"P. Seyfert",
"M. Shapkin",
"I. Shapoval",
"Y. Shcheglov",
"T. Shears",
"L. Shekhtman",
"V. Shevchenko",
"B. G. Siddi",
"R. Silva Coutinho",
"L. Silva de Oliveira",
"G. Simi",
"S. Simone",
"M. Sirendi",
"N. Skidmore",
"T. Skwarnicki",
"E. Smith",
"I. T. Smith",
"J. Smith",
"M. Smith",
"H. Snoek",
"l. Soares Lavra",
"M. D. Sokoloff",
"F. J. P. Soler",
"B. Souza De Paula",
"B. Spaan",
"P. Spradlin",
"S. Sridharan",
"F. Stagni",
"M. Stahl",
"S. Stahl",
"P. Stefko",
"S. Stefkova",
"O. Steinkamp",
"S. Stemmle",
"O. Stenyakin",
"H. Stevens",
"S. Stevenson",
"S. Stoica",
"S. Stone",
"B. Storaci",
"S. Stracka",
"M. Straticiuc",
"U. Straumann",
"L. Sun",
"W. Sutcliffe",
"K. Swientek",
"V. Syropoulos",
"M. Szczekowski",
"T. Szumlak",
"S. T'Jampens",
"A. Tayduganov",
"T. Tekampe",
"G. Tellarini",
"F. Teubert",
"E. Thomas",
"J. van Tilburg",
"M. J. Tilley",
"V. Tisserand",
"M. Tobin",
"S. Tolk",
"L. Tomassetti",
"D. Tonelli",
"S. Topp-Joergensen",
"F. Toriello",
"E. Tournefier",
"S. Tourneur",
"K. Trabelsi",
"M. Traill",
"M. T. Tran",
"M. Tresch",
"A. Trisovic",
"A. Tsaregorodtsev",
"P. Tsopelas",
"A. Tully",
"N. Tuning",
"A. Ukleja",
"A. Ustyuzhanin",
"U. Uwer",
"C. Vacca",
"V. Vagnoni",
"A. Valassi",
"S. Valat",
"G. Valenti",
"R. Vazquez Gomez",
"P. Vazquez Regueiro",
"S. Vecchi",
"M. van Veghel",
"J. J. Velthuis",
"M. Veltri",
"G. Veneziano",
"A. Venkateswaran",
"M. Vernet",
"M. Vesterinen",
"J. V. Viana Barbosa",
"B. Viaud",
"D. Vieira",
"M. Vieites Diaz",
"H. Viemann",
"X. Vilasis-Cardona",
"M. Vitti",
"V. Volkov",
"A. Vollhardt",
"B. Voneki",
"A. Vorobyev",
"V. Vorobyev",
"C. Voß",
"J. A. de Vries",
"C. Vázquez Sierra",
"R. Waldi",
"C. Wallace",
"R. Wallace",
"J. Walsh",
"J. Wang",
"D. R. Ward",
"H. M. Wark",
"N. K. Watson",
"D. Websdale",
"A. Weiden",
"M. Whitehead",
"J. Wicht",
"G. Wilkinson",
"M. Wilkinson",
"M. Williams",
"M. P. Williams",
"M. Williams",
"T. Williams",
"F. F. Wilson",
"J. Wimberley",
"J. Wishahi",
"W. Wislicki",
"M. Witek",
"G. Wormser",
"S. A. Wotton",
"K. Wraight",
"K. Wyllie",
"Y. Xie",
"Z. Xing",
"Z. Xu",
"Z. Yang",
"Y. Yao",
"H. Yin",
"J. Yu",
"X. Yuan",
"O. Yushchenko",
"K. A. Zarebski",
"M. Zavertyaev",
"L. Zhang",
"Y. Zhang",
"Y. Zhang",
"A. Zhelezov",
"Y. Zheng",
"X. Zhu",
"V. Zhukov",
"S. Zucchelli"
] | hep-ex | [
"hep-ex"
] |
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-EP-2017-007
LHCb-PAPER-2016-061
26 January 2017
Study of the D^0 p amplitude in Λ_b^0→ D^0 p π^- decays
The LHCb collaboration[Authors are listed at the end of this paper.]
An amplitude analysis of the Λ_b^0→ D^0 p π^- decay is performed in the part of the phase space
containing resonances in the D^0 p channel. The study is based on a data sample
corresponding to an integrated luminosity of 3.0 fb^-1 of pp collisions recorded by the LHCb
experiment. The spectrum of excited Λ_c^+ states that decay into D^0 p is studied.
The masses, widths and quantum numbers of the Λ_c(2880)^+ and Λ_c(2940)^+ resonances are measured.
The constraints on the spin and parity of the Λ_c(2940)^+ state are obtained for the first time.
A near-threshold enhancement in the D^0 p amplitude
is investigated and found to be consistent with a new resonance, denoted the Λ_c(2860)^+,
of spin 3/2 and positive parity.
Submitted to JHEP
CERN on behalf of the LHCb collaboration, licence http://creativecommons.org/licenses/by/4.0/CC-BY-4.0.
§ INTRODUCTION
Decays of beauty baryons to purely hadronic final states
provide a wealth of information about the interactions between the fundamental
constituents of matter.
Studies of direct CP violation in these decays can help constrain the parameters of the Standard Model
and New Physics effects in a similar way as in decays of beauty
mesons <cit.>.
Studies of the decay dynamics of beauty baryons can provide important information on the spectroscopy of
charmed baryons, since the known initial state provides strong constraints on the
quantum numbers of intermediate resonances.
The recent observation of pentaquark states at LHCb <cit.> has renewed
the interest in baryon spectroscopy.
The present analysis concerns the decay amplitude of the
Cabibbo-favoured decay Λ_b^0→ D^0 p π^-
(the inclusion of charge-conjugate processes is implied throughout this paper).
A measurement of the branching fraction of this decay
with respect to the Λ_b^0→Λ_c^+ π^- mode
was reported by the LHCb collaboration using a data sample corresponding to
1.0 fb^-1 of integrated luminosity <cit.>.
The Λ_b^0→ D^0 p π^- decay includes resonant contributions in the D^0 p channel
that are associated with intermediate excited Λ_c^+ states,
as well as contributions in the p π^- channel due to excited nucleon (N) states.
The study of the D^0 p part of the amplitude will help to constrain the dynamics of the Cabibbo-suppressed
decay Λ_b^0→ D^0 p K^-, which is potentially sensitive to the angle γ of the Cabibbo-Kobayashi-Maskawa
quark mixing matrix <cit.>.
The analysis of the D^0 p amplitude is interesting in its own right.
One of the states decaying to D^0 p, the Λ_c(2940)^+, has a possible interpretation as a D^*N molecule
<cit.>.
There are currently no experimental constraints on the quantum numbers of the Λ_c(2940)^+ state.
The mass spectrum of the predicted and observed orbitally excited Λ_c^+ states <cit.>
is shown in Fig. <ref>. In addition to the ground state and to the Λ_c(2595)^+ and
Λ_c(2625)^+ states, which are identified as the members of the P-wave doublet,
a D-wave doublet with higher mass is predicted. One of the members of this doublet could be the state known as the Λ_c(2880)^+,
which is measured to have spin and parity J^P=5/2^+ <cit.>, while
no candidate for the other state has been observed yet.
Several theoretical studies provide mass predictions for this state and other excited charm
baryons <cit.>.
The BaBar collaboration has previously reported indications of a structure in the D^0 p mass spectrum
close to threshold, at a mass around
2.84 GeV[Natural units with ħ=c=1 are used throughout.],
which could be the missing member of the D-wave doublet <cit.>.
This analysis is based on a data sample corresponding to an integrated luminosity of
3.0 fb^-1 of pp collisions recorded by the LHCb detector, with 1.0 fb^-1 collected at a
centre-of-mass energy √(s)=7 TeV in 2011 and 2.0 fb^-1 at √(s)=8 TeV in 2012.
The paper is organised as follows.
Section <ref> gives a brief description of the LHCb experiment and its
reconstruction and simulation software.
The amplitude analysis formalism and fitting technique are introduced in Sec. <ref>.
The selection of candidates is described in Sec. <ref>, followed by the measurement of
signal and background yields (Sec. <ref>), evaluation of the efficiency (Sec. <ref>),
determination of the shape of the background distribution (Sec. <ref>), and discussion of the
effects of momentum resolution (Sec. <ref>).
Results of the amplitude fit are presented in Sec. <ref> separately for four different
regions of the phase space, along with the systematic uncertainties for those fits.
Section <ref> gives a summary of the results.
§ DETECTOR AND SIMULATION
The detector <cit.> is a single-arm forward
spectrometer covering the range 2<η <5,
designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system
consisting of a silicon-strip vertex detector surrounding the pp
interaction region, a large-area silicon-strip detector located
upstream of a dipole magnet with a bending power of about
4 Tm, and three stations of silicon-strip detectors and straw
drift tubes placed downstream of the magnet.
The tracking system provides a measurement of the momentum, p, of charged particles with
a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV.
The minimum distance of a track to a primary vertex (PV), the impact parameter (IP),
is measured with a resolution of (15+29/p_T) μm,
where p_T is the component of the momentum transverse to the beam, in GeV.
Different types of charged hadrons are distinguished using information
from two ring-imaging Cherenkov detectors.
Photons, electrons and hadrons are identified by a calorimeter system consisting of
scintillating-pad and preshower detectors, an electromagnetic
calorimeter and a hadronic calorimeter. Muons are identified by a
system composed of alternating layers of iron and multiwire
proportional chambers.
The online event selection is performed by a trigger <cit.>,
which consists of a hardware stage, based on information from the calorimeter and muon
systems, followed by a software stage, which applies a full event
reconstruction.
At the hardware trigger stage, events are required to have a muon with high p_T or a
hadron, photon or electron with high transverse energy in the calorimeters.
The software trigger requires a two-, three- or four-track
secondary vertex with significant displacement
from any PV in the event. At least one charged particle forming the vertex
must have p_T exceeding a threshold in the range 1.6–1.7 GeV and be inconsistent with originating from a PV.
A multivariate algorithm <cit.> is used for
the identification of secondary vertices consistent with the decay
of a hadron.
In the simulation, pp collisions are generated using
Pythia 8 <cit.> with a specific LHCb configuration <cit.>. Decays of hadronic particles
are described by EvtGen <cit.>, in which final-state
radiation is generated using Photos <cit.>. The
interaction of the generated particles with the detector, and its response,
are implemented using the Geant4 toolkit <cit.> as described in
Ref. <cit.>.
§ AMPLITUDE ANALYSIS FORMALISM
The amplitude analysis is based on the helicity formalism used in previous LHCb analyses.
A detailed description of the formalism can be found in Refs. <cit.>. This section gives details of the implementation specific to the Λ_b^0→ D^0 p π^- decay.
§.§ Phase space of the decay
Three-body decays of scalar particles are described by the two-dimensional phase space
of independent kinematic parameters, often represented as a Dalitz plot <cit.>.
For baryon decays, in general, the additional angular dependence of the decay products
on the polarisation of the decaying particle also has to be considered.
A vector of five kinematic variables (denoted Ω) describes the phase space of the decay Λ_b^0→ D^0 p π^-.
The kinematic variables are the two Dalitz plot variables, namely the invariant masses squared
of the D^0 p and p π^- combinations, M^2(D^0 p) and M^2(p π^-),
and three angles that determine the orientation of the three-body decay plane (Fig. <ref>).
These angles are defined in the rest frame of the decaying Λ_b^0 baryon with
the x̂ axis given by the direction of the Λ_b^0 in the laboratory frame,
the polarisation axis ẑ given by the cross-product of the beam direction and the x̂ axis, and the ŷ axis
given by the cross-product of the ẑ and x̂ axes.
The angular variables are the cosine of the polar angle, cosϑ_p,
and the azimuthal angle, φ_p, of the proton momentum in the reference frame defined above (Fig. <ref>(a)),
and the angle φ_Dπ between the D^0π^- plane and the plane formed by the proton direction
and the polarisation axis ẑ (Fig. <ref>(b)).
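For illustration only (this is our sketch, not the collaboration's code), the construction of this reference frame and of the proton angles can be written as follows; the polar angle is taken with respect to the polarisation axis ẑ, and the function assumes the proton momentum has already been boosted into the Λ_b^0 rest frame.

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def proton_angles(p_lb_lab, p_beam_lab, p_proton_rest):
    # Build the frame of the text: x along the Lambda_b lab direction,
    # z = beam x x-hat (polarisation axis), y = z x x.
    x = unit(p_lb_lab)
    z = unit(np.cross(p_beam_lab, x))
    y = np.cross(z, x)
    p = unit(p_proton_rest)
    cos_theta_p = np.dot(p, z)                       # polar angle w.r.t. z
    phi_p = np.arctan2(np.dot(p, y), np.dot(p, x))   # azimuthal angle
    return cos_theta_p, phi_p

# toy three-vectors, purely illustrative
print(proton_angles(np.array([0.2, 0.1, 5.0]),
                    np.array([0.0, 0.0, 1.0]),
                    np.array([0.3, -0.4, 0.8])))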
§.§ Helicity formalism
The baseline amplitude fit uses the helicity formalism where the interfering amplitude components
are expressed as sequential quasi-two-body decays → R, R→ (where R
denotes the intermediate resonant or nonresonant state).
The decay amplitude for a baryon with spin projection μ decaying via
an intermediate state R with helicity λ_R into a
final state with proton helicity λ_p is
𝒜_μ, λ_R, λ_p [M^2(D^0 p), θ_p, ϕ_p, θ_R, ϕ_R] =
a_λ_R b_λ_p
e^i(μ-λ_R)ϕ_R e^i(λ_R-λ_p)ϕ_p
d^J_Λ_b_μ,λ_R(θ_R) d^J_R_λ_Rλ_p(θ_p) ℛ(M^2(D^0 p)),
where J_Λ_b=1/2 and J_R are the spins of the Λ_b^0 baryon and of the R state,
d^J_λ_1,λ_2(θ) are the reduced Wigner functions, and
a_λ_R and b_λ_p are complex constants (couplings).
The mass-dependent complex lineshape ℛ(M^2) defines the dynamics of the R decay.
The angles defining the helicity amplitude are
the polar (θ_R) and azimuthal (ϕ_R) angles of the intermediate state R in the reference frame
defined above,
and the polar (θ_p) and azimuthal (ϕ_p) angles of
the final-state proton in the frame where the intermediate state R is at rest and
the polar axis points in the direction of R in the Λ_b^0 rest frame.
All of these angles
are functions of the five phase space variables Ω defined previously and thus do not constitute
additional degrees of freedom.
The strong decay R→ conserves parity, which implies that
b_λ_p = (-1)^J_p+J_D-J_R η_R η_D η_p b_-λ_p,
where J_p=1/2, J_D=0 and J_R are the spins of the proton, the D^0 meson and the resonance R, respectively, and
η_p=+1, η_D=-1 and η_R are their parities.
This relation reduces the number of free parameters in the helicity amplitudes: |b_λ_p|
is absorbed by a_λ_R, and each coefficient a_λ_R
enters the amplitude multiplied by a factor η_λ_p=± 1. The convention used is
η_λ_p = {[ 1 for λ_p = +1/2 ,; (-1)^J_p+J_D-J_R η_R η_D η_p for λ_p = -1/2 .; ].
As a result, only two
couplings a_λ_R remain for each intermediate state R, corresponding to its
two allowed helicity configurations. The two couplings are denoted for brevity as a^±.
The amplitude, for fixed μ and λ_p, after summation over the intermediate resonances R_j
and their two possible helicities λ_R_j=± 1/2 is
A_μ, λ_p(Ω) = e^i(μϕ_R-λ_pϕ_p)∑_j η_j, λ_p [ a^+_j d^J_Λ_b_μ,+1/2(θ_R)
d^J_R_j_+1/2, λ_p(θ_p) ℛ_j(M^2(D^0 p)) + .
. a^-_j d^J_Λ_b_μ,-1/2(θ_R)
d^J_R_j_-1/2, λ_p(θ_p) ℛ_j(M^2(D^0 p)) e^i(ϕ_R-ϕ_p)].
To obtain the decay probability density, the amplitudes corresponding to different polarisations of the
initial- and final-state particles have to be summed up incoherently.
The Λ_b^0 baryons produced in pp collisions can only have a
polarisation transverse to the production plane, along the ẑ axis.
The longitudinal component is forbidden due to parity conservation in the strong processes that
dominate Λ_b^0 production. In this case, the probability density function (PDF)
of the kinematic variables that characterise the decay
of a Λ_b^0 with transverse polarisation P_z, after summation over μ and λ_p, is
proportional to
p(Ω, P_z) = ∑_μ, λ_p=± 1/2 (1+2μ P_z)|A_μ, λ_p(Ω)|^2.
Equations (<ref>) and (<ref>) can be combined to yield the
simplified expression:
p(Ω, P_z) =
∑_n=0^2J_ maxp_n(M^2())cos(nθ_p) +
P_z cosθ_R∑_n=0^2J_ maxq_n(M^2())cos(nθ_p),
where J_ max is the highest spin among the intermediate resonances and
p_n and q_n are functions of M^2(D^0 p) only.
As a consequence, p(Ω, P_z) does not depend on the azimuthal angles ϕ_p and ϕ_R. Dependence on
the angle θ_R appears only if the Λ_b^0 is polarised. In the unpolarised case the density depends only on
the internal degrees of freedom M^2(D^0 p) and θ_p (which in turn can be expressed as
a function of the other Dalitz plot variable,
M^2(p π^-)). Moreover, after integration over the angle θ_R, the dependence on polarisation
cancels if the detection efficiency is symmetric in cosθ_R. Since the Λ_b^0 polarisation in pp
collisions is measured to be small (P_z=0.06± 0.07± 0.02 <cit.>) and the efficiency
is highly symmetric in cosθ_R, the effects of polarisation can safely be neglected in the amplitude analysis,
and only the Dalitz plot variables ω = (M^2(D^0 p), M^2(p π^-)) need to be used to
describe the probability density p(ω) of the decay. The density p(ω) is given by Eq. (<ref>)
with P_z=0 such that no dependence on the angles ϑ_p, φ_p or φ_Dπ remains.
Up to this point, the formalism has assumed that resonances are present only in the D^0 p channel.
While in the case of Λ_b^0→ D^0 p π^-
decays the regions of phase space with contributions from D^0 p and p π^- resonances
are generally well separated, there is a small region where they can overlap, and thus interference between resonances in
the two channels has to be taken into account. In the helicity formalism, the proton spin-quantisation
axes are different for the helicity amplitudes corresponding to D^0 p and p π^-
resonances <cit.>: they are parallel to the proton direction
in the D^0 p and p π^- rest frames, and are thus antiparallel to the D^0 and π^- momenta,
respectively.
The rotation angle between the two spin-quantisation axes is given by
cosθ_ rot = (p⃗_D^0·p⃗_π^-)/(|p⃗_D^0| |p⃗_π^-|),
where p⃗_D^0 and p⃗_π^- are the momenta of the D^0 and π^- mesons, respectively, in the proton rest frame.
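A minimal sketch of this rotation, assuming both momenta have already been boosted into the proton rest frame:

import numpy as np

def cos_theta_rot(p_d0, p_pi):
    # cos(theta_rot) = (p_D0 . p_pi) / (|p_D0| |p_pi|); 3-vectors in the
    # proton rest frame.
    return np.dot(p_d0, p_pi) / (np.linalg.norm(p_d0) * np.linalg.norm(p_pi))

def wigner_d_half(theta):
    # Reduced Wigner matrix d^(1/2)(theta); rows and columns are ordered
    # (+1/2, -1/2), so d[0, 1] = d^(1/2)_{+1/2,-1/2} = -sin(theta/2).
    c, s = np.cos(0.5 * theta), np.sin(0.5 * theta)
    return np.array([[c, -s],
                     [s, c]])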
If the proton spin-quantisation axis is chosen with respect to the D^0 p resonances and the
helicity basis is denoted as |λ_p ⟩,
the helicity states |λ'_p ⟩ corresponding to p π^- states are
|λ'_p ⟩ = ∑_λ_p = ± 1/2 d^1/2_λ_p, λ'_p(θ_ rot)|λ_p ⟩
and thus the additional terms in the amplitude (Eq. (<ref>)) related to the p π^- channel
are expressed as
A_μ, λ_p^(pπ)(Ω) = ∑_λ'_p = ± 1/2 d^1/2_λ_p, λ'_p(θ_ rot)
e^i(μϕ'_R-λ'_pϕ'_p)∑_j η_j, λ'_p×
[ a^+_j d^J_Λ_b_μ,+1/2(θ'_R)
d^J_R_j_+1/2, λ'_p(θ'_p) ℛ_j(M^2(p π^-)) + .
. a^-_j d^J_Λ_b_μ,-1/2(θ'_R)
d^J_R_j_-1/2, λ'_p(θ'_p) ℛ_j(M^2(p π^-)) e^i(ϕ'_R-ϕ'_p)],
where the angles θ'_p, ϕ'_p, θ'_R and ϕ'_R are defined in a
similar way to θ_p, ϕ_p, θ_R and ϕ_R,
but with the intermediate state R in the p π^- channel.
§.§ Resonant and nonresonant lineshapes
The part of the amplitude that describes the dynamics of the quasi-two-body decay, ℛ(M^2), is given by one
of the following functions. Resonances are parametrised with relativistic Breit–Wigner lineshapes multiplied
by angular barrier terms and corrected by Blatt–Weisskopf form factors <cit.>:
ℛ_ BW(M^2) = [q(M)/q_0]^L_Λ_b[p(M)/p_0]^L_RF_Λ_b(M,L_Λ_b) F_R(M,L_R)/m_R^2-M^2 - i m_RΓ(M)
,
with mass-dependent width Γ(M) given by
Γ(M) = Γ_0[p(M)/p_0]^2L_R+1m_R/M F_R^2(M,L_R),
where m_R and Γ_0 are the pole parameters of the resonance.
The Blatt–Weisskopf form factors for the resonance, F_R(M,L_R), and for the Λ_b^0, F_Λ_b(M,L_Λ_b),
are parametrised as
F_R,(M,L) = {[ 1 L=0; √(1+z_0^2/1+z^2(M)) L=1; √(9+3z_0^2+z_0^4/9+3z^2(M)+z^4(M)) L=2; √(225+45z_0^2+6z_0^4+z_0^6/225+45z^2(M)+6z^4(M)+z^6(M)) L=3; ].,
where the definitions of the terms z(M) and z_0 depend on whether
the form factor for the resonance R or for the is being considered. For R these terms are given by
z(M)=p(M)d and z_0=p_0d, where p(M) is the centre-of-mass momentum of the
decay products in the two-body decay R→ D^0 p with the mass
of the resonance R equal to M, p_0≡ p(m_R), and
d is a radial parameter taken to be 1.5 GeV^-1.
For the Λ_b^0 the respective functions are z(M)=q(M)d and z_0=q_0d, where q(M) is the centre-of-mass
momentum of the decay products in the two-body decay Λ_b^0→ Rπ^-, q_0=q(m_R), and d=5.0 GeV^-1.
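The lineshape machinery above is straightforward to prototype. The sketch below is our illustration (PDG-like masses in GeV are assumed) of the Blatt-Weisskopf factors and of the relativistic Breit-Wigner with mass-dependent width; it is not the collaboration's fit code.

import numpy as np

M_D0, M_P, M_PI, M_LB = 1.86484, 0.93827, 0.13957, 5.61960  # GeV (assumed)

def breakup(m, m1, m2):
    # Two-body breakup momentum of a state of mass m decaying to m1 and m2.
    s = m * m
    q2 = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4.0 * s)
    return np.sqrt(np.maximum(q2, 0.0))

def bw_poly(z2, L):
    # Polynomials entering the Blatt-Weisskopf factors for L = 0..3.
    return {0: 1.0 + 0.0 * z2,
            1: 1.0 + z2,
            2: 9.0 + 3.0 * z2 + z2 ** 2,
            3: 225.0 + 45.0 * z2 + 6.0 * z2 ** 2 + z2 ** 3}[L]

def form_factor(mom, mom0, L, d):
    # F(M, L) = sqrt(poly(z0^2) / poly(z^2)) with z = p(M) d and z0 = p0 d.
    return np.sqrt(bw_poly((mom0 * d) ** 2, L) / bw_poly((mom * d) ** 2, L))

def rel_breit_wigner(m, m_r, gamma0, L_R, L_Lb, d_R=1.5, d_Lb=5.0):
    # R(M^2) for Lb -> R pi-, R -> D0 p; arbitrary overall normalisation.
    p, p0 = breakup(m, M_D0, M_P), breakup(m_r, M_D0, M_P)    # R -> D0 p
    q, q0 = breakup(M_LB, m, M_PI), breakup(M_LB, m_r, M_PI)  # Lb -> R pi-
    f_r = form_factor(p, p0, L_R, d_R)
    f_lb = form_factor(q, q0, L_Lb, d_Lb)
    gamma = gamma0 * (p / p0) ** (2 * L_R + 1) * (m_r / m) * f_r ** 2
    num = (q / q0) ** L_Lb * (p / p0) ** L_R * f_lb * f_r
    return num / (m_r ** 2 - m ** 2 - 1j * m_r * gamma)

# e.g. a 5/2+ state near 2.88 GeV (L_R = 3, L_Lb = 2) with a ~6 MeV width:
m_grid = np.linspace(2.82, 2.95, 6)
print(np.abs(rel_breit_wigner(m_grid, 2.8816, 0.006, L_R=3, L_Lb=2)) ** 2)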
The analysis is very weakly sensitive to the values of d, and these are
varied in a wide range for assessing the associated systematic uncertainty (Sec. <ref>).
The mass-dependent width and form factors depend on the orbital angular momenta of the two-body decays.
For the weak decay of the Λ_b^0, the minimum possible angular momentum L_Λ_b=J-1/2
(where J is the spin of the resonance R) is taken, while for the strong decay
of the intermediate resonance, the angular momentum L_R is fully determined
by the parity of the resonance, P=(-1)^L_R+1,
and conservation of angular momentum, which requires L_R=J± 1/2.
Two parametrisations are used for nonresonant amplitudes: exponential and polynomial functions.
The exponential nonresonant lineshape <cit.> used is
ℛ_ NRexp(M^2) = [q(M)/q_0]^L_[p(M)/p_0]^L_Re^-α M^2,
where α is a shape parameter.
The polynomial nonresonant lineshape <cit.> used is
ℛ_ NRpoly(M^2) = [q(M)/q_0]^L_[p(M)/p_0]^L_R(a_2 Δ M^2 + a_1 Δ M + a_0),
where Δ M=M-M_0, and M_0 is a constant that is chosen to minimise the correlations between the
coefficients a_i when they are treated as free parameters. In the case of the D^0 p
amplitude fit, M_0 is chosen to be near the middle of the fit range, M_0≡ 2.88 GeV.
In both the exponential and the
polynomial parametrisations, M_0 also serves as the resonance mass parameter in the definition of p_0 and q_0
in the angular barrier terms. Note that in Ref. <cit.> the polynomial
form was introduced to describe the slow variations of a nonresonant amplitude across the large phase space of
charmless decays, and thus the parameters a_i were defined as complex constants to allow slow phase motion over the
wide range of invariant masses. In the present analysis, the phase space is much more constrained and
no significant phase rotation is expected for the nonresonant amplitudes.
The coefficients a_i thus are taken to be real.
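Under the same conventions, the two nonresonant parametrisations can be sketched as follows (reusing breakup() and the mass constants from the previous snippet, with M_0 = 2.88 GeV):

import numpy as np

def barrier(m, m0, L_R, L_Lb):
    # Angular barrier terms [q/q0]^L_Lb [p/p0]^L_R with p0, q0 evaluated at M0.
    p, p0 = breakup(m, M_D0, M_P), breakup(m0, M_D0, M_P)
    q, q0 = breakup(M_LB, m, M_PI), breakup(M_LB, m0, M_PI)
    return (q / q0) ** L_Lb * (p / p0) ** L_R

def nr_exponential(m, alpha, m0=2.88, L_R=0, L_Lb=0):
    return barrier(m, m0, L_R, L_Lb) * np.exp(-alpha * m * m)

def nr_polynomial(m, a0, a1, a2, m0=2.88, L_R=0, L_Lb=0):
    # Real coefficients, as in the text; dm is measured from M0.
    dm = m - m0
    return barrier(m, m0, L_R, L_Lb) * (a2 * dm * dm + a1 * dm + a0)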
To study the resonant nature of the states, model-independent parametrisations of the lineshape are used.
One approach used here consists of interpolation with cubic splines, done independently for
the real and imaginary parts of the amplitude (referred to as the “complex spline” lineshape) <cit.>.
The free parameters of such a fit are the real Re(ℛ_i) and
imaginary Im(ℛ_i) parts of the amplitude at the spline knot positions.
Alternatively, to assess the significance of the complex phase rotation in a model-independent
way, a spline-interpolated shape is used in which the imaginary parts of the amplitude
at all knots are fixed to zero (“real spline”).
§.§ Fitting procedure
An unbinned maximum likelihood fit is performed in the two-dimensional phase space
ω=(M^2(D^0 p), M^2(p π^-)).
Defining ℒ as the likelihood function, the fit minimises
-2lnℒ=-2∑_i=1^Nln p_ tot(ω_i),
where the summation is performed over all candidates in the data sample and
p_ tot is the normalised PDF. It is given by
p_ tot(ω) = p(ω)ϵ(ω)n_ sig/𝒩 + p_ bck(ω)n_ bck/𝒩_ bck,
where p(ω) is the signal PDF, p_ bck(ω) is the background PDF,
ϵ(ω) is the efficiency, and 𝒩 and 𝒩_ bck are the signal and background
normalisations:
𝒩=∫_𝒟p(ω)ϵ(ω) dω,
and
𝒩_ bck=∫_𝒟p_ bck(ω) dω,
where the integrals are taken over the part of the phase space 𝒟
used in the fit (Section <ref>),
and n_ sig and n_ bck are the numbers of signal and background events in the signal region, respectively,
evaluated from a fit to the M(D^0 p π^-) invariant mass distribution.
The normalisation integrals are calculated numerically using a fine grid with 400× 400 cells
in the baseline fits; the numerical uncertainty is negligible compared with the other uncertainties in the analysis.
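A toy version of this normalisation integral, with placeholder functions for the signal density, the efficiency map and the fit-region mask (all assumptions of this sketch):

import numpy as np

def normalisation(p_sig, eff, in_region, x_lim, y_lim, n=400):
    # Riemann sum of p(omega) * eps(omega) over the region D on an n x n
    # grid in (M^2(D0 p), M^2(p pi-)), mirroring the fine grid of the text.
    x = np.linspace(x_lim[0], x_lim[1], n)
    y = np.linspace(y_lim[0], y_lim[1], n)
    X, Y = np.meshgrid(x, y, indexing="ij")
    mask = in_region(X, Y)                 # Dalitz boundary and region cuts
    cell = (x[1] - x[0]) * (y[1] - y[0])
    return float(np.sum(p_sig(X, Y)[mask] * eff(X, Y)[mask]) * cell)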
§.§ Fit parameters and fit fractions
The free parameters in the fit are the couplings a^± for each of the amplitude components and certain parameters
of the lineshapes (such as the masses and/or widths of the resonant states, or shape parameters of the nonresonant lineshapes).
Since the overall normalisation of the density is arbitrary, one of the couplings can be set to unity. In this analysis, the
convention a^+≡ 1 for the Λ_c(2880)^+ state is used.
Additionally, the amplitudes corresponding to different helicity states of the initial- and final-state particles
are added incoherently, so that the relative phase between a^+ and a^- for one of the contributions is arbitrary.
The convention Im(a^-)≡ 0 for the Λ_c(2880)^+ is used.
The definitions of the polynomial and spline-interpolated shapes already contain terms that characterise the
relative magnitudes of the corresponding amplitudes. The couplings for them are defined in such a way as to remove the
additional degree of freedom from the fit. For the polynomial and real spline lineshapes, the following couplings are used:
a^+ = r e^iϕ_+, a^-=(1-r)e^iϕ_-,
where r, ϕ_+ and ϕ_- are free parameters. For the complex spline lineshape,
a similar parametrisation is used with ϕ_+ fixed to zero, since the complex phase is
already included in the spline definition.
The observable decay density for an unpolarised particle in the initial state does not allow each polarisation
amplitude to be obtained independently. As a result, the couplings a^± in the fit can be strongly correlated.
However, the size of each contribution can be characterised by its spin-averaged fit fraction
ℱ_i=∑_μ, λ_p=± 1/2 ∫_𝒟 |A^(i)_μ,λ_p(ω)|^2 dω/∑_μ, λ_p=± 1/2 ∫_𝒟 |∑_iA^(i)_μ,λ_p(ω)|^2 dω.
If all the components correspond to partial waves with different spin-parities, the sum of the spin-averaged fit
fractions will be 100%;
otherwise it can differ from 100% due to interference effects. The statistical uncertainties on the fit fractions are obtained
from ensembles of pseudoexperiments.
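Schematically, the spin-averaged fit fraction can be evaluated on the same grid; amps below is a placeholder list of functions returning the complex amplitudes A^(i) for given spin projections.

import numpy as np

def fit_fraction(i, amps, X, Y, mask):
    # Incoherent sum over the initial (mu) and final (lambda_p) projections;
    # the grid cell area cancels between numerator and denominator.
    num = den = 0.0
    for mu in (-0.5, +0.5):
        for lp in (-0.5, +0.5):
            a_i = amps[i](X, Y, mu, lp)
            a_tot = sum(a(X, Y, mu, lp) for a in amps)
            num += float(np.sum(np.abs(a_i[mask]) ** 2))
            den += float(np.sum(np.abs(a_tot[mask]) ** 2))
    return num / den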
§.§ Evaluation of fit quality
To assess the goodness of each fit, a χ^2 value is calculated by summing over the bins of the
two-dimensional Dalitz plot.
Since the amplitude is highly non-uniform and a meaningful χ^2 test requires a certain minimum
number of entries in each bin,
an adaptive binning method is used to ensure that each bin
contains at least 20 entries in the data.
Since the fit itself is unbinned, some information is lost by the binning.
The number of degrees of freedom for the χ^2 test in such a case is not well defined.
The effective number of degrees of freedom (ndf_ eff)
should be in the range N_ bins-N_ par-1≤ ndf_ eff≤ N_ bins-1,
where N_ bins is the number of bins and N_ par is the number of free parameters in the fit.
For each fit, ndf_ eff is obtained from ensembles of
pseudoexperiments by requiring that the probability value for the χ^2 distribution with ndf_ eff degrees of freedom,
P(χ^2, ndf_ eff), is distributed uniformly.
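One simple realisation of such an adaptive binning (a sketch; the exact scheme used in the analysis is not spelled out here) splits the sample recursively at the median along alternating axes until a further split would leave fewer than the minimum number of entries in a bin.

import numpy as np

def adaptive_bins(points, min_entries=20, axis=0):
    # points: (N, 2) array of Dalitz coordinates. Returns a list of leaf
    # bins as ((lower corner, upper corner), number of entries).
    if len(points) < 2 * min_entries:
        return [((points.min(axis=0), points.max(axis=0)), len(points))]
    med = np.median(points[:, axis])
    left = points[points[:, axis] <= med]
    right = points[points[:, axis] > med]
    if len(left) < min_entries or len(right) < min_entries:
        return [((points.min(axis=0), points.max(axis=0)), len(points))]
    nxt = 1 - axis
    return (adaptive_bins(left, min_entries, nxt)
            + adaptive_bins(right, min_entries, nxt))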
Note that when two fits with different models have similar binned χ^2 values, it does not necessarily follow that both models describe the data
equally well. Since the bins in regions with low population density have large area, the binning can obscure features that could discriminate between
the models. This information is preserved in the unbinned likelihood.
Thus, discrimination between fit models is based on the difference Δlnℒ, the statistical significance of which is determined using
ensembles of pseudoexperiments. The binned χ^2 serves as a measure of the fit quality for individual models and is not used to discriminate
between them.
§ SIGNAL SELECTION
The analysis uses the decay Λ_b^0→ D^0 p π^-, where D^0 mesons are reconstructed in the final state K^-π^+.
The selection of candidates is performed in three stages: a preliminary selection, a kinematic fit,
and a final selection.
The preliminary selection uses loose criteria on the kinematic and topological properties of the
Λ_b^0 candidate. All tracks forming a Λ_b^0 candidate, as well as the Λ_b^0 and D^0 vertices,
are required to be of good quality and to be separated from every PV in the event.
The separation from a PV is characterised by the quantity χ^2_ IP, defined as
the increase in the vertex-fit χ^2 when the track (or the combination of tracks corresponding
to a short-lived particle) is included in the vertex fit.
The tracks forming a D^0 candidate are required to be positively identified as a pion
and a kaon, and the D^0 and Λ_b^0 decay vertices are required to be downstream
of their production vertices. All of the tracks are required to have no
associated hits in the muon detector.
For candidates passing this initial selection, a kinematic fit is performed <cit.>.
Constraints are imposed that the D^0 and Λ_b^0 decay products originate from the corresponding vertices,
that the Λ_b^0 candidate originates from its associated PV (the one with the smallest value of χ^2_ IP for the Λ_b^0),
and that the mass of the D^0 candidate be equal to its known
value <cit.>. The kinematic fit is required to converge with a good χ^2, and
the mass of the Λ_b^0 candidate after the fit is required to be in the range 5400–5900 MeV.
To suppress background from charmless Λ_b^0→ pK^-π^+π^- decays,
the decay time significance of the D^0 candidate obtained after the fit is required to be
greater than one standard deviation. To improve the resolution of the squared invariant masses M^2(D^0 p)
and M^2(p π^-) entering the amplitude fit, the additional constraint that the
invariant mass of the D^0 p π^- combination be equal to the known Λ_b^0 mass <cit.>
is applied when calculating these variables.
After the initial selection, the background in the region of the signal is dominated
by random combinations of tracks. The final selection is based on a boosted decision tree
(BDT) algorithm <cit.> designed to separate signal from this background.
The selection is trained using simulated events
generated uniformly across the phase space as the signal sample, and the sample of opposite-flavour
D̅^0 p π^-, D̅^0→ K^+π^- combinations from data as background. In total, 12 discriminating variables
are used in the BDT selection: the χ^2 of the kinematic fit, the angle between the Λ_b^0
momentum and the direction of flight of the Λ_b^0 candidate, the χ^2 of the Λ_b^0 and D^0 vertex fits,
the lifetime significance of the D^0 candidate with respect to the Λ_b^0 vertex,
the χ^2_ IP of the final-state tracks and of the Λ_b^0 candidate,
and the particle identification (PID) information
of the proton and pion tracks from the Λ_b^0 vertex.
Due to differences between simulation and
data, corrections are applied to all the variables from the simulated sample used in the BDT training, except for the PID variables.
These corrections are typically about 10% and are obtained from a large and clean sample of decays.
The simulated proton and pion PID variables are replaced with values generated using
distributions obtained from calibration samples of D^*+→ D^0π^+ and Λ→ pπ^- decays in data.
For these calibration samples, the four-dimensional distributions of PID variable,
p, η and the track multiplicity of the event are described using a nonparametric kernel-based procedure <cit.>.
The resulting distributions are used to generate PID variables for each pion or proton track given its p,
η and the track multiplicity in the simulated event.
The BDT requirement is chosen such that the fraction of background in the signal region used for the
subsequent amplitude fit, |M(D^0 p π^-)-m(Λ_b^0)|<30 MeV, does not exceed 15%. This corresponds to a signal
efficiency of 66% and a background rejection of 96% with respect to the preliminary selection.
After all selection requirements are applied, fewer than 1% of
selected events contain a second candidate. All multiple candidates are retained;
the associated systematic uncertainty is negligible.
§ FIT REGIONS AND EVENT YIELDS
The Dalitz plot of selected events, without background subtraction or efficiency correction,
in the signal invariant mass range defined in Sec. <ref> is shown in Fig. <ref>(a).
The part of the phase space near the D^0 p threshold that contains contributions from Λ_c^+ resonances is shown in Fig. <ref>(b). The latter uses
M(D^0 p) as the horizontal axis instead of M^2(D^0 p).
In Fig. <ref>, the four amplitude fit regions of the phase space are indicated.
These are denoted regions 1–4. Region 1, M(D^0 p)>3 GeV and M(p π^-)>2 GeV,
is the part of the phase space that does not include resonant contributions and is used only to constrain
the nonresonant amplitude entering the other regions. Region 2,
2.86<M(D^0 p)<2.90 GeV, contains the well-known Λ_c(2880)^+ state and is used to measure its
parameters and to constrain the slowly varying amplitude underneath it in a model-independent way.
The fit in region 3 near the D^0 p threshold, M(D^0 p)<2.90 GeV, provides additional
information about the slowly varying amplitude.
Finally, the fit in region 4, M(D^0 p)<3.00 GeV,
which includes the Λ_c(2940)^+ state, gives information about the properties of this resonance and the relative
magnitudes of the resonant and nonresonant contributions.
Note that region 2 is fully contained in region 3, while region 3 is fully contained in region 4.
The signal and background yields in each region are obtained from extended unbinned maximum likelihood fits
of the D^0 p π^- invariant mass distribution in the range 5400–5900 MeV. The fit model
includes the signal component, a contribution from random combinations of tracks (combinatorial
background) and the background from partially reconstructed Λ_b^0→ D^*0 p π^- decays
(where the D^*0 decays into D^0π^0 or D^0γ and the π^0 or γ is not included in the
reconstruction).
The signal component is modelled as the sum of two Crystal Ball functions <cit.> with the same most
probable value and power-law tails on both sides.
All parameters of the model are fixed from simulation except for the peak position
and a common scale factor for the core widths, which are floated in the fit to data. The combinatorial background
is parametrised by an exponential function, and the partially reconstructed
background is described by a bifurcated Gaussian distribution. The shape parameters of the background
distributions are free parameters of the fit.
The results of the fit for Λ_b^0 candidates in the entire phase space are shown in Fig. <ref>.
The background and signal yields in the entire
phase space, as well as in the regions used in the amplitude fit, are given in
Table <ref>.
§ EFFICIENCY VARIATION OVER THE DALITZ PLOT
The same sample of simulated events as in the selection training (Sec. <ref>)
is used to determine the variation of the efficiency across the Dalitz plot.
The sample is generated uniformly in the decay phase space and consists of approximately
8× 10^4 events satisfying the selection requirements.
Each simulated event is assigned a weight,
derived from control samples of data, to correct
for known differences in track reconstruction and hardware trigger efficiency between data and simulation.
Since the PID variables in the sample are replaced by those generated from calibration data,
the efficiency of PID requirements is included in the efficiency calculation and
does not need to be treated separately.
The Dalitz plot efficiency profile is calculated separately for two disjoint sets of candidates, defined
according to whether the hardware trigger was activated by one of the decay products or by other particles in the event.
For each of those samples, a kernel-based density estimation procedure with a correction for boundary effects <cit.>
is used to obtain a description of the relative
efficiency as a function of the Dalitz plot variables. The overall efficiency is then given by the average of the two
profiles, weighted according to the ratio of yields of the two classes of events in data.
The resulting profile is shown in Fig. <ref>(a).
The normalisation of the efficiency profile used in the amplitude fit likelihood
(Eqs. (<ref>) and (<ref>)) does not affect the result.
The efficiency profile shown in Fig. <ref>(a)
is normalised such that the average efficiency over the phase space is equal to unity.
§ BACKGROUND DISTRIBUTION
Background in the vicinity of the Λ_b^0 invariant mass peak is dominated by random
combinations of D^0 mesons, proton, and pion tracks.
To determine the background shape as a function of the Dalitz plot variables M^2(D^0 p) and M^2(p π^-),
the Λ_b^0 mass sidebands are used: 5500<M(D^0 p π^-)<5560 MeV and 5680<M(D^0 p π^-)<5900 MeV.
The same procedure is applied to the opposite-flavour sample
to verify that the background shape in the mass sidebands is representative of that in the signal window.
Good agreement is found.
The background distribution as a function of the Dalitz plot variables is estimated using
a Gaussian mixture model, describing the background as a sum of several two-dimensional
Gaussian distributions, whose parameters are allowed to vary in the fit.
For the limited-size sample of background events this approach appears more suitable
than a kernel-based technique. The parametrisation is
obtained using an iterative procedure where Gaussian components are added to the model
one by one; at each iteration the parameters of all components are adjusted using an unbinned
maximum likelihood fit.
The result of the procedure is shown in Fig. <ref>(b).
The baseline parametrisation is a sum of 25 two-dimensional Gaussian components.
The normalisation of the background density used in the fit is arbitrary;
for the purposes of illustration in Fig. <ref>(b) it is set such
that the average density across the phase space is unity.
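For illustration, a similar mixture description can be obtained with scikit-learn (an assumption of this sketch: the paper uses its own iterative fitter, whereas here the number of components is chosen with the Bayesian information criterion):

import numpy as np
from sklearn.mixture import GaussianMixture

def fit_background(sideband_points, max_components=25):
    # sideband_points: (N, 2) array of (M^2(D0 p), M^2(p pi-)) values.
    best, best_bic = None, np.inf
    for n in range(1, max_components + 1):     # grow the model one by one
        gm = GaussianMixture(n_components=n, covariance_type="full",
                             n_init=3, random_state=0).fit(sideband_points)
        bic = gm.bic(sideband_points)
        if bic < best_bic:
            best, best_bic = gm, bic
    return best  # best.score_samples(x) gives the log of the fitted density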
§ EFFECT OF MOMENTUM RESOLUTION
Finite momentum resolution smears the structures in the Dalitz plot. The use of the
kinematic fit with D^0 and Λ_b^0 mass constraints significantly improves the resolution near the edges of the phase space, but less so in the central region.
The only structure in the amplitude that is expected to be affected by the finite resolution is the
Λ_c(2880)^+ resonance, which has a natural width of approximately 6 MeV. Therefore, only the M(D^0 p) resolution
is considered, and it is obtained from a sample of simulated events
by comparing the generated and reconstructed values of M(D^0 p).
The width of the resolution function at M(D^0 p)=2.88 GeV is 1.1 MeV, significantly
smaller than the natural width of the Λ_c(2880)^+.
However, simulation shows that neglecting the resolution would lead to a bias on the width of about 10%.
Therefore, the M(D^0 p) resolution is taken into account in the fit by convolving the signal PDF
with a Gaussian resolution function, where the width of the Gaussian is a function of M(D^0 p).
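A minimal sketch of such a mass-dependent Gaussian smearing of a tabulated PDF; sigma_of_m is a placeholder for the simulation-derived resolution (about 1.1 MeV at 2.88 GeV):

import numpy as np

def convolve_resolution(pdf_vals, m_grid, sigma_of_m, n_sigma=5.0):
    # For each grid point, average the PDF with a Gaussian kernel whose
    # width follows the local resolution; the kernel is truncated at
    # n_sigma and renormalised on the grid.
    out = np.zeros_like(pdf_vals)
    for i, m in enumerate(m_grid):
        s = sigma_of_m(m)
        w = np.exp(-0.5 * ((m_grid - m) / s) ** 2)
        w[np.abs(m_grid - m) > n_sigma * s] = 0.0
        out[i] = np.sum(pdf_vals * w) / np.sum(w)
    return out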
§ AMPLITUDE ANALYSIS
The amplitude fit is performed in the four phase space regions defined in Fig. <ref>.
This approach has been chosen instead of performing the fit to the entire Dalitz plot since
the amplitude contains many unexplored contributions. The full fit would include too many
degrees of freedom, and a very large range of systematic variations would need to be considered.
Instead, the fit is first performed around the well-known
Λ_c(2880)^+ resonance, and then the fitting region is gradually extended
to include a larger portion of the phase space.
§.§ Fit in the nonresonant region
The fit in region 1, where no significant resonant contributions are expected,
provides constraints on the high-mass
behaviour of the amplitude, and thus on the p π^-
partial waves entering the D^0 p fit regions.
The fit model includes four exponential nonresonant components (Eq. (<ref>)) in each of the
D^0 p and p π^- spectra, corresponding to the four combinations of spin (1/2 and 3/2) and parity
(negative and positive).
Since there is no reference amplitude with known parity in this region,
there is an ambiguity: all parities can be reversed simultaneously without changing the amplitude.
The shape parameters α of all eight nonresonant components are varied in the fit.
The projections of the fitted data are shown in Fig. <ref>. The fitted
amplitude is extrapolated into the regions 2–4 of the phase space
using the fitted helicity distributions.
The estimated contributions of the nonresonant components in the
mass regions are given in Table <ref> and compared with the total
numbers of signal events in those regions. They amount to less than 1% of the signal yield for
regions 2 and 3, and to around 1.5% for region 4. Therefore, the baseline fit models for regions
2 and 3 do not include crossfeed (although it is taken into account as a part of
the uncertainty due to modelling of nonresonant amplitudes),
while for region 4 the nonresonant component is included in the model. Since only a
small part of the helicity distribution enters the fit region, the spin and parity assignment
of the amplitude should have a very small effect. Thus only one partial
wave (J^P=1/2^-) of the nonresonant component is included for the amplitude fit.
§.§ Fit in the region of
Next, an amplitude fit is performed in region 2, in the vicinity of the well-established
resonance.
The quantum numbers of this state have been measured by the Belle collaboration to be J^P=5/2^+ <cit.>.
The fit probes the structure of the wide amplitude component underneath
the peak using the shape of the latter as a reference.
Other spin assignments from 1/2 to 7/2 are also tried (spin 7/2 was not tested in the Belle
analysis <cit.>).
Since the amplitude is not sensitive
to the absolute parities of the components, the parity of the is always fixed to be
positive; the parities of the other amplitude components are determined relative to its parity.
As for region 1, the nonresonant amplitude
model consists of four contributions with spins 1/2 and 3/2 and both parities. The nonresonant
components are parametrised either with the exponential model of Eq. (<ref>) (“Exponential”),
or the amplitude with both real and imaginary parts varying linearly in M^2() (“Linear”,
which is a special case of the spline-interpolated shape with only two knots). The mass and width
of the state are free parameters.
The model in which the has spin 5/2 is preferred for both
nonresonant models, while the difference between exponential and linear models is negligible.
The model with spin 5/2 and linear nonresonant amplitude parametrisation is taken as the baseline.
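Schematically, the two nonresonant parametrisations compared here can be written as below. This is a sketch of the functional forms only (any phase-space or angular factors entering the full amplitude are omitted), with c, c0 and c1 denoting complex couplings that float in the fit.

import numpy as np

def exp_nonres(m2, c, alpha):
    # "Exponential" model: complex coupling times a real exponential in M^2
    return c * np.exp(-alpha * m2)

def linear_nonres(m2, c0, c1, m2_lo, m2_hi):
    # "Linear" model: real and imaginary parts vary linearly in M^2 between
    # the endpoint couplings c0 and c1 (a two-knot spline)
    t = (m2 - m2_lo) / (m2_hi - m2_lo)
    return (1.0 - t) * c0 + t * c1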
Table <ref> gives the differences in lnℒ compared to the baseline, along with
the χ^2 values and the associated probabilities.
The fit quality is assessed using the adaptive binning approach, with at least 20 data entries in each bin
and with the effective number of degrees of freedom ndf_ eff obtained from pseudoexperiments.
The results of the fit with the baseline model are shown in Fig. <ref>.
Argand diagrams illustrating the amplitude and phase motion of the fit components are shown in Fig. <ref>.
The plots contain a hint of phase rotation for the J^P=3/2^+ partial wave in a counter-clockwise direction,
consistent with the resonance-like phase motion observed in the near-threshold fit (Sec. <ref>).
The statistical significance of this effect is studied with
a series of pseudoexperiments in which the samples are generated according to the fit where the complex
phase of all the nonresonant components is constant. Each sample is fitted with two models: one with the complex
phase constrained to be the same at both endpoints, and one with it floating freely. The distribution of the
logarithmic likelihood difference Δlnℒ between the two fits is studied and compared to the value obtained in data.
Since around 55% of the samples have Δlnℒ greater than the
value observed in data (1.4), the effect is not statistically significant with the data in region 2 alone.
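The conversion of an ensemble of pseudoexperiments into a significance can be sketched as follows; the Gaussian extrapolation branch mirrors the procedure used for the spin tests later in this section, and the function is a generic illustration rather than the analysis code.

import numpy as np
from scipy.stats import norm

def toy_significance(dll_data, dll_toys):
    # p-value: fraction of null-hypothesis pseudoexperiments whose
    # log-likelihood difference is at least as large as the one in data
    dll_toys = np.asarray(dll_toys, dtype=float)
    p = float((dll_toys >= dll_data).mean())
    if p == 0.0:
        # observed value beyond all toys: extrapolate with a Gaussian
        # fitted to the toy distribution
        p = float(norm.sf((dll_data - dll_toys.mean()) / dll_toys.std()))
    return p, norm.isf(p)   # p-value and the equivalent number of sigma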
Ensembles of pseudoexperiments, where the baseline model is used both to generate and to fit
samples of the same size as in the data, are used to validate the statistical uncertainties
obtained from the fit, check for systematic biases due to the fitting procedure, evaluate the
statistical uncertainties on the fit fractions, and obtain the effective number of degrees of freedom
for the fit quality evaluation based on a binned χ^2 measure.
The unbinned maximum likelihood fit is unbiased only in the limit of a large data sample; in general a fit to a
finite sample can exhibit a bias that is usually significantly smaller than the statistical uncertainty.
Pseudoexperiments are used to evaluate and correct for such biases on the mass and the width of the state,
as well as on the fit fractions of the amplitude components obtained from the fit. The corrected values are
m() =,
Γ() =,
ℱ() =()%,
ℱ(1/2^+) =()%,
ℱ(1/2^-) =()%,
ℱ(3/2^+) =()%,
ℱ(3/2^-) =()%.
The uncertainties are statistical only.
Correlations between the fit parameters do not exceed 20%.
Since all the amplitude components have different quantum numbers, the
interference terms cancel out after integrating over the phase space, and the
sum of uncorrected fit fractions is exactly 100%.
After the bias correction is applied individually for each fit fraction,
statistical fluctuations in the corrections lead to a small, statistically not significant, difference from 100%
(in this case, the sum of fit fractions increases to 102.6%).
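Concretely, the fit fraction of a component is the phase-space integral of its squared amplitude divided by that of the coherent sum. With amplitudes evaluated on a (possibly weighted) sample of phase-space points, a minimal sketch is:

import numpy as np

def fit_fractions(amplitudes, weights=None):
    # amplitudes: list of complex arrays, one per component, evaluated on a
    # sample of phase-space points; weights: optional per-point MC weights
    if weights is None:
        weights = np.ones(len(amplitudes[0]))
    total = np.sum(amplitudes, axis=0)                 # coherent sum
    denom = np.sum(weights * np.abs(total) ** 2)
    return [np.sum(weights * np.abs(a) ** 2) / denom for a in amplitudes]

When the components carry different quantum numbers, the interference terms integrate to zero and the fractions returned above sum to unity, as stated in the text.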
A number of experimental systematic uncertainties on the mass and width and on the
difference Δlnℒ between the baseline (5/2) and the next-best (7/2) spin assignments
are considered and are given in Table <ref>. These arise from:
* Uncertainty on the background fraction in the signal region (Sec. <ref>).
The statistical uncertainty is obtained from the fit to the M() distribution,
and a systematic uncertainty arising from the modelling of the signal and
background M() distributions is estimated
by performing fits with modified M() models.
The sum in quadrature of these contributions is taken as the systematic uncertainty.
* Uncertainty on the efficiency profile (Sec. <ref>).
The statistical uncertainty is evaluated via a
bootstrapping procedure <cit.>.
The uncertainty related to the kernel density estimation procedure is obtained by varying the kernel size.
The uncertainty due to differences between data and simulation in
the input variables of the BDT is estimated by varying the scaling
factors for these variables.
In addition, the replacement of simulated proton and pion PID variables with
values drawn from control samples in the data with matching kinematics,
described in Section 4, introduces further systematic uncertainties.
The uncertainty associated with the limited size of these control samples
is evaluated again with a bootstrapping procedure, and the uncertainty
associated with the kinematic matching process is assessed by changing the kernel
size in the nonparametric algorithm used to estimate the PID response as a
function of the kinematic properties of the track.
* Uncertainty on the background shape (Sec. <ref>). This is assessed by
varying the density estimation procedure (changing the number of Gaussian cores
in the mixture model, or
using kernel density estimation instead of a Gaussian mixture model), and by
using only a narrower upper sideband of the M() distribution, 5680<M()<5780.
The statistical uncertainty due to the finite size of the background sample
is estimated by bootstrapping.
* Uncertainty on the momentum resolution (Sec. <ref>). This is
estimated by varying the M^2() resolution by 15%.
It mainly affects the width of the resonance.
* Uncertainties on the mass scale. Due to the constraints on the hadron masses, the momentum scale
uncertainty of the detector has a negligible effect on the fit. However, the uncertainties
on the assigned mass
values themselves do contribute. For M() amplitudes the dominant
contribution comes from the mass uncertainty.
* Uncertainty on the fit procedure itself. This is assessed by fitting ensembles of pseudoexperiments,
where the baseline amplitude model is used for both generation and fitting, and the number of
events generated for each pseudoexperiment is equal to the number of events in the data sample.
The mean value for each fitted parameter is used as a correction for fitting bias, while the
statistical uncertainty on the mean is taken as the uncertainty due to the fit procedure.
The uncertainties on the mass and the fit procedure do not affect the significance of the quantum
number assignment and are thus not included in the Δlnℒ uncertainty.
Also reported in Table <ref> is the uncertainty related to the amplitude model.
It consists of two contributions, corresponding to the
uncertainties in the modelling of the resonant shape and the nonresonant amplitudes.
The model uncertainties are asymmetric, and the positive and negative uncertainties for the two components
are combined in quadrature separately to obtain the total model uncertainty.
The uncertainty due to the Breit–Wigner parametrisation of the amplitude is estimated by
varying the radial parameters r_ and r_ between 0 and 10^-1 and 0 and 3^-1,
respectively, and by removing the angular barrier factor from the Breit–Wigner amplitude.
The maximum deviation is taken as the uncertainty.
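For reference, a common form of the Breit–Wigner amplitude with Blatt–Weisskopf barrier factors is sketched below. Normalisation conventions vary between analyses, so this illustrates the ingredients being varied here (the radial parameter r and the angular barrier factor) rather than the exact definition used in the fit.

import numpy as np

def blatt_weisskopf(q, r, L):
    # Blatt-Weisskopf barrier factor for orbital angular momentum L <= 2
    z = (q * r) ** 2
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt(2 * z / (1 + z))
    if L == 2:
        return np.sqrt(13 * z ** 2 / (9 + 3 * z + z ** 2))
    raise NotImplementedError("higher L not needed for this sketch")

def rel_breit_wigner(m, m0, gamma0, q, q0, r, L):
    # q (q0) is the daughters' breakup momentum at mass m (at the pole m0);
    # the running width includes the barrier-factor ratio
    gamma = (gamma0 * (q / q0) ** (2 * L + 1) * (m0 / m)
             * (blatt_weisskopf(q, r, L) / blatt_weisskopf(q0, r, L)) ** 2)
    return blatt_weisskopf(q, r, L) / (m0 ** 2 - m ** 2 - 1j * m0 * gamma)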
The uncertainty due to the modelling of the nonresonant amplitudes is estimated
by taking the difference between the fit results obtained with the default linear nonresonant model
and the alternative exponential model. The possible crossfeed from the channel
is estimated by adding a
J^P=1/2^- component in the channel to the amplitude. This component has a fixed
exponential lineshape with shape parameter α=0.5^-2 (obtained in the fit to region 1 data)
and its complex couplings are free parameters in the fit.
The helicity formalism used to describe the amplitudes is inherently non-relativistic.
To assess the model uncertainty due to this limitation, an alternative description is
obtained with covariant tensors using the qft++ framework <cit.>, but it is much
more expensive from a computational point of view and is therefore not used for the baseline
fits. Differences between the helicity and the covariant formalism are mainly associated with the
broad amplitude components and are therefore treated as a part of the uncertainty due to the nonresonant model.
Although this contribution is included in the nonresonant model uncertainty in Table <ref>,
it is also reported separately.
The significance of the spin assignment J=5/2 with respect to the next most likely hypothesis
J=7/2 for the state is evaluated with a series of pseudoexperiments, where the samples are
generated from the model with J=7/2 and then fitted with both J=5/2 and 7/2 hypotheses.
The difference of the logarithmic likelihoods Δlnℒ is used as the test statistic.
The distribution in Δlnℒ is fitted with a Gaussian function and compared
to the value of Δlnℒ observed in data. The statistical significance is
expressed in terms of a number of standard deviations (σ).
The uncertainty in Δlnℒ due to systematic effects
is small compared to the statistical uncertainty;
combining them in quadrature results in an overall significance of 4.0σ.
The fits with spins 1/2 and 3/2 for the state yield large Δlnℒ
and poor fit quality, as seen from Table <ref>. These spin assignments
are thus excluded.
In conclusion, the mass and width of the resonance are found to be
m() = ,
Γ() = .
These are consistent with the current world averages, and have comparable precision.
The preferred value for the spin of this state is confirmed to be 5/2, with a significance of
4σ over the next most likely hypothesis, 7/2. The spin assignments 1/2 and 3/2 are excluded.
The largest nonresonant contribution underneath the state comes from a partial wave with spin 3/2
and positive parity.
With a larger dataset, it would be possible to constrain the phase motion of the nonresonant amplitude in a model-independent way
using the amplitude as a reference.
§.§ Fit in the near-threshold region
When the M() range is extended down to the threshold (region 3), it becomes evident that a simple model
for the broad amplitude components, such as an exponential lineshape, cannot describe the data (Fig. <ref>).
The hypothesis that an additional resonance is present in the amplitude is tested in a model-dependent way
by introducing a Breit–Wigner resonance in each of the partial waves.
Model-independent tests are also performed via fits in which one or more partial waves
are parametrised with a spline-interpolated shape. The results of these tests
are summarised in Table <ref>. The mass and width of the state are fixed
to their known values <cit.> in these fits.
There are no states with mass around the threshold (2800)
that are currently known to decay to the final state. A broad structure has been seen
previously in the final state that is referred to as
the _(2765)^+ <cit.>.
It could contribute to the amplitude if its width is large.
Since neither the quantum numbers nor the width of this structure have been measured,
fits are carried out in which this structure is included, modelled as a Breit–Wigner amplitude
with spin-parity 1/2^± or 3/2^±,
and with a width that is free to vary; its mass is fixed to 2765.
In addition, four exponential
nonresonant components with J^P=1/2^+, 1/2^-, 3/2^+, and 3/2^- are included.
None of these fits are of acceptable quality, as shown in Table <ref>.
A Flatté parametrisation of the line shape <cit.> with couplings to
and channels is also considered, but does not produce a fit
of acceptable quality either.
Therefore, a resonance with a fixed mass of 2765 is not sufficient to explain the data.
If the mass of the Breit–Wigner resonance is allowed to vary in the fit,
good agreement with data can be obtained for the spin-parity assignment J^P=3/2^+.
Moreover, if the resonance is assumed to have J^P=3/2^+, the exponential
nonresonant component with J^P=3/2^+ can be removed from the
amplitude model without loss of fit quality. This model is taken as the baseline for this fit region.
The mass and the width of the resonance obtained from the fit are around 2856
and 65, respectively, and therefore this structure will be referred to as
hereafter. The results of this fit are shown in Fig. <ref>.
One model-independent test for the presence of structure in the broad component is to describe the
real and imaginary parts with spline-interpolated shapes.
Cubic splines with six knots at masses of 2800, 2820, 2840, 2860, 2880 and 2900 are used.
Of the models where only one partial wave is described by a spline while the others remain exponential,
the best fit is again given by the model where the spline-interpolated amplitude has J^P=3/2^+.
The Argand diagram for the 3/2^+ amplitude in this fit is shown in Fig. <ref>(a).
Each of the points numbered from 0 to 5 corresponds to one spline knot at increasing values of M().
Note that knots 3 and 5 at masses 2860 and 2900correspond to the boundaries
of the region 2 where the nonresonant amplitude is described by a linear function (Sec. <ref>)
and that the amplitudes and phases in those two knots can be compared
directly to Fig. <ref>, since the convention is the
same in both fits. The Argand diagram demonstrates resonance-like phase rotation of the 3/2^+ partial wave
with respect to the other broad components in the amplitude, which are assumed to be
constant in phase. Note that the absolute phase motion cannot be obtained from this fit
since there are no reference amplitudes covering the entire mass range used in the fit.
As seen in Table <ref>, inclusion of a spline-interpolated shape in the 1/2^+ component
instead of 3/2^+ also gives a reasonable fit quality. The Argand diagram for the 1/2^+ wave in this fit
is shown in Fig. <ref>(b). Since the phase rotates clockwise, this solution
cannot be described by a single resonance.
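Schematically, such a spline-interpolated partial wave consists of two cubic splines through the complex couplings fitted at the knots; the knot values below are purely illustrative, chosen to mimic a counter-clockwise Argand trajectory.

import numpy as np
from scipy.interpolate import CubicSpline

knots = np.array([2.80, 2.82, 2.84, 2.86, 2.88, 2.90])        # knot masses (GeV)
coups = np.array([0.10 + 0.05j, 0.30 + 0.20j, 0.50 + 0.55j,
                  0.35 + 0.85j, 0.15 + 0.80j, 0.05 + 0.55j])  # illustrative values

re_spline = CubicSpline(knots, coups.real)
im_spline = CubicSpline(knots, coups.imag)

def spline_amplitude(m):
    # Interpolated complex amplitude; plotting its values in the complex
    # plane for increasing m traces out the Argand diagram
    return re_spline(m) + 1j * im_spline(m)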
A genuine resonance has characteristic phase motion as a function of M().
As a null test, the fits are repeated with a spline function with
no phase motion. This is implemented as a real spline function multiplied by a constant phase. Fits in which
only one partial wave is replaced by a real spline are of poor quality. If both spin-3/2
amplitudes are represented by real splines, the fit quality is good, but the
resulting amplitudes oscillate as functions of M(),
which is not physical. Figure <ref>(a) shows the real spline amplitudes
without the contribution of the phase space term, which exhibit oscillating behaviour,
while Fig. <ref>(b) shows the M()
projection of the decay density for this solution.
As in the case of the amplitude fit in the region, pseudoexperiments are used to validate the
fit procedure, obtain uncertainties on the fit fractions, and determine values of ndf_ eff for the
binned fit quality test. Pseudoexperiments
are also used to obtain the Δlnℒ distributions for fits with various spin-parity hypotheses.
After correcting for fit bias, the mass and width of the broad resonance are found to be
m()= and Γ()=, where the uncertainties are statistical only.
Systematic uncertainties are obtained following the same procedure as for the amplitude fit in
the region (Sec. <ref>) and are summarised in Table <ref>.
An additional contribution to the list of systematic uncertainties is the uncertainty in the knowledge of the
mass and width of the resonance, which are fixed in the fit. It is estimated by varying these
parameters within their uncertainties.
The model uncertainty associated with the parametrisation of the nonresonant components is estimated by performing
fits with an additional exponential 3/2^+ amplitude component and with the 3/2^- component removed, as well as
by adding the amplitude and using the covariant amplitude formalism in the same way as in
Sec. <ref>.
The J^P=3/2^+ hypothesis is preferred for the state, since its fit likelihood,
as measured by Δlnℒ, is substantially better than those of
the other J^P values tested.
The significance of this difference is assessed with pseudoexperiments and corresponds to
8.8σ, 6.3σ, and 6.6σ for the 1/2^+, 1/2^-, and 3/2^- hypotheses,
respectively.
When systematic uncertainties are included, these reduce to 8.4σ, 6.2σ and 6.4σ.
For J^P=3/2^+, the following parameters are obtained for the near-threshold resonant state:
m() = ,
Γ() = .
The largest uncertainties are associated with the modelling of the nonresonant
components of the amplitude.
§.§ Fit including
Finally, the mass region in the amplitude fit is extended up to M()=3.0
to include the state (region 4).
Since the behaviour of the slowly-varying amplitude is consistent with the presence of a resonance
in the J^P=3/2^+ wave and nonresonant amplitudes in the 1/2^+, 1/2^-, and 3/2^- waves,
the same model is used to describe those parts of the amplitude in the extended fit region.
The resonance is modelled by a Breit–Wigner lineshape.
The masses and widths of the and states are floated in the fit, while those
of the resonance are fixed to their nominal values <cit.>.
Several variants of the fit are performed in which the spin of is assigned
to be 1/2, 3/2, 5/2 or 7/2, with both
positive and negative parities considered.
Two different parametrisations of the nonresonant components are considered: the exponential model
(taken as the baseline) and a second-order polynomial (Eq. (<ref>)).
The results of the fits are given in Table <ref>. For both nonresonant parametrisations,
the best fit has a spin-parity assignment of
3/2^-. The results of the fit with
this hypothesis and an exponential model for the nonresonant amplitudes, which is taken as the baseline for fit region 4,
are shown in Fig. <ref>.
Although the 3/2^- hypothesis describes the data significantly better than all others in fits using an exponential
nonresonant model, this is not the case for the more flexible polynomial model: the assignment J^P=5/2^- is only slightly
worse (Δlnℒ=3.6) and a number of other spin-parity assignments are not excluded either.
In the baseline model, the mass of the state is measured to be
m()=, and the width is
Γ()=. The fit fractions
for the resonant components of the amplitude are ℱ()=()%,
ℱ()=()%, and ℱ()=()%.
All these uncertainties are statistical. Pseudoexperiments are used to correct for fit bias,
which is small compared to the statistical uncertainties, and to determine the linear correlation
coefficients for the statistical uncertainties between the measured masses, widths and fit fractions
(Table <ref>).
The systematic and model uncertainties for the parameters given above,
obtained following the procedure described in Sections <ref>
and <ref>, are presented in Table <ref>.
The part of the model uncertainty associated with the nonresonant amplitude is estimated
from fits that use the polynomial nonresonant parametrisation instead of the default
exponential form, by adding a 3/2^+ nonresonant amplitude or removing the 3/2^- or amplitudes, and by using the
covariant formalism instead of the baseline helicity formalism.
The uncertainty due to the unknown quantum numbers of the state is estimated
from the variation among the fits with spin-parity assignments that give reasonable fit quality (P(χ^2, ndf)>5%):
3/2^+, 3/2^-, 5/2^+, 5/2^-.
The systematic uncertainties on Δlnℒ between the various spin-parity hypotheses and the baseline hypothesis, J^P=3/2^-, are shown in Table <ref>
(for the exponential nonresonant model) and Table <ref> (for the polynomial model).
Only those systematic variations from Table <ref> that can affect the
significance of the quantum number assignment are considered.
Since the cases with exponential and polynomial nonresonant amplitudes
are treated separately, the model uncertainty associated with the nonresonant amplitudes
does not include the difference between these two models.
For each J^P hypothesis, the significance with respect to the baseline is obtained from ensembles of
pseudoexperiments and shown in Table <ref>. The column marked “Statistical” includes
only statistical uncertainties on Δlnℒ, while that marked “Total” is the sum in
quadrature of the statistical, systematic, and model uncertainties.
Including the systematic and model uncertainties, the mass and width of the resonance are
m() =
Γ() = .
The largest uncertainties in the measurement of these parameters, apart from those of statistical origin,
are related to the model of the nonresonant amplitude
and the uncertainties for the quantum numbers.
The fit fractions of the resonances in the region of the phase space
used in the fit, M()<3, are
ℱ() =()%,
ℱ() =()%,
ℱ() =()%.
The contributions of individual resonant components,
integrated over the entire phase space of the decay,
can be used to extract the ratios of branching fractions
(→)×(→)/(→)×(→) =,
(→)×(→)/(→)×(→) = ,
where the ratios of the branching fractions are assumed
to be equal to the ratios of the fit fractions.
The constraints on the quantum numbers depend on the description of the
nonresonant amplitudes. If an exponential model is used for the nonresonant components, the single
best spin-parity assignment is J^P=3/2^-, and the 3/2^+, 5/2^+ and 5/2^-
assignments are excluded at the levels of 3.7, 4.4 and 4.5 standard deviations, respectively
(including systematic uncertainties), while spins of 1/2 or 7/2
are excluded by more than 5σ.
If a polynomial nonresonant parametrisation is used,
the solution with 3/2^- is again the most likely one,
though the data are consistent with the 5/2^- hypothesis at 2.2σ.
Several J^P assignments
(5/2^+, 3/2^+, 7/2^-, 1/2^+ and 1/2^-) are disfavoured with respect to the 3/2^-
hypothesis with significances between 3.1 and 4.5σ,
and only the 7/2^+ hypothesis is excluded by more than 5σ.
Since the data are consistent with both the exponential and polynomial nonresonant models,
only weak constraints on the spin and parity are obtained,
with J^P=3/2^- favoured and with positive parity excluded at the 3σ level.
§ CONCLUSION
An amplitude analysis of the decay is performed in the region of the phase space containing resonant
contributions.
This study provides important information about the structure of the amplitude
for future studies of CP violation in decays, as well as about the spectroscopy of excited
states.
The preferred spin of the state is found to be J=5/2, with the J=7/2 hypothesis disfavoured by 4.0
standard deviations.
The solutions with J=1/2 and 3/2 are excluded with a significance of more than 5 standard deviations.
The mass and width of the state are found to be:
m() = ,
Γ() = .
These results are consistent with and have comparable precision to the current world averages (WA), which are
m_ WA()=2881.53± 0.35, and Γ_ WA()=5.8 ± 1.1 <cit.>.
A near-threshold enhancement in the amplitude is studied. The enhancement is consistent with being a
resonant state (referred to here as the ) with mass and width
m() =,
Γ() =
and quantum numbers J^P=3/2^+, with the parity measured relative to that of the state.
The other quantum numbers are excluded with a significance of
more than 6 standard deviations. The phase motion of the 3/2^+ component with respect to the nonresonant
amplitudes is obtained in a model-independent way and is consistent with resonant behaviour.
With a larger dataset, it should be possible to constrain the phase motion of the 3/2^+
partial wave using the amplitude as a reference, without making assumptions on the nonresonant
amplitude behaviour. The mass of the state is consistent with recent predictions
for an orbital D-wave excitation with quantum numbers 3/2^+
based on the nonrelativistic heavy quark-light diquark model <cit.> and from
QCD sum rules in the HQET framework <cit.>.
First constraints on the spin and parity of the state are obtained in this analysis,
and its mass and width are measured.
The most likely spin-parity assignment for is J^P=3/2^- but the other solutions
with spins 1/2 to 7/2 cannot be excluded.
The mass and width of the state are measured to be
m() =,
Γ() =.
The J^P=3/2^- assignment for the state is consistent with its interpretations as
a D^*N molecule <cit.>
or a radial 2P excitation <cit.>.
§ ACKNOWLEDGEMENTS
We express our gratitude to our colleagues in the CERN
accelerator departments for the excellent performance of the LHC. We
thank the technical and administrative staff at the LHCb
institutes. We acknowledge support from CERN and from the national
agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); NSFC (China);
CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy);
FOM and NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania);
MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland);
NASU (Ukraine); STFC (United Kingdom); NSF (USA).
We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open
source software packages on which we depend.
Individual groups or members have received support from AvH Foundation (Germany),
EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union),
Conseil Général de Haute-Savoie, Labex ENIGMASS and OCEVU,
Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851 and the Leverhulme Trust (United Kingdom).
LHCb collaboration
R. Aaij^40,
B. Adeva^39,
M. Adinolfi^48,
Z. Ajaltouni^5,
S. Akar^59,
J. Albrecht^10,
F. Alessio^40,
M. Alexander^53,
S. Ali^43,
G. Alkhazov^31,
P. Alvarez Cartelle^55,
A.A. Alves Jr^59,
S. Amato^2,
S. Amerio^23,
Y. Amhis^7,
L. An^3,
L. Anderlini^18,
G. Andreassi^41,
M. Andreotti^17,g,
J.E. Andrews^60,
R.B. Appleby^56,
F. Archilli^43,
P. d'Argent^12,
J. Arnau Romeu^6,
A. Artamonov^37,
M. Artuso^61,
E. Aslanides^6,
G. Auriemma^26,
M. Baalouch^5,
I. Babuschkin^56,
S. Bachmann^12,
J.J. Back^50,
A. Badalov^38,
C. Baesso^62,
S. Baker^55,
V. Balagura^7,c,
W. Baldini^17,
R.J. Barlow^56,
C. Barschel^40,
S. Barsuk^7,
W. Barter^56,
F. Baryshnikov^32,
M. Baszczyk^27,
V. Batozskaya^29,
B. Batsukh^61,
V. Battista^41,
A. Bay^41,
L. Beaucourt^4,
J. Beddow^53,
F. Bedeschi^24,
I. Bediaga^1,
A. Beiter^61,
L.J. Bel^43,
V. Bellee^41,
N. Belloli^21,i,
K. Belous^37,
I. Belyaev^32,
E. Ben-Haim^8,
G. Bencivenni^19,
S. Benson^43,
A. Berezhnoy^33,
R. Bernet^42,
A. Bertolin^23,
C. Betancourt^42,
F. Betti^15,
M.-O. Bettler^40,
M. van Beuzekom^43,
Ia. Bezshyiko^42,
S. Bifani^47,
P. Billoir^8,
T. Bird^56,
A. Birnkraut^10,
A. Bitadze^56,
A. Bizzeti^18,u,
T. Blake^50,
F. Blanc^41,
J. Blouw^11,†,
S. Blusk^61,
V. Bocci^26,
T. Boettcher^58,
A. Bondar^36,w,
N. Bondar^31,40,
W. Bonivento^16,
I. Bordyuzhin^32,
A. Borgheresi^21,i,
S. Borghi^56,
M. Borisyak^35,
M. Borsato^39,
F. Bossu^7,
M. Boubdir^9,
T.J.V. Bowcock^54,
E. Bowen^42,
C. Bozzi^17,40,
S. Braun^12,
M. Britsch^12,
T. Britton^61,
J. Brodzicka^56,
E. Buchanan^48,
C. Burr^56,
A. Bursche^2,
J. Buytaert^40,
S. Cadeddu^16,
R. Calabrese^17,g,
M. Calvi^21,i,
M. Calvo Gomez^38,m,
A. Camboni^38,
P. Campana^19,
D.H. Campora Perez^40,
L. Capriotti^56,
A. Carbone^15,e,
G. Carboni^25,j,
R. Cardinale^20,h,
A. Cardini^16,
P. Carniti^21,i,
L. Carson^52,
K. Carvalho Akiba^2,
G. Casse^54,
L. Cassina^21,i,
L. Castillo Garcia^41,
M. Cattaneo^40,
G. Cavallero^20,
R. Cenci^24,t,
D. Chamont^7,
M. Charles^8,
Ph. Charpentier^40,
G. Chatzikonstantinidis^47,
M. Chefdeville^4,
S. Chen^56,
S.-F. Cheung^57,
V. Chobanova^39,
M. Chrzaszcz^42,27,
X. Cid Vidal^39,
G. Ciezarek^43,
P.E.L. Clarke^52,
M. Clemencic^40,
H.V. Cliff^49,
J. Closier^40,
V. Coco^59,
J. Cogan^6,
E. Cogneras^5,
V. Cogoni^16,40,f,
L. Cojocariu^30,
G. Collazuol^23,o,
P. Collins^40,
A. Comerma-Montells^12,
A. Contu^40,
A. Cook^48,
G. Coombs^40,
S. Coquereau^38,
G. Corti^40,
M. Corvo^17,g,
C.M. Costa Sobral^50,
B. Couturier^40,
G.A. Cowan^52,
D.C. Craik^52,
A. Crocombe^50,
M. Cruz Torres^62,
S. Cunliffe^55,
R. Currie^55,
C. D'Ambrosio^40,
F. Da Cunha Marinho^2,
E. Dall'Occo^43,
J. Dalseno^48,
P.N.Y. David^43,
A. Davis^3,
K. De Bruyn^6,
S. De Capua^56,
M. De Cian^12,
J.M. De Miranda^1,
L. De Paula^2,
M. De Serio^14,d,
P. De Simone^19,
C.T. Dean^53,
D. Decamp^4,
M. Deckenhoff^10,
L. Del Buono^8,
M. Demmer^10,
A. Dendek^28,
D. Derkach^35,
O. Deschamps^5,
F. Dettori^40,
B. Dey^22,
A. Di Canto^40,
H. Dijkstra^40,
F. Dordei^40,
M. Dorigo^41,
A. Dosil Suárez^39,
A. Dovbnya^45,
K. Dreimanis^54,
L. Dufour^43,
G. Dujany^56,
K. Dungs^40,
P. Durante^40,
R. Dzhelyadin^37,
A. Dziurda^40,
A. Dzyuba^31,
N. Déléage^4,
S. Easo^51,
M. Ebert^52,
U. Egede^55,
V. Egorychev^32,
S. Eidelman^36,w,
S. Eisenhardt^52,
U. Eitschberger^10,
R. Ekelhof^10,
L. Eklund^53,
S. Ely^61,
S. Esen^12,
H.M. Evans^49,
T. Evans^57,
A. Falabella^15,
N. Farley^47,
S. Farry^54,
R. Fay^54,
D. Fazzini^21,i,
D. Ferguson^52,
A. Fernandez Prieto^39,
F. Ferrari^15,40,
F. Ferreira Rodrigues^2,
M. Ferro-Luzzi^40,
S. Filippov^34,
R.A. Fini^14,
M. Fiore^17,g,
M. Fiorini^17,g,
M. Firlej^28,
C. Fitzpatrick^41,
T. Fiutowski^28,
F. Fleuret^7,b,
K. Fohl^40,
M. Fontana^16,40,
F. Fontanelli^20,h,
D.C. Forshaw^61,
R. Forty^40,
V. Franco Lima^54,
M. Frank^40,
C. Frei^40,
J. Fu^22,q,
W. Funk^40,
E. Furfaro^25,j,
C. Färber^40,
A. Gallas Torreira^39,
D. Galli^15,e,
S. Gallorini^23,
S. Gambetta^52,
M. Gandelman^2,
P. Gandini^57,
Y. Gao^3,
L.M. Garcia Martin^69,
J. García Pardiñas^39,
J. Garra Tico^49,
L. Garrido^38,
P.J. Garsed^49,
D. Gascon^38,
C. Gaspar^40,
L. Gavardi^10,
G. Gazzoni^5,
D. Gerick^12,
E. Gersabeck^12,
M. Gersabeck^56,
T. Gershon^50,
Ph. Ghez^4,
S. Gianì^41,
V. Gibson^49,
O.G. Girard^41,
L. Giubega^30,
K. Gizdov^52,
V.V. Gligorov^8,
D. Golubkov^32,
A. Golutvin^55,40,
A. Gomes^1,a,
I.V. Gorelov^33,
C. Gotti^21,i,
R. Graciani Diaz^38,
L.A. Granado Cardoso^40,
E. Graugés^38,
E. Graverini^42,
G. Graziani^18,
A. Grecu^30,
P. Griffith^16,
L. Grillo^21,40,i,
B.R. Gruberg Cazon^57,
O. Grünberg^67,
E. Gushchin^34,
Yu. Guz^37,
T. Gys^40,
C. Göbel^62,
T. Hadavizadeh^57,
C. Hadjivasiliou^5,
G. Haefeli^41,
C. Haen^40,
S.C. Haines^49,
B. Hamilton^60,
X. Han^12,
S. Hansmann-Menzemer^12,
N. Harnew^57,
S.T. Harnew^48,
J. Harrison^56,
M. Hatch^40,
J. He^63,
T. Head^41,
A. Heister^9,
K. Hennessy^54,
P. Henrard^5,
L. Henry^8,
E. van Herwijnen^40,
M. Heß^67,
A. Hicheur^2,
D. Hill^57,
C. Hombach^56,
H. Hopchev^41,
W. Hulsbergen^43,
T. Humair^55,
M. Hushchyn^35,
D. Hutchcroft^54,
M. Idzik^28,
P. Ilten^58,
R. Jacobsson^40,
A. Jaeger^12,
J. Jalocha^57,
E. Jans^43,
A. Jawahery^60,
F. Jiang^3,
M. John^57,
D. Johnson^40,
C.R. Jones^49,
C. Joram^40,
B. Jost^40,
N. Jurik^57,
S. Kandybei^45,
M. Karacson^40,
J.M. Kariuki^48,
S. Karodia^53,
M. Kecke^12,
M. Kelsey^61,
M. Kenzie^49,
T. Ketel^44,
E. Khairullin^35,
B. Khanji^12,
C. Khurewathanakul^41,
T. Kirn^9,
S. Klaver^56,
K. Klimaszewski^29,
S. Koliiev^46,
M. Kolpin^12,
I. Komarov^41,
R.F. Koopman^44,
P. Koppenburg^43,
A. Kosmyntseva^32,
A. Kozachuk^33,
M. Kozeiha^5,
L. Kravchuk^34,
K. Kreplin^12,
M. Kreps^50,
P. Krokovny^36,w,
F. Kruse^10,
W. Krzemien^29,
W. Kucewicz^27,l,
M. Kucharczyk^27,
V. Kudryavtsev^36,w,
A.K. Kuonen^41,
K. Kurek^29,
T. Kvaratskheliya^32,40,
D. Lacarrere^40,
G. Lafferty^56,
A. Lai^16,
G. Lanfranchi^19,
C. Langenbruch^9,
T. Latham^50,
C. Lazzeroni^47,
R. Le Gac^6,
J. van Leerdam^43,
A. Leflat^33,40,
J. Lefrançois^7,
R. Lefèvre^5,
F. Lemaitre^40,
E. Lemos Cid^39,
O. Leroy^6,
T. Lesiak^27,
B. Leverington^12,
T. Li^3,
Y. Li^7,
T. Likhomanenko^35,68,
R. Lindner^40,
C. Linn^40,
F. Lionetto^42,
X. Liu^3,
D. Loh^50,
I. Longstaff^53,
J.H. Lopes^2,
D. Lucchesi^23,o,
M. Lucio Martinez^39,
H. Luo^52,
A. Lupato^23,
E. Luppi^17,g,
O. Lupton^40,
A. Lusiani^24,
X. Lyu^63,
F. Machefert^7,
F. Maciuc^30,
O. Maev^31,
K. Maguire^56,
S. Malde^57,
A. Malinin^68,
T. Maltsev^36,
G. Manca^16,f,
G. Mancinelli^6,
P. Manning^61,
J. Maratas^5,v,
J.F. Marchand^4,
U. Marconi^15,
C. Marin Benito^38,
M. Marinangeli^41,
P. Marino^24,t,
J. Marks^12,
G. Martellotti^26,
M. Martin^6,
M. Martinelli^41,
D. Martinez Santos^39,
F. Martinez Vidal^69,
D. Martins Tostes^2,
L.M. Massacrier^7,
A. Massafferri^1,
R. Matev^40,
A. Mathad^50,
Z. Mathe^40,
C. Matteuzzi^21,
A. Mauri^42,
E. Maurice^7,b,
B. Maurin^41,
A. Mazurov^47,
M. McCann^55,40,
A. McNab^56,
R. McNulty^13,
B. Meadows^59,
F. Meier^10,
M. Meissner^12,
D. Melnychuk^29,
M. Merk^43,
A. Merli^22,q,
E. Michielin^23,
D.A. Milanes^66,
M.-N. Minard^4,
D.S. Mitzel^12,
A. Mogini^8,
J. Molina Rodriguez^1,
I.A. Monroy^66,
S. Monteil^5,
M. Morandin^23,
P. Morawski^28,
A. Mordà^6,
M.J. Morello^24,t,
O. Morgunova^68,
J. Moron^28,
A.B. Morris^52,
R. Mountain^61,
F. Muheim^52,
M. Mulder^43,
M. Mussini^15,
D. Müller^56,
J. Müller^10,
K. Müller^42,
V. Müller^10,
P. Naik^48,
T. Nakada^41,
R. Nandakumar^51,
A. Nandi^57,
I. Nasteva^2,
M. Needham^52,
N. Neri^22,
S. Neubert^12,
N. Neufeld^40,
M. Neuner^12,
T.D. Nguyen^41,
C. Nguyen-Mau^41,n,
S. Nieswand^9,
R. Niet^10,
N. Nikitin^33,
T. Nikodem^12,
A. Nogay^68,
A. Novoselov^37,
D.P. O'Hanlon^50,
A. Oblakowska-Mucha^28,
V. Obraztsov^37,
S. Ogilvy^19,
R. Oldeman^16,f,
C.J.G. Onderwater^70,
J.M. Otalora Goicochea^2,
A. Otto^40,
P. Owen^42,
A. Oyanguren^69,
P.R. Pais^41,
A. Palano^14,d,
M. Palutan^19,
A. Papanestis^51,
M. Pappagallo^14,d,
L.L. Pappalardo^17,g,
W. Parker^60,
C. Parkes^56,
G. Passaleva^18,
A. Pastore^14,d,
G.D. Patel^54,
M. Patel^55,
C. Patrignani^15,e,
A. Pearce^40,
A. Pellegrino^43,
G. Penso^26,
M. Pepe Altarelli^40,
S. Perazzini^40,
P. Perret^5,
L. Pescatore^41,
K. Petridis^48,
A. Petrolini^20,h,
A. Petrov^68,
M. Petruzzo^22,q,
E. Picatoste Olloqui^38,
B. Pietrzyk^4,
M. Pikies^27,
D. Pinci^26,
A. Pistone^20,
A. Piucci^12,
V. Placinta^30,
S. Playfer^52,
M. Plo Casasus^39,
T. Poikela^40,
F. Polci^8,
A. Poluektov^50,36,
I. Polyakov^61,
E. Polycarpo^2,
G.J. Pomery^48,
A. Popov^37,
D. Popov^11,40,
B. Popovici^30,
S. Poslavskii^37,
C. Potterat^2,
E. Price^48,
J.D. Price^54,
J. Prisciandaro^39,40,
A. Pritchard^54,
C. Prouve^48,
V. Pugatch^46,
A. Puig Navarro^42,
G. Punzi^24,p,
W. Qian^50,
R. Quagliani^7,48,
B. Rachwal^27,
J.H. Rademacker^48,
M. Rama^24,
M. Ramos Pernas^39,
M.S. Rangel^2,
I. Raniuk^45,
F. Ratnikov^35,
G. Raven^44,
F. Redi^55,
S. Reichert^10,
A.C. dos Reis^1,
C. Remon Alepuz^69,
V. Renaudin^7,
S. Ricciardi^51,
S. Richards^48,
M. Rihl^40,
K. Rinnert^54,
V. Rives Molina^38,
P. Robbe^7,40,
A.B. Rodrigues^1,
E. Rodrigues^59,
J.A. Rodriguez Lopez^66,
P. Rodriguez Perez^56,†,
A. Rogozhnikov^35,
S. Roiser^40,
A. Rollings^57,
V. Romanovskiy^37,
A. Romero Vidal^39,
J.W. Ronayne^13,
M. Rotondo^19,
M.S. Rudolph^61,
T. Ruf^40,
P. Ruiz Valls^69,
J.J. Saborido Silva^39,
E. Sadykhov^32,
N. Sagidova^31,
B. Saitta^16,f,
V. Salustino Guimaraes^1,
C. Sanchez Mayordomo^69,
B. Sanmartin Sedes^39,
R. Santacesaria^26,
C. Santamarina Rios^39,
M. Santimaria^19,
E. Santovetti^25,j,
A. Sarti^19,k,
C. Satriano^26,s,
A. Satta^25,
D.M. Saunders^48,
D. Savrina^32,33,
S. Schael^9,
M. Schellenberg^10,
M. Schiller^53,
H. Schindler^40,
M. Schlupp^10,
M. Schmelling^11,
T. Schmelzer^10,
B. Schmidt^40,
O. Schneider^41,
A. Schopper^40,
K. Schubert^10,
M. Schubiger^41,
M.-H. Schune^7,
R. Schwemmer^40,
B. Sciascia^19,
A. Sciubba^26,k,
A. Semennikov^32,
A. Sergi^47,
N. Serra^42,
J. Serrano^6,
L. Sestini^23,
P. Seyfert^21,
M. Shapkin^37,
I. Shapoval^45,
Y. Shcheglov^31,
T. Shears^54,
L. Shekhtman^36,w,
V. Shevchenko^68,
B.G. Siddi^17,40,
R. Silva Coutinho^42,
L. Silva de Oliveira^2,
G. Simi^23,o,
S. Simone^14,d,
M. Sirendi^49,
N. Skidmore^48,
T. Skwarnicki^61,
E. Smith^55,
I.T. Smith^52,
J. Smith^49,
M. Smith^55,
H. Snoek^43,
l. Soares Lavra^1,
M.D. Sokoloff^59,
F.J.P. Soler^53,
B. Souza De Paula^2,
B. Spaan^10,
P. Spradlin^53,
S. Sridharan^40,
F. Stagni^40,
M. Stahl^12,
S. Stahl^40,
P. Stefko^41,
S. Stefkova^55,
O. Steinkamp^42,
S. Stemmle^12,
O. Stenyakin^37,
H. Stevens^10,
S. Stevenson^57,
S. Stoica^30,
S. Stone^61,
B. Storaci^42,
S. Stracka^24,p,
M. Straticiuc^30,
U. Straumann^42,
L. Sun^64,
W. Sutcliffe^55,
K. Swientek^28,
V. Syropoulos^44,
M. Szczekowski^29,
T. Szumlak^28,
S. T'Jampens^4,
A. Tayduganov^6,
T. Tekampe^10,
G. Tellarini^17,g,
F. Teubert^40,
E. Thomas^40,
J. van Tilburg^43,
M.J. Tilley^55,
V. Tisserand^4,
M. Tobin^41,
S. Tolk^49,
L. Tomassetti^17,g,
D. Tonelli^40,
S. Topp-Joergensen^57,
F. Toriello^61,
E. Tournefier^4,
S. Tourneur^41,
K. Trabelsi^41,
M. Traill^53,
M.T. Tran^41,
M. Tresch^42,
A. Trisovic^40,
A. Tsaregorodtsev^6,
P. Tsopelas^43,
A. Tully^49,
N. Tuning^43,
A. Ukleja^29,
A. Ustyuzhanin^35,
U. Uwer^12,
C. Vacca^16,f,
V. Vagnoni^15,40,
A. Valassi^40,
S. Valat^40,
G. Valenti^15,
R. Vazquez Gomez^19,
P. Vazquez Regueiro^39,
S. Vecchi^17,
M. van Veghel^43,
J.J. Velthuis^48,
M. Veltri^18,r,
G. Veneziano^57,
A. Venkateswaran^61,
M. Vernet^5,
M. Vesterinen^12,
J.V. Viana Barbosa^40,
B. Viaud^7,
D. Vieira^63,
M. Vieites Diaz^39,
H. Viemann^67,
X. Vilasis-Cardona^38,m,
M. Vitti^49,
V. Volkov^33,
A. Vollhardt^42,
B. Voneki^40,
A. Vorobyev^31,
V. Vorobyev^36,w,
C. Voß^9,
J.A. de Vries^43,
C. Vázquez Sierra^39,
R. Waldi^67,
C. Wallace^50,
R. Wallace^13,
J. Walsh^24,
J. Wang^61,
D.R. Ward^49,
H.M. Wark^54,
N.K. Watson^47,
D. Websdale^55,
A. Weiden^42,
M. Whitehead^40,
J. Wicht^50,
G. Wilkinson^57,40,
M. Wilkinson^61,
M. Williams^40,
M.P. Williams^47,
M. Williams^58,
T. Williams^47,
F.F. Wilson^51,
J. Wimberley^60,
J. Wishahi^10,
W. Wislicki^29,
M. Witek^27,
G. Wormser^7,
S.A. Wotton^49,
K. Wraight^53,
K. Wyllie^40,
Y. Xie^65,
Z. Xing^61,
Z. Xu^4,
Z. Yang^3,
Y. Yao^61,
H. Yin^65,
J. Yu^65,
X. Yuan^36,w,
O. Yushchenko^37,
K.A. Zarebski^47,
M. Zavertyaev^11,c,
L. Zhang^3,
Y. Zhang^7,
Y. Zhang^63,
A. Zhelezov^12,
Y. Zheng^63,
X. Zhu^3,
V. Zhukov^33,
S. Zucchelli^15.
^1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
^2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
^3Center for High Energy Physics, Tsinghua University, Beijing, China
^4LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France
^5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France
^6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France
^7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France
^8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France
^9I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany
^10Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
^11Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
^12Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
^13School of Physics, University College Dublin, Dublin, Ireland
^14Sezione INFN di Bari, Bari, Italy
^15Sezione INFN di Bologna, Bologna, Italy
^16Sezione INFN di Cagliari, Cagliari, Italy
^17Sezione INFN di Ferrara, Ferrara, Italy
^18Sezione INFN di Firenze, Firenze, Italy
^19Laboratori Nazionali dell'INFN di Frascati, Frascati, Italy
^20Sezione INFN di Genova, Genova, Italy
^21Sezione INFN di Milano Bicocca, Milano, Italy
^22Sezione INFN di Milano, Milano, Italy
^23Sezione INFN di Padova, Padova, Italy
^24Sezione INFN di Pisa, Pisa, Italy
^25Sezione INFN di Roma Tor Vergata, Roma, Italy
^26Sezione INFN di Roma La Sapienza, Roma, Italy
^27Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland
^28AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland
^29National Center for Nuclear Research (NCBJ), Warsaw, Poland
^30Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania
^31Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia
^32Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia
^33Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia
^34Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia
^35Yandex School of Data Analysis, Moscow, Russia
^36Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia
^37Institute for High Energy Physics (IHEP), Protvino, Russia
^38ICCUB, Universitat de Barcelona, Barcelona, Spain
^39Universidad de Santiago de Compostela, Santiago de Compostela, Spain
^40European Organization for Nuclear Research (CERN), Geneva, Switzerland
^41Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
^42Physik-Institut, Universität Zürich, Zürich, Switzerland
^43Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands
^44Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands
^45NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
^46Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine
^47University of Birmingham, Birmingham, United Kingdom
^48H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom
^49Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
^50Department of Physics, University of Warwick, Coventry, United Kingdom
^51STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
^52School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom
^53School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom
^54Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
^55Imperial College London, London, United Kingdom
^56School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom
^57Department of Physics, University of Oxford, Oxford, United Kingdom
^58Massachusetts Institute of Technology, Cambridge, MA, United States
^59University of Cincinnati, Cincinnati, OH, United States
^60University of Maryland, College Park, MD, United States
^61Syracuse University, Syracuse, NY, United States
^62Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^2
^63University of Chinese Academy of Sciences, Beijing, China, associated to ^3
^64School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^3
^65Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to ^3
^66Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^8
^67Institut für Physik, Universität Rostock, Rostock, Germany, associated to ^12
^68National Research Centre Kurchatov Institute, Moscow, Russia, associated to ^32
^69Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to ^38
^70Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to ^43
^aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil
^bLaboratoire Leprince-Ringuet, Palaiseau, France
^cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia
^dUniversità di Bari, Bari, Italy
^eUniversità di Bologna, Bologna, Italy
^fUniversità di Cagliari, Cagliari, Italy
^gUniversità di Ferrara, Ferrara, Italy
^hUniversità di Genova, Genova, Italy
^iUniversità di Milano Bicocca, Milano, Italy
^jUniversità di Roma Tor Vergata, Roma, Italy
^kUniversità di Roma La Sapienza, Roma, Italy
^lAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland
^mLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain
^nHanoi University of Science, Hanoi, Viet Nam
^oUniversità di Padova, Padova, Italy
^pUniversità di Pisa, Pisa, Italy
^qUniversità degli Studi di Milano, Milano, Italy
^rUniversità di Urbino, Urbino, Italy
^sUniversità della Basilicata, Potenza, Italy
^tScuola Normale Superiore, Pisa, Italy
^uUniversità di Modena e Reggio Emilia, Modena, Italy
^vIligan Institute of Technology (IIT), Iligan, Philippines
^wNovosibirsk State University, Novosibirsk, Russia
^†Deceased
| Decays of beauty baryons to purely hadronic final states
provide a wealth of information about the interactions between the fundamental
constituents of matter.
Studies of direct violation in these decays can help constrain the parameters of the Standard Model
and New Physics effects in a similar way as in decays of beauty
mesons <cit.>.
Studies of the decay dynamics of beauty baryons can provide important information on the spectroscopy of
charmed baryons, since the known initial state provides strong constraints on the
quantum numbers of intermediate resonances.
The recent observation of pentaquark states at LHCb <cit.> has renewed
the interest in baryon spectroscopy.
The present analysis concerns the decay amplitude of the
Cabibbo-favoured decay
(the inclusion of charge-conjugate processes is implied throughout this paper).
A measurement of the branching fraction of this decay
with respect to the mode
was reported by the LHCb collaboration using a data sample corresponding to
1.0 of integrated luminosity <cit.>.
The decay includes resonant contributions in the channel
that are associated with intermediate excited states,
as well as contributions in the channel due to excited nucleon (N) states.
The study of the part of the amplitude will help to constrain the dynamics of the Cabibbo-suppressed
decay , which is potentially sensitive to the angle γ of the Cabibbo-Kobayashi-Maskawa
quark mixing matrix <cit.>.
The analysis of the amplitude is interesting in its own right.
One of the states decaying to , the , has a possible interpretation as a D^*N molecule
<cit.>.
There are currently no experimental constraints on the quantum numbers of the state.
The mass spectrum of the predicted and observed orbitally excited states <cit.>
is shown in Fig. <ref>. In addition to the ground state and to the _(2595)^+ and
_(2625)^+ states, which are identified as the members of the P-wave doublet,
a D-wave doublet with higher mass is predicted. One of the members of this doublet could be the state known as the ,
which is measured to have spin and parity J^P=5/2^+ <cit.>, while
no candidate for the other state has been observed yet.
Several theoretical studies provide mass predictions for this state and other excited charm
baryons <cit.>.
The BaBar collaboration has previously reported indications of a structure in the mass spectrum
close to threshold, at a mass around
2.84[Natural units with ħ=c=1 are used throughout.],
which could be the missing member of the D-wave doublet <cit.>.
This analysis is based on a data sample corresponding to an integrated luminosity of
3.0of collisions recorded by the LHCb detector, with 1.0collected at
centre-of-mass energy √(s)=7 in 2011 and 2.0at √(s)=8 in 2012.
The paper is organised as follows.
Section <ref> gives a brief description of the LHCb experiment and its
reconstruction and simulation software.
The amplitude analysis formalism and fitting technique is introduced in Sec. <ref>.
The selection of candidates is described in Sec. <ref>, followed by the measurement of
signal and background yields (Sec. <ref>), evaluation of the efficiency (Sec. <ref>),
determination of the shape of the background distribution (Sec. <ref>), and discussion of the
effects of momentum resolution (Sec. <ref>).
Results of the amplitude fit are presented in Sec. <ref> separately for four different
regions of the phase space, along with the systematic uncertainties for those fits.
Section <ref> gives a summary of the results. |
http://arxiv.org/abs/1701.07667v1 | 20170126120614 | Indistinguishable sceneries on the Boolean hypercube | [
"Renan Gross",
"Uri Grupel"
] | math.CO | [
"math.CO",
"math.PR"
] |
Indistinguishable sceneries on the Boolean hypercube
Renan Gross[Weizmann Institute of Science. [email protected]]
and Uri Grupel[Weizmann Institute of Science. [email protected]. Supported by the European Research Council (ERC).]
We show that the scenery reconstruction problem on the Boolean hypercube is in general impossible. This is done by using locally biased functions, in which every vertex has a constant fraction of neighbors colored by 1, and locally stable functions, in which every vertex has a constant fraction of neighbors colored by its own color. Our methods are constructive, and also give super-polynomial lower bounds on the number of locally biased and locally stable functions. We further show similar results for ℤ^n and other graphs, and offer several follow-up questions.
§ INTRODUCTION
Let f : {-1,1}^n→{-1,1} be a Boolean function on the n-dimensional hypercube, and let S_i be a random walk on the hypercube. Can we reconstruct the function f (with probability 1, up to the hypercube's symmetries) by only observing the scenery process {f(S_i)}_i?
Similar questions have been raised for other graphs. For example, it was shown in <cit.> that when G is a cycle graph, the answer is yes: it is possible to reconstruct the function f (which is a string up to choice of origin) up to rotation and reflection with probability 1. It is still an open question whether any such string can be reconstructed in polynomial time. When G=ℤ, reconstruction is generally impossible <cit.>; for random sceneries on ℤ see <cit.>.
When G is the hypercube, such a process was studied for a specific Boolean function, the percolation crossing, under the notion of dynamical percolation; see <cit.> for details.
In the general case, however, we show that for n≥ 4 the answer is no. We do this by considering a pair of non-isomorphic functions f and g such that if S_i and T_i are random walks on the hypercube, then f(S_i) and g(T_i) have exactly the same distribution. We discuss two different classes of such functions:
* Locally p-biased functions: Let G be a graph. A Boolean function f: G →{-1,1} is called locally p-biased, if for every vertex x∈ G we have
|{y∼ x; f(y) = 1}|/deg(x)=p.
In words, f is locally p-biased if for every vertex x, f takes the value 1 on exactly a p-fraction of x's neighbors. If f is a locally p-biased function, then the random variables {f(S_i)}_i have the same distribution as independent Bernoulli random variables with ℙ(f(S_i)=1)=p.
* Locally p-stable functions: Let G be a graph. A Boolean function f: G →{-1,1} is called locally p-stable, if for every vertex x∈ G we have
|{y∼ x; f(x) = f(y)}|/deg(x)=p.
In words, f is locally p-stable if for every vertex x, f retains its value on exactly a p-fraction of x's neighbors. If f is locally p-stable, then the random variables {f(S_i)f(S_i+1)}_i have the same distribution as independent Bernoulli random variables with ℙ(f(S_i)f(S_i+1)=1)=p (a short verification sketch for both definitions follows below).
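Both definitions are easy to check by brute force on small hypercubes, as in the following Python sketch; the dictator function in the example is our choice, used only for illustration.

from itertools import product

def neighbors(x):
    # Neighbours of a vertex of {-1,1}^n: flip one coordinate at a time
    return [x[:i] + (-x[i],) + x[i + 1:] for i in range(len(x))]

def is_locally_p_biased(f, n, p):
    return all(sum(f(y) == 1 for y in neighbors(x)) == p * n
               for x in product((-1, 1), repeat=n))

def is_locally_p_stable(f, n, p):
    return all(sum(f(y) == f(x) for y in neighbors(x)) == p * n
               for x in product((-1, 1), repeat=n))

# Example: on n=2 the first-coordinate ("dictator") function is both
# locally 1/2-biased and locally 1/2-stable
f = lambda x: x[0]
print(is_locally_p_biased(f, 2, 1 / 2), is_locally_p_stable(f, 2, 1 / 2))  # True True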
We say that two Boolean functions f,g:{-1,1}^n→{-1,1} are isomorphic, if there exists an automorphism of the hypercube ψ:{-1,1}^n→{-1,1}^n such that f∘ψ = g. Two functions are non-isomorphic if no such ψ exists.
The existence of two non-isomorphic locally p-biased functions, or two non-isomorphic locally p-stable functions thus render scenery reconstruction on the hypercube impossible.
It is not immediately obvious that pairs of non-isomorphic locally p-biased and pairs of non-isomorphic locally p-stable functions exist. It is then natural to ask, for which p values do they exist? If they do exist, how many of them are there?
In this paper, we characterize the possible p values on the n-dimensional hypercube, give bounds on the number of non-isomorphic pairs, and discuss results on other graphs. The paper is organized as follows.
In <ref> we give a full characterization of the connection between the dimension of the hypercube n and the permissible p values of locally p-biased functions, as expressed in the following theorem:
Let n ∈ ℕ be a natural number and p ∈ [0,1]. There exists a locally p-biased function f:{-1,1}^n→{-1,1} if and only if p = b/2^k for some integers b ≥ 0, k ≥ 0, and 2^k divides n.
Our proof can construct functions for all p of the above form.
In <ref> we inspect the class size of non-isomorphic locally p-biased functions on the hypercube. We show that the class size for p=1/2 is at least C2^√(n)/n^1/4 for some constant C > 0, and for p=1/n is super-exponential in n, when such p values are permissible. Thus reconstruction is impossible for such functions. We conjecture that the number of non-isomorphic locally p-biased functions scales quickly for all permissible p values:
Let n>0 be even. Let p=b/2^k, where 1≤ b≤ 2^k, k≥ 1 and 2^k divides n. Let B_p^n be the set of non-isomorphic locally p-biased functions. Then |B_p^n| is super-exponential in n.
In <ref> we briefly discuss locally p-stable functions. We show that they exist for all possible p values, and that for most p values there are many non-isomorphic pairs; however, for every n, there are p values for which there is a unique locally p-stable function. The results in this section are based on those of <ref>.
In <ref> we discuss other graphs. First, we show that when G is a regular tree of degree n, then all p=a/n are permissible. Second, we show that for G=ℤ^n all the results for the hypercubes hold true. This gives us a partial answer for permissible p values for ℤ^n, but there are additional values that cannot be achieved through the hypercube construction: for example, for n=1 we can define a function with p=1/2 and when n=2 we can find a function with p=1/4. We also discuss other Cayley graphs of ℤ^n, and suggest further questions on scenery reconstruction.
Throughout most of this paper we treat the Boolean hypercube as the set {-1,1}^n. We identify it with the {0,1}^n hypercube by considering -1 in the first to correspond to 0 in the second.
§ CHARACTERIZATION OF PERMISSIBLE P VALUES
In this section we prove Theorem <ref>. The “only if” part is achieved by a double counting argument.
Suppose that f: {-1,1}^n → {-1,1} is a locally p-biased Boolean function. Let x be a uniformly random element of {-1,1}^n. Then f(x) is f's value on a uniformly random point of the hypercube, and is equal to 1 with probability l/2^n, where l = |{x ∈ {-1,1}^n : f(x) = 1}| is the number of vertices on which f takes the value 1. Now let y be a uniformly random neighbor of x. The function f is locally p-biased, so the probability that f(y) = 1 is p by definition. Since both x and y are uniformly random vertices, ℙ(f(x) = 1) = ℙ(f(y) = 1). Denoting p = m/n for some m ∈ {0,1,…,n}, this gives
p = l/2^n = m/n.
Decompose n as n = c2^k, where c is odd. Then by (<ref>), we have that
l = 2^(n-k) · m/c
is an integer, and so c must divide m, i.e. m = bc for some b. But then
p = m/n = b/2^k
as stated by the theorem.
The “if” part of Theorem <ref> is given by an explicit construction, performed in three steps. First, we use perfect codes in order to obtain a locally 1/n-biased function for n that is a power of two. Second, we extend the result to a locally m/n-biased function by taking the union of m locally 1/n-biased functions with disjoint support. Finally, given a locally p-biased function on n bits, we show how to manipulate its Fourier representation in order to yield a locally p-biased function on cn bits for any c.
We begin with a brief review of binary codes. We omit proofs and simply state definitions and known results; for a more thorough introduction, see e.g. <cit.>.
A binary code C on the n-dimensional hypercube is simply a subset of {-1,1}^n; its elements are called codewords. The distance of a code C is defined as min_{x ≠ y ∈ C} δ_H(x,y), where δ_H(x,y) = |{i ∈ {1,…,n} : x_i ≠ y_i}| is the Hamming distance between x and y, that is, the number of coordinates in which x and y differ. A code of odd distance d is called perfect if the Hamming balls of radius (d-1)/2 around each codeword completely tile the hypercube without overlaps. A code is called linear if its codewords form a vector space over 𝔽_2.
A particularly interesting code is the Hamming code with k parity bits, denoted H_k. It is a linear, distance-3 perfect code on the hypercube of dimension n = 2^k - 1. Its codewords are structured as follows. For x ∈ H_k and i ∈ {1,…,n}, the bit x_i is called a parity bit if i is a power of 2, and a data bit otherwise. Thus every codeword contains k parity bits and 2^k - k - 1 data bits. The data bits range over all
possible bit-strings on 2^k - k - 1 bits, while the parity bits are a function of the data bits:
x_i = ⊕_{j a data-bit index with i ∧ j ≠ 0} x_j   for all i = 2^l, l ≥ 0,
where ⊕ denotes exclusive bitwise or (xor), and ∧ denotes bitwise AND. In words, the parity bit x_i is equal to the xor of all data bits x_j such that the bitwise AND between i and j is non-zero. Thus there are 2^(2^k - k - 1) codewords in H_k.
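As a concrete illustration (ours, with k = 3 as an assumed running example), the following Python sketch builds H_3 on 2^3 - 1 = 7 bits from its data bits and confirms that it is a perfect distance-3 code, i.e. that the radius-1 Hamming balls around the codewords tile {0,1}^7:

from itertools import product

def hamming_codeword(data_bits, k):
    # data bits sit at the non-power-of-two positions 1..2^k-1; each parity
    # bit x_i (i a power of two) is the xor of the data bits x_j with i & j != 0
    n = 2 ** k - 1
    word = [0] * (n + 1)                       # index 0 unused
    data_positions = [j for j in range(1, n + 1) if j & (j - 1) != 0]
    for pos, bit in zip(data_positions, data_bits):
        word[pos] = bit
    for l in range(k):
        i = 2 ** l
        word[i] = 0
        for j in data_positions:
            if i & j:
                word[i] ^= word[j]
    return tuple(word[1:])

k, n = 3, 7
code = {hamming_codeword(d, k) for d in product([0, 1], repeat=n - k)}
assert len(code) == 2 ** (n - k)               # 2^(2^k - k - 1) codewords

def ball(w):                                   # radius-1 Hamming ball around w
    yield w
    for i in range(n):
        yield w[:i] + (1 - w[i],) + w[i + 1:]

covered = [x for c in code for x in ball(c)]
assert len(covered) == len(set(covered)) == 2 ** n   # perfect tiling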
Armed with perfect codes, we are ready to start our proof.
Let n = 2^k be a power of two. Then there exists a locally 1/n-biased function on {-1,1}^n.
In a locally 1/n-biased function f, every point in the hypercube must have exactly 1 neighbor which is given the value 1, and n-1 neighbors which are given the value -1.
Let C be a distance-3 perfect code on the (n-1) = (2^k - 1)-dimensional hypercube. That is, every two codewords in C are at a Hamming distance of at least 3 from each other, and the Hamming balls of radius 1 centered around each codeword completely tile the hypercube. Such codes exist for dimension 2^k - 1; for example, as mentioned above and shown in <cit.>, the Hamming code is such a code. Define f: {-1,1}^n → {-1,1} to be the following function:
f(x) =
1, x ∈ C × {-1,1},
-1, otherwise.
In words, f(x) takes the value 1 whenever the first n-1 coordinates of x are a codeword in C, and otherwise takes the value -1. Then f is a locally 1/n-biased function:
* If f(x) = 1, then x = (y,b) ∈ C × {-1,1}. Thus x' = (y,-b) is the only neighbor of x with f(x') = 1; any other neighbor differs from x in the first n-1 coordinates, and since C is a distance-3 code, these coordinates are not a codeword in C.
* If f(x) = -1, then x = (y,b) where b ∈ {-1,1} and y is not a codeword of C. Since C is perfect, y must fall inside some radius-1 ball of a codeword z. Then x' = (z,b) is the only neighbor of x such that f(x') = 1; any other codeword differs from z in at least 3 coordinates since C is a distance-3 code, and so differs from y in at least 2.
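A quick self-contained check (ours) of this construction in the smallest case n = 4: the repetition code {000, 111} is a distance-3 perfect code on {0,1}^3, and the resulting f is locally 1/4-biased:

from itertools import product

C = {(0, 0, 0), (1, 1, 1)}                 # perfect distance-3 code on {0,1}^3
f = lambda x: 1 if x[:3] in C else -1      # 1 iff first 3 coordinates code

for x in product([0, 1], repeat=4):
    nbrs = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(4)]
    assert sum(f(y) == 1 for y in nbrs) == 1   # exactly a 1/4 fraction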
Let n = 2^k be a power of two. Then there exists a locally m/n-biased function on {-1,1}^n for any m = 0, 1, …, n.
For m = 0 the statement is trivial. Let m ∈ {1,…,n}. In order to construct a locally m/n-biased function, it is enough to find m locally 1/n-biased functions f_1, …, f_m with pairwise disjoint supports, i.e. {x : f_i(x) = 1} ∩ {x : f_j(x) = 1} = ∅ for all i ≠ j. With these functions, we can define f in the following manner:
f(x) =
1, f_i(x) = 1 for some i,
-1, otherwise.
Then f is a locally m/n-biased function: for every x ∈ {-1,1}^n, consider its neighbors on which f takes the value 1, i.e. the set {y : δ_H(x,y) = 1 and f_i(y) = 1 for some i}. Each f_i contributes exactly one element to this set, since it is a locally 1/n-biased function; further, these elements are all distinct, since the f_i's have pairwise disjoint supports. So x has m neighbors on which f takes the value 1.
Recall that the Hamming code on 2^k-1 bits uses 2^k-k-1 data bits (these range over all possible bit-strings on 2^k-k-1 bits) and k parity bits (these are a function of the data bits). Let C be the Hamming code on 2^k-1 bits, and rearrange the order of the bits so that the parity bits are all on the right hand side of the codeword, i.e each codeword x can be written as x = (y,z) where y is a word of length 2^k-k-1 constituting the data bits and z is a word of length k constituting the parity bits.
Now, for all 1 ≤ i ≤ n, define the sets C_i = {x ⊕ (i-1) : x ∈ C}, where ⊕ denotes the exclusive or (xor) operator. Then the sets C_i are all pairwise disjoint: in order for two words x = (y,z) ∈ C_i and x' = (y',z') ∈ C_j to be the same, we need to have both y = y' and z = z'. But if y = y' then the data bits are the same, and by construction z ⊕ z' = (i-1) ⊕ (j-1), so z ≠ z' if i ≠ j. Further, since xoring by a constant only amounts to a rotation of the hypercube, each C_i is still a perfect code.
Let f_i be the function which uses C_i as its perfect code in the proof of Lemma <ref>. Then, f_1, …, f_n are n locally 1/n-biased functions with pairwise disjoint supports. The combination of any m of these functions yields a locally m/n-biased function.
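Continuing the n = 4 example (our sketch, with the four shifts chosen on the parity side), the xor-translates of the repetition code are pairwise disjoint perfect codes, and unions of m of the associated functions are locally m/4-biased:

from itertools import product

C = {(0, 0, 0), (1, 1, 1)}
xor = lambda x, t: tuple(a ^ b for a, b in zip(x, t))
shifts = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
codes = [{xor(x, t) for x in C} for t in shifts]
assert all(codes[i].isdisjoint(codes[j]) for i in range(4) for j in range(i))

for m in range(1, 5):
    support = set().union(*codes[:m])      # disjoint union of m perfect codes
    f = lambda x: 1 if x[:3] in support else -1
    for x in product([0, 1], repeat=4):
        nbrs = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(4)]
        assert sum(f(y) == 1 for y in nbrs) == m   # locally m/4-biased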
Let f: {-1,1}^n → {-1,1} be a locally p-biased function on the n-dimensional hypercube.
Let c ∈ ℕ, and define a new function f': {-1,1}^cn → {-1,1} by
f'(x) = f(∏_{j=0}^{c-1} x_{1+jn}, …, ∏_{j=0}^{c-1} x_{n+jn}).
Then f' is a locally p-biased function.
Let x' ∈ {-1,1}^cn be a point on the cn-dimensional hypercube, and let y ∈ {-1,1}^n be such that y_i = x'_i · x'_{n+i} ⋯ x'_{(c-1)n+i}. Then by definition, f'(x') = f(y). Since f is a locally p-biased function, y has pn neighbors on which f takes the value 1. Each of these neighbors is obtained from y by flipping a single coordinate y_i; this amounts to changing any one of the c coordinates of x' which make up y_i. Since the y_i's are disjoint monomials in the coordinates of x', this implies that there are at least pcn neighbors of x' on which f' takes the value 1.
The same argument can be repeated for the value -1 instead of 1, showing that there are at least (1-p)cn neighbors of x' on which f' takes the value -1. But since the number of neighbors of x' is nc, the inequalities are in fact equalities. Hence, there are exactly pcn neighbors of x' on which f' takes the value 1, completing the proof.
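A small check (ours) of this blow-up with c = 2: starting from the locally 1/4-biased function on {-1,1}^4 built from the repetition code, the function f'(x) = f(x_1 x_5, x_2 x_6, x_3 x_7, x_4 x_8) is locally 1/4-biased on {-1,1}^8:

from itertools import product

C = {(-1, -1, -1), (1, 1, 1)}              # repetition code written in +-1
f = lambda y: 1 if y[:3] in C else -1

def f_prime(x):                            # c = 2 disjoint monomials per input
    return f(tuple(x[i] * x[i + 4] for i in range(4)))

for x in product([-1, 1], repeat=8):
    nbrs = [x[:i] + (-x[i],) + x[i + 1:] for i in range(8)]
    assert sum(f_prime(y) == 1 for y in nbrs) == 2    # 1/4 of the 8 neighbors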
We are now ready to prove that the condition on p is sufficient in Theorem <ref>.
All that is left is to stitch the above lemmas together: let n = c2^k. Using Lemma <ref>, create a locally p-biased function g: {-1,1}^(2^k) → {-1,1} on 2^k variables; then, using Lemma <ref>, extend it to a function f on n variables.
§ NON-ISOMORPHIC FUNCTIONS
In this section we discuss the classes of non-isomorphic locally p-biased functions.
We show that for the hypercube of dimension n, the growth rate with respect to n is at least of order 2^√(n)/n^(1/4) for p=1/2 and super-exponential for p=1/n, when such p values are permissible. We conjecture that for any permissible p the growth rate is super-exponential.
The proof for p=1/2 is based on an explicit construction of non-isomorphic locally 1/2-biased functions.
In order to define these functions we use the following simple proposition.
Let f_i: {-1,1}^(n_i) → {-1,1} be locally 1/2-biased functions for i = 1,2, where n_1 + n_2 = n. Then
f(x) = f_1(x_1,…,x_{n_1}) f_2(x_{n_1+1},…,x_n)
is a locally 1/2-biased function on {-1,1}^n.
Let x ∈ {-1,1}^n, let x' be a neighbor of x, and denote f(x) = f_1(x_1,…,x_{n_1}) f_2(x_{n_1+1},…,x_n) = y_1·y_2 and f(x') = f_1(x'_1,…,x'_{n_1}) f_2(x'_{n_1+1},…,x'_n) = y'_1·y'_2. Then x' differs from x in either the first n_1 coordinates, or the last n_2 coordinates. If it differs in the first n_1 coordinates, then y'_2 = y_2. Since f_1 is locally 1/2-biased, there are exactly n_1/2 coordinate changes such that y'_1 = y'_2, yielding f(x') = 1. Similarly, if x' differs in the last n_2 coordinates, then y'_1 = y_1, and there are exactly n_2/2 coordinate changes such that y'_2 = y'_1, again yielding f(x') = 1. So overall, x has exactly n_1/2 + n_2/2 = n/2 neighbors where f is 1.
The above proposition allows us to construct examples for locally 1/2-biased functions, by combinations of such functions on lower dimensions.
We have two basic examples of locally 1/2-biased functions (both are verified computationally below):
* In any even dimension n,
g_n(x_1,…,x_n) = x_1 ⋯ x_{n/2}.
* In dimension n = 4,
h(x_1,x_2,x_3,x_4) = 1/2(x_1x_2 + x_2x_3 - x_3x_4 + x_1x_4).
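Both examples are easy to confirm by brute force; the following sketch (ours) checks that h is indeed Boolean-valued and that g_4 and h are locally 1/2-biased on {-1,1}^4:

from itertools import product

g4 = lambda x: x[0] * x[1]
h = lambda x: (x[0] * x[1] + x[1] * x[2] - x[2] * x[3] + x[0] * x[3]) // 2

for f in (g4, h):
    for x in product([-1, 1], repeat=4):
        assert f(x) in (-1, 1)             # h takes only the values +-1
        nbrs = [x[:i] + (-x[i],) + x[i + 1:] for i in range(4)]
        assert sum(f(y) == 1 for y in nbrs) == 2   # half of the 4 neighbors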
The Fourier decomposition of a Boolean function is its expansion as a real multilinear polynomial: any function f: {-1,1}^n → ℝ can be written as a sum
f(x_1,…,x_n) = ∑_{S ⊆ {1,…,n}} f̂_S ∏_{i ∈ S} x_i,
where the f̂_S are real coefficients. Such a representation is unique; for a proof and other properties of the Fourier decomposition, see e.g. Chapter 1 in <cit.>.
Automorphisms of the hypercube act on the Fourier decomposition of a Boolean function by permuting the indices and changing the signs of a subset of the variables. Hence, we can show that two Boolean functions are not isomorphic by showing that their Fourier decompositions cannot be mapped into one another by such permutations and sign changes.
In this section, a tensor product of two functions f(x_1, …, x_n) and g(x_1,…, x_m) is a function on disjoint indices, i.e.
h(x_1, …, x_n+m) = f(x_1,…,x_n)· g(x_n+1,…, x_n+m).
There exist functions h_1, h_2, … such that for any k the function h_k is locally 1/2-biased on the 4k-dimensional hypercube and h_k is not isomorphic to any tensor product of h_1,…,h_{k-1}, g_2, g_4, g_6, ….
We define h_1=h, and
h_k = h(∏_{i=0}^{k-1} x_{1+4i}, …, ∏_{i=0}^{k-1} x_{4+4i}),
where h is the function from example <ref>.
By Lemma <ref>, h_k is locally 1/2-biased on {-1,1}^4k.
Assume that h_k is isomorphic to a tensor product of h_1,…,h_{k-1}, g_2, g_4, …, as in Proposition <ref>. If there exist 1 ≤ i ≤ j < k such that both h_i and h_j appear in a product that is isomorphic to h_k, then the Fourier decomposition of the product would have at least 16 different monomials. But h_k has only 4 different monomials, and the functions cannot be isomorphic.
Similarly, if we do not use any of the functions h_1,…,h_k-1, then we get the parity function, which has only one monomial in its Fourier decomposition.
Hence, we may assume that there is only one 1 ≤ i<k such that h_i is in the product. Then, up to an automorphism, this function is of the form
f(x) = h_i(x_1,…,x_{4i}) g_{4k-4i}(x_{4i+1},…,x_{4k}).
On the one hand, by definition of h_k, its Fourier decomposition has pairs of monomials with no shared indices (e.g. the monomials that replace x_1x_2 and x_3x_4 in h_1).
On the other hand, in the decomposition of f, all monomials have shared indices; for example x_4i+1 appears in all monomials. Hence they are not isomorphic.
Using the functions h_1,h_2,… we can give a lower bound for the class of non-isomorphic locally 1/2-biased functions.
The number of non-negative integer solutions to
a_1+2a_2+…+ka_k ≤ k
is at least C4^√(k)/k^1/4, where C>0 is a universal constant.
For any 1≤ℓ≤ k, the number of solutions to (<ref>) is at least the number of solutions to
ℓ a_1+ℓ a_2+…+ℓ a_ℓ≤ k.
It is well known that the number of solutions to this inequality is
(ℓ + ⌊k/ℓ⌋ choose ℓ).
This term is maximized when ℓ^2 = k. Hence, a lower bound for the number of solutions to (<ref>) is
(2√(k) choose √(k)).
By Stirling's formula, the asymptotic of this is (1/√(π)) · 4^√(k)/k^(1/4).
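For intuition about the quality of this bound, a short script (ours) compares the exact number of solutions of a_1 + 2a_2 + … + ka_k ≤ k, computed by dynamic programming over partitions, with the lower bound (2√(k) choose √(k)) at perfect squares k:

from math import comb, isqrt

def count_solutions(k):
    dp = [1] + [0] * k                     # dp[j] = number of partitions of j
    for part in range(1, k + 1):
        for j in range(part, k + 1):
            dp[j] += dp[j - part]
    return sum(dp)                         # solutions with a_1+2a_2+...+ka_k <= k

for k in (16, 25, 36):
    s = isqrt(k)
    print(k, count_solutions(k), comb(2 * s, s))   # exact count vs lower bound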
The number of integer solutions to the equality case is the famous partition function p(k).
Hardy and Ramanujan <cit.> showed precise asymptotics.
Using their result it is possible to show that the number of integer solutions is
∑_{j=1}^k p(j) ∼ C e^(c√(k))/√(k),
with explicit constants C,c>0. For our purposes, the simple estimation in Lemma <ref> is enough.
Let n be even. Let B_1/2^n be a maximal class of non-isomorphic locally 1/2-biased functions, i.e. every two functions in B_1/2^n are non-isomorphic to each other. Then |B_1/2^n| ≥ C·2^√(n)/n^(1/4), where C > 0 is a universal constant.
Let k = ⌊n/4⌋. By Proposition <ref>, we can construct locally 1/2-biased functions as tensor products of h_1,…,h_k and g_1,…,g_n, as follows: choose functions h_{i_1},…,h_{i_j} (with repetitions allowed) such that m := ∑_l 4 i_l ≤ n. Then the tensor product ⊗_l h_{i_l} uses m variables. This can be completed to n variables by tensoring with g_{n-m}.
If two functions use the same h_i's, then they are isomorphic (by change of indices). And if they have a different decomposition of h_i's, then by the same arguments used in Proposition <ref>, they have a different Fourier decomposition and are therefore non-isomorphic. Thus, the isomorphic class of such a function is determined by the number of times each h_i appears in the product.
Hence, the number of non-isomorphic functions we can construct in this manner is the number of solutions to
4a_1+8a_2+⋯+4ka_k ≤ n
where a_1,…,a_k are non-negative integers, a_i representing the number of copies of h_i in the product. Using Lemma <ref>, this number is at least C·4^√(k)/k^(1/4) = C'·2^√(n)/n^(1/4).
It should be noted that a locally 1/2-biased function has a natural condition on its Fourier decomposition.
It might be possible to obtain better bounds on the number of non-isomorphic functions using this condition.
Let f:{-1,1}^n→{-1,1} be a locally 1/2-biased function. Then the Fourier weight at degree n/2 is 1.
Let A_n be the adjacency matrix of the hypercube. The map
f ↦ (f(a_1),…,f(a_2^n)),
where a_1,…,a_2^n are the vertices of the hypercube, is a bijection between locally 1/2-biased functions and the {-1,1}-valued vectors in the null space of A_n. Since
A_n = [ A_{n-1} I; I A_{n-1} ],
we have
P_n(t) = P_{n-1}(t-1) P_{n-1}(t+1),
where P_n is the characteristic polynomial of A_n.
For A_2 the eigenvalue 0 has multiplicity 2 and each of ±2 has multiplicity 1. Continuing by induction, the eigenvalues of A_m are -m, -m+2, …, m with multiplicities (m choose 0), (m choose 1), …, (m choose m).
Hence, for even n the dimension of the null space is (n choose n/2).
For any S ⊆ {1,2,…,n} with |S| = n/2 we denote χ_S(x) = ∏_{i ∈ S} x_i.
These functions are all locally 1/2-biased, hence we can define v_S as the image of χ_S in the null space under the above bijection. Note that there are (n choose n/2) such vectors, and they are linearly independent. Hence the set {v_S}_S is a basis of the null space. By the bijection we get that every locally 1/2-biased function is a linear combination of the χ_S, so all of its Fourier weight sits at degree n/2.
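The proposition can be checked directly on the example h (our brute-force sketch): computing the coefficients f̂_S = E[f(x) ∏_{i∈S} x_i] shows that all the squared Fourier mass of h sits at degree n/2 = 2:

from itertools import product, combinations
from math import prod

h = lambda x: (x[0] * x[1] + x[1] * x[2] - x[2] * x[3] + x[0] * x[3]) // 2
cube = list(product([-1, 1], repeat=4))

def fhat(S):                               # Fourier coefficient of h at S
    return sum(h(x) * prod(x[i] for i in S) for x in cube) / len(cube)

weight = {d: sum(fhat(S) ** 2 for S in combinations(range(4), d))
          for d in range(5)}
assert abs(weight[2] - 1) < 1e-12          # all weight at degree 2
assert all(abs(weight[d]) < 1e-12 for d in (0, 1, 3, 4))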
Bounds on the class sizes for locally 1/n-biased functions can also be obtained via the following proposition.
Let n=2^k, and let C_1 and C_2 be two non-isomorphic distance-3 perfect codes on the n-1-dimensional hypercube. Then the two functions f_1 and f_2 obtained by using C_1 and C_2 as the perfect codes in the proof of Lemma <ref> are non-isomorphic.
Suppose to the contrary that f_1 and f_2 are isomorphic, i.e. there is an automorphism φ: {-1,1}^n → {-1,1}^n such that for all x ∈ {-1,1}^n, we have f_1(x) = f_2(φ(x)). Denote by B = {(y,1) : y ∈ {-1,1}^(n-1)} the (n-1)-dimensional hypercube obtained by fixing the last coordinate to 1, denote C = {(y,1) : y ∈ C_2}, and note that support(f_2|_B) = C by construction. Consider φ|_B, the restriction of φ to B. This restriction is an isomorphism between B and some (n-1)-dimensional hypercube A contained within the n-dimensional hypercube. Any sub-hypercube of dimension n-1 is obtained from {-1,1}^n by fixing one of the coordinates to be either 1 or -1, and taking the span of all other coordinates. Then A must be spanned by the first n-1 coordinates, leaving the last coordinate fixed: otherwise, by the construction of Lemma <ref>, the set A would contain two neighboring points x and x' that differ only in their last coordinate such that f_1(x) = f_1(x') = 1. This means there are y, y' ∈ C obeying φ(y) = x, φ(y') = x'; but this is a contradiction, since φ should preserve distances, and the distance between x and x' is 1 while the distance between y and y' is 3. So A = {(y,b) : y ∈ {-1,1}^(n-1)} for some b ∈ {-1,1}. But then φ|_B is an isomorphism between C_1 and C_2, since C_1 is a perfect code in A and C_2 is a perfect code in B; a contradiction.
Let n=2^k. Let B_1/n^n be a maximal class of non-isomorphic locally 1/n-biased functions. Then |B_1/n^n| is super-exponential in n.
By Proposition <ref>, any lower bound on the number of non-isomorphic perfect codes on the n-1-dimensional hypercube gives a lower bound to the number of locally 1/n-biased functions on the n-dimensional hypercube. Recent constructions, such as in <cit.>, give a super-exponential lower bound on the number of such perfect codes.
We would have liked to apply the same argument to locally m/n-biased functions, as given by the construction in Lemma <ref>. Our argument there used the explicit construction of the Hamming code which, being linear, was easy to modify in order to obtain functions with disjoint supports. Such is not the case for the construction of non-linear codes. However, we still believe that similar estimates are true for any permissible p.
By Proposition <ref>, scenery reconstruction is impossible for even-dimensional hypercubes of dimension at least 4.
For odd dimensional hypercubes, on which there are no non-trivial locally biased functions, we use locally stable functions instead, as described in the next section.
§ LOCALLY P-STABLE FUNCTIONS
Unlike locally p-biased functions, there is no restriction on permissible p values for locally p-stable functions:
Let p = m/n for some m ∈ {0,1,…,n}. Then the parity function on n-m variables,
f(x_1,…,x_n) = x_{m+1} x_{m+2} ⋯ x_n,
is locally p-stable.
Thus we will focus on the number of non-isomorphic pairs of locally stable functions. A negative result is attainable by a simple examination:
If p = 1/n or p = (n-1)/n, then the parity function is the only locally p-stable function on the hypercube, up to isomorphism.
We prove only for p = (n-1)/n; the proof for p=1/n is similar.
We will show that f depends only on a single coordinate. Let x be an initial point in the hypercube and y its unique neighbor such that f(x) ≠ f(y). Denote the coordinate in which they differ by i. By local stability, every other neighbor x' of x has f(x') = f(x), and every other neighbor y' of y has f(y') = f(y).
Let j ≠ i, let x̃ be the neighbor of x that differs from x in coordinate j, and let ỹ be the neighbor of y that differs from y in coordinate j. Then x̃ is a neighbor of ỹ, since x̃ and ỹ differ only in the i-th coordinate. Also, since f(x) = f(x̃) and f(y) = f(ỹ) but f(x) ≠ f(y), we have f(x̃) ≠ f(ỹ).
Since f is locally (n-1)/n-stable, each of x's neighbors x' has exactly one neighbor y' on which f attains the opposite value. By the above, for each such x', the corresponding y' differs from it in the i-th coordinate. This reasoning can be repeated, choosing a neighbor of x as the initial starting point, showing that for all x' with the same i-th coordinate as x, f(x) = f(x'), while for all x' that differ in the i-th coordinate from x, f(x) ≠ f(x'). This means that either f(x) = x_i or f(x) = -x_i.
Many other p values, however, have larger classes of non-isomorphic locally p-stable functions, since locally stable functions can be built out of locally 1/2-biased functions:
Let n > 0 be an even integer. For every locally 1/2-biased function f on the n-dimensional hypercube, there exists a locally (n/2)/(n+1)-stable function f' on the n+1-dimensional hypercube. Further, if f and g are two non-isomorphic locally 1/2-biased functions, then f' and g' are also non-isomorphic.
Define f' by
f'(x_1,…,x_n+1) = f(x_1,…, x_n) · x_n+1.
Let x ∈ {-1,1}^(n+1) be a point in the (n+1)-dimensional hypercube. Since f is locally 1/2-biased, the function f' attains the value 1 on exactly half of x's neighbors which differ from x in one of the first n coordinates, and the value -1 on the other half of these neighbors. In particular, f' retains its value on exactly half of the neighbors which differ in the first n coordinates. For the neighbor that differs from x in the last coordinate, though, f' flips its sign. Therefore f' retains its value on an (n/2)/(n+1) fraction of x's neighbors, so f' is locally (n/2)/(n+1)-stable.
The claim about non-isomorphism follows directly from the functions' Fourier decomposition.
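For instance (our check), multiplying the locally 1/2-biased function h on {-1,1}^4 by a fresh fifth coordinate yields a locally 2/5-stable function on {-1,1}^5:

from itertools import product

h = lambda x: (x[0] * x[1] + x[1] * x[2] - x[2] * x[3] + x[0] * x[3]) // 2
f_prime = lambda x: h(x[:4]) * x[4]

for x in product([-1, 1], repeat=5):
    nbrs = [x[:i] + (-x[i],) + x[i + 1:] for i in range(5)]
    assert sum(f_prime(y) == f_prime(x) for y in nbrs) == 2   # (n/2)/(n+1) = 2/5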
Observe that unlike locally biased functions, locally stable functions can be easily extended to higher dimension:
Let f be a locally (n-m)/n-stable function. Then f can be extended to hypercubes of dimension n' ≥ n by simply ignoring all but the first n coordinates. This gives a locally (n'-m)/n'-stable function.
We can use this observation to give a lower bound on the number of locally (n'-m)/n'-stable functions for a fixed m and any n' ≥ 2m-1.
This works as follows: first, pick any fixed m > 1. Using Proposition <ref> with n = 2m-2, we obtain a locally (n/2)/(n+1) = ((2m-1)-m)/(2m-1)-stable function on the (2m-1)-dimensional hypercube. This can be extended by Observation <ref> to any n' ≥ 2m-1, and together with Proposition <ref> we get a lower bound of C·2^√(2m-2)/(2m-2)^(1/4) different locally (n'-m)/n'-stable functions.
This observation also provides us with a pair of non-isomorphic locally stable functions for all hypercubes of dimension n≥ 5, showing that:
Scenery reconstruction is impossible for n-dimensional hypercubes for n≥5.
§ OTHER DIRECTIONS AND OPEN QUESTIONS
In this section we discuss similar results and questions for other graphs. We also list some further questions regarding locally biased and locally stable functions on the hypercube. For other excellent open problems see <cit.>.
§.§ Hypercube reconstruction
Our work shows that in general, Boolean functions on the hypercube cannot be reconstructed.
Under which conditions is it possible to reconstruct Boolean functions on the hypercube?
Is a random Boolean function reconstructible with high probability?
Using the techniques of <cit.>, it can be shown that reconstruction is always possible in the hypercube of dimension at most 3.
§.§ Other graphs
Note that the necessity condition on p of Theorem <ref> can be applied to any finite regular graph, ruling out functions based on the relation between the graph degree and the number of vertices.
§.§ Trees
Let G be an n-regular infinite tree. Then for any p = b/n, b = 0,1,…,n there exists a locally p-biased function. Such a function can be found greedily by picking a root vertex v ∈ G, setting f(v) = 1, and iteratively assigning values to vertices further away in any way that meets the constraints.
Notice that the method above requires picking some initial vertex, and that the method yields many possible functions on labeled trees (all of which are isomorphic when we remove the labels). Once the initial vertex v has been fixed, it is possible to generate a distribution on locally p-biased functions, by setting f(v) to be 1 with probability b/n, and randomly expanding from there.
For an n-regular tree G, find an invariant probability measure on locally p-biased functions that commutes with the automorphisms of the tree.
§.§ The standard lattice
The following propositions show that there is a one-to-one mapping of locally p-biased functions from the hypercube to ℤ^n. Since automorphisms of the lattice can be pulled back to automorphisms of the hypercube, we get lower bounds for the size of classes of non-isomorphic locally p-biased functions on ℤ^n.
Let f: {-1,1}^n → {-1,1} be a locally p-biased function.
Then there exists a locally p-biased extension f̃: ℤ^n → {-1,1} such that f̃|_{{-1,1}^n} = f. In addition, if f and g are non-isomorphic locally p-biased functions on the hypercube, then f̃ and g̃ are non-isomorphic.
Here we think of the hypercube as {0, 1}^n instead of {-1,1}^n.
Let
ψ(x_1,…,x_n)=(x_1 mod 2,…,x_n mod 2).
For any t = (t_1,…,t_n) ∈ ℤ^n define Q_t = t + {0,1}^n. Then the set ψ(Q_t) is the hypercube {0,1}^n. Moreover, there exists an automorphism φ of the hypercube such that φ(ψ(x)) = x - t for every x ∈ Q_t.
We define
f̃(x) = f(ψ(x)).
Suppose that f is a locally p-biased function on the hypercube. Note that f̃|_{Q_t} is locally p-biased on Q_t for any t ∈ ℤ^n.
Let x ∈ ℤ^n. Consider Q^+ = Q_x and Q^- = Q_{x-e}, where e = (1,…,1).
Then Q^+ ∩ Q^- = {x}, and the neighbors of x are partitioned such that half of them are in Q^- and the other half are in Q^+.
Since f̃|_{Q^±} is locally p-biased, exactly pn of the n neighbors of x in Q^+ have f̃-value 1, and the same holds true for Q^-. Thus, f̃ is a locally p-biased function on ℤ^n.
Note that the automorphisms of ℤ^n are those of the hypercube with the addition of translations.
Since there exists an isomorphism between any two hypercubes in the tiling (the above-mentioned φ), any isomorphism between f̃ and g̃ would induce one between f and g.
The above extension procedure gives us lower bounds on the growth rate of some classes of non-isomorphic locally p-biased functions.
Let B_p^n be the class of non-isomorphic locally p-biased functions on ℤ^n.
* If n is even, then |B_1/2^n| ≥ C·2^√(n)/n^(1/4), where C > 0 is a universal constant.
* If n is a power of two, then |B_1/n^n| is super-exponential in n.
Unlike for the hypercube, we do not have a characterization theorem for the lattice ℤ^n. In fact, we have found a locally 1/2-biased function for ℤ and a locally 1/4-biased function for ℤ^2; see Figure <ref>. Both of these are not the result of embedding the relevant hypercube in the lattice via Proposition <ref>.
Give a complete characterization of permissible p values for locally p-biased functions on ^n. When such functions exist, count how many there are.
§.§ Cayley Graphs
In general, for a given group with a natural generating set, it is interesting to ask whether its Cayley graph admits locally biased or locally stable functions, and if so, how many. Specific examples which spring to mind are the group of permutations S_n with the set of all transpositions {σ_ij}_{i<j} as generators, and ℤ with any number of generators. For the latter case, the following observation shows that for any two generators, ℤ has a locally 1/2-biased function:
Let a > 1 and b > 1 generate ℤ. Then the function f defined by
f(x) =
1, 0 ≤ (x mod 2(a+b)) < a+b,
-1, a+b ≤ (x mod 2(a+b)) < 2(a+b),
is locally 1/2-biased.
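Since f has period 2(a+b), the observation can be verified over a single period; a minimal check (ours, for a few assumed generator pairs):

def f(x, a, b):
    return 1 if x % (2 * (a + b)) < a + b else -1

for a, b in [(2, 3), (2, 5), (3, 7)]:      # coprime pairs generating Z
    for x in range(2 * (a + b)):
        nbrs = [x + a, x - a, x + b, x - b]
        assert sum(f(y, a, b) == 1 for y in nbrs) == 2   # half of 4 neighbors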
Computer search shows that for some generators, other locally biased functions exist; see Figure <ref> for an example.
Characterize the locally biased and locally stable functions on S_n as a function of its generating set.
Characterize the locally biased and locally stable functions on ℤ as a function of its generating set.
§.§ Locally biased and locally stable functions
Section <ref> only gives lower bounds on the number of locally biased functions, and applies only for p=1/2 and p = 1/n (and 1-1/n by taking negation of functions).
What are the exact asymptotics for the number of non-isomorphic locally biased functions, for all permissible p?
We can also ask about the robustness of the locally biased property:
How do the characterization and counting theorems for locally biased functions change when we relax the locally biased demand for 2^o(n) of the vertices (i.e. a small number of vertices can have their neighbors labeled arbitrarily)?
The uniqueness of locally 1/n-stable functions is in stark contrast to the super-exponential number of locally 1/n-biased functions. Our bounds in Section <ref> for the number of locally (n-m)/n-stable functions grow with m, but not with n. We seek a better understanding of these functions:
What are the exact asymptotics for the number of non-isomorphic locally stable functions?
§ ACKNOWLEDGMENTS
We thank Itai Benjamini for proposing the question of indistinguishability and for his advice, Ronen Eldan for his suggestions on locally stable functions, and David Ellis for the connection to perfect codes. We also thank Noga Alon and Peleg Michaeli for some useful discussions.
http://arxiv.org/abs/1701.07698v2 | 20170126134118 | Mutations on a Random Binary Tree with Measured Boundary | ["Jean-Jil Duchamps", "Amaury Lambert"] | math.PR | ["math.PR", "primary 05C05, 60J80, secondary 54E45, 60G51, 60G55, 60G57, 60K15, 92D10"] |
Consider a random real tree whose leaf set, or boundary, is endowed with a finite mass measure. Each element of the tree is further given a type, or allele, inherited from the most recent atom of a random point measure (infinitely-many-allele model) on the skeleton of the tree. The partition of the boundary into distinct alleles is the so-called allelic partition.
In this paper, we are interested in the infinite trees generated by supercritical, possibly time-inhomogeneous, binary branching processes, and in their boundary, which is the set of particles `co-existing at infinity'. We prove that any such tree can be mapped to a random, compact ultrametric tree called coalescent point process, endowed with a `uniform' measure on its boundary which is the limit as t→∞ of the properly rescaled counting measure of the population at time t.
We prove that the clonal (i.e., carrying the same allele as the root) part of the boundary is a regenerative set that we characterize. We then study the allelic partition of the boundary through the measures of its blocks. We also study the dynamics of the clonal subtree, which is a Markovian increasing tree process as mutations are removed.
Keywords and phrases: coalescent point process; branching process; random point measure; allelic partition; regenerative set; tree-valued process.
MSC2000 subject classifications: primary 05C05, 60J80; secondary 54E45; 60G51; 60G55; 60G57; 60K15; 92D10.
§ INTRODUCTION
In this paper, we give a new flavor of an old problem of mathematical population genetics which is to characterize the so-called allelic partition of a population. To address this problem, one needs to specify a model for the genealogy (i.e., a random tree) and a model for the mutational events (i.e., a point process on the tree). Two typical assumptions that we will adopt here are: the infinite-allele assumption, where each mutation event confers a new type, called allele, to its carrier; and the neutrality of mutations, in the sense that co-existing individuals are exchangeable, regardless of the alleles they carry. Here, our goal is to study the allelic partition of the boundary of some random real trees that can be seen as the limits of properly rescaled binary branching processes.
In a discrete tree, a natural object describing the allelic partition without labeling alleles is the allele frequency spectrum
(A_k)_k≥ 1, where A_k is the number of alleles carried by exactly k co-existing individuals in the population. In the present paper, we start from a time-inhomogeneous, supercritical binary branching process with finite population N(t) at any time t, and we are interested in the allelic partition of individuals `co-existing at infinity' (t→∞), that is the allelic partition at the tree boundary. To define the analogue of the frequency spectrum, we need to equip the tree boundary with a measure ℓ, which we do as follows. Roughly speaking, if N_u(t) is the number of individuals co-existing at time t in the subtree 𝒯_u consisting of descendants of the same fixed individual u, the measure ℓ(𝒯_u) is proportional to lim_t↑∞ N_u(t)/N(t).
It is shown in Section <ref> that the tree boundary of any supercritical branching process endowed with the (properly rescaled) tree metric and the measure ℓ has the same law as a random real tree, called coalescent point process (CPP) generated from a Poisson point process, equipped with the so-called comb metric <cit.> and the Lebesgue measure.
Taking this result for granted, we will focus in Sections <ref>, <ref> and <ref> on coalescent point processes with mutations.
In the literature, various models of random trees and their associated allelic partitions have been considered. The most renowned result in this context is Ewens' Sampling Formula <cit.>, a formula that describes explicitly the distribution of the allele frequency spectrum in a sample of n co-existing individuals taken from a stationary population with genealogy given by the Moran model with population size N and mutations occurring at birth with probability θ/N. When time is rescaled by N and N→∞, this model converges to the Kingman coalescent <cit.> with Poissonian mutations occurring at rate θ along the branches of the coalescent tree. In the same vein, a wealth of recent papers has dealt with the allelic partition of a sample taken from a Λ-coalescent or a Ξ-coalescent with Poissonian mutations, e.g., <cit.>.
In parallel, several authors have studied the allelic partition in the context of branching processes, starting with <cit.> and the monograph <cit.>, see <cit.> and the references therein. In a more recent series of papers <cit.>, the second author and his co-authors have studied the allelic partition at a fixed time of so-called `splitting trees', which are discrete branching trees where individuals live i.i.d. lifetimes and give birth at constant rate. In particular, they obtained the almost sure convergence of the normalized frequency spectrum (A_k(t)/N(t))_k≥ 1 as t→∞ <cit.> as well as the convergence in distribution of the (properly rescaled) sizes of the most abundant alleles <cit.>. The limiting spectrum of these trees is to be contrasted with the spectrum of their limit, which is the subject of the present study, as explained earlier.
Another subject of interest is the allelic partition of the entire progeny of a (sub)critical branching process, as studied in particular in <cit.>. The scaling limit of critical branching trees with mutations is a Brownian tree with Poissonian mutations on its skeleton. Cutting such a tree at the mutation points gives rise to a forest of trees whose distribution is investigated in the last section of <cit.>, and relates to cuts of Aldous' CRT in <cit.> or the Poisson snake process <cit.>.
The two works cited last not only deal with the limits of allelic partitions for the whole discrete tree, but also tackle the limiting object directly. This is also the goal of the present work, but with quite different aims.
First, we construct in Section <ref> an ultrametric tree with boundary measured by a `Lebesgue measure' ℓ, from a Poisson point process with infinite intensity ν, on which we superimpose Poissonian neutral mutations with intensity measure μ. Section <ref> ends with Proposition <ref>, which states that the total number of mutations in any subtree is either finite a.s. or infinite a.s. according to an explicit criterion involving ν and μ.
The structure of the allelic partition at the boundary is studied in detail in Section <ref>. Theorem <ref> ensures that the subset of the boundary carrying no mutations (or clonal set) is a (killed) regenerative set with explicit Laplace exponent in terms of ν and μ, and its measure is given in Corollary <ref>. The mean intensity Λ of the allele frequency spectrum at the boundary is defined by Λ(B) := 𝔼[∑_R 1_{ℓ(R) ∈ B}], where the sum is taken over all allelic clusters R at the boundary. It is explicitly expressed in Proposition <ref>.
Section <ref> is dedicated to the study of the dynamics of the clonal (mutation-free) subtree when mutations are added or removed through a natural coupling of mutations in the case when μ(dx) = θ dx. It is straightforward that this process is Markovian as mutations are added. As mutations are removed, the growth process of clonal trees also is Markovian, and its semigroup and generator are provided in Theorem <ref>.
Section <ref> is devoted to the links between measured coalescent point processes and measured pure-birth trees which motivate the present study. Lemma <ref> gives a representation of every CPP with measured boundary, in terms of a rescaled pure-birth process with boundary measured by the rescaled counting measures at fixed times. Conversely, Theorem <ref> gives a representation of any such pure-birth process in terms of a CPP with intensity measure ν(dx) = dx/x^2, as in the case of the Brownian tree.
§ PRELIMINARIES AND CONSTRUCTION
§.§ Discrete Trees, Real Trees
Let us recall some definitions of discrete and real trees, which will be used to define the tree given by a so-called coalescent point process.
In graph theory, a tree is an acyclic connected graph.
We call discrete trees such graphs that are labeled according to Ulam–Harris–Neveu's notation by labels in the set 𝒰 of finite sequences of non-negative integers:
𝒰 = ⋃_n≥ 0ℤ_+^n = {u_1 u_2 … u_n, u_i ∈ℤ_+, n≥ 0},
with the convention ℤ_+^0 = {∅}.
A rooted discrete tree is a subset 𝒯 of 𝒰 such that
* ∅∈𝒯 and is called the root of 𝒯
* For u = u_1 … u_n ∈𝒯 and 1≤ k < n, we have u_1 … u_k ∈𝒯.
* For u ∈𝒯 and i ∈ℤ_+ such that ui ∈𝒯, for 0≤ j ≤ i, we have uj∈𝒯 and uj is called a child of u.
For n≥ 0, the restriction of 𝒯 to the first n generations is defined by:
𝒯_|n := {u ∈𝒯, |u| ≤ n},
where |u| denotes the length of a finite sequence.
For u,v ∈𝒯, if there is w∈𝒰 such that v = uw, then u is said to be an ancestor of v, noted u ≼ v.
Generally, let u∧ v denote the most recent common ancestor of u and v, that is the longest word u_0 ∈𝒯 such that u_0≼ u and u_0≼ v.
The edges of 𝒯 as a graph join the parents u and their children ui.
For a discrete tree 𝒯, we define the boundary of 𝒯 as
∂𝒯 := {u∈𝒯, u0 ∉𝒯}∪{v ∈ ℤ_+^ℕ, ∀ u ∈𝒰, u≼ v⇒ u∈𝒯},
and we equip ∂𝒯 with the σ-field generated by the family (B_u)_u∈𝒯, where
B_u := {v ∈∂𝒯, u≼ v}.
With a fixed discrete tree 𝒯, a finite measure ℒ on ∂𝒯 is characterized by the values (ℒ(B_u))_u∈𝒯.
Reciprocally, if the number of children of u is finite for each u∈𝒯, then by Carathéodory's extension theorem, any finitely additive map ℒ : {B_u, u∈𝒯} → ℝ_+ extends uniquely into a finite measure ℒ on ∂𝒯.
By assigning a positive length to every edge of a discrete tree, one gets a so-called real tree.
Real trees are defined more generally as follows, see e.g. <cit.>.
A metric space (𝕋, d) is a real tree if for all x, y ∈𝕋,
* There is a unique isometry f_x,y : [0,d(x,y)]→𝕋 such that f_x,y(0) = x and f_x,y(d(x,y)) = y,
* All continuous injective paths from x to y have the same range, equal to
f_x,y([0,d(x,y)]).
This unique path from x to y is written [[x,y]].
The degree of a point x∈𝕋 is defined as the number of connected components of 𝕋∖{x}, so that we may define:
* The leaves of 𝕋 are the points with degree 1.
* The internal nodes of 𝕋 are the points with degree 2.
* The branching points of 𝕋 are the points with degree larger than 2.
One can root a real tree by distinguishing a point ρ∈𝕋, called the root.
From this definition, one can see that for a rooted real tree (𝕋, d, ρ), for all x,y∈𝕋, there exists a unique point a∈𝕋 such that [[ρ, x]]∩[[ρ, y]] = [[ρ, a]].
We call a the most recent common ancestor of x and y, noted x∧ y.
There is also an intrinsic order relation in a rooted tree: if x∧ y = x, that is if x∈ [[ρ, y]], then x is called an ancestor of y, noted x ≼ y.
We will call a rooted real tree a simple tree
if it can be defined from a discrete tree by assigning a length to each edge.
From now on, we will restrict our attention to simple trees.
A simple (real) tree is given by (𝒯, α, ω), where 𝒯⊂𝒰 is a rooted discrete tree, and α and ω are maps from 𝒯 to ℝ satisfying
ζ(u) := ω(u) - α(u) > 0,
∀ u ∈𝒯, ∀ i ∈ℤ_+, ui ∈𝒯⟹α(ui) = ω(u).
Here α(u) and ω(u) are called the birth time and death time of u and ζ(u) is the life length of u.
We will sometimes consider simple trees (𝒯, α, ω, ℒ) equipped with ℒ a measure on their boundary ∂𝒯.
We call a reversed simple tree a triple (𝒯, α, ω) where (𝒯, -α, -ω) is a simple tree.
We may sometimes omit the term “reversed” when the context is clear enough.
The restriction of A = (𝒯, α, ω) to the first n generations is the simple tree defined by
A_|n = (𝒯_|n, α_|𝒯_|n, ω_|𝒯_|n).
One can check that a simple tree (𝒯, α, ω) defines a unique real rooted tree
defined as the completion of (𝕋, d, ρ), with
ρ := (∅, α(∅)),
𝕋 := {ρ}∪⋃_u∈𝒯{u}× (α(u), ω(u)] ⊂𝒰×ℝ,
d((u,x), (v,y)) :=
|x-y| if u ≼ v or v ≼ u,
x+y-2ω(u∧ v) otherwise.
In particular, we have (u,x)∧(v,y) = (u∧ v, ω(u∧ v)).
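As a toy illustration (ours), a simple tree can be stored as a dictionary from Ulam-Harris labels to birth and death times, and the distance above computed through the most recent common ancestor:

tree = {(): (0.0, 1.0), (0,): (1.0, 2.5), (1,): (1.0, 1.8), (0, 0): (2.5, 3.0)}

def mrca(u, v):                            # longest common prefix u ^ v
    k = 0
    while k < min(len(u), len(v)) and u[k] == v[k]:
        k += 1
    return u[:k]

def d(p, q):
    (u, x), (v, y) = p, q
    a = mrca(u, v)
    if a == u or a == v:                   # one lineage contains the other
        return abs(x - y)
    return x + y - 2 * tree[a][1]          # down to omega(u ^ v) and back up

print(d(((0,), 2.0), ((1,), 1.5)))         # 2.0 + 1.5 - 2*1.0 = 1.5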
In this paper, we construct random simple real trees with marks along their branches.
We see these trees as genealogical/phylogenetic trees and the marks as mutations that appear in the course of evolution. We will assume that each new mutation confers a new type, called allele, to its bearer (infinitely-many alleles model).
Our goal is to study the properties of the clonal subtree (individuals who do not bear any mutations, black subtree in Figure <ref>) and of the allelic partition (the partition into bearers of distinct alleles of the population at some fixed time).
§.§ Comb Function
§.§.§ Definition
We now introduce ultrametric trees, using a construction with comb functions following Lambert and Uribe Bravo <cit.>.
Let T>0 and I=[0,T]. Let also f: I → ℝ_+ be such that
#{x ∈ I, f(x) > ϵ} < ∞ for all ϵ > 0.
The pair (f,I) will be called a comb function. For any real number z > max_I f, we define the ultrametric tree of height z associated with (f,I) as the real rooted tree T_f which is the completion of (Sk, ρ, d_f), where Sk ⊂ I×ℝ is the skeleton of the tree, and Sk, ρ and d_f are defined by
ρ := (0,z),
Sk := {0}×(0,z] ∪{(t,y) ∈ I×(0,z], f(t) > y},
d_f : Sk^2 → ℝ_+,
d_f((t,x),(s,y)) = |max_(t,s] f - x| + |max_(t,s] f - y| if t < s,
|x-y| if t = s.
The set {0}× (0, z] ⊂Sk is called the origin branch of the tree.
For t ∈ I, t > 0, we call the lineage of t the subset of the tree L_t ⊂ T_f defined as the closure of the set
{(s, x) ∈Sk, s ≤ t, ∀ s < u ≤ t, f(u) ≤ x }.
For t = 0 one can define L_0 as the closure of the origin branch.
One can check that d_f is a distance which makes (Sk, d_f) a real tree, and so its completion (T_f, d_f) also is a real tree.
Furthermore, the fact that { f > ϵ} is finite for all ϵ > 0 ensures that it is a simple tree, since the branching points in Sk are the points (t, f(t)) with f(t)>0.
For a visual representation of the tree associated with a comb function, see Figure <ref>, where the skeleton is drawn in vertical segments and the dashed horizontal segments represent branching points.
With the same notation as in Definition <ref>, for a fixed comb function (f, I) and a real number z> max_I f, writing T_f for the associated real tree, the following holds.
For each t ∈ I, there is a unique leaf α_t ∈ T_f such that
L_t = [[ρ, α_t]].
Furthermore, the map α : t ↦α_t is measurable with respect to the Borel sets of I and T_f.
For t=0, L_0 is defined as the closure of the origin branch {0}× (0,z].
Since d_f((0, x), (0, y)) = |x-y|, the map
ϕ_0 :
(0, z] ⟶Sk
x ⟼ (0, x)
is an isometry, and since T_f is defined as the completion of the skeleton Sk, there is a unique isometry ϕ̄_0 : [0,z] → T_f which extends ϕ_0.
Therefore we define α_0 := ϕ̄_0(0) ∈ T_f, which satisfies L_0 = [[ρ, α_0]] since ϕ̄_0 is an isometry.
Also α_0 is a leaf of T_f because it is in T_f ∖Sk.
Indeed, since T_f is the completion of Sk which is connected, T_f ∖{α_0} is necessarily also connected, which means that α_0 has degree 1.
Now for a fixed t ∈ I, t>0, write (t_i, x_i)_i ≥ 0 for the (finite or infinite) sequence with values in
{(0, z)}∪{(s, x) ∈ I× (0, ∞), f(s) = x }
defined inductively (as long as they can be defined) by (t_0,x_0) = (0,z) and
∀ i ≥ 0, x_i+1 := max_(t_i, t] f ,
and t_i+1 := max{ s ∈ (t_i, t], f(s) = x_i+1}.
* If the sequence (t_i , x_i)_i≥ 0 is well defined for all i≥ 0, then since f is a comb function, we necessarily have that x_i → 0 as i→∞.
* On the other hand, the sequence (t_i , x_i)_0≤ i ≤ n is finite if and only if it is defined up to an index n such that either t_n = t or f is zero on the interval (t_n, t].
In that case, we still define for convenience x_n+i := 0, t_n+i := t_n for all i ≥ 1.
Now it can be checked that we have
⋃_i=0^∞ [x_i+1, x_i) ∖{0} = (0,z),
and that L_t is defined as the closure of the set
A_t := ⋃_i=0^∞{t_i}× ([x_i+1, x_i)∖{0} ) ⊂Sk.
Also, by definition of the sequence (t_i , x_i)_0≤ i, the distance d_f satisfies, for
(s,x),(u,y)∈ A_t,
d_f((s,x),(u,y)) = |x-y|.
Therefore the following map is an isometry (and it is well defined because x_i ↓ 0).
ϕ_t :
(0, z) ⟶Sk
x ⟼ (t_i, x) if x∈ [x_i+1, x_i) for an index i ≥ 0.
As in the case t = 0, this isometry can be extended to ϕ̄_t : [0,z] → T_f and we define α_t := ϕ̄_t(0).
It is a leaf of T_f satisfying L_t = [[ρ, α_t]] for the same reasons as for 0.
It remains to prove that α : t ↦α_t is measurable.
It is enough to show that it is right-continuous, because in that case the pre-image of an open set is necessarily a countable union of right-open intervals, which is a Borel set.
Now for t < t' ∈ I, by taking limits along the lineages L_t and L_t', it is easily checked that the distance between α_t and α_t' can be written
d_f(α_t, α_t') = 2 max_(t,t'] f,
and since f is a comb function, necessarily we have
max_(t,t'] f → 0 as t' ↓ t.
Hence α is right-continuous, therefore measurable.
It follows from Proposition <ref> that the Lebesgue measure λ on the real interval I can be transported by the map α to a measure on the tree T_f, or more precisely on its boundary, that is the set of its leaves.
With the same notation as in Definition <ref> and Proposition <ref>, for any fixed comb function (f, I) and z> max_I f, writing T_f for the associated real tree, we define the measure ℓ on the boundary of T_f as the measure
ℓ := λ∘α^(-1)
which concentrates on the leaves of the tree.
From now on, we always consider the tree T_f associated with a comb function f as a rooted real tree equipped with the measure ℓ on its boundary.
§.§.§ The Coalescent Point Process
Here we will consider the measured tree associated to a random comb function.
Let ν be a positive measure on (0,∞] such that for all ϵ > 0, we have
ν(ϵ) := ν ([ϵ, ∞]) < ∞,
and let 𝒩 be the support of the Poisson point process on ℝ_+ × (0,∞] with intensity dt ⊗ ν.
Then we can define f^𝒩 as the function whose graph
is 𝒩.
f^𝒩(t) =
x if (t,x) ∈𝒩,
0 if 𝒩∩ ({t}× (0, ∞]) = ∅.
Now fix z>0 such that ν(z) > 0 and set
T(z) := inf{ t ≥ 0, f^𝒩(t) ≥ z }.
The ultrametric random tree associated with I = [0, T(z)) and f^𝒩_|I is called the coalescent point process (CPP) of intensity ν and height z, denoted by CPP(ν,z).
It is equipped with the random measure ℓ, concentrated on the leaves, which is the push-forward of the Lebesgue measure on [0,T(z)) by the map α.
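A CPP(ν, z) with infinite intensity cannot be simulated exactly, but branches deeper than a cutoff ϵ arrive at the finite rate ν(ϵ); the following sketch (ours, for the assumed Brownian-like intensity ν(dx) = dx/x^2) simulates T(z) together with the comb points deeper than ϵ:

import random

def simulate_cpp(z, eps=1e-3):
    # for nu(dx) = dx/x^2: nu(eps) = 1/eps, and the normalized tail is
    # P(depth > x | depth >= eps) = eps/x, sampled as eps/U with U uniform
    rate = 1.0 / eps
    t, comb = 0.0, []
    while True:
        t += random.expovariate(rate)      # next branch time
        depth = eps / random.random()      # inverse-transform tail sample
        if depth >= z:
            return t, comb                 # T(z) and the points of the comb
        comb.append((t, depth))

T_z, comb = simulate_cpp(z=1.0)
print(T_z, len(comb))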
Note that a coalescent point process is not directly related to coalescent theory, a canonical example of which is Kingman's coalescent <cit.>, although there exist links between the two: it is shown in <cit.> that a CPP appears as a scaling limit of the genealogy of individuals having a very recent common ancestor in the Kingman coalescent.
Formally, a CPP is a
random variable valued in the space of finitely measured compact metric spaces endowed with the Gromov-Hausdorff-Prokhorov distance defined in <cit.> as an extension of the more classical Gromov-Hausdorff distance.
Actually, it is easy to check that all the random quantities we handle are measurable, since we are dealing with a construction from a Poisson point process.
§.§ Mutations on a CPP
Here we set up how mutations appear on the random genealogy associated with a CPP of intensity ν.
Let μ be a positive measure on .
We make the following assumptions:
H∀ x > 0, 0 < ν(x) := ν ([x, ∞]) < ∞ and μ(x) := μ([0,x]) < ∞,
μ([0,∞)) = ∞,
ν and μ have no atom on .
We will now define the CPP of intensity ν and height z>0 marked with rate μ.
Recall that the CPP is constructed from the support 𝒩 of a Poisson point process with intensity dt ⊗ ν on ℝ_+ × (0, ∞] and has a root ρ = (0,z).
Define independently for each point N := (t,x) of 𝒩∪{ρ} the Poisson point process M_N of intensity μ on the interval (0,x).
Each atom y∈[0,x] of M_N is a mark (t,y) on the branch {t}×(0, x)⊂Sk at height y.
The family (M_N)_N∈𝒩 therefore defines a point process M on the skeleton of the CPP tree:
M := ∑_(t,x) ∈𝒩∪{ρ}∑_y ∈ M_(t,x)δ_(t,y).
By definition, conditional on Sk, M is a Poisson point process on Sk whose intensity is such that for all non-negative real numbers t and a<b, we have:
𝔼[ M({t}×[a,b]) | {t}×[a,b] ⊂ Sk ] = μ([a,b]).
Let ν, μ be measures satisfying assumption (<ref>).
A coalescent point process with intensity ν, mutation rate μ and height z, denoted CPP(ν, μ, z), is defined as the random CPP(ν, z) given by 𝒩, equipped with the point process M on its skeleton.
* The clonal subtree of the rooted real tree (𝕋, ρ) equipped with mutations M is defined as the subset of 𝕋 formed by the points :
{ x ∈𝕋, M([[ρ, x]]) = 0 }.
Equipped with the distance induced by d, this is also a real tree.
* Given the (ultrametric) rooted real tree (𝕋, ρ) equipped with mutations M and the application α from the real interval I=[0,T(z)) to 𝕋 whose range is included in the leaves of 𝕋, we can define the clonal boundary (or clonal population) R = R(𝕋, M, α) ⊂ I:
R := { t ∈ I, M([[ρ,α_t]]) = 0 }.
This set R is studied in a paper by Philippe Marchal <cit.> for a CPP with ν(dx) = dx/x^2 and mutations at branching points with probability 1-β.
In that case the sets R_β have the same distribution as the range of a β-stable subordinator.
In the present case of Poissonian mutations, R is not stable any longer but we will see in Section <ref> that it remains a regenerative set.
Total number of mutations.
Since μ is a locally finite measure on ℝ_+, the number of mutations on a fixed lineage of the CPP(ν, μ, z) is a Poisson random variable with parameter μ([0,z])<∞, and so is a.s. finite.
However, it is possible that in a clade (here defined as the union of all lineages descending from a fixed point), there are infinitely many mutations with probability 1.
For instance, if μ
is the Lebesgue measure and if ν is such that
∫_0 x ν(dx) = ∞,
we know from the properties of Poisson point processes
that the total length of any clade is a.s. infinite.
In this case, the number of mutations in any clade is also a.s. infinite so that each point x in the skeleton of the tree has a.s. at least one descending lineage with infinitely many mutations. Such a lineage can be displayed by choosing iteratively at each branching point a sub-clade with infinitely many mutations.
One can ask under which conditions this phenomenon occurs. Conditional on the tree of height z, the total number of mutations follows a Poisson distribution with parameter
Λ := μ(z) + ∑_(t,y)∈𝒩, t < T(z)μ(y),
where T(z) is the first time such that there is a point of 𝒩 with height larger than z.
Indeed, the origin branch is of height z and the heights of the other branches are the heights of points of 𝒩.
This number of mutations is finite a.s. on the event A:={Λ<∞} and infinite a.s. on its complement.
But by the properties of Poisson point processes, two cases are distinguished: either A has probability 0 or it has probability 1.
There is the following dichotomy:
∫_0 μ(x) ν(dx) < ∞ ⟹ the total number of mutations is finite a.s.,
∫_0 μ(x) ν(dx) = ∞ ⟹ the number of mutations in any clade is infinite a.s.
In the former case, the total number of mutations has mean
𝔼[Λ] = μ(z) + 1/ν(z) ∫_[0,z] μ(x) ν(dx).
Conditional on T(z), the set 𝒩' := { (t, y) ∈𝒩, t<T(z)} is the support of a Poisson point process on [0, T(z)]×[0,z] with intensity t⊗ν.
Therefore, from basic properties of Poisson point processes, conditional on T(z), Λ = μ(z) + ∑_(t,y)∈𝒩'μ(y) is finite a.s. if and only if
∫_0^T(z) ( ∫_[0,z] ( μ(x) ∧ 1 ) ν(dx) ) dt < ∞ a.s.,
and since T(z) is finite a.s. and μ is increasing, this condition is equivalent to the condition of the proposition.
Now let us write N_tot for the total number of mutations.
The conditional distribution of N_tot given Λ is a Poisson distribution with mean Λ.
Therefore we deduce
𝔼[N_tot] = 𝔼[Λ]
= μ(z) + 𝔼[ ∑_{(t,y)∈𝒩'} μ(y) ]
= μ(z) + 𝔼[ T(z) ∫_[0,z] μ(x) ν(dx) ]
= μ(z) + 1/ν(z) ∫_[0,z] μ(x) ν(dx),
which concludes the proof.
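As a sanity check of this formula, one can estimate 𝔼[N_tot] by Monte Carlo in a case where the integrability condition holds, e.g. ν(dx) = e^{-x} dx and μ(x) = θx, for which 𝔼[N_tot] = θz + θ e^{z}(1 - (1+z)e^{-z}). A minimal Python sketch (ours), continuing the conventions of the simulation above:

import numpy as np
rng = np.random.default_rng(1)
theta, z = 0.5, 2.0

def sample_Lambda():
    # Lambda = mu(z) + sum of mu(y) over branch heights y arriving before T(z)
    lam = theta * z                      # origin branch of height z
    while True:
        y = rng.exponential()            # branch height ~ nu / nu(0) = Exp(1)
        if y >= z:
            return lam                   # this atom realizes T(z)
        lam += theta * y

est = np.mean([sample_Lambda() for _ in range(100_000)])
exact = theta*z + theta*np.exp(z)*(1 - (1 + z)*np.exp(-z))
print(est, exact)                        # the two values should be close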
§ ALLELIC PARTITION AT THE BOUNDARY
In this section, we will identify the clonal boundary R in a mutation-equipped CPP, that is the set of leaves of the tree which do not carry mutations, and characterize the reduced subtree generated by this set.
§.§ Regenerative Set of the Clonal Lineages, Clonal CPP
Denote by 𝕋^z a CPP(ν,μ,z) where ν,μ satisfy assumptions (<ref>).
A leaf of 𝕋^z is said clonal if it carries the same allele as the root.
Recall the canonical map α^z from the real interval [0, T(z)) to the leaves of 𝕋^z (see Proposition <ref>).
The clonal boundary (see Definition <ref>) of 𝕋^z is then the set R^z⊂[0, T(z)) defined as the pre-image of the clonal leaves by the map α^z.
We define the event
O^z := {M_ρ([0, z]) = 0}
that there is no mutation on the origin branch of 𝕋^z. Note that this event has a positive probability equal to e^{-μ(z)}.
By definition, the point process of mutations on the origin branch M_ρ is independent of (M_N)_N∈𝒩.
Therefore conditioning on O^z amounts to considering the tree 𝕋^z equipped with the mutations on its skeleton which are given only by the point processes (M_N)_N∈𝒩.
We now define a random set R, whose distribution depends only on (ν, μ) and not on z, which will allow the characterization of the clonal boundaries R^z conditional on the event O^z.
Recall the notations 𝒩 and (M_N)_N∈𝒩.
For each fixed t ∈ ℝ₊, let (t_i, x_i)_{i≥1} be the (possibly finite) sequence of points of 𝒩 such that
x_1 = sup{ x ∈ [0, ∞], #𝒩∩ (0,t]× [x, ∞] ≥ 1},
t_1 = sup{s ∈ [0,t], (s,x_1) ∈𝒩},
x_i+1 = sup{ x ∈ [0, x_i), #𝒩∩(t_i,t]× [x, ∞] ≥ 1},
t_i+1 = sup{s ∈ (t_i,t], (s,x_i+1) ∈𝒩},
with the convention sup∅ = 0, and where the sequence is finite if there is a n≥ 0 such that x_n = 0.
We define the following random point measure on ℝ₊:
M_t := ∑_i≥ 1, x_i > 0 M_(t_i, x_i)(∩ [x_i+1, x_i]).
Now we define the random set R as:
R := {t ∈ ℝ₊, M_t(ℝ₊) = 0}.
Recall that for a comb function (f,I) and a real number t∈ I, in the proof of Proposition <ref>, we defined a sequence (t_i, x_i)_i≥ 0 in the same way as in the previous definition and we remarked that the lineage L_t of t is the closure of the set
⋃_i≥ 0, x_i > 0{t_i}× ([x_i+1, x_i)∖{0} ) ⊂Sk.
It follows that in the case of the tree 𝕋^z equipped with the mutations M on its skeleton, we have the equality between events
O^z ∩ {M([[ρ, α^z_t ]]) = 0} = O^z ∩ {M_t(ℝ₊) = 0}.
Therefore, on the event O^z, the clonal boundary R^z of the tree 𝕋^z coincides with the restriction of R to the interval [0, T(z)), which explains why we study the set R.
The subtree of 𝕋^z spanned by the clonal boundary R^z is called the reduced clonal subtree and defined as
⋃_t∈ R^z [[ρ, α^z_t]].
Note that it is a Borel subset of 𝕋^z because it is the closure of
⋂_n≥ 1⋃_p≥ n⋃_x ∈ C_p [[ρ, x]],
where C_p is the finite set {x ∈𝕋^z, d(x,ρ) = z(1-1/p), M([[ρ, x]]) = 0}.
The set R is proven to be a regenerative set (see Appendix <ref> for the results used in this paper and the references concerning subordinators and regenerative sets), and the reduced clonal subtree is shown to have the law of a CPP.
The law of R and of the associated reduced clonal subtree can be characterized as follows.
* Under the assumptions (<ref>) and with the preceding notation the random set R is regenerative.
It can be described as the range of a subordinator whose Laplace exponent ϕ is given by:
1/ϕ(λ) = ∫_(0,∞) e^{-μ(x)}/(λ + ν(x)) μ(dx).
* The reduced clonal subtree, that is the subtree spanned by the set R, has the distribution of a CPP with intensity ν^μ, where ν^μ is the positive measure on _+∪{∞} determined by the following equation.
Letting W(x) := (ν(x))^-1 and W^μ(x) := (ν^μ(x))^-1, we have, for all x>0,
W^μ(x) = W(0) + ∫_0^x e^{-μ(z)} dW(z).
The last formula of the theorem is an extension of Proposition 3.1 in <cit.>, where the case when ν is a finite measure and μ(dx) = θ dx is treated.
Here, we allow ν to have infinite mass and μ to take a more general form (provided (<ref>) is satisfied).
Regenerative set.
Here, we prove the first part of the theorem concerning R.
Let (ℱ_t)_t≥ 0 be the natural filtration of the marked CPP defined by:
ℱ_t = σ (𝒩∩([0,t]×_+), M_(s,x), s ≤ t, x≥ 0 ).
To show first that R is (ℱ_t)-progressively measurable, we show that for a fixed t>0, the set
{ (s, ω) ∈ [0,t]×Ω , s ∈R(ω) }
is in ℬ([0,t])⊗ℱ_t.
Basic properties of Poisson point processes ensure there exists an ℱ_t-measurable sequence of random variables giving the coordinates of the mutations in 𝒩∩([0,t]×_+).
Let (U_i, X_i)_i be such a sequence, for instance ranked such that X_i is decreasing as in Figure <ref>.
We also define the following ℱ_t-measurable random variables:
T_i := t ∧inf{s ≥ U_i, (s,x) ∈𝒩, x≥ X_i}.
Now we have
R∩[0,t] = ⋂_i ([0,t]∖[U_i, T_i)),
which proves that the random set R is (ℱ_t)-progressively measurable, and almost-surely left-closed.
Let us now show the regeneration property of R.
Define
H(s,t) := max{x ≥ 0, (u,x) ∈𝒩, s < u ≤ t },
the maximal height of atoms of 𝒩 between s and t.
We will note H(t) := H(0,t) for simplicity.
Remark that
R = { t ≥ 0, M_t ([0,H(t)]) = 0 }.
Let S be a (ℱ_t)-stopping time, and suppose that almost surely, S<∞, and S∈R is not isolated to the right.
From elementary properties of Poisson point processes and the fact that the random variables (M_(s,x))_s ≥ 0, x ≥ 0 are i.i.d, we know that the tree strictly to the right of S is independent of ℱ_S and has the same distribution as the initial tree.
Now since S∈R almost surely, we have, for all t≥ S,
M_t ([0, H(t)]) = M_t ([0,H(S,t)]),
because M_t([H(S,t), H(0,t)]) = M_S([H(S,t), H(0,t)]) = 0, in other words there are no mutations on the lineage of t that is also part of the lineage of S.
As a consequence,
R∩ [S, ∞) = { t ≥ S, M_t ([0,H(S,t)]) = 0 },
which implies that R∩ [S, ∞) - S has the same distribution as R and is independent of ℱ_S.
Therefore it is proven that R has the regenerative property,
so one can compute its Laplace exponent.
Here we are in the simple case where R has a positive Lebesgue measure, and we have in particular, for all t ∈_+,
ℙ(t ∈ R) = 𝔼[ e^{-μ(H_t)} ]
= ∫_[0,∞) ℙ(H_t ∈ dx) e^{-μ(x)}
= ∫_(0,∞) ℙ(H_t ≤ x) e^{-μ(x)} μ(dx)
= ∫_(0,∞) e^{-t ν(x) - μ(x)} μ(dx).
The passage from the second to the third line is done integrating by parts thanks to the assumption that μ is continuous and that μ has an infinite mass.
The last displayed expression is therefore the density with respect to the Lebesgue measure of the renewal measure of R (see Remark <ref>).
This is sufficient to characterize our regenerative set, and the expression given in the Proposition is found by computing the Laplace transform of this measure:
1/ϕ(λ) = ∫_0^∞ e^{-λt} ( ∫_(0,∞) e^{-t ν(x) - μ(x)} μ(dx) ) dt
= ∫_(0,∞) e^{-μ(x)}/(λ + ν(x)) μ(dx),
which concludes the proof of (i).
It is important to note that the particular case of a CPP with intensity ν(dx) = dx/x^2 has the distribution of a (root-centered) sphere of the so-called Brownian CRT (Continuum Random Tree), the real tree whose contour is a Brownian excursion. This is shown for example by Popovic in <cit.> where the term `Continuum genealogical point process' is used to denote what is called here a coalescent point process. The measure ν(dx) = dx/x^2 is the push-forward of the Brownian excursion measure by the application which maps an excursion to its depth. In general, the sphere of radius say r of a totally ordered tree is an ultrametric space whose topology is characterized by the pairwise distances between `consecutive' points at distance r from the root. When the order of the tree is the order associated to a contour process, these distances are the depths of the `consecutive' excursions of the contour process away from r, see e.g. Lambert and Uribe Bravo <cit.>.
If in addition to ν(dx) = dx/x^2, we assume that μ(dt) = θ dt, which amounts to letting Poissonian mutations at constant rate θ on the skeleton of the CRT, we have
1/ϕ_θ(λ) = ∫_0^∞ θ e^{-θx}/(λ + 1/x) dx.
In particular, for all θ, c > 0, we can compute:
ϕ_θ(cλ) = c ϕ_{θ/c}(λ).
This implies the equality in distribution c R_θ =(d) R_{θ/c}.
Nevertheless R_θ is not a so-called `stable' regenerative set, contrary to the sets R_α in <cit.>.
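The scaling identity above is easy to check numerically; the Python sketch below (ours, assuming scipy is available) compares ϕ_θ(cλ) with c ϕ_{θ/c}(λ) by quadrature.

import numpy as np
from scipy.integrate import quad

def inv_phi(theta, lam):
    # 1/phi_theta(lambda) = int_0^inf theta e^{-theta x} / (lam + 1/x) dx
    val, _ = quad(lambda x: theta*np.exp(-theta*x)/(lam + 1.0/x), 0, np.inf)
    return val

theta, lam, c = 1.3, 0.7, 2.5
lhs = 1.0 / inv_phi(theta, c*lam)        # phi_theta(c * lam)
rhs = c / inv_phi(theta/c, lam)          # c * phi_{theta/c}(lam)
print(lhs, rhs)                          # should agree up to quadrature error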
Reduced clonal subtree.
To show that the reduced clonal subtree is a CPP, let us exhibit the Poisson point process that generates it.
Let σ be the subordinator with drift 1 whose range is R and let 𝒩' be the following point process:
𝒩' := { (t,x), t∈_+, x = H(σ_t-,σ_t) > 0 },
where H(s, t) := max{x, (u,x) ∈𝒩, s≤ u≤ t}.
This point process generates the reduced clonal subtree, because H(σ_t-,σ_t) is (up to a factor 1/2) the tree distance between the consecutive leaves σ_t- and σ_t in R.
To complete the proof of the theorem, it is sufficient to show that conditional on the death time ζ of the subordinator σ, 𝒩' is a Poisson point process on [0, ζ)×_+ with intensity t ⊗ν^μ.
This is due to the regenerative property of the process.
For fixed t≥ 0, σ_t is a (ℱ_t)-stopping time which is almost surely in R on the event {σ_t < ∞} = {ζ > t}.
This implies that conditional on {σ_t < ∞}, the marked CPP strictly to the right of σ_t is equal in distribution to the original marked CPP and is independent of ℱ_σ_t.
In particular:
({(s,x) ∈ ℝ₊^2, (σ_t + s, x) ∈ 𝒩}, R∩[σ_t, ∞) - σ_t ) =(d) (𝒩, R).
This implies that 𝒩' ∩ ([t, ∞)×_+) - (t,0) has the same distribution as 𝒩' and is independent of ℱ_σ_t.
For fixed ϵ > 0, let (T_i, X_i)_i≥ 1 be the sequence of atoms of 𝒩' such that X_i > ϵ, ranked with increasing T_i.
Then T_i is a (ℱ_σ_t)-stopping time and the sequence (T_i - T_i-1, X_i)_i≥ 1 is i.i.d., with T_0:=0 for convenience.
It is sufficient to observe that T_1 is an exponential random variable to show that 𝒩' has an intensity of the form t⊗ν^μ:
(T_1 > t+s | T_1 > t) = (H(0, σ_t+s) ≤ϵ| H(0, σ_t) ≤ϵ)
= (H(σ_t, σ_t+s) ≤ϵ| H(0, σ_t) ≤ϵ)
= (H(0, σ_s) ≤ϵ) = (T_1 > s).
It remains to characterize the measure ν^μ by computing W^μ(x).
Note that the following computations are correct thanks to the assumption that ν has no atom, so that W is continuous.
To simplify the notation, let H_t := H(0,t) = max{x, (u,x) ∈𝒩, 0 ≤ u ≤ t}.
Then we can compute:
W^μ(x) = ∫_0^∞ e^{-t ν^μ(x)} dt
= 𝔼[ ∫_0^∞ 1_{H_{σ_t} ≤ x} dt ]
= 𝔼[ ∫_0^∞ 1_{H_u ≤ x} 1_{u ∈ R} du ].
Letting F(y) := ℙ(H_u ≤ y) = e^{-u ν(y)}, we have
ℙ(H_u ≤ x, u ∈ R) = ℙ(H_u = 0) + ∫_0^x ℙ(H_u ∈ dy) e^{-μ(y)}
= F(0) + ∫_0^x e^{-μ(y)} dF(y).
Now dF(y) = u e^{-u ν(y)} ν(dy), hence
W^μ(x) = ∫_0^∞ e^{-u ν(0)} du + ∫_0^x ( ∫_0^∞ u e^{-u ν(y)} du ) e^{-μ(y)} ν(dy)
= 1/ν(0) + ∫_0^x e^{-μ(y)}/ν(y)^2 ν(dy)
= W(0) + ∫_0^x e^{-μ(y)} dW(y),
which concludes the proof.
Equality (<ref>) becomes, letting x→∞,
W^μ(∞) = 𝔼[λ(R)].
In Remark <ref>, we explained that when the contour of a random tree is a strong Markov process as in the case of Brownian motion, the root-centered sphere of radius r of this tree is a CPP. In addition, the intensity measure of this CPP is the measure of the excursion depth under the excursion measure of the contour process (away from r).
Let 𝐧_c denote the excursion measure of the process (B^(c)_t - inf_{s≤t} B^(c)_s)_{t≥0} away from 0, with B^(c) a Brownian motion with drift c, and let h denote the depth of the excursion. In the case ν(dx) = dx/x^2 = 𝐧_0(h ∈ dx) and μ(dx) = θ dx, we have
W^θ(x) = (1 - e^{-θx})/θ = 𝐧_{θ/2}(h ∈ [x,∞))^{-1}.
This is consistent with Proposition 4 in <cit.>, which shows that putting Poissonian random cuts with rate θ along the branches of a standard Brownian CRT yields a tree whose contour process is (e(s)-θ s/2)_s≥ 0 stopped at the first return at 0, where e is the normalized Brownian excursion.
§.§ Measure of the Clonal Population
Recall that for a CPP(ν, μ, z), conditional on O^z (no mutation on the origin branch), the Lebesgue measure λ(R∩[0,T(z))) is equal to the measure ℓ(R^z) of the set of clonal leaves.
Let ν, μ be two measures satisfying assumptions (<ref>).
* With the notation of Theorem <ref>, the random variable λ(R) follows an exponential distribution with mean W^μ (∞).
* In a CPP(ν,μ,z), conditional on O^z, the measure ℓ (R^z) of the set of clonal leaves is an exponential random variable of mean W^μ (z).
Given a subordinator σ with drift 1 and range R, it is known (a quick proof of this can be found in <cit.>) that
λ(R) = inf{ t > 0, σ_t = ∞}.
Now the killing time of the subordinator σ is an exponential random variable of parameter ϕ (0), where ϕ is the Laplace exponent of σ.
We already know from Remark <ref> the mean of that variable:
ϕ(0)^{-1} = 𝔼[λ(R)] = W^μ(∞).
With a fixed height z>0, one is interested in the law of λ(R∩[0,T(z))).
By the properties of Poisson point processes, stopping the CPP at T(z) amounts to changing the intensity measure ν of the CPP for ν̃, with
ν̃ = ν(· ∩ [0,z]) + ν(z) δ_∞.
Then if W̃(x) := ν̃([x,∞])^{-1}, we have
W̃(x) = ( ν([x,∞]∩[0,z]) + ν(z) )^{-1}
= ( ν([x∧z, z]) + ν([z,∞]) )^{-1}
= ( ν([x∧z, ∞]) )^{-1}
= W(x∧z),
and because of the characterization of W^μ given in Theorem <ref>, we also have (W̃)^μ(x) = W^μ(x∧z).
Therefore (W̃)^μ(∞) = W^μ(z), and we can conclude that λ(R∩[0,T(z))) is an exponential random variable of mean W^μ(z).
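This corollary lends itself to a direct simulation test. The Python sketch below (our own illustration) takes ν(dx) = e^{-x} dx and μ(dx) = θ dx, conditions on O^z by simply not marking the origin branch, computes ℓ(R^z) from the right-to-left record structure of the lineages, and compares the empirical mean with W^μ(z) = 1 + (e^{(1-θ)z} - 1)/(1-θ); comparing the empirical variance with the squared mean likewise tests exponentiality.

import numpy as np
rng = np.random.default_rng(3)
theta, z = 0.7, 1.5

def top_mark(y):
    # height of the highest mutation on a branch of height y (0.0 if none)
    k = rng.poisson(theta * y)
    return y * rng.uniform(size=k).max() if k else 0.0

def clonal_measure():
    gaps, heights = [], []
    while True:
        gap, y = rng.exponential(), rng.exponential()
        if y >= z:
            gaps.append(gap)                 # closes the leaf interval at T(z)
            break
        gaps.append(gap); heights.append(y)
    tops = [top_mark(y) for y in heights]
    total = gaps[0]                          # leaves in (0, tau_1): empty lineage
    for j in range(1, len(gaps)):            # leaf interval (tau_j, tau_{j+1})
        a, clonal = 0.0, True
        for i in range(j - 1, -1, -1):       # right-to-left record branches
            if heights[i] > a:               # lineage piece [a, heights[i]]
                if tops[i] > a:
                    clonal = False
                    break
                a = heights[i]
        if clonal:
            total += gaps[j]
    return total

samples = [clonal_measure() for _ in range(20_000)]
w_mu = 1 + (np.exp((1 - theta)*z) - 1) / (1 - theta)   # W^mu(z) for W(x) = e^x
print(np.mean(samples), w_mu)                          # both close to W^mu(z)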
Probability of clonal leaves.
Here, we consider a CPP(ν, μ,z) and aim at computing the probability of existence of clonal leaves in the tree.
In a CPP(ν,μ,z), under the assumptions (<ref>) and with the notation of Theorem <ref>, there is a mutation-free lineage with probability
W(z) e^{-μ(z)}/W^μ(z).
Using a description of CPP trees in terms of birth-death trees (see Section <ref>), the previous result could alternatively be deduced from the expression of the survival probability of a birth-death tree up to a fixed time (see Proposition <ref> in the appendix).
Suppose the CPP(ν, μ, z) is given by the usual construction with the Poisson point processes 𝒩 and (M_N)_n∈𝒩.
We use the regenerative property of the process with respect to the natural filtration (ℱ_t)_t≥ 0 of the marked CPP defined by:
ℱ_t = σ (𝒩∩([0,t]×_+), M_(s,x), s ≤ t, x≥ 0 ).
Let X be the first clone on the real half-line.
X := inf{x ∈ [0,T(z)), M ([[ρ,α_x]]) = 0},
with the convention inf∅ = ∞ and with the usual notation.
Then X is a (ℱ_t)-stopping time, and conditional on {X < ∞}, the law of the tree on the right of X is the same as that of the original tree conditioned on having no mutation on the origin branch.
Let C^z := {X<∞} denote the event of existence of a mutation-free lineage. Recall that R^z denotes the set of clonal leaves and that O^z denotes the event that there is no mutation on the origin branch.
Then we have
𝔼[ ℓ(R^z) ] = ℙ(C^z) 𝔼[ ℓ(R^z) | C^z ]
= ℙ(C^z) 𝔼[ ℓ(R^z ∩ [X,∞) - X) | X < ∞ ]
= ℙ(C^z) 𝔼[ ℓ(R^z) | O^z ]
= ℙ(C^z) W^μ(z),
where the last equality is due to Corollary <ref> (ii).
Furthermore,
𝔼[ ℓ(R^z) ] = 𝔼 ∫_0^T(z) 1_{t∈R} dt
= ∫_0^∞ ℙ(t ∈ R, t < T(z)) dt
= ∫_0^∞ e^{-t ν(z)} e^{-μ(z)} dt
= e^{-μ(z)}/ν(z) = W(z) e^{-μ(z)}.
Therefore, the probability that there exists a clone of the origin in the present population is
ℙ(C^z) = W(z) e^{-μ(z)}/W^μ(z),
which concludes the proof.
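In the Brownian case ν(dx) = dx/x^2 with μ(dx) = θ dx, this probability takes the explicit form θz e^{-θz}/(1 - e^{-θz}), since W(z) = z and W^μ(z) = (1-e^{-θz})/θ. A one-line Python evaluation (ours):

import numpy as np

def p_clonal_brownian(theta, z):
    # P(C^z) = W(z) e^{-mu(z)} / W^mu(z), with W(z) = z and
    # W^mu(z) = (1 - e^{-theta z}) / theta in the Brownian case
    return z * theta * np.exp(-theta*z) / (1 - np.exp(-theta*z))

# tends to 1 as theta*z -> 0 and to 0 as theta*z -> infinity, as expected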
§.§ Application to the Allele Frequency Spectrum
§.§.§ Intensity of the Spectrum
From now on we fix two measures ν, μ satisfying assumptions (<ref>), and we further assume for simplicity that ν(z) ∈ (0,∞) for all z>0. We denote by 𝕋^z a CPP(ν, μ, z).
Under the infinitely-many alleles model, recall that each mutation gives rise to a new type called allele, so that the population on the boundary of the tree can be partitioned into carriers of the same allele, called allelic partition. The key idea of this section is that expressions obtained for the clonal population of the tree allow us to gain information on quantities related to the whole allelic partition. We call m∈𝕋^z a mutation if M({m})≠0 and denote by 𝕋^z_m the subtree descending from m.
If f is a functional of real trees (say simple, marked, equipped with a measure on the leaves), one might be interested in the quantity
ϕ(𝕋^z,f) := ∑_ mutationm ∈𝕋^z f(𝕋^z_m),
or in its expectation
ψ(z,f) := [ϕ(𝕋^z,f) ].
For each mutation m∈𝕋^z, we define the set R_m^z of the leaves carrying m as their last mutation
R_m^z := {t ∈ ℝ₊, the most recent mutation on the lineage of α_t^z is m}.
We define the random point measure putting mass on the measures of the different allelic clusters
Φ_z := ∑_{mutation m ∈ 𝕋^z} 1_{R_m^z ≠ ∅} δ_{λ(R_m^z)}.
The intensity of the allele frequency spectrum is the mean measure Λ_z of this point measure, that is the measure on _+ such that for every Borel set B of _+,
Λ_z(B) = 𝔼[Φ_z(B)].
The analog for this measure when the number of individuals in the population is finite is the mean measure (𝔼 A(k))_{k>0} of the number A(k) of alleles carried by exactly k individuals (notation A_θ(k,t) in <cit.> and <cit.>).
The goal here is then to identify Λ_z, by noticing that for a Borel set B,
Φ_z(B) = ϕ(𝕋^z, f_B) and Λ_z(B) = ψ(z,f_B),
with f_B(𝕋) := 1_{ℓ(R) ∈ B}, where 𝕋 is an ultrametric tree with point mutations and measure ℓ supported by its leaves, and R denotes the set of its clonal leaves.
In a CPP(ν, μ, z), under the assumptions (<ref>) and with the notation of Theorem <ref>, the intensity of the allele frequency spectrum has a density with respect to the Lebesgue measure:
Λ_z(dq)/dq = W(z) ( e^{-μ(z)}/W^μ(z)^2 e^{-q/W^μ(z)} + ∫_[0,z) e^{-μ(x)}/W^μ(x)^2 e^{-q/W^μ(x)} μ(dx) ).
This expression is to be compared with Corollary 4.3 in <cit.> (the term (1 - 1/W^θ(x))^{k-1} with discrete k becoming here e^{-q/W^μ(x)} with continuous q).
Integrating this expression, we get the expectation of the number of different alleles in the population:
Λ_z(ℝ₊) = 𝔼[Φ_z(ℝ₊)] = W(z) ( e^{-μ(z)}/W^μ(z) + ∫_[0,z) e^{-μ(x)}/W^μ(x) μ(dx) ).
Note that W(z) is the expectation of the total mass of the measure in a CPP(ν, μ, z).
It is then natural to normalize by this quantity and then let z→∞.
In (<ref>) we assumed that μ([0,∞)) = ∞, and since W^μ(z) is an increasing, positive function of z, we clearly have e^{-μ(z)}/W^μ(z) → 0 as z → ∞.
Therefore we have
lim_{z→∞} 𝔼[Φ_z(ℝ₊)]/W(z) = ∫_ℝ₊ e^{-μ(x)}/W^μ(x) μ(dx).
This provides us with a limiting spectrum intensity, written simply Λ:
Λ(dq)/dq := lim_{z→∞} 1/W(z) ( Λ_z(dq)/dq ) = ∫_ℝ₊ e^{-μ(x)}/W^μ(x)^2 e^{-q/W^μ(x)} μ(dx).
Note that in the Brownian case ν(dx) = dx/x^2, we get the simple expression Λ(dq) = (θ/q) e^{-θq} dq.
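The Brownian expression can be recovered from the general formula by quadrature; the Python sketch below (ours, scipy assumed) evaluates the integral at a fixed q and compares it with (θ/q)e^{-θq}.

import numpy as np
from scipy.integrate import quad

theta, q = 1.0, 0.8
W = lambda x: (1 - np.exp(-theta*x))/theta   # W^mu(x) in the Brownian case

def integrand(x):
    if x < 1e-9:                             # the integrand vanishes at 0
        return 0.0
    return np.exp(-theta*x)/W(x)**2 * np.exp(-q/W(x)) * theta

approx, _ = quad(integrand, 0, np.inf)
exact = theta/q * np.exp(-theta*q)
print(approx, exact)                          # should agree closely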
We aim at computing ψ(z,f), for f a measurable non-negative function of a simple real tree 𝕋 with point mutations equipped with a measure ℓ on its leaves.
Suppose the mutations (M_n)_n≥ 1 on the tree 𝕋 are numbered by increasing distances from the root.
Here we use the fact that a CPP can be seen as the genealogy of a birth-death process (see Section <ref> for the development of this argument), a Markovian branching process whose time parameter is the distance from the root.
This description implies that, for all n≥ 1, conditional on the height H_n of mutation M_n, the subtree growing from M_n has the law of 𝕋^H_n.
Set
f̄(x) := 𝔼[f(𝕋^x)].
Denoting H_n^z the height of the n-th mutation M_n^z of 𝕋^z, we get
ψ(z,f) = 𝔼[∑_n f({subtree of 𝕋^z growing from M_n^z}) ]
= ∑_n 𝔼[f({subtree of 𝕋^z growing from M_n^z}) ]
= ∑_n 𝔼[f̄(H_n^z) ]
= 𝔼[∑_n f̄(H_n^z) ].
Now this expression is simple to compute knowing f̄ and the intensity of the point process giving mutation heights.
Indeed, by elementary properties of Poisson point processes
𝔼[∑_n f̄(H_n^z) ] =
𝔼[ f̄(z) + ∑_{y ∈ M_(0,z)} f̄(y) + ∑_{(t,x)∈𝒩, t ≤ T(z)} ( ∑_{y ∈ M_(t,x)} f̄(y) ) ]
= f̄(z) + ∫_[0,z) f̄(x) μ(dx) + 𝔼[ T(z) ∫_[0,z) ν(dy) ∫_[0,y) f̄(x) μ(dx) ]
= f̄(z) + ∫_[0,z) f̄(x) μ(dx) + 1/ν(z) ∫_[0,z) f̄(x) (ν(x) - ν(z)) μ(dx)
= f̄(z) + W(z) ∫_[0,z) f̄(x)/W(x) μ(dx).
Now consider, for a fixed q > 0, the function f given by f(𝕋) := 1_{ℓ(R) > q}, where 𝕋 is a generic ultrametric tree with point mutations and measure ℓ supported by its leaves, and R denotes the set of its clonal leaves.
This allows us to compute the expectation Λ_z((q,∞)) of the number of mutations carried by a population of leaves of measure greater than q.
Since the law of the measure of clonal leaves in a CPP is known (see Corollary <ref>), we deduce
f̄(z) = ℙ(C^z) ℙ(ℓ(R^z) > q | C^z)
= ℙ(C^z) ℙ(λ(R∩[0,T(z))) > q)
= W(z) e^{-μ(z)}/W^μ(z) · e^{-q/W^μ(z)},
where C^z again denotes the event of existence of clonal leaves in 𝕋^z and R is the set defined in Definition <ref>.
Thus we have
Λ_z((q,∞)) = 𝔼[Φ_z((q,∞))]
= W(z) ( e^{-μ(z)}/W^μ(z) e^{-q/W^μ(z)} + ∫_[0,z) e^{-μ(x)}/W^μ(x) e^{-q/W^μ(x)} μ(dx) ).
Differentiating the last quantity yields the expression in the Proposition.
§.§.§ Convergence Results for Small Families
Recall the construction of a CPP from a Poisson point process 𝒩 in Section <ref>, and the point processes of mutations (M_N)_N∈𝒩.
Since a CPP(ν, μ, z) is given by the points of 𝒩 with first component smaller than T(z), this construction yields a coupling of (𝕋^z)_z>0, where for each z>0, 𝕋^z is a CPP(ν,μ,z). Recall the notation Φ_z from the previous subsection.
Then, similarly to Theorem 3.1 in <cit.>, we have the following almost sure convergence.
Under the preceding assumptions, and further assuming ν({∞})=0, for any q> 0, we have the convergence:
lim_{z→∞} Φ_z((q,∞))/T(z) = ∫_ℝ₊ e^{-μ(x)}/W^μ(x) e^{-q/W^μ(x)} μ(dx) = Λ((q,∞)) a.s.
Recall that Φ_z((q,∞)) is the number of alleles carried by a population of leaves of measure larger than q in the tree 𝕋^z, and T(z) is the total size of the population of 𝕋^z.
The result is a strong law of large numbers: it shows that the number of small families (with a fixed size) grows linearly with the total measure of the tree, at a constant speed given by the measure Λ defined by (<ref>) as the limiting allele frequency spectrum intensity.
We will use the law of large numbers several times.
Let us first introduce some notation.
For z>0, define (T_i(z))_i≥ 1 as the increasing sequence of first components of the atoms of 𝒩 with second component larger than z, that is T_1(z) = T(z) and for any i≥ 1
T_i+1(z) = inf{t > T_i(z), ∃ x > z, (t,x) ∈𝒩}.
For z < z', let N(z,z'):=#{(t,x)∈𝒩: t≤ T(z'), x >z}, that is the unique number n such that
T_n(z) = T(z').
Notice that the assumptions ν(z) ∈ (0,∞) for all z>0 and ν({∞}) =0 imply that T(z')→∞ and N(z, z')→∞ as z'→∞, for a fixed z.
Because the times (T_{i+1}(z) - T_i(z))_{i≥1} are i.i.d. exponential random variables with mean W(z) and since we have
T(z') = T(z) + ∑_{i=1}^{N(z,z')-1} (T_{i+1}(z) - T_i(z)),
it is clear by the strong law of large numbers
that
T(z')/N(z,z') ⟶ W(z) a.s. as z' → ∞.
Also, write 𝕋^z_1, …, 𝕋^z_N(z,z') for the sequence of subtrees of height z within 𝕋^z' that are separated by the branches higher than z.
That is, 𝕋^z_i is the ultrametric tree generated by the points of 𝒩 with first component between T_i-1(z) and T_i(z).
From basic properties of Poisson point processes, they are i.i.d. and their distribution is that of 𝕋^z.
Now, write h(𝕋) for the height of an ultrametric tree (i.e., the distance between the root and any of its leaves), and take any non-negative, measurable function f of simple trees, such that
f(𝕋) = 0 if h(𝕋) > z. (∗)
Recall the definition of ϕ(𝕋, f).
Since f satisfies (<ref>), we can write
ϕ(𝕋^z',f) = ∑_i=1^N(z,z')ϕ(𝕋^z_i, f).
Therefore, again by the strong law of large numbers, we have the following convergence
ϕ(𝕋^z',f)/N(z,z') ⟶ 𝔼[ϕ(𝕋^z,f)] = ψ(z, f) a.s. as z' → ∞.
Combining the two convergence results, it follows that
ϕ(𝕋^z',f)/T(z') ⟶ ψ(z, f)/W(z) a.s. as z' → ∞.
Let us apply this to the function f(𝕋) = 1_{ℓ(R) > q}.
This function f does not satisfy (<ref>) for any z>0, so we cannot apply (<ref>) directly because (<ref>) does not hold.
However, we can artificially truncate f by defining the restriction f^z:
f^z(𝕋) := f(𝕋) 1_{h(𝕋) < z},
which does satisfy (<ref>).
Now since f^z≤ f, we have the inequality between random variables
ϕ(𝕋^z', f^z) ≤ϕ(𝕋^z',f),
and by taking limits,
ψ(z, f)/W(z)≤lim inf_z'→∞ϕ(𝕋^z', f)/T(z') a.s.
But we have ψ(z, f)=Λ_z((q,∞)) and as a consequence of Proposition <ref>, we have
Λ_z((q,∞))/W(z) = e^{-μ(z)}/W^μ(z) e^{-q/W^μ(z)} + ∫_[0,z) e^{-μ(x)}/W^μ(x) e^{-q/W^μ(x)} μ(dx)
⟶ ∫_ℝ₊ e^{-μ(x)}/W^μ(x) e^{-q/W^μ(x)} μ(dx) as z → ∞,
which is Λ((q,∞)) by definition.
Therefore, we now have the inequality
Λ((q,∞)) ≤lim inf_z'→∞ϕ(𝕋^z', f)/T(z') a.s.
The converse inequality stems from a simple remark.
There are at most N(z,z') mutations of height greater than z giving rise to an allele carried by some leaves of 𝕋^z'.
This is simply because a population of n individuals can exhibit at most n different alleles.
Therefore, we have
ϕ(𝕋^z', f) ≤ϕ(𝕋^z',f^z) + N(z,z'),
which gives by taking limits
lim sup_{z'→∞} ϕ(𝕋^z', f)/T(z') ≤ (ψ(z, f)+1)/W(z) ⟶ Λ((q,∞)) a.s. as z → ∞.
We can finally conclude
ϕ(𝕋^z, f)/T(z) ⟶ Λ((q,∞)) a.s. as z → ∞,
which is the announced result.
§ THE CLONAL TREE PROCESS
In this section we consider the clonal subtree A^z of a random tree 𝕋^z with distribution CPP(ν, μ, z), where ν, μ are measures satisfying assumptions (<ref>) and z>0.
We further assume ν() = ∞, that is we ignore the case when 𝕋^z is a finite tree almost surely.
We will focus on the case when μ(dx)=θ dx.
§.§ Clonal Tree Process
There is a natural coupling in θ of the Poisson processes of mutations, in such a way that the sets of mutations are increasing in θ for the inclusion.
Let 𝕄 denote a Poisson point process with Lebesgue intensity on ℝ₊^2, and define for θ ≥ 0,
𝕄^θ := 𝕄([0, θ] × ·).
Then 𝕄^θ is a Poisson point process on _+ with intensity θ dx, and the sequence of supports of 𝕄^θ increases with θ.
Let us use this idea to couple mutations with different intensities on the random tree 𝕋^z. Recall the construction of a CPP with a Poisson point process 𝒩 in Section <ref>.
For each point N = (t,x) of 𝒩∪{(0,z)}, let M_N be a Poisson point process on ℝ₊×[0,x] with Lebesgue intensity.
For fixed θ≥ 0, we get the original construction with μ (dx)= θ dx when considering
M^θ_N := M_N([0, θ] × ·).
Therefore a natural coupling of mutations of different intensities (M^θ)_θ∈_+ is defined on the random tree 𝕋^z.
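Concretely, the coupling amounts to attaching to every mark a "level" coordinate and keeping, at intensity θ, only the marks of level at most θ, as in the following Python sketch (ours):

import numpy as np

def coupled_marks(x, level_max, rng):
    # marks on a branch of height x: pairs (level, height), Lebesgue intensity
    n = rng.poisson(level_max * x)
    return list(zip(rng.uniform(0, level_max, n), rng.uniform(0, x, n)))

def marks_at(theta, pairs):
    # the rate-theta marking keeps the pairs of level <= theta; as theta
    # decreases, the retained sets are nested, which realizes the coupling
    return [h for (lvl, h) in pairs if lvl <= theta]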
Denote A^z_θ the clonal subtree of height z at mutation level θ, that is the subtree of 𝕋^z defined by
A^z_θ := { x ∈𝕋^z, M^θ([[ρ, x]]) = 0 }.
It is natural to seek to describe the decreasing process of clonal subtrees (A^z_θ)_θ∈_+.
As θ increases, it is clearly a Markov process since the distribution of A^z_θ+θ' given A^z_θ is the law of the clonal tree obtained after adding mutations at a rate θ' along the branches of A^z_θ.
We will now study the Markovian evolution of the time-reversed process, as θ decreases. Its transitions are relatively simple to describe using grafts of trees.
§.§ Grafts of Real Trees
Given two real rooted trees (𝕋_1, d_1, ρ_1), (𝕋_2, d_2, ρ_2), and a graft point g ∈𝕋_1, one can define the real rooted tree that is the graft of the root of 𝕋_2 on 𝕋_1 at point g by
𝕋_1 ⊕_g 𝕋_2 := (𝕋_1 ⊔𝕋_2∖{ρ_2}, d, ρ_1),
with the new distance d defined as follows. For any x,y∈𝕋_1 ⊔𝕋_2,
d(x,y) := d_i(x,y) if x,y∈𝕋_i for i ∈{1,2},
and
d(x,y) := d_1(x,g) + d_2(ρ_2, y) if x∈𝕋_1, y∈𝕋_2.
For real simple trees, this graft has a nice representation when the graft point is a leaf of the first tree.
For a simple tree A = (𝒯, α, ω), define the buds of A as the set ℬ(A) of leaves of 𝒯 that live a finite time
ℬ(A) := {b ∈𝒯, b0 ∉𝒯, ω(b) < ∞}.
For two simple trees A_i = (𝒯_i, α_i, ω_i) with i ∈{1,2}, and for b ∈ℬ(A_1), we define the graft of A_2 on A_1 on the bud b, denoted A_1 ⊕_b A_2 by:
𝒯 := 𝒯_1 ∪ b𝒯_2,
α(b) := α_1(b), ω(b) := ω_1(b)+ζ_2(∅),
∀ u ∈𝒯_1∖{b}, α(u) := α_1(u), ω(u) := ω_1(u),
∀ u ∈𝒯_2∖{∅}, α(bu) := ω(b)+(α_2(u)-ω_2(∅)),
ω(bu) := α(bu) + ζ_2(u),
A_1 ⊕_b A_2 := (𝒯, (α(u), ζ(u), ω(u))_u∈𝒯).
It is then clear that ℬ(A_1 ⊕_b A_2) := ℬ(A_1)∖{b}∪ bℬ(A_2).
See Figure <ref> for an example.
§.§ Evolution of the Clonal Tree Process
We study the increasing clonal tree process as we remove mutations (decreasing θ).
We therefore reverse time by denoting η = -ln θ, and defining X^z_η := A^z_{e^{-η}}.
Denote ℚ^z_η the distribution of X^z_η with values in the set of reversed (i.e., with time flowing from z to 0) simple binary trees.
See Figure <ref> for a sketch of the tree growth process.
The increasing process (X^z_η)_η∈ is nicely described in terms of grafts.
* The process (X^z_η)_η∈ is a time-inhomogeneous Markov process, whose transitions conditional on X^z_η can be characterized as follows.
* The buds of X^z_η are the leaves b of height ω(b).
Independently of the others, each bud b is given an exponential clock T_b of parameter 1.
* At time η' = η + T_b, a tree is grafted on the bud b, following the distribution ℚ^ω(b)_η', and each newly created bud b' is given an independent exponential clock T_b' of parameter 1.
* The infinitesimal generator evaluated at a function ϕ of simple trees which depends only on a finite number of generations (i.e. such that ∃ n ≥ 0 with ϕ(A) = ϕ(A_{|n}) for all A) can be written as follows
ℒ_ηϕ(A) = ∑_b ∈ℬ(A) ( ℚ^ω(b)_η [ϕ(A ⊕_b Y)] - ϕ(A) ),
where Y is the random tree drawn under the probability measure ℚ^ω(b)_η.
* Write τ_z for the first time the clonal tree process reaches the boundary, that is the first time there is a leaf x∈ X^z_η with d(ρ, x) = z, (where d is the distance in the real tree X^z_η):
τ_z = inf{η ∈ ℝ: ∃ x ∈ X^z_η, d(ρ, x) = z }.
Then the distribution of τ_z is given by
ℙ(τ_z ≤ η) = W(z) e^{-e^{-η} z}/W_η(z),
where as previously W(z) = ν(z)^{-1}, and
W_η(z) = W(0) + ∫_(0,z] e^{-e^{-η} x} dW(x),
that is W_η = W^μ with μ(dx) = e^{-η} dx.
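In the Brownian case W(z) = z and W_η(z) = (1 - e^{-e^{-η}z})/e^{-η}, so the distribution function of τ_z has a closed form; a small evaluation sketch (ours):

import numpy as np

def cdf_tau_brownian(eta, z):
    th = np.exp(-eta)                    # remaining mutation rate at time eta
    return z * th * np.exp(-th*z) / (1 - np.exp(-th*z))

# increases from 0 to 1 as eta runs over the real line, for any fixed z > 0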
We first state a result that is already interesting in itself, which ensures that CPP trees are reversed pure-birth trees (see next Section for details on birth-death trees and their links with CPPs). We refer the reader to Subsection <ref>, where a more general result is proved.
Let ν and μ be diffuse measures on , satisfying assumptions (<ref>) and ν()=∞.
Fix z_0∈ such that ν(z_0) =1 and let J = (0, z_0].
Then for z∈ J, a CPP(ν, z) is the genealogy of a reversed (i.e. with time flowing from z to 0) pure-birth process with birth intensity β defined as the Laplace-Stieltjes measure associated with the nondecreasing function -logν, started from z.
From Lemma <ref>, we can express the CPP in terms of a pure-birth tree, with time flowing from z to 0 (but measured from 0 to z) and birth intensity β = d(log∘W).
Let 𝒯⊂𝒰 denote the complete binary tree
𝒯 := ⋃_n≥ 0{0, 1}^n
Then we can define recursively (α(u), ω(u))_u∈𝒯 by setting α(∅) = z, and for u = vi, with i∈{0,1}:
α(u) = ω(v) = sup [0, α(v)) ∩ B_v,
with the convention sup∅ = 0, and where (B_v)_v∈𝒯 are i.i.d. Poisson point processes on [0,z] with intensity β.
This defines the random reversed simple tree (𝒯, α, ω) as the genealogy of a pure-birth process with birth intensity β, with time flowing from z to 0.
In other words, by the definition of β, (𝒯, α, ω) is the reversed simple tree with distribution CPP(ν, z).
Now we define independently of (𝒯, α, ω), a family (M_u)_u∈𝒯 of i.i.d. Poisson point processes on ×[0,z] with Lebesgue intensity.
Writing for η∈ and u∈𝒯,
M^η_u = M_u([0, e^{-η}] × ·),
we define a coupling ((M_u^η)_u ∈𝒯)_η∈ of point processes with intensity ^-η x on the branches of (𝒯, α, ω).
Now let us define the process (Y_η)_η∈ by Y_η = (𝒯_η, α_η, ω_η), with
𝒯_η := {u ∈ 𝒯, ∀ v ≺ u, M^η_v([ω(v), α(v)]) = 0 },
α_η(u) := α(u) for all u ∈ 𝒯_η,
and ω_η(u) := sup ({ω(u)} ∪ { s < α(u), M_u^η([s, α(u)]) ≠ 0 } ) for all u ∈ 𝒯_η.
By definition, one can check that Y_η is the clonal simple tree associated with the tree (𝒯, α, ω) and the point process of mutations (M_u^η)_u ∈𝒯.
Therefore (Y_η)_η∈ has the same distribution as (X^z_η)_η∈.
We define the filtration (ℱ_η)_η∈ as the natural filtration of the process (Y_η)_η∈, which we may rewrite:
ℱ_η := σ ( (α_η')_η'≤η, (ω_η')_η'≤η ).
From our definitions, for u∈𝒯, we have:
ω_η(u) = inf{s ∈ [0, α(u)], M_u([0, e^{-η}]×[s,α(u)]) = 0 and B_u([s,α(u)]) = 0 },
and since M_u and B_u are independent Poisson point processes, it is known that conditional on ℱ_η, we have: M_u∩× [0, ω_η(u)) and B_u ∩ [0, ω_η(u)) are independent Poisson point processes, with intensity Lebesgue for M_u and β for B_u, on their respective domains.
We can further notice that on the event {u is a bud of Y_η}, conditional on ℱ_η, the families of point processes
(M_uv∩× [0, ω_η(u)))_v ∈𝒯 and (B_uv∩ [0, ω_η(u)))_v ∈𝒯
are independent families of independent Poisson point process with intensity Lebesgue for M_uv and β for B_uv, on their respective domains.
Also, since M_u and B_u are independent and with diffuse intensities, we have the a.s. equalities between events
{u is a bud of Y_η}
= {ω_η(u) = inf{s ∈ [0, α(u)], M_u([0, e^{-η}]×[s,α(u)]) = 0}}
= {B_u({ω_η(u)}) = 0 }.
Moreover, since M_u is a Poisson point process with Lebesgue intensity on ℝ₊^2, it is known that on this event, conditional on ω_η(u), the point process M_u restricted to [0, e^{-η}]×[0,ω_η(u)] has the conditional distribution of:
δ_(U, ω_η(u)) + M,
where U is a uniform random variable on [0, e^{-η}] and M is an independent Poisson point process on [0, e^{-η}]×[0,ω_η(u)) with Lebesgue intensity.
Hence on the event A := {u is a bud of Y_η}, the distribution of
η̃ = inf{η' ≥ η, M_u([0, e^{-η'}]×{ω_η(u)}) = 0}
= sup{η' ≥ η, M_u([0, e^{-η'}]×{ω_η(u)}) = 1}
is given by
ℙ(η̃ - η ≥ t | A ) = ℙ(M_u([0, e^{-(η+t)}]×{ω_η(u)}) = 1 | A)
= ℙ(U ∈ [0, e^{-(η+t)}])
= e^{-t},
and so if u is a bud of Y_η, the first time η̃ at which ω_·(u) drops strictly below ω_η(u) is such that η̃ - η has an exponential distribution with parameter 1.
We may now prove the first point (i) of the theorem.
Fix η∈, and write (b_1, b_2, …) for the distinct buds of Y_η.
We define, for i ≥ 1 and η' ≥η:
𝒯^i_η' := {u, b_i u ∈𝒯_η'},
α^i_η'(∅) := ω_η(b_i) and for u ∈𝒯∖{∅}, α^i_η'(u) := α_η'(b_i u),
ω^i_η'(u) := ω_η'(b_i u),
Y^i _η' := (𝒯^i_η', α^i_η', ω^i_η' ).
This definition formulates that for η' ≥η, Y^i_η' is the unique simple tree such that Y_η' = A ⊕_b_iY^i_η' for another simple tree A in which b_i is a bud, with ω^A(b_i) = ω_η(b_i).
Note that when writing Y_η' = A ⊕_b_iY^i_η', A may be different from Y_η, even for η' arbitrarily close to η, since other grafts may have occurred (possibly infinitely many grafts if Y_η has infinitely many buds).
Since b_1,b_2, … are the buds of Y_η, the sets b_1 𝒯,b_2 𝒯,… are disjoint.
Thus, from our construction, the following families of random variables are independent conditional on ℱ_η:
(B_b_1 u)_u∈𝒯,(B_b_2 u)_u∈𝒯…, (M_b_1 u)_u∈𝒯, (M_b_2 u)_u∈𝒯, …
Furthermore, we know how to describe their distributions conditional on ℱ_η because of the previous observations.
It follows that the trees (Y^i_η')_i≥ 1 are independent conditional on ℱ_η and the distribution of (Y^i_η')_η' ≥η can be described by:
There is a random variable η̃ such that
* η̃ - η is exponentially distributed with parameter 1.
* For η ≤ η' < η̃, we have ω_{η'}(b_i) = ω_η(b_i), so Y^i_{η'} is the empty tree (or rather contains only one point, the root).
* Conditionally on η̃, the process (Y^i_{η'})_{η' ≥ η̃} is distributed as our construction of the process (Y_{η'})_{η' ≥ η̃}, with the initial condition α(∅) = ω_η(b_i).
This concludes the proof of (i).
For (ii), write 𝔗 for the set of simple binary trees and suppose we have a bounded measurable map ϕ : 𝔗→ and a number n≥ 0 such that
ϕ(A) = ϕ(A_{|n}) for all A ∈ 𝔗.
Consider a fixed tree A = (𝒯, α, ω) ∈𝔗.
There is a finite number of buds b_1, …, b_m in the first n generations 𝒯_|n, therefore for a fixed η∈, conditional on {X^z_η = A}, the process (ϕ(X^z_η'))_η' ≥η is a continuous time Markov chain.
It follows from (i) that this Markov chain jumps after an exponential time with parameter m to a new state where one of the buds, uniformly chosen, grows into a new tree.
That is, denoting ℒ_η the infinitesimal generator of the process (X^z_η)_η≥η_0,
ℒ_ηϕ(A) = ∑_i=1^m ( ℚ^ω(b_i)_η [ϕ(A ⊕_b_i Y)] - ϕ(A) ),
where Y is the random tree drawn under the probability measure ℚ^ω(b_i)_η.
For (iii), note that the existence of a leaf in the clonal subtree at a distance z from the root coincides a.s. with the existence of a clonal leaf in 𝕋^z, where 𝕋^z is the original CPP(ν,z) with mutation measure μ( x) = ^-η x.
Then the formula in the proof follows from Proposition <ref>, which gives the probability that there is a clonal leaf in a CPP.
The branching random walk of the buds.
Forgetting the structure of the tree and considering only the height of the buds, the process becomes a rather simple branching random walk.
Write χ^z_η := ∑_{b∈ℬ(X^z_η)} δ_{ω(b)} for the point measure on ℝ₊ giving the heights of the buds in X^z_η.
Then (χ^z_η)_{η≥η_0} is a branching Markov process where each particle stays at its height z' during its lifetime (an exponential time of parameter 1), then splits at its death time η according to the distribution of χ^{z'}_η.
Similarly to the preceding paragraph, one can describe the infinitesimal generator of this process as follows.
For a map f: ℝ₊ → ℝ₊ that is zero in a neighborhood of 0 and a Radon point measure Γ on (0,∞) (i.e. such that Γ([x,∞)) < ∞ for all x > 0), write ϕ^f(Γ) for the sum
ϕ^f(Γ) := ∫ f(z) Γ(dz).
Then the infinitesimal generator ℒ_η at time η of the time-inhomogeneous process (χ_η)_{η∈ℝ}, evaluated at ϕ^f, is given by
ℒ_η ϕ^f(Γ) = ∫ ℚ^z_η[ϕ^f(χ)] Γ(dz) - ϕ^f(Γ).
§ LINK BETWEEN CPP AND BIRTH-DEATH TREES
§.§ Birth-Death Processes
An additional well-known example of random tree is given by the genealogy of a birth-death process, which will appear as an alternative description of our CPP trees.
Here, a birth-death process is a time-inhomogeneous, time-continuous Markovian branching process living in _+ with jumps in {-1, 1}.
In a general context, we will define the genealogy of a birth-death process as a random simple tree, which we may equip with a canonical limiting measure on the set of its infinite lineages.
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞.
Suppose there are two measures on J, β and κ, respectively called the birth intensity measure and death intensity measure, or simply birth rate and death rate, which satisfy for all t∈ J
β([t_0, t])<∞, κ([t_0, t]) < ∞
β({t}) = 0, κ({t}) = 0.
In other words, β and κ are diffuse Radon measures on J.
Informally, the population starts with one individual at time t_0, and each individual alive at time t ≥ t_0 may give birth to a new individual at rate β(dt), and die at rate κ(dt).
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and β and κ measures on J satisfying (<ref>).
Independently for each u∈⋃_n{0,1}^n, we define B_u and D_u two independent point processes, such that B_u (resp. D_u) is a Poisson point process on J with intensity β (resp. κ).
The genealogy of a (β, κ) birth-death process started from t ∈ J is the random binary simple tree (𝒯, α, ω) defined recursively by:
* ∅∈𝒯, with α(∅) = t.
* For each u∈𝒯, we set T_B(u) := inf B_u∩ (α(u),t_∞), and T_D(u) :=inf D_u∩ (α(u),t_∞).
Then there are three different possibilities:
* if T_B(u) < T_D(u), then we set u0, u1 ∈𝒯, and α(u0) = α(u1) = ω(u) := T_B(u),
* if T_D(u) < T_B(u), then we set ω(u) = T_D(u), and u0, u1 ∉𝒯,
* if T_B(u)=T_D(u)=t_∞, then we set ω(u) = t_∞, and u0, u1 ∉𝒯.
Birth-death processes have been known for a long time. They have been studied thoroughly as early as 1948 <cit.>.
In the case of pure-birth processes with infinite descendance, we introduce a canonical measure on the boundary of the tree.
Under the assumptions κ = 0 and β(J) = ∞, the tree (𝒯, α, ω) is said to be the genealogy of a pure-birth process. It may then be equipped with a measure ℒ on its boundary ∂𝒯 = {0,1}^ℕ defined by
ℒ(B_u) := lim_{s↑t_∞} N_u(s)/e^{β([t_0,s])}, u ∈ 𝒯,
where B_u ={v∈∂𝒯, u≺ v} is defined as in Definition <ref>, and N_u(s) is the number of descendants of u at time s:
N_u(s) := #{v∈𝒯, u≼ v, α(v) < s ≤ω(v)}.
The limits in the definition are well-defined because for each u ∈ 𝒯, conditional on α(u), the process (N_u(s)/e^{β([t_0,s])})_{s ≥ α(u)} is a non-negative martingale.
Also, the fact that the map u↦ N_u(s) is additive combined with Remark <ref> justifies that the measure ℒ is well defined.
Finally, let us introduce random mutations on a birth-death tree as a random discrete set of points.
Let μ be a diffuse Radon measure on J, and let # denote the counting measure on ⋃_n {0,1}^n.
A birth-death tree (𝒯, α, ω) may be equipped with a set M of neutral mutations at rate μ by defining, independently of the preceding construction, a Poisson point process M on (⋃_n{0,1}^n)× J with intensity #⊗μ, and then defining:
M := {(u,s) ∈M, u∈𝒯, α(u) < s ≤ω(u)}.
This point process M is then a discrete subset of the skeleton of the real tree (defined as in (<ref>)) associated with (𝒯, α, ω).
Example.
The Yule tree is the genealogy of a pure-birth process with J = ℝ₊ and a birth rate β equal to the Lebesgue measure, which means that the branch lengths separating two branching points are i.i.d. exponential random variables with parameter 1.
Every pure-birth tree with β(J) = ∞ can be time-changed into a Yule tree, with the time-change ϕ: J → ℝ₊, t ↦ β([t_0, t]) (see Proposition <ref>).
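A Yule genealogy up to a finite horizon is simulated in a few lines; the Python sketch below (ours) encodes vertices by the words u ∈ ⋃_n{0,1}^n, as in Definition <ref>:

import numpy as np

def yule_tree(horizon, rng):
    # each branch u, born at alpha[u], splits after an Exp(1) time into
    # u+'0' and u+'1'; branches alive at the horizon are truncated there
    alpha, omega = {'': 0.0}, {}
    stack = ['']
    while stack:
        u = stack.pop()
        t = alpha[u] + rng.exponential()
        omega[u] = min(t, horizon)
        if t < horizon:
            for c in '01':
                alpha[u + c] = t
                stack.append(u + c)
    return alpha, omega

rng = np.random.default_rng(4)
alpha, omega = yule_tree(horizon=3.0, rng=rng)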
§.§ Link between CPP and Supercritical Birth-Death Trees
We first provide a refined version of Lemma <ref> which is proved in Subsection <ref>.
Under the assumptions of Lemma <ref>, the CPP(ν, z) with boundary measured by ℓ is the genealogy of a reversed pure-birth process with birth intensity β= -logν started from z, with boundary measured by ℒ.
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and let β and κ be diffuse Radon measures on J, i.e. measures satisfying (<ref>).
Consider a birth-death process started from t_0 with birth rate β and death rate κ.
Let us define
ℐ_t := ∫_[t, t_∞) e^{-β([t,s]) + κ([t,s])} β(ds),
β^*(dt) := β(dt)/ℐ_t.
In a birth-death process with β(J) = ∞, we say that an individual i alive at time t has an infinite progeny if N_i(s)>0 for any time s>t.
It is known (see <cit.>) that the process is supercritical (i.e., the event {lim inf_t→ t_∞ N_∅(t) >0 } has positive probability) if and only if ℐ_t_0 < ∞, and that the probability of non-extinction for a process started at time t∈ J is then ℐ_t^-1. Also, if the birth-death process with rates (β, κ) is supercritical, then conditional on non-extinction, the subtree of individuals with infinite progeny is a pure-birth tree with birth rate β^*.
Now we assume Poissonian neutral mutations are set on the genealogy of a (β, κ) supercritical birth-death process, according to a rate μ, where μ is a diffuse Radon measure on J.
We also assume β^*(J) =∞ so that lim_t→ t_∞ N_∅(t) =+∞ conditional on non-extinction.
Conditional on non-extinction, the subtree of individuals with infinite progeny is a measured simple tree equipped with mutations (𝒯, α, ω, ℒ, M), where:
* (𝒯, α, ω, ℒ) is a random simple binary tree constructed (see Definition <ref>) from a pure-birth process with birth rate β^*.
* With M a Poisson point process on (⋃_n≥ 0{0,1}^n)× J with intensity #⊗μ, the mutations on the branches of 𝒯 are defined as the set
M = {(i,t)∈M, i∈𝒯, α(i) < t ≤ω(i)}
One may study this measured tree with mutations as the limit in time of the genealogy of the birth-death process with neutral mutations.
We show that this measured tree with mutations is in fact a time-changed CPP tree.
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and let β and μ be diffuse Radon measures on J, with β(J) =∞.
Let 𝕋=(𝒯, α, ω, M, ℒ) be a random measured simple tree representing the genealogy of a pure-birth process with rate β started from t_0, equipped with mutations at rate μ.
Let ϕ: J → (0,1] be the time-change defined by
ϕ : t ↦ e^{-β([t_0, t))}.
Then the time-changed tree ϕ(𝕋) (see Proposition <ref>) has the distribution of a
CPP(dx/x^2, μ∘ϕ^{-1}, 1).
Thanks to Lemma <ref>, we only need to exhibit a correct time change to prove the Theorem.
We know that a time-changed birth-death tree is still a birth-death tree: this is explicitly stated in Proposition <ref> in the appendix.
This implies here that the time-changed tree ϕ(𝕋) is a (reversed) pure-birth process with birth rate β∘ϕ^-1, started from ϕ(t_0)=1, and equipped with mutations with rate μ∘ϕ^-1.
Let us first check that β∘ϕ^{-1}(dx) = d log(x).
Since β is diffuse, ϕ is continuous decreasing, so for all x∈ (0,z_0], we have ϕ(ϕ^-1(x))=x, where ϕ^-1 is the right-continuous inverse of ϕ.
Therefore we have, for all a<b ∈ (0,1]:
β∘ϕ^-1([a,b]) = β([ϕ^-1(b), ϕ^-1(a)])
= logϕ(ϕ^-1(b)) - logϕ(ϕ^-1(a))
= log(b) - log(a).
Now notice that for x ∈ (0,1],
-log ( ∫_x^∞ y^{-2} dy ) = log x,
so according to Lemma <ref>, a CPP(dx/x^2, μ∘ϕ^{-1}, 1) is a pure-birth process with birth rate β(dx) = d log(x), started from 1 and equipped with mutations at rate μ∘ϕ^{-1}.
Therefore its distribution is identical to the distribution of ϕ(𝕋).
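For instance, for a Yule tree (β the Lebesgue measure on J = ℝ₊) the theorem applies with ϕ(t) = e^{-t}, and the time change is immediate to carry out on a simulated genealogy (a sketch of ours, reusing the yule_tree function above):

import numpy as np

def time_change(alpha, omega):
    # phi(t) = e^{-t} maps the Yule genealogy, time flowing 0 -> infinity,
    # to a reversed tree of height phi(0) = 1, in the CPP(dx/x^2, 1) picture
    return ({u: np.exp(-t) for u, t in alpha.items()},
            {u: np.exp(-t) for u, t in omega.items()})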
Acknowledgements. The authors thank the Center for Interdisciplinary Research in Biology (Collège de France) for funding.
toc
§ APPENDIX
§.§ Birth-Death Processes
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and β and κ diffuse Radon measures on J (i.e. satisfying (<ref>)).
Let _t denote the distribution of the genealogy of a (β, κ) birth-death process started with one individual at time t∈ J, and let N_T be the number of individuals alive at time T ∈ J.
For T > t and α ≥ 0, we have:
𝔼_t(e^{-α N_T}) = 1 - (1-e^{-α}) / ( e^{κ([t,T]) - β([t,T])} + (1-e^{-α}) ∫_[t,T] e^{κ([t,s]) - β([t,s])} β(ds) ),
and in particular,
ℙ_t(N_T > 0) = ( e^{κ([t,T]) - β([t,T])} + ∫_[t,T] e^{κ([t,s]) - β([t,s])} β(ds) )^{-1}.
Note that the previous proposition shows that conditional on being non-zero, N_T is a geometric random variable, which is a known fact about birth-death processes (see for instance <cit.>).
We still provide a proof in our case where the birth and death intensity measures are not necessarily absolutely continuous with respect to Lebesgue.
With a fixed time horizon T∈ J and a fixed real number α≥ 0, write for t < T,
q(t) = 𝔼_t(e^{-α N_T}).
We use a different description of the birth-death process than the one used in Section <ref>, and consider a population where individuals die at rate κ, and during their lifetime, produce a new individual at rate β.
Notice that for any s>t, the number of individuals alive at time s has the same distribution in both models.
Thus we write D for the death time of the first individual, and B_i for the possible birth time of her i-th child.
With our description, D has the distribution of the first atom of a Poisson point process on [t, t_∞) with intensity κ and conditional on D, the set {B_1, B_2, …, B_N} is a Poisson point process on [t, D] with intensity β.
Also, write N^i_T for the number of alive descendants of the i-th child at time T.
Since we have N_T = 1_{D>T} + ∑_i N^i_T, we have
q(t) = 𝔼_t[ e^{-α 1_{D>T}} ∏_i e^{-α N^i_T} ],
where we define by convention N^i_T = 0 if B_i > T.
Now conditional on D and (B_i), (N^i_T) are independent, with N^i_T equal to the distribution of N_T under _B_i.
Hence
q(t) = 𝔼_t[ e^{-α 1_{D>T}} ∏_i q(B_i) ],
where we use the convention q(u) := 1 if u > T.
Now conditional on D, (B_i) are the atoms of a Poisson point process with intensity β(ds) on [t, D], so we have
q(t) = 𝔼_t[ e^{-α 1_{D>T}} exp ( -∫_[t,D] (1-q(s)) β(ds) ) ]
= ∫_[t,∞) κ(du) e^{-κ([t,u))} e^{-α 1_{u>T}} exp ( -∫_[t,u] (1-q(s)) β(ds) ),
which implies by differentiation
dq(t) = -κ(dt) + q(t) [κ(dt) + (1-q(t)) β(dt)],
which in turn may be rewritten
d( 1/(1-q(t)) ) = -β(dt) + ( 1/(1-q(t)) ) (β(dt) - κ(dt)).
Remark that with F(t) := e^{β([t,T]) - κ([t,T])}, we have dF(t) = F(t)(κ(dt) - β(dt)), so that
d( F(t)/(1-q(t)) ) = -F(t) β(dt),
and since q(T) = ^-α, we have by integration on [t, T]:
1/(1-e^{-α}) - F(t)/(1-q(t)) = -∫_[t,T] F(s) β(ds),
that is
1 - q(t) = (1-e^{-α}) / ( e^{κ([t,T]) - β([t,T])} + (1-e^{-α}) ∫_[t,T] e^{κ([t,s]) - β([t,s])} β(ds) ).
This characterizes the distribution of N_T under _t for all T.
In particular, letting α → ∞, we get
ℙ_t(N_T > 0) = ( e^{κ([t,T]) - β([t,T])} + ∫_[t,T] e^{κ([t,s]) - β([t,s])} β(ds) )^{-1},
which concludes the proof.
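For constant rates β(ds) = b ds and κ(ds) = k ds with b > k, the formula reduces to the classical survival probability (b-k)/(b - k e^{-(b-k)(T-t)}) of a linear birth-death process, which is easy to confirm by simulation; a Python sketch (ours):

import numpy as np
rng = np.random.default_rng(5)

def survival_exact(b, k, tau):
    r = b - k
    return r / (b - k*np.exp(-r*tau))

def survival_mc(b, k, tau, reps):
    alive = 0
    for _ in range(reps):
        n, t = 1, 0.0
        while n > 0:
            t += rng.exponential(1.0/((b + k)*n))   # time of the next event
            if t >= tau:
                break                               # horizon reached, n alive
            n += 1 if rng.uniform() < b/(b + k) else -1
        alive += (n > 0)
    return alive / reps

print(survival_exact(1.0, 0.5, 2.0), survival_mc(1.0, 0.5, 2.0, 20_000))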
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and β, κ, and μ diffuse Radon measures on J (i.e. satisfying (<ref>)).
Let ϕ:J→ be an increasing function, and define t'_0 := ϕ(t_0), t'_∞ := lim_t↑ t_∞ϕ(t) and J'=[t'_0, t'_∞).
We assume that ϕ satisfies
∀ t<t_∞, ϕ(t)<t'_∞.
Let 𝕋=(𝒯, α, ω, M) be the genealogy of a (β, κ) birth-death process, started at t∈ J and equipped with Poissonian mutations with rate μ, as in Definition <ref>.
We define the time-changed simple tree:
ϕ(𝕋) := (𝒯, ϕ∘α, ϕ∘ω, {(u,ϕ(s)), (u,s)∈ M}).
If β∘ϕ^-1 and κ∘ϕ^-1 (the push-forwards of β and κ by ϕ) still have no atoms, then ϕ(𝕋) has the distribution of the genealogy of a (β∘ϕ^-1,κ∘ϕ^-1) birth-death process, started at ϕ(t)∈ J' and equipped with Poissonian mutations with rate μ∘ϕ^-1.
Also, if κ = 0 and β(J)=∞, then κ∘ϕ^-1=0 and β∘ϕ^-1(J')=∞, and the measures ℒ_𝕋 and ℒ_ϕ(𝕋) on ∂𝒯, defined for 𝕋 and for ϕ(𝕋), are the same.
Suppose 𝕋 is constructed as in Definition <ref> with independent Poisson point processes B_u and D_u with respective intensities β and κ, for each u∈⋃_n{0,1}^n.
This implies that the random sets defined by
ϕ(B_u) := {ϕ(s), s∈ B_u},
ϕ(D_u) := {ϕ(s), s∈ D_u},
are independent Poisson point processes on the interval J' with respective intensities β∘ϕ^-1 and κ∘ϕ^-1.
Remark that by assumption, for η∈{β,κ}, for all t'∈ J', we have η∘ϕ^-1 ({t'}) = 0, so we a.s. have t'∉ϕ(B_u) and t'∉ϕ(D_u).
Now since α(u) is independent of B_u and D_u, we have also a.s.
ϕ∘α(u) ∉ϕ(B_u)∪ϕ(D_u).
By definition, we have ∅∈𝒯 and α(∅) = t, so ϕ∘α(∅) = ϕ(t).
Then, if u ∈𝒯, with T_B(u)= inf B_u∩ (α(u),t_∞), and T_D(u)=inf D_u∩ (α(u),t_∞), the following assertions hold.
* Since we have (<ref>), we know that a.s. for all s∈ B_u∩(α(u), t_∞), we have ϕ(α(u))< ϕ(s).
This ensures that ϕ(T_B(u)) = infϕ(B_u)∩ (ϕ∘α(u),t'_∞).
* For the same reason, we have ϕ(T_D(u)) = infϕ(D_u)∩ (ϕ∘α(u),t'_∞).
* Because ϕ(B_u) is independent of ϕ(D_u) and because β∘ϕ^-1 and κ∘ϕ^-1 are diffuse by assumption, we have ϕ(B_u)∩ϕ(D_u) = ∅ almost surely.
Therefore, we have:
* ϕ(T_B(u)) < ϕ(T_D(u)) T_B(u) < T_D(u), which implies u0, u1 ∈𝒯, and ϕ∘α(u0) = ϕ∘α(u1) = ϕ∘ω(u) = ϕ(T_B(u)),
* ϕ(T_D(u)) < ϕ(T_B(u)) T_D(u) < T_B(u), which implies ϕ∘ω(u) = ϕ(T_D(u)), and u0, u1 ∉𝒯,
* ϕ(T_B(u)) = ϕ(T_D(u)) = t'_∞ T_B(u) = T_D(u) = t_∞, which implies ϕ∘ω(u) = t'_∞, and u0, u1 ∉𝒯.
Thus (𝒯, ϕ∘α, ϕ∘ω) is defined as a (β∘ϕ^-1,κ∘ϕ^-1) birth-death process, started at ϕ(t).
For the neutral mutations, we assume there is, as in Definition <ref>, a Poisson point process M on (⋃_n{0,1}^n)× J with intensity #⊗μ, and such that:
M = {(u,s) ∈M, u∈𝒯, α(u) < s ≤ω(u)}.
Now {(u,ϕ(s)), (u,s)∈M} is a Poisson point process on (⋃_n{0,1}^n)× J' with intensity #⊗μ∘ϕ^-1, so
{(u,ϕ(s)), (u,s)∈ M} = {(u,ϕ(s)), (u,s) ∈M, u∈𝒯, α(u) < s ≤ω(u)}
is the definition of random neutral mutations at rate μ∘ϕ^-1 on the tree (𝒯, ϕ∘α, ϕ∘ω).
It remains to prove that in the case κ = 0 and β(J) = ∞, the measures ℒ_𝕋 and ℒ_ϕ(𝕋) are the same.
By definition, we have for u ∈⋃_n{0,1}^n,
ℒ_{ϕ(𝕋)}(B_u) = lim_{s'↑t'_∞} N'_u(s')/e^{β∘ϕ^{-1}([t'_0,s'])}
= lim_{s↑t_∞} N'_u(ϕ(s))/e^{β∘ϕ^{-1}([t'_0,ϕ(s)])},
where N'_u(s') := #{v∈𝒯, u≼ v, ϕ∘α(v) < s' ≤ ϕ∘ω(v)} is the number of descendants of u in the time-changed tree at time s'.
But we have a.s. for all s∈ J, N'_u(ϕ(s))=N_u(s), and also β∘ϕ^-1([t'_0,ϕ(s)]) = β([t_0,s]), so finally
ℒ_{ϕ(𝕋)}(B_u) = lim_{s↑t_∞} N'_u(ϕ(s))/e^{β∘ϕ^{-1}([t'_0,ϕ(s)])}
= lim_{s↑t_∞} N_u(s)/e^{β([t_0,s])}
= ℒ_𝕋(B_u),
which ends the proof.
Let J = [t_0, t_∞) be a real interval, with -∞ < t_0 <t_∞≤∞, and β a diffuse Radon measure on J, such that β(J) = ∞.
There is a unique family (ℙ_t)_{t∈J} of distributions on simple trees (𝒯, α, ω, ℒ) equipped with a measure ℒ on ∂𝒯 := {0,1}^ℕ, such that for all t ∈ J
* 𝒯 = ⋃_n{0,1}^n and α(∅) = t, ℙ_t-almost surely.
* ℙ_t(ω(∅) > s) = e^{-β([t,s))}.
* Under ℙ_t, ℒ(∂𝒯) is an exponential r.v. with mean e^{-β([t_0,t))}.
* Under ℙ_t, define for i∈{0,1},
α_i(u) := α(iu), ω_i(u) := ω(iu), ℒ_i the measure on ∂𝒯 such that ℒ_i(B_u) = ℒ(B_{iu}) for all u∈𝒯 and finally 𝕋_i := (𝒯, α_i, ω_i, ℒ_i).
Then the conditional distribution of the pair of trees (𝕋_0,𝕋_1) given ω(∅) is ℙ_{ω(∅)}^{⊗2}, i.e. they are independent with the same distribution ℙ_{ω(∅)}.
Furthermore, for all t∈ J, ℙ_t is the distribution of the genealogy of a pure-birth process with birth rate β started with one individual at time t∈ J, equipped with ℒ the measure on ∂𝒯 introduced in Definition <ref>.
Let ℚ_t be the law of the genealogy of a β pure-birth process started from t.
We will first show that the family (ℚ_t)_t∈ J satisfies the assertions (i)-(iv) of the theorem.
(i) By definition α(∅) = t.
Also, the fact that for all t ∈ J, β([t,t_∞)) = ∞, implies that for each Poisson point process with intensity β on J, there are infinitely many points in [t, t_∞).
This implies that each individual in the process will eventually split into two, so that 𝒯 = ⋃_n{0,1}^n ℚ_t-almost surely.
(ii) Under ℚ_t, ω(∅) is distributed as the first point of a Poisson point process B_∅ on [t, t_∞) with intensity β.
Therefore,
ℚ_t(ω(∅) > s) = ℚ_t(# B_∅ ∩ [t, s) = 0) = e^{-β([t,s))}.
(iii) By Proposition <ref>, writing 𝔼_t for the expectation under ℚ_t, we have for t < T < t_∞,
𝔼_t(e^{-α N_T}) = 1 - (1-e^{-α}) / ( e^{-β([t,T])} + (1-e^{-α})(1 - e^{-β([t,T])}) ).
Replacing α by α e^{-β([t_0, T])} and letting T → t_∞, we have by dominated convergence:
𝔼_t(e^{-α ℒ(∂𝒯)}) = 1/(α e^{-β([t_0, t))} + 1),
which implies that ℒ(∂𝒯) is an exponential random variable with mean e^{-β([t_0, t))}.
(iv) Let us define a family (B_u)_u∈𝒯 of independent Poisson point processes on J with intensity β.
Let us write F for the deterministic function such that for all t∈ J, F(t, (B_u)_u∈𝒯) is the simple tree 𝕋=(𝒯, α, ω, ℒ) constructed as in Definition <ref>, which follows the distribution ℚ_t.
By assumption, the two families (B_0u)_u∈𝒯 and (B_1u)_u∈𝒯 are independent, and by construction, we have
𝕋_0 = F(ω(∅), (B_0u)_u∈𝒯) and 𝕋_1 = F(ω(∅), (B_1u)_u∈𝒯),
where 𝕋_0 and 𝕋_1 are defined as in the statement of the Proposition.
Therefore, under ℚ_t, the conditional distribution of (𝕋_0, 𝕋_1) given ω(∅) is ℚ_{ω(∅)}^{⊗2}.
Now, let us show that if a family (ℙ_t)_{t∈J} satisfies the assertions (i)-(iv) of the Proposition, it satisfies also the following one.
Let 𝒯_n be the complete binary tree with n generations
𝒯_n := ⋃_k=0^n {0,1}^k,
and let ℙ_t^n be the distribution of (α(u), ω(u), ℒ(B_u))_{u∈𝒯_n}, where (𝒯, α, ω, ℒ) has distribution ℙ_t.
Now we view ℙ_t^n as a probability measure on the space
(ℝ^3)^{𝒯_n} = {(x(u), y(u), z(u)), u∈𝒯_n}.
Then we have
* x(∅) := t, ℙ^n_t-almost surely.
* For all m≤ n and u∈𝒯_m, conditional on x(u) and independently of the variables (x(v), y(v))_v∈𝒯_m∖{u}, the distribution of y(u) is given by:
ℙ^n_t(y(u) > s) = e^{-β([x(u), s))} for s ≥ x(u).
* For all u∈{0,1}^n, conditional on x(u) and independently of the rest, z(u) is defined as an exponential random variable with mean e^{-β([t_0,x(u)))}.
* For all u∈𝒯_n-1, x(u0) = x(u1) := y(u).
* For all u∈𝒯_n-1, z(u) := z(u0)+z(u1).
Indeed, assertion 1 is directly deduced from (i), 5 is trivial because ℒ is additive, and 2, 3 and 4 are proved by induction on n using (iv).
One can check that 2 stems from (ii) and (iv),
3 from (iii) and (iv), and 4 from (i) and (iv).
Now it is clear that these five assumptions define ℙ^n_t uniquely for n≥ 0 and t∈ J.
Also, a measured simple tree (𝒯, α, ω, ℒ) for which 𝒯 = ⋃_n{0,1}^n is entirely described by (α(u), ω(u), ℒ(B_u))_u∈𝒯∈ (^3)^𝒯.
This implies that ℙ_t is uniquely determined by its marginal distributions (ℙ_t^n)_{n≥0}.
Finally, we have shown that the family (ℚ_t)_t∈ J, where ℚ_t is the law of the genealogy of a β pure-birth process started from t, satisfies assertions (i)-(iv).
In addition, we have shown that there is at most one family (ℙ_t) of simple tree distributions satisfying assertions (i)-(iv).
Therefore, such a family exists and is unique, which concludes the proof.
§.§ Proof of Lemmas <ref> and <ref>
Let us write ℙ_z for the distribution of a CPP(ν, z).
Let 𝒩 be a Poisson point process with intensity t⊗ν as in our construction of CPP trees.
Recall that T(z) = inf{t ≥ 0, (x,t) ∈𝒩, x ≥ z} and define
𝒩_z := 𝒩∩ ([0, T(z))×[0,z]).
Define also 𝕋^z as the comb function tree given by 𝒩_z, whose distribution is then ℙ_z.
Write 𝒫_z for the distribution of the pair (𝒩_z, T(z)).
In Proposition <ref>, we characterized the distributions of pure-birth processes.
As a result, to conclude the present proof, it is sufficient to show that the family (_z)_z∈ J satisfies the following conditions:
* We have 𝒯 = ⋃_n{0,1}^n and α(∅) = z, ℙ_z-almost surely.
* We have ℙ_z(ω(∅) < x) = e^-β((x,z]).
* Under ℙ_z, ℒ(∂𝒯) is an exponential r.v. with mean e^-β((z,z_0]).
* Under ℙ_z, define for i∈{0,1},
α_i(u) := α(iu), ω_i(u) := ω(iu), ℒ_i the measure on ∂𝒯 such that ℒ_i(B_u) = ℒ(B_iu) for all u∈𝒯 and finally 𝕋_i := (𝒯, α_i, ω_i, ℒ_i).
Then the conditional distribution of the pair of trees (𝕋_0,𝕋_1) given ω(∅) is ℙ_ω(∅)^⊗ 2, i.e. they are independent with the same distribution ℙ_ω(∅).
Let us now prove each assertion.
(i) Since ν has infinite total mass, we have a.s. for any 0≤ a < b ≤ T(z):
#(𝒩_z∩ ([a,b]×(0,z])) = ∞.
Also, since ν is diffuse, we have a.s. for all x>0 that #(𝒩∩ (ℝ_+×{x})) ≤ 1.
Those two conditions imply that 𝕋^z is a complete binary tree.
(ii) – (iii) The first branching point of the tree 𝕋^z is ω(∅) = max{x>0 : (t,x) ∈𝒩_z for some t}.
Also the total mass of the tree is ℒ(∂𝒯) = T(z), which is an exponential random variable with mean (ν(z))^-1 = e^-β((z,z_0]).
We can easily compute the distribution of ω(∅) under 𝒫_z, since conditional on T(z), 𝒩_z is a Poisson point process on [0,T(z))× [0,z] with intensity dt⊗ν.
Therefore, for x∈ (0,z]:
𝒫_z(ω(∅) < x) = ∫_0^∞𝒫_z(T(z) ∈ dt) e^-t ν([x, z])
= ∫_0^∞ν(z) e^-ν(z)t e^-t(ν(x)-ν(z)) dt
= ∫_0^∞ν(z) e^-ν(x)t dt
= ν(z)/ν(x) = e^-β((x,z]) .
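As an illustration (ours, with an assumed intensity), this law can be checked numerically for ν(dx) = dx/x^2, for which the tail mass is ν(x) = 1/x and the prediction becomes 𝒫_z(ω(∅) < x) = x/z. Points below a small height cutoff eps are discarded, which only affects the (rare) event {ω(∅) < eps}.

import numpy as np

rng = np.random.default_rng(1)

z, eps, reps = 1.0, 1e-2, 20000
omegas = np.zeros(reps)
for i in range(reps):
    T = rng.exponential(z)                        # T(z) ~ Exp(nu(z)), nu(z) = 1/z
    n = rng.poisson(T * (1.0 / eps - 1.0 / z))    # points with heights in [eps, z)
    if n > 0:
        u = rng.uniform(size=n)
        heights = 1.0 / (1.0 / eps - u * (1.0 / eps - 1.0 / z))
        omegas[i] = heights.max()                 # omega(root) = highest atom

for x in (0.25, 0.5, 0.75):
    print(x, (omegas < x).mean(), "vs", x / z)    # predicted CDF nu(z)/nu(x) = x/z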
(iv) It remains to prove the branching property for the family (ℙ_z)_z ∈ (0,z_0].
Under 𝒫_z, conditional on ω(∅), let (𝒩_1, T_1) and (𝒩_2, T_2) be independent random variables of identical distribution 𝒫_ω(∅).
We concatenate 𝒩_1 and 𝒩_2, adding a point of height ω(∅) between the two sets:
𝒩 = 𝒩_1 ∪{(T_1, ω(∅))}∪{(T_1 + t, x), (t,x) ∈𝒩_2 }.
We claim that the following equality in distribution holds:
(𝒩, T_1 + T_2) =_d (𝒩_z, T(z)),
which formulates the branching property for the family (ℙ_z)_z∈ (0,z_0].
From basic properties of Poisson point processes, we know that conditional on T(z), the highest atom of 𝒩_z is (U, Z), with U having a uniform distribution on [0, T(z)] and Z:=ω(∅) independent of U, such that
𝒫_z(Z ≤ x | T(z)) = e^-T(z)(ν(x)-ν(z)).
The joint distribution of (Z, T(z)) is therefore given by:
𝔼[f(T(z)) 1_Z ≤ x] = ∫_0^∞ν(z) e^-ν(z)t e^-t(ν(x)-ν(z)) f(t) dt
= ∫_0^∞ν(z) e^-ν(x)t f(t) dt
= ∫_0^∞ν(z) ( ∫_ν(x)^∞ t e^-ut du ) f(t) dt
= ∫_ν(x)^∞ (ν(z)/u^2) ( ∫_0^∞ t u^2 e^-ut f(t) dt ) du .
In other words, the random variable ν(Z) has density (ν(z)/u^2) 1_u ≥ν(z) du, and conditional on ν(Z), T(z) follows a Gamma distribution with parameters (ν(Z), 2).
As U/T(z) is uniform on [0, 1] and independent of Z, one can check that (Z, T(z), U) has the same distribution as (Z, T_1 + T_2, T_1), where conditional on Z, the variables T_1 and T_2 are independent with the same exponential distribution with parameter ν(Z).
This concludes the proof of (<ref>) since conditional on (Z, T(z), U) (resp. (Z, T_1 + T_2, T_1)), 𝒩_z∖{(U,Z)} (resp. 𝒩∖{(T_1, Z)}) is a Poisson point process on [0,T(z))× [0,Z] (resp. on [0,T_1+T_2)× [0,Z]) with intensity dt⊗ν.
§.§ Subordinators and Regenerative Sets
We use some classical results about regenerative sets and subordinators, whose proofs can be found in the first two sections of Bertoin's Saint-Flour lecture notes <cit.>.
A subordinator is a right-continuous, increasing Markov process (σ_t)_t≥ 0 started from 0 with values in [0, ∞], where ∞ is an absorbing state, such that for all s<t, conditional on {σ_s < ∞}, we have
σ_t - σ_s =_d σ_t-s.
The distribution of a subordinator is characterized by its Laplace exponent, defined as the increasing function ϕ: ℝ_+→ℝ_+ such that for all λ, t ≥ 0,
𝔼[e^-λσ_t] = e^-tϕ(λ),
with the convention e^-λ·∞ = 0 for all λ≥ 0.
The Laplace exponent can be written under the form
ϕ(λ) = k + dλ + ∫_(0, ∞) (1 - e^-λ x) π(dx),
where k is called the killing rate, d the drift coefficient and π the Lévy measure of the subordinator.
Necessarily, we have k,d ≥ 0 and π satisfies
∫_(0, ∞) (1∧ x) π(dx) < ∞.
Letting ζ := inf{t ≥ 0, σ_t = ∞} be the lifetime of the subordinator, ζ follows an exponential distribution with parameter k (if k =0, then ζ≡∞).
Also we have almost surely for all t < ζ,
σ_t = dt + ∑_s≤ tΔσ_s,
and the set of jumps {(s, Δσ_s), Δσ_s > 0} is a Poisson point process with intensity ds ⊗π.
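For intuition, the decomposition σ_t = dt + ∑_s≤ tΔσ_s can be simulated directly in the finite-activity case. The sketch below is ours, with assumed parameters (an exponential jump law, no killing); infinite-activity Lévy measures would require truncating the small jumps.

import numpy as np

rng = np.random.default_rng(2)

def subordinator_path(T, d=0.1, rate=5.0):
    # sigma_t = d t + sum of jumps; the jumps {(s, x)} form a Poisson point
    # process with intensity ds (x) pi(dx), here pi(dx) = rate * e^{-x} dx
    # (finite activity, killing rate k = 0).
    n = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, size=n))
    jumps = rng.exponential(1.0, size=n)
    grid = np.linspace(0.0, T, 1001)
    path = d * grid + np.array([jumps[times <= s].sum() for s in grid])
    return grid, path

grid, path = subordinator_path(10.0)
print(path[-1], "vs E[sigma_10] =", (0.1 + 5.0 * 1.0) * 10.0)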
The renewal measure of a subordinator is defined as the measure U(dx) on ℝ_+ such that for any non-negative measurable function f
∫_ℝ_+ f(x) U(dx) = 𝔼[∫_0^ζ f(σ_t) dt ].
This renewal measure characterizes the distribution of σ since its Laplace transform is the inverse of ϕ
1/ϕ(λ) = ∫_ℝ_+ e^-λ x U(dx).
Remark also that setting L_x := inf{t ≥ 0, σ_t > x} the right-continuous inverse of σ, we have
U(x) := U([0,x]) = 𝔼[∫_0^∞ 1_σ_t ≤ x dt ] = 𝔼[L_x].
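A standard worked example: for the α-stable subordinator (killing rate k = 0, drift d = 0, Lévy measure π(dx) = c x^-1-α dx with 0 < α < 1 and the constant c chosen so that ϕ(λ) = λ^α), the renewal measure is U(dx) = (x^α-1/Γ(α)) dx; indeed ∫_0^∞ e^-λ x x^α-1/Γ(α) dx = λ^-α = 1/ϕ(λ).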
Given a probability space (Ω, ℱ, ℙ) equipped with a complete, right-continuous filtration (ℱ_t)_t≥ 0, a regenerative set R is a random closed set containing 0 for which the following properties hold:
* Progressive measurability. For all t≥ 0, the set
{ (s, ω) ∈ [0,t]×Ω , s ∈ R(ω) }
is in ℬ([0,t])⊗ℱ_t.
* Regeneration property. For a (ℱ_t)_t≥ 0-stopping time T such that a.s. on {T< ∞}, T∈ R and T is not right-isolated in R, we have:
R∩[T, ∞[ - T =_d R,
where R∩[T, ∞[ - T is defined formally as the set { t≥ 0, T+t ∈ R }.
We define the range of a subordinator σ as the closed set {σ_t, t ≥ 0}, and see that all regenerative sets can be expressed in this form.
The range of a subordinator is a regenerative set. Conversely, if R is a regenerative set without isolated points, there exists a subordinator σ whose range is R almost surely.
In the case where λ (R)>0 a.s., one can define such a subordinator as
σ_t := inf{x ≥ 0, λ([0,x]∩ R) > t }.
Then σ is the unique subordinator with drift 1 and range R, and its renewal measure is U(dx) = ℙ(x ∈ R) dx.
Notice that λ(R) = inf{t ≥ 0, σ_t = ∞} = ζ by definition.
Therefore λ(R) is an exponential random variable with parameter k, the killing rate of σ.
| In this paper, we give a new flavor of an old problem of mathematical population genetics which is to characterize the so-called allelic partition of a population. To address this problem, one needs to specify a model for the genealogy (i.e., a random tree) and a model for the mutational events (i.e., a point process on the tree). Two typical assumptions that we will adopt here are: the infinite-allele assumption, where each mutation event confers a new type, called allele, to its carrier; and the neutrality of mutations, in the sense that co-existing individuals are exchangeable, regardless of the alleles they carry. Here, our goal is to study the allelic partition of the boundary of some random real trees that can be seen as the limits of properly rescaled binary branching processes.
In a discrete tree, a natural object describing the allelic partition without labeling alleles is the allele frequency spectrum
(A_k)_k≥ 1, where A_k is the number of alleles carried by exactly k co-existing individuals in the population. In the present paper, we start from a time-inhomogeneous, supercritical binary branching process with finite population N(t) at any time t, and we are interested in the allelic partition of individuals `co-existing at infinity' (t→∞), that is the allelic partition at the tree boundary. To define the analogue of the frequency spectrum, we need to equip the tree boundary with a measure ℓ, which we do as follows. Roughly speaking, if N_u(t) is the number of individuals co-existing at time t in the subtree 𝒯_u consisting of descendants of the same fixed individual u, the measure ℓ(𝒯_u) is proportional to lim_t↑∞ N_u(t)/N(t).
It is shown in Section <ref> that the tree boundary of any supercritical branching process endowed with the (properly rescaled) tree metric and the measure ℓ has the same law as a random real tree, called coalescent point process (CPP) generated from a Poisson point process, equipped with the so-called comb metric <cit.> and the Lebesgue measure.
Taking this result for granted, we will focus in Sections <ref>, <ref> and <ref> on coalescent point processes with mutations.
In the literature, various models of random trees and their associated allelic partitions have been considered. The most renowned result in this context is Ewens' Sampling Formula <cit.>, a formula that describes explicitly the distribution of the allele frequency spectrum in a sample of n co-existing individuals taken from a stationary population with genealogy given by the Moran model with population size N and mutations occurring at birth with probability θ/N. When time is rescaled by N and N→∞, this model converges to the Kingman coalescent <cit.> with Poissonian mutations occurring at rate θ along the branches of the coalescent tree. In the same vein, a wealth of recent papers has dealt with the allelic partition of a sample taken from a Λ-coalescent or a Ξ-coalescent with Poissonian mutations, e.g., <cit.>.
In parallel, several authors have studied the allelic partition in the context of branching processes, starting with <cit.> and the monograph <cit.>, see <cit.> and the references therein. In a more recent series of papers <cit.>, the second author and his co-authors have studied the allelic partition at a fixed time of so-called `splitting trees', which are discrete branching trees where individuals live i.i.d lifetimes and give birth at constant rate. In particular, they obtained the almost sure convergence of the normalized frequency spectrum (A_k(t)/N(t))_k≥ 1 as t→∞ <cit.> as well as the convergence in distribution of the (properly rescaled) sizes of the most abundant alleles <cit.>. The limiting spectrum of these trees is to be contrasted with the spectrum of their limit, which is the subject of the present study, as explained earlier.
Another subject of interest is the allelic partition of the entire progeny of a (sub)critical branching process, as studied in particular in <cit.>. The scaling limit of critical branching trees with mutations is a Brownian tree with Poissonian mutations on its skeleton. Cutting such a tree at the mutation points gives rise to a forest of trees whose distribution is investigated in the last section of <cit.>, and relates to cuts of Aldous' CRT in <cit.> or the Poisson snake process <cit.>.
The couple of previously cited works not only deal with the limits of allelic partitions for the whole discrete tree, but also tackle the limiting object directly. This is also the goal of the present work, but with quite different aims.
First, we construct in Section <ref> an ultrametric tree with boundary measured by a `Lebesgue measure' ℓ, from a Poisson point process with infinite intensity ν, on which we superimpose Poissonian neutral mutations with intensity measure μ. Section <ref> ends with Proposition <ref>, which states that the total number of mutations in any subtree is either finite a.s. or infinite a.s. according to an explicit criterion involving ν and μ.
The structure of the allelic partition at the boundary is studied in detail in Section <ref>. Theorem <ref> ensures that the subset of the boundary carrying no mutations (or clonal set) is a (killed) regenerative set with explicit Laplace exponent in terms of ν and μ, and measure given in Corollary <ref>. The mean intensity Λ of the allele frequency spectrum at the boundary is defined by Λ(B) := 𝔼∑ 1_ℓ(R)∈ B, where the sum is taken over all allelic clusters R at the boundary. It is explicitly expressed in Proposition <ref>. An a.s. convergence result as the radius of the tree goes to infinity is given in Proposition <ref> for the properly rescaled number of alleles with measure larger than q>0, which is the analogue of ∑_k≥ q A_k in the discrete setting.
Section <ref> is dedicated to the study of the dynamics of the clonal (mutation-free) subtree when mutations are added or removed through a natural coupling of mutations in the case when μ(dx) = θ dx. It is straightforward that this process is Markovian as mutations are added. As mutations are removed, the growth process of clonal trees is also Markovian, and its semigroup and generator are provided in Theorem <ref>.
Section <ref> is devoted to the links between measured coalescent point processes and measured pure-birth trees which motivate the present study. Lemma <ref> gives a representation of every CPP with measured boundary, in terms of a rescaled pure-birth process with boundary measured by the rescaled counting measures at fixed times. Conversely, Theorem <ref> gives a representation of any such pure-birth process in terms of a CPP with intensity measure ν(dx) = dx/x^2, as in the case of the Brownian tree. | null | null | null | null | null |
http://arxiv.org/abs/1701.07465v2 | 20170125195916 | Relating the finite-volume spectrum and the two-and-three-particle $S$ matrix for relativistic systems of identical scalar particles | [
"Raúl A. Briceño",
"Maxwell T. Hansen",
"Stephen R. Sharpe"
] | hep-lat | [
"hep-lat",
"nucl-th"
] |
JLAB-THY-17-2400
[e-mail: ][email protected]
Thomas Jefferson National Accelerator Facility, 12000 Jefferson Avenue, Newport News, VA 23606, USA
[e-mail: ][email protected]
Institut für Kernphysik and Helmholtz Institute Mainz, Johannes Gutenberg-Universität Mainz,
55099 Mainz, Germany
[e-mail: ][email protected]
Physics Department, University of Washington, Seattle, WA 98195-1560, USA
Working in relativistic quantum field theory, we derive the quantization condition satisfied by coupled two- and three-particle systems of identical scalar particles confined to a cubic spatial volume with periodicity L. This gives the relation between the finite-volume spectrum and the infinite-volume 2→2, 2→3 and 3→3 scattering amplitudes for such theories. The result holds for relativistic systems composed of scalar particles with nonzero mass m, whose center of mass energy lies below the four-particle threshold, and for which the two-particle K matrix has no singularities below the three-particle threshold. The quantization condition is exact up to corrections of the order 𝒪(e^-mL) and holds for any choice of total momenta satisfying the boundary conditions.
§ INTRODUCTION
Over the past few decades, enormous progress has been made in determining the properties of hadrons
directly from the fundamental theory of the strong force,
quantum chromodynamics (QCD).
A key tool in such investigations is lattice QCD (LQCD), which can be used to numerically
calculate correlation functions defined on a discretized, finite, Euclidean spacetime.
State-of-the-art LQCD
calculations of stable hadronic states use dynamical up, down, strange, and even charm quarks, with physical quark masses, and include isospin breaking both from the mass difference of the up and down quarks and from the effects of quantum electrodynamics (QED).
For recent reviews, see Refs. <cit.>.
Using LQCD to investigate hadronic resonances that decay via the strong force is significantly more challenging. Resonances do not correspond to eigenstates of the QCD Hamiltonian and thus cannot be studied by directly interpolating a state with the desired quantum numbers. Instead, resonance properties are encoded in scattering and transition amplitudes, and only by extracting these observables can one make systematic, quantitative statements. In fact, it is not a priori clear that one can extract such observables using LQCD. Confining the system to a finite volume obscures the meaning of asymptotic states and restricting to Euclidean momenta prevents one from directly applying the standard approach of Lehmann-Symanzik-Zimmermann reduction. In addition, since one can only access numerically determined Euclidean correlators with nonvanishing noise, analytic continuation to Minkowski momenta is, in general, an
ill-posed problem.
For two-particle states, it is by now well known that scattering amplitudes
can be constrained indirectly, by first extracting the discrete finite-volume energy spectrum.
The approach follows from seminal work by Lüscher <cit.> who derived a relation between the finite-volume energies and the elastic two-particle scattering amplitude for a system of identical scalar particles.
Since then, this relation has been generalized to accommodate non zero spatial momentum in the finite-volume frame and also to describe more complicated two-particle systems, including nonidentical and nondegenerate particles as well as particles with intrinsic spin <cit.>. This formalism has been applied in many numerical LQCD calculations to determine the properties of low-lying resonances that decay into a single two-particle channel <cit.>,
including most recently the first study of the lightest hadronic
resonance, the σ/f_0(500) <cit.>. The extension
to systems with multiple coupled two-particle channels <cit.>, has led to the first LQCD results for resonances at higher energies, where more than one decay channel is open <cit.>.
Thus far, however, no LQCD calculations have been performed for resonances that have a significant branching
fraction into three or more particles. This is largely because the formalism needed to do so, the three-particle
extension of the relations summarized above, is still under construction.
Early work in this direction includes the nonrelativistic studies presented
in Refs. <cit.>.
More recently, in Refs. <cit.>,
two of the present authors derived a three-particle quantization condition for identical scalar particles
using a generic relativistic quantum field theory (subject to some restrictions described below).
Since these articles are the starting point for the present work,
we briefly summarize their methodology.[
We also note that additional checks of the quantization condition have been given in
Refs. <cit.>. ]
Reference <cit.> studied a three-particle finite-volume correlator and determined its pole positions,
which correspond to the finite-volume energies, in terms of an infinite-volume scattering quantity. This was done by deriving a skeleton expansion, expressing each finite-volume Feynman diagram in terms of its infinite-volume counterpart plus a finite-volume residue, summing the result into a closed form and then identifying the pole locations. The resulting expression for the finite-volume energies depends on a nonstandard infinite-volume scattering
quantity, the divergence-free K matrix, denoted 𝒦_df,3.
A drawback of this result is that 𝒦_df,3, as well as other quantities in the quantization condition,
depends on a smooth cutoff function (denoted H_3 below),
although the energies themselves are independent of this cutoff.
Thus the relation to the infinite-volume scattering amplitude is not explicit.
The second publication, Ref. <cit.>, resolved this issue by deriving
the relation between 𝒦_df,3 and the standard infinite-volume three-to-three scattering
amplitude ℳ_3. We comment that, like the two-to-two scattering amplitude, ℳ_2, the three-particle scattering amplitude must satisfy constraints relating its real and imaginary parts that are dictated by unitarity. These constraints are built into quantum field theory, and can be recovered order by order in a diagrammatic
expansion. In the two-particle case, both the definition of the S matrix and the diagrammatic analysis can be
used to show that [ℳ_2]^-1∝cot δ - i, where the scattering phase shift δ
(and the proportionality constant) is real.
In the three-particle sector, unitarity takes a much more complicated form but enters our result through
the condition that 𝒦_df,3 is a real function on a three-particle phase space.
The relation to ℳ_3 then automatically produces the required unitarity properties,
in addition to removing the scheme dependence.
As mentioned above, the results of Refs. <cit.>
were obtained under some restrictions.
The finite spatial volume was taken to be cubic (with linear extent L),
with periodic boundary conditions on the fields,
and the particles were assumed to be spinless and identical (with mass m).
The more important restrictions concerned the class of interactions considered.
These were assumed to satisfy the following two properties:
* They have a ℤ_2 symmetry such that
2↔3 transitions are forbidden;
i.e. only even-legged vertices are allowed.
* They are such that the two-particle K matrix, appearing due to subprocesses in which two particles scatter while the third spectates, is smooth in the kinematically available energy range.
The relation between the three-particle finite-volume energies and the three-to-three scattering amplitude, summarized above, holds for any system satisfying these restrictions. The relation is valid up to exponentially suppressed corrections scaling as e^- m L, which we assume are also negligible here, and holds for any allowed
value of the total three-momentum in the finite-volume frame.
In this work we remove the first of the two major restrictions; i.e. we consider theories without a ℤ_2 symmetry, so that all vertices are allowed
in the field theory.
We continue to impose the second restriction.
This leads to a relativistic, model-independent quantization condition that can be used to
extract coupled two- and three-particle scattering amplitudes from LQCD.
We otherwise use the setup of the previous studies. In particular, we assume a theory of
identical scalar particles in a periodic, cubic box.
Given past experience in the two-particle sector, we expect that these restrictions on particle content
will be straightforward to remove. We also expect that the generalization to multiple two- and three-body
channels will be straightforward.
We defer consideration of these cases until a later publication.
The generalization that we derive here is a necessary step toward using LQCD to study resonances that decay into both two- and three-particle states. A prominent example is the Roper resonance, N(1440), the lowest lying excitation of the nucleon. This state is counterintuitive from the perspective of quark models, as it lies below the
first negative parity excited state.
The Roper resonance is estimated to decay to Nπ with a branching fraction of 55%-75%
and otherwise to Nππ, with other open channels highly suppressed.
Similarly, nearly all of the recently discovered XYZ states
have significant branching fractions into both two- and three-particle final states (see Refs. <cit.> for recent reviews).
These states exhibit the rich phenomenology of nonperturbative QCD and
it is thus highly desirable to have theoretical methods to extract their
properties directly from the underlying theory.
This article derives two main results:
The relation between the discrete finite-volume spectrum and the generalized divergence-free K matrix,
given in Eq. (<ref>), and the relation between the K matrix and the coupled
two- and three-particle scattering amplitudes, given compactly in Eq. (<ref>) and more explicitly throughout Sec. <ref>.
These results generalize those of Refs. <cit.> and <cit.>, respectively.
The first, Eq. (<ref>), has a form reminiscent of the coupled two-particle result <cit.>. The finite-volume effects are contained in a diagonal two-by-two matrix with entries F_2 in the two-particle sector and F_3 in the three-particle sector.
Aside from minor technical changes, these are the same finite-volume quantities that arise in the previously derived two- and three-particle quantization conditions <cit.>. The coupling between channels is captured by the generalized divergence-free K matrix. This contains diagonal elements, mediating two-to-two and three-to-three transitions, as well as off-diagonal elements that encode the two-to-three transitions.
To obtain both the quantization condition and the relation to the scattering amplitude from a single calculation, we use a matrix of finite-volume
correlators, ℳ_L, chosen so that it goes over to the corresponding matrix of
infinite-volume scattering amplitudes when the L→∞ limit is taken appropriately.
This differs from the type of correlator used in Ref. <cit.>,
but is the direct generalization of that considered in Ref. <cit.>.
The results of this work, like those given in Refs. <cit.>, are derived by analyzing an infinite set of finite-volume Feynman diagrams and identifying the power-law finite-volume effects.
The central complication new to the present derivation comes from diagrams such as that of
Fig. <ref>, in which a two-to-three transition is mediated by a one-to-two transition
together with a spectator particle. The cuts on the right-hand side of the figure indicate that this diagram
gives rise to finite-volume effects from both two- and three-particle states. As we describe in detail below,
a consequence of such diagrams is that we cannot use standard fully dressed propagators in two-particle loops,
but instead need to introduce modified propagators built from two-particle-irreducible (2PI) self-energy diagrams.
In addition, we must keep track of the fact that the two- and three-particle
states in these diagrams share a common coordinate. This makes it more
challenging to separate the finite-volume effects arising from the two- and three-particle states
in diagrams such as that of Fig. <ref>.
To address this complication, and other technical issues that arise, we use here an approach
for studying the finite-volume correlator that differs from the skeleton-expansion-based methods of
Refs. <cit.>.
In particular, we construct an expansion using a mix of fully dressed and
modified two- and three-particle irreducible propagators, which are connected via the local interactions of the
general quantum field theory.
We then identify all power-law finite-volume effects using time-ordered perturbation theory (TOPT).
We also introduce smooth cutoff functions, H_2 and H_3, that only have support in the vicinity of the two- and three-particle poles, respectively. A key simplification of this construction is that,
in disconnected two-to-three transitions such as that shown in Fig. <ref>,
the two- and three-particle poles do not contribute simultaneously.
This is an extension of the result that an on-shell one-to-two transition
is kinematically forbidden for stable particles.
After eliminating such disconnected two-to-three transitions we are left with a series of terms built from two- and three-particle poles, summed over the spatial momenta allowed in the periodic box, and with all two-to-three transitions mediated by smooth functions. To further reduce these expressions, we apply the results of Refs. <cit.>, to express the sums over poles as products of infinite-volume quantities and finite-volume functions. The modifications that we make to accommodate two-to-three transitions affect the exact forms of these poles, so that some effort is required to extend the previous results to rigorously apply here. With these modified relations we are able to derive a closed form for the finite-volume correlator and to express its pole positions in terms of a quantization condition.
The remainder of this work is organized as follows. In the following section we derive the quantization condition relating the discrete finite-volume spectrum to the generalized divergence-free K matrix. After giving the precise definition of the finite-volume correlator, ℳ_L, and introducing various kinematic variables, we divide the bulk of the derivation into four subsections. In Sec. <ref> we apply standard TOPT to identify all of the two- and three-particle states that lead to important finite-volume effects. However, because of technical issues, the form reached via the standard approach is not useful for the subsequent derivation. Thus, in Sec. <ref>, we provide an alternative procedure that displays the same finite-volume effects in a more useful form. This improved derivation is highly involved and we relegate the technical details to Appendix <ref>. With the two- and three-particle poles explicitly displayed, in Sec. <ref> we complete the decomposition of finite- and infinite-volume quantities by extending and applying various relations derived in Refs. <cit.>.
Again, many technical details are collected in Appendix <ref>.
Finally, in Sec. <ref>, we identify the poles in ℳ_L and thereby reach our quantization condition.
To complete the derivation, in Sec. <ref> we relate the generalized divergence-free K matrix to the standard infinite-volume scattering amplitude. Our derivation here closely follows the approach of Ref. <cit.> but is complicated by the mixing of two- and three-body states.
After deriving an expression for ℳ_3 in terms of the K matrix in Sec. <ref>,
we then invert the relation in Sec. <ref>.
Given a parametrization of the scattering amplitude, this allows one to determine the K matrix and thus predict the finite-volume spectrum in terms of a given parameter set. Having given the general relation between finite-volume energies and coupled two- and three-particle scattering amplitudes, in Sec. <ref> we study various limiting cases that simplify the general results. We conclude and give an outlook in Sec. <ref>.
We include four appendixes. In addition to the two mentioned above,
Appendix <ref> describes a specific example of the smooth cutoff functions, H_2 and H_3,
that are used to simplify the results in various ways, in particular by removing disconnected two-to-three transitions,
while Appendix <ref> derives properties of the divergence-free K matrix
that follow from the parity and time-reversal invariance of the theory.
§ DERIVATION OF THE QUANTIZATION CONDITION
In this section we derive the main result of this work, a relation between the discrete finite-volume energy spectrum of a relativistic quantum field theory and that theory's physically observable, infinite-volume scattering amplitudes in the coupled two- and three-particle subspace. We restrict attention to theories with identical massive scalar particles, whose physical mass is denoted m.
As we explain in more detail below, we must also assume that the two-particle K matrices,
appearing due to two-particle subprocesses in the three-to-three scattering amplitude,
are only sampled at energies where they have no poles.
The main result of this work, given in Eq. (<ref>) below, is a quantization condition of the form
Δ^[ℳ](E, P⃗, L) = 0 .
Here P⃗ is the total three-momentum of the system,
and L is the linear extent of the periodic, cubic spatial volume.
The superscript ℳ indicates that the quantization condition
depends on the infinite-volume scattering amplitudes of the theory.
For fixed values of P⃗ and L, solutions to Eq. (<ref>) occur at
a discrete set of energies E=E_1, E_2, E_3, ….
These give the finite-volume energy levels of the system,
up to exponentially suppressed corrections
of the form e^- m L that we neglect throughout.
We begin our derivation by introducing various kinematic variables.
Since in general we work in a “moving frame," with total energy-momentum (E, P⃗),
the energy in the center-of-mass (CM) frame is
E^* = √(E^2 - P⃗^2) .
If the energy-momentum is shared between two particles, we denote the momentum
of one by p⃗, and that of the other by b⃗_p = P⃗-p⃗.
We add primes to these quantities if there are multiple two-particle states.
If the particles are on shell, we denote their energies as ω_p
and ω_Pp, respectively, with
ω_p = √(p⃗^2 + m^2) and ω_Pp = √((P⃗ - p⃗)^2 + m^2 ) = √(b⃗_p^ 2 + m^2) .
If both particles are on shell, then when we boost to the CM frame, their energy-momentum four-vectors
become (ω_p^*, p⃗^*) and (ω_p^*, -p⃗^*), respectively, with
ω_p^*=E^*/2 and p^* ≡|p⃗^* | = q^*, where
q^* = √(E^*2/4 - m^2) .
Thus the only remaining degree of freedom, with (E, P⃗) fixed, is the direction of CM frame momentum p̂^*. Throughout this work we use p̂^* to parametrize an on-shell two-particle state.
A similar description applies when three particles share the total energy-momentum.
The generic names we use for their momenta are
k⃗, a⃗ and b⃗_ka=P⃗ - k⃗ -a⃗.
If these particles are on shell, their energies are denoted ω_k, ω_a and
ω_Pka, respectively, with
ω_Pka = √((P⃗ - k⃗ - a⃗)^2 + m^2) = √(b⃗_ka^ 2 + m^2) .
We will often consider the situation in which one of the particles, say that with momentum k⃗,
is on shell (and is referred to as the “spectator"),
while the other two may or may not be on shell (and are called the “nonspectator pair").
In this situation, if we boost to the CM frame of the nonspectator pair,
the energy of this pair in this frame is denoted E^*_2,k and is given by
E^*_2,k = √((E - ω_k)^2 - (P⃗ - k⃗)^2) .
If we further assume that all three particles are on shell, then the four-momenta
of the nonspectator pair boost to their CM frame as
(ω_a, a⃗) → (ω_a^*, a⃗^*),
(ω_Pka, b⃗_ka) → (ω_a^*,- a⃗^*),
where ω_a^* = E^*_2,k/2 and a^* ≡|a⃗^* | = q_k^*,
with
q_k^* = √(E_2,k^*2/4 - m^2) .
Thus the degrees of freedom for three on-shell particles
with total energy-momentum (E, P⃗) fixed can be parametrized
by the ordered pair k⃗, â^*—i.e. a spectator momentum and the direction
of the nonspectator pair in their CM frame.
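The kinematic maps just described are simple enough to transcribe directly. The following sketch (ours, in Python, working in units with m = 1 by default) evaluates E^*, q^*, E^*_2,k and q^*_k from the equations above, assuming energies above the relevant thresholds so that the square roots are real.

import numpy as np

def cm_energy(E, P):
    # E* = sqrt(E^2 - P^2)
    P = np.asarray(P, dtype=float)
    return np.sqrt(E**2 - P @ P)

def q_star(E, P, m=1.0):
    # back-to-back momentum of an on-shell pair, q* = sqrt(E*^2/4 - m^2)
    return np.sqrt(cm_energy(E, P)**2 / 4.0 - m**2)

def pair_cm(E, P, k, m=1.0):
    # CM energy E*_{2,k} of the nonspectator pair, spectator momentum k
    P, k = np.asarray(P, dtype=float), np.asarray(k, dtype=float)
    omega_k = np.sqrt(k @ k + m**2)
    b = P - k
    E2 = np.sqrt((E - omega_k)**2 - b @ b)
    return E2, np.sqrt(E2**2 / 4.0 - m**2)   # (E*_{2,k}, q*_k)

print(pair_cm(3.0, np.zeros(3), np.zeros(3)))  # (2.0, 0.0): pair at threshold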
The quantization condition derived in this work is valid for CM energies in the range[
Strictly speaking, the quantization condition is valid also for E^* < m,
but we do not expect this to be of practical interest as there are, in general,
no finite-volume states in this region.
The quantization condition will have a solution for E^*=m + 𝒪(e^-mL),
corresponding to a single-particle pole, but the exponentially suppressed finite-volume corrections
in the position of this pole will be incorrect. This is because we do not systematically control
such corrections. This is in contrast to finite-volume corrections to the mass of a two-particle
bound state, which are proportional to e^-κ L, with κ the binding momentum.
These are correctly reproduced by the quantization condition.
]
m < E^* < min[4m, m + M_p] .
Here M_p is the energy of the lowest lying pole in the two-particle K matrix
(in the two-particle CM frame).
In practice we expect the region of practical utility to run from just below the
two-particle threshold at E^*=2m, where there may be bound states, up to energies below the quoted upper limit. We caution that at energies below but near the upper limit, i.e. at E^* = min[4m,m+M_p] - κ^2/m with κ≪ m, neglected corrections of the form e^-c κ L [with c a constant of 𝒪(1)]
can become important. This indicates the transition into the new kinematic region where four-particle states (or K matrix poles) must be included.
To explain the kinematic range quoted in Eq. (<ref>), we work
through the different regimes in E^*.
The following discussion is summarized schematically in Fig. <ref>.
In the range m < E^* < 3m, the infinite-volume system
is described solely by the two-to-two scattering amplitude,
and in finite volume this amplitude is sufficient to determine the spectral energies.
This is done with the quantization condition of Lüscher <cit.>,
and its generalizations.
The major new result of the present work is to provide the quantization condition
for 3m < E^* < 4m. (For ease of discussion we assume first that
the two-particle K matrix is smooth for the energies considered.)
In this region, both two- and three-particle states can go on shell,
and the dynamics of the infinite-volume system are governed by the coupled two- and three-particle scattering
amplitudes. Thus, one would expect that these same amplitudes determine the
finite-volume spectrum.
In this work we demonstrate that this is in fact the case and give the
detailed form of the resulting quantization condition.
Above 4m, four-particle states become important.
We do not include the effects of these and are thus limited by the four-particle production threshold. In fact, depending on the dynamics of the system, contributions from four-particle states might become important below threshold, as already discussed above.
Finally, we note that within the three-to-three scattering amplitude, two-to-two scattering can occur as a subprocess with the third particle spectating. If the spectator is at rest in the three-particle CM frame, then the two-to-two amplitude is sampled at the highest possible two-particle CM frame energy, E^* - m. However, in our derivation of the quantization condition, we assume that the two-particle K matrix is a smooth function of the two-particle energies sampled.
Thus, if the K matrix does have a pole at some two-particle CM energy M_p,
then our result holds only when E^* - m < M_p ⟹ E^* < m + M_p.
This explains the additional restriction in Eq. (<ref>).
We now introduce the key object used in our derivation of the quantization condition,
a matrix of finite-volume correlators denoted ℳ_L,
ℳ_L ≡[ ℳ_L,22 ℳ_L,23; ℳ_L,32 ℳ_L,33 ] .
ℳ_L,ij is defined to be
the sum of all amputated, on-shell, connected diagrams with j incoming and i outgoing legs, evaluated in finite volume.
This is illustrated in Fig. <ref>.
The restriction to finite volume implies that all spatial loop momenta are summed, rather
than integrated, with the sum running over q⃗ = 2πn⃗/L, where n⃗ is a vector
of integers.[
We sometimes refer to the set of all such momenta as the “finite-volume set."]
The entries in depend on the coordinates introduced above that
parametrize either two or three on-shell particles. In particular,
ℳ_L,22 ≡ℳ_L,22(p̂'^*; p̂^*) ,
ℳ_L,23 ≡ℳ_L,23(p̂'^*; k⃗, â^*) ,
ℳ_L,32 ≡ℳ_L,32(k⃗' , â'^* ; p̂^*) ,
ℳ_L,33 ≡ℳ_L,33(k⃗', â'^*; k⃗, â^*) .
These are extensions of the quantities ℳ_2,L and ℳ_3,L introduced
in Ref. <cit.>. Indeed, the latter correspond, respectively,
to ℳ_L,22 and ℳ_L,33
in a theory having a ℤ_2 symmetry (in which case ℳ_L,23 = ℳ_L,32 = 0).
It is clear from their definition that the ℳ_L,ij
are finite-volume versions of the infinite-volume scattering
amplitudes. Indeed, as discussed in Sec. <ref>, if the limit L→∞ is taken
in an appropriate way, ℳ_L goes over to the infinite-volume scattering matrix.
Because of this, we loosely refer to the entries of ℳ_L as “finite-volume scattering amplitudes,"
recognizing that this is an imprecise description since there are no asymptotic states for finite L.
As defined, the external momenta of ℳ_L (including P⃗) must lie in the finite-volume set.
In this case ℳ_L is a bona fide finite-volume correlation function whose poles occur at the
energies of the finite-volume spectrum, a property that is crucial for our derivation of the
quantization condition.
In order to relate ℳ_L to its infinite-volume counterpart, however,
we will need to extend its definition so as to allow arbitrary external momenta.
As discussed in Ref. <cit.>, this extension is straightforward
using the diagrammatic definition.
In every loop, the external momentum is routed such that only one loop momentum lies
outside the finite-volume set. A consistent choice of which momenta lie outside this set
can be made.
In many of the previous studies concerned with deriving such
quantization conditions (see for example
Refs. <cit.>) it is standard
to first construct a skeleton expansion that expresses the
finite-volume correlator as a series of diagrams built from
Bethe-Salpeter kernels connected by
fully dressed propagators. The utility of this approach is that it
explicitly displays the loops of particles that can go on shell, and
it turns out that only these long-distance loops lead to the power-law
finite-volume effects that we are after. It also leads to a final
expression where all quantities can be defined in terms of
relativistically covariant amplitudes constructed from Feynman
diagrams.
In the present case, however, we find it simpler to follow a somewhat
different approach, based more extensively on TOPT. This avoids the necessity of introducing
a large number of different Bethe-Salpeter kernels. Instead of using
a skeleton expansion, we start from an all-orders diagrammatic
expansion for ℳ_L in terms of an arbitrary collection of
contact interactions, including all possible derivative structures.
At this stage, the only place where we group diagrams together into composite
building blocks is in the propagators. Here we take all propagators to
be fully dressed with two classes of exceptions. The first applies
to propagators appearing in a two-particle loop carrying the total energy-momentum
(E, P⃗). Then, instead of standard fully dressed propagators defined
via the one-particle irreducible (1PI) self energy diagrams,
we use a modified propagator defined via the two-particle
irreducible (2PI) self energy (see Fig. <ref>). This is necessary because if one of the particles in the two-particle loop
splits into two, then this leads to a three-particle state that
carries the total energy and momentum and can thus go on shell.
We refer to such propagators as “2PI dressed.” The second exception occurs for diagrams in which a single propagator carries
the total energy-momentum. Such a propagator must be built from self-energies that are three-particle irreducible (3PI) (see Fig. <ref>). This is done so that all two- and
three-particle intermediate states are kept explicit, and we call the resulting propagator “3PI dressed”.
The possibility of self-energy diagrams leading to on-shell three-particle
states is, in fact, one of the central complications of this work.
A second nonstandard aspect of our construction,
closely related to the use of 2PI and 3PI propagators,
is our use of a “diagram-by-diagram” renormalization procedure.
All diagrams are regulated in the ultraviolet (UV)
using a regulator that we do not need to explicitly specify.
Counterterms are then broken into an infinite series of terms designed to cancel the UV
divergences of each individual diagram, as well as certain finite pieces.
We then define each diagram to be implicitly accompanied by its counterterm so that the
divergence is canceled immediately.
In fact, this construction is only crucial for self-energy diagrams.
Let D^R_i denote the renormalized ith self-energy diagram in some labeling scheme
i = 1, 2, …. We then require that the counterterms are chosen such that
D^R_i(m^2) = 0 , d/dp^2 D^R_i(p^2) |_p^2 = m^2 = 0 ,
implying that each self-energy diagram scales as (p^2 - m^2)^2 near the pole. This ensures
that the 1PI, 2PI, 3PI and bare propagators all coincide at the one-particle pole.
This choice is not strictly necessary, since our final result is renormalization scheme independent, but it greatly simplifies the analysis.
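Concretely, the two conditions can be realized by a twice-subtracted combination: writing D_i(p^2) for the regulated diagram,
D^R_i(p^2) = D_i(p^2) - D_i(m^2) - (p^2 - m^2) (d D_i/dp^2)(m^2),
whose Taylor expansion about p^2 = m^2 indeed starts at order (p^2-m^2)^2. (This is one explicit choice consistent with the stated conditions; any scheme satisfying them works equally well, since the final result is scheme independent.)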
§.§ Identification of two- and three-particle poles: Naïve approach
In this section we use TOPT to give an expression for ℳ_L in which all the two- and
three-particle poles are explicit. However, the resulting expression turns out to be difficult
to use to determine the volume dependence, due to technical issues related
to self-energy insertions. This is why we call the approach taken here naïve.
The technical issues are resolved in the following section, and its accompanying appendix,
but we think that it is useful pedagogically to separate the basic
structure of the derivation, along with the needed notation, from the technicalities.
We give a brief recap of the essential features of TOPT in Appendix <ref>.
In essence, one evaluates all energy integrals in a Feynman diagram, arriving at a sum of terms, each of which is expressed as a set of integrals over only spatial momenta. This works equally well in finite volume, since we are taking the time direction to be infinite so that energy remains continuous. In the finite-volume case, the spatial momentum integrals are replaced by sums. Each term corresponds to a particular time ordering of vertices,
between which are intermediate states,
each coming with an energy denominator.
An example of such a time-ordered diagram is shown in Fig. <ref>.
In an abuse of notation we refer to the intermediate
states as “n-cuts" if they contain n particles.
In an amputated diagram, the factor associated with an n-cut is proportional to
𝒞_n ∝1/n!( ∏_i=1^n 1/2ω_i) 1/(E - ∑_i=1^n ω_i) ,
where ω_i is the on-shell energy of the i'th particle in the cut.
The 1/n! is the symmetry factor for identical particles,
and the factors of 1/(2ω_i) result from on-shell propagators.
The key point is that,
other than the factors appearing in Eq. (<ref>) associated with the intermediate states,
all contributions to a TOPT diagram are smooth, nonsingular functions of the momenta.
Thus, for the kinematic range we consider [given in Eq. (<ref>)]
the only singularities in the diagrams arise from two- and three-cuts, and have the respective forms
1/(E - ω_p - ω_Pp) and 1/(E - ω_k - ω_a - ω_Pka) .
Our aim here is to obtain an expression for ℳ_L in which all such factors are explicit.
If a summed momentum does not enter one of these two pole structures at least once,
then we infer that for this coordinate the summand is a smooth function of characteristic width m.
For such a smooth function s(k⃗),
the difference between the sum and corresponding integral
is exponentially suppressed,
[ 1/L^3∑_k⃗ - ∫_k⃗ ] s(k⃗) = 𝒪 (e^- m L) ,
Here the sum runs over the finite-volume set and ∫_k⃗ = ∫ d^3 k/(2 π)^3.
It follows that we may replace sums with integrals in all coordinates that do not
enter two- and three-particle poles.
This applies for loops with all n-cuts having n≥ 4,
and so we are left with the finite-volume
dependence arising only from loops involving two- and three-cuts.
This procedure is illustrated in Fig. <ref>.
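To make the exponential suppression concrete, here is a one-dimensional toy check (ours; the text concerns three dimensions, and for the Gaussian summand chosen the suppression is even stronger than the generic e^-mL):

import numpy as np

m, L = 1.0, 6.0
s = lambda k: np.exp(-(k / m) ** 2)      # smooth summand of width ~ m

n = np.arange(-200, 201)
fv = s(2.0 * np.pi * n / L).sum() / L    # (1/L) * sum over k = 2 pi n / L
iv = m * np.sqrt(np.pi) / (2.0 * np.pi)  # int dk/(2 pi) exp(-(k/m)^2)

print(fv - iv)   # ~ 7e-5 here: exponentially small in m L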
Following this procedure and organizing all terms leads to the following result:
ℳ_L = A ∑_j = 0^∞ [𝒞 A]^j - 𝒮
= A (1- 𝒞 A)^-1 - 𝒮 .
Each of the quantities on the right-hand side is a 2× 2 matrix, like ℳ_L.
The notation is highly compact, and is explained in detail below.
The basic content of the equation is, however, simple to state:
ℳ_L can be written as a sum of terms built from alternating insertions of smooth functions,
collected into the matrix A, and two- and three-particle poles, collected into the matrix 𝒞.
A contains all time orderings lying between adjacent two- or three-cuts,
and includes n-cuts with n≥ 4.
The same matrix A always appears between any pair of factors of or external states,
because the same set of time orderings always appears.
The elements of A are the analog of the Bethe-Salpeter kernels in the standard skeleton expansion approach.
The last term in Eq. (<ref>) is the subtraction, 𝒮.
This arises because of the presence of disconnected terms in A.
That such terms are present is easily seen from Fig. <ref>.
In the left-hand diagram, the contribution to A_23 is disconnected,
since it involves a particle that runs between _2 and _3 without interacting.
Similarly, the rightmost A_32 obtains a disconnected contribution.
The other two contributions (to the leftmost A_32 and to A_22) are connected.
In the right-hand figure the contribution to A_22 is disconnected.
Disconnected contributions are characterized by
containing one or two Kronecker deltas setting initial and final momenta equal,
each multiplied by factors of 2ω L^3.
When such disconnected contributions are combined in A + A 𝒞 A + ⋯,
some of the resulting TOPT diagrams are themselves disconnected.
This is most obvious for the leading term, i.e. A itself.
Since ℳ_L is, by definition, fully connected, such terms must be removed by hand,
and 𝒮 is simply defined to be the sum of all disconnected contributions in
A [1 - 𝒞 A]^-1.
It will turn out that we do not need a more detailed expression for 𝒮.
What will be important, however, is that 𝒮 has only diagonal entries,
𝒮≡[ 𝒮_22 0; 0 𝒮_33 ] .
This is because off-diagonal disconnected pieces in ℳ_L necessarily
involve a 1→2 or 2→1 transition in which all external legs are on shell,
and this is not kinematically possible for stable particles.
We stress, however, that A itself does contain off-diagonal disconnected
contributions, because its external legs are in general not on shell.
An important property of A is that all loops contained within it are integrated, rather than summed.
For the connected component of A, this implies that it is an infinite-volume object
(albeit not Lorentz invariant).
This holds also for the disconnected part, up to the volume dependence in the
explicit factors of L^3 accompanying the Kronecker deltas mentioned above.
We now give precise definitions of the quantities entering Eq. (<ref>), beginning with 𝒞.
Like all quantities in Eq. (<ref>),
𝒞 is a two-by-two matrix on the space of two- and three-particle scattering channels.
In contrast to ℳ_L and A, 𝒞 (and also , as we have explained above)
is diagonal
𝒞≡[ 𝒞_2; p' ; p 0; 0 𝒞_3; k'a' ;k a ] .
The diagonal entries are matrices defined on the space of off-shell finite-volume momenta.
For example, 𝒞_2 has two indices of the form p⃗∈ (2 π/L) ℤ^3.
We abbreviate this with the subscript p';p as shown. The definition is
𝒞_2;p';p≡ - δ_p'p 1/2 1/L^3 1/(2 ω_p 2 ω_Pp (E - ω_p - ω_Pp)) ,
which we recognize as containing the energy denominator of Eq. (<ref>),
as well as other factors. These additional factors are
(i) δ_p'p, which equals 1 for p⃗ ' = p⃗, and 0 otherwise,
and is present because the cut does not change loop momenta;
(ii) 1/L^3, which is always associated with a loop sum; (iii) a symmetry factor of 1/2 because the two intermediate particles are identical; and
(iv) the overall minus sign, which arises from keeping track of powers of i in the
Feynman propagators and vertices before decomposing into TOPT diagrams.
Similarly, the three-cut factor is
𝒞_3;k'a';k a≡ - δ_k'k δ_a'a 1/6 1/L^6 1/(2 ω_a 2 ω_k 2 ω_Pka (E - ω_a - ω_k - ω_Pka)) ,
where the indices include two finite-volume momenta,[
Here we are choosing k⃗ and a⃗ to lie in the finite-volume set, so that, if the
external momenta do not lie in this set, the remaining momentum b⃗_ka also
lies outside the set. The apparent asymmetry in this choice is removed by the fact that
the entries of A are symmetric under particle exchange.]
with k a standing for {k⃗,a⃗}.
The definition of the matrix A depends on its location in the product.
If it appears between two factors of 𝒞,
A is defined as a matrix on the same space as 𝒞,
A = [ A_22;p';p A_23;p' ;ka; A_32;k'a';p A_33;k'a';ka ] , (between two factors of 𝒞 ) .
If the A lies at the left-hand end of a chain in Eq. (<ref>),
so that it only abuts a 𝒞 on the right,
then it has finite-volume indices on the right but on-shell momenta on the left,
A = A(p̂'^*;k⃗', â'^*) ≡[ A_22; p(p̂'^*) A_23; ka (p̂'^*); A_32; p(k⃗', â'^*) A_33; ka(k⃗', â'^*) ] ,
(𝒞 only on the right ) .
This is mirrored if the A appears on the far right end of a chain,
A = A (p̂^*, k⃗, â^*) ≡[ A_22; p'(p̂^*) A_23; p'(k⃗, â^*); A_32; k'a' (p̂^*) A_33; k'a'(k⃗, â^*) ] ,
(𝒞 only on the left ) .
Finally, the j=0 term in Eq. (<ref>) contains no factors of 𝒞 and
is evaluated only with on-shell momenta:
A = [ A_22(p̂'^*; p̂^*) A_23(p̂'^*; k⃗, â^*); A_32(k⃗', â'^*; p̂^*) A_33(k⃗', â'^*; k⃗, â^*) ] ,
(𝒞-independent term ) .
The various definitions of A are all closely related and can all be
determined from a “master function,”
A(p⃗ ', k⃗', a⃗'; p⃗, k⃗, a⃗) = [ A_22(p⃗ '; p⃗) A_23(p⃗ '; k⃗, a⃗); A_32(k⃗', a⃗'; p⃗) A_33(k⃗', a⃗'; k⃗, a⃗) ] ,
by applying various coordinate-space restrictions.
The master function depends on unrestricted momenta.
It is obtained from the fully off-shell matrix form of A, Eq. (<ref>),
by continuing the momenta away from finite-volume values.
As discussed earlier, this continuation impacts the integrands inside A in a well-defined
and smooth way.
For a two-particle state only one momentum, p⃗, is specified.
We then define two restrictions of this coordinate.
To restrict to on-shell momenta we require that p⃗ is such that
E = ω_p + ω_Pp.
This leaves only a directional degree of freedom, denoted p̂^*.
Alternatively, to restrict to finite-volume momenta we require p⃗∈ (2 π/L) ℤ^3
and represent the momentum as an index, p.
For a three-particle state we begin with two momenta k⃗, a⃗.
The restriction to on-shell states is effected by requiring
E = ω_k + ω_a + ω_Pka, leading to the degrees of freedom k⃗, â^*.
The restriction to finite-volume momenta, k⃗, a⃗∈ (2 π/L) ℤ^3,
is denoted with the index pair ka.
This notation allows one to easily construct various finite-volume sums. To give a concrete example we write out the term from Eq. (<ref>) that is linear in 𝒞,
A(p̂”^*, k⃗”, â”^*) 𝒞 A(p̂^*, k⃗, â^*) = ∑_ p'
[A_22; p'(p̂”^*) + A_32; p'(k⃗”, â”^*) ] 𝒞_2;p';p'
[ A_22; p'(p̂^*) + A_23; p'(k⃗, â^*) ]
+ ∑_k', a' [A_23; k'a' (p̂”^*) + A_33; k'a'(k⃗”, â”^*) ]
𝒞_3,k'a';k'a' [ A_32; k'a' (p̂^*) + A_33; k'a'(k⃗, â^*) ] ,
= - 1/2 1/L^3∑_p⃗ ' [A_22(p̂”^*; p⃗ ') + A_32(k⃗”, â”^*; p⃗ ') ] [ A_22(p⃗ '; p̂^*) + A_23(p⃗ '; k⃗, â^*) ] / [2 ω_p' 2 ω_Pp' (E - ω_p' - ω_Pp')]
- 1/6 1/L^6∑_k⃗', a⃗' [A_23 (p̂”^*; k⃗', a⃗') + A_33 (k⃗”, â”^*; k⃗', a⃗') ] [ A_32 ( k⃗', a⃗'; p̂^*) + A_33 ( k⃗', a⃗'; k⃗, â^*) ] / [2 ω_Pk'a' 2 ω_k' 2 ω_a' (E - ω_k' - ω_a' - ω_Pk'a')] .
The simplest contribution is the product of two A_22 factors,
A 𝒞 A ⊃ - 1/2 1/L^3∑_p⃗ ' A_22(p̂”^*; p⃗ ') A_22(p⃗ '; p̂^*) / [2 ω_p' 2 ω_Pp' (E - ω_p' - ω_Pp')] .
The external momenta p̂”^* and p̂^* are fixed and the internal coordinate p⃗ ' is summed over all finite-volume values.
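For orientation, the building block in this sum is just the tabulated diagonal of 𝒞_2. A direct transcription (our sketch, truncated to |n⃗| ≤ n_max and with m = 1) reads:

import numpy as np
from itertools import product

def C2_diag(E, P, L, m=1.0, nmax=4):
    # diagonal entries of C_2 over the finite-volume set p = 2 pi n / L
    P = np.asarray(P, dtype=float)
    out = {}
    for n in product(range(-nmax, nmax + 1), repeat=3):
        p = 2.0 * np.pi * np.array(n) / L
        wp = np.sqrt(p @ p + m**2)
        wPp = np.sqrt((P - p) @ (P - p) + m**2)
        out[n] = -0.5 / L**3 / (2.0 * wp * 2.0 * wPp * (E - wp - wPp))
    return out

vals = C2_diag(E=2.05, P=(0.0, 0.0, 0.0), L=5.0)
print(vals[(0, 0, 0)])   # enhanced: E is close to the p = 0 two-particle pole

The sums in the equation above are then weighted sums of these entries against the smooth kernels A.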
Disconnected terms in A complicate the determination of the volume dependence of
ℳ_L. Indeed, the analysis of Ref. <cit.> was largely concerned with
understanding the impact of such contributions.
Thus we would like to remove them to the extent possible.
This turns out to be possible for the off-diagonal disconnected parts of A, as we now explain.
We begin by recalling that finite-volume dependence arises when
one of the intermediate states goes on shell. As already noted in the discussion of I,
however,
it is not kinematically possible for both a two- and a three-particle state to be simultaneously
on shell if one of the particles has a common momentum.
This implies that any disconnected
component in A_23 or A_32 cannot simultaneously lead to finite-volume effects
from both the adjacent cuts.
This suggests including factors in the pole terms in such that this property is built
in from the beginning, rather than discovered at the end.
To formalize this idea, we introduce two functions H_2(p⃗) and H_3(k⃗, a⃗).
These depend, respectively, on the momenta in a two- and three-particle off-shell intermediate state.
These functions have four key properties.
First, they are smooth functions of the momenta.
Second, they are symmetric under interchange of the particles in their respective intermediate
states, i.e.
H_2(p⃗) =H_2(b⃗_p) ,
H_3(k⃗,a⃗) =H_3(a⃗, k⃗) = H_3(a⃗, b⃗_ka) =
H_3(b⃗_ka,a⃗) = H_3(k⃗, b⃗_ka) = H_3(b⃗_ka,k⃗) .
Third, they equal unity when all particles in a given intermediate state are on shell.
And, finally, they have no common support if one momentum is shared
between the two intermediate states.
As an equation, the “nonoverlap" property is
H_2(p⃗) H_3(p⃗, a⃗)=0 .
Further discussion of these properties and an explicit example of functions that
satisfy them are given in Appendix <ref>.
The reason that they can be defined is that there is a separation of O(m) between the individual momenta of the particles in an on-shell two-particle state and the corresponding momenta in an on-shell three-particle state.
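One standard ingredient for building such functions is the C^∞ step u(x) = f(x)/(f(x)+f(1-x)), with f(x) = e^-1/x for x > 0 and f(x) = 0 otherwise. The sketch below is our illustration of this ingredient only, not the explicit construction of Appendix <ref>; it shows how smooth windows with prescribed plateaus, and hence disjoint supports, can be assembled from it.

import numpy as np

def _f(y):
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    pos = y > 0.0
    out[pos] = np.exp(-1.0 / y[pos])   # C-infinity, vanishes to all orders at 0
    return out

def smooth_step(x):
    # 0 for x <= 0, 1 for x >= 1, smooth and monotone in between
    x = np.asarray(x, dtype=float)
    return _f(x) / (_f(x) + _f(1.0 - x))

def window(x, a, b, c, d):
    # smooth window: equals 1 on [b, c] and 0 outside (a, d)
    return smooth_step((x - a) / (b - a)) * smooth_step((d - x) / (d - c))

x = np.linspace(0.0, 4.0, 9)
print(window(x, 1.0, 1.5, 2.5, 3.0))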
We now rewrite Eq. (<ref>) using these smooth cutoff functions. Specifically, we
separate 𝒞 into a singular part, 𝒞^H, and a pole-free part, 𝒞^∞,
𝒞 = 𝒞^H + 𝒞^∞ ,
where
𝒞^H ≡[ H_2(p⃗) 𝒞_2; p'; p 0; 0 H_3(k⃗, a⃗) 𝒞_3; k'a'; k a ] ,
𝒞^∞ ≡[ [1-H_2(p⃗)] 𝒞_2; p'; p 0; 0 [1-H_3(k⃗, a⃗)] 𝒞_3; k'a'; k a ] .
𝒞^∞ is nonsingular because the factors of 1-H_i cancel their respective poles.
Substituting Eq. (<ref>) into Eq. (<ref>), and collecting terms according
to the power of 𝒞^H, we arrive at
ℳ_L = Ā∑_n=0^∞ [ 𝒞^H Ā ]^n - 𝒮 ,
where Ā is given by
Ā = A ∑_n=0^∞ [ 𝒞^∞ A ]^n .
This result (<ref>) is identical in form to Eq. (<ref>), but with the
poles now “regulated" by the H functions, and with the kernels suitably modified.
The additional terms that have been added to obtain Ā from A
[i.e. the n>0 terms in the sum in Eq. (<ref>)]
all involve sums over intermediate momenta that have nonsingular summands,
so that these sums can be replaced by integrals (1/L^3 ∑_k ⟶∫_k⃗).
Thus Ā remains an infinite-volume, smooth kernel, aside from
the above-mentioned Kronecker deltas accompanied by factors of L^3.
The reason for this reorganization can now be understood.
Ā = A + A 𝒞^∞ A + ⋯ contains disconnected parts,
built up from the disconnected parts of A discussed above.
However, it is easy to see that the off-diagonal disconnected parts of Ā
do not contribute to ℳ_L.
This is because,
if one of the Ā's in the expansion of Eq. (<ref>) lies between two
factors of 𝒞^H, then its off-diagonal parts
will be multiplied by H_2 H_3. But this factor vanishes for any disconnected parts,
by construction. The same is true if one or both sides of the A are at the
end of the chain, because then the external particles are on shell.[
In more detail, the argument in this case goes as follows. We are free to multiply the
on-shell external states by a factor of H_i (with i the number of particles in the state),
since this factor is unity. Thus off-diagonal terms in Ā also come with a factor of
H_2 H_3 here.]
Thus, with no approximation, in Eq. (<ref>) we can drop the disconnected
parts of A_23 and A_32.
Having derived the formula (<ref>), we now explain why it is not
yet in a form that allows the determination of the volume dependence of ℳ_L using the
methods of Refs. <cit.>.
The problems are related to self-energy diagrams and the presence of disconnected
contributions.
We provide here only a brief sketch of the problems, without explaining all the
technical details, since in the end we avoid them by using an alternative approach
described in the following section.
The first issue arises in self-energy insertions on propagators present in two-particle s-channel loops.
An example is provided by the central loop of both diagrams
in Fig. <ref>.
The difference between these two diagrams is that the two vertices in
the self-energy loop have a different time ordering, leading to a different
sequence of cuts. Focusing on the central region between the two factors of 𝒞_2,
the left diagram contributes to
A_23 𝒞_3 A_32, while that on the right contributes directly to A_22.
When we change A to 𝒜 the two time orderings are recombined as
𝒜_22 = A_22 + A_23 𝒞_3 (1−H_3) A_32 + ⋯ .
The sum over momenta that comes with 𝒞_3 can be
converted into an integral because it is multiplied by 1−H_3.
Furthermore, since 𝒜_22 lies between two factors of either 𝒞_2 H_2
or external on-shell states, we can set H_3 to zero.
Thus the two time orderings are recombined in 𝒜_22 without
any regulator functions.
At this point we would like to say that adding these two orderings will lead to
the full, Lorentz-invariant one-loop self-energy, which is proportional to (p^2−m^2)^2,
given our renormalization conditions.
If so, the double zero would cancel the poles in both factors of 𝒞_2,
so that such diagrams would not in fact lead to finite-volume dependence from the two-particle loop.
In this way we would not have to worry about the self-energy insertion, except for its
contribution to three-cuts with a factor of H_3.
However, this argument is incorrect. To obtain the full one-loop self-energy, one needs
to include additional time orderings in which the vertices in the self-energy loop lie either before
or after the bracketing 𝒞_2 cuts. Without these, it turns out that the sum of the two diagrams that
are included vanishes only as (p^2−m^2), and thus cancels the poles in only one of the 𝒞_2 factors. Thus the loop does contribute finite-volume effects.
Similarly, additional self-energy insertions on the propagators in the two-particle loop must
also be kept. This requires consideration of an infinite class of diagrams that does not arise
in the treatments of Refs. <cit.>.
The second issue concerns Feynman diagrams contributing to ℳ_L that are 1PI in the
s channel, i.e. have all the energy-momentum flowing through a single particle.
As noted above, the propagator of this particle must be 3PI. It turns out
that this leads to a new type of disconnected contribution to A_33 that is not a smooth function
of the external momenta.
This is explained in Appendix <ref>.
Such contributions cannot be handled using the methods
of Refs. <cit.>, which rely on certain
smoothness properties of the kernels.
The issue with the 3PI propagators must be addressed at the level of Feynman diagrams,
before turning to TOPT.
§.§ Identification of two- and three-particle poles: Improved approach
In this section we sketch the derivation of a replacement for
Eq. (<ref>) that has an identical form but
contains modified kernels B (replacing 𝒜),
and a modified subtraction I (in place of Ĩ),
ℳ_L = B 1/(1 − 𝒞 B) − I .
The issues described at the end of
the previous section do not apply to the new formulation,
and thus the methods
of Refs. <cit.> can be applied to
analyze Eq. (<ref>).
The derivation is rather technical and lengthy and so is only sketched here.
It is explained in detail in Appendix <ref>.
We begin by following the same path as in the previous section,
constructing the diagrammatic expansion for ℳ_L in terms of all possible contact interactions and the three types of dressed propagators. The latter can
be replaced by their infinite-volume counterparts, as they contain no on-shell intermediate states.
This is described in more detail in Appendix <ref>, where we also explain why tadpole diagrams
can be absorbed into vertices to further simplify the set of allowed diagrams.
We then deviate from the naïve approach
in the class of diagrams containing self-energy insertions on
propagators in two-particle s-channel loops [see Fig. <ref> below].
As described in Appendix <ref>, by inserting
1 = H_2(p⃗) + [1 - H_2(p⃗)] in such loops, we find that self-energies can
be ignored for the part with H_2, because they cancel poles and collapse the propagators
to local interactions. The 1-H_2 terms remain, but they do not have any two-particle
cuts. This resolves the first complication described at the end of the previous section.
We next resolve the second complication from the
previous section involving 3PI-dressed propagators.
As described in Appendix <ref>, these propagators can effectively be
shrunk to point vertices that cannot be cut.
After taking stock of the remaining classes of diagrams in Appendix <ref>,
we next switch to using TOPT.
In Appendix <ref>, we explain how TOPT applies to our amputated on-shell correlators
involving dressed propagators.
We thus reach a result corresponding to Eq. (18) in the naïve approach, but with kernels that are better behaved, and with a subtraction only needed for the 33 component.
Next, in Appendix <ref>, we
separate the cut functions as in Eq. (<ref>), and use the
identity in (<ref>) to reduce the number of resulting terms.
In this and the following section of the appendix we show diagrammatically
how the result Eq. (<ref>) arises.
The key properties of the kernel B are that the B_22, B_23, and B_32
components contain
no disconnected parts, and are smooth, infinite-volume quantities,
while B_33 has disconnected parts corresponding to the two-to-two scattering
subprocess. The explicit form of the disconnected part is given in Eq. (<ref>).
§.§ Volume dependence of
In this section we use the decomposition of the finite-volume scattering amplitude, given in Eq. (<ref>),
to determine the volume dependence of ℳ_L.
Our aim is to piggyback on the methods and results of
Refs. <cit.>, and it turns out that we can do so to a considerable extent.
However, since these works do not use TOPT to decompose finite-volume amplitudes,
some effort is needed to map their approach into the one used here.
We begin by reorganizing the series in (<ref>) so as to separate the
contributions from the diagonal and off-diagonal elements of B.
Specifically, we introduce
B_D = [ B_22 0; 0 B_33 ] and
B_T = [ 0 B_23; B_32 0 ] ,
such that B=B_D+B_T. We then rearrange Eq. (<ref>) into
ℳ_L = B_D + B_T + ( B_D + B_T)
Ξ ∑_n=0^∞ [ B_T Ξ ]^n ( B_D + B_T) − I ,
where
Ξ ≡ 𝒞 1/(1 − B_D 𝒞) ≡ diag( Ξ_22 , Ξ_33 ) .
In this way all off-diagonal entries of B are kept explicit, while the diagonal entries are
resummed into the diagonal matrix Ξ.
The latter contains all the intermediate-state factors C^H.
The key observation is that Ξ has exactly the form that arises in the analyses of
Refs. <cit.>. More specifically, Ξ_22 (which contains only two-cuts)
arises in Ref. <cit.>, while Ξ_33 (containing only three-cuts) arises in Refs. <cit.>.
The only subtlety is that the result for Ξ depends on the nature of the
B factors on either side, i.e. whether they are B_D or B_T.
This dependence arises because B_D (or, more precisely, B_33) contains
disconnected parts. Physically, these correspond to two-to-two subprocesses, and the
form of the result depends on whether such processes occur at the “ends" or not.
To keep track of the different environments of the factors of Ξ, we introduce
superscripts indicating which type of B is on either side. For example,
Ξ^(D,T) implies that there is a B_D on the left and a B_T on the right.
We stress that this is only a notational device, allowing us to make substitutions that
depend on the environment (as will be explained below).
Using this notation, we further decompose ℳ_L as
ℳ_L = B_D + B_D Ξ^(D,D) B_D − I
+ B_D Ξ^(D,T)∑_n=0^∞[B_T Ξ^(T,T)]^n B_T Ξ^(T,D) B_D
+ B_T
+ B_D Ξ^(D,T)∑_n=0^∞[ B_T Ξ^(T,T)]^n B_T
+ B_T ∑_n=0^∞[ Ξ^(T,T) B_T]^n Ξ^(T,D) B_D
+ B_T ∑_n=0^∞[ Ξ^(T,T) B_T ]^n Ξ^(T,T) B_T .
Our aim is to determine the appropriate substitutions for the four different types of
Ξ factors appearing in this form.
We begin with the diagonal quantity that contains no factors of B_T,
X ≡ B_D + B_D Ξ^(D,D) B_D − I = B_D ∑_n=0^∞ [ 𝒞 B_D ]^n − I
≡ diag( X_22 , X_33 ) .
In terms of the components we have
X_22 = B_22 ∑_n=0^∞ [ 𝒞_2 B_22 ]^n ,
X_33 = B_33 ∑_n=0^∞ [ 𝒞_3 B_33 ]^n − I_33 .
These two quantities are chosen to have very similar forms to the finite-volume amplitudes
analyzed previously in Refs. <cit.> and Refs. <cit.>, respectively,
so that we can make use of the results of these publications.
We focus first on X_22. This is the part of ℳ_L with two-particle external
states in which, by hand, we allow only two-cuts.
X_22 is not a physical quantity,
since three-cuts that are present in ℳ_L,22 have been removed in its definition.
We note that X_22 is not only unphysical above the three-particle threshold (where we have removed physical three-particle intermediate states) but also below (where virtual three-particle contributions to ℳ_L,22 have been dropped). In this regard, we see that, in deriving a formalism that works both above and below the three-particle threshold, we are left with subthreshold expressions that are more complicated than the standard results describing that region. In particular, below E^*=3m one can study the amplitude taking into account only the two-cuts,
and this is indeed the approach used in Ref. <cit.>.
Despite the unphysical nature of X_22, it has nevertheless been
constructed to have the same form as
the physical subthreshold finite-volume two-to-two amplitude.
In particular, X_22 is built of alternating smooth quantities (B_22) and two-cuts (_2).
This allows us to apply the methods of Ref. <cit.>, as explained in Appendix <ref>.
We show there that
X_22(E, P⃗) = 𝒦_22,D(E,P⃗) 1/(1 + F_2(E,P⃗) 𝒦_22,D(E,P⃗)) ,
where 𝒦_22,D is an unphysical K matrix discussed below,
and F_2 is the moving-frame Lüscher zeta function[
In Ref. <cit.> what we call F_2 here is called simply F. Here we reserve
F for the slightly different quantity defined in Eq. (<ref>).]
F_2;ℓ'm';ℓm(E,P⃗) ≡ (1/2) [ (1/L^3) ∑_p⃗ − PV ∫ d^3p/(2π)^3 ]
[4π Y_ℓ'm'(p̂^*) Y^*_ℓm(p̂^*)] / [2ω_p 2ω_Pp (E − ω_p − ω_Pp)] (p^*/q^*)^(ℓ+ℓ')
h(p⃗) .
h(p⃗) is a UV cutoff function, the details of which do not matter, except that
it must equal unity when E=ω_p+ω_Pp.
Different choices for the cutoff function are given in Ref. <cit.> and Refs. <cit.>.
“PV" indicates the use of the principal-value prescription for the integral
over the pole. For E^*>2m this is standard (given, for example,
by the real part of the iϵ prescription), while for E^*<2m
we define PV such that the
result is obtained by analytic continuation from the above threshold.
This corresponds, for example, to the definition given in Refs. <cit.>.
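To make the structure of F_2 concrete, here is a rough numerical sketch of its s-wave component at P⃗ = 0 (where the spherical-harmonic factor reduces to 4π Y_00 Y^*_00 = 1), computed as the difference of the finite-volume sum and the PV integral. The cutoff h below is one illustrative smooth choice built from the interpolator J of the appendix; the values of E, L, and the UV scale are arbitrary (we set m = 1), and the PV integral is handled with SciPy's Cauchy-weight quadrature:

```python
import numpy as np
from scipy.integrate import quad

m, L, E = 1.0, 6.0, 2.2                 # illustrative choices, m = 1 units
qstar = np.sqrt(E**2 / 4 - m**2)        # on-shell relative momentum
Lam = 4.0                               # illustrative UV scale for h

def J(x):
    if x <= 0.0: return 0.0
    if x >= 1.0: return 1.0
    return np.exp(-np.exp(-1.0 / (1.0 - x)) / x)

def h(p):
    # Smooth cutoff: equals 1 for p <= qstar (in particular at the pole),
    # and falls smoothly to 0 at p = Lam.
    return J((Lam**2 - p**2) / (Lam**2 - qstar**2))

def summand(p):
    # h / [2 w_p 2 w_p (E - 2 w_p)] at total momentum P = 0.
    wp = np.sqrt(p**2 + m**2)
    return h(p) / (4 * wp**2 * (E - 2 * wp))

# (1/L^3) sum over p = (2 pi / L) n, n in Z^3; h truncates the sum.
nmax = int(np.ceil(Lam * L / (2 * np.pi)))
fv_sum = 0.0
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        for nz in range(-nmax, nmax + 1):
            p = (2 * np.pi / L) * np.sqrt(nx**2 + ny**2 + nz**2)
            fv_sum += summand(p)
fv_sum /= L**3

# PV integral of d^3p/(2pi)^3: radial form with the pole at p = qstar
# factored out, so that Cauchy-weight quadrature supplies the PV.
def g(p):
    # p^2 summand(p) (p - qstar) / (2 pi^2); smooth at the pole.
    if abs(p - qstar) < 1e-7:
        wstar = np.sqrt(qstar**2 + m**2)   # (p-q*)/(E-2w_p) -> -w*/(2 q*)
        return -qstar * h(qstar) / (16 * np.pi**2 * wstar)
    return p**2 * summand(p) * (p - qstar) / (2 * np.pi**2)

pv = quad(g, 0.0, Lam, weight="cauchy", wvar=qstar, limit=200)[0]

F2s = 0.5 * (fv_sum - pv)
print(F2s)
```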
The derivation in Appendix <ref> leads
to an explicit expression for 𝒦_22,D, Eq. (<ref>).
We stress that the appearance of an unphysical K matrix here
is analogous to the appearance of the unphysical quantity 𝒦_df,3 in the three-particle quantization condition of Ref. <cit.>.
This is not a concern, because in the end (Sec. <ref>)
we will be able to relate the unphysical quantities to physical scattering amplitudes.
We now turn to the quantity X_33, defined in Eq. (<ref>).
This is the part of ℳ_L with three-particle external states that contains only three-cuts.
It is unphysical at all energies since the physical amplitude always has two-cuts.
Nevertheless, it has the same structure as the finite-volume amplitude
considered in Ref. <cit.>, denoted ℳ_3,L. This quantity is defined for theories with a
ℤ_2 symmetry forbidding even-odd transitions (and thus forbidding two-cuts).
Thus we can hope to reuse results from that work.
As for X_22, however, we cannot do so directly, because the analysis leading to
these results uses Feynman diagrams, whereas here we are using TOPT. Since
we are dropping cuts by hand, we cannot in any simple way recast the TOPT result
(<ref>) into one using Feynman diagrams. Instead, in order to use the
results from Ref. <cit.>, we have to redo the analysis of Refs. <cit.>
using TOPT.
In a theory with a ℤ_2 symmetry we have B_23=B_32=0,
so X_33 is simply equal to ℳ_3,L and is thus physical.
The TOPT derivation given above still applies (and indeed is simplified by the absence
of 2↔3 mixing) so the result Eq. (<ref>) for X_33 still holds.
Although B_33 will differ in detail from that in our ℤ_2-less theory,
its essential properties are the same.
In particular, it can be separated into connected and disconnected parts
B_33 = B_33^ conn + B_33^ disc ,
with the latter containing all contributions in which two particles interact
while the other particle remains disconnected.
Determining the finite-volume dependence arising from
these disconnected contributions was the major challenge in
the analysis of Refs. <cit.>.
Thus we must start with Eq. (<ref>) rather than the Feynman diagram skeleton expansion.
This turns out to be a rather minor change. Both approaches have the same sequences of
cuts alternating with either connected or disconnected kernels. Working through
the derivation of Refs. <cit.> we find that all steps still go through,
the only change being in the precise definition of the kernels.
This is a tedious but straightforward exercise that we do not reproduce in detail, although
we collect some technical comments on the differences caused by using TOPT
in Appendix <ref>.
The outcome is that the final result, Eq. (68) of Ref. <cit.>, still holds,
but with some of the quantities having different definitions.
Applying this result to X_33 in the ℤ_2-less theory, we find[
Note that we use an italic L to denote finite volume, while calligraphic ℒ and
ℛ denote left and right, respectively.]
X_33 = 𝒟_L,3 + 𝒮_ℒ,3 { ℒ^(u,u)_L,3 𝒦_df,33,D 1/(1 + F_3 𝒦_df,33,D) ℛ^(u,u)_L,3 } 𝒮_ℛ,3 ,
𝒟_L,3 = 𝒮_ℒ,3 { 𝒟^(u,u)_L,3 } 𝒮_ℛ,3 ,
𝒟^(u,u)_L,3 = − 1/(1 + ℳ_2,L G^H) ℳ_2,L G^H ℳ_2,L [2ω L^3] ,
ℒ^(u,u)_L,3 = 1/3 − 1/(1 + ℳ_2,L G^H) ℳ_2,L F ,
ℛ^(u,u)_L,3 = 1/3 − F/(2ω L^3) 1/(1 + ℳ_2,L G^H) ℳ_2,L [2ω L^3] ,
F_3 = F/(2ω L^3) ℒ^(u,u)_L,3 = ℛ^(u,u)_L,3 F/(2ω L^3) .
Here 𝒮_ℒ,3 and 𝒮_ℛ,3 are symmetrization operators acting respectively on the
arguments at the left and right ends of expressions within curly braces.
They are defined in Eqs. (36) and (37) of Ref. <cit.>.[
In Ref. <cit.> 𝒮_ℒ,3 and 𝒮_ℛ,3 were combined into a single
symmetrization operator 𝒮. Here it is convenient to separate the two operations.
]
The superscripts involving u are explained in Ref. <cit.>.
𝒦_df,33,D is an unphysical, three-particle K matrix that is a smooth function of its arguments,
and is given by Eq. (<ref>).
It takes the place of the quantity 𝒦_df,3 that appears in the theory with a ℤ_2 symmetry,
in an analogous way to the replacement of 𝒦_2 with 𝒦_22,D in X_22
described above.
F, which is defined in Ref. <cit.>, is similar to F_2, but includes an extra index to account for the third particle,[
This form of F differs from that defined in Ref. <cit.> by the
choice of UV regulator in the sum-integral difference. Here we use h(p⃗) [see Eq. (<ref>)],
whereas in Ref. <cit.> a product of two H functions is used. Since both regulators equal unity
at the on-shell point, the change in regulator only leads to differences of 𝒪(e^-mL).]
F_k'ℓ' m';k ℓ m = δ_k' k H(k⃗) F_2;ℓ' m';ℓ m(E-ω_k,P⃗-k⃗)
,
where the additional factor of H arises from the definition of H_3.
The two remaining quantities that need to be defined are ℳ_2,L and G^H.
The former is the finite-volume two-particle scattering amplitude below the three-particle threshold,
except with an extra index for the third particle,
ℳ_2,L;k'ℓ'm';kℓm = δ_k'k [ 𝒦_2(E−ω_k, P⃗−k⃗)
1/(1 + F_2(E−ω_k, P⃗−k⃗) 𝒦_2(E−ω_k, P⃗−k⃗)) ]_ℓ'm';ℓm .
It is important to distinguish this quantity from the two-particle finite-volume scattering amplitude, which we denote as ℳ_L,22. A key feature of this result is that
it is the physical K matrix, 𝒦_2, that appears in this expression
(rather than the unphysical 𝒦_22,D, for example)
as long as E^*< 4m.
This nontrivial result is explained in Appendix <ref>.
It implies that 𝒟_L,3, ℒ_L,3^(u,u), ℛ_L,3^(u,u), and F_3
are the same as those appearing in Refs. <cit.>.
The only unphysical quantity in X_33 is thus 𝒦_df,33,D.
We do not have an explicit expression for this rather complicated quantity, but this
does not matter as it will be related to the physical scattering amplitudes
in Sec. <ref> below.
Finally, we define G^H. This is almost identical to the matrix G defined in Refs. <cit.>
[see, for example, Eq. (A2) of Ref. <cit.>], except that it contains an additional
cutoff function. The necessity of this change is discussed in Appendix <ref>,
and the explicit form is given in Eq. (<ref>).
This is a minor technical change that has no impact on the general formalism.
The results for X_22 and X_33 can be conveniently combined by introducing
the matrices
𝒟_L = diag( 0 , 𝒟_L,3 ) , 𝒮_ℒ = diag( 1 , 𝒮_ℒ,3 ) , 𝒮_ℛ = diag( 1 , 𝒮_ℛ,3 ) , ℱ = diag( F_2 , F_3 ) ,
𝒦_df,D = diag( 𝒦_22,D , 𝒦_df,33,D ) , ℒ^(u)_L = diag( 1 , ℒ_L,3^(u,u) ) and ℛ^(u)_L = diag( 1 , ℛ_L,3^(u,u) ) .
Then we have
X = 𝒟_L + 𝒮_ℒ { ℒ^(u)_L 𝒦_df,D 1/(1 + ℱ 𝒦_df,D) ℛ^(u)_L } 𝒮_ℛ .
Our next step is to determine the result for Ξ^(T,T). This
lies between factors of B_T, so the two contributions we need to calculate are
Y_22≡ B_32Ξ_22 B_23 ,
Y_33≡ B_23Ξ_33 B_32 .
Y_22 differs only slightly from X_22 and is calculated in
Appendix <ref>, with the result
Y_22 = B_32 [ 𝒟_C,2
− 𝒟_A',2 F_2 1/(1 + 𝒦_22,D F_2) 𝒟_A,2 ] B_23 .
The volume dependence enters through the factors of F_2.
𝒟_C,2, 𝒟_A',2 and 𝒟_A,2 are
infinite-volume integral operators, whose explicit forms are given in
Eqs. (<ref>)-(<ref>).
𝒟_A',2 acts to the left, 𝒟_A,2 to the right,
while 𝒟_C,2 acts in both directions.
We refer to them collectively as decoration operators.
Turning now to Y_33, we note that this is similar to X_33, as can be seen by
comparing Eq. (<ref>) to the following:
Y_33 = B_23 ∑_n=0^∞ [ 𝒞_3 B_33 ]^n 𝒞_3 B_32 .
The major difference is that Y_33 has factors of B_23 or B_32 on the
ends, while X_33 has factors of B_33. This is an important difference
because B_23 and B_32 do not have disconnected parts, while B_33 does.
This means that Y_33 is analogous to the correlation function studied in Ref. <cit.>,
in which there are three-particle connected operators at the ends (called σ and
σ^† in that work). We thus need to repeat the analysis of Ref. <cit.> using
the TOPT decomposition of the correlation function. This is a subset of the work already
done for X_33 (where the presence of a disconnected component in the kernels
on the ends leads to additional complications, as studied in Ref. <cit.>).
The result is that we can simply read off the answer from Eq. (250) of Ref. <cit.>,
Y_33 = B_23 [ 𝒟_C,3 − 𝒟_A',3 F_3 1/(1 + 𝒦_df,33,D F_3) 𝒟_A,3 ] B_32 .
Here 𝒟_C,3, 𝒟_A',3 and 𝒟_A,3 are decoration operators, whose
definition can be reconstructed from Ref. <cit.> taking into account the difference
between the Feynman-diagram analysis used there and the TOPT used here.
We will, in fact, not need the definitions and so do not reproduce them here.
We observe that the form of the result is very similar to that for Y_22, Eq. (<ref>).
The two can be combined into a matrix equation
Ξ^(T,T) = 𝒟_C − 𝒟_A' ℱ 1/(1 + 𝒦_df,D ℱ) 𝒟_A ,
if we use the definitions
𝒟_C ≡[ 𝒟_C,2 0; 0 𝒟_C,3 ] , 𝒟_A'≡[ 𝒟_A',2 0; 0 𝒟_A',3 ] , 𝒟_A ≡[ 𝒟_A,2 0; 0 𝒟_A,3 ] .
The final quantities we need to determine are Ξ^(D,T) and its “reflection" Ξ^(T,D).
This requires that we calculate
Z_23 ≡ B_22Ξ_22 B_23 ,
Z_32 ≡ B_33Ξ_33 B_32 ,
and their reflections.
The former is obtained in Appendix <ref> by a simple extension of the
analysis for X_22 and Y_22. The result is
Z_23 + B_23 = [ 1/(1 + 𝒦_22,D F_2) 𝒟_A,2 ] B_23 .
The calculation of Z_32 requires a more nontrivial extension of the analysis for
X_33 and Y_33. This is because Ξ_33 connects a kernel with a disconnected
component (B_33) to one without (B_32),
and such correlators were not explicitly considered in Refs. <cit.>.
We work out the extension in Appendix <ref>, finding
Z_32 + B_32 = 𝒮_ℒ,3 { ℒ^(u,u)_L,3 1/(1 + 𝒦_df,33,D F_3) 𝒟_A,3 B_32 } .
Combining Eqs. (<ref>) and (<ref>) into matrix form yields
B_D Ξ^(D,T) = 𝒮_ℒ [ ℒ^(u)_L 1/(1 + 𝒦_df,D ℱ) 𝒟_A ] − 1 .
A similar analysis leads to the following result for the reflected quantity:
Ξ^(T,D) B_D = [ 𝒟_A' 1/(1 + 𝒦_df,D ℱ) ℛ^(u)_L ] 𝒮_ℛ − 1 .
We have now determined the volume dependence of all factors of Ξ appearing
in the expression (<ref>) for . Substituting Eqs. (<ref>),
(<ref>), (<ref>) and (<ref>) into this expression,
expanding, and rearranging, we find the final result of this subsection,
ℳ_L = 𝒟_L + 𝒮_ℒ [ ℒ^(u)_L 𝒦_df 1/(1 + ℱ 𝒦_df) ℛ^(u)_L ] 𝒮_ℛ .
Here the modified matrix of K matrices is given by
𝒦_df = 𝒦_df,D
+ 𝒟_A ∑_n=0^∞ [ B_T 𝒟_C ]^n
B_T 𝒟_A' .
We stress that the second term in 𝒦_df, which is induced by the presence of
2→3 and 3→2 transitions, contains both diagonal and off-diagonal parts
(the former having an even number of factors of B_T and the latter an odd number).
It is worth noting that, given the notation we use, the form of ℳ_L in Eq. (<ref>) qualitatively resembles that of the three-particle sector in the presence of the ℤ_2 symmetry.
§.§ Quantization condition
The result (<ref>) allows us to determine the energy levels of the theory
in a finite volume. This is because ℳ_L is simply a (conveniently chosen) matrix of correlation
functions through which four-momentum (E,P⃗) flows.
It will thus diverge whenever E equals the energy of a finite-volume state.[
In general, this means that all elements of the matrix will diverge, unless there are
symmetry constraints.]
In general, such a divergence cannot come from 𝒟_L, because this quantity depends only
on the two-particle K matrix, while the spectrum should depend on both two- and three-particle
channels. Since symmetrization will not produce a divergence, it must be that
the quantity in square brackets in Eq. (<ref>) diverges.
For the same reason as for 𝒟_L,
divergences in ℒ_L^(u) and ℛ_L^(u) cannot correspond to
finite-volume energies. A divergence in the matrix 𝒦_df will not lead to a divergent ℳ_L,
since the former appears in both numerator and denominator.
Thus a divergence in ℳ_L can come, in general, only from the factor (1 + ℱ 𝒦_df)^{-1}.
Since this is a matrix, it will diverge whenever the determinant of (1 + ℱ 𝒦_df) vanishes.
Thus we find the quantization condition
det[[ 1 0; 0 1 ] + [ F_2 0; 0 F_3 ][ 𝒦_ 22 𝒦_ 23; 𝒦_ 32 𝒦_df,33 ] ] = 0 ,
where 𝒦_22, 𝒦_23, 𝒦_32 and 𝒦_df,33
are entries in the matrix 𝒦_df defined in Eq. (<ref>).
We stress that each of the entries in Eq. (<ref>) is itself a matrix, containing angular-momentum indices and (for the three-particle cases) also a spectator-momentum index.
The angular momentum indices run over an infinite number of values,
so the quantization condition involves an infinite-dimensional matrix. To use it in practice
one must truncate the angular-momentum space. This will be discussed further in
Sec. <ref>. We also emphasize that Eq. (<ref>) separates finite-volume dependence,
contained in F_2 and F_3, from infinite-volume quantities, contained in 𝒦_df.
The generalized quantization condition has a form that is a relatively simple generalization of
those that hold separately for two and three particles in the case that there is a ℤ_2
symmetry. Indeed, this case can be recovered simply by setting _23=_32=0.
However, we recall that, in the absence of the ℤ_2 symmetry, the elements of 𝒦_df
are complicated quantities, as can be seen from Eq. (<ref>).
They are also unphysical, as they depend on the cutoff functions.
In particular, 𝒦_22 is not equal to the physical two-particle K matrix.
In fact, all we know about the elements of 𝒦_df is that they are smooth functions
of their arguments. In a practical application they would need to be parametrized in
some way.
By contrast, we do know F_2—it is given in Eq. (<ref>)—and F_3 can
be determined from the spectrum of two-particle states below the three-particle threshold,
E^* < 3m.
Thus it can be determined first, before applying the full quantization condition in the regime
3m<E^* < 4m.
This means that by determining enough energy levels, both in the two- and three-particle regimes, one can in principle use the quantization condition to determine the parameters in any smooth ansatz for _ df.
How to go from these parameters to a result for the physical two- and three-particle scattering
amplitudes is the topic of the next section.
§ RELATING 𝒦_ DF TO THE SCATTERING AMPLITUDE
In this section we derive the relation between 𝒦_ df
and the physically observable scattering amplitude in the coupled two-
and three-particle sectors. The quantization condition derived in the
previous section depends on 𝒦_ df and also on the
finite-volume quantities F_2 and F_3. The two-particle
finite-volume factor, F_2, is a known kinematic function, whereas its
three-particle counterpart, F_3, depends on kinematic factors as
well as the two-to-two scattering amplitude at two-particle energies
below the three-particle threshold. Thus, if one uses the standard
Lüscher approach to determine the two-to-two scattering amplitude in
the elastic region, then both F_2 and F_3 are known functions and
each finite-volume energy above the three-particle threshold gives a
constraint on 𝒦_ df.
It follows that one can, in principle, use LQCD,
or other finite-volume numerical techniques,
to determine the divergence-free K matrix via Eq. (<ref>).
As we have already stressed, this infinite-volume quantity is unphysical
in several ways. First, the iϵ pole prescription is replaced
by the modified principal value prescription. Second, the K matrix depends
on the cutoff functions H_2 and H_3. And, finally, the physical
singularities that occur at all above-threshold energies
in the three-to-three scattering amplitude are subtracted to define
a divergence-free quantity.
To relate 𝒦_df to physical scattering amplitudes,
we take a carefully defined infinite-volume limit of the result for ℳ_L
given in Eq. (<ref>), such that ℳ_L goes over to
a matrix of infinite-volume scattering amplitudes.
This is the approach taken in Ref. <cit.>
to derive a relation between 𝒦_ df,3 and the three-particle
scattering amplitude in theories
with a ℤ_2 symmetry preventing two-to-three transitions.
The extension here is that we must consider
a coupled set of equations with
both two- and three-particle channels.
As a warm-up, we briefly review the procedure for determining
the two-particle scattering amplitude, ℳ_22,
below the three-particle threshold,
from its finite-volume analogue, ℳ_L,22.
The latter has the same functional form as X_22 appearing in Eq. (<ref>), with the unphysical 𝒦_22,D replaced by 𝒦_2, the physical two-body K matrix below the three-body threshold,
ℳ_L,22(E, P⃗) = 𝒦_2(E,P⃗) 1/(1 + F_2(E,P⃗) 𝒦_2(E,P⃗)) ,
(E^* < 3m) .
To obtain ℳ_22, we first make the replacement
E → E + i ϵ in the poles that appear in the finite-volume sum
contained in F_2, Eq. (<ref>).
Then we send L→∞ with ϵ held fixed and positive,
and finally send ϵ→0. This converts the finite-volume Feynman diagrams
into infinite-volume diagrams with the iϵ prescription, which are
exactly those diagrams building up ℳ_22.
The result is
ℳ_22(E, P⃗) = lim_L→∞|_iϵ ℳ_L,22(E, P⃗)
= 𝒦_2(E, P⃗) 1/(1 + ρ_2(E, P⃗) 𝒦_2(E, P⃗)) ,
(E^* < 3m) ,
where we have used <cit.>
lim_L→∞|_iϵ F_2(E,P⃗) = ρ_2(E,P⃗) ,
ρ_2;ℓ'm';ℓm(E, P⃗) ≡ δ_ℓ'ℓ δ_m'm ρ̃(E^*) ,
ρ̃(E^*) ≡ 1/(16π E^*) × { −i √(E^*2/4 − m^2) , (2m)^2 < E^*2 ;
|√(E^*2/4 − m^2)| , 0 < E^*2 ≤ (2m)^2 } .
Equation (<ref>) is just the standard relation between the two-particle K matrix
and scattering amplitude.
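For reference, the piecewise phase-space factor ρ̃ is simple to transcribe; a minimal sketch, in units where m = 1:

```python
import numpy as np

def rho_tilde(Estar, m=1.0):
    # Two-particle phase space: -i sqrt(E*^2/4 - m^2)/(16 pi E*) above
    # threshold; |sqrt(E*^2/4 - m^2)|/(16 pi E*) for 0 < E*^2 <= (2m)^2.
    s = Estar**2
    pref = 1.0 / (16 * np.pi * Estar)
    if s > (2 * m)**2:
        return -1j * pref * np.sqrt(s / 4 - m**2)
    return pref * np.sqrt(abs(s / 4 - m**2))

print(rho_tilde(2.5))   # imaginary, above threshold
print(rho_tilde(1.5))   # real, below threshold
```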
§.§ Expressing ℳ in terms of 𝒦_ df
To relate the generalized divergence-free K matrix to the scattering amplitudes
we take the infinite-volume limit of Eq. (<ref>) using the same prescription
as that given in Eq. (<ref>),
[ ℳ_22 ℳ_23; ℳ_32 ℳ_33 ] = lim_ϵ→0 lim_L→∞ { 𝒟_L + 𝒮_ℒ [ ℒ^(u)_L 𝒦_df 1/(1 + ℱ 𝒦_df) ℛ^(u)_L ] 𝒮_ℛ } .
We stress that one must replace E → E + i ϵ
in all two- and three-particle poles appearing in finite-volume sums.
In principle this expression gives the desired relation but in very compact notation.
The remainder of this section is dedicated to explicitly displaying
the integral equations encoded in this result. In doing so, we take over several results from Ref. <cit.>.
We begin by studying the infinite-volume limit of 𝒟_L,
which is given in Eq. (<ref>),
and whose only nonzero element is 𝒟_L,3.
The latter, defined in Eq. (<ref>), is the symmetrized form of
𝒟^(u,u)_L,3, given in Eq. (<ref>).
The infinite-volume limit of the latter quantity,
lim_L→∞|_iϵ 𝒟^(u,u)_L,3;pℓ'm';kℓm ≡ 𝒟^(u,u)_3;ℓ'm';ℓm(p⃗, k⃗) ,
satisfies the integral equation <cit.>
𝒟^(u,u)_3(p⃗, k⃗) =
− ℳ_22(p⃗) G^∞(p⃗, k⃗) ℳ_22(k⃗)
− ∫_r⃗' (1/2ω_r') ℳ_22(p⃗) G^∞(p⃗, r⃗') 𝒟^(u,u)_3(r⃗', k⃗) ,
where
G^∞_ℓ'm';ℓm(p⃗, k⃗) ≡ (k^*/q_p^*)^ℓ'
[ 4π Y_ℓ'm'(k̂^*) H_3(p⃗, k⃗) Y^*_ℓm(p̂^*) ] /
[ 2ω_Pkp (E − ω_k − ω_p − ω_Pkp + iϵ) ] (p^*/q_k^*)^ℓ .
Note that in Eq. (<ref>) we are following the compact notation of
Ref. <cit.>, in which the dependence on the spectator momenta
is made explicit but the angular-momentum indices are suppressed.
Each element appearing in Eq. (<ref>) is a matrix in
angular momentum space with two sets of ℓ m indices,
contracted in the standard way. For example, the first term is explicitly given by
𝒟^(u,u)_3;ℓ' m' ; ℓ m(p⃗, k⃗) ⊃
-ℳ_22;ℓ' m' ; ℓ_1 m_1(p⃗)
G^∞_ℓ_1 m_1 ; ℓ_2 m_2(p⃗,k⃗)
ℳ_22;ℓ_2 m_2 ; ℓ m(k⃗) .
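Equation (<ref>) is a linear integral equation of Fredholm type in the intermediate spectator momentum, so after discretizing the integral on a quadrature grid it becomes a finite linear system. The sketch below shows only that structure, using smooth toy functions in place of ℳ_22 and G^∞ (which in reality carry angular-momentum indices and the iϵ prescription); every numerical input is an illustrative placeholder:

```python
import numpy as np

# Toy s-wave-style sketch: momenta are radial magnitudes on a grid.
m = 1.0
N = 200
r, dr = np.linspace(1e-3, 3.0, N, retstep=True)
w = np.sqrt(r**2 + m**2)

# Illustrative stand-ins (NOT the true kernels): a smooth "two-particle
# amplitude" M(p) and a regulated "exchange propagator" G(p, k).
M = 1.0 / (1.0 + r**2)
G = 1.0 / (w[:, None] + w[None, :] + 1.0)

# Measure for int d^3r'/(2 pi)^3 * 1/(2 w_r'): radial quadrature weights.
mu = dr * r**2 / (2 * np.pi**2 * 2 * w)

# D(p,k) = -M(p) G(p,k) M(k) - int_{r'} M(p) G(p,r') mu(r') D(r',k)
# => (1 + K) D = B with K[i,j] = M[i] G[i,j] mu[j], B = -M G M.
K = M[:, None] * G * mu[None, :]
B = -M[:, None] * G * M[None, :]
D = np.linalg.solve(np.eye(N) + K, B)
print(D.shape, D[0, 0])
```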
We next evaluate the infinite-volume limits of the three-particle end cap functions
ℒ^(u,u)_3,L and ℛ^(u,u)_3,L,
defined, respectively, in Eqs. (<ref>) and (<ref>).
These are the only nontrivial elements of the matrices ℒ^(u)_L
and ℛ^(u)_L [see Eq. (<ref>)].
Defining
lim_L →∞|_i ϵℒ^(u,u)_3,L; p ℓ' m'; k ℓ m ≡ℒ^(u,u)_3; ℓ' m'; ℓ m(p⃗, k⃗) ,
lim_L →∞|_i ϵℛ^(u,u)_3,L; p ℓ' m'; k ℓ m ≡ℛ^(u,u)_3; ℓ' m'; ℓ m(p⃗, k⃗) ,
we find <cit.>
ℒ^(u,u)_3(p⃗, k⃗) =
(1/3 - ℳ_22(p⃗ ) ρ_3(p⃗ ) )
(2 π)^3 δ^3(p⃗ - k⃗)
- 𝒟^(u,u)_3(p⃗, k⃗) ρ_3(k⃗)/2 ω_k ,
ℛ^(u,u)_3(p⃗, k⃗) =
(1/3 - ρ_3(p⃗ ) ℳ_22(p⃗ ) )
(2 π)^3 δ^3(p⃗ - k⃗)
- ρ_3(p⃗ )/2 ω_p𝒟^(u,u)_3(p⃗, k⃗ ) .
Here we have used[
What we call ρ_3 here is denoted simply ρ in Ref. <cit.>.]
lim_L →∞|_i ϵ F = ρ_3 ,
ρ_3;ℓ' m';ℓ m(k⃗) ≡δ_ℓ' ℓ δ_m' m H(k⃗) ρ̃(E_2,k^*) .
We also reiterate that, in Eqs. (<ref>) and (<ref>), ℳ_22 is
needed only below the three-particle threshold, so that, according to our assumptions,
it is a known quantity.
These end caps must be combined with the infinite-volume limit of the middle factor
in Eq. (<ref>),
𝒯 ≡ lim_L→∞|_iϵ 𝒯_L ,
𝒯_L = 𝒦_df 1/(1 + ℱ 𝒦_df) .
Here both 𝒯_L and its infinite-volume counterpart, 𝒯, are matrices in the space of two- and three-particle channels
𝒯_L ≡ [ 𝒯_22,L;ℓ_2'm_2';ℓ_2m_2  𝒯_23,L;ℓ_2'm_2';kℓ_3m_3 ; 𝒯_32,L;k'ℓ_3'm_3';ℓ_2m_2  𝒯_33,L;k'ℓ_3'm_3';kℓ_3m_3 ] ,
𝒯 ≡ [ 𝒯_22;ℓ_2'm_2';ℓ_2m_2  𝒯_23;ℓ_2'm_2';ℓ_3m_3(k⃗) ; 𝒯_32;ℓ_3'm_3';ℓ_2m_2(k⃗')  𝒯_33;ℓ_3'm_3';ℓ_3m_3(k⃗'; k⃗) ] .
We have given different labels for the angular-momentum indices on the two- and three-particle states to stress that these are independent quantities. To take the infinite-volume limit of 𝒯_L, it is more convenient to use one
of the following two matrix equations:
𝒯_L = 𝒦_df − 𝒦_df ℱ 𝒯_L ,
𝒯_L = 𝒦_df − 𝒯_L ℱ 𝒦_df .
These go over to integral equations for 𝒯 in the infinite-volume limit.
The nonzero components of the matrix ℱ are F_2 and F_3
[see Eq. (<ref>)].
The infinite-volume limit of F_2 is given in Eq. (<ref>),
while to obtain that for F_3 it is convenient to rewrite it as <cit.>
F_3=F/2ω L^3[1/3-ℳ_L,22F-𝒟_L^(u,u)F/2ω L^3] ,
which allows the limit to be constructed from those for F, ℳ_L,22 and 𝒟_L^(u,u)
given above.
We now have all the components to proceed.
Taking the infinite-volume limits of Eqs. (<ref>), (<ref>) and
(<ref>), expanding out the 2×2 matrices, and performing
some simple algebraic manipulations, we find
𝒯_22 =
[1 + 𝒦_22 ρ_2]^{-1} [ 𝒦_22
− ∫_r⃗' ∫_r⃗ 𝒦_23(r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒯_32(r⃗) ] ,
𝒯_23(k⃗)
=
[1 + 𝒦_22 ρ_2]^{-1} [ 𝒦_23(k⃗)
− ∫_r⃗' ∫_r⃗ 𝒦_23(r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒯_33(r⃗, k⃗) ] ,
𝒯_32(k⃗')
=
[ 𝒦_32(k⃗')
− ∫_r⃗' ∫_r⃗ 𝒯_33(k⃗', r⃗') ℛ_3^(u,u)(r⃗', r⃗) ρ_3(r⃗)/2ω_r 𝒦_32(r⃗) ]
[1 + ρ_2 𝒦_22]^{-1} ,
𝒯_33(k⃗', k⃗)
=
𝒦_df,33(k⃗', k⃗)
− 𝒦_32(k⃗') ρ_2 𝒯_23(k⃗)
− ∫_r⃗' ∫_r⃗ 𝒦_df,33(k⃗', r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒯_33(r⃗, k⃗) .
Substituting Eq. (<ref>) in Eq. (<ref>),
and performing some further manipulations, we arrive at an integral equation for 𝒯_33 alone,
𝒯_33(k⃗', k⃗)
=
V_33(k⃗', k⃗)
− ∫_r⃗' ∫_r⃗
V_33(k⃗', r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒯_33(r⃗, k⃗) ,
where
V_33(k⃗' ,k⃗ )
= 𝒦_ df, 33(k⃗' ,k⃗ )
-
𝒦_32(k⃗' ) ρ_2 [1+ 𝒦_22 ρ_2]^-1𝒦_23(k⃗ ).
Given 𝒯_33 we can then perform the integrals in Eqs. (<ref>) and (<ref>)
to obtain 𝒯_23 and 𝒯_32, respectively, and finally perform the integral in
Eq. (<ref>) to obtain 𝒯_22.
We emphasize that all these equations involve on-shell quantities evaluated at fixed total energy and momentum, (E,P⃗).
Finally, we can combine the results for 𝒯, the end caps (ℒ_L^(u) and ℛ_L^(u)), and 𝒟_3, to read off
the results for the four components of the scattering amplitude from Eq. (<ref>),
ℳ_22(p̂'^*; p̂^*)
=
𝒯_22(p̂'^*; p̂^*) ,
ℳ_23(p̂'^*; k⃗, â^*)
=
{ ∫_r⃗ 𝒯_23(r⃗) ℛ_3^(u,u)(r⃗, k⃗) } 𝒮_ℛ ,
ℳ_32(k⃗', â'^*; p̂^*) = 𝒮_ℒ { ∫_r⃗' ℒ_3^(u,u)(k⃗', r⃗') 𝒯_32(r⃗') } ,
ℳ_33(k⃗', â'^*; k⃗, â^*)
=
𝒟_3(k⃗', â'^*; k⃗, â^*)
+
𝒮_ℒ { ∫_r⃗ ∫_r⃗' ℒ_3^(u,u)(k⃗', r⃗) 𝒯_33(r⃗, r⃗') ℛ_3^(u,u)(r⃗', k⃗) } 𝒮_ℛ .
In these expressions we have contracted the external harmonic indices with spherical harmonics to reach functions of momenta with no implicit indices, and symmetrized 𝒟_3^(u,u) to obtain 𝒟_3.
To summarize, given 𝒦_df at a given value of (E,P⃗),
together with knowledge of ℳ_22 below the three-particle threshold,
we can obtain ℳ at this same total four-momentum by
solving the integral equations (<ref>) for 𝒟_3^(u,u)
and (<ref>) for 𝒯_33, and then doing integrals, matrix multiplications
and symmetrizations. All the integrals are of finite range due to the presence of
the UV cutoff H(k⃗) in ρ_3. The angular-momentum matrices have infinite size, and thus for
practical applications one must truncate them, as will be discussed in Sec. <ref>.
We see from Eqs. (<ref>) and (<ref>) that the two-body scattering amplitude no longer satisfies Eq. (<ref>) above the three-particle threshold.[
If we use the full formalism below the three-particle threshold, then it is not
obvious from our results how one regains the two-particle form of
Eq. (<ref>). We return to this issue in the conclusions.
]
It is reassuring to apply the 𝒦_23→0 limit to Eq. (<ref>)
lim_𝒦_23→0ℳ_22 =
[1+ 𝒦_22 ρ_2]^-1𝒦_22 ,
in which we recover the elastic two-particle unitarity form, Eq. (<ref>).
In Appendix <ref> we explore the consequences of time-reversal and parity invariance for these quantities. We conclude that, for theories with these symmetries, the two off-diagonal components of both 𝒦_ df and the scattering amplitude are simply related, so that only one of the two need be explicitly calculated.
§.§ Expressing 𝒦_ df in terms of ℳ
In this subsection we give a method for determining 𝒦_ df from the scattering amplitude, ℳ.
In other words, we invert the expressions derived in the previous subsection.
The motivation for doing so is that we can imagine having a parametrization of ℳ,
containing a finite number of parameters, from which we want to predict the finite-volume
spectrum. To do so, we first need to be able to convert from ℳ to 𝒦_df,
so as to be able, in a second step, to use
the quantization condition, Eq. (<ref>), to calculate the energy levels.
In the two-particle sector, applying the quantization condition in this manner
has allowed lattice practitioners to disentangle partial waves that mix due
to the reduction of rotational symmetry <cit.>,
as well as the different components in coupled-channel scattering <cit.>.
This is done by parametrizing the scattering amplitudes,
deducing how the finite-volume energy levels depend on a given parametrization
and then performing global fits of the energy levels extracted from various volumes, boosts, and irreducible representations of the various little groups associated with the different total momenta. This technique was proposed and tested in Ref. <cit.> for the study of coupled-channel two-particle systems. Given the parallels between coupled-channel systems with only two-particle states and the coupled two-to-three system considered here, this approach is likely to be required in an implementation of the present formalism as well.
We again follow closely the derivation of Ref. <cit.> and use results
from that work.
We begin by defining the divergence-free three-to-three scattering amplitude
ℳ_ df, 33(k⃗', â'^*; k⃗, â^*) ≡ℳ_33(k⃗', â'^*; k⃗, â^*) - D_3(k⃗', â'^*; k⃗, â^*) ,
and expressing this in terms of building blocks introduced in the previous subsection
ℳ_df,33(k⃗', â'^*; k⃗, â^*)
=
𝒮_ℒ { ∫_r⃗ ∫_r⃗' ℒ_3^(u,u)(k⃗', r⃗') 𝒯_33(r⃗', r⃗) ℛ_3^(u,u)(r⃗, k⃗) } 𝒮_ℛ ,
=
∫_r⃗ ∫_b̂^* ∫_r⃗' ∫_b̂'^* {
(2π)^3 δ^3(k⃗' − r⃗') 4π δ^2(â'^* − b̂'^*)
+ Δ_ℒ(k⃗', â'^*; r⃗', b̂'^*) }
× 𝒯_33(r⃗', b̂'^*; r⃗, b̂^*) {
(2π)^3 δ^3(k⃗ − r⃗) 4π δ^2(â^* − b̂^*)
+ Δ_ℛ(r⃗, b̂^*; k⃗, â^*) } .
In the second form of the result we have written 𝒯_33 in terms of on-shell momenta
rather than the spherical harmonic indices used in the first form.
The kernels Δ_ℛ and Δ_ℒ are
taken from Ref. <cit.> and their definition can be inferred by comparing
Eqs. (<ref>) and (<ref>).
Here and below, all angular integrals are normalized to unity,
i.e. ∫_â^* = ∫ d Ω_â^*/(4 π).
Similar relations hold for ℳ_23 and ℳ_32
ℳ_23(p̂'^*; k⃗, â^*)
=
∫_r⃗ ∫_b̂^* 𝒯_23(r⃗) {
(2π)^3 δ^3(k⃗ − r⃗) 4π δ^2(â^* − b̂^*)
+ Δ_ℛ(r⃗, b̂^*; k⃗, â^*) } ,
ℳ_32(k⃗', â'^*; p̂^*)
=
∫_r⃗' ∫_b̂'^* {
(2π)^3 δ^3(k⃗' − r⃗') 4π δ^2(â'^* − b̂'^*)
+ Δ_ℒ(k⃗', â'^*; r⃗', b̂'^*) } 𝒯_32(r⃗') .
Now, using the kernels I_ℒ and
I_ℛ defined in Ref. <cit.> via the integral equations,
I_ℒ(k⃗', â'^*; k⃗, â^*) = (2π)^3 δ^3(k⃗' − k⃗) 4π δ^2(â'^* − â^*)
− ∫_r⃗' ∫_b̂^*
I_ℒ(k⃗', â'^*; r⃗', b̂^*) Δ_ℒ(r⃗', b̂^*; k⃗, â^*) ,
I_ℛ(k⃗', â'^*; k⃗, â^*) = (2π)^3 δ^3(k⃗' − k⃗) 4π δ^2(â'^* − â^*)
− ∫_r⃗' ∫_b̂^* Δ_ℛ(k⃗', â'^*; r⃗', b̂^*) I_ℛ(r⃗', b̂^*; k⃗, â^*)
,
we derive the following expressions for 𝒯_23, 𝒯_32, and 𝒯_33 in terms of ℳ_23, ℳ_32, and ℳ_df,33, respectively:
4π Y^*_ℓ'm'(p̂'^*) 𝒯_23;ℓ'm';ℓm(k⃗) Y_ℓm(â^*)
=
∫_r⃗ ∫_b̂^* ℳ_23(p̂'^*; r⃗, b̂^*)
I_ℛ(r⃗, b̂^*; k⃗, â^*)
,
4π Y^*_ℓ'm'(â'^*) 𝒯_32;ℓ'm';ℓm(k⃗') Y_ℓm(p̂^*)
=
∫_r⃗ ∫_b̂^*
I_ℒ(k⃗', â'^*; r⃗, b̂^*)
ℳ_32(r⃗, b̂^*; p̂^*)
,
4π Y^*_ℓ'm'(â'^*) 𝒯_33;ℓ'm';ℓm(k⃗'; k⃗) Y_ℓm(â^*)
= ∫_r⃗' ∫_b̂'^* ∫_r⃗ ∫_b̂^*
I_ℒ(k⃗', â'^*; r⃗', b̂'^*) ℳ_df,33(r⃗', b̂'^*; r⃗, b̂^*)
I_ℛ(r⃗, b̂^*; k⃗, â^*)
,
while 𝒯_22 = ℳ_22 from Eq. (<ref>).
These expressions allow one to obtain the various components of 𝒯 from the scattering amplitude. The final task is to invert Eqs. (<ref>), (<ref>) and (<ref>), to determine 𝒦_df given 𝒯. One simple way to do this is to start with the inverted finite-volume relation and again take the infinite-volume limit, as in Eqs. (<ref>) and (<ref>). This gives
𝒦_22 =
[1 − 𝒯_22 ρ_2]^{-1} [ 𝒯_22
+
∫_r⃗' ∫_r⃗
𝒯_23(r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒦_32(r⃗) ] ,
𝒦_23(k⃗)
=
[1 − 𝒯_22 ρ_2]^{-1} [ 𝒯_23(k⃗)
+
∫_r⃗' ∫_r⃗
𝒯_23(r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒦_df,33(r⃗, k⃗)
] ,
𝒦_32(k⃗')
=
[ 𝒯_32(k⃗')
+
∫_r⃗' ∫_r⃗
𝒦_df,33(k⃗'; r⃗') ℛ_3^(u,u)(r⃗', r⃗) ρ_3(r⃗)/2ω_r 𝒯_32(r⃗)
]
[1 − ρ_2 𝒯_22]^{-1} ,
𝒦_df,33(k⃗', k⃗)
=
W_33(k⃗', k⃗)
+
∫_r⃗' ∫_r⃗
W_33(k⃗', r⃗') ρ_3(r⃗')/2ω_r' ℒ_3^(u,u)(r⃗', r⃗) 𝒦_df,33(r⃗, k⃗) ,
where
W_33(k⃗', k⃗)
= 𝒯_33(k⃗', k⃗)
+
𝒯_32(k⃗') ρ_2 [1 − 𝒯_22 ρ_2]^{-1} 𝒯_23(k⃗) .
This completes the expression for 𝒦_ df in terms of ℳ.
In summary, given ℳ, one can determine the finite-volume energies as follows:
* Using ℳ_22 below the three-particle threshold, solve the integral equation (<ref>) to determine 𝒟^(u,u)_3(p⃗, k⃗).
* Substitute this into Eqs. (<ref>) and (<ref>) to determine ℒ^(u,u)_3(p⃗, k⃗) and ℛ^(u,u)_3(p⃗, k⃗) and from these infer Δ_ℒ and Δ_ℛ via Eqs. (<ref>) and (<ref>).
* Using Δ_ℒ and Δ_ℛ as inputs, solve the integral equations (<ref>) and (<ref>), and thereby determine I_ℒ and I_ℛ.
* Use these, in turn, in Eqs. (<ref>)-(<ref>) to deduce the two-by-two matrix 𝒯 from the scattering amplitude.
* Inserting 𝒯, ℒ^(u,u)_3 and ℛ^(u,u)_3 into Eqs. (<ref>)-(<ref>), calculate the generalized divergence-free K matrix, 𝒦_ df, corresponding to the input scattering amplitude.
* Substitute 𝒦_ df into Eq. (<ref>) and solve for all roots in E at fixed values of P⃗ and L.
Up to neglected terms that scale as e^- m L, these solutions correspond to the unique finite-volume energies associated with the input scattering amplitudes. Performing this procedure for a particular parametrization of ℳ, one may fit the parameter set to a large number of finite-volume energies and thereby determine the coupled two- and three-particle scattering amplitudes from Euclidean finite-volume calculations.
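In practice, step 6 amounts to locating zeros of a determinant as a function of E at fixed P⃗ and L. A minimal sketch of such a root scan is shown below; `det_fn` is a stand-in for whatever routine implements det[1 + ℱ𝒦_df], and the toy function in the demo is purely illustrative. (A real implementation must also separate genuine roots from the poles of F_2 and F_3, where the determinant likewise changes sign.)

```python
import numpy as np

def levels_from_determinant(det_fn, E_lo, E_hi, n_scan=2000):
    """Locate roots of a real-valued quantization determinant by scanning
    for sign changes and bisecting; det_fn stands in for step 6."""
    Es = np.linspace(E_lo, E_hi, n_scan)
    vals = [det_fn(E) for E in Es]
    roots = []
    for a, b, fa, fb in zip(Es, Es[1:], vals, vals[1:]):
        if fa * fb < 0:                      # bracketed sign change
            lo, hi = a, b
            for _ in range(60):              # bisect to refine the root
                mid = 0.5 * (lo + hi)
                if det_fn(lo) * det_fn(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

# Toy determinant with known zeros, standing in for det[1 + F K_df]:
print(levels_from_determinant(lambda E: np.sin(5 * E), 3.0, 4.0))
```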
§ APPROXIMATIONS
In order to use Eq. (<ref>) in practice, it is necessary to
truncate the matrices appearing inside the determinant.
To systematically understand the various truncations that one might apply it is useful to “subduce” the quantization, i.e. to block diagonalize 1 + 𝒦_ dfℱ and identify the quantization conditions associated with each sector. The divergence-free K matrix is an infinite-volume quantity and is diagonal in the total angular momentum of the system. By contrast the finite-volume quantities F_2 and F_3 couple different angular-momentum states, a manifestation of the reduced rotational symmetry of the box.
At the same time, the residual symmetry of the finite volume still provides important restrictions on the form of F_2 and F_3. For a given boost, these can be block diagonalized, with each block corresponding to an irreducible representation of the symmetry group. One can then truncate each block by
assuming that all partial waves above some ℓ_ max do not
contribute. This subduction procedure is well understood for the
two-particle system <cit.>, and is expected to
carry through to three-particle systems.
In this work we do not further discuss the subduction of the quantization condition but instead consider two simple approximations applied directly to the main result. These approximations were also discussed in Refs. <cit.>. First, we
consider the case of ℓ_2, max = ℓ_3, max =0, in which all two-particle angular momentum components beyond the s wave are assumed to
vanish. In the two-particle sector, this
implies that all quantities that were previously matrices in angular
momentum are replaced with single numbers. The three-particle states, by contrast, still carry dependence on the spectator momentum so that the index space is reduced from k, ℓ, m to k.
We refer to this as the s wave approximation.
Using the same arguments as in Ref. <cit.>, one can show that the
presence of the cutoff function H_3 in F and G^H implies that only a finite
number of spectator momenta contribute to the quantization condition.
Labeling the set of allowed momenta {k_1,k_2,…,k_N },
we can write the condition out explicitly in the s wave approximation,
det [
1 + F^s_2 𝒦^s_22      [F^s_2 𝒦^s_23]_k_1          [F^s_2 𝒦^s_23]_k_2          ⋯   [F^s_2 𝒦^s_23]_k_N
[F^s_3 𝒦^s_32]_k_1    1 + [F^s_3 𝒦^s_df,33]_k_1;k_1   [F^s_3 𝒦^s_df,33]_k_1;k_2   ⋯   [F^s_3 𝒦^s_df,33]_k_1;k_N
[F^s_3 𝒦^s_32]_k_2    [F^s_3 𝒦^s_df,33]_k_2;k_1       1 + [F^s_3 𝒦^s_df,33]_k_2;k_2    [F^s_3 𝒦^s_df,33]_k_2;k_N
⋮                                                        ⋱
[F^s_3 𝒦^s_32]_k_N    [F^s_3 𝒦^s_df,33]_k_N;k_1       [F^s_3 𝒦^s_df,33]_k_N;k_2   ⋯   1 + [F^s_3 𝒦^s_df,33]_k_N;k_N
] = 0 .
The “s" superscripts indicate that ℓ=0 for the two-particle states and also for one of the particle pairs within the three-particle states. The explicit definitions for the components of 𝒦_ df are
𝒦_22^s≡𝒦_22;00;00 , 𝒦_23;k^s≡𝒦_23;00;k00 ,
𝒦_32;k'^s≡𝒦_32;k'00;00 , 𝒦_
df,33;k';k^s≡𝒦_ df,33;k'00;k00 .
The various finite-volume quantities are then given by
F^s_2 ≡ F^s_2(E, P⃗) ≡ (1/2) [ (1/L^3) ∑_a⃗ −
PV ∫ d^3a/(2π)^3 ]
h(a⃗) / [2ω_a 2ω_Pa (E − ω_a − ω_Pa)] ,
F^s_3;k';k ≡ [ F^s/(6ω L^3) −
F^s/(2ω L^3) 1/(1 + ℳ^s_2,L G^s) ℳ^s_2,L F^s ]_k';k ,
F^s_k';k ≡ δ_k'k H(k⃗) F^s_2(E − ω_k, P⃗ − k⃗) ,
G^s_k';k ≡ H_3(k⃗', k⃗) / [2ω_Pkk' (E − ω_k − ω_k' − ω_Pkk')] × 1/(2ω_k L^3) ,
ℳ^s_2,L;k';k ≡ δ_k'k 𝒦^s_2(E − ω_k, P⃗ − k⃗)
1/(1 + F^s_2(E − ω_k, P⃗ − k⃗) 𝒦^s_2(E − ω_k, P⃗ − k⃗)) .
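Since all objects above are finite matrices in the spectator index, F^s_3 can be assembled with elementary linear algebra once F^s, G^s, ℳ^s_2,L and the diagonal matrix [2ωL^3] are in hand. A schematic numpy sketch, using random placeholder matrices rather than the true kinematic inputs:

```python
import numpy as np

N = 3                                        # number of allowed spectator momenta
rng = np.random.default_rng(1)

# Placeholder inputs (illustrative values only, not computed from E, P, L):
Fs = np.diag(rng.uniform(0.01, 0.1, N))      # F^s_{k';k}, diagonal in k
Gs = 0.05 * rng.standard_normal((N, N))
Gs = 0.5 * (Gs + Gs.T)                       # G^s_{k';k}, symmetric
M2L = np.diag(rng.uniform(-0.5, 0.5, N))     # M^s_{2,L}, diagonal in k
twoWL3 = np.diag(rng.uniform(5.0, 10.0, N))  # [2 w L^3], diagonal

inv2WL3 = np.linalg.inv(twoWL3)

# F^s_3 = F^s/(6 w L^3) - F^s/(2 w L^3) [1 + M^s_{2,L} G^s]^{-1} M^s_{2,L} F^s
F3s = (Fs @ inv2WL3) / 3.0 \
      - Fs @ inv2WL3 @ np.linalg.solve(np.eye(N) + M2L @ Gs, M2L @ Fs)
print(F3s)
```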
Thus in this approximation, there are (N+1)^2 unknown elements of 𝒦_df,
a complete determination of which would require determining the same number of
energy levels.
[The number of independent components is reduced if the theory is symmetric under time reversal and/or parity transformations. For example, if the theory has both symmetries, the relations (<ref>) and (<ref>) imply that the
number of independent components is (N+1)(N+2)/2.]
Assuming this has been achieved, the relations of Sec. <ref> that give
ℳ in terms of 𝒦_df still hold, except that now all the previously implicit
spherical-harmonic indices are set to zero.
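For concreteness, the truncated quantization condition can be assembled as an explicit (N+1)×(N+1) determinant. A schematic sketch, with hypothetical numerical values standing in for the kinematic functions and K-matrix elements:

```python
import numpy as np

def qc_matrix(F2s, F3s, K22, K23, K32, Kdf33):
    """Assemble the (N+1)x(N+1) matrix of the s-wave quantization condition:
    row/column 0 = two-particle channel, rows/columns 1..N = spectator momenta."""
    N = F3s.shape[0]
    Q = np.eye(N + 1, dtype=complex)
    Q[0, 0] += F2s * K22
    Q[0, 1:] += F2s * K23          # [F^s_2 K^s_23]_k
    Q[1:, 0] += F3s @ K32          # [F^s_3 K^s_32]_k'
    Q[1:, 1:] += F3s @ Kdf33       # [F^s_3 K^s_df,33]_{k';k}
    return Q

# Illustrative inputs (hypothetical numbers, not derived from any E, P, L):
N = 3
F2s, K22 = 0.04, 1.3
K23 = np.full(N, 0.2)
K32 = np.full(N, 0.2)
F3s = 0.03 * np.eye(N)
Kdf33 = 0.5 * np.ones((N, N))
print(np.linalg.det(qc_matrix(F2s, F3s, K22, K23, K32, Kdf33)))
```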
Second, we consider the simplest possible case,
referred to in Refs. <cit.> as
the isotropic approximation. In this approximation all components of 𝒦_ df are constant functions of the momenta of the incoming and outgoing particles. Compared to the s-wave-only limit discussed above, here we make the additional assumption that 𝒦_23, 𝒦_32 and 𝒦_df,33 have the same values for all choices of the spectator momentum, i.e. are constant functions of these coordinates,
𝒦_23^ iso =𝒦_23;00;k00 ,
𝒦_32^ iso =𝒦_32;k'00;00 ,
𝒦_
df,33^ iso =𝒦_ df,33;k'00;k00 ,
for all spectator momenta. Within this approximation,
Eq. (<ref>) simplifies further to
(1 + F^s_2 𝒦^s_22)(1 + F^iso_3 𝒦^iso_df,33) = F^s_2 F^iso_3 𝒦^iso_32 𝒦^iso_23 ,
where
F^ iso_3≡∑_k',k F^s_3;k';k .
Additional simplifications to the relation between 𝒦_df and ℳ also occur,
but we do not give these explicitly as they are simple generalizations of those
derived in Ref. <cit.>.
It is worth noting that Eq. (<ref>) resembles the expression for two coupled two-particle channels each projected to a single partial wave <cit.>. In the limit that the 2↔3 coupling vanishes, one recovers the spectrum for s wave two-particle states together with that obtained in Ref. <cit.> for three-particle states in the isotropic approximation. Turning on the two-to-three coupling then shifts the levels and also splits any degeneracies between two- and three-particle states, as is
shown schematically in the rightmost panel of Fig. <ref>.
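This splitting is easy to see in a toy model: take single-pole approximations F^s_2 ≈ c_2/(E_0 − E) and F^iso_3 ≈ c_3/(E_0 − E) near a pair of accidentally degenerate noninteracting levels at E_0, so that Eq. (<ref>) becomes a quadratic in E. All numbers below are hypothetical:

```python
import numpy as np

E0, c2, c3 = 3.5, 0.05, 0.05   # degenerate free level and pole residues (toy)
K22, K33 = 2.0, 1.5            # toy K-matrix entries

def levels(K23):
    # (1 + F2 K22)(1 + F3 K33) = F2 F3 K23 K32 with F_i = c_i/(E0 - E);
    # multiplying by (E0 - E)^2 gives a quadratic in x = E0 - E:
    #   (x + c2 K22)(x + c3 K33) - c2 c3 K23^2 = 0   (taking K32 = K23).
    a, b = c2 * K22, c3 * K33
    disc = np.sqrt((a + b)**2 - 4.0 * (a * b - c2 * c3 * K23**2))
    x = np.array([(-(a + b) + disc) / 2.0, (-(a + b) - disc) / 2.0])
    return np.sort(E0 - x)

print(levels(0.0))  # decoupled two- and three-particle levels
print(levels(1.0))  # 2<->3 coupling shifts the levels and splits them apart
```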
§ CONCLUSIONS AND OUTLOOK
In this paper we have obtained the finite-volume quantization condition
for a general theory of identical scalar particles,
in the regime where both two- and three-particle states contribute
(3m < E^* < 4m).
In other words, we have generalized the quantization conditions of Refs. <cit.> to systems with general 2↔3 interactions. This opens the door for the first studies of particle production, a central aspect of relativistic quantum field theory, from finite-volume numerical calculations. The result also represents important progress toward our ultimate goal of relating the finite-volume spectrum and the S matrix for all possible two- and three-particle systems.
Significant work is still required in order to make this formalism a practical tool for numerical lattice QCD.
At this stage, the most important remaining restriction is that the quantization condition is valid for a given E^* only if the two-particle K matrix, 𝒦_2,
is a smooth function for two-particle energies below E^* − m.
This is a crucial limitation, as there are many examples of interesting
three-particle systems in particle and nuclear physics where 𝒦_2
does have such poles, due to the presence of narrow resonances.
In addition to the inclusion of singularities in 𝒦_2,
the quantization condition must be generalized to describe nonidentical particles and particles with intrinsic spin, and to accommodate multiple two- and three-particle channels. The importance of these extensions is exemplified by the case of the Roper resonance, which can decay into multiple two- (Nπ, Nη) and three-particle (Nππ) channels, and for which poles in 𝒦_2 should arise in the three-particle channel due to Nππ → Δπ → Nππ.
We expect that the generalizations in particle content will be relatively straightforward,
based on the experience with two particles. Work in this direction is underway.
The methodology adopted here differs from that used in
previous field-theoretic derivations of quantization conditions
(e.g. that of Ref. <cit.>)
because it relies on time-ordered perturbation theory in an essential way.
This approach has the advantage that it appears to naturally generalize to four or more particles. While such a generalization seems quite ambitious
at present, it is our ultimate goal as it will allow us to completely establish the relation between finite-volume energies and scattering observables. This in turn will allow us to study a large variety of hadronic resonances that decay into many-particle final states.
One result that we find surprising concerns the transformation, under
time reversal, of the auxiliary amplitude 𝒦_df.
As shown in Appendix <ref>, 𝒦_df has exactly the same
transformation properties as ℳ.
The complicated construction of 𝒦_df,3, described in Ref. <cit.>
for the case of no mixing with two-particle channels, and carried over here to the
case where two-to-three mixing does occur,
includes a choice of ordering of loop integrals that seems to violate time reversal.
Nevertheless, any such violation must be canceled by the “decorations" that are
applied to obtain the final form.
Thus 𝒦_df has properties that are closer to those of ℳ than previously expected.
One property that 𝒦_df does not share with ℳ is Lorentz invariance.
Our derivation violates manifest Lorentz invariance since it uses time-ordered
perturbation theory. Nevertheless, as in the case of time-reversal symmetry, it
could have been the case that, at the end of the analysis, 𝒦_df turned out to
be Lorentz invariant. In fact, it nearly does. Looking at the relations
in Sec. <ref>, one finds that the only violation of Lorentz invariance
comes from the denominator in G^∞ [see Eq. (<ref>)].
The factor of ω_Pkp(E-ω_k-ω_p-ω_Pkp+iϵ) is
manifestly noninvariant.[
The remaining factors in G^∞ are invariant as they always refer to the
CM frame of the nonspectator pair.
Were it not for the form of the denominator,
ℒ_3^(u,u)(p⃗, k⃗) 2ω_k
and 2ω_p ℛ_3^(u,u)(p⃗, k⃗) would be Lorentz
invariant, as would 𝒟_3^(u,u), and this would carry over to 𝒦_df,
because all integrals would then be over Lorentz-invariant phase space.]
We are investigating an alternative, Lorentz-invariant definition of 𝒦_ df,3, but
save the details for a future publication.
Finally, we highlight another feature of our formalism that deserves to be better
understood. This concerns what happens when E^* passes through the three-particle
threshold at E^* = 3m. When we are sufficiently far below this threshold, the two-particle
analysis should be valid, leading to the quantization condition
det(1 + F_2 𝒦_2) = 0. However, as stressed earlier, we can also use our more
general approach in this regime, and it should lead to the same answer.
This equality is not, however, manifest. The issue is that 𝒦_22 does not coincide with the standard two-particle K matrix, even below the three-particle threshold. To study the subthreshold behavior of 𝒦_22 one must use its relation to
the standard two-particle scattering amplitude given by Eqs. (<ref>) and (<ref>). It should then be possible to express the quantization condition as the vanishing of det(1 + F_2 𝒦_2), up to corrections that are exponentially suppressed in L, but become enhanced near the three-particle threshold.
§ ACKNOWLEDGMENTS
RAB acknowledges support from U.S. Department
of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates,
LLC, manages and operates Jefferson Lab. SRS was supported in part by the United States Department
of Energy grant DE-SC0011637.
§ DETAILS OF THE SMOOTH CUTOFF FUNCTIONS
In this appendix we give an explicit example of the smooth cutoff
functions used in the main text.
These must satisfy the symmetry properties of Eqs. (<ref>) and (<ref>),
as well as the “nonoverlap" property of Eq. (<ref>),
and must equal unity when the particles are on shell.
Our example uses the interpolating function J(x) introduced in Ref. <cit.>.
This vanishes for x≤0, equals unity for x≥1, and interpolates smoothly in between.
A specific example of such a function is
J(x) ≡ { 0 , x ≤ 0 ;
exp( −(1/x) exp[ −1/(1−x) ] ) , 0 < x < 1 ;
1 , 1 ≤ x } ,
but our formalism works for any J that satisfies the
key property of being smooth for all x.
Our example for the three-particle cutoff function is then given by
H_3(k⃗, a⃗) = H(k⃗) H(a⃗) H(b⃗_ka)
,
where b⃗_ka = P⃗ - k⃗ - a⃗, and
H(k⃗) = J(z_3) ,
z_3 = [E_2,k^*2 − (1+α) m^2] / [(3−α) m^2] .
Here α is a parameter satisfying -1< α < 3 that
we discuss in more detail below. The value α=-1 corresponds to the cutoff used in
Refs. <cit.>, but here we need a more general form.
To understand Eqs. (<ref>) and (<ref>),
recall that E_2,k^*2 = (E−ω_k)^2 − (P⃗−k⃗)^2 is the squared CM energy of the
nonspectator pair, assuming that the spectator is on shell.
If all three particles are on shell, it follows that E_2,k^*2≥ 4 m^2.
In this case, z_3 ≥ 1
(with z_3=1 at threshold for the nonspectator pair, E_2,k^*2=4m^2)
and so H(k⃗)=1. Similarly, the other two H functions equal unity.
Thus H_3=1 if all three particles are on shell.[
We note that the converse does not hold: H_3=1 does not imply that all
three particles are on shell, as can be seen from the simple example of
P⃗=k⃗=a⃗=0 with E> 3m.]
Now consider changing k⃗ (with E and P⃗ fixed)
such that E_2,k^*2 drops below 4m^2. Then z_3 drops below unity,
and H(k⃗) falls smoothly,
vanishing when E_2,k^*2 reaches (1+α) m^2,
and staying zero thenceforth.
Because of the symmetric product in Eq. (<ref>), it follows that
H_3 vanishes whenever any nonspectator pair has a CM energy squared lying
(3−α) m^2 or more below the threshold value 4m^2.
We stress that this vanishing of H_3 always occurs when, with fixed E and P⃗, any of the three momenta becomes sufficiently large. Thus H_3 acts as a UV cutoff.
We next describe our example for the two-particle cutoff function, H_2(p⃗). This depends
only on a single momentum, since the momentum of the second particle is
fixed to b⃗_p≡P⃗-p⃗. The aim of H_2 is to ensure that,
if either p⃗ or b⃗_p is equal to one of the three-particle momenta
k⃗, a⃗ or b⃗_ka, then H_2(p⃗) H_3(k⃗,a⃗) = 0.
The motivation for this condition is discussed in the main text.
We also need H_2(p⃗) to equal unity if both particles are on shell.
A solution to these conditions is
H_2(p⃗) = J(z_p) J(z_b) ,
z_p = E_2,p^*2 - (1+α) m^2/(-α m^2) ,
z_b = E_2,b_p^*2 - (1+α) m^2/(-α m^2) .
Here α is the same parameter as above, except now satisfying 0 < α < 3.
In the two-particle case, E_2,p^*2 (given by the same expression as E_2,k^*2
except with k replaced by p)
is the invariant mass squared of the particle with momentum b⃗_p,
assuming that the particle with momentum p⃗ is on shell.
Similarly, E_2,b_p^*2 is the invariant mass squared of the particle with momentum
p⃗ if that with momentum b⃗_p is on shell.
In general these two invariant masses are different.
One case when they are the same is if both particles are on shell, in which case
E_2,p^*2=E_2,b_p^*2=m^2. Then z_p=z_b=1, so that H_2=1, as required.
Now we consider what happens to H_2 as we vary p⃗ away from a value
leading to two on-shell particles. If E_2,p^*2 decreases below m^2,
then J(z_p) remains equal to unity.
If, instead, E_2,p^*2 increases above m^2, then J(z_p) decreases,
vanishing for E_2,p^*2≥ (1+α) m^2.
Thus H_2 vanishes when either E_2,p^*2 or
E_2,b_p^*2 reaches (1+α) m^2,
i.e. when one of these quantities lies
α m^2 or more above threshold.
We can now see why H_2 H_3=0 if one of the two-particle momenta equals one of
the three-particle momenta.
Consider first k⃗=p⃗, so that E_2,k^*2=E_2,p^*2.
If E_2,k^*2≤ (1+α) m^2, we have H_2(p⃗) > 0 and H(k⃗)=0,
while if
E_2,k^*2≥ (1+α) m^2 we have H_2(p⃗) =0 and H(k⃗)>0.
H_2 H_3 ∝ H_2(p⃗) H(k⃗) vanishes in either case.
The symmetries of H_2 and H_3 ensure that this holds also if any other
pair of two- and three-particle momenta are equal.
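A direct transcription of these definitions makes the nonoverlap property easy to verify numerically. In the sketch below, m = 1, the total four-momentum is an arbitrary sample point, and we take the value α = 3/2 advocated in the next paragraph; setting the two-particle momentum p⃗ equal to the three-particle spectator momentum k⃗ should give H_2 H_3 = 0 for every k⃗:

```python
import numpy as np

m, alpha = 1.0, 1.5
E, P = 3.6, np.zeros(3)   # illustrative total energy-momentum

def J(x):
    if x <= 0.0: return 0.0
    if x >= 1.0: return 1.0
    return np.exp(-np.exp(-1.0 / (1.0 - x)) / x)

def w(p):
    return np.sqrt(np.dot(p, p) + m**2)

def E2star2(k):
    # (E - w_k)^2 - (P - k)^2: squared CM energy of the pair recoiling against k.
    Pk = P - k
    return (E - w(k))**2 - np.dot(Pk, Pk)

def H(k):   # one factor of the three-particle cutoff
    return J((E2star2(k) - (1 + alpha) * m**2) / ((3 - alpha) * m**2))

def H3(k, a):
    return H(k) * H(a) * H(P - k - a)

def H2(p):
    zp = (E2star2(p) - (1 + alpha) * m**2) / (-alpha * m**2)
    b = P - p
    zb = (E2star2(b) - (1 + alpha) * m**2) / (-alpha * m**2)
    return J(zp) * J(zb)

# Nonoverlap check: with p = k shared between the two intermediate states,
# H2(p) * H3(k, a) vanishes identically.
for kz in np.linspace(0.0, 2.0, 9):
    k = np.array([0.0, 0.0, kz])
    a = np.array([0.3, 0.0, 0.0])
    print(f"kz={kz:4.2f}  H2*H3 = {H2(k) * H3(k, a):.3e}")
```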
Finally, we argue that α=3/2 is a reasonable choice in order to
minimize exponentially suppressed finite-volume effects.
Such effects are generated by the difference between a sum and an integral over
the loop momenta with the integrand given by the cutoff functions multiplied by other smooth functions.
Generically, from the Poisson summation formula, we know that the suppression falls
as exp(-Δ L),
where Δ characterizes the size of the region over which the summand/integrand varies.
Thus we want the cutoff functions to change from 0 to 1 over as large a region as possible.
Here this leads to two conflicting conditions. From H_3, we want (3-α) m^2
[the range of E_2,k^*2 over which the variation in H(k⃗) occurs]
to be as large as possible, while from H_2 we want α m^2 to be maximized.
The choice α=3/2 sets these two distances from threshold equal.
We illustrate this optimization in Fig. <ref>.
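A toy one-dimensional Python illustration of this point (not part of the derivation itself): for a summand varying over a momentum scale Δ, here f(p) = 1/(p^2 + Δ^2), the sum-integral difference falls as exp(-Δ L). The comparison value (2π/Δ)e^(-Δ L) follows from the Fourier transform of f via Poisson summation, and is exact up to e^(-2Δ L) corrections.

import numpy as np

def sum_minus_integral(L, Delta, N=200000):
    # Lattice sum (2*pi/L) * sum_{|n| <= N} f(2*pi*n/L) minus the integral of
    # f over the matched domain |p| <= (2*pi/L)(N + 1/2), f = 1/(p^2 + Delta^2).
    n = np.arange(-N, N + 1)
    p = 2.0 * np.pi * n / L
    lattice_sum = (2.0 * np.pi / L) * np.sum(1.0 / (p**2 + Delta**2))
    p_max = (2.0 * np.pi / L) * (N + 0.5)
    integral = (2.0 / Delta) * np.arctan(p_max / Delta)
    return lattice_sum - integral

for Delta in (0.5, 1.0, 1.5):
    for L in (4.0, 6.0, 8.0):
        diff = sum_minus_integral(L, Delta)
        pred = (2.0 * np.pi / Delta) * np.exp(-Delta * L)
        print(f"Delta={Delta:.1f}  L={L:.1f}  diff={diff:.3e}  prediction={pred:.3e}")

# For the cutoffs above, the analogous widths (in E*^2) are (3-alpha) m^2 for H_3
# and alpha m^2 for H_2; alpha = 3/2 makes them equal.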
We close this appendix by stressing that the forms we have given for H_2 and H_3 are
not unique. We think that these are reasonable, somewhat optimized choices, but
in a practical application it would be worthwhile investigating other options.
§ DETAILED DERIVATION OF EQ. (<REF>)
In this appendix we give the details of the derivation of the result Eq. (<ref>)
for the finite-volume correlator, ℳ_L.
This replaces the naïve analysis of Sec. <ref>.
The outline of the new derivation has been sketched in Sec. <ref>.
We break the derivation into seven steps.
§.§ Diagramatic expansion
The first step is the same as in the naïve approach, namely to write out
a perturbative expansion in Feynman diagrams for .
This has been described in some detail in Sec. <ref>, and here we
add a few further details.
We work with a general effective field theory (EFT) for our scalar field,
with Lagrange density
ℒ(x) = 1/2ϕ(x) (∂^2 + m^2 ) ϕ(x)
+ ∑_n=3^∞λ_n/n!ϕ(x)^n
+ ∑_n=3^∞g_n/(n-1)! [∂^2 ϕ(x)] ϕ(x)^n-1 + ⋯
+ 1/2 (δ Z_ϕ) ϕ(x) ∂^2 ϕ(x)
+ 1/2 (δ Z_m m^2) ϕ(x)^2
+ λ_3/3! (δ Z_λ_3) ϕ(x)^3
+ ⋯ .
The first ellipsis indicates additional interactions containing more derivatives, and the second indicates the counterterms corresponding to all included vertices.
We imagine regulating Feynman diagrams using, for example, dimensional regularization,
and choose the counterterms so that,
in the limit that the UV regulator is removed,
all correlation functions are finite functions
of the mass, m, and the coupling constants, λ_n, g_n,….
We define δ Z_ϕ and δ Z_m so that
m is the physical pole mass of the particle interpolated by
ϕ and the pole has unit residue
(1/i) lim_p^2 → m^2 (p^2 - m^2) ∫ d^4 x e^{-i p x}⟨ 0 |ϕ(x) ϕ(0) | 0 ⟩ = 1 .
We do not need to specify the precise definitions of the remaining counterterms—any
scheme may be used, e.g. the MS scheme.
ℳ_L,ij is formally defined as the sum of all connected finite-volume
Feynman diagrams with j incoming and i outgoing legs, amputated and
put on shell.
As described in the main text, we use a diagram-by-diagram renormalization scheme
in which the appropriate counterterm is combined with each divergent diagram.
This implies, in particular, that the combination of each self-energy Feynman diagram with
its counterterm satisfies the renormalization conditions of Eq. (<ref>).
How this generalizes when using TOPT will be discussed later.
As noted in the main text, we sum self-energy insertions into dressed propagators of three
different types, shown in Fig. <ref>.
Here we describe in more detail where we use each type of dressed propagator.
The underlying rule is simple: All cuts in which two or three particles can go on shell must
be kept explicit. Here a cut must separate the diagram into two parts in the s channel
and pass through at least one propagator that is not external.
If a particular propagator appears only in cuts with three or more particles, it can be
fully dressed, i.e. composed of 1PI self-energies.
This is because any cut through the self-energy loops would contain at least four particles.
Similarly, if the propagator can appear in cuts with two particles,
then it must be composed of 2PI self-energies (and thus be 2PI dressed).
Finally, if the propagator can appear in cuts with a single particle, then it must be
composed of 3PI self-energies (and thus be 3PI dressed).[Note that it is not possible for a given propagator to appear in both two- and one-particle s channel cuts, so that our classification here is unambiguous.]
These three cases are illustrated in Fig. <ref>.
Further examples appear in Figs. <ref> and <ref> below.
An important observation is that all three types of dressed propagators have only
exponentially suppressed volume dependence and thus can be replaced by their
infinite-volume counterparts.
This is because the loops appearing (implicitly) in these propagators lead to four- or higher cuts
of the overall diagram, and thus do not have singularities in the kinematic range of
interest. Thus the summands are smooth and
the sum-integral difference is exponentially suppressed [see Eq. (<ref>)].
A final comment concerns “tadpole loops," i.e. loops through
which no external four-momentum flows. Examples are shown in Fig. <ref>.
Such loops do not lead to on-shell intermediate states precisely because
no external momentum flows through the subdiagrams. They are thus uncuttable according to our rules. This is equivalent to the observation that the summands are nonsingular, so that the momentum sums can be replaced with integrals. In fact, from the point of view of determining finite-volume effects, we can simply absorb
these loops [along with their (implicitly) associated counterterms]
into the adjoining vertices. This reduction is illustrated in the figure.
§.§ Partial reduction of two-particle self-energy bubbles
We now depart from the approach used in Sec. <ref>.
Rather than use TOPT immediately, we first sum up a class of Feynman diagrams.
These are the diagrams that contain at least one 2PI-dressed propagator
on which there is a self-energy insertion that is two-particle reducible.
Examples are shown in Fig. <ref>,
and we refer to them collectively as diagrams of class 2PI+.
The challenge here is that all such diagrams have three-particle cuts that lead
to finite-volume effects.
We stress that diagrams containing 2PI-dressed propagators without additional
self-energy insertions, such as those in Fig. <ref>,
are not included in the 2PI+ class of diagrams. However, diagrams containing at least one two-particle loop with a self-energy insertion, as well as some number of two-particle loops without insertions, are included in 2PI+.
We next use the function H_2(p⃗) (defined in Appendix <ref>).
For each diagram in class 2PI+, we multiply each two-particle loop containing
at least one explicit two-particle self-energy insertion by
1 = H_2(p⃗) + [1-H_2(p⃗)] ,
and consider separately the H_2 and 1-H_2 parts.
Here p⃗ is the momentum of one of the propagators—we
can use either of the two momenta in the loop as H_2 is symmetric.
It is important that only one such factor is inserted in a given loop, irrespective of how
many self-energy insertions are present. To illustrate these rules, we note that all of the diagrams of
Fig. <ref> except the last are multiplied by
H_2(p⃗) + [1- H_2(p⃗)],
while the last diagram is multiplied by
(H_2(p⃗) + [1-H_2(p⃗)])(H_2(q⃗) + [1-H_2(q⃗)]).
We stress that, in the latter case, the momenta p⃗ and q⃗
are independent.
For the remainder of this subsection we consider two-particle loops that have been
multiplied by the H_2 part of Eq. (<ref>).
The presence of H_2 leads to a key simplification:
The sums inside all of the self-energies on the 2PI-dressed propagators
can be replaced with integrals.
This result holds because the function H_2(p⃗) only has
support when the momenta in the three-particle state are far from going on-shell.
To explain this, we consider the first diagram in Fig. <ref>.
The three particles under consideration are those with momenta labeled
a⃗, p⃗-a⃗, and b⃗_p=P⃗-p⃗.
We recall that the function H_3(b⃗_p, a⃗) has support
in a region around the on-shell manifold
(those values of b⃗_p and a⃗ for which all three particles can go on shell)
of characteristic width m.
But, by construction, H_2(p⃗) H_3(b⃗_p, a⃗)= H_2(b⃗_p) H_3(b⃗_p, a⃗)=0,
implying that H_2(p⃗) vanishes everywhere in this near-on-shell region.
Thus H_2(p⃗) forces the momentum in
the self-energy loops to be well away from their on-shell values,
and thus well away from the 𝒞_3 pole associated with a three-particle
intermediate state.[
We stress that this is not a direct constraint on the momentum in the
self-energy loop, i.e. on a⃗ in our example. This momentum is freely summed/integrated.
The point is that, in the presence of H_2(p⃗), the summand does not come close to
the three-particle singularity.]
The difference between momentum sums and integrals for such loops is
therefore exponentially suppressed.
The self-energy insertions on the 2PI-dressed propagators
can also contain loops with more than two particles.
An example is the third diagram in Fig. <ref>.
Since particles in such loops cannot go on shell
(requiring an intermediate state containing four or more particles for the complete diagram),
the momentum sums in these loops
can be also be replaced with integrals.
Thus we find the result claimed above: The entire self-energy can be evaluated in infinite volume.
The resulting integrated self-energies are just particular examples of
the quantities D^R_i(p^2) discussed in the main text.
In particular, since the diagrams are accompanied by counterterms that enforce the conditions of
Eq. (<ref>), we know that they vanish quadratically as one goes on shell,
D^R_i(p^2) p^2 → m^2⟶ c (p^2 - m^2)^2 .
Thus each self-energy cancels the poles from the 2PI-dressed propagators on either side.
If there is a chain of self-energies then the poles are “overcanceled”
leading to factors of (p^2-m^2) in the numerator.
As a result, each 2PI-dressed propagator with self-energy insertions, in a cut that is accompanied by a factor of H_2,
gives only short-distance contributions.
We can implement this diagrammatically by shrinking the propagator
to a new effective vertex, as shown in Fig. <ref>.
This vertex is complicated—possibly involving nonanalytic functions of momenta and
containing H_2(p⃗)—but it satisfies the key property that it is
“uncuttable.” In other words, it is a smooth function of real three-momenta and thus cannot lead to important finite-volume effects, which is also true for vertices in general.
As shown in Fig. <ref>, shrinking propagators often leads to
tadpole loops. These loops can then be absorbed into vertices,
as discussed in the previous subsection.
The conclusion of this analysis is that we can effectively ignore self-energy insertions
on 2PI propagators when the factor H_2 is present. They give rise to additional vertices,
which are special in that they occur only in certain topologies of diagrams and contain
factors of H_2. But since we are at no stage actually calculating the Feynman diagrams,
the presence of new vertices does not lead to any change in the diagrams
to be considered.[
The only exception to the statement that no new diagrams need to be considered
is that, after applying the shrinking procedure,
there are diagrams in which some of the propagators are 2PI dressed, whereas, if one applied
the rules discussed in Appendix <ref>, they would be fully dressed.
An example is shown by the second diagram in Fig. <ref>,
where the bottom propagator in the leftmost loop would be fully dressed according to
the general rules, but is in fact 2PI dressed.
This exception, however, has no impact on determining finite-volume effects,
as both types of propagator have the same pole and residue.]
In summary, the analysis of this subsection allows us to avoid one of the problems with
the naïve result (<ref>), namely the fact that the quantity
A does not contain all time orderings needed to build up the full self-energy,
and so the result behaves as (p^2-m^2) rather than the quadratic dependence of
Eq. (<ref>).
By working at this stage with Feynman diagrams we are, in effect, summing
all the time orderings, rather than the restricted set contained in A.
§.§ Shrinking 3PI-dressed propagators
The second problem mentioned at the end of Sec. <ref> in the main text concerned
contributions to that involve 3PI-dressed propagators.
In this section we describe the problem in more detail and then explain how it can be avoided
by shrinking all 3PI-dressed propagators down to local vertices.
The problem arises once we switch from working with Feynman diagrams to using TOPT
(a change that is discussed more extensively in Appendix <ref> below).
We then discover that certain time orderings of diagrams containing 3PI-dressed propagators
have spurious three-particle intermediate states.
Two examples are shown in Fig. <ref>.
These are contributions to TOPT that have poles of the form (E - ω_a - ω_k - ω_Pka)^-1
and thus, in general, contribute to the kernels A introduced in Sec. <ref>.
These poles are spurious, however, because they cancel in the full Feynman diagrams.
This is clear in the examples shown, because one can factorize the corresponding Feynman diagrams into a product of loops and propagators; the singularities then arise only from these individual factors, and not from overlapping cuts such as those shown.
In principle one could continue with the TOPT analysis, keeping track of these spurious
contributions until they cancel in the end. This is difficult, however, as they contain
disconnected contributions involving Kronecker deltas.
A better solution is to avoid these contributions from the beginning. This is possible
due to the fact that there are no on-shell intermediate states that involve the
3PI-dressed propagators in our kinematic range.
This is apparent from the initial Feynman diagram in which each 3PI-dressed propagator
appears factorized from the remainder of the diagram, and has singularities only
at E^*=m and E^*≥ 4m.
Thus the 3PI-dressed propagators are uncuttable.
They are also functions only of the fixed external four-momentum, (E,P⃗),
and are thus themselves fixed.
It follows that, from the point of view of determining finite-volume dependence, we
can shrink them into the adjoining vertices.
With this done, none of the spurious cuts remain.
In the following we assume that such a procedure has been employed.
§.§ Classification of remaining loops
At this stage it is useful to take stock of the types of Feynman diagrams that remain
after propagators and tadpole diagrams are shrunk as described above.
The remaining diagrams contain only fully dressed and 2PI-dressed propagators,
and are built from overlapping loops that fall into the four classes:
* Loops containing a pair of 2PI-dressed propagators, on which there
are no self-energy insertions.
Examples are shown in Fig. <ref>.
These loops are, at this stage, not multiplied by factors containing H_2.
* Loops containing a pair of 2PI-dressed propagators in which at least one of these propagators
has a self-energy insertion.
Such loops are contained in diagrams of class 2PI+ [see Fig. <ref>].
All such loops are multiplied by [1-H_2(p⃗)].
The presence of this factor implies that these loops
cannot give rise to two on-shell particles,
but can give rise to three particles that simultaneously go on shell.
* Loops that include sets of three particles that carry the total energy and momentum
(E, P⃗) (and can thus simultaneously go on shell)
but are not included in the previous class.
Examples are shown in Fig. <ref>(a).
* Loops that give rise to no on-shell intermediate states,
either because four or more particles carry the total energy and momentum or
because the loops are in a t-channel-like structure and
thus do not carry the total energy-momentum that flows through the diagram.
Examples are shown in Fig. <ref>(b).
The overall result is that we have removed all appearances of self-energy diagrams except where
they are needed because a physical on-shell cut can run through them, i.e. in loops of class (2).
Finally, we observe that, because loops overlap, there is not a
one-to-one correspondence between loops and cuts.
This is illustrated in Fig. <ref>. As a result, we cannot study individual loops, or even finite sets of loops, and determine the important finite-volume effects.
Indeed, in general, the singularity structure of a given diagram is quite complicated.
Since finite-volume dependence arises from two- and three-particle cuts,
what we need is a tool for breaking diagrams into multiple terms that
individually contain a specific sequence of cuts.
This can be done straightforwardly using TOPT, to which we now turn.
§.§ Applying time-ordered perturbation theory
At this stage we break up the Feynman diagrams into their component
time orderings. This can be achieved by evaluating all
energy integrals, and then partial fractioning the resulting products of poles.
A more direct method
is to evaluate the Feynman diagrams using a mixed time-momentum representation
for the propagators, and then do the time integrals.[
For a lucid explanation of this method, see Ref. <cit.>.]
The result—the TOPT expression—is a sum of terms each of which depend only on spatial momenta.
Since we work in finite volume, these momenta are summed over the finite-volume discrete set.
Our application of TOPT is slightly complicated by our use of dressed propagators.
We first describe the approach ignoring this complication, i.e. using bare propagators,
and then return to the complications introduced by dressing. Consider a Feynman diagram with some number of on-shell, amputated external legs
and with total energy-momentum (E, P⃗) flowing
from the initial to the final state.
One then enumerates all ordered sequences of vertices in the
diagram between the initial and final states.[
The requirement that all vertices must lie between the initial and final states
is a consequence of having on-shell, amputated external propagators.
One can think of this as occurring because the initial particles are
created at t=-∞ and the final particles destroyed at t=∞.]
Each individual ordering represents a mathematical expression determined as follows.
(1) Route a vertical line (i.e. a “cut,” c)
between each pair of consecutive vertices in the ordering.
(2) Define the factor ∑_i ∈{c}ω_i, given
by summing all of the on-shell energies of the propagators intersecting the cut.
(3) Calculate the product
𝒫_o = ∏_c ∈{o} 1/(E - ∑_i ∈{c}ω_i) ,
where o denotes a particular ordering, {o} denotes the set of cuts
within the ordering, and c denotes a particular cut.
(4) Multiply 𝒫_o by a factor of 1/(2ω_j) for
each internal propagator, and by the expressions arising from each vertex,
as well as possible 1-H_2 and symmetry factors.
This leads to the expression for the n-cut factor 𝒞_n
given in Eq. (<ref>).
Summing over all orderings then gives the value of the Feynman diagram.
Examples of time orderings are shown in Fig. <ref>.
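The following schematic Python sketch illustrates rules (1)-(4) for a single time ordering, specified by hand as a list of cuts (each cut being the list of three-momenta crossing it) together with the internal lines; vertex and symmetry factors are lumped into one user-supplied constant, and the example kinematics are hypothetical.

import numpy as np

m = 1.0

def omega(p):
    return np.sqrt(m**2 + np.dot(p, p))

def topt_ordering(E, cuts, internal_lines, vertex_factor=1.0):
    # Steps (1)-(3): one energy-denominator factor 1/(E - sum of on-shell
    # energies) per cut; step (4): a factor 1/(2 omega) per internal propagator,
    # times the lumped vertex/symmetry factors.
    value = vertex_factor
    for cut in cuts:
        value /= E - sum(omega(p) for p in cut)
    for p in internal_lines:
        value /= 2.0 * omega(p)
    return value

# Example: one ordering with a single two-particle cut {k, -k} at total P = 0.
E = 2.5 * m
k = np.array([0.3, 0.0, 0.0])
print(topt_ordering(E, cuts=[[k, -k]], internal_lines=[k, -k]))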
As noted in the main text,
when we apply TOPT in the kinematic range given in Eq. (<ref>),
the only singularities that can appear are the poles due
to two- and three-particle intermediate states, given in Eq. (<ref>).
Finite-volume effects arise only from momentum sums
that run over one or both of these poles.
All other sums can be converted to integrals.
The above discussion assumes a propagator of the form i/(p^2-m^2+iϵ), and thus does not directly hold for the dressed propagators.
Given the renormalization conditions of Eq. (<ref>),
however, both types of dressed propagator do have exactly this pole
structure, including the residue, for p^0 →ω_p.
The effect of dressing appears only in constant terms and in terms of 𝒪(p^2-m^2), but such terms can be absorbed into the vertices as long as they remain smooth within
our kinematic range. Since the vertices are general, this leads to no additional complications.
Then it is legitimate to use TOPT ignoring the fact that
the propagators are dressed. This means that the distinction between fully and 2PI-dressed
propagators is no longer relevant.
The remaining issue is thus whether there are additional singularities in the dressed
propagators within our kinematic range (E^* < 4m).
The fully dressed propagator has a two-particle cut,
while the 2PI-dressed propagator has a three-particle cut. However, by construction, these
both correspond to cuts with four or more particles in the full diagram. Thus these singularities
do not appear within our kinematic range.
A final technical complication concerns counterterms in TOPT. When we break up
a UV divergent loop into its various time orderings we also need to break up the
counterterms accordingly. An example is given by the self-energy loop in the center
of the diagrams of Fig. <ref>: Its two vertices have different time orderings
in the two diagrams, and these are separately UV divergent. In fact, in general, since we
have broken Lorentz symmetry in TOPT, the individual counterterms needed for the
different time orderings will not be Lorentz invariant. Lorentz invariance is regained
only at the end when all time orderings are recombined.
In practice, one can always define the counterterms
operationally for each time ordering by using dimensional regularization and
removing the pole with a prescription such as MS
(up to finite corrections needed to satisfy renormalization conditions discussed previously).
In summary, at this stage we have reduced every Feynman diagram to a
sum of terms, each given by products of smooth functions and two-
and three-particle poles. Thus ℳ_L can be written in the form given in Eq. (<ref>) of
the main text, except that the kernels between two- and three-cuts are now different.[
Strictly speaking, we need to show that the kernels that appear are independent of their
position in the chain of terms in Eq. (<ref>). We return to this issue below.]
These differences are due to the presence of
factors of [1-H_2] in diagrams with self-energy insertions,
to the absence of 3PI-dressed propagators,
and to the alterations in vertices arising from the shrinking procedure and from the tadpole loops and other smooth terms that have been absorbed.
In what follows we denote the coordinates
that appear in the two- and three-particle poles as “explicit”
whereas all coordinates that are integrated at this stage are buried
inside various smooth functions and are thus referred to as
“implicit”. Note that all H_2(p⃗ ) functions at this point
are implicit with the exception of the [1-H_2(p⃗)] factors
accompanying the two- and three-particle poles in class (2) loops.
§.§ Introduction of regulator functions on cuts
The next step is, as in Sec. <ref>, to multiply each two- and three-cut
by unity written, respectively, as Eq. (<ref>) and
1 = H_3(k⃗, a⃗) + [1-H_3(k⃗, a⃗)] .
The momenta here are the explicit summed coordinates appearing in the cut factors.
The only difference compared to the main text is that here
we do not make this substitution in the two-cuts in class 2 loops,
since these loops already come with a factor of [1 - H_2(p⃗)].
Having made these substitutions we then consider the parts containing H_i
and 1-H_i separately, so that the cuts that arise are _2^H, _2^∞,
_3^H, _3^∞ and higher-order cuts.
[See Eq. (<ref>) for the definitions of these cuts.]
At this stage singularities arise only from factors of _2^H or _3^H.
All other possibilities do not have poles within our kinematic regime.
This implies that any loop momentum that does not appear
in either a _2^H or _3^H can be integrated rather than summed.
We can now make use of the important result that,
whenever a two-cut and a three-cut share a common propagator,
then H_2 H_3 = 0 (as described in Appendix <ref>).
In Sec. <ref>, we used this result to drop disconnected parts from
A_23 and A_32.
Here we apply it at a slightly earlier stage.
The aim is to come up with a version of Eq. (<ref>) that does not
suffer from the problems described in the main text.
To see how this works we consider three examples, given in
Figs. <ref> and <ref>.
These show how a particular time ordering is reduced
to a product of smooth kernels and regulated cut factors, 𝒞_2^H and 𝒞_3^H.
Figure <ref>(a) shows a diagram containing a class 2 loop.
We recall that, although two-cuts appear in the TOPT expression, the factor of 1-H_2 cancels the
poles.[
A single factor of 1-H_2 can cancel any number of poles
since it has an essential zero at the pole.]
Now we insert the identity (<ref>) on the three-cut,
leading to the two diagrams on the right-hand side of the equality.
For that containing H_3, we use H_2(p⃗) H_3(p⃗,a⃗)=0
to drop the factor of H_2, as shown.[
The fact that the H_2 can be dropped means that we do not have to worry
about distributing the 1-H_2 factor between the kernels _23 and _32
on either side of the three-cut. This is important since we want to
treat all such kernels in a consistent manner.]
In other words, the presence of the H_3 in _3^H is sufficient to
ensure that there are no on-shell two-cuts. Thus we can decompose this diagram in the
form shown on the second line, with two smooth kernels and a single pole factor.
The diagram containing 1-H_3 is simpler to analyze. Since both two- and three-particle poles are
canceled, the two loop sums have smooth summands, and can be converted into integrals.
Thus this contribution has no pole, and gives only a smooth kernel.
It is important to note that the 1-H_2 factor, which remains for this time ordering,
is not associated with the left-hand cut, but rather with the entire outer loop.
We now turn to Fig. <ref>(b), which is a different time ordering
of the diagram in Fig. <ref>. In this case there are no cuts that require
the use of the identities in Eqs. (<ref>) and (<ref>). All cuts are
nonsingular in our kinematic region (the two-cuts due to the factor of 1-H_2, and the
5-cut due to the kinematic constraints), and so both loop sums can be replaced by
integrals, leading to a contribution to the kernel _22.
Finally, we consider Fig. <ref>, which is one time ordering of the diagram with
overlapping class 1 and class 3 loops shown in Fig. <ref>.
It thus comes with no explicit factors of H_i, and we must insert the
identities of Eqs. (<ref>) and (<ref>) on all three cuts.
This leads to 2^3 terms, but only the three shown survive.
To see this note that, because the rightmost two-particle state is on shell, it follows that the three particles present in the adjacent three-cut cannot all simultaneously go on shell, as they share an unscattered particle.
This already tells us that only 2^2 terms will be nonzero.
In other words, the right-hand cut cannot have a factor of H_3,
so only the 1-H_3 factor survives for this cut, and furthermore we can set 1-H_3→ 1.
A further reduction occurs if we choose H_2 for the left-hand cut,
for then the middle cut cannot have a factor of H_3.
If the left-hand cut has a factor of 1-H_2, however, then the
middle cut can contain either H_3 or 1-H_3, as shown.
In the former case, the H_2 in the left-hand cut can be dropped.
The net result is that there are only three diagrams.
These give the kernel and cut-factors shown in the figure,
where all momentum sums within the kernels can be replaced by integrals.
We can make several important general observations from these examples.
First, the off-diagonal kernels B_23 and B_32
produced by this reduction do not have disconnected contributions.
This is simply because such contributions necessarily come with a factor
of 𝒞_2^H 𝒞_3^H ∝ H_2 H_3, which vanishes when one propagator is unscattered.
Thus, unlike in the naïve approach of Sec. <ref>, where
A_23 and A_32 had disconnected contributions that could be dropped,
here the corresponding kernels simply do not have such contributions.
The second observation is that
there are no disconnected contributions to B_22.
Such contributions arise in the naïve method of Sec. <ref>
from diagrams involving self-energy insertions
such as Fig. <ref>.
For example, in Fig. <ref>(b), the loop lying between the two-cuts gives
a disconnected contribution to A_22. Here, however, all such contributions
are avoided because of the presence of the factor of 1-H_2
(and the renormalization scheme chosen), which cancels the poles in the two-cuts.
The third observation is that the kernel B_33, unlike the other components of B,
can have disconnected parts. An example where this arises is shown in
Fig. <ref>.
A disconnected contribution occurs in the first diagram on the right-hand side of the
equality, arising from a 2↔ 2 scattering.
The explicit form of the disconnected part is shown in Eq. (<ref>) below.
Note that completely disconnected parts cannot occur because there must be a vertex
between the two cuts, and self-energy insertions are not allowed on fully dressed propagators.
The final observation is more technical, but nevertheless important for the following
development.
This is that all factors of 1 - H_2 remaining after reduction
lie within loops that are integrated.[
The same is not true of factors of 1- H_3, which can appear in
tree-level contributions to B_33.]
The observation can be demonstrated simply by noting that the
loop momentum running through the 1-H_2 cannot be shared with either a
𝒞_2^H or a 𝒞_3^H cut.
The former possibility is ruled out by
the construction of Appendix <ref>, in which
only a single regulator function was applied to each two-particle loop.
The latter is ruled out because, if a momentum is shared, then one can use
the H_2 H_3=0 identity to replace 1- H_2 with 1.
The importance of this observation can be seen most easily from
the middle diagrams on the right-hand side of Fig. <ref>.
Here the 1-H_2 is not in an integrated loop, so there would be an ambiguity as to
which two-cut it is attached. In fact, since 1-H_2 can be replaced by 1, this
problem is absent. In the right-hand diagram, where the 1-H_2 remains, it
can be unambiguously attached to the integrated loop as a whole.
This means that there is a well-defined set of rules for assigning factors of 1-H_i to
the diagrams contributing to the kernels.
§.§ Final summation
After following the steps described above we have decomposed ℳ_L into the
following sum of terms
ℳ_L = ∑_n=1^∞ℳ_L^(n),
ℳ_L^(n) = ∑_i∈ diagrams B^(n,i;1) 𝒞^H B^(n,i;2) 𝒞^H ⋯ 𝒞^H B^(n,i;n-1) 𝒞^H B^(n,i;n) .
Here we have reverted to the 2×2 matrix notation.
The sum over i runs over all contributions (coming from the different time orderings
of all Feynman diagrams with all possible appearances of regulator factors after the
reduction described above) containing n-1 factors of 𝒞^H.
From the previous section we know that the kernels
B^(n,i;j)_22, B^(n,i;j)_23 and B^(n,i;j)_32
are connected, smooth, infinite-volume (L-independent) functions. The B^(n,i;j)_33, however, consist of a connected, smooth, infinite-volume part plus
a term involving a Kronecker delta and a factor of L^3 multiplying a two-to-two smooth,
infinite-volume kernel [as in Eq. (<ref>)].
The construction of the B^(n,i;j) follows the rather involved steps described in the previous
sections of this Appendix. What we show in this final section is that the sum over i in
Eq. (<ref>) leads to the simple form[
As discussed in the main text, this is a slight oversimplification, in that the
matrix indices at the end of the chain are slightly different from those in the middle.
As reiterated below, however, all the kernels can be obtained from a single
master function, analogous to that in Eq. (<ref>).]
ℳ_L^(n) + I^(n) =
B 𝒞^H B 𝒞^H ⋯ 𝒞^H B 𝒞^H B , with n kernels B in total.
Here I^(n) contains only disconnected contributions.
The key claim in this result is that the same kernels appear
in all positions and for all values of n.
Summing over n then leads to the claimed result, Eq. (<ref>),
with the full subtraction given by I = ∑_n=1^∞ I^(n).
Before demonstrating Eq. (<ref>) we recall the need for the subtraction term I^(n).
This arises from a mismatch between the kernels appearing in ℳ_L^(1)
and those in the higher-order terms: The former must be connected (since ℳ_L is),
while those appearing in higher-order terms must contain disconnected parts in the
33 component, in order to accommodate diagrams such as that in Fig. <ref>.
Indeed, if there were no subtraction in Eq. (<ref>), then ℳ_L^(1) would equal B,
and thus contain a disconnected part, which is inconsistent with its definition.
In order to have a uniform definition of the kernel B, appearing in ℳ_L^(n) for all n,
a subtraction is required.
To proceed we next give a precise definition of the kernel B. This is done by following
exactly the same steps as described in the preceding subsections, but instead of
starting with the fully connected , we allow also diagrams with
2→ 2 scattering and a single disconnected propagator in 33.
Fully disconnected diagrams are not included,
nor are those involving a 1↔ 2 subprocess in the 32 or 23 components.
We call this extended quantity ℳ_L^ext.
It can be expanded in powers of the number of pole factors 𝒞^H,
just as in Eq. (<ref>).
By construction, we then have that
ℳ_L^ext,(n) = ℳ_L^(n) + I^(n) ,
where I^(n) is simply the disconnected part
of the left-hand side (which can be unambiguously identified).
B is simply defined as the part of ℳ_L^ext without factors of 𝒞^H:
B ≡ ℳ_L^ext,(1) .
Using the new extended quantity, we can reformulate the result
Eq. (<ref>) in the simpler form
ℳ_L^ext,(n) = B (𝒞^H B)^n-1 .
We now recall that, when we say that all factors of B are equal in (<ref>),
we mean aside from the different momenta at which they are sampled.
In particular, we define a master kernel
B(p⃗ ', k⃗', a⃗'; p⃗, k⃗, a⃗) =
[ B_22(p⃗ '; p⃗) B_23(p⃗ '; k⃗, a⃗); B_32(k⃗', a⃗'; p⃗) B_33(k⃗', a⃗'; k⃗, a⃗) ] ,
by extending the on-shell definition of B to
general momenta p⃗ ', k⃗', a⃗'; p⃗, k⃗, a⃗.
Then the kernel in Eq. (<ref>) is given by restricting the
momenta in the master kernel appropriately: External momenta are set on shell, while
internal coordinates (those contracted with 𝒞^H) are restricted to the finite-volume set.
This is identical to the description given for the naïve kernel A in the main text following
Eq. (<ref>).
By definition, Eq. (<ref>) holds true for n=1, so we begin by considering the
n=2 case. We know that, using the procedure of previous subsections, we can
bring the contributions to ℳ_L^ext with a single 𝒞^H into the form
ℳ_L^ext,(2) = B' 𝒞^H B' ,
with B' a matrix of kernels having the same properties as B (smooth
and connected except for B'_33).
These kernels are constructed from all possible time orderings of the allowed
Feynman diagrams lying between the external states and the cut 𝒞^H,
with appropriate factors of 1-H_i inserted, and all loops integrated.
Since the same set of orderings can occur on both sides of the 𝒞^H, the
two kernels are equal.[
This relies on the fact that the cut factors 𝒞^H act just like amputation on the
external legs: Removing the factors associated with the cut propagators from the kernels,
and only allowing time orderings in which the vertices lie between the external states.]
What we need to show is that B'=B, i.e. that all contributions to B' are contained in B
and vice versa. The former property is clear—any diagram connecting
an external state to a cut 𝒞^H can also serve to connect two external states
(or, as needed below, two cut factors). The latter property follows because every
contribution contained in B will occur in ℳ_L^ext,(2),
simply by gluing two such halves together and inserting the cut factor.
This argument extends straightforwardly to arbitrary n, and
completes the demonstration of Eq. (<ref>).
§ FINITE-VOLUME DEPENDENCE FROM THE TOPT RESULTS
In this appendix we sketch the derivations of various results quoted in the main text.
We first discuss quantities involving only two-cuts, and then we consider those containing three-cuts.
§.§ Derivation of the result for X_22
The analysis of Refs. <cit.> uses a skeleton expansion applied
to standard relativistic Feynman diagrams.
This is in contrast to the analysis in the main text, which uses TOPT,
leading to the expression Eq. (<ref>) for .
While the two approaches lead to the same poles, as they must, they
differ in the way that various nonpole parts are allocated to nonsingular kernels.
For example, the quantity B_22 in Eq. (<ref>)
differs from the Bethe-Salpeter kernel B_2 that appears
in the analogous expression from the Feynman diagram analysis
(as discussed further in Appendix <ref> below).
Because of this, there is no simple way to recast the TOPT expression
for X_22 back into a Feynman-diagram form.
Thus we cannot directly apply the results obtained in Refs. <cit.>.
Instead, we apply the methodology developed in those references directly to the TOPT expression.
Starting from Eq. (<ref>), we focus on one of the two-cuts, and make
the matrix multiplications explicit, leading to
[B_22 𝒞_2^H B_22]_p”;p' =
∑_p⃗,r⃗ B_22;p”;p 𝒞_2;p;r H_2(r⃗) B_22;r;p' ,
=
-1/L^3∑_p⃗
B_22;p”;p (1/2) 1/[2ω_p 2ω_Pp (E-ω_p-ω_Pp)]
H_2(p⃗) B_22;p;p' .
The factor of -1 coming with 𝒞_2^H arises from the product of the
i associated with the energy denominator and that associated with one of the adjacent vertices.
We now recall that the key property of B_22 for our purposes is that it is a smooth
function of its momentum arguments. Thus the only singularity in the summand is that
from the explicit pole in 𝒞_2^H.
We now write the sum over p⃗ as an integral plus a sum-integral difference to reach
[B_22 𝒞_2^H B_22]_p”;p' =
- PV∫_p⃗
B_22;p”;p H_2(p⃗)/[8ω_p ω_Pp (E-ω_p-ω_Pp)] B_22;p;p'
- [1/L^3∑_p⃗ - PV∫_p⃗ ]
B_22;p”;p (1/2) h(p⃗)/[2ω_p 2ω_Pp (E-ω_p-ω_Pp)] B_22;p;p' .
Here we have also replaced H_2(p⃗) with h(p⃗) in the sum-integral difference, with h(p⃗) the UV regulator introduced in Eq. (<ref>) above. This substitution is justified because H_2(p⃗) - h(p⃗) vanishes at the pole so that the replacement is equivalent to dropping the sum-integral difference of a function that is smooth for all real p⃗, i.e. dropping a contribution that is exponentially suppressed. Here and below we keep implicit the fact that we are dropping exponentially suppressed terms.
From here we follow the steps outlined in Ref. <cit.> to rewrite the sum-integral difference in terms of the zeta function F_2, defined in Eq. (<ref>).
Given that B_22 is a smooth function, the dominant finite-volume corrections from the second term above are due to the explicit propagator pole. As a result, one can replace B_22 with its value when the internal momentum p is projected on shell. This is effected by setting the CM frame magnitude to equal q^*. This fixes the magnitude but not the direction and this remaining degree of freedom motivates us to decompose B_22 in spherical harmonics
B_22;p”;p|_p^* = q^* = √(4π) Y_ℓ' m'(p̂^*) B_22;p”;ℓ' m',
B_22;p;p'|_p^* = q^*
= √(4π) Y^*_ℓ,m(p̂^*)B_22;ℓ m;p' .
Using the sum-integral-difference identity of Ref. <cit.>, as expressed in Appendix A of
Ref. <cit.>, we find
[B_22 𝒞_2^H B_22]_p”;p' = - PV∫_q
B_22;p”;q H_2(q⃗)/[8ω_q ω_Pq (E-ω_q-ω_Pq)] B_22;q;p'
-
B_22;p”;ℓ' m' F_2;ℓ'm';ℓ m B_22;ℓ m;p' .
We summarize this result in shorthand notation as
B_22 𝒞_2^H B_22 = - B_22 I_C B_22 - B_22 F_2 B_22 ,
with I_C an integral operator.
We note that this identity holds for any choice of kernels on the left- and right-hand sides,
as long as they are smooth functions of momenta.
We can thus condense the notation even further and write
𝒞_2^H = - I_C - F_2 .
Using this identity, we can reorganize the sum in
Eq. (<ref>) into a series in powers of F_2 (following the method of Ref. <cit.>)
X_22 = B_22∑_n=0^∞[(-I_C - F_2)B_22]^n ,
= 𝒦_22,D∑_n=0^∞[ -F_2 𝒦_22,D]^n
,
where
𝒦_22,D = ∑_n=0^∞ B_22 [-I_C B_22]^n .
Summing the geometric series in Eq. (<ref>) leads to the result quoted in the main text,
Eq. (<ref>).
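The resummations used above can be checked in a finite-dimensional toy model, with random matrices standing in for the smooth kernel B_22, the integral operator I_C, and the zeta-function matrix F_2 (these stand-ins are illustrative, not the actual operators):

import numpy as np

rng = np.random.default_rng(1)
dim = 6
B = 0.1 * rng.normal(size=(dim, dim))    # stand-in for B_22
I_C = 0.1 * rng.normal(size=(dim, dim))  # stand-in for the integral operator I_C
F = 0.1 * rng.normal(size=(dim, dim))    # stand-in for F_2

def series(K, A, nterms=300):
    # Truncated geometric series sum_{n>=0} K (A K)^n.
    total, term = np.zeros_like(K), K.copy()
    for _ in range(nterms):
        total = total + term
        term = term @ A @ K
    return total

K_D = series(B, -I_C)                    # K_{22,D} = sum_n B_22 (-I_C B_22)^n
assert np.allclose(K_D, np.linalg.solve(np.eye(dim) + B @ I_C, B))

X22 = series(K_D, -F)                    # X_22 = K_{22,D} sum_n (-F_2 K_{22,D})^n
closed = K_D @ np.linalg.inv(np.eye(dim) + F @ K_D)
assert np.allclose(X22, closed)
print("geometric resummations agree with the closed forms")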
§.§ Derivation of the results for Y_22 and Z_23
The determination of the volume dependence of Y_22, defined in Eq. (<ref>),
follows similar steps to those described in Appendix <ref> for X_22.
We can use the identity (<ref>) for all two-cuts, since the kernels on either side
of the cut involve the smooth functions B_22, B_23 or B_32.
Collecting terms according to the number of factors of F_2, we find
Y_22 = B_32[ 𝒞_2^H + 𝒞_2^H B_22 𝒞_2^H + ⋯] B_23 ,
= B_32[ - I_C + I_C B_22 I_C - ⋯] B_23
- B_32[1 - I_C B_22 + ⋯] F_2 [1 - B_22 I_C + ⋯] B_23
+
B_32[1 - I_C B_22 + ⋯] F_2
[B_22 - B_22 I_C B_22 + ⋯] F_2
[1 - B_22 I_C + ⋯] B_23
+ ⋯ ,
=
B_32 𝒟_C,2 B_23 - B_32 𝒟_A',2 F_2 𝒟_A,2 B_23
+ B_32 𝒟_A',2 F_2 𝒦_22,D F_2 𝒟_A,2 B_23 - ⋯ ,
where in the last step we have used Eq. (<ref>) and defined the integral operators
𝒟_C,2 = [ - I_C + I_C B_22 I_C - ⋯] ,
𝒟_A',2 = [1 - I_C B_22 + ⋯] ,
𝒟_A,2 = [1 - B_22 I_C + ⋯] .
Summing the geometric series in Eq. (<ref>) leads
to the result quoted in the main text, Eq. (<ref>).
This derivation applies also for Z_23, the only change being the replacement
of B_32 on the left with B_22. Thus from Eq. (<ref>) we obtain
Z_23 = B_22[ 𝒟_C,2
- 𝒟_A',2 F_2 (1/[1+ 𝒦_22,D F_2]) 𝒟_A,2 ]B_23 .
This can be simplified using the identities
B_22 𝒟_A',2 = 𝒦_22,D ,
B_22 𝒟_C,2 = 𝒟_A,2 - 1 ,
leading to
Z_23 = [ 𝒟_A,2 - 1
- 𝒦_22,D F_2 (1/[1+ 𝒦_22,D F_2]) 𝒟_A,2 ]B_23 .
The result for Z_23 in the main text, Eq. (<ref>), follows immediately.
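The identities B_22 𝒟_A',2 = 𝒦_22,D and B_22 𝒟_C,2 = 𝒟_A,2 - 1, and with them the equivalence of the two forms of Z_23 above, can be verified in the same random-matrix toy model (again with illustrative stand-ins, not the actual operators):

import numpy as np

rng = np.random.default_rng(1)
dim = 6
B = 0.1 * rng.normal(size=(dim, dim))    # stand-in for B_22
I_C = 0.1 * rng.normal(size=(dim, dim))  # stand-in for I_C
F = 0.1 * rng.normal(size=(dim, dim))    # stand-in for F_2
one = np.eye(dim)

def power_series(A, nterms=300):
    # Truncated sum_{n>=0} A^n = (1 - A)^{-1} for ||A|| < 1.
    total, term = np.zeros_like(A), np.eye(A.shape[0])
    for _ in range(nterms):
        total = total + term
        term = term @ A
    return total

D_A = power_series(-B @ I_C)             # D_{A,2} = 1 - B_22 I_C + ...
D_Ap = power_series(-I_C @ B)            # D_{A',2} = 1 - I_C B_22 + ...
D_C = -I_C @ D_A                         # D_{C,2} = -I_C + I_C B_22 I_C - ...

K_D = B @ D_Ap                           # identity: B_22 D_{A',2} = K_{22,D}
assert np.allclose(K_D, np.linalg.solve(one + B @ I_C, B))
assert np.allclose(B @ D_C, D_A - one)   # identity: B_22 D_{C,2} = D_{A,2} - 1

# Equivalence of the two forms of Z_23 (with the trailing B_23 factor dropped).
form1 = B @ (D_C - D_Ap @ F @ np.linalg.inv(one + K_D @ F) @ D_A)
form2 = (D_A - one) - K_D @ F @ np.linalg.inv(one + K_D @ F) @ D_A
assert np.allclose(form1, form2)
print("Z_23 operator identities verified")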
§.§ Comments on the derivation of the result for X_33
As explained in the main text, to determine X_33 we must repeat the
analysis of Refs. <cit.> starting from the TOPT decomposition of Eq. (<ref>)
instead of the skeleton expansion of Feynman diagrams.
To do so, we use the decomposition of B_33 into connected and disconnected parts,
Eq. (<ref>).
B_33^ conn is the analog in the present analysis of the three-particle
Bethe-Salpeter amplitude B_3 in the analysis of Refs. <cit.>.
The disconnected part can be written
B_33;k'a';ka^ disc = 2 ω_k L^3 δ_k'k B̃_2(k⃗)_a';a
+ permutations ,
where B̃_2 plays the role here
of the two-to-two Bethe-Salpeter kernel B_2 appearing in Ref. <cit.>,
with some important distinctions that we discuss below.
“Permutations” refers to the inclusion of all possible choices
of incoming and outgoing spectator momenta.
There are nine terms in total,
corresponding to the three different choices of the momentum of the
spectator particle in both initial and final states
(e.g. k⃗, a⃗ or P⃗-k⃗ -a⃗ in the initial state).
Thus we can rewrite the result
using the symmetrization operators introduced in the main text:
B_33;k'a';ka^ disc = 𝒮{ 2 ω_k L^3 δ_k'k B̃_2(k⃗)_a';a } .
The factor of 2ω_k is needed to cancel the 1/(2ω_k) contained
in the adjacent three-cut, 𝒞_3^H, since each disconnected propagator should come with only
one overall factor of 1/(2ω_k), and this factor is provided by the first 𝒞_3^H. Similarly, the factor of L^3 is introduced to ensure that diagrams with insertions of B_33^ disc have the correct powers of L.
It is important to understand in some detail the differences between
the Bethe-Salpeter kernel, B_2, and the quantity appearing here, B̃_2.
B_2 consists of all amputated two-to-two Feynman diagrams that are two-particle
irreducible in the s channel.
B̃_2 contains all the time orderings arising from these Feynman diagrams,
except those in which any vertex lies before the initial three-cut or after the final three-cut.
In addition, because of the definition of B described in Sec. <ref>,
B̃_2 includes time orderings (constrained as above)
from two-to-two diagrams that are two-particle reducible in the s channel.
These, however, are weighted by a factor of 1-H_3, so that there is no physical cut.
(The weight involves H_3 and not H_2 because this is part of a three-particle kernel.)
These features are illustrated in Fig. <ref>. Because of the appearance of
1-H_3 in some intermediate states, B̃_2 is an unconventional quantity.
We now proceed through the steps of the derivation in Refs. <cit.>.
We recall that Ref. <cit.> studied the quantity of interest, , but
made heavy use of the work in Ref. <cit.>, so we need to repeat
the steps from both references. We stress that the steps we need to take using
the TOPT decomposition are in one-to-one correspondence
with those using the skeleton expansion. To illustrate this correspondence
we consider the following contributions to X_33:
X_33⊃
B_33^ conn[𝒞_3^H + 𝒞_3^H B_33^ disc 𝒞_3^H
+ 𝒞_3^H B_33^ disc 𝒞_3^H B_33^ disc 𝒞_3^H + ⋯]
B_33^ conn .
If we keep the subset of these contributions in which the spectator meson
remains the same for all factors of B_33
then we obtain the diagrams shown in Fig. <ref>.
These correspond to the
“no switch" diagrams considered in Sec. IVA of Ref. <cit.>,
and shown in Fig. 7 of that work.
The differences between the expressions represented by the diagrams
are as follows:
First, while here the “end caps" are provided by factors
of B_33^ conn, in Ref. <cit.> they are given by the
external operators σ and σ^†.
As noted in Ref. <cit.>, however, as long as they are nonsingular,
the choice of end caps has no impact on the form of the result.
Second, as already described, B_2 in Ref. <cit.> is replaced here by
B̃_2.
Last, the expression for 𝒞_3^H differs from the “cut" that arises in
Ref. <cit.>.
The key point, however, is that the residue of the pole is the same in both cases,
with the differences appearing in nonsingular terms.
This can be seen, for example, from Eq. (56) of Ref. <cit.>, which is
proportional to 𝒞_3^H.
Indeed, the essential difference between the TOPT analysis and that using
Feynman diagrams is that nonsingular terms are reshuffled between the kernels.
In the expression represented by the diagrams of Fig. <ref>,
the three-momentum sums associated with each 𝒞_3^H factor are replaced
by integrals and a zeta function,
using a generalization of the identity given in Eq. (<ref>).
Following the steps of Ref. <cit.>, we find that this class of diagrams
leads to the following volume-dependent terms
⊃ -B_33^ conn (1 + 𝒟_A',3^(1,u)) [F/(2ω L^3)] [1/(1 + 𝒦_22 F)] (1 + 𝒟_A,3^(1,u) ) B_33^ conn
+ (2/3) B_33^ conn [F/(2ω L^3)] B_33^ conn .
Here F is defined in Eq. (<ref>), 𝒦_22 is given by
𝒦_22;k'ℓ'm';kℓ m = δ_k'k [
B̃_2(k⃗) + PV∫ B̃_2(k⃗) 6 ω_k L^6 𝒞_3^H
B̃_2(k⃗) + ⋯]_ℓ'm';ℓ m ,
(where the integral runs over the implicit a⃗ dependence of the two B̃_2
factors and of 𝒞_3^H),
and 𝒟_A',3^(1,u) and 𝒟_A,3^(1,u) are the first contributions to
the decoration operators 𝒟_A',3 and 𝒟_A,3 discussed in the main text.
The result (<ref>) has the same form as Eq. (92) of Ref. <cit.>.
We have checked that all subsequent steps in the lengthy derivations
of Refs. <cit.> go through, and we do not present further details.
The conclusion is that we can read off the final result for X_33 from the
corresponding result given in Eq. (68) of Ref. <cit.>, as long as we change the
meaning of the symbols appropriately. This is what we have done in
Eqs. (<ref>)-(<ref>).
There are, however, two features of the result that deserve further mention.
The first concerns the matrix G^H. This arises from diagrams involving switches,
the simplest of which is shown in Fig. <ref>.
The corresponding diagram is analyzed in Sec. IVB of Ref. <cit.>.
In one of the volume-dependent contributions, the two outer _3 factors
are replaced by F factors, while the central factor gives rise to a switch matrix G^H:
G^H_pℓ' m';kℓ m = (k^*/q_p^*)^ℓ' [4π Y_ℓ'm'(k̂^*) H_3(p⃗,k⃗) Y^*_ℓ m(p̂^*)]/[2ω_Pkp(E-ω_k-ω_p-ω_Pkp)] (p^*/q_k^*)^ℓ 1/(2ω_k L^3) .
This switches the interacting pair from the upper two to the lower two particles.
The key point here is that G^H inherits the cutoff
H_3 = H(p⃗) H(k⃗) H(b⃗_kp) from _3.
By contrast, in Ref. <cit.>, where the switch matrix is first introduced in
Eq. (116), there is some freedom in the choice of the cutoff function, and the
choice made there is H(p⃗)H(k⃗). Thus G^H and G differ by
a factor of H(b⃗_kp). We note, however, that in Ref. <cit.>
one could equally well have included the full H_3 in the definition of G without
changing the derivation.
In other words, the form of G that is forced on us here is a completely viable
option in Ref. <cit.> as well.
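For orientation, the following sketch evaluates the s-wave (ℓ = ℓ' = 0) entry of G^H at illustrative, hypothetical kinematics. For ℓ = ℓ' = 0 the barrier factors (k^*/q^*)^ℓ are unity and 4π Y_00 Y^*_00 = 1, and the smooth cutoff H is built as in the cutoff-function appendix, with α = 3/2 assumed.

import numpy as np

m, alpha, L = 1.0, 1.5, 6.0

def omega(p):
    return np.sqrt(m**2 + np.dot(p, p))

def J(z):
    if z <= 0.0:
        return 0.0
    if z >= 1.0:
        return 1.0
    return np.exp(-(1.0 / z) * np.exp(-1.0 / (1.0 - z)))

def H(E, P, k):
    # Single-spectator cutoff, as in the cutoff-function appendix.
    E2s2 = (E - omega(k))**2 - np.dot(P - k, P - k)
    return J((E2s2 - (1.0 + alpha) * m**2) / ((3.0 - alpha) * m**2))

def GH_swave(E, P, p, k):
    # s-wave entry of G^H: H_3 / [2 omega_b (E - omega_k - omega_p - omega_b)]
    # times 1/(2 omega_k L^3), with b = P - k - p the exchanged momentum.
    b = P - k - p
    H3 = H(E, P, p) * H(E, P, k) * H(E, P, b)
    pole = 2.0 * omega(b) * (E - omega(k) - omega(p) - omega(b))
    return H3 / pole / (2.0 * omega(k) * L**3)

E, P = 3.5 * m, np.zeros(3)
p = (2.0 * np.pi / L) * np.array([1.0, 0.0, 0.0])   # finite-volume momentum n=(1,0,0)
k = -p
print(GH_swave(E, P, p, k))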
The second feature of the result for X_33 concerns 𝒦_22, defined in Eq. (<ref>).
We find that
𝒦_22;k'ℓ' m';k ℓ m = δ_k' k𝒦_2;ℓ' m'; ℓ m(E-ω_k,P⃗-k⃗)
,
i.e. 𝒦_22 in fact contains the physical two-particle K matrix.
To show this requires two further results: The unphysical dependence of B̃_2 on H_3
must cancel, and the missing time orderings in B̃_2 must become irrelevant.
To explain the cancellation of H_3 dependence, we rewrite B̃_2 to
make its dependence on H_3 explicit:
B̃_2 = B̂_2 + PV∫ B̂_2 6ω_k L^6 𝒞_3^∞ B̂_2 + ⋯ .
Here B̂_2
is the result obtained when all diagrams containing 𝒞_3^∞ are dropped,
and thus is independent of H_3.
For example, in Fig. <ref>, the last diagram would be dropped.
Thus B̂_2 differs from the Bethe-Salpeter kernel B_2 only in that
certain time orderings are not included in the former.
The H_3 dependence of B̃_2 is then reintroduced by the terms involving
integrals in Eq. (<ref>), corresponding to adding back in diagrams like the
last one in Fig. <ref>.
Substituting this result into Eq. (<ref>), and rearranging terms, we find that
𝒦_22;k'ℓ'm';kℓ m = δ_k'k [
B̂_2(k⃗) + PV∫ B̂_2(k⃗) 6ω_k L^6 𝒞_3
B̂_2(k⃗) + ⋯]_ℓ'm';ℓ m .
The H_3 dependence has canceled because 𝒞_3^H + 𝒞_3^∞ = 𝒞_3.
Thus 𝒦_22 receives contributions from all amputated two-to-two TOPT diagrams,
except that no time orderings are allowed in which vertices lie before the initial cut
or after the final cut. However, as indicated by the spherical harmonic indices in
Eq. (<ref>), these diagrams are evaluated on shell assuring that
diagrams with the missing time orderings vanish. Thus we find the result (<ref>).
§.§ Derivation of the result for Z_32
The final quantity we consider in this appendix is Z_32=B_33Ξ_33B_32.
As noted in the main text, this is not a quantity for which a result can simply be
read off from Refs. <cit.>, since it has disconnected parts on one end
but not the other. Nevertheless, by a small extension of Eq. (64) in Ref. <cit.>,
the relevant result can be found. This equation gives a result for ℳ_3,L^(u,u),
the unsymmetrized three-particle finite-volume amplitude, with all factors of
B_3 (the fully connected three-particle Bethe-Salpeter amplitude) explicit.
To obtain Z_32 we must (a) drop any contribution in which there is no B_3,
(b) replace the rightmost B_3 with B_32, (c) replace all other
factors of B_3 with B_33^ conn, and (d) symmetrize on the left.
The result is
Z_32 = 𝒮{ ℒ^(u,u)_L,3 𝒯 𝒟_A,3^[B_2,ρ] ∑_n=0^∞ (B_33^ conn M^[B_2,ρ])^n B_32 }
-B_32 ,
where ℒ^(u,u)_L,3 is defined in Eq. (<ref>), while
𝒯 = 1/(1 + 𝒦_ df,33,D^[B_2,ρ] F_3) ,
M^[B_2,ρ] = 𝒟_C,3^[B_2,ρ]
- 𝒟_A',3^[B_2,ρ] F_3 𝒟_A,3^[B_2,ρ] .
The superscript [B_2,ρ], which is defined in Ref. <cit.>, indicates
the parts of the integral operators that do not contain factors of
B_33^ conn. The relation between these parts and the full integral operators
can be read off from Eqs. (247)-(249) of Ref. <cit.>, and is[
We comment that the decoration operator 𝒟_C,3 used here and the analog used in Ref. <cit.>, denoted D_C, differ by a trivial relative phase. In particular, in the limit where the two-to-three coupling is set to zero, the operators are related by 𝒟_C,3 = i D_C.]
𝒟_C,3 = 𝒟_C,3^[B_2,ρ] ∑_n=0^∞ ( B_33^ conn 𝒟_C,3^[B_2,ρ])^n ,
𝒟_A,3 = 𝒟_A,3^[B_2,ρ] ∑_n=0^∞ ( B_33^ conn 𝒟_C,3^[B_2,ρ])^n ,
𝒟_A',3 =
∑_n=0^∞ ( 𝒟_C,3^[B_2,ρ] B_33^ conn)^n
𝒟_A',3^[B_2,ρ] .
These operators appear in the expression for Y_33, Eq. (<ref>).
Our final comment about Eq. (<ref>) concerns the subtraction of B_32 on the
right-hand side. This is required to cancel the leading contribution from the first term on the right-hand side, which comes from the symmetrization of the product of the
1/3 term in ℒ_L,3^(u,u) [Eq. (<ref>)], the 1 in 𝒯, the 1 in 𝒟_A,3^[B_2,ρ], and the n=0 term in the sum.
This B_32 term is absent in Z_32.
The next step is to substitute the result (<ref>) into Eq. (<ref>)
and collect terms according to the number of F_3 factors. This leads to
Z_32+B_32 = 𝒮{ ℒ^(u,u)_L,3 [1 - B_33^ conn,CA F_3 + (B_33^ conn,CA F_3 )^2 + ⋯]
𝒟_A,3 B_32} ,
where
B_33^ conn,CA = 𝒟_A,3^[B_2,ρ] ∑_n=0^∞ ( B_33^ conn 𝒟_C,3^[B_2,ρ])^n B_33^ conn 𝒟_A',3^[B_2,ρ] ,
is the analog here of the quantity B_3^[B_2,ρ] in Ref. <cit.>.
Finally, summing the geometric series in Eq. (<ref>), performing some
algebraic manipulations, and using
𝒦_ df,33,D = 𝒦_ df,33^[B_2,ρ] + B_33^ conn,CA ,
[the analog of Eq. (65) of Ref. <cit.>],
leads to the claimed result, Eq. (<ref>).
§ TIME-REVERSAL AND PARITY INVARIANCE
In this section we investigate the implications for 𝒦_ df of assuming
that time-reversal and parity invariance hold in the underlying theory.
We first discuss the consequences of time-reversal invariance; the consequences of parity invariance can then be inferred by a straightforward modification.
Naïvely, one might expect that, since 𝒦_ df is an infinite-volume
scattering quantity, it should transform under time reversal in the same way as ℳ.
However, upon closer inspection, this result is far from obvious.
For example, the definition of 𝒦_ df,33, the three-to-three component of 𝒦_ df,
involves a choice of ordering of loop integrals that is not manifestly time-reversal
invariant <cit.>.
Nevertheless, as we show in this appendix, given the relations
between 𝒦_ df and ℳ derived in Sec. <ref>,
the transformation properties of ℳ are indeed inherited by 𝒦_ df.
Time-reversal invariance implies that the components of the scattering amplitude satisfy
M_22;P⃗(p̂'^*; p̂^*) = M_22;-P⃗(-p̂^*; -p̂'^*) ,
M_23;P⃗(p̂'^*; k⃗, â^*) = M_32;-P⃗(-k⃗, -â^*; -p̂'^*) ,
M_ df,33;P⃗ (k⃗', â'^*; k⃗, â^*) = M_ df,33;-P⃗ (-k⃗,-â^*;-k⃗', -â'^*) ,
where we have denoted dependence on the total momentum, P⃗, as a subscript.[
Previously the dependence on P⃗ has been implicit. We make it explicit throughout
this appendix.
]
Decomposing ℳ in spherical harmonics, one finds that the various components satisfy
M_22; ℓ m; ℓ'm';P⃗ = (-1)^ℓ+m+ℓ'+m' M_22; ℓ' -m'; ℓ -m;-P⃗ ,
M_23;ℓ m; ℓ' m';P⃗( k⃗) = (-1)^ℓ+m+ℓ'+m' M_32;ℓ' -m'; ℓ -m;-P⃗(-k⃗) ,
M_ df,33;ℓ m; ℓ' m';P⃗(k⃗'; k⃗) = (-1)^ℓ+m+ℓ'+m' M_ df,33;ℓ' -m'; ℓ -m;-P⃗(-k⃗; -k⃗') .
To obtain these results
we have used standard properties of the spherical harmonics under complex conjugation and parity
transformation. Note that, since we are considering the divergence-free form
of M_33, we can decompose in spherical harmonics.
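The two spherical-harmonic identities used here, Y^*_ℓm(n̂) = (-1)^m Y_ℓ,-m(n̂) and Y_ℓm(-n̂) = (-1)^ℓ Y_ℓm(n̂), can be checked numerically; scipy is assumed available, and note scipy's sph_harm(m, ℓ, θ, φ) convention, with θ the azimuthal and φ the polar angle.

import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(2)
for _ in range(50):
    l = int(rng.integers(0, 5))
    m = int(rng.integers(-l, l + 1))
    theta = rng.uniform(0.0, 2.0 * np.pi)   # azimuthal angle
    phi = rng.uniform(0.0, np.pi)           # polar angle
    Y = sph_harm(m, l, theta, phi)
    # Complex conjugation: Y*_{lm} = (-1)^m Y_{l,-m}
    assert np.allclose(np.conj(Y), (-1) ** m * sph_harm(-m, l, theta, phi))
    # Parity n -> -n, i.e. (theta, phi) -> (theta + pi, pi - phi): factor (-1)^l
    assert np.allclose(sph_harm(m, l, theta + np.pi, np.pi - phi), (-1) ** l * Y)
print("harmonic conjugation and parity identities hold")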
From these results we conclude that it is sufficient to determine
M_22, M_23, and M_ df, 33,
since ℳ_32 then follows trivially from Eq. (<ref>).
In the following, we will say that a quantity has “standard time-reversal transformation properties"
if Eqs. (<ref>)-(<ref>) hold with the quantity substituted for
ℳ.
We recall from Sec. <ref> that 𝒦_ df is obtained
from ℳ in two steps. First, an intermediate quantity is obtained from ℳ
using Eqs. (<ref>)-(<ref>),
and, second, 𝒦_ df is obtained from this intermediate quantity using
Eqs. (<ref>)-(<ref>).
In what follows we first show that the intermediate quantity has standard time-reversal
transformation properties and then show that the same holds for 𝒦_ df.
The intermediate quantity is obtained from ℳ by integrating with the kernels I_ℒ and I_ℛ,
which are themselves obtained from Δ_ℒ and Δ_ℛ by solving
the integral equations (<ref>) and (<ref>), respectively.
The latter kernels are essentially the symmetrized forms of ℒ_3^(u,u)
and ℛ_3^(u,u), as shown by Eqs. (<ref>) and (<ref>).
Thus, to proceed, we need to understand the time-reversal transformation properties of ℒ_3^(u,u) and ℛ_3^(u,u), defined in Eqs. (<ref>) and (<ref>), respectively.
These are built using 𝒟_3^(u,u), which, as shown in Eq. (<ref>),
involves the kernel G^∞ of Eq. (<ref>).
Thus we begin by studying the transformation properties of G^∞.
It follows from its definition that
G^∞_ℓ m; ℓ' m';P⃗(k⃗'; k⃗) = (-1)^ℓ+m+ℓ'+m' G^∞_ℓ' -m'; ℓ -m;-P⃗(-k⃗; -k⃗') ,
where we have used
H_3;P⃗(k⃗', k⃗) = H_3; - P⃗(- k⃗, - k⃗').
Using the definition of 𝒟_3^(u,u), Eq. (<ref>), and substituting
the symmetry relations for ℳ_22, Eq. (<ref>), and G^∞, Eq. (<ref>), we find
𝒟^(u,u)_3;ℓ m; ℓ' m';P⃗(k⃗'; k⃗) = (-1)^ℓ+m+ℓ'+m'𝒟^(u,u)_3;ℓ' -m';ℓ -m;-P⃗(-k⃗;-k⃗') ,
i.e. 𝒟_3^(u,u) transforms in the same way as G^∞.
It is now straightforward to use the definitions, Eqs. (<ref>) and (<ref>),
to show that the components of ℒ_3^(u,u) and ℛ_3^(u,u) satisfy
ℒ^(u,u)_3;ℓ m; ℓ' m';P⃗(k⃗', k⃗)
=
(-1)^ℓ +m+ ℓ' +m'ℛ^(u,u)_3;ℓ' -m'; ℓ -m;-P⃗(-k⃗,-k⃗') .
We further note that ℒ_3^(u,u) and ℛ_3^(u,u) satisfy
ρ_3(k⃗ ' )/2 ω_k' ℒ_3^(u,u)(k⃗ ' ,k⃗ )
=
ℛ_3^(u,u)(k⃗ ' ,k⃗ )
ρ_3(k⃗ )/2 ω_k ,
and from this and Eq. (<ref>), we deduce
Δ_ℒ;ℓ m; ℓ' m';P⃗(p⃗, k⃗)
=
(-1)^ℓ +m+ ℓ' +m' Δ_ℛ;ℓ' -m'; ℓ -m;-P⃗(-k⃗,-p⃗)
.
Inserting this into Eq. (<ref>) and solving for I_ L iteratively then gives
I_ℒ;ℓ m; ℓ' m';P⃗(p⃗, k⃗)
=
(-1)^ℓ +m+ ℓ' +m' I_ℛ;ℓ' -m'; ℓ -m;-P⃗(-k⃗,-p⃗)
.
Substituting these properties of I_ℒ and I_ℛ along with
the standard time-reversal transformation properties of ℳ into
Eqs. (<ref>)-(<ref>), it then follows immediately that
the intermediate quantity has standard transformation properties.
Using this result in Eqs. (<ref>)-(<ref>),
we find the claimed result that 𝒦_ df also has standard time-reversal transformation properties, i.e.
K_22; ℓ m; ℓ'm';P⃗ = (-1)^ℓ+m+ℓ'+m' K_22; ℓ' -m'; ℓ -m;-P⃗ ,
K_23;ℓ m; ℓ' m';P⃗( k⃗) = (-1)^ℓ+m+ℓ'+m' K_32;ℓ' -m'; ℓ -m;-P⃗(-k⃗) ,
K_ df,33;ℓ m; ℓ' m';P⃗(k⃗'; k⃗) =(-1)^ℓ+m+ℓ'+m' K_ df,33;ℓ' -m'; ℓ -m;-P⃗(-k⃗; -k⃗') .
We conclude that the K matrix appearing in the quantization condition, Eq. (<ref>), satisfies the same time-reversal transformation properties as a standard K matrix.
This implies that only three of the four components of the K matrix
must be determined from the finite-volume spectrum.
We can extend this result if we also assume parity invariance. Since there is nothing in the
construction of 𝒦_ df that violates parity, it transforms in the same way
as ℳ under parity, namely by flipping the sign of all vectors and multiplying
spherical harmonics by (-1)^ℓ. We thus arrive at the following relations in a theory
that is invariant under both time-reversal and parity transformations:
K_22; ℓ m; ℓ'm';P⃗ = (-1)^m+m' K_22; ℓ' -m'; ℓ -m;P⃗ ,
K_23;ℓ m; ℓ' m';P⃗( k⃗) = (-1)^m+m' K_32;ℓ' -m'; ℓ -m;P⃗(k⃗) ,
K_ df,33;ℓ m; ℓ' m';P⃗(k⃗'; k⃗) =(-1)^m+m' K_ df,33;ℓ' -m'; ℓ -m;P⃗(k⃗; k⃗') .
These relations are more useful since the same value of the total three-momentum appears
on both sides. In particular, the second relation shows that
𝒦_23 is not independent of 𝒦_32.
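To make the combination of the two symmetries explicit, the first of these relations follows in one line (the 23 and df,33 cases work identically):

K_22; ℓ m; ℓ'm';P⃗ = (-1)^ℓ+m+ℓ'+m' K_22; ℓ' -m'; ℓ -m;-P⃗   [time reversal]
= (-1)^ℓ+m+ℓ'+m' (-1)^ℓ+ℓ' K_22; ℓ' -m'; ℓ -m;P⃗   [parity]
= (-1)^m+m' K_22; ℓ' -m'; ℓ -m;P⃗ ,

where parity restores P⃗ at the cost of a factor (-1)^ℓ+ℓ' from the two spherical-harmonic indices, and (-1)^2ℓ+2ℓ' = 1.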
| Over the past few decades, enormous progress has been made in determining the properties of hadrons
directly from the fundamental theory of the strong force,
quantum chromodynamics (QCD).
A key tool in such investigations is lattice QCD (LQCD), which can be used to numerically
calculate correlation functions defined on a discretized, finite, Euclidean spacetime.
State-of-the-art LQCD
calculations of stable hadronic states use dynamical up, down, strange, and even charm quarks, with physical quark masses, and include isospin breaking both from the mass difference of the up and down quarks and from the effects of quantum electrodynamics (QED).
For recent reviews, see Refs. <cit.>.
Using LQCD to investigate hadronic resonances that decay via the strong force is significantly more challenging. Resonances do not correspond to eigenstates of the QCD Hamiltonian and thus cannot be studied by directly interpolating a state with the desired quantum numbers. Instead, resonance properties are encoded in scattering and transition amplitudes, and only by extracting these observables can one make systematic, quantitative statements. In fact, it is not a priori clear that one can extract such observables using LQCD. Confining the system to a finite volume obscures the meaning of asymptotic states and restricting to Euclidean momenta prevents one from directly applying the standard approach of Lehmann-Symanzik-Zimmermann reduction. In addition, since one can only access numerically determined Euclidean correlators with nonvanishing noise, analytic continuation to Minkowski momenta is, in general, an
ill-posed problem.
For two-particle states, it is by now well known that scattering amplitudes
can be constrained indirectly, by first extracting the discrete finite-volume energy spectrum.
The approach follows from seminal work by Lüscher <cit.> who derived a relation between the finite-volume energies and the elastic two-particle scattering amplitude for a system of identical scalar particles.
Since then, this relation has been generalized to accommodate nonzero spatial momentum in the finite-volume frame and also to describe more complicated two-particle systems, including nonidentical and nondegenerate particles as well as particles with intrinsic spin <cit.>. This formalism has been applied in many numerical LQCD calculations to determine the properties of low-lying resonances that decay into a single two-particle channel <cit.>,
including most recently the first study of the lightest hadronic
resonance, the σ/f_0(500) <cit.>. The extension
to systems with multiple coupled two-particle channels <cit.> has led to the first LQCD results for resonances at higher energies, where more than one decay channel is open <cit.>.
Thus far, however, no LQCD calculations have been performed for resonances that have a significant branching
fraction into three or more particles. This is largely because the formalism needed to do so, the three-particle
extension of the relations summarized above, is still under construction.
Early work in this direction includes the nonrelativistic studies presented
in Refs. <cit.>.
More recently, in Refs. <cit.>,
two of the present authors derived a three-particle quantization condition for identical scalar particles
using a generic relativistic quantum field theory (subject to some restrictions described below).
Since these articles are the starting point for the present work,
we briefly summarize their methodology.[
We also note that additional checks of the quantization condition have been given in
Refs. <cit.>. ]
Reference <cit.> studied a three-particle finite-volume correlator and determined its pole positions,
which correspond to the finite-volume energies, in terms of an infinite-volume scattering quantity. This was done by deriving a skeleton expansion, expressing each finite-volume Feynman diagram in terms of its infinite-volume counterpart plus a finite-volume residue, summing the result into a closed form and then identifying the pole locations. The resulting expression for the finite-volume energies depends on a nonstandard infinite-volume scattering
quantity—the divergence-free K matrix, denoted 𝒦_ df.
A drawback of this result is that 𝒦_ df, as well as other quantities in the quantization condition,
depends on a smooth cutoff function (denoted H_3 below),
although the energies themselves are independent of this cutoff.
Thus the relation to the infinite-volume scattering amplitude is not explicit.
The second publication, Ref. <cit.>, resolved this issue by deriving
the relation between 𝒦_ df and the standard infinite-volume three-to-three scattering
amplitude ℳ_3. We comment that, like the two-to-two scattering amplitude, ℳ_2, the three-particle scattering amplitude must satisfy constraints relating its real and imaginary parts that are dictated by unitarity. These constraints are built into quantum field theory, and can be recovered order by order in a diagrammatic
expansion. In the two-particle case, both the definition of the S matrix and the diagrammatic analysis can be
used to show that [ℳ_2]^-1 ∝ cot δ - i where the scattering phase shift δ
(and the proportionality constant) is real.
In the three-particle sector, unitarity takes a much more complicated form but enters our result through
the condition that 𝒦_ df is a real function on a three-particle phase space.
The relation to ℳ_3 then automatically produces the required unitarity properties,
in addition to removing the scheme dependence.
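To make the two-particle statement concrete, in a commonly used normalization (the proportionality constant is not fixed above, so this is one standard choice) the ℓth partial wave reads

ℳ_2^(ℓ) = 16π E_2^* / ( q^* cot δ_ℓ(q^*) - i q^* ) ,

where E_2^* and q^* are the two-particle center-of-momentum energy and relative momentum. The real function q^* cot δ_ℓ plays the role in the two-particle sector that the real divergence-free K matrix plays in the three-particle sector.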
As mentioned above, the results of Refs. <cit.>
were obtained under some restrictions.
The finite spatial volume was taken to be cubic (with linear extent L),
with periodic boundary conditions on the fields,
and the particles were assumed to be spinless and identical (with mass m).
The more important restrictions concerned the class of interactions considered.
These were assumed to satisfy the following two properties:
* They have a ℤ_2 symmetry such that
2↔3 transitions are forbidden;
i.e. only even-legged vertices are allowed.
* They are such that the two-particle K matrix, appearing due to subprocesses in which two particles scatter while the third spectates, is smooth in the kinematically available energy range.
The relation between the three-particle finite-volume energies and the three-to-three scattering amplitude, summarized above, holds for any system satisfying these restrictions. The relation is valid up to exponentially suppressed corrections scaling as e^- m L, which we assume are also negligible here, and holds for any allowed
value of the total three-momentum in the finite-volume frame.
In this work we remove the first of the two major restrictions; i.e. we consider theories without a ℤ_2 symmetry, so that all vertices are allowed
in the field theory.
We continue to impose the second restriction.
This leads to a relativistic, model-independent quantization condition that can be used to
extract coupled two- and three-particle scattering amplitudes from LQCD.
We otherwise use the setup of the previous studies. In particular, we assume a theory of
identical scalar particles in a periodic, cubic box.
Given past experience in the two-particle sector, we expect that these restrictions on particle content
will be straightforward to remove. We also expect that the generalization to multiple two- and three-body
channels will be straightforward.
We defer consideration of these cases until a later publication.
The generalization that we derive here is a necessary step toward using LQCD to study resonances that decay into both two- and three-particle states. A prominent example is the Roper resonance, N(1440), the lowest lying excitation of the nucleon. This state is counterintuitive from the perspective of quark models, as it lies below the
first negative parity excited state.
The Roper resonance is estimated to decay to Nπ with a branching fraction of 55%-75%
and otherwise to Nππ, with other open channels highly suppressed.
Similarly, nearly all of the recently discovered XYZ states
have significant branching fractions into both two- and three-particle final states (see Refs. <cit.> for recent reviews).
These states exhibit the rich phenomenology of nonperturbative QCD and
it is thus highly desirable to have theoretical methods to extract their
properties directly from the underlying theory.
This article derives two main results:
The relation between the discrete finite-volume spectrum and the generalized divergence-free K matrix,
given in Eq. (<ref>), and the relation between the K matrix and the coupled
two- and three-particle scattering amplitudes, given compactly in Eq. (<ref>) and more explicitly throughout Sec. <ref>.
These results generalize those of Refs. <cit.> and <cit.>, respectively.
The first, Eq. (<ref>), has a form reminiscent of the coupled two-particle result <cit.>. The finite-volume effects are contained in a diagonal two-by-two matrix with entries F_2 in the two-particle sector and F_3 in the three-particle sector.
Aside from minor technical changes, these are the same finite-volume quantities that arise in the previously derived two- and three-particle quantization conditions <cit.>. The coupling between channels is captured by the generalized divergence-free K matrix. This contains diagonal elements, mediating two-to-two and three-to-three transitions, as well as off-diagonal elements that encode the two-to-three transitions.
To obtain both the quantization condition and the relation to the scattering amplitude from a single calculation, we use a matrix of finite-volume
correlators, ℳ_L, chosen so that it goes over to the corresponding matrix of
infinite-volume scattering amplitudes when the L→∞ limit is taken appropriately.
This differs from the type of correlator used in Ref. <cit.>,
but is the direct generalization of that considered in Ref. <cit.>.
The results of this work, like those given in Refs. <cit.>, are derived by analyzing an infinite set of finite-volume Feynman diagrams and identifying the power-law finite-volume effects.
The central complication new to the present derivation comes from diagrams such as that of
Fig. <ref>, in which a two-to-three transition is mediated by a one-to-two transition
together with a spectator particle. The cuts on the right-hand side of the figure indicate that this diagram
gives rise to finite-volume effects from both two- and three-particle states. As we describe in detail below,
a consequence of such diagrams is that we cannot use standard fully dressed propagators in two-particle loops,
but instead need to introduce modified propagators built from two-particle-irreducible (2PI) self-energy diagrams.
In addition, we must keep track of the fact that the two- and three-particle
states in these diagrams share a common coordinate. This makes it more
challenging to separate the finite-volume effects arising from the two- and three-particle states
in diagrams such as that of Fig. <ref>.
To address this complication, and other technical issues that arise, we use here an approach
for studying the finite-volume correlator that differs from the skeleton-expansion-based methods of
Refs. <cit.>.
In particular, we construct an expansion using a mix of fully dressed and
modified two- and three-particle irreducible propagators, which are connected via the local interactions of the
general quantum field theory.
We then identify all power-law finite-volume effects using time-ordered perturbation theory (TOPT).
We also introduce smooth cutoff functions, H_2 and H_3, that only have support in the vicinity of the two- and three-particle poles, respectively. A key simplification of this construction is that,
in disconnected two-to-three transitions such as that shown in Fig. <ref>,
the two- and three-particle poles do not contribute simultaneously.
This is an extension of the result that an on-shell one-to-two transition
is kinematically forbidden for stable particles.
After eliminating such disconnected two-to-three transitions we are left with a series of terms built from two- and three-particle poles, summed over the spatial momenta allowed in the periodic box, and with all two-to-three transitions mediated by smooth functions. To further reduce these expressions, we apply the results of Refs. <cit.>, to express the sums over poles as products of infinite-volume quantities and finite-volume functions. The modifications that we make to accommodate two-to-three transitions affect the exact forms of these poles, so that some effort is required to extend the previous results to rigorously apply here. With these modified relations we are able to derive a closed form for the finite-volume correlator and to express its pole positions in terms of a quantization condition.
The remainder of this work is organized as follows. In the following section we derive the quantization condition relating the discrete finite-volume spectrum to the generalized divergence-free K matrix. After giving the precise definition of the finite-volume correlator, ℳ_L, and introducing various kinematic variables, we divide the bulk of the derivation into four subsections. In Sec. <ref> we apply standard TOPT to identify all of the two- and three-particle states that lead to important finite-volume effects. However, because of technical issues, the form reached via the standard approach is not useful for the subsequent derivation. Thus, in Sec. <ref>, we provide an alternative procedure that displays the same finite-volume effects in a more useful form. This improved derivation is highly involved and we relegate the technical details to Appendix <ref>. With the two- and three-particle poles explicitly displayed, in Sec. <ref> we complete the decomposition of finite- and infinite-volume quantities by extending and applying various relations derived in Refs. <cit.>.
Again, many technical details are collected in Appendix <ref>.
Finally, in Sec. <ref>, we identify the poles in ℳ_L and thereby reach our quantization condition.
To complete the derivation, in Sec. <ref> we relate the generalized divergence-free K matrix to the standard infinite-volume scattering amplitude. Our derivation here closely follows the approach of Ref. <cit.> but is complicated by the mixing of two- and three-body states.
After deriving an expression for ℳ_3 in terms of the K matrix in Sec. <ref>,
we then invert the relation in Sec. <ref>.
Given a parametrization of the scattering amplitude, this allows one to determine the K matrix and thus predict the finite-volume spectrum in terms of a given parameter set. Having given the general relation between finite-volume energies and coupled two- and three-particle scattering amplitudes, in Sec. <ref> we study various limiting cases that simplify the general results. We conclude and give an outlook in Sec. <ref>.
We include four appendixes. In addition to the two mentioned above,
Appendix <ref> describes a specific example of the smooth cutoff functions, H_2 and H_3,
that are used to simplify the results in various ways, in particular by removing disconnected two-to-three transitions,
while Appendix <ref> derives properties of the divergence-free K matrix
that follow from the parity and time-reversal invariance of the theory. | null | null | null | null | null |
http://arxiv.org/abs/1701.07463v1 | 20170125195408 | Sorting by Reversals and the Theory of 4-Regular Graphs | [
"Robert Brijder"
] | cs.DM | [
"cs.DM",
"math.CO"
] |
Hasselt University, Belgium
Postdoctoral fellow of the Research Foundation – Flanders (FWO).
[email protected]
We show that the theory of sorting by reversals fits into the well-established theory of circuit partitions of 4-regular multigraphs (which also involves the combinatorial structures of circle graphs and delta-matroids). In this way, we expose strong connections between the two theories that have not been fully appreciated before.
We also discuss a generalization of sorting by reversals involving the double-cut-and-join (DCJ) operation. Finally, we also show that the theory of sorting by reversals is closely related to that of gene assembly in ciliates.
sorting by reversals sorting by DCJ operations genome rearrangements 4-regular graphs local complementation
gene assembly in ciliates
§ INTRODUCTION
Edit distance measures for genomes can be used to approximate evolutionary distance between their corresponding species. A number of genome transformations have been used to define edit distance measures. In this paper we consider the well-studied chromosome transformation called reversal, which is an inversion of part of a chromosome <cit.>. If two given chromosomes can be transformed into each other through reversals, then the difference between these two chromosomes can be represented by a permutation, where the identity permutation corresponds to equality. As a result, transforming one chromosome into the other using reversals is called sorting by reversals. The reversal distance is the least number of reversals needed to accomplish this transformation (i.e., to sort the permutation by reversals). In <cit.> a formula is given for the reversal distance, leading to an efficient algorithm to compute reversal distance. The proof of that formula uses a notion called the breakpoint graph. Subsequent streamlining of this proof led to the introduction of additional notions <cit.> such as the overlap graph and a corresponding graph operation. A reversal can be seen as a special case of a double-cut-and-join (DCJ) operation that can operate either within a chromosome or between chromosomes. Similar as for reversals, one can define a notion of DCJ distance, and a formula for DCJ distance is given in <cit.>.
The theory of circuit partitions of 4-regular multigraphs was initiated in <cit.> and is currently well-developed with extensions and generalizations naturally leading into the domains of, e.g., linear algebra and matroid theory. In this paper we show that this theory can be used to study sorting by reversals, and more generally sorting by DCJ operations. Moreover, we show how various aspects of the theory of circuit partitions of 4-regular multigraphs relate to the topic of sorting by reversals. In particular, we show that the notion of an overlap graph and its corresponding graph operation from the context of sorting by reversals <cit.> correspond to circle graphs and the (looped) local complementation operation, respectively, of the theory of circuit partitions of 4-regular multigraphs. This leads to a reformulation of the Hannenhalli-Pevzner theorem <cit.> to delta-matroids. We remark however that the theory of circuit partitions of 4-regular multigraphs is too broad to fully cover here, and so various references are provided in the paper for more information.
It has been shown that various other research topics can also be fit into the theory of circuit partitions of 4-regular multigraphs: examples include the theory of ribbon graphs (or embedded graphs) <cit.> and the theory of gene assembly in ciliates <cit.>. As such all these research topics arising from different contexts turn out to be strongly linked, and results from one research topic can often be carried over to another. Indeed, it is not surprising that the Hannenhalli-Pevzner theorem has been independently discovered in the context of gene assembly in ciliates <cit.>.
This paper is organized as follows. In Section <ref> we recall sorting by reversals and in Section <ref> we associate a 4-regular multigraph and a pair of circuit partitions to a pair of chromosomes (where one is obtainable from the other by reversals). This leads to a reformulation of a known inequality of the reversal distance in terms of 4-regular multigraphs. In Section <ref> we associate a circle graph to the two circuit partitions and we show that the adjacency matrix representation of this circle graph reveals essential information regarding the reversal distance. We recall local complementation in Section <ref> and the Hannenhalli-Pevzner theorem in Section <ref>, and reformulate the Hannenhalli-Pevzner theorem in terms of delta-matroids in Section <ref>. Before discussing delta-matroids, we also recall sorting by DCJ operations in Section <ref>. We discuss the close connection of sorting by reversals and gene assembly in ciliates in Section <ref>. Finally, a discussion is given in Section <ref>.
§ SORTING BY REVERSALS
In this section we briefly and informally recall notions concerning sorting by reversals. See, e.g., the text books <cit.> for a more formal and extensive treatment.
During the evolution of species, various types of modifications of the genome may occur. One such modification is the inversion (i.e., rotation by 180 degrees) of part of a chromosome, called a reversal. In Section <ref>, we recall that this inversion is the result of a so-called double-cut-and-join operation. The reversal distance between two given chromosomes is the minimal number of reversals needed to transform one into the other, and it is a measure of the evolutionary distance between the two species. Figure <ref> shows two toy chromosomes which have seven segments in common, but their relative positions and orientations differ (by, e.g., -5 we mean segment 5 in inverted orientation, i.e., rotated by 180 degrees). Figure <ref> shows that the reversal distance between the chromosomes of Figure <ref> is at most four.
We can concisely describe the chromosomes of Figure <ref> by the sequences (1, -6, 7, 4, -2, -5, 3) and (1, 2, 3, 4, 5, 6, 7). These sequences are called signed permutations in the literature, and we will adopt this convention here (although we won't treat them as permutations in this paper). A signed permutation of the form (1, 2, ⋯, n) is called the identity permutation. The reversal distance of a single signed permutation π, denoted by d_r(π), is the minimal number of reversals needed to transform π into the identity permutation. Thus, for π = (1, -6, 7, 4, -2, -5, 3) of Figure <ref>, we have d_r(π) ≤ 4 by Figure <ref>.
Viewing the chromosome of species B of Figure <ref> as the “sorted” chromosome, the transformation using reversals of the chromosome of species A to the chromosome of species B is called sorting by reversals.
For any signed permutation π = (π_1, π_2, ⋯, π_n), the signed permutation π̄ = (-π_n, -π_n-1, ⋯, -π_1) represents the same chromosome (just considered rotated by 180 degrees). The reversal distances of π and π̄ may however differ (the difference is, of course, at most one because π can be turned into π̄ using one “full” reversal). Therefore, one could argue that a better notion of the reversal distance of π would be the minimum of the reversal distances of π and π̄. Equivalently, one could view both (1, 2, ⋯, n) and (-n, -(n-1), ⋯, -1) as identity permutations. We revisit this issue in Sections <ref> and <ref>.
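Since the reversal distance is defined by a shortest sequence of reversals, it can be computed for toy sizes by exhaustive search. The following minimal Python sketch (function names are ours; the search is exponential in n and only suitable for small examples) encodes signed permutations as tuples and finds d_r(π) by breadth-first search:

from collections import deque

def apply_reversal(perm, i, j):
    # invert the segment perm[i..j] (0-based, inclusive): reverse the order and flip all signs
    return perm[:i] + tuple(-x for x in reversed(perm[i:j+1])) + perm[j+1:]

def reversal_distance(perm):
    # breadth-first search over all signed permutations reachable by reversals
    n = len(perm)
    target = tuple(range(1, n + 1))
    start = tuple(perm)
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        cur, d = queue.popleft()
        for i in range(n):
            for j in range(i, n):
                nxt = apply_reversal(cur, i, j)
                if nxt == target:
                    return d + 1
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))

print(reversal_distance((1, -6, 7, 4, -2, -5, 3)))   # -> 4, matching the four reversals above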
§ FOUR-REGULAR MULTIGRAPHS
In this paper, graphs are allowed to have loops but not multiple edges, and multigraphs are allowed to have both loops and multiple edges. We denote the sets of vertices and edges of a (multi)graph G by V(G) and E(G), respectively. For graphs, each edge e ∈ E(G) is either of the form {v} (i.e., v is a looped vertex) or of the form {v_1,v_2} (i.e., there is an edge between v_1 and v_2). A vertex v is said to be isolated if no edge is incident to it (in particular, v is not looped). A 4-regular multigraph is a multigraph where each vertex has degree 4, a loop counting as two.
A standard tool for the calculation of the reversal distance is the so-called breakpoint graph of a signed permutation. In this section we instead assign a 4-regular multigraph and two of its circuit partitions to a signed permutation. The main reason for considering this graph instead of the breakpoint graph is that in this way we can use the vast amount of literature concerning the theory of circuit partitions of 4-regular multigraphs, which began with the seminal paper of Kotzig <cit.>, and extended, e.g., in <cit.>.
We also note that a drawback of using the breakpoint graph is that the identity of the vertices is important, while the theory of circuit partitions of 4-regular multigraphs is independent of the identity of the vertices.
We now describe the construction of the 4-regular multigraph (along with two circuit partitions). As usual, the boundaries between adjacent segments of (the chromosomal depiction of) a signed permutation π are called breakpoints. Because reversals can also be applied on endpoints of a chromosome, we treat the endpoints of a signed permutation as breakpoints as well. We do this by circularizing the signed permutation, see Figure <ref>. Note, however, that the location of the endpoints is important. Indeed, e.g., the signed permutation (1, 2, ⋯, n), with n ≥ 2, has a different reversal distance than any of its proper conjugations (i.e., the signed permutations (i, i+1, ⋯, n, 1, 2, ⋯, i-1) for i ∈{2,…,n}). Therefore, we have anchored the two endpoints to a new segment, which is denoted by $. This corresponds to the usual procedure of framing the signed permutation in the theory of sorting by reversals, see, e.g., <cit.>. The next step, which we call here the expand step, is to insert an intermediate segment I_i between each two adjacent segments, see again Figure <ref>.
Next, we represent the circularized and expanded signed permutation by a digraph D_π. In D_π, each breakpoint is represented by a vertex and each segment is represented by an arrow. The arrow is labeled by the segment x it represents and goes from the left-hand breakpoint of x to the right-hand breakpoint of x, see Figure <ref>. Moreover, the boundaries/breakpoints of the original segments i and i+1 that coincide after the sorting procedure are given a common vertex label v_i. For example, the right-hand side breakpoint of segment 3 and the left-hand side breakpoint of segment 4 are given a common vertex label v_3, see again Figure <ref>. Notice that the orientation is important here: the right-hand side breakpoint of segment 1 and the right-hand side breakpoint of segment -2 are given a common vertex label because segment -2 is segment 2 in inverted orientation (i.e., rotated by 180 degrees). Since segment $ represents the endpoints, for the arrow corresponding to $, the head vertex is labeled by v_0 and the tail vertex is labeled by v_n.
From the digraph D_π of Figure <ref>, we construct a 4-regular multigraph, denoted by G_π, by turning each arrow into an undirected edge, removing the signs from the edge labels, and finally merging each two vertices with the same label, see Figure <ref>.
Let G be a multigraph and let l be the number of connected components of G. An (unoriented) circuit of G is a closed walk, without distinguished orientation or starting vertex, allowing repetitions of vertices but not of edges. A circuit partition of G is a set P of circuits of G such that each edge of G is in exactly one circuit of P. Note that |P| ≥ l. If |P| = l, then we say that P is an Euler system of G. Note that an Euler system contains an Eulerian circuit (i.e., a circuit visiting each edge exactly once) for each connected component of G. In particular, if G is connected, then an Euler system is a singleton containing an Eulerian circuit of G.
As illustrated in Figure <ref>, the circularized and expanded signed permutation of Figure <ref> belongs to a particular circuit partition P_A of the 4-regular multigraph G_π of Figure <ref> (this can also be verified by comparing Figure <ref> with Figure <ref>). In this way, P_A is the circuit partition belonging to the chromosome of species A. Notice that P_A is an Euler system, in fact, since G_π is connected for every signed permutation π, P_A contains an Eulerian circuit of G_π. Another circuit partition P_B is illustrated in Figure <ref>. It is the unique circuit partition that includes the circuit C = (1,2, ⋯,7,$). As such, P_B is the circuit partition belonging to the chromosome of species B. Notice that, besides C, P_B contains four other circuits in this example. Each of these four circuits consists of intermediate segments (recall that these are the segments of the form I_i for some i).
We remark that Figures <ref> and <ref> (corresponding to P_A and P_B, respectively) can be obtained from G_π by “splitting” each vertex in an appropriate way. This splitting is considered in <cit.> in the context of gene assembly in ciliates (we recall gene assembly in ciliates in Section <ref>).
While we do not recall the notion of the cycles of a signed permutation (see, e.g., <cit.>), we mention that it is easy to verify that these cycles correspond one-to-one to circuits of intermediate segments of the circuit partition P_B.
By using 4-regular multigraphs, we have given these cycles a more “physical” interpretation, cf. Figure <ref>.
Let c(π) be the number of cycles of a signed permutation π. The following result is well known (in fact, this result has been extended into an equality in <cit.>).
Let π be a signed permutation with n elements. Then d_r(π) ≥ n+1 - c(π).
The proof idea of Theorem <ref> is to show that (1) if π is the identity permutation, then c(π)=n+1 and (2) if π' is obtained from π by applying a single reversal, then c(π') - c(π) ∈{-1,0,1}.
We remark that the inequality of Theorem <ref> usually takes the form d_r(π) ≥ n' - c(π), where n' is the number of segments of the framed/anchored signed permutation and is natural when segment $ is instead denoted by n' = n+1.
For our running example we see by Figure <ref> that c(π)=4. Thus d_r(π) ≥ 7+1-4=4. We have seen in Section <ref>, and in particular Figure <ref>, that d_r(π) ≤ 4. Consequently, d_r(π)=4.
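The count c(π) is easy to compute with the standard breakpoint-graph encoding from the sorting-by-reversals literature, whose alternating cycles correspond one-to-one to the circuits of intermediate segments of P_B. A Python sketch (our function names):

def breakpoint_cycles(perm):
    # frame with 0 and 2n+1; element x > 0 becomes the pair (2x-1, 2x), and x < 0 becomes (2|x|, 2|x|-1)
    n = len(perm)
    u = [0]
    for x in perm:
        u += [2 * x - 1, 2 * x] if x > 0 else [-2 * x, -2 * x - 1]
    u.append(2 * n + 1)
    black = {}                                  # black edges join the two entries at each breakpoint
    for i in range(0, len(u), 2):
        black[u[i]], black[u[i + 1]] = u[i + 1], u[i]
    gray = {v: v + 1 if v % 2 == 0 else v - 1 for v in range(2 * n + 2)}
    seen, cycles = set(), 0
    for start in range(2 * n + 2):
        if start in seen:
            continue
        cycles += 1
        v, use_black = start, True              # alternate black/gray edges until the cycle closes
        while v not in seen:
            seen.add(v)
            v = black[v] if use_black else gray[v]
            use_black = not use_black
    return cycles

pi = (1, -6, 7, 4, -2, -5, 3)
c = breakpoint_cycles(pi)
print(c, len(pi) + 1 - c)                       # -> 4 4: c(pi) = 4, so d_r(pi) >= 4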
Notice that a circuit partition takes, for each vertex of the 4-regular multigraph, one of three possible routes, see Figure <ref>. Since care must be taken in the case of loops, the four h_i's are not edges but actually “half-edges”, where two half-edges form an edge. If circuit partitions P_1 and P_2 take a different route at each vertex of G, then we say that P_1 and P_2 are supplementary. Notice that P_A and P_B are supplementary circuit partitions.
For a given circuit partition P and vertex v of G, let P' and P” be the circuit partitions obtained from P by changing the route of P at vertex v. It is well known that the cardinalities of two of {P,P',P”} are equal, to say k, and the third is of cardinality k+1.
In terms of 4-regular multigraphs, the issue discussed in Remark <ref> translates to the question of whether the anchor should be $ or -$ (in other words, $ in inverted orientation). We assume the former, but the latter anchor is equally valid and may sometimes obtain a reversal distance that is one smaller. We revisit this issue in Section <ref>.
§ CIRCLE GRAPHS
To study the effect of sequences of reversals, we turn to circle graphs. Let us fix two supplementary circuit partitions P_1 and P_2 of a 4-regular multigraph G, where P_1 is an Euler system.
A vertex v of G is called oriented for P_1 with respect to P_2 if the circuit partition P' obtained from P_1 by changing the route of P_1 at v to coincide with the route of P_2 at v is an Euler system. We say that vertex v of G_π is oriented for a signed permutation π if v is oriented for P_A with respect to P_B. Thus {v_1,v_2,v_4,v_6} is the set of oriented vertices of our running example.
To construct the circle graph, we assume first, for convenience, that G is connected, i.e., P_1 contains a single Eulerian circuit C. We draw C as a circle and connect each two vertices with the same label by a chord to obtain a chord diagram. See Figure <ref> for the chord diagram of the Eulerian circuit of Figure <ref>. We construct a (looped) circle graph H for G with respect to P_1 and P_2 as follows. The set V(H) = V(G) and two distinct vertices of H are adjacent when the corresponding two chords intersect in the chord diagram. Finally, a loop is added for each vertex that is oriented for P_1 with respect to P_2. In the general case where G is not necessarily connected, the circle graph for G is the union of the circle graphs of each connected component of G. The circle graph H_π for signed permutation π is the circle graph of G_π with respect to P_A and P_B. The circle graph for our running example is depicted in Figure <ref>.
We remark that a circle graph is called an “overlap graph” in the literature of sorting by reversals. However, we use here the term circle graph because circle graph is the usual name for this notion in mathematics. Also, the vertices of an overlap graph in the literature on sorting by reversals are often decorated by white or black labels instead of loops (black labels correspond to loops). The rest of this section shows why using loops instead of vertex colors is very useful when going to other combinatorial structures and matrices.
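For signed permutations the circle graph can also be built directly from π, without drawing the chord diagram. The Python sketch below (our encoding, reusing the framed unsigned sequence of the breakpoint-graph construction; the parity test for loops is the standard orientation criterion from the literature) stores H as a dict of neighbor sets, with i ∈ H[i] encoding a loop. On the running example it reproduces the oriented vertices {v_1,v_2,v_4,v_6} and the adjacency matrix displayed below.

def circle_graph(perm):
    # vertex i stands for v_i, realized as the chord joining 2i and 2i+1 in the framed sequence;
    # interleaving chords give edges, and equal position parity marks an oriented (looped) vertex
    n = len(perm)
    u = [0]
    for x in perm:
        u += [2 * x - 1, 2 * x] if x > 0 else [-2 * x, -2 * x - 1]
    u.append(2 * n + 1)
    pos = {val: i for i, val in enumerate(u)}
    span = {i: tuple(sorted((pos[2 * i], pos[2 * i + 1]))) for i in range(n + 1)}
    H = {i: set() for i in range(n + 1)}
    for i in range(n + 1):
        a, b = span[i]
        if a % 2 == b % 2:
            H[i].add(i)                          # oriented vertex: add a loop
        for j in range(i + 1, n + 1):
            c, d = span[j]
            if (a < c < b) != (a < d < b):       # exactly one endpoint inside: the chords interleave
                H[i].add(j); H[j].add(i)
    return H

H = circle_graph((1, -6, 7, 4, -2, -5, 3))
print(sorted(v for v in H if v in H[v]))         # -> [1, 2, 4, 6], the oriented vertices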
We now recall the well-known notion of an adjacency matrix of a graph. First, the rows and columns of the matrices we consider in this paper are not ordered, but are instead indexed by finite sets X and Y, respectively. We call such matrices X × Y-matrices. Note that the usual notions of rank and nullity of such a matrix A are defined — they are denoted by r(A) and n(A), respectively. The adjacency matrix of a graph G, denoted by A(G), is the V(G) × V(G)-matrix over the binary field GF(2) where for v,v' ∈ V(G), the entry indexed by (v,v') is 1 if and only if v and v' are adjacent (a vertex v is considered adjacent to itself precisely when v has a loop).
The adjacency matrix A(H_π) of the circle graph H_π of Figure <ref> is as follows:
A(H_π) =
v_0 v_1 v_2 v_3 v_4 v_5 v_6 v_7
v_0 0 0 0 0 0 0 0 0
v_1 0 1 1 1 1 1 0 1
v_2 0 1 1 0 1 1 0 0
v_3 0 1 0 0 0 1 0 0
v_4 0 1 1 0 1 1 0 0
v_5 0 1 1 1 1 0 1 1
v_6 0 0 0 0 0 1 1 0
v_7 0 1 0 0 0 1 0 0 .
We now recall the following result from <cit.>.
Let G be a 4-regular multigraph with l connected components and let P_1 and P_2 be supplementary circuit partitions of G with P_1 an Euler system. Let H be the circle graph for G with respect to P_1 and P_2. Then n(A(H)) = |P_2| - l.
We have the following corollary to Theorem <ref>, which considers the case where H is of the form H_π.
Let π be a signed permutation. Then n(A(H_π)) = c(π).
By Theorem <ref>, n(A(H_π)) = |P_B|-|P_A| = |P_B|-1 since P_A contains an Eulerian circuit of G_π. Recall from Section <ref> that c(π) is the number of circuits of P_B excluding the circuit (1,…,n,$). Thus c(π) = |P_B|-1.
Corollary <ref> illustrates the usefulness of using loops instead of vertex colors for circle graphs (cf. Remark <ref>).
For our running example, we have that c(π) = |P_B|-1 = 5-1 = 4, so n(A(H_π)) = c(π) =4.
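This equality is easy to check numerically. A minimal GF(2) rank computation in Python (rows encoded as bitmasks read off from A(H_π) above; reversing the column order does not affect the rank):

def gf2_rank(rows):
    # forward elimination over GF(2), pivoting on the lowest set bit of each surviving row
    rows, rank = list(rows), 0
    for i in range(len(rows)):
        p = rows[i]
        if p == 0:
            continue
        rank += 1
        low = p & -p
        for j in range(i + 1, len(rows)):
            if rows[j] & low:
                rows[j] ^= p
    return rank

A = ["00000000", "01111101", "01101100", "01000100",    # rows v_0..v_3 of A(H_pi)
     "01101100", "01111011", "00000110", "01000100"]    # rows v_4..v_7
r = gf2_rank(int(row, 2) for row in A)
print(r, len(A) - r)    # -> 4 4: r(A(H_pi)) = 4 and n(A(H_pi)) = c(pi) = 4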
Using Corollary <ref>, we can translate the inequality of Theorem <ref> as follows.
Let π be a signed permutation with n elements. Then d_r(π) ≥ r(A(H_π)).
By Theorem <ref> and Corollary <ref>, d_r(π) ≥ n+1 - c(π) = n+1 - n(A(H_π)). The result follows by observing that |V(H_π)| = n+1.
Let π be a signed permutation. Then π is the identity permutation if and only if r(A(H_π)) = 0.
Let π have n elements. Note that π is the identity permutation if and only if each intermediate segment forms a circuit of length 1 if and only if c(π) = n+1. By Corollary <ref>, this is equivalent to n(A(H_π)) = n+1 and therefore equivalent to r(A(H_π)) = 0.
Note that r(A(H))=0 simply means that H contains no edges (i.e., consists of only isolated vertices).
§ LOCAL COMPLEMENTATION
In order to study the effect of reversals on circle graphs, we recall the following graph notions. For a graph H and vertex v, the neighborhood of v in H, denoted by N_H(v), is { v' ∈ V(H) | {v,v'} ∈ E(H), v' ≠ v}.
Let H be a graph and v a looped vertex of H. The local complement of H at v, denoted by H*v, is the graph obtained from H by complementing the subgraph induced by N_H(v).
In other words, for all p ⊆ V(H) = V(H*v) with |p| ∈{1,2}, we have p ∈ E(H*v) if and only if either (1) p ∉ E(H) and p ⊆ N_H(v) or (2) p ∈ E(H) and p ⊈N_H(v).
Moreover, we denote by H*_c v the graph obtained from H*v by removing all edges incident to v (including the loop on v). Thus v is an isolated vertex of H*_c v. Equivalently, H *_c v complements the subgraph induced by the “closed neighborhood” { v' ∈ V(H) |{v,v'}∈ E(H)}. Finally, we denote by H|v the graph obtained from H*_c v by removing the isolated vertex v. Note that each of these three operations (* v, *_c v and |v) are only allowed on looped vertices v; we say that such an operation is applicable to H if v is a looped vertex of H.
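These three operations are straightforward to implement. A Python sketch (our function names), with a graph stored as a dict mapping each vertex to its set of neighbors, a loop encoded as v ∈ H[v], and vertices assumed sortable:

import itertools

def local_complement(H, v):                     # H * v
    assert v in H[v], "only defined for looped vertices"
    G = {u: set(nb) for u, nb in H.items()}
    for a, b in itertools.combinations_with_replacement(sorted(G[v] - {v}), 2):
        if b in G[a]:                           # toggle edge {a,b}; a == b toggles a loop
            G[a].discard(b); G[b].discard(a)
        else:
            G[a].add(b); G[b].add(a)
    return G

def lc_isolate(H, v):                           # H *_c v: as above, then isolate v
    G = local_complement(H, v)
    for u in list(G[v]):
        G[u].discard(v)
    G[v] = set()
    return G

def lc_delete(H, v):                            # H | v: as above, then delete v
    G = lc_isolate(H, v)
    del G[v]
    return G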
The interest of local complement for the topic of sorting by reversals is that it corresponds to applying some particular type of reversal <cit.>.
Let π be a signed permutation and let π' be the signed permutation obtained from π by applying a reversal on the breakpoints corresponding to an oriented vertex v. Then H_π' = H_π *_c v.
It turns out that local complementation has interesting effects on the underlying adjacency matrix (see, e.g., <cit.>).
Let H be a graph and v a looped vertex of H. Then n(A(H|v)) = n(A(H)). In other words, r(A(H *_c v)) = r(A(H))-1.
While Lemma <ref> can be proved directly, another way is to (1) observe that A(H|v) is obtained from A(H) by applying the Schur complementation matrix operation <cit.> on the submatrix induced by {v} and (2) recall from, e.g., <cit.> that Schur complementation preserves nullity. We remark that while A(H|v) is obtained from A(H) by applying Schur complementation, A(H*v) is obtained from A(H) by applying the principal pivot transform matrix operation, which is a partial matrix inversion operation, see, e.g., <cit.>.
Let H be a graph and let v_1,…,v_k be mutually distinct vertices of H.
A sequence φ = *_c v_1 *_c v_2 ⋯ *_c v_k of *_c operations that is applicable to H (associativity of *_c is from left to right) is called an lc-sequence for H. We say that an lc-sequence is full if H φ contains only isolated vertices.
Since (1) by Lemma <ref>, *_c decreases rank by one and (2) a graph H contains only isolated vertices if and only if r(A(H))=0, we directly recover the following property observed in <cit.> (see also <cit.>).
Let H be a graph and let φ be an lc-sequence of length k for H. Then φ is full if and only if k = r(A(H)).
Note that if a full lc-sequence φ exists for H_π with π a signed permutation, then by
Theorem <ref> and Lemma <ref> we have that d_r(π) = r(A(H_π)). So, each full lc-sequence corresponds to an optimal sorting of π.
§ HANNENHALLI-PEVZNER THEOREM
We now recall the so-called Hannenhalli-Pevzner theorem which gives a precise criterion on arbitrary graphs H for the existence of a full lc-sequence. This theorem has been shown in <cit.> in terms of signed permutations π, but has later been extended to arbitrary graphs H (instead of essentially restricting to circle graphs H_π). Also, this result was shown independently in <cit.> in the context of gene assembly in ciliates (we recall this topic in Section <ref>). We give here another proof of this result, closely following the reasoning of <cit.>.
Let L be the set of looped vertices of a graph H. For all v ∈ V(H), we denote N^l_H(v) = N_H(v) ∩ L and N^ul_H(v) = N_H(v) ∖ L. Also, H is said to be loopless if L=∅.
Let H be a connected graph and let v ∈ V(H) be looped. If H' is a loopless connected component of H|v, then both (1) V(H') ∩ N^l_H(v) ≠∅ and (2) N^ul_H(v) ⊆ N^ul_H(w) and N^l_H(w) ⊆ N^l_H(v) for all w ∈ V(H') ∩ N^l_H(v).
Let H' be a loopless connected component of H|v. Since local complementation changes only edges between vertices of N_H(v), V(H') ∩ N_H(v) ≠∅. Because local complementation complements the loop status of each vertex of N_H(v) and H' is loopless, V(H') ∩ N^l_H(v) = V(H') ∩ N_H(v) ≠∅.
Let w ∈ V(H') ∩ N^l_H(v).
Firstly, let x ∈ N^ul_H(v). Then x is looped in H|v. Since H' is loopless, {x,w}∉ E(H|v). Thus {x,w}∈ E(H) (because x,w ∈ N_H(v)). Consequently, x ∈ N^ul_H(w). Thus N^ul_H(v) ⊆ N^ul_H(w).
Secondly, let x ∈ N^l_H(w). If x ∉ N^l_H(v), then x ∈ N^l_H|v(w) which contradicts the fact that w belongs to a loopless connected component of H|v. Thus N^l_H(w) ⊆ N^l_H(v).
For a vertex v of H, define s(v) = |N^ul_H(v)| - |N^l_H(v)|.
Let Max(H) be the set of looped vertices v of H such that s(w) ≤ s(v) for all w ∈ N^l_H(v). Note that for any graph H with looped vertices, Max(H) is nonempty since it contains all looped vertices that are (globally) maximal with respect to the function s.
Let H be a connected graph and v ∈ Max(H). Then each loopless connected component of H|v consists of only an isolated vertex.
Let H' be a loopless connected component of H|v. Since v ∈ Max(H), we have s(w) ≤ s(v), which together with the inclusions of Lemma <ref> forces N^ul_H(v) = N^ul_H(w) and N^l_H(w) = N^l_H(v) for some w ∈ V(H') ∩ N^l_H(v). Since v and w are moreover looped and adjacent, we have that w, and therefore H', is an isolated vertex of H|v.
By iteration of Lemma <ref>, we obtain the following.
Let H be a graph. Then there is a full lc-sequence for H if and only if each loopless connected component of H consists of only an isolated vertex.
Let π be a signed permutation with n elements. If each connected component of H_π has at least one looped vertex or consists of only an isolated vertex, then d_r(π) = n+1 - c(π).
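Lemma <ref> also suggests a simple greedy sorting procedure: repeatedly apply *_c at a looped vertex of maximal s (such a vertex lies in Max(H)), which preserves the condition of Theorem <ref> and, by Corollary <ref>, yields a sequence of optimal length r(A(H)). A Python sketch (lc_isolate is as in the sketch of Section <ref>, repeated here for self-containedness):

import itertools

def lc_isolate(H, v):
    # H *_c v for a looped vertex v
    G = {u: set(nb) for u, nb in H.items()}
    for a, b in itertools.combinations_with_replacement(sorted(G[v] - {v}), 2):
        if b in G[a]:
            G[a].discard(b); G[b].discard(a)
        else:
            G[a].add(b); G[b].add(a)
    for u in list(G[v]):
        G[u].discard(v)
    G[v] = set()
    return G

def greedy_full_lc_sequence(H):
    G = {u: set(nb) for u, nb in H.items()}
    seq = []
    while any(G[u] for u in G):                 # some edge or loop remains
        looped = [u for u in G if u in G[u]]
        if not looped:
            return None                         # non-trivial loopless component: no full lc-sequence
        v = max(looped, key=lambda w: sum(1 if x not in G[x] else -1 for x in G[w] - {w}))
        seq.append(v)
        G = lc_isolate(G, v)
    return seq

A = ["00000000", "01111101", "01101100", "01000100",
     "01101100", "01111011", "00000110", "01000100"]
H = {i: {j for j in range(8) if row[j] == "1"} for i, row in enumerate(A)}
print(greedy_full_lc_sequence(H))               # -> [1, 3, 2, 5]: four reversals, matching d_r(pi) = 4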
§ DCJ OPERATIONS AND MULTIPLE CHROMOSOMES
A reversal is a special case of a double-cut-and-join (DCJ for short) operation, also called recombination in other contexts. A DCJ operation is depicted in Figure <ref>. A DCJ operation consists of three stages: first two distinct breakpoints align, then both breakpoints are cut, and finally the ends are glued back together as depicted in Figure <ref>. Since we consider endpoints to be breakpoints as well, any of w, x, y and z may be nonexistent. From the alignment of Figure <ref> one observes that a reversal is a special case of a DCJ operation. DCJ operations are allowed to be intermolecular as well, and so sorting by DCJ operations may involve multiple chromosomes (for example a whole genome) and each chromosome may be linear or circular. The DCJ distance of two genomes g_A and g_B, denoted by d_DCJ(g_A,g_B), is the minimal number of DCJ operations needed to transform one genome into the other. A toy example of two genomes g_A and g_B of species A and B, respectively, consisting of both linear and circular chromosomes is given in Figure <ref>.
If g_A and g_B contain only circular chromosomes, then the method of Section <ref> to construct a 4-regular multigraph applies essentially unchanged — the only difference is that the circularization step is not done (and so no anchor $ is introduced) because the chromosomes are circular already. Thus, we directly apply the expand step to all chromosomes of g_A and then construct the 4-regular multigraph G as before.
It is now a special case of <cit.> that d_DCJ(g_A,g_B) = n-c, where n is the number of vertices of G (i.e., n is the number of segments in g_A and g_B) and c is the number of circuits containing intermediate segments of the circuit partition P_B belonging to g_B. The result in <cit.> is not stated in terminology of circuit partitions of a 4-regular multigraph, but instead in terms of a graph called an “adjacency graph”[The notion of adjacency graph is not to be confused with the different notion of adjacency matrix as recalled earlier in this paper.]. More precisely, in <cit.> c is defined as the number of cycles in the adjacency graph, and it can be readily verified that cycles in the adjacency graph correspond one-to-one to circuits of P_B containing intermediate segments.
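For the circular case this formula is easy to implement. A Python sketch (our encoding: genomes are lists of circular chromosomes over the same gene set, each gene occurring exactly once; a gene is represented by its two extremities, head 'h' and tail 't'):

def adjacencies(genome):
    # each adjacency joins the right extremity of a gene to the left extremity of its circular successor
    adj = []
    for chrom in genome:
        for a, b in zip(chrom, chrom[1:] + chrom[:1]):
            adj.append(((abs(a), 'h' if a > 0 else 't'),
                        (abs(b), 't' if b > 0 else 'h')))
    return adj

def dcj_distance(gA, gB):
    def matching(adj):
        m = {}
        for x, y in adj:
            m[x], m[y] = y, x
        return m
    mA, mB = matching(adjacencies(gA)), matching(adjacencies(gB))
    seen, c = set(), 0                          # count cycles alternating between A- and B-adjacencies
    for start in mA:
        if start in seen:
            continue
        c += 1
        v, useA = start, True
        while v not in seen:
            seen.add(v)
            v = mA[v] if useA else mB[v]
            useA = not useA
    n = sum(len(ch) for ch in gA)
    return n - c                                # d_DCJ = n - c for circular genomes (see text)

print(dcj_distance([[1, -2, 3]], [[1, 2, 3]]))  # -> 1: a single DCJ (here a reversal) suffices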
While the construction of a 4-regular multigraph as outlined in Section <ref> works when both genomes have only circular chromosomes, there are issues for genomes containing linear chromosomes such as in Figure <ref>. Indeed, the circularization of the linear chromosomes of g_B in Figure <ref> leads to two anchors $_1 and $_2 which somehow need to be reconciled with the single anchor $ in the linear chromosome of g_A. Also, the issue discussed in Remarks <ref> and <ref> (which concerns whether to use $ or -$ as the anchor) is exacerbated when there are several linear chromosomes. We leave it as an open problem to resolve this issue of the absence of a canonical 4-regular multigraph for two genomes. We mention that <cit.> is able to calculate the DCJ distance in this general setting (i.e., with linear chromosomes). The formula takes the form d_DCJ(g_A,g_B) = n-(c+i/2), where i is the number of connected components of the adjacency graph that are odd-length paths. Very roughly, one way to explain this formula in terms of 4-regular multigraphs is that each odd-length path corresponds to a side of a linear chromosome and an anchor (which increases n by one) can be introduced in such a way that both sides of a linear chromosome end up in different circuits (which increases c by two). So, the net effect of two sides of a linear chromosome is one, hence the contribution of i/2.
Recall that the construction of a circle graph from Section <ref> requires two supplementary circuit partitions P_A and P_B of a 4-regular multigraph G where P_A is an Euler system. While in the theory of sorting by reversals P_A is always an Euler system (in fact, P_A contains an Eulerian circuit of G_π), the circuit partition P_A belonging to a genome g_A is not necessarily an Euler system. In Section <ref> we recall the notion of a delta-matroid which can function as a substitute for circle graphs in scenarios where circle graphs do not exist. In fact, as we will see, delta-matroids are useful even in scenarios where circle graphs do exist, such as in the theory of sorting by reversals.
§ DELTA-MATROIDS
It was casually remarked in the discussion of <cit.> that the Hannenhalli-Pevzner theorem might be generalizable to the more general setting of delta-matroids, which are combinatorial structures defined by Bouchet <cit.>. Indeed, we recall now how delta-matroids can be constructed from circuit partitions in 4-regular multigraphs and from circle graphs. In this way, various results concerning the theory of sorting by reversals can be viewed in this more general setting.
A set system D = (V,S) is an ordered pair, where V is a finite set, called the ground set, and S a set of subsets of V. For set systems D_1 = (V_1,S_1) and D_2 = (V_2,S_2) with disjoint ground sets, we define the direct sum of D_1 and D_2, denoted by D_1 ⊕ D_2, as the set system (V_1 ∪ V_2, { X_1 ∪ X_2 | X_1 ∈ S_1, X_2 ∈ S_2 }). In this case we say that D_1 and D_2 are summands of D. A set system D is called connected if it is not the direct sum of two set systems with nonempty ground sets. Also, D = (V,S) is called even if the cardinalities of all sets in S are of equal parity. Let us denote symmetric difference by △. A delta-matroid D = (V,S) is a set system where S is nonempty and, moreover, for all X,Y ∈ S and x ∈ X △ Y, there is a y ∈ X △ Y (possibly equal to x) with X △ {x,y} ∈ S <cit.>.
Let G be a 4-regular multigraph and let P_1 and P_2 be two supplementary circuit partitions of G. Denote by 𝒟_G(P_1,P_2) the set system (V(G),S), where for X ⊆ V(G) we have X ∈ S if and only if the circuit partition P obtained from P_1 by changing, for each v ∈ X, the route of P_1 at v to coincide with the route of P_2 at v, is an Euler system.
The following result is stated in <cit.> (see also <cit.>).
Let G be a 4-regular multigraph and let P_1 and P_2 be two supplementary circuit partitions of G. Then 𝒟_G(P_1,P_2) is a delta-matroid.
Let G_π be from Figure <ref>, P_A = {C} be from Figure <ref> and P_B be from Figure <ref>. Then 𝒟_G_π(P_A,P_B) = (V(G),S), where
S = {∅, {v_1}, {v_2}, {v_4}, {v_6}, {v_1,v_3}, {v_1,v_5}, {v_1,v_6}, {v_1,v_7}, {v_2,v_4}, {v_2,v_5},
{v_2,v_6}, {v_3,v_5}, {v_4,v_5}, {v_4,v_6}, {v_5,v_6}, {v_5,v_7}, {v_1,v_2,v_3}, …}.
For a graph H and X ⊆ V(G), we denote by H[X] the subgraph of H induced by X (i.e., all vertices outside X are removed including their incident edges). Also, denote by 𝒟_H the set system (V(H),S), where for X ⊆ V(H), X ∈ S if and only if the matrix A(H[X]) is invertible. As usual, the empty matrix is invertible by convention.
Let H be a graph. Then 𝒟_H is a delta-matroid.
The delta-matroids of the form 𝒟_H for some graph H are called binary normal delta-matroids <cit.>.
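For small graphs, 𝒟_H can be enumerated directly from the definition. A brute-force Python sketch (our function names; gf2_invertible tests invertibility over GF(2), with the empty matrix invertible by convention):

import itertools

def gf2_invertible(M):
    rows, rank = [int("".join(map(str, r)), 2) for r in M], 0
    for i in range(len(rows)):
        p = rows[i]
        if p == 0:
            continue
        rank += 1
        low = p & -p
        for j in range(i + 1, len(rows)):
            if rows[j] & low:
                rows[j] ^= p
    return rank == len(M)

def feasible_sets(H):
    # all X with A(H[X]) invertible over GF(2); H is a dict of neighbor sets, a loop encoded as v in H[v]
    V = sorted(H)
    return [set(X)
            for k in range(len(V) + 1)
            for X in itertools.combinations(V, k)
            if gf2_invertible([[1 if b in H[a] else 0 for b in X] for a in X])]

A = ["00000000", "01111101", "01101100", "01000100",
     "01101100", "01111011", "00000110", "01000100"]
H = {i: {j for j in range(8) if row[j] == "1"} for i, row in enumerate(A)}
print([X for X in feasible_sets(H) if len(X) <= 1])   # -> [set(), {1}, {2}, {4}, {6}], cf. the example above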
We now recall the close connection between delta-matroids of circle graphs and of circuit partitions of 4-regular multigraphs.
Let H be the circle graph of a 4-regular multigraph G with respect to the supplementary circuit partitions P_1 and P_2 with P_1 an Euler system. Then 𝒟_H = 𝒟_G(P_1,P_2).
Note that G_π, P_A and P_B as used in Example <ref> are also used to construct the circle graph H_π of Figure <ref>. So, 𝒟_G_π(P_A,P_B) of Example <ref> is equal to 𝒟_H_π.
Notice, e.g., that the looped vertices of H precisely correspond to the singletons in 𝒟_G(P_1,P_2). Indeed, the adjacency matrix of a subgraph containing a single vertex v is invertible precisely when v is looped.
By Theorem <ref>, 𝒟_G(P_1,P_2) corresponds to a circle graph if P_1 an Euler system. Recall that in the case of the sorting of genomes by DCJ operations (see Section <ref>), the usual construction of a circle graph does not work since the input does not necessarily correspond to an Euler system. However, we can construct 𝒟_G(P_1,P_2) and so delta-matroids allow one to work with combinatorial structures similar to circle graphs in cases (such as in Section <ref>) where a corresponding circle graph does not seem to exist. This is one important reason for considering delta-matroids. Another reason is that delta-matroids are often more easy to work with than circle graphs (even in the cases where circle graphs exist) since the operation of local complementation (on looped vertices) is much more simple in terms of delta-matroids. We will recall this now.
For a set system D = (V,S) and X ⊆ V, define the twist of D on X, denoted by D*X, as the set system (V,S') with S' = { X △ Y | Y ∈ S}. Notice that (D*X)*X = D. Also, it is easy to verify that if D is a delta-matroid, then so is D*X.
Let H be a graph and v ∈ V(H) be looped. Then 𝒟_H*v = 𝒟_H*{v}.
By Theorem <ref>, 𝒟_H *_c v is obtained from 𝒟_H*{v} by removing all sets containing v.
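In code the twist is a one-liner on the list of feasible sets, which makes the theorem above easy to spot-check against the earlier sketches (assuming feasible_sets and local_complement from those sketches):

def twist(S, X):
    # feasible sets of D*X: symmetric difference of every feasible set with X
    X = set(X)
    return [Y ^ X for Y in S]

# e.g., for a looped vertex v of H one can compare feasible_sets(local_complement(H, v))
# with twist(feasible_sets(H), {v}), in line with Theorem <ref>.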
We can translate the notion of a full lc-sequence for graphs to the realm of delta-matroids. Let D = (V,S) be a set system. A sequence σ = (v_1,v_2,…,v_k) of mutually distinct elements of V is an lc-sequence for D if for all i ∈{0,…,k}, {v_1,…,v_i}∈ S. Note that, in particular, ∅, {v_1,…,v_k}∈ S. An lc-sequence σ = (v_1,v_2,…,v_k) for D is said to be full if {v_1,…,v_k}∈max(S), where max(S) is the set of maximal elements of S with respect to inclusion.
Let H be a graph. Then σ = (v_1,v_2,…,v_k) is an lc-sequence for 𝒟_H if and only if φ = *_c v_1 *_c v_2⋯ *_c v_k is an lc-sequence for H. Moreover, in this case, σ is full if and only if φ is full.
We prove the first statement by induction on k. If k=0, then σ and φ are empty sequences and so the result holds trivially. Assume that k > 0 and that the result holds for k' = k-1.
First, let σ be an lc-sequence for 𝒟_H. In particular, σ' = (v_1,v_2,…,v_k-1) is an lc-sequence for 𝒟_H. By the induction hypothesis, φ' = *_c v_1 *_c v_2 ⋯ *_c v_k-1 is an lc-sequence for H. Assume to the contrary that φ is not an lc-sequence. Then v_k is not a looped vertex of H φ'. Thus {v_k} is not a set of 𝒟_H φ'. By the sentence below Theorem <ref>, {v_k} is not a set of 𝒟_H*{v_1,v_2,…,v_k-1}. In other words, {v_1,v_2,…,v_k} is not a set of 𝒟_H — a contradiction.
Second, let φ be an lc-sequence for H. In particular, φ' = *_c v_1 *_c v_2⋯ *_c v_k-1 is an lc-sequence for H. By the induction hypothesis, σ' = (v_1,v_2,…,v_k-1) is an lc-sequence for 𝒟_H. It suffices now to show that {v_1,v_2,…,v_k} is a set of 𝒟_H. By the sentence below Theorem <ref>, {v_k} is a set of 𝒟_H*{v_1,v_2,…,v_k-1}. Thus {v_1,v_2,…,v_k} is a set of 𝒟_H.
We now prove the second statement. By the strong principal minor theorem, see <cit.> and also <cit.>, each set in max(S) is of cardinality r(A(H)). Thus lc-sequence σ is full if and only if k = r(A(H)) if and only if φ is full (by Corollary <ref>).
We are now ready to rephrase Theorem <ref> in delta-matroid terminology as follows.
Let D be a binary normal delta-matroid. Then there is a full lc-sequence for D if and only if each even connected summand of D with nonempty ground set is of the form ({v},∅) for some element v in the ground set of D.
Since D is a binary normal delta-matroid, we have that D = 𝒟_H for some graph H. By Lemma <ref>, there is a full lc-sequence for D if and only if there is a full lc-sequence for H. Also observe that, if a graph H' consists of only an isolated vertex v, then 𝒟_H' = ({v},∅). By Theorem <ref>, it suffices to recall that (1) a graph H' is loopless if and only if 𝒟_H' is even (the if-direction is immediate and the only-if direction follows from the well-known fact that invertible zero-diagonal skew-symmetric matrices have even dimensions, see, e.g., <cit.>) and (2) for all X ⊆ V(G), the subgraph of H induced by X is a (possibly empty) union of connected components of H if and only if there is a summand of 𝒟_H with ground set X <cit.>.
Theorem <ref> does not hold for arbitrary delta-matroids. Indeed, the delta-matroid D = (V,S) with |V| = 3 and S = { X ⊆ V | |X| ≠ 1 } is connected but not even and so the right-hand side of the equivalence of Theorem <ref> trivially holds. However, D does not have a full lc-sequence since S does not contain any singletons. It would be interesting to see to which class of delta-matroids Theorem <ref> can be generalized.
We briefly mention that delta-matroids have been generalized to multimatroids in <cit.>. In this general setting, delta-matroids translate to a class of multimatroids called 2-matroids. Therefore, this section could also have been phrased in the setting of 2-matroids. Another class of multimatroids, called tight 3-matroids can also be associated to 4-regular multigraphs. See <cit.> for the case where the 4-regular multigraph is derived from the context of gene assembly in ciliates, which is a theory closely related to that of sorting by reversals, see Section <ref>. Although it is out of the scope of this paper, it would be interesting to study tight 3-matroids associated to 4-regular multigraphs from the context of sorting by reversals.
§ A CLOSELY RELATED THEORY: GENE ASSEMBLY IN CILIATES
Gene assembly is a process taking place during sexual reproduction of unicellular organisms called ciliates. During this process, a nucleus, called the micronucleus (or MIC for short), is transformed into another, very different, nucleus, called the macronucleus (or MAC for short). Each gene in the MAC is one block consisting of a number of consecutive[Actually, the MDSs overlap slightly in the MAC (the overlapping regions are called pointers), but this is not relevant for this paper.] segments called MDSs. These MDSs also appear in the corresponding MIC gene, but they can appear there in (seemingly) arbitrary order and orientation with respect to the MAC gene and they are moreover separated by noncoding segments called IESs. A (toy) example of a gene consisting of seven MDSs M_1,…,M_7 is given in Figures <ref> and <ref>, where the IESs are denoted by I_1,…,I_8 and if an MDS M_i in the MIC gene is in inverted orientation (i.e., rotated by 180 degrees) with respect to the MAC gene, then this is denoted by M̄_i (this is the standard notation in this theory, and would of course be written with a minus sign, i.e., -M_i, in the theory of sorting by reversals).
The postulated way in which a MIC gene is transformed into its MAC gene is through DCJ operations (called DNA recombination in this context), where MDSs that are not adjacent in the MIC gene but are adjacent in the MAC gene are aligned, cut and glued back such that the two MDSs become adjacent like they appear in the MAC. For example, segments w and z of Figure <ref> may be MDSs M_i and M_i+1, respectively, and x and y IESs. In the intramolecular model, the application of DCJ operations is restricted in such a way that they cannot result in MDSs appearing in different molecules <cit.> (however, it is allowed to apply two DCJ operations simultaneously, when each of them separately would result in a split of the molecule).
Starting from the MIC gene, Figure <ref>, we can construct a 4-regular multigraph in a similar way as for the theory of sorting by reversals. However, unlike before, the left-hand side of the first MDS M_1 and the right-hand side of the last MDS M_7 are not considered breakpoints. Hence, in Figure <ref>, we merge adjacent segments I_1 and M_1 (M_7 and I_4, respectively) in the MIC gene into a single segment called I_1 M_1 (M_7 I_4, respectively). Moreover, recall that before we introduced an anchor segment $ during the circularization, because the endpoints of a chromosome are breakpoints too. In the case of MIC genes, there are no endpoints that are breakpoints and so we omit the anchor segment $. Instead, during the circularization the rightmost segment I_8 is merged with the leftmost segment I_1 M_1 in the MIC gene into a single segment called I_8;I_1M_1, see Figure <ref>. Finally, because the “signed permutation” of Figure <ref> already contains intermediate segments (the IESs), we also do not have an expand step like in Figure <ref>. In this way, the intermediate segments are physical, while they are “virtual” in the theory of sorting by reversals.
The construction of a 4-regular multigraph from Figure <ref> is now identical as in the theory of sorting by reversals. Due the similarity of the “signed permutations” of Figures <ref> and <ref>, we see that the circle graph corresponding to Figure <ref> is obtained from the circle graph of Figure <ref> by removing vertices v_0 and v_7. See <cit.> for more details on how the 4-regular multigraphs, circle graphs and delta-matroids can be used to study gene assembly in ciliates.
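For readers who prefer an operational view, the interlacement test underlying circle graphs is easy to state in code. The sketch below uses a toy double occurrence word rather than the words derived from the figures above; two symbols are made adjacent exactly when their chords cross.

```python
def circle_graph(word):
    """Interlacement (circle) graph of a double occurrence word:
    two symbols are adjacent iff their occurrences interleave
    around the circle as u..v..u..v."""
    pos = {}
    for i, s in enumerate(word):
        pos.setdefault(s, []).append(i)
    assert all(len(p) == 2 for p in pos.values()), "not double occurrence"

    def interleave(u, v):
        (a1, a2), (b1, b2) = pos[u], pos[v]
        return a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2

    return {u: {v for v in pos if v != u and interleave(u, v)}
            for u in pos}

# Toy word: chords 1 and 2 cross, chords 2 and 3 cross, 1 and 3 do not.
print(circle_graph([1, 2, 1, 3, 2, 3]))   # {1: {2}, 2: {1, 3}, 3: {2}}
```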
From the discussion above it is not surprising that the theory of sorting by reversals and the theory of gene assembly in ciliates partly overlap. For example, in both theories local complementation plays an important role — indeed, as we mentioned in Section <ref> the Hannenhalli-Pevzner theorem was discovered independently in both theories. We remark that links between gene assembly in ciliates and sorting by reversals have also been appreciated in <cit.>.
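As a concrete illustration of the shared operation, a minimal sketch of (unsigned) local complementation on a simple graph is given below; complementing a path at its middle vertex yields a triangle.

```python
def local_complement(adj, v):
    """Local complementation at v: toggle every edge between two
    distinct neighbours of v; the rest of the graph is unchanged."""
    nbrs = list(adj[v])
    new = {u: set(N) for u, N in adj.items()}
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            if b in new[a]:
                new[a].discard(b); new[b].discard(a)
            else:
                new[a].add(b); new[b].add(a)
    return new

# On the path 1-2-3, complementing at 2 creates the edge 1-3.
path = {1: {2}, 2: {1, 3}, 3: {2}}
print(local_complement(path, 2))   # {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
```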
§ DISCUSSION
We have shown that the theory of 4-regular multigraphs, including their accompanying combinatorial structures such as circle graphs and delta-matroids, can be applied to the topic of sorting by reversals. The fact that a notion such as local complementation has been rediscovered in the context of sorting by reversals signifies the importance of the theory of 4-regular multigraphs.
This paper may serve as an introduction to the theory of 4-regular multigraphs for the audience familiar with sorting by reversals. It has covered only very little of the extensive body of knowledge that the theory of 4-regular multigraphs provides and that can be applied to the topic of sorting by reversals. For example, the notion of touch graph <cit.> (see also, e.g., <cit.>) of a circuit partition P is also likely to be useful for sorting by reversals. While omitting details, the touch graph of a circuit partition P_B belonging to species B in the context of sorting by reversals (and also in the context of gene assembly in ciliates) is of a very special form, that of a star graph, which likely has interesting consequences. Indeed, while only implicitly stated in <cit.>, this star graph property is key to the main result of <cit.> from the context of gene assembly in ciliates.
Since the theory of gene assembly in ciliates can also be fit into the theory of 4-regular multigraphs, we have also extended the links between the theories of gene assembly in ciliates and sorting by reversals as observed in <cit.>.
Finally, we have formulated the open problem of using the theory of 4-regular multigraphs in the case of DCJ operations in the presence of linear chromosomes (see Section <ref>) and also the open problem of generalizing the Hannenhalli-Pevzner theorem from graphs to a suitable subclass of delta-matroids (see Section <ref>).
§.§ Acknowledgements
We thank anonymous referees for their helpful comments on an earlier version of this paper.
| Edit distance measures for genomes can be used to approximate evolutionary distance between their corresponding species. A number of genome transformations have been used to define edit distance measures. In this paper we consider the well-studied chromosome transformation called reversal, which is an inversion of part of a chromosome <cit.>. If two given chromosomes can be transformed into each other through reversals, then the difference between these two chromosomes can be represented by a permutation, where the identity permutation corresponds to equality. As a result, transforming one chromosome into the other using reversals is called sorting by reversals. The reversal distance is the least number of reversals needed to accomplish this transformation (i.e., to sort the permutation by reversals). In <cit.> a formula is given for the reversal distance, leading to an efficient algorithm to compute reversal distance. The proof of that formula uses a notion called the breakpoint graph. Subsequent streamlining of this proof led to the introduction of additional notions <cit.> such as the overlap graph and a corresponding graph operation. A reversal can be seen as a special case of a double-cut-and-join (DCJ) operation that can operate either within a chromosome or between chromosomes. Similar as for reversals, one can define a notion of DCJ distance, and a formula for DCJ distance is given in <cit.>.
The theory of circuit partitions of 4-regular multigraphs was initiated in <cit.> and is currently well-developed with extensions and generalizations naturally leading into the domains of, e.g., linear algebra and matroid theory. In this paper we show that this theory can be used to study sorting by reversals, and more generally sorting by DCJ operations. Moreover, we show how various aspects of the theory of circuit partitions of 4-regular multigraphs relate to the topic of sorting by reversals. In particular, we show that the notion of an overlap graph and its corresponding graph operation from the context of sorting by reversals <cit.> correspond to circle graphs and the (looped) local complementation operation, respectively, of the theory of circuit partitions of 4-regular multigraphs. This leads to a reformulation of the Hannenhalli-Pevzner theorem <cit.> to delta-matroids. We remark however that the theory of circuit partitions of 4-regular multigraphs is too broad to fully cover here, and so various references are provided in the paper for more information.
It has been shown that various other research topics can also be fit into the theory of circuit partitions of 4-regular multigraphs: examples include the theory of ribbon graphs (or embedded graphs) <cit.> and the theory of gene assembly in ciliates <cit.>. As such all these research topics arising from different contexts turn out to be strongly linked, and results from one research topic can often be carried over to another. Indeed, it is not surprising that the Hannenhalli-Pevzner theorem has been independently discovered in the context of gene assembly in ciliates <cit.>.
This paper is organized as follows. In Section <ref> we recall sorting by reversals and in Section <ref> we associate a 4-regular multigraph and a pair of circuit partitions to a pair of chromosomes (where one is obtainable from the other by reversals). This leads to a reformulation of a known inequality of the reversal distance in terms of 4-regular multigraphs. In Section <ref> we associate a circle graph to the two circuit partitions and we show that the adjacency matrix representation of this circle graph reveals essential information regarding the reversal distance. We recall local complementation in Section <ref> and the Hannenhalli-Pevzner theorem in Section <ref>, and reformulate the Hannenhalli-Pevzner theorem in terms of delta-matroids in Section <ref>. Before discussing delta-matroids, we also recall sorting by DCJ operations in Section <ref>. We discuss the close connection of sorting by reversals and gene assembly in ciliates in Section <ref>. Finally, a discussion is given in Section <ref>. | null | null | null | We have shown that the theory of 4-regular multigraphs, including their accompanying combinatorial structures such as circle graphs and delta-matroids, can be applied to the topic of sorting by reversals. The fact that a notion such as local complementation has been rediscovered in the context of sorting by reversals signifies the importance of the theory of 4-regular multigraphs.
This paper may serve as an introduction to the theory of 4-regular multigraphs for the audience familiar with sorting by reversals. It has covered only very little of the extensive body of knowledge that the theory of 4-regular multigraphs provides and that can be applied to the topic of sorting by reversals. For example, the notion of touch graph <cit.> (see also, e.g., <cit.>) of a circuit partition P is also likely to be useful for sorting by reversals. While omitting details, the touch graph of a circuit partition P_B belonging to species B in the context of sorting by reversals (and also in the context of gene assembly in ciliates) is of a very special form, that of a star graph, which likely has interesting consequences. Indeed, while only implicitly stated in <cit.>, this star graph property is key to the main result of <cit.> from the context of gene assembly in ciliates.
Since the theory of gene assembly in ciliates can also be fit into the theory of 4-regular multigraphs, we have also extended the links between the theories of gene assembly in ciliates and sorting by reversals as observed in <cit.>.
Finally, we have formulated the open problem of using the theory of 4-regular multigraphs in the case of DCJ operations in the presence of linear chromosomes (see Section <ref>) and also the open problem of generalizing the Hannenhalli-Pevzner theorem from graphs to a suitable subclass of delta-matroids (see Section <ref>).
§.§ Acknowledgements
We thank anonymous referees for their helpful comments on an earlier version of this paper. | null |
http://arxiv.org/abs/1701.07919v1 | 20170127013046 | Integrated and Differential Accuracy in Resummed Cross Sections | [
"Daniele Bertolini",
"Mikhail P. Solon",
"Jonathan R. Walsh"
] | hep-ph | [
"hep-ph"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.08103v2 | 20170127163029 | Complementarity Analysis of Interference between Frequency-Displaced Photonic Wave-Packets | [
"Gustavo C. Amaral",
"Elisa F. Carneiro",
"Guilherme P. Temporão",
"Jean Pierre von der Weid"
] | quant-ph | [
"quant-ph"
] |
The complementarity relation between the visibility and the spectral distinguishability of frequency-displaced photonic wave-packets in a Hong-Ou-Mandel interferometer is studied. An experimental definition of K, the distinguishability parameter, is proposed and tested for the K^2+𝒱^2≤ 1 complementarity inequality when a consistent visibility parameter is defined. The results show that the spectral distinguishability is, indeed, complementary to the visibility and that the quantum aspect of the two-photon interference phenomenon can be examined by employing weak-coherent states.
In 1987, Hong, Ou, and Mandel developed an experiment capable of quantifying the degree of distinguishability between two photonic quantum states, the Hong-Ou-Mandel interferometer <cit.>. Two indistinguishable single-photons entering a symmetrical beam-splitter from different input ports are incapable of leaving the device through different output arms. The two-photon wave-packet that describes the collective input state experiences destructive interference and the photons leave the beam-splitter “bunched" together. The phenomenon is a fundamentally quantum one which translates the degree of distinguishability between the individual input states directly as the visibility of the Hong-Ou-Mandel interferogram: unitary visibility meaning complete indistinguishability; and null visibility meaning complete distinguishability. With the advent of quantum memories, the HOM interference has attracted great attention: a quantum memory's ability to preserve the entire photonic wave-packet can be assessed by measuring the visibility of the HOM interferogram after the states are stored and recovered from it <cit.>. Also, the HOM interference phenomenon is at the heart of the projection onto the Bell state basis <cit.>.
HOM interference and indistinguishability between photonic wave-packets gained great interest with the development of the Measurement-Device Independent Quantum Key Distribution (MDI-QKD) protocol <cit.>. Single-photons, which would be ideal for QKD protocols, are scarcely available, and faint laser pulses, or weak-coherent state (WCS) pulses, are employed as an approximation of a single-photon state <cit.>. HOM interference with WCSs, however, cannot reach unitary visibility even with perfect indistinguishability due to a non-zero multi-photon emission probability, with a maximum achievable visibility limited to 50% <cit.>. Nevertheless, and due to the fact that a WCS pulse will probabilistically contain a single photon, two-photon HOM interference of WCSs has been explained by a statistical decomposition of the input pairs of WCS pulses into pairs of Fock states; each possible outcome being weighted by its respective probability of occurrence <cit.>.
In a single-mode optical fiber setup, the spatial mode is pre-determined so that the degrees of freedom that may distinguish two photonic wave-packets are: their polarization mode; the mean number of photons; their temporal modes; and their spectral modes. In fact, a recent result has been presented where, by guaranteeing the indistinguishability of all degrees of freedom except for the spectral mode, the latter can be determined by analyzing the HOM interference of the wave-packets <cit.>. This result raises a question regarding the very nature of the HOM interference: how can two completely distinguishable photonic wave-packets (the spectral modes are disjoint) produce non-null HOM interference when the phenomenon is, itself, dependent on their indistinguishability?
Here, the proposition is presented that the interference phenomenon is a result of the inability of the detectors to identify the individual spectral distributions of the input states. For that, an experimental definition of the spectral distinguishability parameter between two photonic wave-packets is presented and shown to obey a complementarity relation when WCSs are employed. The impact of the presented results is two-fold: first, they show that the spectral distinguishability is complementary to the visibility of the HOM interferogram; and, second, that the two-photon interference phenomenon can be distilled from the WCS interference in a HOM interferometer, since complementarity is a strictly quantum characteristic.
The setup employed to examine the HOM interference of frequency-displaced photonic wave-packets is simple, and is depicted in Fig. <ref>: two optical beams with identical polarization modes, spatial modes, and mean number of photons, but centered at different frequencies, are sent to a time-resolved HOM interferometer; by adjusting the relative time τ between detections, the temporal modes of the wave-packets are synchronized. If the wave-packets have identical frequency modes, the result of the interferogram, as τ is swept, is the usual HOM dip <cit.>; however, in case the center frequencies are different, the beat note between them is observed in the interferogram <cit.>.
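To illustrate the expected interferogram, the numerical sketch below computes the coincidence rate of two Fock-state Gaussian wave-packets at a symmetric beam-splitter from the standard exchange-antisymmetrized two-photon amplitude (cf. the time-resolved treatment of Legero et al., cited above). The parameter values are illustrative only, and the WCS case discussed later adds a multi-photon coincidence floor on top of this pattern.

```python
import numpy as np

# Illustrative parameters only, not the exact experimental values.
sigma = 10e-9                       # wave-packet 1/e half-width (s)
delta = 2 * np.pi * 100e6           # 100 MHz centre-frequency splitting
t, dt = np.linspace(-80e-9, 80e-9, 8001, retstep=True)

def packet(omega, times):
    """Gaussian wave-packet f(t) with centre frequency omega."""
    return np.exp(-times**2 / (2 * sigma**2)) * np.exp(-1j * omega * times)

def coincidence_rate(tau):
    # Antisymmetric two-photon amplitude for detections at opposite
    # outputs of the 50:50 beam-splitter, at times t and t + tau.
    a = (packet(0.0, t) * packet(delta, t + tau)
         - packet(0.0, t + tau) * packet(delta, t))
    return np.sum(np.abs(a)**2) * dt

taus = np.linspace(-40e-9, 40e-9, 401)
C = np.array([coincidence_rate(x) for x in taus])
C /= C.max()
# C(tau) ~ exp(-tau**2 / (2 sigma**2)) * (1 - cos(delta * tau)):
# coincidences vanish at tau = 0 and beat at the 100 MHz displacement.
```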
An important observation is that, in the configuration of Fig. <ref>, the spatio-temporal modes and, thus, the frequency modes, are defined by the optical path, i.e., the wave-packet with frequency mode centered at ω_1,2 comes from input path 1,2. Any two single-photons generated by the optical sources that are directed to the interferometer through its upper (path 1) or lower (path 2) arms will be respectively described as
\begin{align}
|1\rangle_1 &= \frac{e^{-(t-\tau_1)^2/(2\sigma_1^2)}}{\sigma_1\sqrt{2\pi}}\, e^{-i\omega_1 t}\, \hat{a}^{\dagger}_1\, |0\rangle = f_1(t)\, \hat{a}^{\dagger}_1\, |0\rangle \\
|1\rangle_2 &= \frac{e^{-(t-\tau_2)^2/(2\sigma_2^2)}}{\sigma_2\sqrt{2\pi}}\, e^{-i\omega_2 t}\, \hat{a}^{\dagger}_2\, |0\rangle = f_2(t)\, \hat{a}^{\dagger}_2\, |0\rangle ,
\end{align}
where the fact that the sources emit photons with Gaussian-shaped wave-packets in two well-defined frequency modes ω_1 and ω_2 has been assumed for simplicity. τ_1,2 represent the relative delays of the wave-packets that must be compensated in the time-resolved HOM interferometer; σ_1,2 represent the half-widths at 1/e of the wave-packets; and â^†_1,2 are the creation operators for spatial modes 1 and 2. The position of the beam splitter has been taken as a reference, so all spatial dependence of f_1,2(t) has been neglected <cit.>.
In the particular case where the wave-packets are monochromatic (i.e., their spectral distributions are unit impulses), and the integration time of the detectors is sufficiently long, two wave-packets of different frequencies will always be distinguishable. In practice, however, the detectors have a frequency response and the linewidth of the wave-packets is different from zero, which leaves margin for errors in distinguishing the provenance of the photons; inside the region of indetermination where the spectral distributions overlap and the provenance of the photons cannot be perfectly determined, the wave-packets are indistinguishable and produce an interference pattern even though frequency-displaced. Writing the spectral decomposition of the interfering wave-packets, one has:
\begin{align}
|1\rangle_1 &= \int_{-\infty}^{\infty} d\omega\, X_1(\omega)\, \hat{a}^{\dagger}_1(\omega)\, |0\rangle \\
|1\rangle_2 &= \int_{-\infty}^{\infty} d\omega\, X_2(\omega)\, \hat{a}^{\dagger}_2(\omega)\, |0\rangle ,
\end{align}
where X_1(ω) and X_2(ω) are the Fourier transforms of the spatio-temporal functions f_1(t) and f_2(t) of Eq. <ref>, respectively. Based on Eq. <ref>, it is interesting to develop a physical notion of the region of indistinguishability between the wave-packets, i.e., the region within which the two-photon HOM interference will take place; a natural means of doing so is through the fidelity between these quantum states.
The fidelity measures the probability of confusing two quantum states if one is allowed to perform a single measurement over the system and, therefore, translates the distinguishability between them <cit.>. If two quantum states, say |ρ> and |σ>, are orthogonal (hence, perfectly distinguishable), the fidelity of these states, calculated as F(|ρ>,|σ>) = |<ρ|σ>|^2, equals zero; in case they are completely indistinguishable, F(|ρ>,|σ>)=1. Calculating the spectral fidelity between the interfering states is straightforward from the spectral decomposition of the spatio-temporal mode and from Eq. <ref>:
\begin{equation}
|\langle 1 | 1 \rangle_{1,2}|^2 =
\left| \int\!\!\int d\omega_1\, d\omega_2\, X_1(\omega_1)\, X_2^{*}(\omega_2)\, {}_1\langle 0 |\, \hat{a}(\omega_1)\, \hat{a}^{\dagger}(\omega_2)\, | 0 \rangle_2 \right|^2 .
\end{equation}
Upon close inspection of the above expression, one notes that the inner product on the rightmost part of the expression can be simplified, i.e., _1<0| â(ω_1) â^†(ω_2) |0>_2 = δ(ω_1-ω_2). This allows one to rewrite the expression in a simpler form:
\begin{equation}
F = \left| \int_{-\infty}^{\infty} d\omega\, X_1(\omega)\, X_2^{*}(\omega) \right|^2 .
\end{equation}
Note that, since the fidelity equals 1 whenever a measurement cannot distinguish between the states and zero whenever the states are completely distinguishable, a parameter of distinguishability between the states, say K, would take the form K=1-F, where F is the fidelity as calculated in Eq. <ref>. Furthermore, in order to calculate the fidelity between two interfering states in a time-tuned HOM interferometer, an extra simplification can be introduced into Eq. <ref>: for τ such that the temporal modes are perfectly synchronized, i.e., in the center of the HOM dip, the phases are fixed and can be removed from the integral. In fact, the phase difference Δϕ = ϕ_1 - ϕ_2 will be zero and, from the interference equation <cit.>, the cosine factor that appears multiplying the result will be unity. This way, the product of two otherwise complex functions X_1(ω) and X_2^∗(ω) simplifies to the product of the moduli of these complex functions, which is always positive.
\begin{equation}
K = 1 - \left| \int_{-\infty}^{\infty} d\omega\, |X_1(\omega)|\, |X_2(\omega)| \right|^2 .
\end{equation}
In order to measure the spectral distributions X_1(ω) and X_2(ω), one might resort to one of two techniques depending on the maximum intensity available for the optical sources: if the intensity is in the few-photon regime, the solution is to make use of the Few-Photon Heterodyne Spectroscopy described in <cit.>; if, on the other hand, the sources are attenuated laser sources, a non-attenuated sample of the optical signal may be directed to a classical high-resolution Optical Spectrum Analyser (OSA). Despite the practicality of the second technique, care must be taken due to the nature of the measurement in an OSA: the intensity of the light fields is measured rather than the field itself. Fortunately, the intensity can be related to the electric field by I_1,2 ∝ |X_1,2|^2, and both I_1,2 and |X_1,2|, the distributions one must determine for Eq. <ref>, are strictly positive. Therefore, using the positiveness of both |X_1,2| and I_1,2, the expression of K may be written as a function of the measurement of a standard OSA as
\begin{equation}
K = 1 - \alpha \left| \int_{-\infty}^{\infty} d\omega\, \sqrt{I_1(\omega)\, I_2(\omega)} \right|^2 ,
\end{equation}
where α is a normalization factor since the integral in Eq. <ref> may not be normalized. This integral, however, represents the second cross-moment of two distributions, which has a natural way of being normalized: α corresponds to the inverse of the square root of the product of the individual second moments of each distribution <cit.>. This has the upside of yielding a dimensionless figure of merit, which agrees with the definition of K. Substituting α in Eq. <ref> leads to the final expression of K:
\begin{equation}
K = 1 - \left| \frac{\int_{-\infty}^{\infty} d\omega\, \sqrt{I_1(\omega)\, I_2(\omega)}}{\sqrt{\int_{-\infty}^{\infty} d\omega\, I_1(\omega)\, \int_{-\infty}^{\infty} d\omega\, I_2(\omega)}} \right|^2 .
\end{equation}
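This definition can be evaluated directly from measured (or simulated) spectra. The sketch below, using illustrative Gaussian spectra split by 100 MHz (the 2.355 factor converts FWHM to standard deviation), shows K falling from near unity toward zero as the linewidths are broadened into overlap:

```python
import numpy as np

def spectral_K(omega, I1, I2):
    """One minus the normalized second cross-moment of the two
    measured spectra, i.e. the distinguishability parameter K."""
    d = omega[1] - omega[0]
    cross = np.sum(np.sqrt(I1 * I2)) * d
    norm = np.sqrt(np.sum(I1) * d * np.sum(I2) * d)
    return 1.0 - (cross / norm) ** 2

# Two Gaussian spectra split by 100 MHz; the width is set by the gate.
omega = np.linspace(-500e6, 500e6, 20001)
gauss = lambda mu, s: np.exp(-(omega - mu)**2 / (2 * s**2))

for fwhm in (10e6, 70e6, 200e6):          # narrow -> broad lines
    s = fwhm / 2.355                      # FWHM -> standard deviation
    print(fwhm / 1e6, "MHz:",
          round(spectral_K(omega, gauss(-50e6, s), gauss(50e6, s)), 3))
# Disjoint narrow lines give K ~ 1; broadening the lines into
# overlap drives K toward 0.
```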
Using the presented spectral definition of K, the proposition put forward here is that the visibility V in a HOM interferometer is tied to K by the complementarity relation K^2+V^2=1 <cit.>. Analogously to <cit.>, the visibility is defined as V=(R_dist-R_min)/R_dist, where R_dist is the count rate outside the interval of mutual coherence of the wave-packets. In Fig. <ref>, a graphical interpretation of the proposed complementarity relation between the spectral measure of K and V is presented for clarity.
Our experimental apparatus, presented in Fig. <ref>, comprises the state preparation, interference, and measurement stages. The frequency-displaced coherent states are generated by two independent tunable wavelength lasers, which operate within the telecommunication band, and whose wavelength can be fine-tuned by a feedback signal. The lasers' emission spectra have been adjusted to the absorption spectra of high-Q factor gas-cells through a PID system, which enables fine tuning of the order of MHz even within the absorption curve. The outputs of the frequency-tuned lasers are also polarization stabilized by active polarization controllers. Finally, variable optical attenuators and mechanical polarization controllers provide the fine-tuning necessary to guarantee indistinguishability in terms of polarization and intensity.
The prepared frequency-displaced WCSs are sent to the symmetrical beam-splitter BS_HOM. Connected to one of BS_HOM's outputs is a polarizing beam-splitter (PBS) with one of its ports connected to a superconducting single-photon detector (SSPD_ADJ), which provides the feedback for fine-tuning the input polarizations with the mechanical polarization controllers; by minimizing the counts in this SSPD for both input states, one guarantees that the polarization states are aligned. The HOM interferogram is determined by placing two SSPDs – master and slave – at the remaining outputs of the PBS and of BS_HOM. Before both SSPDs (master and slave), however, an electro-optical amplitude modulator (AM) is introduced, which is responsible for chopping the optical pulse that arrives at the detector. The effect of the AMs is, thus, to emulate the detector gate since the SSPDs are free-running and have, ideally, infinite integration time, i.e., the AMs impose an imperfect frequency response on the detectors. An internally triggered pulse generator triggers the AM of SSPD master and, in the event of a detection, the AM of SSPD slave is also triggered after a tunable time-delay τ; whenever detections from SSPD master and slave arrive at the same time at a coincidence unit, a coincidence count is registered; by sweeping the relative temporal delay τ between the pulses sent to AM master and AM slave, the post-selected HOM interferogram is generated.
For the experimental determination of the spectral complementarity relation between K and V, the center frequencies of the laser sources are set to a fixed value so that their separation corresponds to 100 MHz. Since the spectral linewidth of each laser is on the order of 10 MHz (ω_1,2± 5 MHz in the optical spectrum) and assuming that the trigger pulse sent to the AMs is larger than the corresponding coherence time of 100 ns, no interference pattern will be observed for these lasers, i.e., the photons will be absolutely distinguishable from the detector's perspective. In other words, upon measuring the wave-packets, the detectors are able to successfully identify their center frequency and, thus, determining their provenance, which causes no interference to take place – recall that the wave-packets must be indistinguishable for HOM interference. By narrowing the AM trigger pulse widths (p_t) from 100 ns down to 4 ns (which is the minimum achievable by the employed pulse generator), an interference pattern can be observed when p_t≤10 ns. Focusing, therefore, on the range between 10 to 4 ns, the effects of decreasing p_t are two-fold and complementary: the spectral distributions are enlarged, since the chopping pulse is narrower than the optical source's temporal coherence, and end up overlapping, causing K to decrease – refer to Figs. <ref> and <ref>; meanwhile, the visibility of the HOM interferogram rises (V rises) presenting a fixed 100 MHz interference fringe pattern – refer to Fig. <ref>.
Even though the spectra are also enlarged when p_t lies within the 100 to 10 ns region, the enlargement is not enough to create a non-zero spectral overlap, i.e., the product between I_1(ω) and I_2(ω) is always zero. With the integral in Eq. <ref> equal to zero, K is unity and, as expected from the proposal, V is null, i.e., the visibility of the interferogram is zero. At p_t≤10 ns, however, the linewidths are enlarged up to a point where a non-zero spectral overlap is already observed. Using the inverse relation between the linewidth and the coherence time, τ_c=0.66/Δω <cit.>, for gaussian wave-packets, we calculate that the FWHM at p_t=10 ns is Δω≈ 70 MHz (ω_1,2± 35 MHz), which is not enough for the distributions to overlap at half maximum (Δω = ω_1-ω_2=100 MHz), but enough so that the gaussian skirts overlap – refer to Fig. <ref> (b).
Before extracting the values of K^2+V^2 from the experimental acquisitions, an important issue must be addressed: the maximum achievable visibility is 50% due to the multi-photon contribution of WCSs <cit.> – a detailed explanation is given in the Supplementary Information of <cit.>. In order to distill the multi-photon contribution from the coincidence count rates, a new visibility parameter is defined as
\begin{equation}
\mathcal{V} = \frac{(R_{dist} - 0.5\, R_{dist}) - (R_{min} - 0.5\, R_{dist})}{R_{dist} - 0.5\, R_{dist}} .
\end{equation}
𝒱 is calculated by subtracting the coincidence floor imposed by the multi-photon pulses (0.5· R_dist), and attempts to take into account only the contributions from purely two-photon interference even when WCSs, in the few-photon regime, are employed. In other words, based on the fact that perfect indistinguishability between WCSs is given by V=0.5, the value of the visibility taking into account the quantum two-photon interference effect is defined as 𝒱=V/0.5.
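A short numerical sketch of this distillation, with hypothetical count rates chosen only for illustration, is given below together with a check of the complementarity inequality:

```python
def visibility_wcs(R_dist, R_min):
    """Distilled two-photon visibility: subtract the 0.5*R_dist
    coincidence floor from multi-photon WCS pulses, then normalize,
    which is equivalent to V_cal = V / 0.5."""
    floor = 0.5 * R_dist
    return ((R_dist - floor) - (R_min - floor)) / (R_dist - floor)

# Hypothetical rates for illustration: raw visibility V = 0.4.
R_dist, R_min = 1000.0, 600.0
V_cal = visibility_wcs(R_dist, R_min)   # 0.8
K = 0.55                                # spectral value, measured separately
print(V_cal, K**2 + V_cal**2)           # 0.8 and 0.9425 <= 1
```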
In Fig. <ref>, the experimental values of K^2+𝒱^2 are presented as a function of the width of the pulse p_t sent to the AMs. The error bars correspond to fitting errors of the waveforms presented in Figs. <ref> and <ref> from which the values of K and 𝒱 have been determined; first order approximations have been employed, for simplicity, in order to account for the non-linear association between K and 𝒱.
In conclusion, the distinguishability between photonic wave-packets has been discussed in the spectral context, focusing on the two-photon interference phenomenon in a Hong-Ou-Mandel interferometer fed with weak-coherent states. An experimental and theoretically sound definition of the spectral distinguishability parameter K has been put forward based on the fidelity between quantum states. The complementarity relation between K and a consistently defined visibility parameter 𝒱 has been observed with weak-coherent states. The results attest that K and 𝒱 hold a complementarity relation, i.e., even though a spectral measurement is not performed by the single-photon detectors, the mere possibility of assessing the spectral distinguishability information is enough to impact the visibility of the resulting interferogram. Furthermore, the quantum aspect of the two-photon interference in a Hong-Ou-Mandel interferometer could be examined with weak-coherent states.
§ ACKNOWLEDGEMENTS
G. C. Amaral is indebted to Dr. K. Lemr for invaluable discussions and comments. The authors acknowledge the financial support of the Brazilian agencies CNPq, CAPES, and FAPERJ.
hong1987measurement
C. Hong, Z.-Y. Ou, and L. Mandel, “Measurement of subpicosecond time intervals between two photons by interference,” Physical Review Letters, vol. 59, no. 18, p. 2044, 1987.
jin2013two
J. Jin, J. A. Slater, E. Saglamyurek, N. Sinclair, M. George, R. Ricken,
D. Oblak, W. Sohler, and W. Tittel, “Two-photon interference of weak
coherent laser pulses recalled from separate solid-state quantum memories,”
Nature Communications, vol. 4, 2013.
lo2012measurement
H.-K. Lo, M. Curty, and B. Qi, “Measurement-device-independent quantum key distribution,” Physical Review Letters, vol. 108, no. 13, p. 130503, 2012.
gisin2002quantum
N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, “Quantum cryptography,”
Reviews of Modern Physics, vol. 74, no. 1, p. 145, 2002.
mandel1983photon
L. Mandel, “Photon interference and correlation effects produced by independent quantum sources,” Physical Review A, vol. 28, no. 2, p. 929, 1983.
da2015linear
T. F. da Silva, G. C. Amaral, G. P. Temporão, and J. P. von der Weid,
“Linear-optic heralded photon source,” Physical Review A, vol. 92,
no. 3, p. 033855, 2015.
amaral2016few
G. C. Amaral, T. F. da Silva, G. P. Temporão, and J. P. von der Weid,
“Few-photon heterodyne spectroscopy,” Optics letters, vol. 41,
no. 7, pp. 1502–1505, 2016.
da2013proof
T. F. da Silva, D. Vitoreti, G. Xavier, G. do Amaral, G. Temporao, and
J. von der Weid, “Proof-of-principle demonstration of
measurement-device-independent quantum key distribution using polarization
qubits,” Physical Review A, vol. 88, no. 5, p. 052303, 2013.
legero2003time
T. Legero, T. Wilk, A. Kuhn, and G. Rempe, “Time-resolved two-photon quantum
interference,” Applied Physics B: Lasers and Optics, vol. 77, no. 8,
pp. 797–802, 2003.
vedral2006introduction
V. Vedral, Introduction to Quantum Information Science. Oxford University Press on Demand, 2006.
saleh1991fundamentals
B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics. Wiley, New York, 1991, vol. 22.
lewis1995fast
J. Lewis, “Fast normalized cross-correlation,” in Vision Interface, vol. 10, no. 1, 1995, pp. 120–123.
englert1996fringe
B.-G. Englert, “Fringe visibility and which-way information: An inequality,” Physical Review Letters, vol. 77, no. 11, p. 2154, 1996.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07512v2 | 20170125224431 | ACIA, not ACID: Conditions, Properties and Challenges | [
"Yuqing Zhu",
"Jianxun Liu",
"Mengying Guo",
"Wenlong Ma",
"Guolei Yi",
"Yungang Bao"
] | cs.DC | [
"cs.DC"
] |
ACIA, not ACID: Conditions, Properties and Challenges
Yuqing Zhu^⋆, Jianxun Liu^⋆, Mengying Guo^⋆, Wenlong Ma^⋆, Guolei Yi^†, Yungang Bao^⋆
^⋆Institute of Computing Technology, Chinese Academy of Sciences ^†Baidu
==============================================================================================================================================================================
§.§ Abstract
Although ACID was previously the golden rule for transaction support, durability is now not a basic requirement for data storage. Rather, high availability is becoming the first-class property required by online applications. We show that high availability of data is almost surely a stronger property than durability. We thus propose ACIA (Atomicity, Consistency, Isolation, Availability) as the new standard for transaction support. Essentially, the shift from ACID to ACIA is due to the change of assumed conditions for data management. Four major condition changes exist. With ACIA transactions, more diverse requirements can be flexibly supported for applications through the specification of consistency levels, isolation levels and fault tolerance levels. Clarifying the ACIA properties enables the exploitation of techniques used for ACID transactions, as well as bringing about new challenges for research.
§ INTRODUCTION
Transaction support has been recognized again as indispensable for online applications in recent years <cit.>. Not implementing transactions in highly available datastores is even considered one's biggest mistake <cit.>. Systems like Megastore <cit.> and Spanner <cit.> have emerged; and, academic solutions for transactional support have also been proposed, e.g., MDCC <cit.>, replicated commit <cit.>, and TAPIR <cit.>. These emergent proposals guarantee the ACID properties of transactions in distributed replicated datastores. More importantly, they simultaneously consider the guarantee of high availability through data replication.
High availability is now the de facto first-class property required by online applications. Without high availability, even the temporary inaccessibility of data or service can lead to great economic loss <cit.>. High availability was once guaranteed by abandoning ACID (Atomicity, Consistency, Isolation and Durability) transactions and supporting only BASE (Basically Available, Soft state and Eventual consistency) data access <cit.>. As the CAP Theorem <cit.> states that only two can be guaranteed among consistency, availability and partition tolerance, the developers of BASE systems trade consistency for availability, guaranteeing eventual consistency instead of strong consistency and transactions <cit.>. In fact, the CAP Theorem does not indicate that transactions must be relinquished for high availability, a point that was later clarified <cit.>. Efforts are thus devoted to supporting transactions with high availability in recent years <cit.>. A concept of highly-available transactions has even been proposed <cit.>. Transactions now must be supported with high availability for online applications.
In this paper, we propose ACIA to be the new standard for transaction support, instead of ACID, replacing Durability with Availability. As demonstrated by the years of practice with the BASE model, applications can work well with systems and data in soft state <cit.>, even on datacenter-scale power outages <cit.>. Soft state only requires that correct data or states can be regenerated at the expense of additional computation or file I/O on faults such as network partition <cit.>. With soft state guaranteeing availability, durability is no longer a fundamental property required by data management. In fact, as long as data is available, applications do not care whether the correct data is durably stored or regenerated on the fly in the system.
We show that high availability of data is in fact a stronger property than durability (Section <ref>). Highly available data can be made durable, while durable data is not necessarily highly available. Hence, ACIA transactions cannot be supported by all proposals designed for guaranteeing ACID transactions over replicated datastores. The fundamental reason is that ACIA transactions assume new conditions commonly made for online applications in system implementations (Section <ref>). These conditions are different from those generally assumed for the classic data management systems <cit.>. Assuming the classic conditions, e.g., predictable communication delay, some recent proposals <cit.> cannot support ACIA transactions. Even if some proposals can support ACIA transactions, they are not necessarily designed without redundant components, e.g., persistent logs <cit.>. We present a specification of ACIA properties, which constitute the major highlights of a new transaction paradigm (Section <ref>). To check whether the transaction is supported by a particular system, one only needs to test the system against the ACIA properties. Clarifying the ACIA properties enables implementations to combine different mechanisms that guarantee the properties respectively. Besides, it also enables us to explore the new space of research challenges and problems (Section <ref>).
§ HIGH AVAILABILITY VS. DURABILITY
High availability is a property specifying that a database, service or system is continuously accessible to users <cit.>. Used with transactions, high availability refers in this paper to a property of the database. We say that a database is highly available if any client connected to the system can access any data within the database at any time. Note that this description does not concern implementations. In comparison, the durability property of ACID requires that a committed transaction's effect be reflected by the database in the persistent storage, such that the effect of the committed transaction will not be lost even on power failures <cit.>.
As a property of the database, high availability shares a few similarities with durability. First, high availability is a property independent of the atomicity, consistency, and isolation properties. Each of the four properties must be guaranteed by the respective measures. Second, high availability places no constraints on data or replica consistency <cit.>. A client accessing the highly available database can find the data outdated or up-to-date; and, how inconsistent the returned data can be depends on the consistency level agreed between the client and the database. Third, high availability has no indications for isolation levels. ACID transactions can be implemented with different isolation levels <cit.>, while different isolation levels can result in different database states kept durable. Similarly for highly available database, the isolation levels supported by the system can also affect the database image observed by a client.
High availability of data is almost surely a stronger property than durability. On the one hand, highly available databases can be made durable easily. Given high availability of data, we can use a client program to traverse the whole database and store all returned values to a persistent storage, leading to durability. Nowadays, highly available systems usually distribute their storage across multiple geographic locations. It is highly unlikely that power failures will affect storage in all locations. That is, power failures will almost surely not affect highly available systems, unless the power failure is global (a case with probability zero).
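As a sketch of this argument, the snippet below derives a durable snapshot from a highly available store; store.scan() is a hypothetical traversal interface used only for illustration.

```python
import json, os

def snapshot(store, path):
    """Durability derived from availability: a client that can read
    every record at any time can persist the whole database image.
    `store.scan()` is a hypothetical traversal interface."""
    with open(path, "w") as out:
        for key, value in store.scan():    # traverse the whole database
            out.write(json.dumps({"k": key, "v": value}) + "\n")
        out.flush()
        os.fsync(out.fileno())             # snapshot survives power failure
```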
On the other hand, durable systems are not necessarily accessible at any time. To guarantee data durability, log-based crash recovery is extensively studied in ACID transactional database research <cit.>. The typical architecture of such durable and recoverable database systems is demonstrated in Figure <ref>. The four basic building blocks for such systems include the durable data image, durable logs, the buffer manager and the recovery manager. The recovery manager relies on the durable logs to recover a correct database on errors, while the buffer manager manages durable data and logs for program access. The underlying assumption of such an architecture is that recovery is allowed a system down time. In comparison, highly available systems handle failures or errors without system down time.
With high availability, a database will not become inaccessible because of partial system failures such as node failure or network partition. Replication is the typical mechanism to guarantee high availability of data. Exploiting fault-tolerant replication algorithms, partial system failures can be tolerated without bringing the system down. The number of replica failures a system can tolerate depends on the algorithms used and the consistency level specified. For example, eventually consistent systems can guarantee high availability as long as all replicas do not fail simultaneously <cit.>. A fault-tolerant system can be made highly available by fixing failed components timely. The architecture of a typical storage engine guaranteeing high availability is demonstrated in Figure <ref>. The data manager allows the data to reside in memory or in persistent storage.
§ NEW IMPLEMENTATION CONDITIONS
Conditions previously assumed for distributed system implementations become invalid now for large-scale distributed systems that are widely deployed to support the plethora of online applications. One example is whether a system can be inaccessible for some time. Previously, systems could have a down time for failure recovery (MTTR, Mean Time To Recovery); now, the system must remain accessible for the high availability requirement. Another example is whether a system can have synchronized clocks across servers. Previously, system servers were assumed to have synchronized clocks, enabling the timestamp-based distributed concurrency control (CC) <cit.>; now, the clocks on different servers can be coordinated at most to a certain precision, requiring a different CC design <cit.>.
We capture the changed conditions for system implementations by observing how Consensus algorithms are exploited in replicated systems to guarantee high availability <cit.>. The most widely used Consensus algorithm is Paxos <cit.>. Years of practice have proven that Paxos is feasible in the practical scenarios of large-scale distributed systems. The Paxos algorithm assumes the asynchronous system model <cit.> and crash-stop failure <cit.>. The crash-stop failure is related to the conditions of implementation compliance and node recovery moment, while the asynchronous system model is related to the phenomena of unpredictable message delays, inconsistent clocks and unreliable failure detection. The conditions and the phenomena must all be considered in system implementations.
Implementation Compliance. A server node in the system must behave as the implementation dictates. Besides, it must follow the protocol implemented by the system to send and receive messages. A node can either behave according to the implementation or crash to stay in a stop state. This behavior corresponds to the assumption of the crash-stop failure. Other failure modes also exist. A commonly studied failure is when the server can send arbitrary messages to other servers, without complying with the implementation. This failure mode is called the Byzantine failure <cit.>. At present, the most commonly assumed failure is the crash-stop failure.
Node Recovery Moment. In large-scale distributed systems, the failures of individual server node are considered common situations, which must be tolerated by the system software. Replication is the common technique for fault tolerance and high availability. While the model of replicated state machines (RSM) <cit.> offers a measure to describe and analyze replication, Consensus algorithms coordinate the RSM to process commands properly even under node failures. The decision of each command for the RSM is modeled as a Consensus problem. Designed to solve the Consensus problem <cit.>, one run of a Consensus algorithm can reach a single agreement on the command for the RSM (denoted as single-decree in a related work <cit.>). Therefore, multiple runs of the algorithm are needed to make the system functional.
A node cannot recover and rejoin the system during a run of the Consensus algorithm. During a run, Consensus algorithms generally assume a static membership of nodes (called participants, acceptors or learners) <cit.>. Nodes can leave the membership due to failures but not join the membership. Rather, the recovery and the change of membership must be handled before or after a run of the Consensus algorithm. Otherwise, extra mechanisms called reconfiguration <cit.> must be added to the implementation. In fact, a recovered node does not need to join the system during an algorithm run unless the number of failed nodes exceeds the algorithm's fault tolerance level.
Unpredictable Message Delay. The delay for a message to reach its receiver is not predictable. This condition exists because large-scale systems supporting online applications can be globally distributed and the wide-area system network is highly unpredictable. It is possible that a message sent by one server travels in the system network for an indefinitely long time, making the message look like a lost message. Note that this condition also models the situation that messages can get lost for some undetermined reasons, e.g., network congestion. Implementations relying on predictable communication delays <cit.> are not suitable for the real applications with this condition.
Inconsistent Clocks. Different servers in the same system can hardly have consistent clocks. The reason is two fold. First, the local clocks of servers can drift independently as time passes. Second, the communication delay is neither constant nor predictable. Although techniques exist to synchronize the clocks to a certain precision <cit.>, designs relying on precise timestamps <cit.> will generally not work in highly available systems. To track the global ordering of events, mechanisms such as vector clocks are needed <cit.>.
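A minimal sketch of one such mechanism, the classic vector clock, is given below; it orders events causally without any synchronized server clocks.

```python
class VectorClock:
    """Causal ordering of events without synchronized server clocks."""

    def __init__(self, node, nodes):
        self.node = node
        self.clock = {n: 0 for n in nodes}

    def tick(self):                      # local event
        self.clock[self.node] += 1

    def stamp(self):                     # attach to an outgoing message
        self.tick()
        return dict(self.clock)

    def merge(self, received):           # on message delivery
        for n, c in received.items():
            self.clock[n] = max(self.clock.get(n, 0), c)
        self.tick()

    def happened_before(self, other):
        return (all(self.clock[n] <= other.get(n, 0) for n in self.clock)
                and self.clock != other)
```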
Unreliable Failure Detection. A server cannot precisely distinguish message slowness from failures of other servers. Since a message can travel in the system for an indefinitely long time, a server has no way to differentiate whether its message is slow or the interacting server has failed. Furthermore, if a server is not receiving messages from any other server, it can hardly tell whether all other servers fail, whether a network partition happens, or whether all messages are suddenly travelling slowly. None of the three conditions can be made certain of. That is, reliable failure detection is not possible, which rules out all solutions relying on it <cit.>. To solve the problem in practice, a server will resend its messages and take communication timeouts as the signal for server failures <cit.>. Therefore, the corresponding condition for failure detection is that server failures can be detected in some way such as timeouts.
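The sketch below illustrates such a timeout-based detector; note that it can only suspect peers, since, as argued above, slowness and failure are indistinguishable.

```python
import time

class TimeoutFailureDetector:
    """Timeout-based detector: it can only *suspect* a peer, because
    a slow message and a crashed server are indistinguishable."""

    def __init__(self, peers, timeout_s=2.0):
        self.timeout = timeout_s
        self.last_heard = {p: time.monotonic() for p in peers}

    def on_heartbeat(self, peer):
        self.last_heard[peer] = time.monotonic()   # alive just now

    def suspected(self):
        now = time.monotonic()
        return {p for p, seen in self.last_heard.items()
                if now - seen > self.timeout}
```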
Conditions for ACIA vs. ACID. We summarize the changes of typical implementation conditions from ACID to ACIA in Table <ref>. Four main changes exist. First, classic ACID designs assume the system recovers with down time, while ACIA, requiring high availability, assumes no system down time. Second, classic ACID designs assume predictable message delays to enable timestamp-based mechanisms, while ACIA assumes unpredictable message delays. Third, classic ACID designs are for systems with limited distribution and synchronized server clocks, while ACIA is for large-scale systems that cannot have synchronized server clocks. Fourth, classic ACID systems stop and restart on failure detection and recovery, while ACIA systems assume built-in fault-tolerance mechanisms. These changes and differences hold in general: there exist early systems assuming some of the conditions of ACIA systems, but not all conditions are considered. Now with ACIA requirements, all the conditions must be considered and handled in the system implementations.
§ SPECIFICATION OF ACIA PROPERTIES
Given the conditions of Section <ref>, the four properties of ACIA are specified as follows.
* Atomicity. The effects of either all or none of the operations in a transaction are reflected in the database, with the user knowing which of the two results is.
* Consistency. Each successful transaction can commit only legal results, preserving the consistency of the database. Legal results comply with rules and consistency levels specified for the database, e.g., integrity constraints <cit.> and replica consistency levels <cit.>.
* Isolation. Operations within a transaction must be isolated from other transactions running concurrently. How much transactions can be isolated from other transactions is defined as the isolation levels <cit.>, which are guaranteed by different concurrency control mechanisms <cit.>.
* Availability. Once a transaction has committed its results, the system must guarantee that these results are reflected in the database, whose data can be accessed by any client connected to the system.
The Clients' View. Previously with ACID, the client understands that transactions are atomic and durable and that the consistency and the isolation conditions can be specified for different applications. Typical consistency level agreements are integrity constraints like correlated updates for foreign keys <cit.>. Typical isolation level agreements include serializability, snapshot isolation and read committed <cit.>.
With ACIA, the database is guaranteed to be continuously accessible and transactions are atomic. The various requirements of the client can be flexibly supported through the specification of consistency levels, isolation levels and fault tolerance levels, as sketched below. Different from ACID, the consistency levels for ACIA need to include replica consistency specifications. Besides, new isolation levels are possible and to be added <cit.>. The level of fault tolerance can also be specified, and it can be traded off against performance. For example, a low level of fault tolerance enables the system to use fast Paxos for low latency, while a high level can result in the system using the classic Paxos but with higher latency <cit.>.
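As an illustration of how such agreements might be expressed, the sketch below uses a hypothetical per-transaction specification; the level names are illustrative, not a standard API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TxnLevels:
    """Per-transaction agreement between a client and an ACIA system.
    The level names are illustrative, not a standard API."""
    consistency: str       # e.g. "linearizable", "causal", "eventual"
    isolation: str         # e.g. "serializable", "snapshot", "read-committed"
    fault_tolerance: int   # number f of replica failures to survive

# A latency-sensitive transaction trades fault tolerance for speed;
# a critical one chooses stronger levels at a higher latency cost.
fast = TxnLevels("causal", "read-committed", fault_tolerance=1)
safe = TxnLevels("linearizable", "serializable", fault_tolerance=2)
```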
Implementation. Clarifying the ACIA properties enables us to exploit early research results. Similar to the implementation of ACID transactions, the support of ACIA transactions requires the implementation of consistency compliance, concurrency control and commit coordination, as well as replica control. In the past, the former three aspects were discussed separately with different measures. For example, consistency compliance can be about how foreign keys can be updated efficiently; concurrency control can be about which isolation levels are best for an application and which scheme is most efficient for a workload; and commit coordination is about how atomicity is guaranteed through different protocols like 2PC or 1PC. We can thus exploit and combine such mechanisms to guarantee the ACIA properties.
Although the four properties of ACIA must be guaranteed simultaneously for transactions, the guarantees of the first three properties are independent. As each of the three properties is guaranteed through different schemes, numerous combinations of the schemes are possible. Furthermore, since the consistency and the isolation can be guaranteed with multiple choices, various combinations of these choices are possible, leading to flexible application usages. Current solutions to transaction support in highly-available systems usually intertwine the mechanisms that guarantee these properties. For example, highly-available transactions <cit.> mix replica consistency and availability with isolation levels, leaving a vague image blurring replica control and concurrency control. Clarifying the properties of ACIA enables further exploration in system designs and implementations.
§ CHALLENGES
We now discuss the challenges in supporting ACIA transactions. These challenges arise mainly due to the changes of implementation conditions.
Atomic Commit Problem Redefined. The classic problem of atomic commit assumes each participant serves orthogonal data units for a distributed transaction <cit.>. Many widely used protocols are designed to solve this classic atomic commit problem <cit.>. For ACIA transactions, multiple participants can serve the same data units. These participants can fail independently, but not all participants serving the same data unit fail simultaneously. Obviously, the classic problem definition of atomic commit is not suited for ACIA transactions. A new problem definition of atomic commit is needed.
Furthermore, typical solutions to the classic atomic commit problem rely on persistent logs <cit.>. With high availability of data, keeping durable logs is not a wise choice. Therefore, new mechanisms not using persistent logs need to be devised to solve the new atomic commit problem of ACIA. In fact, as the classic atomic commit solutions follow a vote-after-decide process <cit.>, we can take the vote-before-decide alternative and transform the new atomic commit problem into the Consensus problem <cit.>, making a solution based on existing algorithms possible.
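A sketch of this vote-before-decide alternative is given below; the group and consensus interfaces are hypothetical, and the point is that the commit/abort decision itself is replicated by consensus rather than logged persistently.

```python
def acia_commit(txn, groups, consensus):
    """Vote-before-decide atomic commit on top of a consensus layer.
    `groups` are the replica groups serving the transaction's data
    units, and `consensus.propose()` returns the single agreed value
    of a consensus instance; both interfaces are hypothetical."""
    votes = [g.prepare(txn) for g in groups]        # True = "yes" vote
    proposal = "commit" if all(votes) else "abort"
    # The decision itself is agreed on by a replicated consensus
    # instance (e.g. Paxos), so it survives coordinator failure
    # without any persistent log.
    decision = consensus.propose(txn.id, proposal)
    for g in groups:
        g.finish(txn, decision)
    return decision
```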
Comprehensive Consistency Levels. The consistency of ACID is enforced through the compliance check of all defined rules, including constraints, cascades, triggers, and any combination thereof. These rules are based on a single-replica database image. In comparison, ACIA databases are inherently distributed and replicated. Therefore, the rules of constraints, cascades, triggers, and their combinations must be extended to consider replicas. Furthermore, there exist multiple consistency levels for replicated data, and these consistency levels are allowed by different applications <cit.>. The following questions thus remain to be answered: (1) how can these replica consistency levels be combined with classic consistency rules to offer comprehensive consistency levels? (2) which of the comprehensive consistency levels are allowed by applications? and, (3) how can the new consistency levels be specified in an application-understandable manner?
Application-Understandable Isolation Levels. Classic concurrency control mechanisms study how operations of different transactions can be ordered correctly. For ACIA transactions, concurrency control must also consider how each operation can be ordered on different replicas, leading to a more complex global view of operation executions. The phenomena-based specification <cit.> of classic isolation levels cannot cover all isolation levels possible for ACIA transactions, e.g., partition-based isolation levels <cit.>. The spectra of isolation levels possible for ACIA transactions need to be studied, and a corresponding specification is in need. Besides, the specification must be made application-understandable, as isolation levels are mainly for applications to choose the correct implementations.
Failure Detection Mechanisms & Fault-Tolerance Levels. Failure detection is the basis for fault tolerance. Systems supporting ACIA transactions must have built-in fault tolerance mechanisms and thus failure detection mechanisms. Failure detections with different reliability require different solutions <cit.>, which can tolerate varied numbers of failures (denoted as fault-tolerance levels). Current implementations for failure detection are mainly based on timeouts <cit.>. Systems implementing failure detectors with different reliability remain to be devised, and a specification of the fault-tolerance levels supported by such systems needs to be provided.
§ CONCLUSION
This paper makes a key observation that the high availability requirement of data has changed the conditions for transaction support. In this paper, we have shown that high availability is almost surely a stronger property than durability. On proposing ACIA instead of ACID, we have investigated the conditions for system implementations supporting ACIA transactions. The change of implementation conditions leads to new challenges for transaction support in large-scale distributed systems. We analyze the challenges regarding each property of ACIA. The implementation-independent specification of ACIA properties not only enables the reuse of mechanisms previously devised to support ACID transactions, but also opens up a new research space for future exploration.
Acknowledgments This work is supported in part by the State Key Development Program for Basic Research of China (Grant No. 2014CB340402) and the National Natural Science Foundation of China (Grant No. 61303054).
| Transaction support has been recognized again as indispensable for online applications in recent years <cit.>. Not implementing transactions in highly available datastores is even considered one's biggest mistake <cit.>. Systems like Megastore <cit.> and Spanner <cit.> have emerged; and, academic solutions for transactional support have also been proposed, e.g., MDCC <cit.>, replicated commit <cit.>, and TAPIR <cit.>. These emergent proposals guarantee the ACID properties of transactions in distributed replicated datastores. More importantly, they simultaneously consider the guarantee of high availability through data replication.
High availability is now the de facto first-class property required by online applications. Without high availability, even the temporary inaccessibility of data or service can lead to great economic loss <cit.>. High availability is once guaranteed by abandoning ACID (Atomicity, Consistency, Isolation and Durability) transactions and supporting only BASE (Basically Availability, Soft state and Eventual consistency) data access <cit.>. As the CAP Theorem <cit.> states that only two can be guaranteed among consistency, availability and partition tolerance, the developers of BASE systems trade consistency for availability, guaranteeing eventual consistency instead of strong consistency and transactions <cit.>. In fact, the CAP Theorem does not indicate that transactions must be relinquished for high availability, which is then clarified <cit.>. Efforts are thus devoted to supporting transactions with high availability in recent years <cit.>. A concept of highly-available transactions is even proposed <cit.>. Transactions now must be supported with high availability for online applications.
In this paper, we propose ACIA to be the new standard for transaction support, instead of ACID, replacing Durability with Availability. As demonstrated by the years of practice with the BASE model, applications can work well with systems and data in soft state <cit.>, even on the datacenter-scale power outage <cit.>. Soft state only requires that correct data or states can be regenerated at the expense of additional computation or file I/O on faults such as network partition <cit.>. With soft state guaranteeing availability, durability is no longer a fundamental property required by data management. In fact, as long as data is available, applications do not care about whether the correct data is durably stored or regenerated on the fly in the system.
We show that high availability of data is in fact a stronger property than durability (Section <ref>). Highly available data can be made durable, while durable data is not necessarily highly available. Hence, ACIA transactions cannot be supported by all proposals designed for guaranteeing ACID transactions over replicated datastores. The fundamental reason is that ACIA transactions assume new conditions commonly made for online applications in system implementations (Section <ref>). These conditions are different from those generally assumed for the classic data management systems <cit.>. Assuming the classic conditions, e.g., predictable communication delay, some recent proposals <cit.> cannot support ACIA transactions. Even some proposals can support ACIA transactions, they are not necessarily designed without redundant components, e.g., persistent logs <cit.>. We present a specification of ACIA properties, which constitute the major highlights of a new transaction paradigm (Section <ref>). To check whether the transaction is supported by a particular system, one only needs to make an ACIA test of the system's quality. Clarifying the ACIA properties enables implementations to combine different mechanisms that guarantee the properties respectively. Besides, it also enables us to explore the new space of research challenges and problems (Section <ref>). | null | null | null | null | This paper makes a key observation that the high availability requirement of data has changed the conditions for transaction support. In this paper, we have shown that high availability is almost surely a stronger property than durability. On proposing ACIA instead of ACID, we have investigated the conditions for system implementations supporting ACIA transactions. The change of implementation conditions leads to new challenges for transaction support in large-scale distributed systems. We analyze the challenges regarding each property of ACIA. The implementation-independent specification of ACIA properties not only enables the reuse of mechanisms previously devised to support ACID transactions, but also opens up a new research space for future exploration.
Acknowledgments This work is supported in part by the State Key Development Program for Basic Research of China (Grant No. 2014CB340402) and the National Natural Science Foundation of China (Grant No. 61303054).
acm |
http://arxiv.org/abs/1701.07747v1 | 20170126154632 | Dipolar Dark Matter as an Effective Field Theory | [
"Luc Blanchet",
"Lavinia Heisenberg"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-th"
] |
Institut d'Astrophysique de Paris — UMR 7095 du CNRS, Université Pierre & Marie Curie, 98bis boulevard Arago, 75014 Paris ([email protected])
Institute for Theoretical Studies, ETH Zurich, Clausiusstrasse 47, 8092 Zurich, Switzerland
Dipolar Dark Matter (DDM) is an alternative model motivated by the
challenges faced by the standard cold dark matter model to describe
the right phenomenology at galactic scales. A promising realisation of
DDM was recently proposed in the context of massive bigravity
theory. The model contains dark matter particles, as well as a vector
field coupled to the effective composite metric of bigravity. This
model is completely safe in the gravitational sector thanks to the
underlying properties of massive bigravity. In this work we
investigate the exact decoupling limit of the theory, including the
contribution of the matter sector, and prove that it is free of ghosts
in this limit. We conclude that the theory is acceptable as an
Effective Field Theory below the strong coupling scale.
95.35.+d, 04.50.Kd
Dipolar Dark Matter as an Effective Field Theory
Luc Blanchet and Lavinia Heisenberg
================================================
§ INTRODUCTION
We are witnesses of centenaries. The year 2015 marked the 100th
anniversary of Albert Einstein's elaborate theory of General
Relativity (GR), while 2016 celebrated the centenary of the first
paper on gravitational waves by the announcement of their experimental
detection <cit.>. GR meets the requirements of the underlying
physics in a broad range of scales, from black hole to solar system
size. It stood up to intense scrutiny and prevailed against all
alternative competitors. It constitutes the bedrock upon which our
fundamental understanding of gravity relies. However,
some important questions remain.
The lack of renormalizability motivates the modifications of gravity
in the ultraviolet (UV), that incorporate the quantum nature of
gravity. The singularities present in the classical theory could be
regularized by the new physics <cit.>. The UV
modifications might also dictate a different scenario for the early
Universe as an alternative to inflation <cit.>. The
inflaton field of the standard picture might then be just a remnant of
the modification of gravity in the UV.
From a more observational point of view, GR faces additional
challenges on cosmological scales. In order to account for the
observed amount of ingredients of the Universe, it is necessary to
introduce dark matter and dark energy despite their unclear
origin. Notwithstanding remarkable efforts, dark matter has so
far not been directly detected. Concerning the dark energy,
the standard model in form of a cosmological constant Λ
accounts for most of the observations even though it faces the
unnaturalness problem <cit.>. Combined with the
non-baryonic cold dark matter (CDM) component, the model explains
remarkably well the observed fluctuations of the cosmic microwave
background and the formation of large scale structures.
Albeit the many successes of the Λ-CDM model at large scales,
it has difficulties to explain the observations of dark matter at
galactic scales. For instance, it is not able to account for the tight
correlations between dark and luminous matter in galaxy
halos <cit.>. In this respect, the first
discrepancy comes from the observed Tully-Fisher
relation between the baryonic mass of spiral galaxies and their
asymptotic rotation velocity. Another discrepancy, perhaps more
fundamental, comes from the correlation between the
presence of dark matter and the acceleration scale <cit.>. The prevailing view regarding these problems is that they
should be resolved once we understand the baryonic processes that
affect galaxy formation and evolution <cit.>. However, this
explanation is challenged by the fact that galactic data are in
excellent agreement with the MOND (MOdified Newtonian Dynamics)
empirical formula <cit.>. From a phenomenological
point of view, this formula accommodates remarkably well all
observations at galactic scales. Unfortunately, extrapolating the
MOND formula to the larger scale of galaxy clusters yields an
incorrect dark matter distribution <cit.>.
The ideal scenario would be to have a hybrid model in which the
properties of the Λ-CDM model are naturally incorporated on
large scales, whereas the MOND formula would hold on galactic
scales. There have been many attempts to embed the physics behind the
MOND formula into a viable relativistic theory, either via
invoking new propagating fields without dark matter <cit.>, or by considering
MOND as an emergent phenomenology <cit.>.
Here we consider a model of the latter class, called Dipolar Dark
Matter (DDM) <cit.>. The most compelling
version of DDM has been recently developed, based on the formalism of
massive bigravity theory <cit.>. To
describe the potential interactions between the two metrics of
bigravity the model uses the effective composite metric introduced in
Refs. <cit.>. Two species of dark
matter particles are separately coupled to the two metrics, and an
internal vector field that links the two dark matter species is
coupled to the effective composite metric. The MOND formula is
recovered from a mechanism of gravitational polarization in the non
relativistic approximation. The model has the potential to
reproduce the physics of the Λ-CDM model at large
cosmological scales.
In the present paper we address the problem of whether there are ghost
instabilities in this model. The model itself <cit.> will be reviewed in Sec. <ref>. The model is safe
in the gravitational sector because it uses the ghost-free framework of massive bigravity.
The interactions of the matter fields with the effective metric reintroduce a ghost
in the matter sector beyond the strong coupling scale, as found in <cit.>.
In our model, apart from this effective coupling the different
species of matter fields interact with each other via an internal vector field.
This additional coupling might spoil the property of ghost freedom within the strong
coupling scale.
We therefore investigate, in Sec. <ref>,
the exact decoupling limit (DL) of our model, crucially including the
contributions coming from the matter sector and notably from the
internal vector field. The model dictates what are the relevant
scalings of the matter fields in terms of the Planck mass in the
DL. Using that, we shall prove that the theory is free of ghosts in the
DL and conclude that it is acceptable as an Effective Field Theory
below the strong coupling scale. We end the paper with a few
concluding remarks in Sec. <ref>.
§ DIPOLAR DARK MATTER
The model that we would like to study in this work is the dark matter
model proposed in Ref. <cit.> where the Dipolar Dark
Matter (DDM) at small galactic scales is connected to bimetric gravity
based on the ghost-free bimetric formulation of massive
gravity <cit.>. The action of a successful
realisation was investigated in <cit.> and we would
like to push forward the analysis performed there. The Lagrangian is
the sum of a gravitational part, based on massive bigravity theory,
plus a matter part: ℒ = ℒ_grav +
ℒ_mat. The gravitational part reads
ℒ_grav = M_g^2/2√(-g) R_g +
M_f^2/2√(-f) R_f + m^2M_eff^2
√(-g_eff) ,
where R_g and R_f denote the Ricci scalars of the two metrics
g_μν and f_μν, with the corresponding Planck scales
M_g and M_f and the interactions carrying another Planck scale
M_eff, together with the graviton's mass m. In this
formulation, the ghost-free potential interactions between the two
metrics are defined as the square root of the determinant of the
effective composite metric <cit.>
g^eff_μν=α^2 g_μν +2αβ 𝒢^eff_μν +β^2 f_μν ,
with the arbitrary dimensionless parameters α and β
(typically of the order of one). Here
𝒢^eff_μν denotes the effective metric in the
previous DDM model <cit.>, given by
𝒢^eff_μν = g_μρX^ρ_ν where
X=√(g^-1f), or equivalently 𝒢^eff_μν =
f_μρY^ρ_ν where Y=√(f^-1g). It is trivial to see
that the square root of the determinant of this effective metric
g^eff_μν corresponds to the allowed ghost-free
potential interactions <cit.>.
The matter part of the model will consist of ordinary baryonic matter
and a dark sector including dark matter particles. The crucial feature
of the model is the presence of a vector field 𝒜_μ in
the dark sector, that is sourced by the mass currents of dark matter
particles and represents a “graviphoton” <cit.>. This
vector field stabilizes the DDM medium and ensures a mechanism of
“gravitational polarisation”. The matter action reads
ℒ_mat = -
√(-g)(ρ_bar+ρ_g) - √(-f) ρ_f
+ √(-g_eff)[
𝒜_μ(j_g^μ-j_f^μ) + λ
M_eff^2 𝒲(𝒳) ] .
Note the presence of a non-canonical kinetic term for the vector field
in form of a function 𝒲(𝒳) of
𝒳 = -
ℱ^μνℱ_μν/4λ ,
with the field strength defined by ℱ^μν≡
g_eff^μρ
g_eff^νσℱ_ρσ where
ℱ_μν = ∂_μ𝒜_ν -
∂_ν𝒜_μ. The form of the function
𝒲(𝒳) has been determined by demanding that the
model reproduces the MOND phenomenology at galactic
scales <cit.>. This
corresponds to the limit 𝒳→ 0 and we have
𝒲(𝒳)=𝒳-2/3 (α+β)^2
𝒳^3/2+𝒪(𝒳^2) ,
so that the leading term in the action (<ref>) is
λ M_eff^2 𝒲(𝒳) = -
M_eff^2/4ℱ^μνℱ_μν +
𝒪(ℱ^3) .
Hence, we observe that the coupling scale of the vector field is
dictated by M_eff, while the parameter λ enters into
higher-order corrections. In order to recover the correct MOND regime
for very weak accelerations of baryons in the ordinary g sector,
i.e. below the MOND acceleration scale a_0, these constants
have been determined as <cit.>[Recall also
that the MOND acceleration a_0 is of the order of the cosmological
parameters, and thus is extremely small in Planck units, a_0∼
10^-63 M_Pl.]
M_eff=√(2) r_g
M_Pl and λ=a_0^2/2 .
Here M_Pl represents the standard Planck constant of GR and
the constant r_g is defined below. It is worth mentioning that the
standard Newtonian limit in the ordinary g sector is obtained by
imposing the relation
M_g^2+α^2/β^2M_f^2= M_Pl^2 .
Thus, in this model the three mass scales M_g, M_f and
M_eff are of the order of the Planck mass.
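As a quick sanity check of the expansion of 𝒲 above, the following sketch (ours, not part of the original analysis; the symbols F2, lam, M and ab are hypothetical stand-ins for ℱ_μνℱ^μν, λ, M_eff and α+β) verifies with sympy that the leading term of λ M_eff^2 𝒲(𝒳) is indeed the canonically normalized Maxwell term.

```python
import sympy as sp

# Hypothetical symbols: F2 stands for F_{mu nu} F^{mu nu} (taken negative so
# that X = -F2/(4 lam) > 0 and X**(3/2) is real), lam for lambda, M for M_eff,
# ab for (alpha + beta).
F2 = sp.symbols('F2', negative=True)
lam, M, ab = sp.symbols('lam M ab', positive=True)

X = -F2 / (4 * lam)
W = X - sp.Rational(2, 3) * ab**2 * X**sp.Rational(3, 2)

leading = sp.simplify(lam * M**2 * X)
print(leading)                        # -F2*M**2/4, i.e. -(M_eff^2/4) F_{mu nu} F^{mu nu}
remainder = sp.simplify(lam * M**2 * W - leading)
print(remainder)                      # proportional to (-F2)**(3/2): an O(F^3) correction
```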
We represent the scalar energy densities of the ordinary pressureless
baryons, and the two species of pressureless dark matter particles by
ρ_bar, ρ_g and ρ_f respectively. Such densities
are conserved in the usual way with respect to their respective
metrics, hence ∇^g_μ(ρ_bar u_bar^μ)=0,
∇^g_μ(ρ_g u_g^μ)=0 and ∇^f_μ(ρ_f u_f^μ)=0,
with the four velocities being normalized as
g_μνu_bar^μ u_bar^ν=-1, g_μνu_g^μ
u_g^ν=-1 and f_μνu_f^μ u_f^ν=-1. The respective
stress-energy tensors are defined as
T_bar^μν=ρ_bar u_bar^μ
u_bar^ν, T_g^μν=ρ_g u_g^μ u_g^ν and
T_f^μν=ρ_f u_f^μ u_f^ν. The pressureless baryonic fluid
obeys the geodesic law of motion a^bar_μ≡
u_bar^ν∇^g_ν u^bar_μ=0, hence
∇_g^ν T^bar_μν=0. On the other hand, because of
their coupling to the vector field, the dark matter fluids pursue a
non-geodesic motion:
∇_g^ν T^g_μν = J_g^νℱ_μν ,
∇_f^ν T^f_μν = - J_f^νℱ_μν ,
where the dark matter currents J_g^μ and J_f^μ are related to
those appearing in Eq. (<ref>) by
J_g^μ =
√(-g_eff)/√(-g) j_g^μ and
J_f^μ = √(-g_eff)/√(-f) j_f^μ .
It remains to specify the link between these currents and the scalar
densities ρ_g and ρ_f of the particles. This is provided by
J_g^μ=r_g ρ_g u_g^μ and J_f^μ=r_f ρ_f u_f^μ, where
r_g and r_f are two constants of the order of one, which can be
interpreted as the ratios between the “charge” of the particles
(with respect to the vector interaction) and their inertial mass. For
correctly recovering MOND we must have α r_g=β
r_f <cit.>.
Whereas, the stress-energy tensor of the vector field
𝒜_μ is obtained by varying (<ref>) with respect to
g_μν^eff (holding the g and f metrics fixed) and
corresponds to
T_g_eff^μν = M_eff^2[𝒲_𝒳 ℱ^μρℱ^ν_ρ +
λ𝒲 g_eff^μν] ,
where 𝒲_𝒳≡𝒲/𝒳.
The evolution of the vector field is dictated by the Maxwell law
∇^g_eff_ν[
𝒲_𝒳ℱ^μν] =
1/M_eff^2(j^μ_g-j^μ_f) ,
where the covariant derivative associated with g_eff is
denoted by ∇^g_eff_ν. Together with the conservation
of the currents, ∇^g_eff_μ j_g^μ=0 and
∇^g_eff_μ j_f^μ=0, the equations of motion for the
vector field can also be expressed as
∇_g_eff^ν T^g_eff_μν = -
(j^ν_g-j^ν_f )ℱ_μν ,
and we can combine these equations of motion all together into a
“global” conservation law
√(-g_eff) ∇_g_eff^ν
T^g_eff_μν + √(-g) ∇_g^ν T^g_μν +
√(-f) ∇_f^ν T^f_μν = 0 .
§ DECOUPLING LIMIT
Being based on massive bigravity theory, the gravitational sector of
the model, Eq. (<ref>), is ghost-free up to any order in
perturbation theory <cit.>. In addition, the
baryonic and dark matter particles can be coupled separately to either
the g metric or f metric without changing this
property <cit.>. The case of the pure matter coupling between
the vector field 𝒜_μ and the effective composite metric
g_eff in Eqs. (<ref>)–(<ref>), is not trivial. In
that case, it was shown in Ref. <cit.> that the coupling is
ghost-free in the mini-superspace and in the decoupling
limit. Furthermore it is known that such coupling to the composite
metric is unique in the sense that it is the only non-minimal matter
coupling that maintains ghost-freedom in the decoupling
limit <cit.>.
However, in our model the vector field is also coupled to the g and
f particles, through the standard interaction term
∝𝒜_μ(j_g^μ-j_f^μ). This term plays a crucial
role for the dark matter model to work. This coupling introduces a
suplementary, indirect interaction between the two metrics of
bigravity, via the g and f particles coupled together by
the term 𝒜_μ(j_g^μ-j_f^μ). See Fig. <ref> for a
schematic illustration of the interactions in the model. As a result
it was found in Ref. <cit.> that a ghost is
reintroduced in the dark matter sector in the full theory. The aim of
this paper is to investigate the occurence and mass of this ghost, and
whether or not the decoupling limit (DL) is maintained ghost-free. If
the latter is true, then the model can be used in a consistent way as
an Effective Field Theory valid below the strong coupling scale.
We now detail the analysis of the DL interactions in the graviton and
matter sectors. We follow the preliminary work <cit.>
and investigate the scale of the reintroduced Boulware-Deser (BD)
ghost <cit.>. We first decouple the interactions below
the strong coupling scale from those entering above it, and
concentrate on the pure interactions of the helicity-0 mode of the
massive graviton. Using the Stückelberg trick, we restore the broken
gauge invariance in the f metric by replacing it by
f̃_μν = f_ab∂_μϕ^a ∂_νϕ^b ,
where a, b=0,1,2,3 and the four Stückelberg fields ϕ^a are
decomposed into the helicity-0 mode π and the helicity-1 mode
A^a,
ϕ^a=x^a-m A^a/Λ^3_3 -f^ab∂_b
π/Λ^3_3 .
Here Λ_3 ≡ (m^2 M_eff)^1/3 denotes the strong
coupling scale. Note, that we define it with respect to M_eff
here, since the potential interactions scale with m^2M_eff^2
in our case.
It is well known that the would-be BD ghost in the DL would come in
the form of higher derivative interactions of the helicity-0 mode at
the level of the equations of motion. Therefore we shall only follow
the contributions of the helicity-0 mode π and neglect the
interactions of the helicity-1 mode A^a. For simplicity we do not
write the tilde symbol over the Stückelbergized version of the
metric (<ref>). Thus, considering also the helicity-2 mode in
the g metric, we have[If there is a BD ghost in the DL,
then it will manifest itself in the higher-order equations of motion
of the helicity-0 mode. For this purpose, it will be enough to
follow closely the contributions of the matter couplings to the
helicity-0 mode equations of motion and decouple the dynamics of the
secondary helicity-2 mode in f. The contributions of the latter,
as derived in <cit.>, will not play any role in our
analysis and will not change the final conclusions. The same is true
for the contributions of the helicity-1 mode.]
g_μν = (η_μν+h_μν/M_g)^2 ,
f_μν =
(η_μν-Π_μν/Λ^3_3)^2 ,
where we introduced the notation Π_μν≡∂_μ∂_νπ for convenience, and raised and lowered indices with
the Minkowski metric η_μν. The effective metric reads then
g_μν^eff = ((α+β)η_μν +
K_μν)^2 ,
in which we have introduced as a short-cut notation the linear combination
K_μν = α/M_gh_μν -
β/Λ^3_3Π_μν .
We will as next investigate the different contributions in the
gravitational and matter sectors.
§.§ Gravitational sector
There is no contribution of the Einstein-Hilbert term to the
helicity-0 mode, since this is invariant under diffeomorphisms. On the
other hand, there will be different contributions coming from the
ghost-free potential interactions. The allowed potential interactions
between the metrics g and f have been chosen in our model to be
given by the square root of the determinant of the composite
metric (<ref>), which becomes in this case
√(-g_eff) = ∑_n=0^4 (α+β)^4-n e_(n)(K) ,
where e_(n)(K) denote the usual symmetric polynomials associated
with the matrix K_μ^ρ≡η^ρνK_μν, and given
by products of antisymmetric Levi-Cevita tensors,
e_(0)(K) = - 1/24ε^μνρσε_μνρσ ,
e_(1)(K) = - 1/6ε^μνρσε_μνρλ
K^λ_σ ,
e_(2)(K) = - 1/4ε^μνρσε_μντλ
K^τ_ρ K^λ_σ ,
e_(3)(K) = - 1/6ε^μνρσε_μπτλ
K^π_ν K^τ_ρ K^λ_σ ,
e_(4)(K) = - 1/24ε^μνρσε_ϵπτλ
K^ϵ_μ K^π_ν K^τ_ρ K^λ_σ .
In particular, we see that e_(4)(K)=det(K).
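For concreteness, the following minimal numerical sketch (our illustration; the random 4×4 matrix is an arbitrary choice) checks two standard facts used implicitly here: the e_(n)(K) are the elementary symmetric functions of the eigenvalues of K^μ_ν, they obey a Newton-type recursion in the traces tr(K^k), and e_(4)(K)=det(K).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
K = rng.standard_normal((4, 4))

# elementary symmetric functions of the eigenvalues of K
lam = np.linalg.eigvals(K)
e = [complex(sum(np.prod(c) for c in combinations(lam, n))).real for n in range(5)]

print(np.allclose(e[4], np.linalg.det(K)))   # True: e_(4)(K) = det(K)

# Newton's identities: n e_n = sum_{k=1}^{n} (-1)^{k-1} e_{n-k} tr(K^k)
tr = [np.trace(np.linalg.matrix_power(K, k)) for k in range(1, 5)]
e_rec = [1.0]
for n in range(1, 5):
    e_rec.append(sum((-1) ** (k - 1) * e_rec[n - k] * tr[k - 1]
                     for k in range(1, n + 1)) / n)
print(np.allclose(e, e_rec))                 # True
```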
First of all, the pure helicity-0 mode in the ghost-free potential
interactions (<ref>) will come in the form of total
derivatives <cit.>. Indeed, as is clear from their
definitions (<ref>) in terms of antisymmetric
Levi-Cevita tensors, the symmetric polynomials
e_(n)(Π)≡ℒ^der_(n)(Π) fully encode
the total derivatives at that order, and thus will not contribute to
the equation of motion of the helicity-0 mode. In fact, in
Ref. <cit.>, this very same property of total derivatives of
the leading contributions at each order was used to build the
ghost-free interactions away from h=0. Secondly, there will be the
pure interactions of the helicity-2 mode, obtained by setting Π=0,
and these will come with the corresponding inverse powers of
M_g. Finally, there will be the mixed interactions between the
helicity-2 and helicity-0 modes.
We are after the leading interactions in the DL, which correspond to
sending all the Planck scales to infinity,
M_Pl→∞ , M_g→∞ , M_eff→∞ , M_f →∞ ,
together with the graviton's mass m →0, while keeping
{Λ^3_3 = m^2 M_eff , M_g/M_Pl ,
M_eff/M_Pl , M_f/M_Pl} = const .
Taking into account the factor m^2 M_eff^2 in front of the
potential interactions, one immediately observes that the pure
non-linear interactions of the helicity-2 modes do not contribute to
the DL. As we already mentioned, the pure helicity-0 mode interactions
do not contribute either. So it remains the mixed terms, for which the
only surviving terms will be linear in the helicity-2 mode, and we
finally obtain
m^2 M_eff^2 √(-g_eff) = ∑_n=1^3
a_n/Λ_3^3(n-1) h^μν P_μν^(n)(Π) +
𝒪(1/M_g) ,
where a_n≡
(M_eff/M_g)^n+1α(-β)^n(α+β)^3-n
and we posed
P_μν^(n-1)(Π) ≡∂ e_(n)(Π)/∂Π^μν .
In arriving at Eq. (<ref>) we have removed the trivial
constant term in (<ref>), and ignored the “tadpole” which is
simply proportional to the trace [h]=h^μ_μ and can be
eliminated by choosing an appropriate de Sitter background (see,
e.g., a discussion in <cit.>).
We can then write the total contribution of the gravitational sector
in the DL, including that coming from the Einstein-Hilbert term of the
g metric, which enters only at the leading quadratic order in
h_μν,
ℒ^DL_grav = -
h^μνℰ^ρσ_μνh_ρσ +
∑_n=1^3 a_n/Λ_3^3(n-1) h^μν
P_μν^(n)(Π) ,
where ℰ^ρσ_μν is the usual Lichnerowicz
operator on a flat background as defined by
-2ℰ_μν^ρσh_ρσ =
□(h_μν-η_μνh) + ∂_μ∂_ν h
- 2 ∂_(μ H_ν) +
η_μν∂_ρ H^ρ ,
with h=[h]=h^μ_μ and H_μ=∂_ν
h^ν_μ.
The symmetric tensors P^(n)_μν
are conserved, i.e.∂_ν P_(n)^μν=0. For
an easier comparison with the literature we give them as the product
of two Levi-Cevita tensors appropriately contracted with the second
derivative of the helicity-0 field,
P^(1)_μν(Π) = -1/2 ε_μ^λρσ ε_νλρτ Π^τ_σ ,
P^(2)_μν(Π) = -1/2 ε_μ^λρσ ε_νλπτ Π^π_ρ Π^τ_σ ,
P^(3)_μν(Π) = -1/6 ε_μ^λρσ ε_νϵπτ Π^ϵ_λ Π^π_ρ Π^τ_σ .
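The conservation ∂_ν P^μν_(n)=0 can be verified symbolically. The sketch below (ours) uses the standard closed forms P^(1)_μν ∝ [Π]η_μν − Π_μν and P^(2)_μν ∝ ([Π]²−[Π²])η_μν − 2[Π]Π_μν + 2(Π²)_μν, which agree with the ε-contractions above up to overall normalizations, and checks that their divergence vanishes identically for an arbitrary smooth π.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)            # Minkowski metric; eta^{-1} = eta here
pi = sp.Function('pi')(*X)
Pi = sp.Matrix(4, 4, lambda a, b: sp.diff(pi, X[a], X[b]))

tr1 = (eta * Pi).trace()              # [Pi]
tr2 = (eta * Pi * eta * Pi).trace()   # [Pi^2]
P1 = tr1 * eta - Pi
P2 = (tr1**2 - tr2) * eta - 2 * tr1 * Pi + 2 * Pi * eta * Pi

for P in (P1, P2):
    for mu in range(4):
        # divergence d^nu P_{mu nu} = eta^{nu nu} d_nu P_{mu nu} (eta diagonal)
        div = sum(eta[nu, nu] * sp.diff(P[mu, nu], X[nu]) for nu in range(4))
        assert sp.expand(div) == 0
print("P^(1) and P^(2) are identically divergence-free")
```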
The first two interactions between the helicity-0 and helicity-2
fields in the Lagrangian (<ref>) can be removed by the change
of variable, defining
ĥ_μν≡ h_μν - a_1/2π η_μν + a_2/2Λ_3^3∂_μπ∂_νπ .
In this way the Lagrangian of the gravitational sector in the
decoupling limit becomes <cit.>
ℒ^DL_grav = -
ĥ^μνℰ^ρσ_μνĥ_ρσ
+ ∑_n=0^3 b_n/Λ_3^3n(∂π)^2
e_(n)(Π)
+ a_3/Λ_3^6ĥ^μν P_μν^(3)(Π) .
We see in the first line the appearance of the ordinary Galileon terms
up to quintic order [we denote
(∂π)^2≡∂_μπ∂^μπ]. The
coefficients b_n are given by certain combinations of the
a_n's.[Namely, b_0=-3/4a_1^2,
b_1=-3/4a_1 a_2, b_2=-1/2a_2^2-1/3a_1
a_3 and b_3=-5/4a_2 a_3.] The last term of
Eq. (<ref>) is the remaining mixing between the helicity-0
and helicity-2 modes and is not removable by any local field
redefinition like in (<ref>).
The contribution of the gravitational sector to the equation of motion
of the helicity-2 field gives
δℒ^DL_grav/δĥ^μν = -2 ℰ^ρσ_μνĥ_ρσ + a_3/Λ_3^6
P^(3)_μν(Π) ,
while its contribution to the equation of motion of the helicity-0
field reads
δℒ^DL_grav/δπ = -2
∑_n=1^4 n b_n-1/Λ_3^3(n-1) e_(n)(Π) +
a_3/Λ_3^6 Q_μν^(2)ρσ(Π)
∂_ρ∂_σĥ^μν ,
where we posed
Q_μν^(2)ρσ(Π) ≡ ∂P_μν^(3)/∂Π_ρσ = -1/2 ε_μ^ρϵλ ε_ν^σπτ Π^π_ϵ Π^τ_λ .
The second-order nature of the equations of motion in the gravity
sector is apparent. This is the standard property of the ghost-free
massive gravity interactions <cit.>.
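The second-order nature can also be verified mechanically. The sketch below (our check, with illustrative normalizations) feeds the cubic Galileon term (∂π)² e_(1)(Π) = (∂π)²□π of the first line of the DL Lagrangian to sympy's Euler-Lagrange routine and confirms that no derivative of π beyond second order survives in the equation of motion.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)                 # Minkowski metric, mostly-plus
pi = sp.Function('pi')(*X)

dpi2 = sum(eta[a, a] * sp.diff(pi, X[a])**2 for a in range(4))   # (d pi)^2
box = sum(eta[a, a] * sp.diff(pi, X[a], 2) for a in range(4))    # box pi
L = dpi2 * box                                                   # cubic Galileon

# Euler-Lagrange with second-derivative dependence:
# E = dL/dpi - d_mu dL/d(pi_mu) + d_mu d_nu dL/d(pi_{mu nu})
EL = sp.expand(sp.euler_equations(L, pi, X)[0].lhs)

orders = {sum(c for _, c in d.variable_count) for d in EL.atoms(sp.Derivative)}
print(max(orders))   # -> 2: all third- and fourth-order derivatives cancel
```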
§.§ Matter sector
As next, we shall control the contributions in the matter sector due
to both the helicity-0 and helicity-2 fields. To this aim it is
important to properly identify the matter degrees of freedom that are
metric independent. These are provided by the coordinate densities
defined as ρ^*_g = √(-g)ρ_g u^0_g and ρ^*_f =
√(-f)ρ_f u^0_f, and by the ordinary (coordinate) velocities
v^μ_g=u^μ_g/u^0_g and v^μ_f=u^μ_f/u^0_f. The associated
currents J^*μ_g= ρ^*_g v^μ_g and J^*μ_f= ρ^*_f
v^μ_f are conserved in the ordinary sense, ∂_μ
J^*μ_g=0 and ∂_μ J^*μ_f=0, and are related to the
classical currents by
J^*μ_g = √(-g) J_g^μ and J^*μ_f =
√(-f) J_f^μ .
When varying the action we must carefully impose that the independent
matter degrees of freedom are the metric independent currents
J^*μ_g and J^*μ_f. After variation we may restore the
manifest covariance by going back to the classical currents
using (<ref>).
Next we must specify how the matter variables will behave in the DL
when we take the scaling
limits (<ref>)–(<ref>). In the DL we want to
keep intact the coupling between the helicity-2 mode h_μν and
the particles living in the g sector, therefore we impose
T_bar^μν = M_g
T̂_bar^μν and T_g^μν = M_g
T̂_g^μν ,
with T̂_bar^μν and T̂_g^μν remaining
constant in the DL. As for the f particles,
in a
similar way we demand that T_f^μν = M_f T̂_f^μν
with T̂_f^μν being constant.
The next important point concerns the internal vector field
𝒜_μ. As we have seen this vector field is a
graviphoton <cit.>, i.e. its scale is given by
the Planck mass, witness the factor M_eff^2 in front of the
kinetic term of the vector field (<ref>), see also the factor
M_eff^2 in front of the stress-energy tensor of the vector
field, Eq. (<ref>). For the model to work M_eff must be
of the order of the Planck mass, as determined
in (<ref>). This means that we have to canonically
normalize the vector field 𝒜_μ according to
𝒜_μ = 𝒜̂_μ/M_eff ,
and keep 𝒜̂_μ constant in the DL. Thus
T_g_eff^μν = T̂_g_eff^μν should
be considered constant in that limit.
A general variation of the matter action with respect to the two
metrics reads
δℒ_mat = √(-g)/2(T_bar^μν + T_g^μν)δ g_μν +
√(-f)/2 T_f^μνδ f_μν
+
√(-g_eff)/2 T_g_eff^μνδ
g^eff_μν .
We insert Eqs. (<ref>)–(<ref>) and change the
helicity-2 variable according to (<ref>) to obtain the
contribution of the matter action to the field equation for the
helicity-2 field (in guise ĥ_μν) as
δℒ_mat/δĥ_μν =
1/M_g√(-g) (T_bar^ρ(μ +
T_g^ρ(μ)(δ^ν)_ρ +
h^ν)_ρ/M_g)
+
α/M_g√(-g_eff) T_g_eff^ρ(μ((α+β)δ^ν)_ρ
+ K^ν)_ρ) .
Taking the DL with the postulated
scalings (<ref>)–(<ref>) we find that the
helicity-2 mode of the massive graviton is just coupled in this limit
to the baryons and g particles,
δℒ^DL_mat/δĥ_μν = T̂_bar^μν + T̂_g^μν ,
where the (rescaled) stress-energy tensors
T̂_bar^μν and T̂_g^μν in the DL are
computed with the Minkowski background.
We next consider the contributions of the matter sector to the
equation of motion of the helicity-0 field. We find three
contributions, two coming from the field
redefinition (<ref>),
δℒ_mat/δπ|_(1a) =
a_1/2M_g√(-g) (T_bar^μν +
T_g^μν)(η_μν +
h_μν/M_g)
+
a_2/M_g Λ_3^3∂_ν[√(-g) (T_bar^μ(ν +
T_g^μ(ν)(δ_μ^ρ) +
h_μ^ρ)/M_g)∂_ρπ]
,
δℒ_mat/δπ|_(1b) =
α a_1/2M_g√(-g_eff) T_g_eff^μν((α+β)η_μν
+ K_μν)
+ α a_2/M_g
Λ_3^3∂_ν[√(-g_eff) T_g_eff^μ(ν((α+β)δ^ρ)_μ
+ K^ρ)_μ)∂_ρπ] ,
and the third one being “direct”, and already investigated
in <cit.> with result
δℒ_mat/δπ|_(2) =
-1/Λ_3^3∂_μ∂_ν[√(-f) T_f^ρμ(δ^ν_ρ-Π^ν_ρ/Λ_3^3)
+ β√(-g_eff) T_g_eff^ρμ((α+β)δ^ν_ρ +
K^ν_ρ)] .
The latter contribution might look worrisome in the DL, but it becomes
finite after using the equation of motion for the f particles,
Eq. (<ref>), and that for the vector field,
Eq. (<ref>). The calculation proceeds similarly to the one
using Eqs. (3.29)–(3.32) in Ref. <cit.>. Finally the result can
be brought into the form <cit.>
δℒ_mat/δπ|_(2) =
1/Λ_3^3∂_ν[
J^*ρ_f ℱ_μρ(η^μν -
Π^μν/Λ_3^3)^-1
+
β(J^*ρ_g - J^*ρ_f)
ℱ_μρ((α+β)η^μν +
K^μν)^-1] ,
where we describe the matter degrees of freedom by means of the
coordinate currents (<ref>).
The results (<ref>) and (<ref>) are general at
this stage, and involve couplings between both the helicity-0 and
helicity-2 modes with the matter fields —g and f particles, and
the internal vector field 𝒜_μ. However, because of the
scaling (<ref>), which we recall is appropriate to the
graviphoton whose coupling scale is given by the Planck mass, the
vector field strength actually scales like
ℱ_μν=ℱ̂_μν/M_eff in the
DL limit. This fact kills all the interactions between the helicity-0
mode and the vector field in the DL, since they come with an inverse
power of M_eff.[Note that if we do not impose the
scaling
ℱ_μν=ℱ̂_μν/M_eff the
equation (<ref>) for the helicity-2 field diverges in the
DL. Similarly for Eq. (<ref>).] Thus the direct
contribution (<ref>) is identically zero in the DL, and
only the contribution (<ref>) is surviving,
while (<ref>) is also zero. After further simplification
with the matter equations of motion, we obtain (with
T̂_bar and T̂_g denoting the Minkowskian traces)
δℒ^DL_mat/δπ =
a_1/2 (T̂_bar + T̂_g) +
a_2/Λ_3^3 (T̂_bar^μν +
T̂_g^μν)∂_μ∂_νπ .
Recapitulating, we find that the DL of the model consists of the
following equation for the helicity-2 mode, i.e.δℒ^DL/δĥ_μν=0 or
equivalently
-2 ℰ_ρσ^μνĥ_ρσ +
a_3/Λ_3^6 P_(3)^μν(Π) +
T̂_bar^μν + T̂_g^μν = 0 ,
which is of second-order nature. Thus, the contributions of the
gravitational and matter sector to the equations of motion of the
helicity-2 mode in the DL are ghost-free. Note, that the Bianchi
identity of this equation (taking the divergence of it) is identically
satisfied, since the particles actually follow geodesics in the
DL. Indeed, using (<ref>)–(<ref>) together with the
equations of motion [e.g. (<ref>)], we have
∂_νT̂_bar^μν=∂_νT̂_g^μν=0
(the particles move on Minkowski straight lines).
In addition we have the total equation of motion of the helicity-0
mode, namely δℒ^DL/δπ=0 which reads
-2 ∑_n=1^4 n b_n-1/Λ_3^3(n-1)
e_(n)(Π) + a_3/Λ_3^6
Q_μν^(2)ρσ(Π)
∂_ρ∂_σĥ^μν
= -
a_1/2 (T̂_bar + T̂_g) -
a_2/Λ_3^3 (T̂_bar^μν +
T̂_g^μν)∂_μ∂_νπ .
Since this equation is of second order in the derivatives of
the π field, we conclude our study by stating that the model is
safe (ghost-free) up to the strong coupling scale. Below that scale
the theory is perfectly acceptable as an Effective Field Theory, and
its consequences can be worked out using perturbation theory as
usual. For instance, solving at linear order the helicity-0
equation (<ref>) we obtain the usual well-posed
(hyperbolic-like) equation
□π = a_1/4b_0(T̂_bar +
T̂_g)+𝒪(π^2) ,
which can then be perturbatively iterated to higher order. With this
we have proved, that the coupling of the dark matter particles with
the internal vector field does not introduce any ghostly contribution
in the DL.
§ CONCLUSIONS
This work was dedicated to the detailed study of the decoupling limit
interactions of the dark matter model proposed
in <cit.>. This model is
constructed via a specific coupling of two copies of dark
matter particles to two metrics in the framework of massive
bigravity. Furthermore, an internal vector field links the two dark
matter species. This enables us to implement a mechanism of
gravitational polarization, which induces the MOND phenomenology on
galactic scales (with the specific choice of parameters studied
in <cit.>). Note that, since our model successfully
reproduces all aspects of that phenomenology, it will be in agreement
with the recent observations of the MOND mass-discrepancy-acceleration
relation in <cit.>.
Some theoretical and phenomenological consequences of this model were
studied in detail in Ref. <cit.>, but it was also
pointed out that the decoupling limit of the theory may be
problematic, with higher derivative terms occurring in the equation of
motion of the helicity-0 mode of the massive graviton.
In the present work, we studied the complete DL interactions, crucially
including the contributions of the matter sector, and we showed that,
after the necessary rescaling of the vector field (as appropriate for a
vector field with Planckian coupling constant), the theory is free from
ghosts in the DL, and hence can be used as a valid Effective Field
Theory up to the strong coupling scale.
We would like to thank Claudia de Rham and Andrew Tolley for very
useful and enlightening discussions. L.H. wishes to acknowledge the
Institut d'Astrophysique de Paris for hospitality and support at the
final stage of this work.
http://arxiv.org/abs/1701.07897v1 | 20170126225521 | New trends in free boundary problems | [
"Serena Dipierro",
"Aram Karakhanyan",
"Enrico Valdinoci"
] | math.AP | [
"math.AP"
] |
We present a series of recent results
on some new classes of free boundary problems.
Differently from the classical literature, the problems
considered have either a “nonlocal”
feature (e.g., the interaction and/or the interfacial
energy may
depend on global quantities) or a “nonlinear”
flavor (namely, the total energy is the nonlinear
superposition of energy components, thus losing
the standard additivity and scale invariances of the problem).
The complete proofs and the full details of the results
presented here are given in <cit.>.
MSC 2010: 35R35, 35R11.
For Don Ireneo, of course.
§ INTRODUCTION
In this survey, we would like to present some
recent research directions in the study of variational
problems whose minimizers naturally exhibit the formation
of free boundaries. Differently than the cases
considered in most of the existing literature, the problems
that we present here are either nonlinear (in the sense
that the energy functional is the nonlinear superposition
of classical energy contributions) or nonlocal
(in the sense that some of the energy contributions
involve objects that depend on the global geometry
of the system).
In these settings, the problems typically show new features
and additional difficulties with respect to the classical cases.
In particular, as we will discuss in further details:
the regularity theory is more complicated,
there is a lack of scale invariance for some problems,
the natural scaling properties of the energy may not
be compatible with the optimal regularity,
the condition at the free boundary may be of nonlocal
or nonlinear type and involve the global behavior of
the solution itself, and
some problems may exhibit a variational instability
(e.g., minimizers in large domains and in small domains
may dramatically differ the ones from the others).
We will also discuss
how the classical free boundary problems in <cit.>
are recovered either as limit problems
or after a blow-up, under appropriate structural
conditions on the energy functional.
We recall the classical free boundary problems
of <cit.>
in Section <ref>.
The results concerning nonlocal free boundary problems
will be presented in Section <ref>,
while the case of
nonlinear energy superposition is discussed
in Section <ref>.
§ TWO CLASSICAL FREE BOUNDARY PROBLEMS
A classical problem in fluid dynamics is the description
of a two-dimensional ideal fluid in terms of its stream
function, i.e. of a function whose level sets describe
the trajectories of the fluid.
For this, we consider an incompressible, irrotational
and inviscid fluid which occupies
a given planar region Ω⊂^2.
If V:Ω→^2 represents the velocity of the particles
of the fluid, the incompressibility condition implies that
the flow of the fluid through any portions of Ω
is zero (the amount of fluid coming in is exactly the
same as the one going out), that is,
for any Ω_o⋐Ω, and denoting by ν
the exterior normal vector,
0=∫_∂Ω_o V·ν=
∫_Ω_o div V.
Since this is valid for any subdomain of Ω, we thus
infer that
div V=0 Ω.
Now, we use that the fluid is irrotational to write equation (<ref>)
as a second order PDE. To this aim, let us analyze what a “vortex” is.
Roughly speaking, a vortex is given by a close trajectory,
say γ: S^1→Ω,
along which the fluid particles move. In this way, the velocity
field V is always parallel to the tangent direction γ'
and therefore
0 ≠ ∫_S^1 V(γ(t))·γ'(t) dt
=∮_γ V,
where the standard notation for the
circulation line integral is used.
That is, if we denote by S the region inside Ω
enclosed by the curve γ
(hence, γ=∂ S),
we infer by (<ref>) and Stokes' Theorem that
0 ≠ ∫_S curl V· e_3,
where, as usual, we write {e_1,e_2,e_3}
to denote the standard basis of ^3,
we identify the vector V=(V_1,V_2)
with its three-dimensional image V=(V_1,V_2,0), and
curl V(x):= ∇× V (x)=
(
e_1 e_2 e_3
∂_x_1 ∂_x_2 ∂_x_3
V_1(x_1,x_2) V_2(x_1,x_2) 0
)=( ∂_x_1V_2(x)-∂_x_2V_1(x)) e_3.
In this setting, the fact that the fluid is irrotational is translated
in mathematical language into the fact that the opposite of (<ref>)
holds true, namely
0=
∫_S curl V· e_3,
for any S⊂Ω (say, with smooth boundary).
Since this is valid for any arbitrary region S, we thus
can translate the irrotational property of the fluid into
the condition curl V=0 in Ω, that is
∂_x_1V_2-∂_x_2V_1=0 Ω.
Now, we consider the 1-form
ω:=
V_2 dx_1-V_1 dx_2,
and we have that
dω= -∂_x_2V_2 dx_1∧ dx_2-
∂_x_1V_1 dx_1∧ dx_2
=- div V dx_1∧ dx_2
=0,
thanks to (<ref>).
Namely ω is closed, and thus exact (by
Poincaré Lemma, at least if Ω is star-shaped).
This says that there exists a function u such that
ω=du=∂_x_1u dx_1+
∂_x_2u dx_2.
By comparing this and (<ref>), we conclude that
∂_x_1u=V_2 ∂_x_2u=-V_1.
We observe that u is a stream function for the fluid, namely
the fluid particles move along the level sets of u:
indeed, if x(t) is the position of the fluid particle at time t,
we have that ẋ(t)=V(x(t)) is the velocity of the fluid, and
d/dt u(x(t))=∂_x_1 u(x(t))ẋ_1(t)+
∂_x_2 u(x(t))ẋ_2(t)
=
∂_x_1 u(x(t))V_1(x(t))+
∂_x_2 u(x(t))V_2(x(t))
=V_2(x(t))V_1(x(t))-V_1(x(t))V_2(x(t))
=0,
in view of (<ref>).
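For an explicit illustration (our example, not from the original discussion), take the harmonic stream function u(x₁,x₂)=x₁²−x₂²; the sketch below verifies symbolically that the induced velocity field is incompressible and irrotational, and that u is constant along the flow, exactly as in the computation above.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = x1**2 - x2**2                          # a harmonic stream function

V1, V2 = -sp.diff(u, x2), sp.diff(u, x1)   # from d_{x1} u = V2, d_{x2} u = -V1
print(sp.diff(V1, x1) + sp.diff(V2, x2))   # 0: div V = 0 (incompressible)
print(sp.diff(V2, x1) - sp.diff(V1, x2))   # 0: curl V = 0 (u is harmonic)
print(sp.simplify(sp.diff(u, x1) * V1 + sp.diff(u, x2) * V2))   # 0: grad u . V = 0
```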
The stream function u also satisfies a natural overdetermined
problem. First of all, since ∂Ω represents the boundary
of the fluid, and the fluid motion occurs on the level sets of u,
up to constants we may assume that u=0 along ∂Ω.
In addition, along ∂Ω Bernoulli's Law prescribes
that the velocity is balanced by the pressure (which we take
here to be p=p(x)). That is,
up to dimensional constants, we can write that, along ∂Ω,
p = |V|^2 =|∇ u|^2,
where we used again (<ref>) in the last identity.
Also, (<ref>) and (<ref>) give that, in Ω,
Δ u=∂_x_1V_2-∂_x_2V_1=0,
that is, summarizing,
{Δ u=0 Ω,
u=0 ∂Ω,
|∇ u|^2=p ∂Ω.
.
Notice that these types of overdetermined problems
are, in general,
not solvable: namely, only “very special” domains
allow a solution of such overdetermined problem to exist
(see e.g. <cit.>). In this spirit, determining such
domain Ω is part of the problem itself, and
the boundary of Ω is, in this sense, a “free boundary”
to be determined together with the solution u.
These kinds of free boundary problems
have a natural formulation, which was widely studied
in <cit.>.
The idea is to consider an energy functional
which is the superposition of a Dirichlet part
and a volume term. By an appropriate domain variation,
one sees that minimizers (or, more generally, critical
points) of this functional correspond (at least in a weak sense)
to solutions of (<ref>) (compare, for instance,
the system in (<ref>)
here with Lemma 2.4 and Theorem 2.5 in <cit.>).
Needless to say, in this framework, the analysis of the minimizers
of this energy functional and of their level sets
becomes a crucial topic of research.
In <cit.> a different energy functional
is taken into account, in which the volume term
is substituted by a perimeter term. This modification provides
a natural change in the free boundary condition (in this
setting, the pressure of the Bernoulli's Law is replaced
by the curvature of the level set, see formula (6.1)
in <cit.>).
In the following sections we will discuss what happens when:
* we interpolate the volume term of
the energy functional of <cit.>
and the perimeter term of
the energy functional of <cit.>
with a fractional perimeter term, which recovers the volume
and the classical perimeter in the limit;
* we consider a nonlinear energy superposition,
in which the total energy depends on the volume, or on the
(possibly fractional) perimeter, in a nonlinear fashion.
§ NONLOCAL FREE BOUNDARY PROBLEMS
A classical motivation for free boundary problems
comes from the superposition of
a “Dirichlet-type energy” D
and an “interfacial energy” I.
Roughly speaking, one may consider the minimization problem
of an energy functional
E:=D+I,
which takes into account the following two
tendencies of the energy contributions, namely:
* the term D
tries to reduce the oscillations of the minimizers,
* while the term I penalizes the formation
of interfaces.
Two classical approaches
appear in the literature to measure these interfaces,
taking into account the “volume” of the phases
or the “perimeter” of the phase separations.
The first approach, based on a “bulk” energy contribution,
was widely studied in <cit.>.
In this setting, the energy superposition
in (<ref>) (with respect to
a reference domain Ω⊂^n) takes the form
D=D(u):=∫_Ω |∇ u(x)|^2 dx
I=I(u):=
∫_Ωχ_{ u>0}(x) dx
= ℒ^n ( Ω∩{ u>0}),
where ℒ^n denotes, as customary,
the n-dimensional Lebesgue measure.
The case of two phase contributions
(namely, the one which takes into
account
the bulk energy of both {u>0} and {u<0})
was also considered in <cit.>.
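A one-dimensional toy computation (ours; the weight Λ on the volume term and the ramp family of competitors are illustrative choices) shows how the Bernoulli-type free boundary condition |∇u|²=p of Section <ref> emerges from this energy: on [0,1] with u(0)=1, restricted to u_a(x)=max(0, 1−x/a) the energy is E(a)=1/a+Λa, and the optimal slope squared equals Λ.

```python
from scipy.optimize import minimize_scalar

# Toy one-phase problem (our choice): minimize E(u) = int |u'|^2 + Lam*|{u>0}|
# on [0,1] with u(0) = 1, over the ramp family u_a(x) = max(0, 1 - x/a),
# for which E(a) = 1/a + Lam*a.
Lam = 4.0
E = lambda a: 1.0 / a + Lam * a
a_star = minimize_scalar(E, bounds=(1e-3, 1.0), method='bounded').x
print(a_star, Lam ** -0.5)           # a* ~= 1/sqrt(Lam) = 0.5
print((1.0 / a_star) ** 2, Lam)      # slope^2 ~= Lam at the free boundary x = a*
```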
The second approach, based on a “surface tension”
energy contribution,
was introduced in <cit.>.
In this setting, the energy superposition
in (<ref>) takes the form
D=D(u):=∫_Ω |∇ u(x)|^2 dx
I=I(u):=
Per({ u>0} ,Ω),
where the notation
Per(E,Ω):=∫_Ω |Dχ_E(x)| dx
= [ χ_E]_BV(Ω)
represents the perimeter of the set E in Ω;
hence, if E has smooth boundary, then
Per(E,Ω)=ℋ^n-1((∂ E)∩Ω),
being ℋ^n-1 the (n-1)-dimensional
Hausdorff measure.
As pointed out in <cit.>,
the two free boundary problems
in (<ref>) and (<ref>)
can be settled into a unified framework,
and in fact they may be seen as “extremal” problems
of a family of energy functionals
indexed by a continuous parameter σ∈(0,1).
To this aim, given two measurable
sets E, F⊂^n, with ℒ^n(E∩ F)=0,
one considers the σ-interaction of E and F,
as given by the double integral
𝒮_σ(E,F):=
σ (1-σ) ∬_E× Fdx dy/|x-y|^n+σ.
In <cit.>, the notion
of σ-minimal surfaces has been introduced
by considering minimizers of the σ-perimeter
induced by such interaction. Namely, one defines
the σ-perimeter of E in Ω
as the contribution relative to Ω
of the σ-interaction of E and its complement (which
we denote by E^c:=^n∖ E), that is
Per_σ(E,Ω)
:= 𝒮_σ(E,E^c∩Ω)+
𝒮_σ(E∩Ω,E^c∩Ω^c).
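A one-dimensional sanity check (our computation, with E=(0,1) and Ω=ℝ as an illustrative choice) makes the normalization σ(1−σ) concrete: the inner integral over E^c is explicit, and the σ-perimeter of the interval turns out to equal 2 for every σ, consistent, up to normalization constants, with both the classical perimeter (two boundary points, σ near 1) and a multiple of the length (σ near 0) discussed below.

```python
from mpmath import mp, quad

# For E = (0,1), Omega = R and n = 1:
#   int_{E^c} |x-y|^{-1-sigma} dy = (x**(-sigma) + (1-x)**(-sigma)) / sigma,
# so Per_sigma((0,1); R) = sigma*(1-sigma) * 2/(sigma*(1-sigma)) = 2
# for every sigma in (0,1).
mp.dps = 25
for sigma in (0.1, 0.5, 0.9):
    inner = lambda x, s=sigma: (x**(-s) + (1 - x)**(-s)) / s
    per = sigma * (1 - sigma) * quad(inner, [0, 0.5, 1])
    print(sigma, per)   # -> 2.0 in each case
```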
After <cit.>,
an intense activity has been performed to investigate
the regularity and the geometric properties
of σ-minimal surfaces:
see in particular <cit.>
for interior regularity results, and <cit.>
for the rather special behavior of
σ-minimal surfaces near the boundary
of the domain. See also <cit.>
for a recent survey on σ-minimal surfaces.
The analysis of the asymptotics of the σ-perimeter
as σ↗1 has been studied
from several perspectives in <cit.>. Roughly speaking,
up to normalization constants, we may say that Per_σ
approaches the classical perimeter as σ↗1.
On the other hand, as σ↘0,
we have that Per_σ
approaches the Lebesgue measure (again, up
to normalization constants,
see <cit.>
for precise statements and examples).
In virtue of these considerations,
we have that the free boundary problem
introduced in <cit.>,
which takes into account
the energy superposition
in (<ref>) of the form
D=D(u):=∫_Ω |∇ u(x)|^2 dx
I=I(u):=
Per_σ({ u>0} ,Ω),
may be seen as an interpolation
of the problems stated in (<ref>) and (<ref>)
(that is, at least at a formal level,
the energy functional in (<ref>) reduces to that
in (<ref>) as σ↘0
and to that in (<ref>)
as σ↗1).
A nonlocal variation of the classical Dirichlet energy
has been also considered in <cit.>.
In this setting, the classical H^1-seminorm
in Ω of a function u is replaced
by a Gagliardo H^s-seminorm of the form
s (1-s) ∬_Q_Ω|u(x)-u(y)|^2/|x-y|^n+2s dx dy,
where
Q_Ω:=(Ω×Ω)∪
(Ω×Ω^c)∪ (Ω^c×Ω)
and s∈(0,1). More precisely,
in <cit.> a superposition
of the Gagliardo seminorm
and the Lebesgue measure of the positivity
set is taken into account.
It is worth pointing out that the domain Q_Ω
in (<ref>) comprises all the interactions
of points (x,y)∈^2n which involve the domain Ω,
since Q_Ω=(^n×^n)∖
(Ω^c×Ω^c).
In this sense, the integration over Q_Ω⊂^2n
is the natural counterpart of the classical integration over Ω
of the standard Dirichlet energy, since Ω=^n∖Ω^c.
Also, the double integral in (<ref>)
recovers the classical Dirichlet energy, see e.g. <cit.>.
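This recovery can be seen numerically in one dimension. In the sketch below (ours; u(x)=e^{−x²} is an illustrative choice), the identity ∫|u(x+h)−u(x)|² dx = 2√(π/2)(1−e^{−h²/2}) reduces the double integral to a single integral in h, and the normalized Gagliardo energy approaches ∫|u'|² = √(π/2) as s↗1.

```python
from mpmath import mp, quad, exp, sqrt, pi, inf

mp.dps = 20
dirichlet = sqrt(pi / 2)                     # int |u'|^2 for u(x) = exp(-x^2)
for s in (0.9, 0.99, 0.999):
    g = lambda h, s=s: 2 * sqrt(pi / 2) * (1 - exp(-h**2 / 2)) * h**(-1 - 2*s)
    val = 2 * quad(g, [0, 1, inf])           # the |h| integral over R, by symmetry
    print(s, s * (1 - s) * val, dirichlet)   # -> sqrt(pi/2) ~ 1.2533 as s -> 1
```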
In this spirit, in <cit.>
a fully nonlocal counterpart of the free boundary
problems in (<ref>) and (<ref>)
has been introduced, by studying
energy superpositions of Gagliardo norms
and fractional perimeters (see also <cit.>
for the
superpositions of Gagliardo norms
and classical perimeters).
More precisely,
the energy superposition
in (<ref>)
considered in <cit.> takes the form
D=D(u):=
s (1-s) ∬_Q_Ω|u(x)-u(y)|^2/|x-y|^n+2s dx dy
I=I(u):=
Per_σ({ u>0} ,Ω),
where s, σ∈(0,1).
We summarize here a series of results recently obtained
in <cit.>
for these nonlocal free boundary problems
(some of these results also rely on a notion of fractional harmonic
replacement analyzed in <cit.>).
First of all, we have that
minimizers[Here,
for simplicity,
we omit the fact that, in this setting, the minimization
is performed not only on a function, but
on a couple given by the function and its
positivity set. See Section 2 in <cit.>
for a rigorous discussion on this important
detail.]
of free boundary problems
with fractional perimeter interfaces
are continuous, possess suitable density estimates
and have smooth free boundaries
up to sets
of codimension 3:
[Theorems 1.1 and 1.2 in <cit.>]
Let u_⋆ be a minimizer of
E(u):=∫_B_1 |∇ u(x)|^2 dx+
Per_σ( {u>0},B_1),
with σ∈(0,1) and 0∈∂{u_⋆>0}.
Then u_⋆ is locally C^1- σ/2
and, for any r∈(0,1/2),
 min{ℒ^n(B_r∩{u_⋆≥0}), ℒ^n(B_r∩{u_⋆≤0}) }≥ cr^n,
for some c>0.
Moreover, the free boundary is a C^∞-hypersurface
possibly outside a small singular
set of Haussdorff dimension n-3.
We remark that the Hölder exponent 1- σ/2
is consistent with the natural scaling of the problem (namely u_r
(x) := r^σ/2-1 u_⋆ (rx) is still a minimizer).
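The exponent can be read off from a one-line power counting (our quick check): ∫_{B_1}|∇u_r|² = r^{2α+2−n}∫_{B_r}|∇u|² and Per_σ({u_r>0};B_1) = r^{σ−n} Per_σ({u>0};B_r), and the two terms scale identically exactly when α = σ/2 − 1.

```python
import sympy as sp

alpha, sigma, n = sp.symbols('alpha sigma n')
# match the scaling of the Dirichlet term, r**(2*alpha + 2 - n),
# with that of the fractional perimeter, r**(sigma - n)
print(sp.solve(sp.Eq(2*alpha + 2 - n, sigma - n), alpha))   # [sigma/2 - 1]
```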
Such type of regularity approaches the optimal exponent
in <cit.>
as σ↘0. Nevertheless, as σ↗1,
minimizers in <cit.> are known to be Lipschitz
continuous, therefore we think that it is a very interesting
open problem to investigate the optimal
regularity of u_⋆ in Theorem <ref>
(we stress that this optimal regularity may well approach
the Lipschitz regularity and so “beat the natural scaling
of the problem”).
Also, we think it is very interesting to obtain
optimal bounds on the dimension of the singular set
in Theorem <ref>.
It is also worth observing that the minimizers
in Theorem <ref> satisfy a nonlocal free boundary
condition. Namely,
the normal jump J_⋆:=
|∇ u_⋆^+|^2-|∇ u_⋆^-|^2
along the smooth points
of the free boundary coincides (up to normalizing constants)
with the nonlocal mean curvature of the free boundary,
which is defined by
𝒦^σ(x):=∫_^nχ_{ u≤0}(y)-χ_{ u>0}(y)/|x-y|^n+σ dy,
for x∈∂{ u>0}.
This free boundary condition has been presented in formula (1.6)
of <cit.>. Since 𝒦^σ approaches
the classical mean curvature as σ↗1
and a constant as σ↘0 (see e.g. <cit.>
and
Appendix B
of <cit.>), we remark
that this nonlocal free boundary condition
recovers the classical ones in <cit.>
and in <cit.> as σ↗1,
and as σ↘0, respectively.
In <cit.>,
we consider the fully nonlocal case in which both
energy components
become of nonlocal type,
namely we replace (<ref>) with the energy
functional
E(u):=
s (1-s) ∬_Q_B_1|u(x)-u(y)|^2/|x-y|^n+2s dx dy
+
Per_σ( {u>0},B_1),
with s, σ∈(0,1), where the notation in (<ref>)
has been also used.
In this setting, we have:
[Theorem 1.1 in <cit.>]
Let u_⋆ be a minimizer of (<ref>)
with 0∈∂{u_⋆>0}.
Assume that u_⋆≥0 in B_1^c and that
∫_^n|u_⋆(x)|/1+|x|^n+2s dx<+∞.
Then, u_⋆ is locally C^s- σ/2
and, for any r∈(0,1/2),
 min{ℒ^n(B_r∩{u_⋆>0}), ℒ^n(B_r∩{u_⋆=0}) }≥ cr^n,
for some c>0.
We observe that the Hölder exponent in Theorem <ref>
recovers that of Theorem <ref> as s↗1.
Once again, we think that it would be very interesting
to investigate the optimal regularity of the minimizers
in Theorem <ref>. Also,
Theorem <ref> has been established in the
“one-phase” case, i.e. under the assumption that the
minimizer has a sign. It would be very interesting
to establish similar results
in the “two-phase” case in which minimizers can change
sign. It is worth remarking that the case in which
minimizers change sign is conceptually
harder in the nonlocal setting
than in the local one, since the two phases interact between
each other, thus producing additional energy contributions
which need to be carefully taken into account.
§ NONLINEAR FREE BOUNDARY PROBLEMS
In <cit.>
a new class of free boundary problems
has been considered, by taking into account
“nonlinear energy superpositions”.
Namely, differently than in (<ref>),
the total energy functional considered in <cit.>
is of the form
E:=D+Φ_0(I),
for a suitable function Φ_0.
When Φ_0 is linear, the energy functional in (<ref>)
boils down to its “linear counterpart”
given in (<ref>), but for a
nonlinear function Φ_0
the minimizers of the energy functional in (<ref>)
may exhibit[Let us give a brief motivation
for the nonlinear interface case. The classical energy functionals
in <cit.>
may be also considered in view of models arising in population
dynamics.
Namely, one can consider the regions {u>0} and {u≤0}
as areas occupied by two different populations, that have
reciprocal “hostile” feelings. Then, the diffusive behavior
of the populations (which is
encoded by the Dirichlet term of the energy)
is influenced by the fact that the two populations will have the
tendency to minimize the contact between themselves,
and so to reduce an interfacial energy as much as possible.
In this setting, it is natural to consider the case
in which the reaction of the populations to the mutual contact
occurs in a nonlinear way. For instance, the case in which
additional irrationally motivated hostile feelings arise
from further contacts between the populations is naturally
modeled by a superlinear interfacial energy, while the
case in which the interactions between the populations
favor the possibility of compromises and cultural exchanges
is naturally
modeled by a sublinear interfacial energy.]
interesting differences with respect to the classical
case.
A detailed analysis of free boundary problems as in (<ref>)
is given in <cit.>.
Here, we summarize some of the results obtained
(we give here simpler statements, referring
to <cit.>
for more general results).
We take Φ_0:[0,+∞)→[0,+∞) to
be monotone, nondecreasing,
lower semicontinuous and coercive – in the sense that
lim_t↗+∞Φ_0(t)=+∞.
We will also use the notation of writing Per_σ
for every σ∈[0,1], with the convention that
* when σ∈(0,1), Per_σ
is the nonlocal perimeter defined in (<ref>);
* when σ=1, Per_σ is the classical
perimeter;
* when σ=0, Per_σ(E;Ω)=
ℒ^n(E∩Ω).
Then, in the spirit of (<ref>), we consider energy functionals of the form
E(u):=∫_Ω |∇ u(x)|^2 dx+Φ_0(
Per_σ( {u>0},Ω)).
Notice that, for σ∈(0,1) and Φ_0(t)=t,
the energy in (<ref>) reduces to that in (<ref>).
Similarly, for σ=0 and σ=1,
the energy in (<ref>)
boils down
to those in <cit.> and <cit.>, respectively.
When σ=0,
a particularly interesting case of nonlinearity is given by Φ_0(t)=
t^n-1/n. Indeed, in this case, the interfacial energy
depends on the n-dimensional Lebesgue measure,
but it scales like an (n-1)-dimensional surface measure
(also, by Isoperimetric Inequality, the energy levels
of the functional in <cit.>
are above those in (<ref>)).
We point out that the free boundary problems in (<ref>)
develop a sort of natural instability, in the sense that
minimizers in a large ball, when restricted to smaller balls,
may lose their minimizing properties. In fact, minimizers
in large and small balls may be rather different from each
other:
[Theorem 1.1
in <cit.>]
There exist
a nonlinearity Φ_0
and radii R_0>r_0>0 such that
a minimizer
for (<ref>) in Ω:=B_R_0
is not a minimizer
for (<ref>) in Ω:=B_r_0.
The counterexample in Theorem <ref>, which clearly shows the
lack[Just to recall the importance of scaling invariances in the classical free
boundary problems, let us quote page 114 of <cit.>:
“for (small) balls B_r [...] let us assume (see 3.1) B_r=B_1(0)”.]
of scaling invariance of the problem,
is constructed by taking advantage of the different
rates of scaling produced by a suitable nonlinear function Φ_0,
chosen to be constant on an interval.
Namely, the “saddle function” in the plane
u_0(x_1,x_2)=x_1x_2 is harmonic and therefore
minimizes the Dirichlet energy of (<ref>).
In large balls, the interface of u_0 (as well as the ones
of its competitors) produces a contribution that
lies in the constant part of Φ_0, thus reducing
the minimization problem of (<ref>) to the one
coming from the Dirichlet contribution, and so favoring u_0
itself. Vice versa, in small balls, the interface
of u_0 produces more (possibly fractional) perimeter than
the one of the competitors whose positivity sets do not
come to the origin, and this fact implies that u_0
is not a minimizer in small balls.
This argument, which is depicted in Figures <ref> (A) and (B),
is rigorously explained in Section 3
of <cit.>.
In spite of this instability and of the lack of self-similar properties
of the energy functional, some regularity results
for minimizers of (<ref>) still hold true, under appropriate
assumptions on the nonlinearity Φ_0
(notice that, for Φ_0 with a constant part,
Theorem <ref> would produce the minimizer u_0
whose free boundary is a singular
cone, see Figure <ref> (A), hence any regularity
result on the free boundary has to rule
out this possibility by a suitable assumption on Φ_0).
In this sense, we have the following results:
[Corollary 1.4 and Theorems 1.5 and 1.6
in <cit.>]
Let σ∈(0,1], Φ_0 be Lipschitz continuous
and strictly increasing.
Let u_⋆ be a minimizer of (<ref>)
in Ω:=B_R, with 0≤ u_⋆≤ M
on ∂ B_R, for some M>0.
Then, u_⋆∈ C^1-σ/2(B_R/4).
Also, for any r∈(0,R/4),
 min{ℒ^n(B_r∩{u_⋆≥0}), ℒ^n(B_r∩{u_⋆=0}) }≥ cr^n,
for some c>0.
For σ=0, a result similar to that in Theorem <ref>
holds true, in
the sense that u_⋆ is Lipschitz,
see Theorems 1.3, 8.1 and 9.2 in <cit.>.
Moreover, in this case one obtains additional results, such as the
nondegeneracy of the minimizers,
the partial regularity of the free boundary and the
full regularity in the plane:
[Theorems 1.4, 1.6 and 1.7
in <cit.>]
Let σ=0, Φ_0 be Lipschitz continuous
and strictly increasing.
Let u_⋆ be a minimizer of (<ref>)
in Ω, with 0∈∂{u_⋆>0}
(in the measure theoretic sense).
Then, for any D⋐Ω,
there exists c>0 such that for any r>0 for which B_r⋐ D,
it holds that
∫_B_r∩{u_⋆>0} u_⋆^2(x) dx≥ c r^n+2.
Also, ∇ u_⋆ is locally BMO, in the sense that
sup_B_r⋐ D ⨍_B_r| ∇ u_⋆(x)-⟨∇ u_⋆⟩_r | dx
≤ C,
for some C>0, where
⟨∇ u_⋆⟩_r:=⨍_B_r∇ u_⋆ (x) dx.
In addition ℋ^n-1(B_r∩
(∂{u_⋆>0})) <+∞.
Finally, if n=2, then B_r∩
(∂{u_⋆>0}) is made of continuously differentiable curves.
The BMO-type regularity and the partial regularity
of the free boundary in Theorem <ref>
rely in turn on some techniques developed in <cit.>.
It is also interesting to remark that the case σ=0
recovers the classical problems in <cit.>
after a blow-up:
[Theorem 1.5 and Proposition 10.1
in <cit.>]
Let σ=0, Φ_0 be Lipschitz continuous
and strictly increasing.
Let u_⋆ be a minimizer of (<ref>)
in Ω, with 0∈Ω.
For any r>0, let u_r(x):=
u_⋆ (rx)/r.
Then, there exists the blow-up limit
u_0(x):=lim_r↘0 u_r(x).
Also, u_0 is continuous and with linear growth,
and it is a minimizer of the functional
E_0(u):=
∫_B_ρ |∇ u(x)|^2 dx+
λ_0 ℒ^n(B_ρ∩{u>0}),
where
λ_0:= Φ_0'(
ℒ^n(Ω∩{u_⋆>0})
).
We stress that the energy functional in (<ref>)
coincides with that analyzed in the classical paper <cit.>.
Nevertheless, the “scaling constant” λ_0
in (<ref>) depends on the original minimizer u_⋆,
as prescribed by (<ref>) (only in the case of a
linear Φ_0, we have that λ_0
is a structural constant independent of u_⋆).
The fact that geometric and physical quantities
arising in this type of problem are not universal constants
but depend on the minimizer itself
is, in our opinion, an intriguing feature of this type of
problems. In this sense, we recall that in <cit.>
the free boundary condition coincides with the classical
Bernoulli's law, namely
the normal jump J_⋆:=
|∇ u_⋆^+|^2-|∇ u_⋆^-|^2
along the smooth points
of the free boundary is constant
(in <cit.>
it coincides with the mean curvature
of the free boundary).
Differently from the classical cases,
in our nonlinear setting, the free boundary
condition depends on the minimizer itself.
Indeed,
in our case the normal jump J_⋆ coincides with
𝒦^σ Φ_0'(
Per_σ({u_⋆>0},Ω)),
where 𝒦^σ is the
nonlocal mean curvature of the free boundary, as defined
in (<ref>)
(see formula (1.12) in <cit.>
and formulas (1.13) and (1.14) in <cit.>).
We point out that (<ref>)
recovers the classical cases in <cit.>
when σ∈{0,1} and Φ_0 is linear.
On the other hand, when Φ_0 is not linear,
the free boundary condition in (<ref>)
takes into account the global behavior of the free boundary
and the (possibly fractional) perimeter of the minimizer
in the whole of the domain. In this sense,
this type of condition is “self-driven”, since it is influenced
by the minimizer itself and not only by the environmental conditions
and the structural constants.
§ REGULARITY OF STATIONARY POINTS OF THE ALT-CAFFARELLI FUNCTIONAL
In this section we would like to discuss some recent results on the further connections
between the Alt-Caffarelli problem and the minimal surfaces.
More specifically, we consider the stationary points (in particular minimizers) of the functional
E_AC[u]=∫_Ω|∇ u|^2+λ^2χ_{u>0}
and the capillary surfaces in the sphere of radius λ
(notice that the critical points of the functional in (<ref>)
are related to the system
in (<ref>), see Theorem 2.5 in <cit.> for details on this).
The starting point of our analysis is to study the classical capillary drop problem.
§.§ Capillary drop problem
We first revisit the sessile drop problem and its higher dimensional analogue.
We consider the functional
J(E):=∫_Ω_0|Dχ_E|+g∫_Ω_0x_n+1χ_Edx-∫_∂Ω_0λ(x')χ_Edx'
where
Ω_0:={x=(x', x_n+1), x_n+1>0}, x'=(x_1, …, x_n)∈^n,
g>0 is a given constant, |λ(x')|<1, and
χ_E is the characteristic function of E∈𝒜, where
𝒜:={E⊂Ω_0 s.t. E has finite perimeter and ℋ^n+1(E)=V}.
Here, the parameter V>0 is the volume fraction
of the droplet.
For n=2, the functional in (<ref>) is related to
the sessile drop problem, i.e. the problem of a
(three-dimensional)
capillarity drop occupying the set E and sitting in the
halfspace {x_n+1>0}.
We observe that the first term in J(E) is the
energy due to the surface tension,
the second term is the gravitational energy and the
last term is the wetting energy which produces a contact angle
θ(x') such that cosθ(x')=λ(x')
(see Figure <ref>).
By a Taylor expansion, we see that
√(1+|∇ u|^2)=1+1/2|∇ u|^2+…
Hence, if ∂ E is the graph of a (smooth) function u≥0 (with small gradient),
we obtain the approximation
∫_Ω_0|Dχ_E|=
ℋ^n( (∂ E)∩Ω_0)=
∫_(∂Ω_0)∩{u>0}√(1+|∇ u|^2)
=
∫_∂Ω_0χ_{u>0}+1/2∫_∂Ω_0|∇ u|^2+…
In other words, the functional E_AC
in (<ref>)
is the linearization of the sessile drop problem
described by the functional J in (<ref>), with no gravity and constant
wetting energy density. This suggests
that there must be a strong link between the regularity of the minimizers of E_AC and the minimal surfaces.
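This linearization step can be checked symbolically in one stroke; the following minimal sketch (in Python, with p standing for |∇ u|) expands the area integrand for small gradients:

```python
import sympy as sp

p = sp.symbols('p', real=True)          # p plays the role of |grad u|
area_integrand = sp.sqrt(1 + p**2)      # integrand of the perimeter term
# Expansion around a flat interface (small gradient):
print(sp.series(area_integrand, p, 0, 6))
# 1 + p**2/2 - p**4/8 + O(p**6): the constant term recovers the wetted area,
# the quadratic term recovers the Dirichlet energy appearing in E_AC.
```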
We will now discuss[For completeness,
we recall that a nonlocal capillarity theory has been
recently developed in <cit.>.] in which sense this link rigorously occurs.
§.§ Homogeneity of blow-ups and the support function
The first of such direct links was established in <cit.>, where it
is shown that the singular axisymmetric critical point of E_AC is an energy minimizer in dimension 7. This
singular energy minimizer
of the Alt-Caffarelli problem can be seen as the analog of the Simons cone
S={x∈^8 : ∑_i=1^4x_i^2=∑_i=5^8x_i^2 },
which is an example of a singular hypersurface of least perimeter in dimension 8. The minimality of the Simons cone was
first proved by E. Bombieri, E. De Giorgi and E. Giusti in <cit.>.
The cones with non-negative mean curvature arise naturally in the blow-up procedure
of the minimizer u at a free boundary point. By Weiss' monotonicity formula (see <cit.>),
any blow-up
limit u_0 of an energy minimizer of E_AC
is defined on ^n and
must be a homogeneous function of degree one.
Let us write
u_0(x)=rg(σ)
where σ∈𝕊^n-1 (being 𝕊^n-1, as usual,
the unit sphere in ^n). Since u_0 is also a global minimizer of E_AC, it follows that Δ u_0=0 in Ω^+={u_0>0}⊂^n.
Rewriting the equation Δ u_0=0 in polar coordinates we infer that g
is a solution of the equation
Δ_𝕊^n-1g+(n-1) g=0.
Here Δ_𝕊^n-1g is the Laplace-Beltrami operator
on the sphere.
We observe that u_0>0 if and only if g>0 and g=0 on the free boundary of u_0 which is a
cone due to the homogeneity of u_0.
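As a quick sanity check in the planar case n=2 (a minimal sketch; the choice g(θ)=cos θ is purely illustrative), one verifies that g solves the eigenvalue equation and that u_0=rg is indeed harmonic:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
g = sp.cos(th)                    # solves g'' + (n-1) g = 0 on S^1 for n = 2
u0 = r * g                        # degree-one homogeneous candidate blow-up
# Laplacian in polar coordinates on the plane:
lap = sp.diff(u0, r, 2) + sp.diff(u0, r) / r + sp.diff(u0, th, 2) / r**2
print(sp.simplify(lap))           # 0, i.e. u0 = x_1 is harmonic in {u0 > 0}
```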
Equation (<ref>) can be rewritten as
Trace[g_ij+δ_ijg]=0.
It is well-known that g determines a hypersurface S
via the parameterization
𝕊^n-1∋σ⟼ X(σ):=σ g(σ)+∇_σ g(σ),
see <cit.>.
In addition, we have that S and the sphere λ𝕊^n-1 are perpendicular at the
contact points, see <cit.>.
In this sense, one can interpret g as the Minkowski support function of the surface S. In other words
X(σ)·σ =g(σ), which is the distance of the tangent plane with normal σ from the origin.
§.§ The mean radius equation
The previous discussion tells us that the sum of the principal radii of the surface S
is zero. Indeed, let κ_i=1/R_i, i=1, 2, …, n-1 be the ith principal curvature
of S and R_i the corresponding principal radius. Then, in view of (<ref>),
it holds that
the matrix g_ij+δ_ijg
has eigenvalues 1/κ_1,…,1/κ_n-1,
and so its trace is equal to ∑_i=1^n-11/κ_i.
From this and (<ref>), we thus obtain that
∑_i=1^n-1R_i=∑_i=1^n-11/κ_i=0 in {g>0}.
This is called the mean radius equation.
Recalling (<ref>),
the free boundary condition given in <cit.> (and corresponding
to the constancy of |∇ u_0|) along {g=0}
now becomes
|∇_σ g|=λ.
This means that
the contact set {g=0} of the surface S lies on the sphere of radius λ.
We point out that in dimension n=3 (so that S is a two-dimensional surface) formula (<ref>) reduces to
(κ_1+κ_2)/(κ_1κ_2)=0
and therefore the mean curvature vanishes whenever the Gauss
curvature is nonzero (i.e., κ_1κ_2≠0).
If n≥ 4, then such a simple interpretation is not possible.
In terms of the classification of global cones for the Alt-Caffarelli problem, we recall that
the
open question is whether for n=5, 6 the stable solution of the mean radius equation
in λ𝕊^n-1 such that |∇_σ g|=λ on g=0 is
the disk passing through the origin (when n=2, this question is settled in <cit.>,
the case n=3 was addressed in <cit.> and n=4 was proved in <cit.>).
An open and challenging question is to classify the stationary solutions
g and
the corresponding zero mean radius surfaces of given topological type.
§.§ Flame models
A closely related problem is the behavior of the solutions to
the singular perturbation problem
Δ u_ε(x)=β_ε(u_ε) in B_1, |u_ε|≤ 1 in B_1,
where ε>0 is small and the kernel
β_ε(t)=(1/ε)β(t/ε), with β(t)≥ 0, supp β⊂ [0, 1] and ∫_0^1β(t)dt= M>0,
is an approximation of the Dirac measure.
It is well known that (<ref>)
models propagation
of equidiffusional premixed flames with high activation energy, see <cit.>.
The limit u_0:=lim_ε_j→ 0 u_ε_j (for a suitable sequence ε_j→ 0) solves a Bernoulli-type free boundary problem
with the following free boundary condition
| u^+|^2-| u^-|^2=2M.
In fact, it holds that u_0 is a stationary point of the Alt-Caffarelli functional
in (<ref>).
If we choose {u_ε} to be a family of minimizers of the functional
E_ε[u_ε]:=∫_Ω( |∇ u_ε|^2/2
+ B(u_ε/ε) ) dx, B(t)=∫_0^tβ(s)ds,
then u_ε inherits the generic features of the
Alt-Caffarelli minimizers as described in <cit.>
(e.g. non-degeneracy, rectifiability of the free boundary ∂{u_ε>0}, etc.).
Consequently, by
sending ε→ 0, one can see that the limit u exists and it is a minimizer of
the Alt-Caffarelli functional
∫_B_1|∇ u|^2+2Mχ_{u>0}.
As it was mentioned above, the singular set of minimizers is empty in dimensions 2, 3 and 4. However, if u_ε is not a minimizer, then not much is known about the classification of the blow-ups of
the limit function u. An interesting question is to classify these stationary points of given topological type or Morse index. One recent result in this direction is that if
the associated surface S constructed via the support function is of ring type, then
S is a piece of a catenoid, see <cit.>.
[AC81] H. W. Alt and L. A. Caffarelli. Existence and regularity for a minimum problem with free boundary. J. Reine Angew. Math., 325:105–144, 1981.
[ACF84] H. W. Alt, L. A. Caffarelli, and A. Friedman. Variational problems with two phases and their free boundaries. Trans. Amer. Math. Soc., 282(2):431–461, 1984. http://dx.doi.org/10.2307/1999245
[ACKS01] I. Athanasopoulos, L. A. Caffarelli, C. Kenig, and S. Salsa. An area-Dirichlet integral minimization problem. Comm. Pure Appl. Math., 54(4):479–499, 2001.
[ADPM11] L. Ambrosio, G. De Philippis, and L. Martinazzi. Gamma-convergence of nonlocal perimeter functionals. Manuscripta Math., 134(3-4):377–403, 2011. http://dx.doi.org/10.1007/s00229-010-0399-4
[Ale39] A. Alexandroff. Über die Oberflächenfunktion eines konvexen Körpers. (Bemerkung zur Arbeit “Zur Theorie der gemischten Volumina von konvexen Körpern”). Rec. Math. N.S. [Mat. Sbornik], 6(48):167–174, 1939.
[AV14] N. Abatangelo and E. Valdinoci. A notion of nonlocal curvature. Numer. Funct. Anal. Optim., 35(7-9):793–815, 2014. http://dx.doi.org/10.1080/01630563.2014.901837
[BBM02] J. Bourgain, H. Brezis, and P. Mironescu. Limiting embedding theorems for W^s,p when s↑1 and applications. J. Anal. Math., 87:77–101, 2002. http://dx.doi.org/10.1007/BF02868470
[BDGG69] E. Bombieri, E. De Giorgi, and E. Giusti. Minimal cones and the Bernstein problem. Invent. Math., 7:243–268, 1969. http://dx.doi.org/10.1007/BF01404309
[BFV14] B. Barrios, A. Figalli, and E. Valdinoci. Bootstrap regularity for integro-differential operators and its application to nonlocal minimal surfaces. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 13(3):609–639, 2014.
[BLV16] C. Bucur, L. Lombardini, and E. Valdinoci. Complete stickiness of nonlocal minimal surfaces for small values of the fractional parameter. ArXiv e-prints, December 2016.
[BN16] H. Brezis and H.-M. Nguyen. The BBM formula revisited. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl., 27(4):515–533, 2016. http://dx.doi.org/10.4171/RLM/746
[Caf95] L. A. Caffarelli. Uniform Lipschitz regularity of a singular perturbation problem. Differential Integral Equations, 8(7):1585–1590, 1995.
[CDSS16] L. Caffarelli, D. De Silva, and O. Savin. Obstacle-type problems for minimal surfaces. Comm. Partial Differential Equations, 41(8):1303–1323, 2016. http://dx.doi.org/10.1080/03605302.2016.1192646
[CJK04] L. A. Caffarelli, D. Jerison, and C. E. Kenig. Global energy minimizers for free boundary problems and full regularity in three dimensions. In Noncompact problems at the intersection of geometry, analysis, and topology, volume 350 of Contemp. Math., pages 83–97. Amer. Math. Soc., Providence, RI, 2004. http://dx.doi.org/10.1090/conm/350/06339
[CRS10a] L. Caffarelli, J.-M. Roquejoffre, and O. Savin. Nonlocal minimal surfaces. Comm. Pure Appl. Math., 63(9):1111–1144, 2010. http://dx.doi.org/10.1002/cpa.20331
[CRS10b] L. A. Caffarelli, J.-M. Roquejoffre, and Y. Sire. Variational problems for free boundaries for the fractional Laplacian. J. Eur. Math. Soc. (JEMS), 12(5):1151–1179, 2010. http://dx.doi.org/10.4171/JEMS/226
[CSV15] L. Caffarelli, O. Savin, and E. Valdinoci. Minimization of a fractional perimeter-Dirichlet integral functional. Ann. Inst. H. Poincaré Anal. Non Linéaire, 32(4):901–924, 2015. http://dx.doi.org/10.1016/j.anihpc.2014.04.004
[CV11] L. Caffarelli and E. Valdinoci. Uniform estimates and limiting arguments for nonlocal minimal surfaces. Calc. Var. Partial Differential Equations, 41(1-2):203–240, 2011. http://dx.doi.org/10.1007/s00526-010-0359-6
[CV13] L. Caffarelli and E. Valdinoci. Regularity properties of nonlocal minimal surfaces via limiting arguments. Adv. Math., 248:843–871, 2013. http://dx.doi.org/10.1016/j.aim.2013.08.007
[Dáv02] J. Dávila. On an open question about functions of bounded variation. Calc. Var. Partial Differential Equations, 15(4):519–527, 2002. http://dx.doi.org/10.1007/s005260100135
[DFPV13] S. Dipierro, A. Figalli, G. Palatucci, and E. Valdinoci. Asymptotics of the s-perimeter as s↘0. Discrete Contin. Dyn. Syst., 33(7):2777–2790, 2013. http://dx.doi.org/10.3934/dcds.2013.33.2777
[Dip14] S. Dipierro. Asymptotics of fractional perimeter functionals and related problems. Rend. Semin. Mat. Univ. Politec. Torino, 72(1-2):3–16, 2014.
[DK15] S. Dipierro and A. L. Karakhanyan. Stratification of free boundary points for a two-phase variational problem. ArXiv e-prints, August 2015.
[DKV15] S. Dipierro, A. Karakhanyan, and E. Valdinoci. A class of unstable free boundary problems. ArXiv e-prints, December 2015.
[DKV16] S. Dipierro, A. Karakhanyan, and E. Valdinoci. A nonlinear free boundary problem with a self-driven Bernoulli condition. ArXiv e-prints, November 2016.
[DLV] S. Dipierro, L. Lombardini, and E. Valdinoci. A free boundary problem: superposition of nonlocal energy plus classical perimeter. Preprint.
[DMV16] S. Dipierro, F. Maggi, and E. Valdinoci. Asymptotic expansions of the contact angle in nonlocal capillarity problems. ArXiv e-prints, September 2016.
[DNPV12] E. Di Nezza, G. Palatucci, and E. Valdinoci. Hitchhiker's guide to the fractional Sobolev spaces. Bull. Sci. Math., 136(5):521–573, 2012. http://dx.doi.org/10.1016/j.bulsci.2011.12.004
[DSJ09] D. De Silva and D. Jerison. A singular energy minimizing free boundary. J. Reine Angew. Math., 635:1–21, 2009. http://dx.doi.org/10.1515/CRELLE.2009.074
[DSR12] D. De Silva and J. M. Roquejoffre. Regularity in a one-phase free boundary problem for the fractional Laplacian. Ann. Inst. H. Poincaré Anal. Non Linéaire, 29(3):335–367, 2012. http://dx.doi.org/10.1016/j.anihpc.2011.11.003
[DSV15] S. Dipierro, O. Savin, and E. Valdinoci. A nonlocal free boundary problem. SIAM J. Math. Anal., 47(6):4559–4605, 2015. http://dx.doi.org/10.1137/140999712
[DSV16a] S. Dipierro, O. Savin, and E. Valdinoci. Boundary behavior of nonlocal minimal surfaces. J. Funct. Anal., 2016. http://www.sciencedirect.com/science/article/pii/S0022123616303858
[DSV16b] S. Dipierro, O. Savin, and E. Valdinoci. Graph properties for nonlocal minimal surfaces. Calc. Var. Partial Differential Equations, 55(4): Paper No. 86, 25 pp., 2016. http://dx.doi.org/10.1007/s00526-016-1020-9
[DV15] S. Dipierro and E. Valdinoci. On a fractional harmonic replacement. Discrete Contin. Dyn. Syst., 35(8):3377–3392, 2015. http://dx.doi.org/10.3934/dcds.2015.35.3377
[DV16] S. Dipierro and E. Valdinoci. Continuity and density results for a one-phase nonlocal free boundary problem. Ann. Inst. H. Poincaré Anal. Non Linéaire, 2016. http://www.sciencedirect.com/science/article/pii/S0294144916300853
[DV17] S. Dipierro and E. Valdinoci. Nonlocal minimal surfaces: interior regularity, quantitative estimates and boundary stickiness. In Recent Developments in the Nonlocal Theory (T. Kuusi, G. Palatucci, eds.). Book Series on Measure Theory. De Gruyter, Berlin, 2017.
[FV15] A. Figalli and E. Valdinoci. Regularity and Bernstein-type results for nonlocal minimal surfaces. J. Reine Angew. Math., 2015. https://www.degruyter.com/view/j/crll.ahead-of-print/crelle-2015-0006/crelle-2015-0006.xml
[JS15] D. Jerison and O. Savin. Some remarks on stability of cones for the one-phase free boundary problem. Geom. Funct. Anal., 25(4):1240–1257, 2015. http://dx.doi.org/10.1007/s00039-015-0335-6
[Kar16] A. Karakhanyan. Minimal surfaces arising in singular perturbation problems. Preprint, 2016.
[MS02] V. Maz'ya and T. Shaposhnikova. On the Bourgain, Brezis, and Mironescu theorem concerning limiting embeddings of fractional Sobolev spaces. J. Funct. Anal., 195(2):230–238, 2002. http://dx.doi.org/10.1006/jfan.2002.3955
[MV16] F. Maggi and E. Valdinoci. Capillarity problems with nonlocal surface tension energies. ArXiv e-prints, June 2016.
[Pon04] A. C. Ponce. A new approach to Sobolev spaces and connections to Γ-convergence. Calc. Var. Partial Differential Equations, 19(3):229–255, 2004. http://dx.doi.org/10.1007/s00526-003-0195-z
[Ser71] J. Serrin. A symmetry problem in potential theory. Arch. Rational Mech. Anal., 43:304–318, 1971.
[SV13a] O. Savin and E. Valdinoci. Regularity of nonlocal minimal cones in dimension 2. Calc. Var. Partial Differential Equations, 48(1-2):33–39, 2013. http://dx.doi.org/10.1007/s00526-012-0539-7
[SV13b] O. Savin and E. Valdinoci. Some monotonicity results for minimizers in the calculus of variations. J. Funct. Anal., 264(10):2469–2496, 2013. http://dx.doi.org/10.1016/j.jfa.2013.02.005
[Wei98] G. S. Weiss. Partial regularity for weak solutions of an elliptic free boundary problem. Comm. Partial Differential Equations, 23(3-4):439–455, 1998. http://dx.doi.org/10.1080/03605309808821352
http://arxiv.org/abs/1701.08097v1 | 20170127160647 | Dissipation dynamics with two distinct chaotic baths | [
"Diptanil Roy",
"A. V. Anil Kumar"
] | nlin.CD | [
"nlin.CD",
"cond-mat.stat-mech"
] |
School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni - 752050, India
Dissipation using a finite environment coupled to a single harmonic
oscillator has been studied quite extensively. We extend this study by looking at the dynamics of the dissipation
when we introduce a second bath of N identical
quartic systems, distinct from the first bath. We look at the energy flow into the environment as a function of the
chaotic parameters of the baths and also develop a
linear response theory to describe the system. The energy flow into the more chaotic bath is always larger, irrespective of the
initial energies of the baths.
05.20.Gg, 05.45.Ac, 05.40.Jc
Dissipation dynamics with two distinct chaotic baths
A.V. Anil Kumar
====================================================
§ INTRODUCTION
The dissipation of energy occurs in diverse systems and it is one of the most fundamental processes in
dissipative dynamical systems. In the Langevin description of
Brownian motion, the damping force together with the randomly fluctuating force are used to model dissipation<cit.>.
These forces represent the effect of the collisions of the system
with the particles in the thermal bath and are related by the fluctuation-dissipation theorem<cit.>.
The dissipative dynamics of physical systems have been studied extensively<cit.>.
In modelling energy dissipation, these studies used a small system coupled to the environment.
The environment can be of two types : one with an infinite collection of modes<cit.> and another
with a small number of chaotic modes<cit.>. These investigations have led to several
interesting results. For example, Wilkinson<cit.> showed that the rate of exchange of energy between the system
and the chaotic bath depends on the classical motion of the particles comprising the chaotic bath. The rate of energy
exchange increases when the bath shows chaotic motion. Later Marchiori and de Aguiar showed that energy dissipation occurs in the chaotic
regime even if the bath is composed of small number of particles<cit.>. Xavier et al.<cit.> also improved upon
the model in <cit.> and showed that the damping rate can be expressed in terms of the mean Lyapunov exponent of
the chaotic bath. Also they have shown that the dissipation is more effective if there is resonance between the system and bath
frequencies.
In this paper, we extend this study to the case of a small system coupled to multiple baths. The purpose of this investigation
is to study the dissipative dynamics of the system if it is connected to two distinct baths with different chaoticity. We use linear
response theory following the approach presented in <cit.>. We also carry out numerical simulations and compare
the theoretical results with those obtained from simulations.
§ THE MODEL
Our model comprises a particle, with generalised coordinates (q, p), in a one-dimensional harmonic oscillator potential with angular frequency ω_0. The
particle is connected to two baths consisting of N quartic oscillators each. The two baths are not connected to each other, but are coupled indirectly through the harmonic
oscillator. The quartic oscillators are two-dimensional and the n-th particle of the i-th bath is characterised by the generalised
coordinates (q⃗_n_i = x_n_iî + y_n_iĵ, p⃗_n_i = p_x_n_iî + p_y_n_iĵ). The interaction between the baths
and the system is through a coupling term of the form H_I, where λ_N = λ/√(N) is a measure of the effective coupling <cit.>. The Hamiltonian governing
the dynamics of the system can be written as
H = H_HO + H_E_1 + H_E_2 + H_I
H_HO = p^2/2m + mω_0^2q^2/2
H_E_i = ∑_n=1^N[ p_x_n_i^2 + p_y_n_i^2/2 + a_i/4(x_n_i^4 + y_n_i^4) + x_n_i^2y_n_i^2/2]
H_I = ∑_n=1^Nλ_Nq(x_n_1 + x_n_2)
The total Hamiltonian is conservative, but the system can exchange energy with the baths and hence behave dissipatively. The non-linear dynamics of the baths is determined
completely by the parameters a_i where the systems are integrable for a_i = 1.0, mildly chaotic for a_i = 0.1 and strongly chaotic for a_i = 0.01. We numerically
solve the 8N+2 equations of motion arising from the Hamiltonians H_HO, H_E_1, H_E_2 and H_I respectively and compare our analytical results to them. The chaotic
dynamics of the baths acts as a sink of energy for the system. A temperature for the environment can be defined by using the general equipartition theorem, where every quadratic
degree of freedom has a contribution of k_BT/2 to the total energy E. Here, T represents the equilibrium temperature.
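For instance, the bath temperature can be estimated from time averages of the momenta; the following minimal sketch assumes k_B=1 units and synthetic momentum samples for illustration:

```python
import numpy as np

kB = 1.0  # Boltzmann constant in simulation units (assumption)

def bath_temperature(px, py):
    """Equipartition estimate of T for one bath.

    Each quadratic (momentum) degree of freedom carries kB*T/2,
    i.e. <p^2/2> = kB*T/2, so kB*T = <p^2> averaged over particles,
    components and time.
    px, py: arrays of shape (time_steps, N) sampled from the simulation.
    """
    return np.mean(np.concatenate([px.ravel()**2, py.ravel()**2])) / kB

# Example with synthetic equilibrium data at T = 2:
rng = np.random.default_rng(0)
px = rng.normal(scale=np.sqrt(2.0), size=(1000, 100))
py = rng.normal(scale=np.sqrt(2.0), size=(1000, 100))
print(bath_temperature(px, py))   # ~2.0
```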
Before we proceed to the results for our model, it is imperative to discuss the behaviour of a particle in a harmonic oscillator potential coupled to only one bath. We will compare
and contrast our results to those provided in <cit.>. When a = 1.0, the system is in the integrable regime and the environment has very
little effect, causing a slight decrease in the oscillator energy. With a=0.5, the energy loss is more and there are slightly pronounced oscillations. The equilibrium
energy value for such cases is 0.8 E_0 <cit.>. For example, for a bath with N = 100 QOs <cit.> and a = 1, the energy of the system
decreases a bit from E_S(0) and performs tiny oscillations around the decreased value. For a=0.1, the energy decreases further with more prominent oscillations with
increasing time. For a = 0.01, the situation however is completely different. The energy decay for such a system is exponential and can be
fit to E_S(t) ≈ E_S(0)e^-γ t where γ is a function of the bath properties and ω_0.
For small values of N, the indirect coupling does not have much effect on the system unless the coupling is strongly chaotic. For example, when
the coupling parameter is set in the chaotic regime, we cannot characterise the behaviour by a single realisation for 1 ≤ N ≤ 6 because of large
fluctuations. For 7 ≤ N ≤ 20 however, the onset of dissipation and the transition to an exponential decay are apparent, though the energy fluctuation
is still large. With increasing N, the system starts to behave differently, allowing dissipation to occur. The effect of different values of the coupling are discussed above.
As is apparent from the numerical results, the value of E_S(t) for large N becomes independent of N over large times. This is possible only when the
coupling term λ_N falls with N as N →∞. For one bath, it has been shown in <cit.> that λ_N = λ/√(N) using
Linear response theory. For large N, the system equilibrates with the environment and the equilibrium energy distribution is Boltzmann-like. This allows us to define a
temperature; however it is not possible in the regular or mixed regimes.
§ THEORY
We will try to understand the flow of energy to the two different baths using linear response theory (LRT) <cit.>. We will follow the formalism developed
in <cit.> to write the final equation of motion for the system.
The HO in hamiltonian <ref> satisfies the following equation of motion
q̈ + ω_0^2q = -λ_N/m∑_n = 1^N{x_n^(1)(t) + x_n^(2)(t)}≡ -λ_N/mX(t)
We consider the HO to be perturbed by an external force given by -λ_N/mX(t). Under the assumption that the bath coordinates are chaotic, we can
replace X(t) with ⟨ X(t) ⟩. LRT has been used to determine ⟨ X(t) ⟩ in terms of the response function. The dynamics of the system is
determined by the Liouville equations.
The integral form of the Liouville equation is given by
ρ(t) = e^i(t-t_0)L_0ρ(t_0) + i∫_t_0^te^i(t-s)L_0L_I(s)ρ(s)ds
where L_0 and L_I are the Liouville operators.
The Hamiltonian <ref> can be written in the form
H = H_E(Q,P) + H_I(Q, P, t)
where Q and P represent the entire set of canonical variables of the environment and H_I(Q, P, t) = A(Q, P)χ(t) where A(Q,P) is the corresponding
displacement (Â = -∂/∂ xĤ(q,p;x)).
Under the approximation of a small perturbation, Eq. <ref> can be expanded to 1^st order in L_I as
ρ(t) = ρ(t_0) + ∫_t_0^te^i(t-s)L_0{H_I(s), ρ(t_0)}ds
The ensemble average ⟨ X(t) ⟩ can thus be written as
⟨ X(t) ⟩ = ⟨ X(t) ⟩_0 - λ_N ∫_0^tϕ_XX(t-s)q(s)ds
where ⟨ X(t) ⟩_0 is equal to 0 due to the parity of H_E.
Therefore, Eq. <ref> reduces to
q̈ + ω_0^2q ≈λ_N^2 N/m∫_0^t(ϕ_xx^(1) + ϕ_xx^(2))q(s)ds
Here ϕ_XX(t-s) = ⟨{X(t), X(s)}⟩_0 and from <cit.>, it follows that
ϕ_XX(t-s) = N(ϕ^(1)_xx + ϕ_xx^(2))
Here, ϕ_xx^(i) is the response function obtained in <cit.> and is given by
ϕ_xx^(i)(t-s) = 5/4E_QS^(i)(0)⟨ x^(i)(t)P_x^(i)(s)⟩_0
+ t-s/4E_QS^(i)(0)⟨ P_x^(i)(t)P_x^(i)(s)⟩_0
or
ϕ_xx^(i)(t-s) = 5/4d/dsC_N^(i)(t,s) + t-s/4d^2/dsdtC_N^(i)(t,s)
where C_N^(i)(t,s) is the correlation function given by
C_N^(i)(t,s) = ⟨∑_n = 1^Nx_n^(i)(t)x_n^(i)(s)/E_QS^(i)(0)⟩
Following <cit.>, we make this approximation
C_N^(i)(t,s) ≈ μ_Ē_i^(i) N δ(t-s)
where A_N^(i) = μ_Ē_i^(i) N is the maximum amplitude of the correlation function. In the time scale of dissipation, this is a valid approximation when
the quartic oscillators are in the chaotic regime. As we transition to the mixed and integrable regimes, the approximation fits poorly; however, we will consider the
delta function approximation in the mixed regime too for ease of calculations.
From Eq. <ref> and Eq. <ref>, we obtain
q̈ + ω_0^2q + γ_Tq̇ = 0
where
γ_T = 7λ_N^2(μ_Ē_1^(1)+μ_Ē_2^(2))N/(4m)
The γ_T we obtain theoretically compares well with the numerical results (Fig. <ref>) we obtained.
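The damping rate γ_T is immediate to evaluate once the correlation amplitudes are known; in the minimal sketch below the amplitudes μ^(1), μ^(2) are hypothetical placeholders, to be fitted from the bath correlation functions in practice:

```python
import numpy as np

N, lam, m = 100, 0.01, 1.0        # values quoted in the simulation section
lam_N = lam / np.sqrt(N)
mu1, mu2 = 0.8, 0.5               # hypothetical amplitudes mu^(1), mu^(2)

gamma_T = 7 * lam_N**2 * (mu1 + mu2) * N / (4 * m)
# Predicted decay of the oscillator energy: E_S(t) ~ E_S(0) exp(-gamma_T t).
# Note lam_N**2 * N = lam**2, so gamma_T is independent of N.
print(gamma_T)
```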
§ NUMERICAL SIMULATIONS
We have also simulated the time evolution of our system by solving 8N+2 equations of motion obtained from the Hamiltonian in Eq. 1 numerically. The second order
differential equations were solved using fourth order Runge-Kutta algorithm in a microcanonical ensemble. The time step has been taken to be 0.4 in each case.
For all the simulations, the value of λ is taken as 0.01. The mass m is taken as 1 for all the oscillators and the angular frequency of the harmonic
oscillator ω_0 is taken to be 0.3. The time is measured in terms of the harmonic oscillator time period τ≈ 20.93.
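A minimal sketch of such a simulation is given below; λ=0.01, m=1, ω_0=0.3 and the time step 0.4 are the values quoted above, while N, the parameters a_i and the initial conditions are illustrative assumptions:

```python
import numpy as np

N = 50                              # oscillators per bath (assumption)
lam, m, w0, dt = 0.01, 1.0, 0.3, 0.4
lam_N = lam / np.sqrt(N)
a = np.array([0.01, 0.1])           # bath chaoticity parameters (assumption)

def deriv(state):
    """Hamiltonian equations of motion for (q, p, x, y, px, py);
    x, y, px, py have shape (2, N), one row per bath."""
    q, p, x, y, px, py = state
    dq = p / m
    dp = -m * w0**2 * q - lam_N * (x[0].sum() + x[1].sum())
    dpx = -a[:, None] * x**3 - x * y**2 - lam_N * q   # coupling acts on x only
    dpy = -a[:, None] * y**3 - x**2 * y
    return (dq, dp, px, py, dpx, dpy)                 # bath masses set to 1

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (f1 + 2*f2 + 2*f3 + f4)
                 for s, f1, f2, f3, f4 in zip(state, k1, k2, k3, k4))

rng = np.random.default_rng(1)
state = (1.0, 0.0,                                  # HO starts at q=1, p=0
         rng.normal(size=(2, N)), rng.normal(size=(2, N)),
         rng.normal(size=(2, N)), rng.normal(size=(2, N)))

for step in range(500):
    state = rk4_step(state, dt)
    if step % 100 == 0:
        q, p = state[0], state[1]
        E_S = p**2 / (2 * m) + m * w0**2 * q**2 / 2   # oscillator energy
        print(step, E_S)
```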
§ RESULTS AND DISCUSSION
The results for our model are obtained both from Linear response theory and numerical simulations. Figure 2 shows the energy of the
harmonic oscillator for four different sets of initial conditions obtained from both simulations and linear response theory. The parameters
for the harmonic oscillator are the same for all four graphs. The theoretical and numerical results are in good agreement with each other. The deviations
can be attributed to the δ-function approximation for the correlation function used in the linear response theory. In the case of one chaotic bath, this has
been improved upon by Xavier et al.<cit.> by incorporating the history and frequency dependence of the system. We intend to incorporate these in our model and
the results will be reported elsewhere. Figure 2 confirms that the harmonic oscillator is dissipative, with energy flowing from the system to the baths. Also, we have done
simulations in which both the baths are not chaotic. In this case, the harmonic oscillator does not dissipate energy considerably into the baths and the energy dissipation happens
only if at least one of the baths is in the chaotic regime.
Next, we have analysed the energy flow from the harmonic oscillator to the chaotic bath. For this, we have calculated the total energy of the system and each bath
at regular intervals. This has been shown in Figure 3 and Figure 4. In Figure 3, we have plotted the total energy of the system and the two baths, both of which are in
chaotic regime, but with different chaotic parameters and initial energies, versus time. In Figure 3(a) the more chaotic bath has less initial energy and the less chaotic bath
has more initial energy. As evident from the figure, more of the dissipated energy from the system goes to the more chaotic bath. Figure 3(b) shows an interesting behaviour
in the energy flow. There, the more chaotic bath has more energy and the less chaotic bath has less energy. So naturally one would expect more energy to flow to the
bath which has less energy (and is less chaotic). However, more of the dissipated energy has flowed into the more chaotic bath even though its energy is higher than that of the other bath.
Figure <ref> shows two graphs which depict the energy flow into the two different baths, one bath in the chaotic regime and the other in the mixed regime. Here also it
is evident that irrespective of the initial energy of the baths, the energy dissipated from the system flows more into the chaotic system. This has been the case with
all the simulations, we have done in our model for different parameters. Also, it has been observed that the bath in the mixed regime attained the steady
state much faster than the chaotic bath.
This result is rather surprising and nonintuitive. Using the general equipartition theorem, we can define a temperature for the environment. Therefore, each bath
contributes E^(i) = Nk_BT/2 to the total energy. Therefore, different initial energies for the two baths essentially mean coupling the
central HO to two baths at two different temperatures, the bath with higher energy being at a higher temperature. In such a case, one would normally expect
more energy to flow to the bath at lower temperature. This is the case if the low-temperature bath is more chaotic. However, the reverse happens if the low-temperature
bath is less chaotic.
The seemingly counterintuitive result may be explained in terms of the entropy of the system. The more chaotic system will
have more entropy and the energy will flow
into the more chaotic bath such that the total entropy of the system will increase. This also causes the more chaotic
system to take more time to reach the steady state, which is evident in Figure 4. More analyses are being
carried out to quantitatively justify this observed result.
§ CONCLUSIONS
The dissipative dynamics of a one-dimensional harmonic oscillator coupled to two distinct baths has been investigated using
linear response theory and numerical simulations. It has been shown that the harmonic oscillator dissipates energy
into the baths if the baths are chaotic. The numerical and theoretical results compare well. We have also analysed the
dissipative energy flow into the baths from the system. It has been observed that more energy flows into the
more chaotic system irrespective of the initial energy of the baths. This rather surprising result may be explained based on
the maximization of entropy of the system. One might be able to justify this more precisely by calculating the dynamical
entropy of the system for different sets of parameters.
[Cortes] E. Cortes, B. J. West and K. Lindenberg, J. Chem. Phys. 82, 2708 (1985).
[Risken] H. Risken, The Fokker-Planck Equation, Springer-Verlag, Berlin (1989).
[Caldeira] A. O. Caldeira and A. J. Leggett, Physica 121A, 587 (1983).
[Wilkinson] M. Wilkinson, J. Phys. A 23, 3603 (1990).
[Berry] M. V. Berry and J. M. Robbins, Proc. R. Soc. London Ser. A 442, 659 (1993).
[Ott] E. Ott, Phys. Rev. Lett. 42, 1628 (1979).
[Brown1] R. Brown, E. Ott and C. Grebogi, Phys. Rev. Lett. 59, 1173 (1987).
[Brown2] R. Brown, E. Ott and C. Grebogi, J. Stat. Phys. 49, 511 (1987).
[Marchiori] M. A. Marchiori and M. A. M. de Aguiar, Phys. Rev. E 83, 061112 (2011).
[Xavier] J. C. Xavier, W. T. Strunz, and M. W. Beims, Phys. Rev. E 92, 022908 (2015).
[Kubo] R. Kubo, M. Toda, and N. Hashitsume, Statistical Physics II (Springer, Heidelberg, 1985).
[Bonanca] M. V. S. Bonanca and M. A. M. de Aguiar, Phys. A 365, 333 (2006).
http://arxiv.org/abs/1701.08039v1 | 20170127124803 | Power dissipation in fractal Feynman-Sierpinski AC circuits | [
"Patricia Alonso Ruiz"
] | math-ph | [
"math-ph",
"math.MP",
"28A80, 31C45: 94C05: 78A25"
] |
This paper studies the concept of power dissipation in infinite graphs and fractals associated with passive linear networks consisting of non-dissipative elements. In particular, we analyze the so-called Feynman-Sierpinski ladder, a fractal AC circuit motivated by Feynman's infinite ladder, that exhibits power dissipation and wave propagation for some frequencies. Power dissipation in this circuit is obtained as a limit of quadratic forms, and the corresponding power dissipation measure associated with harmonic potentials is constructed. The latter measure is proved to be continuous and singular with respect to an appropriate Hausdorff measure defined on the fractal dust of nodes of the network.
§ INTRODUCTION
Passive linear networks have a wide range of applications, and especially electrical circuits have long been intensively studied in different research areas such as electrical engineering <cit.>, physics <cit.> and mathematics <cit.>. In particular for the latter, Dirichlet forms on finite sets and graphs can be interpreted in terms of electric linear networks by considering the current flow between nodes (vertices) connected by resistors (edges). This is the key idea behind the theory of Dirichlet and resistance forms on fractals introduced by Kigami <cit.>. In this context, one may associate these forms with “fractal networks”.
Resistors are just one type of passive components, or impedances, of an electrical network. Impedances are characterized by the fact that they produce no energy by themselves. A resistor is a dissipative element because power is lost (energy is absorbed) when an alternating current runs through it. On the contrary, no loss is caused when the current flows through a non-dissipative element such as an inductor or a capacitor. Finite linear networks consisting only of inductors and capacitors are uninteresting since no power dissipation is expected.
However, what if the network is infinite (as for instance fractal networks are)?
In the 60s, Feynman posed this “amusing question”, see <cit.>; to give an answer, he constructed an infinite ladder network as depicted in Figure <ref>. He found its behavior surprising and noticed a very interesting connection with wave propagation:
depending on the driving frequency of the signal, power will either dissipate, allowing waves to propagate along the network, or it will not dissipate at all, preventing waves from getting through. As a consequence, voltage will either stay constant, merely changing its phase, or it will die away rapidly.
This particular infinite ladder network is what is called a low-pass filter because low frequencies “pass” while higher frequencies are “rejected”. Although such an infinite network cannot actually occur, it is often possible to realize fairly good approximations that have many technical applications, see e.g. <cit.>.
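Feynman's observation is easy to reproduce numerically: assuming the standard ladder with series inductors Z_1=iωL and shunt capacitors Z_2=1/(iωC), adding one more section leaves the infinite network unchanged, so its characteristic impedance satisfies z=Z_1+Z_2z/(Z_2+z), i.e. z=Z_1/2+√(Z_1^2/4+Z_1Z_2). A minimal sketch with illustrative values L=C=1:

```python
import numpy as np

L, C = 1.0, 1.0                        # illustrative component values (assumption)
cutoff = 2 / np.sqrt(L * C)            # expected transition frequency

for w in [0.5, 1.0, 1.5, 1.9, 2.1, 3.0]:
    Z1, Z2 = 1j * w * L, 1 / (1j * w * C)
    # Self-consistency of the infinite ladder: z = Z1 + Z2*z/(Z2 + z)
    z = Z1 / 2 + np.sqrt((Z1 / 2)**2 + Z1 * Z2 + 0j)
    print(f"w = {w:.1f}  Re(z) = {z.real:.3f}  "
          f"{'dissipates' if z.real > 1e-12 else 'no dissipation'}")
# Re(z) > 0 only below the cutoff w = 2/sqrt(LC): the low-pass behavior.
```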
Also fractal structures present unusual, a priori unexpected, physical properties <cit.>. Feynman's example motivated in <cit.> the construction of the so-called Feynman-Sierpinski ladder (F-S ladder for short), see Figure <ref>, as a first prototype of a fractal network consisting solely of inductors and capacitors that exhibits power dissipation, and hence wave propagation, at some frequencies.
The present paper aims to set up the mathematical framework to study the concept of power dissipation in infinite graphs and fractals, working out in detail the case of the F-S ladder. One of the main novelties lies in the fact that passive linear networks are studied in the frequency domain and this requires voltage, current and impedances to be considered as complex quantities. Following the classical intrinsic approach from analysis on fractals, the power dissipation in the F-S ladder will be defined as the limit of a suitable sequence of quadratic forms over complex-valued functions on its finite graph approximations.
A crucial role in this definition is played by the harmonic functions on the fractal dust that represents the nodes of the network. These functions describe the equilibrium potentials in a circuit when a signal is applied to the boundary nodes and they guarantee the existence of the aforementioned limit. Furthermore, proving them to be continuous will allow us to fully define power dissipation for harmonic potentials, as well as to construct the power dissipation measure associated with them. The latter measure will turn out to be singular with respect to an appropriate Hausdorff measure defined on the fractal dust related to the network.
The paper is organized as follows: In section <ref>, we review the classical notion of power dissipation in electric passive linear networks and transfer it to graphs and infinite networks. Here, we recall the construction of the F-S ladder and set the first step towards the definition of power dissipation in this network. Section <ref> discusses some geometric properties of the projection of the F-S ladder onto ^2. In particular, we prove this set to be a fractal quantum graph. In order to complete the definition of power dissipation, we analyze in section <ref> the harmonic potentials and prove in Theorem <ref> that they are continuous functions on the (fractal) set of nodes of the F-S ladder. Finally, section <ref> deals with the construction of a measure associated with power dissipation for harmonic potentials, c.f. Theorem <ref>. Further, we prove in Theorem <ref> that this measure is singular with respect to a suitable Hausdorff measure on the set of nodes of the F-S ladder.
§ BACKGROUND AND PRELIMINARIES
§.§ Complex AC currents and power dissipation
Electric linear networks are characterized by the well-known Ohm's relation V=R· I, where V denotes voltage, I current and R electric resistance. In general, passive linear networks are studied through Fourier transforms in the so-called frequency domain, requiring voltage and current to be time dependent. They are typically considered as complex quantities given by
V(t)=V̂e^i(ω t-φ_V), I(t)=Îe^i(ω t-φ_I),
where ω denotes the frequency and φ_I-φ_V is the phase shift or phase difference. The corresponding generalization of Ohm's relation <cit.> now becomes
V(t)=Z· I(t),
where Z=|Z|e^i(φ_I-φ_V) is called impedance and represents a “complex resistance” whose absolute value depends on the frequency ω. For ease of the notation, we will assume φ_V=0 and write φ:=φ_I for the phase shift, so that Z=|Z|e^iφ.
Due to Kirchhoff's rules, see e.g. <cit.>, the electromotive force of a generator connected to a linear circuit of several impedances satisfies that
ℰ(t)=Z^eff· I(t)=|Z^eff|e^iφ I(t)=|Z^eff|Îe^iω t,
where Z^eff is the so-called effective/characteristic impedance, which represents an impedance equivalent to the initial set.
The power dissipation P, also called energy dissipation or average rate of energy loss, is given by
P =1/T∫_0^T Re(ℰ(t)) Re(I(t)) dt
=1/T∫_0^T|Z^eff|Î^2cos^2(ω t)cosφ dt+1/T∫_0^T|Z^eff|Î^2cos(ω t)sin(ω t)sinφ dt
=1/2|Z^eff|Î^2cosφ=1/2|I(t)|^2 Re(Z^eff).
Notice that this quantity only depends on the real part of the effective impedance. Consequently, a purely complex impedance is called a non-dissipative element because, on average, there is no loss of electrical power (energy) when an alternating current runs through it.
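This average is easy to verify numerically; in the minimal sketch below the values of ω, Z^eff and Î are arbitrary illustrative choices:

```python
import numpy as np

w, Zeff, Ihat = 2.0, 0.6 + 0.8j, 1.5      # illustrative values (assumptions)
phi = np.angle(Zeff)                       # phase shift

t = np.linspace(0.0, 2 * np.pi / w, 200001)
I = Ihat * np.exp(1j * (w * t - phi))      # current
E = Zeff * I                               # electromotive force E(t) = Z_eff I(t)

P_num = np.trapz(E.real * I.real, t) / t[-1]   # time-averaged power
P_formula = 0.5 * abs(Ihat)**2 * Zeff.real     # (1/2)|I|^2 Re(Z_eff)
print(P_num, P_formula)                        # both ~0.675
```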
In order to have non-trivial power dissipation,
it is thus necessary that the effective impedance of the circuit has positive real part. Motivated by the physical concept of power dissipation, we introduce next a quadratic form on graphs resembling this phenomenon.
§.§ Power dissipation in finite graphs
Let us start by considering a simple graph with two vertices x,y joined by an edge {x,y} and a network consisting of a single impedance Z_xy with nonzero real part. In this case, Z_xy coincides with the effective impedance of the circuit.
The voltage across the edge {x,y} is defined as the difference between the potential at each node. Following Ohm's law (<ref>), for any potential function (v(x),v(y))∈ℂ^2 the current flowing from x to y is given by
I_xy=v(y)-v(x)/Z_xy.
Although these quantities actually depend on time, we will consider this parameter fixed and omit it hereafter in the whole discussion.
The power dissipation associated with the network W={Z_xy} is defined as the quadratic form P_ω,W:ℂ^2→ℝ given by
P_ω,W[v]_xy= Re(Z_xy)/(2|Z_xy|^2) |v(x)-v(y)|^2,
equivalently
P_ω,W[v]_xy= cosφ_xy/(2|Z_xy|) |v(x)-v(y)|^2,
where φ_xy is the phase shift associated with Z_xy.
The subindex ω refers to the dependence of this expression on the frequency. For ease of reading we will drop it in the sequel and refer to this dependence explicitly only when confusion may occur.
Once power dissipation is defined for functions on a simple (2-node) network, this concept naturally extends to networks in graphs with several vertices and edges. In order for (<ref>) to provide a one-to-one correspondence between potentials and currents and thus identify functions on edges with functions on vertices, we will restrict our discussion to graphs/networks without multiple edges.
Let us now consider a finite graph G=(V,E) and a network W={Z_xy | {x,y}∈ E} consisting of impedances Z_xy attached to each edge {x,y}. Further, we denote by ℓ(V) the space of complex-valued functions on V.
The quadratic form P_W:ℓ(V)→ℝ given by
P_W[v]:=∑_{x,y}∈ E P_W[v]_xy
is called the power dissipation in G associated with the network W.
In the case of time-independent circuits with purely real impedances (resistors), power dissipation coincides with the classical definition of a resistance/energy form. Indeed, if the quantities Z_xy, I_xy and the function v are real, (<ref>) becomes
P_W[v]=1/2∑_{x,y}∈ E 1/Z_xy (v(x)-v(y))^2,
where Z_xy is the resistance between x and y.
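A direct implementation of this form on a finite graph is straightforward; in the minimal sketch below, the toy triangle, its impedances and the potential are illustrative choices:

```python
def power_dissipation(edges, v):
    """P_W[v] = sum over edges {x,y} of Re(Z_xy)/(2|Z_xy|^2) |v(x)-v(y)|^2.

    edges: dict mapping (x, y) -> complex impedance Z_xy
    v:     dict mapping node -> complex potential
    """
    return sum(Z.real / (2 * abs(Z)**2) * abs(v[x] - v[y])**2
               for (x, y), Z in edges.items())

# Toy triangle: one resistive edge, one inductive and one capacitive edge.
edges = {(0, 1): 2.0 + 0j,    # resistor
         (1, 2): 1j,          # inductor-like, purely imaginary
         (0, 2): -0.5j}       # capacitor-like, purely imaginary
v = {0: 1.0, 1: 0.5 + 0.5j, 2: 0.0}
print(power_dissipation(edges, v))  # only the resistive edge contributes
```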
In the next section we extend the notion of power dissipation to infinite graphs and networks, focusing on the particular case of the Feynman-Sierpinski ladder (F-S ladder), for which some useful computations and results have been obtained in <cit.>.
§.§ Power dissipation in infinite networks. The F-S ladder
The F-S ladder circuit, which we denote by W_F, was introduced in <cit.> as a fractal network whose underlying graph structure is described in Figure <ref>. The infinite graph that arises in the limit can be formally embedded in ^2; existence and geometric properties of this set are discussed in Section <ref>.
Let G_n=(V_n,E_n), n≥ 0, be the graphs displayed in Figure <ref> and let G_∞ denote the limit (in the Gromov-Hausdorff metric) of the sequence {G_n}_n≥ 0. Notice that this limit exists in view of Proposition <ref> and Remark <ref>.
The F-S ladder W_F is the infinite network on G_∞ whose edges have impedances that are capacitors Z_C=1/(iω C) or inductors Z_L=iω L with C,L>0, as shown in Figure <ref>. By convention, the symbol ⊣ ⊢ is employed for capacitors, and a coil symbol for inductors.
As for any passive linear network, power dissipation in _ is only meaningful if its effective impedance Z^eff, which depends on the frequency ω, has positive real part. In the case of the F-S ladder, this is satisfied under the filter condition
9(4-√(15))<2ω^2LC<9(4+√(15)),
c.f. <cit.>.
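Numerically (our remark, using √(15)≈ 3.873), the filter condition amounts to
0.572 ≈ (9/2)(4-√(15)) < ω^2LC < (9/2)(4+√(15)) ≈ 35.43,
so the F-S ladder dissipates power only for driving frequencies ω lying in a band whose endpoints are proportional to 1/√(LC).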
Following the intrinsic approach from analysis on fractals, we introduce next a sequence of networks on the approximating graphs _n that will eventually lead to the definition of the power dissipation in _∞ associated with _.
§.§ Networks _ε,n={Z_ε,n,xy | {x,y}∈ E_n}
At each level n≥ 1, the network _ε,n is constructed by adding a small positive resistance ε in series with each of the impedances of _, see Figure <ref>. Thus, for each {x,y}∈ E_n,
Z_ε,n,xy=Z_,xy+ε,
where Z_,xy∈{Z_C, Z_L} according to the previous construction in Figure <ref>. Under the filter condition (<ref>), we know from <cit.> that the effective impedance of the F-S ladder, Z^eff, is the regularized limit of the effective impedances Z^eff_ε,n of the networks {_ε,n}_n≥ 1, i.e.
Z^eff=lim_ε→ 0_+lim_n→∞Z^eff_ε,n.
Furthermore, we set Z_ε,0,xy=Z^eff_ε:=lim_n→∞Z^eff_ε,n for all {x,y}∈ E_0, and from now on assume that the F-S ladder satisfies the filter condition (<ref>).
In view of (<ref>), the sequence of networks {_ε,n}_n≥ 0 provides the base towards the desired definition of power dissipation.
Let V_*=⋃_n≥ 0V_n. The power dissipation in _∞ associated with the F-S ladder is the quadratic form → given by
[v]:=lim_ε→ 0_+lim_n→∞[v_|_V_n]
and ={v∈ℓ(V_*) | [v]<∞}.
The embedding of the infinite graph _∞ in ^2 presented in Section <ref> will reveal that the set of vertices of _∞ is actually larger than V_*, so that the latter form is actually incomplete. Its definition for potentials defined on the whole network will appear in Section <ref>, c.f. Definition <ref>.
At each approximating level, the network _ε,n is equivalent to a triangular network with impedances Z^eff_ε,n. Thus, for any n≥ 0 and u∈ℓ(V_0),
min{_ε,n[v] | v∈ℓ(V_n),v_|_V_0=u}=(Re(Z^eff_ε,n)/(2|Z^eff_ε,n|^2))∑_{x,y}∈ E_0|u(x)-u(y)|^2.
The latter remark is directly related to harmonic functions. These describe the equilibrium states of the F-S ladder when a potential is connected to the boundary vertices of the circuit, which in this case consists of the three vertices in V_0. More precisely, a function h∈ℓ(V_*) is said to be harmonic if for any ε>0 and n≥ 1
_ε,0[h_|_V_0]=[h_|_V_n].
In particular, _[h]=lim_ε→ 0_+_ε,n[h_|_V_n] for any n≥ 0. Since V_0 has three elements, the space of harmonic functions on V_*, that we denote by _(V_*), is a 3-dimensional subspace of ℓ(V_*).
In connection with harmonic functions we introduce the following auxiliary networks on the approximating graphs _n.
§.§ Networks _n={Z_n,xy | {x,y}∈ E_n}
At each n≥ 1, this network is constructed by changing the impedance of edges building triangles in the “deepest approximation level”, i.e. {x,y}∈ E_n∖ E_n-1 with x,y∈ V_n∖ V_n-1, to equal the effective impedance Z^eff of the whole network _. The elements of _n are thus given by
Z_n,xy={[ Z^eff if {x,y}∈ E_n∖ E_n-1, x,y∈ V_n∖ V_n-1,; Z_,xy otherwise. ].
For completeness, we set Z_0,xy=Z^eff for all {x,y}∈ E_0. One of the most relevant differences between _n and _ε,n is that the impedance of the edges changes with the approximation level. Moreover, the impedance of the edges building triangles has non-zero real part, whereas the impedance of the remaining edges is purely imaginary.
In view of <cit.>, the networks _n are all electrically equivalent. This fact relates them directly to harmonic functions and power dissipation, as the next proposition shows.
For any h∈_(V_*) it holds that
_[h]=_n[h_|_V_n] ∀ n≥ 0.
Here, [h] acts in place of what one would in general define as “trace” of the “limit” power dissipation.
We prove the equivalent statement that, for each n≥ 0, measuring potentials across vertices in the F-S ladder network is equivalent to measuring them in the network _n.
By definition of _n, the impedance between edges {x,y}∈ E_n-1 with at least one vertex in V_n-1 is the same in _ and _n. Thus, it suffices to show that triangular cells of level n (n-cells) are electrically equivalent in both networks.
On the one hand, notice that an n-cell of the network _n is a triangular network with impedances Z^eff. On the other hand, an n-cell of _ is itself an F-S ladder, which is electrically equivalent to a triangular network with impedances Z^eff as well.
From the definition of _n it follows that for any v∈ℓ(V_*) and n≥ 1,
_n[v_|_V_n]=(Re(Z^eff)/(2|Z^eff|^2))∑_{x,y}∈ E_n∖ E_n-1, x,y∈ V_n∖ V_n-1|v(x)-v(y)|^2,
which is a multiple of the energy of the n-th graph approximation of the Sierpinski gasket.
§ GEOMETRIC PROJECTION OF THE INFINITE GRAPH _∞
The power dissipation associated with the F-S ladder has so far been defined for potentials on V_*. The present section investigates some geometric properties of the subset of ^2 that corresponds to the graphical representation of _∞. Among them, this set turns out to be a fractal quantum graph whose set of vertices is a fractal dust larger than V_*.
Let S={1,2,3} and let {p_1,p_2, p_3}⊂^2 denote the set of vertices of an equilateral triangle of side length 1 with barycentre p_0.
For each i∈ S, define the map G_i^2→^2 as
G_i(x):=F_i∘ G_0(x),
where F_i,G_0^2→^2 are given by
F_i(x)=1/2(x-G_0(p_i))+G_0(p_i),
respectively
G_0(x)=α(x-p_0)+p_0,
with α∈(0,1). Moreover, set p_ij=G_i(p_j) for each i,j∈ S.
Notice that p_i is not the fixed point of G_i for any i∈ S, and therefore p_i≠ p_ii for any i∈ S. On the other hand, since p_ij=p_ji for all i≠ j, we will restrict ourselves to writing p_ij only for i≤ j. Although the mappings G_i actually depend on α, we will see in Proposition <ref> that all lead to topologically equivalent sets, which eventually makes the parameter α irrelevant.
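A short computation under these definitions (our remark): each G_i is a similarity of ratio α/2, since G_0 scales by α and F_i by 1/2. Since the Cantor dust C_α introduced below has disjoint pieces G_i(C_α), the open set condition holds and Moran's formula gives
dim_H C_α = log 3/log(2/α).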
Let W_0 = {∅} and define for n ≥ 1
W_n = {w | w = w_1…w_n, w_i ∈ S, i = 1, …, n}.
Moreover, let W_* = ∪_n ≥ 0 W_n and for any w = w_1…w_n∈ W_* define G_w^2→^2 by
G_w = G_w_1∘ G_w_2∘⋯∘ G_w_n,
setting G_∅ to be the identity on ^2. Finally, define V_0 = {p_1, p_2, p_3} and
V_n = ⋃_w ∈ W_n G_w(V_0)
for n ≥ 1, as well as V_*=∪_n≥ 0V_n.
For each i∈ S we will denote by e_ii the line segment joining p_i and p_ii, and by e_ij the line segment joining p_i and p_j, with i<j, see Figure <ref>. Moreover, we define B:= {(i,j) | i≤ j} and write e_ij^w = G_w(e_ij) for any (w,(i,j))∈ W_*× B.
For any α∈ (0,1) there exists a unique compact set Q_α⊆^2 such that
Q_α = ⋃_i∈ SG_i(Q_α) ∪⋃_(i,j)∈ B e_ij.
Furthermore,
Q_α = C_α∪⋃_(w,(i,j)) ∈ W_*× Be_ij^w,
where C_α is the self-similar set associated with {G_1, G_2, G_3}, i.e. C_α is the unique nonempty compact set satisfying
C_α = ⋃_i∈ SG_i(C_α).
The mapping H(K):=⋃_i=1^3 G_i(K) is an α/2-contraction. By <cit.>, the inhomogeneous equation
x=H(x)∪∪_(i,j)∈ B e_ij has a unique solution Q_α in the space of compact subsets of ^2 that equals the closure of
⋃_(w,(i,j)) ∈ W_*× Be_ij^w and in particular (<ref>) holds.
Notice that
C_α is a fractal (Cantor) dust for any α∈(0,1). In view of the next proposition, we will refer to any of Q_α and C_α with α∈(0,1) simply by Q_∞ and C_∞. Although the notation might seem misleading at first sight, we write Q_∞ to underline its relation with _∞, not meaning α=∞.
The sets Q_α are pairwise homeomorphic for any α∈ (0,1).
Let G_w^α and e^α,w_ij denote G_w and e^w_ij respectively. Moreover, let ι^α denote the canonical coding mapping that identifies S^ with C_α, which is given by ι^α(w_1w_2…)=⋂_k≥ 1G_w_1… w_k(C_α). For any α_1,α_2∈ (0,1), define φ_α_1,α_2=ι^α_2∘(ι^α_1)^-1 C_α_1→ C_α_2. Extending this mapping onto each e^α_1,w_ij by φ_α_1,α_2|_e^α_1,w_ij=G_w^α_2∘(G_w^α_1)^-1|_e^α_1,w_ij for any (w,(i,j))∈ W_*× B yields the desired homeomorphism φ_α_1,α_2 Q_α_1→ Q_α_2.
The fractals Q_∞, C_∞, and the infinite graph _∞ are related through a projection mapping π_∞→^2 that in particular maps each approximating graph _n to its graphical representation in ^2 displayed in Figure <ref>. Based on this construction, each vertex x∈ V_n∖ V_n-1 will be associated with a word of length n≥ 1, w(x)∈ W_n, so that π(x)=G_w(x)_1… w(x)_n-1(p_w(x)_n). In view of (<ref>), any accumulation point, which corresponds to a vertex not captured by V_*, will be associated with an infinite word w(x)∈ S^ provided by the canonical coding mapping associated with C_∞, so that π(x)=∩_n≥ 1G_w(x)_1… w(x)_n(C_∞).
Let V_∞ and E_∞ be the sets of vertices, respectively edges, of _∞. For any fixed choice of the values of π|_V_0 so that π(V_0)={p_1,p_2,p_3}, the projection mapping π:_∞→^2 is defined as
π(x)={[ G_w(x)_1… w(x)_n-1(p_w(x)_n) if x∈ V_n∖ V_n-1,; ⋂_k≥ 1G_w_1(x)… w_k(x)(C_∞) if x∈ V_∞∖ V_*, ].
and
π({x,y}) ={(1-t)π(x)+tπ(y), t∈ [0,1]}, {x,y}∈ E_∞.
At each level n≥ 1, π(_n) is isomorphic to the so-called cable system associated with the graph _n, see <cit.>.
The sequence {π(_n)}_n≥ 0 is monotonically increasing and
Q_∞=⋃_n≥ 0⋃_{x,y}∈ E_nπ({x,y})^Eucl, where ·^Eucl denotes closure in the Euclidean topology.
This last observation leads to the fact that Q_∞ is a fractal quantum graph, a concept introduced in <cit.>, whose definition we recall below.
A fractal quantum graph with length system {(ϕ_k,ℓ_k)}_k≥ 1 is a separable compact connected and locally connected metric space (X,d) that satisfies the following two conditions:
(i) For each k≥ 1, ℓ_k>0 and ϕ_k [0,ℓ_k]→ X is an isometry such that ϕ_k([0,ℓ_k])≅[0,ℓ_k] and
ϕ_k((0,ℓ_k)) ∩ ϕ_j((0,ℓ_j))=∅ ∀ k≠ j.
(ii) The set
K:=X∖⋃_k≥ 1ϕ_k((0,ℓ_k))
is totally disconnected.
In view of the definition of , the totally disconnected set K_∞ associated with Q_∞ corresponds to the closure of the set of nodes V_*. Potentials in the F-S ladder are thus defined on K_∞ and we will see in the next section how to extend our previous definition of power dissipation to a special class of them.
Q_∞ is a fractal quantum graph.
Equipped with the Euclidean metric, Q_∞ is a compact, and thus locally compact metric space. The length system is given by {(ϕ_ij^w,ℓ_ij^w)}_(w,(i,j))∈ W_*× B, where ℓ_ij^w is the length of e_ij^w and ϕ_ij^w:[0,ℓ_ij^w]→ e_ij^w is the curve parametrization of e_ij^w. In particular, ϕ_ij^w((0,ℓ_ij^w))=π̊({x,y}):=π({x,y})∖{π(x),π(y)}, where π(x)=G_w(p_i) and π(y)=G_w(p_j). In view of (<ref>) we have that
K_∞:=Q_∞∖⋃_(w,(i,j))∈ W_*× Bϕ_ij^w((0,ℓ_ij^w))=Q_∞∖⋃_n≥ 0⋃_{x,y}∈ E_nπ̊({x,y})=C_∞∪ V_*
is a totally disconnected set.
Notice that Q_∞ is also a finitely ramified fractal <cit.> and it can be expressed as a graph directed fractal <cit.> as well.
In our particular case we have focused on the fact that it is a fractal quantum graph because of the importance of the totally disconnected set K_∞ in the next sections.
§ CONTINUITY OF POTENTIALS
The projection mapping allows us to identify the nodes of the F-S ladder network with a subset of ^2. In general, this kind of identification naturally transfers the notion of power dissipation in graphs to discrete subsets of ^2. The harmonic functions associated with power dissipation and in particular their continuity, proved in Theorem <ref>, are essential to define power dissipation in Q_∞ through the fractal dust K_∞.
From now on, we identify the approximating sets V_n in (<ref>) with the corresponding sets of vertices via the projection π and hence use the notation V_n for both. The power dissipation in V_n associated with a network (formally given by [v∘π]) will be denoted by [v].
In this manner, the power dissipation associated with the F-S ladder is given by
[v]:=lim_ε→ 0_+lim_n→∞_ε,n[v_|_V_n],
where :={v∈ℓ(V_*) | [v]<∞} and now V_* is a subset of ^2.
As already mentioned, Lemma <ref> and the definition of allow us to identify the set of vertices V_∞ with the totally disconnected set K_∞ for which V_* is a dense subset. By Proposition <ref>, K_∞ is compact with respect to the topology induced by the Euclidean metric. The aim of this section is to prove the continuity of the harmonic functions on V_*, so that they can be uniquely extended to continuous (harmonic) functions on K_∞.
Recall that a function h∈ℓ(V_*) is said to be harmonic if for any ε>0 and n≥ 0
_ε,0[h_|_V_0]=_ε,n[h_|_V_n].
Moreover, the space of harmonic functions on V_*, denoted by _(V_*), is 3-dimensional, and for any h∈_(V_*) and n≥ 1,
lim_ε→ 0_+_ε,n[h_|_V_n]=[h]=_n[h_|_V_n]
c.f. Proposition <ref>.
Starting with a function h_0∈ℓ(V_0), harmonic functions are constructed by applying recursively the harmonic extension algorithm provided in <cit.>. This result conveys an explicit expression of the 3× 3-matrices A_1,A_2,A_3, that describe the algorithm. Therefore,
h_|_G_j(V_0)=A_j h_|_V_0.
for any h∈_(V_*) and j=1,2,3.
(i) The eigenvalues of A_j, j=1,2,3, can be explicitly computed with any mathematical software and equal
λ_1=1, λ_2=3Z^eff/(9Z_C+5Z^eff), λ_3=1/3λ_2.
(ii) While the eigenvector associated with λ_1 is h_1=(1,1,1) in all matrices, the eigenvectors associated with λ_2 and λ_3 vary with the choice of j. The space of constant functions on V_* is thus the 1-dimensional subspace of _(V_*) spanned by h_1.
(iii) Under the filter condition (<ref>), substituting Z_C and by their actual values from <cit.>, one obtains
|λ_2|^2=(9σ^2+(27+6CLω^2)^2)/(2106+25σ^2+90σ+100CLω^2(9+2CLω^2)),
where σ=√(144 CLω^2-(2CLω^2)^2-81)∈ℝ. Although not directly readable from this expression, it holds that |λ_2|<1.
The eigenvalues of the matrices A_1,A_2,A_3 from the harmonic extension algorithm satisfy |λ_3|<|λ_2|<|λ_1|=1.
In view of <ref>(i) it only remains to prove that |λ_2|<1. Let us consider for instance the matrix A_1 and let h_2 be the eigenvector associated with λ_2. Further, let h∈_(V_*) be the harmonic function with h_|_V_0=h_2 and denote by D^2_0 the matrix representation of the power dissipation _0, i.e.
D^2_0=(Re(Z^eff)/(2|Z^eff|^2))[ 2 -1 -1; -1 2 -1; -1 -1 2 ].
Since h is harmonic, we have that
_0[A_1 h_|_V_0] =⟨ D^2_0A_1 h_2,A_1h_2⟩=|λ_2|^2⟨ D^2_0h_2,h_2⟩
=|λ_2|^2_0[h_|_V_0]=|λ_2|^2[h].
On the other hand, it follows from (<ref>) and the definition of _n that
[h]=_1[h_|_V_1]=∑_j=1^3_0[A_j h_|_V_0].
Thus, if |λ_2|=1, then _0[A_2 h_|_V_0]=_0[A_3 h_|_V_0]=0 and hence A_2h_2 and A_3h_2 are constant, a contradiction.
In fact, it is possible to check directly that for instance A_3h_2 is non-constant because <cit.> provides the explicit expression of A_3 and h_2, which leads to
A_3h_2=3/9Z_c+5(3,27Z_C+10/3Z_C+2,18Z_C+8/3Z_C+2).
If this were to be constant, then 27Z_C+10=18Z_C+8, equivalently =-9/2Z_C. But Z_C is purely imaginary, whereas has positive real part, a contradiction.
The next theorem is the main result of this section. It justifies the extension of power dissipation in the F-S ladder to potentials defined on the whole K_∞.
Harmonic functions are continuous on V_*, i.e. _(V_*)⊆ C(V_*).
Before proving this result we show the following key lemma.
There exists r∈(0,1) such that for any non-constant h_0∈ℓ(V_0)
_0[A_jh_0]≤ r^2_0[h_0], j=1,2,3.
Let h_0∈ℓ(V_0) be non-constant and let D^2_0 be the matrix representation (<ref>) of _0. Since D^2_0 is non-negative definite and symmetric, ⟨ D^2_0A_jh_0,h_0⟩≥ 0 for any j=1,2,3. Consider j arbitrary but fixed.
Let h_2 and h_3 denote the eigenvectors of A_j associated with the eigenvalues λ_2, resp. λ_3 given in Remark <ref> (i). Non-constant harmonic functions are thus the 2-dimensional subspace of _(V_*) spanned by {h_2,h_3} and hence h_0=∑_k=2^3a_kh_k, a_k∈. Then,
_0[A_jh_0] =⟨ D^2_0A_jh_0,A_j h_0⟩=|∑_k,l=2^3a_ka_l⟨ D^2_0λ_kh_k,λ_lh_l⟩|
≤∑_k,l=2^3|λ_kλ_l|⟨ D^2_0a_kh_k,a_lh_l⟩≤ r^2⟨ D^2_0h_0,h_0⟩
with r=|λ_2|<1 in view of Lemma <ref>.
Without loss of generality, let h∈_(V_*) be non-constant and such that [h]=1. Given δ>0, for m≥ 0 large enough any two points inside an m-cell G_w(V_*), w∈ W_m, satisfy | x-y|<δ. Since h is harmonic, the maximum principle guarantees that h takes its maximum and minimum value within G_w(V_*) on the boundary G_w(V_0). Thus, for any x,y∈ G_w(V_0), by definition of _m we have that
|h(x)-h(y)|^2≤(2|Z^eff|^2/Re(Z^eff))∑_x,y∈ G_w(V_0), {x,y}∈ E_m_0[h]_xy=(2|Z^eff|^2/Re(Z^eff))_0[h_|_G_w(V_0)]
and since h is harmonic, h_|_G_w(V_0)=A_w_1⋯ A_w_mh_|_V_0, with w=w_1… w_m. Applying repeatedly Lemma <ref> yields
_0[h_|_G_w(V_0)]≤ r^2m_0[h_|_V_0]
and hence
|h(x)-h(y)|≤ |Z^eff|√(2/Re(Z^eff)) r^m<ε
for m large.
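In fact, the proof gives a quantitative modulus of continuity (our remark): since under the embedding of Section <ref> an m-cell has diameter comparable to (α/2)^m, the estimate |h(x)-h(y)|≤ C r^m yields a Hölder-type bound
|h(x)-h(y)|≤ C| x-y|^θ, θ=log(1/r)/log(2/α),
for points x,y lying in a common cell.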
As an immediate consequence of this result, the space of harmonic functions on K_∞, denoted by _(K_∞), is well-defined. For any h∈_(K_∞) we will identify [h] with the former [h_|_V_*] to obtain the power dissipation associated with the F-S ladder for harmonic potentials on the fractal dust K_∞.
The power dissipation in K_∞ associated with _ of a function h∈_(K_∞) is given by
[h]=lim_ε→ 0_+lim_n→∞_ε,n[h_|_V_n].
§ CONTINUITY OF THE POWER DISSIPATION MEASURE
In analogy to energy measures, this section aims to construct a measure on K_∞ that can be understood as the “power dissipation measure” associated with harmonic potentials. The main theorem states the existence of this continuous (atomless) measure.
For each non-constant harmonic function h∈_(K_∞), the power dissipation induces a continuous measure ν_h on K_∞ with ν_h=C_∞.
Before proving this result, we provide some useful observations. In this and the next section, n-cells will be denoted by T_w=G_w(K_∞) for any w∈ W_n, n≥ 0.
The following hold.
(i) For any h∈_(K_∞) and {x,y}∈ E_m, m≥ 0,
lim_ε→ 0_+lim_n→∞_ε,n[h]_xy=lim_ε→ 0_+_ε,m[h]_xy=0.
(ii) For any h∈_(K_∞) and w∈ W_m, m≥ 1,
lim_ε→ 0_+lim_n→∞∑_x,y∈ T_w∩ V_n
{x,y}∈ E_n_ε,n[h]_xy=(Re(Z^eff)/(2|Z^eff|^2))∑_x,y∈∂ T_w|h(x)-h(y)|^2.
Let h∈_(K_∞) be non-constant. For each m-cell T_w define
ν_h(T_w):=lim_ε→ 0_+lim_n→∞∑_x,y∈ T_w∩ V_n
{x,y}∈ E_n_ε,n[h]_xy.
Notice that since h is harmonic,
0≤ν_h(T_w)≤ν_h(K_∞)=[h]<∞.
Applying the same definition of ν_h to isolated points, we have that ν_h({x})=0 whenever x∈ V_m for some m≥ 0
because no edges are involved. If x∈ K_∞ is an accumulation point, it satisfies x=⋂_k≥ 1T_w_1… w_k
for some infinite word w_1w_2…∈ S^. In view of Remark <ref>(ii) we have that
ν_h({x}) =lim_ε→ 0_+lim_n→∞lim_m→∞∑_x,y∈ T_w_1… w_m∩ V_n
{x,y}∈ E_n_ε,n[h]_xy
=lim_ε→ 0_+lim_n→∞lim_m→∞(Re(Z^eff_ε)/(2|Z^eff_ε|^2))∑_x,y∈∂ T_w_1… w_m
{x,y}∈ E_n|h(x)-h(y)|^2.
By Theorem <ref>, h is continuous and therefore for any δ>0 we find m_0≥ 0 large enough such that
ν_h({x})<(3Re(Z^eff)/(2|Z^eff|^2))δ^2 for all m≥ m_0. Thus, ν_h({x})=0.
Let us now prove finite-additivity of ν_h: Let T_w,T_v be two disjoint cells of levels n_1, n_2. If these cells can be connected by edges without additional vertices, there is at most one such connecting edge {p_1,p_2}∈ E_m with m=max{n_1,n_2}. Let us suppose this is the case. In view of Remark <ref>(i) we have
ν_h(T_w∪ T_v) =lim_ε→ 0_+lim_n→∞∑_x,y∈ T_w∩ V_n
{x,y}∈ E_n_ε,n[h]_xy+lim_ε→ 0_+lim_n→∞_ε,n[h]_p_1p_2
+lim_ε→ 0_+lim_n→∞∑_x,y∈ T_v∩ V_n
{x,y}∈ E_n_ε,n[h]_xy=ν_h(T_w_1)+ν_h(T_w_2).
If there is no possible connecting edge, the above equality follows directly. The same argument applies for any finite union (both of cells or isolated points), since they can be connected by at most finitely many single edges.
In order to prove σ-additivity, consider a sequence of pairwise disjoint cells {T_v(k)}_k≥ 1, where T_v(k) is an n_k-cell. Without loss of generality, assume that n_k≤ n_k+1. Notice that, since the cells are pairwise disjoint, it is only possible to connect T_v(k) and T_v(k+1) by an edge {x,y}∈ E_n_k+1 if n_k+1=n_k+1. Hence, we can assume that there is at most one edge {x_k+1,y_k+1}∈ E_n_k+1 joining T_v(k) and T_v(k+1). Then,
ν_h(⋃_k≥ 1T_v(k)) =lim_ε→ 0_+lim_n→∞lim_m→∞∑_k=1^m∑_x,y∈ T_v(k)∩ V_n
{x,y}∈ E_n_ε,n[h]_xy
+lim_ε→ 0_+lim_n→∞lim_m→∞∑_k=2^m+1_ε,n_k[h]_x_ky_k=∑_k=1^∞ν_h(T_v(k))+0,
where last equality follows by Remark <ref>(i) after interchanging the order of the limits, which is possible because both summands are uniformly bounded by [h].
The same argument applies to countable unions of isolated points, which in particular implies that ν_h =C_∞.
Finally, by Carathéodory's extension theorem, ν_h admits a unique extension to a (finite) measure on K_∞ and this measure is continuous because isolated points have no mass.
For any m≥ 0, w∈ W_m, and any m-cell T_w it holds that
ν_h(T_w)≍ osc(h_|_T_w)^2,
where osc(h_|_T_w)=max_x∈ T_wh(x)-min_y∈ T_wh(y).
First of all, recall from Remark <ref>(ii) that
ν_h(T_w)=(Re(Z^eff)/(2|Z^eff|^2))∑_x,y∈∂ T_w|h(x)-h(y)|^2.
By the maximum principle and since h is harmonic, h_|_T_w takes its maximum and minimum on the boundary ∂ T_w. Hence, osc(h_|_T_w)=|h(x)-h(y)| for some x,y∈∂ T_w and the definition of ν_h yields
(Re(Z^eff)/(2|Z^eff|^2)) osc(h_|_T_w)^2≤ν_h(T_w)≤(3Re(Z^eff)/(2|Z^eff|^2)) osc(h_|_T_w)^2.
§ SINGULARITY OF THE POWER DISSIPATION MEASURE
This section is devoted to proving that the power dissipation measure ν_h discussed in the preceding section is singular with respect to the Bernouilli measure μ on K_∞ that satisfies
μ(T_w_1… w_n)=μ_w_1⋯μ_w_n
for any n-cell T_w_1… w_n, w_1… w_n∈ W_n, where ∑_i∈ Sμ_i=1. In particular in this case, we will consider μ_1=μ_2=μ_3=1/3. Together with the measure μ, K_∞ can be seen as a probability space. Notice that, as it happened with ν_h, μ =C_∞, where C_∞ is the Cantor dust defined in (<ref>).
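In particular, with the uniform weights chosen here every n-cell has measure
μ(T_w_1… w_n)=3^-n, w_1… w_n∈ W_n,
so the criterion of Proposition <ref> below amounts to showing that ν_h(T_w_1… w_n) decays faster than 3^-n along μ-a.e. nested sequence of cells.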
Recall that any element in the support of μ is a non-isolated point of K_∞ such that x=⋂_n≥ 1T_w_1… w_n for some w_1w_2…∈ S^. For these points, we define the (random) matrices M_n(x)=A_w_n, where A_j, j∈ S, are the matrices of the harmonic extension algorithm (<ref>). The matrices M_n(x) are statistically independent with respect to μ.
The next result is based on a special case of <cit.> and we will mainly follow the proof given in <cit.>, including details for completeness. Since we are only dealing with non-constant harmonic functions, we will restrict to the 2-dimensional subspace of _(K_∞) spanned by the two harmonic functions h_2,h_3 associated to the eigenvalues λ_2,λ_3 from Remark <ref>(i).
Assume that for a non-constant h∈_(K_∞) there exists m≥ 1 such that the mapping x↦D_0M_m(x)⋯ M_1(x)h_|_V_0 is non-constant. Then, ν_h is singular with respect to μ.
The proof of this theorem essentially consists in proving the condition stated in the following lemma, which is a consequence of the generalized Lebesgue differentiation theorem for metric measure spaces.
The measure ν_h is singular with respect to μ if for μ-a.e. x∈ C_∞
lim_n→∞ν_h(T_w_1… w_n)/μ(T_w_1… w_n)=0,
where x=⋂_n≥ 1T_w_1… w_n.
For each n≥ 1, T_w_1… w_n is a neighborhood of x and lim_n→∞μ(T_w_1,… w_n)=0. Let us suppose that ν_h is absolutely continuous with respect to μ. Then, there exists a measurable function f (the Radon-Nikodym derivative of ν_h with respect to μ) such that
ν(T_w_1… w_n)/μ(T_w_1… w_n)
=1/μ(T_w_1… w_n)∫_T_w_1… w_nf(x) μ(dx).
Due to the definition of μ and since C_∞ is self-similar, it is Ahlfors regular (i.e. μ(B_d_E(x,r)∩ C_∞)≍ r^γ, in this case with γ being the Hausdorff dimension of C_∞). Thus, (C_∞,μ) equipped with the Euclidean metric is volume doubling
and the generalized Lebesgue differentiation theorem, see e.g. <cit.> holds, so that (<ref>) equals f(x) for μ-a.e. x∈ C_∞. By assumption, this implies that f is zero μ-a.e., a contradiction.
For any h∈_(K_∞), w∈ W_m and n large it holds that
∑_x,y∈ T_w∩ V_n
{x,y}∈ E_n_ε,n[h]_xy = ∑_x,y∈ T_w∩ V_n-m
{x,y}∈ E_n_ε,n[h∘ G_w]_xy
=∑_x,y∈ T_w∩ V_n-m
{x,y}∈ E_n-m_ε,n-m[h∘ G_w]_xy=_ε,n-m[h∘ G_w_|_V_n-m]
=(Re(Z^eff_ε,n-m)/(2|Z^eff_ε,n-m|^2))∑_{x,y}∈ E_0|h∘ G_w(x)-h∘ G_w(y)|^2.
Letting ε→ 0_+ and n→∞ in both sides of the equality yields
ν_h(T_w) =(Re(Z^eff)/(2|Z^eff|^2))∑_{x,y}∈ E_0|h∘ G_w(x)-h∘ G_w(y)|^2
=‖ D_0A_w_m⋯ A_w_1h_|_V_0‖^2.
On the other hand, it follows from (<ref>) and the definition of μ that
‖ D_0h_|_V_0‖^2 =⟨ D^2_0h_|_V_0,h_|_V_0⟩=_0[h_|_V_0]=∑_i=1^3_0[h∘ G_i_|_V_0]
=∑_i=1^3⟨ D^2_0A_ih_|_V_0,A_ih_|_V_0⟩=∑_i=1^3‖ D_0A_ih_|_V_0‖^2
=3∑_i=1^3μ_i‖ D_0A_ih_|_V_0‖^2
=3∫_K_∞‖ D_0M_1(x)h_|_V_0‖^2 μ(dx).
and induction leads to
‖ D_0h_|_V_0‖^2=3^n∫_K_∞‖ D_0M_n(x)⋯ M_1(x)h_|_V_0‖^2 μ(dx)
for any n≥ 1. Notice that all computations are in fact basis independent.
Furthermore, by assumption, there exists m≥ 1 such that Jensen's and the Cauchy–Schwarz inequality yield
∫_K_∞log‖ D_0M_m(x)⋯ M_1(x)h_|_V_0‖ μ(dx)<log∫_K_∞‖ D_0M_m(x)⋯ M_1(x)h_|_V_0‖ μ(dx)
≤1/2log∫_K_∞‖ D_0M_m(x)⋯ M_1(x)h_|_V_0‖^2 μ(dx)=1/2log(3^-m‖ D_0h_|_V_0‖^2),
where the last equality follows from (<ref>). Hence,
β:=sup_h∈_1∫_K_∞log‖ D_0M_m(x)⋯ M_1(x)h_|_V_0‖ μ(dx)<-m/2log 3,
where _1:={h∈_(K_∞) non-constant, ‖ D_0h_|_V_0‖=1}.
Let now h∈_1 and n≥ 1. Multiplying and dividing by ‖ D_0M_m(n-1)(x)⋯ M_1(x)h_|_V_0‖ and since the matrices M_i(x) are statistically independent, Jensen's inequality yields
∫_K_∞log‖ D_0M_mn(x)⋯ M_mn-m(x)⋯ M_1(x)h_|_V_0‖ μ(dx)
≤β+∫_K_∞log‖ D_0M_m(n-1)(x)⋯ M_1(x)h_|_V_0‖ μ(dx).
By induction we obtain
∫_K_∞log‖ D_0M_mn(x)⋯ M_1(x)h_|_V_0‖ μ(dx)≤ nβ
and thus
∫_K_∞log‖ D_0M_m_n(x)⋯ M_1(x)h_|_V_0‖ μ(dx)≤(m_n/m)β
for the subsequence m_n=mn. Consequently,
lim sup_m_n→∞1/m_nlog‖ D_0M_m_n(x)⋯ M_1(x)h_|_V_0‖≤1/mβ<-1/2log 3,
where the existence of the limit is guaranteed by Furstenberg's Theorem <cit.> because the matrices M_m_n(x) are i.i.d. By definition of μ and (<ref>) we thus have that
ν_h(T_w_1… w_n)/μ(T_w_1… w_n)=3^n‖ D_0M_w_n(x)⋯ M_w_1(x)h_|_V_0‖^2
for μ-a.e. x∈ C_∞, hence μ-a.e. in K_∞, and (<ref>) yields
1/n(log‖ D_0M_w_n(x)⋯ M_w_1(x)h_|_V_0‖^2+nlog 3)<-log 3+log 3=0, for n large.
Finally, this implies that
lim_n→∞ν_h(T_w_1… w_n)/μ(T_w_1… w_n)=0
for μ-a.e. x∈ K_∞. By Proposition <ref>, ν_h is singular with respect to μ.
§.§ Acknowledgments
The author would like to thank A. Teplyaev and L. Rogers for very fruitful discussions.
| Passive linear networks have a wide range of applications, and electrical circuits in particular have long been intensively studied in different research areas such as electrical engineering <cit.>, physics <cit.> and mathematics <cit.>. In particular for the latter, Dirichlet forms on finite sets and graphs can be interpreted in terms of electric linear networks by considering the current flow between nodes (vertices) connected by resistors (edges). This is the key idea behind the theory of Dirichlet and resistance forms on fractals introduced by Kigami <cit.>. In this context, one may associate these forms with “fractal networks”.
Resistors are just one type of passive components, or impedances, of an electrical network. Impedances are characterized by the fact that they produce no energy by themselves. A resistor is a dissipative element because power is lost (energy is absorbed) when an alternating current runs through it. On the contrary, no loss is caused when the current flows through a non-dissipative element such as an inductor or a capacitor. Finite linear networks consisting only of inductors and capacitors are uninteresting since no power dissipation is expected.
However, what if the network is infinite (as for instance fractal networks are)?
In the 60s, Feynman posed this “amusing question”, see <cit.>; to give an answer, he constructed an infinite ladder network as depicted in Figure <ref>. He found its behavior surprising and noticed a very interesting connection with wave propagation:
depending on the driving frequency of the signal, power will either dissipate, allowing waves to propagate along the network, or it will not dissipate at all, preventing waves from getting through. As a consequence, voltage will either stay constant, merely changing its phase, or it will die away rapidly.
This particular infinite ladder network is what is called a low-pass filter because low frequencies “pass” while higher frequencies are “rejected”. Although such an infinite network cannot actually occur, it is often possible to realize fairly good approximations that have many technical applications, see e.g. <cit.>.
Fractal structures also exhibit unusual, a priori unexpected, physical properties <cit.>. Feynman's example motivated in <cit.> the construction of the so-called Feynman-Sierpinski ladder (F-S ladder for short), see Figure <ref>, as a first prototype of a fractal network consisting solely of inductors and capacitors that exhibits power dissipation, and hence wave propagation, at some frequencies.
The present paper aims to set up the mathematical framework to study the concept of power dissipation in infinite graphs and fractals, working out in detail the case of the F-S ladder. One of the main novelties lies in the fact that passive linear networks are studied in the frequency domain and this requires voltage, current and impedances to be considered as complex quantities. Following the classical intrinsic approach from analysis on fractals, the power dissipation in the F-S ladder will be defined as the limit of a suitable sequence of quadratic forms over complex-valued functions on its finite graph approximations.
A crucial role in this definition is played by the harmonic functions on the fractal dust that represents the nodes of the network. These functions describe the equilibrium potentials in a circuit when a signal is applied to the boundary nodes and they guarantee the existence of the aforementioned limit. Furthermore, proving them to be continuous will allow us to fully define power dissipation for harmonic potentials, as well as to construct the power dissipation measure associated with them. The latter measure will turn out to be singular with respect to an appropriate Hausdorff measure defined on the fractal dust related to the network.
The paper is organized as follows: In section <ref>, we review the classical notion of power dissipation in electric passive linear networks and transfer it to graphs and infinite networks. Here, we recall the construction of the F-S ladder and set the first step towards the definition of power dissipation in this network. Section <ref> discusses some geometric properties of the projection of the F-S ladder onto ^2. In particular, we prove this set to be a fractal quantum graph. In order to complete the definition of power dissipation, we analyze in section <ref> the harmonic potentials and prove in Theorem <ref> that they are continuous functions on the (fractal) set of nodes of the F-S ladder. Finally, section <ref> deals with the construction of a measure associated with power dissipation for harmonic potentials, c.f. Theorem <ref>. Further, we prove in Theorem <ref> that this measure is singular with respect to a suitable Hausdorff measure on the set of nodes of the F-S ladder. | null | null | null | null | null |
http://arxiv.org/abs/1701.07839v1 | 20170126190313 | Collinear parton distributions and the structure of the nucleon sea in a light-front meson-cloud model | [
"S. Kofler",
"B. Pasquini"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07514v1 | 20170125225334 | On the existence of connecting orbits for critical values of the energy | [
"Giorgio Fusco",
"Giovanni F. Gronchi",
"Matteo Novaga"
] | math.DS | [
"math.DS"
] |
On the existence of connecting orbits for critical values of the energy
Giorgio Fusco (Dipartimento di Matematica, Università dell'Aquila)
Giovanni F. Gronchi (Dipartimento di Matematica, Università di Pisa)
Matteo Novaga (Dipartimento di Matematica, Università di Pisa)
We consider an open connected set Ω and a smooth potential U
which is positive in Ω and vanishes on ∂Ω. We
study the existence of orbits of the mechanical system
ü=U_x(u),
that connect different components of ∂Ω and lie on the
zero level of the energy. We allow that ∂Ω contains a
finite number of critical points of U. The case of symmetric
potential is also considered.
§ INTRODUCTION
Let U:^n→ℝ be a
function of class C^2. We assume that Ω⊂^n is a
connected component of the set {x∈^n: U(x)>0} and
that ∂Ω is compact and is the union of N≥ 1
distinct nonempty connected components Γ_1,…,Γ_N. We
consider the following situations
H N≥ 2 and, if Ω is unbounded, there is r_0>0 and
a non-negative function σ:[r_0,+∞)→ such that
∫_r_0^+∞σ(r)dr=+∞ and
√(U(x))≥σ(| x|), x∈Ω, | x|≥ r_0.
H_s Ω is bounded, the origin 0∈^n belongs to
Ω and U is invariant under the antipodal map
U(-x)=U(x), x∈Ω.
Condition (<ref>) was first introduced in
<cit.>. A sufficient condition for (<ref>)
is that lim inf_|x|→∞ U(x) >0.
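For instance (our remark), any potential with at most quadratic decay at infinity, U(x)≥ c^2/| x|^2 for | x|≥ r_0 and some c>0, satisfies (<ref>) with σ(r)=c/r, since
∫_r_0^+∞(c/r) dr=+∞.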
We study non constant solutions u:(T_-,T_+)→Ω, of the
equation
ü=U_x(u), U_x=(∂ U/∂ x)^T,
that satisfy
lim_t→ T_±d(u(t),∂Ω)=0,
with d the Euclidean distance, and lie on the energy surface
1/2|u̇|^2-U(u)=0.
We allow that the boundary ∂Ω of Ω contains a
finite set P of critical
points of U and assume
H_1 If Γ∈{Γ_1,…,Γ_N} has positive
diameter and p∈ P∩Γ then p is a hyperbolic critical
point of U.
If Γ has positive diameter, then hyperbolic critical points
p∈Γ correspond to saddle-center equilibrium points in the
zero energy level of the Hamiltonian system associated to
(<ref>). These points are organizing centers of complex
dynamics, see <cit.>.
Note that 𝐇_1 does not exclude that some of the
Γ_j reduce to a singleton, say {p}, for some p∈ P. In
this case nothing is required on the behavior of U in a neighborhood
of p aside from being C^2.
A comment on 𝐇 and 𝐇_s is in order. If P is
nonempty u≡ p for p∈ P is a constant solution of
(<ref>) that satisfies (<ref>) and (<ref>). To
avoid trivial solutions of this kind we require N≥
2 in 𝐇, and look for solutions that connect different
components of ∂Ω. In 𝐇_s we do not exclude
that ∂Ω is connected (N=1) and avoid trivial solutions
by restricting to a symmetric context and to solutions that pass through
0.
We prove the following results.
Assume that 𝐇 and 𝐇_1 hold. Then for each
Γ_-∈{Γ_1,…,Γ_N} there exist
Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-} and a
map u^*:(T_-,T_+)→Ω, with -∞≤ T_-<T_+≤
+∞, that satisfies (<ref>), (<ref>) and
lim_t→ T_±d(u^*(t),Γ_±)=0.
Moreover, T_->-∞ (resp. T_+<+∞) if and only if
Γ_- (resp. Γ_+)
has positive diameter. If T_->-∞ it results
lim_t→ T_-u^*(t)= x_-,
lim_t→ T_-u̇^*(t)=0,
for some x_-∈Γ_-∖ P. An analogous statement
holds if T_+<+∞.
Assume that 𝐇_s and 𝐇_1 hold. Then there exist
Γ_+∈{Γ_1,…,Γ_N} and a map
u^*:(0,T_+)→Ω, with 0<T_+≤ +∞, that satisfies
(<ref>), (<ref>) and
lim_t→ T_+d(u^*(t),Γ_+)=0.
Moreover, T_+<+∞ if and only if Γ_+ has positive diameter. If
T_+<+∞ it results
lim_t→ T_+u^*(t)=x_+,
lim_t→ T_+u̇^*(t)=0,
for some x_+∈Γ_+∖ P.
We list a few straightforward consequences of Theorems <ref> and
<ref>.
Theorem <ref> implies that, if ∂Ω=P, given p_-∈
P there is p_+∈ P∖{p_-} and a heteroclinic connection
between p_- and p_+, that is a solution u^*:→^n of
(<ref>) and (<ref>) that satisfies
lim_t→±∞u^*(t)=p_±.
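A classical explicit example, added here for illustration: for n=1, U(u)=(1/4)(1-u^2)^2 on Ω=(-1,1), so that ∂Ω=P={± 1}, the map
u^*(t)=tanh(t/√(2)), t∈ℝ,
satisfies ü^*=u^*((u^*)^2-1)=U'(u^*) and (1/2)(u̇^*)^2=(1/4)(1-(u^*)^2)^2=U(u^*), and is precisely such a heteroclinic connection between p_∓=∓ 1, with T_±=±∞ since both components of ∂Ω are singletons.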
The problem of the existence of heteroclinic connections between two
isolated zeros p_± of a non-negative potential has been recently
reconsidered by several authors. In <cit.> existence was established
under a mild monotonicity condition on U near p_±. This
condition was removed in <cit.>, see also <cit.>. The most
general results, equivalent to the consequence of Theorem <ref>
discussed in Section <ref>, were recently obtained in <cit.> and in
<cit.>, see also <cit.>.
All these papers establish existence by a variational
approach. In <cit.>, <cit.> and <cit.> by minimizing the
action functional, and in <cit.> and
<cit.> by minimizing the Jacobi functional.
Theorem <ref> implies that,
if Γ_-={p} for some p∈ P and the elements of
{Γ_1,…,Γ_N}∖{Γ_-} have all positive
diameter, there exists a nontrivial orbit homoclinic to p that satisfies (<ref>), (<ref>).
Let v^*:→Ω∪{x_+} be the extension
defined by
v^*(T_++t)=u^*(T_+-t), t∈(0,+∞), v^*(T_+)=x_+,
of the solution u^*:(-∞,T_+)→Ω given by
Theorem <ref>.
The map v^* so defined is a smooth non-constant solution of
(<ref>) that satisfies
lim_t→±∞v^*(t)=p.
Theorem <ref> implies that, if all the sets
Γ_1,…,Γ_N have positive diameter, given
Γ_-∈{Γ_1,…,Γ_N}, there exist
Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-} and a
periodic solution v^*:→Ω of (<ref>) and
(<ref>) that oscillates between Γ_- and
Γ_+. This solution has period T=2(T_+-T_-).
The solution v^* is the T-periodic extension of the map
w^*:[T_-,2T_+-T_-]→Ω defined by w^*(t)=u^*(t) for
t∈(T_-,T_+), where u^* is given by Theorem <ref>, and
w^*(T_±)=x_±,
w^*(T_++t)=u^*(T_+-t), t∈ (0,T_+-T_-].
The problem of existence of heteroclinic, homoclinic and periodic
solutions of (<ref>), in a context similar to the one considered
here, was already discussed in <cit.> where ∂Ω is
allowed to include continua of critical points. Our result concerning
periodic solutions extends a corresponding result in <cit.> where
existence was established under the assumption that P=∅.
The following result is a direct consequence of Theorem <ref>.
Theorem <ref> implies that, if all the sets Γ_1,…,Γ_N have positive diameter,
there exists Γ_+∈{Γ_1,…,Γ_N} and a
periodic solution v^*:ℝ→Ω of (<ref>) and
(<ref>) that satisfies
v^*(-t) = -v^*(t), t∈.
This solution has period
T=4T_+, with T_+ as in Theorem <ref>.
The solution v^* is the T-periodic extension of the map
w^*:[-2T_+,2T_+]→Ω defined by w^*(t) = u^*(t) for
t∈(0,T_+), where u^* is given by Theorem <ref>, and by
w^*(t) = -w^*(-t), 2cm t∈ (-T_+,0),
w^*(0) = 0, w^*(± T_+) = ± x_+,
w^*(T_++t) = w^*(T_+-t), 0.8cm t∈(0,T_+],
w^*(-T_++t) = w^*(-T_+-t), 0.3cm t∈[-T_+,0).
In particular the solution oscillates between x_+ and -x_+ and
this is true also when ∂Ω is connected (N=1).
§ PROOF OF THEOREMS <REF> AND <REF>
We recall a classical result.
Let G:^n→ℝ be a smooth, bounded and
non-negative potential, I=(a,b) a bounded interval.
Define the Jacobi functional
𝒥_G(q, I)=√(2)∫_I√(G(q(t)))|q̇(t)| dt
and the action functional
𝒜_G(q,I) =
∫_I(1/2|q̇(t)|^2+G(q(t)))dt.
Then
*
𝒥_G(q, I)
≤𝒜_G(q,I),
q∈ W^1,2(I;^n)
with equality sign if and only if
1/2|q̇(t)|^2-G(q(t))=0, t∈ I.
*
min_q∈𝒬𝒥_G(q, I)=min_q∈𝒬𝒜_G(q,I),
where
𝒬={q∈ W^1,2(I;^n):q(a)=q_a, q(b)=q_b}.
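Item (i) is the pointwise arithmetic–geometric mean inequality (a standard argument, sketched here for the reader's convenience): for every t,
1/2|q̇(t)|^2+G(q(t))≥√(2G(q(t)))|q̇(t)|,
with equality if and only if (1/2)|q̇(t)|^2=G(q(t)); integrating over I gives (i), and (ii) follows from (i) once a Jacobi minimizer is reparametrized so that equality holds along it.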
When G=U we shall simply write 𝒥, 𝒜 for
𝒥_U, 𝒜_U.
We now start the proof of Theorem <ref>. Choose
Γ_-∈{Γ_1,…,Γ_N} and set
d=min{| x-y|: x∈Γ_-, y∈∂Ω∖Γ_-}.
For small δ∈(0,d) let O_δ={x∈Ω:
d(x,Γ_-)<δ} and let U_0=1/2min_x∈∂
O_δ∩ΩU(x). We note that U_0>0 and define the
admissible set
𝒰={u∈ W^1,2((T_-^u,T_+^u);^n):
-∞< T_-^u < T_+^u< +∞,
u((T_-^u,T_+^u))⊂Ω, U(u(0))=U_0, u(T_-^u)∈Γ_-, u(T_+^u)∈∂Ω∖Γ_-}.
We determine the map u^* in Theorem <ref> as the limit of a
minimizing sequence {u_j}⊂𝒰 of the action
functional
𝒜(u,(T_-^u,T_+^u)) =
∫_T_-^u^T_+^u(1/2|u̇(t)|^2+U(u(t)))dt,
Note that in the definition of 𝒰 the times
T_-^u and T_+^u are not fixed but, in general,
change with u. Note also that the condition
U(u(0))=U_0 in (<ref>) is
a normalization which can always be imposed by a translation of time
and has the scope of eliminating the loss of compactness due to
translation invariance.
Let x̅_-∈Γ_- and
x̅_+∈∂Ω∖Γ_- be such that |x̅_+-x̅_-|=d and set
ũ(t)=(1-(t+τ))x̅_-+(t+τ)x̅_+, t∈[-τ,1-τ],
where τ∈(0,1) is chosen so that U(ũ(0))=U_0. Then
ũ∈𝒰, T_-^ũ=-τ,
T_+^ũ=1-τ and
𝒜(ũ,(-τ,1-τ))=a<+∞.
Next we show that there are constants M>0 and T_0>0 such that
each u∈𝒰 with
𝒜(u,(T_-^u,T_+^u))≤ a,
satisfies
u_L^∞((T_-^u,T_+^u);^n)≤ M,
T_-^u≤-T_0<T_0≤ T_+^u.
The L^∞ bound on u follows from 𝐇 and from
Lemma <ref>, in fact, if Ω is unbounded, |
u(t̅)|=M for some t̅∈(T_-^u,T_+^u) implies
a≥𝒜(u,(T_-^u,t̅)) ≥∫_T_-^u^t̅√(2U(u(t)))|u̇(t)|
dt≥√(2)∫_r_0^Mσ(s)ds.
The existence of T_0 follows from
d_1^2/|T_-^u|≤∫_T_-^u^0|u̇(t)|^2dt≤ 2a,
d_1^2/T_+^u≤∫_0^T_+^u|u̇(t)|^2dt≤ 2a,
where d_1=d(∂Ω,{x: U(x)>U_0}).
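Here the first bounds follow from Cauchy–Schwarz (a step we make explicit): since U(u(0))=U_0, the point u(0) lies at distance at least d_1 from ∂Ω, while u(T_-^u)∈∂Ω, hence
d_1≤| u(0)-u(T_-^u)|≤∫_T_-^u^0|u̇(t)| dt≤| T_-^u|^1/2(∫_T_-^u^0|u̇(t)|^2dt)^1/2,
and similarly for T_+^u; therefore one can take T_0=d_1^2/(2a).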
Let {u_j}⊂𝒰 be a minimizing sequence
lim_j→+∞𝒜(u_j,(T_-^u_j,T_+^u_j)) =
inf_u∈𝒰𝒜(u,(T_-^u,T_+^u)) := a_0≤ a.
We can assume that each u_j satisfies (<ref>) and
(<ref>). By considering a subsequence, that we still denote by {u_j},
we can also assume that there exist T_-^∞, T_+^∞ with -∞≤
T_-^∞≤-T_0<T_0≤ T_+^∞≤ +∞ and a continuous
map u^*:(T_-^∞,T_+^∞)→^n such that
lim_j→+∞T_±^u_j=T_±^∞,
lim_j→+∞u_j(t)=u^*(t), t∈(T_-^∞,T_+^∞),
and in the last limit the convergence is uniform on bounded
intervals. This follows from (<ref>) which
implies that the sequence {u_j} is equi-bounded and from (<ref>) which implies
| u_j(t_1)-u_j(t_2)|≤|∫_t_1^t_2|u̇_j(t)| dt|≤√(a)| t_1-t_2|^1/2,
so that the sequence is also equi-continuous.
By passing to a further subsequence we can also assume
that u_j⇀ u^* in
W^1,2((T_1,T_2);^n) for each T_1, T_2 with
T_-^∞<T_1<T_2<T_+^∞. This follows from (<ref>),
which implies
1/2∫_T_-^u_j^T_+^u_j|u̇_j|^2dt ≤𝒜(u_j,(T_-^u_j,T_+^u_j))≤ a,
and from the fact that each map u_j satisfies (<ref>) and
therefore is bounded in L^2((T_-^u_j,T_+^u_j);^n).
We also have
𝒜(u^*,(T_-^∞,T_+^∞))≤ a_0.
Indeed, from the lower semicontinuity of the norm, for each
T_1, T_2 with T_-^∞<T_1<T_2<T_+^∞ we have
∫_T_1^T_2|u̇^*|^2dt ≤lim inf_j→+∞∫_T_1^T_2|u̇_j|^2dt.
This and the fact that u_j converges to u^* uniformly in
[T_1,T_2]
imply
𝒜(u^*,(T_1,T_2)) ≤lim inf_j→+∞𝒜(u_j,(T_1,T_2)) ≤lim inf_j→+∞𝒜(u_j,(T_-^u_j,T_+^u_j))=a_0.
Since this is valid for each T_-^∞<T_1<T_2<T_+^∞ the claim
(<ref>) follows.
Define
T_-^∞≤ T_-≤-T_0<T_0≤ T_+≤
T_+^∞ by setting
T_-=inf{t∈(T_-^∞,0]:u^*((t,0])⊂Ω}
T_+=sup{t∈(0,T_+^∞):u^*([0,t))⊂Ω}.
Then
*
𝒜(u^*,(T_-,T_+))=a_0.
* T_+<+∞ implies lim_t→ T_+u^*(t)=x_+ for some x_+∈Γ_+ and
Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-}.
* T_+=+∞ implies
lim_t→+∞d(u^*(t),Γ_+)=0,
for some Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-}.
Corresponding statements apply to T_-.
We first prove (ii), (iii). If T_+<+∞ the existence of
lim_t→ T_+u^*(t) follows from (<ref>) which
implies that u^* is a C^0,1/2 map. The limit x_+
belongs to ∂Ω and therefore to Γ_+ for some
Γ_+∈{Γ_1,…,Γ_N}.
Indeed,
x_+∉∂Ω would imply the existence of τ>0 such
that, for j large enough,
d(u_j([T_+,T_++τ]),∂Ω)≥1/2d(x_+,∂Ω),
in contradiction with the definition of T_+. If T_+=+∞ and
(iii) does not hold there is δ>0 and a diverging sequence
{t_j} such that
d(u^*(t_j),∂Ω)≥δ.
Set U_m=min_d(x,∂Ω)=δU(x)>0. From the uniform
continuity of U in {| x|≤ M} (M as in (<ref>)) it
follows that there is l>0 such that
|
U(x_1)-U(x_2)|≤1/2U_m, for |
x_1-x_2|≤ l, x_1,x_2∈{| x|≤ M}.
This and u^*∈ C^0,1/2 imply
U(u^*(t))≥1/2U_m, t∈ I_j =
(t_j-l^2/a,t_j+l^2/a),
and, by passing to a subsequence, we can assume that the intervals
I_j are disjoint. Therefore for each T>0 we have
∑_t_j≤ Tl^2U_m/a≤∫_0^TU(u^*(t))dt≤ a_0,
which is impossible for T large. This establishes (<ref>) for
some Γ_+∈{Γ_1,…,Γ_N}. It remains to show
that Γ_+≠Γ_-. This is a consequence of the minimizing
character of {u_j}. Indeed, Γ_+=Γ_- would imply the
existence of a constant c>0 such that
lim_j→∞𝒜(u_j,(T_-^u_j,T_+^u_j))≥ a_0+c.
Now we prove (i). T_+- T_-<+∞, implies that u^* is an
element of 𝒰 with T_±^u^*= T_±. It follows that
𝒜(u^*,(T_-,T_+))≥ a_0, which together with (<ref>)
imply (<ref>). Assume now T_+- T_-=+∞. If T_+=+∞,
(<ref>) implies that, given a small number ϵ>0, there
are t_ϵ and x̅_ϵ∈∂Ω such that
| u^*(t_ϵ)-x̅_ϵ|=ϵ and the
segment joining u^*(t_ϵ) to x̅_ϵ belongs
to Ω. Set
v_ϵ(t) =
(1-(t-t_ϵ))u^*(t_ϵ) +
(t-t_ϵ)x̅_ϵ,
t∈(t_ϵ,t_ϵ+1].
From the uniform continuity of U there is η_ϵ>0, lim_ϵ→ 0η_ϵ=0, such that U(v_ϵ(t))≤η_ϵ, for t∈[t_ϵ,t_ϵ+1]. Therefore we have
𝒜(v_ϵ,(t_ϵ,t_ϵ+1))
≤1/2ϵ^2+η_ϵ.
If T_->-∞ the map u_ϵ =
1_[T_-,t_ϵ]u^* +
1_(t_ϵ,t_ϵ+1]v_ϵ
belongs to 𝒰 and it results
a_0≤𝒜(u_ϵ,(T_-,t_ϵ+1)) =
𝒜(u^*,(T_-,t_ϵ)) +
𝒜(v_ϵ,(t_ϵ,t_ϵ+1))
≤𝒜(u^*,(T_-,T_+))+1/2ϵ^2+η_ϵ.
Since this is valid for all small ϵ>0 we get
a_0≤𝒜(u^*,(T_-,T_+)),
that together with (<ref>)
establishes (<ref>) if T_->-∞ and T_+=+∞. The
discussion of the other cases where T_+-T_- =+∞ is similar.
We observe that there are cases with T_+<T_+^∞ and/or
T_->T_-^∞, see Remark <ref>.
The map u^* satisfies (<ref>) and (<ref>) in
(T_-,T_+).
1. We first show that
for each T_1, T_2 with T_-<T_1<T_2<T_+ we have
𝒜(u^*,(T_1,T_2))=inf_v∈𝒱𝒜(v,(T_1,T_2)),
where
𝒱={v∈ W^1,2((T_1,T_2);^n): v(T_i)=u^*(T_i),i=1,2;
v([T_1,T_2])⊂Ω}.
Suppose instead that there are η>0 and v∈𝒱 such that
𝒜(v,(T_1,T_2))=𝒜(u^*,(T_1,T_2))-η.
Set w_j:(T_-^u_j,T_+^u_j)→Ω defined by
w_j(t)={[ u_j(t), t∈(T_-^u_j,T_1]∪[T_2,T_+^u_j),; v(t)+T_2-t/T_2-T_1δ_1j+t-T_1/T_2-T_1δ_2j,
t∈(T_1,T_2), ].
where δ_ij=u_j(T_i)-u^*(T_i), i=1,2, with u_j as in (<ref>).
Define v_j:[T_-^v_j,T_+^v_j]→^n by
v_j(t) = w_j(t - τ_j),
where τ_j is such that U(v_j(0))=U_0, as in
(<ref>). Note that
𝒜(v_j,(T_-^v_j,T_+^v_j))= 𝒜(w_j,(T_-^u_j,T_+^u_j)).
From (<ref>) we have
lim_j→∞δ_ij= 0, i=1,2, so that
lim_j→+∞𝒜(w_j,(T_1,T_2))=
𝒜(v,(T_1,T_2))
=
𝒜(u^*,(T_1,T_2))-η≤lim inf_j→+∞𝒜(u_j,(T_1,T_2))-η.
Therefore we have
lim inf_j→+∞𝒜(w_j,(T_-^u_j,T_+^u_j))=
lim_j→+∞𝒜(w_j,(T_1,T_2))+
lim inf_j→+∞𝒜(u_j,(T_+^u_j,T_1)∪(T_2,T_+^u_j))
≤lim inf_j→+∞𝒜(u_j,(T_1,T_2))-η+
lim inf_j→+∞𝒜(u_j,(T_+^u_j,T_1)∪(T_2,T_+^u_j))≤
a_0-η,
that, given (<ref>), is in contradiction with the
minimizing character of the sequence {u_j}.
The fact that u^* satisfies (<ref>) follows from (<ref>)
and regularity theory, see <cit.>.
To show that u^* satisfies (<ref>) we
distinguish the case T_+-T_-<+∞ from the case
T_+-T_-=+∞.
2. T_+-T_-<+∞.
Given t_0, t_1 with T_-<t_0<t_1<T_+, let
ϕ:[t_0,t_1+τ]→[t_0,t_1] be linear, with |τ|
small, and let ψ:[t_0,t_1]→[t_0,t_1+τ] be the
inverse of ϕ. Define u_τ:[T_-,T_++τ]→^n
by setting
u_τ(t)={[ u^*(t), t∈[T_-,t_0],; u^*(ϕ(t)), t∈[t_0,t_1+τ],; u^*(t-τ), t∈(t_1+τ,T_++τ)] ].
Note that u_τ∈𝒰 with T_-^u_τ=T_- and
T_+^u_τ=T_++τ. Since u^* is a
minimizer we have
d/dτ𝒜(u_τ,(T_-^u_τ,T_+^u_τ))|_τ=0=0.
From (<ref>), using also the change of variables t=ψ(s), it
follows
𝒜(u_τ,(T_-^u_τ,T_+^u_τ))-𝒜(u^*,(T_-,T_+))
= ∫_t_0^t_1+τ(ϕ̇^2(t)/2|u̇^*(ϕ(t))|^2 +
U(u^*(ϕ(t))))dt -∫_t_0^t_1(1/2|u̇^*(t)|^2+U(u^*(t)))dt
= ∫_t_0^t_1(1 -
ψ̇(t)/2ψ̇(t)|u̇^*(t)|^2 +
(ψ̇(t)-1)U(u^*(t)))dt
= ∫_t_0^t_1(-τ/t_1-t_0/2(1 +
τ/t_1-t_0)|u̇^*(t)|^2 +
τ/t_1-t_0U(u^*(t)))dt
= - τ/t_1-t_0∫_t_0^t_1(
|u̇^*(t)|^2/2(1 + τ/t_1-t_0) -
U(u^*(t)))dt .
This and (<ref>) imply
∫_t_0^t_1(1/2|u̇^*(t)|^2-U(u^*(t)))dt=0.
Since this holds for all t_0,t_1, with T_-<t_0<t_1<T_+, then (<ref>) follows.
3. T_+-T_-=+∞. We only consider the case
T_+=+∞. The discussion of the other cases is similar.
Let T∈(T_-,+∞), let T_-<t_0<t_1<T and let
ϕ:[t_0,T]→[t_0,T] be linear in the intervals
[t_0,t_1+τ], [t_1+τ,T], with |τ| small, and such that
ϕ([t_0,t_1+τ])=[t_0,t_1].
Define u_τ:(T_-,+∞)→^n by setting
u_τ(t)={[ u^*(t), t∈(T_-,t_0]∪[T,+∞); u^*(ϕ(t)), t∈[t_0,T]. ].
We have
𝒜(u_τ,(T_-,T))-𝒜(u^*,(T_-,T))
= ∫_t_0^t_1(-τ/t_1-t_0/2(1 +
τ/t_1-t_0)|u̇^*(t)|^2 +
τ/t_1-t_0U(u^*(t)))dt +
∫_t_1^T(τ/T-t_1/2(1 +
τ/T-t_1)|u̇^*(t)|^2
-τ/T-t_1U(u^*(t)))dt.
Since u^* restricted to the interval [t_0,T] is a minimizer of
(<ref>), by differentiating with respect to τ and setting
τ=0 we obtain
-1/t_1-t_0∫_t_0^t_1(1/2|u̇^*(t)|^2
-U(u^*(t)))dt
+1/T-t_1∫_t_1^T(1/2|u̇^*(t)|^2
-U(u^*(t)))dt=0.
From (<ref>) it follows that the second term in this expression
converges to zero when T→+∞. Therefore, after taking
the limit for T→+∞, we get back to (<ref>) and,
as before, we conclude that (<ref>) holds.
Assume that lim_t→ T_+u^*(t)=p∈ P. Then
T_+=+∞.
Since U is of class C^2 and p is a critical point of U there are constants c>0 and ρ>0 such that
U(x)≤ c| x-p|^2, x∈ B_ρ(p)∩Ω.
Fix t_ρ so that u^*(t)∈ B_ρ(p)∩Ω for t≥
t_ρ. Then T_+=+∞ follows from (<ref>) and
d/dt| u^*-p|≥ -|u̇^*|=-√(2U(u^*))≥
-√(2c)| u^*-p|, t≥ t_ρ.
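Integrating this differential inequality (Gronwall's lemma, made explicit here) gives
| u^*(t)-p|≥| u^*(t_ρ)-p| e^-√(2c)(t-t_ρ)>0, t≥ t_ρ,
so u^*(t) cannot reach p at any finite time, that is, T_+=+∞.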
We now show that if Γ_+ has positive diameter then T_+<+∞. To prove this we first show that T_+=+∞ implies u^*(t)→ p∈ P as t→+∞, then we conclude that this is in contrast with (<ref>).
If T_+=+∞, then
there is p∈ P such that
lim_t→+∞u^*(t)=p.
An analogous statement applies to T_-.
If Γ_+={p} for some p∈ P, then
(<ref>) follows by (<ref>). Therefore we assume that
Γ_+ has positive diameter. The idea of the proof is to show
that if u^*(t) gets too close to ∂Γ_+∖ P it is
forced to end up on Γ_+∖ P in a finite time in
contradiction with T^*=+∞.
If (<ref>) does not hold there is q>0 and a sequence
{τ_j}, with lim_j→∞τ_j=+∞,
such that d(u^*(τ_j),P)≥ q,
for all j∈.
Since, by
(<ref>) u^* is bounded, using also (<ref>), we can
assume that
lim_j→+∞u^*(τ_j) = x̅, for
some x̅∈Γ_+∖∪_p∈ P B_q(p).
The smoothness of U implies that there
are positive constants r̅, r, c and C such
that
* the orthogonal projection on
π:B_r̅(x̅)→∂Ω is well defined
and π(B_r̅(x̅))⊂∂Ω∖ P;
* we have
B_r(x_0)⊂ B_r̅(x̅),
x_0∈∂Ω∩ B_r̅/2(x̅);
* if (ξ,s)∈^n-1×ℝ are local coordinates with
respect to a basis {e_1,…,e_n}, e_j=e_j(x_0), with
e_n(x_0) the unit interior normal to ∂Ω at
x_0∈∂Ω∩ B_r̅/2(x̅) it results
1/2c s≤ U(x(x_0,(ξ,s)))≤ 2c s, |ξ|^2+s^2≤ r^2, s≥ h(x_0,ξ),
where
x=x(x_0,(ξ,s))=x_0+∑_j=1^n-1ξ_je_j(x_0) + se_n(x_0),
and h:∂Ω∩ B_r̅/2(x̅) ×{|ξ|≤ r}→, | h(x_0,ξ)|≤
C|ξ|^2, for |ξ|≤ r, is a local
representation of ∂Ω in a neighborhood of x_0,
that is U(x(x_0,(ξ,h(x_0,ξ))))=0 for |ξ|≤ r.
Fix a value j_0 of j and set t_0=τ_j_0.
If j_0 is sufficiently large, setting t_0=τ_j_0 we
have that x_0=π(u^*(t_0)) is well defined. Moreover
x_0∈∂Ω∩ B_r̅/2(x̅) and
u^*(t_0)=x_0+δ e_n(x_0), δ=| u^*(t_0)-x_0|.
For k=(8/3)√(2) let Q_0 be the set
Q_0={x(x_0,(ξ,s)):|ξ|^2+(s-δ)^2<k^2δ^2, s>δ/2}.
Since δ→ 0 as j_0→+∞ we can assume
that δ>0 is so small (δ < min{1/(2Ck^2), r/(1+k)} suffices) that Q_0⊂Ω∩
B_r(x_0).
.2cm
Claim 1. u^*(t)
leaves Q_0 through the disc D_0=∂
Q_0∖∂ B_kδ(u^*(t_0)).
.2cm
From (<ref>) we have a_0≤𝒜(v,(T_-,T_+^v)) for
each W^1,2 map v:(T_-,T_+^v]→^n that coincides
with u^* for t≤ t_0, and satisfies
v((t_0,T_+^v))⊂Ω, v(T_+^v)∈∂Ω and
(<ref>). Therefore if we set
w(s)=x_0+s e_n(x_0),
s∈[0,δ], we have
a_0≤𝒜(u^*,(T_-,t_0)) + 𝒥(w,(0,δ)).
On the other hand, if u^*(t_0^')∈∂ Q_0(x_0)∩∂
B_kδ(u^*(t_0)), where
t_0^'=sup{t>t_0:u^*([t_0,t))⊂Q_0∖∂
B_kδ(u^*(t_0))},
from (<ref>) it follows
𝒜(u^*,(T_-,t_0))+𝒥(u^*,(t_0,t_0^'))≤ a_0.
Using (<ref>) we obtain
𝒥(w,(0,δ))≤4/3c^1/2δ^3/2,
and, since
cδ/4≤
U(x(x_0,(ξ,s))), (ξ,s)∈Q_0(x_0),
we also have, with k defined above,
8/3c^1/2δ^3/2 =
k/√(2)c^1/2δ^3/2≤c^1/2δ^1/2/√(2)∫_t_0^t_0^'|u̇^*(t)| dt ≤√(2)∫_t_0^t_0^'√(U(u^*(t)))|u̇^*(t)|
dt.
From (<ref>) and (<ref>) it follows
𝒥(w,(0,δ)) ≤1/2𝒥(u^*,(t_0,t_0^')),
and therefore (<ref>) and (<ref>) imply the absurd
inequality a_0<a_0. This contradiction proves the claim.
.2cm From Claim 1 it follows that there is t_1∈(t_0,+∞)
with the following properties:
u^*([t_0,t_1))⊂ Q_0(x_0),
u(t_1)∈ D_0.
Set x_0,1=π(u^*(t_1)) and δ_1=|
u^*(t_1)-x_0,1|.
Since h(x_0,0)=h_ξ(x_0,0)=0 and the radius
ρ_δ=(k^2-1/4)^1/2δ of D_0 is
proportional to δ, we can assume that δ is so small that
the ratio 2δ_1/δ and | x_0,1 -
x_0|/| u^*(t_1)-x(x_0,(0,δ/2))| are near
1 so that we have
δ_1≤ρδ, for some ρ<1,
| x_0,1-x_0|≤ kδ.
We also have
t_1-t_0≤ k^'δ^1/2, k^' =
8k/c^1/2.
This follows from
(t_1-t_0)c/4δ≤𝒜(u^*,(t_0,t_1))
=𝒥(u^*,(t_0,t_1))
=√(2)∫_t_0^t_1√(U(u^*(t)))|u̇^̇*̇(t)| dt
≤ 2√(cδ)| u^*(t_1)-u^*(t_0)|≤ 2 c^1/2 k δ^3/2.
where we used (<ref>) to estimate 𝒥 on the segment
joining u^*(t_0) with u^*(t_1).
We have u^*(t_1)=x_0,1+δ_1e_n(x_0,1) and we can apply Claim
1 to deduce that there exists t_2>t_1 such that
u^*([t_1,t_2))⊂ Q_1(x_0,1),
u^*(t_2)∈ D_1,
where Q_1 and D_1 are defined as Q_0 and D_0 with δ_1
and x(x_0,1,(ξ,s)) instead of δ and
x(x_0,(ξ,s)). Therefore an induction argument yields sequences
{t_j}, {x_0,j}, {δ_j} and {Q_j(x_0,j)} such
that
u^*([t_j,t_j+1))⊂ Q_j(x_0,j), x_0,j=π(u^*(t_j)),
δ_j+1≤ρδ_j≤ρ^j+1δ,
| x_0,j+1-x_0,j|≤ kδ_j≤ kρ^jδ,
(t_j+1-t_j)≤ k^'δ_j^1/2≤ k^'ρ^j/2δ^1/2,
u^*(t_j)=x_0,j+δ_je_n(x_0,j) ∈ D_j.
We can also assume that Q_j(x_0,j)⊂Ω∩ B_r(x_0),
for all j∈.
This follows from | u^*(t_j+1)-u^*(t_j)|≤
kδ_j≤ kρ^jδ.
From (<ref>) we obtain that
there exists T with
t_0<T≤ t_0+k^'δ^1/2/(1-ρ^1/2) such
that
u^*(T)=lim_t→ Tu^*(t)=lim_j→+∞x_0,j∈∂Ω∖ P,
| u^*(T)-x_0|≤ kδ/(1-ρ).
This contradicts the existence of the sequence {τ_j}, with lim_j→∞τ_j=+∞, appearing in
(<ref>) and establishes (<ref>). The proof
of the lemma is complete.
We continue by showing (<ref>) contradicts (<ref>).
Assume that Γ_+ has positive diameter. Then
T_+<+∞.
An analogous statement applies to Γ_- and T_-.
From Lemma <ref>, if T_+=+∞ there exists p∈ P such
that lim_t→+∞u^*(t) = p. We use a local argument to show
that this is impossible if Γ_+ has positive diameter.
By a suitable change of variable we can
assume that p=0 and that, in a neighborhood of 0∈^n, U reads
U(u)=V(u)+W(u),
where V is the quadratic part of U:
V(u)=1/2(-∑_i=1^mλ_i^2u_i^2 +
∑_i=m+1^nλ_i^2u_i^2), λ_i>0
and W satisfies,
| W(u)|≤ C| u|^3, | W_x(u)|≤ C|
u|^2, | W_xx(u)|≤ C| u|.
Consider the Hamiltonian system with
H(p,q) = 1/2| p|^2 - U(q), p∈^n,
q∈Ω⊂^n.
For this system the origin of ^2n is an
equilibrium point that corresponds to the critical point p=0 of
U.
Set D = diag(-λ_1^2,
…,-λ_m^2,λ_m+1^2, …,λ_n^2).
The eigenvalues of the symplectic matrix
([ 0 D; I 0 ])
are
λ_i, i=m+1,…,n,
-λ_i, i=m+1,…,n,
± iλ_i, i=1,…,m.
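Indeed (a direct verification added for completeness): (p,q) is an eigenvector with eigenvalue μ if and only if Dq=μ p and p=μ q, i.e. if and only if μ^2 is an eigenvalue of D, so the characteristic equation factors as
∏_i=1^m(μ^2+λ_i^2)∏_i=m+1^n(μ^2-λ_i^2)=0,
giving μ=± iλ_i for i=1,…,m and μ=±λ_i for i=m+1,…,n.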
Let
(e_1,0),…,(e_n,0), (0,e_1),…,(0,e_n) be the basis of
^2n defined by e_j=(δ_j1,…,δ_jn), where
δ_ji is Kronecker's delta.
The stable S^s, unstable S^u and center S^c subspaces invariant
under the flow of the linearized Hamiltonian system at 0∈^2n
are
S^s=span{(-λ_j e_j,e_j)}_j=m+1^n,
S^u=span{(λ_j e_j,e_j)}_j=m+1^n,
S^c=span{(e_j,0), (0,e_j)}_j=1^m.
From (<ref>) and (<ref>) we have
lim_t→+∞(u̇^*(t),u^*(t))=0∈^2n.
Let W^s and W^u be the local stable and unstable
manifold and let W^c be a local center
manifold at 0∈^2n. From the center manifold theorem
<cit.>,
<cit.>, there is a constant λ_0>0 such that, for each
solution (p(t),q(t)) that remains in a neighborhood of 0∈^2n
for positive time, there is a solution (p^c(t),q^c(t))∈ W^c that
satisfies
|(p(t),q(t))-(p^c(t),q^c(t))|=O(e^-λ_0t).
Since W^c is tangent to S^c at 0∈^2n, the projection
W_0^c on the configuration space is tangent to
S_0^c=span{e_j}_j=1^m, which is the projection of S^c on the
configuration space. Therefore, if (p^c,q^c) ≢0, given
γ>0, by (<ref>) there is t_γ such that
d(q(t),S_0^c)≤γ| q(t)|, for t≥ t_γ. For
γ small, this implies that q(t)∉Ω for t≥
t_γ. It follows that (p^c,q^c)≡ 0 and from (<ref>)
(p(t),q(t)) converges to zero exponentially. This is possible only
if (p(t),q(t))∈ W^s and, in turn, only if q(t)∈ W_0^s, the
projection of W^s on the configuration space. This argument leads to
the conclusion that the trajectory of u^* in a neighborhood of 0
is of the form
u^*(t(s))=𝔲^*(s) = sη+z(s),
where
η=∑_i=m+1^nη_ie_i
is a unit vector[Actually
η coincides with one of the eigenvectors of U”(0).],
s∈[0,s_0) for some s_0>0, and z(s) satisfies
z(s)·η=0, | z(s)|≤ c| s|^2,
| z^'(s)|≤
c| s|
for a positive constant c.
We are now in the position of constructing our local perturbation of
u. We first discuss the case U=V, z(s)=0. We set
u̅(s)=sη
and, in some interval
[1,s_1],
construct a
competing map v̅:[1,s_1]→^n,
v̅ = u̅ + g e_1,
g:[1,s_1]→,
with the following
properties:
V(v̅(1))=0,
v̅(s_1)=u̅(s_1),
𝒥_V(v̅,[1,s_1])<𝒥_V(u̅,[0,s_1]).
The basic observation is that, if we move from u̅ in the
direction of one of the eigenvectors e_1,…,e_m corresponding to
negative eigenvalues of the Hessian of V, the potential V
decreases and therefore, for each s_0∈(1,s_1) we can define the
function g in the interval [1,s_0] so that
𝒥_V(u̅+ge_1, (1,s_0))=𝒥_V(u̅,
(1,s_0)).
Indeed it suffices to impose that
g:(1,s_0]→
satisfies the condition
√(V(u̅(s))) = √(1+g'^2(s))√(V(u̅(s)+g(s)e_1)), s∈(1,s_0].
According with this condition we take g as the
solution of the problem
{[ g' = -λ_1g/√(s^2λ_η^2-λ_1^2g^2) = -λ_1g/(sλ_η√(1-λ_1^2g^2/(s^2λ_η^2)));  g(1)=λ_η/λ_1 ].,
where we have used (<ref>) and set
λ_η=√(∑_i=m+1^nλ_i^2η_i^2).
Note that the initial condition in (<ref>) implies
V(v̅(1))=0. The solution g of (<ref>) is well defined
in spite of the fact that the right hand side tends to
-∞ as s→ 1.
Since g defined by (<ref>) is positive for s∈[1,+∞),
to satisfy the condition v̅(s_1)=u̅(s_1), we give a
suitable definition of g in the interval [s_0,s_1] in order that
g(s_1)=0. Choose a number α∈(0,1) and extend g with
continuity to the interval [s_0,s_1] by imposing that
√(V(u̅(s))) = α√(1 + g'^2(s))√(V(u̅(s)+g(s)e_1)), s∈(s_0,s_1].
Therefore, in the interval (s_0,s_1], we define g by
g' = -(1/α)√((1-α^2+α^2λ_1^2g^2/(s^2λ_η^2))/(1-λ_1^2g^2/(s^2λ_η^2)))≤ -√(1-α^2)/α.
Since (<ref>) implies
𝒥_V(v̅,[s_0,s_1]) =
1/α𝒥_V(u̅,[s_0,s_1]),
from (<ref>) we see that v̅ satisfies also the
requirement (<ref>) above if we can choose α∈(0,1) and
1<s_0<s_1 in such a way that
𝒥_V(u̅,(0,1)) >
1-α/α𝒥_V(u̅,(s_0,s_1)).
Since (<ref>) implies s_1<s_0+α
g(s_0)/√(1-α^2) a sufficient condition for this is
𝒥_V(u̅,(0,1)) >
1-α/α𝒥_V(u̅,
(s_0,s_0+α g(s_0)/√(1-α^2))),
or equivalently
1 > (1-α)/α((s_0+α g(s_0)/√(1-α^2))^2 - s_0^2) = 2s_0g(s_0)√((1-α)/(1+α)) + (α/(1+α))g^2(s_0).
By a proper choice of s_0 and α the right hand side of (<ref>) can be made as small as we like. For instance we can fix s_0 so that g(s_0)≤ 1/4 and then choose α in such a way that (s_0/2)√((1-α)/(1+α))≤ 1/4, and conclude that (<ref>) holds.
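Explicitly (our arithmetic): with g(s_0)≤ 1/4 and (s_0/2)√((1-α)/(1+α))≤ 1/4, the right hand side of (<ref>) is at most
2s_0g(s_0)√((1-α)/(1+α))+(α/(1+α))g^2(s_0)≤ 1/4+1/16=5/16<1.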
Next we use the function g
to define a comparison map v that coincides with
u^* outside an ϵ-neighborhood of 0 and show that the
assumption that the trajectory of u^* ends up in some p∈ P must
be rejected. For small ϵ>0 we define
v(ϵ s) = ϵ sη+z(ϵ s) + ϵ
g(s-σ)e_1, s∈[1+σ,s_1+σ],
where σ=σ(ϵ) is determined by the condition
U(v(ϵ(1+σ)))=0,
which, using (<ref>), (<ref>), (<ref>) and
g(1)=λ_η/λ_1, after dividing by ϵ^2,
becomes
1/2λ_η^2((1+σ)^2-1)=ϵ f(σ,ϵ),
where f(σ,ϵ) is a smooth bounded function defined in a
neighborhood of (0,0). For small ϵ>0, there is a unique
solution σ(ϵ)=O(ϵ) of (<ref>). Note
also that (<ref>) implies that
v(ϵ(s_1+σ))=𝔲^*(ϵ(s_1+σ)).
We now conclude by showing that, for ϵ>0 small, it results
𝒥_U(𝔲^*(ϵ·),(0,s_1+σ)) >
𝒥_U(v(ϵ·),(1+σ,s_1+σ)).
From (<ref>) and (<ref>) we have
lim_ϵ→ 0^+ϵ^-1|d/ds𝔲^*(ϵ s)|
=1, lim_ϵ→ 0^+ϵ^-1|d/dsv(ϵ s)|
= √(1+g'^2(s)),
and, using also (<ref>) and σ=O(ϵ),
lim_ϵ→ 0^+ϵ^-2U(𝔲^*(ϵ s)) =
V(u̅(s)), s∈(0,s_1),
lim_ϵ→ 0^+ϵ^-2U(v(ϵ s)) =
V(v̅(s)), s∈(1,s_1)
uniformly in compact intervals.
The limits (<ref>) and (<ref>) imply
lim_ϵ→
0^+ϵ^-2𝒥_U(𝔲^*(ϵ·),
(0,s_1+σ)) = lim_ϵ→
0^+√(2)∫_0^s_1+σ√(ϵ^-2U(𝔲^*(ϵ s)))ϵ^-1|d/ds𝔲^*(ϵ s)|
ds,
=√(2)∫_0^s_1√(V(u̅(s)))
ds = 𝒥_V(u̅,(0,s_1))
lim_ϵ→
0^+ϵ^-2𝒥_U(v(ϵ·),(1+σ,s_1+σ))
=lim_ϵ→
0^+√(2)∫_1+σ^s_1+σ√(ϵ^-2U(v(ϵ s)))ϵ^-1|d/dsv(ϵ s)| ds,
=√(2)∫_1^s_1√(V(v̅(s)))√(1+g'^2(s)) ds =
𝒥_V(v̅,(1,s_1)).
This and (iii) above imply that, indeed, the inequality (<ref>)
holds for small ϵ>0. The proof is complete.
We can now complete the proof of Theorem <ref>. We show that the
map u^*:(T_-,T_+)→^n possesses all the required
properties. The fact that u^* satisfies (<ref>) and
(<ref>) follows from Lemma <ref>. Lemma
<ref> implies (<ref>) and, if T_->-∞, also
(<ref>). The fact that x_-∈Γ_-∖ P is a consequence
of Lemma <ref> and implies that Γ_- has positive
diameter. Conversely, if Γ_- has positive diameter, Lemmas
<ref> and <ref> imply that T_->-∞ and that (<ref>)
holds for some x_-∈Γ_-∖ P. The proof of Theorem
<ref> is complete.
From Theorem <ref> it follows that if N is even then there are
at least N/2 distinct orbits connecting different elements of
{Γ_1,…,Γ_N}. If N is odd there are at least
(N+1)/2.
Simple examples show that, given distinct Γ_i, Γ_j∈{Γ_1,…,Γ_N}, an orbit
connecting them does not always exist.
Let
𝒰_ij = { u∈
W^1,2((T_-^u,T_+^u);^n):u((T_-^u,T_+^u))⊂Ω,
u(T_-^u)∈Γ_i, u(T_+^u)∈Γ_j
}
with i≠ j and
d_ij = inf_u∈𝒰_ij𝒜(u,(T_-^u,T_+^u)).
An orbit connecting Γ_i and Γ_j exists if
d_ij < d_ik + d_kj, ∀ k≠ i,j.
The proof of Theorem <ref> uses, with obvious
modifications, the same arguments as in
the proof of Theorem <ref> to characterize u^* as the limit of
a minimizing sequence {u_j} of the action functional
𝒜(u,(0,T_+^u)) = ∫_0^T_+^u(1/2|u̇(t)|^2
+ U(u(t)))dt
in the set
𝒰 = { u∈ W^1,2((0,T_+^u);^n) :
0<T_+^u <+∞, u(0) = 0, u([0,T_+^u))⊂Ω, u(T_+^u)
∈∂Ω}.
In the symmetric case of Theorem <ref> it is easy to
construct an example with T_+<T_+^∞. For U(x)=1-|
x|^2, x∈^2, the solution u:[0,π/2]→^2
of (<ref>) determined by (<ref>) and
u([0,π/2])={(s,0):s∈[0,1]} is a minimizer of
𝒜 in 𝒰. For ϵ>0 small, let
t_ϵ=arcsin(1-ϵ) and define u_ϵ: [0,T^u_ϵ]
→^2 as the map determined by (<ref>),
u_ϵ([0,t_ϵ]) = {(s,0): s∈[0,1-ϵ)} and
u_ϵ((t_ϵ, T^u_ϵ]) =
{(1-ϵ,s):s∈(0,√(2ϵ-ϵ^2)]}. In this case
T_+=π/2 and T_+^∞=3π/4.
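As a purely numerical illustration of this example (a minimal sketch, not part of the argument; the tolerance and time horizon are our own choices, and the reported time depends on the normalization adopted for U_x), one can integrate (<ref>) for U(x)=1-| x|^2 with zero-energy initial data along e_1 and report when the trajectory reaches ∂Ω:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy check for U(x) = 1 - |x|^2 on the unit disk Omega, so U_x = -2x
# and the zero-energy condition at the origin gives |u'(0)| = sqrt(2).
def rhs(t, y):
    u, v = y[:2], y[2:]
    return np.concatenate([v, -2.0 * u])      # u'' = U_x(u)

y0 = np.array([0.0, 0.0, np.sqrt(2.0), 0.0])  # start at 0 along e_1

def near_boundary(t, y):
    # |u| touches 1 tangentially (the speed vanishes there), so we
    # detect a sign change of 1 - |u| - tol instead of 1 - |u| itself.
    return 1.0 - np.hypot(y[0], y[1]) - 1e-8
near_boundary.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), y0, events=near_boundary,
                rtol=1e-12, atol=1e-14)
print("trajectory reaches dOmega at t ~", sol.t_events[0][0])
```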
§.§ On the existence of heteroclinic connections
Corollary <ref> states the existence of heteroclinic
connections under the assumptions of Theorem <ref> and, in
particular, that U∈ C^2. Actually, by examining the proof of
Theorem <ref> we can establish an existence result under weaker
hypotheses. In the special case ∂Ω=P, #P≥ 2,
given p_-∈ P, the set 𝒰 defined in (<ref>) takes
the form
𝒰={u∈ W^1,2((T_-^u,T_+^u);^n):
-∞< T_-^u < T_+^u< +∞,
u((T_-^u,T_+^u))⊂Ω, U(u(0))=U_0, u(T_-^u)=p_-, u(T_+^u)∈ P∖{p_-}}.
In this section we slightly enlarge the set 𝒰 by allowing
T_±^u=±∞ and consider the admissible set
𝒰={u∈ W^1,2_loc((T_-^u,T_+^u);^n):
-∞≤ T_-^u < T_+^u≤+∞,
u((T_-^u,T_+^u))⊂Ω, U(u(0))=U_0, lim_t→ T_-^u u(t)=p_-, lim_t→ T_+^u u(t)∈ P∖{p_-}}.
Assume that U is a non-negative continuous function, which vanishes in a
finite set P, #P≥ 2, and satisfies
√(U(x))≥σ(| x|), x∈Ω, | x|≥ r_0
for some r_0>0 and
a non-negative function σ:[r_0,+∞)→ such that
∫_r_0^+∞σ(r)dr=+∞.
Given p_-∈ P there is p_+∈ P∖{p_-} and a
Lipschitz-continuous map u^*:(T_-,T_+)→Ω that satisfies
(<ref>) almost everywhere on (T_-,T_+),
lim_t→ T_±u^*(t)=p_±,
and minimizes the action functional 𝒜 on 𝒰̃.
We begin by showing that
a_0=inf_u∈𝒰𝒜=inf_u∈𝒰̃𝒜=ã_0.
Since 𝒰⊂𝒰̃ we have
a_0≥ã_0. On the other hand arguing as in the proof of
Lemma <ref>, if T_+-T_-=+∞, given a small number
ϵ>0, we can construct a map u_ϵ∈𝒰 that
satisfies
a_0≤𝒜(u_ϵ,(T_-^u_ϵ,T_+^u_ϵ))≤𝒜(u,(T_-^u,T_+^u))
+η_ϵ
where η_ϵ→ 0 as ϵ→ 0. This
implies a_0≤ã_0 and establishes (<ref>). It follows
that we can proceed as in the proof of Theorem <ref> and define
u^*∈𝒰̃ as the limit of a minimizing sequence
{u_j}⊂𝒰. The arguments in the proof of Lemma
<ref> show that (<ref>) holds. It remains to show that
u^* is Lipschitz-continuous. Looking at the proof of Lemma
<ref> we see that the continuity of U is sufficient for
establishing that (<ref>) holds almost everywhere on
(T_-,T_+), and the Lipschitz character of u^* follows. The proof
is complete.
Without further information on the behavior of U in a neighborhood
of p_± nothing can be said on T_± being finite or infinite and
it is easy to construct examples to show that all possible
combinations are possible. As shown in Lemma <ref> a sufficient
condition for T_±=±∞ is that, in a neighborhood of
p=p_±, U(x) is bounded by a function of the form c|
x-p|^2, c>0. U of class C^1 is a sufficient condition in
order that u^* is of class C^2 and satisfies (<ref>).
§ EXAMPLES
In this section we show a few simple applications of Theorems
<ref> and <ref>.
Our first application describes a class of potentials with
the property that, in spite of the existence of possibly infinitely
many critical values, (<ref>) has a nontrivial periodic orbit
on any energy level.
Assume that U:^n→ satisfies
U(-x)=U(x), x∈^n,
U(0) = 0, U(x)<0 for x≠0,
lim_|x|→∞U(x) = -∞.
Assume moreover that each nonzero critical point
of U is hyperbolic with Morse index i_m≥ 1. Then there is a
nontrivial periodic orbit of (<ref>) on the energy level
1/2|u̇|^2-U(u)=α for each α>0.
For each α>0 we set Ũ=U(x)+α and let
Ω⊂{Ũ>0} be the connected component that contains
the origin. Ω is open, nonempty and bounded and, from the
assumptions on the properties of the critical points of U, it
follows that ∂Ω is connected and contains at most a
finite number of critical points. Therefore we are under the
assumptions of Corollary <ref> for the case N=1 and the
existence of the periodic orbit follows.
An example of potential U:^2→ that satisfies the
assumptions in Proposition <ref> is, in polar coordinates
r,θ,
U(r,θ) = -r^2+1/2tanh^4(r)cos^2(r^-1)cos^{2k}(2θ),
where k>0 is a sufficiently large number.
Next we give another application of Corollary <ref>. For the
potential U:^2→, with
U(x)=1/2(1-x_1^2)^2+1/2(1-4x_2^2)^2,
the energy level
α=-1/2
is critical and corresponds to
four hyperbolic critical points p_1=(1,0), -p_1,
p_2=(0,1/2) and -p_2. The connected component
Ω⊂{Ũ>0} (where Ũ=U(x)-1/2) that
contains the origin is bounded by a simple curve Γ that
contains ± p_1 and ± p_2. In spite of the presence of these
critical points, from Theorem <ref> it follows that there is a
minimizer u∈𝒰, with 𝒰 as in
(<ref>) and u(T^u)∈Γ∖{± p_1,± p_2}, and
Corollary <ref> implies the existence of a periodic solution v^*.
Note that there are also two heteroclinic orbits, solutions of
(<ref>) and (<ref>):
u_1(t) = (tanh(t),0), u_2(t) = (0,1/2tanh(2t)).
These orbits connect p_j to -p_j, for j=1,2. By
Theorem <ref> both u_1 and u_2 have action greater than that of
v^*|_(-T_+,T_+).
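As a quick numerical sanity check (a minimal sketch; the time window, grid and the trimming of boundary points are arbitrary choices of ours), one can verify by finite differences that u_1 and u_2 satisfy (<ref>) and lie on the level 1/2|u̇|^2-U(u)=-1/2:

```python
import numpy as np

# Double-well potential U(x) = (1/2)(1-x1^2)^2 + (1/2)(1-4 x2^2)^2
# and its gradient U_x.
def U(x1, x2):
    return 0.5 * (1 - x1**2)**2 + 0.5 * (1 - 4 * x2**2)**2

def U_x(x1, x2):
    return np.array([-2 * x1 * (1 - x1**2), -8 * x2 * (1 - 4 * x2**2)])

t = np.linspace(-5, 5, 20001)
h = t[1] - t[0]

for name, traj in [("u_1", np.column_stack([np.tanh(t), 0 * t])),
                   ("u_2", np.column_stack([0 * t, 0.5 * np.tanh(2 * t)]))]:
    vel = np.gradient(traj, h, axis=0)           # u'
    acc = np.gradient(vel, h, axis=0)            # u''
    grad = np.array([U_x(a, b) for a, b in traj])
    eq_err = np.abs(acc - grad)[200:-200].max()  # drop edge artifacts
    # energy on the level alpha = -1/2:  (1/2)|u'|^2 - U(u) = -1/2
    en_err = np.abs(0.5 * (vel**2).sum(1) - U(traj[:, 0], traj[:, 1])
                    + 0.5)[200:-200].max()
    print(name, "ODE residual:", eq_err, " energy residual:", en_err)
```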
Our last example shows
that Theorems <ref> and <ref> can be used to derive
information on the rich dynamics that (<ref>) can exhibit when
U undergoes a small perturbation. We consider a family of potentials
U:^2×[0,1]→. We assume that U(x,0)=x_1^6+x_2^2
which from various points of view is a structurally unstable potential
and, for λ>0 small, we consider the perturbed potential
U(x,λ)=2λ^4x_1^2+x_2^2-2λ^2x_1x_2-3λ^2x_1^4+x_1^6.
This potential satisfies U(-x,λ)=U(x,λ) and, for
λ>0, has the five critical points p_0, ± p_1
and ± p_2 defined by
p_0=(0,0),
p_1 = (λ(1-(2/3)^1/2)^1/2,
λ^3(1-(2/3)^1/2)^1/2),
p_2 = (λ(1+(2/3)^1/2)^1/2,
λ^3(1+(2/3)^1/2)^1/2),
which are all hyperbolic.
We have U(p_2,λ)<0=U(p_0,λ)<U(p_1,λ) and p_0 is
a local minimum, p_1 a saddle and p_2 a global
minimum. Let α be the energy level. For
-α<U(p_2,λ) or -α≥ U(p_1,λ) no
information can be derived from Theorems <ref> and <ref>
therefore we assume -α∈[U(p_2,λ),U(p_1,λ)). For
-α=U(p_2,λ) Corollary <ref> or Corollary
<ref>
yields the existence of a heteroclinic connection u_2
between -p_2 and p_2. For -α∈(U(p_2,λ),0) Corollary
<ref> implies the existence of a periodic orbit
u_α. This periodic orbit converges uniformly in compact intervals to
u_2 and the period T_α→+∞ as
-α→ U(p_2,λ)^+. For α=0 Corollary
<ref> implies the existence of two orbits u_0 and -u_0
homoclinic to p_0=0. We can assume that u_0 satisfies the
condition u_0(-t)=u_0(t) and that u_α(0)=0. Then we have that
u_α(·±T_α/4) converges uniformly in compact intervals
to ∓ u_0 and T_α→+∞ as -α→
0^-. For -α∈(0,U(p_1,λ)), ∂Ω is the
union of three simple curves, all of positive diameter: Γ_0, which
contains the origin, and ±Γ_2, which contain ± p_2. Corollary
<ref>, together with the fact that U(·,λ) is
symmetric, implies the existence of two periodic solutions
ũ_α and -ũ_α with ũ_α
that oscillates between Γ_0 and Γ_2 in each time
interval equal to T_α/2. Assuming that
ũ_α(0)∈Γ_2 we have that, as -α→
0^+, ũ_α→ u_0 uniformly in compacts and
T_α→+∞. Finally we observe that, in the limit
-α→ U(p_1,λ)^-, ũ_α converges
uniformly to the constant solution u≡ p_1.
§.§ Acknowledgements
The first author is indebted to Peter Bates for fruitful discussions on
the subject of this paper.
af N. Alikakos and G. Fusco. On the connection
problem for potentials with several global minima. Indiana Univ. Math. Journ. 57 No. 4, 1871-1906 (2008)
A P. Antonopoulos and P. Smyrnelis. On minimizers
of the Hamiltonian system u”=∇ W(u), and on the existence of
heteroclinic, homoclinic and periodic connections. Preprint (2016)
braides A. Braides. Approximation of
Free-Discontinuity Problems. Lectures Notes in Mathematics 1694,
Springer-Verlag, Heidelberg (1998)
B A. Bressan. Tutorial on the Center Manifold
Theorem. Hyperbolic systems of balance laws. CIME course (Cetraro
2003). Springer Lecture Notes in Mathematics 1911,
327-344. Springer-Verlag, Heidelberg (2007)
BGH G. Buttazzo, M. Giaquinta, S. Hildebrandt. One-dimensional Calculus of Variations: an Introduction Oxford University Press, Oxford (1998)
salomao N. V. De Paulo and P. A. S. Salomão.
Systems of transversal sections near critical energy
levels of Hamiltonian systems in ^4, arXiv:1310.8464v2 (2016)
monteil A. Monteil and F. Santambrogio. Metric
methods for heteroclinic connections. Mathematical Methods in the
Applied Sciences, DOI: 10.1002/mma.4072 (2016)
sourdis C. Sourdis. The heteroclinic connection
problem for general double-well potentials.
Mediterranean Journal of Mathematics 13 No. 6,
4693-4710 (2016)
sternberg P. Sternberg. Vector-Valued Local
Minimizers of Nonconvex Variational Problems, Rocky Mountain
J. Math. 21 No. 2, 799-807 (1991)
W A. Vanderbauwhede. Centre manifolds, normal
forms and elementary bifurcations. Dynamics
Reported 2 No. 4, 89-169 (1989)
ZS A. Zuniga and P. Sternberg. On the
heteroclinic connection problem for multi-well gradient systems.
Journal of Differential Equations 261 No. 7,
3987-4007 (2016)
| Let U:^n→ be a
function of class C^2. We assume that Ω⊂^n is a
connected component of the set {x∈^n: U(x)>0} and
that ∂Ω is compact and is the union of N≥ 1
distinct nonempty connected components Γ_1,…,Γ_N. We
consider the following situations
H N≥ 2 and, if Ω is unbounded, there is r_0>0 and
a non-negative function σ:[r_0,+∞)→ such that
∫_r_0^+∞σ(r)dr=+∞ and
√(U(x))≥σ(| x|), x∈Ω, | x|≥ r_0.
H_s Ω is bounded, the origin 0∈^n belongs to
Ω and U is invariant under the antipodal map
U(-x)=U(x), x∈Ω.
Condition (<ref>) was first introduced in
<cit.>. A sufficient condition for (<ref>)
is that lim inf_|x|→∞ U(x) >0.
We study non constant solutions u:(T_-,T_+)→Ω, of the
equation
ü=U_x(u), U_x=(∂ U/∂ x)^T,
that satisfy
lim_t→ T_±d(u(t),∂Ω)=0,
with d the Euclidean distance, and lie on the energy surface
1/2|u̇|^2-U(u)=0.
We allow that the boundary ∂Ω of Ω contains a
finite set P of critical
points of U and assume
H_1 If Γ∈{Γ_1,…,Γ_N} has positive
diameter and p∈ P∩Γ then p is a hyperbolic critical
point of U.
If Γ has positive diameter, then hyperbolic critical points
p∈Γ correspond to saddle-center equilibrium points in the
zero energy level of the Hamiltonian system associated to
(<ref>). These points are organizing centers of complex
dynamics, see <cit.>.
Note that 𝐇_1 does not exclude that some of the
Γ_j reduce to a singleton, say {p}, for some p∈ P. In
this case nothing is required on the behavior of U in a neighborhood
of p aside from being C^2.
A comment on 𝐇 and 𝐇_s is in order. If P is
nonempty u≡ p for p∈ P is a constant solution of
(<ref>) that satisfies (<ref>) and (<ref>). To
avoid trivial solutions of this kind we require N≥
2 in 𝐇, and look for solutions that connect different
components of ∂Ω. In 𝐇_s we do not exclude
that ∂Ω is connected (N=1) and avoid trivial solutions
by restricting to a symmetric context and to solutions that pass through
0.
We prove the following results.
Assume that 𝐇 and 𝐇_1 hold. Then for each
Γ_-∈{Γ_1,…,Γ_N} there exist
Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-} and a
map u^*:(T_-,T_+)→Ω, with -∞≤ T_-<T_+≤
+∞, that satisfies (<ref>), (<ref>) and
lim_t→ T_±d(u^*(t),Γ_±)=0.
Moreover, T_->-∞ (resp. T_+<+∞) if and only if
Γ_- (resp. Γ_+)
has positive diameter. If T_->-∞ it results
lim_t→ T_-u^*(t)= x_-,
lim_t→ T_-u̇^*(t)=0,
for some x_-∈Γ_-∖ P. An analogous statement
holds if T_+<+∞.
Assume that 𝐇_s and 𝐇_1 hold. Then there exist
Γ_+∈{Γ_1,…,Γ_N} and a map
u^*:(0,T_+)→Ω, with 0<T_+≤ +∞, that satisfies
(<ref>), (<ref>) and
lim_t→ T_+d(u^*(t),Γ_+)=0.
Moreover, T_+<+∞ if and only if Γ_+ has positive diameter. If
T_+<+∞ it results
lim_t→ T_+u^*(t)=x_+,
lim_t→ T_+u̇^*(t)=0,
for some x_+∈Γ_+∖ P.
We list a few straightforward consequences of Theorems <ref> and
<ref>.
Theorem <ref> implies that, if ∂Ω=P, given p_-∈
P there is p_+∈ P∖{p_-} and a heteroclinic connection
between p_- and p_+, that is a solution u^*:→^n of
(<ref>) and (<ref>) that satisfies
lim_t→±∞u^*(t)=p_±.
The problem of the existence of heteroclinic connections between two
isolated zeros p_± of a non-negative potential has been recently
reconsidered by several authors. In <cit.> existence was established
under a mild monotonicity condition on U near p_±. This
condition was removed in <cit.>, see also <cit.>. The most
general results, equivalent to the consequence of Theorem <ref>
discussed in Section <ref>, were recently obtained in <cit.> and in
<cit.>, see also <cit.>.
All these papers establish existence by a variational
approach. In <cit.>, <cit.> and <cit.> by minimizing the
action functional, and in <cit.> and
<cit.> by minimizing the Jacobi functional.
Theorem <ref> implies that,
if Γ_-={p} for some p∈ P and the elements of
{Γ_1,…,Γ_N}∖{Γ_-} have all positive
diameter, there exists a nontrivial orbit homoclinic to p that satisfies (<ref>), (<ref>).
Let v^*:→Ω∪{x_+} be the extension
defined by
v^*(T_++t)=u^*(T_+-t), t∈(0,+∞), v^*(T_+)=x_+,
of the solution u^*:(-∞,T_+)→Ω given by
Theorem <ref>.
The map v^* so defined is a smooth non-constant solution of
(<ref>) that satisfies
lim_t→±∞v^*(t)=p.
Theorem <ref> implies that, if all the sets
Γ_1,…,Γ_N have positive diameter, given
Γ_-∈{Γ_1,…,Γ_N}, there exist
Γ_+∈{Γ_1,…,Γ_N}∖{Γ_-} and a
periodic solution v^*:→Ω of (<ref>) and
(<ref>) that oscillates between Γ_- and
Γ_+. This solution has period T=2(T_+-T_-).
The solution v^* is the T-periodic extension of the map
w^*:[T_-,2T_+-T_-]→Ω defined by w^*(t)=u^*(t) for
t∈(T_-,T_+), where u^* is given by Theorem <ref>, and
w^*(T_±)=x_±,
w^*(T_++t)=u^*(T_+-t), t∈ (0,T_+-T_-].
The problem of existence of heteroclinic, homoclinic and periodic
solutions of (<ref>), in a context similar to the one considered
here, was already discussed in <cit.> where ∂Ω is
allowed to include continua of critical points. Our result concerning
periodic solutions extends a corresponding result in <cit.> where
existence was established under the assumption that P=∅.
The following result is a direct consequence of Theorem <ref>.
Theorem <ref> implies that, if all the sets Γ_1,…,Γ_N have positive diameter,
there exists Γ_+∈{Γ_1,…,Γ_N} and a
periodic solution v^*:→Ω of (<ref>) and
(<ref>) that satisfies
v^*(-t) = -v^*(t) for all t.
This solution has period
T=4T_+, with T_+ as in Theorem <ref>.
The solution v^* is the T-periodic extension of the map
w^*:[-2T_+,2T_+]→Ω defined by w^*(t) = u^*(t) for
t∈(0,T_+), where u^* is given by Theorem <ref>, and by
w^*(t) = -w^*(-t), t∈ (-T_+,0),
w^*(0) = 0, w^*(± T_+) = ± x_+,
w^*(T_++t) = w^*(T_+-t), t∈(0,T_+],
w^*(-T_++t) = w^*(-T_+-t), t∈[-T_+,0).
In particular the solution oscillates between x_+ and -x_+ and
this is true also when ∂Ω is connected (N=1). | null | null | null | null | null |
http://arxiv.org/abs/1701.07858v1 | 20170126195243 | Characterization of the quantum phase transition in a two-mode Dicke model for different cooperation numbers | [
"L. F. Quezada",
"E. Nahmad-Achar"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de
México, Apartado Postal 70-543, 04510 Ciudad de México, México.
[email protected]
Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de
México, Apartado Postal 70-543, 04510 Ciudad de México, México.
We show how variational states that approximate the ground state of a
system can be employed to study a multi-mode Dicke model.
One of the main contributions of this work is the introduction of
a not very commonly used quantity, the cooperation number, and the
study of its influence on the behavior of the system, paying particular
attention to the quantum phase transitions and the accuracy of the
used approximations. We also show how these phase transitions affect
the dependence of the expectation values of some of the observables
relevant to the system and the entropy of entanglement with respect
to the energy difference between atomic states and the coupling strength
between matter and radiation, thus characterizing the transitions
in different ways.
Characterization of the quantum phase transition in a two-mode Dicke
model for different cooperation numbers
L. F. Quezada and E. Nahmad-Achar
§ INTRODUCTION
Quantum phase transitions (QPTs) are informally seen as sudden, drastic
changes in the physical properties of the ground state of a system
at zero temperature due to the variation of some parameter involved
in the modeling Hamiltonian. One model of particular interest for
the study of such phenomena is the Dicke model <cit.>, as it
describes, in a simplified way (electric dipole approximation), the
interaction between matter and electromagnetic radiation. In 1973,
Hepp and Lieb <cit.>, and Wang and Hioe <cit.>
first theoretically proved the existence of a second-order QPT in
the Dicke model. Wang and Hioe also treated the multi-mode radiation case, where they reduce it to a single-mode case by using an effective coupling constant. To date, this QPT has been experimentally observed
in a Bose-Einstein Condensate coupled to an optical cavity <cit.>
and it has been shown to be relevant to quantum information and quantum
computing <cit.>.
Even though the formal definition of a QPT requires us to compute
the ground state's energy as a function of any desired parameter in
order to find its transition values, one of the main contributions
of this work is to show how the QPT in the Dicke model influences
the behavior of other quantities relevant to the system, thus characterizing
the transition in different, simpler ways.
§.§.§ Quantum Phase Transitions
The formal definition of the concept of “quantum phase” that we
will be using throughout this paper is that of an open region ℛ⊆ℝ^ℓ
where the ground state's energy ℰ_0, as a function
of ℓ parameters involved in the modeling Hamiltonian, is analytic.
Thus a QPT is identified by the boundary ∂ℛ of
the region at which ∂^nℰ_0/∂ x^n
is discontinuous for some n (known as the order of the transition).
Notice that in the previous definition, for the sake of generality,
we did not consider the thermodynamic limit, as it has been shown
that interesting phenomena regarding QPTs occur even for a finite
number of particles <cit.>.
§.§.§ Modeling Hamiltonian
The Hamiltonian (Dicke's Hamiltonian) describing the interaction,
in a dipolar approximation, between N two-level identical atoms (same
energy difference between the two levels) and one-mode of an electromagnetic
field in an ideal cavity, has the expression (ħ=1)
H_D=ω_AJ_z+Ω a^†a-γ/√(N)(J_-+J_+)(a+a^†).
Here, ω_A is the energy difference between the atomic levels,
Ω is the frequency of the field's mode, γ is the dipolar
coupling constant, J_z, J_-, J_+ are the collective
spin operators and a, a^† are the annihilation and creation
operators of the harmonic oscillator. The multi-mode Hamiltonian is
obtained summing over the number k of modes <cit.>, and
has the expression
H=ω_AJ_z+∑_=1^kΩ_a_^†a_-1/√(N)∑_=1^kγ_(J_-+J_+)(a_+a_^†).
The k modes of the electromagnetic field are described in terms
of annihilation and creation operators for each mode a_,
a_^†, acting on the tensor product of k copies
of the Fock space
and satisfying the commutation relations
[a_,a_^†]=δ_, [a_,a_]=[a_^†,a_^†]=0.
A two-level atom is described using the 1/2-spin matrices
S_z=1/2σ_z, S_±=1/2(σ_x± iσ_y)
(σ_x, σ_y and σ_z being the Pauli matrices),
which act on a two-dimensional complex Hilbert space ℂ^2
and satisfy the commutation relations
[S_+,S_-]=2S_z, [S_z,S_±]=± S_±.
When considering a system of N two-level atoms, we use the collective
spin operators J_z, J_-, J_+ defined as
J_♢=S_♢⊗ I_2^⊗(N-1)+I_2⊗ S_♢⊗ I_2^⊗(N-2)
+⋯+I_2^⊗(N-2)⊗ S_♢⊗ I_2+I_2^⊗(N-1)⊗ S_♢
where
I_2 is the identity operator on ℂ^2 and ♢∈{ z,-,+}.
These collective spin operators satisfy the commutation relations
[J_+,J_-]=2J_z, [J_z,J_±]=± J_±
and act, in principle, on the complex Hilbert space (ℂ^2)^⊗ N;
however, working with this space is physically equivalent to studying
a system of N fully distinguishable atoms, which we don't usually
have in the experimental setups used in the study of the QPT in the
Dicke model. To overcome this issue, we must use the common set of
eigenvectors {|j,m⟩} of the two
commuting observables J_z and J^2=1/2(J_+J_-+J_-J_+)+J_z^2,
where the label j is limited to the values j∈{ r,r+1,…,N/2}
(r=0 for even N and r=1/2 for odd N) and the label m∈ℤ
is constricted by |m|≤ j. These vectors do not form
a basis of (ℂ^2)^⊗ N for N>2,
as the dimension of their linear span is
dim{span{{|j,m⟩} _|m|≤ j^j=r,…,N/2}} =∑_j=r^N/2(2j+1)≤2^N.
We will denote by ℋ_A the subspace of (ℂ^2)^⊗ N
generated by the states {|j,m⟩} _|m|≤ j^j=r,…,N/2.
There are two main results concerning the states {|j,m⟩} _|m|≤ j^j=r,…,N/2
and the space ℋ_A: the first comes from noticing that
[H,J^2]=0, which means that the label j of the
eigenvalues of J^2 remains constant during the system's evolution;
the second is the decomposition ℋ_A=⊕_j=r^N/2ℋ_j,
where each ℋ_j is the subspace of dimension dim{ℋ_j} =2j+1
generated by the states {|j,m⟩} _|m|≤ j
with a fixed j. In this treatment, in order to study indistinguishable
atoms, we are ignoring the multiplicities g(j) of the irreducible
representations of SU(2), i.e. the number of times that
each ℋ_j appears in the full decomposition (ℂ^2)^⊗ N=⊕_j=r^N/2g(j)ℋ_j.
To make it clear that the space ℋ_A is the one we must
work with when indistinguishable atoms are considered we should inquire
into the physical interpretation of the labels j and m. In order
to give a physical interpretation to the label j we must notice
that the energy of the atomic system is bounded by ± jω
independently of the number of atoms N (but with the restriction
j≤N/2), this leads us to interpret the quantity 2j
as the effective number of atoms in the system and define it as the
cooperation number. To make the notion of the cooperation
number more intuitive, Dicke, in his original paper <cit.>,
compares a state with j=0, which exists only for an even number
of atoms, with a classical system of an even number of oscillators
swinging in pairs oppositely phased. The interpretation of the label
m is clear from the definition of J_z: m=1/2(n_e-n_g),
where n_e and n_g are the number of atoms in the excited
and ground states, respectively.
In this paper we restrict our analysis to the space ℋ_A,
as it allows us to choose j as an initial condition (which will
remain constant) and work in ℋ_j, where the atoms are
indistinguishable.
§ METHODOLOGY
There have been various contributions to the study of the phase transition
in the Dicke model (and other two-level models) <cit.>
and different approaches such as Husimi function analysis <cit.>,
entropic uncertainty relations <cit.> and energy surface minimization
<cit.>, have been used
for its investigation.
In this work we use the energy surface minimization method, which
consists on minimizing the surface that is obtained by taking the
expectation value of the modeling Hamiltonian with respect to some
trial variational state. The strength of this method lies on the choice
of the trial state, as it is the latter, after minimization, the one
that will be modeling the ground state of the system.
Here we take a variational approach for both matter and radiation
fields, and show how to calculate the QPT of the system modeled by
the Hamiltonian H given in eq. (<ref>) via four means:
* Using a tensor product of Heisenberg-Weyl HW(1) coherent states for
each mode of the electromagnetic field and SU(2) coherent states for
the atomic field as trial states, and analytically minimizing the
obtained energy surface with respect to its parameters.
* Using a projection operator on HW(1) coherent states and SU(2) coherent
states to obtain trial states that preserve the parity symmetry of
the Hamiltonian with respect to the total excitation number of the
system (symmetry adapted states), and numerically minimize
the obtained energy surface with respect to its parameters.
* Using symmetry adapted states, as in (2) above, to obtain the energy
surface and “minimize” it with the minimizing parameters obtained
in (1) above, thus allowing us to have analytic expressions for the
ground state.
* Numerically diagonalizing the Hamiltonian, which gives us the exact
quantum solution (a minimal numerical sketch of this step is given right
after this list).
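To make item 4 above concrete, here is a minimal numerical sketch of the exact diagonalization for k=2 modes (this is our own illustrative implementation, not the authors' code: the photon cutoff nmax, the parameter values, the choice N=2j and the neighboring-coupling step used later for the fidelity F are all assumptions made for the example):

```python
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import eigsh

def dicke2_hamiltonian(j, wA, Om, gam, nmax):
    """Two-mode Dicke Hamiltonian on H_j x Fock(nmax) x Fock(nmax),
    with N = 2j; nmax is the photon-number cutoff per mode."""
    N = 2 * j
    m = np.arange(-j, j + 1.0)
    Jz = diags(m)
    Jp = diags(np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1)), -1)  # J_+
    Jx2 = Jp + Jp.T                                   # J_- + J_+
    a = diags(np.sqrt(np.arange(1.0, nmax + 1)), 1)   # annihilation
    num = diags(np.arange(0.0, nmax + 1))             # a^dag a
    x = a + a.T                                       # a + a^dag
    Ia, If = identity(2 * j + 1), identity(nmax + 1)
    H = (wA * kron(kron(Jz, If), If)
         + Om[0] * kron(kron(Ia, num), If)
         + Om[1] * kron(kron(Ia, If), num)
         - gam[0] / np.sqrt(N) * kron(kron(Jx2, x), If)
         - gam[1] / np.sqrt(N) * kron(kron(Jx2, If), x))
    return H.tocsr()

def ground_state(H):
    val, vec = eigsh(H, k=1, which="SA")   # lowest eigenpair
    return val[0], vec[:, 0]

# Illustrative scan: j = 5, resonant modes, fixed gamma_1, varying gamma_2.
j, wA, Om, g1, nmax = 5, 1.0, (1.0, 1.0), 0.3, 12
for g2 in (1.40, 1.45, 1.50, 1.55, 1.60):
    E0, psi0 = ground_state(dicke2_hamiltonian(j, wA, Om, (g1, g2), nmax))
    _, psi1 = ground_state(dicke2_hamiltonian(j, wA, Om, (g1, g2 + 0.01), nmax))
    F = abs(psi0 @ psi1) ** 2        # fidelity between neighboring states
    print(f"gamma_2 = {g2:.2f}   E0 = {E0:.4f}   F = {F:.6f}")
```

A dip of F along such a scan is the signature used below to locate the exact QPT; in practice nmax must be increased until the results stop changing.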
§.§.§ Coherent states (CS)
For each mode of the electromagnetic field the annihilation and creation
operators a_ℓ and a_ℓ^†, appearing in
the modeling Hamiltonian H, satisfy the commutation relations (<ref>)
of the Lie algebra generators of the Heisenberg-Weyl group HW(1);
hence, a natural choice of a trial state for the radiation field is
a tensor product of k (number of modes) coherent states of HW(1)
|α̅⟩ :=|α_1⟩⊗⋯⊗|α_k⟩ ,
where each |α_ℓ⟩ is defined as
|α_ℓ⟩ :=e^α_ℓa_ℓ^†-α_ℓ^*a_ℓ|0_ℓ⟩ =e^-|α_ℓ|^2/2∑_ν_ℓ=0^∞α_ℓ^ν_ℓ/√(ν_ℓ!)|ν_ℓ⟩ .
Furthermore, the commutation relations of the collective spin operators
J_-, J_+ and J_z (<ref>) are the same as the
ones of the Lie algebra generators of the special unitary group SU(2).
Thus, analogously as for the radiation field, we use the coherent
states of SU(2)
|ξ⟩ _j:=|υtan|υ|/|υ|⟩ _j:=e^υ J_+-υ^*J_-|j,-j⟩
=1/(1+|ξ|^2)^j∑_m=0^2j\binom{2j}{m}^1/2ξ^m|j,m-j⟩
as trial states for the matter field.
§.§.§ Symmetry adapted states (SAS)
The modeling Hamiltonian we are considering has a parity symmetry
given by [e^iπΛ,H]=0, where Λ=√(J^2+1/4)-1/2+J_z+∑_ℓ=1^ka_ℓ^†a_ℓ
is the excitation number operator with eigenvalues λ=j+m+∑_ℓ=1^kν_ℓ.
This symmetry allows us to classify the eigenstates of H in terms
of the parity of the eigenvalues λ; however, as states with
opposite symmetry are strongly mixed by the CS defined in the previous
section, we should then adapt this symmetry to the CS by projecting
them with the operator P_±=1/2(I± e^iπΛ),
i.e.
|α̅,ξ_j⟩ _±:=𝒩_±P_±|α̅⟩⊗|ξ⟩ _j
=𝒩_±(|α̅⟩⊗|ξ⟩ _j±|-α̅⟩⊗|-ξ⟩ _j),
with
𝒩_±=(2±2E(-cosθ)^2j)^-1/2
the normalization factors for the even (+) and odd (-) states (where
E=exp{-2∑_ℓ=1^k|α_ℓ|^2}).
As we are interested in the ground state of the system, which has
an even parity, we only focus on the state |α̅,ξ_j⟩ _+.
§.§.§ Entropy of entanglement (S_ε)
Entropy of entanglement is defined for a bipartite system as the Von
Neumann entropy of either of its reduced states, that is, if ρ
is the density matrix of a system in a Hilbert space ℋ=ℋ_1⊗ℋ_2,
its entropy of entanglement is defined as
S_ε:=-Tr{ρ_1logρ_1} =-Tr{ρ_2logρ_2} ,
where ρ_1=Tr_2{ρ} and ρ_2=Tr_1{ρ}.
Our Hamiltonian H models a bipartite system formed by matter and
radiation subsystems, which means that their entropy of entanglement
can be used to see the influence of the QPT on its behavior; this
we do below.
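A minimal sketch of its evaluation for a pure state (our own illustrative implementation; the Bell-like test state and the tolerance are arbitrary choices) uses the fact that the eigenvalues of ρ_1 are the squared Schmidt coefficients of the reshaped state vector:

```python
import numpy as np

def entanglement_entropy(psi, dim_matter, dim_field):
    """Von Neumann entropy of either reduced state of the pure
    bipartite state psi on H_matter (x) H_field."""
    M = psi.reshape(dim_matter, dim_field)    # coefficient matrix
    s = np.linalg.svd(M, compute_uv=False)    # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-14]                          # drop numerical zeros
    return float(-(p * np.log(p)).sum())

# Sanity check: a Bell-like state on 2 x 2 gives S = log 2.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
print(entanglement_entropy(psi, 2, 2), np.log(2.0))
```

For a two-mode ground state ordered as matter ⊗ field one would take dim_matter = 2j+1 and dim_field = (nmax+1)^2, with nmax the photon cutoff used in the numerical diagonalization.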
§.§.§ Fidelity between neighboring states (F)
Fidelity is a measure of the distance
between two quantum states; given |ϕ⟩ and
|φ⟩ it is defined as
F(ϕ,φ):=|⟨ϕ|φ⟩|^2.
Across a QPT the ground state of a system suffers a sudden, drastic
change, thus it is natural to expect a drop in the fidelity between
neighboring states near the transition. This drop has been, in fact,
already shown to happen <cit.> for the case 2j=N.
We study it here also, and its behavior with the cooperation number.
§ RESULTS
Writing the complex labels α_ℓ and ξ as α_ℓ=q_ℓ+ip_ℓ
and ξ=tan(θ/2)e^iϕ, with q_ℓ,p_ℓ∈ℝ,
θ∈[0,π), ϕ∈[0,2π), the
CS's energy surface is obtained by taking the expectation value of
the modeling Hamiltonian H with respect to the state |α̅⟩⊗|ξ⟩ _j,
and has the form
ℋ_j,CS(q_ℓ,p_ℓ,θ,ϕ):=⟨α̅|⊗⟨ξ|_j H|α̅⟩⊗|ξ⟩ _j
=-jω_Acosθ+∑_ℓ=1^kΩ_ℓ(q_ℓ^2+p_ℓ^2)-4j/√(N)sinθcosϕ∑_ℓ=1^kγ_ℓq_ℓ.
The critical points which minimize it are then found to be
θ_c=q_ℓc=p_ℓc=0,   for ω_A≥8j/N∑_ℓ=1^kγ_ℓ^2/Ω_ℓ,
and
cosθ_c=(Nω_A/8j)(∑_ℓ=1^kγ_ℓ^2/Ω_ℓ)^-1,   ϕ_c=0,π,
q_ℓc=(2jγ_ℓ/(Ω_ℓ√(N)))cosϕ_c sinθ_c,   p_ℓc=0,
for ω_A<8j/N∑_ℓ=1^kγ_ℓ^2/Ω_ℓ.
Substituting these values into (<ref>) we obtain the energy
of the coherent ground state as a function of the Hamiltonian parameters,
ℰ_CS(ω_A,γ_ℓ) = { -jω_A ,               δ≥1,
                   -jω_A/2 (1/δ+δ) ,    δ<1,
where we have defined δ=Nω_A/(8jς)
with ς=∑_ℓ=1^kγ_ℓ^2/Ω_ℓ.
Using the information of this coherent ground state we also obtain
the expectation values of the atomic relative population operator
J_z and of the number-of-photons-of-mode-ℓ operator ν_ℓ:=a_ℓ^†a_ℓ:
⟨ J_z⟩_CS(ω_A,γ_ℓ) = { -j ,     δ≥1,
                         -jδ ,    δ<1,
⟨ν_ℓ⟩_CS(ω_A,γ_ℓ) = { 0 ,                                δ≥1,
                        (γ_ℓ^2/Ω_ℓ^2)(jω_A/2ς)(1/δ-δ) ,   δ<1.
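These closed forms make the second-order character of the transition easy to exhibit numerically; the following sketch (with arbitrary values of j, ω_A and grid spacing, chosen only for illustration) evaluates ℰ_CS on a grid in δ and displays the jump of its second derivative at δ=1:

```python
import numpy as np

# Coherent-state ground energy as a function of delta = N wA / (8 j varsigma).
def E_CS(delta, j, wA):
    return np.where(delta >= 1.0, -j * wA,
                    -0.5 * j * wA * (1.0 / delta + delta))

j, wA = 9.0, 1.0
d = np.linspace(0.2, 2.0, 3601)
h = d[1] - d[0]
E2 = np.gradient(np.gradient(E_CS(d, j, wA), h), h)   # d^2 E / d delta^2
i = np.searchsorted(d, 1.0)
print("E'' just below delta=1:", E2[i - 40])   # ~ -j*wA/delta^3
print("E'' just above delta=1:", E2[i + 40])   # ~ 0
```

The first derivative of ℰ_CS is continuous across δ=1, while the second derivative jumps, in agreement with a second-order QPT.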
Analogously as for the CS's energy surface, the SAS's energy surface
is obtained by taking the expectation value of the modeling Hamiltonian
H with respect to the state |α̅,ξ_j⟩ _+,
and has the more complicated form
ℋ_j,SAS(q_ℓ,p_ℓ,θ,ϕ):=⟨α̅,ξ_j|_+H|α̅,ξ_j⟩ _+
=(1+E(-cosθ)^2j-2/1+E(-cosθ)^2j)(-jω_Acosθ)
+(1-E(-cosθ)^2j/1+E(-cosθ)^2j)∑_ℓ=1^kΩ_ℓ(q_ℓ^2+p_ℓ^2)
-4j/√(N)sinθ∑_ℓ=1^k{cosϕγ_ℓq_ℓ+E(-cosθ)^2j-1sinϕγ_ℓp_ℓ/1+E(-cosθ)^2j}.
As a first approximation, we may substitute the critical values obtained
for the CS's energy surface into (<ref>); we thus obtain a trial
state which approximates the lowest symmetry-adapted energy state,
and with respect to which we evaluate the expectation values of the
observables H, J_z and ν_ℓ:
ℰ_SAS(ω_A,γ_ℓ) = { -jω_A ,                                                    δ≥1,
  -jω_A[δ(1+ε(-δ)^{2j-2})/(1+ε(-δ)^{2j}) + (1/2)(1/δ-δ)] ,                    δ<1,
⟨ J_z⟩_SAS(ω_A,γ_ℓ) = { -j ,                                 δ≥1,
  -jδ(1+ε(-δ)^{2j-2})/(1+ε(-δ)^{2j}) ,                        δ<1,
⟨ν_ℓ⟩_SAS(ω_A,γ_ℓ) = { 0 ,                                                   δ≥1,
  (γ_ℓ^2/Ω_ℓ^2)(jω_A/2ς)(1/δ-δ)(1-ε(-δ)^{2j})/(1+ε(-δ)^{2j}) ,                δ<1,
where
ε=exp{-jω_Aσ/ς}
with σ=∑_ℓ=1^kγ_ℓ^2/Ω_ℓ^2.
Of course, we can minimize eq. (<ref>) numerically for the
SAS and obtain the expectation value of the relevant matter and field
observables.
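For concreteness, a minimal sketch of such a minimization for k=2 and N=2j (with illustrative parameter values, one arbitrary starting point and our own choice of optimizer; in practice several starting points should be compared to avoid local minima):

```python
import numpy as np
from scipy.optimize import minimize

def H_SAS(params, j, wA, Om, gam, N):
    """SAS (even-parity) energy surface for k = 2 modes."""
    th, ph, q1, p1, q2, p2 = params
    q, p = np.array([q1, q2]), np.array([p1, p2])
    E = np.exp(-2.0 * np.sum(q**2 + p**2))
    c = -np.cos(th)
    den = 1.0 + E * c**(2 * j)
    A = (1.0 + E * c**(2 * j - 2)) / den
    B = (1.0 - E * c**(2 * j)) / den
    coupling = np.sum(np.cos(ph) * gam * q
                      + E * c**(2 * j - 1) * np.sin(ph) * gam * p) / den
    return (-j * wA * np.cos(th) * A
            + B * np.sum(Om * (q**2 + p**2))
            - 4.0 * j / np.sqrt(N) * np.sin(th) * coupling)

j, wA = 5, 1.0
Om, gam = np.array([1.0, 1.0]), np.array([0.3, 1.5])   # illustrative values
res = minimize(H_SAS, x0=[0.5, 0.0, 0.5, 0.0, 0.5, 0.0],
               args=(j, wA, Om, gam, 2 * j), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000})
print("E_SASn =", res.fun, " at (theta, phi, q1, p1, q2, p2) =", res.x)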
In our numerical analysis we study the case with two modes of the
radiation field, as it is the maximum number of orthogonal modes that
can be present in a 3D cavity under the restrictions that the modes
interact with the electric dipole moment of the atoms and that they be in
resonance with the frequency associated with the energy difference
between the two levels of the atoms. The latter restriction is imposed
simply to obtain the maximum transition probability between states.
For the exact quantum solution we must resort to numerical diagonalization
of the Hamiltonian and use the lowest eigenstate to compute the expectation
values of the relevant observables.
The results, properties of the ground state related to the CS, those
related to the SAS using the critical points of the CS (which have
the advantage of also providing analytical solutions), those of the
SAS minimized numerically and those of the quantum solution through
numerical diagonalization, are shown in figures <ref> - <ref>
and are discussed below.
One advantage of having analytical solutions is, of course, that the
order of the transition may be easily found. Equations (<ref>)
and (<ref>) show a second-order QPT at δ=Nω_A/8jς=1
with the CS and SAS using CS's minima (SASc) approximations. In figure
<ref> it can be seen that the data of the SAS using numerical
minimization (SASn) has a small discontinuity (the QPT) at γ_2≈1.485
for j=5 and γ_2≈1.015 for j=9, while in figure
<ref> this discontinuity is at ω_A≈0.975 for
j=5 and ω_A≈1.965 for j=9. Note that the SASn
solution always approximate better the exact quantum result, as the
cooperation number increases this approximation gets better, in fact,
for 2j=18=N the loci of the separatrix between the normal and collective
regions for the quantum and SASn solutions are indistinguishable (except
in the zoomed inset). The true loci of the QPT may be found through
the fidelity: figures <ref> and <ref> show the fidelity
between neighboring states of the quantum solution, where the exact
QPT is characterized by the minimum, which is localized at γ_2≈1.550
for j=5, γ_2≈1.031 for j=9 in figure <ref>;
and ω_A≈0.817 for j=5, ω_A≈1.870
for j=9 in figure <ref>.
The discrepancies between the transition values of the SASc approximation
and the exact quantum solution become obvious when looking at figures
<ref> and <ref>, where the fidelity between SASc and
the quantum solution drops (and oscillates) in a vicinity of the separatrix.
Therefore, we conclude that SASc offer a good approximation (with
an analytic expression) to the exact quantum solution far from the
QPT for low cooperation numbers, but as j→∞, the
interval where the SASc fail to reproduce the correct behavior, becomes
smaller.
Figures <ref> and <ref> show the fidelity drop at
the separatrix for the SASn. The resemblance to figures <ref>
and <ref> is uncanny, showing the benefits of restoring the
Hamiltonian symmetry into the trial variational states. This improvement
comes with the disadvantage of losing the analytic expression, but
still has an advantage over the quantum solution: the computational
time. SASn are obtained by numerically minimizing a real function,
which is far easier to do (computationally speaking) than numerically
diagonalizing the Hamiltonian matrix.
Figures <ref> - <ref> show the comparison between the
different approximations to the ground state: CS, SASc, SASn and quantum
solution. We show the behavior of ℰ:=⟨ H⟩,
⟨ J_z⟩ and ⟨ν_ℓ⟩
as functions of the atomic frequency ω_A and one of the
coupling constants γ_2, for different cooperation numbers.
It can be noticed that the discontinuity in the second derivative
of the energy (as modeled with CS and SASc) translates into a discontinuity
in the first derivative of ⟨ J_z⟩ and
⟨ν_ℓ⟩, thus characterizing the
QPT by means of an abrupt change in the expectation values of the
observables. In general, it can be observed that the four methods
(CS, SASc, SASn and quantum solution) converge in the limit δ→0,
where the case j→∞ is particularly interesting as
the interval around the QPT, where all the approximations fail to
reproduce the correct behavior, becomes smaller.
It is worth mentioning the significance and importance of figures
<ref> and <ref> as they show aspects of the multi-mode
Dicke model which are not present in the single-mode case. In figure
<ref> it is shown how the different modes of radiation (orthogonal
in principle) interact through the matter field, analogously to the way
different atoms interact through the radiation field.
On the other hand, figure <ref> shows (pictorially) the phase
diagrams of the two-mode system, in which it can be observed that
any two points in the super-radiant region can be joined by a trajectory
that does not cross the normal region, a characteristic that the single-mode
system does not have.
Figures <ref> and <ref> show the comparison between
SASc, SASn and the quantum solution for the entropy of entanglement
S_ε as a function of the atomic frequency ω_A
and one of the coupling constants γ_2, using different cooperation
numbers. A characterization of the QPT can be made by observing that
the entropy of entanglement obtained using the quantum solution shows
a maximum at the transition, an attribute that SASc and SASn approximations
fail to reproduce.
§ DISCUSSION AND CONCLUSIONS
From the figures presented we conclude that the SASc offer a good approximation
(with an analytic expression) to the exact quantum solution far from
the QPT for low cooperation numbers, but as j→∞
the interval where the SASc fail to reproduce the correct behavior
becomes smaller.
On the other hand, the SASn provide a better approximation to the
quantum solution. This improvement comes with the disadvantage of
losing the analytic expression, but still has the advantage over the
quantum solution of the computational time. SASn are obtained by numerically
minimizing a real function, which is far easier to do (computationally
speaking) than numerically diagonalizing the Hamiltonian matrix.
A characterization of the QPT can be made by looking at the entropy
of entanglement; that obtained using the quantum solution shows a
maximum at the transition, an attribute that SASc and SASn approximations
fail to reproduce.
The behavior of the expectation values of the relevant observables
of the system ⟨ H⟩, ⟨ J_z⟩
and ⟨ν_ℓ⟩ is also affected by
the QPT (figures <ref> - <ref>), thus allowing us to
characterize the QPT by means of its influence over the observables.
In general, it can be observed that the four methods (CS, SASc, SASn
and quantum solution) converge in the limit δ→0,
where the case j→∞ is particularly interesting as
the interval around the QPT, where all the approximations are weaker,
becomes smaller.
In conclusion, we have shown how the use of variational states to
approximate the ground state of a system can be useful to characterize
the QPT in a multi-mode Dicke model using the expectation value of
the observables relevant to the system and the entropy of entanglement
between matter and radiation. We have also introduced a not very commonly
used quantity, the cooperation number, showing its influence on
the behavior of the system, paying particular attention to the QPT
and the accuracy of the used approximations. Some aspects of the multi-mode
Dicke model which are not present in the single-mode case were also
briefly discussed.
We thank R. López-Peña, O. Castaños and S. Cordero for their comments
and discussion. This work was partially supported by DGAPA-UNAM under
project IN101217. L. F. Q. thanks CONACyT-México for financial support
(Grant #379975).
key-1R. H. Dicke, http://journals.aps.org/pr/abstract/10.1103/PhysRev.93.99Phys. Rev. 93, 99 (1954).
key-2K. Hepp and E. H. Lieb, http://www.sciencedirect.com/science/article/pii/0003491673900390Ann. Phys. 76, 360 (1973).
key-3K. Hepp and E. H. Lieb, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.8.2517Phys. Rev. A 8, 2517 (1973).
key-4Y. K. Wang and F. T. Hioe, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.7.831Phys. Rev. A 7, 831 (1973).
key-5K. Baumann, C. Guerlin, F. Brennecke and T. Esslinger,
http://www.nature.com/nature/journal/v464/n7293/abs/nature09009.htmlNature (London) 464, 1301 (2010).
key-6D. Nagy, G. Kónya, G. Szirmai and P. Domokos, http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.130401Phys. Rev. Lett. 104, 130401 (2010).
key-7T. Brandes, http://www.sciencedirect.com/science/article/pii/S0370157304005496Phys. Rep. 408 , 315 (2005).
key-8G. Chen, Z. Chen and J. Liang, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.055803Phys. Rev. A 76 , 055803 (2007).
key-9N. Lambert, C. Emary and T. Brandes, http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.92.073602Phys. Rev. Lett. 92, 073602 (2004).
key-10N. Lambert, C. Emary and T. Brandes, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.71.053804Phys. Rev. A 71, 053804 (2005).
key-11E. Nahmad-Achar, S. Cordero, O. Castaños and R. López-Peña,
http://iopscience.iop.org/article/10.1088/0031-8949/90/7/074026Phys. Scr. 90, 074026 (2015).
key-12E. Nahmad-Achar, S. Cordero, R. López-Peña and O.
Castaños, http://iopscience.iop.org/article/10.1088/1751-8113/47/45/455301/meta;jsessionid=100FF83C0FD0D059102F9D188FAFA77E.ip-10-40-1-98J. Phys. A: Math. Theor. 47, 455301 (2014).
key-13S. Cordero, O. Castaños, R. López-Peña and E. Nahmad-Achar,
http://iopscience.iop.org/article/10.1088/1751-8113/46/50/505302J. Phys. A: Math. Theor. 46, 505302 (2013).
key-14J. Reslen, L. Quiroga and N. F. Johnson, http://iopscience.iop.org/article/10.1209/epl/i2004-10313-4/metaEurophys. Lett. 69, 1 (2005).
key-15P. Zanardi and N. Paunković, http://journals.aps.org/pre/abstract/10.1103/PhysRevE.74.031123Phys. Rev. E 74, 031123 (2006).
key-16H. Goto and K. Ichimura, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.77.053811Phys. Rev. A 77, 053811 (2008).
key-17C. Emary and T. Brandes, http://journals.aps.org/pre/abstract/10.1103/PhysRevE.67.066203Phys. Rev. E 67, 066203 (2003).
key-18C. Emary and T. Brandes, http://journals.aps.org/pre/abstract/10.1103/PhysRevE.67.066203Phys. Rev. Lett. 90, 044101 (2003).
key-19E. Romera, R. del Real and M. Calixto, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.85.053831Phys. Rev. A 85, 053831 (2012).
key-20E. Romera, M. Calixto and Á. Nagy, http://iopscience.iop.org/article/10.1209/0295-5075/97/20011/metaEurophys. Lett. 97, 20011 (2012).
key-21O. Castaños, R. López-Peña, E. Nahmad-Achar, J. G.
Hirsch, E. López-Moreno and J. E. Vitela, http://iopscience.iop.org/article/10.1088/0031-8949/79/06/065405/metaPhys. Scr. 79, 065405 (2009).
key-22O. Castaños, E. Nahmad-Achar, R. López-Peña and J.
G. Hirsch, http://iopscience.iop.org/article/10.1088/0031-8949/80/05/055401/metaPhys. Scr. 80, 055401 (2009).
key-23O. Castaños, E. Nahmad-Achar, R. López-Peña, and
J. G. Hirsch, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.83.051601Phys. Rev. A 83, 051601 (2011).
key-24O. Castaños, E. Nahmad-Achar, R. López-Peña, and
J. G. Hirsch, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.84.013819Phys. Rev. A 84, 013819 (2011).
key-25O. Castaños, E. Nahmad-Achar, R. López-Peña, and
J. G. Hirsch, http://journals.aps.org/pra/abstract/10.1103/PhysRevA.86.023814Phys. Rev. A 86, 023814 (2012).
key-26J. G. Hirsch, O. Castaños, E. Nahmad-Achar and R.
López-Peña, http://iopscience.iop.org/article/10.1088/0031-8949/2013/T153/014033/metaPhys. Scr. 2013, 014033 (2013).
| Quantum phase transitions (QPTs) are informally seen as sudden, drastic
changes in the physical properties of the ground state of a system
at zero temperature due to the variation of some parameter involved
in the modeling Hamiltonian. One model of particular interest for
the study of such phenomena is the Dicke model <cit.>, as it
describes, in a simplified way (electric dipole approximation), the
interaction between matter and electromagnetic radiation. In 1973,
Hepp and Lieb <cit.>, and Wang and Hioe <cit.>
first theoretically proved the existence of a second-order QPT in
the Dicke model. Wang and Hioe also treated the multi-mode radiation case, where they reduce it to a single-mode case by using an effective coupling constant. To date, this QPT has been experimentally observed
in a Bose-Einstein Condensate coupled to an optical cavity <cit.>
and it has been shown to be relevant to quantum information and quantum
computing <cit.>.
Even though the formal definition of a QPT requires us to compute
the ground state's energy as a function of any desired parameter in
order to find its transition values, one of the main contributions
of this work is to show how the QPT in the Dicke model influences
the behavior of other quantities relevant to the system, thus characterizing
the transition in different, simpler ways.
§.§.§ Quantum Phase Transitions
The formal definition of the concept of “quantum phase” that we
will be using throughout this paper is that of an open region ℛ⊆ℝ^ℓ
where the ground state's energy ℰ_0, as a function
of ℓ parameters involved in the modeling Hamiltonian, is analytic.
Thus a QPT is identified by the boundary ∂ℛ of
the region at which ∂^nℰ_0/∂ x^n
is discontinuous for some n (known as the order of the transition).
Notice that in the previous definition, for the sake of generality,
we did not consider the thermodynamic limit, as it has been shown
that interesting phenomena regarding QPTs occur even for a finite
number of particles <cit.>.
§.§.§ Modeling Hamiltonian
The Hamiltonian (Dicke's Hamiltonian) describing the interaction,
in a dipolar approximation, between N two-level identical atoms (same
energy difference between the two levels) and one-mode of an electromagnetic
field in an ideal cavity, has the expression (ħ=1)
H_D=ω_AJ_z+Ω a^†a-γ/√(N)(J_-+J_+)(a+a^†).
Here, ω_A is the energy difference between the atomic levels,
Ω is the frequency of the field's mode, γ is the dipolar
coupling constant, J_z, J_-, J_+ are the collective
spin operators and a, a^† are the annihilation and creation
operators of the harmonic oscillator. The multi-mode Hamiltonian is
obtained summing over the number k of modes <cit.>, and
has the expression
H=ω_AJ_z+∑_=1^kΩ_a_^†a_-1/√(N)∑_=1^kγ_(J_-+J_+)(a_+a_^†).
The k modes of the electromagnetic field are described in terms
of annihilation and creation operators for each mode a_,
a_^†, acting on the tensor product of k copies
of the Fock space
and satisfying the commutation relations
[a_,a_^†]=δ_, [a_,a_]=[a_^†,a_^†]=0.
A two-level atom is described using the 1/2-spin matrices
S_z=1/2σ_z, S_±=1/2(σ_x± iσ_y)
(σ_x, σ_y and σ_z being the Pauli matrices),
which act on a two-dimensional complex Hilbert space ℂ^2
and satisfy the commutation relations
[S_+,S_-]=2S_z, [S_z,S_±]=± S_±.
When considering a system of N two-level atoms, we use the collective
spin operators J_z, J_-, J_+ defined as
J_♢=S_♢⊗ I_2^⊗(N-1)+I_2⊗ S_♢⊗ I_2^⊗(N-2)
+⋯+I_2^⊗(N-2)⊗ S_♢⊗ I_2+I_2^⊗(N-1)⊗ S_♢
where
I_2 is the identity operator on ℂ^2 and ♢∈{ z,-,+}.
These collective spin operators satisfy the commutation relations
[J_+,J_-]=2J_z, [J_z,J_±]=± J_±
and act, in principle, on the complex Hilbert space (ℂ^2)^⊗ N;
however, working with this space is physically equivalent to studying
a system of N fully distinguishable atoms, which we don't usually
have in the experimental setups used in the study of the QPT in the
Dicke model. To overcome this issue, we must use the common set of
eigenvectors {|j,m⟩} of the two
commuting observables J_z and J^2=1/2(J_+J_-+J_-J_+)+J_z^2,
where the label j is limited to the values j∈{ r,r+1,…,N/2}
(r=0 for even N and r=1/2 for odd N) and the label m∈ℤ
is constricted by |m|≤ j. These vectors do not form
a basis of (ℂ^2)^⊗ N for N>2,
as the dimension of their linear span is
dim{span{{|j,m⟩} _|m|≤ j^j=r,…,N/2}} =∑_j=r^N/2(2j+1)≤2^N.
We will denote by ℋ_A the subspace of (ℂ^2)^⊗ N
generated by the states {|j,m⟩} _|m|≤ j^j=r,…,N/2.
There are two main results concerning the states {|j,m⟩} _|m|≤ j^j=r,…,N/2
and the space ℋ_A: the first comes from noticing that
[H,J^2]=0, which means that the label j of the
eigenvalues of J^2 remains constant during the system's evolution;
the second is the decomposition ℋ_A=,
where each ℋ_j is the subspace of dimension dim{ℋ_j} =2j+1
generated by the states {|j,m⟩} _|m|≤ j
with a fixed j. In this treatment, in order to study indistinguishable
atoms, we are ignoring the multiplicities g(j) of the irreducible
representations of SU(2), i.e. the number of times that
each ℋ_j appears in the full decomposition (ℂ^2)^⊗ N=.
To make it clear that the space ℋ_A is the one we must
work with when indistinguishable atoms are considered we should inquire
into the physical interpretation of the labels j and m. In order
to give a physical interpretation to the label j we must notice
that the energy of the atomic system is bounded by ± jω
independently of the number of atoms N (but with the restriction
j≤N/2), this leads us to interpret the quantity 2j
as the effective number of atoms in the system and define it as the
cooperation number. To make the notion of the cooperation
number more intuitive, Dicke, in his original paper <cit.>,
compares a state with j=0, which exists only for an even number
of atoms, with a classical system of an even number of oscillators
swinging in pairs oppositely phased. The interpretation of the label
m is clear from the definition of J_z: m=1/2(n_e-n_g),
where n_e and n_g are the number of atoms in the excited
and ground states, respectively.
In this paper we restrict our analysis to the space ℋ_A,
as it allows us to choose j as an initial condition (which will
remain constant) and work in ℋ_j, where the atoms are
indistinguishable. | null | There have been various contributions to the study of the phase transition
in the Dicke model (and other two-level models) <cit.>
and different approaches such as Husimi function analysis <cit.>,
entropic uncertainty relations <cit.> and energy surface minimization
<cit.>, have been used
for its investigation.
In this work we use the energy surface minimization method, which
consists on minimizing the surface that is obtained by taking the
expectation value of the modeling Hamiltonian with respect to some
trial variational state. The strength of this method lies on the choice
of the trial state, as it is the latter, after minimization, the one
that will be modeling the ground state of the system.
Here we take a variational approach for both matter and radiation
fields, and show how to calculate the QPT of the system modeled by
the Hamiltonian H given in eq. (<ref>) via four means:
* Using a tensor product of Heisenberg-Weyl HW(1) coherent states for
each mode of the electromagnetic field and SU(2) coherent states for
the atomic field as trial states, and analytically minimizing the
obtained energy surface with respect to its parameters.
* Using a projection operator on HW(1) coherent states and SU(2) coherent
states to obtain trial states that preserve the parity symmetry of
the Hamiltonian with respect to the total excitation number of the
system (symmetry adapted states), and numerically minimize
the obtained energy surface with respect to its parameters.
* Using symmetry adapted states, as in (2) above, to obtain the energy
surface and “minimize” it with the minimizing parameters obtained
in (1) above, thus allowing us to have analytic expressions for the
ground state.
* Numerically diagonalizing the Hamiltonian, which gives us the exact
quantum solution.
§.§.§ Coherent states (CS)
For each mode of the electromagnetic field the annihilation and creation
operators a_ and a_^†, appearing in
the modeling Hamiltonian H, satisfy the commutation relations (<ref>)
of the Lie algebra generators of the Heisenberg-Weyl group HW(1);
hence, a natural choice of a trial state for the radiation field is
a tensor product of k (number of modes) coherent states of HW(1)
|α̅⟩ :=|α_1⟩⊗⋯⊗|α_k⟩ ,
where each |α_⟩ is defined as
|α_⟩ :=e^α_a_^†-α_^*a_|0_⟩ =e^-|α_|^2/2∑_ν_=0^∞α_^ν_/√(ν_!)|ν_⟩ .
Furthermore, the commutation relations of the collective spin operators
J_-, J_+ and J_z (<ref>) are the same as the
ones of the Lie algebra generators of the special unitary group SU(2).
Thus, analogously as for the radiation field, we use the coherent
states of SU(2)
|ξ⟩ _j:=|υtan|υ|/|υ|⟩ _j:=e^υ J_+-υ^*J_-|j,0⟩
=1/(1+|ξ|^2)^j∑_m=0^2j2jm^1/2ξ^m|j,m-j⟩ .
as
trial states for the matter field.
§.§.§ Symmetry adapted states (SAS)
The modeling Hamiltonian we are considering has a parity symmetry
given by [e^iπΛ,H]=0, where Λ=√(J^2+1/4)-1/2+J_z+∑_=1^ka_^†a_
is the excitation number operator with eigenvalues λ=j+m+∑_=1^kν_.
This symmetry allows us to classify the eigenstates of H in terms
of the parity of the eigenvalues λ; however, as states with
opposite symmetry are strongly mixed by the CS defined in the previous
section, we should then adapt this symmetry to the CS by projecting
them with the operator P_±=1/2(I± e^iπΛ),
i.e.
|α̅,ξ_j⟩ _±:=𝒩_±P_±|α̅⟩⊗|ξ⟩ _j
=𝒩_±(|α̅⟩⊗|ξ⟩ _j±|-α̅⟩⊗|-ξ⟩ _j),
with
𝒩_±=(2±2E(-cosθ)^2j)^-1/2
the normalization factors for the even (+) and odd (-) states (where
E=exp{ -2∑_=1^k|α_^2|}).
As we are interested in the ground state of the system, which has
an even parity, we only focus on the state |α̅,ξ_j⟩ _+.
§.§.§ Entropy of entanglement (S_ε)
Entropy of entanglement is defined for a bipartite system as the Von
Neumann entropy of either of its reduced states, that is, if ρ
is the density matrix of a system in a Hilbert space ℋ=ℋ_1⊗ℋ_2,
its entropy of entanglement is defined as
S_ε:=-Tr{ρ_1logρ_1} =-Tr{ρ_2logρ_2} ,
where ρ_1=Tr_2{ρ} and ρ_2=Tr_1{ρ}.
Our Hamiltonian H models a bipartite system formed by matter and
radiation subsystems, which means that their entropy of entanglement
can be used to see the influence of the QPT on its behavior; this
we do below.
§.§.§ Fidelity between neighboring states (F)
Fidelity is a measure of the distance
between two quantum states; given |ϕ⟩ and
|φ⟩ it is defined as
F(ϕ,φ):=|⟨ϕ|φ⟩|^2.
Across a QPT the ground state of a system suffers a sudden, drastic
change, thus it is natural to expect a drop in the fidelity between
neighboring states near the transition. This drop has been, in fact,
already shown to happen <cit.> for the case 2j=N.
We study it here also, and its behavior with the cooperation number. | Writing the complex labels α_ and ξ as α_=q_+ip_ι
and ξ=tan(θ/2)e^iϕ, with q_,p_∈ℝ,
θ∈[0,π), ϕ∈[0,2π), the
CS's energy surface is obtained by taking the expectation value of
the modeling Hamiltonian H with respect to the state |α̅⟩⊗|ξ⟩ _j,
and has the form
ℋ_j,CS(q_,p_,θ,ϕ):=⟨α̅|⊗⟨ξ|_jH|α̅⟩⊗|ξ⟩ _j
=-jω_Acosθ+∑_=1^kΩ_(q_^2+p_^2)-4j/√(N)sinθcosϕ∑_=1^kγ_q_.
The critical points which minimize it are then found to be
θ_c=q__c=p__c=0, ω_A≥8j/N∑_=1^kγ_^2/Ω_,
.[ cosθ_c=Nω_A/8j(∑_=1^kγ_^2/Ω_)^-1,; ; ϕ_c=0,π,; ; q__c=2jγ_/Ω_√(N)cosϕ_csinθ_c,; ; p__c=0 ]} ω_A<8j/N∑_=1^kγ_^2/Ω_.
Substituting these values into (<ref>) we obtain the energy
of the coherent ground state as a function of the Hamiltonian parameters,
ℰ_CS(ω_A,γ_)={ -jω_A , δ≥1
-jω_A/2(1/δ+δ) , δ<1,
.
where we have defined δ=Nω_A/8jς
with ς=∑_=1^kγ_^2/Ω_.
Using the information of this coherent ground state we also obtain
the expectation values of the atomic relative population operator
J_z and of the number of photons of mode operator ν_:=a_^†a_:
⟨ J_z⟩ _CS(ω_A,γ_)={ -j , δ≥1
-jδ , δ<1,
.
⟨ν_⟩ _CS(ω_A,γ_,γ_)={ 0 , δ≥1
γ_^2/Ω_^2jω_A/2ς(1/δ-δ) , δ<1.
.
Analogously as for the CS's energy surface, the SAS's energy surface
is obtained by taking the expectation value of the modeling Hamiltonian
H with respect to the state |α̅,ξ_j⟩ _+,
and has the more complicated form
ℋ_j,SAS(q_,p_,θ,ϕ):=⟨α̅,ξ_j|_+H|α̅,ξ_j⟩ _+
=(1+E(-cosθ)^2j-2/1+E(-cosθ)^2j)(-jω_Acosθ)
+(1-E(-cosθ)^2j/1+E(-cosθ)^2j)∑_ℓ=1^kΩ_ℓ(q_ℓ^2+p_ℓ^2)
-4j/√(N)sinθ∑_ℓ=1^k{cosϕγ_ℓq_ℓ+E(-cosθ)^2j-1sinϕγ_ℓp_ℓ/1+E(-cosθ)^2j}.
As a first approximation, we may substitute the critical values obtained
for the CS's energy surface into (<ref>), we obtain the trial
state which approximates the lowest symmetry-adapted energy state,
and with respect to which we evaluate the expectation values of the
observables H, J_z and ν_:
ℰ_SAS(ω_A,γ_)
={[ -jω_A , δ≥1; ; -jω_A[δ(1+ε(-δ)^2j-2/1+ε(-δ)^2j).; .+1/2(1/δ-δ)] , δ<1, ].
⟨ J_z⟩ _SAS(ω_A,γ_)
={ -j , δ≥1
-jδ(1+ε(-δ)^2j-2/1+ε(-δ)^2j) , δ<1,.
⟨ν_⟩ _SAS(ω_A,γ_,γ_)
={ 0 , δ≥1
γ_^2/Ω_^2jω_A/2ς(1/δ-δ)(1-ε(-δ)^2j/1+ε(-δ)^2j) , δ<1,.
where
ε=exp{-jω_Aσ/ς}
with σ=∑_=1^kγ_^2/Ω_^2.
Of course, we can minimize eq. (<ref>) numerically for the
SAS and obtain the expectation value of the relevant matter and field
observables.
In our numerical analysis we study the case with two modes of the
radiation field, as it is the maximum number of orthogonal modes that
can be present in a 3D cavity with the restrictions that the modes
interact with the electric dipole moment of the atoms and to be in
resonance with the frequency associated with the energy difference
between the two levels of the atoms. This latter restriction is just
considered to have the maximum transition probability between states.
For the exact quantum solution we must resort to numerical diagonalization
of the Hamiltonian and use the lowest eigenstate to compute the expectation
values of the relevant observables.
The results, namely the ground-state properties obtained with the CS,
with the SAS evaluated at the critical points of the CS (which has
the advantage of also providing analytical expressions), with the
numerically minimized SAS, and with the exact quantum solution obtained
through numerical diagonalization, are shown in figures <ref> - <ref>
and are discussed below.
One advantage of having analytical solutions is, of course, that the
order of the transition may be easily found. Equations (<ref>)
and (<ref>) show a second-order QPT at δ=Nω_A/8jς=1
with the CS and SAS using CS's minima (SASc) approximations. In figure
<ref> it can be seen that the data of the SAS using numerical
minimization (SASn) has a small discontinuity (the QPT) at γ_2≈1.485
for j=5 and γ_2≈1.015 for j=9, while in figure
<ref> this discontinuity is at ω_A≈0.975 for
j=5 and ω_A≈1.965 for j=9. Note that the SASn
solution always approximates the exact quantum result better; as the
cooperation number increases this approximation improves and, in fact,
for 2j=18=N the loci of the separatrix between the normal and collective
regions for the quantum and SASn solutions are indistinguishable (except
in the zoomed inset). The true loci of the QPT may be found through
the fidelity: figures <ref> and <ref> show the fidelity
between neighboring states of the quantum solution, where the exact
QPT is characterized by the minimum, which is localized at γ_2≈1.550
for j=5, γ_2≈1.031 for j=9 in figure <ref>;
and ω_A≈0.817 for j=5, ω_A≈1.870
for j=9 in figure <ref>.
The discrepancies between the transition values of the SASc approximation
and the exact quantum solution become obvious when looking at figures
<ref> and <ref>, where the fidelity between SASc and
the quantum solution drops (and oscillates) in a vicinity of the separatrix.
Therefore, we conclude that SASc offer a good approximation (with
an analytic expression) to the exact quantum solution far from the
QPT for low cooperation numbers, but as j→∞ the
interval where the SASc fails to reproduce the correct behavior becomes
smaller.
Figures <ref> and <ref> show the fidelity drop at
the separatrix for the SASn. The resemblance to figures <ref>
and <ref> is striking, showing the benefits of restoring the
Hamiltonian symmetry in the trial variational states. This improvement
comes with the disadvantage of losing the analytic expression, but
still has an advantage over the quantum solution: the computational
time. SASn are obtained by numerically minimizing a real function,
which is far easier to do (computationally speaking) than numerically
diagonalizing the Hamiltonian matrix.
Figures <ref> - <ref> show the comparison between the
different approximations to the ground state: CS, SASc, SASn and quantum
solution. We show the behavior of ℰ:=⟨ H⟩,
⟨ J_z⟩ and ⟨ν_⟩
as functions of the atomic frequency ω_A and one of the
coupling constants γ_2, for different cooperation numbers.
It can be noticed that the discontinuity in the second derivative
of the energy (as modeled with CS and SASc) translates into a discontinuity
in the first derivative of ⟨ J_z⟩ and
⟨ν_⟩, thus characterizing the
QPT by means of an abrupt change in the expectation values of the
observables. In general, it can be observed that the four methods
(CS, SASc, SASn and quantum solution) converge in the limit δ→0,
where the case j→∞ is particularly interesting as
the interval around the QPT, where all the approximations fail to
reproduce the correct behavior, becomes smaller.
It is worth mentioning the significance and importance of figures
<ref> and <ref> as they show aspects of the multi-mode
Dicke model which are not present in the single-mode case. In figure
<ref> it is shown how the different modes of radiation (orthogonal
in principle) interact through the matter field, analogously to the way
different atoms interact through the radiation field.
On the other hand, figure <ref> shows (pictorially) the phase
diagrams of the two-mode system, in which it can be observed that
any two points in the super-radiant region can be joined by a trajectory
that does not cross the normal region, a characteristic that the single-mode
system lacks.
Figures <ref> and <ref> show the comparison between
SASc, SASn and the quantum solution for the entropy of entanglement
S_ε as a function of the atomic frequency ω_A
and one of the coupling constants γ_2, using different cooperation
numbers. A characterization of the QPT can be made by observing that
the entropy of entanglement obtained using the quantum solution shows
a maximum at the transition, an attribute that the SASc and SASn approximations
fail to reproduce.
http://arxiv.org/abs/1701.07590v1 | 20170126065620 | Analysis of stochastic approximation schemes with set-valued maps in the absence of a stability guarantee and their stabilization | [
"Vinayaka G. Yaji",
"Shalabh Bhatnagar"
] | cs.SY | [
"cs.SY"
] |
Analysis of stochastic approximation schemes with set-valued maps in the absence of a stability guarantee and their stabilization
Vinayaka G. Yaji and Shalabh Bhatnagar,
Department of Computer Science and Automation,
Indian Institute of Science, Bangalore.
[email protected], [email protected]
In this paper, we analyze the behavior of stochastic approximation schemes with set-valued maps in the absence of a stability guarantee. We prove that after a large number of iterations, if the stochastic approximation process enters the domain of attraction of an attracting set, it gets locked into the attracting set with high probability. We demonstrate that the above result is an effective instrument for analyzing stochastic approximation schemes in the absence of a stability guarantee, by using it to obtain an alternate criterion for convergence in the presence of a locally attracting set for the mean field, and by using it to show that a feedback mechanism, which involves resetting the iterates at regular time intervals, stabilizes the scheme when the mean field possesses a globally attracting set, thereby guaranteeing convergence. The results in this paper build on the works of V. S. Borkar, C. Andrieu and H. F. Chen, by allowing for the presence of set-valued drift functions.
§ INTRODUCTION
It is well known that several optimization and control tasks can be cast as a root finding problem. That is, given f:ℝ^d→ℝ^d, one needs to find x^*∈ℝ^d, such that f(x^*)=0 (given such a point exists). Due to practical considerations, one usually has access to noisy measurements/estimations of the function whose root needs to be determined. An approach to solving such a problem with noisy measurements of f, is given by the recursion,
X_n+1-X_n-a(n)M_n+1=a(n)f(X_n),
where {M_n}_n≥1 denotes the noise arising in the measurement of f and, having fixed an initial condition (X_0∈ℝ^d), the iterates {X_n}_n≥1 are generated according to recursion (<ref>). In <cit.>, under certain assumptions which include the Lipschitz continuity of the function f, boundedness of the iterates along almost every sample path (that is ℙ(sup_n≥0X_n<∞)=1) and a condition which ensures that the eventual contribution of the additive noise terms is negligible, it was shown that the linearly interpolated trajectory of recursion (<ref>) tracks the flow of the ordinary differential equation (o.d.e.) given by,
dx/dt=f(x).
Such a trajectory is called an asymptotic pseudotrajectory for the flow of o.d.e.(<ref>) (for a precise definition see <cit.>). Suppose the set of zeros of f is a globally asymptotically stable set for the flow of o.d.e. (<ref>), then it was shown that the limit set of an asymptotic pseudotrajectory was contained in such a set and hence the iterates {X_n}_n≥0 converge in the limit to a root of the function f.
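To illustrate recursion (<ref>), here is a minimal sketch (a hypothetical drift f and Gaussian martingale-difference noise of our choosing, not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Lipschitz drift whose unique root is x* = (1, -1).
def f(x):
    return -(x - np.array([1.0, -1.0]))

x = np.zeros(2)                          # X_0
for n in range(1, 10_001):
    a_n = 1.0 / n                        # step sizes: sum a(n) = inf, sum a(n)^2 < inf
    M = rng.normal(scale=0.1, size=2)    # martingale-difference noise M_{n+1}
    x = x + a_n * (f(x) + M)             # X_{n+1} = X_n + a(n) f(X_n) + a(n) M_{n+1}
print(x)                                 # close to the root (1, -1)
```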
In order to analyze recursion (<ref>) when the function f is no longer Lipschitz continuous or even continuous, but is just measurable satisfying the linear growth property, that is for every x∈ℝ^d, f(x)≤ K(1+x) for some K>0, or when there is a non-additive noise/control component taking values in a compact set whose law is not known
(in which case the recursion (<ref>) takes the form X_n+1-X_n-a(n)M_n+1=a(n)f(X_n,U_n), where U_n denotes the noise/control), the above mentioned o.d.e. method needed to be extended to recursions with much weaker requirements on the function f. This was accomplished in <cit.>, where the asymptotic behavior of the recursion given by,
X_n+1-X_n-a(n)M_n+1∈ a(n)F(X_n),
was studied, where F is a set-valued map satisfying some conditions (while the other quantities have same interpretation as in (<ref>)). Under the assumption of stability of iterates (that is ℙ(sup_n≥0X_n<∞)=1) and appropriate conditions on the additive noise terms, in <cit.>, it was shown that the linearly interpolated trajectory of recursion (<ref>) tracks the flow of the differential inclusion (d.i.) given by,
dx/dt∈ F(x).
We refer the reader to <cit.> for a detailed argument as to how the measurable case and the case with unknown noise/control be recast in the form of recursion (<ref>). For a brief summary of the convergence analysis of recursion (<ref>) we refer the reader to section <ref> of this paper.
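A minimal sketch of recursion (<ref>) with a genuinely set-valued drift (the map below is our illustration; any measurable selection from F(X_n) is admissible at each step):

```python
import numpy as np

rng = np.random.default_rng(1)

# A Marchaud map: F(x) = {-1} for x > 0, {+1} for x < 0, and [-1, 1] at x = 0.
def select_from_F(x):
    if x > 0:
        return -1.0
    if x < 0:
        return 1.0
    return rng.uniform(-1.0, 1.0)        # any point of [-1, 1] is allowed at x = 0

x = 5.0
for n in range(1, 50_001):
    a_n = 1.0 / n
    M = rng.normal(scale=0.3)            # additive martingale-difference noise
    x += a_n * (select_from_F(x) + M)
print(x)                                 # hovers near 0, a globally attracting set of dx/dt in F(x)
```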
Common to the analysis of both recursion (<ref>) and (<ref>) is the assumption on the stability of the iterates, that is ℙ(sup_n≥0X_n<∞)=1. The condition of stability is highly non-trivial and difficult to verify. Over the years significant effort has gone into providing sufficient conditions for stability (see <cit.>, <cit.>). In <cit.>, it was shown that for recursion (<ref>), in the absence of stability guarantee, the probability of converging to an attracting set of o.d.e. (<ref>) given that the iterates lie in a neighborhood of it converged to one as the index (n) in which the iterate entered the neighborhood of the attracting set increased to infinity. This probability of the iterates converging to an attracting set given that the iterate lies in a neighborhood of it is called the lock-in probability and in <cit.> a lower bound for the same was used to obtain sample complexity bounds for recursion (<ref>). Further a tighter lower bound for the lock-in probability was derived in <cit.> under a slightly stronger noise assumption and used to obtain convergence guarantee when the law of the iterates are tight. In this paper we extend the results in <cit.> to the case of stochastic approximation schemes with set-valued maps as in recursion (<ref>).
§.§ Contributions and organization of the paper
We first provide a lower bound for the lock-in probability of stochastic approximation schemes with set-valued maps as in recursion (<ref>). The bound is derived under an assumption on the additive noise terms which is stronger than the corresponding in <cit.>, which is necessitated due to the lack of Lipschitz continuity of the drift function F. We establish that,
ℙ(X_n→ A as n→∞|X_n_0∈𝒪')≥ 1-2de^-K̃/b(n_0),
for n_0 large, where A⊆ℝ^d denotes an attracting set of DI (<ref>), 𝒪' is an open neighborhood of A with compact closure, K̃ is some positive constant and {b(n)}_n≥0 is a step-size dependent sequence of reals converging to zero.
Having summarized the convergence analysis under stability in section <ref>, we state the lock-in probability bound in section <ref> and provide a few implications of the same. Using the lock-in probability result we provide an alternate criteria for convergence in the presence of a locally attracting set which removes the need to verify stability. A detailed comparison between the obtained convergence guarantee and the corresponding in the presence of stability is also provided.
Proof of the lock-in probability result is presented in section <ref>. The proof relies heavily on the insights obtained from the analysis in <cit.> for single-valued maps. From the analysis in <cit.>, it is evident that the Lipschitz continuity of the drift function f plays a crucial role in obtaining events and decoupling error contributions which in turn are necessary to obtain the bound in the inequality above. But in the recursion studied in this paper (that is recursion (<ref>)), the drift function F is set-valued and the assumptions under which we study the said recursion (which are summarized in section <ref>), the drift function F is not even continuous. We overcome this problem by first obtaining a sequence of locally Lipschitz continuous set-valued maps which approximate the drift function F from above and then parameterizing them using the Stiener selection procedure. The associated results are summarized in section <ref>. This enables us to write recursion (<ref>) in the form of recursion (<ref>), but with locally Lipschitz continuous drift functions. Further the relation between the solutions of differential inclusions with the approximating set-valued maps as their vector field and those of DI (<ref>), is established in section <ref>. Having written recursion (<ref>) in the form of recursion (<ref>), we then collect sample paths of interest in section <ref>. Along the sample paths that are collected the iterates are such that, having entered a neighborhood of the attracting set at iteration n_0, the iterates will infinitely often enter the said neighborhood and the time elapsed between successive visits to the neighborhood of the attracting set can be upper bounded by a constant which is mean field dependent. Further we show that the probability of occurrence of such sample paths can be lower bounded by error contributions due to additive noise terms alone after a large number of iterations. Using the concentration inequality for martingale sequences we obtain the lock-in probability bound in section <ref>.
Using the lock-in probability result we design a feedback mechanism which enables us to stabilize the stochastic approximation scheme in the presence of a globally attracting set for DI (<ref>). The feedback mechanism involves resetting the iterates at regular time intervals if they are found to be lying outside a certain compact set. This approach to stabilization has been studied in various forms for stochastic approximation schemes with single-valued drift functions as in recursion (<ref>), in <cit.>, <cit.>, <cit.> and <cit.> to name a few. We extend the same to the case of set-valued drift functions. The main idea in the analysis of such a scheme is to show that along almost every sample path of the modified recursion, the number of resets that are performed is finite, thereby guaranteeing that eventually the iterates lie within a compact set. We observe that the lock-in probability result (to be precise the approach adopted to obtain the lock-in probability result) plays a central role in showing that the number of resets performed remain finite. Having shown that the iterates eventually lie within a compact set, we use the convergence arguments from <cit.> to argue that the iterates generated by the modified scheme converge to the globally attracting set of DI (<ref>). The modified scheme is presented and explained in detail in section <ref>. The proof of the finite resets theorem is presented in section <ref>. The procedure employed to collect sample paths in the proof of the lock-in probability result can be used to collect sample paths where only finite number of resets have occurred in the modified scheme and this in turn enables us show that the number of resets are finite almost surely.
Finally, we conclude by providing a few directions for future work in section <ref>.
§ RECURSION AND ASSUMPTIONS
Let (Ω,ℱ,ℙ) be a probability space and {X_n}_n≥0 be a sequence of ℝ^d-valued
random variables on Ω, such that for every n≥0,
X_n+1-X_n-a(n)M_n+1∈ a(n)F(X_n),
where,
(A1) F:ℝ^d→{subsets of ℝ^d} is a set-valued map which for every x∈ℝ^d
satisfies the following:
(i) F(x) is a convex and compact subset of ℝ^d,
(ii) there exists K>0 (independent of x) such that sup_y∈ F(x)y≤ K(1+x),
(iii) for every ℝ^d-valued sequence {x_n}_n≥1 converging to x and for every sequence {y_n∈ F(x_n)}_n≥1
converging to y∈ℝ^d, we have that y∈ F(x).
(A2) {a(n)}_n≥0 is a sequence of positive real numbers satisfying,
(i) ∑_n=0^∞a(n)=∞,
(ii) ∑_n=0^∞(a(n))^2<∞ (both conditions are satisfied, for instance, by a(n)=1/(n+1)).
(A3) {M_n}_n≥1 is a ℝ^d-valued, martingale difference sequence with respect to the filtration {ℱ_n:=σ(X_m,M_m, m≤ n)}. Furthermore, {M_n}_n≥1 are such that,
M_n+1≤ K(1+X_n) a.s.,
for every n≥0, for some constant K>0.
Assumption (A1) ensures that the set-valued map F is a Marchaud map. The condition (A1)(ii) is called the linear growth property since it ensures that the size of the sets F(x) grow linearly with respect to the distance from the origin. The condition (A1)(iii) is called the closed graph property since it states that the graph of the set-valued map F, defined as,
{(x,y)∈ℝ^2d:x∈ℝ^d, y∈ F(x)},
is a closed subset of ℝ^2d. The map F being a Marchaud map ensures that the differential inclusion (DI) given by,
dx/dt∈ F(x),
possesses at least one solution through every initial condition. By a solution of DI (<ref>) with initial condition x_0∈ℝ^d, we mean an absolutely continuous function x:ℝ→ℝ^d such that x(0)=x_0 and for almost every t∈ℝ, dx(t)/dt∈ F(x(t)). DI (<ref>) is the mean field of recursion (<ref>) and its dynamics play an important role in describing the asymptotic behavior of recursion (<ref>).
Assumption (A2) states the conditions to be satisfied by the step size sequence {a(n)}_n≥0. Square summability (that is (A2)(ii)) is needed later in the analysis for obtaining a probability bound on certain tail events associated with the additive noise terms {M_n}_n≥1.
Assumption (A3) defines the martingale noise model. These terms denote the noise arising in the measurement of F(·). This condition holds in several reinforcement learning applications (see <cit.>).
Clearly when {M_n}_n≥1 are i.i.d. zero mean and bounded, assumption (A3) is satisfied. Further, since the drift function in recursion (<ref>) is a set-valued map, scenarios where the measurement noise terms possess a bounded bias can be recast in the form of recursion (<ref>) as explained below.
Consider the recursion given by,
X_n+1-X_n-a(n)M_n+1-a(n)η_n+1=a(n) f(X_n), n≥0,
where f:ℝ^d→ℝ^d is a single-valued Lipschitz continuous map, for every n≥0, η_n+1 denotes the bias in the measurement noise. Let the bias terms {η_n}_n≥1 be bounded by a positive constant, say ϵ>0 (that is, for every n≥1, η_n≤ϵ). Then, recursion (<ref>) can be written in the form of recursion (<ref>) with set-valued map F, given by, F(x)={f(x)+η: η≤ϵ}, for every x∈ℝ^d. We refer the reader to <cit.> for several other variants of the standard stochastic approximation scheme which can be analyzed with the help of recursion (<ref>).
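The following sketch (drift, bias sequence and noise are all our illustrative choices) simulates recursion (<ref>) with a bounded bias and shows the iterates settling within roughly ϵ of the root, as the set-valued recast suggests:

```python
import numpy as np

rng = np.random.default_rng(2)

f = lambda x: -(x - 2.0)                 # Lipschitz drift with root x* = 2 (illustrative)
eps = 0.1                                # bound on the bias: |eta_n| <= eps

x = 0.0
for n in range(1, 200_001):
    a_n = 1.0 / n
    eta = eps * np.sin(n)                # an arbitrary bounded bias sequence
    M = rng.normal(scale=0.2)            # martingale-difference noise
    x += a_n * (f(x) + eta + M)          # a selection from F(x) = f(x) + eps[-1, 1], plus noise
print(x)                                 # settles within about eps of x* = 2
```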
§ LOCK-IN PROBABILITY FOR STOCHASTIC RECURSIVE INCLUSIONS
In order to state the main result of this paper, definition of the flow of a DI, an attracting set for such a dynamical system are needed. We recall these notions below and we state them with respect to the mean field of recursion (<ref>) (for a detailed description and associated results see <cit.>).
The flow of DI (<ref>) is given by the set-valued map Φ:ℝ×ℝ^d→{subsets of ℝ^d}, where for every (t,x)∈ℝ×ℝ^d,
Φ(t,x):={x(t)∈ℝ^d:x(·) is a solution of DI (<ref>) with x(0)=x}.
A compact set A⊂ℝ^d is an attracting set for the flow of DI (<ref>), if there exists an open neighborhood of A, say 𝒪, with the property that for every ϵ>0, there exists a time T>0 (depending on ϵ and 𝒪) such that for every t≥ T and for every x∈𝒪, Φ(t,x)∈ N^ϵ(A), where N^ϵ(A) denotes the ϵ-neighborhood of A. Such a neighborhood 𝒪 of an attracting set A is called the fundamental neighborhood of A.
The set of initial conditions in ℝ^d from which the flow is attracted to an attracting set A is called the basin of attraction and is denoted by B(A). Formally,
B(A):={x∈ℝ^d: ∩_t≥0{Φ(q,x):q≥ t}⊆ A}.
An attracting set A is said to be globally attracting if, B(A)=ℝ^d.
§.§ Summary of the asymptotic analysis under stability
Let t(0):=0 and for every n≥1, t(n):=∑_k=0^n-1a(k). The linearly interpolated trajectory of recursion (<ref>), is given by the stochastic process X̅:Ω×ℝ→ℝ^d, where for every (ω,t)∈Ω×[0,∞),
X̅(ω,t):=(t-t(n)/t(n+1)-t(n))X_n+1(ω)+(t(n+1)-t/t(n+1)-t(n))X_n(ω),
where n is such that t∈[t(n),t(n+1)) and for every (ω,t)∈Ω×(-∞,0), X̅(ω,t):=X_0(ω).
For ω∈Ω, the limit set map of X̅ is given by, λ:Ω→{subsets of ℝ^d} where for every ω∈Ω,
λ(ω):=∩_t≥0{X̅(ω,q):q≥ t}.
In <cit.>, under assumptions (A1)-(A3) along with the additional assumption of stability of the iterates (that is ℙ(sup_n≥0X_n<∞)=1), it was shown that for almost every ω∈Ω, the linearly interpolated trajectory of recursion (<ref>), X̅(ω,·), is an asymptotic pseudotrajectory for the flow of DI (<ref>). More precisely, for almost every ω∈Ω, X̅(ω,·) was shown to satisfy the following:
(a) The family of shifted trajectories given by {X̅(ω,·+t)}_t≥0 is relatively compact in 𝒞(ℝ,ℝ^d) where 𝒞(ℝ,ℝ^d) denotes the metric space of all continuous functions on ℝ taking values in ℝ^d with metric D, which for every z, z'∈𝒞(ℝ,ℝ^d) is given by
D(z,z')=∑_k=1^∞1/2^kmin{z-z'_[-k,k],1},
where z-z'_[-k,k]:=sup_t∈[-k,k]z(t)-z'(t).
(b) Every limit point of the shifted trajectories {X̅(ω,·+t)}_t≥0 is a solution of the DI (<ref>).
From <cit.>, it follows that for almost every ω∈Ω, the limit set of the linearly interpolated trajectory X̅(ω,·), λ(ω), is a non-empty, compact and an internally chain transitive (ICT) set for the flow of DI (<ref>) (see <cit.> for definition of an ICT set). Now using <cit.> the main convergence result of <cit.> follows and is stated below.
Let A⊆ℝ^d be an attracting set for the flow of DI (<ref>).
Under assumptions (A1)-(A3),
(a) for almost every ω∈{ω∈Ω:sup_n≥0X_n(ω)<∞}∩{ω∈Ω:λ(ω)∩ B(A)≠∅}, λ(ω)⊆ A and therefore as n→∞, X_n(ω)→ A.
(b) if B(A)=ℝ^d (that is A is a globally attracting set), then for almost every ω∈{ω∈Ω:sup_n≥0X_n(ω)<∞}, λ(ω)⊆ A and therefore as n→∞, X_n(ω)→ A.
The assumption of stability of the iterates used to obtain the above convergence result is highly non-trivial and difficult to verify. Moreover the proof method used to prove the above convergence result cannot be modified in a straight forward manner to obtain a similar convergence guarantee. This warrants an alternate approach to study the behavior of recursion (<ref>) in the absence of stability guarantee and we accomplish this by extending the lock-in probability result from <cit.> to the set-valued case. Using the obtained lock-in probability bound we recover convergence guarantee similar to Theorem <ref> while eliminating the need to verify stability.
§.§ Main result and its implications
Before we state the main result, we state an assumption which fixes the attracting set of interest.
(A4) Let A⊆ℝ^d, be an attracting set of DI (<ref>) (the mean field of recursion (<ref>)) with 𝒪⊆ℝ^d as its fundamental neighborhood of attraction.
Let 𝒪' be an open neighborhood of the attracting set A (as in (A4)) such that 𝒪̅'̅ is compact and 𝒪̅'̅⊆𝒪. Then the main result of the paper can be stated as follows.
(Lock-in probability)
Under assumptions (A1)-(A4), there exists a constant K̃>0 (depending on the attracting set A and 𝒪') and an N_0≥1 such that, for every n_0≥ N_0, for every E∈ℱ_n_0 satisfying E⊆{ω∈Ω:X_n_0(ω)∈𝒪'} and ℙ(E)>0, we have that,
ℙ(X_n→ A as n→∞| E)≥ 1-2de^-K̃/b(n_0),
where, for every n≥0, b(n):=∑_k=n^∞(a(k))^2.
There are two immediate implications of the above result, stated below; one of them serves as an alternate convergence result in the absence of a stability guarantee, that is, it allows us to obtain the convergence guarantee in Theorem <ref>(a) without the need to verify whether a given sample path satisfies sup_n≥0X_n(ω)<∞.
(1) As a consequence of assumption (A2)(ii), we have that lim_n→∞b(n)=0. Therefore from Theorem <ref>, if the observation that iterate lies in a neighborhood of the attracting set is made later in time (n_0), the probability of converging to the attracting set increases and converges to one as n_0→∞. Formally,
lim_n_0→∞ℙ(X_n→ A as n→∞|X_n_0∈𝒪')=1.
(2) Suppose ℙ(∩_N≥0∪_n≥ N{X_n∈𝒪'})>0 (if ℙ(∩_N≥0∪_n≥ N{X_n∈𝒪'})=0 then the iterates almost surely do not converge to the attracting set A). Then for every N≥0, ℙ(∪_n≥ N{X_n∈𝒪'})>0 and
∪_n≥ N{X_n∈𝒪'}={X_N∈𝒪'}∪(∪_n>N{X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'}),
where, the union in the R.H.S. is disjoint. Then by Theorem <ref>, for every N≥ N_0,
ℙ({X_n→ A as n→∞} ∩(∩_N≥0∪_n≥ N{X_n∈𝒪'}))
≥∑_n≥ Nℙ({X_n→ A as n→∞}∩{X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'})
=∑_n≥ N[ℙ({X_n→ A as n→∞}|{X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'}) ℙ({X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'})]
≥∑_n≥ N(1-2de^-K̃/b(n))ℙ({X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'})
≥(1-2de^-K̃/b(N))∑_n≥ Nℙ({X_k∉𝒪', for N≤ k≤ n-1, X_n∈𝒪'})
=(1-2de^-K̃/b(N))ℙ(∪_n≥ N{X_n∈𝒪'})
≥(1-2de^-K̃/b(N))ℙ(∩_N≥ 0∪_n≥ N{X_n∈𝒪'}).
The above inequality is true for every N≥ N_0. Taking limit and using the fact that lim_n→∞b(n)=0, we get that,
ℙ({X_n→ A as n→∞}∩(∩_N≥0∪_n≥ N{X_n∈𝒪'}))=ℙ(∩_N≥ 0∪_n≥ N{X_n∈𝒪'}).
Therefore from the above we can conclude that,
Under assumptions (A1)-(A4), for almost every ω∈∩_N≥0∪_n≥ N{X_n∈𝒪'}, X_n(ω)→ A as n→∞.
In comparison with Theorem <ref>(a), the condition that ω∈∩_N≥0∪_n≥ N{X_n∈𝒪'} is stronger than the requirement that ω∈{λ(ω)∩ B(A)≠∅} because the former requires the iterate sequence to enter an open neighborhood of A with compact closure infinitely often while the latter requires the iterates to enter the basin of attraction of A infinitely often which is larger than 𝒪'. But in the presence of stability we have that,
{sup_n≥0X_n<∞}∩{λ(·)∩ B(A)≠∅}⊆∩_N≥0∪_n≥ N{X_n∈𝒪'}.
Further, as a consequence of Corollary <ref>, we have that,
ℙ({sup_n≥0X_n<∞}∩{λ(·)∩ B(A)≠∅})=ℙ(∩_N≥0∪_n≥ N{X_n∈𝒪'}),
or in other words, the sample paths which visit 𝒪' infinitely often and are unstable, occur with zero probability.
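To get a feel for the lock-in bound, the following sketch evaluates b(n) and the lower bound 1-2de^-K̃/b(n) for a(n)=1/(n+1); the constants d and K̃ below are illustrative placeholders, not values from the paper:

```python
import numpy as np

a = 1.0 / np.arange(1, 10**6 + 1)        # a(n) = 1/(n+1), n = 0, 1, ...
b = np.cumsum((a**2)[::-1])[::-1]        # b(n) = sum_{k >= n} a(k)^2 (truncated tail sum)
d, K_tilde = 2, 0.5                      # illustrative placeholder constants
for n in [10, 100, 1_000, 10_000]:
    print(n, b[n], 1 - 2 * d * np.exp(-K_tilde / b[n]))   # bound tends to 1 as n grows
```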
§ APPLICATION: STABILIZATION VIA RESETTING
In this section we modify recursion (<ref>) in such a way that the modified procedure yields sample paths which are stable (that is lie in a compact set almost surely) which in turn allows us to recover the convergence result as in Theorem <ref>(b) without the need to verify stability, in the presence of a globally attracting set for the mean field. That is, we replace assumption (A4) with the following stronger requirement.
(A4)' Let A⊆ℝ^d be a globally attracting set for the flow of DI (<ref>).
The modification that we propose involves resetting the iterates at regular time intervals if they are found to be lying outside a certain compact set. Let the initial condition X_0(ω)=x_0∈ℝ^d for every ω∈Ω and {r_n∈ (0,∞)}_n≥0 be such that,
(1) x_0<r_0,
(2) for every n≥0, r_n<r_n+1,
(3) lim_n→∞r_n=∞.
The modified scheme, henceforth referred to as the stabilized stochastic recursive inclusion (SSRI), is the one in which every sample path is generated as outlined in Algorithm <ref>.
A flowchart depicting the flow of control in Algorithm <ref> is presented in Figure <ref>. In order to understand the algorithm, let us consider the scenario where the k^th reset has been performed at iteration index n_0. Then the algorithm checks whether the iterate lies in the compact set r_kU (the closed ball of radius r_k centered at the origin) after approximately 2^kT_W amount of time has elapsed (for the relation between time and iteration index see section <ref>). Now either a reset occurs or the iterate is left unchanged.
(a) If the iterate is left unchanged then the next reset check is performed after 2^kT_W amount of time has elapsed.
(b) If the iterate is reset, then, the next check is performed after 2^k+1T_W amount of time has elapsed.
In fact it would suffice if the time between successive reset checks were set to be greater than a certain threshold which is determined by the minimum time needed by the flow of the mean field (that is DI (<ref>)) to reach the attracting set A from any initial condition in a compact neighborhood of it. But in practical scenarios one may not be able to compute such a time and hence may not be able to determine the required threshold. This approach of increasing time duration between successive reset checks with increasing reset count allows us to bypass this problem. The choice of exponentially increasing durations is one of convenience as it simplifies notations involved in proving certain results later.
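A minimal sketch of the resetting mechanism (drift, radii, T_W and reset target below are our illustrative choices; the sketch only mirrors the check-and-reset logic described above):

```python
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: -x                         # mean field with globally attracting set {0}
a = lambda n: 1.0 / (n + 1)
T_W = 1.0
r = lambda k: 10.0 * (k + 1)             # radii r_k, increasing to infinity
x0 = np.array([5.0])                     # ||x_0|| < r_0

x, t, k, resets = x0.copy(), 0.0, 0, 0
next_check = T_W                         # first reset check after about 2^0 * T_W time
for n in range(100_000):
    x = x + a(n) * (f(x) + rng.normal(scale=0.5, size=1))
    t += a(n)
    if t >= next_check:
        if np.linalg.norm(x) > r(k):     # iterate outside r_k U: reset
            x = x0.copy()                # reset (here, back to the initial condition)
            k += 1
            resets += 1
        next_check = t + (2 ** k) * T_W  # 2^{k+1} T_W after a reset, 2^k T_W otherwise
print(resets, x)                         # finitely many resets; iterate near the attractor
```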
For every n≥1, define the indicator random variable χ_n:Ω→{0,1} such that, for every ω∈Ω,
χ_n(ω)=
0 if X_n(ω)=X_n'(ω),
1 if X_n(ω)≠ X_n'(ω).
We assume that the noise terms {M_n}_n≥1 satisfy the following version of assumption (A3).
(A3)' {M_n} is a martingale difference sequence with respect to the filtration {ℱ_n}_n≥1, where, for every n≥1, ℱ_n denotes the smallest σ-algebra generated by the iterates X_m (that is the iterates before the reset operation) and noise terms M_m, for 0≤ m≤ n (then it is easy to show that for every n≥1, X_n' and hence χ_n are ℱ_n measurable). Since for every n≥1, M_n denotes the noise arising in the estimation (or measurement) of F at X_n-1', we assume that the energy of the noise depends on X_n-1'. That is, for every n≥0, M_n+1≤ K(1+X_n') a.s.
The next theorem says that, for almost every sample path generated by Algorithm <ref>, the total number of resets is finite, thereby guaranteeing stability. The proof of this theorem (provided in section <ref>) crucially hinges on a lower bound for the probability of the event that there are no future resets given that there are a certain number of resets up until iteration n_0 for some large n_0. Specifically it requires the probability of the above mentioned event to converge to one as n_0 tends to infinity and this is guaranteed by Theorem <ref>.
(Finite resets)
Under assumptions (A1), (A2), (A3)' and (A4)', ℙ({ω∈Ω: ∑_n=1^∞χ_n(ω)<∞})=1.
As a consequence of the above theorem, we have the following.
(a) Let ω∈{ω∈Ω: ∑_n=1^∞χ_n(ω)<∞}. Then there exists an N≥1 and R>0 (depending on ω) such that, for every n≥ N, X_n(ω)=X_n'(ω) and sup_n≥ NX_n(ω)≤ R. Therefore ∑_n≥ N𝔼[(a(n))^2M_n+1^2|ℱ_n](ω)≤∑_n≥ N(a(n))^2K^2(1+X_n'(ω))^2≤ K^2(1+R)^2∑_n≥ N(a(n))^2<∞, where the last inequality follows from assumption (A2)(ii). Therefore,
{ω∈Ω:∑_n≥1χ_n(ω)<∞}⊆{ω∈Ω:∑_n=0^∞𝔼[(a(n))^2M_n+1^2|ℱ_n](ω)<∞}.
Therefore by Theorem <ref>, we have that ℙ(∑_n=0^∞𝔼[(a(n))^2M_n+1^2|ℱ_n]<∞)=1 and by the martingale convergence theorem (see <cit.>) we have that the square integrable martingale {∑_m=0^n-1a(m)M_m+1,ℱ_n}_n≥1 converges almost surely.
(b) Thus for ω lying in a probability one set, there exists N≥1 and R>0 (depending on ω) such that along this sample path the iterates {X_n(ω)}_n≥ N, can be viewed as being generated by recursion (<ref>) with initial condition X_N(ω), their norms are bounded by R uniformly and the additive noise terms {M_n(ω)}_n≥ N satisfy the hypothesis of <cit.>. Then by arguments similar to those of Theorem <ref>(b) we have that,
Under assumptions (A1), (A2), (A3)' and (A4)', for almost every ω, the iterates generated by Algorithm <ref>, {X_n'(ω)}_n≥0, are such that X_n'(ω)→ A as n→∞.
§ PROOF OF THE LOCK-IN PROBABILITY THEOREM (THM. <REF>)
Proof of the lock-in probability result follows as a consequence of a series of lemmas. The overall structure can be summarized as follows.
(a) Our first aim is to replace the set-valued map in recursion (<ref>) with an equivalent single-valued locally Lipschitz continuous function with an additional parameter. In order to accomplish this, we first embed the graph of the set-valued map F in the graph of a sequence of locally Lipschitz continuous set-valued maps. These maps are then parametrized using the Stiener selection procedure which preserves the modulus of continuity.
(b) The relation between the solutions of DI (<ref>) and that of differential inclusions with continuous set-valued maps which approximate F (as in (a) above) is established.
(c) An ordinary differential equation (o.d.e.) is defined using an appropriate single-valued parametrization of F (as in (a) above). The existence of solutions to such an o.d.e. and further its uniqueness follow from Caratheodory's existence theorem and the locally Lipschitz nature of the vector field respectively. The solutions of this o.d.e. aid in separating the probability contributions due to the additive noise terms and the set-valued nature of the drift function. Using the results from part (b) above, we conclude that after a large number of iterations, the probability contribution is only due to the additive noise terms.
(d) We finally review the standard probability lower bounding procedure for the additive noise terms from <cit.>. Using this bound in the result obtained in part (c)
above gives us the desired lock-in probability bound.
Throughout, we use U to denote the closed unit ball in ℝ^d centered at the origin. Further, for every Y_1,Y_2⊆ℝ^d and r∈ℝ, define,
* Y_1+Y_2:={y_1+y_2:y_1∈ Y_1 and y_2∈ Y_2},
* rY_1:={ry_1:y_1∈ Y_1}.
§.§ Upper semicontinuous set-valued maps and their approximation
First we recall definitions of continuous set-valued maps and locally Lipschitz continuous set-valued maps. These notions are taken from <cit.>.
A set-valued map F:ℝ^d→{compact subsets of ℝ^d} is,
* upper semicontinuous (u.s.c.) if, for every x∈ℝ^d, for every ϵ>0, there exists a δ>0 (depending on x and ϵ) such that, for every x'∈ℝ^d satisfying x'-x<δ, we have that F(x')⊆ F(x)+ϵ U, where F(x)+ϵ U:={y+ϵ u: y∈ F(x), u∈ U}.
* lower semicontinuous (l.s.c.) if, for every x∈ℝ^d, for every ℝ^d-valued sequence {x_n}_n≥1 converging to x, for every y∈ F(x), there exists a sequence {y_n∈ F(x_n)}_n≥1 converging to y.
* continuous if, it is both u.s.c. and l.s.c.
* locally Lipschitz continuous if, for every x_0∈ℝ^d, there exists δ>0 and L>0 (depending on x_0) such that for every x,x'∈ x_0+δ U, we have that
F(x)⊆ F(x')+Lx-x'U.
Let 𝒦(ℝ^d) denote the family of all non-empty compact subsets of ℝ^d. Let H:𝒦(ℝ^d)×𝒦(ℝ^d)→[0,∞) be defined such that, for every S_1, S_2∈𝒦(ℝ^d),
H(S_1,S_2):=max{sup_s_1∈ S_1inf_s_2∈ S_2s_1-s_2, sup_s_2∈ S_2inf_s_1∈ S_1s_1-s_2}.
With H as defined above, (𝒦(ℝ^d),H) is a complete metric space (for a proof see <cit.>). The notions of continuity and local Lipschitz continuity of a set-valued map can be restated using the metric defined above and are stated as a lemma below for easy reference (for a proof see <cit.>).
A set-valued map F:ℝ^d→𝒦(ℝ^d) is
(a) Continuous, if and only if, for every x_0∈ℝ^d, for every ϵ>0, there exists δ>0 (depending on x_0 and ϵ), such that for every x∈ x_0+δ U, H(F(x),F(x_0))<ϵ.
(b) locally Lipschitz continuous, if and only if, for every x_0∈ℝ^d, there exists δ>0 and L>0 (depending on x_0), such that for every x, x'∈ x_0+δ U,
H(F(x),F(x'))≤ Lx-x'.
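For intuition, a minimal numerical sketch of the Hausdorff metric H on finite point sets (the points below are illustrative only):

```python
import numpy as np

def hausdorff(S1, S2):
    """H(S1, S2) for finite subsets of R^d given as (m, d) and (n, d) arrays."""
    D = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

S1 = np.array([[0.0, 0.0], [1.0, 0.0]])
S2 = np.array([[0.0, 0.5]])
print(hausdorff(S1, S2))                 # 1.1180..., attained by the point (1, 0)
```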
Before we proceed further we look at a certain form of locally Lipschitz continuous set-valued maps that arise later. The next lemma defines such maps and also states that the sum of two locally Lipschitz continuous set-valued maps is again a locally Lipschitz continuous set-valued map, a result needed later to obtain locally Lipschitz continuous single-valued parametrization of map F in recursion (<ref>).
(a) If f:ℝ^d→ℝ is a locally Lipschitz continuous map and C∈𝒦(ℝ^d), then the set-valued map F:ℝ^d→𝒦(ℝ^d), given by F(x):=f(x)C for every x∈ℝ^d, is a locally Lipschitz continuous set-valued map.
(b) If for every i∈{1,2}, F_i:ℝ^d→𝒦(ℝ^d) is a locally Lipschitz continuous set-valued map, then the set-valued map F:ℝ^d→𝒦(ℝ^d), given by F(x):=F_1(x)+F_2(x) for every x∈ℝ^d, is a locally Lipschitz continuous set-valued map.
(a) Fix x_0∈ℝ^d and let r:=sup_c∈ Cc. Since f is locally Lipschitz continuous, there exists δ^f_x_0>0 and L^f_x_0>0 such that for every x,x'∈ x_0+δ^f_x_0U, |f(x)-f(x')|≤ L^f_x_0x-x'. Let x,x'∈ x_0+δ^f_x_0U. Then for any c∈ C,
f(x)c-f(x')c =|f(x)-f(x')|c
≤ rL^f_x_0x-x'.
Therefore for every x,x'∈ x_0+δ^f_x_0U, for every c∈ C, f(x')c-f(x)c∈ rL^f_x_0x-x'U. Thus for every x,x'∈ x_0+δ^f_x_0U, F(x')⊆ F(x)+rL^f_x_0x-x'U, from which it follows that the set-valued map F is locally Lipschitz continuous at x_0 with δ:=δ^f_x_0 and L:=rL^f_x_0. Since x_0∈ℝ^d is arbitrary, the above argument gives us that F is locally Lipschitz continuous at every x_0.
(b) Fix x_0∈ℝ^d. Since for every i∈{1,2}, F_i are locally Lipschitz continuous, there exists δ_i>0 and L_i>0 such that for every x,x'∈ x_0+δ_i U, F_i(x)⊆ F_i(x')+L_ix-x' U. Let δ:=min{δ_1,δ_2}, L:=L_1+L_2 and x,x'∈ x_0+δ U. For any y∈ F(x), there exists y_1∈ F_1(x) and y_2∈ F_2(x) such that y=y_1+y_2 . By our choice of δ, we have y'_1∈ F_1(x'), y'_2∈ F_2(x') and u_1, u_2∈ U, such that for every i∈{1,2}, y_i=y'_i+L_ix-x'u_i. Therefore,
y =y_1+y_2
=y'_1+L_1x-x'u_1 +y'_2+L_2x-x'u_2
=y'_1+y'_2+(L_1+L_2)x-x'(L_1u_1+L_2u_2/L_1+L_2).
Clearly y'_1+y'_2∈ F(x') and since U is a convex subset of ℝ^d, L_1u_1+L_2u_2/L_1+L_2∈ U. From (<ref>) we get that, F(x)⊆ F(x')+(L_1+L_2)x-x' U, for every x,x' ∈ x_0+δ U. Therefore F is locally Lipschitz continuous at x_0. Since x_0 is arbitrary, the above argument gives us that F is locally Lipschitz continuous.
Consider a set-valued map F satisfying assumption (A1). A simple contradiction argument gives us that F is u.s.c. It is not possible to represent such u.s.c. set-valued maps with a single-valued continuous map with an additional parameter; instead, one can approximate them from above, as explained next. The first step is to embed the graph of the map F in that of a sequence of continuous set-valued maps, as stated in the lemma below. For the proof of the lemma below, notions of a paracompact topological space, an open covering, its locally finite refinement and a partition of unity subordinated to a locally finite covering are needed; these are summarized in Appendix <ref> for easy reference.
Let F:ℝ^d→𝒦(ℝ^d) be a set-valued map satisfying (A1). Then, there exists a sequence of continuous set-valued maps {F^(l):ℝ^d→𝒦(ℝ^d)}_l≥1, such that for every l≥1,
(a) for every x∈ℝ^d, F^(l)(x) is a non-empty, convex and compact subset of ℝ^d,
(b) for every x∈ℝ^d, F(x)⊆ F^(l+1)(x)⊆ F^(l)(x),
(c) there exists K^(l)>0, such that for every x∈ℝ^d, sup_y∈ F^(l)(x)y≤ K^(l)(1+x),
(d) F^(l) is a locally Lipschitz continuous set valued map.
Furthermore,
(e) for every x∈ℝ^d, F(x)=∩_l≥1 F^(l)(x).
For any ϵ>0, for every x_0∈ℝ^d, let B(ϵ,x_0):={x: ∥ x-x_0∥<ϵ}. Let {ϵ_l:=1/3^l}_l≥1. Then for every l≥1, 𝒞_l:={B(ϵ_l,x_0):x_0∈ℝ^d} is an open covering of ℝ^d. Since ℝ^d is a metric space, it is paracompact (see <cit.>). Therefore for every l≥1, there exists a locally finite open refinement of the covering 𝒞_l and let it be denoted by 𝒞̃_l:={C_i^l}_i∈ I^l where I^l is an arbitrary index set. By <cit.>, there exists a locally Lipschitz continuous partition of unity, {ψ_i^l}_i∈ I^l, subordinated to the covering 𝒞̃_l. Therefore,
for every l≥1, for every i∈ I^l, there exists x_i^l, such that support(ψ_i^l)⊆ C_i^l⊆ B(ϵ_l,x_i^l). For every l≥1, for every x∈ℝ^d, let I^l(x):={i∈ I^l:ψ_i^l(x)>0} and by definition of ψ_i^l, we have that 0<|I^l(x)|<∞ and ∑_i∈ I^l(x)ψ_i^l(x)=1.
For every l≥1, define the set valued map F^(l):ℝ^d→{subsets of ℝ^d},
such that for every x∈ℝ^d, F^(l)(x):=∑_i∈ I^l(x)ψ_i^l(x)A_i^l, where A_i^l:=c̅o̅(F(B(2ϵ_l,x_i^l))).
The proofs of parts (a), (b), (c) and (e) of the lemma are exactly the same as that of <cit.>. We shall provide a proof of part (d) of the lemma above from which continuity of the set-valued maps F^(l) follows.
(d) Fix l≥1 and x∈ℝ^d. Since 𝒞̃_l is a locally finite open covering of ℝ^d, there exists δ>0 (depending on x), such that
I^l(x,δ):={i∈ I^l: B(x,δ)∩ C^l_i≠∅} is finite. Since {ψ_i^l}_i∈ I^l is a locally Lipschitz continuous partition of unity subordinated to the covering 𝒞̃_l, we have that for every i∈ I^l, support(ψ_i^l)⊆ C_i^l. Therefore, for every x'∈ B(x,δ), F^(l)(x')=∑_i∈ I^l(x,δ)ψ^l_i(x')A_i^l.
From the proof of part (a) of this lemma we know that for every i∈ I^l, A_i^l is a compact and convex subset of ℝ^d. Therefore from Lemma <ref>(a), we get that , for every i∈ I^l(x,δ), the set-valued map given by y→ψ_i^l(y)A_i^l is locally Lipschitz continuous. Further since |I^l(x,δ)|<∞, from Lemma <ref>(b), we get that the set-valued map given by y→∑_i∈ I^l(x,δ)ψ^l(y)A_i^l is locally Lipschitz continuous. Since the set-valued map y→∑_i∈ I^l(x,δ)ψ^l_i(y)A_i^l restricted to B(x,δ) is the same as F^(l) on B(x,δ), we get that F^(l) is locally Lipschitz continuous at x. Since x is arbitrary, the above argument gives us that F^(l) is a locally Lipschitz continuous set-valued map.
The continuous set-valued maps F^(l) as obtained above can be now parametrized (that is represented with a single-valued continuous function with an additional parameter). Key to parametrization is a continuous selection procedure by which we mean a function σ:𝒦(ℝ^d)→ℝ^d which is continuous and is such that for every Y∈𝒦(ℝ^d), σ(Y)∈ Y. Since the maps F^(l) are convex set-valued, it suffices to look for a selection procedure which is continuous restricted to the family of compact and convex subsets of ℝ^d. Further we want a selection procedure which would preserve the local Lipschitz continuity of the set-valued map F^(l) in the parametrization as well. In order to accomplish this we shall use the Stiener selection procedure (for a definition see <cit.>). The next lemma summarizes some properties of the Stiener selection procedure and an intersection lemma which form the central tools for parameterizing the set-valued maps F^(l) (for a proof we refer the reader to <cit.> and <cit.>). Before we state the lemma we introduce some notation needed. Let 𝒦_c(ℝ^d) denote the family of all non-empty compact and convex subsets of ℝ^d. For any set Y⊆ℝ^d and for any x∈ℝ^d, define d(x,Y):=inf_y∈ Yx-y.
(a) There exists a function σ:𝒦_c(ℝ^d)→ℝ^d, such that for every Y,Y_1, Y_2∈𝒦_c(ℝ^d),
σ(Y)∈ Y and σ(Y_1)-σ(Y_2)≤ d H(Y_1,Y_2).
(b) The map Π:𝒦_c(ℝ^d)×ℝ^d→𝒦_c(ℝ^d), defined such that for every Y∈𝒦_c(ℝ^d) and x∈ℝ^d, Π(Y,x):=Y∩(x+2d(x,Y)U), is such that for every Y_1,Y_2∈𝒦_c(ℝ^d) and for every x_1,x_2∈ℝ^d,
H(Π(Y_1,x_1),Π(Y_2,x_2))≤5(H(Y_1,Y_2)+x_1-x_2).
We now use the results stated in the above lemma to parametrize the set-valued maps F^(l).
Let {F^(l)}_l≥1 be as in Lemma <ref>. For every l≥1, there exists a continuous function f^(l):ℝ^d× U→ℝ^d such that,
(a) for every x∈ℝ^d, f^(l)(x,U)=F^(l)(x) where f^(l)(x,U):={f^(l)(x,u): u∈ U}.
(b) for K^(l)>0 as in Lemma <ref>, for every (x,u)∈ℝ^d× U, f^(l)(x,u)≤ K^(l)(1+x).
(c) for every x_0∈ℝ^d, there exists δ^(l)>0 and L^(l)>0 (depending on x_0), such that for every x,x'∈ x_0+δ^(l) U, for every u∈ U,
f^(l)(x,u)-f^(l)(x',u)≤ L^(l)x-x'.
Fix l≥1. Let the map f^(l):ℝ^d× U→ℝ^d be defined such that, for every (x,u)∈ℝ^d× U,
f^(l)(x,u):=σ(Π(F^(l)(x),K^(l)(1+x)u)),
where σ and Π are as in Lemma <ref>.
(a) By definition of f^(l), σ and Π, for every (x,u)∈ℝ^d× U, we have that,
f^(l)(x,u)∈Π(F^(l)(x),K^(l)(1+x)u)⊆ F^(l)(x).
Therefore, for every x∈ℝ^d, f^(l)(x,U)⊆ F^(l)(x). By Lemma <ref>(c), we know that for every x∈ℝ^d, sup_y∈ F^(l)(x)y≤ K^(l)(1+x). Thus for every x∈ℝ^d, for any y∈ F^(l)(x), there exists u∈ U, such that y=K^(l)(1+x)u. For such a u∈ U, by definition of Π, we have that Π(F^(l)(x),K^(l)(1+x)u)=y and hence f^(l)(x,u)=σ(Π(F^(l)(x),K^(l)(1+x)u))=y. Therefore for every x∈ℝ^d, F^(l)(x)⊆ f^(l)(x,U) from which it follows that f^(l)(x,U)=F^(l)(x), for every x∈ℝ^d.
(b) Follows from part (a) of this lemma and Lemma <ref>(c).
(c) Fix x_0∈ℝ^d. Since F^(l) is a locally Lipschitz continuous set-valued map (see Lemma <ref>(d)), we obtain δ_F^(l)>0 and L_F^(l)>0 (depending on x_0) such that for every x,x'∈ x_0+δ_F^(l)U, H(F^(l)(x),F^(l)(x'))≤ L_F^(l)x-x'. Set δ^(l):=δ_F^(l) and L^(l):=5d(L_F^(l)+K^(l)). Then, for any x,x'∈ x_0+δ^(l)U, for every u∈ U,
f^(l)(x,u)-f^(l)(x',u) =σ(Π(F^(l)(x),K^(l)(1+x)u))-σ(Π(F^(l)(x'),K^(l)(1+x')u))
≤ d H(Π(F^(l)(x),K^(l)(1+x)u),Π(F^(l)(x'),K^(l)(1+x')u))
≤ 5d (H( F^(l)(x),F^(l)(x') ) +K^(l)( 1+x)u-K^(l)( 1+x')u )
= 5d ( H( F^(l)(x),F^(l)(x') )+K^(l)|x-x'|u)
≤ 5d ( H( F^(l)(x),F^(l)(x') )+K^(l)x-x')
≤ 5d (L_F^(l)x-x'+K^(l)x-x')
= L^(l)x-x',
where, (<ref>) follows from Lemma <ref>(a), (<ref>) follows from Lemma <ref>(b) and (<ref>) follows from our choice of δ^(l) and local Lipschitz continuity of F^(l).
The set-valued map in recursion (<ref>) can be replaced with the parametrization obtained in the lemma above as explained below.
(1) For every l≥ 1, by Lemma <ref>(b), we know that for every x∈ℝ^d, F(x)⊆ F^(l)(x). Therefore for every l≥1, for every n≥0,
X_n+1-X_n-a(n)M_n+1∈ a(n)F^(l)(X_n).
(2) For every l≥1, by Lemma <ref>(a), we know that for every x∈ℝ^d, F^(l)(x)=f^(l)(x,U). It can now be shown that for every n≥0, there exists a U-valued random variable on Ω, say U^(l)_n, such that for every ω∈Ω,
X_n+1(ω)-X_n(ω)-a(n)M_n+1(ω)= a(n)f^(l)(X_n(ω),U^(l)_n(ω))
(for a proof see <cit.>).
§.§ Solutions of the mean field and their approximation
In this section, we shall approximate the solutions of mean field (that is DI (<ref>)) with the solutions of DI given by,
dx/dt∈ F^(l)(x),
for some l≥1. In order to accomplish this we need some notations which are introduced next.
For every T>0 and for every x∈ℝ^d, let S(T,x) denote the set of solutions of DI (<ref>) on [0,T]. Formally,
S(T,x):={x:[0,T]→ℝ^d : x is absolutely continuous with x(0)=x and for a.e. t∈[0,T], dx(t)/dt∈ F(x(t))}.
Since F is a Marchaud map, we have that for every T>0 and for every x∈ℝ^d, S(T,x)≠∅. Similarly for every l≥1, for every T>0 and for every x∈ℝ^d, let S^(l)(T,x) denote the set of solutions of DI (<ref>) on [0,T]. Formally,
S^(l)(T,x):={x:[0,T]→ℝ^d : x is absolutely continuous with x(0)=x and for a.e. t∈[0,T], dx(t)/dt∈ F^(l)(x(t))}.
From Lemma <ref>, we know that for every l≥1, F^(l) is a Marchaud map and hence for every T>0 and for every x∈ℝ^d, S^(l)(T,x)≠∅.
For any Y⊆ℝ^d, for any T>0, define S(T,Y):=∪_y∈ YS(T,y). Similarly, for every l≥1, S^(l)(T,Y):=∪_y∈ YS^(l)(T,y).
The next lemma summarizes some important relationships between the solutions of DI (<ref>) and those of DI (<ref>) needed later. It also states that for large enough l≥1, the solutions of DI (<ref>) are within an ϵ-neighborhood of the solutions of DI (<ref>) for every initial condition lying in a compact subset of ℝ^d.
For every T>0,
(a) for every l≥1, for every x∈ℝ^d, S(T,x)⊆ S^(l+1)(T,x)⊆ S^(l)(T,x).
(b) for every x∈ℝ^d, S(T,x)=∩_l≥1S^(l)(T,x).
(c) for any Y⊆ℝ^d, S(T,Y)=∩_l≥1S^(l)(T,Y).
(d) for every Y⊆ℝ^d compact, S(T,Y) is a compact subset of 𝒞([0,T],ℝ^d) (the vector space of ℝ^d-valued continuous functions on [0,T]).
(e) for every Y⊆ℝ^d compact, for every l≥1, S^(l)(T,Y) is a compact subset of 𝒞([0,T],ℝ^d).
(f) for every Y⊆ℝ^d compact, for every ϵ>0, there exists l'≥1, such that for every l≥ l', for every x^(l)∈ S^(l)(T,Y), there exists x∈ S(T,Y), such that sup_t∈[0,T]x(t)-x^(l)(t)< ϵ.
Fix T>0.
(a) Fix l≥1 and x∈ℝ^d. Let x∈ S(T,x). Then we have that x is absolutely continuous with x(0)=x and for a.e. t∈[0,T], dx(t)/dt∈ F(x(t)). By Lemma <ref>(b), we know that for every t∈ [0,T], F(x(t))⊆ F^(l+1)(x(t)). Therefore for a.e. t∈[0,T], dx(t)/dt∈ F^(l+1)(x(t)), from which we get that x∈ S^(l+1)(T,x).
Hence S(T,x)⊆ S^(l+1)(T,x). Using the fact that for every x'∈ℝ^d, F^(l+1)(x')⊆ F^(l)(x') (see Lemma <ref>(b)), a similar argument gives us that S^(l+1)(T,x)⊆ S^(l)(T,x).
(b) Fix x∈ℝ^d. From part (a) of this lemma we have that S(T,x)⊆∩_l≥1S^(l)(T,x). Let x∈∩_l≥1S^(l)(T,x). Then x is absolutely continuous with x(0)=x and for every l≥1, for a.e. t∈[0,T], dx(t)/dt∈ F^(l)(x(t)).
Thus for a.e. t∈[0,T], for every l≥1, dx(t)/dt∈ F^(l)(x(t)). Hence for a.e t∈[0,T], dx(t)/dt∈∩_l≥1F^(l)(x(t))=F(x(t)), where the equality follows from Lemma <ref>(e). Therefore x∈ S(T,x), from which we get that ∩_l≥1S^(l)(T,x)⊆ S(T,x).
(c) Follows from part (a) and (b) of this lemma.
(d) & (e) Follows from <cit.>.
(f) Suppose not. Then there exists Y⊆ℝ^d compact and ϵ>0, such that for every l'≥1, there exists l≥ l' and x^(l)∈ S^(l)(T,Y), such that d(x^(l),S(T,Y))≥ϵ, where d(x^(l),S(T,Y)):=inf_x∈ S(T,Y)sup_t∈[0,T]x^(l)(t)-x(t). Thus we can obtain a sequence of solutions, say {x^(l_k)}_k≥1, such that for every k≥1, 1≤ l_k<l_k+1 and x^(l_k)∈ S^(l_k)(T,Y) with d(x^(l_k),S(T,Y))≥ϵ. From part (a) of this lemma, we have that for every k≥1, S^(l_k)(T,Y)⊆ S^(1)(T,Y) and hence {x^(l_k)}_k≥1⊆ S^(1)(T,Y). Since Y⊆ℝ^d is compact, by part (e) of this lemma we know that S^(1)(T,Y) is a compact subset of 𝒞([0,T],ℝ^d). Thus there exists a subsequence of {x^(l_k)}_k≥1, say {x^(l_k_j)}_j≥1 such that x^(l_k_j)→x^* as j→∞ in 𝒞([0,T],ℝ^d) and x^*∈ S^(1)(T,Y). Since for every j≥1, d(x^(l_k_j),S(T,Y))≥ϵ, we get that d(x^*,S(T,Y))≥ϵ and hence x^*∉ S(T,Y).
From part (a) of this lemma, we get that for every l≥1, for J:=min{j≥1:l_k_j≥ l}, {x^(l_k_j)}_j≥ J⊆ S^(l)(T,Y). Further by part (e) of this lemma we have that for every l≥ 1, S^(l)(T,Y) is a compact subset of 𝒞([0,T],ℝ^d). Thus for every l≥1, x^*∈ S^(l)(T,Y) and hence x^*∈∩_l≥1S^(l)(T,Y)=S(T,Y) (see part (c) of this lemma). This leads to a contradiction.
The part (f) of the above lemma provides the necessary approximation result. Further since the set-valued maps F^(l) admit a single-valued parametrization (f^(l) as in Lemma <ref>), a solution of DI (<ref>) can be viewed as a solution of the ordinary differential equation (o.d.e.) given by,
dx/dt=f^(l)(x,u(t)),
for some u:[0,∞)→ U measurable and vice versa. The lemma below summarizes some useful results on the solutions of o.d.e. (<ref>) and its vector field.
For every l≥1,
(a) for every T>0, for any u:[0,T]→ U measurable, for every initial condition, the set of solutions of o.d.e. (<ref>) is non-empty. That is, for every x_0∈ℝ^d, there exists x:[0,T]→ℝ^d such that, x is absolutely continuous, x(0)=x_0 and for a.e. t∈ [0,T], dx(t)/dt=f^(l)(x(t),u(t)).
(b) for every T>0, for every Y⊆ℝ^d compact, there exists C_1(Y,T,l)>0, such that for every u:[0,T]→ U measurable, every solution of o.d.e. (<ref>) with initial condition in Y, say x:[0,T]→ℝ^d, satisfies,
sup_t∈[0,T]x(t)≤ C_1(Y,T,l).
(c) for any Y⊆ℝ^d compact, there exists L(Y,l)>0, such that for every T>0, for every u:[0,T]→ U, the map h:Y×[0,T]→ℝ^d, given by h(x,t):=f^(l)(x,u(t)) for every (x,t)∈ Y× [0,T], satisfies,
h(x,t)-h(x',t)≤ L(Y,l)x-x',
for every x, x'∈ Y and for every t∈[0,T].
(d) for every T>0, for every u:[0,T]→ U, for every initial condition, o.d.e. (<ref>) admits a unique solution.
Fix l≥1.
(a) Fix T>0 and u:[0,T]→ U measurable. The proof of this part is a direct application of <cit.>. We show here that the sufficient conditions required to apply the said theorem are satisfied by the vector field of the o.d.e. (<ref>). First, we show that f^(l)(·,u(·)) is a Caratheodory function (see <cit.>). By Lemma <ref>, it is clear that for every t∈[0,T], the map x→ f^(l)(x,u(t)) is continuous and for every x∈ℝ^d, the map t→ f^(l)(x,u(t)) is measurable. Further by Lemma <ref>(b), we have that for any c>0, for every x∈ℝ^d with x≤ c, for every t∈[0,T], f^(l)(x,u(t))≤ K^(l)(1+c). Thus f^(l)(·,u(·)) is a Caratheodory function. The final condition to verify is on the rate of growth of solutions. By Lemma <ref>(b), f^(l)(x,u(t))≤ψ(x):=K^(l)(1+x). The function ψ:[0,∞)→ [0,∞) is clearly positive everywhere and the function 1/ψ is locally integrable on [0,∞). A simple argument gives us that for every r>0, the integral ∫_r^∞dr̃/ψ(r̃) can be lower bounded by the tail of 1/K^(l)∑_n=1^∞1/n. Hence for every r>0, ∫_r^∞dr̃/ψ(r̃)=∞. Now <cit.> can be applied to obtain the required result.
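Explicitly, with ψ(r̃)=K^(l)(1+r̃), the divergence can also be verified directly:

```latex
\int_{r}^{\infty}\frac{d\tilde{r}}{\psi(\tilde{r})}
  =\frac{1}{K^{(l)}}\lim_{R\to\infty}\log\frac{1+R}{1+r}=\infty .
```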
(b) Fix T>0 and Y⊆ℝ^d compact. Since Y is compact, there exists r>0 such that sup_y∈ Yy≤ r. Set C_1(Y,T,l):=(r+K^(l)T)e^K^(l)T, where K^(l)>0 is as in Lemma <ref>(b). For some u:[0,T]→ U measurable and for some x_0∈ Y, let x:[0,T]→ℝ^d be a solution of o.d.e. (<ref>) with initial condition x_0. Then, for every t∈[0,T], x(t)=x_0+∫_0^tf^(l)(x(s),u(s))ds and hence for every t∈[0,T]
x(t) ≤x_0+∫_0^tf^(l)(x(s),u(s))ds
≤ r+K^(l)T+K^(l)∫_0^tx(s)ds
where, (<ref>) follows from the fact that x_0∈ Y and Lemma <ref>(b). The required bound follows from (<ref>) and Gronwall's result (see <cit.>).
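For completeness, applying Gronwall's inequality to the last display yields

```latex
\|x(t)\|\le\big(r+K^{(l)}T\big)e^{K^{(l)}t}
       \le\big(r+K^{(l)}T\big)e^{K^{(l)}T}=C_1(Y,T,l),\qquad t\in[0,T],
```

which is the claimed bound.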
(c) Fix Y⊆ℝ^d compact. It is enough to show that there exists L(Y,l)>0, such that for every y_1, y_2∈ Y, sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)≤ L(Y,l)y_1-y_2. From Lemma <ref>(c), we know that for every x_0∈ Y, there exists δ(x_0,l)>0 and L(x_0,l)>0, such that for every x,x'∈ x_0+δ(x_0,l)U, for every u∈ U, f^(l)(x,u)-f^(l)(x',u)≤ L(x_0,l)x-x'. Let 𝒢:={x_0+δ(x_0,l)/2Ů:x_0∈ Y}, where Ů denotes the interior of U. Since Y is compact and 𝒢 is an open cover of Y, there exists {x_1,x_2,…,x_k}⊆ Y, such that Y⊆∪_i=1^k(x_i+δ(x_i,l)/2Ů). Set δ(Y,l):=min_1≤ i≤ kδ(x_i,l)/2 and L_0(Y,l):=max_1≤ i≤ kL(x_i,l).
Let (y_1,y_2)∈ (Y× Y)∩{(y_1,y_2): y_1-y_2<δ(Y,l)}. Then we know that there exists i∈{1,…,k}, such that y_1∈ x_i+δ(x_i,l)/2U. Further since y_1-y_2<δ(Y,l)≤δ(x_i,l)/2, we have that y_2∈ x_i +δ(x_i,l)U. Therefore y_1,y_2∈ x_i+δ(x_i,l)U and hence, for every u∈ U, f^(l)(y_1,u)-f^(l)(y_2,u)≤ L(x_i,l)y_1-y_2≤ L_0(Y,l)y_1-y_2. Thus for every (y_1,y_2)∈ (Y× Y)∩{(y_1,y_2): y_1-y_2<δ(Y,l)}, sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)≤ L_0(Y,l)y_1-y_2.
Let E:=(Y× Y)∩{(y_1,y_2):y_1-y_2≥δ(Y,l)}. By Lemma <ref>(b), the map (y_1,y_2)∈ Y× Y→sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u) is well defined. Further using the fact that for every (y_1,y_2), (y_1',y_2')∈ Y× Y, |sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)-sup_u∈ Uf^(l)(y_1',u)-f^(l)(y_2',u)|≤sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_1',u)+sup_u∈ Uf^(l)(y_2,u)-f^(l)(y_2',u) and Lemma <ref>(c), we have that the map (y_1,y_2)→sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u) is continuous. Thus the map (y_1,y_2)∈ E→sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)/y_1-y_2 is a continuous function on the compact set E and hence achieves a maximum, say L_1(Y,l)≥0. Therefore for every (y_1,y_2)∈ (Y× Y)∩{(y_1,y_2):y_1-y_2≥δ(Y,l)}, sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)≤ L_1(Y,l)y_1-y_2.
Thus from the arguments in the two preceding paragraphs we have that there exists L(Y,l):=max{L_0(Y,l),L_1(Y,l)}, such that, for every y_1,y_2∈ Y, sup_u∈ Uf^(l)(y_1,u)-f^(l)(y_2,u)≤ L(Y,l)y_1-y_2.
(d) Using parts (b) and (c) of this lemma, the proof of uniqueness follows from arguments similar to that of <cit.>.
§.§ Bounding procedure
In this section we show that the lower bound on the probability of the event that the iterates converge to an attracting set given that after a large number of iterations the iterates lies in a neighborhood of it depends mainly on the additive noise terms.
In order to accomplish this we first define some terms which are a measure of the distance of the linearly interpolated trajectory of recursion (<ref>), that is X̅ (see (<ref>)), to the solutions of the DI (<ref>) over a T>0 length time interval, among others. Recall from section <ref> that 𝒪'⊆ℝ^d is an open neighborhood of the attracting set A (as in assumption (A4)) with compact closure, such that A⊆𝒪'⊆𝒪̅'̅⊆𝒪, where 𝒪 denotes the fundamental neighborhood of A. Thus we can find an ϵ_0>0, such that N^ϵ_0(𝒪̅'̅)⊆𝒪 and N^2ϵ_0(A)⊆𝒪', where for any ϵ>0, N^ϵ(·) denotes the ϵ-neighborhood of a set. Further, since A is an attracting set for the flow of DI (<ref>), for ϵ_0>0 as obtained above, there exists T_A>0, such that for every x∈𝒪, for every t≥ T_A, Φ(t,x)∈ N^ϵ_0(A).
Throughout the rest of this paper ϵ_0 and T_A will denote the constants as obtained above.
For every T>0, for every n≥0,
* : let τ(n,T):=min{k≥ n: t(k)≥ t(n)+T}, where t(n), for every n≥0 are as defined in section <ref>. That is τ(n,T) denotes the first iterate such that, at least time T has elapsed since the n^th iteration. Further the time elapsed from iteration n to iteration τ(n,T), be denoted by Δ(n,T), that is Δ(n,T):=t(τ(n,T))-t(n). Then by the choice of our step sizes we have that T≤Δ(n,T)≤ T+1.
* : for every ω∈Ω, ρ(ω,n,T):=inf_x∈ S(T,𝒪̅'̅)sup_t∈[0,T]X̅(ω,t+t(n))-x(t), where S(T,𝒪̅'̅) denotes the set of solutions of DI (<ref>) as defined in equation (<ref>).
* : for every ω∈Ω, for every l≥1, let x̅^(l)(·;n,T,ω):[0,T]→ℝ^d denote the unique solution of the o.d.e.
dx/dt=f^(l)(x,u(t;n,T,ω)),
with initial condition x̅^(l)(0;n,T,ω)=X_n(ω), where u(·;n,T,ω):[0,T]→ U is defined such that, for every t∈[0,T], u(t;n,T,ω):=U_k^(l)(ω), where U_k^(l) is as in equation (<ref>) and k is such that t+t(n)∈ [t(k),t(k+1)) (for a proof of existence and uniqueness of solutions to o.d.e. (<ref>), see Lemma <ref>). It is easy to see that for every l≥1, x̅^(l)(·;n,T,ω)∈ S^(l)(T,X_n(ω)), where S^(l)(T,X_n(ω)) denotes the set of solutions of DI (<ref>), as defined in (<ref>).
* : for every ω∈Ω, for every l≥1, ρ^(l)_1(ω,n,T):=sup_t∈[0,T]X̅(ω,t+t(n))-x̅^(l)(t;n,T,ω) and ρ^(l)_2(ω,n,T):=inf_x∈ S(T,𝒪̅'̅)sup_t∈[0,T]x̅^(l)(t;n,T,ω)-x(t).
* : for any T_u≥ T_A, for any n_0≥0, let {n_m}_m≥1 denote a subsequence of natural numbers defined such that for every m≥0, T_A≤ T_m:= t(n_m+1)-t(n_m)≤ T_u.
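To make the bookkeeping in the items above concrete, the following is a minimal Python sketch of how τ(n,T) and Δ(n,T) can be computed from the step sizes, assuming the standard convention t(0)=0, t(n)=∑_j<n a(j), and a(j)≤1 (so that Δ(n,T)∈[T,T+1]); the step-size choice a(j)=1/(j+1) is purely illustrative.

```python
def tau(n, T, a):
    # first index k >= n with t(k) >= t(n) + T, where t(k) = sum_{j<k} a(j);
    # the loop terminates because the step sizes are assumed to sum to infinity
    t_n = sum(a(j) for j in range(n))
    t_k, k = t_n, n
    while t_k < t_n + T:
        t_k += a(k)          # t(k+1) = t(k) + a(k)
        k += 1
    return k

def Delta(n, T, a):
    # elapsed time t(tau(n,T)) - t(n); lies in [T, T+1] when a(j) <= 1
    return sum(a(j) for j in range(n, tau(n, T, a)))

a = lambda j: 1.0 / (j + 1)  # illustrative step sizes
print(tau(10, 2.0, a), Delta(10, 2.0, a))
```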
Now we collect sample paths of interest using the quantities ρ, ρ_1^(l) and ρ_2^(l). The next lemma summarizes results in this regard.
For every T_u≥ T_A, for every n_0≥0, for every l≥1, for every event E∈ℱ_n_0, such that E⊆{ω : X_n_0(ω)∈𝒪'}, for every {n_m}_m≥1 as in <ref>,
(a) for every M≥0,
E ∩(∩_m=0^M{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)+ρ_2^(l)(ω,n_m,T_m)<ϵ_0}) ⊆ E ∩(∩_m=0^M{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0})
⊆{ω∈Ω: X_n_M+1(ω)∈𝒪'},
(b)
ℙ(E ∩(∩_m≥0{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)+ρ_2^(l)(ω,n_m,T_m)<ϵ_0})) ≤ℙ(E ∩(∩_m≥0{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}))
≤ℙ(E∩{ω∈Ω: X_n(ω)→ A as n→∞}),
where, {T_m}_m≥0 is as in <ref>.
Fix n_0≥0, l≥1 and E∈ℱ_n_0, such that E⊆{ω∈Ω:X_n_0(ω)∈𝒪'}.
(a) For every m≥0, for every ω∈Ω, from <ref> and <ref>, it is clear that,
ρ(ω,n_m,T_m)≤ρ_1^(l)(ω,n_m,T_m)+ρ_2^(l)(ω,n_m,T_m),
from which we get that for every m≥0,
{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)+ρ_2^(l)(ω,n_m,T_m)<ϵ_0}⊆{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}.
Therefore,
E ∩(∩_m=0^M{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)+ρ_2^(l)(ω,n_m,T_m)<ϵ_0})⊆ E ∩(∩_m=0^M{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}).
The proof of the second inclusion follows from induction. Fix M=0 and ω∈ E∩{ω∈Ω:ρ(ω,n_0,T_0)<ϵ_0}. Then X_n_0(ω)∈𝒪'. Since T_0≥ T_A, we have that for every x∈ S(T_0,𝒪̅'̅), x(T_0)∈ N^ϵ_0(A). Further, since ρ(ω,n_0,T_0)<ϵ_0 and by Lemma <ref>(d), we get that there exists x∈ S(T_0,𝒪̅'̅), such that X̅(ω,t(n_1))-x(T_0)=X_n_1(ω)-x(T_0)<ϵ_0 and hence X_n_1(ω)∈ N^2ϵ_0(A)⊆𝒪'. Therefore ω∈{ω∈Ω: X_n_1(ω)∈𝒪'}. Thus the inclusion is true for M=0. Suppose the inclusion is true for some M>0. Let ω∈ E ∩(∩_m=0^M+1{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}). Since the inclusion is true for M, we have that X_n_M+1(ω)∈𝒪'. Now by arguments exactly the same as those for the base case (that is, for M=0) we get that X_n_M+2(ω)∈𝒪'. Therefore the inclusion is true for M+1.
(b) The first inequality follows from part (a) of this lemma. We shall provide a proof of the second inequality. Let ω∈ E ∩(∩_m≥0{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}). Then by part (a) of this lemma we have that for every m≥0, X_n_m(ω)∈𝒪'. Since 𝒪̅'̅ is compact, by Lemma <ref>(d), we have that, S(T_u,𝒪̅'̅) is a compact subset of 𝒞([0,T_u],ℝ^d), and hence there exists C(𝒪̅'̅,T_u)>0 such that, sup_x∈ S(T_u,𝒪̅'̅)sup_t∈[0,T_u]x(t)≤ C(𝒪̅'̅,T_u). Further since for every m≥0, T_m≤ T_u, we get that sup_x∈ S(T_m,𝒪̅'̅)sup_t∈[0,T_m]x(t)≤sup_x∈ S(T_u,𝒪̅'̅)sup_t∈[0,T_u]x(t)≤ C(𝒪̅'̅,T_u). By our choice of ω, we have that for every m≥0, ρ(ω,n_m,T_m)<ϵ_0 and by <ref>, we get that for every m≥0, sup_t∈ [0,T_m]X̅(ω,t+t(n_m))≤ C(𝒪̅'̅,T_u)+ϵ_0. Therefore ω is such that sup_n≥0X_n(ω)<∞ and for every m≥0, X_n_m(ω)∈𝒪'. Thus λ(ω) (see equation (<ref>) for definition) is non-empty, compact and λ(ω)∩𝒪̅'̅⊆λ(ω)∩ B(A)≠∅, where B(A) denotes the basin of attraction of the attracting set A. By Theorem <ref>(a), we have that for almost every ω in E ∩(∩_m≥0{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}) the iterates converge to the attracting set A. Therefore we get that ℙ( E ∩(∩_m≥0{ω∈Ω:ρ(ω,n_m,T_m)<ϵ_0}))≤ℙ(E∩{ω∈Ω: X_n(ω)→ A as n→∞}).
The quantity ρ_1^(l) as in <ref> captures the difference between the linearly interpolated trajectory of recursion (<ref>) and the solution of the o.d.e. (<ref>) over a T>0 length time interval. This difference can be shown to comprise two components, namely the error due to discretization and the error due to additive noise terms. By the step size assumption, that is (A2), we know that the step sizes are converging to zero. Hence intuition suggests that after a large number of iterations have elapsed the discretization error must be negligible and the contribution to the difference term ρ_1^(l) is mainly due to the additive noise terms. This is made precise in the lemma below. A brief outline of the proof of this lemma, which follows from Lemma <ref>(c) and <cit.>, is presented in Appendix <ref>.
For every l≥1, for every T_u≥ T_A, there exists N_0'≥1, such that for every n_0≥ N_0', for every E∈ℱ_n_0 such that, E⊆{ω∈Ω:X_n_0(ω)∈𝒪'}, for every sequence {n_m}_m≥0 as in <ref>, for every m≥0, we have,
ℙ(ℬ_m-1^(l)∩{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)≥ϵ_0/2})≤ℙ({ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}∩ℬ^(l)_m-1),
where,
* ℬ_-1^(l_0):=E and for every M≥0, ℬ_M^(l_0):=E∩(∩_m=0^M{ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m)<ϵ_0}),
* for every j≥1, ζ_j:=∑_n=0^j-1a(n)M_n+1, where {M_n}_n≥1 denote the additive noise terms as defined in assumption (A3),
* {T_m}_m≥0 is as in <ref> and K_0(T_u)>0 is a positive constant increasing in T_u.
Suppose event E as in the lemma above occurs with some positive probability. Then the next lemma says that the lower bound of ℙ({ω∈Ω : X_n(ω)→ A as n→∞}|E) depends mainly on the additive noise terms for n_0 large.
For every T_u≥ T_A, there exists l_0≥1 and N_0'≥1, such that for every n_0≥ N_0', for every E∈ℱ_n_0 such that, E⊆{ω∈Ω:X_n_0(ω)∈𝒪'} and ℙ(E)>0, for every sequence {n_m}_m≥0 as in <ref>, we have,
ℙ({ω∈Ω : X_n(ω)→ A as n→∞}|E)≥1-∑_m=0^∞ℙ(max_n_m≤ j≤ n_m+1ζ_j-ζ_n_m≥ϵ_0/4K_0(T_u)|ℬ^(l_0)_m-1),
where, the sequence of events {ℬ_m^(l_0)}_m≥-1, the sequence of random vectors {ζ_j}_j≥1 and the constant K_0(T_u) are as defined in Lemma <ref>.
By Lemma <ref>(f), we get that there exists l_0≥1 (depending on 𝒪̅'̅, T_u and ϵ_0) such that for every x^(l_0)∈ S^(l_0)(T_u,𝒪̅'̅), there exists x∈ S(T_u,𝒪̅'̅) such that sup_t∈[0,T_u]x^(l_0)(t)-x(t)<ϵ_0/2. Further by Lemma <ref>(a) and definition of E, we get that for every m≥0, ℬ_m-1^(l_0)⊆{ω∈Ω:X_n_m(ω)∈𝒪'}. Therefore, for every ω∈ℬ_m-1^(l_0), x̅^(l_0)(·;n_m,T_m,ω)∈ S^(l_0)(T_m,𝒪̅'̅) and
ρ_2^(l_0)(ω,n_m,T_m) =inf_x∈ S(T_m,𝒪̅'̅)sup_t∈[0,T_m]x̅^(l_0)(t;n_m,T_m, ω)-x(t)
≤inf_x∈ S(T_u,𝒪̅'̅)sup_t∈[0,T_u]x̅^(l_0)(t;n,T_u,ω)- x(t)
<ϵ_0/2,
where (<ref>) follows from the fact that T_m≤ T_u and (<ref>) follows from our choice of l_0 and <ref>. Therefore for every m≥-1,
ℬ_m^(l_0)∩{ω∈Ω:ρ_1^(l_0)(ω,n_m+1,T_m)+ρ_2^(l_0)(ω,n_m+1,T_m)≥ϵ_0}⊆ℬ_m^(l_0)∩{ω∈Ω:ρ_1^(l_0)(ω,n_m+1,T_m)≥ϵ_0/2},
and hence,
ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m+1,T_m)+ρ_2^(l_0)(ω,n_m+1,T_m)≥ϵ_0}|ℬ_m^(l_0))≤ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m+1,T_m)≥ϵ_0/2}|ℬ_m^(l_0)).
By Lemma <ref>, we know that there exists N_0'≥1 such that, for every n_0≥ N_0', for every m≥0,
ℙ(ℬ_m-1^(l_0)∩{ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)≥ϵ_0/2})≤ℙ(ℬ_m-1^(l_0)∩{ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}),
from which it follows that,
ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)≥ϵ_0/2}|ℬ_m-1^(l_0))≤ℙ({ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}|ℬ_m-1^(l_0)).
For l_0≥1 as obtained above and for n_0≥ N_0', we have that,
ℙ(X_n→ A as n→∞|E) ≥ℙ(∩_m≥0{ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m)< ϵ_0}|E)
=1-ℙ(∪_m≥0{ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m)≥ϵ_0}|E)
=1-ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_0,T_m)+ρ_2^(l_0)(ω,n_0,T_m)≥ϵ_0}|ℬ^(l_0)_-1)
-∑_m=1^∞ ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m)≥ϵ_0}|ℬ_m-1^(l_0))ℙ(ℬ_m-1^(l_0)|ℬ_-1^(l_0))
≥1- ∑_m=0^∞ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m) ≥ϵ_0}|ℬ_m-1^(l_0)),
where, (<ref>) follows from Lemma <ref>(b), (<ref>) follows from the observation that,
(∪_m≥0{ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m) +ρ_2^(l_0)(ω,n_m,T_m)≥ϵ_0})∩ E=
∪_m≥0({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0)(ω,n_m,T_m)≥ϵ_0}∩ℬ_m-1^(l_0)),
(where the union in R.H.S. is disjoint) and (<ref>) follows from the fact that for every m≥0, ℙ(ℬ_m-1^(l_0)|ℬ_-1^(l_0))≤ 1. Using (<ref>) and (<ref>) in (<ref>), we get that there exists l_0≥1 and N_0'≥1, such that for every n_0≥ N_0', for every E∈ℱ_n_0 such that E⊆{ω∈Ω: X_n_0(ω)∈𝒪'} and ℙ(E)>0,
ℙ(X_n→ A as n→∞|E) ≥1- ∑_m=0^∞ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)+ρ_2^(l_0) (ω,n_m,T_m)≥ϵ_0}|ℬ_m-1^(l_0))
≥1-∑_m=0^∞ℙ({ω∈Ω:ρ_1^(l_0)(ω,n_m,T_m)≥ϵ_0/2}| ℬ_m-1^(l_0))
≥ 1-∑_m=0^∞ℙ({ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m (ω)≥ϵ_0/4K_0(T_u)}|ℬ_m-1^(l_0)).
§.§ Review of the probability bounding procedure for the additive noise terms
In this section we fix l_0 and n_0≥ N_0', where l_0 and N_0' are as in Lemma <ref>, and provide an upper bound for ℙ({ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}|ℬ_m-1^(l_0)), for every m≥0. The proof of the bounding procedure is similar to that of <cit.> and we provide a brief outline here for the sake of completeness.
(a) From recursion (<ref>), we have that for every m≥0, for every n_m≤ j≤ n_m+1-1, for every ω∈Ω, there exists V_j(ω)∈ F(X_j(ω)), such that,
X_j+1(ω)-X_j(ω)-a(j)M_j+1(ω)=a(j)V_j(ω).
By assumption (A1)(ii), we know that V_j(ω)≤ K(1+X_j(ω)) and hence for n_m≤ j≤ n_m+1-1,
X_j+1(ω)≤X_j(ω)(1+a(j)K)+a(j)K+a(j)M_j+1.
Further by assumption (A3), we get that, for every m≥0, for almost every ω∈Ω, for n_m≤ j≤ n_m+1-1,
X_j+1(ω)≤X_j(ω)(1+2a(j)K)+2a(j)K.
Now by arguments as in <cit.>, we get that, for every m≥0, for almost every ω∈Ω, for n_m≤ j≤ n_m+1,
X_j(ω)≤ e^2K T_u(X_n_m(ω)+2KT_u).
(b) Clearly {ζ_j-ζ_n_m,ℱ_j}_n_m≤ j≤ n_m+1 is a martingale. By (<ref>) and (A3), we get that for n_m≤ j<n_m+1,
ζ_j+1-ζ_j=a(j)M_j+1≤ a(j)K(1+X_j)≤ a(j) K(1+e^2KT_u(X_n_m+2KT_u)). Since for every ω∈ℬ^(l_0)_m-1, X_n_m(ω)∈𝒪' (whose closure is compact), there exists a C>0, such that X_n_m(ω)≤ C. Therefore for every m≥0, for every ω∈ℬ^(l_0)_m-1,
for every n_m≤ j<n_m+1, ζ_j+1-ζ_j≤ a(j)K(1+e^2KT_u(C+2KT_u)). Thus applying the concentration inequality for martingales, by arguments exactly the same as in the proof of <cit.>, we get that for every m≥0,
ℙ({ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}|ℬ_m-1^(l_0))≤ 2de^-K̃/(b(n_m)-b(n_m+1))
where, K̃:=ϵ_0^2/(32(K_0(T_u))^2dK(1+e^2KT_u(C+2KT_u))).
Proof of Theorem <ref>:
Let l_0≥1 and N_0' be as in Lemma <ref>. By definition of b(·), we get that there exists N_0”≥1, such that for every n≥ N_0”, b(n)<K̃. Define N_0:=max{N_0',N_0”}. Let n_0≥ N_0 and {n_m:=τ(n_m-1,T_A)}_m≥1. {n_m}_m≥1 as defined satisfies the conditions mentioned in <ref>. Then by Lemma <ref> and (<ref>), we get that for n_0≥ N_0,
ℙ(X_n→ A as n→∞|E)≥ 1-2d∑_m=0^∞e^-K̃/(b(n_m)-b(n_m+1)).
We know that e^-K̃/x/x→0 as x→0 and increases with x for 0<x<K̃. Therefore by our choice of n_0, we get that,
e^-K̃/(b(n_m)-b(n_m+1))/b(n_m)-b(n_m+1)≤e^-K̃/b(n_0)/b(n_0),
from which it follows that for every m≥0, e^-K̃/(b(n_m)-b(n_m+1))≤ (b(n_m)-b(n_m+1))e^-K̃/b(n_0)/b(n_0).
Substituting the above in (<ref>), we get that for every n_0≥ N_0,
ℙ(X_n→ A as n→∞|E) ≥ 1-2d∑_m=0^∞e^-K̃/(b(n_m)-b(n_m+1))
≥ 1-2d∑_m=0^∞ (b(n_m)-b(n_m+1))e^-K̃/b(n_0)/b(n_0)
=1-2de^-K̃/b(n_0)/b(n_0)∑_m=0^∞ (b(n_m)-b(n_m+1))
=1-2de^-K̃/b(n_0).
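To get a feel for the rate at which this lower bound approaches one, the following snippet evaluates 1-2de^-K̃/b(n_0) numerically, assuming b(n) is the tail sum ∑_j≥ n a(j)^2 (consistent with the telescoping b(n_m)-b(n_m+1) used above) and the illustrative step sizes a(j)=1/(j+1); the values of d and K̃ are hypothetical placeholders, not constants computed from a specific problem instance.

```python
import numpy as np

def b(n, tail=10**6):
    # truncated tail sum b(n) = sum_{j>=n} a(j)^2 for a(j) = 1/(j+1)
    j = np.arange(n, n + tail)
    return np.sum(1.0 / (j + 1.0) ** 2)

d, Ktilde = 2, 0.05   # hypothetical dimension and constant K-tilde
for n0 in [10, 100, 1000, 10000]:
    print(n0, 1 - 2 * d * np.exp(-Ktilde / b(n0)))
# the bound is vacuous for small n0 and approaches 1 as n0 grows, since b(n0) -> 0
```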
§ PROOF OF FINITE RESETS THEOREM (THM. <REF>)
From the definition of χ_n in equation (<ref>), we know that χ_n takes the value one if there is a reset at the n^th iterate and is zero otherwise. Therefore ∑_n=1^∞χ_n denotes the total number of resets.
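Since Algorithm <ref> itself is stated earlier in the paper, the following is only a plausible Python sketch of a reset mechanism consistent with the description here: the iterate is checked at a sparse (roughly doubling) sequence of times and reset to a fixed point whenever it has left a growing ball r_k U. All names (F_sample, x0, T_W, the radius schedule r_k=2(k+1)) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sa_with_resets(F_sample, x0, a, r, T_W=1.0, n_steps=10**4, rng=None):
    # Run X_{n+1} = X_n + a(n)*(noisy drift sample); at check times spaced by
    # roughly doubling intervals of length T_W, reset the iterate to x0 if it
    # has left the ball of radius r(k), the radius growing with check index k.
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    t, next_check, k, resets = 0.0, T_W, 0, 0
    for n in range(n_steps):
        x = x + a(n) * F_sample(x, rng)
        t += a(n)
        if t >= next_check:
            if np.linalg.norm(x) > r(k):   # escaped r_k * U: reset (chi_n = 1)
                x, resets = np.array(x0, dtype=float), resets + 1
            k += 1
            next_check += (2 ** k) * T_W
    return x, resets

# toy mean field f(x) = -x with additive Gaussian noise; {0} is globally attracting
x_final, n_resets = sa_with_resets(
    lambda x, g: -x + g.normal(size=x.shape),
    x0=[5.0], a=lambda n: 1.0 / (n + 1), r=lambda k: 2.0 * (k + 1))
print(x_final, n_resets)
```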
Suppose the event {∑_n=1^∞χ_n≥ k} has zero probability for some k≥1. Then for k≥1, such that ℙ(∑_n=1^∞χ_n≥ k)=0, we have ℙ(∑_n=1^∞χ_n< k)=1, from which Theorem <ref> trivially follows. Therefore without loss of generality assume ℙ(∑_n=1^∞χ_n≥ k)>0, for every k≥1.
For every k≥0, let G_k denote the event that there are at most k resets and G_∞ denote the event that there are finitely many resets. That is, for every k≥0, G_k:={∑_n=1^∞χ_n≤ k} and G_∞:={∑_n=1^∞χ_n<∞}. Then it is clear that, for every k≥ 1, G_k⊆ G_k+1 and G_∞=∪_k≥0G_k. Therefore lim_k→∞ℙ(G_k) exists and ℙ(G_∞)=lim_k→∞ℙ(G_k). For any k≥1,
ℙ(G_k)=ℙ(∑_n=1^∞χ_n≤ k)
=ℙ({∑_n=1^∞χ_n≤ k-1}∪{∑_n=1^∞χ_n = k})
=ℙ(G_k-1)+ℙ(∑_n=1^∞χ_n =k).
The event {∑_n=1^∞χ_n=k} can be written as a disjoint union of events as below. For every k≥1
{∑_n=1^∞χ_n=k}=∪_n_0≥1[{∑_n=1^n_0-1χ_n=k-1}∩{χ_n_0=1}∩{∑_n=n_0+1^∞χ_n=0}],
where, {∑_n=1^0χ_n=k-1}:=Ω. Let J(k):={n_0≥ 1: ℙ({∑_n=1^n_0-1χ_n=k-1}∩{χ_n_0=1})>0}. Then for every k≥1,
(a) By arguments in the second paragraph of this section we have that ℙ(G_k-1^c)=ℙ(∑_n=1^∞χ_n≥ k)>0. Further the event {∑_n=1^∞χ_n≥ k} can be written as a disjoint union of events as below.
{∑_n=1^∞χ_n≥ k}=∪_n_0≥1[{∑_n=1^n_0-1χ_n=k-1}∩{χ_n_0=1}],
from which it follows that,
0<ℙ({∑_n=1^∞χ_n≥ k})=∑_n_0=1^∞ℙ({∑_n=1^n_0-1χ_n=k-1}∩{χ_n_0=1}).
Therefore J(k)≠∅.
(b) min{n_0∈ J(k)}≥ k, since there cannot be k resets in less than k iterations.
From (<ref>) and definition of J(k), we have that for every k≥1,
ℙ(∑_n=1^∞χ_n=k)=∑_n_0∈ J(k)ℙ(∑_n=n_0+1^∞χ_n=0|∑_n=1^n_0-1χ_n=k-1,χ_n_0=1)
ℙ(∑_n=1^n_0-1χ_n=k-1,χ_n_0=1).
Step 1 (Obtaining 𝒪', ϵ_0 and T_A) : By (A4)', we have that A is a globally attracting set of DI (<ref>). Let r̃>0 be such that A⊆r̃U. By definition of a globally attracting set and <cit.>, we get that for any r≥r̃, rU is a fundamental neighborhood of A. Let k_1≥1 be such that r_k_1≥r̃. Set the fundamental neighborhood 𝒪:=r_k_1+1U and 𝒪':=r_k_1U. Obtain ϵ_0>0 and T_A>0 as in section <ref>. That is ϵ_0>0 is such that N^2ϵ_0(A)⊆𝒪'⊆ N^ϵ_0(𝒪̅'̅)⊆𝒪 and T_A>0, is such that for every x∈𝒪, for every t≥ T_A, Φ(t,x)∈ N^ϵ_0(A).
Step 2 (Obtaining {n_m}_m≥1 as in <ref>) : Clearly there exists k_2≥1, such that for every k≥ k_2, T_A≤2^kT_W. For any n_0≥1, for every m≥1, define n_m:=n_2^k_2,m-1, where for every 1≤ j≤ 2^k_2, n_j,m-1:=τ(n_j-1,m-1,T_W) with n_0,m-1:=n_m-1 and τ(·,·) is as defined in <ref>. Therefore for every m≥1, T_m-1:=t(n_m)-t(n_m-1)=∑_j=0^2^k_2-1Δ(n_j,m-1,T_W), where Δ(·,·) is as defined in <ref>. Thus for every m≥0, T_A≤ 2^k_2T_W≤ T_m≤ 2^k_2T_W+2^k_2 and hence T_u=2^k_2T_W+2^k_2.
Step 3 (Redefining trajectories) : Define X̅, as defined in (<ref>), with the iterates {X_n}_n≥0 (iterates before reset check) generated by Algorithm <ref>. For every n≥1, define X̃(·,·;n):Ω×[t(n),∞)→ℝ^d such that for every (ω,t)∈Ω×[t(n),t(n+1)),
X̃(ω,t;n):=(t-t(n)/t(n+1)-t(n))X_n+1(ω)+(t(n+1)-t/t(n+1)-t(n))X'_n(ω),
and for every (ω,t)∈[t(n+1),∞), X̃(ω,t;n)=X̅(ω,t).
Step 4 (Obtaining parameters) : By arguments exactly the same as the ones used to obtain (<ref>), we get that, for every l≥1, for every n≥0, there exists a U-valued random variable on Ω, say Ũ^(l)_n such that, for every ω∈Ω,
X_n+1(ω)-X_n'(ω)-a(n)M_n+1(ω)= a(n)f^(l)(X_n'(ω),Ũ^(l)_n(ω)).
Step 5 (Redefining distance measures) :
For every ω∈Ω, for every n≥ n'≥1, for every T>0, for every l≥1,
(a) let x̃^(l)(·;n,n',T,ω):[0,T]→ℝ^d denote the unique solution of the o.d.e.
dx/dt=f^(l)(x,ũ(t;n,n',T,ω)),
with initial condition x̃^(l)(0;n,n',T,ω)=X_n'(ω), where ũ(·;n,n',T,ω):[0,T]→ U is defined such that, for every t∈[0,T], ũ(t;n,n',T,ω):=Ũ_k^(l)(ω), where Ũ_k^(l) is as in equation (<ref>) and k is such that t+t(n)∈ [t(k),t(k+1)) (for a proof of existence and uniqueness of solutions to o.d.e. (<ref>), see Lemma <ref>). It is easy to see that for every l≥1, x̃^(l)(·;n,n',T,ω)∈ S^(l)(T,X_n'(ω)), the set of solutions of DI (<ref>), as defined in (<ref>).
(b) define,
(1) ρ̃(ω,n,n',T):=inf_x∈ S(T,𝒪̅'̅)sup_t∈[0,T]X̃(ω,t+t(n);n')-x(t),
(2)ρ̃^(l)_1(ω,n,n',T):=sup_t∈[0,T]X̃(ω,t+t(n);n')-x̃^(l)(t;n,n',T,ω),
(3)ρ̃^(l)_2(ω,n,n',T):=inf_x∈ S(T,𝒪̅'̅)sup_t∈[0,T]x̃^(l)(t;n,n',T,ω)-x(t).
Step 6 (Collecting sample paths) : Fix k> max{k_1,k_2} and n_0∈ J(k). By our definition of ℱ_n_0 (see section <ref>), we have that E(k,n_0):={∑_n=1^n_0-1χ_n=k-1,χ_n_0=1}∈ℱ_n_0 and is contained in {X_n_0'(ω)∈𝒪'}. Given that there has been a reset at index n_0, the next reset check is performed by Algorithm <ref> at the iteration index n_2^k-k_2. So for n_0+1≤ j< n_2^k-k_2, X_j(ω)=X_j'(ω). From arguments exactly the same as Lemma <ref>(a), we get that,
E(k,n_0)∩(∩_m=0^2^k-k_2-1{ρ̃^(l)_1(ω,n_m,n_0,T_m)+ρ̃^(l)_2 (ω,n_m,n_0,T_m)<ϵ_0})
⊆ E(k,n_0)∩(∩_m=0^2^k-k_2-1{ρ̃(ω,n_m,n_0,T_m)<ϵ_0})
⊆{ω∈Ω:X_n_2^k-k_2(ω)∈𝒪'}
⊆{ω∈Ω: X_n_2^k-k_2(ω)=X_n_2^k-k_2'(ω)}
⊆{∑_n=n_0+1^n_2^k-k_2χ_n=0}
where, (<ref>) follows from the fact that k≥ k_1 and hence 𝒪'= r_k_1U⊆ r_kU. It is also worth mentioning here that the proof of Lemma <ref>(a) holds irrespective of how the iterates are generated. Given that ω∈ E(k,n_0)∩(∩_m=0^2^k-k_2-1{ρ̃^(l)_1(ω,n_m,n_0,T_m)+ρ̃^(l)_2(ω,n_m,n_0,T_m)<ϵ_0}), along this sample path there has been a reset at n_0 and at the next check performed at n_2^k-k_2 there has been no reset. Hence the next check for reset is performed by Algorithm <ref> at n_2^(k-k_2)+1. Again from arguments from Lemma <ref>(a), we get that,
E(k,n_0)∩(∩_m=0^2^(k-k_2)+1-1{ρ̃^(l)_1(ω,n_m,n_0,T_m)+ρ̃^(l)_2(ω,n_m,n_0,T_m)<ϵ_0}) ⊆{ω∈Ω:X_2^(k-k_2)+1(ω)∈𝒪'}
⊆{∑_n=n_0+1^n_2^(k-k_2)+1χ_n=0}.
Repeating the above for the third reset check after n_0 and so on, we obtain that,
E(k,n_0)∩(∩_m≥0{ρ̃^(l)_1(ω,n_m,n_0,T_m)+ρ̃^(l)_2(ω,n_m,n_0,T_m)<ϵ_0})
⊆{∑_n=n_0+1^∞χ_n=0}.
Step 7 (Bounding) : Define ℬ̃^(l)_-1:=E(k,n_0) and for every M≥1 define,
ℬ̃^(l)_M:=E(k,n_0)∩(∩_m=0^M{ρ̃^(l)_1(·,n_m,n_0,T_m)+ρ̃^(l)_2(·,n_m,n_0,T_m)<ϵ_0}).
Note that as in Lemma <ref>, we can obtain l_0≥1, such that for every m≥0, for every ω∈ℬ̃^(l_0)_m-1 we have that ρ̃^(l_0)_2(ω,n_m,n_0,T_m)<ϵ_0/2, since for every ω∈ℬ̃^(l_0)_m-1, X_n_m(ω)∈𝒪' and, whether or not a reset check is performed at this index, we have that X_n_m(ω)=X_n_m'(ω). Thus for such an l_0, mimicking the proof of Lemma <ref>, we obtain that,
ℙ(∩_m=0^M{ρ̃^(l_0)_1(·,n_m,n_0,T_m)+ρ̃^(l_0)_2(·,n_m,n_0,T_m)<ϵ_0}|ℬ̃^(l_0)_-1)≥ 1-∑_m=0^∞ℙ(ρ̃^(l_0)_1(·,n_m,n_0,T_m)≥ϵ_0/2|ℬ̃^(l_0)_m-1).
From Lemma <ref>, we have that for k> max{k_1,k_2,N_0'}, for every n_0∈ J(k),
ℙ(∩_m=0^M{ρ̃^(l_0)_1(·,n_m,n_0,T_m)+ρ̃^(l_0)_2(·,n_m,n_0,T_m)<ϵ_0}|ℬ̃^(l_0)_-1)≥ 1-∑_m=0^∞ℙ(max_n_m≤ j≤ n_m+1ζ_j-ζ_n_m≥ϵ_0/4K_0(T_u)|ℬ̃^(l_0)_m-1).
Step 8 (Noise bound) : Similar to item (a) in section <ref>, from Algorithm (<ref>), we have that for every m≥0, for every n_m≤ j≤ n_m+1-1, X_j+1≤X_j'(1+a(j)K)+a(j)K+a(j)M_j+1 and since X_j+1'≤X_j+1, we get that for every n_m≤ j≤ n_m+1-1,
X'_j+1≤X_j'(1+a(j)K)+a(j)K+a(j)M_j+1.
Now by arguments exactly the same as those in item (a) of section <ref>, we get that for every m≥0, for every n_m≤ j≤ n_m+1-1, X_j+1'≤ e^2K T_u(X_n_m+2KT_u). Now by using the concentration inequality as in item (b) of section <ref>, we get that for every m≥0,
ℙ(max_n_m≤ j≤ n_m+1ζ_j-ζ_n_m≥ϵ_0/4K_0(T_u)|ℬ̃_m-1^(l_0))≤2de^-K̃/(b(n_m)-b(n_m+1)).
Using (<ref>), (<ref>) and (<ref>) we get that, for every k≥max{k_1,k_2,N_0} (where N_0 is as defined in the proof of Theorem <ref>), for every n_0∈ J(k),
ℙ(∑_n=n_0+1^∞χ_n=0|∑_n=1^n_0-1χ_n=k-1,χ_n_0=1)≥ 1-2d∑_m=0^∞e^-K̃/(b(n_m)-b(n_m+1))≥1-2de^-K̃/b(n_0).
Substituting the above in (<ref>) and using the fact that for n≤ n', b(n')≤ b(n), we get that, for every k≥max{k_1,k_2,N_0},
ℙ(∑_n=1^∞χ_n=k) ≥(1-2de^-K̃/b(k))∑_n_0=1^∞ℙ({∑_n=1^n_0-1χ_n=k-1}∩{χ_n_0=1})
=(1-2de^-K̃/b(k))ℙ({∑_n=1^∞χ_n≥ k})
Substituting (<ref>) in (<ref>), we get that for every k≥max{k_1,k_2,N_0},
ℙ(G_k)≥ℙ(G_k-1)+(1-2de^-K̃/b(k))ℙ(G_k-1^c)
≥ 1-2de^-K̃/b(k).
Letting k→∞ in the above equation and using the fact that ℙ(G_∞)=lim_k→∞ℙ(G_k), we get that ℙ(G_∞)=1.
§ CONCLUSIONS AND DIRECTIONS FOR FUTURE WORK
We have extended the lock-in probability result (Theorem <ref>) in <cit.> to stochastic approximation schemes with set-valued drift functions, which serves as an important tool for analyzing recursions when their stability is not guaranteed. The extension to set-valued maps allows one to obtain lock-in probability for stochastic approximation schemes with measurable drift functions and schemes where the drift function itself possesses a non-additive unknown noise component (see <cit.>). Further, using Theorem <ref>, in the presence of a locally attracting set for the mean field, we have provided an alternate condition for verifying convergence in the absence of a stability guarantee, which involves checking whether the iterates enter, infinitely often, an open neighborhood of the attractor with compact closure. In the presence of a globally attracting set, our modified recursion in Algorithm <ref> converges almost surely to the globally attracting set; the proof relies on the method used to obtain the lock-in probability result.
In the future we wish to consider other applications of the lock-in probability result such as sample complexity (see <cit.>) and almost sure convergence under tightness of the iterates (see <cit.>). Another interesting direction is to explore various additive noise models where the above result can be extended for the case of set-valued drift functions.
§ DEFINITIONS OF SOME TOPOLOGICAL CONCEPTS
Let (ℳ,Γ) be a topological space. {O_i}_i∈ I is a covering of ℳ if, for every i∈ I, O_i⊆ℳ and ∪_i∈ IO_i=ℳ. Further, a covering {O_i}_i∈ I is said to be locally finite if for every p∈ℳ, there exists an O∈Γ with p∈ O, such that O_i∩ O≠∅ for only finitely many i∈ I. Given any two coverings 𝒞:={O_i}_i∈ I and 𝒞':={O_j'}_j∈ J, 𝒞' is said to be a refinement of 𝒞 if, for every i∈ I, there exists a j∈ J such that O_i⊆ O_j'. 𝒞' is said to be a locally finite refinement of 𝒞 if 𝒞' is a refinement of 𝒞 and is locally finite. The topological space (ℳ,Γ) is paracompact if it is a Hausdorff space and if every open covering has a locally finite open refinement.
A family of functions {ψ_i}_i∈ I is called a locally Lipschitz partition of unity if for all i∈ I,
* ψ_i is locally Lipschitz continuous and non negative,
* the supports of ψ_i, defined as {p∈ℳ:ψ_i(p)≠0}, are a closed locally finite covering of ℳ,
* for each p∈ℳ, ∑_i∈ Iψ_i(p)=1.
A partition of unity {ψ_i}_i∈ I is said to be subordinated to the covering {O_i}_i∈ I, if for every i∈ I, {p∈ℳ:ψ_i(p)≠0}⊆ O_i.
§ PROOF OF LEMMA <REF>
Fix l≥1 and T_u≥ T_A. Fix n_0≥ 1 and {n_m}_m≥0 as in <ref>. Fix m≥0. Let ω∈{ω∈Ω:ρ_1^(l)(ω,n_m,T_m)≥ϵ_0/2}∩ℬ_m-1^(l). Then, by Lemma <ref>(a) we have that, X_n_m(ω)∈𝒪'. By (<ref>) and <ref>, we have that for every 0≤ t≤ T_m, there exists n_m≤ k≤ n_m+1-1 such that t+t(n_m)∈ [t(k),t(k+1)], and
X̅(ω,t+t(n_m))=α X_k(ω)+(1-α)X_k+1(ω),
for some α∈ [0,1] and
x̃^(l)(t;n_m,T_m,ω)= X_n_m(ω)+∫_0^tf^(l)(x̃^(l)(q;n_m,T_m,ω),u(q;n_m,T_m,ω))dq.
Therefore for any t∈ [0,T_m],
X̅(ω,t+t(n_m))-x̃^(l)(t;n_m,T_m,ω) ≤αX_k(ω)-x̃^(l)(t(k)-t(n_m);n_m,T_m,ω)
+αx̃^(l)(t(k)-t(n_m);n_m,T_m,ω)-x̃^(l)(t;n_m,T_m,ω)
+(1-α)x̃^(l)(t(k+1)-t(n_m);n_m,T_m,ω)-x̃^(l)(t;n_m,T_m,ω)
+(1-α)X_k+1(ω)-x̃^(l)(t(k+1)-t(n_m);n_m,T_m,ω)
Now the aim is to provide an upper bound for each term in the R.H.S. of the above inequality which is independent of m and t. In order to apply <cit.>, the only additional condition needed is the Lipschitz continuity of f^(l)(·,·) uniformly over u, and this is obtained using local Lipschitz continuity as follows. Let M_1>0 be such that, for every x∈𝒪', x≤ M_1. By Lemma <ref>(b) and item (a) in section <ref>, we have that for r:=max{C_1(𝒪̅'̅,T_u,l), e^2KT_u(M_1+2KT_u)},
sup_t∈[0,T_m]X̅(ω,t+t(n_m))≤ r and sup_t∈ [0,T_m]x̃^(l)(t;n_m,T_m,ω)≤ r.
Further by Lemma <ref>(c), we know that there exists L(r,l)>0, such that for every x,x'∈ r U, for every t∈ [0, T_m]
f^(l)(x,u(t;n_m,T_m,ω))-f^(l)(x',u(t;n_m,T_m,ω))≤ L(r,l)x-x'.
The rest of the bounding procedure is exactly the same as <cit.>. We obtain that, for every t∈ [0,T_m],
sup_t∈ [0,T_m]X̅(ω,t+t(n_m))-x̃^(l)(t;n_m,T_m,ω)≤ ( M_1+KT_u)e^2L(r,l)T_uL(r,l)∑_j≥0a(n_0+j)^2
+e^L(r,l)T_umax_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)
+(M_1+KT_u)e^L(r,l)T_ua(n_0).
Now set N_0' such that (M_1+KT_u)e^2L(r,l)T_uL(r,l)∑_j≥0a(n_0+j)^2+(M_1+KT_u)e^L(r,l)T_ua(n_0)<ϵ_0/4 for every n_0≥ N_0', and define K_0(T_u):=e^L(r,l)T_u. Then, for every n_0≥ N_0', we have that ω∈{ω∈Ω:max_n_m≤ j≤ n_m+1ζ_j(ω)-ζ_n_m(ω)≥ϵ_0/4K_0(T_u)}∩ℬ^(l)_m-1, from which Lemma <ref> follows.
| It is well known that several optimization and control tasks can be cast as a root finding problem. That is, given f:ℝ^d→ℝ^d, one needs to find x^*∈ℝ^d, such that f(x^*)=0 (given such a point exists). Due to practical considerations, one usually has access to noisy measurements/estimations of the function whose root needs to be determined. An approach to solving such a problem with noisy measurements of f, is given by the recursion,
X_n+1-X_n-a(n)M_n+1=a(n)f(X_n),
where {M_n}_n≥1, denotes the noise arising in the measurement of f and having fixed an initial condition (X_0∈ℝ^d), the iterates {X_n}_n≥1 are generated according to recursion (<ref>). <cit.> under certain assumptions which include the Lipschitz continuity of the function f, boundedness of the iterates along almost every sample path (that is ℙ(sup_n≥0X_n<∞)=1) and a condition which ensures that the eventual contribution of the additive noise terms is negligible, showed that the linearly interpolated trajectory of recursion (<ref>) tracks the flow of the ordinary differential equation (o.d.e.) given by,
dx/dt=f(x).
Such a trajectory is called an asymptotic pseudotrajectory for the flow of o.d.e. (<ref>) (for a precise definition see <cit.>). If the set of zeros of f is a globally asymptotically stable set for the flow of o.d.e. (<ref>), then it was shown that the limit set of an asymptotic pseudotrajectory is contained in this set, and hence the iterates {X_n}_n≥0 converge in the limit to a root of the function f.
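For illustration, the following toy Python simulation of recursion (<ref>) with f(x)=-x (whose unique root 0 is globally asymptotically stable for o.d.e. (<ref>)) shows the iterates settling near the root; the step sizes and the standard Gaussian noise model are illustrative choices consistent with the standing assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = 3.0
for n in range(20000):
    a_n = 1.0 / (n + 1)                 # square-summable but not summable steps
    x = x + a_n * (-x + rng.normal())   # noisy measurement of f(X_n) = -X_n
print(x)  # close to the root 0 for large n
```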
In order to analyze recursion (<ref>) when the function f is no longer Lipschitz continuous or even continuous, but is just measurable satisfying the linear growth property, that is for every x∈ℝ^d, f(x)≤ K(1+x) for some K>0, or when there is a non-additive noise/control component taking values in a compact set whose law is not known
(in which case the recursion (<ref>) takes the form X_n+1-X_n-a(n)M_n+1=a(n)f(X_n,U_n), where U_n denotes the noise/control), the above mentioned o.d.e. method needed to be extended to recursions with much weaker requirements on the function f. This was accomplished in <cit.>, where the asymptotic behavior of the recursion given by,
X_n+1-X_n-a(n)M_n+1∈ a(n)F(X_n),
was studied, where F is a set-valued map satisfying some conditions (while the other quantities have same interpretation as in (<ref>)). Under the assumption of stability of iterates (that is ℙ(sup_n≥0X_n<∞)=1) and appropriate conditions on the additive noise terms, in <cit.>, it was shown that the linearly interpolated trajectory of recursion (<ref>) tracks the flow of the differential inclusion (d.i.) given by,
dx/dt∈ F(x).
We refer the reader to <cit.> for a detailed argument as to how the measurable case and the case with unknown noise/control be recast in the form of recursion (<ref>). For a brief summary of the convergence analysis of recursion (<ref>) we refer the reader to section <ref> of this paper.
Common to the analysis of both recursion (<ref>) and (<ref>) is the assumption on the stability of the iterates, that is ℙ(sup_n≥0X_n<∞)=1. The condition of stability is highly non-trivial and difficult to verify. Over the years significant effort has gone into providing sufficient conditions for stability (see <cit.>, <cit.>). In <cit.>, it was shown that for recursion (<ref>), in the absence of stability guarantee, the probability of converging to an attracting set of o.d.e. (<ref>) given that the iterates lie in a neighborhood of it converged to one as the index (n) in which the iterate entered the neighborhood of the attracting set increased to infinity. This probability of the iterates converging to an attracting set given that the iterate lies in a neighborhood of it is called the lock-in probability and in <cit.> a lower bound for the same was used to obtain sample complexity bounds for recursion (<ref>). Further a tighter lower bound for the lock-in probability was derived in <cit.> under a slightly stronger noise assumption and used to obtain convergence guarantee when the law of the iterates are tight. In this paper we extend the results in <cit.> to the case of stochastic approximation schemes with set-valued maps as in recursion (<ref>).
§.§ Contributions and organization of the paper
We first provide a lower bound for the lock-in probability of stochastic approximation schemes with set-valued maps as in recursion (<ref>). The bound is derived under an assumption on the additive noise terms which is stronger than the corresponding assumption in <cit.>; this is necessitated by the lack of Lipschitz continuity of the drift function F. We establish that,
ℙ(X_n→ A as n→∞|X_n_0∈𝒪')≥ 1-2de^-K̃/b(n_0),
for n_0 large, where A⊆ℝ^d denotes an attracting set of DI (<ref>), 𝒪' is an open neighborhood of A with compact closure, K̃ is some positive constant, and {b(n)}_n≥0 is a step-size-dependent sequence of reals converging to zero.
Having summarized the convergence analysis under stability in section <ref>, we state the lock-in probability bound in section <ref> and provide a few implications of the same. Using the lock-in probability result we provide an alternate criterion for convergence in the presence of a locally attracting set which removes the need to verify stability. A detailed comparison between the obtained convergence guarantee and the corresponding guarantee in the presence of stability is also provided.
Proof of the lock-in probability result is presented in section <ref>. The proof relies heavily on the insights obtained from the analysis in <cit.> for single-valued maps. From the analysis in <cit.>, it is evident that the Lipschitz continuity of the drift function f plays a crucial role in obtaining events and decoupling error contributions which in turn are necessary to obtain the bound in the inequality above. But in the recursion studied in this paper (that is, recursion (<ref>)), the drift function F is set-valued and, under the assumptions we impose on the said recursion (summarized in section <ref>), F is not even continuous. We overcome this problem by first obtaining a sequence of locally Lipschitz continuous set-valued maps which approximate the drift function F from above and then parameterizing them using the Steiner selection procedure. The associated results are summarized in section <ref>. This enables us to write recursion (<ref>) in the form of recursion (<ref>), but with locally Lipschitz continuous drift functions. Further, the relation between the solutions of differential inclusions with the approximating set-valued maps as their vector field and those of DI (<ref>) is established in section <ref>. Having written recursion (<ref>) in the form of recursion (<ref>), we then collect sample paths of interest in section <ref>. Along the sample paths that are collected the iterates are such that, having entered a neighborhood of the attracting set at iteration n_0, the iterates will infinitely often enter the said neighborhood and the time elapsed between successive visits to the neighborhood of the attracting set can be upper bounded by a constant which is mean field dependent. Further we show that the probability of occurrence of such sample paths can be lower bounded by error contributions due to additive noise terms alone after a large number of iterations. Using the concentration inequality for martingale sequences we obtain the lock-in probability bound in section <ref>.
Using the lock-in probability result we design a feedback mechanism which enables us to stabilize the stochastic approximation scheme in the presence of a globally attracting set for DI (<ref>). The feedback mechanism involves resetting the iterates at regular time intervals if they are found to be lying outside a certain compact set. This approach to stabilization has been studied in various forms for stochastic approximation schemes with single-valued drift functions as in recursion (<ref>), in <cit.>, <cit.>, <cit.> and <cit.> to name a few. We extend the same to the case of set-valued drift functions. The main idea in the analysis of such a scheme is to show that along almost every sample path of the modified recursion, the number of resets that are performed is finite, thereby guaranteeing that eventually the iterates lie within a compact set. We observe that the lock-in probability result (to be precise the approach adopted to obtain the lock-in probability result) plays a central role in showing that the number of resets performed remains finite. Having shown that the iterates eventually lie within a compact set, we use the convergence arguments from <cit.> to argue that the iterates generated by the modified scheme converge to the globally attracting set of DI (<ref>). The modified scheme is presented and explained in detail in section <ref>. The proof of the finite resets theorem is presented in section <ref>. The procedure employed to collect sample paths in the proof of the lock-in probability result can be used to collect sample paths where only a finite number of resets have occurred in the modified scheme, and this in turn enables us to show that the number of resets is finite almost surely.
Finally, we conclude by providing a few directions for future work in section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/1701.08058v1 | 20170127141512 | Optimal Communication Strategies in Networked Cyber-Physical Systems with Adversarial Elements | [
"Emrah Akyol",
"Kenneth Rose",
"Tamer Basar",
"Cedric Langbort"
] | cs.GT | [
"cs.GT",
"cs.CR",
"cs.IT",
"cs.MA",
"math.IT"
] |
Optimal Communication Strategies in Networked Cyber-Physical Systems with Adversarial Elements
Emrah Akyol,
Kenneth Rose,
Tamer Başar, and Cédric Langbort
E. Akyol, T. Başar and C. Langbort are with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, 1308
West Main Street, Urbana, IL 61801, USA email: {akyol, basar1, langbort}@illinois.edu. K. Rose is with the Department
of Electrical and Computer Engineering, University of California, Santa Barbara,
CA, 93106 USA e-mail: rose @ece.ucsb.edu.
The material in this paper was presented
in part at the IEEE International Symposium on Information Theory (ISIT), Turkey, July 2013 and at the Conference on Decision and Game Theory for Security(GameSec) Nov. 2013, Forth Worth, Texas, USA.
This work was supported in part by NSF under grants CCF-1016861, CCF-1118075, CCF-1111342, CCF-1320599 and also by an Office of Naval Research (ONR) MURI Grant N00014-16-1-2710.
This paper studies optimal communication and coordination strategies in cyber-physical systems for both defender and attacker within a game-theoretic framework. We model the communication network of a cyber-physical system as a sensor network which involves one single Gaussian source observed by many sensors, subject to additive independent Gaussian observation noises. The sensors communicate with the estimator over a coherent Gaussian multiple access channel. The aim of the receiver is to reconstruct the underlying source with minimum mean squared error. The scenario of interest here is one where some of the sensors are captured by the attacker and they act as the adversary (jammer): they strive to maximize distortion. The receiver (estimator) knows the captured sensors but still cannot simply ignore them due to the multiple access channel, i.e., the outputs of all sensors are summed to generate the estimator input. We show that the ability of transmitter sensors to secretly agree on a random event, that is “coordination", plays a key role in the analysis. Depending on the coordination capability of the sensors and the receiver, we consider three different problem settings. The first setting involves transmitters and the receiver with “coordination" capabilities. Here, all transmitters can use identical realization of randomized encoding for each transmission. In this case, the optimal strategy for the adversary sensors also exploits coordination, where they all generate the same realization of independent and identically distributed Gaussian noise. In the second setting, the transmitter sensors are restricted to use deterministic encoders, and this setting, which corresponds to a Stackelberg game, does not admit a saddle-point solution. We show that the optimal strategy for all sensors is uncoded communications where encoding functions of adversaries and transmitters are aligned in opposite directions. In the third, and last, setting where only a subset of the transmitter and/or jammer sensors can coordinate, we show that the solution radically depends on the fraction of the transmitter sensors that can coordinate.
In the second half of the paper, we extend our analysis to an asymmetric scenario where we remove the assumption of identical power and noise variances for all sensors. Limiting the optimal strategies to conditionally affine mappings, we derive the optimal power scheduling over the sensors. We show that optimal power scheduling renders coordination superfluous for the attacker, when the transmitter sensors exploit coordination, as the attacker allocates all adversarial power to one sensor. In the setting where coordination is not allowed, both the attacker and the transmitter sensors distribute power among all available sensors to utilize the well-known estimation diversity in distributed settings.
§ INTRODUCTION
Cyber-physical systems (CPSs) are large-scale interconnected systems of heterogeneous, yet collaborating,
components that provide integration of computation with physical processes <cit.>. The inherent heterogeneity and integration of different components in CPS pose new security challenges<cit.>. One such security challenge pertains to the CPS communication network.
Most CPSs rely on the presence of a Wireless Sensor Network (WSN)
composed of distributed nodes that communicate their measurements to a central state estimator (fusion center) with
higher computation capabilities. Efficient and reliable communication of these measurements is a critical aspect of WSN
systems that determine usability of the infrastructure. Consider the architecture shown in Figure 1 where multiple sensors observe the state of the plant and transmit their observations over a wireless multiple access channel (MAC) to a central estimator (fusion center) which decides on the control action. The sensors in such architectures are known to be vulnerable to various attacks, see e.g., <cit.> and the references therein. For example, sensors may be captured and analyzed such that the attacker
gains insider information about the communication scheme and networking protocols. The attacker can then reprogram the
compromised sensors and use them to launch the so-called Byzantine attack<cit.>, where the objective of these adversarial sensors can be i) to distort the estimate made at the fusion center, which corresponds to a zero-sum game where the transmitting sensors aim to minimize some distortion associated with the state measurements while the objective of the attacker is to maximize it, or ii) to strategically craft messages to deceive the estimator in a way that renders its estimate close to a predetermined, biased value<cit.>, as was done in the replay attacks of StuxNet in SCADA systems <cit.>. This paper presents an information/communication theoretic approach to Bayesian optimal sensor fusion in the presence of Byzantine sensors for the first setting, while a preliminary analysis of the second case can be found in <cit.>.
We analyze the communication scenario from the perspective of joint source-channel coding (JSCC) which has certain advantages over separate source and channel coding for sensor networks; see e.g., <cit.> and the references therein. In this paper, we extend the game theoretic analysis of the Gaussian test channel <cit.> to Gaussian sensor networks studied by <cit.>. In <cit.>, the performance of a simple uncoded communication is studied, in conjunction with optimal power assignment over the sensors given a sum power budget. For a particular symmetric setting, Gastpar showed that indeed this uncoded scheme is optimal over all encoding/decoding methods that allow arbitrarily high delay <cit.>. However, it is well understood that in more realistic asymmetric settings, the uncoded communication scheme is suboptimal, and in fact, the optimal communication strategies are unknown for these settings<cit.>.
Information-theoretic analysis of the scaling behavior of such sensor networks, in terms of the number of sensors, is provided in <cit.>.
In this paper, building on our earlier work on the topic <cit.>, we consider three settings for the sensor network model, which is illustrated in Figure 2 and described in detail in Section II. The first M sensors (i.e., the transmitters) and the single receiver constitute Player 1 (minimizer) and the remaining K sensors (i.e., the adversaries) constitute Player 2 (maximizer). Formulated as a zero-sum game, this setting does not admit a saddle point in pure strategies (deterministic encoding functions), but admits one in mixed strategies (randomized functions). In the first setting we consider, the transmitter sensors are allowed to use randomized encoders, i.e., all transmitters and the receiver agree on some (pseudo)random sequence, denoted as {γ} in the paper. We coin the term “coordination" for this capability, show that it plays a pivotal role in the analysis and the implementation of optimal strategies for both the transmitter and the adversarial sensors, and provide the mixed-strategy saddle-point solution in Theorem 1. In the second setting, we have a hierarchical scheme; it can be viewed as a Stackelberg game where Player 1 is the leader, restricted to pure strategies, and Player 2 is the follower, who observes Player 1's choice of pure strategies and plays accordingly. We present in Theorem 2 the optimal strategies for this Stackelberg game, whose cost is strictly higher than the cost associated with the first setting. The sharp contrast between the two settings underlines the importance of “coordination" in sensor networks with adversarial nodes. In the third setting, we consider the case where only a given subset of the transmitters, and also of the adversarial sensors, can coordinate. We show that if the number of transmitter sensors that can coordinate is sufficiently high (compared to ones that cannot), then the problem becomes a zero-sum game with a saddle-point, where the coordination-capable transmitters use a randomized linear strategy and the remaining transmitters are not used at all. It may at first appear counterintuitive to forgo utilization of the second set of transmitter sensors, but the gain from coordination (by the first set of transmitter sensors) more than compensates for this loss. Coordination is also important for the adversarial sensors. When transmitters coordinate, adversaries would benefit from coordination to generate identical realizations of Gaussian jamming noise. In contrast with transmitters, the adversarial sensors which cannot coordinate are of use: they generate independent copies of identically distributed Gaussian jamming noise. Otherwise, i.e., if the number of coordinating transmitters is not sufficiently high, transmitters use deterministic (pure strategies) linear encoding, and the optimal adversarial strategy is also uncoded communications in the opposite direction of the transmitters.
In the second part of the paper, we extend the analysis to asymmetric settings where the sensing and/or communication channels, and the allowed transmission power, of each sensor are different. For this setting, information-theoretically optimal source-channel coding strategies are unknown (see e.g., <cit.> for inner and outer bounds of optimal performance). Here, we assume that the sensors use uncoded (zero-delay) linear communication strategies, which are optimal for the symmetric setting. We also allow another coordination capability to the sensors to combat this inherent heterogeneity: we assume a total power limit over the sensors which allows for power allocation over sensors. We assume this power allocation optimization capability is also available to the adversarial sensors. We derive optimal power scheduling strategies for the transmitter and the adversarial sensors for both settings, i.e., with or without coordination[Here, the term coordination refers to the sensors' ability to generate identical realizations of a (pseudo)random sequence.]. We show that the power allocation capability renders coordination superfluous for the adversarial sensors, while it is still beneficial to the transmitter sensors.
This paper is organized as follows: In Section II, we formulate the problem. In Section III, we present our results pertaining to the symmetric setting, and in Section IV, we analyze the asymmetric case. In Section V, we present conclusions and discuss possible future directions of research.
§ PRELIMINARIES
§.§ Notation
In general, lowercase letters (e.g., x) denote scalars, boldface lowercase (e.g., x) vectors, uppercase (e.g., U, X) matrices and random variables, and boldface uppercase (e.g., X) random
vectors. The k^th element of vector x is denoted by [ x]_k. 𝔼(·), ℙ(·), ℝ, and ℝ^+ denote, respectively, the expectation and probability operators, and the sets of real and positive real numbers. Bern(p) denotes the Binary random variable, taking values 1 with probability p and -1 with probability 1-p. Gaussian distribution with mean vector μ and covariance matrix R is denoted as 𝒩(μ,R). The mutual information of random variables X and Y is denoted by I(X;Y).
§.§ Problem Formulation
The sensor network model is depicted in Figure 2. The underlying source {S(i)} is a sequence of i.i.d. real valued Gaussian random variables with zero mean and unit variance[Normalizing the variance to 1 does not lead to any loss of generality.]. Sensor m ∈ [1:M+K] observes a sequence {U_m(i)} defined as
U_m (i)= S(i)+β_m W_m(i),
where {W_m(i)} is a sequence of i.i.d. Gaussian random variables with zero mean and unit variance, independent of {S(i)}, and β_m ∈ℝ^+ is the deterministic fading coefficient for the sensing channel. Sensor m ∈ [1:M+K] can apply arbitrary Borel measurable function g_m^N:ℝ^N →ℝ^N to the observation sequence of length N, U_m so as to generate the vector of length N channel inputs X_m=g_m^N( U_m) under power constraint:
1/N∑_i=1^N𝔼{X_m^2(i)}≤ P_m
The channel output is then given as
Y(i)=Z(i)+∑_m=1^M+Kα_m X_m(i)
where {Z(i)} is a sequence of i.i.d. Gaussian random variables of zero mean and unit variance, independent of {S(i)} and {W_m(i)} and α_m ∈ℝ^+ is the deterministic fading coefficient for the communication channel of the m-th sensor. The receiver applies a Borel measurable function h^N: ℝ^N →ℝ^N to the received length-N vector Y to generate Ŝ
Ŝ = h^N( Y)
that minimize the cost, which is measured as mean squared error (MSE) between the underlying source S and the estimate at the receiver Ŝ as
J({g_m^N(·)}_m=1^M+K,h^N(·))= 1/N∑_i=1^N𝔼{(S(i)-Ŝ(i))^2}.
Game model: There are two players: transmitter sensors and the receiver constitute Player 1 who seeks to minimize (<ref>) over {g_m^N (·)}_m=1^M and h^N(·). Player 2 comprises the adversarial sensors whose common objective is to maximize (<ref>) by properly choosing {g_k^N(·)}_k=M+1^M+K. Since there is a complete conflict of interest, this problem constitutes a zero-sum game. We primarily consider the Stackelberg solution where Player-1 is the leader, who plays first as a consequence of being the leader, and Player-2 is the follower, who responds to the strategies of Player-1. The game proceeds as follows: Player-1 plays first and announces its mappings. Player-2, knowing the mappings of Player-1, determines its own mappings that maximize (<ref>), given the strategy of Player 1. Player-1, of course, will anticipate this, and pick its mappings accordingly. The adversarial sensors have access to the knowledge of the strategy of the transmitter sensors (except the sequence of coordination variables {γ} that enables Player-1 to use randomized strategies) while the receiver has access to the strategies of all sensors, i.e., the receiver also knows the statistics of the sensors captured by the adversary. We also note that the statistics of the variables, and the problem parameters, including the fading coefficients, are common knowledge.
More formally, we are primarily interested in
J_U ≜min_{g_m^N}_m=1^M, h^Nmax_{g_k^N}_k=M+1^M+K J ({g_m^N}_m=1^M,{g_k^N}_k=M+1^M+K,h^N )
which is the upper value of the game.
Some of the settings we analyze here admit, a special case of the described Stackelberg solution: a saddle-point solution. A transmitter-receiver-adversarial policy (g_m^N*,g_k^N*,h^N*) constitutes a saddle-point solution if it satisfies the pair of inequalities
J({g_m^N*}_m=1^M,{g_k^N}_k=M+1^M+K,h^N*) ≤ J({g_m^N*}_m=1^M,{g_k^N*}_k=M+1^M+K,h^N*) ≤ J({g_m^N}_m=1^M,{g_k^N*}_k=M+1^M+K,h^N)
We also show that whenever a saddle-point solution exists, it is essentially unique[In these settings, multiple strategies, that are different only upto a sign change, yield the same cost. To account for such trivially equivalent forms, we use the term “essentially unique."]. At the saddle point, it is well-known that the following holds (cf. <cit.>):
J_U=J({g_m^N*}_m=1^M,{g_k^N*}_k=M+1^M+K,h^N*)=J_L
which we will refer as the saddle-point cost throughout the paper, where
J_L=max_{g_k^N}_k=M+1^M+Kmin_{g_m^N}_m=1^M, h^N J ({g_m^N}_m=1^M,{g_k^N}_k=M+1^M+K,h^N )
We are primarily concerned with the information-theoretic analysis of fundamental limits, and hence we take N→∞.
In this paper, we consider three different problem settings (denoted as settings I, II and III), depending on the “coordination" capabilities of sensors. A salient encoding strategy that we will frequently encounter in this paper is the uncoded[Throughout this paper, we use “uncoded", “zero-delay" interchangeably to denote “symbol-by-symbol" coding structure. ] linear communication strategy where the N-letter communication mapping g_m^N consists of N identical linear maps:
g_m(U_m(i))=c_mU_m(i)
where c_m satisfies the individual sensor power constraint with equality, i.e., c_m=√(P_m/1+β_m^2) for m=1, …, M.
§ THE SYMMETRIC SCENARIO
In this section, we focus on the symmetric scenario. More formally, we have the following symmetry assumption.
[Symmetry Assumption] All sensors have identical problem parameters: β_m=β, α_m=α, and P_m=P for all m ∈ [1:M+K].
§.§ Problem Setting I
The first setting is concerned with the situation where the transmitter sensors have the ability to coordinate, i.e., all transmitters and the receiver can agree on an i.i.d. sequence of random variables {γ(i)} generated, for example, by a side channel, the output of which is, however, not available to the adversarial sensors[An alternative practical method to coordinate is to generate the identical pseudo-random numbers at each sensor, based on pre-determined seed.]. The ability of coordination allows transmitters and the receiver to agree on randomized encoding mappings. Perhaps surprisingly, in this setting, the adversarial sensors can also benefit from coordination, i.e., agree on an i.i.d. random sequence, denoted as {θ(i)}, to generate the optimal jamming strategy.
The saddle-point solution of this problem is presented in the following theorem.
Setting I, and under Assumption 1, admits a saddle-point solution with the following strategies: the strategy of the transmitter sensors is randomized uncoded transmission
X_m(i)=γ(i) c U_m(i), 1 ≤ m≤ M
where {γ(i)} is an i.i.d. sequence of binary variables γ(i)∼ Bern (1/2), and c=√(P/1+β^2).
The optimal jamming function (for adversarial sensors) is to generate the i.i.d. Gaussian output
X_k(i)=θ(i), M+1 ≤ k≤ M+K
where
θ(i)∼𝒩(0, P),
and is independent of the adversarial sensor input U_k(i).
The strategy of the receiver is the Bayesian estimator of S given Y, i.e.,
h(Y(i))=M c αβ/ M^2 α^2 β^2 c^2 +M c^2 α^2+ K^2P+1γ(i) Y(i).
The cost at this saddle-point is
J_C^S(M,K)=M c^2 α^2+ K^2P+1/ M^2 α^2 β^2 c^2 +M c^2 α^2+ K^2P+1
Moreover, this saddle-point solution is essentially unique.
We start by verifying that the mappings given in the theorem satisfy the pair of saddle-point inequalities (<ref>), following the approach in <cit.>.
RHS of (<ref>): Suppose the policy of the adversarial sensors is given as in Theorem <ref>. Then, the communication system at hand becomes essentially identical to the problem considered in <cit.>, whose solution is uncoded communication with deterministic, linear encoders, i.e., X_m(i)= c U_m(i). Any probabilistic encoder, given in the form of (<ref>) (irrespective of the density of γ) yield the same cost (<ref>) with deterministic encoders and hence is optimal. Given that the optimal transmitter is in the form of (<ref>), the optimal decoder is also zero-delay (symbol-by-symbol) mapping given as h(Y(i))=𝔼{S(i)|Y(i)}= 𝔼{SY} (𝔼{Y^2})^-1 Y(i) which can be explicitly obtained as in (<ref>) noting that
Y = c α γ(i) ∑_m=1 ^M U_m(i) + α∑_k=M+1 ^M+K X_k(i)+ Z(i)
𝔼{SY} =γ(i) M β c α , 𝔼{Y^2}=1+K^2 P+ (M β c α )^2+M α^2 c^2 .
The distortion is (observing that σ_S^2=1):
J= 1- (𝔼{SY})^2/𝔼{Y^2}
= M c^2 α^2+ K^2P+1/ M^2 α^2 β^2 c^2 +M c^2 α^2+ K^2P+1
LHS of (<ref>): By the symmetry of the problem, we assume, without any loss of generality, that all adversarial sensors use the same jamming strategy. Let us derive the overall cost conditioned on the realization of the transmitter mappings (i.e., γ=1 and γ=-1) used in conjunction with optimal linear decoders. If γ=1
D_1=J_1 +ξ𝔼{SX_k}+ψ𝔼{ZX_k}
for some constants ξ, ψ, and similarly if γ=-1
D_2=J_1 -ξ𝔼{SX_k}-ψ𝔼{ZX_k}
where the overall cost is
D(i)=ℙ(γ(i)=1)D_1+ℙ(γ(i)=-1)D_2 .
Clearly, for γ(i)∼ Bern (1/2) the cross terms in (<ref>) and (<ref>) cancel, so the overall cost D(i)=J_1 is only a function of the second-order statistics of the adversarial outputs, irrespective of the distribution of {θ(i)}, and hence the solution presented here is indeed a saddle-point.
Having established this fact, we next show this saddle point is essentially unique.
Gaussianity of X_k(i): The choice X_k(i)=θ(i) maximizes (<ref>) since it renders the simple uncoded linear mappings asymptotically optimal, i.e., the transmitters cannot improve on the zero-delay performance by utilizing asymptotically high delays. Moreover, the optimal zero-delay performance is always lower bounded by the performance of the linear mappings, which is imposed by the adversarial choice of X_k(i)=θ(i).
Independence of {X_k(i)} of {S(i)} and {W(i)}: If the adversarial sensors introduce some correlation, i.e., if 𝔼{SX_k}≠0 or 𝔼{WX_k}≠0, the transmitter can adjust its Bernoulli parameter to decrease the distortion. Hence, the optimal adversarial strategy is setting 𝔼{SX_k}=𝔼{WX_k}=0 which implies independence since all variables are jointly Gaussian.
Choice of Bernoulli parameter: Note that the optimal choice of the Bernoulli parameter for the transmitters is 1/2 since other choices will not cancel the cross terms in (<ref>) and (<ref>), i.e., 𝔼{SX_k} and 𝔼{WX_k}. These cross terms can be exploited by the adversary to increase the cost, hence optimal strategy for transmitter is to set γ=Bern(1/2).
Coordination, i.e., the ability of using a common randomized sequence, is beneficial to adversarial sensors in the case of coordinating transmitters and receiver, in the sense that lack of adversarial coordination strictly decreases the overall cost.
Note that coordination, i.e., to be able to generate the same realization of θ(i) enables adversarial sensors to generate a Gaussian noise with variance K^2P_A yielding the cost in (<ref>). However, without coordination, each sensor can only generate independent Gaussian random variables, yielding an overall Gaussian noise with variance KP and the total cost
M c^2 α^2+ KP+1/ M^2 α^2 β^2 c^2 +M c^2 α^2+ KP+1
< J_C^S(M,K)
Hence, coordination of adversarial sensors strictly increases the overall cost.
We note that the optimal strategies do not depend on the sensor index m; hence the implementation of the optimal strategy, for both transmitter and adversarial sensors, requires “coordination" among the sensors. This highlights the need for coordination in game theoretic settings in sensor networks. Note that this coordination requirement arises purely from game theoretic considerations, i.e., the presence of adversarial sensors. In the case where no adversarial node exists, transmitters do not need to “coordinate". Moreover, as we will show in Theorem 2, if the transmitters cannot coordinate, then adversarial sensors do not need to coordinate.
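As a sanity check of Theorem 1, the following Monte Carlo sketch simulates the saddle-point strategies in the symmetric model (U_m=βS+W_m, Y=α∑X+Z) and compares the empirical MSE with the closed form J_C^S(M,K) in (<ref>); all parameter values and the sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, P, alpha, beta, N = 8, 3, 1.0, 1.0, 1.0, 10**6
c = np.sqrt(P / (1 + beta**2))

S = rng.normal(size=N)                           # unit-variance source
W = rng.normal(size=(M, N))                      # observation noises
gamma = rng.choice([-1.0, 1.0], size=N)          # shared Bern(1/2) coordination
theta = np.sqrt(P) * rng.normal(size=N)          # common adversarial realization
Z = rng.normal(size=N)                           # channel noise

U = beta * S + W
Y = alpha * (gamma * c * U.sum(axis=0) + K * theta) + Z
den = M**2 * alpha**2 * beta**2 * c**2 + M * c**2 * alpha**2 + K**2 * P + 1
Shat = (M * c * alpha * beta / den) * gamma * Y  # receiver of Theorem 1

print(np.mean((S - Shat)**2))                              # empirical MSE
print((M * c**2 * alpha**2 + K**2 * P + 1) / den)          # J_C^S(M, K)
```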
§.§ Problem Setting II
Here, we address the second setting, where the transmitters do not have the ability to secretly agree on a sequence of i.i.d. “coordination" random variables, {γ}, to generate their transmission functions X_m. This setting does not admit a saddle-point solution, hence a Stackelberg solution is sought here. We also assume the number of adversarial sensors is less than the number of transmitter ones, i.e., K<M, otherwise (if K≥ M) the adversarial sensors can effectively eliminate the output of the transmitters, and the problem becomes trivial.
We show that the essentially unique Stackelberg equilibrium is achieved by a transmitter strategy which is identical across all transmitters: uncoded transmission with linear mappings. The equilibrium-achieving strategy for the adversarial sensors, again identical across all of them, is uncoded transmission with linear mappings, but with the opposite sign of the transmitters. The receiver strategy is symbol-by-symbol optimal estimation of the source from the channel output. A rather surprising observation is that adversarial coordination (in the sense of sharing a random sequence that is hidden from the transmitter sensors and the receiver) is superfluous for this setting, i.e., even if the adversarial sensors are allowed to cooperate, the optimal mappings and hence the resulting cost at the equilibrium do not change.
The following theorem captures this result.
For setting II and under Assumption 1, the essentially unique Stackelberg equilibrium is achieved by:
X_m(i)=c U_m(i), 1 ≤ m≤ M
for the transmitter sensors and
X_k(i)= -c U_k(i), M+1 ≤ k≤ M+K
for the adversarial sensors.
The optimal receiver strategy is the symbol-by-symbol Bayesian estimator of S given Y, i.e.,
h(Y(i)) = [(M-K) c αβ / ((M-K)^2 α^2 β^2 c^2 + (M-K) c^2 α^2 + 1)] Y(i).
The cost at this Stackelberg solution is
J_NC^S(M,K) = ((M-K) c^2 α^2 + 1) / ((M-K)^2 α^2 β^2 c^2 + (M-K) c^2 α^2 + 1)
Let us first find the cost at the equilibrium, J_NC^S, for the given encoding strategies. We start by computing the expressions used in the MMSE computations,
Y = c α (∑_m=1^M U_m-∑_k=M+1^M+K U_k ) + Z
𝔼{SY} =(M-K) c αβ, 𝔼{Y^2}=(M-K)^2 α^2 β^2 c^2 +(M-K) c^2 α^2+ 1.
Plugging these expressions, we obtain the cost, J_NC^S(M,K)=1-(𝔼{SY})^2/𝔼{Y^2}, as given in (<ref>), and the optimal receiver strategy, h(Y(i)) = 𝔼{SY} (𝔼{Y^2})^-1 Y(i), as in (<ref>).
We next show that linear mappings are the optimal (in the information-theoretic sense) encoding and decoding strategies. We first note that the adversarial sensors have knowledge of the transmitter encoding functions; hence, since the outputs are sent over an additive channel, the adversarial encoding functions will take the same form as the transmitters' functions but with a negative sign (see e.g., <cit.> for a proof of this result). We next proceed to find the optimal encoding functions for the transmitters subject to this restriction. From the data processing theorem, we must have
I( U_1, U_2, …, U_M+K; Ŝ) ≤ I( X_1, X_2, …, X_M+K; Y)
where we use the notational shorthand U_m= [ U_m (1), U_m(2),… , U_m(N)] (and likewise for X_m, Y and Ŝ) for length N sequences of random variables.
The left hand side can be lower bounded as:
I( U_1, U_2, …, U_M+K; Ŝ)≥ R(D)
where R(D) is the rate-distortion function of the Gaussian CEO problem adapted to our setting, and is derived in Appendix A.
The right hand side can be upper bounded by
I(X_1, X_2, …, X_M+K; Y)
(a)≤ ∑_i=1^N I(X_1(i), …, X_M+K(i); Y(i))
≤ max ∑_i=1^N I(X_1(i), …, X_M+K(i); Y(i))
= (1/2) ∑_i=1^N log( 1 + 1^T R_X(i) 1 )
where R_X(i) is defined as
{R_X(i)}_p,r≜𝔼{X_p(i)X_r(i)} ∀ p,r ∈ [1:M+K].
Note that (a) follows from the memoryless property of the channel and the maximum in (<ref>) is over the joint density over X_1(i), …,X_M+K(i) given the structural constraints on R_X(i) due to the power constraints. It is well known that the maximum is achieved, uniquely, by the jointly Gaussian density for a given fixed covariance structure <cit.>, yielding (<ref>).
Since the logarithm is a monotonically increasing function, the optimal encoding functions g_m^N(·), m∈ [1:M], equivalently maximize ∑_p,r𝔼{X_p(i)X_r(i)}. Note that
X_m(i)= [g_m^N( U_m) ]_i
and hence {g_m^N(·)}_m=1^M that maximize
∑_p=1^p=M+K∑_r=1^r=M+K𝔼{ [g_p^N( U_p)]_i [g_r^N( U_r)]_i}
can be found by invoking Witsenhausen's lemma (given in Appendix B) as [g_m^N( U_m)]_i=c U_m(i) for all i∈ [1:N], and hence g_m^N( U_m)=c U_m for all m∈ [1:M].
Finally, we obtain J_NC^S as an outer bound by equating the left and right hand sides of (<ref>). The linear mappings in Theorem 2 achieve this outer bound, and hence are optimal.
Source-channel separation, based on digital compression and communications, is strictly suboptimal for this setting.
We first note that the optimal adversarial encoding functions must be the negatives of those of the transmitters to achieve the equilibrium derived in Theorem 2. But then, the problem at hand becomes equivalent to a problem with no adversary, which was studied in <cit.>, where source-channel separation was shown to be strictly suboptimal. Hence, separate source-channel coding has to be suboptimal for our problem as well. A more direct proof follows from the calculation of the separate source-channel coding performance.
Coordination is beneficial to transmitter sensors, in the sense that lack of coordination strictly increases the equilibrium cost.
The proof follows from the fact that J_C^S<J_NC^S.
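For a numerical sense of the gap, the sketch below evaluates both equilibrium costs; J_NC^S is the expression of Theorem 2, while the closed form used for J_C^S is again the assumed rational form with coordinated adversarial noise variance K^2P:

M, P = 10, 1.0
alpha, beta = 1.0, 1.0
c2 = P / (1 + beta**2)     # c^2 from the per-sensor power constraint

for K in (2, 5, 8, 9):
    J_C = (M*c2*alpha**2 + K**2*P + 1) / \
          (M**2*alpha**2*beta**2*c2 + M*c2*alpha**2 + K**2*P + 1)
    J_NC = ((M-K)*c2*alpha**2 + 1) / \
           ((M-K)**2*alpha**2*beta**2*c2 + (M-K)*c2*alpha**2 + 1)
    print(K, round(J_C, 3), round(J_NC, 3))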
§.§ Problem Setting III
The focus of this section is the setting between the two extreme scenarios of coordination, namely full coordination or no coordination. In the following, we assume that Mϵ transmitter sensors can coordinate with the receiver while M(1-ϵ) of them cannot, where 0<ϵ<1 and Mϵ is an integer. Similarly, we consider that only Kη of the adversarial sensors can coordinate while the remaining K(1-η) adversarial sensors cannot, where 0<η<1 and Kη is an integer. Let us reorder the sensors, without loss of generality, such that the first Mϵ transmitters and Kη adversaries can coordinate. We again take K<M. Let us also define the quantity ϵ_0 as the unique[The fact that J_C^S is monotonically decreasing in ϵ ensures that (<ref>) admits a unique solution.] solution to:
J_C^S(Mϵ_0, √(K^2η^2+K(1-η)))=J_NC^S(M, K)
The following theorem captures our main result.
For ϵ > ϵ_0, there exists a saddle-point solution with the following strategies: the optimal transmission strategy requires that the Mϵ capable transmitters use randomized linear encoding, while the remaining M (1-ϵ) transmitters are not used.
X_m(i) =γ(i) c U_m(i), 1 ≤ m≤ Mϵ
X_m(i) =0 Mϵ≤ m ≤ M
where {γ(i)} is an i.i.d. sequence of binary variables γ(i)∼ Bern (1/2). The optimal jamming policy (for the coordination-capable adversarial sensors) is to generate the identical Gaussian noise
X_k(i)=θ(i), M+1 ≤ k≤ M+K η
while the remaining adversarial sensors will generate independent Gaussian noise
X_k(i)=θ_k(i), M+K η≤ k≤ M+K
where θ_k(i) ∼𝒩(0, P)
are independent of the adversarial sensor input U_k(i).
The receiver strategy at this saddle point is
h(Y(i)) = [Mϵ c αβ / (M^2 ϵ^2 α^2 β^2 c^2 + Mϵ c^2 α^2 + (K^2 η^2+K(1-η)) P + 1)] γ(i) Y(i).
If ϵ < ϵ_0, the Stackelberg equilibrium is achieved with the deterministic linear encoding for the transmitter sensors, i.e.,
X_m(i)=c U_m(i), 1 ≤ m≤ M
and the adversarial sensors use identical functional form with opposite sign of the transmitters, i.e.,
X_k(i)= -c U_k(i), M+1 ≤ k≤ M+K
and the receiver uses
h(Y(i)) = [(M-K) c αβ / ((M-K)^2 α^2 β^2 c^2 + (M-K) c^2 α^2 + 1)] Y(i).
The transmitters have two choices: i) All transmitters choose not to use randomization. Then, the adversarial sensors do not need to use randomization either, since the optimal strategy is deterministic linear coding with the opposite sign, as shown in Theorem 2. Hence, the cost associated with this option is J_NC^S(M,K). ii) The capable transmitters use randomized encoding. This choice implies that the remaining transmitters do not send information, as they do not have access to the randomization sequence {γ}; hence they are not used. The adversarial sensors which can coordinate generate identical realizations of the Gaussian noise, while the remaining adversaries generate independent realizations. The total effective adversarial noise power will be ((Kη)^2+(1-η)K)P, and the cost associated with this setting is J_C^S(Mϵ, √(K^2η^2+K(1-η))). Hence, the transmitters choose between these two options depending on their costs, J_C^S(Mϵ, √(K^2η^2+K(1-η))) and J_NC^S(M, K). Since J_C^S is a decreasing function in M, and hence in ϵ, whenever ϵ >ϵ_0 the transmitters use randomization (and hence so do the adversaries); otherwise the problem setting becomes identical to “no coordination". The rest of the proof simply follows from the proofs of Theorems 1 and 2.
Note that in the first regime (ϵ > ϵ_0), we have a zero-sum game with a saddle point. In the second regime (ϵ < ϵ_0), we have a Stackelberg game where all transmitters and the receiver constitute the leader and the adversaries constitute the follower.
Theorem 3 states a rather interesting observation: depending on the network conditions, the optimal transmission strategy may not use all of the transmitter sensors. At first glance, it might seem that discarding some of the available transmitter sensors is suboptimal. However, there is no feasible way to use these sensors, which cannot coordinate, without compromising the benefits of coordination.
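The threshold ϵ_0 is easy to compute numerically. The sketch below relaxes the integrality of Mϵ and uses the same assumed rational closed form for J_C^S as in the earlier sketches, with effective adversarial noise power (K^2η^2+K(1-η))P:

from scipy.optimize import brentq

M, K, eta, P = 10, 8, 0.5, 1.0
alpha, beta = 1.0, 1.0
c2 = P / (1 + beta**2)
noise = (K**2 * eta**2 + K * (1 - eta)) * P   # effective adversarial noise power

def J_C(m_eff):
    # assumed coordinated cost with m_eff randomizing transmitters
    return (m_eff*c2*alpha**2 + noise + 1) / \
           (m_eff**2*alpha**2*beta**2*c2 + m_eff*c2*alpha**2 + noise + 1)

J_NC = ((M-K)*c2*alpha**2 + 1) / \
       ((M-K)**2*alpha**2*beta**2*c2 + (M-K)*c2*alpha**2 + 1)

eps0 = brentq(lambda e: J_C(M * e) - J_NC, 1e-9, 1.0)  # J_C is decreasing in e
print(eps0)   # 0.70 for these numbers: randomization pays off only when eps > 0.70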
§ THE ASYMMETRIC SCENARIO
In this section, we remove the assumption of identical sensing and channel noise variances and identical transmitter and adversary average power. Instead, we assume there is a sum-power limit for the set of transmitters and for the set of adversarial nodes. In this general asymmetric case, the optimal, in information-theoretic sense, communication strategies are unknown in the absence of adversary. Here, we assume zero-delay linear strategies, in the light of our results in previous section, which provide an upper bound on the distortion-power performance:
In Setting-I (where the transmitter sensors can coordinate), the transmission strategies are restricted to
X_m(i)=γ (i)c_m U_m(i)
where {γ(i)} is an i.i.d sequence of binary variables γ(i)∼ Bern (1/2).
In Setting-II (where no coordination is allowed), the transmission strategies are limited to
X_m(i)=c_mU_m(i).
The problem we address in this section is two-fold: i) determine the optimal power allocation strategies for a given sum-power constraint of the form:
∑_m=1^M P_m ≤ P_T,
and ii) determine the optimal adversarial sensor strategies subject to a sum power constraint:
∑_k=M+1^M+K P_k ≤ P_A.
Before deriving our results, we introduce a few variables.
We let k^* be the index of an adversarial sensor having the best communication channel, i.e.,
k^* ≜ arg max_{k ∈ [M+1:M+K]} α_k
and P_A' is the associated received power P_A'≜α_k^*^2 P_A. In the case of multiple k^*s, we pick one arbitrarily.
§.§ Setting-I
In this setting, the transmitters can coordinate, similar to the setting studied in Section <ref>. Following the same steps as in Section <ref>, we conclude that the solution sought here is a saddle point.
For setting I, and under Assumption 2, an essentially unique saddle-point solution exists with the following strategies: the communication strategy for the transmitter sensor m is given in (<ref>) where
c_m = λ_2 α_m β_m / (2 (1+β_m^2+λ_1 α_m^2)), λ_1 = P_T/(1+P_A'),
λ_2 = √(4P_T / ∑_m=1^M (1+β_m^2) α_m^2 β_m^2/(1+β_m^2+λ_1α_m^2)^2)
The attacker uses only sensor k^*, and it generates i.i.d. Gaussian output
X_k^*(i)=θ(i), where θ(i)∼𝒩(0, P_A),
which is independent of the adversarial sensor input U_k^*(i).
The receiver is the Bayesian estimator of S given Y, i.e.,
h(Y(i)) = [(∑_m=1^M β_m c_m α_m) / (1+P_A'+ (∑_m=1^M β_m c_m α_m)^2 + ∑_m=1^M α_m^2 c_m^2)] γ(i) Y(i).
The cost at this saddle-point solution is
J_C^AS = (1+ λ_1 ∑_m=1^M α_m^2 β_m^2 / (2 (1+β_m^2+λ_1 α_m^2)))^-1.
The existence of an essentially unique saddle-point solution follows from the same reasoning as in Theorem 1. Let us take the transmission strategy as given in the theorem statement and derive the optimal attack strategy. Note that the attacker's role is essentially limited to adding Gaussian noise subject to the attack power P_A; the only remaining question is how to allocate this power over the sensors. The objective of the attacker is to maximize the effective channel noise, i.e., to maximize:
∑_k=M+1^M+K α_k^2 𝔼{θ_k^2} subject to ∑_k=M+1^M+K 𝔼{θ_k^2}≤ P_A. The solution of this problem is simple: the attacker picks the best attack channel, i.e., the sensor with the largest α_k, and allocates all adversarial power to this sensor.
Applying the optimal encoding map given in (<ref>), and given the adversary strategy, we have the following auxiliary expressions for the terms used in standard MMSE estimation.
Y = γ(i)∑_m=1 ^M c_m α_m U_m(i) +∑_k=M+1 ^M+K α_k X_k(i)+ Z(i)
𝔼{SY} =γ(i)∑_m=1 ^Mβ_m c_m α_m, 𝔼{Y^2}=1+P_A'+ (∑_m=1 ^Mβ_m c_m α_m )^2+ ∑_m=1 ^Mα_m^2 c_m^2 .
The distortion is (observing that σ_S^2=1):
J= 1- (𝔼{SY})^2/𝔼{Y^2}
= 1 - (∑_m=1^M β_m c_m α_m)^2 / (1+P_A'+ (∑_m=1^M β_m c_m α_m)^2 + ∑_m=1^M α_m^2 c_m^2),
= (1+P_A'+∑_m=1^M α_m^2 c_m^2) / (1+P_A'+∑_m=1^M α_m^2 c_m^2 + (∑_m=1^M β_m c_m α_m)^2).
Then, the problem is to determine c_m that minimizes (<ref>) subject to the power constraint, ∑_m=1 ^M (1+β_m^2) c_m^2 ≤ P_T. We first note that this problem is not convex in c_m. By changing the variables, we convert this problem into a convex form which is analytically solvable. First, instead of minimizing the distortion with a power constraint, we can equivalently minimize the power with a distortion constraint. Since distortion is a convex function of the total power (otherwise it can be converted to a convex problem by time sharing), there is no duality gap by this modification (cf. <cit.>). The modified problem is to minimize:
∑_m=1 ^M(1+β_m^2) c_m^2,
subject to
(1+P_A'+∑_m=1^M α_m^2 c_m^2) / (1+P_A'+∑_m=1^M α_m^2 c_m^2 + (∑_m=1^M β_m c_m α_m)^2) ≤ J.
Note that
1/J = 1 + (∑_m=1^M β_m c_m α_m)^2 / (1+P_A'+ ∑_m=1^M α_m^2 c_m^2)
Next, we introduce a slack variable
r=∑_m=1 ^Mα_m β_m c_m.
The optimization problem is to minimize
∑_m=1 ^M(1+β_m^2) c_m^2,
subject to
1+P_A'+∑_m=1 ^Mα_m^2 c_m^2 ≤(J^-1-1)^-1 r^2,
and (<ref>). This problem is convex in the variables c_m and r. Hence, we construct the Lagrangian cost as
J =∑_m=1 ^M (1+β_m^2 ) c_m^2+λ_1 (1+P_A'+∑_m=1 ^Mα_m^2 c_m^2-r^2/(J^-1-1)) +λ_2 (r-∑_m=1 ^Mα_m β_m c_m),
where λ_1 ∈ℝ^+ and λ_2 ∈ℝ. The first-order conditions for stationarity of the Lagrangian yield:
∂ J/∂ c_m=2 c_m (1+β_m^2 )+2 λ_1 c_m α_m^2 -λ_2 α_m β_m=0,
∂ J/∂ r=-2λ_1 (J^-1-1)^-1 r + λ_2=0 ,
and we have (<ref>) and
1+P_A'+∑_m=1 ^Mα_m^2 c_m^2 = (J^-1-1 )^-1 r^2.
From (<ref>), we have
c_m = λ_2 α_m β_m / (2 (1+β_m^2+λ_1 α_m^2)).
Using (<ref>) in (<ref>), we have
(λ_2^2/(4λ_1)) ∑_m=1^M α_m^2β_m^2/(1+β_m^2+λ_1 α_m^2) = 1+P_A'+ (λ_2^2/4) ∑_m=1^M α_m^4β_m^2/(1+β_m^2+λ_1 α_m^2)^2
which simplifies to
λ_1(1+P_A') = (λ_2^2/4) ∑_m=1^M α_m^2β_m^2 (1+β_m^2)/(1+β_m^2+λ_1 α_m^2)^2 = ∑_m=1^M P_m = P_T ⇒ λ_1 = P_T/(1+P_A').
We also have
P_T = ∑_m=1^M (1+β_m^2)c_m^2 = (λ_2^2/4) ∑_m=1^M (1+β_m^2) α_m^2β_m^2/(1+β_m^2+λ_1α_m^2)^2 ⇒ λ_2 = √(4P_T / ∑_m=1^M (1+β_m^2) α_m^2β_m^2/(1+β_m^2+λ_1α_m^2)^2)
Plugging the expressions of λ_1 and λ_2 in (<ref>), we obtain the equilibrium cost.
If Assumption 1 is replaced with Assumption 2 in setting I, coordination becomes redundant for the attacker. This is because the optimal attack strategy uses only one sensor, and there is no need to coordinate (generate the same realization of θ(i)).
The optimal strategies can be computed for each sensor in a decentralized manner. The central agent can compute the optimal values of λ_1 and λ_2 and then broadcast this information to all sensors. Next, each transmitter sensor can compute its own mapping based on local parameters α_m and β_m and the broadcasted global parameters λ_1 and λ_2.
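A compact sketch of this decentralized computation (the channel and observation gains below are arbitrary, randomly drawn values):

import numpy as np

rng = np.random.default_rng(0)
M, K, P_T, P_A = 8, 3, 10.0, 5.0
alpha = rng.uniform(0.5, 2.0, M)       # transmitter channel gains
beta  = rng.uniform(0.5, 2.0, M)       # transmitter observation gains
alpha_adv = rng.uniform(0.5, 2.0, K)   # adversarial channel gains

P_A_recv = np.max(alpha_adv)**2 * P_A  # attack: all power on the best channel k*

# global constants, computed once and broadcast to all sensors
lam1 = P_T / (1 + P_A_recv)
lam2 = np.sqrt(4 * P_T / np.sum((1 + beta**2) * alpha**2 * beta**2
                                / (1 + beta**2 + lam1 * alpha**2)**2))

# each sensor then uses only its local (alpha_m, beta_m)
c = lam2 * alpha * beta / (2 * (1 + beta**2 + lam1 * alpha**2))
assert np.isclose(np.sum((1 + beta**2) * c**2), P_T)   # sum-power budget is met

J = 1 / (1 + lam1 * np.sum(alpha**2 * beta**2 / (2 * (1 + beta**2 + lam1 * alpha**2))))
print(J)   # equilibrium cost J_C^AS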
Finally, we analyze the asymmetric setting where the sensors are not allowed to coordinate. We characterize the policies achieving the Stackelberg equilibrium and associated cost in the following theorem.
For setting II, and under Assumption 2, the encoding functions for the transmitter and the adversarial sensors at the Stackelberg equilibrium are:
X_m(i)=c_m U_m(i), 1 ≤ m≤ M, X_k(i)= c_k U_k(i), M+1 ≤ k≤ M+K
where
c_m = λ_4 α_m β_m / (2 (1+β_m^2+λ_3 α_m^2)), c_k = λ_2 α_k β_k / (2 (1+β_k^2-λ_1 α_k^2)).
and λ_1 ∈ℝ, λ_2 ∈ℝ^+, λ_3∈ℝ, λ_4 ∈ℝ^+ are constants that satisfy the following equations:
λ_2 = -(2P_A+2λ_1 (1+∑_m=1^M α_m^2 c_m^2)) / (∑_m=1^M α_m β_m c_m), ((P_A+λ_1 (1+∑_m=1^M α_m^2 c_m^2)) / (∑_m=1^M α_m β_m c_m))^2 ∑_k=M+1^M+K (1+β_k^2)α_k^2 β_k^2 / (1+β_k^2-λ_1 α_k^2)^2 = P_A.
and
λ_4λ_1=-λ_2λ_3, 1=P_T/λ_1 +P_A/λ_3, λ_4^2/4∑_m=1^Mα_m^2 β_m^2 (1+β_m^2) / (1+β_m^2+λ_3 α_m^2 )^2 =P_T.
The optimal receiver is the Bayesian estimator of S given Y, i.e.,
h(Y(i)) = [∑_m=1^M+K β_m c_m α_m / (1+∑_m=1^M+K α_m^2 c_m^2 + (∑_m=1^M+K β_m c_m α_m)^2)] Y(i).
The cost at this Stackelberg equilibrium is
J_NC^AS(M,K) = (1+ λ_3 ∑_m=1^M α_m^2 β_m^2 / (2 (1+β_m^2+λ_3 α_m^2)) - λ_1 ∑_k=M+1^M+K α_k^2 β_k^2 / (2 (1+β_k^2-λ_1α_k^2)))^-1.
Since Player 1 (the transmitter sensors and the receiver) is the leader of this Stackelberg game and the adversarial sensors are the followers, we first compute the best response of the attacker to the given transmitter strategy and associated receiver policy. By the reasoning in Theorem 2, we conclude that the best adversary strategy is to use linear maps as given in the theorem statement.
In the following, we compute the optimal adversary coefficients, c_k, k ∈ [M+1:M+K] as a function of c_m, m∈ [1:M]. We first compute the expressions used in the MMSE computations as:
Y = ∑_m=1 ^M+K c_m α_m U_m + Z
𝔼{SY} =∑_m=1 ^M+Kβ_m c_m α_m, 𝔼{Y^2}=1+ (∑_m=1 ^M+Kβ_m c_m α_m )^2+ ∑_m=1 ^M+Kα_m^2 c_m^2 .
The objective of the attacker is to maximize
J = 1 - (𝔼{SY})^2/𝔼{Y^2} = (1+∑_m=1^M+K α_m^2 c_m^2) / (1+∑_m=1^M+K α_m^2 c_m^2 + (∑_m=1^M+K β_m c_m α_m)^2).
over c_k, k ∈ [M+1:M+K] that satisfy
∑_k=M+1 ^M+K(1+β_k^2) c_k^2 ≤ P_A.
This problem is again non-convex, hence we follow the approach we used in the proof of Theorem 4: we first introduce a slack variable.
r_K=∑_k=M+1 ^M+Kα_k β_k c_k,
and apply the KKT optimality conditions. The stationarity conditions applied to the following Lagrangian cost
J_A =∑_k=M+1 ^M+K (1+β_k^2 ) c_k^2+λ_1 ((r_K+∑_m=1 ^Mα_m β_m c_m)^2(J^-1-1)^-1-1-∑_m=1 ^M+Kα_m^2 c_m^2) +λ_2 (r_K-∑_k=M+1 ^M+Kα_k β_k c_k),
where λ_1 ∈ℝ^+ and λ_2 ∈ℝ, yield
∂ J_A/∂ c_k=2 c_k (1+β_k^2 )-2 λ_1 c_k α_k^2 -λ_2 α_k β_k=0,
∂ J_A/∂ r_K=2λ_1 (J^-1-1)^-1 (r_K+∑_m=1 ^Mα_m β_m c_m) + λ_2=0 ,
and we have (<ref>) and
1+∑_m=1 ^Mα_m^2 c_m^2 + ∑_k=M+1 ^M+Kα_k^2 c_k^2 = (J^-1-1 )^-1 (r_K+∑_m=1 ^Mα_m β_m c_m)^2.
From (<ref>), we have
c_k = λ_2 α_k β_k / (2 (1+β_k^2-λ_1 α_k^2)).
Using (<ref>) in (<ref>), we have
-(λ_2/(2λ_1)) ∑_m=1^M α_m β_m c_m - (λ_2^2/(4λ_1)) ∑_k=M+1^M+K α_k^2β_k^2/(1+β_k^2-λ_1 α_k^2) = 1+∑_m=1^M α_m^2 c_m^2 + (λ_2^2/4) ∑_k=M+1^M+K α_k^4β_k^2/(1+β_k^2-λ_1 α_k^2)^2
which simplifies to
-(λ_2/(2λ_1)) ∑_m=1^M α_m β_m c_m - 1 - ∑_m=1^M α_m^2 c_m^2 = (λ_2^2/(4λ_1)) ∑_k=M+1^M+K α_k^2β_k^2 (1+β_k^2)/(1+β_k^2-λ_1 α_k^2)^2 = P_A/λ_1
or
-λ_2 ∑_m=1^M α_m β_m c_m - 2λ_1(1+∑_m=1^M α_m^2 c_m^2) = 2P_A ⇒ λ_2 = -(2P_A+2λ_1 (1+∑_m=1^M α_m^2 c_m^2)) / (∑_m=1^M α_m β_m c_m)
Plugging (<ref>) in (<ref>), we have
(λ_2^2/4) ∑_k=M+1^M+K (1+β_k^2)α_k^2 β_k^2 / (1+β_k^2-λ_1 α_k^2)^2 = ((P_A+λ_1 (1+∑_m=1^M α_m^2 c_m^2)) / (∑_m=1^M α_m β_m c_m))^2 ∑_k=M+1^M+K (1+β_k^2)α_k^2 β_k^2 / (1+β_k^2-λ_1 α_k^2)^2 = P_A
The unique positive solution of (<ref>) provides the value of λ_1 and by (<ref>), λ_2 can be computed, once λ_1 is obtained. Having obtained the optimal c_k, k ∈ [M+1:M+K] values as a function of P_A and c_m, m ∈ [1:M], we next derive c_m that minimize (<ref>) subject to
∑_m=1 ^M(1+β_m^2) c_m^2 ≤ P_T.
Again, we modify the problem as to minimize ∑_m=1 ^M(1+β_m^2) c_m^2 subject to
1+∑_m=1 ^M+Kα_m^2 c_m^2 ≤(J^-1-1)^-1 (r+∑_k=M+1 ^M+Kα_k β_k c_k )^2,
and
r=∑_m=1 ^Mα_m β_m c_m.
The stationarity conditions applied to the following Lagrangian cost
J_T =∑_m=1 ^M (1+β_m^2 ) c_m^2+λ_3 (1+ ∑_m=1 ^M+Kα_m^2 c_m^2- (r+∑_k=M+1 ^M+Kα_k β_k c_k )^2(J^-1-1)^-1) +λ_4 (r-∑_m=1 ^Mα_m β_m c_m),
for λ_3 ∈ℝ^+ and λ_4 ∈ℝ yield
∂ J_T/∂ r=-2λ_3 (J^-1-1)^-1 (r+∑_k=M+1 ^M+Kα_k β_k c_k ) + λ_4=0 ,
∂ J_T/∂ c_m =2 c_m (1+β_m^2 )+2 λ_3 c_m α_m^2 +2λ_3∑_k=M+1 ^M+Kα_k^2 c_kc_k' -2λ_3 (J^-1-1)^-1 (r+∑_k=M+1 ^M+Kα_k β_k c_k )∑_k=M+1 ^M+Kα_k β_k c_k' - λ_4 α_m β_m
=2 c_m (1+β_m^2 )+2 λ_3 c_m α_m^2 +2λ_3∑_k=M+1 ^M+Kα_k^2 c_kc_k' -λ_4∑_k=M+1 ^M+Kα_k β_k c_k' - λ_4 α_m β_m=0
where c_k'=∂ c_k/∂ c_m and (<ref>) follows from (<ref>). We also have (<ref>) and
1+∑_m=1 ^Mα_m^2 c_m^2 + ∑_k=M+1 ^M+Kα_k^2 c_k^2 = (J^-1-1 )^-1 (r+∑_k=M+1 ^M+Kα_k β_k c_k )^2
= (λ_4/(2λ_3)) (r+∑_k=M+1^M+K α_k β_k c_k)
as necessary conditions of optimality. Comparing (<ref>) and (<ref>), we have
λ_4/λ_3=-λ_2/λ_1.
We next use (<ref>) to rewrite the terms involving c_k':
2λ_3∑_k=M+1 ^M+Kα_k^2 c_kc_k' -λ_4∑_k=M+1 ^M+Kα_k β_k c_k' = ∂/∂ c_m(2λ_1∑_k=M+1 ^M+Kα_k^2 c_k^2 +λ_2∑_k=M+1 ^M+Kα_k β_k c_k )= ∂/∂ c_m P_A=0
Using (<ref>) in (<ref>), we obtain
c_m = λ_4 α_m β_m / (2 (1+β_m^2+λ_3 α_m^2))
Plugging (<ref>) and (<ref>) in (<ref>) and using (<ref>), we have
1+ (λ_4^2/4) ∑_m=1^M α_m^4 β_m^2/(1+β_m^2+λ_3 α_m^2)^2 + (λ_2^2/4) ∑_k=M+1^M+K α_k^4 β_k^2/(1+β_k^2-λ_1 α_k^2)^2
= (λ_4^2/(4λ_3)) ∑_m=1^M α_m^2 β_m^2/(1+β_m^2+λ_3 α_m^2) - (λ_2^2/(4λ_1)) ∑_k=M+1^M+K α_k^2 β_k^2/(1+β_k^2-λ_1 α_k^2)
which yields, after algebraic manipulations,
1=P_T/λ_1 +P_A/λ_3
We also have
λ_4^2/4∑_m=1^Mα_m^2 β_m^2 (1+β_m^2) / (1+β_m^2+λ_3 α_m^2 )^2 =P_T.
The set of equations (<ref>, <ref>, <ref>, <ref>, <ref>) (essentially) uniquely characterizes the variables λ_1, λ_2, λ_3 and λ_4. Plugging these variables into (<ref>), we obtain the equilibrium cost.
We again observe that, as noted in Remark <ref>, the optimal power allocation admits a decentralized implementation: a central agent can compute and broadcast the values of constants λ_i, i=1, …, 4 and the sensors can implement optimal communication strategies using the local information α_m and β_m and these universal constants. The same interpretation also holds for the Byzantine sensors.
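As an illustration of the adversary's side of this computation, the sketch below solves the scalar condition for λ_1 by root-finding (for fixed, hypothetical transmitter coefficients), then recovers λ_2 and the adversarial coefficients c_k:

import numpy as np
from scipy.optimize import brentq

alpha = np.ones(4); beta = np.ones(4); c = 0.45 * np.ones(4)   # transmitters (given c_m)
alpha_a = np.ones(2); beta_a = np.ones(2); P_A = 2.0           # adversaries

S1 = np.sum(alpha**2 * c**2)    # sum of alpha_m^2 c_m^2
S2 = np.sum(alpha * beta * c)   # sum of alpha_m beta_m c_m

def g(lam1):
    # scalar condition fixing lambda_1, written as g(lambda_1) = 0
    t = ((P_A + lam1 * (1 + S1)) / S2)**2
    return t * np.sum((1 + beta_a**2) * alpha_a**2 * beta_a**2
                      / (1 + beta_a**2 - lam1 * alpha_a**2)**2) - P_A

pole = np.min((1 + beta_a**2) / alpha_a**2)   # singularity of the summand
lam1 = brentq(g, 1e-9, pole - 1e-6)           # the unique positive solution
lam2 = -(2 * P_A + 2 * lam1 * (1 + S1)) / S2
c_k = lam2 * alpha_a * beta_a / (2 * (1 + beta_a**2 - lam1 * alpha_a**2))
assert np.isclose(np.sum((1 + beta_a**2) * c_k**2), P_A)   # adversarial budget is spent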
§ DISCUSSION AND CONCLUSION
In this paper, we have conducted a game-theoretical analysis of joint source-channel communication over a Gaussian sensor network with Byzantine sensors. Depending on the coordination capabilities of the sensors, we have analyzed three problem settings. The first setting allows coordination among the transmitter sensors, analyzed first for the totally symmetric case. Coordination capability enables the transmitters to use randomized encoders. The saddle-point solution to this problem is randomized uncoded transmission for the transmitters and the coordinated generation of i.i.d. Gaussian noise for the adversarial sensors. In the second setting, transmitter sensors cannot coordinate, and hence they use fixed, deterministic mappings. The solution to this problem is shown to be uncoded communication with linear mappings for both the transmitter and the adversarial sensors, but with opposite signs. We note that the coordination aspect of the problem is entirely due to game-theoretic considerations, i.e., if no adversarial sensors exist, the transmitters do not need coordination. In the third setting, where only a fraction of sensors can coordinate, the solution depends on the number of transmitter and adversarial sensors that can coordinate. If the gain from coordination for the transmitter sensors and the receiver is sufficiently high, only the coordination-capable transmitter sensors are used. Then, the problem simplifies to an instance of setting I, i.e., there exists a unique saddle-point solution achieved by randomized linear mappings as the transmitter and the receiver strategy and independent noise as the adversarial strategy. Otherwise, the transmitters do not utilize coordination, all available transmitter sensors are used, and the problem becomes an instance of setting II: a saddle-point solution does not exist and the Stackelberg equilibrium is achieved by deterministic linear strategies.
Our analysis has uncovered an interesting result regarding coordination among the transmitter sensors and the receiver, and among the adversarial nodes. If the transmitter nodes can coordinate, then the adversaries will benefit from coordination, i.e., all will generate the identical realization of an i.i.d. Gaussian noise sequence. If the transmitters cannot coordinate, adversarial sensors do not benefit from coordination, and the resulting Stackelberg equilibrium is at strictly higher cost than the one when transmitters can coordinate (setting I).
Finally, we have analyzed the impact of optimal power allocation among both the transmitter (defender) and the adversarial (attacker) sensors when various parameters that define the game are not the same for all sensors, i.e., the asymmetric case. We have shown that the optimal attack strategy, when the defender can coordinate, allocates all attack power to the best sensor, where the criterion for selecting the best sensor is the received SNR. Moreover, the flexibility of power allocation renders coordination superfluous for the adversarial sensors, while it remains beneficial for the transmitter sensors. In the absence of coordination, both the optimal transmitter and the optimal attacker strategies use all available sensors to distribute power optimally.
Several questions still remain open and are currently under investigation, including extensions of the analysis to vector sources and channels. The information-theoretic analysis of such a setting requires a vector form of Witsenhausen's Lemma, which is an important research question in its own right; see <cit.> for recent progress in this direction. The investigation of optimal power allocation strategies for asymmetric settings for vector sources and channels, and the scaling analysis in terms of the number of sensors, are parts of our current research.
§ THE GAUSSIAN CEO PROBLEM
In the Gaussian CEO problem, an underlying Gaussian source S ∼𝒩(0,σ_S^2) is observed under additive noise W ∼𝒩( 0, R_W) as U=S+ W. These noisy observations, i.e., U, must be encoded in such a way that the decoder produces a good approximation to the original underlying source. This problem was proposed in <cit.> and solved in <cit.> (see also <cit.>). A lower bound for this function for the non-Gaussian sources within the “symmetric" setting where all U's have identical statistics was presented in <cit.>. Here, we simply extend the results in <cit.> to our setting, noting
D =𝔼{(S-Ŝ)^2},
R =min I( U; Ŝ),
where U=β S+ W, W ∼𝒩( 0, R_W), and R_W is an M × M identity matrix. The minimization in (<ref>) is over all conditional densities p(ŝ| u) that satisfy (<ref>). The MSE distortion can be written as sum of two terms
D= 𝔼{(S-T+ T-Ŝ)^2}=𝔼{(S-T)^2} +𝔼{(T-Ŝ)^2},
where T≜𝔼{S| U}. Note that (<ref>) holds since
𝔼{(S-T)(Ŝ-T)}=0,
as the estimation error, S-T, is orthogonal to any function[Note that Ŝ is also a deterministic function of U, since the optimal reconstruction can always be achieved by deterministic codes.] of the observation, U. The estimation error D_est≜𝔼{(S-T)^2} is constant with respect to p(ŝ| u), i.e., a fixed function of U and S. Hence, the minimization is over the densities that satisfy a distortion constraint of the form 𝔼{(T-Ŝ)^2}≤ D_rd and R=min I( U; Ŝ). We can thus write (<ref>) as
D=D_rd+D_est.
Note that due to their Gaussianity, T is a sufficient statistic of U for S, i.e., S-T- U forms a Markov chain in that order and T∼𝒩(0,σ_T^2). Hence, R=min I( U; Ŝ)=min I(T; Ŝ) where minimization is over p(ŝ|t) that satisfy 𝔼{(T-Ŝ)^2}≤ D_rd, where all variables are Gaussian. This is the classical Gaussian rate-distortion problem, and hence:
D_rd(R)=σ_T^2 2^-2R.
Note that T=R_SU R_U^-1 U, where R_SU≜𝔼{S U^T} and R_U≜𝔼{ U U^T} which can be written explicitly as:
R_U= ( [ 1+β_1^2 β_1β_2 … β_1 β_M; β_1β_2 1+β_2^2 … β_2 β_M; ⋮ ⋱ ⋮; β_1 β_M … 1+β_M^2 ] ).
Since R_U is structured, it can easily be manipulated. In particular, R_U admits an eigen-decomposition R_U=Q_U^T Λ Q_U, where Q_U is unitary and Λ is a diagonal matrix with elements 1, …, 1, 1+∑_m β_m^2. We compute σ_T^2 as
σ_T^2 = R_SU R_U^-1 R_SU^T = σ_S^2 ∑_m=1^M β_m^2 / (1+∑_m=1^M β_m^2),
and using standard linear estimation principles, we obtain
D_est = σ_S^2 / (1+∑_m=1^M β_m^2).
Plugging (<ref>) in (<ref>) and using (<ref>) yields
D = σ_S^2 (1/(1+∑_m=1^M β_m^2) + (∑_m=1^M β_m^2/(1+∑_m=1^M β_m^2)) 2^-2R).
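This final expression is straightforward to evaluate; the observation gains below are arbitrary:

import numpy as np

def ceo_distortion(R, beta, sigma_S2=1.0):
    # distortion-rate function derived above; beta holds the M observation gains
    s = np.sum(np.asarray(beta)**2)
    return sigma_S2 * (1.0 / (1.0 + s) + (s / (1.0 + s)) * 2.0**(-2.0 * R))

beta = [1.0, 0.8, 1.2]
for R in (0.0, 1.0, 2.0, np.inf):
    print(R, ceo_distortion(R, beta))
# R = 0 returns sigma_S^2 (no information); R -> inf leaves the estimation floor D_est.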
§ WITSENHAUSEN'S LEMMA
In this section, we recall Witsenhausen's lemma <cit.>, which is used in the proof of Theorem 2.
Consider a pair of random variables X and Y, generated from a joint density P_X,Y, and two (Borel measurable) arbitrary functions f,g:ℝ→ℝ satisfying
𝔼{f(X)} = 𝔼{g(Y)} = 0,
𝔼{f^2(X)} = 𝔼{g^2(Y)} = 1.
Define
ρ^*≜sup_f,g𝔼{ f(X) g(Y)}
Then for any (Borel measurable) functions f_N, g_N:ℝ^N→ℝ satisfying
𝔼{f_N(X)} = 𝔼{g_N(Y)} = 0,
𝔼{f_N^2(X)} = 𝔼{g_N^2(Y)} = 1,
for length N vectors sampled from the independent and identically distributed random sequences {X(i)} and {Y(i)},where each X(i), Y(i) pair is generated from P_X,Y, as X={X(i)}_i=1^N and Y={Y(i)}_i=1^N, we have
sup_f_N,g_N 𝔼{f_N(X) g_N(Y)} ≤ ρ^*.
Moreover, the supremum and the infimum above are attained by linear mappings, if P_X, Y is a bivariate normal density.
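The bivariate normal case (where ρ^* equals the correlation coefficient ρ) is easily illustrated by Monte-Carlo simulation: normalized linear mappings attain ρ, while a nonlinear choice such as f(x)=x^3 yields a strictly smaller correlation:

import numpy as np

rng = np.random.default_rng(1)
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1_000_000)
x, y = z[:, 0], z[:, 1]

def normalize(v):
    # enforce the zero-mean, unit-variance constraints on f and g
    return (v - v.mean()) / v.std()

print(np.mean(normalize(x) * normalize(y)))        # ~0.70 = rho
print(np.mean(normalize(x**3) * normalize(y**3)))  # ~0.56 < rho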
| Cyber-physical systems (CPSs) are large-scale interconnected systems of heterogeneous, yet collaborating,
components that provide integration of computation with physical processes <cit.>. The inherent heterogeneity and integration of different components in CPS pose new security challenges <cit.>. One such security challenge pertains to the CPS communication network.
Most CPSs rely on the presence of a Wireless Sensor Network (WSN)
composed of distributed nodes that communicate their measurements to a central state estimator (fusion center) with
higher computation capabilities. Efficient and reliable communication of these measurements is a critical aspect of WSN
systems that determines the usability of the infrastructure. Consider the architecture shown in Figure 1, where multiple sensors observe the state of the plant and transmit their observations over a wireless multiple access channel (MAC) to a central estimator (fusion center) which decides on the control action. The sensors in such architectures are known to be vulnerable to various attacks; see e.g., <cit.> and the references therein. For example, sensors may be captured and analyzed such that the attacker
gains insider information about the communication scheme and networking protocols. The attacker can then reprogram the
compromised sensors and use them to launch the so-called Byzantine attack <cit.>, where the objective of these adversarial sensors can be i) to distort the estimate made at the fusion center, which corresponds to a zero-sum game where the transmitting sensors aim to minimize some distortion associated with the state measurements while the objective of the attacker is to maximize it, or ii) to strategically craft messages to deceive the estimator in a way that renders its estimate close to a predetermined, biased value <cit.>, as was done in the replay attacks of StuxNet in SCADA systems <cit.>. This paper presents an information/communication theoretic approach to Bayesian optimal sensor fusion in the presence of Byzantine sensors for the first setting, while a preliminary analysis of the second case can be found in <cit.>.
We analyze the communication scenario from the perspective of joint source-channel coding (JSCC), which has certain advantages over separate source and channel coding for sensor networks; see e.g., <cit.> and the references therein. In this paper, we extend the game theoretic analysis of the Gaussian test channel <cit.> to Gaussian sensor networks studied by <cit.>. In <cit.>, the performance of a simple uncoded communication is studied, in conjunction with optimal power assignment over the sensors given a sum power budget. For a particular symmetric setting, Gastpar showed that indeed this uncoded scheme is optimal over all encoding/decoding methods that allow arbitrarily high delay <cit.>. However, it is well understood that in more realistic asymmetric settings, the uncoded communication scheme is suboptimal, and in fact, the optimal communication strategies are unknown for these settings <cit.>.
Information-theoretic analysis of the scaling behavior of such sensor networks, in terms of the number of sensors, is provided in <cit.>.
In this paper, building on our earlier work on the topic <cit.>, we consider three settings for the sensor network model, which is illustrated in Figure 2 and described in detail in Section II. The first M sensors (i.e., the transmitters) and the single receiver constitute Player 1 (minimizer) and the remaining K sensors (i.e., the adversaries) constitute Player 2 (maximizer). Formulated as a zero-sum game, this setting does not admit a saddle point in pure strategies (deterministic encoding functions), but admits one in mixed strategies (randomized functions). In the first setting we consider, the transmitter sensors are allowed to use randomized encoders, i.e., all transmitters and the receiver agree on some (pseudo)random sequence, denoted as {γ} in the paper. We coin the term “coordination" for this capability, show that it plays a pivotal role in the analysis and the implementation of optimal strategies for both the transmitter and the adversarial sensors, and provide the mixed-strategy saddle-point solution in Theorem 1. In the second setting, we have a hierarchical scheme; it can be viewed as a Stackelberg game where Player 1 is the leader, restricted to pure strategies, and Player 2 is the follower, who observes Player 1's choice of pure strategies and plays accordingly. We present in Theorem 2 the optimal strategies for this Stackelberg game, whose cost is strictly higher than the cost associated with the first setting. The sharp contrast between the two settings underlines the importance of “coordination" in sensor networks with adversarial nodes. In the third setting, we consider that only a given subset of the transmitters and also of the adversarial sensors can coordinate. We show that if the number of transmitter sensors that can coordinate is sufficiently high (compared to ones that cannot), then the problem becomes a zero-sum game with a saddle point, where the coordination-capable transmitters use a randomized linear strategy and the remaining transmitters are not used at all. It may at first appear to be counter intuitive to forgo utilization of the second set of transmitter sensors, but the gain from coordination (by the first set of transmitter sensors) more than compensates for this loss. Coordination is also important for the adversarial sensors. When transmitters coordinate, adversaries benefit from coordination to generate identical realizations of Gaussian jamming noise. In contrast with the transmitters, the adversarial sensors which cannot coordinate are still of use: they generate independent copies of identically distributed Gaussian jamming noise. Otherwise, i.e., if the number of coordinating transmitters is not sufficiently high, the transmitters use deterministic (pure-strategy) linear encoding, and the optimal adversarial strategy is also uncoded communications in the opposite direction of the transmitters.
In the second part of the paper, we extend the analysis to asymmetric settings where the sensing and/or communication channels, and the allowed transmission power of each sensor, are different. For this setting, information-theoretically optimal source-channel coding strategies are unknown (see e.g., <cit.> for inner and outer bounds of optimal performance). Here, we assume that the sensors use uncoded (zero-delay) linear communication strategies, which are optimal for the symmetric setting. We also allow another coordination capability to the sensors to combat this inherent heterogeneity: we assume a total power limit over the sensors, which allows for power allocation over the sensors. We assume this power allocation optimization capability is also available to the adversarial sensors. We derive optimal power scheduling strategies for the transmitter and the adversarial sensors for both settings, i.e., with or without coordination[Here, the term coordination refers to the sensors' ability to generate identical realizations of a (pseudo)random sequence.]. We show that the power allocation capability renders coordination superfluous for the adversarial sensors, while it is still beneficial to the transmitter sensors.
This paper is organized as follows: In Section II, we formulate the problem. In Section III, we present our results pertaining to the symmetric setting, and in Section IV, we analyze the asymmetric case. In Section V, we present conclusions and discuss possible future directions of research. | null | null | null | null | null |
http://arxiv.org/abs/1701.07650v3 | 20170126105135 | A USB-controlled potentiostat/galvanostat for thin-film battery characterization | [
"Thomas Dobbelaere"
] | physics.ins-det | [
"physics.ins-det"
] |
Supplementary material available online: <cit.>
[email protected]
Department of Solid State Sciences, Ghent University, Belgium
This paper describes the design of a low-cost USB-controlled potentiostat/galvanostat which can measure or apply potentials in the range of ±8V, and measure or apply currents ranging from nanoamps to max. ±25 mA. Precision is excellent thanks to the on-board 20-bit D/A-convertor and 22-bit A/D-convertors. The dual control modes and its wide potential range make it especially suitable for battery characterization. As an example use case, measurements are presented on a lithium-ion test cell using thin-film anatase TiO2 as the working electrode. A cross-platform Python program may be used to run electrochemical experiments within an easy-to-use graphical user interface. Designed with an open hardware philosophy and using open-source tools, all the details of the project (including the schematic, PCB design, microcontroller firmware, and host computer software) are freely available, making custom modifications of the design straightforward.
A USB-controlled potentiostat/galvanostat for thin-film battery characterization
Thomas Dobbelaere
December 30, 2023
================================================================================
§ INTRODUCTION
§.§ Potentiostat basics
The potentiostat is an essential tool in electrochemical research. It allows the experimenter to apply a potential to a system (i.e. an electrochemical cell) and measure the resulting current, or vice versa. The unique property of the potentiostat – the thing that differentiates it from a simple combination of an adjustable voltage source and an ammeter – is that it can do so while keeping the path where the current flows separate from the path where the potential is measured. This is necessary in electrochemical cells because potentials are usually measured against a “reference electrode” which only provides an accurate and stable potential in equilibrium conditions, i.e. when it is not disturbed by any current flow. Using two terminals for current flow and two others for potential sensing, a total of four electrode connections are needed; they are usually named as follows:
* Working electrode (abbreviation: WE)
* Counter electrode (abbreviation: CE)
* Sense electrode (abbreviation: SE)
* Reference electrode (abbreviation: RE)
In a four-electrode connection scheme, the potential is measured (and no current flows) between SE and RE, and current is applied (regardless of the voltage drop) between WE and CE.[It should be noted that the potentiostat is functionally equivalent to a source-measure unit with separate force/sense lines, with the WE/CE pair corresponding to the “force” connections and the SE/RE pair corresponding to the “sense” connections.]
In most electrochemical cells, the SE and WE are tied together (and still referred to as WE), resulting in a three-electrode connection scheme: the potential is measured between WE and RE, and current flows between WE and CE. This is illustrated in Figure <ref>. Using this scheme, one would naively assume that it is impossible to control the potential; as there is zero current flow, the potential can only be measured and not forced to a certain value.
The potentiostat can, however, apply current between WE and CE, which in turn influences the potential between WE and RE; thereby, using a feedback loop, any desired potential between WE and RE can be achieved by applying whatever current is necessary between WE and CE. This control mode is called the “potentiostatic mode”: it allows the user to set a desired potential, and the potentiostat will try to reach that potential by adjusting the current.
Alternatively, it is often useful to have control over the current instead (i.e. allowing the user to set a desired current, no matter what the resulting potential may be); this is called the “galvanostatic mode”. The presented circuit elegantly combines both of these modes by making it possible to switch the feedback path between the potentiostatic and the galvanostatic modes, as shown in Figure <ref>. On a basic level, the circuit operates as follows (a small numerical sketch of this loop is given after the list):
* A digital-to-analog convertor (DAC) outputs an electrical signal representing either the desired potential (in the potentiostatic mode) or the desired current (in the galvanostatic mode).
* An operational amplifier compares this to the measured potential (in the potentiostatic mode) or the measured current (in the galvanostatic mode), and drives current into the CE until the measured value equals the DAC setpoint.
* Both the measured potential and the measured current are fed into an analog-to-digital convertor (ADC) for data acquisition purposes.
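The following toy sketch mimics this loop in the potentiostatic mode for a purely resistive cell; the control amplifier is modeled as a simple discrete-time integrator, and all values are hypothetical:

R_cell = 1.0e3            # hypothetical cell resistance: WE-RE potential = I * R_cell
setpoint = 1.5            # desired WE-RE potential in volts (the DAC output)
gain, i_ce = 1e-4, 0.0    # integrator gain and initial CE current

for _ in range(2000):
    v_meas = i_ce * R_cell               # potentiostatic feedback: measured potential
    i_ce += gain * (setpoint - v_meas)   # drive the CE current until the error vanishes

print(i_ce, i_ce * R_cell)   # converges to 1.5 mA and 1.5 V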
§.§ Comparison to previously published designs
Although a number of similar open source, “do-it-yourself” potentiostat designs have recently been published <cit.>, these designs are not suitable for battery characterization. The Friedman et al. design <cit.> does not allow the working electrode potential to be scanned (it can only be adjusted to a fixed value in hardware), limiting its use to chronoamperometry (i.e. recording the working electrode current as a function of time). The CheapStat <cit.> supports a number of electrochemical techniques including cyclic voltammetry, but its potential range is limited to ±1 V. The DStat <cit.> improves upon the CheapStat in several ways, and has impressive low-current capabilities, but it is still limited to a potential range of ±1.5 V. Although these potential ranges are wide enough for many aqueous electrochemistry experiments, they are insufficient for e.g. lithium-ion batteries which can reach cell potentials over 4 V. The potential range of the design described in this paper is ±8 V.
Another highly desirable feature for battery characterization is the inclusion of a galvanostatic mode. The aforementioned designs implement the “adder potentiostat” topology <cit.> which only provides potentiostatic control. The presented design has a different topology which enables switching between potentiostatic control and galvanostatic control with a single (digitally controlled) switch.
§.§ Comparison to commercial instruments
While there are plenty of commercial instruments which can be bought from manufacturers such as Metrohm Autolab, Bio-Logic, Gamry, Ivium Technologies, CHI, Pine Research, Admiral Instruments, etc. to fulfill the same purpose, including models which provide wider current ranges, higher sample rates, and more measurement techniques (e.g. including impedance spectroscopy), the price of these instruments generally ranges from $2000 up to $20000 and more. To the author's best knowledge, the lowest-cost commercial instrument which could substitute for the presented design is the Squidstat Solo, sold by Admiral Instruments for a retail price of $1900 <cit.>. While it has a higher sample rate (1 ms/sample, versus 90 ms for the presented design), a slightly higher potential range (±10V versus ±8V) and similar current ranges (±3 µA to ±25 mA versus ±2 µA to ±20 mA), it has worse potential and current resolution (16-bit versus 22-bit), it is approx. 20 times more expensive, and its hardware and software cannot be freely modified.
§ THE HARDWARE
§.§ Circuit description
An annotated schematic diagram of the device is shown in Figure <ref>. It consists of several subcircuits, which are discussed below.
§.§.§ Power supply
In order to be able to apply cell voltages between -8 V and +8 V, the analog circuitry needs dual power supply rails which supply at least these voltages (plus some overhead). The required current is fairly low; the cell current will not exceed 25 mA, and the quiescent current of the analog circuitry is in the order of a few mA. To eliminate the inconvenience of needing an external power supply, the supply is generated internally from the +5 V line provided by the USB bus. This is achieved by a charge pump[The choice of a charge pump rather than a switched-mode power supply simplifies the design and eliminates inductors or transformers, which may emit undesirable electromagnetic noise.] consisting of U1 (an LM2662 switched-capacitor voltage convertor) and its associated circuitry. The switching action of U1 drives both a positive voltage doubler network (C2, C3, C4 and D1) and an inverting voltage doubler network (C5, C6, C7 and D2); this results in supply rails of ±10 V, minus the forward voltage losses of the diodes. These losses are minimized by choosing Schottky-type diodes such as the BAT721S, which conveniently houses two of them in a single package. The result is approx. ±9 V. Ceramic capacitors are recommended for C2–C8; their low ESR results in low ripple, they do not degrade over time like electrolytics, and the required 10 µF capacities are nowadays inexpensively available.
§.§.§ Analog circuitry
The analog circuitry is implemented using OPAx192 operational amplifiers. These relatively new op-amps from Texas Instruments <cit.> are high-precision, low offset voltage, low bias current, low-noise devices which have a wide supply voltage, rail-to-rail inputs and outputs, and a rather high output current. They closely approximate ideal op-amp behaviour, making them highly suitable for a measurement circuit like this where DC precision is of the utmost importance.
The core of the circuit is the control amplifier, U7A. It compares the voltage set by the DAC output to either the potentiostatic or the galvanostatic feedback voltage, and drives the working electrode of the cell until they are equal. To prevent oscillation, its bandwidth is limited by R5 and C24. The present component values yield a –3 dB frequency of approx. 3 kHz, which is still much faster than the typical measurement timescale. An additional “snubber”-type network consisting of R4 and C25 on its output pin increases stability towards capacitive loads. The cell switch K1 allows the electrochemical cell to be connected (this will later be referred to as the “” state) or disconnected (“”) from the output of U7A; in its disconnected state, the cell is not driven, but may still be measured.
Potentiostatic feedback is acquired through U7B, U7C, and U7D. U7B and U7D are unity-gain buffers which present very high input impedances on respectively the sense electrode and the reference electrode, satisfying the requirement of having nearly zero current flow between SE and RE.[The OPAx192 has a typical input impedance of 10^13 Ω and a typical input bias current of 5 pA <cit.>. Using e.g. a reference electrode with an impedance of 10 kΩ, this results in an error voltage of 50 nV – an insignificant quantity.] The buffered voltages are then fed into U7C, which is configured as a differential amplifier by means of R6–R9. This amplifier implements the following operation:
V_MEAS=V_REF+(V_SE-V_RE)×(R7/R6)
(for R8=R6, R9=R7)
With V_REF=2.500 V, R7=75.0 kΩ and R6=240 kΩ, potential differences between SE and RE ranging from −8 V to +8 V are linearly scaled to output voltages V_MEAS ranging from 0 V to +5 V. In this way, the signal spans the same range as the DAC output (allowing it to be used as a feedback signal) and the ADC input (allowing it to be measured).
Galvanostatic feedback is acquired by making use of the shunt resistors R10, R11 or R12 to convert the CE current into a voltage (selectable by the ranging relays K2–K4), multiplying this voltage by a factor of exactly 10 through the non-inverting amplifier U9A, and summing this voltage with V_REF using U9B. In the highest current range, a range of –25 mA → +25 mA is mapped linearly to the range of 0 → +5V, which is again suitable as a feedback signal and for acquisition by the ADC. The lower current ranges are respectively 100 times and 10 000 times more sensitive, resulting in ranges of resp. ±250 µA and ±2.5 µA.
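The two feedback mappings can be summarized in a few lines of Python; note that the shunt values for the two sensitive ranges (1 kΩ and 100 kΩ) are inferred here from the stated 100× and 10 000× sensitivity factors, not taken from the schematic:

V_REF = 2.500            # volts
R6, R7 = 240e3, 75e3     # differential-amplifier resistors (ohms)
SHUNTS = {1: 10.0, 2: 1.0e3, 3: 100.0e3}   # ohms; ranges 1 (least) to 3 (most sensitive)

def potential_to_vmeas(v_se_minus_re):
    # map a SE-RE potential in [-8 V, +8 V] onto [0 V, +5 V]
    return V_REF + v_se_minus_re * (R7 / R6)

def current_to_vmeas(i_cell, current_range=1):
    # map the cell current onto [0 V, +5 V]: shunt drop times the x10 gain stage
    return V_REF + i_cell * SHUNTS[current_range] * 10.0

assert abs(potential_to_vmeas(8.0) - 5.0) < 1e-9      # +8 V -> +5 V
assert abs(current_to_vmeas(25e-3, 1) - 5.0) < 1e-9   # +25 mA -> +5 V in range 1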
The potentiostatic and galvanostatic feedback signals (labelled and ) lead into the “normally closed” and “normally open” terminals of U8, a DG449 analog switch. When is low, the control amplifier receives the potentiostatic feedback signal; when high, it receives galvanostatic feedback. In this way, the circuit can quickly switch between the potentiostatic and galvanostatic control modes.
§.§.§ A/D conversion
The and signals are connected to U4 and U5, which are MCP3550 A/D convertors, for data acquisition. The MCP3550 is a 22-bit delta-sigma ADC which offers high accuracy and low noise; in particular, it strongly rejects line noise at either 50 Hz (using the MCP3550-50 model) or 60 Hz (using the MCP3550-60 model) <cit.>. This is highly desirable because it eliminates what is often the most important noise source in many lab environments; however, this comes at the cost of a fairly limited conversion rate of max. 12.5 samples/s. If faster conversion were required, it could be directly replaced by the MCP3553, which samples at 60 samples/s but lacks the line noise filter. The inputs to the ADCs are additionally low-pass filtered by R1/C13 and R3/C15 to remove high-frequency switching noise; the present component values yield roll-off frequencies of approx. 1.6 kHz, well below the delta-sigma modulator’s oversampling rate of 25600 samples/s (corresponding to a Nyquist frequency of 12.8 kHz) <cit.>.
With 22-bit resolution, the potential is measured with a granularity of 3.8 µV. In the most sensitive current range, current is measured with a granularity of 1.2 pA; this figure increases to resp. 120 pA and 12 nA for the higher ranges.
§.§.§ D/A conversion and voltage reference
The control amplifier receives its “setpoint” voltage from U6, a DAC1220 digital-to-analog convertor. The DAC1220 is a 20-bit delta-sigma DAC which is inherently linear and contains an on-chip calibration function <cit.>. It receives its +2.500 V reference voltage V_REF (as do U4, U5 and the analog circuitry) from U3, an ADR421 voltage reference using XFET technology for low noise, high accuracy (∼0.1%), and high stability <cit.>.
The 20-bit DAC resolution results in potential control with a granularity of 15.3 µV, or in current control with granularities of 4.8 pA, 480 pA or 48 nA, depending on the current range.
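These resolution figures all follow from dividing the full-scale span by 2^bits, e.g.:

adc_bits, dac_bits = 22, 20
v_span = 16.0                                # -8 V ... +8 V
i_span = {1: 50e-3, 2: 500e-6, 3: 5e-6}      # +/-25 mA, +/-250 uA, +/-2.5 uA

print(v_span / 2**adc_bits)      # ~3.8e-06 V: 3.8 uV potential measurement granularity
print(i_span[3] / 2**adc_bits)   # ~1.2e-12 A: 1.2 pA in the most sensitive range
print(v_span / 2**dac_bits)      # ~1.5e-05 V: 15.3 uV potential control granularity
print(i_span[3] / 2**dac_bits)   # ~4.8e-12 A: 4.8 pA current control granularity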
§.§.§ Digital control
The device connects to a host computer by an on-board USB interface. This function is implemented by U2, a PIC16F1459 microcontroller which has built-in USB capabilities and provides a sufficiently large number of general-purpose input/output pins for relay switching and SPI communication <cit.>. Its function is to receive commands from the host computer through USB, which may instruct it to either toggle a pin, read from the ADC, or set the DAC. It communicates with the ADC or DAC through a software-implemented SPI interface, and in the case of a read command, it then sends the acquired data back to the host computer.
Status LEDs D3 and D4 provide some basic status indication. D3 provides the “” indication; when D3 is illuminated, the cell is connected to the control amplifier. D4 is a dual-color LED which provides power-on and mode indication; it lights up green when the circuit is in the potentiostatic mode, and orange when it is in the galvanostatic mode.
§.§ PCB design
A compact, double-sided printed circuit board design was made in KiCad <cit.>. The design files are available in the supplementary material <cit.>, and a 3D rendering of the (populated and unpopulated) board is shown in Figure <ref>. The board is most easily fabricated by sending it off to a PCB prototyping service. Assembly can be carried out by either reflow soldering or hand-soldering.
Due to the small size of the PCB (approx. 5×5 cm), it may easily be put in a small enclosure, e.g. a mint tin, provided some openings are made to allow the mini-USB connection and the cell connection cables to pass through.
A bill of materials and a fabrication diagram (showing a top view of the PCB superimposed with the component values) may be found in the supplementary material <cit.> to aid component ordering and circuit assembly. The total cost of the device, including the components and a manufactured PCB, is well below $100. A photograph of the finished device is shown in Figure <ref>.
§.§ Microcontroller firmware
The microcontroller firmware including the source code and a compiled .hex image may be found in the supplementary material <cit.>. Compilation and usage instructions may be found in the sections below.
§.§.§ Compilation
The source code is written in the C programming language and can be compiled using the Microchip MPLAB XC8 compiler. The provided Makefile allows easy compilation because it automatically provides the compiler with the necessary flags and include paths. On a Linux system, simply running make will compile and link everything and produce the compiled .hex output file.
§.§.§ Raw USB communication
The microcontroller firmware makes use of Signal 11's “M-stack” open-source USB stack <cit.> to implement communication through raw USB bulk transfers. Commands are received as ASCII strings on EP1 OUT, are executed, and a reply is sent on EP1 IN. The following commands are supported:
* “”, “”
Connects or disconnects the cell to the output of the control amplifier.
* “”, “”
Switches between potentiostatic or galvanostatic control modes.
* “” (where = “”, “”, or “”)
Switches between current ranges; range 1 is the highest (least sensitive) current range, increasing range numbers yield lower (more sensitive) ranges.
* “” ( = one byte of data)
Sets the DAC output code (three bytes).
* “”
Performs an automatic DAC calibration.
* “” ( = one byte of data)
Sets the DAC calibration data; the first three bytes are proportional to the offset, the latter three bytes are proportional to the gain <cit.>.
* “”
Returns six bytes, representing the current DAC calibration in the same format as above.
* “”
Reads the potential and current from the ADCs, and returns six bytes if a previous conversion has finished. The first three bytes represent the potential ADC value, the latter three bytes represent the current ADC value. If a conversion is still ongoing, it returns the string “” instead.
* “” ( = one byte of data)
Saves six bytes to internal flash memory; these are used for potential/current offset removal.
* “”
Retrieves the corresponding six bytes from internal flash memory and returns them.
* “” ( = one byte of data)
Saves six bytes to internal flash memory; these are used for fine-tuning the three current shunt resistors.
* “”
Retrieves the corresponding six bytes from internal flash memory and returns them.
The host computer communicates with the potentiostat using a generic driver provided by the cross-platform libusb library <cit.>.
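A minimal PyUSB exchange looks as follows; the vendor/product IDs and the command string here are placeholders, since the actual values are the ones defined in the firmware source:

import usb.core

VID, PID = 0x0000, 0x0000   # placeholders: use the IDs from the firmware source
dev = usb.core.find(idVendor=VID, idProduct=PID)
assert dev is not None, "device not found"
dev.set_configuration()

def command(cmd: bytes, reply_len: int = 64) -> bytes:
    dev.write(0x01, cmd)                      # EP1 OUT: ASCII command string
    return bytes(dev.read(0x81, reply_len))   # EP1 IN: reply

raw = command(b"ADCREAD")   # placeholder name for the ADC read command
if len(raw) == 6:
    # assuming MSB-first byte order for the two 3-byte ADC words
    pot_code = int.from_bytes(raw[0:3], "big")
    cur_code = int.from_bytes(raw[3:6], "big")
    print(pot_code, cur_code)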
§ THE SOFTWARE
A user-friendly GUI application allows the experimenter to easily perform typical electrochemical measurements. The program can be found in the supplementary material <cit.> and is written in Python 3. In addition to a working installation of Python, it requires NumPy <cit.> and SciPy <cit.> for data processing, PyUSB <cit.> for USB communication, PyQt <cit.> to provide the GUI, and PyQtGraph <cit.> for real-time plotting. These packages are freely available, and may be easily installed from the system software repositories on most Linux distributions. On Windows and Mac, it is recommended to use Anaconda <cit.>; it provides Python 3.6 along with all packages except PyQtGraph (which may be installed using conda) and PyUSB (which may be separately installed using pip). Only on Windows, a USB device driver is required upon connecting the device to the host computer; this driver may be generated by libusb <cit.> or downloaded from the supplementary material <cit.>.
After installing these dependencies, the program can be executed by running the main script with Python. An application GUI similar to the one in Figure <ref>a should then appear. The USB Vendor and Product IDs shown in the input fields should match the values in the microcontroller firmware source code; the default values are defined there, and they can be adjusted if necessary. By clicking the “Connect” button, the application will start communicating with the potentiostat and will start displaying the measured potential and current; it does this both in numeric form (using the upper numeric indicators) and in graphical form (by plotting the potential and the current as a function of the time).
§.§ Calibration
Even though the device will already be reasonably accurate without applying any calibration (thanks to the precision op-amps, precision resistors, and the inherent linearity of the ADCs and DAC), the remaining – small – offset and gain errors, typically in the order of respectively 0.01% of full-scale and 0.1% of the measured value, can be calibrated out by adjusting the values in the “Calibration” field under the “Hardware” tab. It is recommended that this be done by the following procedure:
* Short the SE and RE; leave WE and CE unconnected. Either adjust the potential offset and current offset manually until both the potential and the current are exactly zero, or wait at least 20 s (to have the device measure enough data points to average) and press the “Auto Zero” button.
* Connect SE to WE, and RE to CE. Press the “Auto Calibrate” button. Under “Manual Control”, switch the cell on, and set a few different potentials. The measured potential should exactly match the set potential; if it does not, you can further fine-tune the calibration by making small adjustments to the DAC offset and DAC gain. Press “Save to device” to apply the values.
* Finally, the values R1–R3 represent relative fine adjustments to the shunt resistor values. If precise shunt resistors were used, they should not require adjustment, except for R1 (the lowest shunt resistor, used for the highest current range, with a nominal value of 10.00 Ω), which may have a slightly higher effective value due to the non-negligible contact resistance of the current ranging relay. To fine-adjust this value, connect SE and WE to one leg and RE and CE to the other leg of an accurate 1.000 kΩ resistor. Set a potential of e.g. 7.000 V, select potentiostatic mode, and turn the cell connection on; adjust the R1 value until the measured current is exactly 7.000 mA.
* By pressing the “Save to device” button, the calibration data is saved to the internal flash memory of the potentiostat, so that the correct values are automatically loaded upon the next application start-up. This is also useful in case the potentiostat is connected to a different host computer; the device will remember its own calibration settings.
To maintain the best accuracy, it is recommended to verify the calibration or repeat the calibration procedure if the device has been exposed to significant changes in ambient temperature or humidity.
§.§ Measurement
The user interface program can run a number of electrochemical techniques, which can be started from within the corresponding tab views. Figures <ref>–<ref> illustrate these techniques using example measurements performed on a “dummy cell”. The dummy cell consists of a series circuit of a 1.000 kΩ precision resistor and a 1000 µF (nominal) electrolytic capacitor, connected on one side to WE/SE and on the other side to RE/CE, and is drawn schematically in Figure <ref>. The RC series circuit can be used to verify the correct operation of the potentiostat by comparing the measured potential and current to the expected behavior, which is governed by a first-order differential equation:
I = U_R/R = C · dU_C/dt
U = U_R + U_C
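For a potential step of height U_0, this first-order system has the standard exponential solution with time constant τ = RC (a textbook result, stated here for reference):

I(t) = (U_0/R) · e^(-t/τ), U_C(t) = U_0 · (1 - e^(-t/τ))

With the nominal dummy-cell values, τ = 1.000 kΩ × 1000 µF = 1.0 s, which produces the exponentially decaying current spikes discussed in the constant-potential measurement below.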
§.§.§ Constant potential / current
Using the manual controls in the “Hardware” tab, a fixed potential (in potentiostatic mode) or a fixed current (in galvanostatic mode) may be set. This makes it possible to use the device as a constant-voltage or constant-current source. Both the potential and current will be continuously measured and displayed. It is possible to log the data to a file by entering a filename and checking the “Log” checkbox. Figure <ref>a shows the result of periodically changing the potential over the dummy cell between 0 V and +8 V (green curve), resulting in current spikes which decay exponentially as expected for an RC circuit (red curve). Figure <ref>b shows the noise on the measured potential and current, using shorted leads (i.e. zero potential) and with the cell connection off (i.e. zero current). The RMS noise level on the potential is 28 µV, while the RMS noise levels on the current are respectively 88 nA, 1.1 nA, and 9.9 pA in the 20 mA, 200 µA, and 2 µA ranges.
§.§.§ Cyclic voltammetry
Cyclic voltammetry (CV) is a versatile electrochemical measurement technique which finds application in many fields; the details can be found in literature <cit.>, but it is essentially based on the repeated application of a linear potential ramp, i.e. a “triangle waveform”, while measuring the resulting current. In a typical CV plot, the current is then plotted as a function of the applied potential.
Before starting a CV measurement, a number of parameters must be set in the user interface program (Figure <ref>). These parameters determine the nature of the applied triangular waveform and are graphically illustrated in Figure <ref>. Because of the finite resolution (i.e. step size) in potential and time, the waveform is actually not truly linear but rather consists of a “staircase” shape; however, when the steps are sufficiently small, the results are equivalent to a truly linear ramp.
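For illustration, such a staircase can be generated in a few lines of Python; the helper below is hypothetical (not part of the distributed software) and builds one cycle of the waveform from the parameters just discussed.

```python
import numpy as np

def staircase_cycle(u_start, u_vertex1, u_vertex2, scan_rate, u_step):
    """One CV cycle as a staircase triangle wave.

    Each u_step-high step is held for u_step / scan_rate seconds;
    returns (t, u) arrays of sample times and potential levels.
    """
    dt = u_step / scan_rate
    levels = []
    for a, b in [(u_start, u_vertex1), (u_vertex1, u_vertex2),
                 (u_vertex2, u_start)]:
        n = int(round(abs(b - a) / u_step))
        levels.extend(a + np.sign(b - a) * u_step * np.arange(n))
    levels.append(u_start)
    u = np.array(levels)
    return np.arange(len(u)) * dt, u

# e.g. 0 V -> +1 V -> -1 V -> 0 V at 100 mV/s with 5 mV steps:
t, u = staircase_cycle(0.0, 1.0, -1.0, 0.100, 0.005)
```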
By clicking the “OCP” button next to the “Start potential” input field, the currently measured potential is copied into the input field; this is convenient when starting a CV measurement at the open-circuit potential (OCP) of the cell.
The “Samples to average” parameter is automatically calculated based on the scan rate, but can be overwritten. It determines how many samples are averaged for one measurement of potential and current. Samples are acquired every 90 ms; this means that a measurement containing n averaged samples will take n ×90 ms. Averaging reduces noise, but also reduces the effective sampling rate; thus, n>1 should only be used for sufficiently slow scan rates. Because the minimum sampling time is 90 ms, the maximum scan rate is limited to approx. 500 mV/s; at this scan rate, the height of the potential steps is 45 mV.
Because the resulting current is not known a priori and may span a large dynamic range, an autoranging feature is available: during the measurement, the device will automatically choose an appropriate current range based on the actually measured current. This allows it to accurately measure currents ranging from nanoamps to max. 25 mA. If this is, for some reason, not desirable, a current range can be “disabled” (i.e. prevented from being selected by autoranging) by unticking its checkbox.
Before starting the measurement, the “Preview sweep” button may be used to make a potential vs. time plot based on the currently set CV parameters. This allows the experimenter to verify the CV settings. After setting an output filename, the measurement can be started by clicking “Start cyclic voltammetry”. This will switch the device to potentiostatic mode and start applying the potential profile. During the measurement, the application shows a continuously updating CV plot and continuously writes the measurement data to the output file. The output file is formatted as ASCII text and contains three tab-separated columns representing resp. the elapsed time (in s), the measured potential (in V), and the measured current (in A). It may be imported in any plotting or data analysis tool.
The CV curve of the dummy cell (Figure <ref>), acquired at a scan rate of 100 mV/s, reveals nearly horizontal plateaus at currents of ±100.6 µA (with the plus sign on the rising potential and the minus sign on the declining potential). This is expected; because the current through the capacitor is proportional to the rate of change of its voltage, and because (given a constant current) the voltage drop over the resistor is constant, the rate of change of the capacitor voltage is also equal to ±100 mV/s. A straightforward calculation reveals that a 1006 µF capacitance yields the measured constant current. This value is well within the tolerance of the (nominally) 1000 µF capacitor in the dummy cell.
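That check can be scripted in a couple of lines (values from the measurement above):

```python
scan_rate = 0.100          # V/s, CV ramp slope
i_plateau = 100.6e-6       # A, measured plateau current
c_derived = i_plateau / scan_rate
print(c_derived)           # 1.006e-03 F, i.e. 1006 µF
```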
§.§.§ Constant-current charge/discharge
Another commonly used electrochemical technique is to apply a constant current and to observe the evolution of the measured potential over time. In analytical redox chemistry, this method is known as coulometric titration. In the field of battery research, it can be used to determine the capacity of an electrode material. Using the galvanostatic mode of the potentiostat, such a measurement technique is easily implemented. Figure <ref> shows a charge/discharge measurement running on the dummy cell. Before starting a measurement, the following parameters need to be set in the “Charge/Disch.” tab:
* Upper bound and lower bound: during the charging phase, the charge current is applied until the measured potential reaches the upper bound. This marks the end of the charging phase and the beginning of the discharging phase. During the discharging phase, the discharge current is applied until the measured potential reaches the lower bound. This marks the end of the discharging phase and the beginning of the next charging phase, thus repeating the cycle.
* Charge and discharge current: sets the applied currents (in A) during the charge, resp. discharge phases.
* Number of half cycles: sets the total number of charge and discharge phases. As opposed to counting full cycles, this allows the experimenter to carry out e.g. a single charge measurement (one half cycle), or a charge/discharge/charge measurement (three half cycles).
* Samples to average: has the same function as explained earlier for cyclic voltammetry. Set it to the desired acquisition period, divided by 90 ms; higher values reduce noise, but result in slower data acquisition.
The meaning of these parameters in the context of a typical charge/discharge measurement is illustrated graphically in Figure <ref>.
After setting the measurement parameters and choosing an output filename, clicking the “Start charge/discharge” button will start the charge/discharge process. During the measurement, the current half cycle is indicated in the “Information” box, and the plot window shows a continuously updating plot of the potential vs. the charge. The charge is calculated as the absolute value of the product of the (constant) current and the elapsed time.
For the dummy cell, the resulting measurement is shown in Figure <ref>. In the charge phase, using a constant current I of +100 µA, the potential over the resistor U_R has a constant value of +100 mV, and the potential over the capacitor U_C increases linearly at a rate of dU_C/dt = 100 mV/s. This corresponds to the observed behavior. In the discharge phase, the same current is applied in the opposite direction. This causes the potential to decrease linearly at the same rate. The inserted/extracted charge over the 1.8 V potential window equals 100 µA × (1.8 V / 0.1 V/s) = 1.8 mC = 500 nAh, which is in agreement with the value indicated on the horizontal axis.
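The same bookkeeping in code (nominal dummy-cell values):

```python
i_cc   = 100e-6                 # A, constant charge/discharge current
window = 1.8                    # V, potential window
du_dt  = 0.1                    # V/s, slope of U_C at this current
q = i_cc * window / du_dt       # 1.8e-3 C per half cycle
print(q / 3600)                 # 5e-07 Ah, i.e. 500 nAh
```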
§.§.§ Rate testing
Specifically in battery research, the experimenter is often interested in the “rate behavior” of a test cell. This refers to the influence of the charge/discharge current on the measured cell capacity; typically, the capacity decreases with increasing current due to kinetic limitations. The current is expressed as a “C-rate”, where 1C is defined as the current necessary to charge/discharge the cell to its theoretical capacity in exactly one hour.
Using the “Rate testing” feature which is accessible from the correspondingly named tab in the user interface program (see Figure <ref>), the measurement of this rate behavior can be automated. For each C-rate (multiple values may be separated by commas), a charge/discharge measurement is ran between the lower and upper potential bounds (as explained in the previous section) for a configurable number of cycles. The charge/discharge current is calculated as:
I [A] = ±C [Ah] × C-rate [h^-1]
with the positive sign for the charge current, and the negative sign for the discharge current.
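A small helper mirroring this equation (hypothetical, not part of the distributed software):

```python
def cd_currents(capacity_Ah, c_rates):
    """Return (charge, discharge) current pairs in amperes for each C-rate."""
    return [(capacity_Ah * r, -capacity_Ah * r) for r in c_rates]

# e.g. the 500 nAh dummy cell swept from C/2 to 5C:
print(cd_currents(500e-9, [0.5, 1, 2, 5]))
```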
During the measurement, the plot continuously updates itself to show the charge/discharge capacity measured in the final cycle as a function of the C-rate. The measurement on the dummy cell (Figure <ref>) reveals a capacity which decreases linearly with the C-rate; indeed, the potential that is “lost” over the resistor is proportional to the applied current. The fact that the capacity decreases with the current, just like for a real battery, is no coincidence; in fact, the resistor in the dummy cell can be considered to represent the internal resistance encountered in real batteries.
§ USE CASE: A THIN-FILM LITHIUM-ION BATTERY ELECTRODE
In the following section, the utility of this low-cost potentiostat will be demonstrated by an example use case in the context of thin-film lithium-ion batteries. A lithium-ion test cell was constructed using a PTFE body filled with electrolyte (1 M LiClO4 in propylene carbonate) which was clamped against a 40 nm anatase TiO2 film on a TiN-coated silicon substrate, giving an active area of 0.95 cm^2. An electrical contact was made by applying conductive silver ink on the cleaved sides of the substrate in order to connect the TiN current collector layer to a piece of copper foil. This formed the working electrode. The counter electrode and reference electrode consisted of lithium metal strips dipped into the electrolyte.
As lithium-ion cells are incompatible with moisture and oxygen, the whole cell was constructed inside an argon glovebox. Thanks to its small size, the potentiostat could also be put inside the glovebox, needing only a USB cable feedthrough. Moving the potentiostat close to the test cell (inside the glovebox) greatly reduces noise and interference, compared to having it outside and extending the cell connection leads.
Cyclic voltammetry patterns were acquired at scan rates between 0.5 mV/s and 5 mV/s and are shown in Figure <ref>. The cathodic and anodic peaks are clearly visible around 1.7 V and 2.2 V, respectively, as expected for anatase TiO2 <cit.>. As the scan rate increases, the available time for charge transfer decreases, causing the peak current to increase. The separation between the cathodic and anodic peak currents also increases; this is a kinetic effect, caused by the limited speed at which the lithium ions can diffuse through the TiO2 film. This makes such a CV experiment useful for studying the electrode kinetics. A thorough study can already be found in the literature <cit.> and will not be repeated here.
The same electrode was subjected to charge/discharge cycling at a constant current of ±0.18 µA/cm^2 (positive sign for charge, negative for discharge) and between the same potential limits as in the CV experiment. The potential evolution, plotted as a function of the inserted/extracted charge, is shown in Figure <ref>. It shows characteristic plateaus at 1.85 V (charge) and 1.75 V (discharge). The final capacities are 3.03 µAh/cm^2 (charge) and 3.09 µAh/cm^2 (discharge), yielding a coulombic efficiency (defined as the ratio between the delithiation and the lithiation capacities) of 98.1%. The measured capacity corresponds to the insertion/extraction of approx. 0.6 Li^+ per unit of TiO2, according to the following calculation:
#Li = (C_measured × M) / (d × ρ × N_A × q_e)
with C_measured ≈ 0.011 C/cm^2, d = 40×10^-7 cm, ρ = 3.8 g/cm^3, M = 79.9 g/mol, N_A Avogadro's constant, and q_e the elementary charge.
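Evaluated numerically (the physical constants are standard; the other values are quoted above):

```python
N_A, q_e = 6.022e23, 1.602e-19       # Avogadro constant, elementary charge
C_meas = 0.011                        # C/cm^2, measured areal capacity
d, rho, M = 40e-7, 3.8, 79.9          # cm, g/cm^3, g/mol (anatase TiO2)

n_li = C_meas * M / (d * rho * N_A * q_e)
print(round(n_li, 2))                 # 0.6 Li+ per TiO2 unit
```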
Taking the average of the measured charge and discharge capacity and defining 1C as 3.06 µAh/cm^2, the current density of ±0.18 µA/cm^2 can be equivalently expressed as a C-rate of approx. C/17, which is a rather slow rate. The charge/discharge measurements were repeated with increasingly higher currents using the rate testing mode of the potentiostat. The capacity was determined for each C-rate and the result is shown in Figure <ref>. The results indicate that the capacity decreases quickly with an increasing C-rate, demonstrating the sluggish kinetics of the TiO2 electrode. At a rate of C/2, only approx. 50% of the C/10 capacity remains. At very high C-rates (≥10C), the slope of the decline becomes flatter; in this region, only a small pseudo-capacitive contribution to the capacity remains.
§ LIMITATIONS AND POSSIBLE MODIFICATIONS
Although this potentiostat design may be used “as-is” for many applications, some may have requirements that exceed its capabilities in its current state. Because the design is fully open source, it is entirely possible to modify it in order to accommodate custom requirements. In particular:
* In the present design, the maximum current of 25 mA is limited by (1) the current that can be delivered by the charge pump, and (2) the maximum output current of the control amplifier U7A. A higher current could be achieved by (1) replacing the charge pump by an external ±9 V power supply (which must be able to supply the desired current), (2) including a high-current buffer stage (e.g. a pair of medium current transistors in a class AB amplifier configuration), and (3) adding a low-value current shunt resistor (e.g. 1 Ω for a maximum current of 250 mA) and a ranging relay.
* The sampling period of 90 ms may be too slow in case of fast electrochemical processes. It can be straightforwardly lowered to approx. 17 ms by replacing the MCP3550-type ADCs with the MCP3553 and adjusting the polling interval in the Python software. Even faster sampling would require more elaborate modifications.
* A new experimental technique may be required. As these are implemented in the software running on the host computer, it is relatively straightforward to add a new technique by editing the Python source code.
T. Dobbelaere thanks the Fund for Scientific Research - Flanders (FWO) for financial support.
99
sup Supplementary material: GitHub repository, <https://github.com/thomasdob/tdstatv3>
friedman2012 E. S. Friedman, M. A. Rosenbaum, A. W. Lee, D. A. Lipson, B. R. Land, and L. T. Angenent, “A cost-effective and field-ready potentiostat that poises subsurface electrodes to monitor bacterial respiration”, Biosensors and Bioelectronics, vol. 32, no. 1, pp. 309–313, 2012.
rowe2011 A. A. Rowe et al., “CheapStat: An Open-Source, ‘Do-It-Yourself’ Potentiostat for Analytical and Educational Applications”, PLOS ONE, vol. 6, no. 9, p. e23783, 2011.
dryden2015 M. D. M. Dryden and A. R. Wheeler, “DStat: A Versatile, Open-Source Potentiostat for Electroanalysis and Integration”, PLOS ONE, vol. 10, no. 10, p. e0140349, 2015.
bard_faulkner A.J. Bard and L.R. Faulkner, “Electrochemical methods: fundamentals and applications”, 2nd ed., John Wiley & Sons, Inc., New York, 2001.
squidstat Admiral Instruments, Squidstat Solo. <https://www.admiralinstruments.com/product-page/squidstat-solo> (accessed August 23, 2017)
opax192 Texas Instruments, OPAx192 datasheet. <http://www.ti.com/lit/gpn/opa192> (accessed August 23, 2017)
mcp3550 Microchip Technology Inc., MCP3550/1/3 datasheet. <http://ww1.microchip.com/downloads/en/DeviceDoc/20001950F.pdf> (accessed August 23, 2017)
dac1220 Texas Instruments, DAC1220 datasheet. <http://www.ti.com/lit/gpn/dac1220> (accessed August 23, 2017)
adr421 Analog Devices, ADR421 datasheet. <http://www.analog.com/media/en/technical-documentation/data-sheets/ADR420_421_423_425.pdf> (accessed August 23, 2017)
pic16f1459 Microchip Technology Inc., PIC16F1454/5/9 datasheet. <http://ww1.microchip.com/downloads/en/DeviceDoc/40001639B.pdf> (accessed August 23, 2017)
kicad KiCad EDA. <http://kicad-pcb.org/> (accessed August 23, 2017)
mstack Alan Ott, Signal 11 Software, M-stack. <http://www.signal11.us/oss/m-stack/> (accessed August 23, 2017)
libusb libusb. <http://libusb.info/> (accessed August 23, 2017)
numpy NumPy. <http://www.numpy.org/> (accessed August 23, 2017)
scipy SciPy. <https://www.scipy.org/> (accessed August 23, 2017)
pyusb PyUSB. <https://github.com/walac/pyusb> (accessed August 23, 2017)
pyqt Riverbank Computing, PyQt. <https://www.riverbankcomputing.com/software/pyqt/> (accessed August 23, 2017)
pyqtgraph PyQtGraph. <http://pyqtgraph.org/> (accessed July 28, 2017)
anaconda Continuum Analytics, Anaconda. <https://www.continuum.io/downloads> (accessed August 23, 2017)
wang2007 J. Wang, J. Polleux, J. Lim, and B. Dunn, “Pseudocapacitive Contributions to Electrochemical Energy Storage in TiO2 (Anatase) Nanoparticles”, J. Phys. Chem. C, vol. 111, no. 40, pp. 14925–14931, 2007.
| §.§ Potentiostat basics
The potentiostat is an essential tool in electrochemical research. It allows the experimenter to apply a potential to a system (i.e. an electrochemical cell) and measure the resulting current, or vice versa. The unique property of the potentiostat – the thing that differentiates it from a simple combination of an adjustable voltage source and an ammeter – is that it can do so while keeping the path where the current flows separate from the path where the potential is measured. This is necessary in electrochemical cells because potentials are usually measured against a “reference electrode” which only provides an accurate and stable potential in equilibrium conditions, i.e. when it is not disturbed by any current flow. Using two terminals for current flow and two others for potential sensing, a total of four electrode connections are needed; they are usually named as follows:
* Working electrode (abbreviation: WE)
* Counter electrode (abbreviation: CE)
* Sense electrode (abbreviation: SE)
* Reference electrode (abbreviation: RE)
In a four-electrode connection scheme, the potential is measured (and no current flows) between SE and RE, and current is applied (regardless of the voltage drop) between WE and CE.[It should be noted that the potentiostat is functionally equivalent to a source-measure unit with separate force/sense lines, with the WE/CE pair corresponding to the “force” connections and the SE/RE pair corresponding to the “sense” connections.]
In most electrochemical cells, the SE and WE are tied together (and still referred to as WE), resulting in a three-electrode connection scheme: the potential is measured between WE and RE, and current flows between WE and CE. This is illustrated in Figure <ref>. Using this scheme, one would naively assume that it is impossible to control the potential; as there is zero current flow, the potential can only be measured and not forced to a certain value.
The potentiostat can, however, apply current between WE and CE, which in turn influences the potential between WE and RE; thereby, using a feedback loop, any desired potential between WE and RE can be achieved by applying whatever current is necessary between WE and CE. This control mode is called the “potentiostatic mode”: it allows the user to set a desired potential, and the potentiostat will try to reach that potential by adjusting the current.
Alternatively, it is often useful to have control over the current instead (i.e. allowing the user to set a desired current, no matter what the resulting potential may be); this is called the “galvanostatic mode”. The presented circuit elegantly combines both of these modes by making it possible to switch the feedback path between the potentiostatic and the galvanostatic modes, as shown in Figure <ref>. On a basic level, the circuit operates as follows:
* A digital-to-analog converter (DAC) outputs an electrical signal representing either the desired potential (in the potentiostatic mode) or the desired current (in the galvanostatic mode).
* An operational amplifier compares this to the measured potential (in the potentiostatic mode) or the measured current (in the galvanostatic mode), and drives current into the CE until the measured value equals the DAC setpoint.
* Both the measured potential and the measured current are fed into an analog-to-digital converter (ADC) for data acquisition purposes.
§.§ Comparison to previously published designs
Although a number of similar open source, “do-it-yourself” potentiostat designs have recently been published <cit.>, these designs are not suitable for battery characterization. The Friedman et al. design <cit.> does not allow the working electrode potential to be scanned (it can only be adjusted to a fixed value in hardware), limiting its use to chronoamperometry (i.e. recording the working electrode current as a function of time). The CheapStat <cit.> supports a number of electrochemical techniques including cyclic voltammetry, but its potential range is limited to ±1 V. The DStat <cit.> improves upon the CheapStat in several ways, and has impressive low-current capabilities, but it is still limited to a potential range of ±1.5 V. Although these potential ranges are wide enough for many aqueous electrochemistry experiments, they are insufficient for e.g. lithium-ion batteries which can reach cell potentials over 4 V. The potential range of the design described in this paper is ±8 V.
Another highly desirable feature for battery characterization is the inclusion of a galvanostatic mode. The aforementioned designs implement the “adder potentiostat” topology <cit.> which only provides potentiostatic control. The presented design has a different topology which enables switching between potentiostatic control and galvanostatic control with a single (digitally controlled) switch.
§.§ Comparison to commercial instruments
While there are plenty of commercial instruments which can be bought from manufacturers such as Metrohm Autolab, Bio-Logic, Gamry, Ivium Technologies, CHI, Pine Research, Admiral Instruments, etc. to fulfill the same purpose, including models which provide wider current ranges, higher sample rates, and more measurement techniques (e.g. including impedance spectroscopy), the price of these instruments generally ranges from $2000 up to $20000 and more. To the author's best knowledge, the lowest-cost commercial instrument which could substitute for the presented design is the Squidstat Solo, sold by Admiral Instruments for a retail price of $1900 <cit.>. While it has a higher sample rate (1 ms/sample, versus 90 ms for the presented design), a slightly higher potential range (±10 V versus ±8 V) and similar current ranges (±3 µA to ±25 mA versus ±2 µA to ±20 mA), it has worse potential and current resolution (16-bit versus 22-bit), it is approx. 20 times more expensive, and its hardware and software cannot be freely modified.
http://arxiv.org/abs/1701.08211v2 | 20170127223139 | A microscopic nucleon spectral function for finite nuclei featuring two- and three-nucleon short-range correlations: I. The model vs. ab-initio calculations for the three-nucleon systems | [
"Claudio Ciofi degli Atti",
"Chiara Benedetta Mezzetti",
"Hiko Morita"
] | nucl-th | [
"nucl-th"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08009v2 | 20170127104905 | Bose - Einstein condensation of triplons with a weakly broken U(1) symmetry | [
"Asliddin Khudoyberdiev",
"Abdulla Rakhimov",
"Andreas Schilling"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07524v2 | 20170126000348 | The Role of Transmitter Cooperation in Linear Interference Networks with Block Erasures | [
"Yasemin Karacora",
"Tolunay Seyfi",
"Aly El Gamal"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
The Role of Transmitter Cooperation in Linear Interference Networks with Block Erasures
Yasemin Karacora, Tolunay Seyfi and Aly El Gamal
ECE Department, Purdue University
Email: {ykaracor,tseyfi,elgamala}@purdue.edu
Received: date / Accepted: date
=====================================================================================================================================
In this work, we explore the potential and optimal use of transmitter cooperation in wireless interference networks with deep fading conditions. We consider a linear interference network with K transmitter-receiver pairs, where each transmitter can be connected to two neighboring receivers. Long-term fluctuations (shadow fading) in the wireless channel can lead to any link being erased with probability p. Each receiver is interested in one unique message that can be available at two transmitters. The considered rate criterion is the average per user degrees of freedom (puDoF) as K goes to infinity. Prior to this work, the optimal assignment of messages to transmitters were identified in the two limits p → 0 and p → 1. We identify new schemes that achieve average puDoF values that are higher than the state of the art for a significant part of the range 0 < p < 1. The key idea to our results is to understand that the role of cooperation shifts from increasing the probability of delivering a message to its intended destination at high values of p, to interference cancellation at low values of p. Our schemes are based on an algorithm that achieves the optimal DoF value in any network realization, when restricted to a given message assignment as well as the use of zero-forcing schemes.
§ INTRODUCTION
Our focus in this work is to analyze information theoretic models of interference networks that capture the effect of deep fading conditions through introducing random link erasure events in blocks of communication time slots. More specifically, in order to consider the effect of long-term fluctuations (deep fading or shadowing), we assume that communication takes place over blocks of time slots, and independent link erasures take place with a probability p in each block. Further, short-term channel fluctuations allow us to assume that in each time slot, all non erased channel coefficients are drawn independently from a continuous distribution; this is known as the assumption that the channel is generic.
We are interested in understanding the role of transmitter cooperation (also known as Coordinated Multi-Point (CoMP) Transmission) in these dynamic interference networks. In particular, if each message can be assigned to more than one transmitter, with a restriction only on the maximum number of such transmitters, without any constraint on their identity, what would be the optimal assignment of messages to transmitters and corresponding transmission scheme that maximizes the average rate over all possible realizations of the network? To simplify analysis, we consider the linear interference network introduced in <cit.>, where each transmitter can only be connected to the receiver having the same index as well as one following receiver. The channel capacity criterion we consider is the pre-log factor of the sum capacity at high Signal to Noise Ratio (SNR), also known as the Degrees of Freedom (DoF). Because our goal is to understand the optimal pattern of transmitter cooperation that scales in large networks, we consider the DoF normalized by the number of transmitter-receiver pairs, and take the limit as that number goes to infinity; we call this the per user degrees of freedom (puDoF).
In <cit.>, the considered setting was studied, where first the case where each message can only be available at a single transmitter was analyzed. The optimal assignment of messages to transmitters, and the value of the average puDoF were identified as a function of the erasure probability p.
In this work, we extend the work of <cit.> by studying the case where each message can be available at two transmitters, and transmitter cooperation is allowed. The optimal message assignment in the limits p → 1 and p → 0 were identified in <cit.> and <cit.>, respectively. As p → 1, each message is assigned to the two transmitters connected to its destination, to maximize the probability of successful delivery. As p → 0, the puDoF value goes to 4/5, and is achieved by splitting the network into subnetworks; each has five transmitter-receiver pairs. In order to avoid interference between the subnetworks, the last transmitter in each subnetwork is inactive. And hence, each of the first and last messages in each subnetwork is only assigned to one of the two transmitters connected to its destination, and the other assignment is used at a transmitter not connected to its destination, but connected to another receiver that is prone to interference caused by this message. Further, the middle message in each subnetwork is not transmitted. We find, through simulations, in this work that assigning that middle message to only one transmitter connected to its destination, and another transmitter not connected to its destination, leads to better rates than assigning it to the two transmitters connected to its destination at low values of p. That implies that a fraction of 3/5 of the messages are assigned to only one of the two transmitters connected to their destination, and the remaining 2/5 are assigned to the two transmitters connected to their destination. We show in this work, that at any value of p from 0 to 1, the assignment achieving the highest puDoF using an optimal zero-forcing scheme, has a fraction of f(p) of messages that are assigned to only one of the transmitters connected to their destination, and another transmitter used for interference cancellation, and the remaining fraction 1-f(p) of messages are assigned to the two transmitters connected to their destination. The value of f(p) decreases monotonically from 3/5 to 0 as p increases from 0 to 1, which agrees with the intuition about the shifting role of cooperative transmission from canceling interference to increasing the probability of successful delivery as p increases from 0 to 1.
§ SYSTEM MODEL AND NOTATION
We use the standard model for the K-user interference channel with single-antenna transmitters and receivers,
Y_i(t) = ∑_j=1^K H_i,j(t) X_j(t) + Z_i(t),
where t is the time index, X_j(t) is the transmitted signal of transmitter j, Y_i(t) is the received signal at receiver i, Z_i(t) is the zero mean unit variance Gaussian noise at receiver i, and H_i,j(t) is the channel coefficient from transmitter j to receiver i over the time slot t. We remove the time index in the rest of the paper for brevity unless it is needed. Finally, we use [K] to denote the set {1,2,…,K}.
§.§ Channel Model
Each transmitter can only be connected to its corresponding receiver as well as one following receiver, and the last transmitter can only be connected to its corresponding receiver.
In order to consider the effect of long-term fluctuations (shadowing), we assume that communication takes place over blocks of time slots, and let p be the probability of block erasure. In each block, we assume that for each j, and each i ∈{j,j+1}, H_i,j=0 with probability p. Moreover, short-term channel fluctuations allow us to assume that in each time slot, all non-zero channel coefficients are drawn independently from a continuous distribution. Finally, we assume that global channel state information is available at all nodes.
§.§ Message Assignment
For each i ∈ [K], let W_i be the message intended for receiver i, and T_i ⊆ [K] be the transmit set of receiver i, i.e., those transmitters with the knowledge of W_i. The transmitters in T_i cooperatively transmit the message W_i to the receiver i. The messages {W_i} are assumed to be independent of each other. Each message can only be available at two transmitters,
| T_i| ≤ 2, ∀ i∈[K].
§.§ Degrees of Freedom
The total power constraint across all the users is P. In each block of time slots, the rates R_i(P) are achievable if the decoding error probabilities of all messages can be simultaneously made arbitrarily small as the block length goes to infinity, and this holds for almost all realizations of non-zero channel coefficients. The sum capacity 𝒞_Σ(P) is the maximum value of the sum of the achievable rates. The total number of degrees of freedom (η) is defined as lim sup_P →∞ C_Σ(P)/log P. For a K-user channel, and a probability of block erasure p, we let η_p(K) be the average value of η over possible choices of non-zero channel coefficients.
We further define the asymptotic per user DoF (puDoF) τ_p to measure how η_p(K) scales with K.
τ_p = lim_K→∞η_p(K)/K
§.§ Zero-forcing (Interference Avoidance) Schemes
We consider in this work the class of interference avoidance schemes, where all interference is cancelled over the air. Each message is either not transmitted or allocated one degree of freedom.
Accordingly, every receiver is either active or inactive. An active receiver does not observe interfering signals.
§ OPTIMAL ZERO-FORCING SCHEME
We make the following definition of a cluster of users within the K-user network.
We say that a set of users with consecutive N indices (say having indices in the set [N]={1,2,⋯,N}) form a cluster if all the diagonal links exist, i.e., H_i+1,i≠ 0, ∀ i∈[N-1], and the diagonal link between the last transmitter in the cluster and the following receiver is erased, i.e., H_N+1,N=0.
A cluster as defined above is given as an input to Algorithm 1. The output of the algorithm is the transmit signals {X_i,i∈[N]} that employs zero-forcing transmit beamforming to maximize the DoF value for users within the cluster.
For each message W_i, we define four binary variables; namely b_i,j, j ∈{i-2, i-1, i, i+1}. These are initialized to zero.
We look at every message starting from W_1 to W_N and evaluate the conditions under which a message can be sent and decoded at its desired receiver, such that no interference occurs. If a decision is made to send message W_i from transmitter j, the corresponding variable b_i,j is set to one.
Since message W_i can be sent to its destination using either transmitter i or i-1, there are two cases that are considered in the algorithm. In the following we are discussing and justifying both cases. Note that users one and two are considered separately in lines 4-15 since they represent a special case due to their position at the beginning of the cluster.
Case 1: In the first part of the for-loop starting at line 16, we check if message W_i can be sent from transmitter i-1. This is only possible if message W_i is available at transmitter i-1 and transmitter i-1 does not send W_i-1. Furthermore, we have to make sure that while sending W_i, transmitter i-1 does not cause interference at receiver i-1. There are three possibilities, for which message W_i can be decoded without interference. The trivial one is that the link between transmitter i-1 and receiver i-1 does not exist. Another possible scenario is that receiver i-1 is not able to decode its desired message anyway, i.e. W_i-1 is not sent from transmitter i-2. If these conditions are satisfied, then the variable b_i,i-1 is set to 1. Otherwise, if W_i does interfere with W_i-1 at receiver i-1, we might still be able to remove the interference by sending a signal from transmitter i-2 such that it will cancel the interference at receiver i-1.
This is possible as long as the following conditions hold: First, Message W_i must be available at transmitter i-2 as well (i.e. (i-2) ∈𝒯_i). Furthermore, we have to make sure that the signal sent for interference cancellation does not cause interference at receiver i-2. This is guaranteed, if either H_i-2,i-2 = 0 or receiver i-2 is not able to decode its desired message anyway. In this case, not only b_i,i-1 but also b_i,i-2 is set to 1.
Case 2: Now we consider the case of sending message W_i from transmitter i (lines 25-31). Here, the trivial conditions to make this possible are that H_i,i exists, message W_i is available at transmitter i, and W_i is not being delivered through transmitter i-1. This time, we have to make sure that receiver i can decode message W_i without any interference. This holds if transmitter i-1 is not active. Then b_i,i is set to 1.
Similar to the previous case, we can also cancel the interference from transmitter i-1 as long as message W_i-1 is available at transmitter i and W_i-1 is the only message that causes interference at receiver i. If these conditions hold, both b_i,i and b_i-1,i are set to 1.
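To make the control flow concrete before turning to the optimality proof, the following Python sketch implements the greedy pass. It keeps only the direct-transmission decisions and omits the b_i,i-2 / b_i-1,i cancellation branches discussed above, so it lower-bounds Algorithm 1 rather than reimplementing it; all names are illustrative.

```python
def greedy_zero_forcing(N, H, T):
    """Greedy pass over a cluster of users 1..N.

    H[(i, j)] is True when the link from transmitter j to receiver i
    exists; T[i] is the set of transmitters knowing W_i. Simplified:
    no interference-cancellation branches, so this under-counts
    relative to the full Algorithm 1.
    """
    active_tx, active_rx, delivered = set(), set(), []
    for i in range(1, N + 1):
        for j in (i - 1, i):                  # prefer transmitter i-1
            if j < 1 or j not in T[i] or not H.get((i, j), False):
                continue
            if j in active_tx:                # one message per transmitter
                continue
            # sending from tx i-1 must not hit an active receiver i-1
            if j == i - 1 and j in active_rx and H.get((j, j), False):
                continue
            # sending from tx i: receiver i must not hear an active tx i-1
            if j == i and (i - 1) in active_tx and H.get((i, i - 1), False):
                continue
            active_tx.add(j)
            active_rx.add(i)
            delivered.append(i)
            break
    return delivered
```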
We now prove the following result to justify Algorithm 1.
Given any assignment of messages to transmitters, such that each message can only be available at two transmitters, Algorithm 1 leads to the DoF-optimal zero-forcing transmission scheme for users within the input cluster.
We consider the messages in ascending order from W_1 to W_N, and check which transmitter can deliver message W_i such that it can be decoded at its desired receiver and without interfering at any previous active receiver. If this is true, we will transmit the message. Also, if this is possible through any of the transmitters i and i-1, then we prefer to transmit W_i from transmitter i-1. In the following, we prove by induction that this procedure leads to the optimal transmission scheme. In a first step, we consider the base case, i.e. we prove that sending W_1 from transmitter 1 is always optimal as long as it is available and the direct link exists. More precisely, as long as 1 ∈ T_1 and H_1,1≠ 0.
We define ℱ to be the subset of all links H_i,j through which a message W_i, i ∈ [K], can be sent and decoded at its desired receiver, and call it the feasible set.
In other words, all links in ℱ satisfy the trivial conditions for transmission; namely j ∈ T_i and H_i,j ≠ 0.
Let 𝒮 ⊂ ℱ ∖ {H_1,1} be an arbitrary set of links that can be used simultaneously to deliver messages to their desired receivers while eliminating interference.
Starting with any set 𝒮, if H_1,1 ∈ ℱ, we either add H_1,1 to 𝒮 or replace the first link in 𝒮 by H_1,1 if there is a conflict. We claim that this replacement cannot decrease the DoF. This is because, on one hand, the first active receiver in the network never observes interference. Also, if we send W_1 from the first transmitter, this can only cause interference at the second receiver, but as H_2,j, j ∈ {1,2}, is either not in 𝒮 or is the first link in 𝒮 and hence replaced by H_1,1, the transmission of W_1 does not prevent any other message corresponding to subsequent links in 𝒮 from being decoded at its destination. As a consequence, it is always optimal to transmit W_1 from the first transmitter as long as 1 ∈ T_1 and H_1,1 ≠ 0.
Next, we extend the proof to all users by induction. The induction hypothesis is as follows.
We consider an arbitrary link H_i,j ∈ ℱ. Let 𝒮_1 ⊂ ℱ be the set of links H_k,l through which the subset of messages {W_k, k < i} can be delivered simultaneously to their destinations, while eliminating interference. Assume that all links in 𝒮_1 are chosen optimally, i.e. the number of delivered messages cannot be increased by changing any of these links.
Then, we perform the induction step. Let 𝒮_2 ⊂ ℱ be any set of links H_k,l through which a subset of the messages {W_k, k > i} can be transmitted simultaneously such that they can be decoded at their destinations. Again, the links in 𝒮_2 are chosen optimally to maximize the number of delivered messages. If it is possible to send W_i through H_i,i-1 without causing a conflict with any of the messages that are sent through the links in 𝒮_1, the same logic applies to H_i,i-1 as to H_1,1 in the base case. More precisely, if W_i does not interfere at any previous active receiver and it can be decoded at receiver i while eliminating interference, H_i,i-1 can either be added to 𝒮_2 or replace the first link in 𝒮_2, in order to obtain an optimal set of links for the transmission of the messages {W_k, k ≥ i}. This is possible since, again, W_i does not cause interference at any active receiver with an index k > i, because any of the links {H_i+1,k, k ∈ {i,i+1}} is either not in 𝒮_2 or is the link that is replaced by H_i,i-1. If it is not possible to send W_i through H_i,i-1 without causing a conflict with any of the messages sent through the links in 𝒮_1, but it is possible to do so through H_i,i, then again the same argument applies for adding H_i,i to 𝒮_2. Further, we note that the preference to send W_i through H_i,i-1 is optimal, since H_i,i-1 may only cause a conflict with H_i+1,i in 𝒮_2, while H_i,i may cause a conflict with any of H_i+1,i and H_i+1,i+1.
Therefore, as long as the aforementioned preference rule is applied, sending a message W_i through a link H_i,j is always optimal as long as it is possible to decode W_i at receiver i without causing interference at a previous active receiver.
This simplifies the optimal algorithm in two ways. On the one hand, we can go through the links one by one and check if it is possible to send a message to its desired receiver without interfering with any of the previous active messages. If it is possible, we will always decide to send the message. On the other hand, decisions that we already made do not have to be changed later, because at each step we make sure to avoid conflicts with previously activated messages. This procedure is applied in Algorithm 1, as we illustrate below.
In the following, we derive the decision conditions for the first three messages in a cluster.
If H_1,1 ∈ ℱ, sending W_1 is optimal, as shown in the base case of the proof by induction. Hence, set b_1,1 = 1.
If H_2,1 ∈ ℱ, we have two possibilities. If b_1,1 = 1, we cannot send W_2 from transmitter 1 as well without causing interference at the first receiver. Otherwise, if b_1,1 = 0, it is optimal to send W_2 from the first transmitter.
If H_2,2 ∈ ℱ and we are not sending the second message from the first transmitter, i.e., b_2,1 = 0, then there are two cases to consider. First, if b_1,1 = 0, then W_1 is not causing interference at the second receiver and we set b_2,2 = 1. Second, if b_1,1 = 1, we have interference from W_1 at the second receiver. However, this interference can be canceled as long as W_1 is available at transmitter 2. If this is true, set b_2,2 = 1 and b_1,2 = 1.
In the following, we consider sending message W_3 from the second transmitter if H_3,2 ∈ ℱ. We first consider the case where b_2,2 = 1. In this case, transmitter 2 is used to deliver W_2, and even if it can be used to deliver W_3 as well without causing interference at receiver 2, this would not increase the sum DoF of the second and third messages; hence we always set b_3,2 = 0 in this case. It hence suffices to only consider the case where b_2,2 = 0. There are two cases to consider here. The first is when we can set b_3,2 = 1 and no interference cancellation for W_3 at the second receiver is needed. This is only possible when either the second receiver is not active, i.e., when b_2,1 = 0, or the second direct link is erased, i.e., H_2,2 = 0. The second case is when we can set b_3,2 = 1 while eliminating the interference caused by W_3 at the second receiver by setting b_3,1 = 1. This is only possible when W_3 is available at transmitter 1 and the first receiver is not active, i.e., b_1,1 = 0.
Next, we check the possibilities to send W_3 from the third transmitter if H_3,3 ∈ ℱ and b_3,2 = 0. If the second transmitter is inactive, then we set b_3,3 = 1. Note that the second transmitter is inactive if b_2,2 = 0, since in this case we also know that b_1,2 = 0. Otherwise, if b_1,2 = 1, the interference caused by sending W_1 cannot be canceled, because W_1 is already assigned to the first two transmitters and hence cannot also be assigned to transmitter 3. Finally, if b_2,2 = 1 and b_1,2 = 0, then it is possible to set b_3,3 = 1, as long as the interference caused by W_2 at the third receiver can be canceled through the third transmitter, which is possible only when 3 ∈ T_2.
Since each message can only be available at two transmitters, it is not necessary to consider the users before receiver i-2 to decide whether W_i can be transmitted. As a consequence, the conditions for sending message W_3, if generalized to W_i, basically apply to all following messages as well. There is only one additional aspect that have to be considered. If we generalize the case where b_i,i-2=1 for interference cancellation, we have to make sure that W_i is not causing interference at an active receiver i-2. That means for i-2 > 1, not only b_i-2,i-2 = 0 but also it is either the case that b_i-2, i-3=0 or H_i-2,i-2=0.
We now show that Algorithm 1 can be lead to the optimal zero-forcing scheme in a general K-user network.
Algorithm 1 can be used to achieve the optimal zero-forcing DoF for any realization of a general K-user network.
If all diagonal links exist, then the whole network is given as input to Algorithm 1. Otherwise, we scan the diagonal links H_i,i-1, i ∈ {2,⋯,K}, in ascending order with respect to the index i. Let i_min be the minimum index such that H_i,i-1 = 0; then the users with indices {1,2,⋯,i_min-1} form a cluster. For any other index i such that H_i,i-1 = 0, let j be the largest index less than i such that H_j,j-1 = 0; then the users with indices {j,j+1,⋯,i-1} form a cluster. Finally, for the largest index i such that H_i,i-1 = 0, the users with indices {i,i+1,⋯,K} form a cluster. The network is now partitioned into clusters; each is given as input to Algorithm 1, which achieves the optimal zero-forcing DoF within the cluster, as follows from Lemma <ref>.
The proof then follows by observing that any assignment of a message outside its cluster can be ignored without loss in optimality. This follows from <cit.> because no transmitter in a cluster is connected to a receiver outside the cluster.
§ SIMULATION
Using Algorithm 1, we can determine the optimal transmission scheme for a given network realization and message assignment. In this section, we apply this algorithm to compute the DoF as a function of the erasure probability p for several network sizes and message assignments. In particular, we find schemes that outperform those presented in <cit.> for a wide range of the open interval 0<p<1.
To compute the average puDoF at a certain p for a given message assignment, we simulate a sufficiently large number n of channel realizations, where links are erased with probability p, and apply Algorithm 1 to each realization by partitioning the network into clusters as in the proof of Theorem <ref>. The puDoF value is then computed as the average number of decoded messages divided by the network size K. In order to ensure that the computed value holds for large networks, we deactivate the last transmitter in the network, so that if we have a large network that consists of concatenated subnetworks; each of size K, then we can achieve the computed puDoF value in the large network by repeating the scheme for each subnetwork, since there will be no inter-subnetwork interference.
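A sketch of that Monte Carlo procedure is given below, reusing the greedy_zero_forcing() stub from the earlier sketch (and therefore under-counting relative to the full Algorithm 1); the cluster split follows Definition <ref>.

```python
import random

def simulate_pudof(K, p, T, trials=2000):
    """Monte Carlo puDoF estimate for message assignment T."""
    total = 0
    for _ in range(trials):
        # draw one realization: each link survives with probability 1-p
        H = {(i, j): random.random() > p
             for j in range(1, K + 1) for i in (j, j + 1) if i <= K}
        start = 1
        for end in range(1, K + 1):
            if end < K and H[(end + 1, end)]:
                continue                        # cluster keeps growing
            n = end - start + 1                 # users start..end form a cluster
            Hc = {(a - start + 1, b - start + 1): v
                  for (a, b), v in H.items()
                  if start <= b <= end and start <= a <= end}
            # assignments outside the cluster can be ignored (Theorem proof)
            Tc = {a - start + 1: {t - start + 1 for t in T[a]
                                  if start <= t <= end}
                  for a in range(start, end + 1)}
            total += len(greedy_zero_forcing(n, Hc, Tc))
            start = end + 1
    return total / (trials * K)
```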
The simulation is done for a set of message assignments with different fractions f(p) of messages that are assigned to one transmitter connected to their desired receiver and another transmitter that can be used to cancel interference, while the remaining fraction of 1-f(p) of messages are assigned to both transmitters that are connected to their destination. Furthermore, we vary the network size K.
More precisely, we use the following assignment strategy:
𝒯_i =
  {1, 2}       if i = 1,
  {K-2, K-1}   if i = K,
  {i, i+1}     if i = 1 + n · max{2, ⌊K/(f(p)·K - 1)⌋} for some n ∈ {1, 2, …, min{f(p)·K - 2, ⌊K/2 - 1⌋}},
  {i, i+1}     if i = 2n for some n ∈ {1, 2, …, ⌈(f(p) - 1/2)·K⌉ - 1},
  {i-1, i}     otherwise,
where we use the notation {1,2, ...,x} to denote the set [x] when x ≥1 and the empty set when x<1.
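A possible implementation of this assignment rule follows. The grouping inside the floor/ceiling arguments is my reading of the displayed equation, so the exact thresholds should be treated as an assumption; f(p)·K is assumed to be an integer.

```python
import math

def assignment(K, f):
    """Build the transmit sets T[i] for the piecewise rule above."""
    fK = int(round(f * K))                       # assumed integer
    step = max(2, math.floor(K / (fK - 1)))
    special = {1 + n * step
               for n in range(1, min(fK - 2, K // 2 - 1) + 1)}
    even = {2 * n for n in range(1, math.ceil((f - 0.5) * K))}
    T = {1: {1, 2}, K: {K - 2, K - 1}}
    for i in range(2, K):
        T[i] = {i, i + 1} if (i in special or i in even) else {i - 1, i}
    return T
```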
First, we choose K to be 100 and vary f(p) from 1/50 up to 99/100, calculating the puDoF as a function of p for each of these message assignments.
Additionally, we vary the network size, and thus also the message assignment, by reducing the fraction f(p) by the greatest common divisor of its numerator and denominator.
As a result, the maximum puDoF that is achievable with the set of message assignments described above is shown in Figure <ref>. Compared to the schemes presented in <cit.>, there exist message assignments with a better performance. These are presented in Table <ref>. Note that in <cit.> it was shown that assignment with f(p) = 2/5 is optimal for p → 0. Interestingly, we find an assignment with f(p) = 3/5 (see the green curve in Fig. <ref>) that achieves the same puDoF for p = 0, but performs slightly better on the interval (0, 0.15]. From our results in Table <ref>, we observe that the optimal fraction f(p) decreases monotonically from 3/5 to 0 as p goes from 0 to 1.
IEEEtran
4
Wyner
A. Wyner, “Shannon-Theoretic Approach to
a Gaussian Cellular Multiple-Access Channel,” IEEE Trans.
Inf. Theory, vol. 40, no. 5, pp. 1713 –1727, Nov. 1994.
ElGamal-Veeravalli-Asilomar13
A. El Gamal, V. V. Veeravalli, “Dynamic Interference Management," in Proc. Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2013.
ElGamal-Annapureddy-Veeravalli-ICC12
A. El Gamal, V. S. Annapureddy, and V. V. Veervalli, “Degrees of freedom (DoF) of Locally Connected Interference Channels with Coordinated Multi-Point (CoMP) Transmission,” in Proc. IEEE International Conference on Communications (ICC), Ottawa, Jun. 2012.
Cover-Thomas
T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd Ed. Wiley, 2006.
http://arxiv.org/abs/1701.08013v1 | 20170127110858 | Friction of viscoelastic elastomers with rough surfaces under torsional contact conditions | [
"M. Trejo",
"C. Frétigny",
"A. Chateauminois"
] | cond-mat.soft | [
"cond-mat.soft"
] |
[][email protected]
Soft Matter Science and Engineering Laboratory (SIMM), UMR CNRS
7615,
Ecole Supérieure de Physique et Chimie Industrielles (ESPCI), Université Pierre et Marie Curie, Paris (UPMC), France
Frictional properties of contacts between a smooth viscoelastic rubber and rigid surfaces are investigated using a torsional contact configuration where a glass lens is continuously rotated on the rubber surface. From the inversion of the displacement field measured at the surface of the rubber, spatially resolved values of the steady state frictional shear stress are determined within the non homogeneous pressure and velocity fields of the contact. For contacts with a smooth lens, a velocity dependent but pressure independent local shear stress is retrieved from the inversion. On the other hand, the local shear stress is found to depend both on velocity and applied contact pressure when a randomly rough (sand blasted) glass lens is rubbed against the rubber surface. As a result of changes in the density of micro-asperity contacts, the amount of light transmitted by the transparent multi-contact interface is observed to vary locally as a function of both contact pressure and sliding velocity. Under the
assumption that the intensity of light transmitted by the rough interface is proportional to the proportion of area into contact, it is found that the local frictional stress can be expressed experimentally as the product of a purely velocity dependent term, k(v), by a term representing the pressure and velocity dependence of the actual contact area, A/A_0. A comparison between k(v) and the frictional shear stress of smooth contacts suggests that nanometer scale dissipative processes occurring at the interface predominate over viscoelastic dissipation at micro-asperity scale.
46.50+d Tribology and Mechanical contacts;
62.20 Qp Friction, Tribology and Hardness
Friction of viscoelastic elastomers with rough surfaces under torsional
contact conditions
Antoine Chateauminois
===========================================================================================
§ INTRODUCTION
Rubber friction is a topic of huge practical importance in many applications, such as tires, rubber seals, conveyor belts, and syringes, to mention only a few. However, there is an incomplete understanding of the parameters that control the frictional behavior of rubber surfaces. Since the seminal experimental work by Grosch <cit.>, rubber friction is usually assumed to involve two dissipative components. The first one, often denoted as the adhesive component, corresponds to thermally and stress activated pinning/depinning mechanisms between rubber molecules and the contacting surface. This idea forms the basis of the Schallamach model <cit.> which was subsequently extended by Chernyak and Leonov <cit.>. In a later study, Vorvolakos and Chaudhury <cit.> also showed that these models can consistently be used to describe the dependence of friction of poly(dimethylsiloxane) (PDMS) elastomers on molecular parameters such as molecular weight. The
second dissipative component involved in rubber friction is assumed to correspond to viscoelastic losses associated with the contact deformation of the soft rubber. In the case of a hard, rough surface sliding on a viscoelastic rubber, viscoelastic losses at microasperity scale occur at a characteristic frequency of the order of v/d, where v is the sliding velocity and d is a characteristic size of asperity contacts. This so-called hysteretic component to friction was first evidenced by Greenwood and Tabor <cit.> in a series of experiments, in which hard spheres and cones were sliding or rolling on well-lubricated rubber surfaces. The work by Grosch <cit.> extended these investigations to the more complex situation of rubber sliding on microscopically rough surfaces. A maximum in friction was found to occur at a sliding velocity related to the frequency with which the asperities of the rough surface deform the rubber surface. This maximum was absent on a smooth track, thus
reflecting the deformation losses induced by
the passage of the asperities over the rubber surface. These frictional mechanisms involving viscoelastic losses at microasperity scale have motivated the development of several theoretical models starting from Fourier transform analysis applied to periodic surfaces <cit.> to the more complex model developed by Persson for rubber friction on randomly rough surfaces <cit.>. Using a spectral description of the topography of the rough surfaces, Persson's theory predicts how the component of friction force associated with hysteretic losses varies with velocity and contact pressure from an estimate of the actual contact area. Some experimental results tend to support this theory <cit.> but a detailed examination of the effects of surface topography on rubber friction remains very challenging in the case of randomly rough surfaces where adhesive and hysteretic components are strongly intricate.
In a previous work <cit.>, we have investigated the friction of a PDMS rubber with model rough surfaces consisting of silica lenses covered with various densities of spherical colloidal nano-particles. From an examination of the pressure dependence of the frictional shear stress, we showed that the actual contact area was close to saturation in the whole range of applied contact load and sliding velocity. These model surfaces thus allowed us to quantify the contributions of interface dissipation and hysteretic losses to friction without the complications arising from the pressure and velocity dependence of the actual contact area. In addition, the use of a monodisperse distribution of colloidal particles allowed us to control both the characteristic frequency associated with deformation at asperity scale and the volume of the viscoelastic substrate that is affected by this deformation. Within this framework, we were able to determine experimentally the hysteretic component of friction which compares
well with theoretical calculations. In this study, we consider the more realistic situation of a viscoelastic rubber sliding against a randomly rough rigid surface where the proportion of area into contact is expected to depend on both the applied pressure and the sliding velocity. Experiments are carried out using a torsional contact configuration which, as explained below, allows investigating frictional energy dissipation at the interface without the complications arising from bulk viscoelastic losses at the scale of the macroscopic contact. In addition, the inversion of the measured displacement field at the surface of the rubber provides local values of the frictional shear stress within the non homogeneous pressure and sliding velocity fields of the contact. Local changes in the density of asperity micro-contacts are evidenced from a measurement of the amount of light transmitted through the transparent rough contact.
In a first part of this paper, we consider the case of a smooth contact where friction is likely to arise only from molecular scale dissipation at the intimate contact formed between the surfaces. In a second part, we examine the pressure and velocity dependence of the frictional shear stress within rough contacts where asperity scale viscoelastic losses are likely to come into play. We show that the measured shear stress can be expressed as the product of a velocity dependent term by a velocity and pressure dependent term which describes the changes in the actual contact area as a function of nominal contact pressure and sliding velocity. From a comparison between the smooth and rough contacts, we discuss in a last part the contributions of interface dissipation and hysteretic losses to friction.
§ EXPERIMENTAL DETAILS
§.§ Materials and sample preparation
As a substrate, we use an epoxy based rubber obtained by crosslinking diglycidyl ether of bisphenol A (DER 332, M_w=340 g mol^-1, Dow Corning) with a polyether-diamine crosslinker (Jeffamine®ED2003, M_w=2003 g mol^-1, Huntsman Chemical). As detailed in the appendix, this rubber exhibits a significant change (about one order of magnitude) in the loss modulus in the characteristic frequency range (≈ 0.1-10^3 Hz) involved in surface deformation at microasperity scale. In order to elaborate the specimens, each of the reactive parts is first separately stirred in a silicone bath at 70 ^∘C for about 30 min. Then, epoxy is mixed with the stoichiometric amount of diamine determined with the epoxy equivalent weight and amine hydrogen equivalent weight given by the supplier (Jeffamine® Data Sheets). The reactive mixture is stirred and subsequently degassed for about 40 min at 50 ^∘C in a vacuum
chamber. Then, the mixture is poured into a parallelepiped shaped PDMS mold (size: 4.5 cm×4.5 cm×1.5 cm) and cured at 120 ^∘C for 20 h. In order to monitor contact induced surface displacements, a square network of small cylindrical holes (diameter 10 μm, depth 2 μm and center to center spacing 70 μm) is stamped on the PDMS surface. Once imaged in transmission with a white light, the pattern appears as a network of dark points. This surface marking is simply achieved by patterning the bottom part of the PDMS mold by a network of cylindrical posts using conventional soft lithography techniques. After curing, the glass transition temperature of the epoxy rubber is -42^∘C, as determined by Differential Scanning Calorimetry (DSC) at a scan rate of 10 ^∘C min^-1.
During friction experiments, the rubber specimen is contacting a plano-convex BK7 glass lens (Melles Griot, France) with a radius of curvature of 14.8 mm. After cleaning, the r.m.s. roughness of the lens is less than 2 nm, as measured by AFM using 1 × 1 μm^2 pictures. One of the lenses is rendered microscopically rough using sand blasting (average grain size of 60 μm). The topography of the surface has been characterized by AFM measurements using image sizes ranging from 50 × 50 μm^2 to 500 × 500 nm^2. Fig. <ref> depicts the results in the form of a roughness Power Spectrum Density (PSD) C_s(q). This PSD decays according to a power law, from 50 μm down to the nanometer scale. Accordingly, the surface roughness can be defined as self-affine fractal (C_s(q) ∝ q^-2(H+1)) with a Hurst exponent H=0.58 and a fractal dimension D_f=3-H=2.42. The r.m.s. roughness of the sand blasted surface is measured as 1.69 ± 0.19 μm using 50 × 50 μm^2 images.
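As an illustration of how such a characterization can be obtained, the following sketch (Python, with hypothetical array names; not the authors' code) computes a radially averaged PSD from an AFM height map and extracts the Hurst exponent from the self-affine scaling C_s(q) ∝ q^-2(H+1):

import numpy as np

def radial_psd(h, dx):
    # h: square AFM height map (m); dx: pixel size (m)
    n = h.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(h - h.mean()))
    psd2d = (np.abs(spec) * dx / n) ** 2              # 2D power spectral density
    q = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    qx, qy = np.meshgrid(q, q)
    qr, p = np.hypot(qx, qy).ravel(), psd2d.ravel()
    # angular average in logarithmic wavevector bins
    edges = np.logspace(np.log10(qr[qr > 0].min()), np.log10(qr.max()), 40)
    idx = np.digitize(qr, edges)
    qc = np.array([qr[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, len(edges))])
    Cs = np.array([p[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, len(edges))])
    return qc, Cs

def hurst_exponent(qc, Cs):
    # fit the log-log slope of C_s(q) ~ q^-2(H+1)
    m = np.isfinite(qc) & np.isfinite(Cs) & (Cs > 0)
    slope, _ = np.polyfit(np.log(qc[m]), np.log(Cs[m]), 1)
    return -slope / 2.0 - 1.0    # e.g. H ≈ 0.58 for the sand blasted lens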
§.§ Friction setup and contact imaging
Contact torsion experiments are carried out using a custom made device which is fully described in reference <cit.>. The experiments consist in continuously rotating a glass lens about an axis perpendicular to the surface of the rubber substrate and passing through the apex of the lens. Normal contact is achieved under imposed indentation depth condition (between 60 and 320 μm) by means of a linear displacement stage. The resulting contact radius lies in the range 0.3-2.2 mm. Specimen size (4.5 cm×4.5 cm×1.5 cm) ensures that the ratio of the substrate thickness to the contact radius is greater than ten, i.e. that semi-infinite contact conditions are achieved during torsion experiments <cit.>. Separate indentation experiments using the same device equipped with a load cell allowed us to determine the relationship between indentation depth and normal load (the load cell has to be removed during torsional contact experiments for imaging
purposes). During friction experiments, the glass lens is rotated at an imposed angular velocity between 0.01 and 10 deg s^-1 using a motorized rotation stage. Prior to use, the lenses are successively cleaned with acetone and ethanol in an ultrasonic bath for about 5 min. Epoxy based specimens are thoroughly washed with 2-isopropanol and subsequently dried under vacuum.
During torsion, images of the contact zone are continuously recorded through the transparent rubber substrate using a zoom lens and a CMOS camera. The system is configured to a frame size of 1024 × 1024 pixels with 8 bits resolution. Images are acquired at a frequency ranging from 0.01 to 30 Hz. The contact zone is illuminated using a parallel light system located behind the glass lens, as schematically described in Fig. <ref>. In the case of the smooth contact interface, subpixel detection of individual markers on the epoxy surface is carried out directly from single images taken during steady state friction (as
that shown in Fig. <ref>a) using a particle tracking method. Each contact picture provides a displacement field with about 6,000 data points with a spatial resolution corresponding to the distance between markers (i.e. 70 μm). In the case of rough interfaces, the contact appears as bright spots against a darker background as a result of light scattering by the roughened surface (Fig. <ref>b). It is therefore no longer possible to detect the markers on the rubber surface on a single image. However, an averaging procedure allows revealing the location of the markers under steady state friction. As shown in Fig. <ref>c, averaging several images taken during steady state friction suppresses nearly all the light intensity fluctuations induced by surface roughness, thus revealing the location of the markers, which are fixed with respect to the camera.
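A minimal sketch of this marker-recovery procedure (hypothetical names, assuming an (N, H, W) stack of 8-bit frames acquired in steady state; not the authors' code) could read:

import numpy as np
from scipy.ndimage import label, center_of_mass

def marker_positions(frames, threshold=0.3):
    # average over time to suppress the roughness-induced speckle; the
    # markers, fixed with respect to the camera, survive the averaging
    avg = frames.mean(axis=0)
    avg = (avg - avg.min()) / (avg.max() - avg.min())   # normalize to [0, 1]
    # markers appear as dark spots: label connected regions below a threshold
    labels, n = label(avg < threshold)
    # sub-pixel positions from intensity-weighted centroids of each dark spot
    return np.array(center_of_mass(1.0 - avg, labels, list(range(1, n + 1))))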
§ FRICTION OF SMOOTH CONTACTS
When the smooth lens is twisted starting from rest, a stiction stage is first encountered which corresponds to the shear failure of the adhesive contact. This stiction process occurs according to a fracture like process characterized by progressive slip propagation from the periphery to the center of the contact. This phenomenon was discussed in a previous study <cit.> and it will not be considered further in this paper. In the case of the investigated epoxy rubber, this transient stiction phenomenon occurs for twist angles θ_s≲ 50 deg for all indentation depths and angular velocities under consideration. Then, a steady friction state is achieved as indicated by the time independence of marker locations on the rubber surface. As an example, Fig. <ref> depicts the displacement of an individual marker located within the contact as a function of the applied twist angle. Owing to the symmetry of the contact, this displacement is expressed using its
cylindrical components with respect to the center of rotation (only the azimuthal displacement component u_θ is reported in the figure as the radial displacement component u_r is found to be systematically negligible in all experiments). After an initial increase corresponding to the stiction stage (θ≲ 40 deg), a steady state is achieved. Here, the time independent location of the marker is indicative of the achievement of a vanishing strain rate within the bulk rubber substrate. This means that no significant relaxation process takes place at the scale of the contact within the considered time window. The bulk substrate can thus be considered as deformed in a relaxed, time-independent, state.
Figure <ref> shows a typical displacement field obtained with the smooth contact under such a steady state friction condition. From this measured displacement field, the corresponding contact stress distribution can be retrieved using an appropriate inversion procedure. In a previous study dealing with linear sliding of silicone rubbers <cit.>, we showed that an inversion method based on a linear elastic contact mechanics approach can be inaccurate due to the occurrence of finite strains at the edge of the contact. A Finite Elements (FE) inversion procedure was thus developed in order to handle the associated geometrical and material non linearities. Here, a calculation of the surface shear strain
ϵ_r θ = 1/2 ( ∂ u_θ / ∂ r - u_θ/r ) from the measured azimuthal displacement profiles (bottom part of Fig. <ref>) shows that strains as high as 0.3 are achieved in the vicinity of the contact edge, which are also outside the linear range of the epoxy rubber (about 0.1). In order to evaluate the effects of these non linearities on the inversion, a displacement field was inverted using either a linear elastic approach based on Green's tensor or a FE method able to handle the geometrical and material non-linearities of the problem. The results reported in appendix B show that both approaches give the same result. It therefore turns out that finite strains do not induce any significant error in a linear elastic inversion of torsional displacement which can be justified by some theoretical considerations <cit.>. As a result, all the stress fields to be reported in this study have been obtained from the semi-analytical deconvolution of the
measured displacement fields using the Green's tensor approach fully detailed in reference <cit.>.
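For completeness, the strain estimate used above amounts, in a minimal numerical sketch (hypothetical 1D arrays r and u_theta obtained from the angular average of the tracked marker displacements), to:

import numpy as np

def shear_strain(r, u_theta):
    # eps_rtheta = (1/2) (du_theta/dr - u_theta/r); exclude r = 0,
    # where the expression diverges
    du_dr = np.gradient(u_theta, r)     # finite-difference radial derivative
    return 0.5 * (du_dr - u_theta / r)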
The surface shear stress distribution of the smooth contact interface was systematically determined from the inversion of the measured steady-state azimuthal displacements at various indentation depths and angular velocities. All the shear stress data obtained from the inversion are expressed in a non dimensional form, τ̅(r)=τ_θ z(r)/E_r, where E_r is the relaxed elastic modulus of the rubber. As shown in Fig. <ref>a, a nearly constant frictional shear stress is achieved within the contact zone except at the center of the contact where shear stress vanishes for symmetry reasons. Contact pressure being expected to decrease continuously along the contact radial coordinate, it turns out that frictional shear stress is pressure independent, as already reported for smooth glass/PDMS contacts <cit.>. A close examination of stress profiles obtained at various imposed velocities (Fig. <ref>b) shows a systematic positive
gradient along the radial coordinate which should reflect the velocity dependence of the interface shear stress. This assumption was further considered from a plot of the measured local shear stress values as a function of the local sliding velocity v = θ̇r, where θ̇ is the angular velocity and r is the radial coordinate. According to a previous investigation <cit.>, the transition to a vanishing frictional stress in the vicinity of the contact center occurs over a length scale which represents about 10% of the contact radius and which is essentially dictated by the cut-off frequency of the deconvolution operation. As a result, data points close to the center of the contact (r/a<0.1, where a is the contact radius) were discarded from the analysis together with data points outside the contact area (r/a>1). As shown in Fig. <ref>, all the selected shear stress values merge on a single master curve when the applied angular velocity is varied.
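The construction of this master curve can be sketched as follows (Python, hypothetical data structures; each run supplies the angular velocity θ̇, the contact radius a, and the radial profiles r and τ obtained from the inversion):

import numpy as np

def master_curve(runs):
    # runs: list of (theta_dot [rad/s], a, r, tau) tuples
    v_all, tau_all = [], []
    for theta_dot, a, r, tau in runs:
        keep = (r / a > 0.1) & (r / a < 1.0)   # discard center and exterior
        v_all.append(theta_dot * r[keep])       # local sliding velocity v = theta_dot * r
        tau_all.append(tau[keep])
    v = np.concatenate(v_all)
    tau = np.concatenate(tau_all)
    order = np.argsort(v)
    return v[order], tau[order]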
Over nearly three orders of magnitude in the sliding velocity, the shear stress is observed to increase continuously by about a factor three. The shear stress being measured in a steady state friction regime where no displacement occurs at the macroscale, it is thus associated with small scale dissipative processes. For such a smooth and intimate contact, friction is usually considered to arise from molecular scale dissipative processes occurring at the sliding interface. As mentioned in the introduction, formation and breakage of adhesive molecular bonds at the contact interface is often invoked as the underlying physical mechanism <cit.>. For rubber sliding on optically smooth glass, Grosch <cit.> noted that the velocity corresponding to maximum friction and the frequency corresponding to maximum viscoelastic loss form a ratio that is of the order of 7 nm for various materials. This nanometric length scale was assumed by Grosch to represent the molecular scale
involved in the pinning and depinning process of molecular chains to the glass surface. Here, the available frequency and sliding velocity ranges do not allow us to extract a very accurate value of this characteristic length scale. However, it can be seen in Figure <ref> that the shape of the τ̅(v) plot matches that of the loss component of the shear modulus, G'', when the latter is represented as a function of λω where ω is the frequency and λ is a characteristic length close to 6 nm. In the following section, we address frictional dissipative processes occurring at larger length scales, i.e. at the scale of micro-asperity contacts within the rough contact interface.
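The characteristic length λ can be estimated with a simple shape-matching procedure, sketched below under the assumption that a τ(v) master curve and rheological data (ω, G'') are available as arrays sorted in ascending order (all names hypothetical; this is not the authors' fitting procedure):

import numpy as np

def fit_lambda(v, tau, omega, Gpp, lambdas=np.logspace(-10, -7, 200)):
    # compare the normalized shapes of tau(v) and G''(v / lambda)
    t = tau / tau.max()
    best, best_err = None, np.inf
    for lam in lambdas:
        g = np.interp(np.log(v / lam), np.log(omega), Gpp / Gpp.max(),
                      left=np.nan, right=np.nan)   # NaN outside rheology range
        m = np.isfinite(g)
        if m.sum() < 5:
            continue
        err = np.mean((t[m] - g[m]) ** 2)
        if err < best_err:
            best, best_err = lam, err
    return best    # of the order of 6 nm for the present data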
§ FRICTION OF ROUGH CONTACTS
§.§ Shear stress field
In this section, we report on the frictional properties of the contact interface between the smooth viscoelastic elastomer and the sand blasted glass lens. As opposed to the smooth contact, a dependence of the local frictional shear stress on contact pressure is now evidenced. As an example, shear stress profiles for various indentation depths are reported in Fig. <ref>a. The shear stress is clearly decreasing along the radial coordinate, i.e. when the contact pressure decreases. Similarly, increasing applied indentation depths (i.e. contact pressure) result in enhanced shear stress values. Such a pressure dependent frictional stress can be qualitatively accounted for by the existence of a multi-contact interface where discrete micro-contacts are distributed within the frictional interface. As the local contact pressure is increased, a higher density of micro-contacts is achieved which in turn results in an enhanced local frictional shear stress. In addition, a velocity dependence of the shear
stress similar to that observed with smooth contacts is also evidenced (Fig. <ref>b). Here, the analysis of the local shear stress distribution is complicated by the fact that the local density of microcontacts not only depends on contact pressure but also potentially on the local sliding velocity as a result of viscoelastic effects. In the following section, the changes in the density of micro-contacts as a function of local pressure and velocity are further considered from an examination of fluctuations in the light transmitted by the transparent rough contacts.
§.§ Optical transmissivity of the multi-contact interface
Some interesting features of the rough contacts emerge when the changes in the light transmitted locally by the interface are considered. As mentioned above, rough contacts appear as spatially heterogeneous as a result of the scattering nature of the glass surface (cf Fig. <ref>b). Because of the difference between the index of refraction of the solids and that of the air, the rough interface transmits light more efficiently when the surfaces are in intimate contact than when they are out of contact. No complete optical model is available to describe these effects but, as a first approach, one can neglect scattering and just consider light transmission in contact and non contact regions of the rough interface. Obviously, light transmission will be more efficient if only one interface is present (contact condition) instead of two (non contact condition). Accordingly, the intensity of transmitted light at a given location within the rough contact should carry information about the actual
area of micro-asperity contacts. Such an idea was initially developed by Dieterich and Kilgore <cit.> in a study where the actual contact area between rough transparent materials was determined from microscope contact observations. The relevance of this approach to rough contact interfaces involving polymers was subsequently demonstrated in later studies by Scheibert et al. <cit.>, Rubinstein and co-workers <cit.> and Krick et al. <cit.>. As discussed by Dieterich and Kilgore, the analysis of the images can be complicated by various optical scattering and resolution effects (especially at the edges of microcontacts) which require appropriate deconvolution procedures if one wants to get a quantitative measurement of the actual contact area from contact images. Here, contact images will be analyzed under the assumption that transmitted light intensity at a given contact location is proportional to the proportion of area into
contact. As detailed below, the validity of this assumption is supported by static indentation experiments carried out at various imposed indentation depths. In order to improve the signal to noise ratio of the camera, each static contact image at a given prescribed indentation depth is obtained by averaging 300 images. A reference image is also obtained in the same way using a non contact configuration. When subtracted from the contact image, this reference image forces the background of the image to be almost zero, thus allowing the size of the circular contact region to be clearly identified. For each pixel, a normalized transmitted light intensity I_n is defined as follows
I_n = (I_c - I_r)/I_r ,
where I_c is the measured light intensity under contact conditions and I_r is the corresponding intensity in the reference image (without contact). Here, it should be kept in mind that the transmitted light intensity measured at the length scale of a pixel (5 × 5 μm^2) is characteristic of a multicontact interface as a result of the self affine fractal nature of the glass surface. Normalized radial intensity profiles are subsequently obtained from an angular average of the normalized images with respect to an origin defined by the apex of the lens. Results are shown in Fig. <ref>a where the profiles can be seen to be shifted to higher light intensity values when indentation depth is increased. Interestingly, it comes out that all the profiles obtained at various contact loads collapse onto a single plot (Fig. <ref>b) when intensity data are normalized with respect to the average contact pressure p_m=P/π a^2 (P is the applied normal
load) and the radial coordinate is normalized with respect to contact radius a. If the local contact pressure σ_zz is assumed to scale as σ_zz(r/a) ∝ P/π a^2 f(r/a) (where f is some function of the space coordinate), this means that transmitted light intensity scales locally with the applied contact pressure. This result is further illustrated by the linear relationship between the integrated light intensity transmitted through the rough contact area and the applied normal load (Fig. <ref>). It is noteworthy that a similar result was obtained by Rubinstein et al. <cit.> using a different optical technique where a laser sheet is incident on a contact interface between two rough PMMA blocks at an angle far beyond the angle for total internal reflection from the PMMA/air interface. Under the assumption that the transmitted light intensity is proportional to the proportion of area into contact, the observation of such a linear
relationship is consistent with many rough contact theories <cit.> which predict that the actual contact area varies linearly with the applied load, at least in the low load range. Accordingly, we will make the assumption that the recorded light intensity at a given pixel location is proportional to the proportion of area into contact, I_n ∝A/A_0 where A and A_0 are the actual and nominal contact areas, respectively. For the surface topography under consideration, this hypothesis is supported by the above reported indentation experiments even if it is not necessarily valid for any kind of roughness.
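The normalization and collapse described above can be summarized by the following sketch (hypothetical image arrays and contact parameters; not the authors' code):

import numpy as np

def radial_intensity_profile(I_c, I_r, cx, cy, nbins=100):
    # normalized transmitted intensity, I_n = (I_c - I_r) / I_r,
    # angularly averaged around the contact center (cx, cy)
    In = (I_c.astype(float) - I_r) / I_r
    y, x = np.indices(In.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.digitize(r, edges)
    prof = np.array([In.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, nbins + 1)])
    return 0.5 * (edges[:-1] + edges[1:]), prof

def collapse(rc, prof, P, a):
    # rescale by mean pressure p_m = P / (pi a^2) and contact radius a;
    # profiles measured at different loads should then overlap
    p_m = P / (np.pi * a ** 2)
    return rc / a, prof / p_m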
During steady state friction, a systematic change in the distribution of transmitted light within the contact is observed not only as a function of the applied indentation depth but also as a function of the imposed angular velocity. In order to quantify these changes, the following treatment is applied to the recorded contact images. For a given indentation depth and applied velocity, sequences of images such as that shown in Fig. <ref>b are averaged. The resulting time-averaged picture is subsequently averaged as a function of the angular coordinate with respect to the center of rotation in order to get a radial profile. For normalization purposes, a light intensity profile is also obtained using the same averaging procedure with a sequence of images where the rotating lens is close to but not in contact with the rubber surface. An example of the resulting profiles is shown in Fig. <ref> for an indentation depth of 140 μm and various velocities ranging from 0.01 to 1 deg s^-1. At a given location within the contact, i.e. for a given contact pressure, it turns out that the amount of light transmitted locally through the rough contact interface is decreasing as the local sliding velocity is increased. There is thus some evidence that micro-contacts at the frictional interface are redistributed as a function of the sliding velocity, more precisely that the proportion of area in contact decreases at high sliding velocities.
Recalling the assumption that light intensity is proportional to the proportion of area into contact, i.e. I_n(p,v) ∝ A(p,v)/A_0, the dependence of the frictional shear stress on the actual contact area should therefore be reflected by the ratio τ(p,v)/I_n(p,v). When this ratio is plotted as a function of the local sliding velocity, it comes out that all the data points obtained at various imposed angular velocities and applied indentation depths merge on a single master curve (Fig. <ref>). Remarkably, this master curve is independent of the contact pressure (i.e. of both the location within the contact and the imposed indentation depth). From this observation, the measured local shear stress can thus be expressed in the following way
τ(p,v)=k(v)I_n(p,v)∝ k(v)A(p,v)/A_0 .
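Extracting k(v) from the local measurements then amounts to binning the ratio τ/I_n against the local sliding velocity, e.g. (hypothetical flat arrays of matched local values; a sketch, not the authors' code):

import numpy as np

def k_of_v(v, tau, I_n, nbins=20):
    ratio = tau / I_n                       # proportional to k(v) by the equation above
    edges = np.logspace(np.log10(v.min()), np.log10(v.max()), nbins + 1)
    idx = np.digitize(v, edges)
    vc = np.array([v[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins + 1)])
    k = np.array([ratio[idx == i].mean() if np.any(idx == i) else np.nan
                  for i in range(1, nbins + 1)])
    m = np.isfinite(k)
    return vc[m], k[m]                      # should collapse for all pressures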
§ DISCUSSION
From the inversion of the displacement field at the surface of a viscoelastic rubber contacting a rigid spherical asperity, local values of the steady state frictional shear stress were determined under torsional contact conditions. For rough contacts, measured values of the local shear stress are representative of a multicontact interface under given sliding velocity and nominal contact pressure conditions. In addition, information about the local proportion of area into contact A/A_0 is provided from contact images under the assumption that optical transmittivity of the rough interface is proportional to A/A_0. As detailed above, this assumption is supported by separate static indentation measurements where it yields the
expected linear relationship between the actual contact area and the applied nominal contact pressure. From a systematic investigation of the local shear stress and transmitted light intensity as a function of the applied indentation depth and twisting rate, it is found experimentally that the frictional shear stress can be expressed as the product of two terms (cf Eqn(<ref>)). The first one, A/A_0, incorporates the velocity and pressure dependence of the microcontacts density. The second one is a pressure-independent term, k(v), which can be viewed as some averaged measurement of the amount of frictional energy dissipated within micro-contacts. In other words, A/A_0 corresponds to a contact mechanics term describing the density of microcontacts under steady state sliding while k(v) quantifies the dissipative processes at play within asperity micro-contacts.
As shown by the dotted line in Fig. <ref>, it can interestingly be noted that the magnitude of k(v) is similar to that of the frictional shear stress measured for smooth contacts and that it follows a very similar velocity dependence. This suggests that frictional energy dissipation within microasperity contacts is mostly due to interfacial dissipation, the contribution of viscoelastic losses at asperity scale being negligible. This statement can be further considered within the framework of the friction model detailed in the introduction. Accordingly, the frictional force is assumed to arise from two independent contributions, namely the so-called adhesive and hysteretic components. The so-called adhesive term encompasses all dissipative mechanisms occurring at the points of intimate contact between the solids, i.e. on length scales lower than asperity size. The hysteretic term corresponds to the force required to displace the rubber material from the front of the rigid nano-asperities. Here, it
represents the contribution of the viscoelastic losses involved in the deformation of the rubber substrate by microasperities. Rewritten in terms of shear stress, this model can be expressed as follows
τ=τ_h + τ_a ,
where τ_a and τ_h are respectively the adhesive and hysteretic terms. The adhesive term can simply be expressed as
τ_a=τ_0 A/A_0
where τ_0 is the frictional shear stress of the smooth contact interface.
An exact calculation of the hysteretic component τ_h is much more complicated as it implies solving the viscoelastic contact problem taking into account the whole frequency distribution associated with the topography of the self affine rough surface. As a first order approximation, we follow a simple approach where the rough surface is assimilated to a distribution of identical, non interacting, spherical asperities. Following a calculation by Greenwood and Tabor <cit.>, the friction force at the scale of a single asperity can be expressed as
F_asp = (α E_eff/4) (a^4/R^2)
where R is the radius of curvature of the asperity and E_eff is a frequency dependent effective modulus defined as
E_eff(ω) = |E(ω)|/(1-ν^2)
where ω is a characteristic frequency defined as ω=v/a and ν is the Poisson's ratio, whose variations with frequency are neglected. In the above equation, α is a term representing the fraction of the input elastic energy which is lost as a result of viscoelastic dissipation. The hysteretic frictional stress can thus be written as τ_h = ϕ F_asp, where ϕ denotes the surface density of asperities. With ϕ = A/(π a^2 A_0), τ_h can be expressed as
τ_h = (α E_eff/(4π)) (A/A_0) (a/R)^2 .
As an upper bound value for τ_h, one can take a≈ R which gives
τ_h ≈ (α E_eff/(4π)) (A/A_0) .
From eqns (<ref>) and (<ref>), the total frictional stress within the rough interface can thus be expressed as
τ = (A/A_0) ( τ_0 + α E_eff/(4π) ) .
Within the investigated sliding velocity range (0.1 to 100 μm s^-1), the adhesive term τ_0 is found to vary between 0.2 and 0.5 MPa (Fig. <ref>). The estimate of the second, viscoelastic, term in the RHS of Eqn <ref> requires knowledge of the dissipation factor α. Following an exact viscoelastic calculation by Persson <cit.>, we take for α an asymptotic (low velocity) value calculated as α≈ 5 tanδ where tanδ is the loss tangent of the rubber substrate. Using this approach and the viscoelastic data reported in the appendix, the viscoelastic term α E_eff/(4π) is found to vary between 0.05 and 0.1 MPa when the characteristic frequency varies between 0.1 Hz and 1 kHz. This simple calculation thus yields an estimate of the hysteretic term which is found to be about half the magnitude of the interface term. The rough approximations embedded in the calculation do not really allow us to draw a definite conclusion from a difference of less than one order of magnitude. Here, it can just be stated that the above calculation does not contradict the fact that the interfacial contribution to friction could be the dominant term, as suggested by the similarity between k(v) and τ_0(v).
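The order of magnitude of this comparison is easily reproduced; the sketch below uses illustrative input values only (the modulus and loss tangent are placeholders, not the measured data):

import numpy as np

def hysteretic_term(E_modulus, tan_delta, nu=0.5):
    # alpha ~ 5 tan(delta) (low-velocity asymptote, after Persson);
    # E_eff = |E(omega)| / (1 - nu^2)
    alpha = 5.0 * tan_delta
    E_eff = E_modulus / (1.0 - nu ** 2)
    return alpha * E_eff / (4.0 * np.pi)

# e.g. |E| ~ 1 MPa and tan(delta) ~ 0.1 give ~0.05 MPa, to be compared
# with the adhesive term tau_0 ~ 0.2-0.5 MPa quoted in the text.
print(hysteretic_term(1e6, 0.1) / 1e6, "MPa")   # ~0.05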
However, this calculation is based on a very crude description of the contact interface which is assumed to consist of a distribution of identical, non interacting, single-asperity contacts. As a result, topographical features of the surface such as rms roughness, fractal dimension or correlation length are not taken into account. A more refined approach to the hysteretic component of friction would require that the multiscale features of surface topography as well as non linear effects encountered during deformation at microasperity scale are accounted for. Some of these features are embedded within theoretical rough contact models such as that developed by Persson <cit.> but using these models would require extensive calculations which are beyond the scope of this study. From an experimental perspective, more insight into the adhesive and hysteretic components of friction could be gained from experiments where the physical chemistry of the glass surface is varied (using silanization for example) independently of the viscoelastic properties of the rubber or, conversely, where the viscoelasticity of the substrate is changed independently of the properties of the glass surface. When doing so, one should be mindful of the potential occurrence of stick-slip motions or friction instabilities which would preclude such an analysis.
§ CONCLUSION
Using contact imaging approaches, we were able to determine the distribution of frictional stresses within contacts between a smooth viscoelastic rubber and a rigid rotating lens. When the lens surface is made randomly rough, the local frictional stress is observed to depend on both contact pressure and sliding velocity as a result of the multicontact nature of the sliding interface. Associated changes in the local density of micro-contacts are evidenced from variations in the distribution of the light intensity transmitted through the rough contact interface. From separate static indentation experiments, it is shown that all the light intensity data obtained locally at various contact loads and contact locations can be represented in the form of a single master curve which strongly supports the scaling of the transmitted light with the nominal contact pressure, at least for the considered rough surface. Accordingly, the theoretical prediction of a linear relationship between the proportion of area in contact and contact pressure is retrieved experimentally. More importantly, the combination of local stress and light intensity measurements allowed us to separate the contributions of two mechanisms when contact pressure or velocity is varied. The first one consists in a decrease in the local density of micro-contacts when the pressure decreases. The second one encompasses all the frictional dissipative processes occurring within microasperity contacts. A comparison between smooth and rough contacts suggests that dissipative processes occurring at the interface predominate over viscoelastic dissipation at micro-asperity scale. More generally, these results open the way to a close reexamination of the validity of the hypotheses embedded in most rubber friction models, especially the assumption that friction can be separated into an adhesive and a hysteretic component.
This work was partially supported by the National Research Agency (ANR) within the framework of the Dynalo project (NT09499845). The authors wish to thank Basile Pottier and Laurence Talini for the surface fluctuations measurements. Thanks are also due to Danh Toan Nguyen for the finite element calculations reported in the appendix. We are also indebted to Alexis Prevost for many stimulating discussions.
§ LINEAR VISCOELASTIC MEASUREMENTS
The selected epoxy rubber is characterized by a crystallization of the flexible chains of the polyether-diamine crosslinker at low temperature (-20 ^∘C). As a result, it is not possible to determine the room temperature viscoelastic modulus of the rubber over an extended frequency range using the usual route of master curves and time-temperature superposition principle. Instead, we used two complementary techniques to determine the frequency dependence of the viscoelastic modulus at room temperature. Up to 20 Hz, the shear modulus was measured using conventional Dynamical Mechanical Thermal Analysis (DMTA). Elastomer disks 2 mm in thickness and 8 mm in diameter are sheared at low strain (0.05 %) between the parallel plates of a rheometer (Anton Paar, MCR 501). The shear modulus is measured at room temperature during a frequency sweep between 50 and 0.01 Hz. In the high frequency range (up to 10 kHz), the viscoelastic modulus is measured using Surface Fluctuation Specular Reflection (SFSR)
spectroscopy, a technique based on the principle that surface fluctuations reveal the properties of the medium. The principle of this technique is fully described in references <cit.>. The results of both viscoelastic measurements are shown in Fig. <ref>.
§ INVERSION OF THE DISPLACEMENT FIELD: COMPARISON BETWEEN GREEN'S TENSOR AND FINITE ELEMENT CALCULATIONS
In order to evaluate the influence of finite strains on the inversion of displacement fields, the same azimuthal displacement profile was inverted using both a linear elastic approach based on Green's tensor <cit.> and a Finite Element (FE) inversion procedure which is fully described in reference <cit.>. As opposed to Green's tensor calculations, the FE inversion is able to take into account both the geometrical and material non linearities (neo-Hookean behavior of the rubber) of the problem. As shown in Fig. <ref>, identical shear stress profiles are provided by both methods. In other words, the occurrence of finite strains at the edge of the contact (see Fig. <ref>) does not induce any significant error in the stress field deduced from an inversion using a linear elastic analysis. It should be noted that this conclusion is opposed to that drawn for linear sliding conditions: in this case, finite strains were found to alter significantly the accuracy of linear inversions <cit.>. Some theoretical justifications for this difference can be found in finite strain analytical calculations <cit.>.
unsrt
10
grosch1963a
A.K. Grosch.
The relation between the friction and visco-elastic properties of
rubber.
Proceedings of the Royal Society of London. Series A.
Mathematical and Physical Sciences, 274(1356):21–39, 1963.
schallamach1963
A. Schallamach.
A theory of dynamic rubber friction.
Wear, 6:375–382, 1963.
Chernyack1986
Y.B. Chernyack and A.I. Leonov.
On the theory of adhesive friction of elastomers.
Wear, 108:105–138, 1986.
vorvolakos2003effects
Katherine Vorvolakos and Manoj K Chaudhury.
The effects of molecular weight and temperature on the kinetic
friction of silicone rubbers.
Langmuir, 19(17):6778–6787, 2003.
greenwood1958
J.A. Greenwood and D. Tabor.
The friction of hard sliders on lubricated rubber: the importance of
deformation losses.
Proceedings of the Physical Society, 71:989–1001, 1958.
Schapery1978
R.A. Schapery.
Analytical models for the deformation and adhesion components of
rubber friction.
Tire Science and Technology, 6:3–47, 1978.
golden1980
J.M. Golden.
Hysteresis and lubricated rubber friction.
Wear, 65:75–87, 1980.
Perrson2006
B. N. J. Persson.
Contact mechanics for randomly rough surfaces.
Surface Science Reports, 61:201–227, 2006.
persson2001
B.N.J. Persson.
Theory of rubber friction and contact mechanics.
Journal of Chemical Physics, 115(8):3840–3861, 2001.
lorenz2011
B. Lorenz, B.N.J. Persson, S.Dieluweit, and T. Tada.
Rubber friction: comparison of theory with experiments.
European Physical Journal E: Soft Matter, 34:129, 2011.
nguyen2013
D. T. Nguyen, S. Ramakrishna, C. Frétigny, N. D. Spencer, Y. Le Chenadec, and
A. Chateauminois.
Friction of rubber with surfaces patterned with rigid spherical
asperities.
Tribology Letters, 49(1):135–144, January 2013.
chateauminois2010
A. Chateauminois, C. Frétigny, and L. Olanier.
Friction and shear fracture of an adhesive contact under torsion.
Physical Review E, 81:026106–026117, 2010.
Gacoin2006a
E. Gacoin, A. Chateauminois, and C. Frétigny.
Measurement of the mechanical properties of polymer films
geometrically confined within contacts.
Tribology Letters, 21(3):245–252, 2006.
nguyen2011
D.T. Nguyen, P. Paolino, M-C. Audry, A. Chateauminois, C. Frétigny, Y. Le
Chenadec, M. Portigliatti, and E. Barthel.
Surface pressure and shear stress field within a frictional contact
on rubber.
Journal of Adhesion, 87:235–250, 2011.
huy2013
Some theoretical justifications for this difference can be found in finite
strain analytical calculations which shows that a linear contact mechanics
description of torsional contacts remains valid up to the third order in the
applied twist angle (C.Y. Hui, private communication).
chateauminois2008
A. Chateauminois and C. Frétigny.
Local friction at a sliding interface between an elastomer and a
rigid spherical probe.
European Physical Journal E, 27(2):221–227, oct 2008.
dieterich1996
J. H. Dieterich and B. D. Kilgore.
Imaging surface contacts: power law contact distributions and contact
stresses in quartz, calcite, glass and acrylic plastic.
Tectonophysics, 256(1):219–239, 1996.
dieterich1994
J. H. Dieterich and B. D. Kilgore.
Direct observation of frictional contacts: New insights for
state-dependent properties.
Pure and Applied Geophysics, 143(1):283–302, 1994.
scheibert2008
J. Scheibert.
Mécanique du contact aux échelles mésoscopiques.
Sciences Mécaniques et Physiques. Edilivres, Paris, 2008.
Rubinstein2006a
S. M. Rubinstein, G. Cohen, and J. Fineberg.
Contact area measurements reveal loading-history dependence of static
friction.
Physical Review Letters, 96(25):256103, Jun 30 2006.
Krick2012
BA Krick, JR Vail, BNJ Persson, and WG Sawyer.
Optical in situ micro tribometer for analysis of real contact area
for contact mechanics, adhesion, and sliding experiments.
Tribology Letters, 45:185–194, 2012.
Rubinstein2006
S. M. Rubinstein, M. Shay, G. Cohen, and J. Fineberg.
Crack-like processes governing the onset of frictional slip.
International Journal of Fracture, 140(1-4):201–212, Jul 2006.
greenwood1966
JA Greenwood and JBP Williamson.
Contact of nominally flat surfaces.
Proceedings of the Royal Society of London. Series A.
Mathematical and Physical Sciences, 295(1442):300–319, 1966.
carbone2008
G. Carbone and F. Bottiglione.
Asperity contact theories: Do they predict linearity between
contact area and load?
Journal of the Mechanics and Physics of Solids, 56:2555–2572,
2008.
campana2007
C. Campana and M. Muser.
Contact mechanics of real versus randomly rough surfaces: A Green's
function molecular dynamics study.
Europhysics Letters, 77:38005, 2007.
persson2010
B.N.J. Persson.
Rolling friction for hard cylinder and sphere on viscoelastic solid.
European Physical Journal E: Soft Matter, 33:327–333, 2010.
pottier2013
Basile Pottier, Allan Raudsepp, Christian Frétigny, François Lequeux,
Jean-François Palierne, and Laurence Talini.
High frequency linear rheology of complex fluids measured from their
surface thermal fluctuations.
Journal of Rheology, 57:441, 2013.
pottier2011
Basile Pottier, Guylaine Ducouret, Christian Frétigny, François
Lequeux, and Laurence Talini.
High bandwidth linear viscoelastic properties of complex fluids from
the measurement of their free surface fluctuations.
Soft Matter, 7(17):7843–7850, 2011.
| Rubber friction is a topic of huge practical importance in many applications, such as tires, rubber seals, conveyor belts, and syringes, to mention only a few. However, there is an incomplete understanding of the parameters that control the frictional behavior of rubber surfaces. Since the seminal experimental work by Grosch <cit.>, rubber friction is usually assumed to involve two dissipative components. The first one, often denoted as the adhesive component, corresponds to thermally and stress activated pinning/depinning mechanisms between rubber molecules and the contacting surface. This idea forms the basis of the Schallamach model <cit.> which was subsequently extended by Chernyak and Leonov <cit.>. In a later study, Vorvolakos and Chaudhury <cit.> also showed that these models can consistently be used to describe the dependence of friction of poly(dimethylsiloxane) (PDMS) elastomers on molecular parameters such as molecular weight. The
second dissipative component involved in rubber friction is assumed to correspond to viscoelastic losses associated with the contact deformation of the soft rubber. In the case of a hard, rough surface sliding on a viscoelastic rubber, viscoelastic losses at microasperity scale occur at a characteristic frequency of the order of v/d, where v is the sliding velocity and d is a characteristic size of asperity contacts. This so-called hysteretic component to friction was first evidenced by Greenwood and Tabor <cit.> in a series of experiments, in which hard spheres and cones were sliding or rolling on well-lubricated rubber surfaces. The work by Grosch <cit.> extended these investigations to the more complex situation of rubber sliding on microscopically rough surfaces. A maximum in friction was found to occur at a sliding velocity related to the frequency with which the asperities of the rough surface deform the rubber surface. This maximum was absent on a smooth track, thus
reflecting the deformation losses induced by
the passage of the asperities over the rubber surface. These frictional mechanisms involving viscoelastic losses at microasperity scale have motivated the development of several theoretical models starting from Fourier transform analysis applied to periodic surfaces <cit.> to the more complex model developed by Persson for rubber friction on randomly rough surfaces <cit.>. Using a spectral description of the topography of the rough surfaces, Persson's theory predicts how the component of friction force associated with hysteretic losses varies with velocity and contact pressure from an estimate of the actual contact area. Some experimental results tend to support this theory <cit.> but a detailed examination of the effects of surface topography on rubber friction remains very challenging in the case of randomly rough surfaces where adhesive and hysteretic components are strongly intricate.
In a previous work <cit.>, we have investigated the friction of a PDMS rubber with model rough surfaces consisting of silica lenses covered with various densities of spherical colloidal nano-particles. From an examination of the pressure dependence of the frictional shear stress, we showed that the actual contact area was close to saturation in the whole range of applied contact load and sliding velocity. These model surfaces thus allowed us to quantify the contributions of interface dissipation and hysteretic losses to friction without the complications arising from the pressure and velocity dependence of the actual contact area. In addition, the use of a monodisperse distribution of colloidal particles allowed us to control both the characteristic frequency associated with deformation at asperity scale and the volume of the viscoelastic substrate that is affected by this deformation. Within this framework, we were able to determine experimentally the hysteretic component of friction which compares
well with theoretical calculations. In this study, we consider the more realistic situation of a viscoelastic rubber sliding against a randomly rough rigid surface where the proportion of area into contact is expected to depend on both the applied pressure and the sliding velocity. Experiments are carried out using a torsional contact configuration which, as explained below, allows investigating frictional energy dissipation at the interface without the complications arising from bulk viscoelastic losses at the scale of the macroscopic contact. In addition, the inversion of the measured displacement field at the surface of the rubber provides local values of the frictional shear stress within the non homogeneous pressure and sliding velocity fields of the contact. Local changes in the density of asperity micro-contacts are evidenced from a measurement of the amount of light transmitted through the transparent rough contact.
In a first part of this paper, we consider the case of a smooth contact where friction is likely to arise only from molecular scale dissipation at the intimate contact formed between the surfaces. In a second part, we examine the pressure and velocity dependence of the frictional shear stress within rough contacts where asperity scale viscoelastic losses are likely to come into play. We show that the measured shear stress can be expressed as the product of a velocity dependent term by a velocity and pressure dependent term which describes the changes in the actual contact area as a function of nominal contact pressure and sliding velocity. From a comparison between the smooth and rough contacts, we discuss in a last part the contributions of interface dissipation and hysteretic losses to friction. | null | null | null | From the inversion of the displacement field at the surface of a viscoelastic rubber contacting a rigid spherical asperity, local values of the steady state frictional shear stress were determined under torsional contact conditions. For rough contacts, measured values of the local shear stress are representative of a multicontact interface under given sliding velocity and nominal contact pressure conditions. In addition, information about the local proportion of area into contact A/A_0 is provided from contact images under the assumption that optical transmittivity of the rough interface is proportional to A/A_0. As detailed above, this assumption is supported by separate static indentation measurements where it yields the
expected linear relationship between the actual contact area and the applied nominal contact pressure. From a systematic investigation of the local shear stress and transmitted light intensity as a function of the applied indentation depth and twisting rate, it is found experimentally that the frictional shear stress can be expressed as the product of two terms (cf Eqn(<ref>)). The first one, A/A_0, incorporates the velocity and pressure dependence of the microcontacts density. The second one is a pressure-independent term, k(v), which can be viewed as some averaged measurement of the amount of frictional energy dissipated within micro-contacts. In other words, A/A_0 corresponds to a contact mechanics term describing the density of microcontacts under steady state sliding while k(v) quantifies the dissipative processes at play within asperity micro-contacts.
As shown by the dotted line in Fig. <ref>, it can interestingly be noted that the magnitude of k(v) is similar to that of the frictional shear stress measured for smooth contacts and that it follows a very similar velocity dependence. This suggests that frictional energy dissipation within microasperity contacts is mostly due to interfacial dissipation, the contribution of viscoelastic losses at asperity scale being negligible. This statement can be further considered within the framework of the friction model detailed in the introduction. Accordingly, the frictional force is assumed to arise from two independent contributions, namely the so-called adhesive and hysteretic components. The so-called adhesive term encompasses all dissipative mechanisms occurring at the points of intimate contact between the solids, i.e. on length scales lower than asperity size. The hysteretic term corresponds to the force required to displace the rubber material from the front of the rigid nano-asperities. Here, it
represents the contribution of the viscoelastic losses involved in the deformation of the rubber substrate by microasperities. Rewritten in terms of shear stress, this model can be expressed as follows
τ=τ_h + τ_a ,
where τ_a and τ_h are respectively the adhesive and hysteretic terms. The adhesive term can simply be expressed as
τ_a=τ_0 A/A_0
where τ_0 is the frictional shear stress of the smooth contact interface.
An exact calculation of the hysteretic component τ_h is much more complicated as it implies solving the viscoelastic contact problem taking into account the whole frequency distribution associated with the topography of the self affine rough surface. As a first order approximation, we follow a simple approach where the rough surface is assimilated to a distribution of identical, non interacting, spherical asperities. Following a calculation by Greenwood and Tabor <cit.>, the friction force at the scale of a single asperity can be expressed as
F_asp = (α E_eff/4) (a^4/R^2)
where R is the radius of curvature of the asperity and E_eff is a frequency dependent effective modulus defined as
E_eff(ω) = |E(ω)|/(1-ν^2)
where ω is a characteristic frequency defined as ω=v/a and ν is the Poisson's ratio, whose variations with frequency are neglected. In the above equation, α is a term representing the fraction of the input elastic energy which is lost as a result of viscoelastic dissipation. The hysteretic frictional stress can thus be written as τ_h = ϕ F_asp, where ϕ denotes the surface density of asperities. With ϕ = A/(π a^2 A_0), τ_h can be expressed as
τ_h = (α E_eff/(4π)) (A/A_0) (a/R)^2 .
As an upper bound value for τ_h, one can take a≈ R which gives
τ_h ≈ (α E_eff/(4π)) (A/A_0) .
From eqns (<ref>) and (<ref>), the total frictional stress within the rough interface can thus be expressed as
τ = (A/A_0) ( τ_0 + α E_eff/(4π) ) .
Within the investigated sliding velocity range (0.1 to 100 μm s^-1), the adhesive term τ_0 is found to vary between 0.2 and 0.5 MPa (Fig. <ref>). The estimate of the second, viscoelastic, term in the RHS of Eqn <ref> requires knowledge of the dissipation factor α. Following an exact viscoelastic calculation by Persson <cit.>, we take for α an asymptotic (low velocity) value calculated as α≈ 5 tanδ where tanδ is the loss tangent of the rubber substrate. Using this approach and the viscoelastic data reported in the appendix, the viscoelastic term α E_eff/(4π) is found to vary between 0.05 and 0.1 MPa when the characteristic frequency varies between 0.1 Hz and 1 kHz. This simple calculation thus yields an estimate of the hysteretic term which is found to be about half the magnitude of the interface term. The rough approximations embedded in the calculation do not really allow us to draw a definite conclusion from a difference of less than one order of magnitude. Here, it can just be stated that the above calculation does not contradict the fact that the interfacial contribution to friction could be the dominant term, as suggested by the similarity between k(v) and τ_0(v).
However, this calculation is based on a very crude description of the contact interface, which is assumed to consist of a distribution of identical, non-interacting, single-asperity contacts. As a result, topographical features of the surface such as rms roughness, fractal dimension or correlation length are not taken into account. A more refined approach to the hysteretic component of friction would require that the multiscale features of the surface topography, as well as the non-linear effects encountered during deformation at microasperity scale, be accounted for. Some of these features are embedded within theoretical rough contact models such as that developed by Persson <cit.>, but using these models would require extensive calculations which are beyond the scope of this study. From an experimental perspective, more insight into the adhesive and hysteretic components of friction could be gained from experiments where the physical chemistry of the glass surface is varied (using silanization, for example) independently of the viscoelastic properties of the rubber or, conversely, where the viscoelasticity of the substrate is changed independently of the properties of the glass surface. When doing so, one should pay attention to the potential occurrence of stick-slip motions or friction instabilities, which would preclude such an analysis.

§ CONCLUSION

Using contact imaging approaches, we were able to determine the distribution of frictional stresses within contacts between a smooth viscoelastic rubber and a rigid rotating lens. When the lens surface is made randomly rough, the local frictional stress is observed to depend on both contact pressure and sliding velocity as a result of the multicontact nature of the sliding interface. Associated changes in the local density of micro-contacts are evidenced from variations in the distribution of the light intensity transmitted through the rough contact interface. From separate static indentation experiments, it is shown that all the light intensity data obtained locally at various contact loads and contact locations can be represented in the form of a single master curve, which strongly supports the scaling of the transmitted light with the nominal contact pressure, at least for the considered rough surface. Accordingly, the theoretical prediction of a linear relationship between the proportion of area in contact and contact pressure is retrieved experimentally. More importantly, the combination of local stress and light intensity measurements allowed us to separate the contributions of two mechanisms when contact pressure or velocity is varied. The first one consists in a decrease in the local density of micro-contacts when the pressure decreases. The second one encompasses all the frictional dissipative processes occurring within microasperity contacts. A comparison between smooth and rough contacts suggests that dissipative processes occurring at the interface predominate over viscoelastic dissipation at micro-asperity scale. More generally, these results open the way to a close reexamination of the validity of the hypotheses embedded in most rubber friction models, especially the assumption that friction can be separated into an adhesive and a hysteretic component.
This work was partially supported by the National Research Agency (ANR) within the framework of the Dynalo project (NT09499845). The authors wish to thank Basile Pottier and Laurence Talini for the surface fluctuation measurements. Thanks are also due to Danh Toan Nguyen for the finite element calculations reported in the appendix. We are also indebted to Alexis Prevost for many stimulating discussions.
Operationalizing Declarative and Procedural Knowledge: a Benchmark on Logic Programming Petri Nets (LPPNs)

Giovanni Sileno
Informatics Institute, University of Amsterdam, the Netherlands

Received December 14, 2016; accepted January 23, 2017
Modelling, specifying and reasoning about complex systems requires processing, in an integrated fashion, the declarative and procedural aspects of the target domain. The paper reports on an experiment conducted with a propositional version of Logic Programming Petri Nets (LPPNs), a notation extending Petri Nets with logic programming constructs. Two semantics are presented: a denotational semantics that fully maps the notation to ASP via Event Calculus; and a hybrid operational semantics that processes separately the causal mechanisms (via Petri nets) and the constraints associated with objects and events (via Answer Set Programming, ASP). These two alternative specifications enable an empirical evaluation in terms of computational efficiency. Experimental results show that the hybrid semantics is more efficient w.r.t. sequences, whereas the two semantics follow the same behaviour w.r.t. branchings (although the denotational one performs better in absolute terms).
§ INTRODUCTION
A proper treatment of cases or scenarios is based on two requirements: on the one hand, to capture and adequately process the symbolic entities used to represent the target system: instances, classes, interrelationships forming a local ontology relevant to the domain in focus; on the other hand, to reproduce—by means of elements modelling causal mechanisms, processes, courses of actions, etc.—the same dynamics exhibited by the target system.
Consider for example this case: “While John was walking his dog, the dog ate Paul's flowers.” This event description is not sufficient for entailing that John is responsible for paying Paul for what happened, as typically this is entailed on the basis of norms such as “The owner of an animal has to pay for the damages it produces”.
However, even this addition lacks crucial connections between the conceptual domain of the case and that of the norm, like “dogs are animals”, “eating an object destroys the object”, “destruction is damage”, etc.
These various elements exhibit two perspectives on knowledge: a declarative perspective, concerning objects (physical, mental, institutional) and their logical relationships, both reified as symbols; and a procedural perspective, concerning patterns of events/actions, mechanisms, or processes (involving objects). Formal logic is the prototypical domain concerned with the first perspective, just as process modeling focuses on the second. Unfortunately, methodologies associated with one of the two aspects generally have a limited treatment of the other component, and they require specific mediating machinery to deal with it. For instance, if you want to make a certain outcome impossible in a procedural model, you need to add conditions that disable all transitions that might produce that outcome. If you want to represent a transition in a declarative way, a typical approach is to consider snapshots of the arrangements holding before and after the transition, possibly labeled with a sort of timestamp. This is essentially the principle behind situation calculus <cit.>, event calculus <cit.>, and fluent calculus <cit.>: using appropriate axioms, you can create and reason about the relations between these snapshots in a way such that they are compatible with the causal relationships between the moments they refer to.
Rather than trying to project one dimension on the other, an alternative tradition in AI and logic proposes to consider causality as a primitive notion. This approach is for instance behind the idea of all Action languages <cit.>. Even when the dichotomy is made clear, however, operationalizations of these languages often result in compiling action programs to logic programs <cit.>, returning again to `snapshot-handling' solutions.
The motivation behind this work stems from the hypothesis that leaving process analysis to procedural descriptions should be in principle a better choice: procedural components can directly map to native computational mechanisms, that can be used not only to re-present, but also re-create the process object, transforming the question from what the referent should be (characteristic of logic), to what it is (characteristic of simulation and more in general of model-execution).
The paper reports therefore on a simple benchmark experiment with a hybrid notation (that is, one including procedural and declarative knowledge components), called Logic Programming Petri Nets (LPPNs).[A prototype of an LPPN interpreter is available on <http://github.com/s1l3n0/pypneu>, together with the code run for conducting the experiment.] Section <ref> will introduce the motivation and an informal semantics of LPPNs. Section <ref> will present a formalisation of a propositional version of LPPN. Section <ref> will present a hybrid operational semantics and a denotational semantics based on ASP programs with Event Calculus. Section <ref> will present the results of a first empirical experiment. Discussion and further developments end the paper.
§ LOGIC PROGRAMMING PETRI NETS
Logic Programming Petri Nets (LPPNs) are a visual notation first introduced in <cit.> as a common representational ground on which to align representations of law (norms), of implementations of law (regulatory services in the form of business processes), and of action (behavioural scripts ascribed to social participants). It has been used for a wide class of models (business processes embedded with normative positions, representations of scenarios issued from narratives, agent scripts, deontic paradoxes, etc. <cit.>). The notation builds upon the intuition that places and transitions mirror the common-sense distinction between objects and events (e.g. <cit.>), roughly reflecting the use of noun/verb categories in language <cit.>: the procedural components captured by Petri nets can be used to model transient aspects of the system in focus; the declarative components captured by logic programming constructs can be used to model steady-state aspects, i.e. those for which the transient is irrelevant or does not make sense (e.g. terminological constraints). In this section we will informally describe the bases motivating their integration.
§.§ From Petri Nets to LPPNs
Petri nets are a simple, yet effective computational modelling representation featuring an intuitive visualisation (see Fig. 1). They consist in directed, bipartite graphs with two types of nodes: places (visually represented with circles) and transitions (with boxes). A place can be connected only to transitions and vice-versa. One or more tokens (dots) can reside in each place. The execution of Petri nets is also named “token game”: transitions fire by consuming tokens from their input places and producing tokens in their output places.[For an overview on the general properties of Petri nets see e.g. <cit.>.]
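As an illustration of the token game, a minimal Python sketch follows (places as a marking dictionary, transitions as input/output place lists); it is a toy model for exposition, not the actual interpreter implementation.

# Minimal token game for a binary (condition/event) Petri net.
marking = {"p1": 1, "p2": 0, "p3": 0}
transitions = {
    "t1": {"in": ["p1"], "out": ["p2"]},
    "t2": {"in": ["p2"], "out": ["p3"]},
}

def enabled(t):
    # A transition is enabled if all its input places hold a token.
    return all(marking[p] == 1 for p in transitions[t]["in"])

def fire(t):
    # Consume a token from each input place, produce one in each output place.
    assert enabled(t)
    for p in transitions[t]["in"]:
        marking[p] = 0
    for p in transitions[t]["out"]:
        marking[p] = 1

fire("t1")        # the token moves from p1 to p2
fire("t2")        # the token moves from p2 to p3
print(marking)    # {'p1': 0, 'p2': 0, 'p3': 1}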
Despite their widespread use in computer science, electronics, business process modelling and biology, Petri nets are generally considered not to be expressive enough for reasoning purposes: in their simplest form, tokens are indistinct and do not transport any data. Nevertheless, it is a common practice for modellers to introduce labels to set up a correspondence between the modelling entities and the modelled entities. This practice enables them to read the results of a model execution in reference to the modelled system. In other words, an adequate labelling is still functional to the use of a Petri net for modelling purposes, although it is not a requirement for the execution in itself. Further interaction is possible if these labels are processed according to an additional formalism, as occurs for instance with Coloured Petri Nets (CPNs) <cit.> (in many respects a descendant of Predicate/Transition nets <cit.>). While their expressiveness and wide application provide reasons for their adoption, CPNs also introduce many details which are unimportant in a case-modelling setting (e.g. expressions on arcs); more importantly, they still lack the ability to specify and process declarative bindings, necessary, for instance, to model terminological relationships. This brings us to the idea of LPPNs.
Whereas Petri nets essentially specify procedural mechanisms, LPPNs extend them (a) with literals as labels, attached to places and transitions; (b) with nodes specifying (logic-based) declarative bindings on places and on transitions. This paper will focus only on propositional labelling. Under this constraint, the execution of the LPPN procedural component is the same as that of Condition/Event nets, i.e. Petri nets whose places do not contain more than one token (Fig. <ref>).
Logic operator nodes might apply on places (lp-nodes) or on transitions (lt-nodes). An example of a sub-net with lp-nodes (small black squares) is given in Fig. <ref>; these are used to create logic compositions of places (via operators such as ∧, ∨, etc.) or to specify logic inter-dependencies (via the logic conditional →). Similarly, transitions may be connected declaratively via lt-nodes (black circles), as in Fig. <ref>; these connections may be interpreted as channels enabling instantaneous propagation of firing. In this case it is not relevant to introduce operators such as ∧: for the interleaving semantics, only one source transition may fire per step.
Operationally, the declarative components are treated integrating the stable model semantics used in answer set programming (ASP) <cit.>. This was a natural choice because process execution exhibits a prototypical `forward' nature, and ASP solvers can be interpreted as providing forward chaining.[Both SLD/SLDNF resolution (Prolog) and DPLL (ASP) are based on backward chaining. In DPLL, however, all variables are grounded, and all intermediate atoms generated in the search are collected in stable models; without defining any goal, all the worlds that are implied by the input knowledge are returned as output. From an external perspective, this is the same result we would associate with forward chaining. The intuition that there is a relation between ASP and forward chaining is confirmed e.g. in ASPeRiX <cit.>.]
§ FORMALIZATION
This section presents a simplified version of the LPPN notation considering only a propositional labeling. We start from the definition of propositional literals derived from ASP <cit.>, accounting for strong and default negation.
Given a set of propositional atoms A, the set of literals L = L^+ ∪ L^- consists of positive literals (atoms) L^+ = A and negative literals (negated atoms) L^- = { -a | a ∈ A }, where `-' stands for strong negation.[Strong negation is used to reify an explicitly false situation (e.g. “It does not rain”).] The set of extended literals L^* = L ∪ L^not consists of literals and default negation literals L^not = { not l | l ∈ L }, where `not' stands for default negation.[Default negation is used to reify a situation in which something cannot be retrieved/inferred (e.g. “It is unknown whether it rains or not”).]
We denote the basic topology of a Petri net as a procedural net.
A procedural net is a bipartite directed graph connecting two finite sets of nodes, called places and transitions. It can be written as N = P, T, E, where P = {p_1, …, p_n} is the set of place nodes; T = {t_1, …, t_m} is the set of transition nodes; E = E^+ ∪ E^- is the set of arcs connecting them: E^+ from transitions to places, E^- from places to transitions.
An LPPN consists of three components: a procedural net specifying causal relationships, and two declarative nets specifying logical dependencies, respectively at the level of objects or ongoing events (on places) and at the level of impulse events (on transitions). Furthermore, propositional LPPNs build upon a boolean marking on places (like condition/event nets).
A propositional Logic Programming Petri Net 𝐿𝑃𝑃𝑁_prop is a Petri Net whose places and transitions are labeled with literals, enriched with declarative nets of places and of transitions. It is defined by the following components:
* P, T, 𝑃𝐸 is a procedural net; 𝑃𝐸 stands for procedural edges;
* C_P : P → L^* and C_T : T → L are labeling functions, associating literals respectively to places and to transitions;
* 𝑂𝑃 = {not, -, ∧, ∨, →, ↔, …} is a set of logic operators.
* 𝐿𝑃 and 𝐿𝑇 are
sets of logic operator nodes respectively for places (lp-nodes) and for transitions (lt-nodes).
* C_𝐿𝑃 : 𝐿𝑃→𝑂𝑃 maps each lp-node to a logic operator; similarly, C_𝐿𝑇 : 𝐿𝑇→𝑂𝑃 does the same for lt-nodes.
* 𝐷𝐸_𝐿𝑃 = 𝐷𝐸^+_𝐿𝑃∪𝐷𝐸^-_𝐿𝑃
is the set of arcs connecting lp-nodes to places; similarly, 𝐷𝐸_𝐿𝑇 = 𝐷𝐸^+_𝐿𝑇∪𝐷𝐸^-_𝐿𝑇 for lt-nodes and transitions.[Note that 𝐷𝐸^-_𝐿𝑇⊆ (T ∪ P) ×𝐿𝑇, i.e. these edges go from transitions and places (modeling contextual conditions) to lt-nodes.]
* M: P →{0, 1} returns the marking of a place, i.e. whether the place contains (1) or does not contain (0) a token.
Note that if 𝐿𝑃∪𝐿𝑇 = ∅, we have a strictly procedural 𝐿𝑃𝑃𝑁_prop, i.e. a standard binary Petri net. If T = ∅, we have a strictly declarative 𝐿𝑃𝑃𝑁_prop, that can be directly mapped to an ASP program.
For simplicity, we overlook in this document the syntactic constraints on the network topology which are inherited from ASP.
§ SEMANTICS
This section will present two semantics for LPPNs: a hybrid operational semantics and a denotational semantics, based on ASP and event calculus.
§.§ Hybrid operational semantics
The execution cycle of a LPPN consists of four steps:
* given a “source” marking M, the bindings of the declarative net of places entail a “ground” marking M^*;
* an enabled transition is selected to pre-fire, so determining a “source” transition-event e;
* the bindings of the declarative net of transitions entail all propagations of this event, obtaining a set of transition-events, also denoted as the “ground” event-marking E^*;
* all transition-events are fired, producing and consuming the relative tokens.
Steps (1) and (3) are processed in distinct moments by an ASP solver: the declarative nets of places (1) or of transitions (3) are translated as rules, and tokens (1) or transition-events (3) are reified as facts. The ASP solver takes as input the resulting program and, if it is satisfiable, provides as output respectively one or more ground markings (1) or one or more sets of transition-events to be fired (3).
Steps (2) and (4) make clear the distinction between the external firing (the “source” transition-event selected at execution level) and the internal firings, which are immediately propagated (the “ground” transition-events triggered by the declarative net of transitions).
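A minimal sketch of this four-step cycle is given below, assuming the clingo Python module for the solver calls in steps (1) and (3); the net interface (places_as_asp, transitions_as_asp, enabled, fire_all) is a hypothetical simplification of the actual interpreter.

import random
import clingo   # ASP solver with a Python runtime interface

def solve(asp_program):
    # Return one stable model of an ASP program as a set of atom strings.
    ctl = clingo.Control(["1"])
    ctl.add("base", [], asp_program)
    ctl.ground([("base", [])])
    model = set()
    ctl.solve(on_model=lambda m: model.update(str(a) for a in m.symbols(shown=True)))
    return model

def execution_step(net, marking):
    # (1) entail the ground marking from the declarative net of places
    ground_marking = solve(net.places_as_asp(marking))
    # (2) select one enabled transition to pre-fire (interleaving semantics)
    source_event = random.choice(net.enabled(ground_marking))
    # (3) entail all propagated transition-events via the lt-nodes
    ground_events = solve(net.transitions_as_asp(source_event))
    # (4) fire all transition-events, producing and consuming tokens
    return net.fire_all(ground_events, ground_marking)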
The following definitions provide a formalisation of these concepts.
A transition t is enabled in a ground marking M^* if a token is available in each of its input places:
𝐸𝑛𝑎𝑏𝑙𝑒𝑑(t) ≡ ∀ p_i ∈ ∙t : M^*(p_i) = 1
Similarly to what marking is for places, we consider an event-marking for transitions E: T→{0, 1}. E(t) = 1 if the transition t produces a transition-event e. Each step s has a “source” event-marking E.
An enabled transition t pre-fires (implicitly, at a step s) if it is selected to produce a transition-event:
∀ t ∈ 𝐸𝑛𝑎𝑏𝑙𝑒𝑑(T) : t pre-fires ≡ E(t) = 1
Applying an interleaving semantics for the pre-firing, the interpreter selects only one transition to pre-fire per step; for any other t', E(t') = 0.
An enabled transition t fires (implicitly, at a step s) by propagation, consuming a token from each input place and producing a token in each output place:
∀ t ∈ 𝐸𝑛𝑎𝑏𝑙𝑒𝑑(T) : t fires ≡ E^*(t) = 1 ↔ (∀ p_i ∈ ∙t : M'(p_i) = 0) ∧ (∀ p_o ∈ t∙ : M'(p_o) = 1)
Running example Let us consider the LPPN in Fig. <ref>. Here, for simplicity, we conflate the names of the transitions/places with their labels; in the general case these should be kept distinct, as there might be multiple nodes with the same label. The proposed net specifies causal mechanisms together with declarative constraints. There is only one token, residing in c1, which enables the transitions that may consume it, e.g. e1 and e2. Several execution paths are possible. For instance, if e2 fires, it simply consumes the token in c1; if instead e1 fires, it consumes the token in c1 and produces a token in c2; the source firing of e1 instantaneously propagates to e3 via the lt-node; and, via the lp-node, the existence of a token in c2 is a sufficient condition for immediately reifying a token in c5.
§.§ Denotational semantics
One of the possibilities to validate a formal language is to map it into another formal language, i.e. to provide a denotational semantics. The declarative component of a LPPN, by design, can be directly rewritten as ASP code. As we are already halfway down the path, we can translate the remaining procedural component into ASP.
§.§.§ Event Calculus axioms
A well-known solution to treat change in logic programming is event calculus (EC) <cit.>. The simple version is already satisfactory for our purposes. A modification of the original axioms is necessary to deal with the locality brought by places and transitions:
% A fluent F holds at place P at time N if it held initially
% and has not been clipped since time 0.
holdsAt(F, P, N) :-
    initially(F, P), not clipped(0, F, P, N),
    fluent(F), place(P), time(N).

% A fluent F holds at place P at time N2 if a transition firing
% initiated it at an earlier time N1 and it has not been clipped since.
holdsAt(F, P, N2) :-
    firesAt(T, N1), N1 < N2,
    initiates(T, F, P, N1), not clipped(N1, F, P, N2),
    place(P), transition(T), fluent(F), time(N1), time(N2).

% A fluent F at place P is clipped between N1 and N2 if some
% transition firing terminates it within that interval.
clipped(N1, F, P, N2) :-
    firesAt(T, N), N1 <= N, N < N2,
    terminates(T, F, P, N),
    place(P), transition(T), fluent(F), time(N1), time(N2), time(N).
§.§.§ Interleaved semantics axioms
The interleaved semantics can be translated into the following rules:
* all enabled transitions may or may not pre-fire;
* pre-firing is transformed to firing;
* at least one enabled transition must pre-fire per step, i.e. it is impossible that no transition fires when there are enabled transitions;
* at most one transition can pre-fire per step.
In ASP code:
% (1) any enabled transition may or may not pre-fire (choice rule)
{ prefiresAt(T, N) } :-
    enabled(T, N), transition(T), time(N).

% (2) pre-firing is transformed to firing
firesAt(T, N) :- prefiresAt(T, N), transition(T), time(N).

% (3) at least one enabled transition must pre-fire per step
someTransitionPrefiresAt(N) :-
    prefiresAt(T, N), transition(T), time(N).
:- not someTransitionPrefiresAt(N), enabled(T, N), transition(T), time(N).

% (4) at most one transition can pre-fire per step
:- prefiresAt(T1, N), prefiresAt(T2, N), T1 != T2,
    transition(T1), transition(T2), time(N).
§.§.§ Transformation of a LPPN to an ASP program
The mapping of a given LPPN to an equivalent ASP program includes the previous axioms and the output of the following steps (a minimal Python sketch implementing the procedural part of this transformation is given after the list):
* for each place p, with label C_P(p)
* type it as place,
* specify its initial state,
* for each place with more than one output, write down that you cannot consume more than the only available token.
* for each transition t, with label C_T(t)
* type it as transition,
* define the conditions for which it is enabled,
* for each output place, define how to create tokens in the output places,
* for each input place, define how tokens are consumed from the input places.
* for each lp-node 𝑙𝑝,
* specify the logic constraint to be applied between inputs and outputs.
* for each lt-node 𝑙𝑡,
* write down the logic dependencies between transitions allowing the propagation.
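As announced above, a minimal Python sketch of this transformation, restricted to the procedural part (steps 1 and 2) and assuming a simple dictionary-based representation of the net, might look as follows; lp- and lt-nodes (steps 3 and 4) would be handled analogously.

def procedural_to_asp(places, transitions, initial):
    # places: list of place names; transitions: name -> {"in": [...], "out": [...]};
    # initial: set of initially filled places. (Partial sketch: the constraint
    # of step 1c and the declarative nodes of steps 3-4 are omitted.)
    lines = ["fluent(filled)."]
    for p in places:
        lines.append(f"place({p}).")
        if p in initial:
            lines.append(f"initially(filled, {p}).")
    for t, io in transitions.items():
        lines.append(f"transition({t}).")
        cond = ", ".join(f"holdsAt(filled, {p}, N)" for p in io["in"])
        lines.append(f"enabled({t}, N) :- {cond}.")
        for p in io["out"]:
            lines.append(f"initiates({t}, filled, {p}, N) :- firesAt({t}, N).")
        for p in io["in"]:
            lines.append(f"terminates({t}, filled, {p}, N) :- firesAt({t}, N).")
    return "\n".join(lines)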
As a concrete example, we apply these actions on some of the components of the LPPN in Fig. <ref>:
fluent(filled).

% place c1, initially marked (step 1)
place(c1).
initially(filled, c1).
% the only available token in c1 cannot be consumed more than once (step 1c)
:- 2 { terminates(e2, filled, c1, N); terminates(e1, filled, c1, N) }.

% transition e1 and its enabling/effect rules (step 2)
transition(e1).
enabled(e1, N) :- holdsAt(filled, c1, N).
terminates(e1, filled, c1, N) :- firesAt(e1, N).
initiates(e1, filled, c2, N) :- firesAt(e1, N).

% lp-node: a token in c2 entails a token in c5 (step 3)
holdsAt(filled, c5, N) :- holdsAt(filled, c2, N).
% lt-node: the firing of e1 propagates to e3 (step 4)
firesAt(e3, N) :- firesAt(e1, N), enabled(e1, N).
Execution With the transformation steps given above, valid LPPNs can be transformed into ASP programs. In particular, for the axioms presented here, these programs can be solved by the ASP engine <cit.>, also available online at: <https://potassco.org/clingo/run/>. Setting a temporal range (e.g. with an instruction such as “time(0..n).”), the output answer sets represent all possible execution paths after at most n steps.
§ RESULTS
The proposal presented above has been used for developing a prototype Python application for specifying, executing and analyzing LPPNs[Available at <http://github.com/s1l3n0/pypneu>.]; it exploits clingo <cit.>, as this provides runtime interfaces enabling a direct control of the solver instance without regrounding the program at each cycle. This enabled us to perform a direct evaluation of any given LPPN input.
When we process the input LPPN by means of the denotational semantics, the input is transformed into an ASP program, and the solver intervenes fully to provide the possible execution paths. Instead, when we refer to the hybrid operational semantics, the solver intervenes only partially in the execution cycle, to entail the constraints implied by the declarative components of the net; the rest of the computational burden is on the module responsible for the Petri net execution. In this context, one might ask whether we can observe performance differences between these two alternative modes of analysis/execution.
At the moment, we have only evaluated a propositional version of LPPN, and a limited series of structures, namely compositions of minimal serial elements (a transition with an input and output places) or minimal forking elements (a place with two output transitions). In order to implement the procedural component of the operational semantics, the current Petri Net analysis module builds upon a simple brute force (BF) execution algorithm, and depth-first search with backtracking (BT) to cover all the possible execution paths.
Table <ref> summarises the performances of 10 executions of different network configurations.[The tests were run on a MacBook Pro (2018) provided with a 2.2 GHz 6-core Intel Core i7 processor and 16 GB of DDR4 RAM.] Results are also illustrated in Fig. <ref>. The data essentially confirm our hypothesis: the analysis based on the operational semantics (BF+BT) clearly outperforms and scales excellently for the serial configurations, while that based on the denotational semantics (EC) scales poorly in this configuration. For the forking configurations, BF+BT is evidently slower in absolute terms. Intuitively this is due to the efficient search and pruning capabilities of ASP. Unlike clingo, the Python code of the Petri net executor/analyzer is not optimised; on the contrary, for many aspects it represents a lower bound on the possible implementation choices. Nevertheless, if we consider execution times in logarithmic scale, we observe that the two methods are essentially comparable in terms of tractability.
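For reference, a timing harness of this kind can be sketched as follows; run stands for either of the two analysis modes of the prototype and is a hypothetical callable, not part of the released code.

import statistics
import time

def benchmark(run, net, repetitions=10):
    # Time run(net) over several repetitions; returns mean and stdev in seconds.
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run(net)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)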
§ CONCLUSION
The paper presents an empirical experiment with LPPNs, a logic programming-based extension of Petri Nets. LPPNs were introduced with a practical goal in mind: a visual modelling notation relatively simple for non-experts, that could handle explicit declarative knowledge, and that could model causation and other procedural aspects <cit.>. It was inspired by the point made in <cit.> on the widespread confusion in cognitive science and computational disciplines around the notion of rules (namely between declarative and reactive rules). Previous contributions <cit.> highlighted the potential use of LPPNs in normative modelling tasks in combination with business process modelling, thus potentially facilitating cross-fertilization between theoretical and practical settings.
Here the focus has been put on its computational properties, showing that keeping the two levels separated can bring better performance. The declarative dimension allows treating, at a higher level of abstraction, phenomena for which there is a viable specification at the outcome level. The procedural dimension works better for processes that can be directly executed.
Future developments concern the extension of this work to a wider range of experiments, first considering mixed networks (of declarative and procedural components) with mixed configurations (serial compositions, forks, joins, etc.) and then passing to the extended LPPN notation accounting for predicates. The actual impact on real models should be evaluated as well: scenarios describing cases have very few forks; they rather function as orchestrated (i.e. directed from the scenario) scripts (procedural models distributed amongst actors). Consequently, applications that require the use of scenarios (e.g. for interpretation, model-based diagnosis, conformance checking, etc.) may take advantage of the hybrid operational semantics. The computational performance may be further improved by considering existing proposals in the literature: for instance, execution algorithms alternative to brute-force execution <cit.>, or decomposition techniques, for instance into single-entry-single-exit (SESE) components <cit.>, that open up the possibility of concurrent execution.
Further, these results should be confronted with existing techniques for handling temporal reasoning and causality, e.g. the already cited Action languages <cit.>, related works (e.g. F2LP <cit.>) and applications (CCalc, Coala, Cplus2ASP); optimized versions of Event Calculus (e.g. <cit.>); applications based on LTL, CTL and related formalisms.
Analysis and Measurement of the Transfer Matrix of a 9-cell, 1.3-GHz Superconducting Cavity

A. Halavanau^1,2, N. Eddy^2, D. Edstrom Jr.^2, E. Harms^2, A. Lunin^2, P. Piot^1,2, A. Romanov^2, J. Ruan^2, N. Solyak^2, V. Shiltsev^2
^1 Department of Physics and Northern Illinois Center for Accelerator & Detector Development, Northern Illinois University, DeKalb, IL 60115, USA
^2 Fermi National Accelerator Laboratory, Batavia, IL 60510, USA

FERMILAB-PUB-17-020-APC
PACS: 41.75Ht, 41.85.-p, 29.17.+w, 29.27.Bd

Superconducting linacs are capable of producing intense, stable, high-quality electron beams that have found widespread applications in science and industry. The 9-cell 1.3-GHz superconducting standing-wave accelerating RF cavity originally developed for e^+/e^- linear-collider applications [B. Aune et al., Phys. Rev. ST Accel. Beams 3, 092001 (2000)] has been broadly employed in various superconducting-linac designs. In this paper we discuss the transfer matrix of such a cavity and present its measurement performed at the Fermilab Accelerator Science and Technology (FAST) facility. The experimental results are found to be in agreement with analytical calculations and numerical simulations.
§ INTRODUCTION
The 1.3-GHz superconducting radiofrequency (SRF) accelerating cavities were originally developed in the context of
the TESLA linear-collider project <cit.> and were included in the baseline design of the International Linear Collider (ILC) <cit.>
and in the design of various other operating or planned accelerator facilities. Projects based on such a cavity include electron- <cit.>, muon- <cit.>,
and proton-beam accelerators <cit.> supporting fundamental science
and compact high-power industrial electron accelerators <cit.>.
Such a cavity is a 9-cell standing-wave accelerating structure operating in the TM_010,π mode.
The transverse beam dynamics associated with such a cavity has been extensively explored over the last decade, focusing essentially on numerical simulations of single-bunch emittance dilution due to field asymmetries <cit.> and of multibunch effects due to trapped modes <cit.>. Most recently, experiments aimed at
characterizing the transverse beam dynamics in this type of SRF cavity were performed <cit.>.
In this paper we discuss the measurement and analysis of the transverse transfer matrix of a 9-cell 1.3-GHz SRF cavity.
In particular, we compare the results with Chambers' analytical model <cit.>.
In brief, an analytical model of the transverse focusing in the accelerating cavity can be derived by considering the transverse motion
of the particle in a standing-wave RF field with axial field E_z(z,t) = E_0 ∑_n a_n cos(nkz) sin(ωt + ϕ), where E_0 is the peak field, nk is the wave number associated with the n-th harmonic of amplitude a_n, ϕ is an arbitrary phase shift, and z
is the longitudinal coordinate along the cavity axis.
The ponderomotive-focusing force is obtained under the paraxial approximation as F_r = -e(E_r - vB_ϕ) ≈ er ∂E_z/∂z, where v ≃ c is the particle velocity along the axial direction. Ref. <cit.> shows that the force averaged over one RF period yields, to first order in perturbation theory, the focusing strength K̅_r = -(eE_0)^2/[8(βγ m c^2)^2] for the case of a “pure” standing-wave resonator (where the spatial profile of the axial field is modeled as E_z(z) ∝ cos(kz) inside the cavity) originally considered in Ref. <cit.>.
The equation of motion then takes form:
x'' + (γ'/γ) x' + K̅_r (γ'/γ)^2 x = 0,
where x is the transverse coordinate, x' ≡ dx/dz, and γ' ≡ dγ/dz = eE_0 cos(ϕ)/(m_0 c^2) ≡ G̅_RF/(m_0 c^2) is the normalized energy gradient, γ being the Lorentz factor.
The solution of Eq. <ref> through the cavity is of the form 𝐱_f = R 𝐱_i, where 𝐱 ≡ (x, x')^T, R is a 2×2 matrix, and the subscripts i and f indicate respectively the upstream and downstream particle coordinates.
According to Chambers' model, the elements of R are given by <cit.>:
R_11 = cosα - √2 cos(ϕ) sinα,
R_12 = √8 (γ_i/γ') cos(ϕ) sinα,
R_21 = -(γ'/γ_f) [cos(ϕ)/√2 + 1/(√8 cos(ϕ))] sinα,
R_22 = (γ_i/γ_f) [cosα + √2 cos(ϕ) sinα],
where α ≡ [1/(√8 cos(ϕ))] ln(γ_f/γ_i), and
γ_f ≡ γ_i + γ' L cosϕ is the final Lorentz factor (where L is the cavity length).
The determinant of the 2×2 block of the matrix is |R|_2×2 = γ_i/γ_f.
The latter equation also holds for the vertical degree of freedom (y,y') owing to the assumed cylindrical symmetry.
Under such an assumption the equations for the vertical degree of freedom are obtained
via the substitutions x↔ y, 1↔ 3 and 2↔ 4. The total
transverse transfer matrix determinant is then |R|_4×4=(γ_i/γ_f)^2.
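For reference, the following Python sketch is a direct transcription of the matrix elements above; the numerical values in the example are placeholders loosely inspired by the experimental conditions described below, not fitted parameters.

import numpy as np

def chambers_matrix(gamma_i, gprime, L, phi):
    # 2x2 transfer matrix of a standing-wave cavity (Chambers' model).
    # gamma_i: injection Lorentz factor; gprime: normalized energy gradient [1/m];
    # L: cavity length [m]; phi: off-crest phase [rad].
    gamma_f = gamma_i + gprime * L * np.cos(phi)
    alpha = np.log(gamma_f / gamma_i) / (np.sqrt(8.0) * np.cos(phi))
    c, s = np.cos(alpha), np.sin(alpha)
    r11 = c - np.sqrt(2.0) * np.cos(phi) * s
    r12 = np.sqrt(8.0) * gamma_i / gprime * np.cos(phi) * s
    r21 = -gprime / gamma_f * (np.cos(phi) / np.sqrt(2.0)
                               + 1.0 / (np.sqrt(8.0) * np.cos(phi))) * s
    r22 = gamma_i / gamma_f * (c + np.sqrt(2.0) * np.cos(phi) * s)
    return np.array([[r11, r12], [r21, r22]])

# Placeholder example: ~20-MeV injection into a 1.03-m cavity at 14 MeV/m, on crest.
R = chambers_matrix(gamma_i=40.0, gprime=14.0 / 0.511, L=1.03, phi=0.0)
print(R, np.linalg.det(R))   # the determinant equals gamma_i/gamma_f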
The axially-symmetric electromagnetic field assumed in deriving Eq. <ref> is often violated, e.g., due to asymmetries introduced by the input-power (or forward-power) and higher-order-mode (HOM) couplers. The input-power coupler couples the RF power to the cavity, while the HOM couplers damp the harmful trapped fields potentially excited as
long trains of bunches are accelerated in the SRF cavities. In addition to the introduced field asymmetry, the coupler can also impact the beam via geometrical wakefields <cit.>.
The measurement of the transverse matrix of a standing wave accelerating structure (a plane-wave transformer, or PWT)
was reported in Ref. <cit.> and benchmarked against an “augmented" Chambers' model detailed in <cit.>. This refined model accounts for the presence of higher-harmonic spatial content in the axial field profile E_z(r=0,z). The present paper extends such a measurement to the case of a 1.3-GHz SRF accelerating
cavity, and also investigates, via numerical simulation, the impact of the auxiliary couplers on the transfer matrix of the cavity. These simulations and measurements generally indicate that higher spatial harmonics do not play a significant role in the case of the TESLA cavity. Additionally, we note that the presented measurements are performed in a regime where the energy gain through the cavity is comparable to the beam injection energy [γ_i ∼ γ' L]. In such a regime, the impact of field asymmetries is expected to be important.
§ NUMERICAL ANALYSIS
To investigate the potential impact of the couplers, a 3D electromagnetic model of the cavity, including auxiliary couplers,
was implemented in hfss <cit.>. The simulated 3D electromagnetic field map was imported as an external
field in the astra particle-tracking program <cit.>. The program astra tracked particles in the
presence of external fields from first principles via a time integration of the Lorentz equation. Additionally, astra can include space-charge effects using a quasistatic particle-in-cell approach based on solving Poisson's equation in the bunch's rest frame <cit.>.
The electromagnetic field map { E(x,y,z), B(x,y,z)} from hfss was generated
over a rectangular computational domain with x,y∈[- 10,+10] mm from the cavity axis and for z∈[-697.5,+697.5] mm with respect to the cavity center along the cavity length; see Fig. <ref>(a).
The mesh sizes in the corresponding directions were respectively taken to be δx = δy = 0.5 mm and δz = 1 mm. The electromagnetic simulations assume a loaded quality factor Q ≃ 3×10^6, as needed for the nominal ILC operation. Such a loaded Q corresponds to the inner conductor of the input coupler having a 6-mm penetration depth <cit.>. Figures <ref>(b) and (c) respectively present the axial and transverse fields simulated along the cavity axis and normalized to the peak axial field E_0 ≡ max[E_z(r=0,z)]. As can be seen in Fig. <ref>(c), the coupler, aside from shifting the center of the mode, also introduces time-dependent
transverse electromagnetic fields that will impact the beam dynamics.
Given the field map loaded in astra, the program introduces the time dependence while computing the external Lorentz force experienced by a macroparticle at position r≡(x,y,z) at a given time t as
F( r,t) = q [ E( r)sinΨ(t) + v × B( r)cosΨ(t)],
where Ψ(t)≡ω t + ϕ (with ω≡ 2π f and f=1.3 GHz is the frequency) and q and v are respectively the macroparticle charge and velocity. In the latter equation the time origin is arbitrarily selected to ensure ϕ=0 corresponds to on-crest acceleration.
In order to deconvolve the impact of the auxiliary couplers from the dominant ponderomotive focusing of the cavity, numerical simulations based on a cylindrically-symmetric model were also performed. For these calculations the axial electric field E_z(r=0, z) displayed in Fig. <ref>(b) is imported in astra, where the corresponding transverse electromagnetic fields at given positions (r,θ,z) are computed assuming an ideal TM_010 mode and, under the paraxial approximation, as E_r = -(r/2) ∂E_z(r=0, z)/∂z and B_ϕ = [iωr/(2c^2)] E_z(r=0, z) <cit.>.
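For illustration, a minimal Python sketch of this paraxial reconstruction from a tabulated on-axis profile follows; the 90-degree phase shift implied by the imaginary factor is carried by the time dependence of Eq. <ref>, so only field magnitudes are returned here.

import numpy as np

def paraxial_fields(r, z_grid, Ez_axis, omega):
    # Transverse fields of an ideal TM_010 mode from the on-axis profile
    # E_z(r=0, z), under the paraxial approximation.
    c = 299792458.0
    dEz_dz = np.gradient(Ez_axis, z_grid)
    E_r = -0.5 * r * dEz_dz
    B_phi = 0.5 * omega * r * Ez_axis / c**2   # 90 deg out of phase with E_z
    return E_r, B_phi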
In order to quantitatively investigate the transverse beam dynamics in the cavity, we consider a monoenergetic distribution
of macroparticles arranged on the vertices of a 2×2 transverse grid in the (x,y) plane with
distribution ∑_i ∑_j δ (x-iΔ x) δ (y-jΔ y) where δ(x) is
Dirac's function and taking Δx = Δy = 0.3 mm. The macroparticles, with vanishing incoming transverse momenta and located at the same axial position, are tracked through the cavity field and their final transverse momenta recorded downstream of the cavity. Figure <ref>(a) displays the change in transverse momentum δP_⊥ imparted by the auxiliary
couplers normalized to the change in longitudinal momentum δP_∥. This is computed as the difference between astra simulations using the cylindrically-symmetric field [Fig. <ref>(b)] and the ones based on the 3D field map [Fig. <ref>(c)]. Figure <ref>(a) indicates a strong dipole-like field and also hints at the presence of higher-moment components. To further quantify the impact of the auxiliary couplers, we write the change in transverse momentum as an electron passes through the cavity, δP_⊥ ≡ (δp_x, δp_y)^T, as an affine function of the input transverse coordinates r_⊥,0 ≡ (x_0, y_0)^T (here the superscript ^T represents the transpose operator)
δ P_⊥ = d + M r_⊥,0,
where d≡(d_x, d_y) is a constant vector accounting for the dipole kick along each axis, and M is a 2×2 correlation matrix. The latter equation can be rewritten to decompose the final momentum in terms of the strength characterizing the various focusing components <cit.>
(δp_x, δp_y)^T = (d_x, d_y)^T + k_p (x_0, y_0)^T + k_q (x_0, -y_0)^T + k_sk (y_0, x_0)^T + k_s (y_0, -x_0)^T,
where k_p,q≡ (M_11± M_22)/2, and k_sk,s≡ (M_12± M_21)/2 respectively account for the axially-symmetric ponderomotive, quadrupole, skew-quadrupole and solenoidal focusing effects. It should be pointed out that the coefficients introduced in the latter equation are implicit functions of the cavity field and operating phase.
Furthermore, the linear approximation resulting in Eq. <ref> requires validation.
In order to find the focusing strengths we performed simulations similar to the one presented in Fig. <ref>(c) and directly computed the offset d and correlation matrix M needed to derive the focusing strengths in Eq. <ref>. Such an analysis was carried out to provide the steering and focusing strengths as a function of the injection phase ϕ, as summarized in Fig. <ref>. Our analysis confirms the presence of higher-moment components such as quadrupole and skew-quadrupole terms, as investigated in Ref. <cit.>. It also indicates that the strength of these quadrupolar components is very small compared to the cylindrically-symmetric ponderomotive focusing, specifically k_sk ∼ k_q ∼ O(10^-2 × k_p). Finally, we observe that the solenoidal contribution, k_s ∼ O(10^-4 × k_p), is insignificant. The relatively weak focusing strengths arising from the presence of the auxiliary couplers confirm that the transfer matrix is essentially dominated by the ponderomotive focusing. Therefore we expect the couplers to have a negligible impact on the transfer-matrix measurement reported in the next Section. It should however be noted that the time dependence of these effects, especially of the dipole kick, can lead to a significant emittance increase via a head-tail effect, where different temporal slices within the bunch experience a time-varying kick, resulting in a dilution of the transverse emittance. Such an effect is especially important when low-emittance low-energy beams are accelerated in a string of cavities <cit.>.
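The decomposition above amounts to a linear least-squares problem; a minimal Python sketch follows, where the arrays of tracked input positions and output momentum changes are assumed to be available from simulations such as those of Fig. <ref>(c).

import numpy as np

def focusing_strengths(x0, y0, dpx, dpy):
    # Fit dP = d + M r0 in the least-squares sense and decompose M into
    # ponderomotive (k_p), quadrupole (k_q), skew-quadrupole (k_sk) and
    # solenoidal (k_s) strengths. Inputs: 1D arrays over macroparticles.
    A = np.column_stack([np.ones_like(x0), x0, y0])
    cx = np.linalg.lstsq(A, dpx, rcond=None)[0]   # [d_x, M11, M12]
    cy = np.linalg.lstsq(A, dpy, rcond=None)[0]   # [d_y, M21, M22]
    d = np.array([cx[0], cy[0]])
    M = np.array([[cx[1], cx[2]], [cy[1], cy[2]]])
    k_p, k_q = (M[0, 0] + M[1, 1]) / 2, (M[0, 0] - M[1, 1]) / 2
    k_sk, k_s = (M[0, 1] + M[1, 0]) / 2, (M[0, 1] - M[1, 0]) / 2
    return d, (k_p, k_q, k_sk, k_s)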
§ EXPERIMENTAL SETUP & METHOD
The experiment was performed in the electron injector of the IOTA/FAST facility <cit.>. The experimental setup is diagrammed in Fig. <ref>(a). In brief, an electron beam photoemitted from a high-quantum-efficiency semiconductor
photocathode is rapidly accelerated to ∼5 MeV in an L-band 1+1/2-cell radiofrequency (RF) gun.
The beam energy is subsequently boosted using two 1.03 m long 1.3-GHz SRF accelerating cavities [labeled as CAV1 and CAV2 in Fig. <ref>(a)]
up to a maximum of ∼52 MeV. In the present experiment the average accelerating gradients of the cavities were respectively set to
G̅_CAV1≃ 15 MeV/m and G̅_CAV2≃ 14 MeV/m. The simulated bunch transverse sizes and length along the IOTA/FAST
photoinjector appear in Fig. <ref> for the nominal bunch charge (Q = 250 pC) and settings used in the experiment. The corresponding peak current, Î ≃ 30 A, is small enough to ensure wakefield effects are insignificant: from Fig. 4 of Ref. <cit.> we estimate the transverse geometric wakefield to yield a kick on the order of 1 eV/c, i.e., two orders of magnitude lower than the dipole kick given in Fig. <ref> over the phase range ϕ ∈ [-30^∘, 30^∘]. The simulated kinetic energy downstream of CAV2 is K ≃ 34 MeV, consistent with the measured value.
The available electron-beam diagnostics include cerium-doped yttrium aluminum garnet (Ce:YAG) scintillating crystals for transverse beam-size measurements upstream of CAV1 and downstream of CAV2, and beam position monitors (BPMs), which were the main diagnostics used during our experiment. Each BPM consists of four electromagnetic pickup “button” antennae located 90^∘ apart at the same axial position and at a radial position 35 mm from the beamline axis. The beam position u = (x,y) is inferred from the beam-induced voltages on the antennae using a 7th-order polynomial u = ∑_i a_u,i F(Φ_j), where Φ_j (j=1,2,3,4) are the induced voltages on each of the four BPM antennae and the coefficients a_u,i are inferred from a lab-bench calibration procedure using a wire-measurement technique; see Ref. <cit.>. At the time of our measurements, the BPM system was still being commissioned and the resolution was ≃ 80 μm in both dimensions <cit.>.
As the starting point of the transfer-matrix measurement, the beam was centered through both cavities CAV1
and CAV2 using a beam-based alignment procedure. The beam positions (x_i,y_i) [where i=1,2] downstream of CAV2 were recorded for two phase settings (ϕ_1,2=±30^∘) and the function χ = √((x_1-x_2)^2+(y_1-y_2)^2), quantifying the relative beam displacement, was evaluated. The settings of the dipole correctors upstream of CAV2 were then employed as free variables to minimize χ using a conjugate-gradient algorithm.
In order to measure the transfer matrix, we used a standard difference-orbit-measurement technique where beam-trajectory perturbations are applied with magnetic steerers located upstream of CAV2 and resulting changes are recorded downstream of the cavity with a pair of BPMs. In our experiment, the perturbations were applied using two sets of horizontal and vertical magnetic steerers (HV101 and HV103) with locations displayed in Fig. <ref>(a). Orbit perturbations were randomly
generated to populate a large range of initial conditions in the 4D trace space 𝐗_𝐢≡ (x_i,x_i', y_i, y_i'). Only the perturbations for which the beam was fully-transmitted were retained [the charge transmission is inferred from two integrated-current
monitors (ICM) shown in Fig. <ref>(a)].
For each measured cavity phase point, 20 different sets of perturbations (associated with a set of upstream dipole-magnet settings) were impressed. The beam was then propagated through CAV2 up to a pair of downstream electromagnetic button-style BPMs.
The measurement of beam position with CAV2 “off” and “on”, where “off” means zero accelerating gradient,
(indirectly) provided the initial 𝐗_𝐢 and final 𝐗_𝐟 beam positions and divergences respectively upstream and downstream of CAV2.
Correspondingly, given the 4× 4 transfer matrix of the cavity R, these vectors are
related via 𝐗_f = R 𝐗_i. An initial perturbation δ𝐗_0,i to the nominal orbit 𝐗_0,i, such that 𝐗_i = 𝐗_0,i + δ𝐗_0,i, will result in an orbit change downstream of CAV2 given by
δ𝐗_0,f = R δ𝐗_0,i.
Therefore any selected orbit can serve as a reference orbit to find the transformation R, assuming the set of perturbed trajectories around this reference is transformed linearly
(which is the essence of the paraxial approximation). Consequently, impressing a set of N initial perturbations δ𝐗^(n)_0,i, where n = 1, …, N, results in a system of N equations similar to Eq. <ref>, which can be cast in the matrix form
Ξ_f=R Ξ_i,
where Ξ_j (j = i, f) are 4×N matrices containing the positions and divergences associated with the N orbit perturbations. This system can then be inverted via a least-squares technique to recover R.
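In practice this inversion reduces to a least-squares fit; a minimal Python sketch, assuming the perturbed orbits are stored column-wise in 4×N arrays, is given below.

import numpy as np

def recover_matrix(Xi, Xf):
    # Recover R from Xf = R Xi via least squares.
    # Xi, Xf: 4 x N arrays of (x, x', y, y') upstream/downstream of the cavity.
    # Solving Xi^T R^T = Xf^T requires N >= 4 independent orbits.
    Rt = np.linalg.lstsq(Xi.T, Xf.T, rcond=None)[0]
    return Rt.T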
The error analysis includes statistical fluctuations (which arise from various sources of jitter)
and uncertainties in the beam-position measurements. The statistical error bars were evaluated using an analogue of a
bootstrapping technique. Given that the transformation (<ref>) is linear,
any pair of initial 𝐗_𝐤,𝐢 and final 𝐗_𝐤,𝐟 beam-position
measurements can define the reference orbit, while the other pairs (𝐗_𝐣,𝐢, 𝐗_𝐣,𝐟)
with j≠ k are taken as perturbed orbits from which the transfer matrix can be inferred. Consequently, we retrieved
the transfer matrix R_k associated with the reference orbit (𝐗_𝐤,𝐢, 𝐗_𝐤,𝐟). The procedure is
repeated for all orbits k∈[1,N] and each resulting matrix R_k is recorded. A final
step consists in computing the average ⟨ R⟩ and variance σ_R^2=⟨ R^2⟩-⟨ R⟩^2 over the N
realizations of R_k. Finally, the measured value is reported as R=⟨ R⟩± 2σ_R.
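Continuing the synthetic example above, the resampling loop described here might read:

```python
# Take each orbit k in turn as the reference and fit R to the remaining
# orbits expressed as deviations from it.
R_samples = []
for k in range(N):
    dXi = np.delete(Xi - Xi[:, [k]], k, axis=1)  # perturbations about orbit k
    dXf = np.delete(Xf - Xf[:, [k]], k, axis=1)
    Rk, *_ = np.linalg.lstsq(dXi.T, dXf.T, rcond=None)
    R_samples.append(Rk.T)

R_samples = np.array(R_samples)
R_mean = R_samples.mean(axis=0)   # element-wise average over realizations
sigma_R = R_samples.std(axis=0)   # element-wise rms spread
# reported value: R_mean ± 2 * sigma_R
```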
§ EXPERIMENTAL RESULTS
The elements of the transfer matrix were measured for nine values of phases in the range ϕ∈ [-20^∘,20^∘]
around the maximum-acceleration (or “crest") phase corresponding to ϕ=0^∘.
For each set of perturbations, the beam positions along the beamline were recorded over 4 shots to account for
possible shot-to-shot variations arising from beam jitter or instrumental error. The corresponding set of 80 orbits was subsequently
used in the analysis algorithm described in the previous Section.
The comparison of the recovered transfer-matrix elements with the Chambers' model, along with the matrix inferred from particle
tracking with astra, appears in Fig. <ref>. The shaded areas in Fig. <ref> and subsequent figures
correspond to the simulated uncertainties given the CAV2 cavity gradient G̅_CAV2=14 ± 1 MeV/m.
Overall, we note the very good agreement between the measurements, simulations, and theory. The slight discrepancies between the Chambers' model and the experimental results do not appear to show any correlations and are attributed to the instrumental jitter of the BPMs, RF-power fluctuations, cavity-alignment uncertainties, and halo induced by non-ideal laser conditions.
During the measurement, we were unable to set the phase of CAV2 beyond the aforementioned range,
as doing so would require a significant reconfiguration of the IOTA/FAST beamline. Nevertheless, we note that this range of phases is of interest to most of the projects currently envisioned.
The elements of the coupling (anti-diagonal) 2×2 blocks of the 4×4 matrix, as modeled in the simulation, are one to two orders of magnitude smaller than the elements of the diagonal blocks.
For instance, considering the x coordinate we
find that R_13/R_11∼ O(10^-2) and R_14/R_12∼ O(10^-2). This finding corroborates our experimental results, which indicate that R_13/R_11≲ 0.1 and R_14/R_12≲ 0.1; see Fig. <ref>. The latter observation confirms that, for the range of parameters being explored, the 3D effects associated with the presence of the couplers have a small impact on the single-particle beam dynamics, as already discussed in Sec. <ref>. The measured matrix elements were used to infer the determinant |R|, which is in overall good agreement with the simulation and Chambers' models; see Fig. <ref>.
Finally, the field amplitude in CAV1 was varied, thereby changing the injection energy into CAV2 and hence the measured transfer-matrix elements of CAV2. Since the beam remained relativistic, the change did not affect the injection phase in CAV2. The determinant of each 2×2 block is expected to follow the adiabatic scaling γ_i/γ_f. The experimental measurements presented in Fig. <ref> confirm a scaling as (γ_i/γ_f)^2, as expected for the determinant of the 4×4 transfer matrix.
§ DISCUSSION
In summary, we have measured the transfer matrix of a 1.3-GHz SRF accelerating cavity at the IOTA/FAST facility. The measurements are found to be in good agreement with numerical simulations and analytical results based on the Chambers' model.
In particular, the contributions from the auxiliary couplers are small and do not affect the 4× 4 matrix, which can be approximated by a symmetric 2× 2-block diagonal matrix within our experimental uncertainties.
Furthermore, the electromagnetic-field deviations from a purely cylindrically-symmetric TM_010 mode do not significantly affect the single-particle beam dynamics.
It should however be stressed that nonlinearities, along with the time dependence of the introduced dipole
and non-cylindrically-symmetric first-order perturbations, contribute to transverse-emittance dilution <cit.>. Investigating such effects would require beams with ultra-low emittances. A unique capability of the IOTA/FAST photoinjector is its ability to produce flat beams – i.e., beams with large transverse-emittance ratios <cit.>. The latter type of beam could reach sub-μm transverse emittances along one of the transverse dimensions, thereby providing an ideal probe to quantify the emittance dilution caused by the cavity's auxiliary couplers.
§ ACKNOWLEDGMENTS
We are grateful to D. Broemmelsiek, S. Nagaitsev, A. Valishev and the rest of the IOTA/FAST group for their support. This work was partially funded by the US Department of Energy (DOE) under contract DE-SC0011831 with Northern Illinois University. Fermilab is operated by the Fermi Research Alliance, LLC for the DOE under contract DE-AC02-07CH11359.
Aune:2000gb
B. Aune, et al., “The superconducting TESLA cavities,"
Phys. Rev. ST Accel. Beams, 3, 092001, (2000).
ilc
N. Phinney, N. Toge and N. Walker (Eds), “ILC Reference Design Report Volume 3 - Accelerator," arXiv:0712.2361 [physics.acc-ph] (2007)
LCLS2
J. N. Galayda, “The New LCLS-II Project : Status and Challenges,"
in Proceedings of the International Linear Accelerator Conference (LINAC2014), JACoW, Geneva, Switzerland, 404 (2014).
Popovic:2005mj
M. Popovic and R. P. Johnson, “Muon acceleration in a superconducting proton Linac,”
Nucl. Phys. Proc. Suppl. 155, 305, (2006).
PIP2
S. Holmes, P. Derwent, V. Lebedev, S. Mishra, D. Mitchell,
V. P. Yakovlev, “PIP-II Status and Strategy,"
in Proceedings of the 2015 International Particle Accelerator Conference (IPAC15), JACoW, Richmond, VA, USA, 3982 (2015).
Kephart:SRF2015-FRBA03
R.D. Kephart, B.E. Chase, I.V. Gonin, A. Grassellino, S. Kazakov,
T.N. Khabiboulline, S. Nagaitsev, R.J. Pasquinelli, S. Posen, O.V. Pronitchev, A. Romanenko, V.P. Yakovlev,
“SRF Compact Accelerators for Industry & Society,"
in Proceeding of International Conference on RF Superconductivity (SRF2015), JACoW, Whistler, BC, Canada 1467 (2015).
Piot:2005id
P. Piot, M. Dohlus, K. Flöttmann, M. Marx, S.G. Wipf, “Steering and focusing effects in TESLA cavity due to high order mode and input couplers,"
in Proceedings of the 2005 Particle Accelerator Conference (PAC05), JACoW, Knoxville, TN, 4135 (2005).
dohlusEPAC08
M. Dohlus, I. Zagorodnov, E. Gjonaj, T. Weiland, “Coupler Kick for Very Short Bunches and its Compensation,"
in Proceedings of the 2008 European Particle Accelerator Conference (EPAC08), Genoa, Italy, 580 (2008).
luninIPAC2010
A. Saini, K.Ranjan, A. Latina, A. Lunin, S. Mishra, N. Solyak, V. Yakovlev, “Study of Coupler's Effects on ILC Like Lattice,"
in Proceedings of the 2010 International Particle Accelerator Conference (IPAC10), 4491, JACoW, Tokyo, Japan (2010).
vivoli
A. Vivoli, et al., “LCLS-II Injector Coupler Options Performance,"
in Proceedings of the International Linear Accelerator Conference (LINAC2014), JACoW, Geneva, Switzerland, 991 (2014).
hom
N. Baboi, M. Dohlus, C. Magne, A. Mosnier, O. Napoly, H.-W. Glock, “Investigation of a High-Q Dipole Mode at the TESLA Cavities,"
in Proceedings of 2000 European Particle Accelerator Conference (EPAC 2000), JACoW, Vienna, Austria, 1108 (2000).
FNPLNote
P. Piot and Y.-E. Sun, “Note on the transfer matrix measurement of a TESLA cavity," Fermilab internal report Beams Document 1521-v1 (unpublished, 2005).
Halavanau:2016wax
A. Halavanau, N. Eddy, D. Edstrom, A. Lunin, P. Piot, J. Ruan, J. Santucci, N. Solyak, “Preliminary Measurement of the Transfer Matrix of a TESLA-type Cavity at FAST,"
in Proceedings of the 2016 International Particle Accelerator Conference (IPAC16), JACoW, Busan, Korea, 1632 (2016).
Halavanau:2016ivl
A. Halavanau, N. Eddy, D. Edstrom, A. Lunin, P. Piot, J. Ruan, N. Solyak, “Measurement of the Transverse Beam Dynamics in a TESLA-type Superconducting Cavity,"
in Proceedings of the 2016 International Linear Accelerator Conference (LINAC16), JACoW, East Lansing, MI, USA, paper MOP106-18 (in press, 2016).
Chambers
E.E. Chambers, “Radial transformation matrix, standing wave accelerator," reports HEPL TN-68-17 and HEPL 570, available from SLAC archive (October 1968).
Serafini
J. Rosenzweig and L. Serafini, “Transverse particle motion in radiofrequency linear accelerators,"
Phys. Rev. E, 49, 1599 (1994).
Rosenzweig
S. C. Hartman and J. B. Rosenzweig, “Ponderomotive focusing in axisymmetric RF linacs,"
Phys. Rev. E, 47, 2031 (1993).
Hartman
S. C. Hartman, “The UCLA high-brightness RF photoinjector”,
Ph.D. Dissertation, University of California, Los Angeles, USA (1994).
Reiche:1997ek
S. Reiche, J. B. Rosenzweig, S. Anderson, P. Frigola, M. Hogan, A. Murokh, C. Pellegrini, L. Serafini, G. Travish, and
A. Tremaine, “Experimental confirmation of transverse focusing and adiabatic damping in a standing wave linear accelerator,"
Phys. Rev. E, 56, 3572 (1997).
luninIPAC2015
A. Lunin, N. Solyak, A. Sukhanov, V. Yakovlev, “Coupler RF Kick in the Input 1.3 GHz Accelerating Cavity of the LCLS-II Linac,"
in Proceedings of the 2015 International Particle Accelerator Conference (IPAC15), JACoW, Richmond, VA, USA, 571 (2015).
baneEPAC2008
K.L.F. Bane, C. Adolphsen, Z. Li, M. Dohlus, I. Zagorodnov, I. Gonin, A. Lunin, N. Solyak, V. Yakovlev, E. Gjonaj, T. Weiland,
“Wakefield and RF Kicks due to Coupler Asymmetry in TESLA-type Accelerating Cavities ,"
in Proceedings of the 2008 European Particle Accelerator Conference (EPAC08), JACoW, Genoa, Italy, 1571 (2008).
hfss
High frequency structure simulator. Software available from ANSYS.
ASTRAmanual
K. Flöttmann, Astra reference manual, available from DESY (<http://www.desy.de/~mpyflo/>), Hamburg, Germany (unpublished).
juntong
N. Juntong, R.M. Jones, “HOM and FP Coupler Design for the NLSF High Gradient SC Cavity,"
in Proceedings of the 2011 International Particle Accelerator Conference (IPAC11), JACoW, San Sebastián, Spain, 325 (2011).
helm
R. Helm, and R. Miller, in Linear Accelerators, eds. P.M. Lapostolle and A.L. Septier (North-Holland), 115 (1969).
SainiIPAC10
A. Lunin, I. Gonin, N. Solyak, V. Yakovlev, “Final Results on RF and Wake Kicks Caused by the Couplers for the ILC Cavity,"
in Proceedings of the 2010 International Particle Accelerator Conference (IPAC10), 3431, JACoW, Tokyo, Japan (2010).
Li:1993
Z. Li, J.J. Bisognano, B.C. Yunn, “Transport properties of the CEBAF cavity,"
in Proceedings of the 1993 Particle Accelerator Conference (PAC93), JACoW, Washington D.C, 179 (1993).
dowel
D. Dowell, “Cancellation of RF coupler-induced emittance due to astigmatism," preprint arXiv:1503.09142 [physics.acc-ph] also available as report SLAC-PUB-16896 (2015).
FAST
E. Harms, J. Leibfritz, S. Nagaitsev, P. Piot, J. Ruan, V. Shiltsev,
G. Stancari, A. Valishev, “The Advanced Superconducting Test Accelerator at Fermilab,"
ICFA Beam Dyn.Newslett., 64, 133 (2014).
FAST2
S. Antipov, et al., “IOTA (Integrable Optics Test Accelerator): Facility and Experimental Beam Physics Program,"
Journal of Instrumentation (JINST), 12 T03002 (2017).
McCrory:2013eta
E. McCrory, N. Eddy, F.G. Garcia, S. Hansen, T. Kiper, M. Sliczniak, “BPM Electronics Upgrade for the Fermilab H- Linac Based Upon Custom Downconverter Electronics,"
in Proceedings of the 2nd International Beam Instrumentation Conference (IBIC13), Oxford, UK, 396 (2013).
Plopez
D. P. Juarez-Lopez, “Beam position monitor and
energy analysis at the Fermilab Accelerator Science
and Technology facility," M.S. Thesis, University of Guanajuato, Mexico (2015).
piotFB
P. Piot, Y.-E. Sun, and K.-J. Kim, “Photoinjector generation of a flat electron beam with transverse emittance ratio of 100,"
Phys. Rev. ST Accel. Beams 9, 031001 (2006).
zhu
J. Zhu, P. Piot, D. Mihalcea, and C. R. Prokop, “Formation of Compressed Flat Electron Beams with High Transverse-Emittance Ratios,"
Phys. Rev. ST Accel. Beams 17, 084401 (2014).
| The 1.3-GHz superconducting radiofrequency (SRF) accelerating cavities were originally developed in the context of
the TESLA linear-collider project <cit.> and were included in the baseline design of the international linear collider (ILC) <cit.>
and in the design of various other operating or planned accelerator facilities. Projects based on such a cavity include electron- <cit.>, muon- <cit.>,
and proton-beam accelerators <cit.> supporting fundamental science
and compact high-power industrial electron accelerators <cit.>.
Such a cavity is a 9-cell standing-wave accelerating structure operating in the TM_010,π mode.
The transverse beam dynamics associated with such a cavity has been extensively explored over
the last decade, focusing essentially on numerical simulations of single-bunch emittance dilution due to the field asymmetries <cit.> and of multibunch effects due to trapped modes <cit.>. Most recently, experiments aimed at
characterizing the transverse beam dynamics in this type of SRF cavity were performed <cit.>.
In this paper we discuss the measurement and analysis of the transverse transfer matrix of a 9-cell 1.3-GHz SRF cavity.
In particular, we compare the results with the Chambers' analytical model <cit.>.
In brief, an analytical model of the transverse focusing in the accelerating cavity can be derived by considering the transverse motion
of the particle in a standing wave RF field with axial field E_z(z,t)=E_0∑_na_ncos(nkz)sin(ω t + ϕ), where E_0 is the peak field,
nk is the wave number associated with the n-th harmonic of amplitude a_n, ϕ is an arbitrary phase shift, and z
is the longitudinal coordinate along the cavity axis.
The ponderomotive-focusing force is obtained under the paraxial approximation as
F_r=-e(E_r-v B_ϕ)≈ e r ∂ E_z/∂ z where v≃ c is the particle velocity along the axial direction.
Ref. <cit.> shows that the force averaged over one RF period in first-order perturbation theory yields the focusing strength
K̅_r = - (eE_0)^2/[8(βγ m c^2)^2], for the case of a “pure” standing-wave resonator (where the spatial profile of the axial field is modeled as E_z(z) ∝cos(kz) inside the cavity) originally considered in Ref. <cit.>.
The equation of motion then takes form:
x”+(γ '/γ)x'+ K̅_r (γ '/γ)^2 x=0,
where x is the transverse coordinate, x'≡ dx/dz, and γ' ≡ e E_0/(m_0 c^2) ≡G̅_RF/(m_0 c^2)
is the normalized accelerating gradient, so that the average energy gain obeys dγ/dz = γ'cos(ϕ); γ is the Lorentz factor.
The solution of the Eq. <ref> through the cavity is of the form 𝐱_f = R 𝐱_i,
where 𝐱≡ (x,x')^T, here R is a 2×2 matrix,
and the subscripts i and f indicate upstream and downstream particle coordinates respectively.
According to Chambers' model, the elements of R are given by <cit.>:
R_11 = cosα - √(2)cos(ϕ)sinα,
R_12 = √(8)(γ_i/γ')cos(ϕ)sinα,
R_21 = -(γ'/γ_f)[cos(ϕ)/√(2)+1/(√(8)cos(ϕ))]sinα,
R_22 = (γ_i/γ_f)[cosα+√(2)cos(ϕ)sinα],
where α≡ [1/(√(8)cos(ϕ))] ln(γ_f/γ_i) and
γ_f≡γ_i + γ' L cosϕ is the final Lorentz factor (with L the cavity length).
The determinant associated with each 2×2 diagonal block of the matrix is |R|_2×2=γ_i/γ_f.
The latter equation also holds for the vertical degree of freedom (y,y') owing to the assumed cylindrical symmetry.
Under such an assumption the equations for the vertical degree of freedom are obtained
via the substitutions x↔ y, 1↔ 3 and 2↔ 4. The total
transverse transfer matrix determinant is then |R|_4×4=(γ_i/γ_f)^2.
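As a numerical sanity check of these expressions, the short sketch below implements the matrix elements with illustrative, hypothetical beam parameters (γ' is the normalized gradient in m^-1) and verifies the determinant relation |R|_2×2=γ_i/γ_f.

```python
import numpy as np

def chambers_matrix(gamma_i, gamma_p, L, phi):
    """2x2 transverse transfer matrix of a standing-wave cavity in the
    Chambers' model; gamma_p = e*E0/(m0*c^2) in 1/m, phi in radians."""
    gamma_f = gamma_i + gamma_p * L * np.cos(phi)
    alpha = np.log(gamma_f / gamma_i) / (np.sqrt(8.0) * np.cos(phi))
    c, s = np.cos(alpha), np.sin(alpha)
    r11 = c - np.sqrt(2.0) * np.cos(phi) * s
    r12 = np.sqrt(8.0) * (gamma_i / gamma_p) * np.cos(phi) * s
    r21 = -(gamma_p / gamma_f) * (np.cos(phi) / np.sqrt(2.0)
                                  + 1.0 / (np.sqrt(8.0) * np.cos(phi))) * s
    r22 = (gamma_i / gamma_f) * (c + np.sqrt(2.0) * np.cos(phi) * s)
    return np.array([[r11, r12], [r21, r22]]), gamma_f

# Illustrative values: ~20-MeV injection, 14-MV/m gradient, 1-m active length.
R, gf = chambers_matrix(gamma_i=40.0, gamma_p=14.0 / 0.511, L=1.0, phi=0.0)
assert np.isclose(np.linalg.det(R), 40.0 / gf)  # |R|_2x2 = gamma_i / gamma_f
```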
The axially-symmetric electromagnetic field assumed while deriving Eq. <ref> is often violated, e.g., due to asymmetries introduced by the input-power (or forward-power) and higher-order-mode (HOM) couplers. The input-power coupler couples the RF power into the cavity, while the HOM couplers damp the harmful trapped fields potentially excited as
long trains of bunches are accelerated in the SRF cavities. In addition to the introduced field asymmetry, the couplers can also impact the beam via geometrical wakefields <cit.>.
The measurement of the transverse matrix of a standing-wave accelerating structure (a plane-wave transformer, or PWT)
was reported in Ref. <cit.> and benchmarked against an “augmented" Chambers' model detailed in <cit.>. This refined model accounts for the presence of higher-harmonic spatial content in the axial field profile E_z(r=0,z). The present paper extends such a measurement to the case of a 1.3-GHz SRF accelerating
cavity and also investigates, via numerical simulation, the impact of the auxiliary couplers on the transfer matrix of the cavity. These simulations and measurements generally indicate that higher spatial harmonics do not play a significant role for the TESLA cavity. Additionally, we note that the presented measurements are performed in a regime where the energy gain through the cavity is comparable to the beam injection energy [γ_i ∼γ' L].
In such a regime, the impact of field asymmetries is expected to be important.
http://arxiv.org/abs/1701.07978v1 | 20170127090334 | Spatial environment of polar-ring galaxies from the SDSS | [
"S. S. Savchenko",
"V. P. Reshetnikov"
] | astro-ph.GA | [
"astro-ph.GA"
] |
St.Petersburg State University, Universitetskii pr. 28, St.Petersburg,
198504 Russia
[email protected], [email protected]
Based on SDSS data, we have considered the spatial environment of
galaxies with extended polar rings. We used two approaches: estimating the
projected distance to the nearest companion and counting the number of companions
as a function of the distance to the galaxy. Both approaches have
shown that the spatial environment of polar-ring galaxies on scales of hundreds
of kiloparsecs is, on average, less dense than that of galaxies without polar
structures. Apparently, one of the main causes of this effect is that the polar
structures in a denser environment are destroyed more often during encounters
and mergers with other galaxies.
Spatial environment of polar-ring galaxies from the SDSS
S.S. Savchenko1, V.P. Reshetnikov1
Jan 15 2017
========================================================
§ INTRODUCTION
Polar-ring galaxies (PRGs) are very rare and interesting extragalactic objects.
Two large-scale subsystems coexist in their structure: a central galaxy
and a ring or disk oriented at a large angle to its major axis (the catalog by
Whitmore et al. 1990; the SPRC catalog by Moiseev et al. 2011). As a
rule, the host galaxies and the polar structures differ noticeably in their
characteristics. The central galaxies in most PRGs are gas-poor early-type (E/S0)
galaxies. By contrast, the polar structures are usually gas-rich, have blue colors,
exhibit star formation, and generally resemble the disks of spiral galaxies (see,
e.g., Combes 2006; Reshetnikov & Combes 2015; Moiseev et al. 2015; and references
therein).
Some “secondary” event in the PRG history is usually invoked to
explain the formation of kinematically and morphologically decoupled structures.
Various secondary events are considered: the capture of matter from an approached
galaxy (Reshetnikov & Sotnikova 1997; Bournaud & Combes 2003), the merging of
galaxies (Bekki 1998; Bournaud & Combes 2003), and the accretion of matter
from intergalactic space (Maccio et al. 2006; Brook et al. 2008). Observations
show that several PRG formation mechanisms can apparently be realized, but the
relative contribution of different mechanisms remains unclear.
One test of the formation models of PRGs is a statistical study of their spatial
environment. (For example, if the polar rings are formed mainly during
close encounters of galaxies, then one might expect the PRGs to have an excess
of close neighbors.) Unfortunately, such studies are very rare so far. Brocca
et al. (1997) considered objects from the catalog by Whitmore et al. (1990) and
found the spatial environment of PRGs to be similar to that of normal galaxies.
From this analysis the authors concluded that if the polar structures are formed
during the interaction of galaxies, then these processes mainly occurred very
long (billions of years) ago and by now the signatures of interactions and mergers
in the PRG environment have been washed out. Finkelman et al. (2012)
studied the membership of PRGs selected from the SPRC in known groups of galaxies
and concluded that, on average, the PRGs are located in a less dense
spatial environment than are the ordinary early-type galaxies.
The goal of this note is to study the spatial environment of various groups of
galaxies from the SPRC. The SPRC catalog was compiled on the basis of the
Sloan Digital Sky Survey (SDSS), and it is much more homogeneous than the catalog by
Whitmore et al. (1990), which allows the nearest neighborhood of PRGs to be
investigated quite comprehensively.
All numerical values in the paper are given for the cosmological model with the Hubble
constant of 70 km s^-1 Mpc^-1 and Ω_m=0.3, Ω_Λ=0.7.
§ ANALYSIS OF THE SPATIAL ENVIRONMENT
§.§ Samples of galaxies
The SPRC catalog (Moiseev et al. 2011) contains 275 galaxies divided into four groups.
The first group includes 70 objects that are the best PRG candidates. They are
morphologically similar to the classical PRGs from the catalog by Whitmore et al.
(1990). The preceding experience of studying such galaxies has shown that almost
all objects with such a morphology are PRGs and contain two large-scale
kinematically decoupled subsystems (for examples see fig. 1 in Moiseev et al. (2015)).
Below this group will be referred to as the B (best) sample.
The second group of SPRC objects includes 115 galaxies that are good PRG candidates.
We will designate this sample as G (good).
The third and fourth parts of the SPRC catalog contain galaxies that may be related
to PRGs (the R (related) sample, 53 galaxies) and galaxies where the presumed polar
ring is seen nearly face-on (the P (possible face-on rings) sample, 37 galaxies).
A detailed description of all groups of galaxies is given in the SPRC.
§.§ Methods for studying the environment of galaxies
We applied two different approaches to study the environment of PRGs from the SDSS:
estimating the distance to the nearest galaxy (companion) and counting the number
of companions as a function of the distance to the galaxy.
We used the following criteria to classify a galaxy as a companion.
(1) The apparent r magnitude of the companion should not be fainter than the
apparent magnitude of the main galaxy increased by 2^m: m_comp≤ m_main + 2^m.
(2) The redshift difference between the galaxy being studied and its companion
should not exceed 0.001 (300 km/s) or 0.0015 (450 km/s).
(3) The distance between the galactic centers in projection onto the plane of the
sky should be less than a preset value dependent on the approach used (see below).
We searched for companion galaxies using the SQL queries to the SDSS
server[https://skyserver.sdss.org/dr12/en/tools/search/x_results.aspx ]
in which the above constraints were specified.
The r-band data on galaxies were taken from the SDSS DR 12 (Alam et al. 2015).
To avoid the errors due to the SDSS incompleteness, we took
only objects brighter than r = 16^m from the SPRC. Including less bright galaxies
in the sample does not allow us to search for faint companions by the methods
described below because of the SDSS limitation on the completeness of spectroscopic
data for galaxies (Strauss et al. 2002). As a result, 22, 20, 29,
and 12 galaxies were included in the B, G, R, and P samples, respectively.
I. Distance to the nearest neighbor.
In this approach the distance between the galaxy being studied and its nearest
companion (the nearest galaxy satisfying the above magnitude and redshift criteria)
was used as an estimate of the spatial density of galaxies. As the distance we used
the distance projected onto the plane of the sky expressed either in Petrosian radii
(R_petro) of the central galaxy or in kiloparsecs. In the former and latter
cases, we searched for companions within 100R_petro and 500 kpc, respectively.
If no neighbors were found within these regions, then for such galaxies we took
100R_petro or 500 kpc, respectively, as an estimate of the distance to the companion
(i.e., we used the lower limit).
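As an illustration of this test, a minimal Python/astropy sketch is given below; it assumes hypothetical arrays of coordinates, redshifts, and r magnitudes already retrieved from the SDSS database (the actual selection in the paper was performed via SQL queries) and works in projected kiloparsecs.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in the paper

def nearest_companion_kpc(target, cand, dz_max=0.001, dm_max=2.0,
                          d_max_kpc=500.0):
    """Projected distance (kpc) to the nearest companion satisfying
    criteria (1)-(3); returns d_max_kpc (a lower limit) if none is found."""
    sel = (np.abs(cand["z"] - target["z"]) <= dz_max) & \
          (cand["rmag"] <= target["rmag"] + dm_max)
    if not sel.any():
        return d_max_kpc
    c0 = SkyCoord(target["ra"] * u.deg, target["dec"] * u.deg)
    cc = SkyCoord(cand["ra"][sel] * u.deg, cand["dec"][sel] * u.deg)
    theta = c0.separation(cc).to(u.arcmin)          # angular separations
    d_kpc = (theta * cosmo.kpc_proper_per_arcmin(target["z"])).value
    d_kpc = d_kpc[d_kpc > 0]                        # drop the target itself
    return min(d_kpc.min(), d_max_kpc) if d_kpc.size else d_max_kpc
```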
II. Dependence of the number of companions on distance.
In this test for each galaxy from the sample we determined the number of companions as
a function of the distance to the central galaxy. To this end, for each galaxy we
found the number of companions within 2, 4, 8, 16, 32, and 64 Petrosian
radii and, as a separate test, within 10, 20, 40, 80, 160, and 320 kpc.
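Continuing the same hypothetical setup, counting companions as a function of distance reduces to cumulative counts within the chosen radii, e.g.:

```python
def companion_counts(target, cand, radii_kpc=(10, 20, 40, 80, 160, 320),
                     dz_max=0.001, dm_max=2.0):
    """Cumulative number of companions within each projected radius (kpc)."""
    sel = (np.abs(cand["z"] - target["z"]) <= dz_max) & \
          (cand["rmag"] <= target["rmag"] + dm_max)
    c0 = SkyCoord(target["ra"] * u.deg, target["dec"] * u.deg)
    cc = SkyCoord(cand["ra"][sel] * u.deg, cand["dec"][sel] * u.deg)
    d_kpc = (c0.separation(cc).to(u.arcmin)
             * cosmo.kpc_proper_per_arcmin(target["z"])).value
    d_kpc = d_kpc[d_kpc > 0]
    return [int((d_kpc <= r).sum()) for r in radii_kpc]
```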
III. Comparison sample.
To compare the results of our study of the spatial environment for PRGs
and ordinary galaxies, we produced a comparison sample. The following principle
was used to produce the comparison sample. For each galaxy of the
SPRC catalog we selected galaxies from the entire SDSS with close apparent radii,
apparent magnitudes, colors, and redshifts:
Δ R_petro < 15%, Δ m < 0.15^m, Δ (g-r) < 0.05^m,
Δ z < 10 %.
The limitation in color is important for selecting galaxies of
similar types, because the spatial environment depends on the galaxy type
(see, e.g., Skibba et al. 2009). We added the limitation in redshift so that
the luminosity distributions of the SPRC galaxies and the comparison galaxies would also be
similar. On average, about 200 similar galaxies were found for each SPRC
galaxy (although this number could be considerably smaller for some galaxies).
Thus, we obtained an expanded comparison sample that is larger in volume
than the SPRC sample by hundreds of times.
We then drew the working comparison samples from the expanded comparison sample
by randomly selecting only one similar galaxy from the entire group
for each SPRC galaxy. As a result, not only is the volume of the comparison sample
equal to the volume of the SPRC catalog, but the magnitude, redshift,
size, luminosity, and color distributions also turn out to be close. Multiple repetition
of this step allows us to obtain a large number of comparison samples, which
makes it possible to perform a series of tests on them, to average the results,
and to estimate the errors.
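The repeated random draws used to build the working comparison samples can be sketched as follows, where `analogs` is a hypothetical mapping from each SPRC galaxy to the list of its SDSS look-alikes:

```python
rng = np.random.default_rng()

def draw_comparison_sample(analogs):
    """Pick one randomly chosen look-alike per SPRC galaxy."""
    return [rng.choice(similar) for similar in analogs.values()]

# Repeat the draw, evaluate the statistic on each realization, and use the
# mean and rms over realizations as the reported value and its error bar.
realizations = [draw_comparison_sample(analogs) for _ in range(50)]
```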
§ RESULTS AND DISCUSSION
§.§ Minimum distance to the nearest neighbor
Tables 1 and 2 present the results of our test to determine the distance to the
nearest galaxy expressed, respectively, in Petrosian radii (Table 1) and kiloparsecs
(Table 2) for two limitations in redshift. The second and fourth columns of the
tables give the mean values of this parameter for each galaxy subtype from
the SPRC (B, G, R, and P) and for the comparison sample (the cmp row). In addition,
we combined the B and G types of the most reliable candidates into the
BG type to increase the number of galaxies in this group, thereby increasing the
statistical weight of the result. The third and fifth columns of Tables 1 and 2
give the root-mean-square (rms) deviations σ; for the subgroups of the SPRC catalog
this is the rms deviation from the mean for galaxies in the group, while for
the comparison sample this is the rms deviation from the mean for 50 realizations.
The nearly zero scatter in the control group is the result of averaging over
many realizations of working comparison samples: the term N^-1/2
in the formula for the rms deviation led to a small resulting scatter.
It can be seen from Tables 1 and 2 that the mean distance to the nearest galaxy
for the best PRG candidates is larger than that for the less reliable candidates
and for the galaxies of the comparison sample, with this dependence being more
pronounced when measuring the distances in Petrosian radii, i.e., in
the relative scale that takes into account the galaxy sizes. As expected, using
the weaker limitation on the redshift difference (Δ z ≤ 0.0015)
reduces the distance to the nearest galaxy but does not change the
above trend. Thus, according to this test, the polar-ring galaxies are, on average,
in a less dense spatial environment than are similar galaxies without polar
structures.
§.§ Dependence of the number of companions on distance
The dependence of the mean number of companions on the distance to the galaxy
for various subgroups of the SPRC catalog and for the comparison
sample is presented in Fig. 1 (for the distances expressed in Petrosian radii)
and Fig. 2 (in kiloparsecs). The top and bottom panels of these figures show the
results obtained for the limitations on the redshift difference
Δ z ≤ 0.001 and Δ z ≤ 0.0015, respectively.
It can be seen from the figures that the dependences for all galaxy types do
not differ statistically at relatively small distances (tens of R_petro,
tens of kiloparsecs). This may be due to the small volume of the PRG samples and,
accordingly, the small number of PRG companions.
As the distance increases, the dependences for the most probable PRG candidates
(the B and G subgroups) begin to deviate from the dependence for
the comparison sample toward a smaller number of companions. This effect is most
prominent in Fig. 2, where the distances are expressed in kiloparsecs.
Both figures also clearly show a trend where the dependences for the objects of
the B and G subgroups at large distances lie below the dependences for other
subgroups and for the comparison sample.
As with the previous test, the results of our analysis suggest that the spatial
environment of PRGs turns out to be less dense. The mean number of
companions expectedly increases for the tests with weaker redshift constraints,
but the overall character of the dependences is retained or even enhanced.
§ CONCLUSIONS
Using two simple and clear approaches, we investigated the environment of polar-ring
galaxies from the SDSS and obtained the following main results.
(1) The mean projected distance to the nearest companion galaxy for PRG candidates
is larger than that for galaxies without polar structures. For example,
this difference for the B subgroup reaches a factor of 1.6 for the distance
expressed in Petrosian radii.
(2) The mean number of companion galaxies within several hundred kiloparsecs for
PRG candidates is smaller than that for ordinary galaxies.
These results are consistent with the conclusion reached by Finkelman et al. (2012),
who studied the occurrence of PRGs in galaxy groups and found that
the PRGs are predominantly located in a less dense environment than are the normal
galaxies. On the other hand, Brocca et al. (1997) previously found no
differences in the environment of PRGs and galaxies without polar structures.
This may be because the catalog by Whitmore et al. (1990) used by them,
which includes a large number of objects that are not PRGs, is less homogeneous.
Of course, the less dense environment of PRGs needs to be confirmed further based
on a much larger (and so far lacking) observational material. Nevertheless,
the agreement between the results obtained by different methods in our paper and
in Finkelman et al. (2012) allows this conclusion to be considered
quite plausible.
How can the observed spatial environment of PRGs be explained? On the one hand,
the polar structures can undoubtedly be formed during close encounters of galaxies.
Several such objects, in which the accretion of matter from one galaxy onto
another and the formation of a circumpolar structure are observed directly, are
well known (see, e.g., Reshetnikov et al. 1996; Cox et al. 2001; Keel 2004).
However, such structures are apparently relatively short-lived ones, because they
will be destroyed during the interaction with neighboring galaxies and
during the capture of companions. For example, the long-term evolution of the
polar structure formed by external accretion from the intergalactic medium
was traced in the cosmological numerical simulations by Maccio et al. (2006).
This structure was destroyed by the dynamical effect of the companion merging
with the main galaxy approximately one billion years after its formation. When
the authors removed all companions from the environment of the galaxy under
consideration, the polar ring existed in their simulations much longer. It is
this effect that possibly leads to the observed reduced spatial density of galaxies in
the PRG environment; the polar structures have a better chance to “survive” in a less
dense environment.
On the other hand, the polar rings can be formed through the so-called cool
(T ∼ 10^4 K) accretion of gas from filaments in the intergalactic medium (Maccio
et al. 2006; Connors et al. 2006; Brook et al. 2008). This scenario and the
formation of a massive polar ring require prolonged (billions of years) and
“coherent” accretion of matter (Snaith et al. 2012); the latter is more probable
in regions with a low density of galaxies. (The influence of nearby galaxies can
reorient or even destroy the external accretion flow.) In addition, as has been
noted above, the formed polar structure will be able to exist longer in a less dense
environment.
§ ACKNOWLEDGMENTS
This study is based on publicly available SDSS data. SDSS is managed by the Astrophysical Research
Consortium for the Participating Institutions of the SDSS Collaboration including
the Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation
Group, Harvard-Smithsonian Center for Astrophysics, Instituto
de Astrofisica de Canarias, the Johns Hopkins University, Kavli Institute for the
Physics and Mathematics of the Universe (IPMU)/University of Tokyo,
Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP),
Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur
Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik
(MPE), National Astronomical Observatories of China, New Mexico State University,
New York University, University of Notre Dame, Observatorio Nacional/MCTI, the Ohio
State University, Pennsylvania State University, Shanghai Astronomical
Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de
Mexico, University of Arizona, University of Colorado Boulder,
University of Oxford, University of Portsmouth, University of Utah, University of
Virginia, University of Washington, University of Wisconsin, Vanderbilt
University, and Yale University.
§ REFERENCES
1. S. Alam, F.D. Albareti, C.A. Prieto, F. Anders,
S.F. Anderson, T. Anderton, B.H. Andrews, E. Armengaud,
et al., Astrophys. J. Suppl. Ser. 219, 12A (2015).
2. K. Bekki, Astrophys. J. 499, 635 (1998).
3. F. Bournaud and F. Combes, Astron. Astrophys. 401,
817 (2003).
4. Ch. Brocca, D. Bettoni, and G. Galletta, Astron.
Astrophys. 326, 907 (1997).
5. Ch.B. Brook, F. Governato, Th. Quinn, J. Wadsley,
A.M. Brooks, B. Willman, A. Stilp, and P. Jonsson,
Astrophys. J. 689, 678 (2008).
6. F. Combes, EAS Publ. Ser. 20, 97 (2006).
7. T.W. Connors, D. Kawata, J. Bailin, J. Tumlinson,
and B.K. Gibson, Astrophys. J. 646, L53 (2006).
8. A.L. Cox, L.S. Sparke, A.M. Watson, and
G. van Moorsel, Astron. J. 121, 692 (2001).
9. I. Finkelman, J.G. Funes, and N. Brosch, Mon. Not.
R. Astron. Soc. 422, 2386 (2012).
10. W.C. Keel, Astron. J. 127, 1325 (2004).
11. A.V. Maccio, B. Moore, and J. Stadel, Astrophys. J.
636, L25 (2006).
12. A.V. Moiseev, K.I. Smirnova, A.A. Smirnova, and
V.P. Reshetnikov, Mon. Not. R. Astron. Soc. 418, 244
(2011) (SPRC).
13. A. Moiseev, S. Khoperskov, A. Khoperskov,
K. Smirnova, A. Smirnova, A. Saburova, and
V. Reshetnikov, Baltic Astron. 24, 76 (2015).
14. V.P. Reshetnikov, V.A. Hagen-Thorn, and
V.A. Yakovleva, Astron. Astrophys. 314, 729
(1996).
15. V. Reshetnikov and F. Combes, Mon. Not. R. Astron.
Soc. 447, 2287 (2015).
16. V. Reshetnikov and N. Sotnikova, Astron. Astrophys.
325, 933 (1997).
17. R.A. Skibba, S.P. Bamford, R.C. Nichol, C.J. Lintott,
D. Andreescu, E.M. Edmondson, P. Murray,
M.J. Raddick, et al., Mon. Not. R. Astron. Soc. 399,
966 (2009).
18. O.N. Snaith, B.K. Gibson, C.B. Brook, A. Knebe,
R.J. Thacker, T.R. Quinn, F. Governato, and
P.B. Tissera, Mon. Not. R. Astron. Soc. 425, 1967
(2012).
19. M.A. Strauss, D.H. Weinberg, and R.H. Lupton,
Astron. J. 124, 1810 (2002).
20. B.C. Whitmore, R.A. Lucas, D.B. McElroy,
T.Y. Steiman-Cameron, P.D. Sackett, and
R.P. Olling, Astron. J. 100, 1489 (1990).
http://arxiv.org/abs/1701.08164v1 | 20170127190001 | The history of chemical enrichment in the intracluster medium from cosmological simulations | [
"V. Biffi",
"S. Planelles",
"S. Borgani",
"D. Fabjan",
"E. Rasia",
"G. Murante",
"L. Tornatore",
"K. Dolag",
"G. L. Granato",
"M. Gaspari",
"A. M. Beck"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.GA"
] |
http://arxiv.org/abs/1701.07812v3 | 20170126184730 | Phonon-assisted two-photon interference from remote quantum emitters | [
"Marcus Reindl",
"Klaus D. Joens",
"Daniel Huber",
"Christian Schimpf",
"Yongheng Huo",
"Val Zwiller",
"Armando Rastelli",
"Rinaldo Trotta"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
§ INTRODUCTION
One of the very first requirements to observe ideal on-demand single-photon emission is the population inversion of the quantum emitter's excited state. Such a preparation of the quantum state is usually achieved via coherent excitation using resonant laser pulses, leading to an inverted two-level system performing Rabi oscillations <cit.>. While single-pulse resonant excitation of a quantum dot (QD) has been used to achieve high state-preparation fidelities and remarkable single-photon properties, this scheme cannot be used to efficiently prepare the biexciton state, the key step toward the generation of polarization-entangled photon pairs with QDs. This task can instead be accomplished using two-photon excitation (TPE) <cit.> techniques, which have recently led to the generation of on-demand entangled photon pairs <cit.>. This coherent excitation scheme, however, has one important drawback. Small fluctuations in the laser pulse area or energy, as well as fluctuations in the QD environment, result in a strong variation of the excited-state population probability that, in turn, affects the efficiency of photon generation. Envisioned quantum-communication applications instead demand more robust excitation schemes that are immune to these sources of "environmental decoherence" and ensure on-demand generation of single and entangled photon pairs.
In principle, it is possible to overcome these problems by taking advantage of the solid-state nature of QDs, and in particular of their coupling to acoustic phonons. Although the phonon-assisted excitation scheme is inherently incoherent, it has been proposed <cit.> and recently demonstrated <cit.> that population inversion of X and XX states coupled to a quasicontinuum of vibrational modes is indeed possible.
Nonetheless, the capability of this technique to generate highly indistinguishable single and entangled photons has not been explored so far. In this letter, we show for the first time that phonon-assisted two-photon excitation of QDs allows for the generation of highly indistinguishable entangled photon-pairs. In comparison with standard excitation schemes, we demonstrate that this method is more resilient against environmental decoherence limiting the XX or X preparation fidelity in conventional TPE schemes. Most importantly, we exploit its addressability with a wide-range of laser detunings to prepare on demand two remote and dissimilar QDs and to let the generated photons interfere at a beam splitter, a key experiment for the realization of an all-optical quantum repeater <cit.>.
§ RESULTS AND DISCUSSION
We focus our study on highly symmetric GaAs/AlGaAs QDs obtained via the droplet-etching method <cit.> (see supporting note 1). The photon pairs emitted from this specific type of QDs have recently shown an unprecedentedly high degree of entanglement as well as indistinguishability <cit.>.
A typical spectrum of our highly symmetric GaAs/AlGaAs QDs under phonon-assisted excitation is shown in Fig.<ref>(a). To address the vibrational modes coupled to the XX state, the excitation laser is blue detuned by Δ from the two-photon resonant case (Δ=0 meV) towards the X transition (Fig.<ref>(a)). The best excitation parameters for optimum state preparation are inherently locked to the material's deformation potential and the QD structural details, which together determine the coupling of the exciton complexes to the acoustic phonons of the solid-state environment <cit.>, thus leading to the excitonic phonon sidebands <cit.>. The optimal detuning energy for the investigated type of QD system is around Δ=0.4 meV for a pulse length of τ_p=10 ps (see supporting note 2). It is important to point out that the laser energy can be swept across a range of 0.2 meV without perturbing the state-preparation fidelity (5% population change, see supporting note 2), while under conventional TPE the population varies by more than 80% over the same energy range, assuming π-pulse excitation (Fig.<ref>(b)). The stability in the preparation fidelity offered by the phonon-assisted scheme is particularly relevant for this work and will later be used to address remote QDs with the same excitation laser. Before examining this point in more detail, we first discuss the robust nature of the phonon-assisted scheme in comparison with the standard TPE.
We first study the power dependence of the standard resonant TPE in the same excitation conditions and on the same QD (Fig.<ref>(c) red curve). Interestingly, the TPE manifests itself as oscillations of the state occupation probability locked to 1/2. While a possible explanation of this effect is the presence of a chirped laser pulse in conjunction with phonon-induced damping <cit.>, we show that it is instead connected to the details of the QD environment: The power dependence changes considerably as we additionally illuminate the QD with a weak white-light source (Fig.<ref>(c) blue curve) revealing traditional phonon-damped Rabi oscillations <cit.> with state population as high as 88±2 %. We attribute these modifications (which are particularly pronounced at the π-pulse) to saturation of crystal defects located in the vicinity of the QD. In the absence of the white light, these defects release/trap charge carriers, thus giving rise to a fluctuating electric field <cit.>. We hypothesize that the white light not only stabilizes the electric field experienced by the QD <cit.> (see below) but also suppresses/saturates recombination channels (probably charged XX states) which hamper the radiative recombination of the XX into the X state. Obviously, for high values of the pulse area, the effect becomes negligible as the carrier-phonon interaction dominates.
While the effect of the white light on the Rabi oscillations has not been reported so far, the addition of off-resonant lasers is common practice in experiments performed with QDs driven resonantly <cit.>, even if the exact origin of its effect is not yet fully understood. We also point out that the effect of the white light on the QD driven with a π-pulse differs from QD to QD, with an increase in the population probability ranging from 10% to 50% (see Fig.<ref>(c)). Moreover, the intensity of the white light needed to achieve the optimal conditions is also QD-specific. Thus, the TPE schemes are not ideal for applications in complex networks and experiments with multiple sources. In stark contrast, no remarkable effect of the white light can be observed under phonon-assisted resonant excitation (5% change in preparation fidelity), probably due to the large laser power needed, which also stabilizes the environment. Another important advantage of the phonon-assisted scheme is that it is inherently immune to fluctuations of the laser pulse area (see Fig.<ref>(c)) due to its incoherent nature. More specifically, when the state-preparation fidelity is maximum, a 10% fluctuation of the pulse area leads to a negligible (1%) change in the state population. In the standard TPE, the same fluctuation of pulse area gives rise to at least a 7% variation in the state fidelity. Finally, we emphasize that the phonon-assisted scheme allows the excited state to be prepared with very high fidelity, as high as 80±2 % for the highest laser pulse area available.
After demonstrating the robust nature of the phonon-assisted excitation scheme, we now investigate the quality of the generated photons in terms of entanglement fidelity and photon indistinguishability. We start by measuring the fidelity to the maximally entangled Bell state (see supporting notes 1 and 4) using a QD with small FSS (1.3±0.5μeV). The polarization-resolved XX-X cross-correlation measurements used to estimate the fidelity are shown in Fig.<ref>(a) under phonon-assisted excitation. These data yield a fidelity of f = 90±2 %, which is identical (within the experimental error) to the values obtained when the QD is driven under strict TPE (with and without white light, Fig.<ref>(b)). Therefore, the three different excitation schemes give rise to the same level of entanglement of the emitted photons. This is an expected result, as the fidelity is determined by three main contributions: (i) the relative value of the FSS with respect to the natural linewidth <cit.>, (ii) recapture processes <cit.> increasing the multiphoton emission, and (iii) the hyperfine interaction <cit.>.
Since the lifetime (i) as well as the single photon purity (ii) and the hyperfine interaction (iii) are not affected by the excitation scheme (see supporting note 3), the fidelity to the Bell state is predicted to remain constant, as indeed measured experimentally (Fig.<ref>(b)).
The different excitation methods are instead expected to have a pronounced role in the indistinguishability of consecutive photons emitted by the same QD, as measured in a Hong-Ou-Mandel-type experiment <cit.> on the XX and X photons. The time delay between consecutive photons was set to 2 ns via a Mach-Zehnder interferometer in the excitation path. The observation of the typical two-photon interference quintuplet is presented in Fig.<ref>, together with the visibility values V_X and V_XX, which are calculated taking into account the imperfections of the interference beam splitter (see supporting notes 1 and 5).
If we first take a look at the standard TPE (Fig.<ref>(a)), we observe the usual behavior of XX and X photons, with visibilities around 60%. The stabilization of the QD environment through illumination with the white-light source (Fig.<ref>(b)), however, leads to an evident (slight) increase of the X (XX) visibility. This is reasonable, as the X is more sensitive to spectral diffusion mediated by temporarily charged defects <cit.> than the screened potential of the fully occupied XX state. The weaker visibility of the XX transition, on the other hand, can be related to the XX probing an extraordinary noise environment <cit.> and/or suffering from an initially higher pure-dephasing rate <cit.>. Most importantly, under phonon-assisted excitation of the two-level system (Fig.<ref>(c)) a remarkably high level of indistinguishability (comparable to the TPE under white-light illumination) can be observed. This demonstrates that the time jitter introduced by phonon relaxation is negligible in our measured values of photon indistinguishability. To summarize, the phonon-assisted two-photon excitation scheme not only leads to the generation of highly indistinguishable entangled photon pairs but is also more robust than the standard two-photon excitation schemes. Yet, this scheme has an additional elegant advantage: it allows performing two-photon interference between remote QDs driven by the same pulsed laser of locked frequency. This is a direct consequence of the wide range of laser detunings that allows the maximum population inversion to be achieved, and it is in contrast to the traditional two-photon excitation schemes, which instead require a precise control of the laser energy for each individual QD featuring dissimilar XX binding energies. The phonon-assisted two-photon excitation is instead a universal clocked excitation for arbitrarily large numbers of QDs, a scalable approach for quantum optics.
The excitation of the remote emitters is timed so that the individual single photons from the two QDs, depicted as ice cubes in Fig.<ref>(a), overlap on a beam splitter performing two-photon interference. For this experiment, we fabricate a second sample prepared on top of a piezoelectric actuator to provide tunability of the QD emission lines <cit.> (see methods) and to ensure frequency matching of the photons impinging at the beam splitter. Two X transitions with almost identical lifetimes and high single photon purity from two remote QDs (see supporting note 6) are tuned to the exact same frequency by applying a voltage across the piezoelectric actuator mounted below QD A (Fig.<ref>(b)). In this condition, the difference in the biexciton binding energy between the two QDs is around 0.1 meV. It implies that under standard TPE, the optimal state inversion for the two QDs cannot be achieved with the same laser (see the discussion above). In contrast, under phonon-assisted TPE we can prepare the two QDs with the highest probability by simply finding the optimum laser detuning for both, which in this special case turns out to be Δ=0.36 meV (Fig.<ref>(b)). The resulting correlation measurements are depicted in Fig.<ref>(c) for the cross- and co-polarized configurations.
We would like to point out that in contrast to the two-photon interference of consecutive photons from a single source - where the cross-polarized configuration always yields a value of g_⊥^(2)(0)=0.5 - this condition is not necessarily realized when combining remote single photon sources. In particular, long time-scale blinking reduces the value of g_⊥^(2)(0) even when the average intensities of the two emitters are kept the same <cit.>. Thus, it is crucial to first determine the cross-polarized correlation and to evaluate the real remote two-photon interference visibility V_remote as follows:
V_remote = [g_⊥^(2)(0) - g_∥^(2)(0)] / g_⊥^(2)(0)
The optimized value for the overlap of the individual photon energies is then found by sweeping the X transition of QD A in voltage steps that modify the energy by a fraction of the linewidth, as demonstrated in Fig.<ref>(d). By doing so we report a remote interference visibility as high as V_remote=51±5 %, one of the highest values ever observed for QDs without the need of any temporal/spectral selection. So far only coherently scattered <cit.> or Raman photons <cit.> from remote QDs have achieved higher two-photon interference visibilities. However, these excitation schemes cannot be used to generate pairs of photons and do not allow for on-demand state preparation, both important prerequisites for quantum relays based on entangled photons from QDs <cit.>. The visibility is in good agreement with the theoretical limits obtained from Michelson interferometry (see supporting note 6). Currently we are only limited by the non-Fourier-limited photon emission of our QDs. A possible way to overcome this problem is to use devices that enable the application of electric fields <cit.> and/or photonic cavities to shorten the lifetime of the transitions via the Purcell effect.
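As a worked illustration (with hypothetical correlation values chosen only to give a visibility near the quoted 51%), the evaluation reads:

```python
def remote_visibility(g2_cross, g2_co):
    """V_remote = [g2_perp(0) - g2_par(0)] / g2_perp(0)."""
    return (g2_cross - g2_co) / g2_cross

g2_cross, g2_co = 0.45, 0.22          # illustrative numbers, not measured data
print(f"V_remote = {remote_visibility(g2_cross, g2_co):.2f}")  # -> 0.51
```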
In conclusion, we performed two-photon interference between photons emitted by two remote QDs with a visibility of 51±5 %, by using the full power of the novel phonon-assisted two-photon excitation scheme. In a comprehensive study, we compared different resonant excitation schemes and showed that phonon-assisted state preparation is a robust method to generate on-demand single pairs of highly entangled and indistinguishable photons from semiconductor quantum dots. Our results pave the way towards entanglement-swapping experiments between independent QDs and other complex multi-source experiments.
§ ASSOCIATED CONTENT
§.§ Supporting Information
Methods, Detuning parameters of the phonon-assisted TPE; lifetime and auto-corrleation; fidelity evaluation; two-photon interference using the same QD; two-photon interference from remote QDs.
§ AUTHOR INFORMATION
§.§ Corresponding Authors
*(M.R.) E-mail: [email protected]
(K.D.J.) E-mail: [email protected]
(R.T.) E-mail: [email protected]
§.§ Author contributions
M.R., K.D.J., and D.H. performed the measurements with help from C.S. and R.T. M.R. carried out the data analysis with help from K.D.J., D.H., and R.T. Y.H. grew the sample with support from A.R. C.S. processed the sample. M.R., K.D.J., and R.T. wrote the manuscript with input from all the authors. R.T. conceived the experiment and supervised the project.
§.§ Notes
The authors declare no competing financial interests.
§ ACKNOWLEDGEMENTS
This work was financially supported by the ERC Starting Grant No. 679183 (SPQRel) and European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 601126 (HANAS). K.D.J. acknowledges funding from the MARIE SKŁODOWSKA-CURIE Individual Fellowship under REA grant agreement No. 661416 (SiPhoN). K.D.J. and R.T. acknowledge the COST Action MP1403, supported by COST (European Cooperation in Science and Technology). A.R. acknowledges funding from the Austrian Science Fund (FWF): P 29603. We acknowledge Florian Sipek and Matthias Gartner for help with the experimental setup and data evaluation. We further thank Javier Martin-Sanchez and Johannes Wildmann for fruitful discussions as well as Oliver G. Schmidt for providing access to the MBE facility.
§ TOC FIGURE
| One of the very first requirements to observe ideal on-demand single photon emission is the population inversion of the quantum emitter's excited state. Such a preparation of the quantum state is usually achieved via coherent excitation using resonant laser pulses, leading to an inverted two-level system performing Rabi oscillations <cit.>. While single-pulse resonant excitation of a quantum dot (QD) has been used to achieve high state preparation fidelities and remarkable single photon properties, this scheme cannot be used to efficiently prepare the biexciton state, the key step to achieve polarization-entangled photon-pair generation with QDs. This task can instead be accomplished using two-photon excitation (TPE) <cit.> techniques, which have recently led to the generation of on-demand entangled photon pairs <cit.>. This coherent excitation scheme, however, has one important drawback. Small fluctuations in the laser pulse area or energy, as well as fluctuations in the QD environment, result in a strong variation of the excited-state population probability that, in turn, affects the efficiency of photon generation. Envisioned quantum communication applications instead demand more robust excitation schemes that are immune to these sources of "environmental decoherence" and ensure on-demand generation of single and entangled photon pairs.
In principle, it is possible to overcome these problems by taking advantage of the solid-state nature of QDs, in particular their coupling to acoustic phonons. Although the phonon-assisted excitation scheme is inherently incoherent, it has been proposed <cit.> and recently demonstrated <cit.> that population inversion of X and XX states coupled to a quasicontinuum of vibrational modes is indeed possible.
Nonetheless, the capability of this technique to generate highly indistinguishable single and entangled photons has not been explored so far. In this letter, we show for the first time that phonon-assisted two-photon excitation of QDs allows for the generation of highly indistinguishable entangled photon pairs. In comparison with standard excitation schemes, we demonstrate that this method is more resilient against the environmental decoherence that limits the XX or X preparation fidelity in conventional TPE schemes. Most importantly, we exploit its addressability over a wide range of laser detunings to prepare two remote and dissimilar QDs on demand and to let the generated photons interfere at a beam splitter, a key experiment for the realization of an all-optical quantum repeater <cit.>.
http://arxiv.org/abs/1701.07651v1 | 20170126110159 | Quasicontinuum Method Extended to Irregular Lattices | [
"Karel Mikeš",
"Milan Jirásek"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
The quasicontinuum (QC) method, originally proposed by Tadmor, Ortiz and Phillips in 1996,
is a computational technique that can efficiently handle regular atomistic lattices by
combining continuum and atomistic approaches. In the present work, the QC method is extended
to irregular systems of particles that represent a heterogeneous material.
The paper introduces five QC-inspired approaches that approximate a discrete model
consisting of particles connected by elastic links with axial interactions. Accuracy is first
checked on simple examples in two and three spatial dimensions. Computational efficiency is then
assessed by performing three-dimensional simulations of an L-shaped specimen with elastic-brittle
links. It is shown that the QC-inspired approaches substantially reduce the computational cost
and lead to macroscopic crack trajectories and global
load-displacement curves that are very similar to those obtained by a
fully resolved particle model.
§ INTRODUCTION
Discrete particle models use a network of particles, interacting via discrete links or connections, that represents the discrete microstructure of the modeled material.
An advantage of this approach is that discrete models can naturally capture small-scale phenomena. Therefore, a variety of sophisticated discrete material models have been developed and applied
in simulations of materials such as
paper <cit.>, textile <cit.>, fibrous materials <cit.>, woven composite fabrics <cit.> or fiber composites <cit.>.
Extensive effort has been invested into the formulation of a discrete model of concrete <cit.>.
Discrete mechanical models can accurately capture complex material response, especially localized phenomena such as damage or plastic softening.
However, they suffer from two main disadvantages.
Firstly, a large number of particles is needed to realistically describe the response of large-scale, physically relevant models, which results in huge systems of equations that are expensive to solve.
Secondly, the process of assembling this system is also computationally expensive because all discrete connections must be taken into account individually.
Both of the aforementioned disadvantages of discrete particle models can be removed by using simplified continuous models based on one of the conventional homogenization procedures.
However, standard continuous models cannot capture localized phenomena in an objective way and
require enrichments, e.g., by nonlocal and gradient terms, which are again computationally
expensive. According to Bažant <cit.>, the most powerful approach to softening damage
in the multi-scale context is a discrete (lattice-particle) simulation of the mesostructure of the entire structural region in which softening damage can occur.
Another way to reduce the computational cost of discrete particle models is a
combination of a simplified continuous model with an exact discrete description in the
parts where it is needed. Such a combination of two different approaches entails that
some hand-shaking procedure is needed at the interface between the continuous and discrete
domains <cit.>. The quasicontinuum (QC) method is a suitable technique combining the
advantages of continuous models with the exact description of discrete particle models without additional coupling procedures.
The quasicontinuum method was originally proposed by Tadmor, Ortiz and Phillips <cit.> in 1996. The original purpose of this computational technique was a simplification of large atomistic lattice models described by long-range conservative interaction potentials. Since that time, QC methods have been widely used to investigate local phenomena of atomistic models with long-range interactions <cit.>.
Recently, the application of QC methods has been successfully extended to other lattices and interaction potentials. For example, an application of the QC method to structural lattice models of fibrous materials with short-range nearest-neighbour interactions has been developed by Beex et al. for conservative <cit.> and non-conservative <cit.> interaction potentials including dissipation and fiber sliding as well as for planar beam lattices <cit.>, still applied to regular lattices only.
An overview of applications and current directions of QC methods has been provided by Miller and Tadmor in <cit.> and in part IV of their book <cit.>.
In the last few years, a variational formulation of the dissipative QC method has been developed by Rokoš et al. <cit.>, a goal-oriented adaptive version of the QC algorithm has been introduced in <cit.>, and a meshless QC method has been developed by the group of Kochmann <cit.>.
However, the application of all the aforementioned QC methods is still restricted to systems with a regular geometry of particles.
In the present work, we extend the QC approach to irregular systems of particles with short-range interactions by axial forces. The main idea has been tentatively presented in a conference paper <cit.>.
Here we proceed to a more systematic evaluation of the performance of various QC formulations
applied to systems with elastic-brittle links. The proposed models are implemented
in OOFEM <cit.>, an open-source object-oriented simulation platform initially
developed for finite element methods but extensible to other discretization methods.
The procedure that results from the QC method combines the following three ingredients:
* Interpolation of particle displacements is used in the regions of low interest. Only a small subset of particles is selected to characterize the behavior of the entire system. These so-called repnodes (representative nodes) are used as nodes of an underlying triangular finite element mesh, and the displacements of other particles in the region of low interest are interpolated. In the regions of high interest, all particles are selected as repnodes, in order to provide the exact resolution of the particle model. This interpolation leads to a significant reduction of the number of degrees
of freedom (DOFs) without inducing a large error in the regions of high interest.
* A summation rule can be applied in order to eliminate the requirement of visiting all particles during assembly of the global equilibrium equations. If such a rule is not imposed, all particles need to be visited to construct the system of equations, which makes the process computationally expensive. If the summation rule is adopted, the contribution of all particles in each interpolation triangle is estimated based on sampling of the links that surround one single particle and proper scaling of their contribution. This makes the computational process faster but some problems occur on the interface between regions of high and low interest. The piecewise linear interpolation of displacements combined with the summation rule means that the deformation is considered as constant within each interpolation element in the regions of low interest, while the deformations of individual links in the regions of high interest are evaluated exactly. Consequently, forces of nonphysical character, called the ghost forces, appear on the interface <cit.>.
In our work, the summation procedure is based on homogenization of link networks contributing to the interpolation elements. Some of the links (truss elements) are selected to be processed exactly, in order to properly treat the interface between the exactly solved and interpolated domains and thus to eliminate the ghost forces.
* Adaptivity provides suitable changes of the regions of high interest during the simulation process. A new triangulation of the interpolation mesh could be done, but this is actually not necessary because the type of region can be changed by adding repnodes before each step. A suitable change of the regions of high interest often leads to a substantial increase of accuracy and, in several specific cases, it is necessary in order to represent the correct physical behavior, e.g., in a crack propagation process.
§ METHODS
§.§ Overview
The original QC approach was developed for regularly arranged crystal lattices, in which atoms interact at a longer distance (not just with immediate neighbors) and the interaction forces can be derived from
suitable potentials. In regions of low interest, displacements were interpolated in a piecewise linear fashion, using a selected set of representative atoms (repatoms). In this context, imposition of an affine displacement field on the periodic crystal lattice can be interpreted as an application of the Cauchy-Born rule.
In the present paper, we focus on discrete particle systems with short-range elastic or elastic-brittle interactions. Such systems are typically used in simulations of heterogeneous materials. Particles in these systems are distributed randomly and, in contrast to atomistic systems, do not form regular lattices, but the idea of QC can still be used.
Several approaches based on this idea are introduced here and compared with the fully resolved particle model in two dimensions, which is considered as the reference case. Accuracy is assessed in terms of displacement and strain errors. The number and position of repnodes are adapted to achieve the optimal result.
The computational procedure consists of the following steps:
* generation of particles and of connecting links,
* selection of repnodes and generation of interpolation elements,
* application of a simplification rule,
* assembly of global equations with repnode displacements as basic unknowns,
* solution of global equations (for nonlinear models using an incremental-iterative scheme),
* post-processing of results and error evaluation.
The details of individual steps are described in the following subsections.
§.§ Generation of input geometry
In the first step, the input geometry of the particle system is generated; it is specified by the positions of all particles in the system and by the information on which pairs of particles are connected by links. This process depends on the type of represented material.
The second step consists of repnode selection and generation of interpolation elements. There are two possible reasons why a certain particle is selected as a repnode:
* All particles located in a region of high interest are selected as repnodes to represent the “exact” behaviour in this region.
* In regions of low interest, a sufficient number of repnodes are needed to construct the approximation of the displacements of other particles. Such repnodes represent vertices of the interpolation elements. The basic triangulation is done by the T3D mesh generator <cit.>. Then all newly created vertices of the mesh elements are shifted to the position of the nearest particles and are labeled as repnodes.
§.§ Application of simplifying rule
Once the input geometry is given, it is possible to apply a suitable simplifying rule based on the idea of QC. In this paper, five approaches using various levels of simplification are considered.
§.§.§ Pure particle approach (A1)
This approach does not use any simplification and corresponds to the reference model. Only the particles and links defining the particle model are used as input. Repnodes and interpolation elements are not needed. Every single particle represents a node with independent DOFs (displacements) and the links are described by 1D truss elements. Consequently, all links are taken into account explicitly and contribute directly to the internal forces and to the stiffness matrix.
This approach fully resolves the “exact” particle model, and the corresponding results are used as a reference solution for evaluation of accuracy and efficiency of the following simplified approaches.
§.§.§ Hanging node approach (A2)
The first technique which exploits the QC idea for simplification of the full particle model is based on approximation of DOFs of those particles that have not been selected as repnodes. Such particles are called the hanging nodes because their DOFs are not independent unknowns but are “hanging” on auxiliary elements with displacements interpolated from the neighboring repnodes. Triangular or tetrahedral interpolation elements with vertices at the repnodes are used here. For each hanging node, the corresponding interpolation element is found. It is either the element in which the hanging node is located, or the nearest element if the hanging node is not located in any interpolation element, which sometimes occurs at a curved part of the physical boundary of the particle system; see Figure <ref>.
The displacement of each hanging node (grey) is a linear combination of the displacements of the vertices of the corresponding interpolation element (black). This means that DOFs of all hanging nodes are interpolated (or extrapolated) using the DOFs of repnodes as the primary unknowns. Linear interpolation is used.
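To make the interpolation concrete, the following minimal Python sketch (ours, not the OOFEM code) evaluates the displacement of a hanging node from the three repnodes of its 2D interpolation triangle using linear shape-function (barycentric) weights.

import numpy as np

def barycentric_weights(tri, p):
    # Linear shape-function values of triangle tri (3x2 array of
    # vertex coordinates) at point p; the three weights sum to 1.
    a, b, c = tri
    T = np.column_stack((b - a, c - a))   # edge matrix
    l2, l3 = np.linalg.solve(T, p - a)    # local coordinates
    return np.array([1.0 - l2 - l3, l2, l3])

def hanging_node_displacement(tri, u_rep, p):
    # Interpolate (or, outside the triangle, extrapolate) the
    # displacement at p from the repnode displacements u_rep (3x2).
    return barycentric_weights(tri, p) @ u_rep

For hanging nodes that fall outside all elements (near curved parts of the boundary), the same weights are used with the nearest element, which amounts to linear extrapolation.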
All links (truss elements connecting particles) contribute to the structural stiffness matrix, but only the repnodes possess independent DOFs. The repnodes represent the nodes of an interpolation mesh, which consists of linear triangular (2D) or tetrahedral (3D) elements. These elements are used only for approximation of displacements of the nodes not selected as repnodes and do not provide a direct contribution to the internal force vector and the structural stiffness matrix.
In the OOFEM implementation, the particles carrying DOFs (repnodes) are modeled as regular nodes. The particles with interpolated DOFs (hanging nodes) are represented by a special type of node for which a subset of interpolation elements can be specified. The nearest interpolation element is then sought within this subset rather than among all interpolation elements. This technique makes it possible to distinguish overlapping elements on opposite sides of a crack.
The set of links that contribute to the global equilibrium equations is the same as in the A1 approach. In regions of high interest, all nodes are repnodes and the contribution of these regions is the same as in the pure particle approach (A1).
§.§.§ Global homogenization approach (A3)
In regions of low interest,
this approach replaces the stiffness that corresponds to the links by the stiffness of 2D triangular or 3D tetrahedral elements “filled” by a fictitious continuum material with properties obtained by homogenization of the discrete network. Thereby, a substantial number of truss elements can be removed from the assembly procedure, and the number of operations is significantly reduced.
In the A3 approach, only one (global) effective elastic stiffness tensor, common to all elements, is assembled from the contribution of all links.
Such a tensor can be derived from the Hill-Mandel condition <cit.>,
which requires that the virtual macroscopic work density at a point be equal to the average microscopic virtual work in a corresponding volume V_0 of the microstructure. This
condition can be written as
δ W_mac = δ W_mic
where
δ W_mac = σ : δε
and
δ W_mic = 1/V_0 ∫_V_0 σ_mic : δε_mic dV
is the average virtual microscopic work density in the discrete microstructure.
Here, σ and ε are the macroscopic
stress and strain tensors,
and σ_mic and ε_mic are the microscopic
stress and strain tensors.
For a microstructure consisting of particles connected by links that transmit axial forces only,
the integral in (<ref>) can be replaced by a sum over all links, which leads to
δ W_mic = 1/V_0 ∑_p=1^N_t L_p A_p σ_Np δε_Np = 1/V_0 ∑_p=1^N_t F_Np δΔ L_p
where N_t is the number of links in volume V_0,
L_p and A_p are the length and cross-sectional area of link number p,
σ_Np and ε_Np are the axial stress and strain in that link,
F_Np = A_pσ_Np is the axial force and Δ L_p=L_p ε_Np
is the elongation (change of length).
In analogy to the Cauchy-Born rule used in the original quasicontinuum theory for
atomic lattices <cit.>, we will use the simplifying assumption that the microscopic strains
(actual strains as well as virtual ones) can be evaluated by projecting the macroscopic strain tensor. This assumption, in microplane theory referred to as the kinematic constraint <cit.>,
is written as
ε_Np = n_p · ε · n_p = N_p : ε,     δε_Np = n_p · δε · n_p = N_p : δε
where n_p is the unit vector in the direction of link p and N_p = n_p ⊗ n_p is a second-order tensor, introduced for convenience.
Based on (<ref>)–(<ref>), the virtual work equality (<ref>) can be rewritten as
σ : δε = 1/V_0 ∑_p=1^N_t L_p A_p σ_Np N_p : δε
Since (<ref>) should hold for all symmetric second-order tensors δε, and since N_p is symmetric, the macroscopic stress must be given by
σ = 1/V_0 ∑_p=1^N_t L_p A_p σ_Np N_p
Formula (<ref>) provides a rule for the evaluation of the macroscopic stress
tensor σ from the microscopic stresses, in our case
from the axial stresses in individual links, σ_Np. The formula is generally
applicable, even to inelastic materials. In the particular case of a linear elastic material response,
the constitutive behavior of links is described by Hooke's law
σ_Np = E_p ε_Np
where E_p is the microscopic elastic modulus of link number p (often considered
as the same for all links). Substituting (<ref>) into (<ref>) and exploiting
the first part of (<ref>),
we obtain
σ = ( 1/V_0 ∑_p=1^N_t L_p A_p E_p N_p ⊗ N_p ) : ε = 𝐃 : ε
where
𝐃 = 1/V_0 ∑_p=1^N_t L_p A_p E_p N_p ⊗ N_p
is the fourth-order macroscopic elastic stiffness tensor.
In the A3 approach, the sum in (<ref>) is taken over all links of the discrete model, and V_0 corresponds to the volume of the entire domain of analysis. Major and minor symmetries of the computed stiffness tensor are guaranteed because all these symmetries are exhibited by the
fourth-order tensor N_p ⊗ N_p, for each p. Once the global stiffness tensor is evaluated, the corresponding material parameters are assigned to all 2D and 3D elements. The nature of these parameters depends on the assumed type of material (e.g., isotropic, orthotropic, or general anisotropic). For instance, if the material is supposed to be
macroscopically isotropic, the numerically evaluated stiffness tensor
is replaced by its best approximation by an isotropic stiffness tensor characterized
by two elastic constants. The details will be explained in Section <ref>.
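As an illustration of formula (<ref>), a schematic 2D implementation of the global homogenization (ours, not the OOFEM code; the Voigt ordering 11, 22, 12 is assumed) could read:

import numpy as np

def homogenized_stiffness_2d(nodes, links, E, A, V0):
    # D = (1/V0) * sum_p L_p A_p E_p N_p (x) N_p, with N_p = n_p (x) n_p
    # stored as the Voigt vector (n1^2, n2^2, n1*n2).
    # nodes: (n,2) coordinates; links: list of (i,j) index pairs;
    # E, A: per-link moduli and cross-sectional areas.
    D = np.zeros((3, 3))
    for p, (i, j) in enumerate(links):
        d = nodes[j] - nodes[i]
        L = np.linalg.norm(d)
        n = d / L
        Np = np.array([n[0]**2, n[1]**2, n[0]*n[1]])
        D += L * A[p] * E[p] * np.outer(Np, Np)
    return D / V0

Note that the resulting matrix automatically satisfies D_1122 = D_1212, as derived below.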
Triangular or tetrahedral elements, which were in the A2 approach considered as interpolation elements for hanging nodes, are now used directly for evaluation of the structural stiffness matrix, based on the material stiffness tensor obtained in the homogenization process. Thus, all hanging nodes with interpolated DOFs and all truss elements connecting them can be removed from the computational model. This leads to a significant reduction of the computational cost, but the elimination of links must be done carefully.
Links connecting two hanging nodes (located both in the same element or in two different elements) can be removed because their stiffness is represented by the effective stiffness of the homogenized material assigned to the elements. Links connecting one repnode and one hanging node located in the same element can be removed, too, because their stiffness is also reflected by the effective material stiffness.
A special case occurs if a link passes through more than one element and connects one hanging node with one repnode. This can happen if the interpolation elements are too small, or on the interface between regions of low and high interest (see Fig. <ref>, in which
the pink rectangle is a region of high interest and the light blue rectangle is a region
of low interest). Such links should not be removed because their stiffness is not reflected by the homogenized material. Therefore, the involved hanging nodes are kept and the contribution of the links is taken into account explicitly, in addition to the contribution of the triangular or tetrahedral elements.
The global effective elastic stiffness tensor is determined by techniques that will be
described in Section <ref>.
For a general arrangement of particles and links, this tensor is considered as anisotropic.
If the internal structure of the material exhibits no preferential directions and is expected
to be macroscopically isotropic, the numerically evaluated effective elastic stiffness is
replaced by its best isotropic approximation. To distinguish between the general anisotropic
case and the special isotropic case, the corresponding versions of the A3 approach are
referred to as A3a and A3i.
§.§.§ Local homogenization approaches (A4, A5)
These approaches are refinements of A3 and aim at improving the following deficiencies:
* In the A3 approach, the effective material stiffness takes into account all links. However, certain links that partially cross homogenized elements are treated explicitly, and thus a part of their stiffness is actually accounted for twice, which artificially increases the resulting structural stiffness.
* The A3 approach assumes that, from the macroscopic point of view, material properties are the same at all points of the investigated domain, while in reality the local arrangement of links is variable across the domain.
The A4/A5 approaches remove these deficiencies by identifying effective properties of the homogenized material for each element separately.
The A4 approach treats the material as isotropic while the A5 approach accounts for local anisotropy. For computation of local material parameters, it makes sense to consider general anisotropy even if the overall material behavior is isotropic. The reason is that, for small elements, the particular local arrangement of a few links can result in a significant deviation from isotropy.
Evaluation of local stiffness for all homogenized elements
The material stiffness for each element is evaluated only from the contributions of those parts of the links that are really located inside the element. Certain links are still explicitly treated as 1D truss elements and do not contribute to the stiffness of any homogenized element.
In a loop over all links, the contribution of each link is computed according to the following set of rules, depending on the types of particles connected by the link. For better
understanding of the rules, examples of
links that correspond to individual cases (described below under labels 1ab, 2abc, 3ab)
are provided in Fig. <ref>c.
* Repnode – repnode
* If a link is located in the region of high interest:
The link is taken into account explicitly as a 1D truss element.
It does not contribute to the stiffness of any homogenized element.
* If a link is located in the region of low interest:
A link connecting two repnodes in the region of low interest is always located on an edge of an interpolation element. The stiffness contribution of this link is taken into account according to Equation (<ref>).
The stiffness contribution is equally distributed to all elements sharing this edge.
* Hanging node – hanging node
* If one or both ends of the link are located outside of all homogenized elements:
The link is taken into account as a truss element.
It does not contribute to the stiffness of any homogenized element.
* If both ends of the link are located in the same homogenized element:
The stiffness contribution of this link is taken into account according to Equation (<ref>).
The entire contribution is assigned to the corresponding element.
* If the ends of the link are located in two different homogenized elements:
The stiffness contribution of this link is taken into account according to Equation (<ref>).
All elements intersected by the link are detected.
The stiffness contribution is distributed to all the detected elements in proportion to the length of the part of the link inside each element.
* Repnode – hanging node
* If the hanging node is located outside of all homogenized elements:
The link is taken into account as a truss element.
It does not contribute to the stiffness of any homogenized element.
* If both ends of the link are located in the same element:
The stiffness contribution of this link is taken into account according to Equation (<ref>).
The entire contribution is assigned to the present element.
* If the ends of the link are located in two different elements:
The link is taken into account as a truss element.
It does not contribute to the stiffness of any homogenized element.
If a crack is modeled by overlapping elements, the same rules can be used. It is only necessary to take a decision on which side of the crack the link is located. Based on this decision, the stiffness contribution is assigned to one of the overlapping elements.
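For illustration, the proportional distribution in case 2c could be implemented along the following lines (a sketch, assuming the intersected elements and segment lengths have already been determined by a geometric search; all names are ours).

import numpy as np

def distribute_link_stiffness(D_elem, V_elem, seg_lengths, A, E, n):
    # Distribute one link's stiffness among the homogenized elements it
    # intersects, in proportion to the segment length inside each element.
    # D_elem: dict elem_id -> 3x3 Voigt matrix (updated in place);
    # V_elem: dict elem_id -> element volume;
    # seg_lengths: dict elem_id -> length of the link inside that element;
    # A, E: link cross-section and modulus; n: unit direction vector.
    Np = np.array([n[0]**2, n[1]**2, n[0]*n[1]])
    for e, Ls in seg_lengths.items():
        D_elem[e] += Ls * A * E * np.outer(Np, Np) / V_elem[e]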
§.§ Homogenization
§.§.§ Two-dimensional models
Formula (<ref>) is written in tensorial notation and provides the
effective material stiffness tensor. In the actual numerical implementation, the
tensor is represented by the corresponding matrix, based on the Voigt notation.
For instance, in the two-dimensional setting, the resulting material stiffness matrix
is given by
𝐃^num =
[ D_1111  D_1122  D_1112
  D_2211  D_2222  D_2212
  D_1211  D_1222  D_1212 ]
where D_ijkl are components of the material stiffness tensor .
Note that, in two dimensions, only five of these components are independent.
Matrix 𝐃^num exhibits symmetry and, on top of that, D_1122=D_1212,
because formula (<ref>) leads to
D_1122=1/V_0∑_p=1^N_t L_p A_p E_p n_p1n_p1n_p2n_p2 =
1/V_0∑_p=1^N_t L_p A_p E_p n_p1n_p2n_p1n_p2 =
D_1212
If the material stiffness is considered as anisotropic, matrix 𝐃^num
is used directly. This is done by the A5 approach, with local evaluation of 𝐃^num
for each homogenized element separately, and also by the A3 approach, if it is decided
to use an anisotropic, globally evaluated stiffness (e.g., if the structure of the
particle model is indeed anisotropic).
In the A4 approach, and also in the A3 approach with isotropic stiffness (referred to as A3i),
𝐃^num is approximated by a matrix
𝐃^iso =
E/(1-ν^2) [ 1   ν   0
            ν   1   0
            0   0   (1-ν)/2 ]
which corresponds to the isotropic material stiffness under plane stress conditions,
with E and ν denoting the Young modulus and Poisson ratio of the homogenized
material.
Alternatively, one could adopt the plane strain assumptions, which would lead to
different auxiliary values of E and ν but to the same resulting matrix 𝐃^iso.
Optimal values of parameters E and ν are identified by minimizing
a certain measure of the difference between matrices 𝐃^num and 𝐃^iso.
Several choices of such a measure are possible, but the results remain quite similar.
The calculations presented here are based on the error measure defined as
e(𝐃^num,𝐃^iso)=
∑_I=1^3(𝐯_I^T( 𝐃^num -𝐃^iso)𝐯_I)^2
where 𝐯_I is the I-th eigenvector of matrix 𝐃^iso.
For this choice, the optimal parameters can be expressed explicitly as
E = 4(D_1111 + 2D_1122 + D_2222)(4D_1111 - 7D_1122 + 4D_2222) / (33D_1111 + 6D_1122 + 33D_2222)
ν = (D_1111 + 62D_1122 + D_2222) / (33D_1111 + 6D_1122 + 33D_2222)
Note that the above expressions for E and ν do not depend on stiffness coefficients
D_1112 and D_2212, which are always zero for isotropic materials.
If the numerically computed coefficients D_1112 and D_2212 are not small
(compared to the other coefficients), the assumption of isotropy is not appropriate
and a fully anisotropic stiffness should be used. Also note that a perfect matching
between 𝐃^num and 𝐃^iso is possible only if
coefficients D_1112 and D_2212 vanish and the other coefficients satisfy
conditions
D_2222=D_1111 and D_1122=D_1111/3. In this case, formulae
(<ref>)–(<ref>)
give E=(8/9)D_1111 and ν=1/3.
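The closed-form fit is straightforward to evaluate numerically; the sketch below (ours) also checks the perfectly isotropic special case mentioned above.

import numpy as np

def isotropic_fit_2d(D):
    # Best isotropic plane-stress fit of a 3x3 Voigt stiffness matrix,
    # using the eigenvector-based error measure (first norm).
    d11, d22, d12 = D[0, 0], D[1, 1], D[0, 1]
    denom = 33*d11 + 6*d12 + 33*d22
    E = 4*(d11 + 2*d12 + d22)*(4*d11 - 7*d12 + 4*d22) / denom
    nu = (d11 + 62*d12 + d22) / denom
    return E, nu

# perfectly isotropic case: D_2222 = D_1111, D_1122 = D_1212 = D_1111/3
D = np.array([[9., 3., 0.], [3., 9., 0.], [0., 0., 3.]])
print(isotropic_fit_2d(D))  # (8.0, 0.333...), i.e. E = (8/9) D_1111, nu = 1/3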
An alternative error measure can be based on the standard Euclidean norm of fourth-order
tensors. The tensorial expression (D_ijkl^num-D_ijkl^iso)(D_ijkl^num-D_ijkl^iso), with sum over i, j, k and l implied by the summation convention,
would be
in the Voigt notation rewritten as
e(𝐃^num,𝐃^iso) = ∑_I=1^3 ∑_J=1^3 W_IJ ( D_IJ^num - D_IJ^iso )^2
i.e., an elementwise weighted sum of squared differences of the matrix entries,
where W_IJ are suitable weight coefficients
that can be arranged into the matrix
𝐖 = ([ 1 1 2; 1 1 2; 2 2 4 ])
For plane stress, the corresponding expression for the optimized elastic constants is
E = (D_1111 + 2D_1122 + D_2222)(D_1111 + 14D_1122 + D_2222) / ( 2(3D_1111 + 12D_1122 + 3D_2222) )
ν = 2(D_1111 - D_1122 + D_2222) / (3D_1111 + 12D_1122 + 3D_2222)
For D_2222=D_1111 and D_1122=D_1111/3, we obtain again
E=(8/9)D_1111 and ν=1/3.
Alternatively, condition ν=1/3 could be imposed directly. Minimization of
(<ref>) with Young's modulus considered as the only fitting variable would lead to
E = (8/81)(4D_1111 + 3D_1122 + 4D_2222)
while minimization of (<ref>) would give
E = (2/9)(D_1111 + 6D_1122 + D_2222)
§.§.§ Three-dimensional models
In the three-dimensional setting, the resulting material stiffness matrix
is given by
𝐃^num =
[ D_1111 D_1122 D_1133 D_1123 D_1113 D_1112
  D_2211 D_2222 D_2233 D_2223 D_2213 D_2212
  D_3311 D_3322 D_3333 D_3323 D_3313 D_3312
  D_2311 D_2322 D_2333 D_2323 D_2313 D_2312
  D_1311 D_1322 D_1333 D_1323 D_1313 D_1312
  D_1211 D_1222 D_1233 D_1223 D_1213 D_1212 ]
Only fifteen components of the above matrix are independent because the
coefficients D_ijkl are invariant with respect to any permutation of the subscripts.
In the A3i and A4 approaches, 𝐃^num is approximated by an isotropic stiffness matrix in the form
𝐃^iso =
E/((1+ν)(1-2ν)) [ 1-ν  ν    ν    0         0         0
                  ν    1-ν  ν    0         0         0
                  ν    ν    1-ν  0         0         0
                  0    0    0    (1-2ν)/2  0         0
                  0    0    0    0         (1-2ν)/2  0
                  0    0    0    0         0         (1-2ν)/2 ]
where the shear entries (1-2ν)/2 correspond to the tensor components D_2323, D_1313 and D_1212, consistently with the storage of 𝐃^num.
The weight coefficients used in the error measure according to formula (<ref>) are
𝐖 =
[ 1 1 1 2 2 2
  1 1 1 2 2 2
  1 1 1 2 2 2
  2 2 2 4 4 4
  2 2 2 4 4 4
  2 2 2 4 4 4 ]
By minimizing this error we obtain
E = (a+2b)(a+11b) / (15a+39b)
ν = (2a+b) / (5a+13b)
where
a = D_1111 + D_2222 + D_3333
b = D_2233 + D_1133 + D_1122
In the special case of a perfectly isotropic matrix with D_3333=D_2222=D_1111 and D_2233=D_1133=D_1122=D_1111/3, the expressions can be simplified to
E=(5/6)D_1111 and ν=1/4.
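The corresponding 3D fit is equally simple to script (again an illustrative sketch with our naming):

def isotropic_fit_3d(D):
    # Best isotropic fit of a 6x6 matrix of tensor components, based on
    # a = D_1111 + D_2222 + D_3333 and b = D_2233 + D_1133 + D_1122.
    a = D[0, 0] + D[1, 1] + D[2, 2]
    b = D[1, 2] + D[0, 2] + D[0, 1]
    return (a + 2*b)*(a + 11*b)/(15*a + 39*b), (2*a + b)/(5*a + 13*b)

For the perfectly isotropic case (a = 3D_1111, b = D_1111), this returns E = (5/6)D_1111 and ν = 1/4, as stated above.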
§.§ Numerical simulation
Approaches A1–A5 described above have been implemented
into the OOFEM open-source code <cit.>.
OOFEM computes displacements of repnodes and hanging nodes and strains and stresses in truss elements and homogenized planar or spatial elements. Afterwards, some post-processing procedures are required to evaluate the error of each approach and to plot the computed results.
The OOFEM input for the A4/A5 approaches is almost the same as for A3, with only one exception. An A4/A5 input file contains a large number of materials with different parameters, which are then assigned to individual elements. Numerical tests show that computations with A4/A5 are slightly slower than with A3 because approaches A4/A5 deal with more materials. However, this difference is almost negligible. The process of identifying material parameters is visibly slower for A4/A5 and the difference depends on the type of homogenization. Even if the time of the initial set-up process is taken into account,
simulations based on A3–A5 turn out to be several times faster than those based on A1 and A2.
§.§ Error measures
Accuracy of the simplifying approaches A2–A5 is evaluated by comparing the results to the
exact approach, A1. The following error measures are used for that purpose:
* The relative stiffness error (RSE) is measured via the relative reaction error, defined as
RSE_Ai = ( ∑_k R_k^(Ai) - ∑_k R_k^(A1) ) / ∑_k R_k^(Ai)
where ∑_k R_k^(Ai) is the sum of reactions in the loading direction at all nodes k with a prescribed nonzero displacement (the simulations are performed under direct displacement control).
* The energy error indicator (EEI) is defined as
EEI_Ai = √(1/2∑_j^NoL E_jA_jL_j (ε^(Ai)_j - ε^(A1)_j)^2)
where ε^(Ai)_j is the axial strain at link j, and the sum is taken over all links (NoL denotes the number of links).
* The total displacement error indicator (DEI) is defined as
DEI_Ai = √(∑_j^NoN‖𝐮^(Ai)_j - 𝐮^(A1)_j‖^2)
where 𝐮^(Ai)_j is the displacement vector at node j and NoN is the number of nodes.
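All three indicators are trivial to compute once the reference solution (A1) is stored; a sketch (ours):

import numpy as np

def rse(R_ai, R_a1):
    # relative stiffness error from summed reactions
    return (np.sum(R_ai) - np.sum(R_a1)) / np.sum(R_ai)

def eei(E, A, L, eps_ai, eps_a1):
    # energy error indicator; all arguments are per-link arrays
    return np.sqrt(0.5 * np.sum(E * A * L * (eps_ai - eps_a1)**2))

def dei(u_ai, u_a1):
    # total displacement error indicator; (NoN, dim) arrays
    return np.sqrt(np.sum((u_ai - u_a1)**2))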
§ SIMPLE TESTS OF ELASTIC RESPONSE
§.§ Two-dimensional periodic lattices
§.§.§ Lattice geometry and properties
For verification of the implemented methods, the first tests are run for regular
lattices composed of periodically repeated cells. The properties of the cells are adjusted
such that the resulting macroscopic behavior be isotropic.
The microstructure is generated by periodically repeating a rectangular basic cell
of size L_x× L_y with
crossed diagonals; see Figure <ref>. The moduli of individual links in the horizontal, vertical and diagonal directions are denoted as E_x, E_y and E_d, respectively.
For simplicity, the cross-sectional area A of all links within a cell
is considered to be the same. When multiple cells are combined into a rectangular
pattern, the horizontal and vertical links on the inter-cell boundaries are merged and their resulting area
is doubled.
Due to periodicity,
evaluation of the homogenized stiffness can be based on formula (<ref>) applied
to one single cell and combined with formula (<ref>).
The resulting stiffness matrix of a homogenized two-dimensional continuum is
𝐃^num =
2A/(L_x L_y t) [ E_xL_x + E_dL_d cos^4α    E_dL_d cos^2α sin^2α      0
                 E_dL_d cos^2α sin^2α      E_yL_y + E_dL_d sin^4α    0
                 0                         0                         E_dL_d cos^2α sin^2α ]
where t is the out-of-plane thickness,
L_d=√(L_x^2+L_y^2) is the length of the diagonal link,
and α=arctan(L_y/L_x) is an angle characterizing the inclination of diagonals; see Fig. <ref>. In general, the stiffness matrix
given by (<ref>) corresponds to an orthotropic material.
By comparing (<ref>) with formula (<ref>) for the stiffness matrix of an isotropic material under plane stress conditions, we find that the periodic cell leads to
macroscopic isotropy if the following conditions are satisfied:
E_xL_x + E_dL_d cos^4α = E_yL_y + E_dL_d sin^4α
E_dL_d cos^2α sin^2α = ν ( E_yL_y + E_dL_d sin^4α )
ν = (1-ν)/2
Condition (<ref>) implies that, in the case of isotropy, the macroscopic Poisson ratio
is restricted to ν=1/3.
From conditions (<ref>)–(<ref>)
combined with relations L_x=L_dcosα and L_y=L_dsinα we obtain
E_x = E_d cosα (3-4cos^2α)
E_y = E_d sinα (3-4sin^2α)
Finally, we can substitute ν=1/3 and (<ref>)–(<ref>) into condition
E/(1-ν^2) = 2A/(L_x L_y t) ( E_xL_x + E_dL_d cos^4α )
and link the diagonal stiffness
E_d = 3 t L_d E / (8 A sin 2α)
to geometrical parameters of the cell and to the macroscopic
elastic modulus.
For a cell of a given geometry and for a prescribed macroscopic elastic modulus,
the characteristics of individual links in the cell can be obtained from
(<ref>) and (<ref>)–(<ref>).
To ensure that all moduli are positive, angle α must be between π/6 and
π/3, which means that the ratio L_x:L_y must be between 1:√(3) and √(3):1.
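For convenience, the calibration of the link moduli for a prescribed macroscopic modulus can be scripted directly from (<ref>) and (<ref>)–(<ref>) (an illustrative sketch; argument names are ours):

import math

def calibrate_cell(Lx, Ly, A, t, E_macro):
    # Link moduli E_d, E_x, E_y of the crossed-diagonal cell that make
    # the homogenized response isotropic with modulus E_macro (nu = 1/3).
    Ld = math.hypot(Lx, Ly)
    alpha = math.atan2(Ly, Lx)
    assert math.pi/6 < alpha < math.pi/3, "all moduli must remain positive"
    Ed = 3 * t * Ld * E_macro / (8 * A * math.sin(2 * alpha))
    Ex = Ed * math.cos(alpha) * (3 - 4 * math.cos(alpha)**2)
    Ey = Ed * math.sin(alpha) * (3 - 4 * math.sin(alpha)**2)
    return Ed, Ex, Ey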
§.§.§ Direct tension test
The presented QC approaches are first subjected to simple tests in direct tension and shear
on a regular lattice shown in Fig. <ref>a,
with boundary conditions according to Fig. <ref>.
The lattice is composed of 50× 50 periodic isotropic cells.
No region of high interest is defined in these elementary tests.
For direct tension (Fig. <ref>a),
the displacements of individual particles computed using the A1 approach
(fully resolved particle model)
exactly correspond to a linear displacement field that would arise in a homogeneous
continuum. All horizontal links are stretched in the same way and transmit the same
axial forces, and similar statements apply to the vertical links and to the diagonal links.
The purpose of this test is to check whether the simplified approaches A2–A5 lead to
the exact results. Indeed, this is the case, even on irregular meshes
shown in Fig. <ref>, provided that the link stiffnesses are
tuned up such that the macroscopic behavior is isotropic. The test is
analogous to patch tests of finite elements, because it demonstrates that a solution
with a uniform strain field is captured exactly by the numerical method.
Neither interpolation nor homogenization errors arise and all approaches pass
the patch test.
If the macroscopic behavior of the lattice is not isotropic, approaches A2 and A5 still lead to
the exact results, while approach A3 does so only if the globally homogenized material
stiffness is considered as anisotropic. For A4, the locally homogenized material stiffness
is by definition isotropic, which induces a homogenization error.
The fact that the global homogenization procedure gives exactly the same stiffness matrix
𝐃^num given by (<ref>) as the homogenization of one cell is obvious,
because the whole domain is composed of an integer number of identical cells.
On the other hand, it is not immediately clear that the same stiffness matrix is obtained
by local homogenization over an arbitrary triangular element, including cases when edges of the element
are not aligned with lattice directions (horizontal, vertical and diagonal).
A graphical proof of this interesting property is sketched in Fig. <ref>,
which shows a typical triangular element and the underlying regular lattice.
The key point is that the sum of the lengths of intersections of the triangle with
vertical links, ∑_p L_y,p, is exactly equal to the triangle area, A_e, divided by the horizontal
spacing L_x between adjacent vertical links. As shown in Fig. <ref>, this sum multiplied
by L_x directly
corresponds to evaluation of the triangle area by numerical integration using the
trapezoidal rule. Since the function to be integrated is piecewise linear and one of the
integration points is always located at the point where the slope changes (i.e., at the projection
of one vertex), the numerical quadrature is exact, which means that
A_e = L_x ∑_p L_y,p
An analogous statement holds for horizontal links, for ascending diagonal
links, and for descending diagonal links.
Consequently, the relative
proportions of horizontal, vertical, ascending diagonal and descending diagonal links in each single triangular element are the same
as the overall relative proportions, and
formula (<ref>) always leads to the same material stiffness
as formula (<ref>), valid for one periodic cell.
This would not be the case if the vertices
of the triangle were placed at arbitrary locations and not at grid points.
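The property is easy to verify numerically; the short script below (ours) sums the chords cut from a convex triangle with grid-point vertices by the vertical grid lines and compares the result with the triangle area.

import numpy as np

def vertical_chords_sum(tri, Lx):
    # Sum of intersection lengths of the vertical grid lines x = k*Lx
    # with the (convex) triangle tri, given as a 3x2 array of vertices.
    total = 0.0
    kmin = int(np.ceil(tri[:, 0].min() / Lx))
    kmax = int(np.floor(tri[:, 0].max() / Lx))
    for k in range(kmin, kmax + 1):
        x, ys = k * Lx, []
        for i in range(3):  # intersect the three edges with the line
            p, q = tri[i], tri[(i + 1) % 3]
            if min(p[0], q[0]) <= x <= max(p[0], q[0]) and p[0] != q[0]:
                ys.append(p[1] + (q[1] - p[1]) * (x - p[0]) / (q[0] - p[0]))
        if len(ys) >= 2:
            total += max(ys) - min(ys)  # chord length of a convex region
    return total

tri = np.array([[0., 0.], [7., 2.], [3., 6.]])  # vertices at grid points
v1, v2 = tri[1] - tri[0], tri[2] - tri[0]
area = 0.5 * abs(v1[0]*v2[1] - v1[1]*v2[0])
print(area, vertical_chords_sum(tri, 1.0))  # both equal 18.0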
§.§.§ Shear test
For shear, the boundary conditions shown in Fig. <ref>b do not lead
to a uniform strain field (note that the top and bottom parts of the boundary
are considered as traction-free and their displacements are not prescribed).
The relative stiffness errors evaluated according to formula (<ref>) for
meshes with different numbers of elements (NoE) are
graphically presented in Fig. <ref>a.
The error decreases fast with increasing number of elements, which indicates that
the error is due to interpolation. The homogenization procedure, even when performed
for each triangular element separately, does not introduce any error, for the same
reasons as explained in the previous subsection on direct tension
(see Fig. <ref>).
An exception is the finest interpolation mesh with 26 elements per edge,
for which the elements are not much bigger than the lattice cells.
As illustrated
in Fig. <ref>, for such fine meshes there exist
links (marked in red) that connect a repnode with a particle located in
an interpolation element
not connected to that repnode. Such links are accounted for explicitly and are excluded
from homogenization, which leads to disturbances. The A5 approach with locally evaluated anisotropic
material stiffness is then superior to the A3 and A4 approaches, which assume isotropy.
In terms of the stiffness error (RSE), the A2 approach (hanging nodes) gives in the present example somewhat higher accuracy
than the other approaches, even though the difference is not dramatic.
As shown in
Fig. <ref>b, the energy error indicator (EEI)
has the same value for all approaches (A2–A5) and decreases to very low levels
as the interpolation mesh is refined.
§.§ Two-dimensional lattices with randomized cells
§.§.§ Lattice geometry and properties
In this series of tests,
the microstructure is obtained by randomization of a periodic microstructure composed
of 50× 50 cells.
The randomization is achieved by random shifts of particle positions. The maximal shift values in x and y directions are 0.3L_x and 0.3L_y, respectively. The nodes located on edges are shifted along the edges only. The final result is evaluated as an average of computations with five different randomized microstructures. An example of a randomized microstructure is plotted in Fig. <ref>b.
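A possible implementation of the random shifts (ours; the paper does not specify the distribution, so a uniform one is assumed here):

import numpy as np

def randomize(nodes, Lx, Ly, on_vert_edge, on_horiz_edge, seed=0):
    # Shift particle positions by random amounts up to 0.3*Lx and 0.3*Ly;
    # nodes on edges (boolean masks) are shifted along the edge only.
    rng = np.random.default_rng(seed)
    shift = rng.uniform(-0.3, 0.3, nodes.shape) * np.array([Lx, Ly])
    shift[on_vert_edge, 0] = 0.0    # keep x on vertical edges
    shift[on_horiz_edge, 1] = 0.0   # keep y on horizontal edges
    return nodes + shift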
Patch tests have been performed for a number of triangular meshes with various sizes of elements; see Fig. <ref>.
§.§.§ Direct tension test
For the direct tension test, the relative stiffness errors (RSE) corresponding to
different meshes and to approaches A2–A5 are listed in Table <ref>
and graphically presented in Fig. <ref>.
The energy errors (EEI) are listed in Table <ref>.
Isotropic parameters for A3 and A4 have been determined according to (<ref>).
For comparison, the RSEs are plotted again in Fig. <ref>, this time with the parameters for A3 and A4 determined according to (<ref>).
The isotropic stiffness is overestimated for both matrix norms.
Nevertheless, norm (<ref>) provides a significantly higher error and seems to be unacceptable for application to uniform tension.
The error induced by the A3 approach is independent of the refinement of the interpolation
mesh, except for very fine meshes. The reason is that the A3 approach deals with the same
homogenized stiffness matrix in all elements, and if the whole sample is replaced by
a homogeneous continuum, the prescribed boundary conditions induce
uniform strain across the sample. Such a state of uniform strain is then captured
by all meshes because the underlying finite elements pass the standard patch test.
Of course, the reference solution obtained using the fully resolved particle model
is somewhat different, because the microstructure is not completely regular.
The error of the A3 approach is thus nonzero; it does not depend on the mesh,
with the exception of very fine meshes, for which some links are considered
explicitly, as already discussed in Section <ref> (see Fig. <ref>). For approaches A4 and A5, the total error slowly decreases
as the mesh is refined. As expected, A4 is seen to be more accurate than A3, and A5
is still more accurate.
The last three columns in Table <ref> present the
homogenization errors for approaches A3–A5. The homogenization error is defined as the difference between the
total error of the given approach and the total error of the A2 approach, which does
not use any homogenization. The homogenization errors are seen to be below 1%,
and for the A5 approach to be virtually nil, with the exception of very fine meshes.
Even for coarser meshes, the homogenization error slightly increases with mesh
refinement. For approaches A3 and A4, this can be attributed to the anisotropic
character of the local arrangements of links. The deviation from isotropy is more
pronounced on fine interpolation meshes.
§.§.§ Shear test
For the shear test of a randomized lattice, the relative stiffness error is listed
in Table <ref> and graphically presented in Fig. <ref> with A3 and A4 according to the first matrix norm,
and again in Fig. <ref> with A3 and A4 according to the second matrix norm.
For this shear test, the results of the comparison of the different matrix norms are the opposite of those obtained for tension.
The second matrix norm predicts correct results comparable with other approaches,
whereas the stiffness predicted with the first norm is significantly underestimated and
the total stiffness error can even become negative, which indicates that the
response is too compliant.
The energy error is listed in Table <ref>.
It is seen that A5 gives almost the same total error
as A2, and the error decreases with refinement of the interpolation mesh.
On the other hand, approaches A3 and A4 give seemingly a smaller relative stiffness error,
which can even become negative. Their energy error is larger than for A2 and A5.
Most of the error is due to interpolation. The homogenization error is smaller,
and for the A5 approach it is negligible.
§.§ Two-dimensional tests – conclusions
In partial summary, the tensile test of a regular lattice serves as a patch test that
verifies that the implementation is correct because all the approaches reproduce the
exact solution with no error. The shear test of a regular lattice shows that, upon
refinement, the error tends to zero.
For the tensile test of a randomized lattice, the error in stiffness caused by
interpolation remains above 5%
even on very fine meshes. For the shear test, it remains above 7.5%. This is the
intrinsic error that needs to be accepted. Isotropic homogenization in some cases
leads to an increase of compliance, which counteracts the increase of
stiffness due to interpolation. The resulting total error in stiffness is thus in
certain cases near zero or even negative, but the energy error always remains positive.
Differences in the performance of homogenization based on error measures (<ref>)
and (<ref>) in different patch tests indicate that
isotropic homogenization of anisotropic materials can be dangerous.
Therefore, it is better to avoid approaches A3i and A4 if the homogenized microstructure is strongly anisotropic.
§.§ Three-dimensional patch tests
To check the performance in three dimensions, basic patch tests are performed on cube samples composed of 24× 24× 24 cells.
All presented QC approaches are again applied to direct tension and shear.
The initial periodic 3D microstructure is randomized in the same way as in 2D.
The final result is evaluated as an average of computations with five different randomized microstructures.
Parameters of isotropic stiffness (for the A3i and A4 approaches) are obtained by using matrix norm (<ref>) only.
Same as in the 2D case, no region of high interest is defined in these tests.
§.§.§ Direct tension test
For the three-dimensional direct tension test, the relative stiffness errors (RSE) corresponding to
different meshes and to approaches A2–A5 are plotted in Fig. <ref>.
The energy errors (EEI) are plotted in Fig. <ref> and
the total displacement error indicator (DEI) in Fig. <ref>.
In terms of the RSE, all approaches exhibit the same behavior as in two dimensions.
The only difference is that the homogenized isotropic stiffness is underestimated instead of overestimated.
The EEI of A3i and A3a remains constant for all mesh sizes (except the finest mesh).
This confirms that, for a uniform displacement field, the homogenization used by these approaches is independent of the refinement of the interpolation mesh. On the other hand, the EEI of A4 increases with mesh refinement because smaller elements are statistically more anisotropic.
The ranking in terms of DEI reflects the quality of the homogenization procedures used by the individual QC approaches.
§.§.§ Shear test
For the three-dimensional shear test, the RSE, EEI and DEI are plotted in Fig. <ref>,
Fig. <ref> and Fig. <ref>, respectively.
The performance of all approaches is quite similar. The response of A3 and A4 is too compliant due to homogenization.
Therefore these approaches appear to be more accurate than A2 and A5 in terms of RSE but not in terms of EEI and DEI.
§.§.§ Three-dimensional tests – conclusions
Three-dimensional patch tests have shown that local anisotropic homogenization (i.e., the A5 approach) provides results with an almost zero homogenization error, but an intrinsic stiffness error around 15% due to interpolation must be taken into account.
§ FAILURE SIMULATIONS
To assess the efficiency and accuracy of QC-based approaches in applications that involve
inelastic material response, failure of an L-shaped specimen is simulated in three dimensions.
The behavior of links that connect particles is assumed to be elastic-perfectly brittle, with link breakage occurring at a critical level of tensile strain.
The L-shaped specimen shown in Fig. <ref> has
dimensions 300× 300× 100 mm. It is fixed at the bottom section and loaded by
prescribed vertical and horizontal displacements imposed at the left end section.
As a result, the non-convex corner is opened, the upper part of the specimen is bent
and the horizontal part is twisted.
The microstructure is generated with a density of 20 nodes along the short edge and randomized. This procedure results in 38,400 particles (with 113,200 unknown DOFs) connected by 324,672 links.
Material parameters are considered to be the same for all links.
In the simulation, the prescribed displacement is increased until the critical value of tensile strain is reached in the most stretched link. Subsequently, the link is removed and the loading is imposed again. Repeating this process results in a series of link breakages that define the macroscopic crack trajectory.
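Schematically, this event-driven loop can be written as follows (a sketch; solve_elastic and link_strains are hypothetical helpers standing in for the linear solver, not OOFEM functions):

def brittle_failure_simulation(model, eps_crit, n_breaks):
    # Scale the elastic solution for a unit prescribed displacement so
    # that the most stretched link just reaches eps_crit, remove that
    # link (elastic-perfectly brittle behavior), and repeat.
    history = []
    for _ in range(n_breaks):
        u = solve_elastic(model, prescribed_displacement=1.0)
        eps = link_strains(model, u)            # per-link axial strains
        j = max(model.active_links, key=lambda k: eps[k])
        history.append((j, eps_crit / eps[j]))  # link id, load factor
        model.active_links.remove(j)
    return history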
§.§ Cylindrical fully resolved domain
For the purpose of this test, a cylindrical fully resolved domain (FSD) is specified around the non-convex corner; see Fig. <ref>. The FSD radius is set to 50 mm, i.e., to one half of the length of the shortest specimen edge. This FSD occupies 11.8 % of the volume of the entire domain. Basic characteristics of models used by simplified QC approaches are listed in Table <ref>.
The simulation proceeds until 1000 links are cracked. Relative errors evaluated by comparing the results of simplified approaches to the full particle model (approach A1) are depicted in Fig. <ref> and <ref>. Crack opening error and the energy error indicator (EEI) defined in (<ref>) are depicted in Fig. <ref>.
The ranking of the individual QC approaches according to EEI in the first step reflects the quality of the homogenization methods used by these approaches.
In the last step, on the other hand, the ranking is different and depends on which links have cracked.
The capability of all simplified approaches A2-A5 to correctly predict the link that will break next
(i.e., the link with the maximum strain) depends on the current microstructure. The numbers of incorrectly cracked links during the failure process are compared in Table <ref>. The numbers in the line denoted as 500/1000 indicate how many of the first 500 cracked links for the given approach are not found among the first 1000 cracked links of the exact solution (approach A1). For the simplified approaches, the precise sequence of cracked links is not always the same, but only a small number of cracked links are predicted incorrectly.
Even though a few incorrectly cracked links appear, the simplified approaches are able to predict a correct overall crack trajectory, provided that the FSD is selected properly.
Another important quantity is
the maximum value of loading force F_ max observed in the force-displacement diagram. The results are shown in Table <ref>. For all simplified approaches, the maximum force is predicted in the 35th step, while the full particle model (approach A1) gives the maximum force in the 36th step. However, relative errors in F_ max as well as relative errors in the corresponding displacement are just a few percent.
Computational times consumed by individual components of various QC-based approaches are summarized in Table <ref>. Two most demanding procedures are the assignment of interpolation elements to all particles and the assembly of individual stiffness tensors from all elements.
Searching for the interpolation element is done independently for each particle.
The distribution of link stiffnesses to individual tensors is also done independently for each link.
Therefore, parallelization of the loop over particles or links can easily be envisaged with an almost ideal expected speed-up.
The computational times of one step and of the whole simulation are shown in Table <ref>. The QC approaches are able to reduce the computational time of one step more than ten times.
Even if the computational time needed for the initial simplification is high, the total simulation time is significantly reduced if the total number of steps is huge.
§.§ Fully resolved domain for crack propagation
The cylindrical FSD used in the previous section is not convenient for simulations of a long crack trajectory.
Even if cracking of individual links outside the FSD is implemented, propagation of the crack outside the FSD is not accurate.
To be able to predict the correct crack trajectory, it is necessary to change the FSD adaptively or set up a priori an FSD in the region where crack propagation is expected.
For that purpose a new wedge-shaped FSD is selected; see Fig. <ref>. For this FSD, cracking of 2000 links is computed and cracked links for approaches A1 and A5 are depicted in Figs. <ref>–<ref>
for two random realizations of the underlying particle model. The list of cracked links for the A5 approach is not the same as for the exact approach A1, but the macroscopic shape of the crack is quite similar to the exact solution.
The force-displacement diagram corresponding to a crack growth simulation in a microscopically elastic-brittle material exhibits high oscillations; see the scattered gray lines in Figs. <ref>–<ref>.
Interpretation and comparison of such force-displacement diagrams is facilitated
by smoothing the results.
The smoothing procedure replaces the values of force and displacement in each characteristic point by their weighted averages around this point. Different numbers of neighboring points have been used, ranging from ±5 to ±100. Three types of weight functions have been considered, namely constant, linear (closer points have a stronger influence) and bell-shaped polynomial (in the form often used by nonlocal material models)
given by
w(s) = ⟨ 1 - s^2/R^2⟩^2
where s is one plus the number of points between the averaged point and the point for which the weight is evaluated. Parameter R equals one plus the number of considered neighboring points, and ⟨…⟩ are Macaulay brackets denoting the positive part.
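A minimal sketch of this smoothing procedure is given below (in Python, for illustration only; the exact taper of the linear weights is not specified above, so the form R - s used here is an assumption):

```python
import numpy as np

def smooth_diagram(force, disp, n_neighbors=10, kind="bell"):
    """Smooth a force-displacement diagram by weighted averaging.

    Each characteristic point i is replaced by the weighted average of the
    points j with |i - j| <= n_neighbors. Following the text, s = 1 + |i - j|,
    R = 1 + n_neighbors, and the weights are constant, linear (assumed here
    to taper as R - s), or the bell-shaped polynomial <1 - s^2/R^2>^2.
    """
    force = np.asarray(force, dtype=float)
    disp = np.asarray(disp, dtype=float)
    R = 1 + n_neighbors
    s = 1 + np.abs(np.arange(-n_neighbors, n_neighbors + 1))
    if kind == "constant":
        w = np.ones(len(s))
    elif kind == "linear":
        w = np.maximum(R - s, 0.0)
    else:  # bell-shaped polynomial; the Macaulay bracket is the positive part
        w = np.maximum(1.0 - (s / R) ** 2, 0.0) ** 2
    n = len(force)
    f_s, d_s = np.empty(n), np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - n_neighbors), min(n, i + n_neighbors + 1)
        ww = w[lo - i + n_neighbors : hi - i + n_neighbors]
        f_s[i] = np.average(force[lo:hi], weights=ww)
        d_s[i] = np.average(disp[lo:hi], weights=ww)
    return f_s, d_s
```

With this convention, n_neighbors between 5 and 100 reproduces the range of smoothing windows considered above.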
The results for different numbers of neighboring points are compared for constant and linear weight functions in Figs. <ref> and <ref>, respectively.
The results obtained with the bell-shaped weight function are very similar to the results with linear weights.
Constant weights lead to sharper shapes of the final diagrams and, for the same number of neighboring points used, the oscillations are smaller in comparison with linear or polynomial weights.
All three variants of smoothing reflect the character of the original diagram and
reduce oscillations.
Diagrams for all approaches smoothed with constant weights are compared in Figs. <ref> and <ref> for 10 and 50 neighboring points.
The same diagrams smoothed with linear weights are compared in Figs. <ref> and <ref>.
In accordance with patch tests, the initial elastic response of the simplified approaches is stiffer than the exact solution.
However, the shape of the softening branch is captured by all approaches very well.
§.§ Large-scale computations
The maximum size of the problem to be solved is limited by the size of available memory. Typically, today's office PCs come with 4GB of RAM. For such a computer, the maximum size of the L-shaped specimen that can be solved by OOFEM depends on the fineness of the microstructure. For a direct solver with a symmetric skyline matrix storage format, the maximum fineness of microstructure that can be solved
using the pure particle model (approach A1) is 27 particles along the shortest edge, which leads to 282,852 DOFs and 820,846 links. The more powerful conjugate gradient iterative solver with a dynamically growing compressed column matrix storage format is able to solve examples with 46 particles along the shortest edge and with 1,424,068 DOFs and 4,190,040 links. Larger examples cannot be run on a single standard PC due to lack of memory.
QC-based approaches can handle particle models with microstructure density up to 89 particles along the shortest edge using the same cheap PC with 4GB of RAM.
The full particle model has 3,493,161 particles, 31,007,592 links and 10,439,878 DOFs.
The QC approach uses a fully resolved domain with 433,265 repnodes and 3,694,490 links that cover 11.8 % of the solved domain.
In the remaining part of the domain, an interpolation mesh consisting of 4,457 elements and 1,098 mesh nodes is used,
which results in a total of 3,698,947 elements and 1,302,939 DOFs.
In this simplified solution, the link with the maximum strain is predicted correctly and the relative error in maximum strain is just a few percent.
Modern supercomputers would allow solving problems with a finer microstructure, but such computations can be quite expensive. Furthermore, even for supercomputers, a finite limit on the size of the problem always exists, and QC-based approaches can make even larger systems solvable.
§ CONCLUSIONS AND FUTURE WORK
The presented example has demonstrated that QC-based methods can lead to a substantial reduction of the computational cost. The error induced by this reduction can be kept
within acceptable limits by suitably setting the region of high interest (fully resolved domain, FSD).
Macroscopic properties associated with the global stiffness are naturally affected by a certain error induced by interpolation.
Local phenomena such as cracking are well captured by sufficiently large FSDs.
Approaches A3i and A4 based on isotropic homogenization tend to underestimate the global stiffness if the material is significantly anisotropic. In such cases, these approaches may appear to be more accurate in certain examples, but their accuracy and convergence are not guaranteed.
On the other hand, the A5 approach based on local anisotropic homogenization seems to be very powerful. The homogenization error of A5 is negligible and this approach provides almost as accurate results as A2 while running substantially
faster.
Both elastic and simple inelastic material models have been presented. So far, the inelastic behavior has been considered to have the form of brittle failure on the
microscopic level.
Future work will deal with optimization of efficiency and extensions to elastoplasticity and to softening material response, e.g., to damage-based models.
§ ACKNOWLEDGEMENT
Financial support received
from the Czech Science Foundation (GAČR project No. 14-00420S) is gratefully acknowledged.
99
Liu10 J. Liu, Z. Chen, and K. Li,
“A 2-d lattice model for simulating the failure of paper”,
Theoretical and Applied Fracture Mechanics, 54(1), 1-10, 2010.
BeexVer13 L. Beex, C. Verberne, and R. Peerlings,
“Experimental identification of a lattice model for woven fabrics: Application to electronic textile”,
Composites Part A: Applied Science and Manufacturing, 48, 82-92, 2013.
BeePee15 L. Beex, R. Peerlings, K. van Os and M. Geers,
“The mechanical reliability of an electronic textile investigated using the virtual-power-based quasicontinuum method”,
Mechanics of Materials, 80, 52–66, 2015.
WilBeex13 D. Wilbrink, L. Beex, and R. Peerlings,
“A discrete network model for bond failure and frictional sliding in fibrous materials”,
International Journal of Solids and Structures, 50(9), 1354-1363, 2013.
RidGon10 A. Ridruejo, C. González and J. LLorca,
“Damage micromechanisms and notch sensitivity of glass-fiber non-woven felts: An experimental and numerical study”,
Journal of the Mechanics and Physics of Solids, 58, 1628-1645, 2010.
KulaUesa12 A. Kulachenko and T. Uesaka,
“Direct simulations of fiber network deformation and failure”
Mechanics of Materials, 51, 1-14, 2012.
PengCao04 X.Q. Peng and J. Cao,
“A continuum mechanics-based non-orthogonal constitutive model for woven composite fabrics”,
Composites: Part A: Applied science and manufacturing, 36, 859-874, 2005.
Baz90 Z. P. Bažant, M. R. Tabbara, M. T. Kazemi, and G. Pijaudier-Cabot,
“Random particle model for fracture of aggregate or fiber composites”,
Journal of Engineering Mechanics, 116(8), 1686–1705, 1990.
Jin16 C. Jin, N. Buratti, M. Stacchini, M. Savoia and G. Cusatis,
“Lattice discrete particle modeling of fiber reinforced concrete: Experiments and simulations”,
European Journal of Mechanics-A/Solids, 57, 85-107, 2016.
CusPel11 G. Cusatis, D. Pelessone, and A. Mencarelli,
“Lattice discrete particle model (ldpm) for failure behavior of concrete i: Theory”,
Cement and Concrete Composites, 33(9), 881-890, 2011.
LilMie03 G. Lilliu and J. van Mier,
“3d lattice type fracture model for concrete”,
Engineering Fracture Mechanics, 70(7-8), 927-941, 2003.
LiuDen07 J. Liu, S. Deng, J. Zhang, and N. Liang,
“Lattice type of fracture model for concrete”,
Theoretical and Applied Fracture Mechanics, 48(3), 269-284, 2007.
Baz10 Z. P. Bažant.
“Can multiscale-multiphysics methods predict softening damage and structural failure?”,
International Journal for Multiscale Computational Engineering, 8(1), 2010.
CurMil03 W. A. Curtin and R. E. Miller,
“Atomistic/continuum coupling in computational materials science”,
Modelling and Simulation in Materials Science and Engineering, 11, R33-R68, 2003.
TadPhi96 E. Tadmor, R. Phillips, and M. Ortiz,
“Mixed atomistic and continuum models of deformation in solids”,
Langmuir, 12, 4529-4534, 1996.
TadOrt96 E. B. Tadmor, M. Ortiz, and R. Phillips,
“Quasicontinuum analysis of defects in solids”,
Philosophical Magazine A, 73, 1529-1563, 1996.
TadMil05 E. Tadmor and R. Miller,
“The theory and implementation of the quasicontinuum method”,
Handbook of Materials Modeling, Springer Netherlands, 663-682, 2005.
BeePee14 L. Beex, R. Peerlings, and M. Geers,
“Central summation in the quasicontinuum method”,
Journal of the Mechanics and Physics of Solids, 70, 242-261, 2014.
BeePee14a L. Beex, R. Peerlings, and M. Geers,
“A multiscale quasicontinuum method for dissipative lattice models and discrete networks”,
Journal of the Mechanics and Physics of Solids, 64, 154-169, 2014.
BeePee2014c L. Beex, R. Peerlings, and M. Geers,
“A multiscale quasicontinuum method for lattice models with bond failure and fiber sliding”,
Computer Methods in Applied Mechanics and Engineering, 269, 108-122, 2014.
beex15 L. A. A. Beex, O. Rokoš, J. Zeman and S. P. A. Bordas,
“Higher‐order quasicontinuum methods for elastic and dissipative lattice models: uniaxial deformation and pure bending”,
GAMM‐Mitteilungen, 38(2), 344-368, 2015.
MilTad09 R. E. Miller and E. B. Tadmor,
“A unified framework and performance benchmark of fourteen multiscale atomistic/continuum coupling methods”,
Modelling and Simulation in Materials Science and Engineering, 17, 053001, 2009.
MilTad02 R. E. Miller and E. B. Tadmor,
“The quasicontinuum method: Overview, applications and current directions”,
Journal of Computer-Aided Materials Design, 9, 203-239, 2002.
TadMil11 E. B. Tadmor and R. E. Miller,
“Modeling Materials: Continuum, Atomistic and Multiscale Techniques”,
Cambridge University Press, 2011.
rokos16 O. Rokoš, L. A. Beex, J. Zeman and R. H. Peerlings,
“A Variational Formulation of Dissipative Quasicontinuum Methods”,
arXiv preprint arXiv:1601.00625, 2016 (in press).
MamLar15 A. Memarnahavandi, F. Larsson and K. Runesson,
“A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation”,
Computational Mechanics, 55, 617-642, 2015.
koch14 D. M. Kochmann and G. N. Venturini,
“A meshless quasicontinuum method based on local maximum-entropy interpolation”,
Modelling and Simulation in Materials Science and Engineering, 22, 2014.
MikJir15 K. Mikeš and M. Jirásek,
“The quasicontinuum method extended to disordered materials”,
in Proceedings of the 15th International Conference on Civil, Structural and
Environmental Engineering Computing, Civil-Comp Press, Stirlingshire, Scotland, 2015.
oofem01 B. Patzák and Z. Bittnar
“Design of object oriented finite element code.”
Advances in Engineering Software, 32(10), 759-767, 2001.
oofem12 B. Patzák and D. Rypl
“Object-oriented, parallel finite element framework with dynamic load balancing.”
Advances in Engineering Software, 47(1), 35-50, 2012.
Pat12 B. Patzák,
“OOFEM – an object-oriented simulation tool for advanced modeling of materials and structures”,
Acta Polytechnica, 52, 59-66, 2012.
SheMil99 V. Shenoy, R. E. Miller, E. B. Tadmor, D. Rodney, R. Phillips, and M. Ortiz,
“An adaptive finite element approach to atomic-scale mechanics—the quasicontinuum method”,
Journal of the Mechanics and Physics of Solids, 47, 611-642, 1999.
Ryp15 D. Rypl,
T3D mesh generator,
[online]. [cit. 2015-5-1]. Available at: http://ksm.fsv.cvut.cz/dr/t3d.html.
Hill63 R. Hill,
“Elastic properties of reinforced solids: some theoretical principles”
Journal of the Mechanics and Physics of Solids 11(5), 357-372, 1963.
Man72 J. Mandel,
“Plasticité classique et viscoplasticité. International Centre for Mechanical Sciences, Courses and Lectures No. 97.” Springer, Udine 1972.
SteiEliz06 P. Steinmann, A. Elizondo and R. Sunyk,
“Studies of validity of the Cauchy-Born rule by direct comparison of continuum and atomistic modelling”, Modelling and Simulation in Materials Science and Engineering, 15(1), S271, 2006.
Ozbolt01 J. Ožbolt, Y. Li, and I. Kožar,
“Microplane model for concrete with relaxed kinematic constraint”, International Journal of Solids and Structures, 38(16), 2683-2711, 2001.
| Discrete particle models use a network of particles interacting via discrete links or connections that represent a discrete microstructure of the modeled material.
An advantage of this approach is that discrete models can naturally capture small-scale phenomena. Therefore, a variety of sophisticated discrete material models have been developed and applied
in simulations of materials such as
paper <cit.>, textile <cit.>, fibrous materials <cit.>, woven composite fabrics <cit.> or fiber composites <cit.>.
Extensive effort has been invested into the formulation of a discrete model of concrete <cit.>.
Discrete mechanical models can accurately capture complex material response, especially localized phenomena such as damage or plastic softening.
However, they suffer from two main disadvantages.
Firstly, a large number of particles is needed to realistically describe the response of large-scale, physically relevant models. This results in huge systems of equations, which are expensive to solve.
Secondly, the process of assembling this system is also computationally expensive, because all discrete connections must be individually taken into account.
Both of the aforementioned disadvantages of discrete particle models can be removed by using simplified continuous models based on one of the conventional homogenization procedures.
However, standard continuous models cannot capture localized phenomena in an objective way and
require enrichments, e.g., by nonlocal and gradient terms, which are again computationally
expensive. According to Bažant <cit.>, the most powerful approach to softening damage
in the multi-scale context is a discrete (lattice-particle) simulation of the mesostructure of the entire structural region in which softening damage can occur.
Another way to reduce the computational cost of discrete particle models is a
combination of a simplified continuous model with an exact discrete description in the
parts where it is needed. Such a combination of two different approaches entails that
some hand shaking procedure is needed at the interface between the continuous and discrete
domains <cit.>. The quasicontinuum (QC) method is a suitable technique combining the
advantages of continuous models with the exact description of discrete particle models without additional coupling procedures.
The quasicontinuum method was originally proposed by Tadmor, Ortiz and Phillips <cit.> in 1996. The original purpose of this computational technique was a simplification of large atomistic lattice models described by long-range conservative interaction potentials. Since that time, QC methods have been widely used to investigate local phenomena of atomistic models with long-range interactions <cit.>.
Recently, the application of QC methods has been successfully extended to other lattices and interaction potentials. For example, an application of the QC method to structural lattice models of fibrous materials with short-range nearest-neighbour interactions has been developed by Beex et al. for conservative <cit.> and non-conservative <cit.> interaction potentials including dissipation and fiber sliding as well as for planar beam lattices <cit.>, still applied to regular lattices only.
An overview of applications and current directions of QC methods has been provided by Miller and Tadmor in <cit.> and in part IV of their book <cit.>.
In the last few years, a variational formulation of the dissipative QC method has been developed by Rokoš et al. <cit.>, a goal-oriented adaptive version of the QC algorithm has been introduced in <cit.>, and a meshless QC method has been developed by Kochmann's research group <cit.>.
However, the application of all the aforementioned QC methods is still restricted to systems with a regular geometry of particles.
In the present work, we extend the QC approach to irregular systems of particles with short-range interactions by axial forces. The main idea has been tentatively presented in a conference paper <cit.>.
Here we proceed to a more systematic evaluation of the performance of various QC formulations
applied to systems with elastic-brittle links. The proposed models are implemented
in OOFEM <cit.>, an open-source object-oriented simulation platform initially
developed for finite element methods but extensible to other discretization methods.
The procedure that results from the QC method combines the following three ingredients:
* Interpolation of particle displacements is used in the regions of low interest. Only a small subset of particles is selected to characterize the behavior of the entire system. These so-called repnodes (representative nodes) are used as nodes of an underlying triangular finite element mesh, and the displacements of other particles in the region of low interest are interpolated. In the regions of high interest, all particles are selected as repnodes, in order to provide the exact resolution of the particle model. This interpolation leads to a significant reduction of the number of degrees
of freedom (DOFs) without inducing a large error in the regions of high interest; a minimal sketch of this interpolation step is given after this list.
* A summation rule can be applied in order to eliminate the requirement of visiting all particles during assembly of the global equilibrium equations. If such a rule is not imposed, all particles need to be visited to construct the system of equations, which makes the process computationally expensive. If the summation rule is adopted, the contribution of all particles in each interpolation triangle is estimated based on sampling of the links that surround one single particle and proper scaling of their contribution. This makes the computational process faster but some problems occur on the interface between regions of high and low interest. The piecewise linear interpolation of displacements combined with the summation rule means that the deformation is considered as constant within each interpolation element in the regions of low interest, while the deformations of individual links in the regions of high interest are evaluated exactly. Consequently, forces of nonphysical character, called the ghost forces, appear on the interface <cit.>.
In our work, the summation procedure is based on homogenization of link networks contributing to the interpolation elements. Some of the links (truss elements) are selected to be processed exactly, in order to properly treat the interface between the exactly solved and interpolated domains and thus to eliminate the ghost forces.
* Adaptivity provides suitable changes of the regions of high interest during the simulation process. A new triangulation of the interpolation mesh could be done, but this is actually not necessary because the type of region can be changed by adding repnodes before each step. A suitable change of the regions of high interest often leads to a substantial increase of accuracy and, in several specific cases, it is necessary in order to represent the correct physical behavior, e.g., in a crack propagation process. | null | null | null | null | null |
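As announced in the first item above, the following is a minimal sketch of the interpolation ingredient (in Python; the names are illustrative and do not correspond to the OOFEM API). The displacement of a particle in a region of low interest is linearly interpolated from the three repnodes of the interpolation triangle that contains it:

```python
import numpy as np

def interpolated_displacement(p, tri_xy, tri_u):
    """Linearly interpolate a particle displacement inside one triangle.

    p      : (2,)  position of the particle (hanging node)
    tri_xy : (3,2) positions of the three repnodes (element vertices)
    tri_u  : (3,2) displacements of those repnodes
    """
    # Barycentric coordinates lam solve sum_k lam_k * vertex_k = p, sum_k lam_k = 1
    A = np.vstack([np.asarray(tri_xy, dtype=float).T, np.ones(3)])  # 3x3 system
    lam = np.linalg.solve(A, np.append(np.asarray(p, dtype=float), 1.0))
    return lam @ np.asarray(tri_u, dtype=float)

# Example: a particle at the centroid receives the average repnode displacement
tri_xy = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tri_u = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
print(interpolated_displacement((1 / 3, 1 / 3), tri_xy, tri_u))
```

This piecewise linear interpolation is also what makes the deformation constant within each interpolation element in the regions of low interest, as noted in the second item.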
http://arxiv.org/abs/1701.07790v2 | 20170126174547 | Game-Theoretic Modeling of Human Adaptation in Human-Robot Collaboration | [
"Stefanos Nikolaidis",
"Swaprava Nath",
"Ariel D. Procaccia",
"Siddhartha Srinivasa"
] | cs.RO | [
"cs.RO"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07817v2 | 20170126185223 | Effective field theory for dissipative fluids (II): classical limit, dynamical KMS symmetry and entropy current | [
"Paolo Glorioso",
"Michael Crossley",
"Hong Liu"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech",
"gr-qc",
"hep-ph",
"math-ph",
"math.MP"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08215v1 | 20170127225329 | Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials | [
"Stephen Cameron",
"Luis Silvestre",
"Stanley Snelson"
] | math.AP | [
"math.AP"
] |
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
§ INTRODUCTION
We consider the spatially inhomogeneous Landau equation, a kinetic model from plasma physics that describes the evolution of a particle density f(t,x,v)≥ 0 in phase space (see, for example, <cit.>). It is written in divergence form as
∂_t f + v·∇_x f = ∇_v·[a̅(t,x,v)∇_v f] + b̅(t,x,v)·∇_v f + c̅(t,x,v) f,
where t∈ [0,T_0], x∈^d, and v∈^d. The coefficients a̅(t,x,v)∈^d× d, b̅(t,x,v) ∈^d, and c̅(t,x,v)∈ are given by
a̅(t,x,v) := a_d,γ∫_^d( I - w/|w|⊗w/|w|) |w|^γ + 2 f(t,x,v-w) w,
b̅(t,x,v) := b_d,γ∫_^d |w|^γ w f(t,x,v-w) w,
c̅(t,x,v) := c_d,γ∫_^d |w|^γ f(t,x,v-w) w,
where γ is a parameter in [-d,∞), and a_d,γ, b_d,γ, and c_d,γ are constants. When γ = -d, the formula for c̅ must be replaced by c̅ = c_d,γ f. Equation (<ref>) arises as the limit of the Boltzmann equation as grazing collisions predominate, i.e. as the angular singularity approaches 2 (see the discussion in <cit.>). The case d=3, γ=-3, corresponds to particles interacting by Coulomb potentials in small scales. The case γ∈ [-d,0) is known as soft potentials, γ = 0 is known as Maxwell molecules, and γ >0 hard potentials. In this paper, we focus on moderately soft potentials, which is the case γ∈ (-2,0).
We assume that the mass density, energy density, and entropy density are bounded above, and the mass density is bounded below, uniformly in t and x:
0<m_0≤∫_^d f(t,x,v) v ≤ M_0,
∫_^d |v|^2 f(t,x,v) v ≤ E_0,
∫_^d f(t,x,v) log f(t,x,v) v ≤ H_0.
In the space homogeneous case, because of the conservation of mass and energy, and the monotonicity of the entropy, it is not necessary to make the assumptions (<ref>), (<ref>) and (<ref>). It would suffice to require the initial data to have finite mass, energy and entropy. It is currently unclear whether these hydrodynamic quantities will stay under control for large times and away from equilibrium in the space inhomogeneous case. Thus, at this point, it is simply an assumption we make.
We now state our main results. Our first theorem makes no further assumption on the initial data f_in:^2d→ beyond what is required for a weak solution to exist in [0,T_0].
Let γ∈ (-2,0]. If f:[0,T_0]×^2d→ is a bounded weak solution of (<ref>) satisfying (<ref>), (<ref>), and (<ref>), then there exists K_0>0 such that f satisfies
f(t,x,v) ≤ K_0 (1+t^-d/2) (1+|v|)^-1,
for all (t,x,v)∈ [0,T_0]×^2d. The constant K_0 depends on d, γ, m_0, M_0, E_0, and H_0.
Note that even though we work with a bounded weak solution f, none of the constants in our estimates depend on f_L^∞. Note also that our estimate does not depend on T_0. We use a definition of weak solution for which the estimates in <cit.> apply, since that is the main tool in our proofs.
We will show in Theorem <ref> that an estimate of the form (<ref>) cannot hold with a power of (1+|v|) less than -(d+2), which also implies there is no a priori exponential decay. On the other hand, if f_in satisfies a Gaussian upper bound in the velocity variable, this bound is propagated:
Let f: [0,T_0]×^2d→ be a bounded weak solution of the Landau equation (<ref>) such that f_in(x,v) ≤ C_0 e^-α|v|^2, for some C_0>0 and a sufficiently small α>0. Then
f(t,x,v) ≤ C_1e^-α|v|^2,
where C_1 depends on C_0, α, d, γ, m_0, M_0, E_0 and H_0. The value of α must be smaller than some α_0>0 that depends on γ, d, m_0, M_0, E_0 and H_0.
This estimate is also independent of T_0. As a consequence of Theorem <ref>, we will show in Theorem <ref> that in this regime, f is uniformly Hölder continuous on [t_0,T_0]×^2d for any t_0∈ (0,T_0).
Note that under some formal asymptotic regime, the hydrodynamic quantities of the inhomogeneous Landau equations converge to solutions of the compressible Euler equation <cit.>, which is known to develop singularities in finite time. Should we expect singularities to develop in finite time for the inhomogeneous Landau equation as well? That question seems to be out of reach with current techniques. A more realistic project is to prove that the solutions stay smooth for as long as the hydrodynamic quantities stay under control (as in (<ref>), (<ref>) and (<ref>)). The results in this paper are an important step forward in that program.
§.§ Related work
It was established in <cit.> that solutions to (<ref>) become C^∞ smooth in all three variables conditionally to the solution being away from vacuum, bounded in H^8 (in the d=3 case) and having infinitely many finite moments. It would be convenient to extend this conditional regularity result to less stringent assumptions. In particular, the assumptions (<ref>), (<ref>) and (<ref>) are much weaker, and are formulated in terms of physically relevant hydrodynamic quantities. In <cit.>, the authors show how their local Hölder continuity result for linear kinetic equations with rough coefficients can be applied to solutions of the Landau equation provided that (<ref>), (<ref>) and (<ref>) hold and, in addition, the solution f is assumed to be bounded. While we also assume boundedness of f, our results do not quantitatively rely on this and in addition tell us some information about the decay for large velocities.
The local estimates for parabolic kinetic equations with rough coefficients play an important role in this work. Local L^∞ estimates were obtained in <cit.> using Moser iteration, and local Hölder estimates were proven in <cit.> using a weak Poincaré inequality. A new proof was given in <cit.> using a version of De Giorgi's method.
Classical solutions for (<ref>) have so far only been constructed in a close-to-equilibrium setting: see the work of Guo <cit.> and Mouhot-Neumann <cit.>. A suitable notion of weak solution, for general initial data, was constructed by Alexandre-Villani <cit.>.
The global L^∞ estimate we prove in Theorem <ref> is similar to an estimate in <cit.> for the Boltzmann equation. The techniques in the proof are completely different. The propagation of Gaussian bounds that we give in Theorem <ref> is reminiscent of the result in <cit.>. That result is for the space-homogeneous Boltzmann equation with cut-off, which is in some sense the opposite of the Landau equation in terms of the angular singularity in the cross section.
In order to keep track of the constants for parabolic regularization estimates (as in <cit.>) for large velocities, we describe a change of variables in Lemma <ref>. This change of variables may be useful in other contexts. It is related to one mentioned in the appendix of <cit.> for the Boltzmann equation.
For the homogeneous Landau equation, which arises when f is assumed to be independent of x in (<ref>), the theory is more developed. The C^∞ smoothing is established for hard potentials in <cit.> and for Maxwell molecules in <cit.>, under the assumption that the initial data has finite mass and energy. Propagation of L^p estimates in the case of moderately soft potentials was shown in <cit.> and <cit.>. Global upper bounds in a weighted L^1_t(L^3_v) space were established in <cit.>, even for γ=-3, as a consequence of entropy dissipation. Global L^∞ bounds that do not depend on f_in and that do not degenerate as t→∞ were derived in <cit.> for moderately soft potentials, and this result also implies C^2 smoothing by standard parabolic regularity theory.
Note that in the space homogeneous case our assumptions (<ref>), (<ref>) and (<ref>) hold for all t>0 provided that the initial data has finite mass, energy and entropy. Both Theorems <ref> and <ref> are new results even in the space homogeneous case. The previous results for soft potentials do not address the decay of the solution for large velocities.
§.§ Organization of the paper
In Section <ref>, we establish precise bounds on the coefficients a̅, b̅, and c̅ in (<ref>). In Section <ref>, we derive the local estimates we will use to prove Theorem <ref>, starting from the Harnack estimate of <cit.>. Section <ref> contains the proof of Theorem <ref> and a propagating lower bound that implies the exponent of (1+|v|) in (<ref>) cannot be arbitrarily high. In Section <ref>, we prove Theorem <ref> and the Hölder estimate, Theorem <ref>. In Appendix <ref>, we derive a convenient maximum principle for kinetic Fokker-Planck equations.
§.§ Notation
We say a constant is universal if it depends only on d, γ, m_0, M_0, E_0, and H_0. The notation A≲ B means that A≤ CB for a universal constant C, and A ≈ B means that A≲ B and B≲ A. We will let z=(t,x,v) denote a point in _+×^d×^d. For any z_0=(t_0,x_0,v_0), define the Galilean transformation
𝒮_z_0(t,x,v) := (t_0+t, x_0 + x +tv_0,v_0+v).
We also have
𝒮_z_0^-1(t,x,v) := (t-t_0, x - x_0 -(t-t_0)v_0,v-v_0).
For any r>0 and z_0 = (t_0,x_0,v_0), let
Q_r(z_0) := (t_0-r^2,t_0] ×{x : |x-x_0 - (t-t_0) v_0| < r^3 }× B_r(v_0),
and Q_r = Q_r(0,0,0). The shift 𝒮_z_0 and the scaling of Q_r correspond to the symmetries of the left-hand side of (<ref>). We will sometimes write ∂_i or ∂_ij, and these will always refer to differentiation in v.
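We also record, for later use, how the left-hand side of (<ref>) behaves under the combination of the shift 𝒮_z_0 with the anisotropic scaling (t,x,v)↦(r^2t,r^3x,rv) that underlies Q_r; this is a chain-rule computation only. If g_r(t,x,v) := g(𝒮_z_0(r^2t, r^3x, rv)), then
∂_t g_r + v·∇_x g_r = r^2 ( ∂_t g + ṽ·∇_x̃ g ),
with the right-hand side evaluated at (t̃,x̃,ṽ) = 𝒮_z_0(r^2t, r^3x, rv), while each derivative in v contributes one factor of r. In this sense the transport operator scales homogeneously on the cylinders Q_r, a fact used repeatedly in the local estimates below.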
§ THE COEFFICIENTS OF THE LANDAU EQUATION
In this section we review various estimates of the coefficients a̅, b̅ and c̅ in (<ref>). In calculating these upper and lower bounds, the dependence of f on t and x is irrelevent, so in this section we will write f(v) and a̅(v), etc.
Let γ∈ [-2, 0), and assume f satisfies (<ref>), (<ref>), and (<ref>). Then there exist constants c and C depending on d, γ, m_0, M_0, E_0, and H_0, such that for unit vectors e∈^d,
a̅_ij(v) e_i e_j ≥ c (1+|v|)^γ, e∈𝕊^d-1,
(1+|v|)^γ+2, e· v = 0,
and
a̅_ij(v) e_i e_j ≤ C (1+|v|)^γ+2, e∈𝕊^d-1,
(1+|v|)^γ, e· v = |v|,
where a̅_ij(v) is defined by (<ref>).
The lower bounds (<ref>) are proven in <cit.>. For the upper bounds, the formula (<ref>) implies
a̅_ij(v)e_i e_j = a_d,γ∫_^d(1 - (w· e/|w|)^2)|w|^γ+2 f(v-w) w
≲∫_^d |w|^γ+2 f(v-w) w
= ∫_^d |v-z|^γ+2 f(z) z
≲∫_^d (|v|^γ+2 + |z|^γ+2)f(z) z
≲ M_0(1+|v|^γ+2) +E_0,
since 0≤γ+2≤ 2.
The above bound is valid for all e∈𝕊^d-1. If e is parallel to v, then
∫_^d(1 - (w· e/|w|)^2)|w|^γ+2 f(v-w) w = ∫_^d(1 - ((v-z)· e/|v-z|)^2)|v-z|^γ+2 f(z) z
= ∫_^d(|v-z|^2 - (|v|-z· e)^2)|v-z|^γ f(z) z
= ∫_^d(|z|^2 - (z· e)^2)|v-z|^γ f(z) z
= ∫_^d |z|^2sin^2θ |v-z|^γ f(z) z,
where θ is the angle between v and z. Let R = |v|/2. If z∈ B_R(v), then |sinθ| ≤ |v-z|/|v|, and
∫_B_R(v) |z|^2sin^2θ |v-z|^γ f(z) z ≤∫_B_R(v) |z|^2 |v|^-2 |v-z|^γ+2 f(z) z
≤|v|^γ/2^γ+2∫_B_R(v)|z|^2f(z) z ≲ E_0|v|^γ.
If |v-z| ≥ R=|v|/2, then |v-z|^γ≲ |v|^γ, and we have
∫_^d∖ B_R(v) |z|^2sin^2θ |v-z|^γ f(z) z ≲ |v|^γ∫_^d∖ B_R(v) |z|^2 f(z) z ≲ E_0 |v|^γ.
In the proof of Theorem <ref>, we will need to keep track of how the bounds on b̅ and c̅ in the next two lemmas depend on the local L^∞ norm of f. In Lemma <ref> and Lemma <ref>, f_L^∞(A) means f(t,x,·)_L^∞(A) for any set A⊆^d.
Let f satisfy (<ref>), (<ref>), and (<ref>). Then c̅(v) defined by (<ref>) satisfies
c̅(v) ≲ (1+|v|)^γ(1+f_L^∞(B_ρ(v)))^-γ/d, -2d/(d+2) ≤γ < 0,
(1+|v|)^-2-2γ/d(1+f_L^∞(B_ρ(v)))^-γ/d, -d < γ < -2d/(d+2),
where the constants
depend on d, γ, M_0, and E_0, and
ρ = 1, |v|<2,
|v|^-2/d, |v|≥ 2.
Assume first |v|≥ 2. Let r := |v|^-2/d (1+f_L^∞(B_ρ(v)))^-1/d < ρ. Consider
I_1 = ∫_B_r |w|^γ f(v-w) w, I_2 = ∫_B_|v|/2∖ B_r |w|^γ f(v-w) w,
I_3 = ∫_^d ∖ B_|v|/2 |w|^γ f(v-w) w.
We have
I_1 ≲f_L^∞(B_ρ(v)) r^d+γ≲ |v|^-2-2γ/df_L^∞(B_ρ(v))^-γ/d .
I_2 ≲ r^γ |v|^-2∫_B_|v|/2 |v-w|^2 f(v-w) w ≲ E_0 |v|^-2-2γ/d (1+f_L^∞(B_ρ(v)))^-γ/d.
Finally, for |w|≥ |v|/2, we have |w|^γ≲ |v|^γ, and
I_3 ≲ |v|^γ∫_^d∖ B_|v|/2f(v-w) w ≤ M_0|v|^γ.
Thus c̅(v)≲ (1+f_L^∞(B_ρ(v)))^-γ/d|v|^-2-2γ/d + |v|^γ for |v|>2.
When γ∈(-d, -2d/(d+2)), -2-2γ/d > γ and we get
c̅(v)≲ (1+f_L^∞(B_ρ(v)))^-γ/d|v|^-2-2γ/d.
When γ∈[-2d/(d+2), 0), γ > -2-2γ/d and we get
c̅(v)≲ (1+f_L^∞(B_ρ(v)))^-γ/d|v|^γ.
This completes the proof in the case |v| > 2.
For |v| ≤ 2, γ∈ (-d, 0], and any R∈ (0,1] we have that
∫_^d |w|^γ f(v-w) w = ∫_B_R |w|^γ f(v-w) w + ∫_^d∖ B_R |w|^γ f(v-w) w,
≲ R^d+γf_L^∞(B_1(v)) + R^γ M_0.
Choosing R = (1+f_L^∞(B_1(v)) )^-1/d, we then have
c̅(v) ≲ (R^d+γf_L^∞(B_1(v)) + R^γ M_0) ≲ (1+ f_L^∞(B_1(v)))^-γ/d,
for |v|≤ 2, completing the proof.
Let f satisfy (<ref>), (<ref>), and (<ref>).
Then b̅(v) defined by (<ref>) satisfies
the estimate
|b̅(v)| ≲(1+|v|)^γ+1(1+f_L^∞(B_ρ(v)))^-(γ+1)/d, γ∈ [-2,-1),
(1+ |v|)^γ+1, γ∈ [-1,0]
where the constants depend on d, γ, M_0, and E_0, and
ρ = 1, |v|<2,
|v|^-2/d, |v|≥ 2.
Taking norms, we have
|b̅(v)| ≲∫_^d |w|^1+γf(v-w) w.
If γ∈ [-2,-1), then 0>1+γ≥ -1 ≥ -2d/(d+2), and the conclusion follows from Lemma <ref>. If γ∈ [-1,0], we have
|b̅(v)| ≲∫_^d (|v|^γ+1 + |v-w|^γ+1 ) f(v-w) w
≲ |v|^γ+1 M_0 + E_0^(1+γ)/2 M_0^(1-γ)/2≲ (1+ |v|)^γ+1.
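The second-to-last inequality uses the following interpolation bound, recorded here for completeness: by Hölder's inequality with exponents 2/(1+γ) and 2/(1-γ),
∫_^d |u|^1+γ f(u) u = ∫_^d (|u|^2 f)^(1+γ)/2 f^(1-γ)/2 u ≤(∫_^d |u|^2 f u )^(1+γ)/2(∫_^d f u )^(1-γ)/2≤ E_0^(1+γ)/2 M_0^(1-γ)/2.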
§ LOCAL ESTIMATES
In this section we refine the local estimates in <cit.> and <cit.> for linear kinetic equations with rough coefficients. Essentially, we start from their results and apply scaling techniques to improve the local L^∞ estimates.
We will need the following technical lemma. See <cit.> for the proof.
Let η(r)≥ 0 be bounded in [r_0,r_1] with r_0≥ 0. Suppose for r_0≤ r<R≤ r_1, we have
η(r) ≤θη(R) + A/(R-r)^α + B
for some θ∈ [0,1) and A,B,α≥ 0. Then there exists c(α,θ)>0 such that for any r_0≤ r<R≤ r_1, there holds
η(r) ≤ c(α,θ)(A/(R-r)^α + B).
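Although we refer to the literature for the proof, the standard argument is short, so we sketch it here for the reader's convenience (for α>0; the case α=0 is immediate). Pick τ∈ (θ^1/α,1) and set t_0 = r, t_i+1 = t_i + (1-τ)τ^i(R-r), so that t_i ↑ R. Iterating the hypothesis k times,
η(r) ≤θ^k η(t_k) + ∑_i=0^k-1θ^i [ A ((1-τ)τ^i(R-r))^-α + B ],
and since η is bounded and θτ^-α<1, letting k→∞ gives
η(r) ≤ (1-τ)^-α(1-θτ^-α)^-1 A/(R-r)^α + (1-θ)^-1 B,
which is the claim with c(α,θ) = max( (1-τ)^-α(1-θτ^-α)^-1, (1-θ)^-1 ).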
If g(t,x,v)≥ 0 is a weak solution of
∂_t g + v·∇_x g = ∇_v ·(A∇_v g) + B ·∇_v g + s
in Q_1,
with
0 < λ I ≤ A(t,x,v) ≤Λ I, (t,x,v)∈ Q_1,
|B(t,x,v)| ≤Λ, (t,x,v)∈ Q_1,
s∈ L^∞(Q_1),
then
sup_Q_1/2 g ≤ C(g_L^∞_t,xL^1_v(Q_1) + s_L^∞(Q_1)),
with C depending only on d, λ, and Λ.
It is proven in <cit.> that if g(t,x,v) solves (<ref>) weakly with A, B, and s as in the statement of the proposition, then
g_L^∞(Q_1/2)≤ C (g_L^2(Q_1) + s_L^∞(Q_1)),
with C depending on d, λ, and Λ. Since g_L^2(Q_1)≤√(ω_d)g_L_t,x^∞ L_v^2(Q_1), where ω_d = ℒ^d(B_1), we also have
g_L^∞(Q_1/2)≤ C (g_L_t,x^∞ L_v^2(Q_1) + s_L^∞(Q_1)).
To replace g_L_t,x^∞ L_v^2(Q_1) with g_L_t,x^∞ L_v^1(Q_1), we use an interpolation argument. For 0<r≤ 1, define
[ g_r(t,x,v) := g(r^2t, r^3x,rv), s_r(t,x,v) := s(r^2t,r^3x,rv),; A_r(t,x,v) := A(r^2t,r^3x, rv), B_r(t,x,v) := B(r^2t,r^3x,rv), ]
and note that g_r satisfies
∂_t g_r + v·∇_x g_r = ∇_v ·(A_r∇_v g_r) + rB_r ·∇_v g_r + r^2s_r
in Q_1. Since r≤ 1, we may apply (<ref>) to g_r, which gives
g_L^∞(Q_r/2)≤ C(1/r^d/2g_L_t,x^∞ L_v^2(Q_r) + r^2s_L^∞(Q_r)),
for any r∈ (0,1]. Now, for θ,R∈ (0,1), apply (<ref>) in Q_(1-θ)R(z) for each z∈ Q_θ R to obtain
g_L^∞(Q_θ R) ≤ C(1/[(1-θ)R]^d/2g_L_t,x^∞ L_v^2(Q_R) + R^2s_L^∞(Q_R))
≤ C(1/[(1-θ)R]^d/2g_L_t,x^∞ L_v^2(Q_R) + s_L^∞(Q_1)).
By the Hölder and Young inequalities, we have
g_L^∞(Q_θ R) ≤ C(1/[(1-θ)R]^d/2g_L^∞(Q_R)^1/2g_L_t,x^∞ L_v^1(Q_R)^1/2 + s_L^∞(Q_1))
≤1/2g_L^∞(Q_R) + C(1/[(1-θ)R]^dg_L_t,x^∞ L_v^1(Q_R) + s_L^∞(Q_1)).
Define η(ρ) = g_L^∞(Q_ρ) for ρ∈ (0,1]. Then for any 0<r<R≤ 1, we have
η(r) ≤1/2η(R) + C/(R-r)^dg_L_t,x^∞ L_v^1(Q_1) + Cs_L^∞(Q_1).
Applying Lemma <ref>, we obtain
η(r) ≤C/(R-r)^dg_L_t,x^∞ L_v^1(Q_1) + Cs_L^∞(Q_1).
Let R→ 1- and set r=1/2 to conclude (<ref>).
Let g(t,x,v) solve (<ref>) weakly in Q_R(z_0) for some z_0∈^2d+1 and R>0, with
0 < λ I ≤ A(t,x,v) ≤Λ I, (t,x,v)∈ Q_R,
|B(t,x,v)| ≤Λ/R, (t,x,v)∈ Q_R,
s∈ L^∞(Q_R).
Then the improved estimate
g(t_0,x_0,v_0) ≤ C(g_L_t,x^∞ L_v^1(Q_R)^2/(d+2)s_L^∞(Q_R)^d/(d+2) + R^-dg_L_t,x^∞ L_v^1(Q_R))
holds, with C depending only on d, λ, and Λ.
By applying the change of variables
(t,x,v) ↦( t-t_0/R^2, x-x_0 - (t-t_0)v_0/R^3, v-v_0/R)
to g and s, we may suppose (t_0,x_0,v_0) = (0,0,0) and R=1.
For r∈(0,1] to be determined, we make the transformation (<ref>) as in the proof of Proposition <ref> and get a function g_r satisfying (<ref>) in Q_1. Then Proposition <ref> implies
g(0,0,0) ≤ C(g_r_L^∞_t,xL^1_v(Q_1) + r^2s_r_L^∞(Q_1))
= C(r^-dg_L^∞_t,xL^1_v(Q_r) + r^2s_L^∞(Q_r))
≤ C(r^-dg_L^∞_t,xL^1_v(Q_1) + r^2s_L^∞(Q_1)).
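The choice of r in each case below is dictated by balancing the two competing terms on the right-hand side: they coincide precisely when
r^-dg_L^∞_t,xL^1_v(Q_1) = r^2s_L^∞(Q_1), i.e., r = (g_L^∞_t,xL_v^1(Q_1)/s_L^∞(Q_1))^1/(d+2),
and at this value both terms equal g_L_t,x^∞ L_v^1(Q_1)^2/(d+2)s_L^∞(Q_1)^d/(d+2).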
If g_L_t,x^∞ L_v^1(Q_1)≤s_L^∞(Q_1), then the choice r = (g_L^∞_t,xL_v^1(Q_1)/s_L^∞(Q_1))^1/(d+2) implies
g(0,0,0) ≤ Cg_L_t,x^∞ L_v^1(Q_1)^2/(d+2)s_L^∞(Q_1)^d/(d+2).
On the other hand, if s_L^∞(Q_1)≤g_L_t,x^∞ L_v^1(Q_1), the choice r=1 implies g(0,0,0) ≤ Cg_L_t,x^∞ L_v^1(Q_1), so we have
g(0,0,0) ≤ C(g_L_t,x^∞ L_v^1(Q_1)^2/(d+2)s_L^∞(Q_1)^d/(d+2) + g_L_t,x^∞ L_v^1(Q_1))
in both cases.
§ GLOBAL ESTIMATES
In this section, we prove global upper bounds for solutions f of (<ref>). Our bounds depend only on the estimates on the hydrodynamic quantities (<ref>), (<ref>) and (<ref>). Our bound does not depend on an upper bound of the initial data. We also get that the solution will have certain polynomial decay in v for t>0.
From Lemma <ref>, we see that the bounds on a̅_ij(t,x,v) degenerate as |v|→∞. In the first lemma, we show how to change variables to obtain an equation with uniform ellipticity constants independent of |v|.
Let z_0 =(t_0,x_0,v_0)∈_+×^2d be such that |v_0|≥ 2, and let T be the linear transformation such that
T e = |v_0|^1+γ/2 e , e · v_0 = 0
|v_0|^γ/2e, e · v_0 = |v_0|.
Let T̃(t,x,v) = (t,Tx,Tv), and define
𝒯_z_0(t,x,v) := 𝒮_z_0∘T̃ (t,x,v)
= (t_0+t,x_0+T x + t v_0 ,v_0 + T v).
Then,
(a) There exists a constant C>0 independent of v_0∈^d∖ B_2 such that for all v∈ B_1,
C^-1 |v_0| ≤ |v_0 + Tv| ≤ C |v_0|.
(b) If f_T(t,x,v) := f(𝒯_z_0(t,x,v)), then f_T satisfies
∂_t f_T + v ·∇_x f_T = ∇_v [ A(z)∇_v f_T] + B(z)·∇_v f_T + C(z) f_T
in Q_R for any 0<R<min{√(t_0),c_1 |v_0|^-1-γ/2}, where c_1 is a universal constant, and
λ I ≤ A(z) ≤Λ I,
|B(z)| ≲ |v_0|^1+γ/2(1+f(t,x,·)_L^∞(B_ρ(v)))^-(γ+1)/d, γ∈ [-2,-1),
|v_0|^1+γ/2, γ∈ [-1,0],
|C(v)| ≲ |v_0|^γ(1+f(t,x,·)_L^∞(B_ρ(v)))^-γ/d, -2d/(d+2) ≤γ < 0,
|v_0|^-2-2γ/d(1+f(t,x,·)_L^∞(B_ρ(v)))^-γ/d, -2 < γ < -2d/(d+2),
with λ and Λ universal, and ρ≲ 1+ |v_0|^-2/d.
Since |v|≤ 1 and |v_0| > 2,
|v_0| - |v_0|^1+γ/2≤ |v_0| - |Tv| ≤ |v_0 + Tv| ≤ |v_0| + |Tv| ≤ |v_0| + |v_0|^1+γ/2.
Thus, (a) follows since γ∈ (-2,0).
For (b), by direct computation, f_T satisfies (<ref>) with
A(z) = T^-1a̅(𝒯_z_0(z)) T^-1, B(z) = T^-1b̅(𝒯_z_0(z)), C(z) = c̅(𝒯_z_0(z)).
In order to keep the proof clean, let us write a̅_ij and A_ij instead of a̅_ij(𝒯_z_0(z)) and A_ij(z) for the rest of the proof.
Fix z = (t,x,v)∈ Q_R, and let ṽ = v_0+T v. From part (a), we know that |ṽ| ≈ |v_0|. Applying Lemma <ref>, we have that for any unit vector e,
a̅_ij e_ie_j ≲(1+|v_0|)^γ, e = ṽ / |ṽ|,
(1+|v_0|)^γ+2, e ∈ S^d-1.
and,
a̅_ij e_ie_j ≳(1+|v_0|)^γ, e ∈ S^d-1,
(1+|v_0|)^γ+2, e ·ṽ = 0.
Our first step is to verify that we can switch ṽ for v_0 in (<ref>) and (<ref>).
Let us start with (<ref>). This is where the assumption |v| < R ≤ c_1 |v_0|^-1-γ/2 plays a role. We can choose c_1 so as to ensure that |Tv| ≤ 1. Since v_0 = ṽ - Tv and using the fact that a̅_ij is positive definite,
a̅_ij (v_0)_i (v_0)_j ≤ 2 a̅_ijṽ_i ṽ_j + 2 a̅_ij(Tv)_i (Tv)_j ≤ C |v_0|^2+γ.
Let e_0 = v_0 / |v_0|. The computation above tells us that a̅_ij (e_0)_i (e_0)_j ≲ |v_0|^γ.
Let us now turn to (<ref>). We will show that
a̅_ij w_iw_j ≳ (1+|v_0|)^γ+2 |w|^2 if w · v_0 = 0.
Note that (1+|v_0|)^2+γ and (1+|v_0|)^γ are comparable when |v_0| is small, so we only need to verify (<ref>) for w · v_0 = 0 and |v_0| arbitrarily large. For such vector w, we write w= ηṽ + w' with w'·ṽ = 0. Since |ṽ - v_0| = |Tv| ≤ 1, we have |η| = |w·ṽ| / |ṽ|^2 = |w·(ṽ - v_0)|/|ṽ|^2 ≤ |w| |ṽ|^-2. Moreover, |w'| ≈ |w|.
Since a̅_ij is positive definite,
a̅_ij (√(2)ηṽ - w'/√(2))_i (√(2)ηṽ - w'/ √(2))_j ≥ 0,
then we have
a̅_ij w_i w_j ≥1/2a̅_ij w'_iw'_j - η^2a̅_ijṽ_i ṽ_j
≥( c(1+|v_0|)^γ+2 - (1+|v_0|)^γ)|w|^2 ≳ (1+|v_0|)^γ+2 |w|^2,
as desired.
Let w ∈^d be arbitrary. We will estimate A_ij w_i w_j from above, writing w = μ e_0 + w̃ with w̃· e_0 = 0.
A_ij w_i w_j = |v_0|^-γ( μ^2 a̅_ij (e_0)_i (e_0)_j + 2 μ |v_0|^-1a̅_ij (e_0)_i w̃_j + |v_0|^-2a̅_ijw̃_i w̃_j ),
and using that a̅_ij is positive definite,
A_ijw_iw_j ≤ 2|v_0|^-γ( μ^2 a̅_ij (e_0)_i (e_0)_j + |v_0|^-2a̅_ijw̃_i w̃_j ),
≤ C ( μ^2 + |w̃|^2 ) =: Λ |w|^2.
This establishes upper bound {A_ij}≤Λ I for some Λ>0.
Now we will prove the lower bound for A_ij. Again, we write w = μ e_0 + w̃ with e_0 ·w̃ = 0. We need to analyze the quadratic form associated with the coefficients a̅_ij more closely. From (<ref>), we have that for some universal constant c>0,
c |v_0|^γ (μ^2 + |w̃|^2) ≤a̅_ij w_i w_j = μ^2 a̅_ij (e_0)_i (e_0)_j + 2 μa̅_ij (e_0)_i w̃_j + a̅_ijw̃_i w̃_j.
Moreover, (<ref>) implies that there is a universal constant δ > 0 so that
c |v_0|^γ (μ^2 + |w̃|^2) ≥δμ^2 a̅_ij (e_0)_i (e_0)_j + δ |v_0|^-2a̅_ijw̃_i w̃_j.
Subtracting the two inequalities above,
(1-δ) μ^2 a̅_ij (e_0)_i (e_0)_j + 2 μa̅_ij (e_0)_i w̃_j + (1-δ |v_0|^-2) a̅_ijw̃_i w̃_j ≥ 0.
The same inequality holds if we replace w = μ e_0 + w̃ with w = (1-δ/2)^-1/2μ e_0 + (1-δ/2)^1/2|v_0|^-1w̃, therefore
1-δ/1-δ/2μ^2 a̅_ij (e_0)_i (e_0)_j + 2 μ |v_0|^-1a̅_ij (e_0)_i w̃_j + (1-δ/2)(1-δ |v_0|^-2) |v_0|^-2a̅_ijw̃_i w̃_j ≥ 0.
Recalling the formula above for A_ij w_i w_j, and replacing it in the left hand side, we get
A_ij w_i w_j - ( 1 - 1-δ/1-δ/2) |v_0|^-γμ^2 a̅_ij (e_0)_i (e_0)_j - ( 1 - (1-δ/2)(1-δ |v_0|^-2) ) |v_0|^-2-γa̅_ijw̃_i w̃_j ≥ 0.
Therefore, using (<ref>) and (<ref>),
A_ij w_i w_j ≥( 1 - 1-δ/1-δ/2) |v_0|^-γμ^2 a̅_ij (e_0)_i (e_0)_j + ( 1 - (1-δ/2)(1-δ |v_0|^-2) ) |v_0|^-2-γa̅_ijw̃_i w̃_j,
≥λ (μ^2 + |w̃|^2),
for some universal constant λ > 0. This establishes the lower bound {A_ij}≥λ I.
For large |v_0|, the lower bound for A_ij can also be obtained by a more direct argument. Indeed, (<ref>) and (<ref>) imply that there is a unit eigenvector e̅ of a̅_ij with eigenvalue λ̅≈ (1+|v_0|)^γ. For |v_0| sufficiently large, the angle between e_0 and e̅ must be small: writing e_0 = (e_0·e̅)e̅ + e^⊥, we have a̅_ij(e_0)_i(e_0)_j = (e_0·e̅)^2 λ̅+ a̅_ij e^⊥_i e^⊥_j, so that (<ref>) and (<ref>) imply |e^⊥|^2 ≲ |v_0|^-2, and therefore, |e_0 - e̅|≲ |v_0|^-1. Similarly, for w̃ perpendicular to e_0, we can write w̃ = (w̃·e̅)e̅ + w̅ with |w̃ - w̅| ≲ |w̃| |v_0|^-1. Now, with w∈^d and w = μ e_0 + w̃ as above, we have
a̅_ij(e_0)_i w̃_j = a̅_ije̅_i (w̃ - w̅)_j + a̅_ij (e_0 - e̅)_i w̃_j
≥ - λ̅|w̃ - w̅| - |a̅ (e_0-e̅)| |w̃|
≥ -c(1+|v_0|)^γ |v_0|^-1 |w̃|,
for some constant c. Then, if |v_0|≥ρ_0 large enough, (<ref>) implies
A_ij w_i w_j ≥ |v_0|^-γ( μ^2 |v_0|^γ - 2cμ |v_0|^-2+γ|w̃| + |v_0|^γ|w̃|^2)
≥μ^2 + |w̃|^2 = |w|^2.
To derive the bound on B(z), Lemma <ref> and conclusion (a) imply
|B(z)| ≲T^-1 |b̅(𝒯_z_0(z))|
≲ (1+|v_0|)^γ/2+1(1+f_L^∞(B_ρ'(ṽ)))^-(γ+1)/d, γ∈ (-2,-1),
(1+|v_0|)^γ/2+1, γ∈ [-1,0],
where ρ' = |ṽ|^-2/d. From the triangle inequality, we have that B_ρ'(ṽ) ⊂ B_ρ(v_0), with ρ≲ (1+|v_0|)^-2/d+R(1+|v_0|)^(γ+2)/2≤ 1+(1+|v_0|)^-2/d.
The bound on C(z) follows in a similar manner, using Lemma <ref>.
The key lemma in the proof of Theorem <ref> is the following pointwise estimate on f:
Let γ∈ (-2,0], T_0>0, and let f:[0,T_0]×^2d→_+ solve the Landau equation (<ref>) weakly. If
f(t,x,v) ≤ K (1+t^-d/2)(1+|v|)^-α
in [0,T_0]×^2d for some α∈ [0,1] and K≥ 1, then
f(t,x,v) ≤ C((K (1+t^-d/2))^(d-γ)/(d+2) (1+|v|)^P(d,α,γ) + K^Q(γ)(1+ t^-d/2) (1+|v|)^-1),
for some C universal and
P(d,α,γ) = -1 - d(1+α)/(d+2), γ∈[-2d/(d+2), 0],
-[d(4+γ)+2 +2γ + α d]/(d+2), γ∈(-2, -2d/(d+2)),
Q(γ) = 0, γ∈ [-1,0]
-(1+γ), γ∈ (-2,-1).
Case 1: γ∈ [-1,0].
Let z_0 = (t_0,x_0,v_0) be such that such that |v_0|≥ 2. Define r_0 = min{1,√(t_0)}, and note that r_0^-d≈ (1+t_0^-d/2). Letting f_T be as in Lemma <ref>, we will estimate f_T(t,x,v) in Q_R, where
R := c_1(r_0/2)(1+|v_0|)^-(2+γ)/2,
with c_1 as in Lemma <ref>(b). We have that f_T solves (<ref>) in Q_R, and by Lemma <ref>(a) and our assumption on f,
f_T(t,x,v) ≲ K r_0^-d (1+|v_0|)^-α
in Q_R. Feeding (<ref>) into Lemma <ref>(b), we have
0< λ I ≤ A(z) ≤Λ I,
|B(z)| ≲ (1+|v_0|)^(2+γ)/2,
|C(z)| ≲(Kr_0^-d)^-γ/d(1+|v_0|)^γ,
in Q_R.
Let Q_T,R be the image of Q_R under z ↦𝒯_z_0(z), and note that
f_T _L_t,x^∞ L_v^1(Q_R) = det(T^-1)f_L_t,x^∞ L_v^1(Q_T,R)
= (1+|v_0|)^-[(d-1)(2+γ)/2 + γ/2]f_L_t,x^∞ L_v^1(Q_T,R)
≤ (1+|v_0|)^-(1+d(2+γ)/2)E_0,
where the last inequality comes from the energy bound (<ref>) and Lemma <ref>(a).
By (<ref>) and our choice of R, we can apply Lemma <ref> in Q_R with g = f_T and s = C(z) f_T to obtain
f(t_0,x_0,v_0) ≤ C(f_T_L_t,x^∞ L_v^1(Q_R)^2/(d+2) C(z) f_T_L^∞(Q_R)^d/(d+2)+ r_0^-d(1+|v_0|)^d(2+γ)/2f_T_L_t,x^∞ L_v^1(Q_R))
≤ C((Kr_0^-d)^(d-γ)/(d+2) (1+|v_0|)^-1-d(1+α)/(d+2) + r_0^-d(1+|v_0|)^-1),
using (<ref>), (<ref>), and (<ref>). Note that we derived (<ref>) assuming that |v_0|≥ 2. When |v_0|≤ 2, the matrix a̅_ij(z) is uniformly elliptic and we can apply Lemma <ref> directly to f to obtain (<ref>) in this case as well.
Case 2: γ∈ (-2, -1]. The argument is the same as in Case 1, but the estimates are quantitatively different as a result of the different bounds on B(z) and C(z) in Lemma <ref>. The changes are as follows: the radius R of the cylinder Q_R is chosen to be
R := K^(1+γ)/d(r_0/2)(1+|v_0|)^-(2+γ)/2,
the bound on B(z) becomes
|B(z)|≲ K^-(1+γ)/d r_0^1+γ(1+|v_0|)^(2+γ)/2≤Λ/R, z∈ Q_R,
and for C(z) we have
|C(z)| ≲(Kr_0^-d)^-γ/d(1+|v_0|)^γ, γ∈[-2d/(d+2), -1],
(Kr_0^-d)^-γ/d(1+|v_0|)^-2-2γ/d, γ∈(-2, -2d/(d+2)),
for z∈ Q_R. After applying Lemma <ref> and (<ref>), we obtain
f(t_0,x_0,v_0) ≤ C((K r_0^-d)^(d-γ)/(d+2) (1+|v_0|)^P(d,α,γ) + K^-(1+γ) r_0^-d (1+|v_0|)^-1),
as desired, with P(d,α,γ) as in the statement of the lemma.
We are now in a position to prove our main theorem.
Define
K:= sup_(0,T_0]×^2dmin{t^d/2,1}f(t,x,v).
First, we will show that K≤ K_*, where K_* is universal. We can assume K > 1. For each γ∈ (-2, 0], define p_γ: (1,∞)→ by
p_γ(K) = C(K^(d-γ)/(d+2) + 1), γ∈ (-1, 0],
C(K^(d-γ)/(d+2) + (K)^-(1+γ)), γ∈(-2, -1],
where C is the appropriate constant from Lemma <ref> for each γ. Then since -(1+γ)<1 and d-γ/d+2<1 for γ>-2, there is a K_*>1 such that
K_* = p_γ(K_*),
K >p_γ(K), K>K_*.
Let ε>0. By the definition of K, there exists some (t_0,x_0,v_0)∈ (0,T_0]×^2d such that f(t_0,x_0,v_0)> (K-ε)max{t_0^-d/2, 1}. Therefore, Lemma <ref> implies that
K-ε≤ p_γ(K).
Since this is true for all ε>0, we have that K≤ K_*.
If γ∈[-2d/(d+2),0], we apply Lemma <ref> with α=0 to conclude (<ref>) with K_0 =C K_*. If γ∈(-2,-2d/(d+2)), Lemma <ref> with α=0 implies
f(t,x,v) ≤ C K (1+t_0^-d/2) (1+|v_0|)^-[d(4+γ) + 2 + 2γ]/(d+2),
so we can apply Lemma <ref> again with α = [d(4+γ) + 2 + 2γ]/(d+2). We iterate this step, and since for any α∈ (0,1], we have α≤ 1 < d(4+γ)/2 + 1 + γ, the gain of decay at each step, -P(d,α,γ) - α, is bounded away from 0. Therefore, after finitely many steps (with the number of steps depending only on d and γ), we obtain (<ref>) for some K_0.
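To make the bootstrap concrete, the following minimal script (in Python; an illustration only, not part of the proof) iterates the decay exponent for the sample choice d=3, γ=-1.9. The cap at α=1 reflects the hypothesis α∈[0,1] of Lemma <ref> together with the (1+|v|)^-1 term in its conclusion.

```python
def P(d, alpha, gamma):
    """Decay exponent produced by one application of the key lemma."""
    if gamma >= -2 * d / (d + 2):
        return -1 - d * (1 + alpha) / (d + 2)
    return -(d * (4 + gamma) + 2 + 2 * gamma + alpha * d) / (d + 2)

d, gamma = 3, -1.9  # gamma lies in (-2, -2d/(d+2)) = (-2, -1.2), so we must iterate
alpha, steps = 0.0, 0
while alpha < 1.0:
    alpha = min(1.0, -P(d, alpha, gamma))  # usable decay capped at (1+|v|)^(-1)
    steps += 1
print(steps, alpha)  # prints "2 1.0": two applications already reach decay -1
```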
The next result shows that the decay generated in Theorem <ref> cannot be improved to polynomial decay with a power greater than d+2, or to exponential decay. Note that since b̅_i = -∂_j a̅_ij, for smooth solutions (<ref>) may be written equivalently in non-divergence form as
∂_t f + v·∇_x f= a̅(t,x,v)D_v^2 f + c̅(t,x,v)f.
Let γ∈ [-2,0] and p > d+2. Assume f solves (<ref>) in [0,T_0]×^2d with
f_in(x,v) ≥ c_0 (1+|v|)^-p
for v, x∈^d, for some c_0>0. Then there exist c_1>0 and β>0 such that
f(t,x,v) ≥ c_1 e^-β t (1+|v| ) ^-p
for all |v|≥ 1, x∈^d, and t∈ [0,T_0].
Let η:_+→_+ be a smooth, decreasing function such that η(r) ≡ 2 when r∈[0,1/2] and η(r) = r^-p when r∈ [1,∞). Note η(r) ≈ (1+r)^-p. Let us define ψ(t,x,v) = e^-β tη(|v|) with β to be chosen later. Choose an arbitrary R_0 >1, and recall from Lemma <ref> that a̅_ij∂_ijψ≥ - C(1+|v|)^γ+2|D^2ψ|. (Throughout this proof, a̅_ij and c̅ are defined in terms of f.) From our choice of η, it is clear that |D^2ψ|/ψ is uniformly bounded from above in _+×^d×{v: |v|≤ R_0 + 1}, so for β≥β_1 sufficiently large, we have
-∂_t ψ + a̅_ij∂_ijψ + c̅ψ≥βψ - C(1+|v|)^γ+2|D^2ψ| ≥ 0, |v| ≤ R_0+1.
For |v|≥ R_0, we estimate a̅_ij∂_ijψ more carefully. Since |v|≥ 1, we have
∂_ijψ = ∂_rrψ/|v|^2 v_i v_j + ∂_rψ/|v|( δ_ij - v_iv_j/|v|^2) = [p(p+1)|v|^-4 v_i v_j - p|v|^-2( δ_ij - v_iv_j/|v|^2)] e^-β t|v|^-p,
and Lemma <ref> implies
-∂_t ψ + a̅_ij∂_ijψ ≥βψ + [p(p+1) C_1|v|^-2+γ - pC_2|v|^γ] ψ≥(β - C |v|^γ) ψ.
For β≥β_2 sufficiently large, the right-hand side is positive for all |v|≥ R_0. Since c̅(t,x,v)≥ 0, this implies ψ(t,x,v) = e^-β tη(|v|) with β = max(β_1,β_2) is a subsolution of (-∂_t + a̅_ij∂_ij + c̅)g = 0 in the entire domain _+×^2d. By (<ref>), there is some c_1≥ c_0 so that f_in(x,v) ≥ c_1ψ(0,x,v) in ^2d. Now we can apply the maximum principle (see Appendix <ref>) to c_1ψ - f to conclude (<ref>).
The bound on the energy ∫_^d |v|^2 f(t,x,v) v ≤ E_0 < ∞ implies that f_in(x,v) cannot be bounded below by c_0|v|^-p with p≤ d+2 as |v|→∞.
In particular, Theorem <ref> tells us that there is no generation of moments when γ∈ [-2,0].
§ GAUSSIAN BOUNDS
We show the propagation of Gaussian upper bounds. The first lemma says that a sufficiently slowly decaying Gaussian is a supersolution of the linear Landau equation for large velocities. As above, the coefficients a̅_ij and c̅ in (<ref>) are defined in terms of f.
Let γ∈(-2,0]. Let f be a bounded function satisfying (<ref>), (<ref>), and (<ref>). Let a̅ and c̅ be given by (<ref>) and (<ref>) respectively. If α >0 is sufficiently small, then there exist R_0>0 and C>0, depending on d, γ, M_0, m_0, E_0, H_0 and f_L^∞, such that
ϕ(v) := e^-α |v|^2
satisfies
a̅_ij∂_ijϕ + c̅ϕ≤ -C|v|^γ+2ϕ,
for |v|≥ R_0.
Since ϕ is radial, we have
∂_ijϕ = ∂_rrϕ/|v|^2 v_i v_j + ∂_rϕ/|v|( δ_ij - v_iv_j/|v|^2) = [4α^2|v|^2 - 2α/|v|^2 v_i v_j - 2α( δ_ij - v_iv_j/|v|^2)] e^-α |v|^2,
and the bounds (<ref>) and (<ref>) imply
a̅_ij∂_ijϕ ≤[(4α^2|v|^2 - 2α)C_1 |v|^γ - 2α C_2 |v|^γ+2] e^-α |v|^2
= ((4α^2 C_1 - 2α C_2) |v|^γ+2 - 2α C_1 |v|^γ) e^-α|v|^2
≤ -C |v|^γ+2ϕ(v),
for |v| sufficiently large, provided α < C_2 / (2C_1). With Lemma <ref> (this is the point where f_L^∞ plays a role), this implies
a̅_ij∂_ijϕ + c̅ϕ≤[-C|v|^γ+2 + C|v|^-2-2γ/d]ϕ(v).
For -2<γ≤ 0, the first term on the right-hand side will dominate for large |v|, since γ+2 > 0 > -2-2γ/d.
Theorem <ref> gives us an upper bound for a solution f to the Landau equation which is useful away from t=0. If the initial data f(0,x,v) is a bounded function, we can improve our upper bound for small values of t using the upper bound for f(0,x,v). That is the purpose of the next lemma.
Let f: [0,T_0]×^2d→ be a solution of the Landau equation (<ref>) for some γ∈ (-2, 0], and suppose that g: [0,T_0]×^2d→ is bounded from above and a subsolution to the equation
∂_t g(t,x,v) + v·∇_x g(t,x,v) ≤a̅_ij(t,x,v) ∂_ij g(t,x,v) + c̅(t,x,v) g(t,x,v),
where a̅_ij and c̅ are defined in terms of f as in (<ref>) and (<ref>). Let κ(t) be defined by
κ(t) = {[ β/1+γ/2 t^1+γ/2, 0≤ t ≤ 1; β/1+γ/2 + β(t-1), t≥ 1 ]. ,
where β>0 depends only on d, γ, m_0, M_0, E_0, and H_0.
Then
sup_[0,T_0]×^d e^-κ(t)g_+(t,x,v) = sup_^2d g_+(0,x,v).
By Theorem <ref>, we have that f(t,x,v)≤ K_0t^-d/2 for 0<t<1. Hence by Lemma <ref>, we have that c̅(t,x,v) ≲ t^γ/2. Since γ > -2, for some universal β>0, κ(t) satisfies c̅(t,x,v)≤κ'(t) for all t>0. Thus g̃(t,x,v) = e^-κ(t)g(t,x,v) satisfies
∂_t g̃(t,x,v) + v·∇_xg̃(t,x,v) ≤a̅_ij(t,x,v)∂_ijg̃(t,x,v) + (c̅(t,x,v)-κ'(t))g̃(t,x,v)
≤a̅_ij(t,x,v)∂_ijg̃(t,x,v).
We apply Lemma <ref> from the Appendix to g̃(t,x,v) - sup_^2d g(0,x,v) to conclude (<ref>).
Applying Lemma <ref> with g=f and t ∈ [0,1] and Theorem <ref> for t>1, we have that there is some constant C_2 (depending on C_0, d, γ, M_0, m_0, E_0, and H_0) so that f(t,x,v) ≤ C_2 for all t ≥ 0, x ∈^d and v ∈^d.
Let ϕ(v) := e^-α|v|^2. From Lemma <ref>, we have that there is a C, depending on C_2, d, γ, M_0, m_0, E_0, and H_0, such that
sup_(0,T_0]×^d ×^da̅_ij∂_ijϕ + c̅ϕ≤ C ϕ.
Thus C_0 e^Ctϕ(v) is a supersolution of the equation and f(t,x,v) ≤ C_0 e^Ctϕ(v) for all t >0, x ∈^d and v ∈^d.
This upper bound is good for small values of t. We see that there is some time t_0 > 0 so that C_0 e^Ct_0ϕ(v) > C_2 for |v| < R_0. Here C_2 is the upper bound for f mentioned above and R_0 is the radius from Lemma <ref>. Thus, the function
g(t,x,v) :=[ f(t_0+t,x,v) - C_0 e^Ct_0ϕ(v) ]_+
is a subsolution of
g_t + v ·∇_x g ≤a̅_ij∂_ij g + c̅ g.
Applying the maximum principle (Lemma <ref>), we have that g ≤ 0 for all t>0, so f(t,x,v) ≤ C_0 e^t_0 Cϕ(v) for all t>t_0, and we conclude the proof.
By combining Theorem <ref> with the local Hölder estimates proved in <cit.> or <cit.>, we derive a global Hölder estimate for solutions of (<ref>) under the assumption that f_in(x,v) ≤ C_0 e^-α |v|^2. The following local estimate is essentially the same as Theorem 2 of <cit.>:
Let f be a weak solution of
∂_t f + v·∇_x f = ∇_v ·(A∇_v f) + B ·∇_v f + s
in Q_1, with λ I ≤ A ≤Λ I, |B|≤Λ, and s∈ L^∞(Q_1). Then f is Hölder continuous with respect to (t,x,v) in Q_1/2, and
|f(z_1) - f(z_1)|/|t_1-t_2|^β/2+|x_1-x_2|^β/3+|v_1+v_2|^β≤ C(f_L^2(Q_1) + s_L^∞(Q_1)),
for all z_1,z_2∈ Q_1/2, where β and C depend on d, λ, and Λ.
To state our theorem as a global Hölder estimate, we will need an appropriate notion of distance in ×^d ×^d which is invariant by Galilean transformations. A natural choice is the following
d_P(z_1,z_2) := min{ r : ∃ z ∈×^d ×^d : z_1 ∈ Q_r(z) and z_2 ∈ Q_r(z) }.
We can easily estimate the value of d_P(z_1,z_2) by the simpler formula
d_P(z_1,z_2) ≈ |t_1-t_2|^1/2 + |x_1 - x_2 - (t_1-t_2)(v_1+v_2)/2|^1/3 + |v_1-v_2|.
It turns out that we need to deform this distance using the transformation 𝒯_z described in Lemma <ref>. We define
d_L(z_1,z_2) := min{ |v|^1+γ/2 r: z ∈×^d ×^d : 𝒯_z^-1 z_1 ∈ Q_r and 𝒯_z^-1 z_2 ∈ Q_r}.
(Here, we make the convention that 𝒯_z = 𝒮_z when |v|< 2.) An explicit expression for d_L(z_1,z_2) is messy. It involves the affine transformation 𝒯 which is anisotropic and affects both the x and v variables. In the case that we compare two points with identical values of t and x, it is straightforward to check that when d_L((t,x,v_1),(t,x,v_2))<1, then d_L is equivalent to the metric introduced by Gressman-Strain <cit.> in their study of the Boltzmann equation.
Under the assumptions of Theorem , there exist C>0 and β∈ (0,1) depending on C_0, α, d, γ, m_0, M_0, E_0, and H_0, such that for any z_1,z_2∈ [0,T_0]×^2d, one has
|f(z_1) - f(z_2)| ≤ C(e^-α|v_1|^2 + e^-α|v_2|^2)min{ 1, (1+ t_1^-β/2+t_2^-β/2) d_L(z_1,z_2)^β}.
If |v_1|≤ 2 or |v_2| ≤ 2, the result follows by applying Theorem <ref> directly to f, noting that 1≲ e^-α|v_1|^2+e^-α|v_2|^2. So, we can assume that |v_1| > 2 and |v_2|>2.
Let z̅ = (t̅, x̅, v̅) be the point achieving the minimum in the definition of d_L(z_1,z_2). Thus z̃_1 := 𝒯_z̅^-1 z_1 ∈ Q_δ and z̃_2 := 𝒯_z̅^-1 z_2 ∈ Q_δ, where δ = |v̅|^-1-γ/2d_L(z_1,z_2).
Let r := min(t_1^1/2, t_2^1/2,(1+|v̅|)^-1-γ/2). If δ≥ r/2, then we simply estimate |f(z_1) - f(z_2)| ≤ C_1(e^-α|v_1|^2+e^-α|v_2|^2) from Theorem <ref>. We need to concentrate on the case δ < r/2.
Let us consider the function f_T as in Lemma <ref>, with base point z̅. By our choice of r, f_T satisfies an equation of the form (<ref>) in Q_r, and since Theorem <ref> gives us a bound on f_T_L^∞, we have that A is uniformly elliptic (with constants independent of z̅), |B|≲ |v̅|^1+γ/2, and |s|=|C(z)f_T| ≲ |f_T|. Defining f̃_T(t,x,v) := f_T(r^2t,r^3x,rv), we see that f̃_T satisfies another equation of the form (<ref>) with the new |B| bounded independently of |v̅|. Moreover, the points (r^-2t̃_1, r^-3x̃_1, r^-1ṽ_1) and (r^-2t̃_2, r^-3x̃_2, r^-1ṽ_2) belong to Q_r^-1δ⊂ Q_1/2. Therefore, we can apply Theorem <ref> to f̃_T in Q_1 to obtain
|f(z_1) - f(z_2)|/r^-β d_L(z_1,z_2)^β|v̅|^β(1+γ/2) = |f_T(z̃_1) - f_T(z̃_2)|/r^-βδ^β
= |f̃_T(r^-2t̃_1, r^-3x̃_1, r^-1ṽ_1) - f̃_T(r^-2t̃_2, r^-3x̃_2, r^-1ṽ_2)|/r^-βδ^β,
≲f̃_T_L^1(Q_1) + f̃_T_L^∞(Q_1)≲sup_v∈ Q_r/2e^-α|v̅+Tv|^2≲ e^-α|v̅|^2.
We have used Theorem <ref> to estimate the L^∞ norm of f̃_T in Q_1. Rewriting this estimate, we obtain
|f(z_1) - f(z_2)| ≲ r^-β |v̅|^-β(1+γ/2) d_L(z_1,z_2)^β e^-α|v̅|^2
≲(1+t_1^-β/2+t_2^-β/2) d_L(z_1,z_2)^β(e^-α|v_1|^2 + e^-α|v_2|^2) .
§ MAXIMUM PRINCIPLE FOR WEAK SOLUTIONS TO KINETIC FOKKER-PLANCK EQUATIONS
In this appendix, we give a proof of the maximum principle in a form that is convenient for our purposes.
The following proposition is perhaps a classical result. We prove it here since we could not find an easy reference, and also for completeness. The result is for equations on a bounded domain with general coefficients (not necessarily defined by integrals as above).
Let Q = [0,T_0]×Ω, where Ω⊂^2d is a bounded domain, and assume that g is a subsolution of the equation
∂_t g + v·∇_x g ≤∇_v· [a(t,x,v) ∇_v g] + b(t,x,v)·∇_v g + c(t,x,v) g,
in the weak sense in Q, where a is uniformly elliptic in Q with constants λ and Λ, and b and c are uniformly bounded in Q.
If g≤ 0 on the parabolic boundary of Q, then g≤ 0 in Q.
Choosing the test function ϕ = g_+, the weak formulation of (<ref>) gives
∫_Q g_+ (∂_t g + v·∇_x g) x v t ≤∫_Q (-a ∇_v g_+ ·∇_v g_+ + g_+ b ·∇_v g_+ + c g_+^2) x v t,
or
∫_Q1/2d/dt (g_+)^2 x v t ≤∫_Q (-λ |∇_v g_+|^2 + |b| g_+ |∇_v g_+| + c g_+^2) x v t
≤(b_L^∞^2/(4λ) + c_L^∞) ∫_Q g_+^2 x v t,
by Young's inequality. We apply Gronwall's Lemma to ∫_Ω (g_+)^2 x v on [0, T_0] to conclude g_+ ≡ 0 in Q.
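For completeness, the Gronwall step reads as follows: setting y(t) := ∫_Ω g_+^2(t,x,v) x v and C_* := b_L^∞^2/(4λ) + c_L^∞, the estimate above gives, after integration in time,
y(t) ≤ y(0) + 2C_*∫_0^t y(s) s, t∈[0,T_0],
and since g≤ 0 at t=0 gives y(0)=0, Gronwall's lemma yields y≡ 0, that is, g_+ ≡ 0 in Q.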
Next, we derive a maximum principle on the whole space for subsolutions of a Landau-type equation without a zeroth-order term:
Let g be a bounded function on [0,T_0]×^2d that satisfies
∂_t g + v·∇_x g ≤a̅(t,x,v) D^2_v g,
in the weak sense. Here, a̅(t,x,v) is defined as in (<ref>) in terms of a function f satisfying (<ref>), (<ref>), and (<ref>). If g(0,x,v) ≤ 0 in ^2d, then g(t,x,v) ≤ 0 in [0,T_0]×^2d.
By the bounds on a̅ given in Lemma <ref>, we have
a̅_ij∂_ij (1+|v|) ≤ C_1(1+|v|)^1+γ,
for some constant C_1, and thus ϕ_1(t,v) := e^C_1t (1+|v|) satisfies
∂_t ϕ_1(t,v) ≥a̅_ij(t,x,v) ∂_ijϕ_1(t,v).
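For completeness, here is a sketch of the computation behind this barrier inequality, assuming the coefficient bound tr a̅(t,x,v) ≲ (1+|v|)^{2+γ} from Lemma <ref> (small velocities, where 1+|v| is not twice differentiable, only affect the constant):
\[
\partial_{ij}(1+|v|) = \frac{\delta_{ij}}{|v|} - \frac{v_i v_j}{|v|^3} \preceq \frac{1}{|v|}\, I ,
\]
so, since \bar a is nonnegative definite,
\[
\bar a_{ij}\,\partial_{ij}(1+|v|) \le \frac{\operatorname{tr}\bar a}{|v|} \lesssim \frac{(1+|v|)^{2+\gamma}}{|v|} \le C_1 (1+|v|)^{1+\gamma} \qquad \text{for } |v|\ge 1 .
\]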
Let ε_1>0 be a small constant. Since g is bounded, there is R(ε_1)>0 such that g - ε_1 ϕ_1<0 whenever |v|≥ R(ε_1). Let R_1> R(ε_1), and choosing C_2>0 large enough depending on R_1, we can define ϕ_2(t,x) := (1+|x|) e^C_2 t, and we have
∂_t ϕ_2 + v·∇_x ϕ_2 ≥ 0,
whenever |v|<R_1. Finally, for ε_2>0 arbitrary, we define
g̃(t,x,v) := [g(t,x,v) - ε_1 ϕ_1(t,v) - ε_2 ϕ_2(t,x)]_+.
It is clear that g̃ is a subsolution as in (<ref>) with c≡ 0, whenever |v|<R_1. For R(ε_2) sufficiently large, we have that g - ε_1ϕ_1 - ε_2ϕ_2 <0 for |x|≥ R(ε_2) or |v|≥ R(ε_1). Then for any R_2>R(ε_2), we have that g̃ = 0 on the parabolic boundary of [0,T_0]× B_R_2× B_R_1, so Proposition <ref> applied to g̃ gives
g - ε_1ϕ_1 - ε_2 ϕ_2 ≤ 0, |v|<R_1, |x|< R_2.
Take R_2→∞ and ε_2→ 0 to conclude
g - ε_1ϕ_1 ≤ 0, |v|<R_1.
Take R_1→∞ and ε_1→ 0, and the proof is complete.
| We consider the spatially inhomogeneous Landau equation, a kinetic model from plasma physics that describes the evolution of a particle density f(t,x,v)≥ 0 in phase space (see, for example, <cit.>). It is written in divergence form as
∂_t f + v·∇_x f = ∇_v·[a̅(t,x,v)∇_v f] + b̅(t,x,v)·∇_v f + c̅(t,x,v) f,
where t∈ [0,T_0], x∈^d, and v∈^d. The coefficients a̅(t,x,v)∈^d× d, b̅(t,x,v) ∈^d, and c̅(t,x,v)∈ are given by
a̅(t,x,v) := a_d,γ∫_^d( I - w/|w|⊗w/|w|) |w|^γ + 2 f(t,x,v-w) w,
b̅(t,x,v) := b_d,γ∫_^d |w|^γ w f(t,x,v-w) w,
c̅(t,x,v) := c_d,γ∫_^d |w|^γ f(t,x,v-w) w,
where γ is a parameter in [-d,∞), and a_d,γ, b_d,γ, and c_d,γ are constants. When γ = -d, the formula for c̅ must be replaced by c̅ = c_d,γ f. Equation (<ref>) arises as the limit of the Boltzmann equation as grazing collisions predominate, i.e. as the angular singularity approaches 2 (see the discussion in <cit.>). The case d=3, γ=-3, corresponds to particles interacting by Coulomb potentials at small scales. The case γ∈ [-d,0) is known as soft potentials, γ = 0 as Maxwell molecules, and γ >0 as hard potentials. In this paper, we focus on moderately soft potentials, which is the case γ∈ (-2,0).
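To make the size of these coefficients concrete, the following Python sketch evaluates a̅, b̅, and c̅ at a single velocity by brute-force quadrature. Everything here is an illustrative assumption: the space-homogeneous Maxwellian f, the choice d=3 and γ=-1, the unit constants a_{d,γ}=b_{d,γ}=c_{d,γ}=1, and the truncated grid.

import numpy as np

d, gamma = 3, -1.0   # assumed: moderately soft potentials in three dimensions

def f_maxwell(v):
    # space-homogeneous Maxwellian, normalized to unit mass
    return (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum(v * v, axis=-1))

L, N = 8.0, 40
ax = np.linspace(-L, L, N) + 1e-6          # shift the grid off the point w = 0
W = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
dw = (ax[1] - ax[0]) ** 3
r = np.linalg.norm(W, axis=-1)

def coefficients(v):
    fs = f_maxwell(v - W)                  # f(v - w) sampled on the grid
    # projection I - w ⊗ w / |w|^2 appearing in the kernel of a_bar
    proj = np.eye(d) - W[..., :, None] * W[..., None, :] / r[..., None, None] ** 2
    a = np.sum(r[..., None, None] ** (gamma + 2) * proj * fs[..., None, None],
               axis=(0, 1, 2)) * dw
    b = np.sum(r[..., None] ** gamma * W * fs[..., None], axis=(0, 1, 2)) * dw
    c = np.sum(r ** gamma * fs) * dw
    return a, b, c

a, b, c = coefficients(np.array([2.0, 0.0, 0.0]))
print("tr a_bar:", np.trace(a), " b_bar:", b, " c_bar:", c)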
We assume that the mass density, energy density, and entropy density are bounded above, and the mass density is bounded below, uniformly in t and x:
0<m_0≤∫_^d f(t,x,v) v ≤ M_0,
∫_^d |v|^2 f(t,x,v) v ≤ E_0,
∫_^d f(t,x,v) log f(t,x,v) v ≤ H_0.
In the space homogeneous case, because of the conservation of mass and energy, and the monotonicity of the entropy, it is not necessary to make the assumptions (<ref>), (<ref>) and (<ref>). It would suffice to require the initial data to have finite mass, energy and entropy. It is currently unclear whether these hydrodynamic quantities stay under control for large times and away from equilibrium in the space inhomogeneous case. Thus, at this point, it is simply an assumption we make.
We now state our main results. Our first theorem makes no further assumption on the initial data f_in:^2d→ beyond what is required for a weak solution to exist in [0,T_0].
Let γ∈ (-2,0]. If f:[0,T_0]×^2d→ is a bounded weak solution of (<ref>) satisfying (<ref>), (<ref>), and (<ref>), then there exists K_0>0 such that f satisfies
f(t,x,v) ≤ K_0 (1+t^-d/2) (1+|v|)^-1,
for all (t,x,v)∈ [0,T_0]×^2d. The constant K_0 depends on d, γ, m_0, M_0, E_0, and H_0.
Note that even though we work with a bounded weak solution f, none of the constants in our estimates depend on f_L^∞. Note also that our estimate does not depend on T_0. We use a definition of weak solution for which the estimates in <cit.> apply, since that is the main tool in our proofs.
We will show in Theorem <ref> that an estimate of the form (<ref>) cannot hold with a power of (1+|v|) less than -(d+2), which also implies there is no a priori exponential decay. On the other hand, if f_in satisfies a Gaussian upper bound in the velocity variable, this bound is propagated:
Let f: [0,T_0]×^2d→ be a bounded weak solution of the Landau equation (<ref>) such that f_in(x,v) ≤ C_0 e^-α|v|^2, for some C_0>0 and a sufficiently small α>0. Then
f(t,x,v) ≤ C_1e^-α|v|^2,
where C_1 depends on C_0, α, d, γ, m_0, M_0, E_0 and H_0. The value of α must be smaller than some α_0>0 that depends on γ, d, m_0, M_0, E_0 and H_0.
This estimate is also independent of T_0. As a consequence of Theorem <ref>, we will show in Theorem <ref> that in this regime, f is uniformly Hölder continuous on [t_0,T_0]×^2d for any t_0∈ (0,T_0).
Note that under some formal asymptotic regime, the hydrodynamic quantities of the inhomogeneous Landau equation converge to solutions of the compressible Euler equation <cit.>, which is known to develop singularities in finite time. Should we expect singularities to develop in finite time for the inhomogeneous Landau equation as well? That question seems to be out of reach with current techniques. A more realistic project is to prove that the solutions stay smooth for as long as the hydrodynamic quantities stay under control (as in (<ref>), (<ref>) and (<ref>)). The results in this paper are an important step forward in that program.
§.§ Related work
It was established in <cit.> that solutions to (<ref>) become C^∞ smooth in all three variables conditionally on the solution being away from vacuum, bounded in H^8 (in the d=3 case) and having infinitely many finite moments. It would be desirable to extend this conditional regularity result to less stringent assumptions. In particular, the assumptions (<ref>), (<ref>) and (<ref>) are much weaker, and are stated in terms of physically relevant hydrodynamic quantities. In <cit.>, the authors show how their local Hölder continuity result for linear kinetic equations with rough coefficients can be applied to solutions of the Landau equation provided that (<ref>), (<ref>) and (<ref>) hold and, in addition, the solution f is assumed to be bounded. While we also assume boundedness of f, our results do not quantitatively rely on this and in addition tell us some information about the decay for large velocities.
The local estimates for parabolic kinetic equations with rough coefficients play an important role in this work. Local L^∞ estimates were obtained in <cit.> using Moser iteration, and local Hölder estimates were proven in <cit.> using a weak Poincaré inequality. A new proof was given in <cit.> using a version of De Giorgi's method.
Classical solutions for (<ref>) have so far only been constructed in a close-to-equilibrium setting: see the work of Guo <cit.> and Mouhot-Neumann <cit.>. A suitable notion of weak solution, for general initial data, was constructed by Alexandre-Villani <cit.>.
The global L^∞ estimate we prove in Theorem <ref> is similar to an estimate in <cit.> for the Boltzmann equation. The techniques in the proof are completely different. The propagation of Gaussian bounds that we give in Theorem <ref> is reminiscent of the result in <cit.>. That result is for the space-homogeneous Boltzmann equation with cut-off, which is in some sense the opposite of the Landau equation in terms of the angular singularity in the cross section.
In order to keep track of the constants for parabolic regularization estimates (as in <cit.>) for large velocities, we describe a change of variables in Lemma <ref>. This change of variables may be useful in other contexts. It is related to one mentioned in the appendix of <cit.> for the Boltzmann equation.
For the homogeneous Landau equation, which arises when f is assumed to be independent of x in (<ref>), the theory is more developed. The C^∞ smoothing is established for hard potentials in <cit.> and for Maxwell molecules in <cit.>, under the assumption that the initial data has finite mass and energy. Propagation of L^p estimates in the case of moderately soft potentials was shown in <cit.> and <cit.>. Global upper bounds in a weighted L^1_t(L^3_v) space were established in <cit.>, even for γ=-3, as a consequence of entropy dissipation. Global L^∞ bounds that do not depend on f_in and that do not degenerate as t→∞ were derived in <cit.> for moderately soft potentials, and this result also implies C^2 smoothing by standard parabolic regularity theory.
Note that in the space homogeneous case our assumptions (<ref>), (<ref>) and (<ref>) hold for all t>0 provided that the initial data has finite mass, energy and entropy. Both Theorems <ref> and <ref> are new results even in the space homogeneous case. The previous results for soft potentials do not address the decay of the solution for large velocities.
§.§ Organization of the paper
In Section <ref>, we establish precise bounds on the coefficients a̅, b̅, and c̅ in (<ref>). In Section <ref>, we derive the local estimates we will use to prove Theorem <ref>, starting from the Harnack estimate of <cit.>. Section <ref> contains the proof of Theorem <ref> and a propagating lower bound that implies the exponent of (1+|v|) in (<ref>) cannot be arbitrarily high. In Section <ref>, we prove Theorem <ref> and the Hölder estimate, Theorem <ref>. In Appendix <ref>, we derive a convenient maximum principle for kinetic Fokker-Planck equations.
§.§ Notation
We say a constant is universal if it depends only on d, γ, m_0, M_0, E_0, and H_0. The notation A≲ B means that A≤ CB for a universal constant C, and A ≈ B means that A≲ B and B≲ A. We will let z=(t,x,v) denote a point in _+×^d×^d. For any z_0=(t_0,x_0,v_0), define the Galilean transformation
𝒮_z_0(t,x,v) := (t_0+t, x_0 + x +tv_0,v_0+v).
We also have
𝒮_z_0^-1(t,x,v) := (t-t_0, x - x_0 -(t-t_0)v_0,v-v_0).
For any r>0 and z_0 = (t_0,x_0,v_0), let
Q_r(z_0) := (t_0-r^2,t_0] ×{x : |x-x_0 - (t-t_0) v_0| < r^3 }× B_r(v_0),
and Q_r = Q_r(0,0,0). The shift 𝒮_z_0 and the scaling of Q_r correspond to the symmetries of the left-hand side of (<ref>). We will sometimes write ∂_i or ∂_ij, and these will always refer to differentiation in v. | null | null | null | null | null |
http://arxiv.org/abs/1701.07724v1 | 20170126144603 | Spin relaxation 1/f noise in graphene | ["S. Omar", "M. H. D. Guimarães", "A. Kaverzin", "B. J. van Wees", "I. J. Vera-Marun"] | cond-mat.mes-hall | ["cond-mat.mes-hall"] |
Corresponding authors: S. Omar ([email protected]) and I. J. Vera-Marun ([email protected]).
Affiliations: S. Omar, M. H. D. Guimarães, A. Kaverzin, B. J. van Wees, and I. J. Vera-Marun: The Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands. M. H. D. Guimarães: also at Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853, USA. I. J. Vera-Marun: also at School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL, UK.
We report the first measurement of 1/f type noise associated with electronic spin transport, using single layer graphene as a prototypical material with a large and tunable Hooge parameter. We identify the presence of two contributions to the measured spin-dependent noise: contact polarization noise from the ferromagnetic electrodes, which can be filtered out using the cross-correlation method, and the noise originating from the spin relaxation processes. The noise magnitudes for spin and charge transport differ by three orders of magnitude, implying different scattering mechanisms for the 1/f fluctuations in the charge and spin transport processes. A modulation of the spin-dependent noise magnitude by changing the spin relaxation length and time indicates that spin-flip processes dominate the spin-dependent noise.
Noise in electronic transport is often treated as a nuisance. However, it can carry much more information than the average (mean) of the signal and can probe the system dynamics in greater detail than conventional dc measurements <cit.>. Low-frequency fluctuations with a power spectral density (PSD) that depends inversely on frequency, also known as 1/f noise, are a commonly observed phenomenon in solid-state devices.
A textbook explanation of the processes generating 1/f noise is given by the McWhorter model where traps are distributed over an energy range, leading to a distribution of characteristic time scales of trapping-detrapping processes of the electrons in the transport channel and causing slow fluctuations in conductivity <cit.>.
Graphene is an ideal material for spin transport due to its low spin-orbit coupling and small hyperfine interactions <cit.>. However, the experimentally observed spin relaxation time τ_s ∼ 3 ns and spin relaxation length λ_s ∼ 24 µm <cit.> are lower than the theoretically predicted τ_s∼ 100 ns and λ_s∼ 100 µm <cit.>. There are a number of experiments and theories suggesting that the charge and magnetic impurities present in graphene might play an important role in the lower observed value of the spin relaxation time <cit.>. It is an open question whether these impurities affect the spin transport in the same way as the charge transport, or if the scattering mechanisms in the two processes behave differently. For electronic transport in graphene, the effect of impurities can be studied via 1/f noise measurements. Along the same line, measuring low-frequency fluctuations of the spin accumulation can unravel the role of impurities in the spin transport.
In this work, we report the first observation of spin-dependent 1/f noise, which we study in graphene spin valves in a non-local geometry. We find that the extracted noise magnitude (γ^s) for the spin transport is three orders of magnitude higher than the noise magnitude (γ^c) obtained from the local charge noise measurements, indicating different scattering mechanisms producing the 1/f fluctuations in the charge and spin transport. Such a large difference had not been pointed out until now, although different scattering mechanisms for spin transport have been proposed before <cit.>.
In a recent experiment, Arakawa et al. <cit.> measured a spin-dependent shot noise due to the spin-injection process. They also ruled out the effect of spin-flip scattering, based on the similar Fano factor values obtained for the charge and spin transport. In contrast, we measure the spin-dependent noise in a different frequency regime and find that the dominant scattering mechanisms contributing to the 1/f noise are the processes which flip the spins, giving rise to a higher noise magnitude compared to the charge transport and highlighting the role of impurities in the spin relaxation.
In order to perform the spin-dependent noise measurements, we prepare graphene spin valves. Single layer graphene is contacted with 35 nm thick ferromagnetic cobalt electrodes, with a ∼0.8 nm thick TiO_2 tunnel barrier inserted in between for efficient spin injection and detection (see supplementary for fabrication details) <cit.>. We characterize two different regions of our sample. They are labeled as device A and device B for further discussion.
A lock-in detection technique is used for characterizing the charge and the spin transport properties. All the measurements are carried out in high vacuum (∼ 1 × 10^-7 mbar) at room temperature. For charge transport measurements we use the four probe connection scheme shown in Fig. <ref>(a), which minimizes the contribution of the contacts.
Spin transport is measured by applying a current between contacts C1-C2 to inject spins into graphene and measuring the spin accumulation between contacts C3-C5 (or C4-C5) in a four probe non-local detection scheme as shown in Fig. <ref>(b). This method decouples the paths of the spin and charge transport and thus minimizes the contribution of the charge signal to the measured spin signal <cit.>. In order to perform spin valve measurements, we first apply a high in-plane magnetic field (B_∥) along the easy axes of the ferromagnets to set their relative magnetizations in the same direction. Then, the magnetic field is swept in the opposite direction in order to reverse the magnetization direction of the electrodes one by one, depending on their coercivities. Each magnetization reversal appears as a sharp transition in the signal (Fig. <ref>(a)). For Hanle precession measurements, an out of plane magnetic field (B_⊥) is applied to precess the injected spins around the applied field for a fixed magnetization configuration of the ferromagnetic electrodes. A representative Hanle measurement from device A is shown in Fig. <ref>(b). With this measurement, we can extract the spin diffusion coefficient D_s and the spin relaxation time τ_s, following the procedure described in ref. <cit.>, and use them to calculate the contact polarization (P). For device A, we obtain D_s∼0.03 m^2/s, τ_s∼ 110 ps and P ∼ 5%, and for device B, D_s∼ 0.01 m^2/s, τ_s∼ 290 ps and P ∼ 10%.
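To illustrate how D_s and τ_s are extracted in practice, here is a minimal Python sketch that fits a Hanle curve with one commonly used one-dimensional spin-diffusion line shape; the synthetic data, the injector-detector spacing L, and all starting values are placeholder assumptions, not our measured data.

import numpy as np
from scipy.optimize import curve_fit

MU_B, HBAR, G = 9.274e-24, 1.0546e-34, 2.0   # Bohr magneton, hbar, g-factor

def hanle(B, Ds, tau_ns, A, L=2e-6):
    # R_NL(B) ∝ ∫_0^∞ dt exp(-L^2/(4 Ds t))/sqrt(4 pi Ds t)
    #              * cos(omega_L t) * exp(-t/tau_s), with omega_L = g mu_B B / hbar
    tau = tau_ns * 1e-9
    t = np.linspace(1e-12, 2e-9, 4000)               # time grid for the integral
    omega = G * MU_B * np.asarray(B)[:, None] / HBAR
    kern = np.exp(-L**2 / (4 * Ds * t)) / np.sqrt(4 * np.pi * Ds * t)
    return A * np.trapz(kern * np.cos(omega * t) * np.exp(-t / tau), t, axis=1)

rng = np.random.default_rng(0)
B = np.linspace(-0.15, 0.15, 61)
data = hanle(B, 0.03, 0.11, 1.0) + rng.normal(0, 2e-7, B.size)   # synthetic curve

popt, _ = curve_fit(hanle, B, data, p0=(0.01, 0.2, 1.0))
print("D_s = %.3f m^2/s, tau_s = %.0f ps" % (popt[0], popt[1] * 1e3))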
In order to measure the noise from the sample, we use a two channel dynamic signal analyzer from Stanford Research System (model SR785) which acquires the signal fluctuations in time and converts it into a frequency domain signal via Fast Fourier Transform (FFT) algorithm.
The 1/f noise of the charge transport in graphene is measured in a local four probe scheme, similar to the charge transport measurements (Fig. <ref>(a)). A dc current is applied between the ferromagnetic injectors C2 and C5. Since the contacts are designed lithographically on both sides of the ferromagnetic electrode, the fluctuations in the voltage drop V_local across the flake can be measured via the contact pair C1-C3 (path 1) and C1'-C3' (path 2). The measured signals are cross correlated in order to filter out the noise from external electronics such as preamplifiers and the spectrum analyzer <cit.>. The electronic 1/f noise S_V^local is measured at different bias currents (I_dc) at a fixed carrier density. By fitting the spectrum with the Hooge formula for 1/f noise i.e. S_V^local=γ^cV_local^2/f^a, where V_local is the average voltage drop across the flake and a is the exponent ∼ 1, we obtain the noise magnitude for the charge transport γ^c∼ 10^-7 (device A in Fig. <ref>(c)), similar to the values reported in literature <cit.> (see supplementary information for the details). The charge noise magnitude is defined as the Hooge parameter γ_H^c divided by the total number of carriers in the transport channel, i.e. γ^c=γ_H^c/(n*W*L). Here n is charge carrier density, W and L are the width and length of the transport channel. γ^c depends both on the concentration and the type of scatterers e.g. short range and long range scatterers <cit.>.
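A minimal sketch of the cross-correlation idea with synthetic data: two measurement paths share the 1/f noise of the flake but have independent amplifier noise, so the magnitude of their cross power spectral density retains only the common part. The sampling rate, record length, and noise amplitudes below are placeholders.

import numpy as np
from scipy.signal import welch, csd

rng = np.random.default_rng(1)
fs, n = 262_000, 2**20                      # sampling rate, record length

def pink(n):
    # generate ~1/f noise by shaping white noise with f^(-1/2) in Fourier space
    f = np.fft.rfftfreq(n, d=1 / fs)
    spec = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    spec[1:] /= np.sqrt(f[1:]); spec[0] = 0
    return np.fft.irfft(spec, n)

device = pink(n)                            # common 1/f noise of the sample
ch1 = device + 0.5 * rng.normal(size=n)     # path 1: adds its own amplifier noise
ch2 = device + 0.5 * rng.normal(size=n)     # path 2: independent amplifier noise

f, S11 = welch(ch1, fs, nperseg=2**16)
f, S12 = csd(ch1, ch2, fs, nperseg=2**16)
# |S12| follows the device 1/f spectrum; S11 is offset by the white amplifier floor
print(np.abs(S12[1:6]) / S11[1:6])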
The spin-dependent 1/f noise can be expressed as:
S_V^NL=γ^sV_NL^2/f^a=γ^s(Pμ_s/e)^2/f^a
Here S_V^NL is the spin-dependent non-local noise, γ^s=γ_H^s/(n*W*λ_s) is the noise magnitude for spin transport, e is the electronic charge and V_NL= Pμ_s/e is the measured non-local spin signal due to the average spin accumulation μ_s in the channel [variables V_local, V_NL, μ_s, P, λ_s represent the time average of the quantities]. Here γ_H^s represents the Hooge parameter for spin tranport. In contrast with the charge current, spin current is not a conserved quantity and exists over an effective length scale of λ_s.
Spin transport in a non-local geometry is realized in three fundamental steps: i) spin current injection, ii) spin diffusion through the transport channel and iii) detection of the spin accumulation. All these steps can contribute to the spin-dependent noise. For the first step of spin injection, we use a dc current source to inject spin current, which helps to eliminate the resistance fluctuations in the injector contact, leaving only the polarization fluctuations of the injector electrode as a possible noise source. The polarization fluctuations of the injector can arise due to thermally activated domain wall hopping/rotation in the ferromagnet <cit.>. The second possible noise source contributing to the fluctuations in the spin accumulation is the transport channel itself, either via the fluctuating channel resistance or via fluctuations in the spin-relaxation process. The third noise source, similar to the first one, can be present at the detector electrode due to fluctuating contact polarization.
The spin-dependent noise in graphene is measured non-locally as shown in the connection diagram of Fig. <ref>(b). During the noise measurement, we keep the spin injection current I_dc fixed (10 µA) and change the detected spin accumulation in three different ways: at B_⊥ = 0 T, (i) by changing the spin accumulation via switching the relative magnetization direction of the injector electrodes, or (ii) by keeping the spin accumulation constant and changing the spin detection sensitivity via switching the relative magnetization direction of the detector electrodes; and at B_⊥≠ 0 T, (iii) by dephasing the spins during transport and thus reducing the spin accumulation. We can also measure the noise due to a spin-independent background signal at high B_⊥∼ 0.12 T, where the spin accumulation is suppressed. The spin-dependent component S_V^NL can then be estimated by subtracting S_V^NL (at B_⊥∼ 0.12 T) from the measured S_V^NL.
For the non-local noise measurements in spin valve configuration, the noise PSD measured (Fig. <ref>(c)) for the magnetization configuration corresponding to a higher spin accumulation (level II; blue spectrum) is higher in magnitude than for the one corresponding to a lower spin accumulation (level I; red spectrum) of the spin valve in Fig. <ref>(a). In a similar way for the Hanle configuration, we measure the maximum magnitude of the spin-dependent noise for B_⊥ = 0 T, corresponding to maximum spin accumulation (Fig. <ref>(d)). On increasing |B_⊥|, both the spin accumulation and the associated noise are reduced.
In order to study its dependence with the spin accumulation, we fit each measured spectrum of S_V^NL versus frequency, obtained at different spin accumulation values (V_NL) with Eq. <ref> in the frequency range of 0.5 Hz-5 Hz. We take the value of S_V^NL at f = 1 Hz from the fit as a representative value of the 1/f spectrum. The exponent a obtained from the fit is ∼ 1. A summary of the data points for the noise PSD at different values of spin accumulation, obtained for device A using Hanle precession is plotted in Fig. <ref>(a). The S_V^NL∝μ_s^2 relation is valid in the lowest order approximation. The parabolic fit of the measured non local noise using Eq. <ref> gives γ^s∼ 10^-4. It should be noted that γ^s∼ 1000×γ^c, for the same device. Geometrical factors such as length scales cannot account for such a huge difference, as for this sample we obtain λ_s∼ 1.5 μm which is similar to the channel length for charge 1/f noise. The three orders of magnitude enhanced γ^s points towards distinctive scattering processes affecting the spin dependent noise, in contrast to the charge 1/f noise. Our findings can be explained along the direction of the recently proposed resonant scattering mechanism <cit.> for spin transport where intrinsically present magnetic impurities strongly scatter the spins without a significant effect on the charge scattering strength. The scattering cross section of these impurities can fluctuate in time and could give rise to a spin dependent 1/f noise.
An analytical expression for the spin-dependent noise (at f = 1 Hz) which is derived from the equation for the non-local spin signal V_NL (see supplementary for the complete derivation) can be written as:
S_V^NL/ V_NL^2 =γ^s≃S_P/P^2+S_λ_s/λ_s^2(1+L/λ_s)^2
where S_P is the contact polarization noise which is Fourier transform of the auto correlation function for the time dependent polarization fluctuations i.e. ℱ⟨ P(t)P(t+τ)⟩ , S_λ_s is the noise associated with the spin transport i.e. spin relaxation noise (ℱ⟨λ_s(t)λ_s(t+τ)⟩), L is the separation between the inner injector and detector electrodes.
Eq. <ref> suggests that γ^s is increased for lower values of λ_s. In order to confirm that the spin-dependent noise is affected by the spin transport properties, we measure S_V^NL as a function of the back-gate voltage (carrier density). In agreement with literature <cit.>, a higher τ_s is observed at higher charge carrier densities for single layer graphene (Fig. <ref>(c)). The representative data is shown for device B. It is worth emphasizing here that for similar charge and spin transport parameters (R_sq,λ_s) for device A (350 Ω, 1.8 µm) and device B (400 Ω, 1.6 µm), we obtain similar values of γ^s∼ 10^-4. However, both devices have different values of contact polarization P ∼ 5% for device A and P ∼ 10% for device B. This similarity in γ^s values despite the difference in P indicates that there is insignificant contribution of the contact polarization noise to the extracted γ^s. On the other hand, for the noise measurements at different carrier densities, we get an increase in γ^s at lower values of τ_s (Fig. <ref>(d)). The carrier density dependent behavior of the extracted γ^s is in qualitative agreement with the λ_s dependence of γ^s in Eq. <ref> (red curve in Fig. <ref>(d)), supporting our hypothesis that the measured spin-dependent noise is dominated by the noise produced by the spin transport (relaxation) process in graphene.
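The trend can be read off directly from Eq. <ref>: a short evaluation of γ^s for decreasing λ_s at fixed relative noise levels (all numbers below are placeholders) reproduces the qualitative rise seen in Fig. <ref>(d).

import numpy as np

L = 2e-6                        # injector-detector spacing (placeholder)
SP_over_P2 = 1e-6               # relative polarization noise level (placeholder)
Slam_over_lam2 = 3e-5           # relative spin-relaxation noise level (placeholder)

lam = np.linspace(0.5e-6, 2.5e-6, 5)          # spin relaxation lengths to compare
gamma_s = SP_over_P2 + Slam_over_lam2 * (1 + L / lam) ** 2
for l, g in zip(lam, gamma_s):
    print("lambda_s = %.1f um -> gamma_s = %.1e" % (l * 1e6, g))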
In order to estimate/filter out the contribution of the contact polarization noise in our measurements, we use spatial cross-correlation (SXC). We measure the contact polarization noise (=S_P/P^2 ×V_NL^2∼ 10^-16 V^2Hz^-1) which is lower by two orders of magnitude than the spin relaxation noise power between C3 and C5 (= S_λ_s/ λ_s^2× V_NL^2∼ 10^-14 V^2Hz^-1)(see supplementary for measurement scheme). Here, based on the reciprocity argument for the injector and detector in spin-valve configuration, we can assume equal noise contribution from the injector electrode and can safely rule out the effect of the polarization noise.
Since the spin accumulation μ_s∝exp^(-L/λ_s), the spin relaxation noise is also expected to decay exponentially in accordance with the relation S_V^NL∝μ_s^2. We extend our analysis to study the distance dependence of the spin relaxation noise. With the spatial cross-correlation we can also measure the spin relaxation noise between the detector contacts C4 and C5 while removing the polarization noise from contact C4. For this, we measure the spin-dependent noise at different detector contacts via path 1 and path 3 in Fig. <ref>(b) independently, and cross correlate the measured signals (see supplementary). The polarization noise contribution from the reference detector C5 is expected to be negligible due to the lower value of spin accumulation at the contact (L_C1-C5/λ_s∼ 4).
We measure S_V^NL at the detectors C3 and C4 for two back-gate voltages: at V_g = 0 V (metallic regime) and at V_g = -45 V (close to the Dirac point) (see supplementary information).
Using the derived Eq. <ref>, we can now calculate λ_s from the noise measurement as:
S_λ_s^C3/S_λ_s^C4≃(expL^C3-C4/λ_s)^2(1+L^C1-C3/λ_s/1+L^C1-C4/λ_s)^2
Here S_λ_s^C3 and S_λ_s^C4 are the spin relaxation noise at contacts C3 and C4, and L^Ci-Cj is the separation between contacts C_i and C_j (i, j = 1,3,4). The solution to Eq. <ref> for the experimentally obtained noise ratios gives a value of λ_s∼1.5 µm and 1.0 µm at V_g = 0 V and -45 V, respectively. A close agreement with the values obtained independently from the Hanle measurements (λ_s∼ 1.5 µm at V_g = 0 V and 1.1 µm at V_g = -45 V) validates the analytical framework of Eq. <ref> and Eq. <ref>.
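For illustration, Eq. <ref> can be solved for λ_s by one-dimensional root finding; the contact spacings and the measured noise ratio below are placeholder values, not the actual device geometry.

import numpy as np
from scipy.optimize import brentq

# contact spacings (placeholders; the real values are set by the device layout)
L13, L14 = 2.0e-6, 4.0e-6       # injector C1 to detectors C3 and C4
L34 = L14 - L13                 # spacing between C3 and C4

def noise_ratio(lam):
    # right-hand side of Eq. (3): S_lambda^C3 / S_lambda^C4 as a function of lambda_s
    return np.exp(2 * L34 / lam) * ((1 + L13 / lam) / (1 + L14 / lam)) ** 2

measured = 10.0                 # placeholder for the measured noise ratio
lam_s = brentq(lambda l: noise_ratio(l) - measured, 0.2e-6, 10e-6)
print("lambda_s = %.2f um" % (lam_s * 1e6))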
By performing the first measurement of 1/f noise associated with spin transport, we demonstrate that the non-local spin-dependent noise in graphene is dominated by the underlying spin relaxation processes. The obtained noise magnitude for charge and spin transport differ by three orders of magnitude, indicating fundamentally different scattering mechanisms such as resonant scattering of the spins, where the fluctuating scattering cross-section of the intrinsically present impurities could produce the spin dependent 1/f fluctuations <cit.>.
The presented work establishes 1/f noise measurements as a complementary approach to extract spin transport parameters, which is expected to be valid for other spintronic materials where impurities play an important role in modifying the underlying spin relaxation process.
We acknowledge J. G. Holstein, H. M. de Roosz and H. Adema for their technical assistance. This research work was financed under EU-graphene flagship program (637088) and supported by the Zernike Institute for Advanced Materials, the Netherlands Organization for Scientific Research (NWO) and the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-open Grant No. 618083 (CN-TQC).
[1] R. Landauer, Nature 392, 658 (1998).
[2] R. Jayaraman and C. Sodini, IEEE Trans. Electron Devices 36, 1773 (1989).
[3] F. N. Hooge, T. G. M. Kleinpenning, and L. K. J. Vandamme, Rep. Prog. Phys. 44, 479 (1981).
[4] P. Dutta and P. M. Horn, Rev. Mod. Phys. 53, 497 (1981).
[5] C. Ertler, S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 80, 041405 (2009).
[6] D. Huertas-Hernando, F. Guinea, and A. Brataas, Phys. Rev. Lett. 103, 146801 (2009).
[7] J. Ingla-Aynés, M. H. D. Guimarães, R. J. Meijerink, P. J. Zomer, and B. J. van Wees, Phys. Rev. B 92, 201410 (2015).
[8] V. K. Dugaev, E. Y. Sherman, and J. Barnaś, Phys. Rev. B 83, 085306 (2011).
[9] H. Min, J. E. Hill, N. A. Sinitsyn, B. R. Sahu, L. Kleinman, and A. H. MacDonald, Phys. Rev. B 74, 165310 (2006).
[10] C. Ertler, S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 80, 041405 (2009).
[11] M. B. Lundeberg, R. Yang, J. Renard, and J. A. Folk, Phys. Rev. Lett. 110, 156601 (2013).
[12] D. Kochan, M. Gmitra, and J. Fabian, Phys. Rev. Lett. 112, 116602 (2014).
[13] D. Soriano, D. V. Tuan, S. M.-M. Dubois, M. Gmitra, A. W. Cummings, D. Kochan, F. Ortmann, J.-C. Charlier, J. Fabian, and S. Roche, 2D Mater. 2, 022002 (2015).
[14] S. Omar, M. Gurram, I. J. Vera-Marun, X. Zhang, E. H. Huisman, A. Kaverzin, B. L. Feringa, and B. J. van Wees, Phys. Rev. B 92, 115442 (2015).
[15] T. Arakawa, J. Shiogai, M. Ciorga, M. Utz, D. Schuh, M. Kohda, J. Nitta, D. Bougeard, D. Weiss, T. Ono, and K. Kobayashi, Phys. Rev. Lett. 114, 016601 (2015).
[16] N. Tombros, C. Józsa, M. Popinciuc, H. T. Jonkman, and B. J. van Wees, Nature 448, 571 (2007).
[17] H. E. van den Brom and J. M. van Ruitenbeek, Phys. Rev. Lett. 82, 1526 (1999).
[18] A. A. Balandin, Nat. Nanotechnol. 8, 549 (2013).
[19] A. N. Pal, S. Ghatak, V. Kochat, E. S. Sneha, A. Sampathkumar, S. Raghavan, and A. Ghosh, ACS Nano 5, 2075 (2011).
[20] G. Liu, S. Rumyantsev, M. S. Shur, and A. A. Balandin, Appl. Phys. Lett. 102, 093111 (2013).
[21] A. A. Kaverzin, A. S. Mayorov, A. Shytov, and D. W. Horsell, Phys. Rev. B 85, 075435 (2012).
[22] M. A. Stolyarov, G. Liu, S. L. Rumyantsev, M. Shur, and A. A. Balandin, Appl. Phys. Lett. 107, 023106 (2015).
[23] Note: the variables V_local, V_NL, μ_s, P, and λ_s represent the time averages of the corresponding quantities.
[24] L. Jiang, E. R. Nowak, P. E. Scott, J. Johnson, J. M. Slaughter, J. J. Sun, and R. W. Dave, Phys. Rev. B 69, 054407 (2004).
[25] S. Ingvarsson, G. Xiao, R. A. Wanner, P. Trouilloud, Y. Lu, W. J. Gallagher, A. Marley, K. P. Roche, and S. S. P. Parkin, J. Appl. Phys. 85, 5270 (1999).
[26] P. J. Zomer, M. H. D. Guimarães, N. Tombros, and B. J. van Wees, Phys. Rev. B 86, 161416 (2012).
[27] C. Józsa, T. Maassen, M. Popinciuc, P. J. Zomer, A. Veligura, H. T. Jonkman, and B. J. van Wees, Phys. Rev. B 80, 241403 (2009).
[28] T. Maassen, I. J. Vera-Marun, M. H. D. Guimarães, and B. J. van Wees, Phys. Rev. B 86, 235408 (2012).
[29] F. N. Hooge, IEEE Trans. Electron Devices 41, 1926 (1994).
[30] A. J. Berger, M. R. Page, H. Wen, K. M. McCreary, V. P. Bhallamudi, R. K. Kawakami, and P. Chris Hammel, Appl. Phys. Lett. 107, 142406 (2015).
[31] M. H. D. Guimarães, J. J. van den Berg, I. J. Vera-Marun, P. J. Zomer, and B. J. van Wees, Phys. Rev. B 90, 235428 (2014).
[32] I. N. Krivorotov, T. Gredig, K. R. Nikolaev, A. M. Goldman, and E. D. Dahlberg, Phys. Rev. B 65, 180406 (2002).
[33] A. Fert and H. Jaffrès, Phys. Rev. B 64, 184420 (2001).
Supplementary Information
§ SAMPLE PREPARATION
Graphene is mechanically exfoliated from a highly oriented pyrolytic graphite (HOPG) ZYA grade crystal onto a pre-cleaned Si/SiO_2 substrate (300 nm thick SiO_2), where n^++ doped Si is used as a back gate electrode. Single layer graphene flakes were identified using optical contrast. Ferromagnetic contacts are patterned via electron beam lithography on the PMMA (poly (methyl methacrylate)) coated graphene flake. Then, 0.8 nm of titanium (Ti) is deposited in two steps, each step of 0.4 nm of Ti deposition followed by in-situ oxidation by pure O_2, to form an oxide tunnel barrier and overcome the conductivity mismatch problem <cit.>. On top of the oxide barrier we deposit 35 nm of cobalt for the spin selective contacts. To prevent oxidation of the ferromagnetic electrodes, the contacts are covered with a 3 nm thick aluminum layer.
§ LOCAL CHARGE NOISE MEASUREMENTS AND ITS MAGNETIC FIELD DEPENDENCE
For the noise measurements, we record 800 samples at a high sampling frequency (262 kHz) and measure the 1/f noise in the frequency range up to 25 Hz with a resolution of 31.2 mHz. The final spectrum is recorded after performing root mean square averaging over 20 FFT spectra.
We measure the charge 1/f noise associated with the flake in a local four probe measurement scheme, shown in Fig. 1(a) in the main text, using the following equation <cit.>:
S_V/V_local^2=γ^c/f^a
where S_V is the PSD of voltage fluctuations (units V^2/Hz), γ^c is the noise magnitude i.e. the normalized Hooge parameter γ^c_H with respect to the total number of charge carriers in the channel and characterizes the noise magnitude of the material, and V_local is the average voltage drop across the sample. With the cross-correlation (XC) scheme, we are only sensitive to the noise from the conducting channel. As expected, the noise increases with the bias current (Fig. <ref>). We obtain γ^c∼ 10^-7 by fitting the spectrum with Eq. <ref>.
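For illustration, the fit of Eq. <ref> reduces to a straight-line fit in log-log space; the sketch below uses a synthetic spectrum with placeholder values for V_local and γ^c.

import numpy as np

rng = np.random.default_rng(2)
V = 10e-3                        # average voltage drop across the flake (placeholder)
f = np.logspace(-0.3, 1.4, 40)   # roughly 0.5 Hz to 25 Hz
S = 1e-7 * V**2 / f * np.exp(rng.normal(0, 0.05, f.size))   # synthetic S_V

# log S = log(gamma * V^2) - a * log f, so a linear fit yields gamma and a
slope, intercept = np.polyfit(np.log(f), np.log(S), 1)
gamma_c, a = np.exp(intercept) / V**2, -slope
print("gamma_c = %.2e, a = %.2f" % (gamma_c, a))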
Here, we would also like to mention that the 1/f noise in Fig. <ref> nicely scales with I_dc^2, implying that we are only sensitive to the 1/f noise fluctuations from the flake and the current source is not introducing the frequency dependent fluctuations from the contact through capacitive coupling. When the impedance of the current source becomes equivalent to the contact resistance at higher frequencies (∼≥ 10 MHz ) due to capacitive coupling, the noise in the injected spin current can come from the fluctuating contact resistance. In this case, the noise would increase at higher frequencies. On the other hand, we observe the opposite frequency dependence for the measured noise going down at higher frequencies complying with the 1/f noise behavior, and the noise is measured at very low frequencies where the impedance of the current source is almost constant and is much higher than the contact resistance, ruling out the effect of the contact noise on the measured signal.
There could be local magnetoresistance (MR) contributions from the ferromagnetic contacts and from the graphene flake present in our measurements. In order to rule out the flake MR contribution to the noise, we apply an out-of-plane magnetic field (B_⊥) while a dc current of 10 µA is applied between the outer contacts, and the noise is measured between the inner contacts. Similar measurements are performed for the contact magnetoresistance in a three-terminal connection scheme, where the noise at the graphene-tunnel barrier interface is measured.
However, we do not observe a detectable change in the noise level at different magnetic fields for both the measurements (Fig. <ref> and Fig. <ref>). In this way we can discard the contribution of the MR coming from the graphene flake and the contacts to the observed magnetic field dependent non-local noise.
§ SWITCHING BEHAVIOUR OF THE CONTACTS: SPIN VALVE MEASUREMENT
In our spin-valve measurements (Fig. 2(a) of the main text), we observe an asymmetric switching of the cobalt FM contacts for the positive and the negative sweep of the in-plane magnetic field. This behavior has been observed before for spin transport in graphene <cit.>. For cobalt contacts, if there is an anti-ferromagnetic cobalt oxide layer formed on the side or top of a FM electrode, it can induce an exchange bias on the cobalt magnetization, which leads to a shift of the hysteresis loop and can cause different switching fields for positive and negative magnetic fields <cit.>. For the positive magnetic fields we observe the switching of all four electrodes. However, for the negative fields, two electrodes seem to switch simultaneously, and instead of four, we only detect three switches.
§ CIRCUIT ANALYSIS
In order to identify the noise sources, contributing to the measured non-local noise, we develop an elementary 2-channel resistor model as described in ref. <cit.>, representing true spin and charge transport properties of the measured device (Fig. <ref>).
A ferromagnetic injector/detector is represented as a combination of two parallel resistors corresponding to spin-up and spin-down resistances, chosen in a way that they satisfy the conditions for the measured contact polarization P=R^C_↓-R^C_↑/R^C_↓+R^C_↑ and the contact resistance R_C=R^C_↓R^C_↑/R^C_↓+R^C_↑. Since graphene is non-magnetic, one can represent the spin resistance R^s=R_sqΔx/W as a parallel combination
of R^s_↑ and R^s_↓, where R^s_↑=R^s_↓=2× R^s. Here Δx is the length scale of graphene for which the channel and the relaxation resistance are defined. The model can be refined by taking a smaller Δx. The spin-relaxation process in the circuit is represented by the relaxation resistor R_↑↓=2R^s×λ_s/Δx.
For our simulation, we take the ratio λ_s/Δx =3, i.e. incorporating three relaxation resistors in one unit of λ_s. The thermal noise background is simulated by replacing each resistor by a noise-less resistor with an equivalent root mean squared (rms) current noise source i_noise=√(4k_BT Δf/R) in parallel with it, evaluated in a bandwidth Δf = 1 Hz at f=1 Hz.
The response of the equivalent current source is measured as a voltage difference between the detector pairs. In this way, one can estimate the contribution from the relaxation shunt resistors, channel resistance, and the contacts separately. The total noise is the root mean squared sum of the contribution from all the circuit components. A major contribution of the simulated noise comes from the detector contacts, since the equivalent resistance of the circuit is dominated by the detector resistance and when there is no current flow in the circuit, one observes the equivalent thermal noise background. The value obtained from the simulation is ∼ 1.4 × 10^-16 V^2/Hz at room temperature and the thermal noise measured experimentally for our non-local circuit is ∼ 10^-16 V^2/Hz, supporting the validity of our circuit model.
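A bare-bones Python sketch of this bookkeeping: each resistor is replaced by a noiseless resistor in parallel with a current noise source of PSD 4k_BT/R, the response at the detector pair is obtained by nodal analysis, and the independent contributions are summed. The toy two-rail ladder and all resistance values are placeholders; as a consistency check, the summed noise must equal 4k_BT R_eff of the network between the detector nodes.

import numpy as np

kB, T = 1.380649e-23, 300.0

def nodal_impedance(resistors, n_nodes, ground=0):
    # invert the conductance (Laplacian) matrix with one node grounded
    G = np.zeros((n_nodes, n_nodes))
    for a, b, R in resistors:
        g = 1.0 / R
        G[a, a] += g; G[b, b] += g
        G[a, b] -= g; G[b, a] -= g
    keep = [i for i in range(n_nodes) if i != ground]
    Z = np.zeros((n_nodes, n_nodes))
    Z[np.ix_(keep, keep)] = np.linalg.inv(G[np.ix_(keep, keep)])
    return Z

# toy two-rail (spin-up / spin-down) ladder: nodes 0-3 on the up rail,
# 4-7 on the down rail; placeholder values R_s = 1 kOhm, R_relax = 6 kOhm
Rs, Rrel = 1e3, 6e3
resistors = [(i, i + 1, 2 * Rs) for i in range(3)]            # up rail
resistors += [(i, i + 1, 2 * Rs) for i in range(4, 7)]        # down rail
resistors += [(i, i + 4, Rrel) for i in range(4)]             # relaxation shunts

Z = nodal_impedance(resistors, 8)
det = (3, 7)                                                  # detector node pair

S_V = 0.0
for a, b, R in resistors:
    # voltage response at the detector pair to a unit current source across (a, b)
    tr = Z[det[0], a] - Z[det[0], b] - Z[det[1], a] + Z[det[1], b]
    S_V += tr**2 * 4 * kB * T / R          # independent PSDs add linearly

R_eff = Z[det[0], det[0]] + Z[det[1], det[1]] - 2 * Z[det[0], det[1]]
print("summed S_V = %.3e V^2/Hz; 4kT R_eff = %.3e V^2/Hz" % (S_V, 4 * kB * T * R_eff))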
When a non-zero dc current flows through the graphene, 1/f charge noise is generated in addition to the background thermal noise. Returning to our 2-channel resistor model, for a non-zero current we know the amount of current (i) flowing through each circuit element, which can be converted to the equivalent rms noise current i_noise^1/f=√(γ_H/f)× i, evaluated at f = 1 Hz. Here we use γ_H=γ^c∼ 10^-7. In the same way as for the thermal noise simulation, we estimate the charge 1/f noise contribution at the non-local detector pair from the channel resistors (∼ 10^-18 V^2/Hz), the relaxation resistors (∼ 10^-21 V^2/Hz) and the contacts (∼ 10^-17 V^2/Hz). The total 1/f noise (∼ 6 × 10^-17 V^2/Hz) is again dominated by the detector contacts, and it is clearly much lower than the noise we measure in the non-local geometry. From the spin-dependent noise measurements at different spin accumulation, we experimentally obtain the proportionality constant γ^s∼ 10^-4, i.e. 10^3 times larger than γ^c for the charge noise. Using γ_H=γ^s=10^-4 we can simulate a noise level of ∼ 10^-14 V^2/Hz, close to the observed noise level in our measurements. This again confirms that such a high non-local noise is only possible with a much larger γ_H, which cannot be produced by the mechanism responsible for the charge 1/f noise.
§ BACKGROUND NOISE IN NON-LOCAL GEOMETRY
In order to confirm that the magnetic field dependence of the measured noise does not originate from the non-local background, we measure the non-local noise at high positive and negative perpendicular magnetic fields (B_⊥∼ 0.25 T and -0.25 T) where no spin accumulation is present (Fig. <ref>(a)). The non-local signal is different due to the different background MR (dashed line in Fig. <ref>(a)). However, we do not observe any difference in the noise level between high positive and negative B_⊥, confirming that there is no detectable noise contribution from the non-local background signal (Fig. <ref>(b)).
§ SPIN-DEPENDENT NOISE: ANALYTICAL EXPRESSION
We quantitatively analyze the analytical expression of the non-local spin signal in order to figure out the dominant sources of spin sensitive noise. The measured non-local voltage V_NL= R_NL× I for the spin-valve geometry is expressed by Eq. <ref>:
V_NL=P^2IR_sqλ_sexp(-L/λ_s)/2W
Here P is the contact spin polarization, I is the current applied at the injector contact, R_sq is the square resistance of graphene, λ_s is the spin diffusion length in graphene, L is the spacing between the injector and the detector contact and W is the width of the transport channel.
The fluctuations in V_NL in time are represented by the correlation function:
R_V^NL(τ)=< V_NL(t)* V_NL(t+τ)>
The noise associated with different parameters (P, R_sq, I, λ_s) can be written in form of a power spectrum S_V^NL(f), which is the Fourier transform of R^V_NL(τ):
S_V^NL(f)≃exp(-L/λ_s)^2/4W^2[(P^2IR_sq)^2(1+L/λ_s)^2S_λ_s(f)+(P^2λ_sR_sq)^2S_I(f)+(P^2λ_sI)^2S_R_sq(f)+(2PIR_sqλ_s)^2S_P(f)]
This equation can be rewritten as
S_V^NL(f)=A_λ_sS_λ_s(f)+A_IS_I(f)+A_R_sqS_R_sq(f)+A_PS_P(f)
where S_P is the noise due to the polarization fluctuations at the injector/detector electrode, which is the Fourier transform of the autocorrelation function of the time-dependent polarization fluctuations, i.e. ℱ⟨ P(t)P(t+τ)⟩; S_λ_s is the noise associated with the spin transport, i.e. the spin relaxation noise (ℱ⟨λ_s(t)λ_s(t+τ)⟩); S_I(f) is the noise from the external current source (ℱ⟨ I(t)I(t+τ)⟩); and S_R_sq is the 1/f charge noise and the thermal noise from the channel (ℱ⟨ R_sq(t)R_sq(t+τ)⟩). Here we assume that the fluctuations in all four parameters (P, R_sq, I, λ_s) are uncorrelated.
We can measure S_I independently and S_R_sq is the local 1/f noise (Fig. <ref>).
After removing the contribution of S_I (∼ 10^-23 V^2/Hz) and S_R_sq (∼ 10^-22 V^2/Hz) to S_V^NL, which are negligible compared to the observed noise (∼ 10^-14 V^2/Hz), the only dominant sources of noise in the measured non-local signal are the polarization fluctuations at the injector/detector electrodes and the fluctuations in the spin transport parameters (λ_s=√(D_sτ_s) ). However, assuming R_sq and λ_s uncorrelated is not strictly true. These quantities are correlated as λ_s depends on the channel resistance with λ_s going down with the increase in the channel resistance, which would lead to different λ_s dependence of the analytical expression i.e. Eq. <ref>.
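As a consistency check of Eq. <ref>, the coefficients A_X follow from first-order error propagation on Eq. <ref>; the short computer-algebra sketch below (assuming uncorrelated fluctuations, as in the text) reproduces, in particular, the (1+L/λ_s)^2/λ_s^2 prefactor of S_λ_s.

import sympy as sp

P, I, Rsq, lam, L, W = sp.symbols('P I R_sq lambda_s L W', positive=True)
V_NL = P**2 * I * Rsq * lam * sp.exp(-L / lam) / (2 * W)   # Eq. (S2)

# first-order propagation: A_x = (dV/dx)^2; print each prefactor relative to V^2
for x in (P, I, Rsq, lam):
    A = sp.simplify((sp.diff(V_NL, x) / V_NL) ** 2)
    print(x, '->', sp.factor(A))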
§ NON-LOCAL NOISE VIA CROSS CORRELATION
A measured non-local noise at the detector (S_NL^V) in the presence of spin-accumulation (spin-current) can be represented by Eq. <ref>:
S_NL^V=S_P^C1+S_P^C2+S_λ_s^C1-C2+S_bg
Here S_P^C_i is the contact polarization noise, S_λ_s is the noise due to spin accumulation (relaxation) between contacts C1-C2 and S_bg is the electronic
noise contribution due to residual charge current flowing in the non-local circuit and the thermal noise background. S_bg does not carry any spin-dependent information. On applying a high magnetic field B_⊥∼0.1 T, perpendicular to the device plane, one can suppress the spin transport and the
measured non-local noise contribution at high B_⊥ can come from the background charge 1/f noise (S_bg) due to non-homogeneous charge current distribution in the non-local regime. In this way the spin-dependent noise (polarization and spin accumulation) can be separated from the total noise.
Polarization fluctuations in each contact are independent from each other and one can filter the polarization noise from the spin current noise by
using the spatial XC method. We measure the non-local noise via XC scheme as shown in Fig. 1(b) in the main text. Simultaneously we also record the
single channel noise for contact pair C3-C5 (path 1) and C4-C5 (path 3). Single channel noise includes the polarization noise contribution of the contact pair on top of the spin relaxation noise between the contacts. These contributions can be summarized in following equations:
S_NL^channel1=S_P^C3+S_P^C5+S_λ_s^C3-C5
S_NL^channel2=S_P^C4+S_P^C5+S_λ_s^C4-C5
S_xcorr^C3-C5⊗ C4-C5=S_λ_s^C3-C5+S_P^C5
Note that we have not included the background noise contribution here, since it can be estimated separately at B_⊥∼ 0.1 T
via the procedure described above; the final equations can then be rewritten without the background contribution.
On the other hand, the spatial cross correlation of V_C3-C5 and V_C4-C5 will have total noise contribution S_xcorr^C3-C5⊗ C4-C5 only from the outer detector C5 (S_P^C5) and the spin relaxation noise S_λ_s^C4-C5.
The advantage of spatial cross correlation over the regular cross correlation method is demonstrated in Fig. <ref>(a). For the regular XC measurement, we do not observe any difference between the single channel noise and the noise computed via XC method Fig. <ref>(b). The reason for this is the contribution from the contact leads and the external electronics is much lower than the measured noise level between the contacts C3 and C5. Therefore, the uncorrelated signals do not affect the measurement. On the other hand, by using the spatial cross correlation method one can eliminate the polarization noise contribution from different detectors and the method allows to see the noise contribution which is shared between the detector pairs C3-C5 and C4-C5 i.e. spin relaxation noise between contacts C4 and C5 (∼ 5×10^-15 V^2Hz^-1), without the contribution of the polarization noise from C4. Using S_λ_s∝μ_s^2 relation, we can extrapolate the spin relaxation noise at contact C3 (path 1), which should be approximately four times higher than the spin relaxation noise at C4 for λ_s=1.5 μm i.e. (∼10^-14 V^2Hz^-1). From Fig. <ref>(a), we can say that the extrapolated spin relaxation noise is almost equal to the spin-dependent noise measured via path 1 (∼ 1.4 ×10^-15 V^2Hz^-1 at f = 1 Hz), leading to the conclusion that the spin-dependent noise at C3 is dominated by the spin relaxation noise in graphene.
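A toy numerical check of Eqs. <ref>-<ref>: with synthetic 1/f-like noise sources (all amplitudes are placeholders, and the relaxation noise of the C3-C5 span is assumed to split into independent segment contributions), the cross spectrum of the two detector pairs retains only the terms they share, namely the polarization noise of the common contact C5 and the relaxation noise of the shared segment C4-C5.

import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(3)
fs, n = 1000, 2**20

def pink(amp):
    # shape white noise by f^(-1/2) in Fourier space to obtain a ~1/f PSD
    f = np.fft.rfftfreq(n, 1 / fs)
    s = rng.normal(size=f.size) + 1j * rng.normal(size=f.size)
    s[1:] /= np.sqrt(f[1:]); s[0] = 0
    return amp * np.fft.irfft(s, n)

p3, p4, p5 = pink(0.5), pink(0.5), pink(0.3)   # contact polarization noises
lam34, lam45 = pink(1.0), pink(0.7)            # relaxation noise per segment

ch1 = p3 + p5 + lam34 + lam45                  # detector pair C3-C5, cf. Eq. (S4)
ch2 = p4 + p5 + lam45                          # detector pair C4-C5, cf. Eq. (S5)

f, S12 = csd(ch1, ch2, fs, nperseg=2**15)
f, Sref = welch(p5 + lam45, fs, nperseg=2**15) # shared terms only, cf. Eq. (S6)
print(np.abs(S12[1:5]) / Sref[1:5])            # ratio close to 1 at low frequency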
We also measure the contact polarization noise separately by cross correlating the noise measured from the detector pairs C3-C4 and C4-C5, while C1 and C2 are the current injectors. Here only the noise from contact C4 is measured for different values of B_⊥ ( spin accumulation) underneath the contact. We clearly see the spin-dependent noise (contact polarization noise in this case) is reduced to the background noise at B_⊥∼ 80 mT, where spin accumulation is suppressed (Fig. <ref>). The polarization noise is ∼ 10^-16 V^2Hz^-1 (at 1 Hz), which is negligible compared to the measured spin relaxation noise i.e. ∼ 10^-14 V^2Hz^-1.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07571v2 | 20170126040645 | Chiral dynamics, S-wave contributions and angular analysis in D→ππℓν̅ | ["Yu-Ji Shi", "Wei Wang", "Shuai Zhao"] | hep-ph | ["hep-ph", "hep-ex"] |
INPAC, Shanghai Key Laboratory for Particle Physics and Cosmology, Department of Physics and Astronomy, Shanghai Jiao-Tong University, Shanghai, 200240, China
We present a theoretical analysis of the D^-→π^+π^-
ℓν̅ and D̅^0→π^+π^0 ℓν̅ decays. We
construct a general angular distribution which can include
arbitrary partial waves of ππ. Retaining the S-wave and P-wave
contributions we study the branching ratios, forward–backward
asymmetries and a few other observables. The P-wave contribution is
dominated by ρ^0 resonance, and the S-wave contribution is
analyzed using the unitarized chiral perturbation theory. The
obtained branching fraction for D→ρℓν, at the order
10^-3, is consistent with the available experimental data. The
S-wave contribution has a branching ratio at the order of
10^-4, and this prediction can be tested by experiments like
BESIII and LHCb. Future measurements can also be used to examine the
π–π scattering phase shift.
Yu-Ji Shi, Wei Wang [Email: [email protected]] and Shuai Zhao [Email: [email protected]]
§ INTRODUCTION
The Cabibbo–Kobayashi–Maskawa (CKM) matrix elements are key
parameters in the Standard Model (SM). They are essential to
understand CP violation within the SM and search for new physics
(NP). Among these matrix elements,
|V_cd| can
be determined from either exclusive or inclusive weak D decays,
which are governed by the c→ d transition, for example c→ dℓν transitions. However, for a general D decay process it is
difficult to extract CKM matrix elements, because strong and weak
interactions may be entangled.
The semi-leptonic D decays are ideal
channels to determine |V_cd|, not only because the weak and
strong dynamics can be separated in these processes, but also because of the
clean experimental signals. Moreover, one can study the dynamics in
the heavy-to-light transition from semi-leptonic D decays. Since
leptons do not participate in the strong interaction, all the strong
dynamics is included in the form factors; thus it provides a good
platform to measure the form factors. The D→ρ form factors
have been measured from D^0→ρ^- e^+ ν_e and D^+→ρ^0
e^+ ν_e at the CLEO-c experiment for both charged and neutral
channels <cit.>. Because of the large width of the
ρ meson, D→ρℓν̅_ℓ is in fact a quasi-four
body process D→ππℓν̅_ℓ. The ρ can be
reconstructed from the P-wave ππ mode. However, other
ππ resonant or non-resonant states may interfere with the
P-wave ππ pair, and thus it is necessary to analyze the
S-wave contribution to D→ππℓν̅_ℓ.
In addition, the internal structure of light mesons is an important
issue in hadron physics. It is difficult to study light mesons by
QCD perturbation theory due to the large strong coupling in the low
energy region. On the other hand, because of the large mass scale,
one can establish factorization for many heavy meson decay
processes, thus heavy mesons like B and D can be used to probe
the internal structure of light
mesons <cit.>. As mentioned above,
D→ππℓν̅_ℓ can receive contributions from
various partial waves of ππ. The ρ(770) dominates the D
decay into the P-wave ππ state, while at the same time the D
meson can decay into the S-wave ππ state through the f_0(980).
The structure of the f_0(980) is not fully understood yet, and an
analysis of D→ππℓν̅_ℓ may shed more light on its
nature. The BESIII collaboration has collected 2.93
fb^-1 data in e^+e^- collisions at the energy around
3.773 GeV <cit.>, which can be used to study the
semi-leptonic D decays. It is thus mandatory to make
reliable theoretical predictions. Some analyses of multi-body heavy
meson decays can be found in
Refs. <cit.>,
where the final state interactions between the light pseudoscalar
mesons are taken into account.
In this paper we present a theoretical analysis of
D^-→π^+π^-ℓν̅_ℓ and D̅^0→π^+π^0ℓν̅_ℓ decays. In Sec. II, we will present the results for
the D→ f_0(980) and D→ρ form factors. We also calculate
the D to S-wave ππ form factors in the non-resonant region,
where the scalar ππ form factor is computed using unitarized
chiral perturbation theory. Based on these results, we present a full
analysis of the angular distribution of D→ππℓν̅_ℓ. We explore various distribution observables, including
the differential decay width, the S-wave fraction, forward–backward
asymmetry, and so on. These results will be collected in Sec. III.
The conclusion of this paper will be given in Sec. IV. The details
of the coefficients in angular distributions are relegated to the
appendix.
§ HEAVY-TO-LIGHT TRANSITION FORM FACTORS
The Feynman diagram for the D→ππℓ^-ν̅_ℓ decay is
shown in Fig. <ref>. The lepton can be an electron or
a muon, ℓ=e,μ. The spectator quark can be the u or d
quark, corresponding to D̅^0→π^+π^0 ℓ^-ν̅_ℓ
and D^-→π^+π^- ℓ^-ν̅_ℓ, respectively. Integrating out the
virtual W-boson, we obtain the effective Hamiltonian describing
the c→ d transition
ℋ_eff = G_F/√(2) V_cd [d̅γ_μ(1-γ_5) c][ ν̅γ^μ(1-γ_5) ℓ] +h.c.,
where G_F is the Fermi constant and V_cd is the CKM
matrix element. The leptonic part is calculable using the
perturbation theory, while the hadronic effects are encoded into the
transition form factors.
§.§ D →ρ form factors
For the P-wave ππ state, the dominant contribution is from
the ρ(770) resonance. The D→ρ form factors are
parametrized by <cit.>
⟨ρ(p_2,ϵ)|d̅γ^μc|D(p_D)⟩ = -2V(q^2)/m_D+m_ρϵ^μνρσϵ^*_ν p_Dρp_2σ,
⟨ρ(p_2,ϵ)|d̅γ^μγ_5 c|
D(p_D)⟩ = 2im_ρ A_0(q^2)ϵ^*· q / q^2q^μ
+i(m_D+m_ρ)A_1(q^2)[ ϵ^*μ
-ϵ^*· q /q^2q^μ]
-iA_2(q^2)ϵ^* · q / m_D+m_ρ[ P^μ-m_D^2-m_ρ^2/q^2q^μ],
with q=p_D-p_2, and P=p_D+p_2. The V(q^2), and
A_i(q^2) (i=0,1,2) are nonperturbative form factors.
These form factors have been computed in many different approaches
<cit.>,
and here we quote the results from the light-front quark model
(LFQM) <cit.> and light-cone sum rules (LCSR)
<cit.>. To access the momentum dependence in the full
kinematic region, the following parametrization has been used:
F_i(q^2)=F_i(0)/1-a_iq^2/m_D^2+b_i(q^2/m_D^2)^2.
Their results are collected in Tab. <ref>. We
note that a different parametrization is adopted in
Ref. <cit.>, where A_3 appears instead of A_0. The
relation between A_0 and A_3 is given by
A_0(q^2)=1/2m_ρ(m_D+m_ρ)[A_1(q^2)
(m_D+m_ρ)^2+A_2(q^2)(m_ρ^2-m_D^2)-A_3(q^2)q^2].
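As a concrete illustration, the parametrization above is straightforward to evaluate numerically. The following minimal Python sketch is generic; the parameter values shown are placeholders for illustration only, not the fitted LFQM/LCSR entries of the table:

    def form_factor(q2, F0, a, b, mD=1.870):
        """F(q2) = F(0) / (1 - a*x + b*x**2), with x = q2/mD**2 (GeV units)."""
        x = q2 / mD**2
        return F0 / (1.0 - a * x + b * x**2)

    # Placeholder parameters (see the table for the actual fit results):
    V_at_half_GeV2 = form_factor(0.5, F0=0.9, a=1.2, b=0.4)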
§.§ Scalar ππ form factor and D to S-wave ππ
We first give the D→ f_0(980) form factor parametrized as
⟨ f_0(p_2) |d̅γ_μγ_5 c|
D^-(p_D)⟩ = -i{ F_+^D→ f_0 (q^2)
[P_μ - m_D^2-m_f_0^2/q^2q_μ ]
+F_0^D→ f_0 (q^2)m_D^2-m_f_0^2/q^2q_μ},
where F^D→ f_0_+ and F^D→ f_0_0 are D→ f_0 form
factors. We will use LCSR to compute the D → f_0(980)
transition form factors with some inputs, and we refer the reader
to Ref. <cit.> for a detailed derivation in LCSR.
The meson masses are fixed to the PDG values m_D=1.870 GeV and
m_f_0=0.99 GeV <cit.>. For quark masses we use
m_c=1.27 GeV <cit.> and m_d=5 MeV. As for
decay constants, we use f_D=0.21 GeV <cit.> and
f_f_0=0.18 GeV <cit.>. The threshold s_0 is
fixed at s_0=4.1 GeV^2, which should correspond to the squared
mass of the first radial excitation of D. The parameters
F_i(0), a_i and b_i are fitted in the region
-0.5 GeV^2<q^2<0.5 GeV^2, and the Borel
parameter M^2 is taken to be (6±1) GeV^-2. With
these parametrizations, we give the numerical results in
Tab. <ref>.
In the region where the two pseudoscalar mesons strongly interact, the resonance approximation fails and thus has to be abandoned. One such example is the S-wave ππ system below 1 GeV, for which we can use the form factors defined in Ref. <cit.>:
⟨ (ππ)_S(p_ππ)|d̅γ_μγ_5 c| D (p_D)
⟩ = -i 1/m_ππ{[P_μ
-m_D^2-m_ππ^2/q^2 q_μ] F_1^D→ππ(m_ππ^2, q^2)
+m_D^2-m_ππ^2/q^2 q_μ F_0^D→ππ(m_ππ^2, q^2) }.
The Watson theorem implies that phases measured in ππ elastic
scattering and in a decay channel in which the ππ system has
no strong interaction with other hadrons are equal modulo π
radians. In the process we consider here, the lepton pair ℓν̅ indeed decouples from the ππ final state, and thus
the phases of D to scalar ππ decay amplitudes are equal to
ππ scattering with the same isospin. It is plausible that
⟨ (ππ)_S|d̅Γ c|D⟩∝ F_ππ(m_ππ^2),
where the scalar form factor is defined as
⟨ 0| d̅d |π^+π^-⟩ = B_0
F_ππ(m_ππ^2),
where B_0=(1.7±0.2) GeV <cit.> is the QCD condensate
parameter.
An explicit calculation of these quantities requires knowledge of
generalized light-cone distribution amplitudes
(LCDAs) <cit.>. The twist-3 one has the same
asymptotic form with the LCDAs for a scalar
resonance <cit.>. Inspired by this similarity, we may
plausibly introduce an intuitive matching between the D→ f_0
and D→ (ππ)_S form factors <cit.>:
F_i^D→ππ(m_ππ^2, q^2) ≃ B_01/ f_f_0 F_ππ(m_ππ^2) F_i^D→ f_0
(q^2).
It is necessary to stress at this stage that the Watson theorem does
not strictly guarantee that one may use Eq. (<ref>).
Instead it indicates that, below the opening of inelastic channels
the strong phases in the D→ππ form factor and ππ
scattering are the same. First above the 4π or KK̅
threshold, additional inelastic channels will also contribute. The
KK̅ contribution can be incorporated in a coupled-channel
analysis. As a process-dependent study, it has been demonstrated
that states with two additional pions may not give sizable
contributions to the physical observables <cit.>.
Secondly, some polynomials with nontrivial dependence on
m_ππ have been neglected in Eq. (<ref>). In
principle, once the generalized LCDAs for the (ππ)_S system
are known, the D→ππ form factor can be straightforwardly
calculated in LCSR and thus this approximation in the matching
equation can be avoided. On the one hand, the space-like
generalized parton distributions for the pion have been calculated
at one-loop level in the chiral perturbation theory
(χPT) <cit.>. The analysis of time-like
generalized LCDAs in χPT and the unitarized framework is in
progress. On the other hand, the γγ^*→π^+π^-
reaction is helpful to extract the generalized LCDAs for the
(ππ)_S system <cit.>. The
experimental prospects at BEPC-II and BELLE-II in the near future
are very promising.
In the kinematic region where the π is soft, the crossed channel
from D+π→π will contribute as well and this crossed channel
would modify Eq. (<ref>) by an inhomogeneous part. For
the analogous decay of K or B mesons, it has been taken into
account either dynamically in terms of phase shifts (in the case of
the kaon decay) <cit.> or approximately in terms
of a pole contribution (in the case of the B meson
decay) <cit.>. However, if both pions move fast, the
D–π invariant mass is far from the D^* pole and this
contribution is negligible. In this case, the transition amplitude
for the D to 2-pion form factor can be calculated in light-cone
sum rules <cit.>. This will lead to the conjectured
formula in Eq. (<ref>).
The scalar ππ form factor can be handled using the unitarized chiral perturbation theory. In the following, we will give a brief description of this approach.
In terms of the isoscalar S-wave states
|ππ⟩_I=0^ = 1/√(3)|π^+π^-⟩ + 1/√(6)|π^0π^0⟩,
|KK̅⟩_I=0^ =
1/√(2)|K^+K^-⟩ +
1/√(2)|K^0K̅^0⟩,
the scalar form factors for the π and K mesons are defined as
√(2)B_0 F^n/s_1(s) = ⟨ 0|n̅n /s̅s|ππ⟩_I=0^,
√(2)B_0 F^n/s_2(s) = ⟨ 0|n̅n/s̅s|KK̅⟩_I=0^,
where s=m_ππ^2. The n̅n = (u̅u+d̅d)/√(2)
denotes the non-strange scalar current, and the notation (π =
1, K = 2) has been introduced for simplicity. With the above
notation, we have
F_ππ(m_ππ^2) = √(2/3) F_1^n(m_ππ^2).
Expressions have already been derived
in χPT up to next-to-leading
order <cit.>:
F_1^n(s) = √(3/2)[ 1 + μ_π -
μ_η/3 + 16 m_π^2/f^2(2L_8^r-L_5^r)
+ 8(2L_6^r-L_4^r)(2m_K^2 + 3m_π^2)/f^2 +
8s/f^2 L_4^r + 4s/f^2 L_5^r
+ (2s - m_π^2)/(2f^2) J^r_ππ(s) +
s/(4f^2) J^r_KK(s) + m_π^2/(18f^2) J^r_ηη(s)
],
F_1^s(s) = √(3)/2[ 16
m_π^2/f^2(2L_6^r-L_4^r) + 8s/f^2 L_4^r +
s/(2f^2) J^r_KK(s) + 2m_π^2/(9f^2)
J^r_ηη(s)
],
F_2^n(s) = 1/√(2)[ 1 +
8 L_4^r/f^2(2s - m_π^2 - 6 m_K^2) +
4 L_5^r/f^2(s - 4 m_K^2) + 16 L_6^r/f^2(6 m_K^2 + m_π^2) + 32 L_8^r/f^2 m_K^2 +
2/3μ_η
+ (9s - 8 m_K^2)/(36f^2) J^r_ηη(s) + 3s/(4f^2)
J^r_KK(s) + 3s/(4f^2) J^r_ππ(s)
],
F_2^s(s) = 1 + 8 L_4^r/f^2(s - m_π^2 - 4
m_K^2) + 4 L_5^r/f^2(s - 4 m_K^2) +
16 L_6^r/f^2(4 m_K^2 + m_π^2) + 32
L_8^r/f^2 m_K^2 + 2/3μ_η
+
(9s - 8 m_K^2)/(18f^2) J^r_ηη(s) +
3s/(4f^2) J^r_KK(s).
Here the L_i^r are the renormalized low-energy constants, and f
is the pion decay constant at tree level. The μ_i and J_ii^r
are defined as follows:
μ_i = m_i^2/(32π^2 f^2) ln(m_i^2/μ^2),
J_ii^r(s) = 1/(16π^2)[ 1 -
log(m_i^2/μ^2) - σ_i(s)log(
(σ_i(s)+1)/(σ_i(s)-1))],
with σ_i(s) = √(1- 4m_i^2/s). It is interesting to
note that the next-to-next-to-leading order results can also be
found in Refs. <cit.>. Imposing the
unitarity constraints, the scalar form factor can be expressed in
terms of the algebraic coupled-channel equation
F(s) = [I+K(s) g(s)]^-1 R(s)
= [I-K(s) g(s)] R(s) + 𝒪(p^6),
where R(s) has no right-hand cut and in the second line, the
equation has been expanded up to NLO in the chiral expansion.
K(s) is the S-wave projected kernel of meson-meson scattering
amplitudes that can be derived from the leading-order chiral
Lagrangian:
K_11 = (2s - m_π^2)/(2f^2),
K_12 = K_21 = √(3)s/(4f^2),
K_22 = 3s/(4f^2).
The loop integral can be calculated either in the
cutoff-regularization scheme with q_max∼ 1 GeV being the
cutoff (cf. Erratum of Ref. <cit.> for an explicit
expression) or in dimensional regularization with the
MS subtraction scheme. In the latter scheme, the
meson loop function g_i(s) is given by
J_ii^r(s) ≡ 1/(16π^2)[ 1 -
log(m_i^2/μ^2) - σ_i(s)log(
(σ_i(s)+1)/(σ_i(s)-1))]
= -g_i(s).
The expressions for the R_i are obtained by matching the
unitarization and chiral perturbation
theory <cit.>:
R_1^n(s) = √(3/2){ 1 + μ_π -
μ_η/3 + 16 m_π^2/f^2(2L_8^r-L_5^r)
+ 8(2L_6^r-L_4^r)(2m_K^2 + 3m_π^2)/f^2
+ 8s/f^2 L_4^r + 4s/f^2 L_5^r
- m_π^2/(288π^2 f^2)[1 +
log(m_η^2/μ^2)]
},
R_1^s(s) = √(3)/2{ 16
m_π^2/f^2(2L_6^r-L_4^r) + 8s/f^2 L_4^r -
m_π^2/(72π^2 f^2)[1 +
log(m_η^2/μ^2)]
},
R_2^n(s) = 1/√(2){ 1 + 8 L_4^r/f^2(2s - 6m_K^2 - m_π^2) + 4 L_5^r/f^2(s -
4m_K^2) + 16 L_6^r/f^2(6m_K^2 + m_π^2)
+ 32 L_8^r/f^2 m_K^2 + 2/3μ_η
+ m_K^2/(72π^2 f^2)[1 +
log(m_η^2/μ^2)]
},
R_2^s(s) = 1 + 8 L_4^r/f^2(s - 4m_K^2 -
m_π^2) + 4 L_5^r/f^2(s - 4m_K^2) +
16 L_6^r/f^2(4m_K^2 + m_π^2)
+ 32 L_8^r/f^2 m_K^2 + 2/3μ_η
+ m_K^2/(36π^2 f^2)[1 +
log(m_η^2/μ^2)].
With the above formulas and the fitted results for the low-energy
constants L_i^r in Ref. <cit.> (evolved from
M_ρ to the scale μ= 2q_ max/√(e)), we show the
non-strange ππ form factor in Fig. <ref>. The
modulus, real part and imaginary part are shown as solid, dashed and
dotted curves. As the figure shows, the chiral unitary ansatz
predicts a form factor F^n_1 with a zero close to the K̅K
threshold. This feature has been extensively discussed in
Ref. <cit.>.
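As a guide to implementation, the coupled-channel construction above reduces to a few lines of linear algebra. The following Python sketch is a simplified two-channel (ππ, KK̅) version in dimensional regularization; the meson masses, the tree-level decay constant f, and the scale μ are representative inputs of our choosing, and the vector R(s) carrying the fitted low-energy constants must be supplied by the user:

    import numpy as np

    f, m_pi, m_K, mu = 0.0924, 0.1396, 0.4957, 1.2   # GeV; mu ~ 2*q_max/sqrt(e)

    def g_loop(s, m, eps=1e-12):
        """Meson loop function g_i(s) = -J_ii^r(s); the +i*eps implements
        the physical (s + i*epsilon) prescription above threshold."""
        sigma = np.sqrt(1.0 - 4.0 * m**2 / (s + 1j * eps))
        J = (1.0 - np.log(m**2 / mu**2)
             - sigma * np.log((sigma + 1.0) / (sigma - 1.0))) / (16.0 * np.pi**2)
        return -J

    def K_matrix(s):
        """S-wave projected LO ChPT kernel of the equations above."""
        return np.array([[(2*s - m_pi**2) / (2*f**2), np.sqrt(3)*s / (4*f**2)],
                         [np.sqrt(3)*s / (4*f**2),    3*s / (4*f**2)]])

    def F_unitarized(s, R):
        """F(s) = [I + K(s) g(s)]^{-1} R(s); R is a length-2 complex array."""
        g = np.diag([g_loop(s, m_pi), g_loop(s, m_K)])
        return np.linalg.solve(np.eye(2) + K_matrix(s) @ g, R)

With R(s) built from the expressions above, this construction should reproduce the qualitative behavior shown in the figure, including the zero of F_1^n close to the K̅K threshold.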
§ FULL ANGULAR DISTRIBUTION OF D→ΠΠℓΝ̅
In this section, we derive the full angular distribution of D→ππℓν̅; for related literature, one may consult Refs. <cit.>.
We set up the kinematics for the D^-→π^+π^- ℓν̅
decay as shown in Fig. <ref>; the same conventions apply to D̅^0→π^+π^0 ℓν̅. The ππ system moves
along the z axis in the D^- rest frame.
θ_π^+ (θ_ℓ) is defined in the ππ (lepton
pair) rest frame as the angle between the z-axis and the flight
direction of the π^+ (ℓ^-). The azimuthal angle
ϕ is the angle between the ππ decay plane and the lepton-pair
plane.
Decay amplitudes for D→ππℓν̅_ℓ can
be divided into several individual pieces and each of them can be
expressed in terms of the Lorentz invariant helicity amplitudes.
The amplitude for the hadronic part can be obtained by the
evaluation of the matrix element:
A_λ = √( N_f_0/ρ)iG_F/√(2) V_cd^* ϵ_μ^*(h) ⟨ππ |c̅γ^μ(1-γ_5) d |D⟩,
where ϵ_μ(h) is an auxiliary polarization vector for the
lepton pair system and h= 0, ±, t, N_f_0/ρ=
√(λ) q^2β_l /(96 π^3m_D^3), β_l=1-m̂_l^2 and m̂_l= m_l/√(q^2). |V_cd| is taken to
be 0.22 <cit.>. The functions A_i can be
decomposed into different partial waves,
A_0/t (q^2, m_ππ^2,θ_π^+) = ∑_J=0,1,2... A^J_0/t (q^2, m_ππ^2)Y_J^0(θ_π^+,0),
A_||/⊥(q^2, m_ππ^2,θ_π^+) = ∑_J= 1,2... A^J_||/
⊥(q^2, m_ππ^2)Y_J^-1(θ_π^+,0),
A^J_0/t (q^2, m_ππ^2) = √( N_f_0/ρ) M_D(f_0/ρ, 0/t )(q^2)
L_f_0/ρ(m_ππ^2) ≡ | A^J_ 0/t |
e^iδ^J_ 0/t ,
A^J_ ||/⊥(q^2, m_ππ^2) = √( N_f_0/ρ) M_D(f_0/ρ, ||/⊥)(q^2)
L_f_0/ρ(m_ππ^2)≡ | A^J_||/⊥| e^iδ^J_ ||/
⊥.
Here J denotes the partial wave of the ππ system, and the subscript t denotes the time-like component of the virtual vector/axial-vector state that decays into the lepton pair. The L_f_0/ρ(m_ππ) is the lineshape, and for
the P-wave ρ we use the Breit–Wigner distribution:
L_ρ(m_ππ^2)= √( m_ρΓ_ρ→ππ/π)1/m_ππ^2 -m_ρ^2+ i m_ρΓ_ρ.
Considering the momentum dependence of the ρ decay, we have
the running width as
Γ_ρ (m_ππ^2) = Γ_ρ^0 ( |q⃗ |/ |
q⃗_0|)^3 m_ρ/m_ππ1+ (R|q⃗_0|)^2/1+ (R|q⃗ |)^2,
and the Blatt–Weisskopf parameter R=(2.1± 0.5± 0.5)
GeV^-1 <cit.>.
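For orientation, the lineshape and the running width can be coded up directly. The Python sketch below uses PDG-like values m_ρ ≃ 0.775 GeV and Γ_ρ^0 ≃ 0.149 GeV (our inputs, for illustration) together with the central value R = 2.1 GeV^-1, keeping the constant width in the normalization and the running width in the denominator:

    import numpy as np

    m_rho, gamma0, m_pi, R = 0.775, 0.149, 0.1396, 2.1   # GeV units

    def q_pi(m2):
        """Pion momentum in the pi-pi rest frame of invariant mass sqrt(m2)."""
        return np.sqrt(np.maximum(m2 / 4.0 - m_pi**2, 0.0))

    def gamma_run(m2):
        """Momentum-dependent rho width, as in the equation above."""
        q, q0 = q_pi(m2), q_pi(m_rho**2)
        return (gamma0 * (q / q0)**3 * (m_rho / np.sqrt(m2))
                * (1 + (R * q0)**2) / (1 + (R * q)**2))

    def L_rho(m2):
        """Breit-Wigner lineshape with a running width in the denominator."""
        return (np.sqrt(m_rho * gamma0 / np.pi)
                / (m2 - m_rho**2 + 1j * m_rho * gamma_run(m2)))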
The spin-0 final state has only one polarization state and the
amplitudes are
i M_D(f_0,0) = N_1 i[ √(λ)/√( q^2) F_1(q^2) ],
i M_D(f_0,t)=N_1 i[ m_D^2-m_f_0^2/√(q^2) F_0(q^2) ],
with N_1= iG_FV_cd^*/√(2). For mesons with
spin J≥1, the π^+π^- system can be either longitudinally
or transversely polarized and thus we have the following form:
i M_D(ρ,0) = -α_L^J N_1 i/2m_ρ√(q^2)[ (m_D^2-m_ρ^2-q^2)(m_D+m_ρ)A_1
-λ/m_D+m_ρA_2],
i M_D(ρ,±)
= -β_T^J N_1 i [ (m_D+m_ρ)A_1±√(λ)/m_D+m_ρV ],
i M_D(ρ, t) = -α_L^J i N_1√(λ)/√(q^2)A_0.
The α_L^J and β_T^J are products of the Clebsch–Gordan
coefficients
α_L^J = C^J,0_1,0;J-1,0 C^J-1,0_1,0; J-2,0⋯ C^2,0_1,0;1,0, β_T^J = C^J,1_1,1;J-1,0 C^J-1,0_1,0; J-2,0⋯ C^2,0_1,0;1,0.
For the sake of convenience, we define
i M_D(ρ, ⊥/||) = 1/√(2)[i M_D(ρ, +) ∓ i M_D(ρ, -)],
i M_D(ρ, ⊥) = -iβ_T^J √(2) N_1[
√(λ)V/m_D+m_ρ],
i M_D(ρ,||)= -iβ_T^J√(2) N_1[
(m_D+m_ρ)A_1 ].
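For bookkeeping purposes, the reduced helicity amplitudes can be assembled from the form factors as in the following Python sketch, where lam is the Källén function λ(m_D^2, m_ρ^2, q^2) and the common factor N_1 (together with the Clebsch–Gordan factors, which are trivial for J = 1) has been stripped off:

    import numpy as np

    def kallen(a, b, c):
        return a**2 + b**2 + c**2 - 2.0 * (a*b + b*c + c*a)

    def helicity_amps(q2, mD, mrho, V, A0, A1, A2):
        """Reduced D->rho helicity amplitudes (overall N_1 factor removed)."""
        lam = kallen(mD**2, mrho**2, q2)
        H0 = ((mD**2 - mrho**2 - q2) * (mD + mrho) * A1
              - lam * A2 / (mD + mrho)) / (2.0 * mrho * np.sqrt(q2))
        Hperp = np.sqrt(2.0 * lam) * V / (mD + mrho)
        Hpar = np.sqrt(2.0) * (mD + mrho) * A1
        Ht = np.sqrt(lam / q2) * A0
        return H0, Hperp, Hpar, Ht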
Using the generalized form factor, the matrix elements for D
decays into the spin-0 non-resonating ππ final state are
given as
A_0^0 = √( N_2) i1/m_ππ[ √(λ)/√( q^2) F_1^ππ(m_ππ^2, q^2) ] ,
A_t^0 =√( N_2) i 1/m_ππ[ m_D^2-m_ππ^2/√(q^2) F_0^ππ(m_ππ^2, q^2)],
N_2=N_1 N_ρρ_π/(16π^2), with ρ_π=
√(1-4m_π^2/m^2_ππ).
The above quantities can lead to the full angular distributions
d^5Γ/dm_ππ^2dq^2dcosθ_π^+dcosθ_l dϕ = 3/8[I_1(q^2, m_ππ^2, θ_π^+)
+I_2 (q^2, m_ππ^2, θ_π^+)
cos(2θ_ℓ)
+ I_3(q^2, m_ππ^2, θ_π^+) sin^2θ_ℓcos(2ϕ)
+I_4(q^2, m_ππ^2, θ_π^+) sin(2θ_ℓ)cosϕ
+I_5 (q^2, m_ππ^2, θ_π^+) sin(θ_ℓ) cosϕ
+I_6 (q^2, m_ππ^2, θ_π^+) cosθ_ℓ
+I_7 (q^2, m_ππ^2, θ_π^+)
sin(θ_ℓ) sinϕ
+I_8(q^2, m_ππ^2, θ_π^+) sin(2θ_ℓ)sinϕ
+I_9(q^2, m_ππ^2, θ_π^+) sin^2θ_ℓsin(2ϕ)].
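Given the nine coefficients I_i evaluated at a point (q^2, m_ππ^2, θ_π^+), the assembly of the five-fold distribution is purely kinematic, as the following Python sketch makes explicit:

    import numpy as np

    def d5Gamma(I, cth_l, phi):
        """Five-fold angular distribution; I = (I1,...,I9) already evaluated
        at fixed (q2, m_pipi2, theta_pi)."""
        sth_l = np.sqrt(1.0 - cth_l**2)
        I1, I2, I3, I4, I5, I6, I7, I8, I9 = I
        return 0.375 * (I1
                        + I2 * (2.0 * cth_l**2 - 1.0)           # cos(2 theta_l)
                        + I3 * sth_l**2 * np.cos(2.0 * phi)
                        + I4 * 2.0 * sth_l * cth_l * np.cos(phi)  # sin(2 theta_l)
                        + I5 * sth_l * np.cos(phi)
                        + I6 * cth_l
                        + I7 * sth_l * np.sin(phi)
                        + I8 * 2.0 * sth_l * cth_l * np.sin(phi)
                        + I9 * sth_l**2 * np.sin(2.0 * phi))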
For the general expressions of I_i, we refer the reader to the
appendix and to Refs. <cit.> for the
formulas with the S-, P- and D-waves. In the following, we shall
only consider the S-wave and P-wave contributions and thus the above
general expressions are reduced to:
I_1 = 1/4π[(1+m̂_l^2) |A^0_0|^2
+2 m̂_l^2 |A_t^0|^2] + 3/4πcos^2θ_π^+[(1+m̂_l^2) |A^1_0|^2
+2 m̂_l^2 |A_t^1|^2]
+ 2√(3)cosθ_π^+/4π[ (1+m̂_l^2) Re[A^0_0 A^1*_0] + 2m̂_l^2 Re[A^0_t A^1*_t] ]
+ 3+m̂_l^2/23/8πsin^2θ_π^+ [|A^1_⊥|^2+|A^1_|||^2 ],
I_2 = -β_l {1/4π |A^0_0|^2 + 3/4πcos^2θ_π^+ |A^1_0|^2 + 2√(3)cosθ_π^+/4π Re[A^0_0 A^1*_0] }+
1/2β_l 3/8πsin^2θ_π^+ (|A^1_⊥|^2+|A^1_|||^2),
I_3 = β_l 3/8πsin^2θ_π^+ (|A^1_⊥|^2-|A^1_|||^2),
I_4
= 2 β_l [ √(3)sinθ_π^+/4√(2)π Re[A^0_0A^1*_|| ] + 3sinθ_π^+cosθ_π^+/4√(2)π Re[A^1_0A^1*_|| ] ],
I_5
= 4{√(3)sinθ_π^+/4√(2)π ( Re[A^0_0A^1*_⊥ ] -m̂_l^2 Re[A^0_tA^1*_|| ] ) + 3 sinθ_π^+cosθ_π^+/4√(2)π ( Re[A^1_0A^1*_⊥ ] -m̂_l^2 Re[A^1_tA^1*_|| ]) } ,
I_6 = 4{3/8πsin^2θ_π^+ Re[ A^1_||A^1*_⊥ ] + m̂_l^2 1/4π Re[A_t^0 A_0^0*] + m̂_l^2 3/4πcos^2θ_π^+ Re[A_t^1 A_0^1*] },
I_7
= 4{√(3)/4√(2)πsinθ_π^+ ( Im[A^0_0A^1*_||] - m̂_l^2 Im[ A_t^0 A_⊥^1*] )
+ 3/4√(2)πsinθ_π^+cosθ_π^+ ( Im[A^1_0A^1*_||] - m̂_l^2 Im[ A_t^1 A_⊥^1*] )},
I_8 =
2 β_l {√(3)/4√(2)πsinθ_π^+ Im [A_0^0 A_⊥^1*]+ 3/4√(2)πsinθ_π^+cosθ_π^+ Im [A_0^1 A_⊥^1*]},
I_9
= 2 β_l 3/8πsin^2θ_π^+ Im[A_⊥^1 A_||^1* ].
Since the phase in P-wave contributions arise from the lineshape
which is the same for different polarizations, the I_9 term and
the second line in the I_7 are zero.
§.§ Differential and integrated decay widths
Using the narrow width approximation, we obtain the integrated
branching fraction:
B(D^-→ρ^0 e^-ν̅) = (2.24±0.09)× 10^-3 (LFQM) / (2.16±0.36)× 10^-3 (LCSR),
B(D^-→ρ^0 μ^-ν̅) = (2.15± 0.08)× 10^-3 (LFQM) / (2.06± 0.35)× 10^-3 (LCSR),
B(D̅^0→ρ^+ e^-ν̅) = (1.73± 0.07)× 10^-3 (LFQM) / (1.67± 0.27)× 10^-3 (LCSR),
where theoretical errors are from the heavy-to-light transition form factors.
These theoretical results are in good agreement with the
data <cit.>:
B(D^-→ρ^0 e^-ν̅) = (2.18^+0.17_-0.25)× 10^-3,
B(D^-→ρ^0 μ^-ν̅) = (2.4±0.4)× 10^-3,
B(D̅^0→ρ^+ e^-ν̅) = (1.77±0.16)× 10^-3.
The starting point for detailed analysis of D→ππℓν̅
is to obtain the double-differential distribution
d^2Γ/dq^2 dm_ππ^2 after
performing integration over all the angles
d^2Γ/dq^2 dm_ππ^2 = (1+m̂_l^2/2)( |A_0^0|^2 + |A_0^1|^2 +
|A_||^1|^2 + |A_⊥^1|^2 ) + 3/2m̂_l^2
(|A_t^1|^2 + |A_t^0|^2 ),
where apparently in the massless limit for the involved lepton, the
total normalization for angular distributions changes to the sum of
the S-wave and P-wave amplitudes
d^2Γ/dq^2 dm_ππ^2 =
|A_0^0|^2 + |A_0^1|^2 + |A_||^1|^2 + |A_⊥^1|^2.
In Fig. <ref>, we show
the dependence of the branching fraction on m_ππ for the D^-→π^+π^-e^-ν̅_e process. The solid, dashed, and dotted curves correspond to the total, S-wave, and P-wave contributions, respectively.
For the S-wave contribution, there is no resonance peak around 0.98 GeV; theoretically, this region should instead exhibit a dip.
Due to the quantum-number constraints, the process D̅^0→π^+π^0 ℓν̅ receives only a P-wave contribution, while D^-→π^0π^0 ℓν̅ is generated by the S-wave term alone.
To match the kinematic constraints implemented in experimental
measurements, one may explore generic observables with
m_ππ^2 integrated out:
⟨ O⟩ = ∫_(m_ρ-δ_m)^2^(m_ρ+δ_m)^2dm^2_ππdO/dm_ππ^2.
We use the following choice in
our study of D→ππℓν̅:
δ_m = Γ_ρ.
In the narrow-width limit, the integration of the lineshape gives
∫dm^2_ππ|L_ρ(m^2_ππ)|^2 = B(ρ^0→π^-π^+)=1.
However, with the explicit form given in
Eq. (<ref>), we find that the integration
∫_(m_ρ-δ_m)^2^(m_ρ+δ_m)^2dm^2_ππ|L_ρ(m^2_ππ)|^2 =0.70
is below the expected value.
On the other hand, the integrated S-wave lineshape in this region is
∫_(m_ρ-δ_m)^2^(m_ρ+δ_m)^2dm^2_ππ|L_S(m^2_ππ)|^2 = 0.37,
which is smaller but of the same order. Integrating from
m_ρ-Γ_ρ to m_ρ+Γ_ρ, we have
B(D^-→ρ^0 (→π^+π^-) e^-ν̅) = (1.57± 0.07)× 10^-3 (LFQM) / (1.51± 0.26)× 10^-3 (LCSR),
B(D^-→ρ^0 (→π^+π^-) μ^-ν̅) = (1.57± 0.07)× 10^-3 (LFQM) / (1.51± 0.26)× 10^-3 (LCSR).
The S-wave branching fractions for 2m_π<m_ππ<
1.0 GeV are given as
B(D^-→ (π^+π^-)_S e^-ν̅) = (6.99± 2.46)× 10^-4,
B(D^-→ ( π^+π^-)_S μ^-ν̅) = (7.20± 2.52)× 10^-4.
Above 1 GeV, the unitarized χPT is no longer applicable, and
thus we lack a reliable prediction there.
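The quoted value 0.70 is easy to reproduce numerically; with a fixed-width Breit–Wigner (a good first approximation to the lineshape above) and m_ρ = 0.775 GeV, Γ_ρ = 0.149 GeV, one finds in Python (using scipy):

    import numpy as np
    from scipy.integrate import quad

    m_rho, gamma = 0.775, 0.149   # GeV

    def L2(m2):
        """|L_rho(m2)|^2 with a constant width."""
        return (m_rho * gamma / np.pi) / ((m2 - m_rho**2)**2 + (m_rho * gamma)**2)

    val, _ = quad(L2, (m_rho - gamma)**2, (m_rho + gamma)**2)
    print(f"{val:.2f}")   # -> 0.70, matching the quoted value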
Furthermore, one may explore the q^2-dependent ratio
R_ππ^μ/e(q^2) = ⟨dΓ(D→ππμν̅_μ)/dq^2 ⟩/⟨dΓ( D→ππ
eν̅_e)/dq^2⟩.
Differential decay widths for D→ππℓν̅_ℓ are
given in Fig. <ref>, with ℓ= e in panel (a) and
ℓ=μ in panel (b). The q^2-dependent ratio
R_ππ^μ/e is given in panel (c). Errors from the form
factors and QCD condensate parameter B_0 are shown as shadowed
bands, and most errors cancel in the ratio R_ππ^μ/e
given in panel (c).
§.§ Distribution in θ_π^+
We explore the distribution in θ_π^+:
d^3Γ/dq^2 dm_ππ^2 dcosθ_π^+ = π/2 (3I_1-I_2)
= 1/8{ (4+2m̂_l^2) |A_0^0|^2 + 6m̂_l^2 |A_t^0|^2
+ √(3) (8+4m̂_l^2) cosθ_π^+ Re[A_0^0 A_0^1*] + 12√(3)m̂_l^2 cosθ_π^+ Re[A_t^0 A_t^1*]
+ (12+ 6m̂_l^2) |A_0^1|^2 cos^2θ_π^+ + 18m̂_l^2 cos^2θ_π^+ |A_t^1|^2
+ (6+3m̂_l^2) sin^2θ_π^+ (|A_⊥^1|^2 +
|A_||^1|^2) }.
Compared to the distribution with only P-wave contribution, namely
D→ρ(→ππ)ℓν̅, the first two lines of
Eq. (<ref>) are new: the first one is the
S-wave ππ contribution, while the second line arises from
the interference of S-wave and P-wave. Based on this
interference, one can define a forward–backward asymmetry for the
involved pion,
A_FB^π ≡ [∫_0^1
- ∫_-1^0] dcosθ_π^+d^3Γ/dq^2 dm_ππ^2
dcosθ_π^+
= √(3)/2 (2+ m̂_l^2) Re[A_0^0 A_0^1*] + 3√(3)/2m̂_l^2 Re[A_t^0 A_t^1*].
We define the polarization fraction at a given value of q^2 and
m_ππ^2:
F_S (q^2, m_ππ^2) = (1+m̂_l^2/2) |A_0^0|^2 + 3/2 m̂_l^2 |A_t^0|^2 /d^2Γ/(dq^2 dm_ππ^2) ,
F_P (q^2, m_ππ^2) = (1+m̂_l^2/2)(|A_0^1|^2 + |A_||^1|^2 + |A_⊥^1|^2 ) + 3/2m̂_l^2 |A^1_t|^2 /d^2Γ/(dq^2 dm_ππ^2) ,
and also
F_L (q^2, m_ππ^2) = (1+m̂_l^2/2)|A_0^1(q^2, m_ππ^2)|^2 + 3/2m̂_l^2 |A^1_t|^2 / (1+m̂_l^2/2)(|A_0^1|^2 + |A_||^1|^2 + |A_⊥^1|^2 ) + 3/2m̂_l^2 |A^1_t|^2 ,
A_FB^π(q^2, m_ππ^2)= √(3)/2 (2+m̂_l^2) Re[A_0^0 A_0^1*] + 3√(3)/2m̂_l^2 Re[A_t^0 A_t^1*] /d^2Γ/(dq^2 dm_ππ^2) .
By definition, F_S+ F_P=1.
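In code, these fractions are simple ratios of amplitude moduli. A minimal Python sketch at fixed (q^2, m_ππ^2), with ml2 standing for m̂_l^2 and the complex partial-wave amplitudes supplied as inputs:

    import numpy as np

    def wave_fractions(A0_0, At_0, A0_1, Apar_1, Aperp_1, At_1, ml2):
        """S-/P-wave fractions and the pion forward-backward asymmetry."""
        w = 1.0 + ml2 / 2.0
        S = w * abs(A0_0)**2 + 1.5 * ml2 * abs(At_0)**2
        P = (w * (abs(A0_1)**2 + abs(Apar_1)**2 + abs(Aperp_1)**2)
             + 1.5 * ml2 * abs(At_1)**2)
        afb = (np.sqrt(3) / 2.0) * ((2.0 + ml2) * (A0_0 * np.conj(A0_1)).real
                                    + 3.0 * ml2 * (At_0 * np.conj(At_1)).real)
        return S / (S + P), P / (S + P), afb / (S + P)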
In Fig. <ref>, we give our results for the S-wave
fraction ⟨ F_S⟩ (panel (a)), longitudinal
polarization fraction ⟨ F_L⟩ in P-wave contributions
(panel (b)) and the asymmetry ⟨A_FB^π⟩ (panel (c)). Only the curves for the light lepton e are
shown since the results for the μ lepton are similar. These
observables and the following ones are defined by the integration
over m_ππ^2; for instance,
⟨ F_S (q^2) ⟩ = ∫dm_ππ^2 [ (1+m̂_l^2/2) |A_0^0|^2 + 3/2 m̂_l^2 |A_t^0|^2] /∫dm_ππ^2 d^2Γ/(dq^2 dm_ππ^2) ,
and likewise for the others.
§.§ Distribution in θ_l and forward–backward asymmetry
Integrating over θ_π^+ and ϕ, we have the
distribution:
d^3Γ/dq^2 dm_ππ^2 dcosθ_l = 3π/4∫dcosθ_π^+ (I_1 +I_2cos(2θ_l) +I_6 cosθ_l )
= 3/4m̂_l^2 ( (|A_t^0|^2 +|A_t^1|^2 )) + 3/2cosθ_l ( Re[A_||^1 A_⊥^1*] + m̂_l^2 Re[A_t^0 A_0^0*+A_t^1 A_0^1*] )
+3/4 [1 -(1-m̂_l^2)cos^2θ_l] (|A_0^0|^2 +|A_0^1|^2 ) + 3/8 [ (1+m̂_l^2) + (1-m̂_l^2) cos^2θ_l ] (|A_||^1|^2 +|A_⊥^1|^2 ).
The forward–backward asymmetry is defined as
A_FB^l ≡ [∫_0^1
- ∫_-1^0] dcosθ_l d^3Γ/dq^2 dm_ππ^2
dcosθ_l = 3/2 ( Re[A_||^1 A_⊥^1*] + m̂_l^2 Re[A_t^0 A_0^0*+A_t^1 A_0^1*] ) ,
and the results for A_FB^l are given in
Fig. <ref>.
§.§ Distribution in the azimuth angle ϕ
The angular distribution in ϕ is derived as
d^3Γ/dq^2 dm_ππ^2
dϕ = a_ϕ +b_ϕ^c cosϕ + b_ϕ^s sinϕ +
c_ϕ^c cos(2ϕ) + c_ϕ^s sin(2ϕ)
with
a_ϕ = 1/2πd^2Γ/dq^2 dm_ππ^2 ,
b_ϕ^c = 3/16π∫ I_5 dcosθ_π^+ = 3√(3)/32√(2)π( Re[A_0^0 A_⊥^1*]-m̂_l^2 Re[A_t^0 A_⊥^1*])
b_ϕ^s = 3/16π∫ I_7 dcosθ_π^+= 3√(3)/32√(2)π( Im[A_0^0 A_⊥^1*]-m̂_l^2 Im[A_t^0 A_⊥^1*])
c_ϕ^c = 1/2∫ I_3 dcosθ_π^+ = 1/4πβ_l (|A_⊥^1|^2-|A_||^1|^2),
c_ϕ^s =1/2∫ I_9 dcosθ_π^+= 1/ 2 πβ_l Im[A_⊥^1 A_||^1*].
Since the complex phase in the P-wave amplitudes comes from the
Breit–Wigner lineshape, the coefficient c_ϕ^s vanishes.
Numerical results for the normalized coefficients using the two
sets of form factors are shown in Fig. <ref>. The
coefficients b_ϕ^c and b_ϕ^s contain a very small
prefactor, 3√(3)/(32√(2)π) ∼ 0.037, and are thus
numerically tiny, as shown in this figure. The c_ϕ^c is also
small due to the cancellation between |A_⊥^1|^2 and
|A_||^1|^2.
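On the analysis side, the coefficients of the azimuthal distribution can be projected out of an event sample by the orthogonality of the trigonometric basis. A Python sketch for (possibly weighted) events with measured azimuth φ:

    import numpy as np

    def phi_coefficients(phi, weights):
        """Extract (a, b_c, b_s, c_c, c_s) from an event sample by Fourier
        projection of dGamma/dphi onto {1, cos, sin, cos2, sin2}."""
        a  = np.sum(weights) / (2.0 * np.pi)
        bc = np.sum(weights * np.cos(phi)) / np.pi
        bs = np.sum(weights * np.sin(phi)) / np.pi
        cc = np.sum(weights * np.cos(2.0 * phi)) / np.pi
        cs = np.sum(weights * np.sin(2.0 * phi)) / np.pi
        return a, bc, bs, cc, cs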
§.§ Polarization of μ lepton
In this work, we also give the polarized angular distributions as
d^5Γ(λ_μ)/dm_ππ^2dq^2dcosθ_π^+dcosθ_l dϕ = 3/8[I_1^(λ_μ) +I_2 ^(λ_μ)cos(2θ_l) + I_3^(λ_μ)sin^2θ_l
cos(2ϕ)
+I_4^(λ_μ)sin(2θ_l)cosϕ +I_5^(λ_μ)sin(θ_l) cosϕ +I_6 ^(λ_μ)cosθ_l
+I_7^(λ_μ)sin(θ_l) sinϕ +I_8^(λ_μ)sin(2θ_l)sinϕ +I_9^(λ_μ)sin^2θ_l
sin(2ϕ)],
with the coefficients
I_1^(-1/2) = |A_0|^2 +3/2 (|A_⊥|^2
+|A_|||^2),
I_2^(-1/2) = - |A_0|^2+ 1/2 (|A_⊥|^2
+|A_|||^2),
I_3^(-1/2) = |A_⊥|^2-|A_|||^2 ,
I_4^(-1/2) = 2 Re(A_0A_||^*),
I_5^(-1/2)
=4 Re(A_0A_⊥^*),
I_6^(-1/2) = 4
Re(A_||A^*_⊥),
I_7^(-1/2) = 4 Im(A_0A^*_||) ,
I_8^(-1/2) = 2 Im(A_0A^*_⊥),
I_9^(-1/2) =2 Im(A_⊥A^*_||).
The coefficients for the λ_μ=1/2 are easily obtained by
comparing Eqs. (<ref>) and
(<ref>). For instance, the lepton
polarization fraction is defined as
A^λ_μ(q^2, m_ππ^2 ) = d^2Γ^(1/2)/dq^2dm_ππ^2 - d^2Γ^(-1/2)/dq^2dm_ππ^2 /d^2Γ/dq^2dm_ππ^2
= (-1+ m̂_l^2/2)( |A_0^0|^2 + |A_0^1|^2 + |A_||^1|^2 + |A_⊥^1|^2 ) + 3/2m̂_l^2 (|A_t^1|^2 + |A_t^0|^2 )/d^2Γ/dq^2dm_ππ^2 ,
and we show the numerical results in Fig. <ref>.
§.§ Theoretical uncertainties
Before closing this section, we briefly discuss the
theoretical uncertainties of this analysis. The parametric errors
from the heavy-to-light transition form factors and the QCD condensate
parameter B_0 have been included above. As one can see,
these uncertainties are sizable for the branching fractions and other
related observables, but are negligible in ratios like
R_ππ^μ/e. This is understandable, since most
uncertainties cancel in the ratio.
For the heavy-to-light form factors, we have used the LCSR and LFQM
results. In LCSR, the theoretical accuracy for most form factors
is at leading order in α_s. An analysis of B_s→
f_0 <cit.> has indicated that the NLO radiative
corrections to the form factors may reach 20%. The radiative
corrections are, in general, channel dependent and should be
calculated in a high-precision study. It should be pointed out that
the radiative corrections are not under control in the light-front
quark model.
A third type of uncertainty resides in the scalar ππ form
factor. In this work, we have used the unitarized results from
Refs. <cit.>, where the low-energy
constants L_i^r are obtained by fitting the J/ψ decay data.
A Muskhelishvili–Omnès formalism has been developed for the
scalar ππ form factor in Ref. <cit.>. Compared to
the results in Ref. <cit.>, we find an overall
agreement in the shape of the non-strange ππ form factor, but
the modulus from Ref. <cit.> is about 20% larger.
This would induce an uncertainty of about 40% in the branching
ratios of D→ππℓν̅_ℓ, while the ratio
observables are not affected.
Finally, the Watson theorem does not strictly guarantee the validity of
Eq. (<ref>), the matching between the D→ππ and D→ f_0
form factors. As we have discussed in Sec. II, this approximation
might be improved in the future.
§ CONCLUSIONS
In summary, we have presented a theoretical analysis of the D^-→π^+π^- ℓν̅ and D̅^0→π^+π^0 ℓν̅
decays. We have constructed a general angular distribution which
can include arbitrary partial waves of ππ. Retaining the
S-wave and P-wave contributions we have studied the branching
ratios, forward–backward asymmetries and a few other observables.
The P-wave contribution is dominated by ρ^0 resonance, and the
S-wave contribution is analyzed using the unitarized chiral
perturbation theory. The obtained branching fraction for D→ρℓν, of order 10^-3, is consistent with the
available experimental data, while the S-wave contribution is
found to have a branching ratio of order 10^-4; this
prediction can be tested by experiments like BESIII and LHCb.
The BESIII collaboration has accumulated about 10^7 D^0 events
and will collect about 3 fb^-1 of data at the
center-of-mass energy √(s)= 4.17 GeV to produce
D_s^+D_s^- pairs <cit.>. All these data can
be used to study the charm decays into f_0 mesons. In
addition, the sizable branching fractions also indicate a promising
prospect at the ongoing LHC experiment <cit.>, the
forthcoming Super-KEKB factory <cit.>, and the
under-design Super Tau-Charm factory. Future measurements can be
used to study the π–π scattering phase shift.
§ ACKNOWLEDGEMENTS
We thank Jian-Ping Dai, Liao-Yuan Dong, Hai-Bo Li and Lei Zhang for useful discussions.
This work is supported in part by the National Natural
Science Foundation of China under Grant Nos. 11575110 and 11655002,
by the Natural Science Foundation of Shanghai under Grant
Nos. 15DZ2272100 and 15ZR1423100, by the Young Thousand Talents Plan, and by the Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education.
§ ANGULAR COEFFICIENTS
In the angular distribution, the coefficients have the form
I_1 = (1+m̂_l^2) |A_0|^2
+2 m̂_l^2 |A_t|^2 + (3+m̂_l^2)/2(|A_⊥|^2
+|A_|||^2)
I_2 = -β_l |A_0|^2+ β_l /2 (|A_⊥|^2
+|A_|||^2),
I_3 = β_l (|A_⊥|^2-|A_|||^2),
I_4 = 2 β_l [ Re(A_0A_||^*)],
I_5 = 4 [ Re(A_0A_⊥^*) -m̂_l^2 Re(A_tA_||^*) ],
I_6 = 4 [ Re(A_||A^*_⊥)+ m̂_l^2 Re(A_tA^*_0)],
I_7 = 4[ Im(A_0A^*_||)-m̂_l^2 Im(A_tA^*_⊥)],
I_8 = 2 β_l [ Im(A_0A^*_⊥)],
I_9 = 2β_l [ Im(A_⊥A^*_||)].
Substituting the expressions for A_i into the above equation, we obtain the general expressions
I_1(q^2, m_ππ^2, θ_π^+) = ∑_J=0,...{ |Y_J^0(θ_π^+, 0)|^2 [(1+m̂_l^2) |A^J_0|^2
+2 m̂_l^2 |A_t^J|^2]
+ 2∑_ J'=J+1, ... Y_J^0(θ_π^+, 0)Y_J'^0(θ_π^+, 0) [ cos(δ_0^J -
δ^J'_0)|A^J_0||A^J'*_0| + 2m̂_l^2
cos (δ_t^J -δ_t^J' )|A^J_t||A^J'_t|]}
+ 3+m̂_l^2/2∑_J=1,...{ |Y_J^-1(θ_π^+, 0)|^2 [ [|A^J_⊥|^2+|A^J_|||^2 ] ]
+ ∑_ J'=J+1, ... Y_J^-1(θ_π^+, 0)Y_J'^-1(θ_π^+, 0) [ 2cos(δ_⊥^J
- δ_⊥^J')|A_⊥^J||A_⊥^J'| ] },
I_2(q^2, m_ππ^2, θ_π^+) = -β_l ∑_J=0,...{ |Y_J^0|^2 |A^J_0(θ_π^+, 0)|^2 + 2∑_ J'=J+1, ... Y_J^0(θ_π^+, 0)Y_J'^0(θ_π^+, 0)
cos(δ_0^J - δ^J'_0)|A^J_0 A^J'_0| }
+
1/2β_l ∑_J=1,...{ |Y_J^-1(θ_π^+, 0)|^2 (|A^J_⊥|^2+|A^J_|||^2)
+2∑_ J'=J+1 Y_J^-1(θ_π^+, 0)Y_J'^-1(θ_π^+, 0)[ cos(δ_⊥^J
- δ^J'_⊥)|A^J_⊥ A^J'_⊥| + cos(δ_||^J
- δ^J'_||)|A^J_||A^J'_||| ] },
I_3(q^2, m_ππ^2, θ_π^+) = β_l ∑_J=1,...{ |Y_J^-1(θ_π^+, 0)|^2 (|A^J_⊥|^2-|A^J_|||^2)
+2∑_ J'=J+1,... Y_J^-1(θ_π^+, 0)Y_J'^-1(θ_π^+, 0)[ cos(δ_⊥^J
- δ^J'_⊥)|A^J_⊥ A^J'_⊥| - cos(δ_||^J
- δ^J'_||)|A^J_||A^J'_||| ] },
I_4(q^2, m_ππ^2, θ_π^+)
= 2 β_l ∑_J=0, ...∑_J'=1, ..[ Y_J^0(θ_π^+, 0) Y_J'^-1(θ_π^+, 0) | A^J_0A^J'*_|| | cos(δ_0^J -δ_||^J') ],
I_5(q^2, m_ππ^2, θ_π^+)
= 4 ∑_J=0, ...∑_J'=1, ..Y_J^0(θ_π^+, 0) Y_J'^-1 (θ_π^+, 0) [ | A^J_0A^J'*_⊥ | cos(δ_0^J -δ_⊥^J') -m̂_l^2 | A^J_tA^J'*_|| | cos(δ_t^J -δ_||^J')],
I_6(q^2, m_ππ^2, θ_π^+) = 4 ∑_J,J'=1,...{ Y_J^-1 (θ_π^+, 0)Y_J'^-1 (θ_π^+, 0)| A^J_||A^J'*_⊥ | cos(δ_||^J -δ_⊥^J') }
+ m̂_l^2 ∑_J,J'=0,...{ Y_J^0(θ_π^+, 0) Y_J'^0(θ_π^+, 0) | A^J_tA^J'*_0 | cos(δ_t^J -δ_0^J') },
I_7(q^2, m_ππ^2, θ_π^+)
= 4 ∑_J=0, ...∑_J'=1, .. Y_J^0(θ_π^+, 0) Y_J'^-1(θ_π^+, 0) [ | A^J_0A^J'*_|| | sin(δ_0^J -δ_||^J') -m̂_l^2 | A^J_tA^J'*_⊥ | sin(δ_t^J -δ_⊥^J')],
I_8 (q^2, m_ππ^2, θ_π^+) =
2 β_l ∑_J=0, ...∑_J'=1, ..[ Y_J^0 (θ_π^+, 0)Y_J'^-1(θ_π^+, 0) | A^J_0A^J'*_⊥ | sin(δ_0^J -δ_⊥^J') ],
I_9(q^2, m_ππ^2, θ_π^+)
= 2β_l ∑_J=1, ...∑_J'=1, ..[ Y_J^-1(θ_π^+, 0) Y_J'^-1(θ_π^+, 0) | A^J_⊥A^J'*_|| | sin(δ_⊥^J -δ_||^J') ].
11
CLEO:2011ab
S. Dobbs et al. [CLEO Collaboration],
Phys. Rev. Lett. 110, no. 13, 131802 (2013)
[arXiv:1112.2884 [hep-ex]].
Wang:2009azc
W. Wang and C. D. Lü,
Phys. Rev. D 82, 034016 (2010)
[arXiv:0910.0613 [hep-ph]].
Achasov:2012kk
N. N. Achasov and A. V. Kiselev,
Phys. Rev. D 86, 114010 (2012)
[arXiv:1206.5500 [hep-ph]].
Ablikim:2015orh
M. Ablikim et al. [BESIII Collaboration],
Phys. Lett. B 753, 629 (2016)
[arXiv:1507.08188 [hep-ex]].
Lu:2011jm
C. D. Lü and W. Wang,
Phys. Rev. D 85, 034014 (2012)
[arXiv:1111.1513 [hep-ph]].
Meissner:2013pba
U. G. Meißner and W. Wang,
JHEP 1401, 107 (2014)
[arXiv:1311.5420 [hep-ph]].
Meissner:2013hya
U. G. Meißner and W. Wang,
Phys. Lett. B 730, 336 (2014)
[arXiv:1312.3087 [hep-ph]].
Wang:2015uea
W. F. Wang, H. n. Li, W. Wang and C. D. Lü,
Phys. Rev. D 91, no. 9, 094024 (2015)
[arXiv:1502.05483 [hep-ph]].
Wang:2015paa
W. Wang and R. L. Zhu,
Phys. Lett. B 743, 467 (2015)
[arXiv:1502.05104 [hep-ph]].
Shi:2015kha
Y. J. Shi and W. Wang,
Phys. Rev. D 92, no. 7, 074038 (2015)
[arXiv:1507.07692 [hep-ph]].
Xie:2014tma
J. J. Xie, L. R. Dai and E. Oset,
Phys. Lett. B 742, 363 (2015)
[arXiv:1409.0401 [hep-ph]].
Sekihara:2015iha
T. Sekihara and E. Oset,
Phys. Rev. D 92, no. 5, 054038 (2015)
[arXiv:1507.02026 [hep-ph]].
Oset:2016lyh
E. Oset et al.,
Int. J. Mod. Phys. E 25, 1630001 (2016)
[arXiv:1601.03972 [hep-ph]].
Wang:2016rlo
W. F. Wang and H. n. Li,
Phys. Lett. B 763, 29 (2016)
[arXiv:1609.04614 [hep-ph]].
Kang:2013jaa
X. W. Kang, B. Kubis, C. Hanhart and U. G. Meißner,
Phys. Rev. D 89, 053015 (2014)
[arXiv:1312.1193 [hep-ph]].
Faller:2013dwa
S. Faller, T. Feldmann, A. Khodjamirian, T. Mannel and D. van Dyk,
Phys. Rev. D 89, no. 1, 014015 (2014)
[arXiv:1310.6660 [hep-ph]].
Niecknig:2015ija
F. Niecknig and B. Kubis,
JHEP 1510, 142 (2015)
[arXiv:1509.03188 [hep-ph]].
Daub:2015xja
J. T. Daub, C. Hanhart and B. Kubis,
JHEP 1602, 009 (2016)
[arXiv:1508.06841 [hep-ph]].
Albaladejo:2016mad
M. Albaladejo, J. T. Daub, C. Hanhart, B. Kubis and B. Moussallam,
JHEP 1704, 010 (2017)
[arXiv:1611.03502 [hep-ph]].
Wirbel:1985ji
M. Wirbel, B. Stech and M. Bauer,
Z. Phys. C 29, 637 (1985).
Scora:1995ty
D. Scora and N. Isgur,
Phys. Rev. D 52, 2783 (1995)
[hep-ph/9503486].
Fajfer:2005ug
S. Fajfer and J. F. Kamenik,
Phys. Rev. D 72, 034029 (2005)
[hep-ph/0506051].
Verma:2011yw
R. C. Verma,
J. Phys. G 39, 025005 (2012)
[arXiv:1103.2973 [hep-ph]].
Cheng:2003sm
H. Y. Cheng, C. K. Chua and C. W. Hwang,
Phys. Rev. D 69, 074025 (2004)
[hep-ph/0310359].
Wu:2006rd
Y. L. Wu, M. Zhong and Y. B. Zuo,
Int. J. Mod. Phys. A 21, 6125 (2006)
[hep-ph/0604007].
Colangelo:2010bg
P. Colangelo, F. De Fazio and W. Wang,
Phys. Rev. D 81, 074001 (2010)
[arXiv:1002.2880 [hep-ph]].
Olive:2016xmw
C. Patrignani et al. [Particle Data Group],
Chin. Phys. C 40, no. 10, 100001 (2016).
DeFazio:2001uc
F. De Fazio and M. R. Pennington,
Phys. Lett. B 521, 15 (2001)
[hep-ph/0104289].
Doring:2013wka
M. Döring, U. G. Meißner and W. Wang,
JHEP 1310, 011 (2013)
[arXiv:1307.0947 [hep-ph]].
Diehl:2003ny
M. Diehl,
Phys. Rept. 388, 41 (2003)
[hep-ph/0307382].
Cheng:2005nb
H. Y. Cheng, C. K. Chua and K. C. Yang,
Phys. Rev. D 73, 014017 (2006)
[hep-ph/0508104].
Bar:2012ce
O. Bär and M. Golterman,
Phys. Rev. D 87, no. 1, 014505 (2013)
[arXiv:1209.2258 [hep-lat]].
Diehl:2005rn
M. Diehl, A. Manashov and A. Schäfer,
Phys. Lett. B 622, 69 (2005)
[hep-ph/0505269].
Diehl:1998dk
M. Diehl, T. Gousset, B. Pire and O. Teryaev,
Phys. Rev. Lett. 81, 1782 (1998)
[hep-ph/9805380].
Diehl:2000uv
M. Diehl, T. Gousset and B. Pire,
Phys. Rev. D 62, 073014 (2000)
[hep-ph/0003233].
Colangelo:2015kha
G. Colangelo, E. Passemar and P. Stoffer,
Eur. Phys. J. C 75, 172 (2015)
[arXiv:1501.05627 [hep-ph]].
Gasser:1983yg
J. Gasser and H. Leutwyler,
Annals Phys. 158, 142 (1984).
Gasser:1984gg
J. Gasser and H. Leutwyler,
Nucl. Phys. B 250, 465 (1985).
Gasser:1984ux
J. Gasser and H. Leutwyler,
Nucl. Phys. B 250, 517 (1985).
Meissner:2000bc
U. G. Meißner and J. A. Oller,
Nucl. Phys. A 679, 671 (2001)
[hep-ph/0005253].
Bijnens:1998fm
J. Bijnens, G. Colangelo and P. Talavera,
JHEP 9805, 014 (1998)
[hep-ph/9805389].
Bijnens:2003xg
J. Bijnens and P. Dhonte,
JHEP 0310, 061 (2003)
[hep-ph/0307044].
Oller:1998hw
J. A. Oller, E. Oset and J. R. Peláez,
Phys. Rev. D 59, 074001 (1999)
Erratum: [Phys. Rev. D 60, 099906 (1999)]
Erratum: [Phys. Rev. D 75, 099903 (2007)]
[hep-ph/9804209].
Oller:1997ti
J. A. Oller and E. Oset,
Nucl. Phys. A 620, 438 (1997)
Erratum: [Nucl. Phys. A 652, 407 (1999)]
[hep-ph/9702314].
Lahde:2006wr
T. A. Lähde and U. G. Meißner,
Phys. Rev. D 74, 034021 (2006)
[hep-ph/0606133].
Oller:2007xd
J. A. Oller and L. Roca,
Phys. Lett. B 651, 139 (2007)
[arXiv:0704.0039 [hep-ph]].
Cabibbo:1965zzb
N. Cabibbo and A. Maksymowicz,
Phys. Rev. 137, B438 (1965)
Erratum: [Phys. Rev. 168, 1926 (1968)].
Pais:1968zza
A. Pais and S. B. Treiman,
Phys. Rev. 168, 1858 (1968).
delAmoSanchez:2010fd
P. del Amo Sanchez et al. [BaBar Collaboration],
Phys. Rev. D 83, 072001 (2011)
[arXiv:1012.1810 [hep-ex]].
Lee:1992ih
C. L. Y. Lee, M. Lu and M. B. Wise,
Phys. Rev. D 46, 5040 (1992).
Ablikim:2014cea
M. Ablikim et al. [BESIII Collaboration],
Phys. Rev. D 89, no. 5, 052001 (2014)
[arXiv:1401.3083 [hep-ex]].
Asner:2008nq
D. M. Asner et al.,
Int. J. Mod. Phys. A 24, S1 (2009)
[arXiv:0809.1869 [hep-ex]].
Bediaga:2012py
R. Aaij et al. [LHCb Collaboration],
Eur. Phys. J. C 73, no. 4, 2373 (2013)
[arXiv:1208.3355 [hep-ex]].
Aushev:2010bq
T. Aushev et al.,
arXiv:1002.5012 [hep-ex].
| null | null | null | null | null | null
http://arxiv.org/abs/1701.08209v1 | 20170127222352 | Consistent SPH Simulations of Protostellar Collapse and Fragmentation | [
"Ruslan Gabbasov",
"Leonardo Di G. Sigalotti",
"Fidel Cruz",
"J. Klapp",
"J. M. Ramírez-Velasquez"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Instituto de Ciencias Básicas e Ingenierías, Universidad Autónoma del
Estado de Hidalgo (UAEH),
Ciudad Universitaria, Carretera Pachuca-Tulacingo km. 4.5 S/N, Colonia Carboneras,
Mineral de la Reforma, C.P. 42184, Hidalgo, Mexico
Área de Física de Procesos Irreversibles, Departamento de Ciencias Básicas,
Universidad Autónoma Metropolitana-Azcapotzalco (UAM-A), Av. San Pablo 180,
C.P. 02200, Ciudad de México, Mexico
Departamento de Física, Instituto Nacional de Investigaciones Nucleares (ININ),
Carretera México-Toluca km. 36.5, La Marquesa, 52750 Ocoyoacac, Estado de México, Mexico
Centro de Física, Instituto Venezolano de Investigaciones Científicas (IVIC),
Apartado Postal 20632, Caracas 1020A, Venezuela
1ABACUS-Centro de Matemáticas Aplicadas y Cómputo de Alto Rendimiento,
Departamento de Matemáticas, Centro de Investigación y de Estudios Avanzados (Cinvestav-IPN),
Carretera México-Toluca km. 38.5, La Marquesa, 52740 Ocoyoacac, Estado de México, Mexico
We study the consistency and convergence of smoothed particle hydrodynamics (SPH), as a
function of the interpolation parameters, namely the number of particles N, the
number of neighbors n, and the smoothing length h, using simulations
of the collapse and fragmentation of protostellar rotating cores. The calculations are
made using a modified version of the GADGET-2 code that employs an improved scheme for
the artificial viscosity and power-law dependences of n and h on N, as was recently
proposed by Zhu et al., which comply with the combined limit N→∞,
h→ 0, and n→∞ with n/N→ 0 for full SPH consistency, as the domain
resolution is increased. We apply this realization to the
“standard isothermal test case” in the variant calculated by Burkert & Bodenheimer
and the Gaussian cloud model of Boss to investigate the response of the method to adaptive
smoothing lengths in the presence of large density and pressure gradients. The degree
of consistency is measured by tracking how well the estimates of the consistency integral
relations reproduce their continuous counterparts. In particular, C^0 and C^1
particle consistency is demonstrated, meaning that the calculations are close to
second-order accuracy. As long as n is increased with N, mass resolution also
improves as the minimum resolvable mass M_ min∼ n^-1. This aspect allows
proper calculation of small-scale structures in the flow associated with the formation
and instability of protostellar disks around the growing fragments, which are seen to
develop a spiral structure and fragment into close binary/multiple systems as supported
by recent observations.
§ INTRODUCTION
The method of smoothed particle hydrodynamics (SPH) was developed in the late 1970s
by <cit.> and <cit.> as a numerical tool for solving the equations
of gravitohydrodynamics in three-dimensional open space. Today, the use of SPH spans
many areas of astrophysics and cosmology as well as a broad range of fluid and
solid mechanics related areas. However, despite its extensive applications and
recent progress in consolidating its theoretical foundations, SPH still has unknown
properties that need to be investigated. A fundamental numerical aspect of SPH is
the lack of particle consistency, which affects the accuracy and convergence of the
method. Several modified techniques and corrective methods have been proposed to
restore particle consistency in fluid dynamics calculations <cit.>, the most successful being those based on Taylor series
expansions of the kernel approximations of a function and its derivatives. If m
derivatives are retained in the series expansions, the resulting kernel and particle
approximations will have (m+1)th-order accuracy or C^m consistency. However,
the improved accuracy of these methods comes at the price of involving matrix
inversions, which represent a major computational burden for time-evolving simulations
and eventually a loss of numerical stability due to matrix conditioning
for some specific problems. On the other hand, while these corrective methods solve
for particle inconsistency due to truncation of the kernel at model boundaries, it is
not clear how irregular particle distributions and the use of variable
smoothing lengths affect the consistency (and therefore the accuracy) of the solutions.
Recently, <cit.> showed that the condition for the particle approximation
to restore C^0 consistency and achieve asymptotic error decay is that the
volumes defined by the particles and the inter-particle faces partition the entire
domain, i.e., constitute a partition of unity. They found that this condition is
satisfied by relaxing the particles under a constant pressure field by keeping the
particle volumes invariant, yielding convergence rates for such a relaxed distribution
that are the same as those for particles on a perfect regular lattice. Quite curiously,
they also observed that the relaxed particle distributions obtained this way resemble
that of liquid molecules resulting from microscopic simulations. A method to improve
the SPH estimate of derivatives which is not affected by particle disorder was also
devised recently by <cit.>.
In comparison little work has been done to improve the SPH consistency in astrophysical
applications. In many cases, especially those involving self-gravitating flows, large
density gradients arise and an adaptive kernel is used to guarantee spatial resolution
in regions of high density. It has long been recognized that spatially adaptive
calculations where a variable smoothing length is employed turn out to be inconsistent
<cit.>. It was not until recently that <cit.> identified another source
of particle inconsistency associated with the finite number of neighbors within the
compact support of a smoothed function. It is common practice in SPH calculations to
assume that a large number of total particles, N, and a small smoothing length, h,
are sufficient conditions to achieve consistent solutions, while holding the number of
neighbor particles, n, fixed at some value n≪ N. <cit.> demonstrated that
C^0 particle consistency, i.e., satisfaction of the discrete normalization condition of
the kernel function can only be achieved when n is sufficiently large for which the finite
SPH sum approximation approaches the continuous limit. This result is consistent with the
error analysis of the SPH representation of the continuity and momentum equations
carried out by <cit.>, who found that particle consistency is completely lost
due to zeroth-order error terms that would persist when working with a finite number of
neighbors even though N→∞ and h→ 0. Indeed, as the resolution is increased,
approaching the limit N→∞ and h→ 0, the overall error will grow at a
faster rate if the magnitude of the zeroth-order error terms remains
constant. Based on these observations, full particle consistency is possible in SPH
only if the joint limit N→∞, h→ 0, and n→∞ is satisfied
<cit.>. However, we recall that this combined limit was first noted by
<cit.> using a simple linear analysis on one-dimensional sound wave propagation.
In particular, he found that SPH is fully consistent in this limit with N→∞
faster than n such that n/N→ 0.
On the other hand, <cit.> conjectured that for quasi-regularly distributed
particles, the discretization error made when passing from the continuous kernel to the
particle approximation is proportional to (log n)^d/n, where d is the dimension.
For n≫ 1, <cit.> parameterized this error
as ∼ n^-γ, where γ varies from 0.5 for a random
distribution to 1 for a perfectly regular lattice of particles. Combining this with
the leading error (∝ h^2) of the continuous kernel approximation for most
commonly used kernel forms, <cit.> derived the scaling relations n∝ N^1/2
and h∝ N^-1/6, which satisfy the joint limit as the domain resolution is
progressively increased. A recent analysis on standard SPH has demonstrated that using
the above scalings C^0 consistency is fully restored for both the estimates of the
function and its derivatives in contrast to the case where n is fixed to a
constant small value, with the numerical solution also becoming insensitive to the degree of
particle disorder <cit.>.
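In practical terms, these scalings fix the interpolation parameters once N is chosen. The short Python sketch below normalizes the power laws at a reference resolution (N_0, n_0, h_0); the reference values are arbitrary calibration constants of our choosing, not prescriptions from the cited works, and the last line uses the mass-resolution scaling M_min ∝ n^-1 discussed below:

    def sph_parameters(N, N0=10**6, n0=64, h0=1.0):
        """Consistency scalings n ~ N**0.5 and h ~ N**(-1/6),
        normalized so that (n, h) = (n0, h0) at N = N0."""
        n = n0 * (N / N0) ** 0.5
        h = h0 * (N / N0) ** (-1.0 / 6.0)
        m_min_rel = n0 / n      # M_min(N) / M_min(N0), from M_min ~ 1/n
        return int(round(n)), h, m_min_rel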
While these results are promising, it remains to investigate the response of the method
for spatially adaptive calculations in the presence of large gradients where the loss of
particle consistency is known to be most extreme. In particular, most of the above
analyses are based on static convergence tests for analytical functions in two- or
three-space dimensions using either uniformly or irregularly distributed point sets,
or on dynamical test problems for which an analytical solution is known in advance, and
therefore the results obtained are limited to idealized circumstances. As was emphasized
by <cit.>, the lack of consistency associated with particle disorder and spatial
adaptivity is not specific to a particular SPH scheme but is rather a generic problem.
It would therefore be desirable to test the present method for more complex models
as those involving the solution of the equations of hydrodynamics coupled to
gravity in three-space dimensions. To do so we choose as a problem the gravitational
collapse and fragmentation of an initially rotating protostellar cloud, using a
modified version of the GADGET-2 code <cit.>. As templates for the model
clouds we use the “standard isothermal test case” in the variant calculated by
<cit.> and the centrally condensed, Gaussian cloud model of <cit.>
coupled to a barotropic equation of state to mimic the nonisothermal collapse. The
simulations will then allow to better understand the impact of varying the number of
neighbors as the resolution is increased on the SPH discretization errors, which
will naturally emerge from the density estimate itself and the SPH momentum equation.
The convergence and accuracy of the simulations are measured by evaluating how well
the particle approximations of the integral consistency relations (or moments of the
kernel) are satisfied during the evolution.
A further implication of the consistency scaling relations on protostellar collapse
calculations is the improved mass resolution. Since the minimum resolvable mass,
M_ min, scales with h as h^3, this implies that M_ min∼ n^-1.
Although the collapse models proposed here start from ideal conditions, this aspect
has an important impact on the outcome of the simulations, where well-defined, rotating
circumstellar disks are seen to form around the growing fragments, which then increase in mass,
develop spiral arms, and fragment to produce small-scale binary/multiple protostellar
systems. This result is consistent with recent observations of L1448 IRS3B <cit.>:
a close triple protostar system where two of the protostars formed by fragmentation of
a massive disk with a spiral structure surrounding a primary, young star formed from
the collapse of a larger cloud of gas and dust. While based principally on the relative
proximity of the companion stars, this observation provides for the first time direct
evidence of protostellar disk fragmentation as a mechanism for the formation of
close binary/multiple young stars.
§ THE ISSUE OF CONSISTENCY
We start by recalling that the kernel (or smoothed) estimate of a scalar function
f( r), where f may be either the density, ρ, or the gas pressure, p,
is defined by
⟨ f( r)⟩ =∫ _ R^3f( r^')
W(| r- r^'|,h)d^3 r^',
where the volume integration is taken over the whole real space,
r=(x,y,z) denotes position, and W is the kernel interpolation function,
which must be positive definite, symmetric, monotonically decreasing, and satisfy the
normalization condition
∫ _ R^3W(| r- r^'|,h)d^3 r^'=1,
together with the Dirac-δ function property that is observed when h→ 0.
Moreover, suitable kernels should have a compact support so that
W(| r- r^'|,h)=0 for | r- r^'|≥ kh, where k
is some integer that depends on the kernel function itself. Making the change of
variable | r- r^'|→ h| r- r^'|, it is easy to
show that the following scaling relation holds <cit.>
W(h| r- r^'|,h)=1/h^νW(| r- r^'|,1),
for any SPH kernel function, where ν =1, 2, and 3 in one-, two-, and three-space
dimensions, respectively.
If we expand in Taylor series f( r^') around r^'= r,
make r→ h r and r^'→ h r^', use Eq. (3), and
insert the result in the kernel approximation (1), we obtain for the function estimate
the relation
⟨ f(h r)⟩ =∑ _l=0^∞h^l/l!∇ _h^(l)f(h r)
:::⋯ :∫ _ R^3( r^'- r)^lW(| r- r^'|,1)
d^3 r^',
where ∇ _h^(l) denotes the product of
the ∇ operator with respect to coordinates (hx,hy,hz) l times,
( r^'- r)^l is a tensor of rank l, and the symbol
“:::⋯ :” is used to denote the lth-order inner product. Therefore, if the
kernel approximation is to exactly reproduce a sufficiently smooth function to (m+1)th
order, the family of consistency relations must be fulfilled
M_0 = ∫ _ R^3W(| r- r^'|,1)d^3 r^'=1,
M_l = ∫ _ R^3( r^'- r)^lW(| r- r^'|,1)
d^3 r^'= 0^(l), forl=1,2,...,m,
where 0^(1)= 0=(0,0,0) is the null vector and 0^(l) is the zero
tensor of rank l. Fulfillment of the integral relations (5) and (6) guarantees
C^m consistency for the kernel estimate of the function, which by virtue of
Eq. (4) reproduces exactly the continuous function to order m+1. Owing to the scaling
relation (3), the contribution of the error due to the smoothing length can be separated
from that due to the discrete representation of the integral, which, being independent
of h, will only depend on the number of neighbors within the kernel support and their
spatial distribution.
When solving the gravitohydrodynamics equations, gas compression is accounted for by
evaluating the pressure gradient in the momentum equation. Therefore, consistency
relations for the kernel estimate of the gradient of a function are also of concern.
Using the definition of the kernel estimate of the gradient as
⟨∇ f( r)⟩ =∫ _ R^3f( r^')
∇ W(| r- r^'|,h)d^3 r^',
expanding f( r^') again in Taylor series about r^'= r,
making r→ h r and r^'→ h r^', and inserting
the result in Eq. (7) we obtain the form
⟨∇_hf(h r)⟩ =∑ _l=0^∞h^l-1/l!∇ _h^(l)
f(h r):::⋯ :∫ _ R^3( r^'- r)^l∇ W(| r- r^'|,1)d^3 r^',
where we have made use of the scaling relation
∇ W(h| r- r^'|,h)=1/h^ν∇ W(| r- r^'|,1),
with ν =3, which also holds for the gradient of the kernel. From Eq. (8), it follows that
C^m consistency for the kernel estimate of the gradient is obtained only if the family
of integral relations is exactly satisfied
M^'_0 = ∫ _ R^3∇ W(| r- r^'|,1)
d^3 r^'=∫ _ R^2W(| r- r^'|,1) n
d^2 r^'= 0,
M^'_1 = ∫ _ R^3( r^'- r)
∇ W(| r- r^'|,1)d^3 r^'= I,
M^'_l = ∫ _ R^3( r^'- r)^l∇ W(| r- r^'|,1)d^3 r^'= 0^(l+1),
forl=2,3,...,m,
where I is the unit tensor. The second equality in Eq. (10) holds for any
volume enclosed by a continuous surface with differential volume element
d^3 r^' and differential surface element nd^2 r^',
where n is the outward unit normal from the volume surface. It is precisely
the requirement that the zeroth moment M_0^'= 0, which determines
that the surface integral of the kernel must vanish identically. Because of the
symmetry of the kernel function, relations (5) and (6) with l odd are automatically
satisfied, while those with l even will all appear in the expansion (4) as finite
sources of error and will not vanish unless the kernel approaches the Dirac-δ
distribution. Hence, up to leading second-order Eq. (4) becomes
⟨ f(h r)⟩ =f(h r)+1/2h^2∇ _h∇ _h
f(h r):∫ _ R^3( r^'- r)^2W(| r- r^'|,1)
d^3 r^'+O(h^4),
which expresses that the kernel approximation of a function has C^1 consistency for
an unbounded domain. Using Eq. (1) it is a simple matter to show that the integral on the
right-hand side of Eq. (13), which corresponds to the second moment of the kernel
M_2, is equal to
⟨ r r⟩-⟨ r⟩⟨ r⟩≠ 0^(2),
implying that C^2 consistency is not achieved even though C^0 and C^1
consistencies are automatically satisfied <cit.>. This term is just the variance
of the particle position vector r and is a measure of the spread of the particle
positions relative to the mean. Evidently,
⟨ r r⟩-⟨ r⟩⟨ r⟩→ 0^(2)
only when W(| r- r^'|,1)→δ ( r- r^'), or
equivalently, when N→∞, h→ 0, and n→∞.
Similarly, due to the symmetry of the kernel all integrals in Eqs. (10)-(12) will vanish
identically for l even, while only those for l odd will survive in the series expansion
(8), which up to second-order becomes
\[
\langle \nabla_h f(h\mathbf{r}) \rangle = \nabla_h f(h\mathbf{r}) + \frac{1}{6}\, h^2\, \nabla_h \nabla_h \nabla_h f(h\mathbf{r}) \,\vdots\, \int_{\mathbb{R}^3} (\mathbf{r}'-\mathbf{r})^3\, \nabla W(|\mathbf{r}-\mathbf{r}'|,1)\, d^3r' + O(h^4), \tag{14}
\]
where the symbol “⋮” is used to denote the triple inner product. Note that the
integral on the right-hand side of Eq. (14) is the third moment of the kernel gradient
M_3^'=3 M_2 I≠ 0^(4). We recall that
relations (5) and (11) have important physical implications. In particular, satisfaction
of relation (5) means that the homogeneity of space is not affected by the SPH kernel
approximation, which has as a consequence the conservation of linear momentum. On the
other hand, fulfillment of relation (11) expresses that the isotropy of space is
preserved by the kernel approximation, and therefore angular momentum is locally conserved
<cit.>.
An important feature of Eqs. (13) and (14) is that the contribution of h to the error
can be separated from the error carried by the discretization of the consistency relations,
which will only depend on the number of neighbors, n, and how they are distributed within
the kernel support. In general, it is well-known that the particle approximation of Eq. (5)
diverges from being exactly one, i.e.,
\[
M_{0,a} = \sum_{b=1}^{n} W_{ab}\, \Delta V_b \neq 1, \tag{15}
\]
where W_ab=W(| r_a- r_b|,h) and Δ V_b is the volume of the subdomain
of neighbor particle b. The error carried by Eq. (15) scales as ∼ n^-γ,
with γ∈ [0.5,1], depending on the particle distribution <cit.>.
Therefore, as the number of neighbors is increased the discrete normalization condition approaches unity and C^0 particle consistency is restored. This will also make the particle approximations of relations (10) and (11) approach the null vector 0 and the unit tensor, respectively. These conditions state that the particles should
provide a good approximation to a partition of unity. As in traditional
finite difference and finite element methods, the concept of consistency in SPH defines how
well the discrete model equations represent the exact equations in the continuum limit. In
SPH this is accomplished in two separate steps: the kernel approximation which, as we have
described above, is derived from the continuous form, and the particle approximation, where the
integrals are replaced by sums over a finite set of particles within the kernel support. Since
the kernel consistency relations do not assure consistency for the particle approximation,
the discrete counterparts of Eqs. (13) and (14) must be written as
\[
f_a \to \langle f \rangle_a = M_{0,a}\, (f)_a + h\, (\nabla f)_a \cdot \mathbf{M}_{1,a} + \frac{1}{2}\, h^2\, (\nabla\nabla f)_a : \mathbf{M}_{2,a} + O(h^3), \tag{16}
\]
\[
\nabla_a f_a \to \langle \nabla f \rangle_a = \frac{1}{h}\, \mathbf{M}'_{0,a}\, (f)_a + (\nabla f)_a \cdot \mathbf{M}'_{1,a} + \frac{1}{2}\, h\, (\nabla\nabla f)_a : \mathbf{M}'_{2,a} + \frac{1}{6}\, h^2\, (\nabla\nabla\nabla f)_a \,\vdots\, \mathbf{M}'_{3,a} + O(h^3), \tag{17}
\]
respectively, where quantities between parentheses denote exact values of the function and
its derivatives at the position of particle a and the particle representation of the
consistency integrals is given by
\[
M_{0,a} = \sum_{b=1}^{n} W_{ab}\, \Delta V_b, \tag{18}
\]
\[
\mathbf{M}_{l,a} = \sum_{b=1}^{n} \mathbf{r}_{ba}^{\,l}\, W_{ab}\, \Delta V_b, \qquad \text{for } l=1,2, \tag{19}
\]
\[
\mathbf{M}'_{0,a} = \sum_{b=1}^{n} \nabla_a W_{ab}\, \Delta V_b, \tag{20}
\]
\[
\mathbf{M}'_{l,a} = \sum_{b=1}^{n} \mathbf{r}_{ba}^{\,l}\, \nabla_a W_{ab}\, \Delta V_b, \qquad \text{for } l=1,2,3, \tag{21}
\]
where r_ba= r_b- r_a and Δ V_b=m_b/ρ _b, with
m_b and ρ _b denoting the mass and density of particle b, respectively.
According to Eqs. (16) and (17), C^0 particle consistency for the function and its
gradient will demand that M_0,a=1, M_0,a^'= 0, and
M_1,a^'= I at the position of particle a, while C^1
particle consistency is restored if in addition M_1,a= 0 and
M_2,a^'= 0^(3) are satisfied. In particular, restoring C^0
particle consistency implies that the homogeneity and isotropy of the discrete space
is preserved, which has as a consequence the conservation of linear and angular
momentum in practical calculations <cit.>. The goal here is to
track the quality of the
particle consistency relations in a true hydrodynamic evolution involving large
density and pressure gradients as well as large spatial and temporal variations of
the smoothing length. This will allow us to evaluate the degree of consistency that can
be achieved when the number of neighbors within the kernel support and the smoothing
length are allowed to vary with N according to the scalings n∼ N^1/2 and
h∼ N^-1/6, which approach asymptotically the joint limit N→∞,
h→ 0, and n→∞ for particle consistency as N is increased.
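As an illustration of how these relations are monitored in practice, the minimal Python sketch below (our own construction, not part of GADGET-2; the function names are ours) evaluates the discrete moments of Eqs. (18)-(21) for a single particle, using the cubic B-spline kernel in the GADGET-2 convention of support radius h for concreteness:

```python
import numpy as np

def w_cubic(q, h):
    """Cubic B-spline kernel in the GADGET-2 convention (support radius h)."""
    w = np.where(q < 0.5, 1.0 - 6.0*q**2 + 6.0*q**3,
                 np.where(q < 1.0, 2.0*(1.0 - q)**3, 0.0))
    return 8.0 / (np.pi * h**3) * w

def dw_cubic(q, h):
    """Radial derivative dW/dr of the cubic B-spline kernel."""
    dw = np.where(q < 0.5, -12.0*q + 18.0*q**2,
                  np.where(q < 1.0, -6.0*(1.0 - q)**2, 0.0))
    return 8.0 / (np.pi * h**4) * dw

def moments(pos, a, h, dV):
    """Discrete moments M_0a (Eq. 18), M'_0a (Eq. 20), and M'_1a (Eq. 21, l=1)
    of particle a; pos holds the positions of its n neighbors, including a."""
    r_ab = pos[a] - pos                        # vectors from neighbors b to a
    d = np.linalg.norm(r_ab, axis=1)
    q = d / h
    M0 = np.sum(w_cubic(q, h) * dV)            # C^0: should approach 1
    rhat = np.divide(r_ab, d[:, None], out=np.zeros_like(r_ab),
                     where=d[:, None] > 0.0)
    gradW = dw_cubic(q, h)[:, None] * rhat     # nabla_a W_ab
    M0p = np.sum(gradW * dV[:, None], axis=0)                 # should approach 0
    M1p = np.einsum('bi,bj->ij', -r_ab, gradW * dV[:, None])  # should approach I
    return M0, M0p, M1p
```

Histogramming these quantities over all particles, as done in Sections 4 and 5 below, then measures how far a given particle distribution is from C^0 and C^1 consistency.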
§ SPH SOLVER
A modified version of the simulation code GADGET-2 is used for the calculations of this
paper. The code relies on a fully conservative formulation where the discrete Euler
equations are derived via a variational principle from the discretized Lagrangian of
the fluid system <cit.>. As in most SPH formulations, the density
estimate is calculated by the summation interpolant
\[
\rho_a = \sum_{b=1}^{n} m_b\, W_{ab}, \tag{22}
\]
while the Euler-Lagrange equations of motion for the particles are given by
\[
\left( \frac{d\mathbf{v}_a}{dt} \right)_{\rm SPH} = -\sum_{b=1}^{n} m_b \left[ f_a \frac{p_a}{\rho_a^2}\, \nabla_a W_{ab}(h_a) + f_b \frac{p_b}{\rho_b^2}\, \nabla_a W_{ab}(h_b) \right], \tag{23}
\]
where v_a and p_a are the particle velocity and pressure, respectively,
and the factor f_a is defined by
\[
f_a = \left( 1 + \frac{h_a}{3\rho_a} \frac{\partial \rho_a}{\partial h_a} \right)^{-1}. \tag{24}
\]
The velocity of particle a is then updated according to
\[
\frac{d\mathbf{v}_a}{dt} = \left( \frac{d\mathbf{v}_a}{dt} \right)_{\rm SPH} + \left( \frac{d\mathbf{v}_a}{dt} \right)_{\rm GRAV} + \left( \frac{d\mathbf{v}_a}{dt} \right)_{\rm AV}, \tag{25}
\]
where the last two terms on the right-hand side account for the self-gravitational
acceleration and the artificial viscous forces, respectively.
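As a minimal sketch of how Eqs. (22) and (23) translate into code, the brute-force loops below (our own illustration, reusing the kernel helpers w_cubic and dw_cubic from the sketch of Section 2) evaluate the density summation and the pairwise-symmetric pressure acceleration. The correction factors f_a of Eq. (24) are assumed precomputed, and for Eq. (23) the neighbor lists are assumed to contain all pairs within max(h_a, h_b); neighbor search, gravity, and viscosity are omitted.

```python
import numpy as np

def density(pos, m, h, neigh):
    """SPH density estimate, Eq. (22); neigh[a] lists the indices b
    (including a itself) with |r_a - r_b| < h[a]."""
    rho = np.zeros(len(pos))
    for a, nb in enumerate(neigh):
        d = np.linalg.norm(pos[a] - pos[nb], axis=1)
        rho[a] = np.sum(m[nb] * w_cubic(d / h[a], h[a]))
    return rho

def pressure_accel(pos, m, h, p, rho, f, neigh):
    """Pairwise-symmetric pressure acceleration, Eq. (23)."""
    acc = np.zeros_like(pos)
    for a, nb in enumerate(neigh):
        for b in nb:
            if b == a:
                continue
            r_ab = pos[a] - pos[b]
            d = np.linalg.norm(r_ab)
            rhat = r_ab / d
            gWa = dw_cubic(d / h[a], h[a]) * rhat   # nabla_a W_ab(h_a)
            gWb = dw_cubic(d / h[b], h[b]) * rhat   # nabla_a W_ab(h_b)
            acc[a] -= m[b] * (f[a]*p[a]/rho[a]**2 * gWa
                              + f[b]*p[b]/rho[b]**2 * gWb)
    return acc
```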
The gravitational forces are calculated using a hierarchical multipole expansion,
which can be applied in the form of a TreePM method, where short-range forces are
calculated with the tree method and long-range forces are determined using mesh-based
Fourier methods. A detailed account of the code is given by
<cit.>. Here we shall only briefly describe the improvements that have
been incorporated in our version of the code.
One straightforward way of restoring particle consistency and therefore reducing the
zeroth-order error terms carried by the SPH representation of the continuity and
momentum equations <cit.> is just to increase the number of
particles within the kernel support. However, conventional kernels, like the widely
used cubic B-spline kernel of <cit.>, suffer from a pairing instability
when working with large numbers of neighbors, where particles come into close pairs
and become less sensitive to small perturbations within the kernel support
<cit.>. To overcome this difficulty, we have adopted
a Wendland C^4 kernel function <cit.>
\[
W(q,h) = \frac{495}{32\pi h^3}\, (1-q)^6 \left( 1 + 6q + \frac{35}{3}\, q^2 \right) \qquad \text{if } q \le 1, \tag{26}
\]
and zero otherwise, where q = |r − r′|/h. As was demonstrated
by <cit.>, Wendland functions have positive Fourier transforms and so they can
support arbitrarily large numbers of neighbors without favoring a close pairing of
particles. Moreover, the exact particle distribution depends on the dynamics of the
flow and on the kernel function that is employed. This makes the accuracy assessment
of SPH a non-trivial problem. However, Wendland functions are very reluctant to allow
for particle motion on a sub-resolution scale and, in contrast to most commonly used
kernels, they maintain a very regular particle distribution, even in highly dynamical
tests <cit.>.
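A direct transcription of Eq. (26), together with a simple numerical check that the kernel satisfies the normalization condition (5), reads as follows (the quadrature check is our own illustration):

```python
import numpy as np

def wendland_c4(q, h):
    """Wendland C^4 kernel of Eq. (26), with compact support q <= 1."""
    w = np.where(q <= 1.0,
                 (1.0 - q)**6 * (1.0 + 6.0*q + (35.0/3.0)*q**2), 0.0)
    return 495.0 / (32.0 * np.pi * h**3) * w

# radial quadrature of the normalization integral for h = 1
q = np.linspace(0.0, 1.0, 200001)
dq = q[1] - q[0]
print(np.sum(4.0 * np.pi * q**2 * wendland_c4(q, 1.0)) * dq)  # -> 1.0
```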
A further improvement includes the update of the artificial viscosity switch using
the method proposed by <cit.>. In this method the artificial viscosity term
entering on the right-hand side of Eq. (25) is implemented as in GADGET-2 by the
common form <cit.>
\[
\left( \frac{d\mathbf{v}_a}{dt} \right)_{\rm AV} = -\sum_{b=1}^{n} m_b\, \Pi_{ab}\, \nabla_a \bar{W}_{ab}, \tag{27}
\]
where W̅_ab=[W_ab(h_a)+W_ab(h_b)]/2 and
\[
\Pi_{ab} = -\frac{1}{2}\, \bar{\alpha}_{ab}\, \frac{v_{\rm sig}}{\bar{\rho}_{ab}}\, \omega_{ab} \qquad \text{if } \omega_{ab} < 0, \tag{28}
\]
and zero otherwise. Here
ω _ab=( v_a- v_b)· r_ab/| r_ab|,
v_ sig=c_a+c_b-3ω _ab is the signal speed, c_a is the particle
sound speed, ρ̅_ab=(ρ _a+ρ _b)/2, and
α̅_ab=(α _a+α _b)/2. We note that in the original GADGET-2
code formulation α _a=α _b=const. It is well-known that this form of
the artificial viscosity introduces excessive dissipation in shear flows, leading to
spurious angular momentum transport in the presence of vorticity. Therefore, it is
desirable to suppress this excessive dissipation in regions where the vorticity
dominates over the velocity divergence <cit.>. In particular, <cit.>
proposed individual viscosity coefficients that adapt their values according to
velocity-based source terms. Later on, <cit.> improved on this formulation
by devising a novel shock indicator based on the total time derivative of the velocity
divergence, which distinguishes shocks from purely convergent flows and discriminates
between pre- and post-shocked regions. While this prevents false triggering of the
artificial viscosity, their method includes a limiter which puts a stronger weight
on the velocity divergence than on the vorticity. The artificial viscosity switch
proposed by <cit.> follows the same principles of that presented by <cit.>,
except that it now uses a limiter that applies the same weight to the velocity divergence
and vorticity. The method consists of calculating the viscosity coefficient through
the following steps. A target value of the viscosity coefficient is first calculated
using the relation
\[
\alpha_{{\rm tar},a} = \alpha_{\rm max}\, \frac{h_a^2 S_a}{h_a^2 S_a + c_a^2}, \tag{29}
\]
where α_max = 0.75 and S_a = max[0, −d(∇· v)_a/dt] is the shock indicator. The total time derivative of the velocity divergence is given by −d(∇· v)/dt = d²ln ρ/dt² after differentiation of the continuity equation, and the divergence of the velocity is evaluated using the higher-order estimator proposed by <cit.>. Hence d(∇· v)/dt < 0 is indicative of nonlinear flow steepening, as occurs in pre-shocked regions, while in post-shocked regions d(∇· v)/dt > 0. The true viscosity coefficient that enters in Eq. (28) is then defined by
\[
\alpha_a =
\begin{cases}
\xi_a\, \alpha_{{\rm tar},a} & \text{if } \alpha_a \le \alpha_{{\rm tar},a}, \\[4pt]
\xi_a \left[ \alpha_{{\rm tar},a} + \left( \alpha_a - \alpha_{{\rm tar},a} \right) \exp(-\Delta t/\tau_a) \right] & \text{if } \alpha_a > \alpha_{{\rm tar},a},
\end{cases} \tag{30}
\]
where ξ _a is a modified limiter given by
\[
\xi_a = \frac{ |(\nabla\cdot\mathbf{v})_a|^2 }{ |(\nabla\cdot\mathbf{v})_a|^2 + |(\nabla\times\mathbf{v})_a|^2 + 0.0001\, (c_a/h_a)^2 }, \tag{31}
\]
Δ t is the time step, and τ _a=10h_a/v_ sig is the decay time with a decay
speed equal to
\[
v_{\rm decay} = \max_{|\mathbf{r}_{ab}| \le h_a} \left[ \bar{c}_{ab} - \min(0, \omega_{ab}) \right], \tag{32}
\]
where c̅_ab=(c_a+c_b)/2. This method, which is referred to as an artificial
viscosity with a strong limiter, suppresses viscous dissipation in subsonically
convergent flows and ensures that α _a rises rapidly up to α _ max
when the converging flow becomes supersonic. This is a desirable property in protostellar
collapse simulations where holding α _a to a fixed constant value during the
evolution may cause unphysical dissipation of local velocity differences away from shocks.
Such adverse effects of the artificial viscosity are responsible for the oversmoothing of
weak shocks as well as the damping of adiabatic oscillations and shear flows, thereby
seriously affecting the outcome of the simulations.
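The per-particle update implied by Eqs. (29)-(31) can be sketched as below; the routine assumes that the velocity divergence, its total time derivative, the vorticity magnitude, and the decay speed of Eq. (32) have already been gathered from the neighbor loop, and the argument names are our own:

```python
import numpy as np

ALPHA_MAX = 0.75

def update_alpha(alpha, div_v, ddiv_v_dt, curl_v, c, h, v_decay, dt):
    """Advance the individual viscosity coefficients by one time step dt.
    All arguments are per-particle NumPy arrays except the scalar dt."""
    S = np.maximum(0.0, -ddiv_v_dt)                          # shock indicator
    a_tar = ALPHA_MAX * h**2 * S / (h**2 * S + c**2)         # Eq. (29)
    xi = div_v**2 / (div_v**2 + curl_v**2 + 1e-4*(c/h)**2)   # limiter, Eq. (31)
    tau = 10.0 * h / v_decay                                 # decay time
    decayed = a_tar + (alpha - a_tar) * np.exp(-dt / tau)
    return np.where(alpha <= a_tar, xi * a_tar, xi * decayed)  # Eq. (30)
```

Holding α_max = 0.75 fixed while letting ξ_a weigh the divergence and the vorticity equally is precisely what distinguishes this strong limiter from the earlier switches discussed above.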
§ TEST PROBLEMS
§.§ Particle consistency relations for a set of points
We first test the quality of the first few moments in relations (18)-(21) for a static
set of N=64^3 points distributed within a cube of length L=1, density ρ =1,
pressure p=1, and sound speed c^2=γ p/ρ, with γ =5/3. Two different
particle distributions are considered: a glass-like
configuration and a random distribution. A similar test problem was employed by
<cit.> to determine the quality of the discrete normalization condition given by
relation (18) as the number of neighbors is increased from n=48 to 3200.
Here the glass distribution was obtained from GADGET-2 by enabling the corresponding
code option and starting with a random distribution of SPH particles
and an expansion factor a=0.01 <cit.>. The resulting outcome is
then evolved hydrodynamically up to a time t=1.1 in code units, using an isothermal
equation of state, periodic boundary conditions at the edges of the box, and excluding
self-gravity. As the system evolves toward a relaxed state, the distributions of quantities,
such as the smoothing length, the density, and the discrete moments given by relations
(18)-(21) tend to normal distributions. The evolution ends up with an equilibrium
configuration in which all particles have approximately equal SPH densities (≈ 1).
In real SPH applications, the distances between neighboring particle pairs tend
to equilibrate due to pressure forces, which makes the interpolation errors much smaller
and the irregularity of the particle distribution more ordered than for a random
distribution, where particles sample the fluid in a
Poissonian fashion. In this sense, a random configuration represents an extreme case
for SPH simulations. In contrast, a glass configuration mimics the other extreme case
where the particle distribution is quasi-regular and almost force-free. Although a
random distribution rarely occurs in SPH, except perhaps in highly turbulent flows where
particles are highly disordered and SPH is unable to re-order the particles, we analyze
the quality of the density estimate and discrete moments of the kernel (and kernel
gradient) for a random distribution with the only purpose of comparing with the results
obtained by <cit.>.
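The random-distribution part of this test is straightforward to reproduce; the sketch below (ours) computes the SPH density of Eq. (22) for randomly placed points in a periodic unit box with the Wendland C^4 kernel, setting h_a to the distance to the nth neighbor. We use a smaller N than in the paper to keep the memory footprint of the brute-force neighbor query modest.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
N, n = 32**3, 480
pos = rng.random((N, 3))              # Poissonian points in a periodic unit box
m = 1.0 / N                           # equal masses, so the mean density is 1

tree = cKDTree(pos, boxsize=1.0)      # periodic neighbor search
d, _ = tree.query(pos, k=n)           # distances to the n nearest neighbors
h = d[:, -1]                          # kernel support set by the n-th neighbor
q = d / h[:, None]
W = 495.0/(32.0*np.pi*h[:, None]**3) * (1-q)**6 * (1+6*q+(35.0/3.0)*q**2)
rho = m * W.sum(axis=1)               # Eq. (22), self-contribution included
print(rho.mean(), rho.std())          # compare with the trends in Table 1
```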
The top and bottom panels of Figure 1 show histograms of the particle density estimate
[Eq. (22)] with increasing n from 48 to 3200 as in <cit.> for the glass and
random particle distributions, respectively. As expected, the density distribution for
the glass configuration is much narrower than for the randomly distributed points. In
the former case, as n is increased the density distribution approaches a Dirac-δ
distribution, while in the latter case, the distribution slowly approaches a Gaussian
shape with a peak at ρ =1. Also, for small values of n the density distribution
shows long tails for the random configuration which are not present for the glass distribution,
at least when n≥ 120. <cit.> argued that such long tails are due to an overestimate
of the density produced by the particle self-contribution in Eq. (22) when the SPH particles
are randomly spaced. While this result is well-known, it can also be derived analytically
from Eq. (22) given the mass of the particles, their number and actual distribution within
the kernel volume, and the form of the kernel function. However, using the M_4
kernel of <cit.>, <cit.> demonstrated that the overestimate in
density occurs because a random distribution produces a fluctuating density field, where
the particle positions are correlated with the overdense fluctuations and anticorrelated
with the underdense fluctuations. In other words, the expectation value of the density at
the location of a particle will be overestimated by a value almost exactly equal to the
“self-density”. This is the reason why it seems appropriate to exclude the particle
self-contribution from Eq. (22). <cit.> concluded that as soon as the particle
positions are settled before they are allowed to evolve dynamically, exclusion of the
self-contribution will lead to a significant error because the particle distribution will
cease to be random and the density fluctuations will be removed.
Table 1 lists the standard deviation, σ (ρ), and expectation
value, ⟨ρ⟩ _e, of the density measured from the distributions of Figure 1.
For both particle configurations, σ (ρ) decreases with increasing n, while
the value of ⟨ρ⟩ _e becomes close to unity when n≳ 480
for a glass configuration and n≳ 1600 for a random distribution. This agrees with
the ∼ n^-1/2 and n^-1 trends of σ (ρ) as a function of n found by
<cit.> for a truly random and a glass-like configuration, respectively. Since
the results of Figure 1 are consistent with the findings of <cit.> for these
tests, we feel confident to proceed with a similar statistical analysis to measure the
quality of the discrete moments as n is increased.
According to the series expansions (16) and (17), the error in the density and density
gradient estimates separates into two contributions: one due to the local value of h
and the other due to the discrete values of the moments.
Since the latter are independent of h, they will only depend on the number of
neighbors within the kernel support. This observation introduces a subtle difference
between the meaning of consistency and accuracy in SPH. Consistency demands that
n→∞, while accuracy demands that n→∞ and h→ 0 in order to have
convergent results in the limit N→∞. Therefore, we can achieve approximate particle
consistency and improved accuracy as n is increased and h is decreased with N.
To achieve C^0 consistency, the parameter M_0,a and the mean of the elements of
matrix M_1,a^' should peak around 1, while the mean of
the components of vector M_0,a^' should follow a peaked distribution
around 0. Moreover, C^1 consistency will additionally require that the mean of the
components of M_1,a and the mean of
the elements of matrix M_2,a^' both peak around 0. The distributions of all
these quantities are plotted in Figures 2 and 3 for the glass and random configurations,
respectively. We may see that for the glass configuration the distributions follow the
desired behavior and approach a Dirac-δ function as n is increased, indicating
that approximate C^1 consistency is achieved for the density and its gradient when
n=3200. Conversely, for a random configuration the distributions approach Gaussian-like
shapes with peaks close to the continuum values. For both particle configurations
the distributions of M_0,a peak at values lower than 1 for small n, suggesting an
overestimation of the particle density for small numbers of neighbors. Table 1 lists the
standard deviations and expectation values of these moments as calculated by fitting a Gaussian
function to the histograms of Figures 2 and 3. From these values we see that very good
C^0 and C^1 consistencies are achieved for the glass configuration. A stronger
sensitivity to the particle distribution is observed for M_0,a^', which
is exacerbated for the random configuration. In this case, the standard deviation of the
distribution is consistently larger for smaller n and converges to zero
rather slowly compared to the glass configuration. Evidently, M_0,a^'
seems to be more sensitive to the degree of particle disorder than the other parameters for
this test, implying a higher error in the SPH representation of the gradient. Although this error
depends on the quality of the particle distribution, it can be regulated by further
increasing the number of neighbors since the standard deviation is expected to follow a
trend between n^-1/2 and n^-1 in actual SPH calculations. Also, note that the
expectation values of M_1,a, M_0,a^', and M_2,a^'
are always zero because of the symmetry of the kernel.
The required computational cost in CPU time for a complete run is nearly the same
for the standard and modified GADGET-2 code. However, the computational cost increases
almost linearly with n for fixed N. For instance, a run with n=3200 took about
11 s compared to ∼ 0.34 s for n=64, implying a factor of ∼ 32 more CPU
time. Thus, increasing the number of neighbors increases the computational cost, which
is the price that has to be paid when SPH is used as a numerically consistent method.
Also, <cit.> argued that when n is made to vary with N as N^0.5, the
computational cost scales with N as O(N^1.5) rather than as O(N) as for
traditional choices of n.
§.§ Two-dimensional Keplerian ring
We now test the performance of our implemented artificial viscosity for an equilibrium
ring of isothermal gas rotating about a central point mass. This test is the same documented
by <cit.> and <cit.>. Self-gravity of the ring is neglected and perfect
balance between pressure forces, gravitational attraction from the central point mass,
and centrifugal forces is assumed. The surface density of the gas is given by the Gaussian
profile
\[
\Sigma(r) = \frac{1}{m} \exp\left[ -\frac{(r-r_0)^2}{2\sigma^2} \right], \tag{33}
\]
where m is the mass of the ring, r is the radial distance from the central point
mass (r=0), r_0=10, and σ =1.25 is the width of the ring. For the central
point mass we set GM=1000, where G is the gravitational constant and M≫ m. The
ring is filled with N=9987 particles, initially distributed using the method of
<cit.>. With these parameters, the ring is in differential rotation with
an azimuthal velocity v_ϕ=√(GM/r)=10 and a rotation period T=2π at
r=r_0. The sound speed is set to c=0.01. This value is much smaller than the
azimuthal velocity so that dynamical instabilities in the ring are expected to occur
only after many rotation periods <cit.>. Under Keplerian differential
rotation the flow is shearing and therefore any viscosity may cause the ring to break
up <cit.>, with the instability initiating at its inner edge
<cit.>.
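A simple way to generate comparable initial conditions (not the particle-placement method of <cit.> used in the paper) is to Monte Carlo sample radii from the ring profile of Eq. (33), weighting by the annulus area, and to assign the Keplerian azimuthal velocity:

```python
import numpy as np

rng = np.random.default_rng(0)
GM, r0, sig, N = 1000.0, 10.0, 1.25, 9987

# rejection-sample radii with probability ~ r * Sigma(r), Eq. (33)
r = np.empty(N)
k = 0
rmax = r0 + 4.0 * sig
while k < N:
    cand = rng.uniform(r0 - 4.0*sig, rmax, N - k)
    accept = rng.random(N - k) < (cand/rmax) * np.exp(-(cand - r0)**2
                                                      / (2.0*sig**2))
    kept = cand[accept]
    r[k:k + len(kept)] = kept
    k += len(kept)

phi = rng.uniform(0.0, 2.0*np.pi, N)
pos = np.column_stack([r*np.cos(phi), r*np.sin(phi)])
vphi = np.sqrt(GM / r)                     # Keplerian rotation: v_phi = 10 at r0
vel = np.column_stack([-vphi*np.sin(phi), vphi*np.cos(phi)])
```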
Figure 4 shows the ring configuration as obtained using four different SPH schemes.
The times are given in code units. When GADGET-2 is used with the cubic
B-spline kernel and n=12 neighbors together with the standard artificial
viscosity formulation with a constant coefficient α =0.8 (Fig. 4a), the ring becomes
unstable at t≈ 3.8 (corresponding to ≈ 0.6T). At t=12, i.e., after
approximately two rotation periods, the inner edge instability is well-developed and the
ring is close to break up. When the same run is repeated using <cit.> scheme for the
artificial viscosity with the higher-order velocity divergence estimator proposed by
<cit.> and α varying in the interval [0,0.8], the instability manifests
in the form of particle clusterings and voids in the particle distribution,
resembling a sort of sticking instability, as shown in Figure 4b at t=49 (corresponding
to ≈ 7.8 rotation periods). The calculation stops soon thereafter because
of failure of the TreePM algorithm when clustered particles become too close to
one another. Only a little improvement is obtained when using the standard artificial viscosity formulation and the Wendland C^4 function with n=120 neighbors, as the ring becomes unstable after
about one rotation period (t≈ 6.6). Figure 4c shows progress of the instability
at a later time (t=15). Therefore, changing the cubic B-spline kernel with a
Wendland C^4 function with n=120, or even larger n, while maintaining the standard
artificial viscosity causes the instability to grow a little more slowly. Only when
this latter run is repeated using <cit.> scheme for the artificial viscosity does
the ring stay stable for more than 20 rotation periods (Fig. 4d). The ring preserves its
particle configuration and remains stable even when the evolution is followed for more
than 30 rotation periods.
§ PROTOSTELLAR COLLAPSE SIMULATIONS
We now test the consistency and accuracy of our implemented SPH method for
numerical hydrodynamical calculations involving large density and pressure gradients as well
as variable smoothing lengths. As a problem we choose the collapse and fragmentation of an
isolated molecular cloud core. The templates for the model clouds correspond to the
well-known standard isothermal test case in the variant calculated by <cit.> and
the centrally condensed, Gaussian cloud advanced by <cit.>.
§.§ Initial conditions
§.§.§ Standard isothermal cloud
The standard isothermal test case starts from a uniform density (ρ _0=3.82× 10^-18
g cm^-3) sphere of mass M=1M_⊙, radius R=4.99× 10^16 cm, temperature
T=10 K, and solid-body rotation ω =7.2× 10^-13 s^-1. The model has
ideal gas thermodynamics with a mean molecular weight μ≈ 3 and an isothermal
sound speed c_ iso≈ 1.66× 10^4 cm s^-1. With these parameters the
initial mean free-fall time is t_ ff≈ 1.07× 10^12 s. In order to favor
fragmentation into a binary system, the uniform density background is perturbed azimuthally
as
\[
\rho = \rho_0 \left[ 1 + 0.1\cos(2\phi) \right], \tag{34}
\]
where ϕ is the angle about the spinning z-axis. With these parameters the ratios of
thermal and rotational energies to the absolute value of the gravitational energy are
α≈ 0.26 and β≈ 0.16, respectively.
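For reference, the quoted diagnostics follow directly from the cloud parameters; the short script below (ours, using uniform-sphere expressions for the energies in cgs units) recovers them:

```python
import numpy as np

G = 6.674e-8            # gravitational constant (cgs)
M = 1.989e33            # 1 M_sun in g
R = 4.99e16             # cloud radius in cm
rho0 = 3.82e-18         # initial density in g cm^-3
c_iso = 1.66e4          # isothermal sound speed in cm s^-1
omega = 7.2e-13         # angular velocity in s^-1

t_ff = np.sqrt(3.0*np.pi / (32.0*G*rho0))   # free-fall time, ~1.07e12 s
Wg = 3.0*G*M**2 / (5.0*R)                   # |gravitational energy|
alpha = 1.5*M*c_iso**2 / Wg                 # thermal-to-gravitational, ~0.26
beta = 0.2*M*(R*omega)**2 / Wg              # rotational-to-gravitational, ~0.16
print(t_ff, alpha, beta)
```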
§.§.§ Gaussian cloud
The Gaussian cloud corresponds to a centrally condensed sphere of the same mass and radius as
the standard isothermal cloud. The radial central condensation is given by
\[
\rho(r) = \rho_{\rm c} \exp\left[ -\left( r/b \right)^2 \right], \tag{35}
\]
where ρ _ c=1.7× 10^-17 g cm^-3 is the initial central density and
b≈ 0.578R. This produces a central density 20 times higher than the density at the
outer edge. Solid-body rotation is assumed at the rate ω =1.0× 10^-12 s^-1.
The gas has a temperature of 10 K, a chemical composition corresponding to a mean molecular
weight μ≈ 2.28, and an isothermal sound speed c_ iso≈ 1.90× 10^4
cm s^-1. The central free-fall time is t_ ff≈ 5.10× 10^11 s and the
radial density distribution is azimuthally perturbed using Eq. (34). With this choice of the
parameters, the values of α and β are the same as for the uniform-density,
standard isothermal test.
§.§.§ Equation of state
A barotropic pressure-density relation of the form <cit.>
\[
p = c_{\rm iso}^2\, \rho + K \rho^{\gamma}, \tag{36}
\]
is used for both the uniform- and Gaussian-cloud models, where γ =5/3 and K is a
constant determined from equalizing the isothermal and adiabatic parts of Eq. (36) at a
critical density ρ _ crit=5.0× 10^-12 g cm^-3 for the isothermal test
case and 5.0× 10^-14 g cm^-3 for the Gaussian cloud, which separates the
isothermal from the nonisothermal collapse. The local sound speed is therefore given by
\[
c^2 = c_{\rm iso}^2 \left[ 1 + \left( \frac{\rho}{\rho_{\rm crit}} \right)^{\gamma - 1} \right], \tag{37}
\]
so that c≈ c_ iso when ρ≪ρ _ crit and
c≈γ ^1/2c_ iso when ρ≫ρ _ crit. With these choices of
the critical density we allow the standard isothermal cloud to evolve deep into the
isothermal collapse to provide direct comparison with previous barotropic SPH calculations
by <cit.> and <cit.>, while a value of
ρ _ crit=5.0× 10^-14 g cm^-3 produces a behavior that is more
representative of the near-isothermal phase and fits better the Eddington approximation
solution of <cit.>.
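Equations (36) and (37) are straightforward to encode; the helper below (ours) fixes K by equating the isothermal and adiabatic terms of Eq. (36) at ρ_crit and returns the sound speed in the c² = p/ρ convention of Eq. (37):

```python
import numpy as np

def barotropic(rho, c_iso, rho_crit, gamma=5.0/3.0):
    """Barotropic pressure, Eq. (36), and local sound speed, Eq. (37).
    K = c_iso^2 * rho_crit^(1-gamma) equalizes the two terms at rho_crit."""
    K = c_iso**2 * rho_crit**(1.0 - gamma)
    p = c_iso**2 * rho + K * rho**gamma
    c = c_iso * np.sqrt(1.0 + (rho / rho_crit)**(gamma - 1.0))
    return p, c
```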
§.§.§ Initial particle distribution and smoothing length
All collapse calculations start from a set of points in a glass configuration, which was
generated from randomly distributed particles using the GADGET-2 glass-making mode. As shown
in Table 2, we consider two separate sequences of calculations with varying total number
of particles (N) for both the uniform and Gaussian cloud models. Models labeled U1C-U4C
correspond to uniform clouds calculated with the standard GADGET-2 using a fixed number of
neighbors (n=64), while models U1W-U4W were calculated using our modified GADGET-2 code
using a Wendland C^4 function with varying number of neighbors. Similarly, models
G1C-G6C and G1W-G6W correspond to Gaussian clouds using the standard (with n=64) and
modified code (with varied n), respectively.
For these tests, we use the parameterization provided by <cit.>, where h is
allowed to vary with N as h∝ N^-1/6. With this choice we obtain the scaling
relations n≈ 7.61N^0.503 and h≈ 7.23n^-0.33 so that h decreases as
the number of neighbors increases. Thus, choosing the proportionality factor of the scaling
h∝ N^-1/6 as exactly unity gives an exponent for the dependence of h on
n that is close to the suggested value of -1/3. The variation of h with n
is depicted in Figure 5. For small values of n the smoothing length decreases
rapidly as n increases and then more slowly at larger values of n, asymptotically
approaching zero as n→∞ as required to restore particle consistency.
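In practice the initial interpolation parameters then follow from N alone; the short helper below reproduces the adopted values of Table 2 to within a fraction of a percent (the small offsets reflect rounding of the fitted coefficients):

```python
def kernel_parameters(N):
    """Initial neighbor number and smoothing length from the scalings
    n ~ 7.61 N^0.503 and h ~ 7.23 n^-0.33 quoted in the text."""
    n = int(round(7.61 * N**0.503))
    h = 7.23 * n**(-0.33)
    return n, h

for N in (600_000, 1_200_000, 2_400_000):
    print(N, *kernel_parameters(N))
```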
We note that models U1W–U4W do not satisfy the Jeans condition for densities above
≈ 5.0× 10^-14 g cm^-3 due to their much larger numbers of neighbors
compared to models U1C–U4C. In contrast, models G1W–G6W all meet the Jeans resolution
requirements for gravitational fragmentation. However, in order to avoid spurious
fragmentation the gravity softening length of each particle, ϵ _a, is
evolved with time in step with its corresponding smoothing length h_a so that
ϵ _a≈ h_a <cit.>. In addition, <cit.> showed
that SPH reproduces the analytical Jeans criterion and simulates gravitational
fragmentation properly, even at very poor resolution. That is, artificial fragmentation
is suppressed in regions where the Jeans mass is less than the minimum resolvable mass,
M_ min=nm, provided the standard kernel-softened gravity (ϵ≈ h)
is used, where m is the mass of a single SPH particle. This way unresolved
Jeans-unstable condensations are stabilized numerically. Thus, <cit.> concluded
that failing to satisfy the Jeans condition simply suppresses true fragmentation in
SPH calculations, rather than resulting in artificial fragmentation as in
finite-difference codes. Similar conclusions were previously met by <cit.>
through an analytical derivation of the Jeans criterion.
§.§ Collapse of the uniform cloud
Although a uniform-density profile is an extreme idealization of a real cloud core,
it provides a simple model to learn how nonaxisymmetric perturbations grow from a
structureless medium. Perhaps the most illustrative example of this is given by the
standard isothermal test case, which was originally proposed by <cit.> and
thereafter used as a benchmark for testing numerical codes studying protostellar
collapse and fragmentation processes, with the fairly good agreement that the outcome
of the first evolution is the formation of a protostellar binary system
<cit.>. Previous
highly-resolved SPH calculations for this test over ∼ 9 orders of magnitude increase
in density and using a limited number of neighbors (n≈ 64) have predicted the
formation of two elongated fragments connected by a filamentary bar when the maximum
density in the fragments has passed ρ _ crit=5.0× 10^-12 g cm^-3
<cit.>. When the gas within the fragments becomes adiabatic
and heats up, their cylindrical collapse slows down. This makes the fragments approach a rather spherical shape, while the connecting bar, which remains isothermal,
collapses to a singular filament with no signs of fragmentation. However,
comparisons between all these earlier calculations have been performed with varied
total numbers of particles N and a constant number of neighbors n≈ 64 or so,
and therefore they are likely to suffer from a loss of consistency due to persisting
zeroth-order discretization errors, whose magnitudes may even grow at a faster rate
when approaching the limit N→∞ and h→ 0 <cit.>.
Figure 6 displays column density images of the cloud midplane during the collapse
of model U4C using the original GADGET-2 formulation with n=64 neighbors. We may
see that up to 1.2736t_ ff (peak density of ∼ 10^8.91ρ _0), the
morphology of collapse and the fragmentation details are very similar to previously
reported SPH results for this model. A singular bar connecting two quasi-spherical
fragments is formed and the details of the fanning-out of the bar close to the binary
fragments are also reproduced. However, when the calculation is continued farther in time
the binary components undergo rapid rotational disruption into smaller fragments (see
the last snapshot at 1.302t_ ff when the peak density is ∼ 10^9.74ρ _0).
Meanwhile the gas within the singular bar becomes adiabatic, hindering its cylindrical
collapse and fragmenting along its length into similar small objects. Due to their excess kinetic
energy acquired during rotational disruption of the former binary components, some of these
fragments collide and merge between them and/or with those coming from the bar breakup,
followed by a rather chaotic dynamics at later times. At these stages, the outcomes of
models U1C–U4C show no sign of convergence at comparable maximum densities. However,
we note that the lack of convergence is not surprising because at this stage the small-scale
fragmentation observed derives from the non-linear amplification of particle noise inherent
in SPH, which leads to different patterns as the spatial resolution is increased. This noise
arises because mutually repulsive pressure forces between particle pairs do not cancel
in all directions simultaneously. It affects the accuracy of SPH and leads to slow
convergence rates. On the other hand, the use of the standard artificial viscosity with a
constant coefficient leads to spurious angular momentum transport in the presence of
vorticity, which may cause the rotational disruption of the binary fragments.
The time evolution of the distribution of the first few moments given by
relations (18)–(21) is depicted in Figure 7 for model U4C. Only the late stages
of collapse during the process of fragmentation are shown. All plots represent only
particles carrying a density greater than ρ _ crit as identified from the
last snapshot generated during the simulation. Starting from a given point in the
evolution, histograms for the particle density, smoothing length, and moment distributions
are constructed. The gray strips in the plots of Figure 7 correspond to the time evolution
of the standard deviations calculated with respect to the maximum of the distributions
(marked with the solid lines), where most particles lie. Hence, the width of the
strips at any given time gives the width of the corresponding distribution. This procedure
allows us to evaluate the quality of the consistency relations in rapidly evolving
regions of high density where the smoothing length is also varying rapidly to guarantee
adequate spatial resolution. According to expansions (16) and (17), the trends of the
M_0, ⟨ M_0^'⟩, and ⟨ M_1^'⟩
distributions are indicative of whether C^0 consistency is achieved during the
evolution, while the degree of C^1 consistency is measured by the time evolution of the
⟨ M_1⟩ and ⟨ M_2^'⟩ distributions.
The maxima of the distributions of ⟨ M_1⟩ and
⟨ M_2^'⟩ always peak at zero because of the symmetry of
the kernel. Note that the maxima of the ⟨ M_0^'⟩ distribution
also peak at zero, except toward the end of the evolution when they start to oscillate
erratically about a mean value close to zero. As the smoothing length decreases sharply within the growing fragments, the width of ⟨ M_1⟩ and
⟨ M_2^'⟩ contracts until approaching a Dirac-δ
distribution. In contrast, the width of ⟨ M_0^'⟩
remains approximately constant. On the other hand, the peaks of the distributions of
M_0 and ⟨ M_1^'⟩ are always below unity, meaning that
C^0 consistency is not achieved. The deviations from unity of M_0 are even larger
than those of ⟨ M_1^'⟩, implying that the estimate of the
function is more sensitive to the particle discretization errors than the estimate of
the gradient. Violation of the normalization condition by the particle
approximation means that the calculation of model U4C is even worse than first-order
accurate. Similar temporal variations of the estimates of the moments were also observed for
models U1C–U3C at lower resolution. For all these models, the values of M_0 and
⟨ M_1^'⟩, which were initially closer to unity, degraded
gradually in the course of collapse and the time interval represented in Figure 7
corresponds to that of maximum deviation.
Details of the evolution of models U2W and U4W
are shown in Figures 8 and 9, respectively, at comparable maximum densities.
As shown in Table 2, model U2W is run with N=600000 and n=6121, while model
U4W uses N=2400000 and n=12289, where the initial value of h is set by the
relation h≈ 7.23n^-0.33 (see Fig. 5). Except for small residual differences
in the evolution of the maximum density at earlier collapse times, models U2W and U4W show essentially the same morphology. The same is true for models U1W and U3W.
It is important to notice that increasing n with resolution implies reducing the
particle discretization errors and improving the mass resolution as
M_ min=nm∼ n^-1. In other words, this means that the particle
approximation approaches the kernel approximation. Since models U1C–U4C work with
smaller smoothing lengths due to their fixed, low value of n compared to models
U1W–U4W for the same N, it is not possible to establish a direct quantitative
comparison between both sequences. Indeed this will require recalculating models
U1W–U4W with huge amounts of neighbors so that both sequences will have the same
value of h but different N. However, working with finer values of
h, while losing complete consistency, does not imply higher accuracy and convergence.
If C^0 and C^1 consistencies are achieved, it follows that the discrete
expansions (16) and (17) tend to their continuous counterparts (13) and (14),
respectively, implying second-order accuracy for the particle approximation
independently of the numerical value of h. This is the essence of particle consistency
in SPH.
The early collapse is qualitatively similar to models U1C–U4C. Initially the cloud
flattens about the equatorial plane, producing an isothermal disk with strong shocks
on both sides of it. The azimuthal structure of the
disk consists of two overdense blobs as a result of the m=2 perturbation seed,
which then fall toward the cloud center to merge into a bar with maximum density at
its endpoints. Due to converging flow into the bar, it soon grows in mass and
undergoes a cylindrical collapse upon itself. The result of this process
is the formation of a binary connected by a considerably more massive and thicker bar
compared to models U1C–U4C. The basic features of the formation of the binary plus
connecting bar are very similar in Figures 8 and 9 despite the difference in spatial
resolution. In these models the bar is centrally condensed, a feature which is not
clear from models U1C–U4C. The bar as a whole is never seen to contract into a
singular filament. The nascent binary cores are spinning about an axis of
symmetry passing through their points of maximum density. This causes the bar to fan
out close to the fragments and develop well-pronounced spiral arms. As the cores
accrete low angular momentum from the connecting bar and the spiral arms, the
binary separation decreases and the bar eventually dissipates. Because of its higher
initial resolution, model U4W fragments into a wider binary (t=1.2869t_ ff)
compared to model U2W (t=1.2973t_ ff). As a result of the accretion process,
well-defined protostellar disks form around the cores. The size of these disks is of
order ∼ 50 AU. We note that the outcome of model U2W is very similar to that
reported by <cit.> for the same initial conditions using their AMR finite-difference
method. The last snapshot of Figure 8 shows the binary at an orbital separation
of ∼ 88 AU, when almost 10% of the cloud mass is contained by the fragments.
A similar binary system is produced by models U3W and U4W but with larger orbital separations
(∼ 146 AU) compared to model U2W at approximately the same maximum density.
However, in model U4W the circumstellar disk of one of the binary cores is seen to
fragment into a secondary of mass ∼ 0.02M_⊙, which then revolves around the
primary with mean orbital separations of ∼14–20 AU. The last snapshot of Figure
9 shows the final configuration for model U4W, where a new small fragment
(∼ 0.006M_⊙) has emerged from the residual bar material, which moves
toward the binary core on the right side and so it will probably merge. The calculation
was terminated at this time because of the increasingly small time steps at this stage
of the evolution. About 9% of the total cloud mass is contained by the cores in
models U3W and U4W. The formation of an apparently stable triple system in the highest
resolved calculation shows that the standard isothermal test is a demanding one.
Fragment disruption is never seen to occur and very good convergence is achieved.
This is a big difference with models U1C–U4C, where the cores disrupted into smaller
fragments and the connecting bar experienced multiple fragmentation along its length
into similar small fragments. The use of a Wendland function with a large number
of neighbors provides sufficient sampling of the kernel volumes and reduces particle noise
compared to the case of models U1C–U4C. Therefore, fragmentation of the protostellar disk
leading to a close binary in model U4W is not the result of noise amplification but
rather of the nonlinear growth of a gravitational instability as the mass resolution
is improved (see Section 5.4 below). The effects of increasing the number of neighbors
from n=30 to 200, while keeping N fixed were previously studied by <cit.>
for initially uniform clouds starting with stronger thermal support (α =0.50)
and lower rotation (β =0.04) than the models considered here. They found that
increasing the ratio n/N speeds up fragmentation because increasing n for a fixed
N decreases the spatial resolution as h necessarily increases. We note that this
strategy is different from the one implemented here where full SPH consistency
demands that n/N→ 0 and h→ 0 in the limit N→∞ and n→∞
<cit.>. The impact of varying the initial temperature on the collapse of the
standard isothermal test case was recently studied by <cit.> using their
GRADSPH code. In particular, when the temperature is set to T=10 K they obtain
a stable binary system in a similar way as shown in Figure 8 for model U2W. However,
their calculations differ from ours in the value of ρ _ crit. If
ρ _ crit is two orders of magnitude higher, this surely lengthens the
isothermal phase of collapse and favors the formation of a stable binary system
(see, for instance, <cit.> for similar calculations with the standard
GADGET-2 code; their Figures 5 and 6). The effects of the magnetic field on a variant
of the standard isothermal test case have been investigated by <cit.> using
the development version of GADGET-3 extended to include the magnetic field. Setting
ρ _ crit=1.0× 10^-14 g cm^-3, they also obtained a stable binary
system as in <cit.> and <cit.> for a purely hydrodynamical calculation
with no magnetic field.
Figures 10 and 11 show the time evolution of the estimates of the moments for models U2W and
U4W. Compared to Figure 7, the moments M_0 and ⟨ M_1^'⟩
are now closer to unity for most of the evolution, implying that approximate C^0
consistency is achieved in this set of calculations, except after ≈ 1.26t_ ff
when the degree of C^0 consistency is temporarily lost within the fragment regions (see Fig.
10 for model U2W). This occurs precisely when h changes rapidly to ensure sufficient
spatial resolution within the higher-density regions. After this adaptive process, i.e., when
the variations of h slow down, C^0 consistency is rapidly restored (after about
1.34t_ ff for model U2W and 1.31t_ ff for model U4W). This is not surprising
since it is well-known that adaptive SPH calculations severely affect the consistency of
the method <cit.>. However, this temporary loss of consistency can be cured by
increasing further both n and N such that the ratio n/N→ 0. This can be seen by
comparing Figures 10 and 11, where the interval of inconsistency is reduced and the
quality of the estimates improves for model U4W. If we take the temporal mean of the maximum
of the distributions of M_0 and ⟨ M_1^'⟩ over the full
interval, the result is very close to unity, implying that C^0 consistency is
maintained on average. This is not the case for models U1C–U4C. The maximum of the
distributions for the other moments in Figures 10 and 11 are seen to exhibit erratic
oscillations about a mean value close to zero. However, the amplitudes of the oscillations
are much smaller for model U4W than for model U2W, implying that approximate C^1
consistency is better achieved by the former model. Therefore, we may conclude that
model U4W is actually closer to second-order accuracy and exhibits less noise than
its counterpart models at lower resolution.
§.§ Collapse of the Gaussian cloud
Calculations of the protostellar collapse starting from centrally condensed, Gaussian
density variations are of greater interest to understand the process of binary
fragmentation. A sequence of models similar to G1C–G6C with increasing spatial
resolution and fixed n (=64) was previously calculated by <cit.> using
the standard GADGET-2 code (their models G1B–G6B). As the resolution was progressively
increased from N=0.6 to 10 million particles, they obtained apparent convergence to a binary
system. In order to separate the effects of the artificial viscosity from those of improved
consistency on fragmentation, we have run this set of models using the standard
GADGET-2 code with n fixed to 64 neighbors and our improved artificial viscosity method
for approximately the same range of resolutions explored by <cit.> (see Table 2).
In this case, the sequence of calculations G1C–G6C all produced triple systems,
consisting of a bound binary plus an ejected third fragment escaping to infinity for
models G1C and G2C. In contrast, models G3C–G6C also produced a bound binary with
the third fragment now orbiting around the binary core at distances from ∼ 4 to 5 times
larger than the binary separation. Although the outcome is the same for all models, the
details of the final patterns and properties of the fragments are not the same, implying a lack of convergence which can be associated with a loss of C^0
consistency as revealed by the time evolution of the distributions of the estimates of
M_0 and ⟨ M_1^'⟩. As was outlined before, the form
of the artificial viscosity affects the outcome of the simulations. Unphysical
dissipation of local velocity gradients away from shocks in the calculations of
<cit.> is likely to be the cause of the differences with the outcomes of sequence
G1C–G6C.
We now describe the results of models G1W–G6W, which were run using the modified
GADGET-2 code with increasing number of neighbors. The initial phase of collapse for
these models is qualitatively similar to that observed for models G1C–G6C. That is,
up to the point where ρ _ max=ρ _ crit, the cloud evolves to a
centrally condensed, flat disk. When the disk becomes adiabatic, it expands due to
increasing pressure forces and deforms by rotational effects into a central bar.
Because of further rotation, the bar wraps up and becomes S-shaped. After
about a rotation period of the central bar, the S-shaped structure grows in
size and develops long arms. Meantime, the bar continues rotating and collapses into
a central blob. By this time, the S-shaped structure has already deformed and
two satellite fragments form at the end parts of the winding arms and at the same
distance from the central core, giving rise to a
ternary system. From top to bottom, Figure 12 shows column density images of the evolution
of models G3W–G6W at comparable maximum densities and same times. It is evident from the
first and second column of images that fragmentation is anticipated when the resolution
is increased. However, from the last column we may see that reasonably good convergence
is achieved by the highly resolved calculations at comparable maximum
densities and times. By 2.0192t_ ff, the fragments are
well-defined and evolving as separate entities from the parent cloud. They
contain about 13.2% (G3W), 8.4% (G4W), 8.5% (G5W), and 8.5% (G6W) of the total cloud
mass, while the separations of the two satellite fragments from the central core are
∼ 298 AU for G3W, ∼ 279 AU for G4W, ∼ 289 AU for G5W, and 288 AU for G6W.
Figures 13 and 14 depict the time evolution of the distribution of the estimates of the
moments for models G3W and G6W. As for the standard isothermal case, only the late evolution
is represented in both figures and the distributions are constructed by considering only
particles with densities >ρ _ crit. Approximate C^0 and C^1 consistencies
are achieved in both cases. However, by comparing these two figures we may see that the
quality of the simulation improves for model G6W working with higher values of n and N.
Therefore, as the values of n and N are increased, the particle discretization errors
decay asymptotically and the calculations become closer
to second-order accuracy. As was stated by <cit.>, in studies of protostellar
collapse and fragmentation it is more difficult to attain convergence for low than for
high thermal support. The point is that in the case of low thermal support the dynamics
of the flow is likely to become highly nonlinear faster than in clouds with high
thermal support. The same is also true for models where the isothermal phase of collapse is
prolonged by choosing high values of the critical density, as is indeed the case of
models U1W–U4W, where ρ _ crit=5.0× 10^-12 g cm^-3. According to
expansions (16) and (17), if C^1 particle consistency is restored, the errors carried
by the estimates of a function and its gradient match those for the kernel approximation
(∼ h^2). However, as n and N are increased, h decreases (see Figure 5).
Thus, decreasing the size of the kernel not only improves the resolution but also
favors the growth of nonlinearity at smaller scales. If thermal support is retarded, then
nonlinear behavior may amplify and lead to further fragmentation. This is precisely the
difference between the outcomes of models U1W–U3W (Figure 8) and model U4W
(Figure 9), where further fragmentation is observed. This is not the case in Figure 12,
where the transition from isothermal to adiabatic collapse is anticipated, and the
higher resolution calculations provide almost the same fragmentation time and pattern.
§.§ Protostellar disk fragmentation
The mass contained within the kernel volume scales with h as ∼ h^3.
Therefore, if h∼ n^-1/3 then the minimum mass varies with n as
M_ min∼ n^-1, implying that large numbers of neighbors leads to improved
mass resolution. This aspect makes a big difference with models U1C-U4C and G1C-G6C,
which employ a fixed value of n (=64) regardless of the total number of particles.
Improving the mass resolution will certainly make it possible to better resolve small-scale features in the flow during the collapse and fragmentation of protostellar cloud cores.
This is the case of the highly resolved models U4W and G4W-G6W, where after large-scale
fragmentation, which was seeded here by a background m=2 density variation,
well-defined rotating disks were clearly seen to form around the growing fragments
as a result of infalling material from the cloud envelope. In particular, Fig. 15 shows
enlarged density maps for the evolution of one of the former binary fragments formed in
model U4W (see Fig. 9, leftmost fragment at t=1.3252t_ ff). As the fragment grows
in density, a circumstellar disk forms which then becomes sufficiently massive to develop
a two-armed spiral structure associated with the linear growth stage of a gravitational
instability. By this time (1.3042t_ ff), the mass of the disk is
≈ 0.011 M_⊙ compared to ≈ 0.032 M_⊙ of the central protostar.
The radius of the disk is R_ disk≈ 25 AU and grows to ≈ 36 AU by
t=1.3109t_ ff just before fragmentation. According to the Toomre stability
criterion, the disk is unstable to axisymmetric perturbations if
\[
Q \approx \frac{2 M_{\star}}{M_{\rm disk}}\, \frac{H}{r} < 1, \tag{38}
\]
where M_⋆ is the mass of the central protostar, H is the disk scale height,
and r denotes radial distance from the central protostar. In the above definition
we have assumed that the disk is Keplerian and define H=c/ω, where ω is the
Keplerian angular velocity at radius r. Taking r=20 AU, which is the approximate
radius where fragmentation of the disk occurs, we find that H/r≈ 0.17 and the
Toomre parameter Q≈ 0.97. Soon thereafter, the gravitational
instability enters a nonlinear growth phase and the outermost part of one of the arms
condenses into a secondary at a distance of ≈ 18 AU from the primary, leading
to fragmentation of the disk into a tight binary. The newly formed fragment takes its orbital
angular momentum from the rotation of the disk and revolves around the primary in an
approximate circular orbit. By 1.327t_ ff, when the calculation is terminated,
the primary has a mass of ≈ 0.044 M_⊙, while the mass of the secondary is
≈ 0.02 M_⊙.
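The quoted stability estimate can be verified directly from these numbers; a minimal sketch using the fragment masses and aspect ratio given in the text for model U4W yields Q just below unity:

```python
M_sun = 1.989e33                               # g
M_star, M_disk = 0.032 * M_sun, 0.011 * M_sun  # protostar and disk (model U4W)
H_over_r = 0.17                                # disk aspect ratio at r = 20 AU

Q = 2.0 * (M_star / M_disk) * H_over_r         # Toomre parameter, Eq. (38)
print(Q)   # ~0.99, consistent with the quoted Q ~ 0.97 given rounding of H/r
```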
The calculation of model U4W shows that working with 12289 neighbors is enough to
resolve small-scale fragmentation due to the gravitational instability of a massive
protostellar disk. While this can be seen as a possible mechanism for the formation
of binary/multiple stellar systems separated by a few AU, recent observations of
the L1448 IRS3B triple system, consisting of two protostars at the center and a third
one distant from them, are providing strong support to this conclusion <cit.>.
The observations, which were conducted with the Atacama Large Millimeter/submillimeter
Array (ALMA), show that the spiral structure in the dusty disk surrounding the young stars
is indicative of their having been formed by fragmentation of the disk via a gravitational
instability. Hence, differences in distance may be the result of different formation
mechanisms. For instance, systems separated by hundreds to thousands AU are likely to
be the result of fragmentation of the larger cloud during its early gravitational
collapse, while tighter systems with separations of tens of AU may be hierarchical
systems formed from disk fragmentation.
Similarly, as shown in Fig. 12, models G1W-G6W collapsed to form a central protostar
surrounded by a circumstellar disk, which then experienced fragmentation into two
secondaries, forming a tertiary protostellar system. This time the circumstellar disk
appears to be larger (≳ 600 AU) at the time of fragmentation compared to
model U4W because of thermal retardation due to the assumption of a lower value of
the critical density (=5.0× 10^-14 g cm^-3) for the Gaussian models. As
the ternary fragments grow in density, each of them develop well-pronounced
circumstellar disks as shown by the last column of density maps in Fig. 12 for
models G4W-G6W.
§ CONCLUSIONS
We have investigated the consistency of smoothed particle hydrodynamics (SPH) in
numerical simulation tests of the collapse and fragmentation of rotating molecular cloud
cores. A modified version of the simulation code GADGET-2 <cit.> was used for
the calculations, where the interpolation kernel was replaced by a Wendland C^4 function
to allow support of large numbers of neighbors and an advanced scheme for the artificial
viscosity was implemented based on the method proposed by <cit.>. Approximations
to the power-law relations provided by <cit.> were used to set the kernel
interpolation parameters, namely the total number of particles N, the smoothing length
h, and the number of neighbors n, where h is allowed to vary with N as
h∼ N^-1/6. With this choice, the scalings h≈ 7.23n^-0.33 and
n≈ 7.61N^0.503 were used to set the initial values of h and n for fixed N.
As the domain resolution is increased, these scalings comply with the combined limit
N→∞, h→ 0, and n→∞ with n/N→ 0 for full SPH consistency
<cit.>.
The initial conditions for the protostellar collapse models were chosen to be the
“standard isothermal test case” in the variant calculated by <cit.> and
the centrally condensed, Gaussian cloud advanced by <cit.>, coupled to a barotropic
pressure-density relation to simulate the transition from the isothermal to the nonisothermal
collapse. The critical density, separating both regimes, was set to
ρ _ crit=5.0× 10^-12 g cm^-3 for the standard isothermal test
to provide insight into the role played by n for a case where convergence is
more demanding at late stages of the evolution. In contrast, for the
Gaussian cloud model ρ _ crit=5.0× 10^-14 g cm^-3, which is more
representative of the near-isothermal phase <cit.>. Since convergence is easier
when shortening the isothermal phase of collapse due to thermal retardation, this model
has been used to discern the effects of the artificial viscosity on the final outcome
by comparing with previous calculations by <cit.>.
Two separate sequences of calculations with increasing N were run for both models. One
sequence used the standard version of GADGET-2 with fixed n(=64), while the other
sequence was calculated using the modified version of the code with varied n.
Over ∼ 9 orders of magnitude
increase in density, the standard isothermal models with fixed n produced a binary
connected by a singular bar in much the same way as reported in previous SPH calculations.
However, as the evolution was continued farther in time the binary cores and the bar
were seen to fragment into smaller condensations, with the dynamics becoming highly nonlinear
and chaotic due to numerical noise amplification. At these stages, even the highly
resolved models showed no sign of convergence. In contrast, the models with varied n
experienced a similar initial collapse, producing stable binary systems for moderate
resolutions (N≤ 1200000) and a triple system for N=2400000 due to fragmentation
of the disk around one of the binary components into a secondary.
Owing to the higher number of neighbors
and hence improved mass resolution for this model, it was possible to resolve the
small-scale structure and fragmentation of the disk into a close binary. This
mechanism has recently received convincing observational evidence for explaining the
formation of close binary/multiple protostellar systems <cit.>.
On the other hand, the Gaussian
clouds using the standard GADGET-2 code with fixed n but with the new scheme of the
artificial viscosity produced different outcomes at all resolutions compared to those
previously reported by <cit.>. In all cases, only qualitative convergence into
a triple system was achieved, consisting of a bound binary plus a third core at much
higher orbital distances. Evidently, the reduced dissipation and better treatment of
shocks implied by the new artificial viscosity had an important impact on the final
outcome. With the modified code, all runs also produced a final triple system but with
a quite different pattern. In this case, the highly resolved runs were seen to converge
at comparable maximum densities and times.
The degree of consistency of the calculations was measured by tracking how well the
kernel consistency relations were reproduced by the particle
approximation. From the time evolution of the estimates of the moments of the kernel
it was clear that all calculations with fixed n(=64) were inconsistent. The normalization
condition of the kernel and the first moment of the gradient always diverged from
unity, with the maximum deviations occurring at the late stages of the evolution just
after the fragmentation period when the fragments were growing in density and h was
varying rapidly to ensure adequate spatial resolution, meaning that C^0 consistency
was not achieved by these models. Thus, violation of the normalization condition by the
particle approximation implies that these calculations are even worse than first-order
accurate due to persisting zeroth-order discretization errors <cit.>. In
contrast, approximate C^0 and C^1 consistencies were achieved by all models
when n was allowed to vary with N. However, loss of C^0 consistency was
temporarily observed within the fragment regions due to rapid variations of h there,
confirming previous expectations that adaptive kernels
severely affect the consistency of SPH <cit.>. After this adaptive process, the
variations of h in the high-density regions slowed down and C^0 consistency was
rapidly restored by the models. In both sequences of calculations, as n and N are
increased the interval and degree of inconsistency are progressively reduced and the
quality of the calculations is improved. On the other hand, the temporal means of the
estimates of the normalization condition of the kernel and the first moment of the
gradient over the full interval, where consistency is lost and then restored, are seen
to peak very close to unity, implying that C^0 consistency is achieved on average
within the fragment regions. Since the estimates of the second moments are always close
to zero, approximate C^1 consistency is also achieved. We may therefore conclude that
the simulations presented here are actually close to second-order accuracy.
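The moment estimates described here are simple kernel sums and can be reproduced in a few lines. The Python sketch below evaluates the normalization moment M_0 at the central particle of a uniform unit lattice with a 3D Wendland C4 kernel; the 495/(256π) normalization follows Dehnen & Aly (2012) and should be treated as an assumption:

```python
import numpy as np

# Sketch: the discrete normalization moment tracked above, evaluated at the
# central particle of a uniform unit lattice (V_j = 1).  C^0 consistency
# requires M0 = sum_j V_j W_ij -> 1 as the neighbor number grows.

def wendland_c4(r, H):
    # 3D Wendland C4 kernel with support radius H; normalization assumed
    # from Dehnen & Aly (2012).
    q = r / H
    w = np.where(q < 1.0,
                 (1.0 - q) ** 6 * (1.0 + 6.0 * q + (35.0 / 3.0) * q ** 2),
                 0.0)
    return 495.0 / (256.0 * np.pi) * w / H ** 3

def m0_central(H, half=7):
    ax = np.arange(-half, half + 1, dtype=float)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)
    return wendland_c4(r, H).sum()

for H in (2.0, 3.0, 4.0, 6.0):
    n = 4.0 / 3.0 * np.pi * H ** 3    # approximate neighbor count
    print(f"H = {H:.0f} (n ~ {n:5.0f}):  M0 = {m0_central(H):.5f}")
```

Larger support radii (hence larger n) drive M0 toward unity, mirroring the trend of ⟨M_0⟩_e in Table 1.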
As a final remark, it has been demonstrated that C^0 particle consistency for both
the estimates of a function and its gradient implies preservation of the homogeneity
and isotropy of the discrete space, which have as consequences conservation of the
linear and angular momentum, respectively <cit.>. Therefore,
we may expect that local linear and angular momentum are well conserved in our consistent
collapse calculations. However, it would be interesting to quantify numerically the
degree of angular momentum conservation when C^0 consistency is achieved in the
limit n/N→ 0. Due to its Lagrangian character, SPH provides direct access to the
initial angular momentum of particles so that any loss can be easily quantified following
a similar analysis to that developed by <cit.>. Future studies in this
line will deal with the impact of consistency on angular momentum conservation and how
to address the Jeans-resolution requirement under large numbers of neighbors.
We thank the anonymous referee for raising a number of comments and suggestions that
have improved the style and content of the manuscript. In particular, his/her
comment on the relation between consistency and mass resolution is much acknowledged.
The calculations of this paper were performed using the computing facilities of
ABACUS-Centro de Matemáticas Aplicadas y Cómputo de Alto Rendimiento of Cinvestav-IPN.
This work was partially supported by ABACUS through the CONACyT grant EDOMEX-2011-C01-165873
and by the Departamento de Ciencias Básicas e Ingeniería (CBI) of the Universidad
Autónoma Metropolitana–Azcapotzalco (UAM-A), the Instituto de Ciencias Básicas e
Ingenierías of the Universidad Autónoma del Estado de Hidalgo (UAEH), and the
Instituto Venezolano de Investigaciones Científicas (IVIC) through internal funds.
[Arreaga-García et al. (2007)]Arreaga07 Arreaga-García, G., Klapp,
J., Sigalotti, L. Di G., & Gabbasov, R. 2007, , 666, 290
[Bate & Burkert (1997)]Bate97 Bate, M. R., & Burkert, A. 1997, ,
288, 1060
[Bonet & Lok (1999)]Bonet99 Bonet, J., & Lok, T.-S. L. 1999, Comput.
Meth. Appl. Mech. Eng., 180, 97
[Boss & Bodenheimer (1979)]Boss79 Boss, A. P., & Bodenheimer, P. 1979,
, 234, 289
[Boss (1991)]Boss91 Boss, A. P. 1991, , 351, 298
[Boss et al. (2000)]Boss00 Boss, A. P., Fisher, R. T., Klein, R. I., &
McKee, C. F. 2000, , 528, 325
[Burkert & Bodenheimer (1993)]Burkert93 Burkert, A., & Bodenheimer, P. 1993, , 264, 798
[Bürzle et al. (2011)]Burzle11 Bürzle, F., Clark, P. C., Stasyszyn, F.,
Greif, T., Dolag, K., Klessen, R. S., & Nielaba, P. 2011, , 412, 171
[Cartwright et al. (2009)]Cartwright09 Cartwright, A., Stamatellos, D., &
Whitworth, A. P. 2009, , 395, 2373
[Chen et al. (1999)]Chen99 Chen, J. K., Beraun, J. E., & Jih, C. J. 1999,
Comput. Mech., 24, 273
[Commerçon et al. (2008)]Commercon08 Commerçon, B., Hennebelle, P.,
Audit, E., Chabrier, G., & Teyssier, R. 2008, A&A, 482, 371
[Couchman et al. (1995)]Couchman95 Couchman, H. M. P., Thomas, P. A., &
Pearce, F. R. 1995, , 452, 797
[Cullen & Dehnen (2010)]Cullen10 Cullen, L., & Dehnen, W. 2010, ,
408, 669
[Dehnen & Aly (2012)]Dehnen12 Dehnen, W., & Aly, H. 2012, , 425,
1068
[Gingold & Monaghan (1977)]Gingold77 Gingold, R. A., & Monaghan, J. J. 1977, , 181, 375
[Hayward et al. (2014)]Hayward14 Hayward, C. C., Torrey, P., Springel, V.,
Hernquist, L., & Vogelsberger, M. 2014, , 442, 1992
[Hu et al. (2014)]Hu14 Hu, C.-Y., Naab, T., Walch, S., Moster, B. P., &
Oser, L. 2014, , 443, 1173
[Hubber et al. (2006)]Hubber06 Hubber, D. A., Goodwin, S. P., &
Whitworth, A. P. 2006, , 450, 881
[Kitsionas & Whitworth (2002)]Kitsionas02 Kitsionas, S., & Whitworth, A.
P. 2002, , 330, 129
[Klein et al. (1999)]Klein99 Klein, R. I., Fisher, R. T., McKee, C. F.,
& Truelove, J. K. 1999, in Numerical Astrophysics 1998, ed. S. Miyama, K. Tomisaka,
& T. Hanawa (Dordrecht: Kluwer), 131
[Li & Liu (1996)]Li96 Li, S. F., & Liu, W. K. 1996, Comput. Meth. Appl.
Mech. Eng., 139, 159
[Litvinov et al. (2015)]Litvinov15 Litvinov, S., Hu, X. Y., & Adams, N.
A. 2015, J. Comput. Phys., 301, 394
[Liu & Liu (2006)]Liu06 Liu, M. B., & Liu, G. R. 2006, Appl. Numer. Math.,
56, 19
[Liu et al. (2003)]Liu03 Liu, M. B., Liu, G. R., & Lam, K. Y. 2003, J.
Comput. Appl. Math., 155, 263
[Lucy (1977)]Lucy77 Lucy, L. B. 1977, , 82, 1013
[Lynden-Bell & Pringle (1974)]Lynden74 Lynden-Bell, D., & Pringle, J.
E. 1974, , 168, 603
[Maddison et al. (1996)]Maddison96 Maddison, S. T., Murray, J. R., &
Monaghan, J. J. 1996, , 13, 66
[Monaghan (1992)]Monaghan92 Monaghan, J. J. 1992, , 30, 543
[Monaghan & Lattanzio (1985)]Monaghan85 Monaghan, J. J., & Lattanzio,
J. C. 1985, , 149, 135
[Monaghan (1997)]Monaghan97 Monaghan, J. J. 1997, J. Comput. Phys., 136,
298
[Morris & Monaghan (1997)]Morris97 Morris, J. P., & Monaghan, J. J. 1997,
J. Comput. Phys., 136, 41
[Nelson et al. (2009)]Nelson09 Nelson, A. F., Wetzstein, M., & Naab,
T. 2009, , 184, 326
[Price (2012)]Price12 Price, D. J. 2012, J. Comput. Phys., 231, 759
[Rasio (2000)]Rasio00 Rasio, F. A. 2000, Prog. Theoret. Phys. Suppl., 138,
609
[Read et al. (2010)]Read10 Read, J. I., Hayfield, T., & Agertz, O. 2010,
, 405, 1513
[Riaz et al. (2014)]Riaz14 Riaz, R., Farooqui, S. Z., & Vanaverbeke, S.
2014, , 444, 1189
[Rosswog (2015)]Rosswog15 Rosswog, S. 2015, Living Rev. Comput. Astrophys.,
1, 1
[Sibilla (2015)]Sibilla15 Sibilla, S. 2015, Comput. Fluids, 118, 148
[Sigalotti et al. (2016)]Sigalotti16 Sigalotti, L. Di G., Klapp, J., Rendón,
O., Vargas, C. A., & Peña-Polo, F. 2016, Appl. Numer. Math., 108, 242
[Springel (2005)]Springel05 Springel, V. 2005, , 364, 1105
[Springel & Hernquist (2002)]Springel02 Springel, V., & Hernquist, L. 2002,
, 333, 649
[Tobin et al. (2016)]Tobin16 Tobin, J. J., Kratter, K. M., Persson, M. V., et
al. 2016, , 538, 483
[Truelove et al. (1998)]Truelove98 Truelove, J. K., Klein, R. I., McKee, C. F.,
Holliman, J. H., Howell, L. H., & Greenough, J. A. 1998, , 495, 821
[Vignjevic (2009)]Vignjevic09 Vignjevic, R., & Campbell, J. 2009, in Predictive
Modeling of Dynamic Processes, ed. S. Hiermaier (Dordrecht: Springer), 367
[Wendland (1995)]Wendland95 Wendland, H. 1995, Adv. Comput. Math., 4, 389
[White (1996)]White96 White, S. D. M. 1996, in Cosmology and Large Scale
Structure, ed. R. Schaeffer, J. Silk, M. Spiro, & J. Zinn-Justin (Amsterdam: Elsevier),
349
[Whitworth et al. (1995)]Whitworth95 Whitworth, A. P., Bhattal, A. S.,
Turner, J. A., & Watkins, S. J. 1995, , 301, 929
[Whitworth (1998)]Whitworth98 Whitworth, A. P. 1998, , 296, 442
[Zhang & Batra (2004)]Zhang04 Zhang, G. M., & Batra, R. C. 2004, Comput.
Mech., 34, 137
[Zhu et al. (2015)]Zhu15 Zhu, Q., Hernquist, L., & Li, Y. 2015, ,
800, 6
Table 1. Standard deviation σ(·) and expectation value ⟨·⟩_e of the density and moments estimates as a function of n. Entries like 3.44(-2) denote 3.44 × 10^-2.

Glass
n          48        64        120       240       480       800       1600      3200
σ(ρ)       3.44(-2)  1.67(-2)  3.31(-3)  1.31(-3)  8.85(-4)  7.03(-4)  4.85(-4)  3.12(-4)
⟨ρ⟩_e      1.05479   1.02380   1.00421   1.00064   1.00003   0.99994   0.99990   0.99989
σ(M_0)     1.93(-2)  1.04(-2)  2.46(-3)  7.36(-4)  3.89(-4)  3.51(-4)  2.72(-4)  1.87(-4)
⟨M_0⟩_e    0.570     0.678     0.828     0.914     0.957     0.974     0.987     0.993
σ(M_1)     2.15(-4)  1.19(-4)  1.75(-5)  4.46(-6)  3.21(-6)  3.58(-6)  3.96(-6)  3.76(-6)
⟨M_1⟩_e    0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0
σ(M_0')    2.63      1.21      1.76(-1)  3.55(-2)  1.08(-2)  7.97(-3)  5.45(-3)  3.18(-3)
⟨M_0'⟩_e   0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0
σ(M_1')    3.47(-2)  2.31(-2)  4.73(-3)  9.18(-4)  2.75(-4)  1.75(-4)  1.17(-4)  8.71(-5)
⟨M_1'⟩_e   0.86317   0.93636   0.98910   0.99792   0.99967   0.99991   0.99997   0.99999
σ(M_2')    2.14(-4)  1.56(-4)  3.93(-5)  6.63(-6)  2.65(-6)  2.38(-6)  2.31(-6)  2.05(-6)
⟨M_2'⟩_e   0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0

Random
σ(ρ)       2.97(-1)  2.17(-1)  1.20(-1)  6.66(-2)  3.75(-2)  2.45(-2)  1.37(-2)  7.67(-3)
⟨ρ⟩_e      1.22783   1.14226   1.05358   1.01846   1.00626   1.0027    1.00077   1.00019
σ(M_0)     1.03(-1)  9.54(-2)  6.73(-2)  4.10(-2)  2.39(-2)  1.59(-2)  9.04(-3)  5.06(-3)
⟨M_0⟩_e    0.541     0.659     0.821     0.912     0.956     0.974     0.987     0.993
σ(M_1)     6.55(-4)  6.23(-4)  5.04(-4)  3.74(-4)  2.71(-4)  2.13(-4)  1.53(-4)  1.09(-4)
⟨M_1⟩_e    0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0
σ(M_0')    10.30     7.73      3.90      1.78      8.06(-1)  4.49(-1)  2.03(-1)  9.13(-2)
⟨M_0'⟩_e   0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0
σ(M_1')    8.47(-2)  6.82(-2)  4.11(-2)  2.43(-2)  1.42(-2)  9.49(-3)  5.39(-3)  3.05(-3)
⟨M_1'⟩_e   0.695     0.794     0.915     0.969     0.989     0.995     0.998     0.999
σ(M_2')    4.32(-4)  3.89(-4)  2.99(-4)  2.24(-4)  1.64(-4)  1.29(-4)  9.21(-5)  6.61(-5)
⟨M_2'⟩_e   0.0       0.0       0.0       0.0       0.0       0.0       0.0       0.0
Table 2. Collapse models.

Model   Total number of particles (N)   Number of neighbors (n)   Final outcome

Uniform clouds
U1C       300,000      64      Binary?
U2C       600,000      64      Binary?
U3C     1,200,000      64      Binary?
U4C     2,400,000      64      Binary?
U1W       300,000    4321      Binary
U2W       600,000    6121      Binary
U3W     1,200,000    8673      Binary
U4W     2,400,000   12289      Triple

Gaussian clouds
G1C       300,000      64      Triple
G2C       600,000      64      Triple
G3C     1,200,000      64      Triple
G4C     2,400,000      64      Triple
G5C     4,800,000      64      Triple
G6C     9,600,000      64      Triple
G1W       300,000    4321      Triple
G2W       600,000    6121      Triple
G3W     1,200,000    8673      Triple
G4W     2,400,000   12289      Triple
G5W     4,800,000   17412      Triple
G6W     9,600,000   24673      Triple
| The method of smoothed particle hydrodynamics (SPH) was developed in the late 1970s
by <cit.> and <cit.> as a numerical tool for solving the equations
of gravitohydrodynamics in three-dimensional open space. Today, the use of SPH spans
many areas of astrophysics and cosmology as well as a broad range of fluid and
solid mechanics related areas. However, despite its extensive applications and
recent progress in consolidating its theoretical foundations, SPH still has unknown
properties that need to be investigated. A fundamental numerical aspect of SPH is
the lack of particle consistency, which affects the accuracy and convergence of the
method. Several modified techniques and corrective methods have been proposed to
restore particle consistency in fluid dynamics calculations <cit.>; the most successful are those based on Taylor series
expansions of the kernel approximations of a function and its derivatives. If m
derivatives are retained in the series expansions, the resulting kernel and particle
approximations will have (m+1)th-order accuracy or C^m consistency. However,
the improved accuracy of these methods comes at the price of involving matrix
inversions, which represent a major computational burden for time-evolving simulations
and eventually a loss of numerical stability due to matrix conditioning
for some specific problems. On the other hand, while these corrective methods solve
for particle inconsistency due to truncation of the kernel at model boundaries, it is
not clear how irregular particle distributions and the use of variable
smoothing lengths affect the consistency (and therefore the accuracy) of the solutions.
Recently, <cit.> showed that the condition for the particle approximation
to restore C^0 consistency and achieve asymptotic error decay is that the
volumes defined by the particles and the inter-particle faces partition the entire
domain, i.e., constitute a partition of unity. They found that this condition is
satisfied by relaxing the particles under a constant pressure field by keeping the
particle volumes invariant, yielding convergence rates for such a relaxed distribution
that are the same as those for particles on a perfect regular lattice. Quite curiously,
they also observed that the relaxed particle distributions obtained this way resemble
that of liquid molecules resulting from microscopic simulations. A method to improve
the SPH estimate of derivatives which is not affected by particle disorder was also
devised recently by <cit.>.
In comparison, little work has been done to improve the SPH consistency in astrophysical
applications. In many cases, especially those involving self-gravitating flows, large
density gradients arise and an adaptive kernel is used to guarantee spatial resolution
in regions of high density. It has long been recognized that spatially adaptive
calculations where a variable smoothing length is employed turn out to be inconsistent
<cit.>. It was not until recently that <cit.> identified another source
of particle inconsistency associated with the finite number of neighbors within the
compact support of a smoothed function. It is common practice in SPH calculations to
assume that a large number of total particles, N, and a small smoothing length, h,
are sufficient conditions to achieve consistent solutions, while holding the number of
neighbor particles, n, fixed at some value n≪ N. <cit.> demonstrated that
C^0 particle consistency, i.e., satisfaction of the discrete normalization condition of
the kernel function can only be achieved when n is sufficiently large for which the finite
SPH sum approximation approaches the continuous limit. This result is consistent with the
error analysis of the SPH representation of the continuity and momentum equations
carried out by <cit.>, who found that particle consistency is completely lost
due to zeroth-order error terms that would persist when working with a finite number of
neighbors even though N→∞ and h→ 0. Indeed, as the resolution is increased,
approaching the limit N→∞ and h→ 0, the overall error will grow at a
faster rate if the magnitude of the zeroth-order error terms remains
constant. Based on these observations, full particle consistency is possible in SPH
only if the joint limit N→∞, h→ 0, and n→∞ is satisfied
<cit.>. However, we recall that this combined limit was first noted by
<cit.> using a simple linear analysis on one-dimensional sound wave propagation.
In particular, he found that SPH is fully consistent in this limit with N→∞
faster than n such that n/N→ 0.
On the other hand, <cit.> conjectured that for quasi-regularly distributed
particles, the discretization error made when passing from the continuous kernel to the
particle approximation is proportional to (log n)^d/n, where d is the dimension.
For n≫ 1, <cit.> parameterized this error
as ∼ n^-γ, where γ varies from 0.5 for a random
distribution to 1 for a perfectly regular lattice of particles. Combining this with
the leading error (∝ h^2) of the continuous kernel approximation for most
commonly used kernel forms, <cit.> derived the scaling relations n∝ N^1/2
and h∝ N^-1/6, which satisfy the joint limit as the domain resolution is
progressively increased. A recent analysis on standard SPH has demonstrated that using
the above scalings C^0 consistency is fully restored for both the estimates of the
function and its derivatives in contrast to the case where n is fixed to a
constant small value, with the numerical solution becoming also insensitive to the degree of
particle disorder <cit.>.
While these results are promising, it remains to investigate the response of the method
for spatially adaptive calculations in the presence of large gradients where the loss of
particle consistency is known to be most extreme. In particular, most of the above
analyses are based on static convergence tests for analytical functions in two- or
three-space dimensions using either uniformly or irregularly distributed point sets,
or on dynamical test problems for which an analytical solution is known in advance, and
therefore the results obtained are limited to idealized circumstances. As was emphasized
by <cit.>, the lack of consistency associated with particle disorder and spatial
adaptivity is not specific to a particular SPH scheme but is rather a generic problem.
It would therefore be desirable to test the present method for more complex models
as those involving the solution of the equations of hydrodynamics coupled to
gravity in three-space dimensions. To do so we choose as a problem the gravitational
collapse and fragmentation of an initially rotating protostellar cloud, using a
modified version of the GADGET-2 code <cit.>. As templates for the model
clouds we use the “standard isothermal test case” in the variant calculated by
<cit.> and the centrally condensed, Gaussian cloud model of <cit.>
coupled to a barotropic equation of state to mimic the nonisothermal collapse. The
simulations will then allow us to better understand how varying the number of
neighbors as the resolution is increased affects the SPH discretization errors, which
naturally emerge from the density estimate itself and the SPH momentum equation.
The convergence and accuracy of the simulations are measured by evaluating how well
the particle approximations of the integral consistency relations (or moments of the
kernel) are satisfied during the evolution.
A further implication of the consistency scaling relations on protostellar collapse
calculations is the improved mass resolution. Since the minimum resolvable mass,
M_ min, scales with h as h^3, this implies that M_ min∼ n^-1.
Although the collapse models proposed here start from ideal conditions, this aspect
has an important impact on the outcome of the simulations, where well-defined, rotating
circumstellar disks are seen to form around the growing fragments, which then increase in mass,
develop spiral arms, and fragment to produce small-scale binary/multiple protostellar
systems. This result is consistent with recent observations of L1448 IRS3B <cit.>:
a close triple protostar system where two of the protostars formed by fragmentation of
a massive disk with a spiral structure surrounding a primary, young star formed from
the collapse of a larger cloud of gas and dust. While based principally on the relative
proximity of the companion stars, this observation provides for the first time direct
evidence of protostellar disk fragmentation as a mechanism for the formation of
close binary/multiple young stars. | null | null | null | null | null |
http://arxiv.org/abs/1701.07773v1 | 20170126165434 | Data preservation at the Fermilab Tevatron | ["S. Amerio", "S. Behari", "J. Boyd", "M. Brochmann", "R. Culbertson", "M. Diesburg", "J. Freeman", "L. Garren", "H. Greenlee", "K. Herner", "R. Illingworth", "B. Jayatilaka", "A. Jonckheere", "Q. Li", "S. Naymola", "G. Oleynik", "W. Sakumoto", "E. Varnes", "C. Vellidis", "G. Watts", "S. White"] | hep-ex | ["hep-ex", "physics.ins-det"]
Authors: S. Amerio (Padova); S. Behari, J. Boyd, R. Culbertson, M. Diesburg, J. Freeman, L. Garren, H. Greenlee, K. Herner, R. Illingworth, B. Jayatilaka*, A. Jonckheere, Q. Li, S. Naymola, G. Oleynik, C. Vellidis, S. White (Fermilab); M. Brochmann, G. Watts (Washington); W. Sakumoto (Fermilab and Rochester); E. Varnes (Arizona)

Affiliations:
Padova: Istituto Nazionale di Fisica Nucleare, Sezione di Padova-Trento and University of Padova, I-35131 Padova, Italy
Fermilab: Fermi National Accelerator Laboratory, Batavia, Illinois 60510, USA
Washington: University of Washington, Seattle, Washington 98195, USA
Rochester: University of Rochester, Rochester, New York 14627, USA
Arizona: University of Arizona, Tucson, Arizona 85721, USA
*Corresponding author. E-mail address: [email protected]
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
§ INTRODUCTION
The Tevatron was a proton-antiproton collider located at Fermi National Accelerator Laboratory (Fermilab). Run II of the Tevatron, occurring from 2001 to 2011 and having collisions with a center-of-mass energy of 1.96 TeV, saw the CDF and D0 collaborations <cit.> record datasets corresponding to an integrated luminosity of approximately 11 fb^-1 per experiment. These datasets helped make groundbreaking contributions to high energy physics including the most precise measurements of the W boson and top quark masses, observation of electroweak production of top quarks, observation of B_s oscillations, and first evidence of Higgs boson decay to fermions.
The unique nature of the Tevatron's proton-antiproton collisions and large size of the datasets means that the CDF and D0 data will retain their scientific value for years to come, both as a vehicle to perform precision measurements as newer theoretical calculations appear, and to potentially validate any new discoveries at the LHC.
The Fermilab Run II Data Preservation Project (R2DP) aims to ensure that both experimental collaborations have the ability to perform complete physics analyses on their full datasets through at least the year 2020. To retain full analysis capability, the project must preserve not only the experimental data themselves, but also their software and computing environments. This requires ensuring that the data remain fully accessible in a cost-effective manner and that experimental software and computing environments are supported on modern hardware. Furthermore user jobs must be able to run at newer facilities when dedicated computing resources are no longer available and job submission and data movement to these new facilities must be accomplished within the familiar software environment with a minimal amount of effort on the part of the end-user. Documentation is also a critical component of R2DP and includes not only the existing web pages, databases, internal documents, but also requires writing clear, concise instructions detailing how users need to modify their usual habits to work in the R2DP computing infrastructure.
§ DATASET PRESERVATION
§.§ Collision data
At the time of the Tevatron shutdown, the data for both CDF and D0 were stored on LTO4 tapes <cit.>, which have a per-tape capacity of 800 GB. An analysis of then-available tape technologies concluded that T10K tapes <cit.>, with a capacity of up to 5 TB per tape, would be the near-term choice for archival storage at Fermilab. While it was theoretically possible to leave the CDF and D0 data on LTO4 storage, a decision was made to migrate these data to T10K storage for two reasons. First, if the Tevatron data were accessed for a long period of time after data taking ended, the LTO4 storage may be an unsupportable configuration; as LTO4 tapes decline in usage industry-wide there may not be replacement storage easily available. Second, as storage media and drives for older technology become scarce, their costs rise, potentially increasing the overall long-term cost of staying with LTO4 storage. Due to these concerns, the commitment was made to purchase T10K tapes and migrate all of the CDF and D0 data. It took approximately two years for the migration to be completed (Fig. <ref>) and the CDF and D0 data now share tape access resources with active Fermilab experiments.
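The scale of the migration is easy to check with back-of-the-envelope arithmetic; the Python sketch below ignores fill factors and compression, so the counts are rough:

```python
import math

# Rough scale of the migration described above: ~9 PB per experiment,
# LTO4 at 0.8 TB per tape versus T10K at up to 5 TB per tape.

def tapes_needed(data_tb: float, tb_per_tape: float) -> int:
    return math.ceil(data_tb / tb_per_tape)

DATA_TB = 9_000.0  # ~9 PB, decimal units as tape capacities are quoted
for medium, capacity_tb in (("LTO4", 0.8), ("T10K", 5.0)):
    count = tapes_needed(DATA_TB, capacity_tb)
    print(f"{medium}: ~{count:,} tapes per experiment")
```

The roughly order-of-magnitude reduction in tape count (about 11,000 LTO4 cartridges versus under 2,000 T10K cartridges per experiment) is part of what made the migration cost-effective.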
Both CDF and D0 migrated all data stored on tape, including raw detector data, reconstructed detector data and derived datasets, and simulation. CDF has also made an additional copy of its raw detector data at the National Centre for Research and Development in Information Technology (CNAF) in Italy <cit.>. Table <ref> shows the amount of each type of data that CDF and D0 have stored over Run II.
[While not within the scope of R2DP, a copy of raw CDF data from Run 1 of the Tevatron (1992-1996) is also being made at CNAF.]
§.§ Non-statistical data
Throughout the Tevatron run, both CDF and D0 used Oracle database software for non-statistical data, such as detector calibrations. The ongoing cost of maintaining Oracle licenses, which are not used by most current Fermilab experiments for scientific use, presented a long-term challenge. The database schema was heavily interwoven with the analysis software. As a result, converting to a more economical open source database solution such as PostgreSQL would incur a prohibitive investment in human resources. Thus, both experiments decided to retain the Oracle database systems throughout the life of the data preservation period, following an upgrade to the most recent version of Oracle at the time of the Tevatron shutdown. Furthermore, as future upgrades to the Oracle database could potentially disrupt the existing schema, and thus the analysis software, a contingency plan was drawn up whereby the current version and schema could be frozen and run in the future, in network isolation if necessary, even if support for that version had ceased. The CDF and D0 Oracle database schemas currently contain 1.89× 10^10 rows and continue to be accessed by physicists generating simulated data for ongoing analyses. A breakdown of the main types and size of data in the CDF and D0 databases appears in Table <ref>. While the project decided to retain the Oracle databases, there was a risk that the physical hardware could fail before the end of the project lifetime. To mitigate that risk, the experiments moved the databases themselves to virtual machines. Most of the servers used for database hosting and caching at both experiments were transitioned to virtual hardware with no degradation in performance or uptime.
§ SOFTWARE AND ENVIRONMENT PRESERVATION
Both CDF and D0 have complex software frameworks to carry out simulation, reconstruction, calibration, and analysis. Most of the core software was developed in the early- to mid-2000s on Scientific Linux running on 32-bit x86 architectures. During the operational period of the Tevatron, releases of the software framework were maintained on dedicated storage elements that were mounted on the experiments' respective dedicated computing clusters. As these dedicated resources are no longer maintained, CDF and D0 have migrated their software releases to CERN Virtual Machine File System (CVMFS) repositories <cit.>. As CVMFS has been widely adopted by current Fermilab experiments, this move allows for maintaining CDF and D0 software releases for the foreseeable future without a significant investment in dedicated resources. Furthermore, as the Fermilab-based CVMFS repositories are distributed to a variety of computing facilities away from Fermilab, this approach lends to CDF and D0 computing environments that exist on a variety of remote sites.
At the time of the Tevatron shutdown, both CDF and D0 were running software releases that, while operational on Scientific Linux 5 (“SL5”), depended on compatibility libraries that were built in previous OS releases dating back to Scientific Linux 3. The two experiments chose different strategies to ensure the functionality of their software releases throughout the data preservation period.
§.§ CDF software release preservation
At CDF, stable software releases were available under two flavors: one was used in reconstruction of collision data and analysis and another for Monte Carlo generation and simulation. To ensure the long-term viability of CDF analysis capability, the CDF software team chose to prepare brand new “legacy” releases of both flavors. These legacy releases were stripped of any long-obsolete packages that were no longer used for any analysis and also shed any compatibility libraries built prior to SL5. Once validated and distributed, older releases of CDF software which still depended on compatibility libraries were removed from the CVMFS repository. This meant that usage of CDF code for analysis on any centrally available resources was guaranteed to be fully buildable and executable on SL5. Furthermore, it meant a relatively simple process for further ensuring the legacy release is buildable and executable exclusively on Scientific Linux 6 (“SL6”), the target OS for R2DP. The total size of the CDF code base
in the legacy releases is 326 GB, which includes compiled code and most external dependencies.
§.§ D0 software release preservation
After careful study, the D0 software team chose to stay with its software releases that were current at the time of the Tevatron shutdown, but updated common tools where possible, and also made sure that 32-bit compatibility system libraries are installed on worker nodes at Fermilab, where D0 plans to run jobs throughout the R2DP project lifetime. If D0 physicists should wish to run analysis jobs outside of Fermilab in future years, they will need to ensure that 32-bit versions of system libraries such as GLIBC are available at any future remote sites. Resources at Fermilab, however, are sufficient to meet the projected demand over the project lifetime. Required pre-SL6 compatibility libraries can also be added to the CVMFS repository if needed in the future. The total size of the D0 software repository in CVMFS, including code base, executables, and external product dependencies, is currently 227 GB.
§ JOB SUBMISSION AND DATA MOVEMENT
§.§ Job submission
During Run II, CDF and D0 both had large dedicated analysis farms (CDFGrid and CAB, respectively) of several thousand CPU cores each. Since the end of the run, these resources have been steadily diminishing as older nodes are retired and some newer ones are repurposed. While the computing needs of the experiments have declined over the years (Fig. <ref>), preserving full analysis capability requires that both experiments have access to opportunistic resources and a way to submit jobs to them. The Fermilab General Purpose Grid (“GPGrid”), used by numerous other experiments based at Fermilab, is a natural choice for the Tevatron experiments. Both CDF and D0 have worked with the Fermilab Scientific Computing Division to add the ability to run their jobs on GPGrid, by adopting the Fermilab Jobsub product <cit.> used by other Fermilab experiments. Having users submit their analysis jobs via Jobsub solves the issue of long-term support, but introduces an additional complication for users who are unfamiliar with the new system, or who may return to do a Tevatron analysis many years from now and will not have time to learn an entirely new system. Thus, both CDF and D0 have implemented wrappers around the Jobsub tool that emulate job submission commands each experiment normally uses.
In the case of D0, users who wish to submit to GPGrid instead of CAB (which will be required once CAB is retired) can simply do so by adding an extra command line option. The D0 submission tools will then generate and issue the appropriate Jobsub commands without any direct user intervention. Users can switch to submitting jobs to GPGrid with a minimum amount of effort, and future analyzers will not need to spend time learning an entirely new system. We have successfully tested submission of all common job types (simulation, reconstruction, user analysis) to GPGrid using the modified D0 submission tools.
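The wrapper internals are not spelled out here, but the idea is a thin translation layer. The Python sketch below is purely schematic: the opt-in flag, the legacy command name, and the jobsub_submit options shown are illustrative placeholders, not the actual interfaces of the experiments' tools or of Jobsub:

```python
import subprocess
import sys

# Schematic sketch of the wrapper idea described above: accept a legacy
# submission command line and, when the user opts in, translate it into a
# Jobsub invocation.  "--gpgrid", "legacy_cab_submit", and the jobsub_submit
# flags are hypothetical placeholders.

def submit(args: list[str]) -> None:
    if "--gpgrid" in args:                             # hypothetical opt-in
        args = [a for a in args if a != "--gpgrid"]
        cmd = ["jobsub_submit", "-G", "dzero"] + args  # placeholder flags
    else:
        cmd = ["legacy_cab_submit"] + args             # hypothetical legacy path
    print("would run:", " ".join(cmd))
    # subprocess.run(cmd, check=True)  # enable for real submission

if __name__ == "__main__":
    submit(sys.argv[1:])
```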
In the case of CDF, the existing CDFGrid gateway was retired with all remaining hardware (worker nodes) absorbed into GPGrid. Analysis and Monte Carlo generation jobs are now submitted entirely using Jobsub and a CDF-specific wrapper in front. This also allows for CDF analysis jobs to go to remote sites on the Open Science Grid, specifically to sites that previously had to operate CDF-specific gateways. These sites can now support CDF computing either opportunistically or via dedicated quotas without the need to support a separate gateway. This transition was completed in early 2015 and CDF computing use did not diminish in 2015 as compared to 2014 despite moving to an environment with no dedicated computing nodes. CDF physicists consumed over 5 million CPU hours on GPGrid in the twelve months following the transition.
§.§ Data management and file delivery
Both CDF and D0 use the Sequential Access to Metadata (SAM) service <cit.> for data handling. Older versions of SAM used Oracle backends with a CORBA-based communication infrastructure, while more recent versions use a PostgreSQL-based backend, with communication over http. Throughout Run II CDF and D0 used CORBA-based versions of SAM, but can now also communicate with their existing Oracle databases using the new http interfaces as part of R2DP, eliminating the requirement of supporting the older CORBA-based communication interface through the life of the project.
While CDF and D0 code bases had to be modified to interface with these new communication interfaces, these
changes are transparent to the end user.
For D0, part of the SAM infrastructure included dedicated cache disks on the D0 cluster worker nodes that allowed for rapid staging of input files to jobs. As files were requested through SAM, they would be copied in from one of the cache disks if they were present. If they were not already on one of the cache disks, SAM would fetch them from tape. At its peak this cache space totaled approximately 1 PB, but it was available only to D0 and would have been too costly to maintain over the life of the data preservation project. D0 has therefore deployed a 100 TB dCache <cit.> instance for staging input files to worker nodes, as CDF and numerous other Fermilab experiments are already doing. The test results showed no degradation in performance relative to the dedicated SAM caches, and the D0 dCache instance has been in production for approximately two years.
As CDF was already using dCache for tape-backed caching, once the necessary code changes were made to use newer versions of SAM, data access continued to be possible with no hardware infrastructure changes needed.
§ DOCUMENTATION
Preserving the experiments' institutional knowledge is a critically important part of the project. Here we define this knowledge to be internal documentation and notes, presentations in meetings, informational web pages and tutorials, meeting agendas, and mailing list archives. The largest step in this part of the project was transferring each experiment's internal documents to long-term repositories. Both CDF and D0 have partnered with INSPIRE <cit.> to transfer their internal notes to experiment-specific accounts on INSPIRE.
For internal meeting agendas, D0 has moved to an Indico instance hosted by the Fermilab Scientific Computing Division, while CDF has virtualized their MySQL-based system. Fermilab will see to it that archives of each experiment's mailing lists are available through the life of the project, and Wiki/Twiki instances are being moved to static web pages to facilitate ease of movement to new servers if needed in the future. In addition to moving previous documentation to modern platforms, both experiments have written new documentation specifically detailing the infrastructure changes that the R2DP project has made, along with instructions for adapting
legacy workflows to the new systems.
§ SUMMARY
The Run II Data Preservation Project aims to enable full analysis capability for the CDF and D0 experiments through at least the year 2020. Both experiments have modernized
their software environments and job submission procedures in order to be able to run jobs in current operating systems and to take advantage of non-dedicated computing resources.
Wherever possible they have adopted elements of the computing infrastructure now in use by the majority of active Fermilab experiments. They have also made significant efforts at preserving
institutional knowledge by moving documentation to long-term archives.
Materials costs for the project were dominated by media for migrating the data to T10K tapes, while the vast majority of the project's other costs came from salaries. The implementation phase of the project is complete and both experiments are actively using the R2DP infrastructure
for their current and future work.
§ ACKNOWLEDGMENTS
The authors thank the computing and software teams of both CDF and D0 as well as the staff of the Fermilab Computing Sector that made this project possible.
Fermilab is operated by Fermi Research Alliance, LLC under Contract number DE-AC02-07CH11359 with the United States Department of Energy. The work was supported in part by
Data And Software Preservation for Open Science (DASPOS) (NSF-PHY-1247316).
§ REFERENCES
99
CDFdet1 D. Acosta et al., “Measurement of the J/ψ meson and b-hadron production cross sections in pp̅ collisions at √(s)=1960 GeV", Phys. Rev. D 71, 032001 (2005).
CDFdet2 A. Abulencia et al., “Measurements of inclusive W and Z cross sections in pp̅ collisions at √(s)=1.96 TeV", Journal of Physics G: Nuclear and Particle Physics, 34, Number 12 (2007).
D0det V. M. Abazov et al., “The Upgraded D0 Detector", Nucl. Instrum. Methods in Phys. Res. Sect. A 565, 463 (2006).
lto The Linear Tape Open Consortium, http://www.lto.org.
t10k Oracle, "Storagetek T10000 data cartridge family data sheet", http://www.oracle.com/us/products/servers-storage/storage/tape-storage/storagetek-t10000-t2-cartridge-296699.pdf.
cnaf S. Amerio et al., “The Long Term Data Preservation (LTDP) project at INFN CNAF: CDF use case,” J. Phys. Conf. Ser. 608 (2015) no.1, 012012.
cvmfs P. Buncic et al., “CernVM - a virtual appliance for LHC applications.” Proceedings of the XII. International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT08), Erice, 2008 PoS(ACAT08)012
jobsub Dennis Box et al., “Progress on the FabrIc for Frontier Experiments Project at Fermilab”, Journal of Physics: Conference Series, 664 062040 (2015).
sam R. A. Illingworth, “A data handling system for modern and future Fermilab experiments”, Journal of Physics: Conference Series, 513, 032045 (2014).
dcache M. Ernst et al., “dCache, a distributed data storage caching system”, In Computing in High Energy Physics 2001 (CHEP 2001), Beijing, China.
inspire http://inspirehep.net.
http://arxiv.org/abs/1701.07937v2 | 20170127040649 | Homotopies for Free! | ["Taichi Uemura"] | cs.LO | ["cs.LO", "math.LO"]
We show “free theorems” in the style of Wadler
for polymorphic functions in homotopy type theory
as consequences of the abstraction theorem.
As an application, it follows that
every space defined as a higher inductive type
has the same homotopy groups as some type of polymorphic functions
defined without univalence or higher inductive types.
§ INTRODUCTION
Given a closed term of a type of polymorphic functions
defined in homotopy type theory <cit.>,
we can derive a theorem that it satisfies.
For example,
let t be a closed term of type
t : ∏_X : 𝒰∏_x : X x = x → x = x.
Then we have a theorem
∏_X, X' : 𝒰∏_f : X → X'∏_x : X∏_p : x = x t(fp) = f(tp)
in homotopy type theory, in the sense that
there is a closed term of this type.
Such theorems are “free theorems” in the style of Wadler <cit.>
for homotopy type theory.
Original free theorems for polymorphic type theory
are consequences of relational parametricity <cit.>
and have a lot of applications including
short cut fusion <cit.>,
non-definability of polymorphic equality <cit.>,
and encoding initial algebras and final coalgebras
in pure polymorphic lambda calculus <cit.>.
Recently relational parametricity and free theorems for dependent type theory
have been studied by several authors.
Atkey et al. <cit.>
constructed relationally parametric models of Martin-Löf type theory
and proved a simple free theorem
and the existence of initial algebras for indexed functors.
Takeuti <cit.> studied relational parametricity for the lambda cube
and proved adjoint functor theorem internally.
Bernardy et al. <cit.>
studied relational parametricity for pure type systems
and free theorems for dependently typed functions.
In this paper we show free theorems specific to homotopy type theory
such as the example given in the first paragraph
where the type ∏_X : 𝒰∏_x : X x = x → x = x
seems to be trivial without homotopy-theoretic interpretation.
A difference between free theorems for homotopy type theory
and original free theorems for polymorphic type theory is that
in homotopy type theory they are represented by homotopies
instead of equalities.
This difference causes some problems
related to proof-relevance and higher dimensional homotopies.
One approach to these problems is higher dimensional parametricity
<cit.>
and to state free theorems as coherent homotopies.
Both in <cit.> and <cit.>,
the target languages are polymorphic lambda calculus
which does not have higher dimensional structures.
On the other hand, our target language, homotopy type theory,
has already higher dimensional structures,
and thus ordinary free theorems for higher dimensional types work well.
To explain this, let us see an example.
Consider a canonical embedding
i : A → ∏_X : 𝒰 (A → X) → X
i ≡ λa. λ(X, g). ga
for a base type A : 𝒰.
In polymorphic type theory
it follows from a free theorem that i is an isomorphism.
In homotopy type theory
an immediate consequence of a free theorem is the fact that
i is 0-connected, that is, it induces
a bijection between the sets of connected components.
A 0-connected map is far from an isomorphism.
However, for each n ≥ 1 and a : A,
it follows from a free theorem for the type
∏_X : 𝒰∏_g : A → X Ω^n(X, ga)
that i induces a 0-connected map
Ω^n(i) : Ω^n(A, a) → Ω^n(∏_X : 𝒰 (A → X) → X, ia)
between the n-th loop spaces.
Therefore we conclude that i is ∞-connected, that is,
it induces a bijection between the n-th homotopy groups
for each n ≥ 0.
Hence the types A and ∏_X : 𝒰 (A → X) → X
are equivalent from a homotopical point of view.
For a concrete (higher) inductive type A,
the type ∏_X : 𝒰 (A → X) → X
is equivalent to a type definable in Martin-Löf type theory <cit.>
without univalence or higher inductive types.
For example,
(∏_X : 𝒰 (𝕊^n → X) → X) ≃
(∏_X : 𝒰∏_x : X Ω^n(X, x) → X)
where 𝕊^n is the n-dimensional sphere.
The right hand side of this equivalence
is the Church encoding of the n-sphere,
proposed by Shulman[<https://homotopytypetheory.org/2011/04/25/higher-inductive-types-via-impredicative-polymorphism/>].
It follows from the previous paragraph that
every space can be identified via an ∞-connected map
with its Church encoding.
The Church encoding of a space suggests that
generators of its homotopy groups are definable without univalence or higher inductive types.
For example, the generator of π_3(𝕊^2)
can be defined as a polymorphic function of type
∏_X : 𝒰∏_x : X Ω^2(X, x) → Ω^3(X, x).
We can say that the univalence axiom and higher inductive types
are used only for proving that π_3(𝕊^2) is the integers,
but are not needed for creating the generator of π_3(𝕊^2).
Free theorems for general open terms in homotopy type theory
should follow from relational parametricity,
but it seems to be hard to axiomatize relational parametricity for homotopy type theory.
Thus we focus on free theorems for closed terms
as the first step to understanding relational parametricity
for homotopy type theory,
because free theorems for closed terms follow from Reynolds's abstraction theorem <cit.>
without any assumptions.
Informally, it says that terms evaluated under related environments yield related values.
We show the abstraction theorem for homotopy type theory
via a syntactic transformation of a term in homotopy type theory to another.
The key to prove the abstraction theorem is the fact that
binary type families in homotopy type theory form a model of homotopy type theory
which we call the relational model.
Then the abstraction theorem is the soundness of the interpretation
of types as binary type families.
There is a category-theoretic proof of this fact
using Shulman's inverse diagrams of type-theoretic fibration categories <cit.>
or fibred type-theoretic fibration categories introduced by the author <cit.>.
In this paper we give a syntactic proof
in order to make the paper self-contained.
We also show a new result on inductive data types:
for a type theory with indexed W-types,
originally called general trees <cit.>,
the relational model has indexed W-types.
The construction of indexed W-types in the relational model
is essentially the same as that of W-types in the gluing construction
for a cartesian functor between Π-pretoposes
<cit.>.
The study of relational parametricity via syntactic transformations is not new.
Abadi et al. <cit.> and
Plotkin and Abadi <cit.>
introduced logic for parametricity
where the abstraction theorem is the soundness of
the interpretations of terms in System F as proofs in their logic.
Wadler pointed out that
Reynolds's abstraction theorem can be seen as
a transformation of a term in System F to a proof in second-order logic
<cit.>.
Takeuti <cit.> and
Bernardy et al. <cit.>
studied relational parametricity for the lambda cube
and pure type systems respectively
via syntactic transformations of a term in one type theory to another.
Since homotopy type theory, even Martin-Löf type theory,
is powerful enough to express predicates
(reflective in terms of <cit.>),
we can transform a term in homotopy type theory
to another in homotopy type theory itself.
Our contribution is to give transformations of
identity types, the univalence axiom and some higher inductive types.
Organization.
We begin in Section <ref>
by recalling some important types and functions in homotopy type theory.
Section <ref> and <ref>
are the core of this paper.
In Section <ref>
we explain what the abstraction theorem is.
In Section <ref>,
we give some free theorems as corollaries of the abstraction theorem.
In Section <ref>,
we discuss Church encodings of higher inductive types
and give the generator of π_3(𝕊^2)
as a polymorphic function.
We prove the abstraction theorem
in Section <ref>, <ref> and <ref>.
§ PRELIMINARIES ON HOMOTOPY TYPE THEORY
We recall some types and functions in homotopy type theory
which are used in Section <ref> and <ref>.
See <cit.> for details.
The key idea of homotopy type theory is to identify types as spaces,
elements as points and equalities as paths.
We think of an identity type x : A, y : A ⊢ x = y
as the space of paths from x to y.
Under this identification,
reflexivity, transitivity and symmetry
correspond to the constant path refl_x : x = x,
path concatenation (-) ⋅ (-) : x = y → y = z → x = z
and path inversion (-)^-1 : x = y → y = x, respectively.
A function f : A → B acts on paths as
ap(f, -) : x = y → fx = fy for all x, y : A,
and we will often write ap(f, p) as fp for p : x = y.
Corresponding to indiscernibility of identicals,
there is a function transport^C(p, -) : C(x) → C(y)
for x : A ⊢ C(x), x, y : A and p : x = y.
Since the symbol “=” is reserved for identity types,
we write a ≡ b when expressions a and b
are judgmentally or definitionally equal.
A function f : A → B also acts on higher dimensional paths.
For x_1, y_1 : A, x_2, y_2 : x_1 = y_1,
…, x_n, y_n : x_{n-1} = y_{n-1},
we define ap_n(f, -) : x_n = y_n → ap_{n-1}(f, x_n) = ap_{n-1}(f, y_n)
as ap_0(f, z) ≡ fz and
ap_n(f, p) ≡ ap(ap_{n-1}(f, -), p).
We often write ap_n(f, p) as fp.
There are compositions of higher dimensional paths.
For x_0, y_0 : A, x_1, y_1 : x_0 = y_0,
…, x_n, y_n : x_{n-1} = y_{n-1}, σ : x_n = y_n,
p : x' = x_0 and q : y_0 = y',
we set p ⋅_l σ ≡ ap_n(λs. p ⋅ s, σ)
and σ ⋅_r q ≡ ap_n(λs. s ⋅ q, σ).
These operations ⋅_l and ⋅_r are called whiskering.
A pointed type is a pair (A, a)
of type A and its inhabitant a : A called a base point.
For a pointed type (A, a) and a natural number n ≥ 0,
the n-th loop space Ω^n(A, a) of A at a
is a pointed type defined inductively as
Ω^0(A, a) ≡ (A, a) and
Ω^{n+1}(A, a) ≡ Ω^n(a = a, refl_a).
Write refl^n_a : Ω^n(A, a) for the base point of Ω^n(A, a).
A function f : A → B acts on loop spaces
as ap_n(f, -) : Ω^n(A, a) → Ω^n(B, fa).
A path space of a product space A × B
is a product of path spaces:
(⟨a, b⟩ = ⟨a', b'⟩) ≃ (a = a') × (b = b')
for a, a' : A and b, b' : B.
We think of a pair ⟨p, q⟩ of paths p : a = a' and q : b = b'
as a path ⟨a, b⟩ = ⟨a', b'⟩ in A × B.
Similarly, we regard a pair ⟨l, k⟩
of n-loops l : Ω^n(A, a) and k : Ω^n(B, b)
as an n-loop in A × B at ⟨a, b⟩.
Let x : A ⊢ B(x) be a type family.
For a path p : a = a' in A and points b : B(a) and b' : B(a'),
the path space from b to b' over p,
written b =_p b', is the type transport^B(p, b) = b'.
For an n-loop l : Ω^n(A, a) and a point b : B(a),
the n-th loop space of B at b over l,
written Ω_l^n(B, b),
is the type ap_{n-1}(λp. transport^B(p, b), l) = refl^{n-1}_b.
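For readers who wish to experiment with these definitions, they can be sketched in Lean 4. Since Lean's built-in Eq lives in Prop and is proof-irrelevant, the sketch below uses a custom proof-relevant Path type in Type; this encoding choice is ours, not part of the paper's type theory:

```lean
-- Proof-relevant paths in Type (Lean's Eq is in Prop, hence unsuitable here).
inductive Path {A : Type} : A → A → Type
  | refl (a : A) : Path a a

-- Path concatenation and the action on paths (ap).
def Path.comp {A : Type} {x y z : A} : Path x y → Path y z → Path x z
  | p, .refl _ => p

def ap {A B : Type} (f : A → B) {x y : A} : Path x y → Path (f x) (f y)
  | .refl a => .refl (f a)

-- Pointed types and iterated loop spaces Ω^n.
def Pointed : Type 1 := (A : Type) × A

def loop : Pointed → Pointed
  | ⟨_, a⟩ => ⟨Path a a, .refl a⟩

def iterLoop : Nat → Pointed → Pointed
  | 0,     P => P
  | n + 1, P => iterLoop n (loop P)
```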
§ ABSTRACTION THEOREM EXPLAINED
The abstraction theorem for polymorphic type theory
is explained in terms of set-theoretic relations.
For dependent type theory,
we use type-theoretic relations, namely binary type families.
For a binary type family x : A, x' : A' ⊢ Ā(x, x'),
a family on Ā is a triple of
x : A ⊢ B(x), x' : A' ⊢ B'(x') and
x : A, x' : A', x̄ : Ā(x, x'), y : B(x), y' : B'(x')
⊢ B̄(x̄, y, y'),
written x̄ : Ā ⊢ B̄(x̄)
in short.
Note that B̄ depends on x : A and x' : A' implicitly.
Let x̄ : Ā ⊢ B̄(x̄)
be a family on a binary type family Ā.
The dependent product of B̄ over Ā is the binary type family
f : ∏_x : A B(x), f' : ∏_x' : A' B'(x') ⊢ ∏_x : A∏_x' : A'∏_x̄ : Ā(x, x') B̄(x̄, fx, f'x').
The dependent sum of B̄ over Ā is the binary type family
z : ∑_x : A B(x), z' : ∑_x' : A' B'(x') ⊢ ∑_x̄ : Ā(pr_1(z), pr_1(z')) B̄(x̄, pr_2(z), pr_2(z')).
For a binary type family Ā,
the path space of Ā is the family
x_0 : A, x'_0 : A', x̄_0 : Ā(x_0, x'_0),
x_1 : A, x'_1 : A', x̄_1 : Ā(x_1, x'_1),
p : x_0 = x_1, p' : x'_0 = x'_1 ⊢ x̄_0 =_{p, p'} x̄_1
on two copies of Ā.
A universe of binary type families is a binary type family
X : 𝒰, X' : 𝒰 ⊢ X → X' → 𝒰
where ⊢ 𝒰 is a universe of types.
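These relational constructions can be written down directly; the following minimal Lean 4 sketch formalizes binary type families and the dependent product of relations (the names Rel and piRel are ours):

```lean
-- Binary type families ("relations") and their dependent product, Lean 4.
universe u

def Rel (A A' : Type u) : Type (u + 1) := A → A' → Type u

-- Dependent product of a family of relations RB over a relation RA:
-- related functions send related inputs to related outputs.
def piRel {A A' : Type u} (RA : Rel A A')
    {B : A → Type u} {B' : A' → Type u}
    (RB : ∀ a a', RA a a' → B a → B' a' → Type u) :
    Rel ((a : A) → B a) ((a' : A') → B' a') :=
  fun f f' => ∀ a a' (r : RA a a'), RB a a' r (f a) (f' a')
```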
For each type constant C (for example, 𝟘, 𝟙, 𝟚, ℕ,
𝕊^1, 𝕊^2 and so on),
we associate it with a binary type family
c : C, c' : C ⊢ c = c'.
Then, by induction, we can associate each type family x : X ⊢ A(x)
with a family of binary type families
x : X, x' : X, x̄ : X̄(x, x'), a : A(x),
a' : A(x') ⊢ Ā(x̄, a, a').
Now the abstraction theorem can be described as follows:
[Abstraction Theorem]
For each term x : X ⊢ t(x) : A(x),
there exists a term
x : X, x' : X, x̄ : X̄(x, x') ⊢ t̂(x̄) : Ā(x̄, t(x), t(x')).
In particular, for each closed term ⊢ t : A,
there exists a closed term
⊢ t̂ : Ā(t, t).
§ ABSTRACTION THEOREM APPLIED
§.§ Concatenation of a Loop
Let t be a closed term of type
t : ∏_X : ∏_x : Xx = x → x = x.
One might guess that t is an iterated concatenation of a loop,
that is,
t(p) ≡p … p_n times
for a fixed integer n,
where negative n means (-n) times concatenation of the inversion of p.
In fact any closed term of this type
must be homotopic to some iterated concatenation of a loop,
but one can derive a theorem without this fact.
We show that the type
∏_X, X' : ∏_f : X → X'∏_x : X∏_p : x = xt(fp) = f(tp)
is inhabited.
From the abstraction theorem we have a closed term
t̂ : ∏_(X : , X' : , X : X → X' →)∏_(x : X, x' : X', x : X(x, x'))∏_(p : x = x, p' : x' = x', p : x =_p, p'x)x =_tp, tp'x.
For a function f : X → X' of -small types,
let X(x, x') ≡ fx = x'.
One can prove that, for p : x = x, p' : x' = x' and x : fx = x',
the type x =_p, p'x is equivalent to
the type x p' = fp x.
Letting x' ≡ fx and x≡_fx,
we have an inhabitant of the type
∏_p : x = x, p' : fx = fx, p̄ : p' = fp
tp' = f(tp).
Finally we set p' ≡ fp and p̄ ≡ refl_fp.
Then we have an inhabitant of the type
∏_x : X ∏_p : x = x t(fp) = f(tp).
§.§ Loop Operations
The example in Section <ref> can be generalized.
Let n and k be natural numbers
and t a closed term of type
t : ∏_X : 𝒰 ∏_x : X Ω^n(X, x) → Ω^k(X, x).
The example in Section <ref>
is the case when n = k = 1.
This type represents the k-th loop space of the n-sphere,
discussed in Section <ref>,
and thus we cannot simply guess which function t is.
However, we can derive a theorem about t.
We show that the type
∏_X, X' : 𝒰 ∏_f : X → X' ∏_x : X ∏_p : Ω^n(X, x)
t(fp) = f(tp)
is inhabited.
From the abstraction theorem we have a closed term
t̂ : ∏_(X : 𝒰, X' : 𝒰, X̄ : X → X' → 𝒰) ∏_(x : X, x' : X', x̄ : X̄(x, x')) ∏_(p : Ω^n(X, x), p' : Ω^n(X', x'), p̄ : Ω^n_p, p'(X̄, x̄)) Ω^k_tp, tp'(X̄, x̄).
For a function f : X → X' of 𝒰-small types,
let X̄(x, x') ≡ fx = x'.
One can prove that, for p : Ω^n(X, x),
p' : Ω^n(X', x') and x̄ : fx = x',
the type Ω^n_p, p'(X̄, x̄) is equivalent to
the type x̄ ·_l p' = fp ·_r x̄.
Letting x' ≡ fx, x̄ ≡ refl_fx,
p' ≡ fp and p̄ ≡ refl_fp,
we have an inhabitant of the type
∏_x : X ∏_p : Ω^n(X, x) t(fp) = f(tp).
§.§ Action on Loops
Let t be a closed term of type
t : ∏_X, Y : 𝒰 ∏_f : X → Y ∏_x : X x = x → fx = fx.
One might guess that t(f, p) ≡ ap(f, p).
Of course, t could be another function, for example,
t(f, p) ≡ ap(f, p · p).
However, intuitively only ap(f, p)
is an interesting function of this type,
because ap(f, p · p) is a composition of ap(f, p)
and a loop concatenation, and the latter does not use f.
Let us formulate this intuition.
We show that the type
∏_X, Y : 𝒰 ∏_f : X → Y ∏_x : X ∏_p : x = x
t(f, p) = f(t(𝕀_X, p))
is inhabited.
This means that, for any t,
t(f, -) is a composition of (f, -)
after a loop operation t(𝕀_X, -) : x = x → x = x.
From the abstraction theorem we have an inhabitant of the type
∏_X', X, Y', Y : 𝒰 ∏_g : X' → X ∏_h : Y' → Y ∏_f' : X' → Y' ∏_f : X → Y ∏_σ : ∏_x' : X' f(gx') = h(f'x')
∏_x' : X'∏_p' : x' = x'
t(f, gp') · σ(x') = σ(x') · h(t(f', p')).
Letting X' ≡ Y' ≡ X, h ≡ f,
f' ≡ g ≡ 𝕀_X and σ ≡ λ x.refl_fx,
we have
∏_X, Y : 𝒰 ∏_f : X → Y ∏_x : X ∏_p : x = x
t(f, p) = f(t(𝕀_X, p)).
Note that, from Section <ref>,
we also have f(t(𝕀_X, p)) = t(𝕀_Y, fp).
§.§ An Embedding
For a base type A : 𝒰 such as ℕ and 𝕊^1,
let Ā ≡ ∏_X : 𝒰 (A → X) → X.
There are back and forth functions between the types
A and Ā as follows:
i : A → ∏_X : 𝒰 (A → X) → X
i ≡ λ(a : A).λ(X : 𝒰, g : A → X).ga
j : (∏_X : 𝒰 (A → X) → X) → A
j ≡ λ(φ : ∏_X : 𝒰 (A → X) → X).φ_A(𝕀_A).
Clearly j ∘ i ≡𝕀,
but i ∘ j ≡𝕀 or even i ∘ j ∼𝕀
does not hold.
However, given a closed term t : A,
we can construct a closed term of type
∏_X : 𝒰 ∏_g : A → X (i(jt))g = tg.
To show this, let t : A be a closed term.
From the abstraction theorem we can get a closed term of type
∏_X_0, X : 𝒰 ∏_f : X_0 → X ∏_g : A → X_0 f(tg) = t(f ∘ g).
Taking X_0≡ A and g ≡𝕀_A,
we get an inhabitant of the type f(t(𝕀_A)) = t(f).
Now, for X : 𝒰 and g : A → X, we have
(i(jt))g = g(jt) = g(t(𝕀_A)) = t(g).
§.§ An ∞-Connected Map
For a type A, let π_0(A) be the set of
homotopy equivalence classes of closed terms of A
which we call the 0-th homotopy group of A.
For a point a : A and a natural number n,
the n-th homotopy group of A at a, written π_n(A, a),
is the set π_0(Ω^n(A, a)).
From Section <ref>,
we get a bijection π_0(A) → π_0(Ā).
We can extend this result to all homotopy groups.
We show that i : A → Ā is ∞-connected
in the sense that it induces a bijection between the n-th homotopy groups
for each n ≥ 0.
For a pointed type (A, a) : 𝒰_•,
Ā has a base point
ā ≡ λ(X : 𝒰, g : A → X).ga.
The maps i and j preserve base points,
and thus they induce maps
Ω^n(i) : Ω^n(A, a) → Ω^n(Ā, ā)
and Ω^n(j) : Ω^n(Ā, ā) → Ω^n(A, a).
Identifying Ω^n(Ā, ā) with
∏_X : 𝒰 ∏_g : A → X Ω^n(X, ga)
by functional extensionality,
we get:
Ω^n(i) : Ω^n(A, a) → ∏_X : 𝒰 ∏_g : A → X Ω^n(X, ga), Ω^n(i) =
λ p.λ(X, g).gp
Ω^n(j) : (∏_X : 𝒰 ∏_g : A → X Ω^n(X, ga)) → Ω^n(A, a), Ω^n(j) =
λ φ.φ_A(𝕀_A).
Then we have Ω^n(j) ∘ Ω^n(i) = 𝕀_Ω^n(A, a).
For a closed term t : Ω^n(Ā, ā),
we can construct a closed term of type
∏_X : 𝒰 ∏_g : A → X
(Ω^n(i)(Ω^n(j)t))g = tg
in a similar way to Section <ref>.
Thus we conclude that the map i induces a bijection
π_n(A, a) → π_n(Ā, ā)
for each n ≥ 0.
§.§ Free Theorems for Open Terms
In the reflexive graph model of Atkey et al. <cit.>,
free theorems can be derived not only for closed terms
but also for open terms.
In our framework,
we cannot derive free theorems for open terms in general.
Indeed, the negation of the free theorem for some open term is provable.
Assuming the law of excluded middle for propositions in a universe 𝒰,
one can construct a function t : ∏_X : 𝒰 X → X
such that t_𝟚(0_𝟚) = 1_𝟚 and t_𝟚(1_𝟚) = 0_𝟚
where 0_𝟚 : 𝟚 and 1_𝟚 : 𝟚 are the constructors
of the two point type 𝟚 <cit.>.
Note that recently Booij et al. have pointed out that, conversely,
the existence of a non-trivial polymorphic endofunction
implies the law of excluded middle <cit.>.
Since the law of excluded middle for propositions in 𝒰
can be expressed by some closed type LEM_𝒰,
t can be regarded as an open term
l : LEM_𝒰 ⊢ t : ∏_X : 𝒰 X → X.
For this open term the free theorem
∏_X, X' : 𝒰 ∏_f : X → X' ∏_x : X t(fx) = f(tx)
fails by taking f ≡ λ x.0_𝟚 : 𝟚 → 𝟚.
Since the law of excluded middle is consistent,
the free theorem for t is not provable.
§ CHURCH ENCODINGS OF SPACES
In Section <ref>,
for each type A : 𝒰 we have an ∞-connected map
i : A → Ā
where Ā ≡ ∏_X : 𝒰 (A → X) → X.
For a concrete (higher) inductive type A,
using the recursion principle of A
we have the Church encoding of A
in Martin-Löf type theory without univalence or higher inductive types.
If A has a base point a_0 : A,
the Church encoding of A is of the form
∏_X : 𝒰 ∏_x : X F_A(X, x) → X
and its n-th loop space is
∏_X : 𝒰 ∏_x : X F_A(X, x) → Ω^n(X, x),
where F_A(X, x) is a type defined from X and x
using only dependent products, dependent sums and path spaces.
The Church encoding of a type A
suggests that we can construct generators of homotopy groups of A
without univalence or higher inductive types,
although we need univalence and higher inductive types
to prove that they are actually generators of homotopy groups.
In this section we describe Church encodings
of some higher inductive types.
We also define the Hopf map
and give a generator of the third homotopy group of the 2-sphere
as a polymorphic function.
§.§ The Circle
The circle 𝕊^1 is a higher inductive type
generated by a point constructor base_1 : 𝕊^1
and a path constructor loop_1 : base_1 = base_1.
It has a recursion principle
(𝕊^1 → X) ≃ ∑_x : X x = x.
Therefore
∏_X : 𝒰 (𝕊^1 → X) → X ≃ ∏_X : 𝒰 ∏_x : X x = x → X.
The constructors are defined as polymorphic functions
base_1 ≡ λ (X, x, p).x :
∏_X : 𝒰 ∏_x : X x = x → X
loop_1 ≡ λ (X, x, p).p :
∏_X : 𝒰 ∏_x : X x = x → x = x
§.§ Spheres
For a natural number n,
the n-sphere 𝕊^n
is a higher inductive type generated by
a point constructor base_n : 𝕊^n
and a path constructor loop_n : Ω^n(𝕊^n, base_n).
We have
∏_X : 𝒰 (𝕊^n → X) → X ≃ ∏_X : 𝒰 ∏_x : X Ω^n(X, x) → X.
The constructors are defined as polymorphic functions
base_n ≡ λ (X, x, p).x :
∏_X : 𝒰 ∏_x : X Ω^n(X, x) → X
loop_n ≡ λ (X, x, p).p :
∏_X : 𝒰 ∏_x : X Ω^n(X, x) → Ω^n(X, x)
The k-th loop space of this Church encoding of 𝕊^n
is ∏_X : 𝒰 ∏_x : X Ω^n(X, x) → Ω^k(X, x)
studied in Section <ref>.
§.§ Suspensions
For a type A, the suspension A of A
is a higher inductive type generated by
point constructors : A and : A
and a path constructor : A → =.
We have
A≃∏_X : ∏_x, y : X(A → x = y) → X.
§.§ Joins
For types A and B,
the join A ∗ B of A and B
is a higher inductive type generated by
point constructors inl : A → A ∗ B and inr : B → A ∗ B
and a path constructor glue : ∏_a : A ∏_b : B inl(a) = inr(b).
We have
∏_X : 𝒰 (A ∗ B → X) → X ≃ ∏_X : 𝒰 ∏_s : A → X ∏_t : B → X
(∏_a : A ∏_b : B sa = tb) → X.
§.§ The Hopf Map
The Hopf map is a function 𝕊^3 → 𝕊^2
whose fiber at the base point is 𝕊^1.
Identifying 𝕊^3 ≃ 𝕊^1 ∗ 𝕊^1
and 𝕊^2 ≃ Σ𝕊^1,
the Hopf map h : 𝕊^1 ∗ 𝕊^1 → Σ𝕊^1
is defined as
h(inl(x)) ≡ 𝗇,
h(inr(y)) ≡ 𝗌,
and h(glue(x, y)) = merid(h_1(x, y)),
where
h_1 : 𝕊^1 → 𝕊^1 → 𝕊^1
is a function defined as
h_1(base_1, y) ≡ y,
h_1(loop_1, base_1) = loop_1^-1,
and h_1(loop_1, loop_1) is given by a proof of
loop_1^-1 · loop_1 = refl_base_1 = loop_1 · loop_1^-1.
We define the Hopf map as a polymorphic function.
Observe that
𝕊^1 ∗ 𝕊^1 → X
≃ ∑_f, g : 𝕊^1 → X ∏_x, y : 𝕊^1 fx = gy
≃ ∑_x : X ∑_l : x = x ∑_y : X ∑_k : y = y ∑_p : x = y ∑_α : l · p = p ∑_β : p · k = p (α ·_r k) · β = (l ·_l β) · α
and
Σ𝕊^1 → X
≃ ∑_x, y : X 𝕊^1 → x = y
≃ ∑_x, y : X ∑_p : x = y p = p.
Then we define
h : (∏_X : 𝒰 ∏_x, y : X ∏_l : x = x ∏_k : y = y ∏_p : x = y ∏_α : l · p = p ∏_β : p · k = p
(α ·_r k) · β = (l ·_l β) · α → X)
→ (∏_X : 𝒰 ∏_x, y : X ∏_p : x = y p = p → X)
h(f) ≡ λ (X, x, y, p, α).
f(X, x, y, refl_x, refl_y, p, α^-1, α, α̌)
where α̌ is a proof of α^-1 · α = refl_p = α · α^-1.
§.§ A Generator of π_3(𝕊^2)
The Hopf map is a generator of π_3(𝕊^2).
We describe the generator as a polymorphic function.
First we define a 3-loop of 𝕊^1 ∗ 𝕊^1
as a polymorphic function.
We have to construct a function
l_3 : ∏_X : 𝒰 ∏_x, y : X ∏_l : x = x ∏_k : y = y ∏_p : x = y ∏_α : l · p = p ∏_β : p · k = p
(α ·_r k) · β = (l ·_l β) · α → Ω^3(X, x).
By path induction on p,
we can assume y ≡ x and p ≡ refl_x.
Then the goal becomes
l'_3 : ∏_X : 𝒰 ∏_x : X ∏_l, k : x = x ∏_α : l = refl_x ∏_β : k = refl_x
(α ·_r k) · β = (l ·_l β) · α → Ω^3(X, x).
For σ : (α ·_r k) · β = (l ·_l β) · α,
define l'_3(σ) : refl^2_x = refl^2_x
as the following concatenation:
refl^2_x
= (β^-1 · (α ·_r k)^-1)
· ((α ·_r k) · β)
=_E, σ (α^-1 · (l ·_l β)^-1)
· ((l ·_l β) · α)
= refl^2_x
where E ≡ E(α, β) : β^-1 · (α ·_r k)^-1
= α^-1 · (l ·_l β)^-1
is the path described in Figure <ref>,
also defined as E(refl^2_x, refl^2_x) ≡ refl^3_x
by path induction on α and β.
Now we can define a 3-loop of Σ𝕊^1
in a similar way to the Hopf map:
c : ∏_X : 𝒰 ∏_x, y : X ∏_p : x = y p = p → Ω^3(X, x)
c ≡ λ (X, x, y, p, α).
l_3(X, x, y, refl_x, refl_y, p, α^-1, α, α̌).
We can also define it as an element of the 3-loop space of the Church encoding of 𝕊^2:
c : ∏_X : 𝒰 ∏_x : X Ω^2(X, x) → Ω^3(X, x)
c ≡ λ (X, x, α).
l_3(X, x, x, refl_x, refl_x, refl_x, α^-1, α, α̌).
In fact c(α) is the concatenation of paths
refl^2_x = α · α^-1 =_E α^-1 · α = refl^2_x
where E comes from the commutativity of concatenation of higher loops.
Here is a natural question.
Is any generator of a homotopy group of a space
definable as a polymorphic function without univalence or higher inductive types?
This question is important because
it measures the power of univalence and higher inductive types.
If the answer to the question is yes,
we can say, informally, that univalence and higher inductive types
give proofs that some elements are different
but do not generate new elements,
although there remains the question of
which terms we should think of as proofs,
because in dependent type theory elements and proofs are not distinguished.
§ HOMOTOPY TYPE THEORY
In the rest of this paper we prove the abstraction theorem.
We begin with a quick review of homotopy type theory <cit.>.
In this paper we consider Martin-Löf's dependent type theory
T with countably many univalent universes
𝒰_0 : 𝒰_1 : 𝒰_2 : …,
an empty type 𝟘 : 𝒰_0,
a one point type 𝟙 : 𝒰_0,
a two point type 𝟚 : 𝒰_0,
indexed W-types W[t, A, B]
and n-spheres 𝕊^n : 𝒰_0.
The existence of ordinary W-types is not enough to construct W-types in the relational model,
and we require indexed W-types.
In extensional type theory
the existence of W-types implies the existence of indexed W-types
<cit.>,
but in intensional type theory this does not hold
due to the lack of equalizers.
Also to construct general higher inductive types in the relational model
we need some class of indexed higher inductive types,
but we do not know such a class of higher inductive types.
Therefore we deal with only constant higher inductive types 𝕊^n.
For a type family i : I, x : A(i) ⊢ B(x)
and a function i : I, x : A(i), y : B(x) ⊢ t(y) : I,
the W-type W[t, A, B] of B on A indexed over t
is an inductive type family i : I ⊢ W[t, A, B](i)
with a single constructor
i : I, a : A(i), f : ∏_y : B(a) W[t, A, B](ty)
⊢ sup_W[t, A, B](a, f) : W[t, A, B](i).
We often omit the subscript of the constructor sup_W[t, A, B] and
write it simply as sup.
The indexed W-type has an induction principle:
given a type family i : I, w : W[t, A, B](i) ⊢ D(w)
and a term
i : I, a : A(i), f : ∏_y : B(a) W[t, A, B](ty),
g : ∏_y : B(a) D(fy) ⊢ d(a, f, g) : D(sup(a, f)),
we get a term
i : I, w : W[t, A, B](i) ⊢ ind_W[t, A, B]^D(d, w) : D(w)
together with a computational rule
ind_W[t, A, B]^D(d, sup(a, f)) ≡ d(a, f, λ(y : B(a)).ind_W[t, A, B]^D(d, fy)).
There are projections
i : I ⊢ pr_1 : W[t, A, B](i) → A(i)
i : I ⊢ pr_1(sup(a, f)) ≡ a
i : I ⊢ pr_2 : ∏_w : W[t, A, B](i) ∏_y : B(pr_1(w)) W[t, A, B](ty)
i : I ⊢ pr_2(sup(a, f)) ≡ f.
Some important types are definable from these types.
Ordinary W-types W_x : A B(x) are W-types indexed over
the unique function B → 𝟙.
The type of natural numbers is defined as
ℕ ≡ W_x : 𝟚 rec_𝟚(𝟘, 𝟙, x),
where rec_𝟚(𝟘, 𝟙) : 𝟚 → 𝒰_0 is a function defined by recursion
as rec_𝟚(𝟘, 𝟙, 0_𝟚) ≡ 𝟘 and
rec_𝟚(𝟘, 𝟙, 1_𝟚) ≡ 𝟙.
A coproduct A + B of two types A, B : 𝒰 is defined as
A + B ≡ ∑_x : 𝟚 rec_𝟚(A, B, x).
For a function f : A → B, define
isequiv(f) ≡ (∑_g : B → A ∏_a : A g(fa) = a)
× (∑_h : B → A ∏_b : B f(hb) = b)
and (A ≃ B) ≡ ∑_f : A → B isequiv(f).
For types A, B : 𝒰, define a function idtoeqv_A, B : A = B → A ≃ B by path induction
as idtoeqv(refl_A) is the identity function on A.
The univalence axiom is the axiom that idtoeqv is an equivalence:
ua_𝒰 : ∏_A, B : 𝒰 isequiv(idtoeqv_A, B).
§ RELATIONAL MODEL
The key to proving the abstraction theorem is the fact that
binary type families x : A, x' : A' ⊢ Ā(x, x')
form a model ℛ(T) of homotopy type theory
which we call the relational model.
Families x̄ : Ā ⊢ B̄(x̄)
of binary type families
are defined in Section <ref>.
A term of a family x̄ : Ā ⊢ B̄(x̄)
is a triple of terms
x : A ⊢ b(x) : B(x), x' : A' ⊢ b'(x') : B'(x') and
x : A, x' : A', x̄ : Ā(x, x') ⊢ b̄(x̄) : B̄(x̄, b, b'),
written x̄ : Ā ⊢ b̄(x̄) : B̄(x̄)
in short.
In Section <ref>,
we defined dependent products, dependent sums,
path spaces and universes of binary type families.
It remains to construct other types and check the univalence axiom.
For a type constant C ≡ 𝟘, 𝟙, 𝟚, 𝕊^n,
the binary type family c : C, c' : C ⊢ c = c'
has the same constructors and
satisfies the same induction principle as those of C.
For example, c : 𝟚, c' : 𝟚 ⊢ c = c'
has two constructors (0_𝟚, 0_𝟚, refl_0_𝟚)
and (1_𝟚, 1_𝟚, refl_1_𝟚).
To see the induction principle of the two point type,
let c : 𝟚, c' : 𝟚, c̄ : c = c', x : A(c), x' : A'(c') ⊢ Ā(c̄, x, x')
be a family on c = c' and
(a_0 : A(0_𝟚), a'_0 : A'(0_𝟚), ā_0 : Ā(refl_0_𝟚, a_0, a'_0))
and (a_1 : A(1_𝟚), a'_1 : A'(1_𝟚), ā_1 : Ā(refl_1_𝟚, a_1, a'_1))
be elements of Ā.
We have to construct terms
c : 𝟚 ⊢ f(c) : A(c),
c' : 𝟚 ⊢ f'(c') : A'(c') and
c : 𝟚, c' : 𝟚, c̄ : c = c' ⊢ f̄(c̄) : Ā(c̄, f(c), f'(c'))
such that f(0_𝟚) ≡ a_0, f(1_𝟚) ≡ a_1,
f'(0_𝟚) ≡ a'_0, f'(1_𝟚) ≡ a'_1,
f̄(refl_0_𝟚) ≡ ā_0 and
f̄(refl_1_𝟚) ≡ ā_1.
Define f and f' by 𝟚-induction.
By path induction,
to construct f̄ it suffices to give a term
c : 𝟚 ⊢ f̄(refl_c) : Ā(refl_c, f(c), f'(c)),
which is given by 𝟚-induction.
To define indexed W-types,
suppose that we get a family of binary type families
ī : Ī, x̄ : Ā(ī)
⊢ B̄(x̄)
and a term
ī : Ī, x̄ : Ā(ī),
ȳ : B̄(x̄) ⊢ t̄(ȳ) : Ī.
First we have indexed W-types
i : I ⊢ W[t, A, B](i)
and i' : I' ⊢ W[t', A', B'](i')
which we refer to as W(i) and W'(i') respectively.
We have to construct a type
i : I, i' : I', ī : Ī(i, i'),
w : W(i), w' : W'(i') ⊢ W̄(ī, w, w').
Let J ≡ ∑_i : I ∑_i' : I' Ī(i, i') × W(i) × W'(i').
Define type families j : J ⊢ Ǎ(j) and
j : J, x : Ǎ(j) ⊢ B̌(x) as
Ǎ(ī, w, w') ≡ Ā(ī, pr_1(w), pr_1(w'))
B̌((ī, w, w'), a) ≡ ∑_b : B(pr_1(w)) ∑_b' : B'(pr_1(w')) B̄(a, b, b').
Define a term j : J, x : Ǎ(j), y : B̌(x) ⊢ ť(y) : J as
ť((ī, w, w'), x, (b, b', b̄)) ≡
(t(b), t'(b'), t̄(b̄), pr_2(w)(b), pr_2(w')(b')).
Then we set
W̄(ī, w, w') ≡ W[ť, Ǎ, B̌](ī, w, w').
We have a constructor
i : I, i' : I', ī : Ī(i, i'),
a : A(i), a' : A'(i'), ā : Ā(ī, a, a'),
f : ∏_y : B(a) W(t(y)),
f' : ∏_y' : B'(a') W'(t'(y')),
f̄ : ∏_y : B(a) ∏_y' : B'(a') ∏_ȳ : B̄(ā, y, y') W̄(t̄(ȳ), f(y), f'(y'))
⊢ sup_W[ť, Ǎ, B̌](ā, f̄) :
W̄(ī, sup_W[t, A, B](a, f), sup_W[t', A', B'](a', f')).
One can check the induction principle of the indexed W-type.
Note that we have formalized, in Agda[<http://wiki.portal.chalmers.se/agda/>],
the construction of indexed W-types
in the relational model[<https://gist.github.com/uemurax/040d22a4c037f5323ed26fbee6420544>].
We give a sketch of a proof that
a universe X : 𝒰, X' : 𝒰 ⊢ X → X' → 𝒰
of binary type families satisfies the univalence axiom.
Recall that 𝒰 satisfies the univalence axiom if and only if
the canonical function
e : 𝒰 → ∑_X, X' : 𝒰 X ≃ X'
e(X) ≡ (X, X, 𝕀_X)
is an equivalence.
Observe that in ℛ(T)
a function f̄ : Ā → B̄
is an equivalence if and only if
f : A → B and f' : A' → B' are equivalences and
f̄(x, x') : Ā(x, x') → B̄(fx, f'x')
is an equivalence for all x : A and x' : A'.
Therefore, to show that X → X' → 𝒰 is univalent,
it suffices to see that
ē : (X → X' → 𝒰) → ∑_X̄, Ȳ : X → X' → 𝒰 ∏_x : X ∏_x' : X' X̄(x, x') ≃ Ȳ(x, x')
is an equivalence for all X, X' : 𝒰.
There is an equivalence
(∑_X̄, Ȳ : X → X' → 𝒰 ∏_x : X ∏_x' : X' X̄(x, x') ≃ Ȳ(x, x'))
≃ (X → X' → ∑_X, Y : 𝒰 X ≃ Y),
and ē is homotopic to
(X → X' → e) : (X → X' → 𝒰) →
(X → X' → ∑_X, Y : 𝒰 X ≃ Y)
along this equivalence.
The function (X → X' → e) is an equivalence
by the univalence of 𝒰.
§ THE ABSTRACTION THEOREM
In Section <ref>
we see that the binary type families form a model ℛ(T)
of Martin-Löf's dependent type theory
with countably many univalent universes, an empty type,
a one point type, a two point type, indexed W-types and n-spheres.
Thus we have an interpretation ⟦-⟧ : T → ℛ(T).
⟦-⟧ takes a type judgment x : X ⊢ A(x)
to a type judgment
x : X, x' : X, x̄ : X̄(x, x'),
a : A(x), a' : A(x') ⊢ Ā(x̄, a, a')
and a term judgment x : X ⊢ t(x) : A(x)
to a term judgment
x : X, x' : X, x̄ : X̄(x, x') ⊢ t̄(x̄) : Ā(x̄, t(x), t(x')).
Now the abstraction theorem is proved by taking t̂ ≡ t̄.
For each term x : X ⊢ t(x) : A(x),
there exists a term
x : X, x' : X', x̄ : X̄(x, x') ⊢ t̂(x̄) : Ā(x̄, t(x), t(x')).
| Given a closed term of a type of polymorphic functions
defined in homotopy type theory <cit.>,
we can derive a theorem that it satisfies.
For example,
let t be a closed term of type
t : ∏_X : 𝒰 ∏_x : X x = x → x = x.
Then we have a theorem
∏_X, X' : 𝒰 ∏_f : X → X' ∏_x : X ∏_p : x = x t(fp) = f(tp)
in homotopy type theory, in the sense that
there is a closed term of this type.
Such theorems are “free theorems” in the style of Wadler <cit.>
for homotopy type theory.
Original free theorems for polymorphic type theory
are consequences of relational parametricity <cit.>
and have a lot of applications including
short cut fusion <cit.>,
non-definability of polymorphic equality <cit.>,
and encoding initial algebras and final coalgebras
in pure polymorphic lambda calculus <cit.>.
Recently relational parametricity and free theorems for dependent type theory
have been studied by several authors.
Atkey et al. <cit.>
constructed relationally parametric models of Martin-Löf type theory
and proved a simple free theorem
and the existence of initial algebras for indexed functors.
Takeuti <cit.> studied relational parametricity for the lambda cube
and proved adjoint functor theorem internally.
Bernardy et al. <cit.>
studied relational parametricity for pure type systems
and free theorems for dependently typed functions.
In this paper we show free theorems specific to homotopy type theory
such as the example given in the first paragraph
where the type ∏_X : 𝒰 ∏_x : X x = x → x = x
seems to be trivial without homotopy-theoretic interpretation.
A difference between free theorems for homotopy type theory
and original free theorems for polymorphic type theory is that
in homotopy type theory they are represented by homotopies
instead of equalities.
This difference causes some problems
related to proof-relevance and higher dimensional homotopies.
One approach to these problems is higher dimensional parametricity
<cit.>,
which states free theorems as coherent homotopies.
Both in <cit.> and <cit.>,
the target languages are polymorphic lambda calculus
which does not have higher dimensional structures.
On the other hand, our target language, homotopy type theory,
has already higher dimensional structures,
and thus ordinary free theorems for higher dimensional types work well.
To explain this, let us see an example.
Consider a canonical embedding
i : A → ∏_X : 𝒰 (A → X) → X
i ≡ λ a. λ (X, g).ga
for a base type A : 𝒰.
In polymorphic type theory
it follows from a free theorem that i is an isomorphism.
In homotopy type theory
an immediate consequence of a free theorem is the fact that
i is 0-connected, that is, it induces
a bijection between the sets of connected components.
A 0-connected map is far from an isomorphism.
However, for each n ≥ 1 and a : A,
it follows from a free theorem for the type
∏_X : 𝒰 ∏_g : A → X Ω^n(X, ga)
that i induces a 0-connected map
Ω^n(i) : Ω^n(A, a) → Ω^n(∏_X : 𝒰 (A → X) → X, ia)
between the n-th loop spaces.
Therefore we conclude that i is ∞-connected, that is,
it induces a bijection between the n-th homotopy groups
for each n ≥ 0.
Hence the types A and ∏_X : 𝒰 (A → X) → X
are equivalent from homotopical point of view.
For a concrete (higher) inductive type A,
the type ∏_X : 𝒰 (A → X) → X
is equivalent to a type definable in Martin-Löf type theory <cit.>
without univalence or higher inductive types.
For example,
(∏_X : 𝒰 (𝕊^n → X) → X) ≃
(∏_X : 𝒰 ∏_x : X Ω^n(X, x) → X)
where 𝕊^n is the n-dimensional sphere.
The right hand side of this equivalence
is the Church encoding of the n-sphere,
proposed by Shulman.
It follows from the previous paragraph that
every space can be identified via an ∞-connected map
with its Church encoding.
The Church encoding of a space suggests that
generators of its homotopy groups are definable without univalence or higher inductive types.
For example the generator of π_3(𝕊^2)
can be defined as a polymorphic function of type
∏_X : 𝒰 ∏_x : X Ω^2(X, x) → Ω^3(X, x).
We can say that the univalence axiom and higher inductive types
are used only for proving that π_3(𝕊^2) is the integers
but are not needed for creating the generator of π_3(𝕊^2).
Free theorems for general open terms in homotopy type theory
should follow from relational parametricity,
but it seems to be hard to axiomatize relational parametricity for homotopy type theory.
Thus we focus on free theorems for closed terms
as the first step to understanding relational parametricity
for homotopy type theory,
because free theorems for closed terms follow from Reynolds's abstraction theorem <cit.>
without any assumptions.
Informally, it says that terms evaluated under related environments yield related values.
We show the abstraction theorem for homotopy type theory
via a syntactic transformation of a term in homotopy type theory to another.
The key to proving the abstraction theorem is the fact that
binary type families in homotopy type theory form a model of homotopy type theory
which we call the relational model.
Then the abstraction theorem is the soundness of the interpretation
of types as binary type families.
There is a category-theoretic proof of this fact
using Shulman's inverse diagrams of type-theoretic fibration categories <cit.>
or fibred type-theoretic fibration categories introduced by the author <cit.>.
In this paper we give a syntactic proof
in order to make the paper self-contained.
We also show a new result on inductive data types:
for a type theory with indexed W-types,
originally called general trees <cit.>,
the relational model has indexed W-types.
The construction of indexed W-types in the relational model
is essentially the same as that of W-types in the gluing construction
for a cartesian functor between Π-pretoposes
<cit.>.
The study of relational parametricity via syntactic transformations is not new.
Abadi et al. <cit.> and
Plotkin and Abadi <cit.>
introduced logic for parametricity
where the abstraction theorem is the soundness of
the interpretations of terms in System F as proofs in their logic.
Wadler pointed out that
Reynolds's abstraction theorem can be seen as
a transformation of a term in System F to a proof in second-order logic
<cit.>.
Takeuti <cit.> and
Bernardy et al. <cit.>
studied relational parametricity for the lambda cube
and pure type systems respectively
via syntactic transformations of a term in one type theory to another.
Since homotopy type theory, even Martin-Löf type theory,
is powerful enough to express predicates
(reflective in terms of <cit.>),
we can transform a term in homotopy type theory
to another in homotopy type theory itself.
Our contribution is to give transformations of
identity types, the univalence axiom and some higher inductive types.
Organization.
We begin in Section <ref>
by recalling some important types and functions in homotopy type theory.
Sections <ref> and <ref>
are the core of this paper.
In Section <ref>
we explain what the abstraction theorem is.
In Section <ref>,
we give some free theorems as corollaries of the abstraction theorem.
In Section <ref>,
we discuss Church encodings of higher inductive types
and give the generator of π_3(^2)
as a polymorphic function.
We prove the abstraction theorem
in Sections <ref>, <ref> and <ref>.
http://arxiv.org/abs/1701.07618v1 | 20170126085802 | PT-symmetric scattering in flow duct acoustics | [
"Yves Aurégan",
"Vincent Pagneux"
] | physics.class-ph | [
"physics.class-ph"
] |
[email protected]
Laboratoire d'Acoustique de l'Université du Maine, UMR CNRS 6613
Av. O Messiaen, F-72085 LE MANS Cedex 9, France
[email protected]
Laboratoire d'Acoustique de l'Université du Maine, UMR CNRS 6613
Av. O Messiaen, F-72085 LE MANS Cedex 9, France
We show theoretically and experimentally that the propagation of an acoustic wave in an airflow duct going through a pair of diaphragms,
with equivalent amount of mean-flow-induced effective gain and loss,
displays all the features of a parity-time (𝒫𝒯) symmetric system.
Using a scattering matrix formalism,
we observe experimentally the properties which reflect the 𝒫𝒯-symmetry of the scattering acoustical system:
the existence of a spontaneous symmetry breaking with symmetry-broken pairs of scattering eigenstates showing amplification and reduction,
and the existence of points with unidirectional invisibility.
11.30.Er, 43.20.Mv, 43.20.Rz, 68.35.Iv
𝒫𝒯-symmetric scattering in flow duct acoustics
Vincent Pagneux
===============================================
Hydrodynamic instability theory shows that flow can provide energy to small perturbations
<cit.>.
If, in addition, these perturbations are compressible, then both acoustic wave propagation and energy exchange
with the flow are possible, leading e.g. to the classical whistling phenomena <cit.>.
Thus, in the particular case of flow duct acoustics, the wave can obviously be convected but it also experiences gain or loss due to
interactions with the flow inhomogeneities <cit.>.
Consequently, propagation of acoustic waves in ducts with flow is a natural Non-Hermitian system where
loss and gain are available.
Non-Hermitian systems, where energy conservation is broken, lead to dynamics governed by evolution equations with
non-normal operators, where surprising phenomena can appear due to huge non-normality especially close to
exceptional points <cit.>.
The particular case of 𝒫𝒯-symmetry, where gain and loss are delicately balanced,
has attracted a lot of attention in the last two decades
<cit.>.
It opens the possibility to obtain purely real spectra from Non-Hermitian Hamiltonians,
as well as a spontaneous symmetry breaking where real eigenvalues coalesce at an exceptional point to become complex conjugate pair.
From a scattering point of view, another type of spontaneous symmetry breaking for 𝒫𝒯-symmetric systems has been theoretically proposed <cit.>.
It corresponds to the transition of norm-preserving scattering eigenstates, with unimodular eigenvalues, to symmetry broken pairs
of amplified and lossy scattering eigenstates, with associated pairs of scattering eigenvalues with
inverse moduli <cit.>.
It is to be noticed that this type of symmetry breaking is still waiting to be observed experimentally <cit.>.
Initiated in the domain of quantum mechanics, many works on 𝒫𝒯-symmetry
have displayed several intriguing effects such as
power oscillation <cit.>,
unidirectional transparency <cit.>,
single-mode laser <cit.>, spectral singularity and Coherent Perfect Absorber (CPA)-Laser
<cit.>
or enhanced sensitivity <cit.>.
A majority of the studies has been conducted in optics with
some attempts in acoustics where the difficulty to obtain gain has been recognized.
Actually, whilst losses can be easily introduced <cit.>, the gain for acoustic waves has until now been obtained owing
to active electric amplification <cit.>.
In this letter, we report the experimental realization of a purely mechanical scattering 𝒫𝒯-symmetric system
for the propagation of acoustic waves in a waveguide.
The loss and the gain are produced by
two localized scattering units made of diaphragms, one associated with loss and the other
associated with gain, see Fig. <ref>(a).
In our experiments, the Mach number of the flow is small enough
(𝑀-.20𝑒𝑚 𝑎≃ 0.01) such that the effect of convection on the sound wave can
be neglected, preserving the reciprocity property, and the only effect of the flow is located at the two diaphragms,
characterized by normalized complex impedances C_1 and C_2.
The balance of gain and loss is realized by finely tuning the flow rate and the geometry of each diaphragm, ensuring
a 𝒫𝒯-symmetric system that corresponds to C_1=C_2^* (note that the real parts of the two impedances have to be equal to
get the parity symmetry).
Measurements of the scattering matrix components allow us to demonstrate unidirectional invisibility and to verify the 𝒫𝒯-symmetry properties.
Besides, by changing the distance between the scatterers, the spontaneous symmetry breaking of the scattering matrix is observed with the transition from
exact-𝒫𝒯-symmetric phase to 𝒫𝒯-broken phase.
In the broken phase, with the experimental gain available, the scattering eigenstates can be simultaneously fourfold amplified or reduced,
and we show that this effect might be enhanced by considering a finite periodic collection of the set of two diaphragms, leading to CPA-Laser points.
System description and 1D model.—
The description of the set-up is shown in Fig. <ref>.
We consider an acoustic waveguide where only plane waves can propagate (k A < 1.841 <cit.>, where A is the tube radius, k=ω/c_0 is the wavenumber, ω is the angular frequency and c_0 is the sound velocity). The propagation of the acoustic pressure p is then governed by the 1D Helmholtz equation.
Two diaphragms are inserted into the tube and are separated with a distance D (Fig. <ref>(a)).
As their thicknesses t are small (k t ≪ 1), the acoustic velocity is conserved while the pressure jumps between the two sides of the discontinuities. Thus the propagation is governed by
p” + k^2 p = 0,
with the point scatterer jump conditions at the diaphragms:
[p']_x=± D/2 = 0 ,
[p]_x=- D/2 = C_1/k p' and [p]_x=+D/2 = C_2/k p' ,
where [·]_x denotes the jump across a diaphragm and the prime denotes the derivative with respect to x.
The real part of the dimensionless parameters C_1,2 is associated to reactive effects while its imaginary part is linked to the dissipative or gain effects.
We have thus a very simple 1D reciprocal wave model with two point scatterers at x=± D/2 (Fig. <ref>(b)).
The effect of the flow on the acoustic propagation is entirely contained in the complex impedances
C_1 and C_2, which reflect the mean-flow-induced effective gain and loss.
The system is 𝒫𝒯-symmetric if and only if the two impedances are complex conjugated: C_2=C_1^* <cit.>.
With the exp(-iω t ) convention, there is absorption if Im(C_i) > 0 and gain if Im(C_i) < 0.
The overall behavior of the acoustical system can be described by the transfer matrix 𝖬
( [ p_2^+; p_2^- ]) =
[ [ M_11 M_12; M_21 M_22 ]]
( [ p_1^+; p_1^- ])
where p^+_1,2 and p^-_1,2 are defined in Fig. <ref>(b).
After some algebra, the components of the overall transfer matrix are found to be:
M_11 = -isin( k D) C_1 C_2/2+e^i k D( 1+iC_1/2+iC_2/2)
M_12 = isin( k D) C_1 C_2/2 -e^i k DiC_1/2 -e^-i k DiC_2/2
M_21 = -isin( k D) C_1 C_2/2+e^-i k DiC_1/2+e^i k DiC_2/2
M_22 = isin( k D) C_1 C_2/2+e^-i k D( 1-iC_1/2-iC_2/2)
where in the case of a 𝒫𝒯 symmetric system <cit.>: M_11=M^*_22 and
Re[M_12] = Re[M_21] = 0. The transmission and reflection coefficients for waves coming from left and right are defined by
t_L = det(𝖬)/M_22 , r_R = M_12/M_22 ,
r_L = - M_21/M_22 , t_R = 1/M_22 .
Due to reciprocity we have det(𝖬)=1 and then t=t_L=t_R.
As discussed in detail in <cit.>, by permutation of the outgoing waves, two different scattering matrices
with different sets of eigenvalues can be defined, leading to distinct symmetry breaking.
These two scattering matrices are
𝖲_𝗋 = [ r_L t; t r_R ] and 𝖲_𝗍 = [ t r_L; r_R t ]
where [ p_1^-; p_2^+ ] = 𝖲_𝗋 [ p_1^+; p_2^- ] and 𝖲_𝗍 = 𝖲_𝗋 σ_𝗑,
with σ_𝗑 one of the Pauli matrices.
The eigenvalues of 𝖲_𝗋 and 𝖲_𝗍 may each display an exact and a broken phase, but the symmetry-breaking points are not the same.
In this paper, we have chosen to consider both 𝖲_𝗋 and 𝖲_𝗍 and the different phase transitions they imply.
When computing the scattering eigenvalues, it is useful to recall the 𝒫𝒯-symmetry conservation relations <cit.>, which can be written for instance as 𝖲_𝗍^* = 𝖲_𝗍^-1 and lead to
r^*_L r_R = 1-|t|^2
r_L t^* + r^*_L t = 0
r_R t^* + r^*_R t = 0
The eigenvalues of the scattering matrix 𝖲_𝗍 are given by λ_1,2 = t ± √(r_R r_L) = t (1 ± √(1-|t|^-2)). Then, if |t|<1, the modulus of each eigenvalue is equal to 1. The case |t|=1 corresponds to the symmetry-breaking point, and |t|>1 corresponds to the 𝒫𝒯-broken phase.
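To make the unimodularity explicit (a one-line check added here): for |t| < 1 one has 1-|t|^-2 < 0, so λ_1,2 = t (1 ± i√(|t|^-2-1)) and |λ_1,2|^2 = |t|^2 (1 + |t|^-2 - 1) = 1.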
The eigenvalues of the other scattering matrix 𝖲_𝗋 are given by s_1,2 = (r_R + r_L ± √(Δ))/2 where Δ = (r_R - r_L)^2 + 4t^2.
The symmetry-breaking condition can be written Δ = 0, which leads to r_R - r_L = ±2it. In terms of the transfer matrix coefficients, it is equivalent to M_12 + M_21 = ±2i or Im(C_1) sin(kD) = ±1.
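The algebra above is compact enough to check numerically. The following Python sketch (our own illustration, not part of the original study; all names and the sampling of kD are ours) builds 𝖬 from the expressions for M_11, …, M_22 above with a conjugate pair C_2 = C_1^*, and verifies det 𝖬 = 1, the 𝒫𝒯 relations M_11 = M_22^* and Re M_12 = Re M_21 = 0, and the unimodularity of the 𝖲_𝗍 eigenvalues whenever |t| < 1:

    import numpy as np

    def transfer_matrix(C1, C2, kD):
        # Transfer matrix of the two-diaphragm cell, from the M_ij above.
        s, e = np.sin(kD), np.exp(1j * kD)
        M11 = -1j * s * C1 * C2 / 2 + e * (1 + 1j * C1 / 2 + 1j * C2 / 2)
        M12 = 1j * s * C1 * C2 / 2 - e * 1j * C1 / 2 - 1j * C2 / (2 * e)
        M21 = -1j * s * C1 * C2 / 2 + 1j * C1 / (2 * e) + e * 1j * C2 / 2
        M22 = 1j * s * C1 * C2 / 2 + (1 - 1j * C1 / 2 - 1j * C2 / 2) / e
        return np.array([[M11, M12], [M21, M22]])

    C1 = 1.83 + 1.36j          # lossy diaphragm (measured value quoted below)
    C2 = np.conj(C1)           # gain diaphragm: PT symmetry requires C2 = C1*
    for kD in np.linspace(0.5, 6.0, 12):
        M = transfer_matrix(C1, C2, kD)
        assert abs(np.linalg.det(M) - 1) < 1e-12        # reciprocity
        assert abs(M[0, 0] - np.conj(M[1, 1])) < 1e-12  # M11 = M22*
        assert abs(M[0, 1].real) + abs(M[1, 0].real) < 1e-12
        t, rL, rR = 1 / M[1, 1], -M[1, 0] / M[1, 1], M[0, 1] / M[1, 1]
        lam = np.linalg.eigvals(np.array([[t, rL], [rR, t]]))  # S_t spectrum
        print(f"kD = {kD:4.2f}  |t| = {abs(t):5.3f}  |lambda| = {np.abs(lam)}")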
Experimental set-up.—
As described in Fig. <ref>, the 𝒫𝒯 symmetric system is mounted in a rigid circular duct between two measurement sections, upstream and downstream.
Each measurement section consists in a hard walled steel duct (diameter 30 mm) where two microphones are mounted.
Two acoustic sources on both sides of the system give two different acoustic states and the four elements of the scattering matrix (transmission and reflection coefficient on both directions) for plane waves can be evaluated.
A more detailed description of the measurement technique can be found in <cit.>.
The desired gain scatterer is realized by a finely designed diaphragm submitted to a steady flow. In this geometry, a shear layer is formed on its upstream edge and the flow is contracted into a jet with an area smaller than the hole of the diaphragm, see Fig. <ref>(c).
This shear layer is very sensitive to any perturbations like an oscillation in the velocity due to the acoustic wave.
The shear layer convects and amplifies these perturbations (see the marked zone in Fig. <ref>(c)), and a strong coupling between the acoustics and the flow occurs when the acoustical period is of the order of the time taken by the perturbations to go from the upstream edge of the diaphragm to the exit of the diaphragm.
This corresponds to a Strouhal number of the order of S_h=f t/U_d ∼ 0.2 <cit.>
where f is the frequency of the acoustic perturbation, t is the thickness of the diaphragm and U_d is the mean velocity in the diaphragm U_d=U_0 (A/a)^2 with U_0 the mean velocity in the duct
and a the radius of the diaphragm (Fig. <ref>(c)).
Eventually, this gain diaphragm has been chosen with an internal radius a = 10 mm and a thickness t = 5 mm (see Fig. <ref> and the inset in Fig. <ref>).
The other diaphragm, that has to be lossy, has been chosen with an internal radius a = 12 mm and a thickness t = 4.3 mm.
Two resistive metallic tissues have been glued to produce the dissipation by viscous and turbulent effects.
In a first step, the scattering coefficients of the two diaphragms have been measured separately,
allowing us to deduce the values of the impedance C_1,2.
These parameters,
which have to satisfy C_2=C_1^* for a 𝒫𝒯-symmetric system,
are plotted on Fig. <ref>. With the chosen geometry and flow parameters, it can be observed that there is
a frequency f_m where the desired equality (C_2=C_1^*) is achieved.
In a second step, the scattering matrix of the system composed by the two balanced diaphragms is measured.
All the subsequently reported measurements are made at the frequency f_m = 1920 Hz and at the Mach number Ma = 0.01, for which C_2=C_1^*= 1.83 - 1.36i, allowing the system to be 𝒫𝒯 symmetric.
In order to be able to observe the symmetry breaking, the distance between the two diaphragms D is varied
from 312 mm to 417 mm by inserting 22 rigid metallic tubes of different lengths.
The minimal distance is chosen to minimize the hydrodynamical interactions between the two diaphragms.
The maximal D is chosen to have measurement points spanning over half a wavelength at the measurement frequency, with
a value of kD/2π approximately in the range 1.7 – 2.4.
Results.—
The measured transmission and reflection coefficients are displayed in Fig. <ref>(a).
They are compared to the theoretical values obtained by using the measured value of C_1=C^*_2 and the 1D modeling of Eqs. (<ref>).
The reflections from left r_L (impinging on the loss) and right r_R (impinging on the gain) appear deeply asymmetric, with two points
with |t|=1 and r_R=0 or r_L=0.
These two points correspond to the unidirectional transparency phenomenon where
the wave passes unreflected with no amplitude change through the scatterers from one side, and is strongly reflected from the other side.
In order to verify experimentally the 𝒫𝒯 symmetry of the system, in Fig. <ref>(b), we plot the 2-norm of
the matrix 𝖲_𝗍𝖲_𝗍^* - 𝖨, corresponding to the deviation from the 𝒫𝒯-symmetry conservation relations in
Eqs. (<ref>)-(<ref>).
For comparison the norm of the matrix 𝖲_𝗍 ^t𝖲_𝗍^* - 𝖨 which represents the deviation to the energy conservation is also displayed.
It appears that 𝖲_𝗍𝖲_𝗍^* - 𝖨 is nearly equal to zero in the whole range of parameters, which unambiguously demonstrates
that the system is 𝒫𝒯 symmetric; meanwhile 𝖲_𝗍 ^t 𝖲_𝗍^* - 𝖨 can take large values, confirming
that our system strongly violates conservation of energy.
It can be noticed that for kD multiple of π, the system is simultaneously 𝒫𝒯-symmetric and conservative;
it can be verified (see Eqs. <ref>) that in these cases the scattering is only sensitive to the real part of the impedances C_1 and C_2
thus ignoring the effect of gain and loss.
By varying the length of the duct between the two diaphragms, we can also inspect the spontaneous symmetry breaking
of the scattering matrix of the system <cit.>.
In Fig. <ref>, we show the eigenvalues of S_r and S_t that, since they are different,
lead to different symmetric and broken phases <cit.>.
We also represent the singular value decomposition (SVD) of the scattering matrices. The two singular values are identical for 𝖲_𝗍
and 𝖲_𝗋 (since 𝖲_𝗍 ^t 𝖲_𝗍^*=𝖲_𝗋 ^t 𝖲_𝗋^*) and correspond respectively to the maximum
and minimum outgoing wave for any incoming waves with unit flux; by definition they are upper and lower bounds on the moduli of the eigenvalues, and thus
must be different from one to allow the broken phase.
For each choice of scattering matrix, the experimental measurements, very close to the theoretical predictions,
display clear signatures of the spontaneous symmetry breaking
with different broken phases for 𝖲_𝗍 and 𝖲_𝗋.
In the symmetric phase the eigenvalues of the scattering matrices
remain on the unit circle in the complex plane, and the symmetry breaking corresponds to pairs of non-unimodular scattering eigenvalues
i.e. where the moduli are the inverse of each other and different from 1.
To the best of our knowledge, it is the first experimental demonstration of the symmetry breaking of the scattering matrix
for 𝒫𝒯-symmetric systems as proposed in <cit.>.
In the broken phase, a particularly interesting case is the CPA-Laser where one eigenvalue of the 𝖲 matrix goes to infinity (Laser) and the
other goes to zero (Absorber). From the experimental results of Fig. <ref> we can see that this Laser-Absorber point is not reached, because the
maximum eigenvalue corresponds to a 3.5-fold amplification.
From Eqs. (<ref>), it can be shown that the CPA-Laser condition can be obtained for larger values of the gain parameter (|Im(C_2)| ≃ 2.5), which cannot be achieved with our current experimental setup. Nevertheless, in Fig. <ref>, we show that a quasi-CPA-Laser could be theoretically achieved by taking
a finite periodic array of N cells of our 𝒫𝒯-symmetric system with a distance W between each cell (Fig. <ref>(a)).
The use of the 1D model shows that a state very close to the CPA-Laser can be obtained by just tuning the number of cells and the intercell dimensionless frequency kW (N=25 and kW/2π=2.1 in Fig. <ref>(b-c)).
Fig. <ref>(c) indicates that, by using the Bragg interference effect in the finite periodic case, it is possible to closely approach the CPA-Laser conditions.
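A rough numerical counterpart of this construction can be obtained by chaining cells (a sketch under assumptions: the cell-to-cell convention below and the value of kD are ours, and the exact geometry of the original computation is not reproduced). It reuses transfer_matrix() and C1 from the previous sketch:

    def array_matrix(C1, C2, kD, kW, N):
        # N identical PT cells separated by gaps of length W; the extra
        # leading gap only changes phases, not the eigenvalue moduli.
        P = np.diag([np.exp(1j * kW), np.exp(-1j * kW)])  # propagation over W
        return np.linalg.matrix_power(P @ transfer_matrix(C1, C2, kD), N)

    M = array_matrix(C1, np.conj(C1), kD=2.0, kW=2 * np.pi * 2.1, N=25)
    t, rL, rR = 1 / M[1, 1], -M[1, 0] / M[1, 1], M[0, 1] / M[1, 1]
    s = np.linalg.eigvals(np.array([[rL, t], [t, rR]]))  # S_r eigenvalues
    print(np.abs(s))  # near a CPA-Laser point: one huge, one tiny modulus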
Conclusion.—
Owing to vortex-sound interaction providing gain and loss in an acoustical system,
we have obtained the experimental signatures of the spontaneous 𝒫𝒯-symmetry breaking in scattering systems.
The scattering matrix eigenvalues can remain on the unit circle in the complex plane despite the Non-Hermiticity and the symmetry breaking results in pairs of scattering eigenvalues with inverse moduli. The unidirectional transparency has also been observed.
It is noteworthy that this mechanical gain medium does not need to be electronically powered and that this 𝒫𝒯-symmetric system is very simple to manufacture: one tube, two diaphragms and a small flow inside the tube.
Therefore, this kind of acoustic system
can be seen as a building block to study wave propagation with more complex
𝒫𝒯-symmetry (for instance in periodic systems),
and, more generally, we believe it provides an important connection
between hydrodynamic instability theory, acoustic wave propagation and Non-Hermitian physics.
99
schmid P.J. Schmid and D.S. Henningson,
Stability and transition in shear flows (Vol. 142, Springer Science & Business Media., 2012).
drazin
P.G. Drazin and W.H. Reid,
Hydrodynamic stability (Cambridge University Press, 2004).
mohring83
W. Mohring, E.A. Muller, and F. Obermeier,
Rev. Mod. Phys. 55, 707 (1983).
goldstein
Goldstein, M. E.
Aeroacoustics
(New York, McGraw-Hill International Book Co., 1976).
fabrikant
A.L. Fabrikant and Y.A. Stepanyants,
Propagation of waves in shear flows
(Vol. 18, Singapore: World Scientific, 1998).
howe
M.S. Howe,
J. Fluid Mech. 71, 625 (1975).
trefethen
L.N. Trefethen, and M. Embree, Spectra and pseudospectra: the behavior of nonnormal matrices and operators
(Princeton University Press, 2005).
moiseyev
N. Moiseyev, Non-Hermitian quantum mechanics
(Cambridge University Press, 2011).
krejcirik
D. Krejcirik, P. Siegl, M. Tater, and J. Viola,
J. Math. Phys. 56, 103513, (2015).
bender1998 C.M. Bender and S. Boettcher,
Phys. Rev. Lett. 80, 5243 (1998).
mostafa2002 A. Mostafazadeh, A.
J. Math. Phys. 43, 205 (2002).
bender2005 C.M. Bender,
Contemporary Physics 46, 277, (2005).
bender2007 C.M. Bender,
Rep. Prog. Phys. 70, 947 (2007).
guo2009
A. Guo, G.J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D.N. Christodoulides,
Phys. Rev. Lett. 103, 093902, (2009).
christo2010 C. E. Ruter , K.G. Makris, R. El-Ganainy, D.N. Christodoulides, M. Segev, and D. Kip,
Nat. Phys. 6, 192 (2010).
Christodoulides2007 R. El-Ganainy, K. G. Makris, D. N. Christodoulides, and Z.H. Musslimani,
Opt. Lett. 32, 2632 (2007).
Regensburger2013
A. Regensburger, M.-A. Miri, C. Bersch, J. Nager, G. Onishchukov, D. N. Christodoulides, and U. Peschel,
Phys. Rev. Lett. 110, 223902 (2013).
cerjan2016
A. Cerjan, A. Raman, and S. Fan,
Phys. Rev. Lett. 116, 203902 (2016).
christo2016
Z. Zhang, Y. Zhang, J. Sheng, L. Yang, M.-A. Miri, D. N. Christodoulides, B. He, Y. Zhang, and M. Xiao,
Phys. Rev. Lett. 117, 123601 (2016).
stone2011
Y. D. Chong, L. Ge, and A. D. Stone,
Phys. Rev. Lett. 106, 093902 (2011).
stone2012
L. Ge, Y. Chong, and A. D. Stone,
Phys. Rev. A 85, 023802 (2012).
stone2013 P. Ambichi, K. G. Makris, L. Ge, Y. Chong, A. D. Stone, and S. Rotter,
Phys. Rev. X 3, 041030 (2013).
diakonos2014
P.A. Kalozoumis, G. Pappas, F.K. Diakonos, and P. Schmelcher,
Phys. Rev. A 90, 043809 (2014).
zhu2016
B. Zhu, R. Lu, and S. Chen,
Phys. Rev. A 93, 032129 (2016).
lige2015 L. Ge, K.G. Makris, D.N. Christodoulides, and L. Feng,
Phys. Rev. A 92, 062135 (2015).
lige2016 L. Ge and L. Feng,
Phys. Rev. A 94, 043836 (2016).
Christodoulides2009
K. G. Makris, R. El-Ganainy, D. N. Christodoulides, and Z. H. Musslimani,
Phys. Rev. Lett. 100, 103904 (2008).
Regensburger2012
A. Regensburger, C. Bersch, M.-A. Miri, G. Onishchukov, D.N. Christodoulides, and U. Peschel,
Nature (London) 488, 167 (2012).
koslov2015
M. Kozlov and G.P. Tsironis,
New J. Phys. 17, 105004 (2015).
kottos2011
Z. Lin, H. Ramezani, T. Eichelkraut, T. Kottos, H. Cao and D.N. Christodoulides,
Phys. Rev. Lett. 106, 213901 (2011).
longhi2011
S. Longhi,
J. Phys. A 44, 485302 (2011).
longhi2014
S. Longhi,
J. Phys. A 47, 485302 (2014).
zhang2014c
L. Feng, Z. J. Wong, R.-M. Ma, Y. Wang, and X. Zhang, Science 346, 972 (2014).
hoadei2014
H. Hodaei, M.-A. Miri, M. Heinrich, D. N. Christodoulides, and M. Khajavikhan, Science 346, 975 (2014).
mostafazadeh2009
A. Mostafazadeh,
Phys. Rev. Lett. 102, 220402 (2009).
longhi2010 S. Longhi,
Phys. Rev. A 82, 031801(R) (2010).
zhang2014b
H. Ramezani, H.K. Li, Y. Wang, and X. Zhang,
Phys. Rev. Lett. 113, 263905 (2014).
feng2016
Z. J. Wong, Y.-L. Xu, J. Kim, K. OBrien, Y. Wang, L. Feng, and X. Zhang,
Nat. Photon. 10, 796 (2016).
nori2016
Z.-P. Liu, J. Zhang, S. K. Ozdemir, B. Peng, H. Jing, X.-Y. Lu, C.-W. Li, L. Yang, F. Nori, and Y.-X. Liu,
Phys. Rev. Lett. 117, 110802 (2016).
vrg2016a
V. Romero-Garcia, G. Theocharis, O. Richoux, A. Merkel, V. Tournat, and V. Pagneux,
Sci. Rep. 6, 19519 (2016).
vrg2016b
V. Romero-Garcia, G. Theocharis, O. Richoux, and V. Pagneux,
J. Acoust. Soc. Am. 139, 3395 (2016).
fleury2015
R. Fleury, D. Sounals, and A. Alu, Nat. Commun. 6, 5905 (2015).
ramezani2014
X. Zhu, H. Ramezani, C. Shi, J. Zhu, and X. Zhang,
Phys. Rev. X 4, 031042 (2014).
zhang2016
C. Shi, M. Dubois, Y. Chen, L. Cheng, H. Ramezani, Y. Wang, and X. Zhang,
Nature Commun. 7, 11110 (2016).
christensen2016
J. Christensen, M. Willatzen, V.R. Velasco, and M.- H. Lu,
Phys. Rev. Lett. 116, 207601 (2016).
pierce
A. D. Pierce,
Acoustics: An introduction to its physical principles and applications (Acoustical Society of America, 1994).
mostafazadeh2010
H. Mehri-Dehnavi, A. Mostafazadeh, and A. Batal,
J. Phys. A 43, 145301 (2010).
schomerus2013
H. Schomerus,
Phil. Trans. R. Soc. A 371, 20120194, (2013).
mostafazadeh2014
A. Mostafazadeh,
J. Phys. A 47, 505303 (2014).
testud2009
P. Testud, Y. Aurégan, P. Moussou, and A. Hirschberg,
J. Sound Vib. 325, 769 (2009).
lacombe2013
R. Lacombe, S. Foller, G. Jasor, W. Polifke, Y. Aurégan, and P. Moussou,
J. Sound Vib. 332, 5059 (2013).
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07966v1 | 20170127080918 | Surface Ocean Enstrophy, Kinetic Energy Fluxes and Spectra from Satellite Altimetry | [
"Hemant Khatri",
"Jai Sukhatme",
"Abhishek Kumar",
"Mahendra K. Verma"
] | physics.ao-ph | [
"physics.ao-ph"
] |
1. Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom.
2. Centre for Atmospheric and Oceanic Sciences, Indian Institute of Science,
Bangalore, India.
3. Divecha Centre for Climate Change, Indian Institute of Science, Bangalore, India.
4. Department of Physics, Indian Institute of Technology Kanpur, Kanpur, India.
Enstrophy, kinetic energy (KE) fluxes and spectra
are estimated in different parts of the mid-latitudinal oceans via
altimetry data.
To begin with, using geostrophic currents
derived from sea-surface height anomaly data provided by AVISO, we confirm the presence of
a strong inverse flux of surface KE at scales larger than approximately 250 km.
We then compute enstrophy fluxes to help develop a clearer picture of the underlying dynamics
at smaller scales,
i.e., 250 km to 100 km. Here, we observe a robust enstrophy cascading regime, wherein the enstrophy
shows a large
forward flux and the KE spectra follow an approximate
k^-3.5 power-law.
Given the rotational character of the flow,
not only is this large scale inverse KE and smaller scale forward enstrophy transfer scenario consistent with expectations from
idealized studies of three-dimensional rapidly-rotating and strongly-stratified
turbulence, it also agrees with detailed analyses of spectra and fluxes in the upper level midlatitude troposphere.
Decomposing the currents into components with greater and less than 100 day variability (referred to as seasonal and eddy,
respectively),
we find that, in addition to the eddy-eddy contribution, the seasonal-eddy and seasonal-seasonal fluxes play a significant role
in the inverse (forward) flux of KE (enstrophy) at scales
larger (smaller) than about 250 km.
Taken together, we suspect it is quite possible that,
from about 250 km to 100 km,
the altimeter is capturing the relatively steep portion of a surface oceanic counterpart of the upper tropospheric Nastrom-Gage spectrum.
Surface Ocean Enstrophy, Kinetic Energy Fluxes and Spectra from Satellite Altimetry
Hemant Khatri^1, Jai Sukhatme^2,3, Abhishek Kumar^4 and Mahendra K. Verma^4
===================================================================================
§ INTRODUCTION
For the past decade, satellite altimetry data has been used to estimate the interscale transfer and spectral
distribution of surface kinetic energy, henceforth abbreviated as KE, in the oceans.
Focusing on mesoscales in mid-latitudinal regions with high eddy activity (for example, near the Gulf Stream, the Kuroshio or the Agulhas currents),
the flux of KE is seen to be scale dependent. In particular, it has been noted that surface
KE tends to be transferred upscale for scales larger than the local deformation radius and downscale for smaller scales
<cit.>.
Mesoscale wavenumber spectra, on the other hand, are somewhat more diverse with spectral indices ranging from -5/3 to -3 depending on the
region in consideration
<cit.>. In fact, recent work using in-situ observations suggests that the scaling changes with
season, and is modulated by
the strength of eddy activity <cit.>.
Interpreting these results, both in terms of
the dynamics captured by the altimeter and, more
fundamentally, in terms of the nature of the actual dynamics of the upper ocean,
has been the subject of numerous recent investigations
<cit.>.
As baroclinic modes are intensified near the surface <cit.>,
it has been suggested that
altimetry data mostly represents
the first baroclinic mode in the ocean <cit.>. Indeed, energy is expected to concentrate in the first baroclinic mode
due to an inverse transfer
among the vertical modes <cit.>.
Given this, at first sight, the observed inverse transfer of surface KE at large scales was surprising, as classical quasigeostrophic (QG) baroclinic
turbulence anticipates a forward cascade in the baroclinic mode with energy flowing towards the deformation scale
<cit.>.
However, a careful examination of the energy budget in numerical simulations reveals that, while KE goes to larger scales,
the total energy in the first baroclinic mode does indeed flow downscale <cit.>; a feature that is comforting in the context
of traditional theory. In fact, along with the two-layer QG study of <cit.>,
inverse transfer of KE has also been documented in more comprehensive ocean models
<cit.>.
Noting the significance of surface buoyancy gradients (a fact missed in the aforementioned
first baroclinic mode framework), it has been suggested that, surface QG (SQG) dynamics is a more appropriate framework
for the oceans' surface <cit.>, and is reflected in the altimeter
measurements <cit.>. Even though the variance of buoyancy is transferred downscale <cit.>,
surface KE actually flows upscale in
SQG dynamics <cit.>, consistent with the flux calculations using altimetry data. Of course,
in the QG limit, a combination of surface
and interior modes is only natural. There are ongoing efforts to represent the variability of the surface ocean and interpret the
altimetry data in these terms
<cit.>.
Thus, much of the work using altimeter data has focussed on the inverse transfer of KE at relatively large scales. Here, we spend some time
on the larger scales, but
mainly concentrate on slightly smaller scales that are still properly resolved by the data. Specifically,
in addition to the KE flux, we also compute the spectral flux of enstrophy. In fact, this enstrophy flux sheds new light on the range
of scales that span approximately 250 km to 100 km. We find that the enstrophy flux is strong and directed to small scales over this range,
and is accompanied by a KE spectrum that follows an approximate k^-3.5 power-law.
This suggests that the rotational currents as derived from the altimeter
are in an enstrophy cascading regime from about 250 km to 100 km, and in an inverse KE transfer regime for scales greater than
about 250 km.
In terms of an eddy and slowly varying or seasonal decomposition (defined as smaller and larger than 100 day timescale variability, respectively), we observe that the
seasonal-seasonal and seasonal-eddy fluxes play a significant role in the KE (enstrophy) flux for scales
larger (smaller) that about 250 km.
Finally, we interpret these findings in the context of idealized
studies of three-dimensional, rapidly rotating and stratified turbulence and also compare them with detailed analyses of
midlatitude upper tropospheric
KE spectra and fluxes.
§ DATA ANALYSIS AND METHODOLOGY
§.§ Data Description
Gridded data of sea-surface height (SSH) anomalies (MADT delay time gridded data) from the AVISO project has been used in our analysis.
The data spanning 21 years (1993-2013) is available at a spatial resolution of
0.25^∘× 0.25^∘, thus scales smaller than approximately 50 km cannot be resolved.
We compute horizontal currents from the SSH anomaly data using geostrophic balance relations, and the
latitudinal variation of the Coriolis parameter has been included in the computations.
§.§ Geographical Locations
The five geographical regions chosen for analysis are located far from the equator so that
geostrophic balance is expected to be dominant.
As is seen in Figure <ref>, these regions represent relatively uninterrupted stretches in the Northern and Southern parts of the Pacific,
Atlantic, and the Southern Indian Ocean. In essence, we expect that this choice of domain minimizes boundary effects. Region 1 is the largest which is
about 4500 km long and 3500 km wide. Other regions are comparatively smaller.
§.§ Computation of Spectra and Fluxes
In this paper, we compute the KE spectrum and fluxes of KE and enstrophy. For this purpose, the data is
represented using Fourier modes in both spatial directions, i.e.,
U( k) = ∫∫ U(x,y) exp[- i(k_x x + k_y y)] dxdy,
where, k=(k_x, k_y) and U=(u,v) contains the zonal and meridional components of the velocity.
The two-dimensional (2D) Fourier transform technique requires the data to have uniform
grid spacing in both directions, so a linear interpolation scheme is employed to generate the velocity data on a rectangular grid
(the original data is on equidistant latitudes and longitudes).
In order to make the velocity field spatially periodic, the data is multiplied with a 2D bump function (exp[-0.01/(1-x^2) - 0.01/(1-y^2) + 0.02], where (x,y) ∈ [-1,1]^2) before performing a Fourier transform; this ensures that the velocity
smoothly goes to zero at the boundaries.
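As an illustration of this preprocessing step (a minimal Python sketch; the synthetic random fields and grid sizes are placeholders standing in for the gridded AVISO currents), the window and the transform can be written as:

    import numpy as np

    def bump_window(n):
        # 1D factor exp[-0.01/(1 - x^2) + 0.01] on [-1, 1]; zero at the ends.
        x = np.linspace(-1.0, 1.0, n)
        w = np.zeros(n)
        inside = np.abs(x) < 1.0
        w[inside] = np.exp(-0.01 / (1.0 - x[inside] ** 2) + 0.01)
        return w

    ny, nx = 128, 160                  # grid sizes (placeholder values)
    u = np.random.randn(ny, nx)        # stand-in for the gridded zonal velocity
    v = np.random.randn(ny, nx)        # stand-in for the meridional velocity
    W2 = np.outer(bump_window(ny), bump_window(nx))  # 2D bump, product form
    u_hat = np.fft.fft2(u * W2)        # fields are now periodic at the edges
    v_hat = np.fft.fft2(v * W2)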
In Fourier space, the KE equation is represented as <cit.>,
∂ E(k)/∂ t = T(k) + F(k) - D(k),
where E(k) is the KE of a shell of wavenumber k=√(k^2_x+k^2_y), F(k) is the energy supply rate to the above shell by forcing, and D(k) is the energy dissipation at the
shell. T(k) is the energy supply to this shell via nonlinear transfer. Note that,
E(k) = 1/2∑_k-1 < |k'| ≤ k| U( k')|^2.
In a statistically steady state, ∂ E(k)/∂ t = 0, and E(k) is approximately constant in time.
The energy supply rate due to non-linearity is balanced by
F(k)-D(k). A useful quantity called the KE flux, Π(k), measures the energy passing through the wavenumber shell of radius k, and it is defined as
Π(k) = -∫_0^k T(k')dk'.
In 2D flows, another quantity of interest is the enstrophy (ζ = ∫ω^2/2 d r), where ω is the relative vorticity. The corresponding enstrophy flux is denoted by ζ(k).
We compute the KE and enstrophy fluxes using the formalism of <cit.> and <cit.>, and the relevant formulae read,
Π(k) = ∑_|k'| > k ∑_|p| ≤ k δ_k', p+q Im([k' · U(q)][U^*(k') · U(p)]),
ζ(k) = ∑_|k'| > k ∑_|p| ≤ k δ_k', p+q Im([k' · U(q)][ω^*(k') ω(p)]),
where Im stands for the imaginary part of the argument and U^* is the complex conjugate. The expression (inside the summation operators) in equation <ref> (<ref>) represents the energy (enstrophy) transfer in a triad (k' = p + q), where the k' mode receives energy (enstrophy) from modes p and q. The expression is then summed over all such triads satisfying |k'| > k and |p| ≤ k (note that -∫_0^k T(k')dk' = ∫_k^∞ T(k')dk').
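In practice the triad sums can be evaluated equivalently by accumulating the advective transfer term mode by mode and then integrating it over shells. The sketch below (our own; the unit, square, periodic grid and the simplified normalization are assumptions) follows this route:

    import numpy as np

    def spectral_fluxes(u, v):
        # Cumulative KE and enstrophy fluxes on an n x n periodic grid,
        # equivalent to the shell-summed triad expressions above:
        # Pi(k) = -sum of T(k') over |k'| <= k, and likewise for enstrophy.
        n = u.shape[0]
        k1 = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
        KX, KY = np.meshgrid(k1, k1)
        K = np.hypot(KX, KY)

        def ddx(f_hat): return np.real(np.fft.ifft2(1j * KX * f_hat))
        def ddy(f_hat): return np.real(np.fft.ifft2(1j * KY * f_hat))

        u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
        w_hat = 1j * KX * v_hat - 1j * KY * u_hat  # relative vorticity

        def transfer(f_hat):
            # T(k) = -Re[ f_hat* . FFT((u.grad) f) ], per Fourier mode
            adv = u * ddx(f_hat) + v * ddy(f_hat)
            return -np.real(np.conj(f_hat) * np.fft.fft2(adv)) / n**4

        T_E = transfer(u_hat) + transfer(v_hat)
        T_Z = transfer(w_hat)
        shells = np.arange(0, int(K.max()) + 1)
        Pi = np.array([-T_E[K <= k].sum() for k in shells])  # KE flux
        Z = np.array([-T_Z[K <= k].sum() for k in shells])   # enstrophy flux
        return shells, Pi, Z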
§ RESULTS
We begin by considering the spectra and fluxes associated with daily geostrophic currents.
Figure <ref> shows KE spectra of the geostrophic currents derived from SSH anomalies in all five regions.
As seen, these currents show an approximate k^-3.5 scaling
(the best fits range from k^-3.5 to k^-3.6) over a range of 250 to 100 km in all regions except Region 1 (the Southern Pacific),
where the slope is somewhat shallower with a best fit of k^-2.9.
Thus, the spectra we obtain for geostrophic currents are more in line with those reported by <cit.>, <cit.> (global extratropics),
<cit.> (Agulhas region) and <cit.> & <cit.> (near the Gulf Stream), but
differ from the shallower -5/3 like scaling observed by
<cit.> <cit.>.
The KE and enstrophy fluxes are shown in Figure <ref>.
Note that we have computed the flux using daily data and the results presented are an average over the entire 21 year period.
The qualitative structure of the KE flux confirms the findings of <cit.>
<cit.>. Specifically,
we observe a robust inverse transfer of KE at large scales (i.e., greater than approximately 250 km).
Some regions (1 and 2) show a very weak forward transfer of KE at small scales, while
in the others (Region 3, 4 and 5), the flux continues to be negative (though very small in magnitude) even at small scales.
In fact, in all the regions considered, the KE flux crosses zero or becomes very small by about 200 km.
Note that 2π times the climatological first baroclinic deformation scale in these five regions also
lies between 200 and 250 km <cit.>. Whether this is indicative of a KE injection scale due to linear instability as put forth by <cit.>, or more
of a coincidence is not particularly clear.
Indeed,
a mismatch between the deformation and
zero-crossing scale can be seen in <cit.> and has also been pointed out by <cit.>
in a comprehensive ocean model.
Proceeding to the enstrophy, also shown in Figure <ref>, we see that it is characterized by a large forward flux at
scales smaller than approximately 250 km. Further, the enstrophy flux does not show an inertial range, rather it increases with
progressively smaller scales and peaks at approximately 150 km. Interestingly,
we note that, in most of the regions, the
scale √(⟨ E ⟩/⟨ζ⟩) (where ⟨·⟩ denotes a domain average) <cit.> — shown by the dashed vertical lines in Figure <ref> — serves
as a reasonable marker for the onset of the forward enstrophy flux regime.
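For reference, this transition-scale diagnostic amounts to the following short computation (grid conventions assumed):

```python
import numpy as np

def transition_scale(u, v, dx):
    """Domain-averaged sqrt(<E>/<zeta>): E = (u^2 + v^2)/2 per unit mass,
    zeta = omega^2/2 with omega the relative vorticity."""
    dudy, _ = np.gradient(u, dx)   # axis 0 taken as y
    _, dvdx = np.gradient(v, dx)
    omega = dvdx - dudy
    return np.sqrt(np.mean(u**2 + v**2) / np.mean(omega**2))
```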
§.§ Eddy (subseasonal) and slowly varying (seasonal) fluxes
An important difference between actual geophysical flows (the atmosphere and ocean) and idealized 3D rotating stratified turbulence
is the presence of nontrivial mean flows, and a hierarchy of prominent temporal scales. To get an idea of the KE and enstrophy
flux contributions from the fast and slowly-varying components of the flow, following <cit.>, we filter the
derived geostrophic currents. In particular, at every grid point, we consider the daily 21 year long
time series and split this into two parts: one that contains variability of less than 100 days (referred to as the eddy or transient component) and
the other with only larger than 100 day timescales (referred to as the slowly varying, or for brevity, as the seasonal component).
To get a feel for the physical character of these decompositions, in Figures <ref> and <ref> we show a snapshot of the zonal and
meridional velocities (during summer) for all five regions. Quite clearly, the slowly varying or
seasonal u flow has a pronounced zonal structure, as compared to the more isotropic eddy component.
Similarly, the seasonal v velocity has a larger scale and is oriented in a preferentially meridional direction as compared
to its eddy component.
It is interesting to note that the meridional (seasonal and eddy) velocity is always comparable in strength to the zonal flow.
As with the original data (the total field), we compute the KE and enstrophy fluxes from the eddy field and the
seasonal component. The seasonal-eddy fluxes are computed by subtracting the eddy-eddy and seasonal-seasonal contributions
from the total flux.
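In code, the decomposition could look as follows; the 100-day cutoff is the one used in the text, while the simple running-mean low pass (rather than, say, a spectral filter) is our choice.

```python
import numpy as np

def split_eddy_seasonal(u_t, dt_days=1.0, cutoff_days=100.0):
    """Split u_t[t, y, x] into an 'eddy' (< cutoff) and a 'seasonal'
    (> cutoff) component with a running-mean low pass along time."""
    w = max(1, int(round(cutoff_days / dt_days)))
    kernel = np.ones(w) / w
    lowpass = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, u_t)
    return u_t - lowpass, lowpass   # (eddy, seasonal)

# The cross (seasonal-eddy) flux is then obtained as a residual,
#   Pi_cross(k) = Pi_total(k) - Pi_eddy(k) - Pi_seasonal(k),
# with each flux computed as above and averaged over the record.
```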
Figures <ref> and <ref> show these four terms (total: solid curves, eddy-eddy: dashed curves, seasonal-seasonal: dotted curves and seasonal-eddy: dash-dot curves)
for KE and enstrophy in the five regions
considered, respectively.
For the KE (Figure <ref>), we see that the total flux at large scales (i.e., greater than 250 km) has a strong contribution from the
seasonal-eddy interactions. In fact, the eddy-eddy term
is qualitatively of the correct
form but quite small in magnitude. The seasonal-seasonal contribution is always upscale (except for a very small positive bump
at small scales in Region 1), and thus, it too enhances the inverse transfer at large scales.
For the enstrophy (Figure <ref>), we see that the eddy-eddy term is reasonably strong, and along with the seasonal-eddy flux (in Regions 3, 4 and 5), or the
seasonal-seasonal flux (in Region 1), or both (Region 2), leads to the strong
forward enstrophy cascading regime at small scales (i.e., below
approximately 250 km).
§ INTERPRETATION AND CONCLUSION
By studying 21 years (1993-2013) of surface geostrophic currents derived from AVISO SSH anomalies in different midlatitudinal parts of the world's oceans
we find — in agreement with previous studies — that the spectral flux of rotational KE exhibits an inverse transfer at scales larger than about
250 km. Further, at smaller scales,
specifically, 250 km to 100 km, we find a strong forward flux of enstrophy accompanied by a KE spectrum that approximately follows a k^-3.5 power-law.
The KE flux at these small scales is very weak, and in a few of the regions considered, it is in the forward direction.
The transition from an inverse KE to a dominant forward enstrophy flux is roughly in agreement with a simple prescription based on the total enstrophy and KE in
the domain. On splitting the original data into high (eddy) and low (seasonal) frequencies,
we observed that the seasonal-seasonal and seasonal-eddy
fluxes play an important role in the
inverse (forward) transfer of KE (enstrophy) at scales greater (smaller) than 250 km.
We now interpret these findings in the context of rapidly rotating, strongly
stratified three-dimensional (3D) turbulence as well as spectra and flux analyses from the midlatitude upper troposphere.
Specifically, idealized
3D rotating Boussinesq simulations suggest that rotational (or vortical) modes dominate the energy budget at large scales and exhibit
a robust inverse transfer of KE
to larger
scales, and a forward transfer of enstrophy to small scales <cit.>.
These transfers, akin to 2D and QG turbulence, are accompanied by KE spectra that follow -5/3 and -3 power-laws in the upscale KE and downscale
enstrophy flux dominated regimes <cit.>.
Given the rotational nature of the
geostrophic currents, our observation of the upscale (downscale) KE (enstrophy) flux at scales greater (smaller) than 250 km is therefore in accord
with the aforementioned expectations. Indeed, the k^-3.5 scaling is also close to the expected KE spectrum that characterizes the enstrophy flux dominated regime
[It should
be noted that the -3 exponent (even in incompressible 2D turbulence) is fairly delicate. As discussed in the review by <cit.>, it is not
uncommon to observe power-laws for the KE spectrum that range from -3 to -3.5 in the enstrophy cascading regime.].
With regard to the atmosphere, the forward enstrophy transfer regime of QG turbulence has been postulated to explain the -3 portion of
midlatitude upper tropospheric KE spectrum <cit.>. Starting with <cit.> and <cit.>, re-analysis products at progressively finer resolutions have been analyzed
with a view towards seeing if the Nastrom-Gage spectrum is captured by the respective models <cit.>,
and if so, what are the associated energy and enstrophy
fluxes that go along with it <cit.>. In all, these studies demonstrate quite clearly that the -3 range of the Nastrom-Gage spectrum
(spanning approximately 3000-4000 km to 500 km in the upper troposphere) corresponds to the dominance of rotational modes, and a forward
enstrophy cascading regime. Further, at scales greater than the -3 range (i.e., greater than approximately 4000 km),
the upper troposphere supports an inverse rotational KE flux <cit.>.
Regarding the small forward flux of rotational KE at scales smaller than approximately 250 km in a few regions,
as pointed out by <cit.> and <cit.>, this is
likely due to the limited resolution of the data. For example, a forward KE flux in the rotational modes was observed in the coarse data used by
<cit.>, but it vanishes in the more recent finer scale products analyzed in <cit.> and <cit.> <cit.>. In fact,
much like the idealized 3D rotating stratified scenario <cit.>, the forward transfer of
KE at small scales in atmospheric data (i.e., below approximately 500 km and accompanied by a shallower KE spectrum) is likely due to the divergent component of the flow <cit.>.
On decomposing the flow into eddy and seasonal components (less and greater than 100 day timescales, respectively), our results are somewhat
analogous to the upper troposphere. Specifically, in the atmosphere, the zonal mean-eddy (which translates to a
stationary-eddy or seasonal-eddy decomposition) flux enhanced the inverse KE
transfer to large scales <cit.>. We find the seasonal-eddy term to be important, but the
seasonal-seasonal contribution to also be significant in the inverse transfer. In fact, in our decomposition (based on a timescale
of 100 days), these terms
dominate over the eddy-eddy contribution.
For enstrophy, in the upper troposphere,
<cit.> noted that the stationary-eddy fluxes are important (as we do here),
but higher resolution data employed in <cit.> suggests that the eddy-eddy term is the dominant contributor to the
forward enstrophy flux.
Thus our findings using altimeter data, i.e., employing purely rotational
geostrophic currents, are in fair accordance with expectations from idealized simulations of rotating stratified flows as well as analyses of upper tropospheric
re-analysis data.
The qualitative similarity in rotational KE fluxes, enstrophy fluxes, and KE spectra between the surface ocean currents and
the near tropopause atmospheric flow is comforting as they both are examples of rapidly-rotating and strongly-stratified fluids.
In fact, in addition to an inverse rotational KE flux at large scales, we believe it is quite possible that the altimeter data is showing us
an enstrophy cascading, and relatively steep spectral KE scaling range of a surface oceanic counterpart to the atmospheric Nastrom-Gage spectrum.
Quite naturally, it would be very interesting to obtain data at a finer scale, and see
if the ocean surface currents (rotational and divergent together) also exhibit a transition to shallower spectra — like the upper tropospheric
Nastrom-Gage spectrum — with a change in scaling at a length scale
smaller than 100 km.
We thank the AVISO project for making the SSH data freely available (<http://www.aviso.altimetry.fr/en/home.html>).
| For the past decade, satellite altimetry data has been used to estimate the interscale transfer and spectral
distribution of surface kinetic energy, henceforth abbreviated as KE, in the oceans.
Focusing on mesoscales in mid-latitudinal regions with high eddy activity (for example, near the Gulf Stream, the Kuroshio or the Agulhas currents),
the flux of KE is seen to be scale dependent. In particular, it has been noted that surface
KE tends to be transferred upscale for scales larger than the local deformation radius and downscale for smaller scales
<cit.>.
Mesoscale wavenumber spectra, on the other hand, are somewhat more diverse with spectral indices ranging from -5/3 to -3 depending on the
region in consideration
<cit.>. In fact, recent work using in-situ observations suggests that the scaling changes with
season, and is modulated by
the strength of eddy activity <cit.>.
Interpreting these results in terms of
the dynamics captured by the altimeter and more
fundamentally the nature of the actual dynamics of the upper ocean
has been the subject of numerous recent investigations
<cit.>.
As baroclinic modes are intensified near the surface <cit.>,
it has been suggested that
altimetry data mostly represents
the first baroclinic mode in the ocean <cit.>. Indeed, energy is expected to concentrate in the first baroclinic mode
due to an inverse transfer
among the vertical modes <cit.>.
Given this, at first sight, the observed inverse transfer of surface KE at large scales was surprising, as classical quasigeostrophic (QG) baroclinic
turbulence anticipates a forward cascade in the baroclinic mode with energy flowing towards the deformation scale
<cit.>.
However, a careful examination of the energy budget in numerical simulations reveals that, while KE goes to larger scales,
the total energy in the first baroclinic mode does indeed flow downscale <cit.>; a feature that is comforting in the context
of traditional theory. In fact, along with the two-layer QG study of <cit.>,
inverse transfer of KE has also been documented in more comprehensive ocean models
<cit.>.
Noting the significance of surface buoyancy gradients (a fact missed in the aforementioned
first baroclinic mode framework), it has been suggested that, surface QG (SQG) dynamics is a more appropriate framework
for the oceans' surface <cit.>, and is reflected in the altimeter
measurements <cit.>. Even though the variance of buoyancy is transferred downscale <cit.>,
surface KE actually flows upscale in
SQG dynamics <cit.>, consistent with the flux calculations using altimetry data. Of course,
in the QG limit, a combination of surface
and interior modes is only natural. There are ongoing efforts to represent the variability of the surface ocean and interpret the
altimetry data in these terms
<cit.>.
Thus, much of the work using altimeter data has focussed on the inverse transfer of KE at relatively large scales. Here, we spend some time
on the larger scales, but
mainly concentrate on slightly smaller scales that are still properly resolved by the data. Specifically,
in addition to the KE flux, we also compute the spectral flux of enstrophy. In fact, this enstrophy flux sheds new light on the range
of scales that span approximately 250 km to 100 km. We find that the enstrophy flux is strong and directed to small scales over this range,
and is accompanied by a KE spectrum that follows an approximate k^-3.5 power-law.
This suggests that the rotational currents as derived from the altimeter
are in an enstrophy cascading regime from about 250 km to 100 km, and in an inverse KE transfer regime for scales greater than
about 250 km.
In terms of an eddy and slowly varying or seasonal decomposition (defined as smaller and larger than 100 day timescale variability, respectively), we observe that the
seasonal-seasonal and seasonal-eddy fluxes play a significant role in the KE (enstrophy) flux for scales
larger (smaller) than about 250 km.
Finally, we interpret these findings in the context of idealized
studies of three-dimensional, rapidly rotating and stratified turbulence and also compare them with detailed analyses of
midlatitude upper tropospheric
KE spectra and fluxes. | null | null |
http://arxiv.org/abs/1701.08166v1 | 20170127190003 | HST proper motions in Galactic globular clusters | [
"Laura L. Watkins",
"Roeland P. van der Marel",
"Andrea Bellini",
"A. T. Baldwin",
"P. Bianchini",
"J. Anderson"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218, USA; Dept. of Physics & Astronomy, Louisiana State Univ., Baton Rouge, LA 70803, USA; Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, Germany; [email protected]
Proper motions (PMs) are crucial to fully understand the internal dynamics of globular clusters (GCs). To that end, the Hubble Space Telescope (HST) Proper Motion (HSTPROMO) collaboration has constructed large, high-quality PM catalogues for 22 Galactic GCs. We highlight some of our exciting recent results: the first directly-measured radial anisotropy profiles for a large sample of GCs; the first dynamical distance and mass-to-light (M/L) ratio estimates for a large sample of GCs; and the first dynamically-determined masses for hundreds of blue-straggler stars (BSSs) across a large GC sample.
HST proper motions in Galactic globular clusters
L. L. Watkins1,4
R. P. van der Marel1 A. Bellini1 A. T. Baldwin1,2 P. Bianchini3 J. Anderson1
§ INTRODUCTION
The HSTPROMO collaboration is using PMs to revolutionise our dynamical understanding of many objects in the universe – including stars in globular and young star clusters; Local Group galaxies, including Andromeda, the Magellanic Clouds and a number of dwarf spheroidals; and even AGN black hole jets – thanks to the exquisite astrometric precision of HST <cit.>.[http://www.stsci.edu/~marel/hstpromo.html]
As part of this ongoing work, <cit.> recently presented a set of internal PM catalogues for 22 Galactic GCs, measured using archival data from HST. In <cit.>, <cit.>, and <cit.>, we used these catalogues to study 3 different aspects of the GC sample: 1) velocity anisotropy profiles; 2) dynamical distances and M/Ls; and 3) masses of their BSS populations. Here we briefly highlight the results from each study.
§ VELOCITY ANISOTROPY
Dynamical mass estimates are degenerate with anisotropy, so understanding the anisotropy in a stellar system is crucial to successful mass determination.
In <cit.>, we began by making a series of cuts to select high-quality samples of bright stars. By restricting the magnitude range of the samples to only those stars brighter than 1 mag below the main-sequence turn off (MSTO), we limited the stellar-mass range in each sample, and so could neglect the effect of stellar mass on the kinematics and consider only the spatial changes. The quality cuts were made to eliminate stars for which the PMs were poorly measured or for which the uncertainties had been underestimated as such stars can introduce biases into kinematic analyses. We then constructed binned velocity dispersion and anisotropy profiles for each GC.
Figure <ref> shows the binned anisotropy profile for NGC 2808 (black points). This GC is isotropic at its centre and becomes mildly radially anisotropic with increasing distance from the centre. This trend is typical for all GCs in our sample; to quantify this, we used the fits (blue lines) to estimate the anisotropy at the core and half-light radii (green and red lines) and compared these values to estimates of the relaxation times at these radii <cit.>. Figure <ref> shows the results of this comparison. Nearly all GCs appear to be isotropic out to their core radii; thereafter, some remain isotropic out to their half-light radii, while others become mildly radially anisotropic, with the degree of anisotropy increasing with relaxation time. The black lines show a fit to the data with a break between the isotropic and anisotropic regions at the characteristic time marked by the dashed line.
This analysis offers a way to estimate the vital anisotropy of a GC using its relaxation time, when no PM data is available.
§ DYNAMICAL DISTANCES AND MASS-TO-LIGHT RATIOS
GC distances are typically estimated using photometric methods that compare the apparent and absolute magnitudes of stars for which the absolute magnitudes are known or may be inferred, such as RR Lyrae stars. M/Ls are typically inferred via stellar population synthesis (SPS) modelling. However, both distances and M/Ls can be estimated using dynamical modelling when both PM and line-of-sight (LOS) velocity data exist. The photometric and dynamical methods use very different types of data to constrain the same fundamental properties, so their comparison can serve as a crucial test of both methods.
In <cit.>, we used cleaned samples of bright stars to construct PM velocity dispersion profiles and then compared these against LOS velocity dispersion profiles from the literature. This was only possible for 15 of the 22 GCs; the remaining GCs had insufficient (or even no) LOS data available. From this analysis, we estimated dynamical distances and M/Ls for each GC, which we compared against photometric distances from <cit.> and SPS M/Ls from <cit.>.
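The idea behind the dynamical distance is that a PM dispersion is an angular quantity while a LOS dispersion is a physical one, so matching them fixes the distance. A deliberately simplified sketch (the published fits also model anisotropy and the radial dispersion profiles rather than using a single ratio):

```python
KMS_PER_MASYR_KPC = 4.74047  # 1 mas/yr at 1 kpc corresponds to ~4.74 km/s

def dynamical_distance_kpc(sigma_los_kms, sigma_pm_masyr):
    """Distance that makes the PM dispersion (angular) match the LOS
    dispersion (physical), assuming both trace the same isotropic
    velocity dispersion."""
    return sigma_los_kms / (KMS_PER_MASYR_KPC * sigma_pm_masyr)
```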
Figure <ref> shows the fractional difference in the dynamical and photometric distances versus the fractional difference in the dynamical and SPS M/Ls. The mean difference in the distances was just -1.7 ± 1.9 %, indicating excellent agreement and highlighting the robustness of both methods. The mean difference in the M/Ls was -8.8 ± 6.4 %, showing slightly more scatter but still consistent within 1.3σ.
Figure <ref> shows the M/Ls as a function of GC metallicity <cit.>. Our dynamical M/Ls are shown in blue and the SPS M/Ls are shown in green. We see that the dynamical and SPS M/Ls are consistent for the metal-poor GCs ([Fe/H]<-1 dex), but that they diverge for the metal-rich GCs: the SPS M/Ls increase with increasing metallicity, whereas the dynamical M/Ls decrease. This is consistent with the behaviour noted in a study of 200 M31 GCs by <cit.> (black points) and has been attributed to the effects of mass segregation <cit.>.
§ BLUE-STRAGGLER KINEMATICS AND DYNAMICAL MASS ESTIMATES
Frequent two-body stellar interactions in GCs allow the stars to exchange energy; over time, the stars move towards a state of energy equipartition, where they all have the same energy. As a result, high mass stars tend to move more slowly than low mass stars; this is true even if the GC is only in partial equipartition. This effect can be expressed as σ∝ M^-η (1), where σ is the velocity dispersion of a stellar population of mass M, and 0 ≤η≤ 0.5 quantifies the degree of equipartition in the GC.
BSSs are an apparent extension of the main sequence in a GC, bluer and brighter than the MSTO. Most stars brighter than the MSTO in a GC are evolved stars with approximately equal masses, since the late stages of stellar evolution are fast. However, BSSs are believed to have formed via mass transfer or stellar collisions within a binary system, making them a more massive population. So, as a result of equipartition in a GC, we expect them to be moving more slowly.
In <cit.>, we used a series of colour and magnitude cuts to select samples of BSSs in 19 of our 22 GCs, finding 598 BSSs in total. We then calculated binned velocity dispersion profiles for the BSS subsamples and for the evolved stars. Figure <ref> shows the colour-magnitude diagram (CMD) for NGC 6341; the box shows the cuts used to select the BSSs (blue diamonds). The black points show the evolved stars and the red diamond marks the MSTO. Figure <ref> shows the dispersion profiles for the BSSs (black) and the evolved stars (orange) in NGC 6341.
On average, we found that the BSS dispersions were lower than the evolved-star dispersions, indicating that the BSSs are indeed more massive. Furthermore, by estimating the degree of equipartition in each GC from the series of N-body simulations presented in <cit.>, we were able to use equation (1) to estimate the average BSS mass M_BSS in each GC as a function of the MSTO mass M_MSTO. Then by estimating the MSTO mass in each GC, we were thus able to estimate the mass of each BSS population. We found a mass ratio of M_BSS/M_MSTO = 1.50 ± 0.14 and an average mass M_BSS = 1.22 ± 0.12 M_⊙, in good agreement with previous BSS mass estimates.
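A minimal sketch of this mass estimate, directly inverting equation (1); the numbers in the usage comment are illustrative, not values from the paper.

```python
def bss_mass(sigma_bss, sigma_msto, m_msto, eta):
    """Invert equation (1), sigma ~ M^(-eta): since
    sigma_bss/sigma_msto = (M_bss/M_msto)^(-eta), the mean BSS mass is
    M_bss = M_msto * (sigma_bss/sigma_msto)^(-1/eta)."""
    return m_msto * (sigma_bss / sigma_msto) ** (-1.0 / eta)

# Illustrative only: a 10% lower BSS dispersion with eta = 0.2 and a
# 0.8 Msun turn-off gives bss_mass(0.9, 1.0, 0.8, 0.2) ~ 1.36 Msun.
```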
§ CONCLUSIONS
PMs are crucial to fully understand the internal dynamics of GCs. To that end, the HSTPROMO collaboration has constructed large, high-quality PM catalogues for 22 Galactic GCs. We highlighted some of our exciting recent results: the first directly-measured radial anisotropy profiles for a large sample of GCs; the first dynamical distance and M/L estimates for a large sample of GCs; and the first dynamically-determined masses for hundreds of BSSs across a large GC sample.
Support for this work was provided by grants for HST programs AR-12845 (PI: Bellini) and AR-12648 (PI: van der Marel), provided by the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
http://arxiv.org/abs/1701.08213v1 | 20170127224755 | Tapering off qubits to simulate fermionic Hamiltonians | [
"Sergey Bravyi",
"Jay M. Gambetta",
"Antonio Mezzacapo",
"Kristan Temme"
] | quant-ph | [
"quant-ph"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.07787v6 | 20170126174031 | Multi-locus data distinguishes between population growth and multiple merger coalescents | [
"Jere Koskela"
] | q-bio.PE | [
"q-bio.PE",
"q-bio.QM",
"stat.CO",
"stat.ME",
"92D15 (Primary), 62M02, 62M05 (Secondary)"
] |
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08017v1 | 20170127112335 | On one approach to definition of singular differential operators | [
"A. A. Vladimirov"
] | math.SP | [
"math.SP",
"34L20, 34B09"
] |
UDC 517.98

On one approach to the definition of singular differential operators

A. A. Vladimirov [The work was supported by RFBR, grant 16-01-00706.]

Abstract: Based on the notion of triples of Banach spaces, we give a definition and a characterization of the basic properties of a wide class of boundary value problems for ordinary differential equations of arbitrary (including odd) order with singular coefficients.
§ INTRODUCTION

§.§ In [<cit.>], a construction was given that allows one to correctly define a number of boundary value problems for differential equations of the form

∑_{k=0}^n (-1)^{n-k} (p_k y^{(n-k)})^{(n-k)} = f

with singular coefficients p_k ∈ W_2^{-k}[0,1]. In the case n=2 it was also shown there that the action of the corresponding unbounded operators in the space L_2[0,1] can be described by systems of ordinary differential equations for absolutely continuous functions. For problems of higher order, analogous representations were not written out explicitly, although in light of the developed theory the possibility of doing so was completely transparent. The significance of the construction of [<cit.>] is demonstrated, for example, by the simple approach to the study of oscillation properties of eigenfunctions of singular ordinary differential operators developed on its basis in [<cit.>], including a number of so-called "multi-point" problems, whose treatment had previously usually been carried out by rather laborious indirect methods (see the references given in [<cit.>]). The ideological foundations of the approach of [<cit.>] were laid in the earlier paper [<cit.>], where second-order differential equations were studied in a similar way.

Recently, interest in this subject has revived. Thus, in [<cit.>], independently of the approach of [<cit.>], regularized representations were proposed for a class of even-order differential equations formally somewhat wider than the one considered in [<cit.>]. It therefore seems of interest to extend the range of application of the technique used in [<cit.>]. The present paper is devoted to characterizing some possible directions of such an extension.

§.§ The structure of the paper is as follows. In § <ref> we define the auxiliary function spaces used below to describe the main objects of study, and we establish their most important properties. In § <ref> we define and study the operators corresponding to the boundary value problems, admissible within the developed theory, for differential equations with singular coefficients. Finally, in § <ref> we give examples of applications of the developed theory to several concrete situations.

All linear spaces considered below are assumed to be complex. Throughout the paper, 0 is regarded as the smallest natural number.
§ QUASI-DIFFERENTIAL SOBOLEV SPACES

§.§ Let A be a system of functions of class L_1[0,1] such that to every pair of natural indices i and j ⩽ i+1 there corresponds a function A_ij ∈ L_1[0,1]. By C_A^n[0,1] we will denote the subspace of the space {C[0,1]}^{n+1} of continuous vector functions with n+1 components singled out by the system of equations, understood in the sense of generalized differentiation,

Y_i' = ∑_{j=0}^{i+1} A_ij Y_j,  i<n.

By W_{s,A}^n[0,1], where n>0 and s ∈ [1,∞), we will denote the completion of the space C_A^n[0,1] with respect to the norm

‖Y‖_{W_{s,A}^n[0,1]} ⇌ ∑_{i=0}^{n-1} ‖Y_i‖_{C[0,1]} + (∫_0^1 |A_{n-1,n}| · |Y_n|^s dx)^{1/s}.

Finally, by C̊_A^n[0,1] and W̊_{s,A}^n[0,1] we will denote the subspaces obtained as the closures, in the spaces C_A^n[0,1] and W_{s,A}^n[0,1] respectively, of the sets of vector functions from these spaces that are compactly supported inside the interval (0,1). The following fact holds trivially:

§.§.§ If for every i<n the antiderivative of the function |A_{i,i+1}| is strictly monotone, then the natural embedding Y ↦ Y_0 of the space W_{1,A}^n[0,1] into the space C[0,1] is injective.

The usual Sobolev spaces W_s^n[0,1] and W̊_s^n[0,1], where s ∈ [1,∞), correspond within this construction (up to the choice of an equivalent norm) to the situation

A_ij(x) ≡ 1 for j=i+1, and 0 otherwise.
§.§ Denote by M_n the solution, understood in the sense of generalized differentiation, of the initial value problem

(M_n)_{ij}' = ∑_{k=0}^{inf{n-1, i+1}} A_{ik} · (M_n)_{kj},
(M_n)_{ij}(0) = 1 for i=j, and 0 otherwise,

where each of the indices i and j ranges over {0, …, n-1}. Clearly, each of the functions (M_n)_{ij} belongs to the class W_1^1[0,1]. Associating, at every point x ∈ [0,1], with the n×n matrix of entries (M_n)_{ij}(x) the inverse matrix with entries (M_n^{-1})_{ij}(x), we obtain a collection of functions (M_n^{-1})_{ij} that also belong to the class W_1^1[0,1]. Moreover, every vector function Y ∈ W_{1,A}^n[0,1] is recovered from its last component according to the rules

Y_i(x) = ∑_{j=0}^{n-1} (M_n)_{ij}(x) · [Y_j(0) + ∫_0^x A_{n-1,n}(t) · (M_n^{-1})_{j,n-1}(t) Y_n(t) dt],  i<n.
§.§.§ Suppose that for some n>0 the antiderivative of the function |A_{n-1,n}| is strictly monotone. Then the natural embedding of the space C_A^n[0,1] into the space C_A^{n-1}[0,1] has dense range.

Proof. Fix an arbitrary vector function Y ∈ C_A^{n-1}[0,1] and associate with it the vector function Φ ∈ {C[0,1]}^n of the form

Φ_i(x) ≡ Y_i(x) − ∫_0^x [∑_{j=0}^{inf{n-1,i+1}} A_{ij}(t) Y_j(t)] dt,  i<n.

Fix also a sequence {Y_α}_{α=0}^∞ of vector functions of class C_A^n[0,1] satisfying, for i<n, the equalities (Y_α)_i(0) = Y_i(0), and such that the antiderivatives of the summable functions A_{n-1,n} · (Y_α)_n tend uniformly, as α → ∞, to the function Φ_{n-1}. From the representation (<ref>) and the relations (M_n^{-1})_{j,n-1} ∈ W_1^1[0,1] it then follows immediately that the function sequences {(Y_α)_i}_{α=0}^∞, i<n, converge uniformly. The trivial identities

Y_{n-1}(0) + ∫_0^x A_{n-1,n}(t) (Y_α)_n(t) dt = (Y_α)_{n-1}(x) − ∫_0^x [∑_{j=0}^{n-1} A_{n-1,j}(t) (Y_α)_j(t)] dt

together with well-known general properties of integral equations of the first kind now mean that the limit of the sequence of the images of the vector functions Y_α ∈ C_A^n[0,1] is exactly the original vector function Y ∈ C_A^{n-1}[0,1]. □
§.§.§ Suppose that for some n>0 the antiderivative of the function |A_{n-1,n}| is strictly monotone, and that a collection of summable functions {f_j}_{j=0}^n satisfies the identity

(∀ Y ∈ C̊_A^n[0,1])  ∑_{j=0}^n ∫_0^1 f_j Y_j dx = 0.

Then there exists a function h ∈ W_1^1[0,1] satisfying, for almost all x ∈ [0,1], the equalities f_n(x) = A_{n-1,n}(x) h(x). Moreover, the identity

(∀ Y ∈ C̊_A^{n-1}[0,1])  ∑_{j=0}^{n-1} ∫_0^1 g_j Y_j dx = 0

also holds, where we set

g_j ⇌ f_j − A_{n-1,j} h for j < n−1, and f_{n-1} − A_{n-1,n-1} h − h' for j = n−1.

Proof. In view of the representation (<ref>), there certainly exists a function φ ∈ W_1^1[0,1] satisfying the identity

(∀ Y ∈ C̊_A^n[0,1])  ∫_0^1 [f_n + A_{n-1,n} φ] Y_n dx = 0.

The function f_n + A_{n-1,n}φ ∈ L_1[0,1] then certainly belongs to the linear span of the collection {A_{n-1,n} · (M_n^{-1})_{j,n-1}}_{j=0}^{n-1}, which automatically means the existence of a function h ∈ W_1^1[0,1] with the required property. Expressing now, with the use of the definition (<ref>), the summable function A_{n-1,n} Y_n in the identity (<ref>) through the functions of the collection {Y_i}_{i=0}^{n-1} and integrating by parts, we verify the validity of the relation

(∀ Y ∈ C̊_A^n[0,1])  ∑_{j=0}^{n-1} ∫_0^1 g_j Y_j dx = 0.

Taking statement <ref> into account completes the proof. □
§.§ By the symbol Y^∧ ∈ ℂ^{2n}, where Y ∈ W_{1,A}^n[0,1], we will from now on denote the vector of boundary values

Y^∧_k ⇌ Y_k(0) for k<n, and Y_{k-n}(1) otherwise.

On the basis of an arbitrarily fixed matrix U ∈ ℂ^{2n×2n} one can define the subspaces C_{A,U}^n[0,1] and W_{s,A,U}^n[0,1] singled out, inside C_A^n[0,1] and W_{s,A}^n[0,1] respectively, by the system of boundary conditions U Y^∧ = 0.

On these subspaces one can define semilinear functionals of the form

⟨F, Y⟩ ≡ ∑_{i=0}^n ∫_0^1 f_i Y_i dx for Y ∈ C_{A,U}^n[0,1],
⟨F, Y⟩ ≡ ∑_{i=0}^{n-1} ∫_0^1 f_i Y_i dx + ∫_0^1 |A_{n-1,n}|^{1/s} f_n Y_n dx for Y ∈ W_{s,A,U}^n[0,1],

where f_i ∈ L_1[0,1]. In the case of the space W_{s,A,U}^n[0,1] it is additionally assumed here that f_n ∈ L_{s/(s-1)}[0,1], and that for almost every x ∈ [0,1] the equality A_{n-1,n}(x) = 0 implies the equality f_n(x) = 0. Throughout this subsection the antiderivatives of all the functions |A_{i,i+1}|, i<n, are assumed strictly monotone.

From the representation (<ref>) it follows immediately that every functional of the form (<ref>) can be rewritten in the form

⟨F, Y⟩ ≡ ∫_0^1 g Y_n dx + ∑_{i=0}^{n-1} μ_i Y_i(0) for Y ∈ C_{A,U}^n[0,1],
⟨F, Y⟩ ≡ ∫_0^1 |A_{n-1,n}|^{1/s} g Y_n dx + ∑_{i=0}^{n-1} μ_i Y_i(0) for Y ∈ W_{s,A,U}^n[0,1],

where g ∈ L_1[0,1] in the case of the space C_{A,U}^n[0,1], and g ∈ L_{s/(s-1)}[0,1] in the case of the space W_{s,A,U}^n[0,1]. From the same representation one easily obtains that every functional of the form (<ref>) admits the converse rewriting in the form (<ref>). The above means that the linear set of functionals of the form (<ref>) is closed in the dual of the space under consideration. From the well-known theorem on the general form of a semilinear continuous functional on a Lebesgue space one also easily deduces that, in the case of the space W_{s,A,U}^n[0,1], the set of functionals of the form (<ref>) coincides exactly with the dual space.

The space of functionals of the form (<ref>) defined on C_{A,U}^n[0,1] will from now on be denoted by W_{1,A,U}^{-n}[0,1]. Similarly, the space of such functionals defined on W_{s,A,U}^n[0,1] will be denoted by W_{s/(s-1),A,U}^{-n}[0,1].
§ SINGULAR DIFFERENTIAL OPERATORS

§.§ Let B be a system of functions possessing properties analogous to those of the system A and such that, for almost all x ∈ [0,1], the equalities A_{n-1,n}(x) = 0 and B_{m-1,m}(x) = 0 are equivalent. Suppose also that a parameter s ∈ [1,+∞), two matrices V ∈ ℂ^{2m×2m} and Q ∈ ℂ^{2m×2n}, and a system of functions with the following properties are fixed:

∙ p_nm, p_nm^{-1} ∈ L_∞[0,1].
∙ p_im ∈ L_{s/(s-1)}[0,1] for i<n; moreover, in the case s ≠ 1, for almost every x ∈ [0,1] the equality B_{m-1,m}(x) = 0 implies the equality p_im(x) = 0.
∙ p_nj ∈ L_s[0,1] for j<m; moreover, for almost every x ∈ [0,1] the equality A_{n-1,n}(x) = 0 implies the equality p_nj(x) = 0.
∙ p_ij ∈ L_1[0,1] for i<n and j<m.

Then one can define a bounded operator T: W_{s,A,U}^n[0,1] → W_{s,B,V}^{-m}[0,1] of the form

⟨TY, Z⟩ ≡ ∫_0^1 |A_{n-1,n}|^{1/s} · |B_{m-1,m}|^{(s-1)/s} p_nm Y_n Z_m dx + ∑_{i=0}^{n-1} ∫_0^1 |B_{m-1,m}|^{(s-1)/s} p_im Y_i Z_m dx + ∑_{j=0}^{m-1} ∫_0^1 |A_{n-1,n}|^{1/s} p_nj Y_n Z_j dx + ∑_{i=0}^{n-1} ∑_{j=0}^{m-1} ∫_0^1 p_ij Y_i Z_j dx + ⟨QY^∧, Z^∧⟩.

§.§.§ Every operator T of the form (<ref>) is Fredholm with index n − m − rank U + rank V.

Proof. The validity of the statement is easily established on the basis of the representation (<ref>), together with the fact that the operator T̂: L_s([0,1]; |A_{n-1,n}|) → L_s([0,1]; |B_{m-1,m}|) of the form

T̂y ⇌ |A_{n-1,n}/B_{m-1,m}|^{1/s} p_nm y

has an inverse whose norm is bounded by vrai sup_{x∈[0,1]} |p_nm^{-1}(x)|. □
§.§ The results of the preceding section show that, when the antiderivatives of the functions |B_{j,j+1}| are strictly monotone, the equations TY = F admit an equivalent formulation as boundary value problems for systems of differential equations for absolutely continuous functions. Namely, for the sought solution Y ∈ W_{s,A,U}^n[0,1] one can obviously define the quasi-derivatives y^{[i]} ⇌ Y_i ∈ W_1^1[0,1], i<n, satisfying the system of equations

(y^{[i]})' = ∑_{j=0}^{i+1} A_{ij} y^{[j]},  i<n−1.

Further, statement <ref> implies the existence of a quasi-derivative y^{[n]} ∈ W_1^1[0,1] satisfying the relation

|A_{n-1,n}|^{1/s} Y_n = p_nm^{-1} · [ (B_{m-1,m} / |B_{m-1,m}|^{(s-1)/s}) y^{[n]} − ∑_{i=0}^{n-1} p_im y^{[i]} + f_m ],

and hence the equation

(y^{[n-1]})' = ∑_{i=0}^{n-1} [ A_{n-1,i} − p_nm^{-1} p_im A_{n-1,n} / |A_{n-1,n}|^{1/s} ] y^{[i]} + p_nm^{-1} ( A_{n-1,n} B_{m-1,m} / ( |A_{n-1,n}|^{1/s} |B_{m-1,m}|^{(s-1)/s} ) ) y^{[n]} + p_nm^{-1} A_{n-1,n} f_m / |A_{n-1,n}|^{1/s}.

Finally, statement <ref> together with the relation (<ref>) imply the existence of quasi-derivatives y^{[n+m-j-1]} ∈ W_1^1[0,1], j<m−1, obeying the equations

(y^{[n+m-j-1]})' = ∑_{i=0}^{n-1} [ p_ij − p_nm^{-1} p_im p_nj ] y^{[i]} + [ p_nm^{-1} (B_{m-1,m} / |B_{m-1,m}|^{(s-1)/s}) p_nj − B_{m-1,j} ] y^{[n]} − ∑_{k=sup{j,1}}^{m-1} B_{k-1,j} y^{[n+m-k]} − f_j + p_nm^{-1} p_nj f_m,  j<m.

Moreover, the identity

⟨TY − F, Z⟩ ≡ ⟨QY^∧ − Y^∨, Z^∧⟩

obviously holds, where we set

Y^∨_k ⇌ y^{[n+m-k-1]}(0) for k<m, and −y^{[n+2m-k-1]}(1) otherwise.

Accordingly, the original equation TY = F is equivalent to the system of differential equations (<ref>), (<ref>) and (<ref>) considered together with the set of boundary conditions

U Y^∧ = 0,  Q Y^∧ − Y^∨ ∈ im V^*.
§.§ Now suppose that, for the spaces W_{s,A,U}^n[0,1] and W_{s,B,V}^{-m}[0,1] under consideration and for some Hilbert space H, embeddings I: W_{s,A,U}^n[0,1] → H and J: H → W_{s,B,V}^{-m}[0,1] are fixed. In this case the operator T defines on the space H a linear relation T^∙ ⇌ J^{-1} T I^{-1} with graph

{(y,z) ∈ H×H : (∃ Y ∈ W_{s,A,U}^n[0,1]) (IY = y) & (TY = Jz)}.

If for some λ ∈ ℂ the operator T − λJI is boundedly invertible, then the bounded resolvent

(T^∙ − λ)^{-1} = I · (T − λJI)^{-1} J

is obviously defined as well. Accordingly, the graph of the relation T^∙ is then certainly closed. If, in addition, the resolvent (<ref>) is injective and has dense range, then the relation T^∙ is an operator with dense domain.

§.§.§ If the embeddings I: W_{s,A,U}^n[0,1] → H and J: H → W_{s,B,V}^{-m}[0,1] are injective and have dense ranges, then the resolvent of the relation T^∙ admitting the representation (<ref>) is also injective and has dense range.

§.§.§ Suppose that the embedding J: H → W_{s,B,V}^{-m}[0,1] is injective, and that for some bounded operator K: W_{1,A,U}^n[0,1] → C_{B,V}^m[0,1] in the case s=1, or K: W_{s,A,U}^n[0,1] → W_{s/(s-1),B,V}^m[0,1] in the case s ≠ 1, the identity ⟨z, IY⟩ ≡ ⟨Jz, KY⟩ holds. Suppose also that the equality ⟨TY, KY⟩ = 0 is possible only when Y = 0. Then the linear relation T^∙ is an operator. If, in addition, the index of the operator T equals zero, then the bounded operator T^{-1} exists, and the domain of the operator T^∙ is dense in H.

Proof. If a pair (0, z) ∈ H×H belongs to the graph of the relation T^∙, then there must exist a vector function Y ∈ W_{s,A,U}^n[0,1] with the properties TY = Jz and IY = 0. At the same time, however, the equalities

⟨TY, KY⟩ = ⟨Jz, KY⟩ = ⟨z, IY⟩ = 0

must also hold, which by the assumptions made guarantees the equality Y = 0, and hence also the equality z = 0.

Further, every vector function Y ∈ W_{s,A,U}^n[0,1] with the property TY = 0 certainly satisfies the equality ⟨TY, KY⟩ = 0 and is therefore zero. Accordingly, when the index of T equals zero, the bounded invertibility of T follows immediately from the Fredholm alternative.

Finally, if a vector z ∈ H is orthogonal to the range of the operator I T^{-1} J, then the equalities

0 = ⟨z, I T^{-1} J z⟩ = ⟨Jz, K T^{-1} J z⟩ = ⟨T·[T^{-1} J z], K·[T^{-1} J z]⟩

must hold, which by the assumptions made guarantees the equality T^{-1} J z = 0, and hence also the equality z = 0. □
§.§.§ Suppose that for some bounded operator K: W_{1,A,U}^n[0,1] → C_{B,V}^m[0,1] in the case s=1, or K: W_{s,A,U}^n[0,1] → W_{s/(s-1),B,V}^m[0,1] in the case s ≠ 1, the identity ⟨z, IY⟩ ≡ ⟨Jz, KY⟩ holds, and suppose that the numerical range of the operator K^*T is contained in some sector of the complex plane with vertex at the point 0. Then the numerical range of the relation T^∙ is contained in the same sector.

Proof. If a pair (y,z) ∈ H×H belongs to the graph of the relation T^∙, then there must exist a vector function Y ∈ W_{s,A,U}^n[0,1] with the properties IY = y and TY = Jz. The resulting equalities

⟨z, y⟩ = ⟨Jz, KY⟩ = ⟨TY, KY⟩

guarantee exactly the validity of the statement being proved. □

§.§.§ Suppose that the embedding I: W_{s,A,U}^n[0,1] → H is injective, that for some bounded operator K: W_{1,A,U}^n[0,1] → C_{B,V}^m[0,1] in the case s=1, or K: W_{s,A,U}^n[0,1] → W_{s/(s-1),B,V}^m[0,1] in the case s ≠ 1, the identity ⟨z, IY⟩ ≡ ⟨Jz, KY⟩ holds, and that the operator K^*T is symmetric. Then for any non-real λ ∈ ℂ the operator T − λJI, provided it is Fredholm with zero index, has a bounded inverse.

Proof. In the case under consideration, for every vector function Y ∈ W_{s,A,U}^n[0,1] one has the equality

Im ⟨(T − λJI)Y, KY⟩ = −Im λ · ‖IY‖^2.

Accordingly, the equation (T − λJI)Y = 0 cannot have nontrivial solutions, which, by the Fredholm alternative, means exactly the validity of the statement being proved. □

In conclusion we note that, by statement 2.1.3 of [<cit.>], when the embeddings I: W_{s,A,U}^n[0,1] → H and J: H → W_{s,B,V}^{-m}[0,1] are injective and have dense ranges, and the resolvent set of the pencil T^♮: λ ↦ T − λJI is non-empty, the spectrum of the operator T^∙ coincides exactly with the spectrum of the pencil T^♮.
§ EXAMPLES

§.§ As a first example, consider the Dirichlet problem for the differential equation

(py'')'' − (q'y')' + r''y = f ∈ L_2[0,1],

where p, p^{-1} ∈ L_∞[0,1] and q, r ∈ L_2[0,1] (the classical case, in which the coefficients p, q' and r'' are assumed smooth, involves substantially stronger restrictions). To this problem there corresponds the operator T: W̊_2^2[0,1] → W̊_2^{-2}[0,1] of the form

⟨Ty, z⟩ ≡ ∫_0^1 [ (py'' − qy' + ry) z'' + (−qy'' + 2ry') z' + ry'' z ] dx.

This operator has the form (<ref>), and therefore the equation T^∙ y = f for the corresponding unbounded operator in the space L_2[0,1] is equivalent to the boundary value problem

(y^{[0]})' = y^{[1]},
(y^{[1]})' = −p^{-1} r y^{[0]} + p^{-1} q y^{[1]} + p^{-1} y^{[2]},
(y^{[2]})' = p^{-1} q r y^{[0]} + (2r − p^{-1} q^2) y^{[1]} − p^{-1} q y^{[2]} − y^{[3]},
(y^{[3]})' = −p^{-1} r^2 y^{[0]} + p^{-1} q r y^{[1]} + p^{-1} r y^{[2]} − f,

y^{[0]}(0) = y^{[1]}(0) = y^{[0]}(1) = y^{[1]}(1) = 0.

This system of equations, as expected, coincides exactly with the system from Lemma 2 of [<cit.>].
The theory developed above can, however, also be applied to coefficients of a substantially more general form. In particular, let the coefficient p^{-1} be the generalized derivative of some strictly increasing function H ∈ C[0,1], let the coefficient q belong to the class L_2([0,1]; H'), and let the coefficient r belong to the class L_2([0,1]; H') ∩ L_1[0,1]. Then one can introduce functions ξ, η ∈ W_1^1[0,1] with the properties

ξ( (x + H(x) − H(0)) / (1 + H(1) − H(0)) ) ≡ x,  η( (x + H(x) − H(0)) / (1 + H(1) − H(0)) ) ≡ H(x),

together with the associated system of summable functions

A_ij ⇌ ξ' for i=0, j=1; η' for i=1, j=2; and 0 otherwise.

The problem under consideration then takes the form T^∙ y = f, where the unbounded operator T^∙ = (I^*)^{-1} T I^{-1} acting in the space L_2[0,1] is built from the injective embedding I: W̊_{2,A}^2[0,1] → L_2[0,1] of the form

[IY](x) ≡ Y_0( (x + H(x) − H(0)) / (1 + H(1) − H(0)) )

and the operator T: W̊_{2,A}^2[0,1] → W̊_{2,A}^{-2}[0,1] of the form

⟨TY, Z⟩ ≡ ∫_0^1 [ η'·(Y_2 − σY_1 + ρY_0) Z_2 + (−η'σY_2 + 2ξ'ρY_1) Z_1 + η'ρY_2 Z_0 ] dx,

where we set σ ⇌ q∘ξ and ρ ⇌ r∘ξ. The results of the preceding section now allow us to represent the original problem in the form

(y^{[0]})' = ξ' y^{[1]},
(y^{[1]})' = −η'ρ y^{[0]} + η'σ y^{[1]} + η' y^{[2]},
(y^{[2]})' = η'σρ y^{[0]} + (2ξ'ρ − η'σ^2) y^{[1]} − η'σ y^{[2]} − ξ' y^{[3]},
(y^{[3]})' = −η'ρ^2 y^{[0]} + η'σρ y^{[1]} + η'ρ y^{[2]} − f∘ξ,

y^{[0]}(0) = y^{[1]}(0) = y^{[0]}(1) = y^{[1]}(1) = 0,

where y^{[0]} = y∘ξ.
§.§ As a second example we take a problem of periodic type for the third-order differential equation

−iy''' − (py')' + q'y = f ∈ L_2[0,1],

where p, q ∈ L_1[0,1]. For problems of odd order it is natural to consider the situation s=1, in contrast to the equality s=2 typical of the even case. Accordingly, we associate with the problem the pair of spaces

W_{1,A,U}^2 ⇌ { y ∈ W_1^2[0,1] : y(1) − y(0) = y'(1) − y'(0) = 0 },
C_{B,V}^1 ⇌ { y ∈ C^1[0,1] : y(1) − y(0) = 0 }.

Multiplying both sides of the equation (<ref>) by the component Z_0 of an arbitrary vector function Z ∈ C_{B,V}^1[0,1], we verify by integration by parts the equality T^∙ y = f, where the relation T^∙ corresponds to the operator T: W_{1,A,U}^2[0,1] → W_{1,B,V}^{-1}[0,1] of the form

⟨TY, Z⟩ ≡ ∫_0^1 [ (iY_2 + pY_1 − qY_0) Z_1 − qY_1 Z_0 ] dx.

The results of the preceding section now give for the equation T^∙ y = f the equivalent representation

(y^{[0]})' = y^{[1]},
(y^{[1]})' = −iq y^{[0]} + ip y^{[1]} − i y^{[2]},
(y^{[2]})' = −q y^{[1]} − f,

y^{[0]}(1) − y^{[0]}(0) = y^{[1]}(1) − y^{[1]}(0) = y^{[2]}(1) − y^{[2]}(0) = 0.

On the basis of statements <ref> and <ref>, taking into account the existence of a bounded embedding of the space W_{1,A,U}^2[0,1] into the space C_{B,V}^1[0,1], it is also easily shown that when the coefficients p, q ∈ L_1[0,1] are real-valued, the unbounded operator T^∙ is self-adjoint.
§.§ As a final example, consider the Dirichlet problem for the equation

−(d/dG)(dy/dH) = f ∈ L_2([0,1]; G'),

studied in [<cit.>] and [<cit.>], where H ∈ C[0,1] is a nondecreasing function with the properties H(0) = H(1) − 1 = 0, and the nondecreasing function G ∈ C[0,1] admits a representation G(x) ≡ N(H(x)). In this case the solution of the problem can certainly be written in the form y = Iu, where the embedding I: W̊_2^1[0,1] → L_2([0,1]; G'), in general not injective, obeys the identity [Iu](x) ≡ u(H(x)). Moreover, there certainly exists a function g ∈ L_2([0,1]; N') for which the composition g∘H coincides in the space L_2([0,1]; G') with the original right-hand side f, and hence obeys the identity

(∀ v ∈ W̊_2^1[0,1])  ⟨I^* f, v⟩ = ∫_0^1 g v dN.

Accordingly, the original problem admits the reformulation (I^*)^{-1} T I^{-1} y = f, where T: W̊_2^1[0,1] → W̊_2^{-1}[0,1] is the usual operator of twofold differentiation

⟨Tu, v⟩ ≡ ∫_0^1 u'v' dx.

Statements <ref> and <ref>, together with the obvious positivity of the operator T, guarantee that the unbounded operator T^∙ ⇌ (I^*)^{-1} T I^{-1} acting in the space L_2([0,1]; G') is well defined, self-adjoint, and positive.
References

[1] A.A. Vladimirov. On the convergence of sequences of ordinary differential operators // Mathematical Notes. 2004. V. 75, no. 6. P. 941–943.
[2] A.A. Vladimirov. On oscillation properties of positive differential operators with singular coefficients // Mathematical Notes. 2016. V. 100, no. 6. P. 800–806.
[3] M.I. Neiman-zade, A.A. Shkalikov. Schrödinger operators with singular potentials from spaces of multipliers // Mathematical Notes. 1999. V. 66, no. 5. P. 723–733.
[4] K.A. Mirzoev, A.A. Shkalikov. Differential operators of even order with distribution coefficients // Mathematical Notes. 2016. V. 99, no. 5. P. 788–793.
[5] A.A. Vladimirov. Theorems on representation and variational principles for self-adjoint operator matrices // arXiv:1403.2253.
[6] U. Freiberg. Refinement of the spectral asymptotics of generalized Krein–Feller operators // Forum Math. 2011. V. 23. P. 427–445.
[7] A.A. Vladimirov. On a class of singular Sturm–Liouville problems // arXiv:1211.2009.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07520v2 | 20170125234001 | Reaching for the quantum limits in the simultaneous estimation of phase and phase diffusion | [
"Magdalena Szczykulska",
"Tillmann Baumgratz",
"Animesh Datta"
] | quant-ph | [
"quant-ph"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.07775v1 | 20170126165920 | A Forward Model at Purkinje Cell Synapses Facilitates Cerebellar Anticipatory Control | [
"Ivan Herreros-Alonso",
"Xerxes D. Arsiwalla",
"Paul F. M. J. Verschure"
] | q-bio.NC | [
"q-bio.NC",
"cs.SY",
"math.OC"
] |
How does our motor system solve the problem of anticipatory control in spite of a wide spectrum of response dynamics from different musculo-skeletal systems, transport delays as well as response latencies throughout the central nervous system? To a great extent, our highly-skilled motor responses are a result of a reactive feedback system, originating in the brain-stem and spinal cord, combined with a feed-forward anticipatory system, that is adaptively fine-tuned by sensory experience and originates in the cerebellum. Based on that interaction we design the counterfactual predictive control (CFPC) architecture, an anticipatory adaptive motor control scheme in which a feed-forward module, based on the cerebellum, steers an error feedback controller with counterfactual error signals. Those are signals that trigger reactions as actual errors would, but that do not code for any current or forthcoming errors. In order to determine the optimal learning strategy, we derive a novel learning rule for the feed-forward module that involves an eligibility trace and operates at the synaptic level. In particular, our eligibility trace provides a mechanism beyond co-incidence detection in that it convolves a history of prior synaptic inputs with error signals. In the context of cerebellar physiology, this solution implies that Purkinje cell synapses should generate eligibility traces using a forward model of the system being controlled. From an engineering perspective, CFPC provides a general-purpose anticipatory control architecture equipped with a learning rule that exploits the full dynamics of the closed-loop system.
§ INTRODUCTION
Learning and anticipation are central features of cerebellar computation and function <cit.>: the cerebellum learns from experience and is able to anticipate events, thereby complementing reactive feedback control with an anticipatory feed-forward one <cit.>.
This interpretation is based on a series of anticipatory motor behaviors that originate in the cerebellum. For instance, anticipation is a crucial component of acquired behavior in eye-blink conditioning <cit.>, a trial-by-trial learning protocol where an initially neutral stimulus such as a tone or a light (the conditioning stimulus, CS) is followed, after a fixed delay, by a noxious one, such as an air puff to the eye (the unconditioned stimulus, US). During early trials, a protective unconditioned response (UR), a blink, occurs reflexively in a feedback manner following the US. After training, though, a well-timed anticipatory blink (the conditioned response, CR) precedes the US. Thus, learning results in the (partial) transference from an initial feedback action to an anticipatory (or predictive) feed-forward one. Similar responses occur during anticipatory postural adjustments, which are postural changes that precede voluntary motor movements, such as raising an arm while standing <cit.>. The goal of these anticipatory adjustments is to counteract the postural and equilibrium disturbances that voluntary movements introduce. These behaviors can be seen as feedback reactions to events that, after learning, have been transferred to feed-forward actions anticipating the predicted events.
Anticipatory feed-forward control can yield high performance gains over feedback control whenever the feedback loop exhibits transmission (or transport) delays <cit.>. However, even if a plant has negligible transmission delays, it may still have sizable inertial latencies. For example, if we apply a force to a visco-elastic plant, its peak velocity will be achieved after a certain delay; i.e. the velocity itself will lag the force. An efficient way to counteract this lag will be to apply forces anticipating changes in the desired velocity. That is, anticipation can be beneficial even when one can act instantaneously on the plant. Given that, here we address two questions: what is the optimal strategy to learn anticipatory actions in a cerebellar-based architecture? and how could it be implemented in the cerebellum?
To answer that we design the counterfactual predictive control (CFPC) scheme, a cerebellar-based adaptive-anticipatory control architecture that learns to anticipate performance errors from experience. The CFPC scheme is motivated from neuro-anatomy and physiology of eye-blink conditioning. It includes a reactive controller, which is an output-error feedback controller that models brain stem reflexes actuating on eyelid muscles, and a feed-forward adaptive component that models the cerebellum and learns to associate its inputs with the error signals driving the reactive controller. With CFPC we propose a generic scheme in which a feed-forward module enhances the performance of a reactive error feedback controller steering it with signals that facilitate anticipation, namely, with counterfactual errors. However, within CFPC, even if these counterfactual errors that enable predictive control are learned based on past errors in behavior, they do not reflect any current or forthcoming error in the ongoing behavior.
In addition to eye-blink conditioning and postural adjustments, the interaction between reactive and cerebellar-dependent acquired anticipatory behavior has also been studied in paradigms such as visually-guided smooth pursuit eye movements <cit.>. All these paradigms can be abstracted as tasks in which the same predictive stimuli and disturbance or reference signal are repeatedly experienced. In accordance with that, we operate our control scheme in trial-by-trial (batch) mode. With that, we derive a learning rule for anticipatory control that modifies the well-known least-mean-squares/Widrow-Hoff rule with an eligibility trace. More specifically, our model predicts that, to facilitate learning, parallel fibers to Purkinje cell synapses implement a forward model that generates an eligibility trace. Finally, to stress that CFPC is not specific to eye-blink conditioning, we demonstrate its application with a smooth pursuit task.
§ METHODS
§.§ Cerebellar Model
We follow the simplifying approach of modeling the cerebellum as a linear adaptive filter, while focusing on computations at the level of the Purkinje cells, which are the main output cells of the cerebellar cortex <cit.>. Over the mossy fibers, the cerebellum receives a wide range of inputs. Those inputs reach Purkinje cells via parallel fibers (Fig. <ref>), which cross dendritic trees of Purkinje cells in a ratio of up to 1.5 × 10^6 parallel fiber synapses per cell <cit.>. We denote the signal carried by a particular fiber as x_j, j ∈ [1,G], with G equal to the total number of input fibers. These inputs from the mossy/parallel fiber pathway carry contextual information (interoceptive or exteroceptive) that allows the Purkinje cell to generate a functional output. We refer to these inputs as cortical bases, indicating that they are localized at the cerebellar cortex and that they provide a repertoire of states and inputs that the cerebellum combines to generate its output o. As we will develop a discrete time analysis of the system, we use n to indicate time (or time-step). The output of the cerebellum at any time point n results from a weighted sum of those cortical bases. w_j indicates the weight or synaptic efficacy associated with the fiber j. Thus, we have x[n] = [ x_1[n], … , x_G[n] ]^⊺ and w[n]=[ w_1[n], … , w_G[n] ]^⊺ (where the transpose, ^⊺, indicates that x[n] and w[n] are column vectors) containing the set of inputs and synaptic weights at time n, respectively, which determine the output of the cerebellum according to
o[n]=x[n]^⊺w[n]
The adaptive feed-forward control of the cerebellum stems from updating the weights according to a rule of the form
Δ w_j[n+1]=f(x_j[n], …, x_j[1], e[n],Θ)
where Θ denotes global parameters of the learning rule; x_j[n], …, x_j[1], the history of its pre-synaptic inputs of synapse j; and e[n], an error signal that is the same for all synapses, corresponding to the difference between the desired, r, and the actual output, y, of the controlled plant. Note that in drawing an analogy with the eye-blink conditioning paradigm, we use the simplifying convention of considering the noxious stimulus (the air-puff) as a reference, r, that indicates that the eyelids should close; the closure of the eyelid as the output of the plant, y; and the sensory response to the noxious stimulus as an error, e, that encodes the difference between the desired, r, and the actual eyelid closures, y. Given this, we advance a new learning rule, f, that achieves optimal performance in the context of eye-blink conditioning and other cerebellar learning paradigms.
§.§ Cerebellar Control Architecture
We embed the adaptive filter cerebellar module in a layered control architecture, namely the CFPC architecture, based on the interaction between brain stem motor nuclei driving motor reflexes and the cerebellum, such as the one established between the cerebellar microcircuit responsible for conditioned responses and the brain stem reflex circuitry that produces unconditioned eye-blinks <cit.> (Fig. <ref> left). Note that in our interpretation of this anatomy we assume that cerebellar output, o, feeds the lower reflex controller (Fig. <ref> right). Put in control theory terms, within the CFPC scheme an adaptive feed-forward layer supplements a negative feedback controller steering it with feed-forward signals.
Our architecture uses a single-input single-output negative-feedback controller. The controller receives as input the output error e=r-y. For the derivation of the learning algorithm, we assume that both plant and controller are linear and time-invariant (LTI) systems.
Importantly, the feedback controller and the plant form a reactive closed-loop system, that mathematically can be seen as a system that maps the reference, r, into the plant's output, y.
A feed-forward layer that contains the above-mentioned cerebellar model provides the negative feedback controller with an additional input signal, o. We refer to o as a counterfactual error signal since, although it mechanistically drives the negative feedback controller analogously to an error signal, it is not an actual error. The counterfactual error is generated by the feed-forward module that receives an output error, e, as its teaching signal. Notably, from the point of view of the reactive layer closed-loop system, o can also be interpreted as a signal that offsets r. In other words, even if r remains the reference that sets the target of behavior, r+o functions as the effective reference that drives the closed-loop system.
§ RESULTS
§.§ Derivation of the gradient descent update rule for the cerebellar control architecture
We apply the CFPC architecture defined in the previous section to a task that consists in following a finite reference signal 𝐫∈ℝ^N that is repeated trial-by-trial. To analyze this system, we use the discrete time formalism and assume that all components are linear time-invariant (LTI). Given this, both reactive controller and plant can be lumped together into a closed-loop dynamical system, that can be described with the dynamics 𝐀, input 𝐁, measurement 𝐂 and feed-through 𝐃 matrices. In general, these matrices describe how the state of a dynamical system autonomously evolves with time, 𝐀; how inputs affect system states, 𝐁; how states are mapped into outputs, 𝐂; and how inputs instantaneously affect the system's output 𝐃 <cit.>.
As we consider a reference of a finite length N, we can construct the N-by-N transfer matrix 𝒯 as follows <cit.>
𝒯 = [[ D 0 0 ⋯ 0; CB D 0 ⋯ 0; CAB CB D ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; CA^N-2B CA^N-3B CA^N-4B ⋯ D ]]
With this transfer matrix we can map any given reference 𝐫 into an output 𝐲_r using 𝐲_r=𝒯𝐫, obtaining what would have been the complete output trajectory of the plant on an entirely feedback-driven trial. Note that the first column of 𝒯 contains the impulse response curve of the closed-loop system, while the rest of the columns are obtained shifting that impulse response down. Therefore, we can build the transfer matrix 𝒯 either in a model-based manner, deriving the state-space characterization of the closed-loop system, or in measurement-based manner, measuring the impulse response curve. Additionally, note that (𝐈-𝒯)𝐫 yields the error of the feedback control in following the reference, a signal which we denote with 𝐞_0.
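For concreteness, the construction of 𝒯 from the closed-loop state-space matrices can be sketched in a few lines of Python (a minimal sketch with a hypothetical helper name, assuming a SISO closed loop with A, B, C given as NumPy arrays and D a scalar):

```python
import numpy as np

def transfer_matrix(A, B, C, D, N):
    # First column is the closed-loop impulse response [D, CB, CAB, ...];
    # each subsequent column shifts it down (lower-triangular Toeplitz).
    h = np.zeros(N)
    h[0] = D
    s = np.array(B, dtype=float)    # running A^(n-1) B
    for n in range(1, N):
        h[n] = C @ s                # Markov parameter C A^(n-1) B
        s = A @ s
    T = np.zeros((N, N))
    for j in range(N):
        T[j:, j] = h[:N - j]
    return T
```

With it, 𝐞_0 is simply (np.eye(N) - T) @ r.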
Let 𝐨∈ℝ^N be the entire feed-forward signal for a given trial. Given commutativity, we can consider that from the point of view of the closed-loop system o is added directly to the reference 𝐫, (Fig. <ref> right). In that case, we can use 𝐲=𝒯(𝐫+𝐨) to obtain the output of the closed-loop system when it is driven by both the reference and the feed-forward signal.
The feed-forward module only outputs linear combinations of a set of bases. Let 𝐗∈ℝ^N × G be a matrix with the content of the G bases during all the N time steps of a trial. The feed-forward signal becomes 𝐨=𝐗𝐰, where 𝐰∈ℝ^G contains the mixing weights. Hence, the output of the plant given a particular 𝐰 becomes 𝐲=𝒯(𝐫+𝐗𝐰).
We implement learning as the process of adjusting the weights 𝐰 of the feed-forward module in a trial-by-trial manner. At each trial the same reference signal, 𝐫, and bases, 𝐗, are repeated. Through learning we want to converge to the optimal weight vector 𝐰^* defined as
𝐰^* = argmin_𝐰 c(𝐰) = argmin_𝐰 1/2 𝐞^⊺𝐞 = argmin_𝐰 1/2 (𝐫-𝒯(𝐫+𝐗𝐰))^⊺(𝐫-𝒯(𝐫+𝐗𝐰))
where c indicates the objective function to minimize, namely the L_2 norm or sum of squared errors. With the substitution 𝐗̃=𝒯𝐗 and using 𝐞_0 = (𝐈-𝒯)𝐫, the minimization problem can be cast as a canonical linear least-squares problem:
𝐰^* = argmin_𝐰 1/2 (𝐞_0-𝐗̃𝐰)^⊺(𝐞_0-𝐗̃𝐰)
On the one hand, this allows us to directly find the least-squares solution for 𝐰^*, that is, 𝐰^*=𝐗̃^†𝐞_0, where † denotes the Moore-Penrose pseudo-inverse. On the other hand, and more interestingly, with 𝐰[k] being the weights at trial k and having 𝐞[k] = 𝐞_0-𝐗̃𝐰[k], we can obtain the gradient of the error function at trial k with respect to 𝐰 as follows:
∇_w c = -𝐗̃^⊺𝐞[k] = -𝐗^⊺𝒯^⊺ 𝐞[k]
Thus, setting η as a properly scaled learning rate (the only global parameter Θ of the rule), we can derive the following gradient descent strategy for the update of the weights between trials:
𝐰[k+1] = 𝐰[k] + η𝐗^⊺𝒯^⊺𝐞[k]
This solves for the learning rule f in Eq. <ref>. Note that f is consistent with both the cerebellar anatomy (Fig. <ref> left) and the control architecture (Fig. <ref> right) in that the feed-forward module/cerebellum only requires two signals to update its weights/synaptic efficacies: the basis inputs, 𝐗, and the error signal, 𝐞.
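The resulting trial-by-trial learning loop can then be sketched as follows (again a minimal illustration with hypothetical names, reusing the transfer_matrix helper above):

```python
def cfpc_batch_learning(T, X, r, eta, n_trials):
    # Trial-by-trial gradient descent, w[k+1] = w[k] + eta * X^T T^T e[k].
    w = np.zeros(X.shape[1])
    for _ in range(n_trials):
        y = T @ (r + X @ w)              # closed-loop output driven by r and o = X w
        e = r - y                        # output error of the current trial
        w = w + eta * (X.T @ (T.T @ e))  # gradient-descent weight update
    return w
```

For reference, the closed-form least-squares solution is w_star = np.linalg.pinv(T @ X) @ e0.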
§.§ 𝒯^⊺ facilitates a synaptic eligibility trace
The standard least mean squares (LMS) rule (also known as Widrow-Hoff or decorrelation learning rule) can be represented in its batch version as 𝐰[k+1] = 𝐰[k] + η𝐗^⊺𝐞[k]. Hence, the only difference between the batch LMS rule and the one we have derived is the insertion of the matrix factor 𝒯^⊺. Now we will show how this factor acts as a filter that computes an eligibility trace at each weight/synapse.
Note that the update of a single weight, according to Eq. <ref>, becomes
w_j[k+1] = w_j[k] + η𝐱_j^⊺𝒯^⊺𝐞[k]
where 𝐱_j contains the sequence of values of the cortical basis j during the entire trial. This can be rewritten as
w_j[k+1] = w_j[k] + η𝐡_j^⊺𝐞[k]
with 𝐡_j ≡𝒯𝐱_j. The above inner product can be expressed as a sum of scalar products
w_j[k+1] = w_j[k] + η∑_n=1^N 𝐡_j[n] 𝐞[k,n]
where n indexes the within-trial time-step. Note that 𝐞[k] in Eq. <ref> refers to the whole error signal at trial k, whereas 𝐞[k,n] in Eq. <ref> refers to the error value in the n-th time-step of trial k. It is now clear that each 𝐡_j[n] weighs how much an error arriving at time n should modify the weight w_j, which is precisely the role of an eligibility trace. Note that since 𝒯 contains in its columns/rows shifted repetitions of the impulse response curve of the closed-loop system, the eligibility trace codes, at any time n, the convolution of the sequence of previous inputs with the impulse-response curve of the reactive-layer closed loop. Indeed, in each synapse, the eligibility trace is generated by a forward model of the closed-loop system that is exclusively driven by the basis signal.
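In the notation of the sketches above, the whole set of eligibility traces of a trial is obtained in one operation (T, X, e, and eta as before):

```python
H = T @ X                # column j holds the eligibility trace h_j = T x_j
dw = eta * (H.T @ e)     # identical to eta * X.T @ (T.T @ e) in the batch rule
```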
Consequently, our main result is that by deriving a gradient descent algorithm for the CFPC cerebellar control architecture we have obtained an exact definition of the suitable eligibility trace. That definition guarantees that the set of weights/synaptic efficacies are updated in a locally optimal manner in the weights' space.
§.§ On-line gradient descent algorithm
The trial-by-trial formulation above allowed for a straightforward derivation of the (batch) gradient descent algorithm. As it lumped together all computations occurring in a same trial, it accounted for time within the trial implicitly rather than explicitly: one-dimensional time-signals were mapped onto points in a high-dimensional space. However, after having established the gradient descent algorithm, we can implement the same rule in an on-line manner, dropping the repetitiveness assumption inherent to trial-by-trial learning and performing all computations locally in time.
Each weight/synapse must have a process associated to it that outputs the eligibility trace. That process passes the incoming (unweighted) basis signal through a (forward) model of the closed-loop as follows:
[ s_j[n+1] = As_j[n] + B x_j[n]; h_j[n] = Cs_j[n] + D x_j[n] ]
where matrices A, B, C and D refer to the closed-loop system (they are the same matrices that we used to define the transfer matrix 𝒯), and s_j[n] is the state vector of the forward model of the synapse j at time-step n. In practice, each “synaptic” forward model computes what would have been the effect of having driven the closed-loop system with each basis signal alone. Given the superposition principle, the outcome of that computation can also be interpreted as saying that h_j[n] indicates what would have been the displacement over the current output of the plant, y[n], achieved feeding the closed-loop system with the basis signal x_j.
The process of weight update is completed as follows:
w_j[n+1] = w_j[n] + η h_j[n] e[n]
At each time step n, the error signal e[n] is multiplied by the current value of the eligibility trace h_j[n], scaled by the learning rate η, and added to the current weight w_j[n]. Therefore, whereas the contribution of each basis to the output of the adaptive filter depends only on its current value and weight, the change in weight depends on the current and past values passed through a forward model of the closed-loop dynamics.
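A minimal sketch of one on-line time-step (hypothetical helper; A, B, C, D are the closed-loop matrices, S stacks the per-synapse forward-model states column-wise, x holds the G basis values, and e is the scalar error at this step):

```python
def cfpc_online_step(A, B, C, D, S, x, e, w, eta):
    # One column of S per parallel-fiber basis, holding its forward-model state.
    h = S.T @ C + D * x              # h_j[n] = C s_j[n] + D x_j[n]
    w = w + eta * h * e              # w_j[n+1] = w_j[n] + eta h_j[n] e[n]
    S = A @ S + np.outer(B, x)       # s_j[n+1] = A s_j[n] + B x_j[n]
    return S, w
```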
§.§ Simulation of a visually-guided smooth pursuit task
We demonstrate the CFPC approach in an example of a visual smooth pursuit task in which the eyes have to track a target moving on a screen. Even though the simulation does not capture all the complexity of a smooth pursuit task, it illustrates our anticipatory control strategy. We model the plant (eye and ocular muscles) with a two-dimensional linear filter that maps motor commands into angular positions. Our model is an extension of the model in <cit.>, even though in that work the plant was considered in the context of the vestibulo-ocular reflex. In particular, we use a chain of two leaky integrators: a slow integrator with a relaxation constant of 100 ms drives the eyes back to the rest position; the second integrator, with a fast time constant of 3 ms, ensures that the change in position does not occur instantaneously. To this basic plant, we add a reactive control layer modeled as a proportional-integral (PI) error-feedback controller, with proportional gain k_p and integral gain k_i. The control loop includes a 50 ms delay in the error feedback, to account for both the actuation and the sensing latency. We choose gains such that reactive tracking lags the target by approximately 100 ms. This gives k_p=20 and k_i=100. To complete the anticipatory and adaptive control architecture, the closed-loop system is supplemented by the feed-forward module.
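One plausible discretization of this plant, given only as an illustration (the Euler scheme, the ordering of the two integrator stages, and the handling of the 50 ms delay are our assumptions, not specified in the text):

```python
dt = 1e-3                            # 1 ms simulation time-step
tau_fast, tau_slow = 0.003, 0.100    # leaky-integrator time constants
# Forward-Euler discretization of the two cascaded leaky integrators;
# the 50 ms feedback delay would be appended as extra delay states.
A_p = np.array([[1 - dt / tau_fast, 0.0],
                [dt,                1 - dt / tau_slow]])
B_p = np.array([dt, 0.0])            # motor command enters the fast stage
C_p = np.array([0.0, 1.0])           # output: eye angular position (slow stage)
D_p = 0.0
k_p, k_i = 20.0, 100.0               # PI gains of the reactive controller
```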
The architecture implementing the forward-model-based gradient descent algorithm is applied to a task structured in trials of 2.5 sec duration. Within each trial, a target remains still at the center of the visual scene for a duration of 0.5 sec, next it moves rightwards for 0.5 sec with constant velocity, remains still for 0.5 sec, and then repeats the sequence of movements in reverse, returning to the center. The cerebellar component receives 20 Gaussian basis signals (𝐗) whose receptive fields are defined in the temporal domain, relative to trial onset, with a width (standard deviation) of 50 ms and spaced by 100 ms. The whole system is simulated using a 1 ms time-step. To construct the matrix 𝒯 we computed the closed-loop system's impulse response.
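The basis matrix can be generated as follows (the exact offsets of the receptive-field centers are our assumption; this continues the sketch above):

```python
t = np.arange(0.0, 2.5, dt)                  # 2.5 s trial at 1 ms resolution
centers = 0.05 + 0.1 * np.arange(20)         # 20 centers, 100 ms apart (offset assumed)
sigma = 0.05                                 # 50 ms standard deviation
X = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / sigma) ** 2)
```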
At the first trial, before any learning, the output of the plant lags the reference signal by approximately 100 ms, converging to the target position only when the target remains still for about 300 ms (Fig. <ref> left). As a result of learning, the plant's behavior shifts from a reactive to an anticipatory mode, being able to track the reference without any delay. Indeed, the error, which is sizable during the target displacement before learning, almost completely disappears by the 50^th trial (Fig. <ref> right). That cancellation results from learning the weights that generate a feed-forward predictive signal that leads the changes in the reference signal (onsets and offsets of target movements) by approximately 100 ms (Fig. <ref> right). Indeed, convergence of the algorithm is remarkably fast, and by trial 7 it has almost converged to the optimal solution (Fig. <ref>).
To assess how much our forward-model-based eligibility trace contributes to performance, we test three alternative algorithms. In all cases we employ the same control architecture, changing only the plasticity rule: we either use no eligibility trace, thus implementing the basic Widrow-Hoff (WH) learning rule, or extend the WH rule with a delta-function eligibility trace that matches the latency of the error feedback (50 ms) or slightly exceeds it (70 ms). Performance with the basic WH rule worsens rapidly, whereas performance with the WH learning rule using a “pure delay" eligibility trace matched to the transport delay improves, but not as fast as with the forward-model-based eligibility trace (Fig. <ref>). Indeed, in this case, the best strategy for implementing a delayed delta eligibility trace is setting a delay exceeding the transport delay by around 20 ms, thus matching the peak of the impulse response. With that delay (70 ms), the system performs almost as well as with the forward-model eligibility trace. This last result implies that, even though the literature usually emphasizes the role of transport delays, eligibility traces also account for response lags due to intrinsic dynamics of the plant.
To summarize our results, we have shown with a basic simulation of a visual smooth pursuit task that generating the eligibility trace by means of a forward model ensures convergence to the optimal solution and accelerates learning by guaranteeing that it follows a gradient descent.
§ DISCUSSION
In this paper we have introduced a novel formulation of cerebellar anticipatory control, consistent with experimental evidence, in which a forward model has emerged naturally at the level of Purkinje cell synapses. From a machine learning perspective, we have also provided an optimality argument for the derivation of an eligibility trace, a construct that was often thought of in more heuristic terms as a mechanism to bridge time-delays <cit.>.
The first seminal computational models of the cerebellum emphasized its role as an associative memory <cit.>. Later, the cerebellum was investigated as a device processing correlated time signals <cit.>. In this latter framework, the computational concept of an eligibility trace emerged as a heuristic construct that allowed to compensate for transmission delays in the circuit <cit.>, which introduced lags in the cross-correlation between signals. Concretely, that was referred to as the problem of delayed error feedback, due to which, by the time an error signal reaches a cell, the synapses accountable for that error are no longer the ones currently active, but those that were active at the time when the motor signals that caused the actual error were generated. This view has, however, neglected the fact that beyond transport delays, the response dynamics of physical plants also influence how past pre-synaptic signals could have related to the current output of the plant. Indeed, for a linear plant, the impulse-response function of the plant provides the complete description of how inputs will drive the system, and as such, integrates transmission delays as well as the dynamics of the plant.
Even though cerebellar microcircuits have been used as models for building control architectures, e.g., the feedback-error learning model <cit.>, our CFPC is novel in that it links the cerebellum to the input of the feedback controller, ensuring that the computational features of the feedback controller are exploited at all times. Within the domain of adaptive control, there are remarkable similarities at the functional level between CFPC and iterative learning control (ILC) <cit.>, which is an input design technique for learning optimal control signals in repetitive tasks. The difference between our CFPC and ILC lies in the fact that ILC controllers directly learn a control signal, whereas the CFPC learns a counterfactual error signal that steers a feedback controller. However, the similarity between the two approaches can help in extending CFPC to more complex control tasks.
With our CFPC framework, we have modeled the cerebellar system at a very high level of abstraction: we have not included bio-physical constraints underlying neural computations, obviated known anatomical connections such as the cerebellar nucleo-olivary inhibition <cit.>, and made simplifications such as collapsing cerebellar cortex and nuclei into the same computational unit. On the one hand, such a choice of high-level abstraction may indeed be beneficial for deriving general-purpose machine learning or adaptive control algorithms. On the other hand, it is remarkable that in spite of this abstraction our framework makes fine-grained predictions at the micro-level of biological processes. Namely, that in a cerebellar microcircuit <cit.>, the response dynamics of secondary messengers <cit.> regulating plasticity of Purkinje cell synapses to parallel fibers must mimic the dynamics of the motor system being controlled by that cerebellar microcircuit. Notably, the logical consequence of this prediction, that different Purkinje cells should display different plasticity rules according to the system that they control, has been validated by recording single Purkinje cells in vivo <cit.>.
In conclusion, we find that a normative interpretation of plasticity rules in Purkinje cell synapses emerges from our systems level CFPC computational architecture. That is, in order to generate optimal eligibility traces, synapses must include a forward model of the controlled subsystem. This conclusion, in the broader picture, suggests that synapses are not merely components of multiplicative gains, but rather the loci of complex dynamic computations that are relevant from a functional perspective, both, in terms of optimizing storage capacity <cit.> and fine-tuning learning rules to behavioral requirements.
§.§.§ Acknowledgments
The research leading to these results has received funding from the European Commission’s Horizon 2020 socSMC project (socSMC-641321H2020-FETPROACT-2014) and by the European Research Council’s CDAC project (ERC-2013-ADG 341196).
http://arxiv.org/abs/1701.07603v3 | 20170126075233 | Quantum Work Fluctuations in connection with Jarzynski Equality | [
"Juan D. Jaramillo",
"Jiawen Deng",
"Jiangbin Gong"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"quant-ph"
] |
Department of Physics, National University of Singapore, Singapore 117546
NUS Graduate School for Integrative Science and Engineering, Singapore 117597
[email protected]
Department of Physics, National University of Singapore, Singapore 117546
NUS Graduate School for Integrative Science and Engineering, Singapore 117597
A result of great theoretical and experimental interest, Jarzynski equality predicts a free energy change Δ F of a system at inverse temperature β from an ensemble average of non-equilibrium exponential work, i.e., ⟨ e^-β W⟩ =e^-βΔ F. The number of experimental work values needed to reach a given accuracy of Δ F is determined by the variance of e^-β W, denoted var(e^-β W). We discover in this work that var(e^-β W) in both harmonic and anharmonic Hamiltonian systems can systematically diverge in non-adiabatic work protocols, even when the adiabatic protocols do not suffer from such divergence. This divergence may be regarded as a type of dynamically induced phase transition in work fluctuations. For a quantum harmonic oscillator with time-dependent trapping frequency as a working example, any non-adiabatic work protocol is found to yield a diverging var(e^-β W) at sufficiently low temperatures, markedly different from the classical behavior. The divergence of var(e^-β W) indicates the too-far-from-equilibrium nature of a non-adiabatic work protocol and makes it compulsory to apply designed control fields to suppress the quantum work fluctuations in order to test Jarzynski equality.
Quantum Work Fluctuations in connection with Jarzynski Equality
Jiangbin Gong
December 30, 2023
===============================================================
Introduction. Work fluctuation theorems are a central topic in modern non-equilibrium statistical mechanics <cit.>. One outstanding result is Jarzynski equality (JE) <cit.> which makes use of an average of non-equilibrium exponential work, i.e., e^-β W (where W is the work and β is the inverse temperature) to obtain important equilibrium information. It takes the form ⟨ e^-β W⟩ =e^-βΔ F, where ⟨·⟩ represents an ensemble average of all possible work values with the system initially prepared in a Gibbs state, Δ F is the free energy difference between initial and final systems at the same β. Regarded as a recent breakthrough in non-equilibrium statistical mechanics, JE holds no matter how the work protocol is executed, fast or slow, adiabatic or highly non-adiabatic. The quantum version of JE takes precisely the same form <cit.>, provided that the value of quantum work is interpreted as the energy difference obtained from two energy projective measurements.
It is also convenient to define the so-called dissipated work W_ dis= W-Δ F. Then JE gives ⟨ e^-β W_ dis⟩=1.
Early applications of JE focused on biomolecular systems, where it is a well-established method to compute Δ F <cit.>. Proof-of-principle experiments were carried out <cit.> following an early proposal by Hummer and Szabo <cit.>. Other experiments include those using classical oscillators <cit.>. On the quantum side, the first experiment testing JE was first reported for a quantum spin <cit.>, relying on interferometric schemes to reconstruct work distributions <cit.>. While other settings have been proposed, e.g., circuit QED <cit.>, a more recent experimental confirmation, following the proposal <cit.>, uses a trapped ion subject to an external force <cit.>. These experiments are by no means straightforward. In both classical and quantum cases, the ensemble average of a highly nonlinear (in fact, exponential) function of work, as requested by JE, could become practically demanding in yielding a well-converged result for Δ F, as rare events can potentially dominate the average <cit.>. This in fact has motivated a number of studies in the literature to investigate the errors in predicting Δ F based on JE <cit.>.
As suggested by the central limit theorem (CLT), the errors are related to the variance in the exponential work, i.e., var(e^-β W)≡⟨ e^-2β W⟩- e^-2βΔ F. The larger var(e^-β W) is, the more realizations of W we must collect to reach a given accuracy in Δ F. To obtain Δ F within an error of k_BT, the number of W realizations needed was estimated as var(e^-β W_ dis)= var(e^-β W)/e^-2βΔ F <cit.>.
The quantity var(e^-β W_ dis) involves the ensemble average of exponential quantities, and the Boltzmann exponential cutoff for high-energy states might not always be effective in suppressing the contributions from the high-energy tail.
Indeed, in contrast to JE itself which is always well behaved, var(e^-β W_ dis) can diverge, suggesting higher-energy components from the initial Gibbs distribution may contribute even more to the variance. Surprisingly, prior to our work <cit.>, this possibility of divergence was rarely mentioned, with one exception <cit.> treating an adiabatic work protocol. According to the principle of minimal work fluctuations (PMWF) <cit.>, once var(e^-β W_ dis) diverges for an adiabatic protocol, then it will suffer from analogous divergence under arbitrary work protocols sharing the same initial and final system Hamiltonians. The accuracy in estimating Δ F in such situations becomes problematic. One can no longer rely on the CLT to predict how the error scales with the number of experimental runs. A generalization of the CLT can prove useful <cit.> for a specific situation, but in general the error estimate under a diverging var(e^-β W_ dis) is unknown.
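To make the role of this variance concrete, a minimal estimator of Δ F and of the sample variance of e^-β W from measured work values might read (our own Python helper, not part of any cited experiment):

```python
import numpy as np

def jarzynski_estimate(W, beta):
    # Delta-F estimate from sampled work values, -(1/beta) ln <e^{-beta W}>,
    # together with the sample variance of e^{-beta W} that controls convergence.
    expw = np.exp(-beta * np.asarray(W, dtype=float))
    return -np.log(expw.mean()) / beta, expw.var()
```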
This work discovers the divergence of var(e^-β W_ dis) that is only present for non-adiabatic dynamics.
Such a possibility is consistent with PMWF. This represents intriguing and more complex situations, where the “phase" boundary between a finite var(e^-β W_ dis) and a diverging var(e^-β W_ dis) is yet to be located case by case.
Our working example below is mainly a quantum harmonic oscillator (QHO) with time-dependent trapping frequency ω, because recent experiments to test quantum JE <cit.> and to construct quantum heat engines <cit.> were based on QHO (we consider anharmonic Hamiltonians in the end).
To our great surprise, even in such a prototypic system with all dynamical aspects known for so many years, divergence of var(e^-β W_ dis) may be induced by non-adiabatic work protocols. The domain of divergence as a function of temperature and other dynamical parameters is found to possess complicated structures. In general, so long as the dynamics is non-adiabatic, quantum effects at lower temperatures tend to enlarge the domain of divergence in var(e^-β W_ dis). Indeed, even when the classical var(e^-β W_ dis) is well behaved, its quantum counterpart is bound to diverge at sufficiently low temperatures. This itself constitutes a new aspect of quantum effects in non-equilibrium statistical mechanics. Our results indicate (i) that direct applicability of JE in free energy estimates can become much limited as a system of interest approaches the deep quantum regime, and (ii) that quantum non-adiabatic effects or “inner friction" <cit.> alone can induce critical changes in work fluctuations. This work shall also motivate parallel studies on several generalized quantum fluctuation theorems <cit.>.
Work characteristic function and model system. Throughout we focus on work applied to an isolated system.
Let H(t) be the time-dependent Hamiltonian subject to a work protocol.
The work distribution function is assumed to be P(W). The Fourier transformation of P(W) yields the so-called work characteristic function <cit.>
G(μ)=∫ dW e^iμ WP(W).
For a work protocol starting at t=0 and ending at t=τ, G(μ) can be expressed as a quantum correlation function G(μ)= Tr[e^iμ H_H(τ)e^-iμ H(0)ρ(0)], where ρ(0) is the initial Gibbs state and H_H(τ) is the final Hamiltonian in the Heisenberg representation. While the value of G(μ) at μ=iβ yields JE, the value of G(μ) at μ=2iβ determines var(e^-β W_ dis). That is,
var(e^-β W_ dis)=G(2iβ)/e^-2βΔ F-1.
This observation indicates that a divergence in G(2iβ) is equivalent to a divergence in var(e^-β W_ dis) or in var(e^-β W). The quantum correlation function G(μ) on the imaginary axis is hence our central object.
An operative expression of (<ref>) is given by
G(μ)=∑_m,n exp[iμ(E_n^τ-E_m^0)]P_m,n^0→τP_m^0,
where P^0→τ_m,n represents the transition probability from the mth state of H(0) to the nth state of H(τ). Equations (<ref>) and (<ref>) indicate that, for a system with a finite-dimensional Hilbert space such as a spin system, var(e^-β W_ dis) is obtained from a finite summation and will always be well behaved.
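Such a truncated evaluation of Eq. (<ref>) can be sketched as follows (a hypothetical Python helper continuing the sketch above; for var(e^-β W_ dis) one evaluates it at mu = 2j*beta):

```python
def G_truncated(mu, E0, Etau, P, p0):
    # Truncated double sum: E0[m], Etau[n] are energy levels, P[m, n] the
    # transition probabilities, and p0[m] the initial Gibbs populations.
    phase = np.exp(1j * mu * (Etau[None, :] - E0[:, None]))
    return np.sum(phase * P * p0[:, None])
```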
To lay a solid ground for our surprising findings, we aim to present our main results with computational and analytical calculations supporting each other. With this in mind, the parametric QHO system seems to be the best choice as a model with an infinite-dimensional Hilbert space. QHO belongs to the algebraic class SU(1,1) of integrable systems, for which many solutions in the form of exact propagators can be found in the literature <cit.>. Systems in the same algebraic class share common dynamical features. For example, the Calogero-Sutherland model despite being an interacting system is dynamically analogous to QHO <cit.>. In addition, in the history of quantum mechanics and statistical physics, QHO plays a pivotal role, providing general insights into quantum zero-point energy, low-temperature behavior of the heat capacity of a crystal, open quantum systems, and the relationship between classical mechanics and quantum mechanics, etc. In particular, in understanding quantum-classical correspondence, tuning the dimensionless ratio of the thermal energy over the energy level spacing is essential to make transitions between classical and quantum regimes.
Consider then a QHO with driving in its trapping frequency, with H_QHO(ω(t))=p̂^2/2m+mω^2(t)q̂^2/2.
Work is done as the trapping frequency changes from ω_0=ω(0) to ω_1=ω(τ).
Thanks to Husimi, all P^0→τ_m,n are available as a function of the Husimi coefficient Q^∗≥ 1 <cit.>, which is defined as
Q^∗(t)=1/2(ω_0ω_1X^2+ω_0/ω_1Ẋ^2+ω_1/ω_0Y^2+1/(ω_0ω_1)Ẏ^2), where X(t) and Y(t) are the two solutions of the equation of motion of the corresponding classical oscillator <cit.>.
For adiabatic dynamics, Q^∗=1 and P^0→τ_m,n=δ_m,n.
Any non-adiabaticity is captured by deviations of Q^∗ from its adiabatic value Q^∗=1. For later use, we define the compression ratio κ≡ω_0/ω_1. For a sudden quench (sq) protocol τ→ 0, then Q^∗→ Q^∗_ sq=(κ+1/κ)/2.
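A short numerical sketch (our own illustration, not the authors' code; the function name husimi_Q, the linear frequency ramp, and all parameter values are assumptions) makes Q^∗ concrete: integrate the classical equation of motion for X(t) and Y(t) with the initial conditions X(0)=Ẏ(0)=0, Ẋ(0)=Y(0)=1 stated in the Supplementary Material, and evaluate the formula above at t=τ.

import numpy as np
from scipy.integrate import solve_ivp

def husimi_Q(omega_of_t, w0, w1, tau):
    def rhs(t, s):                          # s = (X, Xdot, Y, Ydot)
        X, Xd, Y, Yd = s
        w2 = omega_of_t(t) ** 2
        return [Xd, -w2 * X, Yd, -w2 * Y]   # x'' + w(t)^2 x = 0, twice
    sol = solve_ivp(rhs, (0.0, tau), [0.0, 1.0, 1.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    X, Xd, Y, Yd = sol.y[:, -1]
    return 0.5 * (w0 * w1 * X**2 + (w0 / w1) * Xd**2
                  + (w1 / w0) * Y**2 + Yd**2 / (w0 * w1))

w0, w1, tau = 1.0, 0.5, 3.0
omega = lambda t: w0 + (w1 - w0) * t / tau   # an illustrative linear ramp
print(husimi_Q(omega, w0, w1, tau))          # -> 1 only in the slow-driving limit
print(0.5 * (w0 / w1 + w1 / w0))             # sudden-quench value Q*_sq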
High-temperature and low-temperature regimes.
It is of interest to first examine the behavior of G(μ=2iβ) in the strictly adiabatic case, i.e., P^0→τ_m,n=δ_m,n. Because the convergence criterion of the resulting
geometric series in Eq. (<ref>) becomes iμω_1-(iμ+β)ω_0<0 with μ=2iβ, it is obvious that if κ≥ 2, then G(2iβ) and hence var(e^-β W_ dis) diverge. With the PMWF, one further concludes that the divergence occurs in all non-adiabatic work protocols.
Next we investigate the behavior of G(2iβ) under non-adiabatic work protocols, provided that adiabatic work protocols do not
yield divergence in G(2iβ), i.e., in the regime of κ< 2.
To that end, we partially resort to a compact expression for G(μ) <cit.>. That is,
G(μ)=√(2) (1-e^-βħω_0)e^iμħ(ω_1-ω_0)/2/√(Q^∗(1-e^2iμħω_1)(1-e^-2(iμ+β)ħω_0)+(1+e^2iμħω_1)(1+e^-2(iμ+β)ħω_0)-4e^iμħω_1 e^-(iμ+β)ħω_0).
Extra care is needed when one extends such a non-analytic
result based on integration assuming real μ to the imaginary axis of μ. The Jarzynski equality is always recovered from (<ref>) at μ=iβ <cit.>, i.e.,
G(iβ)=sinh(βħω_0/2)/sinh(βħω_1/2).
Encouraged by this, we cautiously use this compact expression to help us digest the possible critical boundary of divergence in G(2iβ) in the regime of κ<2. Equation (<ref>) then takes us to the following compact expression for G(2iβ),
G(2iβ)→√(2)sinh(βħω_0/2)/√(cosh(βħ(2ω_1-ω_0))-1-(Q^∗-1) sinh(2βħω_1)sinh(βħω_0)).
The real denominator of G(2iβ) in Eq. (<ref>) approaching zero signifies a boundary separating finite values of G(2iβ) from its diverging values. We stress, however, that all such “shortcut" conclusions are carefully checked against the original expression in Eq. (<ref>), with P^0→τ_m,n in the sum series truncated at some large values of m and n (see Supplementary Material <cit.>).
For the high-temperature (classical) regime βħω_0,1≪ 1 we have
G(2iβ) →κ/√((2-κ)^2-4κ(Q^∗-1)).
The boundary of divergence in G(2iβ) is identified at
κ_c=2(Q^∗-√(Q^∗ 2-1)).
Equation (<ref>) for adiabatic cases (Q^∗=1) still reproduces the known κ_c=2 boundary.
In generic non-adiabatic cases with Q^∗>1, the divergence boundary is pushed to smaller values, i.e., κ_c<2, in agreement with the classical result (see Supplementary Material <cit.>). Later on we consider specific work protocols to further digest this result. One important feature is that
this phase boundary at κ_c is no longer dependent upon temperature. Indeed, Eq. (<ref>) shows that G(2iβ) itself is temperature-independent in the high-temperature (classical) regime. Expanding the denominator of Eq. (<ref>) around κ_c under a given Q^∗, we find var(e^-β W_ dis) diverges as ∼ (κ_c-κ)^-1/2 as κ approaches the critical value κ_c from below.
The behavior of G(2iβ) is entirely different in the low-temperature (deep quantum) regime βħω_0,1≫ 1, where
the compact expression for G(2iβ) becomes
G(2iβ) →exp[βħ(ω_0-ω_1)]/√(1-((Q^∗-1)/2) [exp(2βħω_0)-1]).
For strictly adiabatic cases, i.e., Q^∗=1, Eq. (<ref>) is well behaved. However, for any work protocol with even slight non-adiabaticity, Q^∗>1, one observes that the denominator in Eq. (<ref>) hits zero or becomes imaginary at sufficiently low temperatures. This indicates that var(e^-β W_ dis) must diverge as temperature decreases, so long as the protocol is not strictly adiabatic. The boundary of divergence in var(e^-β W_ dis), within the low-temperature regime, is located at β_c= ln[(Q^∗+1)/(Q^∗-1)]/(2ħω_0), which does not explicitly depend on κ. Further, from Eq. (<ref>) we find that as β approaches β_c, var(e^-β W_ dis) diverges as
∼ (β_c-β)^-1/2.
To understand the divergence, one can actually focus on the contribution to var(e^-β W_ dis) made by the transitions from the 2n-th state of H(0) to the ground state of H(τ). After applying Stirling's
formula to P^0→τ_2n,0 for large n, this contribution is found to scale as
(1/√(n))[(Q^⋆-1)/(Q^⋆+1)]^n e^2nβħω_0.
This predicts the same β_c as above: for β>β_c, the rarer the initial state is (as sampled from the initial Boltzmann distribution), the more it contributes to var(e^-β W_ dis). In other words,
quantum transitions associated with very negative work values, though exponentially suppressed by the factor [(Q^⋆-1)/(Q^⋆+1)]^n, are not suppressed as sharply as
in classical cases, and hence cannot outcompete the exponentially increasing factor e^2nβħω_0 contained in e^-2β W at sufficiently low temperatures. This important insight is confirmed by the behavior of the classical and quantum deformed Jarzynski equalities we recently proposed <cit.>.
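This scaling is easy to probe numerically. In the sketch below (an illustration we add, with arbitrary parameter values, computed in log space to avoid overflow), partial sums of the n^-1/2[(Q^∗-1)/(Q^∗+1)]^n e^2nβħω_0 terms saturate for β<β_c and grow without bound for β>β_c:

import numpy as np

hbar, w0, Qs = 1.0, 1.0, 1.2
beta_c = np.log((Qs + 1) / (Qs - 1)) / (2 * hbar * w0)

for beta in (0.8 * beta_c, 1.2 * beta_c):
    n = np.arange(1, 401)
    log_t = (-0.5 * np.log(n) + n * np.log((Qs - 1) / (Qs + 1))
             + 2 * n * beta * hbar * w0)          # log of the n-th term
    terms = np.exp(log_t)
    # partial sums saturate below beta_c and keep growing above it
    print(beta / beta_c, terms[:100].sum(), terms.sum())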
All these predicted features are checked in numerics, with some examples shown in Fig. <ref>. There the results are obtained based on Eq. (<ref>) directly as well as from Eq. (<ref>). In the high-temperature regime (small β), all the plots var(e^-β W_ dis) vs ħβ (in dimensionless units) become flat, in agreement with Eq. (<ref>). This also indicates that the classical var(e^-β W_ dis) does not diverge in these cases. For lower temperatures, however, all the plotted curves tend to blow up, except for the strict adiabatic case (bottom curve) whose var(e^-β W_ dis) approaches zero as temperature decreases, in agreement with Eq. (<ref>).
As seen from Fig. <ref>, local minima in var(e^-β W_ dis) as a function of β might emerge, reflecting a competition between quantum fluctuations and thermal fluctuations. In addition, the divergent behavior of var(e^-β W_ dis) close to κ_c or β_c (obtained from direct numerics and not shown) is also found to agree with the scaling laws of (κ_c-κ)^-1/2 and (β_c-β)^-1/2.
Specific work protocols at intermediate temperatures. To further investigate the behavior of var(e^ -β W_ dis), we turn to a specific work protocol where the trapping frequency is suddenly quenched from ω_0 to ω_1. In this case Q^∗=(κ+ 1/κ)/2. In the high-temperature regime, Eq. (<ref>) yields a critical κ_c=√(2), namely, var(e^ -β W_ dis) diverges if ω_0≥√(2)ω_1. Fig. <ref> depicts the numerically obtained domain of divergence in var(e^ -β W_ dis) in terms of ω_0 and ω_1, with panel (a) exactly showing this high-temperature behavior. Quantum effects however dramatically enlarge the domain of divergence (white area) in var(e^ -β W_ dis): from panel (b) to (d), temperature decreases and the domain of divergence in var(e^ -β W_ dis) gradually invades the (classical) domain of finite var(e^ -β W_ dis) of panel (a). For even lower temperatures than shown in Fig. <ref>, the domain of convergence (gray area) collapses onto the line ω_0=ω_1, suggesting that any actual quench in ω will yield divergence in var(e^ -β W_ dis).
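The panels of this figure can be reproduced in a few lines by scanning the sign of the expression under the square root in the compact form of G(2iβ); the following sketch (our own reconstruction, with an arbitrary grid and ħβ=10) marks a point as divergent wherever that expression is non-positive:

import numpy as np

hbar, beta = 1.0, 10.0
w0 = np.linspace(0.05, 3.0, 300)[:, None]      # column: initial frequencies
w1 = np.linspace(0.05, 3.0, 300)[None, :]      # row: final frequencies

kappa = w0 / w1
Qs = 0.5 * (kappa + 1.0 / kappa)               # sudden-quench Husimi coefficient
D = (np.cosh(beta * hbar * (2 * w1 - w0)) - 1.0
     - (Qs - 1.0) * np.sinh(2 * beta * hbar * w1) * np.sinh(beta * hbar * w0))
divergent = D <= 0.0                           # the white area of the figure
print(divergent.mean())                        # fraction of the scanned plane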
We have also examined a finite-time work protocol, where the parameter dω/dt/ω^2(t) is chosen to be time-independent, with ω(t)=ω_1ω_0τ/[ω_0τ+t(ω_1-ω_0)] <cit.>.
Compared with the sudden quench case, the divergence domain is now fragmented into multiple domains with subtle phase boundaries.
In addition, as the temperature decreases, the domain of divergence again grows.
Similar disconnected regions of divergence are observed along the time of driving, where convergent domains are found around the adiabatic times <cit.> for which Q^∗ is close to 1. Detailed results are presented in Supplementary Material <cit.>.
Finally, it is worth mentioning the possibility of work protocols with Q^∗>Q^∗_sq, see for example <cit.>. In these cases var(e^-β W_ dis) may even diverge under compression of the harmonic potential (κ<1) at both high and low temperatures.
Before ending this section, we also mention our numerical studies of work fluctuations in quantum anharmonic oscillators and other systems such as particle in a box. There we gain similar insights. In particular, we consider a time-dependent quartic potential
with the total Hamiltonian H_a(t)=H_QHO(ω(t))+J cos(tπ/τ) q̂^4, where τ is the time of driving, as well as a time-independent quartic potential, with the total Hamiltonian H_b(t)=H_QHO(ω(t))+J q̂^4.
There quantum effects are also found to induce the divergence of var(e^-β W_ dis) even if it does not diverge in the classical (high-temperature) domain.
Fig. <ref> depicts a few such computational examples. In all these cases, the effect of a quartic potential is to shift the onset of divergence to higher temperatures as compared with the harmonic cases.
Conclusions. JE was hoped to take advantage of non-equilibrium work protocols to estimate the change of free energy in nanoscale and mesoscopic systems. The divergence in var(e^-β W_ dis) presents a hurdle to a direct application of JE. Note that if the divergence is solely induced by non-adiabatic quantum effects, then one promising solution is to use shortcuts to adiabaticity (STA) <cit.>, to go around this divergence and yet still realizing speedy work protocols (but with some price <cit.>). Indeed, one can even use var(e^-β W_ dis) as a minimization target to design control fields <cit.>.
Because quantum effects are seen to enlarge the domain of divergence drastically in the low temperature regime, using JE for free energy estimates in the deep quantum regime will not be fruitful in the absence of a designed control field. Nevertheless, the divergence in var(e^-β W_ dis) offers a new angle to study quantum work fluctuations <cit.> and to characterize the too-far-from-equilibrium nature of a work protocol. One fascinating question is to study how the divergence of G(μ=2iβ) may be reflected in the behavior of G(μ) in the real-μ domain.
Finally, one might wonder if the recent experiment testing quantum JE using a trapped ion <cit.> already suffered from the divergence issue exposed here. It turns out that this belongs to a fortunate case without any divergence in var(e^-β W_ dis) (see Supplementary Material <cit.>). Thus, analyzing possible divergence in exponential work fluctuations will be crucial in guiding the design of future quantum experiments testing JE.
Acknowledgements: This work
is funded by Singapore MOE Academic Research
Fund Tier-2 project (Project No. MOE2014-T2-2- 119 with
WBS No. R-144-000-350-112). J.G. also acknowledges encouraging discussions with many colleagues.
Appendix
As explained in the main text, all divergences in var(e^-β W) are traced back to divergences in
⟨ e^-2β W⟩=∑_m,n exp[-2β(E_n^τ-E_m^0)]P_m,n^0→τP_m^0,
where P_m,n^0→τ are the state-to-state transition probabilities and P_m^0=e^-β E^0_m/Z_0 is the Gibbs distribution of the initial state. All our numeric calculations involve a truncation in the series (<ref>).
§ TRANSITION PROBABILITIES
The transition probabilities between instant eigenstates at t=0 and t=τ for the quantum harmonic oscillator (QHO) with arbitrary driving of the trapping frequency are given by <cit.>
P^0→τ_2ν,2μ = (2/Q^∗+1)^1/2(Q^∗-1/Q^∗+1)^ν+μ(2μ)!(2ν)!/2^2μ+2ν {∑_λ=0^min(ν,μ)[-8/(Q^∗-1)]^λ/(2λ)!(μ-λ)!(ν-λ)!}^2,
P^0→τ_2ν+1,2μ+1 = (2/Q^∗+1)^3/2(Q^∗-1/Q^∗+1)^ν+μ(2μ+1)!(2ν+1)!/2^2μ+2ν {∑_λ=0^min(ν,μ)[-8/(Q^∗-1)]^λ/(2λ+1)!(μ-λ)!(ν-λ)!}^2;
where, ν,μ∈ℕ, and Q^∗≥ 1 is the so called Husimi coefficient. The selection rules prevent transitions between energy levels with different parity.
The definition of Q^∗ is <cit.>
Q^∗(t)=1/2(ω_0ω_1X^2+ω_0/ω_1Ẋ^2+ω_1/ω_0Y^2+1/(ω_0ω_1)Ẏ^2),
where X(t) and Y(t) are the two solutions of the equation of motion of the corresponding classical oscillator,
Ẍ(t)+ω^2(t)X(t)=0,
with initial conditions: X(0)=Ẏ(0)=0 and Ẋ(0)=Y(0)=1. Any non-adiabaticity is captured by deviations of Q^∗ from its adiabatic value, Q^∗=1. The Q^∗ used in the main text is assumed to be Q^∗(τ).
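For completeness, here is a sketch of a numerically stable implementation of the even-parity probabilities above (our own rewriting, working in log space and recasting the λ-sum as a recurrence so that the large factorials never overflow; the function name P_even and the test values are assumptions):

from math import lgamma, log, exp

def P_even(nu, mu, Q):
    # log of the prefactor, with the lambda = 0 term of the curly bracket
    # absorbed, so that every remaining factor stays in a float-safe range
    lp = (0.5 * log(2.0 / (Q + 1.0))
          + (nu + mu) * log((Q - 1.0) / (Q + 1.0))
          + lgamma(2 * mu + 1) + lgamma(2 * nu + 1)
          - (2 * mu + 2 * nu) * log(2.0)
          - 2.0 * lgamma(mu + 1) - 2.0 * lgamma(nu + 1))
    tot, term = 1.0, 1.0          # bracket divided by its lambda = 0 term
    for lam in range(1, min(nu, mu) + 1):
        term *= (-8.0 / (Q - 1.0)) * (mu - lam + 1) * (nu - lam + 1) \
                / ((2 * lam) * (2 * lam - 1))
        tot += term
    return exp(lp) * tot * tot

Q = 1.5
for nu in range(4):               # normalization check: each row sums to 1
    print(nu, sum(P_even(nu, mu, Q) for mu in range(400)))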
§ WORK FLUCTUATIONS FOR THE CLASSICAL HARMONIC OSCILLATOR
For classical systems the work distribution is given by
P^c(W)=βω_0/2π∫ dp_0dq_0 e^-βℋ_0(p_0,q_0) δ[W-(ℋ_τ(p_0,q_0)-ℋ_0(p_0,q_0))],
where ℋ_0(p_0,q_0) is the initial classical Hamiltonian and ℋ_τ(p_0,q_0) represents the value of the final Hamiltonian ℋ_τ for the trajectory emanating from (p_0,q_0).
For the classical harmonic oscillator (CHO) this integral results in two different expressions, associated to the cases: Q^∗≤ Q^∗_sq and Q^∗>Q^∗_sq, where Q^∗ is the Husimi coefficient defined in (<ref>) and Q^∗_sq stands for the Husimi coefficient in sudden quench cases.
We first consider the regime Q^∗≤ Q^∗_sq, which will be the case if, for example, we have a monotonically increasing or decreasing ω(t) from ω_0 to ω_1. This gives rise to positive definite work or negative definite work. For compression of the harmonic trap (ω_0<ω_1) the work distribution reads <cit.>
P^c_<(W)=β √(κ/(2(Q^∗_sq-Q^∗))) exp[β (κ-Q^∗)/(2(Q^∗_sq-Q^∗)) W] I_0(β √(Q^∗ 2-1)/(2(Q_sq^∗-Q^∗)) |W|)Θ(W),
where κ≡ω_0/ω_1 is the compression ratio, Θ(W) is the step function and I_0(z) is the modified Bessel function of the first kind. This work distribution coincides with that of the QHO in the semiclassical regime <cit.>.
The asymptotic behavior P^c(W→∞) determines the convergence of
⟨ e^-2β W⟩_c=∫ dW e^-2β WP^c(W).
One can then use the approximation: I_0(z)→ e^z/√(2π z), z→∞, to get the tail
e^-2β WP^c_<(W)→√(βκ/(π√(Q^∗ 2-1))) exp[β ((κ-Q^∗)/(2(Q^∗_sq-Q^∗))-2+√(Q^∗ 2-1)/(2(Q_sq^∗-Q^∗))) W].
Evaluation of ⟨ e^-2β W⟩ requires us to integrate e^-2β WP^c(W) over W from 0 to ∞. A divergence of ⟨ e^-2β W⟩_c will take place when the exponent in (<ref>) is an increasing function of W. Fortunately, the divergence does not show up for compression of the trap. However, for expansion of the trap the work W is always negative, with the work distribution reading
P^c_<(W)=β √(κ/(2(Q^∗_sq-Q^∗))) exp[β (κ-Q^∗)/(2(Q^∗_sq-Q^∗)) W] I_0(β √(Q^∗ 2-1)/(2(Q_sq^∗-Q^∗)) |W|)Θ(-W).
Similarly, the tail is
e^-2β WP^c_<(W)→√(βκ/(π√(Q^∗ 2-1))) exp[β ((κ-Q^∗)/(2(Q^∗_sq-Q^∗))-2-√(Q^∗ 2-1)/(2(Q_sq^∗-Q^∗))) W].
Since the term ⟨ e^-2β W⟩ involves an integration over W from -∞ to 0, the possibility of a negative coefficient, i.e.,
((κ-Q^∗)/(2(Q^∗_sq-Q^∗))-2-√(Q^∗ 2-1)/(2(Q_sq^∗-Q^∗)))<0,
allows divergence. In particular, for adiabatic expansion of the trap (Q^∗=1, κ>1) the criterion for divergence is ω_0>2ω_1.
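A quick Monte Carlo sketch (our own illustration; parameter values are arbitrary) shows how this divergence manifests in practice. For adiabatic driving the classical action E/ω is conserved, so W=(1/κ-1)E_0 with E_0 drawn from the Gibbs density βe^-βE_0, giving ⟨ e^-2β W⟩=κ/(2-κ); the sample-mean estimator becomes visibly heavy-tailed as κ→2:

import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
for kappa in (1.2, 1.8, 1.95):
    E0 = rng.exponential(1.0 / beta, size=10**6)   # classical Gibbs energies
    W = (1.0 / kappa - 1.0) * E0                   # negative for expansion
    est = np.exp(-2.0 * beta * W).mean()
    print(kappa, est, kappa / (2.0 - kappa))       # estimator vs exact value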
Now consider the regime where the nonadiabaticity parameter Q^∗ is even larger than that associated with a sudden quench, i.e., Q^∗>Q^∗_sq. This is possible if we introduce multiple quenches to the trapping frequency ω. In this case, the integral (<ref>) for compression or expansion becomes
P^c(W)=β √(κ/(2π^2(Q^∗-Q^∗_sq))) exp[-β (κ-Q^∗)/(2(Q^∗-Q^∗_sq)) W] K_0(β √(Q^∗ 2-1)/(2(Q^∗-Q_sq^∗)) |W|),
where K_0(x) is the Macdonald function or modified Bessel function of the third kind.
This result is consistent with that reported in <cit.> using a different approach starting from the characteristic function of the quantum harmonic oscillator (QHO) and taking a semiclassical limit. Similarly, with the approximation K_0(z)→√(π)e^-z/√(2 z), z→∞, one obtains the tail
e^-2β WP^c(W)→√(βκ/(π√(Q^∗ 2-1))) exp{-β [((κ-Q^∗)/(2(Q^∗-Q^∗_sq))+2) W+√(Q^∗ 2-1)/(2(Q^∗-Q^∗_sq)) |W|]}.
Because W now can take both positive and negative values, the tail must go to zero at both W→±∞ in order to have a finite integral over W. The condition for divergence after integration over W is
((κ-Q^∗)/(2(Q^∗-Q^∗_sq))+2)+√(Q^∗2-1)/(2(Q^∗-Q_sq^∗))<0, for W>0;
-((κ-Q^∗)/(2(Q^∗-Q^∗_sq))+2)+√(Q^∗2-1)/(2(Q^∗-Q_sq^∗))<0, for W< 0.
Briefly,
|(κ-Q^∗)/(2(Q^∗-Q^∗_sq))+2|>√(Q^∗2-1)/(2(Q^∗-Q_sq^∗)).
Combining these results and after a few straightforward steps, we finally reach a compact condition for divergence
κ>2(Q^∗-√(Q^∗2-1)).
Classical characteristic function. First we consider the case Q^∗<Q^∗_sq. From the integral formula
∫_0^+∞ dx e^Ax I_0(Bx)=-(A/|A|) (1/√(A^2-B^2)), |ℜ(B)|+ℜ(A)≤ 0,
one obtains from (<ref>) the compact expression
⟨ e^-2β W⟩_c,<
=√(2κ(Q^∗_sq-Q^∗))/√([κ-Q^∗-4(Q^∗_sq-Q^∗)]^2-|Q^∗ 2-1|).
With some algebra it is easy to show that this result coincides with the high-temperature limit of the quantum characteristic function at μ=2iβ, depicted in Eq. (7) of the main text. Next we consider the case Q^∗>Q^∗_sq. From the integral
(1/π)∫_-∞^+∞ dx e^A x K_0(B|x|)=1/√(-A^2+B^2), |ℜ(A)|-ℜ(B)≤ 0,
and Eq. (<ref>) it is easy to see that ⟨ e^-2β W⟩_c,> takes the same form as ⟨ e^-2β W⟩_c,<, but in the domain Q^∗>Q^∗_sq.
§ WORK PROTOCOLS
The existence of additional domains of divergence in var(e^-β W_ dis) away from adiabaticity is allowed by the PMWF. We study such behavior for two characteristic work protocols: (i) sudden quench, Q^∗→ Q^∗_sq, and (ii) a protocol with dω/dt/ω^2(t) independent of the time of evolution, covering the regime 1<Q^∗≤ Q^∗_sq for different values of the time of driving τ. On the other hand, protocols involving multiple quenches <cit.> reach values of Q^∗ greater than the sudden-quench value Q^∗_sq. Although not treated here, it is worth mentioning that such protocols with Q^∗>Q^∗_sq can lead to divergences in var(e^-β W_ dis) in the high-temperature regime not only for expansion protocols but also for compression.
We briefly recall that
var(e^-β W_ dis)=G(2iβ)/e^-2βΔ F-1,
where G(μ) is the characteristic function of work. For the quantum harmonic oscillator
G(μ)=(1-e^-βħω_0)e^iμħ(ω_1-ω_0)/2 [∑_m,n (e^iμħω_1)^m(e^-(iμ+β)ħω_0)^n P^0→τ_m,n],
where P^0→τ_m,n are the corresponding transition probabilities. Our numerical approach is based on truncation of the quantum numbers n and m in the latter expression. Using the generating function method one arrives at the compact expression (to be derived in the next section)
G(μ)=√(2) (1-e^-βħω_0)e^iμħ(ω_1-ω_0)/2/√(Q^∗(1-e^2iμħω_1)(1-e^-2(iμ+β)ħω_0)+(1+e^2iμħω_1)(1+e^-2(iμ+β)ħω_0)-4e^iμħω_1 e^-(iμ+β)ħω_0).
In particular,
G(2iβ)→√(2)sinh(βħω_0/2)/√(cosh(βħ(2ω_1-ω_0))-1-(Q^∗-1) sinh(2βħω_1)sinh(βħω_0)).
For the high-temperature (classical) regime βħω_0,1≪ 1, we have
G(2iβ) →κ/√((2-κ)^2-4κ(Q^∗-1)),
while for the low-temperature (deep quantum) regime βħω_0,1≫ 1,
G(2iβ) →exp[βħ(ω_0-ω_1)]/√(1-((Q^∗-1)/2) (exp(2βħω_0)-1)).
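In the adiabatic limit P^0→τ_m,n=δ_m,n the truncated series can be checked against the compact expression directly; the following sketch (our own illustration at Q^∗=1, with arbitrary frequencies obeying κ<2) shows the agreement:

import numpy as np

hbar, beta, w0, w1 = 1.0, 1.0, 1.0, 0.7      # kappa = w0/w1 < 2: finite result
n = np.arange(5000)                          # truncated geometric series
series = ((1 - np.exp(-beta * hbar * w0)) * np.exp(-beta * hbar * (w1 - w0))
          * np.exp(-beta * hbar * (2 * w1 - w0) * n).sum())
compact = (np.sqrt(2.0) * np.sinh(beta * hbar * w0 / 2)
           / np.sqrt(np.cosh(beta * hbar * (2 * w1 - w0)) - 1.0))
print(series, compact)    # agree; for kappa >= 2 the series instead diverges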
We now discuss the specific work protocols.
§.§ Sudden quench
For the case of sudden quench, Q^∗→ Q^∗_sq=(κ+1/κ)/2, the domain of divergence in var(e^-β W_ dis) for the high-temperature regime (classical) is ω_0>√(2)ω_1, as predicted by Eq. (<ref>). As we depart from the high-temperature limit, an extra domain of divergence emerges from the regime of ω_1∼∞, reaching smaller values with lower temperatures, see panels (b) to (e) in Fig. <ref>. This new domain divergence originates from quantum effects as they are absent in the limit of ħ→ 0. We can further digest the divergence by looking for the zero and imaginary values of the denominator in the characteristic function in Eq. (<ref>), i.e.,
cosh(βħ(2ω_1-ω_0))-1-(Q^∗_sq-1) sinh(2βħω_1) sinh(βħω_0)≤ 0.
To gain useful insights we consider the limit ω_0→ 0. Using Q^∗_sq∼ 1/(2κ), Eq. (<ref>) reduces to
2 sinh(βħω_1)-βħω_1 cosh(βħω_1)≤ 0.
Noting the obvious inequality
2 sinh(βħω_1)-βħω_1 cosh(βħω_1)<(2-βħω_1) cosh(βħω_1),
divergence is guaranteed once
(2-βħω_1)< 0.
This divergence is not present in the high-temperature (classical) regime, being associated to ħβ→ 0. Panels (c) to (e) in Fig. <ref> show such boundary extending to ω_0>0, but always within the compression sector, ω_0<ω_1. As an example, the inequality (<ref>) predicts an onset of divergence around ω_1=0.2 for the temperature ħβ=10, in the limit ω_0→ 0. This is in agreement with Fig. <ref>(d).
§.§ A protocol with constant dω/dt/ω^2(t).
The second protocol allows us to study the impact of a varying protocol duration τ, between sudden quench (τ→0) and the slow driving limit (τ→∞). The explicit protocol is specified by <cit.>
ω(t)=ω_1ω_0τ/[ω_0τ+t(ω_1-ω_0)].
Thanks to a time-independent dω/dt/ω^2(t), one can readily obtain
the corresponding Husimi coefficient
Q^∗={[ 1+[cosh(√(1-γ^2) ln(κ))-1]/(1-γ^2), if γ^2≤ 1,; 1+ln^2(κ)/2, if γ^2=1,; 1+[1-cos(√(γ^2-1) ln(κ))]/(γ^2-1), if γ^2≥ 1; ].
where γ= 2ω_0τ/(1-κ), and |dω/dt/ω^2(t)|=2/γ. Depicted in Fig. <ref> is the behavior of var(e^-β W_ dis) for a time of driving τ=15 and temperatures ħβ=10, 15.
Compared with the sudden quench case, the divergent domains (see Fig. <ref>(b) and Fig. <ref>(d)) are now fragmented into multiple domains with subtle phase boundaries. This fragmentation can be partially traced back to the non-monotonic behavior of Q^∗ as a function of ω_0 and ω_1 (see dashed lines in Fig. <ref>(a) and Fig. <ref>(c)). Indeed, for |dω/dt/ω^2(t)|≤ 2, Q^∗=1+[1-cos(√(γ^2-1) ln(κ))]/(γ^2-1), where γ=2ω_0τ/(1-κ).
In addition, as the temperature decreases, the domain of divergence tends to grow (the white area in Fig. <ref>(c) is greater than in Fig. <ref>(b)). In the low temperature limit, the domain for having finite var(e^-β W_ dis) (gray area) shrinks to almost zero area,
leaving only zero-measure boundaries separating different domains of divergence.
Such zero-measure cases with finite var(e^-β W_ dis) arise from finite-time adiabatic dynamics whose Q^∗=1, which is possible because for |dω/dt/ω^2(t)|≤ 2, Q^∗=1 if [1-cos(√(γ^2-1) ln(κ))]/(γ^2-1)=0.
As mentioned in the main text, similar disconnected regions of divergence are observed along the time of driving, where convergent domains are found around the adiabatic times <cit.>: τ_n=√(1+4π^2n^2/(lnκ)^2)|1-κ|/(2ω_0), n=1,2,3..., consistent with the solutions for Q^∗=1 in (<ref>).
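A sketch evaluating the closed form above and verifying that Q^∗(τ_n)=1 at the finite adiabatic times (our own illustration; the function name Q_star and the compression example κ=0.5 are assumptions):

import numpy as np

def Q_star(w0, w1, tau):
    kappa = w0 / w1
    gamma = 2.0 * w0 * tau / (1.0 - kappa)
    g2, L = gamma * gamma, np.log(kappa)
    if abs(g2 - 1.0) < 1e-12:
        return 1.0 + 0.5 * L * L
    if g2 < 1.0:
        return 1.0 + (np.cosh(np.sqrt(1.0 - g2) * L) - 1.0) / (1.0 - g2)
    return 1.0 + (1.0 - np.cos(np.sqrt(g2 - 1.0) * L)) / (g2 - 1.0)

w0, w1 = 1.0, 2.0                    # compression, kappa = 0.5
L = np.log(w0 / w1)
for n in (1, 2, 3):                  # adiabatic times tau_n: Q*(tau_n) = 1
    tau_n = np.sqrt(1 + 4 * np.pi**2 * n**2 / L**2) * abs(1 - w0 / w1) / (2 * w0)
    print(tau_n, Q_star(w0, w1, tau_n))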
As shown in Fig. <ref>(a), starting from the slow driving limit in a region of ω_0 and ω_1 where var(e^-β W_ dis) converges, there is a minimum time of driving τ_c above which no divergence is found. Below this time there is an irregular alternation of convergent and divergent behavior. The local minima in var(e^-β W_ dis) are associated with finite-time adiabatic dynamics. Note also that for small values of ω_0 similar conclusions as in the sudden quench case can be drawn for the present protocol (<ref>), because Q^∗→ Q_sq^∗ as ω_0→ 0.
§ CHARACTERISTIC FUNCTION FOR THE QUANTUM HARMONIC OSCILLATOR UNDER CHANGE OF THE TRAP FREQUENCY
To compute the characteristic function we use the generating function method <cit.>. The series in Eq. (3) of the main text has the form P(u,v)=∑_m,nu^mv^nP_m,n^0→τ, where u=e^iμħω_1 and v=e^-(iμ+β) ħω_0. We review the procedure to arrive at a compact expression for P(u,v). From the definition of the quantum transition probability,
P^0→τ_m,n = |∫ dx_0∫ dx (ϕ^τ_m(x))^∗U(x,x_0;τ)ϕ^0_n(x_0)|^2,
where each integral is along the real line. The series becomes
P(u,v) = ∑_m,n=0^∞u^mv^n(∫ dx_0∫ dx (ϕ^τ_m(x))^∗ U(x,x_0;τ)ϕ^0_n(x_0))(∫ dy_0∫ dy (ϕ^τ_m(y))^∗ U(y,y_0;τ)ϕ^0_n(y_0))^∗
= ∫∫∫∫ dx_0dxdy_0dy (∑_m=0^∞u^mϕ^τ_m(y)(ϕ^τ_m(x))^∗) U(x,x_0;τ)U(y_0,y;-τ) (∑_n=0^∞v^nϕ^0_n(x_0)(ϕ^0_n(y_0))^∗),
where U^∗(y,y_0;τ)=U(y_0,y;-τ). From Mehler's Hermite polynomial formula, we have
∑_m=0^∞u^mϕ^τ_m(y)(ϕ^τ_m(x))^∗ = √(mω_1/(ħπ(1-u^2))) exp(-(mω_1/2ħ) [(1+u^2)(x^2+y^2)-4uxy]/(1-u^2)),
∑_n=0^∞v^nϕ^0_n(x_0)(ϕ^0_n(y_0))^∗ = √(mω_0/(ħπ(1-v^2))) exp(-(mω_0/2ħ) [(1+v^2)(x_0^2+y_0^2)-4vx_0y_0]/(1-v^2)).
The latter equations require |u|^2 and |v|^2 to be less than 1. Using the well known propagators <cit.>
U(x,x_0;τ) = √(m/2π iħ X) exp(im/2ħ X(Ẋx^2-2xx_0+Yx_0^2)),
U(y_0,y;-τ) = √(im/2πħ X) exp(-im/2ħ X(Ẋy^2-2yy_0+Yy_0^2));
one arrives at the Gaussian integral
P(u,v)
= √(mω_1/ħπ(1-u^2)) √(mω_0/ħπ(1-v^2)) √(m/2π iħ X) √(im/2πħ X)∫∫∫∫ dx_0dxdy_0dy exp(- v^ T M v),
where
M=m/2ħ([ 1+u^2/1-u^2ω_1 - iẊ/X i/X -2uω_1/1-u^2 0; i/X 1+v^2/1-v^2ω_0 - iY/X 0 -2vω_0/1-v^2; -2uω_1/1-u^2 0 1+u^2/1-u^2ω_1+iẊ/X -i/X; 0 -2vω_0/1-v^2 -i/X 1+v^2/1-v^2ω_0 + iY/X ]) and v=([ x; x_0; y; y_0 ]).
Using (<ref>) and the identity ẊY-XẎ=1, one gets
det( M)=(m/2ħ)^4 (2ω_0ω_1/X^2) [Q^∗(1-u^2)(1-v^2)+(1+u^2)(1+v^2)-4uv].
From (<ref>) it follows that <cit.>
P(u,v)=√(2)/√(Q^∗(1-u^2)(1-v^2)+(1+u^2)(1+v^2)-4uv).
It is worth noting that numerics supports a broader domain of validity of (<ref>) than |u|^2 and |v|^2 being less than 1, as required by the generating function method. This is easy to see in the case of adiabatic dynamics, where P(u,v)=∑_n(uv)^n and the convergence condition reduces to uv<1, for u,v∈ℝ.
§ WORK FLUCTUATIONS FOR A QUANTUM HARMONIC OSCILLATOR UNDER AN EXTERNAL FORCE
We illustrate how a special work protocol, essentially the one used in the recent proof-of-principle experiment <cit.>, can lead to total absence of divergence in var(e^-β W_ dis). In the experiment <cit.>, work is done by applying an external force to an ion while keeping the trapping frequency fixed. The general driving in this case is given by the time-dependent Hamiltonian
H(t)=ħω a^†a+f^∗(t)a+f(t)a^†,
where a and a^† are the annihilation and creation operators of a QHO with trapping frequency ω; the complex mechanical parameters are f and f^∗, with initial values f(0)=f^∗(0)=0. We note that the qubit in <cit.>, which modifies (<ref>) by coupling to the external force (a+a^†) via the Pauli matrix σ̂_x, is not initialized at thermal equilibrium but in one of its eigenstates; thus only the phonon is initialized at thermal equilibrium and the qubit is solely used to apply the corresponding work. The work characteristic function associated with the Hamiltonian in Eq. (<ref>) is given by <cit.>
G(μ)=exp(iμ |f(τ)|^2/ħω) exp[(e^iμħω-1)|z|^2] ∑_n=0^∞P_n^0L_n(4|z|^2sin^2(ħωμ/2)),
where L_n(x) are Laguerre polynomials and again we assume an initial state at thermal equilibrium, P^0_n=e^-β E^0_n/Z_0. The non-adiabatic dynamics is captured by the rapidity parameter
z(τ)=1/ħω∫_0^τdt ḟ(t)exp(iω t).
Using again ⟨ e^-2βW⟩=G(2iβ), one finds that G(2iβ) is always finite.
Indeed,
from Perron's formula
L_n(x)=2^-1π^-1/2e^x/2(-x)^-1/4n^-1/4e^2√(-nx)(1+𝒪(n^-1/2)), x∈ℂ\ℝ_+,
it is clear that the uniform convergence of the series in (<ref>) is dominated by the probability P_n^0 whose exponential is linear in n.
The compact expression of (<ref>) is given by <cit.>
G(μ)=exp(iμ |f(τ)|^2/ħω+(e^iμħω-1)|z|^2-4|z|^2 sin^2(ħωμ/2)/e^βħω-1).
Notice that evaluation at μ=iβ renders the Jarzynski equality, with ΔF=|f(τ)|^2/(ħω). Also, consistently with the PMWF, the minimum of G(2iβ) with respect to the rapidity is obtained at |z|^2=0. For this type of work protocol there are no local minima in var(e^-βW_ dis), in contrast with the behavior depicted in Fig. 1 in the main text. Indeed, using (<ref>), we find
var(e^-βW_ dis)=e^2|z|^2sinh(βħω)-1,
which does not suffer from divergence at any given β.
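For a concrete pulse shape (a hypothetical linear ramp f(t)=f_1 t/τ we assume for illustration, not the experimental protocol), the rapidity and the variance follow in two lines:

import numpy as np

hbar, w, f1, tau, beta = 1.0, 1.0, 0.5, 3.0, 5.0
# z = (1/hbar/w) * int_0^tau (df/dt) exp(i w t) dt, with df/dt = f1/tau
z = (f1 / (hbar * w * tau)) * (np.exp(1j * w * tau) - 1.0) / (1j * w)
var_dis = np.exp(2.0 * np.abs(z)**2 * np.sinh(beta * hbar * w)) - 1.0
print(np.abs(z)**2, var_dis)        # finite for arbitrarily large beta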
§ WORK FLUCTUATIONS FOR A DRIVEN INFINITE SQUARE-WELL POTENTIAL
We present in Fig. <ref> results for an infinite square-well potential, i.e., H_box(t)=p̂^2/2M+V_t(q̂), where
V_t(q)={[ 0, -L(t)/2<q<L(t)/2,; ∞, otherwise; ].
with a constant velocity of compression, dL/dt. Again, analytical results are not available, yet numerics shows the qualitative behavior characteristic of divergence in var[exp(-β W_ dis)]. Although not shown in the figures, we confirmed the following observations: (i) we benchmarked the numerics against the adiabatic divergence, which can be easily predicted to occur at L_1>√(2) L_0; (ii) for larger velocities we observe a shift of the divergent behavior towards higher temperatures. We used the reported transition probability from Ref. <cit.>:
P_m,n^0→τ=|∑_l=1^∞ {2/L_0∫_0^L_0 exp(-iMv/2ħ L_0x^2) sin(lπ x/L_0) sin(mπ x/L_0)dx..
..× exp[-il^2 π^2ħ(L_1-L_0)/2MvL_0L_1] 2/L_1∫_0^L_1 exp(iMv/2ħ L_1y^2) sin(lπ y/L_1) sin(nπ y/L_1)dy} |^2,
where we set M=1 and ħ=1, and the n-th energy level of the system is E^t_n=n^2π^2ħ^2/(2ML^2(t)). Our numerical calculation is based on truncation of the series in Eq. (<ref>).
As shown in Fig. <ref>, for work protocols with finite var[exp(-β W_ dis)] at high temperatures, divergence will emerge as temperature decreases. This again confirms our general insights.
unsrt
10
Hanggireview1
Michele Campisi, Peter Hänggi, and Peter Talkner.
Colloquium : Quantum fluctuation relations: Foundations and
applications.
Rev. Mod. Phys., 83:771–791, 2011; Erratum Rev. Mod. Phys., 83:1653, 2011.
Hanggireview2
Peter Talkner and Peter Hänggi.
Aspects of quantum work.
Phys. Rev. E, 93:022131, 2016.
Jarzynski1
C. Jarzynski.
Nonequilibrium equality for free energy differences.
Phys. Rev. Lett., 78:2690–2693, 1997.
Kurchan1
J. Kurchan.
A quantum fluctuation theorem.
arXiv:cond-mat/0007360, 2000.
Tasaki1
H. Tasaki.
Jarzynski relations for quantum systems and some applications.
arXiv:cond-mat/0009244, 2000.
Dellago2
Christoph Dellago and Gerhard Hummer.
Computing equilibrium free energies using non-equilibrium molecular
dynamics.
Entropy, 16:41–61, 2014.
Bustamante1
Jan Liphardt, Sophie Dumont, Steven B. Smith, Ignacio Tinoco, and Carlos
Bustamante.
Equilibrium information from nonequilibrium measurements in an
experimental test of Jarzynski's equality.
Science, 296(5574):1832–1835, 2002.
Kiang1
Nolan C. Harris, Yang Song, and Ching-Hwa Kiang.
Experimental free energy surface reconstruction from single-molecule
force spectroscopy using Jarzynski's equality.
Phys. Rev. Lett., 99:068101, 2007.
Ritort1
Anna Alemany, Alessandro Mossa, Ivan Junier, and Felix Ritort.
Experimental free-energy measurements of kinetic molecular states
using fluctuation theorems.
Nat. Phys., 8:688–694, 2012.
Szabo1
Gerhard Hummer and Attila Szabo.
Free energy reconstruction from nonequilibrium single-molecule
pulling experiments.
Proc. Natl. Acad. Sci. USA, 98(7):3658–3661, 2001.
Rabbiosi1
F. Douarche, S. Ciliberto, A. Petrosyan, and I. Rabbiosi.
An experimental test of the Jarzynski equality in a mechanical
experiment.
EPL (Europhysics Letters), 70(5):593, 2005.
Bechinger1
V. Blickle, T. Speck, L. Helden, U. Seifert, and C. Bechinger.
Thermodynamics of a colloidal particle in a time-dependent
nonharmonic potential.
Phys. Rev. Lett., 96:070603, 2006.
Serra1
Tiago B. Batalhão, Alexandre M. Souza, Laura Mazzola, Ruben Auccaise,
Roberto S. Sarthour, Ivan S. Oliveira, John Goold, Gabriele De Chiara, Mauro
Paternostro, and Roberto M. Serra.
Experimental reconstruction of work distribution and study of
fluctuation relations in a closed quantum system.
Phys. Rev. Lett., 113:140601, 2014.
Vedral1
R. Dorner, S. R. Clark, L. Heaney, R. Fazio, J. Goold, and V. Vedral.
Extracting quantum work statistics and fluctuation theorems by
single-qubit interferometry.
Phys. Rev. Lett., 110:230601, 2013.
Paternostro2
L. Mazzola, G. De Chiara, and M. Paternostro.
Measuring the characteristic function of the work distribution.
Phys. Rev. Lett., 110:230602, 2013.
Campisi1
Michele Campisi, Ralf Blattmann, Sigmund Kohler, David Zueco, and Peter
Hänggi.
Employing circuit qed to measure non-equilibrium work fluctuations.
New Journal of Physics, 15:105028, 2013.
Lutz3
Gerhard Huber, Ferdinand Schmidt-Kaler, Sebastian Deffner, and Eric Lutz.
Employing trapped cold ions to verify the quantum Jarzynski equality.
Phys. Rev. Lett., 101:070403, 2008.
Kihwan1
Shuoming An, Jing-Ning Zhang, Mark Um, Dingshun Lv, Yao Lu, Junhua Zhang,
Zhang-Qi Yin, H. T. Quan, and Kihwan Kim.
Experimental test of the quantum Jarzynski equality with a
trapped-ion system.
Nat. Phys., 11(2):193–199, 2015.
Jarzynski2
O. Mazonka and C. Jarzynski.
Exactly solvable model illustrating far-from-equilibrium predictions.
arXiv:cond-mat/9912121, 1999.
Zuckerman1
Daniel M. Zuckerman and Thomas B. Woolf.
Theory of a systematic computational error in free energy
differences.
Phys. Rev. Lett., 89:180602, 2002.
Bustamante2
J. Gore, F. Ritort, and C. Bustamante.
Bias and error in estimates of equilibrium free-energy differences
from nonequilibrium measurements.
Proc. Natl. Acad. Sci. USA, 100(22):12564, 2003.
Zuckerman2
Daniel M. Zuckerman and Thomas B. Woolf.
Systematic finite-sampling inaccuracy in free energy differences and
other nonlinear quantities.
Journal of Statistical Physics, 114(5):1303–1323, 2004.
Dellago1
Wolfgang Lechner and Christoph Dellago.
On the efficiency of path sampling methods for the calculation of
free energies from non-equilibrium simulations.
Journal of Statistical Mechanics: Theory and Experiment,
2007(04):P04001, 2007.
Ritort2
Matteo Palassini and Felix Ritort.
Improving free-energy estimates from unidirectional work
measurements: Theory and experiment.
Phys. Rev. Lett., 107:060601, 2011.
Zuckerman3
F. Marty Ytreberg, Robert H. Swendsen, and Daniel M. Zuckerman.
Comparison of free energy methods for molecular systems.
The Journal of Chemical Physics, 125(18):184114, 2006.
Yi1
Seongjin Kim, Yong Woon Kim, Peter Talkner, and Juyeon Yi.
Comparison of free-energy estimators and their dependence on
dissipated work.
Phys. Rev. E, 86:041130, 2012.
Kirkpatrick1
T. R. Kirkpatrick, J. R. Dorfman, and J. V. Sengers.
Work, work fluctuations, and the work distribution in a thermal nonequilibrium steady state.
Phys. Rev. E, 94:052128, 2016.
Jarzynski3
Christopher Jarzynski.
Rare events and the convergence of exponentially averaged work
values.
Phys. Rev. E, 73:046105, 2006.
Deng1
Jiawen Deng, Alvis Mazon Tan, Peter Hänggi, and Jiangbin Gong.
Merits and qualms of work fluctuations in classical fluctuation
theorems.
Phys. Rev. E, 95:012106, 2017.
Gaoyang1
Gaoyang Xiao and Jiangbin Gong.
Principle of minimal work fluctuations.
Phys. Rev. E, 92:022130, 2015.
Singer1
Johannes Roßnagel, Samuel T. Dawkins, Karl N. Tolazzi, Obinna Abah, Eric
Lutz, Ferdinand Schmidt-Kaler, and Kilian Singer.
A single-atom heat engine.
Science, 352 (6283):325–329, 2016.
italy1
F. Plastina, A. Alecce, T. J. G. Apollaro, G. Falcone, G. Francica, F. Galve, N. Lo Gullo, and R. Zambrini.
Irreversible work and inner friction in quantum thermodynamic processes.
Phys. Rev. Lett., 113:260601, 2014.
genFT1
A. E. Allahverdyan.
Nonequilibrium quantum fluctuations of work.
Phys. Rev. E, 90:032137, 2014.
genFT2
G. Watanabe, B. Prasanna Venkatesh, P. Talkner, M. Campisi, and P. Hänggi.
Quantum fluctuation theorems and generalized measurements during the force protocol.
Phys. Rev. E, 89:032114, 2014.
genFT3
D. Kafri and S. Deffner.
Holevo's bound from a general quantum fluctuation theorem.
Phys. Rev. A, 86:044302, 2012.
genFT4
A. E. Rastegin.
Non-equilibrium equalities with unital quantum channels.
J. Stat. Mech., P06016, 2013.
genFT5
T. Albash, D. A. Lidar, M. Marvian, and P. Zanardi.
Fluctuation theorems for quantum processes.
Phys. Rev. E, 88:032146, 2013.
Talkner1
Peter Talkner, Eric Lutz, and Peter Hänggi.
Fluctuation theorems: Work is not an observable.
Phys. Rev. E, 75:050102, 2007.
Nikonov
V. V. Dodonov, V. I. Man'ko and D. E. Nikonov.
Exact propagators for time-dependent Coulomb, delta and other potentials.
Phys. Lett. A, 162:359–364, 1992.
Lohe
M. A. Lohe.
Exact time dependence of solutions to the time-dependent Schrödinger equation.
J. Phys. A: Math. Theor., 42:035307, 2009.
Jaramillo
J. Jaramillo, M. Beau and A. del Campo.
Quantum supremacy of many-particle thermal machines.
New J. Phys., 18:075019, 2016.
Husimi1
K. Husimi.
Miscellanea in elementary quantum mechanics, ii.
Progress of Theoretical Physics, 9:381, 1953.
Lutz1
Sebastian Deffner and Eric Lutz.
Nonequilibrium work distribution of a quantum harmonic oscillator.
Phys. Rev. E, 77:021128, 2008.
supp For quantum transition probabilities, classical work fluctuations, derivation of G(μ), and additional results on a harmonic oscillator under an external force, as well as a particle in a box, see Supplementary Material.
dJE J. Deng, J. D. Jaramillo, P. Hänggi, and J. Gong. Deformed Jarzynski equality. arXiv:1707.07393, 2017.
delCampo1
Mathieu Beau, Juan Jaramillo, and Adolfo del Campo.
Scaling-up quantum heat engines efficiently via shortcuts to
adiabaticity.
Entropy, 18(5), 2016.
Uzdin1
Raam Uzdin, Emanuele G. Dalla Torre, Ronnie Kosloff, and Nimrod Moiseyev.
Effects of an exceptional point on the dynamics of a single particle
in a time-dependent harmonic trap.
Phys. Rev. A, 88:022505, 2013.
Rezek1
Y. Rezek, P. Salamon, K. H. Hoffmann, and R. Kosloff.
The quantum refrigerator: The quest for absolute zero.
EPL (Europhysics Letters), 85:30008, 2009.
Lutz2
Fernando Galve and Eric Lutz.
Nonequilibrium thermodynamic analysis of squeezing.
Phys. Rev. A, 79:055804, 2009.
adam1
J. Janszky and P. Adam.
Strong squeezing by repeated frequency jumps.
Phys. Rev. A, 46:6091–6092, 1992.
Deng2
Jiawen Deng, Qing-hai Wang, Zhihao Liu, Peter Hänggi, and Jiangbin Gong.
Boosting work characteristics and overall heat-engine performance via
shortcuts to adiabaticity: Quantum and classical systems.
Phys. Rev. E, 88:062122, 2013.
Paternostro1
A. del Campo, J. Goold, and M. Paternostro.
More bang for your buck: Super-adiabatic quantum engines.
Scientific Reports, 4:6208, 2014.
Muga1
E. Torrontegui, S. Ibáñez, S. Martínez-Garaot, M. Modugno, A. del Campo,
D. Guéry-Odelin, A. Ruschhaupt, X. Chen, and J. G. Muga.
Shortcuts to adiabaticity.
Adv. At. Mol. Opt. Phys., 62:117–169, 2013.
Deffner1
Steve Campbell and Sebastian Deffner.
Trade-off between speed and cost in shortcuts to adiabaticity.
Phys. Rev. Lett., 118:100601, 2017.
Ueda1
Ken Funo, Jing-Ning Zhang, Cyril Chatou, Kihwan Kim, Masahito Ueda, and Adolfo
del Campo.
Universal work fluctuations during shortcuts to adiabaticity by
counterdiabatic driving.
Phys. Rev. Lett., 118:100602, 2017.
gaoyang2
Gaoyang Xiao and Jiangbin Gong.
Suppression of work fluctuations by optimal control: An approach
based on Jarzynski's equality.
Phys. Rev. E, 90:052132, 2014.
Quan1
Long Zhu, Zongping Gong, Biao Wu, and H. T. Quan.
Quantum-classical correspondence principle for work distributions in
a chaotic system.
Phys. Rev. E, 93:062108, 2016.
Rahav1
Christopher Jarzynski, H. T. Quan, and Saar Rahav.
Quantum-classical correspondence principle for work distributions.
Phys. Rev. X, 5:031038, 2015.
vanZon1
Ramses van Zon, Lisandro Hernández de la Peña, Gilles H. Peslherbe, and
Jeremy Schofield.
Quantum free-energy differences from nonequilibrium path integrals.
ii. convergence properties for the harmonic oscillator.
Phys. Rev. E, 78:041104, 2008.
Talkner2
Peter Talkner, P. Sekhar Burada, and Peter Hänggi.
Statistics of work performed on a forced quantum oscillator.
Phys. Rev. E, 78:011115, 2008; Erratum Phys. Rev. E, 79:039902, 2009.
Quan2
H. T. Quan and C. Jarzynski.
Validity of nonequilibrium work relations for the rapidly expanding quantum piston.
Phys. Rev. E 85:031102, 2012.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07840v1 | 20170126190343 | Probing the properties of extragalactic Supernova Remnants | [
"I. Leonidaki"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Probing the properties of extragalactic Supernova Remnants
I. Leonidaki^1,2
^1 IESL/Foundation for Research and Technology-Hellas, 71110 Heraklion, Crete, Greece
^2 Department of Physics, University of Crete, GR-71003 Heraklion, Crete, Greece
§ ABSTRACT
The investigation of extragalactic SNRs gives us the advantage of surmounting the challenges we are usually confronted with when observing Galactic SNRs, most notably Galactic extinction and distance uncertainties. At the same time, by obtaining larger samples of SNRs, we are allowed to cover a wider range of environments and ISM parameters than in our Galaxy, providing us a more complete and representative picture of SNR populations. I will outline the recent progress on extragalactic surveys of SNR populations, focusing on the optical, radio, and X-ray bands. Multi-wavelength surveys can provide several key aspects of the physical processes taking place during the evolution of SNRs, while at the same time they can overcome possible selection effects that are inherent to monochromatic surveys. I will discuss the properties derived in each band (e.g. line ratios, luminosities, densities, temperatures) and their connection, in order to yield information on various aspects of their behaviour and evolution: for example, their interplay with the surrounding medium, their correlation with star formation activity, their luminosity distributions, and their dependence on galaxy types.
§ INTRODUCTION
The evolution of Supernova Remnants (SNRs) within a uniform Interstellar Medium (ISM) is well-described by a four-stage model, introduced by <cit.>: first comes the free expansion phase, where the ejecta of the Supernova sweeps up matter as it expands freely until the mass of the ejecta equals the mass of the swept up material. The SNR then passes to the adiabatic phase, at which its evolution can be described by the Sedov-Taylor self-similar solution (; ). The first two phases (Fig. 1) depict the blast waves of the newly formed SNRs, reaching high shock velocities (5×10^3 - 10^4 km/s) and heating the material behind the shock front to temperatures up to 10^8 K, producing thermal X-rays (for a good review on X-ray emission in SNRs see ). The third stage (Fig. 1) occurs when the mass of the swept up material has dramatically increased, forcing the velocity of the shock front to decrease down to ∼200 km/s. The temperature behind the shock front drops to ∼10^5 K and the energy losses due to recombination become significant, creating a cooling region behind the shock front and producing emission from shock-heated, collisionally ionized species as well as hydrogen recombination lines. This is the first time the SNR radiates in the optical band. The final stage of evolution occurs when the velocity of the shock reaches the sound speed of the ambient ISM and the SNR dissipates. Synchrotron radio emission is present throughout the life of the remnant, as it is produced mainly in the vicinity of the shock (e.g. ). From the aforementioned it is evident that to what extent an SNR becomes an X-ray, optical or radio emitter depends on its evolutionary stage.

[Figure 1: Evolutionary stages of a Supernova Remnant]
Apart from the evolutionary stage/age, the derived properties of SNRs depend on various other parameters such as the environment/ISM (e.g. density, temperature), the progenitor properties (e.g. mass loss rate, stellar wind density, composition) and selection effects. However, the details of the connection between these properties are poorly understood, while each one of these parameters has its own signature in different wavebands. For example, different wavebands can yield information on density and temperature for different gas phases; the SNR progenitors can be evaluated using the X-ray spectra from the ejecta of young SNRs; and optical SNRs are easier to detect in low-density/diffuse emission regions than radio or X-ray SNRs. Therefore, it is essential to investigate SNRs in a multi-wavelength context in order to have a more complete picture of their evolution.
§.§ Milky Way vs Extragalactic SNRs: Pros and Cons
Galactic SNRs allow us to probe the physics of individual regions of the remnants and their interaction with their surrounding ISM. However, these studies are severely hampered by two crucial factors: Galactic absorption and distance uncertainties. Most of the Galactic SNRs are located in the Galactic plane, impeding the detection of optical or X-ray emitting SNRs, while due to distance uncertainties essential parameters such as sizes or luminosities cannot be estimated. Therefore, there are difficulties in conducting systematic studies or probing their evolution, although an adequate number of Galactic SNRs is in hand (294 sources; ).
On the other hand, the study of extragalactic SNRs presents many advantages: the sources can all be regarded as lying at the same distance as the host galaxy; effects of internal Galactic absorption can be minimized (especially when we study face-on galaxies); a wider range of environments and ISM parameters than in our Galaxy can be selected (e.g. different metallicities, star formation histories, masses), providing us this way with a more complete and representative picture of the SNR populations; and larger samples can be obtained with fewer observations. Although there is limited sensitivity and spatial resolution in these kinds of studies, it is imperative to sample large numbers of galaxies in order to understand the global properties and the systematics of SNR populations as a function of their environment.
§.§ Selection criteria for detecting SNRs
The widely-used selection criterion for SNRs comes from the optical band and is the emission line ratio [S II]/Hα > 0.4 (for Type II SNRs), which has been empirically shown to differentiate well between shock-excited (SNRs) and photo-ionized (H II regions, Planetary Nebulae) regions (). Radio SNRs can be easily disentangled using their non-thermal synchrotron emission. Thermal X-ray emitting SNRs appear to have temperatures below 2 keV and a thermal plasma spectrum (). In the case of non-thermal X-ray emitting SNRs (Pulsar Wind Nebulae - PWNe), and since they have very similar X-ray spectra to X-ray Binaries (XRBs), they cannot be identified based solely on their X-ray properties. Another diagnostic used to identify many new remnants (some of them could hardly be located with optical images) is the [Fe II] 1.644 μm emission line (). Nevertheless, to what extent an SNR can be detected in a multi-wavelength context depends on its evolutionary stage and/or its ambient medium.
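In practice the optical criterion amounts to a one-line cut; a minimal sketch (the function name and the flux values are illustrative placeholders, not real measurements):

def classify_region(f_sii, f_halpha, cut=0.4):
    """f_sii: [S II] 6716+6731 flux; f_halpha: Halpha flux (same units)."""
    ratio = f_sii / f_halpha
    return ("SNR candidate" if ratio > cut else "photoionized region"), ratio

print(classify_region(0.9, 1.5))   # -> ('SNR candidate', 0.6)
print(classify_region(0.2, 1.5))   # -> ('photoionized region', 0.133...)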
§ SNRS IN THE OPTICAL: EMISSION LINE DIAGNOSTICS
Emission lines in optical surveys are a very useful tool for estimating various physical parameters. These lines depend on many factors (e.g. abundances, shock velocities, magnetic parameters, excitation mechanisms) and sophisticated models are needed for accurate calculations. However, some emission line ratios have been proven to be indicative of specific parameters (e.g. [S II]/Hα: main shock-heating gas indicator; Hα/[N II]: metallicity indicator and secondary shock-heating gas indicator; [O III]/Hβ: shock velocity indicator; [S II] λ6716/λ6731: electron density indicator).
§.§ Hα/[N II]: metallicity and shock-heating gas indicator
A representative example of how the Hα/[N II] ratio depicts metallicity variations is well shown in Fig. 2 (left panel), where the log([S II]/Hα) against the log(Hα/[N II]) emission line ratios of a large number of spectroscopically observed SNRs in six nearby galaxies (5 irregulars and 1 spiral) are plotted (). For comparison, the spectroscopically observed SNRs of four more spiral galaxies are included (). The locus of the different types of sources in these diagrams (dashed lines) has been created using the emission line ratios of a large number of Galactic SNRs, H II regions and Planetary Nebulae (PNe) and can help us distinguish the excitation mechanism of the emission lines (photoionization for H II regions and PNe or collisional excitation for SNRs). As can be seen, all sources are within the range of [S II]/Hα = 0.4 - 1 which is typical for SNRs. What is intriguing though is that along the Hα/[N II] axis, the vast majority of the SNRs in irregular galaxies extend outside the region of Galactic SNRs, in contrast to the SNRs of spiral galaxies which occupy that specific region. The region of SNRs in irregular galaxies is shifted in the direction of higher Hα/[N II] ratios, indicating weaker emission in the [N II] lines. This could be either due to difference in excitation or difference in metallicity. However, since there is no particular difference between the [S II]/Hα ratios (a powerful shock-excitation indicator for SNRs) for the SNR populations in spiral and irregular galaxies, this suggests a difference in the Hα/[N II] line ratios of the SNR populations between different types of galaxies due to metallicity. Indeed, irregular galaxies typically present lower metallicities in relation to spiral galaxies (e.g.; ).
The [N II]/Hα ratio can also be used as a good secondary shock-heating gas indicator. Various studies (e.g. ; ) have revealed a strong correlation between the [S II]/Hα and [N II]/Hα ratios of SNRs. This is also evident in Fig. 2 (right panel) from the work of <cit.>: higher [S II]/Hα ratios present higher [N II]/Hα ratios. What is also interesting in that plot is that the SNRs in irregular galaxies present a flatter slope (lower [N II]/Hα ratios) in comparison to the SNRs in spiral galaxies. This could be interpreted either as differences in metallicity or as the presence of a non-uniform ISM, which is known to be the case in irregular galaxies. The spiral M 33 is placed in the region of irregulars most probably due to its lower metallicity in relation to the other spirals (see for metallicities of numerous galaxies).
§.§ Shock models for measuring abundances and shock velocities
Using sophisticated models we can measure elemental abundances and shock velocities. For example, <cit.> used the theoretical shock model grids of <cit.> in order to estimate the shock velocities of spectroscopically identified SNRs in six nearby galaxies (Fig. 3, left panel). These model grids were constructed for different values of shock velocities (horizontal lines, starting from 200 km/s at the top and going down to 1000 km/s, in steps of 50 km/s), magnetic field parameters (vertical lines on the grids), and chemical abundances. The observed emission line ratios of the SNRs were plotted on these grids, indicating that most of the observed sources have shock velocities less than 200 km/s (indicating evolved SNRs) while only a handful of them present high shock velocities.
Another example comes from the work of <cit.> where, using shock ionization models of <cit.> for various values of oxygen abundance and the ratio of oxygen to nitrogen abundance, they estimated the oxygen abundances of SNRs in M 81 (Fig. 3, middle and right panels).
§ SNRS IN THE OPTICAL: AGE - EVOLUTIONARY STAGE
One of the main objectives in investigating SNRs is their evolution, which can be estimated in terms of their age. However, age is hard to determine observationally; only for a handful of young or historical SNRs is the actual age known. On the other hand, their physical sizes can be measured and used as proxies of age. For that reason, many studies have adopted cumulative size distributions (hereafter CSDs) of SNRs (number of SNRs versus their diameter) in order to study the SNR evolutionary stages (e.g. ; ; ; ). These distributions are represented by power law forms which present different slopes (α) at different evolutionary stages: α=1, 2.5 and 3 for the free expansion, Sedov-Taylor and radiative phases respectively. It is expected that the more evolved SNRs become, the steeper the slope gets.
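The slope α is typically measured with a simple log-log fit to the CSD; a sketch with mock diameters (drawn here so that N(<D)∝D^2.5 by construction; not real measurements):

import numpy as np

rng = np.random.default_rng(1)
D = np.sort(rng.uniform(0, 1, 200) ** (1 / 2.5) * 60.0)   # mock Sedov-like sample (pc)
N_cum = np.arange(1, D.size + 1)                           # N(<D) at each measured D
alpha, _ = np.polyfit(np.log(D), np.log(N_cum), 1)
print(alpha)                                               # -> close to 2.5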
As an example, we quote in Fig. 4 (left plot) the SNR CSDs of the SNR populations in five well-studied galaxies: M 31, M 33, the Magellanic Clouds and the Milky Way (). It seems that SNRs in M 31 and M 33 are going through the Sedov-Taylor phase, SNRs in the MCs are in the free expansion phase, while SNRs in our Galaxy seem to be more evolved (radiative phase). However, these distributions should be interpreted with caution since the slopes may depict not only the evolutionary stage of the SNRs but also selection effects. For example, the density distribution of the ISM may play an important role in the evolution of an SNR (e.g. ) since the transition size between phases is expected to depend on the density of the surrounding ISM (along with the mass of the ejected material and the energy released by the supernova). Therefore, a dense environment would force an SNR to evolve more rapidly. Furthermore, various selection effects could influence such distributions (e.g. ); for example, low resolution studies could give incomplete SNR samples, or different wavebands could lead to measuring different sizes for an SNR.
The Σ - D (surface brightness vs diameter) relation is another way of probing the evolutionary stages of SNRs. This distance-independent relation has shown that there is a slight trend for the relatively small-diameter (young) SNRs to have higher surface brightnesses (e.g. Fig. 4, right panels). What also needs to be noticed in these plots is the lack of very young objects (<20-30 pc), and the existence of SNRs with diameters greater than 100 pc (not typical sizes for SNRs), denoting possible misidentification of superbubbles as SNRs.
Superbubbles
One of the things that may impede the detection of extragalactic SNRs is their misidentification with superbubbles (cavities of hot gas in the ISM, created either by multiple supernovae or/and the blown-out stellar winds of massive stars in OB associations). The shock-excited structure of these objects can endow them with moderate [S II]/Hα values (0.45 < [S II]/Hα < 0.6) (e.g. ; ), placing them within the range of the [S II]/Hα ratios of SNRs. The only way to discern these objects is to use their typically larger sizes (>100 pc), which are rare among known SNRs (e.g. ), and their slower expansion velocities (<100 km/s; e.g. ) compared to those of SNRs. On the other hand, their low-density environment is responsible for their rather faint X-ray emission (below that of SNRs: 10^34 - 10^36 erg/s) (e.g. ), while in the radio they exhibit mainly thermal emission (in contrast to the synchrotron emission seen in radio SNRs).
Progenitors
The identification of the SNR progenitors (Type Ia/II) attracts the interest of several studies since the type of the progenitor can modify the SNR evolution and its interplay with the ISM. Criteria for disentangling the progenitor types are the following: a) distinct types of objects (e.g. PWNe) or objects with characteristic spectroscopic signatures (oxygen-rich, Balmer dominated) can be readily classified as core-collapse SNRs; b) Type Ia SNRs statistically present a more spherical, mirror-symmetric morphology in X-rays than core-collapse SNRs (); c) OB associations favour the existence of core-collapse SNRs (; ); d) Type Ia SNRs present relatively low fluxes compared to Type II SNRs since they tend to be located in more isolated regions (e.g. ); e) metal abundances: Fe-rich ejecta correspond to Type Ia while O-rich ejecta correspond to Type II SNRs (; ), and Fe Kα line energy centroids: 6.4 keV for Type Ia, 6.7 keV for Type II (); and f) light echoes seem to have been quite useful in classifying the progenitor type of historical SNe, based on the acquisition of their scattered-light spectrum ().
Taking advantage of the knowledge of the SNRs' ancestry, we can yield more information about their evolution and their ambient medium. For example, the left panel of Fig. 5 shows the CSDs of all (black symbols), Type Ia (red symbols) and Type II (blue symbols) SNRs in M 33 (). The distributions are differentiated: the number of Type Ia SNRs is smaller than that of Type II SNRs, while the mean diameter of Type Ia remnants is larger than that of the CC remnants. This means that the majority of CC remnants may be embedded in denser ambient ISM (and hence evolve faster) than Type Ia remnants. Similarly, the surface brightness of Type Ia SNRs shows a stronger linear correlation with their sizes than that of CC SNRs in the Σ - D relation (e.g. Fig. 5, right panel), indicating a less dense ISM around Type Ia SNRs.
§ MULTI-WAVELENGTH PROPERTIES OF SNRS
§.§ Luminosity relations
Studies of various galaxies have shown that there is no strong correlation between the Hα and X-ray luminosities of SNRs (e.g. ; ; ). However, the most luminous X-ray SNRs tend to be the SNRs with the higher Hα luminosities, with the X-ray luminosities being lower than the Hα luminosities (e.g. Fig. 6, left plot). This lack of correlation could be interpreted either as having different materials in a wide range of temperatures (; ) or as inhomogeneous local ISM around SNRs (e.g. ). In the same way, lack of correlation seems also to be the case between L_Hα - L_radio and L_radio - L_X of SNRs (Fig. 6, middle and right plots).
With the same rationale we can interpret the lack of a significant correlation between the shock-sensitive line ratios of optically selected, X-ray emitting SNRs and their X-ray luminosities (Fig. 7, left plot). In a simple model one would expect that stronger shocks (higher such ratios) would correlate with higher L_X. However, because of the long cooling time of the X-ray emitting material, the shock velocity we are measuring does not necessarily correspond to the shock that generated the bulk of the X-ray emitting material.
As regularly noted, the density of the ambient ISM plays an important role in the morphology and the evolution of an SNR. Since higher density is a good predictor of X-ray detectability and the [S II] (6717Å/6731Å) ratio is density sensitive, one would expect a strong correlation between those two quantities. This can be seen in Fig. 7 (right plot), where this ratio is plotted against L_X for the SNRs in M 33. In most cases the higher the density in the [S II]-emitting zone is, the higher the X-ray luminosity is, with SNRs presenting line ratios ≲ 1.2 (densities ≳ 250 cm^-3) being nearly always detected in X-rays (Long et al. 2010).
§.§ Venn diagrams
Comparison of the emission of SNRs in different wavebands can provide information about the evolutionary stage of the sources and/or can illustrate selection effects. This can be shown in the form of Venn diagrams, where each circle denotes a different waveband and the overlap between X-ray-, optical-, and radio-selected SNRs can be illustrated. As can be seen in Fig. 8, there is a large gap between detection rates in different bands. In the X-rays in particular, most studies of X-ray-emitting SNRs outside the Local Group have been focused on the identification of X-ray counterparts to SNRs detected in other wavebands, rather than searching for new X-ray-emitting SNRs. Note, for example, the small detection rate of X-ray emitting SNRs in NGC 7793 (Fig. 8, middle panel) and its low overlap with the other wavebands. On the other hand, when systematic studies of X-ray emitting SNR populations are performed (Fig. 8, right panel), the detection rates increase significantly.
The differences in the detection rates are not only due to evolutionary effects (e.g. it is easier to detect evolved/older SNRs in the optical) but also to the limited sensitivity of the instruments. For example, one would expect SNRs in the same type of galaxies to present similar luminosities; however, there is a trend for brighter SNRs to be detected in more distant galaxies (as is well illustrated in Fig. 39 of ), with the fainter SNRs being missed. Furthermore, the detection rate of SNRs in different wavebands is strongly influenced by the properties of the surrounding medium of the source. For example, <cit.> point out that optical searches are more likely to detect SNRs located in regions of low diffuse emission, while radio and X-ray searches are more likely to detect SNRs in regions of high optical confusion. We must also note that, depending on the selection criteria used for detecting multi-wavelength SNRs, specific types of SNRs are excluded (e.g. Balmer-dominated/oxygen-rich SNRs in the optical, PWNe in the X-rays).
§ SNRS AND STAR FORMATION RATE (SFR)
Since core-collapse SNe are the endpoints of the evolution of the most massive stars, their SNRs are good indicators of the current SFR. Therefore, a linear relation between the number of SNRs and the SFR is expected (e.g. ). In Fig. 9 we show the number of photometrically detected SNRs in a sample of six nearby galaxies (NGC 3077, NGC 2403, NGC 4214, NGC 4395, NGC 4449 and NGC 5204; ) against the integrated Hα luminosity of each galaxy (which is used as an SFR proxy). This sample has been complemented by four more spiral galaxies (Kopsacheili et al., in preparation) in order to extend it to a wider range of environments. A linear relation is apparent, with a linear correlation coefficient of 0.7, illustrating a significant correlation between the number of SNRs and the SFR.
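A minimal sketch of such a correlation analysis (our own illustration; the galaxy names are from the text, but the SNR counts and Hα luminosities below are placeholders, not the measured values) could look as follows:

import numpy as np
from scipy.stats import pearsonr

galaxies = ["NGC 3077", "NGC 2403", "NGC 4214",
            "NGC 4395", "NGC 4449", "NGC 5204"]
n_snr = np.array([10, 35, 15, 12, 25, 8])                 # hypothetical counts
L_Ha = np.array([0.5, 3.0, 1.0, 0.8, 2.0, 0.4]) * 1e40    # hypothetical erg/s

r, p = pearsonr(L_Ha, n_snr)                    # linear correlation coefficient
slope, intercept = np.polyfit(L_Ha, n_snr, 1)   # least-squares linear fit
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"N_SNR per 1e40 erg/s of H-alpha luminosity: {slope * 1e40:.1f}")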
§ SNRS IN THE X-RAYS: ENVIRONMENTAL EFFECTS
Environmental effects (e.g. metallicity, ISM structure, star formation history) can affect the SNR populations of galaxies. In Fig. 10 (left plot), the average X-ray luminosity of the SNRs in various galaxies of different types is shown against the FIR luminosity of the host galaxy (). As expected, there is no correlation between these two quantities, since the X-ray luminosity comes from individual remnants while the FIR luminosity is a property of the entire host galaxy. However, we do see a systematic trend for more luminous SNRs to be associated with irregular galaxies, indicating a difference in the SNR population characteristics between the two samples. This could be due to the typically lower metallicity of irregular galaxies compared to spiral galaxies (e.g. ; ). Low abundances result in weaker stellar winds (e.g. ), which in turn leave higher-mass SN progenitors. More massive progenitors are expected to produce more massive ejecta and stronger shocks, which would lead to higher SNR X-ray luminosities. Another possible interpretation involves the non-uniform ISM that is often found in irregular galaxies, where local density enhancements (especially at star-forming regions) could result in more luminous SNRs.
Furthermore, in the same work () there was another strong indication of different SNR populations in different types of galaxies, obtained by comparing the luminosity distributions of X-ray emitting SNRs. They found that the numbers of SNRs in irregular galaxies are more consistent with a Magellanic Cloud-like SNR X-ray luminosity function (XLF), while those of spiral galaxies are more consistent with the SNR XLF of the spiral M33. Due to small number statistics, however, this result could not be quantified. <cit.> pinned down the environmental impact on SNR populations by comparing the X-ray luminosity functions of SNRs in Local Group galaxies (LMC, SMC, M31, and M33; Fig. 10, right panel). They revealed different slopes of the XLFs, which illustrate differences between the SNR populations in these galaxies.
§ CONCLUDING...
There has been a revolution in the investigation of extragalactic SNRs over the last decades, enabling the study of their physical properties in different environments. However, this is just the beginning: there is an imperative need to observe more galaxies, to greater depths and in a multi-wavelength context, in order to alleviate the selection effects that hamper the current studies of SNRs and to obtain a more complete picture of the SNR populations.
[Allen et al.2008]Allen2008 Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S. 2008, ApJS, 178, 20
[Badenes et al.2010]Badenes2010 Badenes, C., Maoz, D., Draine, B. T. 2010, MNRAS, 407, 1301
[Blair et al.2014]Blair2014 Blair, W., Chandar, R., Dopita, M. A. et al. 2014, ApJ, 788, 55
[Blair & Long2004]Blair2004 Blair, W. P., & Long, K. S. 2004, ApJS, 155, 101
[Blair & Long1997]Blair1997 Blair, W. P., & Long, K. S. 1997, ApJS, 108, 261
[Charles & Seward1995]Charles1995 Charles, P.A.,& Seward, F.D. 1995, Exploring the X-ray Universe (Cambridge: Cambridge Univ. Press)
[Chen et al.2000]Chen2000 Chen, C.-H. Rosie, Chu, You-Hua, Gruendl, R. A., Points, S. D. 2000, AJ, 119, 131
[Chu & Mac Low1990]Chu1990 Chu, You-Hua, Mac Low, Mordecai-Mark 1990, ApJ, 365, 510
[Dopita et al.2010]Dopita2010 Dopita, M.A., Blair, W. P., Long, K. S. et al. 2010, ApJ, 710, 964
[Dopita et al.1984]Dopita1984 Dopita, M. A., Binette, L., Dodorico, S., & Benvenuti, P. 1984, ApJ, 276, 653
[Franchetti et al.2012]Franchetti2012 Franchetti, N. A., Gruendl, R. A., Chu, You-Hua, Dunne, B. C. et al. 2012, AJ, 143, 85
[Garnett2002]Garnett2002 Garnett, D.R. 2002, ApJ, 581, 1019
[Gordon et al.1998]Gordon1998 Gordon, S. M., Kirshner, R. P., Long, K. S. et al. 1998, ApJS, 117, 89
[Green2014]Green2014 Green, D.A. 2014, BASI, 42, 47
[Hughes et al.1995]Hughes1995 Hughes, J. P., Hayashi, I., Helfand, D., et al. 1995, ApJ, 444, L81
[Lamers & Cassinelli1999]Lamers1999 Lamers, H. & Cassinelli, J.P. 1999, Introduction to Stellar Winds (Cambridge, UK: Cambridge University Press)
[Lasker1977]Lasker1977 Lasker, B.M. 1977, ApJ, 212, 390
[Lee et al.2015]Lee2015 Lee, M.G., Sohn, J., Lee, Jong H. et al. 2015, ApJ, 804, 63
[Lee & Lee2014a]Lee2014a Lee, J.H., & Lee, M.G. 2014a, ApJ, 786, 130
[Lee & Lee2014b]Lee2014b Lee, J.H., & Lee, M.G. 2014b, ApJ, 793, 134
[Leonidaki et al.2013]Leonidaki2013 Leonidaki, I., Boumis, P. & Zezas, A. 2013, MNRAS, 429, 189
[Leonidaki et al.2010]Leonidaki2010 Leonidaki, I. Zezas, A. & Boumis, P. 2010, ApJ, 725, 842
[Long et al.2010]Long2010 Long, K. S., Blair, W. P., Winkler, P. F. et al. 2010, ApJS, 187, 495
[Long et al.1990]Long1990 Long, K. S., Blair, W. P., Kirshner, R. P., Winkler, P. F. 1990, ApJS, 72, 61
[Lopez et al.2011]Lopez2011 Lopez, L. A., Ramirez-Ruiz, E., Huppenkothen, D., Badenes, C.,&Pooley, D. A. 2011, ApJ, 732, 114
[Lopez et al.2009]Lopez2009 Lopez, L. A., Ramirez-Ruiz, E., Badenes, C., et al. 2009, ApJ, 706, L106
[Maggi et al.2016]Maggi2016 Maggi, P.; Haberl, F.; Kavanagh, P. J. et al. 2016, A&A, 585, 162
[Matonick & Fesen1997]Matonick1997 Matonick, D. M., & Fesen, R. A. 1997, ApJS, 112, 49
[Mathewson & Clarke1973]Mathewson1973 Mathewson, D. S., & Clarke, J.N. 1973, ApJ, 180, 725
[Mills et al.1984]Mills1984 Mills, B. Y., Turtle, A. J., Little, A. G., Durdin, J. M. 1984, AuJPh, 37, 321
[Pagel & Edmunds1981]Pagel1981 Pagel, B. E. J. & Edmunds, M. G. 1981, ARA&A, 19, 77
[Pannuti et al.2011]Pannuti2011 Pannuti, T. G., Schlegel, E. M., Filipovic, M. D. et al. 2011, AJ, 142, 20
[Pannuti et al.2007]Pannuti2007 Pannuti, T. G., Schlegel, E. M., & Lacey, C. K. 2007, AJ, 133, 1361
[Pilyugin et al.2004]Pilyugin2004 Pilyugin, L. S., Vlchez, J. M., Contini, T. 2004, A&A, 425, 849
[Rest et al.2008]Rest2008 Rest, A., Matheson, T., Blondin, S. et al. 2008, ApJ, 280, 1137
[Rest et al.2005]Rest2005 Rest, A., Stubbs, C., Becker, A. C. et al. 2005, ApJ, 634, 1103
[Sedov1959]Sedov1959 Sedov L. I. 1959, Similarity and Dimensional Methods in Mechanics. Academic Press, New York
[Taylor1950]Taylor1950 Taylor G. I. 1950, Proc. R. Soc. Lond., 201, 159
[Vink2012]Vink2012 Vink, J. 2012, A&ARv, 20, 49
[Williams et al.1999]Williams1999 Williams, R.M., Chu, You-Hua, Dickel, J. R. et al. 1999, ApJS, 123, 467
[Woltjer1972]Woltjer1972 Woltjer, L. 1972, ARA&A, 10, 129
[Yamaguchi2014]Yamaguchi2014 Yamaguchi, H., Badenes, C., Petre, R. et al. 2014, ApJ, 785, 27
| The evolution of Supernova Remnants (SNRs) within a uniform Interstellar Medium (ISM) is well described by a four-stage model, introduced by <cit.>: first comes the free-expansion phase, where the ejecta of the Supernova sweep up matter as they expand freely, until the mass of the ejecta equals the mass of the swept-up material. The remnant then passes to the adiabatic phase, in which its evolution can be described by the Sedov-Taylor self-similar solution (; ). The first two phases (Fig. 1) depict the blast waves of the newly formed SNRs, which reach high shock velocities (5×10^3 - 10^4 km/s) and heat the material behind the shock front to temperatures up to 10^8 K, producing thermal X-rays (for a good review on X-ray emission in SNRs see ).
[Figure 1: Evolutionary stages of a Supernova Remnant]
The third stage (Fig. 1) occurs when the mass of the swept-up material has increased dramatically, forcing the velocity of the shock front to decrease down to ∼200 km/s. The temperature behind the shock front drops to ∼10^5 K and the energy losses due to recombination become significant, creating a cooling region behind the shock front and producing shock-heated, collisionally ionized species (such as [S II], or hydrogen recombination lines). This is the first time the SNR radiates in the optical band. The final stage of evolution occurs when the velocity of the shock reaches the sound speed of the ambient ISM and the SNR dissipates. Synchrotron radio emission is present throughout the life of the remnant, as it is produced mainly in the vicinity of the shock (e.g. ). From the aforementioned it is evident that the extent to which an SNR becomes an X-ray, optical or radio emitter depends on its evolutionary stage.
Apart from the evolutionary stage/age, the derived properties of SNRs depend on various other parameters, such as the environment/ISM (e.g. density, temperature), the progenitor properties (e.g. mass-loss rate, stellar wind density, composition) and selection effects. However, the details of the connection between these properties are poorly understood, while each of these parameters has its own signature in different wavebands: for example, different wavebands can yield information on the density and temperature of different gas phases, the SNR progenitors can be evaluated using the X-ray spectra of the ejecta of young SNRs, and optical SNRs are easier to detect in low-density/diffuse-emission regions than radio or X-ray SNRs. Therefore, it is essential to investigate SNRs in a multi-wavelength context in order to obtain a more complete picture of their evolution.
§.§ Milky Way vs Extragalactic SNRs: Pros and Cons
Galactic SNRs allow us to probe the physics of individual regions of the remnants and their interaction with the surrounding ISM. However, these studies are severely hampered by two crucial factors: Galactic absorption and distance uncertainties. Most of the Galactic SNRs are located in the Galactic plane, impeding the detection of optical or X-ray emitting SNRs, while due to distance uncertainties essential parameters such as sizes or luminosities cannot be estimated. Therefore, there are difficulties in conducting systematic studies or probing their evolution, although an adequate number of Galactic SNRs is in hand (294 sources; ).
On the other hand, the study of extragalactic SNRs presents many advantages: they can be regarded as lying at the same distance as the observed galaxy, the effects of internal Galactic absorption can be minimized (especially when we study face-on galaxies), and a wider range of environments and ISM parameters than in our Galaxy can be selected (e.g. different metallicities, star formation histories, masses), providing us this way with a more complete and representative picture of the SNR populations, while larger samples can be obtained with fewer observations. Although the sensitivity and spatial resolution in this kind of study are limited, it is imperative to sample large numbers of galaxies in order to understand the global properties and the systematics of the SNR populations as a function of their environment.
§.§ Selection criteria for detecting SNRs
The most widely used selection criterion for SNRs comes from the optical band and is the emission-line ratio [S II]/Hα > 0.4 (for Type II SNRs), which has been empirically shown to differentiate well shock-excited regions (SNRs) from photo-ionized ones (H II regions, Planetary Nebulae) (). Radio SNRs can easily be disentangled using their non-thermal synchrotron emission. Thermal X-ray emitting SNRs appear to have temperatures below 2 keV and a thermal plasma spectrum (). In the case of non-thermal X-ray emitting SNRs (Pulsar Wind Nebulae - PWNe), since they have very similar X-ray spectra to X-ray Binaries (XRBs), they cannot be identified based solely on their X-ray properties. Another diagnostic used to identify many new remnants (some of which could hardly be located in optical images) is the [Fe II] 1.644 μm emission line (). Nevertheless, to what extent an SNR can be detected in a multi-wavelength context depends on its evolutionary stage and/or its ambient medium.
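These criteria are simple enough to encode in a few lines. The schematic below summarizes them (our own sketch, not an actual pipeline); the [S II]/Hα > 0.4 and kT < 2 keV thresholds follow the text, while the radio spectral-index cut of α < -0.2 for "non-thermal" is an assumed illustrative value.

def classify_snr_candidate(sii_ha=None, radio_alpha=None,
                           thermal_xray=None, kT_keV=None):
    """Return the wavebands in which a source satisfies SNR criteria."""
    tags = []
    # Optical: shock excitation enhances [S II] relative to H-alpha.
    if sii_ha is not None and sii_ha > 0.4:
        tags.append("optical")
    # Radio: non-thermal synchrotron spectrum, S_nu ~ nu^alpha
    # (alpha < -0.2 is an assumed threshold, not stated in the text).
    if radio_alpha is not None and radio_alpha < -0.2:
        tags.append("radio")
    # X-rays: thermal plasma spectrum with kT below ~2 keV.
    if thermal_xray and kT_keV is not None and kT_keV < 2.0:
        tags.append("X-ray")
    return tags

print(classify_snr_candidate(sii_ha=0.6, radio_alpha=-0.5,
                             thermal_xray=True, kT_keV=0.8))
# -> ['optical', 'radio', 'X-ray']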
http://arxiv.org/abs/1701.07722v1 | 20170126144401 | Exotic Spin Phases in Two Dimensional Spin-orbit Coupled Models: Importance of Quantum Fluctuation Effects | [
"Chao Wang",
"Ming Gong",
"Yongjian Han",
"Guangcan Guo",
"Lixin He"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
[email protected]
[email protected]
CAS Key Laboratory of Quantum Information, Chinese Academy of Sciences, University of Science and Technology of China, Hefei, 230026, China
Synergetic Innovation Center of Quantum Information
and Quantum Physics, University of Science and Technology of China, Hefei, 230026, China
We investigate the phase diagrams of the effective spin models derived from Fermi-Hubbard and Bose-Hubbard models with Rashba spin-orbit coupling,
using string bond states, one of the quantum tensor network states methods.
We focus on the role of quantum fluctuation effects in stabilizing the exotic spin phases in these models.
For boson systems, when the ratio between the inter-particle and intra-particle interactions λ > 1,
the out-of-plane ferromagnetic (FM) and antiferromagnetic (AFM) phases obtained from quantum simulations are the same as those obtained from the classical model.
However, when λ < 1 the quantum order-by-disorder effect reduces the classical in-plane XY-FM and XY-vortex phases to the quantum X/Y-FM and X/Y-stripe phases.
The spiral phase and skyrmion phase can still be realized in the presence of quantum fluctuations.
For the Fermi-Hubbard model, the quantum fluctuation energies are always important in the whole parameter regime.
A general picture to understand the phase diagrams from a symmetry point of view is also presented.
71.10.Fd,75.10.Jm,64.60.Cn,
67.85.-d
Exotic Spin Phases in Two Dimensional Spin-orbit Coupled Models: Importance of Quantum Fluctuation Effects
Lixin He
18 November 2016
==========================================================================================================
Ultracold atoms in optical lattices<cit.> provide an excellent toolbox for simulating various spin models, such as
the Heisenberg<cit.> and Kitaev<cit.> models,
and have become one of the central concepts in quantum simulation.
Along this line some preliminary results have been obtained<cit.>. The simplest ferromagnetic (FM) or
antiferromagnetic (AFM) Heisenberg spin models can be obtained in the deep Mott phase regime<cit.> when the Hubbard model possesses rotational
symmetry.
The recent interest in searching for exotic spin structures in optical lattices was stimulated by the experimental realization of spin-orbit coupling (SOC), which can
be regarded as the simplest non-Abelian gauge potential in nature<cit.>.
In these cases, the effective spin models may become more complicated due to the appearance of exotic terms, e.g., the
Dzyaloshinskii-Moriya (DM)<cit.> interactions and their deformations.
The DM interaction has already been widely investigated
in solid materials<cit.>,
and it has now resurfaced in ultracold atoms due to its
experimental flexibility, e.g., the SOC interactions can be made much stronger than their counterparts in real materials.
Results based on classical simulations<cit.>, Ginzburg-Landau theory<cit.>, dynamical mean-field theory<cit.> and spin wave expansion<cit.> have unveiled rich phase structures, including spin spirals and skyrmions, in the
presence of the frustrated interactions caused by the SOC: there is strong competition between the spin-independent tunneling and the SOC-induced
spin-flipping tunneling. However, the role of quantum fluctuation effects in the quantum phase diagrams of these models has not been thoroughly
investigated. Whether and how these phases can survive in the presence of quantum fluctuations is still unclear.
In this Letter, we investigate the quantum phase diagrams of the effective spin models with Rashba SOC, derived from Bose-Hubbard (BH)
model and Fermi-Hubbard (FH) model on a 12×12 square lattice, using
recently developed string bond states, one of the tensor network states (TNS) methods<cit.>.
The TNS methods provide promising tools to investigate quantum systems with frustrated interactions.
Details of the calculations are presented in Supplementary materials <cit.>.
We find that, whereas in some parameter regions the classical spin model can give qualitatively correct
ground states, in other regions the quantum effects are crucial for obtaining the correct ground states.
In particular, for fermion systems the quantum effects are always important.
Effective Spin Models. For a BH model with Rashba SOC, the Hamiltonian can be written as
H_BH=ℋ_0+U/2∑_i,σn_iσ(n_iσ-1)+λ U∑_in_i↑n_i↓,
where U and λ U are the on-site intra-particle and inter-particle interactions, and ℋ_0 =-t∑_⟨ i,j⟩Ψ^†_iexp[-iα e_z·(σ⃗× e_ij)]Ψ_j. Here Ψ^†=(b_i↑^†,
b_i↓^†), with b_iσ^† the creation operator at site i with spin σ = ↑, ↓, and e_ij the
unit vector from site i to j.
In the first Mott lobe (U≫ t), each site contains only one particle,
and the effective spin model can be written as
H=J∑_⟨ i,j⟩_x[cos(2α)/λ S_i^xS_j^x+1/λ S_i^yS_j^y+(2λ-1)cos(2α)/λ S_i^z S_j^z
-sin(2α)(S_i^xS^z_j-S_i^zS_j^x)]+J∑_⟨ i,j⟩_y[1/λ S_i^xS_j^x + cos(2α)/λ S_i^yS_j^y
+(2λ-1)cos(2α)/λ S_i^zS_j^z - sin(2α)(S_i^yS^z_j-S_i^zS_j^y)],
where J=-4t^2/U<0, and
⟨ i,j⟩_μ means the nearest neighbors in the μ=x, y directions.
In this model α determines the strength of SOC, and λ represents the anisotropy of the exchange interactions.
Similarly, in the FH model the Hamiltonian reads
H_FH= ℋ_0+∑_i Un_i↑n_i↓, where ℋ_0 has the same form as in the boson model with Ψ^† replaced by (f_i↑^†, f_i↓^†),
where f_iσ^† is the fermion creation operator at site i with spin σ = ↑, ↓. The corresponding effective spin model equals that in Eq. <ref> at λ = 1, except that now J = 4t^2/U > 0 due to the Pauli exclusion principle. Hereafter we set 4t^2/U = 1 for convenience.
The following order parameters are used to distinguish different phases. Firstly, the static magnetic structure factor is defined as
[μ =x, y, z, i = (i_x, i_y)],
S^μ(k)=4/L^2∑_i,j⟨ S^μ_i· S^μ_j⟩ e^i[(i_x-j_x)k_x+(i_y-j_y)k_y]/L.
on an L× L square lattice. For the FM and AFM phases along the μ-direction, S^μ(k) has peaks at k = (0, 0) and
(π, π), respectively,
while in the stripe phase the strongest peaks occur at k = (0, π) or (π, 0).
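For a classical spin snapshot the correlator factorizes, ⟨S_i^μ S_j^μ⟩ = S_i^μ S_j^μ, and S^μ(k) reduces to the squared modulus of the Fourier transform of the spin field. The minimal sketch below (ours, for illustration; in the TNS calculation the correlators are instead evaluated with the variational wavefunction) classifies a stripe pattern by the position of its dominant peak:

import numpy as np

L = 12
ix, iy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
Sx = 0.5 * (-1.0) ** ix          # example snapshot: (pi, 0) stripes in S^x

def structure_factor(S, kx, ky):
    """S^mu(k) = (4/L^2) |sum_i S_i exp(i k.r_i)|^2 for a product state."""
    f = np.sum(S * np.exp(1j * (kx * ix + ky * iy)))
    return 4.0 / L**2 * np.abs(f) ** 2

for name, k in [("(0,0)", (0.0, 0.0)),
                ("(pi,0)", (np.pi, 0.0)),
                ("(pi,pi)", (np.pi, np.pi))]:
    print(name, structure_factor(Sx, *k))
# Only the (pi, 0) amplitude is nonzero, identifying the stripe order.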
We also define the spiral and skyrmion order parameters in real space as<cit.>,
Sp_μ(i,j)=16 ⟨θ_μ^iθ_μ^j⟩, Sk(i,j)= 64 ⟨ v_s^i v_s^j⟩,
where θ_x^i= ( S_i× S_i+ e_x)_y and θ_y^i= ( S_i× S_i+ e_y)_x are related to the relative planar spin angles for
spins at sites i and i+ e_μ. To account for the three-dimensional spin alignment,
we define the spin volume constructed by the spins at three neighboring sites
as v_s^i= S_i·( S_i+ e_x× S_i+ e_y).
In the co-planar spiral phases, v_s^i = 0 exactly, but it is nonzero in the skyrmion phases.
To determine the long-range order of the system, we calculate the order parameters as
Sp_μ= (1/L^2)∑_i |Sp_μ(i,i+l)|,    Sk= (1/L^2)∑_i |Sk(i,i+l)|,
where l = (L/2, L/2) to make |i-j| as large as possible and i is averaged over the whole lattice for better numerical accuracy.
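The sketch below (again ours, for illustration) evaluates these real-space order parameters for a classical spin texture; a coplanar spiral in the x-z plane running along x, i.e. a spiral-2-like state, gives Sp_x > 0 with Sp_y = Sk = 0. Periodic wrapping is used for simplicity, whereas the TNS calculations are done on an open lattice.

import numpy as np

L = 12
ix = np.arange(L)[:, None] * np.ones((1, L))   # i_x coordinate of each site
phi = 2 * np.pi / 3 * ix                       # pitch k_0 = 2*pi/3 along x
# Spin field S[i_x, i_y] = (S^x, S^y, S^z), spiraling in the x-z plane.
S = 0.5 * np.stack([np.cos(phi), np.zeros_like(phi), np.sin(phi)], axis=-1)

Sxp = np.roll(S, -1, axis=0)           # S_{i + e_x}
Syp = np.roll(S, -1, axis=1)           # S_{i + e_y}

theta_x = np.cross(S, Sxp)[..., 1]     # (S_i x S_{i+e_x})_y
theta_y = np.cross(S, Syp)[..., 0]     # (S_i x S_{i+e_y})_x
v_s = np.einsum("ijk,ijk->ij", S, np.cross(Sxp, Syp))   # spin volume

l = L // 2                             # displacement l = (L/2, L/2)
def corr(a):
    return np.mean(np.abs(a * np.roll(np.roll(a, -l, 0), -l, 1)))

Sp_x, Sp_y, Sk = 16 * corr(theta_x), 16 * corr(theta_y), 64 * corr(v_s)
print(f"Sp_x = {Sp_x:.3f}, Sp_y = {Sp_y:.3f}, Sk = {Sk:.3f}")
# -> Sp_x = 0.750, Sp_y = 0.000, Sk = 0.000 for this coplanar spiral.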
Phase Diagram for Bosons. The phase diagram is presented in Fig. <ref>, and the corresponding order parameters are given in Fig. <ref>
for λ=1.5 and λ=0.8 with α∈ [0, π/2].
The spin model in Eq. <ref> possesses some unique symmetries, which are crucial
for understanding this phase diagram. Firstly, the Hamiltonian in Eq. <ref> is invariant under the operation α→π-α and S_i^x,y→ -S_i^x,y, which is equivalent to
the transformation U^†_↑ b_i↑ U_↑ = -b_i↑, where U_↑ = exp(iπ∑_i n_i↑)
in the original BH model. This symmetry directly leads to U^†_↑ H(α, λ) U_↑ = H(π -α, λ), i.e., the phase
diagram should be symmetric about α = π/2. Therefore we only show the results for α∈ [0, π/2].
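This isospectrality is easy to verify numerically. The following is a minimal exact-diagonalization sketch (our own illustration, not the TNS code used in this work) that builds Eq. (2) on a 2x2 open cluster and checks that H(α, λ) and H(π-α, λ) share the same spectrum; the same check confirms the exact mapping between α=0 and α=π/2 discussed below.

import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, site, n=4):
    """Embed a single-site operator in the 2^n-dimensional Hilbert space."""
    ops = [I2] * n
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def H(alpha, lam, J=-1.0):
    """Eq. (2) on sites 0=(0,0), 1=(1,0), 2=(0,1), 3=(1,1); J=-1 for bosons."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    Sx = [site_op(sx, i) for i in range(4)]
    Sy = [site_op(sy, i) for i in range(4)]
    Sz = [site_op(sz, i) for i in range(4)]
    Hm = np.zeros((16, 16), dtype=complex)
    for i, j in [(0, 1), (2, 3)]:          # x-bonds
        Hm += J * (c / lam * Sx[i] @ Sx[j] + 1 / lam * Sy[i] @ Sy[j]
                   + (2 * lam - 1) * c / lam * Sz[i] @ Sz[j]
                   - s * (Sx[i] @ Sz[j] - Sz[i] @ Sx[j]))
    for i, j in [(0, 2), (1, 3)]:          # y-bonds
        Hm += J * (1 / lam * Sx[i] @ Sx[j] + c / lam * Sy[i] @ Sy[j]
                   + (2 * lam - 1) * c / lam * Sz[i] @ Sz[j]
                   - s * (Sy[i] @ Sz[j] - Sz[i] @ Sy[j]))
    return Hm

lam, alpha = 0.8, 0.3
ev = lambda a: np.linalg.eigvalsh(H(a, lam))
print(np.allclose(ev(alpha), ev(np.pi - alpha)))   # True: symmetric about pi/2
print(np.allclose(ev(0.0), ev(np.pi / 2)))         # True: alpha=0 and alpha=pi/2
                                                   # map onto each other exactly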
We first discuss the phase diagram at the four corners, where α∼ 0 or π/2 and λ≪ 1 or λ≫ 1.
When α =0, i.e., in the absence of SOC, the original spin model reduces to
an effective XXZ spin model, with J_x= J_y = -1/λ and
J_z= -(2λ-1)/λ.
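This can be read off directly from Eq. (2): setting α = 0 makes cos(2α)=1 and sin(2α)=0, so the DM terms drop out and both bond directions become identical, leaving (with J = -1 for bosons)

H_{\alpha=0} \;=\; -\sum_{\langle i,j\rangle}\left[\frac{1}{\lambda}\left(S_i^x S_j^x + S_i^y S_j^y\right) + \frac{2\lambda-1}{\lambda}\,S_i^z S_j^z\right],
\qquad J_x = J_y = -\frac{1}{\lambda},\quad J_z = -\frac{2\lambda-1}{\lambda}.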
When λ > 1, |J_z| >|J_x| and the ground state is a Z-FM state, i.e., all spins are ferromagnetically aligned along the z direction.
Our TNS calculations show that for small α≲0.15 the ground state is still Z-FM,
as determined by the order parameters shown in Fig. <ref>a.
In this region the quantum simulations yield the same ground state as the classical one, suggesting a minor role of the quantum fluctuation effect.
Interestingly, at α = π/2 the model can be mapped to
the α=0 case via the symmetry transformation 𝒰 = ∏_i e^-i(π/2) i_x σ_x e^-i(π/2) i_yσ_y,
i.e., 𝒰^† H_π/2𝒰 = H_0.
Using this transformation, we immediately see that the ground state near α = π/2 is a Z-AFM.
We therefore see that these two limits (α=0 and α=π/2) have exactly the same energies,
and the quantum effects are small in both phases, which is confirmed by the numerical results.
However, the situation is dramatically different in the case of λ <1, where the in-plane exchange energy dominates.
The order parameters calculated by TNS at λ=0.8 are shown in Fig. <ref>.
In the region 0 < α/π < 0.13, the ground state is a FM phase,
with all spins polarized along either the x or
the y direction, which we denote as the X/Y-FM phase.
Remarkably, this phase is very different from what is obtained from the
classical spin model, which gives a rotationally invariant
FM state <cit.> with all spins lying in the x-y plane (dubbed the XY-FM phase).
To understand this difference, we note that the in-plane rotational symmetry is not inherent
in the original Hamiltonian, which possesses only C_4 symmetry.
The rotational invariance of the ground state in the classical model is due to an accidental degeneracy,
because the ground state of the classical model happens to have S_z=0.
When quantum fluctuation is introduced, it breaks the accidental degeneracy
and restores the C_4 symmetry of the original Hamiltonian, thereby
singling out a ground state with lower energy than the classical solution.
This is known as the order-by-disorder mechanism<cit.>.
Again, we can apply the symmetry transformation 𝒰 = ∏_i e^-i(π/2) i_xσ_x e^-i(π/2) i_yσ_y near α=π/2,
which yields an X/Y-stripe phase (as confirmed by the numerical results) for the quantum spin model,
in contrast to the 2×2 vortex state obtained from classical simulations.
The line λ = 0 can in principle not be reached due to the energy-costless double occupation. However, this limit can still be defined
in the sense of lim_λ→ 0λ H_λ= - ∑_⟨ i,j⟩ _x [cos(2α) (S_i^x S_j^x+S_i^z S_j^z)+S_i^y S_j^y]
-∑_⟨ i,j⟩ _y [cos(2α)(S_i^y S_j^y +S_i^z S_j^z)+S_i^x S_j^x].
Obviously, when α = π/4,
lim_λ→ 0λ H_λ,α = π/4= -(∑_⟨ i,j⟩_xS_i^y S_j^y+∑_⟨ i,j⟩_y S_i^x S_j^x),
which gives a compass model due to the strong coupling between the spins and the bond directions<cit.>.
This model cannot be solved exactly; however, it can be shown exactly that the ground state is 2^L+1-fold degenerate on an L × L square
lattice<cit.>. It therefore corresponds to a critical boundary between the X/Y-FM and X/Y-stripe phases, since any deviation from
this critical point by varying the parameters (λ and α) breaks the degeneracy and opens an energy gap.
The classical and quantum simulations yield the same critical point.
We next try to understand the spiral and skyrmion phases in the presence of strong DM
interaction. The order parameters are shown in Fig. <ref> (and the corresponding spin textures are shown in the lower panel of Fig. <ref>).
The spiral-1 phase has two degenerate states, spiraling along either the e_x+ e_y or the e_x- e_y direction.
For these two cases the strongest peaks in the structure factor S( k) appear at k = ±(k_0,k_0) and
k = ±(k_0,-k_0), respectively, where k_0 can be smoothly tuned by α and λ. However, due to
the finite size used in the simulation, only k_0=2π/3, π/2, π/3 and π/6 are observed,
which are commensurate with the system size.
In this phase the skyrmion order Sk∼ 0, whereas Sp_x = Sp_y ≠ 0 are the strongest among all the order parameters.
The spiral-2 phase has two degenerate states: one is a spin spiral along the x direction, and the other is along the y direction.
Therefore, only one of the order parameters, either
Sp_x or Sp_y (see Fig. <ref>b), is nonzero.
In contrast, in the spiral-3 phase Sp_x=Sp_y, and both are nonzero.
The spiral-3 phase is also observed in the classical model; compared to the classical case,
the spiral-3 region is greatly suppressed in the quantum model.
In the skyrmion phase, the structure factor exhibits its strongest peaks at
k=(± k_0, 0) and (0, ± k_0).
Furthermore, the non-coplanar spin alignment induces a finite skyrmion order Sk.
The skyrmion phase is of Néel type<cit.> and has a period of 3× 3
(light purple region in Fig. <ref>) or larger (dark purple region in Fig. <ref>),
which is consistent with the numerical results for the classical spin model <cit.>.
To understand the quantum effects in a more quantitative way, we plot in Fig. <ref>a-c the ground-state energies per site for
λ=1.3, 0.8, 0.3, obtained from classical simulations (E_c)
and from full quantum-mechanical TNS simulations (E_q).
In the insets we also show the energy differences
δ E_fluc = E_c - E_q.
Obviously E_c≥ E_q, thus δ E_fluc≥ 0.
From Fig. <ref>a we find that at α =0 and λ=1.3,
E_c = -0.61537 and E_q = -0.61538 on the 12×12 lattice,
while the exact classical energy for an infinite system is
E_c^∞= -(2λ -1)/(2λ) = -0.61538.
This agreement can be understood by applying the Holstein-Primakoff (HP) transformation to the XXZ model (see <cit.>): due to the absence
of a pairing (or condensate) term, δ E_fluc = 0 exactly. In fact, the XXZ model can be used
as a benchmark for the TNS method, which shows great accuracy for this problem.
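This can be made explicit with a short worked step (our own illustration of the standard HP expansion). Writing S_i^z = S - a_i^\dagger a_i and S_i^+ \approx \sqrt{2S}\,a_i and keeping quadratic terms, the α=0 model becomes

H_{\rm XXZ} \;\approx\; E_c \;+\; S\sum_{\langle i,j\rangle}\left[J_\perp\left(a_i^{\dagger}a_j + a_j^{\dagger}a_i\right) - J_z\left(a_i^{\dagger}a_i + a_j^{\dagger}a_j\right)\right],
\qquad J_\perp = -\frac{1}{\lambda},\quad J_z = -\frac{2\lambda-1}{\lambda}.

No anomalous terms a_i^{\dagger}a_j^{\dagger} are generated, so the magnon vacuum, i.e. the classical Z-FM state, remains an exact eigenstate and δ E_fluc vanishes identically.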
As shown in Fig. <ref>a,
δ E_fluc∼ 0 in the whole Z-FM and Z-AFM phase regimes, even when α≠0.
In the spiral phase, δ E_fluc∼ 0.01 - 0.02 is more significant.
The fluctuation energy increases with decreasing λ. At λ=0.8, δ E_fluc∼0.01 in the X/Y-FM and X/Y-stripe
phases, which is about 4% of the total energy.
However, even though this energy difference does not seem very large,
the ground states predicted by the classical and quantum models are totally different.
Full quantum treatments are therefore required to capture the correct physics in these phases.
δ E_fluc also differs between phases, and is most significant in the skyrmion phase, where δ E_fluc∼0.02.
When λ decreases further to 0.3, δ E_fluc increases dramatically: it is about 0.1 - 0.3 in the X/Y-FM and X/Y-stripe phases, which
accounts for almost 10% - 20% of the total energy. The strong quantum fluctuation at small λ suppresses the
spiral-3 and skyrmion phases compared to the classical phase diagram (see Fig. <ref>).
Phase Diagram for Fermions. For the spin model derived from the FH model, we have J = 4t^2/U>0 and λ=1.
Therefore α serves as the only adjustable parameter in this model.
The calculated phase diagram and the order parameters from the TNS method are presented in
Fig. <ref>. Similar to the phase diagrams of the bosonic system, we find an X/Y-AFM phase for α/π < 0.08 and an X/Y-stripe phase for
α/π∈ [0.34, 0.5] (mirror symmetry about α = π/2 is assumed). As before, the classical model predicts rotationally
invariant AFM and vortex phases in the x-y plane, which reduce to the X/Y-AFM and X/Y-stripe phases due to the order-by-disorder effect.
Between the AFM and stripe phases there are spiral phases and one skyrmion phase. The spiral phases can be distinguished by the periods
p = 12/5, 3, 4 that can be accommodated by our simulation sizes. In these phases the skyrmion order is almost zero and the spiral
order dominates. However, when α/π∈ [0.29,0.34] the skyrmion order becomes important, although the spiral order is still nonzero,
similar to that in Fig. <ref>b.
The quantum fluctuation energy is much more pronounced in the FH model
than in the BH model for all phases, as depicted in Fig. <ref>d.
For α= 0 we find E_c = -0.5 and E_q = -0.6579, thus
δ E_fluc = 0.1579.
In the AFM and stripe phases, δ E_fluc is about 30% of the total energy.
The large quantum fluctuation energy in the AFM state is due to the vast Hilbert space
near S=0 that is energetically close to the ground state.
δ E_fluc is slightly smaller in the spiral phase and skyrmion phase, but still significant.
It is very interesting to note that the Z-AFM state of the BH model, however, has a very small δ E_fluc,
in sharp contrast with the AFM state derived from the FH model. To understand this difference, we note that the Z-AFM state of the BH model can be
mapped to the Z-FM state via symmetry transformations, and the latter has a small quantum fluctuation energy.
Therefore, even though the two AFM states appear very similar to each other at the classical level,
their physics is entirely different.
More fundamentally, this difference is rooted in the different statistical properties of bosons and fermions.
Conclusion. We address the role of quantum fluctuation effects on the possibility of observing exotic spin structures in the spin-orbit coupled BH and FH
models on a square lattice using the TNS method. While for the out-of-plane FM and AFM phases the classical and quantum solutions are the same, we find
that the quantum order-by-disorder effect reduces the classical in-plane XY-FM and XY-vortex phases to the quantum X/Y-FM and X/Y-stripe phases. Moreover, the spiral phase and skyrmion phase
can still be found even in the presence of quantum fluctuations. The structure of the phase diagrams is also understood from a symmetry point of view.
Acknowledgement.
This work was funded by the Chinese National Science Foundation No. 11374275, 11474267,
the National Key Research and Development Program of China No. 2016YFB0201202.
M.G. acknowledges the support by the National Youth Thousand Talents Program No. KJ2030000001,
the USTC start-up funding No. KY2030000053 and the CUHK RGC Grant No. 401113.
The numerical calculations have been done on the USTC HPC facilities.
M.G. thanks W. L. You for valuable discussions about the compass model.
[McKay & DeMarco2011]McKay11 McKay, D. C., & DeMarco, B. 2011, Rep. Prog. Phys., 74, 054401
[Bloch et al.2008]Bloch08 Bloch, I., Dalibard, J., & Zwerger, W. 2008, Rev. Mod. Phys., 80, 885
[Greiner et al.2002]Greiner02 Greiner, M., Mandel, O., Esslinger, T., Hansch, T. W., & Bloch, I. 2002, Nature, 415, 39
[Jaksch et al.1998]Jaksch98 Jaksch, D., Bruder, C., Cirac, J. I., Gardiner, C. W., & Zoller, P. 1998, Phys. Rev. Lett., 81, 3108
[Kuklov & Svistunov2003]kuklov2003counterflow Kuklov, A., & Svistunov, B. 2003, Phys. Rev. Lett., 90, 100401
[Duan et al.2003]Duan03 Duan, L.-M., Demler, E., & Lukin, M. D. 2003, Phys. Rev. Lett., 91, 090402
[Greif et al.2013]Greif13 Greif, D., Uehlinger, T., Jotzu, G., Tarruell, L., & Esslinger, T. 2013, Science, 340, 1307
[Kim et al.2010]kim2010quantum Kim, K., Chang, M.-S., Korenblit, S., Islam, R., Edwards, E., Freericks, J., Lin, G.-D., Duan, L.-M., & Monroe, C. 2010, Nature, 465, 590
[Simon et al.2011]simon2011quantum Simon, J., Bakr, W. S., Ma, R., Tai, M. E., Preiss, P. M., & Greiner, M. 2011, Nature, 472, 307
[Liu et al.2009]Liu09 Liu, X.-J., Borunda, M. F., Liu, X., & Sinova, J. 2009, Phys. Rev. Lett., 102, 046402
[Lin et al.2009]Spielman09 Lin, Y.-J., Compton, R. L., Perry, A. R., Phillips, W. D., Porto, J. V., & Spielman, I. B. 2009, Phys. Rev. Lett., 102, 130401
[Wang et al.2010]Zhai10 Wang, C., Gao, C., Jian, C.-M., & Zhai, H. 2010, Phys. Rev. Lett., 105, 160403
[Lin et al.2011]Lin11 Lin, Y.-J., Jimenez-Garcia, K., & Spielman, I. B. 2011, Nature, 471, 83
[Ho & Zhang2011]Zhang11 Ho, T.-L., & Zhang, S. 2011, Phys. Rev. Lett., 107, 150403
[Wu et al.2011]Wu11 Wu, C.-J., Mondragon-Shem, I., & Zhou, X.-F. 2011, Chin. Phys. Lett., 28, 097102
[Dalibard et al.2011]Dalibard11 Dalibard, J., Gerbier, F., Juzeliūnas, G., & Öhberg, P. 2011, Rev. Mod. Phys., 83, 1523
[Campbell et al.2011]Spielman11 Campbell, D. L., Juzeliūnas, G., & Spielman, I. B. 2011, Phys. Rev. A, 84, 025602
[Cheuk et al.2012]Lawrence12 Cheuk, L. W., Sommer, A. T., Hadzibabic, Z., Yefsah, T., Bakr, W. S., & Zwierlein, M. W. 2012, Phys. Rev. Lett., 109, 095302
[Wang et al.2012]Zhai12 Wang, P., Yu, Z.-Q., Fu, Z., Miao, J., Huang, L., Chai, S., Zhai, H., & Zhang, J. 2012, Phys. Rev. Lett., 109, 095301
[Li et al.2012]Yun12 Li, Y., Pitaevskii, L. P., & Stringari, S. 2012, Phys. Rev. Lett., 108, 225301
[Galitski & Spielman2013]Spielman13 Galitski, V., & Spielman, I. B. 2013, Nature, 494, 49
[Qu et al.2013]Gong13 Qu, C., Hamner, C., Gong, M., Zhang, C., & Engels, P. 2013, Phys. Rev. A, 88, 021604
[Hamner et al.2014]Gong14 Hamner, C., Qu, C., Zhang, Y., Chang, J., Gong, M., Zhang, C., & Engels, P. 2014, Nat. Commun., 5, 4023
[Jiménez-García et al.2015]Spielman15 Jiménez-García, K., LeBlanc, L. J., Williams, R. A., Beeler, M. C., Qu, C., Gong, M., Zhang, C., & Spielman, I. B. 2015, Phys. Rev. Lett., 114, 125301
[Li et al.2016]Li16 Li, J., Huang, W., Shteynas, B., Burchesky, S., Top, F., Su, E., Lee, J., Jamison, A. O., & Ketterle, W. 2016, Phys. Rev. Lett., 117, 185301
[Wu et al.2016]ChenShuai16 Wu, Z., Zhang, L., Sun, W., Xu, X.-T., Wang, B.-Z., Ji, S.-C., Deng, Y., Chen, S., Liu, X.-J., & Pan, J.-W. 2016, Science, 354, 83
[Huang et al.2016]Huang16 Huang, L., Meng, Z., Wang, P., Peng, P., Zhang, S.-L., Chen, L., Li, D., Zhou, Q., & Zhang, J. 2016, Nat. Phys., 12, 540
[Dzyaloshinsky1958]DM58 Dzyaloshinsky, I. 1958, J. Phys. Chem. Solids, 4, 241
[Moriya1960]DM60 Moriya, T. 1960, Phys. Rev., 120, 91
[Sergienko & Dagotto2006]Dagotto06 Sergienko, I. A., & Dagotto, E. 2006, Phys. Rev. B, 73, 094434
[Cao et al.2009]Cao09 Cao, K., Guo, G.-C., Vanderbilt, D., & He, L. 2009, Phys. Rev. Lett., 103, 257201
[Mühlbauer et al.2009]Muhlbauer09 Mühlbauer, S., Binz, B., Jonietz, F., Pfleiderer, C., Rosch, A., Neubauer, A., Georgii, R., & Böni, P. 2009, Science, 323, 915
[Mochizuki et al.2010]Masahito10 Mochizuki, M., Furukawa, N., & Nagaosa, N. 2010, Phys. Rev. Lett., 104, 177206
[Tokura & Seki2010]Yoshinori10 Tokura, Y., & Seki, S. 2010, Adv. Mater., 22, 1554
[Yu et al.2010]Yu10 Yu, X. Z., Onose, Y., Kanazawa, N., Park, J. H., Han, J. H., Matsui, Y., Nagaosa, N., & Tokura, Y. 2010, Nature, 465, 901
[Heinze et al.2011]Stefan11 Heinze, S., von Bergmann, K., Menzel, M., Brede, J., Kubetzka, A., Wiesendanger, R., Bihlmayer, G., & Blugel, S. 2011, Nat. Phys., 7, 713
[Seki et al.2012]Seki12 Seki, S., Yu, X. Z., Ishiwata, S., & Tokura, Y. 2012, Science, 336, 198
[Nagaosa & Tokura2013]Nagaosa13 Nagaosa, N., & Tokura, Y. 2013, Nat. Nanotechnol., 8, 899
[Wilson et al.2014]Wilson14 Wilson, M. N., Butenko, A. B., Bogdanov, A. N., & Monchesky, T. L. 2014, Phys. Rev. B, 89, 094411
[Radić et al.2012]Radic12 Radić, J., Di Ciolo, A., Sun, K., & Galitski, V. 2012, Phys. Rev. Lett., 109, 085303
[Cole et al.2012]Cole12 Cole, W. S., Zhang, S., Paramekanti, A., & Trivedi, N. 2012, Phys. Rev. Lett., 109, 085302
[Gong et al.2015]Gong15 Gong, M., Qian, Y., Yan, M., Scarola, V. W., & Zhang, C. 2015, Sci. Rep., 5, 10050
[Roszler et al.2006]Roszler06 Roszler, U., Bogdanov, A. N., & Pfleiderer, C. 2006, Nature, 442, 797
[Rowland et al.2016]Rowland16 Rowland, J., Banerjee, S., & Randeria, M. 2016, Phys. Rev. B, 93, 020404
[He et al.2015]he2015bose He, L., Ji, A., & Hofstetter, W. 2015, Phys. Rev. A, 92, 023630
[Sun et al.2015]Ye15 Sun, F., Ye, J., & Liu, W.-M. 2015, Phys. Rev. A, 92, 043609
[Sun et al.2016a]Ye16 Sun, F., Ye, J., & Liu, W.-M. 2016a, arXiv:1603.00451
[Sun et al.2016b]Ye16-2 Sun, F., Ye, J., & Liu, W.-M. 2016b, arXiv:1601.01642
[Vidal2003]Vidal03 Vidal, G. 2003, Phys. Rev. Lett., 91, 147902
[Verstraete & Cirac2004]Verstraete04 Verstraete, F., & Cirac, J. I. 2004, arXiv:cond-mat/0407066
[Sandvik & Vidal2007]sandvik07 Sandvik, A. W., & Vidal, G. 2007, Phys. Rev. Lett., 99, 220602
[Schuch et al.2008]schuch08 Schuch, N., Wolf, M. M., Verstraete, F., & Cirac, J. I. 2008, Phys. Rev. Lett., 100, 040501
[Song & Clay2014]Song14 Song, J.-P., & Clay, R. T. 2014, Phys. Rev. B, 89, 075101
[Supplemental Material]supp See Supplemental Material, which includes Refs. <cit.>, for details of the string bond states method and the Holstein-Primakoff transformation.
[Zhang et al.2015]Cole15 Zhang, S., Cole, W. S., Paramekanti, A., & Trivedi, N. 2015, Annu. Rev. Cold Atoms Mol., 3, 135
[Villain et al.1980]villain80 Villain, J., Bidaux, R., Carton, J.-P., & Conte, R. 1980, J. Phys. (Paris), 41, 1263
[Shender1982]shender82 Shender, E. F. 1982, Sov. Phys. JETP, 56, 178
[Nussinov & van den Brink2015]Nussinov15 Nussinov, Z., & van den Brink, J. 2015, Rev. Mod. Phys., 87, 1
[Dorier et al.2005]Dorier05 Dorier, J., Becca, F., & Mila, F. 2005, Phys. Rev. B, 72, 024448
[You et al.2010]You10 You, W.-L., Tian, G.-S., & Lin, H.-Q. 2010, J. Phys. A: Math. Theor., 43, 275001
[Brzezicki & Oleś2013]Brzezicki13 Brzezicki, W., & Oleś, A. M. 2013, Phys. Rev. B, 87, 214421
[Kézsmárki et al.2015]kezsmarki2015neel Kézsmárki, I., Bordács, S., Milde, P., Neuber, E., Eng, L., White, J., Rønnow, H. M., Dewhurst, C., Mochizuki, M., Yanai, K., et al. 2015, Nat. Mater., 14, 1116
[Wang et al.2013]wangzhen13 Wang, Z., Han, Y., Guo, G.-C., & He, L. 2013, Phys. Rev. B, 88, 121105
[Liu et al.2015]Liu2015 Liu, W., Wang, C., Li, Y., Lao, Y., Han, Y., Guo, G.-C., & He, L. 2015, J. Phys.: Condens. Matter, 27, 085601
[Swendsen & Wang1986]swendsen86 Swendsen, R. H., & Wang, J.-S. 1986, Phys. Rev. Lett., 57, 2607
[Geyer1991]geyer_book Geyer, C. J. 1991, in Computer Science and Statistics: Proceedings of the 23rd Symposium on the Interface (Interface Foundation)
[Holstein & Primakoff1940]HP40 Holstein, T., & Primakoff, H. 1940, Phys. Rev., 58, 1098
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.08040v1 | 20170127125510 | CO2 infrared emission as a diagnostic of planet-forming regions of disks | [
"Arthur Bosman",
"Simon Bruderer",
"Ewine F. van Dishoeck"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.EP",
"astro-ph.GA"
] |
Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, The Netherlands
Max-Planck-Institut für Extraterrestrische Physik, Gießenbachstrasse 1, 85748 Garching, Germany
The infrared ro-vibrational emission lines from organic molecules in the inner regions of protoplanetary disks are unique probes of the physical and chemical structure of planet forming regions and the processes that shape them. These observed lines are mostly interpreted with local thermal equilibrium (LTE) slab models at a single temperature.
The non-LTE excitation effects of carbon dioxide (CO2) are studied in a full disk model to evaluate: (i) what the emitting regions of the different CO2 ro-vibrational bands are; (ii) how the CO2 abundance can best be traced with CO2 ro-vibrational lines in future JWST data; and (iii) what the excitation and abundances tell us about the physics and chemistry of the inner disk. CO2 is a major ice component and its abundance can potentially test models with migrating icy pebbles across the iceline.
A full non-LTE CO2 excitation model has been built starting from experimental and theoretical molecular data. The characteristics of the model are tested using non-LTE slab models. Subsequently the CO2 line formation has been modelled using a two-dimensional disk model representative of T-Tauri disks where CO2 is detected in the mid-infrared by the Spitzer Space Telescope.
The CO2 gas that emits in the 15 μm and 4.5 μm regions of the spectrum is not in LTE and arises in the upper layers of disks, pumped by infrared radiation.
The v_2 15 μm feature is dominated by optically thick emission for most of the models that fit the observations and increases linearly with source luminosity.
Its narrowness compared with that of other molecules stems from a combination of the low rotational excitation temperature (∼ 250 K) and the inherently narrower feature for CO2. The inferred CO2 abundances derived for observed disks range from 3× 10^-9 to 1× 10^-7 with respect to total gas density for typical gas/dust ratios of 1000, similar to earlier LTE disk estimates. Line-to-continuum ratios are low, of order a few %, stressing the need for high signal-to-noise (S/N > 300) observations for individual line detections.
The inferred CO2 abundances are much lower than those found in interstellar ices (∼ 10^-5), indicating a reset of the chemistry by high temperature reactions in the inner disk. JWST-MIRI with its higher spectral resolving power will allow a much more accurate retrieval of abundances from individual P- and R-branch
lines, together with the ^13CO2 Q-branch at 15 μm. The ^13CO2 Q-branch is particularly sensitive to possible enhancements of CO2 due to sublimation of migrating icy pebbles at the iceline(s). Prospects for JWST-NIRSpec are discussed as well.
A.D. Bosman et al.
CO2 infrared emission as a diagnostic
of planet-forming regions of disks
Arthur D. Bosman 1 Simon Bruderer 2 Ewine F. van Dishoeck 1,2
January 27, 2017
=========================================================================
§ INTRODUCTION
Most observed exo-planets orbit close to their parent star <cit.>. The atmospheres of these
close-in planets show a large diversity in molecular composition
<cit.> which must be set during planet formation and
thus be representative of the natal protoplanetary
disk. Understanding the chemistry of the inner, planet-forming regions
of circumstellar disks around young stars will thus give us another
important piece of the puzzle of planet formation. Prime molecules for
such studies are H2O, CO, CO2 and CH4 which are the
major oxygen- and carbon-bearing species that set the overall C/O ratio <cit.>.
The chemistry in the inner disk, i.e., its inner few AU, differs from
that in the outer disk. It lies within the H2O and CO2
icelines so all icy planetesimals are sublimated. The large range of
temperatures (100–1500 K) and densities (10^10-10^16 cm^-3)
then makes for a diverse chemistry across the inner disk region
<cit.>. The driving cause for this
diversity is high temperature chemistry: some molecules such as
H2O and HCN have reaction barriers in their formation pathways
that make it difficult to produce the molecule in high abundances at
temperatures below a few hundred Kelvin. As soon as the temperature is high
enough to overcome these barriers, formation is fast and they become
major reservoirs of oxygen and nitrogen. An interesting example is
formed by the main oxygen bearing molecules, H2O and CO2:
the gas phase formation of both these molecules includes the OH
radical. At temperatures below ∼ 200 K the formation of
CO2 is faster, leading to high gas phase abundances, up to
∼ 10^-6 with respect to (w.r.t.) total gas density, in regions where
CO2 is not frozen out. When the temperature is high enough,
H2O formation will push most of the gas phase oxygen into
H2O and the CO2 abundance drops to ∼10^-8
<cit.>. Such chemical transitions can have
strong implications for the atmospheric content of gas giants formed
in these regions if most of their atmosphere is accreted from the
surrounding gas.
A major question is to what extent the inner disk abundances indeed
reflect high temperature chemistry or whether continuously migrating
and sublimating icy planetesimals and pebbles at the icelines
replenish the disk atmospheres <cit.>. Interstellar ices
are known to be rich in CO2, with typical abudances of 25%
w.r.t. H2O ice, or about 10^-5 w.r.t. total gas density
<cit.>.
Cometary ices show similarly high CO2/H2O abundance ratios
<cit.>. Of all molecules with high ice
abundances, CO2 shows the largest contrast between interstellar
ice and high temperature chemistry abundances, and could therefore be
a good diagnostic of its chemistry.
<cit.> argue based on Spitzer Space Telescope
data that CO2 is not inherited from the interstellar medium but
is reset by chemistry in the inner disk. However, that analysis
used a Local Thermodynamic Equilibrium (LTE) CO2 excitation model coupled with a disk model and did
not investigate the potential of future instruments, which could be
more sensitive to a contribution from sublimating planetesimals. Here
we re-consider the retrieval of CO2 abundances in the inner
regions of protoplanetary disks using a full non-LTE excitation and
radiative transfer disk model, with a forward look to the new
opportunities offered by the James Webb Space Telescope (JWST).
The detection of infrared vibrational bands seen from CO2,
C2H2 and HCN, together with high energy rotational lines of
OH and H2O, was one of the major discoveries of the
Spitzer Space Telescope
<cit.>. These
data cover wavelengths in the 10–35 μm range at low spectral resolving power
of λ/Δλ=600. Complementary ground-based infrared
spectroscopy of molecules such as CO, OH, H2O, CH4,
C2H2 and HCN also exists at shorter wavelengths in the 3–5
μm range
<cit.>. The high spectral resolving power of
R=25000-10^5 for instruments like Keck/NIRSPEC and VLT/CRIRES have
resolved the line profiles and have revealed interesting kinematical
phenomena, such as disk winds in the inner disk regions
<cit.>. Further advances are expected with VLT/CRIRES+ as well as through modelling of current data with more detailed physical models.
Protoplanetary disks have a complex physical structure <cit.> and putting all physics, from magnetically
induced turbulence to full radiative transfer, into a single model is
not feasible. This means that simplifications have to be made.
During the Spitzer era, the models used to explain the
observations were usually LTE
excitation slab models at a single temperature. With 2D physical
models such as RADLITE <cit.> and with full 2D
physical-chemical models such as Dust and Lines
<cit.> or Protoplanetary Disk
Model <cit.> it is now possible to fully take
into account the large range of temperatures and densities as well as
the non-local excitation effects. For example, it has been shown that
it is important to include radiative pumping introduced by hot
(500-1500 K) thermal dust emission of regions just behind the inner
rim. This has been done for H2O by <cit.> who
concluded that to explain the mid-infrared water lines observed with
Spitzer, water is located in the inner ∼1 AU in a region
where the local gas-to-dust ratio is 1–2 orders of magnitude higher
than the interstellar medium (ISM) value. <cit.> performed a
protoplanetary disk parameter study to see how disk parameters affect
the H2O emission. <cit.> compared a LTE disk model
analysis using RADLITE with slab models and concluded that, while
inferred abundance ratios were similar with factors of a few, there
could be orders of magnitude differences in absolute abundances
depending on the assumed emitting area in slab models <cit.>. <cit.> concluded that the CO
infrared emission from disks around Herbig stars was rotationally cool
and vibrationally hot due to a combination of infrared and
ultraviolet (UV) pumping fields (see also
). <cit.> modelled the non-LTE
excitation and emission of HCN concluding that the emitting area
for mid-infrared lines can be 10 times larger in disks than the
assumed emitting area in slab models due to infrared pumping. Our
study of CO2 is along similar lines as that for HCN.
As CO2 cannot be observed through rotational transitions in the
far-infrared and submillimeter, because of the lack of a permanent dipole
moment, it must be observed through its vibrational transitions at
near- and mid-infrared wavelengths. The CO2 in our own
atmosphere makes it impossible to detect these CO2 lines from
astronomical sources from the ground, and even at altitudes of 13 km
with SOFIA. This means that CO2 has to be observed from space.
CO2 has been observed by Spitzer in protoplanetary
disks through its v_2 Q-branch at 15 μm where many individual
Q-band lines combine into a single broad Q-branch feature at low
spectral resolution <cit.>. These gaseous CO2
lines were first detected in high mass protostars and shocks with
the Infrared Space Observatory
<cit.>. CO2
also has a strong band around 4.3 μm due to the v_3 asymmetric
stretch mode. This mode has high Einstein A coefficients and should
thus be easily observable, but it has not been seen from CO2
gas towards protoplanetary disks or protostars, in contrast with the
corresponding feature in CO2 ice <cit.>.
The CO2 v_2 Q-branch profile is slightly narrower than those
of C2H2 and
HCN observed at similar wavelengths. These results suggest that CO2 is absent (or strongly
under-represented) in the inner, hottest regions of the disk. Full
disk LTE modeling of RNO 90 by <cit.> using RADLITE
showed that the observations of this disk favour a low CO2
abundance (10^-4 w.r.t. H2O, ≈ 10^-8 w.r.t. total gas density). The slab models by <cit.> indicate smaller
differences between the CO2 and H2O abundances, although
CO2 is still found to be 2 to 3 orders of magnitude lower in
abundance.
To properly analyse CO2 emission from disks, a full non-LTE
excitation model of the CO2 ro-vibrational levels has to be
made, using molecular data from experiments and detailed quantum
calculations. This model can then be used to perform a simple slab
model study to see under which conditions non-LTE effects may be
important. These same slab model tests are also used to check the
influences of the assumptions made in setting up the ro-vibrational
excitation model. Such CO2 models have been developed in the past for
evolved AGB stars <cit.> and shocks <cit.>, but not applied to disks.
Our CO2 excitation model is coupled with a full protoplanetary disk
model computed with DALI to investigate the importance of non-LTE excitation, infrared
pumping and dust opacity on the emission spectra. In addition, the
effects of varying some key disk parameters such as source luminosity
and gas/dust ratios on line fluxes and line-to-continuum ratios are
investigated. Finally, Spitzer data for a set of T-Tauri
disks are analysed to derive the CO2 abundance structure using
parametrized abundances.
JWST will allow a big leap forward in the observing
capabilities at near- and mid-infrared wavelengths, where the inner
planet-forming regions of disks emit most of their lines. The
spectrometers on board JWST, NIRSPEC and MIRI
<cit.> with their higher
spectral resolving power (R≈ 3000) compared to Spitzer
(R = 600) will not only separate many blended lines
<cit.> but also boost line-to-continuum ratios
allowing detection of individual P, Q and R-branch lines thus
giving new information on the physics and chemistry of the inner
disk. Here we simulate the emission spectra of CO2 and its
^13CO2 isotopologue from a protoplanetary disk at JWST
resolution. We investigate which of these lines are most useful for
abundance determinations at different disk heights and point out the
importance of detecting the ^13CO2 feature. We also investigate
which features could signify high CO2 abundances around the
CO2 iceline due to sublimating planetesimals.
§ MODELLING CO2 EMISSION
§.§ Vibrational states
The structure of a molecular emission spectrum depends on the
vibrational level energies and transitions between these levels that
can be mediated by photons. Fig. <ref> shows
the vibrational energy level diagram for CO2 from the HITRAN
database <cit.>. Lines denote the transitions that are
dipole allowed. Colours denote the part of the spectrum where features
will show up. This colour coding is the same as in
Fig. <ref> where a model CO2 spectrum is
presented.
CO2 is a linear molecule with a ^1 Σ^+_g ground
state. It has a symmetric, v_1, and an asymmetric, v_3, stretching
mode (both of the Σ type) and a doubly degenerate bending mode,
v_2 (Π type) with an angular momentum, l. A vibrational state
is denoted by these quantum numbers as: v_1v_2^lv_3. The
vibrational constant of the symmetric stretch mode is very close to
twice that of the bending mode. Due to this
resonance, states with the same value for 2v_1 + v_2 and the same
angular momentum mix. This mixing leads to multiple vibrational levels
that have different energies in a process known as Fermi
splitting. The Fermi split levels have the same notation as the
unmixed state with the highest symmetric stretch quantum number, v_1
and numbered in order of decreasing energy.[For example:
Fermi splitting of the theoretical 02^00 and 10^00 levels leads
to two levels denoted as 10^00(1) and 10^00(2) where the former
has the higher energy.] This leads to the vibrational state
notation of: v_1v_2^lv_3(n) where n is the numbering of the
levels. This full designation is used in
Fig. <ref>. For the rest of the paper we will
drop the (n) for the levels where there is only one variant.
The number of vibrational states in the HITRAN database is much larger
than the set of states used here. Not all of the vibrational states
are needed to model CO2 in a protoplanetary disk because some
the higher energy levels can hardly be excited, either collisionally or
with radiation, so they should not have an impact on the
emitted line radiation. We adopt the same levels as used for AGB stars
in <cit.> and add to this set the 03^30 vibrational
level.
§.§ Rotational ladders
The rotational ladder of the ground state is given in
Fig. <ref>. All states up to J=80 in each
vibrational state are included; this rotational level corresponds to
an energy of approximately 3700 K (2550 cm^-1) above the
vibrational state energy. The rotational structure of CO2 is
more complex than that of a linear diatomic like CO. This is due
to the fully symmetric wavefunction of CO2 in the ground
electronic state. This means that all states of CO2 need to be
fully symmetric to satisfy Bose-Einstein statistics. As a result, not
all rotational quantum numbers J exist in all of the vibrational
states: some vibrational states miss all odd or all even J
levels. There are also additional selections on the Wang parity of the
states (e,f). For the ground vibrational state this means that
only the rotational states with even J numbers are present and that
the parity of these states is fixed to e.
The rotational structure is summarized in Table <ref>.
The states with v_2 = v_3 = 0 all have the same rotational structure as the ground vibrational state.
The 01^10(1) state has both even and odd J levels starting
at J = 1. The even J levels have f parity, while the odd J
levels have e parity. In general, for levels with v_2 ≠ 0 and
v_3 = 0, the rotational ladder starts at J = v_2 with e
parity, with the parity alternating in the rotational ladder with
increasing J. For v_3 ≠ 0 and v_2 = 0, only odd J levels
exist if v_3 is odd, whereas only even J levels exist if v_3 is
even. All levels have an e parity. For v_2 ≠ 0 and v_3 ≠
0, the rotational ladder is the same as for the v_2 ≠ 0 and v_3
= 0 case if v_3 is even, whereas the parities relative to this case
are switched if v_3 is odd.
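These selection rules can be condensed into a short Python sketch (illustrative only, not code from this work; the J cutoff of 80 follows the text and l is taken equal to v_2, as for the levels used here):

```python
def rotational_ladder(v2, v3, jmax=80):
    """Enumerate the allowed (J, Wang parity) pairs of a CO2 vibrational
    state, following the symmetry rules summarized in the table; l is
    taken equal to v2, as for the levels used in this model."""
    if v2 == 0 and v3 == 0:
        # Ground-state-like ladder: even J only, all 'e' parity.
        return [(j, 'e') for j in range(0, jmax + 1, 2)]
    if v2 > 0 and v3 == 0:
        # Ladder starts at J = v2 with 'e' parity; parities alternate.
        return [(j, 'e' if (j - v2) % 2 == 0 else 'f')
                for j in range(v2, jmax + 1)]
    if v2 == 0 and v3 > 0:
        # Only odd (even) J for odd (even) v3; all 'e' parity.
        return [(j, 'e') for j in range(v3 % 2, jmax + 1, 2)]
    # v2 != 0 and v3 != 0: as the (v2, 0) case, parities swapped for odd v3.
    swap = {'e': 'f', 'f': 'e'}
    base = [(j, 'e' if (j - v2) % 2 == 0 else 'f')
            for j in range(v2, jmax + 1)]
    return base if v3 % 2 == 0 else [(j, swap[p]) for j, p in base]

# The first levels of the 01^10 bending state: J=1 'e', J=2 'f', ...
print(rotational_ladder(1, 0, jmax=4))  # [(1,'e'), (2,'f'), (3,'e'), (4,'f')]
```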
§.§ Transitions between states
To be able to properly model the emission of infrared lines from
protoplanetary disks non-LTE effects need to be taken into
account. The population of each level is determined by the balance of
the transition rates, both radiative and collisional. The radiative
transition rates are set by the Einstein coefficients and the ambient
radiation field. Einstein coefficients for CO2 have been well
studied, both in the laboratory and in detailed quantum chemical
calculations <cit.>. These are
collected in several databases for CO2 energy levels and
Einstein coefficients such as the Carbon Dioxide Spectroscopic Database
(CDSD) <cit.> and as part of large molecular databases
such as HITRAN <cit.> and GEISA
<cit.>. Here the ^12CO2 and ^13CO2 data from
the HITRAN database are used. It should be noted that the differences
between the three databases are small for the lines
considered here, within a few % in line intensity and less than 1%
for the line positions.
The HITRAN database gives the energies of the ro-vibrational levels
above the ground state and the Einstein A coefficients of
transitions between them. Only transitions above a certain intensity
at 296 K are included in the databases. The weakest lines included in
the line list are 13 orders of magnitude weaker than the strongest
lines. With expected temperatures in the inner regions of disks
ranging from 100–1000 K, no important lines should be missed
due to this intensity cut. In the final, narrowed down set of states
all transitions that are dipole allowed are accounted for.
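For reference, the relevant part of the HITRAN line list can be retrieved with the hapi Python interface; the sketch below is illustrative (the table name and wavenumber window are our choices, and network access to the HITRAN server is assumed):

```python
# Fetch the 12CO2 lines around the 15 micron v2 band from HITRAN using
# hapi (pip install hitran-api). CO2 is HITRAN molecule 2; isotopologue 1
# is 12CO2 (isotopologue 2 would give 13CO2).
from hapi import db_begin, fetch, getColumns

db_begin('hitran_data')               # local cache directory (our choice)
fetch('CO2_v2', 2, 1, 580.0, 780.0)   # table, molecule, isotopologue, numin, numax (cm^-1)

nu, sw, a, elower = getColumns('CO2_v2', ['nu', 'sw', 'a', 'elower'])
print(f'{len(nu)} lines between 580 and 780 cm^-1; '
      f'max Einstein A = {max(a):.2e} s^-1')
```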
Collisional rate coefficients between vibrational states are collected
from literature sources. The measured rate of the relaxation of the
01^10 to the 00^00 state by collisions with H2 from
<cit.> is used. Vibrational relaxation of the 00^01 state
due to collisions with H2 is taken from <cit.>. For
the transitions between the Fermi split levels the rate by
<cit.> for collisions between CO2 with
CO2 is used with a scaling for the decreased mean molecular
mass. Although the data used here supersede those in <cit.>, that paper does give a sense of the uncertainties of the experiments. The different experiments in <cit.> usually agree within a factor of two and the numbers used here from <cit.> and <cit.> fall within the spread for their respective transitions. It is thus expected that the accuracy of the individual collisional rate coefficients is better than a factor of two.
No literature information is available for pure rotational transitions
induced by collisions of CO2 with other molecules. We therefore
adopt the CO rotational collisional rate coefficients from the
LAMDA database <cit.>. Due to the
lack of a dipole moment, the critical density for rotational transitions
of CO2 is expected to be very low (n_crit < 10^4
cm^-3) and thus the exact collisional rate
coefficients are not important for the higher density environments
considered here. A method similar to
<cit.> is used to create the full
state-to-state collisional rate coefficient matrix. The method is
described in Appendix <ref>.
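To give a flavour of that approach (the actual procedure is described in Appendix <ref>), a common recipe distributes a vibrational band rate over final rotational levels in proportion to their Boltzmann populations; the sketch below is a simplified illustration with made-up numbers, not the code used here:

```python
import numpy as np

def state_to_state_rates(k_vib, e_rot, g_rot, tkin):
    """Distribute a total vibrational rate coefficient k_vib (cm^3 s^-1)
    over final rotational levels with energies e_rot (K) and weights
    g_rot, proportionally to their Boltzmann populations at tkin."""
    pops = g_rot * np.exp(-e_rot / tkin)
    return k_vib * pops / pops.sum()

# Spread an illustrative 01^10 -> 00^00 relaxation rate over the even-J
# ladder of the ground state (B(CO2) ~ 0.39 cm^-1, i.e. ~0.56 K).
J = np.arange(0, 81, 2)
rates = state_to_state_rates(k_vib=1e-14, e_rot=0.56 * J * (J + 1),
                             g_rot=2 * J + 1, tkin=300.0)
print(rates.sum())   # recovers the total vibrational rate, 1e-14
```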
§.§ CO2 spectra
Fig. <ref> presents a slab model spectrum of CO2 computed using the RADEX program <cit.>. A density of 10^16 cm^-3 was used to ensure close
to LTE populations of all levels. A column density of 10^16
cm^-2 was adopted, close to the observed value derived by
<cit.>, with a temperature of 750 K and linewidth of 1 km
s^-1. The transitions are labelled at the approximate location of
their Q-branch. The spectrum shows that, due to the Fermi splitting
of the bending and stretching modes, the 15 μm feature is very
broad, stretching from slightly shorter than 12 μm to slightly
longer than 20 μm in absorption in the Earth's atmosphere. For astronomical sources, the lines between 14 and 16 μm are more realistic targets.
Two main emission features are seen in the spectrum. The strong
feature around 4.3 μm is caused by the radiative decay of the
00^01 vibration level to the vibrational ground state. As a
Σ-Σ transition this feature does not have a Q-branch,
but the R and P branches are the brightest features in the
spectrum in LTE at 750 K. The second strong feature is at 15
μm. This emission is caused by the radiative decay of the 01^10
vibrational state into the ground state. It also contains small
contributions of the 02^20 → 01^10 and 03^30 →
02^20 transitions. This feature does have a Q-branch which has been
observed both in absorption <cit.> and emission
<cit.>. The CO2 Q-branch is found to
be narrow compared to the other Q-branches of HCN and C2H2
measured in the same sources.
The narrowness is partly due to the fact that the
CO2 Q-branch is intrinsically narrower than the same feature for HCN.
This has to do with the change in the rotational constant
between the ground and excited vibrational states. A comparison between
Q-branch profiles for CO2 and HCN for two optically thin
LTE models is presented in Fig. <ref>. The lighter
HCN has a full width half maximum (FWHM) that is about 50%
larger than that of CO2. The difference in the observed width of the feature is generally larger <cit.>: the HCN feature is typically twice as wide as the CO2 feature. Thus the inferred temperature from the CO2 Q-branch from the observations is low compared to the temperature inferred from the HCN feature. The intrinsically narrower CO2 Q-branch amplifies the difference, making it more striking.
§.§ Dependence on kinetic temperature, density and radiation field
The excitation of, and the line emission from, a molecule depend
strongly on the environment of the molecule, especially the kinetic
temperature, radiation field and collisional partner density. In
Fig. <ref> slab model spectra of CO2 for
different physical parameters are compared. The dependence on the
radiation field is modelled by including a blackbody field of 750 K
diluted with a factor W: ⟨ J_ν⟩ = W
B_ν(T_rad) with T_rad = 750 K. When testing
the effects of the kinetic temperature and density, no incident
radiation field is included (W = 0).
Fig. <ref> shows that at a constant density of
10^12 cm^-3 the 4.3 μm band is orders of magnitude weaker than the 15 μm band. The 15 μm band increases in strength
and also in width, with increasing temperature as higher J levels of
the CO2 v_2 vibrational mode can be collisionally
excited. The spectrum at 1000 K in particular shows additional Q-branches
from transitions originating from the higher energy 10^00(1) and 10^00(2) vibrational levels at 14 and 16 μm.
In the absence of a pumping radiation field, collisions are needed to populate
the higher energy levels. With enough collisions, the excitation temperature becomes
equal to the kinetic temperature. The density at which the excitation temperature of a
level reaches the kinetic temperature depends on the critical density:
n_c = A_ul/K_ul for a two-level system, where A_ul is the
Einstein A coefficient from level u to level l and K_ul is
the collisional rate coefficient between these levels. For densities
below the critical density the radiative decay is much faster than the
collisional excitation and de-excitation. This means that the line
intensity scales as n/n_c. Above the critical density collisional
excitation and de-excitation are fast: the intensity is then no longer
dependent on the density. The critical density of the 15 μm band
is close to 10^12 cm^-3, so there is little change in this
band when increasing the density above this value. However,
decreasing the density below the critical value results in
a strong reduction of the band strength. The critical density of the 4.3
μm feature is close to 10^15 cm^-3 so below this the lines are orders of magnitude weaker than would be expected from LTE.
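This behaviour follows directly from two-level statistical equilibrium; a small sketch with illustrative numbers (E_ul ≈ 960 K for the 15 μm band, and A_ul and K_ul chosen so that n_crit = A_ul/K_ul = 10^12 cm^-3) shows the excitation temperature approaching the kinetic temperature only above the critical density:

```python
import numpy as np

def excitation_temperature(n, tkin, a_ul, k_ul, e_ul, g_u, g_l):
    """Two-level excitation temperature vs collider density n (cm^-3).
    Statistical equilibrium: n_u (A_ul + C_ul) = n_l C_lu, with
    C_ul = n k_ul and C_lu from detailed balance."""
    c_ul = n * k_ul
    c_lu = c_ul * (g_u / g_l) * np.exp(-e_ul / tkin)
    ratio = c_lu / (a_ul + c_ul)                  # n_u / n_l
    return e_ul / np.log((g_u / g_l) / ratio)

# Illustrative numbers: E_ul ~ 960 K (667 cm^-1, the 15 micron band),
# A_ul ~ 1 s^-1 and k_ul ~ 1e-12 cm^3 s^-1, i.e. n_crit ~ 1e12 cm^-3.
for n in [1e10, 1e12, 1e14]:
    tex = excitation_temperature(n, tkin=750.0, a_ul=1.0, k_ul=1e-12,
                                 e_ul=960.0, g_u=2.0, g_l=1.0)
    print(f'n = {n:.0e} cm^-3 -> Tex = {tex:.0f} K')
# Tex climbs from ~160 K at n = 1e10 to the kinetic 750 K well above n_crit.
```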
Adding a radiation field has a significant impact on both the 4.3 and
15 μm features. The radiation of a black body of 750 K peaks
around 3.8 μm so the 4.3 μm/15 μm flux ratio in these
cases is larger than the flux ratio without radiation field for
densities below the critical density of the 4.3 μm lines. Another
difference between the collisionally excited and radiatively excited
states is that in the latter case vibrational levels that cannot be
directly excited from the ground state by photons, such as the
10^00(1) and 10^00(2) levels, are barely populated at all.
§ CO2 EMISSION FROM A PROTOPLANETARY DISK
To properly probe the chemistry in the inner disk from infrared line
emission one needs to go beyond slab models with their inherent
degeneracies. A protoplanetary disk model such as that used here includes
more realistic geometries and contains a broad range of physical
conditions constrained by observational data. Information
can be gained on the location and extent of the emitting CO2
region as well as the nature of the excitation process. By comparing
with observational data, molecular abundances can be inferred as
function of location. A critical aspect of the models is the infrared
continuum radiation field, which has to be calculated accurately
throughout the disk. This means that detailed wavelength dependent
dust opacities need to be included and dust temperatures have to be
calculated on a very fine grid, since the pumping radiation can
originate in a different part of the disk than the lines, e.g., in the near-infrared from regions close to the inner rim. The dust is
also important in absorbing some of the line flux, effectively hiding
parts of the disk from our view.
In this section, the CO2 spectra are modelled using the DALI
(Dust and Lines) code <cit.>. The focus is
on emission from the 15 μm lines that have been observed with
Spitzer and will be observable with
JWST-MIRI. Trends in the shape of the v_2 Q-branch and
the ratios of lines in the P- and R-branches are investigated and
predictions are presented. First the model and its parameters are
introduced and the results of one particular model are used as
illustration. Finally the effects of various parameters on the
resulting line fluxes are shown, in particular source luminosity and
gas/dust ratio. As in <cit.>, the model is based on the
source AS 205 (N) but should be representative of a typical T-Tauri
disk.
§.§ Model setup
Details of the full DALI model and benchmark tests are reported in <cit.> and <cit.>. Here we use the same parts of DALI as in <cit.>. The model starts with the input of a dust and gas surface density structure. The gas and dust structures are parametrized with a surface density profile
Σ(R) = Σ_c (R/R_c)^-γ exp[-(R/R_c)^(2-γ)]
and vertical distribution
ρ(R,Θ) = Σ(R)/(√(2π) R h(R)) exp[-((π/2 - Θ)/h(R))^2/2],
with the scale height angle h(R) = h_c(R/R_c)^ψ. The values of the parameters for the AS 205 (N) disk are taken from <cit.> who fitted both the SED and submillimeter images simultaneously. As the inferred structure of the disk is strongly dependent on the dust opacities and size distribution, the same values from <cit.> are used. They are summarized in Table <ref> and the gas density structure is shown in Fig. <ref>, panel a. The central star is a T-Tauri star with excess UV due to accretion. All the accretion luminosity is assumed to be released at the stellar surface as a 10^4 K blackbody. The density and temperature profile are typical for a strongly flared disk as used here. The temperature, radiation field and CO2 excitation structure can be found in the appendix, Fig. <ref>.
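For concreteness, the two parametrizations above can be written as the following Python sketch (the numerical values in the example are placeholders; the AS 205 (N) values are those listed in Table <ref>):

```python
import numpy as np

def surface_density(r, sigma_c, r_c, gamma):
    """Sigma(R) = Sigma_c (R/R_c)^-gamma exp[-(R/R_c)^(2-gamma)]."""
    return sigma_c * (r / r_c)**(-gamma) * np.exp(-(r / r_c)**(2.0 - gamma))

def gas_density(r, theta, sigma_c, r_c, gamma, h_c, psi):
    """rho(R, Theta) with flaring scale-height angle h(R) = h_c (R/R_c)^psi;
    theta is the polar angle, so theta = pi/2 is the midplane."""
    h = h_c * (r / r_c)**psi
    return (surface_density(r, sigma_c, r_c, gamma)
            / (np.sqrt(2.0 * np.pi) * r * h)
            * np.exp(-0.5 * ((np.pi / 2.0 - theta) / h)**2))

# Placeholder parameters, for illustration only:
print(gas_density(r=10.0, theta=np.pi / 2, sigma_c=30.0, r_c=46.0,
                  gamma=0.9, h_c=0.18, psi=0.1))
```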
In setting up the model special care was taken at the inner rim, where
optical and UV photons are absorbed by the dust over a very short
physical path. To properly get the temperature structure of the disk
directly after the inner rim, high resolution in the radial direction
is needed. Varying the radial width of the first cells showed that the
temperature structure only converges when the cell width of the first
handful of cells is smaller than the mean free path of the UV photons.
The model dust structure is irradiated by the star and the interstellar radiation field. A Monte-Carlo radiative transfer module calculates the dust temperature and the local radiation field at all positions throughout the disk. The gas temperature is then assumed to be equal to the dust temperature. This assumption breaks down in the upper and outer parts of the disk, but for the regions where CO2 is abundant in our models the difference between the dust temperature and the gas temperature computed by self-consistently calculating the chemistry and cooling is less than 5%.
The excitation module calculates the CO2 level populations, using a 1+1D escape probability that includes the continuum radiation due to the dust <cit.>. Finally the synthetic spectra are derived using the ray-tracing module, which solves the radiative transfer equation along rays through the disk. The ray-tracing module as presented in <cit.> is used, as well as a newly developed ray-tracing module, presented in Appendix <ref>, that is orders of magnitude faster but a few percent less accurate.
In the ray-tracing module, thermal broadening and a turbulent broadening with FWHM ∼0.2 km s^-1 are used, which means that thermal broadening dominates above ∼ 40 K. The gas is in Keplerian rotation around the star. This approach is similar to that of <cit.> and <cit.> for H2O and CO respectively. However, <cit.> used a chemical network to determine the abundances, whereas here only parametric abundance structures are used to avoid the added complexity and uncertainties of the chemical network.
The adopted CO2 abundance is either a constant abundance or a jump abundance profile. The abundance throughout the paper is defined as the fractional abundance w.r.t n_H = n(H) + 2 n(H2). The inner region is
defined by T > 200 K and A_V > 2 mag, which is approximately the
region where the transformation of OH into H2O is faster
than the reaction of OH with CO to form CO2. The
outer region is the region of the disk with T < 200 K or A_V < 2
mag, where the CO2 abundance is expected to peak. No CO2 is
assumed to be present in regions with A_V < 0.5 mag as photodissociation is expected to be very efficient in this region.
The physical extent
of these regions is shown in panel b of
Fig. <ref>.
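A sketch of this jump-abundance parametrization (the thresholds are those quoted above; the grid arrays in the example are placeholders):

```python
import numpy as np

def co2_abundance(tgas, av, x_in, x_out):
    """Jump abundance w.r.t. n_H: x_in in the inner region (T > 200 K and
    A_V > 2 mag), x_out elsewhere, and zero where A_V < 0.5 mag, where
    photodissociation is assumed to destroy all CO2."""
    x = np.where((tgas > 200.0) & (av > 2.0), x_in, x_out)
    return np.where(av < 0.5, 0.0, x)

# Four example grid cells (T in K, A_V in mag):
t = np.array([350.0, 150.0, 500.0, 100.0])
av = np.array([5.0, 5.0, 0.3, 1.0])
print(co2_abundance(t, av, x_in=1e-7, x_out=1e-8))
# -> [1.e-07 1.e-08 0.e+00 1.e-08]
```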
As shown by <cit.> and <cit.>, the
gas-to-dust ("G/D") ratio is very important for the resulting line
fluxes as the dust photosphere can hide a large portion of the
potentially emitting CO2. Here the gas-to-dust ratio is changed
in two ways, by increasing the amount of gas, or by decreasing the
amount of dust. When the gas mass is increased and thus the dust mass
kept at the standard value of 2.9× 10^-4 M_⊙, this is
denoted by g/d_gas. If the dust mass is decreased and the gas mass
kept at 0.029 M_⊙ this is denoted by g/d_dust.
§.§ Model results
Panel c of Fig. <ref> presents the contribution
function for one of the 15 μm lines, the v_2 1→ 0
Q(6) line. The contribution function shows the relative, azimuthally integrated contribution
to the total integrated line flux. Contours show the areas in which 25% and 75% of the
emission is located. Panel c also includes the τ = 1 surface
for the continuum (blue) due to the dust, the τ = 1 surface for
the v_2 1→ 0 Q(6) line (red) and the surface where the
density is equal to the critical density. The area of the disk
contributing significantly to the emission is large, an annulus from
approximately 0.7 to 30 AU. The dust temperature in the CO2 emitting region is
between 100 and 500 K and the CO2 excitation temperature ranges
from 100–300 K (see Fig. <ref>). The density is lower
than the critical density at any point in the emitting area.
Panel d of Fig. <ref> shows the contribution
for the v_3 1→0 R(7) line with the same lines and
contours as panel c. The critical density for this line is very
high, ∼ 10^15 cm^-3. This means that except for the inner 1
AU near the mid-plane, the level population of the v_3 level is
dominated by the interaction of the molecule with the surrounding radiation field. The emitting area of
the v_3 1→0 R(7) line is smaller compared to that of
the line at 15 μm. The emitting area stretches from close to the
sublimation radius up to ∼ 10 AU. The excitation temperatures for
this line are also higher, ranging from 300–1000 K in the emitting
region (see Fig. <ref>).
In Fig. <ref> the total flux for the 00^01-00^00 R(7) line
at 4.25 μm and the 15 μm feature integrated from 14.8 to 15.0
μm are presented as functions of x_out, for different
gas-to-dust ratios and different x_in. The 15 μm flux
shows an increase in flux for increasing total CO2 abundance and
gas-to-dust ratio and so does the line flux of the 4.25 μm line
for most of the parameter space. The total flux never scales linearly
with abundance, due to different opacity
effects. The dust is optically thick at infrared wavelengths up to 100 AU, so there will always be a reservoir
of gas that will be hidden by the dust. The lines themselves are
strong (have large Einstein A coefficients) and the natural line
width is relatively small (0.2 km s^-1 FWHM). As a result the
line centers of transitions with low J values quickly become
optically thick. Therefore, if the abundance, and thus the column, in the
upper layers of the disk is high, the line no longer probes the inner
regions. This can be seen in Fig. <ref> as the fluxes for
models with different x_in converge with increasing
x_out. Convergence happens at lower x_out for
higher gas-to-dust ratios. The inner region quickly becomes invisible
in the 4.25 μm line with increasing gas-to-dust ratios: for a
gas-to-dust ratio of 10000, there is a less than 50% difference in
fluxes between the models with different inner abundances, even for
the lowest outer abundances. This is not seen so strongly in the 15
μm feature as it also includes high J lines which are stronger
in the hotter inner regions and are not as optically thick as the low
J lines. There is no significant dependence of the flux on the inner
abundance of CO2 if the outer abundance is >3×10^-7 and
the gas to dust ratio is higher than 1000. In these models the 15
μm feature traces part of the inner 1 AU but only the upper
layers.
Different ways of modelling the gas-to-dust ratio have little effect on
the resulting fluxes. Fig. <ref> shows the fluxes for a
constant dust mass and increasing gas mass for increasing the
gas-to-dust ratio, whereas Fig. <ref> in Appendix <ref>
shows the fluxes for decreasing dust mass for a constant gas mass.
The differences in fluxes are very small for models with the same gas/dust ratio times CO2
abundance, irrespective of the total gas mass: fluxes agree within 10% for most of the models. This
reflects the fact that the underlying emitting columns of CO2
are similar above the dust τ =1 surface. Only the temperature of the emitting gas changes: higher temperatures
for gas that is emitting higher up in a high gas mass disk and lower
temperatures for gas that is emitting deeper into the disk in a low
dust mass disk.
The grey band in Fig. <ref> and Fig. <ref>
shows the range of fluxes observed for protoplanetary disks scaled to
a common distance of 125 pc <cit.>. This figure immediately
shows that low CO2 abundances, x_out < 3×
10^-7, are needed to be consistent with the observations. Some
disks have lower fluxes than given by the lowest abundance model,
which can be due to other parameters. A more complete comparison
between model and observations is made in Sec. <ref>.
In Appendix <ref> a comparison is made between the
fluxes of models with CO2 in LTE and models for which the
excitation of CO2 is calculated from the rate coefficients and
the Einstein A coefficients. The line fluxes differ by a factor of
about three between the models, similar to the differences found by
<cit.> (their Fig. 6) for the case of HCN.
§.§.§ The v_2 band emission profile
Fig. <ref> shows the v_2 Q-branch profile
at 15 μm for a variety of models. All lines have been convolved to
the resolving power of JWST-MIRI at that wavelength <cit.> with three bins per
resolution element. Panel a shows the results from a simple LTE slab
model at different temperatures, whereas panels b and c present
the same feature from the DALI models.
Panel b contains models with different gas-to-dust ratios and abundances (assuming x_in = x_out) scaled so g/d × x_CO2 is constant. It shows that gas-to-dust ratio and abundance are degenerate. It is expected that these models show similar spectra, as the total amount of CO2 above the dust photosphere is equal for all models. The lack of any significant difference shows that collisional excitation of the vibrationally excited state is insignificant compared to radiative pumping.
Panel c of Fig. <ref> shows the effect
of different inner abundances on the profile. For the highest inner
abundance shown, 1× 10^-6, an increase in the shorter
wavelength flux can be seen, but the differences are far smaller than
the differences between the LTE models.
Panel d shows models with similar abundances, but with increasing g/d_dust. The flux in the 15 μm feature increases with g/d_dust for these models as can be seen in Fig. <ref>. This is partly due to the widening of the feature as can be seen in Panel d which is caused by the removal of dust. Due to the lower dust photosphere it is now possible for a larger part of the inner region to contribute to this emission. The inner region is hotter and thus emits more toward high J lines causing the Q-branch to widen.
Fitting of LTE models to DALI model spectra in panels b to d of Fig. <ref> results in inferred temperatures of 300–600 K. Only the models with a strong tail (blue lines in panels b and d) need temperatures of 600 K for a good fit; the other models are well represented with ∼ 300 K. For comparison, the actual temperature in the emitting layers is 150–350 K (Fig. <ref>), illustrating that the optically thin model overestimates the inferred temperatures. The proper inclusion of optical depth effects for the lower-J lines lowers the inferred temperatures. This means that care has to be taken when interpreting a temperature from the CO2 profile: a wide feature can be due to high optical depths or to a high rotational temperature of the gas.
A broader look at the CO2 spectrum is thus
needed. The left panel of Fig. <ref> shows the P, Q and
R-branches of the vibrational bending mode transition at R = 2200,
for models with different inner CO2 abundances and the same
outer abundance of 10^-7. The shape of
the R- and P-branches is flatter at
low to mid-J and slightly more extended at high J in the spectrum from the model with an inner
CO2 abundance of 10^-6 than in the other
spectra. The peaks at 14.4 μm and 15.6 μm are due to the Q-branches
of the 11^10(1) → 10^01(1) and 11^10(2)
→ 10^01(2) transitions, respectively. These overlap with lines
from the P- and R-branches of the bending fundamental. For the
constant and low inner CO2 abundances, 10^-7 and 10^-8 respectively, the R- and P-branch shapes are similar,
with models differing only in absolute flux. Decreasing the inner
CO2 abundance from 10^-8 to lower values has no effect on
the line strengths.
The right panel of Fig. <ref> shows Boltzmann plots of the spectra on the left. The number of molecules in the upper state inferred from the flux is given as a function of the upper state energy. The number of molecules in the upper state is given by: 𝒩_u = 4π d^2 F/(A_ul h ν_ul g_u), with d the distance to the object, F the integrated line flux, g_u the statistical weight of the upper level and A_ul and ν_ul the Einstein A coefficient and the frequency of the transition. From the slope of log(𝒩_u) vs E_up a rotational temperature can be determined. The expected slopes for 400, 600 and 800 K are given in the figure. It can be seen that the models do not show strong differences below J = 20, where the emission is dominated by optically thick lines. Toward higher J, the model with x_in = 10^-6 starts to differ more and more from the other two models. The models with x_in=10^-7 and x_in=10^-8 stay within a factor of 2 of each other up to J = 80, where the molecule model ends.
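The conversion from integrated line flux to 𝒩_u and the rotational-temperature fit can be illustrated in a few lines of Python (toy numbers; optically thin emission is assumed, as in the expression above):

```python
import numpy as np

PC = 3.0857e18      # cm per parsec
H = 6.626e-27       # Planck constant, erg s

def nu_over_gu(flux, d_pc, a_ul, nu_hz, g_u):
    """N_u/g_u from an integrated line flux (erg s^-1 cm^-2) at d_pc
    parsec, following the optically thin expression in the text."""
    return 4.0 * np.pi * (d_pc * PC)**2 * flux / (a_ul * H * nu_hz * g_u)

def t_rot(e_up, log_nu_gu):
    """T_rot from the Boltzmann-plot slope: ln(N_u/g_u) ~ -E_up/T."""
    slope, _ = np.polyfit(e_up, log_nu_gu, 1)
    return -1.0 / slope

# Toy example: three lines emitted by 400 K gas at 125 pc; fluxes chosen
# so that the Boltzmann plot is a straight line (all numbers illustrative).
e_up = np.array([1000.0, 1400.0, 1800.0])          # K
flux = 1e-15 * np.exp(-e_up / 400.0)               # erg s^-1 cm^-2
vals = nu_over_gu(flux, 125.0, a_ul=1.0, nu_hz=2e13, g_u=1.0)
print(f'T_rot = {t_rot(e_up, np.log(vals)):.0f} K')  # -> 400 K
```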
Models with similar absolute abundances of
CO2 (constant g/d × x_CO2) but different
g/d_gas ratios are nearly identical: the width of the Q-branch and the
shapes of the P- and R-branches are set by the gas temperature
structure. This temperature structure is the same for models with
different g/d_gas ratios as it is set by the dust
structure. The temperature is, however, a function of
g/d_dust, but those temperature differences are not large
enough for measurable effects. From this it also follows that the exact collisional rate coefficients are not important:
the density is low enough that the radiation field can set the excitation of the vibrational levels. At the same time the density is still high enough to be higher than the critical density for the rotational transitions, setting the rotational excitation temperature equal to the gas kinetic temperature.
The branch shapes are a function of g/d_dust at constant
absolute abundance. Apart from the total flux which is slightly higher at higher
g/d_dust (Fig. <ref>), the spectra are also broader (Panel d. Fig. <ref>). This
is because the hotter inner regions are less occulted by dust for higher g/d_dust ratios. This hotter gas has more emission coming from high J lines, boosting the tail of the Q-branch.
To quantify the effects of different abundance profiles, line ratios
can also be informative. The lines are chosen so they are free from
water emission (see Appendix <ref>). The top two panels of
Fig. <ref> show the line ratios for lines in the
01^10(1) → 00^00(1) 15 μm band: R(37):R(7) and
P(15):P(51). The R(7) and P(15) lines come from levels with
energies close to the lowest energy level in the vibrational state
(energy difference is less than 140 K). These levels are thus easily
populated and the lines coming from these levels are quickly optically
thick. The R(37) and P(51) lines come from levels with rotational
energies at least 750 K above the ground vibrational energy. These
lines need high kinetic/rotational temperatures to show up strongly
and need higher columns of CO2 at prevailing temperatures to
become optically thick. From Fig. <ref> a few things
become clear. First, for very high outer abundances, it
is very difficult to distinguish between different inner abundances
based on the presented line ratio. Second, models with high outer abundances
are nearly degenerate with models that have a low outer abundance and
a high inner abundance. A measure of the optical depth will solve
this. In the more intermediate regimes the line ratios presented here
or a Boltzmann plot provide the information needed to distinguish
between a cold, optically thick CO2 reservoir and a hot, more optically
thin CO2 reservoir, which would be degenerate in Q-branch fitting alone.
§.§.§ ^13CO2 v_2 band
An easier method to break these degeneracies is to use the ^13CO2
isotopologue. ^13CO2 is approximately 68 times less abundant
than ^12CO2, adopting the standard local interstellar medium
value <cit.>. This means that the isotopologue is much less likely
to be optically thick and thus ^13CO2:^12CO2 line ratios
can be used as a measure of the optical depth, adding the needed
information to lift the degeneracies. The bottom panel of
Fig. <ref> shows the ratio between the flux in the
^13CO2 v_2 Q-branch and the ^12CO2 v_2 P(25)
line.
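For a matched pair of lines sharing the same excitation temperature, the ^12CO2/^13CO2 flux ratio can be inverted for the ^12CO2 optical depth; the sketch below assumes a ^12C/^13C ratio of 68 and neglects the small differences in line frequencies and Einstein A coefficients:

```python
import numpy as np
from scipy.optimize import brentq

def tau12_from_ratio(flux_ratio, iso=68.0):
    """Invert F(12CO2)/F(13CO2) = (1 - exp(-tau)) / (1 - exp(-tau/iso))
    for the 12CO2 line optical depth tau. Optically thin 13CO2 and a
    common excitation temperature are assumed."""
    f = lambda tau: (1.0 - np.exp(-tau)) / (1.0 - np.exp(-tau / iso)) - flux_ratio
    return brentq(f, 1e-6, 1e4)

for r in [60.0, 30.0, 10.0]:
    print(f'flux ratio {r:4.0f} -> tau(12CO2) ~ {tau12_from_ratio(r):6.1f}')
# The ratio approaches 68 for optically thin 12CO2 and drops as it saturates.
```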
As the Q-branch for ^13CO2 is less optically thick, it is
also more sensitive to the abundance structure. The Q-branch,
situated at 15.42 μm, partially overlaps with the P(23) line of the
more abundant isotopologue, so both isotopologues need to be modelled
to properly account for the contribution of these
lines. Fig. <ref> shows the same models as in
Fig. <ref> but now with the ^13CO2
emission in thick lines. The ^13CO2 Q-branch is predicted to be
approximately as strong as the nearby ^12CO2 lines for the
highest inner abundances. The total flux in the ^13CO2
Q-branch shows a stronger dependence on the inner CO2
abundance than the ^12CO2 Q-branch. A hot reservoir of
CO2 strongly shows up as an extended tail of the ^13CO2
Q-branch between 15.38 and 15.40 μm.
§.§.§ Emission from the v_3 band
The v_3 band around 4.25 μm is a strong emission band in the
disk models, containing a larger total flux than the v_2 band. Even
so, the 4.3 μm band of gaseous CO2 has not been seen in
observations of ISO with the Short Wavelength Spectrometer (SWS) toward high mass protostars, in contrast with the
15 μm band that has been seen towards these sources in absorption
<cit.>. This may be largely due to the
strong solid CO2 4.2 μm ice feature obscuring the gas-phase
lines for the case of protostars, but for disks this should not be a
limitation. Fig. <ref> shows the spectrum of
gaseous CO2 in the v_3 band around 4.3 μm at
JWST-NIRSpec resolving power. The resolving power of NIRSpec
is taken to be R=3000, which is not enough to fully separate the
lines from each other. The CO2 emission thus shows up as an
extended band.
The band shapes in Fig. <ref> are very
similar. The largest difference is the strength of the 4.2 μm
discontinuity, which is probably an artefact of the model as only a finite
number of J levels are taken into account. The total flux over the
whole feature does depend on the inner abundance, but the difference
is of the order of ∼10% for 2 orders of magnitude change of the
inner abundance.
Fig. <ref> also shows the ^13CO2
spectrum. The lines from ^13CO2 are mostly blended with much
stronger lines from ^12CO2. At the longer wavelength limit,
^13CO2 lines are stronger than those of ^12CO2 but there
the 6 μm water band and 4.7 μm CO band start to complicate
the detection of ^13CO2 in the 4–5 μm region.
Average abundances of CO2 can be derived from observations of the 4.3 μm band. While inferring the abundance structure will be easier from the 15 μm band there are some observational advantages of using the 4.3 μm band. NIRSpec has multi-object capabilities and will thus be able to get large samples of disks in a single exposure, especially for more distant clusters where there are many sources in a single FOV. NIRSpec has the additional advantages that it does not suffer from detector fringing and that it is more sensitive <cit.>. As both the 4.3 μm and 15 μm bands are pumped by infrared radiation, the flux ratios between these two will mostly contain information about the ratio of the continuum radiation field between the wavelengths of these bands.
§.§ Line-to-continuum ratio
The line-to-continuum ratio is potentially an even better diagnostic
of the gas/dust ratio than line ratios <cit.>.
Fig. <ref> presents spectra with the continuum
added to it. The spectra have been shifted, as the continua for these models overlap. The
models for which the spectra are derived all have a CO2 abundance
of 10^-8 but differ in the gas-to-dust ratio. The
gas-to-dust ratio determines the column of CO2
that can contribute to the line. A large part of the CO2
reservoir near the mid-plane cannot contribute due to the large
continuum optical depth of the dust. It is thus not surprising that
the line-to-continuum ratio is strongly dependent on the gas-to-dust
ratio. The precise way of setting the gas-to-dust ratio (by increasing
the amount of gas, or decreasing the amount of dust) does not really
matter for the line-to-continuum ratio. It does matter for the
absolute scaling of the continuum, which decreases if the amount of
dust is decreased.
<cit.> could constrain the gas-to-dust ratio from the data since there is an upper limit to the H2O abundance from the atomic O abundance. Here it is not possible to make a similar statement as CO2 is not expected to be a major reservoir of either the oxygen or the carbon in the disk. On the contrary, from Fig. <ref> it can be seen that with a gas-to-dust ratio of 100 an abundance of 10^-7 is high enough to explain the brightest of the observed line fluxes. The line-to-continuum ratios from the Spitzer-IRS spectra of 5–10% are also matched by the same models (see Fig. <ref>). External information such as can be obtained from H2O is needed to lift the degeneracy between high gas-to-dust ratio and high abundance: if one of the two is fixed, the other can be determined from the flux or line-to-continuum ratio.
The line-to-continuum ratio is very important for planning observations, however, as it sets the limit on how precisely the continuum needs to be measured to be able to make a robust line detection. Fig. <ref> shows the line-to-continuum ratios for models with a constant abundance. These figures show that high signal-to-noise (S/N) on the continuum is needed to be able to get robust line detections. The ^12CO2 Q-branch should be easily accessible for most protoplanetary disks. To be able to probe individual P- and R-branch lines of the ^12CO2 15 μm feature as well as the ^13CO2 Q-branch, deeper observations (reaching S/N of at least 300, up to 1000) will be needed to probe down to disks with CO2 abundances of 10^-8 and gas-to-dust ratios of 1000.
§.§ CO2 from the ground
As noted earlier, there is a large part of the CO2 spectrum
that cannot be seen from the ground because of atmospheric
CO2. There are a few lines, however, that could be targeted from
the ground using high spectral resolution. The high J lines
(J > 70) of the v_1=1-0 transition in the R branch around 4.18
μm are visible with a resolving power of R = 30000 or higher. At
this resolution the CO2 atmospheric lines are resolved and at
J > 70 they are narrow enough to leave 20-50% transmission windows
between them (ESO skycalc[<http://www.eso.org/observing/etc/bin/gen/form?INS.MODE=swspectr+INS.NAME=SKYCALC>]). The lines are expected to have a peak line-to-continuum ratio of 1:100. So a S/N of 10 on the line peak translates to a S/N of 1000 on the continuum. The FWHM of the atmospheric lines is about 30 km s^-1. So half of the emission line profile should be observable when the relative velocity shift between observer and source is more than 15 km s^-1. Since the Earth's orbit allows for velocity shifts up to 30 km s^-1 in both directions, observing the full line profile is possible in two observations at different times of year for sources close to the orbital plane of the earth.
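This scheduling constraint amounts to finding dates at which the barycentric velocity correction exceeds 15 km s^-1; a sketch using astropy (the coordinates below are approximate values for AS 205, and EarthLocation.of_site needs the astropy site registry, downloaded and cached on first use):

```python
# Find dates when Earth's orbital motion shifts the source lines by more
# than 15 km/s relative to the telluric CO2 absorption cores.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation
from astropy.time import Time

target = SkyCoord('16h11m31.3s', '-18d38m26s')   # AS 205, approximate
paranal = EarthLocation.of_site('paranal')
times = Time('2024-01-01') + np.arange(0, 365, 15) * u.day

for t in times:
    v = target.radial_velocity_correction('barycentric', obstime=t,
                                          location=paranal).to(u.km / u.s)
    if abs(v.value) > 15.0:
        print(t.iso[:10], f'{v.value:+6.1f} km/s')
```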
The exposure time needed to get a S/N of 1000 on the continuum at
4.18 μm, which is 6.7 Jy in our AS 205 (N) model, with a 40% sky transmission on the lines, is about 10 hours for VLT-CRIRES [Exposure times have been calculated with the ESO exposure time calculator <https://www.eso.org/observing/etc/> for CRIRES (version 5.0.1)]. The
high J P-branch lines of CO2 are close to atmospheric
lines from N2O, O3 and H2O resulting in a very opaque atmosphere at these wavelengths <cit.>.
The other lines that can be seen from the ground are between 9 and 12
μm (N-band). These originate from the 01^11 and 00^01
levels. The line-to-continuum ratios vary from 1:40 to 1:3000 for the
brighter lines in the 00^01 → 10^00(1) band, with the most
likely models having line-to-continuum ratios between 1:200 and
1:2000. For a continuum of ∼ 11 Jy a S/N of 2000 for R=10^5
could potentially be achieved in about three to ten minutes of integration with the European
Extremely Large Telescope (E-ELT). At the location of the CO2 atmospheric absorption lines in this part of the spectrum the sky transmittance is ∼50% and the atmospheric lines have a FWHM of ∼ 50 km s^-1.
§.§ CO2 model uncertainties
The fluxes derived from the DALI models depend on the details of the
CO2 excitation processes included in the model. The collisional
rate coefficients are particularly uncertain, since the measured set
is incomplete. There are multiple ways to extrapolate what is measured
to what is needed to complete the model. Modelling slabs of CO2
using different extrapolations such as: absolute scaling of the rotational
collision rate coefficients, including temperature dependence of the
vibrational collisional rate coefficients and different implementations
of the collision rate coefficients between the vibrational levels with 2v_1 + v_2 = constant, show that
fluxes can change by up to 50% for specific combinations of radiation
fields and densities. The highest differences are seen in the 4–5
μm band, usually at low densities. The flux in the 15 μm band
usually stays within 10% of the flux of the model used here. These
uncertainties are small compared to other uncertainties in disk
modelling such as the chemistry or parameters of the disk hosting
protostar. The main reason that the fluxes are relatively insensitive
to the details of the collisional rate coefficients is due to the
importance of radiative pumping in parallel with collisions.
The assumption T_dust = T_gas is not entirely correct, since the gas temperature can be up to 5% higher than the dust temperature in the CO2 emitting regions. This affects our line fluxes. For the 4.3 μm fluxes the induced difference in flux is always smaller than 10%. For the 15 μm fluxes differences are generally smaller than 10%, whereas some of the higher J lines are up to 25% brighter.
A very simplified abundance structure was taken. It is likely that
protoplanetary disks will not have the abundance structure adopted
here. Full chemical models indeed show much more complex chemical
structures <cit.>. The analysis done here
should still hold for more complex abundance structures, and future
work will couple such chemistry models directly with the excitation
and radiative transfer.
The stellar parameters for the central star and the exact parameters
of the protoplanetary disk also influence the resulting CO2
spectrum. The central star influences the line emission through its UV
radiation that can both dissociate molecules and heat the gas. Since
for our models no chemistry is included, only the heating of the
dust by stellar radiation is important for our models. The CO2
flux in the emission band around 15 μm scales almost linearly with
the bolometric luminosity of the central object (see
Fig. <ref>).
§ DISCUSSION
§.§ Observed 15 μm profiles and inferred abundances
The v_2 15 μm feature of CO2 has been observed in many
sources with Spitzer-IRS
<cit.>. The SH (Short-High) mode barely
resolves the 15 μm Q-branch, but that is enough to compare with the
models. We used the spectra that have been reduced with the Caltech
High-res IRS pipeline (CHIP) <cit.>. The
sources selected out of the repository have a strong emission feature
of CO2 but no distinguishable H2O emission in the 10–20
μm range. The sources and some stellar parameters are listed in
Table <ref>. The observed spectra are continuum
subtracted (Appendix <ref>) and the observed profiles are
compared with model profiles by eye (Fig. <ref>).
Two sets of comparisons are made. For the first set, the model fluxes
are only corrected for the distance to the objects. For the other set,
the model fluxes are scaled for the distance but also scaled for the
luminosity of the central source, using
L_CO2 ∝ L_⋆. This relation is found by running a
set of models with a range of luminosities, presented in
Fig. <ref>. Aside from the luminosity of the star, all
other parameters have been kept the same including the shape of the
stellar spectrum. The effective temperature of the star mostly affects
the fraction of short wavelength UV photons which can photodissociate
molecules, but since no detailed chemistry is included, the use of a
different stellar temperature would not change our results. Other
tests (not shown here) have indeed shown that the shape of the
spectrum does not really matter for the CO2 line fluxes in this
parametric model. All models have a gas-to-dust ratio of 1000 and a
constant CO2 abundance of 10^-8.
Both the total flux in the range between 14.7 and 15.0 μm and that
of a single line in this region (the 01^10 Q(6) line) have an
almost linear relation with luminosity of the central star. For the
00^01 R(7) line around 4.3 μm the dependence on the central
luminosity is slightly more complex. Below a stellar luminosity of
1 L_⊙ the dependence is stronger than linear, but above that the
dependence becomes weaker than linear. Overall, it is reasonable to
correct the 15 μm fluxes from our model for source stellar
luminosities using the linear relationship. This is because the amount of infrared continuum radiation that the disk produces scales linearly with the amount of energy that is put into the disk by the stellar radiation. It is the infrared continuum radiation that sets the molecular emission through radiative pumping, the dominant vibrational excitation mechanism for CO2.
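The scaling applied to the model fluxes is then simply (a sketch; the numbers in the example are illustrative):

```python
def scale_model_flux(f_model, d_model_pc, d_src_pc, l_model, l_src):
    """Rescale a model 15 micron line flux to a source at distance
    d_src_pc with stellar luminosity l_src, using the near-linear
    L_CO2 ~ L_star relation from the model grid."""
    return f_model * (d_model_pc / d_src_pc)**2 * (l_src / l_model)

# Example: the 7.3 L_sun AS 205 (N) model at 125 pc rescaled to a
# 1 L_sun T-Tauri star at 140 pc (flux value illustrative).
print(scale_model_flux(1.0e-14, 125.0, 140.0, 7.3, 1.0))
```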
The model spectra are overplotted on the continuum subtracted
observations in Fig. <ref>. The flux in these models has
been scaled with the distance of the source and the luminosity of the
central star. A gas-to-dust ratio of 1000 is adopted as inferred from H2O observations <cit.>.
An overview of the inferred abundances is given in
Table <ref>. For the DALI models the emitting CO2
column and the number of emitting CO2 molecules have been tabulated in
Table <ref>. The column is defined as the column of CO2 above the
τ_dust =1 line at the radial location of the peak of the contribution function (Fig. <ref>d).
The number of molecules is taken over the region that is responsible for half the total emission as given by the contribution function. The number of molecules shown is thus the minimal amount of CO2 needed to explain the majority of the flux and sets a lower limit on the amount of CO2 needed to explain all of the emission. The inferred abundances range from 10^-9 to 10^-7.
They agree with that inferred by <cit.> using an
LTE disk model appropriate for the RNO 90 disk, demonstrating that
non-LTE excitation effects are minor (see also Appendix <ref>).
For GW Lup and SZ 50 the emitting CO2 columns found by
<cit.> are within a factor of two of those inferred here, whereas for DN Tau and IM Lup our inferred columns are consistent with the upper limits from the slab models (tabulated in Table <ref>).
The inferred column for HD 101412, however, differs greatly. For all disks the number of molecules in our models is at least an order of magnitude higher than the number of molecules inferred from the LTE models.
The emitting area used by <cit.> in fitting the
CO2 feature was fixed and generally taken to be slightly larger
than the inner 1 AU. This is very small compared to the emitting area
found in this work which extends up to 30 AU. It is thus not
surprising that the total number of CO2 molecules inferred is
lower for the LTE slab models from <cit.>. The high number of molecules needed for the emission in our models is also related to the difference in excitation: the
vibrational excitation temperature of the gas in the non-LTE models is
lower (100–300 K) than the temperature fitted for the LTE models
(∼ 650 K). Thus in the non-LTE models a larger number of
molecules is needed to get the same total flux. The narrow CO2 profile is due to low rotational temperatures, as emission from large radii (> 2–10 AU) dominates the strongest lines. The visual contrast is enhanced by the fact that the CO2 feature is also intrinsically narrower at similar temperatures than that of HCN (Fig. <ref>). For HD 101412 the model feature is
notably narrower than the observed feature, signifying either a
higher CO2 rotational temperature, or a more optically thick
emitting region.
There are of course caveats in the comparisons done here. The standard
model uses a T-Tauri star that is luminous (total luminosity of
7.3 L_⊙) and that disk is known to have very strong H2O
emission. The sample of comparison protostars consists of 7 T-Tauri
stars with luminosities a factor of 2 – 35 lower and a Herbig Ae
star that is more than 3 times as luminous. A simple correction for
source luminosity is only an approximation. All of these
sources have little to no emission lines of H2O in the
mid-infrared. This may be an indication of different disk structures, and the
disk model used in this work may not be representative of these
water-rich disks. Indeed, <cit.> found
that the emitting radius of the CO ro-vibrational lines scales
inversely with the vibrational temperature inferred from the CO
emission. This relation is consistent with inside-out gap opening.
Comparing the CO ro-vibrational data with H2O infrared emission data
from VLT-CRIRES and Spitzer-IRS <cit.> found that
there is a correlation between the radius of the CO ro-vibrational
emission and the strength of the water emission lines: the larger the
radius of the CO emission, the weaker the H2O emission. This suggests
that the H2O-poor sources may also have inner gaps, where both CO and
H2O are depleted. There are only two H2O-poor sources in our sample that overlap with <cit.>.
However, if our analysis is applied to sources that do have water
emission, the range of best-fit CO2 abundances is found to be
similar. Fig. G.2 shows CO2 model spectra compared to observations for
a set of the strongest water emitting sources. The conclusion that the
abundance of CO2 in protoplanetary disks is around 10^-8 is therefore
robust.
The inferred CO2 abundances are low, much lower
than the expected ISM value of 10^-5 if all
of the CO2 were to result from sublimated ices. This demonstrates that
the abundances have been reset by high temperature chemistry, as also concluded by <cit.>.
The inferred low abundances agree well with chemical models by
<cit.>. However, the column found for chemical models by <cit.>,
∼ 6× 10^16 cm^-2, is more than an order of magnitude
higher. <cit.> used a different lower vertical bound for
their column integration and only considered the inner 3 AU. Either of
these assumptions may explain the difference in the CO2 column.
§.§ Tracing the CO2 iceline
One of the new big paradigms in (giant) planet formation is pebble
accretion. Pebbles, defined in models as dust particles with a Stokes
number around 1, are poorly coupled to the gas, but generally not
massive enough to ignore the interaction with the gas. This means that
these particles settle to the mid-plane and radially drift inward on
short time scales.
This pebble flux in theory allows a planetesimal to accrete all the
pebbles that form at radii larger than its current location
<cit.>.
This flux of pebbles also has consequences for the chemical
composition of the disk. These pebbles should at some point encounter
an iceline, if they are not stopped before. At the iceline they
should release the corresponding volatiles. The same holds for any
drifting planetesimals <cit.>. As the ice composition is
very different from the gas composition, this can in principle
strongly change the gas content in a narrow region around the ice
line. For this effect to become observable in mid-infrared lines, the
sublimated ices should also be mixed vertically to higher regions in
the disk.
From chemical models the total gas-phase abundance of CO2
around the CO2 iceline is thought to be relatively low <cit.>,
similar to the value found in this work. The CO2
ice content in the outer disk can be orders of magnitude higher. Both
chemical models and measurements of comets show that the CO2
content in ices can be more than 20% of the total ice content
<cit.>, with CO2 ice even becoming more
abundant than H2O ice in some models of outer disk chemistry
<cit.>. This translates into an abundance up to a few × 10^-5. We investigate here
whether the evaporation of these CO2 ices around the iceline
would be observable.
To model the effect of pebbles moving over the iceline, a model with
a constant CO2 abundance of 1 × 10^-8 and a
gas-to-dust ratio of 1000 is taken. In addition, the abundance of
CO2 is enhanced in an annulus where the midplane temperature is
between 70 K and 100 K (grey region in Fig. <ref>b), corresponding to the sublimation temperature of
pure CO2 ice <cit.>. This results in a radial
area between 8 and 15 AU in our case. The abundance is taken to be
enhanced over the total vertical extent of the CO2 in the
model, as in the case of strong vertical mixing. The spectra from
three models with enhanced abundances of
x_CO2,ring = 10^-6, 10^-5 and 10^-4 in this
ring can be seen in Fig. <ref> together with the
spectrum for the constant x_CO2 = 10^-8 model.
Note that CO2 ice is unlikely to be pure, and that some of it
will likely also come off at the H2O iceline, but such a
multi-step sublimation model is not considered here.
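The ring-enhanced abundance structure used in these models can be sketched as follows (thresholds from the text; the example arrays are placeholders):

```python
import numpy as np

def co2_abundance_with_ring(t_mid, av, x_base, x_ring):
    """Constant background abundance x_base, raised to x_ring in the
    annulus where the midplane temperature is 70-100 K (the pure-CO2
    sublimation range quoted above), over the full vertical extent of
    the CO2 layer, i.e. assuming strong vertical mixing."""
    x = np.full_like(t_mid, x_base)
    x[(t_mid > 70.0) & (t_mid < 100.0)] = x_ring
    return np.where(av < 0.5, 0.0, x)   # photodissociation layer stays empty

# Cells at three radii, with midplane temperatures of 150, 90 and 60 K:
print(co2_abundance_with_ring(np.array([150.0, 90.0, 60.0]),
                              np.array([5.0, 5.0, 5.0]),
                              x_base=1e-8, x_ring=1e-5))
# -> [1.e-08 1.e-05 1.e-08]
```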
The spectra in Fig. <ref> show both ^12CO2 and
^13CO2 in thin and thick lines respectively for the model with
a constant abundance (grey), an enhanced abundance of
1× 10^-6 (cyan), 1× 10^-5 (red) and
1× 10^-4 (blue) around the CO2 iceline. The enhanced
CO2 increases the ^12CO2 flux by up to a factor of 3. The
optically thin ^13CO2 feature is, however, increased much
more, with the peak flux in the ^13CO2 Q-branch reaching
values that are more than two times higher than the peak fluxes of the nearby
^12CO2 P-branch lines.
Enhanced abundances in the outer regions can be distinguished from an
enhanced abundance in the inner regions by looking at the tail of the
^13CO2 Q-branch. An over-abundant inner region will show a
significant flux (10–50% of the peak) from the ^13CO2
Q-branch in the entire region between the locations of the
^12CO2 P(21) and P(23) lines and will show a smoothly
declining profile with decreasing wavelength. If the abundance
enhancement is in the outer regions, where the gas temperature and
thus the rotational temperature is lower, the flux between the
^12CO2 P(21) and P(23) lines will be lower for low
enhancements (0–20% of the peak flux); for higher enhancements,
the R(1) line appears on the short-wavelength side of the profile. Other R and P branch lines from
^13CO2 can also show up in the spectrum if the abundance can
reach up to 10^-4 in the ring around the CO2 iceline.
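As a rough quantitative version of this diagnostic, one could integrate the flux between the ^12CO2 P(21) and P(23) positions and normalise by the ^13CO2 Q-branch peak; the wavelength windows below are approximate values we assume for illustration and should be taken from a line list in practice:

```python
import numpy as np

def qbranch_tail_ratio(wl, flux, window=(15.36, 15.40), qpeak=(15.40, 15.44)):
    """Mean flux between the (approximate) 12CO2 P(21) and P(23)
    positions, relative to the 13CO2 Q-branch peak; wl in micron."""
    in_window = (wl >= window[0]) & (wl <= window[1])
    in_peak = (wl >= qpeak[0]) & (wl <= qpeak[1])
    return np.mean(flux[in_window]) / np.max(flux[in_peak])

# A ratio of ~0.1-0.5 would point to an over-abundant inner region;
# ~0-0.2 (with the R(1) line appearing only for strong enhancements)
# would point to an enhancement in the cooler outer regions.
```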
§.§ Comparison of CO2 with other inner disk molecules
With the models presented in this paper, four molecules with
rovibrational transitions coming from the inner disk have been studied
by non-LTE disk models: H2O
<cit.>, CO
<cit.>, HCN <cit.> and CO2 (this
work). Of these CO is special, as it can be excited by UV
radiation and fluoresces to excited vibrational states that in turn
emit infrared radiation. For the other molecules absorption of a UV
photon mostly leads to dissociation of the molecule <cit.>. H2O, HCN and CO all have a permanent dipole
moment and can thus also emit strongly in the sub-millimeter. These
molecules will therefore have lower rotational temperatures than
CO2 in low-density gas, yet they are observed to have broader profiles in the mid-infrared. Our models thus reinforce the conclusion from the observed profiles that CO2 comes from relatively cold gas (200-300 K) (see panel c of Fig. <ref> and panel d of Fig. <ref>).
For the disk around AS 205 (N) the emission of both HCN and CO2 has now been analysed under non-LTE conditions in DALI.
As such it is possible to infer the HCN to CO2 abundance ratio in the disk. The representative, constant abundance model for the CO2 emission from AS 205 N has an abundance of 3× 10^-8 with a gas-to-dust ratio of 1000 but the inner abundance can easily vary by an order of magnitude or more while still being in agreement with observations. The models from <cit.> that best reproduce the data have outer HCN abundances between 10^-10 and 10^-9 for gas-to-dust ratios of 1000.
This translates into CO2/HCN abundance ratios of 30–300 in the region from 2 to 30 AU. The higher abundance of CO2 in the outer regions of the disk explains the colder inferred rotational temperature of CO2 compared to HCN.
§ CONCLUSION
We have presented results of DALI models that treat the full continuum
radiative transfer and the non-LTE excitation of CO2 in a typical
protoplanetary disk model. The main goals are to find a way to
measure the CO2 abundance in the emitting regions of disks with
future instruments like JWST and to test different assumptions on its origin. Spectra of CO2 and ^13CO2
in the 4–4.5 μm and 14–16 μm regions were modelled for disks
with different parametrized abundance structures, gas masses and dust
masses. The main conclusions of this study are:
* The critical density of the CO2 00^01 state, responsible for emission around 4.3 μm, is very high, >10^15 cm^-3. As a result, in the absence of a pumping radiation field, there is no emission from the 00^01 state at low densities. If there is a pumping infrared radiation field, or if the density is high enough, the emission around 4.3 μm will be brighter than that around 15 μm.
* The infrared continuum radiation excites CO2 up to large
radii (10s of AU). The region probed by the CO2 emission
can therefore be an order of magnitude larger (in radius) than typically assumed in
LTE slab models. Temperatures inferred from optically thin LTE models can also be larger than the actual temperature of the emitting gas. Differences between LTE and non-LTE full disk model fluxes are typically within a factor of three.
* Current observations of the 15 μm Q-branch fluxes
are consistent with models with constant abundances between 10^-9 and
10^-7 for a gas-to-dust ratio of 1000. Observations of lines corresponding to levels
with high rotational quantum numbers or the ^13CO2 Q-branch
will have to be used to properly infer abundances. In particular,
the ^13CO2 Q-branch can be a good indicator of the abundance
structure from the inner to the outer disk.
* The gas-to-dust ratio and fractional abundance are largely
degenerate. What sets the emission is the column of CO2 above
the dust infrared photosphere. Models with similar columns have very
similar spectra irrespective of total dust and gas mass, due to the excitation mechanism of CO2.
If the gas-to-dust ratio is constrained from other observations such as H2O the fractional abundance can be determined from the spectra.
* The abundance of CO2 in protoplanetary disks inferred from modelling, 10^-9–10^-7, is at least 2 orders of magnitude lower than the CO2 abundance in ISM ices. This implies that disk chemical abundances are not directly inherited from the ISM and that significant chemical processing happens between the giant molecular cloud stage and the protoplanetary disk stage.
* The ^13CO2 v_2 Q-branch at 15.42 μm will be able to
identify an overabundance of CO2 in the upper layers of the inner disk, such as could be produced by sublimating pebbles and planetesimals around the iceline(s).
Our work shows that the new instruments on JWST will be able to give a wealth of information on the CO2 abundance structure, provided that high S/N (>300 on the continuum) spectra are obtained.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for his/her suggestions that have improved the paper. Astrochemistry in Leiden is supported by the European Union A-ERC grant 291141 CHEMPLAN, by the Netherlands Research School for Astronomy (NOVA), and by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize. This work is based in part on archival data obtained with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
§ COLLISIONAL RATE COEFFICIENTS
The collisional rate coefficients are calculated in a way very similar to <cit.>, that is, by combining vibrational coefficients with rotational rate coefficients to get the state-to-state ro-vibrational rate coefficients. Only collisions with H2 are considered, as H2 is the dominant gas species in the regions where CO2 is expected to be abundant. The vibrational coefficients were taken from the laser physics and atmospheric physics literature. An overview of the final vibrational rate coefficients used is shown in Table <ref>. The temperature dependence of the de-excitation collisional rate coefficients is suppressed, and the rates for 300 K are used throughout; the vibrational rate coefficients are not expected to vary much over the range of temperatures considered here. The de-excitation rate coefficient of the bending mode by H2 (01^10 → 00^00 transition) from <cit.> is 5× 10^-12 cm^3 s^-1, an order of magnitude faster than that for He <cit.>. This is probably due to vibrational-rotational energy exchange in collisions with rotationally excited H2 <cit.>. For levels higher up in the vibrational ladder we extrapolate the rates as done by <cit.> and <cit.>. Combining Eqs. 6 and 8 from the latter paper we get, for v > w, the relation:
k(v_2 = v → v_2 = w) = v \frac{2w+1}{2v+1} k(v_2 = 1 → v_2 = 0).
The rate coefficient measured by <cit.> is actually the total quenching rate of the 00^01 level. Here we assume that the relaxation of the 00^01 level goes to the three closest lower energy levels (03^30, 11^10(1), 11^10(2)) in equal measure. For all the rates between the levels of the Fermi degenerate states and the corresponding bending mode with higher angular momentum, the CO2-CO2 rate measured by <cit.> was used, scaled to the reduced mass of the H2-CO2 system. The states with constant 2 v_1 + v_2 are considered equal to the pure bending mode with respect to the collisional rate coefficients to other levels.
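As a sketch, the extrapolation can be coded directly (our illustration; the default rate is the measured H2 value quoted above, and the formula is the relation as reconstructed here):

```python
def k_bend_deexc(v, w, k10=5e-12):
    """De-excitation rate coefficient k(v2 = v -> v2 = w) in cm^3 s^-1
    for v > w, extrapolated from the measured k(v2 = 1 -> v2 = 0) via
    k(v -> w) = v (2w+1)/(2v+1) k(1 -> 0)."""
    assert v > w >= 0
    return v * (2 * w + 1) / (2 * v + 1) * k10

print(k_bend_deexc(2, 1))   # 2 * (3/5) * 5e-12 = 6e-12 cm^3 s^-1
```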
No information exists on the rotational rate coefficients of CO2 with H2; we have therefore used the CO rate coefficients from <cit.> instead. Since CO2 does not have a permanent dipole moment, the exact rate coefficients are not expected to be important, as the critical densities for the levels in the rotational ladders are very low, < 10^4 cm^-3.
The method suggested by <cit.> was employed to calculate the state-to-state de-excitation rate coefficients from initial levels v,J to all levels v',J' with a smaller ro-vibrational energy. This assumes a decoupling of rotational and vibrational levels, so that we can write:
k(v,J → v',J';T) = k(v → v') × P_(J,J')(T),
where
P_(J,J')(T) = \frac{k(0,J → 0,J';T) ∑_J g_J \exp(-E_{v,J}/kT)}{∑_J [ g_J \exp(-E_{v,J}/kT) ∑_{J'} k(0,J → 0,J';T) ]},
with the statistical weights g_J of the levels. All the excitation rates are calculated using detailed balance.
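A schematic implementation of this decoupling (our sketch; array conventions and the normalisation of P_(J,J') follow the expression as reconstructed above and should be checked against the original references):

```python
import numpy as np

def state_to_state(k_vib, k_rot, E_vJ, g_J, T):
    """k(v,J -> v',J'; T) = k(v -> v') * P_(J,J')(T) for one (v, v') pair.

    k_vib : scalar vibrational de-excitation rate coefficient
    k_rot : (nJ, nJ) array of ground-state rotational rates k(0,J -> 0,J'; T)
    E_vJ  : (nJ,) energies of the rotational ladder of state v, in K
    g_J   : (nJ,) statistical weights
    """
    w = g_J * np.exp(-E_vJ / T)                        # Boltzmann weights
    P = k_rot * w.sum() / np.sum(w * k_rot.sum(axis=1))
    return k_vib * P                                   # matrix over (J, J')

# Excitation rates then follow from detailed balance, e.g.
# k_up = k_down * (g_up / g_low) * exp(-dE / T), with dE in K.
```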
§ FAST LINE RAY TRACER
For the calculation of the CO2 lines a new ray tracer was used. The conventional ray tracer used in DALI <cit.> can take up to 10 minutes to calculate the flux from one line. The CO2 molecule model used here includes more than 3600 lines. Not all of these lines are directly important, but to get the complete spectrum of both the 4.3 μm and the 15 μm bands, a few hundred lines need to be ray traced for each model.
To enable the calculation of a large number of lines a module has been implemented into DALI that can calculate a line flux in a few seconds versus a few minutes for the conventional ray tracer. The module uses the fact that, along a line of sight, the velocity shear due to the finite height of the disk is approximately linear <cit.>. Using this, the spectrum for an annulus in the disk can be approximated. At the radius of the annulus in question, the spectra are calculated for different velocities shears. These spectra are calculated by vertically integrating the equation of radiative transfer through the disk and correcting for the projected area for non face-on viewing angles. Then the total spectrum of the annulus is calculated by iterating over the azimuthal direction. For each angle the velocity shear is calculated and the spectrum is interpolated from the pre-calculated spectra. A simple sum over the spectra in all annuli is now sufficient to calculate the total spectrum.
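Structurally, the scheme can be sketched as follows (our illustration, not the actual DALI module; vertical_rt and shear_of_phi stand in for the vertical integration of the transfer equation and the geometry-dependent velocity shear):

```python
import numpy as np

def annulus_spectrum(vertical_rt, shear_of_phi, shear_grid, n_phi=64):
    """Spectrum of one annulus: pre-compute vertically integrated
    spectra on an ascending grid of velocity shears, then interpolate
    over azimuth. vertical_rt(shear) -> 1D spectrum (assumed callable);
    shear_of_phi(phi) -> shear at azimuth phi (assumed callable)."""
    table = np.array([vertical_rt(s) for s in shear_grid])  # (n_shear, n_chan)
    total = np.zeros(table.shape[1])
    for phi in np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False):
        s = np.clip(shear_of_phi(phi), shear_grid[0], shear_grid[-1])
        j = int(np.clip(np.searchsorted(shear_grid, s), 1, len(shear_grid) - 1))
        f = (s - shear_grid[j - 1]) / (shear_grid[j] - shear_grid[j - 1])
        total += (1 - f) * table[j - 1] + f * table[j]  # linear interpolation
    return total / n_phi

# The disk spectrum is then the sum of annulus spectra weighted by the
# projected area of each annulus.
```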
This approximation is a powerful tool for calculating the total flux in a line, especially for low inclinations. For the models presented in this paper the fluxes differ by about 4% for the 15 μm lines and 1.5% for the 4.3 μm lines. This is small compared to the other uncertainties in the models.
The approximation breaks down at high inclinations and should be used with care for any inclination larger than 45^∘. The total line shape is also close to that from the traditional ray tracer, but with the high S/N from ALMA, using the traditional ray tracer is still advised for direct comparisons. The same goes for images, for which the errors will be larger than for the integrated flux or line shape, as some of the errors made in constructing an image cancel out (to first order) when integrating over an annulus.
§ MODEL TEMPERATURE AND RADIATION STRUCTURE
Fig. <ref> shows the model temperature, radiation field and excitation temperature structure corresponding to the model shown in Fig. <ref>. Panel a shows the dust temperature structure; panels b and c show the excitation temperatures of the v_2 1→ 0 Q(6) and v_3 1→ 0 R(7) lines. For the excitation temperature only the upper and lower state of the line are used. This is thus a vibrational excitation temperature and can be different from the ground-state rotational excitation temperature (which follows the dust temperature) and the rotational excitation temperature within a vibrationally excited state. Where the density is higher than the critical density, the excitation temperature is equal to the dust temperature. In the disk atmosphere the Q(6) line is mostly subthermally excited, while the R(7) line is superthermally excited. For both lines there is a maximum in the vertical excitation temperature distribution at the point where the gas becomes optically thick to its own radiation. Panel d shows the dust temperature of the region from which most of the CO2 15 μm emission originates, as a function of radius. Most of the emitting gas is at temperatures between 150 and 350 K. Panels e and f show the strength of the radiation field at 15 μm and 4.3 μm, expressed as a multiple of the radiation field of a 750 K blackbody. This shows where the photon density is sufficient to radiatively pump CO2.
§ MODEL FLUXES G/D_DUST
As mentioned in the main text, two different ways of changing the gas-to-dust ratio were considered: increasing the gas mass or decreasing the dust mass w.r.t. the gas-to-dust ratio 100 case. Fig. <ref> is the counterpart of Fig. <ref>, showing the modelled fluxes for different inner CO2 abundances, outer CO2 abundances and different gas-to-dust ratios. In this case the gas-to-dust ratio is varied by keeping the gas mass of the disk constant and decreasing the amount of dust in the disk.
There are only very slight differences between Fig. <ref> and Fig. <ref> and all observations made for the figure in the main text hold for this figure as well.
§ LTE VS NON-LTE
The effects of the LTE assumption on the line fluxes in a full disk model are shown in Fig. <ref> for the v_3, 1→ 0, R(7) line and the 15 μm feature. Only the models with constant abundance (x_out=x_in) are shown for clarity, but the differences between LTE and non-LTE for these models are representative of the complete set of models. For the 15 μm flux the difference between the LTE and non-LTE models is small, of the order of 30%. The radial extent of the emission is, however, different: the region emitting 75% of the 15 μm flux extends twice as far in the non-LTE models (the extent of the 15 μm non-LTE emission is seen in Fig. <ref>, panel f). This is a clear sign of the importance of the infrared pumping that is included in the non-LTE models. The differences between the fluxes in the 4.25 μm line are greater, up to an order of magnitude. The differences are strongest in the models that have a low total CO2 content (low abundance and low gas-to-dust ratio). This is mostly due to the larger radial extent of the emitting region, which extends up to 20 times further out in the non-LTE models than in the corresponding LTE model (the extent of the 4.3 μm non-LTE emission is seen in Fig. <ref>, panel i). This is in line with the higher Einstein A coefficient and upper level energy (and thus higher critical density) of the v_3, 1→ 0, R(7) line, which makes infrared pumping more important relative to collisional excitation. Fig. <ref> uses g/d_dust = 1000, but the plot for g/d_gas = 1000 is very similar.
§ LINE BLENDING BY H2O AND OH
One of the major challenges in interpreting IR spectra of molecules in T-Tauri disks is the ubiquity of water lines. H2O has a large dipole moment and thus has strong transitions, and as H2O chemically favours hot regions <cit.>, there are many rotational lines in the mid-infrared. Fig. <ref> shows the H2O rotational lines near the CO2 15 μm feature. The spectra are simulated with an LTE slab model using the same parameters as fitted by <cit.> for AS 205 (N), as reproduced in Table <ref>. It should be noted that AS 205 (N) is a very water-rich disk (in its spectra), explaining the large number of strong lines. Fortunately, there are still some regions in the CO2 spectrum that are not blended with H2O or OH lines and can thus be used for tracing the CO2 abundance structure independently of an H2O emission model. The situation improves at higher resolving power, as can be seen in Fig. <ref>. The resolving power of 28000 has been chosen to match the resolving power of the SPICA HRS mode. At this point the line widths are dominated by the assumed Keplerian linewidth of 20 km s^-1. At this resolution the individual Q-branch lines are separable, and many of the line blends that occur at a resolving power of 2200 are no longer an issue.
§ SPITZER-IRS SPECTRA
Fig. <ref> shows the spectra as observed by Spitzer-IRS, reduced with the CHIP software <cit.>, in black. The blue lines show the continua that have been fitted to these spectra. The objects have been chosen because their spectra are relatively free of H2O emission. Even without H2O lines, it is still tricky to determine a good baseline for the continuum, as there are many spectral slope changes even in the narrow wavelength range considered here. This is especially true for HD 101412, where the full spectrum shows a hint of what look like R- and P-branches. If these features are due to line emission, it becomes arbitrary where one puts the actual continuum; these features are therefore counted here as part of the continuum. Whether these features are real or not does not matter much for the abundance determination, as the CO2 Q-branch is separated from the strong R- and P-branch lines.
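A common way to define such a baseline (a generic sketch, not the CHIP pipeline) is to fit a low-order polynomial through user-selected line-free windows:

```python
import numpy as np

def fit_continuum(wl, flux, line_free_windows, order=3):
    """Fit a low-order polynomial continuum through user-selected
    line-free windows (list of (wl_min, wl_max) tuples) and return it
    evaluated on the full wavelength grid."""
    mask = np.zeros_like(wl, dtype=bool)
    for lo, hi in line_free_windows:
        mask |= (wl >= lo) & (wl <= hi)
    coeffs = np.polyfit(wl[mask], flux[mask], order)
    return np.polyval(coeffs, wl)

# continuum-subtracted spectrum: flux - fit_continuum(wl, flux, windows)
```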
Fig. <ref> shows a comparison between observed spectra of disks with strong H2O emission and CO2 model spectra. The spectra are corrected for source luminosity and distance as explained in Sec. <ref>. Assumed distances and luminosities are given in Table <ref>.
http://arxiv.org/abs/1701.08183v2 | 20170127200204 | On the Existence of Ordinary Triangles | [Radoslav Fulek, Hossein Nassajian Mojarrad, Márton Naszódi, József Solymosi, Sebastian U. Stich, May Szedlák] | math.CO | [math.CO, 52C30] |
Let P be a finite point set in the plane.
A c-ordinary triangle in P is a subset of P consisting of three
non-collinear points such that each of the three lines determined by the three
points contains at most c points of P.
Motivated by a question of Erdős, and answering a question of de Zeeuw, we
prove that there exists a constant c>0 such that P contains a c-ordinary
triangle, provided that P is not contained in the union of two lines.
Furthermore, the number of c-ordinary triangles in P is Ω(|P|).
§ INTRODUCTION
In 1893, Sylvester <cit.> asked whether, for any finite set of
non-collinear points on the Euclidean plane, there exists a line incident
with exactly two points.
The positive answer to this question, now known as the Sylvester–Gallai
theorem, was first obtained almost half a century later in 1941 by
Melchior <cit.> as a consequence of the positive answer to an analogous
question in the projective dual.
Erdős <cit.>, unaware of these developments, posed the same
problem in 1943, and it was solved by Gallai in 1944.
For more on the history of this and related problems, see <cit.>.
Given a finite set of points P on the Euclidean plane, a line ℓ ⊂ ℝ^2 is determined by P if ℓ contains at least two
points of P. We say that ℓ is an ordinary line, if ℓ
contains exactly two points of P.
Erdős <cit.> considered the problem of finding an ordinary triangle,
that is, three ordinary lines determined by three points of a finite planar
point set. See <cit.> for details on the origin of this problem.
Motivated by this problem, and with an application in studying
ordinary conics <cit.>, de Zeeuw asked a related question at the 13th
Gremo's Workshop on Open Problems (GWOP 2015, Feldis, Switzerland), which we
describe below.
Let c be a natural number and let P be a point set in the plane.
A c-ordinary triangle in P is
a subset of P consisting of three non-collinear points such that each of the
three lines determined by the three points contains at most c points of P.
It is easy to see that in order to be able to find a c-ordinary triangle
for large n, we have to assume that P is not contained in the union of two
lines.
Under this restriction one might suspect that there is a 2-ordinary triangle in
P. However, this is not true as shown by Böröczky's
construction <cit.>.
The following simple example also shows this. Let P_1 be a set of points
that are not all collinear, and let ℓ be a line that is not parallel to any line
determined by P_1. For each line ℓ_1 determined by P_1, we add the point at the intersection of
ℓ and ℓ_1. Let us denote this new set of points by P_2. All points
of P_2 are collinear, hence a 2-ordinary triangle must contain two points from
P_1. However, by construction every line determined by P_1 contains a point
of P_2. Hence there are no 2-ordinary triangles in this point set.
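This construction is easy to verify computationally; the sketch below uses an illustrative non-collinear P_1 and the line ℓ: y = 7x − 100, chosen so that ℓ is not parallel to any line determined by P_1:

```python
from fractions import Fraction as F
from itertools import combinations

# Illustrative non-collinear P1; exact rational arithmetic avoids rounding.
P1 = [(F(0), F(0)), (F(1), F(0)), (F(0), F(1)), (F(2), F(3))]

def line(p, q):
    """Line through p and q as coefficients (a, b, c) of ax + by = c."""
    a, b = q[1] - p[1], p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

P2 = set()
for p, q in combinations(P1, 2):
    a, b, c = line(p, q)
    x = (c + 100 * b) / (a + 7 * b)     # intersection with y = 7x - 100
    P2.add((x, 7 * x - 100))

# Every line spanned by two points of P1 now carries a third point (on ell),
# and any two points of P2 span ell itself, so P1 + P2 contains no
# 2-ordinary triangle.
print(sorted(P2))
```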
De Zeeuw asked whether a c-ordinary triangle can be found in P.
The aim of this manuscript is to give a positive answer to this question.
There is a natural number c such that the following holds. Assume P is a
finite set of points on the Euclidean plane not contained in the union of two
lines. Then P contains a c-ordinary triangle, that is, three non-collinear
points such that each of the three lines determined by these three points
contains at most c points of P. Moreover, the number of c-ordinary
triangles in P is Ω(|P|).
The constant in the theorem above can be chosen as c=12000.
We see no reason to believe that this is the best constant.
Moreover, it remains open if the number of c-ordinary
triangles in P is superlinear (possibly even quadratic) in |P|.
§ TOOLS
To prove Theorem <ref>, we need the following lemmas. The first one is a
corollary of the Szemerédi-Trotter Theorem.
<cit.>
Let k,n ≥ 2 be natural numbers, P a set of n points in the plane, and
let f(k) denote the number of lines in the plane containing at least k
points of P. Then
f(k) ≤ c' n^2/k^3 for k ≤ √(n), and f(k) ≤ c' n/k for k > √(n),
for a universal constant c'>0.
In fact, we may take c' = 125.
Clearly, the claimed bound holds for k=2,3, since f(2) ≤ \binom{n}{2}
and f(3) ≤ \binom{n}{2}/\binom{3}{2}. To prove the statement for k >
3, we rely on the following result by Pach, Radoičić, Tardos and
Tóth <cit.>: for any given n points and m lines on
the Euclidean plane, the number of incidences between them is at most
2.5 m^{2/3} n^{2/3} + m + n. Let m = f(k) denote the number of lines
containing at least k points of P. Observe that the number of point-line
incidences is thus at least mk. Hence, mk ≤ 2.5 m^{2/3} n^{2/3} + m + n.
First, consider the case m ≥ n. Observe that for k > 3, we have mk/2 ≤
m(k-2) ≤ 2.5 m^{2/3} n^{2/3}.
It follows that m ≤ 125 n^2/k^3, and specifically, m ≤ 125 n/k if k > √(n).
Next, consider the case m < n. We have mk ≤ 2.5 m^{2/3} n^{2/3} + m + n
≤ 2.5 m^{2/3} n^{2/3} + 2.5 n, and therefore mk ≤ max{5 m^{2/3} n^{2/3}, 5n}. Hence, m ≤ max{125 n^2/k^3, 5n/k}. For
k ≤ 5√(n), the maximum is attained at the first term, whereas for k >
√(n), we trivially have n^2/k^3 < n/k,
establishing the claim for c' = 125.
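For small integer point sets, f(k) can be computed by brute force, which is useful for sanity-checking the bound (illustrative sketch):

```python
from itertools import combinations
from math import gcd

def rich_lines(points, k):
    """Number of lines containing at least k of the given integer points."""
    lines = {}
    for (x1, y1), (x2, y2) in combinations(points, 2):
        a, b = y2 - y1, x1 - x2
        c = a * x1 + b * y1
        g = gcd(gcd(a, b), c)
        a, b, c = a // g, b // g, c // g
        if a < 0 or (a == 0 and b < 0):   # canonical sign
            a, b, c = -a, -b, -c
        lines.setdefault((a, b, c), set()).update([(x1, y1), (x2, y2)])
    return sum(1 for pts in lines.values() if len(pts) >= k)

grid = [(x, y) for x in range(5) for y in range(5)]
print(rich_lines(grid, 5))   # 12: five rows, five columns, two diagonals
```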
The following Turán–type lemma (related to Mantel's theorem) from extremal
graph theory provides a lower bound for the number of triangles (subgraphs
isomorphic to K_3) in a graph. It can be found with a proof as Problem 10.33
in <cit.>.
Consider a graph G=(V(G), E(G)) with |V(G)|=n and |E(G)|=m. Let
t_3(G) denote the number of triangles in G. Then we have
t_3(G) ≥ \frac{m}{3n}\left(4m - n^2\right).
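The inequality is easy to check numerically on random graphs (illustrative script):

```python
import random
from itertools import combinations

def triangle_bound_check(n=40, p=0.6, seed=1):
    """Compare the triangle count of a random graph with m(4m - n^2)/(3n)."""
    random.seed(seed)
    adj = [[False] * n for _ in range(n)]
    m = 0
    for u, v in combinations(range(n), 2):
        if random.random() < p:
            adj[u][v] = adj[v][u] = True
            m += 1
    t3 = sum(1 for u, v, w in combinations(range(n), 3)
             if adj[u][v] and adj[v][w] and adj[u][w])
    return t3, m * (4 * m - n * n) / (3 * n)

print(triangle_bound_check())   # observed count, lower bound
```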
§ PROOF OF THEOREM <REF>
In this section, we prove Theorem <ref>.
Our proof is closely related to the standard proof of Beck's Theorem, where the
number of pairs of points on medium-rich lines is bounded using the
Szemerédi-Trotter theorem, and then it is concluded that either there is a
very rich line, or there are many pairs of points on poor lines, see the proof
of Theorem 18.8 in <cit.>.
The constant c will be chosen at the end of the proof. Assume P is a set of
n ≥ c points in the plane and let ℒ={L_1,L_2,…,L_m}
denote the set of lines determined by P. Define l_i=|L_i ∩ P|, for
i=1,2,…,m.
Set α = 4/(c+1). We split the proof into two
cases:
(i) There is a line L_i ∈ℒ such that l_i > α n;
(ii) For all i=1,2,…,m we have l_i ≤α n.
Consider the first case. The point set P∖ L_i is non-collinear, since
otherwise P would be contained in the union of two lines. Applying the
Sylvester-Gallai theorem, we can find an
ordinary line L ∈ℒ for P∖ L_i, i.e. L contains
exactly two points q,r ∈ P∖ L_i. Note that L may contain at most
one point of P ∩ L_i. Next, we show that there are many points on L_i
which
together with q,r form c-ordinary triangles. For this, we define the set
P_q ⊂ P as
P_q={ p ∈ L_i∩ P : |pq∩ P| > c},
where pq denotes the line passing through p,q. We define P_r
in
a similar way. Note that for any point p∈ P_q, the line pq
contains at least c-1 points of P∖ ( L_i∪{q}); moreover, these
sets of c-1 points are disjoint for different p∈ P_q. So we
get
(c-1)· |P_q| ≤ n-l_i,
which implies that
|P_q| ≤ \frac{n-l_i}{c-1} <
\frac{l_i/α - l_i}{c-1} = \frac{(c+1)/4 - 1}{c-1} · l_i < \frac{l_i}{4}.
Similarly, |P_r| < l_i/4. So there are at
least l_i/2 points
s∈ P∩ L_i such that s ∉ P_q∪ P_r. Furthermore, s,q,r are
non-collinear. This implies that the lines sq,sr
contain
at most c points of P. Therefore every triangle determined by s,q,r,
where s ∉ P_q∪ P_r, is a
c-ordinary triangle for P.
The number of these triangles is at least
l_i/2 > α n/2 = \frac{2n}{c+1},
completing the proof of case (i). Note that, so far, c may be chosen as any
integer greater than 2.
Next, we consider case (ii). So we assume that no line of ℒ
contains more than α n points of P. First we bound ∑_{i : c < l_i ≤ α n} \binom{l_i}{2} from above. With the notation of Lemma <ref>, we
have
∑_{i : c < l_i ≤ √(n)} \binom{l_i}{2} ≤ ∑_{j=⌊log(c+1)⌋}^{⌈log√(n)⌉} ∑_{i : 2^j ≤ l_i ≤ 2^{j+1}} \binom{l_i}{2} ≤ ∑_{j=⌊log(c+1)⌋}^{⌈log√(n)⌉} f(2^j) \binom{2^{j+1}}{2} ≤^{(*)} ∑_{j=⌊log(c+1)⌋}^{⌈log√(n)⌉} c' \frac{n^2}{2^{3j}} \binom{2^{j+1}}{2} ≤ ∑_{j=⌊log(c+1)⌋}^{⌈log√(n)⌉} c' \frac{n^2}{2^{j-1}} ≤ ∑_{j=⌊log(c+1)⌋}^{∞} c' \frac{n^2}{2^{j-1}} ≤ \frac{8c'n^2}{c+1},
where logarithms are base 2, and the inequality with star follows
from Lemma <ref>.
On the other hand, by the same lemma, we have
∑_{i : √(n) < l_i ≤ α n} \binom{l_i}{2} ≤ ∑_{j=0}^{⌈log(α√(n))⌉-1} ∑_{i : 2^j√(n) < l_i ≤ 2^{j+1}√(n)} \binom{l_i}{2} ≤ ∑_{j=0}^{⌈log(α√(n))⌉-1} f(2^j√(n)) \binom{2^{j+1}√(n)}{2} ≤^{(*)} ∑_{j=0}^{⌈log(α√(n))⌉-1} c' \frac{n}{2^j√(n)} \binom{2^{j+1}√(n)}{2} ≤ ∑_{j=0}^{⌈log(α√(n))⌉-1} c' n^{3/2} 2^{j+1} ≤ 4c'n^{3/2} · α√(n) = \frac{16c'n^2}{c+1}.
As a result, we obtain
∑_{i : c < l_i ≤ α n} \binom{l_i}{2} = ∑_{i : c < l_i ≤ √(n)} \binom{l_i}{2} + ∑_{i : √(n) < l_i ≤ α n} \binom{l_i}{2} ≤ \frac{24c'n^2}{c+1}.
Let G be the graph with vertex set V(G)=P, such that two points p,p'
∈ P are adjacent in G if the line pp' spanned by p,p'
satisfies |pp'∩ P| ≤ c.
By the following identity
∑_{i : 2 ≤ l_i ≤ α n} \binom{l_i}{2} = \binom{n}{2},
we obtain for the number of edges of G,
|E(G)| = ∑_{i : 2 ≤ l_i ≤ c} \binom{l_i}{2} ≥ \binom{n}{2} - \frac{24c'n^2}{c+1}.
Now we choose c large enough such that
4\left(\binom{n}{2} - \frac{24c'n^2}{c+1}\right) - n^2 = Ω(n^2).
Combining
it with (<ref>) yields
4|E(G)| - n^2 = 4∑_{i : 2 ≤ l_i ≤ c} \binom{l_i}{2} - n^2 = Ω(n^2).
Therefore, by Lemma <ref> we have
t_3(G) ≥ \frac{|E(G)|}{3n}\left(4|E(G)| - n^2\right) = \frac{Ω(n^2)}{n} · Ω(n^2) = Ω(n^3).
This implies that G has Ω(n^3) triangles. Let T be the set of those
triangles in G whose three vertices are non-collinear. It is easy to see that these
triangles correspond to c-ordinary triangles in P.
Note that the number of triangles with collinear vertices is at most
∑_{i : 2 ≤ l_i ≤ c} \binom{l_i}{3} ≤ ∑_{i : 2 ≤ l_i ≤ c} \binom{c}{3} ≤ \binom{n}{2} · \binom{c}{3} = O(n^2).
So we get
|T|=Ω(n^3)-O(n^2)=Ω(n^3).
As a result, P has Ω(n^3) c-ordinary
triangles, provided that c satisfies (<ref>).
Equation (<ref>) yields that we may choose c = 96 c', where
c' is from Lemma <ref>.
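The graph construction in case (ii) translates directly into a brute-force procedure for listing the c-ordinary triangles of an integer point set (an illustrative sketch, not an optimised algorithm):

```python
from itertools import combinations
from math import gcd

def line_of(p, q):
    """Canonical (a, b, c) with ax + by = c for the line through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    g = gcd(gcd(a, b), c)
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):
        a, b, c = -a, -b, -c
    return a, b, c

def c_ordinary_triangles(points, c):
    """All triples of non-collinear points whose three spanned lines each
    contain at most c points of the set."""
    on_line = {}
    for p, q in combinations(points, 2):
        on_line.setdefault(line_of(p, q), set()).update([p, q])
    rich = {l for l, pts in on_line.items() if len(pts) > c}
    return [(p, q, r) for p, q, r in combinations(points, 3)
            if line_of(p, q) != line_of(p, r)          # non-collinear
            and line_of(p, q) not in rich
            and line_of(q, r) not in rich
            and line_of(p, r) not in rich]
```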
§ ACKNOWLEDGEMENTS
We thank Emo Welzl for providing the venue, his GWOP workshop, where our
research initially started. We also thank Frank de Zeeuw for his many remarks
on earlier versions of the manuscript.
We are grateful to both referees, whose comments made the presentation much
cleaner.
R. Fulek was partially supported by the People Programme (Marie Curie Actions)
of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA
grant agreement no [291734].
H. N. Mojarrad and M. Naszódi were members of János Pach's Chair of DCG
at EPFL, Lausanne, supported by the Swiss National Science
Foundation (SNSF) Grants 200020-162884 and 200021-165977.
M. Naszódi was also partially supported by the National Research,
Development and Innovation Office grant K119670, and by the
János Bolyai Research Scholarship of the Hungarian
Academy of Sciences.
J. Solymosi was supported by NSERC, by ERC Advanced Research Grant no 267165
(DISCONV) and by the National Research, Development and Innovation Office
(NKFIH) grant NK 104183.
S. U. Stich acknowledges support from SNSF and grant “ARC 14/19-060” from the
“Direction de la recherche scientifique - Communauté française de
Belgique”. M. Szedlák's research was supported by the SNSF Project
200021_150055/1.
| In 1893, Sylvester <cit.> asked whether, for any finite set of
non-collinear points on the Euclidean plane, there exists a line incident
with exactly two points.
The positive answer to this question, now known as the Sylvester–Gallai
theorem, was first obtained almost half a century later in 1941 by
Melchior <cit.> as a consequence of the positive answer to an analogous
question in the projective dual.
Erdős <cit.>, unaware of these developments, posed the same
problem in 1943, and it was solved by Gallai in 1944.
For more on the history of this and related problems, see <cit.>.
Given a finite set of points P on the Euclidean plane, a line ℓ⊂^2 is determined by P if ℓ contains at least two
points of P. We say that ℓ is an ordinary line, if ℓ
contains exactly two points of P.
Erdős <cit.> considered the problem of finding an ordinary triangle,
that is, three ordinary lines determined by three points of a finite planar
point set. See <cit.> for details on the origin of this problem.
Motivated by this problem, and with an application in studying
ordinary conics <cit.>, de Zeeuw asked a related question at the 13th
Gremo's Workshop on Open Problems (GWOP 2015, Feldis, Switzerland), which we
describe below.
Let c be a natural number and let P be a point set in the plane.
A c-ordinary triangle in P is
a subset of P consisting of three non-collinear points such that each of the
three lines determined by the three points contains at most c points of P.
It is easy to see that in order to be able to find a c-ordinary triangle
for large n, we have to assume that P is not contained in the union of two
lines.
Under this restriction one might suspect that there is a 2-ordinary triangle in
P. However, this is not true as shown by Böröczky's
construction <cit.>.
The following simple example also shows this. Let P_1 be a set of points
that are not all collinear and let ℓ be some line. For each line ℓ_1
determined by the point set P_1, we add the point at the intersection of
ℓ and ℓ_1. Let us denote this new set of points by P_2. All points
of P_2 are collinear, hence a 2-ordinary triangle must contain two points from
P_1. However, by construction every line determined by P_1 contains a point
of P_2. Hence there are no 2-ordinary triangles in this point set.
De Zeeuw asked whether a c-ordinary triangle can be found in P.
The aim of this manuscript is to give a positive answer to this question.
There is a natural number c such that the following holds. Assume P is a
finite set of points on the Euclidean plane not contained in the union of two
lines. Then P contains a c-ordinary triangle, that is three non-collinear
points such that each of the three lines determined by these three points
contains at most c points of P. Moreover, the number of c-ordinary
triangles in P is Ω(|P|).
The constant in the theorem above can be chosen as c=12000.
We see no reason to believe that this is the best constant.
Moreover, it remains open whether the number of c-ordinary
triangles in P is superlinear (possibly even quadratic) in |P|. | null | null | null | null | null |
http://arxiv.org/abs/1701.07999v1 | 20170127102210 | Beyond the EULA: Improving consent for data mining | ["Luke Hutton", "Tristan Henderson"] | cs.CY | ["cs.CY"] |
Beyond the EULA: Improving consent for data mining
Luke Hutton, Centre for Research in Computing, The Open University, Milton Keynes, MK7 6AA, UK, [email protected]
Tristan Henderson, School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK, [email protected]
Companies and academic researchers may collect, process, and
distribute large quantities of personal data without the explicit
knowledge or consent of the individuals to whom the data pertains.
Existing forms of consent often fail to be appropriately readable and
ethical oversight of data mining may not be sufficient. This raises
the question of whether existing consent instruments are sufficient,
logistically feasible, or even necessary, for data mining. In this
chapter, we review the data collection and mining landscape, including
commercial and academic activities, and the relevant data protection
concerns, to determine the types of consent instruments used. Using
three case studies, we use the new paradigm of human-data interaction
to examine whether these existing approaches are appropriate. We then
introduce an approach to consent that has been empirically
demonstrated to improve on the state of the art and deliver meaningful
consent. Finally, we propose some best practices for data collectors
to ensure their data mining activities do not violate the expectations
of the people to whom the data relate.
§ INTRODUCTION
The ability of companies to collect, process, and distribute large
quantities of personal data, and to further analyse, mine and generate
new data based on inferences from these data, is often done without
the explicit knowledge or consent of the individuals to whom the data
pertains. Consent instruments such as privacy notices or End User
License Agreements (EULAs) are widely deployed, often presenting
individuals with thousands of words of legal jargon that they may not
read nor comprehend, before soliciting agreement in order to make use
of a service. Indeed, even if an individual does have a reasonable
understanding of the terms to which they have agreed, such terms are
often carefully designed to extend as much flexibility to the data
collector as possible to obtain even more data, distribute them to
more stakeholders, and make inferences by linking data from multiple
sources, despite no obvious agreement to these new practices.
The lack of transparency behind data collection and mining practices
threatens the agency and privacy of data subjects, with no practical way
to control these invisible data flows, nor correct misinformation or
inaccurate and inappropriate inferences derived from linked data.
Existing data protection regimes are often insufficient as they
are predicated on the assumption that an individual is able to detect
when a data protection violation has occurred in order to demand
recourse, which is rarely the case when data are opaquely mined at
scale.
These challenges are not unique to commercial activities, however.
Academic researchers often make use of datasets containing
personal information, such as those collected from social network
sites or devices such as mobile phones or fitness trackers. Most
researchers are bound by an obligation to seek ethical approval from
an institutional review board (IRB) before conducting their research.
The ethical protocols used, however, are inherited from post-war
concerns regarding biomedical experiments, and may not be appropriate
for Internet-mediated research, where millions of data points can be
collected without any personal interventions. This raises the
question of whether existing consent instruments are sufficient,
logistically feasible, or even necessary, for research of this nature.
In this chapter we first review the data collection and mining
landscape, including commercial and academic activities, and the
relevant data protection laws, to determine the types of consent
instruments used. Employing the newly-proposed paradigm of Human-Data
Interaction, we examine three case studies to determine whether these
mechanisms are sufficient to uphold the expectations of individuals,
to provide them with sufficient agency, legibility and negotiability,
and whether privacy norms are violated by secondary uses of data which
are not explicitly sanctioned by individuals. We then discuss various
new dynamic and contextual approaches to consent, which have been
empirically demonstrated to improve on the state of the art and
deliver meaningful consent. Finally, we propose some best practices
that data collectors can adopt to ensure their data mining activities
do not violate the expectations of the people to whom the data relate.
§ BACKGROUND
Data mining is the statistical analysis of large-scale datasets to
extract additional patterns and trends <cit.>. This has
allowed commercial, state, and academic actors to answer questions
that could not previously be addressed, due to insufficient data, analytical
analytical techniques, or computational power. Data mining is often
characterised by the use of aggregate data to identify traits and
trends which allow the identification and characterisation of clusters
of people rather than individuals, associations between events, and
forecasting of future events. As such, it has been used in a number of
real-world scenarios such as optimising the layout of retail stores,
attempts to identify disease trends, and mass surveillance. Many
classical data mining and knowledge discovery applications involve
businesses or marketing <cit.>, such as clustering
consumers into groups and attempting to predict their behaviour. This
may allow a business to understand their customers and target
promotions appropriately. Such profiling can, however, be used to
characterise individuals for the purpose of denying service when
extending credit, leasing a property, or acquiring insurance. In such
cases, the collection and processing of sensitive data can be
invasive, with significant implications for the individual,
particularly where decisions are made on the basis of inferences that
may not be accurate, and to which the individual is given no right of
reply. This has become more important of late, as more recent data
mining applications involve the analysis of personal data, much of
which is collected by individuals and contributed to marketers in what
has been termed “self-surveillance” <cit.>.
Such personal data have been demonstrated to be highly
valuable <cit.>, and have even been described as the new
“oil” in terms of the value of their resource <cit.>.
Value aside, such data introduce new challenges for consent as they
can often be combined to create new inferences and profiles where
previously data would have been absent <cit.>.
Data mining activities are legitimised through a combination of legal and
self-regulatory behaviours. In the European Union, the Data Protection
Directive <cit.>, and the forthcoming General Data Protection
Regulation (GDPR) that will succeed it in 2018 <cit.> govern how data
mining can be conducted legitimately. The e-Privacy Directive also
further regulates some specific aspects of data mining such as cookies
(Table <ref>). In the United States, a self-regulatory
approach is generally preferred, with the Federal Trade Commission offering
guidance regarding privacy protections <cit.>, consisting of six core
principles, but lacking the coverage or legal backing of the EU's approach.
Under the GDPR, the processing of personal data for any purpose, including
data mining, is subject to explicit opt-in consent from an individual, prior
to which the data controller must explicitly state what data are collected,
the purpose of processing them, and the identity of any other recipients of
the data. Although there are a number of exceptions, consent must generally be
sought for individual processing activities, and cannot be broadly acquired a priori for undefined future
uses; there are also particular issues with data mining, transparency
and accountability <cit.>. Solove <cit.> acknowledges these
regulatory challenges, arguing that paternalistic approaches are not
appropriate, as these deny people the freedom to consent to
particular beneficial uses of their data. The timing of consent
requests and the focus of these requests need to be managed
carefully; such thinking has also become apparent in the
GDPR.[e.g., Article 7(3) which allows consent to be withdrawn,
and Article 17 on the “right to be forgotten” which allows
inferences and data to be erased.] The call for
dynamic consent is consistent with Nissenbaum's model of contextual
integrity <cit.>,
which posits that all information exchanges are subject to context-specific
norms that govern to whom and for what purpose information sharing can be
considered appropriate. When the context is disrupted, perhaps by changing
with whom data are shared, or for what purpose, privacy violations can occur
when this is not consistent with the norms of the existing context. Therefore,
consent can help to uphold contextual integrity by ensuring that if the context
is perturbed, consent is renegotiated, rather than assumed.
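To make the bookkeeping of contextual integrity concrete, the toy sketch below treats a norm as a tuple of sender, recipient, information type and transmission principle, and flags any flow that matches no norm of the context. The medical norms shown are invented for illustration, and Nissenbaum's framework is considerably richer (it also tracks the data subject, among other parameters).

```python
from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    info_type: str
    principle: str  # transmission principle, e.g. "for-direct-care"

# Hypothetical entrenched norms of a medical context; a real context
# would carry many more, at a much finer grain.
norms = {
    Flow("patient", "clinician", "diagnosis", "for-direct-care"),
    Flow("clinician", "clinician", "diagnosis", "for-direct-care"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    # A flow is appropriate only if it matches some norm of the context.
    return flow not in norms

# A diagnosis flowing to an analytics firm for research matches no norm,
# so the framework flags it as a potential privacy violation:
print(violates_contextual_integrity(
    Flow("clinician", "analytics-firm", "diagnosis", "for-research")))  # True
```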
Reasoning about how personal data are used has resulted in a new
paradigm, human-data interaction, which places humans at the
centre of data flows and provides a framework for studying personal
data collection and use according to three
themes <cit.>:
* Legibility: Often, data owners are not aware that data mining is even taking place. Even if they are, they may not know what is being collected or analysed, the purpose of the analysis, or the insights derived from it.
* Agency: The opaque nature of data mining often denies data owners agency. Without any engagement in the practice, people have no ability to provide meaningful consent, if they are asked to give consent at all, nor correct flawed data or review inferences made based on their data.
* Negotiability: The context in which data are collected and processed can often change, whether through an evolving legislative landscape, data being traded between organisations, or through companies unilaterally changing their privacy policies or practices. Analysis can be based on the linking of datasets derived
from different stakeholders, allowing insights that no single provider could
make. This is routinely the case in profiling activities such as credit
scoring. Even where individuals attempt to obfuscate their data to subvert this practice, it is often possible to re-identify them from such linked data <cit.>. Data owners should have the ability to review how their data are used as circumstances change in order to uphold contextual integrity.
Early data protection regulation in the 1980s addressed the increase in
electronic data storage and strengthened protections against unsolicited
direct marketing <cit.>. Mail order companies were able to
develop large databases of customer details to enable direct marketing, or the
trading of such information between companies. When acquiring consent for the
processing of such information became mandatory, such as under the 1984 Data
Protection Act in the UK, this generally took the form of a checkbox on paper
forms, where a potential customer could indicate their willingness for
secondary processing of their data. As technology has evolved away from mail-in
forms as the primary means of acquiring personal information, and the
scope and intent of data protection have moved from regulating direct marketing to
a vast range of data-processing activities, there has been little regulatory
attention paid to how consent is acquired. As such, consent is often acquired
by asking a user to tick a checkbox to opt-in or out of secondary use of their
data. This practice is well-entrenched, where people are routinely asked to
agree to an End-User Licence Agreement (EULA) before accessing software, and
multiple terms of service and privacy policies before accessing online
services, generally consisting of a long legal agreement and an “I Agree”
button.
A significant body of research concludes that such approaches to acquiring
consent are flawed. Luger et al. find that the terms and conditions provided
by major energy companies are not sufficiently readable, excluding many from
being able to make informed decisions about whether they agree to such
terms <cit.>. Indeed, Obar and Oeldorf-Hirsch find that the
vast majority of people do not even read such documents <cit.>, with
all participants in a user study accepting terms including handing over their
first-born child to use a social network site. McDonald and Cranor measure the
economic cost of reading lengthy policies <cit.>, noting the
inequity of expecting people to spend an average of ten minutes of their time
reading and comprehending a complex document in order to use a service.
Friedman et al. caution that simply including more information and more
frequent consent interventions can be counter-productive, by frustrating
people and leading them to making more complacent consent
decisions <cit.>.
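As an illustration of the kind of screening these readability studies apply, the sketch below scores an invented consent clause with the standard Flesch Reading Ease formula (206.835 - 1.015 x words/sentences - 84.6 x syllables/words). The vowel-group syllable counter is a rough heuristic, so the scores are indicative rather than exact.

```python
import re

def count_syllables(word: str) -> int:
    # crude heuristic: count maximal vowel groups, at least one per word
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

clause = ("The licensee hereby irrevocably consents to the collection, "
          "processing, and onward disclosure of all data howsoever arising.")
print(round(flesch_reading_ease(clause), 1))  # well below 60: hard to read
```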
Academic data mining is subject to a different regulatory regime, with fewer
constraints over the secondary use of data from a data protection perspective.
This is balanced by an ethical review regime, rooted in post-war concern over
a lack of ethical rigour in biomedical research. In the US, ethical review for
human subjects research via an institutional review board (IRB) is necessary
to receive federal funding, and the situation is similar in many other
countries. One of the central tenets of ethical human research is to acquire
informed consent before a study begins <cit.>. As such,
institutions have developed largely standardised consent
instruments <cit.> which researchers can use to meet these
requirements. While in traditional lab-based studies, these consent procedures
can be accompanied by an explanation of the study from a researcher, or the
opportunity for a participant to ask any questions, this affordance is
generally not available in online contexts, effectively regressing to the
flawed EULAs discussed earlier.
Some of these weaknesses have been examined in the
literature. Hamnes et al. find that consent documents in rheumatological studies
are not sufficiently readable for the majority of the
population <cit.>, a finding supported by Vučemilo and Borovečki, who also find that medical consent forms often
exclude important information <cit.>. Donovan-Kicken et al.
examine the sources of confusion when reviewing such
documents <cit.>, which include
insufficient discussion of risk and lengthy or overly complex language. Munteanu et al. examine the ethics approval process in a number of HCI
research case studies, finding that participants often agreed to
consent instruments they have not read or understood, and the rigidity of such
processes can often be at odds with such studies where a “situational
interpretation” of an agreed protocol is needed <cit.>.
There is also a lack of agreement among researchers about how to conduct such research
in an ethical manner, with Vitak et al. finding particular variability regarding
whether data should be collected at large scale without consent, or whether acquiring
consent in such cases is even possible <cit.>.
Existing means of acquiring consent are inherited from a time when the scope of
data collection and processing was perhaps constrained and could be well
understood. Now, even when the terms of data collection and processing are
understood as written, whether registering for an online service, or
participating in academic research, it is not clear that the form of gaining
the consent was meaningful, or sufficient. Someone may provide consent to
secondary use of their data, without knowing what data this constitutes, who
will be able to acquire it, for what purpose, or when. This is already a
concern when considering the redistribution and processing of self-disclosed
personally identifiable information, but becomes increasingly complex when
extended to historical location data, shopping behaviours, or social network
data, much of which are not directly provided by the individual, and
are nebulous
in scale and content. Moreover, concerns may change over time (the
so-called “privacy paradox” <cit.> that has been
demonstrated empirically <cit.>),
which may require changes to previously-granted consent.
Returning to our three themes of legibility, agency, and
negotiability, we can see that:
* Existing EULAs and consent forms may not meet a basic
standard of legibility, alienating
significant areas of the population from understanding what they are being
asked to agree to. Furthermore, the specific secondary uses of their data are
often not explained.
* EULAs and consent forms are often only used to secure permission once, then
often never again, denying people agency to revoke their
consent when a material change in how their data are used arises.
* Individuals have no power to meaningfully negotiate how their
data are
used, nor to intelligently adopt privacy-preserving behaviours, as they
generally do not know which data attributed to them are potentially risky.
§ CASE STUDIES
In this section we examine a number of real-world case studies to
identify instances where insufficient consent mechanisms were
employed, failing to provide people with legibility, agency, and
negotiability.
§.§ Taste, Ties, and Time
In 2006, researchers at Harvard University collected a dataset of Facebook
profiles from a cohort of undergraduate students, named “Tastes, Ties and
Time” (T3) <cit.>. At the time, Facebook considered individual
universities to comprise networks where members of the institution could access the
full content of each other's profiles, despite not having an explicit
friendship with each other on the service. This design was exploited
by having research assistants at the same institution manually
extract the profiles of each member of the cohort.
Subsequently, an anonymised version of the dataset was made publicly
available, with student names and identifiers removed, and terms and
conditions for downloading the dataset made it clear that
deanonymisation was not permitted. Unfortunately, this proved
insufficient, with aggregate statistics such as the size of the cohort
making it possible to infer the college the dataset was derived from,
and as some demographic attributes were only represented by a single
student, it was likely that individuals could be
identified <cit.>.
Individuals were not aware that the data collection took place, and did not
consent to its collection, processing, nor subsequent release. As such, this
case falls short in our themes for acceptable data-handling practices:
* Legibility: Individuals were not aware their data were
collected or subsequently released. With a tangible risk of individuals being
identified without their knowledge, the individual is not in a position to explore any legal
remedies to hold Facebook or the researchers responsible for any
resulting harms. In addition, even if consent were sought, it can be difficult
for individuals to conceptualise exactly which of their data would be included,
considering the large numbers of
photos, location traces, status updates, and biographical information a typical
user might accrue over years, without an accessible means of visualising or
selectively disclosing these data.
* Agency: Without notification, individual users had no way to
opt-out of the data collection, nor prevent the release of their data. As a
side-effect of Facebook's university-only network structure at the time, the
only way for
somebody to avoid their data being used in such a manner was to leave these institution networks, losing much of the utility of the service in the process. This parallels
Facebook's approach to releasing
other products, such as the introduction of News Feed in 2006. By
broadcasting profile updates to one's network, the effective visibility of
their data was substantially increased, with no way to opt-out of the feature
without leaving the service entirely. This illusory loss of control was
widely criticised at the time <cit.>.
* Negotiability: In this respect, the user's relationship with
Facebook itself is significant. In addition to IRB approval, the study was conducted with
Facebook's
permission <cit.>, but Facebook's privacy policy at the
time did not allow for Facebook to
share their data with the researchers.[Facebook Privacy Policy,
February 2006: <http://web.archive.org/web/20060406105119/http://www.facebook.com/policy.php>]
Therefore, the existing context for sharing information on Facebook was
disrupted
by this study. This includes the normative expectation that data are shared
with
Facebook for the purpose of sharing information with one's social network, and
not myriad third parties. In addition, no controls were extended to the
people involved to prevent it from happening, or to make a positive decision to
permit this new information-sharing context.
§.§ Facebook emotional contagion experiment
In 2012, researchers at Facebook and Cornell University conducted a
large-scale
experiment on 689,003 Facebook users. The study manipulated the presentation
of stories in Facebook's News Feed product, which aggregates recent content
published by a user's social network, to determine whether biasing the
emotional content of the news feed affected the emotions that people expressed in
their own disclosures <cit.>.
While the T3 study highlighted privacy risks of nonconsensual data sharing, the
emotional contagion experiment raises different personal risks from
inappropriate data mining activities. For example, for a person suffering from
depression, being subjected to a news feed of predominantly
depressive-indicative content could have catastrophic consequences, particularly
considering the hypothesis of the experiment that depressive
behaviour would increase under these circumstances. Considering the
scale at which the
experiment
was conducted, there was no mechanism for excluding such vulnerable people,
nor measuring the impact on individuals to mitigate such harms. Furthermore,
as the study was not age-restricted, children may have unwittingly been
subjected to the study <cit.>. Recuber notes that the harms to
any one individual in such experiments can be masked by the scale of the
experiment <cit.>.
Beyond the research context, this case highlights the broader implications of
the visibility of media, whether socially-derived or from mainstream media,
being algorithmically controlled. Napoli argues that this experiment highlights
Facebook's ability to shape public discourse by altering the news feed's
algorithm to introduce political bias <cit.>, without any
governance to ensure that such new media are acting in the public interest. The
majority of Facebook users do not know that such filtering happens at all, and
the selective presentation of content from one's social network can cause
social repercussions where the perception is that individuals are withholding
posts from someone, rather than an algorithmic intervention by
Facebook <cit.>.
This case shows one of the greater risks of opaque data mining. Where people are
unaware such activities are taking place, they lose all power to act
autonomously to minimise the risk to themselves, even putting aside the
responsibility of the
researchers in this instance. We now consider how this case meets our three core
themes:
* Legibility: Individuals were unaware that they were participants
in the research. They would have no knowledge or
understanding of the algorithms which choose
which content is presented on the news feed, and how they were altered for this
experiment, nor that the news feed is anything other than a chronological
collection of content provided by their social network. Without this insight,
the cause of a perceptible change in the emotional bias in the news feed can not
be reasoned. Even if one is aware that the news feed is algorithmically
controlled,
without knowing which data are collected or used in order to determine
the relevance of individual stories, it is difficult to reason why certain
stories are displayed.
* Agency: As in the T3 case, without awareness of the experiment
being conducted, individuals were unable to provide consent, nor opt-out of the
study. Without an understanding of the algorithms which drive the news feed, nor
how they were adjusted for the purposes of this experiment, individuals are
unable to take actions, such as choosing which information to disclose or hide
from Facebook in an effort to control the inferences Facebook makes, nor to
correct any inaccurate inferences. At the most innocuous level, this might be
where Facebook has falsely inferred a hobby or interest, and shows more content
relating to that. Of greater concern is when Facebook, or the researchers
in this study, are unable to detect when showing more
depressive-indicative content could present a risk.
* Negotiability: In conducting this study, Facebook unilaterally
changed the relationship its users have with the service, exploiting those who
are unable to control how their information is used <cit.>.
At the time of the study, the Terms of Service to which users agree when
joining Facebook did not indicate that data could be used for research
purposes <cit.>, a clause which was added after the data were
collected. As a commercial operator collaborating with academic researchers,
the nature of the study was ambiguous, with Facebook having an internal
product improvement motivation, and Cornell researchers aiming to contribute
to generalisable knowledge. Cornell's IRB deemed that they did not need to
review the study because Facebook provided the data,[Cornell statement: <https://perma.cc/JQ2L-TEXQ>] but
the ethical impact on the unwitting participants is not dependent on who collected the data. As Facebook has no legal requirement to
conduct an ethical review of their own research, and without oversight from
the academic collaborators, these issues did not surface earlier. Facebook has
since adopted an internal ethics review process <cit.>, however it
makes little reference to mitigating the impact on participants, and mostly
aims to maximise benefit to Facebook. Ultimately, these actions by researchers
and institutions with which individuals have no prior relationship serve to
disrupt the existing contextual norms concerning people's relationship with
Facebook, without extending any ability to renegotiate this relationship.
§.§ NHS sharing data with Google
In February 2016, the Google subsidiary DeepMind
announced a collaboration with the National Health Service's Royal Free London
Trust to build a mobile application titled Streams to support the detection of
acute kidney
injury (AKI) using machine learning techniques. The information sharing
agreement permitting this collaboration gives DeepMind ongoing and historical
access to all identifiable patient data collected by the trust's three
hospitals <cit.>.
While the project is targeted at supporting those at risk of AKI, data
relating to all patients are shared with DeepMind, whether they are at
risk or not. There is no attempt to gain
the consent of those patients, or to provide an obvious opt-out mechanism. The
trust's privacy policy only allows data to be shared without explicit consent
for “your direct care”. [Royal Free London Trust Privacy Statement:
<https://perma.cc/33YE-LYPF>]
Considering that Streams is
only
relevant to
those being
tested for kidney disease, it follows that for most people, their data are
collected and processed without any direct care benefit <cit.>,
in violation of this policy. Given the diagnostic purpose of the app,
such an application could constitute a medical device; however, no regulatory approval
was sought by DeepMind or the trust <cit.>.
Permitting private companies to
conduct data mining within the medical domain disrupts existing norms, by
occupying a space that lies between direct patient care and academic research.
Existing ethical approval and data-sharing regulatory mechanisms have not been
employed, or are unsuitable
for properly evaluating the potential impacts of such work. By not limiting
the scope of the data collection nor acquiring informed
consent, there is no opportunity for individuals to protect
their data. In addition, without greater awareness of the collaboration, broader
public debate about
the acceptability of the practice is avoided, which is of importance
considering the sensitivity of the data involved. Furthermore, this fairly
limited
collaboration can normalise a broader sharing of data in the future, an
eventuality which is more likely given an ongoing strategic partnership forged
between DeepMind and the trust <cit.>.
We now consider this case from the perspective of our three themes:
* Legibility: Neither patients who could directly benefit from
improved detection of AKI, nor all other hospital patients, were aware that
their data were being shared with DeepMind.
Indeed, the practice at the heart of many data mining activities – identifying patterns to produce
insight from myriad data – risks violating a fundamental principle of
data protection regulation: that of proportionality <cit.>.
* Agency: The NHS collects data from its functions in a number of
databases, such as the Secondary Uses Service (SUS), which provides a historical
record of treatments in the UK and can be made available to researchers. Awareness of this
database and its research purpose is mostly constrained to leaflets and posters
situated in GP practices. If patients wish to opt-out of
their data being used they must insist on it, and are likely to be reminded of
the public health benefit and discouraged from opting-out <cit.>.
Without awareness of the SUS, and without individual consent being
acquired, it is difficult for individuals to act with agency. Even assuming
knowledge of this collaboration, it would require particular understanding of
the functions of the NHS to know that opting-out of the SUS would limit
historical treatment data made available to DeepMind. Even where someone is
willing to share their data to support their direct care, they may wish to
redact information relating to particularly sensitive diagnoses or treatments,
but have no mechanism to do so.
* Negotiability: The relationship between patients and their
clinicians is embodied in complex normative expectations of confidentiality
which are highly context-dependent <cit.>. Public
understanding of individual studies is already low <cit.>, and the
introduction of sophisticated data mining techniques into the diagnostic
process, for which existing regulatory mechanisms are not prepared, disrupts
existing norms around confidentiality and data sharing. The principle of
negotiability holds that patients should be able to review their willingness
to share data as their context changes, or the context in which the data are
used. Existing institutions are unable to uphold this, and the solution may
lie in increased public awareness and debate, and review of policy and regulatory
oversight to reason a more appropriate set of norms.
How each of these case studies meets the principles of legibility, agency, and
negotiability is summarised in Table <ref>.
§ ALTERNATIVE CONSENT MODELS
In Section <ref> we discussed some of the shortcomings with existing means
of acquiring consent for academic research and commercial services including
data mining, and discussed three case studies in
Section <ref>. Many of the concerns in these case studies
revolved around an inability to provide or enable consent on the part
of participants. We now discuss the state-of-the-art in providing
meaningful consent for today's data-mining activities.
The acquisition of informed consent can broadly be considered
to be secured or sustained in nature <cit.>.
Secured consent encompasses the forms we discussed in
Section <ref>, where consent is gated by a single EULA or consent
form at the beginning of the data collection process and not
revisited. Conversely, sustained consent involves ongoing
reacquisition of consent over the period that the data are collected
or used. This might mean revisiting consent when the purpose of the
data collection or processing has changed, such as if data are to be
shared with different third parties, or if the data subject's context
has changed. Each interaction can be viewed as an individual consent
transaction <cit.>. In research, this can also mean extending more granular
control to participants over which of their data are collected, such
as in Sleeper et al.'s study into self-censorship behaviours on
Facebook, where participants could choose which status updates they were
willing to share with researchers <cit.>. This approach has a number of
advantages. Gaining consent after the individual has had experience
with a particular service or research study may allow subjects to make
better-informed decisions than a sweeping form of secured consent.
Furthermore, sustained consent can allow participants to make more
granular decisions about what they would be willing to share, with a
better understanding of the context, rather than a single consent form
or EULA being considered carte blanche for unconstrained data
collection.
The distinction between secured and sustained consent reveals a tension between
two variables: burden – the time spent and cognitive load required to
negotiate the consent process, and accuracy – the extent to which the
effect of a consent decision corresponds with a person's expectations. While a
secured instrument such as a consent form minimises the burden on the
individual, with only a single form to read and comprehend, the accuracy is
impossible to discern, with no process for validating that consent decision in
context, nor to assess the individual's comprehension of what they have agreed
to. Conversely, while a sustained approach – such as asking someone whether
they are willing for each item of personal data to be used for data mining
activities – may improve accuracy, the added burden is significant and can be
frustrating, contributing to attrition, which is particularly problematic in
longitudinal studies <cit.>.
In some domains other than data mining, this distinction has already been
applied. In biomedical research, the consent to the use of samples is commonly
distinguished as being broad or dynamic. Broad consent allows samples to be used
for a range of experiments within an agreed framework without consent being
explicitly required <cit.>, whereas dynamic consent involves
ongoing engagement with participants, allowing them to see how their samples are
used, and permitting renegotiation of consent if the samples are to be used for
different studies, or if the participant's wishes change <cit.>.
Despite the differences from the data mining domain, the same consent
challenges resonate.
Various researchers have proposed ways of minimising the burden of
consent, while simultaneously collecting meaningful and accurate
information from people. Williams et al. look at sharing medical data,
enhancing agency with a dynamic consent model that enables electronic control
of data, and improving legibility by providing patients with information about how their data are
used <cit.>. Gomer et al. propose the use of agents who make
consent decisions
on behalf of individuals to reduce the burden placed on them, based on
preferences they have expressed which are periodically
reviewed <cit.>. Moran et al. suggest that consent can be
negotiated in multi-agent environments by identifying interaction patterns to
determine appropriate times to acquire consent <cit.>.
We have discussed legibility as an important aspect of HDI, and
Morrison et al. study how to visualise collected data to research
participants <cit.>.
Personalised visualisations led participants in an empirical study to
exit the study earlier, which might mean that secured consent was leading
participants to continue beyond the appropriate level of data
collection. In much earlier work, Patrick looked at presenting user
agreements contextually (rather than at the beginning of a
transaction as in secured consent) and developed a web-based widget to
do so <cit.>.
Such dynamic approaches to consent are not universally supported. Steinsbekk
et al. suggest that where data are re-used for multiple studies, there is no
need to acquire consent for each one where there are no significant ethical
departures because of the extra burden, arguing that it puts greater
responsibility on individuals to discern whether a study is ethically
appropriate than existing governance structures <cit.>.
In previous work, we have developed a method for acquiring consent which aims to
maximise accuracy and minimise burden, satisfying both requirements, bringing
some of the principles of dynamic consent to the data mining
domain <cit.>, while aiming to maintain contextual integrity by
respecting prevailing social norms.
While many of the consent approaches discussed in this chapter may satisfy a
legal requirement, it is not clear that this satisfies the expectations of
individuals or society at large, and thus may violate contextual integrity.
In a user study, we examine whether prevailing norms representing willingness to
share specific types of Facebook data with researchers, along with limited data
about an individual's consent preferences, can be used to minimise burden
and maximise accuracy. The performance of these measures were compared to two
controls: one for accuracy and for burden. In the first instance,
a sustained consent process which gains permission for the use of each
individual item of Facebook data would maximise accuracy while pushing great
burden on to the individual. Secondly, a single consent checkbox minimises
burden, while also potentially minimising the accuracy of the method. The
contextual integrity method works similarly to this approach, by asking a series
of consent questions until the individual's conformity to the social norm can be
inferred, at which point no more questions are asked.
For 27.7% of participants, this method is able to achieve high accuracy of
96.5% while reducing their burden by an average of 41.1%. This is highlighted
in Figure <ref>, showing a cluster of norm-conformant
participants achieving high accuracy and low burden. This indicates that
for this segment of the population, the contextual integrity approach both
improves accuracy and reduces burden compared to their respective controls.
While this does indicate the approach is not suitable for all people, norm
conformity is able to be quickly determined within six questions. Where one
does not conform to the norm, the sustained approach can be automatically used
as a fallback, which maintains accuracy at the cost of a greater time burden
on the individual. Even in less optimal cases, the technique can reduce
the burden by an average of 21.9%.
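A minimal sketch of the control flow of such a method is given below; the norm representation, the six-question probe budget, and the helper names are illustrative assumptions rather than the instrument evaluated in the study.

```python
from typing import Callable, Dict, List

def acquire_consent(
    items: List[str],            # individual data items, e.g. status updates
    norm: Dict[str, bool],       # prevailing sharing norm for each item
    ask: Callable[[str], bool],  # poses one consent question to the person
    probe_budget: int = 6,       # questions used to test norm conformity
) -> Dict[str, bool]:
    """Ask per-item questions until conformity to the social norm can be
    inferred, then fill in the remaining decisions from the norm; fall
    back to asking about every item (sustained consent) on any deviation."""
    decisions: Dict[str, bool] = {}
    conforms = True
    for i, item in enumerate(items):
        decisions[item] = ask(item)
        if decisions[item] != norm[item]:
            conforms = False     # deviation observed: keep asking everything
        if conforms and i + 1 >= probe_budget:
            for rest in items[i + 1:]:
                decisions[rest] = norm[rest]   # inferred, not asked
            return decisions
    return decisions             # sustained fallback: every item was asked

# A norm-conformant respondent answers only probe_budget questions:
norm = {f"item-{k}": k % 2 == 0 for k in range(20)}
answers = acquire_consent(list(norm), norm, ask=lambda item: norm[item])
```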
While the technique assessed in this user study is prototypical in its nature,
it highlights the potential value of examining alternative means of acquiring
consent, which has seen little innovation in both academic and commercial
domains. Moreover, while this technique is not universally applicable, this only
highlights that the diversity of perspectives, willingness to engage, and
ability to comprehend consent language requires a plurality of approaches.
§ DISCUSSION
In this chapter we have illustrated how data mining activities, in both
academic and commercial contexts, are often opaque by design. Insufficient
consent mechanisms can prevent people from understanding what they
are agreeing to, particularly where the scope of the data collected or with
whom it is shared is changed without consent being renegotiated. Indeed, as
in our three case studies, consent is often not sought at all.
We have considered the impacts of opaque data mining in terms of legibility,
agency, and negotiability. We now propose some best practices for
conducting data mining which aim to satisfy these three themes.
§.§ Legibility
In order to make data mining more acceptable, it is not sufficient to simply
make processes more transparent. Revealing the algorithms, signals, and
inferences
may satisfy a particularly technically competent audience, but for most people
does not help them understand what happens to their data, in order to make an
informed decision over whether to consent, or how they can act with any agency.
The incoming General Data Protection Regulation (GDPR) in the European Union
requires consent language to be concise, transparent, intelligible and easily
accessible <cit.>, which as indicated in the literature, is currently
not a universal practice. As highlighted in our three case studies, the
absence of any meaningful consent enabling data to be used beyond its original
context, such as a hospital or social network site, is unacceptable. Even
without adopting more sophisticated approaches to consent as discussed in
Section <ref>, techniques to notify and reacquire consent such that
people are aware and engaged with ongoing data mining practices can be
deployed. As discussed earlier, a practical first step is to ensure all
consent documents can be understood by a broad spectrum of the population.
§.§ Agency
Assuming that legibility has been satisfied, and people are able to understand
how their data are being used, the next challenge is to ensure people are
able to act autonomously and control how their data are used beyond a single
consent decision. Some ways of enabling this include ensuring people can
subsequently revoke their consent for their data to be used at any time, without
necessarily being precipitated by a change in how the data are used. In the
GDPR, this is enshrined through the right to be
forgotten <cit.> that
includes the cascading revocation of data between data controllers.
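One hypothetical way to picture what cascading revocation demands of data controllers is sketched below: each controller records the onward recipients of every datum so that a revocation can propagate downstream. The class and method names are invented, and a real deployment would also need authentication, auditability, and handling of cyclic sharing.

```python
from collections import defaultdict

class Controller:
    """Toy data controller that records where each datum was sent onward."""
    def __init__(self, name: str):
        self.name = name
        self.held = set()
        self.sent_to = defaultdict(set)   # datum -> downstream controllers

    def receive(self, datum: str):
        self.held.add(datum)

    def share(self, datum: str, other: "Controller"):
        other.receive(datum)
        self.sent_to[datum].add(other)

    def revoke(self, datum: str):
        self.held.discard(datum)
        for downstream in self.sent_to.pop(datum, ()):
            downstream.revoke(datum)      # cascade to every onward recipient

collector, broker, advertiser = (Controller(n) for n in
                                 ("collector", "broker", "advertiser"))
collector.receive("location-trace")
collector.share("location-trace", broker)
broker.share("location-trace", advertiser)
collector.revoke("location-trace")
print(broker.held, advertiser.held)       # set() set(): revocation cascaded
```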
Legibility can also enable agency by allowing people to act in a certain way in
order to selectively allow particular inferences to be made. By being able to
choose what they are willing to share with a data collector in order to satisfy
their own utility, some of the power balance can be restored, having previously
been tipped towards the data collector, who is able to conduct analyses at a scale beyond any
individual subject's capabilities.
§.§ Negotiability
As discussed in Section <ref>, Nissenbaum's contextual
integrity <cit.> can be used to detect privacy violations
when the terms of data-handling have changed in such a way that existing norms
are breached. The principle of negotiability is key to preventing this, by
allowing people to make ongoing decisions about how their data are used as
contexts evolve, whether their own, environmentally, or that of the data
collector.
Dynamic consent in the biobanks context <cit.>
could be adapted to
allow data subjects to be notified and review how their data are being used,
whether for new purposes or shared with new actors, allowing consent to be
renegotiated. Our consent method informed by contextual
integrity <cit.> is one such approach which aims to tackle this
problem, by allowing people to make granular consent decisions without being
overwhelmed. Adopting the principles of the GDPR, which emphasises dynamic
consent, can support negotiability, with
guidance made available for organisations wishing to apply these
principles <cit.>.
§ CONCLUSION
Data mining is an increasingly pervasive part of daily life, with the
large-scale collection, processing, and distribution of personal data being
used for myriad purposes. In this chapter, we have outlined how this often
happens without consent, or the consent instruments used are overly complex or
inappropriate. Data mining is outgrowing existing regulatory and ethical
governance structures, and risks violating entrenched norms about the
acceptable use of personal data, as illustrated in case studies spanning the
commercial and academic spheres. We argue that organisations involved in data
mining should provide legible consent information such that people can
understand what they are agreeing to, support people's agency by allowing them
to selectively consent to different processing activities, and to support
negotiability by allowing people to review or revoke their consent as the
context of the data mining changes. We have discussed recent work which
dynamically negotiates consent, including a technique which leverages social
norms to acquire granular consent without overburdening people. We call for
greater public debate to negotiate these new social norms collectively, rather
than allowing organisations to unilaterally impose new practices without
oversight.
§ ACKNOWLEDGEMENTS
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/L021285/1].
akkad:consent
Akkad, A., Jackson, C., Kenyon, S., Dixon-Woods, M., Taub, N., Habiba, M.:
Patients' perceptions of written consent: questionnaire study.
BMJ 333(7567), 528+ (2006).
10.1136/bmj.38922.516204.55
ayalon:retrospective
Ayalon, O., Toch, E.: Retrospective privacy: Managing longitudinal privacy in
online social networks.
In: Proceedings of the Ninth Symposium on Usable Privacy and
Security. ACM, New York, NY, USA (2013).
10.1145/2501604.2501608
barnes:privacy
Barnes, S.B.: A privacy paradox: Social networking in the United States.
First Monday 11(9) (2006).
10.5210/fm.v11i9.1394
bauer:temporal
Bauer, L., Cranor, L.F., Komanduri, S., Mazurek, M.L., Reiter, M.K., Sleeper,
M., Ur, B.: The post anachronism: The temporal dimension of Facebook
privacy.
In: Proceedings of the 12th ACM Workshop on Workshop on Privacy in
the Electronic Society, pp. 1–12. ACM, New York, NY, USA (2013).
10.1145/2517840.2517859
berg:consent
Berg, J.W., Appelbaum, P.S.: Informed Consent: Legal Theory and Clinical
Practice.
Oxford University Press (2001)
brown:consent
Brown, I., Brown, L., Korff, D.: Using NHS patient data for research without
consent.
Law, Innovation and Technology 2(2), 219–258 (2010).
10.5235/175799610794046186
carmichael:discrimination
Carmichael, L., Stalla-Bourdillon, S., Staab, S.: Data mining and automated
discrimination: A mixed legal/technical perspective.
IEEE Intelligent Systems 31(6), 51–55 (2016).
10.1109/mis.2016.96
donovan-kicken:uncertainty
Donovan-Kicken, E., Mackert, M., Guinn, T.D., Tollison, A.C., Breckinridge, B.:
Sources of patient uncertainty when reviewing medical disclosure and consent
documentation.
Patient Education and Counseling 90(2), 254–260 (2013).
10.1016/j.pec.2012.10.007
eslami:algorithms
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K.,
Hamilton, K., Sandvig, C.: "I always assumed that I wasn't really that
close to [her]": Reasoning about invisible algorithms in news feeds.
In: Proceedings of the 33rd Annual ACM Conference on Human Factors in
Computing Systems, CHI '15, pp. 153–162. ACM, New York, NY, USA (2015).
10.1145/2702123.2702556
eu:directive
European Parliament and the Council of the European Union: Directive
95/46/EC of the European Parliament and of the Council of 24 October 1995 on
the protection of individuals with regard to the processing of personal data
and on the free movement of such data.
Official Journal of the European Union L 281, 0031–0050
(1995)
eu:gdpr
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27
April 2016 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing
Directive 95/46/EC (General Data Protection Regulation).
Official Journal of the European Union L119/59 (2016)
fayyad:data-mining
Fayyad, U., Piatetsky-Shapiro, G., Smyth, P.: From data mining to knowledge
discovery in databases.
AI Magazine 17(3) (1996).
10.1609/aimag.v17i3.1230
friedman:informed
Friedman, B., Lin, P., Miller, J.K.: Informed consent by design.
In: L.F. Cranor, S. Garfinkel (eds.) Security and Usability,
chap. 24, pp. 495–521. O'Reilly Media (2005)
gomer:agents
Gomer, R., Schraefel, M.C., Gerding, E.: Consenting agents: Semi-autonomous
interactions for ubiquitous consent.
In: Proceedings of the 2014 ACM International Joint Conference on
Pervasive and Ubiquitous Computing: Adjunct Publication, UbiComp '14 Adjunct,
pp. 653–658. ACM, New York, NY, USA (2014).
10.1145/2638728.2641682
hamnes:readability
Hamnes, B., van Eijk-Hustings, Y., Primdahl, J.: Readability of patient
information and consent documents in rheumatological studies.
BMC Medical Ethics 17(1) (2016).
10.1186/s12910-016-0126-0
hastie:mining
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical
Learning: Data Mining, Inference, and Prediction, corrected edn.
Springer (2003)
heimbach:profile
Heimbach, I., Gottschlich, J., Hinz, O.: The value of user's Facebook profile
data for product recommendation generation.
Electronic Markets 25(2), 125–138 (2015).
10.1007/s12525-015-0187-9
hektner:esm
Hektner, J.M., Schmidt, J.A., Csikszentmihalyi, M.: Experience sampling
method: measuring the quality of everyday life.
SAGE Publications, Thousand Oaks, CA, USA (2007)
hill:facebook
Hill, K.: Facebook Added `Research' To User Agreement 4 Months After Emotion
Manipulation Study (2014).
<http://onforb.es/15DKfGt>.
Accessed 30 November 2016
hoadley:newsfeed
Hoadley, C.M., Xu, H., Lee, J.J., Rosson, M.B.: Privacy as information access
and illusory control: The case of the Facebook News Feed privacy outcry.
Electronic Commerce Research and Applications 9(1), 50–60
(2010).
10.1016/j.elerap.2009.05.001
hodson:approval
Hodson, H.: Did Google's NHS patient data deal need ethical approval? (2016).
<https://www.newscientist.com/article/2088056-did-googles-nhs-patient-data-deal-need-ethical-approval/>.
Accessed 30 November 2016
hodson:google
Hodson, H.: Google knows your ills.
New Scientist 230(3072), 22–23 (2016).
10.1016/s0262-4079(16)30809-0
hutton:consent
Hutton, L., Henderson, T.: “I didn't sign up for this!”: Informed consent
in social network research.
In: Proceedings of the 9th International AAAI Conference on Web and
Social Media (ICWSM), pp. 178–187 (2015).
<http://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/view/10493>
jackman:irb
Jackman, M., Kanerva, L.: Evolving the IRB: Building robust review for
industry research.
Washington and Lee Law Review Online 72(3), 442–457 (2016).
<http://scholarlycommons.law.wlu.edu/wlulr-online/vol72/iss3/8/>
kang:self-surveillance
Kang, J., Shilton, K., Estrin, D., Burke, J., Hansen, M.: Self-surveillance
privacy.
Iowa Law Review 97(3), 809–848 (2012).
10.2139/ssrn.1729332
kaye:dynamic
Kaye, J., Whitley, E.A., Lund, D., Morrison, M., Teare, H., Melham, K.:
Dynamic consent: a patient interface for twenty-first century research
networks.
European Journal of Human Genetics 23(2), 141–146 (2014).
10.1038/ejhg.2014.71
kramer:contagion
Kramer, A.D.I., Guillory, J.E., Hancock, J.T.: Experimental evidence of
massive-scale emotional contagion through social networks.
Proceedings of the National Academy of Sciences 111(24),
8788–8790 (2014).
10.1073/pnas.1320040111
lewis:facebook
Lewis, K., Kaufman, J., Gonzalez, M., Wimmer, A., Christakis, N.: Tastes, ties,
and time: A new social network dataset using Facebook.com.
Social Networks 30(4), 330–342 (2008).
10.1016/j.socnet.2008.07.002
luger:complexity
Luger, E., Moran, S., Rodden, T.: Consent for all: revealing the hidden
complexity of terms and conditions.
In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems, CHI '13, pp. 2687–2696. ACM, New York, NY, USA (2013).
10.1145/2470654.2481371
luger:informed
Luger, E., Rodden, T.: An Informed View on Consent for UbiComp.
In: Proceedings of the 2013 ACM International Joint Conference on
Pervasive and Ubiquitous Computing, UbiComp '13, pp. 529–538. ACM, New York,
NY, USA (2013).
10.1145/2493432.2493446
mcdonald:policies
McDonald, A.M., Cranor, L.F.: The cost of reading privacy policies.
I/S: A Journal of Law and Policy for the Information Society
4(3), 540–565 (2008).
<http://www.is-journal.org/files/2012/02/Cranor_Formatted_Final.pdf>
miller:consent-transactions
Miller, F.G., Wertheimer, A.: Preface to a theory of consent transactions:
Beyond valid consent.
In: F. Miller, A. Wertheimer (eds.) The Ethics of Consent, chap. 4,
pp. 79–105. Oxford University Press, Oxford, UK (2009).
10.1093/acprof:oso/9780195335149.003.0004
moran:agent
Moran, S., Luger, E., Rodden, T.: Exploring Patterns as a Framework for
Embedding Consent Mechanisms in Human-Agent Collectives.
In: D. Ślȩzak, G. Schaefer, S. Vuong, Y.S. Kim (eds.) Active
Media Technology, Lecture Notes in Computer Science, vol. 8610, pp.
475–486. Springer International Publishing (2014).
10.1007/978-3-319-09912-5_40
morrison:personalised-representations
Morrison, A., McMillan, D., Chalmers, M.: Improving consent in large scale
mobile HCI through personalised representations of data.
In: Proceedings of the 8th Nordic Conference on Human-Computer
Interaction: Fun, Fast, Foundational, NordiCHI '14, pp. 471–480. ACM, New
York, NY, USA (2014).
10.1145/2639189.2639239
mortier:hdi-encyclopedia
Mortier, R., Haddadi, H., Henderson, T., McAuley, D., Crowcroft, J., Crabtree,
A.: Human-data interaction.
In: M. Soegaard, R.F. Dam (eds.) Encyclopedia of Human-Computer
Interaction, chap. 41. Interaction Design Foundation, Aarhus, Denmark (2016).
<https://www.interaction-design.org/literature/book/the-encyclopedia-of-human-computer-interaction-2nd-ed/human-data-interaction>
munteanu:situational
Munteanu, C., Molyneaux, H., Moncur, W., Romero, M., O'Donnell, S., Vines, J.:
Situational ethics: Re-thinking approaches to formal ethics requirements for
human-computer interaction.
In: Proceedings of the 33rd Annual ACM Conference on Human Factors in
Computing Systems, CHI '15, pp. 105–114. ACM, New York, NY, USA (2015).
10.1145/2702123.2702481
napoli:governance
Napoli, P.M.: Social media and the public interest: Governance of news
platforms in the realm of individual and algorithmic gatekeepers.
Telecommunications Policy 39(9), 751–760 (2015).
10.1016/j.telpol.2014.12.003
narayanan:deanonymizing
Narayanan, A., Shmatikov, V.: De-anonymizing social networks.
In: Proceedings of the IEEE Symposium on Security and Privacy, pp.
173–187. IEEE, Los Alamitos, CA, USA (2009).
10.1109/sp.2009.22
nissenbaum:context
Nissenbaum, H.: Privacy in Context: Technology, Policy, and the Integrity of
Social Life.
Stanford Law Books, Stanford, CA, USA (2009)
obar:lie
Obar, J.A., Oeldorf-Hirsch, A.: The Biggest Lie on the Internet: Ignoring the
Privacy Policies and Terms of Service Policies of Social Networking
Services.
Social Science Research Network Working Paper Series (2016).
10.2139/ssrn.2757465
patrick:agreements
Patrick, A.: Just-in-time click-through agreements: Interface widgets for
confirming informed, unambiguous consent.
Journal of Internet Law 9(3), 17–19 (2005).
<http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=rtdoc an=8914195 lang=en>
ftc:fip
Pitofsky, R., Anthony, S.F., Thompson, M.W., Swindle, O., Leary, T.B.: Privacy
online: Fair information practices in the electronic marketplace: A report to
congress.
Security (2000).
<http://www.ftc.gov/reports/privacy2000/privacy2000.pdf>
recuber:milgram
Recuber, T.: From obedience to contagion: Discourses of power in Milgram,
Zimbardo, and the Facebook experiment.
Research Ethics 12(1), 44–54 (2016).
10.1177/1747016115579533
sankar:confidentiality
Sankar, P., Mora, S., Merz, J.F., Jones, N.L.: Patient Perspectives of Medical
Confidentiality.
Journal of General Internal Medicine 18(8), 659–669 (2003).
10.1046/j.1525-1497.2003.20823.x
selinger:co-opted
Selinger, E., Hartzog, W.: Facebook's emotional contagion study and the ethical
problem of co-opted identity in mediated environments where users lack
control.
Research Ethics 12(1), 35–43 (2016).
10.1177/1747016115579531
sleeper:censor
Sleeper, M., Balebako, R., Das, S., McConahy, A.L., Wiese, J., Cranor, L.F.:
The Post That Wasn't: Exploring Self-censorship on Facebook.
In: Proceedings of the 2013 Conference on Computer Supported
Cooperative Work, CSCW 2013, pp. 793–802. ACM, New York, NY, USA (2013).
10.1145/2441776.2441865
solove:self-management
Solove, D.J.: Privacy self-management and the consent dilemma.
Harvard Law Review 126(7), 1880–1903 (2013).
<http://heinonline.org/HOL/Page?handle=hein.journals/hlr126&id=1910&collection=journals>
staiano:money
Staiano, J., Oliver, N., Lepri, B., de Oliveira, R., Caraviello, M., Sebe, N.:
Money walks: A human-centric study on the economics of personal mobile data.
In: Proceedings of Ubicomp 2014 (2014).
10.1145/2632048.2632074
steinke:us-eu
Steinke, G.: Data privacy approaches from US and EU perspectives.
Telematics and Informatics 19(2), 193–200 (2002).
10.1016/s0736-5853(01)00013-2
steinsbekk:dynamic
Steinsbekk, K.S., Kåre Myskja, B., Solberg, B.: Broad consent versus dynamic
consent in biobank research: Is passive participation an ethical problem?
European Journal of Human Genetics 21(9), 897–902 (2013).
10.1038/ejhg.2012.282
tankard:gdpr
Tankard, C.: What the GDPR means for businesses.
Network Security 2016(6), 5–8 (2016).
10.1016/s1353-4858(16)30056-3
vitak:belmont
Vitak, J., Shilton, K., Ashktorab, Z.: Beyond the Belmont Principles: Ethical
challenges, practices, and beliefs in the online data research community.
In: Proceedings of the 19th ACM Conference on Computer-Supported
Cooperative Work & Social Computing, CSCW '16, pp. 941–953. ACM, New York,
NY, USA (2016).
10.1145/2818048.2820078
vucemilo:readability
Vučemilo, L., Borovečki, A.: Readability and content assessment of
informed consent forms for medical procedures in Croatia.
PLoS ONE 10(9), e0138017+ (2015).
10.1371/journal.pone.0138017
williams:consent
Williams, H., Spencer, K., Sanders, C., Lund, D., Whitley, E.A., Kaye, J.,
Dixon, W.G.: Dynamic consent: A possible solution to improve patient
confidence and trust in how electronic patient records are used in medical
research.
JMIR Medical Informatics 3(1), e3+ (2015).
10.2196/medinform.3525
wef:asset
World Economic Forum: Personal data: The emergence of a new asset class
(2011).
<http://www.weforum.org/reports/personal-data-emergence-new-asset-class>
zimmer:public
Zimmer, M.: “But the data is already public”: on the ethics of research in
Facebook.
Ethics and Information Technology 12(4), 313–325 (2010).
10.1007/s10676-010-9227-5
| Companies are able to collect, process, and distribute large
quantities of personal data, and to further analyse, mine, and generate
new data based on inferences from those data, often without
the explicit knowledge or consent of the individuals to whom the data
pertain. Consent instruments such as privacy notices or End User
License Agreements (EULAs) are widely deployed, often presenting
individuals with thousands of words of legal jargon that they may not
read nor comprehend, before soliciting agreement in order to make use
of a service. Indeed, even if an individual does have a reasonable
understanding of the terms to which they have agreed, such terms are
often carefully designed to extend as much flexibility to the data
collector as possible to obtain even more data, distribute them to
more stakeholders, and make inferences by linking data from multiple
sources, despite no obvious agreement to these new practices.
The lack of transparency behind data collection and mining practices
threatens the agency and privacy of data subjects, with no practical way
to control these invisible data flows, nor correct misinformation or
inaccurate and inappropriate inferences derived from linked data.
Existing data protection regimes are often insufficient as they
are predicated on the assumption that an individual is able to detect
when a data protection violation has occurred in order to demand
recourse, which is rarely the case when data are opaquely mined at
scale.
These challenges are not unique to commercial activities, however.
Academic researchers often make use of datasets containing
personal information, such as those collected from social network
sites or devices such as mobile phones or fitness trackers. Most
researchers are bound by an obligation to seek ethical approval from
an institutional review board (IRB) before conducting their research.
The ethical protocols used, however, are inherited from post-war
concerns regarding biomedical experiments, and may not be appropriate
for Internet-mediated research, where millions of data points can be
collected without any personal interventions. This raises the
question of whether existing consent instruments are sufficient,
logistically feasible, or even necessary, for research of this nature.
In this chapter we first review the data collection and mining
landscape, including commercial and academic activities, and the
relevant data protection laws, to determine the types of consent
instruments used. Employing the newly-proposed paradigm of Human-Data
Interaction, we examine three case studies to determine whether these
mechanisms are sufficient to uphold the expectations of individuals,
to provide them with sufficient agency, legibility and negotiability,
and whether privacy norms are violated by secondary uses of data which
are not explicitly sanctioned by individuals. We then discuss various
new dynamic and contextual approaches to consent, which have been
empirically demonstrated to improve on the state of the art and
deliver meaningful consent. Finally, we propose some best practices
that data collectors can adopt to ensure their data mining activities
do not violate the expectations of the people to whom the data relate. | Data mining is the statistical analysis of large-scale datasets to
extract additional patterns and trends <cit.>. This has
allowed commercial, state, and academic actors to answer questions
which have not previously been possible, due to insufficient data,
analytical techniques, or computational power. Data mining is often
characterised by the use of aggregate data to identify traits and
trends which allow the identification and characterisation of clusters
of people rather than individuals, associations between events, and
forecasting of future events. As such, it has been used in a number of
real-world scenarios such as optimising the layout of retail stores,
attempts to identify disease trends, and mass surveillance. Many
classical data mining and knowledge discovery applications involve
businesses or marketing <cit.>, such as clustering
consumers into groups and attempting to predict their behaviour. This
may allow a business to understand their customers and target
promotions appropriately. Such profiling can, however, be used to
characterise individuals for the purpose of denying service when
extending credit, leasing a property, or acquiring insurance. In such
cases, the collection and processing of sensitive data can be
invasive, with significant implications for the individual,
particularly where decisions are made on the basis of inferences that
may not be accurate, and to which the individual is given no right of
reply. This has become more important of late, as more recent data
mining applications involve the analysis of personal data, much of
which is collected by individuals and contributed to marketers in what
has been termed “self-surveillance” <cit.>.
Such personal data have been demonstrated to be highly
valuable <cit.>, and have even been described as the new
“oil” in terms of the value of their resource <cit.>.
Value aside, such data introduce new challenges for consent as they
can often be combined to create new inferences and profiles where
previously data would have been absent <cit.>.
Data mining activities are legitimised through a combination of legal and
self-regulatory behaviours. In the European Union, the Data Protection
Directive <cit.>, and the forthcoming General Data Protection
Regulation (GDPR) that will succeed it in 2018 <cit.> govern how data
mining can be conducted legitimately. The e-Privacy Directive also
further regulates some specific aspects of data mining such as cookies
(Table <ref>). In the United States, a self-regulatory
approach is generally preferred, with the Federal Trade Commission offering
guidance regarding privacy protections <cit.>, consisting of six core
principles, but lacking the coverage or legal backing of the EU's approach.
Under the GDPR, the processing of personal data for any purpose, including
data mining, is subject to explicit opt-in consent from an individual, prior
to which the data controller must explicitly state what data are collected,
the purpose of processing them, and the identity of any other recipients of
the data. Although there are a number of exceptions, consent must generally be
sought for individual processing activities, and cannot be broadly acquired a priori for undefined future
uses, and there are particular issues with data mining, transparency
and accountability <cit.>. Solove <cit.> acknowledges these
regulatory challenges, arguing that paternalistic approaches are not
appropriate, as these deny people the freedom to consent to
particular beneficial uses of their data. The timing of consent
requests and the focus of these requests need to be managed
carefully; such thinking has also become apparent in the
GDPR.[e.g., Article 7(3) which allows consent to be withdrawn,
and Article 17 on the “right to be forgotten” which allows
inferences and data to be erased.] The call for
dynamic consent is consistent with Nissenbaum's model of contextual
integrity <cit.>,
which posits that all information exchanges are subject to context-specific
norms,
which governs to whom and for what purpose information sharing can be
considered appropriate. When the context is disrupted, perhaps by changing
with whom data are shared, or for what purpose, privacy violations can occur
when this is not consistent with the norms of the existing context. Therefore,
consent can help to uphold contextual integrity by ensuring that if the context
is perturbed, consent is renegotiated, rather than assumed.
Reasoning about how personal data are used has resulted in a new
paradigm, human-data interaction, which places humans at the
centre of data flows and provides a framework for studying personal
data collection and use according to three
themes <cit.>:
* Legibility: Often, data owners are not aware that data mining is even taking place. Even if they are, they may not know what is being collected or analysed, the purpose of the analysis, or the insights derived from it.
* Agency: The opaque nature of data mining often denies data owners agency. Without any engagement in the practice, people have no ability to provide meaningful consent, if they are asked to give consent at all, nor correct flawed data or review inferences made based on their data.
* Negotiability: The context in which data are collected and processed can often change, whether through an evolving legislative landscape, data being traded between organisations, or through companies unilaterally changing their privacy policies or practices. Analysis can be based on the linking of datasets derived
from different stakeholders, allowing insights that no single provider could
make. This is routinely the case in profiling activities such as credit
scoring. Even where individuals attempt to obfuscate their data to subvert this practice, it is often possible to re-identify them from such linked data <cit.>. Data owners should have the ability to review how their data are used as circumstances change in order to uphold contextual integrity.
Early data protection regulation in the 1980s addressed the increase in
electronic data storage and strengthened protections against unsolicited
direct marketing <cit.>. Mail order companies were able to
develop large databases of customer details to enable direct marketing, or the
trading of such information between companies. When acquiring consent for the
processing of such information became mandatory, such as under the 1984 Data
Protection Act in the UK, this generally took the form of a checkbox on paper
forms, where a potential customer could indicate their willingness for
secondary processing of their data. As technology has evolved away from mail-in
forms being the primary means of acquiring personal information, and the
scope and intent of data protection moves from regulating direct marketing to
a vast range of data-processing activities, there has been little regulatory
attention paid to how consent is acquired. As such, consent is often acquired
by asking a user to tick a checkbox to opt-in or out of secondary use of their
data. This practice is well-entrenched, where people are routinely asked to
agree to an End-User Licence Agreement (EULA) before accessing software, and
multiple terms of service and privacy policies before accessing online
services, generally consisting of a long legal agreement and an “I Agree”
button.
A significant body of research concludes that such approaches to acquiring
consent are flawed. Luger et al. find that the terms and conditions provided
by major energy companies are not sufficiently readable, excluding many from
being able to make informed decisions about whether they agree to such
terms <cit.>. Indeed, Obar and Oeldorf-Hirsch find that the
vast majority of people do not even read such documents <cit.>, with
all participants in a user study accepting terms including handing over their
first-born child to use a social network site. McDonald and Cranor measure the
economic cost of reading lengthy policies <cit.>, noting the
inequity of expecting people to spend an average of ten minutes of their time
reading and comprehending a complex document in order to use a service.
Friedman et al. caution that simply including more information and more
frequent consent interventions can be counter-productive, by frustrating
people and leading them to making more complacent consent
decisions <cit.>.
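The arithmetic behind such cost estimates is straightforward to reproduce in outline. The sketch below is a back-of-the-envelope illustration only; the reading speed, policy length, and number of services are assumed values chosen for the example, not the figures measured in the studies cited above.

# Back-of-the-envelope cost of reading privacy policies.
# All constants are illustrative assumptions, not measured figures.
WORDS_PER_MINUTE = 250       # assumed adult reading speed
POLICY_WORDS = 2500          # assumed length of a typical policy
POLICIES_PER_YEAR = 60       # assumed number of services encountered per year

minutes_per_policy = POLICY_WORDS / WORDS_PER_MINUTE
hours_per_year = POLICIES_PER_YEAR * minutes_per_policy / 60
print(f"{minutes_per_policy:.0f} minutes per policy, "
      f"about {hours_per_year:.0f} hours per year in total")

Even with conservative assumptions, the aggregate time cost quickly becomes inequitable, which is the core of the critique.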
Academic data mining is subject to a different regulatory regime, with fewer
constraints over the secondary use of data from a data protection perspective.
This is balanced by an ethical review regime, rooted in post-war concern over
a lack of ethical rigour in biomedical research. In the US, ethical review for
human subjects research via an institutional review board (IRB) is necessary
to receive federal funding, and the situation is similar in many other
countries. One of the central tenets of ethical human research is to acquire
informed consent before a study begins <cit.>. As such,
institutions have developed largely standardised consent
instruments <cit.> which researchers can use to meet these
requirements. While in traditional lab-based studies, these consent procedures
can be accompanied by an explanation of the study from a researcher, or the
opportunity for a participant to ask any questions, this affordance is
generally not available in online contexts, effectively regressing to the
flawed EULAs discussed earlier.
Some of these weaknesses have been examined in the
literature. Hamnes et al. find that consent documents in rheumatological studies
are not sufficiently readable for the majority of the
population <cit.>, a finding which is supported by Vučemilo and Borovečki who also find that medical consent forms often
exclude important information <cit.>. Donovan-Kicken et al.
examine the sources of confusion when reviewing such
documents <cit.>, which include
insufficient discussion of risk and lengthy or overly complex language. Munteanu et al. examine the ethics approval process in a number of HCI
research case studies, finding that participants often agreed to
consent instruments they have not read or understood, and the rigidity of such
processes can often be at odds with such studies where a “situational
interpretation” of an agreed protocol is needed <cit.>.
There is also a lack of agreement among researchers about how to conduct such research
ethically, with Vitak et al. finding particular variability regarding
whether data should be collected at large scale without consent, or if acquiring
consent in such cases is even possible <cit.>.
Existing means of acquiring consent are inherited from a time when the scope of
data collection and processing was perhaps constrained and could be well
understood. Now, even when the terms of data collection and processing are
understood as written, whether registering for an online service, or
participating in academic research, it is not clear that the form of gaining
the consent was meaningful, or sufficient. Someone may provide consent to
secondary use of their data, without knowing what data this constitutes, who
will be able to acquire it, for what purpose, or when. This is already a
concern when considering the redistribution and processing of self-disclosed
personally identifiable information, but becomes increasingly complex when
extended to historical location data, shopping behaviours, or social network
data, much of which are not directly provided by the individual, and
are nebulous
in scale and content. Moreover, concerns may change over time (the
so-called “privacy paradox” <cit.> that has been
demonstrated empirically <cit.>),
which may require changes to previously-granted consent.
Returning to our three themes of legibility, agency, and
negotiability, we can see that:
* Existing EULAs and consent forms may not meet a basic
standard of legibility, alienating
significant areas of the population from understanding what they are being
asked to agree to. Furthermore, the specific secondary uses of their data are
often not explained.
* EULAs and consent forms are often only used to secure permission once, then
often never again, denying people agency to revoke their
consent when a material change in how their data are used arises.
* Individuals have no power to meaningfully negotiate how their
data are
used, nor to intelligently adopt privacy-preserving behaviours, as they
generally do not know which data attributed to them is potentially risky. | null | null | In this chapter we have illustrated how data mining activities, in both
academic and commercial contexts, are often opaque by design. Insufficient
consent mechanisms can prevent people from understanding what they
are agreeing to, particularly where the scope of the data collected or with
whom it is shared is changed without consent being renegotiated. Indeed, as
in our three case studies, consent is often not sought at all.
We have considered the impacts of opaque data mining in terms of legibility,
agency, and negotiability. We now propose some best practices for
conducting data mining which aim to satisfy these three themes.
§.§ Legibility
In order to make data mining more acceptable, it is not sufficient to simply
make processes more transparent. Revealing the algorithms, signals, and
inferences
may satisfy a particularly technically competent audience, but for most people
does not help them understand what happens to their data, in order to make an
informed decision over whether to consent, or how they can act with any agency.
The incoming General Data Protection Regulation (GDPR) in the European Union
requires consent language to be concise, transparent, intelligible and easily
accessible <cit.>, which as indicated in the literature, is currently
not a universal practice. As highlighted in our three case studies, the
absence of any meaningful consent enabling data to be used beyond its original
context, such as a hospital or social network site, is unacceptable. Even
without adopting more sophisticated approaches to consent as discussed in
Section <ref>, techniques to notify and reacquire consent such that
people are aware and engaged with ongoing data mining practices can be
deployed. As discussed earlier, a practical first step is to ensure all
consent documents can be understood by a broad spectrum of the population.
§.§ Agency
Assuming that legibility has been satisfied, and people are able to understand
how their data are being used, the next challenge is to ensure people are
able to act autonomously and control how their data are used beyond a single
consent decision. Some ways of enabling this include ensuring people can
subsequently revoke their consent for their data to be used at any time, without
necessarily being precipitated by a change in how the data are used. In the
GDPR, this is enshrined through the right to be
forgotten <cit.> that
includes the cascading revocation of data between data controllers.
Legibility can also enable agency by allowing people to act in a certain way in
order to selectively allow particular inferences to be made. By being able to
choose what they are willing to share with a data collector in order to satisfy
their own utility, some of the power balance can be restored, which has been
previously tipped
towards the data collector who is able to conduct analyses at a scale beyond any
individual subject's capabilities.
§.§ Negotiability
As discussed in Section <ref>, Nissenbaum's contextual
integrity <cit.> can be used to detect privacy violations
when the terms of data-handling have changed in such a way that existing norms
are breached. The principle of negotiability is key to preventing this, by
allowing people to make ongoing decisions about how their data are used as
contexts evolve, whether their own, environmentally, or that of the data
collector.
Dynamic consent in the biobanks context <cit.>
could be adapted to
allow data subjects to be notified and review how their data are being used,
whether for new purposes or shared with new actors, allowing consent to be
renegotiated. Our consent method informed by contextual
integrity <cit.> is one such approach which aims to tackle this
problem, by allowing people to make granular consent decisions without being
overwhelmed. Adopting the principles of the GDPR, which emphasises dynamic
consent, can support negotiability, with
guidance made available for organisations wishing to apply these
principles <cit.>. | Data mining is an increasingly pervasive part of daily life, with the
large-scale collection, processing, and distribution of personal data being
used for myriad purposes. In this chapter, we have outlined how this often
happens without consent, or the consent instruments used are overly complex or
inappropriate. Data mining is outgrowing existing regulatory and ethical
governance structures, and risks violating entrenched norms about the
acceptable use of personal data, as illustrated in case studies spanning the
commercial and academic spheres. We argue that organisations involved in data
mining should provide legible consent information such that people can
understand what they are agreeing to, support people's agency by allowing them
to selectively consent to different processing activities, and to support
negotiability by allowing people to review or revoke their consent as the
context of the data mining changes. We have discussed recent work which
dynamically negotiates consent, including a technique which leverages social
norms to acquire granular consent without overburdening people. We call for
greater public debate to negotiate these new social norms collectively, rather
than allowing organisations to unilaterally impose new practices without
oversight. |
http://arxiv.org/abs/1701.07451v1 | 20170125190732 | Stability interchanges in a curved Sitnikov problem | [
"Luis Franco-Pérez",
"Marian Gidea",
"Mark Levi",
"Ernesto Pérez-Chavela"
] | math.DS | [
"math.DS",
"math-ph",
"math.MP",
"nlin.CD"
] |
L.Franco-Pérez]Luis Franco-Pérez
[email protected]
M. Gidea]Marian Gidea
[email protected]
M. Levi]Mark Levi
[email protected]
E. Pérez-Chavela]Ernesto Pérez-Chavela
[email protected]
[UAM]Departamento de Matemáticas Aplicadas y Sistemas, UAM-Cuajimalpa, Av. Vasco de Quiroga 4871, México, D.F. 05348, México.
[YU]Department of Mathematical Sciences, Yeshiva University, 245 Lexington Ave, New York, NY 10016, USA.
[PSU]Mathematics Department, Penn State University, University Park, PA 16802, USA.
[ITAM]Departamento de Matemáticas, ITAM México, Río Hondo 1, Col. Progreso Tizapán, México D.F. 01080 .
We consider a curved Sitnikov problem, in which an infinitesimal particle moves on a circle under the gravitational influence of two equal masses in Keplerian motion within a plane perpendicular to that circle. There are two equilibrium points, whose stability we study. We show that one of the equilibrium points undergoes stability interchanges as the semi-major axis of the Keplerian ellipses approaches the diameter of that circle. To derive this result, we first formulate and prove a general theorem on stability interchanges, and then we apply it to our model. The motivation for our model resides with the n-body problem in spaces of constant curvature.
§ INTRODUCTION
§.§ A curved Sitnikov problem
We consider the following curved Sitnikov problem: Two bodies of equal masses (primaries) move, under mutual gravity, on Keplerian ellipses about their center of mass. A third, massless particle is confined to a circle passing through the center of mass of the primaries, denoted by P_0, and perpendicular to the plane of motion of the primaries; the second intersection point of the circle with that plane is denoted by P_1. We assume that the massless particle moves under the gravitational influence of the primaries without affecting them. The dynamics of the massless particle has two equilibrium points, at P_0 and P_1. We focus on the local dynamics near these two points, more precisely, on the dependence of the linear stability of these points on the parameters of the problem.
When the Keplerian ellipses are neither too large nor too small, P_0 is a local center and P_1 is a hyperbolic fixed point. When we increase the size of the Keplerian ellipses, so that the distance between P_1 and the closest ellipse approaches zero, P_1 undergoes stability interchanges.
That is, there exists a sequence of open, mutually disjoint intervals of values of the semi-major axis of the Keplerian ellipses such that, on each of these intervals, P_1 is linearly strongly stable, while each complementary interval contains values where it is not strongly stable, i.e., where it is either hyperbolic or parabolic.
The lengths of these intervals approach zero as the semi-major axis of the Keplerian ellipses approaches the diameter of the circle on which the massless particle moves. This phenomenon is the main focus of the paper.
It is stated in <cit.> and suggested by numerical evidence <cit.> that the
linearized stability of the point P_0 also undergoes stability interchanges when the size of the binary is kept fixed and the eccentricity of the Keplerian ellipses approaches 1.
Stability interchanges of the type described above are ubiquitous in systems of varying parameters; they appear, for example, in the classical Hill's equation and in the Mathieu equation <cit.>. To prove the occurrence of this phenomenon in our curved Sitnikov problem, we first formulate a general result on stability interchanges for a general class of simple mechanical systems. More precisely, we consider the motion of two bodies (one massive and one massless) which are confined to a pair of curves and move under Newtonian gravity. We let the distance between the two curves be controlled by some parameter λ. We assume that the position of the infinitesimal particle that achieves the minimum distance between the curves is an equilibrium point. We show that when the minimum distance between the two curves approaches zero, corresponding to λ→ 0, there exists a sequence of mutually disjoint open intervals (λ_2n-1,λ_2n), whose lengths approach zero as λ→ 0, such that whenever λ∈ (λ_2n-1,λ_2n) the equilibrium point is linearly strongly stable, and each complementary interval contains values of λ where it is not strongly stable.
From this result we derive the above mentioned stability interchange result for the curved Sitnikov problem.
The curved Sitnikov problem considered in this paper is an extension of the classical Sitnikov problem described in Section <ref> (also, see e.g., <cit.>). When the radius of the circle approaches infinity, in the limit we obtain the classical Sitnikov problem — the infinitesimal mass moves along the line perpendicular to the plane of the primaries and passing through the center of mass. The equilibrium point P_1 becomes the point at infinity and is of a degenerate hyperbolic type. Thus, stability interchanges of P_1 represent a new phenomenon that we encounter in the curved Sitnikov problem but not in the classical one.
Also, it is well known that for ε=0 the classical Sitnikov problem is integrable; for the curved one, numerical evidence suggests that it is not (see Figure <ref>).
The motivation for considering the curved Sitnikov problem resides in the n-body problem in spaces with constant curvature, and with models of planetary motions in binary star systems, as discussed in Section <ref>.
§.§ Classical Sitnikov problem
We recall here the classical Sitnikov problem. Two bodies (primaries) of equal masses m_1=m_2=1 move in a plane on Keplerian ellipses of eccentricity about their center of mass, and a third, massless particle moves on a line perpendicular to the plane of the primaries and passing through their center of mass. By choosing the plane of the primaries the xy-plane and the line on which the massless particle moves the z-axis, the equations of motion of the massless particle can be written, in appropriate units, as
z̈=-2z/(z^2+r^2(t))^3/2,
where r(t) is the distance from the primaries to their center of mass given by
r(t)=1-εcos u(t),
where u(t) is the eccentric anomaly in the Kepler problem. By normalizing the time we can assume that the period of the primaries is 2π, and
r(t)=(1-εcos t)+O(ε^2),
for small ε.
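For reference, equation (<ref>) is straightforward to integrate numerically. A minimal sketch in Python, using the first-order r(t) above (the initial data and tolerances are illustrative choices, not values from the paper):

import numpy as np
from scipy.integrate import solve_ivp

def classical_sitnikov(t, y, eps):
    # z'' = -2 z/(z^2 + r(t)^2)^(3/2), with r(t) = 1 - eps*cos(t) + O(eps^2)
    z, v = y
    rt = 1.0 - eps*np.cos(t)            # first-order r(t); the exact r uses u(t)
    return [v, -2.0*z/(z*z + rt*rt)**1.5]

sol = solve_ivp(classical_sitnikov, (0.0, 100.0*np.pi), [0.5, 0.0],
                args=(0.05,), rtol=1e-10, atol=1e-12)
print(sol.y[0].min(), sol.y[0].max())   # stays bounded for these initial data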
When ε=0, i.e., when the primaries move on a circular orbit, the dynamics of the massless particle is described by a 1-degree-of-freedom Hamiltonian and so is integrable. Depending on the energy level, one has the following types of solutions: an equilibrium solution, when the particle rests at the center of mass of the primaries; periodic solutions around the center of mass; escape orbits, either parabolic, that reach infinity with zero velocity, or hyperbolic, that reach infinity with positive velocity.
When ε∈(0,1), the differential equation (<ref>) is non-autonomous and the system is non-integrable. Consider the case ε≪ 1.
The system also has bounded and unbounded orbits, as well as unbounded oscillatory orbits and capture orbits (oscillatory orbits are those for which lim sup_t→±∞|z(t)|=+∞ and lim inf_t→±∞|z(t)|<+∞, and capture orbits are those for which lim sup_t→ -∞|z(t)|=+∞ and
lim sup_t→ +∞|z(t)|<+∞). In his famous paper about the final evolutions in the three body problem, Chazy introduced the term oscillatory motions <cit.>, although he did not find examples of these, leaving the question of their existence open. Sitnikov's model yielded the first example of oscillatory motions <cit.>.
There are many relevant works on this problem, including <cit.>.
The curved Sitnikov problem introduced in Section <ref> is a modification of the classical problem when the massless particle moves on a circle rather than a line. Here we regard the circle as a very simple restricted model of a space with constant curvature. In Subsection <ref>
we introduce and summarize some aspects of this problem.
§.§ The n-body problem in spaces with
constant curvature
The n-body problem on spaces with
constant curvature is a natural extension of the n-body problem in the Euclidean space; in either case the gravitational law considered is Newtonian. The extension was first proposed independently by the founders of hyperbolic geometry, Nikolai Lobachevsky and János Bolyai. It was subsequently studied in the late 19th, early 20th century, by Serret, Killing, Lipschitz, Liebmann, Schering, etc. Schrödinger developed a quantum mechanical analogue of the Kepler problem on the two-sphere in 1940. The interest in the problem was revived by Kozlov, Harin, Borisov, Mamaev, Kilin, Shchepetilov, Vozmischeva, and others, in the 1990's. A more recent surge of interest was stimulated by the works on relative equilibria in spaces with constant curvature (both positive and negative) by Diacu, Pérez-Chavela, Santoprete, and others, starting in the 2010's. See <cit.> for a history of the problem and a comprehensive list of references.
A distinctive aspect of the n-body problem on curved spaces is that the lack of (Galilean) translational invariance results in the lack of center-of-mass and linear-momentum integrals.
Hence, the study of the motion cannot be reduced to a barycentric coordinate system.
As a consequence, the two-body problem on a sphere can no longer be reduced to the
corresponding problem of motion in a central potential field, as is the case for the Kepler problem in the Euclidean space. As it turns out, the two-body problem on the sphere is not integrable <cit.>.
Studying the three-body problem on spaces with curvature is also challenging.
Perhaps the simplest model is the restricted three-body problem on a circle. This was studied in <cit.>. First, they consider the motion of the two primaries on the circle, which is integrable, collisions can be regularized, and all orbits can be classified into three different classes (elliptic, hyperbolic, parabolic). Then they consider the motion of the massless particle under the gravity of the primaries, when one or both primaries are at a fixed position. They obtain once again a complete classification of all orbits of the massless particle.
In this paper we take the ideas from above one step further, by considering the curved Sitnikov problem, with the massless particle moving on a circle under the gravitational influence of two primaries that move on Keplerian ellipses in a plane perpendicular to that circle. In the limit case, when the primaries are identified with one point, that is when the primaries coalesce into a single body, the Keplerian ellipses degenerate to a point, and the limit problem coincides with the two-body problem on a circle described above.
While the motivation of this work is theoretical, there are possible connections with the dynamics of planets in binary star systems. About 20 planets outside of the Solar System have been confirmed to orbit about binary star systems; since more than half of the main sequence stars have at least one stellar companion, it is expected that a substantial fraction of planets form in binary star systems. The orbital dynamics of such planets can vary widely, with some planets orbiting one star and some others orbiting both stars. Some chaotic-like planetary orbits have also been observed, e.g. planet Kepler-413b orbiting Kepler-413 A and Kepler-413 B in the constellation Cygnus, which displays erratic precession. This planet's orbit is tilted away from the plane of the binary and deviates from Kepler's laws. It is hypothesized that this tilt may be due to the gravitational influence of a third star nearby <cit.>. Of related interest is the relativistic version of the Sitnikov problem <cit.>.
Thus, mathematical models like the one considered in this paper could be helpful to understand possible types of planetary orbits in binary star systems.
To complete this introduction, the paper is organized as follows: In Section <ref> we go deeper in the description of the curved Sitnikov problem, studying the limit cases and its general properties. In Section <ref> we present a general result on stability interchanges. In Section <ref> we show that the equilibrium points in the curved Sitnikov problem present stability interchanges. Finally, in order to have a self contained paper, we add an Appendix with general results (without proofs) from Floquet theory.
§ THE CURVED SITNIKOV PROBLEM
§.§ Description of the model
We consider two bodies with equal masses (primaries) moving under mutual Newtonian gravity on identical elliptical orbits of eccentricity ε, about their center of mass. For small values of ε, the distance r(t) from either primary to the center of mass of the binary is given by
r_ε(t;r)= rρ(t;ε), r>0,
ρ(t;ε)= (1-εcos (u(t)))= (1-εcos (t))+𝒪(ε^2),
where u(t) is the eccentric anomaly, which satisfies Kepler's equation u-εsin u=(2π/τ)t, where 2π/τ is the mean motion[The mean motion is the time-average angular velocity over an orbit.] of the primaries. The expansion in (<ref>) is convergent for ε < ε_c = 0.6627...; see <cit.>.
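When the exact r_ε(t;r) is needed, rather than its first-order expansion, Kepler's equation can be solved numerically. A minimal sketch by Newton iteration, with time units chosen so that the mean anomaly equals t (an assumption made for the example):

import numpy as np

def eccentric_anomaly(t, eps, tol=1e-13, max_iter=50):
    # Newton iteration for Kepler's equation u - eps*sin(u) = t
    t = np.asarray(t, dtype=float)
    u = t.copy()                                 # good initial guess for small eps
    for _ in range(max_iter):
        du = (u - eps*np.sin(u) - t)/(1.0 - eps*np.cos(u))
        u = u - du
        if np.max(np.abs(du)) < tol:
            break
    return u

def rho(t, eps):
    # rho(t; eps) = 1 - eps*cos(u(t)), without truncating at first order
    return 1.0 - eps*np.cos(eccentric_anomaly(t, eps))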
A massless particle is confined on a circle of radius R passing through the center of mass of the binary and perpendicular to the plane of its motion. We assume that the only force acting on the infinitesimal particle is the component along the circle of the resultant of the gravitational forces exerted by the primaries. The motion of the primaries take place in the xy-coordinate plane and the circle with radius R is in the yz-coordinate plane. See Figure <ref>.
We place the center of mass at the point (0,R,0) in the xyz-coordinate system. The position of the primaries are determined by the functions
𝐱_1(t) = (r_ε(t;r)sin t ,R+r_ε(t;r)cos t ,0),
𝐱_2(t) = (-r_ε(t;r)sin t ,R-r_ε(t;r)cos t ,0).
Note that t=0 corresponds to the passage of the primaries through the pericenter at y=R± r(1-ε) and t=π to the passage of the primaries through the apocenter at y=R± r(1+ε); both peri- and apo-centers lie on the plane of the circle of radius R in the y-axis.
The position of the infinitesimal particle is 𝐱(t)=(0,y(t),z(t)) (taking into account the restriction of motion for the infinitesimal particle to the circle y^2+z^2=R^2). We will derive the equations of motion by computing the gravitational forces exerted by the primaries:
𝐅_ε(y,t;R,r) = -𝐱-𝐱_1/||𝐱-𝐱_1||^3-
𝐱-𝐱_2/||𝐱-𝐱_2||^3
= -(-r_ε(t;r)sin t ,y-R-r_ε(t;r)cos t ,z)/||𝐱-𝐱_1||^3
-(r_ε(t;r)sin t ,y-R+r_ε(t;r)cos t ,z)/||𝐱-𝐱_2||^3 ,
where the distance from the particle to each primary is
||𝐱-𝐱_1||=
[r^2_ε(t;r)+2R^2+2Rr_(t;r)cos t -2y(R+r_ε(t;r)cos t )]^1/2,
||𝐱-𝐱_2||=[r^2_ε(t;r) +2R^2-2Rr_(t;r)cos t -2y(R-r_ε(t;r)cos t )]^1/2.
We note that when r(1+ε)=2R the elliptical orbit of the primary with the apo-center at y<R crosses the circle of radius R, hence collisions between the primary and the infinitesimal mass are possible. Therefore we will restrict to r<2R/(1+ε); when ε=0, this means r<2R.
We write (<ref>) in polar coordinates, that is y=Rcos q, z=Rsin q, and we obtain
𝐅_ε(q,t;R,r) = -(-r_ε(t;r)sin t ,Rcos q-R-r_ε(t;r)cos t ,Rsin q)/||𝐱-𝐱_1||^3
-(r_ε(t;r)sin t ,Rcos q-R+r_ε(t;r)cos t ,Rsin q)/||𝐱-𝐱_2||^3.
The origin q=0 corresponds to the point (0,R,0) in the xyz-coordinate system. Thus, the primaries move on elliptical orbits around this point.
Next we will retain the component along the circle of the resulting force (<ref>). That is, we will ignore the constraint force that confines the motion of the particle to the circle, as this force acts perpendicularly to the tangential component of the gravitational attraction force. The unit tangent vector to the circle of radius R at (0,Rcos( q),Rsin( q)) pointing in the positive direction is given by 𝐮( q)=(0,-sin( q),cos( q)). The component of the force 𝐅_ε( q,t;R,r) along the circle is computed as
𝐅_ε( q,t;R,r)·𝐮( q)=-(R+r_ε(t;r)cos(t))sin( q)/||𝐱-𝐱_1||^3-(R-r_ε(t;r)cos(t))sin( q)/||𝐱-𝐱_2||^3.
The motion of the particle, as a Hamiltonian system of one-and-a-half degrees of freedom, corresponds to
q̇ = p ,
ṗ = f_ε(q,t;R,r),
where
f_ε( q,t;R,r) := 𝐅_ε( q,t;R,r)·𝐮( q) ,
||𝐱-𝐱_1|| = [ r^2_ε(t;r) +2R(1-cos q)(R+r_ε(t;r)cos t) ]^1/2 ,
||𝐱-𝐱_2|| = [ r^2_ε(t;r)
+2R(1-cos q)(R-r_ε(t;r)cos t)]^1/2 ,
Hence
H_ε(q,p,t;R,r)=p^2/2+V_ε(q,t;R,r),
where the potential is given by
V_ε(q,t;R,r) = -1/R(1/||𝐱-𝐱_1||+1/||𝐱-𝐱_2||).
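The equations above translate directly into code. A minimal sketch of the tangential force (using the first-order ρ(t;ε); the exact version would substitute the Kepler solver sketched earlier):

import numpy as np

def f_eps(q, t, R, r, eps):
    # Tangential component of the primaries' pull on the test particle.
    re = r*(1.0 - eps*np.cos(t))                    # r_eps(t; r), first order
    common = 2.0*R*(1.0 - np.cos(q))
    d1 = (re*re + common*(R + re*np.cos(t)))**1.5   # ||x - x_1||^3
    d2 = (re*re + common*(R - re*np.cos(t)))**1.5   # ||x - x_2||^3
    return -(R + re*np.cos(t))*np.sin(q)/d1 - (R - re*np.cos(t))*np.sin(q)/d2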
§.§ Limit cases
The curved Sitnikov problem can be viewed as a link between the classical Sitnikov problem and the Kepler problem on the circle, mentioned in Section <ref>.
§.§.§ The limit R→∞.
We express (<ref>) in terms of the arc length w=R q, obtaining
f_ε(w,t;R,r)=-(R+r_ε(t;r)cos t )sin(w/R)/[r^2_ε(t;r) +2R(1-cos(w/R))(R+r_ε(t;r)cos t) ]^3/2
-(R-r_ε(t;r)cos t )sin(w/R)/[r^2_ε(t;r) +2R(1-cos(w/R))(R-r_ε(t;r)cos t) ]^3/2
which we can write in a suitable form as
f_ε(w,t;R,r)=-w [sin(w/R)/(w/R)] (1+(r_ε(t;r)/R)cos t ) / [r^2_ε(t;r) + 2w^2 [(1-cos(w/R))/(w/R)^2] (1+(r_ε(t;r)/R)cos t) ]^3/2
-w [sin(w/R)/(w/R)] (1-(r_ε(t;r)/R)cos t ) / [r^2_ε(t;r) + 2w^2 [(1-cos(w/R))/(w/R)^2] (1-(r_ε(t;r)/R)cos t) ]^3/2.
Letting R tend to infinity we obtain
lim_R→∞f_ε(w,t;R,r)=-2w/(r^2_ε(t;r)+w^2)^3/2 ,
which is the classical Sitnikov Problem.
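This limit can be checked numerically. The sketch below assumes the f_eps function from the earlier sketch is in scope; the sample values of w, t, r, ε are arbitrary choices:

import numpy as np

def f_classical(w, t, r, eps):
    # The R -> infinity limit derived above.
    re = r*(1.0 - eps*np.cos(t))
    return -2.0*w/(re*re + w*w)**1.5

w, t, r, eps = 0.7, 1.3, 0.4, 0.1
for R in (10.0, 100.0, 1000.0):
    print(f"R={R:7.1f}  curved={f_eps(w/R, t, R, r, eps):+.6f}  "
          f"limit={f_classical(w, t, r, eps):+.6f}")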
§.§.§ The limit r→0.
When we take the limit r→0 in (<ref>) we are fusing the primaries into a large mass at the center of mass and we obtain a two-body problem on the circle.
The component force along the circle corresponds to
lim_r→0f_ε( q,t;R,r)=-sin( q)/[√(2)R^2(1-cos( q))^3/2] .
This problem was studied in <cit.> with a different force given by
-1/(R q^2)+1/(R(2π- q)^2),
the distance between the large mass and the particle is measured by the arc length (in that paper the authors assume that R=1).
The potential of the force (<ref>) is
V_1(q_1)=-1/[R^2√(2)(1-cos (q_1))^1/2],
where q_1 denotes the angular coordinate, and the potential for (<ref>) is
V_2(q_2)=-1/(Rq_2)-1/(R(2π-q_2)),
where q_2 denotes the angular coordinate.
Each problem defines an autonomous system with Hamiltonian
H_i(p_i,q_i)=1/2p_i^2+V_i(q_i) ,
taking p_1=dq_1/dt, p_2=dq_2/dt and i=1,2.
Let ϕ^i_t be the flow of the Hamiltonian H_i, and let A_i denote the corresponding phase space, i=1,2.
Using that all orbits are determined by the energy relations given by (<ref>), it is not difficult to define a homeomorphism g:A_1 → A_2 which maps orbits of system (<ref>) into orbits of system (<ref>). In the same way one can define a homeomorphism h:A_2 → A_1 which is in fact g^-1. This shows that the two flows are C^0–equivalent.
We recall from <cit.> that the solutions of the two-body problem on the circle (apart from the equilibrium antipodal to the fixed body) are classified in three families (elliptic, parabolic and hyperbolic solutions) according to their energy level. The elliptic solutions come out of a collision, stop instantaneously, and reverse their path back to the collision with the fixed body. The parabolic solutions come out of a collision and approach the equilibrium as t →∞. Hyperbolic motions come out of a collision with the fixed body, traverse the whole circle and return to a collision.
We remark that the two limit cases R→∞ and r→ 0 are not equivalent. Indeed, in the case r→ 0 the resulting system is autonomous,
the point q=0 is a singularity for the system, and the point q=π is a hyperbolic fixed point, while in the case R→∞ the resulting system is non-autonomous (for ε≠ 0), the point q=0 is a fixed point of elliptic type, and the point q=∞ is a degenerate hyperbolic periodic orbit.
§.§ General properties
§.§.§ Extended phase space, symmetries, and equilibrium points
It is clear that, besides the limit cases R→∞ and r→ 0, the dynamics of the system depends only on the ratio r/R, so we can fix R=1 and study the dependence of the global dynamics on r where 0<r<2.
In this case using (<ref>) and (<ref>) we get
f_ε(q,t;r)=-(1+rρ(t;ε)cos(t))sin( q)/[r^2ρ^2(t;ε) +2(1-cos q)(1+rρ(t;ε)cos t)
]^3/2
-(1-rρ(t;ε)cos(t))sin( q)/[r^2ρ^2(t;ε) +2(1-cos q)(1-rρ(t;ε)cos t)
]^3/2.
To study the non-autonomous system (<ref>) we will make the system autonomous by
introducing the time as an extra dependent variable
𝒳_ε(q,p,s;r)={[ q̇ = p; ṗ = f_ε(q,s;r); ṡ = 1 ]..
This vector field is defined on [0,2π]×ℝ×[0,2π], where we identify the boundary points of the closed intervals. The flow of 𝒳_ ε possesses symmetries defined by the functions
𝕊_1(q,p,s) = (-q,-p,s) ,
𝕊_2(q,p,s) = (q,p,s+2π) ,
𝕊_3(q,p,s) = (q+2π,p,s) ,
𝕊_4(q,p,s) = (q,-p,-s)
in the sense that
* 𝕊_1(𝒳_ε(q,p,s))=𝒳_ε(𝕊_1(q,p,s)),
* 𝒳_ε(q,p,s)=𝒳_ε(𝕊_2(q,p,s)),
* 𝒳_ε(q,p,s)=𝒳_ε(𝕊_3(q,p,s))
* 𝕊_4(𝒳_ε(q,p,s))=-𝒳_ε(𝕊_4(q,p,s)),
as can be verified by a direct computation.
The function 𝕊_1 describes the symmetry with respect to the trajectory (0,0,s), 𝕊_2 and 𝕊_3 describe the bi-periodicity of f_ε(q,s;r) and 𝕊_4 describes the reversibility of the system.
System (<ref>) has two equilibria (0,0) and (π,0), which correspond to periodic orbits for 𝒳_ε.
While the classical Sitnikov equation is autonomous for ε = 0, our equation (<ref>) is not, and thus we expect it to be non–integrable, as is borne out by numerical simulation. Figure <ref> shows a Poincaré section corresponding to s=0 (mod 2π), with ε=0 and r=1. This simulation suggests that the invariant KAM circles
coexist with chaotic regions.
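A section of this type can be reproduced with a standard strobe map. Below is a self-contained sketch for the circular case ε=0, R=1, r=1; the fan of initial conditions and the number of strobe iterates are arbitrary choices made for the illustration:

import numpy as np
from scipy.integrate import solve_ivp

def f0(q, t, r):
    # f_0(q, t; r) for circular primaries (eps = 0, R = 1), cf. (<ref>)
    c = r*np.cos(t)
    d1 = (r*r + 2.0*(1.0 - np.cos(q))*(1.0 + c))**1.5
    d2 = (r*r + 2.0*(1.0 - np.cos(q))*(1.0 - c))**1.5
    return -(1.0 + c)*np.sin(q)/d1 - (1.0 - c)*np.sin(q)/d2

def rhs(t, y, r):
    return [y[1], f0(y[0], t, r)]

r, strobe = 1.0, []
for q0 in np.linspace(0.2, 3.0, 10):            # initial conditions with p = 0
    y = [q0, 0.0]
    for k in range(300):                        # strobe at s = 0 (mod 2*pi)
        sol = solve_ivp(rhs, (2*np.pi*k, 2*np.pi*(k + 1)), y, args=(r,),
                        rtol=1e-10, atol=1e-12)
        y = sol.y[:, -1]
        strobe.append((np.mod(y[0], 2*np.pi), y[1]))
# plotting the points in strobe reproduces a section of the type in Figure <ref>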
In the sequel, we will analyze the dynamics around the equilibrium points (π,0) and (0,0). One important phenomenon that we will observe is that both equilibrium points undergo stability interchanges as parameters are varied. More precisely, when
ε sufficiently small is kept fixed and r→ 2R/(1+ε), the point (π,0) undergoes infinitely many changes in stability, and when r is kept fixed and ε→ 1, the point (0,0) undergoes infinitely many changes in stability.
In the next section we will first prove a general result.
§ A GENERAL RESULT ON STABILITY INTERCHANGES
In this section we switch to a more general mechanical model which exhibits stability interchanges. We consider the motion under mutual gravity of an infinitesimal particle and a heavy mass each constrained to its own curve and moving under gravitational attraction, and study the linear stability of the equilibrium point corresponding to the closest position between the particles along the curves they are moving on. In Section <ref>, we will apply this general result to the equilibrium point P_1 of the curved Sitnikov problem described in Section <ref>. The fact that in the curved Sitnikov problem there are two heavy masses, rather than a single one as considered in this section, does not change the validity of the stability interchanges result, since, as we shall see, what it ultimately matters is the time-periodic gravitational potential acting on the infinitesimal particle.
To describe the setting of this section, consider a particle constrained to a curve x = x (s, λ ) in ℝ ^3, where s is the arc length along the curve and λ is a parameter with values in some interval [0, λ_0], λ_0> 0.
Another (much larger) gravitational mass undergoes a prescribed periodic motion according to y = y (t, λ) = y (t+1, λ ); see Figure <ref>. We assume that the mass of the particle at x (s) is negligible compared to the mass at y(t), treating the particle at x (s) as massless.
To write the equation of motion for the unknown coordinate s of the massless particle, let:
z (s,t, λ ) = x (s, λ) - y (t, λ)
.
Assume that s=0, t=0 minimize the distance between the two curves:
| z (0,0, λ) | = min_s,t | z (s,t, λ)|def=δ( λ ),
for all λ∈ [0, λ_0], and that this minimum point is non-degenerate with respect to t, in the sense that
∂^2 /∂ t^2|z(0,t,λ)|_| t=0≠ 0 .
Moreover, we make the following orthogonality assumption:
ẋ(0,λ)·ẏ(0,λ)=0, for all λ∈[0,λ_0].
In the sequel we will study the case when the minimum distance min_s,t | z (s,t, λ)| → 0, that is, the shortest distance from the orbit y(t,λ) of the massive body to the curve x(s,λ)
drawn by the infinitesimal mass approaches 0 as λ→ 0. See Figure <ref>.
In the curved Sitnikov problem, this corresponds to the case when r→ 2R/(1+ε) (r→ 2R when
ε = 0).
To write the equation of motion for s, we note that
the Newtonian gravitational potential of the particle at x (s) is a function of s and t given by
U( s, t, λ ) =- | z (s,t, λ ) | ^-1,
and the evolution of s is governed by the Euler–Lagrange equation
d/dt L_ṡ-L_s=0 with the Lagrangian
L = 1/2ṡ ^2 - U( s, t, λ ),
leading to[to explain this form of the Lagrangian, we note that the equations for our massless particle are obtained by taking the limit of the particle of small mass m; for such a particle, in the ambient potential U, the Lagrangian is
1/2 m ṡ ^2 -m U( s, t, λ ) – the factor m in front of U is due to the fact that U is the potential energy of the unit mass. Dividing the Euler–Lagrange equation by m gives (<ref>).]
s̈ + U ^' ( s , t,λ) = 0, where ^' = ∂/∂ s .
Note that s=0 is an equilibrium for any λ, since U ^' (0,t, λ ) = 0 for all t and for all λ, according to (<ref>).
Linearizing (<ref>)
around the equilibrium s=0 we obtain
S̈+ a(t, λ ) S = 0, a(t+1, λ ) = a (t, λ ),
where a(t, λ ) = U ^'' (0,t,λ).
We have the following general result:
Assume that (<ref>), (<ref>),
(<ref>) hold, that x and y are both bounded in the C^2–norm uniformly in λ, and
min_s,t ||x(s,λ)-y(t,λ)|| → 0 as λ→ 0.
Then there exists an infinite sequence
λ_1 > λ_2 ≥λ _3> λ _4≥⋯ >
λ_ 2n-1>λ_ 2n≥⋯→ 0
such that the equilibrium solution s=0 of (<ref>) is linearly strongly stable[The equilibrium solution is linearly strongly stable if the Floquet multipliers of (<ref>) lie on the unit circle and are not real, or equivalently, if the linearized system lies in the interior of the set of stable systems.] for all λ∈ (λ_2n, λ _2n-1). Furthermore, each complementary λ–interval contains points where the linearized equilibrium is not strongly stable, i.e. is either hyperbolic or parabolic.
The proof of Theorem <ref> relies on Lemmas <ref>, <ref> and <ref> stated below.
Consider the linear system
ẍ + a(t, λ ) x=0,
where a(t,λ) is a continuous function of t∈ [0,1], λ∈ (0, λ̅], where λ̅> 0. Let z(t, λ) = x+ i ẋ, where x=x(t, λ ) is a nontrivial solution of (<ref>). Assume that there exists an
interval [t_0( λ ), t_1( λ ) ]⊂ [0,1], possibly depending on λ, such that
arg z(t, λ)|_t_0^t_1→ -∞ as λ↓ 0.
Here arg z(t, λ ) is defined as a continuous function of t. Although this choice of arg is unique only modulo 2 π,
its increment as stated in equation (<ref>) is uniquely defined.
Then there exists a sequence {λ_k}_k=0^ ∞ satisfying (<ref>) such that the Floquet matrix of (<ref>) is strongly stable
for all λ∈ (λ_2n, λ _2n-1), for any n > 0. Furthermore, every complementary λ–interval contains
values of λ for which the Floquet matrix is not strongly stable.
Proof of this lemma can be found in <cit.>.
Consider the linear system
ẍ + a(t) x=0,
and assume that a(t) > 0 on some interval
[t_0, t_1]. For any (nontrivial) solution x(t) the corresponding phase vector
z(t ) = x + i ẋ rotates by
θ [z]def= arg z(t) |_t_0^t_1≤ -min_[t_0,t_1] √( a(t) ) (t_1-t_0) + π.
Writing the differential equation ẍ + a (t) x = 0 as a system ẋ = y, ẏ = - a(t)x, we obtain (using complex notation):
θ̇= d/dt arg (x+iy)= d/dt Im ( ln z) = Im ż/z =
Im[(y-iax)(x-iy)]/(x ^2 + y ^2) = -(a cos^2 θ+ sin ^2 θ ).
We conclude that for any solution of (<ref>), the angle θ= arg (x+iy) satisfies
θ̇≤ -(a_mcos ^2 θ + sin^2 θ ), for t∈ [t_0,t_1] , where a_m=min_[t_0,t_1] a(t).
To invoke comparison estimates, consider θ̅(t) which satisfies
d/dtθ̅= -(a_m cos^2 θ̅+ sin^2 θ̅), θ̅(t_0) = θ (t_0).
By the comparison estimate, we conclude:
θ|_t_0^t_1≤θ̅|_t_0^t_1,
and the proof of the lemma will be complete once we show that θ̅ satisfies the estimate (<ref>).
To that end we consider a solution of
ẍ̅̈ + a_m x̅ = 0
with the initial condition satisfying
arg (x̅(t_0)+iẋ̅̇(t_0)) = θ̅(t_0).
This solution is of the form
x̅(t)= Acos (√( a_m) t- φ ), A = const. ,
where φ is chosen so as to satisfy (<ref>).
Since arg ( x̅+ iẋ̅̇) satisfies
the same differential equation as θ̅, and since the initial conditions match, we conclude that
θ̅(t) = arg ( x̅+ iẋ̅̇), so that
θ̅(t) =
arg ( cos (√( a_m ) t- φ) - i√( a_m )sin (√( a_m ) t- φ ) )=
- (√(a_m) t- φ )+ ⟨π /2⟩,
where ⟨X⟩ denotes a quantity whose absolute value does not exceed X. In other words, θ̅(t) is given by a linear function with coefficient - √( a_m), up to an error
< π /2.
The last inequality is due to the fact that the complex numbers cos(√(a_m)t-ϕ)-isin(√(a_m)t-ϕ) and
cos(√(a_m)t-ϕ)-i√(a_m)sin(√(a_m)t-ϕ) lie in the same quadrant, so the difference between their arguments is no more than π/2.
Therefore, over the interval [t_0,t_1] the function θ̅ changes by the amount √( a_m)(t_1-t_0) with the error of at most π/2 + π/2 = π:
θ̅|_t_0^t_1≤ -√( a_m ) (t_1-t_0) + π;
restating this more formally, (<ref>) implies
θ̅(t_1)< - (√( a_m) t_1- φ)+ π/2 , θ̅(t_0)> - (√( a_m)t_0- φ)- π/2.
Subtracting the second inequality from the first gives (<ref>).
Substituting (<ref>) into (<ref>) yields (<ref>) and completes the proof of Lemma <ref>.
Consider the potential U defined by equation (<ref>). Assume that the functions
x= x(s,λ), y= y(t,λ) satisfy
(i) the minimum min_s,t | z(·,·,λ) |=δ(λ) is non-degenerate and is achieved at s=t=0, where z(s,t,λ)= x(s,λ)- y(t,λ),
(ii) || z(·,·,λ) ||_C^2≤ M uniformly in
0<λ < λ_0, and
(iii) δ ( λ ) → 0 as λ→ 0.
Then there exists time τ= τ (λ) (approaching zero as λ→ 0) such that
lim_λ→ 0(τ(λ) ·min_|t|≤τ (λ)√(U ^'' (0,t,λ))) = ∞.
Differentiating (<ref>) with respect to s twice, we get
U^'' (0,t, λ ) = [ (( z^'· z^' + z· z ^'')( z· z ) -3( z· z ^' )^2 ) /( z· z)^5/2 ]_| s=0.
We now estimate all the dot products in the above expression to obtain a lower bound.
First,
z ^'· z ^' = 1,
since s is the arc length, and from here z ^'· z ^'' = 0. Now, to estimate
z· z and z· z^' we observe that z· z^'= 1/2 ( z· z)^' and we note that the first expression, as a function of t with s=0 fixed has a minimum at t=0 that we call δ ^2, and that the second function vanishes at t=0. Applying Taylor's formula with respect to t we then have
z· z =δ^2 + k t ^2, z· z^' = 1/2
( z· z)^' = kt ,
where the constant k is determined by the C^2–norm M of z. For the remaining dot product we have (still keeping s=0 and t arbitrary):
| z· z ^''| ≤ | z | | z ^'' | (<ref>) ≤ M
√(δ ^2 + k t ^2 ).
Using the above estimates in (<ref>), we obtain
U ^''(0,t, λ ) ≥ [(1- M √(δ^2 + k t ^2 ))δ^2- 3k ^2 t^2] /(δ^2 + k t ^2)^5/2 .
Now we restrict t to have δ ^2 + kt ^2≤ 2 δ^2; this guarantees that the denominator in (<ref>) does not exceed (2 δ )^5/2; to bound the numerator, we further restrict t so that the dominant part
δ ^2 - 3 k ^2 t^2 ≥1/2δ ^2, thus bounding
the numerator from below by
( δ ^2 - 3 k ^2 t ^2 ) - M √(2 δ ^2)δ^2 ≥1/2δ ^2 - M √(2)δ ^3 > 1/4δ ^2
if δ is sufficiently small.
Summarizing, we restricted t to
| t | ≤ c δdef=τ ( λ ), where c = min(k^-1/2 ,
(k √( 6) ) ^-1 ),
and showed that for all such t and for δ small enough
U^''(0,t, λ ) ≥1/4δ ^2 /(2δ^2 )^5/2 = c_1/δ ^3,
where c_1=2 ^-9/2.
With τ defined in (<ref>) we obtain
lim_λ→ 0( τ(λ)·min_|t| ≤τ(λ)√(a(t,λ))) = ∞, thus completing the proof of the lemma.
Proof of Theorem <ref>
Consider the linearized equation
ẍ + U^'' (0,t, λ)x = 0,
and consider the phase point z=x+i ẋ of a nontrivial solution.
Lemma <ref> gives us the rotation estimate (<ref>) for any time interval [t_0, t_1]; let this interval be
[- τ ( λ ) ,τ ( λ )] where τ ( λ ) is taken from the statement of Lemma <ref>. We then have from (<ref>):
θ [z]def= arg z(t) |_t_0^t_1≤
- 2τ(λ )·min_|t|≤τ(λ )√(U^''(0,t, λ)) + π.
According to the conclusion (<ref>) of Lemma <ref>,
θ→ - ∞ as λ→ 0. This satisfies condition (<ref>) of
Lemma <ref>, which now applies, and its conclusion completes the proof of Theorem <ref>.
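The mechanism of the theorem is easy to observe numerically on a model coefficient: a 1-periodic spike of width of order λ and height of order λ^-3, mimicking the lower bound on U'' obtained in Lemma <ref>. The coefficient below is an illustrative stand-in, not the actual potential of the problem:

import numpy as np
from scipy.integrate import solve_ivp

def floquet_trace(a, T):
    # Trace of the period-T Floquet matrix of x'' + a(t) x = 0.
    def rhs(t, y):
        at = a(t)
        return [y[1], -at*y[0], y[3], -at*y[2]]
    sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, 0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] + sol.y[3, -1]

for lam in np.linspace(0.08, 0.35, 28):
    a = lambda t, lam=lam: lam**2/(lam**2 + np.sin(np.pi*t)**2)**2.5
    tr = floquet_trace(a, 1.0)
    print(f"lam={lam:.3f}  trace={tr:+9.3f}  "
          f"{'strongly stable' if abs(tr) < 2.0 else 'not strongly stable'}")

As λ decreases, the trace sweeps back and forth through the stability window |trace| < 2, producing the alternating intervals of the theorem.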
§ STABILITY OF THE EQUILIBRIUM POINTS IN THE CURVED SITNIKOV PROBLEM
The system (<ref>) has two equilibria (0,0) and (π,0), which correspond to periodic orbits for 𝒳_ε.
The associated linear system around the fixed point (q_*,p_*) can be
written as
𝐯̇=A(t)𝐯 , 𝐯=([ x; y; ]) ,
with
A(t)=.(
[ 0 1; ∂f_ε/∂ q 0; ])|_q=q_*.
Let X(t) be a fundamental matrix solution of system (<ref>) given by
X(t)=(
[ x_1(t) x_2(t); y_1(t) y_2(t); ]),
with the initial condition X(0)=I, the identity matrix; x_1 is an even function and x_2 an odd one, since ∂f_ε/∂ q is an even function with respect to t. The monodromy matrix is given by X(2π) and we denote by λ_1,λ_2 its eigenvalues, the Floquet multipliers associated to (<ref>).
These are given by
λ_1,λ_2=[x_1(2π)+y_2(2π)±√((x_1(2π)+y_2(2π))^2-4)]/2 ,
the trace Tr(X(2π))=x_1(2π)+y_2(2π) determines the linearized dynamics around the fixed point. Moreover, since the function (∂f_ε/∂ q)_| q=q^* is an even function, we know x_1(2π)=y_2(2π) (see <cit.>) and then
λ_1,λ_2=y_2(2π)±√((y_2(2π))^2-1) .
To emphasize the dependence on the parameters , r we write
y_2(2π;ε,r)=y_2(2π).
Thus the linear stability of the equilibrium is
* Elliptic type: | y_2(2π;ε,r) |<1.
* Parabolic type: |y_2(2π;ε,r)|=1.
* Hyperbolic type: |y_2(2π;ε,r)|>1.
Since the Wronskian is equal to 1 for all t, then
W(ε, r)=(y_2(2π;ε,r))^2-x_2(2π;ε,r)y_1(2π;ε,r)=1,
and we have:
(y_2(2π;ε,r))^2=1+x_2(2π;ε,r)y_1(2π;ε,r) .
From this last expression it follows that, in the parabolic case, the periodic orbit corresponding to the equilibrium point (π,0) is associated to x_1(t) or to x_2(t) (or to both).
§.§ Stability of the equilibrium point (π,0). Case ε=0.
In this subsection we consider the system (<ref>)-(<ref>) for the case ε=0, namely when the primaries are following circular trajectories. We also fix R=1. The extended vector field under study is
𝒳_0(q,p,t;γ)={[ q̇ = p; ṗ = f_0(q,t;r); ṫ = 1 ].
where
f_0(q,t;r):= -(1+rcos(t))sin(q)/||𝐱-𝐱_1||^3-(1-rcos(t))sin(q)/ ||𝐱-𝐱_2||^3 ,
||𝐱-𝐱_1|| = [r^2+2+2 rcos(t)-2cos(q)-2rcos(q)cos(t)]^1/2 ,
||𝐱-𝐱_2|| = [r^2+2-2 rcos(t)-2cos(q)+2rcos(q)cos(t)]^1/2 ,
with 0<r<2. We observe that in this case, function (<ref>) is π–periodic (remember that f_ε(q,t;r) is 2π–periodic if ε >0).
The linear system associated to (<ref>) around the fixed point (π,0) is defined by the function
∂f_0/∂ q(π,t;r)=(1+rcos(t))/[r^2+4+4rcos(t)]^3/2
+(1-rcos(t))/[r^2+4-4rcos(t)]^3/2 ,
which is C^1 respect to t and r.
The function ∂f_0/∂ q(π,t;r) is monotone decreasing with respect to r, that is, for all t∈[0,π)
∂f_0/∂ q(π,t;r_1)>∂f_0/∂ q(π,t;r_2)
if r_1<r_2 (see Figure <ref>).
Let F(t,r)=∂f_0/∂ q(π,t;r); then a straightforward computation gives
∂ F/∂ r(t,r) = (cos(t))[r^2+4rcos(t)+4] - 3(1+rcos(t))[r+2cos(t)]/[r^2+4+4rcos(t)]^5/2
+ (- cos(t))[r^2-4rcos(t)+4] - 3(1-rcos(t))[r-2cos(t)]/[r^2+4-4rcos(t)]^5/2
≤ -4rcos(t)^2 - 6r/min{[r^2+4+4rcos(t)]^5/2,[r^2+4-4rcos(t)]^5/2} < 0.
The equilibrium point (π,0) is of hyperbolic type if r≤(√(17)-3)^1/2=1.059….
We first show that F(t,r)≥ 0 for all t if and only if r≤(√(17)-3)^1/2.
We compute
∂ F/∂ t(t,r)=rsin t [-1/(r^2+4rcos t+4)^1/2 +1/(r^2-4rcos t+4)^1/2.
.
+6(1+rcos t)/(r^2+4rcos t+4)^5/2-6(1-rcos t)/(r^2-4rcos t+4)^5/2 ].
We have F(t,r)=F( π-t, r ) so it is enough to restrict t∈[0,π/2]. Note that ∂ F/∂ t(t,r)=0 for t=0,π/2.
For 0<r<2, we have
F (π/2,r )=2/(r^2+4)^3/2>0,
and
F(0,r)=1+r/(2+r)^3+1-r/(2-r)^3=-2(r^4+6r^2-8)/(4-r^2)^3.
It follows immediately that F(0,r)<0 if r>(√(17)-3)^1/2, since the positive root of r^4+6r^2-8=0 is given by r^2=√(17)-3.
If rcos t≤ 1 then (<ref>) implies F(t,r)≥ 0. Hence F(t,r)≥ 0 for all t provided r≤ 1.
Let 1<r≤ (√(17)-3)^1/2. If t∈[0,cos^-1(1/r)], which is equivalent to rcos t>1, then
1/(r^2-4rcos t+4)^1/2
-1/(r^2+4rcos t+4)^1/2≥ 0,
6(1+rcos t)/(r^2+4rcos t+4)^5/2-6(1-rcos t)/(r^2-4rcos t+4)^5/2≥ 0.
Therefore (<ref>) implies that F(t,r) is increasing in t for t∈ [0,cos^-1(1/r)], and, since F(0,r)≥ 0, it follows that F(t,r)≥ 0 for all t.
Now let x(t)=x_1(t) + x_2(t) be a particular solution of system (<ref>); by hypothesis it satisfies x(0)=1, ẋ(0)=1. From (<ref>) with ε = 0 we obtain
ẍ = ∂f_0/∂ q(π,t;r) x or ẍ - F(t,r)x = 0.
Then, since the coefficient -F(t,r)≤ 0 for all t, we can apply directly Lyapunov's instability criterion (see for instance page 60 in <cit.>) to show that (π,0) is of hyperbolic type.
We can in fact estimate the first value r_1 of r at which the equilibrium point (π,0) becomes of parabolic type for the first time.
The idea is to extend the result slightly beyond r=(√(17)-3)^1/2, and then find the maximal r for which Proposition <ref> holds. Along the way one has to carry out straightforward but tedious analytic computations, which we omit in this paper. We finally obtain that the first parabolic solution occurs at r_1 ≈ 1.2472⋯.
In Figure <ref> we show a couple of numerical simulations which illustrate the stability interchanges of the equilibrium point (π,0) for r ∈ (0,2) and ε = 0. The figure on the right hand side is a plot for r close to 2.
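Simulations of this kind are straightforward to reproduce. The following Python sketch (our own code; the sample radii and integration tolerances are assumptions) integrates the linearized equation ẍ = (∂ f_0/∂ q)(π,t;r) x over one period and classifies the equilibrium through |y_2(2π;0,r)|, as in the trichotomy stated earlier.

import numpy as np
from scipy.integrate import solve_ivp

def dfdq_pi(t, r):
    # The coefficient (d f_0 / d q)(pi, t; r) of the linearization at (pi, 0).
    return ((1 + r * np.cos(t)) / (r**2 + 4 + 4 * r * np.cos(t))**1.5
            + (1 - r * np.cos(t)) / (r**2 + 4 - 4 * r * np.cos(t))**1.5)

def y2_2pi(r):
    # y_2(2*pi; 0, r): integrate x'' = dfdq_pi(t, r) x with x(0)=0, x'(0)=1
    # and return x'(2*pi), the (2,2) entry of the monodromy matrix.
    rhs = lambda t, v: [v[1], dfdq_pi(t, r) * v[0]]
    sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

for r in (0.5, 1.0, 1.3, 1.9):   # sample radii in (0, 2)
    y2 = y2_2pi(r)
    kind = "elliptic" if abs(y2) < 1 else "parabolic/hyperbolic"
    print(f"r = {r}: y_2(2 pi) = {y2:.4f} ({kind})")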
§.§ Stability of the equilibrium point (π,0). Case ε≠ 0.
In this case, for every 0<r<(√(17)-3)^1/2 we have ∂ f_0/∂ q (π,t;r)>0, hence ∂ f_ε/∂ q (π,t;r)>0 for all ε>0 sufficiently small (depending on r). Therefore (π,0) remains an equilibrium point of hyperbolic type in the case when the primaries move on Keplerian ellipses of sufficiently small eccentricity ε>0.
We remark that Theorem <ref> does not depend on the shape of the curves on which x(s) or y(t) move; in other words, we can apply Theorem <ref> regardless of whether the primaries move on a circle or on ellipses of eccentricity ε>0. Thus we obtain the following result on stability interchanges:
In the curved Sitnikov problem, let us fix any ε∈ [0,1), let R=1, and consider r (the semi-major axis of the Keplerian ellipses traced out by the primaries) as the parameter. As r approaches 2/(1+ε), the distance between a Keplerian ellipse and the circle of the massless particle approaches zero. There exists a sequence r_n↑ 2/(1+ε) satisfying
r_0 ≤ r_1 < r _2 ≤ r _3 ⋯ <
r_2n≤ r_2n+1 < ⋯,
such that the equilibrium point (π, 0) of equation (<ref>)
is strongly stable for r ∈ (r_2n-1, r _2n)
and not strongly stable for some r in the complementary intervals (r_2n , r_2n+1). In other words, the equilibrium point (π, 0) loses and then regains its strong stability infinitely many times as r increases towards 2/(1+ε).
We verify that Theorem <ref> applies. The curve x(s,λ) of Theorem <ref> is represented by the circle of radius R=1, the curve y(t,λ) is represented by the orbit of the primary that gets closer to (π,0) (when ε =0 the primaries are co-orbital), and the parameter λ corresponds to r. The planes of the two curves are perpendicular, as in (<ref>), and the minimum distance between the curves, given by (<ref>), is non-degenerate as in (<ref>); it corresponds to the infinitesimal mass being at y=-1 and the closest primary to this point being at y=1-r(1+ε). Thus, the minimum distance is δ(r)=2-r(1+ε), and it approaches 0 when r→ 2/(1+ε). To apply Theorem <ref> we only need to verify that x and y are bounded in the C^2-norm uniformly in r. This is obviously true for y(t,r), since the motion of the primary is not affected by the motion of the infinitesimal mass; it is also true for x(s,r), since s is the arc-length and the motion lies on the circle of radius R=1, hence |x(s,r)|=1 (the radius of the circle), |x^'(s,r)|=1 (the unit speed of a curve parametrized by arc-length), and |x^''(s,r)|=1 (the curvature of the circle). Hence the conclusion of Theorem <ref> follows immediately.
§.§ Stability of the equilibrium point q=(0,0).
The linear stability of (0,0) is the same as of the barycenter in the classical Sitnikov problem, since
∂ f_ε/∂ q(0)=-2R/(r^3ρ(t;ε)^3).
The stability of this point can be treated very similarly to that of the origin for Hill's equation, so in this analysis we use results from that theory.
As in the study of the other equilibrium point we start with the case ε = 0. Here the function f_0(q,t;r)
defined in equation (<ref>) around q=(0,0) is given by
f_0(q,t;r)=-(2/r^3) q+((9+r^2+9r^2(cos(t))^2)/(3r^5)) q^3+𝒪(q^5) .
The local dynamics is determined by the linear part. The eigenvalues of the linear part are purely imaginary, given by σ_1,2=± i√(2/r^3), and the Floquet multipliers, which come from the monodromy matrix
X(π)=
(
[ cos(√(2/r^3)π) (√(r^3/2))sin(√(2/r^3)π); -(√(2/r^3))sin(√(2/r^3)π) cos(√(2/r^3)π); ]), X(0)=I ,
are λ_1,2=e^± i√(2/r^3)π.
Hence q=(0,0) is of elliptic type if √(2/r^3)≠ k for any k∈ℤ, and it is of parabolic type if √(2/r^3)=k for some k∈ℤ. In fact, in the parabolic case, if √(2/r^3)=2m, m∈ℤ, then there exists a π-periodic solution, and if √(2/r^3)=2m+1, m∈ℤ, then there exists a 2π-periodic solution.
The equilibrium point q=(0,0) of the system defined by (<ref>) is stable for all r∈(0,2).
The linear part possesses Floquet multipliers which place the system in either the elliptic or the parabolic case.
In the latter case, when √(2/r^3)=k for some k∈ℤ, there exist two independent eigenvectors associated to the eigenvalues λ_1,2. The monodromy matrix is then conjugate to, and hence equal to, ± I, and q=(0,0) is stable (see <cit.> for more details).
When the origin is of elliptic type for the linear system,
we observe that the coefficient of the third-order term in equation (<ref>) is positive for all r∈(0,2); we can then apply directly Ortega's theorem (see the Appendix and <cit.>), and therefore we obtain that the equilibrium point (0,0) is stable for the whole system, that is, including the nonlinear part.
We note that, in the case when ε=0, stability interchanges in a weak sense appear as r→ 0, since (0,0) switches between elliptic type when r≠ (2/k^2)^1/3 and parabolic type when r= (2/k^2)^1/3, k∈ℤ^+, as noted before.
In the case when ε≠ 0, as we mentioned earlier, the linear stability of (0,0) is the same as in the classical Sitnikov problem, so it only depends on the eccentricity parameter ε.
The papers <cit.> state that there are stability interchanges when the size r of the binary is kept fixed and the eccentricity ε of the Keplerian ellipses approaches 1.
We should point out that Theorem <ref> does not apply to this case since the function r_ε(t,r) describing the motion of the primaries — corresponding to y(t,λ) in Theorem <ref> — does not remain bounded in
the C^2 norm uniformly in ε.
§ FLOQUET THEORY
In order to keep the paper self-contained, we add this appendix with the main results of Floquet theory; most of them are well known to people in the field.
Consider the linear system
𝐱̇=A(t)𝐱 , 𝐱∈ℝ^2 ,
where A(t) is a T-periodic matrix-valued function. Let X(t) be the fundamental matrix solution
X(t)=(
[ x_1(t) x_2(t); y_1(t) y_2(t); ])
with the initial condition X(0)=I, the identity matrix. Let λ_1 and λ_2 be the eigenvalues (Floquet multipliers) of the monodromy matrix X(T) and let μ_1,μ_2 (Floquet exponents) be such that λ_1=e^μ_1T, λ_2=e^μ_2T.
[Floquet's theorem]
Suppose X(t) is a fundamental matrix solution for (<ref>), then
X(t+T)=X(t)X(T)
for all t∈ℝ. Also there exists a constant matrix B such that e^TB=X(T) and a T-periodic matrix P(t), so that, for all t,
X(t)=P(t)e^Bt.
Let λ be a Floquet multiplier for (<ref>) with λ=e^μ T; then there exists a nontrivial solution x(t)=e^μ tp(t), with p(t) a T-periodic function. Moreover, x(t+T)=λ x(t).
Thus, the Floquet multipliers lead to the following characterization:
* If |λ|<1⇔Re(μ)<0 then x(t)→ 0 as t→∞.
* If |λ|=1⇔Re(μ)=0 then x(t) is a pseudo-periodic, bounded solution. In particular when λ=1 then x(t) is T-periodic and when λ=-1 then x(t) is 2T-periodic.
* If |λ|>1 then Re(μ)>0 and therefore x(t)→∞ as t
→∞, an unbounded solution.
If the Floquet multipliers satisfy λ_1≠λ_2, then the equation (<ref>) has two linearly independent solutions
x_1(t)=p_1(t)e^μ_1t , x_2(t)=p_2(t)e^μ_2t ,
where p_1(t) and p_2(t) are T-periodic functions and μ_1 and μ_2 are the respective Floquet exponents.
In this way, the stability of the solution to (<ref>) is
* Asymptotically stable if |λ_i|<1 for i=1,2.
* Lyapunov stable if |λ_i|≤1 for i=1,2 and, for every λ_i with |λ_i|=1, the algebraic multiplicity equals the geometric multiplicity.
* Unstable if |λ_i|>1 for at least one i, or if |λ_i|=1 and the algebraic multiplicity is greater than the geometric multiplicity.
A particular case for the equation (<ref>) is the so-called Hill's equation, namely the periodic linear second order differential equation:
z̈+f(t)z=0 ,
where f(t) is a π-periodic function. Equation (<ref>), as a first order system, is
𝐯̇=A(t)𝐯 , 𝐯=([ x; y; ]) ,
A(t)=(
[ 0 1; -f(t) 0; ]) .
Let X(t) in (<ref>) be a fundamental matrix solution of (<ref>). The monodromy matrix corresponds to X(π) and, as before, let λ_1 and λ_2 be the Floquet multipliers associated to (<ref>) and μ_1 and μ_2 be the corresponding Floquet exponents.
In this paper we assume that f(t) is an even function; from Floquet theory we know that x_1 is an even and x_2 is an odd function. Also, the trace of A(t) vanishes and
λ_1λ_2=e^∫_0^πtr(A(t))dt=1 .
Assuming μ_1=a+ib is the Floquet exponent corresponding to the Floquet multiplier λ_1, the general solution to (<ref>) is characterized as follows:
* Elliptic type: λ_1∈ℂ∖ℝ, with |λ_1|=1 (and λ_2=λ̅_1). The general solution is pseudo-periodic and can be written as
𝐯(t)=c_1Re(𝐩(t)e^ibt/π)+c_2Im(𝐩(t)e^ibt/π) .
The origin is Lyapunov stable.
* Parabolic type: λ_1=λ_2=± 1.
* If λ_1=1 and there are two linearly independent eigenvectors of the monodromy matrix, the general solution is
𝐯(t)=c_1𝐩_1(t)+c_2𝐩_2(t) ,
and is π-periodic and Lyapunov stable.
* If λ_1=1 and there is just one eigenvector associated to this eigenvalue, then the general solution is
𝐯(t)=(c_1+c_2t)𝐩_1(t)+c_2𝐩_2(t) ,
and is unstable.
* If λ_1=-1 and there are two linearly independent eigenvectors of the monodromy matrix, the general solution is
𝐯(t)=c_1𝐩_1(t)e^it+c_2𝐩_2(t)e^it ,
and is 2π-periodic and Lyapunov stable.
* If λ_1=-1 and there is just one eigenvector associated to this eigenvalue, then the general solution is
𝐯(t)=(c_1+c_2t)𝐩_1(t)e^it+c_2𝐩_2(t)e^it ,
and is unstable.
* Hyperbolic type: λ_1∈ℝ, but |λ_1|≠ 1 (and λ_2=1/λ_1).
* If λ_1>1, then the solution is
𝐯(t)=c_1𝐩_1(t)e^μ_1 t+c_2𝐩_2(t)e^-μ_1 t .
* If λ_1<-1, then the solution is
𝐯(t)=c_1𝐩_1(t)e^μ_1 te^it+c_2𝐩_2(t)e^-μ_1 te^it .
Thus, the origin is unstable.
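As a minimal numerical companion to this classification (our own sketch; the Mathieu-type coefficient below is an assumed illustration, not taken from the paper), one can build the monodromy matrix X(T) of Hill's equation column by column and read off the stability type from |Tr X(T)|: since λ_1λ_2=1, the multipliers solve λ^2 - Tr X(T) λ + 1 = 0, so |Tr X(T)|<2 corresponds to the elliptic case.

import numpy as np
from scipy.integrate import solve_ivp

def monodromy(f, T):
    # Monodromy matrix X(T) of z'' + f(t) z = 0 with f T-periodic:
    # integrate the two columns of X(t) starting from X(0) = I.
    rhs = lambda t, v: [v[1], -f(t) * v[0]]
    cols = [solve_ivp(rhs, (0.0, T), v0, rtol=1e-10, atol=1e-12).y[:, -1]
            for v0 in ([1.0, 0.0], [0.0, 1.0])]
    return np.column_stack(cols)

f = lambda t: 1.0 + 0.3 * np.cos(2.0 * t)   # assumed pi-periodic test coefficient
tr = np.trace(monodromy(f, np.pi))
print("Tr X(pi) =", tr, "->", "elliptic" if abs(tr) < 2 else "parabolic/hyperbolic")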
A useful result from Rafael Ortega <cit.> considers the nonlinear Hill equation
y”+a(t)y+c(t)y^2n-1+d(t,y)=0,
with n≥2, where the functions a,c:ℝ→ℝ are continuous, T-periodic, and
∫_0^T|c(t)| dt≠ 0,
and the function d:ℝ×(-ϵ,ϵ)→ℝ, for ϵ>0, is continuous, has continuous derivatives of all orders with respect to y, is T-periodic with respect to t, and satisfies
d(t,y)=𝒪(|y|^2n) as y→ 0, uniformly with respect to t∈ℝ.
The linear part around the solution y=0 of (<ref>) is
y”+a(t)y=0 .
[Ortega's Theorem]
Assume the following:
* The equation (<ref>) is stable.
* c≥0 or c≤0.
Then y=0 is a stable solution of (<ref>).
§.§ Acknowledgements
We thank the anonymous referees; their remarks and suggestions helped us improve this paper. Research of M.G. was partially supported by NSF grant DMS-1515851.
M. L. gratefully acknowledges support by the NSF grant DMS-1412542.
The fourth author (EPC) has received partial support by the Asociación Mexicana de Cultura A.C.
Parts of this work were done while the authors visited CIMAT, Guanajuato, while LFP and EPC visited Yeshiva University, and while M.G. visited UAM-I in Mexico City. All authors are grateful for the hospitality of these institutions.
99
Alekseev1968a V. Alekseev, Quasirandom dynamical systems I, Math. USSR Sbornik 5, 73-128 (1968).
Alekseev1968b V. Alekseev, Quasirandom dynamical systems II, Math. USSR Sbornik, 6, 505-560 (1968).
Alekseev1969 V. Alekseev, Quasirandom dynamical systems III, Math. USSR Sbornik, 7, 1-43 (1969).
Ces L. Cesari, Asymptotic behavior and instability problems in ordinary differential equations, Springer-Verlag, New York, 1971.
Chazy J. Chazy, Sur l'allure finale du mouvement dans le problème des trois corps I, II, III. Ann. Sci. Ecole Norm. Sup. 3^e Sér. 39 (1922); J. Math. Pures Appl. 8 (1929); Bull. Astron. 8 (1932).
Dankowicz H. Dankowicz, P. Holmes, The existence of transversal homoclinic points in the Sitnikov problem. J. Differential Equations 116, 468-483 (1995).
Diacu F. Diacu, Relative Equilibria in the 3-Dimensional Curved-N-Body Problem, Memoirs of the American Mathematical Society, Vol. 228, 2014.
Fr L. Franco-Pérez, E. Pérez Chavela, Global symplectic regularization for some restricted 3–body problems on S^1. Nonlinear Analysis, 71, 5131-5143 (2009).
GarciaPerezChavela A. Garcia, E. Pérez-Chavela, Heteroclinic phenomena in the Sitnikov problem, Hamiltonian systems and celestial mechanics (Patzcuaro, 1998), 174-185, World Sci. Monogr. Ser. Math., 6, World Sci. Publ., River Edge, NJ, 2000.
GorodetskiKaloshin A. Gorodetski and V. Kaloshin, Hausdorff dimension of oscillatory motions in the restricted planar circular three body problem and in Sitnikov problem, preprint.
Hagel_Lothka J. Hagel, C. Lhotka, A High Order Perturbation Analysis
of the Sitnikov Problem, Celestial Mechanics and Dynamical Astronomy 93, 201-228 (2005).
Kalas V. O. Kalas, P. S. Krasil'nikov, On equilibrium stability in the Sitnikov problem,
Cosmic Research 49, 534-537 (2011).
Kostov2014 V.B. Kostov, P.R. McCullough, J.A. Carter, M. Deleuil, R.F. Díaz, D. C. Fabrycky, G. Hébrard, T.C. Hinse, T. Mazeh, J.A. Orosz, Z.I. Tsvetanov, W.F. Welsh, Kepler-413b: a slightly misaligned, Neptune-size transiting circumbinary planet, Arxive: 1401.7275 (2014).
Kovacs T. Kovács, Gy. Bene and T. Tél, Relativistic effects in the chaotic Sitnikov problem, Monthly Notices of the Royal Astronomical Society, 414 (3), 2275-2281 (2011).
Levi M. Levi, Stability of the inverted pendulum - a topological explanation. SIAM Review, 30 (4), 639-644 (1988).
Alfaro J. Martínez Alfaro, C. Chiralt, Invariant rotational curves in Sitnikov's Problem,
Celestial Mechanics and Dynamical Astronomy, 55 (4), 351–367 (1993).
McGehee R. McGehee, A stable manifold theorem for degenerate fixed points with applications to
Celestial Mechanics, J. Differential Equations 14, 70-88 (1973).
Mos J. K. Moser, Stable and Random Motion in Dynamical Systems. Annals Math. Studies 77, Princeton University Press, 1973.
Ortega R. Ortega, The stability of the equilibrium of a nonlinear Hill's equation. SIAM J. Math. Anal. 25 (5), 1393-1401 (1994).
Ortega2 R. Ortega, The stability of the equilibrium: a search for the right approximation. Ten Mathematical Essays on Aproximation in Analysis and Topology, (J. Ferrera, J. López-Gómez and F.R. Ruiz del Portal, Eds.), Elsevier, 215-234, (2005).
Plummer H.C. Plummer, An Introductory Treatise on Dynamical Astronomy, New York, Dover, 1960.
Robinson C. Robinson, Homoclinic Orbits and Oscillation for the
Planar Three-Body Problem, Journal of Differential Equations 52, 356-377 (1984).
Sitnikov K. Sitnikov, The Existence of Oscillatory Motions in the Three-Body Problem. Translation from Doklady Akademii Nauk SSSR, 133 (2), 647-650 (1961).
Magnus W. Magnus, S. Winkler, Hill's Equation. First edition, Dover, 1979.
Ziglin S.L. Ziglin, Non-integrability of the restricted two-body problem on a sphere. Doklady RAN. 379 (4), 477-478 (2001). Engl. transl.: Physics-Doklady. 46 (8), 570-571 (2001).
| §.§ A curved Sitnikov problem
We consider the following curved Sitnikov problem: Two bodies of equal masses (primaries) move, under mutual gravity, on Keplerian ellipses about their center of mass. A third, massless particle is confined to a circle passing through the center of mass of the primaries, denoted by P_0, and perpendicular to the plane of motion of the primaries; the second intersection point of the circle with that plane is denoted by P_1. We assume that the massless particle moves under the gravitational influence of the primaries without affecting them. The dynamics of the massless particle has two equilibrium points, at P_0 and P_1. We focus on the local dynamics near these two points, more precisely, on the dependence of the linear stability of these points on the parameters of the problem.
When the Keplerian ellipses are not too large or too small, P_0 is a local center and P_1 is a hyperbolic fixed point. When we increase the size of the Keplerian ellipses, as the distance between P_1 and the closest ellipse approaches zero, then P_1 undergoes stability interchanges.
That is, there exists a sequence of open, mutually disjoint intervals of values of the semi-major axis of the Keplerian ellipses, such that, on each of these intervals the linearized stability of P_1 is strongly stable, and each complementary interval contains values where the linearized stability is not strongly stable, i.e., it is either hyperbolic or parabolic.
The length of these intervals approaches zero when the semi-major axis of the Keplerian ellipses approaches the diameter of the circle on which the massless particle moves. This phenomenon is the main focus in the paper.
It is stated in <cit.> and suggested by numerical evidence <cit.> that the
linearized stability of the point P_0 also undergoes stability interchanges when the size of the binary is kept fixed and the eccentricity of the Keplerian ellipses approaches 1.
Stability interchanges of the type described above are ubiquitous in systems with varying parameters; they appear, for example, in the classical Hill's equation and in the Mathieu equation <cit.>. To prove the occurrence of this phenomenon in our curved Sitnikov problem, we first formulate a general result on stability interchanges for a general class of simple mechanical systems. More precisely, we consider the motion of two bodies (one massive and one massless) which are confined to a pair of curves and move under Newtonian gravity. We let the distance between the two curves be controlled by some parameter λ. We assume that the position of the infinitesimal particle that achieves the minimum distance between the curves is an equilibrium point. We show that, in the case when the minimum distance between the two curves approaches zero, corresponding to λ→ 0, there exists a sequence of mutually disjoint open intervals (λ_2n-1,λ_2n), whose lengths approach zero as λ→ 0, such that whenever λ∈ (λ_2n-1,λ_2n) the linearized stability of the equilibrium point is strongly stable, and each complementary interval contains values of λ where the linearized stability is not strongly stable.
From this result we derive the above mentioned stability interchange result for the curved Sitnikov problem.
The curved Sitnikov problem considered in this paper is an extension of the classical Sitnikov problem described in Section <ref> (also, see e.g., <cit.>). When the radius of the circle approaches infinity, in the limit we obtain the classical Sitnikov problem — the infinitesimal mass moves along the line perpendicular to the plane of the primaries and passing through the center of mass. The equilibrium point P_1 becomes the point at infinity and is of a degenerate hyperbolic type. Thus, stability interchanges of P_1 represent a new phenomenon that we encounter in the curved Sitnikov problem but not in the classical one.
Also, in the latter case, it is well known that for ε=0 the classical Sitnikov problem is integrable. In the case of the curved one, numerical evidence suggests that it is not (see Figure <ref>).
The motivation for considering the curved Sitnikov problem resides in the n-body problem in spaces with constant curvature, and with models of planetary motions in binary star systems, as discussed in Section <ref>.
§.§ Classical Sitnikov problem
We recall here the classical Sitnikov problem. Two bodies (primaries) of equal masses m_1=m_2=1 move in a plane on Keplerian ellipses of eccentricity ε about their center of mass, and a third, massless particle moves on a line perpendicular to the plane of the primaries and passing through their center of mass. By choosing the plane of the primaries to be the xy-plane and the line on which the massless particle moves to be the z-axis, the equations of motion of the massless particle can be written, in appropriate units, as
z̈=-2z/(z^2+r^2(t))^3/2,
where r(t) is the distance from the primaries to their center of mass given by
r(t)=1-εcos u(t),
where u(t) is the eccentric anomaly in the Kepler problem. By normalizing the time we can assume that the period of the primaries is 2π, and
r(t)=(1-εcos t)+O(ε^2),
for small ε.
When ε=0, i.e., the primaries move on a circular orbit, the dynamics of the massless particle is described by a 1-degree-of-freedom Hamiltonian and so is integrable. Depending on the energy level, one has the following types of solutions: an equilibrium solution, when the particle rests at the center of mass of the primaries; periodic solutions around the center of mass; escape orbits, either parabolic, that reach infinity with zero velocity, or hyperbolic, that reach infinity with positive velocity.
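These alternatives are easy to observe numerically: for ε=0 the energy ż^2/2 - 2/√(z^2+1) is conserved, so with z(0)=0 the orbit escapes exactly when |ż(0)|≥ 2. A minimal sketch (the initial speeds are assumed sample values):

import numpy as np
from scipy.integrate import solve_ivp

# Classical Sitnikov problem with eps = 0, so r(t) = 1 identically.
rhs = lambda t, s: [s[1], -2.0 * s[0] / (s[0]**2 + 1.0)**1.5]
for v0 in (0.5, 1.9, 2.1):          # below / near / above the escape speed 2
    sol = solve_ivp(rhs, (0.0, 200.0), [0.0, v0], rtol=1e-10, max_step=0.1)
    print(f"v0 = {v0}: max |z| on [0, 200] = {np.abs(sol.y[0]).max():.2f}")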
When ε∈(0,1), the differential equation (<ref>) is non-autonomous and the system is non-integrable. Consider the case ε≪ 1.
The system also has bounded and unbounded orbits, as well as unbounded oscillatory orbits and capture orbits (oscillatory orbits are those for which lim sup_t→±∞|z(t)|=+∞ and lim inf_t→±∞|z(t)|<+∞, and capture orbits are those for which lim sup_t→ -∞|z(t)|=+∞ and
lim sup_t→ +∞|z(t)|<+∞). In his famous paper about the final evolutions in the three body problem, Chazy introduced the term oscillatory motions <cit.>, although he did not find examples of these, leaving the question of their existence open. Sitnikov's model yielded the first example of oscillatory motions <cit.>.
There are many relevant works on this problem, including <cit.>.
The curved Sitnikov problem introduced in Section <ref> is a modification of the classical problem when the massless particle moves on a circle rather than a line. Here we regard the circle as a very simple restricted model of a space with constant curvature. In Subsection <ref>
we introduce and summarize some aspects of this problem.
§.§ The n-body problem in spaces with
constant curvature
The n-body problem on spaces with
constant curvature is a natural extension of the n-body problem in the Euclidean space; in either case the gravitational law considered is Newtonian. The extension was first proposed independently by the founders of hyperbolic geometry, Nikolai Lobachevsky and János Bolyai. It was subsequently studied in the late 19th, early 20th century, by Serret, Killing, Lipschitz, Liebmann, Schering, etc. Schrödinger developed a quantum mechanical analogue of the Kepler problem on the two-sphere in 1940. The interest in the problem was revived by Kozlov, Harin, Borisov, Mamaev, Kilin, Shchepetilov, Vozmischeva, and others, in the 1990's. A more recent surge of interest was stimulated by the works on relative equilibria in spaces with constant curvature (both positive and negative) by Diacu, Pérez-Chavela, Santoprete, and others, starting in the 2010's. See <cit.> for a history of the problem and a comprehensive list of references.
A distinctive aspect of the n-body problem on curved spaces is that the lack of (Galilean) translational invariance results in the lack of center-of-mass and linear-momentum integrals.
Hence, the study of the motion cannot be reduced to a barycentric coordinate system.
As a consequence, the two-body problem on a sphere can no longer be reduced to the
corresponding problem of motion in a central potential field, as is the case for the Kepler problem in the Euclidean space. As it turns out, the two-body problem on the sphere is not integrable <cit.>.
Studying the three-body problem on spaces with curvature is also challenging.
Perhaps the simplest model is the restricted three-body problem on a circle. This was studied in <cit.>. First, they consider the motion of the two primaries on the circle, which is integrable: collisions can be regularized, and all orbits can be classified into three different classes (elliptic, hyperbolic, parabolic). Then they consider the motion of the massless particle under the gravity of the primaries, when one or both primaries are at a fixed position. They obtain once again a complete classification of all orbits of the massless particle.
In this paper we take the ideas from above one step further, by considering the curved Sitnikov problem, with the massless particle moving on a circle under the gravitational influence of two primaries that move on Keplerian ellipses in a plane perpendicular to that circle. In the limit case, when the primaries are identified with one point, that is when the primaries coalesce into a single body, the Keplerian ellipses degenerate to a point, and the limit problem coincides with the two-body problem on a circle described above.
While the motivation of this work is theoretical, there are possible connections with the dynamics of planets in binary star systems. About 20 planets outside of the Solar System have been confirmed to orbit binary star systems; since more than half of the main sequence stars have at least one stellar companion, it is expected that a substantial fraction of planets form in binary star systems. The orbital dynamics of such planets can vary widely, with some planets orbiting one star and some others orbiting both stars. Some chaotic-like planetary orbits have also been observed, e.g., the planet Kepler-413b orbiting Kepler-413 A and Kepler-413 B in the constellation Cygnus, which displays erratic precession. This planet's orbit is tilted away from the plane of the binaries and deviates from Kepler's laws. It is hypothesized that this tilt may be due to the gravitational influence of a third star nearby <cit.>. Of related interest is the relativistic version of the Sitnikov problem <cit.>.
Thus, mathematical models like the one considered in this paper could be helpful to understand possible types of planetary orbits in binary star systems.
To complete this introduction, the paper is organized as follows: In Section <ref> we go deeper into the description of the curved Sitnikov problem, studying the limit cases and its general properties. In Section <ref> we present a general result on stability interchanges. In Section <ref> we show that the equilibrium points in the curved Sitnikov problem present stability interchanges. Finally, in order to have a self-contained paper, we add an Appendix with general results (without proofs) from Floquet theory. | null | null | null | null | null |
http://arxiv.org/abs/1701.07570v3 | 20170126035421 | Dynamic Regret of Strongly Adaptive Methods | [
"Lijun Zhang",
"Tianbao Yang",
"Rong Jin",
"Zhi-Hua Zhou"
] | cs.LG | [
"cs.LG"
] |
Dynamic Regret of Strongly Adaptive Methods
Lijun Zhang [email protected]
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210023, China
Tianbao Yang [email protected]
Department of Computer Science
the University of Iowa, Iowa City, IA 52242, USA
Rong Jin [email protected]
Alibaba Group, Seattle, USA
Zhi-Hua Zhou [email protected]
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210023, China
December 30, 2023
To cope with changing environments, recent developments in online learning have introduced the concepts of adaptive regret and dynamic regret independently. In this paper, we illustrate an intrinsic connection between these two concepts by showing that the dynamic regret can be expressed in terms of the adaptive regret and the functional variation. This observation implies that strongly adaptive algorithms can be directly leveraged to minimize the dynamic regret. As a result, we present a series of strongly adaptive algorithms that have small dynamic regrets for convex functions, exponentially concave functions, and strongly convex functions, respectively. To the best of our knowledge, this is the first time that exponential concavity is utilized to upper bound the dynamic regret. Moreover, all of those adaptive algorithms do not need any prior knowledge of the functional variation, which is a significant advantage over previous specialized methods for minimizing dynamic regret.
Online convex optimization, Adaptive regret, Dynamic regret
§ INTRODUCTION
Online convex optimization is a powerful paradigm for sequential decision making <cit.>. It can be viewed as a game between a learner and an adversary: In the t-th round, the learner selects a decision x_t ∈Ω, simultaneously the adversary chooses a function f_t(·): Ω↦ℝ, and then the learner suffers an instantaneous loss f_t(x_t). This study focuses on the full-information setting, where the learner can query the value and gradient of f_t <cit.>. The goal of the learner is to minimize the cumulative loss over T periods. The standard performance measure is regret, which is the difference between the loss incurred by the learner and that of the best fixed decision in hindsight, i.e.,
Regret(T)=∑_t=1^T f_t(x_t) - min_x∈Ω∑_t=1^T f_t(x).
The above regret is typically referred to as static regret in the sense that the comparator is time-invariant. The rationale behind this evaluation metric is that one of the decision in Ω is reasonably good over the T rounds. However, when the underlying distribution of loss functions
changes, the static regret may be too optimistic and fails to capture the hardness of the problem.
To address this limitation, new forms of performance measure, including adaptive regret <cit.> and dynamic regret <cit.>, were proposed and have received significant interest recently. Following the terminology of <cit.>, we define the strongly adaptive regret as the maximum static regret over intervals of length τ, i.e.,
SA-Regret(T,τ) = max_[s, s+τ -1] ⊆ [T](∑_t=s^s+τ -1 f_t(x_t) - min_x∈Ω∑_t=s^s+τ -1 f_t(x) ).
Minimizing the adaptive regret enforces the learner to have a small static regret over any interval of length τ. Since the best decision for different intervals could be different, the learner is essentially competing with a changing comparator.
A parallel line of research introduces the concept of dynamic regret, where the cumulative loss of the learner is compared against a comparator sequence u_1, …, u_T ∈Ω, i.e.,
D-Regret(u_1,…,u_T) = ∑_t=1^T f_t(x_t) - ∑_t=1^T f_t(u_t).
It is well-known that in the worst case, a sublinear dynamic regret is impossible unless we impose some regularities on the comparator sequence or the function sequence <cit.>. A representative example is the functional variation defined below
V_T = ∑_t=2^T max_x∈Ω |f_t(x) - f_t-1(x)|.
<cit.> have proved that as long as V_T is sublinear in T, there exists an algorithm that achieves a sublinear dynamic regret. Furthermore, a general restarting procedure is developed, and it enjoys O(T^2/3V_T^1/3) and O(log T √(T V_T)) rates for convex functions and strongly convex functions, respectively.
However, the restarting procedure can only be applied when an upper bound of V_T is known beforehand, thus limiting its application in practice.
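To make this regularity concrete, the following toy computation (entirely our own construction, with an assumed domain Ω=[-1,1] and quadratic losses f_t(x)=(x-c_t)^2) evaluates V_T on a grid; a slowly drifting sequence of minimizers yields a functional variation far below T.

import numpy as np

T = 1000
xs = np.linspace(-1.0, 1.0, 2001)                   # grid over Omega = [-1, 1]
c = 0.5 * np.sin(0.01 * np.arange(1, T + 1))        # slowly drifting minimizers
F = (xs[None, :] - c[:, None])**2                   # F[t, i] = f_{t+1}(x_i)
V_T = np.abs(np.diff(F, axis=0)).max(axis=1).sum()  # sum_t max_x |f_t - f_{t-1}|
print(f"V_T = {V_T:.2f} for T = {T}")               # much smaller than T here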
While both the adaptive and dynamic regrets aim at coping with changing environments, little is known about their relationship. This paper makes a step towards understanding their connections. Specifically, we show that the strongly adaptive regret in (<ref>), together with the functional variation, can be used to upper bound the dynamic regret in (<ref>). Thus, an algorithm with a small strongly adaptive regret is automatically equipped with a tight dynamic regret. As a result, we obtain a series of algorithms for minimizing the dynamic regret that do not need any prior knowledge of the functional variation. The main contributions of this work are summarized below.
* We provide a general theorem that upper bounds the dynamic regret in terms of the strongly adaptive regret and the functional variation.
* For convex functions, we show that the strongly adaptive algorithm of <cit.> has a dynamic regret of O(T^2/3 V_T^1/3log^1/3 T), which matches the minimax rate <cit.>, up to a polylogarithmic factor.
* For exponentially concave functions, we propose a strongly adaptive algorithm that allows us to control the tradeoff between the adaptive regret and the computational cost explicitly. Then, we demonstrate that its dynamic regret is O(d √(T V_T log T)), where d is the dimensionality. To the best of our knowledge, this is the first time that exponential concavity is utilized in the analysis of dynamic regret.
* For strongly convex functions, our proposed algorithm can also be applied and yields a dynamic regret of O(√(T V_T log T)), which is also minimax optimal up to a polylogarithmic factor.
§ RELATED WORK
We give a brief introduction to previous work on static, adaptive, and dynamic regrets in the context of online convex optimization.
§.§ Static Regret
The majority of studies in online learning are focused on static regret <cit.>. For general convex functions, the classical online gradient descent achieves O(√(T)) and O(log T) regret bounds for convex and strongly convex functions, respectively <cit.>. Both the O(√(T)) and O(log T) rates are known to be minimax optimal <cit.>. When functions are exponentially concave, a different algorithm, named online Newton step, is developed and enjoys an O(d log T) regret bound, where d is the dimensionality <cit.>.
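For concreteness, here is a minimal sketch of online gradient descent with the two classical step-size schedules (the function and parameter names are ours, and the step sizes are the textbook choices rather than tuned constants):

import numpy as np

def ogd(grad_seq, project, x0, G=1.0, B=1.0, lam=None):
    # Online gradient descent: eta_t = B / (G * sqrt(t)) gives O(sqrt(T)) static
    # regret for convex losses; eta_t = 1 / (lam * t) gives O(log T) for
    # lam-strongly convex losses. grad_seq[t](x) returns the gradient of f_t at x.
    x, played = np.asarray(x0, dtype=float), []
    for t, grad in enumerate(grad_seq, start=1):
        played.append(x.copy())                        # commit x_t, then observe f_t
        eta = 1.0 / (lam * t) if lam else B / (G * np.sqrt(t))
        x = project(x - eta * grad(x))                 # gradient step + projection
    return played

# Example projection onto Omega = [-1, 1]^d: project = lambda v: np.clip(v, -1.0, 1.0)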
§.§ Adaptive Regret
The concept of adaptive regret is introduced by <cit.>, and later strengthened by <cit.>. Specifically, <cit.> introduce the weakly adaptive regret
WA-Regret(T)= max_[s, q] ⊆ [T](∑_t=s^q f_t(x_t) - min_x∈Ω∑_t=s^q f_t(x)).
To minimize the adaptive regret, <cit.> have developed two meta-algorithms: an efficient algorithm with O(log T) computational complexity per iteration and an inefficient one with O(T) computational complexity per iteration. These meta-algorithms use an existing online method (that was possibly designed to have small static regret) as a subroutine.[For brevity, we ignored the factor of subroutine in the statements of computational complexities. The O(·) computational complexity should be interpreted as O(·) × s space complexity and O(·) × t time complexity, where s and t are space and time complexities of the subroutine per iteration, respectively.] For convex functions, the efficient and inefficient meta-algorithms have O(√(T log^3 T)) and O(√(T log T)) regret bounds, respectively. For exponentially concave functions, those rates are improved to O(d log^2 T) and O(d log T), respectively. We can see that the price paid for the adaptivity is very small: The rates of weakly adaptive regret differ from those of static regret only by logarithmic factors.
A major limitation of weakly adaptive regret is that it does not respect short intervals well. Taking convex functions as an example, the O(√(T log^3 T)) and O(√(T log T)) bounds are meaningless for intervals of length O(√(T)). To overcome this limitation, <cit.> proposed the strongly adaptive regret SA-Regret(T,τ), which takes the length of the interval τ as a parameter, as indicated in (<ref>). From the definitions, we have SA-Regret(T,τ) ≤ WA-Regret(T), but this does not mean the notion of weakly adaptive regret is stronger, because an upper bound for WA-Regret(T) could be very loose for SA-Regret(T,τ) when τ is small.
If the strongly adaptive regret is small for all τ <T, we can guarantee the learner has a small regret over any interval of any length. In particular, <cit.> introduced the following definition.
Let R(τ) be the minimax static regret bound of the learning problem over τ periods. An algorithm is strongly adaptive, if
SA-Regret(T,τ)=O((log T) · R(τ)), ∀τ.
It is easy to verify that the meta-algorithms of <cit.> are strongly adaptive for exponentially concave functions,[That is because (i) SA-Regret(T,τ) ≤ WA-Regret(T), and (ii) there is a (log T) factor in the definition of strong adaptivity. ] but not for convex functions. Thus, <cit.> developed a new meta-algorithm that satisfies SA-Regret(T,τ)=O( √(τ)log T ) for convex functions, and thus is strongly adaptive. The algorithm is also efficient and the computational complexity per iteration is O(log T). Later, the strongly adaptive regret of convex functions was improved to O( √(τlog T) ) by <cit.>, and the computational complexity remains O(log T) per iteration. All the previously mentioned algorithms for minimizing adaptive regret need to query the gradient of the loss function at least O(log t) times in the t-th iteration. In a recent study, <cit.> demonstrate that the number of gradient evaluations per iteration can be reduced to 1 by introducing a surrogate loss.
§.§ Dynamic Regret
In a seminal work, <cit.> proposed to use the path-length defined as
P(u_1, …, u_T)=∑_t=2^T ‖u_t - u_t-1‖_2
to upper bound the dynamic regret, where u_1, …, u_T ∈Ω is a comparator sequence. Specifically, <cit.> proved that for any sequence of convex functions, the dynamic regret of online gradient descent can be upper bounded by O(√(T) P(u_1, …, u_T)). Another regularity of the comparator sequence, which is similar to the path-length, is defined as
P'(u_1, …, u_T)=∑_t=2^T ‖u_t - Φ_t (u_t-1)‖_2
where Φ_t (·) is a dynamic model that predicts a reference point for the t-th round. <cit.> developed a novel algorithm named dynamic mirror descent and proved that its dynamic regret is on the order of √(T) P'(u_1, …, u_T). The advantage of P'(u_1, …, u_T) is that when the comparator sequence follows the dynamical model closely, it can be much smaller than the path-length P(u_1, …, u_T).
Let x_t^* ∈argmin_x∈Ω f_t(x) be a minimizer of f_t(·). For any sequence of u_1, …, u_T ∈Ω, we have
D-Regret(u_1,…,u_T) =∑_t=1^T f_t(x_t) - ∑_t=1^T f_t(u_t)
≤ D-Regret(x_1^*,…,x_T^*) = ∑_t=1^T f_t(x_t) - ∑_t=1^T min_x∈Ω f_t(x).
Thus, D-Regret(x_1^*,…,x_T^*) can be treated as the worst case of the dynamic regret, and there are many works that were devoted to minimizing D-Regret(x_1^*,…,x_T^*) <cit.>.
When a prior knowledge of P(x_1^*, …, x_T^*) is available, D-Regret(x_1^*,…,x_T^*) can be upper bounded by O(√(T P(x_1^*, …, x_T^*))) <cit.>. If all the functions are strongly convex and smooth, the upper bound can be improved to O(P(x_1^*, …, x_T^*)) <cit.>. The O(P(x_1^*, …, x_T^*)) rate is also achievable when all the functions are convex and smooth, and all the minimizers x_t^*'s lie in the interior of Ω <cit.>. In a recent study, <cit.> introduced a new regularity, the squared path-length
S(x_1^*, …, x_T^*)=∑_t=2^T ‖x_t^* - x_t-1^*‖_2^2
which could be much smaller than the path-length P(x_1^*, …, x_T^*) when the difference between successive minimizers is small. <cit.> developed a novel algorithm named online multiple gradient descent, and proved that D-Regret(x_1^*,…,x_T^*) is on the order of min(P(x_1^*, …, x_T^*), S(x_1^*, …, x_T^*)) for (semi-)strongly convex and smooth functions.
Discussions Although closely related, adaptive regret and dynamic regret are studied independently and there are few discussions of their relationships. In the literature, dynamic regret is also referred to as tracking regret or shifting regret <cit.>. In the setting of “prediction with expert advice”, <cit.> have shown that the tracking regret can be derived from the adaptive regret. In the setting of “online linear optimization in the simplex”, <cit.> introduced a generalized notion of shifting regret which unifies adaptive regret and shifting regret. Different from previous work, this paper considers the setting of online convex optimization, and illustrates that the dynamic regret can be upper bounded by the adaptive regret and the functional variation.
§ A UNIFIED ADAPTIVE ALGORITHM
In this section, we introduce a unified approach for minimizing the adaptive regret of exponentially concave functions, as well as strongly convex functions.
§.§ Motivation
We first provide the definition of exponentially concave (abbr. exp-concave) functions <cit.>.
A function f(·): Ω↦ℝ is α-exp-concave if exp(-α f(·)) is concave over Ω.
For exp-concave functions, <cit.> have developed two meta-algorithms that take the online Newton step as its subroutine, and proved the following properties.
* The inefficient one has O(T) computational complexity per iteration, and its adaptive regret is O(d log T).
* The efficient one has O(log T) computational complexity per iteration, and its adaptive regret is O(d log^2 T).
As can be seen, there is a tradeoff between the computational complexity and the adaptive regret: A lighter computation incurs a looser bound and a tighter bound requires a higher computation. Our goal is to develop a unified approach, that allows us to trade effectiveness for efficiency explicitly.
§.§ Improved Following the Leading History (IFLH)
Let E be an online learning algorithm that is designed to minimize the static regret of exp-concave functions or strongly convex functions, e.g., online Newton step <cit.> or online gradient descent <cit.>. Similar to the approach of following the leading history (FLH) <cit.>, at any time t, we will instantiate an expert by applying the online learning algorithm E to the sequence of loss functions f_t,f_t+1,…, and utilize the strategy of learning from expert advice to combine solutions of different experts <cit.>. Our method is named as improved following the leading history (IFLH), and is summarized in Algorithm <ref>.
Let E^t be the expert that starts to work at time t. To control the computational complexity, we associate an ending time e^t with each E^t. The expert E^t is alive during the period [t, e^t-1]. In each round t, we maintain a working set of experts S_t, which contains all the alive experts, and assign a probability p_t^j to each E^j ∈ S_t. In Steps 6 and 7, we remove all the experts whose ending times are no larger than t. Since the number of alive experts has changed, we need to update the probabilities assigned to them, which is performed in Steps 12 to 14. In Steps 15 and 16, we add a new expert E^t to S_t, calculate its ending time according to Definition <ref> introduced below, and set p_t^t = 1/t. It is easy to verify that ∑_E^j ∈ S_t p_t^j=1. Let x_t^j be the output of E^j at the t-th round, where t ≥ j. In Step 17, we submit the weighted average of the x_t^j's with coefficients p_t^j as the output x_t, and suffer the loss f_t(x_t). From Steps 18 to 25, we use the exponential weighting scheme to update the weight of each expert E^j based on its loss f_t(x_t^j). In Step 21, we pass the loss function to all the alive experts so that they can update their predictions for the next round.
The difference between our IFLH and the original FLH is how to decide the ending time e^t of expert E^t. In this paper, we propose the following base-K ending time.
[Base-K Ending Time] Let K ≥ 2 be an integer, and write the representation of t in the base-K number system as
t= ∑_τ≥ 0β_τ K^τ
where 0 ≤β_τ <K for all τ≥ 0. Let k be the smallest integer such that β_k > 0, i.e., k = min{τ:β_τ > 0}. Then, the base-K ending time of t is defined as
E_K(t)=∑_τ≥ k+1β_τ K^τ + K^k+1.
In other words, the ending time is the number represented by the new sequence obtained by setting the first nonzero element in the sequence β_0,β_1,… to be 0 and adding 1 to the element after it.
Let's take the decimal system as an example (i.e., K=10). Then,
E_10(1)=E_10(2)=⋯ =E_10(9)=10,
E_10(11)=E_10(12)=⋯=E_10(19)=20,
E_10(10)=E_10(20)=⋯=E_10(90)=100.
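Definition <ref> is simple to implement. The sketch below (our own helper names) computes E_K(t) and the induced working set of alive experts; the assertions reproduce the decimal examples above and the working-set example given after the lemma below.

import math

def ending_time(t, K):
    # Base-K ending time E_K(t): zero out the lowest nonzero base-K digit of t
    # and add 1 to the digit above it.
    digits, s = [], t
    while s > 0:
        digits.append(s % K)   # digits[tau] = beta_tau, least significant first
        s //= K
    k = next(i for i, b in enumerate(digits) if b > 0)
    return sum(b * K**tau for tau, b in enumerate(digits) if tau > k) + K**(k + 1)

def working_set(t, K):
    # Experts j <= t that are still alive at time t, i.e., with E_K(j) > t.
    return [j for j in range(1, t + 1) if ending_time(j, K) > t]

assert [ending_time(t, 10) for t in (1, 9, 11, 19, 10, 90)] == [10, 10, 20, 20, 100, 100]
S = working_set(486, 10)
assert S == list(range(100, 500, 100)) + list(range(410, 490, 10)) + list(range(481, 487))
assert len(S) <= (math.floor(math.log10(486)) + 1) * 9   # the size bound of the lemma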
§.§ Theoretical Guarantees
When the base-K ending time is used in Algorithm <ref>, we have the following properties.
Suppose we use the base-K ending time in Algorithm <ref>.
* For any t ≥ 1, we have
|S_t| ≤(⌊log_K t ⌋+1 ) (K-1)=O( K log t/log K).
* For any interval I = [r, s] ⊆ [T], we can always find m segments I_j = [t_j, e^t_j-1], j ∈ [m]
with m ≤⌈log_K (s-r+1)⌉ +1, such that t_1=r, e^t_j=t_j+1, j ∈ [m-1], and e^t_m > s.
The first part of Lemma <ref> implies that the size of S_t is O(K log t/log K). An example of S_t in the decimal system is given below.
S_486={ 481, 482, …, 486,
410, 420, …, 480,
100, 200, …, 400
} .
The second part of Lemma <ref> implies that for any interval I=[r, s], we can find O(log s/log K) experts such that their survival periods cover I. Again, we present an example in the decimal system: The interval [111, 832] can be covered by
[111, 119], [120, 199], and [200, 999]
which are the survival periods of experts E^111, E^120, and E^200, respectively. Recall that E_10(111)=120, E_10(120)=200, and E_10(200)=1000.
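Putting the pieces together, here is a schematic Python sketch of the meta-algorithm, reusing the ending_time helper sketched above. The expert interface (predict/update) and the plain exponential-weight normalization are our simplifications of Steps 12-25 of Algorithm <ref>, not the paper's exact update rule.

import numpy as np

def iflh(make_expert, loss_fns, T, K, alpha):
    # Schematic IFLH: expert E^j is alive on [j, E_K(j) - 1]; alive experts are
    # combined by a weighted average and reweighted exponentially by their losses.
    experts, w, outputs = {}, {}, []
    for t in range(1, T + 1):
        for j in [j for j in experts if ending_time(j, K) <= t]:   # prune dead experts
            del experts[j], w[j]
        experts[t], w[t] = make_expert(t), 1.0 / t                 # add expert E^t
        order = list(experts)
        p = np.array([w[j] for j in order]); p /= p.sum()          # probabilities p_t^j
        xs = [experts[j].predict() for j in order]
        x_t = sum(pj * xj for pj, xj in zip(p, xs))                # weighted average
        outputs.append(x_t)
        f_t = loss_fns[t - 1]
        for j, xj in zip(order, xs):                               # exponential update
            w[j] *= np.exp(-alpha * f_t(xj))
            experts[j].update(f_t)
    return outputs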
We note that a similar strategy for deciding the ending time was proposed by <cit.> in the study of “prediction with expert advice”. The main difference is that their strategy is built upon base-2 number system and introduces an additional parameter g to compromise between the computational complexity and the regret, in contrast our method relies on base-K number system and uses K to control the tradeoff. Lemma 2 of <cit.> indicates an O(g log t ) bound on the number of alive experts, which is worse than our O(K log t/log K) bound by a logarithmic factor.
To present adaptive regret bounds, we introduce the following common assumption.
Both the gradient and the domain are bounded.
* The gradients of all the online functions are bounded by G, i.e., max_x∈Ω‖∇ f_t(x)‖≤ G for all f_t.
* The diameter of the domain Ω is bounded by B, i.e., max_x, x' ∈Ω‖x -x'‖≤ B.
Based on Lemma <ref>, we have the following theorem regarding the adaptive regret of exp-concave functions.
Suppose Assumption <ref> holds, Ω⊂^d, and all the functions are α-exp-concave. If online Newton step is used as the subroutine in Algorithm <ref>, we have
∑_t=r^s f_t(x_t) - min_x∈Ω∑_t=r^s f_t(x)≤((5d+1) m + 2/α + 5d mGB ) log T
where [r,s] ⊆ [T] and m ≤⌈log_K (s-r+1)⌉ +1. Thus,
SA-Regret(T,τ)≤((5d+1) m̅ + 2/α + 5d m̅ GB ) log T = O( d log^2 T /log K)
where m̅=⌈log_K τ⌉ +1.
From Lemma <ref> and Theorem <ref>, we observe that the adaptive regret is a decreasing function of K, while the computational cost is an increasing function of K. Thus, we can control the tradeoff by tuning the value of K.
Specifically, Lemma <ref> indicates the proposed algorithm has
(⌊log_K T ⌋+1 ) (K-1)=O( K log T/log K)
computational complexity per iteration. On the other hand, Theorem <ref> implies that for α-exp-concave functions that satisfy Assumption <ref>, the strongly adaptive regret of Algorithm <ref> is
((5d+1) m̅ + 2/α + 5d m̅ GB ) log T = O( d log^2 T/log K)
where d is the dimensionality and m̅= ⌈log_K (τ)⌉ +1.
We list several choices of K and the resulting theoretical guarantees in Table <ref>, and have the following observations.
* When K=2, we recover the guarantee of the efficient algorithm of <cit.>, and when K=T, we obtain the inefficient one.
* By setting K=⌈ T^1/γ⌉ where γ>1 is a small constant, such as 10, the strongly adaptive regret can be viewed as O(d log T), and at the same time, the computational complexity is also very low for a large range of T.
Next, we consider strongly convex functions.
A function f(·): Ω↦ℝ is λ-strongly convex if
f(x) ≥ f(y) + ⟨∇ f(y), x-y⟩ + λ/2‖x-y‖_2^2, ∀x, y∈Ω.
It is easy to verify that strongly convex functions with bounded gradients are also exp-concave <cit.>.
Suppose f(·): Ω↦ℝ is λ-strongly convex and ‖∇ f(x)‖≤ G for all x∈Ω. Then, f(·) is λ/G^2-exp-concave.
According to the above lemma, we still use Algorithm <ref> as the meta-algorithm, but choose online gradient descent as the subroutine. In this way, the adaptive regret does not depend on the dimensionality d.
Suppose Assumption <ref> holds, and all the functions are λ-strongly convex. If online gradient descent is used as the subroutine in Algorithm <ref>, we have
∑_t=r^s f_t(x_t) - min_x∈Ω∑_t=r^s f_t(x)
≤G^2/2λ(m+ (3 m +4) log T )
where [r,s] ⊆ [T] and m ≤⌈log_K (s-r+1)⌉ +1. Thus
SA-Regret(T,τ)≤G^2/2λ(m̅+ (3m̅ +4) log T )= O( log^2 T /log K)
where m̅=⌈log_K τ⌉ +1.
§ FROM ADAPTIVE TO DYNAMIC
In this section, we first introduce a general theorem that bounds the dynamic regret by the adaptive regret, and then derive specific regret bounds for convex functions, exponentially concave functions, and strongly convex functions.
§.§ Adaptive-to-Dynamic Conversion
Let I_1=[s_1, q_1], I_2 = [s_2, q_2], …, I_k=[s_k, q_k] be a partition of [1,T]. That is, they are successive intervals such that
s_1=1, q_i +1 = s_i+1, i ∈ [k-1], and q_k=T.
Define the local functional variation of the i-th interval as
V_T(i) = ∑_t=s_i+1^q_imax_x∈Ω |f_t(x) - f_t-1(x)|
and it is obvious that ∑_i=1^k V_T(i) ≤ V_T.[Note that in certain cases, the sum of local functional variation ∑_i=1^k V_T(i) can be much smaller than the total functional variation V_T. For example, when the sequence of functions only changes k times, we can construct the intervals based on the changing rounds such that ∑_i=1^k V_T(i)=0.]
Then, we have the following theorem for bounding the dynamic regret in terms of the strongly adaptive regret and the functional variation.
Let x_t^* ∈argmin_x∈Ω f_t(x). For all integers k ∈ [T], we have
D-Regret(x_1^*,…,x_T^*) ≤min_I_1,…,I_k∑_i=1^k ( SA-Regret(T,|I_i|) + 2 |I_i| · V_T(i) )
where the minimization is taken over any sequence of intervals that satisfy (<ref>).
The above theorem is analogous to Proposition 2 of <cit.>, which provides an upper bound for a special choice of the interval sequence. The main difference is that there is a minimization operation in our bound, which allows us to get rid of the issue of parameter selection. For a specific type of problems, we can plug in the corresponding upper bound of strongly adaptive regret, and then choose any sequence of intervals to obtain a concrete upper bound. In particular, the choice of the intervals may depend on the (possibly unknown) functional variation.
§.§ Convex Functions
For convex functions, we choose the meta-algorithm of <cit.> and take the online gradient descent as its subroutine. The following theorem regarding the adaptive regret can be obtained from that paper.
Under Assumption <ref>, the meta-algorithm of <cit.> is strongly adaptive with
SA-Regret(T,τ) ≤(12 BG/(√(2)-1) +8 √(7 log T + 5)) √(τ) = O(√(τlog T) ).
From Theorems <ref> and <ref>, we derive the following bound for the dynamic regret.
Under Assumption <ref>, the meta-algorithm of <cit.> satisfies
D-Regret(x_1^*,…,x_T^*)≤max{ (c +9 √(7 log T + 5)) √(T), (c +8 √(5) ) T^2/3 V_T^1/3/log^1/6 T + 24 T^2/3 V_T^1/3log^1/3 T }
= O ( max{√(T log T) , T^2/3 V_T^1/3log^1/3 T })
where c=12 BG/(√(2)-1).
According to Theorem 2 of <cit.>, we know that the minimax dynamic regret of convex functions is O(T^2/3 V_T^1/3). Thus, our upper bound is minimax optimal up to a polylogarithmic factor. Although the restarted online gradient descent of <cit.> achieves a dynamic regret of O(T^2/3V_T^1/3), it requires to know an upper bound of the functional variation V_T. In contrast, the meta-algorithm of <cit.> does not need any prior knowledge of V_T. We note that the meta-algorithm of <cit.> can also be used here, and its dynamic regret is on the order of max{√(T)log T, T^2/3 V_T^1/3log^2/3 T }.
§.§ Exponentially Concave Functions
We proceed to consider exp-concave functions, defined in Definition <ref>. Exponential concavity is stronger than convexity but weaker than strong convexity. It can be used to model many popular losses used in machine learning, such as the square loss in regression, logistic loss in classification and negative logarithm loss in portfolio management <cit.>.
For exp-concave functions, we choose Algorithm <ref> in this paper, and take the online Newton step as its subroutine. Based on Theorems <ref> and <ref>, we derive the dynamic regret of the proposed algorithm.
Let K=⌈ T^1/γ⌉, where γ>1 is a small constant. Suppose Assumption <ref> holds, Ω⊂^d, and all the functions are α-exp-concave. Algorithm <ref>, with online Newton step as its subroutine, is strongly adaptive with
SA-Regret(T,τ)≤ ((5d+1) (γ+1) + 2/α + 5d (γ+1) GB ) log T
= O(γ d log T)=O( d log T)
and its dynamic regret satisfies
D-Regret(x_1^*,…,x_T^*)
≤ ((5d+1) (γ+1) + 2/α + 5d (γ+1) GB +2) max{log T, √(T V_T log T)}
= O (d ·max{log T, √(T V_T log T)}).
To the best of our knowledge, this is the first dynamic regret that exploits exponential concavity. Furthermore, according to the minimax dynamic regret of strongly convex functions <cit.>, our upper bound is minimax optimal, up to a polylogarithmic factor.
§.§ Strongly Convex Functions
Finally, we study strongly convex functions. According to Lemma <ref>, we know that strongly convex functions with bounded gradients are also exp-concave. Thus, Corollary <ref> can be directly applied to strongly convex functions, and yields a dynamic regret of O(d √(T V_T log T)). However, the upper bound depends on the dimensionality d. To address this limitation, we use online gradient descent as the subroutine in Algorithm <ref>.
From Theorems <ref> and <ref>, we have the following theorem, in which both the adaptive and dynamic regrets are independent from d.
Let K=⌈ T^1/γ⌉, where γ>1 is a small constant. Suppose Assumption <ref> holds, and all the functions are λ-strongly convex. Algorithm <ref>, with online gradient descent as its subroutine, is strongly adaptive with
SA-Regret(T,τ) ≤G^2/2λ(γ+1+ (3 γ+7) log T )= O(γlog T)=O( log T)
and its dynamic regret satisfies
D-Regret(x_1^*,…,x_T^*)≤max{ γ G^2/λ + (5 γ G^2/λ +2 ) log T, γ G^2/λ√(T V_T/log T)+ (5 γ G^2/λ +2 )√(T V_T log T)}
= O ( max{log T, √(T V_T log T)}).
According to Theorem 4 of <cit.>, the minimax dynamic regret of strongly convex functions is O(√(T V_T)), which implies our upper bound is almost minimax optimal. By comparison, the restarted online gradient descent of <cit.> has a dynamic regret of O(log T √(T V_T)), but it requires to know an upper bound of V_T.
§ ANALYSIS
We now present the proofs of the main theorems.
§.§ Proof of Theorem <ref>
From the second part of Lemma <ref>, we know that there exist m segments
I_j = [t_j, e^t_j-1], j ∈ [m]
with m ≤⌈log_K (s-r+1)⌉ +1, such that
t_1=r, e^t_j=t_j+1, j ∈ [m-1], and e^t_m > s.
Furthermore, the expert E^t_j is alive during the period [t_j, e^t_j-1].
Using Claim 3.1 of <cit.>, we have
∑_t = t_j^e^t_j-1 f_t(x_t) - f_t(x^t_j_t) ≤1/α(log t_j + 2∑_t = t_j + 1^e^t_j-11/t), ∀ j ∈ [m-1]
where x^t_j_t_j, …, x^t_j_e^t_j-1 is the sequence of solutions generated by the expert E^t_j. Similarly, for the last segment, we have
∑_t = t_m^s f_t(x_t) - f_t(x^t_m_t) ≤1/α(log t_m + 2∑_t = t_m + 1^s1/t).
By adding things together, we have
∑_j=1^m-1(∑_t = t_j^e^t_j-1 f_t(x_t) - f_t(x^t_j_t) ) + ∑_t = t_m^s f_t(x_t) - f_t(x^t_m_t)
≤ 1/α∑_j=1^m log t_j + 2/α∑_t=r+1^s 1/t≤ (m + 2) log T/α .
According to the property of online Newton step <cit.>, we have, for any ∈Ω,
∑_t = t_j^e^t_j-1 f_t(^t_j_t) - f_t() ≤ 5d (1/α +GB )log T, ∀ j ∈ [m-1]
and
∑_t = t_m^s f_t(^t_m_t) - f_t() ≤ 5d (1/α +GB )log T.
Combining (<ref>), (<ref>), and (<ref>), we have,
∑_t=r^s f_t(_t) - ∑_t=r^s f_t() ≤((5d+1) m + 2/α + 5d mGB ) log T
for any ∈Ω.
§.§ Proof of Theorem <ref>
Lemma <ref> implies that all the λ-strongly convex functions are also (λ/G^2)-exp-concave. As a result, we can reuse the proof of Theorem <ref>. Specifically, (<ref>) with α=λ/G^2 becomes
∑_{j=1}^{m-1} ( ∑_{t=t_j}^{e^{t_j}-1} f_t(x_t) - f_t(x^{t_j}_t) ) + ∑_{t=t_m}^{s} f_t(x_t) - f_t(x^{t_m}_t) ≤ ((m+2)G^2/λ) log T.
According to the property of online gradient descent <cit.>, we have, for any x ∈ Ω,
∑_{t=t_j}^{e^{t_j}-1} f_t(x^{t_j}_t) - f_t(x) ≤ (G^2/(2λ)) (1 + log T), ∀ j ∈ [m-1]
and
∑_{t=t_m}^{s} f_t(x^{t_m}_t) - f_t(x) ≤ (G^2/(2λ)) (1 + log T).
Combining (<ref>), (<ref>), and (<ref>), we have
∑_{t=r}^{s} f_t(x_t) - ∑_{t=r}^{s} f_t(x) ≤ (G^2/(2λ)) ( m + (3m+4) log T )
for any x ∈ Ω.
§.§ Proof of Theorem <ref>
First, we upper bound the dynamic regret in the following way
D-Regret(x_1^*,…,x_T^*) = ∑_{i=1}^{k} ( ∑_{t=s_i}^{q_i} f_t(x_t) - ∑_{t=s_i}^{q_i} min_{x∈Ω} f_t(x) ) = ∑_{i=1}^{k} ( a_i + b_i ),
where
a_i := ∑_{t=s_i}^{q_i} f_t(x_t) - min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) and b_i := min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) - ∑_{t=s_i}^{q_i} min_{x∈Ω} f_t(x).
From the definition of strongly adaptive regret, we can upper bound a_i by
∑_{t=s_i}^{q_i} f_t(x_t) - min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) ≤ SA-Regret(T, |I_i|).
To upper bound b_i, we follow the analysis of Proposition 2 of <cit.>:
min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) - ∑_{t=s_i}^{q_i} min_{x∈Ω} f_t(x) = min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) - ∑_{t=s_i}^{q_i} f_t(x_t^*)
≤ ∑_{t=s_i}^{q_i} f_t(x_{s_i}^*) - ∑_{t=s_i}^{q_i} f_t(x_t^*) ≤ |I_i| · max_{t ∈ [s_i,q_i]} ( f_t(x_{s_i}^*) - f_t(x_t^*) ).
Furthermore, for any t ∈ [s_i,q_i], we have
f_t(x_{s_i}^*) - f_t(x_t^*) = f_t(x_{s_i}^*) - f_{s_i}(x_{s_i}^*) + f_{s_i}(x_{s_i}^*) - f_t(x_t^*)
≤ f_t(x_{s_i}^*) - f_{s_i}(x_{s_i}^*) + f_{s_i}(x_t^*) - f_t(x_t^*) ≤ 2 V_T(i).
Combining (<ref>) with (<ref>), we have
min_{x∈Ω} ∑_{t=s_i}^{q_i} f_t(x) - ∑_{t=s_i}^{q_i} min_{x∈Ω} f_t(x) ≤ 2 |I_i| · V_T(i).
Substituting the upper bounds of a_i and b_i into (<ref>), we arrive at
D-Regret(x_1^*,…,x_T^*) ≤ ∑_{i=1}^{k} ( SA-Regret(T, |I_i|) + 2 |I_i| · V_T(i) ).
Since the above inequality holds for any partition of [1,T], we can minimize over all partitions to obtain the tightest bound.
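Numerically, the minimization over equal-length partitions carried out in the corollaries amounts to a one-dimensional search over τ. A small sketch of the bound just derived, with an illustrative strongly adaptive bound plugged in:

```python
import numpy as np

def adaptive_to_dynamic_bound(T, V_T, sa_regret):
    """Evaluate min over tau of  sa_regret(T, tau) * (T / tau) + 2 * tau * V_T,
    i.e. the conversion bound restricted to equal-length intervals."""
    taus = np.arange(1, T + 1)
    vals = np.array([sa_regret(T, tau) * (T / tau) + 2 * tau * V_T
                     for tau in taus])
    return taus[np.argmin(vals)], vals.min()

# e.g. with the O(sqrt(tau * log T)) strongly adaptive bound for convex losses:
tau_star, bound = adaptive_to_dynamic_bound(
    10_000, 5.0, lambda T, tau: np.sqrt(tau * np.log(T)))
```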
§.§ Proof of Corollary <ref>
To simplify the upper bound in Theorem <ref>, we restrict to intervals of the same length τ, in which case k=T/τ. Then, we have
D-Regret(x_1^*,…,x_T^*) ≤ min_{1≤τ≤T} ∑_{i=1}^{k} ( SA-Regret(T,τ) + 2τ V_T(i) )
= min_{1≤τ≤T} ( SA-Regret(T,τ) T/τ + 2τ ∑_{i=1}^{k} V_T(i) )
≤ min_{1≤τ≤T} ( SA-Regret(T,τ) T/τ + 2τ V_T ).
Combining with Theorem <ref>, we have
D-Regret(x_1^*,…,x_T^*) ≤ min_{1≤τ≤T} ( (c + 8√(7 log T + 5)) T/√(τ) + 2τ V_T ),
where c = 12BG/(√2 - 1).
In the following, we consider two cases. If V_T ≥ √(log T / T), we choose
τ = ( T √(log T) / V_T )^{2/3} ≤ T
and have
D-Regret(x_1^*,…,x_T^*) ≤ (c + 8√(7 log T + 5)) T^{2/3} V_T^{1/3} / log^{1/6} T + 2 T^{2/3} V_T^{1/3} log^{1/3} T
≤ (c + 8√5) T^{2/3} V_T^{1/3} / log^{1/6} T + (2 + 8√7) T^{2/3} V_T^{1/3} log^{1/3} T.
Otherwise, we choose τ = T, and have
D-Regret(x_1^*,…,x_T^*) ≤ (c + 8√(7 log T + 5)) √(T) + 2 T V_T
≤ (c + 8√(7 log T + 5)) √(T) + 2 T √(log T / T)
≤ (c + 9√(7 log T + 5)) √(T).
In summary, we have
D-Regret(x_1^*,…,x_T^*) ≤ max{ (c + 9√(7 log T + 5)) √(T), (c + 8√5) T^{2/3} V_T^{1/3} / log^{1/6} T + 24 T^{2/3} V_T^{1/3} log^{1/3} T }
= O( max{ √(T log T), T^{2/3} V_T^{1/3} log^{1/3} T } ).
§.§ Proof of Corollary <ref>
The first part of Corollary <ref> is a direct consequence of Theorem <ref> by setting K=⌈ T^1/γ⌉.
Now, we prove the second part. Following an analysis similar to that of Corollary <ref>, we have
D-Regret(x_1^*,…,x_T^*) ≤ min_{1≤τ≤T} { ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB ) T log T / τ + 2τ V_T }.
Then, we consider two cases. If V_T ≥ log T / T, we choose
τ = √(T log T / V_T) ≤ T
and have
D-Regret(x_1^*,…,x_T^*) ≤ ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB + 2 ) √(T V_T log T).
Otherwise, we choose τ = T, and have
D-Regret(x_1^*,…,x_T^*) ≤ ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB ) log T + 2 T V_T
≤ ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB ) log T + 2 T (log T / T)
= ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB + 2 ) log T.
In summary, we have
D-Regret(x_1^*,…,x_T^*) ≤ ( ((5d+1)(γ+1) + 2)/α + 5d(γ+1)GB + 2 ) max{ log T, √(T V_T log T) }
= O( d · max{ log T, √(T V_T log T) } ).
§.§ Proof of Corollary <ref>
The first part of Corollary <ref> is a direct consequence of Theorem <ref> by setting K=⌈ T^1/γ⌉.
The proof of the second part is similar to that of Corollary <ref>. First, we have
D-Regret(x_1^*,…,x_T^*) ≤ min_{1≤τ≤T} { (G^2/(2λ)) ( γ+1 + (3γ+7) log T ) T/τ + 2τ V_T }
≤ min_{1≤τ≤T} { ( γ + 5γ log T ) G^2 T/(λτ) + 2τ V_T },
where the last inequality is due to the condition γ > 1.
Then, we consider two cases. If V_T ≥ log T / T, we choose
τ = √(T log T / V_T) ≤ T
and have
D-Regret(x_1^*,…,x_T^*) ≤ (γG^2/λ) √(T V_T / log T) + (5γG^2/λ) √(T V_T log T) + 2 √(T V_T log T)
= (γG^2/λ) √(T V_T / log T) + (5γG^2/λ + 2) √(T V_T log T).
Otherwise, we choose τ = T, and have
D-Regret(x_1^*,…,x_T^*) ≤ (γ + 5γ log T) G^2/λ + 2 T V_T
≤ (γ + 5γ log T) G^2/λ + 2 T (log T / T)
= γG^2/λ + (5γG^2/λ + 2) log T.
In summary, we have
D-Regret(x_1^*,…,x_T^*) ≤ max{ γG^2/λ + (5γG^2/λ + 2) log T, (γG^2/λ) √(T V_T / log T) + (5γG^2/λ + 2) √(T V_T log T) }
= O( max{ log T, √(T V_T log T) } ).
§ CONCLUSIONS AND FUTURE WORK
In this paper, we demonstrate that the dynamic regret can be upper bounded by the adaptive regret and the functional variation, which implies strongly adaptive algorithms are automatically equipped with tight dynamic regret bounds. As a result, we are able to derive dynamic regret bounds for convex functions, exp-concave functions, and strongly convex functions. Moreover, we provide a unified approach for minimizing the adaptive regret of exp-concave functions, as well as strongly convex functions.
The adaptive-to-dynamic conversion leads to a series of dynamic regret bounds in terms of the functional variation. As we mentioned before, dynamic regret can also be upper bounded by other regularities such as the path-length. It is interesting to investigate whether those kinds of upper bounds can also be established for strongly adaptive algorithms.
§ PROOF OF LEMMA <REF>
We first prove the first part of Lemma <ref>. Let k = ⌊log_K t⌋. Then, the integer t can be represented in the base-K number system as
t = ∑_{j=0}^{k} β_j K^j.
From the definition of the base-K ending time, the integers that are no larger than t and alive at t are
{ 1·K^0 + ∑_{j=1}^{k} β_j K^j, 2·K^0 + ∑_{j=1}^{k} β_j K^j, …, β_0·K^0 + ∑_{j=1}^{k} β_j K^j,
1·K^1 + ∑_{j=2}^{k} β_j K^j, 2·K^1 + ∑_{j=2}^{k} β_j K^j, …, β_1·K^1 + ∑_{j=2}^{k} β_j K^j,
…,
1·K^{k-1} + β_k K^k, 2·K^{k-1} + β_k K^k, …, β_{k-1}·K^{k-1} + β_k K^k,
1·K^k, 2·K^k, …, β_k·K^k }.
The total number of alive integers is upper bounded by
∑_{i=0}^{k} β_i ≤ (k+1)(K-1) = (⌊log_K t⌋ + 1)(K-1).
We proceed to prove the second part of Lemma <ref>. Let k= ⌊log_K r ⌋, and the representation of r in the base-K number system be
r= ∑_j=0^k β_j K^j.
We generate a sequence of segments as
I_1 = [t_1, e^t_1-1] =[∑_j=0^k β_j K^j, (β_1+1) K^1+∑_j=2^k β_j K^j - 1],
I_2 = [t_2, e^t_2-1] = [(β_1+1) K^1+∑_j=2^k β_j K^j, (β_2+1) K^2+∑_j=3^k β_j K^j - 1],
I_3 = [t_3, e^t_3-1] = [(β_2+1) K^2+∑_j=3^k β_j K^j, (β_3+1) K^3+∑_j=4^k β_j K^j - 1],
…
I_k = [t_k, e^t_k-1] = [(β_k-1+1)K^k-1 +β_k K^k, (β_k+1) K^k - 1],
I_k+1 = [t_k+1, e^t_k+1-1] = [(β_k+1) K^k, K^k+1-1],
I_k+2 = [t_k+2, e^t_k+2-1] = [K^k+1, K^k+2-1],
…
until s is covered. It is easy to verify that
t_m+1 > t_m + K^m-1 -1.
Thus, s will be covered by the first m intervals as long as
t_m + K^m-1 -1 ≥ s.
A sufficient condition is
r+ K^m-1 -1 ≥ s
which is satisfied when
m=⌈log_K (s-r+1)⌉ +1.
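The construction in this proof is easy to check numerically. The sketch below encodes one reading of the base-K ending time (zero the lowest non-zero base-K digit of t and increment the next one, consistent with the segments listed above), enumerates the alive integers, and verifies the counting bound of the first part.

```python
import math

def ending_time(t, K):
    """e_t: zero the lowest non-zero base-K digit of t and add one to
    the next digit -- our reading of the segment construction above."""
    p, v = 0, t
    while v % K == 0:
        p, v = p + 1, v // K
    return t - (v % K) * K ** p + K ** (p + 1)

def alive(t, K):
    """Integers s <= t whose interval [s, e_s - 1] is still alive at t."""
    return [s for s in range(1, t + 1) if ending_time(s, K) > t]

K, t = 3, 200
k = int(math.log(t, K))
assert len(alive(t, K)) <= (k + 1) * (K - 1)   # the bound of the first part
```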
§ PROOF OF LEMMA <REF>
The gradient of exp(-α f(x)) is
∇ exp(-α f(x)) = exp(-α f(x)) · ( -α ∇ f(x) ) = -α exp(-α f(x)) ∇ f(x),
and the Hessian is
∇^2 exp(-α f(x)) = -α exp(-α f(x)) ( -α ∇ f(x) ∇^⊤ f(x) ) - α exp(-α f(x)) ∇^2 f(x)
= α exp(-α f(x)) ( α ∇ f(x) ∇^⊤ f(x) - ∇^2 f(x) ).
Thus, f(·) is α-exp-concave if
α ∇ f(x) ∇^⊤ f(x) ≼ ∇^2 f(x).
We complete the proof by noticing
(λ/G^2) ∇ f(x) ∇^⊤ f(x) ≼ λ I ≼ ∇^2 f(x).
§ PROOF OF THEOREM <REF>
As pointed out by <cit.>, the static regret of online gradient descent <cit.> over any interval of length τ is upper bounded by 3BG√(τ). Combining this fact with Theorem 2 of <cit.>, we get Theorem <ref> in this paper.
| Online convex optimization is a powerful paradigm for sequential decision making <cit.>. It can be viewed as a game between a learner and an adversary: In the t-th round, the learner selects a decision x_t ∈ Ω, simultaneously the adversary chooses a function f_t(·): Ω ↦ ℝ, and then the learner suffers an instantaneous loss f_t(x_t). This study focuses on the full-information setting, where the learner can query the value and gradient of f_t <cit.>. The goal of the learner is to minimize the cumulative loss over T periods. The standard performance measure is regret, which is the difference between the loss incurred by the learner and that of the best fixed decision in hindsight, i.e.,
Regret(T) = ∑_{t=1}^{T} f_t(x_t) - min_{x∈Ω} ∑_{t=1}^{T} f_t(x).
The above regret is typically referred to as static regret in the sense that the comparator is time-invariant. The rationale behind this evaluation metric is that one of the decisions in Ω is reasonably good over the T rounds. However, when the underlying distribution of loss functions changes, the static regret may be too optimistic and fails to capture the hardness of the problem.
To address this limitation, new forms of performance measure, including adaptive regret <cit.> and dynamic regret <cit.>, were proposed and have received significant interest recently. Following the terminology of <cit.>, we define the strongly adaptive regret as the maximum static regret over intervals of length τ, i.e.,
SA-Regret(T,τ) = max_{[s, s+τ-1] ⊆ [T]} ( ∑_{t=s}^{s+τ-1} f_t(x_t) - min_{x∈Ω} ∑_{t=s}^{s+τ-1} f_t(x) ).
Minimizing the adaptive regret enforces the learner to have a small static regret over any interval of length τ. Since the best decision for different intervals could be different, the learner is essentially competing with a changing comparator.
A parallel line of research introduces the concept of dynamic regret, where the cumulative loss of the learner is compared against a comparator sequence u_1, …, u_T ∈ Ω, i.e.,
D-Regret(u_1,…,u_T) = ∑_{t=1}^{T} f_t(x_t) - ∑_{t=1}^{T} f_t(u_t).
It is well-known that in the worst case, a sublinear dynamic regret is impossible unless we impose some regularities on the comparator sequence or the function sequence <cit.>. A representative example is the functional variation defined below
V_T = ∑_{t=2}^{T} max_{x∈Ω} |f_t(x) - f_{t-1}(x)|.
<cit.> have proved that as long as V_T is sublinear in T, there exists an algorithm that achieves a sublinear dynamic regret. Furthermore, a general restarting procedure is developed, and it enjoys O(T^2/3V_T^1/3) and O(log T √(T V_T)) rates for convex functions and strongly convex functions, respectively.
However, the restarting procedure can only be applied when an upper bound of V_T is known beforehand, thus limiting its application in practice.
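As a toy illustration of these quantities (the drifting-quadratic setup below is our own construction, not taken from any of the cited works), one can compute V_T on a grid and compare the static and dynamic regrets of plain online gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
T, lo, hi = 500, -1.0, 1.0
theta = np.clip(np.cumsum(0.05 * rng.standard_normal(T)), lo, hi)
f = lambda t, x: (x - theta[t]) ** 2           # f_t(x) = (x - theta_t)^2

grid = np.linspace(lo, hi, 401)                # functional variation V_T
V_T = sum(np.max(np.abs(f(t, grid) - f(t - 1, grid))) for t in range(1, T))

x, losses = 0.0, []                            # plain OGD, eta_t ~ 1/sqrt(t)
for t in range(T):
    losses.append(f(t, x))
    x = float(np.clip(x - 2 * (x - theta[t]) / np.sqrt(t + 1), lo, hi))

static_regret = sum(losses) - min(sum(f(t, g) for t in range(T)) for g in grid)
dynamic_regret = sum(losses)                   # sum_t min_x f_t(x) = 0 here
```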
While both the adaptive and dynamic regrets aim at coping with changing environments, little is known about their relationship. This paper makes a step towards understanding their connections. Specifically, we show that the strongly adaptive regret in (<ref>), together with the functional variation, can be used to upper bound the dynamic regret in (<ref>). Thus, an algorithm with a small strongly adaptive regret is automatically equipped with a tight dynamic regret. As a result, we obtain a series of algorithms for minimizing the dynamic regret that do not need any prior knowledge of the functional variation. The main contributions of this work are summarized below.
* We provide a general theorem that upper bounds the dynamic regret in terms of the strongly adaptive regret and the functional variation.
* For convex functions, we show that the strongly adaptive algorithm of <cit.> has a dynamic regret of O(T^2/3 V_T^1/3log^1/3 T), which matches the minimax rate <cit.>, up to a polylogarithmic factor.
* For exponentially concave functions, we propose a strongly adaptive algorithm that allows us to control the tradeoff between the adaptive regret and the computational cost explicitly. Then, we demonstrate that its dynamic regret is O(d √(T V_T log T)), where d is the dimensionality. To the best of our knowledge, this is the first time that exponential concavity is utilized in the analysis of dynamic regret.
* For strongly convex functions, our proposed algorithm can also be applied and yields a dynamic regret of O(√(T V_T log T)), which is also minimax optimal up to a polylogarithmic factor. | We give a brief introduction to previous work on static, adaptive, and dynamic regrets in the context of online convex optimization.
§.§ Static Regret
The majority of studies in online learning are focused on static regret <cit.>. The classical online gradient descent achieves O(√(T)) and O(log T) regret bounds for convex and strongly convex functions, respectively <cit.>. Both the O(√(T)) and O(log T) rates are known to be minimax optimal <cit.>. When functions are exponentially concave, a different algorithm, named online Newton step, is developed and enjoys an O(d log T) regret bound, where d is the dimensionality <cit.>.
§.§ Adaptive Regret
The concept of adaptive regret is introduced by <cit.>, and later strengthened by <cit.>. Specifically, <cit.> introduce the weakly adaptive regret
WA-Regret(T) = max_{[s, q] ⊆ [T]} ( ∑_{t=s}^{q} f_t(x_t) - min_{x∈Ω} ∑_{t=s}^{q} f_t(x) ).
To minimize the adaptive regret, <cit.> have developed two meta-algorithms: an efficient algorithm with O(log T) computational complexity per iteration and an inefficient one with O(T) computational complexity per iteration. These meta-algorithms use an existing online method (that was possibly designed to have small static regret) as a subroutine.[For brevity, we ignored the factor of subroutine in the statements of computational complexities. The O(·) computational complexity should be interpreted as O(·) × s space complexity and O(·) × t time complexity, where s and t are space and time complexities of the subroutine per iteration, respectively.] For convex functions, the efficient and inefficient meta-algorithms have O(√(T log^3 T)) and O(√(T log T)) regret bounds, respectively. For exponentially concave functions, those rates are improved to O(d log^2 T) and O(d log T), respectively. We can see that the price paid for the adaptivity is very small: The rates of weakly adaptive regret differ from those of static regret only by logarithmic factors.
A major limitation of the weakly adaptive regret is that it does not respect short intervals well. Taking convex functions as an example, the O(√(T log^3 T)) and O(√(T log T)) bounds are meaningless for intervals of length O(√(T)). To overcome this limitation, <cit.> proposed the strongly adaptive regret SA-Regret(T,τ), which takes the length of the interval τ as a parameter, as indicated in (<ref>). From the definitions, we have SA-Regret(T,τ) ≤ WA-Regret(T), but this does not mean the notion of weakly adaptive regret is stronger, because an upper bound for WA-Regret(T) could be very loose for SA-Regret(T,τ) when τ is small.
If the strongly adaptive regret is small for all τ <T, we can guarantee the learner has a small regret over any interval of any length. In particular, <cit.> introduced the following definition.
Let R(τ) be the minimax static regret bound of the learning problem over τ periods. An algorithm is strongly adaptive, if
SA-Regret(T,τ) = O( (log T) · R(τ) ), ∀ τ.
It is easy to verify that the meta-algorithms of <cit.> are strongly adaptive for exponentially concave functions,[That is because (i) SA-Regret(T,τ) ≤ WA-Regret(T), and (ii) there is a (log T) factor in the definition of strong adaptivity.] but not for convex functions. Thus, <cit.> developed a new meta-algorithm that satisfies SA-Regret(T,τ) = O(√(τ) log T) for convex functions, and thus is strongly adaptive. The algorithm is also efficient and the computational complexity per iteration is O(log T). Later, the strongly adaptive regret of convex functions was improved to O(√(τ log T)) by <cit.>, and the computational complexity remains O(log T) per iteration. All the previously mentioned algorithms for minimizing adaptive regret need to query the gradient of the loss function at least O(log t) times in the t-th iteration. In a recent study, <cit.> demonstrate that the number of gradient evaluations per iteration can be reduced to 1 by introducing a surrogate loss.
§.§ Dynamic Regret
In a seminal work, <cit.> proposed to use the path-length defined as
P(u_1, …, u_T) = ∑_{t=2}^{T} ‖u_t - u_{t-1}‖_2
to upper bound the dynamic regret, where u_1, …, u_T ∈ Ω is a comparator sequence. Specifically, <cit.> proved that for any sequence of convex functions, the dynamic regret of online gradient descent can be upper bounded by O(√(T) P(u_1, …, u_T)). Another regularity of the comparator sequence, which is similar to the path-length, is defined as
P'(u_1, …, u_T) = ∑_{t=2}^{T} ‖u_t - Φ_t(u_{t-1})‖_2,
where Φ_t(·) is a dynamic model that predicts a reference point for the t-th round. <cit.> developed a novel algorithm named dynamic mirror descent and proved that its dynamic regret is on the order of √(T) P'(u_1, …, u_T). The advantage of P'(u_1, …, u_T) is that when the comparator sequence follows the dynamical model closely, it can be much smaller than the path-length P(u_1, …, u_T).
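Both regularities are straightforward to compute for a given comparator sequence; a short sketch, where Phi is any user-supplied dynamical model:

```python
import numpy as np

def path_length(u):
    """P(u_1,...,u_T) = sum_t ||u_t - u_{t-1}||_2 for u of shape (T, d)."""
    return float(np.sum(np.linalg.norm(np.diff(u, axis=0), axis=1)))

def model_path_length(u, Phi):
    """P'(u_1,...,u_T) with dynamical model Phi; Phi = identity recovers P."""
    preds = np.array([Phi(ut) for ut in u[:-1]])
    return float(np.sum(np.linalg.norm(u[1:] - preds, axis=1)))
```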
Let x_t^* ∈ argmin_{x∈Ω} f_t(x) be a minimizer of f_t(·). For any sequence u_1, …, u_T ∈ Ω, we have
D-Regret(u_1,…,u_T) = ∑_{t=1}^{T} f_t(x_t) - ∑_{t=1}^{T} f_t(u_t)
≤ D-Regret(x_1^*,…,x_T^*) = ∑_{t=1}^{T} f_t(x_t) - ∑_{t=1}^{T} min_{x∈Ω} f_t(x).
Thus, D-Regret(x_1^*,…,x_T^*) can be treated as the worst case of the dynamic regret, and many works have been devoted to minimizing it <cit.>.
When prior knowledge of P(x_1^*, …, x_T^*) is available, D-Regret(x_1^*,…,x_T^*) can be upper bounded by O(√(T P(x_1^*, …, x_T^*))) <cit.>. If all the functions are strongly convex and smooth, the upper bound can be improved to O(P(x_1^*, …, x_T^*)) <cit.>. The O(P(x_1^*, …, x_T^*)) rate is also achievable when all the functions are convex and smooth, and all the minimizers x_t^* lie in the interior of Ω <cit.>. In a recent study, <cit.> introduced a new regularity, the squared path-length
S(x_1^*, …, x_T^*) = ∑_{t=2}^{T} ‖x_t^* - x_{t-1}^*‖_2^2,
which could be much smaller than the path-length P(x_1^*, …, x_T^*) when the difference between successive minimizers is small. <cit.> developed a novel algorithm named online multiple gradient descent, and proved that D-Regret(x_1^*,…,x_T^*) is on the order of min( P(x_1^*, …, x_T^*), S(x_1^*, …, x_T^*) ) for (semi-) strongly convex and smooth functions.
Discussions Although closely related, adaptive regret and dynamic regret have been studied independently and there are few discussions of their relationship. In the literature, dynamic regret is also referred to as tracking regret or shifting regret <cit.>. In the setting of “prediction with expert advice”, <cit.> have shown that the tracking regret can be derived from the adaptive regret. In the setting of “online linear optimization in the simplex”, <cit.> introduced a generalized notion of shifting regret which unifies adaptive regret and shifting regret. Different from previous work, this paper considers the setting of online convex optimization, and illustrates that the dynamic regret can be upper bounded by the adaptive regret and the functional variation. | null | null | null | null |
http://arxiv.org/abs/1701.07655v1 | 20170126111650 | Magnetic and electronic properties of La$_3M$O$_7$ and possible polaron formation in hole-doped La$_3M$O$_7$ ($M$=Ru and Os) | [
"Bin Gao",
"Yakui Weng",
"Jun-Jie Zhang",
"Huimin Zhang",
"Yang Zhang",
"Shuai Dong"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics, Southeast University, Nanjing 211189, China
Oxides with 4d/5d transition metal ions are physically interesting for their particular crystalline structures as well as their spin-orbit coupled electronic structures. Recent experiments revealed a series of 4d/5d transition metal oxides R_3MO_7 (R: rare earth; M: 4d/5d transition metal) with unique quasi-one-dimensional M chains. Here first-principles calculations have been performed to study the electronic structures of La_3OsO_7 and La_3RuO_7. Our study confirms both of them to be Mott insulating antiferromagnets with identical magnetic order. The reduced magnetic moments, which are much smaller than the value expected for the ideal high-spin state (3 t_2g orbitals occupied), are attributed to the strong p-d hybridization with oxygen ions, rather than to the spin-orbit coupling. Ca-doping of La_3OsO_7 and La_3RuO_7 not only modulates the nominal carrier density but also affects the orbital order as well as the local distortions. The Coulombic attraction and the particular orbital order favour the formation of polarons, which might explain the puzzling insulating behavior of doped 5d transition metal oxides. In addition, our calculations predict that Ca-doping can trigger ferromagnetism in La_3RuO_7 but not in La_3OsO_7.
Keywords: 4d/5d transition metal oxides, polaron, antiferromagnetism
§ INTRODUCTION
Transition metal oxides have attracted enormous attention for their plethora of members, diverse properties, novel physics, and great impact on potential applications based on correlated electrons. In past decades, the overwhelming balance of interest was devoted to compounds with 3d elements, which show high-T_C superconductivity, colossal magnetoresistivity, multiferroicity, and so on <cit.>. However, the 4d and 5d counterparts received much less attention, and only in very recent years have a few of them, e.g. Sr_2IrO_4, come into focus <cit.>. In principle, for 4d/5d electrons, the electron-electron repulsion, e.g. the Hubbard U, is much weaker due to their more extended wave functions, while the spin-orbit coupling (SOC) is much stronger due to the large atomic number, compared with 3d electrons <cit.>. These characteristics may lead to non-conventional physics in 4d/5d metal oxides, e.g. p-wave superconductors, spin-orbit Mott insulators, Kitaev magnets, topological materials, and possible high-T_C superconductors <cit.>.
Till now, the most studied 4d/5d metal oxides have quasi-two-dimensional layered structures (e.g. Sr_2IrO_4 and Na_2IrO_3) or three-dimensional structures (e.g. SrIrO_3 and SrRuO_3). Recently, 4d/5d metal oxides with quasi-one-dimensional chains have also been synthesized, which may lead to unique low-dimensional physics, e.g. charge density waves, spin-Peierls transitions, and novel magnetic excitations <cit.>. For example, recent experiments reported the basic physical properties of R_3MO_7, which adopts the weberite structure, as shown in Fig. <ref>(a) <cit.>. Since here the 4d/5d electrons are mostly confined to one-dimensional chains instead of a two-dimensional plane or three-dimensional framework, their electronic and magnetic structures may be markedly different from those of their higher-dimensional 4d/5d counterparts. Given the decreased electron correlations and increased SOC of the 4d/5d electrons, the physical behavior of these compounds may also differ from that of quasi-one-dimensional 3d metal oxides <cit.>. In fact, 3d metal oxides rarely form the weberite R_3MO_7 structure. It is therefore of physical interest to study these new systems.
Taking La_3OsO_7 for example, recent experimental studies reported its structural, transport, and magnetic properties, characterized by magnetic susceptibility, x-ray diffraction, as well as neutron diffraction <cit.>. The corner-shared OsO_6 octahedra form chains along the [001] direction of the orthorhombic framework. The nearest-neighbor Os-Os distance is 3.81 Å within a chain, but 6.75 Å between chains. The Os-O-Os bond angle within a chain is about 153^∘, implying strong octahedral tilting, which is also widely observed in other oxides. Its ground state is an antiferromagnetic (AFM) insulator. Ca-doped La_3OsO_7 was also studied. Despite the change of nominal carrier density, surprisingly, this hole-doped system remains an insulator (or a semiconductor), violating the rigid band scenario <cit.>. Similar robust insulating behavior was also found in some doped iridates <cit.>, which were expected to show superconductivity after doping <cit.>.
In this work, we have performed systematic first-principles calculations to understand the electronic and magnetic properties of La_3OsO_7 as well as the isostructural La_3RuO_7. The doping effect has also been studied, which may provide a reasonable explanation for the insulating behavior, based on polaron formation. To our best knowledge, there were very few theoretical studies on these two materials before. Only Khalifah et al. calculated several magnetic states of La_3RuO_7 <cit.>. Even so, their predicted ground state (see Fig. <ref>(b)) seems to be inaccurate, according to our results.
§ MODEL & METHODS
All of the following calculations were performed using the Vienna ab initio Simulation Package (VASP) based on the generalized gradient approximation (GGA) <cit.>. The newly developed PBEsol functional is adopted <cit.>, which improves the description of the crystal structure compared with the older PBE functional. The plane-wave cutoff is 550 eV and a Monkhorst-Pack k-point mesh centered at the Γ point is adopted.
Starting from the low-temperature experimental orthorhombic (No. 63 Cmcm) structures <cit.>, the lattice constants and internal atomic positions are fully optimized until the Hellmann-Feynman forces are all less than 0.01 eV/Å. The Hubbard repulsion U_eff (=U-J) is imposed on Ru's 4d orbitals and Os's 5d orbitals <cit.>. Various values of U_eff have been tested from 0 eV to 4 eV. It is found that U_eff(Ru)=1 eV is the best choice to reproduce the experimental structure of La_3RuO_7, while for La_3OsO_7 the bare GGA without U_eff is the best choice. Compared with the experimental values <cit.>, the deviations of the calculated lattice constants are only <0.8% for La_3OsO_7 and 0.5% for La_3RuO_7, providing a good starting point to study other physical properties. These choices of U_eff are quite reasonable considering the gradually decreasing Hubbard repulsion from 3d to 5d.
Considering the presence of heavy atoms, the relativistic SOC is also taken into consideration, and the results are compared with those of calculations without SOC.
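For readers wishing to reproduce such a setup, a minimal INCAR along these lines might look as follows. Only the plane-wave cutoff, the force threshold, the PBEsol choice, and U_eff are taken from the text; every other tag value, and the assumed (La, Ru, O) species ordering, are our own illustrative choices.

```python
incar = """\
SYSTEM   = La3RuO7 collinear AFM relaxation (illustrative sketch)
GGA      = PS          ! PBEsol exchange-correlation functional
ENCUT    = 550         ! plane-wave cutoff (eV), as quoted above
ISPIN    = 2           ! spin-polarised calculation
LDAU     = .TRUE.      ! DFT+U
LDAUTYPE = 2           ! Dudarev scheme: U_eff = U - J
LDAUL    = -1  2 -1    ! U on the d shell of Ru only; (La, Ru, O) order assumed
LDAUU    =  0 1.0 0    ! U_eff(Ru) = 1 eV; set to 0 for La3OsO7
LDAUJ    =  0 0.0 0
IBRION   = 2           ! ionic relaxation (conjugate gradient)
ISIF     = 3           ! relax ions + cell shape + volume
EDIFFG   = -0.01       ! stop when all forces < 0.01 eV/Angstrom
! LSORBIT = .TRUE.     ! switch on for the SOC-enabled runs
"""
with open("INCAR", "w") as fh:
    fh.write(incar)
```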
§ RESULTS & DISCUSSION
§.§ Undoped La_3MO_7: magnetic orders and reduced moments
First, the magnetic ground state is checked by comparing several possible magnetic orders, including the ferromagnetic (FM) state and various AFM ones (AFM I-IV as shown in Fig. <ref>(b-e)). For AFM I, III, and IV states, the -up-down-up-down- ordering is adopted within each chain, but with different couplings between chains. Taking the FM state as the energy reference, the energies of all candidates are summarized in Table 1, which suggests AFM IV to be the probable ground state for both M=Os and Ru.
By mapping the system to a classical spin model, the exchange coefficients between neighboring spins (normalized to |S|=1), within each chain and between chains as indicated in Fig. <ref>(a), can be extracted as: J_1=72.00 meV, J_2=8.34 meV, and J_3=2.96 meV for M=Os; J_1=3.45 meV, J_2=0.66 meV, and J_3=0.37 meV for M=Ru. Obviously, the exchanges between Os chains are quite prominent even for the nearest-neighbor chains (distance up to 6.75 Å), implying strongly coupled AFM chains, different from the one-dimensional intuition. These exchanges are much stronger in La_3OsO_7 than the corresponding ones in La_3RuO_7. These characteristics of La_3OsO_7 benefit from the more extended distribution of the 5d orbitals.
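The mapping itself amounts to solving a small linear system: each magnetic configuration contributes one equation E(config) - E(FM) = Σ_b n_b J_b, where n_b counts how many J_1/J_2/J_3 bond energies flip sign relative to FM. The numbers below are placeholders only (neither the energies nor the bond counts are taken from the paper); the least-squares step is the point.

```python
import numpy as np

# E(AFM) - E(FM) per magnetic cell (meV) and the signed J1/J2/J3 bond
# counts of each order relative to FM -- BOTH ILLUSTRATIVE; substitute
# the DFT energies and the counts read off the actual magnetic cells.
dE = np.array([-250.0, -180.0, -300.0, -310.0])   # AFM I..IV (hypothetical)
counts = np.array([[-4,  0,  0],
                   [-4, -4,  0],
                   [-4, -4, -4],
                   [-4, -8, -4]], dtype=float)    # hypothetical bond counts
J, *_ = np.linalg.lstsq(counts, dE, rcond=None)   # solve for (J1, J2, J3)
J1, J2, J3 = J
```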
Experimentally, the AFM transition temperatures of La_3OsO_7 are much higher than the corresponding ones of La_3RuO_7. For La_3OsO_7, the intrachain magnetic correlation emerges near ∼100 K (mainly due to J_1) and the fully three-dimensional AFM ordering occurs at 45 K (also determined by J_2 and J_3) <cit.>. In contrast, the signal for magnetic ordering in La_3RuO_7 appears at ∼17 K with short-range character <cit.>. Note that Ref. <cit.> once predicted the ground state of La_3RuO_7 to be AFM I, which is ruled out according to our calculation. More neutron experiments are needed to refine the subtle magnetic order of La_3RuO_7.
Second, the total density of states (DOS) and atomic-projected density of states (PDOS) of La_3OsO_7 are displayed in Fig. <ref>(a). Clearly, the system is insulating with a band gap of ∼0.53 eV, even in the pure GGA calculation. Both the topmost valence band(s) and the bottommost conducting band(s) of La_3OsO_7 are from Os, in particular the t_2g orbitals. Since the 5d orbitals have a large SOC coefficient, we also calculate the DOS and PDOS with SOC enabled, which are presented in Fig. <ref>(b) for comparison. However, there is no qualitative difference between the SOC-enabled and SOC-disabled calculations. The quantitative differences include: 1) a shrunken band gap of ∼0.37 eV (SOC-enabled); 2) a slightly reduced local magnetic moment, from 1.661 μ_B/Os (SOC-disabled) to 1.578 μ_B/Os (SOC-enabled). In particular, the magnitude of the orbital moment is only ∼0.087 μ_B. Note that this local moment is obtained by integrating the wave function within the Wigner-Seitz radius of Os (0.58 Å) and is thus not absolutely precise. Even so, the theoretical values are still quite close to the experimental one, ∼1.71 μ_B/Os <cit.>. Such a local moment is significantly reduced from the high-spin expectation (3 μ_B/Os) for the three t_2g electrons of Os^5+ here, but agrees with recent neutron diffraction results for Os^5+ in several double perovskites <cit.>.
According to the PDOS (insets of Fig. <ref>(a-b)), every Os seems to be in the high-spin state, i.e. only spin-up electrons within the Wigner-Seitz radius. Then how can the reduced local moment be understood? The above SOC-enabled calculation has ruled out SOC as the main contribution, since it can only slightly affect the value of the moment. Instead, the real mechanism is the covalency between Os and O. As revealed in the PDOS, there exists strong hybridization between Os's 5d and O's 2p orbitals around the Fermi energy level, owing to the spatially extended 5d orbitals. In fact, the previous neutron study also attributed the reduced moment to the hybridization between Os and O <cit.>.
Furthermore, the same calculations have been done for La_3RuO_7 and the DOS/PDOS are shown in Fig. <ref>(c-d), which are qualitatively similar to those of La_3OsO_7. The local magnetic moment of Ru^5+ is 1.892 μ_B (SOC-enabled) or 1.900 μ_B (SOC-disabled), and such a negligible difference implies a weaker SOC effect compared with La_3OsO_7. In particular, the magnitude of the orbital moment is only ∼0.018 μ_B per Ru, even lower than that of Os. The total moment is also lower than the ideal 3 μ_B but higher than the moment of Os, which is reasonable considering the more localized distribution of 4d orbitals compared with 5d ones. The reduced moment of Ru^5+ is also due to the covalency between Ru and O, as indicated in Fig. <ref>(c).
The calculated band gap of La_3RuO_7 is 0.70 eV, which is higher than the experimental value (∼0.28 eV) extracted from transport measurements <cit.>. This inconsistency is probably due to the polycrystalline nature of the samples and the presence of small amounts of the highly insulating La_2O_3, as admitted in Ref. <cit.>. More measurements, especially of the optical absorption spectrum, are needed to clarify the intrinsic band gap of La_3RuO_7.
The aforementioned weak SOC effects on the magnetism and band structures of La_3RuO_7 and La_3OsO_7 seem to contradict the intuitive expectation of strong SOC for 4d/5d electrons. This paradox can be understood as follows. Since in La_3MO_7 the low-lying t_2g orbitals are half-filled (t_2g^3), the Hund coupling between t_2g electrons prefers the high-spin state, in which the orbital moment is mostly quenched. Then the net effect of SOC is weak even if the SOC coefficient is large. Other 5d electronic systems, with more or fewer electrons than t_2g^3, e.g. Sr_2IrO_4, can activate the SOC effects.
§.§ Chemical doping and polaron formation
Doping is a frequently used method to tune the physical properties of materials. For Mott insulators, proper doping may result in superconductivity (e.g. for cuprates) or colossal magnetoresistivity (e.g. for manganites). One of the most anticipated doping effects in 5d metal oxides is possible superconductivity, as predicted in Sr_2IrO_4 <cit.>. However, till now, not only has the superconductivity not been found, but there is also an unresolved debate regarding the metallicity of doped Sr_2IrO_4. Some experiments reported metallic transport behavior upon tiny doping and observed Fermi arcs using angle-resolved photoelectron spectroscopy (ARPES) <cit.>, while some others reported robust insulating (or semiconducting) behavior even upon heavy doping by element substitution and field-effect gating <cit.>.
Similarly, for La_3OsO_7, experiment found that Ca-doping up to 6.67% could reduce the resistivity but the system remained insulating <cit.>. It is therefore interesting to investigate the doping effect. In our calculation, by using one Ca to replace one La in a unit cell, i.e. 8.33% doping, the crystal structure is re-relaxed with various magnetic orders. The ground state then turns out to be AFM III, slightly different from the original AFM IV (see Table 2 for more details). Even so, the in-chain AFM order remains robust.
In contrast, when the 8.33% Ca-doping is applied to La_3RuO_7, our calculations predict that the ground state magnetism would probably transform from AFM IV to FM, different from the Os-based counterpart above. As summarized in Table 2, for both the lowest-energy FM state and the second-lowest-energy AFM II state, the in-chain FM order is unambiguous. This result is also reasonable considering the much weaker in-chain antiferromagnetism (i.e. J_1) of La_3RuO_7. Thus the antiferromagnetism of La_3RuO_7 should be more fragile against chemical doping. Further experiments are needed to verify our prediction.
As shown in Fig. <ref>, the DOS's of doped La_3OsO_7 and La_3RuO_7 have finite values at the Fermi level, implying metallic behavior, which seems to be opposite to the experimental observations for doped La_3OsO_7. However, a careful analysis finds that this finite DOS at the Fermi level is due to a technical issue of the calculation. The substitution of one La by one Ca brings one hole into the system. However, the AFM state implies at least doubly degenerate bands (spin up and spin down). Thus one hole in doubly degenerate bands always leads to half-filling, as observed in our DOS. Therefore, the finite DOS at the Fermi level does not guarantee metallicity of Ca-doped La_3OsO_7, while the metallicity of Ca-doped La_3RuO_7 needs experimental verification.
The SOC-enabled calculations have also been performed for the doped La_3MO_7. However, due to the partial hole concentration (∼1/4 per M), the SOC effect is not prominent. For example, the near-Fermi-level DOS (Fig. <ref>(g-h)) are similar to the corresponding non-SOC ones. The local moments for the ground states are also listed in Table <ref>, which are only slightly lower than the original one without SOC, especially for the Ru case.
The PDOS's of Ca-doped La_3OsO_7 show that the Os ions can be classified into two types: a) the two near-Ca Os's (one spin up and one spin down); b) the other two Os's. Their PDOS's are slightly different, and the type-b Os's are less affected by the Ca-doping, namely the doping effect has a tendency to be localized. In contrast, the PDOS's of Ca-doped La_3RuO_7 show that all four Ru ions are almost equally affected by the Ca-doping.
To clarify the effect of doping, the charge density distribution of the hole in Ca-doped La_3OsO_7 is visualized in Fig. <ref>(a). Here we only extract the wave function of the above-Fermi-level partial bands, which can represent the hole (half a spin-up hole plus half a spin-down hole). Clearly, the orbitals of the hole are of d_xy type on the Os site and of p_x type on the O site (the chain direction is chosen as the z axis), implying a spatially extended wave function. According to the Slater-Koster equations <cit.>, the lying-down d_xy orbital has a very weak hopping amplitude along the z-axis, if not ideally zero. Thus, the hole will be restricted near the Ca dopant by the Coulombic interaction, leading to the semiconducting behavior of Ca-doped La_3OsO_7.
According to the extensive experience with 3d electron systems, lattice distortions, e.g. Jahn-Teller modes, will always be activated by partially occupied t_2g orbitals (or e_g orbitals) to split the energy degeneracy between/among orbitals. By carefully analyzing the bond lengths of the oxygen octahedra, it is easy to verify the effect of hole-modulated lattice distortions. First, the breathing mode Q_1 can be defined as (l_x+l_y+l_z)/√3 to characterize the size of the oxygen octahedral cage, where l denotes the O-M-O bond length along a particular axis <cit.>. After the doping, the changes of Q_1 are -5.307 pm for the near-Ca Os's and -3.636 pm for the other two Os's. These shrunken octahedral cages are due to the Coulombic attraction between the positively charged hole on Os and the negatively charged oxygen ions. Second, the Jahn-Teller modes Q_2 and Q_3 can be defined as (l_x-l_y)/√2 and (-l_x-l_y+2l_z)/√6 respectively, which can split the degeneracy among the triplet t_2g orbitals or between the doublet e_g orbitals. For La_3OsO_7, the original Jahn-Teller modes are Q_2=0.100 pm and Q_3=-3.938 pm. This prominent Q_3 mode prefers the d_xy orbital for electrons. Therefore, the d_xy hole after doping is not driven by this pre-existing lattice distortion, but can only be due to the Coulombic interaction from Ca^2+, since the spatial distribution of the d_xy hole is closer to the dopant (see Fig. <ref>). This Coulombic-driven d_xy hole then suppresses the Q_3 mode: Q_3=-2.414 pm for the near-Ca Os's and -1.007 pm for the other two Os's. Meanwhile, the Q_2 mode is enhanced: Q_2=0.942 pm for the near-Ca Os's and 1.152 pm for the other two Os's.
For doped La_3RuO_7, the changes of Q_1 are -4.074 pm for the near-Ca Ru's and -3.470 pm for the other two, similar to the case of doped La_3OsO_7. For original La_3RuO_7, the Jahn-Teller modes are: Q_2=0.110 pm, Q_3=5.351 pm. In contrast with La_3OsO_7, this lattice distortion disfavours the d_xy orbital (for electrons) energetically. The Coulombic-driven d_xy hole then further enhances this positive Q_3 mode: Q_3=7.574 pm for the near-Ca Ru's and 7.913 pm for the other two Ru's. Meanwhile, the Q_2 mode is enhanced as in La_3OsO_7: Q_2=1.227 pm for the near-Ca Ru's and 1.129 pm for the other two.
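These mode amplitudes follow directly from the three O-M-O bond lengths; a small helper implementing the definitions quoted above (the example bond lengths in the comment are hypothetical):

```python
import numpy as np

def octahedral_modes(lx, ly, lz):
    """Breathing (Q1) and Jahn-Teller (Q2, Q3) modes of an MO6 octahedron
    from its O-M-O bond lengths, with the chain direction as the z axis."""
    Q1 = (lx + ly + lz) / np.sqrt(3.0)
    Q2 = (lx - ly) / np.sqrt(2.0)
    Q3 = (-lx - ly + 2.0 * lz) / np.sqrt(6.0)
    return Q1, Q2, Q3

# change of the modes upon doping (bond lengths in pm, hypothetical values):
dQ1, dQ2, dQ3 = np.subtract(octahedral_modes(400.2, 399.8, 401.5),
                            octahedral_modes(401.0, 400.9, 402.6))
```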
The localization of the hole can be further confirmed by supercell calculations. As shown in Fig. <ref>(c-e), the hole occupancies on the four near-Ca Os ions are much more prominent than on the other Os ions, which lie only one unit cell away from the dopant. Similar results exist for doped La_3RuO_7. This localized hole, together with the distorted lattice, forms the polaron.
According to previous studies, there are magnetic polarons in manganites, which are ferromagnetic clusters of several Mn sites embedded in an AFM background <cit.>. This scenario is quite possible for La_3RuO_7 considering the ferromagnetically-aligned Ru ions as revealed in Table <ref>. However, for La_3OsO_7, both the result for the minimal cell (shown in Table <ref>) and the calculation for the doubled supercell (along the c-axis) disfavour a magnetic polaron, at least a small (up to three-site) magnetic polaron. This is also reasonable considering the differences among 3d/4d/5d electrons. The 3d electrons have a strong Hubbard interaction which prefers localized magnetic moments. Thus the energy gain from forming a magnetic polaron is large <cit.>. In contrast, the 5d electrons are more spatially extended and have a weaker Hubbard interaction, which is disadvantageous for forming magnetic polarons. In fact, in Ref. <cit.>, the second nearest-neighbor hopping (which is equivalent to the spatially extended effect) can suppress the formation of magnetic polarons, which provides a hint for understanding the difference between 3d and 5d polarons. Of course, the current calculations cannot fully exclude the possibility of magnetic polarons with larger sizes or higher dimensionality, which are beyond our computational capability considering the fact that a minimal cell of La_3OsO_7 already contains 44 ions.
In short, considering the above results of the hole restricted by the Coulombic interaction and the lattice distortions that follow the particular d_xy-orbital hole, it is reasonable to argue that the carriers generated by Ca doping are mostly localized near the dopant to form polarons and contribute to the semiconducting behavior. This scenario may explain the puzzle of why the expected metallicity is absent in some doped 5d Mott insulators.
§ CONCLUSION
In summary, two 4d/5d metal oxides La_3MO_7 (M=Os and Ru) with unique quasi-one-dimensional M chains have been studied systematically using density functional theory calculations. Their magnetic ground states are revealed to be identical, in agreement with the recent neutron study of La_3OsO_7 but different from early calculations on La_3RuO_7. Due to the half-filled t_2g configuration, the spin-orbit coupling is not crucial in these two systems. Moreover, the doping of Ca has been predicted to affect the magnetism, leading to different effects for M=Os and Ru. In particular, a hole with sharp d_xy orbital character is formed around the Ca dopant. This orbital occupation is driven by the Coulombic interaction, and further tunes the lattice distortions. The experimentally observed semiconducting behavior after doping is explained by the formation of polarons.
§ ACKNOWLEDGMENT
This work was supported by National Natural Science Foundation of China (Grant No. 11674055) and Fundamental Research Funds for the Central Universities.
§ REFERENCES
| Transition metal oxides have attracted enormous attention for their plethora of members, diverse properties, novel physics, and great impact on potential applications based on correlated electrons. In past decades, the overwhelming balance of interest was devoted to compounds with 3d elements, which show high-T_C superconductivity, colossal magnetoresistivity, multiferroicity, and so on <cit.>. However, the 4d and 5d counterparts received much less attention, and only in very recent years have a few of them, e.g. Sr_2IrO_4, come into focus <cit.>. In principle, for 4d/5d electrons, the electron-electron repulsion, e.g. the Hubbard U, is much weaker due to their more extended wave functions, while the spin-orbit coupling (SOC) is much stronger due to the large atomic number, compared with 3d electrons <cit.>. These characteristics may lead to non-conventional physics in 4d/5d metal oxides, e.g. p-wave superconductors, spin-orbit Mott insulators, Kitaev magnets, topological materials, and possible high-T_C superconductors <cit.>.
Till now, the most studied 4d/5d metal oxides have quasi-two-dimensional layered structures (e.g. Sr_2IrO_4 and Na_2IrO_3) or three-dimensional structures (e.g. SrIrO_3 and SrRuO_3). Recently, 4d/5d metal oxides with quasi-one-dimensional chains have also been synthesized, which may lead to unique low-dimensional physics, e.g. charge density waves, spin-Peierls transitions, and novel magnetic excitations <cit.>. For example, recent experiments reported the basic physical properties of R_3MO_7, which adopts the weberite structure, as shown in Fig. <ref>(a) <cit.>. Since here the 4d/5d electrons are mostly confined to one-dimensional chains instead of a two-dimensional plane or three-dimensional framework, their electronic and magnetic structures may be markedly different from those of their higher-dimensional 4d/5d counterparts. Given the decreased electron correlations and increased SOC of the 4d/5d electrons, the physical behavior of these compounds may also differ from that of quasi-one-dimensional 3d metal oxides <cit.>. In fact, 3d metal oxides rarely form the weberite R_3MO_7 structure. It is therefore of physical interest to study these new systems.
Taking La_3OsO_7 for example, recent experimental studies reported its structural, transport, and magnetic properties, characterized by magnetic susceptibility, x-ray diffraction, as well as neutron diffraction <cit.>. The corner-shared OsO_6 octahedra form chains along the [001] direction of the orthorhombic framework. The nearest-neighbor Os-Os distance is 3.81 Å within a chain, but 6.75 Å between chains. The Os-O-Os bond angle within a chain is about 153^∘, implying strong octahedral tilting, which is also widely observed in other oxides. Its ground state is an antiferromagnetic (AFM) insulator. Ca-doped La_3OsO_7 was also studied. Despite the change of nominal carrier density, surprisingly, this hole-doped system remains an insulator (or a semiconductor), violating the rigid band scenario <cit.>. Similar robust insulating behavior was also found in some doped iridates <cit.>, which were expected to show superconductivity after doping <cit.>.
In this work, we have performed systematic first-principles calculations to understand the electronic and magnetic properties of La_3OsO_7 as well as the isostructural La_3RuO_7. The doping effect has also been studied, which may provide a reasonable explanation for the insulating behavior, based on polaron formation. To our best knowledge, there were very few theoretical studies on these two materials before. Only Khalifah et al. calculated several magnetic states of La_3RuO_7 <cit.>. Even so, their predicted ground state (see Fig. <ref>(b)) seems to be inaccurate, according to our results. | null | null | null | null | In summary, two 4d/5d metal oxides La_3MO_7 (M=Os and Ru) with unique quasi-one-dimensional M chains have been studied systematically using density functional theory calculations. Their magnetic ground states are revealed to be identical, in agreement with the recent neutron study of La_3OsO_7 but different from early calculations on La_3RuO_7. Due to the half-filled t_2g configuration, the spin-orbit coupling is not crucial in these two systems. Moreover, the doping of Ca has been predicted to affect the magnetism, leading to different effects for M=Os and Ru. In particular, a hole with sharp d_xy orbital character is formed around the Ca dopant. This orbital occupation is driven by the Coulombic interaction, and further tunes the lattice distortions. The experimentally observed semiconducting behavior after doping is explained by the formation of polarons.
http://arxiv.org/abs/1701.08160v1 | 20170127190000 | A gravitationally-boosted MUSE survey for emission-line galaxies at z>~5 behind the massive cluster RCS 0224 | [
"Renske Smit",
"A. M. Swinbank",
"Richard Massey",
"Johan Richard",
"Ian Smail",
"J. -P. Kneib"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Accepted 2017 January 26. Received 2017 January 16; in original form 2016 September 4
We present a VLT/MUSE survey of lensed high-redshift galaxies behind the z=0.77 cluster RCS 0224-0002.
We study the detailed internal properties of a highly magnified (μ∼29)
z = 4.88 galaxy seen through the cluster. We detect wide-spread nebular Civλλ1548,1551 Å emission from this galaxy as well as a bright Lyα halo with a spatially-uniform wind and absorption profile across 12 kpc in the image plane.
Blueshifted high- and low-ionisation interstellar absorption indicates the presence of a high-velocity outflow (Δ v∼300 km s^-1) from the galaxy.
Unlike similar observations of galaxies at z∼2-3, the Lyα emission from the halo emerges close to the systemic velocity - an order of magnitude lower in velocity offset than predicted in “shell”-like outflow models. To explain these observations we favour a model of an outflow with a strong velocity gradient, which changes the effective column density seen by the Lyα photons.
We also search for high-redshift Lyα emitters and identify 14 candidates between z=4.8-6.6, including an over-density at z=4.88, of which only one has a detected counterpart in HST/ACS+WFC3 imaging.
galaxies: high-redshift – galaxies: formation – galaxies: evolution
§ INTRODUCTION
Over the last decade, deep observations of blank fields, in particular with the Hubble Space Telescope (HST), have identified a substantial population of galaxies beyond z>3, using broadband photometry <cit.>.
Despite the progress in identifying large numbers of galaxies, it remains challenging to obtain spectroscopic redshifts and determine the physical properties of these systems. This is largely due to their inherent faintness and the fact that bright rest-frame optical emission-line tracers such as Hα and [Oiii], which are traditionally used to measure the properties of the ISM, are shifted to observed mid-infrared wavelengths for sources at z≳3-4. The small physical sizes of galaxies at z>3 compared to typical ground-based seeing also make spatially resolved observations difficult to obtain, inhibiting measurements of dynamical masses, star-formation distributions and wind energetics.
Recently, the commissioning of the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT) has led to an advance in the identification and characterisation of z∼3-6 galaxies through wide-field and deep spectroscopy of the rest-frame ultraviolet (UV) spectra of these sources.
For example, MUSE is starting to probe the physical properties of Hii regions within galaxies by exploiting gravitational lensing, through their faint UV nebular emission lines such as Civλλ1548,1551 Å, Heiiλ1640 Å, Oiii]λλ1661,1666 Å and Ciii]λλ1907,1909 Å <cit.>, lines which are rarely seen in local star-forming galaxies <cit.>. These lines are produced either by young, metal-poor stellar populations with high-ionization parameters <cit.>, or by gas photo-ionisation by faint active galactic nuclei <cit.>. Furthermore, MUSE has enabled the detailed modelling of extended Lyα emission, gaining insights into the inflowing neutral gas and/or wind energetics in the circum-galactic medium (CGM) of galaxies <cit.>.
Moreover, MUSE is a promising new instrument for undertaking unbiased spectroscopic surveys. <cit.> used a 27 hour MUSE pointing of the Hubble Deep Field South (HDF-S) to detect 89 Lyman-α emitters in the redshift range z∼3-6. Remarkably, 66% of the Lyα emitters above z≳5 have no counterpart in the HST broadband imaging (to a limiting magnitude of m_i∼29.5).
In this paper, we extend current work on characterising the UV spectra of intrinsically faint high-redshift galaxies out to z∼5 through the analysis of VLT/MUSE observations of one of the most strongly magnified galaxies known at z>3: the highly magnified (μ=13-145×) z=4.88 lensed arc seen through the core of the compact z=0.77 cluster RCS 0224-0002 <cit.>.
S07 observed nebular [Oii] emission and an extended Lyα halo in this z=4.88 source and hypothesized that a galactic-scale bipolar outflow has recently burst out of this system and into the intergalactic medium (IGM). Our new observations obtain a significantly higher signal-to-noise ratio (S/N) in the UV emission and continuum, allowing us to resolve the shape of the Lyα profile and detect the UV interstellar medium (ISM)
lines. Furthermore, our MUSE pointing covers the complete z∼6 critical curves, which allows for an efficient survey for faint high-redshift Lyα emitters. These sources are important targets to study in order to understand the properties of the ultra-faint galaxy population that could have contributed significantly to reionisation.
This paper is organised as follows: we describe our MUSE dataset and we summarize the complementary data presented by S07 in <ref>. We analyse the spectral properties of the main z=4.88 arc in <ref>. We present the results of a blind search for Lyα emitters in <ref> and finally we summarise our findings in <ref>.
For ease of comparison with previous studies we take H_0=70 km s^-1 Mpc^-1, Ω_m=0.3, and Ω_Λ=0.7, resulting in an angular scale of 6.4 kpc per arcsecond at z=4.88. Magnitudes are quoted in the AB system <cit.>.
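The quoted angular scale follows directly from this cosmology and can be checked with astropy:

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
scale = cosmo.kpc_proper_per_arcmin(4.88).to(u.kpc / u.arcsec)
print(scale)   # ~6.4 kpc / arcsec at z = 4.88, as quoted above
```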
§ DATA
§.§ HST imaging
We obtained HST imaging from the Space Telescope Science Institute MAST data archive (GO: 14497, PI: Smit and GO: 9135, PI: Gladders). RCS 0224-0002 (α = 02:24:34.26, δ = -00:02:32.4) was observed with the Wide Field Planetary Camera 2 (WFPC2) using the F666W (V_666) filter (10.5 ks), the Advanced Camera for Surveys (ACS) using the F814W (I_814) filter (2.2 ks) and the Wide Field Camera 3 (WFC3) using the F125W (J_125) and F160W (H_160) filters (2.6 ks each). The ACS and WFC3 images were reduced with Drizzlepac v2.1.3 to 0.05 and 0.128 arcsec pixel^-1 resolution respectively. The depths of the I_814, J_125 and H_160 band images are 26.3, 26.8 and 26.7 mag respectively (5σ in a 0.5 arcsec diameter aperture). The WFPC2 data were reduced with the STSDAS package from IRAF to ∼0.1 arcsec pixel^-1 resolution as described by S07. A false-color image using the I_814, J_125 and H_160 bands is shown in Figure <ref>. The color image shows two bright arcs at z=2.40 (lensed images B1–B6) and z=4.88 (lensed images 1–4).
§.§ MUSE spectroscopy
We observed the cluster RCS 0224-0002 with a single pointing (∼1× 1 arcmin) of the VLT/MUSE IFU spectrograph <cit.> between November 13, 2014, and September 16, 2015, programme 094.A-0141. Each individual exposure was 1500 seconds, with spatial dithers of ∼15 arcsec to account for cosmic rays and defects. One observing block was partly taken in twilight and therefore omitted from the final data-cube, resulting in a co-added exposure time of 13.5 ks. All the observations we use were taken in dark time with <0.8" V-band seeing and clear atmospheric conditions.
We reduced the data with the public MUSE ESOREX pipeline version 1.2.1, including bias, dark, flat-fielding, sky subtraction, wavelength and flux calibrations. For each individual exposure we used the lamp flat-field taken adjacent in time to the observation for illumination correction. The reduced data-cubes were registered and stacked using the EXP_COMBINE routine.
The seeing measured on the combined exposure is ∼0.68" full width at half maximum (FWHM), with a spectral resolution of 94 km s^-1 (2.2 Å) FWHM at 7000 Å.
A false-color image constructed from the final MUSE cube is shown in Figure <ref>. We use median images centered on 5375 Å, 6125 Å and 8275 Å as broadband inputs and we add an 8 Å wide mean image centered on 7146 Å to the red channel to emphasize the Lyα emission in the z=4.88 arc. All bright HST sources are detected in the MUSE continuum, while the z=4.88 arc is clearly detected with spatially extended Lyα emission. A number of other Lyα sources are identified at the same redshift (see <ref>).
§.§ SINFONI spectroscopy
To complement the MUSE dataset we exploit the SINFONI IFU spectroscopy presented by S07. Briefly, the SINFONI data were taken in the HK grating (λ/Δλ = 1700) covering the [Oii]λλ3726.1,3728.8 Å doublet redshifted to ∼2.2 μm. The ∼8×8 arcsec field-of-view (with a spatial resolution of 0.25 arcsec pixel^-1) covers the lensed images 2 and 3 of the z=4.88 arc.
§ ANALYSIS AND DISCUSSION
§.§ Lens model
To constrain the intrinsic properties of the emission-line galaxies in this study we require an accurate lens-model. S07 constructed a simple mass-model of RCS 0224-0002 with the two main elliptical galaxies in the centre of the cluster and the dark matter component approximated by single truncated pseudo-isothermal elliptical mass distributions. Their primary observational constraints on the mass configuration are the four lensed images of the z=4.88 arc. However, our MUSE observations also cover the other arcs in the cluster. We extract spectra over the multiply-imaged central blue arcs (B1–B6 in Figure <ref>) and detect Ciii]λλ1907,1909 Å, Oiii]λλ1661,1666 Å emission and Siiiλ1403 Å, Siivλλ1394,1403 Å, Siiiλ1526 Å, Civλλ1548,1551 Å, Feiiλ1608 Å, Aliiλ1671 Å absorption in images B1–B6 and measure a redshift z_Ciii]=2.396±0.001 from the integrated light of these images (Smit et al., in preparation).
We use these new constraints to update the lens model by S07. As in S07, the lens modelling is performed using the LENSTOOL software <cit.>. LENSTOOL is a parametric method for modelling galaxy clusters that uses a Markov Chain Monte Carlo (MCMC) fit for a specified number of mass peaks. Each mass peak corresponds to a dark matter halo modelled with a truncated pseudo-isothermal elliptical that is characterised by a position (RA, dec), velocity dispersion σ_ V, ellipticity ε, truncation radius r_ cut and core radius r_ core.
For our updated mass model we include mass components for the brightest 22 cluster members and two components for the cluster halo.
We include constraints from the six images B1–B6 of the z=2.396 galaxy arc, including the de-magnified image in the centre. Another faint arc is identified in the HST imaging (D1–D3 in Table <ref>), just inside the z=4.88 arc, but we do not detect any emission lines from this source in the MUSE data-cube. Furthermore, we search the MUSE cube for bright multiply lensed line emitters and find a Lyα emitter without an HST continuum counterpart at z_ Lyα=5.500 ± 0.002 (labelled C1 at α = 02:24:34.86, δ= -00:02:16.2 and C2 at α = 02:24:34.02, δ= -00:02:36.3 in Figure <ref>). The locations of the Lyα emitter images are well predicted by the lens-model that uses all other constraints and therefore we include this doubly lensed image as an additional constraint. In Figure <ref> we show the critical curve of our new model at a redshift of z=4.88 and we list all multiple images used to constrain the model in Table <ref> in Appendix <ref>.
Our mass model differs from that of S07 in three ways. First, owing to the different assumed cosmology, our model is ∼35% less massive: M=(3.8±0.2)×10^14M_⊙ compared to M=(5.9±0.4)×10^14M_⊙. We recover the S07 mass if we switch back to their cosmology. Second, the inclusion of mass from all cluster member galaxies makes the z=2.4 critical line better match the observed features of lensed system B. Third, our distribution of mass is more elongated toward the North-West. The S07 model had close to circular symmetry, forced by a prior on the ellipticity of the cluster-scale dark matter. This resulted in a scatter of rms_i^A=1.21 between the predicted and observed positions of images A1-A4 (G. Smith et al. private comm.). By dropping the prior (and simultaneously imposing constraints from newly identified lens systems), a cluster-scale mass distribution with ellipticity ε=0.63 achieves rms_i^A=0.52, or rms_i=1.03 for all image systems. However, we achieve a still better fit (rms_i^A=0.48, rms_i=0.88) using two cluster-scale halos. These were given Gaussian priors centred on the two BCGs. The first gets asymmetrically offset to the North-West; the second remains near CG2. This two-halo model achieves a superior log(Likelihood) of -26.57 and χ^2=61.3 in 11 degrees of freedom compared to the best-fitting one-halo model, which has log(Likelihood) of -142.92 and χ^2=294 in 17 degrees of freedom. The best-fit parameters are listed in Table <ref> in Appendix <ref>.
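For intuition, the rms figure-of-merit used above is just the quadratic mean of the offsets between the observed image positions and the positions predicted by the lens model. A minimal sketch in Python (the coordinates below are made-up placeholders, not the actual LENSTOOL output):

```python
import numpy as np

def image_plane_rms(observed, predicted):
    """Quadratic mean of angular offsets between observed and
    model-predicted positions of the multiple images (arcsec)."""
    observed = np.asarray(observed)    # shape (n_images, 2)
    predicted = np.asarray(predicted)
    offsets = np.hypot(*(observed - predicted).T)  # per-image offset
    return np.sqrt(np.mean(offsets ** 2))

# Hypothetical positions for images A1-A4 (illustrative only):
obs = [[0.0, 0.0], [5.2, -3.1], [9.8, 1.4], [-2.3, 7.7]]
pred = [[0.3, -0.4], [5.0, -2.8], [10.2, 1.9], [-2.1, 7.3]]
print(image_plane_rms(obs, pred))  # rms offset in arcsec
```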
§.§ The z=4.88 arc
The z=4.88 arc was first discovered in the Red-Sequence Cluster Survey (RCS) by <cit.>. <cit.> detected the bright Lyα emission in galaxy images 1–3 at z_Lyα=4.8786 with VLT/FORS-2 spectroscopy. S07 targeted the arc with VLT/VIMOS (galaxy images 1–4) and VLT/SINFONI (galaxy images 2–3) spectroscopy and detected Lyα at z_Lyα=4.8760 and [Oii]λλ3726.1,3728.8 Å at z_[Oii]=4.8757. S07 measured a star-formation rate of 12±2 M_⊙ yr^-1, a velocity gradient of ≲60 km s^-1, and an estimated dynamical mass of ∼ 10^10 M_⊙ within 2 kpc from the [Oii] emission lines.
For our MUSE study of the z=4.88 arc we will assume the systemic velocity of the galaxy is best estimated by z_sys=z_[Oii]=4.8757±0.0005 (integrated over galaxy images 2–3). Furthermore, from our lens model we find luminosity-weighted amplifications of μ=29^+9_-11, μ=21^+12_-8, μ=138^+7_-74 and μ=1.30^+0.01_-0.01 for images 1, 2, 3 and 4 respectively (note that image 3 has a very high amplification, but also a very large uncertainty, because the arc crosses the critical curve). These values are slightly higher than the mean, luminosity-weighted magnification of μ=16±2 found by S07 for images 1, 2 and 3 integrated (though within the uncertainties for images 1 and 2). The uncertainty on our numbers is largely due to the fact that a small shift of the critical curve can change the luminosity-weighted amplification significantly. In particular, we note that the high magnification of image 3 is dominated by a few pixels that overlap with the critical curve, while the estimated magnification for any modelling method is most uncertain near the critical curves <cit.>.
To measure the detailed properties of the UV spectrum of this galaxy we first construct a one-dimensional spectrum (up to ∼1600 Å in the rest-frame) of the z=4.88 arc from the MUSE cube by measuring the integrated (non-weighted) spectrum extracted from pixels in the lensed images 1, 2 and 3 with a S/N>2σ in the continuum image of the MUSE data-cube. The resulting spectrum is shown in Figure <ref>. As well as bright Lyα emission, which has an observed equivalent width (EW) of 793±159 Å (rest-frame EW_0=135±27 Å), we clearly detect the absorption-line doublet Siivλλ1394,1403 Å, which originates in the ISM and/or CGM, and the emission-line doublet Civλλ1548,1551 Å, with some evidence for an absorption component as well (see inset panels), which likely arises from a combination of stellar, nebular and ISM/CGM components. The observational parameters of the UV spectroscopic features in the MUSE data are listed in Table <ref> (see Appendix <ref> for measurements on the individual lensed galaxy images).
In the next sections we will first discuss the morphology of the emission lines, before moving to a detailed analysis of the spectral properties of the z=4.88 arc, the kinematics of the system and the physical picture that emerges from these observations.
§.§.§ Lyα morphology
The Lyα emission in the z=4.88 arc (see Figure <ref>) appears to be significantly extended.
Lyman Break galaxies and Lyα emitters at z∼2-6 often exhibit extended Lyα halos around the stellar continuum of the galaxies, following an exponential surface brightness distribution <cit.>. These Lyα halos are thought to be generated either by cooling radiation <cit.> or by resonant scattering from a central powering source, such as star-formation or AGN <cit.>.
First, we investigate the morphology of Lyα in the z=4.88 arc behind RCS 0224-0002.
Figure <ref> shows the source-plane reconstruction of the continuum-subtracted Lyα halo. We use image 4, since the Lyα halos of images 1 to 3 are incomplete and merged together (see the right panel of Figure <ref>). For the spatial profile we use bins of 0.1 arcsec in concentric circles around the peak flux of the Lyα emission. The MUSE Lyα halo has an observed FWHM of 2.2 kpc, while the HST continuum has a FWHM of 0.2 kpc. A number of foreground cluster galaxies contaminate the measurement of the stellar spatial profile directly from the continuum image. Therefore we construct a broadband image redwards of Lyα, centred on 7400 Å, and we subtract a continuum image bluewards of the Lyα break centred on 6975 Å in order to remove most of the foreground contamination. We mask any remaining flux from foreground sources by hand. Furthermore, we extract the PSF from a nearby star and place this at the position of the Lyα peak in our lens model in order to construct the source-plane image of the PSF and measure its spatial profile.
The Lyα halo appears roughly isotropic, with little substructure, except for an extended lower luminosity region in the South-East. Comparing the Lyα halo with the UV continuum image in Figure <ref> indicates the extended nature of the faint Lyα profile beyond the continuum. The Lyα halo is consistent with an exponential profile. For comparison with Lyα halos in the literature, we measure the Petrosian radius <cit.> of the halo, defined as the annulus where the Lyα flux is equal to η times the mean flux within the annulus. The Petrosian radius is a useful measure, since it is only weakly dependent on the seeing of the observations. For η=20% we find R_ p20,Lyα=8.1±0.4 kpc and R_ p20,UV=2.9±0.8 kpc, which is somewhat lower than the range of Petrosian radii R_ p20,Lyα∼10-30 kpc (for R_ p20,UV∼1.3-3.5 kpc) found by <cit.>, but similar to some of the largest Lyα halos around z∼0 analogues in the LARS sample <cit.>, which typically show radii R_ p20,Lyα≲8 kpc (Figure <ref>).
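To make the Petrosian measurement concrete, the sketch below computes R_p20 from an azimuthally averaged surface brightness profile. The exponential scale length in the example is illustrative (chosen so that R_p20 ≈ 8 kpc), not a fit to the data:

```python
import numpy as np

def petrosian_radius(r, sb, eta=0.2):
    """Radius at which the local surface brightness equals eta times
    the mean surface brightness interior to that radius.
    r  : centres of thin concentric annuli (kpc, increasing)
    sb : azimuthally averaged surface brightness in each annulus
    """
    dr = np.gradient(r)
    flux_in = np.cumsum(sb * 2 * np.pi * r * dr)  # enclosed flux
    area_in = np.cumsum(2 * np.pi * r * dr)       # enclosed area
    ratio = sb / (flux_in / area_in)              # local / interior mean
    below = np.where(ratio < eta)[0]
    return r[below[0]] if below.size else np.nan

r = np.linspace(0.05, 20.0, 400)   # kpc
sb = np.exp(-r / 2.2)              # illustrative exponential halo
print(petrosian_radius(r, sb))     # ~8 kpc for a 2.2 kpc scale length
```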
<cit.> and <cit.> use radiative transfer simulations to investigate the expected morphology of Lyα halos generated by cooling radiation and they predict concentrated emission, which can extend out to 10-30 kpc. While stacked Lyα halos extend out to ∼ 100 kpc <cit.>, indicating cooling is not the origin of Lyα emission in typical galaxies, our observations do not have sufficient S/N to trace the z=4.88 Lyα halo beyond 10 kpc, necessary to rule out a gas cooling scenario. We will therefore further investigate whether the faint, extended Lyα emission is produced by resonant scattering from a central source or by cooling radiation in <ref> based on the spectral properties of the line.
§.§.§ Spectral properties of the Lyα line
High-redshift Lyα emitters can exhibit a wide range of spectral properties, such as blueshifted and redshifted emission, single and double peaked lines and different line widths and velocity offsets, which gives insight into the emission mechanism of Lyα and the column density and velocity distribution of the ISM and CGM neutral gas <cit.>.
For the z=4.88 arc, the Lyα emission line profile is very asymmetric and we find a single redshifted Lyα line, with a peak at z_Lyα=4.8770±0.0005 (using the wavelength and width of the spectral element where Lyα peaks), ∼40-90 km s^-1 redshifted with respect to the [Oii] emission, which marks the systemic redshift, and FWHM_red=285 km s^-1, with very little flux bluewards of the [Oii] redshift (see Figures <ref> and <ref>). We set an upper limit on the presence of a weaker blue line; at -v_red we find an upper limit on the flux ratio of any blue peak to the red peak of F_peak,blue/F_peak,red<0.027. Furthermore, we detect a faint tail of redshifted Lyα emission out to ∼1000 km s^-1. A simplified model for this asymmetric line shape is that of a Gaussian emission line profile convolved with a Voigt profile, describing the collisional and Doppler broadening of interstellar absorption lines, as shown in Figure <ref>, where we fix the redshift of the underlying Gaussian emission to the [Oii] redshift z=4.8757. The best-fit model in Figure <ref> indicates an Hi absorber with a column density of 10^19 cm^-2; however, the fit fails to reproduce both the narrow peak and the high-velocity tail of the Lyα line. In fact, the emission-line component of the Lyα line shows a strongly non-Gaussian shape; instead we observe an exponential profile as a function of velocity over two orders of magnitude in flux (Figure <ref>), remarkably similar to the exponential surface brightness profile (Figure <ref>).
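A hedged sketch of this simplified fit is given below, with the intrinsic Gaussian emission pinned at the systemic velocity and a single Voigt-profile absorber whose central optical depth stands in for the Hi column density (converting τ0 to N_HI requires the Lyα oscillator strength, which we omit here); all starting values are illustrative:

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def lya_model(v, amp, sig_em, v_abs, tau0, b, gamma):
    """Gaussian emission fixed at the systemic velocity (v = 0, km/s),
    attenuated by a Voigt-profile absorber centred at v_abs.
    b is the Doppler parameter (Gaussian sigma = b / sqrt(2));
    gamma is the Lorentzian width; tau0 the central optical depth."""
    emission = amp * np.exp(-0.5 * (v / sig_em) ** 2)
    shape = voigt_profile(v - v_abs, b / np.sqrt(2), gamma)
    peak = voigt_profile(0.0, b / np.sqrt(2), gamma)
    return emission * np.exp(-tau0 * shape / peak)

# Illustrative use on an extracted spectrum (vel, flux, err):
# p0 = [1.0, 300.0, -50.0, 10.0, 100.0, 30.0]
# popt, pcov = curve_fit(lya_model, vel, flux, p0=p0, sigma=err)
```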
S07 observed the modestly redshifted narrow Lyα line in combination with the high-velocity tail, and interpreted this as a combination of emission from the central source combined with redshifted emission from an outflow. To test this model, in Figure <ref> we show the spatial variation of the spectral Lyα profile in the image-plane of the lensed galaxy image 1 (the highest S/N image). While we used the source-plane reconstruction of galaxy image 4 for deriving the spatial properties of the Lyα halo, we use the brightest galaxy image for spectral analysis to obtain higher signal-to-noise information. First, we partition the halo along contours of constant observed Lyα flux. While the Lyα flux in the halo drops by more than an order of magnitude compared to the emission over the stellar continuum, the shape of the Lyα profile, after normalising to the peak flux, changes only marginally from the Lyα profile extracted over the stellar continuum. Across the lensed image, the wavelength of the peak of the Lyα line changes by less than ∼50 km s^-1, while the width and the shape of the high-velocity tail stay nearly constant <cit.>.
These results differ strongly from the scenario described in S07, where the main peak of the Lyα profile comes directly from the star-forming regions and the high velocity wing is re-scattered in an expanding shell of gas within the CGM. For this model to hold we would expect the star-formation component (the peak of Lyα) to drop off rapidly with increasing radius, while the back-scattered CGM component changes little with radius, and therefore the peak flux would shift to higher velocities and the shape would change significantly. The spatially-uniform Lyα spectral profile instead suggests that the Lyα peak is also produced or resonantly scattered within the CGM, which generates Lyα emission with a wide range in velocities.
We can test this further by searching for any deviations from the average Lyα profile. We use the spectrum extracted over the stellar continuum as a model for fitting the Lyα line in each individual pixel of the z=4.88 arc, leaving the normalisation as the only free parameter and considering only the peak of the Lyα line as a model constraint. After subtracting our one-parameter model we detect only a weak residual. In Figure <ref> we show the spectrum extracted over the region with the largest residual, which shows a slightly offset peak compared to the Lyα extracted over the stellar continuum and a broadened profile out to 1000 km s^-1, indicating a collimated high-velocity outflow on top of the isotropic Lyα halo component that is described above.
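The one-parameter fit has a closed-form solution: for a fixed template, the χ²-minimising amplitude is a weighted projection of the data onto the template. A minimal sketch (array names are ours):

```python
import numpy as np

def fit_template_norm(data, template, sigma):
    """Best-fit amplitude A minimising chi^2 for the model A*template,
    plus the chi^2 of that fit. All inputs are 1D spectra on the
    same wavelength grid; sigma is the per-pixel noise."""
    w = 1.0 / sigma ** 2
    A = np.sum(w * data * template) / np.sum(w * template ** 2)
    chi2 = np.sum(w * (data - A * template) ** 2)
    return A, chi2

# Applied pixel-by-pixel: `template` is the Lya profile extracted over
# the stellar continuum; residuals = data - A*template then reveal any
# deviations from a spatially uniform line shape.
```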
Given the extended nature of the Lyα emission (<ref>) we now consider various generation mechanisms for the emission. Since the spatially invariant Lyα line profile indicates that only a minor fraction of the Lyα emission reaches us directly from the galaxy, it is reasonable to consider whether the halo can be produced by cooling radiation from the CGM. <cit.> and <cit.> model such scenarios using radiative transfer simulations and find that Lyα should typically be double peaked and blueshifted with respect to the systemic velocity of the galaxy. Assuming these models provide a reasonable description of the system, the single redshifted Lyα peak we observe excludes cooling as a source of Lyα photons in the z=4.88 arc.
To reproduce the Lyα line profile for the z=4.88 arc we thus favour a picture where a central powering source is surrounded by a largely isotropic halo of neutral gas, which dampens Lyα bluewards of the systemic velocity and resonantly scatters the vast majority of photons towards higher velocities within the expanding gas behind the galaxy <cit.>.
The strong similarity between the Lyα surface brightness profile (Fig. <ref>) and the spectral profile in the z=4.88 arc could suggest the presence of a smoothly varying velocity gradient in the CGM gas that resonantly scatters the Lyα photons into our line of sight. This scenario is qualitatively in good agreement with the Lyα profile considered in <cit.>, shown in Appendix <ref>, where the relatively low column density (at a given velocity) created by the strong velocity gradient causes the escape of Lyα photons predominantly at low velocity (and small radii), while a weak high-velocity tail is still observed due to the photons that are resonantly scattered through the accelerating outflow. A model with a gas velocity gradient furthermore predicts the absence of a blue peak, which is difficult to reproduce in a shell model: a low-velocity shell with a low covering fraction, which gives rise to a red Lyα peak close to the systemic velocity, also produces a nearly symmetric blue peak.
In <ref> we will discuss how this model fits into a physical picture that can explain our full set of observations.
§.§.§ C IV emission
The detection of narrow (FWHM = 156±16 km s^-1) Civλλ1548,1551 Å emission in the z=4.88 arc is interesting since the UV spectra of field galaxies generally show Civ in absorption from ISM/CGM gas, or else exhibit a P-Cygni profile from the stellar winds of O-stars, with Civ emission redshifted by a few hundred km s^-1 <cit.>. AGN can also produce Civ in emission, though with typical line-widths of at least a few hundred km s^-1. Narrow Civλλ1548,1551 Å emission has so far been observed in a handful of strongly lensed high-redshift galaxies <cit.>. To date these galaxies have either been studied with slit spectroscopy or they are unresolved in ground-based observations, inhibiting the study of the spatial distribution of the Civλλ1548,1551 Å emission. The MUSE observations of the brightly lensed z=4.88 arc therefore provide us with a unique opportunity to investigate the origin of this line in more detail.
In the absence of rest-frame optical spectroscopy, a common approach to assessing the possible presence of AGN is using UV emission-line ratios <cit.>. At z=4.88, this requires near-infrared spectroscopy to measure the Heiiλ1640 Å and Ciii]λλ1907,1909 Å lines. With the current observations we can only assess the Civ/Lyα ratio, which is ≳0.2 in the composite spectra of AGN <cit.>. In contrast, this ratio is Civ/Lyα=0.054±0.006 in the z=4.88 arc, with little variation along the images, consistent with the interpretation that this line is associated with star formation and not with a hidden AGN.
Figure <ref> shows the Civ/Lyα ratio as a function of Lyα equivalent width for various lensed galaxies in the literature <cit.> and in 5 spatial bins along the lensed image 1 of the z=4.88 arc (using both the 1548 and 1551 Å lines). The Civ/Lyα ratio and the observed equivalent width of Civ of 55±2 Å (rest-frame EW_0=9.3±0.4 Å) do not change significantly as a function of position along galaxy image 1. Furthermore, we observe no strong emission from typical AGN lines such as Nvλ1240 Å, Siivλλ1394,1403 Å and Nivλλ1483,1486 Å <cit.>.
In Figure <ref> we show the spatial distribution of Civλλ1548,1551 Å, using a continuum-subtracted narrowband image of the Civ emission and overlaying the contours on the HST continuum image. Civ clearly extends along the arc and shows a morphology that is consistent with the [Oii] emission. While we would expect a centrally concentrated source for Civ if it originated from an AGN, we can distinguish at least four different `clumps' in the Civ morphology with similar brightness, suggesting that the Civ emission is nebular in origin and emerging from multiple star-forming regions throughout the galaxy.
Finally, we measure the UV-continuum slope β (f_λ∝λ^β) from the J_125-H_160 colour of galaxy image 1 (integrated flux) and find β=-2.19±0.14, while the individual star-forming clumps along the arc show slopes between β=-1.68 and β=-2.64. High-redshift galaxies that host faint AGN have measured UV-continuum slopes of β∼-1.4 to -0.3 <cit.>, and we therefore conclude that low-mass accreting black holes are unlikely to contribute to the radiation field giving rise to the Civ emission.
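For reference, the slope follows directly from a single colour when f_λ∝λ^β, since f_ν∝λ^(β+2). A sketch using approximate WFC3/IR pivot wavelengths (the magnitudes in the example are invented to illustrate the arithmetic):

```python
import numpy as np

def beta_from_color(m_J125, m_H160, lam_J=1.2486, lam_H=1.5369):
    """UV slope beta (f_lambda ∝ lambda^beta) from a J125-H160 colour.
    Pivot wavelengths in microns (approximate WFC3/IR values).
    Derived from m1 - m2 = -2.5 (beta + 2) log10(lam1 / lam2)."""
    color = m_J125 - m_H160
    return color / (-2.5 * np.log10(lam_J / lam_H)) - 2.0

print(beta_from_color(25.90, 25.95))  # a colour of -0.05 gives beta ≈ -2.2
```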
§.§.§ Metal absorption lines
The high-ionisation Civλλ1548,1551 Å line is expected to be a combination of nebular, stellar and ISM/CGM components.
While no ISM absorption appears to be present at the systemic velocity of Civ, blueshifted Civ absorption is observed at Δ v=-322±26 km s^-1, also with a narrow profile (FWHM∼200 km s^-1).
The spectrum displays a strong similarity between the Civλ1548 Å and Siivλ1394 Å absorption profiles (Figure <ref>), indicating the absorption of both lines is due to highly ionised gas clouds in the ISM/CGM of the galaxy moving towards us.
Furthermore, both Siivλ1394 Å and Siiiλ1304 Å show no evidence for absorption at the systemic velocity (Figure <ref>) and we even find weak emission lines, possibly indicating a low covering fraction of gas in the ISM of the galaxy. Given the strong absorption of the high-ionisation lines at ∼-300 km s^-1, galactic feedback in this galaxy has possibly ejected a large fraction of the interstellar gas into the CGM/IGM.
The Siiiλ1304 Å absorption appears to be weaker than the Civλ1548 Å and Siivλ1394 Å absorption lines. The ratio of the equivalent width of the Siiiλ1304 Å line over that of Siivλ1394 Å is EW(Siiiλ1304 Å)/EW(Siivλ1394 Å)=0.2 (see Table <ref>).
This is in contrast to the typical UV spectra of high-redshift galaxies, where the low-ionisation lines are stronger than the high-ionisation lines of the same species, with for example EW(Siiiλ1304 Å)/EW(Siivλ1394 Å)=1.2 in the composite spectrum of Lyman break galaxies by <cit.>.
We also fit the low- and high-ionisation lines with a Gaussian profile convolved with a Voigt-profile absorber and estimate column densities of log(N/cm^-2)=14.5±0.3 and log(N/cm^-2)=14.6±0.1, respectively.
A larger fraction of the outflowing gas is possibly highly ionised by the hard ionisation field, whose presence is indicated by the widespread nebular Civλλ1548,1551 Å emission in the galaxy.
It is possible that the neutral gas swept up by the ionised outflow is optically thin because of this, or else that the covering fraction of neutral gas in the CGM is incomplete <cit.>. To distinguish between these explanations we would need a clean observation of the Siiiλ1260 Å and Siiiλ1527 Å absorption features, which are currently obscured by skylines.
§.§.§ Stellar population
Given that the Civ emission appears to be nebular in origin and powered by star formation (see <ref>), we investigate the properties of the stellar population that are needed to reproduce the ∼9 Å rest-frame equivalent width nebular Civ emission.
Figure <ref> shows the evolution of the nebular Civ equivalent width with metallicity, obtained from the Binary Population and Spectral Synthesis <cit.> models, using a single stellar population and including binary stellar evolution. The highly ionising photons needed to generate the high-equivalent width nebular Civ lines can be generated by a young stellar population (1-3 Myr old) with a low metallicity Z=0.05Z_⊙.
The low-metallicity BPASS models also predict significantly reduced equivalent width stellar P-Cygni profiles, due to the fact that the winds of hot stars are driven by metal line absorption and therefore low-metallicity stars are much less efficient in driving stellar winds. This is consistent with the observed profile of Civ in the z=4.88 arc (see Figure <ref>), where we see no evidence for any redshifted stellar emission (>500 km s^-1 ) or broad blueshifted absorption (<-500 km s^-1) from the systemic velocity. In fact, a lower equivalent width stellar P-Cygni profile could provide an improved fit to our data, suggesting that the stellar iron abundance of the stellar population of the z=4.88 arc could be even lower than that assumed in the lowest-metallicity BPASS models available. The reasonable consistency between the stellar and nebular components in the Civ line profile provides confidence that we are indeed witnessing the early star formation in a galaxy with a very metal-poor stellar population.
§.§.§ Kinematics
To derive spatially resolved dynamics of the stars and gas, and to understand the spatial variation in the kinematics of the ISM/CGM, we exploit the enhanced spatial resolution afforded by the strong lensing of the z=4.88 arc together with the IFU data.
In Figure <ref> we show the spatial variation of Lyα, Civλλ1548,1551 Å and Siivλλ1394,1403 Å along the lensed galaxy image 1 running from the North-East to South-West. We use galaxy image 1, since galaxy images 2 and 3 are incomplete images that cross the critical curve (see Figure <ref>) and the stellar continuum of galaxy image 4 is not spatially resolved in the MUSE data. The velocities in Figure <ref> are given with respect to a redshift of z_[Oii]=4.8757 obtained from galaxy images 2 and 3 (S07).
As noted in <ref>, the Lyα profile is not well described by a traditional Gaussian profile convolved with a Voigt-profile absorber and we therefore choose a non-parametric description of the Lyα profile. We characterise the spatial variation in the shape of the asymmetric Lyα profile by finding the wavelength that corresponds to 50%, 25% and 10% of the peak flux redwards of the Lyα peak.
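A sketch of this non-parametric characterisation, assuming a velocity grid and flux array for each spatial bin; it interpolates linearly between spectral pixels and assumes the red wing does fall below each threshold:

```python
import numpy as np

def red_wing_velocities(v, flux, fractions=(0.5, 0.25, 0.1)):
    """Velocities (redwards of the peak) at which the Lya flux first
    drops to the given fractions of the peak flux. `v` in km/s,
    increasing; linear interpolation between spectral pixels."""
    i_pk = np.argmax(flux)
    red_v, red_f = v[i_pk:], flux[i_pk:] / flux[i_pk]
    out = []
    for frac in fractions:
        assert red_f.min() < frac, "profile never falls below threshold"
        j = np.argmax(red_f < frac)      # first pixel below threshold
        f1, f2 = red_f[j - 1], red_f[j]  # bracketing flux fractions
        out.append(red_v[j - 1] + (f1 - frac) / (f1 - f2)
                   * (red_v[j] - red_v[j - 1]))
    return out
```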
The Civλλ1548,1551 Å emission lines are modelled using a Gaussian emission line doublet. We model the Siivλλ1394,1403 Å absorption line doublet using Gaussians convolved with a Voigt-profile absorber. We detect these lines with >5σ significance against the brightest continuum clump in galaxy image 1, corresponding to the North of the galaxy in the source plane. To obtain better constraints on the Siiv kinematics over the whole galaxy, we combine the bright clumps in galaxy images 1, 2 and 3 that correspond to the Southern bright star-forming region in the source-plane (we also show this in Fig. <ref>).
The Civ emission shows a velocity gradient of less than 50 km s^-1 along the arc, with an irregular velocity pattern that is repeated in the lensed images 2 and 3. This is broadly consistent with the systemic velocity derived by S07, who find a velocity gradient in [Oii] of ≲60 km s^-1. Moreover, the width of the Civ doublet does not change significantly as a function of position but stays either unresolved or marginally resolved at a FWHM ∼ 100 km s^-1.
For the high-ionisation Siiv line, we derive a blueshift of 300-400 km s^-1 from the systemic redshift, consistent with the measured velocity offset by S07. The Lyα emission shows very little variation in the peak velocity, but broadens along the South-West side of the extended Lyα halo. This is consistent with the analysis in <ref>, which suggests a collimated high-velocity outflow on top of a halo of isotropically out-flowing neutral gas. The small (<60 km s^-1) velocity gradient in the Civ and [Oii] lines as well as the ∼100 km s^-1 velocity gradient in the Siiv absorption also support this picture of an outflow over the interpretation of large scale rotation in the halo.
Comparing the high-ionisation absorption features and the Lyα line, the Lyα peak produced by the receding outflow emerges at significantly lower velocities (<100 km s^-1) than where the absorption of interstellar Siiv takes place in the approaching outflow (300-400 km s^-1). This contradicts a simple symmetric shell model <cit.>, which predicts that the Lyα peak is shifted by ∼2×Δv_exp, where Δv_exp is the outflow velocity of the shell as measured from the interstellar absorption features. In this model the Lyα peak velocity of the z=4.88 arc would be expected at ∼600-800 km s^-1, a full order of magnitude higher than our observations (see Figure <ref>). For comparison, the <cit.> composite spectrum of Lyman Break Galaxies at z∼3 shows a Lyα velocity offset of +360 km s^-1 and low-ionisation lines at -150 km s^-1, consistent with the symmetric shell model.
While some asymmetry could be present in the outflow, as indicated by the changing Lyα linewidth on one side of the galaxy, the peak velocity of Lyα changes by less than 50 km s^-1 and it therefore seems unlikely that asymmetry in the outflowing gas explains the difference of hundreds of km s^-1 between the approaching and receding gas tracers. We therefore suggest that a complex kinematic structure of the CGM, such as the velocity gradient we argued for in <ref> must affect the absorption and escape of Lyα photons.
§.§ A physical picture for the z=4.88 arc
In <ref> we analysed the morphological and spectral properties of the Lyα, Civ, Siiv and Siii emission and absorption lines in the z=4.88 lensed galaxy arc behind RCS 0224-0002. Widespread nebular emission of the highly ionised Civ line implies that the source is an actively star-forming galaxy with a hard ionisation field impacting upon the ISM surrounding the sites of star-formation, while the blueshifted Siiv absorption line and spatially extended redshifted Lyα halo indicate galaxy-wide outflows.
A notable difficulty in this picture is the difference between the gas outflow velocities indicated by Lyα emission and by the high-ionisation interstellar absorption lines, suggesting the two tracers are dominated by different parts of a CGM which hosts a complex kinematic gas structure (<ref>). To find a model that qualitatively describes the Lyα spectral properties we argued for a strong velocity gradient in the gas (<ref> and Appendix <ref>), implying an accelerating outflow. In this model the velocity gradient affects the column density seen by the Lyα photons at any given velocity and the neutral gas at low velocity (and small radii) becomes transparent and produces the low-velocity Lyα peak, while a small fraction of the photons is scattered to the outer, high-velocity halo (see Figure <ref>). This model could also explain the strong blueshift of the interstellar absorption lines compared to the peak of the Lyα emission, since these lines are absorbed by both low- and high-velocity gas.
While we expect a high-velocity tail in the interstellar absorption lines similar to that present in the Lyα emission line, we do not have the S/N to confirm this prediction. Finally, the spatial extent of the Lyα halo, which is strongly centrally peaked but shows a faint extended wing, is also well described by this model.
Accelerating outflows have already been inferred in lower redshift studies <cit.>. For example, <cit.> observed a “saw-tooth” profile and a long, high-velocity tail in the Mgii absorption features of 0.4<z<1.4 star-forming galaxies which can be explained by accelerated cool gas.
<cit.> suggest ultra-luminous infrared galaxies (ULIRGs) at z∼0.25 have lower covering fractions for their higher velocity gas, implying the highest velocity gas is found at the largest radii and therefore the presence of a velocity gradient in the outflow.
Furthermore, <cit.> used UV-selected galaxy pairs at z∼2-3 to measure the typical gas covering fraction of outflowing gas as a function of impact parameter and argued that consistency between the absorption line strength as a function of impact parameter, and the strength and profile shape of lines observed in the spectra of the galaxies, required large velocities and velocity gradients in the gas.
A physical explanation for accelerating gas is that the outflows are momentum-driven <cit.>. Momentum injection is thought to be provided by radiation pressure produced on the dust grains. However, recent studies using deep ALMA observations <cit.> have shown that low-mass high-redshift galaxies have low dust content, and the z=4.88 arc does not appear heavily reddened (i.e. the continuum is bluer than that observed in the composite spectrum of <cit.>; see Fig. <ref>), indicating that only a small fraction of the star-formation radiation is available to drive winds. <cit.> also consider momentum injection through ram pressure by supernovae, which can deposit roughly the same amount of momentum as the radiation pressure on the dust and is therefore potentially a more likely source of momentum injection. Alternatively, <cit.> considers momentum transfer due to the radiation of ionising photons, which could be a preferred source of momentum injection given the hard ionising radiation field we know is present throughout the galaxy, because of the widespread high-equivalent-width Civ emission.
In summary, a physical picture consistent with our observations is that of a vigorously star-forming galaxy, inducing a galaxy-wide momentum-driven wind, either due to supernova ram pressure or to the strong radiation field.
§.§.§ Comparison with UV properties of sources at z≲3
To date, only a small sample of high-redshift galaxies has been studied with high-S/N rest-frame UV spectroscopy, all at z<4, owing to their faintness and hence the long integration times needed to detect faint spectral features. Therefore, we compare the z=4.88 arc with lower redshift sources in order to understand whether the features observed in this arc are common in z≲3 galaxies, or whether there is evidence for a change in the ISM/CGM properties of galaxies as we start observing sources at higher redshifts.
The brightest targets for rest-frame UV studies at z∼2-3 are identified from ground-based surveys, which select strongly lensed galaxies, that are often relatively massive (M_∗>10^9.5 M_⊙) and strongly star forming (SFR ≳50 M_⊙ yr^-1), including cB58 <cit.>, the Cosmic Eye <cit.>, the Cosmic Horseshoe <cit.> and SGAS J105039.6+001730 <cit.>. Typical UV-spectroscopic signatures in these massive galaxies include strong P-Cygni profiles seen in the Civ line profile, strong low-ionisation absorption features with respect to high-ionisation ISM lines of the same species and a wide velocity range for both low- and high-ionisation absorption lines (FWHM∼500-1000 km s^-1). This is in strong contrast to the z=4.88 arc, where we detect no evidence for stellar winds through the Civ P-Cygni line and also where the high-ionisation ISM absorption lines are only a few hundred km s^-1 wide, indicating a marked difference in the properties of the stellar winds and the galaxy outflows of our source.
A few of these massive galaxies show strong Ciii]λλ1907,1909 Å emission, an uncommon feature in local galaxies <cit.> as it requires a significant flux above 24 eV and therefore indicates that these high-redshift galaxies have harder ionisation fields and/or higher ionisation parameters compared to their local counterparts. However, the nebular Civλλ1548,1551 Å doublet seen in the z=4.88 arc (which requires a significant amount of flux above 48 eV) is typically not detected in these galaxies.
With a dynamical mass of ∼ 10^10 M_⊙ and SFR of 12 M_⊙ yr^-1 (see S07) the z=4.88 arc behind RCS 0224-0002 might be more likely to share similar properties to lower mass sources such as the Lynx arc <cit.>, BX418 <cit.> and a sample of z=1.4-2.9 galaxies behind Abell 1689 and MACS 0451 targeted by <cit.>.
Indeed, the Lynx arc and three of the 17 galaxies in the <cit.> sample show evidence for narrow Civ. <cit.> also require a significant contribution from nebular Civ emission as well as stellar P-Cygni emission to explain their observations.
Due to the faintness of most of these low-mass sources a detailed analysis of the absorption features is rarely possible. <cit.>, however, notice an almost complete absence of P-Cygni and ISM absorption features in the galaxies where they do detect the continuum (similar to local galaxies selected on their low oxygen abundance presented by <cit.>). Furthermore, <cit.> are able to detect numerous absorption features in BX418 at z=2.3, owing to an extremely deep integration, and find that the low-ionisation absorption lines in this galaxy are typically significantly weaker than the high-ionisation ISM lines, similar to the z=4.88 arc. An obvious difference between BX418 and the z=4.88 arc behind RCS 0224-0002 is the spectral shape of Lyα; BX418 has an extremely broad, FWHM∼850 km s^-1, Lyα line, as opposed to the FWHM<300 km s^-1 observed in this galaxy. Furthermore, in BX418 the peak Lyα emerges at Δ v∼+300 km s^-1, more than 3 times higher than for the z=4.88 arc, while the interstellar absorption lines show a 1.5-2 times lower velocity offset (Δ v∼-150 km s^-1).
In summary, there appear to be significant differences in the nebular lines and the stellar and ISM absorption features between the z=4.88 arc behind RCS 0224-0002 and lower redshift sources with masses above M_∗>10^9.5 M_⊙. Low-mass and/or low-metallicity galaxies at z<3 can in some cases show very similar highly ionised nebular features, and some similar absorption-line features have been detected in low-mass galaxies as well; however, no galaxy spectrum or composite spectrum of galaxies matches the full set of observations of the z=4.88 arc. This highlights the need for larger samples of high-S/N observations of very high-redshift galaxies in order to understand whether the physical properties of the earliest systems are systematically different from their later-time counterparts or whether the z=4.88 arc is a rare outlier in the z∼5 galaxy population.
§.§.§ Implications for reionisation studies
Whether galaxies can reionise the Universe and what sources contribute most to reionisation, depends on a large number of parameters, including the Lyman-continuum photon production efficiency of galaxies, ξ_ ion, and the escape fraction of ionising photons <cit.>. While determining the physical properties of galaxies in the reionisation epoch remains challenging, recent spectroscopy of z≳7 galaxies has shown evidence for strong rest-frame UV nebular emission lines such as Civλλ1548,1551 Å <cit.>. Spitzer/IRAC imaging studies have also inferred extremely strong [Oiii]λλ4959,5007 Å in the rest-frame optical spectra of typical z∼7-8 galaxies <cit.>. These results suggest that galaxies in the reionisation epoch could have similar hard radiation field and/or high ionisation parameter as the z=4.88 arc and it is therefore interesting to assess this galaxy as an analogue of the sources that might be responsible for reionisation.
Using the BPASS stellar population template of a young, extremely low-metallicity galaxy needed to match the Civ equivalent width (see <ref>) we derive a Lyman-continuum photon production efficiency in the z=4.88 arc of log_10ξ_ion=25.74 Hz erg^-1, 0.63 dex higher than the canonical value of log_10ξ_ion=25.11 Hz erg^-1 <cit.>. Systematic deviations of ∼0.1 dex from the canonical value of ξ_ion have been derived from the inferred Hα emission in typical z∼4-5 UV-selected galaxies <cit.>. The bluest z∼4-5 galaxies (β<-2.3), likely very young and dust-free sources, have a significantly higher Lyman-continuum photon production efficiency of log_10ξ_ion=25.53-25.78 Hz erg^-1 <cit.>, in good agreement with our derived value for the z=4.88 arc, which also has a blue UV-continuum colour (β=-2.2). If similarly young and low-metallicity galaxies are common at z≳7, they could contribute significantly to reionisation even for modest (≲5%) escape fractions.
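For clarity, ξ_ion is the ratio of the intrinsic ionising-photon production rate to the UV-continuum luminosity density. The numbers in the snippet below are chosen only so that their ratio reproduces the quoted log_10ξ_ion=25.74; they are not the measured quantities:

```python
import numpy as np

def log10_xi_ion(N_ion, L_uv_nu):
    """log10 of the Lyman-continuum photon production efficiency.
    N_ion   : ionising-photon production rate (photons / s)
    L_uv_nu : intrinsic UV luminosity density at ~1500 A (erg / s / Hz)
    Returns log10(xi_ion / (Hz erg^-1))."""
    return np.log10(N_ion / L_uv_nu)

# Illustrative: N_ion = 5.5e53 photons/s over L_uv = 1e28 erg/s/Hz
# gives log10(xi_ion) ≈ 25.74, the value derived for the arc.
print(log10_xi_ion(5.5e53, 1.0e28))
```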
Measuring the direct escape of Lyman-continuum photons at z∼5 is challenging due to the intervening Lyα forest in the IGM. However, the Siiiλ1304 Å absorption in the z=4.88 arc shows no sign of absorption at the systemic velocity by low-ionisation gas in the ISM, indicating that Lyman-continuum photons might also escape easily. Furthermore, the collimated high-velocity outflow discussed in <ref> could blow holes into the ISM through which the photons preferentially escape <cit.>.
Another important implication of the z=4.88 arc as a z≳7 galaxy analogue is the opportunity to identify similar sources with future facilities such as the James Webb Space Telescope (JWST) and the various Extremely Large Telescopes (ELTs). Most of the Lyα emission of galaxies in the reionisation epoch will be absorbed due to the surrounding neutral IGM, but strong nebular lines such as Civλλ1548,1551 Å (which would be seen at an observed EW of ∼80 Å at z∼8) can be easily identified. While these lines are uncommon in the local Universe, our results indicate that they are widely produced by the young and low-metallicity stellar population within the z=4.88 arc. These characteristics are likely to be more common as we start observing galaxies at earlier epochs.
§ A BLIND SEARCH FOR HIGH-REDSHIFT LYΑ EMITTERS
Finally, we can use the relatively wide field of view of MUSE to search for other emitters in the field.
Deep MUSE observations in the Hubble deep fields have proven efficient in detecting Lyα out to redshift z∼6.6 <cit.>. The extremely faint or undetected HST counterparts of these high-redshift Lyα emitters indicate that we are sensitive to the faint end of the UV luminosity function: sources with properties similar to the galaxies expected to be responsible for cosmic reionisation. While these observations require integration times of ∼30 hours, similar sources can potentially be found behind strong lensing clusters within reasonable integration times <cit.>.
Due to its high mass, relative compactness and high redshift, RCS 0224-0002 appears to be an efficient lens for high-redshift galaxies. Unlike low-redshift strong-lensing clusters, our single MUSE pointing covers the entire z∼6 critical curves of RCS 0224-0002. We therefore explore the potential of RCS 0224-0002 as a window into the very high-redshift Universe. We search for emission-line candidates in our MUSE dataset in three windows with low sky contamination, 7100-7200 Å, 8070-8270 Å and 9060-9300 Å, corresponding to Lyα redshift ranges of z=4.84-4.92, z=5.64-5.80 and z=6.54-6.65, respectively. To achieve this, we develop a blind line detection method that follows a number of consecutive steps to identify extremely faint sources, while minimising spurious detections. First, for each pixel in the MUSE field of view (masking bright continuum sources and removing cluster galaxies from the sample), we extract a one-dimensional spectrum by averaging 5×5 pixels, where we use the PSF measured from the MUSE data to assign a weight to each of the pixels. For each of the one-dimensional spectra we search for individual spectral pixels that have a value >3.5σ above the noise (estimated from the same one-dimensional spectrum, but masking the skylines) to identify potential lines. For each candidate, we fit a single Gaussian profile to the spectrum and we require a Δχ^2≥7.5^2 between the Gaussian fit and a constant (straight-line) fit.
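A minimal sketch of the Δχ² criterion at the heart of this selection (the PSF-weighted 5×5 extraction and the 3.5σ pixel pre-selection are omitted; array names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sig, cont):
    """Gaussian emission line on a constant continuum."""
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2) + cont

def delta_chi2(wave, flux, sigma, p0):
    """Delta chi^2 between a Gaussian-plus-constant fit and a
    constant-only fit; candidates require delta >= 7.5**2."""
    popt, _ = curve_fit(gauss, wave, flux, p0=p0, sigma=sigma)
    chi2_g = np.sum(((flux - gauss(wave, *popt)) / sigma) ** 2)
    const = np.average(flux, weights=1.0 / sigma ** 2)  # best constant
    chi2_c = np.sum(((flux - const) / sigma) ** 2)
    return chi2_c - chi2_g
```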
To remove spurious detections we next generate a 12×12 arcsec continuum-subtracted narrowband image around the line. We create this image by extracting the images from the cube with a wavelength within the FWHM of the line (as measured from the Gaussian profile fit) and averaging them; we then subtract the median continuum in two bands (20 Å wide) on each side of the line (offset by 20 Å from the centre). On the continuum-subtracted narrowband image we measure 1000 randomly selected point-source fluxes, using the PSF extracted from an isolated star in the MUSE continuum image to weight the pixels around each point. We require the line candidates to have a flux ≥3σ above the randomly sampled point-source flux distribution.
We visually inspect our line-candidates and remove all sources that are clearly [Oii], [Oiii] or Hα emission and we remove low-redshift interlopers when bright continuum flux is detected blueward of the rest-frame 912 Å limit (based on the Lyα redshift). Note that while we do not exclude sources which show continuum flux between rest-frame 912 Å and 1216 Å, we detect flux below the rest-frame 1216 Å break in only one Lyα candidate (RCS0224_LAEz4p8773) from our final sample. Furthermore, we remove sources that appear due to noise artefacts at the edges of the CCDs, or sources that are strongly affected by an uneven background.
A significant uncertainty in our interloper rejection is that almost none of our Lyα candidates are detected in the HST/ACS+WFC3 imaging, and therefore no prior based on photometric redshift of the sources can be applied to our Lyα selection. Furthermore, many of our Lyα candidates are observed with too low S/N to identify the expected asymmetric Lyα spectral line shape and therefore our sample could be contaminated by for example high-equivalent-width [Oiii]λ5007 Å lines of lower redshift galaxies, for which the [Oiii]λ4959 Å line is too faint to be detected.
To make an estimate of the number of spurious detections we expect in our blind search we perform a test for false-positives by running our source selection code on the inverted data-cube. To identify a “pure” sample, we calculate the S/N required in the algorithm to give a false positive rate of zero; which is Δχ^2≥7.5^2 for the line fit and ≥3σ for the continuum-subtracted point source flux.
In our final sample of Lyα candidates we have five sources at z∼4.8, eight sources at z∼5.7 and one source at z∼6.6. The sources are listed in Table <ref> and thumbnails of all the sources are presented in Appendix <ref>. For comparison, in a 30 h exposure over a 1 arcmin^2 field in the HDF-S, <cit.> find, for the same redshift intervals, seven Lyα emitters at z∼4.8, six at z∼5.7 and none at z∼6.6 (see Figure <ref>). Using a 4 h exposure and 1 arcmin^2 field, <cit.> find only one (multiply-lensed) source at z>4.5 behind the Frontier Fields cluster Abell S1063.
A notable result from our blind search for Lyα line candidates is the presence of a large number of bright Lyα emitting sources at z∼4.88. Three bright sources with fluxes of 2.5-5.8×10^-17 erg s^-1 cm^-2 are located at ∼+250-300 km s^-1 or ∼200 kpc from the bright arc, while two more sources are located ∼-1000 km s^-1 or ∼1.7 Mpc from the arc. With the exception of one of these sources, none of the line-emitters are detected in the HST imaging (at >5σ); however, they all exhibit a clear asymmetric line profile that provides evidence of these sources being Lyα emitters. After correcting for the lensing magnification these sources have Lyα line fluxes of ∼1-3×10^-17 erg s^-1 cm^-2 (luminosities L=2.6-5.1×10^42 erg s^-1), ∼2-10× brighter than sources found in the Hubble Deep Field South by <cit.> at the same redshift. Comparing to the faint end of the Lyα luminosity functions obtained through narrow-band surveys at z=3.5-5.7 <cit.>, we would predict 0.1-0.9 Lyα emitters as bright as logL/erg s^-1=42.5 in our MUSE data in the redshift window z=4.84-4.92.
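The expected counts quoted here come from integrating a published luminosity function over the surveyed volume. The sketch below shows the arithmetic with placeholder Schechter parameters (illustrative values, not the published fits) and ignores lensing, which both brightens sources and shrinks the effective source-plane area:

```python
import numpy as np
from scipy.integrate import quad
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

# Comoving volume of one 1 arcmin^2 MUSE pointing over z = 4.84-4.92
dV = (cosmo.comoving_volume(4.92) - cosmo.comoving_volume(4.84)).value  # Mpc^3
sky_arcmin2 = 4 * np.pi * (180.0 / np.pi * 60.0) ** 2  # full sky in arcmin^2
volume = dV / sky_arcmin2                              # Mpc^3 per arcmin^2

# Placeholder Schechter parameters (illustrative only):
phi_star, L_star, alpha = 1e-3, 1e43, -1.7             # Mpc^-3, erg/s
x_min = 10 ** 42.5 / L_star                            # above log L = 42.5
n = phi_star * quad(lambda x: x ** alpha * np.exp(-x), x_min, np.inf)[0]
print(n * volume)  # expected number of emitters in the redshift window
```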
This suggests that either the Lyα luminosity function is steeper at the faint-end than measured in narrow-band surveys <cit.>, or else that the z=4.88 arc is located in a ∼ 7-60× over-dense region, or group.
§ SUMMARY
We present a survey for line emitter galaxies behind the strong lensing cluster RCS 0224-0002. We analyse the rest-frame UV spectrum of a lensed galaxy magnified 29 times at z=4.88. For this source we observe the following properties:
* The z=4.88 galaxy is surrounded by a spatially extended Lyα halo with an exponential spatial profile. The spectral properties of the Lyα halo are spatially uniform, showing a single redshifted peak close to the systemic velocity (Δ v<100 km s^-1) and a high-velocity tail (out to Δ v∼1000 km s^-1). The spatial and spectral properties of the halo are consistent with resonantly scattered Lyα photons produced in a central source and backscattered in a receding outflow from the galaxy.
* We detect spatially resolved narrow Civλλ1548,1551 Å emission. The spatial distribution of Civ strongly resembles that of the [Oii] line, suggesting a nebular origin of the line, powered by star-formation. We argue that the strong Civ emission (EW_0∼9Å) can be reproduced with a young (t<5 Myr), low-metallicity (Z≲0.05Z_⊙) stellar population. The blue UV-continuum color (β=-2.2) and the absence of a P-Cygni profile, indicating low-metallicity stars with significantly reduced stellar winds, is consistent with this analysis.
* We observe strong high-ionisation interstellar absorption lines in Civ and Siiv with a significant blueshift (Δ v∼300 km s^-1) from the systemic velocity and much weaker low-ionisation Siii absorption (EW(Siiiλ1304 Å)/EW(Siivλ1394 Å)=0.2). The blueshift of the interstellar lines is surprising when considering how close to the systemic velocity we observe the Lyα line, given that an outflowing-shell model suggests Δ v_IS∼ v_shell and Δ v_Lyα∼ 2× v_shell.
We propose a physical model for this galaxy in which the outflowing gas follows a strong velocity gradient such that the effective column density of neutral gas, as seen by the outwards scattering Lyα photons, is significantly reduced, allowing for Lyα to escape at much lower velocities than the mean gas outflow <cit.>. This velocity gradient likely requires a momentum injection into the gas, which can originate from supernova ram pressure or radiation pressure <cit.>. These results emphasise the importance of increasing the samples of high-redshift low-mass galaxies where we are able to detect the interstellar absorption features, as relying on Lyα as a tracer of galaxy outflows can significantly underestimate the feedback in galaxies such as the z=4.88 arc behind RCS 0224.
We perform a blind line search for high-redshift Lyα using three wavelength ranges that are relatively free of sky lines, corresponding to z_ Lyα=4.84-4.92, z_ Lyα=5.64-5.80 and z_ Lyα=6.54-6.65. We select sources above the significance level needed such that a line search on the inverted data results in zero false positives. We find a total of 14 Lyα candidates, of which only one is detected in the HST imaging. This suggests line surveys over strong lensing clusters with MUSE are efficient at finding ultra-faint galaxies out to z∼6.6 and hence study the properties of faint Lyα emitting galaxies that are likely to have contributed to reionisation.
§ ACKNOWLEDGMENTS
We are grateful to Graham Smith for recovering the parameters of the S07 lensing model. We thank Matthew Hayes, Bethan James, Vera Patricio, Max Pettini, Tom Theuns, Ryan Trainor and Anne Verhamme for useful discussions. We are grateful to Max Gronke for giving us access to the on-line tool TLAC_WEB. RS, AMS, RJM and IRS acknowledge support from STFC (ST/L0075X/1). RS and IRS also acknowledge support from the ERC Advanced Investigator programme DUSTYGAL 321334. In addition, RS acknowledges support from the Leverhulme Trust, AMS from an STFC Advanced Fellowship (ST/H005234/1), RJM acknowledges support from a Royal Society URF and IRS acknowledges support from a Royal Society/Wolfson Merit Award. JPK acknowledges support from the ERC advanced grant LIDA and from CNRS.
[Aravena et al.(2016)]Aravena2016 Aravena, M., Decarli, R., Walter, F., et al. 2016, , 833, 68
[Bacon et al.(2010)]Bacon2010 Bacon, R., Accardo, M., Adjali, L., et al. 2010, , 7735, 773508
[Bacon et al.(2015)]Bacon2015 Bacon, R., Brinchmann, J., Richard, J., et al. 2015, , 575, A75
[Bayliss et al.(2014)]Bayliss2014 Bayliss, M. B., Rigby, J. R., Sharon, K., et al. 2014, , 790, 144
[Berg et al.(2016)]Berg2016 Berg, D. A., Skillman, E. D., Henry, R. B. C., Erb, D. K., & Carigi, L. 2016, , 827, 126
[Bina et al.(2016)]Bina2016 Bina, D., Pelló, R., Richard, J., et al. 2016, , 590, A14
[Bouwens et al.(2015a)]Bouwens2015 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015a, , 803, 34
[Bouwens et al.(2015b)]Bouwens2015b Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015b, , 811, 140
[Bouwens et al.(2016)]Bouwens2016 Bouwens, R. J., Smit, R., Labbé, I., et al. 2016, , 831, 176
[Bouwens et al.(2016)]Bouwens2016b Bouwens, R. J., Aravena, M., Decarli, R., et al. 2016, , 833, 72
[Bowler et al.(2015)]Bowler2015 Bowler, R. A. A., Dunlop, J. S., McLure, R. J., et al. 2015, , 452, 1817
[Caminha et al.(2016)]Caminha2016 Caminha, G. B., Karman, W., Rosati, P., et al. 2016, , 595, A100
[Christensen et al.(2012a)]Christensen2012a Christensen, L., Richard, J., Hjorth, J., et al. 2012a, , 427, 1953
[Christensen et al.(2012b)]Christensen2012 Christensen, L., Laursen, P., Richard, J., et al. 2012b, , 427, 1973
[Coppin et al.(2015)]Coppin2015 Coppin, K. E. K., Geach, J. E., Almaini, O., et al. 2015, , 446, 1293
[Dessauges-Zavadsky et al.(2010)]Dessauges2010 Dessauges-Zavadsky, M., D'Odorico, S., Schaerer, D., et al. 2010, , 510, A26
[Dijkstra et al.(2006)]Dijkstra2006 Dijkstra, M., Haiman, Z., & Spaans, M. 2006, , 649, 14
[Dijkstra & Loeb(2009)]Dijkstra2009 Dijkstra, M., & Loeb, A. 2009, , 400, 1109
[Drake et al.(2016)]Drake2016 Drake, A. B., Guiderdoni, B., Blaizot, J., et al. 2016, arXiv:1609.02920
[Dunlop et al.(2016)]Dunlop2016 Dunlop, J. S., McLure, R. J., Biggs, A. D., et al. 2016, arXiv:1606.00227
[Eldridge & Stanway(2012)]Eldridge2012 Eldridge, J. J., & Stanway, E. R. 2012, , 419, 479
[Erb et al.(2006)]Erb2006 Erb, D. K., Steidel, C. C., Shapley, A. E., et al. 2006, , 646, 107
[Erb et al.(2010)]Erb2010 Erb, D. K., Pettini, M., Shapley, A. E., et al. 2010, , 719, 1168
[Erb(2015)]Erb2015 Erb, D. K. 2015, , 523, 169
[Faucher-Giguère et al.(2010)]Faucher2010 Faucher-Giguère, C.-A., Kereš, D., Dijkstra, M., Hernquist, L., & Zaldarriaga, M. 2010, , 725, 633
[Feltre et al.(2016)]Feltre2016 Feltre, A., Charlot, S., & Gutkin, J. 2016, , 456, 3354
[Finkelstein et al.(2015)]Finkelstein2015 Finkelstein, S. L., Ryan, R. E., Jr., Papovich, C., et al. 2015, , 810, 71
[Fosbury et al.(2003)]Fosbury2003 Fosbury, R. A. E., Villar-Martín, M., Humphrey, A., et al. 2003, , 596, 797
[Giallongo et al.(2015)]Giallongo2015 Giallongo, E., Grazian, A., Fiore, F., et al. 2015, , 578, A83
[Giavalisco et al.(2004)]Giavalisco2004 Giavalisco, M., Dickinson, M., Ferguson, H. C., et al. 2004, , 600, L103
[Gladders et al.(2002)]Gladders2002 Gladders, M. D., Yee, H. K. C., & Ellingson, E. 2002, , 123, 1
[Gronke et al.(2015)]Gronke2015 Gronke, M., Bull, P., & Dijkstra, M. 2015, , 812, 123
[Gullberg et al.(2016)]Gullberg2016 Gullberg, B., De Breuck, C., Lehnert, M. D., et al. 2016, , 586, A124
[Haehnelt(1995)]Haehnelt1995 Haehnelt, M. G. 1995, , 273, 249
[Hainline et al.(2011)]Hainline2011 Hainline, K. N., Shapley, A. E., Greene, J. E., & Steidel, C. C. 2011, , 733, 31
[Hayes et al.(2013)]Hayes2013 Hayes, M., Östlin, G., Schaerer, D., et al. 2013, , 765, L27
[Holden et al.(2001)]Holden2001 Holden, B. P., Stanford, S. A., Rosati, P., et al. 2001, , 122, 629
[Jones et al.(2012)]Jones2012 Jones, T., Stark, D. P., & Ellis, R. S. 2012, , 751, 51
[Jullo et al.(2007)]Jullo2007 Jullo, E., Kneib, J.-P., Limousin, M., et al. 2007, New Journal of Physics, 9, 447
[Jullo & Kneib(2009)]Jullo2009 Jullo, E., & Kneib, J.-P. 2009, , 395, 1319
[Karman et al.(2015)]Karman2015 Karman, W., Caputi, K. I., Grillo, C., et al. 2015, , 574, A11
[Kennicutt(1998)]Kennicutt1998 Kennicutt, R. C., Jr. 1998, , 36, 189
[Kneib et al.(1996)]Kneib1996 Kneib, J.-P., Ellis, R. S., Smail, I., Couch, W. J., & Sharples, R. M. 1996, , 471, 643
[Labbé et al.(2013)]Labbe2013 Labbé, I., Oesch, P. A., Bouwens, R. J., et al. 2013, , 777, L19
[Lehnert & Bremer(2003)]Lehnert2003 Lehnert, M. D., & Bremer, M. 2003, , 593, 630
[Leitherer et al.(2001)]Leitherer2001 Leitherer, C., Leão, J. R. S., Heckman, T. M., et al. 2001, , 550, 724
[Leitherer et al.(2011)]Leitherer2011 Leitherer, C., Tremonti, C. A., Heckman, T. M., & Calzetti, D. 2011, , 141, 37
[Matsuda et al.(2012)]Matsuda2012 Matsuda, Y., Yamada, T., Hayashino, T., et al. 2012, , 425, 878
[Meneghetti et al.(2016)]Meneghetti2016 Meneghetti, M., Natarajan, P., Coe, D., et al. 2016, arXiv:1606.04548
[Momose et al.(2014)]Momose2014 Momose, R., Ouchi, M., Nakajima, K., et al. 2014, , 442, 110
[Madau et al.(1996)]Madau1996 Madau, P., Ferguson, H. C., Dickinson, M. E., et al. 1996, , 283, 1388
[Martin & Bouché(2009)]Martin2009 Martin, C. L., & Bouché, N. 2009, , 703, 1394
[McLure et al.(2009)]Mclure2009 McLure, R. J., Cirasuolo, M., Dunlop, J. S., Foucaud, S., & Almaini, O. 2009, , 395, 2196
[Murray et al.(2005)]Murray2005 Murray, N., Quataert, E., & Thompson, T. A. 2005, , 618, 569
[Murray et al.(2011)]Murray2011 Murray, N., Ménard, B., & Thompson, T. A. 2011, , 735, 66
[Oke & Gunn(1983)]OkeGun Oke, J. B., & Gunn, J. E. 1983, , 266, 713
[Ouchi et al.(2004)]Ouchi2004 Ouchi, M., Shimasaku, K., Okamura, S., et al. 2004, , 611, 660
[Ouchi et al.(2008)]Ouchi2008 Ouchi, M., Shimasaku, K., Akiyama, M., et al. 2008, , 176, 301-330
[Heckman et al.(2015)]Heckman2015 Heckman, T. M., Alexandroff, R. M., Borthakur, S., Overzier, R., & Leitherer, C. 2015, , 809, 147
[Patrício et al.(2016)]Patricio2016 Patrício, V., Richard, J., Verhamme, A., et al. 2016, , 456, 4191
[Petrosian(1976)]Petrosian1976 Petrosian, V. 1976, , 209, L1
[Pettini et al.(2000)]Pettini2000 Pettini, M., Steidel, C. C., Adelberger, K. L., Dickinson, M., & Giavalisco, M. 2000, , 528, 96
[Pettini et al.(2002)]Pettini2002 Pettini, M., Rix, S. A., Steidel, C. C., et al. 2002, , 569, 742
[Quider et al.(2009)]Quider2009 Quider, A. M., Pettini, M., Shapley, A. E., & Steidel, C. C. 2009, , 398, 1263
[Quider et al.(2010)]Quider2010 Quider, A. M., Shapley, A. E., Pettini, M., Steidel, C. C., & Stark, D. P. 2010, , 402, 1467
[Rasappu et al.(2016)]Rasappu2016 Rasappu, N., Smit, R., Labbé, I., et al. 2016, , 461, 3886
[Rigby et al.(2015)]Rigby2015 Rigby, J. R., Bayliss, M. B., Gladders, M. D., et al. 2015, , 814, L6
[Rivera-Thorsen et al.(2015)]Rivera2015 Rivera-Thorsen, T. E., Hayes, M., Östlin, G., et al. 2015, , 805, 14
[Roberts-Borsani et al.(2016)]RobertsBorsani2016 Roberts-Borsani, G. W., Bouwens, R. J., Oesch, P. A., et al. 2016, , 823, 143
[Robertson et al.(2015)]Robertson2015 Robertson, B. E., Ellis, R. S., Furlanetto, S. R., & Dunlop, J. S. 2015, , 802, L19
[Rosdahl & Blaizot(2012)]Rosdahl2012 Rosdahl, J., & Blaizot, J. 2012, , 423, 344
[Sawicki et al.(1997)]Sawicki1997 Sawicki, M. J., Lin, H., & Yee, H. K. C. 1997, , 113, 1
[Santos et al.(2016)]Santos2016 Santos, S., Sobral, D., & Matthee, J. 2016, ,
[Shapley et al.(2003)]Shapley2003 Shapley, A. E., Steidel, C. C., Pettini, M., & Adelberger, K. L. 2003, , 588, 65
[Smail et al.(2007)]Smail2007 Smail, I., Swinbank, A. M., Richard, J., et al. 2007, , 654, L33
[Smit et al.(2014)]Smit2014 Smit, R., Bouwens, R. J., Labbé, I., et al. 2014, , 784, 58
[Smit et al.(2015)]Smit2015 Smit, R., Bouwens, R. J., Franx, M., et al. 2015, , 801, 122
[Smit et al.(2016)]Smit2016 Smit, R., Bouwens, R. J., Labbé, I., et al. 2016, , 833, 254
[Stark et al.(2014)]Stark2014 Stark, D. P., Richard, J., Siana, B., et al. 2014, , 445, 3200
[Stark et al.(2015)]Stark2015 Stark, D. P., Walth, G., Charlot, S., et al. 2015, , 454, 1393
[Stark et al.(2017)]Stark2017 Stark, D. P., Ellis, R. S., Charlot, S., et al. 2017, , 464, 469
[Steidel et al.(1996)]Steidel1996 Steidel, C. C., Giavalisco, M., Pettini, M., Dickinson, M., & Adelberger, K. L. 1996, , 462, L17
[Steidel et al.(1999)]Steidel1999 Steidel, C. C., Adelberger, K. L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, , 519, 1
[Steidel et al.(2010)]Steidel2010 Steidel, C. C., Erb, D. K., Shapley, A. E., et al. 2010, , 717, 289
[Steidel et al.(2011)]Steidel2011 Steidel, C. C., Bogosavljević, M., Shapley, A. E., et al. 2011, , 736, 160
[Swinbank et al.(2007)]Swinbank2007 Swinbank, A. M., Bower, R. G., Smith, G. P., et al. 2007, , 376, 479
[Swinbank et al.(2015)]Swinbank2015 Swinbank, A. M., Vernet, J. D. R., Smail, I., et al. 2015, , 449, 1298
[van der Burg et al.(2010)]vanderBurg2010 van der Burg, R. F. J., Hildebrandt, H., & Erben, T. 2010, , 523, A74
[Vanzella et al.(2016)]Vanzella2016 Vanzella, E., De Barros, S., Cupani, G., et al. 2016, , 821, L27
[Vanzella et al.(2017)]Vanzella2017 Vanzella, E., Balestra, I., Gronke, M., et al. 2017, , 465, 3803
[Verhamme et al.(2006)]Verhamme2006 Verhamme, A., Schaerer, D., & Maselli, A. 2006, , 460, 397
[Weiner et al.(2009)]Weiner2009 Weiner, B. J., Coil, A. L., Prochaska, J. X., et al. 2009, , 692, 187
[Wisotzki et al.(2016)]Wisotzki2016 Wisotzki, L., Bacon, R., Blaizot, J., et al. 2016, , 587, A98
§ BEST-FIT PARAMETERS OF THE LENS MODEL
In <ref> we described the set-up and main results of our LENSTOOL modelling. In this appendix we present the full observational constraints in Table <ref> and the best fit parameters of the lensing model that is used in this work in Table <ref>.
§ MEASURED PROPERTIES IN THE INDIVIDUAL LENSED IMAGES OF THE Z=4.88 ARC
In <ref> we discussed the emission-line properties of the z=4.88 arc from the integrated spectrum over galaxy images 1, 2 and 3. In this appendix, we present the constraints that we can measure on the individual galaxy images. In Table <ref> we give the redshift and equivalent width measurements of the Lyα, Civλ1548 Å and Civλ1551 Å emission lines. The individual images (with the exception of galaxy image 1) are too faint to detect the weaker emission lines or the absorption features, while galaxy image 4 is too faint to detect even the Civ lines at >3.5σ, and we therefore do not include these lines. We find that the Lyα line peaks at the same pixel for every image, while the Lyα equivalent width measurements are consistent with each other within the uncertainties. The Civ EWs are larger for images 2 and 3, which can be explained by the fact that these galaxy images are incomplete and the brightest star-forming region is not included in the measurement.
§ LYΑ LINE PROFILES
In <ref> we described the spectral line shape of the Lyα line observed in the z=4.88 arc. The features of this line are well described by the model <cit.> in which a smooth velocity gradient is present in the outflow of the galaxy.
In this appendix we present a comparison of the <cit.> models for velocity gradients with different maximum outflow velocities (20, 200 and 2000 km s^-1, see their Fig. 7) with the Lyα emission from the z=4.88 arc in Figure <ref>.
We show the model spectra as a function of velocity shift, converting from Doppler units assuming a gas temperature T = 20000 K and a column density of 2 × 10^20 cm^-2 as assumed in <cit.>, and we plot the spectrum of the z=4.88 arc with respect to z_[Oii]=4.8757.
The main elements of the Lyα emission, such as the single peak emerging close to the systemic velocity and the exponential tail to higher velocities, are present in the model with a high (2000 km s^-1) maximum velocity and the strongest velocity gradient.
§ LYΑ LINE EMITTER CANDIDATES
In <ref> we described our method and testing of a blind line-search for Lyα emitters at z=4.84-4.92, z=5.64-5.80 and z=6.54-6.65. Here we present the HST and MUSE thumbnails of the individual sources and the one-dimensional spectrum from which the sources are identified in figures <ref>, <ref> and <ref>. We also list the sources in table <ref>. We detect asymmetric line profiles in all five sources at z=4.84-4.92. However, measuring the continuum flux from the HST imaging using a fixed 0.5"-diameter aperture centered on the Lyα detection, we find only one source (RCS0224_LAEz4p8773) detected at >5σ in the I_814, J_125 and H_160 bands. A slightly weaker signal (4.6σ) is detected for RCS0224_LAEz4p8784 in the WFPC2 V_606 band, while no significant detection (<2σ) is measured from the redder HST bands of the same source. If real, this flux belongs either to a foreground galaxy or else it would indicate that we have misidentified RCS0224_LAEz4p8784 as Lyα emission. However, the asymmetry of the line and the lack of secondary components in the spectrum favour the former interpretation. Furthermore, we find ∼3σ detections in the J_125 band for two sources at z∼5.7, RCS0224_LAEz5p7405 and RCS0224_LAEz5p7360. For the 9 Lyα candidates at z=5.64-5.80 and z=6.54-6.65 we do not have enough S/N to detect the asymmetric profiles, nor do we detect the sources in the HST imaging at >3.5σ.
| Over the last decade, deep observations of blank fields, in particular with the Hubble Space Telescope (HST), have identified a substantial population of galaxies at z>3 using broadband photometry <cit.>.
Despite the progress in identifying large numbers of galaxies, it remains challenging to obtain spectroscopic redshifts and determine the physical properties of these systems. This is largely due to their inherent faintness and the fact that bright rest-frame optical emission-line tracers such as Hα and [Oiii], which are traditionally used to measure the properties of the ISM, are shifted to observed mid-infrared wavelengths for sources at z≳3-4. The small physical sizes of galaxies at z>3 compared to typical ground-based seeing also makes spatially resolved observations difficult to obtain, inhibiting measurements of dynamical masses, star-formation distributions and wind energetics.
Recently, the commissioning of the Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT) has led to an advance in the identification and characterisation of z∼3-6 galaxies through wide-field and deep spectroscopy of the rest-frame ultraviolet (UV) spectra of these sources.
For example, MUSE is starting to probe the physical properties of Hii regions within galaxies by exploiting gravitational lensing to detect their faint UV nebular emission lines such as Civλλ1548,1551 Å, Heiiλ1640 Å, Oiii]λλ1661,1666 Å and Ciii]λλ1907,1909 Å <cit.>, lines which are rarely seen in local star-forming galaxies <cit.>. These lines are produced either by young, metal-poor stellar populations with high-ionization parameters <cit.>, or by gas photo-ionisation by faint active galactic nuclei <cit.>. Furthermore, MUSE has enabled the detailed modelling of extended Lyα emission, gaining insights into the inflowing neutral gas and/or wind energetics in the circum-galactic medium (CGM) of galaxies <cit.>.
Moreover, MUSE is a promising new instrument for undertaking unbiased spectroscopic surveys. <cit.> used a 27-hour MUSE pointing of the Hubble Deep Field South (HDF-S) to detect 89 Lyman-α emitters in the redshift range z∼3-6. Remarkably, 66% of the Lyα emitters above z≳5 have no counterpart in the HST broadband imaging (to a limiting magnitude of m_i∼29.5).
In this paper, we extend current work on characterising the UV spectra of intrinsically faint high-redshift galaxies out to z∼5 through the analysis of VLT/MUSE observations of one of the most strongly magnified galaxies known at z>3: the highly magnified (μ=13-145×) z=4.88 lensed arc seen through the core of the compact z=0.77 cluster RCS 0224-0002 <cit.>.
S07 observed nebular [Oii] emission and an extended Lyα halo in this z=4.88 source and hypothesized that a galactic-scale bipolar outflow has recently burst out of this system and into the intergalactic medium (IGM). Our new observations reach a significantly higher signal-to-noise ratio (S/N) in the UV emission and continuum, allowing us to resolve the shape of the Lyα profile and detect the UV interstellar medium (ISM) lines. Furthermore, our MUSE pointing covers the complete z∼6 critical curves, which allows for an efficient survey for faint high-redshift Lyα emitters. These sources are important targets to study in order to understand the properties of the ultra-faint galaxy population that could have contributed significantly to reionisation.
This paper is organised as follows: we describe our MUSE dataset and we summarize the complementary data presented by S07 in <ref>. We analyse the spectral properties of the main z=4.88 arc in <ref>. We present the results of a blind search for Lyα emitters in <ref> and finally we summarise our findings in <ref>.
For ease of comparison with previous studies we take H_0=70 km s^-1 Mpc^-1, Ω_m=0.3, and Ω_Λ=0.7, resulting in an angular scale of 6.4 kpc per arcsecond at z=4.88. Magnitudes are quoted in the AB system <cit.>.
http://arxiv.org/abs/1701.07457v2 | 20170125193449 | A Search for Fast Radio Bursts with the GBNCC Pulsar Survey | [ "P. Chawla", "V. M. Kaspi", "A. Josephy", "K. M. Rajwade", "D. R. Lorimer", "A. M. Archibald", "M. E. DeCesar", "J. W. T. Hessels", "D. L. Kaplan", "C. Karako-Argaman", "V. I. Kondratiev", "L. Levin", "R. S. Lynch", "M. A. McLaughlin", "S. M. Ransom", "M. S. E. Roberts", "I. H. Stairs", "K. Stovall", "J. K. Swiggum", "J. van Leeuwen" ] | astro-ph.HE | [ "astro-ph.HE" ] |
1Department of Physics & McGill Space Institute, McGill University, 3600 University Street, Montreal, QC H3A 2T8, Canada; [email protected]
2Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506, USA
3Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
4Green Bank Observatory, PO Box 2, Green Bank, WV, 24944, USA
5ASTRON, the Netherlands Institute for Radio Astronomy, Postbus 2, 7990 AA Dwingeloo, The Netherlands
6Department of Physics, Lafayette College, Easton, PA 18042, USA
7Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
8Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA
9Astro Space Center, Lebedev Physical Institute, Russian Academy of Sciences, Profsoyuznaya str. 84/32, 117997 Moscow, Russia
10Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester, M13 9PL, UK
11National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22901, USA
12New York University, Abu Dhabi, U.A.E.
13Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada
14National Radio Astronomy Observatory, 1003 Lopezville Rd., Socorro, NM 87801, USA
We report on a search for Fast Radio Bursts (FRBs) with the Green Bank Northern Celestial Cap (GBNCC) Pulsar Survey at 350 MHz. Pointings amounting to a total on-sky time of 61 days were searched to a DM of 3000 pc cm^-3 while the rest (23 days; 29% of the total time) were searched to a DM of 500 pc cm^-3. No FRBs were detected in the pointings observed through May 2016. We estimate a 95% confidence upper limit on the FRB rate of 3.6× 10^3 FRBs sky^-1 day^-1 above a peak flux density of 0.63 Jy at 350 MHz for an intrinsic pulse width of 5 ms. We place constraints on the spectral index α by running simulations for different astrophysical scenarios and cumulative flux density distributions. The non-detection with GBNCC is consistent with the 1.4-GHz rate reported for the Parkes surveys for α > +0.35 in the absence of scattering and free-free absorption and α > -0.3 in the presence of scattering, for a Euclidean flux distribution. The constraints imply that FRBs exhibit either a flat spectrum or a spectral turnover at frequencies above 400 MHz. These constraints also allow estimation of the number of bursts that can be detected with current and upcoming surveys. We predict that CHIME may detect anywhere from several to ∼50 FRBs a day (depending on model assumptions), making it well suited for interesting constraints on spectral index, the log N-log S slope and pulse profile evolution across its bandwidth (400–800 MHz).

§ INTRODUCTION
Fast Radio Bursts (FRBs) are bright, millisecond-duration events occurring in the radio sky. Their origin is still unknown. Eighteen FRBs have been detected within the past decade <cit.>, with only one source known to repeat <cit.>. A catalog of these bursts and their properties is made available by <cit.>[<http://www.astronomy.swin.edu.au/pulsar/frbcat/>]. These transient events can be distinguished from pulsars and rotating radio transients (RRATs) on the basis of their dispersion measure (DM), which is a measure of the integrated free electron density along the line of sight in the intervening medium. The bursts have DMs that are 1.4 to 35 times the maximum predicted along the line of sight by the NE2001 model of electron density in our Galaxy <cit.>.
The dominant contribution to the excess DM of FRBs can arise from the intergalactic medium, the host galaxy of the FRB progenitor, or possibly from a high electron density, compact structure in our Galaxy. The interferometric localization of bursts from the repeating FRB121102 provides evidence of its association with an optical counterpart <cit.>. Spectroscopic follow-up by <cit.> confirms the optical counterpart as being the host galaxy of the FRB and characterizes it as a low-metallicity, star-forming dwarf galaxy located at a redshift of z = 0.19273(8). The observations of <cit.> also support an extragalactic origin, with scattering and scintillation in FRB110523 suggesting that the majority of the scattering originates from within the typical size scale of a galaxy. These observations lend support to models with extragalactic progenitors of FRBs such as giant pulses from extragalactic neutron stars <cit.> and magnetar giant flares <cit.>. Interferometric localizations of more FRBs are essential to conclusively determine the source of the excess DM and the nature of the FRB progenitors for the broader FRB population.
All known FRBs but one <cit.> have been detected at frequencies greater than 1 GHz. Detections or stringent limits at lower frequencies are crucial for understanding properties of FRBs such as their spectral index and pulse profile evolution with frequency. Searches at low frequencies with telescopes such as LOFAR <cit.>, Arecibo <cit.> and MWA <cit.> have so far not resulted in any detections. <cit.> report an upper limit on the FRB rate at 327 MHz of 10^5 FRBs sky^-1 day^-1 for a flux density threshold of 83 mJy and pulse width of 10 ms. A non-detection with the LOFAR Pilot Pulsar Survey at 142 MHz allowed <cit.> to place an upper limit of 150 FRBs sky^-1 day^-1, for bursts brighter than 107 Jy at burst duration 0.66 ms. <cit.> report an upper limit of 29 FRBs sky^-1 day^-1 for bursts with flux density above 62 Jy at 145 MHz and a pulse width of 5 ms, based on observations with the UK station of the LOFAR radio telescope. The upper limits on the FRB rate reported thus far from these low-frequency radio surveys are not particularly constraining because of limitations in total observing time and volume searched. With observations to date amounting to a total on-sky time of 84 days, the Green Bank Northern Celestial Cap (GBNCC) Pulsar Survey <cit.> can provide the strongest constraints yet on the FRB rate and spectral index in the frequency range of 300–400 MHz.
The GBNCC survey is also important for predicting the FRB yield of upcoming low-frequency telescopes such as the Canadian Hydrogen Intensity Mapping Experiment (CHIME). With its large field of view and good sensitivity, CHIME is predicted to discover tens of FRBs per day <cit.> in its frequency range of 400–800 MHz. The GBNCC survey is thus well placed to determine the expected detection rate for the lower part of the CHIME band.
In this paper, we present results from the search for FRBs in GBNCC survey pointings observed through May 2016. For the purpose of our search and subsequent analysis, we define an FRB as an astrophysical pulse with a DM greater than twice the maximum line-of-sight Galactic DM. The suggestion by <cit.> of a possibly Galactic origin of the excess DM of the only FRB with a DM ratio < 2, FRB010621 <cit.>, lends support to our choice of a DM ratio of 2 for the FRB definition.
Our paper is organized as follows. In Section <ref>, we give a description of the survey and its sensitivity. We describe the data analysis pipeline in Section <ref> and place constraints on the FRB rate in Section <ref>. In Section <ref>, we constrain the mean spectral index of FRBs by performing Monte-Carlo simulations of a population of FRBs. We discuss the implications for current and upcoming surveys in Section <ref> and present our summary and conclusions in Section <ref>.
§ OBSERVATIONS
§.§ Survey Description
The Green Bank Northern Celestial Cap (GBNCC) Pulsar Survey <cit.> began in 2009 with the aim to search for pulsars and RRATs, particularly millisecond pulsars suitable for inclusion in the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) pulsar timing array[<http://nanograv.org>]. The search is conducted using the 100-m diameter Robert C. Byrd Green Bank Telescope (GBT) at a frequency of 350 MHz. Data spanning 100 MHz of bandwidth split into 4096 frequency channels are recorded with the Green Bank Ultimate Pulsar Processing Instrument (GUPPI). Each pointing on the sky is observed for 120 s and sampled with a 81.92-μs time resolution.
The entire sky visible to the GBT (δ > -40) has been divided into ∼125000 pointings, around 75000 of which have been observed through May 2016. In the initial days of the survey, data were searched to a maximum DM of 500 pc cm^-3. Motivated by the discovery of FRBs, the maximum DM for the search was increased to 3000 pc cm^-3. However, the initial pointings are yet to be reprocessed with this updated parameter. A total of 71% of the pointings were searched to a DM of 3000 pc cm^-3 and 29% of the pointings were searched to a DM of 500 pc cm^-3. The search in DM space is conducted by stepping over a range of trial DMs with ΔDM being the step size between consecutive trials. The DM step sizes used by the search pipeline for the GBNCC survey are mentioned in the caption to Figure <ref>.
Not all pointings observed by the GBNCC survey were examined during the analysis reported on here. Pointings searched to a DM of 500 pc cm^-3 for which the maximum line-of-sight Galactic DM predicted by the NE2001 model <cit.> was greater than 100 pc cm^-3 were not inspected. This is because our adopted definition of an FRB implies that these 7000 pointings probed an extremely small range of extragalactic DMs compared to the rest of the pointings. Removal of an additional 3000 pointings that were rendered unusable by the presence of radio frequency interference (RFI) limited the total observing time for the FRB search to 84 days. The time corresponding to an estimated masking fraction of 2% for all pointings has been subtracted from the total time on sky reported here.
Figure <ref> shows the GBNCC pointings included in our FRB search overlaid on the sky map of the maximum Galactic DM predicted by the NE2001 model <cit.>. The temporal distribution of the pointings is shown in Figure <ref>.
§.§ Survey Sensitivity
The minimum detectable flux density S_min for FRBs searched with the GBNCC survey can be calculated using the expression derived by <cit.>:
S_min = β (S/N)_b (T_rec + T_sky) / (G W_i) × √( W_b / (n_p Δν) ),
where β is a factor accounting for digitization losses, (S/N)_b is the minimum detectable signal-to-noise ratio of the broadened pulse, T_rec is the receiver temperature, T_sky is the sky temperature, W_i and W_b are the intrinsic and broadened pulse widths, respectively, G is the telescope gain, n_p is the number of polarizations summed and Δν is the bandwidth. Values of the above-mentioned parameters for the GBNCC survey are listed in Table <ref>. We use Δν = 75 MHz instead of the recorded bandwidth of 100 MHz to account for roll-off at the bandpass edges and for the estimated masking fraction of 5% in the frequency domain. The average sky temperature at 350 MHz, T_sky = 44 K, along the line of sight for all the pointings included in the FRB search has been estimated using the 408 MHz all-sky map <cit.> and a spectral index of -2.6 for Galactic emission.
The broadened pulse width W_b accounts for both instrumental and propagation effects, and is computed from the quadrature sum as follows:
W_b = √(W_i^2 + t_samp^2 + t_chan^2 + t_scatt^2) .
Here t_samp is the sampling time and t_scatt is the scattering time arising from multi-path propagation of signals caused by an ionized medium. The dispersive delay within each frequency channel, t_chan, is calculated (see, e.g., ) as follows:
t_chan = 8.3 μ s (Δν_chan/MHz) (ν/GHz)^-3(DM/pc cm^-3),
where ν is the central observing frequency and Δν_chan is the channel bandwidth.
For an intrinsic pulse width W_i = 5 ms, scattering time t_scatt = 0 ms and a DM of 756 pc cm^-3 (the mean DM of known FRBs; <cit.>), the minimum detectable flux density for the GBNCC survey is 0.63 Jy. We note that there is a reduction in sensitivity to high DM events since the dispersive delay for these events across a bandwidth of 100 MHz is a large fraction of the observation time per pointing. However, a significant fraction (29%) of our pointings have been searched to a DM of 500 pc cm^-3, where this effect is not important. Also, since the highest DM observed for a known FRB is 1629 pc cm^-3 <cit.>, the sensitivity is impacted only for a small region of the parameter space.
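As an illustrative cross-check, the short Python sketch below evaluates the three expressions above for the GBNCC parameters. The digitization factor β = 1.05 is an assumed, typical value for this sketch, (S/N)_b = 10 follows the detection threshold used later in the analysis, and the gain is the field-of-view-averaged 1.44 K/Jy used in Section 4; with these inputs the sketch reproduces the 0.63 Jy threshold.

```python
import numpy as np

def t_chan_ms(dm, nu_ghz=0.350, chan_bw_mhz=100.0 / 4096):
    """Intra-channel dispersive smearing in ms."""
    return 8.3e-3 * chan_bw_mhz * nu_ghz ** -3 * dm

def s_min_jy(w_i_ms, dm, t_scatt_ms=0.0, snr_b=10.0, beta=1.05,
             t_rec=23.0, t_sky=44.0, gain=1.44, n_p=2, bw_mhz=75.0,
             t_samp_ms=0.08192):
    """Minimum detectable flux density for the GBNCC survey (Jy)."""
    w_b_ms = np.sqrt(w_i_ms ** 2 + t_samp_ms ** 2 +
                     t_chan_ms(dm) ** 2 + t_scatt_ms ** 2)
    return (beta * snr_b * (t_rec + t_sky) / (gain * w_i_ms * 1e-3)
            * np.sqrt(w_b_ms * 1e-3 / (n_p * bw_mhz * 1e6)))

print(s_min_jy(5.0, 756.0))   # ~0.63 Jy at the mean DM of known FRBs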
The minimum detectable flux density is plotted as a function of intrinsic pulse width and scattering time, for different DM step sizes, in Figure <ref>. The minimum detectable S/N, used for the calculation of the minimum detectable flux density, is also dependent on the intrinsic pulse width, scattering timescale and DM step size. The dependence of the S/N on these variables is part of the code used to search and rank FRB candidates, RRATtrap (described in Section <ref>). The rationale for this dependence is detailed in Section <ref>.
§ ANALYSIS
The analysis pipeline, based on the PRESTO software package <cit.>[<http://www.cv.nrao.edu/~sransom/presto>], is run on the Guillimin High Performance Computing (HPC) cluster operated at McGill University by CLUMEQ & Compute Canada. The first step of processing involves searching for and masking time samples and frequency channels containing RFI. The effect of dispersion is then corrected for by dedispersing the data at a large number of trial DMs up to a maximum of 500 pc cm^-3 or 3000 pc cm^-3. The dedispersed and downsampled time series for each trial DM is subsequently searched for single pulses using a matched filtering algorithm which convolves the time series with box cars having widths ranging from 81.92 μs to 100 ms. All single pulse events with S/N greater than 5 are stored for further processing. The above-mentioned analysis has been described in detail in <cit.>. The single-pulse output is processed by a grouping and rating algorithm RRATtrap[The code is available at <https://github.com/ajosephy/Clustering/> and is a modified version of the code by <cit.>, which is available at <https://github.com/ckarako/RRATtrap>] which has aided in the discovery of 10 RRATs in GBNCC survey data <cit.>.
§.§ RRATtrap
The large number of DM trials ensures that each pulse (astrophysical or RFI) is detected as multiple single pulse events that are closely spaced in DM and time. RRATtrap groups all such events and ranks the groups based on how closely they match the behavior of an astrophysical pulse. It then produces colorized DM versus time plots for several DM ranges with groups of different ranks plotted in different colors.
A group of fewer than 30 single pulse events occurring within a fixed DM and time threshold is classified as noise and not processed further. A considerable fraction of the single pulse events in our pointings fall in this category. Strong narrow-band RFI is another major source of single pulse events. The algorithm deals with these signals, that we know have a terrestrial origin, by assigning a low rank to groups with the S/N peaking at a DM < 2 pc cm^-3. A low rank is also assigned to a group corresponding to a narrow-band signal, identified by it being detected with a constant S/N over a large range of DMs. A bright, broadband signal from an astrophysical source will be detected with the maximum S/N at an optimal DM and with lower S/N at closely spaced trial DMs above or below the optimal DM due to dispersive smearing. Groups exhibiting this characteristic of astrophysical pulses are ranked highly.
§.§.§ RRATtrap Sensitivity
RRATtrap exhibits a significant variation in sensitivity with pulse width due to our requirement of a minimum of 30 single pulse events for a group to be ranked. Sensitivity to extremely narrow pulses is reduced since dispersive smearing prevents the detection of the pulse at 30 DM trials. The reduction in the sensitivity is maximum at high DMs where the DM step size increases to 0.5 pc cm^-3.
In order to determine whether a pulse will be ranked by RRATtrap, we first obtain the peak flux S corresponding to the S/N of the pulse at the optimal DM (S/N_peak), using Equation <ref>. The reduction in the peak flux S of the pulse due to dedispersion at an incorrect trial DM is modelled by the following equation derived by <cit.>:
S(δDM)/S = (√π / 2) ζ^-1 erf(ζ),
where
ζ = 6.91 × 10^-3 δDM Δν_MHz / (W_i,ms ν_GHz^3).
Here ν_GHz = 0.350 GHz is the center frequency of the GBNCC survey and S(δDM) is the reduced flux measured at a trial DM differing from the optimal DM by δDM. The width of the pulse dedispersed at an incorrect trial DM is given by W(δDM) = S W_i / S(δDM) since dispersive smearing conserves pulse area A = S W_i <cit.>. A single pulse event at a trial DM, with a DM error of δDM, will therefore be detected with a S/N which can be determined by substituting the reduced flux S(δDM), the smeared pulse width W(δDM) and other parameters of the GBNCC survey in Equation <ref>.
For a given pulse width, we can thus obtain the minimum value of the peak S/N that will allow detection of 30 single pulse events with a S/N > 5. The minimum detectable peak S/N is plotted as a function of intrinsic pulse width and scattering time, for different DM step sizes, in Figure <ref>. The S/N used to calculate the threshold flux density of the GBNCC survey is set to be the minimum value of the peak S/N evaluated using the above-mentioned method or 10, whichever is greater. This is done to account for the fact that only pointings having a FRB candidate with a S/N > 10 were visually inspected (see Section <ref>).
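To make the cluster criterion concrete, the sketch below (a simplification that neglects instrumental smearing) uses the fact that, at fixed pulse area, S/N ∝ S√W, so dedispersion at an offset δDM scales the S/N by √(S(δDM)/S). Counting the trial DMs at which a pulse stays above S/N = 5 then shows which pulses can accumulate the 30 events RRATtrap requires.

```python
import numpy as np
from scipy.special import erf

def flux_fraction(delta_dm, w_i_ms, nu_ghz=0.350, bw_mhz=100.0):
    """S(dDM)/S for a DM error delta_dm (pc cm^-3)."""
    zeta = 6.91e-3 * np.asarray(delta_dm, float) * bw_mhz / (w_i_ms * nu_ghz ** 3)
    zeta = np.where(zeta < 1e-9, 1e-9, zeta)   # limit -> 1 as dDM -> 0
    return np.sqrt(np.pi) / 2.0 * erf(zeta) / zeta

def n_events(snr_peak, w_i_ms, dm_step=0.5, snr_min=5.0, n_trials=200):
    """Number of DM trials at which the pulse exceeds snr_min."""
    offsets = np.arange(-n_trials, n_trials + 1) * dm_step
    f = flux_fraction(np.abs(offsets), w_i_ms)
    return int(np.sum(snr_peak * np.sqrt(f) > snr_min))

# RRATtrap needs >= 30 events; at the coarsest DM step of 0.5 pc cm^-3:
print(n_events(30.0, 5.0) >= 30)   # bright 5-ms pulse: ranked
print(n_events(10.0, 1.0) >= 30)   # faint narrow pulse: missed
```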
Search Parameters for Various FRB Surveys

Survey          | Field of View (sq. deg.) | Bandwidth (MHz) | Center Freq. (MHz) | No. of Freq. Channels | Polarizations Summed | Gain^a (K/Jy) | T_rec (K) | Ref.
GBNCC           | 0.408       | 100   | 350  | 4096  | 2 | 2           | 23   | 1
Parkes^b        | 0.559^c     | 340   | 1352 | 1024  | 2 | 0.64        | 23   | 2
UTMOST          | 4.64 x 2.14 | 31.25 | 843  | 40    | 1 | 3.6         | 70   | 3
PALFA           | 0.022       | 322   | 1375 | 960   | 2 | SEFD = 5    | --   | 4
CHIME           | 134         | 400   | 600  | 16000 | 2 | 1.38        | 50   | 5
AO327           | 0.049       | 57    | 327  | 1024  | 2 | 11          | 115  | 6
GBT (800 MHz)   | 0.055       | 200   | 800  | 4096  | 2 | 2           | 26.5 | 5
LPPS (LOFAR)    | 75          | 6.8   | 142  | 560   | 2 | SEFD = 1141 | --   | 7
ARTEMIS (LOFAR) | 24          | 6     | 145  | 64    | 2 | SEFD = 1100 | --   | 8
ALERT (APERTIF) | 8.7         | 300   | 1400 | 1024  | 2 | 0.96        | 75   | 9
V-FASTR         | 0.364       | 32^d  | 1550 | 512^d | 2 | SEFD = 311  | --   | 10,11
MWA             | 600         | 30.72 | 155  | 24    | 2 | 1^e         | 50   | 12
MWA             | 145         | 30.72 | 182  | --    | 2 | 1^e         | 50   | 13
VLA             | 0.283       | 256   | 1396 | 256   | 2 | SEFD = 16   | --   | 14

^a Surveys for which T_rec and gain (G) were not documented have their system equivalent flux densities, SEFD = (T_rec + T_sky)/G in Jy, listed in the Gain column. T_sky for all other surveys has been evaluated assuming an average sky temperature of 34 K at 408 MHz and a spectral index of -2.6 <cit.>.
^b The parameters are valid for the HTRU survey, for which the rate was reported by <cit.>. <cit.> estimated the FRB rate using several Parkes surveys, the parameters for which have been reported in their paper.
^c The field of view quoted here for the 13-beam receiver of the Parkes telescope has been calculated based on the single-beam field of view of 0.043 sq. deg. reported by <cit.>.
^d The no. of frequency channels and bandwidth reported for V-FASTR are representative values as the observing set-up can vary between observations.
^e The gain for MWA is given by A_eff/2k, where k is the Boltzmann constant and A_eff is the effective area of the telescope reported by <cit.>.
References: 1 - <cit.>, 2 - <cit.>, 3 - <cit.>, 4 - <cit.>, 5 - <cit.>, 6 - <cit.>, 7 - <cit.>, 8 - <cit.>, 9 - <cit.>, 10 - <cit.>, 11 - <cit.>, 12 - <cit.>, 13 - <cit.>, 14 - <cit.>
§.§.§ Modifications to RRATtrap
Algorithmic changes were made to the grouping stage. Initially, this was done via “agglomerative hierarchical clustering" (AHC) <cit.>, which runs in O(n^3) time for the simplest implementation, where n is the number of single pulse events. AHC is a bottom-up approach where all events are first initialized as individual groups and then iteratively merged based on proximity in DM and time. Merging terminates once the minimum separation between groups, in either DM or time, is above some dimension-specific threshold. The threshold in time is taken as 100 ms, corresponding to the largest boxcar used to detect pulses. The threshold in DM is taken as 0.5 pc cm^-3 and is increased for large DMs, where the separation in trial DMs increases.
The AHC method was replaced with the “density-based spatial clustering of applications with noise" (DBSCAN) algorithm <cit.>, which runs in O(n log n) time.
DBSCAN works by taking an arbitrary event and running a nearest neighbour query to start a group including events which are sufficiently nearby. This group is then iteratively grown outwards by repeating the neighbourhood query for newly added members.
Once the reachable events are exhausted, the group is complete and the process repeats for the next unvisited event.
Since the distance thresholds used by both algorithms determine whether or not two events belong to the same group, identical thresholds yield identical output.
The purpose of the change was to reduce runtime. The performance improvement is largely due to storing the single pulse events in a k-d Tree <cit.> which allows neighbourhood queries to be done in logarithmic time.
A k-d Tree is a space-partitioning data structure used to organize data existing in k dimensions. For our two dimensions, the tree is constructed as follows. The median event in time is taken as the root, which partitions the plane in two. Now for each side, median events in DM are taken to further partition the plane into four regions; these two events are the nodes in the second level of the tree. This process continues, cycling in DM and time, until all events exist as nodes on the tree. The construction of the tree takes O(n log n) time.
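A minimal sketch of this grouping stage, not the actual RRATtrap implementation, is given below. It uses SciPy's cKDTree with the DM and time thresholds quoted above; rescaling each coordinate by its threshold and querying a Chebyshev (p = ∞) ball of unit radius applies the two thresholds independently.

```python
import numpy as np
from scipy.spatial import cKDTree

def group_events(dms, times_s, dm_thresh=0.5, t_thresh=0.1, min_size=30):
    """DBSCAN-style grouping of single-pulse events in (DM, time)."""
    pts = np.column_stack([np.asarray(dms) / dm_thresh,
                           np.asarray(times_s) / t_thresh])
    tree = cKDTree(pts)                         # built in O(n log n) time
    labels = np.full(len(pts), -1, dtype=int)   # -1 = unvisited
    group = 0
    for seed in range(len(pts)):
        if labels[seed] >= 0:
            continue
        labels[seed] = group
        stack = [seed]
        while stack:                            # grow the group outwards
            here = stack.pop()
            # p=inf applies the DM and time thresholds independently
            for nb in tree.query_ball_point(pts[here], 1.0, p=np.inf):
                if labels[nb] < 0:
                    labels[nb] = group
                    stack.append(nb)
        group += 1
    sizes = np.bincount(labels)
    return labels, np.flatnonzero(sizes >= min_size)   # groups kept for ranking
```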
§.§ Visual Inspection
A total of 72% of the pointings had at least one single pulse with a S/N > 10. These 44000 pointings were processed with the modified version of RRATtrap to group and rank single pulse events at a DM greater than twice the maximum line-of-sight Galactic DM, DM_max. There is a 10% chance that an astrophysical pulse will not be ranked highly by RRATtrap <cit.>. To ensure no effect of this false negative rate on our search, we did not apply RRATtrap ranks as a criteria for visual inspection and inspected plots (corresponding to DM ranges for which DM > 2DM_max) for all 44000 pointings, regardless of the ranks of the groups they contained. However, the colors corresponding to the ranks guided the eye during the inspection of the plots. We flagged potential astrophysical candidates in these pointings and obtained their dynamic spectrum, or their intensity as a function of frequency and time. All flagged candidates had characteristics consistent with RFI and showed no evidence of a dispersive sweep. We conclude that no FRB with a S/N greater than the detection threshold of RRATtrap (see Figure <ref>) was present in these pointings.
§ CALCULATION OF FRB RATE
§.§ Estimation of Sky Rate
The non-detection of FRBs in our search is a significant result since it constrains the all-sky FRB rate at 350 MHz. Assuming FRBs follow Poisson statistics, the probability of detecting N FRBs,
P(N) = (R T Ω)^N e^(-R T Ω) / N!,
where Ω is the solid angle of the beam, T is the total observing time and R is the FRB rate per unit solid angle. The 95% confidence upper limit on the rate is the upper bound for which normalization and integration of Equation <ref>, with a lower bound of R = 0, yields a value of 0.95 for the case N = 0.
We will be reporting the rate for two different beam areas, one for the field-of-view corresponding to the FWHM of the GBT beam and another for the field-of-view at the edge of which the gain is equal to 0.64 K/Jy (i.e. the Parkes 1.4-GHz on-axis gain; <cit.>). The former will be referred to as the FWHM case and the latter as the Parkes-equivalent case. Since all but two of the currently known FRBs have been detected using the Parkes telescope, we estimate the rate for the Parkes-equivalent case to facilitate comparison with the Parkes 1.4-GHz rate estimate <cit.>. Knowing that the GBT beam is well approximated by a two-dimensional symmetric Gaussian, we obtain Ω = 0.408 sq. deg. for the FWHM case (θ_0 = 36')[<https://science.nrao.edu/facilities/gbt/proposing/GBTpg.pdf>] and Ω = 0.672 sq. deg. (θ_0 = 46') for the Parkes-equivalent case.
The total time on sky, T, for GBNCC pointings searched to a DM of 3000 pc cm^-3 is 61 days and, for pointings searched to a DM of 500 pc cm^-3 is 23 days. The latter pointings are sensitive only to FRBs with low extragalactic DM contributions. Thus, we are unevenly sampling the range of extragalactic DMs for the pointings we have searched implying an uneven coverage of potential FRBs. However, if we assume that all values of extragalactic DM contribution are equally likely, we can estimate an upper limit using the total observing time of 84 days that includes both pointings searched to a DM of 3000 pc cm^-3 and 500 pc cm^-3.
For the flux limit S_min = 0.63 Jy corresponding to the field-of-view-averaged gain of 1.44 K/Jy for the FWHM case, we estimate a 95% confidence upper limit on the FRB rate of
R < 4.98 × 10^3 FRBs sky^-1 day^-1 (T = 61 days)
R < 3.62 × 10^3 FRBs sky^-1 day^-1 (T = 84 days)
and, for the flux limit S_min = 0.76 Jy corresponding to the field-of-view-averaged gain of 1.19 K/Jy for the Parkes-equivalent case, we obtain,
R < 3.03 × 10^3 FRBs sky^-1 day^-1 (T = 61 days)
R < 2.20 × 10^3 FRBs sky^-1 day^-1 (T = 84 days) .
The survey is ongoing with ∼50000 pointings left to be observed in order to cover the GBT visible sky. A non-detection in these pointings will improve the constraint on the rate to 1.98× 10^3 FRBs sky^-1 day^-1 for the FWHM case.
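For N = 0, normalizing and integrating the Poisson likelihood gives the closed form R < -ln(1 - 0.95)/(T Ω), so the limits above can be verified in a few lines:

```python
import numpy as np

SKY_SQDEG = 4.0 * 180.0 ** 2 / np.pi     # ~41253 sq. deg. on the full sky

def rate_upper_limit(t_days, omega_sqdeg, conf=0.95):
    """Poisson upper limit on the all-sky FRB rate for zero detections."""
    return -np.log(1.0 - conf) / (t_days * omega_sqdeg) * SKY_SQDEG

print(rate_upper_limit(84, 0.408))   # ~3.6e3 FRBs/sky/day (FWHM case)
print(rate_upper_limit(84, 0.672))   # ~2.2e3 FRBs/sky/day (Parkes-equivalent)
```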
§.§ Estimation of Volumetric Rate
We can also constrain the volumetric rate of FRBs up to the redshift out to which the GBNCC survey searches. For each pointing, we are searching out to a different redshift as the DM contribution from the Galaxy varies greatly across the sky. We estimate the DM due to our Galaxy, DM_MW, as the maximum line-of-sight DM predicted by the NE2001 model for each of our pointings. The DM contribution of the IGM can be estimated using the following equation:
DM_IGM = DM_thresh - ( DM_host/(1+z) + DM_MW )
Here DM_thresh is the maximum DM searched by the analysis pipeline, either 3000 pc cm^-3 or 500 pc cm^-3. Assuming the electron density distribution of the potential host galaxy of the FRB progenitor to be similar to that of our Galaxy, we obtain a DM contribution for the host galaxy, DM_host = 80 pc cm^-3, by averaging over the maximum DM predicted by the NE2001 model for evenly spaced lines-of-sight through our Galaxy. However, we assume DM_host = 100 pc cm^-3 for evaluating the limiting redshift of the GBNCC survey, following <cit.>. The above assumption is to allow for a meaningful comparison with the redshifts of 0.5 to 1 inferred by <cit.> for four FRBs discovered with the Parkes telescope. The assumption for DM_host is reduced by a factor of (z+1) to facilitate comparison with the effect of DM_MW and DM_IGM <cit.>. The reduction in the DM of the host galaxy accounts for the decrease in the observed frequency by a factor of (z+1) as compared to the emission frequency of a source at a redshift z and the increase in the observed dispersive delay by a factor of (z+1). The limiting redshift, z, for each pointing can be determined using the DM-redshift relation, DM_IGM = 1200 z pc cm^-3 <cit.>. We find the mean limiting redshift, z_lim = 1.84, for the GBNCC pointings included in our FRB search.
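Because DM_host enters with the (1+z) factor, the limiting redshift of each pointing is the root of a simple nonlinear equation; the sketch below solves it with an illustrative (assumed) Galactic contribution of DM_MW = 40 pc cm^-3.

```python
from scipy.optimize import brentq

def z_lim(dm_thresh, dm_mw, dm_host=100.0):
    """Limiting redshift of a pointing: solves
    1200 z = DM_thresh - DM_MW - DM_host/(1+z)."""
    f = lambda z: 1200.0 * z - (dm_thresh - dm_mw - dm_host / (1.0 + z))
    return brentq(f, 0.0, 10.0)

# illustrative pointing with an assumed DM_MW = 40 pc cm^-3:
print(z_lim(3000.0, 40.0))   # ~2.4 for the deep DM search
print(z_lim(500.0, 40.0))    # ~0.3 for the shallower DM search
```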
We note that there are significant uncertainties in the DM-redshift relation used for the estimation of the limiting redshift. However, the relation is corroborated by the determination of the redshift of the repeating FRB121102 and the resulting estimate of the DM of its host galaxy. The DM obtained for the host galaxy, after subtracting the Galactic DM and the DM contribution estimated for the IGM using the DM-redshift relation, is equivalent to that expected from a dwarf galaxy <cit.>. The observations of <cit.> also imply that the assumption of DM_host = 100 pc cm^-3 could be an underestimate if FRBs preferentially exist in dwarf galaxies. The mean limiting redshift for the GBNCC pointings reduces to z_lim = 1.79, if we assume DM_host to be equal to the upper limit on the inferred DM of the host galaxy of FRB121102 (225 pc cm^-3). The estimate of the mean limiting redshift is thus not sensitive to the assumption for the DM contribution of the host galaxy.
We then estimate the comoving volume surveyed by each of the pointings using the solid angle for the GBT beam at 350 MHz, Ω = 0.408 sq. deg. and assuming Planck 2015 cosmological parameters <cit.>. The total comoving volume searched by the survey is estimated by summing up the comoving volume for all the pointings and is equal to 3.8 × 10^11 Mpc^3. We note that the above estimate is an upper limit at best since the comoving volume surveyed at 350 MHz is flux-limited and cannot be correctly determined by the maximum DM searched. The intrinsic luminosity distribution of FRBs could be such that FRBs at high redshifts have flux densities less than the survey sensitivity. Additionally, pulses from high-redshift FRBs, whose intrinsic luminosity does not limit detectability, can be broadened by intra-channel and DM step smearing. Since the threshold flux density determined by Equation <ref> depends on the broadened pulse width which increases with increasing redshift, high-redshift FRBs with correspondingly higher DMs are harder to detect, which can also cause our survey in this volume to be flux-limited.
The upper limit on the FRB rate per unit comoving volume inferred using our upper limit on the sky rate for the FWHM case of 3.6 × 10^3 FRBs sky^-1 day^-1 is 3.5 × 10^3 Gpc^-3 yr^-1, for isotropic emission. The rate reported here is valid up to the mean limiting redshift for the GBNCC pointings, z_lim = 1.84, and under the assumptions that the population of FRBs does not evolve with redshift and that all FRBs located at z < z_lim are detectable with GBNCC. The rate could be an underestimate if FRBs exhibit beamed radio emission. This is possible if FRBs are extragalactic as the extremely high implied brightness temperatures in that scenario would suggest that the emission is coherent and beamed.
The survey's limiting redshift and the corresponding upper limit on the volumetric rate can also be estimated following the method and assumptions detailed in <cit.>. The rate estimate of 3.3 × 10^3 FRBs sky^-1 day^-1 above a fluence of 3.8 Jy ms for the Parkes surveys reported by <cit.> is assumed to be survey independent and valid at a frequency of 350 MHz and for a limiting redshift of z_lim = 0.75 <cit.>. The limiting redshift of 0.75 is an assumption based on the redshifts of 0.5 to 1 inferred from the DMs of the FRBs discovered by <cit.>. We translate the Parkes rate to a range of redshifts by assuming a constant comoving number density distribution of FRBs. We compute the number of FRBs detectable by GBNCC for a range of limiting redshifts using the corresponding Parkes rate. The number detectable with GBNCC for an observing time of 84 days is represented by the white curve shown in Figure <ref>. The limiting redshift, z_lim = 0.37, is the one for which the GBNCC survey is predicted to detect 1 FRB. The conclusion is justified because if the survey were sensitive to a redshift greater than z_lim, then the Parkes rate estimate predicts a detection with the GBNCC survey, which is inconsistent with our observations.
The above two limiting redshift estimates, obtained using different approaches, depend on several different assumptions that cannot presently be tested. The large discrepancy between the two redshift estimates can be explained if the comoving volume estimated for the first case (hereafter case A) is flux-limited such that FRBs located at z < 1.84 are not detectable even though we are searching the DM range extending out to z_lim = 1.84. The estimate of the limiting redshift for the second case (hereafter case B) is thus more robust since it is based on the GBNCC survey sensitivity and the underlying assumption of FRBs being standard candles which ensures that all sources in the estimated comoving volume are detectable.
Assuming the redshift estimate of 0.37 to be correct, we conclude the upper limit on the volumetric rate to be 1.6 × 10^5 Gpc^-3 yr^-1, with the caveat that treating the 1.4-GHz rate estimate as an all-sky rate at 350 MHz involves the implied assumption of a flat spectral index. Obtaining the rate at 350 MHz by scaling with a different assumed spectral index would change the estimate of the limiting redshift and volumetric rate. The estimate is also sensitive to the assumed intrinsic luminosity distribution and would vary if instead of the standard candle assumption, a distribution of luminosities were assumed.
§ SPECTRAL INDEX CONSTRAINTS
Observations of FRBs can help determine the intrinsic spectral index if the position of the FRB within the telescope beam is known. <cit.> measured α = 1.3 ± 0.5 for FRB150418 assuming that the FRB is located at the position of the potentially associated variable source found within the Parkes beam. The association has, however, been questioned by <cit.> and <cit.>. The intrinsic spectral index can also be constrained by methods other than observation and localization. In this section, we use the non-detection with GBNCC to constrain the spectral index for different astrophysical scenarios.
We perform Monte Carlo simulations for FRB flux distributions consistent with the rate estimate reported at 1.4 GHz for the Parkes surveys, 3.3 × 10^3 FRBs sky^-1 day^-1 <cit.>. We assume a power-law flux density model for FRBs with flux density at a frequency ν, S_ν∝ν^α. The cumulative flux density distribution function (the log N-log S function) of the FRB population is also modelled as a power law with an index γ. This implies that the number of FRBs with a flux density greater than S,
N ( > S) ∝ S^-γ.
For a non-evolving population uniformly distributed in a Euclidean universe, γ = 1.5, for any luminosity distribution. Any value other than 1.5 would argue for FRBs being a cosmological population and/or exhibiting redshift-dependent evolution. <cit.> calculate γ based on multiple-beam detections with Parkes and different detection rates for varying dish diameters, and report a constraint, 0.66 < γ <0.96. <cit.> derive the constraint 0.8 ≤γ≤ 1.7 making use of the detections with the HTRU survey at Parkes and PALFA Survey at Arecibo. We use three different values of the slope of the log N-log S function (γ = 0.8, 1.2 and 1.5) for our simulations to roughly sample the range in which it is estimated to vary.
§.§ Absence of Scattering & Free-Free Absorption
To reconcile the upper limit on the FRB rate obtained from GBNCC with the observed rate from the Parkes surveys, it may be that FRBs are rendered undetectable at low frequencies by scattering and/or the presence of a spectral turnover, either intrinsic to the emission mechanism or due to free-free absorption. In the absence of scattering and free-free absorption, the intrinsic spectral index needs to be relatively flat or even positive to account for our non-detection.
We ran 100 Monte Carlo iterations each for different cumulative flux density distributions (γ = 0.8, 1.2 and 1.5). For each Monte Carlo iteration, we generated a flux density distribution of FRBs at 1.4 GHz consistent with the rate for Parkes surveys. We scaled the distribution to 350 MHz by sampling the spectral index of each FRB from a normal distribution (σ = 0.5) centered on the mean spectral index α ranging from -4 to +1. From the resulting flux distribution, we computed the all-sky rate of FRBs above a peak flux density of 0.63 Jy at 350 MHz. Figure <ref> shows the number of GBNCC-detectable FRBs i.e. the difference of the computed all-sky FRB rate and the 95% confidence GBNCC upper limit as a fraction of the computed all-sky FRB rate, for a range of spectral indices.
The constraining spectral index is the one for which the computed all-sky rate was found to be equal to the 95% confidence GBNCC upper limit i.e. when simulations do not predict any detections in the absence of scattering and free-free absorption. The constraints on the mean spectral index for different values of γ are listed in Table <ref>. The strongest constraint, α > 0.35, was obtained for a Euclidean flux distribution (γ = 1.5). The constraint depends on the assumed width of the distribution of spectral indices since the detectable FRBs in each distribution will be those with lower spectral indices. Therefore, decreasing the width will weaken the constraint on the mean spectral index. In the event of all FRBs having the same spectral index, we derive the constraint α > 0.09, for γ = 1.5.
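A stripped-down version of one Monte Carlo iteration is sketched below. Fluxes at 1.4 GHz are drawn from the power-law distribution by inverse-CDF sampling; the reference threshold s_ref = 0.56 Jy is an assumed stand-in for the Parkes completeness limit, so the printed rate is only indicative.

```python
import numpy as np

rng = np.random.default_rng(42)

def rate_350(alpha_mean, gamma, r_parkes=3.3e3, s_ref=0.56,
             s_min_350=0.63, sigma_alpha=0.5, n=200_000):
    """Predicted all-sky FRB rate above the GBNCC threshold at 350 MHz.

    Fluxes at 1.4 GHz follow N(>S) propto S^-gamma down to s_low and
    are scaled to 350 MHz with per-burst spectral indices; s_ref is an
    assumed stand-in for the Parkes completeness threshold.
    """
    s_low = s_ref / 100.0                      # include faint sources
    r_low = r_parkes * (s_low / s_ref) ** -gamma
    s_14 = s_low * rng.uniform(size=n) ** (-1.0 / gamma)   # inverse CDF
    alpha = rng.normal(alpha_mean, sigma_alpha, size=n)
    s_350 = s_14 * (350.0 / 1400.0) ** alpha
    return r_low * np.mean(s_350 > s_min_350)  # FRBs per sky per day

# compare with the GBNCC 95% upper limit of ~3.6e3 FRBs/sky/day:
print(rate_350(alpha_mean=0.35, gamma=1.5))
```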
§.§ Scattering
Scattering may arise from three sources: our Galaxy, the intergalactic medium (IGM) and the host galaxy. Figure <ref> shows the scattering times at 350 MHz predicted by the NE2001 model along the lines of sight of all GBNCC pointings that were searched for FRBs. Since the scattering time for 98% of these pointings is less than 10 ms (much less than our maximum searched box car width; see above), we can assume that the scattering from Galactic structures, which are modelled by the NE2001 model, is not responsible for smearing all potentially GBNCC-detectable FRBs beyond detection. However, compact regions of high electron density in our Galaxy, which are not accounted for by the NE2001 model, can potentially result in scattering timescales greater than 10 ms.
<cit.> argue against the IGM being the dominant source of scattering and support the hypothesis of strong scattering from either the dense central region of the host galaxy or a compact nebula surrounding the source. This conclusion is derived from FRB110523 showing evidence of being scattered by two plasma screens and exhibiting strong scintillation. <cit.> found no correlation between the measured pulse widths of FRBs and their extragalactic DMs suggesting that the IGM does not contribute to both scattering and extragalactic DM.
Having established the contribution to scattering from the IGM and the Galactic structures modelled by the NE2001 model as being irrelevant for our non-detection, we ran our simulations with a three-parameter log-normal distribution of scattering times. The parameters of this distribution were chosen on the basis of the distribution of Earth-centered scattering times for our Galaxy to allow for both source models supported by <cit.>, namely a dense nebula local to the source or location in the central region of the host galaxy. The threshold parameter defines the minimum of the distribution and is set to be equal to the minimum Earth-centered scattering time for our Galaxy, 4.3× 10^-3 ms. The scale of the distribution was set as a free parameter to allow for a range of values of the mean. The standard deviation, σ = 2.74 ms, of the underlying normal distribution was set to be the same as that of the distribution of scattering times for our Galaxy at 350 MHz predicted by the NE2001 model.
As in Section <ref>, we generated a flux density distribution at 1.4 GHz and scaled it to 350 MHz using spectral indices drawn from a normal distribution centered on the mean spectral index (-4 < α < 0). We estimated the threshold flux density of the GBNCC survey to be 0.82 Jy for t_scatt = 10 ms, accounting for the contribution to scattering from the IGM and our Galaxy. FRBs in the flux distribution that are detectable with GBNCC (S > 0.82 Jy) were assigned scattering times drawn from the above-mentioned log-normal distribution. This step was repeated with the mean of the log-normal distribution increased for each repetition until the scattering timescales of all detectable FRBs became greater than 100 ms. Since the widest box car template used by our search pipeline for detecting single pulses is 100 ms (see Section 3), FRBs with a scattering timescale greater than 100 ms will not be detected with an optimal S/N by our search pipeline. The above analysis assumes uniform sensitivity to pulses of any scattering timescale less than 100 ms. Although there is a reduced sensitivity to highly scattered pulses because of the prevalence of RFI on longer timescales, the effect is countered by the reduction in the minimum peak S/N required to satisfy RRATtrap's cluster requirement with increase in the scattering timescale, as shown in Figure <ref>.
Figure <ref> shows the mean scattering time of the log-normal distribution that can render FRBs in the flux density distribution expected to be seen by GBNCC (with S > 0.82 Jy) in an observing time of 84 days undetectable, for a range of spectral indices. A more negative spectral index would predict a higher number of detections with GBNCC requiring a higher mean scattering time at 350 MHz to render all the FRBs undetectable. We find our constraint on the spectral index, α_lim, to be the one for which the mean scattering time at 350 MHz scales to the maximum observed scattering timescale for known FRBs at 1.4 GHz assuming a Kolmogorov scaling. We derive the constraint, α_lim > -0.3, for a Euclidean flux distribution. This constraint is valid only in the absence of free-free absorption. The constraints for other values of γ are listed in Table <ref>.
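The scattering draw itself can be sketched as follows; the threshold and log-space width follow the Galactic values quoted above, the scale is tuned to give a requested mean, and the number of detectable bursts (1000) is an arbitrary illustrative input.

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_times_ms(mean_ms, n, sigma=2.74, thresh_ms=4.3e-3):
    """Three-parameter log-normal scattering times at 350 MHz (ms).

    thresh_ms and sigma follow the Galactic NE2001 values quoted in
    the text; the scale is set so the distribution has the given mean.
    """
    mu = np.log(mean_ms - thresh_ms) - sigma ** 2 / 2.0
    return thresh_ms + rng.lognormal(mu, sigma, size=n)

# smallest trial mean for which every detectable burst is smeared
# beyond the 100 ms boxcar limit (1000 bursts is an illustrative count):
for mean_ms in np.logspace(1, 8, 50):
    if scatter_times_ms(mean_ms, 1000).min() > 100.0:
        print(mean_ms)   # mean 350-MHz scattering time in ms
        break
```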
Spectral Index Constraints

 γ   | No Scattering/FF^a | Scattering^b | Scattering^b
 0.8 | > 0.19             | > -0.9       | > -1.5
 1.2 | > 0.28             | > -0.6       | > -1.2
 1.5 | > 0.35             | > -0.3       | > -0.9

^a FF refers to free-free absorption.
^b The two columns correspond to different 1.4-GHz rate estimates assumed for the initial flux distribution.
Different surveys conducted at 1.4 GHz with the Parkes telescope have different reported rate estimates and flux density thresholds. To gauge the sensitivity of our results to the assumed 1.4-GHz rate estimate, we repeat this analysis with a flux distribution at 1.4 GHz that is consistent with the rate reported by <cit.> of 7 × 10^3 FRBs sky^-1 day^-1 above a flux density of 0.17 Jy for W_i = 5 ms. The resulting constraints are weaker and are listed in Table <ref>. The constraints on spectral index are also sensitive to the width of the log-normal distribution. Decreasing the width of the distribution would allow even modest scattering times to explain our non-detection, thus weakening the constraints on spectral index.
Another effect which can potentially weaken our constraints is the 1.4-GHz observation of a reduced FRB detection rate at low and intermediate Galactic latitudes as compared to high Galactic latitudes by <cit.>. A recent analysis by <cit.> demonstrates that the reduction in the FRB rate is significant (p = 5 × 10^-5) for low Galactic latitudes (|b| < 5°) while the difference between the mid-latitude (5° < |b| < 15°) and the high-latitude FRB rate is only marginally significant (p = 0.03). Since only 5% of the GBNCC survey pointings were observed at Galactic latitudes |b| < 5°, and 10% of the pointings were observed at intermediate Galactic latitudes, incorporating the latitude dependence of the FRB rate would not have a significant effect on the resulting constraints. Additionally, if the latitude dependence of the FRB event rate is due to diffractive scintillation as suggested by <cit.>, then the frequency dependence of this effect can also weaken the spectral index constraints. However, we do not account for this effect here since <cit.> demonstrate that the analysis by <cit.> is incorrect, as its prediction for a high FRB rate with the PALFA survey is not matched by observations.
§.§ Constant Comoving Number Density Distribution
We attempt to constrain the spectral index for the specific case of a constant comoving number density distribution in this section. The approach follows from the analysis in <cit.> and references therein and enables us to derive constraints for a variety of astrophysical models. It is based on the assumption that FRBs are standard candles, thus making it different from the approach described in Section <ref>.
The bolometric luminosity, L for each model and spectral index α is evaluated using the following equation assuming a S_peak = 1 Jy detection of an FRB located at z_lim = 0.75 <cit.> with the Parkes surveys:
S_peak = L ∫_ν'_1^ν'_2 E_ν' dν' / [ (1+z)^2 4π D(z)^2 (ν_2 - ν_1) ∫_ν'_low^ν'_high E_ν' dν' ].
Here D(z) is the comoving distance calculated using Planck 2015 cosmological parameters <cit.> and ν' = (1+z) ν is the frequency in the source frame. The limiting frequencies for emission, ν'_high and ν'_low, are assumed to be 10 GHz and 10 MHz, respectively <cit.>. The frequencies ν_1 and ν_2 are the lowest and highest observing frequencies of the survey in consideration.
The bolometric luminosity is different for each model because of the difference in the expression for the energy released per unit frequency interval, E_ν'. In the absence of scattering and free-free absorption, positive spectral indices will be the sole reason for reduction of flux at low frequencies and we can set E_ν' ∝ ν'^α. Mirroring the terminology used by <cit.>, we will be referring to it as model A hereafter. For the model where scattering becomes important (model B), E_ν' gets reduced by a factor of √(1 + (t_scatt/W_i)^2). Here t_scatt is the scattering timescale at a frequency ν obtained by scaling the mean observed timescale of 6.7 ms at 1 GHz under the assumption of a Kolmogorov scattering spectrum. The observed scattering time of 6.7 ms was determined by taking the average of the scattering timescales of known FRBs <cit.>. For FRBs with no measured scattering timescales, we used half of the published upper limits when computing the average.
Another astrophysical phenomenon that can render FRBs undetectable at low frequencies is free-free absorption in the dense environment surrounding the FRB progenitor. For the case of free-free absorption,
E_ν'∝ (ν'/1 GHz)^α exp[- τ (ν'/1 GHz)^-2.1].
The optical depth, τ, at 1 GHz is computed using τ = 0.082 T_e^-1.35 EM <cit.>, where T_e is the electron temperature and EM is the emission measure. We considered two models of free-free absorption: cold molecular clouds with ionization fronts (model C) and hot, ionized magnetar ejecta/circum-burst medium (model D). The parameters T_e and EM for these models have been adopted from <cit.> and are listed in Table <ref>. Models E and F mimic models C and D, respectively, but also account for scattering. For this, the expression for E_ν' in Equation <ref> is reduced by a factor of √(1 + (t_scatt/W_i)^2), as was done for model B.
Using the expressions for E_ν' derived above, we calculate the peak flux density detectable with the GBNCC survey for each model and spectral index α by substituting the bolometric luminosity for that model and spectral index, and parameters of the GBNCC survey in Equation <ref>. This calculation is performed for a range of redshifts. The peak flux density at GBNCC survey's limiting redshift for case B, z_lim = 0.37, should be equal to the survey's sensitivity for the constraining spectral index α_lim. Any spectral index α < α_lim can be rejected as the peak flux density at z_lim for α < α_lim would be greater than the survey sensitivity implying FRB detections with the GBNCC survey. The procedure is shown graphically in Figure <ref> and the resulting constraints are listed in Table <ref>.
The GBNCC survey sensitivity exhibits a non-linear dependence on redshift. If the IGM is assumed to be the dominant contributor to the DM, then the DM-redshift relation (; ) implies that we are searching for FRBs with higher DMs as we search out to higher redshifts. The increase in DM increases the dispersive smearing within each frequency channel (evaluated using Equation <ref>), thereby broadening the pulse and increasing the minimum detectable flux density of the survey. The survey sensitivity for the broadened pulse width is determined using Equation <ref> and is plotted in Figure <ref>.
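The procedure can be reproduced schematically for model A (E_ν' ∝ ν'^α), as sketched below: the luminosity is calibrated to a 1 Jy Parkes-band detection at z = 0.75 and the peak flux density is then evaluated in the 300-400 MHz GBNCC band at z = 0.37. The cosmological parameters and the numerical comoving-distance integral are approximate stand-ins for the Planck 2015 values.

```python
import numpy as np

C_KM_S, H0, OMEGA_M = 2.998e5, 67.7, 0.31   # approximate Planck 2015 values

def d_comoving_mpc(z, nz=2048):
    """Comoving distance in a flat LambdaCDM cosmology (Mpc)."""
    zz = np.linspace(0.0, z, nz)
    ez = np.sqrt(OMEGA_M * (1.0 + zz) ** 3 + (1.0 - OMEGA_M))
    return C_KM_S / H0 * np.trapz(1.0 / ez, zz)

def band_integral(nu1, nu2, alpha):
    """Integral of nu^alpha over [nu1, nu2] (frequencies in GHz)."""
    if np.isclose(alpha, -1.0):
        return np.log(nu2 / nu1)
    return (nu2 ** (alpha + 1.0) - nu1 ** (alpha + 1.0)) / (alpha + 1.0)

def s_peak(z, alpha, nu1, nu2, L=1.0, nu_lo=0.01, nu_hi=10.0):
    """Equation above for model A, E_nu' propto nu'^alpha."""
    num = L * band_integral(nu1 * (1.0 + z), nu2 * (1.0 + z), alpha)
    den = ((1.0 + z) ** 2 * 4.0 * np.pi * d_comoving_mpc(z) ** 2
           * (nu2 - nu1) * band_integral(nu_lo, nu_hi, alpha))
    return num / den

alpha = 1.18                                  # model A constraint (see table)
# calibrate L to a 1 Jy detection in the Parkes band (1.182-1.522 GHz) at z = 0.75
L_cal = 1.0 / s_peak(0.75, alpha, 1.182, 1.522)
# predicted peak flux in the GBNCC band (0.30-0.40 GHz) at z_lim = 0.37:
print(s_peak(0.37, alpha, 0.300, 0.400, L=L_cal))   # ~0.6 Jy, near the threshold
```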
The constraints listed in Table <ref> are based on the limiting redshift for the GBNCC survey for case B, the calculation of which is based on the assumption that the Parkes surveys searched to a redshift, z_lim = 0.75. If the repeating FRB121102 is not representative of the FRB population i.e. not all FRBs are cosmological, then this assumption might not hold true. Other caveats associated with these constraints have been detailed in <cit.>.
For the case of scattering, the constraint for a uniform distribution of FRBs in comoving volume, α_lim = -2.48, is very weak in comparison with α_lim = -0.3, evaluated for a Euclidean flux distribution with the approach described in Section <ref>. The constraint for a Euclidean flux distribution is derived assuming a distribution of scattering times, whereas the weaker constraint assumes a single scattering time for all FRBs. The marked difference between the resulting constraints points to the sensitivity of our results to the initial assumptions about the scattering timescale.
The constraints based on the GBNCC non-detection are not markedly different from the constraints evaluated by <cit.> based on non-detection with surveys such as AO327 <cit.>, LOFAR <cit.> and UTMOST <cit.>. The constraint we derive under the assumption of absence of scattering and free-free absorption, α_lim = 1.18, is stronger than the most constraining spectral index obtained from the above-mentioned surveys (α_lim = 0.7; AO327). The best constraint derived by <cit.> for the model where scattering becomes relevant, α_lim = -2.10, is based on non-detection with UTMOST and is stronger than the constraint obtained using the GBNCC non-detection, α_lim = -2.48.
Table: Spectral Index Constraints for Constant Comoving Number Density Distribution

Model   T_e (K)   EM (cm^-6 pc)   α_lim
A       --        --               1.18
B       --        --              -2.48
C        200      1000             1.00
D       8000      1.5 × 10^6      -0.64
E        200      1000            -2.67
F       8000      1.5 × 10^6      -4.39
§ IMPLICATIONS FOR OTHER SURVEYS
We can predict the FRB detection rates for current and upcoming surveys using the constraints on spectral index derived from GBNCC. We derive the following equation for the FRB rate R above a flux density S_0 at a frequency ν_0, for a spectral index α and a slope γ of the log N-log S function:
R( > S_0) = R_ref [S_0/(S_ref (ν_0/ν_ref)^α)]^-γ = R_ref (S_0/S_ref)^-γ (ν_0/ν_ref)^αγ.
Here R_ref is the reference rate estimate above a flux density S_ref at a frequency ν_ref. The above equation uses a scaling factor of ν^αγ to calculate the FRB rate instead of ν^α used by <cit.>. The correction to the scaling factor can be justified in the following manner. If R_ref is the number of bursts detectable per sky per day above a flux density S_ref at a frequency ν_ref, then R_ref is also the number of bursts detectable above a flux density S_ref(ν_0/ν_ref)^α at a frequency ν_0. The ratio of the number of bursts R detectable above a flux density S_0 and the number of bursts R_ref detectable above a flux density S_ref(ν_0/ν_ref)^α can then be given by Equation <ref>.
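This scaling is straightforward to implement in R; the reference values in the example call below are made up purely for illustration:

## FRB rate above threshold S0 at frequency nu0, scaled from a reference
## rate R.ref( > S.ref) at frequency nu.ref (same units for S0 and S.ref).
frb.rate <- function(S0, nu0, R.ref, S.ref, nu.ref, alpha, gamma) {
  R.ref * (S0 / S.ref)^(-gamma) * (nu0 / nu.ref)^(alpha * gamma)
}

## Illustrative call: a hypothetical 1.4-GHz rate of 1e4 FRBs/sky/day above
## 1 Jy, scaled to the 0.63 Jy GBNCC threshold at 350 MHz
frb.rate(S0 = 0.63, nu0 = 350, R.ref = 1e4, S.ref = 1, nu.ref = 1400,
         alpha = -1, gamma = 1)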
However, Equation <ref> makes incorrect assumptions about the FRB population, in that it does not allow for a distribution of spectral indices and scattering timescales. This warrants Monte Carlo simulations to ensure that the predicted rate accounts for the occasional bright FRBs with scattering times lower than the mean of the population. For instance, no FRB with a scattering time greater than 1 ms at 1 GHz would be detectable with an optimal S/N by the GBNCC survey, because the widest boxcar template of 100 ms used by our search pipeline at 350 MHz corresponds to a timescale of 1 ms at 1 GHz (under the assumption of a Kolmogorov medium). However, scattering timescales of known FRBs at 1 GHz range from 0.7 to 23 ms, with several of these measurements being upper limits <cit.>, suggesting that the survey could still be sensitive to a significant fraction of the FRB population.
We generated a flux distribution of FRBs at 1.4 GHz consistent with the Parkes rate estimate for γ = 0.8, 1.2 and 1.5. Spectral indices drawn from a normal distribution (σ = 0.5) centered on the mean spectral index were used to scale the flux distribution to the frequency of the survey in consideration. For each FRB in the distribution, the scattering time t_350 was sampled from a log-normal distribution at 350 MHz with the same width as our Galaxy's distribution. The mean of the log-normal distribution was set to the scattering timescale at 350 MHz obtained by scaling the mean observed timescale of 6.7 ms at 1 GHz assuming a Kolmogorov spectrum. The flux of each FRB in the distribution was reduced by √(1+ (t_scatt/W_i)^2), where t_scatt is the scattering time for that FRB at the survey frequency, obtained by scaling t_350, again under the assumption of a Kolmogorov spectrum. The number of FRBs in this distribution with a flux greater than S_0 was used to compute the number of bursts per hour detectable by the survey. The minimum detectable flux density for each survey, S_0, was evaluated using Equation <ref> with (S/N)_b = 10, t_scatt = 0 and the survey parameters listed in Table <ref>.
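A minimal R sketch of this simulation is given below. The power-law flux draw, its 0.1 Jy normalisation and the log-normal width (sdlog = 1) are our own assumptions for illustration; only the structure of the calculation mirrors the procedure described above.

set.seed(42)
n <- 1e5                            # simulated bursts
gamma <- 1.5                        # Euclidean log N - log S slope
alpha.mean <- -0.3                  # mean spectral index
W.i <- 5                            # intrinsic width (ms)
nu <- 0.35                          # survey frequency (GHz)

S14 <- 0.1 * runif(n)^(-1 / gamma)  # 1.4-GHz fluxes with N(>S) ~ S^-gamma
alpha <- rnorm(n, alpha.mean, 0.5)  # per-burst spectral indices
S <- S14 * (nu / 1.4)^alpha         # scale fluxes to the survey frequency

## Scattering times at 350 MHz: log-normal around the Kolmogorov-scaled mean
t350 <- rlnorm(n, meanlog = log(6.7 * 0.35^(-4.4)), sdlog = 1)
S <- S / sqrt(1 + (t350 / W.i)^2)   # reduce flux by pulse broadening

mean(S > 0.63)                      # fraction detectable above 0.63 Jy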
Our simulations predict the rate for mean spectral indices ranging from α_lim to an arbitrary upper limit, α = +2. If α > +2, then the rate predictions for all surveys at frequencies < 1.4 GHz would decrease, although there is no observational evidence arguing for α > +2. The lower limits on the mean spectral index, α_lim, are the constraints we obtain with GBNCC in the event of scattering, which are listed in Table <ref>.
The rate predictions for all surveys that we considered are shown in Figure <ref>. Our simulations predict rates that are consistent with the upper limits reported for LOFAR <cit.>, AO327 <cit.>, MWA <cit.>, VLA <cit.> and UTMOST <cit.>. The simulations do not account for repeating sources, and the rate reported for the PALFA survey <cit.> is based on the detection of a single event. Since our simulations calculate the rate for UTMOST at its full sensitivity, the upper limit shown in Figure <ref> is calculated by scaling the reported upper limit for a fluence threshold of 11 Jy ms <cit.> to the fluence for the fully sensitive UTMOST survey calculated using Equation <ref>.
One caveat, however, is that we have difficulty matching the predicted rate for the Parkes surveys with the observations if the mean of the scattering time distribution is set to the observed 6.7 ms at 1 GHz. This suggests that one or more of the following assumptions might be incorrect: a Kolmogorov spectrum, a log-normal distribution of scattering timescales, or a distribution width equal to that of the Galaxy. A more sophisticated treatment of the scattering timescale distribution will allow us to make better predictions.
The results of the simulations also demonstrate the effect of the slope of the log N-log S function on the FRB yield of a survey. The log N-log S function determines whether field of view or sensitivity is a more important factor for FRB detection. Our simulations predict a greater FRB detection rate for PALFA as compared to Parkes for γ > 1 but the rates are consistent with each other for γ < 1. The abundance of fainter bursts implied by γ > 1 explains the higher rate prediction for PALFA whose greater sensitivity is highly advantageous for FRB detection in that scenario. However, if γ < 1, there will be an abundance of brighter bursts, thereby allowing the greater field of view of Parkes as compared to PALFA to compensate for the reduction in sensitivity and have a similar FRB detection rate per hour.
§.§ Predictions for CHIME
The simulations suggest that CHIME will detect more FRBs than any existing telescope due to its large field of view. However, scattering can reduce the number of detections in the lower part of the band. To model the competing effects of the increase in field of view and the increase in scattering timescales at lower frequencies, the CHIME bandwidth has been divided into 4 equal parts (centered at 450 MHz, 550 MHz, 650 MHz and 750 MHz) in our simulations. The scattering timescale and field of view for each part have been calculated using its center frequency, and the sensitivity has been evaluated using Equation <ref>, assuming a bandwidth of 100 MHz.
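For reference, the mean scattering timescale at each sub-band centre follows directly from the Kolmogorov scaling adopted earlier (a quick sketch in R):

## Mean scattering time (ms) at the CHIME sub-band centres, scaled from
## 6.7 ms at 1 GHz: roughly 224, 93, 45 and 24 ms at 450-750 MHz.
centres <- c(450, 550, 650, 750) / 1000   # GHz
6.7 * centres^(-4.4)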
For the lower part of the CHIME band (400–500 MHz), our simulations predict 0.5–21 bursts a day for γ = 1.5 and 2–24 bursts a day for γ = 0.8. The prediction for the upper part of the CHIME band (700–800 MHz) is 2–15 bursts per day, assuming a Euclidean flux distribution (γ = 1.5). <cit.> predict detection of 2–40 bursts per day for the same part of the band based on the one FRB detected with GBT at 800 MHz. Although our rate prediction for 700–800 MHz is not very different from the <cit.> prediction, there are significant differences in the method of rate estimation. We extrapolate the 1.4-GHz rate estimate reported by <cit.> to the frequency in consideration (750 MHz), assuming a distribution of spectral indices and scattering timescales for the FRB population. On the other hand, <cit.> predict the detection rate based on the measured FRB rate in the relevant frequency range (700–800 MHz) and neglect the distribution of scattering timescales. The overall rate predicted by our simulations, 3–54 bursts per day, is also in agreement with the prediction of detection of 30–100 FRBs per day by <cit.> assuming a cosmological population of FRBs.
§ SUMMARY & CONCLUSIONS
We did not detect any FRBs in the GBNCC survey pointings observed so far, amounting to a total observing time of 84 days. The non-detection allows us to determine a 95% confidence upper limit on the FRB rate at 350 MHz of 3.6× 10^3 FRBs sky^-1 day^-1 above a peak flux density of 0.63 Jy for bursts with an intrinsic width of 5 ms. The threshold flux density of the survey ranges from 0.3 Jy for an FRB of 16 ms duration to 9 Jy for a 0.35 ms duration FRB.
We computed constraints on the mean intrinsic spectral index by performing Monte Carlo simulations of a population of FRBs consistent with the 1.4-GHz rate estimate and assuming a power-law flux density model for FRBs. The FRBs generated in these simulations had spectral indices sampled from a normal distribution and scattering timescales sampled from a log-normal distribution. If the intrinsic spectral index were the only reason for our non-detection, i.e., if scattering and free-free absorption were absent, the non-detection with GBNCC would be compatible with the Parkes rate estimate reported by <cit.> for α > +0.35. <cit.> derived a constraint, α > +0.1, based on non-detection with LOFAR at 145 MHz. Non-detection with MWA at 155 MHz implied α > -1.2 <cit.>. The GBNCC survey, owing to its large observing time and greater sensitivity, thus enables us to place a stronger constraint on the spectral index than any previous survey.
However, scattering is one possible reason for our non-detection. Another variant of the simulations was aimed at finding the mean scattering timescale that would render FRBs, expected for a particular value of the spectral index, undetectable with GBNCC. Given the observed range of scattering times at 1.4 GHz, we constrain α > -0.3 for a Euclidean flux distribution, in the absence of free-free absorption. The constraints on spectral index are very sensitive to the 1.4-GHz rate estimate used in the simulations. The above-mentioned constraint is derived using the <cit.> rate estimate. If the rate estimate reported by <cit.> is used, then the constraint is weaker, with α > -0.9. The simulations used for deriving these constraints assume a scattering timescale distribution resembling the distribution of Earth-centered scattering times for our Galaxy. However, the scattering timescale depends on the location within the host galaxy as well as on its orientation and type. A detailed treatment of this problem is beyond the scope of this paper. The simulations are also based on the assumption of a power-law spectral model. Although this assumption is in line with previous studies, it could be incorrect if the repeating FRB is a member of the same source class as the rest of the population, since observations of the repeating FRB121102 <cit.> show that a single power law is a poor characterization of the burst spectra.
We find that the strongest constraint is obtained for the case of the Euclidean flux distribution, both in the absence of scattering and free-free absorption and in the presence of scattering. A higher value of γ corresponds to an increase in the relative abundance of fainter FRBs. Therefore, an increase in γ implies an increase in the number of detections with GBNCC by virtue of its sensitivity, thereby requiring higher mean scattering times or a more positive spectral index to explain our non-detection.
For the particular case of standard candles with a constant comoving number density, we estimate a maximal redshift of 0.37 being probed by the GBNCC survey. We find a spectral index α_lim for which the peak flux density of an FRB at z = 0.37 is equal to the survey sensitivity. We rejected any spectral index < α_lim, as it would predict sensitivity out to a greater redshift and hence detection of FRBs with GBNCC. Under the assumption of no scattering, we obtain α_lim = -0.6 in the scenario of free-free absorption by hot, ionized magnetar ejecta, and α_lim = 1.0 for a cold molecular cloud with ionization fronts. Our constraints imply that the spectra of FRBs are different from observed pulsar spectra, for which the mean spectral index is -1.4 <cit.>. However, if FRBs are subject to both free-free absorption and scattering, our constraints are far weaker and allow for steep negative spectral indices as well.
We also predict the detection rate for existing surveys and upcoming ones such as CHIME using Monte Carlo simulations. The simulations for a Euclidean flux distribution predict that CHIME will detect 3–54 bursts per day assuming the <cit.> rate estimate and 1–25 bursts a day assuming the rate estimate reported by <cit.>. The predictions are promising because even with the most conservative estimates, CHIME will be able to greatly increase the number of known FRBs and probe the distribution of their properties such as spectral index, scattering timescales and the slope of the log N-log S function.
§ ACKNOWLEDGEMENTS
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank Compute
Canada and the McGill Center for High Performance Computing and Calcul Quebec for provision and maintenance of the Guillimin supercomputer and related resources. We also thank an anonymous referee for useful comments which helped improve the manuscript. We are grateful to Erik Madsen for providing code to make plots for Figure <ref>. PC acknowledges support from a Mitacs Globalink Graduate Fellowship and the TOEFL Scholarship Program in India. VMK receives support from an NSERC Discovery Grant, an Accelerator Supplement and from the Gerhard Herzberg Award, an R. Howard Webster Foundation Fellowship from the Canadian Institute for Advanced Research, the Canada Research Chairs Program, and the Lorne Trottier Chair in Astrophysics and Cosmology. JWTH and VIK acknowledge support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement nr. 337062. MAM is supported by NSF AST Award #1211701. Pulsar research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. JvL acknowledges funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 617199.
PRESTO <cit.>, RRATtrap <cit.>
[Ade et al.(2016)]ade2015 Ade, P. A. R., Aghanim, N., Arnaud, M., et al. 2016, A&A, 594, A13
[Anderberg(1973)]anderberg1973 Anderberg, M.R. 1973, Cluster Analysis for Applications (New York: Academic Press)
[Bandura et al.(2014)]bandura2014 Bandura, K., et. al. 2014, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9145, p. 22
[Bannister & Madsen(2014)]bannister2014 Bannister, K. W. & Madsen, G. J. 2014, MNRAS, 440, 353
[Bates et al.(2013)]bates2013 Bates, S. D., Lorimer, D. R. & Verbiest, J. P. W., 2013, MNRAS, 431, 1352
[Bentley(1975)]bentley1975 Bentley, J. L. 1975, Communications of the ACM, 18(9):509-517
[Burke-Spolaor & Bannister(2014)]BSB2014 Burke-Spolaor S. & Bannister K. W. 2014, ApJ, 792, 19
[Burke-Spolaor et al.(2016)]spolaor2016 Burke-Spolaor, S., Trott, C. M., Brisken, W. F., et al. 2016, ApJ, 826, 2
[Caleb et al.(2016)]caleb2016 Caleb, M., Flynn, C., Bailes, M., et al. 2016, MNRAS, 458, 718
[Champion et al.(2016)]champion2016 Champion, D. J., Petroff, E., Kramer, M., et al. 2016, MNRAS, 460, L30
[Chatterjee et al.(2017)]chatterjee2017 Chatterjee, S., Law, C. J., Wharton, R. S., et al. 2017, Nature, 541, 58
[Coenen et al.(2014)]coenen2014 Coenen, T., van Leeuwen, J., Hessels, J. W. T., et al. 2014, A&A, 570, A60
[Connor et al.(2016)]connor2016 Connor, L., Lin, H.-H., Masui, K., et al. 2016, MNRAS, 460, 1054
[Cordes & Lazio(2002)]cordes2002 Cordes, J. M. & Lazio, T. J. W. 2002, astro-ph/0207156
[Cordes & McLaughlin(2003)]cordes2003 Cordes, J. M. & McLaughlin, M. A. 2003, ApJ, 596, 1142
[Cordes et al.(2016)]cordes2016 Cordes, J. M., Wharton, R. S., Spitler, L. G., et al. 2016, ArXiv e-prints, arXiv:1605.05890
[Cordes & Wasserman(2016)]cordes2016b Cordes, J. M & Wasserman, I. 2016, MNRAS, 457, 232
[Crawford et al.(2016)]crawford2016 Crawford, F., Rane, A., Tran, L., et al. 2016, MNRAS, 460, 3370
[Deneva et al.(2016)]deneva2016 Deneva, J. S., Stovall, K., McLaughlin, M. A., et al. 2016, ApJ, 821, 10
[Ester et al.(1996)]ester1996 Ester, M., Kriegel, H.-P., Sander, J. & Xu, X. 1996, Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining (Portland, OR: AAAI Press), 226
[Haslam et al.(1982)]haslam1982 Haslam, C. G. T, Salter, C. J., Stoffel, H. & Wilson, W. E. 1982, A&AS, 47, 1
[Inoue(2004)]inoue2004 Inoue, S. 2004, MNRAS, 348, 999
[Ioka(2003)]ioka2003 Ioka, K. 2003, , 598, L79
[Karako-Argaman et al.(2015)]karako2015 Karako-Argaman, C., Kaspi, V. M., Lynch, R. S., et al. 2015, ApJ, 809, 67
[Karastergiou et al.(2015)]karastergiou2015 Karastergiou, A., Chennamangalam, J., Armour, W., et al. 2015, MNRAS, 452, 1254
[Katz(2016)]katz2016 Katz, J. I. 2016, Mod. Phys. Lett. A, 31, 1630013
[Keane et al.(2012)]keane2012 Keane, E. F., Stappers, B. W., Kramer, M., & Lyne, A. G. 2012, MNRAS, 425, L71
[Keane et al.(2016)]keane2016 Keane, E. F., Johnston, S., Bhandari, S., et al. 2016, Nature, 530, 453
[Kulkarni et al.(2015)]kulkarni2015 Kulkarni, S. R., Ofek, E. O., & Neill, J. D. 2015, ArXiv e-prints, arXiv:1511.09137
[Law et al.(2015)]law2015 Law, C. J., Bower, G. C., Burke-Spolaor, S., et al. 2015, , 807, 16
[Lorimer & Kramer(2005)]lorimer2005 Lorimer, D. R. and Kramer, M. 2005, Handbook of Pulsar Astronomy (Cambridge University Press)
[Lorimer et al.(2007)]lorimer2007 Lorimer, D. R., Bailes, M., McLaughlin, M. A., et al. 2007, Science, 318, 777
[Lorimer et al.(2013)]lorimer2013 Lorimer, D. R., Karastergiou, A., McLaughlin, M. A. & Johnston, S. 2013, MNRAS, 436, L5
[Macquart & Johnston(2015)]macquart2015 Macquart, J.-P., & Johnston, S. 2015, MNRAS, 451, 3278
[Masui et al.(2015)]masui2015 Masui, K., Lin, H.-H., Seivers, J., et al. 2015, Nature, 528, 523
[Mezger & Henderson(1967)]mezger1967 Mezger, P. G. & Henderson, A. P. 1967, , 147, 471
[Oppermann et al.(2016)]oppermann2016 Oppermann, N., Connor, L. D. & Pen, U. 2016, MNRAS, 461, 984
[Petroff et al.(2014)]petroff2014 Petroff, E., van Straten, W., Johnston, S. 2014, ApJL, 789, 2
[Petroff et al.(2015)]petroff2015 Petroff, E., Bailes, M., Barr, E. D., et al. 2015, MNRAS, 447, 246
[Petroff et al.(2016)]petroff2016 Petroff, E., Barr, E. D., Jameson, A., et al. 2016, PASA, 33, 45
[Popov & Postnov(2013)]popov2013 Popov, S. B. & Postnov, K. A. 2013, ArXiv e-prints, arXiv:1307.4924
[Rajwade & Lorimer(2017)]rajwade2016 Rajwade, K. M. & Lorimer, D. R., 2017, MNRAS, 465, 2
[Ransom(2001)]ransom2001 Ransom, S. M. 2001, PhD thesis, Harvard University
[Ravi et al.(2015)]ravi2015 Ravi, V., Shannon, R. M., & Jameson, A. 2015, ApJL, 799, L5
[Ravi et al.(2016)]ravi2016 Ravi, V., Shannon, R. M., Bailes, M., et al. 2016, Science, 354, 1249
[Remazeilles et al.(2015)] remazeilles2015 Remazeilles, M., Dickinson, C., Banday, A. J., et al. 2015, MNRAS, 451, 4311
[Rowlinson et al.(2016)]rowlinson2016 Rowlinson, A., Bell, M. E., Murphy, T., et al. 2016, MNRAS, 458, 3506
[Scholz et al.(2016)]scholz2016 Scholz, P., Spitler, L. G., Hessels, J. W. T., et al. 2016, ApJ, 833, 2
[Spitler et al.(2014)]spitler2014 Spitler, L. G., Cordes, J. M., Hessels, J. W. T., et al. 2014, ApJ, 790, 101
[Spitler et al.(2016)]spitler2016 Spitler, L. G., Scholz, P., Hessels, J. W. T., et al. 2016, Nature, 531, 202
[Stovall et al.(2014)]stovall2014 Stovall, K., Lynch, R. S., Ransom, S. M., et al. 2014, ApJ, 791, 67
[Thornton et al.(2013)]thornton2013 Thornton, D., Stappers, B., Bailes, M., et al. 2013, Science, 341, 53
[Tendulkar et al.(2017)]tendulkar2017 Tendulkar, S. P, Bassa, C. G., Cordes, J. M. et al. 2017, ApJL, 834, 2
[Tingay et al.(2013)]tingay2013 Tingay, S. J., Goeke, R., Bowman, J. D., et al. 2013, PASA, 30, 7
[Tingay et al.(2015)]tingay2015 Tingay, S. J., Trott, C. M., Wayth, R. B., et al. 2015, AJ, 150, 6
[van Leeuwen(2014)]vanleeuwen2014 van Leeuwen, J. 2014, in The Third Hot-wiring the Transient Universe Workshop, ed. P. R. Wozniak, M.J. Graham, A. A. Mahabal, & R. Seaman, 79, http://www.slac.stanford.edu/econf/C131113.1/
[Vander Wiel et al.(2016)]vanderwiel2016 Vander Wiel, S., Burke-Spolaor, S., Lawrence, E., et al. 2016, ArXiv e-prints, arXiv:1612.00896
[Vedantham et al.(2016a)]vedantham2016a Vedantham, H. K., Ravi, V., Mooley, K., et al. 2016a, ApJL, 824, L9
[Vedantham et al.(2016b)]vedantham2016b Vedantham, H. K., Ravi, V., Hallinan, G., et al. 2016b, ApJ, 830, 75
[Wayth et al.(2012)]wayth2012 Wayth, R. B., Tingay, S. J., Deller, A. T., et al. 2012, ApJL, 753, 2
[Williams & Berger(2016)]williams2016 Williams, P. K. G., & Berger, E. 2016, ApJL, 821, L22
| Fast Radio Bursts (FRBs) are bright, millisecond-duration events occurring in the radio sky. Their origin is still unknown. Eighteen FRBs have been detected within the past decade <cit.>, with only one source <cit.> known to repeat. A catalog of these bursts and their properties is made available by <cit.>. These transient events can be distinguished from pulsars and rotating radio transients (RRATs) on the basis of their dispersion measure (DM), which is a measure of the integrated free electron density along the line of sight in the intervening medium. The bursts have DMs that are 1.4 to 35 times the maximum predicted along the line of sight by the NE2001 model of electron density in our Galaxy <cit.>.
The dominant contribution to the excess DM of FRBs can arise from the intergalactic medium, the host galaxy of the FRB progenitor, or possibly from a high electron density, compact structure in our Galaxy. The interferometric localization of bursts from the repeating FRB121102 provides evidence of its association with an optical counterpart <cit.>. Spectroscopic follow-up by <cit.> confirms the optical counterpart as being the host galaxy of the FRB and characterizes it as a low-metallicity, star-forming dwarf galaxy located at a redshift of z = 0.19273(8). The observations of <cit.> also support an extragalactic origin, with scattering and scintillation in FRB110523 suggesting that the majority of the scattering originates from within the typical size scale of a galaxy. These observations lend support to models with extragalactic progenitors of FRBs, such as giant pulses from extragalactic neutron stars <cit.> and magnetar giant flares <cit.>. Interferometric localizations of more FRBs are essential to conclusively determine the source of the excess DM and the nature of the FRB progenitors for the broader FRB population.
All known FRBs but one <cit.> have been detected at frequencies greater than 1 GHz. Detections or stringent limits at lower frequencies are crucial for understanding properties of FRBs such as their spectral index and pulse profile evolution with frequency. Searches at low frequencies with telescopes such as LOFAR <cit.>, Arecibo <cit.> and MWA <cit.> have so far not resulted in any detections. <cit.> report an upper limit on the FRB rate at 327 MHz of 10^5 FRBs sky^-1 day^-1 for a flux density threshold of 83 mJy and pulse width of 10 ms. A non-detection with the LOFAR Pilot Pulsar Survey at 142 MHz allowed <cit.> to place an upper limit of 150 FRBs sky^-1 day^-1 for bursts brighter than 107 Jy at a burst duration of 0.66 ms. <cit.> report an upper limit of 29 FRBs sky^-1 day^-1 for bursts with flux density above 62 Jy at 145 MHz and a pulse width of 5 ms, based on observations with the UK station of the LOFAR radio telescope. The upper limits on the FRB rate reported thus far from these low-frequency radio surveys are not particularly constraining because of limitations in total observing time and volume searched. With observations to date amounting to a total on-sky time of 84 days, the Green Bank Northern Celestial Cap (GBNCC) Pulsar Survey <cit.> can provide the strongest constraints yet on the FRB rate and spectral index in the frequency range of 300–400 MHz.
The GBNCC survey is also important for predicting the FRB yield of upcoming low-frequency telescopes such as the Canadian Hydrogen Intensity Mapping Experiment (CHIME). With its large field of view and good sensitivity, CHIME is predicted to discover tens of FRBs per day <cit.> in its frequency range of 400–800 MHz. The GBNCC survey is thus well placed to determine the expected detection rate for the lower part of the CHIME band.
In this paper, we present results from the search for FRBs in GBNCC survey pointings observed through May 2016. For the purpose of our search and subsequent analysis, we define an FRB as an astrophysical pulse with a DM greater than twice the maximum line-of-sight Galactic DM. The suggestion by <cit.> of a possibly Galactic origin of the excess DM of the only FRB with a DM ratio < 2, FRB010621 <cit.>, lends support to our choice of a DM ratio of 2 for the FRB definition.
Our paper is organized as follows. In Section <ref>, we give a description of the survey and its sensitivity. We describe the data analysis pipeline in Section <ref> and place constraints on the FRB rate in Section <ref>. In Section <ref>, we constrain the mean spectral index of FRBs by performing Monte-Carlo simulations of a population of FRBs. We discuss the implications for current and upcoming surveys in Section <ref> and present our summary and conclusions in Section <ref>. | null | null | null | null | null |
http://arxiv.org/abs/1701.07844v2 | 20170126190753 | Markov Chain Monte Carlo with the Integrated Nested Laplace Approximation | [
"Virgilio Gómez-Rubio",
"Håvard Rue"
] | stat.CO | [
"stat.CO"
] |
Markov Chain Monte Carlo with the Integrated Nested Laplace Approximation

Virgilio Gómez-Rubio

Håvard Rue

==================================================================================================================================================================================================================================================================================================================================================================================================================================================
The Integrated Nested Laplace Approximation (INLA) has established
itself as a widely used method for approximate inference on
Bayesian hierarchical models which can be represented as a latent
Gaussian model (LGM). INLA is based on producing an accurate
approximation to the posterior marginal distributions of the
parameters in the model and some other quantities of interest by
using repeated approximations to intermediate distributions and
integrals that appear in the computation of the posterior
marginals.
INLA focuses on models whose latent effects are a Gaussian Markov
random field (GMRF). For this reason, we have explored alternative
ways of expanding the number of possible models that can be fitted
using the INLA methodology. In this paper, we present a novel
approach that combines INLA and Markov chain Monte Carlo (MCMC).
The aim is to consider a wider range of models that cannot be
fitted with INLA unless some of the parameters of the model have
been fixed. Hence, conditional on these parameters, the model
can be fitted with the R-INLA package. We show how new
values of these parameters can be drawn from their posterior by
using conditional models fitted with INLA and standard MCMC
algorithms, such as Metropolis-Hastings. Hence, this will extend
the use of INLA to fit models that can be expressed as a
conditional LGM. Also, this new approach can be used to build
simpler MCMC samplers for complex models, as it allows sampling
only a limited number of parameters in the model.
We will demonstrate how our approach can extend the class of
models that could benefit from INLA, and how the R-INLA
package will ease its implementation. We will go through simple
examples of this new approach before we discuss more advanced
problems with datasets taken from relevant literature.
Keywords: Bayesian Lasso, INLA, MCMC, Missing Values, Spatial Models
§ INTRODUCTION
Bayesian inference for complex hierarchical models has almost entirely
relied upon computational methods, such as Markov chain Monte Carlo
<cit.>. <cit.> propose a
new paradigm for Bayesian inference on hierarchical models that can be
represented as latent Gaussian models (LGMs), which focuses on
approximating marginal distributions for the parameters in the model.
This new approach, the Integrated Nested Laplace Approximation (INLA,
henceforth), uses several approximations to the conditional
distributions that appear in the integrals needed to obtain the
marginal distributions. See Section <ref> for details.
INLA is implemented as an R package, called R-INLA, that allows
us to fit complex models often in a matter of seconds. Hence, this is
often much faster than fitting the same model using MCMC methods.
Fitting models using INLA is restricted, in practice, to the classes
of models implemented in the R-INLA package. Several authors
have provided ways of fitting other models with INLA by fixing some of
the parameters in the model so that conditional models are fitted with
R-INLA. We have included a brief summary below.
<cit.> provide an early application of the idea of
fitting conditional models on some of the model parameters with
R-INLA. They developed this idea for a very specific example on
spatiotemporal models in which some of the model parameters are fixed
at their maximum likelihood estimates, which are then plugged into the
overall model, thus ignoring the uncertainty about these parameters
but greatly reducing the dimensionality of the model. However, they do
not tackle the problem of fitting the complete model to make inference
on all the parameters in the model.
<cit.> propose an approach to extend
the type of models that can be fitted with R-INLA and apply
their ideas to fit some spatial models. They note how some models can
be fitted after conditioning on one or several parameters in the
model. For each of these conditional models R-INLA reports the
marginal likelihood, which can be combined with a set of priors for
the parameters to obtain their posterior distribution. For the
remainder of the parameters, their posterior marginal distribution can
be obtained by Bayesian model averaging <cit.> the
family of models obtained with R-INLA.
Although <cit.> focus on some spatial
models, their ideas can be applied in many other examples. They apply
this to estimate the posterior marginal of the spatial autocorrelation
parameter in some models, and this parameter is known to be bounded,
so that computation of its marginal distribution is easy because the
support of the distribution is a bounded interval.
For the case of unbounded parameters, the previous approach can be
applied, but a previous search may be required. For example, the
(conditional) maximum log-likelihood plus the log-prior could be
maximised to obtain the mode of the posterior marginal. This will mark
the centre of an interval where the values for the parameter are taken
from and where the posterior marginal can be evaluated.
In this paper, we will propose a different approach based on Markov
chain Monte Carlo techniques. Instead of trying to obtain the
posterior marginal of the parameters we condition on, we show how to
draw samples from their posterior distribution by combining MCMC
techniques and conditioned models fitted with R-INLA. This
provides several advantages, as described below.
This will increase the number of models that can be fitted using INLA
and its associated R package R-INLA. In particular,
models that can be expressed as a conditional LGM could be fitted. The
implementation of MCMC algorithms will also be simplified as only the
important parameters will be sampled, while the remaining parameters
are integrated out with INLA and R-INLA.
<cit.> have also effectively combined MCMC and
INLA for efficient variable selection and model choice.
The paper is structured as follows. The Integrated Nested Laplace
Approximation is described in Section <ref>. Markov chain
Monte Carlo methods are summarised in Section <ref>. Our
proposed combination of MCMC and INLA is detailed in Section
<ref>. Some simple examples are developed in Section
<ref> and some real applications are provided in Section
<ref>. Finally, a discussion and some final remarks
are provided in Section <ref>.
§ INTEGRATED NESTED LAPLACE APPROXIMATION
We will now describe the types of models that we will be considering
and how the Integrated Nested Laplace Approximation method works. We
will assume that our vector of n observed data
𝐲 = (y_1,…,y_n) are observations from a distribution
in the exponential family, with mean μ_i. We will also assume that
a linear predictor on some covariates plus, possibly, other effects
can be related to mean μ_i by using an appropriate link function.
Note that this linear predictor η_i may be made of linear terms
on some covariates plus other types of terms, such as non-linear
functions on the covariates, random effects, spatial random effects,
etc. All these terms will define some latent effects 𝐱.
The distribution of 𝐲 will depend on a vector of
hyperparameters θ_1. Because of the approximation that INLA
will use, we will also assume that the vector of latent effects
𝐱 will have a distribution that will depend on a vector of
hyperparameters θ_2. Altogether, the hyperparameters can be
represented using a single vector θ=(θ_1, θ_2).
From the previous formulation, it is clear that observations are
independent given the values of the latent effects 𝐱 and
the hyperparameters θ. That is, the likelihood of our model
can be written down as
π(𝐲|𝐱,θ) =
∏_i∈ℐπ(y_i|x_i,θ)
Here, i is indexed over a set of indices
ℐ⊆{1,…,n} that indicates observed
responses. Hence, if the value of y_i is missing then
i ∉ℐ (but the predictive distribution y_i could
be computed once the model is fitted).
Under a Bayesian framework, the aim is to compute the posterior
distribution of the model parameters and hypermeters using Bayes'
rule. This can be stated as
π(𝐱, θ|𝐲) ∝π(𝐲|𝐱,θ) π(𝐱,θ)
Here, π(𝐱,θ) is the prior distribution of the latent
effects and the vector of hyperparameters. As the latent effects
𝐱 have a distribution that depends on θ_2, it is
convenient to write this prior distribution as
π(𝐱,θ) = π(𝐱|θ) π(θ).
Altogether, the posterior distribution of the latent effects and
hyperparameters can be expressed as
π(𝐱, θ|𝐲) ∝π(𝐲|𝐱,θ) π(𝐱,θ) =
π(𝐲|𝐱,θ) π(𝐱|θ)
π(θ) =
π(𝐱|θ) π(θ) ∏_i∈ℐπ(y_i|x_i,θ)
The joint posterior, as presented in Equation (<ref>), is
seldom available in a closed form. For this reason, several estimation
methods and approximations have been developed over the years.
Recently, <cit.> have provided approximations
based on the Laplace approximation to estimate the marginals of all
parameters and hyperparameters in the model. They develop this
approximation for the family of latent Gaussian Markov random fields
models. In this case, the vector of latent effects is a Gaussian
Markov random field (GMRF). This GMRF will have zero mean (for
simplicity) and precision matrix Q(θ_2).
Assuming that the latent effects are a GMRF will let us develop
Equation (<ref>) further. In particular, the posterior
distribution of the latent effects 𝐱 and the vector of
hyperparameters θ can be written as
π(𝐱, θ|𝐲) ∝ π(θ) |𝐐(θ)|^1/2 exp{-1/2 𝐱^T 𝐐(θ) 𝐱 + ∑_i∈ℐ log(π(y_i|x_i, θ))}.
With INLA, the aim is not the joint posterior distribution
π(𝐱, θ|𝐲) but the marginal
distributions of latent effects and hyperparameters. That is,
π(x_j|𝐲) and π(θ_k|𝐲), where indices
j and k will take different ranges of values depending on the
number of latent effects and hyperparameters.
Before computing these marginal distributions, INLA will obtain an
approximation to π(θ|y),
π̃(θ|𝐲). This approximation will later
be used to compute an approximation to marginals
π(x_j|𝐲). Given that the marginal can be written down as
π(x_j|𝐲) = ∫π(x_j|θ, 𝐲)
π(θ| 𝐲) dθ,
the approximation is as follows:
π̃(x_j|𝐲)=
∑_g π̃(x_j|θ_g, 𝐲)×π̃(θ_g|𝐲)×Δ_g.
Here, π̃(x_j|θ_g, 𝐲) is an approximation to
π (x_j|θ_g, 𝐲), which can be obtained using
different methods <cit.>.
θ_g refers to an ensemble of hyperparameters, that
take values on a grid (for example), with weights Δ_g.
INLA is a general approximation that can be applied to a large number
of models. An implementation for the R programming language
is available in the R-INLA package at www.r-inla.org,
which provides simple access to model fitting. This includes a simple
interface to choose the likelihood, latent effects and priors. The
implementation provided by R-INLA includes the computation of
other quantities of interest. The marginal likelihood
π(𝐲) is approximated, and it can be used for model
choice. As described in <cit.>, the approximation
to the marginal likelihood provided by INLA is computed as
π̃(𝐲) = ∫π(θ, 𝐱,
𝐲)/π̃_G(𝐱|θ,𝐲)|_𝐱=𝐱^*(θ)d θ.
Here,
π(θ, 𝐱, 𝐲) =
π(𝐲|𝐱, θ)
π(𝐱|θ) π(θ),
π̃_G(𝐱|θ,𝐲) is a
Gaussian approximation to π(𝐱|θ,𝐲)
and 𝐱^*(θ) is the posterior mode of 𝐱 for a
given value of θ. This approximation is reliable when
the posterior of θ is unimodal, as it is often the
case for latent Gaussian models. Furthermore,
<cit.> demonstrate that this approximation is
accurate for a wide range of models.
Other options for model choice and assessment include the Deviance
Information Criterion <cit.> and the
Conditional Predictive Ordinate <cit.>. Other
features in the R-INLA package include the use of different
likelihoods in the same model, the computation of the posterior
marginal of a certain linear combination of the latent effects and
others <cit.>.
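As a point of reference for the rest of the paper, a basic model fit with R-INLA looks as follows (a minimal sketch using standard R-INLA options; the simulated data are only for illustration):

library(INLA)

## Simulated Gaussian regression data
set.seed(1)
d <- data.frame(x = runif(100))
d$y <- 1 + 2 * d$x + rnorm(100)

## Fit the model; DIC and CPO are requested for model assessment
r <- inla(y ~ x, data = d, family = "gaussian",
          control.compute = list(dic = TRUE, cpo = TRUE))
r$summary.fixed   # posterior summaries of the fixed effects
r$mlik[1, 1]      # approximate log marginal likelihood (integration)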
§ MARKOV CHAIN MONTE CARLO
In the previous Section we have reviewed how INLA computes an
approximation of the marginal distributions of the model parameters
and hyperparameters. Instead of focusing on an approximation to the
marginals, Markov chain Monte Carlo methods could be used to
obtain a sample from the joint posterior distribution
π( x, θ| y). To simplify the notation, we will
denote the vector of latent effects and hyperparameters by
z = ( x, θ). Hence, the aim now is to estimate
π( z| y) or, if we are only interested in the posterior
marginals, π(z_i| y).
Several methods to estimate or approximate the posterior distribution
have been developed over the years <cit.>. In the case
of MCMC, the interest is in obtaining a
Markov chain whose limiting distribution is π( z | y). We
will not provide a summary of MCMC methods here, and the reader is
referred to <cit.> for a detailed description.
The values generated using MCMC are (correlated) draws from
π( z | y) and, hence, can be used to estimate quantities of
interest. For example, if we are interested in marginal inference on
z_i, its posterior mean can be estimated using the empirical mean
of the N sampled values {z_i^(j)}_j=1^N. Similarly, estimates of the posterior
expected value of any function on the parameters f(𝐳) can be found
using that
E[f(𝐳)| y] ≃1/N∑_j = 1 ^N f(𝐳^(j))
Multivariate inference can be made by using the multivariate
nature of the vector z^(j). For example, the posterior covariance
between parameters z_k and z_l could be computed by considering
the samples {(z_k^(j), z_l^(j))}_j=1^N.
§.§ The Metropolis-Hastings algorithm
This algorithm was first proposed by <cit.> and
<cit.>. The Markov chain is generated by proposing new
moves according to a proposal distribution q(·|·). The new
point is accepted with probability
α = min{
1,
π( z^(j+1)| y) q( z^(j) | z^(j+1))/π( z^(j)| y) q( z^(j+1)| z^(j))}
In the previous acceptance probability, the posterior probabilities of
the current point and the proposed new point appear as
π(z^(j)| y) and π(z^(j+1)| y), respectively. These
two probabilities are unknown, in principle, but using Bayes' rule
they can be rewritten as
π( z| y) = π( y | z)π( z)/π( y)
Hence, the acceptance probability α can be rewritten as
α = min{
1,
π( y | z^(j+1))π( z^(j+1)) q( z^(j)
|
z^(j+1))/π( y | z^(j))π( z^(j))
q( z^(j+1)| z^(j))}
This is easier to compute as the acceptance probability depends on
known quantities, such as the likelihood π( y | z), the
prior on the parameters π( z) and the probabilities of the
proposal distribution. Note that the term π( y) that appears in
Equation (<ref>) is unknown but that it cancels out as it
appears both in the numerator and denominator.
In Equation (<ref>) we have described the move to sample from
the joint ensemble of model parameters. However, this can be applied
to individual parameters one at a time, so that the acceptance
probabilities will be
α = min{
1,
π( y |z_i^(j+1))π(z_i^(j+1)) q(z_i^(j) |
z_i^(j+1))/π( y |z_i^(j))π(z_i^(j))
q(z_i^(j+1)| z_i^(j))}
§ INLA WITHIN MCMC
In this Section, we will describe how INLA and MCMC can be combined to
fit complex Bayesian hierarchical models. In principle, we will assume
that the model cannot be fitted with R-INLA unless some of the
parameters or hyperparameters in the model are fixed. This set of
parameters is denoted by z_c so that the full ensemble of
parameters and hyperparameters is z = ( z_c, z_-c).
Here z_-c is used to denote all the parameters in z that
are not in z_c. Our assumptions are that the posterior
distribution of z can be split as
π( z | y) ∝ π( y| z_-c, z_c) π( z_-c| z_c) π( z_c)
and that, for fixed z_c, π( y| z_-c, z_c) π( z_-c| z_c) defines a latent
Gaussian model suitable for INLA. This means that conditional models
(on z_c) can still be fitted with R-INLA, i.e., we can
obtain marginals of the parameters in z_-c given z_c.
The conditional posterior marginals for the k-th element in vector
z_-c will be denoted by π(z_-c,k | z_c, y). Also, the
conditional marginal likelihood π( y | z_c) can be easily
computed with R-INLA.
§.§ Metropolis-Hastings with INLA
We will now discuss how to implement the Metropolis-Hastings algorithm
to estimate the posterior marginal of z_c. Note that this is a
multivariate distribution and that we will use block updating in the
Metropolis-Hastings algorithm. Starting from an initial point
z_c^(0), we can use the Metropolis-Hastings algorithm to
obtain a sample from the posterior of z_c.
We will draw a new proposal value for z_c, z_c^(1),
using the proposal distribution q(·|·). The acceptance
probability, shown in Equation (<ref>), now becomes:
α = min{
1,
π( y | z_c^(j+1))π( z_c^(j+1))
q( z_c^(j) | z_c^(j+1))/π( y |
z_c^(j))π( z_c^(j))
q( z_c^(j+1)| z_c^(j))}
Note that π( y | z_c^(j)) and
π( y | z_c^(j+1)) are the conditional marginal likelihoods
on z_c^(j) and z_c^(j+1), respectively. All these
quantities can be obtained by fitting a model with R-INLA with
the values of z_c set to z_c^(j) and z_c^(j+1).
Hence, at each step of the Metropolis-Hastings algorithm only a model
conditional on the proposal needs to be fitted.
π( z_c^(j)) and π( z_c^(j+1)) are the priors of
z_c evaluated at z_c^(j) and z_c^(j+1),
respectively, and they can be easily computed as the priors are known
in the model. Values q( z_c^(j) | z_c^(j+1)) and
q( z_c^(j+1)| z_c^(j)) can also be computed as the
proposal distribution is known. Hence, the Metropolis-Hastings
algorithm can be implemented to obtain a sample from the posterior
distribution of z_c. The marginal distributions of the elements
of z_c can be easily obtained as well.
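A minimal sketch of this sampler in R is shown below for a univariate z_c = β entering the linear predictor. Moving the fixed term into an offset and the function names are our own illustrative choices, not part of the R-INLA API:

library(INLA)

## Fit the LGM conditional on a fixed value of beta (z_c): the term
## beta * u is moved into an offset, and R-INLA reports the approximate
## log conditional marginal likelihood log pi(y | z_c) in r$mlik.
cond.fit <- function(beta, d) {
  d$off <- beta * d$u
  inla(y ~ 1 + offset(off), data = d, family = "gaussian")
}

## One Metropolis-Hastings step; the Gaussian random-walk proposal is
## symmetric, so the terms q(. | .) cancel in the acceptance probability.
mh.step <- function(beta, fit, d, lprior, sd.prop = 1) {
  beta.new <- rnorm(1, beta, sd.prop)
  fit.new <- cond.fit(beta.new, d)
  log.alpha <- fit.new$mlik[1, 1] + lprior(beta.new) -
    fit$mlik[1, 1] - lprior(beta)
  if (log(runif(1)) < log.alpha) list(beta = beta.new, fit = fit.new)
  else list(beta = beta, fit = fit)
}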
Regarding the marginals of z_-c,k, it is worth noting that at step
j of the Metropolis-Hastings algorithm a conditional marginal
distribution on z_c^(j) (and the data y) is obtained:
π(z_-c,k| z_c^(j), y). The posterior marginal can be
approximated by integrating over z_c as follows:
π(z_-c,k| y) = ∫π(z_-c,k|
z_c, y) π( z_c| y)
d z_c≃1/N∑_j=1^N π(z_-c,k| z_c^(j), y),
where N is the number of samples of the posterior distribution of
z_c. That is, the posterior marginal of z_-c,k can be
obtained by averaging the conditional marginals obtained at each
iteration of the Metropolis-Hastings algorithm.
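In practice, this average is easy to compute from the conditional fits kept by the sampler (a sketch with our own function names, shown here for the intercept):

## Average the conditional marginals pi(z_{-c,k} | z_c^(j), y) over the
## posterior samples of z_c; 'fits' is the list of inla() objects kept at
## each (thinned) iteration of the Metropolis-Hastings algorithm.
avg.marginal <- function(fits, grid, param = "(Intercept)") {
  dens <- sapply(fits, function(f)
    inla.dmarginal(grid, f$marginals.fixed[[param]]))
  rowMeans(dens)
}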
§.§ Effect of approximating the marginal likelihood
So far, we have ignored the fact that the conditional marginal
likelihood π( y | z_c) used in the acceptance probability
α is actually an approximation. In this section, we will
discuss how this approximation will impact the validity of the
inference.
The situations where a Metropolis-Hastings algorithm has inexact
acceptance probabilities are often called pseudo-marginal MCMC
algorithms and were first introduced in <cit.> in the
context of statistical genetics where the likelihood in the acceptance
probability is approximated using importance sampling.
<cit.> provided a more general justification of
the pseudo-marginal MCMC algorithm, whose properties are further
studied in <cit.> and
<cit.>. These results show that if the (random)
acceptance probability is unbiased then the Markov chain will still
have as stationary distribution the posterior distribution of the
model parameters.
In our case, the error in the acceptance rate comes from a
deterministic estimate of the conditional marginal likelihood, hence
the framework of pseudo-marginal MCMC does not apply. However, since
it is deterministic, our MCMC chain will converge to a
stationary distribution. This limiting distribution will be
π̃( z_c | y) ∝π( z_c)
π̃( y | z_c)
where the “∼” indicates an approximation. R-INLA returns
an approximation to the conditional marginal likelihood term, which
implies an approximation to π( z_c | y). This leaves the
question of how good this approximation is, for which we have to
rely on asymptotic results, heuristics and numerical experience.
The conditional marginal likelihood estimate returned from
R-INLA is based on numerical integration and uses a sequence of
Laplace approximations <cit.>. This
estimate is more accurate than the classical estimate using one
Laplace approximation. This approximation has, with classical
assumptions, relative error 𝒪(n^-1) <cit.>,
where n is the number of replications in the observations. For our
purpose, this error estimate is sufficient, as it demonstrates that
π̃( z_c | y)/π( z_c | y)∝π̃( y | z_c)/π( y| z_c) =
1 + 𝒪(n^-1)
for plausible values of z_c. However, as discussed by
<cit.>, the classical assumptions are
rarely met in practice due to “random effects”, smoothing etc.
Precise error estimates under realistic assumptions are difficult to
obtain; see <cit.> for a more detailed discussion of this
issue.
Regarding numerical experience with the conditional marginal likelihood
estimate, <cit.> have empirically studied its
properties and accuracy for a wide range of latent Gaussian models.
They have compared the estimates with those obtained using MCMC, and
in all their cases the approximations to the marginal likelihood
provided by INLA were very accurate. For this reason, we believe that
the approximate stationary distribution π̃( z_c | y)
should be close to the true one, without being able to quantify this
error in more detail.
Although the error in Equation (<ref>) is pointwise, we expect
the error to be smooth in z_c. This is particularly
important, as in most cases we are interested in the univariate
marginals of π̃( z_c | y). These marginals will
typically have less error, as the influence of the approximation error
will be averaged out when integrating out all the other components. A final
renormalization would also remove a constant offset in the error.
Additionally, we will validate the approximation error in a simulation
study in Section <ref> where we fit various models using
INLA, MCMC and INLA within MCMC and very similar posterior
distributions are obtained. Furthermore, the real applications in
Section <ref> also support that the approximations to
the marginal likelihood are accurate.
§.§ Some remarks
Common sense is still not out of fashion; hence, there is an implicit
assumption that our INLA within MCMC approach should be used only for models for
which it is reasonable to use the INLA approach to do the inference
for the conditional model. The procedure that we have just shown will
allow INLA to be used together with the Metropolis-Hastings algorithm
(and, possibly, other MCMC methods) to obtain the posterior
distribution (and marginals) of z_c and the posterior marginals
of the elements in z_-c. Hence, this will allow INLA to be
used to fit models not implemented in the R-INLA package as well
as providing other options for model fitting, that we summarise here.
The Metropolis-Hastings algorithm will allow any choice of the priors
on the set of parameters z_c. This is an advantage (as shown in
the example in Section <ref>) of combining MCMC and INLA
because priors that are not implemented in R-INLA can be used in
the model. In particular, improper flat priors, multivariate priors
and objective priors can be used.
The framework of conditional LGMs that we can now fit using our new
approach is quite rich. It includes models with missing covariates
that are imputed at each step of the Metropolis-Hastings algorithm (see
example in Section <ref>), models with complex
non-linear effects in the linear predictor (see example in
Section <ref>) or models that have a mixture of effects in
the linear predictor <cit.>.
§ SIMULATION STUDY
In this section we develop simple examples to illustrate the method
proposed in the previous sections, and we investigate how this new
approach works in practice.
§.§ Bivariate linear regression
The first example is based on a linear regression with two covariates.
Our aim is to use our proposed method to obtain the posterior
distribution of the coefficients of the two covariates and then
compare the estimated marginals to the results obtained when the full
model is fitted with MCMC and INLA.
The simulated dataset contains 100 observations of a response variable
y and covariates u_1 and u_2. The model used to
generate the data is a typical linear regression, i.e.,
y_i = α + β_1 u_1i + β_2 u_2i +
ε_i; i = 1, …, 100
Here, ε_i is a Gaussian error term with zero mean and
precision τ. The dataset has been simulated using α = 3,
β_1 = 2, β_2 = -2 and τ = 1. Covariates u_1i and
u_2i have also been simulated using a uniform distribution between
0 and 1 in both cases.
This model can be easily fitted using R-INLA. Given that we are
using a Gaussian model, inference is exact in this case (up to
integration error). For this reason, we can compare the marginal
distributions of β_1 and β_2 provided by INLA to the ones
obtained with our combined approach. Note that the Metropolis-Hastings
algorithm will provide the joint posterior distribution of
β = (β_1, β_2), which can be used to obtain the
posterior marginals of β_1 and β_2. Furthermore, we can
also compare the marginals of α and τ, which will be
estimated by averaging the different conditional marginals obtained in
the Metropolis-Hastings steps.
In order to implement the Metropolis-Hastings algorithm to obtain a
sample from π(β| y) we have chosen a starting point of
β^(0) = (0, 0). The transition kernel used to obtain a
candidate β^(t+1) at iteration t is a bivariate
Gaussian kernel centered at β^(t) with a diagonal
variance-covariance matrix with values 1/0.75^2 in the diagonal, as
this provided a reasonable acceptance rate. The prior distribution of
β is the product of two Gaussian distributions with
zero mean and precision 0.001, because these are the default priors
for linear effects in R-INLA.
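Putting the pieces together for this example (a sketch that adapts the step of Section <ref> to two coefficients; the chain below is kept short purely for illustration):

library(INLA)
set.seed(123)

## Simulate the data described above
d <- data.frame(u1 = runif(100), u2 = runif(100))
d$y <- 3 + 2 * d$u1 - 2 * d$u2 + rnorm(100)

cond.fit2 <- function(beta, d) {
  d$off <- beta[1] * d$u1 + beta[2] * d$u2
  inla(y ~ 1 + offset(off), data = d, family = "gaussian")
}
lprior <- function(b) sum(dnorm(b, 0, sqrt(1 / 0.001), log = TRUE))

beta <- c(0, 0); fit <- cond.fit2(beta, d)
B <- matrix(NA, 500, 2)
for (i in 1:500) {
  beta.new <- rnorm(2, beta, 1 / 0.75)   # bivariate random-walk kernel
  fit.new <- cond.fit2(beta.new, d)
  la <- fit.new$mlik[1, 1] + lprior(beta.new) -
    fit$mlik[1, 1] - lprior(beta)
  if (log(runif(1)) < la) { beta <- beta.new; fit <- fit.new }
  B[i, ] <- beta
}
colMeans(B)   # posterior means of beta_1 and beta_2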
Figure <ref> shows a summary of the results. Given
that both covariates are independent, their coefficients should show
small correlation, and this can clearly be seen in the plot of the joint
posterior distribution of β. Also, it can be seen how the
marginals obtained with INLA within MCMC for β_1 and β_2
match those obtained with INLA and MCMC. In addition, we have included
the estimates of the posterior marginals of the intercept α and
the precision τ. When using INLA within MCMC, these are obtained
by Bayesian model averaging over the fitted models at every step of
the Metropolis-Hastings algorithm, whilst with R-INLA these
are obtained by using INLA alone. The three estimation methods provide very
similar estimates of the posterior marginals of the intercept and
the precision, which again confirms the accuracy of INLA within MCMC.
§.§ Missing covariates
In the next example, we will discuss the case of missing covariates.
We will consider a linear regression with covariate
u_1 only, and we will assume that a number of values of the
covariate are missing. The aim is to include the imputation of these
values in the model, so that the output is a marginal
distribution for each missing value. We will not discuss here the
different mechanisms under which the values may have gone missing, but
this is something that should be taken into account in the model. In
particular, we have removed 9 values of the covariate, which is
almost 10% of our data and allows the summary plots to fit in a 3x3
layout. Hence, in this case the missingness mechanism is of the type
missing completely at random <cit.>.
Now, we will treat the missing values as additional parameters to be
estimated. We will use a block updating scheme, as there can be a large
number of missing values. The transition kernel will be a multivariate
Gaussian with a diagonal variance-covariance matrix. The mean and variance for
all values are the mean and variance of the observed covariates,
respectively. The prior distribution is also a multivariate Gaussian,
but now with zero mean and diagonal variance-covariance matrix with
entries four times the variance of a uniform random variable in the
unit interval (the one used to simulate the covariates). This is done
so that the prior information is small compared to the information
provided by the covariates.
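A sketch of this proposal in R is given below (our own code). Because the kernel does not depend on the current state, the proposal densities do not cancel and must be kept in the acceptance probability:

## Independence proposal for the n.mis missing covariate values: Gaussian,
## centred at the mean of the observed values with their variance.
prop.missing <- function(n.mis, u.obs) {
  rnorm(n.mis, mean = mean(u.obs), sd = sd(u.obs))
}

## Log-density of the proposal, needed in the acceptance ratio
dprop.missing <- function(u.mis, u.obs) {
  sum(dnorm(u.mis, mean(u.obs), sd(u.obs), log = TRUE))
}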
Figure <ref> shows the posterior marginals
obtained from the samples. As can be seen, most of them are
centered at the actual values removed from the model.
Note that this time the model with missing covariates cannot be fitted
with R-INLA alone, so we can only compare the marginals to those
obtained with MCMC. In all cases the marginals obtained with INLA
within MCMC and full MCMC are very similar.
§.§ Poisson regression
In this example we consider a Poisson regression with two covariates:
y_i ∼ Po(μ_i); log(μ_i) = α + β_1 u_1i + β_2 u_2i; i = 1, …, 100.
The values of the parameters used to simulate the dataset are
α = 0.5, β_1 = 2 and β_2 = -2.
As in Section <ref>, our purpose is to estimate the
joint posterior distribution of (β_1, β_2).
The prior distribution used now is the
same as in the previous example. Hence, the posterior marginal of
α is obtained by combining the different conditional marginals
obtained at the different steps of the Metropolis-Hastings algorithm.
Figure <ref> shows the estimates of the marginal
distributions of the three parameters in the model, together with the
joint posterior distribution of β_1 and β_2. In all cases,
there is a very good agreement between the estimates obtained with
INLA, MCMC and INLA within MCMC of the marginals of the parameters in
the model.
§ APPLICATIONS
In this section, we will focus on some real-life applications that provide a
more realistic test of this methodology. In all the examples, we have run
INLA within MCMC and MCMC for a total of 100500 simulations and discarded the
first 500. Then we applied a thinning to keep one in ten iterations, to obtain
a final chain of 10000 samples. This includes samples from the missing
observations and fitted models. To fit the model using MCMC alone, we have used
rjags <cit.> with the same number of iterations and thinning.
§.§ Bayesian Lasso
The Lasso <cit.> is a popular method for regression and
variable selection. It has the nice property of
providing coefficient estimates that are exactly zero and, hence, it
performs model fitting and variable selection at the same time. For a
linear model with a Gaussian likelihood, the Lasso estimates
the regression coefficients by minimising
∑_i=1^n ( y_i - β_0 - ∑_j=1^p β_j
x_ij)^2 + λ∑_j=1^p |β_j|
Here, y_i is the response variable and x_ij are associated
covariates. n is the number of observations and p the number of
covariates. λ is a non-negative penalty term to control how
the shrinkage of the coefficients is done. If λ = 0 then the
fitted coefficients are those obtained by maximum likelihood, whilst
higher values of λ will shrink the estimates towards zero.
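To illustrate how λ shrinks the coefficients, the following sketch implements the standard coordinate-descent solver for the Lasso objective (written with the usual 1/(2n) scaling of the squared loss, and assuming a centered response so the intercept can be omitted); it is not the estimation method used in this paper, only a point of comparison.

import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator, the building block of Lasso solvers
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso; X is n x p, y is centered, lam >= 0."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta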
The Lasso is closely related to Bayesian inference as it can be
regarded as a standard regression model with Laplace priors on the
variable coefficients. The Laplace distribution is defined as
f(β) = 1/(2σ) exp(-|β - μ|/σ), β∈ℝ
where μ and σ, a positive number, are parameters of
location and scale, respectively. The Laplace prior distribution is
not available for (parts of) the latent field in R-INLA.
However, conditioning on the values of the β-coefficients
the model can be easily fitted with R-INLA.
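For use inside the Metropolis-Hastings algorithm, only the logarithm of this prior density is needed; a direct transcription of the formula above:

import numpy as np

def laplace_logpdf(beta, mu=0.0, sigma=1.0):
    # Log-density of the Laplace prior, log f(beta)
    return -np.log(2.0 * sigma) - np.abs(beta - mu) / sigma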
We will apply the methodology described in this paper to implement the
Bayesian Lasso by combining INLA and MCMC. We will be using the
Hitters dataset described in <cit.>. This
dataset records several statistics about players in the Major League
Baseball, including salary in 1987, number of times at bat in 1986 and
other variables. Our aim is to build a model to predict the player's
salary in 1987 on some of the other variables recorded in 1986 (the
previous season).
We will focus on a smaller model than the one described in
<cit.> and will consider predicting salary in 1987 on
only five variables measured from the 1986 season: number of times at
bat (AtBat), number of hits (Hits), the number of home runs (HmRun),
number of runs (Runs) and the number of runs batted (RBI).
For our implementation of the Bayesian Lasso, we
will be fitting models conditioning on the covariate coefficients
β = (β_1,…,β_p). Also, we will assume that
β and the error term precision τ are independent a
priori, i.e., π(β, τ) = π(β) π(τ). This
will provide a simpler way to compare our results with the Lasso and
it will also make computations a bit simpler. Note, however, that
it is also possible to choose a prior so that
π(β, τ) = π(β|τ)π(τ) <cit.>. The posterior distribution of these
variables will be obtained using MCMC.
A summary of the Lasso estimates is available in Table
<ref> and the posterior distributions of the coefficients are shown
in Figure <ref>. In all cases, there is agreement between the
Lasso and Bayesian Lasso estimates. Also, the posterior distribution
of the model coefficients is the same for MCMC and combining INLA with
MCMC. For those coefficients with a zero estimate with the Lasso, the
posterior distribution obtained with the Bayesian Lasso is centered at
zero.
§.§ Imputation of Missing Covariates
<cit.> describe the package mice that implements
several multiple imputation methods. We will be using the
nhanes dataset to illustrate how our approach can be used to
provide imputation of missing covariates in a real dataset. This
dataset contains data from <cit.> on age, body mass
index (bmi), hypertension status (hyp) and cholesterol
level (chl). Age is divided into three groups: 20-39,
40-59, 60+.
Our aim is to impute missing covariates in order to fit a model that
explains the cholesterol level on age and body mass index. Although
the values of age have been completely observed, there are missing
values in body mass index and cholesterol level. INLA can handle
missing values in the response (and will provide a predictive
distribution) but, as already stated, is not able to handle models
with missing values in the covariates.
We will consider a very simple imputation mechanism by assigning a
Gaussian prior to the missing values of body mass index. This Gaussian
is centred at the mean of the observed values (26.56) and has a
variance equal to four times the variance of the observed values
(71.07 altogether). With this, we expect to provide some guidance on
what the imputed values should be, while allowing for a wide range of variation.
More complex imputation mechanisms could be considered <cit.>. As in previous examples, we will fit
the same model using MCMC in order to compare both results. The model
that we will fit is:
chl_i = β_0 + β_1 bmi_i + β_2 age2_i + β_3 age3_i + ε_i,
β_0 ∝ 1,  β_k ∼ N(0, 0.001), k = 1, 2, 3,
ε_i ∼ N(0, τ),  τ ∼ Ga(1, 0.00005)
Figure <ref> shows the posterior
marginal distributions of the imputed values of the body mass index.
Both MCMC and our approach provide very similar point estimates. Table
<ref> summarises the model parameters obtained both with MCMC
and our approach and Figure <ref> displays the posterior
marginals of the model parameters obtained with our approach and MCMC.
In all cases, the marginals agree, and the point estimates look very
similar.
§.§ Spatial econometrics models
<cit.> describe a novel approach to extend the
classes of models that can be fitted with R-INLA to fit some
spatial econometrics models. In particular, they fit several
conditional models by fixing the values of some of the parameters in
the model, and then they combine these models using a Bayesian model
averaging approach <cit.>. <cit.>
show a practical implementation with a spatial statistics model using
R package INLABMA. Some of these models have already been
included in R-INLA <cit.> but are still
considered as experimental.
In this example we will focus on one of the spatial econometrics
models described in <cit.> to illustrate how our new
approach to combine MCMC and R-INLA can be used to fit
unimplemented models. In particular, we will consider the spatial lag
model <cit.>:
y = ρ W y + X β + u; u ∼ N(0,
σ^2_u I)
Here, y is a vector of observations at n areas, W is an
adjacency matrix, ρ a spatial autocorrelation parameter, X
an n× p matrix of covariates with associated coefficients
β = (β_1,…,β_p) and u = (u_1,…, u_n)
an error term. Each u_i, i=1,…,n, is normally distributed with zero
mean and precision τ_u = 1/σ^2_u. This model can be rewritten as follows:
y = (I_n - ρ W)^-1 Xβ + ε; ε∼ N(0, 1/τ_u ((I_n-ρ W)^T(I_n-ρ W))^-1)
This model is difficult to fit with any standard software for
mixed-effects models because of parameter ρ. If the value of
ρ is fixed, then it is easy to fit the model with R-INLA as
it becomes a linear term on the covariates plus a random effects term
with a known structure. Hence, by conditioning on the value of ρ
we will be able to fit the model with R-INLA. In order to use
our new approach, we will be drawing values of ρ using MCMC and
conditioning on this parameter to fit the models with R-INLA.
Regarding prior distributions, ρ is assigned a uniform between
-1.5 and 1, β_i, i = 1,…, p a Gaussian prior with zero
mean and precision 0.001 (the default), and τ_u is assigned a
Gamma distribution with parameters 1 and 0.00005 (the default for
the precision of a 'generic0' latent class in R-INLA).
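To illustrate the conditioning step, note that once ρ is fixed, (I_n - ρW)y = Xβ + u is an ordinary linear model with i.i.d. errors. In the sketch below, a least-squares fit stands in for the conditional model that would be fitted with R-INLA at each Metropolis-Hastings iteration; W, X and y are assumed given.

import numpy as np

def conditional_slm_fit(rho, W, X, y):
    """For fixed rho, (I - rho*W) y = X beta + u is a plain linear model."""
    A = np.eye(W.shape[0]) - rho * W
    # Least squares on the transformed response stands in for the INLA fit
    beta, rss, *_ = np.linalg.lstsq(X, A @ y, rcond=None)
    return beta, rss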
We have fitted this model to the Columbus dataset available in R
package spdep. This dataset contains information about 49
neighbourhoods in Columbus (Ohio) and we have considered a model with
crime rates as the response and household income and housing value as
covariates. We have also fitted the spatial lag model using a maximum
likelihood approach, the method proposed by <cit.>
and MCMC using an implementation of the model for the Jags software
included in package SEjags, which can be downloaded from Github.
The results are shown in Table <ref>. All Bayesian
approaches have very similar estimates, and these are also very
similar to the maximum likelihood estimates.
§ DISCUSSION
In this paper, we have developed a novel approach to extend the models
that can be fitted with INLA. For this, we have used INLA within the
Metropolis-Hastings algorithm, so that only a small number of parameters
need to be sampled.
We have shown three important applications. In the first one, we have
implemented a Bayesian Lasso for variable selection using Laplace
priors on the coefficient of the covariates. By following this
example, other priors could be easily used with INLA. This includes
not only univariate priors but also multivariate priors, which are seldom
available in R-INLA.
In our second example we have tackled the problem of imputation of
missing covariates in model fitting. Here, we have included a very
simple imputation method for the missing values in the covariates, so
that model fitting and imputation were done at the same time. Compared
to fitting the same model with MCMC, we obtained the same posterior
estimates. In an ongoing work, <cit.> explore how
this can be extended to larger problems and how different imputation
models and missingness mechanisms can be properly addressed with INLA
and MCMC.
Finally, we have also shown how other models not included in the
R-INLA software can be fitted with INLA and MCMC. In particular,
we have fitted a spatial econometrics model by fitting conditional
models on the spatial autocorrelation parameter. This method can be
easily modified to suit any other models. In particular, Gibbs
sampling could be used if the full conditionals are available for a
subset of model parameters.
To sum up, we believe that this approach can be employed together with
INLA to fit more complex models and that it can also be combined with
other MCMC algorithms to develop simple samplers to fit complex
Bayesian hierarchical models. This method can work well when the
conditional models are hard to explore with current approaches for
which INLA provides a fast approximation, such as geostatistical
models. Furthermore, INLA could be embedded into a Reversible Jump
MCMC algorithm so that once the model dimension has been set, the
resulting model is approximated with INLA. See, for example,
<cit.> for a comprehensive list of MCMC algorithms that
could benefit from embedding INLA.
§ ACKNOWLEDGEMENTS
Virgilio Gómez-Rubio has been supported by grant PPIC-2014-001, funded
by Consejería de Educación, Cultura y Deportes (JCCM) and FEDER,
and grant MTM2016-77501-P, funded by Ministerio de Economía y
Competitividad.
| Bayesian inference for complex hierarchical models has almost entirely
relied upon computational methods, such as Markov chain Monte Carlo
<cit.>. <cit.> propose a
new paradigm for Bayesian inference on hierarchical models that can be
represented as latent Gaussian models (LGMs), that focuses on
approximating marginal distributions for the parameters in the model.
This new approach, the Integrated Nested Laplace Approximation (INLA,
henceforth), uses several approximations to the conditional
distributions that appear in the integrals needed to obtain the
marginal distributions. See Section <ref> for details.
INLA is implemented as an R package, called R-INLA, that allows
us to fit complex models, often in a matter of seconds, which is
typically much faster than fitting the same model using MCMC methods.
Fitting models using INLA is restricted, in practice, to the classes
of models implemented in the R-INLA package. Several authors
have provided ways of fitting other models with INLA by fixing some of
the parameters in the model so that conditional models are fitted with
R-INLA. We have included a brief summary below.
<cit.> provide an early application of the idea of
fitting conditional models on some of the model parameters with
R-INLA. They developed this idea for a very specific example on
spatiotemporal models in which some of the model parameters are fixed
at their maximum likelihood estimates, which are then plugged into the
overall model, thus ignoring the uncertainty about these parameters
but greatly reducing the dimensionality of the model. However, they do
not tackle the problem of fitting the complete model to make inference
on all the parameters in the model.
<cit.> propose an approach to extend
the type of models that can be fitted with R-INLA and apply
their ideas to fit some spatial models. They note how some models can
be fitted after conditioning on one or several parameters in the
model. For each of these conditional models R-INLA reports the
marginal likelihood, which can be combined with a set of priors for
the parameters to obtain their posterior distribution. For the
remainder of the parameters, their posterior marginal distribution can
be obtained by Bayesian model averaging <cit.> the
family of models obtained with R-INLA.
Although <cit.> focus on some spatial
models, their ideas can be applied in many other examples. They apply
this to estimate the posterior marginal of the spatial autocorrelation
parameter in some models, and this parameter is known to be bounded,
so that computation of its marginal distribution is easy because the
support of the distribution is a bounded interval.
For the case of unbounded parameters, the previous approach can be
applied, but a previous search may be required. For example, the
(conditional) log-likelihood plus the log-prior could be
maximised to obtain the mode of the posterior marginal. This will mark
the centre of an interval where the values for the parameter are taken
from and where the posterior marginal can be evaluated.
In this paper, we will propose a different approach based on Markov
chain Monte Carlo techniques. Instead of trying to obtain the
posterior marginal of the parameters we condition on, we show how to
draw samples from their posterior distribution by combining MCMC
techniques and conditioned models fitted with R-INLA. This
provides several advantages, as described below.
This will increase the number of models that can be fitted using INLA
and its associated R package R-INLA. In particular,
models that can be expressed as a conditional LGM could be fitted. The
implementation of MCMC algorithms will also be simplified as only the
important parameters will be sampled, while the remaining parameters
are integrated out with INLA and R-INLA.
<cit.> have also effectively combined MCMC and
INLA for efficient variable selection and model choice.
The paper is structured as follows. The Integrated Nested Laplace
Approximation is described in Section <ref>. Markov chain
Monte Carlo methods are summarised in Section <ref>. Our
proposed combination of MCMC and INLA is detailed in Section
<ref>. Some simple examples are developed in Section
<ref> and some real applications are provided in Section
<ref>. Finally, a discussion and some final remarks
are provided in Section <ref>. | null | null | null | In this paper, we have developed a novel approach to extend the models
that can be fitted with INLA. For this, we have used INLA within the
Metropolis-Hastings algorithm, so that only a few number of parameters
are sampled.
We have shown three important applications. In the first one, we have
implemented a Bayesian Lasso for variable selection using Laplace
priors on the coefficient of the covariates. By following this
example, other priors could be easily used with INLA. This includes
not only univariate priors but multivariate priors, that are seldom
available in R-INLA.
In our second example we have tackled the problem of imputation of
missing covariates in model fitting. Here, we have included a very
simple imputation method for the missing values in the covariates, so
that model fitting and imputation were done at the same time. Compared
to fitting the same model with MCMC, we obtained the same posterior
estimates. In an ongoing work, <cit.> explore how
this can be extended to larger problems and how different imputation
models and missingness mechanisms can be properly addressed with INLA
and MCMC.
Finally, we have also shown how other models not included in the
R-INLA software can be fitted with INLA and MCMC. In particular,
we have fitted a spatial econometrics model by fitting conditional
models on the spatial autocorrelation parameter. This method can be
easily modified to suit any other models. In particular, Gibbs
sampling could be used if the full conditionals are available for a
subset of model parameters.
To sum up, we believe that this approach can be employed together with
INLA to fit more complex models and that it can also be combined with
other MCMC algorithms to develop simple samplers to fit complex
Bayesian hierarchical models. This method can work well when the
conditional models are hard to explore with current approaches for
which INLA provides a fast approximation, such as geostatistical
models. Furthermore, INLA could be embedded into a Reversible Jump
MCMC algorithm so that once the model dimension has been set, the
resulting model is approximated with INLA. See, for example,
<cit.> for a comprehensive list of MCMC algorithms that
could benefit from embedding INLA. | null |
http://arxiv.org/abs/1701.07569v2 | 20170126034219 | Data-Driven Sparse Sensor Placement for Reconstruction | [
"Krithika Manohar",
"Bingni W. Brunton",
"J. Nathan Kutz",
"Steven L. Brunton"
] | math.OC | [
"math.OC",
"cs.SY"
] |
Data-Driven Sparse Sensor Placement for Reconstruction
Krithika Manohar^*, Bingni W. Brunton, J. Nathan Kutz, and Steven L. Brunton
^*Corresponding author: [email protected]
December 30, 2023
==========================================================================================================================
Optimal sensor placement is a central challenge in the design, prediction, estimation, and control of high-dimensional systems. High-dimensional states can often leverage a latent low-dimensional representation, and this inherent compressibility enables sparse sensing. This article explores optimized sensor placement for signal reconstruction based on a tailored library of features extracted from training data. Sparse point sensors are discovered using the singular value decomposition and QR pivoting, which are two ubiquitous matrix computations that underpin modern linear dimensionality reduction. Sparse sensing in a tailored basis is contrasted with compressed sensing, a universal signal recovery method in which an unknown signal is reconstructed via a sparse representation in a universal basis. Although compressed sensing can recover a wider class of signals, we demonstrate the benefits of exploiting known patterns in data with optimized sensing. In particular, drastic reductions in the required number of sensors and improved reconstruction are observed in examples ranging from facial images to fluid vorticity fields. Principled sensor placement may be critically enabling when sensors are costly and provides faster state estimation for low-latency, high-bandwidth control.
Notation
Scalars:
  n: State dimension
  m: Number of snapshots
  p: Number of sensors (measurements)
  r: Intrinsic rank of tailored basis Ψ_r
  K: Sparsity of state in universal basis
  η: Variance of zero-mean sensor noise
Vectors:
  𝐱∈ℝ^n: High-dimensional state
  𝐲∈ℝ^p: Measurements of state
  𝐚∈ℝ^r: Tailored basis coefficients
  𝐞_j∈ℝ^n: Canonical basis vectors for ℝ^n
  𝐬∈ℝ^n: K-sparse basis coefficients
  γ∈ℕ^p: Sensor placement indices
  ψ∈ℝ^n: POD modes (columns of Ψ_r)
  𝐟∈ℝ^1× r: Rows of Θ
Matrices:
  C∈ℝ^p× n: Measurement matrix
  Q: Unitary QR factor matrix
  R: Upper triangular QR factor matrix
  Ψ∈ℝ^n× n: Universal basis
  Ψ_r∈ℝ^n× r: Tailored basis of rank r
  Θ = CΨ: Product of measurement matrix and basis
  X∈ℝ^n× m: Data matrix with m snapshots
Optimal sensor and actuator placement is an important unsolved problem in control theory.
Nearly every downstream control decision is affected by these sensor/actuator locations, but determining optimal locations amounts to an intractable brute force search among the combinatorial possibilities.
Indeed, there are \binom{n}{p} = n!/((n-p)!p!) possible choices of p point sensors out of an n-dimensional state 𝐱.
Determining optimal sensor and actuator placement in general, even for linear feedback control, is an open challenge.
Instead, sensor and actuator locations are routinely chosen according to heuristics and intuition.
For moderate-sized search spaces, the sensor placement problem has well-known model-based solutions using optimal experiment design <cit.> and information-theoretic and Bayesian criteria <cit.>.
We explore how to design optimal sensor locations for signal reconstruction in a framework that scales to arbitrarily large problems, leveraging modern techniques in machine learning and sparse sampling.
Reducing the number of sensors through principled selection may be critically enabling when sensors are costly, and may also enable faster state estimation for low latency, high bandwidth control.
This article explores optimized sensor placement for signal reconstruction based on a tailored library of features extracted from training data.
In this paradigm, optimized sparse sensors are computed using a powerful sampling scheme based on the matrix QR factorization and singular value decomposition.
Both procedures are natively implemented in modern scientific computing software, and Matlab code supplements are provided for all examples in this paper <cit.>.
These data-driven computations are more efficient and easier to implement than the convex optimization methods used for sensor placement in classical design of experiments.
In addition, data-driven sensing in a tailored basis is contrasted with compressed sensing, a universal signal recovery method in which an unknown signal is reconstructed using a sparse representation in a universal basis.
Although compressed sensing can recover a wider class of signals, we demonstrate the benefits of exploiting known patterns in data with optimized sensing.
In particular, drastic reductions in the required number of sensors and improved reconstruction are observed in examples ranging from facial images to fluid vorticity fields.
The overarching signal reconstruction problem is formulated in “Sidebar1".
This paper provides a tutorial overview of current sparse sampling methods for sensor placement and reconstruction of structured signals.
We also connect and equate certain sampling strategies with analogues in the design of experiments literature.
Near-optimal sensor locations are obtained using fast greedy procedures that scale well with large signal dimension. This work also generalizes and extends a powerful sampling scheme based on the matrix QR factorization and demonstrates its broad applicability to image and fluid flow reconstruction as well as polynomial interpolation.
The overarching sensor placement problem is summarized in “Sidebar1", and Matlab code is provided for all examples.
There are myriad complex systems that would benefit from principled, scaleable sensor and actuator placement, including fluid flow control <cit.>, power grid optimization <cit.>, epidemiological modeling and suppression <cit.>, bio-regulatory network monitoring and control <cit.>, and high-performance computing <cit.>, to name only a few.
In applications where individual sensors are expensive, reducing the number of sensors through principled design may be critically enabling.
In applications where fast decisions are required, such as in high performance computing or feedback control, computations may be accelerated by minimizing the number of sensors required.
In other words, low-dimensional computations may be performed directly in the sensor space.
Scaleable optimization of sensor and actuator placement is a grand challenge problem, with tremendous potential impact and considerable mathematical depth.
With existing mathematical machinery, optimal placement can only be determined in general using a brute-force combinatorial search.
Although this approach has been successful in small-scale problems <cit.>, a combinatorial search does not scale well to larger problems. Moore's law of exponentially increasing computer power cannot keep pace with this combinatorial growth in complexity.
Despite the challenges of sensing and actuation in a high-dimensional, possibly nonlinear dynamical system, there are promising indicators that this problem may be tractable with modern techniques.
High-dimensional systems, such as are found in fluids, epidemiology, neuroscience, and the power grid, typically exhibit dominant coherent structures that evolve on a low-dimensional attractor.
Indeed, much of the success of modern machine learning rests on the ability to identify and take advantage of patterns and features in high-dimensional data.
These low-dimensional patterns are often identified using dimensionality reduction techniques <cit.> such as the proper orthogonal decomposition (POD) <cit.>, which is a variant of principal component analysis (PCA), or more recently via dynamic mode decomposition (DMD) <cit.>, diffusion maps <cit.>, etc.
In control theory, balanced truncation <cit.>, balanced proper orthogonal decomposition (BPOD) <cit.>, and the eigensystem realization algorithm (ERA) <cit.>, have been successfully applied to obtain control-oriented reduced-order models for many high-dimensional systems.
In addition to advances in dimensionality reduction, key developments in optimization, compression, and the geometry of sparse vectors in high-dimensional spaces are providing powerful new techniques to obtain approximate solutions to NP-hard, combinatorially difficult problems in scaleable convex optimization architectures.
For example, compressed sensing <cit.> provides convex algorithms to solve the combinatorial sparse signal reconstruction problem with high probability.
Ideas from compressed sensing have been used to determine the optimal sensor locations for categorical decisions based on high-dimensional data <cit.>.
Recently, compressed sensing, sparsity-promoting algorithms such as the lasso regression <cit.>, and machine learning have been increasingly applied to characterize and control dynamical systems <cit.>.
These techniques have been effective in modeling high-dimensional fluid systems using POD <cit.> and DMD <cit.>.
Information criteria <cit.> have also been leveraged for the sparse identification of nonlinear dynamics <cit.>, as in <cit.>, and may also be useful for sensor placement.
Thus, key advances in two fields are fundamentally changing our approach to the acquisition and analysis of data from complex dynamical systems: 1) machine learning, which exploits patterns in data for low-dimensional representations, and 2) sparse sampling, where a full signal can be reconstructed from a small subset of measurements. The combination of machine learning and sparse sampling is synergistic, in that underlying low-rank representations facilitate sparse measurements.
Exploiting coherent structures underlying a large state space allows us to estimate and control systems with few measurements and sparse actuation. Low-dimensional, data-driven sensing and control is inspired in part by the high performance exhibited by biological organisms, such as an insect that performs robust, high-performance flight control in a turbulent fluid environment with minimal sensing and low-latency control <cit.>. They provide proof-by-existence that it is possible to assimilate sparse measurements and perform low-dimensional computations to interact with coherent structures in a high-dimensional system (i.e., a turbulent fluid).
Here we explore two competing perspectives on high-dimensional signal reconstruction: 1) the use of compressed sensing based on random measurements in a universal encoding basis, and 2) the use of highly specialized sensors for reconstruction in a tailored basis, such as POD or DMD.
These choices are also discussed in the context of feedback control.
Many competing factors impact control design, and a chief consideration is the latency in making a control decision, with large latency imposing limitations on robust performance <cit.>.
Thus, for systems with fast dynamics and complex coherent structures, it is important to make control decisions quickly based on efficient low-order models, with sensors and actuators placed strategically to gather information and exploit sensitivities in the dynamics.
§.§ Extensions to dynamics, control, and multiscale physics
Data-driven sensor selection is generally used for instantaneous full-state reconstruction, despite the fact that many signals are generated by a dynamical system <cit.>.
Even in reduced-order models, sensors are typically used to estimate nonlinear terms instantaneously without taking advantage of the underlying dynamics.
However, it is well known that for linear control systems <cit.>, the high-dimensional state may be reconstructed with few sensors, if not a single sensor, by leveraging the time history in conjunction with a model of the dynamics, as exemplified by the Kalman filter <cit.>.
In dynamic estimation and control, prior placement of sensors and actuators is generally assumed.
Extending the sensor placement optimization to the model reduction <cit.> and system identification <cit.> of linear control systems is an important avenue of ongoing work.
In particular, sensors and actuators may be chosen to increase the volume of the controllability and observability Gramians, related to the original balanced truncation literature <cit.>.
More generally, sensor and actuator placement may be optimized for robustness <cit.>, or for network control and consensus problems <cit.>.
The sensor placement algorithms discussed above are rooted firmly in linear algebra, making them readily extensible to linear control systems.
Recent advances in dynamical systems are providing techniques to embed nonlinear systems in a linear framework through a suitable choice of measurement functions of the state, opening up the possibility of optimized sensing for nonlinear systems.
As early as the 1930s, Koopman demonstrated that a nonlinear system can be rewritten as an infinite-dimensional linear operator on the Hilbert space of measurement functions <cit.>.
This perspective did not gain traction until modern computation and data collection capabilities enabled the analysis of large volumes of measurement data.
Modern Koopman theory may drive sensor placement and the selection of nonlinear measurement functions on the sensors to embed nonlinear dynamics in a linear framework for optimal nonlinear estimation and control.
This approach is consistent with neural control systems, where biological sensor networks (e.g., strain sensors on an insect wing) are processed through nonlinear neural filters before being used for feedback control.
Much of the modern Koopman operator theory has been recently developed <cit.>, and it has been shown that under certain conditions DMD approximates the Koopman operator <cit.>; sensor fusion is also possible in the Koopman framework <cit.>.
Recently, Koopman analysis has been used to develop nonlinear estimators <cit.> and controllers <cit.>, although establishing rigorous connections to control theory is an ongoing effort <cit.>.
Koopman theory has also been used to analyze chaotic dynamical systems from time-series data <cit.>, relying on the Takens embedding <cit.>, which is related to sensor selection.
Beyond extending sensor selection to nonlinear systems and control, there is a significant opportunity to apply principled sensor selection to multiscale systems.
Turbulence is an important high-dimensional system that exhibits multiscale phenomena <cit.>.
Data-driven approaches have been used to characterize turbulent systems <cit.>, including clustering <cit.>, network theory <cit.>, DMD-based model reduction <cit.>, and local POD subspaces <cit.>, to name a few.
Recently, a multiresolution DMD has been proposed <cit.>, where a low-dimensional subspace may locally characterize the attractor, despite a high-dimensional global attractor.
This approach may significantly reduce the number of sensors needed for multiscale problems.
§ COMPRESSED SENSING: RANDOM MEASUREMENTS IN A UNIVERSAL BASIS
The majority of natural signals, such as images and audio, are highly compressible, meaning that when the signal is written in an appropriate coordinate system, only a few basis modes are active.
These few values corresponding to the large mode amplitudes must be stored for accurate reconstruction, providing a significant reduction compared to the original signal size.
In other words, in the universal transform basis, the signal may be approximated by a sparse vector containing mostly zeros.
This inherent sparsity of natural signals is central to the mathematical framework of compressed sensing.
Signal compression in the Fourier domain is illustrated on an image example in “Sidebar4".
Further, sparse signal recovery using compressed sensing is demonstrated on a sinusoidal example in “sb:cs_example".
The theory of compressed sensing <cit.> inverts this compression paradigm.
Instead of collecting high-dimensional measurements just to compress and discard most of the information, it may be possible to collect a low-dimensional subsample or compression of the data and then infer the sparse vector of coefficients in the transformed coordinate system.
§.§ Theory of compressed sensing
Mathematically, a compressible signal 𝐱∈ℝ^n may be written as a sparse vector 𝐬∈ℝ^n in a new basis Ψ∈ℝ^n× n such that
𝐱 = Ψ𝐬.
The vector is called K-sparse if there are exactly K nonzero elements.
To be able to represent any natural signal, rather than just those from a tailored category, the basis must be complete.
Consider a set of measurements 𝐲∈ℝ^p, obtained via a measurement matrix C∈ℝ^p× n, which satisfies
𝐲 = C𝐱 = CΨ𝐬 = Θ𝐬.
In general, for p<n, (<ref>) is underdetermined, and there are infinitely many solutions. The least-squares (minimum ℓ_2-norm) solution is not sparse, and typically yields poor reconstruction.
Instead, knowing that natural signals are sparse, we seek the sparsest 𝐬 consistent with the measurements 𝐲,
𝐬 = argmin_𝐬' ‖𝐬'‖_0, such that 𝐲 = Θ𝐬',
where ‖·‖_0 is the ℓ_0 pseudo-norm, which counts the number of non-zero entries of 𝐬.
Unfortunately, this optimization problem is intractable, requiring a combinatorial brute-force search across all sparse vectors 𝐬.
A major innovation of compressed sensing is a set of conditions on the measurement matrix C that allows the nonconvex ℓ_0-minimization in (<ref>) to be relaxed to the convex ℓ_1-minimization <cit.>
𝐬 = argmin_𝐬' ‖𝐬'‖_1, such that 𝐲 = Θ𝐬',
where ‖𝐬‖_1 = ∑_k=1^n|s_k|. This formulation is shown schematically in Fig. <ref>.
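The ℓ_1 problem above is a linear program, and a small self-contained solver can be written with SciPy using the standard reformulation (minimize 1^T t subject to -t ≤ 𝐬 ≤ t and Θ𝐬 = 𝐲); dedicated convex solvers work equally well. This is a minimal sketch, not an optimized implementation.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """Solve min ||s||_1 subject to Theta @ s = y as a linear program."""
    p, n = Theta.shape
    # Stack variables z = [s, t] and minimize sum(t) with -t <= s <= t
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])       # s - t <= 0 and -s - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([Theta, np.zeros((p, n))])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]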
For the ℓ_1-minimization in (<ref>) to yield the sparsest solution in (<ref>) with high probability, the measurements must be chosen so that Θ = CΨ satisfies a restricted isometry property (RIP)
(1-δ_K)‖𝐬‖_2^2 ≤ ‖Θ𝐬‖_2^2 ≤ (1+δ_K)‖𝐬‖_2^2,
where δ_K is a small positive restricted isometry constant <cit.>.
In particular, there are two conditions on C for a RIP to be satisfied for all K-sparse vectors 𝐬:
* The measurements must be incoherent with respect to the basis Ψ.
This incoherence means that the rows of C are sufficiently uncorrelated with the columns of Ψ, as quantified by
μ(C,Ψ) = √(n) max_j,k |⟨𝐜_k, ψ_j⟩|.
A smaller μ indicates more incoherent measurements, with an optimal value of μ=1.
Here, 𝐜_k denotes the k-th row of C and ψ_j the j-th column of Ψ, both of which are assumed to be normalized. A more detailed discussion about incoherence and the RIP may be found in <cit.>.
* The number of measurements p must satisfy <cit.>
p∼𝒪(Klog(n/K)).
The Klog(n/K) term above is generally multiplied by a small constant multiple of the incoherence.
Thus, fewer measurements are required if they are less coherent.
Intuitively, the existence of a RIP implies that the geometry of sparse vectors is preserved through the measurement matrix C.
Determining the exact constant δ_K may be extremely challenging in practice, and it tends to be more desirable to characterize the statistical properties of δ_K, as the measurement matrix C may be randomly chosen.
“Sidebar_6" describes why it is not possible to use QR pivot locations as optimized sensors for compressed sensing, since they fail to identify the sparse structure of an unknown signal.
Often, a generic basis such as Fourier or wavelets may be used to represent the signal sparsely.
Spatially localized measurements (i.e., single pixels in the case of an image) are optimally incoherent with respect to the Fourier basis, so that μ(C,Ψ)=1.
Thus, single pixel measurements are ideal because they excite a broadband frequency response.
In contrast, a measurement corresponding to a fixed Fourier mode would be uninformative; if the signal is not sparse in this particular frequency, this measurement provides no information about the other Fourier modes.
For many engineering applications, spatially localized measurements are desirable, as they correspond to physically realizable sensors, such as buoys in the ocean.
One of the major results of compressed sensing is that random projection measurements of the state (i.e., entries of C that are Bernoulli or Gaussian random variables) are incoherent with respect to nearly any generic basis <cit.>.
This result is truly remarkable; however, the incoherence of random projections is not optimal, and typically scales as μ∼√(2log(n)).
Moreover, it may be difficult to obtain random projections of the full state in physical applications.
There are many alternative strategies to solve for the sparsest solution to (<ref>).
Greedy algorithms are often used <cit.>, including the compressed sampling matching pursuit (CoSaMP) algorithm <cit.>.
In addition, there is additional theory about how sparse the random projections may be for compressed sensing <cit.>.
§.§ Compressed sensing example
As a simple example, we consider a sparse signal that is constructed as the sum of three distinct cosine waves,
x(t) = cos(2π× 37 t) + cos(2π× 420 t) + cos(2π× 711 t).
The Shannon-Nyquist sampling theorem <cit.> states that for full signal reconstruction, we must sample at twice the highest frequency present, indicating a theoretical minimum sampling rate of 1422Hz.
However, since the signal is sparse, we may sample at considerably lower than the Nyquist rate, in this case at an average of 256Hz, shown in Figure <ref>.
Note that for accurate sparse signal reconstruction, these measurements must be randomly spaced in time, so that the relative spacing of consecutive points may be quite close or quite far apart.
Spacing points evenly with a sampling rate of 256Hz would alias the signal, resulting in poor reconstruction. Matlab code for reproducing Figure <ref> is provided below.
[firstline=3,lastline=23]MATLAB/FIG_X_CS.m
§ OPTIMAL SPARSE SENSING IN A TAILORED BASIS
The compressed sensing strategy above is ideal for the recovery of a high-dimensional signal of unknown content using random measurements in a universal basis.
However, if information is available about the type of signal (e.g., the signal is a turbulent velocity field or an image of a human face), it is possible to design optimized sensors that are tailored for the particular signals of interest.
Dominant features are extracted from a training dataset consisting of representative examples, for example using the proper orthogonal decomposition (POD).
These low-rank features, mined from patterns in the data, facilitate the design of specialized sensors that are tailored to a specific problem.
Low-rank embeddings, such as POD, have already been used in the ROM community to select measurements in the state space that are informative for feature space reconstruction.
The so-called empirical interpolation methods seek the best interpolation points for a given basis of POD features.
These methods have primarily been used to speed up the evaluation of nonlinear terms in high-dimensional, parameterized systems. However, the resulting interpolation points correspond to measurements in state space, and their use for data-driven sensor selection has largely been overlooked. We will focus on this formulation of sensor selection and explore sparse, convex, and greedy optimization methods for solving it.
We begin with brief expositions on POD and our mathematical formulation of sensor placement, followed by an overview of related work in design of experiments and sparse sampling. We conclude this section with our generalized sensor selection method that connects empirical interpolation methods, such as QR pivoting to optimize condition number, with D-optimal experimental design <cit.>.
The QR pivoting method described in “Sec:QR" is particularly favorable, as it is fast, simple to implement, and provides nearly optimal sensors tailored to a data-driven POD basis. []Finally, the distinctions between compressed sensing and our data-driven sensing are summarized in “sb:sensing_compare".
§.§ Proper orthogonal decomposition
POD is a widespread data-driven dimensionality reduction technique <cit.> used in many domains; it is also commonly known as the Karhunen-Loève expansion, principal component analysis (PCA) <cit.>, and empirical orthogonal functions <cit.>.
POD expresses high-dimensional states ∈^n as linear combinations of a small number of orthonormal eigenmodes (i.e., POD modes) that define a low-dimensional embedding space. States are projected into this POD subspace, yielding a reduced representation that can be used to streamline tasks that would normally be expensive in the high-dimensional state space.
This low-rank embedding does not come for free, but instead requires training data to tailor the POD basis to a specific problem.
POD is illustrated on a simple example of extracting coherent features in images of human faces in “Sidebar2".
A low-dimensional representation of 𝐱 in terms of POD coefficients 𝐚 can be lifted back to the full state with a linear combination of POD modes,
𝐱_i ≈ ∑_k=1^r a_k(t_i) ψ_k(x).
For time-series data 𝐱_i, the coefficients a_k(t_i) vary in time and the ψ_k(x) are purely spatial modes without time dependence, resulting in a space-time separation of variables.
Thus, care should be taken applying POD to data from a traveling wave problem.
The eigenmodes ψ_k and POD coefficients a_k are easily obtained from the singular value decomposition (SVD). Given a data matrix of state space observations X = [𝐱_1 𝐱_2 … 𝐱_m], the resulting eigenmodes are the orthonormal left singular vectors of X obtained via the SVD,
X = ΨΣV^T ≈ Ψ_r Σ_r V_r^T.
The matrices Ψ_r and V_r contain the first r columns of Ψ and V (left and right singular vectors, respectively), and the diagonal matrix Σ_r contains the first r× r block of Σ (singular values).
The SVD is the optimal least-squares approximation to the data for a given rank r, as demonstrated by the Eckart-Young theorem <cit.>
X_⋆ = argmin_X̃ ‖X - X̃‖_F s.t. rank(X̃)=r,
where X_⋆ = Ψ_r Σ_r V_r^T, and ‖·‖_F is the Frobenius norm.
The low-dimensional vector of POD coefficients for a state 𝐱 is given by the orthogonal projection 𝐚 = Ψ_r^T𝐱.
Thus, the POD is a widely used dimensionality reduction technique for high-dimensional systems. This reduction allows computational speedup of numerical time-stepping, parameter estimation, and control.
Choosing the intrinsic target rank without magnifying noise in the data is a difficult task. In practice, r is often chosen by thresholding the singular values to capture some percentage of the variance in the data. An optimal hard threshold is derived in <cit.> based on the singular value distribution and aspect ratio of the data matrix, assuming additive Gaussian white noise of unknown variance. This threshold criterion has been effective in practice, even in cases where the noise is likely not Gaussian.
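In practice, the POD basis is computed with a single call to the SVD; a minimal NumPy sketch of the decomposition, followed by the projection and lifting steps:

import numpy as np

def pod_basis(X, r):
    """Rank-r POD of a snapshot matrix X (n states x m snapshots)."""
    Psi, Sigma, Vt = np.linalg.svd(X, full_matrices=False)
    return Psi[:, :r], Sigma[:r], Vt[:r, :]

# Projection onto the POD subspace and lifting back to the full state:
#   a = Psi_r.T @ x
#   x_hat = Psi_r @ a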
§.§ Sensor placement for reconstruction
We optimize sensor placement specifically to reconstruct high-dimensional states from point measurements, given data-driven or tailored bases. Recall that full states may be expressed as an unknown linear combination of basis vectors
x_j = ∑_k=1^r Ψ_jk a_k,
where Ψ_jk is the coordinate form of Ψ_r from (<ref>).
Effective sensor placement results in a point measurement matrix C that is optimized to recover the modal mixture 𝐚 from sensor outputs 𝐲. Point measurements require that the sampling matrix C∈ℝ^p× n be structured in the following way
C = [ 𝐞_γ_1 𝐞_γ_2 … 𝐞_γ_p ]^T,
where 𝐞_j are the canonical basis vectors for ℝ^n with a unit entry at index j and zeros elsewhere. Note that point measurements are fundamentally different from the suggested random projections of compressive sensing.
The measurement matrix C results in the linear system
y_i = ∑_j=1^n C_ij x_j = ∑_j=1^n C_ij ∑_k=1^r Ψ_jk a_k,
where C_ij is the coordinate form of C from (<ref>).
The observations in 𝐲 consist of p elements selected from 𝐱,
𝐲 = C𝐱 = [x_γ_1 x_γ_2 … x_γ_p]^T,
where γ = {γ_1,…,γ_p}⊂{1,…,n} denotes the index set of sensor locations with cardinality |γ|=p.
When 𝐱 is unknown, it can be reconstructed by approximating the unknown basis coefficients with the Moore-Penrose pseudoinverse, 𝐚̂ = Θ^†𝐲 = (CΨ_r)^†𝐲. Equivalently, the reconstruction is obtained using
𝐱̂ = Ψ_r𝐚̂, where 𝐚̂ = Θ^-1𝐲 = (CΨ_r)^-1𝐲 for p=r, and
𝐚̂ = Θ^†𝐲 = (CΨ_r)^†𝐲 for p>r.
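Given sensor indices γ and a tailored basis Ψ_r, the reconstruction amounts to a single least-squares solve; a minimal sketch covering both the square (p=r) and oversampled (p>r) cases:

import numpy as np

def reconstruct(y, gamma, Psi_r):
    """Gappy reconstruction x_hat = Psi_r @ pinv(Psi_r[gamma, :]) @ y."""
    Theta = Psi_r[gamma, :]                    # C @ Psi_r for point sensors
    a_hat, *_ = np.linalg.lstsq(Theta, y, rcond=None)  # a_hat = pinv(Theta) y
    return Psi_r @ a_hat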
A schematic of sparse sampling in a tailored basis Ψ_r is shown in Fig. <ref>.
The optimal sensor locations are those that permit the best possible reconstruction 𝐱̂.
Thus, the sensor placement problem seeks rows of Ψ_r, corresponding to point sensor locations in state space, that optimally condition inversion of the matrix Θ.
For brevity in the following discussion we denote the matrix to be inverted by M_γ = Θ^TΘ (M_γ = Θ if p=r).
Recall that γ determines the structure of C, i.e., the sensor locations, and hence affects the condition numbers of Θ and M_γ.
The condition number of the system may be indirectly bounded by optimizing the spectral content of M_γ using its determinant, trace, or spectral radius.
For example, the spectral radius criterion for M_γ^-1 maximizes the smallest singular value of M_γ,
γ_⋆ = argmin_γ,|γ|=p ‖M_γ^-1‖_2 = argmax_γ,|γ|=p σ_min(M_γ).
Likewise, the sum (trace) or product of magnitudes (determinant) of its eigenvalue or singular value spectrum may be optimized,
γ_⋆ = argmax_γ,|γ|=p tr(M_γ) = argmax_γ,|γ|=p ∑_i λ_i(M_γ),
γ_⋆ = argmax_γ,|γ|=p |det M_γ| = argmax_γ,|γ|=p ∏_i|λ_i(M_γ)|
= argmax_γ,|γ|=p ∏_i σ_i(M_γ).
Direct optimization of the above criteria requires a combinatorial search over \binom{n}{p} possible sensor configurations and is hence computationally intractable even for moderate n. Several heuristic greedy sampling methods have emerged for state reconstruction specifically with POD bases. These gappy POD <cit.> methods originally relied on random sub-sampling. However, significant performance advances were demonstrated by using principled sampling strategies for reduced-order models (ROMs) in fluid dynamics <cit.>, ocean modeling <cit.> and aerodynamics <cit.>. More recently, variants of the so-called empirical interpolation method (EIM, DEIM and Q-DEIM) <cit.> have provided near-optimal sampling for interpolative reconstruction of nonlinear terms in ROMs. This work examines an approximate greedy solution given by the matrix QR factorization with column pivoting of Ψ_r^T, which builds upon the Q-DEIM method <cit.>.
The goal of sparse measurement selection is to choose a measurement matrix C representing p distinct point measurements in state space, so that C consists of p rows of the identity matrix.
Although more general linear measurements of 𝐱 may be admissible in some problems, point measurements are physically appealing as spatially localized sensors.
The row-wise sum 𝐜^+ = ∑_k 𝐜_k satisfies ‖𝐜^+‖_0 = p,
where 𝐜_k is the k-th row of C.
The sensor selection problem can be formulated as choosing C to make the pseudoinverse of Θ = CΨ_r as well-conditioned as possible, thus making the estimation of the coefficients in 𝐚 robust to measurement noise on 𝐲. The condition number is discussed in more detail in “Sidebar3".
Condition number minimization is the viewpoint taken by the reduced-order modeling community for selecting point measurements within a low-rank POD basis. These empirical interpolation methods (EIM) <cit.> assume p=r sensors, so that Θ is square and invertible. EIM and subsequent discrete variants such as DEIM <cit.> and Q-DEIM <cit.> are greedy procedures that control the condition number of (CΨ_r)^-1 by minimizing the spectral norm (i.e., the largest singular value)
C_⋆ = argmin_C ‖(CΨ_r)^-1‖_2,
where C is subject to the same structural constraints as above.
Related strategies include selecting measurements at maxima of successive POD modes or iteratively seeking measurements that decrease the condition number of CΨ_r <cit.>.
This strategy is particularly advantageous when only a few modes are required to characterize the data. When singular values decay slowly, we may require p>r measurements for well-conditioned reconstruction. In “Sec:QR", we generalize one particular empirical interpolation method, Q-DEIM, which uses QR pivoting to determine sensor locations. In particular, we extend this method to the case of p> r sensors, making it viable for more general sensor selection and significantly speeding up its computation.
§.§.§ General formulation
Sensor selection is more generally framed as follows: given a set of n possible measurement indices V, select a subset of sensor indices S to optimize a carefully chosen function evaluating their quality. The problem is mathematically realized as
max_S⊆V f(S) subject to |S| = p.
This problem belongs to an area of research called submodular function optimization, surveyed in <cit.>. The so-called submodular function, f(S): 2^V ↦ ℝ, maps each subset to a scalar value; a brute force search over its entire domain (the power set of all possible subsets, 2^V) is computationally intractable even for moderate values of n.
Possible choices of f include mutual information, entropy, estimation error covariance, and spatial coverage of sensors. Often, even a single evaluation of f is expensive when matrix determinants or factorizations are involved. The present work aims to reconstruct states from measurements with minimal variance of reconstruction error, which requires maximizing the determinant of error covariance matrices. The minimization of various error metrics is an active topic in optimal experiment design, for instance, the determinant or volume (D-optimal), the trace or mean-squared error (A-optimal), and maximum eigenvalue or worst-case variance (e-optimal) of error covariance matrices.
Joshi and Boyd <cit.> frame sensor selection as selecting p rows of Ψ_r that minimize the volume of the estimation error covariance matrix, which is determined by (CΨ_r)^T CΨ_r = Θ^TΘ. Since matrix volume is the absolute value of the determinant, the naive approach requires evaluating \binom{n}{p} determinants over a combinatorially large set of row-selected submatrices.
Instead, the authors approach this search with a computationally tractable convex optimization,
C_⋆ = argmax_C log det (CΨ_r)^T CΨ_r,
subject to ‖𝐜^+‖_0 = p, c_i^+ ∈ [0,1].
Each intermediate iteration of this method operates on an n× n matrix, so that storage requirements scale as 𝒪(n^2). Thus this approach is costly for high-dimensional states. The authors frame Ψ_r as a general matrix whose rows are possible measurements from which they select a subset of measurements. However, this matrix is not explicitly restricted to a POD basis, as proposed here.
§.§ Sparse sensor placement with QR pivoting
An original contribution of this work is extending Q-DEIM to the case of oversampled sensor placement, where the number of sensors exceeds the number of modes used in reconstruction (p>r). The key computational idea enabling oversampling is the QR factorization with column pivoting applied to the POD basis. QR pivoting itself dates back to the work of Businger and Golub in the 1960s on least squares problems <cit.>, and it has found utility in various measurement selection applications <cit.>. Similar to empirical interpolation methods such as DEIM, pivots from the QR factorization optimally condition the measurement, or row-selected, POD basis, as described below.
The reduced matrix QR factorization with column pivoting decomposes a matrix A∈ℝ^m× n into a unitary matrix Q, an upper-triangular matrix R, and a column permutation matrix C^T such that A C^T = QR. The pivoting procedure provides an approximate greedy solution method for the optimization in (<ref>), which is also known as submatrix volume maximization because matrix volume is the absolute value of the determinant. QR column pivoting increments the volume of the submatrix constructed from the pivoted columns by selecting a new pivot column with maximal 2-norm, then subtracting from every other column its orthogonal projection onto the pivot column (see Algorithm <ref>). Pivoting expands the submatrix volume by enforcing a diagonal dominance structure <cit.>
σ_i^2 = |r_ii|^2 ≥ ∑_j=i^k |r_jk|^2, 1≤ i ≤ k ≤ m.
This works because matrix volume is also the product of the diagonal entries r_ii,
|det A| = ∏_i σ_i = ∏_i |r_ii|.
Furthermore, the oversampled case p>r may be solved using the pivoted QR factorization of Ψ_rΨ_r^T, where the column pivots are selected from n candidate state space locations based on the observation that
det Θ^TΘ = ∏_i=1^r σ_i(Θ^TΘ),
where we drop the absolute value since the determinant of Θ^TΘ is nonnegative.
QR factorization with column pivoting of A∈ℝ^n× m
Thus the QR factorization with column pivoting yields r point sensors (pivots) that best sample the r basis modes Ψ_r,
Ψ_r^T C^T = QR.
Based on the same principle of pivoted QR, which controls the condition number by maximizing the submatrix volume, the oversampled case is handled by the pivoted QR factorization of Ψ_rΨ_r^T,
(Ψ_rΨ_r^T) C^T = QR.
Algorithm <ref>, the QR factorization, is natively implemented in most scientific computing software packages. A Matlab code implementation of Algorithm <ref> is provided in “sb:qr_code".
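A parallel sketch in Python is equally short, since SciPy exposes the pivoted QR factorization directly; both the p ≤ r and the oversampled p > r cases described above are covered.

import numpy as np
from scipy.linalg import qr

def qr_sensors(Psi_r, p):
    """Near-optimal sensor indices: the first p pivots of a pivoted QR."""
    r = Psi_r.shape[1]
    # p <= r samples Psi_r^T directly; p > r uses Psi_r @ Psi_r^T
    M = Psi_r.T if p <= r else Psi_r @ Psi_r.T
    _, _, piv = qr(M, pivoting=True, mode='economic')
    return piv[:p]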
The oversampled case requires an expensive QR factorization of an n× n matrix, whose storage requirements scale quadratically with state dimension.
However, this operation may be advantageous for several reasons.
The row selection given by the first p QR pivots increases the leading r singular values of Θ, hence increasing det Θ^TΘ.
This is the same maximization objective used in D-optimal experiment design <cit.>, which is typically solved with Newton iterations using a convex relaxation of the subset selection objective.
These methods require one matrix factorization of an n× n matrix per iteration, leading to a runtime cost per iteration of at least O(n^3). The entire procedure must be recomputed for each new choice of p.
Our proposed method only requires a single O(n^3) QR factorization and results in a hierarchical list of all n total pivots, with the first p pivots optimized for reconstruction in Ψ_r for any p>r.
Thus, additional sensors may be leveraged if available.
The QR factorization is implemented and optimized in most standard scientific computing packages and libraries, including Matlab, LAPACK, NumPy, among many others. In addition to software-enabled acceleration, QR runtime can be significantly reduced by terminating the procedure after the first p pivots are obtained. The operation can be accelerated further using randomized techniques, for instance, by the random selection of the next pivot <cit.> or by using random projections to select blocks of pivots <cit.>.
“Sidebar_6" shows why QR pivoting does not find the sparsest vector in an universal basis for compressed sensing.
The sparse sensor placement problem we have posed here is related to machine learning
concepts of variable and feature selection <cit.>.
Such sensor (feature) selection concepts generalize to data-driven classification. For image classification using linear discriminant analysis (LDA), sparse sensors may be selected that map via POD modes into the discriminating subspace <cit.>. Moreover, sparse classification within libraries of POD modes <cit.> can be improved by augmenting DEIM samples with a genetic algorithm <cit.> or adapting QR pivots for classification <cit.>. Sparse sensing has additionally been explored in signal processing for sampling and estimating signals over graphs <cit.>.
§.§ Relation to optimal experimental design
The matrix volume objective described above is closely related to D-optimal experiment design <cit.>; in fact, the two problems are identical when regarding the tailored basis Ψ_r as a set of n candidate experiments of a low-dimensional subspace. Classical experimental design selects the best p out of n candidate experiments for estimating r unknown parameters 𝐚∈ℝ^r. Each experiment, denoted 𝐟_i, produces one output y_i that may include zero-mean i.i.d. Gaussian noise ξ∼𝒩(0,η^2). Again, we wish to estimate the parameters 𝐚 from p experiment outputs 𝐲∈ℝ^p in the following linear system,
𝐲 = [𝐟_1; 𝐟_2; ⋮; 𝐟_p] 𝐚 + ξ = ∑_k=1^r [Ψ_γ_1,k; Ψ_γ_2,k; ⋮; Ψ_γ_p,k] a_k + ξ = CΨ_r 𝐚 + ξ,
which is equivalent to the state reconstruction formulation of gappy POD <cit.>. In Matlab notation we sometimes refer to CΨ_r as Ψ_r(γ,:).
Each possible experiment 𝐟_i may be regarded as a row of CΨ_r or of the tailored basis Ψ_r such that 𝐟_i = Ψ_r(γ_i,:).
Equivalently, each 𝐟_i is a weighted “measurement” of the lower-dimensional POD parameter space (not to be confused with the point measurement operation C𝐱). Note that when all experiments are selected, the output is simply the state vector since 𝐲 = Ψ_r𝐚 + ξ.
Given experiment selections indexed by γ, the estimation error covariance is given by
cov(a - â) = η^2 (Θ^TΘ)^-1 = η^2 ((CΨ_r)^T(CΨ_r))^-1.
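This expression follows from ordinary least squares; spelling out the intermediate step (our addition):
â = (Θ^TΘ)^-1Θ^T y = a + (Θ^TΘ)^-1Θ^T ξ,
so that
cov(a - â) = (Θ^TΘ)^-1Θ^T E[ξξ^T] Θ (Θ^TΘ)^-1 = η^2 (Θ^TΘ)^-1,
using E[ξξ^T] = η^2 I.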
D-optimal subset selection minimizes the error covariance by maximizing the matrix volume of M_γ = Θ^TΘ:
γ_⋆ = argmax_γ, |γ|=p log det ∑_i=1^p c_i^T c_i = argmax_γ, |γ|=p det (CΨ_r)^T(CΨ_r),
which is equivalent to (<ref>).
Similarly, A-optimal and E-optimal design criteria optimize the trace and spectral radius of Θ^TΘ, and are equivalent to (<ref>) and (<ref>), respectively.
The exact solutions of these optimization problems are intractable, and they are generally solved using heuristics. This is most commonly accomplished by solving the convex relaxation with a linear constraint on the sensor weights β,
β_⋆ = argmax_β∈ℝ^n log det ∑_i=1^n β_i c_i^T c_i,
subject to ∑_i=1^n β_i = p, 0 ≤β_i≤ 1.
The optimized sensors are obtained by selecting the largest sensor weights from β.
The iterative methods employed to solve this problem, convex optimization and semidefinite programs <cit.>, require matrix factorizations of n× n matrices in each iteration. Therefore they are computationally more expensive than the QR pivoting methods, which cost one matrix factorization in total. Greedy sampling methods such as EIM and QR are practical for sensor placement within a large number of candidate locations in fine spatial grids; hence, they are the methods of choice in reduced-order modeling <cit.>. The various optimization methods for data-driven sensor selection are summarized in Table <ref>.
§ COMPARISON OF METHODS
Sensor selection and signal reconstruction algorithms are implemented and compared on data from fluid dynamics, facial images, and ocean surface temperatures.
The examples span a wide range of complexity, exhibit both rapid and slow singular value decay, and come from both static and dynamic systems.
In each example, optimized sensors obtained in a tailored basis with QR pivots (see “Sec:QR”) outperform random measurements in a universal basis using compressed sensing (see “Sec:CS”) for signal reconstruction.
Moreover, for the same reconstruction performance, many fewer QR sensors are required, decreasing the cost associated with purchasing, placing, and maintaining sensors, as well as reducing the latency in computations.
Thus, for a well-scoped reconstruction task with sufficient training data, we advocate principled sensor selection rather than compressed sensing.
For example, the QR-based sampling method is demonstrated with yet another tailored basis commonly encountered in scientific computing – the Vandermonde matrix of polynomials. “Sidebar_rev” compares polynomial interpolation with QR pivots for the ill-conditioned set of equispaced points on an interval.
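A minimal Matlab sketch of this polynomial example follows; the grid size, polynomial degree, and variable names are our illustrative choices.

% Choosing p interpolation points for polynomials of degree p-1 by pivoted QR
% on a Vandermonde basis, instead of ill-conditioned equispaced points.
n = 1000;  p = 12;
xgrid = linspace(-1, 1, n)';        % fine grid of candidate points
V = xgrid .^ (0:p-1);               % n-by-p Vandermonde matrix
[~, ~, piv] = qr(V', 'vector');     % pivoted QR selects well-conditioned rows
xpts = sort(xgrid(piv(1:p)));       % selected nodes cluster toward the endpoints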
When the structure of the underlying signal is unknown, then compressed sensing provides more flexibility with an associated increase in the number of sensors.
§.§ Flow past a cylinder
Fluid flow past a stationary cylinder is a canonical example in fluid dynamics that is high-dimensional yet reveals strongly periodic, low-rank phenomena.
It is included here as an ideal system for reduction via POD and hence, minimal sensor placement.
The data is generated by numerical simulation of the linearized Navier-Stokes equations using the immersed boundary projection method (IBPM) based on a fast multi-domain method <cit.>.
The computational domain consists of four nested grids so that the finest grid covers a domain of 9× 4 cylinder diameters and the largest grid covers a domain of 72× 32. Each grid has resolution 450× 200, and the simulation consists of 151 timesteps with δ t=0.02. The Reynolds number is 100, and the flow is characterized by laminar periodic vortex shedding <cit.>.
Vorticity field snapshots are shown in Fig. <ref>.
In the cylinder flow and sea surface temperature examples, each snapshot _i=(t_i) is a spatial measurement of the system at a given time t_i. Thus POD coefficients a_k(t_i) are time dependent, and _k(x) are spatial eigenmodes. The first 100 cylinder flow snapshots are used to train POD modes and QR sensors, and reconstruction error bars are plotted over 51 remaining validation snapshots in Figures <ref> and <ref>.
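The train/validate protocol can be summarized by the following Matlab sketch; X (the n-by-151 snapshot matrix) and the error metric are our assumed stand-ins for the quantities plotted.

% POD features and QR sensors trained on snapshots 1:100; validated on 101:151.
[U, ~, ~] = svd(X(:, 1:100), 'econ');
r = 42;  Psi_r = U(:, 1:r);
[~, ~, piv] = qr(Psi_r', 'vector');  sensors = piv(1:r);
err = zeros(51, 1);
for j = 1:51
    x = X(:, 100 + j);
    a = Psi_r(sensors, :) \ x(sensors);       % least-squares fit to sensor data
    err(j) = norm(Psi_r*a - x) / norm(x);     % normalized reconstruction error
end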
The POD modes of this data reflect oscillatory dynamics characterized by periodic vortex-shedding.
The data is low-rank, and the singular values decay rapidly, as shown in Fig. <ref>.
The singular values occur in pairs corresponding to harmonics of the dominant vortex shedding frequency.
Most of the spectral energy in the dataset is captured by the first 42 POD modes. Thus the intrinsic rank of the dataset in POD feature space is r=42, and the minimal number of QR pivots is p=42. This yields a dramatic reduction from the initial state dimension of n≈ 90000 spatial gridpoints. Here, QR pivoting of _r^T (with O(nr^2) operations) bypasses expensive O(n^3) factorizations of large n× n matrices with alternate sampling or convex optimization methods.
Reconstruction from QR sensors (Fig. <ref>) successfully captures modal content with only p=r sensors when fitting to the first 42 POD modes. The first 42 POD modes characterize nearly 100% of the system's energy, the normalized sum of the singular values.
Using modes beyond r>42 results in overfitting, and QR pivoting selects sensors based on uninformative modes.
Thus, accuracy stops improving beyond r=42 target modes, whereupon sensor data amplifies noise. However, these tailored sensors perform significantly better than random sensors due to the favorable conditioning properties of QR interpolation points.
“Sidebar_7" illustrates the ability of optimized sensing to significantly reduce the number of sensors required for a given performance.
§.§ Noise comparison study
Measurements of real-world data are often corrupted by sensor noise.
The POD-based sensor selection criteria, as well as A,D and E-optimal experimental design criteria, are optimal for estimation with measurements corrupted by zero-mean Gaussian white noise.
We empirically demonstrate this on the cylinder flow data with increasing additive white noise.
Here we assume sensor noise only in the test measurements and not in the training data or features, see Eqn. (<ref>).
The POD modes and the different sensor sets are trained on the first 100 snapshots, and these different sensor sets are used to reconstruct the remaining 50 validation snapshots, which were not used for training features.
The reconstruction accuracy of the various sampling methods are compared for increasing sensor noise in Fig. <ref>, alongside the full-state POD approximation for illustration.
Here we truncate the POD expansion to r=40 eigenmodes and compare the p=r reconstruction computed with the discrete empirical interpolation method (DEIM) <cit.> against the QR pivoting reconstruction (Q-DEIM, p=r).
The DEIM greedy strategy places sensors at extrema of the residual computed from approximating the k-th mode with the previous k-1 sensors and eigenmodes.
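For reference, this greedy strategy admits a compact Matlab sketch (a standard formulation of DEIM; variable names are ours, carried over from the sketches above):

% DEIM: place the k-th sensor at the extremum of the residual of mode k
% approximated from the previous k-1 sensors and modes.
[~, idx] = max(abs(Psi_r(:, 1)));  sensors = idx;
for k = 2:r
    c = Psi_r(sensors, 1:k-1) \ Psi_r(sensors, k);   % interpolation coefficients
    res = Psi_r(:, k) - Psi_r(:, 1:k-1) * c;         % residual of mode k
    [~, idx] = max(abs(res));
    sensors(k) = idx;                                % new sensor at the extremum
end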
It can be seen that QR reconstruction is slightly more accurate than that of DEIM, which is the leading sampling method currently in use for reduced-order modeling <cit.>.
QR pivoting is competitive in both speed and accuracy.
The speed of QR pivoting is enabled by several implementation accelerations; for example, the column norms in line 4 of Algorithm 2 are computed once and then cheaply downdated after each pivot, rather than recomputed from scratch.
Unlike QR pivoting, DEIM and related methods add successive sensors per iteration by similarly optimizing some metric over all locations.
However, this metric (e.g., the approximation residual or the largest singular value) is recomputed at every iteration.
The QR factorization is significantly faster than convex optimization methods used in optimal design of experiments, which typically require one matrix factorization per iteration.
In fact, convex optimization methods that relax the subset selection to weighted sensor placement provide no bounds for deviation from the global optimum, partly because rounding procedures are employed on the weights to decide the final selection. However, reconstruction error bounds for the globally optimal selection are known for DEIM <cit.>, Q-DEIM <cit.> and related POD sampling methods <cit.>.
Furthermore, QR pivoting can achieve significant accuracy gains over DEIM by oversampling – when p=2r QR reconstruction error is 4x smaller than that of DEIM. It should be noted that while DEIM has not yet been extended to the p>r case, oversampling methods exist for other POD-sampling methods <cit.>.
However, the iterative procedures involved in the latter are typically more expensive.
Recent accelerated variants of greedy principled sampling <cit.> may permit oversampling for large n, when oversampled QR storage requirements would be excessive.
In the cylinder flow case, we bypass this storage requirement by uniformly downsampling the fine grid by a factor of 5 in each spatial direction, thus reducing the number of candidate sensor locations to n=3600 instead of n=89351.
§.§ Extended Yale B eigenfaces
Image processing and computer vision commonly involve high-resolution data with dimension determined by the number of pixels. Cameras and recording devices capture massive images with rapidly increasing pixel and temporal resolution. However, most pixel information in an image can be discarded for subject identification and automated decision-making tasks.
The extended Yale B face database <cit.> is a canonical dataset used for facial recognition, and it is an ideal test bed for recovering low-rank structure from high-dimensional pixel space.
The data consists of 64 aligned facial images each of 38 stationary individuals in different lighting conditions.
We validate our sensor (pixel) selection strategy by recovering missing pixel data in a validation image using POD modes or eigenfaces trained on 32 randomly chosen images of each individual.
Normalized singular values are shown in Fig. <ref>, and the optimal singular value truncation threshold <cit.> occurs at r=166, indicating the intrinsic rank of the training dataset.
Indeed, selected eigenfaces are also shown to reveal no meaningful facial structure beyond eigenface 166.
QR pixel selection is performed on the first 50 and first 166 eigenfaces, and selected pixels shown in Fig. <ref> cluster around important facial features – eyes, nose and mouth.
Image reconstructions in Fig. <ref> are estimated from the same number of selected pixels as the number of modes used for reconstruction. For instance, the 50 eigenface reconstruction is uniquely constructed from 50 selected pixels out of 1024 total – 5% of available pixels. Even at lower pixel selection rates, least squares reconstruction from QR selected pixels is more successful at filling in missing data and recovering the subject's face.
For comparison, reconstruction of the same face from random pixels using compressed sensing is shown in Fig. <ref>. Compressed sensing in a universal Fourier basis demonstrates progressively improved global feature recovery. However, more than triple the pixels are required for the same quality of reconstruction as in QR selection. Moreover, the convex ℓ_1 optimization procedure is extremely expensive compared to the single ℓ_2 regression on subsampled eigenfaces. Therefore data-driven feature selection and structured measurement selection are of significant computational and predictive benefit, and occur at the small training expense of one SVD and pivoted QR operation.
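For completeness, a self-contained Matlab sketch of a compressed-sensing baseline is given below: a generic one-dimensional basis pursuit posed as a linear program, not the exact solver used for the face images; it assumes the Signal Processing Toolbox (idct) and Optimization Toolbox (linprog).

% Basis pursuit: min ||s||_1 subject to Theta*s = y, as an LP with
% variables z = [s; t] and constraints |s_i| <= t_i.
n = 256;  p = 60;
Phi = idct(eye(n));                              % universal basis: inverse DCT
s_true = zeros(n, 1);  s_true(randperm(n, 8)) = randn(8, 1);
x = Phi * s_true;                                % signal sparse in DCT coefficients
sensors = randperm(n, p);  y = x(sensors);       % random point measurements
Theta = Phi(sensors, :);
f   = [zeros(n, 1); ones(n, 1)];                 % minimize sum(t)
A   = [eye(n), -eye(n); -eye(n), -eye(n)];  b = zeros(2*n, 1);
Aeq = [Theta, zeros(p, n)];  beq = y;
z = linprog(f, A, b, Aeq, beq);
x_hat = Phi * z(1:n);                            % recovered signal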
The convergence of reconstruction with sensors using QR pivoting is shown in Fig. <ref>. More sensors than modes are used in reconstruction for this example. The expected error dropoff is observed with increasing number of modes and sensors, although the dropoff is slower than for the cylinder flow (Fig. <ref>) due to slower decay of singular values.
§.§ Sea surface temperature (SST)
Next we consider the NOAA_OISST_V2 global ocean surface temperature dataset spanning the duration 1990–2016. The data is publicly available online <cit.>.
Unlike eigenfaces, this dataset is a time series, for which a snapshot is recorded every week. Sensor selection must then track energetic temporal signatures.
Sensors and features are trained on the first 16 years (832 snapshots), and a test snapshot is selected from the excluded validation set. The singular values are shown in Fig. <ref>.
As with the eigenfaces, localized convective phenomena make energetic contributions to otherwise globally uninformative eigenssts. This is best seen in the POD snapshot projections, in which the 100-eigensst projection already sufficiently recovers the dynamics, while increasing the number of eigenssts in the projection further refines convective phenomena. These lower-energy modes containing convective effects contribute to some degree of overfitting in ℓ_2 reconstruction (Fig. <ref>). The most interesting of these is the El Niño southern oscillation (ENSO) feature, which is clearly identified from QR selected sensors. El Niño is defined as any temperature increase above a specified threshold lasting longer than six months in this highlighted region of the South Pacific. It has been implicated in global weather patterns and climate change.
Remark: Modal separation of intermittent phenomena such as the El Niño is difficult from a time-invariant POD analysis. Separation of isolated, low-energy temporal events cannot be done from a variance-characterizing decomposition such as the POD – reordering the snapshots will yield the same dominant modes. On the other hand, tensor decompositions and temporal-frequency analyses such as multiresolution dynamic mode decomposition have succeeded at identifying El Niño where POD has failed. Sensor selection using non-normal modes arising from such decompositions remains an open problem and the focus of ongoing work.
§ DISCUSSION
The efficient sensing of complex systems is an important challenge across the physical, biological, social, and engineering sciences, with significant implications for nearly all downstream tasks.
In this work, we have demonstrated the practical implementation of several sparse sensing algorithms on a number of relevant real-world examples.
As discussed throughout, there is no all-purpose strategy for the sparse sensing of a high-dimensional system.
Instead, the choice depends on key factors such as the amount of training data available, the scope and focus of the desired estimation task, cost constraints on the sensors themselves, and the required latency of computations on sensor data.
Thus, we partition the sparse sensing algorithms into two fundamental categories: 1) optimized sensing in a data-driven tailored basis, and 2) random sensing in a universal basis.
A critical comparison of the two approaches highlights a number of relative strengths and weaknesses.
The first strategy results in a highly optimized set of sensors that are suitable for tightly scoped reconstruction problems where sufficient training data is available.
The second strategy requires more sensors for accurate reconstruction but also makes fewer assumptions about the underlying signal, making it more general.
We emphasize that optimized sensing in a tailored basis typically provides more accurate signal reconstruction than random measurements, facilitating a reduction in the number of sensors by about a factor of two.
Further, sensor selection and signal reconstruction in the tailored basis is computationally efficient and simple to implement, while compressed sensing generally requires a costly iterative algorithm.
In problems where sensors are expensive, or when low-latency decisions are required, the reduction in the number of sensors and the associated speed-up of optimized sensing can be significant.
Thus, when the reconstruction task is well-scoped and a sufficient volume of training data is available, we advocate principled sensor selection rather than compressed sensing.
In addition, pivoted QR sensors may be used in conjunction with other tailored bases (polynomials, radial basis functions) when signal structure is known. Since these are not data-driven basis functions, QR optimized samples can generalize to different dynamical regimes or flow geometries.
§.§ Potential applied impact
Many fields in science and engineering rely on sensing and imaging.
Moreover, any application involving feedback control for stabilization, performance enhancement, or disturbance rejection relies critically on the choice of sensors.
We may roughly categorize these sensor-critical problems into two broad categories: 1) problems where sensors are expensive and few (ocean sampling, disease monitoring, espionage, etc.), and 2) problems where sensors are cheap and abundant (cameras, high-performance computation, etc.).
In the first category, where sensors come at a high cost, the need for optimized sparse sensors is clear. However, it is not always obvious how to collect the training data required to optimize these sensors. In some applications, high-fidelity simulations may provide insight into coherent structures, whereas in other cases a large-scale survey may be required. It has recently been shown that it may be possible to optimize sensors based on heavily subsampled data, as long as coherent structures are non-localized <cit.>.
In the second category, where sensors are readily available, it may still be advantageous to identify key sensors for fast control decisions.
For example, in mobile applications, such as vision-based control of a quad-rotor or underwater monitoring of an energy harvesting site using an autonomous underwater vehicle, computational and battery resources may be limited.
Restricting high-dimensional measurements to a small subset of key pixels speeds up computation and reduces power consumption.
Similar performance enhancements are already exploited in high-performance computing, where expensive function evaluations are avoided by sampling at key interpolation points <cit.>.
Finally, it may also be the case that if measurements are corrupted by noise, reconstruction may improve if uninformative sensors are selectively ignored.
§ REPRODUCIBLE RESEARCH
A Matlab code supplement is available <cit.> for reproducing results in this manuscript, including:
* Datasets in Matlab file formats, or links to data that are publicly available online;
* Matlab scripts to recreate figures of results.
§ ACKNOWLEDGMENTS
The authors thank Eurika Kaiser, Joshua Proctor, Serkan Gugercin, Bernd Noack, Tom Hogan, Joel Tropp, and Aleksandr Aravkin for valuable discussions.
SLB and JNK acknowledge support from the Defense Advanced Research Projects Agency (DARPA HR0011-16-C-0016).
SLB and KM acknowledge support from the Boeing Corporation (SSOW-BRT-W0714-0004).
BWB and SLB acknowledge support from the Air Force Research Labs (FA8651-16-1-0003).
SLB acknowledges support from the Air Force Office of Scientific Research (AFOSR FA9550-16-1-0650). JNK acknowledges support from the Air Force Office of Scientific Research (AFOSR FA9550-15-1-0385).
| null | null | null | null | The efficient sensing of complex systems is an important challenge across the physical, biological, social, and engineering sciences, with significant implications for nearly all downstream tasks.
In this work, we have demonstrated the practical implementation of several sparse sensing algorithms on a number of relevant real-world examples.
As discussed throughout, there is no all-purpose strategy for the sparse sensing of a high-dimensional system.
Instead, the choice depends on key factors such as the amount of training data available, the scope and focus of the desired estimation task, cost constraints on the sensors themselves, and the required latency of computations on sensor data.
Thus, we partition the sparse sensing algorithms into two fundamental categories: 1) optimized sensing in a data-driven tailored basis, and 2) random sensing in a universal basis.
A critical comparison of the two approaches highlights a number of relative strengths and weaknesses.
The first strategy results in a highly optimized set of sensors that are suitable for tightly scoped reconstruction problems where sufficient training data is available.
The second strategy requires more sensors for accurate reconstruction but also makes fewer assumptions about the underlying signal, making it more general.
We emphasize that optimized sensing in a tailored basis typically provides more accurate signal reconstruction than random measurements, facilitating a reduction in the number of sensors by about a factor of two.
Further, sensor selection and signal reconstruction in the tailored basis is computationally efficient and simple to implement, while compressed sensing generally requires a costly iterative algorithm.
In problems where sensors are expensive, or when low-latency decisions are required, the reduction in the number of sensors and the associated speed-up of optimized sensing can be significant.
Thus, when the reconstruction task is well-scoped and a sufficient volume of training data is available, we advocate principled sensor selection rather than compressed sensing.
In addition, pivoted QR sensors may be used in conjunction with other tailored bases (polynomials, radial basis functions) when signal structure is known. Since these are not data-driven basis functions, QR optimized samples can generalize to different dynamical regimes or flow geometries.
§.§ Potential applied impact
Many fields in science and engineering rely on sensing and imaging.
Moreover, any application involving feedback control for stabilization, performance enhancement, or disturbance rejection relies critically on the choice of sensors.
We may roughly categorize these sensor-critical problems into two broad categories: 1) problems where sensors are expensive and few (ocean sampling, disease monitoring, espionage, etc.), and 2) problems where sensors are cheap and abundant (cameras, high-performance computation, etc.).
In the first category, where sensors come at a high cost, the need for optimized sparse sensors is clear. However, it is not always obvious how to collect the training data required to optimize these sensors. In some applications, high-fidelity simulations may provide insight into coherent structures, whereas in other cases a large-scale survey may be required. It has recently been shown that it may be possible to optimize sensors based on heavily subsampled data, as long as coherent structures are non-localized <cit.>.
In the second category, where sensors are readily available, it may still be advantageous to identify key sensors for fast control decisions.
For example, in mobile applications, such as vision-based control of a quad-rotor or underwater monitoring of an energy harvesting site using an autonomous underwater vehicle, computational and battery resources may be limited.
Restricting high-dimensional measurements to a small subset of key pixels speeds up computation and reduces power consumption.
Similar performance enhancements are already exploited in high-performance computing, where expensive function evaluations are avoided by sampling at key interpolation points <cit.>.
Finally, it may also be the case that if measurements are corrupted by noise, reconstruction may improve if uninformative sensors are selectively ignored. | null |
http://arxiv.org/abs/1701.07573v1 | 20170126042414 | Vertical Advection Effects on Hyper-accretion Disks and Potential Link between Gamma-ray Bursts and Kilonovae | [
"Tuan Yi",
"Wei-Min Gu",
"Feng Yuan",
"Tong Liu",
"Hui-Jun Mu"
] | astro-ph.HE | [
"astro-ph.HE"
] |
1Department of Astronomy, Xiamen University, Xiamen,
Fujian 361005, China; [email protected]
2Shanghai Astronomical Observatory,
Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, China
3SHAO-XMU Joint Center for Astrophysics,
Xiamen University, Xiamen, Fujian 361005, China
Recent simulations on super-Eddington accretion flows have shown that,
apart from the diffusion process, the vertical advection based on
magnetic buoyancy can be a more efficient
process to release the trapped photons in the optically thick disk.
As a consequence, the radiative luminosity from the accretion disk
can be far beyond the Eddington value.
Following this spirit, we revisit the structure and radiation
of hyper-accretion disks with mass accretion rates in the range
10^-3∼ 10 M_⊙ s^-1.
Our results show that, due to the strong cooling
through the vertical advection, the disk temperature becomes lower
than that in the classic model without the vertical advection process,
and therefore the neutrino luminosity from the disk is lower.
On the other hand, the gamma-ray photons released
through the vertical advection can be extremely super-Eddington.
We argue that the large amount of escaped gamma-ray photons
may have more significant contribution to the primordial fireball
than the neutrino annihilation,
and may hint at a link between gamma-ray bursts and kilonovae in the
black hole hyper-accretion scenario.
§ INTRODUCTION
Gamma-ray bursts (GRBs) are the most energetic phenomena occurring
at cosmological distances. The luminosity of GRBs can reach
the order of 10^50 erg s^-1. There are various models
to explain the energy source of GRBs.
The popular model for the generation of short GRBs is the mergers of
compact binaries <cit.>,
either black hole-neutron star binaries or double neutron stars.
And it is commonly believed that the collapse of a massive star
can account for long GRBs <cit.>.
In both cases, a dense accretion disk is expected to form,
with extremely high temperature and accretion rates.
The neutrino-antineutrino annihilation process is a possible mechanism
to provide energy for the bursts.
The other mechanism based on hyper-accretion disks around black holes
is the well-known Blandford-Znajek (BZ) process <cit.>.
Alternatively, a rapidly rotating neutron star with strong magnetic fields
can also be responsible for GRBs <cit.>.
In this paper we will focus on the hyper-accreting disks with
extremely high accretion rates (10^-3≲Ṁ≲ 10 M_⊙ s^-1),
where the neutrino radiation may be a dominant cooling mechanism, and therefore
such flows are usually named as neutrino-dominated accretion flows (NDAFs).
<cit.> pioneered the detailed study of NDAFs,
and proposed that the neutrino annihilation of a hyper-accreting black hole
system can explain GRBs with energies up to 10^52 erg.
The hyper-accreting black hole system typically has accretion rates around
0.01 ∼ 10 M_⊙ s^-1.
The NDAFs have been widely studied on the radial structure and neutrino
radiation
<cit.>, vertical structure and
convection <cit.>,
and time-dependent variation <cit.>.
The density can reach 10^8-10^12 g cm^-3 in the inner region
of the disk, and the temperature can be up to 10^10-10^11 K.
Such disks can be extremely optically thick,
leading to the trapping of a large portion of the photons,
which are carried along the radial direction and are eventually absorbed by the
central black hole.
Thus, the neutrino cooling is the dominant cooling process in the inner part.
Recently, whether the neutrino annihilation process can work as the central
engine for GRBs has been investigated in more detail
<cit.>. Such works revealed that
the neutrino annihilation process can account for most GRBs. For some
long-duration GRBs, the central engine is more likely to be the BZ
mechanism rather than the neutrino annihilation since the former is
much more efficient. On the other hand, if the X-ray flares after the prompt
gamma-ray emission are regarded as the reactivity of the central engine,
the neutrino annihilation mechanism may encounter difficulty in interpreting
the X-ray flares. Even including a possible magnetic coupling <cit.>
between the inner disk and the central black hole, <cit.> showed that
the annihilation mechanism can work for the X-ray flares with duration
≲ 100 s.
However, the annihilation mechanism is unlikely to be
responsible for those long flares with duration ≳ 1000 s,
even though the role of magnetic coupling is included.
More recently, <cit.> investigated the central engine of the
extremely late-time X-ray flares with peak time larger than 10^4 s,
and suggested that neither the neutrino annihilation nor the BZ process
seems to work well. Instead, a fast rotating neutron star with strong bipolar
magnetic fields may account for such flares.
Recent simulations have made significant progress on the
super-Eddington accretion process, including the presence of strong outflows
<cit.> and the radiation-powered baryonic jet
<cit.>. In addition, some simulations <cit.>
show the anisotropic feature of radiation. More importantly,
the simulation of <cit.> revealed a new energy transport
mechanism in addition to the diffusion, which is named as the vertical
advection. Their simulation results show that, for the super-Eddington
accretion rate Ṁ = 220L_ Edd/c^2, the radiative efficiency
is around 4.5%, which is comparable to the value in a standard thin disk model.
The physical reason is that, a large fraction of photons can escape from
the disk before being advected into the black hole, through the vertical
advection process based on the magnetic buoyancy, which dominates
over the photon diffusion process.
In this work, following the spirit of the vertical advection,
we incorporate the vertical advection process into NDAFs and revisit
the structure and radiation of hyper-accretion disks around
stellar-mass black holes.
The remainder is organized as follows. The basic physics and equations
for our model are described in Section <ref>.
Numerical results and analyses are presented in Section <ref>.
Conclusions and discussion are made in Section <ref>.
§ BASIC EQUATIONS
In this section we describe the basic equations of our model.
We consider a steady state, axisymmetric hyper-accretion disk
around a stellar-mass black hole.
The well-known Paczyński-Wiita potential <cit.>
is adopted, i.e., Ψ = - GM/(R-R_g),
where M is the black hole mass
and R_g = 2GM/c^2 is the Schwarzschild radius.
The Keplerian angular velocity can be expressed as
Ω_K = (GM/R)^1/2/(R-R_g).
We use the usual convention of quantities to describe the accretion disk:
the half-thickness of the disk is H = c_s/Ω_K,
where c_s = (P/ρ)^1/2 is the isothermal sound speed,
with P being the pressure, and ρ the density.
We adopt the standard Shakura-Sunyaev prescription for
the kinematic viscosity coefficient, i.e., ν = α c_s H.
The basic equations that describe the accretion disk
are the continuity, azimuthal momentum, energy equation,
and the equation of state.
The continuity equation is
Ṁ = -4 πρ H R v_R,
where v_R is the radial velocity.
With the Keplerian rotation assumption Ω = Ω_K,
the azimuthal momentum equation can be simplified as <cit.>:
v_R = -α c_s (H/R) f^-1 g,
where f = 1 - j/(Ω_K R^2), g = - dlnΩ_K/dlnR,
and j represents the specific angular momentum per unit mass
accreted by the black hole.
The equation of state takes the form <cit.>
P = (ρ k_B T/m_p)(1+3X_nuc)/4
+ (11/12) a T^4
+ (2π h c/3)[(3/8π m_p)(ρ/μ_e)]^4/3 + u_ν/3,
where the four terms on the right-hand side are the gas pressure,
radiation pressure of photons, degeneracy pressure of electrons,
and radiation pressure of neutrinos, respectively.
The energy equation is written as
Q_vis = Q_adv + Q_z + Q_ν.
The above equation shows the balance between the viscous heating
and the cooling by radial advection, vertical advection, and neutrino
radiation. The viscous heating rate and the advective cooling
rate for a half-disk above or below the equator are expressed as
<cit.>
Q_vis = (1/4π) ṀΩ_K^2 f g,
Q_adv = - ξ v_R (H/R) T
( (11/3) a T^3 + (3/2)(ρ k_B/m_p)(1+3X_nuc)/4 + (4/3) u_ν/T ),
where T is the temperature, s is the specific entropy,
u_ν is the neutrino energy density, and ξ is taken to be 1.
X_ nuc is the mass fraction of free nucleons <cit.>:
X_nuc = min[1, 20.13 ρ_10^-3/4 T_11^9/8 exp(-0.61/T_11)],
where ρ_10= ρ /10^10 g cm^-3 and
T_11 = T / 10^11 K.
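As a rough worked example (our own numbers, for orientation): at ρ = 10^10 g cm^-3 (ρ_10 = 1),
X_nuc ≈ 20.13 × (0.1)^9/8 e^-6.1 ≈ 3×10^-3 for T_11 = 0.1,
X_nuc ≈ 20.13 × (0.5)^9/8 e^-1.22 ≈ 2.7 → X_nuc = 1 for T_11 = 0.5,
so at this density the gas passes from mostly bound nuclei to fully free nucleons between T ∼ 10^10 K and T ∼ 5×10^10 K.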
The quantity Q_ν is the cooling rate due to the neutrino radiation.
We adopt a bridging formula for calculating Q_ν as shown in <cit.> and <cit.>.
The main difference from previous works is that
the vertical advection term is taken into account in our work,
which is written as
Q_z = V_z (u_ph + u_ν + u_gas),
where u_ ph is the energy density of photons,
u_ν is the energy density of neutrinos,
u_ gas is the energy density of the gas.
In our calculation the third term V_z u_gas
is dropped, since the gas escaping through
the magnetic buoyancy is negligible.
The quantity V_z is the averaged velocity of the vertical
advection process, which can be simply written as
V_z = λ c_s,
where λ is a dimensionless parameter.
By comparing the typical vertical velocity V_z
in the simulation results
<cit.> and the theoretical estimate of sound speed,
we take λ = 0.1 for our numerical calculations.
The vertical advection term describes the released photons and neutrinos
due to the magnetic buoyancy, which can dominate over
normal diffusion process.
Equations (<ref>)-(<ref>) can be solved if the parameters
M, Ṁ, α, and j are given.
We consider a stellar-mass black hole with M = 3 M_⊙.
The viscous parameter α = 0.02 is taken from the simulation
results <cit.>.
The specific angular momentum j = 1.83 c R_g is just a little less than
the Keplerian angular momentum at the marginally stable orbit, i.e.,
l_K|_3R_g = 1.837 c R_g.
Our study focuses on the solutions in the range R = 3 R_g∼ 10^3 R_g.
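For orientation, combining the continuity and azimuthal momentum equations above to eliminate v_R (a step we spell out here) gives, at each radius,
Ṁ = 4πρα c_s H^2 f^-1 g  ⟹  ρ = Ṁ f/(4πα Ω_K H^3 g),
using c_s = Ω_K H; this algebraic relation, together with the equation of state and the energy balance, closes the system for (ρ, T, H).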
§ NUMERICAL RESULTS
In this section we present our numerical results and the analyses of
the physics behind these results.
The calculation reveals that the vertical advection process has essential
effects on the structure and radiation of the disk.
First, the radial profiles of mass density and temperature are investigated
for two cases, i.e., with and without the vertical advection process.
The radial profiles of density ρ are shown in Figure <ref>,
where the solid (dashed) lines correspond to the results with (without)
the vertical advection process. The five typical mass accretion rates
are Ṁ = 10^-3, 10^-2, 0.1, 1, and 10 M_⊙ s^-1, which are
shown by different colors.
It is seen from Figure <ref> that, for a certain radius R and a given
Ṁ, the density ρ in the disk with vertical advection process
is significantly higher than that without such a process, particularly for
low accretion rates such as Ṁ = 10^-3.
The radial profiles of temperature T are shown in Figure <ref>, where
the explanation of different color and types of lines is the same as in
Figure <ref>.
It is seen that the temperature of the disk with the vertical advection
is generally lower than that without the advection.
The physical understanding of the above difference in density and temperature
is as follows. Since a large fraction of the trapped photons can be released
through the vertical advection process, the radiative cooling
through such a process is efficient. As a consequence, the temperature T
together with the total pressure P, the vertical height H/R, and the radial
velocity v_R will decrease, whereas the mass density ρ will increase.
For relatively low accretion rates such as Ṁ = 10^-3,
the radiative cooling of neutrinos is quite inefficient, so the
effects of vertical advection may be more significant, as indicated by
Equation (<ref>).
Figure <ref> shows the radial profiles of vertical scale height of the disk.
It is seen that the relative height H/R of the disk with vertical advection
is significantly thinner than that without the vertical advection,
particularly for low accretion rates.
The physical reason is mentioned above, which is
related to the decrease of temperature and pressure.
The decrease of vertical height also implies the decrease of sound speed
and therefore the decrease of radial velocity, as inferred by
Equation (<ref>).
For a typical mass accretion rate Ṁ = 0.1,
Figure <ref> shows the radial profiles of energy components,
where the three dimensionless factors are
f_z = Q_z/Q_ vis (red line),
f_ adv = Q_ adv/Q_ vis (green line),
and f_ν = Q_ν/Q_ vis (blue line).
It is seen that in the innermost region (R ≲ 4 R_g) the radial advection
is the dominant cooling mechanism.
For the outer part of the disk (R ≳ 20 R_g) the energy transport
through the vertical advection becomes dominant.
For the region 4 R_g≲ R ≲ 20 R_g the neutrino cooling may dominate over
the other two mechanisms.
The red line implies that the photon radiation through the vertical
advection process can reach a large fraction of the total released
gravitational energy, and therefore the photon luminosity can be extremely
super-Eddington.
We will investigate the corresponding photon luminosity in Figure <ref>.
Our main focus is the energy transport through vertical advection process.
Figure <ref> shows the radial profiles of the dimensionless cooling rate
due to the vertical advection, i.e., f_z = Q_z/Q_ vis,
where five typical accretion rates are adopted.
It is seen that f_z generally decreases with
increasing Ṁ. The physical reason is that the neutrino cooling
is less important for relative low accretion rates.
For the highest accretion rate with Ṁ = 10, the red line shows that
there exists a big bump in the inner region (R ≲ 20 R_g).
The physics for this bump is
that the neutrino cooling is again less significant since the inner disk
is optically thick to the neutrinos.
Finally, we calculate the neutrino luminosity and the photon luminosity
of the accretion disk.
The neutrino luminosity is derived by the integration of the whole disk:
L_ν = ∫_R_in^R_out 4π R (Q_ν + V_z u_ν) dR,
where L_ν includes the contributions from the direct neutrino radiation
Q_ν and the vertical advection process on neutrinos
V_z u_ν.
Actually, the latter is negligible except for extremely high accretion rates
Ṁ≫ 1.
The variation of neutrino luminosity with mass accretion rates is shown in
Figure <ref>, where the red solid line corresponds to the neutrino
luminosity with the vertical advection process, whereas the red dashed line
corresponds to the neutrino luminosity without the process. It is seen
that the red solid line is under the red dashed line, which means that
the neutrino luminosity with the vertical advection is lower. The physical
reason is as follows. The neutrino radiation is more sensitive to
the temperature than the density. As shown by Figure <ref>,
the disk temperature is lower for the case with the vertical advection.
Thus, the corresponding Q_ν and L_ν in our cases are lower
than those without the vertical advection.
The variation of photon luminosity L_ ph is shown by the blue solid
line in Figure <ref>, where L_ ph is calculated by
L_ph = ∫_R_in^R_out 4π R V_z u_ph dR.
It is seen that L_ph is in the range 10^50∼ 10^53 erg s^-1 for
0.001 ⩽Ṁ⩽ 10 M_⊙ s^-1, which is more than ten orders of
magnitude higher than the Eddington luminosity. The released photons are mainly
in the gamma-ray band according to the thermal radiation of the inner disk
with 10^10 K < T < 10^11 K, as shown by Figure <ref>.
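A quick order-of-magnitude check (ours): the Wien peak of a blackbody at temperature T lies at
E_peak ≈ 2.82 k_B T ≈ 2.4 (T/10^10 K) MeV,
so for 10^10 K < T < 10^11 K the thermal emission peaks at ∼2-24 MeV, i.e., in the gamma-ray band.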
The huge amount of gamma-ray photons escape from the optically thick disk
through the vertical advection process, which is much more efficient than
the diffusion process. Such an extremely high photon radiation should have
observational effects. We will discuss this in the next section.
§ CONCLUSIONS AND DISCUSSION
In this work, we have studied the structure and radiation of hyper-accretion
flows around stellar-mass black holes by taking into account the role of
the vertical advection process. By comparing our results with the
classic NDAF solutions, we have shown that
the density is higher, the temperature is lower, and the disk is
vertically thinner in our solutions.
The physical reason is that a large fraction of photons can escape
from the optically thick disk through the vertical advection process.
As a consequence, the neutrino luminosity from the disk is
decreased. Thus, even without calculating the neutrino annihilation
luminosity, we can conclude that the annihilation mechanism cannot be
responsible for the long-duration GRBs and X-ray flares.
We would point out that outflows are not taken into consideration in the
present work. However,
outflows are believed to generally exist in accretion flows.
Recent MHD simulations have shown that outflows
exist both in optically thin flows <cit.>
and optically thick flows <cit.>.
From the observational view,
<cit.> reveals that more than 99% of the accreted mass escapes
from the accretion flow through outflows in our Galactic center.
Based on the energy balance argument, <cit.> shows that
the outflow is inevitable for the accretion flows where the radiative cooling
is far below the viscous heating, regardless of whether the flow is optically thin
or thick.
Thus, outflows may work as another process to help the trapped photons
to escape <cit.>, and will also have effects on the structure
and neutrino radiation of the accretion flow.
Such a mechanism is not included in the current work.
Our calculations are based on the relation
V_z = λ c_s with λ = 0.1.
As mentioned in Section 2, the value for λ is adopted following
the simulation results for Ṁ = 220 L_ Edd/c^2 <cit.>.
In our case, the mass accretion rate is higher by more than ten
orders of magnitude.
Then, a key question is whether the vertical advection due to
the magnetic buoyancy can also work for such hyper-accretion systems.
In our opinion, the radiation pressure is always dominant up to
Ṁ≲ 0.1 M_⊙ s^-1, or for the outer part of disks with even higher accretion rates.
Thus, such a mechanism seems to be an efficient process.
On the other hand, even for the case that the parameter λ is
significantly smaller than 0.1 in the hyper-accretion case, such as
several orders of magnitude smaller, the released gamma-ray photons may
still be extremely super-Eddington and the potential application is significant.
In this work we have assumed α = 0.02 according to the simulation
results of <cit.>. However, other simulations may provide
different values for α. As shown by <cit.>, such a value
may be related to the magnitude of net magnetic flux in the simulations.
The values of α may also have significant effects on the energy transport
of the vertical advection. As Equations (<ref>) and (<ref>) imply,
v_R is proportional to α, and Q_adv is proportional to v_R
and therefore to α. Thus, we can expect that, for a larger
value of α, the advective cooling rate can significantly
increase and therefore the cooling rate due to the vertical advection
Q_z will decrease according to the energy balance of Equation (<ref>).
Nevertheless, the luminosity related to the radial integration of Q_z
will still be hyper-Eddington even though Q_z may be lower than
Q_adv for a large range of radii.
Our results of extremely super-Eddington luminosity of gamma-ray emission
can also be generally applied to short GRBs.
It is commonly believed that short GRBs originate from the mergers of
compact objects, i.e., black hole-neutron star binaries or
double neutron star binaries.
More importantly, such mergers are significant sources of
gravitational wave events.
Thus, the released high-energy photons may make a significant contribution
to an electromagnetic counterpart of the gravitational wave event,
such as kilonovae.
Kilonovae have been widely studied in recent years
<cit.>.
In our case, a new picture is shown by Figure <ref>.
It is seen from this figure that, either the merger of a black hole
and a neutron star or the merger of two neutron stars may result in
a gravitational wave event and a black hole hyper-accretion disk.
According to our study in this work,
most gamma-ray photons escaping from the direction perpendicular to
the equatorial plane together with the neutrino annihilation
contribute to the thermal fireball,
and the remnant escaped photons diverge from other directions to trigger
non-thermal emission due to diffusion into the ambient environment.
Obviously, such a progenitor of kilonovae is quite different from
the origin from disk wind <cit.>
or magnetar wind <cit.>.
In addition, a faint gamma-ray thermal component may
exist owing to the large amount of escaped thermal gamma-ray photons
from the disk, which may contribute to the thermal component
of the prompt gamma-ray emission <cit.>.
Moreover, the neutrino annihilation mechanism or the BZ mechanism based on
the accumulation of magnetic fields through the hyper-accretion process
will work as the main central engine for the GRB.
In summary, in our scenario of black hole hyper-accretion with
vertical advection process, short GRBs, kilonovae, and gravitational
wave events can be naturally blended together <cit.>.
This work was supported by the National Basic Research Program of China
(973 Program) under grants 2014CB845800,
the National Natural Science Foundation of China under grants 11573023,
11473022, 11573051, 11633006, and 11333004,
the National Program on Key Research and Development Project of China
(Grant No. 2016YFA0400704),
the Key Research Program of Frontier Sciences of CAS (No. QYZDJ-SSW-SYS008),
and the CAS Open Research Program of Key Laboratory for the Structure and
Evolution of Celestial Objects under grant OP201503.
[Abdo et al.(2009)]Abdo2009
Abdo, A. A., Ackermann, M., Ajello, M., et al. 2009, , 706, L138
[Berger(2014)]Berger2014 Berger, E. 2014, , 52, 43
[Blandford & Znajek(1977)]Blandford1977 Blandford, R. D., & Znajek, R. L. 1977, , 179, 433
[Cao et al.(2014)]Cao2014 Cao, X., Liang, E.-W., & Yuan, Y.-F. 2014, , 789, 129
[Chen & Beloborodov(2007)]Chen2007 Chen, W.-X., & Beloborodov, A. M. 2007, , 657, 383
[Dai & Lu(1998)]Dai1998 Dai, Z. G., & Lu, T. 1998, Physical Review Letters, 81, 4301
[Dai et al.(2006)]Dai2006 Dai, Z. G., Wang, X. Y., Wu, X. F., & Zhang, B. 2006, Science, 311, 1127
[Di Matteo et al.(2002)]Di2002Di Matteo, T., Perna, R., & Narayan, R. 2002, , 579, 706
[Fernández et al.(2016)]Fernandez2016a Fernández, R., Foucart, F., Kasen, D., et al. 2016, arXiv:1612.04829
[Fernández & Metzger(2016)]Fernandez2016b Fernández, R., & Metzger, B. D. 2016, Annual Review of Nuclear and Particle Science, 66, 23
[Gao et al.(2015)]Gao2015 Gao, H., Ding, X., Wu, X.-F., Dai, Z.-G., & Zhang, B. 2015, , 807, 163
[Gu(2015)]Gu2015 Gu, W.-M. 2015, , 799, 71
[Gu et al.(2006)]Gu2006 Gu, W.-M., Liu, T., & Lu, J.-F. 2006, , 643, L87
[Hirose et al.(2009)]Hirose2009 Hirose, S., Blaes, O., & Krolik, J. H. 2009, , 704, 781
[Janiuk et al.(2013)]Janiuk2013 Janiuk, A., Mioduszewski, P., & Moscibrodzka, M. 2013, , 776, 105
[Janiuk et al.(2004)]Janiuk2004 Janiuk, A., Perna, R., Di Matteo, T., & Czerny, B. 2004, , 355, 950
[Jiang et al.(2014)]Jiang2014Jiang, Y.-F., Stone, J. M., & Davis, S. W. 2014, , 796, 106
[Jin et al.(2013)]Jin2013 Jin, Z.-P., Xu, D., Fan, Y.-Z., Wu, X.-F., & Wei, D.-M. 2013, , 775, L19
[Kasen et al.(2015)]Kasen2015 Kasen, D., Fernández, R., & Metzger, B. D. 2015, , 450, 1777
[Kawaguchi et al.(2016)]Kawaguchi2016 Kawaguchi, K., Kyutoku, K., Shibata, M., & Tanaka, M. 2016, , 825, 52
[Kawanaka & Kohri(2012)]Kawanaka2012 Kawanaka, N., & Kohri, K. 2012, , 419, 713
[Kawanaka et al.(2013)]Kawanaka2013 Kawanaka, N., Piran, T., & Krolik, J. H. 2013, , 766, 31
[Kohri & Mineshige(2002)]Kohri2002 Kohri, K., & Mineshige, S. 2002, , 577, 311
[Kohri et al.(2005)]Kohri2005 Kohri, K., Narayan, R., & Piran, T. 2005, , 629, 341
[Lee et al.(2000)]Lee2000 Lee, H. K., Wijers, R. A. M. J., & Brown, G. E. 2000, Physics Reports, 325, 83
[Lei et al.(2009)]Lei2009 Lei, W. H., Wang, D. X., Zhang, L., et al. 2009, , 700, 1970
[Li & Paczyński(1998)]Li98 Li, L.,& Paczyński, B. 1998, , 507, L59
[Liu et al.(2010)]Liu2010 Liu, T., Gu, W.-M., Dai, Z.-G., & Lu, J.-F. 2010, , 709, 851
[Liu et al.(2007)]Liu2007 Liu, T., Gu, W.-M., Xue, L., & Lu, J.-F. 2007, , 661, 1025
[Liu et al.(2015a)]Liu2015a Liu, T., Gu, W.-M., Kawanaka, N., & Li, A. 2015a, , 805, 37
[Liu et al.(2015b)]Liu2015b Liu, T., Hou, S.-J., Xue, L., & Gu, W.-M. 2015b, , 218, 12
[Liu et al.(2014)]Liu2014 Liu, T., Yu, X.-F., Gu, W.-M., & Lu, J.-F. 2014, , 791, 69
[Luo et al.(2013)]Luo2013 Luo, Y., Gu, W.-M., Liu, T., & Lu, J.-F. 2013, , 773, 142
[Lü et al.(2015)]Lv2015 Lü, H.-J., Zhang, B., Lei, W.-H., Li, Y., & Lasky, P. D. 2015, , 805, 89
[Metzger(2016)]Metzger2016 Metzger, B. D. 2016, arXiv:1610.09381
[Metzger & Berger(2012)]Metzger2012 Metzger, B. D., & Berger, E. 2012, , 746, 48
[Metzger et al.(2011)]Metzger2011 Metzger, B. D., Giannios, D., Thompson, T. A., Bucciantini, N., & Quataert, E. 2011, , 413, 2031
[Mu et al.(2016)]Mu2016 Mu, H.-J., Gu, W.-M., Hou, S.-J., et al. 2016, , 832, 161
[Nakar(2007)]Nakar2007 Nakar, E. 2007, Physics Reports, 442, 166
[Narayan et al.(1992)]Narayan1992 Narayan, R., Paczynski, B., & Piran, T. 1992, , 395, L83
[Narayan et al.(2001)]Narayan2001 Narayan, R., Piran, T., & Kumar, P. 2001, , 557, 949
[Narayan et al.(2012)]Narayan2012 Narayan, R., Sa̧dowski, A., Penna, R. F., & Kulkarni, A. K. 2012, , 426, 3241
[Ohsuga & Mineshige(2011)]Ohsuga2011 Ohsuga, K., & Mineshige, S. 2011, , 736, 2
[Ohsuga et al.(2005)]Ohsuga2005 Ohsuga, K., Mori, M., Nakamoto, T., & Mineshige, S. 2005, , 628, 368
[Paczyński(1998)]Paczy1998 Paczyński, B. 1998, , 494, L45
[Paczyński & Wiita(1980)]PW80 Paczyński, B., & Wiita, P. J. 1980, , 88, 23
[Pan & Yuan(2012)]Pan2012 Pan, Z., & Yuan, Y.-F. 2012, , 759, 82
[Popham et al.(1999)]Popham1999Popham, R., Woosley, S. E., & Fryer, C. 1999, , 518, 356
[Rosswog et al.(2016)]Rosswog2016 Rosswog, S., Feindt, U., Korobkin, O., et al. 2016, arXiv:1611.09822
[Sa̧dowski & Narayan(2015)]Sadow2015Sa̧dowski, A., & Narayan, R. 2015, , 453, 3213
[Sa̧dowski & Narayan(2016)]Sadow2016Sa̧dowski, A., & Narayan, R. 2016, , 456, 3929
[Shen et al.(2015)]Shen2015 Shen, R.-F., Barniol Duran, R., Nakar, E., & Piran, T. 2015, , 447, L60
[Song et al.(2015)]Song2015 Song, C.-Y., Liu, T., Gu, W.-M., et al. 2015, , 815, 54
[Song et al.(2016)]Song2016 Song, C.-Y., Liu, T., Gu, W.-M., & Tian, J.-X. 2016, , 458, 1921
[Sa̧dowski et al.(2013)]Sadowski2013 Sa̧dowski, A., Narayan, R., Penna, R., & Zhu, Y. 2013, , 436, 3856
[Usov(1992)]Usov1992 Usov, V. V. 1992, , 357, 472
[Wang et al.(2013)]Wang2013Wang, Q. D., Nowak, M. A., Markoff, S. B., et al. 2013, Science, 341, 981
[Woosley(1993)]Woosley1993 Woosley, S. E. 1993, , 405, 273
[Woosley & Bloom(2006)]Woosley2006 Woosley, S. E., & Bloom, J. S. 2006, , 44, 507
[Xie et al.(2016)]Xie2016 Xie, W., Lei, W.-H., & Wang, D.-X. 2016, arXiv:1609.09183
[Xue et al.(2013)]Xue2013 Xue, L., Liu, T., Gu, W.-M., & Lu, J.-F. 2013, , 207, 23
[Yang et al.(2014)]Yang2014 Yang, X.-H., Yuan, F., Ohsuga, K., & Bu, D.-F. 2014, , 780, 79
[Yu et al.(2013)]Yu2013 Yu, Y.-W., Zhang, B., & Gao, H. 2013, , 776, L40
[Yuan et al.(2012a)]Yuan2012aYuan, F., Bu, D., & Wu, M. 2012a, , 761, 130
[Yuan et al.(2012b)]Yuan2012bYuan, F., Wu, M., & Bu, D. 2012b, , 761, 129
[Yuan & Narayan(2014)]Yuan2014
Yuan, F., & Narayan, R. 2014, , 52, 529
[Zalamea & Beloborodov(2011)]Zalamea2011Zalamea, I., & Beloborodov, A. M. 2011, , 410, 2302
[Zhang et al.(2016)]Zhang2016
Zhang, B.-B., Zhang, B., Castro-Tirado, A. J., et al. 2016, arXiv:1612.03089
http://arxiv.org/abs/1701.07813v2 | 20170126185122 | Mechanism for nematic superconductivity in FeSe | [
"Jian-Huang She",
"Michael J. Lawler",
"Eun-Ah Kim"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.supr-con"
] |
^1Department of Physics, Cornell University, Ithaca, NY 14853, USA
^2Department of physics, Binghamton University, Vestal, NY 13850, USA
^3Kavli Institute for Theoretical Physics, Kohn Hall
University Of California Santa Barbara CA 93106-4030, USA
Despite its seemingly simple composition and structure, the pairing mechanism of FeSe remains an open problem due to several striking phenomena. Among them are nematic order without magnetic order, nodeless gap and unusual inelastic neutron spectra with a broad continuum, and gap anisotropy consistent with orbital selection of unknown origin. Here we propose a
microscopic description of a nematic quantum spin liquid that reproduces key features of neutron spectra.
We then study how the spin fluctuations of the local moments lead to pairing within a spin-fermion model.
We find the resulting superconducting order parameter to be nodeless s± d-wave within each domain. Furthermore,
we show that an orbital-dependent Hund's coupling can readily capture the observed gap anisotropy. Our predictions
call for inelastic neutron scattering measurements on a detwinned sample.
Mechanism for nematic superconductivity in FeSe
Jian-Huang She^1, Michael J. Lawler^2, 1,3, and Eun-Ah Kim^1,3
January 26, 2017
==================================================================
The pairing mechanism and gap symmetry of
bulk<cit.> and single-layer<cit.> FeSe are open issues that
inhibit an overarching understanding of iron-based superconductors.
Although a spin-fluctuation mediated pairing scenario is a broadly accepted mechanism in iron-based superconductors<cit.>, much debate continues to focus around two distinct perspectives: weak coupling and strong coupling.
Weak coupling approaches
are sensitive to the band structure and generally predict dominantly (π,0), (0,π) spin density wave fluctuations that couple hole pockets to electron pockets in all Fe-pnictides as well as in bulk FeSe <cit.>. Strong coupling approaches take strong electron-electron correlations to generate quasi-localized moments that would interact with itinerant carriers.
FeSe presents new challenges to both perspectives, including explaining its nematic order <cit.> (see Fig. <ref>(a)), its absence of magnetism, its gapped but active spin fluctuations at (π,π) in addition to (π,0) <cit.>, and its nodeless superconducting gap. There have been many efforts to address these issues. RPA-based weak-coupling approaches focused on implications of assumed nematic order <cit.>. Renormalization group approaches found the effective interactions promoting spin density waves to also promote orbital order <cit.>. Approaches focusing on sizable local moments <cit.> led to proposals of quadrupolar order accompanying nematic order <cit.> and the proposal of
a quasi-one dimensional quantum paramagnet state<cit.> of AKLT (Affleck-Kennedy-Lieb-Tasaki)<cit.> type. Nevertheless, the strikingly unique inelastic neutron spectra (INS) of FeSe evade all approaches proposed so far in one way or another.
The absence of the stripe order in FeSe has been attributed to the notion of frustration<cit.>.
Indeed FeSe is close to a classic situation for frustrated magnets in the much studied J_1-J_2 model<cit.>(see Fig. <ref>(b)).
Interestingly, in systems that form stripes upon cooling, viewing the nematic state as a thermally melted version of stripe order has been a very productive point of view <cit.>. Here we note that frustration from the competition between J_1 and J_2 has long been known to drive quantum-melted versions of Neel and stripe orders, giving rise to C_4-symmetric and C_2-symmetric (nematic) quantum spin liquids (QSL), respectively<cit.>.
Moreover, DMRG studies of the J_1-J_2 model noted an intermediate paramagnetic phase between the stripe-ordered and Neel-ordered states <cit.>. A recent DMRG study of the J_1-J_2-K_1-K_2 spin model found a nematic quantum paramagnetic state between the Neel and stripe ordered states<cit.>.
In this letter we propose a microscopic description of a frustration-driven nematic quantum spin liquid (QSL) state that amounts to a quantum-melted stripe state and
captures the observed INS. We then investigate the implications of the dramatically anisotropic spin-fluctuation spectra of the proposed state for the nature of superconductivity.
In FeSe, there is evidence that local moments <cit.> coexist with itinerant carriers of all three t_2g orbitals<cit.>.
In order to capture the dual character <cit.>
we turn to a spin-fermion model <cit.>: H= H_c+ H_S+ H_ int, where H_c and H_S describe the itinerant carriers and local moments respectively that are coupled through H_ int.
For the spin model
H_S= ∑_ij J_ij S_i· S_j,
with exchange interactions J_ij on a square lattice (Fig.1b), the two dominant interactions
are the nearest-neighbor J_1 and the next-nearest-neighbor J_2 exchange interactions as in other Fe-based superconductors <cit.>. But due to the near itinerancy of the core electrons, longer range terms
are also expected <cit.>.
The J_1-J_2 model has been extensively studied both classically and quantum mechanically
(see Refs.[Chandra90, Misguich04, Jiang09, Jiang12]). Within classical models the role of frustration is clear from the fact that, at the point J_2=J_1/2, the model can be recast, up to a constant, as
H_S=J_2∑ ( S_1+ S_2+ S_3+ S_4)^2, where S_1-4 are the four spins on each plaquette ⟨ 1234⟩ and the summation is over all plaquettes.
The classical ground-state condition of vanishing total spin on each plaquette leads to a zero mode at each wave vector on the Brillouin zone boundary<cit.>, so the model is highly frustrated.
Once quantum effects at small spin S are included, frustration is not limited to the fine-tuned point J_2=J_1/2. Unfortunately, a controlled theoretical study of such frustrated quantum spin systems is challenging. Hence we restrict ourselves to mean field theories and choose an ansatz that (1) agrees with the observed inelastic neutron spectrum <cit.>, and (2) has ordering tendencies that obey the classical condition S_1+ S_2+ S_3+ S_4=0 on each plaquette.
A prominent feature of the INS data <cit.> is its broad and gapped continuum of spectral weight (Fig.<ref>a) without any one-magnon branch. Intriguingly, such a continuum is expected in a two-dimensional QSL with deconfined spinons in an insulating magnetic system <cit.>. Indeed it is a common feature of slave-particle mean field theories. So we will choose Schwinger boson mean field theory (SBMFT) <cit.> as our mean field theory approach. Additional features of Fig. <ref> (a-c) we aim to capture include:
* The simultaneous presence of both (π, π) spin fluctuations and (π, 0), (0, π) spin fluctuations.
* The quasi-one-dimensional dispersion ω∼sin k_y <cit.> found in the shape of the upper and lower bounds.
* The observed cross-shaped spectrum around (π, π).
To find these features in a SBMFT, we turn to the known <cit.> SBMFT phase diagram of the J_1-J_2 model (Fig. <ref>).
Note that the Neel and stripe long range order for small J_2/J_1 and large J_2/J_1 are expected<cit.> to melt into C_4 symmetric and C_2 symmetric QSL's respectively (see Fig. <ref>).
Hence the shaded region near the phase boundary between C_4 symmetric QSL, C_2 symmetric QSL and the stripe ordered phase will capture all of the above features. Specifically, states in this region will support a dynamic spin structure factor with 1d-like dispersion and cross-shaped spectrum
assuming twin domains of the stripe state are averaged over in the INS data. To account for the near itinerancy of the electrons, we extend
an ansatz within the shaded region of Fig. <ref>
with additional, longer-range neighbor couplings.
To construct the ansatz, we now turn briefly to the specifics of SBMFT. In Schwinger boson representation, each spin S_ r is represented by two bosonic operators b_ rσ, σ=↑,↓ with the constraint ∑_σ b^†_ rσb_ rσ=2S. The spin operator is then S_ r=1/2∑_σσ'b^†_ rσσ_σσ'b_ rσ', with σ the Pauli matrices. We can then expand H_ r, r'≡ J_ r, r' S_ r· S_ r' in terms of the spin singlet operator A^†_ r, r'=b^†_ r↑b^†_ r'↓-b^†_ r↓b^†_ r'↑ to obtain H_ r, r' = -J_ r, r'1/2A^†_ r, r'A_ r, r'+S^2. Finally,
we mean-field decompose H_ r, r' and introduce mean fields ⟨ A_ r, r'⟩ using A^†_ r, r'A_ r, r' = ⟨ A^†_ r, r'⟩ A_ r, r' + A^†_ r, r'⟨ A_ r, r'⟩ - ⟨ A^†_ r, r'⟩⟨ A_ r, r'⟩. We will further assume the bosons do not
condense for we are interested in the spin liquid phase.
Defining A_μ̂≡⟨ A_ r, r+μ̂⟩, we keep A_x̂≠ 0 and the diagonals A_x̂±ŷ≠ 0 and A_x̂± 2ŷ≠ 0
for states in the shaded region of Fig. <ref>.
The fourth-neighbor term can be understood as a result of the competition between the Néel and stripe states: it is a bond that is favored by both the (π,π) Néel state and the (π,0)/(0,π) stripe state.
The result is a state with the same projective symmetry group as the Read and Sachdev state used in the phase diagram of Fig. <ref>. It is a “zero flux state" in that the smallest loop has zero “flux", obeying the so-called flux expulsion principle <cit.>, and is hence energetically competitive.
Most importantly it is a state in which translational symmetry is restored by quantum melting stripe into C_2 symmetric nematic QSL state.
We can then calculate
the dynamic spin structure factor S_ qω≡ Im⟨ S^z( q, ω) S^z(- q, ω)⟩ associated with our ansatz.
At T=0, it is of the form <cit.>
S_ q, ω∼∑_ k{cosh[ 2(θ_ k+θ_ k+q̃)]-1}δ( ω_ k+ω_ k+q̃-|ω|),
where θ_ k is the angle in the Bogoliubov transformation of SBMFT (see SM1 for explicit expression), and q̃= q-(π, 0) arises because of a standard unitary transformation we carried out on the B sublattice for simplicity. The results summing over two domains are plotted in Fig. <ref>(d-f). They capture the basic features of the neutron spectra: (1) The spectrum is gapped (Fig. <ref>d), as a result of the absence of long range magnetic ordering.
(2) Both (π, π) and (π, 0)/(0, π) spin fluctuations are present (Fig. <ref>d, e).
(3) The spectrum displays the novel feature of continuum with the bounds exhibiting quasi-one-dimensional dispersion (Fig. <ref>d).
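For concreteness, the two-spinon sum above can be evaluated directly; the following minimal Python sketch (our own illustration: the grid size and the Gaussian broadening eta replacing the delta function are arbitrary numerical choices) uses the ansatz parameters quoted in SM1:

```python
import numpy as np

# omega_k and theta_k follow SM1, with tanh(2 theta_k) = -Q gamma_k / lambda.
lam = 1.0
Qx = 0.398; Qxy = 0.025 * Qx; Qx2y = 0.1 * Qx
eta = 0.02   # numerical broadening of the energy delta function

def Qgam(kx, ky):
    return 2*Qx*np.cos(kx) + 4*Qxy*np.cos(kx)*np.cos(ky) + 4*Qx2y*np.cos(kx)*np.cos(2*ky)

def omega(kx, ky):
    return np.sqrt(lam**2 - Qgam(kx, ky)**2)

def theta(kx, ky):
    return 0.5 * np.arctanh(-Qgam(kx, ky) / lam)

def S(q, w, n=128):
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    kx, ky = np.meshgrid(ks, ks)
    qx, qy = q[0] - np.pi, q[1]                     # the shift q -> q - (pi, 0)
    coh = np.cosh(2*(theta(kx, ky) + theta(kx + qx, ky + qy))) - 1.0
    de = omega(kx, ky) + omega(kx + qx, ky + qy) - abs(w)
    return np.sum(coh * np.exp(-(de / eta)**2)) / n**2

print(S((np.pi, np.pi), 0.8), S((np.pi, 0.0), 0.8))  # weight at both wave vectors
```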
A sharp prediction of our model is the dramatic suppression of
spectral weight around (0, q_y) in a detwinned sample ((q_x, 0) for the other domain). This means that at low energies there is weight at, say, (π, π) and (π, 0), but not at (0, π).
By contrast, in an orbital order driven picture for nematic ordering, there is only a weak anisotropy in the spin-structure factor with the spectral weight at (π, π), (0, π) and (π, 0) of roughly the same magnitude even in a single nematic domain<cit.>.
Such a distinction has profound implications for pairing. When the degree of anisotropy in the momentum distribution of the spin spectra is mild, pairing interactions with different q-wavevectors compete, leading to nodes <cit.>. On the other hand, the strong anisotropy in the spectral weight distribution in our SBMFT ansatz quenches such competition removing any need for a superconducting gap node.
We now turn to the itinerant degrees of freedom to study nematicity and superconductivity. Their kinetic energy is given by a tight-binding model:
H_c=∑_ k,αβ,μνϵ_αβ^μν( k)c^†_αμ( k)c_βν( k),
where c^†_αμ( k) creates an itinerant electron with momentum k, spin μ and orbital index α.
The Fermi surface of FeSe consists of two electron pockets around the M points and one hole pocket around the Γ point <cit.>. Following <cit.>, we take a simple symmetry-based approach of expanding the dispersion ϵ_αβ^μν( k) around the Fermi surface. It is known experimentally that the spectral weight of the low-energy states is predominantly from d_yz and d_zx around the Γ point, from d_yz and d_xy around (π, 0), and from d_zx and d_xy around (0, π). We consider the corresponding intra- and inter-orbital hopping terms. Furthermore, we include on-site nematicity and spin-orbit coupling to produce the band splitting that gives rise to a single hole pocket around Γ. The resulting simplified Fermi surface is shown in Fig.<ref>a; see SM2 for explicit parameters. [Our rather simple band structure allows a largely analytic calculation at the expense of missing quantitative details, such as the large mismatch in the pocket sizes found in quantum oscillations (see SM4).]
The itinerant electrons couple to the local moments via the ferromagnetic Hund's coupling <cit.>:
H_ int=-∑_i,α,μνJ_α S_i· c^†_iαμσ_μνc_iαν,
where σ represents the vector of Pauli matrices, and J_α>0 denote the Hund's couplings.
Since the Hund's couplings depend on the overlap of the itinerant electron wave function with the local moment, they are generally different for different orbitals.
[Coupling to itinerant electrons generates a self-energy for the local moment propagator, giving rise to Landau damping (see SM3). However the strength of the Hund's couplings can be estimated to be much weaker than the spin exchange interaction (see SM3). Hence coupling to itinerant electrons will not significantly modify the local moment spin susceptibility.]
Note that the proposed nematic QSL state induces nematicity in the charge sector.
For instance non-zero
⟨ A_ r, r±x̂⟩ in the nematic QSL state generates
an interaction among conduction electrons along the x-direction,
which drives bond-centered nematic order with
φ_c≡⟨ c^†_ r+x̂,αc_ r,α -c^†_ r+ŷ,αc_ r,α⟩≠ 0 below the temperature at which the nematic QSL develops. The observed nematic transition at T_s∼ 90K <cit.> is consistent with this picture. Furthermore, φ_c linearly couples to
φ_o≡n_zx-n_yz/n_zx+n_yz, where n_zx, yz denote occupation of zx and yz orbitals, and φ_s ≡ M_x^2-M_y^2, where M represents the magnetic moment. These different measures of nematicity are consistent with orbital imbalance
observed in ARPES <cit.> (φ_o≠ 0) and the observed NMR resonance line splitting <cit.> (φ_s≠ 0).
Furthermore, the nematic spin fluctuations in the proposed QSL state mediate pairing,
and the resulting gap structure can be determined via a standard mean-field procedure (see SM4). An immediate observation is that non-universal aspects of the gap structure, such as the relative gap strength on each pocket and the T_c, are sensitive to the strength of the Hund's couplings J (see Fig. <ref>b,c). Nevertheless the gap functions resulting from our model share the following generic features: (1) The gap is generically nodeless as a result of the severe anisotropy of the spin fluctuations in the nematic QSL state. In particular, the near absence of spin fluctuations around, say, (0, π) for one nematic domain renders the determination of the gap sign on different pockets unfrustrated. In contrast, in the itinerant picture, where the (π, π), (π, 0) and (0, π) spin fluctuations are close in magnitude, they compete in deciding the sign structure of the gap, causing nodal gap structures. (2) The gap is strongly anisotropic due to the
variation of the orbital content around each Fermi pocket. The resulting nodeless but very anisotropic gap structure explains the seemingly contradictory experimental results of STM <cit.>, penetration depth <cit.> and thermal conductivity measurements <cit.>, which observe low-energy excitations <cit.> despite evidence of a full gap <cit.>. (3)
The gap changes sign from pocket to pocket. This is consistent (see SM4) with the observation of sharp spin resonance in the superconducting state <cit.>.
More specifically, our gap function is a combination of d-wave as induced by (π, π) spin fluctuations and s_± as induced by (π, 0) spin fluctuations.
We consider a single nematic domain, where the pairing interaction concentrates around (π, q_y). Two examples of the gap function (in arbitrary units) are shown in Fig.<ref>b,c, where J_xy=J_yz=J_zx and J_xy=0.4 J_yz=0.4 J_zx respectively.
Now we turn to the question of the orbital dependence of the Hund's coupling. Fig. <ref>b,c shows that an orbital-dependent Hund's coupling can alter the relative magnitude and anisotropy of the gap functions at different Fermi pockets (while the gap is predominantly d-wave in Fig. <ref>b, d- and s-wave are on a par in Fig. <ref>c).
Since the Hund's coupling requires overlap of the wave-function between the conduction electrons and local moments, significantly lower spectral weight of d_xy orbitals <cit.> implies J_xy≪ J_zx, J_yz.
Indeed, the gap function with such orbital dependent Hund's coupling shows remarkable resemblance to the gap structure observed by recent STM measurements<cit.> (see Fig. <ref>c,d). In Ref. Sprau16 the observed pocket specific gap anisotropy was interpreted as resulting from orbital-selective pairing of unknown microscopic origin.
In our model, such orbital-selective pairing arises from the orbital dependence of the Hund's coupling, J_xy<J_yz=J_zx, reflecting the much smaller weight of d_xy orbitals in the conduction electrons <cit.>. This orbital-dependent Hund's coupling amplifies the role of (π,0) spin fluctuations in pairing despite the larger spectral weight at (π,π), which is consistent with the observation of a sharp spin resonance at (π,0) <cit.> (see SM4 for further discussion).
In conclusion, we propose a nematic QSL description of FeSe that explains
the basic phenomenology of FeSe: (1) the spin dynamics observed in Ref. Zhao16, assuming it is averaged over domains, (2) a nematic transition without magnetic ordering, (3) a highly anisotropic yet fully gapped superconducting state.
The central assumption that neutron scattering averages over domains could be tested in a detwinned neutron experiment. The orbital-dependent Hund's coupling mechanism for orbital-selective pairing in bulk FeSe further offers new insight regarding the higher T_c observed in
mono-layer FeSe and K-doped FeSe. As we show in SM4, a larger J_xy, which enables the conduction electrons to utilize the (π,π) spin fluctuations with larger intensity and higher characteristic frequency, leads to a higher transition temperature (as high as 47 K). Combined with the observation that the spectral weight of the d_xy orbitals in the conduction electrons is much higher in the higher-T_c settings of mono-layer FeSe and K-doped FeSe <cit.>, it is conceivable that these systems make better use of the already more prominent (π,π) fluctuations to achieve a higher T_c. We note here that the nematic QSL state we propose is distinct from the proposal of Ref. FWang15 in that it contains no one-magnon branch of excitations (see SM5), although both proposals start from a strong-coupling perspective and from spin ground states lacking any form of magnetic order.
Finally, although we used SBMFT as a calculational crutch to capture the spin wave continuum, the ultimate fate of spinons in this spin system coupled to itinerant electrons needs further study. Interestingly, such a state with spinons coexisting with conduction electrons would resemble the FL* state first proposed in Refs. Senthil03, Senthil04 that has recently been revisited using DMRG <cit.>.
Acknowledgements We thank Andrey Chubukov, J.C. Davis, Rafael Fernandez, Yong Baek Kim, Steve Kivelson, Igor Mazin, Andriy Nevidomskyy, Subir Sachdev, Doug Scalapino, Qimiao Si, Fa Wang for discussions. E-AK and J-HS were supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering under Award DE-SC0010313. E-AK also acknowledges Simons Fellow in Theoretical Physics Award#392182. E-AK and MJL acknowledge hospitality of the KITP supported by Grant No. NSF PHY11-25915.
§ SM1: SCHWINGER BOSON MEAN FIELD THEORY
We show here that our ansatz state is a self-consistent solution of the J_1-J_2-J_3-J_4 spin model (see Fig.1a of main text for the definition of the J's). On a bipartite lattice, it is convenient to perform a unitary transformation by defining a_iσ≡ b_iσ on the A sublattice, and a_j↑≡ b_j↓, a_j↓≡ -b_j↑ on the B sublattice. The valence bond operator is then brought to the simpler form A^†_ij= ∑_σ a^†_iσa^†_jσ. Modulo a constant, the spin Hamiltonian
H_S=∑_ijJ_ij S_i· S_j
can be written in terms of the valence bond operators as
H_S=-1/2∑_ijJ_ijA^†_ijA_ij.
We then apply mean field theory to the bosonic Hamiltonian <cit.>. Defining Q_ij=J_ij⟨ A_ij⟩≡ Q_δ, the quadratic part of the mean field Hamiltonian reads:
H_S^( MF)=λ∑_iσ a^†_iσa_iσ+1/2∑_iδσQ_δ( a^†_iσa^†_i+δ, σ+ a_iσa_i+δ, σ).
For a given mean field ansatz, the mean field Hamiltonian can be diagonalized by the Bogoliubov transformation
α_ kσ=coshθ_ ka_ kσ-sinhθ_ ka^†_- kσ,
with tanh( 2θ_ k)=-Qγ_ k/λ. Here Qγ_ k denotes the Fourier transform of Q_δ: Qγ_ k=∑_δQ_δe^-i k·δ.
The resulting Hamiltonian reads
H_S^( MF)=∑_ kσω_ k(α^†_ kσα_ kσ+1/2),
with the dispersion ω_ k=√(λ^2-(Qγ_ k)^2). Integrating out the bosonic fields, one obtains the free energy
F=∑_δ|Q_δ|^2/2J_δ -1/2(2S+1)λ+1/β∫d^2 k/(2π)^2ln[2sinh(1/2βω_ k)],
from which follow the self-consistency equations. In SBMFT, long-range order occurs through Bose-Einstein condensation (BEC) of the Schwinger bosons, and condensation gives rise to gapless spectrum due to the resulting Goldstone mode. A QSL state corresponds to a solution of the self-consistency equations with gapped spectrum, where there is no condensation of Schwinger bosons.
We start with decoupled 1d chains where only Q_x≠ 0, and
Qγ_ k=2Q_xcos k_x.
Its free energy is
F^(1)=Q_x^2/J_1-1/2(2S+1)λ+1/β∫d^2 k/(2π)^2ln[2sinh(1/2βω_ k)],
and the self-consistency equations are
S+1/2 = ∫d^2 k/(2π)^2λ/2ω_ k,
Q_x/J_1 = ∫d^2 k/(2π)^2Qγ_ k/2ω_ kcos k_x.
This ansatz state is essentially determined by a single dimensionless parameter, Q_x/λ. We have plotted the spin structure factor taking Q_x/λ=0.498 (see Fig. <ref>a,b,c). Here S=0.677.
We consider then our ansatz state, namely the quantum melted stripe state, where Q_x≠ 0, Q_x+y≠ 0, Q_x+2y≠ 0, and
Qγ_ k=2Q_xcos k_x+4Q_x+ycos k_x cos k_y + 4Q_x+2ycos k_x cos(2k_y).
Its free energy is
F^(2)=Q_x^2/J_1+ 2Q_x+y^2/J_2+2Q_x+2y^2/J_4-1/2(2S+1)λ+1/β∫d^2 k/(2π)^2ln[2sinh(1/2βω_ k)],
and the self-consistency equations are
S+1/2 = ∫d^2 k/(2π)^2λ/2ω_ k,
Q_x/J_1 = ∫d^2 k/(2π)^2Qγ_ k/2ω_ kcos k_x,
Q_x+y/J_2 = ∫d^2 k/(2π)^2Qγ_ k/2ω_ kcos k_xcos k_y ,
Q_x+2y/J_4 = ∫d^2 k/(2π)^2Qγ_ k/2ω_ kcos k_xcos(2k_y).
This ansatz state is basically determined by the three dimensionless parameters: Q_x/λ, Q_x+y/Q_x and Q_x+2y/Q_x. We have plotted the spin structure factor taking Q_x/λ=0.398, Q_x+y/Q_x=0.025 and Q_x+2y/Q_x=0.1 (see Fig. <ref>d,e,f). Here S=0.153, J_2/J_1=0.904, J_4/J_1=0.975.
In addition, we find that the quantum melted stripe state has lower energy than the decoupled 1d chain state. We set J_1=1. For S=0.25, J_2=0.88, J_4=0.944, we obtain F^(1)=-0.175, F^(2)=-0.179. For S=0.26, J_2=0.864, J_4=0.925, we obtain F^(1)=-0.188, F^(2)=-0.192. For S=0.3, J_2=0.869, J_4=0.935, we obtain F^(1)=-0.224, F^(2)=-0.227.
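As a numerical illustration (our own sketch, not the code used for the figures), the self-consistency integrals above can be evaluated on a momentum grid at the quoted melted-stripe solution, and the implied S, J_2/J_1 and J_4/J_1 read off for comparison with the values reported above:

```python
import numpy as np

# Evaluate the self-consistency integrals at the quoted solution
# Q_x/lambda = 0.398, Q_{x+y}/Q_x = 0.025, Q_{x+2y}/Q_x = 0.1; the output
# is to be compared with S ~ 0.153, J_2/J_1 ~ 0.904, J_4/J_1 ~ 0.975.
lam = 1.0
Qx = 0.398; Qxy = 0.025 * Qx; Qx2y = 0.1 * Qx
ks = np.linspace(-np.pi, np.pi, 600, endpoint=False)
kx, ky = np.meshgrid(ks, ks)

Qgam = 2*Qx*np.cos(kx) + 4*Qxy*np.cos(kx)*np.cos(ky) + 4*Qx2y*np.cos(kx)*np.cos(2*ky)
om = np.sqrt(lam**2 - Qgam**2)

# BZ averages stand in for the integrals d^2k/(2 pi)^2
S = np.mean(lam / (2*om)) - 0.5
J1 = Qx   / np.mean(Qgam/(2*om) * np.cos(kx))
J2 = Qxy  / np.mean(Qgam/(2*om) * np.cos(kx)*np.cos(ky))
J4 = Qx2y / np.mean(Qgam/(2*om) * np.cos(kx)*np.cos(2*ky))
print(S, J2/J1, J4/J1)
```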
§ SM2: ITINERANT PART: THREE ORBITAL MODEL
For the itinerant part of the system, what matters for pairing are the low-energy electronic states around the Fermi pockets. We take a phenomenological approach, expanding the dispersion around the Fermi pockets. Since it is known experimentally that the spectral weight of the low-energy states arises mainly from the d_yz, d_zx, d_xy orbitals, we consider a band structure involving these three orbitals. Such orbital-projected band models have been studied in <cit.>. Consider first the Fermi pocket near the Γ point: a single hole pocket has been detected, with the d_yz and d_zx orbitals dominating the spectral weight. We introduce a spinor ψ^T_Γ, k=(c_yz, k↑, -c_zx, k↑, c_yz, k↓, -c_zx, k↓), and the kinetic energy part of the Hamiltonian is of the form
H_0,Γ = ∑_ kψ^†_Γ, k h_Γ( k)ψ_Γ, k.
The Hamiltonian includes the on-site energy, intra-orbital hopping, inter-orbital hopping. To get the right orbital splitting, we include also the difference in the on-site energy for the two orbitals reflecting nematicity, and the spin-orbit coupling <cit.>. The result reads
h_Γ( k)=(ε_Γ+k^2/2m_Γ) τ^0⊗σ^0 + [δε_Γ +b k^2cos (2θ_k)] τ^3⊗σ^0 + ck^2sin (2θ_k) τ^1⊗σ^0 + λτ^2⊗σ^3,
where τ and σ are Pauli matrices in orbital and spin space respectively, and k=(k_x, k_y)=k(cosθ_k, sinθ_k).
For the electron pocket near (π, 0), the d_yz and d_xy orbitals dominate the spectral weight.
We introduce a spinor ψ^T_X, k=(c_yz, k↑, c_xy, k↑, c_yz, k↓, c_xy, k↓), and the kinetic energy part of the Hamiltonian is of the form
H_0,X = ∑_ kψ^†_X, k h_X( k)ψ_X, k,
with
h_X( k)=[ε_1 +k^2/2m_1-a_1 k^2cos(2θ_k)] τ^0+τ^3/2⊗σ^0 +[ε_3 +k^2/2m_3-a_3 k^2cos(2θ_k)] τ^0-τ^3/2⊗σ^0+2vksinθ_k τ^2⊗σ^0.
Here k is measured from (π, 0).
For the electron pocket near (0, π), the d_zx and d_xy orbitals dominate the spectral weight. We introduce a spinor ψ^T_Y, k=(c_zx, k↑, c_xy, k↑, c_zx, k↓, c_xy, k↓), and the kinetic energy part of the Hamiltonian is of the form
H_0,Y = ∑_ kψ^†_Y, k h_Y( k)ψ_Y, k,
with
h_Y( k)=[ε_1 +k^2/2m_1+a_1 k^2cos(2θ_k)] τ^0+τ^3/2⊗σ^0 +[ε_3 +k^2/2m_3+a_3 k^2cos(2θ_k)] τ^0-τ^3/2⊗σ^0+2vkcosθ_k τ^2⊗σ^0.
Here k is measured from (0, π).
With a proper choice of the parameters, we can obtain a single hole pocket around Γ, a single electron pocket around (π, 0), and a single electron pocket around (0, π), as shown in Fig.4a of the main text. The corresponding parameters are: ε_Γ =14, δε_Γ = 11, 1/2m_Γ=-350, b=-70, c=120, λ = 9, ε_1=-20, ε_3=-60, 1/2m_1=75, 1/2m_3=160, a_1=100, a_3=-120, v=-60. Note that the band structure employed in our paper is simplified in order to use a closed form for the Hamiltonian that allows us to carry out the study of superconductivity semi-analytically. Although our band structure misses quantitative details, such as the severe mismatch in the sizes of the two electron pockets (see SM4 for further discussion), such details will not impact the qualitative conclusions of the paper.
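For readers wishing to reproduce the pocket structure, a minimal sketch (ours; the momentum scan range and step are arbitrary choices) assembling h_Γ(k) from the parameters above and locating the hole-pocket Fermi momentum could read:

```python
import numpy as np

# Assemble h_Gamma(k) = (eps + k^2/2m) t0xs0 + (d_eps + b k^2 cos2th) t3xs0
#                      + c k^2 sin2th t1xs0 + lam t2xs3, with tau first in kron.
s0 = np.eye(2); s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]]); s3 = np.diag([1.0, -1.0])
eG, deG, invm, b, c, lam = 14.0, 11.0, -350.0, -70.0, 120.0, 9.0

def h_gamma(kx, ky):
    k2, th = kx**2 + ky**2, np.arctan2(ky, kx)
    return (np.kron((eG + invm*k2)*s0 + (deG + b*k2*np.cos(2*th))*s3
                    + c*k2*np.sin(2*th)*s1, s0)
            + lam*np.kron(s2, s3))           # spin-orbit term

# scan |k| along theta = 0 until the upper band drops below E = 0
for k in np.linspace(0.01, 0.6, 60):
    if np.linalg.eigvalsh(h_gamma(k, 0.0)).max() < 0:
        print("hole-pocket k_F(theta=0) ~", round(k, 3))
        break
```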
§ SM3: HOW ITINERANT ELECTRONS AFFECT LOCAL MOMENTS: LANDAU DAMPING
Coupling to itinerant electrons generates a self-energy for the local moment propagator, giving rise to Landau damping. Since the Fermi pockets are small in size, and located near Γ- and M-points, the induced self-energy will be predominantly near q=(π, π), (π, 0) and (0, π) (see Fig.<ref>). We expect the neutron spectrum near these points to be smeared. For q∼ (π, 0), the self-energy is predominantly from d_yz orbitals,
D^(π, 0)( q, Ω) ∼∑_ k,ωJ_yz^2 G^(yz)( k, ω) G^(yz)( k+ q, ω+ Ω),
with fermion Green's function G( k, ω). Here k is at the Γ pocket, and k+ q at the M_x pocket. For q∼ (π, π), the self-energy is predominantly from d_xy orbitals,
D^(π, π)( q, Ω) ∼∑_ k,ωJ_xy^2 G^(xy)( k, ω) G^(xy)( k+ q, ω+ Ω),
where k and k+ q are at the two M-pockets. With the suppression of J_xy, we expect the Landau damping effect to be weaker near q=(π, π).
We can estimate the strength of the Hund's coupling from the resulting superconducting transition temperature: T_c∼ E_F e^-1/λ, with the dimensionless coupling λ= N_0 V∼ J_H^2/(E_FJ_ ex). The energy scales involved are: (1) Fermi energy E_F∼ 10 meV <cit.>, (2) the exchange interaction J_ ex∼ 100 meV <cit.>, and (3) T_c∼ 1 meV. From these, we obtain Hund's coupling J_H∼ 20 meV, which is much smaller than the exchange interaction J_ ex. Hence we expect coupling to itinerant electrons will not significantly modify the local moment spin susceptibility.
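The arithmetic of this estimate can be checked directly (a trivial sketch of our own, with the numbers quoted above):

```python
import numpy as np

# Invert T_c = E_F exp(-1/lambda) and lambda ~ J_H^2 / (E_F J_ex).
E_F, J_ex, T_c = 10.0, 100.0, 1.0              # meV, as quoted in the text
lam = 1.0 / np.log(E_F / T_c)                  # dimensionless coupling
J_H = np.sqrt(lam * E_F * J_ex)                # implied Hund's coupling
print(f"lambda ~ {lam:.2f},  J_H ~ {J_H:.0f} meV")   # ~0.43 and ~21 meV
```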
§ SM4: HOW LOCAL MOMENTS AFFECT ITINERANT ELECTRONS: NEMATICITY AND PAIRING
The dynamic spin fluctuations in the QSL affect the itinerant electrons. Since the spins have a gapped spectrum, we can integrate them out to obtain an effective interaction for the itinerant electrons. The induced action reads
S_ int =-1/2∫_0^β dτ∑_α, α' J_αJ_α'χ_ij(τ) s_iα(τ)· s_jα'(0),
with the itinerant electron spin density s_iα≡∑_μνc^†_iαμσ_μνc_iαν, and the local moment spin correlation function χ_ij(τ)≡⟨ T_τS_i^a(τ)S_j^a(0) ⟩. The induced interaction is highly anisotropic, and the dominant interaction term is the nearest-neighbor interaction (say along the x-direction): J_H^2χ c^†_ rασ^a_αβc_ rβc^†_ r+x̂,α'σ^a_α'β'c_ r+x̂,β'. This interaction results in a phase transition to a nematic state with order parameter ⟨ c^†_ r+x̂,αc_ r,α⟩≠ 0, or more generally, φ_c≡⟨ c^†_ r+x̂,αc_ r,α -c^†_ r+ŷ,αc_ r,α⟩≠ 0.
Furthermore, the induced interaction leads to pairing among the itinerant electrons.
Since the spin fluctuations are antiferromagnetic, one expects pairing in the spin singlet channel. We then mean field decompose the induced interaction into spin singlet pairing channel with the corresponding pair operator h^†_αα'( k)=1/√(2)(c^†_ kα↑c^†_- kα'↓ -c^†_ kα↓c^†_- kα'↑). Due to the special form of spin susceptibility and band structure in FeSe, the pairing problem is largely simplified. The spin fluctuations enter the pairing problem through the spin susceptibility χ( q)≡χ( q, Ω_n=0), which can be obtained from the dynamic spin structure factor via χ( q)=-∫ dω S_ q, ω/ω. The special form of χ( q) in FeSe (see Fig. <ref>) results in only inter-band pairing correlation among the three Fermi pockets. Furthermore, since Hund's coupling is diagonal in orbital space, there are only pairing correlations between the same orbitals: in orbital basis, the pairing interaction is of the form H_ pair∼ J_α^2 χ( k- k')h^†_αα( k)h_αα( k').
Pairing occurs near the Fermi surface, which is naturally expressed in the band basis. We then transform from the orbital basis to the band basis: c^†_ kαμ=∑_a η^*_α a μ( k)d^†_ kaμ with band index a. Note that since the spin-orbit coupling here is in the σ^3 channel, different spins do not mix. The pair operator in the band basis is h^†_a( k)=1/√(2)(d^†_ ka↑d^†_- ka↓ -d^†_ ka↓d^†_- ka↑). Omitting the frequency dependence, the pairing Hamiltonian is of the form
H_ pair =∑_ k k'abΓ_ab( k, k')h^†_a( k)h_b( k'),
with the projected pairing interaction Γ_ab( k, k')=1/2∑_αJ_α^2χ( k- k') M^*_α a( k) M_α b( k'). The orbital content is encoded in the form factor M_α a( k)= η_α a ↑( k) η_α a ↓(- k).
The gap function is then defined as Δ_a( k) =∑_ k'bΓ_ab( k, k')⟨ h_b( k')⟩. The gap symmetry function g_i( k)∝Δ_a( k) on the Fermi surface is determined by the eigen equation
-∑_j∮_ FS_jd k'_∥/2π v_F( k')Γ_ij( k, k')g_j( k') =λ g_i( k),
where k_∥ denotes momentum along the Fermi surface FS_j, and v_F( k)=|∇ E_a( k)| represents the Fermi velocity. We can then solve the above eigen equation to find the leading eigenvalue and the corresponding eigenvector, which determines the resulting gap structure within a single nematic domain. The inputs are (1) itinerant electron band structure encoded in ϵ_αβ^μν( k) (2) local moment spin susceptibility χ( q), and (3) Hund's couplings J_α.
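Schematically, the discretized eigenproblem can be organized as in the sketch below (our own illustration, not the production code behind the gap figures; the susceptibility chi, the form factors M and the Fermi-surface patches are placeholders to be supplied from the modules described in SM1 and SM2, and the toy inputs at the end are invented purely for demonstration):

```python
import numpy as np

def leading_gap(patches, weights, chi, M, J):
    """patches: list of (pocket, k); weights: dk_par / (2 pi v_F(k));
    chi: static susceptibility chi(k - k'); M[a](pocket, k): orbital form
    factor of orbital a; J[a]: Hund's coupling of orbital a."""
    n = len(patches)
    K = np.zeros((n, n), dtype=complex)
    for i, (pi_, ki) in enumerate(patches):
        for j, (pj_, kj) in enumerate(patches):
            G = 0.5 * sum(J[a]**2 * chi(ki - kj)
                          * np.conj(M[a](pi_, ki)) * M[a](pj_, kj) for a in J)
            K[i, j] = -G * weights[j]          # note the overall minus sign
    evals, evecs = np.linalg.eig(K)
    top = np.argmax(evals.real)
    return evals[top].real, evecs[:, top].real  # (lambda, g on the FS)

# toy usage: two circular pockets, susceptibility peaked near q = (pi, 0),
# one orbital with a trivial form factor
th = np.linspace(0.0, 2*np.pi, 16, endpoint=False)
circ = lambda c, r: [np.array(c) + r*np.array([np.cos(t), np.sin(t)]) for t in th]
patches = [("G", k) for k in circ((0, 0), 0.2)] + [("X", k) for k in circ((np.pi, 0), 0.2)]
weights = np.full(len(patches), 1.0 / len(patches))
chi = lambda dq: np.exp(-np.sum((np.abs(dq) - np.array([np.pi, 0.0]))**2) / 0.25)
lam, g = leading_gap(patches, weights, chi, {"yz": lambda p, k: 1.0}, {"yz": 1.0})
print(lam, np.sign(g[0]) == -np.sign(g[16]))   # s+-: opposite signs on the pockets
```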
We show here more concretely how (π, π) spin fluctuation mediated pairing enhances the superconducting T_c.
We first estimate the value of λ in bulk FeSe from the observed T_c. Since Fermi energy is small compared to spin fluctuation scale (so called antiadiabatic limit), Fermi energy acts as cutoff in the T_c equation: T_c∼ E_F e^-1/λ <cit.>. With T_c∼ 8 K, E_F∼ 10 meV <cit.>, we obtain λ∼ 0.37. In bulk FeSe, pairing occurs predominantly among d_yz orbitals as mediated by (π, 0) spin fluctuations, while (π, π) spin fluctuation mediated pairing among d_xy orbitals is largely suppressed. This corresponds to taking (J_xy, J_zx, J_yz) ∼ (0, 1, 1). (Note that due to the near absence of (0, π) spin fluctuations, pairing among d_zx orbitals is suppressed for any coupling. So we just set J_zx =1.) When pairing is predominantly among d_xy orbitals, we have (J_xy, J_zx, J_yz) ∼ (1, 1, 0). We have obtained the resulting eigenvalue λ'=2.98λ, which gives T_c∼ 47 K. Hence (π, π) spin fluctuation mediated pairing is indeed able to account for the much higher T_c in heavily doped FeSe (T_c∼ 48 K <cit.>), and a large part of the T_c increase in monolayer FeSe (T_c∼ 50-64 K <cit.>).
§.§ Neutron resonance
A characteristic feature of our gap function is that it has different signs at different Fermi pockets. Such a sign-changing gap function can give rise to resonances in the neutron spectrum. However the intensity of the resonances depends on the details of the band structure and the superconducting gap function. The resonance comes from the itinerant electron spin susceptibility, which contains a term of the form<cit.>
χ”( q, ω)∼∑_ k( 1-Δ_ kΔ_ k+ q/E_ kE_ k+ q)
[ 1-f(E_ k)-f(E_ k+ q) ]δ(ω-E_ k-E_ k+ q ).
The gap function changes sign between the Γ pocket and (π, 0) pocket, and between the (π, 0) pocket and (0, π) pocket. Since the sizes of the Fermi pockets are small, one expects the resonances to be localized in momentum space around q=(π, 0) and q=(π, π).
Due to the lack of full knowledge of the gap function in the whole Brillouin zone, we approximate the neutron intensity at q=(π, 0) and q=(π, π) by summing over the corresponding Fermi surfaces. Furthermore, we penalize the resulting term by an exponential factor depending on the difference between the momentum transfer and q, i.e. we replace ∑_ k by ∮_ k∮_ k'e^-| k- k'- q|^2/q_0^2, where the integrals are restricted to the Fermi surfaces, and q_0 is a parameter that essentially measures the range of blurring in momentum space. Using the simplified band structure employed in the paper and specified in SM2 (Fig.4(a) of main text), and the resulting gap function with J_xy/J_yz=0.4 (see Fig.4(c) of main text), we obtain the neutron intensity as shown in Fig.<ref>(a) (with q_0=0.2). One can see that the resonance at q=(π, π) is suppressed due to the worse nesting of the corresponding Fermi surfaces and gap functions.
Actually one expects further suppression of the (π, π) resonance with more realistic band structures. In particular, it has been found in quantum oscillation measurements <cit.> that the Fermi pocket at (0, π) (with k_F≃ 0.13Å^-1) is much larger than the Fermi pocket at (π, 0) (with k_F≃ 0.043Å^-1). Indeed, a simple check in which the Fermi pocket at (0, π) is enlarged by a factor of two, while the rest of the band structure and the gap functions are kept fixed, suppresses the resonance at q=(π, π) to almost vanishing intensity (see Fig.<ref>(b)).
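The blurred double Fermi-surface sum described above admits a compact numerical sketch (our own illustrative implementation; the pocket discretizations and toy gap values below are invented inputs):

```python
import numpy as np

def resonance_intensity(q, fs1, fs2, gap1, gap2, q0=0.2):
    """fs1, fs2: (N, 2) arrays of Fermi wave vectors; gap1, gap2: the
    (nodeless, sign-changing) gaps evaluated on those points."""
    I = 0.0
    for k, d1 in zip(fs1, gap1):
        for kp, d2 in zip(fs2, gap2):
            # on the Fermi surface E_k = |Delta_k|, so the coherence factor
            # 1 - Delta_k Delta_k'/(E_k E_k') is 2 for a sign change, 0 otherwise
            coh = 1.0 - d1 * d2 / (abs(d1) * abs(d2))
            I += coh * np.exp(-np.sum((k - kp - q)**2) / q0**2)
    return I / (len(fs1) * len(fs2))

# e.g. two small pockets at Gamma and (pi, 0) with opposite gap signs
th = np.linspace(0.0, 2*np.pi, 20, endpoint=False)
fsG = 0.2 * np.stack([np.cos(th), np.sin(th)], axis=1)
fsX = np.array([np.pi, 0.0]) + 0.3 * np.stack([np.cos(th), np.sin(th)], axis=1)
print(resonance_intensity(np.array([np.pi, 0.0]), fsG, fsX,
                          np.full(20, 1.0), np.full(20, -1.5)))
```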
§ SM5: AKLT CHAINS OR FRACTIONALIZED SPIN LIQUID IN FESE?
The theory of FeSe presented in the main manuscript consists of coupling a nematic quantum spin liquid of local moments, described by Schwinger boson mean field theory, to itinerant electrons. In the absence of this coupling, the local moments would behave as they do in insulators, where it is known <cit.> that this Schwinger boson mean field theory is unstable to confinement and forms chains of AKLT states. This confined state is presumably related to the AKLT chain states studied in Ref. FWang15 as a potential theory of the magnetism in FeSe.
Here we attempt to find an experimental signature that could distinguish the deconfined state studied in the main manuscript and an AKLT chain state in a future experiment. Our approach will be to make use of known results from one dimensional physics. If we decouple chains in the AKLT chain state, we can use numerical results on the well studied one dimensional model to determine signatures in neutron scattering for such a state. Then after determining experimentally relevant features of this state we will step back and assess the implication of these results for the broader question of the distinction between the AKLT chain state and nematic quantum spin liquid states.
In one dimension, the S=1 spin chain with bilinear and biquadratic interactions has a variety of phases including the AKLT state and a variety of phase transitions. The Hamiltonian is
H_S=J∑_i[cosθ( S_i· S_i+1) +sinθ( S_i· S_i+1)^2 ],
with -π≤θ≤π. As shown in Fig.<ref>, its phase diagram contains (see <cit.> and references therein): (1) a ferromagnetic phase for -π<θ<-3π/4 and π/2<θ≤π, (2) a dimerized phase for -3π/4<θ<-π/4, (3) a gapped and topologically ordered Haldane phase for -π/4<θ<π/4 (the AKLT state corresponds to θ=arctan1/3≃ 0.1024π) and (4) a gapless phase with antiferroquadrupolar (AFQ) correlations for π/4<θ<π/2 <cit.>. The dimerized phase and the Haldane phase are separated by a critical point, the Takhtajan-Babujian (TB) point <cit.>. At the TB point, the system possesses gapless spinon excitations. The spinon continuum is manifest in the dynamical spin structure factor (Fig.<ref>) as obtained using algebraic Bethe ansatz-based method <cit.>. As one moves away from the critical point into the Haldane phase, the spinons get confined, and the elementary excitations are magnons. However the magnons are strongly interacting and not always well defined: in addition to the one magnon branch, the two-magnon processes have important contributions to the dynamic spin structure factor (see Fig.<ref>) <cit.>. Returning back to the critical point, the two-magnon excitations merge with the one magnon excitations and only a continuum of spinon excitations remain<cit.>.
Presumably, the qualitative features of the one-dimensional model would carry over to a two-dimensional model of coupled chains. This implies that an AKLT chain state would similarly consist of both a continuum of excitations and a one-magnon branch similar to those found in Fig. <ref>. But the nematic quantum spin liquid in the main manuscript has no such one-magnon branch: the spinons at the mean-field level are deconfined. Distinguishing between the AKLT state and a nematic quantum spin liquid state is therefore a matter of finding evidence for the one-magnon branch of excitations in neutron scattering. If such a signature exists, it will provide strong evidence for the AKLT chain state.
Finally, we should mention that the presence of itinerant fermions likely complicates this story, as mentioned in the main text. The survival of the one-magnon branch, and/or even the fundamental distinction between confined and deconfined spinons, may disappear, though arguments in the literature suggest an FL* state, which preserves this fundamental distinction, is possible<cit.>.
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07809v1 | 20170126183818 | On the inverse problem of detecting cardiac ischemias: theoretical analysis and numerical reconstruction | [
"Elena Beretta",
"Cecilia Cavaterra",
"Maria Cristina Cerutti",
"Andrea Manzoni",
"Luca Ratti"
] | math.AP | [
"math.AP",
"math.NA"
] |
On the inverse problem of detecting cardiac ischemias:
theoretical analysis and numerical reconstruction
Elena BerettaDipartimento di Matematica "F. Brioschi", Politecnico di Milano ( [email protected]) Cecilia CavaterraDipartimento di Matematica, Università degli Studi di Milano ( [email protected]) M.Cristina CeruttiDipartimento di Matematica "F. Brioschi", Politecnico di Milano" ( [email protected]) Andrea ManzoniCMCS-MATHICSE-SB, Ecole Polytechnique Fédérale de Lausanne ( [email protected]) Luca RattiDipartimento di Matematica "F. Brioschi", Politecnico di Milano" ( [email protected])
January 26, 2017
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper we develop theoretical analysis and numerical reconstruction techniques for the solution of an inverse boundary value problem dealing with the nonlinear, time-dependent monodomain equation, which models the evolution of the electric potential in the myocardial tissue.
The goal is the detection of an inhomogeneity ω_ε (where the coefficients of the equation are altered) located inside a domain Ω, starting from observations of the potential on the boundary ∂Ω. Such a problem is related to the detection of myocardial ischemic regions, which are characterized by severely reduced blood perfusion and a consequent loss of electric conductivity.
In the first part of the paper we provide an asymptotic formula for electric potential perturbations caused by internal conductivity inhomogeneities of low volume fraction, extending the results published in <cit.> to the case of three-dimensional, parabolic problems. In the second part we implement a reconstruction procedure based on the topological gradient of a suitable cost functional. Numerical results obtained on an idealized three-dimensional left ventricle geometry for different measurement settings assess the feasibility and robustness of the algorithm.
§ INTRODUCTION
Mathematical and numerical models of computational electrophysiology can provide quantitative tools to describe electrical heart function and dysfunction <cit.>, often complementing imaging techniques (such as computed tomography and magnetic resonance) for diagnostic and therapeutic purposes. In this context, detecting pathological conditions or reconstructing model features, such as tissue conductivities, from potential measurements leads to the solution of an inverse boundary value problem. Standard electrocardiographic techniques attempt to infer electrophysiological
processes in the heart from body surface measurements of the electrical potential, as in the case of electrocardiograms (ECGs), or body surface ECGs (also known as body potential maps). These measurements can provide useful insights for the
reconstruction of the cardiac electrical activity within the so-called electrocardiographic imaging, by solving the well-known
inverse problem of electrocardiography[The inverse problem of electrocardiography aims at recovering the epicardial potential (that is, at the heart surface)
from body surface measurements <cit.>. Since the torso is considered as a passive conductor, such an inverse problem involves the linear steady diffusion model as direct problem. A step further, aiming at computing the potential inside the heart from the epicardial potential, has been considered, e.g., in <cit.>.]. A much more invasive option to acquire potential measurements is represented by non-contact electrodes inside a heart cavity to record endocardial potentials.
Here we focus on the problem of detecting the position and the size of myocardial ischemias from a single boundary measurement of the electric potential. Ischemia is a reversible precursor of heart infarction caused by partial occlusion of one or more coronary arteries, which supply blood to the heart. If this condition persists, myocardial cells die and the ischemia eventually degenerates in infarction. For the time being, we consider an insulated heart model, neglecting the coupling with the torso;
this results in the inverse problem of detecting inhomogeneities for a nonlinear parabolic reaction-diffusion equation (in our case, the so-called monodomain equation) from a single measurement of the endocardial potential. Our long-term goal is indeed to deal with an inverse problem for the coupled heart-torso model, in order to detect ischemias from body surface measurements, such as those acquired on each patient with symptoms of cardiac disease through an ECG.
The problem we consider in this paper is a mathematical challenge in itself, and has almost never been considered before. Difficulties include the nonlinearity of both the direct and the inverse problem, as well as the scarcity of available measurements. Indeed, even for the linear counterpart of the inverse problem, it has been shown in <cit.> and <cit.> that infinitely many measurements are needed to uniquely detect the unknown inclusions, and that the continuous dependence of the inclusion on the data is logarithmic <cit.>. Moreover, although the inverse problem of ischemia identification from measurements of surface potentials has been tackled in an
optimization framework for numerical purposes <cit.>, a detailed mathematical analysis of this problem has never been performed.
To our knowledge, no theoretical investigation of inverse problems related to ischemia detection involving the monodomain and/or the bidomain model has been
carried out. On the other hand, recent results regarding both the analysis and the numerical approximation of this inverse problem in a much simpler stationary case
have been obtained in <cit.>.
In order to obtain rigorous theoretical results additional assumptions are needed, for instance by considering small-size conductivity inhomogeneities. We thus model ischemic regions as small inclusions ω_ε where the electric conductivity is significantly smaller than that of the healthy tissue and there is no ion transport.
We establish a rigorous asymptotic expansion of the boundary potential perturbation due to the presence of the inclusion adapting to the parabolic nonlinear case the approach introduced
by Capdeboscq and Vogelius in <cit.> for the case of the linear conductivity equation. The theory of detection of small conductivity inhomogeneities from boundary measurements via asymptotic techniques has been developed in the last three decades in the framework of Electric Impedence Tomography (see, e.g., <cit.>). A similar approach has also been used in Thermal Imaging (see, e.g., <cit.>).
We use these results to set up a reconstruction procedure for detecting the inclusion. To this aim, as in <cit.>,
we propose a reconstruction algorithm based on topological optimization, where a suitable quadratic functional is minimized to detect the position and the size of
the inclusion (see also <cit.>).
Numerical results obtained on an idealized
left ventricle geometry
assess the feasibility of the proposed procedure.
Several numerical test cases also show the robustness of the reconstruction procedure with respect to measurement noise, which is unavoidable when dealing with real data. The modeling assumption on the small size of the inclusion, instrumental to the derivation of our theoretical results, is verified in practice in the case of residual ischemias after myocardial infarction. On the other hand, a fundamental task of ECG imaging is to detect the presence of ischemias as precursors of heart infarction without any constraint on their size. For this reason, we also consider the case of the detection of larger-size inclusions, for which the proposed algorithm still provides useful insights.
The paper is organized as follows. In Section 2 we describe the monodomain model of cardiac electrophysiology we are going to consider.
In Section 3 we show some suitable wellposedness results concerning the direct problems, in the unperturbed (background) and perturbed
cases. In Section 4 we prove useful energy estimates of the difference of the solutions of the two previous problems.
The asymptotic expansion formula is derived in Section 5 and the reconstruction algorithm in Section 6.
Numerical results are finally provided in Section 7. The Appendix, Section 8, is devoted to a technical proof of a result needed in Section 6.
§ THE MONODOMAIN MODEL OF CARDIAC ELECTROPHYSIOLOGY
The monodomain equation is a nonlinear parabolic reaction-diffusion PDE for the transmembrane potential, providing a mathematical description of the macroscopic electric activity of the heart <cit.>.
Throughout the paper we consider the following (background) initial and boundary value problem
ν C_m u_t- div (k_0∇ u) + ν f(u) = 0, in Ω× (0,T),
∂ u/∂ n = 0, on ∂Ω× (0,T),
u(0) = u_0, in Ω,
where Ω⊂ R^3 is a bounded set with boundary ∂Ω, and
k_0 ∈ℝ, k_0 >0. Here Ω is the domain occupied by the ventricle, u is the (transmembrane) electric potential, f(u) is a nonlinear term modeling the ionic current flows across the membrane of cardiac cells,
k_0 is the conductivity tensor of the healthy tissue, C_m >0 and ν >0 are two constant coefficients representing the membrane capacitance and the surface area-to-volume ratio, respectively.
For the sake of simplicity we deal with an insulated heart, namely we do not consider the effect of the surrounding torso, which behaves as a passive conductor.
The initial datum u_0 represents the initial activation of the tissue, arising from the propagation of the electrical impulse in the cardiac conduction system. This equation yields a macroscopic model of the cardiac
tissue, arising from the superposition of intra and extra cellular media, both assumed to occupy the whole heart volume (bidomain model), making the hypothesis that the extracellular and the intracellular conductivities are proportional quantities. Concerning the mathematical analysis of both the monodomain and the bidomain models, some results on the related direct problems have been obtained for instance in <cit.>.
We thus assume a phenomenological model to describe the effect of ionic currents through a nonlinear function of the potential.
We neglect the coupling with the ODE system modeling the evolution of the so-called gating variables, which represent the amount of open channels per unit area of the cellular membrane and thus regulate the transmembrane
currents.
In the case of a single gating variable w, a well-known option would be to replace f by g =g(u,w) where
g(u, w) = - β u(u-α)(u-1) - w,
and w solves the following ODE initial value problem, ∀ x ∈Ω,
∂ w/∂ t = ρ(u- γ w) in (0,T) , w(0) = w_0,
for suitable (constant) parameters β, α, ρ, γ. This is the so-called FitzHugh-Nagumo model for the ionic current, and the gating variable w is indeed a recovery function allowing to take into
account the depolarization phase. See, e.g., <cit.> for more details. In our case, the model (<ref>) is indeed widely used to characterize the large-scale propagation of the
front-like solution in the cardiac excitable medium.
As suggested in <cit.> and <cit.>, hereon we consider the cubic function
f(u) = A^2(u-u_1)(u-u_2)(u-u_3), u_i ∈ℝ, u_1 < u_2 < u_3,
where A>0 is a parameter determining the rate of change of u in the depolarization phase, and u_1 < u_2 < u_3 are given constant values representing the resting, threshold and peak potentials,
respectively. Possible values of the parameters are, e.g., u_1 = -85 mV, u_2 = - 65 mV and u_3 = 40 mV, A=0.04, see <cit.>.
Note that both the sharpness of the wavefront and its propagation speed strongly depend on the value of the parameter A.
Consider now a small inhomogeneity located in a measurable bounded domain ω_ε⊂Ω, such that there exist a
compact set K_0, with ω_ε⊂ K_0 ⊂Ω, and a constant d_0 >0 satisfying
dist(ω_ε, Ω\ K_0) ≥ d_0 >0.
Moreover, we assume
|ω_ε| >0, lim_ε→ 0 |ω_ε| = 0.
In the inhomogeneity ω_ε the conductivity coefficient and the nonlinearity take different values with respect the ones in
Ω\ω_ε. The problem we consider is therefore
ν C_m u^ε_t- div (k_ε∇ u^ε) + νχ_Ω\ω_ε f(u^ε) = 0,
in Ω× (0,T),
∂ u^ε/∂ n = 0, on ∂Ω× (0,T),
u^ε(0) = u_0, in Ω,
where χ_D stands for the characteristic function of a set D ⊂ℝ^3. Here
k_ε = (k_0 - k_1) χ_Ω\ω_ε + k_1 =
k_0 in Ω\ω_ε,
k_1 in ω_ε,
with k_0, k_1 ∈ℝ, k_0 > k_1 >0.
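Although the analysis below is fully rigorous, it may help to keep in mind a minimal numerical realization of a one-dimensional analogue of the perturbed problem (<ref>). The following sketch (our own illustration: geometry, conductivities, grid and time step are arbitrary choices, not values from the paper) treats diffusion implicitly and the ionic current explicitly, switching the latter off inside ω_ε:

```python
import numpy as np

N, T, dt = 200, 50.0, 0.05
x = np.linspace(0.0, 1.0, N); h = x[1] - x[0]
u1, u2, u3, A = -85.0, -65.0, 40.0, 0.04
k0, k1 = 1e-3, 1e-4                        # healthy / ischemic conductivity
ins = (x > 0.45) & (x < 0.55)              # the inclusion omega_eps
k = np.where(ins, k1, k0)
chi = (~ins).astype(float)                 # chi_{Omega \ omega_eps}
f = lambda u: A**2 * (u - u1) * (u - u2) * (u - u3)

# matrix of  I - dt * d/dx(k d/dx)  with homogeneous Neumann conditions
kf = 0.5 * (k[:-1] + k[1:])                # conductivity at cell faces
M = np.eye(N)
for i in range(N):
    kl = kf[i - 1] if i > 0 else 0.0       # zero flux at the boundary
    kr = kf[i] if i < N - 1 else 0.0
    M[i, i] += dt * (kl + kr) / h**2
    if i > 0:     M[i, i - 1] = -dt * kl / h**2
    if i < N - 1: M[i, i + 1] = -dt * kr / h**2

u = u1 + (u3 - u1) * 0.5 * (1.0 + np.tanh((0.1 - x) / 0.02))  # initial activation
for _ in range(int(T / dt)):
    u = np.linalg.solve(M, u - dt * chi * f(u))
print("depolarization front near x =", x[np.argmin(np.abs(u - u2))])
```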
§ WELL POSEDNESS OF THE DIRECT PROBLEM
Problem (<ref>) thus describes the propagation of the initial activation u_0 in an insulated heart portion (e.g., the left ventricle), and hereon will be referred to as the background problem;
we devote Section <ref> to the analysis of its well-posedness. The well-posedness of the perturbed problem modeling the presence of a small ischemic region in the domain will be instead analyzed in Section <ref>.
§.§ Well posedness of the background problem
For the sake of simplicity, throughout the paper we set
ν = C_m = 1 and we assume that
Ω∈ C^2+α, α∈ (0,1),
u_0 ∈ C^2+α(Ω),
u_1< u_0(x) <u_3 ∀ x ∈Ω, ∂ u_0 (x)/∂ n = 0 ∀ x ∈∂Ω.
Moreover, let us set
M_1:=f_C([u_1,u_3]), M_2 := f^'_C([u_1,u_3]).
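For the sample values quoted in Section 2 (A=0.04, u_1=-85, u_2=-65, u_3=40), these constants are easily evaluated numerically (a quick sketch of our own):

```python
import numpy as np

# M_1 = max |f| and M_2 = max |f'| on [u_1, u_3] for the cubic f of Section 2.
u1, u2, u3, A = -85.0, -65.0, 40.0, 0.04
u = np.linspace(u1, u3, 200001)
f = A**2 * (u - u1) * (u - u2) * (u - u3)
fp = A**2 * ((u - u2)*(u - u3) + (u - u1)*(u - u3) + (u - u1)*(u - u2))
print("M_1 ~", np.abs(f).max(), "  M_2 ~", np.abs(fp).max())
```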
The following well posedness result holds.
Assume (<ref>), (<ref>), (<ref>). Then problem (<ref>) admits a unique solution u∈ C^2+ α, 1 + α/2(Ω× [0,T])
such that
u_1 ≤ u(x,t) ≤ u_3, (x,t) ∈Ω× [0,T],
u _C^2+ α, 1 + α/2(Ω× [0,T])≤ C,
where C is a positive constant depending (at most) on k_0, T, Ω, M_1, M_2,u_0_C^2 + α(Ω).
We omit the details of the proof since (<ref>) can be easily obtained using the results in <cit.>
and (<ref>) by means of <cit.>.
§.§ Well posedness of the perturbed problem
The well-posedness of the perturbed problem (<ref>) is provided by the following theorem.
Assume (<ref>), (<ref>), (<ref>), (<ref>). Then problem (<ref>) admits a unique solution u^ε such that
u^ε∈ L^2(0,T; H^1(Ω)) ∩ C([0,T]; L^2(Ω)), u_t^ε∈ L^2(0,T; (H^1(Ω))^') + L^4/3(Ω× (0,T)).
Moreover, u^ε∈ C^α, α/2(Ω× [0,T]) and the following estimate holds
u^ε_C^α, α/2(Ω× [0,T])≤ C,
where C is a positive constant depending (at most) on k_0, k_1, T, Ω, u_0_C^α(Ω) and M_1.
Throughout the proof C will be as in the statement of the Theorem.
Recalling the definition of f, there exist k ≥ 0, α_1 >0, α_2 >0, λ >0 such that
α_1u^4 - k ≤ f(u)u ≤α_2u^4 + k, f^'(u) ≥ - λ.
We formulate problem (<ref>) in the weak form
∫_Ω u^ε_t v + ∫_Ω k_ε∇ u^ε·∇ v
+ ∫_Ωχ_Ω\ω_ε f(u^ε)v = 0 , ∀ v ∈ H^1(Ω).
Setting f̃(u) = f(u) - u, (<ref>) becomes
∫_Ω u^ε_t v + ∫_Ω k_ε∇ u^ε·∇ v
+ ∫_Ωχ_Ω\ω_ε u^ε v
+ ∫_Ωχ_Ω\ω_εf̃(u^ε)v = 0 , ∀ v ∈ H^1(Ω).
Observe that, thanks to the following Poincaré type inequality in <cit.>
z^2_H^1(Ω)≤ S(Ω) (∇ z^2_L^2(Ω)
+ z^2_L^2(Ω\ω_ε)), ∀ z ∈ H^1(Ω),
the bilinear form
a_ε(u^ε,v) = ( ∫_Ω k_ε∇ u^ε·∇ v
+ ∫_Ω\ω_ε u^ε v ) is coercive.
Indeed
a_ε(u^ε,u^ε) = ∫_Ω k_ε |∇ u^ε|^2 + ∫_Ω\ω_ε (u^ε)^2
≥ Su^ε^2_H^1(Ω),
where S is a positive constant depending on Ω and k_1.
Through the classical Faedo-Galerkin approximation scheme it is possible to prove that problem (<ref>) admits a unique weak solution u^ε
satisfying (<ref>).
In order to obtain further regularity for u^ε, let {ϕ_n} be a sequence such that
ϕ_n ∈ C^1(Ω), 0 ≤ϕ_n(x) ≤ 1, ∀ x ∈Ω, ϕ_n(x) = 1, ∀ x ∈Ω\ω_ε, and ϕ_n →χ_Ω∖ω_ε in L^∞(Ω),
and formulate the approximating problems
u^n_t- div (((k_0 - k_1)ϕ_n + k_1)∇ u^n) + ϕ_n f(u^n) = 0,
in Ω× (0,T),
∂ u^n ∂ n = 0, on ∂Ω× (0,T),
u^n(0) = u_0, in Ω.
Using the same arguments as in the proof of Theorem <ref>, we can prove that, ∀ n ∈ℕ, problem (<ref>)
admits a unique solution u^n such that
u^n ∈ C(Ω× [0,T]), u_1 ≤ u^n(x,t) ≤ u_3, (x,t) ∈Ω× [0,T].
Moreover, by means again of a standard Faedo-Galerkin approximation scheme (for any n) we can prove that the solution to problem (<ref>)
satisfies
u^n ∈ L^2(0,T; H^1(Ω)) ∩ C([0,T]; L^2(Ω)),
u_t^n ∈ L^2(0,T; (H^1(Ω))^') + L^4/3(Ω× (0,T)),
u^n^2_L^∞(0,T; L^2(Ω))≤ C, u^n^2_L^2(0,T; H^1(Ω))≤ C,
u_t^n^2_L^4/3(0,T; (H^1(Ω))^')≤ C, ϕ_nf(u^n)^2_L^4/3(Ω× (0,T))≤ C,
where C are some positive constants independent of n.
An application of <cit.> implies that, up to a subsequence,
u^n →ζ strongly in L^2(Ω× (0,T)),
so that
u^n →ζ a.e. in Ω× (0,T) and
ϕ_nf(u^n) ⇀χ_Ω∖ω_εf(ζ) in L^4/3(Ω× (0,T)).
Since problem (<ref>) has a unique solution (cf. (<ref>)), we conclude that ζ=u^ε, which
satisfies
u^ε^2_L^∞(0,T; L^2(Ω))≤ C, u^ε^2_L^2(0,T; H^1(Ω))≤ C,
u_t^ε^2_L^4/3(0,T; (H^1(Ω))^')≤ C,
u_1 ≤ u^ε (x,t)≤ u_3, in Ω× [0,T].
Considering now the interior regularity result in <cit.> (see also <cit.>)
and the regularity up to the boundary contained in <cit.>, we deduce (<ref>).
§ ENERGY ESTIMATES FOR 𝐮^ε - 𝐮
In this section we prove some energy estimates for the difference between u^ε and u, solutions to problem (<ref>) and problem (<ref>), respectively, that are crucial to establish the asymptotic formula for u^ε-u of Theorem 4 in Section 5.
Assume (<ref>), (<ref>), (<ref>), (<ref>). Setting w := u^ε - u, then
w_L^∞(0,T;L^2(Ω))≤ C|ω_ε|^1/2,
w_L^2(0,T;H^1(Ω))≤ C|ω_ε|^1/2.
Moreover, there exists 0<β<1 such that
w_L^2(Ω× (0,T))≤ C |ω_ε|^1/2 + β.
Here C stands for a positive constant depending (at most) on k_0, k_1, Ω, T, M_1, M_2, u_0_C^2+α(Ω).
Throughout the proof C will be as in the statement of the Theorem.
On account of the assumptions, Theorems <ref> and <ref> hold. Then w solves the problem
w_t- div (k_ε∇ w)
+ χ_Ω∖ω_ε p_ε w
= - div (kχ_ω_ε∇ u) + χ_ω_ε f(u), in Ω× (0,T),
∂ w/∂ n = 0, on ∂Ω× (0,T),
w(0) = 0, in Ω,
where we have set k := k_0 - k_1 > 0 and
p_ε w := f^'(z_ε)w = f(u^ε) - f(u),
z_ε being a value between u^ε and u. By means of (<ref>), (<ref>)
and recalling (<ref>), we have
u_1 ≤ z_ε≤ u_3, |p_ε| = |f^'(z_ε)| ≤ M_2.
Multiplying the first equation in (<ref>) by w and integrating by parts over Ω, we get
1/2d/dt∫_Ω w^2 + ∫_Ω k_ε|∇ w|^2
+ ∫_Ωχ_Ω∖ω_ε p_ε w^2
= ∫_Ωkχ_ω_ε∇ u∇ w + ∫_Ωχ_ω_ε f(u)w.
Adding and subtracting ∫_Ωχ_Ω∖ω_ε w^2 and applying (<ref>) we obtain
1/2d/dt∫_Ω w^2 + Sw^2_H^1(Ω)≤∫_ω_εk∇ u∇ w + ∫_ω_ε f(u)w
- ∫_Ωχ_Ω∖ω_ε (p_ε - 1) w^2.
Recalling (<ref>) and (<ref>), thanks to Young's inequality we deduce
1/2d/dt∫_Ω w^2 + Sw^2_H^1(Ω)≤k (k/2S∫_ω_ε | ∇ u|^2 + S/2k∫_Ω|∇ w|^2)
+ 1/2∫_ω_ε (f(u))^2
+ ∫_Ω (M_2 + 3/2) w^2,
so that
1/2d/dt∫_Ω w^2 + S/2w^2_H^1(Ω)≤(k)^2/2S∫_ω_ε | ∇ u|^2
+ 1/2∫_ω_ε M_1^2
+ (M_2 + 3/2) ∫_Ω w^2,
and finally, see (<ref>),
d/dtw(t)^2_L^2(Ω)≤ C (|ω_ε| + w(t)^2_L^2(Ω) ).
Recalling w(0) = 0, an application of Gronwall's Lemma implies
w(t)^2_L^2(Ω)≤ C|ω_ε|, t ∈ (0,T),
so that (<ref>) follows.
Integrating now inequality (<ref>) on (0,T) we get
∫_Ω w^2(T) + C∫_0^Tw(t)^2_H^1(Ω)dt
≤ C (|ω_ε| + ∫_0^Tw(t)^2_L^2(Ω)dt),
and a combination with (<ref>) gives (<ref>).
In order to obtain the more refined estimate (<ref>), observe that w also
solves problem
w_t- div (k_0∇ w)
+ χ_Ω∖ω_ε p_ε w
= - div (kχ_ω_ε∇ u^ε) + χ_ω_ε f(u), in Ω× (0,T),
∂ w/∂ n = 0, on ∂Ω× (0,T),
w(0) = 0, in Ω.
Let us now introduce the auxiliary function w̄, solution to the adjoint problem
w̄_t + div (k_0∇w̄)
- χ_Ω∖ω_ε p_εw̄
= - w, in Ω× (0,T),
∂w̄/∂ n = 0, on ∂Ω× (0,T),
w̄(T) = 0, in Ω.
By the change of variable t → T - t, problem (<ref>) is equivalent to
z_t - div (k_0∇ z)
+ χ_Ω / ω_εp̂_ε z
= ŵ, in Ω× (0,T),
∂ z ∂ n = 0, on ∂Ω× (0,T),
z (0) = 0, in Ω,
where we have set z(x,t) = w̄(x, T - t), p̂_ε (x,t) = p_ε (x,T - t), ŵ (x,t)= w(x, T-t).
Since |χ_Ω∖ω_εp̂_ε| is bounded in Ω× (0,T) and w ∈ C^α, α/2(Ω× [0,T]),
standard arguments show that problem (<ref>) admits a unique solution z such that (see <cit.>)
z ∈ W^2,1_2(Ω× (0,T)) := { z ∈ L^2(Ω× (0,T)) | z ∈ H^1(0,T; L^2(Ω)) ∩ L^2(0,T;H^2(Ω))}.
From here on, up to equation (<ref>), all the equations depend on t and are valid for every t ∈ (0,T); however, we will omit this dependence for the sake of lighter notation.
Moreover, multiplying the first equation in (<ref>) by z and integrating over Ω, we get
1/2d/dt∫_Ω z^2 + k_0∫_Ω|∇ z|^2 + k_0∫_Ω z^2
= ∫_Ωŵ z - ∫_Ωχ_Ω∖ω_εp̂_ε z^2 + k_0∫_Ω z^2.
By means of Young's inequality and recalling (<ref>), we have
1/2d/dt z_L^2(Ω)^2 + k_0/2z^2_H^1(Ω)≤1/2k_0ŵ_L^2(Ω)^2 + (M_2 + k_0) z_L^2(Ω)^2,
and then
d/dt z_L^2(Ω)^2
≤1/k_0ŵ_L^2(Ω)^2 + 2(M_2 + k_0) z_L^2(Ω)^2.
Recalling z(x,0)= 0, an application of Gronwall's Lemma gives
z _L^2(Ω)^2
≤ Cŵ_L^2(Ω)^2.
Let us now multiply the first equation in (<ref>) by z_t and integrate over Ω. We get
∫_Ω z_t^2 + k_0/2d/dt∫_Ω |∇ z|^2
= ∫_Ωŵ z_t - ∫_Ωχ_Ω∖ω_εp̂_ε z z_t.
An application of Young's inequality gives
1/2∫_Ω z_t^2 + k_0/2d/dt∫_Ω|∇ z|^2
≤∫_Ω (ŵ)^2 + ∫_Ωχ_Ω∖ω_ε (p̂_ε)^2 z^2,
and then
1/2z_t_L^2(Ω)^2 + k_0/2d/dt∇ z_L^2(Ω)^2
≤ŵ_L^2(Ω)^2 +M_2^2 z_L^2(Ω)^2.
Combining (<ref>) and (<ref>),
integrating in time on (0,t), and using ∇ z(0) = 0 we deduce
∇ z(t)_L^2(Ω)^2 ≤ C ŵ_L^2(Ω× (0,t))^2, t ∈ (0,T),
so that
z_L^∞(0,T;H^1(Ω))^2 ≤ C ŵ_L^2(Ω× (0,T))^2.
The same computations also give
z_t_L^2(Ω× (0,T))^2
≤ Cŵ_L^2(Ω× (0,T))^2.
Then, an application of standard elliptic regularity results to problem (<ref>) implies (see <cit.>)
z^2_L^2(0,T;H^2(Ω))
≤ Cŵ^2_L^2(Ω× (0,T)).
Recalling the definitions of z and ŵ, estimates (<ref>) and (<ref>)
give
w̄^2_L^∞(0,T;H^1(Ω)) + w̄^2_L^2(0,T;H^2(Ω))
≤ Cw^2_L^2(Ω× (0,T)).
Finally, we want to prove that there exists p > 2 such that
w̄_L^p(Ω× (0,T)) + ∇w̄_L^p(Ω× (0,T))≤ Cw_L^2(Ω× (0,T)).
To this aim, on account of (<ref>) and Sobolev embedding theorems, we deduce
w̄^2_L^6(Ω× (0,T))≤ Cw̄^2_L^∞(0,T;H^1(Ω))≤ Cw^2_L^2(Ω× (0,T)).
Moreover, again from (<ref>) we have
∇w̄∈ L^∞(0,T; L^2(Ω)) ∩ L^2(0,T;L^6(Ω)).
From well-known interpolation estimates (cf. <cit.>) we infer
∇w̄^10/3_L^10/3(Ω× (0,T))≤ C∇w̄^2_L^2(0,T;L^6(Ω))∇w̄^4/3_L^∞(0,T;L^2(Ω))
and therefore, using (<ref>),
∇w̄^10/3_L^10/3(Ω× (0,T))≤ C w^2_L^2(Ω× (0,T)) w^4/3_L^2(Ω× (0,T))≤ C w^10/3_L^2(Ω× (0,T)),
so that (<ref>) holds for any p ∈ (2, 10/3].
Let us now multiply the evolution equation in (<ref>) by w̄ and the evolution equation in (<ref>) by w, respectively.
Integrating on Ω we obtain
∫_Ω w_t w̄ + k_0 ∫_Ω∇ w ·∇w̄
+ ∫_Ωχ_Ω∖ω_ε p_ε w w̄
= k̃∫_ω_ε∇ u^ε·∇w̄ + ∫_ω_ε f(u) w̄,
∫_Ωw̄_t w - k_0 ∫_Ω∇w̄·∇ w
- ∫_Ωχ_Ω∖ω_ε p_εw̄ w
= - ∫_Ω w^2.
Summing up (<ref>) and (<ref>) we obtain
∫_Ω (w_t w̄ + w̄_t w) =
k̃∫_ω_ε∇ u^ε·∇w̄ + ∫_ω_ε f(u) w̄
- ∫_Ω w^2;
subsequently, an integration in time on (0,T) gives
∫_0^T ∫_Ω w^2
= -∫_0^T∫_Ω (w_t w̄ + w̄_t w)
+ k̃∫_0^T∫_ω_ε∇ u^ε·∇w̄ + ∫_0^T∫_ω_ε f(u) w̄.
Recalling the conditions at time t = 0 for w and at time t = T for w̄, we get
∫_0^T∫_Ω (w_t w̄ + w̄_t w) = ∫_Ω∫_0^T d/dt(w w̄) = ∫_Ω ( (w w̄)(T) - (w w̄)(0) ) = 0,
so that (<ref>) becomes
∫_0^T ∫_Ω w^2
= k̃∫_0^T∫_ω_ε∇ u^ε·∇w̄ + ∫_0^T∫_ω_ε f(u) w̄.
Using now Hölder's inequality we deduce
w^2_L^2(Ω× (0,T))
≤∇ u^ε_L^q(ω_ε× (0,T))∇w̄_L^p(ω_ε× (0,T))
+ f(u)_L^q(ω_ε× (0,T))w̄_L^p(ω_ε× (0,T)),
where we may choose for instance p = 10/3 and q = 10/7.
By means of (<ref>) and (<ref>), from the previous inequality we get
w^2_L^2(Ω× (0,T))≤ C w_L^2(Ω× (0,T)) (∇ u^ε_L^q(ω_ε× (0,T))
+ |ω_ε|^1/q),
and therefore
w_L^2(Ω× (0,T))≤ C (∇ u^ε_L^q(ω_ε× (0,T))
+ |ω_ε|^1/q ).
Thanks to (<ref>) we also have
∇ u^ε_L^q(ω_ε× (0,T)) ≤∇ u^ε - ∇ u_L^q(ω_ε× (0,T)) + ∇ u_L^q(ω_ε× (0,T))
≤∇ w_L^q(ω_ε× (0,T)) + C|ω_ε|^1/q.
Finally, using again Hölder's inequality and (<ref>), and recalling that q ∈ [10/7, 2), we obtain
∇ w_L^q(ω_ε× (0,T)) ≤ ( ∫_0^T (∫_ω_ε|∇ w(t)|^2 )^q/2 (∫_ω_ε 1 )^(2-q)/2 )^1/q
≤ |ω_ε|^1/q - 1/2 (∫_0^T∇ w(t)^q_L^2(Ω) )^1/q = |ω_ε|^1/q - 1/2∇ w_L^q(0,T; L^2(Ω))
≤ C |ω_ε|^1/q - 1/2∇ w_L^2(Ω× (0,T))≤ C |ω_ε|^1/q.
Combining the previous estimate with (<ref>), since 1/q∈(1/2,7/10] we can conclude that (<ref>) holds with β∈(0,1/5].
§ THE ASYMPTOTIC FORMULA
In this section we derive and prove an asymptotic representation formula for w = u_ε - u in analogy with <cit.> and <cit.>.
Let Φ =Φ(x,t) be any solution of
Φ_t + k_0ΔΦ - f^'(u)Φ = 0, in Ω× (0,T),
Φ(T) = 0, in Ω.
Our main result is the following
Assume (<ref>), (<ref>), (<ref>), (<ref>). Let u^ε and u be the solutions to (<ref>) and (<ref>)
and Φ a solution to (<ref>), respectively. Then, there exist a sequence ω_ε_n satisfying (<ref>) and (<ref>)
with |ω_ε_n|→ 0, a regular Borel measure μ and a symmetric matrix ℳ with elements
ℳ_ij∈ L^2(Ω, dμ) such that, for ε→ 0,
∫_0^T∫_∂Ω k_0∂Φ/∂ n(u^ε - u)
= |ω_ε_n| {∫_0^T∫_Ωk̃ℳ∇ u·∇Φ dμ
+ ∫_0^T∫_Ω f(u)Φ dμ + o(1) }.
To prove Theorem <ref>, we need to state some preliminary results. Let v_ε^(j) and v^(j) be the variational solutions
(depending only on x ∈Ω) to the problems
(PV_ε): div (k_ε∇ v_ε^(j)) = 0 in Ω, ∂ v_ε^(j)/∂ n = n_j on ∂Ω, ∫_∂Ω v_ε^(j) = 0;
(PV_0): div (k_0∇ v^(j)) = 0 in Ω, ∂ v^(j)/∂ n = n_j on ∂Ω, ∫_∂Ω v^(j) = 0;
n_j being the j-th coordinate of the outward normal to ∂Ω. It can be easily verified that
v^(j) = x_j - 1/|∂Ω|∫_∂Ωx_j.
The following results hold
Let v_ε^(j) and v^(j) solutions to (<ref>), then there exists C(Ω)>0 such that
v_ε^(j) - v^(j)_H^1(Ω)≤ C(Ω)|ω_ε|^1/2.
Moreover, for some η∈ (0, 1/2), there exists C(Ω, η)>0 such that
v_ε^(j) - v^(j)_L^2(Ω)≤ C(Ω, η)|ω_ε|^1/2 + η.
See Lemma 1 in <cit.>.
Let u and u_ε be the solutions to problems (<ref>) and (<ref>), respectively.
Consider v_ε^(j) and v^(j) as in (<ref>).
Then, for any Φ∈ C^1(Ω× [0,T]) with Φ(x,T) = 0, the following holds as ε→ 0,
∫_0^T∫_Ω1/|ω_ε|χ_ω_ε∇ u·∇ v_ε^(j)Φ dxdt =
∫_0^T∫_Ω1/|ω_ε|χ_ω_ε∇ u_ε·∇ v^(j)Φ dxdt + o(1).
We follow the ideas in <cit.> and <cit.>.
Since w = u_ε - u, then we obtain the identity
∫_Ω k_0∇ w·∇ (v^(j)Φ) = ∫_Ω k_0∇ w·∇ v^(j)Φ
+ ∫_Ω k_0∇ w·∇Φ v^(j)
= - ∫_Ω k_0 w ∇ v^(j)·∇Φ + ∫_∂Ω k_0 w n_jΦ
+∫_Ω k_0∇ w·∇Φ v^(j).
Moreover, we have
∫_0^T∫_Ω k_ε∇ w·∇ (v_ε^(j)Φ)
= ∫_0^T∫_Ω ( k_ε∇ w·∇ v_ε^(j)Φ
+ k_ε∇ w·∇Φ v^(j)
+ k_ε∇ w·∇Φ (v^(j)_ε - v^(j)) )
= ∫_0^T ∫_Ω k_ε∇ w·∇ v_ε^(j)Φ
+ ∫_Ω k_0∇ w·∇Φ v^(j)
+ ∫_Ω (k_ε - k_0)∇ w·∇Φ v^(j) + ∫_Ω k_ε∇ w·∇Φ (v^(j)_ε - v^(j))
= ∫_0^T ( - ∫_Ω k_ε w ∇ v_ε^(j)·∇Φ
+ ∫_∂Ωk_0 w n_jΦ
+ ∫_Ω k_0∇ w·∇Φ v^(j)
+ ∫_Ω (k_ε - k_0)∇ w·∇Φ v^(j)
+ ∫_Ω k_ε∇ w·∇Φ (v^(j)_ε - v^(j)) )
= ∫_0^T ( - ∫_Ω k_ε w ∇ v^(j)·∇Φ
+ ∫_∂Ωk_0 w n_jΦ
+ ∫_Ω k_0∇ w·∇Φ v^(j)
+ ∫_ω_ε (k_1 - k_0)∇ w·∇Φ v^(j) + ∫_Ω k_ε∇ w·∇Φ (v^(j)_ε - v^(j))
- ∫_Ω k_ε w ∇ (v^(j)_ε - v^(j))·∇Φ).
A combination with (<ref>) gives
∫_0^T∫_Ω k_ε∇ w·∇ (v_ε^(j)Φ )
= ∫_0^T ( ∫_Ω k_0∇ w·∇ (v^(j)Φ ) + ∫_Ω(k_0 - k_ε) w ∇ v^(j)·∇Φ
+ ∫_ω_ε (k_1 - k_0)∇ w·∇Φ v^(j) + ∫_Ω k_ε∇ w·∇Φ (v^(j)_ε - v^(j))
- ∫_Ω k_ε w·∇ (v^(j)_ε - v^(j))·∇Φ)
= ∫_0^T ( ∫_Ω k_0∇ w·∇ (v^(j)Φ) + ∫_ω_ε (k_1 - k_0)∇ w·∇Φ v^(j)
+ ∫_ω_ε(k_0 - k_1) w ∇ v^(j)·∇Φ + ∫_Ω k_ε∇ w·∇Φ (v^(j)_ε - v^(j))
- ∫_Ω k_ε w ∇ (v^(j)_ε - v^(j))·∇Φ).
Then, on account of (<ref>), (<ref>), (<ref>), (<ref>) and Schwarz inequality, we get
∫_0^T∫_Ω k_ε∇ w·∇ (v_ε^(j)Φ )
= ∫_0^T ( ∫_Ω k_0∇ w·∇ (v^(j)Φ ) - ∫_ω_εk̃∇ w·∇Φ v^(j) )
+o(|ω_ε|).
Let us consider now problem (<ref>). Multiplying both sides of the first equation by v_ε^(j)Φ, and integrating by parts
on Ω× (0,T), we obtain
∫_0^T∫_Ω w_tv_ε^(j)Φ + ∫_0^T∫_Ω k_ε∇ w ·∇ (v_ε^(j)Φ)
+ ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w v_ε^(j)Φ
= ∫_0^T∫_ω_εk̃∇ u·∇(v_ε^(j)Φ) + ∫_0^T∫_ω_ε f(u)v_ε^(j)Φ.
On the other hand, multiplying the first equation in (<ref>) by v^(j)Φ and integrating by parts on Ω× (0,T), we get
∫_0^T∫_Ω w_t v^(j)Φ + ∫_0^T∫_Ω k_0∇ w ·∇(v^(j)Φ)
+ ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w v^(j)Φ
= ∫_0^T∫_ω_εk̃∇ u^ε·∇(v^(j)Φ) + ∫_0^T∫_ω_ε f(u) v^(j)Φ.
A combination of (<ref>), (<ref>) and (<ref>) gives, for ε→ 0,
∫_0^T∫_ω_εk̃∇ u·∇(v_ε^(j)Φ) + ∫_0^T∫_ω_ε f(u)v_ε^(j)Φ
- ∫_0^T∫_Ω w_tv_ε^(j)Φ
- ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w v_ε^(j)Φ
= ∫_0^T∫_ω_εk̃∇ u^ε·∇(v^(j)Φ) + ∫_0^T∫_ω_ε f(u) v^(j)Φ
- ∫_0^T∫_Ω w_t v^(j)Φ
- ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w v^(j)Φ
- ∫_0^T∫_ω_εk̃∇ w·∇Φ v^(j)
+ o(|ω_ε|),
from which we deduce
∫_0^T∫_ω_εk̃∇ u·∇(v_ε^(j)Φ)
= ∫_0^T∫_Ω w_t(v_ε^(j) - v^(j))Φ
+ k̃∫_0^T∫_ω_ε ( ∇ u^ε·∇(v^(j)Φ)
- ∇ u^ε·∇Φ v^(j) )
+ ∫_0^T∫_ω_εk̃∇ u·∇Φ v^(j)
- ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w (v_ε^(j) - v^(j))Φ
+ ∫_0^T∫_ω_ε f(u) (v^(j) - v_ε^(j))Φ
+ o(|ω_ε|).
By means of (<ref>), (<ref>), (<ref>) and (<ref>), and recalling also (<ref>) and (<ref>),
an application of the Hölder inequality both in space and time gives, for ε→ 0,
k̃∫_0^T∫_ω_ε∇ u·∇(v_ε^(j)Φ)
= ∫_0^T∫_Ω w_t(v_ε^(j) - v^(j))Φ
+ k̃∫_0^T∫_ω_ε ( ∇ u^ε·∇ v^(j)Φ
+ ∇ u ·∇Φ v^(j) )
+ o(|ω_ε|)
and then
k̃∫_0^T∫_ω_ε∇ u·∇ v_ε^(j)Φ
= ∫_0^T∫_Ω w_t(v_ε^(j) - v^(j))Φ
+ k̃∫_0^T∫_ω_ε∇ u^ε·∇ v^(j)Φ
+ k̃∫_0^T∫_ω_ε∇ u ·∇Φ (v^(j) - v_ε^(j))
+ o(|ω_ε|)
= ∫_0^T∫_Ω w_t(v_ε^(j) - v^(j))Φ
+ k̃∫_0^T∫_ω_ε∇ u^ε·∇ v^(j)Φ
+ o(|ω_ε|).
Consider the first term in the last line of (<ref>). Integrating by parts in time and recalling that
Φ(T) = 0, w(0) = 0, (v^(j)- v_ε^(j))_t = 0, we finally have
(cf. also (<ref>), (<ref>)), for ε→ 0,
∫_0^T∫_Ω w_t(v_ε^(j) - v^(j))Φ =
∫_Ω [w(v_ε^(j) - v^(j))Φ](T)
- ∫_Ω [w(v_ε^(j) - v^(j))Φ](0)
- ∫_0^T∫_Ω w(v_ε^(j) - v^(j))_tΦ - ∫_0^T∫_Ω w(v_ε^(j) - v^(j))Φ _t
= - ∫_0^T∫_Ω w(v_ε^(j) - v^(j))Φ _t = o(|ω_ε|).
Combining (<ref>) and (<ref>) we get
k̃∫_0^T∫_ω_ε∇ u·∇ v_ε^(j)Φ
= k̃∫_0^T∫_ω_ε∇ u^ε·∇ v^(j)Φ
+ o(|ω_ε|), ε→ 0,
and formula (<ref>) follows.
Proof of Theorem <ref>.
Following <cit.>, there exist a regular Borel measure μ, a symmetric matrix ℳ with elements
ℳ_ij∈ L^2(Ω, dμ), a sequence ω_ε_n with |ω_ε_n| → 0 such that
|ω_ε_n|^-1χ_ω_ε_n dx → dμ,
|ω_ε_n|^-1χ_ω_ε_n∂ v_ε_n^(j)/∂ x_i dx →ℳ_ij dμ,
in the weak^* topology of C^0(Ω).
On account of (<ref>), we deduce also
|ω_ε_n|^-1χ_ω_ε_n∂ u(t)/∂ x_i∂ v_ε_n^(j)/∂ x_i dx
→ℳ_ij∂ u(t)/∂ x_i dμ, ∀ t ∈ (0,T),
in the weak^* topology of C^0(Ω).
Moreover, recalling (<ref>), (<ref>) and (<ref>), we get
| ∫_0^T∫_Ωχ_ω_ε_n/|ω_ε_n|∂ u^ε_n/∂ x_i∂ v^(j)/∂ x_i |
≤ C,
where C is independent of ε_n.
Hence
|ω_ε_n|^-1χ_ω_ε_n∂ u^ε_n/∂ x_i∂ v^(j)/∂ x_i dxdt → dν_j
in the weak^* topology of C^0(Ω× [0,T]).
Combining (<ref>), (<ref>) and (<ref>) we obtain
dν_j = ℳ_ij∂ u(t)/∂ x_i dμ, ∀ t ∈ (0,T).
Now multiply the first equation in (<ref>) by w and the first equation in (<ref>) by Φ on Ω× (0,T).
Then, integrating by parts, we get
∫_0^T∫_ΩΦ_tw + ∫_0^T∫_Ω k_0∇Φ·∇ w - ∫_0^T∫_Ω f^'(u)Φ w
+∫_0^T∫_∂Ω k_0∂Φ/∂ nw= 0,
∫_0^T∫_Ω w_tΦ + ∫_0^T∫_Ω k_0∇ w ·∇Φ + ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w Φ
= ∫_0^T∫_ω_εk̃∇ u^ε·∇Φ + ∫_0^T∫_ω_ε f(u)Φ.
Summing up the two previous equations, we have
∫_0^T∫_Ω (w_tΦ + Φ_tw) - ∫_0^T∫_Ω f^'(u)Φ w
+∫_0^T∫_∂Ω k_0∂Φ/∂ n w + ∫_0^T∫_Ωχ_Ω∖ω_ε p_ε w Φ
= ∫_0^T∫_ω_εk̃∇ u^ε·∇Φ + ∫_0^T∫_ω_ε f(u)Φ.
Observe that the following identities hold
∫_0^T∫_Ω (w_tΦ + Φ_tw) = ∫_Ω ( Φ(T)w(T) - Φ(0)w(0) ) - ∫_0^T∫_ΩΦ w_t + ∫_0^T∫_ΩΦ w_t =0,
and then, from (<ref>) we infer
∫_0^T∫_Ω ( χ_Ω∖ω_ε p_ε w Φ - f^'(u)Φ w )
+∫_0^T∫_∂Ω k_0∂Φ/∂ n w
= ∫_0^T∫_ω_εk̃∇ u^ε·∇Φ + ∫_0^T∫_ω_ε f(u)Φ.
Moreover, on account of (<ref>), we have
∫_0^T∫_Ω ( χ_Ω∖ω_ε p_ε w Φ - f^'(u)Φ w )
=∫_0^T∫_Ω ( χ_Ω∖ω_ε p_ε w Φ - χ_Ω∖ω_ε f^'(u)Φ w )
- ∫_0^T∫_ω_ε f^'(u)Φ w
= ∫_0^T∫_Ωχ_Ω∖ω_ε (p_ε - f^'(u)) w Φ + o(|ω_ε|) = o(|ω_ε|).
The last equality in (<ref>) is a consequence of the regularity of f (see (<ref>), from which |p_ε - f^'(u)| ≤ C|w| follows)
and (<ref>).
Combining (<ref>) and (<ref>)
we obtain
∫_0^T∫_∂Ω k_0∂Φ/∂ n w
= |ω_ε|∫_0^T∫_Ωk̃ |ω_ε|^-1χ_ω_ε∇ u^ε·∇Φ
+ |ω_ε| ∫_0^T∫_Ωχ_ω_ε|ω_ε|^-1 f(u)Φ
+ o(|ω_ε|).
And finally, by means of (<ref>), (<ref>) and (<ref>), the formula (<ref>) holds.
We would like to emphasize that, with minor changes, the asymptotic expansion extends to the case of piecewise smooth anisotropic conductivities
of the form
𝕂_ε = 𝕂_0 in Ω∖ω_ε, 𝕂_ε = 𝕂_1 in ω_ε,
where 𝕂_0, 𝕂_1∈ C^∞(Ω) are symmetric matrix valued functions satisfying
α_0|ξ|^2≤ξ^T𝕂_0(x)ξ≤β_0 |ξ|^2, α_1|ξ|^2≤ξ^T𝕂_1(x)ξ≤β_1|ξ|^2, ∀ ξ∈ℝ^3, ∀ x ∈Ω,
with 0<α_1<β_1< α_0 < β_0. Then, the asymptotic formula (<ref>) becomes
∫_0^T∫_∂Ω𝕂_0∇Φ· n (u^ε - u) =
|ω_ε|∫_0^T∫_Ω( ℳ_i j(𝕂_0-𝕂_1)_ik∂ u/∂ x_k∂Φ/∂ x_j
+f(u)Φ )dμ +o(|ω_ε|)
where Φ solves
Φ_t + div (𝕂_0 ∇Φ) - f^'(u)Φ = 0, in Ω× (0,T),
Φ(T) = 0, in Ω,
and u is the background solution of
u_t - div (𝕂_0 ∇ u) + f(u) = 0, in Ω× (0,T),
𝕂_0∇ u · n = 0, on ∂Ω× (0,T),
u(0) = u_0, in Ω.
The matrix ℳ is called the polarization tensor associated to the inhomogeneity ω_ε.
Indeed, all the results of the previous sections can be extended to the case of constant anisotropic coefficients using for instance
the regularity results contained in <cit.>.
§ A RECONSTRUCTION ALGORITHM
We now use the asymptotic expansion derived in the previous section to set a reconstruction procedure
for the inverse problem of detecting a spherical inhomogeneity ω_ε from boundary measurements of the potential.
Following the approach of <cit.>,
<cit.>, but taking into account the time-dependence of the problem, we introduce the mismatch functional
J(ω_ε) = 1/2∫_0^T ∫_∂Ω(u^ε - u_meas)^2 ,
being u^ε the solution of the perturbed problem (<ref>) in presence of an inclusion ω_ε satisfying hypotheses
(<ref>), (<ref>).
It is possible to reformulate the inverse problem in terms of the following minimization problem
J(ω_ε) →min
among all the small inclusions, well separated from the boundary. We introduce the following additional assumption on the exact inclusion
ω_ε = z + ε B = {x ∈Ω s.t. x = z+ε b, b ∈ B},
being z ∈Ω and B an open, bounded, regular set containing the origin. We remark that we prescribe the geometry of the inclusion to be fixed
throughout the whole observation time.
The restriction of the functional J to the class of inclusions satisfying (<ref>) is denoted by j(ε;z).
We can now define the topological gradient G : Ω→ℝ as the first order term appearing in the asymptotic expansion of the cost
functional with respect to ε, namely
j(ε;z) = j(0) + |ω_ε| G(z) + o(|ω_ε|), ε→ 0,
where j(0) = 1/2∫_0^T ∫_∂Ω(u - u_meas)^2 and u is the solution of the unperturbed problem (<ref>).
Under the assumption that the exact inclusion has small size and satisfies hypothesis
(<ref>), a reconstruction procedure consists in identifying the point z̅∈Ω where the topological gradient G attains its minimum.
Indeed, the cost functional achieves its smallest value when evaluated at the center of the exact inclusion. Thanks to the smallness hypothesis, we expect the reduction of the cost functional j to be correctly described by the first order term G, up to a remainder which is negligible with respect to ε.
Nevertheless, in order to define a reconstruction algorithm, we need to efficiently evaluate the topological gradient G. According to the definition,
G(z) = lim_ε→ 0j(ε;z)-j(0)/|ω_ε|.
Evaluating G in a single point z∈Ω would require to solve the direct problem several times in presence of inclusions centered at z
with decreasing volume.
This procedure can be indeed avoided thanks to a useful representation formula that can be deduced from the asymptotic expansion (<ref>).
To show this we need the following preliminary Proposition the proof of which is given in the Appendix.
Consider the problem
Φ_t + k_0ΔΦ - f^'(u)Φ = 0, in Ω× (0,T),
∂Φ/∂ n = u^ε - u, on ∂Ω× (0,T),
Φ(T) = 0, in Ω.
Given a compact set K ⊂Ω such that d(K,∂Ω) ≥ d_0 > 0 the following estimate holds
Φ_L^1(0,T;W^1,∞(K))≤ C u^ε - u _L^2(0,T;L^2(∂Ω)).
On account of Proposition <ref>, we deduce the following representation of the topological gradient
The topological gradient of the cost functional j(ε;z) can be expressed by
G(z) = ∫_0^T ( k̃ℳ∇ u(z) ·∇ W(z) + f(u(z))W(z) ),
where W is the solution of the adjoint problem:
W_t + k_0Δ W - f^'(u)W = 0, in Ω× (0,T),
∂ W/∂ n = u - u_meas, on ∂Ω× (0,T),
W(T) = 0, in Ω.
Consider the difference
j(ε;z) - j(0)
= 1/2u^ε - u_meas_L^2(0,T;L^2(∂Ω))^2 - 1/2u - u_meas_L^2(0,T;L^2(∂Ω))^2
= ∫_0^T ∫_∂Ω (u^ε - u)(u - u_meas)dt + 1/2u^ε - u _L^2(0,T;L^2(∂Ω))^2.
According to (<ref>) and to the definition of the adjoint problem (<ref>), we can express
∫_0^T ∫_∂Ω (u^ε - u)(u - u_meas)dt =
|ω_ε| {∫_0^T∫_Ωk̃ℳ∇ u·∇ W dμ
+ ∫_0^T∫_Ω f(u)W dμ + o(1) }.
Since we assume (<ref>), the measure μ associated to the inclusion is the Dirac mass δ_z centered in point z (see <cit.>).
Hence
∫_0^T ∫_∂Ω (u^ε - u)(u - u_meas)dt =
|ω_ε| {∫_0^T k̃ℳ∇ u(z)·∇ W(z) .
+ . ∫_0^T f(u(z))W(z) } + o(|ω_ε|).
Moreover, by (<ref>), the second term in the left-hand side of (<ref>) can be expressed as
∫_0^T ∫_∂Ω (u^ε - u)(u^ε - u)dt =
|ω_ε| {∫_0^Tk̃ℳ∇ u(z)·∇Φ(z) + ∫_0^T f(u(z))Φ(z) } + o(|ω_ε|),
where Φ is the solution to (<ref>). Thanks to regularity results on u (see Theorem <ref>) and using Proposition <ref> with
K = Ω_d_0 = {x ∈Ω s.t. d(x,∂Ω) ≥ d_0}, we obtain
∫_0^T ∫_∂Ω (u^ε - u)(u^ε - u)dt ≤ C |ω_ε| {∫_0^T | ∇Φ(z)|
+ ∫_0^T |Φ(z)| } + o(|ω_ε|)
≤ C|ω_ε| u^ε - u _L^2(0,T,L^2(∂Ω)) + o(|ω_ε|)
≤ C|ω_ε| u^ε - u _L^2(0,T,H^1(Ω)) + o(|ω_ε|)
≤ C|ω_ε|^3/2 + o(|ω_ε|) = o(|ω_ε|).
Replacing (<ref>) and (<ref>) in (<ref>), we finally get
j(ε;z) - j(0)
= |ω_ε| {∫_0^T k̃ℳ∇ u(z)·∇ W(z) + ∫_0^T f(u(z))W(z) } + o(|ω_ε|).
Thanks to the representation formula (<ref>), evaluating the topological gradient of the cost functional requires just the solution
of two initial and boundary value problems. This yields the definition of
a one-shot algorithm for the identification of the center of a small inclusion satisfying hypothesis (<ref>) (see Algorithm <ref>).
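To fix ideas, the following Python sketch outlines the resulting one-shot procedure. It is only a schematic illustration under stated assumptions: solve_background and solve_adjoint are hypothetical callbacks standing in for the finite element solvers of Section <ref>, the arrays collect nodal values sampled at N_t time instants, and the polarization tensor ℳ is taken to be the identity for simplicity.

import numpy as np

def one_shot_reconstruction(solve_background, solve_adjoint, u_meas,
                            nodes, f, k_tilde, tau):
    # 1) Solve the unperturbed (background) monodomain problem once.
    #    u, grad_u have shapes (N_t, N_h) and (N_t, N_h, 3), respectively.
    u, grad_u = solve_background()
    # 2) Solve the adjoint problem driven by the boundary mismatch u - u_meas;
    #    W satisfies the final condition W(T) = 0.
    W, grad_W = solve_adjoint(u - u_meas)
    # 3) Evaluate the representation formula
    #    G(z) = int_0^T ( k~ M grad u(z) . grad W(z) + f(u(z)) W(z) ) dt
    #    at every mesh node, with the polarization tensor M = identity.
    integrand = k_tilde * np.einsum('tnj,tnj->tn', grad_u, grad_W) + f(u) * W
    G = np.trapz(integrand, dx=tau, axis=0)   # quadrature in time, one value per node
    # 4) The estimated center of the inclusion is the minimizer of G.
    return nodes[np.argmin(G)], G

Only two initial and boundary value solves are required, independently of how many points G is evaluated at, which is precisely the computational advantage over approximating the limit in the definition of G directly.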
Inspired by the electrophysiological application, we consider moreover the possibility to have partial boundary measurements, i.e.
the case where the support of the function u_meas is not the whole
boundary ∂Ω but only a subset Γ⊂∂Ω. In this case, it is possible to formulate a slightly different optimization problem,
in which we aim at minimizing the mismatch between the measured and the perturbed data just on the portion Γ of the boundary. The same reconstruction algorithm can be devised
for this problem, by simply changing the expression of the Neumann condition of the adjoint problem.
§ NUMERICAL RESULTS
In order to implement Algorithm 1 for the detection of inclusions, it is necessary to approximate the solution of the background problem (<ref>) and
the adjoint problem (<ref>). Moreover, when considering synthetic data u_meas, we must be able to compute the solution to the perturbed problem
(<ref>) in presence of the exact inclusion. We rely on the Galerkin finite element method for the numerical approximation of these problems.
The one-shot procedure makes the reconstruction algorithm very efficient, only requiring the solution of an adjoint problem for each acquired measurement over
the time interval, without entailing any iterative (e.g. descent) method for numerical optimization.
§.§ Finite Element approximation of initial and boundary value problems
The background problem (<ref>) can be cast in weak form as follows
∀ t∈(0,T), find u(t) ∈ V = H^1(Ω) such that u(0) = u_0 and
∫_Ω u_t(t)v + ∫_Ω k_0 ∇ u(t)·∇ v + ∫_Ωf(u(t))v = 0, ∀ v ∈ V.
By introducing a finite-dimensional subspace V_h of V, dim(V_h) = N_h < ∞, the Galerkin (semi-discretized in space) formulation of problem
(<ref>) reads
∀ t∈(0,T), find u_h(t) ∈ V_h such that u_h(0) = u_h,0 and
((u_h)_t(t),v_h) + b(u_h(t),v_h) + F(u_h(t),v_h) = 0, ∀ v_h ∈ V_h,
where (·,·) is the inner product in L^2(Ω), b(u,v) = ∫_Ω k_0 ∇ u ·∇ v, F(u,v)=∫_Ω f(u)v, f is defined as in (<ref>) and u_h,0 is the H^1-projection of u_0 onto V_h.
To obtain a full discretization of the problem, we introduce a finite difference approximation in time. According to the strategy reported in <cit.>, <cit.>,
we rely on a semi-implicit scheme which allows an efficient treatment of the nonlinear terms.
Let us consider a uniform partition {t^n}_n = 0^N of the time interval [0,T] of step τ=T/N s.t. t^0 = 0, t^N=T. Then, the fully discrete
formulation of (<ref>) is given by
∀ n = 0, … N-1, find u_h^n+1∈ V_h such that u_h^0 = u_0,h and
(u^n+1_h,v_h) - (u^n_h,v_h) + τ b(u^n+1_h,v_h) + τ F(u^n_h,v_h) = 0, ∀ v_h ∈ V_h.
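A minimal NumPy/SciPy sketch of this semi-implicit marching is reported below, under the assumption that the mass matrix M and the stiffness matrix A (discretizing b(·,·)) have already been assembled by a finite element library; for brevity the reaction term F(u_h^n, v_h) is approximated in lumped form by M f(u^n).

import numpy as np
import scipy.sparse.linalg as spla

def semi_implicit_march(M, A, f, u0, tau, N):
    # Diffusion treated implicitly, cubic reaction explicitly:
    # (M + tau*A) u^{n+1} = M u^n - tau * M f(u^n).
    solve = spla.factorized((M + tau * A).tocsc())  # factor once, reuse
    u, traj = u0.copy(), [u0.copy()]
    for _ in range(N):
        u = solve(M @ (u - tau * f(u)))
        traj.append(u.copy())
    return np.array(traj)

Freezing the nonlinearity at t^n keeps the system matrix constant in time, so a single sparse factorization can be reused at every step.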
With the same discretization strategy one may describe a numerical scheme for the approximate solution of the perturbed problem, using the weak form reported in (<ref>) and introducing the forms
b_ε(u,v) = ∫_Ω k_ε∇ u ·∇ v,
F_ε(u,v) = ∫_Ωχ_Ω∖ω_ε f(u)v.
The adjoint problem, instead, requires the introduction of the form dF(u,v;w) = ∫_Ω f'(w)uv, which is bilinear with respect to u and v. Thanks
to the linearity of the adjoint problem, we can consider a fully implicit Crank-Nicolson scheme
∀ n = N-1, …, 0, find w_h^n∈ V_h such that w^N_h = 0 and
(w^n+1_h,v_h) - (w^n_h,v_h) +τ/2( b(w^n+1_h,v_h) + b(w^n_h,v_h) + .
. dF(w^n+1_h,v_h;u^n+1_h) + dF(w^n_h,v_h;u^n_h) )=
τ/2( ∫_∂Ω (u_h^n+1 - u_meas(t^n+1))v_h + ∫_∂Ω (u_h^n - u_meas(t^n))v_h ), ∀ v_h ∈ V_h.
The existence and the uniqueness of the solutions of the fully-discrete problems (<ref>) and (<ref>) follow by the well-posedness of the continuous problems, since V_h is a subspace of H^1(Ω).
For further details on the stability and of the convergence of the proposed schemes we refer to <cit.>, <cit.> and <cit.>.
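For the adjoint sweep, a possible (simplified) implementation is the following backward Euler variant of the scheme above; D(u) denotes an assumed routine assembling the matrix of the bilinear form dF(·,·;u), and B_gamma a hypothetical routine mapping the boundary mismatch to a load vector, both stand-ins for finite element library calls.

import numpy as np
import scipy.sparse.linalg as spla

def adjoint_sweep(M, A, D, B_gamma, u_traj, u_meas_traj, tau):
    # Backward in time, starting from the final condition w^N = 0:
    # (M + tau*(A + D(u^n))) w^n = M w^{n+1} + tau * B_gamma(u^n - u_meas^n).
    N = len(u_traj) - 1
    w = np.zeros_like(u_traj[0])
    w_traj = [w.copy()]
    for n in range(N - 1, -1, -1):
        lhs = (M + tau * (A + D(u_traj[n]))).tocsc()
        rhs = M @ w + tau * B_gamma(u_traj[n] - u_meas_traj[n])
        w = spla.spsolve(lhs, rhs)
        w_traj.append(w.copy())
    return list(reversed(w_traj))  # reorder as w^0, ..., w^N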
The numerical setup for the simulation is represented in Figure <ref>. We consider an idealized geometry of the left ventricle (which has been the object of several studies, see e.g. <cit.>,
<cit.>), and define a tetrahedral tessellation 𝒯_h of the domain. The discrete space V_h is the P1-Finite Element space over 𝒯_h, i.e. the space of the continuous functions
over Ω which are linear polynomials when restricted to each element T ∈𝒯_h. The mesh we use for all the reported results consists of 24924 tetrahedral elements and N_h=5639 nodes.
We report also the anisotropic structure considered in all the reconstruction tests, according to <cit.> and <cit.>. The conductivity matrix 𝕂_0 for the monodomain equation is given by 𝕂_0(x) = 𝕂^e(x)(𝕂^e(x)+ 𝕂^i(x))^-1𝕂^i(x), where both 𝕂^i and 𝕂^e are orthotropic tensors with three constant positive real eigenvalues, namely
𝕂^e(x) = k_f^e e⃗_⃗f⃗(x)⊗e⃗_⃗f⃗(x) + k_t^e e⃗_⃗t⃗(x)⊗e⃗_⃗t⃗(x) + k_r^e e⃗_⃗r⃗(x)⊗e⃗_⃗r⃗(x)
𝕂^i(x) = k_f^i e⃗_⃗f⃗(x)⊗e⃗_⃗f⃗(x) + k_t^i e⃗_⃗t⃗(x)⊗e⃗_⃗t⃗(x) + k_r^i e⃗_⃗r⃗(x)⊗e⃗_⃗r⃗(x)
The eigenvectors e⃗_⃗f⃗, e⃗_⃗t⃗ and e⃗_⃗r⃗ are associated to the three principal directions of conductivity in the heart tissue: respectively, the fiber centerline, the tangent direction to the heart sheets and the transmural direction (normal to the sheets).
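Pointwise, the monodomain tensor can be assembled as in the following sketch, where the three eigenvector fields and the two eigenvalue triples are assumed given at the evaluation point.

import numpy as np

def monodomain_tensor(e_f, e_t, e_r, k_e, k_i):
    # Orthotropic tensors K^e, K^i built from the local frame, then
    # K_0 = K^e (K^e + K^i)^{-1} K^i as in the formula above.
    def orthotropic(kf, kt, kr):
        return (kf * np.outer(e_f, e_f) + kt * np.outer(e_t, e_t)
                + kr * np.outer(e_r, e_r))
    Ke, Ki = orthotropic(*k_e), orthotropic(*k_i)
    return Ke @ np.linalg.solve(Ke + Ki, Ki)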
For the direct problem simulations, we consider the formulation reported in (<ref>), specifying realistic values for the parameters C_m and ν. We have rescaled the values of the coefficients u_1, u_2, u_3 and A^2 in order to simulate the electric potential in the adimensional range [0,1]. The rescaling is performed by the transformation ũ = (α+u)/β, where α = 0.085 mV and β = 0.125 mV, whereas for the sake of simplicity we will still denote by u the rescaled variable ũ. We consider the initial datum u_0 to be positive on a band of the endocardium, representing the initial stimulus provided by the heart conducting system. The most important parameters, considered in accordance with <cit.>, <cit.>, are reported in Table <ref>.
In Figure <ref> we report the solution of the discrete background problem (<ref>) at different time instants, comparing the isotropic and the anisotropic cases.
§.§ Reconstruction of small inclusions
We now tackle the problem of reconstructing the position of a small inhomogeneity using the knowledge of the electric potential of the tissue on a portion Γ
of the boundary of the domain. In particular, we assume that u_meas is known on the endocardium, i.e. the inner surface of the heart cavity. We generate
synthetic data on a more refined mesh and test the effectiveness of Algorithm 1 in the reconstruction of a small spherical inclusion in different positions.
In Figure <ref>
we report the value assumed by the topological gradient, and superimpose the exact inclusions: we observe a negative region in proximity of the position of the real inclusion.
The algorithm precisely identifies the region where the inclusion is present, whereas the minimum may in general be found along the endocardium also when the center of the real inclusion is not located on the heart surface.
Nevertheless, due to the thinness of the domain the reconstructed position is close to the real one.
This slight loss in accuracy seems to be an intrinsic limit of the topological gradient strategy applied to the considered problem. We point out that the reconstruction is performed by relying on a single measurement
acquired on the boundary, which is indeed a constraint imposed by the physical problem. Hence, all the techniques relying on the introduction of several measurements to increase the quality of the reconstruction are impracticable.
A different strategy, as proposed in several works addressing the steady-state case, may consist in introducing a modification to the cost functional J. In <cit.> and many related works the authors
introduce a cost functional inherited from imaging techniques, whereas in <cit.>, <cit.> different strategies involving the Kohn-Vogelius functional or similar ones are explored. Nevertheless,
the nonlinearity of the direct problem considered in this work precludes applying these techniques, since the analytical expressions of the fundamental solution and of the single and double layer potentials would not be available.
§.§ Reconstruction in presence of experimental noise
We test the stability of the algorithm in presence of experimental noise on the measured data u_meas. We consider different noise levels, according to the formula
ũ_meas(x,t) = u_meas(x,t) + p η(x,t),
where η(x,t), for each point x and instant t, is a Gaussian random variable with zero mean and standard deviation equal to u_3-u_1, whereas p ∈ [0,1] is the noise level.
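In code, perturbing the synthetic data amounts to a one-liner (a sketch; the array u_meas and the thresholds u_1, u_3 are assumed available):

import numpy as np

def add_noise(u_meas, p, u1, u3, rng=None):
    # Gaussian noise with zero mean and standard deviation u3 - u1, scaled by p.
    rng = np.random.default_rng() if rng is None else rng
    return u_meas + p * rng.normal(0.0, u3 - u1, size=u_meas.shape)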
In Figure <ref> the results of the reconstruction with different noise levels are compared. The algorithm shows to be highly stable with respect to high rates of noise, with increasing accuracy as the noise level reduces.
§.§ Reconstruction from partial discrete data
A further test case we have performed deals with the reconstruction of the position of small inclusions starting from the knowledge of partial data. We are interested in assessing the effectiveness of our algorithm when the electric potential is measured only on a discrete set of points on the endocardium, possibly simulating the procedure of intracavitary electric measurements. Figure <ref> shows that the algorithm is able to detect the region where the small ischemia is located from the knowledge of the potential on N_p = 246, 61, 15 different points. The
position of the reconstructed inclusion is slightly affected by the reduction of sampling points; nevertheless, reliable reconstructions can be obtained even with a very small number of points.
For the same purpose, we have tested the capability of the reconstruction procedure to avoid false positives: the algorithm is able to distinguish the presence of a real ischemia from the case where no ischemic region is present, also in the case where the data are recovered only at a finite set of points, and are affected by noise. We compare the value of the cost functional and of the minimum of the topological gradient obtained through Algorithm <ref> on data generated when (i) a small ischemia is present in the tissue or (ii) no inclusion is considered. The measurement is performed on a set of N_p = 100 points and is affected by different noise levels. The results are reported in Table <ref>.
We observe that the presence of a small noise on the measured data causes a great increase of the cost functional J: with 5% noise, e.g., the value of J is two orders of magnitude greater than the value that J assumes in presence of a small inclusion without noise.
Nevertheless, the topological gradient G allows us to distinguish the false positive cases, since (at least in the case of a small noise level) the value attained by its minimum in the presence of a small inclusion is considerably lower than the random oscillations of G due to the noise.
§.§ Reconstruction of larger inclusions
We finally assess the performance of Algorithm 1, developed for the reconstruction of small inclusions well separated from the boundary, in detecting the position of extended inclusions, which may be of greater interest in view of the problem of detecting ischemic regions.
Indeed, total occlusion of a major coronary artery generally causes the entire thickness of the ventricular wall to become ischemic (transmural ischemia) or, alternatively, a significant ischemia only in the endocardium, that is, the inner layer of the myocardium (subendocardial ischemia). See, e.g., <cit.> for a detailed investigation of the interaction between the presence of moderate or severe subendocardial ischemic regions and the anisotropic structure of the cardiac muscle.
The most important assumption on which our one-shot procedure relies is that the variation of the cost functional from the value J(0) attained in the background case to the value related to the exact inclusion can be correctly described by the first order term of its asymptotic expansion, the topological gradient G. Removing the hypothesis of small size, we cannot rigorously assess the accuracy of the algorithm; however, it still allows us to identify the location of the ischemic region.
The results reported in Figure <ref> show that in the presence of an inclusion of larger size (and not even well separated from the boundary), the minimum of the topological gradient is found to be close to the position of the inclusion, and attains lower values with respect to the previously reported cases.
Moreover, in Figure <ref> we also assess the stability of the reconstruction with respect to the presence of noisy data and partial measurements, as done in the case of small inclusions.
§ CONCLUSIONS AND PERSPECTIVES
A rigorous theoretical analysis of the inverse problem of detecting inhomogeneities in the monodomain equation has allowed us to set up a numerical reconstruction procedure, aiming at the detection of ischemic
regions in the myocardial tissue from a single measurement of the endocardial potential. The identification is made possible by evaluating the topological gradient of a quadratic cost functional, requiring
the solution of two initial and boundary value problems, the background problem and the adjoint one. Numerical results are encouraging and allow us to estimate the position of the inclusion,
although the identified inhomogeneity is nearly always detected on the boundary where the measurement is acquired. Nevertheless, provided a single measurement can be used for the sake of identification,
and a one-shot procedure is performed, the obtained results give useful insights.
Many issues are still open. Concerning the mathematical model, an even more interesting case is the one in which the heart-torso coupling is considered, so that more realistic (and noninvasive)
body surface measurements can be employed. Setting and analyzing the inverse problem in this context represents the natural continuation of the present work.
To close the gap between the rigorous mathematical setting and the practice, the two assumptions made in this work about the size of the inclusion and its distance from the boundary should be relaxed.
Numerical results shown in Section <ref> provide a first insight on the detection of inclusions with larger size, as those corresponding to transmural or subendocardial ischemias.
From a mathematical standpoint, this problem is still open. Even in the case of a linear direct problem, very few results can be found in the literature, see, e.g. <cit.>.
Estimating the size of the inclusion is another open question in the case of parabolic PDEs, also for linear equations. The case of multiple inclusions, addressed in <cit.>
for a stationary nonlinear problem, could also be considered. Last, but not least, the topological optimization framework addressed in this paper could also be combined with an iterative algorithm,
such as the level set method, or with the solution of a successive shape optimization problem, to achieve a full reconstruction both of the dimension and the shape of the inclusion.
§ APPENDIX - PROOF OF PROPOSITION <REF>
Setting Z(t) = Φ(T-t), t ∈ (0,T), we obtain a problem equivalent to (<ref>):
Z_t - k_0Δ Z + f^'(u)Z = 0, in Ω× (0,T),
∂ Z/∂ n = u^ε - u, on ∂Ω× (0,T),
Z(0) = 0, in Ω.
We will prove that Z ∈ L^2(0,T;H^3(K)) ↪ L^1(0,T;W^1,∞(K)). To this aim we need to derive some energy estimates.
Multiplying the first equation in (<ref>) by Z, an application of Young's inequality leads to
1/2d/dtZ(t)^2_L^2(Ω) + k_0/2∇ Z_L^2(Ω)^2
≤ C (Z_L^2(Ω)^2 + u^ε -u^2_L^2(∂Ω)),
where C= C(k_0, M_2, Ω) >0. An application of Gronwall's lemma gives
Z(t)^2_L^2(Ω))≤ C u^ε -u^2_L^2(0,t; L^2(∂Ω)), ∀ t ∈ [0,T],
so that
Z^2_L^∞(0,t; L^2(Ω))≤ Cu^ε -u^2_L^2(0,t; L^2(∂Ω)), ∀ t ∈ [0,T].
Instead, integrating (<ref>) in time over [0,t] we get
∫_0^t∇ Z_L^2(Ω)^2
≤ C ( ∫_0^tZ_L^2(Ω)^2 + ∫_0^tu^ε -u^2_L^2(∂Ω) )
and finally
∇ Z_L^2(0,t;L^2(Ω))^2
≤ Cu^ε -u^2_L^2(0,t;L^2(∂Ω)), ∀ t ∈ [0,T],
where C is a positive constant depending on k_0, M_2, Ω, T.
We remark that, by standard regularity results, Z is smooth on E × [0,T], for any compact E⊂Ω.
Consider now two compact sets K_1 and K_2 such that
K⊂ K_2 ⊂ K_1 ⊂Ω, d(K_1, ∂Ω) ≥ d_1 >0.
It is possible to construct two functions ξ_1, ξ_2 and two constants b_1, b_2 satisfying
ξ_i ∈ C^2(Ω), 0≤ξ_i ≤ 1, ξ_i(x) = 1 ∀ x ∈ K_i, ξ_i(x) = 0
∀ x ∈ B_i i=1,2,
B_i = { x ∈Ω : d(x, ∂Ω) ≤ b_i}, 0 < b_1 < b_2 < d_1,
K ⊂⊂ Supp ξ_2 ⊂⊂ K_1 ⊂ Supp ξ_1 ⊂Ω.
Let us multiply the first equation of (<ref>) by -Δ Z. Then the following holds
d/dt (1/2|∇ Z|^2) + k_0(Δ Z)^2 - f^'(u)ZΔ Z = div (Z_t ∇ Z).
Multiplying (<ref>) by ξ_1, integrating on Ω× (0,T) and using the definitions of Z we get
∫_Ω (1/2|∇ Z(T)|^2 )ξ_1 + k_0∫_0^T∫_Ω(Δ Z)^2ξ_1
= ∫_0^T∫_Ω f^'(u)ZΔ Z ξ_1 - ∫_0^T∫_Ω Z_t ∇ Z ·∇ξ_1.
Combining (<ref>) and the first equation in (<ref>), applying Young's inequality and taking into account (<ref>) and the fact
that 0 ≤ξ_1 ≤ 1, we obtain
∫_Ω (|∇ Z(T)|^2 )ξ_1 + k_0∫_0^T∫_Ω(Δ Z)^2ξ_1
≤ 2M_2∫_0^T∫_Ω Z^2- 2∫_0^T∫_Ω (k_0Δ Z - f^'(u)Z) ∇ Z ·∇ξ_1.
Integrating by parts the term ∫_0^T∫_ΩΔ Z ∇ Z ·∇ξ_1, we can easily deduce
∫_Ω (|∇ Z(T)|^2 )ξ_1 + ∫_0^T∫_Ω(Δ Z)^2ξ_1
≤ C ( Z_L^2(0,T;L^2(Ω))^2 + ∇ Z^2_L^2(0,T;L^2(Ω))),
where C is a positive constant depending on M_2, k_0, ξ_1.
Hence, since ξ_1 = 1 on K_1, we get
Δ Z ^2_L^2(0,T;L^2(K_1))≤ C ( Z_L^2(0,T;L^2(Ω))^2 + ∇ Z^2_L^2(0,T;L^2(Ω))).
Observe that, replacing T by t ∈ (0,T] in (<ref>), we deduce also
∇ Z_L^∞(0,T;L^2(K_1))≤ C ( Z_L^2(0,T;L^2(Ω))^2 + ∇ Z^2_L^2(0,T;L^2(Ω))).
Combining (<ref>), (<ref>) and (<ref>), we obtain
Z ^2_L^2(0,T;H^2(K_1))≤ C u^ε - u_L^2(0,T;L^2(∂Ω))^2,
where C is a positive constant depending on k_0, M_2, Ω, T, ξ_1.
On account of the first equation in (<ref>) and the previous estimates, we get
Z_t_L^2(0,T;L^2(K_1))^2 ≤ C ( Z_L^2(0,T;L^2(Ω))^2
+ ∇ Z_L^2(0,T;L^2(Ω))^2) ≤ C u^ε - u_L^2(0,T;L^2(∂Ω))^2,
where C is a positive constant depending on k_0, M_2, Ω, T, ξ_1.
Now, let us multiply the first equation of (<ref>) by -Δ Z_t. We obtain
- Z_t Δ Z_t + k_0/2d/dt(Δ Z)^2 - f^'(u) ZΔ Z_t = 0.
Multiplying the previous equation by ξ_2 and integrating on Ω× (0,T), then a
suitable integration by parts in space implies
∫_0^T∫_Ω|∇ Z_t|^2ξ_2 + k_0/2∫_0^T∫_Ωd/dt(Δ Z)^2ξ_2
+ ∫_0^T∫_Ωξ_2 Z f^''(u)∇ u ·∇ Z_t + ∫_0^T∫_Ωξ_2f^'(u)∇ Z·∇ Z_t
= ∫_0^T∫_Ω div ( 1/2∇(( Z_t)^2) ) ξ_2
+ ∫_0^T∫_Ω div (∇(f^'(u) Z Z_t) - Z_t∇(f^'(u) Z))ξ_2.
Integrating by parts the second term of the left-hand side and by parts in space the terms in the right-hand side, by an application of Young's
inequality we finally get
∫_0^T∫_K_2|∇ Z_t|^2 ≤∫_0^T∫_Ω|∇ Z_t|^2ξ_2 ≤
C ( ∫_0^T∫_Ω| Z|^2 + ∫_0^T∫_Ω |∇ Z|^2 + ∫_0^T∫_K_1 ( Z_t)^2 ),
where the constant C>0 depends on ξ_2, M_2.
A combination with (<ref>), (<ref>), (<ref>) gives
∇ Z_t_L^2(0,T;L^2(K_2))^2 ≤
C u^ε - u _L^2(0,T;L^2(∂Ω))^2,
where the constant C>0 depends on k_0, M_2, Ω, T, ξ_1, ξ_2.
In order to prove the desired regularity, we need to take into account also the third-order derivatives, in particular the operator ∇Δ Z.
Observe that from the first equation in (<ref>) we get
∇Δ Z = 1/k_0 ( ∇ Z_t + Z f^''(u)∇ u + f^'(u)∇ Z ).
Hence, we can conclude
∇Δ Z_L^2(0,T;L^2(K_2))^2 ≤
C u^ε - u _L^2(0,T;L^2(∂Ω))^2,
where C is a positive constant depending on k_0, 1/k_0, M_2, Ω, T, ξ_1, ξ_2.
Recalling (<ref>) and the fact that K ⊂ K_2 ⊂ K_1, standard regularity results imply
Z_L^2(0,T;H^3(K))^2 ≤
C u^ε - u _L^2(0,T;L^2(∂Ω))^2.
Finally, from (<ref>) and (<ref>), by Sobolev immersion theorems, we get
Z_L^1(0,T;W^1,∞(K))^2 ≤ C(T) Z_L^2(0,T;W^1,∞(K))^2 ≤
C u^ε - u _L^2(0,T;L^2(∂Ω))^2,
where C is a positive constant depending on k_0, 1/k_0, M_2, Ω, T, ξ_1, ξ_2.
Recalling the relation between Z and Φ we get (<ref>).
§ ACKNOWLEDGMENTS
E. Beretta, C. Cavaterra, M.C. Cerutti and L. Ratti thank the New York University in Abu Dhabi
for its kind hospitality that permitted a further development of the present research.
The work of C. Cavaterra was supported by the FP7-IDEAS-ERC-StG 256872 (EntroPhase)
and by GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilità
e le loro Applicazioni).
99Alvarez2012 D. Álvarez, F. Alonso-Atienza, J.L. Rojo-Álvarez, A. Garcia-Alberola, and M. Moscoso,
Shape reconstruction of cardiac ischemia from non-contact intracardiac recordings: a model study, Math. Computer Modeling
55 (2012) 1770–1781.
Ammari2013
H. Ammari, P. Garapon, F. Jouve, H. Kang, M. Lim, and S. Yu,
A new optimal control approach for the reconstruction of extended inclusions, SIAM J. Control Optim. 51(2) (2013) 1372–1394.
ammari2012stability H. Ammari, J. Garnier, V. Jugnon, and H. Kang, Stability and resolution analysis for a topological derivative based imaging functional, SIAM J. Control Optim.
50 (2012) 48–76.
Ammari2005 H. Ammari, E. Iakovleva, H. Kang, and K. Kim, Direct algorithms for thermal imaging of small inclusions, Multiscale Model. Simul. 4 (4) (2005) 1116–1136.
AK H. Ammari and H. Kang, Reconstruction of small inhomogeneities from boundary measurements, Lectures Notes in Mathematics Series 1846 Springer, 2004.
Bendahmane2006 M. Bendahmane and K.H. Karlsen, Analysis of a class of degenerate reaction-diffusion systems and the bidomain model of cardiac tissue,
Netw. Heterog. Media 1 (2006) 185–218.
BCMP E. Beretta, M. C. Cerutti, A. Manzoni, and D. Pierotti, An asymptotic formula for boundary potential perturbations
in a semilinear elliptic equation related to cardiac electrophysiology, Math. Mod. Meth. Applied Sciences 26 (2016) 645–670.
BerettaManzoniRatti E. Beretta, A. Manzoni, and L. Ratti, A reconstruction algorithm based on topological gradient for an inverse problem related to a semilinear elliptic boundary value problem, Inverse Problems, accepted for publication (2017).
Gerbeau2008 M. Boulakia, M.A Fernández, J.F. Gerbeau, and N. Zemzemi, A coupled system of PDEs and ODEs arising in electrocardiograms modeling,
Applied Math. Res. Exp. 2008 (2008), doi 10.1093/amrx/abn002.
Bourgault2009 Y. Bourgault, Y. Coudière, and C. Pierre, Existence and uniqueness of the solution for the bidomain model used in cardiac electrophysiology,
Nonlinear Anal. Real World Appl. 10 (2009) 458–482.
Burger2010 M. Burger, K. A. Mardal, and B. F. Nielsen, Stability analysis of the inverse transmembrane potential problem in electrocardiography, Inverse Problems 26 (2010) 105012.
CV Y. Capdeboscq and M. Vogelius, A general representation formula for boundary voltage perturbations caused by internal conductivity
inhomogeneities of low order fraction, Math. Modelling and Num. Analysis 37 (2003) 159–173.
CMV D.J. Cedio-Fengya, S. Moskow, and M. Vogelius, Identification of conductivity imperfections of small diameter by boundary measurements.
Continuous dependence and computational reconstruction. Inverse Problems 14 (1998), no. 3, 553595.
art:cmm S. Chaabane, M. Masmoudi, and H. Meftahi, Topological and shape gradient strategy for solving geometrical inverse problems,
J. Math. Anal. Appl. 400 (2013) 724–742.
Chavez2015
C.E. Chávez, N. Zemzemi, Y. Coudière, F. Alonso-Atienza, and D. Álvarez, Inverse Problem of Electrocardiography: Estimating the Location of Cardiac Ischemia in a 3D Realistic Geometry, in Functional Imaging and Modeling of the Heart: 8th International Conference, FIMH 2015, Maastricht, The Netherlands, June 25-27, 2015. Proceedings, H. van Assen, P. Bovendeerd and T. Delhaas (Eds.), Springer International Publishing (2015)
colli2004parallel P. Colli Franzone and L. Pavarino, A parallel solver for reaction–diffusion systems in computational electrocardiology, Math. Mod. Meth. Applied Sciences
14(6) (2004) 883–911.
Pavarino2010 P. Colli Franzone, L. Pavarino, and S. Scacchi, Dynamical effects of myocardial ischemia in anisotropic cardiac models in three dimensions, Math. Mod. Meth. Applied Sciences 17(12) (2007) 1965–2008.
Pavarino2014book P. Colli Franzone, L. Pavarino, and S. Scacchi, Mathematical Cardiac Electrophysiology, Modeling, Simulation and Applications (MS&A) Series,
13 Springer-Verlag Italia, Milano, 2014.
ColliFranzone1978 P. Colli Franzone, B. Taccardi, and C. Viganotti, An approach to inverse calculation of epicardial potentials from body surface maps,
Adv. Cardiol. 21 (1978) 50–54.
DiCristoVessella2010 M. Di Cristo and S. Vessella, Stable determination of the discontinuous conductivity coefficient of a parabolic equation, SIAM J. Math. Anal. 42(1) (2010) 183–217.
D L. Dung, Remarks on Hölder continuity for parabolic equations and convergence to global attractors, Nonlinear Analysis 41 (2000) 921–941.
Isakov1 A. Elayyan and V. Isakov, On uniqueness of recovery of the discontinuous conductivity coefficient of a parabolic equation, SIAM J. Math. Anal. 28(2) (1997) 49–59
fernandez2010decoupled M. Fernández and N. Zemzemi, Decoupled time-marching schemes in computational cardiac electrophysiology and ECG numerical simulation, Math. Biosci.
226(1) (2010) 58–75.
FV A. Friedman and M. Vogelius, Identification of small inhomogeneities of extreme conductivity by boundary measurements: a theorem on continuous dependence, Arch. Rational Mech. Anal. 105 (1984), 299-326.
gerbeau2015reduced J.F. Gerbeau, D. Lombardi, and E. Schenone, Reduced order model in cardiac electrophysiology with approximated Lax pairs, Adv. Comput. Math.
41(5) (2015) 1103–1130.
GT D. Gilbarg and N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer Verlag, Berlin, 1983.
Isakov2 V. Isakov, K. Kim, and G. Nakamura, Reconstruction of an unknown inclusion by thermography, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (5) Vol. IX (2010) 725–758.
LSU O.A. Ladyzenskaja, V.A. Solonnikov, and N.N. Ural'ceva, Linear and Quasi-linear Equations of Parabolic Type, AMS Transl. Monographs 23
AMS, Providence, 1968.
L A. Lunardi, Analytic semigroups and optimal regularity in parabolic problems, Birkhäuser, Basel, 1995.
LN M. Lysaker and B.F. Nielsen, Towards a level set framework for infarction modeling: an inverse problem, Int. J. Numer. Anal. Model.
3 (2006) 377–394.
NielsenEtAl20072 B.F. Nielsen, X. Cai, and M. Lykaser, On the possibility for computing the transmembrane potential in the heart with a one shot method: an inverse problem,
Math. Biosciences 210 (2007) 523–553.
Nielsen2007 B.F. Nielsen, M. Lykaser, and A. Tveito, On the use of the resting potential and level set methods for identifying ischemic heart disease: An inverse problem,
J. Comput. Phys. 220 (2007) 772–790.
N L. Nirenberg, On elliptic partial differential equations, Ann. Scuola Norm. Sup. Pisa 13 (1959) 115–162.
P C.V. Pao, Nonlinear parabolic and elliptic equations, Plenum Press, New York, 1992.
park2012topological W.K. Park, Topological derivative strategy for one-step iteration imaging of arbitrary shaped thin, curve-like electromagnetic inclusions, J Comput. Phys.
231(4) (2012) 1426–1439.
MacLeod2010 A.J. Pullan, L.K. Cheng, M.P. Nash, A. Ghodrati, R. MacLeod, and D.H. Brooks, The inverse problem of electrocardiography, Comprehensive Electrocardiology,
299–344, P.W. Macfarlane and A. van Oosterom and O. Pahlm and P. Kligfield and M. Janse and J. Camm editors, Springer London, 2010.
Quarteroni2016 A. Quarteroni, T. Lassila, S. Rossi, and R. Ruiz-Baier, Integrated Heart – Coupling multiscale and multiphysics models for the simulation of the cardiac function, Comput. Methods Appl. Mech. Engrg. 314 (2017) 345–407.
R J.C. Robinson, Infinite-Dimensional Dynamical Systems, Cambridge texts in applied mathematics, Cambridge University Press, Cambridge, 2001.
rossi2014thermodynamically S. Rossi, T. Lassila, R. Ruiz-Baier, A. Sequeira, and A. Quarteroni, Thermodynamically consistent orthotropic activation model capturing ventricular systolic wall thickening in cardiac electromechanics, Eur. J. Mech. A-Solid.
48 (2014) 129–142.
sanfelici2002convergence S. Sanfelici, Convergence of the Galerkin approximation of a degenerate evolution problem in electrocardiology, Numer. Methods Partial Differential Equations
18(2) (2002) 218–240.
SLCNMT J. Sundnes, G.T. Lines, X. Cai, B.F. Nielsen, K.A. Mardal, and A. Tveito, Computing the electrical activity in the heart,
Monographs in Computational Science and Engineering Series, 1, Springer, (2006).
Mathematical and numerical models of computational electrophysiology can provide quantitative tools to describe electrical heart function and dysfunction <cit.>, often complementing imaging techniques (such as computed tomography and magnetic resonance) for diagnostic and therapeutic purposes. In this context, detecting pathological conditions or reconstructing model features such as tissue conductivities from potential measurements leads to the solution of an inverse boundary value problem. Standard electrocardiographic techniques attempt to infer electrophysiological
processes in the heart from body surface measurements of the electrical potential, as in the case of electrocardiograms (ECGs), or body surface ECGs (also known as body potential maps). These measurements can provide useful insights for the
reconstruction of the cardiac electrical activity within the so-called electrocardiographic imaging, by solving the well-known
inverse problem of electrocardiography[The inverse problem of electrocardiography aims at recovering the epicardial potential (that is, at the heart surface)
from body surface measurements <cit.>. Since the torso is considered as a passive conductor, such an inverse problem involves the linear steady diffusion model as direct problem. A step further, aiming at computing the potential inside the heart from the epicardial potential, has been considered, e.g., in <cit.>.]. A much more invasive option to acquire potential measurements is represented by non-contact electrodes inside a heart cavity to record endocardial potentials.
Here we focus on the problem of detecting the position and the size of myocardial ischemias from a single boundary measurement of the electric potential. Ischemia is a reversible precursor of heart infarction caused by partial occlusion of one or more coronary arteries, which supply blood to the heart. If this condition persists, myocardial cells die and the ischemia eventually degenerates in infarction. For the time being, we consider an insulated heart model, neglecting the coupling with the torso;
this results in the inverse problem of detecting inhomogeneities for a nonlinear parabolic reaction-diffusion equation (in our case, the so-called monodomain equation) dealing with a single measurement of the endocardial potential. Our long-term goal is indeed to deal with an inverse problem for the coupled heart-torso model, in order to detect ischemias from body surface measurements, such as those acquired on each patient with symptoms of cardiac disease through an ECG.
The problem we consider in this paper is a mathematical challenge in itself, almost never considered before. Difficulties include the nonlinearity of both the direct and the inverse problem, as well as the scarcity of available measurements. Indeed, even for the linear counterpart of the inverse problem, it has been shown in <cit.> and <cit.> that infinitely many measurements are needed to uniquely determine the unknown inclusions, and that the continuous dependence of the inclusion on the data is logarithmic <cit.>. Moreover, although the inverse problem of ischemia identification from measurements of surface potentials has been tackled in an
optimization framework for numerical purposes <cit.>, a detailed mathematical analysis of this problem has never been performed.
To our knowledge, no theoretical investigation of inverse problems related with ischemia detection involving the monodomain and/or the bidomain model has been
carried out. On the other hand, recent results regarding both the analysis and the numerical approximation of this inverse problem in a much simpler stationary case
have been obtained in <cit.>.
In order to obtain rigorous theoretical results, additional assumptions are needed, for instance on the size of the conductivity inhomogeneities. We thus model ischemic regions as small inclusions ω_ε where the electric conductivity is significantly smaller than that of healthy tissue and there is no ion transport.
We establish a rigorous asymptotic expansion of the boundary potential perturbation due to the presence of the inclusion adapting to the parabolic nonlinear case the approach introduced
by Capdeboscq and Vogelius in <cit.> for the case of the linear conductivity equation. The theory of detection of small conductivity inhomogeneities from boundary measurements via asymptotic techniques has been developed in the last three decades in the framework of Electric Impedance Tomography (see, e.g., <cit.>). A similar approach has also been used in Thermal Imaging (see, e.g., <cit.>).
We use these results to set a reconstruction procedure for detecting the inclusion. To this aim, as in <cit.>,
we propose a reconstruction algorithm based on topological optimization, where a suitable quadratic functional is minimized to detect the position and the size of
the inclusion (see also <cit.>).
Numerical results obtained on an idealized
left ventricle geometry
assess the feasibility of the proposed procedure.
Several numerical test cases also show the robustness of the reconstruction procedure with respect to measurement noise, which is unavoidable when dealing with real data. The modeling assumption on the small size of the inclusion, instrumental to the derivation of our theoretical results, is verified in practice in the case of residual ischemias after myocardial infarction. On the other hand, a fundamental task of ECG imaging is to detect the presence of ischemias as precursors of heart infarction without any constraint on their size. For this reason, we also consider the case of the detection of larger inclusions, for which the proposed algorithm still provides useful insights.
The paper is organized as follows. In Section 2 we describe the monodomain model of cardiac electrophysiology we are going to consider.
In Section 3 we show suitable well-posedness results concerning the direct problems, in the unperturbed (background) and perturbed
cases. In Section 4 we prove useful energy estimates of the difference of the solutions of the two previous problems.
The asymptotic expansion formula is derived in Section 5 and the reconstruction algorithm in Section 6.
Numerical results are finally provided in Section 7. The appendix, Section 8, is devoted to the technical proof of a result needed in Section 6.
http://arxiv.org/abs/1701.07777v1 | 20170126171316 | Henkin measures for the Drury-Arveson space | ["Michael Hartz"] | math.FA | ["math.FA", "math.CV", "Primary 46E22, Secondary 47A13"]
We exhibit Borel probability measures on the unit sphere in ℂ^d for d ≥ 2
which are Henkin for the multiplier algebra of the Drury-Arveson space,
but not Henkin in the classical sense.
This provides a negative answer to a conjecture of Clouâtre and Davidson.
§ INTRODUCTION
Let 𝔹_d denote the open unit ball in ℂ^d and let A(𝔹_d) be the ball algebra,
which is the algebra of all analytic functions on 𝔹_d which extend to be continuous on 𝔹̅_d.
A regular complex Borel measure μ on the unit sphere 𝕊_d = ∂𝔹_d is said to be Henkin
if the functional
A(𝔹_d) →ℂ, f ↦∫_𝕊_d f dμ,
extends to a weak-* continuous functional on H^∞(𝔹_d), the algebra of all bounded analytic
functions on 𝔹_d. Equivalently, whenever (f_n) is a sequence in A(𝔹_d) which is uniformly
bounded on 𝔹_d and satisfies lim_n →∞ f_n(z) = 0 for all z ∈𝔹_d, then
lim_n →∞∫_𝕊_d f_n dμ = 0.
of the dual space of A(_d) and of peak interpolation sets for the ball algebra,
see Chapter 9 and 10 of <cit.> for background material.
Such measures
are completely characterized by a theorem of Henkin <cit.> and Cole-Range <cit.>. To state
the theorem, recall that a Borel probability measure
τ on _d is said to be a representing measure for the origin if
∫__d f d τ = f(0)
for all f ∈ A(_d).
A regular complex Borel measure μ on _d is Henkin if and only if it is absolutely continuous with respect
to some representing measure for the origin.
If d=1, then the only representing measure for the origin is the normalized
Lebesgue measure on the unit circle, hence the Henkin measures on the unit circle are precisely those
measures which are absolutely continuous with respect to Lebesgue measure.
In addition to their importance in complex analysis, Henkin measures also play a role
in multivariable operator theory <cit.>.
However, it has become clear over the years that for the purposes of multivariable operator theory,
the “correct” generalization of H^∞, the algebra of bounded analytic functions on the unit disc,
to higher dimensions
is not H^∞(𝔹_d), but the multiplier algebra of the Drury-Arveson space H^2_d.
This is the reproducing kernel Hilbert space on 𝔹_d with reproducing kernel
K(z,w) = 1/(1 - ⟨ z,w ⟩).
A theorem of Drury <cit.> shows that H^2_d hosts a version of von Neumann's inequality
for commuting row contractions, that is, tuples T= (T_1,…,T_d)
of commuting operators on a Hilbert space ℋ such that
the row operator [T_1,…,T_d]: ℋ^d →ℋ is a contraction.
The corresponding dilation theorem is due to Müller-Vasilescu <cit.>
and Arveson <cit.>.
The Drury-Arveson space is also known as symmetric Fock space <cit.>,
it plays a distinguished role in the theory
of Nevanlinna-Pick spaces <cit.> and is an object of interest in harmonic analysis <cit.>.
An overview of the various features of this space can be found in <cit.>.
In <cit.>, Clouâtre and Davidson generalize much of the classical theory of Henkin measures
to the Drury-Arveson space. Let ℳ_d denote the multiplier algebra of H^2_d and let 𝒜_d be
the norm closure of the polynomials in ℳ_d.
In particular, functions in 𝒜_d belong to A(𝔹_d).
Clouâtre and Davidson define
a regular Borel measure μ on 𝕊_d to be 𝒜_d-Henkin if
the associated integration functional
𝒜_d →ℂ, f ↦∫_𝕊_d f dμ
extends to a weak-* continuous functional on ℳ_d (see Subsection <ref> for the definition
of the weak-* topology). Equivalently,
whenever (f_n) is a sequence in 𝒜_d such that ||f_n||_ℳ_d≤ 1 for all n ∈ℕ
and lim_n →∞ f_n(z) = 0 for all z ∈𝔹_d, then
lim_n →∞∫_𝕊_d f_n dμ = 0,
see <cit.>. This notion, along with the complementary notion of 𝒜_d-totally
singular measures, is crucial in the study of the dual space of 𝒜_d and of peak interpolation sets
for 𝒜_d in <cit.>.
Compelling evidence of the importance of 𝒜_d-Henkin measures in multivariable
operator theory can be found in <cit.>, where Clouâtre and Davidson extend
the Sz.-Nagy-Foias H^∞–functional calculus
to commuting row contractions.
Recall that every contraction T on a Hilbert space can be written as T = T_cnu⊕ U,
where U is a unitary operator and T_cnu is completely non-unitary (i.e. has no unitary summand).
Sz.-Nagy and Foias showed that in the separable case, T
admits a weak-* continuous H^∞-functional calculus if and only if the
spectral measure
of U is absolutely continuous with respect to Lebesgue measure on the unit circle, see
<cit.> for a classical treatment.
Clouâtre and Davidson obtain a complete generalization of this result.
The appropriate generalization of a unitary is a spherical unitary, which is
a tuple of commuting normal operators whose joint spectrum
is contained in the unit sphere.
Every commuting row contraction admits a decomposition T = T_cnu⊕ U, where U is a spherical
unitary and T_cnu is completely non-unitary
(i.e. has no spherical unitary summand), see <cit.>. The following result is
then a combination of Lemma 3.1 and Theorem 4.3 of <cit.>.
Let T be a commuting row contraction acting on a separable Hilbert space
with decomposition T = T_cnu⊕ U as above.
Then T admits a weak-* continuous ℳ_d-functional calculus if and only if the spectral measure
of U is 𝒜_d-Henkin.
This result shows that for the theory of commuting row contractions, _d-Henkin measures
are a more suitable generalization of absolutely continuous measures on the unit circle
than classical Henkin measures.
Thus, a characterization of _d-Henkin measures would be desirable.
Since the unit ball of _d is contained in the unit ball of A(_d), it is trivial that
every classical Henkin measure is also _d-Henkin.
Clouâtre and Davidson conjectured <cit.> that conversely, every _d-Henkin
measure is also a classical Henkin measure, so that these two notions agree.
If true, the classical theory would apply to _d-Henkin measures and in particular, the
Henkin and Cole-Range theorem would provide a characterization of _d-Henkin measures.
They also formulate a conjecture for the complementary notion of totally singular measure, which
turns out to be equivalent to their conjecture on Henkin measures <cit.>.
Note that the conjecture is vacuously true if d= 1, as _1 = H^∞.
The purpose of this note is to provide a counterexample to the conjecture of Clouâtre and Davidson for d ≥ 2.
To state the main result more precisely, we require one more definition. A compact set K ⊂_d
is said to be totally null if it is null for every representing measure of the origin.
By the Henkin and Cole-Range theorem, a totally null set cannot support a non-zero
classical Henkin measure.
Let d ≥ 2 be an integer. There exists a Borel probability measure μ on
_d which is _d-Henkin and whose support is totally null.
In fact, every measure which is supported on a totally null set is totally singular (i.e. it
is singular with respect to every representing measure of the origin). The measure in Theorem <ref> therefore also serves at the same time as a counterexample to the conjecture
of Clouâtre and Davidson on totally singular measures, even without invoking <cit.>.
It is not hard to see that if μ is a measure on _d which satisfies
the conclusion of Theorem <ref>, then so does the trivial
extension of μ to _d' for any d' ≥ d (see Lemma <ref>),
hence it suffices to prove Theorem <ref> for d = 2.
In fact, the construction of such a measure μ is easier in the case d=4, so we
will consider that case first.
The remainder of this note is organized as follows. In Section <ref>, we recall some of
the necessary background material. Section <ref> contains the construction of a measure
μ which satisfies the conclusion of Theorem <ref> in the case d=4. In Section
<ref>, we prove Theorem <ref> in general.
§ PRELIMINARIES
§.§ The Drury-Arveson space
As mentioned in the introduction, the Drury-Arveson space H^2_d is the reproducing kernel Hilbert
space on _d with reproducing kernel
K(z,w) = 1/(1 - ⟨ z,w ⟩).
For background material on reproducing kernel Hilbert spaces, see <cit.> and <cit.>.
We will require a more concrete description of H^2_d.
Recall that if α = (α_1,…,α_d)
∈^d is a multi-index and if z = (z_1,…,z_d) ∈^d, one usually writes
z^α = z_1^{α_1} ⋯ z_d^{α_d}, α! = α_1! ⋯ α_d!,
|α| = α_1 + ⋯ + α_d.
The monomials z^α form an orthogonal basis of H^2_d, and
||z^α||^2_H^2_d = α!/|α|!
for every multi-index α,
see <cit.>.
Let (x_n) and (y_n) be two sequences of positive numbers. We write
x_n ≃ y_n to mean that there exist C_1,C_2 > 0 such that
C_1 y_n ≤ x_n ≤ C_2 y_n for all n ∈.
The following well-known result can be deduced
from Stirling's formula, see <cit.>.
Let d ∈. Then
||(z_1 z_2 ⋯ z_d)^n||^2_{H^2_d} ≃ d^{-nd} (n+1)^{(d-1)/2}
for all n ∈.
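As a quick numeric sanity check (not part of the proof), the exact norm ||z^α||^2 = α!/|α|! with α = (n,…,n) can be compared against this asymptotic. The Python sketch below does so for d = 4; the helper name norm_sq is ours, not from the text, and the printed ratio tending to a nonzero constant is exactly what the ≃ relation and Stirling's formula predict.

    from math import factorial

    def norm_sq(n, d):
        # ||(z_1 ... z_d)^n||^2 = alpha!/|alpha|! with alpha = (n, ..., n)
        return factorial(n) ** d / factorial(n * d)

    d = 4
    for n in range(1, 9):
        ratio = norm_sq(n, d) / (d ** (-n * d) * (n + 1) ** ((d - 1) / 2))
        print(n, ratio)  # approaches a constant (~7.9 for d = 4)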
The multiplier algebra of H^2_d is
_d = {φ: _d → : φ f ∈ H^2_d for all f ∈ H^2_d }.
Every φ∈_d gives rise to a bounded multiplication operator M_φ on H^2_d, and
we set ||φ||__d = ||M_φ||. Moreover,
we may identify _d with a unital subalgebra of B(H^2_d), the algebra of bounded operators
on H^2_d. It is not hard to see that _d is WOT-closed, and hence weak-* closed, inside of B(H^2_d).
Thus, _d becomes a dual space in this way, and we endow it with the resulting weak-* topology.
In particular, for every f,g ∈ H^2_d, the functional
_d →, φ↦⟨ M_φ f,g ⟩,
is weak-* continuous. Moreover, it is well known and not hard to see that on bounded subsets of _d,
the weak-* topology coincides with the topology of pointwise convergence on _d.
§.§ Henkin measures and totally null sets
Let K ⊂_d be a compact set. A function f ∈ A(_d) is said to peak on K
if f = 1 on K and |f(z)| < 1 for all z ∈_d∖ K.
Recall that K is said to be totally null if it is null for every representing measure of the
origin. In particular, if d=1, then K is totally null if and only if it is a Lebesgue null set.
We will make repeated use of the following characterization of totally null
sets, see <cit.>.
A compact set K ⊂_d is totally null if and only if there exists a function
f ∈ A(_d) which peaks on K.
If d' ≥ d, then we may regard _d ⊂_d' in an obvious way. Thus,
every regular Borel measure μ on _d admits a trivial extension μ̃ to _d' defined by
μ̃(A) = μ(A ∩_d)
for Borel sets A ⊂_d'.
The following easy lemma shows that it suffices to prove Theorem <ref>
in the case d=2.
Let μ be a Borel probability measure on _d, let d' ≥ d and let μ̃
be the trivial extension of μ to _d'.
* If μ is _d-Henkin, then μ̃ is _d'-Henkin.
* If the support of μ is a totally null subset of _d, then
the support of μ̃ is a totally null subset of _d'.
(a)
Let P: ^d'→^d denote the orthogonal projection onto the first d coordinates.
It follows from the concrete description of the Drury-Arveson space at the beginning
of Subsection <ref> that
V: H^2_d → H^2_d', f ↦ f ∘ P,
is an isometry. Moreover, V^* M_φ V = M_φ|__d for every
φ∈_d', so that
_d'→_d, φ↦φ|__d,
is weak-*-weak-* continuous and maps _d' into _d.
Suppose now that μ is _d-Henkin. Then there exists a weak-* continuous
functional Φ on _d which extends the integration functional given by μ,
thus
∫__d'φ d μ̃ = ∫__dφ d μ
= Φ( φ|__d)
for φ∈_d'. Since the right-hand side defines a weak-* continuous
functional on _d', we see that μ̃ is _d'-Henkin.
(b) We have to show that if K ⊂_d is totally null,
then K is also totally null as a subset of _d'. But this is immediate from Theorem <ref>
and the observation that if f ∈ A(_d) peaks on K, then f ∘ P ∈ A(_d') peaks
on K as well, where P denotes the orthogonal projection from (a).
§ THE CASE D=4
The goal of this section is to prove Theorem <ref> in the case d=4
(and hence for all d ≥ 4 by Lemma <ref>).
To prepare and motivate the construction of the measure μ,
we begin by considering analogues of
Henkin measures for more general reproducing kernel Hilbert spaces
on the unit disc. Suppose that is a reproducing kernel Hilbert space on the unit disc
with reproducing kernel of the form
K(z,w) = ∑_n=0^∞ a_n (z w)^n,
where a_0 = 1 and a_n > 0 for all n ∈.
If ∑_n=0^∞ a_n < ∞, then the series above
converges uniformly on ×, and becomes a reproducing kernel
Hilbert space of continuous functions on in this way.
In particular, evaluation at 1
is a continuous functional on and hence a weak-* continuous functional on ().
Indeed,
φ(1) = ⟨ M_φ 1 , K(·,1) ⟩_
for φ∈().
Therefore, the Dirac measure δ_1 induces
a weak-* continuous functional on (), but it is not absolutely
continuous with respect to Lebesgue measure, and hence not Henkin. (In fact, every regular Borel measure on the unit circle induces a weak-* continuous functional on ().)
The main idea of the construction is to embed a reproducing kernel Hilbert space
as in the preceding paragraph into H^2_4.
To find the desired space on the disc, recall that
by the inequality of arithmetic and geometric means,
sup{ |z_1 z_2 ⋯ z_d| : z ∈_d } = d^{-d/2},
and the supremum is attained if and only if |z_1| = ⋯ = |z_d| = d^{-1/2}.
Hence,
r: _4→, z ↦ 16 z_1 z_2 z_3 z_4,
indeed takes values in , and it maps _4 onto .
For n ∈, let
a_n = ||r(z)^n||^-2_H^2_4,
and let be the reproducing kernel Hilbert space on
with reproducing kernel
K(z,w) = ∑_n=0^∞ a_n (z w)^n.
The map
→ H^2_4, f ↦ f ∘ r,
is an isometry, and
∑_n=0^∞ a_n < ∞.
It is well known that for any space on with kernel as in Equation
(<ref>), the monomials z^n form an orthogonal basis
and ||z^n||^2 = 1/a_n for n ∈. Thus, with our choice of (a_n) above,
we have
||z^n||^2 = 1/a_n = ||r(z)^n||^2_H^2_4.
Since the sequence r(z)^n is an orthogonal sequence in H^2_4,
it follows that V is an isometry.
Moreover, an application of Lemma <ref> shows that
||r(z)^n||^2_{H^2_4} = 4^{4n} ||(z_1 z_2 z_3 z_4)^n||^2_{H^2_4}
≃ (n+1)^{3/2},
so that a_n ≃ (n+1)^{-3/2}, and hence ∑_{n=0}^∞ a_n < ∞.
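For illustration only, both conclusions can be checked numerically from the closed form a_n = (4n)!/(256^n (n!)^4), which follows from ||r(z)^n||^2 = 16^{2n} ||(z_1 z_2 z_3 z_4)^n||^2 and the monomial norms above. A minimal Python sketch (the function name a is ours):

    from math import factorial

    def a(n):
        # a_n = ||r(z)^n||^{-2} = (4n)! / (256^n * (n!)^4) on H^2_4
        return factorial(4 * n) / (256 ** n * factorial(n) ** 4)

    print(sum(a(n) for n in range(400)))   # partial sums stabilize near 1.3
    print(a(100) * 101 ** 1.5)             # ~ 0.13, i.e. a_n ≃ (n+1)^{-3/2}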
Let
h: ^3 →_4,
(ζ_1,ζ_2,ζ_3) ↦
1/2 (ζ_1, ζ_2, ζ_3, ζ̄_1 ζ̄_2 ζ̄_3)
and observe that the range of h is contained in r^-1({1}).
Let μ be the pushforward of the normalized Lebesgue measure m on ^3 by h,
that is,
μ(A) = m ( h^-1 (A))
for a Borel subset A of _4. We will show that
μ satisfies
the conclusion of Theorem <ref>.
The support of μ is totally null.
Let X = r^-1({1}), which is compact,
and define f = (1+r)/2. Then f belongs to the unit ball of A(_4)
and peaks on X,
hence X is totally null by Theorem <ref>.
Since h(^3) ⊂ X, the support of μ is contained in X, so the
support of μ is totally null as well.
The following lemma finishes the proof of Theorem <ref> in the
case d=4.
The measure μ is _4-Henkin.
Let α∈^4 be a multi-index. Then
∫__4 z^α d μ
= ∫_^3 z^α ∘ h dm
= 2^{-|α|} ∫_^3 ζ_1^{α_1 - α_4} ζ_2^{α_2 - α_4} ζ_3^{α_3 - α_4} dm.
This integral is zero unless α_1 = α_2 = α_3 = α_4 = k,
in which case it equals 2^{-4k}.
Let g = K(·,1) ∘ r, where K denotes the reproducing kernel of . Then g ∈ H^2_4
by Lemma <ref>, and it is a power series in z_1 z_2 z_3 z_4. Thus,
z^α is orthogonal to g unless α_1 = … = α_4 = k, in which
case
⟨ z^α, g ⟩_{H^2_4} =
2^{-4k} ⟨ r(z)^k, g ⟩_{H^2_4} = 2^{-4k} ⟨ z^k, K(·,1) ⟩_
= 2^{-4k},
where we have used Lemma <ref> again.
Hence,
∫__4φ d μ = ⟨ M_φ 1, g ⟩_H^2_4
for all polynomials φ, and hence for all φ∈_4. Since the right-hand side
obviously extends to a weak-* continuous functional in φ on _4, we see that
μ is _4-Henkin.
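The moment computation above is easy to probe by simulation. The sketch below (illustration only; the variable names are ours) draws uniform samples on the 3-torus, pushes them forward by h, with the conjugates in the last coordinate as above, and averages monomials; the diagonal moment comes out exactly 2^{-4} because r ≡ 1 on the support, while off-diagonal moments vanish up to sampling noise.

    import numpy as np
    rng = np.random.default_rng(0)

    N = 200_000
    zeta = np.exp(2j * np.pi * rng.random((N, 3)))   # uniform samples on T^3
    z = 0.5 * np.stack([zeta[:, 0], zeta[:, 1], zeta[:, 2],
                        np.conj(zeta[:, 0] * zeta[:, 1] * zeta[:, 2])], axis=1)

    def moment(alpha):
        return np.mean(np.prod(z ** np.array(alpha), axis=1))

    print(moment((1, 1, 1, 1)))   # exactly 2^{-4} = 0.0625
    print(moment((2, 1, 1, 1)))   # ~ 0, up to Monte Carlo noise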
§ THE CASE D=2
In this section, we will prove Theorem <ref> in the case d=2 and hence
in full generality
by Lemma <ref>.
To this end, we will also embed a reproducing
kernel Hilbert space on into H^2_2.
Let
r: _2→, z ↦ 2 z_1 z_2,
and observe that r maps _2 onto . For n ∈, let
a_n = ||r(z)^n||^-2_H^2_2,
and consider the reproducing kernel Hilbert space on
with reproducing kernel
K(z,w) = ∑_n=0^∞ a_n (z w)^n.
This space turns
out to be the well-known weighted Dirichlet space _1/2, which is the reproducing
kernel Hilbert space on with reproducing kernel (1 - z w)^{-1/2}. This
explicit description is not strictly necessary for what follows, but it provides
some context for the arguments involving capacity below.
The kernel K satisfies K(z,w) = (1 - z w)^{-1/2}.
The formula for the norm of monomials in Section <ref> shows that
a_n = ||r(z)^n||^{-2} = 4^{-n} (2n)!/(n!)^2 = (-1)^n \binom{-1/2}{n},
so that
K(z,w) = ∑_{n=0}^∞ (-1)^n \binom{-1/2}{n} (z w)^n
= (1 - z w)^{-1/2}
by the binomial series.
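The coefficient identity 4^{-n} (2n)!/(n!)^2 = (-1)^n \binom{-1/2}{n} invoked here is elementary but easy to misremember; a two-column numeric check (illustration only; the helper name c is ours):

    from math import comb

    def c(n):
        # (-1)^n * binom(-1/2, n), computed as prod_{k<n} (k + 1/2)/(k + 1)
        out = 1.0
        for k in range(n):
            out *= (k + 0.5) / (k + 1)
        return out

    for n in range(8):
        print(n, comb(2 * n, n) / 4 ** n, c(n))   # the two columns agree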
The analogue of Lemma <ref> in the case d=2 is the following result.
The map
_1/2→ H^2_2, f ↦ f ∘ r,
is an isometry. Moreover, a_n ≃ (n+1)^{-1/2} and ||z^n||^2_{_1/2} ≃ (n+1)^{1/2}.
As in the proof of Lemma <ref>, we see that V is an isometry.
Moreover, Lemma <ref> shows that
||z^n||^2_{_1/2} = ||r(z)^n||^2_{H^2_2} = 2^{2n} ||(z_1 z_2)^n||^2_{H^2_2} ≃ (n+1)^{1/2}
for n ∈.
The crucial difference to the case d=4 is that the
functions in _1/2 do not all extend to continuous functions on . This
makes the construction
of the measure μ of Theorem <ref> more complicated.
The following lemma provides a measure σ on the unit circle which will serve as a replacement
for the Dirac measure δ_1, which was used in the case d=4.
It is very likely that this result is well known. Since the measure σ is crucial for
the construction of the measure μ,
we explicitly indicate how such a measure on the unit circle can arise.
There exists a Borel probability measure σ on such that
* the support of σ has Lebesgue measure 0, and
* the functional
[z] →, p ↦∫_ p d σ,
extends to a bounded functional on the space _1/2.
To prove Lemma <ref>, we require the notion of capacity. Background material
on capacity can be found in <cit.>.
Let k(t) = t^{-1/2}. The 1/2-energy of a Borel probability measure ν on is defined to be
I_k(ν) = ∫_∫_ k(|x - y|) d ν(x) d ν(y).
We say that a compact subset E ⊂ has
positive Riesz capacity of degree 1/2 if there exists a Borel probability measure ν
supported on E with I_k(ν) < ∞.
Let E ⊂ be a compact set with positive Riesz capacity of degree 1/2, but
Lebesgue measure 0. For instance, since 1/2 < log 2 / log 3, the circular middle-third
Cantor set has this property by <cit.>. Thus,
there exists a measure σ on whose support is contained in E with I_k(σ) < ∞.
Then (a) holds.
To prove (b),
for n ∈, let
σ̂(n) = ∫_ z^{-n} d σ(z)
denote the n-th Fourier coefficient of σ. Since I_k(σ) < ∞,
an application of <cit.> shows that
∑_{n=0}^∞ |σ̂(n)|^2 / (n+1)^{1/2} < ∞.
Let now p be a polynomial, say
p(z) = ∑_n=0^N α_n z^n.
Then using the Cauchy-Schwarz inequality,
we see that
| ∫_ p d σ|
≤ ∑_{n=0}^N |α_n| |σ̂(-n)|
≤ ( ∑_{n=0}^N (n+1)^{1/2} |α_n|^2 )^{1/2} ( ∑_{n=0}^N |σ̂(n)|^2/(n+1)^{1/2} )^{1/2}.
Lemma <ref> shows that the first factor is dominated by C ||p||__1/2
for some constant C,
and the second factor is bounded uniformly in N by (<ref>). Thus, (b)
holds.
The last paragraph of the proof of <cit.> in fact shows that
the Cantor measure on the circular middle-thirds Cantor set
has finite 1/2-energy, thus we can take σ to be this measure.
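To make the remark concrete: assuming the circular Cantor measure is realized as the pushforward of the classical middle-thirds Cantor measure under x ↦ e^{2πix}, its Fourier coefficients have the well-known product form |σ̂(n)| = ∏_{k≥1} |cos(2πn/3^k)|, and the sum ∑_n |σ̂(n)|^2/(n+1)^{1/2} can be probed numerically. This is a rough illustration, not a proof; the function name below is ours.

    import numpy as np

    def sigma_hat_abs(n, depth=40):
        # |sigma_hat(n)| for the middle-thirds Cantor measure on the circle
        k = np.arange(1, depth + 1)
        return abs(np.prod(np.cos(2 * np.pi * n / 3.0 ** k)))

    terms = [sigma_hat_abs(n) ** 2 / (n + 1) ** 0.5 for n in range(1, 20001)]
    partial = np.cumsum(terms)
    print(partial[999], partial[9999], partial[19999])  # growth slows markedly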
Let now σ be a measure provided by Lemma <ref> and let
E be the support of σ. Let
h: × E →_2, (ζ_1,ζ_2) ↦ 1/√(2) (ζ_1, ζ̄_1 ζ_2),
and observe that the range of h is contained in r^-1(E). Define μ to be the pushforward
of m ×σ by h. We will show that μ satisfies the conclusion of Theorem <ref>.
The support of μ is totally null.
Let X = r^-1(E).
Since E has Lebesgue measure 0 by Lemma <ref>, there exists
by the Rudin-Carleson theorem (i.e. the d=1 case of Theorem <ref>) a function
f_0 ∈ A() which peaks on E.
Let f = f_0 ∘ r. Then f belongs to A(_2) and peaks on X,
so that X is totally null by Theorem <ref>.
Finally, the support of μ is contained in X, hence it is totally null as well.
The following lemma finishes the proof of Theorem <ref>.
The measure μ is _2-Henkin.
For all m,n ∈, we have
∫__2 z_1^m z_2^n d μ = 2^{-(m+n)/2} ∫_ ∫_E ζ_1^{m-n} ζ_2^n d m(ζ_1) d σ(ζ_2).
This quantity is zero unless m = n, in which case it equals
2^{-n} ∫_E ζ^n d σ(ζ).
On the other hand,
Lemma <ref> shows that there exists f ∈_1/2 such that
⟨ p,f ⟩__1/2 = ∫_E p d σ
for all polynomials p. Let g = f ∘ r. Then g
belongs to H^2_2 by Lemma <ref> and it is
orthogonal to z_1^n z_2^m
unless n = m, in which case
⟨ (z_1 z_2)^n, g ⟩_{H^2_2}
= 2^{-n} ⟨ r(z)^n, g ⟩_{H^2_2}
= 2^{-n} ⟨ z^n, f ⟩_{_1/2}
= 2^{-n} ∫_E ζ^n dσ(ζ).
Consequently,
∫__2φ d μ = ⟨ M_φ 1, g ⟩_H^2_2
for all polynomials φ, and hence for all φ∈_2, so that μ is _2-Henkin.
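As in the d = 4 case, the moments of μ can be sanity-checked by sampling: ζ_1 is uniform on the circle, ζ_2 is drawn from σ (written as the pushforward of ∑_k ε_k 2·3^{-k} with i.i.d. fair bits ε_k), and the pair is pushed forward by h with the conjugate on ζ_1 in the second coordinate. A minimal sketch, illustration only; the variable names are ours:

    import numpy as np
    rng = np.random.default_rng(1)

    N = 100_000
    eps = rng.integers(0, 2, size=(N, 40))
    x = (eps * 2.0 / 3.0 ** np.arange(1, 41)).sum(axis=1)   # ~ Cantor measure
    zeta2 = np.exp(2j * np.pi * x)                           # ~ sigma on the circle
    zeta1 = np.exp(2j * np.pi * rng.random(N))               # Lebesgue on the circle
    z1 = zeta1 / np.sqrt(2.0)
    z2 = np.conj(zeta1) * zeta2 / np.sqrt(2.0)

    print(np.mean(z1 ** 2 * z2))   # m != n: ~ 0
    print(np.mean(z1 * z2))        # m = n = 1: ~ 2^{-1} * int_E zeta d sigma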
Theorem <ref> suggests the following problem, which is deliberately stated somewhat
vaguely.
Find a measure theoretic characterization of _d-Henkin measures.
http://arxiv.org/abs/1701.07815v1 | 20170126185205 | A Modern Search for Wolf-Rayet Stars in the Magellanic Clouds. III. A Third Year of Discoveries | [
"Philip Massey",
"Kathryn F. Neugent",
"Nidia Morrell"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
*This paper includes data gathered with the 1 m Swope and 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
1Lowell Observatory, 1400 W Mars Hill Road, Flagstaff, AZ 86001; [email protected];
[email protected].
2Also Department of Physics and Astronomy, Northern Arizona University, Box 6010, Flagstaff, AZ 86011-6010.
3Las Campanas Observatory, Carnegie Observatories, Casilla 601, La Serena, Chile; [email protected].
For the past three years we have been conducting a survey for WR stars in the Large and Small Magellanic Clouds (LMC, SMC). Our previous work has resulted in the discovery of a new type of WR star in the LMC,
which we are calling WN3/O3. These
stars have the emission-line properties of a WN3 star (strong N v but no N iv), plus the absorption-line properties of an O3 star (Balmer hydrogen plus Pickering He ii, but no He i). Yet these stars are 15× fainter than an O3 V star would be by itself, ruling out these being WN3+O3 binaries. Here we report the discovery of two more members of this class, bringing the total number of these objects to 10, 6.5% of the LMC's total WR population. The optical spectra of nine of these WN3/O3s are virtually indistinguishable from each other, but one of the newly found stars is significantly different, showing a lower excitation emission and absorption spectrum (WN4/O4-ish). In addition, we have newly classified three unusual Of-type stars, including one with a strong C iii λ 4650 line, and two rapidly rotating “Oef" stars. We also “rediscovered" a low mass x-ray binary, RX J0513.9-6951, and demonstrate its spectral variability. Finally, we discuss the spectra of ten low priority WR candidates that turned out not to have He ii emission. These include both a Be star and a B[e] star.
§ INTRODUCTION
For many years, our knowledge of the Wolf-Rayet (WR) population of the Magellanic Clouds (MCs) was considered essentially complete: twelve WRs were known in the SMC <cit.> and 134 were known in the LMC <cit.>. These stars had been found by a combination of general objective prism surveys, directed searches, and accidental discoveries by spectroscopy <cit.>. However, over the years several additional WRs were found in the LMC, culminating in our own discovery of a very strong-lined WO-type <cit.>, only the second known example of this rare type of WR in the LMC. This discovery prompted us to begin a multi-year survey of both the SMC and LMC in an effort to obtain a complete census of their Wolf-Rayet population. In part this was motivated in terms of finding a more accurate value for the relative number of WC- and WN-type WRs, as this provides a key test of the evolutionary models (see, e.g., , , ). And, such a survey was timely, given our improved knowledge of the populations of other evolved massive stars in the Magellanic Clouds, such as yellow and red supergiants <cit.>, and on-going improvements in massive star models <cit.>.
It was entirely possible, of course, that our survey would fail to find much of interest. Instead, in the first year we discovered nine more WRs in the LMC <cit.>. More interesting than the numbers, however, were the type of WRs we found: six had spectra that were unlike those of any previously observed. They were also somewhat fainter in absolute visual magnitude (M_V=-2.5 to -3.0) than the previously known WRs. Their emission-line spectra were dominated by N v λλ4603,19, λ4945 and He ii λ4686, and no trace of N iv λ4058, implying a WR class of WN3. However, they also showed He ii and Balmer absorption spectra with no trace of He i, characteristic of an O3 V star. Yet, these stars were 10× too faint to be WN3+O3 V binaries, as a typical O3 V star has M_V∼ -5.5 <cit.>. Our preliminary modeling demonstrated that we could reproduce both the emission and absorption lines with a single set of physical parameters <cit.>, lending further credence to these being single objects. (Our limited number of repeat observations also failed to detect any radial velocity variations.) In our second year of the survey we detected two more of these stars <cit.>, which we are calling WN3/O3s. In addition, our survey has found the third known WO-type star in the LMC and four other WNs (including two WN+O binaries, an O3.5If*/WN5 star, and a WN11), two rare Of?p stars (magnetically braked oblique rotators; see )[See the recent followup study of these two stars plus other members of this class by <cit.>], an Onfp star <cit.>, a possible B[e]+WN binary, four Of-type supergiants, and a peculiar emission-line star (the nature of which eludes us), and other, more normal, early-type stars (Papers I and II).
Here we report on the third year of discoveries, which include two additional members of the WN3/O3 class in the LMC, bringing the number of this class to 10, 6.5% of the LMC's WR population, which now stands at a total of 154 stars. One of these WN3/O3 stars is not like the others, in that it shows somewhat lower excitation emission and absorption lines. We report the discovery of three Of stars (all unusual), and revisit a low mass x-ray source previously known to have a very broad He ii λ4686 component. We also classify 10 low ranked candidates that failed to have He ii λ4686, including one Be and one B[e] star.
§ OBSERVATIONS AND REDUCTIONS
The imaging part of our survey is being conducted using the Swope 1-m telescope on Las Campanas. Complete details are given in Papers I and II; here we summarize our observing and
reduction procedures. The
current e2v camera provides a 29.8′ (EW) × 29.7′ (NS) field of view with 4110×4096 15μm pixels, each subtending 0″.435. Our exposure times are
300 s through each of three interference filters, a WC filter centered on C iii λ4650, the strongest optical line in WC- and WO-type WRs, a WN filter centered on
He ii λ4686, the strongest optical line in WN-type WRs, and a continuum filter (CT), centered at 4750Å. All three filters have a 50Å bandpass (full width at half maximum, FWHM). We were initially assigned ten nights on the Swope (UT) 2015 Nov 15-24. However, since some of that time was lost due to clouds, Carnegie was kind enough to assign us five additional nights, Dec 25-29, which had been unscheduled. Typical image quality on both runs ranged from 1″.3-1″.9. There were 93 fields observed (or reobserved) in the LMC, and 21 in the SMC in total during this third year of our survey. There is about a 1′ overlap between adjacent fields, and so the total new coverage is 19.9 deg^2 in the LMC, and 4.5 deg^2 in the SMC.
For calibration purposes, ten bias frames were taken daily. When conditions were clear, 3-5 sky flats were obtained in bright twilight through each filter, dithering the telescope between exposures in order to allow us to filter out any stars when we combined frames. Since these twilight exposures were short (a few seconds), it was necessary to correct each of the exposures for the iris pattern of the camera shutter. The details of this procedure are given in Paper II. Since the CCD is read out through four amplifiers, each quadrant is treated separately in the preliminary reductions. Overscan and bias structure (which is practically non-existent) were removed. After these additive corrections were made, the counts in each quadrant were corrected for slight non-linearities in the CCD amplifiers as discussed in Paper II. Each quadrant was flat-fielded, and then the four sections were recombined. Accurate (0″.5) coordinates are obtained through the use of the “astrometry.net" software <cit.>.
The frames were then analyzed by first running aperture photometry on each image to identify stars that were significantly brighter in one of the two emission-line filters than in the continuum image. In addition, image subtraction was performed with the High Order Transform of Point-spread function ANd Template Subtraction (hotpants) software described by <cit.>. The resultant WC-CT and WN-CT images were examined by eye to identify WR candidates. The combination of the two techniques has proven very effective, as shown in Papers I and II.
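In essence, the photometric side of the selection reduces to flagging stars whose emission-line magnitude significantly exceeds the continuum magnitude. A schematic version in Python is shown below; the magnitudes and errors are made up for illustration and are not survey data.

    import numpy as np

    # Hypothetical instrumental magnitudes and errors in two of the filters
    m_wn = np.array([16.93, 16.23, 13.73])
    m_ct = np.array([17.20, 16.40, 13.80])
    e_wn = np.array([0.02, 0.02, 0.015])
    e_ct = np.array([0.02, 0.02, 0.015])

    wn_minus_ct = m_wn - m_ct                         # negative = line excess
    nsigma = -wn_minus_ct / np.sqrt(e_wn**2 + e_ct**2)
    for dm, s in zip(wn_minus_ct, nsigma):
        print(f"WN-CT = {dm:+.2f} mag ({s:.1f} sigma)")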
This procedure was carried out under some time pressure as we had a single night (UT 11 Jan 2016) allocated on the Baade 6.5-m Magellan telescope for follow-up spectroscopy. Despite the short time between the December imaging and January spectroscopy time, we had successfully reduced and analyzed most of the fields, with 15 WR candidates at various significance levels.
The spectroscopic observations took place with the Magellan Echellette Spectrograph (MagE) mounted on a folded port of the Baade. The instrument is described in detail by <cit.>.
MagE provides complete coverage from the atmospheric cutoff (∼3200Å) to ∼1μm. We used it with a 1″ slit, which then yielded a resolving power R of 4100.
The night was clear, and the seeing was 0″.6-0″.7. As we described in Paper II (and in more detail in <cit.>),
the pixel-to-pixel sensitivity variations are quite small, and we have found we can achieve better results by not flat-fielding the data in the blue. The data from this night were flat-fielded in the red, primarily to remove fringing. After extraction, wavelength calibration, and fluxing, the individual orders were combined. Our typical signal-to-noise (S/N) in the blue classification region is 100-150 per spectral resolution element.
Our initial spectrum of LMCe058-1 was intriguing but quite noisy, and we were fortunate to be able to squeeze in
MagE observations of the source on three additional nights throughout the following months, as described further below.
Analysis of a few remaining fields not fully analyzed before our January night revealed one other high-significance WR candidate (LMCe113-1), which was observed with MagE on UT 29 March 2016 at the start of the night.
§ DISCOVERIES
Our spectroscopy identified six stars among our candidates that have He ii λ4686 emission. Ten other candidates proved to be
(mainly) B-type stars, not of immediate interest to us, but discussed further below. This success rate is similar to our second year (Paper II). We discuss our “winners" and “losers" below.
§.§ Newly Found Wolf-Rayet Stars
Our most exciting discovery was identifying two new members of the class of WRs that
we are calling WN3/O3s (Papers I and II). Other members of this class
show the strong N v λλ4603,19 doublet, N v λ4945 and He ii λ4686 emission lines characteristic of a WN3 star, and a He ii and Balmer line absorption spectrum typical of an O3 star. A composite spectrum can be immediately ruled out as these stars have M_V=-2.3 to -3.0, 15× fainter than an O3 V would be by itself (M_V∼ -5.5; <cit.>) and even slightly fainter than a normal WN3
star would be by itself (M_V∼ -3.8; <cit.>).
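The luminosity argument is simple arithmetic; with the apparent LMC distance modulus of 18.9 adopted in Table <ref>, a composite is excluded. A sketch (the single-star M_V values are the ones quoted above; the function names are ours):

    import numpy as np

    def M_V(V, dm_app=18.9):            # apparent LMC distance modulus
        return V - dm_app

    def combined(M1, M2):               # magnitude of an unresolved pair
        return -2.5 * np.log10(10 ** (-0.4 * M1) + 10 ** (-0.4 * M2))

    print(M_V(17.03))                   # LMCe078-3: -1.9 before extra reddening
    print(combined(-3.8, -5.5))         # WN3 + O3 V pair: -5.7, ~15x too bright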
We list the properties of these two WN3/O3 stars in Table <ref> and illustrate their spectra in Figs. <ref> and <ref>. One thing we have been strongly struck by is how similar all the previous members of this group have been; their spectra are nearly indistinguishable; see, e.g., Fig. 7 in Paper I and Fig. 3 in Paper II.
LMCe078-3 is now the ninth example, with a spectrum nearly identical to one of the prototypes of this class, LMC170-2, also illustrated here in
Figs. <ref> and <ref>.
However, the spectrum of our newly discovered tenth member of the WN3/O3 class, LMCe055-1, is substantially different. Both the emission and the absorption are indicative of a lower excitation temperature. The spectrum shows N iv λ4058 emission, which is completely missing from the other
members of this group. Furthermore, weak He i λ4471 is visible in absorption. The N v λλ4603,19 doublet lines have P Cygni profiles, with a strong absorption component to the blue side of the emission. We would more properly call this star a
WN4/O4 rather than a WN3/O3!
Further complicating the picture is the fact that this star was identified as an eclipsing binary by OGLE with a 2.159074 day period <cit.>.
Might this star simply be a normal WN4+O4 V binary? We can rule this out immediately using the same argument as for “normal" WN3/O3 stars: as shown in Table <ref>, LMCe055-1 has an absolute
visual magnitude of only M_V=-2.8, while an O4 V star is expected to have M_V∼ -5.5 <cit.>. Furthermore, the OGLE light
curve itself is inconsistent with the presence of a massive companion. As shown in Fig. <ref> there is no sign of the ellipsoidal variations
one sees in massive binaries with similarly short periods, such as DH Cep <cit.>[We are indebted to our colleague Dr. Laura Penny for commenting on the OGLE light curve, and pointing out the implications of the comparison with DH Cep.]. We are continuing to investigate this star, including a radial velocity study.
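For reference, folding a light curve on the OGLE period is a one-line operation; ellipsoidal modulation from a massive companion would appear as a smooth, roughly cos(4πφ) variation between eclipses. The sketch below uses synthetic epochs and constant magnitudes as placeholders for the OGLE photometry.

    import numpy as np
    rng = np.random.default_rng(42)

    P = 2.159074                                  # days; OGLE LMC-ECL-3548
    hjd = np.sort(rng.uniform(0.0, 400.0, 500))   # placeholder epochs
    mag = np.full_like(hjd, 16.15)                # placeholder magnitudes
    phase = (hjd / P) % 1.0
    order = np.argsort(phase)
    folded = np.column_stack([phase[order], mag[order]])
    # Inspect `folded`: eclipses only, or a double-humped ellipsoidal term?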
§.§ Other Emission Line Stars
§.§.§ Of-type stars
Our survey is sufficiently sensitive that we detect many Of-type stars. Although such stars are significantly brighter than our WN3/O3s, the equivalent widths (EWs) of their
He ii λ4686 emission are sufficiently small that these are among the hardest stars for us to detect (see, e.g., Fig. 9 in Paper II). Here we discuss the discovery of three of these stars.
The spectrum of LMCe078-1 is shown in Fig. <ref>. We classify the star as O6 Ifc, where the “c" is required given the strong presence of C iii λ4650 emission <cit.> in addition
to the strong N iii λλ4634,42 and He ii λ4686 lines that result in the luminosity “If" classification <cit.>.
Subsequent to our spectroscopy, but prior to this publication, <cit.> also classified this star (their star #271)
as an O5.5 Iaf, in substantial
agreement with our classification here, although the presence of the C iii emission went unnoticed. (The line is only marginally visible in the on-line version of their spectrum, due to their lower S/N.)
Another Of-type star, LMCe078-2, is shown in Fig. <ref>. It too is unusual: the He ii λ4686 emission shows a central reversal (absorption),
and the absorption lines are extremely broad, with v sin i ∼ 385 km s^-1. These are the classic characteristics of “Oef" stars <cit.>,
also known as “Onfp" stars <cit.>. Based upon the relative He ii and He i absorption line strengths, we classify
the star as O5nfp. Given the weakness of the emission, it is rather remarkable that we detected this star in the first place.
Fortunately the N iii λλ4634,42 emission doublet is nicely within the bandpass of the WC filter, and so this star was considered of high significance as it was brighter in both emission-line filters relative to the continuum.
In addition, Onfp stars are known to have variable He ii strengths, and possibly we detected this star when the line was stronger. We note that this star too may show signs of C iii λ4650 as well, but the broadness of the lines precludes a definitive assessment.
Our third Of-type star, LMCe113-1, is yet another Onfp star, which we classify as O7.5nfp. As shown in Fig. <ref>, He ii λ 4686 has a central reversal. The emission here is quite weak, and we are again
surprised (but pleased) that we detected it; without the N iii λλ4634,42 emission we probably would not have. The absorption lines are quite broad, with
v sin i ∼ 300 km s^-1. The star is identified in the <cit.> catalog as Sk-67^∘3 according to Brian Skiff's on-line spectral catalog[http://cdsbib.u-strasbg.fr/cgi-bin/cdsbib?2014yCat....1.2023S], and was classified
as B0.5 in <cit.> based upon low resolution objective prism plates.
§.§.§ RX J0513.9-6951 Revisited
One of the more interesting spectra we've encountered is that of LMCe058-1, shown in Fig. <ref>. The star is known to be a super-soft x-ray
source, RX J0513.9-6951, and its optical spectrum has been previously described by <cit.>. The star is variable both in the x-ray region <cit.> and in the optical <cit.>, with a Harvard Variable designation, HV 5682. <cit.> describe the spectrum as typical of that of a low-mass X-ray binary (LMXB), with narrow emission components of He ii and the Balmer lines, along with N iii, C iii, and O vi <cit.>. The He ii λ4686 profile clearly has two components, a narrow component and a broad component. <cit.> attributes the broad component
to the high velocities of the inner part of the accretion disk. <cit.> and <cit.> both note that the optical spectrum is very similar to that of Cal 83, the prototype of the LMXBs <cit.>.
We reobserved this star as we were intrigued by the description of a broad He ii λ4686 emission component. Might this be a previously unrecognized Wolf-Rayet star with a disk? Alas, the early descriptions are an accurate match to our optical spectrum; although He ii λ4686 has a broad component, there are no other WR signatures present. Qualitatively, the optical spectrum is very similar to that described over twenty years ago; compare Fig. 2 in <cit.> to our MagE spectrum in Fig. <ref>. We obtained four spectra of this star throughout a four-month period and found dramatic changes in the emission line equivalent width; for instance, the EW of He ii ranged from -5Å to -25Å. (<cit.> reported an EW of -16Å.) Is this variability due to changes in the emission line fluxes or just to the change in the continuum level affecting the EWs? To answer this, we turned to the fluxed
versions of our spectra. As shown in Fig. <ref>, it is clear that although the continuum flux varies, the emission line fluxes vary even more; the
two appear to be correlated, in that the emission lines are their strongest (both in terms of EWs and fluxes) when the star is brightest.
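The distinction drawn here follows from the definition of the equivalent width: the line flux is F_line = -EW × F_cont, so a changing EW can reflect either factor. A toy check (the continuum levels below are invented for illustration):

    ews = [-5.0, -25.0]               # He II 4686 EWs (Angstroms), as measured
    fconts = [2.0e-16, 3.5e-16]       # hypothetical continuum flux densities
    for ew, fc in zip(ews, fconts):
        print(f"F_line = {-ew * fc:.2e} erg s^-1 cm^-2")
    # Here the line flux changes by ~9x while the continuum changes by ~1.75x.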
§.§ Non-Emission-Line Stars
In assigning observing priorities, we ranked our candidates on a 1-4 scale, with 1 being the most potentially interesting.
All of the emission-lined stars discussed above had been classified as a 1 or a 2. All of our remaining candidates that were observed
were ranked as either a 3 or a 4,
and none of them proved to have He ii λ4686 emission. These priority 3-4 candidates were invariably B-type stars (including both a Be and a B[e] star) plus one A0 I. Given the large number of stars on our frames, and our desire to be thorough, a few low-significance candidates without emission are expected just on statistical grounds. Although these stars
are not of immediate interest to us, we list their properties in Table <ref>.
These stars were classified using the relative strengths of Si iv λ4089, Si iii λ4553, and Si ii λ4128. For stars later than B2, Mg ii λ4481 and He i λ4471 are secondary indicators. (For more details, see <cit.>.) In this, we made reference to the <cit.> atlas. A difficulty, particularly for the SMC B stars, is that the metal lines are almost undetectable, and thus our classifications are particularly uncertain even at our S/N. We confess to being influenced by our knowledge of the absolute visual magnitudes in assigning luminosity classes.
§ SUMMARY AND FUTURE WORK
We report here the detection and spectroscopic observations of 16 WR candidates from our late 2015/early 2016 observing season, the third year of our survey. All six of the higher ranked candidates proved to have He ii λ4686 emission. Of these, two are members of the WN3/O3 class, bringing to 10 the total number of this class. All ten are in the LMC, and they make up >6% of the LMC's known WR population, which now
numbers 154. The spectral features of nine of these WN3/O3 stars are remarkably homogeneous, but one of the newly found members here is not like the others, showing a lower excitation emission spectrum (specifically,
N iv λ4058), P Cygni profiles for the N v λλ4603,19 emission, and weak He i absorption. Spectral modeling of all of these stars is in progress, with preliminary results reported by <cit.>.
In addition, we have found three unusual Of-type stars, one of which is a member of the rare “Ifc" luminosity class showing
relatively strong C iii at λ4650, while the other two are rapidly rotating “Oef"-type stars (now often designated as
“nfp.") We also present modern spectra of the low-mass x-ray binary RX J0513.9-6951, and note that qualitatively they look very similar to the discovery spectra of <cit.> and <cit.>. Our repeated observations demonstrate that there are significant changes in the emission-line intensities, unsurprising given the star's large photometric variations.
Ten of the candidates showed B-type or early A-type spectra; these were all lower ranked candidates and were observed for completeness. One turned out to be a previously unrecognized Be star, and another a newly found B[e] star.
We are now prepared to begin the fourth year of our survey, with 26% of the SMC and 12% of the LMC left to observe. We do not know what our final year of discoveries will bring us, but if past performance is any indication of future results, we are sure to find something interesting!
We thank the referee, Dr. Nolan Walborn, for useful comments which improved this paper. We are grateful for the excellent support we always receive at Las Campanas Observatory, as well as the generosity of the Carnegie Observatory and Steward Observatory Arizona Time Allocation Committees. Support for this project was provided by the National Science Foundation through AST-1008020 and AST-1612874, and through Lowell Observatory. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A&AS 143, 23. The Catalog of Stellar Spectral Classification prepared by our colleague at Lowell Observatory, Brian Skiff, proved particularly useful, as always. We also made use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the NSF.
Facilities: Magellan: Baade (MagE spectrograph), Swope (e2v imaging CCD)
Newly Found Wolf-Rayet Stars

                                                            WN - CT       He II λ4686
ID^a        α_2000       δ_2000       V^b    B-V^b   CT     mag    σ      log(-EW^d)  FWHM (Å)  M_V^c   Sp. Type  Comment
LMCe078-3   05 41 17.50  -69 06 56.2  17.03  +0.03   17.2   -0.27  12.0   1.2         16        -2.2^e  WN3/O3
LMCe055-1   04 56 48.72  -69 36 40.3  16.15  -0.10   16.4   -0.17   8.4   0.8         17        -2.8    WN4/O4    OGLE LMC-ECL-3548

^a Designation from the current survey. We have denoted the e2v fields with a small “e" to distinguish them from our numbering system from Paper I, i.e., LMCe159 is distinct from LMC159. We plan to impose less idiosyncratic designations once our survey is complete.
^b Photometry from <cit.>.
^c We assume an apparent distance modulus of 18.9 for the LMC, corresponding to a distance of 50 kpc <cit.> and an average extinction of A_V=0.40 <cit.>.
^d “EW" is the equivalent width, measured in Å.
^e Assumes an extra 0.3 mag of extinction at V given its B-V color.
Other Emission-Lined Stars

                                                            WN - CT       He II λ4686
ID^a        α_2000       δ_2000       V^b    B-V^b   CT     mag    σ      log(-EW^d)  FWHM (Å)  M_V^c   Sp. Type  Comment
LMCe078-1   05 37 29.63  -69 14 52.0  13.49  -0.02   13.8   -0.07   3.3   0.2         15        -5.4    O6 Ifc    O5.5 Iaf in <cit.>
LMCe078-2   05 37 27.84  -69 23 52.9  13.38  -0.16   13.3   -0.06   3.2   -0.2        ...       -5.5    O5 nfp
LMCe113-1   04 49 28.01  -67 42 39.8  13.18  -0.23   13.2   -0.09   4.4   -0.9        ...       -5.7    O7.5nfp
LMCe058-1   05 13 50.80  -69 51 47.6  16.71  -0.05   17.1   -0.35  15.3   0.7^e       14^d      -2.2    LMXB      RX J0513.9-6951

^a Designation from the current survey. We have denoted the e2v fields with a small “e" to distinguish them from our numbering system from Paper I, i.e., LMCe159 is distinct from LMC159. We plan to impose less idiosyncratic designations once our survey is complete.
^b Photometry from <cit.>.
^c We assume an apparent distance modulus of 18.9 for the LMC, corresponding to a distance of 50 kpc <cit.> and an average extinction of A_V=0.40 <cit.>.
^d “EW" is the equivalent width, measured in Å.
^e The He ii λ4686 line consists of a broad component (WN star?) and a narrow component (disk?). The values given
in the table are for the broad component. The narrow component has a log(-EW) of 0.8 and a FWHM of 2.8Å. The total log(-EW) of the line is 1.0.
Stars without He ii λ4686 Emission

ID^a        α_2000      δ_2000       V^b    B-V^b   M_V^c  Sp. Type       Comment
SMCe033-1   0:45:28.28  -73:56:36.6  14.98  -0.14   -4.1   B0.5 III       [M2002] SMC 5611; B2 (II)
SMCe055-1   0:29:46.91  -73:08:40.6  15.65  -0.06   -3.5   B1-B1.5 IIIe   Weak Si III, no Mg II; Balmer emission
LMCe005-1   5:31:31.76  -71:56:10.1  16.27  -0.07   -2.6   B2 V           Si III and Mg II
LMCe027-1   4:50:47.05  -70:37:58.7  15.31  -0.10   -3.6   B5 III         [M2002] LMC 8808
LMCe029-1   4:59:01.99  -70:48:53.0  15.37  -0.03   -3.5   B0.5 V[e]      [M2002] LMC 41535; Balmer and Fe II (?) emission
LMCe050-2   5:46:05.71  -69:59:57.2  16.65  +0.20   -2.3   A0 V:          No He I
LMCe117-1   5:06:13.53  -67:49:11.3  15.96  +0.09   -2.9   B2 V           Si III and Mg II
LMCe117-2   5:05:43.53  -67:53:53.0  16.85  -0.05   -2.0   B2 V           Si III and Mg II
LMCe141-1   4:51:19.42  -66:52:18.9  16.90  +0.03   -2.0   B8 III         Mg II stronger than He I
LMCe155-1   5:03:37.35  -66:33:19.5  16.42  -0.18   -2.5   B2 V-III       Weak Mg II and Si III

^a Designation from the current survey. We have denoted the e2v fields with a small “e" to distinguish them from our numbering system from Paper I, i.e., LMCe159 is distinct from LMC159. We plan to impose less idiosyncratic designations once our survey is complete.
^b Photometry from <cit.> for SMC members, and from <cit.> for LMC members.
^c For the SMC, we assume an apparent distance modulus of 19.1, corresponding to a distance of 59 kpc <cit.>, and an average extinction of A_V=0.30 <cit.>. For the LMC, we assume an apparent distance modulus of 18.9, corresponding to a distance of 50 kpc <cit.>, and an average extinction of A_V=0.40 <cit.>.
http://arxiv.org/abs/1701.07792v1 | 20170126174647 | On Lattice Calculation of Electric Dipole Moments and Form Factors of the Nucleon | [
"M. Abramczyk",
"S. Aoki",
"T. Blum",
"T. Izubuchi",
"H. Ohki",
"S. Syritsyn"
] | hep-lat | [
"hep-lat",
"nucl-th"
] |
Physics Department, University of Connecticut, Storrs, CT 06269, USA
Center for Gravitational Physics, Yukawa Institute for Theoretical Physics,
Kyoto University, Kyoto 606-8502, Japan
Physics Department, University of Connecticut, Storrs, CT 06269, USA
RIKEN/BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA
RIKEN/BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA
RIKEN/BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA
Jefferson Lab, 12000 Jefferson Ave, Newport News, VA 23606, USA
Kavli Institute for Theoretical Physics, UC Santa Barbara, CA 93106, USA
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA
11.15.Ha, 12.38.Gc, 12.38.Aw, 21.60.De
RBRC-1226
We analyze commonly used expressions for computing the nucleon electric
dipole form factors (EDFF) F_3 and moments (EDM) on a lattice
and find that they lead to spurious contributions from the Pauli form factor F_2
due to inadequate definition of these form factors when parity mixing of lattice nucleon
fields is involved.
Using chirally symmetric domain wall fermions, we calculate the proton and the neutron EDFF
induced by the CP-violating quark chromo-EDM interaction using the corrected expression.
In addition, we calculate the electric dipole moment of the neutron using background
electric field that respects time translation invariance and boundary conditions,
and find that it decidedly agrees with the new formula but not the old formula for F_3.
Finally, we analyze some selected lattice results for the nucleon EDM and observe that after
the correction is applied, they either agree with zero or
are substantially reduced in magnitude, thus reconciling their difference from
phenomenological estimates of the nucleon EDM.
§ INTRODUCTION
The origin of nuclear matter can be traced back to the excess of nucleons
over antinucleons in the early Universe; this excess is one of the greatest puzzles
in physics, known as the baryonic asymmetry of the Universe (BAU).
One of the required conditions for the BAU is violation of the CP symmetry.
In the Standard Model (SM), the CKM matrix phases lead to CP violation in weak interactions,
but their magnitudes are not sufficient to explain the BAU,
and signs of additional CP violation are actively sought in experiments.
The most promising ways to look for CP violation are measurements of
electric dipole moments (EDM) of atoms, nucleons, and nuclei.
In particular, the Standard Model prediction for the neutron EDM is five orders
of magnitude below the current experimental bound, and represents a negligible background.
Near-future EDM experiments plan to improve this bound by 2 orders of magnitude,
and are capable of constraining various Beyond-the-Standard-Model (BSM) extensions
of particle physics, purely from low-energy nuclear and atomic high-precision experiments.
Knowledge of nucleon structure and interactions is necessary to interpret these
experiments in terms of quark and gluon effective operators and put constraints
on proposed extensions of the Standard Model, in particular SUSY and GUT models
as sources of additional CP violation.
Connecting the quark- and gluon-level to hadron-level effective
interactions is an urgent task for lattice QCD (an extensive
review of EDM phenomenology can be found in Ref. <cit.>).
The proton and the neutron can have electric dipole moments only if the CP symmetry of
the Standard Model Lagrangian is broken by additional P-, T-odd interactions.
The only such dimension-4 operator is the QCD θ̅-angle
(θ̅ stands for the physically-relevant combination of the QCD θ
angle and quark mass phases).
The θ̅-induced nucleon EDMs (nEDMs) have been calculated on a lattice
from energy shifts in a uniform background electric
field <cit.>
or extracting the P-odd electric dipole form factor (EDFF) F_3(Q^2)
from nucleon matrix elements of the vector current in the CP-violating vacuum <cit.>.
Nucleon EDMs
have been studied using QCD
sum rules, quark models, and chiral perturbation theory (see Refs. <cit.>
to name a few).
On a lattice, quark EDM-induced nucleon EDMs have recently been
computed in a partially-quenched framework <cit.>.
Another important dimension-5(6)[
These operators are sometimes referred to as “dimension-6” because in the SM they
contain a factor of the Higgs field.]
operator is the CP-odd quark-gluon interaction,
also known as the chromo-electric dipole moment (cEDM)
_cEDM = i ∑_{ψ=u,d} (δ̃_ψ/2) ψ̅ (T^a G^a_{μν}) σ^{μν} γ_5 ψ ,
and calculations of cEDM-induced nEDMs have recently started using Wilson
fermions <cit.>.
In this paper, we report several important achievements in studying nucleon EDMs
on a lattice.
First, we argue that the commonly accepted methodology for computing electric
dipole form factors of spin-1/2 particles on a lattice has a problem identifying the electric
dipole moment form factor.
In particular, in the standard analysis of the nucleon-current correlators <cit.>, the electric dipole form factor
F_3 receives a large and likely dominant contribution from spurious mixing with
the Pauli form factor F_2.
The energy shift methods <cit.> are not affected by
such mixing, but their precision has not been sufficient to detect the discrepancy.
This problem affects all the previous lattice calculations of the nucleon EDFFs and EDMs from
nucleon-current correlators, including those studying the θ̅-angle <cit.>,
as well as the more recent ones studying the chromo-EDM <cit.>.
We demonstrate the problem formally in Sec. <ref>
and also derive the correction to the results of Refs. <cit.>, subtracting the spurious mixing with F_2.
In addition, in Sec. <ref> we study the energy shift of a neutral
particle on a Euclidean lattice in uniform background electric field.
We introduce the uniform electric field preserving translational invariance and periodic
boundary conditions on a lattice <cit.>.
In order to satisfy these conditions, the electric field value has to be analytically continued to
the imaginary axis upon Wick rotation from Minkowski to Euclidean space, and we demonstrate that
the eigenstates of a fermion having an EDM are shifted by a purely imaginary value.
In Sec. <ref>, we apply this formalism to the analysis of neutron correlators
computed in the presence of the quark chromo-EDM interaction (<ref>).
Calculation of the neutron EDM in the background field is independent of parity-mixing
ambiguities, which allows us to validate our new formula for the EDFF F_3 numerically.
The difference is evident only if the nucleon “parity-mixing” angle α is large,
α ≳ 1.
The calculations with quark chromo-EDM generate very strong parity mixing compared to the
θ̅-angle, which is beneficial for our numerical check.
In Section <ref> we calculate the proton and neutron EDFFs F_3p,n(Q^2)
induced by the quark chromo-EDM interaction (<ref>), as well as the regular
-even Dirac and Pauli form factors F_1,2.
In Sec. <ref> we compare the EDM results from the form factor and
the energy-shift calculations,
providing a numerical confirmation of the validity of our new EDFF analysis.
Finally, in Section <ref> we analyze some select results for the nucleon EDM
induced by the θ̅-angle available in the literature <cit.> and attempt to correct them according to our findings.
§ CP-ODD FORM FACTORS OF SPIN-1/2 PARTICLE
§.§ Form factors and parity mixing
In this section we argue that the ubiquitously used expression for computing the CP-odd electric
dipole form factor F_3 on a lattice does not correspond to the electric dipole moment
measured in experiments, and leads to a finite and perhaps dominant contribution from the Pauli
form factor F_2 to the reported values of the EDFF F_3 and the EDMs of the proton and the neutron.
First, we recall the lattice framework for the calculation of the CP-violating form factor F_3,
first introduced in Ref. <cit.> and later used without substantial
changes in the subsequent papers <cit.> studying the QCD θ-term, as well as in more recent
developments <cit.> studying the quark chromo-EDM.
To compute nucleon form factors on a lattice, one evaluates nucleon two- and three-point functions
(see Fig. <ref>)
in the presence of CP-violating (CPv) interactions
C_NN̅^CPv(p⃗,t)
= ∑_x⃗ e^-ip⃗·x⃗ ⟨ N(x⃗,t) N̅(0)⟩_CPv ,
C_NJN̅^CPv(p⃗^',t_sep; q⃗,t_op)
= ∑_y⃗,z⃗ e^-ip⃗^'·y⃗ + iq⃗·z⃗ ⟨ N(y⃗,t_sep) J^μ(z⃗,t_op) N̅(0)⟩_CPv .
The subscript CPv indicates that these correlation functions are evaluated in the CP-violating
vacuum, either with a finite value of the relevant CPv coupling or with an infinitesimal one,
i.e., performing a first-order Taylor expansion of the correlation functions.
As argued in Ref. <cit.>, as well as in earlier model
calculations <cit.>, the CPv background leads to a chiral phase α in the
nucleon mass in the Dirac equation that governs the on-shell nucleon fields N, N̅,
(i∂ - m_N^' e^-2iαγ_5) N(x) = 0 ,
where the real-valued m_N^'>0 is the nucleon ground state mass in the new vacuum.
The spinor wave functions ũ_p, ũ̅_p for the new ground states,
⟨Ω|N|p,σ⟩_CPv
= Z_N^' ũ_p,σ
= Z_N^' e^iαγ_5 u_p,σ ,
also satisfy the same Dirac equation,
(p - m_N^' e^-2iαγ_5) ũ_p
= (p - m_N^' e^-2iαγ_5) e^iαγ_5 u_p = 0 ,
where the chirally-rotated spinors ũ_p, ũ̅_p carry a
Lorentz-invariant phase similar to that of the mass term,
ũ = e^iαγ_5 u ,
ũ̅ = u̅ e^iαγ_5 ,
where the regular spinors u_p, u̅_p satisfy the regular Dirac equation with a real-valued
nucleon mass,
(p - m_N^') u_p = 0 ,
u̅_p (p - m_N^') = 0
From the above equation (<ref>) it also follows that the spinors u_p,
u̅_p transform under spatial reflection (parity P) as regular spinors,
γ_4 u_{p=(p⃗,E)} = u_{p̃=(-p⃗,E)} .
Below we will discuss correlation functions on a Euclidean lattice, which depend on the
Wick-rotated 4-momentum and are more conveniently expressed using the Euclidean matrices
[γ^μ]_E (<ref>).
Whenever confusion may arise, we will explicitly specify the type,
M (Minkowski) or E (Euclidean), of γ-matrices and 4-vectors
(see App. <ref> for details).
The Euclidean versions of the Dirac equations (<ref>)
for the nucleon spinors are
(ip_E + m_N^') u_p = 0 ,
u̅_p (ip_E + m_N^') = 0 ,
where (-ip_E) = (-i) p^μ_E [γ_μ]_E
= E [γ^4]_E - ip⃗·[γ⃗]_E,
in which the Euclidean on-shell 4-momentum p^μ_E = (p⃗, iE)
is contracted with Euclidean γ-matrices and
E = √(m_N^'^2 + p⃗^2) is the real-valued on-shell energy of the nucleon.
Due to the chiral phase (<ref>), the nucleon propagator on a
lattice (<ref>) contains chiral phases e^iαγ_5.
Keeping only the ground state and omitting the exponential time dependence for simplicity,
we get
C_NN̅(p⃗,t)|_g.s. ∼ ∑_σ ũ_p,σ ũ̅_p,σ / 2E_N^'
= (-ip_E + m_N^' e^2iαγ_5) / 2E_N^'
= e^iαγ_5 [(-ip_E + m_N^') / 2E_N^'] e^iαγ_5 .
Analogously, the expression for the nucleon-current correlator (<ref>)
contains the phases e^iαγ_5:
C_NJN̅(p⃗^',t_sep; q⃗,t_op)|_g.s. ∼ ∑_σ^',σ ũ_p^',σ^' ⟨ p^',σ^' |J^μ|p,σ⟩_CPv ũ̅_p,σ
= e^iαγ_5 [∑_σ^',σ u_p^',σ^' ⟨ p^',σ^' |J^μ|p,σ⟩_CPv u̅_p,σ] e^iαγ_5 .
The problem with the commonly used expression for the three-point function comes from the fact
that the physical interpretation of a parity-mixed fermion state (<ref>)
on the lattice is not clear.
In Refs. <cit.>, it is assumed that the
nucleon matrix elements of the vector current in the CPv vacuum have the form
(in Minkowski space, up to sign conventions for F_3 and F_A)
⟨ p^',σ^' |J^μ|p,σ⟩_CPv ?= ũ̅_p^',σ^' [
F_1(Q^2) γ^μ
+ F̃_2(Q^2) iσ^μνq_ν/2m_N^'
- F̃_3(Q^2) γ_5σ^μνq_ν/2m_N^'
+ F_A(Q^2) (qq^μ - γ^μ q^2)γ_5/m_N^'^2 ] ũ_p,σ ,
where q=p^'-p, Q^2 = -[q^2]_M = -(q^4)^2 + q⃗^2,
F_1 and F̃_2 are the Dirac and Pauli form factors,
F̃_3 is the electric dipole form factor (EDFF),
and F_A is the anapole form factor
(notations F̃_2,3 are introduced to avoid confusion with the true F_2,3 below).
The matrix element expression (<ref>), however, disagrees with the
literature <cit.>,
⟨ p^',σ^' |J^μ|p,σ⟩_CPv
= u̅_p^',σ^'[
F_1(Q^2) γ^μ
+ F_2(Q^2) iσ^μνq_ν/2m_N^'
- F_3(Q^2) γ_5σ^μνq_ν/2m_N^'
+ F_A(Q^2) (qq^μ - γ^μ q^2)γ_5/m_N^'^2] u_p,σ ,
in which the vertex spin matrix
Γ^μ(p^',p)
= F_1 γ^μ + (F_2 + i F_3γ_5) iσ^μνq_ν/2m_N^'
+ F_A(qq^μ - γ^μ q^2)γ_5/m_N^'^2
is contracted with the spinors
satisfying the regular parity transformations of spinors (<ref>).
Only in this case the contribution of the form factor F_3 to the matrix element
⟨ p^',σ^' |J^μ|p,σ⟩ transforms as an axial 4-vector
so that F_3 is indeed the CP-odd coupling of the nucleon to the electromagnetic
potential <cit.>.
Let us show that this is not the case if the matrix element of the current has the
form (<ref>).
Upon spatial reflection, the true 4-vectors of momenta and current have to transform as
(p⃗^('), p^4(')) → (-p⃗^('), p^4(')) ,
(J⃗, J^4) → (-J⃗, J^4) .
while the axial vector current A^μ transforms with the sign opposite to J^μ:
(A⃗, A^4) → (A⃗, -A^4) .
The chirally-rotated spinors transform as
ũ_p⃗ → ũ_-p⃗ = e^2iαγ_5 γ_4 ũ_p⃗ ,
ũ̅_p⃗ → ũ̅_-p⃗ = ũ̅_p⃗ γ_4 e^2iαγ_5 ,
up to an irrelevant spinor-diagonal phase factor.
Finally, remembering that the spatial momentum q⃗ is also reflected and using the identities
γ_4σ^iν (-q⃗, q^4)_νγ_4 = -σ^iν(q⃗, q^4)_ν ,
γ_4σ^4ν (-q⃗, q^4)_νγ_4 = σ^4ν(q⃗, q^4)_ν ,
we observe that a combination of the F̃_2,3 form factors transforms as
e^2iα (F̃_2 + iF̃_3) →
e^-2iα (F̃_2 - iF̃_3) .
Therefore, we conclude that the axial-vector contribution to the matrix
element (<ref>) arises from the parity-odd form factor combination
Im[e^2iα (F̃_2 + iF̃_3)]
= sin(2α) F̃_2 + cos(2α) F̃_3 ,
which is different from F̃_3 if α ≠ π n.
Since the expression (<ref>) is used in lattice calculations so
ubiquitously, we present extensive arguments that it is not correct.
The form factor F_A is irrelevant for this discussion, and will be omitted[
It is also worth noting that F_A is not affected by the parity mixing, unlike F_2,3.].
In Appendix <ref> we directly show that it is the
expression (<ref>) that leads to the correct CP-odd EDM coupling
∼E⃗·S⃗,
and the forward limit Q^2→0 of form factors F_2(Q^2) and F_3(Q^2) yields the anomalous
magnetic κ and electric ζ dipole moments (in units e/(2m_N)), respectively.
In Section <ref> we calculate the mass shift of a particle governed by a
Dirac equation with chirally-rotated mass in background electric field.
In this section we offer several heuristic arguments why
expression (<ref>) is not correct.
First, revisiting the form factor expression (<ref>), we note that
the only effect of the chiral phase α is to mix the form factors F_2 and F_3 into each other,
ũ̅_p^' [
F_1 γ^μ
+ (F̃_2 + iF̃_3γ_5) iσ^μνq_ν/2m_N^' ] ũ_p
= u̅_p^' [
F_1 γ^μ
+ e^2iαγ_5 (F̃_2 + i F̃_3 γ_5)
iσ^μνq_ν/2m_N^' ] u_p ,
while the form factor F_1, as well as the omitted F_A, is independent of α.
Thus, the form factors F̃_2,3 computed in Refs. <cit.>
are linear combinations of the true form factors F_2,3,
(F_2 + i F_3 γ_5) = e^2iαγ_5 (F̃_2 + i F̃_3 γ_5) ,
or
F̃_2 = cos(2α) F_2 + sin(2α) F_3 ,
F̃_3 = -sin(2α) F_2 + cos(2α) F_3 ,
which is also consistent with Eq. (<ref>).
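As a numerical illustration of this rotation (a toy Python sketch, not the analysis code; all values invented), one can verify that an unnoticed interpolating-field phase feeds the Pauli form factor into the naively extracted EDFF, and that the inverse rotation recovers the true values:

import numpy as np

def naive_extraction(F2, F3, alpha):
    # spurious mixing of the equations above: (F2, F3) -> (F2~, F3~)
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return c * F2 + s * F3, -s * F2 + c * F3

def corrected(F2t, F3t, alpha):
    # inverse rotation: recover the true (F2, F3)
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return c * F2t - s * F3t, s * F2t + c * F3t

F2, F3, alpha = -1.8, 0.05, 0.2        # invented values
F2t, F3t = naive_extraction(F2, F3, alpha)
print(F3t)                             # ~ F3 - 2*alpha*F2 for small alpha
assert np.allclose(corrected(F2t, F3t, alpha), (F2, F3))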
It is easy to see that the effect of the phase e^iαγ_5 can be completely
removed by the field redefinition N^' = e^-iαγ_5 N.
After this transformation, the on-shell nucleon field N^' satisfies a Dirac equation
with the real-valued mass m_N^',
(i∂ - m_N^') N^'(x) = 0 .
A similar transformation of the nucleon correlators,
C_N[J]N̅ → C^'_N[J]N̅
= e^-iαγ_5 C_N[J]N̅ e^-iαγ_5 ,
removes any dependence on α altogether.
Note, however, that this is the case only if Eq. (<ref>) is used for the
nucleon matrix elements of the current.
Thus, this phase is purely conventional and similar to the operator normalization Z_N^',
in that physical quantities cannot depend on it.
In a lattice calculation, however, this phase is not known in advance and must
be determined numerically to be removed from the two- and three-point
correlators (<ref>,<ref>).
To make this point evident, suppose one calculated nucleon form factors in CP-even QCD but
using unconventional nucleon interpolating fields e^iαγ_5 N with some arbitrary
α.
If Eq. (<ref>) were used, the definition of F̃_2,3 would
depend on this arbitrarily chosen α.
Consequently, because of the spurious mixing (<ref>),
the electric dipole form factor would obtain the non-zero value F̃_3 = -F_2 sin(2α)
in the absence of any CP violation.
Analogously, the apparent nucleon magnetic moment
μ̃_N^' = G̃_M(0) = F_1(0) + F̃_2(0)
would have contributions from both F_2 and F_3.
In the CP-even QCD vacuum, F_3 = 0, and the mixing (<ref>) would simply reduce
the contribution of F_2 to μ̃_N^' by a factor of cos(2α).
This would happen because the spin operator Σ^k = (1/2)ϵ^ijkσ^ij
is “sandwiched” between the chirally-rotated 4-spinors:
(2S⃗)^k = ⟪Σ^k⟫
= ũ̅_p^' (Σ^k/2m_N^') ũ_p
= ξ^'†σ^kξ cos(2α) ,
where the initial and final momenta p⃗, p⃗^' ≈ 0
and ξ, ξ^' are the corresponding 2-spinors.
The resolution to this apparent paradox is hinted by the modified form of the Gordon identity
for the spinors ũ_p, ũ̅_p.
Since these spinors satisfy the Dirac equation with the chirally rotated
mass (<ref>), the Gordon identity takes the form
ũ̅_p^' γ^μ ũ_p
= ũ̅_p^' [
((p^' + p)^μ + iσ^μν(p^' - p)_ν) / (2 m_N^' e^2iαγ_5) ] ũ_p ,
which is obtained from the standard Gordon identity by replacing
m_N^' → m_N^' e^2iαγ_5.
The form of the nucleon-current vertex must be compatible with the form of the Gordon identity,
which, among other things, relates form factors F_1,2 to G_M.
Therefore, to make the nucleon-current vertex compatible with the spinors
ũ_p, ũ̅_p,
the nucleon mass in the F̃_2,3 terms in Eq. (<ref>)
has to be adjusted similarly to Eq. (<ref>),
which leads back to the correct expression (<ref>).
Finally, we emphasize that Eqs. (<ref>) and
(<ref>) result in different prescriptions for analyzing the three-point
nucleon-current correlators:
C_NJN̅(p⃗,t_sep;q⃗,t_op)|_g.s. ?=
e^{-E^'_N^'(t_sep-t_op) - E_N^' t_op}
e^iαγ_5 [(-ip^'_E + m_N^')/2E^'_N^'] {e^iαγ_5}^?
Γ^μ_E
{e^iαγ_5}^? [(-ip_E + m_N^')/2E_N^'] e^iαγ_5 ,
where the phase factors in curly braces {e^iαγ_5}^? are present only if
one uses the (incorrect) Eq. (<ref>).
In the above equation, we have introduced the Euclidean nucleon-current vertex
Γ^μ_E(p^', p) = [F_1γ^μ + (F_2 + iγ_5 F_3)
σ^μνq_ν/2m_N^']_E .
§.§ EDM energy shift from Dirac equation
We argued in the previous section that one has to use regular parity-even spinors satisfying
Eq. (<ref>) to evaluate the nucleon matrix elements even if the QCD vacuum
is CP-broken, contrary to the previous works <cit.>.
Most of the ambiguity must have resulted from the notion that in a CP-broken vacuum, particles
are no longer parity eigenstates; hence, the argument goes, the nucleon must be described by a
parity-mixed spinor.
This argument is rather confusing because parity transformations of fermion fields are fixed only
up to a phase factor, and only a fermion-antifermion pair may have definite parity.
To clarify this question, in this section we calculate the energy spectrum of a particle described
by the Dirac operator D̃_N with the complex mass m e^-2iαγ_5 and with
magnetic and electric dipole interactions of the form (<ref>) in the
background of uniform magnetic and electric fields.
Such an operator is exactly the nucleon effective operator on a lattice.
The zero modes of this operator (i.e., the poles of its Green's function) must correspond to
particle eigenstates, and their calculation avoids the spinor phase ambiguity completely.
The energy shifts linear in these fields are then identified with the magnetic κ and
electric ζ dipole moments, respectively.
The effective action for the Euclidean lattice nucleon field in the vacuum and
point-like electromagnetic interaction introduced via “long derivative” is
_int = N̅ [i∂
- Q γ^μ A_μ - m e^-2iγ_5] N ,
where we neglect the momentum transfer dependence of the nucleon form factors
for simplicity, setting F_1 to a “point-like” value Q=F_1(0)=const.
In the absence of electromagnetic potential A_μ, the nucleon
propagator ⟨ NN̅⟩ takes the form (<ref>).
We add effective point-like anomalous magnetic κ̃=F̃_2(0)
and electric ζ̃=F̃_3(0) dipole interactions
to the interaction vertex
Q γ^μ→
Q γ^μ + κ̃iσ^μνq_ν/2m
- ζ̃γ_5σ^μνq_ν/2m
Using conventions (<ref>-<ref>) as well as
(<ref>,<ref>), the Dirac equation for N becomes
[p - Qγ^μ A_μ
- (κ̃ + iζ̃γ_5) (1/2 F_μνσ^μν)/2m
- m e^-2iαγ_5] N
= 0 .
We are going to find the energy levels of the particle in presence of constant field strength
F_μν.
To avoid irrelevant complications, we consider only a neutral particle with Q=0.
Using the identity (<ref>) to trade γ_5 for
F_μν→F̃_μν, we obtain the Dirac operator in the block-diagonal form
in the chiral basis (<ref>):
p - (1/2)(κ̃ F_μν
- ζ̃ F̃_μν) σ^μν/2m
- m e^-2iαγ_5
= ([ -M E-p⃗·σ⃗; E+p⃗·σ⃗ -M^† ]) ,
where
M = m e^2iα - (1/2m)(κ̃ - iζ̃)(H⃗ + iE⃗)·σ⃗ .
In the rest frame, p = (0⃗, E_0), and the operator (<ref>) has
solutions if
det([ -M E_0; E_0 -M^† ]) = 0
⇔ det(E_0^2 - M^† M) = det(E_0^2 - M M^†) = 0
Up to terms linear in κ̃, ζ̃, the normal operator M^† M is
M^† M = m^2 - (1/2)[ e^2iα (κ̃ + iζ̃)(H⃗ - iE⃗)
+ e^-2iα (κ̃ - iζ̃)(H⃗ + iE⃗)] ·σ⃗ + O(κ̃^2,ζ̃^2)
= m^2 - [κH⃗ + ζE⃗]·σ⃗ + O(κ^2,ζ^2) ,
where in the last line we redefined
e^2iα (κ̃ + iζ̃) = (κ + i ζ),
which is the same transformation as Eq. (<ref>).
Finally, the energy of the particle's interaction with the E&M background is
E_0 - m = -κ/2mH⃗·Σ̂- ζ/2mE⃗·Σ̂+ O(κ^2,ζ^2)
where Σ̂ is the unit vector of the particle's spin.
From the interaction energy, we conclude that indeed
κ=F_2(0) , ζ=F_3(0)
are the
particle's magnetic and electric dipole moments.
For a neutral particle such as the neutron, the form factor F_2(0) is indeed the full magnetic
moment.
Thus, we have shown that if a particle's field is governed by the Dirac equation with a complex
mass (<ref>), the electric and magnetic dipole moments have to be properly
adjusted, (κ̃, ζ̃) → (κ, ζ).
This adjustment follows from redefining the field and the operator,
N → N^' = e^-iαγ_5 N ,
D̃_N → D_N
= e^iαγ_5 D̃_N e^iαγ_5 ,
to remove the complex (chiral) phase from the mass,
where D̃_N (D_N) contains κ̃ (κ) and ζ̃ (ζ).
§.§ EDM energy shift in Euclidean space
In order to verify our findings, in this paper we calculate the EDM of the neutron on a lattice
using two methods: from the energy shift in a background electric field, and using the
new formula for the CP-odd form factor F_3.
The electric field is introduced following Ref. <cit.> and preserving the
(anti)periodic boundary condition in time.
Such an electric field <cit.> is analytically continued to an imaginary value.
If the particle's electric dipole moment is finite and real-valued, the energy shift will be
imaginary, which might present a problem in the analysis of corresponding lattice correlators.
However, in our methodology, the CP-odd interaction is always infinitesimal, and so are
the electric dipole moments and the corresponding energy shifts, which are extracted from the
first-order Taylor expansion of the nucleon correlation functions in the CP-odd interaction.
In this paper, we study only neutral particles, because analysis of charged particle propagators
is more complicated <cit.>.
In this section, we repeat the calculation of Sec. <ref> for a neutral
particle on a Euclidean lattice, which has on-shell Euclidean 4-momentum
p_E = (p⃗, iE), with energy E = √(E_0^2 + p⃗^2) up to discretization errors.
The energy at rest, E_0, differs from the mass m due to the electric and magnetic dipole
interactions.
To avoid any confusion, we imply no relation between the Minkowski E⃗, H⃗ and the Euclidean
𝓔⃗, ℋ⃗ electric and magnetic fields.
Instead, we introduce ad hoc uniform Abelian fields on a lattice
(see Fig. <ref>) preserving boundary conditions in both space and
time <cit.> that probe the MDM and EDM: the magnetic field
ϵ^ijk ℋ^k = (∂_i 𝒜_j - ∂_j 𝒜_i) = n_ij Φ_ij
(no summation over i,j),
𝒜_x,j = n_ij Φ_ij x_i ,
𝒜_x,i|_x_i=L_i-1 = -n_ij Φ_ij L_i x_j ,
and the electric field
𝓔^k = (∂_k 𝒜_4 - ∂_4 𝒜_k) = n_k4 Φ_k4 ,
𝒜_x,4 = n_k4 Φ_k4 x_k ,
𝒜_x,k|_x_k=L_k-1 = -n_k4 Φ_k4 L_k x_4 ,
where Φ_μν = 6π/(L_μ L_ν) is the quantum of flux through a plaquette
(μν) and n_μν is the corresponding number of quanta.
The fractional quark charges Q_u=2/3, Q_d=-1/3 and the periodic boundary conditions require
that the flux through the edge of the lattice is L_μ L_ν · Φ_μν = 3·2π.
From the potentials (<ref>,<ref>), the Euclidean field strength
tensor ℱ_μν = ∂_μ 𝒜_ν - ∂_ν 𝒜_μ is
ℱ_μν
= ([ 0 ℋ^3 -ℋ^2 𝓔^1; -ℋ^3 0 ℋ^1 𝓔^2; ℋ^2 -ℋ^1 0 𝓔^3; -𝓔^1 -𝓔^2 -𝓔^3 0 ]) ,
with rows and columns enumerated by μ,ν = 1…4,
ℋ⃗ = (n_23Φ_23, n_31Φ_31, n_12Φ_12), and
𝓔⃗ = (n_14Φ_14, n_24Φ_24, n_34Φ_34).
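The flux quantization above can be checked numerically. The following Python sketch (a toy construction of ours, with hypothetical lattice sizes) builds the potential of the equations above for an electric field in the (3,4) plane and verifies that every plaquette, including those crossing the boundary twist, carries the same phase for the fractional quark charges:

import numpy as np

L3, L4, n = 8, 16, 1
Phi = 6 * np.pi / (L3 * L4)            # flux quantum per plaquette

def A(mu, x3, x4):
    """U(1) potential as above: A_4 = n*Phi*x3, plus the boundary twist."""
    if mu == 4:
        return n * Phi * x3
    if mu == 3 and x3 == L3 - 1:       # twist on the last z-slice
        return -n * Phi * L3 * x4
    return 0.0

def plaq_phase(Qq, x3, x4):
    """Phase of the (3,4)-plaquette for a quark of charge Qq."""
    p = (A(3, x3, x4) + A(4, (x3 + 1) % L3, x4)
         - A(3, x3, (x4 + 1) % L4) - A(4, x3, x4))
    return np.angle(np.exp(1j * Qq * p))

for Qq in (2 / 3, -1 / 3):             # fractional quark charges
    phases = [plaq_phase(Qq, x3, x4) for x3 in range(L3) for x4 in range(L4)]
    assert np.allclose(phases, Qq * n * Phi)   # uniform field, no defect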
We start from the effective EDM and MDM coupling in the nucleon-current vertex.
The Dirac operator for the nucleon on a lattice is
D + m = γ^μ(∂_μ + iQ𝒜_μ) + m, which we extend to include the
point-like effective interactions from Eq. (<ref>):
[ip + m + iQ𝒜
- (1/2 σ^μν ℱ_μν) (κ + iζγ_5)/2m]_E ,
with κ = F_2(0) and ζ = F_3(0).
We use Euclidean matrices γ^μ (<ref>)
and [γ_5]_E = (γ^1γ^2γ^3γ^4)_E (<ref>)
[
The results are manifestly independent from the basis of γ-matrices used,
if the relation between γ_5 and γ^μ is kept unchanged.]
and the plane-wave fields ψ_p(x) and 𝒜_q,μ(x) depending on
the Euclidean 4-momenta p, q as
ψ_p(x) ∼ e^ipx ,
∂_μ ψ_p(x) ↔ ip_μ u_p ;
𝒜_q,μ(x) ∼ e^i(p^'-p)x = e^iqx ,
∂_ν 𝒜_q,μ(x) ↔ iq_ν 𝒜_μ .
The mass m in the above equation (<ref>) is chosen real and positive,
since any chiral phase factor may be removed with a field redefinition (<ref>),
which at the same time rotates the dipole couplings (κ, ζ) into the physical magnetic
and electric dipole moments, as has been shown in Sec. <ref>.
After setting the charge Q=0 and the momentum p⃗=0, we use
σ_E^ij
= ϵ^ijk ([ -σ^k ; -σ^k ]) ,
σ_E^i4
= ([ σ^i ; -σ^i ]) ,
and transform the operator (<ref>) into block-diagonal form,
finding the condition for the on-shell fermion energies
det([ M_- -E_0; -E_0 M_+ ]) = 0
⇔ det(E_0^2 - M_+M_-) = det(E_0^2 - M_-M_+) = 0 ,
where
M_± = m + (1/2m)(κ ∓ iζ)(ℋ⃗ ± 𝓔⃗)·σ⃗ .
The on-shell energies are then determined by the eigenvalues of the spin-dependent operator
M_∓M_±
= m^2 + κ σ⃗·ℋ⃗ - ζ σ⃗·i𝓔⃗ + O(κ^2,ζ^2) ,
E_0 - m = (κ/2m) ℋ⃗·Σ̂ - (ζ/2m) i𝓔⃗·Σ̂ + O(κ^2,ζ^2) ,
where Σ̂ is the direction of the particle's spin.
Note that the electric field enters Eq. (<ref>) as i𝓔⃗,
with the imaginary factor emphasizing that its value has been analytically continued
to the imaginary axis, so the corresponding energy shift is purely imaginary.
Equation (<ref>) provides a prescription for extracting the EDM and MDM from
energy shifts of a neutral particle on a lattice in uniform background fields.
§ CEDM-INDUCED EDM AND EDFF ON A LATTICE
In our initial calculation of cEDM-induced nucleon EDMs,
we use two lattice ensembles with Iwasaki gauge action and N_f=2+1 dynamical domain wall
fermions: 16^3×32 with m_π≈420 MeV <cit.>
and 24^3×64 with m_π≈340 MeV <cit.>.
The ensemble parameters are summarized in Tab. <ref>.
We use identical ensembles, statistics, and spatial sampling per gauge configuration
in both calculation methods discussed in further sections.
We use the all-mode-averaging <cit.> framework to optimize sampling,
in which we approximate quark propagators with truncated-CG solutions
of a Möbius operator <cit.>.
We use the Möbius operator with short 5th dimension L_5s and complex s-dependent
coefficients b_s + c_s = ω_s^-1 (later referred to as “zMobius”) that approximates
the same 4d effective operator as the Shamir operator with the full L_5f=32 (DSDR)
or L_5f=16 (Iwasaki).
The approximation is based on the domain wall-overlap equivalence
[D^DWF]_4d = (1+m_q)/2 - (1-m_q)/2 γ_5
ϵ_L_5(H_T) ,
H_T = γ_5 D_W/(2+D_W) ,
ϵ^Möbius_L_5s(x)
= [∏_s^L_5s(1+ω_s^-1x) - ∏_s^L_5s(1-ω_s^-1x)] / [∏_s^L_5s(1+ω_s^-1x) + ∏_s^L_5s(1-ω_s^-1x)]
≈ ϵ^Shamir_L_5f(x) ,
where the coefficients ω_s are chosen so that
ϵ^Möbius_L_5s(x) is the minmax approximation to
ϵ^Shamir_L_5f(x).
We find that L_5s=10 is enough for an efficient approximation of the 4d operator.
The shortened 5th dimension reduces the CPU and memory requirements:
for example, reducing L_5f=16 to L_5s=10 saves 38% of the cost.
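As a quick numerical illustration of this approximation (not the production code), the Python sketch below verifies that with all ω_s = 1 the product formula reduces to the familiar polar form tanh(L_5 artanh x), and that spreading the ω_s over the spectral range flattens the approximation; the log-spaced ω_s used here are a crude stand-in for the true minmax coefficients:

import numpy as np

def eps(x, omega):
    num = np.prod([1 + x / w for w in omega], axis=0)
    den = np.prod([1 - x / w for w in omega], axis=0)
    return (num - den) / (num + den)

x = np.linspace(-0.99, 0.99, 401)
L5f = 16
shamir = eps(x, np.ones(L5f))                 # all omega_s = 1
assert np.allclose(shamir, np.tanh(L5f * np.arctanh(x)))
omega = np.logspace(-1, 0, 10)                # hypothetical L5s = 10 coefficients
print(np.max(np.abs(np.sign(x) - eps(x, omega))[np.abs(x) > 0.1]))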
We deflate the low-lying eigenmodes of the internal even-odd preconditioned operator,
to make the truncated-CG AMA efficient.
The numbers of deflation eigenvectors N_ev and truncated CG iterations N_CG are given in
Tab. <ref>.
We compute 32 sloppy samples per configuration.
To correct any potential bias due to the approximate operator and the truncated CG
inversion, in addition we compute one exact sample per configuration using the Shamir operator.
The latter is computed iteratively by refining the solution of the “zMobius”
to approach the solution of the Shamir operator, again taking advantage of the short L_5s
and deflation.
§.§ Parity-even and -odd nucleon correlators
The EDFF F_3 is a parity-odd quantity induced by CP-violating interactions.
To compute the effect of CP-odd interactions, we modify the lattice action
S → S + i δ^CPv S = S + i ∑_i,x c_i [ℒ^CPv_i]_x ,
where the c_i are CP-odd couplings such as the QCD θ-angle, quark (chromo-)EDMs, etc.
We Taylor-expand the QCD+CPv vacuum averages in the couplings c_i.
For example, for the three-point function, we get[
In this section, all conventions for correlators, form factors, and momenta are Euclidean.]
⟨ N [q̅γ^μ q] N̅⟩_CPv
= 1/Z ∫𝒟U 𝒟ψ̅ 𝒟ψ e^-S - iδ^CPv S N [q̅γ^μ q] N̅
= C_NJN̅ - i∑_i c_i δ^CPv_i C_NJN̅
+ O(c_ψ^2) ,
where C_… and δ^CPv C_… stand for the CP-even and CP-odd correlators.
Similarly, we also analyze the effect of the CPv interaction on the nucleon-only correlators.
In total, we calculate the following two- and three-point CP-even correlators, as well as
three- and four-point CP-odd correlators:
C_NN̅ = ⟨ N N̅⟩ ,
δ^CPv_i C_NN̅ = ⟨ N N̅ · ∑_x [ℒ^CPv_i]_x ⟩ ,
C_NJN̅ = ⟨ N [q̅γ^μ q] N̅⟩ ,
δ^CPv_i C_NJN̅ = ⟨ N [q̅γ^μ q] N̅ · ∑_x [ℒ^CPv_i]_x ⟩ ,
where ⟨·⟩ stands for vacuum averages computed with the CP-even QCD action S.
In Sec. <ref>, we also modify the action S to include the uniform background electric
field as the probe of the electric dipole moment.
In this work, we study only the quark chromo-EDM as the source of CP violation,
ℒ^CPv_ψG
= (1/2) ψ̅ [G_μν]^clov σ^μν γ_5 ψ
= (1/2) ψ̅ (g_S G^a,cont_μν T^a) σ^μν γ_5 ψ ,
where G^a,cont is the continuum color field strength tensor and
the “clover” [G_μν]^clov gauge field strength tensor
on a lattice is (see Fig. <ref>)
[G_μν]^clov = 1/8i[
( U^P_x,+μ̂,+ν̂ + U^P_x,+ν̂,-μ̂
+ U^P_x,-μ̂,-ν̂ + U^P_x,-ν̂,+μ̂)
- h.c.]
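For orientation, the Python sketch below evaluates the clover average for an abelian U(1) toy field in a single (μν)=(1,2) plane, where each leaf is an ordinary product of link phases; in the SU(3) case of the equation above the leaves are path-ordered matrix products and “h.c.” is the matrix dagger. All parameters are illustrative:

import numpy as np

L = 16
F = 2 * np.pi / L**2                     # weak uniform abelian field
x1, _ = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
U = {1: np.ones((L, L), dtype=complex),  # A_1 = 0
     2: np.exp(1j * F * x1)}             # A_2 = F * x1

def link(mu, d1=0, d2=0):
    # U_mu at the shifted site x + d1*e1 + d2*e2 (periodic)
    return np.roll(np.roll(U[mu], -d1, axis=0), -d2, axis=1)

# the four plaquette "leaves" around x in the (1,2) plane
leaf = (link(1) * link(2, 1, 0) * link(1, 0, 1).conj() * link(2).conj()
      + link(2) * link(1, -1, 1).conj() * link(2, -1, 0).conj() * link(1, -1, 0)
      + link(1, -1, 0).conj() * link(2, -1, -1).conj() * link(1, -1, -1) * link(2, 0, -1)
      + link(2, 0, -1).conj() * link(1, 0, -1) * link(2, 1, -1) * link(1).conj())
G12 = (leaf - leaf.conj()) / 8j          # abelian clover, as in the equation above
print(np.allclose(G12[1:-1, :].real, np.sin(F)))   # True away from the seam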
Insertions of the quark-bilinear cEDM density (<ref>) can generate both connected
and disconnected contractions, similarly to the quark current.
In this work, we calculate only the fully connected contributions to these correlation functions
shown in Fig. <ref>.
The disconnected contributions (see Fig. <ref>) are typically much more
challenging to calculate, and we will address them in future work.
Neglecting the disconnected diagrams will not affect the comparison of the form factor and the
energy shift methods, because they are omitted in both calculations.
To compute the connected diagrams, we insert the quark-bilinear cEDM density (<ref>)
once in every ψ-quark line of C_N JN̅ diagrams, generating the four-point functions
shown in Fig.<ref>.
We evaluate all the connected three- and four-point contractions using the forward
and the set of sequential propagators shown in Fig. <ref>.
In addition to the usual one forward and two backward (sink-sequential) propagators,
we compute one cEDM-sequential and four doubly-sequential ({cEDM, sink}-sequential)
(+) propagators per sample.
For every additional value of the source-sink separation t_sep and sink momentum p⃗^', additional backward and doubly-sequential (+) propagators must be
computed, i.e.
N_ = N_ = 1 ,
N_=N_q N_sep N_mom ,
N_+ = N_q N_ψ N_sep N_mom
where N_q is the number of distinct flavors in the quark current and N_ψ is the number of
distinct flavors in the CPv operator.
The connected CP-even two- and three-point correlators do not require any additional inversions.
In this scheme, we perform only the minimal number of inversions required to compute
all the diagrams for the neutron and proton EDMs induced by
a connected flavor-dependent quark-bilinear CPv interaction
with the two degenerate flavors u and d.
Compared to Ref. <cit.>, in which a finite small O(ϵ)
CP-odd perturbation term is added to the quark action, resulting in modified quark
propagators
D_m^-1 → (D_m + iϵσ^μν G̃_μν)^-1 ,
our four-point contractions correspond to directly computing the first derivative
(∂ C_2,3^ϵ / ∂ϵ)_ϵ=0, thus avoiding
any higher-order dependence on ϵ and obviating the ϵ-extrapolation.
As a cross-check, we have verified our contraction code on a small test lattice by replacing
the propagators D_m^-1 η with
D_m^-1 η →
[D_m^-1 - D_m^-1 (iϵΓ) D_m^-1] η
to approximate [D_m + iϵΓ]^-1 η,
where Γ = (1/2) G_μν σ^μν γ_5.
Using these “CPv-perturbed” propagators, each of which needed two inversions,
we have computed the nucleon C^ϵ_NN̅ and nucleon-current
C^ϵ_NJN̅ correlators, and compared their finite-difference
ϵ-derivatives to δ C_NN̅ and δ C_NJN̅.
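The logic of this cross-check can be mimicked with dense linear algebra. In the Python sketch below (a toy model, not lattice code), the directly contracted first derivative -D^-1(iϵΓ)D^-1 η is compared against a finite-difference derivative of the perturbed solve:

import numpy as np

rng = np.random.default_rng(0)
n, eps = 12, 1e-6
D = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) + 5 * np.eye(n)
G = rng.normal(size=(n, n))    # stand-in for (1/2) G_munu sigma^munu gamma_5
eta = rng.normal(size=n)       # source vector

# sequential-propagator-style exact first derivative at eps = 0
exact = -np.linalg.solve(D, 1j * G @ np.linalg.solve(D, eta))
# finite difference of the perturbed inversion
fd = (np.linalg.solve(D + 1j * eps * G, eta)
      - np.linalg.solve(D - 1j * eps * G, eta)) / (2 * eps)
assert np.allclose(exact, fd, rtol=1e-5)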
We use only one value of the sink momentum p⃗^'=0.
We compute nucleon-current three- and four-point correlators with two source-sink separation
values t_sep={8,10}a={0.91,1.15} fm for the 16^3×32 ensemble,
and three t_sep={8,10,12}a={0.88,1.11,1.33} fm for the 24^3×64 ensemble.
For the 24^3×64 lattice, we use the Gaussian-smeared sources with APE-smeared gauge links
using parameters optimized for overlap with the ground state <cit.>,
while for the 16^3×32 ensemble we used the smearing parameters from
Ref. <cit.>.
The effective nucleon mass plots for the ensembles are shown in Fig. <ref>.
Correlators C_NJN̅ and δ^CPv C_NJN̅ are computed with
the polarization projector
T^+_S_z+ = (1+γ_4)/2 (1+Σ_3) = (1/2)(1+γ_4)(1-iγ_1γ_2) ,
while correlators C_NN̅ and δ^CPv C_NN̅ are computed with all 16
polarizations and saved for later use in disconnected contractions.
We reduce the cost of computing backward propagators with the widely-used “coherent”
source trick, combining 2 backward sources from samples separated by L_t/2 into one inversion.
Combining 4 samples resulted in a large increase in the statistical uncertainty, negating the
cost-saving advantages.
§.§ Nucleon form factors
Following the discussion in Sec. <ref>, we use a form factor decomposition that
is different from the previous works <cit.>,
⟨ N_p^' | q̅γ^μ q | N_p⟩
= u̅_p^'[ F_1(Q^2) γ^μ
+ F_2(Q^2) σ^μν q_ν/2m_N
+ F_3(Q^2) iγ_5σ^μν q_ν/2m_N] u_p .
where the spinors u_p,u̅_p^' have positive parity.
Details of evaluating kinematic coefficients for form factors F_1,2,3
are given in Appendix <ref>.
We use the standard plateau method to evaluate both the CP-even and CP-odd matrix elements of
the nucleon,
[δ^CPv] R_NJN̅(t_sep,t_op)
= ([δ^CPv] C_NJN̅(t_sep,t_op) / c_2^'(t_sep)) √( (c_2^'(t_sep)/c_2(t_sep)) (c_2^'(t_op)/c_2(t_op)) (c_2(t_sep-t_op)/c_2^'(t_sep-t_op)) ) ,
where the two-point functions are projected with the positive-parity polarization matrix
T^+ = (1+γ_4)/2,
c^(')_2(t) = Tr[ T^+ · C_NN̅(p⃗^('), t) ] .
The three central points on the ratio plateaus are taken as the estimate of the ground state
matrix elements.
This is a crude estimate and improved analysis of excited states is necessary for better
control of systematic uncertainties.
However, we find that our results change insignificantly with increasing source-sink separation
(see Figs. <ref>, <ref>); we therefore conclude
that excited-state effects cannot affect the main conclusions of this paper.
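A synthetic one-state example of the plateau estimate is sketched below in Python (all masses, amplitudes, and matrix elements are invented); it also illustrates how residual excited-state contamination biases the central-point average, which is why the caveat above matters:

import numpy as np

tsep, m, dE, Z, M = 12, 0.5, 0.4, 1.3, 0.85    # invented values
top = np.arange(1, tsep)
c2 = lambda t: Z * np.exp(-m * t) * (1 + 0.4 * np.exp(-dE * t))
# three-point: ground state plus source/sink-side contamination
c3 = Z * np.exp(-m * tsep) * (M + 0.2 * np.exp(-dE * top)
                                + 0.2 * np.exp(-dE * (tsep - top)))
R = c3 / c2(tsep)        # at q = 0 the square-root factor above equals 1
sel = np.abs(top - tsep / 2) <= 1              # three central points
print(R[sel].mean(), "vs true matrix element", M)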
We calculate the Dirac and Pauli form factors F_1,2 using a correlated χ^2 fit to the
matrix elements of the quark vector current (“overdetermined analysis”).
The system of equations for form factors is reduced by combining equivalent equations to
reduce the system dimension and make estimation of the covariance matrix more stable
(see, e.g., Ref. <cit.> for details).
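Schematically, the overdetermined analysis amounts to solving a linear system K F = M by correlated least squares; the Python sketch below uses a toy diagonal covariance and invented kinematic rows:

import numpy as np

K = np.array([[1.0,  0.00],      # invented kinematic rows: cols (F1, F2)
              [1.0, -0.12],
              [0.0,  0.45],
              [0.0,  0.43]])
M = np.array([0.99, 1.20, -0.71, -0.70])       # "measured" matrix elements
C = np.diag([0.02, 0.02, 0.03, 0.03]) ** 2     # toy (diagonal) covariance
W = np.linalg.inv(C)
cov = np.linalg.inv(K.T @ W @ K)               # parameter covariance
F1, F2 = cov @ K.T @ W @ M                     # correlated chi^2 minimum
print(F1, F2, np.sqrt(np.diag(cov)))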
The quark current operator is renormalized using renormalization constants
Z_V=0.71408 for 24^3×64 <cit.> and
the chiral-limit value Z_V=Z_A=0.7162 for 16^3×32 <cit.> ensembles.
We show the momentum dependence of the resulting Sachs electric and magnetic form factors
G_E(Q^2) = F_1(Q^2) - Q^2/4m_N^2 F_2(Q^2) ,
G_M(Q^2) = F_1(Q^2) + F_2(Q^2) ,
for the proton and the neutron (connected-only) for both ensembles in Fig. <ref>.
Our data for form factors G_E,M show no significant systematic variation with
increasing the source-sink separation t_sep.
In order to compute the form factor F_3, we first need to calculate the parity mixing angle
α in order to subtract the F_1,2 mixing terms.
∼ c_ψ G and assuming that the ground state dominates for sufficiently large t,
C_NN̅(t) - i c_ψ G δ^ C_NN̅(t) +O(c_ψ G^2)
t→∞= |Z_N|^2[1+γ^4/2 + iγ_5 + O(^2)] e^-m_N t
we use the projectors T^+=1+γ_4/2 and T^+γ_5 to calculate
the “effective” mixing angle (t) normalized to c_ψ G=1
^eff(t)
= -[ T^+γ_5 ·δ^ C_NN̅ (t)]/[ T^+ · C_NN̅(t)]t→∞ = /c_ψ G
The time dependence of the ratios (<ref>) for both ensembles is shown in
Fig. <ref>.
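For illustration, the α^eff estimator can be exercised on synthetic ground-state correlators (Python, hypothetical values), recovering a flat plateau at the input mixing angle:

import numpy as np

t = np.arange(2, 14)
mN, alpha, Z2 = 0.55, 0.08, 2.0                 # invented lattice-unit values
tr_Tp_C    =  Z2 * np.exp(-mN * t)              # Tr[T+ . C_NN](t), ground state
tr_Tpg5_dC = -alpha * Z2 * np.exp(-mN * t)      # Tr[T+ g5 . d^CPv C_NN](t)
alpha_eff  = -tr_Tpg5_dC / tr_Tp_C              # estimator above, c_psiG = 1
print(alpha_eff)                                # flat plateau at alpha = 0.08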
The quark flavors in the cEDM interaction are quoted with respect to the proton; for
the neutron, they must be switched, u↔d, due to the isospin symmetry.
The plateau is reached for time t≥8, and we extract the α values from a constant fit
(weighted average) to the points t=8…11.
An interesting observation is that the mixing angle depends very strongly on the quark flavor
carrying the CPv interaction.
Thus, for the proton P_δ = u_δ(u^T C γ_5 d), in which the d-quark enters
together with u as a scalar diquark, the d-cEDM does not lead to any parity mixing.
Finally, the electric dipole form factor F_3 is calculated from the CP-odd
four-point correlator δ^CPv C_NJN̅.
Similarly to the extraction of α above, we can expand the
three-point function in the CP-odd interaction.
We extract the matrix elements using the ratios (<ref>) of
polarization-projected three-point functions Tr[T·R^CPv_NJN̅]
to CP-even two-point functions (<ref>).
Expanding the ratio in α ∼ c_ψ G, we get
Tr[T(R_NJN̅
- i c_ψ G δ^CPv R_NJN̅ + O(c_ψ G^2))]
t→∞=
∑_i=1,2 [ 𝒦_i^(T) + iα 𝒦_i^({T,γ_5}) ] F_i
+ 𝒦_3^(T) F_3 + O(α^2) ,
where 𝒦^(T)_i are the kinematic
coefficients (<ref>-<ref>) for the form factors F_1,2,3
computed with the polarization matrix T and with 𝒩 → 𝒩_R (<ref>).
Matching the O(c_ψ G) terms in the above expansion and neglecting excited states,
we obtain
i 𝒦_3^(T) F̂_3
= Tr[T·δ^CPv R_NJN̅]_g.s.
+ α ∑_i=1,2 𝒦_i^({T,γ_5}) F_i .
The second term on the RHS of the above equation is the mixing subtraction.
Its form indicates that the mixing between the form factors F_1,2 and F_3 happens only
because of the mixing of the polarization of the nucleon interpolating fields on a lattice.
This is substantially different from the expressions used in Refs. <cit.>,
which also include an additional subtraction term (-2α F_3) because of the spurious mixing of
F_2 and F_3 in the vector current vertex (<ref>).
Although both timelike and spacelike components of the current can be used to calculate F_3,
in practice we find that the time component J^4 yields much better precision than
the spacelike component J^3.
Due to the larger uncertainty of the J^3 signal, combining both components did not result in
improved precision of the F_3 form factor.
If only the J^4 component is used, the overdetermined fit to the matrix elements is not required,
and for T = T^+_S_z+ = (1+γ_4)/2 (1-iγ_1γ_2),
from Eqs. (<ref>,<ref>),
(1+τ) F_3(Q^2)
= (m_N/q_3) Im Tr[T^+_S_z+ · δ^CPv R_N J^4 N̅]
- α G_E(Q^2) ,
where τ is the kinematic variable (<ref>).
It is remarkable that for the neutron the subtraction term ∼ α G_E is zero in the
forward limit.
In fact, if one uses the traditional formula for extracting the neutron EDM,
d_N = F_3(0)/(2m_N), a large contribution (-2α F_2(0))/(2m_N) comes from
the spurious mixing if α is not zero.
In Section <ref> we will discuss the currently available lattice results for the
neutron and proton EDM induced by the QCD θ-term.
To compute the form factors from the data at each source-sink separation t_sep,
we use the value α = α^eff(t_sep) in Eq. (<ref>)
to subtract the mixing.
The results for the EDFF F_3 are shown in Fig. <ref>.
Despite relatively high statistics, the signal for the cEDM-induced form factor is noisy.
There is no significant dependence on the source-sink separation t_sep.
Since the cEDM operator is not renormalized, it can include contributions from other operators of
dimension 5, as well as from operators of lower dimension 3 <cit.>.
One peculiar feature of these results is that, similarly to α, the contribution to the
proton EDM comes mostly from the u-cEDM, while the contribution to the neutron comes mostly from
the d-cEDM.
However, a substantial increase in statistics, as well as more elaborate analysis of excited
states, are required to confirm these observations.
The electric dipole moment is determined by the value of the form factor F_3(Q^2) at Q^2=0.
This value is not directly calculable, and one has to extrapolate the Q^2>0 data points to
Q^2=0.
In Figure <ref> we show the linear extrapolation of these form factors
using the three smallest Q^2>0 points.
Other fit models are not warranted until the statistical precision is substantially improved.
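The forward-limit extrapolation is a weighted linear fit; a minimal Python sketch with invented (Q^2, F_3) points reads:

import numpy as np

Q2  = np.array([0.19, 0.37, 0.54])        # GeV^2, invented momenta
F3  = np.array([-0.020, -0.015, -0.011])  # invented central values
sig = np.array([0.006, 0.005, 0.005])     # invented errors
W = np.diag(1 / sig**2)
X = np.stack([np.ones_like(Q2), Q2], axis=1)   # intercept + slope
cov = np.linalg.inv(X.T @ W @ X)               # weighted least squares
b = cov @ X.T @ W @ F3
print(f"F3(0) = {b[0]:.4f} +- {np.sqrt(cov[0, 0]):.4f}")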
§.§ Neutron electric dipole moments from energy shifts
Calculation of the dipole moment using a uniform background field has the advantage that no form
factor extrapolation in momentum is required, because the energy shift depends on the forward
matrix element of the nucleon.
This calculation is easier for the neutron than for the proton, because a charged
particle undergoes constant acceleration in the field, which complicates the analysis
of its correlation functions <cit.>.
On the other hand, since the uniform background field is quantized on a lattice, these fields
cannot be made arbitrarily small.
In fact, the field quanta are very large, with magnitudes comparable to the QCD scale,
especially on the smaller 16^3×32 lattice.
Because of the fractional charges of the quarks, there is an additional factor of 3 in the minimal
value of the electric field, which is quantized in multiples of 𝓔_0 = 6π/(a^2 L_x L_t).
The 𝓔_0 values are shown in Tab. <ref>, and for the smaller 16^3×32
lattice the minimal electric field is 𝓔_0 = 0.110 GeV^2 = 560 MV/fm.
Such an electric field pulls on the u quark in the neutron with tension
≈(270 MeV)^2, or approximately 40% of the QCD string tension,
and may deform the neutron too far away from its ground state.
We introduce the uniform electric field on a lattice, as described in
Sec. <ref>, along the z direction.
Using the modified QCD×U(1) gauge links, we calculate the regular nucleon correlator
C_NN̅,𝓔, as well as the correlator with the insertion of the
CP-odd interaction, in full analogy with Sec. <ref>, e.g.,
δ^CPv C_NN̅,𝓔(p⃗, t)
= ∑_y⃗ e^-ip⃗(y⃗ - x⃗) ⟨ N(y⃗, t)
N̅(x⃗, 0) · ∑_x [ℒ^CPv_ψG]_x ⟩_𝓔 .
The modified gauge links are used both in computing the propagators and in constructing the
smeared sources and sinks.
In fact, since the individual quarks are charged, smearing their distributions with only
the QCD gauge links would break covariance and make the calculation dependent on the choice
of gauge for the electromagnetic potential.
The QCD links used in the Gaussian smearing are first APE-smeared, and then the electromagnetic
potential is applied to them.
From Eq. (<ref>), the energy of a particle on a lattice with its spin polarized
along the electric field 𝓔⃗ = 𝓔 ẑ is shifted by the imaginary value
δE = -(ζ/2m) i𝓔.
The nucleon correlator at rest (p⃗ = 0) thus must take the form
C^CPv_NN̅,𝓔(p⃗=0, t)
= |Z_N|^2 e^iαγ_5 (1+γ_4)/2 [
(1+Σ_z)/2 e^-(m+δE)t
+ (1-Σ_z)/2 e^-(m-δE)t ] e^iαγ_5 .
As with the CP-odd form factor F_3, expanding the correlator to first order in
c_ψ G ∼ α ∼ δE ∼ ζ, we get
C_NN̅,𝓔 - i c_ψ G δ^CPv C_NN̅,𝓔 t→∞= |Z_N|^2 e^-m_N t [ (1 + γ_4)/2 + iαγ_5
- Σ_z δE t ] ,
and for the electric dipole moment we obtain the following estimator for the
effective energy shift:
ζ^eff(t) = 2 m_N d_N^eff(t) = -(2m_N/𝓔_z) [R_z(t+1) - R_z(t)] ,
R_z(t) = Tr[ T^+ Σ_z δ^CPv C_NN̅,𝓔_z(t)] / Tr[ T^+ C_NN̅,𝓔_z(t)] .
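On synthetic ground-state data the estimator above is exactly flat, as the short Python sketch below illustrates (lattice-unit values invented):

import numpy as np

t = np.arange(3, 12)
mN, Ez, zeta = 0.55, 0.11, 0.9                # invented lattice-unit values
Rz = -(zeta / (2 * mN)) * Ez * t              # ground-state form of R_z(t)
zeta_eff = -2 * mN / Ez * (Rz[1:] - Rz[:-1])  # estimator above
print(zeta_eff)                               # flat at zeta = 0.9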
We have computed the neutron correlation functions with two values of the electric field,
𝓔 = 𝓔_0 and 2𝓔_0.
The results for both ensembles are shown in Fig. <ref>.
We choose t=6…9 as the common plateau to estimate the value of ζ on both ensembles and
for both flavors in the cEDM operator.
In the case of the d-cEDM, we observe non-zero values of the energy shift.
Also, the EDM values computed with 𝓔=𝓔_0 and 2𝓔_0 agree well with each other,
indicating that the energy shift is linear in 𝓔 and our EDM result does not depend on the
polarizing effect of the electric field.
§.§ Numerical comparison of the form factor and energy shift methods
The normalization and the sign convention of the dimensionless EDM ζ in Sec. <ref>
are identical to those of F_3(0) in Sec. <ref>, and we plot
them for comparison in Fig. <ref>.
We observe satisfactory agreement between the values of ζ computed in the uniform background
method and the values obtained from the Q^2→0 extrapolation of form factors F_3n(Q^2).
In order to check how the spurious mixing affects the results,
in Fig. <ref> we also plot the values of form factors
computed with the old formula used in Refs. <cit.>
F̃_3 = F_3 - 2 F_2 .
This formula obviously gives a value of F̃_3 different from F_3 only if
α is large.
In the case of the u-cEDM, the value of α for the neutron is small, and there is no observable
difference between F_3 and F̃_3.
However, in the case of the d-cEDM, the difference is remarkable.
None of the three sources of uncertainty (excited-state bias in the energy-shift calculation,
excited-state bias in the form factor calculation, and the Q^2→0 extrapolation of the form
factors) can plausibly change the outcome of this comparison, due to the large value of α.
The agreement between the new form factor extraction formula and the energy shift method is one of
the main results of this paper, and serves as a numerical cross-check of the analytic derivation.
We collect the values of α, the extrapolated F_3(0), and ζ_n from the background field
method in Tab. <ref>.
§ CORRECTIONS TO EXISTING Θ-INDUCED NEDM LATTICE RESULTS
In Section <ref> it has been shown that the commonly used formula for extracting the
form factor F_3 from nucleon matrix elements on a lattice is incorrect.
This formula has been used in all of the papers that compute QCD θ-induced nucleon
EDM <cit.>.
Fortunately, the correction has a very simple form (<ref>), in which
F̃_2,3 refer to the old results and F_2,3 refer to corrected results.
Unfortunately, Refs. <cit.> employ a broad spectrum of conventions
for F̃_3 and α, differing in sign and scale factors.
However, by comparing expressions for the polarized CP-odd
matrix elements of the timelike component of the vector current J_4,
we can deduce the appropriate correction
in each reference's conventions.
For example, using Eq. (55) from Ref. <cit.>,
Π^0_3pt,Q(Γ_k = (i/4)(1+γ_0)γ_5γ_k)
∼ (i Q_k/2m_N) [ α^1 (F_1 + (E_N + 3m_N)/(2m_N) F_2)
+ (E_N + m_N)/(2m_N) F̃_3 ]
= (i Q_k/2m_N) [ α^1 G_E
+ (1 + τ) (F̃_3 + 2α^1 F_2) ] ,
where the combination (F̃_3 + 2α^1 F_2) is the corrected F_3,
τ = (E_N - m_N)/(2m_N) was introduced in Eq. (<ref>), and G_E = F_1 - τF_2 is the
Sachs electric form factor.
Sachs electric form factor.
Comparing the above equation to the expected form (<ref>),
for the corrected value of F_3 we obtain[
Note that this correction is the opposite compared to Eq. (<ref>), which results
from a difference in used conventions.]
F_3(Q^2) = F̃_3(Q^2) + 2α^1 F_2(Q^2) ,
which should hold for any value of Q^2.
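Applied naively with Gaussian error propagation (and ignoring correlations between F̃_3, α, and F_2), the correction looks as follows in Python; the α and F_2 inputs here are placeholders, not the values of the references:

import numpy as np

F3t, dF3t     = -0.56, 0.07    # published F3~(0) of the twisted-mass study
alpha, dalpha = -0.22, 0.02    # placeholder mixing angle (NOT from the reference)
F2, dF2       = -1.50, 0.05    # placeholder Pauli form factor at Q^2 = 0
F3  = F3t + 2 * alpha * F2
dF3 = np.sqrt(dF3t**2 + (2 * F2 * dalpha)**2 + (2 * alpha * dF2)**2)
print(f"corrected F3(0) = {F3:.2f} +- {dF3:.2f}")   # shifts toward zero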
Although it would be more suitable for the original authors of Refs. <cit.>
to reanalyze their data with these new formulas,
it is interesting to examine whether the presently available lattice calculations
necessarily yield non-zero values of the θ̅-induced nucleon EDM
after corrections similar to Eq. (<ref>) have been applied.
The most precise result for F_3n(0) that also allows us to perform the correction
unambiguously is that of Ref. <cit.>,
which reports an 8σ non-zero value F_3(0)=-0.56(7) from calculations with dynamical
twisted-mass fermions at m_π=373 MeV.
However, when we apply the corresponding correction (<ref>),
the value becomes 0.09(7), essentially compatible with zero.
Calculations at finite imaginary θ-angle <cit.> yield the most precise
values of the neutron EDM to date.
However, they do not contain sufficient detail to deduce the proper correction for F_3.
It must also be noted that it is not clear whether the sign of the CP-odd interaction ∼G̃G
is consistent across Refs. <cit.>.
On the other hand, all the reported non-zero results for the proton and neutron EDM agree in sign,
with F_3n(0) < 0 and F_3p(0) > 0, and it is reasonable to assume that any
differences in the conventions are compensated in each final reported EDM value.
Furthermore, because the θ-angle is equivalent to a chiral rotation of quark fields,
it is then reasonable to assume that upon conversion to some common set of conventions,
e.g., those of Ref. <cit.>, the sign of the chiral rotation angle α
agrees between different calculations.
Based on these plausible assumptions, we deduce that the results in <cit.>
must be corrected as F^θ_3 = F̃^θ_3 + 2α(θ) F_2[
Strictly speaking, for finite values of θ̅ and α̅(θ̅), one has to
use the hyperbolic “rotation” formula cosh(2α)F_3 = F̃_3 + sinh(2α) F_2,
in which we neglect O(α^2) terms because |α|≲0.15, while the precision is
only ≈10%.],
where α<0, in analogy with Ref. <cit.>.
The data for α̅^θ and F̃_3^θ(0) at finite θ̅ values are
extracted from figures in Ref. <cit.>.
The original F̃_3^θ(0) and the corrected F_3^θ(0) values are shown in
Fig. <ref>.
Following Ref. <cit.>, the corrected F_3^θ(0) values are interpolated to
θ̅→0 using a linear+cubic fit F_3(0)θ̅+ Cθ̅^3
and the resulting normalized values F_3(0)=dF_3^θ/dθ̅|_θ̅=0
are given in Tab. <ref>.
We observe that the corrected values at both the finite and zero θ̅ values agree with
zero at ≲2σ level.
Corrections to other results <cit.>
may be done on a similar basis[
Corrections to the results of Ref. <cit.> require the corresponding values of F_2,
which we could not locate in published works.].
The resulting values are also collected in Tab. <ref>, and in most cases
they are compatible with zero, deviating at most 2σ.
We emphasize that, apart from Ref. <cit.>, these corrections are made using the
sign assumptions discussed above.
If our assumptions are wrong, the corrected central values will be approximately twice as large
as the originally reported values.
Although we find our assumptions plausible, and thus the corrected values in
Tab. <ref> most likely valid, it is up to the authors of
Refs. <cit.>
to reanalyze their data and confirm or deny our findings.
It is possible that the difference between the lattice values of the neutron EDM and the
phenomenological estimates d_n ∼ O(10^-3…10^-2) θ̅ e·fm <cit.>,
which has been ascribed to the chiral symmetry breaking of lattice fermions and the
heavy quark masses used in simulations, may disappear when the proper corrections are applied.
§ SUMMARY AND CONCLUSIONS
Among the most important findings of this paper is the new formula for the analysis of
nucleon-current correlators computed in the CPv vacuum and the extraction of the electric dipole
form factor F_3.
We have demonstrated, both analytically and numerically, that the analyses of the
θ̅-induced nucleon EDM in previous calculations <cit.>
received a contribution (-2ακ) from spurious mixing with the anomalous magnetic
moment κ of the nucleon.
Fortunately, the correction is very simple and requires only the values of the nucleon
anomalous magnetic moments from calculations on the same lattice ensembles.
Applying this correction properly is somewhat complicated due to differences
in the conventions used in these works.
Under some plausible assumptions we have demonstrated that, after the correction,
even the most precise current lattice results for θ̅-nEDM may be compatible with zero.
If this finding is confirmed in detailed reanalysis of Refs. <cit.>,
the precision of the current lattice QCD determination of θ̅-nEDM may be completely
inadequate to constrain the QCD θ̅ angle from experimental data.
The entire modern physics program to search for fundamental symmetry violations as signatures
of new physics relies on our understanding of the effects of CP-violating quark and gluon
interactions on nucleon structure.
The importance and urgency of first-principles calculations of these effects
hardly needs more emphasis, and we have to conclude that they will likely be
even more difficult than previously thought.
In this paper, we have performed calculations of nucleon electric dipole moments
induced by CP-odd quark-gluon interactions using two different methods.
In the first method, we have successfully calculated the nucleon-current correlators
with and without the CP-odd interaction, evaluating up to four-point connected nucleon
correlation functions.
We have demonstrated that this novel technique works well, and we argue that it is both cheaper
and has fewer uncertainties than the technique used in
<cit.> to compute the same observables with a modified
Wilson action.
One of the obstacles to applying the technique of
Refs. <cit.> is that low-eigenmode deflation used to
accelerate calculations will be more expensive, because the eigenvectors have to be computed for
every modification of the fermion action.
This may also be partially true for recently introduced multi-grid methods, in which
operator-dependent subspace null vectors have to be computed in the multi-grid setup phase,
which has considerable cost.
In the second method, we computed the neutron EDM from its energy shift in a uniform background
electric field in the presence of the same CP-odd interaction.
The energy-shift method to compute nucleon EDM has been used before <cit.>,
but our calculation is the first one that uses the uniform background electric
field that respects boundary conditions <cit.>.
We perform calculations with identical statistics in both methods and can directly compare
the central values and the uncertainties of the results.
We find that the EDM results agree if the new formula for extraction of the EDFF F_3 is used.
Also, both methods yield comparable uncertainty, and the energy shift method may be preferable in
the future because it does not require forward-limit extrapolation and the excited states may be
easier to control <cit.>.
Our calculations on a lattice are far from perfect and require improvement of the treatment of
excited states and forward-limit extrapolation of the form factors.
However, the associated systematic uncertainties are too small to cast doubt on the numerical
comparison of the energy shift and the form factor methods.
Although our calculations lack an evaluation of the disconnected diagrams and the
renormalization and mixing subtractions of the quark chromo-EDM operator,
these drawbacks apply equally to both methods and therefore do not affect the said validation.
Future calculation of disconnected contributions to the F_3 form factors will
be an extension to the present work, in which the quark-disconnected loops with
insertions of the quark current, chromo-EDM, and both, will be evaluated and used
together with the existing nucleon correlators.
The disconnected contractions do not require four-point correlators and are simpler
to construct, although the stochastic noise will likely be a much bigger problem
than for the connected contractions.
We expect that with advances in numerical evaluation of the disconnected
diagrams <cit.>, this problem will be tractable.
T.B. is supported by US DOE grant DE-FG02-92ER40716.
T.I. is supported in part by US DOE Contract AC-02-98CH10886(BNL).
T.I. is also supported in part by the Japanese Ministry of Education
Grant-in-Aid, No. 26400261.
H.O. is supported by the RIKEN Special Postdoctoral Researcher program.
S.N.S. was supported by the Nathan Isgur fellowship program at JLab
and by RIKEN BNL Research Center under its joint tenure track
fellowship with Stony Brook University.
This material is based upon work supported by the U.S. Department of Energy, Office of Science,
Office of Nuclear Physics under contract DE-AC05-06OR23177.
The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish
or reproduce this manuscript for U.S. Government purposes.
S.A. and S.N.S are also grateful for the hospitality of Kavli Institute for Theoretical Physics
(UC Santa Barbara) during the “Nuclear16” workshop.
This research was supported in part by the National Science Foundation
under Grant No. NSF PHY11-25915.
Gauge configurations with dynamical domain wall fermions used in this work
were generated by the RBC/UKQCD collaboration.
The computation was performed using the Hokusai supercomputer of the RIKEN ACCC facility
and Jlab cluster as part of the USQCD collaboration.
The calculations were performed with the “Qlua” software suite <cit.>.
§ CONVENTIONS
In this appendix, we collect the conventions for γ-matrices implicitly or explicitly
used throughout the text.
In Table <ref>, we also provide notes on the transformation between
Minkowski (M) and Euclidean (E) notations, to avoid any ambiguities in matching
Minkowski and Euclidean form factor expressions for matrix elements and vertices.
In Minkowski space with metric {-1,-1,-1,+1}, we use the chiral γ-matrix basis
[γ^i]_M = ([ σ^i; -σ^i ]) ,
[γ^4]_M = ([ 1; 1 ]) ,
and with ϵ^4123=+1 we define the chiral γ_5 matrix
[γ_5]_M
= -(i/4!) [ϵ^μνρσγ_μγ_νγ_ργ_σ]_M
= i[γ^4γ^1γ^2γ^3]_M
= ([ -1 ; 1 ]) .
For the spin matrix σ^μν = (i/2)[γ^μ,γ^ν] we
will also need the relation
[σ^μνγ_5]_M
= (i/2) [ϵ^μνρσσ_ρσ]_M .
In accordance with Tab. <ref>, the γ-matrices in Euclidean space are
[γ^i]_E = ([ -iσ^i; +iσ^i ]) ,
[γ^4]_E = ([ 1; 1 ]) ,
in which γ^1,3 have the opposite sign compared to the deGrand-Rossi basis used in most
lattice QCD software.
This difference is inconsequential because all results are manifestly covariant with respect
to unitary basis transformations.
Finally, we use the γ_5 definition that agrees with the lattice software,
[γ_5]_E = [γ^1γ^2γ^3γ^4]_E =
([ 1 ; -1 ]) ,
and note that the kinematic coefficients for the vector form factors derived in
Sec. <ref> depend on the particular definition of γ_5 in terms of
γ^μ, but the numerical lattice results are invariant as long as the same
[γ_5]_E is used in both Eqs. (<ref>)
and (<ref>).
§ ELECTRIC AND MAGNETIC DIPOLE MOMENTS AND FORM FACTORS
In this Appendix section, we recall the connection between the form factors F_2,3 and
the magnetic and electric dipole moments of a spin-1/2 particle.
Although this is discussed in many textbooks, we find it useful to perform a rigorous derivation
expanding the matrix element (<ref>) in the momentum transfer q=p^'-p
and taking the limit q→0.
For completeness and to avoid any ambiguities, in addition to the γ-matrices
in Sec. <ref>, we collect all relevant conventions for E&M fields, 4-spinors,
and their interaction.
The discussion in this Section assumes Minkowski conventions with
g_μν=diag{-1,-1,-1,+1}.
The fermion-photon interaction is determined by the form of the “long” derivative,
D_μ = ∂_μ + ieA_μ:
ℒ = ψ̅(iD_μγ^μ - m) ψ
= ψ̅(i∂ - m)ψ - e A_μ J^μ ,
which leads to the interaction Hamiltonian
H_int = ∫ d^3x (-ℒ_int)
= e∫ d^3x A_μ J^μ = e ∫ d^3x (ρϕ - J⃗·A⃗) ,
where the EM potential A^μ=(A⃗, ϕ), the EM current J^μ=(J⃗, ρ),
and the electric coupling (charge) e=|e|.
To evaluate the matrix element (<ref>) in the
interaction (<ref>), we use the chiral γ-matrix representation
summarized in Appendix <ref>.
The on-shell spinors satisfying the regular Dirac equation with a real-valued mass m>0
and energy E^(') = √(m^2 + p⃗^(')2)
take the form
u_p
= ([ √(E-p⃗σ⃗)ξ; √(E+p⃗σ⃗)ξ ])
= √(m) [1 + (p⃗Σ⃗/2m)γ_5 + O(p⃗^2)]
([ ξ; ξ ]) ,
u̅_p^' = ([ √(E^'-p⃗^'σ⃗)ξ^'; √(E^'+p⃗^'σ⃗)ξ^' ])^†γ^4
= √(m) ([ ξ^'; ξ^' ])^† [1 - (p⃗^'Σ⃗/2m)γ_5 + O(p⃗^2)] ,
where
Σ^k = (1/2)ϵ^ijkσ^ij
= ([ σ^k ; σ^k ]) .
We will use these spinors to evaluate the matrix elements of the Hamiltonian (<ref>),
treating the E&M field as a classical background.
Note that in order to interpret these matrix elements as the interaction energy,
they must be normalized non-relativistically,
E_int
= ⟨p⃗^',σ^'|H_int|p⃗,σ⟩_NR
= eA_μ (1/√(2E^'·2E)) u̅_p^' Γ^μ u_p
≐ eA_μ ⟪Γ^μ⟫ ,
where we introduced the notation
⟪X⟫ = (1/√(2E^'·2E)) u̅_p^' X u_p for convenience.
In the limit of small spatial momenta |p⃗|,|p⃗^'|→0, only the spatial components
σ^ij give non-vanishing contributions when contracted with the
spinors (<ref>):
⟪σ^ij⟫
= (1/√(2E·2E^')) u̅_p^' σ^ij u_p
= ϵ^ijk ξ^'†σ^kξ + O(|p⃗|,|p⃗^'|) ,
⟪σ^4k⟫
= (1/√(2E·2E^')) u̅_p^' σ^4k u_p
= O(|p⃗|,|p⃗^'|) .
Recalling the conventions <cit.> for the EM potential A^μ,
(E⃗)^i
= -∂A^4/∂x^i - ∂(A⃗)^i/∂t ,
(H⃗)^i = (curl A⃗)^i
= ϵ^ijk ∂(A⃗)^k/∂x^j ,
these result in the following field strength tensor F_μν and its dual
F̃_μν = (1/2)ϵ_μνρσ F^ρσ,
ϵ_1234=+1:
F_μν = ([ 0 -H^3 H^2 -E^1; H^3 0 -H^1 -E^2; -H^2 H^1 0 -E^3; E^1 E^2 E^3 0 ]) ,
F̃_μν = ([ 0 E^3 -E^2 -H^1; -E^3 0 E^1 -H^2; E^2 -E^1 0 -H^3; H^1 H^2 H^3 0 ]) ,
where the rows and the columns are enumerated by μ and ν, respectively.
With the following conventions for the fermion and photon fields with definite momenta
p^(') and q, respectively,
ψ_p(x)∼ e^-ipx ,
ψ̅_p^'(x) ∼ e^ip^' x ,
A_q,μ(x)∼ e^-i(p^'-p)x = e^-iqx ,
the derivatives acting on these fields are translated into momentum factors,
∂ψ = γ^μ∂_μψ →γ^μ (-ip_μ) ψ = (-i)pψ ,
F_μν(x) = ∂_μ A_ν - ∂_ν A_μ → (-i) ( q_μ A_ν - q_ν A_μ) .
Applying the Gordon identity to Eq. (<ref>) and omitting the F_A form
factor, we get
⟨ p^',σ^' |J^μ|p,σ⟩_CPv
= u̅_p^' [
F_1 (p^'+p)^μ/2m
+ (G_M + iγ_5 F_3) iσ^μνq_ν/2m ] u_p,σ ,
where G_M=F_1+F_2 is the magnetic Sachs form factor determining the full magnetic moment
μ=Q+κ=G_M(0).
The first term is independent of the spin and is equal to the electromagnetic interaction
of a scalar particle, which we omit as irrelevant.
With the use of (<ref>) and (<ref>), the spin-dependent
interaction energy takes the form
E_int,spin
= i q_ν A_μ ⟪
eG_M σ^μν/2m
- eF_3 (1/2)ϵ^μνρσσ_ρσ/2m ⟫
= (1/2) ( (e G_M/2m) F_μν - (e F_3/2m) F̃_μν ) ⟪σ^μν⟫ .
Neglecting all but the leading order in O(|p⃗|,|p⃗^'|),
we only have to keep the spatial components σ^ij:
E_int,spin
= -e G_M/2m H⃗·Σ̂- e F_3/2mE⃗·Σ̂ ,
where the unit spin vector Σ̂= ξ^'†σ⃗ξ, |Σ̂|=1.
The coupling coefficients of the magnetic and electric fields in the above equation are
identified with the magnetic and electric dipole moments, respectively,
μ_N = G_M(0) ,
d_N = F_3(0) ,
both expressed here in units of the particle magneton e/(2m).
Note that the above derivation can be repeated for the chirally-rotated spinors and the
nucleon-current vertex (<ref>).
It is easy to show that the only change compared to Eq. (<ref>) would be that the
magnetic and electric fields couple to orthogonal linear combinations of
F̃_2,3, and that these combinations reproduce F_2 and F_3 exactly, in
agreement with Eq. (<ref>).
Finally, we note that if one uses the chirally-rotated spinors to calculate the spatial
matrix elements σ^ij, they are reduced by a factor of cos(2α),
while the timelike matrix elements σ^4k become non-zero:
e^2iαγ_5σ^ij = cos(2α) σ^ij + sin(2α) ϵ^ijkσ^4k ,
e^2iαγ_5ϵ^ijkσ^4k = -sin(2α) σ^ij + cos(2α) ϵ^ijkσ^4k .
As we noted above, u̅_p^'σ^iju_p couples to the magnetic field,
while u̅_p^'σ^4ku_p couples to the electric field.
This “mixing” of electric and magnetic fields compensates exactly the mixing in
Eq. (<ref>) induced by using the chirally-rotated spinors
ũ̅_p^', ũ_p instead of the regular spinors u̅_p^', u_p.
§ KINEMATIC COEFFICIENTS
In this section, we present expressions for the kinematic coefficients for form factors
F_1,2,3 on a Euclidean lattice.
We use two types of polarization projectors: (1) the spin-averaged T^+ and (2) the polarized
T^+_S_z.
Both projectors also select the upper (positive-parity) part of the nucleon spinors
T^+ = (1+γ^4)/2 ,
T^+_S_z = (1+γ^4)/2 · (-iγ^1γ^2) .
Form factor F_3 can be extracted from μ=3,4 components of the vector current matrix
elements between S_z-polarized nucleon states.
Using the handy notation
Π = -i(γ· p) + m , Π^' = -i(γ· p^') + m
for the positive-parity nucleon spinor matrices,
the form factor expression for the nucleon-current correlation function on a lattice,
C_3pt = C_NJ^μN̅, can be written as
Tr[T_pol C_NJ^μN̅(p⃗^',t; q⃗,t_op)]
= e^(-E^' (t-t_op) - Et_op)/(2E^'·2E) Tr[ e^iαγ_5 T e^iαγ_5·Π^'·Γ^μ(p^',p)·Π ]
= e^(-E^' (t-t_op) - Et_op)/(2E^'·2E) Tr[ (T + iα{γ_5,T} + O(α^2)) ·Π^'·Γ^μ(p^',p)·Π ] ,
where, assuming that the CP-odd interaction is small, we have expanded in
the CP-odd mixing angle α.
Below we quote formulas for contributions to the last line of Eq. (<ref>)
computed for zero sink momentum p⃗^'=0,
source p⃗ = - q⃗ , E =√(m^2+q⃗^2) ,
sink p⃗^' = p⃗ + q⃗ = 0 , E^' =m ,
with the nucleon spin projectors T^+ and T^+_S_z.
The α-independent contributions are
Tr[T^+ Π^'Γ^μΠ]
= 4m^2([ i q_1 / m -iτ q_1 / m 0; i q_2 / m -iτ q_2 / m 0; i q_3 / m -iτ q_3 / m 0; 2(1+τ) -2τ(1+τ) 0 ]) ,
Tr[T^+_S_z Π^'Γ^μΠ]
= 4m^2([ -q_2 / m -q_2 / m q_1 q_3 / (2m^2); q_1 / m q_1 / m q_2 q_3 / (2m^2); 0 0 q_3^2 / (2m^2); 0 0 -i(1+τ) q_3 / m ]) ,
where the rows correspond to the Lorentz components μ=1,2,3,4
and the columns correspond to the form factors F_1,2,3.
We have also introduced the frequently used kinematic variable
τ ≐ Q^2/(4m^2), which for p⃗^'=0 equals (E-m)/(2m) .
The coefficients of the contributions ∼α are
Tr[{γ_5,T^+} Π^'Γ^μΠ]
= 4m^2([ 0 0 -τ q_1 / m; 0 0 -τ q_2 / m; 0 0 -τ q_3 / m; 0 0 2iτ(1+τ); ]) ,
Tr[{γ_5,T^+_S_z} Π^'Γ^μΠ]
= 4m^2([ 0 i q_1 q_3 / (2m^2) 0; 0 i q_2 q_3 / (2m^2) 0; -2iτ -2iτ + i q_3^2 / (2m^2) 0; -q_3 / m τ q_3 / m 0; ]) .
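In practice, these coefficient tables enter a linear fit that extracts the form factors from the measured traces. The sketch below is illustrative only: it transcribes the tables above into code, the momentum and form factor values are arbitrary synthetic stand-ins, and in a real analysis the traces would come from the lattice correlators with the kinematic prefactor removed.

import numpy as np

def coeff_tables(q, m):
    """Kinematic coefficients; rows: (T^+, mu=1..4) then (T^+_Sz, mu=1..4),
    columns: F1, F2, F3.  Transcribed from the tables above, p' = 0."""
    E = np.sqrt(m**2 + q @ q)
    tau = (E - m) / (2 * m)
    c = np.zeros((8, 3), complex)
    c[0:3, 0] = 1j * q / m                    # spin-averaged, mu = 1..3
    c[0:3, 1] = -1j * tau * q / m
    c[3, 0] = 2 * (1 + tau)                   # spin-averaged, mu = 4
    c[3, 1] = -2 * tau * (1 + tau)
    c[4, 0] = c[4, 1] = -q[1] / m             # S_z-polarized, mu = 1
    c[5, 0] = c[5, 1] = q[0] / m              # S_z-polarized, mu = 2
    c[4:7, 2] = q * q[2] / (2 * m**2)         # S_z-polarized F3, mu = 1..3
    c[7, 2] = -1j * (1 + tau) * q[2] / m      # S_z-polarized F3, mu = 4
    return 4 * m**2 * c

q, m = np.array([0.1, 0.2, 0.3]), 0.94
A = coeff_tables(q, m)
F_true = np.array([1.0, 2.5, 0.01])           # synthetic stand-in values
traces = A @ F_true
F_fit, *_ = np.linalg.lstsq(A, traces, rcond=None)
print(F_fit.real)                             # recovers (F1, F2, F3)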
Up to order O(α), the nucleon correlation functions are[
Note that both α and F_3 are proportional to the CP-odd perturbation; therefore we
count F_3 = O(α) and drop terms αF_3 and higher.
]
Tr[T^+ C_NJ^3N̅]
= 𝒦 [
iq_3/m G_E
+ O(α^2) ] ,
Tr[T^+ C_NJ^4N̅]
= 𝒦 [
2(1+τ) G_E
+ O(α^2)] ,
Tr[T^+_S_z C_NJ^3N̅]
= 𝒦 [
2ατ G_M
- α q_3^2/(2m^2) F_2
+ q_3^2/(2m^2) F_3
+ O(α^2)] ,
Tr[T^+_S_z C_NJ^4N̅]
= 𝒦 [
- iα q_3/m G_E
- i(1+τ)q_3/m F_3
+ O(α^2)] ,
where G_E=F_1-τ F_2 is the electric and G_M=F_1+F_2 is the magnetic Sachs form
factor, and
𝒦 = (m/E) e^(-E^' (t_sep-t_op) - E t_op)
is the time dependence combined with kinematic
factors.
In the analysis of the C_NJN̅/C_NN̅ ratios (<ref>), the exponential
time dependence is canceled, and the kinematic prefactor has to be modified
to take into account the traces of the nucleon two-point functions:
𝒦 → 𝒦_ratio = m/√(2E(m+E)) .
In addition, we evaluate the extra contributions to the kinematic coefficients
∼ α{γ_5,Γ^μ} that come from the spurious mixing of F_2,3,
Tr[T^+ Π^'{γ_5,Γ^μ}Π]
= 4m^2([ 0 0 2τ q_1 / m; 0 0 2τ q_2 / m; 0 0 2τ q_3 / m; 0 0 -4iτ (1+τ); ]) ,
Tr[T^+_S_z Π^'{γ_5,Γ^μ}Π]
= 4m^2([ 0 -i q_1 q_3 / m^2 -2 i q_2 / m; 0 -i q_2 q_3 / m^2 2 i q_1 / m; 0 -i q_3^2 / m^2 0; 0 -2 (1+τ) q_3 / m 0; ]) ,
which in Refs. <cit.>
contribute to the polarized nucleon-current correlators as
δTr[T^+_S_z C_NJ^3N̅]
?= 𝒦 [
α q_3^2/m^2 F_2
+ O(α^2)] ,
δTr[T^+_S_z C_NJ^4N̅]
?= 𝒦 [
- 2iα(1+τ)q_3/m F_2
+ O(α^2)] .
If the terms (<ref>,<ref>)
are erroneously added to the kinematic
coefficients (<ref>,<ref>), analysis of the same
lattice correlation functions will result in incorrect values of the EDFF,
F̃_3 = F_3 - 2α F_2, in full agreement with Eq. (<ref>).
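Correcting published values is then a one-line operation; the helper below merely restates the relation above, with the sign convention following the equation just quoted.

def corrected_F3(F3_tilde, F2, alpha):
    """Undo the spurious mixing: the erroneous analysis returns
    F3_tilde = F3 - 2*alpha*F2, so the true EDFF is recovered as below."""
    return F3_tilde + 2 * alpha * F2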
| The origin of nuclear matter can be traced back to the excess of nucleons
over antinucleons in the early Universe; this excess is one of the greatest puzzles
in physics, known as the baryon asymmetry of the Universe (BAU).
One of the required conditions for the BAU is violation of the CP symmetry.
In the Standard Model (SM), the CKM matrix phases lead to CP violation in weak interactions,
but their magnitudes are not sufficient to explain the BAU,
and signs of additional CP violation are actively sought in experiments.
The most promising ways to look for CP violation are measurements of
electric dipole moments (EDMs) of atoms, nucleons and nuclei.
In particular, the Standard Model prediction for the neutron EDM is five orders of magnitude
below the current experimental bound, and represents a negligible background.
Near-future EDM experiments plan to improve this bound by 2 orders of magnitude,
and are capable of constraining various Beyond-the-Standard-Model (BSM) extensions
of particle physics, purely from low-energy nuclear and atomic high-precision experiments.
Knowledge of nucleon structure and interactions is necessary to interpret these
experiments in terms of quark and gluon effective operators and put constraints
on proposed extensions of the Standard Model, in particular SUSY and GUT models,
as sources of additional CP violation.
Connecting the quark- and gluon-level effective
interactions to their hadron-level counterparts is an urgent task for lattice QCD (an extensive
review of EDM phenomenology can be found in Ref. <cit.>).
The proton and the neutron can have electric dipole moments only if the CP symmetry of
the Standard Model Lagrangian is broken by additional P-, T-odd interactions.
The only such dimension-4 operator is the QCD θ̅-angle
(θ̅ stands for the physically-relevant combination of the QCD θ
angle and quark mass phases).
The θ̅-induced nucleon EDMs (nEDMs) have been calculated on a lattice
from energy shifts in uniform background electric
field <cit.>
or extracting the P-odd electric dipole form factor (EDFF) F_3(Q^2)
from nucleon matrix elements of the vector current in vacuum <cit.>.
Nucleon EDMs
have been studied using QCD
sum rules, quark models, and chiral perturbation theory (see Refs. <cit.>
to name a few).
Quark-EDM-induced nucleon EDMs have recently
been computed on a lattice in a partially-quenched framework <cit.>.
Another important dimension-5(6)[
These operators are sometimes referred to as “dimension-6” because in the SM they
contain a factor of the Higgs field.]
operator is the CP-odd quark-gluon interaction,
also known as the quark chromo-electric dipole moment (cEDM),
ℒ_cEDM = i ∑_ψ=u,dδ̃_ψ/2ψ̅(T^a G^a_μν) σ^μνγ_5 ψ ,
and calculations of cEDM-induced nEDMs have recently started using Wilson
fermions <cit.>.
In this paper, we report several important achievements in studying nucleon EDMs
on a lattice.
First, we argue that the commonly accepted methodology for computing electric
dipole form factors of spin-1/2 particles on a lattice fails to correctly identify the
electric dipole form factor.
In particular, in the standard analysis of the nucleon-current correlators <cit.>, the electric dipole form factor
F_3 receives a large and likely dominant contribution from spurious mixing with
the Pauli form factor F_2.
The energy shift methods <cit.> are not affected by
such mixing, but their precision has not been sufficient to detect the discrepancy.
This problem affects all the previous lattice calculations of the nucleon EDFFs and EDMs from
nucleon-current correlators, including those studying the θ̅-angle <cit.>
as well as the more recent ones studying the chromo-EDM <cit.>.
We demonstrate the problem formally in Sec. <ref>,
and we also derive a correction for the results of Refs. <cit.> that subtracts the spurious mixing with F_2.
In addition, in Sec. <ref> we study the energy shift of a neutral
particle on a Euclidean lattice in a uniform background electric field.
We introduce the uniform electric field in a way that preserves translational invariance and periodic
boundary conditions on a lattice <cit.>.
In order to satisfy these conditions, the electric field value has to be analytically continued to
the imaginary axis upon Wick rotation from Minkowski to Euclidean space-time, and we demonstrate that
the energy eigenstates of a fermion possessing an EDM are shifted by a purely imaginary value.
In Sec. <ref>, we apply this formalism to the analysis of neutron correlators
computed in the presence of the quark chromo-EDM interaction (<ref>).
Calculation of the neutron EDM in the background field is independent of parity-mixing
ambiguities, and it allows us to validate our new formula for the EDFF F_3 numerically.
The difference is evident only if the nucleon “parity-mixing” angle α is large,
α ≳ 1.
The calculations with quark chromo-EDM generate very strong parity mixing compared to the
θ̅-angle, which is beneficial for our numerical check.
In Section <ref> we calculate the proton and neutron EDFFs F_3p,n(Q^2)
induced by the quark chromo-EDM interaction (<ref>), as well as the regular
-even Dirac and Pauli form factors F_1,2.
In Sec. <ref> we compare the EDM results from the form factor and
the energy-shift calculations,
providing a numerical confirmation of the validity of our new EDFF analysis.
Finally, in Section <ref> we analyze some selected results for the nucleon EDM
induced by the θ̅-angle available in the literature <cit.> and attempt to correct them according to our findings.
http://arxiv.org/abs/1701.08042v1 | 20170127130653 | Single-particle detection of products from atomic and molecular reactions in a cryogenic ion storage ring | [
"C. Krantz",
"O. Novotný",
"A. Becker",
"S. George",
"M. Grieser",
"R. von Hahn",
"C. Meyer",
"S. Schippers",
"K. Spruck",
"S. Vogel",
"A. Wolf"
] | physics.ins-det | [
"physics.ins-det"
] |
C. Krantz [MPI] (cor1)
O. Novotný [MPI] (cor2)
A. Becker [MPI]
S. George [MPI]
M. Grieser [MPI]
R. von Hahn [MPI]
C. Meyer [MPI]
S. Schippers [UniGI]
K. Spruck [MPI, IAMP]
S. Vogel [MPI]
and A. Wolf [MPI]
[cor1] [email protected] — present address: Marburg Ion-Beam Therapy Centre, 35043 Marburg, Germany
[cor2] [email protected]
[MPI] Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
[UniGI] I. Physikalisches Institut, Abt. Atom- und Molekülphysik, Justus-Liebig-Universität Gießen, Heinrich-Buff-Ring 16, 35390 Gießen, Germany
[IAMP] Institut für Atom- und Molekülphysik, Justus-Liebig-Universität Gießen, Leihgesterner Weg 217, 35392 Gießen, Germany
We have used a single-particle detector system, based on secondary-electron emission, for counting low-energy (∼ keV/u) massive products originating from atomic and molecular ion reactions in the electrostatic Cryogenic Storage Ring (CSR). The detector is movable within the cryogenic vacuum chamber of the CSR and was used to measure production rates of a variety of charged and neutral daughter particles. In operation at a temperature of ∼ 6 K, the detector is characterised by a high dynamic range, combining a low dark-event rate with good high-rate particle counting capability. On-line measurement of the pulse height distributions proved to be an important monitor of the detector response at low temperature. Statistical pulse-height analysis allows the particle detection efficiency of the detector to be inferred; it has been found to be close to unity also in cryogenic operation at 6 K.
Storage ring Low temperature Single-ion detection Secondary electrons
§ INTRODUCTION
Single-particle counting detectors are important instruments in many atomic and molecular physics experiments on fast-propagating ion beams <cit.>. In such experiments, an ion beam is guided through a target medium which can consist, e.g., of photons, electrons, neutral atoms, or molecules. Reactions of the projectile ions with the target particles typically lead to products of different charge-to-mass ratio. This results in the formation of daughter beams of different ion-optical rigidity compared to the parent, which can be separated from the latter by electric or magnetic analysing fields. At known intensity of the parent beam and thickness of the target, detection of the daughter particles reveals the rate coefficients of the processes involved in their production. Due to the typically low ion numbers and reaction cross-sections, the product detection needs to be done on the single-particle level.
Heavy-ion storage rings enhance such target experiments by their ability to store the projectiles for extended periods of time. Due to energetic processes in the ion source, unknown, highly-excited quantum states are often populated in atomic or molecular ions directly after production. In many cases storage of the ions enables them to reach a well-understood state population by spontaneous decay before undergoing the actual experiment. The extended storage time also allows phase-space manipulation of the ion beam, such as electron or stochastic cooling, or initial-state preparation techniques as required for laser- or collision-driven pump-probe experiments <cit.>.
For years, medium-energy magnetic ion synchrotrons have been used very successfully for these kinds of experiments—a remarkable development considering that the technology of those machines was originally aimed at nuclear physics applications <cit.>. Based on that success, a new class of heavy-ion storage rings has emerged, with designs that are optimised for experiments on atomic and molecular physics. They use purely electrostatic ion optics, matching the output energy of relatively simple electrostatic injectors that can be flexibly equipped with state-of-the-art molecular ion sources <cit.>. The most advanced set-ups use cryogenic cooling machines to reduce the temperature of their beam guiding vacuum vessels down to values near that of liquid helium <cit.>.
The advantages of these cryogenic ion storage rings come with technological challenges with respect to the particle detector equipment. A restriction regarding possible detection principles arises from the low energy of the product particles. Limited by available high-voltage technology, typical kinetic energies in electrostatic storage devices are of order a few keV/u or below. This rules out detection mechanisms where the counting volume of the detector is covered by significant layers of passive material—as is the case for surface-barrier semi-conductor counters <cit.> and, to lesser extent, for scintillators <cit.>. Open micro-calorimetric detectors are a promising option for product detection at cryogenic storage rings, which is presently under investigation <cit.>. Their fabrication and operation are however extremely difficult and expensive, such that their use may be limited to selected experiments in the foreseeable future.
Suitable detectors for cryogenic storage rings, which can be widely deployed at acceptable manufacturing and operating costs, are therefore based on surface secondary-electron emission with subsequent multiplication <cit.>. This detection technique has proven itself also at particle energies below 1 keV/u <cit.>, but the low-temperature environment does come with new challenges. Besides engineering problems related to thermal expansion and embrittlement of materials, the efficiency of charge multiplication stages commonly used in low-energy ion detection is known to suffer in cold operation. Due to their semi-conductor-like properties, the electric resistance of micro-channel plates (MCPs) and single-channel electron multipliers (CEMs) rises strongly upon cooling into the cryogenic regime. The high resistance can lead to decreased gain or even complete charge depletion, especially at elevated particle hit rates. Depending on the application, MCPs have been used near ∼ 10 K with varying degrees of success <cit.>. Even less is known about the low-temperature behaviour of CEMs <cit.>.
In a recent publication, we have presented the design of a movable single-particle counting detector for the Cryogenic Storage Ring (CSR) of the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg, Germany <cit.>. Here, we report on the first operation of this device under real-life experimental conditions at the CSR.
This paper is structured as follows: In Section <ref> we briefly describe the instrument. In Section <ref> we present the most important findings from the first operation of the detection system with the storage ring CSR at its lowest temperature of ∼ 6 K. In Section <ref> we quantify and discuss the results from that series of experiments, with emphasis on the single-particle detection efficiency of the set-up. Section <ref> closes with a summary and outlook onto future developments.
§ OVERVIEW OF THE EXPERIMENTAL SET-UP
The CSR is a fully electrostatic storage ring designed for positive or negative ions of kinetic energies up to 300 keV per unit of charge <cit.>. The beam guiding vacuum vessel as well as the ion optics contained therein can be cooled to temperatures of ∼6 K by a closed-loop liquid-helium refrigerator. For thermal insulation, the beam line is enclosed in an additional isolation vacuum vessel and protected by several layers of black-body-radiation shields.
With an orbit circumference of 35 m, the storage ring (cf. Fig. <ref>) consists of four identical ion-optical sectors which enclose four field-free drift sections. While one of the latter is occupied by the beam diagnostic instrumentation of the storage ring <cit.>, the other three are free for installation of experimental equipment. The counting detector (lower panel of Fig. <ref>) is located downstream from an experimental section, within one of the ion-optics sectors of CSR. The technology of the detector system has been described extensively in a dedicated publication <cit.>, hence we limit ourselves to a brief overview here.
Equipped with a 20-mm-wide entrance window for heavy particles, the detector is movable transversely to the beam direction in the plane of the storage ring. It is installed 1.0 m downstream of a short (6^∘) electrostatic bending dipole of the storage ring. Product particles generated from the stored ions are deflected at a characteristic angle in the dipole element. By placement at a suitable horizontal position, the detector can intercept products with a charge-to-mass ratio that differs from that of the stored parent beam by more than 100 % in both directions. Specifically, it can detect neutral products on axis of the ion beam in the experiment as well as, e.g., ionisation products up to the double charge of a stored atomic cation beam <cit.>.
Ultimately, the detector is designed to intercept product particles originating from ion-electron interactions in the future electron cooler of CSR—like electron recombination or electron impact ionisation <cit.>. In contrast to the detector set-up, the cooler was not yet operational during the 2015 experiments. Instead, an ion-photon interaction beam line was installed in the experimental CSR section preceding the detector <cit.>. It allowed the stored ions to be overlapped at grazing angle with laser beams of various wavelengths that were coupled into CSR using a system of broadband view-ports and mirrors in the cryogenic vacuum chamber. This in-ring laser target was used in experiments on photo-induced electron detachment of stored anions. In addition, without using the laser beams, experiments on auto-detachment and auto-fragmentation of excited molecular and cluster ions were performed using the same set-up. At higher CSR operating temperatures, products of electron transfer from the residual gas to stored cations were observed. For testing purposes, the detector can be irradiated by photons from an ultra-violet (UV, 245(5) nm) light emitting diode (LED) <cit.>. The UV-LED is located in a room temperature annex of the CSR sector opposite the detector. The beam of photons from the LED is practically uncollimated and enters the CSR vacuum chamber via a set of UV-grade sapphire view-ports.
The detector employs a variant of the `Daly' ion detection principle, where incident massive particles impinge onto a secondary-electron emitting cathode made of aluminium <cit.>. The secondary electrons released in each hit are accelerated by 1.2 kV towards a small chevron micro-channel plate stack (cf. Fig. <ref>). The latter acts as secondary-electron multiplier, while being protected from direct hits by the primary massive ions. The multiplied electron bunches are then collected on a metal anode.
After capacitive decoupling from high-voltage, the pulses are driven into a fast front-end amplifier of 50 Ω input impedance and gain factor 200 (Ortec VT120A). In most of the presented experiments, the resulting ∼ 10 ns-short electric pulses were converted into logical signals using a linear discriminator, and counted by a VME-based multiscaler. Simultaneously—but asynchronously—sample pulses could be recorded using a digital oscilloscope which served as waveform digitiser. This simple solution yields two independent datasets for the detector count rate and for the sample waveforms, which cannot be correlated on the single-particle level. In the course of the experiments, a second, more advanced data acquisition system was set up, consisting of a fast analog-to-digital converter (FADC, Agilent Acqiris U1084A) equipped with a large sampling memory. This system allows gapless recording of the pre-amplified detector signal. Via an on-line peak-finding routine, it yields a single, consistent dataset containing the amplitude and arrival time of each individual detector pulse.
Much care was taken in preparing the detector to perform at cryogenic temperatures. With reference to that purpose, the device has been called `COMPACT', the `COld Movable PArticle CounTer' <cit.>. In order to support optional room-temperature operation of CSR, the design additionally needed to fulfil the standard low-out-gassing requirements of bakeable UHV equipment. All electronics is kept on the atmosphere side of CSR's nested vacuum system, as is the rotary actuator that allows horizontal positioning of the particle sensor via a thread drive inside the CSR beam line.
The chevron MCP stack consists of two matched, circular `extended dynamic range (EDR)' micro-channel plates (Photonis 18/12/10/12 D 40:1 EDR, MS) of 18 mm useful diameter. EDR MCPs are characterised by a significantly lower resistance as compared to standard variants and are thus expected to perform better at very low detector temperatures. As an option to warm up the electron multiplier in operation, a small electric heater made of a bare Constantan wire is included in the supporting frame of the MCP stack <cit.>.
§ LOW TEMPERATURE OPERATION
The first experimental beam-times at the CSR took place in 2015, and lasted for approximately five months, including cool-down of the storage ring by the liquid-helium refrigerator and rewarming of the facility <cit.>. Besides the afore mentioned measurements on electron detachment and cluster fragmentation, a multitude of experiments with positive and negative ions were conducted in an effort to characterise the storage ring and beam diagnostic instrumentation. During most of the experiments, the CSR operated at an average temperature of ∼ 6 K. Due to technical issues of the injection accelerator, the ion energies were limited to 80 keV, i.e. well below the CSR design energy of 300 keV per unit of charge. The stored ion species included Ar^+, N_2^+, O^-, OH^-, Si^-, C_2^-, Co_2^-, Co_3^-, and Ag_2^- <cit.>. The COMPACT detector system was used in almost all of the experiments, so that its low-temperature performance could be studied in a variety of use cases. This section presents a few examples of measurements that showcase the most important findings made during operation of the particle detector.
§.§ Cool-Down
While cryo-adsorption in cold operation vastly improves the residual gas pressure, the CSR vacuum concept does not rely on cryogenics alone. Before start of the cool-down procedure, the beam-guiding vacuum vessel of the storage ring was UHV-baked at 180^∘C and subsequently reached a residual gas pressure of ∼ 1× 10^-10 mbar already at room temperature <cit.>. Consequently, the storage ring and the detector could already operate during the cool-down phase of CSR from 300 K to 6 K, which took approximately three weeks.
During the cool-down, a beam of 60-keV (1.5 keV/u) ^40Ar^+ ions was regularly stored in CSR. The detector was routinely switched on to detect the neutral Ar products, originating from residual-gas electron capture by Ar^+, in order to deduce the stored-ion lifetime <cit.>. Additionally, the detector was irradiated by 245-nm photons from the UV-LED for comparison of the signals (see below).
Like the heavy particles, the UV photons do not irradiate the MCPs directly, but release electrons from the surface of the converter electrode which are then accelerated towards the MCPs. During UV irradiation, the aluminium converter thus acts as a photo-cathode of low (∼ 10^-4) quantum efficiency <cit.>. It was verified that, when the electron acceleration potentials were disabled while keeping the MCP gain voltage enabled, the count rate of the set-up dropped to zero. This shows that the MCP indeed detects secondary electrons only, and no primary particles (photons or ions) reach it in normal operation. Via the driving voltage of the UV-LED, the rate of photon detections could be varied over many orders of magnitude. In contrast to fast ions, each photon can eject at most a single electron from the converter electrode, as the photon energy of 5.1(1) eV is lower than twice the work function of the cathode material.
After pre-amplification, the pulses were discriminated and counted using the VME multiscaler, while sample waveforms were recorded by the oscilloscope. The MCP bias current was continuously measured by a floating nano-amperemeter. Figure <ref> shows the derived electric resistance of the stacked MCP set as a function of the average temperature of the relevant CSR sector. No dedicated temperature sensor is attached to the particle detector itself. However, as the CSR cool-down process was slow, we assume that the MCPs were in thermal equilibrium with their surroundings.
Starting at the specified value of 56 MΩ at room temperature, the resistance of the chevron MCP-set rose by almost four orders of magnitude during the cool-down, reaching values of ∼ 300 GΩ at 6 K. After switching the detector on, a gradual increase of the MCP bias current by up to a factor of three was routinely observed within the first hour of operation, especially at very low temperatures. It is yet unknown whether this effect is due to operation-induced warming up or whether it reflects a purely electric change in the channel-plate properties. Figure <ref> shows only the initially measured MCP resistance, directly after enabling of the high-voltage supplies, when the temperature of the plates can be assumed to have been equal to the temperature of the CSR vacuum chamber.
Figure <ref> shows the pulse height distributions of the amplified detector signals obtained for UV photon and 60-keV argon irradiation at three different temperatures during the cool-down of CSR. The gain voltage across both MCPs was kept constant at 1.85 kV in all measurements. Pulses were recorded above a discrimination threshold of ∼ 0.035 V. For comparison, the individual measurements were normalised to the number of counts. No significant changes in the distributions were found, neither for UV photons nor for 60-keV argon particles, between room temperature and the final operating point of CSR of 6 K.
The intensity of the UV light source was set such that the average rate of detected photons was ∼ 600 s^-1 in each measurement. For argon irradiation, the detected particle rate varied strongly with temperature, as—at given intensity of the stored ion beam—the rate of electron capture events scaled with the residual gas pressure in CSR, which improved drastically during the cool-down <cit.>. At 200 K the detector recorded neutral Ar products at rates up to several 10000 s^-1 while at 110 K, the production rate had decreased to a few 1000 s^-1. At the final CSR temperature of 6 K, the rate of electron capture products was too low to be identified above a low background event rate of ∼ 10 s^-1, in spite of the high stored ion current of ∼ 1 μA. As the background events were not localised to the position of the axis of a neutral daughter beam, they are believed to be due to stray secondary particles produced along the beam pipe by the primary ions. In order to obtain a reliable pulse height distribution from impact of 60-keV argon particles at 6 K, the detector was moved towards the closed orbit in the storage ring until direct hits from parent Ar^+ ions could be detected at a rate of ∼ 500 s^-1. It was assumed that the secondary electron ejection coefficient of 60-keV Ar^+ ions was sufficiently similar to that of neutral argon atoms of the same energy. Indeed, the measured pulse height distribution of Ar^+ ions corresponded to that of the neutral atoms at higher temperatures, as shown in Fig. <ref>.
§.§ Localised Heating of the MCPs
With the beam guiding vacuum chamber of CSR at 6 K, the electric heater built into the detector can be used to warm up the micro-channel plate set above the temperature of its surroundings. A functional test showed that, by operating the heating wire at a power of ∼ 80 mW, the resistance of the MCP stack could be lowered by a factor of ∼ 10. An approximate calibration, as indicated in Fig. <ref>, translates this change in resistance to a warming of the channel-plate set from 6 K to ∼ 15 K. No warming of the neighbouring CSR structures was observed in the process. Previous experiments have shown that even substantially greater heating powers can be applied without danger to the detector <cit.>.
The pulse height distribution obtained for UV irradiation did not change during the tests of the heating. This confirmed the earlier observations, as the UV-induced detector signals had also not been influenced by the cooling-down from room temperature (cf. Fig. <ref>). It is however expected that the option of localised heating of the MCPs can improve the detector response to high-rate impact of massive particles (cf. Sect. <ref>). In the experiments reported in the following, which focussed on the extreme low-temperature behaviour of the device, this possibility was not yet checked, and the MCP stack was deliberately left at the 6 K temperature of the surrounding CSR vacuum chamber.
§.§ General Performance at 6 K
With the CSR operating at its lowest temperature, the COMPACT set-up was employed to detect a variety of neutral and charged product beams. For single-particle experiments, the stored beam currents were well below that of the Ar^+ ions used for CSR commissioning. The above mentioned secondary-ion background was not observed in any other experiment, and even weak daughter beams could be easily identified by moving the detector horizontally across the CSR beam line and monitoring the average particle count rate as a function of travel distance, as shown in Fig. <ref>. During the first experimental campaign, the cryogenic thread drive—as described in ref. <cit.>—was used to move the detector by a total distance of more than ∼ 7 m at lowest temperature, equivalent to 24 full strokes across the CSR vacuum chamber.
In the example of Fig. <ref>, a product beam of 60-keV neutral OH molecules (3.53 keV/u) was detected, originating from electron detachment of stored OH^- ions in a 633-nm cw laser beam in the experimental CSR section preceding the COMPACT detector. Using the known size of the horizontal detector aperture of 20 mm and the particle count rate measured as a function of detector position, the horizontal transverse daughter beam envelope can be obtained by deconvolution. The standard deviation of the assumed Gaussian product beam profile was derived to be 9.0 (5) mm at the detector position. As the neutral particles are not influenced by the ion optics, and as the momentum transfer to the molecule during the photo-detachment is negligible, the product beam maintains the emittance of its parent. Using the horizontal beta function of the storage ring <cit.> one derives a 95% horizontal transverse emittance of the ion beam of 24 (3) mm mrad. One also derives that, in this example, the horizontal width of the daughter beam leads to a 27 (3) % geometric detection loss due to the narrow sensitive aperture. This is by design and not considered critical, as the detector is intended primarily to detect products originating from future electron-cooled ion beams <cit.>. Such beams are characterised by a much lower transverse emittance, and in their case the narrow detector aperture will help identifying products based on charge-to-mass selection. Note that the sensitive aperture is much taller than it is wide (50 mm in height <cit.>), so that no significant vertical cut on the product beam is believed to occur in the experiments described here. Variants of the COMPACT detector with larger horizontal acceptance for use in future measurements on uncooled ion beams are presently being developed.
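For illustration, the deconvolution described above amounts to fitting the measured rate-versus-position curve with a Gaussian beam profile folded with the rectangular detector aperture; a possible sketch (the arrays x_mm and rate stand for a measured horizontal scan and are hypothetical) is given below.

import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

W = 20.0   # width of the sensitive aperture (mm)

def scan_model(x, amp, x0, sigma):
    """Top-hat aperture of width W folded with a Gaussian beam profile."""
    zp = (x - x0 + W / 2) / (np.sqrt(2) * sigma)
    zm = (x - x0 - W / 2) / (np.sqrt(2) * sigma)
    return amp * 0.5 * (erf(zp) - erf(zm))

# popt, _ = curve_fit(scan_model, x_mm, rate, p0=[300.0, 0.0, 8.0])
# sigma = popt[2]
sigma = 9.0   # mm, best-fit value quoted above for the OH daughter beam
print(1.0 - erf(W / (2 * np.sqrt(2) * sigma)))   # geometric loss, ~0.27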
The detector pulse shapes and height distributions found for atomic or molecular products from experiments at 6 K were quite similar to those observed for argon hits during cool-down (cf. Fig. <ref>). As shown in Fig. <ref>, the amplified anode signals are 10–30 ns short pulses of typically a few 100 mV amplitude.
Characteristic features found in all experiments conducted during CSR commissioning are pulse height distributions which are not peaked (cf. Figs. <ref> and <ref>). The explanation for this signature lies in the fact that the chevron MCPs detect the 1.2-keV secondary electrons ejected by the primary ions <cit.>. All of these secondary electrons have to be assumed to impinge close to different MCP channels, within a time shorter than the observed pulse width of ∼ 10 ns, as has been verified by numerical simulation of the electron trajectories in the detector. Hence, the total pulse height from heavy-ion impact is, in fact, the result of pile-up of several independent pulses generated by 1.2-keV electron impact on the MCPs. The resulting sum pulse height distribution tends to be monotonically decreasing towards higher amplitudes, as will be discussed in Sect. <ref>, following the original analysis by Spruck et al. <cit.>. Due to this characteristic pulse height distribution, the detector count rate was sensitive to the signal discrimination threshold in the present work, and a low electric base-line noise was imperative in order to obtain good overall detection efficiency.
§.§ High-Average Rate Response at 6 K
At 6 K, a dependence of the pulse heights on the average detector count rate was observed for heavy-particle impact above a certain critical hit rate. This is illustrated in Fig. <ref>: In the experiment, a beam of Co_2^- ions was stored in CSR at 60-keV total energy (0.51 keV/u) and passed through a grazing-angle, 633-nm-laser target in the experimental section. Photo-detachment yielded neutral Co_2 molecules that reached the detector at the kinetic energy of the parent beam. By variation of the intensities of the ion or laser beams, the average rate of product particles could be adjusted.
It was observed that above ∼ 1000 discriminated Co_2 hits per second on average, the pulse amplitudes started to decrease noticeably (cases 1 and 2 in Fig. <ref>). At a detection rate of ∼ 2700 s^-1 (case 1) the mean pulse amplitude was ∼ 25 % lower than below 1000 s^-1 (case 3). Given the simple counting logic based on a fixed signal discrimination threshold (dashed vertical line in Fig. <ref>), this caused the detected particle rate to vary non-linearly with the stored intensity of the Co_2^- beam above a discriminated count rate of 1000 s^-1. In contrast, UV photons from the test LED—which produce much smaller MCP signals—could be detected at 6 K at significantly higher rates (more than ∼ 3000 s^-1) without deterioration of their pulse amplitudes. We attribute this effect to gain saturation due to the onset of charge depletion of the MCPs at simultaneously low temperature and elevated heavy-particle hit rate. This hypothesis is also supported by the earlier observation that, at higher temperature, 60-keV Ar could be detected at much higher average count rates with no evidence of signal degradation (cf. Fig. <ref>). It is expected that local warming of the MCP set (cf. Sect. <ref>) can mitigate these saturation effects, however this has not been studied yet.
In the Co_2^- experiment the pulse height distribution—and thus the discriminator efficiency—was rate-independent also at 6 K, as long as the average count rate was kept below 1000 hits per second. Under these conditions, the detector could be used to reliably measure the evolution of the photo-detachment rate over very long storage times of the Co_2^- ions (left-hand frame of Fig. <ref>). A fit to the Co_2 count rate as a function of storage time yields a 1/e lifetime of the anions in the experimental set-up of 1383 (5) s. After 7200 s of storage, the remaining ions were deliberately kicked out of their closed orbits within a single turn, so that the dark count rate of the detector could be measured. As visible in the left-hand frame of Fig. <ref>, even two hours after ion injection, the measured Co_2^- photo-detachment rate was still more than an order of magnitude greater than the detector background level. In all measurements, the latter was found to be 0.3 (1) s^-1, with no notable dependence on temperature. Most of the dark pulses are believed to be due to β-decay of radio-nuclides in the MCP substrate <cit.> and are very similar to actual counting pulses in shape and amplitude. The fact that this background rate is found to be very low is important in that rejection of the dark events based on pulse shape analysis does not at present seem feasible.
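The quoted lifetime follows from a standard decay fit on top of the constant dark rate; a minimal sketch, with hypothetical arrays t and rate standing for the storage time and the detected product rate, could read:

import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, r0, tau, dark):
    """Exponential beam decay on top of a constant detector dark rate."""
    return r0 * np.exp(-t / tau) + dark

# popt, pcov = curve_fit(decay_model, t, rate, p0=[1e3, 1.4e3, 0.3])
# tau, dark = popt[1], popt[2]   # cf. 1383(5) s and 0.3(1) 1/s quoted above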
§.§ Short Particle Bursts at 6 K
In the experiments described so far the product particles reached the detector in quasi-steady streams, with average fluxes that varied slowly compared to all other time constants of the set-up. Due to the long storage times of the CSR, this situation is common for parent ions that are in a stable state, or when their internal state population varies slowly, such that the rate of interaction between the stored beam and the target is nearly constant.
In other cases the reaction products show a burst-like time-structure. E.g., interaction of the stored ions with a pulsed laser target or the ion production process itself can lead to population of metastable levels. By timing the detection of the resulting end products with respect to the time of interaction, the lifetimes of the metastables can be measured down to the scale of the revolution period in the storage ring. In such experiments, the detector hit rate may vary drastically within a few milliseconds, with the burst rate directly following the formation of the metastables significantly exceeding the average count rate in the experiment.
As an example, the auto-fragmentation of Co_2^- molecular ions was studied. The Co_2^- beam was produced in a metal-ion sputter source, accelerated to a total kinetic energy of 60 keV, and stored in CSR for 90 s. After that storage time, any remaining ions were dumped before the next injection took place. Also here, the storage ring operated at 6 K. The sputter ion source naturally produces part of the anions in auto-dissociating metastable states. The COMPACT detector was positioned such as to collect the 30-keV Co^- fragments that, in absence of residual gas collisions or other target interactions, could only be produced from the metastable ion population. The results of the experiment will be published separately.
As shown in Fig. <ref>, the instantaneous detection rate of Co^- was very high directly after ion injection, but then steeply decreased on a time-scale of milliseconds. In the experiment, the advanced FADC-based data acquisition system (cf. Sect. <ref>) was used. In contrast to the steady-current experiments, as described in Sect. <ref>, it was thus possible to observe changes in the pulse-height distribution on very short time scales, limited only by the counting statistics.
Starting with the ion injection into CSR, integration time windows were defined as shown in Fig. <ref>. For each integration window, the height distribution of the detector pulses and the average count rate were calculated. The length of the time windows increased as a function of storage time, so that each window was characterised by the approximate average count rates given in Fig. <ref>. To easily quantify the shape of the measured pulse height distributions, each was approximated by a simple double exponential decay fit function.
As visible in Fig. <ref>, at 6 K detector temperature the pulse height distribution was constant across storage time, even if the peak rate during the first millisecond after ion injection was as high as 10^5 s^-1, i.e., two orders of magnitude higher than the maximum useful count rate observed in the Co_2^- photo-detachment experiments described in Sect. <ref>. The apparent change in shape at very late storage times (distributions 6 and 7 in Fig. <ref>) is due only to the increasing contribution of detector dark counts (distribution 8) to the measured signals.
The different saturation threshold compared to the (steady-current) photo-detachment experiment from Sect. <ref> is most likely due to the fact that the short auto-fragmentation bursts were followed by extended periods of near-zero count rate, during which the MCP channels could recharge via the low bias current at 6 K, before the next ion injection would take place. For peak hit rates even larger than 10^5 s^-1, saturation effects, similar to those depicted in Fig. <ref>, were indeed observed also in the burst-type auto-fragmentation experiment. In those cases, the non-linearity in the measured count rate with respect to the true fragment production rate could not be fully eliminated by data processing, in spite of the FADC-based data acquisition system allowing for advanced pulse discrimination techniques. Also in the burst experiments, on-line measurement of the pulse height amplitudes along with the detection rate thus turned out to be a crucial tool for evaluating the reliability of the experimental data.
§ ANALYSIS
In the following we seek to quantify the experience from the first operation of the COMPACT detector at 6 K temperature. The many engineering topics related to the cryogenic environment have been discussed in detail by Spruck et al. in a previous article <cit.>. Here we focus on the performance of the instrument during the first atomic and molecular physics experiments using the storage ring CSR at lowest temperature. For a particle counting detector, two basic properties come to mind: They are the single particle detection efficiency and useful dynamic range of the count rate.
§.§ Dynamic Range
The dynamic range is defined by the intrinsic detector background on the one hand, and by the maximum particle count rate that can be reliably measured on the other hand. For the chevron MCP set operating at the lowest CSR temperature of 6 K, the experiment on Co_2^- photo-detachment from Sect. <ref> can be considered a benchmark: from the dark event rate of 0.3 (1) s^-1 to the maximum product count rate of ∼ 1000 s^-1 that could be reliably discriminated, the dynamic range of ∼ 3×10^3 in continuous-rate measurements allows product formation from atomic, molecular or cluster processes in the CSR to be followed over up to eight 1/e-lifetimes of the reaction at hand. It is expected that local heating of the MCPs can extend the dynamic range even further, but this has not been studied yet. For burst mode operation, the Co_2^- auto-fragmentation measurement—also carried out at 6 K detector temperature—shows a significantly greater dynamic range, reaching up to ∼ 3×10^5 in the example at hand. However, that value likely depends on the time the MCP set is allowed to recharge in-between the bursts of product particles, an effect that has not been studied systematically yet.
This favourable low-temperature behaviour of the detector is believed to be a consequence of its `Daly'-type design. The detector combines a large sensitive aperture—defined by the size of the `Daly' converter electrode—with a relatively small MCP set that collects the secondary electrons ejected from that electrode. The background event rate of MCPs naturally scales with the volume of the substrate <cit.>. An EDR-MCP of the same active area as the sensitive aperture of the COMPACT detector (20 × 50 mm) can be expected to have a dark count rate that is an order of magnitude higher than the one measured in the experiments reported here. Additionally, larger (and thicker) MCPs have been measured to reach much higher electric resistance near liquid-helium temperature than the ∼ 300 GΩ found here <cit.>. Even EDR variants of large channel-plates have been found with unfavourable electric behaviour at lowest temperature, which likely leads to earlier depletion at high detection rates and thus worse high-rate acceptance <cit.>. Small MCPs are also produced on a large scale routinely, which—besides the obvious advantage of lower prices—might lead to more stable production processes and more predictable properties.
§.§ Detection Losses
Knowledge of the particle detection efficiency can be important in experiments seeking to measure absolute cross sections of ion reactions in the storage ring. In some cases the detector can be calibrated against a known process, or its efficiency can be inferred by controlled variation of the experimental conditions. However, if other parameters of the experiment are unknown, independent knowledge of the product detection efficiency can be the only way to interpret the measurements in terms of absolute numbers. Cryogenic operation of MCPs is considered out-of-specification by their manufacturers, and the detector behaviour near liquid-helium temperature is not guaranteed. Although there has been some research on the topic, open questions remain <cit.>. In this situation, a simple way to monitor the absolute detection efficiency during the experiment is important to ensure the reliability of the data taken.
In the experiments reported here, the particle detection efficiency of the COMPACT detector is limited by three effects. The first is the loss of product particles due to limited geometric acceptance of the detector in the horizontal plane (cf. Fig. <ref>). This is not discussed further here. As noted above, the narrow width of the sensitive aperture is by design. In fact significant efforts have been undertaken to realise the vertically elongated detection window. It is still wide enough to intercept daughters of electron cooled beams with 100% efficiency <cit.>. In all other cases the geometric loss ratio can be determined by a horizontal scan of the daughter beam envelope as described in Sect. <ref>.
A second limitation of the detection efficiency arises from the discrimination threshold applied to the anode signals. Pulses of amplitudes below threshold are not recorded in the counting electronics used in the experiments presented here. This is an issue that must be addressed technically. At high gain of the pre-amplifier and simultaneously low baseline noise level of the anode high-voltage line, the discrimination threshold can be very low relative to the mean pulse height. The remaining, small cut-off ratio can be easily estimated if the overall shape of the pulse height distribution is known or can be extrapolated. FADC-based data acquisition systems, like the one presented in Sect. <ref>, may not involve a fixed discrimination threshold at all, as they allow identification of particle hits using numerical pulse-shape analysis of the time-resolved anode signal. Independent external triggers (from, e.g., a pulsed laser or the storage-ring timing system) are then typically used to start and stop the FADC. Whether the FADC data can be processed in real-time, or needs to be recorded for subsequent analysis, depends on the processing speed of the computer, the particle rate, and the complexity of the chosen pulse detection algorithm.
The third and most fundamental source of detection efficiency loss lies in the stochastic nature of the electron ejection process from the converter cathode and of the detection of these secondary electrons by the MCPs. Even if the average number of electrons released per impinging ion can be quite high in some experiments, there is in fact a non-zero probability that an ion either releases no electron at all, or that none of the ejected electrons is detected by the MCPs. In these cases no anode pulse can be observed, no matter how technically advanced the readout electronics is. In the following, we denote by P_0 the likelihood for no MCP multiplication event to occur, although the converter electrode did receive an impact from a heavy particle.
§.§ Modelling the Detection Efficiency
In the case of the COMPACT detector, a value of P_0 can be derived by comparison of the anode pulses obtained for the heavy particle under study with those generated by UV photons from the LED source installed in CSR. As discussed in Sect. <ref>, anode pulses for heavy-particle impact can be assumed to result from pile-up of the MCP signals generated by the secondary electrons released from the `Daly' converter cathode. In contrast, the 245-nm photons can be assumed to release at most a single electron from the cathode material due to their low energy (5.1(1) eV). Comparison of the pulse height spectra obtained in both cases thus allows one to estimate the average number of secondary electrons contributing to the heavy-ion signals.
In the case of the experiment on stored Ar^+ shown in Fig. <ref>, comparison of the mean pulse heights obtained for 60-keV argon and UV irradiation suggests that, on average, 4–5 MCP events from secondary electrons pile up to form the Ar-induced signals. At an assumed MCP detection efficiency of 60% for the 1.2-keV electrons, this means that for each impinging heavy Ar atom an average number of γ̃≈ 7.5 secondary electrons reach the MCP surface. Based on Poisson statistics, one would expect the chance for no charge multiplication to occur in the MCP-stack after a heavy-ion impact to be as low as ∼ 1%.
The Poissonian model is however only true if the point of impact of the ions on the detector—and thus the average number γ̃ of converter electrons attracted towards the MCP surface—is fixed <cit.>. This is expected to be a good approximation in future experiments on electron-cooled atomic ions in CSR, as their product beams are characterised by very low emittances. In absence of beam cooling—as in all experiments reported here—or for strongly exothermic molecular breakup reactions, the product particles irradiate a large fraction of the sensitive aperture of the detector. In that case a possible variation of γ̃ across the sensitive detector aperture must be accounted for.
A model of the secondary-electron statistics valid for non-uniform γ̃ has been developed within the framework of discrete-dynode electron multipliers, where a similar situation occurs <cit.>. There, the number n of electrons emitted from one dynode towards a second one is described by a Pólya distribution
W_n(γ̃,b) = γ̃^n/n! · (1+b γ̃)^(-n-1/b) ∏_j=0^n-1 (1 + j b) .
γ̃ is now the mean number of secondary electrons reaching the second dynode for each impact on the first one, and b is the relative variance of γ̃. For b=0, W_n is equal to the Poisson distribution; in the special case of b=1 it assumes the shape of the exponentially decreasing geometric distribution. In our application we identify the `Daly' converter cathode with the emitting dynode, while the role of the collecting dynode is assumed by the positively biased MCP input surface.
Each of the n electrons from Eq. (<ref>) has a chance ϵ to generate a charge multiplication avalanche in the MCPs. Hence, the total number k of MCP cascades generated by a single primary ion is distributed according to
P_k(γ̃,b) = ∑_n=k^∞ (n choose k) ϵ^k (1-ϵ)^(n-k) W_n(γ̃,b) ,
which is the discrete convolution of W_n with a binomial distribution. In the following, we assume ϵ = 0.6, based on the geometric open-area ratio of the MCPs. The principal results of the analysis are largely independent of the choice of ϵ, as discussed below.
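Both distributions are straightforward to evaluate numerically. The illustrative sketch below reproduces the limiting cases discussed in the text; for large n the evaluation should be moved to log space to avoid overflow.

import numpy as np
from math import comb, factorial

def polya_W(n, gmean, b):
    """Polya distribution W_n; reduces to a Poissonian for b = 0."""
    if b == 0:
        return gmean**n * np.exp(-gmean) / factorial(n)
    prod = np.prod([1.0 + j * b for j in range(n)])
    return gmean**n / factorial(n) * (1 + b * gmean)**(-n - 1 / b) * prod

def avalanche_P(k, gmean, b, eps=0.6, n_max=100):
    """P_k: probability of k MCP avalanches per ion hit (binomial thinning)."""
    return sum(comb(n, k) * eps**k * (1 - eps)**(n - k) * polya_W(n, gmean, b)
               for n in range(k, n_max + 1))

print(1 - avalanche_P(0, 7.5, 0.0))    # Poisson case: ~0.99, as estimated above
print(1 - avalanche_P(0, 8.0, 0.85))   # Polya case, b ~ 0.85: ~0.85, cf. below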
Let f_k(h) be the distribution of pulse heights h produced by the MCP-stack upon simultaneous multiplication of (precisely) k converter electrons. If f_k is known for all k, the sum pulse-height spectrum F for heavy-particle detection can be modelled as
F(h;γ̃,b) = C ∑_k=1^∞ P_k(γ̃,b) f_k(h) ,
with C being a normalisation factor. In the case of the COMPACT set-up, the k-electron distributions f_k can be inferred from the pulse height distribution f_1, measured for irradiation of the detector by the UV photon source. As the photons never emit more than one converter electron, their MCP pulse height distribution is equal to f_1. The pulse height spectra for k-fold pile-up signals are then given by the recursive convolution formula f_k = f_1 ∗ f_k-1 (for k > 1).
With all f_k known, γ̃ and b can be obtained from a fit of Eq. (<ref>) to the data of each given CSR experiment. Via Eq. (<ref>), these parameters yield an independent experimental value for the likelihood P_0 and, hence, for the maximum possible detector efficiency due to secondary electron statistics, 1 - P_0. Analytically, the normalisation factor C from Eq. (<ref>) is equal to (1-P_0)^-1. However, due to the experimental discriminator cut-off, the measured pulse height distribution for heavy-particle impact cannot be reliably renormalised to 1 as long as the best-fit distribution F is not known. For simplicity, C is therefore treated as an independent free parameter.
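Numerically, the model spectrum is a weighted sum of repeated self-convolutions of the measured single-electron spectrum. A compact sketch, assuming a binned background-free f_1 and reusing avalanche_P from the sketch above, is given below.

import numpy as np

def model_spectrum(f1, P_k, k_max=30):
    """F(h) = C sum_k P_k(k) f_k(h), with f_k = f1 * f_{k-1} (Eq. above)."""
    f1 = np.asarray(f1, float)
    f1 = f1 / f1.sum()
    F = np.zeros(k_max * len(f1))
    fk = f1.copy()
    for k in range(1, k_max + 1):
        F[:len(fk)] += P_k(k) * fk
        fk = np.convolve(fk, f1)
    return F / F.sum()    # overall scale is the free normalisation C

# Example: exponential single-electron response, as observed for UV photons
# h = np.arange(400); f1 = np.exp(-h / 40.0)
# F = model_spectrum(f1, lambda k: avalanche_P(k, 8.0, 0.85))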
Figure <ref> illustrates the procedure: By UV-irradiation of the detector, we measure the single-electron spectrum f_1. The pile-up distributions f_k are obtained by numerical convolution. The statistical sum spectrum F from Eq. (<ref>) is then fitted to the pulse height distribution measured for heavy-ion detection, as Fig. <ref> shows on two examples.
It should be noted that the method to extract b and γ̃ from the detector pulse height distribution is not new. In fact it is the standard way to measure secondary electron yields of ions impinging onto solids <cit.>. Normally a secondary electron detector with good energy resolution is used, so that the components f_k show up as clearly resolved peaks in the measured pulse height spectrum. For the COMPACT MCPs, the pulse height spectrum f_1 for a single secondary electron (i.e. for UV irradiation of the detector) is found to be a monotonically decreasing exponential distribution. Unsurprisingly, the resolution with regard to electron multiplicity is therefore very poor. The aim of this analysis is not to derive the secondary electron yield γ̃ but to estimate the fraction of undetected ions.
The assumed MCP electron detection efficiency ϵ was kept fixed at 0.6. Due to the effect of ϵ on the mean of the binomial distribution in Eq. (<ref>), its choice correlates inversely to the fit value of γ̃. The results for the detection efficiencies are however largely independent of that choice.
In the fits of the pulse height distributions 1 and 2 from Fig. <ref>, showing the effect of detector saturation, f_1 was scaled by a factor d<1 along the h-axis, before the pile-up distributions f_k were computed (cf. Eq. <ref>). This simulates the reduced gain of the MCP-detector with respect to unsaturated behaviour. For values of d ≈ 0.4 (distribution 1 in Fig. <ref>) and d ≈ 0.6 (distribution 2), the subsequent fit procedure yields values for γ̃ and b that are compatible with the unsaturated case (distribution 3) within their statistical uncertainties.
Table <ref> summarises the data from a few selected experiments. The first three rows show the analysis of the pulse height distributions found in the Ar^+ storage experiments during cool-down of CSR (cf. Sect. <ref>), as shown in Fig. <ref>. As noted earlier, no change in the pulse heights as a function of temperature was observed for either photon or argon irradiation of the detector. Also the fit results obtained using the model from Eq. (<ref>) are consistent among all three operating temperatures. The number of secondary electrons emitted by the converter cathode is derived as γ̃∼ 8, in good agreement with the above estimate obtained by comparison of the mean pulse amplitudes of UV-photons and heavy ions. The Pólya parameter b is fitted at a large value of 0.8–0.9, which enhances the likelihood P_0 for no secondary electron to be detected by a factor of ∼ 10 compared to the Poisson-statistical case (b=0). The large value of b may indicate that a large fraction of the sensitive detector aperture was irradiated by the Ar particles—as could be expected from an intense uncooled ion beam. The expected maximum possible detection efficiency 1-P_0 is hence reduced to 86 (3) %. The signal acquisition threshold causes another ∼ 15 % loss in efficiency, determined by the fraction of the best-fit model distribution (Eq. (<ref>)) below discrimination level (cf. Fig. <ref>). The resulting total detection efficiency for Ar atoms entering the detector is thus determined to be 73 (3) % on average. Geometric loss of particles due to the narrow detector aperture—likely to have occurred in all experiments reported here—is not taken into account as it has been measured in the case of OH^- only (cf. Fig. <ref>).
At an intermediate CSR temperature, dissociative residual-gas collisions of N_2^+ were observed. Significantly different pulse height distributions were found for the charged and neutral product beams (central two rows of Tab. <ref>), as reflected by the very different best-fit values of the secondary yield γ̃. Part of the difference may be due to deflection of the charged fragments out of the storage ring plane, so that they hit a different area of the converter cathode. In addition, the neutral product beam is believed to originate not only from dissociative collisions N_2^+ + X →N^+ +N + X, but also from the competing dissociative electron transfer reaction N^+_2 + X →N + N +X^+, which leads to two neutral N atoms in the final state. Assuming a kinetic energy release on the order of 1 eV, both neutral fragments may reach the converter cathode within a time interval on the order of 10 ns. A fast multi-fragment detector dedicated to the observation of such reactions involving multiple neutral products under CSR conditions is presently being set up <cit.>. However, the electronics of the COMPACT detector is not designed to resolve such nearly-coincident double hits. In the present experiment, the two N atoms may thus appear as a single, larger anode pulse.
In the photo-detachment experiments conducted at 6 K operating temperature, care was taken to keep the average rate of detected particles ≤ 300 s^-1, in order to avoid the pulse height saturation effects observed at higher count rates (cf. Sect. <ref> and Fig. <ref>). Generally, large values of b are found also in these cases, though differences have been observed. The interplay of γ̃ and b in Eq. (<ref>) has a strong impact on the coefficient P_0, and thereby on the detection efficiency that can maximally be achieved (1-P_0). A relatively small b in conjunction with a high secondary yield can lead to very good detection efficiencies (as in the Co_2^- experiment listed in Tab. <ref>, with 1-P_0 ≈ 1). The opposite case of a small yield combined with a high value of b leads to the poorest detection efficiency, especially as the corresponding, nearly exponentially decreasing, pulse height distributions are particularly susceptible to the signal discrimination threshold (Ag_2^-).
§.§ Limits of the Efficiency Model
The above procedure provides an independent experimental value for the detection efficiency of the COMPACT set-up that can be determined in situ. Although differences in the mean pulse amplitudes are often apparent without analysis, the lack of distinctive features in the pulse height distributions renders the fit model quite sensitive to statistical and systematic effects. Electronic noise picked up by the front-end amplifier can distort the spectra to the point where a reliable evaluation is not possible. Numerically, the model is very sensitive to the experimentally determined single-electron distribution f_1. As the latter decreases steeply towards higher amplitudes, the significant part of f_1 that lies below the discriminator threshold has to be extrapolated. Due to this, we estimate that the results from Tab. <ref> are subject to a systematic uncertainty of 10–20 % in addition to the quoted statistical fit uncertainties.
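Since f_1 is close to exponential for these MCPs, the sub-threshold part can be estimated by fitting the measured region and extrapolating; a minimal sketch (the threshold handling and names are ours):

import numpy as np

def extrapolate_f1(h, counts, h_thr):
    # Fit ln(counts) = a + s*h above the discriminator threshold h_thr and
    # extrapolate the exponential model below it; measured bins are kept.
    sel = (h >= h_thr) & (counts > 0)
    s, a = np.polyfit(h[sel], np.log(counts[sel]), 1)
    f1 = np.exp(a + s * h)                 # exponential model on the full grid
    f1[h >= h_thr] = counts[h >= h_thr]    # keep the measured part
    return f1 / f1.sum()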
Finally, the analysis relies on the assumption that the MCP-signals from different secondary electrons originating from the same heavy-particle impact sum up linearly. This is a safe assumption in room-temperature operation of the MCPs. At 6 K, however, the total charge that can be extracted from the channel-plates within ∼ 10 ns might be limited, even though the individual avalanches that sum up to one anode pulse happen within different micro-channels, and even though no influence of the average particle count rate on the pulse height distribution was measured below ∼ 1000 s^-1. While we consider this option unlikely, it is difficult to disprove. Note however that, in the presence of such a lowered-gain effect in the heavy-particle data, the fit model would underestimate γ̃ and, thus, the detector efficiency, rather than overestimate them.
§ SUMMARY AND OUTLOOK
With the emerging cryogenic storage rings for slow heavy ions, a set of well-established experimental techniques can be applied to a new range of low-energy molecular and atomic physics. This requires, however, that those techniques are carefully adapted to the peculiarities of that new class of storage devices. Simple, robust, and inexpensive particle detectors are still not a generally established type of beam line instrumentation in extremely low temperature environments, although the usefulness of such devices is undisputed. Following up on our technical design paper on the movable COMPACT detector <cit.>, we have reported on the first data-taking operation of this device in the Cryogenic Storage Ring CSR <cit.>.
At a temperature of 6 K, the detector proved to operate reliably over a dynamic counting range between 3×10^3 and 3×10^5, depending on the time structure of the particle hits. For continuous irradiation, a critical average particle count rate of the order of 1000 s^-1 has been found. The response of the detector to non-continuous daughter beams, where particles arrive in bursts rather than as a steady stream, was tested, and the device was used productively in such experiments. Systematic studies of this application will follow. The available dynamic range of the detector can likely be further improved by the built-in MCP heating, a device whose operation was successfully tested, but not yet used in the experiments reported here. We have shown how a statistical model for the pulse height distributions can be used to derive independent estimates of the detection efficiency. In various experiments, the latter was found to lie in the range 0.5–1.0 even at a detector temperature of 6 K.
For well-localised hits of ions on the converter cathode, as anticipated, e.g., for future electron-cooled atomic parent ions, the observed secondary-electron statistics is expected to become near-Poissonian, resulting in smaller signal loss and easier pulse discrimination. However, low-emittance product beams, where all particles impinge onto a defined small spot on the detector, might also be more susceptible to saturation effects due to charge depletion of the MCPs at low temperature, an aspect that will need to be carefully studied. The fact that, even in this situation, the `Daly' working principle leads to a natural spread of the secondary electrons on the MCP might be another advantage of the COMPACT design.
On the basis of the favourable outcome of the first experiments, a second specimen of the COMPACT detector is being built. While its particle sensor and detection electronics are nearly identical to the existing detector, it features a translation stage with shorter travel range, fitting a dedicated vacuum chamber directly following the 6^∘-bender in the CSR beam line (cf. Fig. <ref>). This second detector will complement the existing COMPACT in future electron collision experiments. Due to the shorter distance to the bending element, the charge-to-mass resolution of the new set-up will be worse compared to its predecessor. However, its range of detectable product ions will be much larger, enabling it to detect, e.g., charged fragments from molecular breakup events with rigidities differing from that of the parent beam by more than 350 % in both directions.
Finally, variants of the COMPACT set-up optimised for uncooled beams are presently being developed to be part of the experimental equipment in upcoming CSR beam-times. These follow the same principle as the original detector, but offer a much-enlarged sensitive aperture in order to intercept high-emittance product beams without geometric loss.
§ ACKNOWLEDGEMENTS
This work would have been impossible without skilled support from the accelerator crew and mechanical workshops of the Max Planck Institute for Nuclear Physics (MPIK) which is hereby gratefully acknowledged. We thank X. Urbain for helpful discussions on the physics of MCPs. We are grateful for the financial support obtained from the Max Planck Society (MPG). The work of A.B. and K.S. was partly funded by the German Research Foundation (DFG) under contract numbers Wo 1481/2-1 and Schi 378/9-1, respectively.
§ REFERENCES
WolfDR A. Wolf, G. Gwinner, J. Linkemann, A. A. Saghiri, M. Schmitt, D. Schwalm, M. Grieser, M. Beutelspacher, T. Bartsch, C. Brandau, A. Hoffknecht, A. Müller, S. Schippers, O. Uwira, and D. W. Savin, Nucl. Instrum. Methods Phys. Res., Sect. A 441 (2000) 183.
SchippersAto S. Schippers, A. L. D. Kilcoyne, R. A. Phaneuf, and A. Müller, Contemp. Phys. 57 (2016) 215.
stoechkel K. Støchkel, J. A. Wyer, M.-B. S. Kirketerp, and S. B. Nielsen, J. Am. Soc. Mass. Spectrom. 21 (2010) 1884.
pedersen H. B. Pedersen, H. Buhr, S. Altevogt, V. Andrianarijaona, H. Kreckel, L. Lammich, N. de Ruette, E. M. Staicu-Casagrande, D. Schwalm, D. Strasser, X. Urbain, D. Zajfman, and A. Wolf, Phys. Rev. A 72 (2005) 012712.
astrid R. Stensgaard, Phys. Scr. T22 (1988) 315.
cryring K. Abrahamsson, G. Andler, L. Bagge, E. Beebe, P. Carlé, H. Danared, S. Egnell, K. Ehrnstén, M. Engström, C. J. Herrlander, J. Hilke, J. Jeansson, A. Källberg, S. Leontein, L. Liljeby, A. Nilsson, A. Paal, K.-G. Rensfelt, U. Rosengård, A. Simonsson, A. Soltan, J. Starker, M. af Ugglas, and A. Filevich, Nucl. Instrum. Methods Phys. Res., Sect. B 79 (1993) 269.
TSRIsolde M. Grieser, Yu. A. Litvinov, R. Raabe, K. Blaum, Y. Blumenfeld, P. A. Butler, F. Wenander, P. J. Woods, M. Aliotta, A. Andreyev, A. Artemyev, D. Atanasov, T. Aumann, D. Balabanski, A. Barzakh, L. Batist, A.-P. Bernardes, D. Bernhardt, J. Billowes, S. Bishop, M. Borge, I. Borzov, F. Bosch, A. J. Boston, C. Brandau, W. Catford, R. Catherall, J. Cederkäll, D. Cullen, T. Davinson, I. Dillmann, C. Dimopoulou, G. Dracoulis, Ch. E. Düllmann, P. Egelhof, A. Estrade, D. Fischer, K. Flanagan, L. Fraile, M. A. Fraser, S. J. Freeman, H. Geissel, J. Gerl, P. Greenlees, R. E. Grisenti, D. Habs, R. von Hahn, S. Hagmann, M. Hausmann, J. J. He, M. Heil, M. Huyse, D. Jenkins, A. Jokinen, B. Jonson, D. T. Joss, Y. Kadi, N. Kalantar-Nayestanaki, B. P. Kay, O. Kiselev, H.-J. Kluge, M. Kowalska, C. Kozhuharov, S. Kreim, T. Kröll, J. Kurcewicz, M. Labiche, R. C. Lemmon, M. Lestinsky, G. Lotay, X. W. Ma, M. Marta, J. Meng, D. Mücher, I. Mukha, A. Müller, A. St J. Murphy, G. Neyens, T. Nilsson, C. Nociforo, W. Nörtershäuser, R. D. Page, M. Pasini, N. Petridis, N. Pietralla, M. Pfützner, Z. Podolyák, P. Regan, M. W. Reed, R. Reifarth, P. Reiter, R. Repnow, K. Riisager, B. Rubio, M. S. Sanjari, D. W. Savin, C. Scheidenberger, S. Schippers, D. Schneider, R. Schuch, D. Schwalm, L. Schweikhard, D. Shubina, E. Siesling, H. Simon, J. Simpson, J. Smith, K. Sonnabend, M. Steck, T. Stora, T. Stöhlker, B. Sun, A. Surzhykov, F. Suzaki, O. Tarasov, S. Trotsenko, X. L. Tu, P. Van Duppen, C. Volpe, D. Voulot, P. M. Walker, E. Wildner, N. Winckler, D. F. A. Winters, A. Wolf, H. S. Xu, A. Yakushev, T. Yamaguchi, Y. J. Yuan, Y. H. Zhang, and K. Zuber, Eur. Phys. J. Special Topics 207 (2012) 1.
elisa S. P. Møller, Nucl. Instrum. Methods Phys. Res., Sect. A 394 (1997) 281.
kek T. Tanabe, K. Chida, K. Noda, and I. Watanabe, Nucl. Instrum. Methods Phys. Res., Sect. A 482 (2002) 595.
schmidtessr H. T. Schmidt, Phys. Scr. T166 (2015) 014063.
tmu S. Jinno, T. Takao, Y. Omata, A. Satou, H. Tanuma, T. Azuma, H. Shiromaru, K. Okuno, N. Kobayashi, and I. Watanabe, Nucl. Instrum. Methods Phys. Res., Sect. A 532 (2004) 477.
desiree H. T. Schmidt, R. D. Thomas, M. Gatchell, S. Rosén, P. Reinhed, P. Löfgren, L. Brännholm, M. Blom, M. Björkhage, E. Bäckström, J. D. Alexander, S. Leontein, D. Hanstorp, H. Zettergren, L. Liljeby, A. Källberg, A. Simonsson, F. Hellberg, S. Mannervik, M. Larsson, W. D. Geppert, K. G. Rensfelt, H. Danared, A. Paál, M. Masuda, P. Halldén, G. Andler, M. H. Stockett, T. Chen, G. Källersjö, J. Weimer, K. Hansen, H. Hartman, and H. Cederquist, Rev. Sci. Instrum. 84 (2013) 055115.
thomasdesiree R. D. Thomas, H. T. Schmidt, G. Andler, M. Björkhage, M. Blom, L. Brännholm, E. Bäckström, H. Danared, S. Das, N. Haag, P. Halldén, F. Hellberg, A. I. S. Holm, H. A. B. Johansson, A. Källberg, G. Källersjö, M. Larsson, S. Leontein, L. Liljeby, P. Löfgren, B. Malm, S. Mannervik, M. Masuda, D. Misra, A. Orbán, A. Paál, P. Reinhed, K.-G. Rensfelt, S. Rosén, K. Schmidt, F. Seitz, A. Simonsson, J. Weimer, H. Zettergren, and H. Cederquist, Rev. Sci. Instrum. 82 (2011) 065112.
riken Y. Nakano, W. Morimoto, T. Majima, J. Matsumoto, H. Tanuma, H. Shiromaru and T. Azuma, J. Phys.: Conf. Ser. 388 (2012) 142027.
csrpaper R. von Hahn, A. Becker, F. Berg, K. Blaum, C. Breitenfeldt, H. Fadil, F. Fellenberger, M. Froese, S. George, J. Göck, M. Grieser, F. Grussie, E. A. Guerin, O. Heber, P. Herwig, J. Karthein, C. Krantz, H. Kreckel, S. Kumar, M. Lange, F. Laux, S. Lohmann, S. Menk, C. Meyer, P. M. Mishra, O. Novotný, A. P. O'Connor, D. A. Orlov, M. L. Rappaport, R. Repnow, S. Saurabh, S. Schippers, C. D. Schröter, D. Schwalm, L. Schweikhard, T. Sieber, A. Shornikov, K. Spruck, J. Ullrich, X. Urbain, S. Vogel, P. Wilhelm, A. Wolf, and D. Zajfman, Rev. Sci. Instrum. 87 (2016) 063115.
langectf M. Lange, M. Froese, S. Menk, J. Varju, R. Bastert, K. Blaum, J. R. Crespo López-Urrutia, F. Fellenberger, M. Grieser, R. von Hahn, O. Heber, K.-U. Kühnel, F. Laux, D. A. Orlov, M. L. Rappaport, R. Repnow, C. D. Schröter, D. Schwalm, A. Shornikov, T. Sieber, Y. Toker, J. Ullrich, A. Wolf, and D. Zajfman, Rev. Sci. Instrum. 81 (2010) 055105.
backstroemPRL E. Bäckström, D. Hanstorp, O. M. Hole, M. Kaminska, R. F. Nascimento, M. Blom, M. Björkhage, A. Källberg, P. Löfgren, P. Reinhed, S. Rosén, A. Simonsson, R. D. Thomas, S. Mannervik, H. T. Schmidt, and H. Cederquist1, Phys. Rev. Lett. 114 (2015) 143003.
ZajfmanDR D. Zajfman, A. Wolf, D. Schwalm, D. A. Orlov, M. Grieser, R. von Hahn, C. P. Welsch, J. R. Crespo López-Urrutia, C. D. Schröter, X. Urbain, and J. Ullrich, J. Phys.: Conf. Ser. 4 (2005) 296.
oconnorCH A. P. O'Connor, A. Becker, K. Blaum, C. Breitenfeldt, S. George, J. Göck, M. Grieser, F. Grussie, E. A. Guerin, R. von Hahn, U. Hechtfischer, P. Herwig, J. Karthein, C. Krantz, H. Kreckel, S. Lohmann, C. Meyer, P. M. Mishra, O. Novotný, R. Repnow, S. Saurabh, D. Schwalm, K. Spruck, S. Sunil Kumar, S. Vogel, and A. Wolf, Phys. Rev. Lett. 116 (2016) 113002.
sheehan C. Sheehan, W. J. Lennard, and J. B. A. Mitchell, Meas. Sci. Technol. 11 (2000) 5.
akthar M. N. Akhtar, B. Ahmad, S. Ahmad, Nucl. Instrum. Methods Phys. Res. Sect. B 207 (2003) 333.
novotnyMMC O. Novotný, S. Allgeier, C. Enss, A. Fleischmann, L. Gamer, D. Hengstler, S. Kempf, C. Krantz, A. Pabinger, C. Pies, D. W. Savin, D. Schwalm, and A. Wolf, J. Appl. Phys. 118 (2015) 104503.
gamerMOCCA L. Gamer, D. Schulz, C. Enss, A. Fleischmann, L. Gastaldo, S. Kempf, C. Krantz, O. Novotný, D. Schwalm, and A. Wolf, J. Low Temp. Phys. 184 (2016) 839.
ohkubo M. Ohkubo, S. Shiki, M. Ukibe, S. Tomita, S. Hayakawa, Int. J. Mass Spectrom. 299 (2011) 94.
rinn1982 K. Rinn, A. Müller, H. Eichenauer, and E. Salzborn, Rev. Sci. Instrum. 53 (1982) 829.
schecker J. A. Schecker, M. M. Schauer, K. Holzscheiter, and M. H. Holzscheiter, Nucl. Instrum. Methods Phys. Res., Sect. A 320 (1992) 556.
roth P. Roth and G. W. Fraser, Nucl. Instrum. Methods Phys. Res., Sect. A 439 (2000) 134.
kuehnel K. U. Kuehnel, C. D. Schröter, and J. Ullrich, Proceedings of the 11th European Particle Accelerator Conference (2008) TUPC055.
sawicki J. A. Sawicki, Nucl. Instrum. Methods Phys. Res., Sect. B 16 (1986) 483.
spruck K. Spruck, A. Becker, F. Fellenberger, M. Grieser, R. von Hahn, V. Klinkhamer, O. Novotný, S. Schippers, S. Vogel, A. Wolf, and C. Krantz, Rev. Sci. Instrum. 86 (2015) 023303.
grieseripac M. Grieser, A. Becker, K. Blaum, S. George, R. von Hahn, C. Krantz, S. Vogel, and A. Wolf, Proceedings of the 5th International Particle Accelerator Conference (2014) THPME121.
KrantzDR C. Krantz, F. Berg, K. Blaum, F. Fellenberger, M. Froese, M. Grieser, R. von Hahn, M. Lange, F. Laux, S. Menk, R. Repnow, A. Shornikov, and A. Wolf, J. Phys.: Conf. Ser. 300 (2011) 012011.
HahnIonis M. Hahn, A. Becker, D. Bernhardt, M. Grieser, C. Krantz, M. Lestinsky, A. Müller, O. Novotný, M. S. Pindzola, R. Repnow, S. Schippers, K. Spruck, A. Wolf, and D. W. Savin, J. Phys. B: At. Mol. Opt. Phys. 49 (2016) 084006.
daly N. R. Daly, Rev. Sci. Instrum. 31 (1960) 264.
Dowell D. H. Dowell and J. F. Schmerge, Phys. Rev. ST Accel. Beams 12 (2009) 074201.
siegmund O. H. W. Siegmund, J. Vallerga, and B. Wargelin, IEEE Trans. Nucl. Sci. 35 (1988) 524.
rosen S. Rosén, H. T. Schmidt, P. Reinhed, D. Fischer, R. D. Thomas, and H. Cederquist, Rev. Sci. Instrum. 78 (2007) 113301.
prescott J. R. Prescott, Nucl. Instrum. Methods 39 (1966) 173.
dietz L. A. Dietz and J. Sheffield, Rev. Sci. Instrum. 44 (1973) 183.
lakits G. Lakits, F. Aumayr, and H. Winter, Rev. Sci. Instrum. 60 (1989) 3151.
collins L. E. Collins and P. T. Stroud, Brit. J. Appl. Phys. 18 (1967) 1121.
schackert P. Schackert, Z. Physik 197 (1966) 32.
moshammer R. Moshammer and R. Matthäus, J. Phys. Colloques 50 (1989) 111.
itoh A. Itoh, T. Majima, F. Obata, Y. Hamamoto, A. Yogo, Nucl. Instrum. Methods Phys. Res., Sect. B 193 (2002) 626.
beckerdiss A. Becker, Imaging of fragmentation products from fast molecular ion beams: paving the way for reaction studies in cryogenic environment, PhD Thesis, University of Heidelberg, 2016, pp. 95.
| Single-particle counting detectors are important instruments in many atomic and molecular physics experiments on fast-propagating ion beams <cit.>. In such experiments, an ion beam is guided through a target medium which can consist, e.g., of photons, electrons, neutral atoms, or molecules. Reactions of the projectile ions with the target particles typically lead to products of different charge-to-mass ratio. This results in the formation of daughter beams of different ion-optical rigidity compared to the parent, which can be separated from the latter by electric or magnetic analysing fields. At known intensity of the parent beam and thickness of the target, detection of the daughter particles reveals the rate coefficients of the processes involved in their production. Due to the typically low ion numbers and reaction cross-sections, the product detection needs to be done on the single-particle level.
Heavy-ion storage rings enhance such target experiments by their ability to store the projectiles for extended periods of time. Due to energetic processes in the ion source, unknown, highly-excited quantum states are often populated in atomic or molecular ions directly after production. In many cases storage of the ions enables them to reach a well-understood state population by spontaneous decay before undergoing the actual experiment. The extended storage time also allows phase-space manipulation of the ion beam, such as electron or stochastic cooling, or initial-state preparation techniques as required for laser- or collision-driven pump-probe experiments <cit.>.
For years, medium-energy magnetic ion synchrotrons have been used very successfully for these kinds of experiments, a remarkable development considering that the technology of those machines was originally aimed at nuclear physics applications <cit.>. Based on that success, a new class of heavy-ion storage rings has emerged, with designs that are optimised for experiments on atomic and molecular physics. They use purely electrostatic ion optics, matching the output energy of relatively simple electrostatic injectors that can be flexibly equipped with state-of-the-art molecular ion sources <cit.>. The most advanced set-ups use cryogenic cooling machines to reduce the temperature of their beam guiding vacuum vessels down to values near that of liquid helium <cit.>. On the one hand, this results in a vastly improved residual gas pressure compared to conventional ultra-high vacuum (UHV) set-ups, with correspondingly longer ion storage times <cit.>. On the other hand, storage in such a cold environment allows infra-red-active molecular ions to de-excite to their lowest rovibrational levels prior to starting experiments, a significant improvement over room-temperature ion-storage facilities <cit.>.
The advantages of these cryogenic ion storage rings come with technological challenges with respect to the particle detector equipment. A restriction regarding possible detection principles arises from the low energy of the product particles. Limited by available high-voltage technology, typical kinetic energies in electrostatic storage devices are of order a few keV/u or below. This rules out detection mechanisms where the counting volume of the detector is covered by significant layers of passive material, as is the case for surface-barrier semi-conductor counters <cit.> and, to a lesser extent, for scintillators <cit.>. Open micro-calorimetric detectors are a promising option for product detection at cryogenic storage rings, which is presently under investigation <cit.>. Their fabrication and operation are however extremely difficult and expensive, such that their use may be limited to selected experiments in the foreseeable future.
Suitable detectors for cryogenic storage rings, which can be widely deployed at acceptable manufacturing and operating costs, are therefore based on surface secondary-electron emission with subsequent multiplication <cit.>. This detection technique has proven itself also at particle energies below 1 keV/u <cit.>, but the low-temperature environment does come with new challenges. Besides engineering problems related to thermal expansion and embrittlement of materials, the efficiency of charge multiplication stages commonly used in low-energy ion detection is known to suffer in cold operation. Due to their semi-conductor-like properties, the electric resistance of micro-channel plates (MCPs) and single-channel electron multipliers (CEMs) rises strongly upon cooling into the cryogenic regime. The high resistance can lead to decreased gain or even complete charge depletion, especially at elevated particle hit rates. Depending on the application, MCPs have been used near ∼ 10 K with varying degrees of success <cit.>. Even less is known about the low-temperature behaviour of CEMs <cit.>.
In a recent publication, we have presented the design of a movable single-particle counting detector for the Cryogenic Storage Ring (CSR) of the Max Planck Institute for Nuclear Physics (MPIK) in Heidelberg, Germany <cit.>. Here, we report on the first operation of this device under real-life experimental conditions at the CSR.
This paper is structured as follows: In Section <ref> we briefly describe the instrument. In Section <ref> we present the most important findings from the first operation of the detection system with the storage ring CSR at its lowest temperature of ∼ 6 K. In Section <ref> we quantify and discuss the results from that series of experiments, with emphasis on the single-particle detection efficiency of the set-up. Section <ref> closes with a summary and outlook onto future developments. | null | null | null | null | null |
http://arxiv.org/abs/1701.08181v2 | 20170127195313 | Self-Organizing Systems in Planetary Physics: Harmonic Resonances of Planet and Moon Orbits | ["Markus J. Aschwanden"] | astro-ph.EP | ["astro-ph.EP"] |
^1) Lockheed Martin,
Solar and Astrophysics Laboratory,
Org. A021S, Bldg. 252, 3251 Hanover St.,
Palo Alto, CA 94304, USA;
e-mail: [email protected]
The geometric arrangement of planet and moon orbits into a
regularly spaced pattern of distances is the result of a
self-organizing system. The positive feedback mechanism
that operates a self-organizing system is accomplished by
harmonic orbit resonances, leading to long-term stable
planet and moon orbits in solar or stellar systems.
The distance pattern of planets was originally
described by the empirical Titius-Bode law, and by a generalized
version with a constant geometric progression factor (corresponding
to logarithmic spacing).
We find that the orbital periods T_i and planet distances R_i from
the Sun are not consistent with
logarithmic spacing, but rather follow the quantized scaling
(R_i+1/R_i) = (T_i+1/T_i)^2/3 = (H_i+1/H_i)^2/3, where
the harmonic ratios are given by five dominant resonances, namely
(H_i+1 : H_i) = (3:2), (5:3), (2:1), (5:2), (3:1).
We find that the orbital period ratios tend to follow the quantized harmonic
ratios in increasing order. We apply this harmonic orbit resonance model
to the planets and moons in our solar system, and to the exo-planets
of 55 Cnc and HD 10180 planetary systems.
The model allows us to predict missing planets
in each planetary system, based on the quasi-regular self-organizing
pattern of harmonic orbit resonance zones. We predict 7 (and 4) missing
exo-planets around the star 55 Cnc (and HD 10180).
The accuracy of the predicted planet and moon distances amounts
to a few percents. All analyzed systems are found to have
≈ 10 resonant zones that can be occupied with planets
(or moons) in long-term stable orbits.
§ INTRODUCTION
Johannes Kepler was the first to study the distances of
the planets to the Sun and found that the inner radii of
regular geometric bodies (Platon's polyhedra solids) approximately
match the observations, which he published in his famous
Mysterium Cosmographicum in 1596. An improved
empirical law was discovered by J.B. Titius in 1766, and
it was made prominent by Johann Elert Bode (published in 1772),
known since then as the Titius-Bode law:
R_n = 0.4 for n = 1 ; R_n = 0.3 × 2^(n-2) + 0.4 for n ≥ 2 .
Only the six planets from Mercury to Saturn were known at that time.
The asteroid belt, represented by the largest asteroid body
Ceres (discovered in 1801), part of the so-called
“missing planet” (Jenkins 1878; Napier et al. 1973; Opik 1978),
was predicted from the Titius-Bode law, as well as the outer
planets Uranus, Neptune, and Pluto, discovered in 1781, 1846, and 1930,
respectively. Historical reviews of the Titius-Bode law can be found
in Jaki (1972a,b), Ovenden (1972, 1975), Nieto (1972), Chapman (2001a,b),
and McFadden et al. (1999, 2007).
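For concreteness, Eq. (1) is easily evaluated against the observed semi-major axes (a short sketch; the observed values are standard rounded numbers, and the names are ours). It reproduces the mismatches discussed in Section 3, e.g. the factor-two over-prediction for Pluto:

planets = ["Mercury", "Venus", "Earth", "Mars", "Ceres", "Jupiter",
           "Saturn", "Uranus", "Neptune", "Pluto"]
observed = [0.387, 0.723, 1.000, 1.524, 2.77, 5.203,
            9.537, 19.19, 30.07, 39.48]          # semi-major axes in AU

def titius_bode(n):
    # Titius-Bode law of Eq. (1), with the ad hoc value 0.4 AU for Mercury
    return 0.4 if n == 1 else 0.3 * 2 ** (n - 2) + 0.4

for n, (name, r_obs) in enumerate(zip(planets, observed), start=1):
    r_tb = titius_bode(n)
    print(f"{name:8s} n={n:2d}  R_TB={r_tb:6.2f} AU  R_obs={r_obs:6.2f} AU  "
          f"ratio={r_tb / r_obs:5.2f}")          # Pluto: 77.20 vs 39.48, ratio 1.96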
Noting early on that the original Titius-Bode law breaks
down for the most extremal planets (Mercury at the inner side,
and Neptune and Pluto at the outer side),
numerous modifications were proposed: such as a 4-parameter
polynomial (Blagg 1913; Brodetsky 1914); the Schrödinger-Bohr
atomic model with a scaling of R_n ∝ n(n+1), where
the quantum-mechanical number n is substituted by the planet
number (Wylie 1931; Louise 1982; Scardigli 2007a,b);
a geometric progression by a constant factor
(Blagg 1913; Nieto 1970; Dermott 1968, 1973; Armellini 1921;
Munini and Armellini 1978; Badolati 1982; Rawal 1986, 1989;
see compilation in Table 1); fitting an exponential distance law
(Pletser 1986, 1988);
the introduction of additional planets (Basano and Hughes 1979),
applying a symmetry correction to the Jupiter-Sun system
(Ragnarsson 1995); tests of random statistics
(Dole 1970; Lecar 1973; Dworak and Kopacz 1997;
Hayes and Tremaine 1998; Lynch 2003; Neslusan 2004; Cresson 2011;
Pletser 2017); self-organization of atomic patterns
(Prisniakov 2001), standing waves in the solar system formation
(Smirnov 2015), or the Four Poisson-Laplace theory of gravitation
(Nyambuya 2015).
Also the significance of the Titius-Bode law for predicting
the orbit radii of moons around Jupiter or Saturn was recognized
early on (Blagg 1913; Brodetsky 1914; Wylie 1931; Miller 1938a,b;
Todd 1938; Cutteridge 1962; Fairall 1963; Dermott 1968; Nieto 1970;
Rawal 1978; Hu and Chen 1987), or the prediction of a
trans-Neptunian planet “Eris” (Ortiz et al. 2007;
Flores-Gutierrez and Garcia-Guerra 2011; Gomes et al. 2016),
while more recent usage of the Titius-Bode law is made to
predict the distances of exo-planets to their central star
(Cuntz 2012; Bovaird and Lineweaver 2013, Bovaird et al. 2015;
Poveda and Lara 2008;
Lovis et al. 2011; Qian et al. 2011; Huang and Bakos 2014),
or a planetary system around a pulsar (Bisnovatyi-Kogan 1993).
Physical interpretations of the Titius-Bode Law involve the
accumulation of planetesimals, rather than the creation of
enormous proto-planets and proto-satellites (Dai 1975, 1978).
N-body (Monte-Carlo-type) computer simulations of the
formation of planetary systems were performed, which could
reproduce the regular orbital spacings of the Titius-Bode law
to some extent (Dole 1970; Lecar 1973; Isaacman and Sagan 1977;
Prentice 1977; Estberg and Sheehan 1994), or not (Cameron 1973).
Some theories concerning the Titius-Bode law
involve orbital resonances in planetary system formation,
starting with an early-formed Jupiter which produces a runaway
growth of planetary embryos by a cascade of harmonic resonances
between their orbits (e.g., Goldreich 1965; Dermott 1968;
Tobett et al. 1982; Patterson 1987; Filippov 1991).
Alternative models involve the self-gravitational instability
in very thin Keplerian disks (Ruediger and Tschaepe 1988; Rica 1995),
the principle of least action interaction (Ovenden 1972;
Patton 1988), or scale-invariance of the disk that produces
planets (Graner and Dubrulle 1994; Dubrulle and Graner 1994).
Analytical models of the Titius-Bode law
have been developed in terms of hydrodynamics in thin disks
that form rings (Nowotny 1979; Hu and Chen 1987),
periodic functions with Tschebischeff polynomials (Dobo 1981),
power series expansion (Bass and Popolo 2005), and
the dependence of the regularity parameter on the central
body mass (Georgiev 2016).
The previous summary reflects the fact that we still
do not have a physical model that explains the empirical
Titius-Bode law, nor do we have an established
quantitative physical model that predicts the exact
geometric pattern of the planet distances to the central star,
which could be used for searches of exo-solar planets or
for missing moons around planets. In this Paper we investigate a
physical model that quantitatively explains the distances of the
planets from the Sun, based on the most relevant harmonic
resonances in planet orbits, which provides a more accurate
prediction of planet distances than the empirical Titius-Bode law,
or its generalized version with a constant geometric progression
factor. This model appears to be universally applicable, to the
planets of our solar system, planetary moon systems, Saturn-like
ring systems, and stellar exo-planetary systems.
A novel approach of this study is the interpretation of
harmonic orbit resonances in terms of a self-organization system
(not to be confused with self-organized criticality systems,
Bak et al. 1987; Aschwanden et al. 2016).
The principle of self-organization is a mechanism that creates
spontaneous order out of initial chaos, in contrast to random
processes that are governed by entropy. A self-organizing
mechanism is spontaneously triggered by random fluctuations,
is then amplified by a positive feedback mechanism, and produces
an ordered structure without any need of an external control agent.
The manifestation of a self-organizing mechanism is often a
regular geometric pattern with a quasi-periodic structure in space,
see various examples in Fig. 1. The underlying physics can involve
non-equilibrium processes, magneto-convection, plasma turbulence,
superconductivity, phase transitions, or chemical reactions.
The concept of self-organization has been applied to
solid state physics and material science (Müller and Parisi 2015),
laboratory plasma physics (Yamada 2007, 2010; Zweibel and Yamada 2009);
chemistry (Lehn 2002), sociology (Leydesdorff 1993),
cybernetics and learning algorithms (Kohonen 1984; Geach 2012),
or biology (Camazine et al. 2001).
In astrophysics, self-organization has been applied to
galaxy and star formation (Bodifee 1986; Cen 2014),
astrophysical shocks (Malkov et al. 2000),
accretion discs (Kunz and Lesur 2013),
magnetic reconnection (Yamada 2007, 2010; Zweibel and Yamada 2009);
turbulence (Hasegawa 1985),
magneto-hydrodynamics (Horiuchi and Sato 1985);
planetary atmosphere physics (Marcus 1993);
magnetospheric physics (Valdivia et al. 2003; Yoshida et al. 2010),
ionospheric physics (Leyser 2001),
solar magneto-convection (Krishan 1991; Kitiashvili et al. 2010),
and solar corona physics (Georgoulis 2005; Uzdensky 2007).
Here we apply the concept of self-organization to the solar system,
planetary moon systems, and exo-planet systems, based on the
physical mechanism of harmonic orbit resonances.
The plan of the paper is an analytical derivation of the
harmonic orbit resonance model (Section 2), an application to
observed data of our solar system planets, the moon systems of
Jupiter, Saturn, Uranus, and Neptune, and two exo-planet systems
(Section 3), a discussion in the context of previous work (Section 4),
and final conclusions (Section 5).
§ THEORY
§.§ The Titius-Bode Law
Kepler's third law can directly be derived from the equivalence
of the kinetic energy of a planet, E_kin=(1/2) m_P v^2, with
the gravitational potential energy, E_pot=Γ M_⊙ m_P / R,
which yields the scaling between the mean planet velocity v and
the distance R of the planet from the Sun,
v ∝ R^-1/2 ,
and by using the relationship of the mean velocity, v=2 π R / T,
yields
R ∝ T^2/3 ,
which is the familiar third Kepler law, where Γ is the
universal gravitational constant, m_P is the planet mass,
M_⊙ the solar mass, and T is the time period of
a planet orbit.
The empirical Titius-Bode law (Eq. 1) tells us that there is a
regular spacing between the planet distances R and the orbital
periods T, which predicts a distance ratio of ≈ 2
for subsequent planets (n+1) and (n), in the asymptotic limit
of large distances, R ≫ 0.4 AU,
R_n+1 / R_n = (0.3 × 2^(n-1) + 0.4) / (0.3 × 2^(n-2) + 0.4) ≈ 2 .
According to Kepler's third law (Eq. 3), this would imply an
orbital period ratio of
T_n+1 / T_n = (R_n+1 / R_n)^3/2 ≈ 2^3/2 ≈ 2.83 .
Thus, the Titius-Bode law predicts a non-harmonic ratio for the
orbital periods, which is in contrast to celestial mechanics models
with harmonic resonances (Peale 1976). In the following we will also
see that the assumption of a logarithmic spacing in planet distances is
incorrect, which explains the failure of the original
Titius-Bode law for the most extremal planets in our solar system
(Mercury, Neptune, Pluto).
§.§ The Generalized Titius-Bode Law
A number of authors modified the Titius-Bode law in terms of a
geometric progression with a constant factor Q between subsequent
planet distances, which was called the generalized Titius-Bode law,
R_n+1 / R_n = Q ,
which reads as
R_n = R_3 × Q^(n-3) , n = 1, 2, 3, ... ,
if the third planet (n=3), our Earth, is used as the reference
distance R_3=1 astronomical unit (AU). If we apply Kepler's third
law again (Eq. 5), we find an orbital period ratio q that is
related to the distance ratio Q by q=Q^3/2,
q = T_n+1 / T_n = (R_n+1 / R_n)^3/2 = Q^3/2 .
Applying this scaling law to the empirical factors Q found by
various authors, we find distance ratios in the range of
Q=1.26-2.00 (Table 1, column 1) for geometric progression factors,
and q=1.41-2.82 for orbital period ratios (Table 1, column 2).
None of those time period ratios matches a low
harmonic ratio q, such as (3:2)=1.5, (2:1)=2, or (3:1)=3.
To our knowledge, none of the past studies found a relationship
between the empirical Titius-Bode law and harmonic periods,
as would be expected from the viewpoint of harmonic
resonance interactions in celestial mechanics theory,
as applied to orbital resonances in the solar system
(Peale 1976), or to the formation of the Cassini division
in Saturn's ring system (Goldreich and Tremaine 1978; Lissauer
and Cuzzi 1982). Moreover, the generalized Titius-Bode law
assumes logarithmically
spaced planet distances, quantified with the constant geometric
progression factor Q (Eq. 6), which turns out to be incorrect
for individual planets, but can still be useful as a simple
strategy to estimate the location of missing moons and exo-planets
(Bovaird and Lineweaver 2013; Bovaird et al. 2015).
§.§ Orbital Resonances
Orbital resonances tend to stabilize long-lived orbital
systems, such as in our solar system or in planetary moon systems.
Computer simulations of planetary systems have demonstrated
that injection of planets into circular orbits tend to produce
dynamically unstable orbits, unless their orbital periods settle
into harmonic ratios, also called commensurabilities (for reviews,
see, e.g., Peale 1976; McFadden 2007). For instance, the first
three Galilean satellites of Jupiter (Io, Europa, Ganymede)
exhibit a resonance (known already to Laplace 1829),
ν_1 - 3 ν_2 + 2 ν_3 ≈ 0 ,
where the frequencies ν_i=1/T_i correspond to the reciprocal
orbital periods T_i, with T_1=1.769 days for Io, T_2=3.551
days for Europa, and T_3=7.155 days for Ganymede), fullfilling
the resonance condition (Eq. 9) with an accuracy of order
≈ 10^-5, and using more accurate orbital periods
even to an accuracy of ≈ 10^-9 (Peale 1976).
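This commensurability can be verified directly from the quoted periods (a one-line check; the residual is limited here by the rounded input values):

T_io, T_europa, T_ganymede = 1.769, 3.551, 7.155   # orbital periods in days
residual = 1.0 / T_io - 3.0 / T_europa + 2.0 / T_ganymede
print(f"nu_1 - 3 nu_2 + 2 nu_3 = {residual:.1e} per day")   # ~ -2e-5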
Our goal is to predict the planet distances R from the Sun
based on their most likely harmonic orbital resonances.
Some two-body resonances of solar planets are mentioned in
the literature, such as the resonances (5:2) for the
Jupiter-Saturn system, (2:1) for the Uranus-Neptune system,
(3:1) for the Saturn-Uranus system, and (3:2) for the
Neptune-Pluto system (e.g., see review of Peale 1976).
If we find the most likely resonance between two neighbored
planet orbits with periods T_i and T_i+1, we can apply
Kepler's third law R ∝ T^2/3 to predict the
relative distances R_i and R_i+1 of the planets
from the Sun, which allows us also to test the Titius-Bode law
as well as the generalized Titius-Bode law.
Orbital resonances in the solar system are all found for small
numbers of integers, say for harmonic numbers in the range of
H=1 to H=5 (e.g., see Table 1 in Peale 1976). If we consider
all possible resonances in this number range, we have nine
different harmonic ratios, which includes (H_i : H_i+1) =
(5:4), (4:3), (3:2), (5:3), (2:1), (5:2), (3:1), (4:1), (5:1),
sorted by increasing ratios q=(H_i+1/H_i), as shown in Fig. 2a.
The harmonic ratios vary in the range of q=[1.25,...,5] (Fig. 2a).
The related planet distance ratios Q can be obtained from
Kepler's third law, which produces ratios of Q=q^2/3
(Eq. 8), which yields a range of Q=[1.16, ..., 2.92]
(Fig. 2b). This defines possible distance ratios
Q=R_i+1/R_i between neighbored planets varying by a factor
of three, which is clearly not consistent with a single constant,
as assumed in the generalized Titius-Bode law.
The frequency of strongest gravitational interaction between
two neighbored planets is given by the time interval t_conj
between two subsequent planet conjunctions, which is defined by
the orbital periods T_1 and T_2 as
1 / t_conj = 1 / T_1 - 1 / T_2 , for T_2 > T_1 .
We see that the conjunction time t_conj approaches infinity
in the case of two orbits in close proximity (T_2 → T_1),
while it becomes largest for T_2 ≫ T_1. The conjunction time
is plotted as a function of the harmonic ratios in Fig. 2c, which
clearly shows that the conjunction time decreases (Fig. 2c) with
increasing harmonic ratios (Fig. 2a). Since the gravitational
force decreases with the square of the distance Δ R = (R_2-R_1),
i.e., F_grav∝ m_1 m_2/Δ R^2, neighbored planet pairs
matter more for gravitational interactions, such as for stabilizing
resonant orbits, than remote planets. On the other side,
if the planets are too closely spaced, they have similar orbital
periods and the conjunction time increases, which lowers the chance
for gravitational stabilization interactions. So, there is a trade-off
between these two competing effects that determines which relative
distance is
at optimum for maintaining stable long-lived orbits. We will see
in the following that the “sweet spot” occurs for harmonic ratios
between q=3/2=1.5 and q=3/1=3.0, a range that includes only
five harmonic ratios, namely (3:2), (5:3), (2:1), (5:2), (3:1),
which are marked with hatched areas in Fig. 2.
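In units of the inner orbital period, Eq. (10) gives t_conj/T_1 = q/(q-1) for a period ratio q = T_2/T_1, so the trade-off can be tabulated for the nine candidate resonances (a sketch; names are ours):

from fractions import Fraction

ratios = [(5, 4), (4, 3), (3, 2), (5, 3), (2, 1), (5, 2), (3, 1), (4, 1), (5, 1)]
for h2, h1 in ratios:
    q = Fraction(h2, h1)                  # orbital period ratio T_2/T_1
    t_conj = q / (q - 1)                  # conjunction time in units of T_1
    Q = float(q) ** (2.0 / 3.0)           # distance ratio from Kepler's third law
    print(f"({h2}:{h1})  q={float(q):4.2f}  Q={Q:4.2f}  t_conj={float(t_conj):4.2f} T_1")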
§.§ The Harmonic Orbit Resonance Model
We investigate now a quantitative model that can fit the
distances of the planets from the Sun, which we call the
harmonic orbit resonance model.
The basic assumption in our model is that a two-body resonance
exists between two neighbored planets in stable long-term orbits,
which can be defined by the resonance condition (similar to Eq. 9),
(H_i ν_i - H_i+1 ν_i+1) ∝ ω_i,i+1 ,
where H_i and H_i+1 are the (small) integer numbers of a
harmonic ratio q_i,i+1=(H_i+t/H_i), and ν_1 and ν_2
are the frequencies of the orbital periods, ν_i=1/T_i,
which can be expressed as,
1 - (H_i+1 / H_i) (T_i / T_i+1) = ω_i,i+1 ,
where ω_i,i+1 is the residual that remains from
unaccounted resonances from possible third or more bodies
involved in the resonance. The normalization by the factor
H_i ν_i in Eq. (11) serves to make the
residual values comparable between different planet pairs.
In order to find the harmonic ratios H_i+1 : H_i
that fulfill the resonance condition (Eq. 12), we have
to insert the orbital time periods T_i, T_i+1
into Eq. (12) and find the best-fitting harmonic ratios
that yield a minimum in the absolute value of the
resulting residual |ω_i,i+1|.
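A minimal implementation of this minimization (names are ours; the example periods are standard values) recovers, e.g., the well-known (5:2) Jupiter-Saturn resonance:

RATIOS = [(5, 4), (4, 3), (3, 2), (5, 3), (2, 1), (5, 2), (3, 1), (4, 1), (5, 1)]

def best_resonance(T_in, T_out):
    # Minimize |omega| = |1 - (H_i+1/H_i)(T_i/T_i+1)| of Eq. (12) over the
    # nine candidate harmonic ratios (H_i+1 : H_i).
    def omega(r):
        h2, h1 = r
        return abs(1.0 - (h2 / h1) * (T_in / T_out))
    best = min(RATIOS, key=omega)
    return best, omega(best)

(h2, h1), w = best_resonance(11.86, 29.46)       # Jupiter and Saturn, in years
print(f"best resonance ({h2}:{h1}), |omega| = {w:.4f}")   # -> (5:2), ~0.006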
This procedure is illustrated in Fig. 3 for the 9 neighbored
planet pair systems, where we included all 9 cases
of different harmonic ratios (H_i : H_i+1) =
(5:4), (4:3), (3:2), (5:3), (2:1), (5:2), (3:1), (4:1), (5:1),
sorted in rank number on the x-axis, while the residual value
is plotted on the y-axis. We see (in Fig. 3) that we find
solutions with a residual value in the range of
ω_i,i+1≈ 0.005-0.06 in each case.
The result for the 9 planet pairs shown in Fig. 3
confirms that the best-fit resonances are confined to
the small range of 5 cases between q=3/2=1.5 and q=3/1=3.0,
namely (3:2), (5:3), (2:1), (5:2), (3:1), as marked in Fig. 2.
Thus, the result of the harmonic orbit resonance model
for our solar system (based on the smallest two-body residuals
ω_min, Fig. 3), is the following set of harmonic orbit resonances:
(3:2) for Neptune-Pluto,
(5:3) for Venus-Earth,
(5:2) for Mercury-Venus, Mars-Ceres, Ceres-Jupiter, Jupiter-Saturn,
(2:1) for Earth-Mars, Uranus-Neptune, and
(3:1) for Saturn-Uranus (Fig. 3), which agree with all previously
cited results (e.g., see review of Peale 1976).
We investigated also the role of Jupiter, the most massive planet,
in the three-body resonance condition (by expanding Eq. 12),
1 - (H_i+1 / H_i) (T_i / T_i+1) - (H_Jup / H_i) (T_i / T_Jup) = ω_i,i+1 ,
but found identical results (yielding H_Jup=0), except for the
three-body configuration of Ceres-Mars-Jupiter, where an optimum
harmonic ratio of (2:1) was found, instead of (5:3), for two-body
systems. From this we conclude that the two-body interactions
of neighbored planet-planet systems are more important in the
resonant stabilization of orbits than the influence of the
largest giant planet (Jupiter), except for planet-asteroid pairs.
Once we know the harmonic ratio for each pair of neighbored
planets, we can apply Kepler's third law to the orbital periods
T_i and predict the distances R_i of the planets,
(R_i+1 / R_i) = (T_i+1 / T_i)^2/3 = (H_i+1 / H_i)^2/3 ,
where H_i and H_i+1 are small integer numbers (from 1 to 5)
that define the optimized harmonic orbit resonance ratios.
Specifically, the five allowed harmonic time period ratios allow
only the values q_i=H_i+1/H_i = 1.5, 1.667, 2.0, 2.5, and 3.0.
Applying Kepler's law, this selection of harmonic time periods
yields the 5 discrete distance ratios Q_i=q_i^(2/3) =
[1.31, 1.40, 1.59, 1.84, 2.08]. The mid-range values
of these predicted ratios are < q > = 2.25 ± 0.75,
< Q > = 1.70 ± 0.4. In the following we will
also refer to the extremal values, [q_min, q_max]=[1.5, 3.0] and
[Q_min, Q_max]=[1.31, 2.08].
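The corresponding mapping of the five admitted period ratios onto distance ratios is a one-liner (sketch):

harmonics = [(3, 2), (5, 3), (2, 1), (5, 2), (3, 1)]
Q = [(h2 / h1) ** (2.0 / 3.0) for h2, h1 in harmonics]
print([f"{Qi:.3f}" for Qi in Q])   # -> ['1.310', '1.406', '1.587', '1.842', '2.080']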
This physical model is distinctly different from the empirical
Titius-Bode law (Eq. 1), which assumes a constant value
q_i=2^3/2=2.83 (Eq. 4), in the limit of R ≫ 0.4 AU,
or from its generalized form with an unquantified constant
Q=q^2/3 (Eq. 6). What is common to all three
models is that the planet distances can be defined
in an iterative way, e.g., (R_i+1/R_i).
However, both the original and the generalized Titius-Bode
law are empirical relationships, rather than based on a
physical model. Moreover, both the original and the
generalized Titius-Bode relationships assume a
logarithmic spacing of planet distances with a constant
geometric progression factor Q, while the harmonic orbit
resonance model predicts 5 quantized values for the
planet distance ratios Q_i.
§.§ Fitting the Geometric Progression Factor
So far we discussed three different models to describe the
distance pattern of planets to the Sun, which we quantified
with the geometric progression factor Q_i, or time period
progression factor q_i=Q_i^3/2, namely (1)
q_i ≈ 2^3/2 for the Titius-Bode law, (2) q_i=const for
the generalized Titius-Bode law, and (3) the five quantized
values q_i=(H_i+1/H_i) of the five most dominant harmonic
ratios (3:2), (5:3), (2:1), (5:2), (3:1) for the harmonic orbit resonance
model. Although we narrowed down the possible harmonic ratios
to five values, there is no theory that predicts in what order
these five values are distributed for a given number of planets
or moons. Among the many possible permutations (e.g.,
5^10≈ 10^7 for 10 planets), we make a model with
the simplest choice of including a first-order term Δ q,
besides the zero-order constant q_i,
(T_i+1 / T_i) = q_i = q_1 + (i-1) Δ q ,
i=1,...,n_p ,
which simply represents a linear gradient of the time period
progression factor q_i for each planet. The corresponding geometric
progression factor is according to Kepler's law (Eq. 3),
(R_i+1 / R_i) = Q_i = [ q_1 + (i-1) Δ q ]^2/3 ,
i=1,...,n_p .
For a given set of observations with time periods
T_i, i=1,...,n_p, we can then fit the model of Eq. (15) by
minimizing the residuals |T_i^model/T_i^obs-1|,
in order to determine the gradient Δ q.
If the geometric progression factor is a constant, as assumed in the
generalized Titius-Bode law, this term will vanish, i.e., Δ q=0,
while it will be non-zero for any other model.
We anticipate that the term Δ q will be positive,
because a negative value would reverse the planet distances
for high planet numbers. If the geometric progression factor
is monotonically increasing with the planet number, we expect
the lowest admitted harmonic value of q_1=(3/2)=1.5 for the
first planet, and the highest admitted harmonic value of
q_n=(3/1)=3.0 for the outermost planet n.
We will describe the results
of the data fitting in Section 3. Once we determined the
functional form of the progression factor q_i, we can then
easily find missing planets or moons based on the theoretical
progression factors predicted by the model of Eq. (16).
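A possible implementation of this fit (a sketch under the assumption that the innermost observed period anchors the sequence; names and the optimizer choice are ours):

import numpy as np
from scipy.optimize import minimize

def model_periods(T1, q1, dq, n_p):
    # Iterative orbital periods T_i+1 = q_i * T_i with q_i = q1 + (i-1)*dq,
    # following Eq. (15).
    T = [T1]
    for i in range(1, n_p):
        T.append(T[-1] * (q1 + (i - 1) * dq))
    return np.array(T)

def fit_progression(T_obs):
    # Best-fit (q1, dq) minimizing sum_i |T_i^model / T_i^obs - 1| (Sect. 2.5).
    T_obs = np.asarray(T_obs, dtype=float)
    def cost(p):
        T_mod = model_periods(T_obs[0], p[0], p[1], len(T_obs))
        return np.sum(np.abs(T_mod / T_obs - 1.0))
    return minimize(cost, x0=[1.5, 0.2], method="Nelder-Mead").x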
§.§ Self-Organization of Planet Distances
We interpret now the evolution of the most stable planet
orbits as a self-organizing process, which produces a regular
geometric pattern that we characterize with the geometric
(Q_i) or temporal progression factor q_i. A constant
factor Q_i corresponds to a strictly logarithmic spacing,
because the planet distance increases by a constant factor
for each iterative planet number. The previously described
steps of the theory concern mostly the calculation of the
specific geometric pattern of the planet distances. Let us
justify now the interpretation in terms of a self-organizing
system.
Self-organizing systems create a spontaneous order, where
the overall order arises from local interactions between the
parts of the initially disordered system. In the case of our
solar system, the initially disordered state corresponds to
the state of the solar system formation by self-gravity,
where the local molecular cloud
condenses into individual planets that form our solar system.
The self-organization process is spontaneous and does not need
control by any external agent. In our solar system, it is the
many gravitational disturbances that interact between all possible
orbits, and finally settle (over billions of years) into the
most stable orbits that result from harmonic orbit resonances,
as observed in our solar system (Peale 1976). A self-organizing
system is triggered by random fluctuations, and then amplified
by a positive feedback. The positive feedback in the evolution
of planet orbits is given by the stabilizing gravitational
interactions in resonant orbits, while a negative feedback
would occur when a gravitational disturbance pulls a planet
out of its orbit, or during a migration phase of planets,
where marginally stable orbits can be disrupted.
A self-organizing system is not controlled from outside, but
rather from all interacting interior parts. Thus, a 10-planet
system results from the evolution of 10 × 10 =100 two-body
interactions, which obviously lead to a self-organized
equilibrium, unless large exterior disturbances occur (e.g., a
passing star or a migrating Jupiter), or if there is not
sufficient critical mass to condense to a full planet in the
initial phase. The asteroids represent such
an example of incomplete condensation, prevented by the gravitational
tidal forces of the nearby Jupiter. Finally, a self-organizing
system is robust and survives many small disturbances and can
self-repair substantial perturbations. Thus, a solar system,
or a moon system of a planet, appears to fulfill all of these
general properties of a self-organizing system.
§ OBSERVATIONS AND DATA ANALYSIS
§.§ The Planets in our Solar System
In Fig. 4 we show the distances R of the planets to the Sun
in our solar system, as predicted with the Titius-Bode law
(Fig. 4a), juxtaposed to the so-called generalized Titius-Bode
law (Fig. 4b), and the harmonic orbit resonance model
(Fig. 4c). Although the Titius-Bode law fits the observed
planet distances very well from Venus (n=2) to Uranus (n=8),
it breaks down at the most extremal ranges. For Mercury (n=1),
it would predict a distance of R_1=0.55 rather than the
observed value of R_1=0.387, and thus it had been set ad hoc
to a value of R_1=0.4 in the Titius-Bode law (Eq. 1). For Neptune
(n=9) and Pluto (n=10) the largest deviations occur.
For Pluto, a value of R_10=77.2 AU is predicted, while the
observed value is R_10=39.48 AU, which is an over-prediction
by almost a factor of two. In the overall, the Titius-Bode law
agrees with the observations within a mean and standard deviation
of R_pred/R_obs=1.18 ± 0.31 (Fig. 4a).
The generalized Titius-Bode law (Fig. 4b) shows a better overall
agreement of R_pred/R=0.95±0.13, which is a factor
of 2.4 smaller standard deviation than the Titius-Bode law.
We fitted the data with a constant progression factor
Q=R_n+1/R_n and found a best-fit value of Q=1.72.
Some improvement over the original Titius-Bode law is that
there is no excessive mismatch for the nearest planet
(Mercury, n=1) or the outermost planets (Neptune n=9
and Pluto n=10), although the Titius-Bode law fits somewhat
better for the mid-range planets (from Venus to Uranus).
A significantly better agreement between the observed planet
distances R and model-predicted values R_pred is achieved by
using the harmonic ratios as determined from the resonance
condition for each planet (Eq. 14 and Fig. 3), yielding an
accuracy of R_pred/R=1.00±0.04 (Fig. 4c),
which is a factor of 8 better than the original Titius-Bode
law, and a factor of 3 better than the generalized Titius-Bode
law. The distance values predicted by the different discussed
models are compiled for all 10 planets in Table 2.
Note that a better agreement between the model and the data
is achieved with the quantized harmonic ratios q_i, rather
than using the logarithmic spacing assumed in the generalized
Titius-Bode law.
Is there an ordering scheme of the harmonic ratios q_i
in the sequence of planets i=1,...,n? From the harmonic ratios
displayed in Fig. 4c and Table 2 it becomes clear that there is
a tendency that the harmonic ratios q_i increase with the
ordering number i of a planet, with two exceptions out of the
10 planets. The first exception is the interval between Mercury
and Venus, where an additional planet can be inserted, while
the second exception is the planet Neptune, which can be
eliminated and then produces a pattern of monotonically increasing
intervals. We apply this modification in Fig. 5 and fit then
our theoretical model with a linearly increasing progression
factor (Eq. 15, 16). We see that a linearly increasing
progression factor yields a best fit with an accuracy of
R_pred/R_obs=0.96±0.04, or 4% (Fig. 5a).
The best fit yields
a linear increment of Δ q=0.205 and a harmonic range of
q=1.40-3.24 (Fig. 5b), close to the theoretically predicted range
of q=1.5-3.0 for the allowed five dominant harmonic ratios.
Thus we can conclude that the self-organized pattern is
consistent with a linearly increasing progression factor,
at least for 8 out of the 10 planets.
It is interesting to speculate
on the reason for the two mismatches. The observed harmonic
ratio of Neptune-Pluto (3:2) and Uranus-Neptune (2:1) yields
a ratio of (3:2) × (2:1) = (3:1), which perfectly fits
the theoretical model and is one of the allowed harmonic ratios.
Therefore, Neptune occupies a stable orbit between Uranus
and Pluto, which does not match the primary progressive geometric
pattern, but fits a secondary harmonic pattern.
Neptune might have joined the
solar system later, or survived on a stable, interleaved harmonic
orbit. It also might have to do with the crossing orbits of
Neptune and Pluto.
The second exception is a missing planet between
Mercury and Venus, based on the regular pattern of a linearly
increasing progressive ratio q (Eq. 15, 16). The observed
harmonic ratio is (5:2) between Mercury and Venus, which
can be subdivided into two harmonic ratios
(5:3) × (3:2) = (5:2) to match our theoretical pattern
(Eq. 15, 16) of a monotonically increasing harmonic ratio.
Consequently, a planet orbit between Mercury and Venus is
expected in order to have a regular spacing, which could
have been occupied by an earlier existing planet that was
pulled out later, or this predicted harmonic resonance zone
was never filled with a planet.
What are the predictions for a hypothetical planet outside
Pluto? The Titius-Bode law is unable to make a prediction,
because it over-predicts the distance of Pluto already by
a factor of 2, and thus an even larger uncertainty would be
expected for a trans-Plutonian planet. The
harmonic orbit resonance model, using a geometric
mean extrapolation method, predicts an orbital period
of T_n+1≈ T_n^2/T_n-1=975 yrs and a distance of
R_n+1=R_n (T_n+1/T_n)^2/3=80 AU for the next
trans-Plutonian planet.
A known object in proximity is the Kuiper belt
which extends from Neptune (at 30 AU) out to approximately
50 AU from the Sun, which overlaps with Pluto, but not with
the predicted distance of a trans-Plutonian planet.
Another nearby object is Eris, the most massive and second-largest
dwarf planet known in our solar system (Brown et al. 2005).
Eris has a highly eccentric orbit with a semi-major axis of
68 AU, so it is close to our prediction of R_n+1=80 AU.
However, since there are many more dwarf planets known at
trans-Neptunian distances, they could all be part of a major
ring structure, like the asteroids. In addition, the
regular pattern predicted by our harmonic orbit resonance model
would require a harmonic ratio larger than q=3 for a planet
outside Pluto (Fig. 5b), in excess of the allowed five harmonic
ratios, which is an additional argument that trans-Plutonian
planets are not expected to have a stable orbit.
§.§ The Moons of Jupiter
A recent compilation of planetary satellites lists
67 moons for Jupiter (http://www.windows2universe.org/our_solar_system.moons_table.html). Only 7 out of the
67 Jupiter moons have a size of D>100 km, namely
Amalthea, Thebe, Io, Europa, Ganymede, Callisto,
and Himalia. The smaller objects with
1 < D < 100 km often occur in clusters in the
distance distribution, which may be fragments
that never condensed to a larger moon or may
be the remnants of collisional fragmentation.
The irregular spacing in the form of clusters makes
these small objects with D < 100 km unsuitable
to study regular distance patterns. Clusters of
sub-planet sized bodies in the outer solar system
would require additional model components, which are
not accommodated here at this time.
We show a fit of the harmonic orbit
resonance model to the 7 largest Jupiter moons
in Fig. 6a, which matches our theoretical model
(Eqs. 15, 16) with an accuracy of R_pred/R_obs
=1.01±0.02. The regular spacing is shown in
Fig. 6b, where the 7 observed moon distances fit
a pattern with 10 iterative harmonic ratios.
The 7 moons include the 3 Galilean satellites
Io, Europa, and Ganymede that were known to
exhibit highly accurate harmonic ratios already
by Laplace in 1829 (Eq. 9).
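This can be checked directly from the standard sidereal periods of the
Galilean moons; the short Python check below is our own illustration (the
period values are textbook numbers, not taken from this paper), showing both
the near-exact 2:1 period ratios and the vanishing Laplace combination of
mean motions.

    # Quick numerical check of the Galilean (2:1) resonances; the sidereal
    # periods (in days) are standard reference values.
    P_io, P_eu, P_ga = 1.769138, 3.551181, 7.154553
    print(P_eu/P_io, P_ga/P_eu)          # ~2.007 and ~2.015, close to (2:1)
    print(1/P_io - 3/P_eu + 2/P_ga)      # Laplace relation n1-3n2+2n3 ~ 0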
The empty resonance zones (n=3, n=8, n=9)
are found at distances of R_3=270 Mm,
R_8=3370 Mm, and R_9=6100 Mm, where we propose
to search for additional Jupiter moons.
The moon closest to Jupiter is Adrastea, with a distance
of 128.98 Mm (i.e., 128,980 km), which is close to the Roche radius (Wylie 1931);
thus no further moons are expected to be discovered
closer to Jupiter.
Since 7 moons fit a geometric pattern with 10 elements
with such a high accuracy of 2% in the moon distance
(to the center of the hosting planet), we have a strong argument
that the underlying self-organizing mechanism based
on stable harmonic orbit resonances predicts
the correct pattern of moon distances. There are, however,
holes in this geometric scheme that are not filled by moons,
and these apparently do not follow a simple predictable pattern.
Therefore, our theoretical model has a predictability of 70%
for the case of the Jupiter moon distances, unless
there exist some undiscovered moons with diameters
of >100 km, which we consider unlikely.
§.§ The Moons of Saturn
The same compilation of planetary satellites mentioned
above lists 62 moons for Saturn. The largest 6 moons
(Enceladus, Tethys, Dione, Rhea, Titan, Iapetus)
have a diameter of D>400 km and fit the harmonic
orbit resonance model with an accuracy of
R_pred/R_obs=0.95±0.01. If we set the
same limit of D > 100 km as for Jupiter (Section 3.2),
we have 13 observed moons and find a best fit of
R_pred/R_obs=0.95±0.06 (Fig. 7a). Inspecting
the distance spacing (Fig. 7b) we find that a
geometric pattern with 11 ratios fits the data best,
where two resonance zones are occupied by two moons
each, and 3 resonance zones are not occupied.
Nevertheless, the spatial pattern fits the data in
the predicted range of harmonic ratios, q=1.46-2.60 (Fig. 7b).
§.§ The Moons of Uranus
For Uranus, a total of 27 moons have been discovered so far,
of which 8 moons have a diameter of D > 100 km.
Seven of the 8 largest moons have a quasi-periodic
pattern, while Sycorax is located much further outside (Fig. 8).
The best-fit geometric pattern shows 12 resonant zones
(Fig. 8b), which fit the observed moon distances with
an accuracy of R_pred/R_obs=0.97±0.07 (Fig. 8a).
The range of best-fit harmonics q=[1.50-2.71] (Fig. 8b) is
close to the theoretical prediction q=[1.5-3.0].
§.§ The Moons of Neptune
A total of 14 moons have been reported for Neptune,
of which 6 have a diameter of D>100 km (Fig. 9),
namely Galatea, Despina, Larissa, Proteus, Triton,
and Nereid. We show the distances of these 6 moons
to the center of Neptune in Fig. 9. The harmonic
resonance model fits the 6 moons with an accuracy of
R_pred/R_obs=1.03±0.06 (Fig. 9a).
The range of best-fit harmonics is q=[1.29-3.55] (Fig. 9b),
close to the theoretical prediction q=[1.5-3.0].
§.§ Exo-Planets
Five planetary positions have been measured in the
55 Cancri system, located at distances of 0.01583, 0.115,
0.240, 0.781, and 5.77 AU from the center of the star (Cuntz 2012).
Cuntz (2012) applied the Titius-Bode law and predicted
4 intermediate planet positions at 0.081, 0.41, 1.51, and
2.95 AU, adding up to a 9-planet system.
A similar prediction was made by Poveda and Lara (2008),
predicting two additional planets at 2.0 and 15.0 AU.
Applying the harmonic resonance model to these
data (Fig. 10) yields a best fit with an accuracy of
R_pred/R_obs=1.01 ± 0.07 and predicts a total of 12
planets with 7 unknown positions at
R_2=0.022, R_3=0.031, R_4=0.048, R_5=0.077, R_8=0.40,
R_10=1.46, and R_11=2.93 AU. It is gratifying to see that
the latter four positions predicted by our model, i.e.,
R=0.077, 0.40, 1.46, 2.93 AU, agree well with the predictions
of Cuntz (2012), i.e., R=0.081, 0.41, 1.51, 2.95 AU.
Our harmonic orbit resonance model predicts in total a number
of 12 planets within the harmonic ratio range of
q=[1.61-3.11] (Fig. 10b), of which 7 planets are undiscovered.
In the millisecond pulsar PSR 1257+12 system, two planet
companions were discovered (Wolszczan and Frail 1992),
at distances of R_1=0.36 AU and R_2=0.47 AU, for which
the Titius-Bode law has been applied to predict additional
planets (Bisnovatyi-Kogan 1993). The orbital distance ratio
is R_2/R_1=1.31 and implies (using Kepler's law) an orbital
period ratio of T_2/T_1=(R_2/R_1)^3/2=1.50 that exactly
corresponds to the harmonic ratio (H_2:H_1)=(3/2), and thus
is fully consistent with the harmonic orbit resonance model.
The HARPS search for southern extra-solar planets discovered
seven periods in the star HD 10180 (Lovis et al. 2011),
which can be translated
into distances of exo-planets from the central star by using
Kepler's law and are plotted in Fig. 11a.
We find a total of 11 planets that
fit the harmonic resonance model with an accuracy of
R_pred/R_obs=0.97±0.10 and we predict four
undiscovered planets in the resonant rings n=2, 3, 8, 10,
with distances of R_2=0.029, R_3=0.039,
R_8=0.089, and R_10=1.55 AU.
In the eclipsing polar HU Aqr, two orbiting giant planets
at distances of R_1=3.6 and R_2=5.4 AU were discovered
(Qian et al. 2011).
The distance ratio Q=5.4/3.6=1.50 implies an orbital period ratio of
q=Q^3/2=1.84, which is not close to a harmonic ratio.
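Both quoted numbers follow from Kepler's third law alone; a two-line check
(ours) reproduces them:

    # Period ratios implied by the quoted orbital distances, via T ~ R^(3/2).
    print((0.47/0.36)**1.5)   # PSR 1257+12: ~1.49, i.e. the (3:2) harmonic
    print((5.4/3.6)**1.5)     # HU Aqr: ~1.84, between (5:3)=1.67 and (2:1)=2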
The application of the Titius-Bode law to exo-planet data
furnished predictions of 141 additional exoplanets in 68 multiple-exoplanet
systems (Bovaird and Lineweaver 2013). In a follow-up study,
Bovaird et al. (2015) predicted the periods of 228 additional
planets in 151 multi-exoplanet systems. Huang and Bakos (2014)
searched in Kepler Long Cadence data for the 97
predicted planets of Bovaird and Lineweaver (2013) in 56
of the multi-planet systems, but found only 5 planetary
candidates around their predicted periods and questioned
the predictive power of the Titius-Bode law.
§.§ Statistics of Results
We summarize the statistics of results in Table 3. The
analyzed data set consists of our solar system, four
moon systems (Jupiter, Saturn, Uranus, and Neptune),
and two exo-planet systems. We found that each of these
7 systems consisted of N_res=10-12 resonant zones,
of which N_occ=5-10 were occupied with detected
satellites, while N_miss=0-7 resonant zones were
not occupied by sizable moons (with diameters of D>100 km)
or detected exo-planets. For the two exo-planet systems
we predict 11 additional resonance zones that could
harbor planets (Table 4).
The new result that each planet or moon system is found to have
about 10 resonant ring zones indicates some
unknown fundamental law for the maximum distance limit of
planet or moon formation. The innermost distance is essentially
given by the Roche limit, while the outermost distance may
be related to an insufficient mass density that is needed for
gravitational condensation. Our empirical result predicts
a distance range of R_10/R_1 ≈ 130 for a typical
satellite system with N_res ≈ 10 resonant zones.
The relationship between the number of satellites and
the range of planet (or moon) distances is directly
connected to the variation of the progression factors,
which is summarized in Fig. 12 for all 7 analyzed systems.
§ DISCUSSION
§.§ Quantized Planet Spacing
The distances of the planets from the Sun, as well as the
distances of the moons from their central body (Jupiter,
Saturn, Uranus, Neptune) have been fitted with the original Titius-Bode
law, using a constant geometric progression factor
with empirical values in the range of Q=R_i+1/R_i=1.26-2.0 (Table 1),
with the Schrödinger-Bohr atomic model, R_n ∝ n (n+1) ≈ n^2
(Wylie 1931; Louise 1982; Scardigli 2007a,b), or with more
complicated polynomial functions (e.g., Blagg 1913).
The assumption of a constant geometric progression factor in the
planet distances, which corresponds to a regular logarithmic spacing,
is apparently incorrect, based on the poor agreement with observations,
while the harmonic orbit resonance model has the following properties:
(1) It fits the planet distances with 5 quantized values (that relate
to the five dominant harmonic ratios) with a much higher accuracy
than the Titius-Bode law and its generalized version;
(2) It is based on the physical model of harmonic orbit resonances,
and (3) disproves the assumption of a constant progression factor
(which corresponds to a logarithmic spacing). In constrast, it
predicts variations between neighbored orbital periods from
q_min=1.5 to q_max=3.0, which amounts to variations by
a factor of two. The harmonic orbit resonance model fits the
data best for a linearly increasing progression factor, starting
from q_1=q_min=(3:2)=1.5 for the first (innermost) planet pair,
and ending with
q_n=q_max=(3:1)=3.0 for the last (outermost) planet pair. From the
7 different data sets analyzed here we find the following mean values:
Δ q=0.16±0.05 for the linear gradient of the time period
progression factor, q_min=1.45±0.10 for the minimum progression
factor, and q_max=3.00±0.33 for the maximum progression factor
(Table 3). If we use the mean value of Δ q=0.16, the model
(Eq. 15) predicts the following ratios for a sample of 10 planets:
q_1=1.50, q_2=1.66, q_3=1.82, q_4=1.98, q_5=2.14,
q_6=2.30, q_7=2.46, q_8=2.62, q_9=2.78, q_10=2.94.
Rounding these values to the next allowed harmonic
number (which is a rational number), the following sequence of
harmonic ratios is predicted:
q_1=(3:2), q_2=(5:3), q_3=(5:3), q_4=(2:1), q_5=(2:1),
q_6=(5:2), q_7=(5:2), q_8=(5:2), q_9=(3:1), q_10=(3:1),
which closely matches the sequence of observed harmonic resonances
(Table 2, column 4) in our solar system.
These harmonic ratios match the observations closely, and thus
provide an adequate description of the geometric pattern created
by the self-organizing system.
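For concreteness, the two sequences above can be generated in a few lines of
Python; the allowed harmonics and the gradient Δq=0.16 are taken from the
text, while the snapping to the nearest allowed harmonic is our reading of
the quantization rule:

    # Reproduce the linear q_i sequence and its quantization to the five
    # dominant harmonic ratios; dq = 0.16 is the mean gradient from Table 3.
    harmonics = [3/2, 5/3, 2.0, 5/2, 3.0]
    q = [1.5 + 0.16*i for i in range(10)]                    # q_1 ... q_10
    snapped = [min(harmonics, key=lambda h: abs(h - qi)) for qi in q]
    Q = [h**(2/3) for h in harmonics]                        # distance factors
    print([round(v, 2) for v in q])         # 1.5, 1.66, ..., 2.94
    print([round(v, 3) for v in snapped])   # 1.5, 1.667, 1.667, 2.0, 2.0, 2.5, ...
    print([round(v, 2) for v in Q])         # 1.31, 1.41, 1.59, 1.84, 2.08

The third printout matches the quantized geometric progression factors Q_i
quoted in the conclusions (with 1.41 versus the rounded value 1.40).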
§.§ The Geometric Progression Factor
Several attempts have been made to find a theoretical
physical model for the empirical Titius-Bode law.
There exists no physical model that can explain the
mathematical function that was empirically found by Titius and Bode,
with a scaling relationship 2^(n-2), multiplied by
an arbitrary factor and an additive constant (Eq. 1).
One interpretation attempted to relate it to the
Schrödinger-Bohr atomic model, R_n ∝ n (n+1) ≈ n^2
(Wylie 1931; Louise 1982; Scardigli 2007a,b), but the reason
why it was found to fit the Titius-Bode law is simply because
both series scale similarly for small integer numbers,
i.e., 2^n ∝ 1, 2, 4, 8, 16, 32, ... versus
n^2 ∝ 0, 1, 4, 9, 16, 25, ...,
but this numerological coincidence does not imply that
atomic physics and celestial mechanics can be understood by
the same physical mechanism, although both exhibit
discrete quantization rules.
Most recent studies assume that the physics behind the
Titius-Bode law is related to the accretion of mass
through collisions within a protoplanetary disk, clearing
out material in orbits with harmonic resonances, which
leads to a non-random distribution of planet orbits with
roughly logarithmic spacing (e.g., Peale 1976;
Hayes and Tremaine 1998; Bovaird and Lineweaver 2013).
Quantitative modeling of such a scenario is not easy,
because it implies Monte Carlo-type N-body simulations
of a vast number of planetesimals that interact with
N^2 mutual gravitational terms. Analytical solutions
of N-body problems, as we have known since Lagrange, are virtually
non-existent for N ≥ 3. However, the configuration of
planets that we observe after a solar system life time
of several billion years suggests that the observed
harmonic orbit resonances represent the most stable
long-term solutions of a resonant system, otherwise
the solar system would have disintegrated long ago.
In contrast to the generalized Titius-Bode law with
logarithmic spacing, we argue for a model with quantized
geometric progression factors, based on the most relevant
harmonic ratios that stabilize resonant orbits.
From a statistical point of view we can understand that
there is a “sweet spot” of harmonic ratios in the
planet orbits that is not too small (because it would
lengthen the conjunction times and reduce the frequency
of gravitational interactions necessary for the
stabilization of orbits), and is not too large
(because the inter-planet distances at conjunction
would be larger and the gravitational force weaker).
These reasons constrain the optimum range of dominant
harmonic ratios, for which we found the 5 values
between (3/2) and (3/1). However, there are still
open questions why there is a linear gradient Δ q
in the orbit time (and geometric) progression factor,
and what determines the value of this gradient
Δ q ≈ 0.16. The specific value of the
gradient determines how many planets n_p can exist
in a resonant system within the optimum range of
harmonic numbers (from q_min=1.5 to q_max=3; Fig. 12).
Therefore, since we have no theory to predict the gradient
Δ q, we have to resort to fitting of existing data
and treat the gradient Δ q as an empirical variable.
§.§ Self-Organizing Systems
One necessary property of self-organizing systems is the
positive feedback mechanism. Solar granulation (Fig. 1a),
for instance, is driven by subphotospheric convection,
a mechanism that has a vertical temperature gradient in
a gravitationally stratified layer. It is subject to the
Rayleigh-Bénard instability, which can be described
with three coupled differential equations, the so-called
Lorenz model (e.g., Schuster 1988). The positive feedback
mechanism results from the upward motion of a fluid along
a negative vertical temperature gradient, which cools off
the fluid and makes it sink again, leading to chaotic motion.
In the limit
cycle, which is a strange attractor of this chaotic system,
the system dynamics develops a characteristic size of the
convection cells (approximately 1000 km for solar granulation),
which is maintained over the entire solar surface by this
self-organizing mechanism of the Lorenz model.
For planet orbits, gravitational disturbances act most
strongly between planet pairs that have a harmonic ratio
of their orbit times, because the gravitational pull
occurs every time at the same location for harmonic
orbits. Such repetitive disturbances that occur at
the same location into the same direction will pull the
planet with the lower mass away from its original orbit
and make its orbit unstable. However, if there is a third
body with another harmonic ratio at an opposite conjunction
location, it can pull the unstable planet back into a more
stable orbit. This is a positive feedback mechanism that
self-organizes the orbits of the planets into stable
long-term configurations. A more detailed physical
description of the resonance phenomenon can be found
in the review of Peale (1976).
§.§ Random Pattern Test
A prediction of the harmonic orbit resonance model is that
the spacing of stable planet orbits is not random, but
rather follows some quasi-regular pattern, which we
quantified with the quantized spacing given in
Eqs. (15, 16). However, since the matching of the observed
planet distances with the resonance-predicted pattern
is not perfect, but agrees only within an accuracy of
R_pred/R_obs=1.00±0.06 in the statistical
average (Table 3), the question may be asked whether
a random process could explain the observed spacing.
In order to test this hypothesis we performed a
Monte-Carlo simulation with 1000 random sets of planet
distances and analyzed them with the same numerical code
that we applied to the observations shown in Figs. 4-11.
In Fig. 13 we show a 2D distribution of two
values obtained for each of the 1000 simulations:
The y-axis shows the standard deviation
|R_model/R_sim - 1| of the ratio of
modeled and simulated values of planet distances,
which is a measure of how well the model fits the
data; The x-axis shows the deviations
[(q_min-1.5)^2 + (q_max-3.0)^2]^1/2
of the best-fit progression factors q_min
and q_max (added in quadrature), which is
a measure how close the simulated and theoretically predicted
progression factors agree. From the 2D distribution
shown in Fig. 13 we see that the data set of Jupiter
matches the model best and
exhibits the largest deviation from the random
values, while the other six data sets are
all distributed at the periphery of the random values.
Nevertheless, all analyzed data sets are found to be
significantly different from random spacing of
planet or moon distances.
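A condensed sketch of this test is given below. It is our reconstruction and
simplifies the fitting step (a plain linear fit of the progression factors
q_i instead of the full quantized-harmonic fit), but it produces the two
metrics plotted on the axes of Fig. 13:

    # Monte-Carlo sketch: random log-uniform distances vs. the linear-gradient
    # progression model; returns the (y, x) metrics of Fig. 13 per realization.
    import numpy as np
    rng = np.random.default_rng(1)

    def fit_metrics(R):
        T = R**1.5                              # Kepler's third law (arb. units)
        q = T[1:]/T[:-1]                        # period progression factors
        i = np.arange(q.size)
        dq, qmin = np.polyfit(i, q, 1)          # linear model q_i = qmin + i*dq
        qmax = qmin + dq*(q.size - 1)
        Rmod = R[0]*np.cumprod(np.r_[1.0, (qmin + dq*i)**(2/3)])
        return np.std(Rmod/R - 1), np.hypot(qmin - 1.5, qmax - 3.0)

    stats = [fit_metrics(np.sort(10**rng.uniform(0, 2, 10))) for _ in range(1000)]
    print(np.mean(stats, axis=0))               # typical random-spacing values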
§.§ Exo-Planet Searches
The Titius-Bode law was also applied to exo-planets of
stellar systems, such as the solar-like G8 V star 55 Cnc
(Cuntz 2012; Poveda and Lara 2008), the millisecond pulsar
PSR 1257+12 system (Bisnovatyi-Kogan 1993), the star
HD 10180 (Lovis et al. 2011), the eclipsing polar HU Aqr
(Qian et al. 2011), and to over 150 multi-planet systems
observed with Kepler (Bovaird and Lineweaver 2013;
Bovaird et al. 2015; Huang and Bakos 2014). The search
in Kepler data, however, did not reveal many new detections
based on predictions with the generalized Titius-Bode law,
i.e., R_n = R_1 Q^n (Huang and Bakos 2014).
Although a similar iterative formulation is used in both
the generalized Titius-Bode law and the harmonic resonance
model (Eqs. 6, 7),
the geometric progression factor Q is used as a free
variable individually fitted to each system in other studies
(e.g., Bovaird and Lineweaver 2013).
The question arises how the harmonic orbit resonance model
can improve the prediction of exo-planet candidates.
The two examples shown in Figs. (10-11) suggest that the
predicted spatio-temporal pattern can be fitted to incomplete
sets of exo-planets with almost the same accuracy as the
(supposedly complete) data sets in our solar system.
This may be true if there are at least 5 planets detected
per star, but it may become considerably ambiguous for smaller
sets of ≈ 2-4 exo-planets per star. However, since
the harmonic orbit resonance model predicts a variable ratio
of the time period progression factor that fits existing data,
it should do better than the generalized Titius-Bode law with
a constant progression factor, as it was used in recent work
(Bovaird and Lineweaver 2013; Bovaird et al. 2015).
§ CONCLUSIONS
The physical understanding of the Titius-Bode law has been a long-standing
problem in planetary physics for 250 years. In this study we
interpret the quasi-regular geometric pattern of planet distances
from the Sun as a result of a self-organizing process that acts
throughout the life time of our solar system. The underlying
physical mechanism is linked to the celestial mechanics of
harmonic orbit resonances. The results may be useful for searches
of exo-planets orbiting around stars. Our conclusions are:
* The original form of the Titius-Bode law on distances of the
planets to the Sun is a purely empirical law and cannot be derived
from any existing physical model, although it fits the observations
to some extent, but fails for the extremal planets Mercury, Neptune
and Pluto. The “generalized form of the Titius-Bode law” assumes
a constant geometric progression factor Q = R_i+1/R_i, which
we find also to be inconsistent with the data, since the observed
distance ratios vary in the range of Q ≈ 1.3-2.1, corresponding
to a variation of q = T_i+1/T_i ≈ 1.5-3.0 of the orbital
periods, according to Kepler's third law, q ∝ Q^3/2.
* The observed orbital period ratios q_i of the planets
in our solar system correspond to five harmonic ratios,
(H_i+1/H_i)= (3:2), (5:3), (2:1), (5:2), (3:1), which
represent the dominant harmonic orbit resonances that
self-organize the orbits in the solar system. We find that the
progression factor q_i for time periods follows approximately
a linear function q_i = q_1 + (i-1) Δ q, i=1,...,n,
which varies in the range from the smallest harmonic ratio
q_1=(3:2)=1.5 of the innermost planet to q_n=(3:1)=3.0 for
the outermost planet, with a gradient of Δ q = 0.16±0.05.
The progression of orbital periods T_i is quantized by the
nearest dominant five harmonics, q_i = [1.5, 1.667, 2.0, 2.5, 3.0].
Based on these harmonic ratios of the orbital periods we
predict the variation of the geometric progression factors as
Q_i = [q_1 + (i-1) Δ q]^2/3, which is also quantized,
Q_i = q_i^2/3= [1.31, 1.40, 1.59, 1.84, 2.08].
* Fitting the geometric progression factors predicted by
our harmonic orbit resonance model to observed data from our
solar system, moon systems, and exo-planet systems, we find
best agreement for Jupiter moons (with an accuracy of 2%),
followed by the solar system (with an accuracy of 4% for 8 out of
the 10 planets). The other moon systems (of Saturn, Uranus,
Neptune) and exo-planet systems (of 55 Cnc and HD 10180)
agree with a typical accuracy of 6%-7%. We demonstrated
that these accuracies of predicted planet (or moon) distances are
significantly different from randomly distributed (logarithmic)
distances (Fig. 13). The number of resonant zones for each star or
planet amounts to n_res≈ 10-12, which is comparable with
the number of sizable detected moons (with a diameter of
D ≥ 100 km) in each moon system. For the exo-planet systems
we find best fits for n_res≈ 10-11 resonance zones,
which allows us to predict 7 missing exo-planets for the
star 55 Cnc, and 4 missing exo-planets for the star HD 10180.
* We interpret the observed quasi-regular geometric patterns
of planet or moon distances in terms of a self-organizing system.
Self-organizing systems (the primordial molecular cloud around
our Sun) create a spontaneous order (the Titius-Bode law) from
local interactions (via harmonic orbital resonances) between the
internal parts (the planets or moons) of the initially disordered
(solar) system. A self-organizing process is spontaneous and
does not need an external control agent. Initial fluctuations
are amplified by a positive feedback (by the gravitational interactions
that lead to long-term stable orbits via harmonic orbit resonances).
A self-organizing system is robust
(during several billion years in our solar
system) and capable of self-repair after large disturbances (for
instance by a passing star or a migrating giant planet, thanks
to the stabilizing gravitational interactions of harmonic
orbit resonances). We find that the ordered patterns of planet
orbits are not always complete (a planet between Mercury and Venus
seems to be missing) and can display defects (the “superfluous”
Neptune, similar to defects in crystal growth). The predicted
geometric pattern thus has a high statistical probability, but is not
always perfectly realized in self-organizing systems.
The present study can be applied in two ways: (1) predictions of
undiscovered exo-planets in other stellar systems; and
(2) prediction of resonant orbits and
solutions with long-term stability in numerical N-body simulations.
The validity of the
presented model could be corroborated by new discoveries of
predicted exo-planets (e.g., Bovaird and Lineweaver 2013),
and by measuring the geometric scaling of gaps and rings in
simulated N-body accretion systems. The harmonic orbit resonance
model makes a very specific prediction about five harmonic orbit
and distance ratios (rather than the logarithmic spacing assumed
in the generalized Titius-Bode law), which can be tested with
numerical simulations. Such a quantized geometric pattern is
also distinctly different from random systems
(Dole 1970; Lecar 1973; Dworak and Kopacz 1997;
Hayes and Tremaine 1998; Lynch 2003;
Neslusan 2004; Cresson 2011).
Just at the time of submission of this paper, the discovery of
the Trappist-1 planetary system was announced (Gillon et al. 2017;
Pletser and Basano 2017; Scholkmann 2017;
Aschwanden and Scholkmann 2017).
This unique exoplanet system harbors 7 planets
with a regular spacing that corresponds to the harmonic ratios
of (T_i+1/T_i)=[1.60, 1.67, 1.51, 1.51, 1.34, 1.62].
It fulfills two of our three predictions: The period ratios
are close to the predicted harmonic ratios (3:2)=1.5 and
(5:3)=1.67, and the sequence starts with the lowest of the
predicted dominant resonances, but it does not continue to
the higher harmonic ratios (2:1, 5:2, 3:1). Either we are
missing additional planets further out, or the particular
spectral type of the central star (dwarf star M8V) favors
the lowest harmonic ratios, which have the shortest conjunction
times and therefore are the most stable configurations.
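The quoted ratios can be reproduced from the periods reported in the
discovery paper; note that the period of the outermost planet h was only
loosely constrained at that time (≈20 d), and that value is needed to
recover the quoted 1.62:

    # Period ratios of Trappist-1 b-h (days; Gillon et al. 2017, with the
    # then-estimated P_h ~ 20 d for the outermost planet).
    P = [1.51087, 2.42182, 4.04961, 6.09962, 9.20669, 12.35294, 20.0]
    print([round(b/a, 2) for a, b in zip(P, P[1:])])
    # -> [1.6, 1.67, 1.51, 1.51, 1.34, 1.62]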
The author acknowledges the hospitality and partial support for
two workshops on “Self-Organized Criticality and Turbulence” at the
International Space Science Institute (ISSI) at Bern, Switzerland,
during October 15-19, 2012, and September 16-20, 2013, as well as
constructive and stimulating discussions (in alphabetical order)
with Sandra Chapman, Paul Charbonneau, Henrik Jeldtoft Jensen,
Lucy McFadden, Maya Paczuski, Jens Juul Rasmussen, John Rundle,
Loukas Vlahos, and Nick Watkins.
This work was partially supported by NASA contract NNX11A099G
“Self-organized criticality in solar physics”.
Table 1. Values of geometric progression factors (Q = R_i+1/R_i) and orbital
period progression factors (q = Q^3/2) for solar system data, compiled from
previous publications.

  Q_i                              q_i                          Reference
  2                                2.82                         Titius (1766), Bode (1772), Miller (1938a,b), Fairall (1963)
  1.7275                           2.27                         Blagg (1913), Brodetsky (1914), Nieto (1970)
  1.89                             2.59                         Dermott (1968, 1973)
  1.52                             1.87                         Armellini (1921); Badolati (1982)
  1.52                             1.87                         Munini and Armellini (1978); Badolati (1982)
  1.442                            1.73                         Rawal (1986, 1989)
  1.26                             1.41                         Rawal (1986, 1989)
  (1.31, 1.40, 1.59, 1.84, 2.08)   (1.5, 1.667, 2.0, 2.5, 3.0)  Harmonic orbit resonance model (this work)
Table 2. Observed orbital periods T and distances R of the planets from the
Sun, the predicted harmonic orbit resonances (H_i+1:H_i), the Titius-Bode law
R_TB, the generalized Titius-Bode law R_GTB, and predictions of the harmonic
orbit resonance model R_HOR with ratios R_HOR/R.

  n   Planet     T (yrs)   (H_i+1:H_i)   R (AU)   R_TB (AU)   R_GTB (AU)   R_HOR (AU)   R_HOR/R
  1   Mercury      0.241   (5:2)          0.39     0.55        0.34         0.38        0.9839
  2   Venus        0.615   (5:3)          0.72     0.70        0.58         0.70        0.9701
  3   Earth        1.000   (2:1)          1.00     1.00        1.00         1.00        1.0000
  4   Mars         1.881   (5:2)          1.52     1.60        1.72         1.56        1.0249
  5   Ceres        4.601   (5:2)          2.77     2.80        2.95         2.76        0.9982
  6   Jupiter     11.862   (5:2)          5.20     5.20        5.06         5.01        0.9639
  7   Saturn      29.457   (3:1)          9.54    10.00        8.69         9.43        0.9888
  8   Uranus      84.018   (2:1)         19.19    19.60       14.93        19.52        1.0171
  9   Neptune    164.780   (3:2)         30.07    38.80       25.63        29.97        0.9968
 10   Pluto      248.400   ...           39.48    77.20       44.01        38.77        0.9820
Table 3. Summary of the analyzed planet and moon systems using the harmonic
orbit resonance model (only moons with a diameter D > 100 km are included).

  Central    N_sat   N_res   N_occ   N_miss   Accuracy        Gradient    Period progression
  object                                      R_pred/R_obs    dq          factor range q
  Sun          10      10      10       0     0.96±0.04       0.205       1.40-3.24
  Jupiter       7      10       7       3     1.01±0.02       0.161       1.41-2.85
  Saturn       13      11       8       3     0.95±0.06       0.114       1.46-2.60
  Uranus        8      12       8       4     0.97±0.07       0.110       1.50-2.71
  Neptune       6      10       6       4     1.03±0.06       0.251       1.29-3.55
  55 Cnc        5      12       5       7     1.01±0.07       0.137       1.61-3.11
  HD 10180      7      11       7       4     0.97±0.10       0.152       1.46-2.98
  mean                                        1.00±0.06       0.16±0.05   (1.45±0.10)-(3.00±0.33)
Table 4. Predicted exo-planets of the stars 55 Cnc and HD 10180.

  Central object   Resonant zone n   Predicted distance   Predicted by Cuntz (2012)
  55 Cnc                  2          0.022 AU             ...
  55 Cnc                  3          0.031 AU             ...
  55 Cnc                  4          0.048 AU             ...
  55 Cnc                  5          0.077 AU             0.081 AU
  55 Cnc                  8          0.40 AU              0.41 AU
  55 Cnc                 10          1.46 AU              1.51 AU
  55 Cnc                 11          2.93 AU              2.95 AU
  HD 10180                2          0.029 AU             ...
  HD 10180                3          0.039 AU             ...
  HD 10180                8          0.089 AU             ...
  HD 10180               10          1.55 AU              ...
| Johannes Kepler was the first to study the distances of
the planets to the Sun and found that the inner radii of
regular geometric bodies (Plato's polyhedral solids) approximately
match the observations, which he published in his famous
Mysterium Cosmographicum in 1596. An improved
empirical law was discovered by J.B. Titius in 1766, and
it was made prominent by Johann Elert Bode (published in 1772),
known since then as the Titius-Bode law:
R_n = { 0.4 for n=1 ; 0.3 × 2^(n-2) + 0.4 for n ≥ 2 } (in AU).
Only the six planets from Mercury to Saturn were known at that time.
The asteroid belt, represented by the largest asteroid body
Ceres (discovered in 1801), part of the so-called
“missing planet” (Jenkins 1878; Napier et al. 1973; Opik 1978),
was predicted from the Titius-Bode law, as well as the outer
planets Uranus, Neptune, and Pluto, discovered in 1781, 1846, and 1930,
respectively. Historical reviews of the Titius-Bode law can be found
in Jaki (1972a,b), Ovenden (1972, 1975), Nieto (1972), Chapman (2001a,b),
and McFadden et al. (1999, 2007).
Noting early on that the original Titius-Bode law breaks
down for the most extremal planets (Mercury at the inner side,
and Neptune and Pluto at the outer side),
numerous modifications were proposed: such as a 4-parameter
polynomial (Blagg 1913; Brodetsky 1914); the Schrödinger-Bohr
atomic model with a scaling of R_n ∝ n(n+1), where
the quantum-mechanical number n is substituted by the planet
number (Wylie 1931; Louise 1982; Scardigli 2007a,b);
a geometric progression by a constant factor
(Blagg 1913; Nieto 1970; Dermott 1968, 1973; Armellini 1921;
Munini and Armellini 1978; Badolati 1982; Rawal 1986, 1989;
see compilation in Table 1); fitting an exponential distance law
(Pletser 1986, 1988);
the introduction of additional planets (Basano and Hughes 1979),
applying a symmetry correction to the Jupiter-Sun system
(Ragnarsson 1995); tests of random statistics
(Dole 1970; Lecar 1973; Dworak and Kopacz 1997;
Hayes and Tremaine 1998; Lynch 2003; Neslusan 2004; Cresson 2011;
Pletser 2017); self-organization of atomic patterns
(Prisniakov 2001), standing waves in the solar system formation
(Smirnov 2015), or the Four Poisson-Laplace theory of gravitation
(Nyambuya 2015).
Also the significance of the Titius-Bode law for predicting
the orbit radii of moons around Jupiter or Saturn was recognized
early on (Blagg 1913; Brodetsky 1914; Wylie 1931; Miller 1938a,b;
Todd 1938; Cutteridge 1962; Fairall 1963; Dermott 1968; Nieto 1970;
Rawal 1978; Hu and Chen 1987), or the prediction of a
trans-Neptunian planet “Eris” (Ortiz et al. 2007;
Flores-Gutierrez and Garcia-Guerra 2011; Gomes et al. 2016),
while more recently the Titius-Bode law has been used to
predict the distances of exo-planets from their central star
(Cuntz 2012; Bovaird and Lineweaver 2013, Bovaird et al. 2015;
Poveda and Lara 2008;
Lovis et al. 2011; Qian et al. 2011; Huang and Bakos 2014),
or a planetary system around a pulsar (Bisnovatyi-Kogan 1993).
Physical interpretations of the Titius-Bode Law involve the
accumulation of planetesimals, rather than the creation of
enormous proto-planets and proto-satellites (Dai 1975, 1978).
N-body (Monte-Carlo-type) computer simulations of the
formation of planetary systems were performed, which could
reproduce the regular orbital spacings of the Titius-Bode law
to some extent (Dole 1970; Lecar 1973; Isaacman and Sagan 1977;
Prentice 1977; Estberg and Sheehan 1994), or not (Cameron 1973).
Some theories concerning the Titius-Bode law
involve orbital resonances in planetary system formation,
starting with an early-formed Jupiter which produces a runaway
growth of planetary embryos by a cascade of harmonic resonances
between their orbits (e.g., Goldreich 1965; Dermott 1968;
Torbett et al. 1982; Patterson 1987; Filippov 1991).
Alternative models involve the self-gravitational instability
in very thin Keplerian disks (Ruediger and Tschaepe 1988; Rica 1995),
the principle of least action interaction (Ovenden 1972;
Patton 1988), or scale-invariance of the disk that produces
planets (Graner and Dubrulle 1994; Dubrulle and Graner 1994).
Analytical models of the Titius-Bode law
have been developed in terms of hydrodynamics in thin disks
that form rings (Nowotny 1979; Hu and Chen 1987),
periodic functions with Tschebischeff polynomials (Dobo 1981),
power series expansion (Bass and Popolo 2005), and
the dependence of the regularity parameter on the central
body mass (Georgiev 2016).
The previous summary reflects the fact that we still
do not have a physical model that explains the empirical
Titius-Bode law, nor do we have an established
quantitative physical model that predicts the exact
geometric pattern of the planet distances to the central star,
which could be used for searches of exo-solar planets or
for missing moons around planets. In this paper we investigate a
physical model that quantitatively explains the distances of the
planets from the Sun, based on the most relevant harmonic
resonances in planet orbits, which provides a more accurate
prediction of planet distances than the empirical Titius-Bode law,
or its generalized version with a constant geometric progression
factor. This model appears to be universally applicable, to the
planets of our solar system, planetary moon systems, Saturn-like
ring systems, and stellar exo-planetary systems.
A novel approach of this study is the interpretation of
harmonic orbit resonances in terms of a self-organization system
(not to be confused with self-organized criticality systems,
Bak et al. 1987; Aschwanden et al. 2016).
The principle of self-organization is a mechanism that creates
spontaneous order out of initial chaos, in contrast to random
processes that are governed by entropy. A self-organizing
mechanism is spontaneously triggered by random fluctuations,
is then amplified by a positive feedback mechanism, and produces
an ordered structure without any need of an external control agent.
The manifestation of a self-organizing mechanism is often a
regular geometric pattern with a quasi-periodic structure in space,
see various examples in Fig. 1. The underlying physics can involve
non-equilibrium processes, magneto-convection, plasma turbulence,
superconductivity, phase transitions, or chemical reactions.
The concept of self-organization has been applied to
solid state physics and material science (Müller and Parisi 2015),
laboratory plasma physics (Yamada 2007, 2010; Zweibel and Yamada 2009);
chemistry (Lehn 2002), sociology (Leydesdorff 1993),
cybernetics and learning algorithms (Kohonen 1984; Geach 2012),
or biology (Camazine et al. 2001).
In astrophysics, self-organization has been applied to
galaxy and star formation (Bodifee 1986; Cen 2014),
astrophysical shocks (Malkov et al. 2000),
accretion discs (Kunz and Lesur 2013),
magnetic reconnection (Yamada 2007, 2010; Zweibel and Yamada 2009);
turbulence (Hasegawa 1985),
magneto-hydrodynamics (Horiuchi and Sato 1985);
planetary atmosphere physics (Marcus 1993);
magnetospheric physics (Valdivia et al. 2003; Yoshida et al. 2010),
ionospheric physics (Leyser 2001),
solar magneto-convection (Krishan 1991; Kitiashvili et al. 2010),
and solar corona physics (Georgoulis 2005; Uzdensky 2007).
Here we apply the concept of self-organization to the solar system,
planetary moon systems, and exo-planet systems, based on the
physical mechanism of harmonic orbit resonances.
The plan of the paper is an analytical derivation of the
harmonic orbit resonance model (Section 2), an application to
observed data of our solar system planets, the moon systems of
Jupiter, Saturn, Uranus, and Neptune, and two exo-planet systems
(Section 3), a discussion in the context of previous work (Section 4),
and final conclusions (Section 5). | null | null | null | §.§ Quantized Planet Spacing
The distances of the planets from the Sun, as well as the
distances of the moons from their central body (Jupiter,
Saturn, Uranus, Neptune) have been fitted with the original Titius-Bode
law, using a constant geometric progression factor
with empirical values in the range of Q=R_i+1/R_i=1.26-2.0 (Table 1),
with the Schrödinger-Bohr atomic model, R_n ∝ n (n+1) ≈ n^2,
Wylie 1931; Louise 1982; Scardigli 2007a,b), or with more
complicated polynomial functions (e.g., Blagg 1913).
The assumption of a constant geometric progression factor in the
planet distances, which corresponds to a regular logarithmic spacing,
is apparently incorrect, based on the poor agreement with observations,
while the harmonic orbit resonance model has the following properties:
(1) It fits the planet distances with 5 quantized values (that relate
to the five dominant harmonic ratios) with a much higher accuracy
than the Titius-Bode law and its generalized version;
(2) It is based on the physical model of harmonic orbit resonances,
and (3) disproves the assumption of a constant progression factor
(which corresponds to a logarithmic spacing). In constrast, it
predicts variations between neighbored orbital periods from
q_min=1.5 to q_max=3.0, which amounts to variations by
a factor of two. The harmonic orbit resonance model fits the
data best for a linearly increasing progression factor, starting
from q_1=q_min=(3:2)=1.5 for the first (innermost) planet pair,
and ending with
q_n=q_max=(3:1)=3.0 for the last (outermost) planet pair. From the
7 different data sets analyzed here we find the following mean values:
Δ q=0.16±0.05 for the linear gradient of the time period
progression factor, q_min=1.45±0.10 for the minimum progression
factor, and q_max=3.00±0.33 for the maximum progression factor
(Table 3). If we use the mean value of Δ q=0.16, the model
(Eq. 15) predicts the following ratios for a sample of 10 planets:
q_1=1.50, q_2=1.66, q_3=1.82, q_4=1.98, q_5=2.14,
q_6=2.30, q_7=2.46, q_8=2.62, q_9=2.78, q_10=2.94,
Rounding these values to the next allowed harmonic
number (which is a rational number), the following sequence of
harmonic ratios is predicted:
q_1=(3:2), q_2=(5:3), q_3=(5:3), q_4=(2:1), q_5=(2:1),
q_6=(5:2), q_7=(5:2), q_8=(5:2), q_9=(3:1), q_10=(3:1),
which closely matches the sequence of observed harmonic resonances
(Table 2, column 4) in our solar system.
These harmonic ratios match the observations closely, and thus
provide an adequate description of the geometric pattern created
by the self-organizing system.
§.§ The Geometric Progression Factor
Several attempts have been made to find a theoretical
physical model for the empirical Titius-Bode law.
There exists no physical model that can explain the
mathematical function that was empirically found by Titius and Bode,
with a scaling relationship 2^(n-2), multiplied by
an arbitrary factor and an additive constant (Eq. 1).
One interpretation attempted to relate it to the
Schrödinger-Bohr atomic model, R_n ∝ n (n+1) ≈ n^2,
Wylie 1931; Louise 1982; Scardigli 2007a,b), but the reason
why it was found to fit the Titius-Bode law is simply because
both series scale similarly for small integer numbers,
i.e., 2^n ∝ 1, 2, 4, 8, 16, 32, ... versus
n^2 ∝ 0, 1, 4, 9, 16, 25, ...,
but this numerological coincidence does not imply that
atomic physics and celestial mechanics can be understood by
the same physical mechanism, although both exhibit
discrete quantization rules.
Most recent studies assume that the physics behind the
Titius-Bode law is related to the accretion of mass
through collisions within a protoplanetary disk, clearing
out material in orbits with harmonic resonances, which
leads to a non-random distribution of planet orbits with
roughly logarithmic spacing (e.g., Peale 1976;
Hayes and Tremaine 1998; Bovaird and Linewaver 2013).
Quantitative modeling of such a scenario is not easy,
because it implies Monte Carlo-type N-body simulations
of a vast number of planetesimals that interact with
N^2 mutual gravitational terms. Analytical solutions
of N-body problems, as we know since Lagrange, are virtually
non-existent for N ≥ 3. However, the configuration of
planets that we observe after a solar system life time
of several billion years suggests that the observed
harmonic orbit resonances represent the most stable
long-term solutions of a resonant system, otherwise
the solar system would have disintegrated long ago.
In contrast to the generalized Titius-Bode law with
logarithmic spacing, we argue for a model with quantized
geometric progression factors, based on the most relevant
harmonic ratios that stabilize resonant orbits.
From a statistical point of view we can understand that
there is a “sweet spot” of harmonic ratios in the
planet orbits that is not too small (because it would
lengthen the conjunction times and reduce the frequency
of gravitational interactions necessary for the
stabilization of orbits), and is not too large
(because the inter-planet distances at conjunction
would be larger and the gravitational force weaker).
These reasons constrain the optimum range of dominant
harmonic ratios, for which we found the 5 values
between (3/2) and (3/1). However, there are still
open questions why there is a linear gradient Δ q
in the orbit time (and geometric) progression factor,
and what determines the value of this gradient
Δ q ≈ 0.16. The specific value of the
gradient determines how many planets n_p can exist
in a resonant system within the optimum range of
harmonic numbers (from q_min=1.5 to q_max=3).
(Fig. 12).
Therefore, since we have no theory to predict the gradient
Δ q, we have to resort to fitting of existing data
and treat the gradient Δ q as an empirical variable.
§.§ Self-Organizing Systems
One necessary property of self-organizing systems is the
positive feedback mechanism. Solar granulation (Fig. 1a),
for instance, is driven by subphotospheric convection,
a mechanism that has a vertical temperature gradient in
a gravitationally stratified layer. It is subject to the
Rayleight-Bénard instability, which can be described
with three coupled differential equations, the so-called
Lorenz model (e.g., Schuster 1988). The positive feedback
mechanism results from the upward motion of a fluid along
a negative vertical temperature gradient, which cools off
the fluid and makes it sink again, leading to chaotic motion.
In the limit
cycle, which is a strange attractor of this chaotic system,
the system dynamics develops a characteristic size of the
convection cells (approximately 1000 km for solar graulation),
which is maintained over the entire solar surface by this
self-organizing mechanism of the Lorenz model.
For planet orbits, gravitational disturbances act most
strongly between planet pairs that have a harmonic ratio
of their orbit times, because the gravitational pull
occurs every time at the same location for harmonic
orbits. Such repetitive disturbances that occur at
the same location into the same direction will pull the
planet with the lower mass away from its original orbit
and make its orbit unstable. However, if there is a third
body with another harmonic ratio at an opposite conjuction
location, it can pull the unstable planet back into a more
stable orbit. This is a positive feedback mechanism that
self-organizes the orbits of the planets into stable
long-term configurations. A more detailed physical
description of the resonance phenomenon can be found
in the review of Peale (1976).
§.§ Random Pattern Test
A prediction of the harmonic orbit resonance model is that
the the spacing of stable planet orbits is not random, but
rather follows some quasi-regular pattern, which we
quantified with the quantized spacing given with
Eqs. (15, 16). However, the matching of the observed
planet distances with the resonance-predicted pattern
is not perfect, but agrees within an accuracy of
R_pred/R_obs=1.00±0.06 in the statistical
average only (Table 3), the question may be asked whether
a random process could explain the observed spacing.
In order to test this hypothesis we performed a
Monte-Carlo simulation with 1000 random sets of planet
distances and analyzed it with the same numerical code
as we analyzed the observations shown in Figs. (4-11).
In Fig. 13 we show a 2D distribution of two
values obtained for each of the 1000 simulations:
The y-axis shows the standard deviation
|R_model/R_sim - 1| of the ratio of
modeled and simulated values of planet distances,
which is a measure of how well the model fits the
data; The x-axis shows the deviations
[(q_min-1.5)^2 + (q_max-3.0)^2]^1/2
of the best-fit progression factors q_min
and q_max (added in quadrature), which is
a measure how close the simulated and theoretically predicted
progression factors agree. From the 2D distribution
shown in Fig. 13 we see that the data set of Jupiter
matches the model best and
exhibits the largest deviation from the random
values, while the other six 6 data sets are
all distributed at the periphery of the random values.
Nevertheless, all analyzed data sets are found to be
significantly different from random spacing of
planet or moon distances.
§.§ Exo-Planet Searches
The Titius-Bode law was also applied to exo-planets of
stellar systems, such as the solar-like G8 V star 55 Cnc
(Cuntz 2012; Poveda and Lara 1980), the millisecond pulsar
PSR 1257+12 system (Bisnovatyi-Kogan 1993), the star
HD 10180 (Lovis et al. 2011), the eclipsing polar HU Aqr
(Qian et al. 2011), and to over 150 multi-planet systems
observed with Kepler (Bovaird and Lineweaver 2013;
Bovaird et al. 2015; Huang and Bakos 2014). The search
in Kepler data, however, did not reveal much new detections
based on predictions with the generalized Titius-Bode law,
i.e., R_n = R_1 Q^n (Huang and Bakos 2014).
Although a similar iterative formulation is used in both
the generalized Titius-Bode law and the harmonic resonance
model (Eq. 6, 7),
the geometric progression factor Q is used as a free
variable individually fitted to each system in other studies
(e.g., Bovaird and Lineweaver 2013).
The question arises how the harmonic orbit resonance model
can improve the prediction of exo-planet candidates.
The two examples shown in Figs. (10-11) suggest that the
predicted spatio-temporal pattern can be fitted to incomplete
sets of exo-planets with almost equal accuracy as the data sets
from the (supposedly complete) data sets in our solar system.
This may be true if there are 5 planets detected
per star, but it may get considerably ambiguous for smaller
sets of ≈ 2-4 exo-planets per star. However, since
the harmonic orbit resonance model predicts a variable ratio
of the time period progression factor that fits existing data,
it should do better than the generalized Titius-Bode law with
a constant progression factor, as it was used in recent work
(Bovaird and Lineweaver 2013; Bovaird et al. 2015). | null |
http://arxiv.org/abs/1701.07929v1 | 20170127031359 | Neutrino masses, mixing, and leptogenesis in an S3 model | [
"Arturo Alvarez Cruz",
"Myriam Mondragón"
] | hep-ph | [
"hep-ph"
] |
In this work we use previous results on the masses and mixing of
neutrinos of an S3 model with three right-handed Majorana neutrinos
and three Higgs doublets, to reduce one parameter in the case when
two of the right-handed neutrinos are mass degenerate. We derive a new
parameterization for the V_PMNS mixing matrix, with a new set of
parameters, in the more general case where the right-handed
neutrino masses are different. With these results, we calculate
leptogenesis and the associated baryogenesis in the model in the two
different scenarios. We show that it is possible to have enough
leptogenesis to explain the baryonic asymmetry with right-handed
neutrino masses above 10^6 GeV.
§ INTRODUCTION
The Standard Model (SM) is extremely successful; nevertheless, the
discovery of neutrino masses and mixing in neutrino oscillation
experiments in 1998 <cit.> presented evidence that it is
necessary to go beyond it. Even before this discovery, the number of
free parameters and the hierarchy problem, among others, had prompted
attempts to find a more fundamental theory, of which the SM is the
low-energy limit <cit.>.
Some of the goals of these new models are to understand the large
differences in the Yukawa couplings of the different fermions, the
hierarchy between the fundamental particles, and the amount of CP
violation and the structure of the CKM matrix <cit.>.
A popular way to approach these problems is to build models with
Non-Abelian flavor symmetries, often supplemented with extra Higgs
doublets. Common symmetries in flavor theories are, among many others,
A4, Q6 or S3
<cit.>.
The reason is that these models achieve in a natural way the Nearest
Neighbour Interaction textures in the fermion mass matrices
<cit.>. The S3 extension of the SM
with three Higgs doublets (S3-3H)
<cit.> is a model in which a
symmetry on the permutation of three objects is imposed, which in
additon to the SM particles has another two Higgs doublets, as well as
three right-handed Majorana neutrinos, which are related to the left
ones through the seesaw mechanism (type I).
There has been a lot of work done on various S3 models (see for instance
<cit.>),
some of this work reproduces the CKM and PMNS matrices in agreement
with the current experimental data
<cit.>,
and there have also been studies of leptogenesis in a softly broken S3
model <cit.>. Nevertheless, most of this work has been
done in the case where two right-handed neutrinos are mass degenerate. It
is therefore an interesting question to extend the model and explore the
possible new results of a generalization that takes into account
both degenerate and non-degenerate right-handed neutrino masses.
Following the idea of previous work <cit.>, we extend the analysis on the generalization of the S3-3H model.
Another question that the SM fails to explain is the observed baryon asymmetry.
It is well known that there are more baryons than antibaryons in the Universe. Nucleosynthesis is a solid and consistent
model of the creation of the nuclei in the early Universe, which
predicts a baryon density of
η=(η_b-η_b̄)/η_γ=(2.6-6.2)×10^-10.
Measurements of the Cosmic Background Radiation <cit.> show a density of
η=(6.1±0.3)×10^-10,
in full agreement with the baryon density predicted by Nucleosynthesis <cit.>.
The idea of explaining the baryon asymmetry through a dynamical process
was proposed by Sakharov in 1967 <cit.>. The present
cosmological observations favour the idea that the matter-antimatter asymmetry
of the Universe may be explained in terms of a dynamical generation
mechanism, called baryogenesis. Also, it has been realized that a
successful model of baryogenesis cannot occur within the Standard
Model (SM).
Leptogenesis is a mechanism which generates the baryon asymmetry by
first creating a leptonic asymmetry, which is then partially converted
into a baryon asymmetry by B + L violating electroweak sphaleron transitions <cit.>.
Several things are needed for the occurrence of leptogenesis:
* Heavy right-handed neutrinos.
* Majorana-type neutrinos.
* Decay of the right-handed neutrinos to the left-handed ones.
According to the original proposal of Fukugita and Yanagida
<cit.>, this mechanism also satisfies all of
Sakharov's conditions <cit.> in order to produce a net
baryon asymmetry (for reviews see for instance <cit.>).
In this paper we explore the possibility of leptogenesis in the S3-3H
model, with degenerate and non-degenerate right-handed neutrino
masses, and calculate the associated baryogenesis. We first study the
case where two of the right-handed neutrino masses are degenerate, and
then the more general case where all the right-handed neutrino masses
are different. We scan the parameter space to find the leptogenesis
and associated baryogenesis dependence on the free parameters of the
model. We find that there is a region of parameter space where enough
baryogenesis is produced through leptogenesis to explain the
baryon asymmetry of the Universe.
The paper is organized as follows: in section 2, the S3
model is introduced as well as some of its most important results. In
section 3 it is shown how to produce leptogenesis in the S3-3H model, and
the resultant baryogenesis is also computed. At the end, in section 4,
we conclude summarizing our main results.
§ S3-3H MODEL
In the Standard Model analogous fermions in different generations have
identical couplings to all gauge bosons of the strong, weak, and
electromagnetic interactions <cit.>.
The group S3 consists of the six possible
permutations of three objects (f_1, f_2, f_3), and is the smallest
discrete non-Abelian group. It has one 2-dimensional irreducible
representation (irrep) and two 1-dimensional ones. A triplet
decomposes accordingly into the singlet and doublet components
F_s = f_1 + f_2 + f_3 , F_d1 = -f_1 - f_2 + 2 f_3 ,
F_d2 = f_2 - f_3 .
We can associate the particles in the model to doublets or to singlets with the following rules.
The direct product of two doublets p_D^T = (p_D1 , p_D2 ) and q_D^T = (q_D1 , q_D2 ) may be decomposed
into the direct sum of two singlets r_s and r_s' , and one doublet r_D^T where
r_s = p_D1 q_D1 + p_D2 q_D2 , r_s' = p_D1 q_D2 - p_D2 q_D1 ,
r_D^T = (r_D1, r_D2) = (p_D1 q_D2 + p_D2 q_D1 , p_D1 q_D1 - p_D2 q_D2).
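These product rules can be verified symbolically; the sympy check below is
ours, using a real orthogonal basis for the doublet generators (a 120°
rotation and a reflection) that we choose for illustration — any equivalent
basis works:

    # Symbolic check of the S3 doublet product rules: r_s invariant,
    # r_s' flips sign under reflections, and the doublet closes on itself.
    import sympy as sp

    p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2')
    c, s = sp.Rational(-1, 2), sp.sqrt(3)/2
    R = sp.Matrix([[c, -s], [s, c]])       # rotation by 2*pi/3
    B = sp.Matrix([[1, 0], [0, -1]])       # reflection

    def products(p, q):
        rs = p[0]*q[0] + p[1]*q[1]                         # singlet r_s
        ra = p[0]*q[1] - p[1]*q[0]                         # singlet r_s'
        rD = sp.Matrix([p[0]*q[1] + p[1]*q[0],
                        p[0]*q[0] - p[1]*q[1]])            # doublet r_D
        return rs, ra, rD

    p, q = sp.Matrix([p1, p2]), sp.Matrix([q1, q2])
    rs, ra, rD = products(p, q)
    for G, sign, M in [(R, 1, R), (B, -1, sp.diag(-1, 1))]:
        rs2, ra2, rD2 = products(G*p, G*q)
        assert sp.simplify(rs2 - rs) == 0                  # invariant
        assert sp.simplify(ra2 - sign*ra) == 0             # picks up det(G)
        assert sp.simplify(rD2 - M*rD) == sp.zeros(2, 1)   # transforms as 2
    print('S3 product rules verified')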
Since the Standard Model has only one Higgs SU(2)_L doublet, which can only be an S_3
singlet, it gives mass to the particles in the S_3 singlet representation.
To give mass to the rest of the particles we extend the Higgs sector
of the theory, by adding two more Higgs doublets.
The quark, lepton, and Higgs fields are
Q^T=(u_L, d_L), u_R, d_R,
L^T=(ν_L, e_L), e_R, ν_R, and H.
All of the fields have three species, and we assume that each one
forms a reducible representation 1_S ⊕ 2. The first two
generations will be assigned to the doublet S3 irrep, and the third
generation to the singlet. This applies to quarks, leptons, Higgs
fields,
and right-handed neutrinos. The doublets carry capital indices I and J, which
run from 1 to 2, and the singlets are denoted by Q_3 , u_3R , d_3R , L_3 , e_3R , ν_3R and H_S.
The subscript 3 denotes the singlet representation and not the third generation.
The most
general renormalizable Yukawa interactions of this model are given by <cit.>
L_Y=L_Y_D+L_Y_U+L_Y_E+L_Y_ν
where
L_Y_D = -Y^d_1 Q_I H_S d_IR - Y^d_3 Q_3 H_S d_3R
 - Y^d_2 [Q_I κ_IJ H_1 d_JR - Q_I η_IJ H_2 d_JR]
 - Y^d_4 Q_3 H_I d_IR - Y^d_5 Q_I H_I d_3R + h.c.
L_Y_U = -Y^u_1 Q_I (iσ_2 H^∗_S u_IR) - Y^u_3 Q_3 (iσ_2 H^∗_S u_3R)
 - Y^u_2 [Q_I κ_IJ (iσ_2 H^∗_1 u_JR) - Q_I η_IJ (iσ_2 H^∗_2 u_JR)]
 - Y^u_4 Q_3 (iσ_2 H^∗_I u_IR) - Y^u_5 Q_I (iσ_2 H^∗_I u_3R) + h.c.
L_Y_E = -Y^e_1 L_I H_S e_IR - Y^e_3 L_3 H_S e_3R
 - Y^e_2 [L_I κ_IJ H_1 e_JR - L_I η_IJ H_2 e_JR]
 - Y^e_4 L_3 H_I e_IR - Y^e_5 L_I H_I e_3R + h.c.
L_Y_ν = -Y^ν_1 L_I (iσ_2 H^∗_S ν_IR) - Y^ν_3 L_3 (iσ_2 H^∗_S ν_3R)
 - Y^ν_2 [L_I κ_IJ (iσ_2 H^∗_1 ν_JR) - L_I η_IJ (iσ_2 H^∗_2 ν_JR)]
 - Y^ν_4 L_3 (iσ_2 H^∗_I ν_IR) - Y^ν_5 L_I (iσ_2 H^∗_I ν_3R) + h.c.,
with
κ = [ 0 1; 1 0 ] , η = [ 1 0; 0 -1 ].
Furthermore, we add to the Lagrangian the Majorana mass terms for the right-handed neutrinos
L_M = -M_1 ν^T_1R C ν_1R - M_2 ν^T_2R C ν_2R - M_3 ν^T_3R C ν_3R.
Due to the presence of three Higgs fields, the Higgs potential V_H (H_S , H_D ) is more
complicated than that of the Standard Model <cit.>. In addition to the S3
symmetry, under certain conditions the Higgs potential
exhibits a permutational symmetry
Z_2 : H_1 ↔ H_2, which is not a subgroup of the flavor
group S3 <cit.>. The model also has an Abelian discrete symmetry
that we will use to impose selection rules on the Yukawa couplings in the
leptonic sector. In this paper, we will assume
that the vacuum respects the accidental Z_2 symmetry of
the Higgs potential and that
<H_1>=<H_2>.
With these assumptions, the Yukawa interactions, eqs. (<ref>)-(<ref>) yield mass matrices, for
all fermions in the theory, of the general form
M=
[ μ_1+μ_2 μ_2 μ_5; μ_2 μ_1-μ_2 μ_5; μ_4 μ_4 μ_3; ].
The Majorana mass for the left-handed neutrinos ν_L is generated by the type I seesaw mechanism.
The corresponding mass matrix is given by
M_ν=M_ν DM^-1(M_ν D)^T,
where M = diag (M_1 , M_1 , M_3 ).
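A minimal numerical illustration of this seesaw relation is sketched below;
the Dirac and Majorana entries are toy placeholders (not fits of this model)
chosen only to exhibit the suppression of the light masses, with M_1 = M_2
as in the degenerate case:

    # Type-I seesaw sketch: M_nu = M_D M^-1 M_D^T with toy numbers.
    import numpy as np

    MD = 1e-4*np.array([[2.0, 1.0, 0.5],      # Dirac mass matrix (GeV), toy
                        [1.0, 1.5, 0.5],
                        [0.5, 0.5, 3.0]])
    M = np.diag([1e6, 1e6, 5e6])              # heavy Majorana masses (GeV)
    Mnu = MD @ np.linalg.inv(M) @ MD.T        # light Majorana mass matrix
    print(np.sort(np.abs(np.linalg.eigvalsh(Mnu)))*1e9)   # masses in eV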
In principle, all entries in the mass matrices can be complex since there is no restriction coming
from the S3 flavor symmetry.
The mass matrices are diagonalized by bi-unitary transformations as
U^†_d(u,e)L M_d(u,e) U_d(u,e)R = diag(m_d(u,e), m_s(c,μ), m_b(t,τ))
U_ν^T M_ν U_ν =diag(m_ν 1,m_ν 2,m_ν 3).
The entries in this matrix are complex numbers, so the physical masses are their absolute
values.
The mixing matrices are, by definition,
V_CKM=U^†_uL U_dL, V_PMNS=U^†_eL U_ν K,
where K is defined as the diagonal matrix that removes the phases of the diagonal neutrino mass matrix,
diag(m_ν 1,m_ν 2,m_ν 3)=K^†diag(|m_ν 1|,|m_ν 2|,|m_ν 3|)K^†.
A further reduction of the number of parameters in the leptonic sector may be achieved by
means of an Abelian Z_2 symmetry. A possible set of charge assignments of Z_2, compatible with
the experimental data on masses and mixings in the leptonic sector is given in Table I.
The Z_2 assignments forbid the following Yukawa couplings
Y_1^e=Y_3^e=Y_1^ν=Y_5^ν.
Therefore, the corresponding entries in the mass matrices vanish.
§.§ Mass matrix for the charged leptons
Under these assumptions, the mass matrix of the charged leptons takes the form
M_e=m_τ[ μ̃_̃2̃ μ̃_̃2̃ μ̃_̃5̃; μ̃_̃2̃ -μ̃_̃2̃ μ̃_̃5̃; μ̃_̃4̃ μ̃_̃4̃ 0; ].
The unitary matrix U_eL that enters in the definition of the mixing matrix, V_PMNS, is calculated from
U^†_eLM_e M_e^† U_eL=diag(m^2_e,m^2_μ,m^2_τ),
where m_e, m_μ and m_τ are the masses of the charged leptons, and
M_e M_e^†=m^2_τ[ 2|μ̃_2|^2+|μ̃_5|^2 |μ̃_5|^2 2|μ̃_2||μ̃_4|e^-i δ_e; |μ̃_5|^2 2|μ̃_2|^2+|μ̃_5|^2 0; 2|μ̃_2||μ̃_4|e^i δ_e 0 2|μ̃_4|^2; ].
Notice that this matrix only has one phase factor. The parameters |μ̃_̃2̃|, |μ̃_̃4̃| and |μ̃_̃5̃| may readily be expressed in terms of the charged lepton masses. From the invariants of M_e M^†_e, we get the set of equations <cit.>
Tr(M_e M^†_e)=m^2_e+m^2_μ+m^2_τ=m^2_τ[4|μ̃_̃2̃|^2+2(|μ̃_̃4̃|^2+|μ̃_̃5̃|^2)]
ξ(M_e M^†_e)= m^2_τ(m^2_e+m^2_μ)+m_e^2 m_μ^2
= 4m^4_τ[|μ̃_2|^4+|μ̃_2|^2(|μ̃_4|^2+|μ̃_5|^2)+|μ̃_4|^2|μ̃_5|^2]
det(M_eM_e^†)=m_e^2m_μ^2m_τ^2=4 m^6_τ|μ̃_2|^2|μ̃_4|^2|μ̃_5|^2,
where ξ(M_eM_e^†)=1/2[(Tr(M_eM_e^†))^2-Tr((M_eM_e^†)^2)].
Solving these equations for |μ̃_̃2̃|,|μ̃_̃4̃| and |μ̃_̃5̃|, we obtain
|μ̃_̃2̃|^2=1/2m_e^2+m^2_μ/m^2_τ-m_e^2m_μ^2/m^2_τ(m^2_e+m^2_μ)+β.
In this expression, β is the smallest solution of the equation
β^3-1/2(1-2y+6x/y)β^2-1/4(y-y^2-4z/y+7z-12z^2/y^2)β-
1/8yz-1/2z^2/y^2+3/4z^2/y-z^3/y^3=0
where y=(m_e^2+m_μ^2)/m_τ^2 and z=m^2_μ m^2_e / m^4_τ.
A good order-of-magnitude estimate of β is obtained from <cit.>
β≃ -m_μ^2 m_e^2/2m^2_τ(m^2_τ-(m^2_τ+m^2_e)).
The parameters |μ̃_4|^2 and |μ̃_5|^2 are then given in terms of |μ̃_2|^2 by
|μ̃_4,5|^2= 1/4(1-m^2_μ+m^2_e/m_τ^2+4m_e^2m^2_μ/m_τ^2(m_e^2+m^2_μ)-β)
±1/4√((1-m^2_μ+m^2_e/m_τ^2+4m_e^2m^2_μ/m_τ^2(m_e^2+m^2_μ)-β)^2-4m_μ^2 m_e^2/m^4_τ1/|μ̃_2|^2).
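As a quick numerical sanity check of this reparametrization, one can solve the three invariant equations directly. The following sketch is ours, not part of the original analysis; the solver tolerances, the starting point and the ordering |μ̃_4|≥|μ̃_5| are illustrative choices.

# Sketch: solve the Tr, xi and det equations of M_e M_e^dagger for
# a2 = |mu2~|^2, a4 = |mu4~|^2, a5 = |mu5~|^2. Residuals are normalized
# for conditioning; masses in GeV. A solution with a4 <-> a5 swapped
# is equally valid.
import numpy as np
from scipy.optimize import fsolve

m_e, m_mu, m_tau = 0.5109989461e-3, 0.1056583745, 1.77686

targets = np.array([m_e**2 + m_mu**2 + m_tau**2,                    # Tr
                    m_tau**2*(m_e**2 + m_mu**2) + m_e**2*m_mu**2,   # xi
                    m_e**2*m_mu**2*m_tau**2])                       # det

def residuals(p):
    a2, a4, a5 = p
    tr  = m_tau**2*(4*a2 + 2*(a4 + a5))
    xi  = 4*m_tau**4*(a2**2 + a2*(a4 + a5) + a4*a5)
    det = 4*m_tau**6*a2*a4*a5
    return (np.array([tr, xi, det]) - targets)/targets

a2, a4, a5 = fsolve(residuals, x0=[0.5*(m_mu/m_tau)**2, 0.5, 1e-7])
print(a2, a4, a5, residuals([a2, a4, a5]))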
Once M_eM_e^† has been reparametrized in terms of the charged
lepton masses, it is straightforward to compute U_eL also as a
function of the lepton masses. Here we will write the result to order (m_μ m_e /m^2_τ)^2
and x^4, where x=m_e/m_μ,
M_e≃ m_τ[ 1/√(2)m̃_̃μ̃/√(1+x^2) 1/√(2)m̃_̃μ̃/√(1+x^2) 1/√(2)√(1+x^2-m̃_̃μ̃^̃2̃)/√(1+x^2); 1/√(2)m̃_̃μ̃/√(1+x^2) -1/√(2)m̃_̃μ̃/√(1+x^2) 1/√(2)√(1+x^2-m̃_̃μ̃^̃2̃)/√(1+x^2); m̃_̃ẽ(1+x^2)/√(1+x^2-m̃_̃μ̃^̃2̃)e^iδ _e m̃_̃ẽ(1+x^2)/√(1+x^2-m̃_̃μ̃^̃2̃)e^iδ _e 0; ].
The unitary matrix U_eL that diagonalizes M_eM_e^† and
enters in the definition of the neutrino mixing matrix V_PMNS,
equation (<ref>), is
U_eL≃[ 1 0 0; 0 1 0; 0 0 e^iδ_e; ][ O_11 -O_12 O_13; -O_21 O_22 O_23; -O_31 -O_32 O_33; ],
where
U_eL≃[ O_11 -O_12 O_13; -O_21 O_22 O_23; -O_31 -O_32 O_33; ]=
[ 1/√(2) x(1+2m̃_μ^2+x^2+m̃_μ^4+2m̃_e^2)/√(1+m̃_μ^2+5x^2-m̃_μ^4-m̃_μ^6+m̃_e^12+12x^4) -1/√(2) (1-2m̃_μ^2+m̃_μ^4-2m̃_e^2)/√(1-m̃_μ^2+x^2+6m̃_μ^4-4m̃_μ^6+5m̃_e^12) 1/√(2); -1/√(2) x(1+4x^2-m̃_μ^4-2m̃_e^2)/√(1+m̃_μ^2+5x^2-m̃_μ^4-m̃_μ^6+m̃_e^12+12x^4) 1/√(2) (1-2m̃_μ^2+m̃_μ^4)/√(1-m̃_μ^2+x^2+6m̃_μ^4-4m̃_μ^6+5m̃_e^12) 1/√(2); -√(1+2x^2-m̃_μ^2-m̃_e^2)(1+m̃_μ^2+x^2-2m̃_e^2)/√(1+m̃_μ^2+5x^2-m̃_μ^4-m̃_μ^6+m̃_e^12+12x^4) -x√(1+2x^2-m̃_μ^2-m̃_e^2)(1+x^2-m̃_μ^2-2m̃_e^2)/√(1-m̃_μ^2+x^2+6m̃_μ^4-4m̃_μ^6+5m̃_e^12) m̃_e m̃_μ√(1+x^2)/√(1+x^2-m̃_μ^2); ],
and where m̃_μ=m_μ/m_τ, m̃_e=m_e/m_τ and
x=m_e/m_μ.
§.§ The mass matrix of the neutrinos
With the Z_2 selection rule (Table <ref>), the mass matrix of the Dirac neutrinos takes the form
M_ν D=
[ μ^ν_2 μ^ν_2 0; μ^ν_2 -μ^ν_2 0; μ^ν_4 μ^ν_4 μ^ν_3 ]
Then, the mass matrix for the left-handed Majorana neutrinos is obtained from the see-saw mechanism,
M_ν=M_ν DM̃^-1 (M_ν D)^T=
[ (1/M_1+1/M_2)μ_2^2 (1/M_1-1/M_2)μ_2^2 (1/M_1+1/M_2)μ_2μ_4; (1/M_1-1/M_2)μ_2^2 (1/M_1+1/M_2)μ_2^2 (1/M_1-1/M_2)μ_2μ_4; (1/M_1+1/M_2)μ_2μ_4 (1/M_1-1/M_2)μ_2μ_4 μ_4^2/M_2+μ_3^2/M_3; ] ,
where M_i are the right handed neutrino masses appearing in eq. (<ref>).
The non-Hermitian, complex, symmetric neutrino mass matrix M_ν may be brought to a
diagonal form by a bi-unitary transformation, as
U^T_ν M_ν U_ν=diag(m_ν 1 e^i ϕ _1,m_ν 2 e^i ϕ _2,m_ν 3 e^i ϕ _3),
where U_ν is the matrix that diagonalizes M_ν.
§.§.§ Neutrino matrix with degenerate masses.
In the case where M_1=M_2 the mass matrix is reduced to <cit.>
M_ν=M_ν DM̃^-1 (M_ν D)^T=
[ (1/M_1+1/M_1)μ_2^2 0 (1/M_1+1/M_1)μ_2μ_4; 0 (1/M_1+1/M_1)μ_2^2 0; (1/M_1+1/M_1)μ_2μ_4 0 μ_4^2/M1+μ_3^2/M3; ].
With this texture it is easy to calculate the U_ν matrix that
diagonalizes M_ν^† M_ν,
M^†_ν M_ν=
[ |A|^2+|B|^2 0 A^*B+B^*D; 0 |A|^2 0; AB^*+BD^* 0 |B|^2+|D|^2 ]
with A=2μ_2 ^2 / M_1, B=2 μ_2 μ_4/M_1 and D=μ_4 ^2 /M_1+μ_3 ^2 /M_3; this matrix is diagonalized by
U_ν=
[ 1 0 0; 0 1 0; 0 0 e^iδ_ν; ][ cosη sinη 0; 0 0 1; -sinη cosη 0; ].
If we require that the defining equation (<ref>) be satisfied as an identity, we get the following set
of equations:
2(μ_2^ν)^2/M_1= m_ν3 ,
2(μ_2^ν)^2/M_1= m_ν1 cos^2η+m_ν2 sin^2η,
2(μ_2^ν)(μ_4^ν)/M_1= sinηcosη(m_ν2-m_ν1)e^iδ_ν,
2(μ_4^ν)^2/M_1+2(μ_3^ν)^2/M_3= (m_ν1sin^2η+m_ν2cos^2η)e^-2iδ_ν.
Solving these equations for sinη and cosη, we find
sin^2η=m_ν 3-m_ν 1/m_ν 2-m_ν 1 cos^2η=m_ν 2-m_ν 3/m_ν 2-m_ν 1.
The unitarity of U_ν constrains sinη to be real, and thus |sinη| ≤ 1; this condition fixes the phases ϕ_1 and ϕ_2 as
|m_ν 1|sinϕ_1=|m_ν 2|sinϕ_2=|m_ν 3|sinϕ_3.
The real phase δ_ν appearing in eq. (<ref>) is not constrained by the unitarity of U_ν.
Therefore the U_ν matrix is,
U_ν=
[ 1 0 0; 0 1 0; 0 0 e^iδ_ν; ][ √(m_ν2-m_ν3/m_ν2-m_ν1) √(m_ν3-m_ν1/m_ν2-m_ν1) 0; 0 0 1; -√(m_ν3-m_ν1/m_ν2-m_ν1) √(m_ν2-m_ν3/m_ν2-m_ν1) 0; ].
Now, the mass matrix of the Majorana neutrinos, M_ν, may be written in terms of the neutrino
masses; from (<ref>) and (<ref>,<ref>,<ref>), we get
M_ν=
[ m_ν3 0 √((m_ν3-m_ν1)(m_ν2-m_ν3))e^-iδ_ν; 0 m_ν3 0; √((m_ν3-m_ν1)(m_ν2-m_ν3))e^-iδ_ν 0 (m_ν1+m_ν2-m_ν3)e^-2iδ_ν; ]
The only free parameters in these matrices, other than the neutrino
masses, are the phase ϕ_ν, implicit in m_ν1,m_ν2 and m_ν3, and the Dirac phase δ_ν.
Therefore, the theoretical mixing matrix V_PMNS , is
given by
V^th_PMNS=[ O_11cosη + O_31sinη e^iδ O_11sinη - O_31cosη e^iδ -O_21; -O_12cosη + O_32sinη e^iδ -O_12sinη - O_32cosη e^iδ O_22; O_13cosη - O_33sinη e^iδ O_13sinη + O_33cosη e^iδ O_23; ]× K.
To obtain the expressions for the mixing angles we need to match the
theoretical and PDG expressions for the V_PMNS matrix
|V_PMNS^th|=|V_PMNS^PDG|
meaning |V_ij^th|=|V_ij^PDG|.
The standard parametrization of the Particle Data Group is
V_PMNS=
[ c_12c_13 s_12c_13 s_13e^-i δ_CP; -s_12c_23-c_12s_23s_13e^i δ_CP c_12c_23-s_12s_23s_13e^i δ_CP s_23c_13; s_12s_23-c_12c_23s_13e^i δ_CP -c_12s_23-s_12c_23s_13e^i δ_CP c_23c_13; ].
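As a numerical cross-check of this matching, the PDG matrix can be built explicitly; the short sketch below is ours (the input sin^2 values are merely illustrative) and recovers the angles from the moduli of the entries, which are exactly the relations used next.

# Sketch: PDG-parametrized PMNS matrix; the angles are read back from
# the moduli of its entries, as in the matching |V^th| = |V^PDG|.
import numpy as np

def pmns(th12, th23, th13, dcp):
    s12, c12 = np.sin(th12), np.cos(th12)
    s23, c23 = np.sin(th23), np.cos(th23)
    s13, c13 = np.sin(th13), np.cos(th13)
    e = np.exp(1j*dcp)
    return np.array([
        [ c12*c13,                 s12*c13,                s13*np.conj(e)],
        [-s12*c23-c12*s23*s13*e,   c12*c23-s12*s23*s13*e,  s23*c13],
        [ s12*s23-c12*c23*s13*e,  -c12*s23-s12*c23*s13*e,  c23*c13]])

V = pmns(*np.arcsin(np.sqrt([0.304, 0.50, 0.0219])), 1.0)
print(abs(V[0, 2])**2)                          # sin^2(theta13)
print(abs(V[1, 2])**2/(1 - abs(V[0, 2])**2))    # sin^2(theta23)
print(abs(V[0, 1])**2/(1 - abs(V[0, 2])**2))    # sin^2(theta12)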
We can straightforwardly read the equation for the mixing angles with
|sinθ_13|=|O_21|≃1/√(2)x(1+4x^2-m̃_μ^4)/√(1+m̃_μ^2+5x^2-m̃_μ^4) ,
|sinθ_23|=|O_22|/√(1-O^2_21)≃1/√(2)(1-2m̃_μ^2+m̃_μ^4)/√(1-4m̃_μ^2+x^2+6m̃_μ^4) ,
and
tanθ_12=(O_11sinη-O_31cosη)/(O_31sinη+O_11cosη)
≃ -√(m_ν2-m_ν3/m_ν3-m_ν1)× (√(1+2x^2-m̃_μ^2)(1+m̃_μ^2+x^2)-1/√(2)x(1+2m̃_μ^2+4x^2)√(m_ν3-m_ν1/m_ν2-m_ν3)/√(1+2x^2-m̃_μ^2)(1+m̃_μ^2+x^2)+1/√(2)x(1+2m̃_μ^2+4x^2)√(m_ν2-m_ν3/m_ν3-m_ν1)).
We can express tanθ_12 in terms of the differences of the
square of the masses as
tan^2θ_12=(Δ m^2_12+Δ m^2_13+|m_ν 3|^2 cos^2ϕ_ν)^1/2-|m_ν 3||cosϕ_ν|/(Δ m^2_13+|m_ν 3|^2 cos^2ϕ_ν)^1/2+|m_ν 3||cosϕ_ν|
where Δ m^2_ij=m_ν i^2-m_ν j^2.
We can use the experimental values of the masses of the charged leptons and the differences of the square of the masses to fit the mixing angles,
(sin^2θ_13)^th=1.1×10^-5, (sin^2θ_13)^exp=2.19^+0.12_-0.12× 10^-2,
and
(sin^2θ_23)^th=0.499, (sin^2θ_23)^exp=0.50^+0.05_-0.05.
From expression (<ref>), we may readily derive expressions for
the neutrino masses in terms of tanθ_12, ϕ_ν and the differences of the squared masses,
|m_ν 3|=√(Δ m_13^2)/2tanθ_12cosϕ_ν(1-tan^4θ_12+r^2)/(1+tan^2θ_12)(1+tan^2θ_12+r^2) ,
|m_ν 1|=√(|m_ν 3|^2+Δ m_13^2) ,
|m_ν 2|=√(|m_ν 3|^2+Δ m_13^2(1+r^2)) ,
here r^2 = Δ m^2_12 / Δ m^2_13≈ 3 × 10^-2. This implies an inverted neutrino mass spectrum |m_ν 3 |
< |m_ν 1 | < |m_ν 2 |.
As r^2 ≪ 1, the sum of the neutrino masses is
∑^3_i=1 | m_ν_i | ≈Δ m^2_13/2cosϕ_νtanθ_12 (1 + 2 √(1 + 2 tan^2 θ_12 (2 cos^2 ϕ_ν - 1) + tan^4 θ_12) - tan^2 θ_12) .
The most restrictive cosmological upper bound <cit.> for this sum is
∑|m_ν | ≤ 0.23eV .
This upper bound and the experimentally determined values of tanθ_12 and Δ m^2_i,j, give a lower bound for
cosϕ_ν≥ 0.55
or 0 ≤ϕ_ν≤ 57^∘.
We can use equation (<ref>) again to fix the best value of ϕ_ν; with ϕ_ν=50^∘ we get
tanθ_12=0.665288
Hence, setting ϕ_ν = 50^∘ in
our formula, we find
m_ν 1 = 0.052 eV, m_ν 2 = 0.053 eV, m_ν 3 = 0.019 eV.
The computed sum of the neutrino masses is
(∑^3_i=1 |m_ν i |)^th = 0.168508 eV,
below the cosmological upper bound given in eq. (<ref>), as
expected.
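For the reader who wants to reproduce these numbers, the sketch below (ours) evaluates the mass formulas above; the mass-squared splittings are representative inputs chosen by us, since the precise values used in the fit are not quoted, so small deviations from the printed masses are expected.

# Sketch: neutrino masses from the formulas above with phi_nu = 50 deg
# and tan(theta12) = 0.665288 as quoted; dm13 and r^2 are assumed,
# representative splittings (eV^2 and dimensionless respectively).
import numpy as np

t = 0.665288
phi = np.deg2rad(50.0)
dm13, r2 = 2.5e-3, 3.0e-2

m3 = np.sqrt(dm13)/(2*t*np.cos(phi)) \
     * (1 - t**4 + r2)/((1 + t**2)*(1 + t**2 + r2))
m1 = np.sqrt(m3**2 + dm13)
m2 = np.sqrt(m3**2 + dm13*(1 + r2))
print(m3, m1, m2, m1 + m2 + m3)   # inverted ordering: m3 < m1 < m2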
The above value of ϕ_ν is in agreement with the requirements for
leptogenesis, as we will show in section 3.
One of the successes of the S3-3H model has been to predict a non-zero
angle θ_13, as well as very accurate values of the angles
θ_12 and θ_23. Nevertheless, new experimental
results have shown that the angle θ_13 is greater than the
model predicts with degenerate right-handed neutrino masses. This is
the main reason to extend the model further, to the non-degenerate
case <cit.>, where the angles fit the experimental values.
§.§.§ The mass matrix of the neutrinos without degeneration
In a more extensive analysis than <cit.>, we continue the study of the case where the right-handed neutrino masses are non-degenerate.
The effective neutrino mass matrix
m_ν is,
M_ν=M_ν DM̃^-1 (M_ν D)^T=
[ (1/M_1+1/M_2)μ_2^2 (1/M_1-1/M_2)μ_2^2 (1/M_1+1/M_2)μ_2μ_4; (1/M_1-1/M_2)μ_2^2 (1/M_1+1/M_2)μ_2^2 (1/M_1-1/M_2)μ_2μ_4; (1/M_1+1/M_2)μ_2μ_4 (1/M_1-1/M_2)μ_2μ_4 μ_4^2/M2+μ_3^2/M3; ].
We assume that the phases of the μ_3 and μ_4
terms are aligned, so that we can write M_ν in polar form as
M_ν=P M̂_ν P, with M̂_ν real and
P=diag(e^-i θ_μ 2,e^-i θ_μ 2,e^i(ca/2
-θ_μ 4)). In this way M̂_ν can be
expressed in terms of a matrix with two texture zeros of class I as
M̂_ν=μ_0 𝕀_3× 3+M_ν'
with μ_0=2 |μ_2|^2/M1.
Therefore, the U_ν matrix is P^† U_1 where U_1 is the matrix that diagonalizes M'_ν.
We can take a rotation u_π /4,
u_π /4=1/√(2)[ 1 1 0; -1 1 0; 0 0 √(2) ],
to the M'_ν matrix,
u_π /4^T M'_ν u_π /4=na
[ 0 0 1; 0 √(2)ψ (ψ_2-1) ψ_2; 1 ψ_2 μ_c ]
with na=M_2/(√(2)|μ_2||μ_4|), ψ=√(2)|μ_2|/|μ_4|, ψ_2=M_2/M_1 and μ_c=μ_4^2/M1+μ_3^2/M3-μ_0.
The Matrix that diagonalizes u_π /4^T M'_ν u_π /4 is
U_2=
[ ψ_2 n_1 ψ_2 n_2 ψ_2 n_3; -(1+μ_c λ_1-λ^2_1) n_1 -(1+μ_c λ_2-λ^2_2) n_2 -(1+μ_c λ_3-λ^2_3) n_3; ψ_2 λ_1 n_1 ψ_2 λ_2 n_2 ψ_2 λ_3 n_3 ].
where n_i is a normalization factor and λ_i is the i-th eigenvalue of M'_ν. With U_1=u_π/4 U_2, we therefore have
U_ν=
[ O'_11 O'_12 O'_13; O'_21 O'_22 O'_23; O'_31 O'_32 O'_33; ]=
[ n_1 (ψ_2+ f_1) n_2 (ψ_2+ f_2) n_3 (ψ_2+ f_3); n_1 (-ψ_2+ f_1) n_2 (-ψ_2+ f_2) n_3 (-ψ_2+ f_3); ψ_2 n_1 λ_1 ψ_2 n_2 λ_2 ψ_2 n_3 λ_3; ],
where f_i=(-1-μ_c λ_i+λ^2_i).
In the same way as in the degenerate scenario we have
|V_PMNS^th|=|V_PMNS^PDG| ,
in terms of the mixing angles with
s_13=O'_13 , s_23=O'_23/√(1-O_13^'2) , s_12=O'_12/√(1-O_13^'2) .
In the non-degenerate scenario we have three free parameters (ψ,
ψ_2, μ_c) for the neutrino matrix. In this model the
PMNS matrix may be obtained numerically. We have used the following values for the masses given in <cit.>
m_e = 0.5109989461 ± 0.0000000031 MeV,
m_μ = 105.6583745 ± 0.0000024 MeV,
m_τ = 1776.86 ± 0.12 MeV.
In order to obtain the numerical values for the three free parameters
we perform a χ^2 analysis on the parameter space to find their
best fit points
χ^2=(sin^2θ_12^th-sin^2θ_12^exp)^2/σ^2_sin^2θ_12+(sin^2θ_23^th-sin^2θ_23^exp)^2/σ^2_sin^2θ_23+(sin^2θ_13^th-sin^2θ_13^exp)^2/σ^2_sin^2θ_13,
where we have taken the following experimental values for the V_PMNS elements <cit.>
sin^2θ_12= 0.304 ± 0.014 , sin^2θ_23 = 0.50 ± 0.05 , sin^2θ_13= (2.19 ± 0.12) × 10^-2 .
The best values for the free parameters are thus found to be
ψ_2 = 1.1431 , ψ = 1.3091 , μ_c = 1.6502 eV,
at one sigma C.L. with χ^2=3.74×10^-15 as the minimal value. These correspond to the following mixing angles
sin(θ_12)^2= 0.3039, sin(θ_23)^2=0.4999,
sin(θ_13)^2=0.0218.
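A schematic version of this χ^2 scan is sketched below. It is ours, not the actual fitting code: the overall scale na is assumed to drop out of the mixing, the eigenvalue ordering (by modulus) is our own convention, and signs are removed by squaring, so the sketch should be read as illustrative only.

# Schematic chi^2 scan over (psi, psi2, mu_c) for the texture above.
import numpy as np
from scipy.optimize import minimize

u45 = np.array([[1, 1, 0], [-1, 1, 0], [0, 0, np.sqrt(2)]])/np.sqrt(2)
exp_val = np.array([0.304, 0.50, 2.19e-2])     # sin^2 th12, th23, th13
exp_sig = np.array([0.014, 0.05, 0.12e-2])

def angles(params):
    psi, psi2, mu_c = params
    M = np.array([[0, 0, 1],
                  [0, np.sqrt(2)*psi*(psi2 - 1), psi2],
                  [1, psi2, mu_c]])
    lam, U2 = np.linalg.eigh(M)                # real symmetric matrix
    U = u45 @ U2[:, np.argsort(np.abs(lam))]   # U_nu = u_{pi/4} U_2
    s13sq = U[0, 2]**2
    return np.array([U[0, 1]**2/(1 - s13sq),   # sin^2 th12
                     U[1, 2]**2/(1 - s13sq),   # sin^2 th23
                     s13sq])                   # sin^2 th13

chi2 = lambda p: np.sum(((angles(p) - exp_val)/exp_sig)**2)
res = minimize(chi2, x0=[1.31, 1.14, 1.65], method='Nelder-Mead')
print(res.x, res.fun)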
§ LEPTOGENESIS IN AN S3-3H MODEL
The Yukawa couplings of the neutrinos allow the decay of the right-handed neutrinos into the left-handed ones, through terms of the form
-Y^ν_1L_I(iσ_2H^∗_Sν_IR).
As shown in <cit.>, the asymmetry is defined to be
ϵ _1=∑_α[Γ (N_1→ℓ _α H)-Γ (N_1→ℓ̅_αH̅)]/∑_α[Γ (N_1→ℓ _α H) +Γ (N_1→ℓ̅_αH̅)] .
where Γ is the decay rate, and N_1 is the decaying right-handed neutrino.
The possible decays up to tree level are shown in fig. <ref>.
The asymmetry generated by these decays is
ϵ≃-3/8π1/(h_νh^†_ν)_11∑_i=2,3Im{ (h_νh^†_ν)^2_1i}[f(M^2_i/M^2_1)+g(M^2_i/M^2_1)].
where f comes from the vertex correction and g from the self-energy contribution,
f(x)=√(x)[1-(1+x)ln(1+x/x)] ,
g(x)=√(x)/1-x .
These functions depend strongly on the hierarchy of the heavy neutrino masses. They can lead to
a strong enhancement of the CP asymmetries if the masses M_2 and M_3 are
nearly degenerate with M_1.
The relation between the lepton and baryon asymmetry is given through the sphaleron process <cit.>
Y_B=aY_B-L=a/a-1Y_L ,
where a=(8N_f+4N_H)/(22N_f+13N_H), with N_f the number of families and N_H the number of Higgs doublets.
We can express the lepton asymmetry in terms of the CP asymmetry
Y_L=n_L-n_L̅/s=κϵ_1/g_* ,
where g_*≈ 110 is the number of relativistic degrees of freedom and
κ is obtained from solving the Boltzmann equations. It can be
parametrized in terms of K, defined as the ratio of Γ_1, the
tree-level decay width of N_1, to H, the Hubble parameter at
temperature T=M_1: K=Γ_1/H<1 describes a process out of
thermal equilibrium, and κ<1 describes the washout effect <cit.>:
κ≈0.3/K(ln(K))^0.6 for 10<K<10^6 ,
κ≈1/2√(K^2+9) for 0<K<10 .
The tree-level decay width of N_1 from the Yukawa interaction and the
Hubble parameter, in terms of the temperature T and the Planck scale
M_pl, are Γ_1=(m^†_D m_D)_11M_1/(8π v^2) and
H=1.66 g_*^1/2T^2/M_pl, respectively. At temperature T=M_1 the
ratio K is
K=M_pl/1.66√(g_*)(8π v^2)(m^†_D m_D)_11/M_1 .
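This washout bookkeeping takes only a few lines to script; in the sketch below (ours) the values of v, (m_D^†m_D)_11 and M_1 are illustrative assumptions only, and v ≈ 174 GeV is our choice of electroweak vev convention.

# Sketch: out-of-equilibrium parameter K and washout factor kappa,
# following the approximations quoted above (g_* = 110).
import numpy as np

M_PL, V, G_STAR = 1.22e19, 174.0, 110.0      # GeV, GeV, rel. d.o.f.

def K_param(mDdagmD11, M1):
    """K = Gamma_1/H evaluated at T = M1."""
    return M_PL/(1.66*np.sqrt(G_STAR)*8*np.pi*V**2) * mDdagmD11/M1

def kappa(K):
    if K < 10:
        return 1.0/(2.0*np.sqrt(K**2 + 9.0))
    return 0.3/(K*np.log(K)**0.6)            # valid for 10 < K < 1e6

K = K_param(mDdagmD11=1e-2, M1=1e7)          # illustrative inputs (GeV^2, GeV)
print(K, kappa(K))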
§.§ Baryon asymmetry in the degenerate scheme
Putting all the above ingredients together, the asymmetry for the S3-3H model is
ϵ=Im[e^2iδM_2 m_3 √(M_2(m_2-m_3)(m_3-m_1))/√(m_3)](f[M_3^2/M_1^2]+g[M_3^2/M_1^2])/8π |M_2 m_3| .
The value of the baryon asymmetry has a dependence on ϕ and the
masses of the neutrinos |m_1|,M_1,|m_2|,M_2,|m_3|,M_3, where the
masses of the right-handed neutrinos are considered real. We can
calculate the dependence of the baryon asymmetry on the phase
δ. As can be seen from eq. (<ref>) the asymmetry is a
periodic function of δ, where the masses give the
scale of the baryon asymmetry.
The maximum value of the baryon asymmetry occurs at δ=3π/4. As
figure <ref> shows, leptogenesis depends crucially
on the phase. The value of the baryon asymmetry is
determined by the masses of the light neutrinos and the ratios of
the right-handed neutrino masses. The see-saw mechanism relates the masses of
the right-handed neutrinos to the light ones,
making the right-handed neutrino masses bigger than 10^12 GeV, in order to be in agreement with the experimental data.
We calculate the asymmetry generated in the best-case scenario,
δ=3π/4. In this case we can see from fig. <ref> that right-handed
neutrino masses of order 10^7 GeV suffice to produce leptogenesis. The graph also
shows the region of resonant leptogenesis, M_1-M_3≃1/2Γ_N_1,3, where the asymmetry increases above the
one observed in the Universe, allowing even lower right-handed
neutrino masses or smaller values of the δ phase.
§.§ Non-degenerate scenario
The value of the baryon asymmetry in the non-degenerate case depends
on the phases μ_2a and μ_3a of the M_ν D matrix and on
the real masses of the neutrinos M_1, M_2, M_3, where μ_ia is
the phase of μ_i=|μ_i|e^iμ_ia. We can calculate the
dependence of the baryon asymmetry on the phases μ_2a
and μ_3a. As in the degenerate case, the scale of the baryon
asymmetry is set by the masses.
The maximum of the asymmetry is achieved in all the lines where
μ_2a-μ_3a=π /2 +nπ, where n is any integer, as can be
seen from fig. <ref>. Again, this is independent of the
neutrino masses; the masses only fix the scale of the
asymmetry. Taking the best values of the angles, we see that the
scale of the right-handed neutrino masses can be lower and
that the region of resonant leptogenesis is wider. This
gives a larger region of parameter space that reproduces the observed
baryon asymmetry.
of green correspond to more asymmetry, whereas the red regions
correspond to an excess of baryon asymmetry as compared to the one
observed in the Universe, for the maximum value of the phases.
§ CONCLUSIONS
The minimal S3-3H extension of the SM accommodates well the masses and
mixings of quarks and leptons, and naturally gives a non-zero value
for the neutrino reactor mixing angle. We re-derived previous results
on the neutrino sector with recent experimental data, taking into
account a new value of the phase ϕ to include the angle
θ_12 in the model. In the non-degenerate right-handed
neutrino mass case, we find a new parametrization of the V_PMNS
matrix and use the experimental values in a χ^2 analysis to fit
the new parameters. We find thus a new region in parameter space where
the model predicts the mixing angles correctly.
We then calculated the leptogenesis and the associated baryogenesis in
this model in the case of two right-handed degenerate neutrino masses,
and in the more general case of non-degenerate masses. We show that
there are regions in parameter space which allow leptogenesis as a
mechanism to solve the observed baryonic asymmetry with right-handed
neutrino masses starting from 10^6 GeV.
§ ACKNOWLEDGEMENTS
We acknowledge useful discussions with J. Kersten and
A. Mondragón. This work is partially supported by a UNAM grant
PAPIIT IN111115.
| The Standard Model (SM) is extremely successful; nevertheless, the
discovery of neutrino masses and mixing in neutrino oscillation
experiments in 1998 <cit.> presented evidence that it is
necessary to go beyond it. Even before this discovery, the number of
free parameters and the hierarchy problem, among others, had prompted
attempts to find a more fundamental theory, of which the SM is the
low-energy limit <cit.>.
Some of the goals of these new models are to understand the large
differences in the Yukawa couplings of the different fermions, the
hierarchy between the fundamental particles, and the amount of CP
violation and the structure of the CKM matrix <cit.>.
A popular way to approach these problems is to build models with
Non-Abelian flavor symmetries, often supplemented with extra Higgs
doublets. Common symmetries in flavor theories are, among many others,
A4, Q6 or S3
<cit.>.
The reason is that these models achieve in a natural way the Nearest
Neighbour Interaction textures in the fermion mass matrices
<cit.>. The S3 extension of the SM
with three Higgs doublets (S3-3H)
<cit.> is a model in which a
symmetry under permutations of three objects is imposed; in
addition to the SM particles it has two more Higgs doublets, as well as
three right-handed Majorana neutrinos, which are related to the
left-handed ones through the (type I) seesaw mechanism.
There has been a lot of work done on various S3 models (see for instance
<cit.>),
some of this work reproduces the CKM and PMNS matrices in agreement
with the current experimental data
<cit.>,
and there have also been studies of leptogenesis in a softly broken S3
model <cit.>. Nevertheless, most of this work has been
done in the case where two right-handed neutrinos are degenerate. It
is therefore interesting to extend the model and explore
the possible new results of a generalization, taking into account
both degenerate and non-degenerate right-handed neutrino masses.
Following the idea of previous work <cit.>, we extend the analysis on the generalization of the S3-3H model.
Another question that the SM fails to explain is the observed baryon asymmetry.
It is well known that there are more baryons than antibaryons in the Universe. Nucleosynthesis is a solid and consistent
model of the creation of the nuclei in the early Universe, which
predicts a baryonic density of
η=(η_b-η_b̅)/η_γ=(2.6-6.2)×10^-10.
Measurements of the Cosmic Background Radiation <cit.> show a density of
η=(6.1±0.3)×10^-10,
in full agreement with the baryon density from Nucleosynthesis <cit.>.
The idea of explaining the baryon asymmetry through a dynamical process
was proposed by Sakharov in 1967 <cit.>. The present
cosmological observations favour the idea that the matter-antimatter asymmetry
of the Universe may be explained in terms of a dynamical generation
mechanism, called baryogenesis. Also, it has been realized that a
successful model of baryogenesis cannot occur within the Standard
Model (SM).
Leptogenesis is a mechanism which generates the baryon asymmetry by
first creating a leptonic asymmetry, which is then partially converted
into a baryon asymmetry through B + L violating electroweak
sphaleron transitions <cit.>.
Several things are needed for the occurrence of leptogenesis:
* Heavy right-handed neutrinos.
* Majorana-type neutrinos.
* Decay of the right-handed neutrinos into the left-handed ones.
According to the original proposal of Fukugita and Yanagida
<cit.>, this mechanism also satisfies all of
Sakharov's conditions <cit.> needed to produce a net
baryon asymmetry (for reviews see for instance <cit.>).
In this paper we explore the possibility of leptogenesis in the S3-3H
model, with degenerate and non-degenerate right-handed neutrino
masses, and calculate the associated baryogenesis. We first study the
case where two of the right-handed neutrino masses are degenerate, and
then the more general case where all the right-handed neutrino masses
are different. We scan the parameter space to find the leptogenesis
and associated baryogenesis dependence on the free parameters of the
model. We find that there is a region of parameter space where enough
baryogenesis is produced through leptogenesis to explain the
baryon asymmetry of the Universe.
The paper is organized as follows: in section 2, the S3
model is introduced, together with some of its most important results. In
section 3 it is shown how leptogenesis arises in the S3-3H model, and
the resulting baryogenesis is computed. Finally, in section 4,
we conclude by summarizing our main results.
http://arxiv.org/abs/1701.07987v1 | 20170127094739 | Twist maps as energy minimisers in homotopy classes: symmetrisation and the coarea formula | [
"Charles Morris",
"Ali Taheri"
] | math.AP | [
"math.AP"
] |
Twist maps as energy minimisers in homotopy classes: symmetrisation and the coarea formula
C. Morris, A. Taheri^†
==========================================================================================
Let = [a, b] = {x: a<|x|<b}⊂^n with 0<a<b<∞ fixed be an open annulus and consider the energy
functional,
𝔽 [u; ] = 1/2∫_|∇ u|^2/|u|^2 dx,
over the space of admissible incompressible Sobolev maps
𝒜_ϕ() = { u ∈ W^1,2(, ^n) : det ∇ u = 1 a.e. in and u|_∂ = ϕ},
where ϕ is the identity map of . Motivated by the earlier works <cit.>, in this paper we examine
the twist maps as extremisers of 𝔽 over 𝒜_ϕ() and investigate their minimality properties by
invoking the coarea formula and a symmetrisation argument. In the case n=2 where 𝒜_ϕ()
is a union of infinitely many disjoint homotopy classes we establish the minimality of these extremising twists in their respective
homotopy classes a result that then leads to the latter twists being L^1-local minimisers of 𝔽 in 𝒜_ϕ().
We discuss variants and extensions to higher dimensions as well as to related energy functionals.
§ INTRODUCTION AND PRELIMINARIES
Let = [a, b] = {(x_1, ..., x_n) : a < |x| < b} with 0<a<b<∞ fixed be an open annulus in ^n and consider the energy
functional
𝔽[u; ] = 1/2∫_|∇ u|^2/|u|^2 dx,
over the space of incompressible Sobolev maps,
𝒜_ϕ () = { u ∈ W^1,2(,^n) : det ∇ u = 1 a.e. in and u|_∂ = ϕ}.
Here and in future ϕ denotes the identity map of and so the last condition in (<ref>) means that
u ≡ x on ∂ in the sense of traces.
By a twist map u on ⊂^n we mean a continuous self-map of onto itself which agrees with the identity map ϕ
on the boundary ∂ and has the specific spherical polar representation (see <cit.>-<cit.> for background and further results)
u : ( r, θ) ↦(r,Q(r)θ), x ∈.
Here r = |x| lies in [a, b] and θ = x/|x| sits on 𝕊^n-1 with Q ∈ C([a,b], SO(n)) satisfying Q(a)=Q(b)=I.
Therefore Q forms a closed loop in SO(n) based at I and for this in sequel we refer to Q as the twist loop associated with
u. Also note that (<ref>) in cartesian form can be written as
u: x ↦ Q(r) x = r Q(r) θ, x ∈.
Next subject to a differentiability assumption on the twist loop Q it can be verified that u ∈𝒜_ϕ() with its 𝔽
energy simplifying to
𝔽[Q(r) x; ] = 1/2∫_|∇ u|^2/|u|^2 dx = 1/2∫_|∇ Q(r)x|^2/|x|^2 dx
= n/2∫_dx/|x|^2 + ω_n/2∫_a^b |Q̇|^2 r^n-1 dr,
where the last equality uses |∇ [Q(r)x]|^2 = n + r^2 |Q̇θ|^2. Now as the primary task here is to search for extremising twist
maps we first look at the Euler-Lagrange equation associated with the loop energy 𝔼=𝔼[Q] defined by the last integral
in (<ref>) over the loop space {Q ∈ W^1,2([a, b]; SO(n)) : Q(a)=Q(b)=I}. Indeed this can be shown to take the form
(see below for justification)
d/dr[ ( r^n-1Q̇)Q^t ] =0,
with solutions
Q(r) = exp [-β(r) A] P,
where P ∈ SO(n), A ∈^n × n is skew-symmetric and β=β(|x|) is described for a ≤ r ≤ b by
β(r) = ln 1/r n=2,
r^2-n/(n-2) n≥ 3.
Now to justify (<ref>) fix Q∈ W^1,2([a,b], SO(n)) and for F ∈ W^1,2_0([a,b], ^n × n)
set H= (F-F^t)Q and Q_ϵ = Q + ϵ H. Then Q_ϵ^tQ_ϵ = I + ϵ^2 H^tH and
d/dϵ∫_a^b 2^-1|Q̇_ϵ|^2 r^n-1 dr |_ϵ = 0 = ∫_a^b ⟨Q̇ , (Ḟ - Ḟ^t )Q + (F - F^t )Q̇⟩ r^n-1 dr
= ∫_a^b ⟨Q̇ , (Ḟ - Ḟ^t )Q ⟩ r^n-1 dr
= ∫_a^b ⟨d/dr (r^n-1Q̇Q^t) , (F - F^t ) ⟩ dr = 0,
and so the arbitrariness of F with an orthogonality argument gives (<ref>).
Returning to (<ref>) it is not difficult to see that the Euler-Lagrange equation associated with 𝔽
over 𝒜_ϕ() is given by the system (cf. Section <ref>)
|∇ u|^2/|u|^4 u + div {∇ u/|u|^2 - p(x) cof ∇ u } = 0, u=(u_1, ..., u_n),
where p=p(x) is a suitable Lagrange multiplier. Here a further analysis reveals that out of the solutions Q=Q(r) to
(<ref>) just described only those twist loops in the form
Q(r) = R diag[ R[g](r), ..., R[g](r)]R^t, R∈𝐒𝐎(n),
when n is even and Q(r) ≡ I (a ≤ r ≤ b) when n is odd can grant extremising twist maps u for the original energy
(<ref>). For clarification R[g] denotes the SO(2) matrix of rotation by angle g:
R [g] = [ [ cos g sin g; -sin g cos g; ]].
Indeed direct computations give the angle of rotation g=g(r) to be
g(r) = 2π k log(r/a)/log(b/a) + 2π m, k, m ∈,
when n=2 and
g(r) = 2π k (r/a)^2-n-1/(b/a)^2-n-1 + 2π m, k, m ∈,
when n ≥ 4 even. (See also <cit.>, <cit.>, <cit.> for complementing and further results.)
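These extremising loops are easy to realize concretely. The sketch below is ours (parameter values are illustrative): it builds Q(r)=exp(-g(r)J) for n=2 and n=4 via the matrix exponential and confirms the boundary conditions Q(a)=Q(b)=I for integer k.

# Sketch: extremising twist loops Q(r) = exp(-g(r) J) for n = 2, 4,
# with the angle functions g quoted above (m = 0 for simplicity).
import numpy as np
from scipy.linalg import expm

a, b, k = 1.0, 2.0, 1
J2 = np.array([[0., -1.], [1., 0.]])
J4 = np.kron(np.eye(2), J2)                  # block-diag(A_1, A_1)

def g(r, n):
    if n == 2:
        return 2*np.pi*k*np.log(r/a)/np.log(b/a)
    return 2*np.pi*k*((r/a)**(2 - n) - 1)/((b/a)**(2 - n) - 1)

for n, J in [(2, J2), (4, J4)]:
    Qa, Qb = expm(-g(a, n)*J), expm(-g(b, n)*J)
    print(n, np.allclose(Qa, np.eye(n)), np.allclose(Qb, np.eye(n)))  # True True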
Our point of departure is (<ref>)-(<ref>) and the aim is to study the minimising properties the twist maps calculated
above. Of particular interest is the case n=2 where the space 𝒜_ϕ() admits multiple homotopy classes (A_k : k ∈).
Here direct minimisation of the energy over these classes gives rise to a scale of associated minimisers (u_k). Using a symmetrisation
argument and the coarea formula we show that the twist maps u_k with twist angle g as presented in
(<ref>) are indeed energy minimisers in A_k and as a result also L^1 local minimisers of 𝔽 over
𝒜_ϕ(). We discuss variants and extensions including a larger scale of energies where similar techniques can be applied
to establish minimality properties in homotopy classes.
§ THE HOMOTOPY STRUCTURE OF THE SPACE OF SELF-MAPS 𝔄=𝔄()
The rich homotopy structure of the space of continuous self-maps of the annulus =[a, b] ⊂^n will prove
useful later on in constructing local energy minimisers. For this reason here we give a quick outline of the main tools and results and refer the
reader to <cit.> for further details and proofs. To this end set
𝔄=𝔄 ()= { f ∈ C(, ) : f|_∂ =ϕ}
equipped with the uniform topology. A pair of maps f_0, f_1 ∈𝔄 are homotopic iff there exists
H ∈ C([0, 1] ×; ) such that, firstly, H(0, x) = f_0 (x) for all x∈,
secondly, H(1, x) = f_1 (x) for all x∈ and finally H(t, x) = x for all t ∈ [0,1], x ∈∂.
The equivalence class consisting of all g ∈𝔄 homotopic to a given f ∈𝔄 is referred to as the homotopy
class of f and is denoted by [f]. Now the homotopy classes {[f] : f ∈𝔄} can be characterised as follows depending
on whether n=2 or n ≥ 3.
* (n=2) Using polar coordinates, for f ∈𝔄 and for θ∈ [0, 2 π] (fixed), the ^1-valued curve
γ_θ defined by
γ_θ: [a, b] →^1 ⊂^2, γ_θ: r ↦ f|f|^-1(r, θ),
has a well-defined index or winding number about the origin. Furthermore, due to continuity of f, this index is independent of the
particular choice of θ∈ [0, 2 π]. This assignment of an integer (or index) to a map f ∈𝔄 will be denoted by
f ↦ deg(f|f|^-1).
Note firstly that this integer also agrees with the Brouwer degree of the map resulting from identifying ^1 ≅ [a, b]/{a, b},
justified as a result of γ_θ(a)=γ_θ(b) and secondly that for a differentiable curve (taking advantage of the
embedding ^1 ⊂) we have the explicit formulation
deg (f|f|^-1) = 1/2 π i∫_γdz/z.
(n=2). The map deg: { [f] : f ∈𝔄}→
is bijective. Moreover, for any pair of maps f_0, f_1 ∈𝔄, we have
[f_0]=[f_1] deg(f_0 |f_0|^-1) = deg(f_1|f_1|^-1).
* (n ≥ 3) Using the identification ≅ [a,b] ×^m where for ease of notation we have set m=n-1 it is plain
that for f ∈𝔄 the map
[Here as usual ϕ denotes the identity map of the m-sphere and C_ϕ(^m, ^m)
is the path-connected component of C(^m, ^m) containing ϕ.]
ω: [a, b] → C_ϕ(^m, ^m), ω: r ↦ f|f|^-1 (r, ·),
uniquely defines an element of the fundamental group π_1[ C_ϕ(^m, ^m)].
By considering the action of SO(n) on ^m – viewed as its group of
orientation preserving isometries, i.e., through the assignment,
E: ξ∈ SO(n) ↦ω∈ C(^m, ^m),
where
ω(x) = E[ξ](x) = ξ x, x ∈𝕊^m,
it can be proved that the latter assignment induces a group isomorphism on the level of the fundamental groups, namely,
E^⋆: π_1[ SO(n), I_n] ≅π_1 [ C_ϕ(^m, ^m), ϕ] ≅_2.
Thus, summarising, we are naturally lead to the assignment of an integer mod 2 to any f ∈𝔄 which will
be denoted by
f ↦ deg_2 (f|f|^-1) ∈_2.
(n ≥ 3) The degree mod 2 map deg_2: { [f] : f ∈𝔄}→_2
is bijective. Moreover, for a pair of maps f_0, f_1 ∈𝔄, we have
[f_0]=[f_1] deg_2(f_0 |f_0|^-1) = deg_2(f_1|f_1|^-1).
§ A COUNTABLE FAMILY OF L^1 LOCAL MINIMISERS OF 𝔽 WHEN N=2
When n=2 by Lebesgue monotonicity and degree theory (see <cit.> as well as <cit.>,<cit.>,<cit.>,<cit.>)
every map u in 𝒜 = 𝒜_ϕ() has a representative (again denoted u)
in 𝔄. As a consequence we can introduce the components – hereafter called the
homotopy classes,
A_k := {u ∈𝒜: deg (u|u|^-1) = k }, k ∈.
Evidently A_k are pairwise disjoint and their union (over all k ∈) gives 𝒜. Furthermore it can be seen
without difficulty that each A_k is W^1,2-sequentially weakly closed and that for u ∈ A_k and s>0 there
exists δ=δ(u, s)>0 with
{v : 𝔽[v]< s}∩𝔹^L^1_δ(u) ⊂ A_k.
Here 𝔹^L^1_δ(u) = {v ∈𝒜 : ||v-u||_L^1 < δ}, that is, the L^1-ball in 𝒜 centred at u.
Indeed for the sequential weak closedness fix k and pick (u_j : j ≥ 1) ⊂ A_k so that u_j ⇀ u in W^1,2.
Then by a classical result of Y. Reshetnyak
det ∇ u_j ⇀ det ∇ u
(as measures) and so u ∈𝒜 while u_j → u uniformly on gives by Proposition <ref>
that u ∈ A_k. For the second assertion arguing indirectly and assuming
the contrary there exist u ∈ A_k, s>0 and (v_j : j ≥ 1) in 𝒜 such that 𝔽[v_j; ] < s
and ||v_j - u||_L^1→ 0
while v_j ∉ A_k. However by passing to a
subsequence (not re-labeled) v_j ⇀ u in W^1,2(, ^2) and as above
v_j → u uniformly on . Hence again by Proposition <ref>, v_j ∈ A_k for large enough j
which is a contradiction. □
Now in view of the sequential weak lower semicontinuity of 𝔽 in 𝒜 (see below) an application
of the direct methods of the calculus of variations leads to the following existence and multiplicity result.
(Local minimisers) Let = [a, b] ⊂^2 and for k ∈ consider the homotopy classes A_k as defined by
(<ref>). Then there exists u= u (x; k) ∈ A_k such that
𝔽[u; ] = inf_ v ∈ A_k𝔽[v; ].
Furthermore for each such minimiser u there exists δ=δ(u)>0 such that
𝔽[u; ] ≤𝔽[v; ],
for all v ∈𝒜_ϕ() satisfying ||u-v||_L^1<δ.
Fix k and pick (v_j) ⊂ A_k an infimizing sequence: 𝔽[v_j] ↓ L:= inf_A_k𝔽[·]. Then as L<∞ and
a ≤ |v(x)| ≤ b for v ∈𝒜 it follows that by passing to a subsequence (not re-labeled) v_j ⇀ u in W^1,2(, ^2) and uniformly in
where by the above discussion u ∈ A_k. Now
| ∫_|∇ v_j|^2/|v_j|^2 - ∫_|∇ v_j|^2/|u|^2| ≤∫_ |∇ v_j|^2 ( ||v_j|^2-|u|^2|/|u|^2 |v_j|^2)
≤sup_||v_j|^2-|u|^2|/|u|^2 |v_j|^2∫_ |∇ v_j|^2 → 0
as j ↗∞ together with
∫_|∇ u|^2/|u|^2≤lim inf∫_|∇ v_j|^2/|u|^2
gives the desired lower semicontinuity of the 𝔽 energy on 𝒜_ϕ() as claimed, i.e.,
𝔽[u; ]=∫_|∇ u|^2/2|u|^2≤lim inf∫_|∇ v_j|^2/2|v_j|^2 = lim 𝔽[v_j; ].
As a result L ≤𝔽[u] ≤lim inf𝔽[v_j] ≤ L and so u is a minimiser as required.
To justify the second assertion fix k ∈ and u as above and with s=1+𝔽[u] pick δ>0 as in the discussion prior to the theorem.
Then any v ∈𝒜 satisfying ||u-v||_L^1 < δ also satisfies (<ref>) [otherwise
𝔽[v]<𝔽[u] < s implying that v ∈ A_k and hence in view of u being a minimiser, 𝔽[v] ≥𝔽[u]
which is a contradiction.] □
§ TWIST MAPS AND THE EULER-LAGRANGE EQUATION ASSOCIATED WITH 𝔽
The purpose of this section is to formally derive the Euler-Lagrange equation associated with 𝔽
over 𝒜_ϕ(). Note that 𝔽[u] can in principle be infinite if |u| is too small or
zero, however, for twist maps or more generally L^n-integrable maps in 𝒜_ϕ, |u| is bounded away from zero
as u is a self-map of onto itself. Moreover ∇ u is
L^1-integrable for the latter maps but not in general for maps u of Sobolev class W^1,2 (with n ≥ 3).
Now the derivation uses the Lagrange multiplier method and proceeds formally by considering the unconstrained functional
𝕂[u; ] = ∫_[ |∇ u|^2/2|u|^2 - p(x) (det ∇ u -1 ) ] dx,
We can calculate the first variation of this energy by setting
d/dε 𝕂[u_ε; ] |_ε=0=0, where u ∈𝒜_ϕ() is sufficiently
regular and satisfies |u| ≥ c>0 in , u_ε=u+εφ for all φ∈ C^∞_c(,^n) and
ε∈ sufficiently small, hence obtaining,
0 = d/d ε∫_[ ∇ u_ε^2/2u_ε^2
- p(x) (∇ u_ε -1 ) ] dx |_ε=0
= ∫_{∑_i,j=1^n [ 1/u^2∂ u_i/∂ x_j - p(x) [ cof∇ u]_ij]
∂φ_i/∂ x_j - ∑_i=1^n ∇ u^2/u^4u_i φ_i } dx
= ∫_{ - ∑_i,j=1^n ∂/∂ x_j[ 1/u^2∂ u_i/∂ x_j
- p(x) [ cof∇ u]_ij]
φ_i - ∑_i=1^n ∇ u^2/u^4 u_i φ_i } dx
= ∫_ - ∑_i=1^n{∇ u^2/u^4 u_i + ∑_j=1^n ∂/∂ x_j[ 1/u^2∂ u_i/∂ x_j - p(x) [ cof∇ u]_ij] }φ_i dx.
As this is true for every compactly supported φ as above an application of the fundamental lemma
of the calculus of variation results in the Euler-Lagrange system for u=(u_1, ..., u_n) in :
|∇ u|^2/|u|^4 u + div {∇ u/|u|^2 - p(x) cof ∇ u } = 0,
where the divergence operator is taken row-wise. Proceeding further an application of the Piola identity on the
cofactor term gives (with 1 ≤ i ≤ n)
∇ u^2/u^4 u_i + ∑_j=1^n {∂/∂ x_j( 1/u^2∂ u_i/∂ x_j) - [ cof ∇ u]_ij∂ p/∂ x_j} = 0.
Next expanding the differentiation further allows us to write
0 = ∇ u^2/u^4 u_i + ∑_j=1^n {1/u^2∂^2 u_i /∂ x_j^2
- 2/u^4∑_k=1^n ∂ u_i/∂ x_j∂ u_k /∂ x_j u_k
- [ cof ∇ u]_ij∂ p/∂ x_j}
= ∇ u^2/u^4 u_i + ∑_j=1^n {1/u^2∂^2 u_i /∂ x_j^2
- 2/u^4∂ u_i/∂ x_j [∇ u^tu]_j - [ cof ∇ u]_ij∂ p/∂ x_j}.
Finally transferring back into vector notation and invoking the incompressibility condition det ∇ u =1 it follows in turn that
Δ u/|u|^2 + |∇ u|^2/|u|^4 u - 2/|u|^4∇ u (∇ u)^tu = ( cof ∇ u) ∇ p,
and subsequently
(∇ u)^t/|u|^2[ Δ u + |∇ u|^2/|u|^2 u - 2/|u|^2∇ u (∇ u)^tu ] = ∇ p.
Thus the Euler-Lagrange system (<ref>) is equivalent to (<ref>) that in particular asks for the nonlinear term on the left of (<ref>) to be a gradient field
in . Recall from earlier discussion that restricting 𝔽 to the class of twist maps results in the Euler-Lagrange equation (<ref>)
where the solution Q=Q(r) as explicitly computed is the twist loop associated with the map
u: (r, θ) ↦ (r, Q(r) θ), x ∈,
with Q(r)=exp [- β(r)A] P, P ∈𝐒𝐎(n), A ∈^n × n skew-symmetric (A^t=-A).
The boundary condition u=ϕ on ∂ gives,
[The function β=β(r) was introduced earlier in Section <ref>.]
exp[-β(a) A] P=I, exp[-β(b) A] P=I.
Therefore it must be that P= exp[β(a) A] and exp ([β(b)-β(a)] A)=I. Now as A lies in 𝔰𝔬(n) it must be conjugate
to a matrix S in the Lie algebra of the standard maximal torus of orthogonal 2-plane rotations in 𝐒𝐎(n). This means that
there exists R ∈𝐒𝐎(n) such that A = RSR^T for some S as described and so S∈ (β(b)-β(a))^-1𝕃
where 𝕃={ T∈𝔱: exp(T)=I }, that is,
𝕃 is the lattice in the Lie subalgebra 𝔱⊂𝔰𝔬(n) consisting of matrices sent by the
exponential map to the identity I of SO(n). Hence Q(r)=R exp (-[β(r)-β(a)] S) R^t.
Next upon noting that the derivatives of β=β(r) are given by
β̇ (r) = -1/r^n-1 , β̈ (r) = n-1/r^n,
we can write
Q̇ = A Q/r^n-1, Q̈ = A^2Q/r^2n-2-(n-1)AQ/r^n.
Now, moving forward, a set of straightforward calculations show that for a twist map u with a twice continuously differentiable twist loop Q=Q(r) we have the differential relations
(∇ u)^t = Q^t + r θ⊗Q̇θ,
∇ u^2 = tr [(∇ u)^t (∇ u)] = n + r^2 Q̇θ^2,
and likewise
Δ u = [ (n+1)Q̇ + r Q̈]θ.
Thus for the particular choice of a twist map with twist loop arising from a solution to (<ref>) the above quantities can be explicitly described by
the relations
(∇ u)^t = Q^t + r^2-nθ⊗ AQθ,
∇ u^2 = tr [ (Q^t + r^2-nθ⊗ AQθ) (Q + r^2-n AQ θ⊗θ) ]
= n + AQθ^2/r^2(n-2),
and likewise
Δ u = [ (n+1) AQ/r^n-1 + r( A^2 Q/r^2n-2 -(n-1) AQ/r^n) ] θ
= [ 2AQ/r^n-1 + A^2Q/r^2n-3]θ.
For the ease of notation we shall hereafter write ω = Qθ. Proceeding now with the calculations and using (<ref>)-(<ref>) we have
Δ u + ∇ u^2/u^2u = [ 2A/r^n-1 + A^2/r^2n-3
+ 1/r(n + Aω^2/r^2n-4) I ]ω
and in a similar way
∇ u (∇ u)^t/u^2u = [ Q + r^2-n Aω⊗θ]
[ Q^t + r^2-nθ⊗ Aω] ω/r
= [ I + Aω⊗θ Q^t + Qθ⊗ Aω/r^n-2
+ Aω⊗ Aω/r^2n-4] ω/r
= [ I + Aω⊗ω + ω⊗ Aω/r^n-2
+ Aω⊗ Aω/r^2n-4] ω/r.
= 1/r (I + r^2-n A) ω,
where the last identity here uses (x⊗ y)z = ⟨ y,z⟩ x and ⟨ω,ω⟩ = ⟨ Qθ,Qθ⟩=1 along with
⟨ Aω,ω⟩ =0 for skew-symmetric A. Hence putting together (<ref>) and (<ref>) gives
Δ u + ∇ u^2/u^2u - 2 ∇ u (∇ u)^t/u^2u = [ A^2+ Aω^2 I/r^2n-2
+ (n-2) I ] ω/r,
which when combined with (<ref>) results in
(<ref>) =(∇ u)^t/u^2[ Δ u + ∇ u^2/u^2u - 2 ∇ u (∇ u)^t/u^2u ]
= 1/r^2[ Q^t + θ⊗ Aω/r^n-2]
[ A^2 + Aω^2 I/r^2n-4 + (n-2) I ] ω/r
= Q^t [ A^2+ Aω^2 I/r^2n-1
+ n-2/r^3 I ]ω + (θ⊗ Aω )A^2 ω/r^3n-3.
Noting (θ⊗ Aω )A^2 ω = ⟨ Aω,A^2ω⟩θ = ⟨μ,Aμ⟩ = 0
with A skew-symmetric and μ = Aω the last set of equations give
(<ref>) = (∇ u)^t/u^2[ Δ u + ∇ u^2/u^2u - 2 ∇ u (∇ u)^t/u^2u ]
= Q^t [ A^2+ Aω^2 I/r^2n-2 + n-2/r^2 I ] ω/r =: I.
Therefore to see if (<ref>) admits twist solutions it suffices to verify if the quantity described by (<ref>) is a gradient field in . Towards this end
recall that here we have Q(r) = exp(-β(r)A)P where as seen P=exp(β(a)A). Thus a basic
calculation gives
Aω^2 = AQθ^2 = θ^t Q^t A^t A Q θ = - θ^t Q^t A^2 Q θ
= - θ^t P^t exp(β(r)A) A^2 exp(-β(r)A)P θ
= - θ P^t A^2 Pθ = θ P^t A^t A Pθ
= AP θ^2,
and likewise by substitution we have
Q^tA^2 ω = P^t A^2 P θ.
Hence using the above we can proceed by writing the Euler-Lagrange equation (<ref>) upon substitution as,
I = (∇ u)^t/u^2[ Δ u + ∇ u^2/u^2u - 2 ∇ u (∇ u)^t/u^2u ]
= P^t ( A^2+ AP θ^2I ) P θ/r^2n-1
+ (n-2) θ/r^3.
Now as for a fixed skew-symmetric matrix B by basic differentiation we have
∇( By^2 ) = -2B^2y, ∇y^2n = 2ny^2n-2y,
it is evident that we can write
- ∇( By^2/2ny^2n) = B^2y/ny^2n + By^2y/y^2n+2.
In particular with B=P^tAP being skew-symmetric, (<ref>) can be written in the form
I = (∇ u)^t/u^2[ Δ u + ∇ u^2/u^2u - 2 ∇ u (∇ u)^t/u^2u ]
= - ∇( P^tAP x^2/2nx^2n) + (n-1) P^tA^2 P x/ nx^2n +
- (n-2) ∇1/|x|.
Therefore it is plain that (<ref>) is a gradient field in provided that the term on the right and subsequently the middle
term, that is, the expression
(n-1)P^tA^2 P x/n x^2n
is a gradient field in . By direct calculations (cf. <cit.>) this is seen to be the case iff all the eigenvalues of the skew-symmetric matrix A
are equal. (Note that in odd dimensions this requirement leads to A=0.) As a result here (<ref>) would be a gradient (indeed ∇ p)
and so the Euler-Lagrange system (<ref>) is satisfied by the twist u.
Now using the representation A= RSR^t for some S ∈ (β(b)-β(a))^-1𝕃 and writing S = λ J where, J is the n × n
block diagonal matrix: J=0 when n is odd and J= diag( A_1, ⋯, A_n/2) when n is even, i.e.,
J = [[ 𝐀_1 0 ⋯ 0; 0 𝐀_2 ⋯ 0; ⋮ ⋱ ⋮; 0 0 ⋯ 𝐀_n/2; ]]
A_j= [ 0 -1; 1 0 ]
it is required that λ (β(b)-β(a)) J ∈𝕃. But invoking the lattice structure of 𝕃 this can happen iff
λ = 2π k/β(b)-β(a), k ∈,
and thus
u(x) = Rexp( -2kπβ(r)-β(a)/β(b)-β(a) J ) R^tx.
Noticing that here we have
β(r)-β(a)/β(b)-β(a) = log(r/a)/log(b/a),
for n=2 and
β(r)-β(a)/β(b)-β(a) = (r/a)^2-n-1/(b/a)^2-n - 1 ,
for even n ≥ 4 respectively we obtain the representation
u(x) = Rexp( -g(r) J ) R^tx = exp( -g(r) A ) x,
where we have set A= R J R^t and the angle of twist function g=g(r) is given by (<ref>) for n=2 and
(<ref>) for even n ≥ 4 respectively.
For odd n ≥ 3 as shown A=0 and so the only twist solution to (<ref>) is the trivial solution u ≡ x.
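The equal-eigenvalue condition can also be probed numerically. The sketch below is ours: for n=2 with A=J (equal eigenvalue moduli) it checks that the middle term has a symmetric Jacobian, i.e., is locally a gradient field.

# Numerical curl check: for n = 2 and A = J, the field
# F(x) = A^2 x / |x|^(2n) = -x/|x|^4 should be curl-free.
import numpy as np

J = np.array([[0., -1.], [1., 0.]])
F = lambda x: (J @ J) @ x / np.dot(x, x)**2
x0, h = np.array([0.7, -0.4]), 1e-6
Jac = np.array([(F(x0 + h*e) - F(x0 - h*e))/(2*h) for e in np.eye(2)]).T
print(np.allclose(Jac, Jac.T, atol=1e-5))   # True -> gradient field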
§ SYMMETRISATION AS A MEANS OF ENERGY REDUCTION ON 𝒜_Φ() WHEN N=2
Recall that the space of admissible maps 𝒜_ϕ() consists of maps u ∈ W^1,2(, ^2) satisfying the incompressibility condition
det ∇ u = 1 a.e. in and u|_∂ = ϕ. Also as mentioned earlier due to a Lebesgue-type monotonicity every such map is
continuous on the closed annulus and using degree theory the image of the closed annulus is again the closed annulus itself; hence, the
"embedding"
𝒜_ϕ() = ⋃_k ∈ A_k ⊂𝔄(),
where the components A_k here are as defined by (<ref>). For the sake of future calculations it is useful to write (<ref>) as
deg (u|u|^-1) = 1/2π∫_a^b u × u_r/|u|^2 dr = k ∈,
where x=r. (Note that we adopt the convention that in two dimensions the cross product is a scalar and not a vector.) When u is a twist map,
specifically, u=Q[g] x the integral reduces to g(b)-g(a) = 2π k where as before g=g(r) is the angle of rotation function.
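The degree formula above is straightforward to evaluate numerically. The following sketch (ours) discretizes the radial integral along a fixed ray and, for a twist map u=Q[g]x, returns the expected integer k.

# Sketch: winding number of a map given in polar coordinates, via the
# radial-ray formula deg = (1/2pi) \int_a^b (u x u_r)/|u|^2 dr.
import numpy as np

def degree(u, a, b, theta=0.0, n=4000):
    r = np.linspace(a, b, n)
    U = np.array([u(ri, theta) for ri in r])       # shape (n, 2)
    Ur = np.gradient(U, r, axis=0)
    f = (U[:, 0]*Ur[:, 1] - U[:, 1]*Ur[:, 0])/np.sum(U**2, axis=1)
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(r))/(2*np.pi)

a, b, k = 1.0, 2.0, 3
g = lambda r: 2*np.pi*k*np.log(r/a)/np.log(b/a)
u = lambda r, t: r*np.array([np.cos(t + g(r)), np.sin(t + g(r))])
print(round(degree(u, a, b)))                      # -> 3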
We now proceed by reformulating the 𝔽 energy of an admissible map u ∈𝒜_ϕ() in a more suggestive way. Indeed
switching to polar co-ordinates it is seen that
∇ u^2 = u_r^2 + 1/r^2u_θ^2
where
|u_r|^2 = (u · u_r)^2 + (u × u_r)^2/|u|^2,
|u_θ|^2 = (u · u_θ)^2 + (u × u_θ)^2/|u|^2.
Next we note that
(|u|_r)^2 = (u · u_r)^2/|u|^2, (|u|_θ)^2 = (u · u_θ)^2/|u|^2.
Hence the gradient term on the left in (<ref>) can be expressed as
|∇ u|^2 = (u · u_r)^2 + (u × u_r)^2/|u|^2 + (u · u_θ)^2 + (u × u_θ)^2 /r^2|u|^2
= |∇|u||^2 + (u × u_r)^2/|u|^2 + (u × u_θ)^2 /r^2|u|^2.
From this we therefore obtain the 𝔽 energy as
𝔽 [u; ] = 1/2∫_|∇ u|^2/|u|^2 dx
= 1/2∫_0^2π∫_a^b |∇ u|^2/|u|^2 r dr dθ
= 1/2∫^2π_0 ∫_a^b [ |∇|u||^2/|u|^2 + (u × u_r)^2/|u|^4 +
(u × u_θ)^2 /r^2|u|^4] r dr dθ.
Let us first state the following useful identity that will be employed in obtaining a fragment of the lower bound on the energy: For u ∈𝒜_ϕ() and
a.e. r∈[a,b],
∫_0^2πu(r,θ)× u_θ(r,θ)/|u|^2 dθ = 2π.
The proof of this identity is postponed until later on in Section <ref> (cf. Proposition <ref>).
Now assuming this for the moment an application of Jensen's inequality gives, again for a.e. r∈[a,b],
1/2π∫_0^2π(u× u_θ)^2 /|u|^4 dθ ≥(1/2π∫_0^2πu(r,θ)× u_θ(r,θ)/|u|^2 dθ)^2=1.
Hence it is plain that
∫^2π_0 ∫_a^b (u × u_θ)^2 /r^2|u|^4 r dr dθ≥ 2πln(b/a).
Therefore we have the following lower bound on the 𝔽 energy of an admissible map u:
𝔽 [u; ] ≥πln(b/a) + 1/2∫^2π_0 ∫_a^b [ |∇|u||^2/|u|^2
+ (u × u_r)^2/|u|^4] r dr dθ.
Interestingly here we have equality only for twist maps and so outside this class the inequality is strict (for more on questions of uniqueness
see <cit.>). The next task is to show that by using a basic "symmetrisation" in 𝒜_ϕ() we can reduce the energy
which will then be the main ingredient in the proof of the result.
(Symmetrisation) Let u ∈𝒜_ϕ() be an admissible map and associated with u define the angle of rotation
function g=g(r) by setting
g(r) = 1/2π∫_a^r ∫_0^2πu × u_r/|u|^2 dθ dr, a ≤ r ≤ b.
Then the twist map defined by u̅ (x) = Q[g] x with Q=Q[g]= R[g] has a smaller 𝔽 energy than the original map u, that is,
𝔽[u̅; ] ≤𝔽[u; ].
Furthermore if u ∈ A_k then the symmetrised twist map u̅ satisfies u̅∈ A_k. Thus the homotopy classes A_k are invariant under
symmetrisation.
Clearly the symmetrised twist map u̅ is in the same homotopy class as u since by definition g ∈ W^1,2[a, b] satisfies
g(a)=0 and
g(b) = 1/2π∫_a^b ∫_0^2πu × u_r/|u|^2 dθ dr = 2 π k.
Therefore u̅∈ A_k as a result of (<ref>). Next the 𝔽 energy of u̅ satisfies the bound
𝔽 [u̅; ] - 2πlog(b/a) = π∫_a^b |Q̇(r)|^2 r dr
= π∫_a^b |ġ(r)|^2 r dr
= π∫_a^b [1/2π∫_0^2πu × u_r/|u|^2 dθ]^2 r dr
≤1/2∫_0^2π∫_a^b (u × u_r)^2/|u|^4 r drdθ
where the last line is a result of Jensen's inequality. Therefore by referring to (<ref>) all that
is left is to justify the inequality
2πlog(b/a) ≤∫_|∇|u||^2/|u|^2 dx.
Towards this end we use the isoperimetric inequality in the context of sets of finite perimeter and the coarea formula in the context
of Sobolev spaces: For real-valued f and non-negative Borel g:
∫_ g(x) |∇ f| dx = ∫_∫_{ f=t } g(x) dℋ^1(x) dt.
(See, e.g., <cit.>.) Then upon taking f = |u| ∈ W^1,2() ∩ C() and g = 1/|u|^2 this gives
∫_|∇|u||/|u|^2 dx = ∫_a^b ( ∫_{|u| = t } dℋ^1 ) dt/t^2
= ∫_a^b ℋ^1({|u|=t }) dt/t^2.
Now since the level sets E_t = {x ∈ : |u(x)|≤ t } and F_t={x ∈ : |x|≤ t } enclose the same area due to
det ∇ u = 1 a.e. (we can consider u as extended by the identity inside {|x|<a}) an application of the isoperimetric
inequality gives 2π t = ℋ^1({|x|=t }) = ℋ^1(∂^⋆ F_t)
≤ℋ^1(∂^⋆ E_t) = ℋ^1({|u|=t }) for a.e. t∈ [a,b] (cf., e.g.,
<cit.>). Thus substituting in (<ref>) results in the lower bound
∫_|∇|u||/|u|^2 dx = ∫_a^b ℋ^1({|u|=t }) dt/t^2
≥∫_a^b ℋ^1({|x|=t }) dt/t^2 = ∫_a^b 2π t dt/t^2 = 2πlog (b/a).
Finally we arrive at the conclusion by noting that |u| and |x| have the same distribution function, that is, again as a result of the pointwise
constraint det ∇ u =1 a.e. in :
α_u(t) = |{ x ∈ : |u(x)|≥ t } | = | { x ∈ : |x|≥ t } | = α_ϕ(t) and therefore
∫_dx/|u|^2 = ∫_a^∞ -2 α_u(t) dt/t^3 + ||/a^2
= ∫_a^∞ -2 α_ϕ(t) dt/t^3 + ||/a^2
= ∫_dx/|x|^2 = 2πlog(b/a).
Now putting all the above together, a final application of Hölder's inequality gives
(2πlog(b/a) )^2 ≤( ∫_|∇|u||/|u|^2 dx )^2
≤∫_|∇|u||^2/|u|^2 dx ×∫_dx/|u|^2
= 2 πlog(b/a)∫_|∇|u||^2/|u|^2 dx
and thus eventually we have
2πlog(b/a) ≤∫_|∇|u||^2/|u|^2 dx
and so the conclusion follows.
§ THE 𝔽 ENERGY AND CONNECTION WITH THE DISTORTION FUNCTION
In this section we delve into the relationship between the energy functional 𝔽 in (<ref>) and the notions
of distortion function and energy of geometric function theory. In particular we show that in two dimensions twist maps have minimum
distortion among all incompressible Sobolev homeomorphisms of the annulus with identity boundary values in any given homotopy
class. To fix notation and terminology let U, V ⊂^n be open sets and
f ∈ W_loc^1,1(U, V). Then f is said to have finite outer distortion iff
there exists measurable function K=K(x) with 1≤ K(x)<∞ such that
|∇ f(x)|^n ≤ n^n/2 K(x) det ∇ f (x).
The smallest such K is called the outer distortion of f and denoted by K_O(x, f). Note that here |A| = √( tr A^tA) is
the Hilbert-Schmidt norm of the n × n matrix A. Naturally 1 ≤ K_O (x, f) < ∞ and it measures the deviation of f from
being conformal. We also speak of the inner distortion function K=K_I(x, f) defined by the quotient
K_I(x,f) = n^-n/2 |cof ∇ f|^n/ det ( cof ∇ f),
when det ∇ f (x) ≠ 0 and K_I(x, f)=1 otherwise. We define the distortion energy associated to the inner distortion
K_I(x, f) (<ref>) by the integral
𝕎[f; U] = ∫_UK_I(x,f)/x^n dx.
Related energies and more have been considered in <cit.> with close links to the work in <cit.>.
The connection between the 𝔽 energy and the distortion energy 𝕎 (<ref>) is implicit in the following result
of T. Iwaniec, G. Martin, J. Onninen and K. Astala <cit.>. (See also <cit.>, <cit.> and <cit.>.)
Suppose f ∈ W^1,n_loc (, ) is a homeomorphism with finite outer distortion. Assume K_I is L^1-integrable over .
Then the inverse map h=f^-1: → lies in the Sobolev space W^1,n(, ).
Furthermore
n^-n/2∫_|∇ h(y)|^n/|h(y)|^n dy = ∫_K_I(x,f)/|x|^n dx.
The first assertion is Theorem 10.4 pp. 22 of <cit.>. For the second assertion using definitions we have
n^n/2 K_I (x, f) = | cof ∇ f|^n/ det ( cof ∇ f) = |(∇ f)^-1|^n det ∇ f
= |∇ h(f)|^n det ∇ f,
and the conclusion follows upon dividing by |x|^n and integrating using the area formula. □
Now let 𝒜^n_ϕ() ={u ∈ W^1,n(; ^n) : det ∇ u =1 a.e. in and u|_∂ = ϕ}.
As in Section <ref> each u in 𝒜^n_ϕ() admits a representative in 𝔄(). Now restricting to
homeomorphisms u ∈𝒜^n_ϕ() the above theorem gives v=u^-1∈𝒜^n_ϕ() and so by the
incompressibility constraint
∫_K_I(x,u)/|x|^n dx
= n^-n/2∫_| cof ∇ u|^n/|x|^n dx
= n^-n/2∫_|∇ v|^n/|v|^n dx.
In the planar case this allows us to relate the distortion energy of a homeomorphism u, say, in A_k to the
𝔽 energy of the inverse map v=u^-1 in A_-k through
𝔽[v;] = 1/2∫_|∇ v(y)|^2/|v(y)|^2 dy
= 1/2∫_|∇ u(x)|^2/|x|^2 dx = ∫_K_I(x,u)/|x|^2 dx = 𝕎[u;].
Therefore by showing that twist maps minimise the 𝔽 energy within their homotopy classes we have implicitly shown
that twist maps minimise the distortion energy within their respective homotopy classes of homeomorphisms
(as the inverse of a twist map is a twist map in opposite direction and clearly twist maps are homeomorphisms of annuli
onto themselves).
The distortion energy 𝕎 has a minimiser u=u(x; k) (k∈) among all
homeomorphisms within A_k. The minimiser is a twist map of the form u =Q[g] x
where g(r) = 2π k ln(r/a)/ln(b/a).
In particular the minimum energy is given by,
𝕎[u;] = ∫_K_I(x, u)/|x|^2 dx = 2πln(b/a) + 4π^3 k^2/ln(b/a).
That u=u_k minimises 𝕎 amongst homeomorphisms in A_k is a result of u_-k= (u_k)^-1 minimising
𝔽 over A_-k (Proposition <ref>) and (<ref>). Indeed arguing indirectly assume there is a
homeomorphism v∈ A_k: 𝕎[v; ] < 𝕎[u_k; ]. Then by (<ref>),
𝔽[v^-1; ]<𝔽[u_-k; ] and this is a contradiction as
v^-1, u_-k∈ A_-k while 𝔽[u_-k;] = inf_A_-k𝔽. We are thus left with the
calculation of the 𝕎 energy of u=u_k. To this end put v=u^-1:
𝔽[v; ] =1/2∫_|∇ v|^2/|v|^2 dx = 2πln (b/a) + π∫_a^b rġ_-k(r)^2 dr
= 2πln(b/a) + 4π^3 k^2/ln(b/a),
and so a further reference to (<ref>) completes the proof. □
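The closed-form minimum can be checked by direct quadrature; in the sketch below (ours, with illustrative values of a, b and k) the energy of the optimal twist is integrated numerically and compared with 2πln(b/a)+4π^3k^2/ln(b/a).

# Sketch: quadrature check of W = 2 pi ln(b/a) + 4 pi^3 k^2/ln(b/a)
# for the optimal planar twist, using |grad u|^2 = 2 + r^2 g'(r)^2.
import numpy as np

a, b, k = 1.0, np.e, 2
L = np.log(b/a)
r = np.linspace(a, b, 200001)
gdot = 2*np.pi*k/(r*L)                      # optimal g = 2 pi k ln(r/a)/L
f = np.pi*(2.0/r + r*gdot**2)               # angular integral of |grad u|^2/(2r^2) r
W = np.sum(0.5*(f[1:] + f[:-1])*np.diff(r)) # trapezoid rule
print(W, 2*np.pi*L + 4*np.pi**3*k**2/L)     # agree up to quadrature error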
In the higher dimensions, i.e. n>2, from Theorem <ref> we can get an analogous identity to (<ref>) for homeomorphisms
u in 𝒜^n_ϕ. Indeed, with v=u^-1
n 𝔽[v; ] = ∫_|∇ v(y)|^n/|v(y)|^n dy
= ∫_| cof ∇ u(x)|^n/|x|^n dx = n^n/2𝕎[u; ],
where the energies 𝔽=𝔽_n and 𝕎 are given by,
𝔽[v; ] = 1/n∫_|∇ v(y)|^n/|v(y)|^n dy, 𝕎[u; ]= ∫_K_I(x, u)/|x|^n dx.
Therefore again to find minimisers of 𝕎 among homeomorphisms u∈𝒜^n_ϕ() one can follow the lead of n=2 and
consider the energy 𝔽 over 𝒜^n_ϕ(). It is straightforward to see that we have equality in (<ref>) for twist
maps u∈𝒜^n_ϕ() with the distortion energy of u= Q(r)x given by
𝕎[u;] = n^-n/2∫_(n |x|^-2 + |Q̇θ|^2)^n/2 dx.
Now restricting to the particular case of the twist map being (cf. <cit.>)
u(x) = exp( -g(r) J ) x, x ∈,
with J as in Section <ref> and g∈ W^1,n([a,b]) the angle of rotation describing the twist. The corresponding distortion energy is
𝕎[u;] = n^-n/2∫_(n |x|^-2 + ġ^2)^n/2 dx
= ω_n n^-n/2∫_a^b (n r^-2 + ġ^2)^n/2 r^n-1 dr .
Note that in higher dimensions (i.e., n≥ 3) as discussed earlier there are only two homotopy classes in 𝒜^n_ϕ().
A twist map u=Q(r)x lies in the non-trivial homotopy class of 𝒜^n_ϕ() iff the twist loop
Q=Q(r) ∈ C([a,b], 𝐒𝐎(n)) based at I lifts to a non-closed path R=R(r) ∈ C([a,b],𝐒𝐩𝐢𝐧(n)) connecting ± 1
in 𝐒𝐩𝐢𝐧(n) (see <cit.> for more.)
[Note that 𝐒𝐩𝐢𝐧(n) is the universal cover of 𝐒𝐎(n) and {± 1}⊂ Spin(n) is the fibre
over I under the covering map.]
Likewise a twist u of the form (<ref>) lies in the non-trivial homotopy class of 𝒜^n_ϕ() iff the angle
of rotation function g satisfies g(b)-g(a)=2π k for some k odd. When k is even the twist map u lies in the trivial homotopy
class of 𝒜_ϕ().
[The identity boundary conditions on u dictates that the angle of rotation function must satisfy g(b)-g(a)=2π k for some k∈.]
§ OTHER VARIANTS OF THE DIRICHLET ENERGY
The goal of this section is to establish various energy bounds and identities, when n=2, by invoking the regularity and the measure preserving
constraints satisfied by the elements of 𝒜_ϕ(). These inequalities will ultimately lead to useful results for extremisers
and minimisers of variants of the Dirichlet energy in homotopy classes of 𝒜_ϕ(). We begin with the following identity.
For Φ∈ C^1[a,b] the integral identity
∫_Φ(|u|) dx = ∫_Φ(|x|) dx
holds for all u ∈𝒜_ϕ().
Denoting by α_u=α_u(t) the distribution function of |u| we can write using basic considerations and invoking the standard properties of distribution functions
∫_Φ(|u|) dx = ∫_∫_a^|u|Φ̇(t) dt dx + Φ(a)||
= ∫_a^b Φ̇(t) α_u(t) dt + Φ(a)||
= ∫_a^b Φ̇(t) α_x(t) dt + Φ(a)||
= ∫_∫_a^|x|Φ̇(t) dt dx + Φ(a)||
= ∫_Φ(|x|) dx,
which is the required conclusion.
We now collect a few more results which will be needed for the proof of our main theorem at the end of this section.
Let Φ∈ C^1[a,b] and pick u∈𝒜_ϕ(). Consider the continuous closed curve γ(θ) = u(r,θ) where
a<r<b is fixed. Then the integral identity
∫_0^2πΦ(|u|)^2(u× u_θ) dθ |^r_a = 2∫__r[|u|Φ(|u|)Φ̇(|u|)+Φ(|u|)^2 ] dx,
holds for almost every r∈ (a,b), where _r=[a,r]={ x∈^2:a<x<r }.
We shall justify the assertion first when u is a sufficiently smooth diffeomorphism and then pass on to the general case by invoking a suitable
approximation argument. Towards this end consider first the case where u is a smooth diffeomorphism with u ≡ x on ∂. [Here
u need not satisfy the incompressibility condition in 𝒜_ϕ().] Let α denote
the 1-form
α = Φ(x)^2 (x_1dx_2-x_2dx_1).
Then by a rudimentary calculation the pull-back of α under the C^∞ curve γ is given by
γ^*α = Φ(u)^2 (u× u_θ ) dθ.
Hence contour integration and basic considerations lead to the integral identity
∫_γα = ∫_0^2πΦ(u)^2 (u× u_θ) dθ |_r.
Note that the C^∞ curve γ here is diffeomorphic to ^1 and as such by the Jordan–Schoenflies theorem γ is the boundary
of some bounded region C_γ⊂^2 diffeomorphic to the unit ball _1. In particular due to the boundary conditions on u we have that
a < γ(θ)=u(r,θ) when a<r and so as a result C_a^γ = { x ∈^2: a<x<γ}⊂
with boundary components ∂_a and γ. Hence an application of Stokes' theorem gives
∫_C_a^γ dα = ∫_∂ C_a^γα
= ∫_γα - ∫_|x|=aα
= ∫_0^2πΦ(u)^2 u× u_θ dθ |_a^r,
for all r∈[a,b]. Next again by a rudimentary calculation we obtain that the pull-back of dα is,
dα = 2 det ∇ u [|u|Φ(|u|)Φ̇(|u|)+Φ(|u|)^2 ] dx_1 ∧ d x_2.
Hence from (<ref>) and (<ref>) it follows that for all r∈[a,b],
2∫__r[|u|Φ(|u|)Φ̇(|u|)+Φ(|u|)^2 ] det ∇ u dx
= ∫_0^2πΦ(|u|)^2 u× u_θ dθ |_a^r.
Now pick an arbitrary u in 𝒜_ϕ(). By approximation, e.g., using Theorem 1.1 in <cit.> there is a sequence
of C^∞ diffeomorphisms (v^k) so that v^k-u ∈ W^1,2_0(,^2) with v_k → u uniformly on
and strongly in W^1,2. Hence,
f_k:= Φ(v^k)^2 v^k × v^k_θ/|x|→Φ(u)^2 u × u_θ/|x| =:f,
a.e. in . Note that f_k≤ c v^k_θ, f≤ c u_θ for some c>0 and so f_k,f ∈ L^2()
and by virtue of v_k → u in W^1,2 and dominated convergence, for each r∈(a,b) and 0<δ<b-r, we have
∫_r^r+δ∫_0^2πΦ(v^k)^2(v^k× v^k_θ) dθ dr →∫_r^r+δ∫_0^2πΦ(u)^2(u× u_θ) dθ dr.
In a similar spirit we have (suppressing the arguments of Φ for brevity)
h_k:=[v^kΦΦ̇+Φ^2 ] ∇ v^k →[uΦΦ̇+Φ^2 ] ∇ u =:h,
a.e. in . Again since h_k≤ c v^k_x_1v^k_x_2, h≤ c u_x_1u_x_2 for some
c>0 we have h_k,h∈ L^1() and so by dominated convergence
∫__r[v^kΦΦ̇+Φ^2 ] ∇ v^k dx →∫__r[uΦΦ̇+Φ^2 ] ∇ u dx.
Now combining (<ref>) and (<ref>) together with the fact that
u=v^k=ϕ on ∂ it follows that
2∫_r^r+δ∫__r[v^kΦΦ̇+Φ^2 ] ∇ v^k dx dr →∫_r^r+δ∫_0^2πΦ(u)^2(u× u_θ) dθ |_a^r dr.
Moreover (<ref>) and a final application of dominated convergence gives
2 ∫_r^r+δ∫__r[uΦΦ̇+Φ^2 ] ∇ u dx dr
= ∫_r^r+δ∫_0^2πΦ(u)^2(u× u_θ) dθ |_a^r dr.
Therefore the result follows by recalling that det ∇ u = 1 a.e. in and applying the Lebesgue differentiation
theorem, i.e., dividing by δ and letting δ↘ 0. □
Note that we can write the conclusion of the above proposition, namely, the integral identity (<ref>) in a shorter and somewhat more suggestive form
∫_0^2πΦ(|u|)^2(u× u_θ) dθ |_a^r = ∫__r|u|^-1Γ̇(|u|) dx,
where Γ (t) = t^2 Φ(t)^2 for Φ∈ C^1[a,b]. With this result and formulation at our disposal we can now prove the earlier
relation (<ref>) as a specific proposition.
Taking Φ(t) = 1/t in the above gives the integral identity
∫_0^2π (u(r,θ)× u_θ(r,θ))/|u|^2 dθ = 2π, a.e. r∈[a,b].
When Φ(t) = 1/t it can be easily seen that Γ̇(t) = 0 and therefore Proposition <ref> gives that,
∫_0^2π (u× u_θ)/|u|^2 dθ |_a^r = 0, a.e. r∈ [a,b].
Then recalling that u(x) = x on ∂𝔸, i.e. when |x| = a, it can be seen that the integral over the inner boundary is 2π and
hence from (<ref>) this implies that,
∫_0^2π (u× u_θ)/|u|^2 dθ = 2π, a.e. r∈ [a,b],
which completes the proof.
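Before moving on, this identity is easy to test numerically. The following Python snippet is purely our own illustration (the annulus parameters and the angle function g are ad hoc choices): it evaluates the integral by quadrature for a concrete twist map u(r,θ) = Q(g(r))x, which lies in 𝒜_ϕ(𝔸), and checks that the value is 2π for several radii.

```python
import numpy as np

# Quadrature check of  int_0^{2pi} (u x u_th)/|u|^2 dth = 2*pi  for a twist map.
a, b = 1.0, 2.0
g = lambda r: 2*np.pi*(r - a)/(b - a)      # assumed angle function, g(a)=0, g(b)=2*pi

def u(r, th):
    phi = th + g(r)                        # twist: rotate the polar angle by g(r)
    return np.array([r*np.cos(phi), r*np.sin(phi)])

th = np.linspace(0.0, 2*np.pi, 2001)
for r in (1.2, 1.5, 1.9):
    U = u(r, th)                           # shape (2, len(th))
    U_th = np.gradient(U, th[1] - th[0], axis=1)
    cross = U[0]*U_th[1] - U[1]*U_th[0]    # scalar 2D cross product u x u_th
    print(r, np.trapz(cross/(U[0]**2 + U[1]**2), th)/(2*np.pi))  # ~ 1.0
```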
Suppose Γ∈ C^2[a,b] is such that Γ̇(t)/t is a monotone increasing function. Then for almost every r∈[a,b] we have that
∫_0^2π Γ(|u|)^2 (u(r,θ)× u_θ(r,θ))^2 / |u|^4 dθ ≥ 2πΓ(r)^2.
First we note that by Proposition <ref> we have for a.e. r ∈ [a,b] that
∫_0^2π Γ(|u|) (u× u_θ)/|u|^2 dθ |_a^r = ∫_𝔸_r |u|^-1 Γ̇(|u|) dx.
As Γ̇(t)/t is monotone increasing and |u| and |x| share the same distribution function α_x(t) we have that,
∫_𝔸_r^b |u|^-1 Γ̇(|u|) dx = ∫_a^b d/dt( Γ̇(t)/t )
∫_𝔸 χ_{ x ∈𝔸_r^b : |u(x)|>t } dx dt + |𝔸_r^b| Γ̇(a)/a
≤ ∫_a^b α_x(t) d/dt( Γ̇(t)/t ) dt + |𝔸_r^b| Γ̇(a)/a
= ∫_𝔸_r^b |x|^-1 Γ̇(|x|) dx
= 2π[ Γ(b) - Γ(r) ].
In the above 𝔸_r^b = { x ∈𝔸 : r<|x|<b }. Now since,
2π[ Γ(b) - Γ(a) ] = ∫_𝔸 |x|^-1 Γ̇(|x|) dx = ∫_𝔸 |u|^-1 Γ̇(|u|) dx,
by Proposition <ref>, we obtain upon using (<ref>) in (<ref>) that,
∫_𝔸_r |u|^-1 Γ̇(|u|) dx ≥ ∫_𝔸_r |x|^-1 Γ̇(|x|) dx = 2π[ Γ(r) - Γ(a) ].
Therefore,
∫_0^2π Γ(|u|) (u× u_θ)/|u|^2 dθ |_a^r ≥ 2π[ Γ(r) - Γ(a) ],
but from the identity boundary conditions on u we know that,
∫_0^2π Γ(|u|) (u× u_θ)/|u|^2 dθ |_a = 2πΓ(a),
and therefore,
∫_0^2π Γ(|u|) (u× u_θ)/|u|^2 dθ |_r ≥ 2πΓ(r), a.e. r∈[a,b].
The result then follows from an application of Jensen's inequality.
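For the reader's convenience we spell out this last step (our own elaboration of the argument): applying Jensen's inequality to the convex function t ↦ t^2 and the probability measure dθ/2π on [0,2π] gives
( 1/2π ∫_0^2π Γ(|u|) (u× u_θ)/|u|^2 dθ )^2 ≤ 1/2π ∫_0^2π Γ(|u|)^2 (u× u_θ)^2/|u|^4 dθ,
so that, assuming (as in the applications here) Γ ≥ 0, squaring the lower bound 2πΓ(r) obtained above and dividing through by 2π yields the asserted inequality.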
With the aid of these bounds we can now move on to the main goal of the section, namely, formulating and proving minimality for twist maps
in homotopy classes of 𝒜_ϕ(𝔸) for a larger class of energies than those considered earlier.
Let 𝔸 = 𝔸[a,b] ⊂ ℝ^2 and let ℍ = ℍ[u;𝔸] denote the energy functional,
ℍ[u;𝔸] = 1/2 ∫_𝔸 Φ(|u|) [ |∇u|^2 + (u× u_θ)^2/(r^2|u|^2) ] + (u× u_r)^2/|u|^4 dx,
where u lies in 𝒜_ϕ(𝔸) and Φ(t) = t^-2 Γ(t)^2 with Γ(t) ∈ C^2[a,b] such that Γ̇(t)/t
is monotone increasing. Then for any u ∈ A_k (k ∈ ℤ) there exists a twist map
u̅ = u̅_k = Q[g]x defined by the same symmetrisation as in (<ref>) such that,
ℍ[u̅;𝔸] ≤ ℍ[u;𝔸],
whilst u̅ ∈ A_k.
[Note that taking Φ(t) = 1/t^2, i.e., Γ(t) = 1, gives ℍ = 𝔽.]
As the first step in the proof we wish to prove the inequality
∫_𝔸 Φ(|x|) dx = ∫_𝔸 Φ(|u̅|) |∇|u̅||^2 dx ≤ ∫_𝔸 Φ(|u|) |∇|u||^2 dx.
In order to do this we again need to apply the isoperimetric inequality, the coarea formula for Sobolev functions as in the proof of Proposition <ref>
and then the integral identity (<ref>). Thus we proceed by writing
∫_𝔸 Φ(|u|) |∇|u|| dx = ∫_a^b Φ(t) ℋ^1({|u|=t}) dt
≥ ∫_a^b Φ(t) ℋ^1({|x|=t}) dt
= ∫_𝔸 Φ(|x|) dx.
Therefore it follows from basic considerations that
( ∫_𝔸 Φ(|x|) dx )^2 ≤ ( ∫_𝔸 Φ(|u|) |∇|u|| dx )^2
≤ ∫_𝔸 Φ(|u|) |∇|u||^2 dx ∫_𝔸 Φ(|u|) dx
= ∫_𝔸 Φ(|u|) |∇|u||^2 dx ∫_𝔸 Φ(|x|) dx,
and so rearranging, and noting |∇|u|| ≤ |∇u|, gives the desired inequality (<ref>), namely,
∫_𝔸 Φ(|x|) dx ≤ ∫_𝔸 Φ(|u|) |∇|u||^2 dx ≤ ∫_𝔸 Φ(|u|) |∇u|^2 dx.
Next we proceed by writing
∫_𝔸 Φ(|u|) (u× u_θ)^2/(r^2|u|^2) dx =
∫_a^b 1/r ∫_0^2π Φ(|u|) (u× u_θ)^2/|u|^2 dθ dr.
As Φ(t) = t^-2 Γ(t)^2 it follows upon noting Proposition <ref> that we have
1/2π ∫_0^2π Φ(|u|)(u× u_θ)^2/|u|^2 dθ = 1/2π ∫_0^2π Γ(|u|)^2 (u× u_θ)^2/|u|^4 dθ
≥ Γ(r)^2 = Φ(r) r^2, a.e. r∈[a,b].
Hence by combining the above it follows that
∫_𝔸 Φ(|u|)(u× u_θ)^2/(r^2|u|^2) dx ≥ 2π ∫_a^b Φ(r) r dr
= ∫_𝔸 Φ(|u̅|)(u̅× u̅_θ)^2/(r^2|u̅|^2) dx.
Therefore with the above at our disposal all that remains is to use the inequality
∫_𝔸 (u× u_r)^2/|u|^4 dx ≥ ∫_𝔸 (u̅× u̅_r)^2/|u̅|^4 dx,
whose proof proceeds similarly to that of Proposition <ref> by using the
same angle of rotation function (<ref>) in defining u̅. This therefore completes the proof.
§ MEASURE PRESERVING SELF-MAPS AND TWISTS ON SOLID TORI
In this section we propose and study extensions of twist maps to a larger class of domains. Recalling that an n-dimensional annulus takes the
form 𝔸 = [a,b] × 𝕊^n-1, the natural extension here would be domains of the product type 𝔹^m × 𝕊^n-1 (with m ≥ 1,
n ≥ 2) embedded in ℝ^m+n. Twist maps in turn will be suitable measure preserving self-maps of such domains that agree with the
identity map ϕ on the boundary (see <cit.>). To keep the discussion tractable we confine ourselves here to the
case m+n=3. We proceed by first considering the solid torus T ≅ 𝔹^2 × 𝕊^1 embedded in ℝ^3 as (see Fig. 1):
T = { x=(x_1, x_2, x_3) : ( √(x_1^2+x_2^2) - ρ )^2 + x_3^2 = r^2 , 0 ≤ r < 1 }.
Here T = T_ρ and the fixed parameter ρ is chosen ρ>1 to avoid self-intersection. Now let us set
μ = √(x_1^2+x_2^2) - ρ. Then T above can be represented as
0 ≤ μ^2 + x_3^2 = r^2 < 1.
From now on (μ, x_3) is the preferred choice of co-ordinates for 𝔹 = 𝔹_1^2, where
𝔹 = { (μ,x_3)∈ℝ^2 : μ^2+x_3^2 < 1 } is the unit disc in the (μ, x_3)-plane. In polar co-ordinates we have μ = r cosθ,
x_3 = r sinθ and upon noting μ = √(x_1^2+x_2^2) - ρ we have (x_1, x_2) as the co-ordinates of a circle of radius
ρ + r cosθ:
x_1 = (ρ + rcosθ) cosϕ, x_2 = (ρ + rcosθ) sinϕ.
For our purposes in this section we shall write the above co-ordinate system in the following way,
x_1 = (μ+ρ)cosϕ
x_2 = (μ+ρ) sinϕ
x_3 = x_3.
Now with the above notation in place we can define the desired twist maps on the solid torus 𝐓 (the aim here being to seek non-trivial
extremising twist maps for the energy functional 𝔽 over the admissible class of maps 𝒜_ϕ(𝐓)) as
u(x) = Q(μ,x_3) x, x ∈ T,
where the rotation matrix Q in SO(3) takes the explicit form
Q(μ,x_3) = [ cos g(μ,x_3) -sin g(μ,x_3) 0; sin g(μ,x_3) cos g(μ,x_3) 0; 0 0 1 ].
Here the function g=g(μ,x_3) defines the angle of rotation as in the case for the annulus, however, in this case g depends on the two variables
(μ, x_3) and not just one r=|x| as is the case for the annulus. There are two main reasons for this choice of representation of a twist map for
𝐓, which we describe below.
* Firstly, in order to be consistent with twist maps for the annulus, we require that the rotation matrix is an isometry of the boundary
∂T = { x=(x_1, x_2, x_3) : ( √(x_1^2+x_2^2) - ρ )^2 + x_3^2 = 1 },
with respect to the metric induced from its embedding in ℝ^3. (This is similar to what was done earlier in the case of an annulus.) Then with this in mind the isometries
of ∂𝐓 are 𝐒𝐎(3) matrices of the form,
Q = [ cosφ -sinφ 0; sinφ cosφ 0; 0 0 1 ].
* Secondly we allow the angle of rotation function g here to depend on the two variables (μ, x_3) instead of one to incorporate all the "ball"
variables in the product structure on T. Note that (μ, x_3) collapses into r=|x| in the annulus case as here one deals with an interval
(a one dimensional ball).
Now in preparation for the upcoming calculations let us denote y = [x_1,x_2,0]^t, ϑ = y/|y| and g_μ = ∂g/∂μ.
Then it is easily seen that
∇u = Q + Q̇x ⊗ ∇g,
(∇u)(∇u)^t u = Qx + ⟨∇g, x⟩ Q̇x,
|∇u|^2 = 3 + (μ+ρ)^2 |∇g|^2,
|u|^2 = (μ+ρ)^2 + x_3^2,
det(∇u) = det( Q + Q̇x ⊗ ∇g )
= 1 + ⟨Q^t Q̇x, ∇g⟩ = 1.
Note that the last equality results from the fact that the product Q^t Q̇ is skew-symmetric and ∇g = [(μ+ρ)^-1 x_1 g_μ, (μ+ρ)^-1 x_2 g_μ, g_x_3]^t.
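These identities can be spot-checked numerically as well. The snippet below is our own illustration (ρ and the angle function g are arbitrary test choices): it compares central-difference values of det(∇u) and |∇u|^2 with the closed forms above at random points of 𝐓.

```python
import numpy as np

# Spot check of det(grad u) = 1 and |grad u|^2 = 3 + (mu+rho)^2 |grad g|^2.
rho = 2.0
g = lambda mu, x3: np.sin(mu)*x3 + mu**2             # assumed smooth angle function

def u(x):
    x1, x2, x3 = x
    mu = np.hypot(x1, x2) - rho
    c, s = np.cos(g(mu, x3)), np.sin(g(mu, x3))
    return np.array([c*x1 - s*x2, s*x1 + c*x2, x3])  # u = Q(mu, x3) x

def jacobian(x, h=1e-6):                             # central differences
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        J[:, j] = (u(x + e) - u(x - e))/(2*h)
    return J

rng = np.random.default_rng(1)
for _ in range(3):
    mu, x3, phi = rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5), rng.uniform(0, 2*np.pi)
    x = np.array([(mu+rho)*np.cos(phi), (mu+rho)*np.sin(phi), x3])
    J, h = jacobian(x), 1e-6
    gmu = (g(mu+h, x3) - g(mu-h, x3))/(2*h)          # |grad g|^2 = g_mu^2 + g_x3^2
    gx3 = (g(mu, x3+h) - g(mu, x3-h))/(2*h)
    print(np.linalg.det(J) - 1.0,
          np.sum(J**2) - (3 + (mu+rho)**2*(gmu**2 + gx3**2)))    # both ~ 0
```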
Therefore using the above it is seen that the energy of a twist map is given by
𝔽[u;𝐓] = 1/2 ∫_T |∇u|^2/|u|^2 dx
= π ∫_𝔹 [3 + (μ+ρ)^2 |∇g|^2]/[(μ+ρ)^2 + x_3^2] (μ+ρ) dμ dx_3.
Here we are using the fact that a change in the co-ordinates (r,θ,ϕ) → (μ,x_3,ϕ) results in a Jacobian factor of μ+ρ
in the integral, i.e.,
∫_0^2π ∫_0^2π ∫_0^1 r(ρ + rcosθ) dr dθ dϕ = ∫_0^2π ∫_𝔹 (μ+ρ) dμ dx_3 dϕ.
Hence (<ref>) becomes,
𝔽[u;𝐓] = π ∫_𝔹 (μ+ρ)^3 (g_μ^2 + g_x_3^2)/[(μ+ρ)^2 + x_3^2] dμ dx_3 + 3/2 ∫_𝐓 |x|^-2 dx,
where here the additional absolute constant does not affect the variational structure of 𝔽. Now to derive the Euler-Lagrange equation
associated to the energy integral on the right it suffices to take variations of g = g(μ,x_3) by some φ∈𝐂^∞_c(𝔹). This calculation leads to the following
divergence form equation:
∂/∂μ( (μ+ρ)^3 g_μ/[(μ+ρ)^2 + x_3^2] ) + ∂/∂x_3( (μ+ρ)^3 g_x_3/[(μ+ρ)^2 + x_3^2] ) = 0,
that is,
div[ (μ+ρ)^3 ∇g/((μ+ρ)^2 + x_3^2) ] = 0.
Evidently the identity boundary condition on u translates into g(z) = 2kπ for some fixed k ∈ ℤ and all z = (μ,x_3) ∈ ∂𝔹.
Now suppose g solves (<ref>). Then by an application of the divergence theorem
it is seen that the only solution to this boundary value problem is the trivial one, namely, g(μ,x_3) = 2πk for all (μ,x_3) ∈ 𝔹.
Indeed
0 = ∫_𝔹 div( (μ+ρ)^3 ∇g/[(μ+ρ)^2 + x_3^2] ) dμ dx_3
= ∫_0^2π (cosθ+ρ)^3/[(cosθ+ρ)^2 + sin^2θ] ∂g/∂r(1,θ) dθ.
Now again as g solves (<ref>) an application of the divergence theorem also gives
∫_𝔹 (μ+ρ)^3 (g_μ^2 + g_x_3^2)/[(μ+ρ)^2 + x_3^2] dμ dx_3 =
∫_𝔹 [ (μ+ρ)^3 (g_μ^2 + g_x_3^2)/[(μ+ρ)^2 + x_3^2] + g div( (μ+ρ)^3 ∇g/[(μ+ρ)^2 + x_3^2] ) ] dμ dx_3
= ∫_0^2π g(1,θ) (cosθ+ρ)^3/[(cosθ+ρ)^2 + sin^2θ] ∂g/∂r(1,θ) dθ
= 2πk ∫_0^2π (cosθ+ρ)^3/[(cosθ+ρ)^2 + sin^2θ] ∂g/∂r(1,θ) dθ = 0.
Note that in obtaining the last identity we have used (<ref>) combined with the boundary condition satisfied by g, namely,
g(1,θ) = 2πk for 0 ≤ θ ≤ 2π. Hence
∫_𝔹 (μ+ρ)^3 (g_μ^2 + g_x_3^2)/[(μ+ρ)^2 + x_3^2] dμ dx_3 = 0.
Now since by assumption ρ>1 we have ρ + μ > 0, as |μ| ≤ r < 1 and so μ > -1.
Hence (<ref>) gives |∇g|^2 = 0 and thus g(μ,x_3) = 2πk, again by invoking the boundary condition on
g. It therefore follows that here we have no non-trivial solutions. Interestingly note that this conclusion stems from one crucial
difference between the annulus 𝔸 and the solid torus 𝐓, in that 𝔸 has two boundary components whilst
𝐓 only has one. It was precisely this difference that turned out to be crucial in the application of the divergence theorem.
There are no non-trivial twist solutions (<ref>) to the Euler-Lagrange equations
associated with the energy functional 𝔽 on a solid torus T.
§ TWIST MAPS ON TORI WITH DISCONNECTED DOUBLE COMPONENT BOUNDARY
In contrast to what was seen above let us next move on to considering a "thickened" torus, that is, the domain obtained
topologically by taking the product of a two-dimensional torus and an interval. Note that here the boundary of the resulting
domain consists of two disjoint copies of the initial torus and is in particular not connected. Now for definiteness and to fix
notation let us set 𝕋=𝕋_ρ to be (see Fig. 1)
𝕋 = { x=(x_1, x_2, x_3) : ( √(x_1^2+x_2^2) - ρ)^2 + x_3^2 = r^2 , a < r < 1 }.
Here 0<a<1<ρ are fixed and the aim is to seek non-trivial extremising twist maps for the energy functional 𝔽
over the admissible class of maps 𝒜_ϕ(𝕋). Using the same co-ordinate system as in the earlier case we see
that the (μ,x_3) are the co-ordinates of a two-dimensional annulus centred at the origin. Additionally, for reasons similar to those
discussed earlier, we define twist maps on 𝕋 as u(x) = Q(μ,x_3) x where the rotation matrix Q = Q(μ, x_3)
in SO(3) is as in (<ref>).
A straightforward calculation shows that the
energy of a twist map is given by the integral
𝔽[u;𝕋] = 1/2 ∫_𝕋 |∇u|^2/|u|^2 dx
= π ∫_𝔹_1∖𝔹_a [3 + (μ+ρ)^2 |∇g|^2]/[(μ+ρ)^2 + x_3^2] (μ+ρ) dμ dx_3
= π ∫_𝔹_1∖𝔹_a (μ+ρ)^3 (g_μ^2 + g_x_3^2)/[(μ+ρ)^2 + x_3^2] dμ dx_3 + 3/2 ∫_𝕋 |x|^-2 dx.
Similar to what was described earlier, in obtaining the second equality we have used the integral identity
∫_0^2π ∫_0^2π ∫_a^1 r(ρ + rcosθ) dr dθ dϕ = ∫_0^2π ∫_𝔹_1∖𝔹_a (μ+ρ) dμ dx_3 dϕ.
The Euler-Lagrange equation can be obtained in the standard way by taking variations φ∈𝐂^∞_c(𝔹_1∖𝔹_a) where
𝔹_1∖𝔹_a = { (μ,x_3)∈ℝ^2 : a^2<μ^2+x_3^2 <1 }. This calculation again leads to the Euler-Lagrange equation given
by (<ref>) where we assume
without loss of generality that the boundary condition on the rotation angle function g is set to g(μ,x_3) = 0 for (μ,x_3) ∈ ∂𝔹_a
and g(μ,x_3) = 2πk for (μ,x_3) ∈ ∂𝔹_1 with k ∈ ℤ. Therefore solutions to (<ref>) satisfy,
0 = ∫_𝔹_1∖𝔹_a div( (μ+ρ)^3 ∇g/[(μ+ρ)^2 + x_3^2] ) dx
= ∫_∂𝔹_1 (μ+ρ)^3 (∇g · n)/[(μ+ρ)^2 + x_3^2] dℋ^1 -
∫_∂𝔹_a (μ+ρ)^3 (∇g · n)/[(μ+ρ)^2 + x_3^2] dℋ^1.
Subsequently
[ 𝔽[u;𝕋] - 3/2 ∫_𝕋 |x|^-2 dx ]/π = ∫_𝔹_1∖𝔹_a [ (μ+ρ)^3 |∇g|^2/[(μ+ρ)^2 + x_3^2]
+ g div( (μ+ρ)^3 ∇g/[(μ+ρ)^2 + x_3^2] ) ] dx
= ∫_∂𝔹_1 g (μ+ρ)^3 (∇g · n)/[(μ+ρ)^2 + x_3^2] dℋ^1
- ∫_∂𝔹_a g (μ+ρ)^3 (∇g · n)/[(μ+ρ)^2 + x_3^2] dℋ^1.
Hence taking into account the boundary conditions, e.g. g=0 on ∂𝔹_a and g=2πk on ∂𝔹_1, we obtain that if g is a solution of (<ref>) then,
[ 𝔽[u;𝕋] - 3/2 ∫_𝕋 |x|^-2 dx ]/π = 2πk ∫_∂𝔹_1 (μ+ρ)^3 (∇g · n)/[(μ+ρ)^2 + x_3^2] dℋ^1,
k ∈ ℤ.
Evidently (<ref>) with the stated boundary conditions has a unique solution. Indeed if g, g̃ are two solutions to
(<ref>) with g = g̃ = 0 on ∂𝔹_a and g = g̃ = 2πk on ∂𝔹_1,
then ĝ = g - g̃ solves (<ref>) with ĝ = 0 on ∂[𝔹_1∖𝔹_a]. Then by (<ref>)
∫_𝔹_1∖𝔹_a (μ+ρ)^3 |∇ĝ|^2/[(μ+ρ)^2 + x_3^2] dx = 0.
However in view of ρ>1 this gives |∇ĝ|^2 ≡ 0 and so, invoking the boundary conditions, ĝ ≡ 0, i.e., g = g̃.
As existence follows from standard arguments it follows that (<ref>) has a unique smooth solution g = g(μ, x_3; k) for each k ∈ ℤ.
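Although we are not aware of a closed form for g, the boundary value problem is easy to approximate numerically. The sketch below is entirely our own illustration (the parameters a, ρ, k, the coarse grid and the plain Gauss-Seidel iteration are ad hoc choices): it solves the divergence-form equation in polar coordinates (t,θ) on 𝔹_1∖𝔹_a and then evaluates the reduced twist energy by quadrature.

```python
import numpy as np

# Finite-difference sketch for  div( c grad g ) = 0,
# c(mu, x3) = (mu+rho)^3/((mu+rho)^2 + x3^2),  on  a < |(mu, x3)| < 1,
# with g = 0 on the inner and g = 2*pi*k on the outer boundary circle.
a, rho, k = 0.3, 2.0, 1
nt, nth = 30, 48
t = np.linspace(a, 1.0, nt)
th = np.linspace(0.0, 2*np.pi, nth, endpoint=False)
dt, dth = t[1] - t[0], th[1] - th[0]

def c(tt, thh):
    mu, x3 = tt*np.cos(thh), tt*np.sin(thh)
    return (mu + rho)**3/((mu + rho)**2 + x3**2)

g = 2*np.pi*k*(t[:, None] - a)/(1 - a)*np.ones((nt, nth))  # guess obeying the BCs

for _ in range(2000):                       # plain Gauss-Seidel sweeps
    for i in range(1, nt - 1):
        for j in range(nth):
            jm, jp = (j - 1) % nth, (j + 1) % nth
            cE = 0.5*(c(t[i], th[j]) + c(t[i+1], th[j]))
            cW = 0.5*(c(t[i], th[j]) + c(t[i-1], th[j]))
            cN = 0.5*(c(t[i], th[j]) + c(t[i], th[jp]))
            cS = 0.5*(c(t[i], th[j]) + c(t[i], th[jm]))
            tE, tW = t[i] + 0.5*dt, t[i] - 0.5*dt
            num = (tE*cE*g[i+1, j] + tW*cW*g[i-1, j])/(t[i]*dt**2) \
                + (cN*g[i, jp] + cS*g[i, jm])/(t[i]**2*dth**2)
            den = (tE*cE + tW*cW)/(t[i]*dt**2) + (cN + cS)/(t[i]**2*dth**2)
            g[i, j] = num/den

gt = np.gradient(g, dt, axis=0)             # reduced energy  pi * int c |grad g|^2
gth = np.gradient(g, dth, axis=1)/t[:, None]
C = c(t[:, None], th[None, :])
print(np.pi*np.sum(C*(gt**2 + gth**2)*t[:, None])*dt*dth)
```

A proper implementation would of course use a sparse linear solver rather than Gauss-Seidel; the sketch is only meant to make the boundary value problem concrete.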
§ EULER-LAGRANGE ANALYSIS AND TWISTS AS CLASSICAL SOLUTIONS
The goal of this section is to examine the solution g = g(μ, x_3; k) to (<ref>) with the prescribed boundary conditions in relation to the
Euler-Lagrange system (<ref>) associated with 𝔽 on 𝒜_ϕ(𝕋). To this end recall that the system takes the form
(∇u)^t/|u|^2 [ Δu + |∇u|^2/|u|^2 u - 2/|u|^2 ∇u (∇u)^t u ] = ∇p.
For the ease of notation from now on we shall set ξ = μ + ρ. Hence using the identities
(<ref>) we have
(∇u)(∇u)^t u = Qx + ⟨∇g, x⟩ Q̇x,
|∇u|^2 |u|^-2 = (3 + ξ^2 |∇g|^2) |x|^-2,
Δu = 2ξ^-1 g_ξ Q̇x + Δg Q̇x + |∇g|^2 Q̈x.
Therefore from (<ref>) and a basic calculation we obtain
Δu + |∇u|^2/|u|^2 u - 2/|u|^2 ∇u (∇u)^t u = ( 2g_ξ/ξ +
Δg - 2⟨∇g, x⟩/|x|^2 ) Q̇x + |∇g|^2 Q̈x
+ [1 + ξ^2 |∇g|^2]/|x|^2 Qx.
Now since we have Δg = Δ_ξ,x g + g_ξ/ξ, where Δ_ξ,x denotes the Laplacian with respect to the ξ and x_3
variables, we can rewrite this as
Δu + |∇u|^2/|u|^2 u - 2/|u|^2 ∇u (∇u)^t u = ( 3g_ξ/ξ +
Δ_ξ,x g - 2⟨∇g, x⟩/|x|^2 ) Q̇x + |∇g|^2 Q̈x
+ [1 + ξ^2 |∇g|^2]/|x|^2 Qx.
Now upon recalling that the desired twist solution satisfies (<ref>) we have that
div( ξ^3 ∇g/(ξ^2 + x_3^2) ) = ξ^3 Δ_ξ,x g/(ξ^2 + x_3^2) +
( 3ξ^2/(ξ^2 + x_3^2) - 2ξ^4/(ξ^2+x_3^2)^2 ) g_ξ - 2ξ^3 x_3 g_x_3/(ξ^2 + x_3^2)^2 = 0.
Thus dividing both sides by ξ^3/(ξ^2+x_3^2) and taking the negative terms to one side gives
Δ_ξ,x g + 3g_ξ/ξ = 2( ξ g_ξ + x_3 g_x_3 )/|x|^2 = 2⟨∇_ξ,x g, z⟩/|x|^2,
where z = (ξ,x_3)^t and ∇_ξ,x denotes the gradient with respect to the (ξ, x_3) variables. Now since
⟨∇g, x⟩ = ⟨∇_ξ,x g, z⟩ we obtain,
Δ_ξ,x g + 3g_ξ/ξ = 2⟨∇g, x⟩/|x|^2,
and so as a result
Δu + |∇u|^2/|u|^2 u - 2/|u|^2 ∇u (∇u)^t u = |∇g|^2 Q̈x +
[1 + ξ^2 |∇g|^2]/|x|^2 Qx.
Next, referring to the definition of Q, a basic calculation gives Q̇ = J_1 Q and Q̈ = -J_2 Q, where
J_1 = [ 0 -1 0; 1 0 0; 0 0 0 ],
J_2 = [ 1 0 0; 0 1 0; 0 0 0 ].
Hence with the above notation the Euler-Lagrange system associated with the twist u, satisfying (<ref>), simplifies to
(∇u)^t/|x|^4 [ (1 + ξ^2 |∇g|^2) I - |x|^2 |∇g|^2 J_2 ] Qx
= [ (1 + ξ^2 |∇g|^2) I - |x|^2 |∇g|^2 J_2 ] x/|x|^4
= 1/|x|^4 [ (1 - x_3^2 |∇g|^2) x_1; (1 - x_3^2 |∇g|^2) x_2; (1 + ξ^2 |∇g|^2) x_3 ]
= ∇( -1/(2|x|^2) ) + |∇g|^2/|x|^4 [ -x_3^2 x_1; -x_3^2 x_2; ξ^2 x_3 ] = ∇p.
Considering the last line in the above equation it is plain that for u to grant a solution to the Euler-Lagrange equation it must be that
-1/2 ∇|x|^-2 + |∇g|^2/|x|^4 (-x_3^2 x_1, -x_3^2 x_2, ξ^2 x_3)^t = ∇p,
or equivalently that the second term on the left is a gradient. But for this to be the case the latter term must necessarily be curl-free
and so this leads to the system of equations
0 = ∂/∂x_1( -x_3^2 |∇g|^2 x_2/|x|^4 )
- ∂/∂x_2( -x_3^2 |∇g|^2 x_1/|x|^4 ),
0 = ∂/∂x_1( ξ^2 |∇g|^2 x_3/|x|^4 )
- ∂/∂x_3( -x_3^2 |∇g|^2 x_1/|x|^4 ),
0 = ∂/∂x_2( ξ^2 |∇g|^2 x_3/|x|^4 )
- ∂/∂x_3( -x_3^2 |∇g|^2 x_2/|x|^4 ).
It can be easily verified that equation (<ref>) is satisfied for any twist map since here we have
∂/∂x_1( -x_3^2 |∇g|^2 x_2/|x|^4 )
- ∂/∂x_2( -x_3^2 |∇g|^2 x_1/|x|^4 )
= x_3^2 |∇g|^2 [ ∂/∂x_2( x_1/|x|^4 )
- ∂/∂x_1( x_2/|x|^4 ) ] + x_3^2/|x|^4 [ x_1 ∂|∇g|^2/∂x_2 - x_2 ∂|∇g|^2/∂x_1 ]
= x_3^2/|x|^4 [ (x_1 x_2/ξ) ∂|∇g|^2/∂ξ
- (x_2 x_1/ξ) ∂|∇g|^2/∂ξ ] = 0.
We point out that the last line results upon noting the relations
∂/∂x_1 = (x_1/ξ) ∂/∂ξ - (x_2/ξ^2) ∂/∂ϕ, ∂/∂x_2 = (x_2/ξ) ∂/∂ξ + (x_1/ξ^2) ∂/∂ϕ.
Using this we can again see that (<ref>) and (<ref>) can be written as a single equation in the following way,
∂/∂x_1( ξ^2 |∇g|^2 x_3/|x|^4 )
+ ∂/∂x_3( x_3^2 |∇g|^2 x_1/|x|^4 )
= (x_1 x_3/ξ) ∂/∂ξ( ξ^2 |∇g|^2/|x|^4 )
+ x_1 ∂/∂x_3( x_3^2 |∇g|^2/|x|^4 ),
∂/∂x_2( ξ^2 |∇g|^2 x_3/|x|^4 )
- ∂/∂x_3( -x_3^2 |∇g|^2 x_2/|x|^4 )
= (x_2 x_3/ξ) ∂/∂ξ( ξ^2 |∇g|^2/|x|^4 )
+ x_2 ∂/∂x_3( x_3^2 |∇g|^2/|x|^4 ).
Therefore it is apparent that (<ref>) and (<ref>) become
x_3 ∂/∂ξ( ξ^2 |∇g|^2/|x|^4 ) + ξ ∂/∂x_3( x_3^2 |∇g|^2/|x|^4 ) = 0.
Now since we have the identities
∂/∂ξ( ξ^2/|x|^4 ) = 2ξ(x_3^2 - ξ^2)/(ξ^2+x_3^2)^3
= -(ξ/x_3) ∂/∂x_3( x_3^2/|x|^4 ),
we obtain that (<ref>) simplifies further to
(x_3 ξ/|x|^4) [ ξ ∂|∇g|^2/∂ξ + x_3 ∂|∇g|^2/∂x_3 ] = 0, that is, ξ ∂|∇g|^2/∂ξ + x_3 ∂|∇g|^2/∂x_3 = 0.
Hence for a solution g to (<ref>) with the prescribed boundary conditions to furnish a solution
to the Euler-Lagrange system (<ref>) associated with 𝔽 it is necessary for g to satisfy
ξ ∂|∇g|^2/∂ξ + x_3 ∂|∇g|^2/∂x_3 = 0.
We now show that (<ref>) is also sufficient. Indeed assuming (<ref>) the desired conclusion will
follow upon showing that (<ref>) holds. Towards this end set f to be the function,
f(ξ,x_3) = -∫_0^ξ x_3^2 |∇g|^2/(τ^2 + x_3^2)^2 τ dτ.
Then one can easily verify that,
∂f/∂x_1 = -x_3^2 |∇g|^2 x_1/|x|^4, ∂f/∂x_2
= -x_3^2 |∇g|^2 x_2/|x|^4.
Furthermore using (<ref>) it is plain that
∂f/∂x_3 = -∫_0^ξ ∂/∂x_3( x_3^2 |∇g|^2 τ/(τ^2 + x_3^2)^2 ) dτ
= ∫_0^ξ ∂/∂τ( τ^2 |∇g|^2 x_3/(τ^2 + x_3^2)^2 ) dτ
= ξ^2 |∇g|^2 x_3/|x|^4.
As a result ∇f = (|∇g|^2/|x|^4) (-x_3^2 x_1, -x_3^2 x_2, ξ^2 x_3)^t.
A twist map u with the corresponding angle of rotation function g = g(μ, x_3; k) satisfying (<ref>)
and g=0 on ∂𝔹_a, g=2πk on ∂𝔹_1 (with k ∈ ℤ) is a solution to the Euler-Lagrange system
(<ref>) associated with 𝔽 on 𝒜_ϕ(𝕋) if and only if it satisfies (<ref>).
§ REFERENCES
AIM K. Astala, T. Iwaniec, G. Martin, Elliptic Partial Differential Equations and Quasiconformal Mappings in the Plane,
Princeton Mathematical Series, Vol. 48, Princeton University Press, 2009.
AIMO K. Astala, T. Iwaniec, G. Martin, J. Onninen, Extremal Mappings of Finite Distortion, Proc. Lond. Math. Soc.,
Vol. 91, 2005, pp. 655-702.
Brothers J.E. Brothers, W.P. Ziemer, Minimal rearrangements of Sobolev functions,
Acta Univ. Carolin. Math. Phys., Vol. 28, 1987, pp 13-24.
Federer H. Federer, Geometric Measure Theory, Classics in Mathematics,
Vol. 153, Springer-Verlag, 1969.
HM-C S. Hencl, C. Mora-Corral, Diffeomorphic approximation of continuous almost everywhere injective Sobolev
deformations in the plane, Q. J. Math., Vol. 66, 2015, pp. 1055-1062.
IO T. Iwaniec, J. Onninen, n-harmonic mappings between annuli: the art of integrating free Lagrangians,
Mem. Amer. Math. Soc., Vol. 218, viii+105 pp., 2012.
IS T. Iwaniec, V. Sverak, Mappings with integrable dilatations, Proc. Amer. Math. Soc., Vol. 118,
1993, pp. 181-188.
MSZ J. Malý, D. Swanson, W. Ziemer, The co-area formula for Sobolev mappings, Trans. Amer. Math. Soc.,
Vol. 355, 2003, pp. 477-492.
MC C.B. Morrey, Multiple Integrals in the Calculus of Variations, Classics in Mathematics, Vol. 130,
Springer, 1966.
CT C. Morris, A. Taheri, On the Uniqueness of Energy Minimisers in Homotopy Classes, Submitted for publication, 2017.
MST S. Müller, S.J. Spector, Q. Tang, Invertibility and a topological property of Sobolev maps, SIAM J. Math. Anal.,
Vol. 27, pp. 959-976, 1996.
ShT M.S. Shahrokhi-Dehkordi, A. Taheri, Generalised twists, stationary loops and the Dirichlet energy over a space of
measure preserving maps, Calc. Var. & PDEs, Vol. 35, 2009, pp. 191-213.
ShT2 M.S. Shahrokhi-Dehkordi, A. Taheri, Generalised twists, SO(n) and the p-energy over a space of measure
preserving maps, Ann. Inst. Henri Poincaré, Analyse non linéaire, Vol. 26, 2009, pp. 1897-1924.
S V. Sverak, Regularity properties of deformations with finite energy, Arch. Rational Mech. Anal., Vol. 100,
1988, pp. 105-127.
TA2 A. Taheri, Local minimizers and quasiconvexity - the impact of Topology,
Arch. Rational Mech. Anal., Vol. 176, No. 3, 2005, pp. 363-414.
TA3 A. Taheri, Minimizing the Dirichlet energy over a space of measure preserving maps,
Top. Meth. Nonlinear Anal., Vol. 33, 2009, pp. 179-204.
TA4 A. Taheri, Homotopy classes of self-maps of annuli, generalised twists and spin degree,
Arch. Rational Mech. Anal., Vol. 197, 2010, pp. 239-270.
TA5 A. Taheri, Spherical twists, stationary loops and harmonic maps from generalised
annuli into spheres, NoDEA, Vol. 19, 2012, pp. 79-95.
VG S.K. Vodopyanov, V.M. Gol'dshtein, Quasiconformal mappings and spaces of functions with
generalized first derivatives, Siberian Math. J., Vol. 17, 1977, pp. 515-531.
† DEPARTMENT OF MATHEMATICS, UNIVERSITY
OF SUSSEX, FALMER, BRIGHTON BN1 9RF, ENGLAND, UK.
E-mail address: [email protected]
| null | null | null | null | null | null |
http://arxiv.org/abs/1701.07521v2 | 20170125235717 | Floor Scale Modulo Lifting for QC-LDPC codes | [
"Nikita Polyanskii",
"Vasiliy Usatyuk",
"Ilya Vorobyev"
] | cs.IT | [
"cs.IT",
"math.IT"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.08016v1 | 20170127112314 | Interaction effects in a chaotic graphene quantum billiard | [
"Imre Hagymasi",
"Peter Vancso",
"Andras Palinkas",
"Zoltan Osvath"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"cond-mat.str-el",
"nlin.CD"
] | null | null | null | null | null | null |
|
http://arxiv.org/abs/1701.07602v3 | 20170126075020 | Coarse-graining and the Blackwell order | [
"Johannes Rauh",
"Pradeep Kr. Banerjee",
"Eckehard Olbrich",
"Jürgen Jost",
"Nils Bertschinger",
"David Wolpert"
] | cs.IT | [
"cs.IT",
"math.IT",
"62B15, 94A15, 94A17"
] |
Lattice coding for Rician fading channels from Hadamard rotations
Alex Karrila^1, Niko R. Väisänen^1, David Karpuk^1, Member, IEEE, and Camilla Hollanti^1, Member, IEEE
^1A. Karrila, N. R. Väisänen, D. Karpuk and C. Hollanti are with the Department of Mathematics and Systems Analysis, Aalto University, P.O. Box 11100, FI-00076 AALTO, Espoo, Finland.
Emails: firstname.(letter.)[email protected].
D. Karpuk was supported by Academy of Finland grant #268364.
Suppose we have a pair of information channels, κ_1,κ_2, with a common input. The Blackwell order is a partial order over channels that compares κ_1 and κ_2 by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, κ_1 is said to be Blackwell-inferior to κ_2 if and only if κ_1 can be constructed by garbling the output of κ_2. A related partial order stipulates that κ_2 is more capable than κ_1 if the mutual information between the input and output is larger for κ_2 than for κ_1 for any distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where κ_1 is less capable than κ_2 but not Blackwell-inferior. We show that this may even happen when κ_1 is constructed by coarse-graining the inputs of κ_2. Such a coarse-graining is a special kind of “pre-garbling” of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel. This contradicts the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions.
Keywords: Channel preorders; Blackwell order; degradation order; garbling; more capable; coarse-graining
§ INTRODUCTION
Suppose we are given the choice of two channels that both provide information about the same random variable, and that we want to make a decision based on the channel outputs. Suppose that our utility function depends on the joint value of the input to the channel and our resultant decision based on the channel outputs. Suppose as well that we know the precise conditional distributions defining the channels, and the distribution over channel inputs.
Which channel should we choose? The answer to this question depends on the choice of our utility function as well as on the details of the channels and the input distribution. So for example, without specifying how we will use the channels, in general we cannot just compare their information capacities to choose between them.
Nonetheless, for certain pairs of channels we can make our choice, even without knowing the utility functions or the distribution over inputs. Let us represent the two channels by two (column) stochastic matrices κ_1 and κ_2, respectively.
Then if there exists another stochastic matrix λ such that κ_1 = λ·κ_2, there is never any reason to strictly prefer κ_1; for if we choose κ_2, we can always make our decision by chaining the output of κ_2 through the channel λ and then using the same decision
function we would have used had we chosen κ_1.
This simple argument shows that whatever the three stochastic matrices are and whatever the decision rule we would use if we chose channel κ_1, we can always get the same expected utility
by instead choosing channel κ_2 with an appropriate decision rule.
In this kind of situation, where κ_1 = λ·κ_2, we say that κ_1 is a garbling (or degradation) of κ_2.
It is much more difficult to prove that the converse also holds true:
Let κ_1,κ_2 be two stochastic matrices representing two channels with the same input alphabet. Then the following two conditions are equivalent:
* When the agent chooses κ_2 (and uses the decision rule
that is optimal for κ_2), her expected utility is always at least as big as the expected utility when she chooses κ_1 (and uses the optimal decision rule for κ_1), independent of the utility function and the distribution of the input S.
* κ_1 is a garbling of κ_2.
Blackwell formulated his result in terms of a statistical decision maker who reacts to the outcome of a
statistical experiment. We prefer to speak of a decision problem instead of a statistical experiment.
See <cit.> for an overview.
Blackwell's theorem motivates looking at the following partial order over channels κ_1,κ_2 with a common input alphabet:
κ_1≼κ_2
:⟺ one of the two statements
in Blackwell's theorem holds true.
We call this partial order the Blackwell order (this partial order is called degradation order by other authors <cit.>). If κ_1≼κ_2, then κ_1 is said to be Blackwell-inferior to κ_2.
Strictly speaking, the Blackwell order is only a preorder, since there are channels κ_1≠κ_2 that satisfy κ_1≼κ_2≼κ_1 (when κ_1 arises from κ_2 by permuting the output alphabet). However, for our purposes such channels can be considered as equivalent. We write κ_1≺κ_2 if κ_1≼κ_2 and κ_1⋡κ_2.
By Blackwell's theorem this implies that κ_2 performs at least as good as κ_1 in any decision problem and that there exist decision problems in which κ_2 outperforms κ_1.
For a given distribution of S, we can also compare κ_1 and κ_2 by comparing the two mutual
informations I(S;X_1), I(S;X_2) between the common input S and the channel outputs X_1 and X_2. The data processing inequality shows that κ_2≽κ_1 implies I(S;X_2)≥ I(S;X_1). However, the converse implication does not hold.
The intuitive reason is that for the Blackwell order,
not only the amount of information is important. Rather, the question is how much of the information that κ_1
or κ_2 preserve is relevant for a given fixed decision problem (that is, a given fixed utility function).
Given two channels κ_1,κ_2, suppose that I(S;X_2)≥ I(S;X_1) for all distributions of S. In this case, we say that κ_2 is more capable than κ_1. Does this imply that κ_1≼κ_2? The answer is known to be negative in general <cit.>.
In Proposition <ref> we introduce a new surprising example of this phenomenon with a particular structure. In fact, in this example, κ_1 is a Markov approximation of κ_2 by a deterministic function, in the following sense:
Consider another random variable f(S) that arises from S by applying a (deterministic) function f.
Given two random variables S, X, denote by X S the channel defined by the conditional probabilities P_X|S(x|s),
and let κ_2:=(X S) and κ_1:=(X f(S))·(f(S) S).
Thus, κ_1 can be interpreted as first replacing S by f(S) and then sampling X according to the conditional distribution P_X|f(S)(x|f(s)).
Which channel is superior? Using the data processing inequality, it is easy to see that κ_1 is less capable than κ_2. However, as Proposition <ref> shows, in general κ_1⋠κ_2.
We call κ_1 a Markov approximation, because the output of κ_1 is independent of the input S given f(S). The channel κ_1 can also be obtained from κ_2 by “pre-garbling” (Lemma <ref>); that is,
there is another stochastic matrix λ^f that satisfies κ_1 = κ_2·λ^f.
It is known that pre-garbling may improve the performance of a channel (but not its capacity) as we recall in Section <ref>. What may be surprising is that this can happen for pre-garblings of the form λ^f, which have the effect of coarse-graining according to f.
The fact that the more capable preorder does not imply the Blackwell order shows that “Shannon information,” as captured by the mutual information, is not the same as “Blackwell information,” as needed for the Blackwell decision problems. Indeed, our example explicitly shows that even though coarse-graining always reduces Shannon information, it need not reduce Blackwell information.
Finally, let us mention that there are further ways of comparing channels (or stochastic matrices); see <cit.> for an overview.
Proposition <ref> builds upon another effect that we find paradoxical: Namely, there exist random variables S,X_1,X_2 and there exists a function f:𝒮→𝒮' from the support 𝒮 of S to a finite set 𝒮'
such that the following holds:
* S and X_1 are independent given f(S).
* (X_1 f(S)) ≼ (X_2 f(S)).
* (X_1 S) ⋠ (X_2 S).
Statement 1) says that everything X_1 knows about S, it knows through f(S). Statement 2) says that X_2
knows more about f(S) than X_1. Still, 3) says that we cannot conclude that X_2 knows more about S
than X_1. The paradox illustrates that it is difficult to formalize what it means to “know more.”
Understanding the Blackwell order is an important aspect of understanding information decompositions; that is, the
quest to find new information measures that separate different aspects of the mutual information
I(S;X_1,…,X_k) of k random variables X_1,…,X_k and a target variable S (see the other
contributions of this special issue and references therein). In particular, <cit.>
argues that the Blackwell order provides a natural criterion when a variable X_1 has unique information
about S with respect to X_2. We hope that the examples we present here are useful in developing intuition on how
information can be shared among random variables and how it behaves when applying a deterministic function, such as a
coarse-graining. Further implications of our examples on information decompositions are discussed in <cit.>.
In the converse direction, information decomposition measures (such as measures of unique information) can be used to
study the Blackwell order and deviations from the Blackwell order. We illustrate this idea in Example <ref>.
The remainder of this work is organized as follows: In Section <ref>, we recall how pre-garbling can be used to improve the performance of a channel. We also show that the pre-garbled channel will always be less capable and that simultaneous pre-garbling of both channels preserves the Blackwell order. In Section <ref>, we state a few properties of the Blackwell order, and we explain why we find these properties counter-intuitive and paradoxical. In particular, we show that coarse-graining the input can improve the performance of a channel. Section <ref> contains a detailed discussion of an example that illustrates these properties.
In Section <ref> we use the unique information measure from <cit.>, which has properties similar to Le Cam's deficiency, to illustrate deviations from the Blackwell relation.
§ PRE-GARBLING
As discussed above (and as made formal in Blackwell's theorem (Theorem <ref>)), garbling the output of a channel (“post-garbling”) never increases the quality of a channel.
On the other hand, garbling the input of a channel (“pre-garbling”) may increase the performance of a channel, as the following example shows.
Suppose that an agent can choose an action from a finite set . She then receives a utility u(a,s) that depends both on the chosen action a∈ and on the value s of a random variable S. Consider the channels
κ_1 =
[ 0.9 0; 0.1 1 ] and κ_2 =
κ_1·[ 0 1; 1 0 ]
=
[ 0 0.9; 1 0.1 ],
and the utility function
s 0 0 1 1
a 0 1 0 1
u(s,a) 2 0 0 1
For uniform input the optimal decision rule for κ_1 is
a(0) = 0,
a(1) = 1
and the opposite
a(0) = 1,
a(1) = 0
for κ_2. The expected utility with κ_1 is 1.4, while using κ_2, it is slightly higher, 1.45.
It is also not difficult to check that neither of the two channels is a garbling of the other (cf. Prop. 3.22
in <cit.>).
The intuitive reason for the difference in the expected utilities is that the channel κ_2 transmits one of the states without noise and the other state with noise. With a convenient pre-processing, it is possible to make sure that the relevant information for choosing an action and for optimizing expected utility is transmitted with less noise.
Note the symmetry of the example: Each of the two channels arises from the other by a convenient pre-processing, since the pre-processing is invertible. Hence, the two channels are not comparable by the Blackwell order. In contrast, two channels that only differ by an invertible garbling of the output are equivalent with respect to the Blackwell order.
The pre-garbling in Example <ref> is invertible, and so it is more aptly described as a pre-processing.
In general, though, pure pre-garbling and pure pre-processing are not easily distinguishable,
and it is easy to perturb
Example <ref> by adding noise without changing the conclusion. In Section <ref>, we will present an example in which the pre-garbling consists of
coarse-graining. It is much more difficult to understand how coarse-graining can be used as sensible pre-processing.
Even though pre-garbling can make a channel better (or, more precisely, more suited for a particular decision problem at hand), pre-garbling cannot invert the Blackwell order:
If κ_1≺κ_2·λ, then κ_1⋡κ_2.
Suppose that κ_1≺κ_2·λ. Then the capacity of κ_1 is less than the capacity
of κ_2·λ, which is bounded by the capacity of κ_2. Therefore, the capacity
of κ_1 is less than the capacity of κ_2.
Also, it follows directly from Blackwell's theorem that
κ_1≼κ_2 implies κ_1·λ≼κ_2·λ
for any channel λ, where the input and output alphabets of λ equal the input alphabet
of κ_1,κ_2. Thus, pre-garbling preserves the Blackwell order when applied to both channels
simultaneously.
Finally, let us remark that certain kinds of simultaneous pre-garbling can also be “hidden” in the utility function: Namely, in Blackwell's theorem, it is not necessary to vary the distribution of S, as long as the support of the (fixed) input distribution has full support (that is, every state of the input alphabet of κ_1 and κ_2 appears with positive probability). In this setting, it suffices to look only at different utility functions. When the input distribution is fixed, it is more convenient to think in terms of random variables instead of channels,
which slightly changes the interpretation of the decision problem. Suppose we are given random variables
S,X_1,X_2 and a utility function u(a,s) depending on the value of S and an action a∈ as above. If we cannot look at both X_1 and X_2, should we rather look at X_1 or at X_2 to take our decision?
The following two conditions are equivalent:
* Under the optimal decision rule, when the agent chooses X_2, her expected utility is always at least as big as the expected utility when she chooses X_1, independent of the utility function.
* (X_1 S) ≼ (X_2 S).
§ PRE-GARBLING BY COARSE-GRAINING
In this section we present a few counter-intuitive properties of the Blackwell order.
There exist random variables S,X_1,X_2 and a function f:𝒮→𝒮' from the support 𝒮 of S to
a finite set 𝒮' such that the following holds:
* S and X_1 are independent given f(S).
* (X_1 f(S)) ≺ (X_2 f(S)).
* (X_1 S) ⋠ (X_2 S).
This result may at first seem paradoxical. After all, property 3) implies that there exists a decision problem involving S for which it is better to use X_1 than X_2.
Property 1) implies that any
information that X_1 has about S is contained in X_1's information about f(S).
One would therefore expect that, from the viewpoint of X_1, any decision problem in which the task is to predict S and to react on S looks like a decision problem in which the task is to react to f(S).
But property 2) implies that for such a decision problem, it may in fact be better to look at X_2.
The proof is by Example <ref>, which will be given in Section <ref>. This example satisfies
* S and X_1 are independent given f(S).
* (X_1 f(S)) ≼ (X_2 f(S)).
* (X_1 S) ⋠ (X_2 S).
It only remains to show that it is possible to also achieve the strict relation (X_1 f(S)) ≺ (X_2
f(S)) in the second statement. This can easily be done by adding a small garbling to the channel X_1 f(S)
(e.g. by adding a binary symmetric channel with sufficiently small noise parameter ϵ). This ensures
(X_1 f(S))≺(X_2 f(S)), and if the garbling is small enough, this does not destroy the property
(X_1 S) ⋠ (X_2 S).
The example from Proposition <ref> also leads to the following paradoxical property:
There exist random variables S,X and there exists a function f:𝒮→𝒮' from the support 𝒮 of S to
a finite set 𝒮' such that the following holds:
(X f(S))·(f(S) S) ⋠ X S.
Let us again give a heuristic argument why we find this property paradoxical. Namely, the combined channel (X
f(S))·(f(S) S) can be seen as a Markov chain approximation of the direct channel X S that corresponds to replacing the conditional distribution
P_X|S(x|s) = ∑_t P_X|Sf(S)(x|s,t) P_f(S)|S(t|s)
by
∑_t P_X|f(S)(x|t) P_f(S)|S(t|s).
Proposition <ref> together with Blackwell's theorem states that there exist situations where this
approximation is better than the correct channel.
Let S,X_1,X_2 be as in Example <ref> in Section <ref> that also proves
Proposition <ref>, and let X=X_2. In that example, the two channels X_1 f(S) and X_2 f(S) are equal. Moreover, X_1 and S are independent given f(S). Thus, (X f(S))·(f(S) S) = (X_1 S). Therefore, the statement follows from (X_1 S)⋠(X_2 S).
On the other hand, the channel (X f(S))·(f(S) S) is always less capable than X S:
For any random variables S, X, and function f:→, the channel (X f(S))·(f(S) S) is less capable than X S.
For any distribution of S, let X' be the output of the channel (X f(S))·(f(S) S). Then, X' is
independent of S given f(S). On the other hand, since f is a deterministic function, X' is independent of f(S) given S. Together, this implies I(S;X') = I(f(S);X'). Using the fact that the joint distributions of (X,f(S)) and (X',f(S)) are identical and applying the data processing inequality gives
I(S;X') = I(f(S);X') = I(f(S);X) ≤ I(S;X).
The setting of Proposition <ref> can also be understood as a specific kind of pre-garbling. Namely,
consider the channel λ^f defined by
λ^f_s',s := P_S|f(S)(s'|f(s)).
The effect of this channel can be characterized as a randomization of the input: The precise value of S is forgotten, and only the value of f(S) is
preserved. Then a new value s' is sampled for S according to the conditional distribution of S given f(S).
(X f(S))·(f(S) S) = (X S)·λ^f.
∑_s_1P_X|S(x|s_1)P_S|f(S)(s_1|f(s)) = ∑_s_1, tP_X|S(x|s_1)P_S|f(S)(s_1|t)P_f(S)|S(t|s)
= ∑_tP_X|f(S)(x|t)P_f(S)|S(t|s),
where we have used that X-S-f(S) forms a Markov chain.
While it is easy to understand that pre-garbling can be advantageous in general (since it can work as
preprocessing), we find surprising that this can also happen in the case where the pre-garbling is done in terms of a
function f; that is, in terms of a channel λ^f that does coarse-graining.
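Since the bookkeeping here is easy to get wrong, a quick numerical confirmation may be welcome. The snippet below is our own illustration; it borrows the And-gate distribution from the next section (with X = X_1), builds λ^f from P_S|f(S), and verifies that the two channel compositions agree entrywise.

```python
import numpy as np

rows = [(0, 0, 0, 0.25), (1, 0, 1, 0.25), (0, 1, 0, 0.125),
        (1, 1, 0, 0.125), (2, 1, 1, 0.25)]
P = np.zeros((3, 2, 2))                        # P[s, x1, x2]
for s, x1, x2, p in rows:
    P[s, x1, x2] = p
f = np.array([0, 0, 1])                        # f(0) = f(1) = 0, f(2) = 1
ps = P.sum((1, 2))                             # distribution of S
X_S = (P.sum(2)/ps[:, None]).T                 # channel S -> X1, column s

F_S = np.zeros((2, 3))                         # deterministic channel S -> f(S)
F_S[f, np.arange(3)] = 1.0
pf = F_S @ ps                                  # distribution of f(S)
joint_xs = X_S*ps                              # joint[x, s]
X_F = np.stack([joint_xs[:, f == t].sum(1)/pf[t] for t in range(2)], axis=1)
lam = np.array([[ps[s2]*(f[s2] == f[s])/pf[f[s]] for s in range(3)]
                for s2 in range(3)])           # lambda^f[s', s] = P(S=s' | f(S)=f(s))
print(np.abs(X_F @ F_S - X_S @ lam).max())     # ~ 0
```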
§ EXAMPLES
Consider the joint distribution
f(s) s x_1 x_2 P_f(S)SX_1X_2
0 0 0 0 1/4
0 1 0 1 1/4
0 0 1 0 1/8
0 1 1 0 1/8
1 2 1 1 1/4
and the function f:{0,1,2}→{0,1} with f(0)=f(1)=0 and f(2)=1. Then X_1 and X_2 are independent uniform binary random variables, and f(S) = And(X_1,X_2). By symmetry, the joint distributions of the pairs (f(S), X_1) and (f(S), X_2) are identical, and so the two channels X_1 f(S) and X_2 f(S) are identical. In particular (X_1 f(S))≼(X_2 f(S)).
On the other hand, consider the utility function
s a u(s,a)
0 0 0
0 1 0
1 0 1
1 1 0
2 0 0
2 1 1
To compute the optimal decision rule, let us look at the conditional distributions:
s x_1 P_S|X_1(s|x_1)
0 0 1/2
1 0 1/2
0 1 1/4
1 1 1/4
2 1 1/2
s x_2 P_S|X_2(s|x_2)
0 0 3/4
1 0 1/4
0 1 0
1 1 1/2
2 1 1/2
The optimal decision rule for X_1 is a(0) = 0, a(1) = 1, with expected utility
u_X_1 := 1/2· 1/2 + 1/2· 1/2 = 1/2.
The optimal decision rule for X_2 is a(0) = 0, a(1) ∈{0,1} (this is not unique in this case), with expected utility
u_X_2 := 1/2· 1/4 + 1/2· 1/2 = 3/8 < 1/2.
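All of the above is easy to reproduce mechanically. The following snippet is our own check (names are ours); it recomputes the two optimal expected utilities and, for comparison with the mutual-information discussion below, also I(S;X_1) and I(S;X_2).

```python
import numpy as np

rows = [(0, 0, 0, 0.25), (1, 0, 1, 0.25), (0, 1, 0, 0.125),
        (1, 1, 0, 0.125), (2, 1, 1, 0.25)]
P = np.zeros((3, 2, 2))                       # P[s, x1, x2]
for s, x1, x2, p in rows:
    P[s, x1, x2] = p
u = np.array([[0, 0], [1, 0], [0, 1]], float) # u[s, a] from the table above

def best_utility(joint_sx):                   # optimal action per observation x
    return sum(max(joint_sx[:, x] @ u[:, a] for a in range(2)) for x in range(2))

def mutual_information(joint_sx):             # I(S; X) in bits
    ps, px = joint_sx.sum(1), joint_sx.sum(0)
    nz = joint_sx > 0
    return float(np.sum(joint_sx[nz]*np.log2(joint_sx[nz]/np.outer(ps, px)[nz])))

for name, joint in (("X1", P.sum(axis=2)), ("X2", P.sum(axis=1))):
    print(name, best_utility(joint), mutual_information(joint))
# X1 attains utility 0.5 with the *smaller* mutual information;
# X2 only attains 0.375 despite the larger mutual information.
```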
How can we understand this example?
Some observations:
* It is easy to see that X_2 has more irrelevant information than X_1: namely, X_2 can determine relatively precisely when S=0. However, since S=0 gives no utility independent of the action, this information is not relevant.
It is more difficult to understand why X_2 has less relevant information than X_1. Surprisingly, X_1 can determine more precisely when S=1: If S=1, then X_1 “detects this” (in the sense that X_1 chooses action 0) with probability 2/3. For X_2, the same probability is only 1/3.
* The conditional entropies of S given X_2 are smaller than the conditional entropies of S given X_1:
H(S|X_1=0) = log(2), H(S|X_1=1) = 3/2log(2),
H(S|X_2=0) = 2log(2) - 3/4 log(3) ≈
0.8113 log(2), H(S|X_2=1) = log(2).
* One can see in which sense f(S) captures the relevant information for X_1, and indeed for the whole
decision problem: knowing f(S) is completely sufficient in order to receive the maximal utility for each state of S. However, when information is incomplete, it matters how the information about the different states of S is mixed, and two variables X_1,X_2 that have the same joint distribution with f(S) may perform differently.
It is somewhat surprising that it is the random variable that has less information about S and that is conditionally independent of S given f(S) which actually performs better.
Example <ref> is different from the pre-garbling Example <ref> discussed in
Section <ref>. In the latter, both channels had the same amount of information (mutual
information) about S, but for the given decision problem the information provided by κ_2 was more relevant than the information provide by κ_1.
The first difference in Example <ref> is that X_1 has less mutual information about S than X_2 (Lemma <ref>). Moreover, both channels are identical with respect to f(S), i.e. they provide
the same information about f(S), and for X_1 it is the only information it has about S. So, one could argue that X_2 has additional information, that does not help though, but decreases the expected utility instead.
We give another example which shows that X_2 can also be chosen a deterministic function of S.
Consider the joint distribution
f(s) s x_1 x_2 P_f(S)SX_1X_2
0 0 0 0 1/6
0 0 1 0 1/6
0 1 0 1 1/6
0 1 1 1 1/6
1 2 1 1 1/3
The function f is as above, but now also X_2 is a function of S. Again, the two channels X_1 f(S) and
X_2 f(S) are identical, and X_1 is independent of S given f(S).
Consider the utility function
s a u(s,a)
0 0 0
0 1 0
1 0 0
1 1 1
2 0 0
2 1 -1
One can show that it is optimal for an agent who relies on X_2 to always choose action 0, which brings no reward (and
no loss). However, when the agent knows that X_1 is zero, he may safely choose action 1 and has a positive
probability of receiving a positive reward.
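The same mechanical check works for this example too (our own computation; the utility matrix is transcribed from the table above):

```python
import numpy as np

rows = [(0, 0, 0, 1/6), (0, 1, 0, 1/6), (1, 0, 1, 1/6),
        (1, 1, 1, 1/6), (2, 1, 1, 1/3)]
P = np.zeros((3, 2, 2))                          # P[s, x1, x2]
for s, x1, x2, p in rows:
    P[s, x1, x2] = p
u = np.array([[0, 0], [0, 1], [0, -1]], float)   # u[s, a]

def best_utility(joint_sx):
    return sum(max(joint_sx[:, x] @ u[:, a] for a in range(2)) for x in range(2))

print(best_utility(P.sum(axis=2)),               # X1: 1/6 > 0
      best_utility(P.sum(axis=1)))               # X2: 0.0
```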
To add another interpretation to the last example, we visualize the situation in the following Bayesian network:
X ← S → f(S) → X',
where, as in Proposition <ref> and its proof, we let X = X_2, and we consider X' = X_1 as an
approximation of X. Then S denotes the state of the system that we are interested in, and X denotes a given set
of observables of interest. f(S) can be considered as a “proxy” in situations where it is difficult to observe X
directly. For example, in neuroimaging, instead of directly measuring the neural activity X, one might look at an MRI
signal f(S). In economic and social sciences, monetary measures like the GDP are used as a proxy for prosperity.
A decision problem can always be considered as a classification problem defined by the utility u(s,a), by
considering the optimal action as the class label of the state S. Proposition <ref> now says that there
exist S,X,f(S) and a classification problem u(s,a) such that the approximated features X' (simulated from f(S))
allow for a better classification (higher utility) than the original features X.
In such a situation, looking at f(S) will always be better than looking at either X or X'. Thus, the paradox will
only play a role in situations where it is not possible to base the decision on f(S) directly. For example, f(S)
might still be too large, or X might have a more natural interpretation, making it easier to interpret for the
decision taker. But, when it is better to base a decision on a proxy rather than directly on the observable of interest, this interpretation may be erroneous.
§ INFORMATION DECOMPOSITION AND LE CAM DEFICIENCY
Given two channels κ_1,κ_2, how can one decide whether or not κ_1≼κ_2? The easiest way is to check whether the equation κ_1 = λ·κ_2 has a solution λ that is a stochastic matrix. In the finite alphabet case, this amounts to checking feasibility of a linear program, which is considered computationally easy.
However, when the feasibility check returns a negative result, this approach does not give any more information, e.g. how far κ_1 is away from being a garbling of κ_2.
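In code this check is only a few lines. The sketch below is our own (function and variable names are ours); it uses scipy's linprog to test whether a column-stochastic λ solving κ_1 = λ·κ_2 exists, here on the two channels from the pre-garbling example of Section <ref>.

```python
import numpy as np
from scipy.optimize import linprog

def is_garbling(kappa1, kappa2):
    """Feasibility of kappa1 = lam @ kappa2 with lam column stochastic."""
    n1, m = kappa1.shape                      # outputs x inputs
    n2 = kappa2.shape[0]
    A_eq, b_eq = [], []
    for i in range(n1):                       # kappa1[i,s] = sum_j lam[i,j] kappa2[j,s]
        for s in range(m):
            row = np.zeros(n1*n2)
            row[i*n2:(i+1)*n2] = kappa2[:, s]
            A_eq.append(row); b_eq.append(kappa1[i, s])
    for j in range(n2):                       # columns of lam sum to one
        row = np.zeros(n1*n2)
        row[j::n2] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    res = linprog(np.zeros(n1*n2), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)]*(n1*n2))    # zero objective: pure feasibility
    return res.success

kappa1 = np.array([[0.9, 0.0], [0.1, 1.0]])
kappa2 = kappa1 @ np.array([[0, 1], [1, 0]])
print(is_garbling(kappa1, kappa2), is_garbling(kappa1, kappa1))   # False True
```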
A function that quantifies how far κ_1 is away from being a garbling of κ_2 is given by the (Le Cam) deficiency and its various generalizations <cit.>.
Another such function is given by UI defined in <cit.> that takes into account that the channels we consider are of the form κ_1=(X_1 S) and κ_2=(X_2 S), that is, they are derived from conditional distributions of random variables. In contrast to the deficiencies, UI depends on the input distribution to these channels.
Let P_SX_1X_2 be a joint distribution of S and the outputs X_1 and X_2.
Let Δ_P be the set of all joint distributions of the random variables S,X_1,X_2 (with the same alphabets) that are compatible with the marginal distributions of P_SX_1X_2 for the pairs (S,X_1) and (S,X_2), i.e.,
Δ_P := {Q_SX_1X_2∈Δ: Q_SX_1=P_SX_1, Q_SX_2=P_SX_2}.
In other words, Δ_P consists of all joint distributions that are compatible with κ_1 and κ_2 and that have the same distribution for S as P_SX_1X_2. Consider the function
UI(S;X_1\ X_2) := min_Q∈Δ_P I_Q(S;X_1|X_2),
where I_Q denotes the conditional mutual information evaluated with respect to the joint distribution Q. This function has the following property: UI(S;X_1\ X_2) = 0 if and only if κ_1≼κ_2 <cit.>.
Computing UI is a convex optimization problem. However, the condition number can be very bad, which makes the problem difficult in practice.
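For small alphabets one can nonetheless approximate UI by brute force. The snippet below is our own rough illustration for the And-gate example (a serious implementation would solve the convex program directly): since X_1,X_2 are binary, every Q∈Δ_P is fixed by one coupling weight per state of S, and we simply grid-search the conditional mutual information.

```python
import numpy as np
from itertools import product

rows = [(0, 0, 0, 0.25), (1, 0, 1, 0.25), (0, 1, 0, 0.125),
        (1, 1, 0, 0.125), (2, 1, 1, 0.25)]
P = np.zeros((3, 2, 2))                      # P[s, x1, x2]
for s, x1, x2, p in rows:
    P[s, x1, x2] = p
ps = P.sum((1, 2))
p1, p2 = P.sum(2)[:, 1]/ps, P.sum(1)[:, 1]/ps   # P(X1=1|s), P(X2=1|s)

def mi(J):                                   # mutual information of a 2D joint
    a, b = J.sum(1), J.sum(0)
    nz = J > 0
    return float(np.sum(J[nz]*np.log2(J[nz]/np.outer(a, b)[nz])))

def cond_mi(Q):                              # I_Q(S; X1 | X2)
    return sum(Q[:, :, x2].sum()*mi(Q[:, :, x2]/Q[:, :, x2].sum())
               for x2 in range(2) if Q[:, :, x2].sum() > 0)

best, grid = np.inf, np.linspace(0, 1, 41)
for ts in product(grid, repeat=3):           # one coupling weight q_s per state s
    Q = np.zeros((3, 2, 2))
    for s, t in enumerate(ts):
        lo, hi = max(0.0, p1[s] + p2[s] - 1.0), min(p1[s], p2[s])
        q = lo + t*(hi - lo)                 # Q(X1=1, X2=1 | s), margins preserved
        Q[s] = ps[s]*np.array([[1 - p1[s] - p2[s] + q, p2[s] - q],
                               [p1[s] - q,             q]])
    best = min(best, cond_mi(Q))
print(best)                                  # approximate UI(S; X1 \ X2)
```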
UI is interpreted in <cit.> as a measure of the unique information that X_1 conveys about S (with respect to X_2).
So, for instance, with this interpretation Example <ref> can be summarized as follows: Neither X_1 nor X_2 has unique information about f(S). However, both variables have unique information about S, although X_1 is conditionally independent of S given f(S) and thus, in contrast to X_2, contains no “additional” information about S.
We now apply UI to a parameterized version of the And gate in Example <ref>.
Figure <ref> shows a heat map of UI computed on the set of all distributions
of the form
f(s) s x_1 x_2 P_f(S)SX_1X_2
0 0 0 0 1/8+2b
0 1 0 0 1/8-2b
0 0 0 1 1/8+a
0 1 0 1 1/8-a
0 0 1 0 1/8+a/2+b
0 1 1 0 1/8-a/2-b
1 2 1 1 1/4
where -1/8≤ a ≤1/8 and -1/16≤ b ≤1/16.
This is the set of distributions of S,X_1,X_2 that satisfy the following constraints:
* X_1,X_2 are independent;
* f(S) = And(X_1,X_2), where f is as in Example <ref>; and
* X_1 is independent of S given f(S).
Along the secondary diagonal b=a/2, the marginal distributions of the pairs (S,X_1) and (S,X_2) are identical. In such a situation, the channels (X_1 S) and (X_2 S) are Blackwell-equivalent, and so UI vanishes. Further away from the diagonal, the marginal distributions differ, and UI grows. The maximum value is achieved at the corners (a,b)=(-1/8,1/16) and (a,b)=(1/8,-1/16). At the upper left corner, (a,b)=(-1/8,1/16), we recover Example <ref>.
Figure <ref> shows a heat map of UI computed on the set of all distributions of the form
f(s) s x_1 x_2 P_f(S)SX_1X_2
0 0 0 0 a^2/(a+b)
0 0 1 0 ab/(a+b)
0 1 0 1 ab/(a+b)
0 1 1 1 b^2/(a+b)
1 2 1 1 1 - a - b
where a,b≥ 0 and a+b≤ 1. This extends Example <ref>, which is recovered for a=b=1/3.
This is the set of distributions of S,X_1,X_2 that satisfy the following constraints:
* X_2 is a function of S, where the function is as in Example <ref>.
* X_1 is independent of S given f(S).
* The channels X_1 f(S) and X_2 f(S) are identical.
| Suppose we are given the choice of two channels that both provide information about the same random variable, and that we want to make a decision based on the channel outputs. Suppose that our utility function depends on the joint value of the input to the channel and our resultant decision based on the channel outputs. Suppose as well that we know the precise conditional distributions defining the channels, and the distribution over channel inputs.
Which channel should we choose? The answer to this question depends on the choice of our utility function as well as on the details of the channels and the input distribution. So for example, without specifying how we will use the channels, in general we cannot just compare their information capacities to choose between them.
Nonetheless, for certain pairs of channels we can make our choice, even without knowing the utility functions or the distribution over inputs. Let us represent the two channels by two (column) stochastic matrices κ_1 and κ_2, respectively.
Then if there exists another stochastic matrix λ such that κ_1 = λ·κ_2, there is never any reason to strictly prefer κ_1; for if we choose κ_2, we can always make our decision by chaining the output of κ_2 through the channel λ and then using the same decision
function we would have used had we chosen κ_1.
This simple argument shows that whatever the three stochastic matrices are and whatever the decision rule we would use if we chose channel κ_1, we can always get the same expected utility
by instead choosing channel κ_2 with an appropriate decision rule.
In this kind of situation, where κ_1 = λ·κ_2, we say that κ_1 is a garbling (or degradation) of κ_2.
It is much more difficult to prove that the converse also holds true:
Let κ_1,κ_2 be two stochastic matrices representing two channels with the same input alphabet. Then the following two conditions are equivalent:
* When the agent chooses κ_2 (and uses the decision rule
that is optimal for κ_2), her expected utility is always at least as big as the expected utility when she chooses κ_1 (and uses the optimal decision rule for κ_1), independent of the utility function and the distribution of the input S.
* κ_1 is a garbling of κ_2.
Blackwell formulated his result in terms of a statistical decision maker who reacts to the outcome of a
statistical experiment. We prefer to speak of a decision problem instead of a statistical experiment.
See <cit.> for an overview.
Blackwell's theorem motivates looking at the following partial order over channels κ_1,κ_2 with a common input alphabet:
κ_1≼κ_2
:⟺ one of the two statements
in Blackwell's theorem holds true.
We call this partial order the Blackwell order (this partial order is called degradation order by other authors <cit.>). If κ_1≼κ_2, then κ_1 is said to be Blackwell-inferior to κ_2.
Strictly speaking, the Blackwell order is only a preorder, since there are channels κ_1≠κ_2 that satisfy κ_1≼κ_2≼κ_1 (when κ_1 arises from κ_2 by permuting the output alphabet). However, for our purposes such channels can be considered as equivalent. We write κ_1≺κ_2 if κ_1≼κ_2 and κ_1⋡κ_2.
By Blackwell's theorem this implies that κ_2 performs at least as good as κ_1 in any decision problem and that there exist decision problems in which κ_2 outperforms κ_1.
For a given distribution of S, we can also compare κ_1 and κ_2 by comparing the two mutual
informations I(S;X_1), I(S;X_2) between the common input S and the channel outputs X_1 and X_2. The data processing inequality shows that κ_2≽κ_1 implies I(S;X_2)≥ I(S;X_1). However, the converse implication does not hold.
The intuitive reason is that for the Blackwell order,
not only the amount of information is important. Rather, the question is how much of the information that κ_1
or κ_2 preserve is relevant for a given fixed decision problem (that is, a given fixed utility function).
Given two channels κ_1,κ_2, suppose that I(S;X_2)≥ I(S;X_1) for all distributions of S. In this case, we say that κ_2 is more capable than κ_1. Does this imply that κ_1≼κ_2? The answer is known to be negative in general <cit.>.
In Proposition <ref> we introduce a new surprising example of this phenomenon with a particular structure. In fact, in this example, κ_1 is a Markov approximation of κ_2 by a deterministic function, in the following sense:
Consider another random variable f(S) that arises from S by applying a (deterministic) function f.
Given two random variables S, X, denote by X S the channel defined by the conditional probabilities P_X|S(x|s),
and let κ_2:=(X S) and κ_1:=(X f(S))·(f(S) S).
Thus, κ_1 can be interpreted as first replacing S by f(S) and then sampling X according to the conditional distribution P_X|S(x|f(s)).
Which channel is superior? Using the data processing inequality, it is easy to see that κ_1 is less capable than κ_2. However, as Proposition <ref> shows, in general κ_1⋠κ_2.
We call κ_1 a Markov approximation, because the output of κ_1 is independent of the input S given f(S). The channel κ_1 can also be obtained from κ_2 by “pre-garbling” (Lemma <ref>); that is,
there is another stochastic matrix λ^f that satisfies κ_1 = κ_2·λ^f.
It is known that pre-garbling may improve the performance of a channel (but not its capacity) as we recall in Section <ref>. What may be surprising is that this can happen for pre-garblings of the form λ^f, which have the effect of coarse-graining according to f.
The fact that the more capable preorder does not imply the Blackwell order shows that “Shannon information,” as captured by the mutual information, is not the same as “Blackwell information,” as needed for the Blackwell decision problems. Indeed, our example explicitly shows that even though coarse-graining always reduces Shannon information, it need not reduce Blackwell information.
Finally, let us mention that there are further ways of comparing channels (or stochastic matrices); see <cit.> for an overview.
Proposition <ref> builds upon another effect that we find paradoxical: Namely, there exist random variables S,X_1,X_2 and there exists a function f:→' from the support of S to a finite set S'
such that the following holds:
* S and X_1 are independent given f(S).
* (X_1 f(S)) ≼ (X_2 f(S)).
* (X_1 S) ⋠ (X_2 S).
Statement 1) says that everything X_1 knows about S, it knows through f(S). Statement 2) says that X_2
knows more about f(S) than X_1. Still, 3) says that we cannot conclude that X_2 knows more about S
than X_1. The paradox illustrates that it is difficult to formalize what it means to “know more.”
Understanding the Blackwell order is an important aspect of understanding information decompositions; that is, the
quest to find new information measures that separate different aspects of the mutual information
I(S;X_1,…,X_k) of k random variables X_1,…,X_k and a target variable S (see the other
contributions of this special issue and references therein). In particular, <cit.>
argues that the Blackwell order provides a natural criterion when a variable X_1 has unique information
about S with respect to X_2. We hope that the examples we present here are useful in developing intuition on how
information can be shared among random variables and how it behaves when applying a deterministic function, such as a
coarse-graining. Further implications of our examples on information decompositions are discussed in <cit.>.
In the converse direction, information decomposition measures (such as measures of unique information) can be used to
study the Blackwell order and deviations from the Blackwell order. We illustrate this idea in Example <ref>.
The remainder of this work is organized as follows: In Section <ref>, we recall how pre-garbling can be used to improve the performance of a channel. We also show that the pre-garbled channel will always be less capable and that simultaneous pre-garbling of both channels preserves the Blackwell order. In Section <ref>, we state a few properties of the Blackwell order, and we explain why we find these properties counter-intuitive and paradoxical. In particular, we show that coarse-graining the input can improve the performance of a channel. Section <ref> contains a detailed discussion of an example that illustrates these properties.
In Section <ref> we use the unique information measure from <cit.>, which has properties similar to Le Cam's deficiency, to illustrate deviations from the Blackwell relation.
http://arxiv.org/abs/1701.08185v3 | 20170127202922 | Multilevel maximum likelihood estimation with application to covariance matrices | [
"Marie Turčičová",
"Jan Mandel",
"Kryštof Eben"
] | math.ST | [
"math.ST",
"stat.TH",
"62H12, 62F12"
] |
Multilevel maximum likelihood estimation with application to covariance matrices
Marie Turčičová^∗
Jan Mandel^†
Kryštof Eben
================================================================================
^∗ Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 271/2, 182 07 Praha 8, Czech Republic, and Charles University in Prague, Faculty of Mathematics and Physics, Sokolovská 83, 186 75 Prague 8, Czech Republic, [email protected]
^† University of Colorado Denver, Denver, CO 80217-3364, USA, and Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 271/2, 182 07 Praha 8, Czech Republic, [email protected]
Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 271/2, 182 07 Praha 8, Czech Republic, [email protected]
Key Words: hierarchical maximum likelihood; nested parameter spaces;
spectral diagonal covariance model; sparse inverse covariance model; Fisher
information; high dimension.
§.§ ABSTRACT
The asymptotic variance of the maximum likelihood estimate is proved to
decrease when the maximization is restricted to a subspace that contains the
true parameter value. Maximum likelihood estimation allows a systematic
fitting of covariance models to the sample, which is important in data
assimilation. The hierarchical maximum likelihood approach is applied to the
spectral diagonal covariance model with different parameterizations of
eigenvalue decay, and to the sparse inverse covariance model with specified
parameter values on different sets of nonzero entries. It is shown
computationally that using smaller sets of parameters can decrease the
sampling noise in high dimension substantially.
§ INTRODUCTION
Estimation of large covariance matrices from small samples is an important
problem in many fields, including spatial statistics, genomics, and ensemble
filtering. One of the prominent applications is data assimilation in
meteorology and oceanography, where the dimension of state vector describing
the atmosphere or ocean is in order of millions or larger. Every practically
available sample is a small sample in this context, since a reasonable
approximation of the full covariance can be obtained only with sample size of
the order of the dimension of the problem
<cit.>
. In practice, the sample covariance[In this paper, by sample
covariance we mean the maximum likelihood estimate of covariance matrix using
the norming constant N as opposed to the unbiased estimate with norming
constant (N-1).] is singular and polluted by spurious correlations.
Nevertheless, it carries useful information (e.g. on covariances present in
the actual atmospheric flow) and different techniques can be applied in order
to improve the covariance model and its practical performance.
One common technique is shrinkage, that is, a linear combination of sample
covariance and a positive definite target matrix, which prevents the
covariance from being singular. The target matrix embodies some prior
information about the covariance; it can be, e.g., unit diagonal or, more
generally, positive diagonal
<cit.>
. See, e.g., <cit.> for a survey of such shrinkage
approaches. Shrinkage of sample covariance towards a fixed covariance matrix
based on a specific model and estimated from historical data (called
background covariance) was used successfully in meteorology
<cit.>
. This approach is justified as one which combines actual (called
flow-dependent) and long-term average (called climatologic) information on
spatial covariances present in the 3D meteorological fields.
Another approach to improving on the sample covariance matrix is localization
by suppressing long-range spurious correlations, which is commonly done by
multiplying the sample covariance matrix term by term by a gradual cutoff
matrix
<cit.>
to suppress off-diagonal entries. The extreme case, when only the diagonal is
left, is particularly advantageous in the spectral domain, as the covariance
of a random field in Fourier space is diagonal if and only if the random field
in Cartesian geometry is second-order stationary, i.e., the covariance between
the values at two points depends only on their distance vector. Alternatively,
diagonal covariance in a wavelet basis provides spatial variability as well
<cit.>
. Spectral diagonal covariance models were successfully used in operational
statistical interpolation in meteorology in spherical geometry
<cit.>
, and versions of Ensemble Kalman Filter (EnKF) were developed which construct
diagonal covariance in Fourier or wavelet space in every update step of the
filter at low cost, and can operate successfully with small ensembles
<cit.>
.
Sparse covariance models, such as the spectral diagonal, allow a compromise
between realistic assumptions and cheap computations. Another covariance model
taking advantage of sparsity is a Gauss-Markov Random Field (GMRF), based on
the fact that conditional independence of variables implies zero corresponding
elements in the inverse of the covariance matrix
<cit.>
, which leads to modeling the covariance as the inverse of a sparse matrix.
However, both spectral diagonal and sparse inverse covariance models have a
large number of parameters, namely all terms of the sparse matrix (up to
symmetry) which are allowed to attain nonzero values. This results in
overfitting and significant sampling noise for small samples. Therefore, it is
of interest to reduce the number of parameters by adopting additional,
problem-dependent assumptions on the true parameter values.
The principal result of this paper is the observation that if parameters are
fitted as the Maximum Likelihood Estimator (MLE) and the additional
assumptions are satisfied by the true parameters, then the estimate using
fewer parameters is asymptotically more accurate, and often very significantly
so even for small samples.
The paper is organized as follows. In Sec. <ref>, we provide a brief
statement of MLE and its asymptotic variance. In Sec. <ref>, we use
the theory of maximum likelihood estimation to prove that for any two nested
subspaces of the parametric space containing the true parameter, the
asymptotic covariance matrix of the MLE is smaller for the smaller parameter
space. These results hold for a general parameter and, in the special case of
MLE for covariance matrices we do not need any invertibility assumption. The
applications to estimation of covariance matrices by spectral diagonal and
GMRF are presented in Sec. <ref>, and Sec. <ref>
contains computational illustrations. A comparison of the performance of MLE
for parametric models and of related shrinkage estimators is in
Sec. <ref>.
§ ASYMPTOTIC VARIANCE OF THE MAXIMUM LIKELIHOOD ESTIMATOR
First, we briefly review some standard results for reference. Suppose
𝕏_N=[ X_1,…,X_N]
is a random sample from a distribution on ℝ^n with density
f( x,θ) with unknown parameter
vector θ in a parameter space Θ⊂ℝ^p. The maximum likelihood estimate θ̂_N of the true parameter θ^0 is defined by maximizing the likelihood
θ̂_N = max_θ ℒ(θ|𝕏_N), ℒ(θ|𝕏_N) = ∏_i=1^N ℒ(θ|X_i), ℒ(θ|x) = f(x,θ),
or, equivalently, maximizing the log-likelihood
θ̂_N = max_θ ℓ(θ|𝕏_N), ℓ(θ|𝕏_N) = ∑_i=1^N ℓ(θ|X_i), ℓ(θ|x) = log f(x,θ).
We adopt the usual assumptions that (i) the true parameter θ^0 lies in the interior of Θ, (ii) the density f determines the parameter θ uniquely in the sense that f(x,θ_1) = f(x,θ_2) a.s. if and only if θ_1 = θ_2, and (iii) f(x,θ) is a sufficiently smooth function of x and θ. Then the error of the estimate is asymptotically normal,
√(N)(θ̂_N − θ^0) → 𝒩_p(0, Q_θ^0) in distribution, as N→∞,
where
Q_θ^0 = J_θ^0^-1, J_θ^0 = E(∇_θℓ(θ^0|X)^⊤ ∇_θℓ(θ^0|X)), X ∼ f(x,θ^0).
The matrix J_θ^0 is called the Fisher information
matrix for the parameterization θ^0. Here,
X, x, and θ are column vectors,
while the gradient ∇_θℓ of ℓ with respect
to the parameter θ is a row vector, which is compatible
with the dimensioning of Jacobi matrices below. The mean value in
(<ref>) is taken with respect to X, which is
the only random quantity in (<ref>). Cf., e.g., <cit.> for details.
§ NESTED MAXIMUM LIKELIHOOD ESTIMATORS
Now suppose that we have an additional information that the true parameter
θ^0 lies in a subspace of Θ, which is
parameterized by k≤ p parameters (φ_1,…, φ
_k)^⊤=φ. Denote by ∇_φ
θ(φ) the p× k Jacobi matrix with
entries ∂θ_i/∂φ_j. In the next theorem,
we derive the asymptotic covariance of the maximum likelihood estimator for
φ,
φ̂_N = max_φ ℓ(φ|𝕏_N), ℓ(φ|𝕏_N) = ∑_i=1^N ℓ(φ|X_i), ℓ(φ|x) = log f(x, θ(φ)),
based on the asymptotic covariance of θ in (<ref>).
Assume that the map φ↦θ(φ) is one-to-one from Φ⊂ℝ^k to Θ, the map φ
↦θ(φ) is continuously
differentiable, ∇_φθ(φ) is full rank for all φ∈Φ,
and θ^0=θ(φ^0)
with φ^0 in the interior of Φ. Then,
√(N)(φ̂_N − φ^0) → 𝒩_k(0, Q_φ^0) in distribution as N→∞,
where Q_φ^0 = J_φ^0^-1, with J_φ^0 the Fisher information matrix of the parameterization φ given by
J_φ^0 = ∇_φθ(φ^0)^⊤ J_θ^0 ∇_φθ(φ^0).
From (<ref>) and the chain rule
∇_φℓ(φ|X) = ∇_θℓ(θ|X) ∇_φθ(φ),
we have
J_φ^0 = E(∇_φℓ(φ^0|X)^⊤ ∇_φℓ(φ^0|X)) = ∇_φθ(φ^0)^⊤ E(∇_θℓ(θ^0|X)^⊤ ∇_θℓ(θ^0|X)) ∇_φθ(φ^0) = ∇_φθ(φ^0)^⊤ J_θ^0 ∇_φθ(φ^0).
The asymptotic distribution (<ref>) is now (<ref>)
applied to φ.
When the parameter θ is the quantity of interest in an
application, it is useful to express the estimate and its variance in terms of
the original parameter θ rather than the subspace
parameter φ.
Under the assumptions of Theorem <ref>,
√(N)(θ(φ̂_N) − θ^0) → 𝒩_p(0, Q_θ(φ^0)) in distribution as N→∞,
where
Q_θ(φ^0) = ∇_φθ(φ^0) J_φ^0^-1 ∇_φθ(φ^0)^⊤ = ∇_φθ(φ^0)(∇_φθ(φ^0)^⊤ J_θ^0 ∇_φθ(φ^0))^-1 ∇_φθ(φ^0)^⊤.
The lemma follows from (<ref>) by the delta method <cit.>, since the map φ↦θ
(φ) is continuously differentiable.
The matrix Q_θ( φ^0)
is singular, so it cannot be written as the inverse of another matrix, but it
can be understood as the inverse J_θ(φ^0)^-1 of the Fisher information matrix for φ,
embedded in the larger parameter space Θ.
Suppose that ψ is another parameterization which satisfies
the same assumption as φ in Theorem <ref>:
the map ψ↦θ(ψ) is
one-to-one from Ψ⊂ℝ^m, k ≤ m ≤ p, to Θ,
ψ↦θ(ψ) is
continuously differentiable, ∇_ψθ(ψ) is full rank for all ψ∈Ψ, and
θ^0=θ(ψ^0), where
ψ^0 is in the interior of Ψ. Then, similarly as in
(<ref>), we have also
√(N)(θ(ψ̂_N) − θ^0) → 𝒩_p(0, Q_θ(ψ^0)) in distribution as N→∞,
where, as in (<ref>),
Q_θ(ψ^0) = ∇_ψθ(ψ^0) J_ψ^0^-1 ∇_ψθ(ψ^0)^⊤ = ∇_ψθ(ψ^0)(∇_ψθ(ψ^0)^⊤ J_θ^0 ∇_ψθ(ψ^0))^-1 ∇_ψθ(ψ^0)^⊤.
The next theorem shows that when we have two parameterizations
φ and ψ which are nested, then the
smaller parameterization has smaller or equal asymptotic covariance than the
larger one. For symmetric matrices A and B, A ≤ B means that B − A is positive semidefinite.
Suppose that φ and
ψ satisfy the assumptions in Theorem <ref>,
and there exists a differentiable mapping φ↦ψ
from Φ to Ψ, such that φ^0↦ψ^0. Then,
Q_θ( φ^0) ≤
Q_θ( ψ^0) .
In addition, if U ∼ 𝒩_p(0, Q_θ(φ^0)) and V ∼ 𝒩_p(0, Q_θ(ψ^0)) are random vectors with the asymptotic distributions of the estimates θ(φ̂_N) and θ(ψ̂_N), then
E|U|^2 = (1/N) Tr Q_θ(φ^0) ≤ (1/N) Tr Q_θ(ψ^0) = E|V|^2,
where |V| = (V^⊤V)^1/2 is the standard Euclidean norm in ℝ^p.
Denote A = J_θ^0, B = ∇_φθ(φ^0), C = ∇_ψθ(ψ^0). From the chain rule,
∇_φθ(φ^0) = ∇_ψθ(ψ^0) ∇_φψ(φ^0),
we have that B = C∇_φψ(φ^0), and, consequently, Range B ⊂ Range C. Define
P_B = A^1/2 B (B^⊤AB)^-1 B^⊤ A^1/2,
P_C = A^1/2 C (C^⊤AC)^-1 C^⊤ A^1/2.
The matrices P_B and P_C are symmetric and idempotent, hence they are orthogonal projections. In addition,
Range P_B = Range A^1/2B ⊂ Range A^1/2C = Range P_C.
Consequently, P_B ≤ P_C holds from standard properties of orthogonal projections, and (<ref>) follows.
To prove (<ref>), note that for a random vector X with EX = 0 and finite second moment, E|X|^2 = Tr Cov X, by the Karhunen–Loève decomposition and the Parseval identity. The proof is concluded by using the fact that for symmetric matrices, A ≤ B implies Tr A ≤ Tr B, cf., e.g., <cit.>.
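The projection argument is easy to check numerically. The sketch below (random full-rank matrices of arbitrary dimensions, assumed only for illustration) verifies both the semidefinite ordering (<ref>) and the trace comparison (<ref>):
```python
import numpy as np

rng = np.random.default_rng(1)
p, m, k = 6, 4, 2
G = rng.normal(size=(p, p))
A = G @ G.T + p * np.eye(p)      # a symmetric positive definite "Fisher matrix"
C = rng.normal(size=(p, m))      # plays the role of grad_psi theta(psi^0)
M = rng.normal(size=(m, k))      # plays the role of grad_phi psi(phi^0)
B = C @ M                        # chain rule: grad_phi theta(phi^0)

def Q(grad):                     # embedded inverse Fisher matrix
    return grad @ np.linalg.inv(grad.T @ A @ grad) @ grad.T

diff = Q(C) - Q(B)
print(np.linalg.eigvalsh(diff).min() >= -1e-9)   # True: Q(B) <= Q(C)
print(np.trace(Q(B)), "<=", np.trace(Q(C)))
```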
In the practically interesting cases when there is a large difference in the
dimensions of the parameters φ and ψ,
many eigenvalues in the covariance of the estimation error become zero. The
computational tests in Sec. <ref> show that the resulting decrease of
the estimation error can be significant.
§ APPLICATION: NESTED COVARIANCE MODELS
Covariance models (e.g., of the state vector in a numerical weather prediction model) and the quality of the estimated covariance are among the key components of data assimilation algorithms. The high dimension of the problem
usually prohibits working with the covariance matrix explicitly. In ensemble
filtering methods, this difficulty may be circumvented by working directly
with the original small sample like in the classical Ensemble Kalman filter.
This, however, effectively means using the sample covariance matrix with its
rank deficiency and spurious correlations. Current filtering methods use
shrinkage and localization as noted above, and ad hoc techniques for dimension reduction.
A reliable way towards effective filtering methods lies in introducing
sparsity into covariance matrices or their inverses by means of suitable
covariance models. The results of previous section suggest that it is
beneficial to choose parsimonious models, and indeed, in practical application
we often encounter models with a surprisingly low number of parameters.
A large class of covariance models which encompass sparsity in an efficient
manner arises from Graphical models <cit.> and
Gaussian Markov Random Fields (GMRF), <cit.>, where a special
structure of inverse covariance is assumed. In the area of GMRF, nested
covariance models arise naturally. If, for instance, we consider a GMRF on a
rectangular mesh, each gridpoint may have 4, 8, 12, 20 etc. neighbouring
points which have nonzero corresponding element in the inverse covariance
matrix. Thus, a block band-diagonal structure in the inverse covariance
arises
<cit.>
. The results of Section <ref> apply for this case and we shall
illustrate them in the simulation study of Section <ref>.
Finally, variational assimilation methods, which dominate today's practice of
meteorological services, usually employ a covariance model based on a series
of transformations leading to independence of variables
<cit.>. At the end, this results in an
estimation problem for normal distribution with a diagonal covariance matrix.
For both ensemble and variational methods, any additional knowledge can be
used to improve the estimate of covariance. Second-order stationarity leads to
diagonality in spectral space, diagonality in wavelet space is often a
legitimate assumption <cit.> and we shall treat
the diagonal case in more detail.
Suppose
X∼𝒩_n(0,D),
where X denotes the random field after the appropriate
transform and D is a diagonal matrix. It is clear that estimating D by the
full sample covariance matrix (as would be the case when using the classical
EnKF) is ineffective in this situation and it is natural to use only the
diagonal part of the sample covariance. In practice, the resulting diagonal
matrix may still turn out to be noisy
<cit.>
, and further assumptions like a certain type of decay of the diagonal entries
may be realistic.
In what follows we briefly introduce the particular covariance structures,
state some known facts on full and diagonal covariance, propose parametric
models for the diagonal and compute corresponding MLE.
§.§ Sample covariance
The top-level parameter space Θ consists of all symmetric positive
definite matrices, resulting in the parameterization Σ with
n( n+1) /2 independent parameters. The likelihood of a
sample 𝕏_N=[ X^(1),…,X
^(N)] from 𝒩_n(0,Σ) is
L(Σ|𝕏_N) = 1/((det Σ)^N/2 (2π)^nN/2) e^(−(1/2) Tr(Σ^-1 𝕏_N 𝕏_N^⊤)).
If N≥ n, it is well known (e.g. <cit.>, p. 83) that the
likelihood is maximized at what we call here the sample covariance matrix
Σ̂_N = (1/N) ∑_i=1^N X^(i) (X^(i))^⊤.
The Fisher information matrix of the sample covariance estimator is <cit.>
J^(0)(vec(Σ)) = (1/2) Σ^-1 ⊗ Σ^-1,
where ⊗ stands for the Kronecker product and vec is the operator that transforms a matrix into a vector by stacking the columns of the matrix one underneath the other. This matrix has dimension n^2 × n^2.
If Σ̂_N is singular, L( Σ̂_N|𝕏_N) cannot be evaluated because that requires the
inverse of Σ̂. Also, in this case the likelihood L(
Σ|𝕏_N) is not bounded above on the set of all
Σ>0, thus the maximum of L( Σ|𝕏_N) does
not exist on that space. To show that, consider an orthonormal change of basis
so that the vectors in span(𝕏_N) come first, write vectors and matrices in the corresponding 2×2 block form, and let
Σ̃_N = [ Σ̃_11 0; 0 0 ], Σ̃_11 > 0.
Then lim_a→0^+ 𝕏_N^⊤(Σ̃_N + aI)^-1 𝕏_N exists, but lim_a→0^+ det(Σ̃_N + aI) = 0, thus
lim_a→0^+ L(Σ̃_N + aI | 𝕏_N) = ∞.
Note that when the likelihood is redefined in terms of the subspace span(𝕏_N) only, the sample covariance can be obtained by maximization on the subspace <cit.>.
When the true covariance is diagonal (Σ≡ D, cf. (<ref>)), a
significant improvement can be achieved by setting the off-diagonal terms of
sample covariance to zero,
D̂_N^( 0) =diag( Σ̂
_N) .
It is known that using only the diagonal of the unbiased sample covariance
Σ̂_N^u=1/N-1∑_i=1^NX^(i)(
X^(i)) ^⊤
results in smaller (or equal) Frobenius norm of the error pointwise,
|D̂_N^( 0) -D| _F≤|Σ̂_N^u-D| _F
cf. <cit.> for the case when the mean is assumed to be known
like here, and <cit.> for the unbiased sample covariance
and unknown mean.
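A quick simulation of this pointwise comparison (the diagonal truth and the sample size below are hypothetical choices for illustration only):
```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 30, 10
d = rng.uniform(0.5, 3.0, n)                       # true diagonal variances
D = np.diag(d)
X = rng.normal(0.0, np.sqrt(d), size=(N, n))       # rows are the sample vectors
S = X.T @ X / N                                    # sample covariance, norming N
S_diag = np.diag(np.diag(S))
print(np.linalg.norm(S_diag - D), "<=", np.linalg.norm(S - D))   # Frobenius norms
```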
§.§ Diagonal covariance
The parameter space Θ_1, consisting of all diagonal matrices with positive diagonal, with n parameters d = (d_1,…,d_n)^⊤, can be viewed as a simple class of models for either the covariance or its inverse. The log-likelihood function for D = diag(d_1,…,d_n) with a given random sample 𝕏_N = [X^(1),…,X^(N)] from 𝒩_n(0,D) is
ℓ(D|𝕏_N) = −(N/2) log((2π)^n |D|) − (1/2) ∑_k=1^N (X^(k))^⊤ D^-1 X^(k)
and has its maximum at
d̂_j = (1/N) ∑_k=1^N (X_j^(k))^2, j = 1,…,n,
where X_j^(k) denotes the j-th entry of X^(k). The
sum of squares S_j^2=∑_k=1^N( X_j^(k)) ^2 is a
sufficient statistic for the variance d_j. Thus, we get the maximum
likelihood estimator
D̂_N^(1) = (1/N) diag(S_1^2,…,S_n^2).
It is easy to compute the Fisher information matrix explicitly,
J_D^(1) = diag(1/(2d_1^2),…,1/(2d_n^2)),
which is an n × n matrix and gives the asymptotic covariance of the estimation error
(1/N) Q_D^(1) = (1/N) J_D^(1)^-1 = (1/N) diag(2d_1^2,…,2d_n^2)
from (<ref>).
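A small Monte Carlo sketch (illustrative values) confirming the asymptotic variance 2d_j^2/N of a single diagonal entry:
```python
import numpy as np

rng = np.random.default_rng(3)
d_true, N, reps = 2.0, 200, 20000
est = np.array([np.mean(rng.normal(0.0, np.sqrt(d_true), N) ** 2)
                for _ in range(reps)])
print(N * est.var(), "approx.", 2.0 * d_true ** 2)   # both close to 8
```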
§.§ Diagonal covariance with prescribed decay by 3 parameters
A more specific situation appears when we have additional information that
the matrix D is not only diagonal, but its diagonal entries have a
prescribed decay. For instance, this decay can be governed by a model of the
form d_i=((c_1+c_2h_i)f_i(α))^-1, i=1,…,n, where
c_1,c_2 and α are unknown parameters, h_1,…,h_n are
known positive numbers, and f_1,…,f_n are known differentiable
functions. For easier computation it is useful to work with τ_i=1/d_i=(c_1+c_2h_i)f_i(α). Maximum likelihood estimators for
c_1, c_2, and α can be computed effectively from the likelihood
ℓ(D|𝕏_N) = −(Nn/2) log(2π) + (N/2) ∑_i=1^n log τ_i − (1/2) ∑_i=1^n τ_i S_i^2
by using the chain rule. It holds that
∂ℓ/∂c_1 = ∑_i=1^n (∂ℓ/∂τ_i)(∂τ_i/∂c_1) = ∑_i=1^n (N/(2τ_i) − S_i^2/2) ∂τ_i/∂c_1 = (N/2) ∑_i=1^n (1/((c_1+c_2h_i)f_i(α)) − (1/N) S_i^2) f_i(α).
Setting this derivative equal to zero we get
∑_i=1^n (1/(c_1+c_2h_i) − (1/N) S_i^2 f_i(α)) = 0.
Analogously,
∂ℓ/∂c_2 = ∑_i=1^n (∂ℓ/∂τ_i)(∂τ_i/∂c_2) = (N/2) ∑_i=1^n (1/((c_1+c_2h_i)f_i(α)) − (1/N) S_i^2) h_i f_i(α),
so the equation for estimating the parameter c_2 is
∑_i=1^n (h_i/(c_1+c_2h_i) − (1/N) S_i^2 h_i f_i(α)) = 0.
Similarly,
∂ℓ/∂α = ∑_i=1^n (∂ℓ/∂τ_i)(∂τ_i/∂α) = (N/2) ∑_i=1^n (1/((c_1+c_2h_i)f_i(α)) − (1/N) S_i^2)(c_1+c_2h_i) ∂f_i(α)/∂α = (N/2) ∑_i=1^n (1/f_i(α) − (1/N) S_i^2 (c_1+c_2h_i)) ∂f_i(α)/∂α,
and setting the derivative to zero, we get
∑_i=1^n ((1/f_i(α)) ∂f_i(α)/∂α − (1/N) S_i^2 (c_1+c_2h_i) ∂f_i(α)/∂α) = 0.
The maximum likelihood estimator for D is then given by
D̂^(3) = {((ĉ_1+ĉ_2h_i) f_i(α̂))^-1, i = 1,…,n},
where (ĉ_1, ĉ_2, α̂) is the solution of the system (<ref>), (<ref>), (<ref>). This expression corresponds to searching for a maximum likelihood estimator of D in the subspace Θ_3 ⊂ Θ_1 ⊂ Θ formed by the diagonal matrices {((c_1+c_2h_i) f_i(α))^-1, i=1,…,n}.
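In practice one may also maximize the log-likelihood numerically instead of solving the system. The sketch below assumes the hypothetical choice f_i(α) = e^(αh_i) together with simulated data; the parameter values and the optimizer are illustrative only:
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, N = 50, 40
h = np.linspace(0.1, 5.0, n)
f = lambda a: np.exp(a * h)
c1, c2, a0 = 1.0, 0.5, 0.3                      # "true" parameters of the sketch
d = 1.0 / ((c1 + c2 * h) * f(a0))
X = rng.normal(0.0, np.sqrt(d), size=(N, n))
S2 = (X ** 2).sum(axis=0)                       # sufficient statistics S_i^2

def neg_loglik(p):
    c1, c2, a = p
    tau = (c1 + c2 * h) * f(a)                  # tau_i = 1 / d_i
    if np.any(tau <= 0.0):
        return np.inf                           # outside the parameter space
    return -(0.5 * N * np.log(tau).sum() - 0.5 * (tau * S2).sum())

fit = minimize(neg_loglik, x0=[0.5, 0.5, 0.0], method="Nelder-Mead")
print(fit.x)                                    # should land near (1.0, 0.5, 0.3)
```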
For completeness, the asymptotic covariance of the estimation error about D^(3) = {d_i(c_1,c_2,α), i=1,…,n}, contained in 𝕏_N, is
(1/N) Q_D^(3) = (1/N) ∇d(c_1,c_2,α) J_c_1,c_2,α^-1 ∇d(c_1,c_2,α)^⊤
from (<ref>), where the Fisher information matrix J_c_1,c_2,α is the symmetric 3×3 matrix with entries
(J)_11 = (1/2) ∑_i=1^n 1/(c_1+c_2h_i)^2, (J)_12 = (1/2) ∑_i=1^n h_i/(c_1+c_2h_i)^2, (J)_13 = (1/2) ∑_i=1^n (1/((c_1+c_2h_i)f_i(α))) ∂f_i(α)/∂α,
(J)_22 = (1/2) ∑_i=1^n h_i^2/(c_1+c_2h_i)^2, (J)_23 = (1/2) ∑_i=1^n (h_i/((c_1+c_2h_i)f_i(α))) ∂f_i(α)/∂α, (J)_33 = (1/2) ∑_i=1^n (1/f_i^2(α)) (∂f_i(α)/∂α)^2,
and
d(c_1,c_2,α) = [d_1(c_1,c_2,α),…,d_n(c_1,c_2,α)]^⊤ = [((c_1+c_2h_1)f_1(α))^-1,…,((c_1+c_2h_n)f_n(α))^-1]^⊤.
§.§ Diagonal covariance with prescribed decay by 2 parameters
We may consider a more specific model for diagonal elements with two
parameters: d_i=(cf_i(α))^-1, i.e. τ_i=cf_i(α),
i=1,…,n, where c and α are unknown parameters. Maximum
likelihood estimators for c and α can be computed similarly as in the
previous case. The estimating equations have the form
1/c = (1/n) ∑_i=1^n (1/N) S_i^2 f_i(α),
(1/c) ∑_i=1^n (1/f_i(α)) ∂f_i(α)/∂α = ∑_i=1^n (1/N) S_i^2 ∂f_i(α)/∂α,
which can be rearranged to
1/c = (1/n) ∑_i=1^n (1/N) S_i^2 f_i(α),
0 = ∑_i=1^n S_i^2 f_i(α) ((1/f_i(α)) ∂f_i(α)/∂α − (1/n) ∑_j=1^n (1/f_j(α)) ∂f_j(α)/∂α).
Equation (<ref>) is an implicit formula for estimating α. Its
result can be used for estimating c through (<ref>). The maximum
likelihood estimator for D is then given by
D̂^(2) = diag((ĉ f_1(α̂))^-1,…,(ĉ f_n(α̂))^-1),
where ĉ and α̂ are the MLEs of c and α. It corresponds to searching for a maximum likelihood estimator of D in the subspace Θ_2 ⊂ Θ_3 ⊂ Θ_1 ⊂ Θ formed by the diagonal matrices {(c f_i(α))^-1, i=1,…,n}. Of course, the estimator D̂^(2) does not have “larger” variance than D̂^(3).
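A sketch of the two-parameter fit via the estimating equations, again under the hypothetical choice f_i(α) = e^(αh_i), for which (1/f_i(α)) ∂f_i(α)/∂α = h_i; the root-finding bracket is an assumption suited to this synthetic data:
```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
n, N = 50, 40
h = np.linspace(0.1, 5.0, n)
c0, a0 = 1.0, 0.3                                # "true" parameters of the sketch
d = 1.0 / (c0 * np.exp(a0 * h))
X = rng.normal(0.0, np.sqrt(d), size=(N, n))
S2 = (X ** 2).sum(axis=0)

def alpha_equation(a):                           # left-hand side of (<ref>)
    g = h                                        # (1/f_i) df_i/dalpha = h_i here
    return np.sum(S2 * np.exp(a * h) * (g - g.mean()))

a_hat = brentq(alpha_equation, -2.0, 2.0)        # bracket chosen by inspection
c_hat = n / np.sum(S2 * np.exp(a_hat * h) / N)   # from (<ref>)
print(c_hat, a_hat)                              # should land near (1.0, 0.3)
```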
The covariance of the asymptotic distribution of the parameters d_1,…,d_n is
(1/N) Q_D^(2) = (1/N) ∇d(c,α) J_c,α^-1 ∇d(c,α)^⊤,
from (<ref>), where the Fisher information matrix at D = {d_i(c,α), i=1,…,n} is the symmetric 2×2 matrix with entries
(J_c,α)_11 = n/(2c^2), (J_c,α)_12 = (1/(2c)) ∑_i=1^n (1/f_i(α)) ∂f_i(α)/∂α, (J_c,α)_22 = (1/2) ∑_i=1^n (1/f_i^2(α)) (∂f_i(α)/∂α)^2,
and d(c,α) = [d_1(c,α),…,d_n(c,α)]^⊤ = [(c f_1(α))^-1,…,(c f_n(α))^-1]^⊤.
§.§ Sparse inverse covariance and GMRF
In the GMRF method for fields on a rectangular mesh, we
assume that a variable on a gridpoint is conditionally independent on the rest
of the gridpoints, given values on neighboring gridpoints. It follows that
nonzero entries in the inverse of the covariance matrix can be only between
neighbor gridpoints. We start with 4 neighbors (up, down, right, left), and
adding neighbors gives rise to a sequence of nested covariance models. If the
columns of the mesh are stacked vertically, their inverse covariance matrix
will have a band-diagonal structure.
The inverse covariance model fitted by MLE was introduced by
<cit.> and applied on data from oceanography. The corresponding
Fisher information matrix may be found as the negative of the Hessian matrix
<cit.>.
§ COMPUTATIONAL STUDY
In Section <ref>, we have shown that in the sense of asymptotic
variance and second moment (mean-squared) error, the maximum likelihood
estimator computed in a smaller space containing the true parameter is more
(or equally) precise. For small samples, we illustrate this behavior by means
of simulations.
§.§ Simulation of simple GMRF
We first show that in the case of GMRF with four
neighbors per gridpoint, adding dependencies (parameters) which are not
present brings a loss of precision of the MLE. Using the sample covariance in
this case causes a substantial error.
We have generated an ensemble of realizations of a GMRF with dimensions
10×10 (resulting in n=100) and inverse covariance structure as in
Fig. <ref>. The values on the diagonals of the covariance matrix
have been set to constant, since we assume the correlation with left and right
neighbor to be identical, as well as the correlation with upper and lower
neighbor (by symmetry of the covariance matrix and isotropy in both directions
of the field, but different correlation in each direction). This leads to a
model with 3 parameters for 4 neighbors, 5 parameters for 8 neighbors and 7
parameters for 12 neighbors.
The covariance structure of Σ^-1 with 4 neighbors was set as
“truth” and random samples were generated
from 𝒩_n(0,Σ) with sample sizes
N=10,15,20,…,55. The values on first, second and tenth diagonal have
been set as 5, -0.2 and 0.5. For each sample, we computed successively the MLE
with 3, 5 and 7 unknown parameters numerically by Newton's method, as
described in <cit.>.
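For reference, the "true" precision matrix of this simulation can be assembled as below (a sketch; the grid size and values follow the description above):
```python
import numpy as np

def gmrf_precision(m=10, main=5.0, first=-0.2, tenth=0.5):
    # 4-neighbour GMRF on an m x m grid, columns stacked vertically, so the
    # within-column neighbour sits on the first diagonal and the across-column
    # neighbour on the m-th (here tenth) diagonal.
    n = m * m
    Q = main * np.eye(n)
    for k in range(n):
        if k % m != m - 1:
            Q[k, k + 1] = Q[k + 1, k] = first
        if k + m < n:
            Q[k, k + m] = Q[k + m, k] = tenth
    return Q

Q = gmrf_precision()
print(np.all(np.linalg.eigvalsh(Q) > 0))   # diagonally dominant, hence valid
```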
The difference of each estimator from the true matrix Σ was measured in
the Frobenius norm, which is the same as the Euclidean norm of a matrix
written as one long vector. In order to reduce the sampling error, 50
simulations of the same size were generated and the mean of squared Frobenius
norm was computed. The results can be found in Fig. <ref>.
As expected, the MLE with 3 parameters outperforms the estimates with 5 and 7
parameters, and the Frobenius norm for the sample covariance stays an order of magnitude worse than all parametric estimates.
§.§ Simulation of fields with diagonal covariance
The simulation for spectral diagonal covariance was carried out in a similar
way. First, a diagonal matrix D was prepared, whose diagonal entries decay
according to the model d_i=1/ce^αλ_i,i=1,…,n,
where c and α are parameters and λ_i are the eigenvalues of
Laplace operator in two dimensions on 10×10 nodes (so again n=100).
Such models are useful in modeling smooth random fields, e.g., in meteorology.
Then, random samples were generated from 𝒩_n(0,D)
with sample sizes N=5,…,20. For each sample, five covariance matrix estimators were computed:
* sample covariance matrix Σ̂_N, cf.
(<ref>)
* diagonal part D̂^( 0) of the sample covariance
matrix, cf. (<ref>)
* MLE D̂^( 1) in the space of diagonal matrices,
cf. (<ref>)
* MLE D̂^( 3) ={(ĉ_1-ĉ_2
λ_i)^-1e^α̂λ_i,i=1,…,n} with 3 parameters
c_1,c_2 and α, cf. (<ref>).
* MLE D̂^( 2) ={ĉ^-1e^α̂λ_i,i=1,…,n} with 2 parameters c and α, cf.
(<ref>).
Let us briefly discuss the choice of the covariance model d_i=1/ce^αλ_i. We decided to carry out the simulation with a
second-order stationary random field, whose covariance can be diagonalized by
the Fourier transform. This transform is formed by the eigenvectors of the
Laplace operator. Hence, it is reasonable to model the diagonal terms of this
covariance matrix (i.e. the covariance eigenvalues) by some function of
eigenvalues of the Laplace operator. This function needs to have a
sufficiently fast decay in order to fulfil the necessary condition for the
proper covariance (the so-called trace class property, e.g.,
<cit.>). Exponential decay is used, e.g., in
<cit.>. Another possible choice of a covariance model is a
power model, where the eigenvalues of the covariance are assumed to be a
negative power of -λ_i,i=1,…,n, e.g., <cit.>.
The difference of each estimator from the true matrix D was measured in the
Frobenius norm again. To reduce the sampling noise, 50 replications have been
done for each sample size and the mean of squared Frobenius norm can be found
in Fig. <ref>.
For the diagonal MLEs, given by (<ref>), (<ref>), and (<ref>), we can expect from (<ref>) that these estimators should satisfy asymptotically
E(|D̂_N^(k) − D|_F^2) ≈ (1/N) Tr(J_D^(k)^-1), k = 1, 2, 3,
even if convergence in distribution does not imply convergence of moments
without additional assumptions. This conjecture can be supported by a
comparison of Figures <ref> and <ref>, where we
observe the same decay. From the nesting, we know that
Tr(J_D^(2)^-1) ≤ Tr(J_D^(3)^-1) ≤ Tr(J_D^(1)^-1)
and we can expect that the Frobenius norm of the error should decrease in the mean for more restrictive models, that is,
E|D̂_N^(2) − D|_F^2 ≤ E|D̂_N^(3) − D|_F^2 ≤ E|D̂_N^(1) − D|_F^2,
which is confirmed by the simulations (see Figure <ref>, resp. <ref>).
The comparisons (<ref>) of the Frobenius norm of the error in the mean-squared sense complement the pointwise comparison (<ref>) between the sample covariance and its diagonal.
on MLE for that comparison is not practical, because the sample size of
interest here is N<n, and, consequently, Σ̂_N is singular and
cannot be cast as MLE with an accompanying Fisher information matrix, cf.
Remark <ref>. But it is evident that for small sample sizes,
estimators computed in the proper subspace perform better. Hence, the
hierarchical order seems to hold even when N<n.
§ COMPARISON WITH REGULARIZATION METHODS
In the previous sections, we pointed out the advantages of using
low-parametric models for estimating a covariance matrix using a small sample.
As mentioned in the Introduction, there is another large class of estimating
methods for high-dimensional covariance matrices: shrinkage estimators. The
principle of these methods is to move the sample covariance towards a target
matrix that possesses some desired properties (e.g., full rank, proper
structure). This can be seen as a convex combination of the sample covariance
matrix Σ̂_N and the so-called target matrix T:
Σ̂_S = γΣ̂_N + (1−γ)T, for γ ∈ [0,1].
One of the simplest shrinkage estimators has the form of (<ref>) with the target matrix equal to the identity, which results in shrinking all sample eigenvalues with the same intensity towards their mean value.
<cit.> derived the optimal shrinkage parameter γ to
minimize the squared Frobenius loss
min_γ||Σ̂_S-D||_F^2.
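As a rough illustration, scikit-learn ships a Ledoit–Wolf estimator of this flavour; the snippet below (hypothetical decaying spectrum and a small sample) compares its Frobenius error with that of the sample covariance. We do not claim that its internal target and loss coincide exactly with the cited derivation:
```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(6)
n, N = 100, 15
d = np.exp(-0.1 * np.arange(1, n + 1))           # decaying true eigenvalues
X = rng.normal(0.0, np.sqrt(d), size=(N, n))

lw = LedoitWolf(assume_centered=True).fit(X)
S = X.T @ X / N
D = np.diag(d)
print(np.linalg.norm(lw.covariance_ - D), "vs", np.linalg.norm(S - D))
```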
The comparison of this estimator with the maximum likelihood estimator
D̂^(2) was accomplished by a simulation with the same setting as in
Section <ref>. The results are shown in Fig. <ref>.
For reference, the sample covariance Σ̂_N and its diagonal
D̂^(0) are also added.
Another regularization method is described in
<cit.>. They consider a type of covariance estimator, where the
regularization effect is achieved by bounding the condition number of the
estimate by a regularization parameter κ_max. Since the condition
number is defined as a ratio of the largest and smallest eigenvalue, this
method corrects for overestimation of the largest eigenvalues and
underestimation of the small eigenvalues simultaneously. The resulting
estimator is called a condition-number-regularized covariance estimator and it
is formulated as the maximum likelihood estimator restricted on the subspace
of matrices with condition number bounded by κ_max, i.e.
max_Σℓ(Σ) subject to λ
_max(Σ)/λ_min(Σ)≤κ_max,
where λ_max(Σ), resp. λ_min(Σ), is the largest,
resp. the smallest, eigenvalue of the covariance matrix Σ. An optimal
κ_max is selected by maximization of the expected likelihood, which
is approximated by using K-fold cross-validation. The authors proved that
κ_max selected in this way is a consistent estimator for the true
condition number (i.e. the condition number of D). Therefore, the idea of
this method is to search a MLE in a subspace defined by covariance matrices
with condition number smaller or equal to the true condition number. The form
of the resulting covariance estimator together with the details of the
computational process is provided in <cit.>. In
Fig. <ref>, we can see the performance of this estimator in comparison with the other methods.
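Qualitatively, the estimator keeps the spread of the sample eigenvalues within κ_max. The sketch below merely clips the eigenvalues from below, which conveys the idea but is not the exact optimal truncation derived in the cited work:
```python
import numpy as np

def clip_condition_number(S, kappa_max):
    lam, U = np.linalg.eigh(S)
    lam = np.clip(lam, lam.max() / kappa_max, None)   # enforce lmax/lmin <= kappa
    return U @ np.diag(lam) @ U.T

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 8))                 # 20 observations of 8 variables
S_reg = clip_condition_number(X.T @ X / 20, kappa_max=50.0)
print(np.linalg.cond(S_reg) <= 50.0 + 1e-6)
```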
The shrinkage estimator Σ̂_S and the condition-number-regularized estimator result in non-diagonal matrices, which in our case predisposes them to perform worse than the diagonal estimator D̂^(0). However, we have to note that the performance of these methods
strongly depends on the particular form of the true covariance matrix D. In
the case when the decrease of the true eigenvalues is less rapid, both methods
may provide better results than the diagonal of sample covariance. The
performance of Σ̂_S could be possibly improved by choosing a
different target matrix that is closer to reality but such a study is out of
the scope of this paper.
It is seen from Fig. <ref> that the
condition-number-regularized estimator provides more precise estimates than
the sample covariance Σ̂_N, as expected. This is in accordance
with the preceding theory and empirical findings about the higher precision of
estimators from a smaller parametric subspace (the corresponding parametric
subspace consists of matrices with the condition number smaller or equal to
κ_max). If, however, the theoretical condition number is very large
as in our case, the method has a problem in estimating this number and its
performance is limited.
Both regularization estimators perform well against sample covariance, but the
setting of our simulation is less favourable for them. Neither of them can
compete with the maximum likelihood estimator found in the true small subspace
of diagonal matrices with proper decay.
§ CONCLUSIONS
Our main aim was to point out the significant advantage resulting from
computing the MLE of the covariance matrix in a proper parameter subspace,
especially in the high-dimensional setting, when the available sample has
small size relative to the dimension of the problem. This subspace can be
formed, e.g., by a parametric model for covariance eigenvalues or for a
diagonal matrix resulting from a suitable set of transformations.
We provided theoretical results on asymptotic comparison of covariance
matrices of each estimator for multivariate normal distribution, where we can
lean on the well-developed maximum likelihood theory. The situation for small
samples was illustrated by means of a simulation. We considered low-parametric models for the covariance eigenvalues based on the eigenvalues of the Laplace operator. In practice, the proper model/subspace can be inferred from
historical data.
Using a properly specified model, one can reach a significant improvement in
performance, which can have a positive impact on the subsequent tasks like
data assimilation and prediction.
§ ACKNOWLEDGEMENTS
This work was partially supported by the Czech Science Foundation (GACR)
under grant 13-34856S and by the U.S. National Science Foundation under
grants DMS-1216481 and ICER-1664175.
http://arxiv.org/abs/1701.07723v1 | 20170126144528 | The Many Faces of Data-centric Workflow Optimization: A Survey | [
"Georgia Kougka",
"Anastasios Gounaris",
"Alkis Simitsis"
] | cs.DB | [
"cs.DB"
] |
[email protected]
Aristotle University of Thessaloniki, Greece
[email protected]
Aristotle University of Thessaloniki, Greece
[email protected]
HP Labs, Palo Alto, USA
Workflow technology is rapidly evolving and, rather than being limited to modeling the control flow in business processes, is becoming a key mechanism to perform advanced data management, such as big data analytics. This survey focuses on data-centric workflows (or workflows for data analytics or data flows), where a key aspect is data passing through and getting manipulated by a sequence of steps. The large volume and variety of data, the complexity of operations performed, and the long time such workflows take to compute give rise to the need for optimization. In general, data-centric workflow optimization is a technology in evolution. This survey focuses on techniques applicable to workflows comprising arbitrary types of data manipulation steps and semantic inter-dependencies between such steps. Further, it serves a twofold purpose. Firstly, to present the main dimensions of the relevant optimization problems and the types of optimizations that occur before flow execution. Secondly, to provide a concise overview of the existing approaches with a view to highlighting key observations and areas deserving more attention from the community.
data analytics; data flows; workflow optimization; survey
§ INTRODUCTION
Workflows aim to model and execute real-world intertwined or interconnected processes, referred to as tasks or activities. While this is still the case, workflows play an increasingly significant role in processing very large volumes of data, possibly under highly demanding requirements.
Scientific workflow systems tailored to data-intensive e-science applications have been around since the last decade, e.g., <cit.>. This trend is nowadays complemented by
the evolution of workflow technology to serve (big) data analysis, in settings such as business intelligence, e.g., <cit.>, and business process management, e.g., <cit.>. Additionally, massively parallel engines, such as Spark, are becoming increasingly popular for designing and executing workflows.
Broadly, there are two big workflow categories, namely control-centric and data-centric. A workflow is commonly represented as a directed graph, where each task corresponds to a node in the graph and the edges represent the control flow or the data flow, respectively. The control-centric workflows are most often encountered in business process management <cit.> and they emphasize the passing of control across tasks and gateway semantics, such as branching execution, iterations, and so on; transmitting and sharing data across tasks is a second class citizen. In control-centric workflows, only a subset of the graph nodes correspond to activities, while the remainder denote events and gateways, as in the BPMN standard. In
data-centric workflows (or workflows for data analytics or simply data flows[Hereafter, these three terms will be used interchangeably; the terms workflow and flow will be used interchangeably, too.]), the graph is typically acyclic (directed acyclic graph - DAG). The nodes of the DAG represent solely actions related to the manipulation, transformation, access and storage of data,
e.g., as in <cit.> and in popular data flow systems, such as Pentaho Data Integration (Kettle) and Spark.
The tokens passing through the tasks correspond to processed data. The control is modeled implicitly assuming that each task may start executing when the entire or part of the input becomes available.
This survey considers data-centric flows exclusively.
Executing data-centric flows efficiently is a far from trivial issue. Even in the most widely used data flow tools, flows are commonly designed manually. Problems in the optimality of those designs stem from the complexity of such flows and the fact that in some applications, flow designers might not be systems experts <cit.> and consequently, they tend to design with only semantic correctness in mind. In addition, executing flows in a dynamic environment may entail that an optimized design in the past may behave suboptimally in the future due to changing conditions <cit.>.
The issues above call for a paradigm shift in the way data flow management systems are engineered; more specifically, there is a growing demand for automated optimization of flows. An analogy can be drawn with database query processing, where declarative statements, e.g., in SQL, are automatically parsed, optimized, and then passed on to the execution engine. But data flow optimization is more complex, because tasks need not belong to a predefined set of algebraic operators with clear semantics and there may be arbitrary dependencies constraining their execution order. In addition, in data flows there may be optimization criteria apart from performance, such as reliability and freshness, depending on business objectives and execution environments <cit.>. This survey covers optimization techniques[The terms technique, proposal, and work will be used interchangeably.] applicable to data flows, including database query optimization techniques that consider arbitrary plan operators, e.g., user-defined functions (UDFs), and dependencies between them. In contrast, we do not aim to cover techniques that perform optimizations considering solely specific types of tasks, such as filters, joins, and so on.
The contribution of this survey is the provision of a taxonomy of data flow optimization techniques that refer to the flow plan generation layer. In addition, a concise overview of the existing approaches with a view to (i) explaining the technical details and the distinct features of each approach in a way that facilitates result synthesis; and (ii) highlighting strengths and weaknesses, and areas deserving more attention from the community is provided.
The main findings are that on the one hand, big advances have been made and most of the aspects of data flow optimization have started to be investigated. On the other hand, data flow optimization is rather a technology in evolution. Contrary to query optimization, research so far seems to be less systematic and mainly consists of ad-hoc techniques, the combination of which is unclear.
The structure of the rest of this article is as follows. The next section describes the survey methodology and provides details about the exact context considered. Section <ref> presents a taxonomy of existing optimizations that take place before the flow enactment. Section <ref> describes the state-of-the-art techniques grouped by the main optimization mechanism they employ. Section <ref> presents the ways in which optimization proposals for data-centric workflows have been evaluated. Section <ref> highlights our findings. Section <ref> touches upon tangential flow optimization-related techniques that have recently been developed along with scheduling optimizations taking place during flow execution.
Section <ref> reviews surveys that have been conducted in related areas and finally, Section <ref> concludes the paper.
§ SURVEY METHODOLOGY
We first detail our context with regards to the architecture of a Workflow Management System (WfMS). Then we explain the methodology for choosing the techniques included in the survey and their dimensions, on which we focus. Finally, we summarize the survey contributions.
§.§ Our Context within WfMSs
The life cycle of a workflow can be regarded as an iteration of four phases, which cover every stage from the workflow modeling until its output analysis <cit.>. The four phases are composition, deployment, execution, and analysis <cit.>. The type of workflow optimization, on which this work focuses, is part of the deployment phase where the concrete executable workflow plan is constructed defining execution details, such as the engine that will execute each task. Additionally, Liu et al. <cit.> introduce a functional architecture for each data-centric Workflow Management System (WfMS), which consists of five layers: i) presentation, which comprises the user interface; ii) user services, such as the workflow monitoring and data provision components; iii) workflow execution plan (WEP) generation, where the workflow plan is optimized, e.g., through workflow refactoring and parallelization, and the details needed by the execution engine are defined; iv) WEP execution, which deals with the scheduling and execution of the (possibly optimized) workflow, but also considers fault-tolerance issues, and finally, v) the infrastructure layer, which provides the interface between the workflow execution engine and the underlying physical resources.
According to the above architecture, one of the roles of a WfMS is to compile and optimize the workflow execution plans just before the workflow execution. Optimization of data flows, as conceived in this work, forms an essential part of the WEP generation layer and not of the execution layer. Although there might be optimizations in the WEP execution layer as well, e.g., while scheduling the WEP, these are out of our scope. More specifically, the mapping of flow tasks to concrete processing nodes during execution, e.g, task X of the flow should run on processing node Y, is traditionally considered to be a scheduling activity that is part of WEP execution layer rather than the WEP generation one, on which we focus. Finally, we use the terms task and activity interchangeably, both referring to entities that are not yet instantiated, activated or executed.
§.§ Techniques Covered
The main part of this survey covers all the data flow optimization techniques that, to the best of the authors' knowledge, meet the following criteria:
* They refer to the WEP generation layer in the architecture described above.
* They refer to techniques that are applicable to any type of tasks rather than being tailored to specific types, such as filters and joins.
* The partial ordering of the flow tasks is subject to dependency (or precedence) constraints between tasks, as is the generic case, for example, of scientific and data analysis flows; these constraints denote whether a specific task must precede another task in the flow plan.
We surveyed all types of venues where relevant techniques are published. Most of the covered works come from the broader data management and e-science community, but there are proposals from other areas, such as algorithms. We also include techniques that were proposed without generic data flows in mind, but meet our criteria and thus are applicable to generic data flows. An example is the proposal for queries over Web Services (WSs) in <cit.>.
§.§ Technique Dimensions Considered
We assume that the user initially defines the flow either at a high-level non-executable form or in an executable form that is not optimized. The role of the optimizations considered is to transform the initial flow into an optimized ready-to-be executed one.[Through considering optimizations starting from a valid initial flow, we exclude from our survey the big area of answering queries in the presence of limited access patterns, in which, the main aim is to construct such an initial plan <cit.> through selecting an appropriate subset of tasks from a given task pool; however, we have considered works from data integration that optimize the plan after it has been devised, such as <cit.> or <cit.>, which is subsumed by <cit.>.] Analogously to query optimization, it is convenient to distinguish between high-level and low-level flow details. The former capture essential flow parts, such as the final task sequencing, at a higher level than that of complete execution details, whereas the latter include all the information needed for execution. In order to drive the optimization, a set of metadata is assumed to be in place. This metadata can be statistics, e.g., cost per task invocation and size of task output per input data item, information about the dependency constraints between tasks, that is a partial order of tasks, which must be always preserved to ensure semantic correctness, or other types of information as explained in this survey.
To characterize optimizations that take place before the flow execution (or enactment), we pose a set of questions when examining each existing proposal:
* What is the effect on the execution plan?, which aims to identify the type of incurred enhancements to the initial flow plan.
* Why?, which asks for the objectives of the optimization.
* How?, which aims to clarify the type of the solution.
* When?, to distinguish between cases where the WEP generation phase takes place strictly before the WEP execution one, and where these phases are interleaved.
* Where the flow is executed?, which refers to the execution environment.
* What are the requirements?, which refers to the input flow metadata in order to apply the optimization.
* In which application domain?, which refers to the domain for which the technique initially targets.
We regard each of the above questions as a different dimension. As such, we derive seven dimensions: (i) the Mechanisms referring to the process through which an initial flow is transformed into an optimized one; (ii) the Objectives that capture the one or more criteria of the optimization process; (iii) the Solution Types defining whether an optimization solution is accurate or approximate with respect to the underlying formulation of the optimization problem; (iv) the Adaptivity during the flow execution; (v) the Execution Environment of the flow and its distribution; (vi) the Metadata necessary to apply the optimization technique; and finally, (vii) the Application Domain, for which each optimization technique is initially proposed.
§ TAXONOMY OF EXISTING SOLUTIONS
Based on the dimensions identified above, we build a taxonomy of existing solutions. More specifically, for each dimension, we gather the values encountered in the techniques covered hereby.
In other words, the taxonomy is driven by the current state-of-the-art and aims to provide a bird's eye view of today's data flow optimization techniques. The taxonomy is presented in Figure <ref> and analyzed below, followed by a discussion of the main techniques proposed to date in the next section. In the figure, each dimension (in light blue) can take one or more values. Single-value and multi-value dimensions are shown as yellow and green rectangles, respectively.
§.§ Flow Optimization Mechanisms
A data flow is typically represented as a directed acyclic graph (DAG) that is defined as G=(V,E), where V denotes the nodes of the graph corresponding to a set of tasks and E represents a set of pairs of nodes, where each pair denotes the data flow between two tasks. If a task outputs data that cannot be directly consumed by a subsequent task, then data transformation needs to take place through a third task; no data transformation takes place through an edge.
Each graph element, either a vertex or an edge, is associated with a triplet of the form <Impl,ExecEng,Config>, either explicitly or implicitly. The Impl property denotes the task or edge implementation, ExecEng provides the engine that will execute each element; and finally, Config captures the configuration of the execution environment, such as the bandwidth reserved for a data transfer across a graph edge, or the number of reducer slots in a Hadoop cluster. Any optimization technique covered in this survey impacts on either the set of V or E, or on (part of) the associated triplets.
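A minimal sketch of this graph model (all task names, engines, and configuration keys are hypothetical):
```python
from dataclasses import dataclass, field

@dataclass
class Element:                                   # a vertex (task) or an edge
    impl: str = ""                               # Impl: chosen implementation
    exec_eng: str = ""                           # ExecEng: engine executing it
    config: dict = field(default_factory=dict)   # Config: environment settings

@dataclass
class DataFlow:
    tasks: dict                                  # task name -> Element
    edges: dict                                  # (src, dst) -> Element

flow = DataFlow(
    tasks={"extract": Element("jdbc-reader", "etl-engine"),
           "clean": Element("udf:dedup", "spark", {"executors": 8}),
           "load": Element("bulk-load", "dbms")},
    edges={("extract", "clean"): Element(config={"bandwidth_mbps": 100}),
           ("clean", "load"): Element()},
)
```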
Data flow optimization is a multi-dimensional problem and its multiple dimensions are broadly divided according to the two flow specification levels. Consequently, we identify the optimization of the high-level (or logical) flow plan and the low-level (or physical) flow plan, and each type of optimization mechanism can affect the set of V or E of the workflow graph and their properties.
The problem of the logical data flow optimization is to define the exact sets V and E, so that an objective function is optimized. As such, the logical flow optimization types are largely based on workflow structure reformations, while preserving any dependency constraints between tasks; structure reformations are reflected as modifications in V and E. The output of the optimized flow needs to be semantically equivalent to the output of the initial flow, which practically means that the two flows receive the same input data and produce the same output data, without considering the way this result was produced. Given that data manipulation takes place only in the context of tasks, logical flow optimization is task-oriented. The logical optimization types are characterized as follows (summarized also in Figure <ref>):
* Task Ordering, where we change the sequence of the tasks by applying a set of partial (re)orderings. Task (re)ordering affects the set E of the workflow DAG.
* Task Introduction, where new tasks are introduced into the data flow plan in order, for example, to minimize the data to be processed and thus the overall execution cost. Introducing tasks enlarges the set V of the flow graph and also affects the set E, so that the new vertices are connected to the graph.
* Task Removal, which can be deemed the opposite of task introduction. A task can be safely removed from the flow if it does not actually contribute to its result dataset. As in the previous case, task removal affects both the set V, which shrinks, and the set E, from which the corresponding edges are removed.
* Task Merge, the action of grouping flow tasks into a single task without changing the semantics; it modifies the set V in order, for example, to minimize the overall flow execution cost or to mitigate the overhead of enacting multiple tasks.
* Task Decomposition, where a composite task is split into multiple tasks of simpler functionality. This is the opposite of the merge action and may expose more optimization opportunities, as discussed in <cit.>, because of the potential increase in the number of valid (re)orderings. Similar to the task introduction and merge mechanisms, the optimized workflow plan differs from the initial one in V, while E is modified only to reflect the changes in V.
At the low level, a wide range of implementation aspects needs to be specified so that the flow can later be executed. These aspects are captured by the <Impl,ExecEng,Config> triplet; for each of its properties, we identify a different physical data flow optimization type, as follows (see also Figure <ref>):
* Task Implementation Selection, which is one of the most significant lower-level problems in flow optimization. This optimization type covers the selection, for each task, of the exact, logically equivalent implementation that satisfies the defined optimization objectives <cit.>. A well-known counterpart in database optimization is choosing the exact join algorithm (e.g., hash join, sort-merge join, nested loops). In this case, the Impl property of one or more tasks or edges has to be specified or modified.
* Execution Engine Selection, where we have to decide the type of processing engine that executes each task. The need for such optimization stems from the availability of multiple options in modern data-intensive flows <cit.>. Common choices nowadays include DBMSs and massively parallel engines, such as Hadoop clusters, apart from the execution engines bundled with data flow management systems. The corresponding decisions affect the ExecEng property of the workflow graph.
* Execution Engine Configuration, where we decide on configuration details of the execution environment, such as the bandwidth, CPU, and memory to be reserved during execution, or the number of cores allocated <cit.>. This optimization mechanism refers to the specification of the Config property.
§.§ Optimization Objectives
An optimization problem can be defined as either a single- or a multi-objective one, depending on the number of criteria it considers. The optimization objectives typically encountered in the state-of-the-art include performance, reliability, availability, and monetary cost. The latter is important when the flow is executed on resources provided at a price, as in public clouds. Other quality metrics can be applied as well (denoted as other QoS in Figure <ref>).
The first two objectives require further elaboration. Performance can be defined in several forms, depending, for example, on whether the target is the minimization of the response time or of the resource consumption. The detailed definitions of the performance objective in data flows include: minimization of the sum of the task and edge costs (Sum Cost); minimization of the sum of the task and edge costs along the flow critical path (Critical Path); minimization of the most expensive task cost, in order to alleviate bottleneck problems (Bottleneck); and maximization of the throughput (Throughput). Each of these definitions may be formally expressed as an objective function, as presented later.
Analogously, reliability may appear in several forms. In our context, reliability reflects how much confidence we have that a data flow execution plan will complete successfully. In data flow optimization proposals, we have encountered the following two reliability aspects playing the role of optimization objectives: the trustworthiness of a flow (Trust), which is typically based on the trustworthiness of the individual tasks and the avoidance of dishonest providers, that is, providers with a bad reputation; and Fault Tolerance, which allows the execution of the flow to proceed even in the case of failures.
§.§ Optimization Solution Types
The optimization techniques that have been proposed constitute accurate, approximate, or heuristic solutions.
Such solutions make sense only when considered together with the complexity of the exact problem they aim to solve. Unfortunately, many of the problems in flow optimization are intractable; for such problems, accurate solutions cannot be scalable.
Approximate optimization solutions typically tackle intractable problems in a scalable way while providing guarantees on the approximation bound. Finally, heuristic solutions exploit knowledge about the specific problem characteristics and propose algorithms that are fast and exhibit good behavior in test cases, without formally examining the deviation of the solution from the optimal.
§.§ Adaptivity of Data-Centric Flows
Data flow adaptivity refers to the ability of a technique to re-optimize the data flow plan during the execution phase. We thus characterize the optimization techniques as either static, where, once the flow execution plan is derived, it is executed in its entirety, or dynamic, where the flow execution plan may be revised on the fly.
§.§ Execution Environment
The proposed data flow optimization techniques differ significantly according to the execution environment assumed. The execution environment is defined by the type of resources that execute the flow tasks. Specifically, in a centralized execution environment, all the tasks of a flow are executed by a single-node execution engine. In a parallel execution environment, the tasks are executed in parallel by an engine on top of a homogeneous cluster, while in a distributed execution environment, the tasks are executed by remote and potentially heterogeneous execution engines, interconnected through an ordinary network. Typically, optimizations at the logical level are agnostic to the execution environment, contrary to the physical ones.
§.§ Metadata
The set of metadata includes the information needed to apply the optimizations and, as such, can be regarded as existential pre-conditions that should hold.
The most basic input requirement of the optimization solutions is an initial set V of tasks. However,
additional metadata regarding the flow graph are typically required as well. These metadata are both qualitative and quantitative (statistical), as discussed below.
Qualitative metadata include:
* Dependencies, which explicitly refer to the definition of which vertices in the graph should always precede other vertices. Typically, the definition of dependencies comes in the form of an auxiliary graph.
* Task schemata, which refer to the definition of the schemata of the data input and/or output of each task. Note that dependencies may be derived from task schemata through simple processing <cit.>, especially if they contain information about which schema elements are bound or free <cit.>. However, task schemata may serve purposes beyond deriving dependencies, e.g., checking whether a task contributes to the final desired output of the flow.
* Task profile, which refers to information about the execution logic of a task, that is, the manner in which it manipulates its input data, e.g., obtained through analysis of the commands implementing the task. If no such metadata exist, the task is considered a black box. Otherwise, information about, e.g., which attributes are read and which are written can be extracted.
Quantitative metadata include:
* Vertex cost, which typically refers to the time cost, but can also capture other types of costs, such as monetary cost.
* Edge cost, which refers to the cost associated with edges, such as data transmission cost between tasks.
* Selectivity, which is defined as the (average) ratio of the output to the input data size of a task and its knowledge is equivalent to estimating the data sizes consumed and produced by each task; sizes are typically measured either in bytes or in number of records (cardinality).
* QoS properties, such as values denoting the task availability, reliability, security, and so on.
* Engine details, which cover issues, such as memory capacity, execution platform configurations, price of cloud machines, and so on.
§.§ Application Domain
The final dimension across which we classify existing solutions is the application domain for which each technique was proposed. This dimension sheds light on differentiating aspects of the techniques, with regards to the execution environment and the data types processed, that cannot be captured by the previous dimensions. Note that the techniques may be applicable to arbitrary data flows in application domains beyond those initially targeted. In this dimension, we consider two aspects: (i) the domain of the initial proposal, which can be one of the following: ETL flows, data integration, Web Service (WS) workflows, scientific workflows, MapReduce flows, business processes, database queries, or generic; and (ii) online (e.g., real-time) vs. batch processing. Generic proposals aim at a broader coverage of data flow applications but, due to their genericity, may miss optimization opportunities that a domain-specific proposal could exploit. Also, online applications require more sophisticated solutions, since data is typically streaming, and they employ additional optimization objectives, such as reliability and acquiring responses under pressing deadlines.
§ PRESENTATION OF EXISTING SOLUTIONS
Here, we describe the main techniques grouped according to the optimization mechanism. This type of presentation facilitates result synthesis: grouping by mechanism makes it easier to reason about whether different techniques employing the same mechanism can be combined or not, e.g., because they make incompatible assumptions. Additionally, the solutions for each mechanism are largely orthogonal to the solutions for another mechanism, which means that, in principle, they can be combined, at least in a naive manner. Therefore, our presentation approach provides more insight into how the different solutions can be synthesized.
The discussion is accompanied by a summary of each proposal in Table <ref> for the dimensions of mechanisms, objectives, solution types, and metadata, and in Table <ref> for the adaptivity, execution environment, and application domain dimensions. When an optimization proposal comes in the form of an algorithm, we also provide its time complexity with respect to the size of the set of vertices, |V|=n. However, interpreting such complexities requires special attention when there are several other variables of the problem size, as is common in techniques employing optimization mechanisms at the physical level; details are provided within the main text.
The first column of the table mentions also the publication year of each proposal, in order to facilitate the understanding of the proposal's setting and the time evolution of flow optimization.
Finally, we use a simple running example to present the application of the mechanisms. Specifically, as shown in Figure <ref>, we consider a data flow that (i) retrieves Twitter posts containing product tags (Tweets Input), (ii) performs sentiment analysis (Sentiment Analysis), (iii) filters out tweets according to the results of this analysis (Filter_1), (iv) extracts the product to which each tweet refers (Lookup ProductID), and (v) accesses a static external data source with additional product information (Join with External Source) in order to produce a report (Report Output). In this simple example, in any valid execution plan, step (ii) must precede step (iii) and step (iv) must precede step (v), as the sketch below illustrates.
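The following Python sketch encodes the running example and checks a candidate ordering against the two precedence constraints; the identifier names are our own shorthand for the task names above.

```python
# The running example as a task list plus explicit dependency constraints;
# (a, b) means task a must precede task b in any valid plan.
TASKS = ["tweets_input", "sentiment_analysis", "filter_1",
         "lookup_product_id", "join_external", "report_output"]

DEPENDENCIES = {("sentiment_analysis", "filter_1"),
                ("lookup_product_id", "join_external")}

def is_valid_order(order, deps):
    """Return True iff the order respects every precedence constraint."""
    pos = {task: i for i, task in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in deps)

# A reordered plan that moves the lookup/join pair before the (expensive)
# sentiment analysis; it still satisfies both constraints.
candidate = ["tweets_input", "lookup_product_id", "join_external",
             "sentiment_analysis", "filter_1", "report_output"]
assert is_valid_order(candidate, DEPENDENCIES)
```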
§.§ Task Ordering
The goal of Task Ordering is typically specified as optimizing an objective function, possibly under certain constraints. A common feature of all proposals is that they assign a metric m(v_i) to each vertex v_i ∈ V, i=1… n.
To date, task ordering techniques have been employed to optimize performance. More specifically, all aspects of performance that we introduced previously have been investigated: the minimization of the sum of execution costs of either all tasks (both with and without constraints) or of the tasks that belong to the critical path, the minimization of the maximum task cost, and the maximization of the throughput. Table <ref> summarizes the objective functions for these metrics that have been employed by task ordering approaches in data flow optimization to date. Existing techniques can be modeled uniformly at an abstract level as follows. The metric m refers either to costs (denoted as c(v_i)) or to throughput values (denoted as f(v_i)). Costs are expressed in either time or abstract units, whereas throughput is expressed as the number of records (or tuples) processed per time unit. A more generic modeling assigns a cost to each vertex v_i along with its outgoing edges e_ij, j=1… n (denoted as c(v_i,e_ij)).
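One plausible formalization of the four performance objectives, restricted to vertex metrics for brevity (P denotes the set of tasks on the critical path), is the following:

```latex
% Sketch of the four performance objective functions over vertex metrics;
% edge costs c(v_i,e_ij) can be added analogously.
\begin{align}
  \text{Sum Cost:}      &\quad \min \sum_{i=1}^{n} c(v_i) \\
  \text{Critical Path:} &\quad \min \sum_{v_i \in P} c(v_i) \\
  \text{Bottleneck:}    &\quad \min \max_{i=1,\dots,n} c(v_i) \\
  \text{Throughput:}    &\quad \max \min_{i=1,\dots,n} f(v_i)
\end{align}
```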
These objective functions correspond to problems of different algorithmic complexity. Specifically, the problems that target the minimization of the sum of the vertex costs are intractable <cit.>. Moreover, Burge et al. <cit.> argue that “it is unlikely that any polynomial time algorithm can approximate the optimal plan to within a factor of O(n^θ)”, where θ is some positive constant. The generic bottleneck minimization problem is intractable as well <cit.>. However, bottleneck minimization based only on vertex costs, as well as the other two objective functions, can be solved optimally in polynomial time <cit.>.
Independently of the exact optimization objectives, all the known optimization techniques in this category assume the existence of dependency constraints between the tasks, either explicitly or implicitly through the definition of task schemata. For the cost or throughput metadata, some techniques rely on lower-level information, such as selectivity (see Section <ref>).
§.§.§ Techniques for Minimizing the Sum of Costs
Regarding the minimization of the sum of the vertex costs (first row in Table <ref>), both accurate and heuristic optimization solutions have been proposed for this intractable problem; evidently, the former are not scalable. An accurate task ordering solution is the application of dynamic programming, which is extensively used in query optimization <cit.>; such a technique has been proposed for generic data flows in <cit.>. The rationale of this algorithm is to calculate the cost of task subsets of size n based on subsets of size n-1, keeping, for each subset, only the optimal solution that satisfies the dependency constraints. This solution has exponential complexity even for simple linear non-distributed flows (O(2^n)) but, for small values of n, it is applicable and fast.
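A minimal sketch of this dynamic-programming approach, assuming the sum-cost model with per-task selectivities (cpi being the cost per input data unit), is given below; the helper names are ours.

```python
# Exact sum-cost ordering under precedence constraints; subsets of size k
# are grown from subsets of size k-1, keeping one best plan per subset.
from math import prod

def dp_order(tasks, cpi, sel, deps):
    best = {frozenset(): (0.0, [])}          # subset -> (cost, order)
    for _ in tasks:
        nxt = {}
        for subset, (cost, order) in best.items():
            for t in tasks:
                if t in subset:
                    continue
                # t is eligible only if all its prerequisites are placed.
                if any(a not in subset for a, b in deps if b == t):
                    continue
                new_cost = cost + cpi[t] * prod(sel[x] for x in subset)
                key = subset | {t}
                if key not in nxt or new_cost < nxt[key][0]:
                    nxt[key] = (new_cost, order + [t])
        best = nxt
    return best[frozenset(tasks)]

# Toy usage with three tasks and no constraints -> (2.1, ['c', 'a', 'b']).
print(dp_order(["a", "b", "c"],
               cpi={"a": 1, "b": 10, "c": 1},
               sel={"a": 1.0, "b": 1.0, "c": 0.1},
               deps=set()))
```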
Another optimization technique is the exhaustive production of all topological sortings in such a way that each sorting is produced from the previous one with the minimal amount of changes <cit.>; this approach has also been employed to optimize flows in <cit.>. Despite having a worst-case complexity of O(n!), it is more scalable than the dynamic programming solution, especially for flows with many dependency constraints between tasks.
Another exhaustive technique is to define the problem as a state space search <cit.>. In such a space, each possible task ordering is modeled as a distinct state, and all states are eventually visited. Similar to the optimization proposals described previously, this technique is not scalable either.
Another form of task reordering is when a single-input/output task is moved before or after a multi-input or multi-output task <cit.>. An example case is when two copies of a proliferate single-input/output task are originally placed on the two inputs of a binary fork operation and, after reordering, are moved after the fork; in such a case, the two task copies moved downstream are merged into a single one. As another example, a single-input/output task placed after a multi-input task can be moved upstream, e.g., when a filter task placed after a binary fork is moved upstream to both fork input branches (or to just one, based on their predicates). This is similar to traditional query optimization, where a selective operation can be moved before an expensive operation like a join.
The branch-and-bound task ordering technique is similar to dynamic programming in that it builds a complete flow by appending tasks to smaller sub-flows. To this end, it examines only sub-flows that meet the dependency constraints and applies a set of recursive calls, with early pruning, until all promising data flow plans have been generated. Such an optimization technique has been applied in <cit.> to the efficient execution of parallel scientific workflows, as part of a logical optimizer integrated into the Stratosphere system <cit.>, the predecessor of Apache Flink. An interesting feature of this approach is that, following common practice from database systems, it performs static task analysis (i.e., task profiling) in order to yield statistics and fine-grained dependency constraints between tasks, going beyond the knowledge that can be derived from simply examining the task schemata.
For practical reasons, the four accurate techniques described above are not a good fit for medium and large flows, e.g., those with over 15-20 tasks; in these cases, the space of possible solutions is large and needs to be pruned.
Thus, heuristic algorithms have been proposed to find near-optimal solutions for larger data flows. For example, Simitsis et al. <cit.> propose a task ordering technique that allows state transitions between orderings that differ in the position of only two adjacent tasks. Such transitions are equivalent to a heuristic that swaps every pair of adjacent tasks, if this change yields a lower cost, always preserving the defined dependency constraints, until no further changes can be applied. This heuristic, initially proposed for ETL flows, can be applied to parallel and distributed execution environments with streaming or batch input data. Interestingly, this technique is combined with another set of heuristics employing additional optimization mechanisms, such as task merge. In general, this heuristic is shown to be capable of yielding significant improvements. Its complexity is O(n^2), but there is no guarantee on how much its solutions may deviate from the optimal one.
There is another family of techniques that minimize the sum of the task costs by ordering the tasks based on their rank value, defined as (1-sel(v_i))/c(v_i), where sel(v_i) is the selectivity of v_i. The first examples of these techniques were initially proposed for optimizing queries containing UDFs, where dependency constraints between pairs of a join and a UDF are considered <cit.>. However, they can be applied to data flows by treating flow tasks as UDFs and performing straightforward extensions. For example, an extended version of <cit.>, also discussed in <cit.>, builds a flow incrementally in n steps instead of starting from a complete flow and performing changes. In each step, the task appended next is the one with the maximum rank value among those whose prerequisite tasks have all been included. This results in a greedy heuristic of O(n^2) time complexity.
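A compact sketch of this greedy, rank-based heuristic follows; the function name and argument layout are our own assumptions.

```python
# Greedy rank-based ordering: in each of the n steps, append the eligible
# task (all prerequisites placed) with the maximum rank (1 - sel) / cost.
def greedy_rank_order(tasks, cost, sel, deps):
    placed, order = set(), []
    while len(order) < len(tasks):
        eligible = [t for t in tasks if t not in placed
                    and all(a in placed for a, b in deps if b == t)]
        best = max(eligible, key=lambda t: (1 - sel[t]) / cost[t])
        order.append(best)
        placed.add(best)
    return order
```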
This heuristic has been extended by Kougka et al. <cit.> with techniques that leverage the query optimization algorithm for join ordering by Krishnamurthy et al. <cit.>, applying appropriate post-processing steps in order to yield novel and more efficient task ordering algorithms for data flows. In <cit.>, a similar rationale is followed, with the difference that the execution plan is built from the sink to the source task. Both proposals build linear plans, i.e., plans in the form of a chain with a single source and a single sink. These proposals for generic or traditional ETL data flows are essentially similar to the Chain algorithm proposed by Yerneni et al. <cit.> for choosing the order of accessing remote data sources in online data integration scenarios. Interestingly, in <cit.>, it is explained that such techniques are n-competitive, i.e., their plans can deviate from the optimal one by a factor of up to n.
The incurred performance improvements can be significant. Consider the example in Figure <ref>, and let the costs per single input tweet of the five steps be 1, 10, 1, 1, and 5 units, respectively, and the selectivities 1, 1, 0.1, 1, and 0.15, respectively. Then the average cost per initial tweet in Figure <ref> is 1+10+1+0.1+0.5=12.6, whereas the cost of the reordered flow in Figure <ref> is 1+1+5+1.5+0.15=8.65.
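The short script below, with task names abbreviated by us, reproduces these numbers under the sum-cost model: each task's unit cost is weighted by the product of the selectivities of all preceding tasks.

```python
from math import prod

def plan_cost(order, cost, sel):
    """Sum-cost of an ordering: cost of each task scaled by the
    selectivities of all tasks that precede it."""
    return sum(cost[t] * prod(sel[u] for u in order[:i])
               for i, t in enumerate(order))

cost = {"input": 1, "sentiment": 10, "filter": 1, "lookup": 1, "join": 5}
sel  = {"input": 1, "sentiment": 1, "filter": 0.1, "lookup": 1, "join": 0.15}

original  = ["input", "sentiment", "filter", "lookup", "join"]
optimized = ["input", "lookup", "join", "sentiment", "filter"]
print(plan_cost(original,  cost, sel))   # 12.6
print(plan_cost(optimized, cost, sel))   # 8.65
```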
In general, for ordering arbitrary flow tasks so as to minimize the sum of the task costs, any of the above solutions can be used. If the flow is small, exhaustive solutions are applicable; otherwise, the techniques in <cit.> seem to be capable of yielding the best plans.
Finally, minimizing the sum of the task costs also appears in multi-criteria proposals that additionally consider reliability, in the form of fault tolerance <cit.>. These proposals employ a further constraint in the objective function, denoted as the function g() (see the 2nd row in Table <ref>); here, g() bounds the number of faults that can be tolerated in a specific time period. The strategy for exploring the search space of different orderings extends the techniques proposed by Simitsis et al. <cit.>.
§.§.§ Techniques for Minimizing the Bottleneck Cost
Regarding the problem of minimizing the maximum task cost (3rd row in Table <ref>), which acts as the performance bottleneck, a Task Ordering mechanism was initially proposed for the parallel execution of online WSs represented as queries <cit.>. The rationale of this technique is to push the selective flow tasks (i.e., those with sel<1) to an earlier stage of the execution plan in order to prune the input dataset of each service. Based on the selectivity values, the output of a service may be dispatched to multiple other services executing in parallel or in sequence, with a worst-case time complexity of O(n^5). The problem is formulated in a way that renders it tractable, and the solution is accurate.
Another optimization technique employing the task ordering mechanism for online queries over WSs appears in <cit.>. The formulation in these proposals extends the one by Srivastava et al. <cit.> in that it also considers edge costs. This modification renders the problem intractable <cit.>. Its practical value is that edge costs naturally capture the data transmission between tasks in a distributed setting. The solution proposed by Tsamoura et al. <cit.> consists of a branch-and-bound optimization approach with advanced heuristics for early pruning; despite its exponential complexity, it is shown to apply to flows with hundreds of tasks, for reasonable probability distributions of vertex and edge costs.
The techniques for minimizing the bottleneck cost can be combined with those for minimizing the sum of the costs. More specifically, the pipelined tasks can be grouped together, and the corresponding sub-flow can be optimized according to the bottleneck cost metric; then, these groups of tasks can be optimized with respect to the sum of their costs. This essentially leads to a hybrid objective function that minimizes the sum of the costs over segments of pipelining operators, where each segment cost is defined according to the bottleneck metric. A heuristic combining the two metrics has appeared in <cit.>.
§.§.§ Techniques for Optimizing the Critical Path
A technique that considers the critical path and provides an accurate solution has appeared in <cit.>. This work has O(n^6) time complexity and was initially proposed for online queries in parallel execution environments, but it is also applicable to data flows. The strong point of this solution is that it can perform bi-objective optimization, combining the bottleneck and critical path criteria.
§.§.§ Techniques for Maximizing the Throughput
Reordering the filter operators of a workflow can be used to find an execution plan that maximizes throughput by leveraging pipelined parallelism. Such a technique has been presented by Deshpande et al. <cit.> for queries with tree-shaped constraints in a parallel execution environment, providing an accurate solution of O(n^3) time complexity. In this proposal, each task is assumed to be executed on a distinct node, where each node has a certain throughput capacity that should not be exceeded. The unique feature of this proposal is that it produces a set of plans that need to be executed concurrently in order to attain throughput maximization. The drawback is that it cannot handle arbitrary constraint graphs, which limits its applicability to generic data flows.
§.§.§ Task Cost Models
Orthogonally to the objective functions in Table <ref>, different cost models can be employed to derive c(v_i), the cost of the i-th task v_i. The important point is that a task cost model can be used as a component of any cost-based optimization technique, regardless of whether it was employed in the original work proposing that technique.
A common assumption is that c(v_i) depends on the volume of data processed by v_i, but this feature can be expressed in several ways:
* c(v_i) = cpi_i ·∏_j=1^|T_i^prec| sel_j : this cost model defines the cost of the i-th task as the product of (i) the cost per input data unit (cpi_i) and (ii) the product of the selectivities sel of all preceding tasks; T_i^prec is the set of all the tasks between the data sources and v_i. This cost model is explicitly used in proposals such as <cit.>.
* c(v_i)=rs(v_i) : In this case, the cost model is defined as the size of the results (rs) of v_i; it is used in <cit.>, where each task is a remote database query.
* c(v_i) = α_i· CPU(v_i) + β_i · IO(v_i) + γ_i · Ship(v_i): this cost model is a weighted sum of the three main cost components, namely the CPU, I/O, and data shipping costs (see the sketch after this list). Further, CPU(v_i) can be elaborated as cpi_i ·∏_j=1^|T_i^prec| sel_j (defined above) plus a startup cost. The I/O cost depends on the cost per input data unit to access secondary storage. The data communication cost Ship(v_i) depends on the size of the input of v_i, which, as explained earlier, also depends on the previous tasks and on the vertex selectivity sel_i; α, β, and γ are the weights. Such an elaborate cost model has been employed by Hueske et al. <cit.>.
* c(v_i) = proc(v_i) + part(v_i): this cost model, suggested by Simitsis et al. <cit.>, explicitly covers task parallelization and splits the cost of a task into the processing cost proc and the cost to partition and merge data part. The former is divided into a part that depends on the input size and a fixed one. The proposal in <cit.> treats the tasks in the flow that add recovery points or create replicas differently, by providing specific formulas for them.
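A sketch of the weighted-sum model above is given below; the component estimators, field names, and weight values are illustrative assumptions of ours.

```python
# Weighted-sum task cost: c(v) = alpha*CPU(v) + beta*IO(v) + gamma*Ship(v),
# with each component approximated as a linear function of the input size.
def task_cost(v, alpha=0.5, beta=0.3, gamma=0.2):
    cpu  = v["cpi"] * v["input_size"] + v.get("startup", 0.0)
    io   = v["io_per_unit"] * v["input_size"]
    ship = v["ship_per_unit"] * v["input_size"]
    return alpha * cpu + beta * io + gamma * ship

v = {"cpi": 0.2, "io_per_unit": 0.05, "ship_per_unit": 0.01,
     "input_size": 10_000, "startup": 50.0}
print(task_cost(v))
```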
§.§.§ Additional Remarks
Regarding the execution environment, since the task (re-)ordering techniques refer to the logical workflow execution plan level, they can be applied to centralized, parallel, and distributed flow execution environments alike. However, in parallel and distributed environments, the data communication cost needs to be considered. The difference is that, in the latter, this cost depends on both the sender and the receiver task and, as such, needs to be represented not as a component of the vertex cost but as a property of the edge cost.
Additionally, very few techniques, e.g., <cit.>, explicitly consider reorderings between single-input/output and multiple-input or multiple-output tasks; this type of optimization requires further investigation in the context of complex flow optimization.
Finally, none of the task ordering techniques discussed is adaptive, that is, none considers workflow re-optimization during the execution phase. In general, adaptive flow optimization is a subarea in its infancy. However, Böhm et al. <cit.> have proposed solutions for choosing when to trigger re-optimization, which, in principle, can be coupled with any cost-based flow optimization technique.
§.§ Task Introduction
Task introduction has been proposed for three reasons.
First, to achieve fault tolerance through the introduction of recovery points and replicator tasks in online ETLs <cit.>. For recovery points, a new node storing the current flow state is inserted into the flow in order to assist recovery from failures without recomputing the flow from scratch. Adding a recovery point at a specific place in the plan depends on a cost function that compares the projected recovery cost in case of failure against the cost of maintaining the recovery point. Additionally, the replicator nodes produce copies of specified sub-flows in order to tolerate local failures when no recovery points can be inserted, e.g., because the associated overhead would increase the execution time above a threshold. In both cases of task introduction, the semantics of the flow are immutable. The proposed technique extends the state space search in <cit.> after pruning the search space. The objective function employed is the constrained sum-cost one (2nd row in Table <ref>), where the constraint is on the number of places where a failure can occur. The cost model explicitly covers the recovery maintenance overhead (last case in Sec. <ref>). The key idea behind the pruning of the search space is to first apply task reordering and then detect all the promising places to add the recovery points based on heuristic rules. An example of the technique is shown in Figure <ref>; suppose that we examine the introduction of up to two recovery points. The two possible places are just after the Sort and Join tasks, respectively. Assume that the most beneficial place is the first one, denoted as RP_1. Then, given RP_1, RP_2 is discarded because it incurs a higher cost than re-executing the Join task.
Similarly to the recovery points above, the technique proposed by Huang et al. <cit.> introduces operations that copy intermediate data from transient nodes to primary ones, in a cluster containing both transient and primary cloud machines; the former can be reclaimed by the cloud provider at any time, whereas the latter remain allocated to the flow throughout its execution.
Second, task introduction has been employed by Rheinländer et al. <cit.> to automatically insert explicit filtering tasks when the user has not initially introduced them. This becomes possible with the sophisticated task profiling mechanism employed in that proposal, which allows the system to detect that some data is not actually needed. The goal is to optimize a sum-cost objective function, but the technique is orthogonal to any objective function aiming at performance improvement. For example, in Figure <ref>, we introduce a filtering task if the final report needs only a subset of the initial data, e.g., if it refers to a specific range of products.
Third, task introduction can be combined with Implementation Selection (Section <ref>). An example appears in <cit.>, where the purpose is to exploit the benefit of processing sorted records. To this end, that work explores the possibility of introducing new vertices, called sorters, and then choosing task implementations that assume sorted input; the overhead of inserting the new tasks is outweighed by the benefits of sort-based implementations. In Figure <ref>, we add such a sorter task just before the Join if a sort-based join implementation and report output is preferred. Proactively ordering data to reduce the overall cost has been used in traditional database query optimization <cit.>, and it seems to be profitable for ETL flows as well.
Finally, all three techniques can be combined; in the example, they can all apply simultaneously, yielding the complete plan in the figure.
§.§ Task Removal
A set of optimization proposals support the idea of removing a task or a set of tasks from the workflow execution plan, without changing the semantics, in order to improve performance. These proposals mostly target offline scientific workflows, where it is common to reuse tasks or sub-flows from previous workflows without necessarily examining whether all the included tasks are actually necessary or whether some results are already present. Three techniques adopt this rationale <cit.>; they are discussed in turn.
The idea of Rheinländer et al. <cit.> is to remove one or multiple tasks until the workflow consists only of tasks that are necessary for the production of the desired output. This implies that the execution result dataset remains the same regardless of the changes applied. It aims to protect users that have carelessly copied data flow tasks from previous flows. In Figure <ref>, we see that, initially, the example data flow contains an Extract Dates task, which is not actually necessary.
The heuristic of Deelman et al. <cit.> has been proposed for a parallel execution environment and is one of the few dynamic techniques allowing re-optimization of the workflow during its execution. At runtime, it checks whether any intermediate results already exist at some node, thus making part of the flow obsolete. Both <cit.> and <cit.> are rule-based and do not directly target an objective function.
Another approach to the task removal mechanism is to detect duplicate tasks, i.e., tasks performing exactly the same operation, and keep only a single copy in the execution plan <cit.>. Such duplicates may be caused by carelessly combining existing smaller flows from a repository, e.g., myExperiment (<www.myexperiment.org>) in bio-informatics. A necessary condition to ensure that no precedence violations occur is that these tasks must be free of dependency constraints, which is checked with the help of the task schemata. Such a heuristic has O(n^2) time complexity.
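A possible hash-based realization of duplicate detection is sketched below; the task signatures and field names are assumptions made for illustration, and hashing brings the pairwise O(n^2) comparison down to roughly O(n).

```python
# Duplicate-task elimination: tasks with identical signatures (operation
# plus parameters) collapse onto the first occurrence; downstream consumers
# of a removed task are rewired to the surviving copy.
def remove_duplicates(tasks):
    seen, kept, removed = {}, [], {}
    for t in tasks:
        sig = (t["op"], tuple(sorted(t["params"].items())))
        if sig in seen:
            removed[t["name"]] = seen[sig]   # rewire consumers to survivor
        else:
            seen[sig] = t["name"]
            kept.append(t)
    return kept, removed

tasks = [{"name": "f1", "op": "filter", "params": {"lang": "en"}},
         {"name": "f2", "op": "filter", "params": {"lang": "en"}}]
kept, removed = remove_duplicates(tasks)
print(removed)   # {'f2': 'f1'}
```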
§.§ Task Merge
Task Merge has also been employed to improve the performance of the workflow execution plan. The main technique is to apply rewriting rules that merge tasks with similar functions into one bigger task. There are three techniques in this group, all tailored to a specific setting; as such, it is unclear whether they can be combined.
First, in <cit.>, tasks that encapsulate invocations to an underlying database are merged so that fewer (and more complex) invocations take place. This rule-based heuristic has been proposed for business processes, for which it is common to access various data stores, and such invocations incur a large time overhead.
Second, a related technique has been proposed for SQL statements in commercial data integration products <cit.>. The rationale is to group the SQL statements into a bigger query in order to push the task functionality to the best processing engine. Both approaches presented in <cit.> derive the necessary information about the functionality of each task with the help of task profiling and produce larger queries using standard database technology. For example, instead of processing a series of SQL queries to transform data, it is preferable to create a single bigger query. As previously, the optimization is a heuristic that does not explicitly target an objective function. A generalization of this idea to languages beyond SQL is presented by Simitsis et al. <cit.>, and a programming language translator is described by Jovanovic et al. <cit.>.
Third, Harold et al. <cit.> present a heuristic, non-exhaustive solution for merging MapReduce jobs. Merging occurs at two levels: first, transforming MapReduce jobs into Map-only jobs is attempted; then, sharing common Map or Reduce tasks is investigated. These two aspects are examined with the help of a two-phase heuristic technique.
Finally, in the optimizations in <cit.>, which rely on a state space search as described previously, adjacent tasks that should not be separated may be grouped together during optimization. The aim of this type of merger is not to produce a flow execution plan with fewer and more complex tasks (i.e., no actual task merge optimization takes place), but to reduce the search space so that optimization is sped up; after optimization, the merged tasks are split again.
§.§ Task Decomposition
An advanced optimization mechanism is Task Decomposition, according to which the operations of a task are split across multiple tasks, resulting in a modification of the set V of vertices. This mechanism has appeared in <cit.> as a pre-processing step before task ordering takes place. Its advantage is that it opens up opportunities for ordering, i.e., it does not optimize an objective function on its own but enables more profitable task orderings.
Task decomposition is also employed by Simitsis et al. <cit.>. In these proposals, complex analysis tasks, such as the sentiment analysis in the previous examples, can be split into a sequence of finer-granularity tasks, such as tokenization and part-of-speech tagging.
Note that both these techniques are tightly coupled to the task implementation platform assumed.
§.§ Task Implementation Selection
A set of optimization techniques target the Implementation Selection mechanism. At a high level, the problem is that there exist multiple equivalent candidate implementations for each task, and we need to decide which ones to employ in the execution plan. For example, a task encapsulating a call to a remote WS can contact multiple equivalent WSs, or a task may be implemented to run both in single-machine mode and as a MapReduce program.
These techniques typically require as input metadata the vertex cost of each task implementation alternative. Suppose that, for each task, there are m alternatives. This leads to a total of O(m^n) combinations; thus, a key challenge is to cope with the exponential search space. In general, the number of alternatives per task may differ, and the total number of combinations is the product of these numbers. For example, in Figure <ref>, there are four and three alternatives (Impl_1, ..., Impl_n) for the Sentiment Analysis and Lookup Product tasks, respectively, corresponding to twelve combinations.
It is important to note that, conceptually, the choice of the implementation of each task is orthogonal to decisions on task ordering and the rest of the high-level optimization mechanisms. As such, the techniques in this section can be combined with techniques from the previous sections.
A brute-force, and thus exponential-complexity, approach to finding the optimal physical implementation of each flow task before its execution has appeared in <cit.>. This approach models the problem as a state space search and, although it assumes that the sum-cost objective function is to be optimized, it can support other objective functions too. An interesting feature of this solution is that it explicitly explores the potential benefit of processing sorted data. Also, the ordering and task introduction algorithm in <cit.> allows for choosing parallel flavors of tasks. The parallel flavors, apart from cloning the tasks as many times as the decided degree of partitioned parallelism, explicitly consider issues such as splitting the input data, distributing it across all clones, and merging all their outputs. These issues are reflected in an elaborate cost function, as mentioned previously, which is used to decide whether parallelization is beneficial.
In addition to the optimization techniques above, there is a set of multi-objective approaches to Implementation Selection. These multi-objective heuristics, apart from the vertex cost, require further metadata that depend on the specified optimization objectives. For example, several multi-objective optimization approaches have been proposed for flows where each task is essentially an invocation of an online WS that may not always be available; in such settings, the aim of the optimizer is to select the best service for each service type, taking into account both performance and availability metadata.
Three proposals targeting this specific environment are <cit.>. To achieve scalability, each task is checked in isolation, resulting in O(nm) time complexity, but at the expense of finding only locally optimal solutions. Kyriazis et al. <cit.> consider availability, performance, and cost for each task. As initial metadata, scalar values for each objective and for each candidate service are assumed to be in place.
The main focus of the proposed solution is (i) on normalizing and scaling the initial values for each of the objectives and (ii) on devising an iterative improvement algorithm for making the final decision for each task. The multi-objective function either optimizes a single criterion under constraints on the others or optimizes all the objectives at the same time. However, in both cases, no optimality guarantees (e.g., finding a Pareto optimal solution) are provided.
The proposal in <cit.> similarly does not guarantee Pareto optimal solutions. It considers performance, availability, and reliability for each candidate WS, where each criterion is weighted and contributes to a single scalar value, according to which services are ordered. The notion of reliability in this proposal is based on trustworthiness. <cit.> is another service selection proposal that considers three objectives, namely performance, monetary cost, and reliability in terms of successful execution. The service metadata are normalized, and the proposed technique employs a max-min heuristic that selects a service based on its smallest normalized value. An additional common feature of the proposals in <cit.> is that no objective function is explicitly targeted.
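A sketch of this per-task, weighted-criteria selection pattern follows; the criteria names, weights, and normalization scheme are assumptions chosen for illustration rather than the exact procedures of the cited works.

```python
# O(nm) per-task service selection: normalize each criterion over the m
# candidates of a task, scalarize with weights, and pick the best candidate.
def normalize(values):
    lo, hi = min(values), max(values)
    return [1.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def select_services(candidates, weights):
    """candidates: task -> list of dicts with latency/availability/reliability."""
    plan = {}
    for task, options in candidates.items():
        # Lower latency is better, so invert it after normalization.
        lat = [1 - x for x in normalize([o["latency"] for o in options])]
        av  = normalize([o["availability"] for o in options])
        rel = normalize([o["reliability"] for o in options])
        scores = [weights[0] * l + weights[1] * a + weights[2] * r
                  for l, a, r in zip(lat, av, rel)]
        plan[task] = options[max(range(len(options)), key=scores.__getitem__)]
    return plan
```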
Another multi-objective approach to choosing the best implementation of each task consists of linear-complexity heuristics <cit.>. The main value of these heuristics is that they are designed to be applied on the fly, thus forming one of the few existing adaptive data flow optimization proposals. Additionally, the technique proposed by Braga et al. <cit.> extends the task ordering approach in <cit.> so that, for each task, the most appropriate implementation is first selected. None of these proposals employs a specific objective function either.
Finally, multi-objective WS selection can be performed with the help of ant colony optimization algorithms; an example of applying this technique to select among multiple candidate WS instantiations, in a setting where the workflows mainly consist of a series of remote WS invocations, appears in <cit.>, which is further extended by Tao et al. <cit.>.
Based on the above descriptions, two main observations can be drawn regarding the majority of the techniques. First, they address a multi-objective problem. Second, they are proposed for a WS application domain. The latter implies that transferring the results to data flows where tasks exchange big volumes of data directly may not be straightforward.
§.§ Execution Engine Selection
The techniques in this category focus on choosing the best execution engine for the data flow tasks in distributed environments where multiple options exist. For example, assume that the sentiment analysis in our running example can take place either on a DBMS server or on a MapReduce cluster. As previously, for the techniques using this mechanism, the vertex cost of each task on each candidate execution engine is a necessary piece of metadata for the optimization algorithm. Also, the corresponding techniques are orthogonal to optimizations referring to the high-level execution plan aspects.
For those tasks that can be executed by multiple engines, an exhaustive solution can be adopted for optimally allocating the tasks of a flow to different execution engines in order to meet multiple objectives. The drawback is that an exhaustive solution generally does not scale to a large number of flow tasks and execution engines, similarly to the case of task implementation selection. To overcome this, a set of heuristics can be used to prune the search space <cit.>. This technique aims to improve not only the performance but also the reliability of ETL workflows in terms of fault tolerance. Additionally, a multi-objective solution for optimizing monetary cost and performance is to check all the possible execution plans that satisfy a specific time constraint; this approach cannot scale to execution plans with a large number of operators. The objective functions are those mentioned in Section <ref>. The same approach to deciding the execution engine can be used to choose the task implementation in <cit.>.
Anytime single-objective heuristics for choosing among multiple engines have been proposed by Kougka et al. <cit.>. These heuristics take into account, apart from vertex costs, the edge costs and constraints on the capability of an engine to execute certain tasks, and they are coupled with a pseudo-polynomial dynamic programming algorithm that can find the optimal allocation for a specific form of DAG shape, namely linear flows. The objective function is the minimization of the sum of the costs of both tasks and edges, extending the definition in Table <ref>: min ∑ c(v_i,e_ij), where i,j=1… n.
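For intuition, the following sketch implements such a dynamic program for a linear flow; the engine names and cost tables are illustrative assumptions.

```python
# Optimal engine per step of a linear flow t_1 -> ... -> t_n, minimizing
# vertex costs c[i][engine] plus transfer costs x[e][e'] between consecutive
# engines; O(n * |engines|^2).
def allocate_chain(n, engines, c, x):
    best = {e: (c[0][e], [e]) for e in engines}      # engine -> (cost, path)
    for i in range(1, n):
        nxt = {}
        for e in engines:
            cost, path = min(((pc + x[pe][e], pp)
                              for pe, (pc, pp) in best.items()),
                             key=lambda t: t[0])
            nxt[e] = (cost + c[i][e], path + [e])
        best = nxt
    return min(best.values(), key=lambda t: t[0])

engines = ["dbms", "mapreduce"]
c = [{"dbms": 4, "mapreduce": 9},          # vertex costs per task and engine
     {"dbms": 8, "mapreduce": 3}]
x = {"dbms": {"dbms": 0, "mapreduce": 2},  # edge (transfer) costs
     "mapreduce": {"dbms": 2, "mapreduce": 0}}
print(allocate_chain(2, engines, c, x))    # (9, ['dbms', 'mapreduce'])
```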
A different approach to engine selection has appeared in commercial tools <cit.>. There, the default option is for ETL operators to execute on a specialized data integration server, unless a heuristic decides to delegate the execution of some of the tasks to the underlying databases, after merging these tasks and reformulating them as a single query.
Finally, the engine selection mechanism can be employed in combination with the configuration of execution engine parameters. An example technique is presented by Huang et al. <cit.>, where the initial optimization step decides the best type of execution engine and, afterwards, the configuration parameters are defined, as analyzed in Section <ref>. This technique is extended by Huang et al. <cit.>, which focuses on deciding on the usage of a specific type of cloud machine, namely spot instances. The problem of whether to employ spot instances in clouds is also considered by Zhou et al. <cit.>.
§.§ Execution Engine Configuration
This type of flow optimization has recently received attention due to the increasing number of parallel data flow platforms, such as Hadoop and Spark. The Engine Configuration mechanism can serve as a complementary component of an optimization technique that applies implementation or engine selection and, in general, can be combined with the other optimization mechanisms. For example, the rationale of the heuristic presented by Kumbhare et al. <cit.> (based on variable-sized bin packing) is to decide the best implementation for each task and then dynamically configure the resources for executing the tasks, such as the number of CPU cores allocated. A common feature of all the solutions in this section is that they deal with parallelism, but from different perspectives depending on the exact execution environment.
A specific type of engine configuration, namely deciding the degree of parallelism in MapReduce-like clusters for each task along with parameters such as the number of slots on each node, appears in <cit.>. The time complexity of this optimization technique is exponential. The procedure is repeated for each different type of machine (i.e., each different type of execution engine), assuming a context where several heterogeneous clusters are at the user's disposal.
Both of these techniques have been proposed for cloud environments and aim to optimize multiple criteria.
In general, execution engines come with a large number of configuration parameters, and fine-tuning them is a challenging task; for example, MapReduce systems may have more than one hundred configuration parameters. The proposal in <cit.> aims to provide a principled approach to their configuration. Given the number of MapReduce slots and hardware details, the proposed algorithm initially checks all combinations of four key parameters, such as the number of map and reduce waves and whether to use compression or not. Then, the values of a dozen other configuration parameters with significant impact on performance are derived. The overall goal is to reduce the execution time, taking into account the pipelined nature of MapReduce execution.
An alternative configuration technique is employed by Lim et al. <cit.>, which leverages the what-if engine initially proposed by Herodotou et al. <cit.>. This engine is responsible for configuring execution settings, such as memory allocation and the number of map and reduce tasks, by answering questions on real and hypothetical input parameters using a random search algorithm. What-if analysis is also employed by <cit.> for optimal memory configuration. The distinctive feature of this proposal is that it is dynamic, in the sense that it can take decisions at runtime, leading to task migrations.
In a more traditional ETL setting, apart from the optimizations described previously, an additional mechanism has been proposed by Simitsis et al. <cit.> in order to define the degree of parallelism. Specifically, due to the large size of the data that a workflow has to process, the data is partitioned and processed following the intra-operator parallelism paradigm. Parallelism is considered profitable whenever the overhead of data partitioning and merging does not exceed the expected benefits, as the sketch below illustrates. Sometimes it might also be worth investigating whether splitting an input dataset into partitions could reduce the latency of ETL flow execution on a single server; an example study can be found in <cit.>.
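A toy version of this profitability test is sketched below, assuming a processing cost that divides evenly across k partitions and a linear partition/merge overhead; both modeling choices and the constants are illustrative assumptions.

```python
# Choose the degree of parallelism k that minimizes
# proc_cost / k  +  k * part_overhead_per_way,
# i.e., parallelize only up to the point where the partition/merge
# overhead stops being outweighed by the processing savings.
def best_degree(proc_cost, part_overhead_per_way, max_k=32):
    return min(range(1, max_k + 1),
               key=lambda k: proc_cost / k + k * part_overhead_per_way)

print(best_degree(proc_cost=1000.0, part_overhead_per_way=10.0))  # 10
```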
Another approach to choosing the degree of parallelism appears in <cit.>, where a set of greedy and simulated annealing heuristics is proposed. This proposal considers two objectives, performance and monetary cost, assuming that resources are offered by a public cloud at a certain price. The objective function targets either the minimization of the sum of the task costs under a defined monetary budget, or the minimization of the monetary cost under a constraint on the runtime. Additionally, both metrics can be minimized simultaneously using an appropriate objective function, which expresses the speedup as the budget increases.
Finally, the technique in <cit.> proposes a set of optimizations at the chip processor level; more specifically, it proposes heuristics to drive compiler decisions on whether to execute low-level commands in a pipelined fashion or to employ SIMD (single instruction, multiple data) parallelism. Interestingly, these optimizations are coupled with traditional database-like ones at a higher level, such as pushing selections as early as possible.
§ EVALUATION APPROACHES
Here, we describe the evaluation methods used in the proposed works, which we divide into three categories.
The first category includes optimization proposals that are theoretical in nature and whose results are not accompanied by experiments; examples are <cit.>. The second category consists of optimizations that have found their way into data flow tools; the only examples in this category are <cit.>.
The third category covers the majority of the proposals, for which an experimental evaluation has been provided. We are mostly interested in three aspects of such experiments, namely the workflow type used, the data used to instantiate the workflows, and the implementation environment. In Table <ref>, the experimental evaluation approaches are summarized, along with the maximum DAG size (in number of tasks) employed. The implementation environment defines the execution environment of a workflow during the evaluation procedure. This environment can be a real-world one, which involves either the customization of an existing system to support the proposed optimization solutions or the design of a prototype system, i.e., a new platform, possibly designed from scratch and tailored to support the evaluation. A common alternative is the simulation of a real execution environment. Discussing the pros and cons of each approach is out of our scope but, in general, simulations allow experimentation with a broader range of flow types, whereas real experiments can better reveal the actual benefits of optimizations in practice.
As shown in Table <ref>, the majority of the optimization techniques have been evaluated by executing workflows in a simulated environment.
The real environments that have been employed are as follows. The techniques in <cit.>, which focused on (complex) ETL data flows, have been evaluated with the help of extensions to the Pentaho Data Integration (Kettle) tool, a commercial database, and a MapReduce engine. The proposals in <cit.> have been tested in Stratosphere, a Big Data analytics platform <cit.>. A MapReduce-inspired prototype, called Cumulon, is used for the evaluation of the techniques in <cit.>. Other MapReduce extensions have been employed in <cit.>. To evaluate techniques initially proposed for flows consisting of calls to WSs, both ad-hoc prototypes <cit.> and extensions to engines such as Taverna <cit.> and WebSphere Process Server <cit.> have been used. Part of the evaluation of <cit.> involved running Pegasus on a public cloud. The techniques in <cit.> and <cit.> are part of broader prototype systems, called Tupleware and ADP, respectively. Finally, the early works on database queries including UDFs were implemented in a DBMS <cit.>.
The workflows considered are either synthetic or real-world. In the former case, arbitrary DAGs are produced, e.g., based on the guidelines in <cit.>. In the latter case, the flow structure follows real-world cases. For example, the evaluation of <cit.> is based on real-world scientific workflows, such as the Montage and CyberShake ones described in <cit.>. Other examples of real-world workflows are derived from TPC-H queries (used for some of the evaluation experiments in <cit.>, along with real-world text mining and information extraction examples). In <cit.>, the evaluation of the optimization proposals is based on workflows that represent arbitrary, real-world data transformations and text analytics. The case studies in <cit.> include standard analytical algorithms, such as PageRank, k-means, logistic regression, and naive Bayes.
The datasets used for workflow execution may affect the evaluation results, since they determine the range of the statistical metadata considered. The processed datasets can be either synthetic or real ones extracted from repositories, such as the Twitter repository with sample data of real tweets. Examples of real datasets used in <cit.> include biomedical texts, a set of Wikipedia articles, and datasets from DBpedia. Additionally, Braga et al. <cit.> have evaluated the proposed optimization techniques using real data extracted from <www.conference-service.com>, <www.accuweather.com>, and <www.bookings.com>. Typically, when standard scientific flows are employed, the datasets used are also fixed; however, in <cit.>, a wide range of artificially created metadata has been used to cover more cases.
Finally, for many techniques, only small data flows comprising no more than 15 nodes were used, or the information regarding the size of the flows could not be derived. In the latter case, this might be because well-known algorithms have been used (e.g., k-means in <cit.> and matrix multiplication in <cit.>) without explaining how these algorithms are internally translated to data flows. All experiments with workflows comprising hundreds of tasks used synthetic datasets.
§ DISCUSSION ON FINDINGS
Data flow optimization is a research area with high potential for further improvements given the increasing role of data flows in modern data-driven applications. In this survey, we have listed more than thirty research proposals, most of which have been published after 2010.
In the previous sections, we mostly focused on the merits and the technical details of each proposal. They can lead to performance improvements, and more importantly, they have the potential to lift the burden of manually fixing all implementation details from the data flow designers, which is a key motivation for automated optimization solutions. In this section, we complement any remarks made before with a list of additional observations, which may also serve as a description of directions for further research:
* In principle, the techniques described previously can serve as building blocks toward more holistic solutions. For instance, task ordering can, in principle, be combined with i) additional high-level mechanisms, such as task introduction, removal, merge, and decomposition; and ii) low-level mechanisms, such as engine configuration, thus yielding added benefits. The main issue arising when mechanisms are combined is the increased complexity. An approach to mitigating the complexity is a two-phase approach, as commonly happens in database queries. Another issue is to determine which mechanism should be explored first. For some mechanisms, this is straightforward, e.g., decomposition should precede task ordering and task removal should be placed afterwards. But for mechanisms such as configuration, this is unclear, e.g., whether it is beneficial to configure low-level details before higher-level ones remains an open issue.
* In general, there is little work on low-complexity, holistic, and multi-objective solutions.
Toward this direction, Simitsis et al. <cit.> consider more than one objective and combine mechanisms at both high and low levels of execution plan detail; for instance, both task ordering and engine configuration are addressed in the same technique. But clearly more work is needed here. In general, most of the techniques have been developed in isolation, each one typically assuming a specific setting and targeting a subset of optimization aspects. This, together with the lack of a commonly agreed benchmark, makes it difficult to understand how exactly they compare to each other, how the various proposals can be combined in a common framework, and how they interplay.
* There seems to be no common approach to evaluating the optimization proposals. Some proposals have not been adequately tested in terms of scalability, since they have considered only small graphs. In some data flow evaluations, workloads inspired by benchmarks such as TPC-DI/DS have been employed, but as most of the authors report as well, it is doubtful whether these benchmarks can completely capture all dimensions of the problem. There is a growing need for the development of systematic and broadly adopted techniques to evaluate optimization techniques for data flows.
* A significant part of the techniques covered in this survey have neither been incorporated in tools nor exploited commercially. Most of the optimization techniques described here, especially those regarding the high-level execution plan details, have not been implemented in real data flow systems, apart from very few exceptions, as explained earlier. Hence, the full potential and practical value of the proposals have not been investigated under actual execution conditions, despite the fact that the evaluation results thus far show improvements of several orders of magnitude over non-optimized plans.
* A plethora of objective functions and cost models have been investigated, which are, to a large extent, compatible with each other, even though the original proposals examined them in isolation.
However, it is unclear whether any of these cost models can accurately capture aspects such as the execution time of parallel data flows, which are very common nowadays. A more sophisticated cost model should take into account sequential, pipelined, and partitioned execution in a unified manner, essentially combining the sum, bottleneck, and critical path cost metrics.
* Developing adaptive solutions that are capable of revising the flow execution plan on the fly is one important open issue, especially for online, continuous, and stream processing. Also, very few optimization techniques consider the cost of the graph edges. Not considering edge metadata does not fully reflect real data flow execution in distributed settings, where the cost of transmitting data depends on both the sender and the receiver.
* In this survey, we investigated single-flow optimizations. Optimizing multiple flows simultaneously is another area requiring attention. An initial effort is described by Jovanovic et al. <cit.>, which builds upon the task ordering solutions of <cit.>.
* There is early work on statistics collection <cit.>, but clearly, there is more to be done here given that without appropriate statistics, cost-based optimization becomes problematic and prone to significant errors.
* On the other hand, a different school of thought advocates that, in contrast to relational databases, automated optimization cannot help in practice in flow optimization due to flow complexity and the increased difficulty of maintaining flow statistics and developing accurate cost models. Based on that view, a number of commercial flow execution engines (e.g., ETL tools), instead of offering a flow optimizer, provide users with tips and best practices. No doubt, this is an interesting point, but we consider this category out of the scope of this work.
Given the above observations and the trend of developing new solutions in recent years, data flow optimization seems to be a technology in evolution rather than an area where the most significant problems have been resolved. Moreover, providing solutions to all these problems is more likely to yield significantly different and more powerful new approaches to data flow optimization, rather than delta improvements on existing solutions.
§ ADDITIONAL ISSUES IN DATA-CENTRIC FLOW OPTIMIZATION
Additional issues are split into four parts. First, we describe optimizations enabled in current state-of-the-art parallel data flow systems, which, however, cannot cover arbitrary DAGs and tasks and, as such, have not been included in the previous sections. Next, we discuss techniques that, although they do not perform optimization on their own, could, in principle, facilitate optimization. We then provide a brief overview of optimization solutions for the WEP execution layer, complementing the discussion of existing scheduling techniques in Section <ref>. We conclude with a brief note on implementing the optimization techniques in existing systems.
§.§ Optimization In Massively Parallel Data Flow Systems
A specific form of data flow systems are massively parallel processing (MPP) engines, such as Spark and Hadoop. These data flow systems can scale to a large number of computing nodes and are specifically tailored to big data management taking care of parallelism efficiency and fault tolerance issues. They accept their input in a declarative form (e.g., PigLatin <cit.>, Hive, SparkSQL), which is then automatically transformed into an executable DAG. Several optimizations take place during this transformation.
We broadly classify these optimizations in two categories. The first category comprises database-like optimizations, such as pushing filtering tasks as early as possible, choosing the join implementation, and using index tables, corresponding to task ordering and implementation selection, respectively. This can be regarded as a direct technology transfer from databases to parallel data flows and to date, these optimizations do not cover arbitrary user-defined transformations.
The second category is specific to the parallel execution environment, with a view to minimizing the amount of data read from disk, transmitted over the network, and processed. For example, Spark groups pipelined tasks into larger units (called stages) to benefit from this type of parallelism. Also, it leverages cached data and columnar storage, performs compression, and reduces the amount of data transmitted during data shuffling through early partial aggregation, when this is possible. Grouping tasks into pipelining stages is a form of runtime scheduling.
Early partial aggregation can be deemed a task introduction technique. The other forms of optimization (leveraging cached data, columnar storage, and compression) can be deemed specific forms of implementation selection.
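To make the early-partial-aggregation point concrete, here is a minimal PySpark sketch (assuming a local PySpark installation; the data is made up) contrasting groupByKey, which shuffles every record, with reduceByKey, which pre-aggregates within each partition before the shuffle:

```python
# groupByKey ships every (key, value) pair across the shuffle;
# reduceByKey pre-sums within each partition first (early partial
# aggregation), so at most one record per key per partition moves.
from pyspark import SparkContext

sc = SparkContext("local[2]", "partial-aggregation-demo")
pairs = sc.parallelize(
    [("a", 1), ("b", 1), ("a", 1), ("a", 1), ("b", 1)], numSlices=2)

grouped = pairs.groupByKey().mapValues(sum)      # aggregate after shuffle
reduced = pairs.reduceByKey(lambda x, y: x + y)  # map-side combine first

assert dict(grouped.collect()) == dict(reduced.collect()) == {"a": 3, "b": 2}
sc.stop()
```

Both jobs produce the same result; the difference lies purely in how much data crosses the shuffle boundary.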
Flink is another system employing optimizations, but it has not yet incorporated all the (advanced) optimization proposals in its predecessor projects, as described in <cit.>. The proposal in <cit.> is another example that proposes optimizations for a specific operator, namely ParFOR.
We do not include these techniques in Tables <ref> and <ref> because they apply to specific DAG instances and have not matured enough to benefit generic data flows including arbitrary tasks.
§.§ Techniques Facilitating Data-centric Flow Optimization
Statistical metadata, such as cost per task invocation and selectivity, play a significant role in data flow optimization as discussed previously. <cit.> deal with statistics collection and modeling the execution cost of workflows; such issues are essential components in performing sophisticated flow optimization. <cit.> analyze the properties of tasks, e.g., multiple-input vs single-input ones; such properties along with dependency constraint information complement statistics as the basis on top of which optimization solutions can be built.
Some techniques allow for choosing among multiple implementations of the same task using ontologies, rather than performing cost-based or heuristic optimization <cit.>. In <cit.>, improving the flow with the help of user interactions is discussed. Additionally, in <cit.>, different scheduling strategies to account for data shipping between tasks are presented, without, however, proposing an optimization algorithm that decides which strategy should be employed.
Apart from the optimizations described in Section <ref>, the proposal in <cit.> also considers the objective of data freshness. To this end, it optimizes the activation time of ETL data flows, so that changes in the data sources are reflected in the state of a Data Warehouse within a time window. Nevertheless, this type of optimization objective leads to techniques that do not optimize the flow execution plan per se, which is the main topic of this survey.
For the evaluation of optimization proposals, relevant benchmarks are proposed in <cit.>. Finally, in <cit.>, the significant role of correct parameter configuration in large-scale workflow execution is identified and relevant approaches are proposed. Proper tuning of the data flow execution environment is orthogonal and complementary to the optimization of the flow execution plan.
§.§ On Scheduling Optimizations in Data-centric Flows
In general, data flow execution engines tend to have built-in scheduling policies, which are not configured on a per-flow basis. In principle, such policies can be extended to take into account the specific characteristics of data flows, where the placement of data and the transmission of data across tasks, represented by the DAG edges, require special attention <cit.>. For example, in <cit.>, a set of scheduling strategies for improving performance through the minimization of memory consumption and of the execution time of Extract-Transform-Load (ETL) workflows running on a single machine is proposed. As it is difficult to pipeline data in ETL flows due to the blocking nature of some ETL tasks, the authors suggest splitting the workflow into several sub-flows and applying different scheduling policies where necessary. Finally, in <cit.>, the placement of data management tasks is decided according to the memory availability of resources, taking into account the trade-off between co-locating tasks and the increased memory consumption of running multiple tasks on the same physical computational node.
A large set of scheduling proposals target specific execution environments. For example, the technique in <cit.> targets shared-resource environments. Proposals such as <cit.> are specific to grid and cloud data-centric flow scheduling. <cit.> discusses optimal time schedules given a fixed allocation of tasks to engines, provided that the tasks belong to a linear workflow.
Also, a set of optimization algorithms for scheduling flows based on deadline and time constraints is analyzed in <cit.>. Another proposal of flow scheduling optimization is presented in <cit.> based on soft deadline rescheduling in order to deal with the problem of fault tolerance in flow executions. In <cit.>, an optimization technique for minimizing the performance fluctuations that might occur by the resource diversity, which also considers deadlines, is proposed. Additionally, there is a set of scheduling techniques based on multi-objective optimization, e.g., <cit.>.
§.§ On Incorporating Optimization Techniques into Existing Systems
Without loss of generality, there are two main ways of describing the data flow execution plan in existing tools and prototypes: either in an appropriately formatted text file or using internal representations in the code. These two approaches are exemplified in systems like Pentaho Kettle, Spark, Taverna, and numerous others. In the former case, an optimization technique can be inserted as a component that processes this text file and produces a different execution plan. As an example, in Pentaho, each task and each graph edge are described as different XML elements in an XML document. Then, a technique that performs task reordering can consist of an independent programming module that parses the XML file and modifies the edge elements. On the other hand, systems such as Spark transform the flow submitted by the user into a DAG, but without exposing a high-level representation to the end user. The internal optimization component, called Catalyst, then performs modifications to the internal code structure that captures the executable DAG. Extending the optimizer to add new techniques, such as those described in this survey, requires using the Catalyst extensibility points. The second approach seems to require more effort from the developer and to be more intrusive.
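To illustrate the first, text-based route, the sketch below reorders tasks in a simplified, hypothetical Kettle-style XML plan. The element names mimic but do not exactly reproduce the real Kettle schema, and the swap is purely structural; a real optimizer would first check that the reordering is semantically valid:

```python
# Hypothetical, simplified Kettle-style plan: tasks are implied by hop
# endpoints; reordering amounts to rewriting <from>/<to> text nodes.
import xml.etree.ElementTree as ET

plan = ET.fromstring("""
<transformation>
  <order>
    <hop><from>read_csv</from><to>join</to></hop>
    <hop><from>join</from><to>filter_rows</to></hop>
    <hop><from>filter_rows</from><to>write_out</to></hop>
  </order>
</transformation>""")

def swap_tasks(root, a, b):
    """Exchange two tasks by renaming the endpoints of every hop."""
    for node in root.iter():
        if node.tag in ("from", "to"):
            node.text = {a: b, b: a}.get(node.text, node.text)

swap_tasks(plan, "join", "filter_rows")   # move the filter before the join
print(ET.tostring(plan, encoding="unicode"))
```

The engine itself remains untouched: the module consumes one serialized plan and emits another, which is exactly what makes this route minimally invasive.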
§ RELATED WORK
To the best of our knowledge, there is no prior survey or overview article on data flow optimization; however, there are several surveys on related topics.
Related work falls into two categories: (i) surveys on generic DAG scheduling and on narrow-scope scheduling problems, which are also encountered in data flow optimization; and (ii) overviews of workflow systems.
DAG scheduling is a persistent topic in computing and has received renewed attention due to the emergence of Grid and cloud infrastructures, which allow for the usage of remote computational resources. For such distributed settings, the proposals tend to refer to the WEP execution layer and to focus on mapping computational tasks while ignoring the data transfer between them, or they assume a non-pipelined mode of execution that does not fit well into the data-centric flow setting <cit.>. A more recent survey of task mapping is presented in <cit.>, which discusses techniques that assign tasks to resources for efficient execution in Grids under demanding requirements and resource allocation constraints, such as the dependencies between the tasks, resource reservation, and so on. In <cit.>, an overview of the pipelined workflow time scheduling problem is presented, where the problem formulation targets streaming applications. In order to compare the effectiveness of the proposed optimization techniques, they present a taxonomy of workflow optimization techniques taking into account workflow characteristics, such as the structure of the flow (i.e., linear, fork, tree-shaped DAGs), the computation requirements, the size of data to be transferred between tasks, the parallel or sequential task execution mode, and the possibility of executing task replicas. Additionally, the taxonomy takes into consideration a performance model that describes whether the optimization aims at a single objective or multiple objectives, such as throughput, latency, reliability, and so on. However, in data-centric flows, tasks are activated upon receipt of input data and not as a result of an activation message from a controller, as assumed in <cit.>. None of the surveys above provides a systematic study of the optimizations at the WEP generation layer.
The second class of related work deals with a broader-scope presentation of workflow systems. The survey in <cit.> aims to present a taxonomy of workflow system features and capabilities to allow end users to choose the best option for each application. Specifically, the taxonomy is inspired by the workflow lifecycle and categorizes the workflow systems according to the lifecycle phase they are capable of supporting. However, the optimizations considered suffer from the same limitations as those in <cit.>. Similarly, in <cit.>, an evaluation of current workflow technology is described, considering both scientific and business workflow frameworks. The control and data flow mechanisms and capabilities of workflow systems both for e-science, e.g., Taverna and Triana, and business processes, e.g., YAWL and BPEL-based engines, are discussed in <cit.>. <cit.> discusses how leading commercial tools in the data analysis market handle SQL statements, as a means to perform data management tasks within workflows. Liu et al. <cit.> focus on scientific workflows, which are an essential part of data flows, but do not delve into the details of optimization. Finally, Jovanovic et al. <cit.> present a survey that aims to highlight the challenges of modern data flows through different data flow scenarios. Additionally, related data flow optimization techniques are summarized, but not surveyed, in order to underline the importance of low data latency in Business Intelligence (BI) processes, and an architecture of next-generation BI systems that manage the complexity of modern data flows in such systems is proposed.
Modeling and processing ETL workflows <cit.> focuses on the detailed description of conceptual and logical modeling of ETLs. Conceptual modeling refers to the initial design of ETL processes by using UML diagrams, while the logical modeling refers to the design of ETL processes taking into account required constraints. This survey discusses the generic problems in ETL data flows, including optimization issues in minimizing the execution time of an ETL workflow and the resumption in case of failures during the processing of large amount of data.
Data flow optimization also bears similarities to query optimization over Web Services (WSs) <cit.>, especially when the valid orderings of the calls to the WSs are subject to dependency constraints. This survey includes the WS-related techniques that can also be applied to data flows.
Part of the optimizations covered in this survey can be deemed as generalizations of the corresponding techniques in database queries. An example is the correspondence between pushing selections down in the query plan and moving filtering tasks as close to data source as possible <cit.>. Comprehensive surveys on database query optimization are in <cit.>, whereas lists of semantic equivalence rules between expressions of relational operators that provide the basis for query optimization can be found in classical database textbooks (e.g., <cit.>). However, as discussed in the introduction, there are essential differences between database queries and data flows, which cannot be described as expressions over a limited set of elementary operations. At a higher level, data flow optimization covers more mechanisms (e.g., task decomposition and engine selection) and a broader setting with regards to the criteria considered and the metadata required.
Nevertheless, it is arguable that data flow task ordering bears similarities to optimization of database queries containing user-defined functions (UDFs) (or, expensive predicates), as reported in <cit.>. This similarity is based on the intrinsic correspondence between UDFs and data flow tasks, but there are two main differences. First, the dependency constraints considered in <cit.> refer to pairs of a join and a UDF, rather than between UDFs. As such, when joins are removed and only UDFs are considered, the techniques described in these proposals are reduced to unconstrained filter ordering. Second, the straightforward extensions to the proposals <cit.> are already covered and improved by solutions targeting data flow task ordering explicitly as discussed in Section <ref>.
§ SUMMARY
This survey covers an emerging area in data management, namely optimization techniques that modify a data-centric workflow execution plan prior to its execution in an automated manner. The survey first provides a taxonomy of the main dimensions characterizing each optimization proposal. These dimensions cover a broad range, from the mechanism utilized to enhance execution plans to the distribution of the setting and the environment for which the solution is initially proposed. Then, we present the details of the existing proposals, divided into eight groups, one for each of the identified optimization mechanisms. Next, we present the evaluation approaches, focusing on aspects, such as the type of workflows and data used during experiments. We complete this survey with a discussion of the main findings, while also, for completeness, we briefly present tangential issues, such as optimizations in massively parallel data flow systems and optimized workflow scheduling.
| Workflows aim to model and execute real-world intertwined or interconnected processes, called tasks or activities. While this is still the case, workflows play an increasingly significant role in processing very large volumes of data, possibly under highly demanding requirements.
Scientific workflow systems tailored to data-intensive e-science applications have been around since the last decade, e.g., <cit.>. This trend is nowadays complemented by
the evolution of workflow technology to serve (big) data analysis, in settings such as business intelligence, e.g., <cit.>, and business process management, e.g., <cit.>. Additionally, massively parallel engines, such as Spark, are becoming increasingly popular for designing and executing workflows.
Broadly, there are two big workflow categories, namely control-centric and data-centric. A workflow is commonly represented as a directed graph, where each task corresponds to a node in the graph and the edges represent the control flow or the data flow, respectively. The control-centric workflows are most often encountered in business process management <cit.> and they emphasize the passing of control across tasks and gateway semantics, such as branching execution, iterations, and so on; transmitting and sharing data across tasks is a second class citizen. In control-centric workflows, only a subset of the graph nodes correspond to activities, while the remainder denote events and gateways, as in the BPMN standard. In
data-centric workflows (or workflows for data analytics or simply data flows[Hereafter, these three terms will be used interchangeably; the terms workflow and flow will be used interchangeably, too.]), the graph is typically acyclic (directed acyclic graph - DAG). The nodes of the DAG represent solely actions related to the manipulation, transformation, access and storage of data,
e.g., as in <cit.> and in popular data flow systems, such as Pentaho Data Integration (Kettle) and Spark.
The tokens passing through the tasks correspond to processed data. The control is modeled implicitly assuming that each task may start executing when the entire or part of the input becomes available.
This survey considers data-centric flows exclusively.
Executing data-centric flows efficiently is a far from trivial issue. Even in the most widely used data flow tools, flows are commonly designed manually. Problems in the optimality of those designs stem from the complexity of such flows and from the fact that, in some applications, flow designers might not be systems experts <cit.>; consequently, they tend to design with only semantic correctness in mind. In addition, executing flows in a dynamic environment may mean that a design that was optimized in the past behaves suboptimally in the future due to changing conditions <cit.>.
The issues above call for a paradigm shift in the way data flow management systems are engineered; more specifically, there is a growing demand for automated optimization of flows. An analogy can be drawn with database query processing, where declarative statements, e.g., in SQL, are automatically parsed, optimized, and then passed on to the execution engine. But data flow optimization is more complex, because tasks need not belong to a predefined set of algebraic operators with clear semantics, and there may be arbitrary dependencies among their execution order. In addition, in data flows there may be optimization criteria apart from performance, such as reliability and freshness, depending on business objectives and execution environments <cit.>. This survey covers optimization techniques[The terms technique, proposal, and work will be used interchangeably.] applicable to data flows, including database query optimization techniques that consider arbitrary plan operators, e.g., user-defined functions (UDFs), and dependencies between them. To the contrary, we do not aim to cover techniques that perform optimizations considering solely specific types of tasks, such as filters, joins, and so on.
The contribution of this survey is a taxonomy of data flow optimization techniques that refer to the flow plan generation layer. In addition, we provide a concise overview of the existing approaches with a view to (i) explaining the technical details and the distinct features of each approach in a way that facilitates result synthesis; and (ii) highlighting strengths, weaknesses, and areas deserving more attention from the community.
The main findings are that on the one hand, big advances have been made and most of the aspects of data flow optimization have started to be investigated. On the other hand, data flow optimization is rather a technology in evolution. Contrary to query optimization, research so far seems to be less systematic and mainly consists of ad-hoc techniques, the combination of which is unclear.
The structure of the rest of this article is as follows. The next section describes the survey methodology and provides details about the exact context considered. Section <ref> presents a taxonomy of existing optimizations that take place before the flow enactment. Section <ref> describes the state-of-the-art techniques grouped by the main optimization mechanism they employ. Section <ref> presents the ways in which optimization proposals for data-centric workflows have been evaluated. Section <ref> highlights our findings. Section <ref> touches upon tangential flow optimization-related techniques that have recently been developed along with scheduling optimizations taking place during flow execution.
Section <ref> reviews surveys that have been conducted in related areas and finally, Section <ref> concludes the paper.
http://arxiv.org/abs/1701.07576v2 | 20170126044500 | Approximate Capacity of a Class of Partially Connected Interference Channels | [
"Muryong Kim",
"Yitao Chen",
"Sriram Vishwanath"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Approximate Capacity of a Class of
Partially Connected Interference Channels
Muryong Kim, Yitao Chen, and Sriram Vishwanath, Senior Member, IEEE
The authors are with the University of Texas at Austin, Austin, TX 78701 USA (e-mail: [email protected], [email protected], [email protected]).
We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting and lattice alignment.
Interference channel, interference alignment, nested lattice code, side information graph, topological interference management.
§ INTRODUCTION
§.§ Motivation
The capacity of the interference channel remains one of the most challenging open problems in network information theory. The capacity region is not known in general, except for specific ranges of channel parameters. For the two-user scalar Gaussian interference channel, where interference alignment is not required, the approximate capacity region to within one bit is known <cit.>. For channels where interference alignment is required, such as the K-user Gaussian interference channel <cit.> and the Gaussian X-channel <cit.>, a tight characterization of the capacity region is not known, even for symmetric channel cases.
A tractable approach to the capacity of interference channels is to consider partial connectivity of the interference links and analyze the impact of topology on the capacity. The topological interference management approach <cit.> gives important insights into the degrees-of-freedom (DoF) of partially connected interference channels and their connection to index coding problems <cit.>. It is shown that the symmetric DoF of a partially connected interference channel can be found by solving the corresponding index coding problem.
In this paper, we consider a class of three-user partially connected interference channels and characterize approximate capacity regions at finite SNR. We focus on the impact of interference topology, interference alignment, and interplay between interference and noise. We choose a few representative topologies where we can achieve clear interference alignment gain. For these topologies, Z-channel type outer bounds are tight to within a constant gap from the corresponding inner bound. For each topology, we present an achievable scheme based on rate-splitting, lattice alignment, and successive decoding.
§.§ Related Work
Lattice coding based on nested lattices is shown to achieve the capacity of the single user Gaussian channel in <cit.>. The idea of lattice-based interference alignment by decoding the sum of lattice codewords appeared in the conference version of <cit.>. This lattice alignment technique is used to derive capacity bounds for three-user interference channel in <cit.>. The idea of decoding the sum of lattice codewords is also used in <cit.> to derive the approximate capacity of the two-way relay channel. An extended approach, compute-and-forward <cit.> enables to first decode some linear combinations of lattice codewords and then solve the lattice equation to recover the desired messages. This approach is also used in <cit.> to characterize approximate sum-rate capacity of the fully connected K-user interference channel.
The idea of sending multiple copies of the same sub-message at different signal levels, so-called Zigzag decoding, appeared in <cit.> where receivers collect side information and use them for interference cancellation.
The K-user cyclic Gaussian interference channel is considered in <cit.>, where an approximate capacity for the weak interference regime (SNR_k≥INR_k for all k) and the exact capacity for the strong interference regime (SNR_k≤INR_k for all k) are derived. Our type 4 and 5 channels are K=3 cases in mixed interference regimes, which were not considered in <cit.>.
§.§ Main Results
We consider five channel types defined in Table <ref> and described in Fig. <ref> (a)–(e). Each channel type is a partially connected three-user Gaussian interference channel. Each transmitter is subject to power constraint 𝔼[X_k^2]≤ P_k=P. Let us denote the noise variance by N_k=𝔼[Z_k^2]. Without loss of generality, we assume that N_1≤ N_2≤ N_3.
The side information graph representation of an interference channel satisfies the following.
* A node represents a transmitter-receiver pair, or equivalently, the message.
* There is a directed edge from node i to node j if transmitter i does not interfere at receiver j.
The side information graphs for five channel types are described in Fig. <ref> (f)–(j). We state the main results in the following two theorems, of which the proofs will be given in the main body of the paper.
For the five channel types, if (R_1,R_2,R_3) is achievable, it must satisfy
∑_j∈𝒦 R_j ≤1/2log(1+|𝒦|P/min_j∈𝒦{N_j})
for every subset 𝒦 of the nodes {1,2,3} that does not include a directed cycle in the side information graph over the subset.
For any rate triple (R_1,R_2,R_3) on the boundary of the outer bound region, the point (R_1-1,R_2-1,R_3-1) is achievable.
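The bound of Theorem 1 is mechanical to enumerate. The following Python sketch (ours, not part of the paper) derives the side information graph from a connectivity matrix and prints one sum-rate constraint per acyclic subset; the matrix shown transcribes channel type 1, and P and N_k are arbitrary example values:

```python
# H[j][i] = 1 iff transmitter i is heard at receiver j. Edge i -> j in
# the side information graph whenever transmitter i does NOT interfere
# at receiver j. Every subset K whose induced subgraph is acyclic gives
#   sum_{j in K} R_j <= 1/2 log2(1 + |K| P / min_{j in K} N_j).
from itertools import combinations, permutations
from math import log2

def side_info_edges(H):
    n = len(H)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and H[j][i] == 0}

def acyclic_subsets(H):
    edges, n = side_info_edges(H), len(H)
    out = []
    for r in range(1, n + 1):
        for K in combinations(range(n), r):
            # Brute force (fine for n = 3): K contains a directed cycle
            # iff the cyclic closure of some tuple of distinct nodes of
            # K lies entirely in the edge set.
            cyc = any(all((c[k], c[(k + 1) % m]) in edges for k in range(m))
                      for m in range(2, r + 1)
                      for c in permutations(K, m))
            if not cyc:
                out.append(K)
    return out

H_type1 = [[1, 1, 0],   # receiver 1 hears transmitters 1, 2
           [1, 1, 1],   # receiver 2 hears all
           [0, 1, 1]]   # receiver 3 hears transmitters 2, 3
P, N = 10.0, [1.0, 2.0, 4.0]          # arbitrary example values
for K in acyclic_subsets(H_type1):
    b = 0.5 * log2(1 + len(K) * P / min(N[j] for j in K))
    print("sum of R_j over", [j + 1 for j in K], "<= %.3f" % b)
```

For type 1, the subset {1,3} forms the directed cycle 1→3→1 and is correctly excluded, matching the absence of an R_1+R_3 constraint in Theorem 3 below.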
§.§ Paper Organization and Notation
The capacity outer bounds are derived in Section II. The inner bounds for each channel type and the corresponding gap analysis are given in Section III, IV, V, VI, VII, respectively. Section VIII concludes the paper. While lattice coding-based achievable rate regions for channel types 4 and 5 are presented in Section VI and VII, random coding achievability is given in Appendix.
Signal 𝐱_ij is a coded version of message M_ij with code rate R_ij unless otherwise stated. The single-user capacity at receiver k is denoted by C_k=1/2log(1+P/N_k). Let 𝒞 denote the capacity region of an interference channel. Also, let ℛ_i and ℛ_o denote the capacity inner bound and the capacity outer bound, respectively. Thus, ℛ_i⊂𝒞⊂ℛ_o. Let δ_k denote the gap on the rate R_k between ℛ_i and ℛ_o. Let δ_jk denote the gap on the sum-rate R_j+R_k between ℛ_i and ℛ_o. For example, if
ℛ_i={(R_j,R_k): R_k≤ L_k, R_j+R_k≤ L_jk}
ℛ_o={(R_j,R_k): R_k≤ U_k, R_j+R_k≤ U_jk},
then δ_k=U_k-L_k and δ_jk=U_jk-L_jk. For side information graph, we use graph notation of <cit.>. For example, 𝒢_1={(1|3),(2),(3|1)} means that node 1 has an incoming edge from node 3, that node 2 has no incoming edge, and that node 3 has an incoming edge from node 1.
§ CAPACITY OUTER BOUNDS
We prove the capacity outer bound in Theorem 1 for each channel type. The result is summarized in Table <ref>. The shape of the outer bound region is illustrated in Fig. <ref>. For all channel types, we assume P_1=P_2=P_3=P and N_1≤ N_2≤ N_3.
§.§ Channel Type 1
In this section, we present an outer bound on the capacity region of Type 1 channel defined by
[ [ Y_1; Y_2; Y_3; ]]
=[ [ 1 1 0; 1 1 1; 0 1 1; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1; Z_2; Z_3; ]].
We state the outer bound in the following theorem.
The capacity region of Type 1 channel is contained in the following outer bound region:
R_k≤ C_k, k=1,2,3
R_1+R_2≤1/2log(1+P/N_1)+1/2log(2P+N_2/P+N_2)
R_2+R_3≤1/2log(1+P/N_2)+1/2log(2P+N_3/P+N_3).
The individual rate bounds are obvious. We proceed to sum-rate bounds.
n(R_1+R_2-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)
≤ I(X_1^n;Y_1^n|X_2^n)+I(X_2^n;Y_2^n|X_3^n)
= h(Y_1^n|X_2^n)-h(Y_1^n|X_1^n,X_2^n)
+h(Y_2^n|X_3^n)-h(Y_2^n|X_2^n,X_3^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_2/P+N_2)
where the first inequality is by Fano's inequality, the second inequality due to the independence of X_1,X_2,X_3. The third inequality holds from the fact that Gaussian distribution maximizes differential entropy and that h(X_1^n+Z_1^n)-h(X_1^n+Z_2^n) is also maximized by Gaussian distribution. Similarly,
n(R_2+R_3-ϵ)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)
≤ I(X_2^n;Y_2^n|X_1^n,X_3^n)+I(X_3^n;Y_3^n)
= h(Y_2^n|X_1^n,X_3^n)-h(Y_2^n|X_1^n,X_2^n,X_3^n)
+h(Y_3^n)-h(Y_3^n|X_3^n)
= h(X_2^n+Z_2^n)-h(Z_2^n)
+h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)
≤n/2log(P+N_2/N_2)+n/2log(2P+N_3/P+N_3).
§.§ Channel Type 2
In this section, we present an outer bound on the capacity region of Type 2 channel defined by
[ [ Y_1; Y_2; Y_3; ]]
=[ [ 1 1 1; 1 1 0; 1 0 1; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1; Z_2; Z_3; ]].
We state the outer bound in the following theorem.
The capacity region of Type 2 channel is contained in the following outer bound region:
R_k≤ C_k, k=1,2,3
R_1+R_2≤1/2log(1+P/N_1)+1/2log(2P+N_2/P+N_2)
R_1+R_3≤1/2log(1+P/N_1)+1/2log(2P+N_3/P+N_3).
n(R_1+R_2-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)
≤ I(X_1^n;Y_1^n|X_2^n,X_3^n)+I(X_2^n;Y_2^n)
= h(Y_1^n|X_2^n,X_3^n)-h(Y_1^n|X_1^n,X_2^n,X_3^n)
+h(Y_2^n)-h(Y_2^n|X_2^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_2/P+N_2).
n(R_1+R_3-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)
≤ I(X_1^n;Y_1^n|X_2^n,X_3^n)+I(X_3^n;Y_3^n)
= h(Y_1^n|X_2^n,X_3^n)-h(Y_1^n|X_1^n,X_2^n,X_3^n)
+h(Y_3^n)-h(Y_3^n|X_3^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_3/P+N_3).
§.§ Channel Type 3
In this section, we present an outer bound on the capacity region of Type 3 channel defined by
[ [ Y_1; Y_2; Y_3; ]]
=[ [ 1 0 1; 0 1 1; 1 1 1; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1; Z_2; Z_3; ]].
We state the outer bound in the following theorem.
The capacity region of Type 3 channel is contained in the following outer bound region:
R_k≤ C_k, k=1,2,3
R_1+R_3≤1/2log(1+P/N_1)+1/2log(2P+N_3/P+N_3)
R_2+R_3≤1/2log(1+P/N_2)+1/2log(2P+N_3/P+N_3).
n(R_1+R_3-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)
≤ I(X_1^n;Y_1^n|X_3^n)+I(X_3^n;Y_3^n|X_2^n)
= h(Y_1^n|X_3^n)-h(Y_1^n|X_1^n,X_3^n)
+h(Y_3^n|X_2^n)-h(Y_3^n|X_2^n,X_3^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_3/P+N_3).
n(R_2+R_3-ϵ)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)
≤ I(X_2^n;Y_2^n|X_3^n)+I(X_3^n;Y_3^n|X_1^n)
= h(Y_2^n|X_3^n)-h(Y_2^n|X_2^n,X_3^n)
+h(Y_3^n|X_1^n)-h(Y_3^n|X_1^n,X_3^n)
= h(X_2^n+Z_2^n)-h(Z_2^n)
+h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)
≤n/2log(P+N_2/N_2)+n/2log(2P+N_3/P+N_3).
§.§ Channel Type 4
In this section, we present an outer bound on the capacity region of Type 4 channel defined by
[ [ Y_1; Y_2; Y_3; ]]
=[ [ 1 0 1; 1 1 0; 0 1 1; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1; Z_2; Z_3; ]].
This is a cyclic Gaussian interference channel <cit.>. We first show that channel type 4 is in the mixed interference regime. By normalizing the noise variances, we get the equivalent channel given by
[ [ Y_1'; Y_2'; Y_3'; ]]
=[ [ h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1'; Z_2'; Z_3'; ]]
where Y_k'=1/√(N_k) Y_k, Z_k'=1/√(N_k)Z_k, N_0=𝔼[Z_k'^2]=1, 𝔼[X_k^2]≤ P_k=P and
[ [ h_11 h_12 h_13; h_21 h_22 h_23; h_31 h_32 h_33; ]]
=[ [ 1/√(N_1) 0 1/√(N_1); 1/√(N_2) 1/√(N_2) 0; 0 1/√(N_3) 1/√(N_3); ]].
With the usual definitions of SNR_k=h_kk^2 P_k/N_0 and
INR_k=h_jk^2 P_k/N_0 for j≠ k as in <cit.>,
SNR_1=P/N_1≥INR_1=P/N_2
SNR_2=P/N_2≥INR_2=P/N_3
SNR_3=P/N_3≤INR_3=P/N_1.
We state the outer bound in the following theorem.
The capacity region of Type 4 channel is contained in the following outer bound region:
R_k≤ C_k, k=1,2,3
R_1+R_2≤1/2log(1+P/N_1)+1/2log(2P+N_2/P+N_2)
R_1+R_3≤1/2log(1+2P/N_1)
R_2+R_3≤1/2log(1+P/N_2)+1/2log(2P+N_3/P+N_3).
n(R_1+R_2-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)
≤ I(X_1^n;Y_1^n|X_3^n)+I(X_2^n;Y_2^n)
= h(Y_1^n|X_3^n)-h(Y_1^n|X_1^n,X_3^n)
+h(Y_2^n)-h(Y_2^n|X_2^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_2^n+Z_2^n)-h(X_1^n+Z_2^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_2/P+N_2).
n(R_2+R_3-ϵ)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)
≤ I(X_2^n;Y_2^n|X_1^n)+I(X_3^n;Y_3^n)
= h(Y_2^n|X_1^n)-h(Y_2^n|X_1^n,X_2^n)
+h(Y_3^n)-h(Y_3^n|X_3^n)
= h(X_2^n+Z_2^n)-h(Z_2^n)
+h(X_2^n+X_3^n+Z_3^n)-h(X_2^n+Z_3^n)
≤n/2log(P+N_2/N_2)+n/2log(2P+N_3/P+N_3).
n(R_1+R_3-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n|X_2^n)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_1^n|X_1^n)
≤ I(X_1^n,X_3^n;Y_1^n)
= h(Y_1^n)-h(Y_1^n|X_1^n,X_3^n)
= h(X_1^n+X_3^n+Z_1^n)-h(Z_1^n)
≤n/2log(2P+N_1/N_1)
where we used the fact that I(X_3^n;Y_3^n|X_2^n)=I(X_3^n;X_3^n+Z_3^n)≤ I(X_3^n;X_3^n+Z_1^n)=I(X_3^n;Y_1^n|X_1^n).
§.§ Channel Type 5
In this section, we present an outer bound on the capacity region of Type 5 channel defined by
[ [ Y_1; Y_2; Y_3; ]]
=[ [ 1 1 0; 0 1 1; 1 0 1; ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1; Z_2; Z_3; ]].
This is a cyclic Gaussian interference channel <cit.>. We first show that channel type 5 is in the mixed interference regime. By normalizing the noise variances, we get the equivalent channel given by
[ [ Y_1'; Y_2'; Y_3'; ]]
=[ [ 1/√(N_1) 1/√(N_1) 0; 0 1/√(N_2) 1/√(N_2); 1/√(N_3) 0 1/√(N_3); ]]
[ [ X_1; X_2; X_3; ]]
+[ [ Z_1'; Z_2'; Z_3'; ]].
We can see that
SNR_1=P/N_1≥INR_1=P/N_3
SNR_2=P/N_2≤INR_2=P/N_1
SNR_3=P/N_3≤INR_3=P/N_2.
We state the outer bound in the following theorem.
The capacity region of Type 5 channel is contained in the following outer bound region:
R_k≤ C_k, k=1,2,3
R_1+R_2≤1/2log(1+2P/N_1)
R_2+R_3≤1/2log(1+2P/N_2)
R_1+R_3≤1/2log(1+P/N_1)+1/2log(2P+N_3/P+N_3).
n(R_1+R_2-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_2^n|X_3^n)
≤ I(X_1^n;Y_1^n)+I(X_2^n;Y_1^n|X_1^n)
≤ I(X_1^n,X_2^n;Y_1^n)
= h(Y_1^n)-h(Y_1^n|X_1^n,X_2^n)
= h(X_1^n+X_2^n+Z_1^n)-h(Z_1^n)
≤n/2log(2P+N_1/N_1)
where we used the fact that I(X_2^n;Y_2^n|X_3^n)=I(X_2^n;X_2^n+Z_2^n)≤ I(X_2^n;X_2^n+Z_1^n)=I(X_2^n;Y_1^n|X_1^n).
n(R_2+R_3-ϵ)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_3^n|X_1^n)
≤ I(X_2^n;Y_2^n)+I(X_3^n;Y_2^n|X_2^n)
≤ I(X_2^n,X_3^n;Y_2^n)
= h(Y_2^n)-h(Y_2^n|X_2^n,X_3^n)
= h(X_2^n+X_3^n+Z_2^n)-h(Z_2^n)
≤n/2log(2P+N_2/N_2)
where we used the fact that I(X_3^n;Y_3^n|X_1^n)=I(X_3^n;X_3^n+Z_3^n)≤ I(X_3^n;X_3^n+Z_2^n)=I(X_3^n;Y_2^n|X_2^n).
n(R_1+R_3-ϵ)
≤ I(X_1^n;Y_1^n)+I(X_3^n;Y_3^n)
≤ I(X_1^n;Y_1^n|X_2^n)+I(X_3^n;Y_3^n)
= h(Y_1^n|X_2^n)-h(Y_1^n|X_1^n,X_2^n)
+h(Y_3^n)-h(Y_3^n|X_3^n)
= h(X_1^n+Z_1^n)-h(Z_1^n)
+h(X_1^n+X_3^n+Z_3^n)-h(X_1^n+Z_3^n)
≤n/2log(P+N_1/N_1)+n/2log(2P+N_3/P+N_3)
§.§ Relaxed Outer Bounds
For ease of gap calculation, we also derive relaxed outer bounds. First, we can see that for N_j≤ N_k,
1/2log(1+P/N_j)+1/2log(2P+N_k/P+N_k)≤1/2log(1+2P/N_j).
Five outer bound theorems in this section, together with this inequality, give the sum-rate bound expression in Theorem 1.
Next, we can assume that P≥ 3N_j for j=1,2,3. Otherwise, showing the one-bit gap to capacity is trivial, as the capacity region is included in the unit hypercube, i.e., R_j≤1/2log(1+P/N_j)< 1. For P≥ 3N_j,
1/2log(1+2P/N_j)=1/2log(P/N_j)+1/2log(N_j/P+2)
≤1/2log(P/N_j)+1/2log(7/3)
1/2log(1+P/N_j)≤1/2log(P/N_j)+1/2log(4/3).
The resulting relaxed outer bounds ℛ_o' are summarized in Table <ref>.
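The relaxation inequality above can also be checked numerically; the following sketch (ours) verifies it on random instances with N_j≤N_k. (Clearing denominators shows the inequality is equivalent to P N_j ≤ P N_k, i.e., to N_j ≤ N_k itself.)

```python
# Random-instance check of
#   1/2 log(1 + P/N_j) + 1/2 log((2P+N_k)/(P+N_k)) <= 1/2 log(1 + 2P/N_j)
# whenever N_j <= N_k.
import random
from math import log2

random.seed(0)
for _ in range(10000):
    P = random.uniform(0.01, 1e4)
    N_j = random.uniform(0.01, 1e3)
    N_k = N_j + random.uniform(0.0, 1e3)        # enforce N_j <= N_k
    lhs = 0.5 * log2(1 + P / N_j) + 0.5 * log2((2 * P + N_k) / (P + N_k))
    rhs = 0.5 * log2(1 + 2 * P / N_j)
    assert lhs <= rhs + 1e-12, (P, N_j, N_k)
print("verified on 10,000 random instances")
```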
§ INNER BOUND: CHANNEL TYPE 1
Given α=(α_0,α_2)∈ [0,1]^2, the rate region ℛ_α is defined by
R_1 ≤1/2log^+(1-α_0/2-α_0+(1-α_0) P/(α_0+α_2) P+N_2)
+1/2log(1+α_0 P/N_1)
R_2 ≤1/2log(1+α_2 P/α_0 P+N_2)
R_3 ≤1/2log^+(1/2-α_0+P/(α_0+α_2) P+N_3)
where log^+(·)=max{0,log(·)}. Then,
ℛ=conv(⋃_αℛ_α)
is achievable, where conv(·) is the convex hull operator.
§.§ Preliminaries: Lattice Coding
Lattice Λ is a discrete subgroup of ℝ^n, Λ ={𝐭=𝐆𝐮: 𝐮∈ℤ^n} where 𝐆∈ℝ^n× n is a real generator matrix. Quantization with respect to Λ is Q_Λ(𝐱)=arg min_λ∈Λ‖𝐱-λ‖. Modulo operation with respect to Λ is M_Λ(𝐱)=[𝐱]=𝐱-Q_Λ(𝐱). For convenience, we use both notations M_Λ(·) and [·] interchangeably. Fundamental Voronoi region of Λ is 𝒱(Λ)={𝐱:Q_Λ(𝐱)=0}. Volume of the Voronoi region of Λ is V(Λ)=∫_𝒱(Λ) d𝐱. Normalized second moment of Λ is G(Λ)=σ^2(Λ)/V(Λ)^2/n where σ^2(Λ)=1/nV(Λ)∫_𝒱(Λ)‖𝐱‖^2 d𝐱. Lattices Λ_1, Λ_2 and Λ are said to be nested if Λ⊆Λ_2⊆Λ_1. For nested lattices Λ_2⊂Λ_1,
Λ_1/Λ_2=Λ_1∩𝒱(Λ_2).
We briefly review the lattice decoding procedure in <cit.>. We use nested lattices Λ⊆Λ_t with σ^2(Λ)=S, G(Λ)=1/2π e, and V(Λ)=(2π e S)^n/2. The transmitter sends 𝐱=[𝐭+𝐝] over the point-to-point Gaussian channel 𝐲=𝐱+𝐳 where the codeword 𝐭∈Λ_t∩𝒱(Λ), the dither signal 𝐝∼Unif(𝒱(Λ)), the transmit power 1/n‖𝐱‖^2=S and the noise 𝐳∼𝒩(0,N𝐈). The code rate is given by R=1/nlog(V(Λ)/V(Λ_t)).
After linear scaling, dither removal, and mod-Λ operation, we get
𝐲'=[β𝐲-𝐝] = [𝐭+𝐳_e]
where the effective noise is 𝐳_e=(β-1)𝐱+β𝐳
and its variance
σ_e^2=1/n𝔼[‖𝐳_e‖^2]=(β-1)^2 S+β^2 N.
With the MMSE scaling factor β=S/(S+N) plugged in, we get σ_e^2=β N=SN/(S+N). The capacity of the mod-Λ channel <cit.> between 𝐭 and 𝐲' is
1/n I(𝐭;𝐲')
= 1/n h(𝐲')-1/n h(𝐲'|𝐭)
= 1/nlog V(Λ)-1/n h([𝐳_e])
≥ 1/nlog V(Λ)-1/n h(𝐳_e)
≥ 1/2log(S/β N)
= 1/2log(1+S/N)
= C
where I(·) and h(·) denote mutual information and differential entropy, respectively; the second equality holds since the dither makes 𝐲' uniform over 𝒱(Λ), and the first inequality holds since the modulo operation cannot increase differential entropy. For reliable decoding of 𝐭, we have the code rate constraint R≤ C.
With the choice of lattice parameters, σ^2(Λ_t)≥β N, G(Λ_t)=1/2π e and V(Λ_t)^n/2=σ^2(Λ_t)/G(Λ_t)≥ 2π e β N,
R = 1/nlog(V(Λ)/V(Λ_t))
≤ 1/nlog((2π e S)^n/2/(2π e β N)^n/2)
= 1/2log(S/β N).
Thus, the constraint R≤ C can be satisfied. By lattice decoding <cit.>, we can recover 𝐭, i.e.,
Q_Λ_t(𝐲')=𝐭,
with probability 1-P_e where
P_e=Pr[Q_Λ_t(𝐲')≠𝐭]
is the probability of decoding error.
If we choose Λ to be Poltyrev-good <cit.>, then P_e→ 0 as n→∞.
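The MMSE identity σ_e^2=β N at the heart of this argument is easy to reproduce empirically. The sketch below (ours) simulates a one-dimensional mod-Λ channel with a scaled integer lattice standing in for the shaping lattice (so G(Λ)=1/12 rather than 1/2πe; the variance identity is unaffected):

```python
# One-dimensional mod-Lambda channel with shaping lattice c*Z, where
# c = sqrt(12 S) so that the dither Unif[-c/2, c/2) has variance S.
# After MMSE scaling beta = S/(S+N), the effective noise variance
# should land near beta*N = S*N/(S+N). n = 1 only, so this checks the
# variance identity, not the coding theorem itself.
import random
from math import sqrt

def Q(x, c):  # nearest point of the lattice c*Z
    return c * round(x / c)

def M(x, c):  # modulo the lattice
    return x - Q(x, c)

random.seed(1)
S, N = 4.0, 1.0
c = sqrt(12 * S)
beta = S / (S + N)
acc, trials = 0.0, 200000
for _ in range(trials):
    d = random.uniform(-c / 2, c / 2)     # dither over V(Lambda)
    t = 0.0                               # any fixed codeword
    x = M(t + d, c)                       # transmit signal, power S
    y = x + random.gauss(0.0, sqrt(N))    # AWGN
    z_e = M(beta * y - d - t, c)          # effective noise mod Lambda
    acc += z_e * z_e
print("empirical:", acc / trials, " beta*N:", beta * N)   # ~0.8 vs 0.8
```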
§.§ Achievable Scheme
We present an achievable scheme for the proof of Theorem 8. The achievable scheme is based on rate-splitting, lattice coding, and interference alignment. Message M_1∈{1,2,…,2^nR_1} is split into two parts: M_11∈{1,2,…,2^nR_11} and M_10∈{1,2,…,2^nR_10}, so R_1=R_11+R_10. Transmitter 1 sends 𝐱_1=𝐱_11+𝐱_10 where 𝐱_11 and 𝐱_10 are coded signals of M_11 and M_10, respectively. Transmitters 2 and 3 send 𝐱_2 and 𝐱_3, coded signals of M_2∈{1,2,…,2^nR_2} and M_3∈{1,2,…,2^nR_3}. In particular, 𝐱_11 and 𝐱_3 are lattice-coded signals.
We use the lattice construction of <cit.> with the lattice partition chain Λ_c/Λ_1/Λ_3, so Λ_3⊂Λ_1⊂Λ_c are nested lattices. Λ_c is the coding lattice for both 𝐱_11 and 𝐱_3. Λ_1 and Λ_3 are shaping lattices for 𝐱_11 and 𝐱_3, respectively. The lattice signals are formed by
𝐱_11=[𝐭_11+𝐝_11]_1
𝐱_3=[𝐭_3+𝐝_3]_3
where 𝐭_11∈Λ_c∩𝒱(Λ_1) and 𝐭_3∈Λ_c∩𝒱(Λ_3) are lattice codewords. The dither signals 𝐝_11 and 𝐝_3 are uniformly distributed over 𝒱(Λ_1) and 𝒱(Λ_3), respectively.
To satisfy the power constraints, we choose 𝔼[‖𝐱_11‖^2]=nσ^2(Λ_1)=(1-α_0) nP, 𝔼[‖𝐱_10‖^2]=α_0 nP,
𝔼[‖𝐱_2‖^2]=α_2 nP, 𝔼[‖𝐱_3‖^2]=nσ^2(Λ_3)=nP.
With the choice of transmit signals, the received signals are given by
𝐲_1=𝐱_11+𝐱_2+𝐱_10+𝐳_1
𝐲_2=𝐱_f+𝐱_2+𝐳_2'
𝐲_3=𝐱_3+𝐳_3'.
where 𝐱_f=𝐱_11+𝐱_3 is the sum of the interference signals, and 𝐳_2'=𝐱_10+𝐳_2 and 𝐳_3'=𝐱_2+𝐳_3 are the effective noise terms.
The signal scale diagram at each receiver is shown in Fig. <ref> (a).
At the receivers, successive decoding is performed in the following order: 𝐱_11→𝐱_2→𝐱_10 at receiver 1, 𝐱_f→𝐱_2 at receiver 2, and receiver 3 only decodes 𝐱_3.
Note that the aligned lattice codewords 𝐭_11+𝐭_3∈Λ_c, and 𝐭_f=[𝐭_11+𝐭_3]_1 ∈Λ_c∩𝒱(Λ_1). We state the relationship between 𝐱_f and 𝐭_f in the following lemmas.
The following holds.
[𝐱_f-𝐝_f]_1=𝐭_f
where 𝐝_f=𝐝_11+𝐝_3.
[𝐱_f-𝐝_f]_1
=[M_Λ_1(𝐭_11+𝐝_11)+M_Λ_3(𝐭_3+𝐝_3)-𝐝_f]_1
=[M_Λ_1(𝐭_11+𝐝_11)+M_Λ_1(𝐭_3+𝐝_3)-𝐝_f]_1
=[𝐭_11+𝐝_11+𝐭_3+𝐝_3-𝐝_f]_1
=[𝐭_11+𝐭_3]_1
=𝐭_f
The second and third equalities are due to distributive law and the identity in the following lemma.
For any nested lattices Λ_3⊂Λ_1 and
any 𝐱∈ℝ^n, it holds that
[M_Λ_3(𝐱)]_1=[𝐱]_1.
[M_Λ_3(𝐱)]_1
=[𝐱-λ_3]_1
=[M_Λ_1(𝐱)-M_Λ_1(λ_3)]_1
=[M_Λ_1(𝐱)-λ_3 +Q_Λ_1(λ_3)]_1
=[M_Λ_1(𝐱)]_1
=[𝐱]_1
where λ_3=Q_Λ_3(𝐱)∈Λ_1, thus Q_Λ_1(λ_3)=λ_3.
The following holds.
[𝐭_f+𝐝_f]_1=[𝐱_f]_1.
[𝐭_f+𝐝_f]_1
=[M_Λ_1(𝐭_11+𝐭_3)+𝐝_f]_1
=[𝐭_11+𝐭_3+𝐝_f]_1
=[M_Λ_1(𝐭_11+𝐝_11)+M_Λ_1(𝐭_3+𝐝_3)]_1
=[M_Λ_1(𝐭_11+𝐝_11)+M_Λ_3(𝐭_3+𝐝_3)]_1
=[𝐱_11+𝐱_3]_1
=[𝐱_f]_1
Receiver 2 does not need to recover the individual codewords 𝐭_11 and 𝐭_3, but only the real sum 𝐱_f, in order to remove the interference from 𝐲_2. Since 𝐱_f=M_Λ_1(𝐱_f)+Q_Λ_1(𝐱_f), we first recover the modulo part and then the quantized part to cancel out 𝐱_f. This idea appeared in <cit.> as an achievable scheme for the many-to-one interference channel.
The mod-Λ_1 channel between 𝐭_f and 𝐲_2' is given by
𝐲_2'=[β_2𝐲_2-𝐝_f]_1
= [𝐱_f-𝐝_f +𝐳_e2]_1
= [𝐭_f +𝐳_e2]_1
where the effective noise 𝐳_e2=(β_2-1)𝐱_f+β_2(𝐱_2+𝐱_10+𝐳_2). Note that 𝔼[‖𝐱_f‖^2]=(α̅_0+1)nP, where α̅_0=1-α_0, and the effective noise variance σ_e2^2=1/n𝔼[‖𝐳_e2‖^2]=(β_2-1)^2(α̅_0+1)P+β_2^2 N_e2 where N_e2=(α_0+α_2) P+N_2. With the MMSE scaling factor β_2=(α̅_0+1)P/((α̅_0+1)P+N_e2) plugged in, we get σ_e2^2=β_2 N_e2=(α̅_0+1)P N_e2/((α̅_0+1)P+N_e2). The capacity of the mod-Λ_1 channel between 𝐭_f and 𝐲_2' is
1/n I(𝐭_f;𝐲_2')
≥1/nlog(V(Λ_1)/2^h(𝐳_e2))
= 1/2log(α̅_0 P/β_2 N_e2)
= 1/2log(α̅_0(α̅_0+1)P+α̅_0 N_e2/(α̅_0+1)N_e2)
= 1/2log(α̅_0/α̅_0+1+α̅_0 P/N_e2)
= 1/2log(α̅_0/α̅_0+1+α̅_0 P/(α_0+α_2) P+N_2)
= C_f
For reliable decoding of 𝐭_f at receiver 2, we have the code rate constraint R_11=1/nlog(V(Λ_1)/V(Λ_c))≤ C_f. This also implies that R_3=1/nlog(V(Λ_3)/V(Λ_c))≤ C_f+1/nlog(V(Λ_3)/V(Λ_1))=1/2log(P/β_2 N_e2)=1/2log(1/(α̅_0+1)+P/((α_0+α_2) P+N_2)).
By lattice decoding, we can recover the modulo sum of interference codewords 𝐭_f from 𝐲_2'. Then, we can recover the real sum 𝐱_f in the following way.
* Recover M_Λ_1(𝐱_f) by calculating [𝐭_f+𝐝_f]_1 (lemma 3).
* Subtract it from the received signal,
𝐲_2-M_Λ_1(𝐱_f)=Q_Λ_1(𝐱_f) +𝐳_2”
where 𝐳_2”=𝐱_2+𝐱_10+𝐳_2.
* Quantize it to recover Q_Λ_1(𝐱_f),
Q_Λ_1(Q_Λ_1(𝐱_f) +𝐳_2”)=Q_Λ_1(𝐱_f)
with probability 1-P_e where
P_e=Pr[Q_Λ_1(Q_Λ_1(𝐱_f) +𝐳_2”)≠ Q_Λ_1(𝐱_f)]
is the probability of decoding error.
If we choose Λ_1 to be simultaneously Rogers-good and Poltyrev-good <cit.> with V(Λ_1)≥ V(Λ_c), then P_e→ 0 as n→∞.
* Recover 𝐱_f by adding two vectors,
M_Λ_1(𝐱_f)+Q_Λ_1(𝐱_f)=𝐱_f.
We now proceed to decoding 𝐱_2 from 𝐲_2-𝐱_f=𝐱_2+𝐳_2'. Since 𝐱_2 is a codeword from an i.i.d. random code for the point-to-point channel, we can achieve any rate up to
R_2≤1/2log(1+α_2 P/α_0 P+N_2).
At receiver 1, we first decode 𝐱_11 while treating the other signals 𝐱_2+𝐱_10+𝐳_1 as noise. The effective noise in the mod-Λ_1 channel is 𝐳_e1=(β_1-1)𝐱_11+β_1(𝐱_2+𝐱_10+𝐳_1) with variance σ_e1^2=1/n𝔼[𝐳_e1^2]=(β_1-1)^2 α̅_0 P+β_1^2 N_e1 where N_e1=(α_0+α_2)P+N_1. For reliable decoding, the rate R_11 must satisfy
R_11≤(σ^2(Λ_1)/β_1 N_e1)=(1+α̅_0 P/(α_0+α_2) P+N_1)
where the MMSE scaling parameter β_1=α̅_0 P/α̅_0 P+N_e1. Similarly, we have the other rate constraints at receiver 1:
R_2≤(1+α_2 P/α_0 P +N_1)
R_10≤(1+α_0 P/N_1).
At receiver 3, the signal 𝐱_3 is decoded with the effective noise 𝐱_2+𝐳_3. For reliable decoding, R_3 must satisfy
R_3≤(1+P/α_2 P+N_3).
In summary,
* 𝐱_11 decoded at receivers 1 and 2
R_11≤ T_11'=(1+(1-α_0) P/(α_0+α_2) P+N_1)
R_11≤ T_11”=(c_11+(1-α_0) P/(α_0+α_2) P+N_2)
where c_11=(1-α_0)P/(1-α_0)P+P=1-α_0/2-α_0.
* 𝐱_10 decoded at receiver 1
R_10≤ T_10=(1+α_0 P/N_1)
* 𝐱_2 decoded at receivers 1 and 2
R_2≤ T_2' =(1+α_2 P/α_0 P +N_1)
R_2≤ T_2” =(1+α_2 P/α_0 P +N_2)
* 𝐱_3 decoded at receivers 2 and 3
R_3≤ T_3' =(c_3+P/(α_0+α_2) P+N_2)
R_3≤ T_3” =(1+P/α_2 P+N_3)
where c_3=P/(1-α_0)P+P=1/2-α_0.
Note that 0≤ c_11≤1/2, c_11+c_3=1, and 1/2≤ c_3≤ 1. Putting together, we can see that the following rate region is achievable.
R_1 ≤ T_1=min{T_11',T_11”}+T_10=T_11”+T_10
R_2 ≤ T_2=min{T_2',T_2”}=T_2”
R_3 ≤ T_3=min{T_3',T_3”}
where
T_1=(c_11+(1-α_0) P/(α_0+α_2) P+N_2)
+(1+α_0 P/N_1)
T_2=(1+α_2 P/α_0 P +N_2)
T_3 ≥(c_3+P/(α_0+α_2) P+N_3).
Thus, Theorem 8 is proved.
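For concreteness, the inner bound just derived is easy to evaluate numerically. The sketch below implements T_1, T_2 and the lower bound on T_3, reading the shorthand (x) as (1/2)log_2(x); the channel and power-split values are illustrative only.

```python
import numpy as np

def C(x):
    """The paper's shorthand (x) = 1/2 * log2(x)."""
    return 0.5 * np.log2(x)

def type1_inner(P, N1, N2, N3, a0, a2):
    c11 = (1 - a0) / (2 - a0)
    c3 = 1 / (2 - a0)
    T1 = C(c11 + (1 - a0) * P / ((a0 + a2) * P + N2)) + C(1 + a0 * P / N1)
    T2 = C(1 + a2 * P / (a0 * P + N2))
    T3 = C(c3 + P / ((a0 + a2) * P + N3))   # lower bound on T_3
    return T1, T2, T3

# Illustrative channel: P = 100, N1 <= N2 <= N3, alpha_0 = N2/P as in the sequel.
print(type1_inner(P=100.0, N1=1.0, N2=2.0, N3=4.0, a0=0.02, a2=0.1))
```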
§.§ The Gap
We choose the parameter α_0=N_2/P, which is suboptimal but good enough to achieve a constant gap. This choice of parameter, inspired by <cit.>, makes efficient use of the signal-scale difference between N_1 and N_2 at receiver 1, while keeping the interference caused by 𝐱_10 at the noise level N_2 at receiver 2. By substitution, we get
T_1 = (c_11+P-N_2/α_2 P+2N_2)
+(1+N_2/N_1)
T_2 = (1+α_2 P/2 N_2)
T_3 ≥(c_3+P/α_2 P+N_2+N_3).
Since α_0=N_2/P∈[0,1/3], it follows that c_11=1-N_2/P/2-N_2/P≥2/5, and c_3=1/2-N_2/P≥1/2.
Starting from ℛ_o from Table <ref>, we can express the two-dimensional outer bound region at R_2 as
R_1 ≤min{(1+2P/N_1)-R_2,C_1}
≤min{(P/N_1·7/3)-R_2,(P/N_1·4/3)}
R_3 ≤min{(1+2P/N_2)-R_2,C_3}
≤min{(P/N_2·7/3)-R_2,(P/N_3·4/3)}.
Depending on the bottleneck of min{·,·} expressions, there are three cases:
* R_2≤(7/4)
* (7/4)≤ R_2≤(N_3/N_2·7/4)
* R_2≥(N_3/N_2·7/4).
At R_2=(α_2 P/N_2·7/4), the outer bound region is
R_1 ≤min{(P/α_2 P·N_2/N_1·4/3),(P/N_1·4/3)}
R_3 ≤min{(P/α_2 P·4/3),(P/N_3·4/3)}.
Depending on the bottleneck of min{·,·} expressions, we consider the following three cases:
* α_2 P≥ N_3
* N_2≤α_2 P≤ N_3
* α_2 P≤ N_2.
Case i) α_2 P≥ N_3:
The outer bound region at R_2=(α_2 P/N_2·7/4) is
R_1 ≤(P/α_2 P·N_2/N_1·4/3), R_3 ≤(P/α_2 P·4/3).
For comparison, let us take a look at the achievable rate region. The first term of T_1 is lower bounded by
T_11” =(c_11+P-N_2/α_2 P+2N_2)
≥(2/5+P-α_2 P/3α_2 P)
> (P/3α_2 P).
We get the lower bounds:
T_1 = T_11”+T_10
> (P/3α_2 P)+(1+N_2/N_1)
> (P/3α_2 P·N_2/N_1)
T_3≥(1/2+P/α_2 P+N_2+N_3)
> (P/3α_2 P).
For fixed α_2 and R_2=(α_2 P/2N_2), the two-dimensional achievable rate region is given by
R_1 ≤(P/3α_2 P·N_2/N_1), R_3 ≤(P/3α_2 P).
Case ii) N_2≤α_2 P≤ N_3:
The outer bound region at R_2=(α_2 P/N_2·7/4) is
R_1 ≤(P/α_2 P·N_2/N_1·4/3), R_3 ≤(P/N_3·4/3).
Now, let us take a look at the achievable rate region. We have the lower bounds:
T_1 > (P/3α_2 P·N_2/N_1)
T_3 ≥(1/2+P/α_2 P+N_2+N_3)
> (P/3N_3).
For fixed α_2 and R_2=(α_2 P/2N_2), the two-dimensional achievable rate region is given by
R_1≤(P/3α_2 P·N_2/N_1), R_3≤(P/3N_3).
Case iii) α_2 P≤ N_2:
The outer bound region at R_2=(α_2 P/N_2·7/4) is
R_1 ≤(P/N_1·4/3), R_3 ≤(P/N_3·4/3).
For this range of α_2, the rate R_2 is small, i.e., R_2 = (α_2 P/N_2·7/4)≤(7/4)<1/2, and R_1 and R_3 are close to single user capacities C_1 and C_3, respectively.
Let us take a look at the achievable rate region.
The first term of T_1 is lower bounded by
T_11” =(c_11+P-N_2/α_2 P+2N_2)
≥(2/5+P-N_2/3N_2)
> (P/3N_2).
We get the lower bounds:
T_1 = T_11”+T_10
> (P/3N_2)+(1+N_2/N_1)
> (P/3N_1)
T_3 ≥(1/2+P/α_2 P+N_2+N_3)
> (P/3N_3).
For fixed α_2 and R_2=(α_2 P/2N_2), the following two-dimensional rate region is achievable.
R_1 ≤(P/3N_1), R_3 ≤(P/3N_3).
In all three cases above, by comparing the inner and outer bound regions, we can see that δ_1≤(3·4/3)=1, δ_2≤(2·7/4)<0.91 and δ_3≤(3·4/3)=1. Therefore, we can conclude that the gap is to within one bit per message.
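The closing arithmetic is easy to check mechanically; the snippet below simply evaluates the three constants, again reading (x) as (1/2)log_2(x).

```python
import numpy as np
half_log2 = lambda x: 0.5 * np.log2(x)
print(half_log2(3 * 4 / 3),   # delta_1 bound: exactly 1 bit
      half_log2(2 * 7 / 4),   # delta_2 bound: ~0.904 < 0.91 bit
      half_log2(3 * 4 / 3))   # delta_3 bound: exactly 1 bit
```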
§ INNER BOUND: CHANNEL TYPE 2
Given α_1∈ [0,1], the region ℛ_α is defined by
R_1 ≤(1+α_1 P/N_1)
R_2 ≤^+(1/2+P/α_1 P+N_2)
R_3 ≤^+(1/2+P/α_1 P+N_3),
and ℛ=conv(⋃_α_1ℛ_α) is achievable.
§.§ Achievable Scheme
For this channel type, rate splitting is not necessary. Transmit signal 𝐱_k is a coded signal of M_k∈{1,2,…,2^nR_k},k=1,2,3. In particular, 𝐱_2 and 𝐱_3 are lattice-coded signals using the same pair of coding and shaping lattices. As a result, the sum 𝐱_2+𝐱_3 is a dithered lattice codeword. The power allocation satisfies
𝔼[𝐱_1^2]=α_1 nP, 𝔼[𝐱_2^2]=nP, and 𝔼[𝐱_3^2]=nP.
The received signals are
𝐲_1=[𝐱_2+𝐱_3]+𝐱_1+𝐳_1
𝐲_2=𝐱_2+𝐱_1+𝐳_2
𝐲_3=𝐱_3+𝐱_1+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (b).
Decoding is performed in the following way.
* At receiver 1, [𝐱_2+𝐱_3] is first decoded while treating 𝐱_1+𝐳_1 as noise. Next, 𝐱_1 is decoded from 𝐲_1-[𝐱_2+𝐱_3]=𝐱_1+𝐳_1. For reliable decoding, the code rates should satisfy
R_2 ≤ T_2' =(1/2+P/α_1 P+N_1)
R_3 ≤ T_3' =(1/2+P/α_1 P+N_1)
R_1 ≤ T_1 =(1+α_1 P/N_1).
* At receiver 2, 𝐱_2 is decoded while treating 𝐱_1+𝐳_2 as noise. Similarly at receiver 3, 𝐱_3 is decoded while treating 𝐱_1+𝐳_3 as noise. For reliable decoding, the code rates should satisfy
R_2≤ T_2” =(1+P/α_1 P+N_2)
R_3≤ T_3” =(1+P/α_1 P+N_3).
Putting together, we get
R_1 ≤ T_1
R_2 ≤ T_2 =min{T_2',T_2”}
R_3 ≤ T_3 =min{T_3',T_3”}
where
T_1 =(1+α_1 P/N_1)
T_2 ≥(1/2+P/α_1 P+N_2)
≥(1/2+P/2·max{α_1 P,N_2})
T_3 ≥(1/2+P/α_1 P+N_3)
≥(1/2+P/2·max{α_1 P,N_3}).
§.§ The Gap
Starting from ℛ_o from Table <ref>, we can express the two-dimensional outer bound region at R_1 as
R_2 ≤min{(1+2P/N_1)-R_1,C_2}
≤min{(P/N_1·7/3)-R_1,(P/N_2·4/3)}
R_3 ≤min{(1+2P/N_1)-R_1,C_3}
≤min{(P/N_1·7/3)-R_1,(P/N_3·4/3)}.
Depending on the bottleneck of min{·,·} expressions, there are three cases:
* R_1≤(N_2/N_1·7/4)
* (N_2/N_1·7/4)≤ R_1≤(N_3/N_1·7/4)
* R_1≥(N_3/N_1·7/4).
At R_1=(α_1 P/N_1·7/4), the region can be expressed as
R_2 ≤min{(P/α_1 P·4/3),(P/N_2·4/3)}
R_3 ≤min{(P/α_1 P·4/3),(P/N_3·4/3)}.
Depending on the bottleneck of min{·,·} expressions, we consider the following three cases.
Case i) α_1 P≥ N_3: The two-dimensional outer bound region at R_1=(α_1 P/N_1·7/4) is
R_2 ≤(P/α_1 P·4/3), R_3 ≤(P/α_1 P·4/3).
For fixed α_1 and R_1=(α_1 P/N_1), the following two-dimensional region is achievable.
R_2 ≤(P/2α_1 P), R_3 ≤(P/2α_1 P).
Case ii) N_2≤α_1 P≤ N_3: The two-dimensional outer bound region at R_1=(α_1 P/N_1·7/4) is
R_2 ≤(P/α_1 P·4/3), R_3 ≤(P/N_3·4/3).
For fixed α_1 and R_1=(α_1 P/N_1), the following two-dimensional region is achievable.
R_2 ≤(P/2α_1 P), R_3 ≤(P/2N_3).
Case iii) α_1 P≤ N_2: The two-dimensional outer bound region at R_1=(α_1 P/N_1·7/4) is
R_2 ≤(P/N_2·4/3), R_3 ≤(P/N_3·4/3).
For fixed α_1 and R_1=(α_1 P/N_1), the following two-dimensional region is achievable.
R_2 ≤(P/2N_2), R_3 ≤(P/2N_3).
In all three cases above, by comparing the inner and outer bounds, we can see that δ_1≤(7/4)<0.41, δ_2 ≤(2·4/3)<0.71, and δ_3 ≤(2·4/3)<0.71. We can conclude that the inner and outer bounds are to within one bit.
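The constant-gap claim for this channel type can also be spot-checked numerically, since both bounds are simple closed-form expressions. The sketch below walks the outer boundary at each R_1, searches a grid of α_1 for the inner point minimizing the largest per-message gap, and reports the worst case; the channel values are illustrative, and such a sweep is of course no substitute for the proof above.

```python
import numpy as np

def C(x): return 0.5 * np.log2(x)          # the shorthand (x) = 1/2*log2(x)
def Cp(x): return max(C(x), 0.0)           # (x) with log^+ clipping

def inner_type2(P, N1, N2, N3, a1):
    return (C(1 + a1 * P / N1),
            Cp(0.5 + P / (a1 * P + N2)),
            Cp(0.5 + P / (a1 * P + N3)))

def worst_gap_type2(P, N1, N2, N3):
    C1, C2, C3 = C(1 + P / N1), C(1 + P / N2), C(1 + P / N3)
    S = C(1 + 2 * P / N1)                  # bound on R1+R2 and on R1+R3
    pts = np.array([inner_type2(P, N1, N2, N3, a)
                    for a in np.linspace(0.0, 1.0, 2000)])
    worst = 0.0
    for R1 in np.linspace(0.0, C1, 200):   # walk the outer boundary
        R2, R3 = min(S - R1, C2), min(S - R1, C3)
        gaps = np.column_stack([R1 - pts[:, 0], R2 - pts[:, 1], R3 - pts[:, 2]])
        worst = max(worst, np.min(np.max(gaps, axis=1)))
    return worst

print("worst per-message gap:", worst_gap_type2(100.0, 1.0, 2.0, 4.0), "bits")
```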
§ INNER BOUND: CHANNEL TYPE 3
Given α∈ [0,1], the region ℛ_α is defined by
R_1 ≤(1+α P/N_1)
R_2 ≤(1+α P/N_2)
R_3 ≤(1+P/2α P+N_3),
and ℛ=conv(⋃_αℛ_α) is achievable.
§.§ Achievable Scheme
For this channel type, neither rate splitting nor aligned interference decoding is necessary.
Transmit signal 𝐱_k is a coded signal of M_k∈{1,2,…,2^nR_k},k=1,2,3. The power allocation satisfies 𝔼[𝐱_1^2]=α nP, 𝔼[𝐱_2^2]=α nP, and 𝔼[𝐱_3^2]=nP.
The received signals are
𝐲_1=𝐱_3+𝐱_1+𝐳_1
𝐲_2=𝐱_3+𝐱_2+𝐳_2
𝐲_3=𝐱_3+𝐱_1+𝐱_2+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (c). Decoding is performed in the following way.
* At receiver 1, 𝐱_3 is first decoded while treating 𝐱_1+𝐳_1 as noise. Next, 𝐱_1 is decoded from 𝐲_1-𝐱_3=𝐱_1+𝐳_1. For reliable decoding, the code rates should satisfy
R_3 ≤ T_3' =(1+P/α P+N_1)
R_1 ≤ T_1 =(1+α P/N_1).
* At receiver 2, 𝐱_3 is first decoded while treating 𝐱_2+𝐳_2 as noise. Next, 𝐱_2 is decoded from 𝐲_2-𝐱_3=𝐱_2+𝐳_2. For reliable decoding, the code rates should satisfy
R_3 ≤ T_3” =(1+P/α P+N_2)
R_2 ≤ T_2 =(1+α P/N_2).
* At receiver 3, 𝐱_3 is decoded while treating 𝐱_1+𝐱_2+𝐳_3 as noise. For reliable decoding, the code rates should satisfy
R_3≤ T_3”' =(1+P/2α P+N_3).
Putting together, we get
R_1 ≤ T_1
R_2 ≤ T_2
R_3 ≤ T_3 =min{T_3',T_3”,T_3”'}
where
T_1 =(1+α P/N_1)
T_2 =(1+α P/N_2)
T_3 =(1+P/2α P+N_3)
≥(1+P/3·max{α P,N_3}).
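As with the previous types, the region ℛ_α is straightforward to evaluate; the sketch below prints the corner point for a few values of α, with illustrative channel numbers.

```python
import numpy as np

def C(x): return 0.5 * np.log2(x)   # shorthand (x) = 1/2*log2(x)

def type3_corner(P, N1, N2, N3, a):
    return (C(1 + a * P / N1),
            C(1 + a * P / N2),
            C(1 + P / (2 * a * P + N3)))

for a in (0.01, 0.1, 0.5, 1.0):
    print(a, type3_corner(100.0, 1.0, 2.0, 4.0, a))
```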
§.§ The Gap
Starting from ℛ_o from Table <ref>, we can express the two-dimensional outer bound region at R_3 as
R_1 ≤min{(1+2P/N_1)-R_3,C_1}
≤min{(P/N_1·7/3)-R_3,(P/N_1·4/3)}
R_2 ≤min{(1+2P/N_2)-R_3,C_2}
≤min{(P/N_2·7/3)-R_3,(P/N_2·4/3)}.
Depending on the bottleneck of min{·,·} expressions, there are two cases:
R_3≤(7/4) and R_3≥(7/4).
We assume that R_3≥(7/4), equivalently α≤4/7. We also assume that R_3≤(P/N_3), equivalently α P≥ N_3. The other cases are trivial.
The two-dimensional outer bound region at R_3=(P/α P) is
R_1 ≤min{(α P/N_1·7/3),(P/N_1·4/3)}
R_2 ≤min{(α P/N_2·7/3),(P/N_2·4/3)}.
For α≤4/7, the two-dimensional outer bound region is
R_1 ≤(α P/N_1·7/3), R_2 ≤(α P/N_2·7/3).
For α P≥ N_3, the two-dimensional achievable rate region at R_3=(P/3α P) is
R_1 ≤(α P/N_1), R_2 ≤(α P/N_2).
By comparing the inner and outer bounds, we can see that δ_1≤(7/3)<0.62, δ_2 ≤(7/3)<0.62, and δ_3 ≤(3)<0.8. We can conclude that the inner and outer bounds are to within one bit.
§ INNER BOUND: CHANNEL TYPE 4
The relaxed outer bound region ℛ_o' is given by
R_k ≤(P/N_k)+(4/3), k=1,2,3
R_1+R_2 ≤(P/N_1)+(7/3)
R_1+R_3 ≤(P/N_1)+(7/3)
R_2+R_3 ≤(P/N_2)+(7/3).
The cross-sectional region at a given R_1 is described by
R_2≤min{(P/N_1·7/3)-R_1,(P/N_2·4/3)}
R_3≤min{(P/N_1·7/3)-R_1,(P/N_3·4/3)}
R_2+R_3 ≤(P/N_2·7/3).
Depending on the bottleneck of min{·,·} expressions, there are three cases:
* R_1≤(N_2/N_1·7/4)
* (N_2/N_1·7/4)≤ R_1≤(N_3/N_1·7/4)
* R_1≥(N_3/N_1·7/4).
In this section, we focus on the third case. The other cases can be proved similarly.
If the sum of the righthand sides of R_2 and R_3 bounds is smaller than the righthand side of R_2+R_3 bound, i.e.,
log(P/N_1·7/3)-2R_1≤(P/N_2·7/3),
then the R_2+R_3 bound is not active at that R_1.
This condition can be expressed as a threshold on R_1 given by
R_1 > R_1,th= 1/2log(P/N_1·7/3)-1/4log(P/N_2·7/3)
= 1/4log(P/N_1·7/3)+1/4log(N_2/N_1).
For this relatively large R_1, the cross-sectional region is a rectangle as described in Fig. <ref> (a). In contrast, for a relatively small R_1, when the threshold condition does not hold, the cross-sectional region is a MAC-like region as described in Fig. <ref> (b). In the rest of the section, we present achievable schemes for each case.
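A small numerical illustration of the threshold may help; the sketch below evaluates R_1,th and classifies the cross-section for two sample rates (all numbers illustrative).

```python
import numpy as np

# Threshold R_1,th = 1/4*log2(P/N1 * 7/3) + 1/4*log2(N2/N1); values illustrative.
P, N1, N2 = 1000.0, 1.0, 2.0
R1_th = 0.25 * np.log2(P / N1 * 7 / 3) + 0.25 * np.log2(N2 / N1)
print(f"R_1,th = {R1_th:.2f} bits")
for R1 in (1.0, R1_th + 0.5):
    shape = "rectangle" if R1 > R1_th else "MAC-like"
    print(f"R_1 = {R1:.2f} bits -> cross-section is {shape}")
```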
§.§ Achievable Scheme for Relatively Large R_1
Given α=(α_0,α_1,α_2) ∈ [0,1]^3, the region ℛ_α is defined by
R_1 ≤min{^+(c_11+(1-α_0-α_1-α_2)P/(α_0+α_1+2α_2)P+N_2),.
.(1+α_2 P/α_0 P+N_1)}
+(1+α_1 P/(α_0+α_2) P+N_2)
+(1+α_0 P/N_1)
R_2 ≤(1+α_2 P/α_0 P+N_2)
R_3 ≤^+(c_3+P/(α_0+α_1+α_2) P+N_3)
where c_11=1-α_0-α_1-α_2/2-α_0-α_1-α_2 and c_3=1/2-α_0-α_1-α_2,
and ℛ=conv(⋃_αℛ_α) is achievable.
We present an achievable scheme for the case of R_1 > R_1,th. Message M_1∈{1,2,…,2^nR_1} is split into three parts: M_10∈{1,2,…,2^nR_10}, M_11∈{1,2,…,2^nR_11} and M_12∈{1,2,…,2^nR_12}, so R_1=R_10+R_11+R_12. We generate the signals in the following way: 𝐱_11 and 𝐱_11' are differently coded signals of M_11, and 𝐱_10 and 𝐱_12 are coded signals of M_10 and M_12, respectively. The transmit signal is the sum
𝐱_1=𝐱_10+𝐱_11+𝐱_12+𝐱_11'.
The power allocation satisfies 𝔼[𝐱_10^2]=α_0 nP, 𝔼[𝐱_11^2]=α_2 nP, 𝔼[𝐱_12^2]=α_1 nP, and 𝔼[𝐱_11'^2]=(1-α_0-α_1-α_2) nP.
The transmit signals 𝐱_2 and 𝐱_3 are coded signals of the messages M_2∈{1,2,…,2^nR_2} and M_3∈{1,2,…,2^nR_3}, satisfying 𝔼[𝐱_2^2]=α_2 nP and 𝔼[𝐱_3^2]=nP.
The signals 𝐱_11' and 𝐱_3 are lattice-coded signals using the same coding lattice but different shaping lattices. As a result, the sum 𝐱_11'+𝐱_3 is a dithered lattice codeword.
The received signals are
𝐲_1=[𝐱_11'+𝐱_3]+𝐱_12+𝐱_11+𝐱_10+𝐳_1
𝐲_2=𝐱_11'+𝐱_12+𝐱_11+𝐱_2+𝐱_10+𝐳_2
𝐲_3=𝐱_3+𝐱_2+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (a).
Decoding is performed in the following way.
* At receiver 1, [𝐱_11'+𝐱_3] is first decoded while treating other signals as noise and removed from 𝐲_1. Next, 𝐱_12, 𝐱_11, and 𝐱_10 are decoded successively. For reliable decoding, the code rates should satisfy
R_11≤ T_11' =(c_11+(1-α_0-α_1-α_2)P/(α_0+α_1+α_2)P+N_1)
R_3 ≤ T_3' =(c_3+P/(α_0+α_1+α_2)P+N_1)
R_12≤ T_12' =(1+α_1 P/(α_0+α_2) P+N_1)
R_11≤ T_11” =(1+α_2 P/α_0 P+N_1)
R_10≤ T_10 =(1+α_0 P/N_1)
where c_11=(1-α_0-α_1-α_2)P/(1-α_0-α_1-α_2)P+P=1-α_0-α_1-α_2/2-α_0-α_1-α_2 and c_3=P/(1-α_0-α_1-α_2)P+P=1/2-α_0-α_1-α_2. Note that 0≤ c_11≤1/2, c_11+c_3=1, and 1/2≤ c_3≤ 1.
* At receiver 2, 𝐱_11' is first decoded while treating other signals as noise. Having successfully recovered M_11, receiver 2 can generate 𝐱_11 and 𝐱_11', and cancel them from 𝐲_2. Next, 𝐱_12 is decoded from 𝐱_12+𝐱_2+𝐱_10+𝐳_2. Finally, 𝐱_2 is decoded from 𝐱_2+𝐱_10+𝐳_2. For reliable decoding, the code rates should satisfy
R_11≤ T_11”' =(1+(1-α_0-α_1-α_2)P/(α_0+α_1+2α_2)P+N_2)
R_12≤ T_12” =(1+α_1 P/(α_0+α_2)P+N_2)
R_2 ≤ T_2 =(1+α_2 P/α_0 P+N_2).
* At receiver 3, 𝐱_3 is decoded while treating 𝐱_2+𝐳_3 as noise. Reliable decoding is possible if
R_3 ≤ T_3” =(1+P/α_2 P+N_3).
Putting together, we can see that given α_0,α_1,α_2∈[0,1], the following rate region is achievable.
R_1 ≤ T_1=min{T_11',T_11”,T_11”'}+min{T_12',T_12”}+T_10
R_2 ≤ T_2
R_3 ≤ T_3=min{T_3',T_3”}
where
T_1 = min{T_11',T_11”,T_11”'}+min{T_12',T_12”}+T_10
= min{min{T_11',T_11”'},T_11”}+T_12”+T_10
≥min{(c_11+(1-α_0-α_1-α_2)P/(α_0+α_1+2α_2)P+N_2),.
.(1+α_2 P/α_0 P+N_1)}
+(1+α_1 P/(α_0+α_2) P+N_2)
+(1+α_0 P/N_1)
T_2 =(1+α_2 P/α_0 P+N_2)
T_3 ≥(c_3+P/(α_0+α_1+α_2) P+N_3).
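The lower bounds on (T_1,T_2,T_3) above are easy to evaluate. The sketch below implements them directly, with log^+ clipped at zero; the parameter choices mimic the conditions used in the gap analysis that follows (α_0P=N_2, α_1≥3(α_0+α_2), α_2P≥3N_3) but are otherwise arbitrary.

```python
import numpy as np

def C(x):  return 0.5 * np.log2(x)        # shorthand (x) = 1/2*log2(x)
def Cp(x): return max(C(x), 0.0)          # log^+ version

def type4_large_R1(P, N1, N2, N3, a0, a1, a2):
    s = a0 + a1 + a2
    c11, c3 = (1 - s) / (2 - s), 1 / (2 - s)
    T1 = (min(Cp(c11 + (1 - s) * P / ((a0 + a1 + 2 * a2) * P + N2)),
              C(1 + a2 * P / (a0 * P + N1)))
          + C(1 + a1 * P / ((a0 + a2) * P + N2))
          + C(1 + a0 * P / N1))
    T2 = C(1 + a2 * P / (a0 * P + N2))
    T3 = Cp(c3 + P / (s * P + N3))
    return T1, T2, T3

# alpha_0*P = N2, alpha_2*P >= 3*N3, alpha_1 >= 3*(alpha_0+alpha_2): illustrative.
print(type4_large_R1(P=1000.0, N1=1.0, N2=2.0, N3=4.0, a0=0.002, a1=0.3, a2=0.05))
```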
§.§ The Gap for Relatively Large R_1
We choose α_0, α_1 and α_2 such that α_1≤3/8, that α_1≥ 3(α_0+α_2), that α_2 P≥ 3N_3, and that α_0 P=N_2. It follows that α_0+α_1+α_2≤4/3α_1≤1/2, that c_11≥1/3, and that (α_0+α_1+2α_2)P+N_2 = 2(α_0+α_2)P+α_1 P≤5/3α_1 P. We get the lower bounds for each term of the T_1 expression above.
min{T_11',T_11”'}
≥(c_11+(1-α_0-α_1-α_2)P/(α_0+α_1+2α_2)P+N_2)
≥(1/3+(1-(4/3)α_1)P/(5/3)α_1 P)
= (P/(5/3)α_1 P-7/15)
= (P/(5/3)α_1 P)+(1-7/15·5/3α_1)
≥(P/(5/3)α_1 P)+(17/24)
≥(P/α_1 P·17/40)
and
T_11”=(1+α_2 P/α_0 P+N_1)
= ((α_0+α_2) P+N_1/α_0 P+N_1)
≥((α_0+α_2) P/α_0 P+N_2)
=((α_0+α_2) P/2 N_2).
Since (α_0+α_2)P≥ N_2+3N_3≥ 4 N_2,
T_12” = (1+α_1 P/(α_0+α_2)P+N_2)
≥(α_1 P/(5/4)(α_0+α_2)P).
Putting together,
T_1 ≥min{(P/α_1 P·17/40),((α_0+α_2) P/2 N_2)}
+(α_1 P/(5/4)(α_0+α_2) P)+(N_2/N_1)
= min{(P/(α_0+α_2) P·N_2/N_1·17/40·4/5),.
.(α_1 P/ N_1·1/2·4/5)}
= min{(P/(α_0+α_2) P·N_2/N_1·17/50),.
.(α_1 P/ N_1·2/5)}.
Given α_1, we choose α_2 that satisfies (P/α_1 P·17/40)=((α_0+α_2) P/2N_2). As a result, we can write T_1≥(α_1 P/ N_1·2/5), and also
T_2 =(1+α_2 P/α_0 P+N_2)
≥((α_0+α_2) P/2 N_2)
=(P/α_1 P·17/40).
Since N_3≤1/3α_2 P≤1/3(α_0+α_2)P ≤1/9α_1 P,
T_3 ≥(c_3+P/(α_0+α_1+α_2) P+N_3)
≥(1/2+P/(4/3)α_1 P+(1/9)α_1 P)
≥(P/(13/9)α_1 P).
The following rate region is achievable.
R_1≤(α_1 P/N_1·2/5)
R_2≤(P/α_1 P·17/40)
R_3≤(P/α_1 P·9/13).
For fixed α_1 and R_1=(α_1 P/N_1·2/5), the two-dimensional rate region, given by
R_2≤(P/α_1 P·17/40), R_3≤(P/α_1 P·9/13)
is achievable.
In comparison, the two-dimensional outer bound region at R_1=(α_1 P/N_1·2/5)+1, given by
R_2 ≤(P/N_1·7/3)-(α_1 P/N_1·2/5)-1
= (P/α_1 P)+(7/3·5/2·1/4)
R_3 ≤(P/N_1·7/3)-(α_1 P/N_1·2/5)-1
= (P/α_1 P)+(7/3·5/2·1/4).
As discussed above, the sum-rate bound on R_2+R_3 is loose for R_1 larger than the threshold, so the rate region is a rectangle.
By comparing the inner and outer bound rate regions, we can see that δ_2< (40/17·7/3·5/2·1/4) < 0.89 and δ_3< (13/9·7/3·5/2·1/4) < 0.54. Therefore, we can conclude that the gap is to within one bit per message.
§.§ Achievable Scheme for Relatively Small R_1
Given α=(α_0,α_1,α_2) ∈ [0,1]^3, the region ℛ_α is defined by
R_1 ≤min{^+(c_11+(1-α_1)P/(α_1+α_2) P+N_2),.
.(1+(α_1-α_0) P/α_0 P+N_1)}+(1+α_0 P/N_1)
R_2 ≤(1+α_2 P/α_0 P+N_2)
R_3 ≤^+(c_3+P/max{α_1,α_2} P+N_3)
where c_11=1-α_1/2-α_1 and c_3=1/2-α_1,
and ℛ=conv(⋃_αℛ_α) is achievable.
For the case of R_1 < R_1,th, we present the following achievable scheme. At transmitter 1, we split M_1 into M_10 and M_11, so R_1=R_10+R_11. The transmit signal is the sum
𝐱_1=𝐱_10+𝐱_11+𝐱_11'.
The power allocation satisfies 𝔼[𝐱_10^2]=α_0 nP, 𝔼[𝐱_11^2]=(α_1-α_0) nP, and 𝔼[𝐱_11'^2]=(1-α_1)nP at transmitter 1, 𝔼[𝐱_2^2]=α_2 nP at transmitter 2, and 𝔼[𝐱_3^2]=nP at transmitter 3.
The signals 𝐱_11' and 𝐱_3 are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum 𝐱_11'+𝐱_3 is a lattice codeword.
The received signals are
𝐲_1 = [𝐱_11'+𝐱_3]+𝐱_11+𝐱_10+𝐳_1
𝐲_2 = 𝐱_11'+𝐱_11+𝐱_2+𝐱_10+𝐳_2
𝐲_3 = 𝐱_3+𝐱_2+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (b).
Decoding is performed in the following way.
* At receiver 1, [𝐱_11'+𝐱_3] is first decoded while treating other signals as noise and removed from 𝐲_1. Next, 𝐱_11 and then 𝐱_10 is decoded successively. For reliable decoding, the code rates should satisfy
R_11≤ T_11' =(c_11+(1-α_1)P/α_1 P+N_1)
R_3 ≤ T_3' =(c_3+P/α_1 P+N_1)
R_11≤ T_11” =(1+(α_1-α_0) P/α_0 P+N_1)
R_10≤ T_10 =(1+α_0 P/N_1)
where c_11=(1-α_1)P/(1-α_1)P+P=1-α_1/2-α_1 and c_3=P/(1-α_1)P+P=1/2-α_1. Note that 0≤ c_11≤1/2, c_11+c_3=1, and 1/2≤ c_3≤ 1.
* At receiver 2, 𝐱_11' is first decoded while treating other signals as noise. Having successfully recovered M_11, receiver 2 can generate 𝐱_11 and 𝐱_11', and cancel them from 𝐲_2. Next, 𝐱_2 is decoded from 𝐱_2+𝐱_10+𝐳_2. Note that 𝐱_10 is not decoded at receiver 2. For reliable decoding, the code rates should satisfy
R_11≤ T_11”' =(1+(1-α_1)P/(α_1+α_2) P+N_2)
R_2 ≤ T_2 =(1+α_2 P/α_0 P+N_2).
* At receiver 3, 𝐱_3 is decoded while treating 𝐱_2+𝐳_3 as noise. Reliable decoding is possible if
R_3 ≤ T_3” =(1+P/α_2 P+N_3).
Putting together, we can see that given α_0,α_1,α_2∈[0,1], the following rate region is achievable.
R_1 ≤ T_1 =min{T_11',T_11”,T_11”'}+T_10
R_2 ≤ T_2
R_3 ≤ T_3 = min{T_3',T_3”}
where
T_1 = min{T_11',T_11”,T_11”'}+T_10
= min{min{T_11',T_11”'},T_11”}+T_10
≥min{(c_11+(1-α_1)P/(α_1+α_2) P+N_2),.
.(1+(α_1-α_0) P/α_0 P+N_1)}+(1+α_0 P/N_1)
T_2 =(1+α_2 P/α_0 P+N_2)
T_3 ≥(c_3+P/max{α_1,α_2} P+N_3).
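The same kind of evaluation applies to the small-R_1 scheme; the sketch below mirrors the previous one using the expressions just derived, with illustrative parameter values chosen to satisfy the conditions of the upcoming gap analysis.

```python
import numpy as np

def C(x):  return 0.5 * np.log2(x)
def Cp(x): return max(C(x), 0.0)

def type4_small_R1(P, N1, N2, N3, a0, a1, a2):
    c11, c3 = (1 - a1) / (2 - a1), 1 / (2 - a1)
    T1 = (min(Cp(c11 + (1 - a1) * P / ((a1 + a2) * P + N2)),
              C(1 + (a1 - a0) * P / (a0 * P + N1)))
          + C(1 + a0 * P / N1))
    T2 = C(1 + a2 * P / (a0 * P + N2))
    T3 = Cp(c3 + P / (max(a1, a2) * P + N3))
    return T1, T2, T3

# alpha_0*P = (4/5)*N2, alpha_1*P >= 3*N2, alpha_2*P >= 3*N3: illustrative values.
print(type4_small_R1(P=1000.0, N1=1.0, N2=2.0, N3=4.0, a0=0.0016, a1=0.05, a2=0.2))
```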
§.§ The Gap for Relatively Small R_1
We choose α_0, α_1, and α_2 such that α_1 ≤α_2≤1/2, that α_1 P≥ 3N_2, that α_2 P≥ 3N_3, and that α_0 P=4/5 N_2. It follows that c_11≥1/3 and that (α_1+α_2)P+N_2≤4/3α_1 P+α_2P≤7/3α_2 P.
min{T_11',T_11”'}
=(c_11+(1-α_1)P/(α_1+α_2)P+N_2)
≥(1/3+(1-α_2)P/(7/3)α_2 P)
=(P/(7/3)α_2 P-2/21)
=(P/(7/3)α_2 P)+(1-2/21·7/3α_2)
≥(P/(7/3)α_2 P)+(8/9)
≥(P/α_2 P·8/21)
and
T_11”=(1+(α_1-α_0) P/α_0 P+N_1)
= (α_1 P+N_1/α_0 P+N_1)
≥(α_1 P/α_0 P+N_2)
=(α_1 P/(9/5)N_2).
Putting together,
T_1 ≥min{(P/α_2 P·8/21),(α_1 P/(9/5) N_2) }
+(N_2/N_1·4/5).
Let us define α_1' by the equality (P/α_1' P·8/21)=(α_1 P/(9/5) N_2). If we choose α_2≤α_1', then (P/α_2 P·8/21)≥(α_1 P/(9/5) N_2), and
T_1 ≥(α_1 P/(9/5)N_2·N_2/N_1·4/5)=(α_1 P/N_1·4/9).
We can see that the following rate region is achievable.
R_1≤(α_1 P/N_1·4/9)
R_2≤(α_2 P/(9/5)N_2)
R_3≤(P/(4/3)α_2 P).
For fixed α_2∈ [α_1,α_1'] and R_1=(α_1 P/N_1·4/9), the two-dimensional rate region ℛ_α, given by
R_2≤(α_2 P/(9/5)N_2)
R_3≤(P/(4/3)α_2 P)
is achievable. The union ⋃_α_2∈ [α_1,α_1']ℛ_α is a MAC-like region, given by
R_2≤(α_1' P/(9/5)N_2)
≤(P/α_1 P·8/21)
R_3≤(P/α_1 P·3/4)
R_2+R_3 ≤(α_2 P/(9/5)N_2·P/(4/3)α_2 P)
≤(P/N_2·15/36).
This region is described in Fig. <ref> (a).
In comparison, the two-dimensional outer bound region at R_1=(α_1 P/N_1·4/9)+1, given by
R_2 ≤(P/N_1·7/3)-(α_1 P/N_1·4/9)-1
= (P/α_1 P)+(7/3·9/4·1/4)
R_3 ≤(P/N_1·7/3)-(α_1 P/N_1·4/9)-1
= (P/α_1 P)+(7/3·9/4·1/4)
R_2+R_3 ≤(P/N_2)+(7/3).
Since δ_2 < (21/8·7/3·9/4·1/4) < 0.90, δ_3 < (4/3·7/3·9/4·1/4) < 0.41 and δ_23 < (36/15·7/3) < 1.25 < √(2), we can conclude that the gap is to within one bit per message.
§ INNER BOUND: CHANNEL TYPE 5
Let us consider the relaxed outer bound region ℛ_o' given by
R_k ≤(P/N_k)+(4/3), k=1,2,3
R_1+R_2 ≤(P/N_1)+(7/3)
R_2+R_3 ≤(P/N_2)+(7/3)
R_1+R_3 ≤(P/N_1)+(7/3).
The cross-sectional region at a given R_2 is described by
R_1≤min{(P/N_1·7/3)-R_2,(P/N_1·4/3)}
R_3≤min{(P/N_2·7/3)-R_2,(P/N_3·4/3)}
R_1+R_3 ≤(P/N_1·7/3).
Depending on the bottleneck of min{·,·} expressions, there are three cases:
* R_2≤(7/4)
* (7/4)≤ R_2≤(N_3/N_2·7/4)
* R_2≥(N_3/N_2·7/4).
In this section, we focus on the third case. The other cases can be proved similarly.
If the sum of the righthand sides of R_1 and R_3 bounds is smaller than the righthand side of R_1+R_3 bound, i.e.,
(P/N_1·7/3)+(P/N_2·7/3)-2R_2 ≤(P/N_1·7/3),
then the R_1+R_3 bound is not active at that R_2.
By rearranging, the threshold condition is given by
R_2 > R_2,th=1/4log(P/N_2·7/3).
Note that R_2,th is roughly half of C_2.
For this relatively large R_2, the cross-sectional region is a rectangle as described in Fig. <ref> (a). In contrast, for a relatively small R_2, when the threshold condition does not hold, the cross-sectional region is a MAC-like region as described in Fig. <ref> (b). In the following subsections, we present achievable schemes for each case.
§.§ Achievable Scheme for Relatively Large R_2
Given α=(α_1,α_2,α_2') ∈ [0,1]^3, the region ℛ_α is defined by
R_1 ≤(1+α_1 P/N_1)
R_2 ≤min{^+(c_21+(1-α_2-α_2')P/(α_1+α_2+α_2')P+N_2),.
.(1+α_2' P/N_2)}+(1+α_2 P/α_2' P+N_2)
R_3 ≤^+(c_3+P/max{α_1,α_2+α_2'} P+N_3)
where c_21=1-α_2-α_2'/2-α_2-α_2' and c_3=1/2-α_2-α_2',
and ℛ=conv(⋃_αℛ_α) is achievable.
We present an achievable scheme for the case of R_2 > R_2,th. Message M_2∈{1,2,…,2^nR_2} for receiver 2 is split into two parts: M_21∈{1,2,…,2^nR_21} and M_22∈{1,2,…,2^nR_22}, so R_2=R_21+R_22. We generate the signals in the following way: 𝐱_21 and 𝐱_21' are differently coded signals of M_21, and 𝐱_22 is a coded signal of M_22. The transmit signal is the sum
𝐱_2=𝐱_21+𝐱_22+𝐱_21'.
The power allocation satisfies 𝔼[𝐱_1^2]=α_1 nP at transmitter 1, 𝔼[𝐱_21^2]=α_2' nP, 𝔼[𝐱_22^2]=α_2 nP, and 𝔼[𝐱_21'^2]=(1-α_2-α_2')nP at transmitter 2, and 𝔼[𝐱_3^2]=nP at transmitter 3.
The signals 𝐱_21' and 𝐱_3 are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum 𝐱_21'+𝐱_3 is a lattice codeword.
The received signals are
𝐲_1=𝐱_21'+𝐱_22+𝐱_21+𝐱_1+𝐳_1
𝐲_2=[𝐱_21'+𝐱_3]+𝐱_22+𝐱_21+𝐳_2
𝐲_3=𝐱_3+𝐱_1+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (a). Decoding is performed in the following way.
* At receiver 1, 𝐱_21' is first decoded while treating other signals as noise. Having successfully recovered M_21, receiver 1 can generate 𝐱_21 and 𝐱_21', and cancel them from 𝐲_1. Next, 𝐱_22 is decoded from 𝐱_22+𝐱_1+𝐳_1. Finally, 𝐱_1 is decoded from 𝐱_1+𝐳_1. For reliable decoding, the code rates should satisfy
R_21≤ T_21' =(1+(1-α_2-α_2')P/(α_1+α_2+α_2')P+N_1)
R_22≤ T_22' =(1+α_2 P/α_1 P+N_1)
R_1 ≤ T_1 =(1+α_1 P/N_1).
* At receiver 2, [𝐱_21'+𝐱_3] is first decoded while treating other signals as noise and removed from 𝐲_2. Next, 𝐱_22 and 𝐱_21 are decoded successively. For reliable decoding, the code rates should satisfy
R_21≤ T_21” =(c_21+(1-α_2-α_2')P/(α_2+α_2')P+N_2)
R_3 ≤ T_3' =(c_3+P/(α_2+α_2')P+N_2)
R_22≤ T_22” =(1+α_2 P/α_2' P+N_2)
R_21≤ T_21”' =(1+α_2' P/N_2)
where c_21=(1-α_2-α_2')P/(1-α_2-α_2')P+P=1-α_2-α_2'/2-α_2-α_2' and c_3=P/(1-α_2-α_2')P+P=1/2-α_2-α_2'. Note that 0≤ c_21≤1/2, c_21+c_3=1, and 1/2≤ c_3≤ 1.
* At receiver 3, 𝐱_3 is decoded while treating 𝐱_1+𝐳_3 as noise. Reliable decoding is possible if
R_3 ≤ T_3” =(1+P/α_1 P+N_3).
Putting together, we can see that given α_1,α_2,α_2'∈[0,1], the following rate region is achievable.
R_1 ≤ T_1
R_2 ≤ T_2 =min{T_21',T_21”,T_21”'}+min{T_22',T_22”}
R_3 ≤ T_3=min{T_3',T_3”}
where
T_1 =(1+α_1 P/N_1)
T_2 = min{T_21',T_21”,T_21”'}+T_22”
= min{min{T_21',T_21”},T_21”'}+T_22”
≥min{(c_21+(1-α_2-α_2')P/(α_1+α_2+α_2')P+N_2),.
.(1+α_2' P/N_2)}+(1+α_2 P/α_2' P+N_2)
T_3 ≥(c_3+P/max{α_1,α_2+α_2'} P+N_3).
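The min structure of T_2 determines which decoding constraint binds. The sketch below evaluates both entries of the min as the split α_2' varies, keeping the other parameters fixed; all values are illustrative.

```python
import numpy as np

def C(x):  return 0.5 * np.log2(x)
def Cp(x): return max(C(x), 0.0)

P, N1, N2, N3 = 1000.0, 1.0, 2.0, 4.0
a1, a2 = 0.02, 0.2                      # fixed splits (illustrative)
for a2p in (0.005, 0.02, 0.08):         # vary alpha_2'
    c21 = (1 - a2 - a2p) / (2 - a2 - a2p)
    T21a = C(1 + (1 - a2 - a2p) * P / ((a1 + a2 + a2p) * P + N1))   # T_21'
    T21b = Cp(c21 + (1 - a2 - a2p) * P / ((a2 + a2p) * P + N2))     # T_21''
    T21c = C(1 + a2p * P / N2)                                      # T_21'''
    T22b = C(1 + a2 * P / (a2p * P + N2))                           # T_22''
    e1, e2 = min(T21a, T21b) + T22b, T21c + T22b
    which = "aligned-sum entry" if e1 < e2 else "direct entry"
    print(f"alpha_2' = {a2p}: T_2 = {min(e1, e2):.2f} bits ({which} binds)")
```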
§.§ The Gap for Relatively Large R_2
We choose α_1 and α_2 such that α_1 P≥ N_2, that α_2 P≥ N_3, that α_1=α_2'≤α_2, and that α_1+α_2≤1/2. It follows that c_21≥1/3. We get the lower bounds for each term of the T_2 expression above.
min{T_21',T_21”}
≥(c_21+(1-α_1-α_2)P/(2α_1+α_2)P+N_2)
≥(1/3+(1-α_1-α_2)P/3(α_1+α_2)P)
=(P/3(α_1+α_2)P).
The first entry of min{·,·} in
T_2 = min{min{T_21',T_21”}+T_22”,T_21”'+T_22”}
is lower bounded as follows.
min{T_21',T_21”}+T_22”
≥(P/3(α_1+α_2)P)+((α_1+α_2) P+N_2/α_1 P+N_2)
= (P/α_1 P+N_2·(α_1+α_2) P+N_2/3(α_1+α_2)P)
≥(P/3(α_1 P+N_2))
≥(P/6α_1 P).
The second entry of T_2=min{·,·} is lower bounded as follows.
T_21”'+T_22”
=(1+α_1 P/N_2)+(1+α_2 P/α_1 P+N_2)
=(1+(α_1+α_2) P/N_2)
≥(α_2 P/N_2).
Putting together, we get the lower bound
T_2 ≥min{(P/6α_1 P) ,(α_2 P/N_2)}.
Given α_2, we choose α_1 that satisfies (P/6α_1 P)=(α_2 P/N_2). As a result, we can write T_2≥(α_2 P/N_2).
We also have
T_3 ≥(P/(α_1+α_2) P+N_3) ≥(P/3α_2 P).
Putting together, we can see that the following rate region is achievable.
R_1≤(α_1 P/N_1)
R_2≤(α_2 P/N_2)
R_3≤(P/3α_2 P).
For fixed α_2 and R_2=(α_2 P/N_2), the two-dimensional rate region, given by
R_1≤(α_1 P/N_1)
= (P/6α_2 P·N_2/N_1)
R_3≤(P/3α_2 P)
is achievable.
In comparison, the two-dimensional outer bound region at R_2=(α_2 P/N_2)+1 is given by
R_1 ≤(P/N_1·7/3)-(α_2 P/N_2)-1
= (P/α_2 P·N_2/N_1)+(7/3·1/4)
R_3 ≤(P/N_2·7/3)-(α_2 P/N_2)-1
= (P/α_2 P)+(7/3·1/4).
As discussed above, the sum-rate bound on R_1+R_3 is loose for R_2 larger than the threshold, so the rate region is a rectangle.
By comparing the inner and outer bound rate regions, we can see that δ_1< (6·7/3·1/4) < 0.91 and δ_3< (3·7/3·1/4) < 0.41. Therefore, we can conclude that the gap is to within one bit per message.
§.§ Achievable Scheme for Relatively Small R_2
Given α=(α_1,α_2) ∈ [0,1]^2, the region ℛ_α is defined by
R_1 ≤(1+α_1 P/N_1)
R_2 ≤min{^+(c_21+(1-α_2)P/(α_1+α_2) P+N_2),.
.(1+α_2 P/N_2)}
R_3 ≤^+(c_3+P/max{α_1,α_2} P+N_3)
where c_21=1-α_2/2-α_2 and c_3=1/2-α_2,
and ℛ=conv(⋃_αℛ_α) is achievable.
For the case of R_2 < R_2,th, we present the following scheme. At transmitter 2, rate splitting is not necessary. The transmit signal is the sum
𝐱_2=𝐱_21+𝐱_21'
where 𝐱_21 and 𝐱_21' are differently coded versions of the same message M_2∈{1,2,…,2^nR_2}; since there is no rate splitting here, R_21=R_2 in the constraints below.
The power allocation satisfies 𝔼[𝐱_1^2]=α_1 nP at transmitter 1, 𝔼[𝐱_21^2]=α_2 nP and 𝔼[𝐱_21'^2]=(1-α_2)nP at transmitter 2, and 𝔼[𝐱_3^2]=nP at transmitter 3.
The signals 𝐱_21' and 𝐱_3 are lattice codewords using the same coding lattice but different shaping lattices. As a result, the sum 𝐱_21'+𝐱_3 is a lattice codeword.
The received signals are
𝐲_1=𝐱_21'+𝐱_21+𝐱_1+𝐳_1
𝐲_2=[𝐱_21'+𝐱_3]+𝐱_21+𝐳_2
𝐲_3=𝐱_3+𝐱_1+𝐳_3.
The signal scale diagram at each receiver is shown in Fig. <ref> (b).
Decoding is performed in the following way.
* At receiver 1, 𝐱_21'
is first decoded while treating other signals as noise. Having successfully recovered M_2, receiver 1 can generate 𝐱_21 and 𝐱_21', and cancel them from 𝐲_1. Next, 𝐱_1 is decoded from 𝐱_1+𝐳_1. For reliable decoding, the code rates should satisfy
R_21≤ T_21' =(1+(1-α_2)P/(α_1+α_2) P+N_1)
R_1 ≤ T_1 =(1+α_1 P/N_1).
* At receiver 2, [𝐱_21'+𝐱_3] is first decoded while treating other signals as noise and removed from 𝐲_2. Next, 𝐱_21 is decoded from 𝐱_21+𝐳_2. For reliable decoding, the code rates should satisfy
R_21≤ T_21” =(c_21+(1-α_2)P/α_2 P+N_2)
R_3 ≤ T_3' =(c_3+P/α_2 P+N_2)
R_21≤ T_21”' =(1+α_2 P/N_2)
where c_21=(1-α_2)P/(1-α_2)P+P=1-α_2/2-α_2 and c_3=P/(1-α_2)P+P=1/2-α_2. Note that 0≤ c_21≤1/2, c_21+c_3=1, and 1/2≤ c_3≤ 1.
* At receiver 3, 𝐱_3 is decoded while treating 𝐱_1+𝐳_3 as noise. Reliable decoding is possible if
R_3 ≤ T_3” =(1+P/α_1 P+N_3).
Putting together, we get
R_1 ≤ T_1
R_2 ≤ T_2 =min{T_21',T_21”,T_21”'}
R_3 ≤ T_3=min{T_3',T_3”}
where
T_1 =(1+α_1 P/N_1)
T_2 = min{T_21',T_21”,T_21”'}
= min{min{T_21',T_21”}, T_21”'}
≥min{(c_21+(1-α_2)P/(α_1+α_2) P+N_2),.
.(1+α_2 P/N_2)}
T_3 ≥(c_3+P/max{α_1,α_2} P+N_3).
§.§ The Gap for Relatively Small R_2
We choose α_1 and α_2 such that α_1 P≥ N_2, that α_2 P≥ N_3, that α_1+α_2≤1/2, and that α_1≥α_2. It follows that c_21≥1/3. We get the lower bound
min{T_21',T_21”}
= (c_21+(1-α_2)P/(α_1+α_2)P+N_2)
≥(1/3+(1-α_1)P/3α_1 P)
= (P/3α_1 P)
and
T_2 ≥min{(P/3α_1 P), (α_2 P/N_2)}.
Let us define α_2' by the equality (P/3α_2' P)=(α_2 P/N_2). If we choose α_1≤α_2', then T_2 ≥(α_2 P/N_2).
We can see that the following rate region is achievable.
R_1≤(α_1 P/N_1)
R_2≤(α_2 P/N_2)
R_3≤(P/2α_1 P).
For fixed α_1∈ [α_2,α_2'] and R_2=(α_2 P/N_2), the two-dimensional rate region ℛ_α, given by
R_1≤(α_1 P/N_1)
R_3≤(P/2α_1 P)
is achievable. The union ⋃_α_1∈ [α_2,α_2']ℛ_α is a MAC-like region, given by
R_1≤(α_2' P/N_1)
= (P/3α_2 P·N_2/N_1)
R_3≤(P/2α_2 P)
R_1+R_3=(P/2N_1).
In comparison, the two-dimensional outer bound region at R_2=(α_2 P/N_2)+1 is given by
R_1 ≤(P/N_1·7/3)-(α_2 P/N_2)-1
= (P/α_2 P·N_2/N_1)+(7/3·1/4)
R_3 ≤(P/N_2·7/3)-(α_2 P/N_2)-1
= (P/α_2 P)+(7/3·1/4)
R_1+R_3 ≤(P/N_1·7/3).
Since δ_1 < (3·7/3·1/4) < 0.41, δ_3 < (2·7/3·1/4) < 0.12 and δ_13 < (2·7/3) < 1.12 < √(2), we can conclude that the gap is to within one bit per message.
§ CONCLUSION
We presented the approximate capacity region of five representative classes of partially connected interference channels. Outer bounds based on a Z-channel type argument were derived, and achievable schemes were developed and shown to achieve the capacity region to within a constant gap.
For future work, channels with fully general coefficients may be considered. In this paper, we presented a different scheme for each channel type, although the schemes share common principles; a universal scheme remains to be developed for a unified capacity characterization of all possible topologies. The connection between interference channels and index coding problems also deserves further exploration. In particular, the results on the capacity region for index coding in <cit.> seem to have an interesting connection to our work.
§ RANDOM CODING ACHIEVABILITY: CHANNEL TYPE 4
At transmitter 1, message M_1 is split into three parts (M_12,M_11,M_10), and the transmit signal is 𝐱_1=𝐱_12+𝐱_11+𝐱_10. The signals satisfy 𝔼[𝐱_12^2]=n(P-N_2-N_3), 𝔼[𝐱_11^2]=nN_3, and 𝔼[𝐱_10^2]=nN_2.
At transmitter 2, message M_2 is split into two parts (M_21,M_20), and the transmit signal is 𝐱_2=𝐱_21+𝐱_20. The signals satisfy 𝔼[𝐱_21^2]=n(P-N_3) and 𝔼[𝐱_20^2]=nN_3. Rate-splitting is not performed at transmitter 3, and 𝔼[𝐱_3^2]=nP.
The top layer codewords (𝐱_12,𝐱_21,𝐱_3) are from a joint random codebook for (M_12,M_21,M_3). The mid-layer codewords (𝐱_11,𝐱_20) are from a joint random codebook for (M_11,M_20). The bottom layer codeword 𝐱_10 is from a single-user random codebook for M_10.
The received signals are
𝐲_1=(𝐱_12+𝐱_3)+𝐱_11+𝐱_10+𝐳_1
𝐲_2=(𝐱_12+𝐱_21)+(𝐱_11+𝐱_20)+𝐱_10+𝐳_2
𝐲_3=(𝐱_21+𝐱_3)+𝐱_20+𝐳_3
Decoding is performed from the top layer to the bottom layer. At receiver 1, simultaneous decoding of (𝐱_12,𝐱_3) is performed while treating other signals as noise. And then, 𝐱_11 and 𝐱_10 are decoded successively. At receiver 2, simultaneous decoding of (𝐱_12,𝐱_21) is performed while treating other signals as noise. And then, simultaneous decoding of (𝐱_11,𝐱_20) is performed. At receiver 3, simultaneous decoding of (𝐱_21,𝐱_3) is performed while treating other signals as noise. For reliable decoding, code rates should satisfy
R_12≤ I_1=(1+P-N_2-N_3/N_1+N_2+N_3)
R_3 ≤ I_2=(1+P/N_1+N_2+N_3)
R_12+R_3 ≤ I_3=(1+2P-N_2-N_3/N_1+N_2+N_3)
R_11≤ I_4=(1+N_3/N_1+N_2)
R_10≤ I_5=(1+N_2/N_1)
at receiver 1,
R_12≤ I_6=(1+P-N_2-N_3/2N_2+2N_3)
R_21≤ I_7=(1+P-N_3/2N_2+2N_3)
R_12+R_21≤ I_8=(1+2P-N_2-2N_3/2N_2+2N_3)
R_11≤ I_9=(1+N_3/2N_2)
R_20≤ I_10=(1+N_3/2N_2)
R_11+R_20≤ I_11=(1+2N_3/2N_2)
at receiver 2,
R_21≤ I_12=(1+P-N_3/2N_3)
R_3 ≤ I_13=(1+P/2N_3)
R_21+R_3 ≤ I_14=(1+2P-N_3/2N_3)
at receiver 3. Putting together,
R_12≤ T_1=min{I_1,I_6}=I_6
R_21≤ T_2=min{I_7,I_12}=I_7
R_3 ≤ T_3=min{I_2,I_13}
R_12+R_21≤ T_4=I_8
R_12+R_3 ≤ T_5=I_3
R_21+R_3 ≤ T_6=I_14
at the top layer,
R_11≤ T_7=min{I_4,I_9}=I_9
R_20≤ T_8=I_10
R_11+R_20≤ T_9=I_11
at the mid-layer,
R_10≤ T_10=I_5
at the bottom layer. Note that the rate variables are not coupled between layers.
We get the achievable rate region
R_1=R_12+R_11+R_10≤ T_1+T_7+T_10
R_2=R_21+R_20≤ T_2+T_8
R_3≤ T_3
R_1+R_2≤ T_4+T_9+T_10
R_1+R_3≤ T_5+T_7+T_10
R_2+R_3≤ T_6+T_8.
This region includes the following region.
R_1≤(2+P/N_1)-1
R_2≤(3+P/N_2)-1
R_3≤(3+P/N_3)-(3)
R_1+R_2≤(1+2P/N_1)-1/2
R_1+R_3≤(1+2P/N_1)-1
R_2+R_3≤(1+2P/N_2)-1.
Therefore, we can conclude the capacity region to within one bit.
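The six region-inclusion claims above can be spot-checked numerically over random channel parameters; the sketch below does so under the stated ordering N_1≤N_2≤N_3 and powers large enough for the power split (P≥N_2+N_3). A random sweep is of course only a sanity check, not a proof.

```python
import numpy as np

def C(x): return 0.5 * np.log2(x)   # shorthand (x) = 1/2*log2(x)

def claims_hold(P, N1, N2, N3, tol=1e-9):
    I1  = C(1 + (P - N2 - N3) / (N1 + N2 + N3))
    I2  = C(1 + P / (N1 + N2 + N3))
    I3  = C(1 + (2 * P - N2 - N3) / (N1 + N2 + N3))
    I4  = C(1 + N3 / (N1 + N2))
    I5  = C(1 + N2 / N1)
    I6  = C(1 + (P - N2 - N3) / (2 * N2 + 2 * N3))
    I7  = C(1 + (P - N3) / (2 * N2 + 2 * N3))
    I8  = C(1 + (2 * P - N2 - 2 * N3) / (2 * N2 + 2 * N3))
    I9  = I10 = C(1 + N3 / (2 * N2))
    I11 = C(1 + 2 * N3 / (2 * N2))
    I12 = C(1 + (P - N3) / (2 * N3))
    I13 = C(1 + P / (2 * N3))
    I14 = C(1 + (2 * P - N3) / (2 * N3))
    T1, T2, T3 = min(I1, I6), min(I7, I12), min(I2, I13)
    T4, T5, T6 = I8, I3, I14
    T7, T8, T9, T10 = min(I4, I9), I10, I11, I5
    pairs = [(T1 + T7 + T10, C(2 + P / N1) - 1),
             (T2 + T8,       C(3 + P / N2) - 1),
             (T3,            C(3 + P / N3) - C(3)),
             (T4 + T9 + T10, C(1 + 2 * P / N1) - 0.5),
             (T5 + T7 + T10, C(1 + 2 * P / N1) - 1),
             (T6 + T8,       C(1 + 2 * P / N2) - 1)]
    return all(lhs >= rhs - tol for lhs, rhs in pairs)

rng = np.random.default_rng(0)
for _ in range(1000):
    N1, N2, N3 = np.sort(rng.uniform(0.1, 10.0, size=3))
    P = rng.uniform(N2 + N3, 1000.0)
    assert claims_hold(P, N1, N2, N3)
print("all six region-inclusion claims held on 1000 random channels")
```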
§ RANDOM CODING ACHIEVABILITY: CHANNEL TYPE 5
Transmit signal construction is the same as the one for channel type 4.
The received signals are
𝐲_1=(𝐱_12+𝐱_21)+(𝐱_11+𝐱_20)+𝐱_10+𝐳_1
𝐲_2=(𝐱_21+𝐱_3)+𝐱_20+𝐳_2
𝐲_3=(𝐱_12+𝐱_3)+𝐱_11+𝐱_10+𝐳_3
Decoding is performed from the top layer to the bottom layer. At receiver 1, simultaneous decoding of (𝐱_12,𝐱_21) is performed while treating other signals as noise. And then, simultaneous decoding of 𝐱_11 and 𝐱_20 is performed. Lastly, 𝐱_10 is decoded. At receiver 2, simultaneous decoding of (𝐱_21,𝐱_3) is performed while treating other signals as noise. And then, 𝐱_20 is decoded. At receiver 3, simultaneous decoding of (𝐱_12,𝐱_3) is performed while treating other signals as noise. And then, 𝐱_11 and 𝐱_10 are decoded successively. For reliable decoding, code rates should satisfy
R_12≤ I_1=(1+P-N_2-N_3/N_1+N_2+2N_3)
R_21≤ I_2=(1+P-N_3/N_1+N_2+2N_3)
R_12+R_21≤ I_3=(1+2P-N_2-2N_3/N_1+N_2+2N_3)
R_11≤ I_4=(1+N_3/N_1+N_2)
R_20≤ I_5=(1+N_3/N_1+N_2)
R_11+R_20≤ I_6=(1+2N_3/N_1+N_2)
R_10≤ I_7=(1+N_2/N_1)
at receiver 1,
R_21≤ I_8=(1+P-N_3/N_2+N_3)
R_3 ≤ I_9=(1+P/N_2+N_3)
R_21+R_3 ≤ I_10=(1+2P-N_3/N_2+N_3)
R_20≤ I_11=(1+N_3/N_2)
at receiver 2,
R_12≤ I_12=(1+P-N_2-N_3/N_2+2N_3)
R_3 ≤ I_13=(1+P/N_2+2N_3)
R_12+R_3 ≤ I_14=(1+2P-N_2-N_3/N_2+2N_3)
at receiver 3. Putting together,
R_12≤ T_1=min{I_1,I_12}=I_1
R_21≤ T_2=min{I_2,I_8}=I_2
R_3 ≤ T_3=min{I_9,I_13}=I_13
R_12+R_21≤ T_4=I_3
R_12+R_3 ≤ T_5=I_14
R_21+R_3 ≤ T_6=I_10
at the top layer,
R_11≤ T_7=I_4
R_20≤ T_8=min{I_5,I_11}=I_5
R_11+R_20≤ T_9=I_6
at the mid-layer,
R_10≤ T_10=I_7
at the bottom layer. Note that the rate variables are not coupled between layers.
We get the achievable rate region
R_1=R_12+R_11+R_10≤ T_1+T_7+T_10
R_2=R_21+R_20≤ T_2+T_8
R_3≤ T_3
R_1+R_2≤ T_4+T_9+T_10
R_1+R_3≤ T_5+T_7+T_10
R_2+R_3≤ T_6+T_8.
This region includes the following region.
R_1≤(2+P/N_1)-1/2
R_2≤(2+P/N_2)-1
R_3≤(3+P/N_3)-(3)
R_1+R_2≤(1+2P/N_1)
R_1+R_3≤(1+2P/N_1)-1/2
R_2+R_3≤(1+2P/N_2)-1/2.
Therefore, we can conclude the capacity region to within one bit.
§ REFERENCES
EtkinTseWang08
R. Etkin, D. Tse and H. Wang, “Gaussian interference channel
capacity to within one bit,” IEEE Trans. Inf. Theory, vol. 54, no. 12, pp. 5534–5562, Dec. 2008.
SridharanVishwanathJafar08
S. Sridharan, S. Vishwanath, and S. A. Jafar, “Capacity of the
symmetric K-user Gaussian very strong interference channel,”
Proc. IEEE Global Telecommun. Conf., vol. 56, Dec. 2008.
SridharanJafarianVishwanathJafarShamai08
S. Sridharan, A. Jafarian, S. Vishwanath, S. Jafar, and
S. Shamai, “A layered lattice coding scheme for a class of
three user Gaussian interference channel,” 46th Annual Allerton Conference on Communication, Control, and Computing, pp. 531–538, 2008.
BreslerParekhTse10
G. Bresler, A. Parekh, and D. N. C. Tse, “The approximate capacity
of the many-to-one and one-to-many Gaussian interference channels,”
IEEE Trans. Inf. Theory, vol. 56, no. 9, pp. 4566–4592, Sep. 2010.
JafarVishwanath10
S. A. Jafar and S. Vishwanath, “Generalized degrees of freedom of the symmetric Gaussian K-user interference channel,” IEEE Trans. Inf. Theory, vol. 56, no. 7, pp. 3297–3303, Jul. 2010.
ZhouYu13
L. Zhou and W. Yu, “On the capacity of the K-user cyclic Gaussian
interference channel,” IEEE Trans. Inf. Theory, vol. 59, no. 1,
pp. 154–165, Jan. 2013.
OrdentlichErezNazer14
O. Ordentlich, U. Erez, and B. Nazer, “The approximate sum capacity of the symmetric Gaussian K-user interference channel,” IEEE Trans. Inf. Theory, vol. 60, no. 6, pp. 3450–3482, Jun. 2014.
Jafar2014
S. A. Jafar, “Topological interference management through index coding,” IEEE Trans. Inf. Theory, vol. 60, no. 1, pp.
529–568, Jan. 2014.
HuangCadambeJafar12
C. Huang, V. R. Cadambe, and S. A. Jafar, “Interference alignment and
the generalized degrees of freedom of the X channel,” IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5130–5150, Aug. 2012.
NiesenMaddahAli13
U. Niesen and M. A. Maddah-Ali, “Interference alignment: From degrees-of-freedom to constant-gap capacity approximations,” IEEE Trans. Inf. Theory, vol. 59, no. 8, pp. 4855–4888, Apr. 2013.
MotahariGharanMaddahAliKhandani14
A. S. Motahari, S. O.-Gharan, M.-A. Maddah-Ali, and A. K. Khandani, “Real interference alignment: Exploiting the potential of single antenna systems,” IEEE Trans. Inf. Theory, vol. 60, no. 8, pp. 4799–4810, Aug. 2014.
ErezZamir04
U. Erez and R. Zamir, “Achieving 1/2 log(1+SNR) on the AWGN channel
with lattice encoding and decoding,” IEEE Trans. Inf.
Theory, vol. 50, no. 10, pp. 2293–2314, Oct. 2004.
WilsonNarayananPfisterSprintson10
M. P. Wilson, K. Narayanan, H. Pfister, and A. Sprintson, “Joint physical
layer coding and network coding for bidirectional relaying,” IEEE Trans.
Inf. Theory, vol. 56, no. 11, pp. 5641–5654, Nov. 2010.
NamChungLee10
W. Nam, S.-Y. Chung, and Y. H. Lee, “Capacity of the Gaussian
two-way relay channel to within 1/2 bit,” IEEE Trans. Inf. Theory,
vol. 56, no. 11, pp. 5488–5494, Nov. 2010.
NamChungLee11
W. Nam, S.-Y. Chung, and Y. H. Lee, “Nested lattice codes for Gaussian relay networks with interference,” IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 7733–7745, Dec. 2011.
NazerGastpar11
B. Nazer and M. Gastpar, “Compute-and-forward: Harnessing interference through structured codes,” IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 6463–6486, Oct. 2011.
GastparNazer11
M. Gastpar and B. Nazer, “Algebraic structure in network information theory,” IEEE ISIT Tutorial, 2011.
BirkKol98
Y. Birk and T. Kol, “Informed-source coding-on-demand (ISCOD) over
broadcast channels,” in Proc. IEEE INFOCOM, vol. 13, pp. 1257–1264, 1998.
Bar-YossefBirkJayramKol06
Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, “Index coding with
side information,” in Proc. 47th IEEE Ann. Symp. Found. Comput. Sci.
(FOCS), 2006, pp. 197–-206.
Bar-YossefBirkJayramKol11
Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, “Index coding with
side information,” IEEE Trans. Inf. Theory, vol. 57, no. 3, pp.
1479–1494, Mar. 2011.
AlonLubetzkyStavWeinsteinHassidim08
N. Alon, E. Lubetzky, U. Stav, A. Weinstein, and A. Hassidim, “Broadcasting
with side information,” in Proc. 49th IEEE Ann. Symp. Found. Comput. Sci.
(FOCS), 2008, pp. 823–832.
MalekiCadambeJafar2012
H. Maleki, V. Cadambe, and S. A. Jafar, “Index coding–An interference
alignment perspective,” in Proc. IEEE Int. Symp. Inf. Theory, 2012, pp.
2236–2240.
ArbabjolfaeiBandemerKimSasogluWang13
F. Arbabjolfaei, B. Bandemer, Y. -H. Kim, E. Sasoglu, and
L. Wang, “On the capacity region for index coding,” in Proc.
IEEE Int. Symp. Inf. Theory, Istanbul, Turkey, Jul. 2013, pp. 962–966.
Ong14
L. Ong, “Linear codes are optimal for index-coding
instances with five or fewer receivers,” in Proc. IEEE Int. Symp. Inf. Theory, 2014, pp.
491–495.
EffrosElRouayhebLangberg15
M. Effros, S. El Rouayheb, M. Langberg, “An equivalence between network coding and index coding,”
IEEE Trans. Inf. Theory, vol. 61, no. 5, pp.
2478–2487, May 2015.
ElGamalKim11
A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge Univ. Press, 2011.
Zamir14
R. Zamir, Lattice Coding for Signals and Networks. Cambridge Univ.
Press, 2014.