entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---
http://arxiv.org/abs/1701.07763v1 | 20170126162544 | Necessary conditions for the boundedness of linear and bilinear commutators on Banach function spaces | [
"Lucas Chaffee",
"David Cruz-Uribe"
] | math.CA | [
"math.CA",
"42B20, 42B35"
] |
|
http://arxiv.org/abs/1701.08001v1 | 20170127103735 | Non-Amontons-Coulomb local friction law of randomly rough contact interfaces with rubber | [
"D. T. Nguyen",
"E. Wandersman",
"A. Prevost",
"Y. Le Chenadec",
"C. Fretigny",
"A. Chateauminois"
] | cond-mat.soft | [
"cond-mat.soft"
] |
[][email protected]
1. Soft Matter Science and Engineering Laboratory (SIMM), UMR CNRS 7615, École Supérieure de Physique et Chimie Industrielles (ESPCI), Université Pierre et Marie Curie (UPMC), Paris, France
2. CNRS / UPMC Univ Paris 06, FRE 3231, Laboratoire Jean Perrin, F-75005, Paris, France
3. Manufacture Française des Pneumatiques Michelin, 63040 Clermont-Ferrand Cedex 9, France
We report on measurements of the local friction law at a multi-contact interface formed between a smooth rubber and statistically rough glass lenses under steady state friction. Using contact imaging, surface displacements are measured and inverted to extract the distributions of both frictional shear stress and contact pressure with a spatial resolution of about 10 μm. For a glass surface whose topography is self-affine with a Gaussian asperity height distribution, the local frictional shear stress is found to vary strongly sub-linearly with the local contact pressure over the whole investigated pressure range. Such sub-linear behavior is also evidenced for a surface with a non-Gaussian asperity height distribution, demonstrating that, for such multi-contact interfaces, Amontons-Coulomb's friction law does not prevail at the local scale.
46.50.+d Tribology and Mechanical contacts;
62.20.Qp Friction, Tribology and Hardness
Non-Amontons-Coulomb local friction law of randomly rough contact interfaces with rubber
Antoine Chateauminois^1
December 30, 2023
§ INTRODUCTION
Friction is one of the long-standing problems in physics which still remains partially unsolved. Similarly to adhesive contact problems, friction couples the mechanical properties of the materials in contact with the roughness and the physicochemical characteristics of their surfaces. To incorporate such intricate effects in a description of friction, one needs to postulate a local constitutive law indicating how shear stresses depend on normal stresses at the interface. For macroscopic contacts, Bowden and Tabor <cit.> and later Greenwood and Williamson <cit.> were the first to recognize the crucial contribution of surface roughness in the derivation of such constitutive laws. Their approach to friction is based on the observation that, due to the distribution of asperity heights on the surface, the contact between two macroscopic solids is usually made up of a myriad of micro-contacts. The real area of contact is thus much smaller than the macroscopic apparent one. As a result, friction of multi-contact interfaces combines multiple length scales. At the scale of a single asperity, frictional energy dissipation involves poorly understood physicochemical processes occurring at the intimate contact between surfaces, such as adsorption or entanglement/disentanglement mechanisms <cit.>, as well as viscoelastic or plastic deformation of the asperities <cit.>. At the macroscopic scale, i.e. the size of the contact, friction involves the collective contact mechanics of a statistical set of asperities whose sizes are often distributed over orders of magnitude. Several models have been proposed for evaluating the area of real contact and its dependence on the normal load, often based on a spectral description of the surface roughness <cit.>. One of the key issues of these models is to incorporate in a realistic way the effects of adhesion and of material properties such as plasticity and viscoelasticity on the formation of the actual contact area under sliding conditions.
This concept of real contact area is central to sliding situations where the overall friction force is usually assumed to be the sum of the shear resistance of individual micro-contacts. As a crude assumption, the friction force can be considered as the product of the actual contact area by a constant shear stress which embeds all dissipative mechanisms occurring at the scale of micro-contacts. This idea forms the basis of the Bowden and Tabor model <cit.> which was later enriched to account for rate dependence and aging effects on friction <cit.>. As reviewed in <cit.>, it remains the current framework for the description of solid friction at multi-contact interfaces.
Experimentally, the validation of such models mostly relies on measurements of the friction force and of its dependence on normal load and sliding velocity. Unfortunately, the friction force is an average of local frictional properties, which makes the validation of local friction laws, and, a fortiori, of the proposed models, rather indirect. Knowledge of a local constitutive friction law is however relevant to many contact mechanics models, where friction at contact interfaces is often postulated to obey Amontons-Coulomb's law locally <cit.>. It also remains crucial to our understanding of non-linear friction force fluctuations, such as those exhibited in tactile perception <cit.>.
In this Letter, we take advantage of a previously developed experimental method <cit.> for the determination of shear stress and contact pressure distributions within contacts to address the problem of a frictional interface between a smooth silicone rubber and a rigid randomly rough surface. The approach is based on the measurement of the displacement field at the surface of the rubber substrate which, after inversion, provides the corresponding distributions of both local contact pressure and frictional shear stress within the contact. The method is first applied to a frictional interface with a self-affine fractal roughness and a Gaussian asperity height distribution, allowing us to measure a local friction law at length scales much smaller than the size of the contact. Its relationship with the macroscopic friction law is also discussed. The method is then applied to a non-Gaussian surface, allowing us to probe how the local friction law is affected by a change of topography.
§ EXPERIMENTAL DETAILS
A commercially available transparent Poly(DiMethylSiloxane) silicone (PDMS Sylgard 184, Dow Corning, Midland, MI) is used as the elastomer substrate. In order to monitor contact-induced surface displacements, a square network of small cylindrical holes (diameter 8 μm, depth 11 μm and center-to-center spacing 400 μm) is stamped on the PDMS surface by means of standard soft lithography techniques. Once imaged in transmission with white light, the pattern appears as a network of dark spots which are easily detected using image analysis. Full details regarding the design and fabrication of the PDMS substrates are provided in <cit.>. Their dimensions (15 × 60 × 60 mm^3) ensure that semi-infinite contact conditions are met during friction experiments (i.e. the ratio of the substrate thickness to the contact radius is larger than 10 <cit.>). Before use, the PDMS substrates are thoroughly washed with isopropanol and subsequently dried in a vacuum chamber kept at low pressure.
Millimeter-sized contacts are achieved between the PDMS substrate and plano-convex BK7 glass lenses of radius of curvature 5.2 mm (Melles Griot, France). Their surfaces are rendered microscopically rough by sand blasting (average grain size of 60 μm). The resulting topography has been characterized by AFM measurements over increasingly large regions of interest, from 0.5 × 0.5 μm^2 up to 80 × 80 μm^2. This allowed us to probe the roughness at multiple length scales λ, from 50 μm down to a few nanometers, and to compute the height distribution and the height power spectral density (PSD) C(q) <cit.>, where q=2π/λ is the wave vector. The height distribution is found to be Gaussian with a standard deviation σ= 1.40 ± 0.01 μm (Fig. <ref>, inset), and C(q) follows a power law at all q, characteristic of self-affine fractal surfaces (Fig. <ref>). Fitting C(q) with its expected functional form C(q) ∝ q^-2(H+1) for this type of topography yields a Hurst exponent H=0.74 and a fractal dimension D_f=3-H=2.26. This sand-blasted glass surface is referred to in what follows as the Gaussian surface.
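On a log-log plot the power law C(q) ∝ q^-2(H+1) is a straight line of slope -2(H+1), so H follows from a linear regression; a minimal sketch of this fit (ours, with synthetic data standing in for the measured PSD):

```python
import numpy as np

# Synthetic stand-in for the measured height PSD: a q^(-2(H+1)) power law
# with H = 0.74, degraded by multiplicative scatter mimicking noise.
rng = np.random.default_rng(0)
q = np.logspace(-1, 2, 50)                 # wave vector (arbitrary units)
C = q ** (-2 * (0.74 + 1)) * rng.lognormal(0.0, 0.1, q.size)

slope = np.polyfit(np.log(q), np.log(C), 1)[0]
H = -slope / 2 - 1
print(f"H = {H:.2f}, D_f = {3 - H:.2f}")   # close to 0.74 and 2.26
```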
Depending on the investigated normal load range, friction experiments are performed using two custom-built setups designed respectively for high normal loads P (1 to 17 N) and low P (0.02 to 2 N). Both setups, which are described elsewhere (respectively <cit.> and <cit.>), are operated at constant P and constant sliding velocity. The PDMS substrate is displaced with respect to the fixed glass lens by means of a linear translation stage while the lateral load Q is continuously recorded, either using a load transducer for the high load setup or using a combination of a shear cantilever and a capacitive displacement sensor for the low load setup. For all experiments, smooth friction is achieved with no evidence of stick-slip instabilities or detachment waves <cit.>. Experiments carried out between 0.01 and 10 mm s^-1 did not reveal any strong changes in the frictional behavior; thus, only results obtained at the intermediate velocity of 0.5 mm s^-1 are reported in the present paper[This specific value was chosen as it falls within the accessible velocity ranges of both setups. Indeed, the high load setup can be operated at a maximum driving velocity of 10 mm s^-1, and the low load setup at 1 mm s^-1.]. During steady state friction, images of the deformed contact zone are continuously recorded through the transparent PDMS substrate using a zoom lens and a camera. The system is configured to a frame size of 1024 × 1024 pixels^2 with 8-bit resolution. For each image, the positions of the markers are detected with sub-pixel resolution using a dedicated image processing software. Accumulating data from a set of about 400 successive images at a maximum frame rate of 24 Hz results in a well sampled lateral displacement field with a spatial resolution of ∼ 10 μm, i.e. much finer than the markers' spacing (400 μm). The accuracy in the measurement of the lateral displacements is better than 1 μm.
Surface displacement fields are inverted to extract the corresponding contact stress distribution. As detailed in <cit.>, a three-dimensional Finite Element (FE) inversion procedure has been developed which takes into account the non-linearities arising from the large strains (up to ≈ 0.4) which are often induced at the edges of the contact, in particular at high normal loads. The principle of the approach is to apply the surface displacement field as a boundary condition at the upper surface of a meshed body representing the rubber substrate and to compute the corresponding stress distribution under the assumption of a Neo-Hookean behavior of the PDMS material <cit.>. In addition to the measured lateral displacement field, the vertical displacements of the PDMS surface within the contact area are also used as a boundary condition in order to compute the contact pressure distribution. Vertical displacements are not measured locally within the contact; they are deduced from both the radius of curvature of the glass lens and the measured indentation depth under steady state sliding. In other words, a nominal vertical displacement field is used in the inversion which does not include micrometer-scale variations due to the surface roughness. Such an approach is expected not to affect the pressure field as long as the asperity heights remain small compared to the nominal vertical displacement. This assumption is likely to be valid except very close to the contact edge or at very low applied normal loads.
After the numerical inversion calculation, the local contact pressure and frictional shear stress are determined from a projection of the stress tensor in a local Cartesian coordinate system whose orientation is defined from the normal to the lens surface and from the actual sliding direction. The inversion procedure thus takes into account the contact geometry together with the measured sliding path trajectories.
§ CONTACT PRESSURE AND SHEAR STRESS FIELDS
Figures <ref>a and <ref>b show an example of the contact pressure and shear stress spatial distributions, respectively p(x,y) and τ(x,y), measured in steady sliding with the Gaussian rough surface. In what follows, it should be kept in mind that the reported stress data correspond to values spatially averaged over an area of about 10 μm^2, determined by the spatial resolution of the displacement measurement. Owing to the self-affine fractal nature of the investigated rough surface, there are still many asperities in contact at this scale. Measured values of the frictional shear stress thus represent a statistical average which encompasses all roughness length scales up to about 10 μm. In Fig. <ref>b, the frictional shear stress distribution shows a shape similar to that of the contact pressure, with a maximum at the centre of the contact (Fig. <ref>a). This correlation is further evidenced in Figs. <ref>c and <ref>d, where sections of the shear stress and contact pressure fields taken across the contact area, perpendicular to the sliding direction, are reported for increasing normal loads. Contact pressure profiles show a bell-shaped, Hertz-like distribution which is expected from the prescribed spherical distribution of vertical displacements within the contact area. However, the measured pressure distribution takes into account the non-linearities arising from finite strain together with the mechanical coupling between normal and lateral stresses, as previously reported <cit.>. Similar frictional shear stress profiles are also obtained for increasing P, but with some evidence of a saturation at high contact pressure. Such a dependence of the frictional shear stress on the applied contact pressure reflects the multi-contact nature of the interface. As the local contact pressure is increased, the number of micro-contacts grows, thus enhancing the local frictional shear stress. As mentioned in previous studies <cit.>, such a pressure dependence is not observed within frictional contacts between PDMS and a smooth glass lens, where intimate contact is achieved.
At low normal loads (P ≤ 0.5 N), stress fluctuations are clearly present in the shear stress profiles (Fig. <ref>d). Looking at 2D spatial maps of the stress fields for these loads (Fig. <ref>) reveals that these fluctuations are distributed spatially over length scales of the order of a few tens of micrometers. A close examination of the shear stress fields measured for three different P actually shows that features of the stress field at a given location within the contact remain at the same location when P is increased. The observed variations of the shear stress at small P thus likely reflect local changes in the contact stress distribution which are induced by the details of the topography of the rough lens at these length scales. This result demonstrates the ability of the displacement field measurements and of the inversion procedure to probe spatial fluctuations in the shear stress distribution down to a few tens of micrometers. At higher P, the spatial stress variations are blurred out, most likely as a result of an increasingly intimate contact between the surfaces.
§ LOCAL FRICTION LAW
We now examine more closely the relationship between contact pressure and frictional shear stress, i.e. the local friction law. The existence of a well defined relationship between local shear stress τ and contact pressure p would imply that all data points obtained at different P and different positions (x,y) within the contact merge onto a single curve when reported in the (τ,p) plane. Such a master curve is indeed obtained, as clearly shown in Fig. <ref>. In this figure, each color corresponds to a different P and each data point to a given location within the contact. The local contact pressure profile is close to a Hertzian one (Fig. <ref>c), but does not take into account roughness-induced deviations which were predicted theoretically by Greenwood and Tripp <cit.>. At low nominal contact pressure, such deviations include both a decrease of the maximum p at the center of the contact and the existence of a tail in the pressure distribution at the contact edges <cit.>. As a result of such effects, one should expect systematic deviations from the master curve for data points obtained in the low pressure range (i.e. in the vicinity of the contact edges) for each of the considered P. This is not observed in Fig. <ref>, which tends to indicate that deviations from the Hertz pressure distribution induced by surface roughness are not significant in our analysis.
The obtained local friction law is markedly sub-linear over the whole investigated contact pressure range. Under the assumption that the shear stress increases with the local density of micro-contacts, the observed sub-linear response should reflect the fact that the proportion of area in contact progressively saturates as the contact pressure is increased. Saturation of the contact area at all length scales should eventually result in a constant, pressure-independent frictional shear stress. The results shown in Fig. <ref> indicate that such a saturation would occur at contact pressures close to or higher than the Young's modulus of the PDMS substrate (E= 3 MPa).
The measured local friction law can be fitted, from the lowest pressures experimentally available up to p=0.5 MPa, by a power law τ(x,y)=β p(x,y)^m with β=0.560± 0.003 and m=0.61 ± 0.03 (Fig. <ref>a). For the rough contact interface considered here, such a local friction law differs significantly from Bowden and Tabor's classical expression <cit.>, τ=τ_0+α p, since the so-called adhesive term τ_0 is negligible and the pressure-dependent term is markedly non-linear. Assuming that p follows a Hertzian profile, integrating τ(x,y) over the contact area yields the total friction force Q, which is found to scale with P as Q ∝ P^γ with γ=(m+2)/3. This power law dependence is indeed obtained from friction force measurements, as shown in Fig. <ref>b. The experimental value of the exponent (0.93 ± 0.01) is very close to that derived from the integration of the local friction law, (m+2)/3=0.87 ± 0.01. Interestingly, the same functional form τ=β p^m was postulated by some of us <cit.> in a previous study involving a soft PDMS sphere sliding against a rough rigid plane with a roughness similar to the one used in the present study. The set of parameters (β,m) was deduced from the measured Q versus P relationship using exactly the same derivation. Although both systems are in essence different, an exponent 0.87 ± 0.03 was found for the Q versus P curves, yielding m=0.63, nearly equal to the one measured with the current data. As stated in the introduction, friction of rough multi-contact interfaces involves intricate aspects related to the determination of the real contact area and to the energy dissipation mechanisms at the scale of single asperities. A simple approach based on the Greenwood and Williamson rough contact model, with the assumption of a constant interfacial shear stress and a Gaussian asperity height distribution, would yield an Amontons-Coulomb local friction law at a mesoscopic length scale. The measured sub-linear, non-Amontons-Coulomb friction law may arise from a combination of the progressive saturation of the real contact area at high loads and of possible elastic interactions between neighboring asperities. To our knowledge, no current contact mechanics model provides a derivation of such a local friction law, preventing any further discussion of the physical meaning of both m and β and of their dependence on surface properties.
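The exponent γ=(m+2)/3 can also be recovered numerically by inserting the fitted law τ=βp^m into a Hertzian pressure field p(r)=p_0(1-r^2/a^2)^(1/2) and integrating over the contact; the sketch below (ours) uses illustrative values for the lens radius and contact modulus, which only affect prefactors, not the exponent:

```python
import numpy as np

beta, m = 0.56, 0.61        # fitted local law, tau = beta * p^m (p in MPa)
R, Estar = 5.2e-3, 4.0e6    # lens radius (m), contact modulus (Pa); illustrative

P = np.logspace(-2, 1, 30)                      # normal load (N)
a = (3.0 * P * R / (4.0 * Estar)) ** (1.0 / 3)  # Hertz contact radius (m)
p0 = 3.0 * P / (2.0 * np.pi * a**2) / 1e6       # peak Hertz pressure (MPa)

rho = np.linspace(0.0, 1.0, 4000)               # reduced radius r/a
Q = np.empty_like(P)
for i in range(P.size):
    tau = beta * (p0[i] * np.sqrt(1.0 - rho**2)) ** m          # MPa
    # Q = a^2 * integral of tau(rho) * 2*pi*rho drho, converted to newtons
    Q[i] = np.sum(tau * 2.0 * np.pi * rho) * (rho[1] - rho[0]) * a[i]**2 * 1e6

gamma = np.polyfit(np.log(P), np.log(Q), 1)[0]
print(gamma, (m + 2) / 3)                       # both close to 0.87
```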
In order to assess the sensitivity of the local friction law to the details of the surface roughness, a different surface topography was produced by chemical etching of the sand-blasted glass surface in hydrofluoric acid. As detailed in <cit.>, etching silicate glass surfaces with blasting-induced micro-flaws results in a surface containing small cusps. Such a structure is shown in the inset of Fig. <ref>, together with the height distribution showing the non-Gaussian nature of this rough surface. In the same figure, it can be seen that the cusp-like surface also yields a power law dependence of the local shear stress on the contact pressure, with an exponent m=0.67 ± 0.06 comparable to the one obtained with the Gaussian surface. Such a weak dependence of the exponent on roughness was also evidenced using macroscopic measurements (Q versus P) in <cit.>. The main difference rather lies in the magnitude of the prefactor, β=0.45 ± 0.02, which is reduced for the cusp-like surface. Under the classical assumption that the local shear stress can be described as the product of the actual contact area by an average shear stress embedding all the dissipative mechanisms occurring at the micro-asperity scale, this difference could arise from two effects. The first one is obviously a reduction of the proportion of area in contact at a given contact pressure in the case of the cusp-like surface. The second effect at play could be a reduction in the extent of frictional energy dissipation at the scale of the asperity as a result, for example, of a change in the viscoelastic losses involved in surface deformation at the micro-asperity scale. A discussion of these effects would however require a detailed contact mechanics analysis of the rough surfaces, which is beyond the scope of the present paper.
§ CONCLUSION
The local friction law of a rubber surface sliding against randomly rough rigid surfaces has been determined from a measurement of the surface displacement field. As the measured contact stresses are resolved down to a length scale of about 10 μm, they reflect the local frictional properties of the multi-contact interface. The local friction law exhibits a strongly non-Amontons-Coulomb, sub-linear dependence on contact pressure. These features are preserved when the topography of the rough surface is changed from Gaussian to non-Gaussian, which supports the generality of the observations. These results question the validity of the Amontons-Coulomb hypothesis embedded in most rough contact friction models. More generally, the determination of such local friction laws should serve as a basis for the validation of theoretical rough contact models. We have also shown that our analysis is able to resolve the shear stress fluctuations which are induced by the distribution of asperity sizes at length scales of the order of a few tens of micrometers. A statistical analysis should reveal correlations between the features of these shear stress variations and roughness parameters. It would, however, require an extended set of experiments in which shear stress fields are measured for different realizations of the statistically rough surface.
This study was partially funded by ANR (DYNALO project NT09-499845). We thank B. Bresson for the AFM measurements and are indebted to S. Roux for stimulating discussions. We also thank F. Monti for helping us with the chemical etching of the glass surfaces.
Bowden1958
F.P. Bowden and D. Tabor.
The Friction and Lubrication of Solids.
Clarendon Press, Oxford, 1958.
greenwood1966
JA Greenwood and JBP Williamson.
Contact of nominally flat surfaces.
Proceedings of the Royal Society of London. Series A.
Mathematical and Physical Sciences, 295(1442):300–319, 1966.
bureau2004
L. Bureau and L. Leger.
Sliding friction at a rubber/brush interface.
Langmuir, 20:4523, 2004.
drummond2007
C. Drummond, J. Rodríguez-Hernández, S. Lecommandoux, and
P. Richetti.
Boundary lubricant films under shear: effect of roughness and
adhesion.
Journal of Chemical Physics, 126:Art 184906, 2007.
greenwood1958
J.A. Greenwood and D. Tabor.
The friction of hard sliders on lubricated rubber: the importance of
deformation losses.
Proceedings of the Physical Society, 71:989–1001, 1958.
grosch1963a
A.K. Grosch.
The relation between the friction and visco-elastic properties of
rubber.
Proceedings of the Royal Society of London. Series A.
Mathematical and Physical Sciences, 274(1356):21–39, 1963.
campana2007
C. Campana and M. Muser.
Contact mechanics of real versus randomly rough surfaces: A Green's function molecular dynamics study.
Europhysics Letters, 77:38005, 2007.
campana2008
C. Campana, M.H. Muser, and M.O. Robbins.
Elastic contact between self-affine surfaces: comparison of numerical
stress and contact correlation functions with analytic predictions.
Journal of Physics-Condensed Matter, 20:354013, 2008.
persson2001
B.N.J. Persson.
Theory of rubber friction and contact mechanics.
Journal of Chemical Physics, 115(8):3840–3861, 2001.
bureau2002
L Bureau, T. Baumberger, and C. Caroli.
Rheological aging and rejuvenation in solid friction contacts.
European Physical Journal E, 8:331–337, 2002.
ruina1983
A. Ruina.
Slip instability and state variable friction laws.
Journal of Geophysical Research, 88:359–370, 1983.
baumberger2006
T. Baumberger and C. Caroli.
Solid friction from stick-slip down to pinning and aging.
Advances in Physics, 55:279–348, 2006.
wandersman2011
E. Wandersman, R. Candelier, G. Debregeas, and A. Prevost.
Texture-induced modulations of friction force: The fingerprint
effect.
Physical Review Letters, 107:164301, 2011.
chateauminois2008
A. Chateauminois and C. Fretigny.
Local friction at a sliding interface between an elastomer and a
rigid spherical probe.
European Physical Journal E, 27(2):221–227, October 2008.
nguyen2011
D.T. Nguyen, P. Paolino, M-C. Audry, A. Chateauminois, C. Frétigny, Y. Le
Chenadec, M. Portigliatti, and E. Barthel.
Surface pressure and shear stress field within a frictional contact
on rubber.
Journal of Adhesion, 87:235–250, 2011.
prevost2013
A. Prevost, J. Scheibert, and G. Debrégeas.
Probing the micromechanics of a multi-contact interface at the onset
of frictional sliding.
European Physical Journal E, 36:13017, 2013.
rubinstein2007
S. M. Rubinstein, G. Cohen, and J. Fineberg.
Dynamics of precursors to frictional sliding.
Physical Review Letters, 98(22), June 2007.
rubinstein2009
S.M. Rubinstein, G. Cohen, and J. Fineberg.
Visualizing stick-slip: experimental observations of processes
governing the nucleation of frictional sliding.
Journal of Physics D: Applied Physics, 42:214016, 2009.
nguyen2013
D. T. Nguyen, S. Ramakrishna, C. Fretigny, N. D. Spencer, Y. Le Chenadec, and
A. Chateauminois.
Friction of rubber with surfaces patterned with rigid spherical
asperities.
Tribology Letters, 49(1):135–144, January 2013.
greenwood1967
J.A. Greenwood and J.H. Tripp.
The elastic contact of rough spheres.
Journal of Applied Mechanics, 34:153, 1967.
scheibert2008
J. Scheibert.
Mécanique du contact aux échelles mésoscopiques.
Sciences Mécaniques et Physiques. Edilivres, 2008.
spierings1993
G.A.C.M. Spierings.
Wet chemical etching of silicate glasses in hydrofluoric acid based solutions.
Journal of Materials Science, 28:6261–6273, 1993.
|
http://arxiv.org/abs/1701.07584v2 | 20170126054211 | The number $π$ and summation by $SL(2,\mathbb Z)$ | [
"Nikita Kalinin",
"Mikhail Shkolnikov"
] | math.NT | [
"math.NT"
] |
The sum (resp. the sum of the squares) of the defects in the triangle inequalities for the area one lattice parallelograms in the first quadrant has a surprisingly simple expression.
Namely, let f(a,b,c,d)=√(a^2+b^2)+√(c^2+d^2)-√((a+c)^2+(b+d)^2).
Then,
∑ f(a,b,c,d) = 2,
∑ f(a,b,c,d)^2 = 2-π/2,
where the sum runs over all a,b,c,d∈ℤ_≥ 0 such that ad-bc=1.
This paper is devoted to the proof of these formulae. We also discuss possible directions in the study of this phenomenon.
§ HISTORY: GEOMETRIC APPROACH TO Π
What good is your beautiful proof of the transcendence of π: why investigate such problems, given that irrational numbers do not even exist?
Apocryphally attributed to Leopold Kronecker by Ferdinand Lindemann
The computation of the digits of π is probably one of the oldest research directions in mathematics. Following Archimedes, we may consider the regular polygons inscribed in and circumscribed about the unit circle. Let p_n (resp., P_n) be the perimeter of such an inscribed (resp., circumscribed) 3·2^n-gon. The sequences {p_n},{P_n} obey the recurrence
P_{n+1}=2p_nP_n/(p_n+P_n), p_{n+1}=√(p_nP_{n+1}),
and both converge to 2π. However, this gives no closed formula.
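As an aside (our illustration, not part of the original text), the recurrence is easy to iterate numerically. Starting from the hexagon, for which p_0 = 6 and P_0 = 4√3 for the unit circle, both sequences converge to 2π:

```python
import math

# Archimedes' scheme for the unit circle, starting from the hexagon
# (the n = 0 member of the 3*2^n-gon family).
p, P = 6.0, 4.0 * math.sqrt(3.0)  # inscribed / circumscribed perimeters
for n in range(1, 16):
    P = 2.0 * p * P / (p + P)     # circumscribed 3*2^n-gon (harmonic mean)
    p = math.sqrt(p * P)          # inscribed 3*2^n-gon (geometric mean)
    print(n, p, P)

print("2*pi =", 2.0 * math.pi)    # both columns approach 6.28318...
```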
One of the major breakthroughs in the study of π was made by Euler, the Swiss-born (Basel) German-Russian mathematician. In 1735, in his Saint Petersburg Academy of Science paper, he calculated (literally) the right hand side of
∑_n=1^∞1/n^2 = π^2/6.
Euler's idea was to use the identity
1-z^2/6+…=sin(z)/z=∏_{n=1}^∞ (1-z^2/(n^2π^2)),
where the first equality is the Taylor series and the second equality holds because the two functions have the same set of zeroes. Equating the coefficients of z^2 we get (<ref>). This reasoning was not justified until Weierstrass, but many other proofs have appeared since. A nice exercise to obtain (<ref>) is to consider the residues of πcot(π z)/z^2.
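Both ingredients are easy to test numerically; a minimal sketch (ours), truncating the product and the sum:

```python
import math

z, N = 1.3, 200000                # an arbitrary test point, truncation order
lhs = math.sin(z) / z
rhs = 1.0
for n in range(1, N):
    rhs *= 1.0 - z**2 / (n * n * math.pi**2)
print(lhs, rhs)                   # the two sides agree to ~6 digits

basel = sum(1.0 / (n * n) for n in range(1, N))
print(basel, math.pi**2 / 6.0)    # approaches Euler's value
```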
We would like to mention here a rather elementary geometric proof of (<ref>) which is contained in [Cauchy, Cours d'Analyse, 1821, Note VIII].
[Figure: a quarter of the unit disk triangulated into circular sectors of angle α, each sandwiched between an inscribed and a circumscribed triangle.]
Let α=π/(2m+1). Let us triangulate the disk as shown in the picture. Then α, which is twice the area of each circular sector, is bounded between sinα and tanα. Therefore cot^2α ≤ 1/α^2 ≤ 1/sin^2α. Writing sin((2m+1)x)/(sin x)^{2m+1} as a polynomial in x and using the fact that π r/(2m+1) are the roots of this polynomial, through Vieta's Theorem we can find the sums of cot^2α and 1/sin^2α over α=π r/(2m+1), r=1,…, m.
So, the above geometric consideration gives a two-sided estimate for (1/π^2)∑_{n=1}^m 1/n^2, both sides of which converge to 1/6 as m→∞.
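Numerically, the sandwich is immediate to verify (our illustration): for α=πr/(2m+1), both (1/(2m+1)^2)∑cot^2α and (1/(2m+1)^2)∑1/sin^2α approach 1/6 as m grows, squeezing (1/π^2)∑_{n=1}^m 1/n^2 between them:

```python
import math

for m in (10, 100, 1000):
    lo = hi = 0.0
    for r in range(1, m + 1):
        a = math.pi * r / (2 * m + 1)
        lo += 1.0 / math.tan(a) ** 2   # cot^2(a)
        hi += 1.0 / math.sin(a) ** 2   # 1/sin^2(a)
    s = 1.0 / (2 * m + 1) ** 2
    partial = sum(1.0 / n**2 for n in range(1, m + 1)) / math.pi**2
    print(m, s * lo, partial, s * hi)  # lower bound <= sum <= upper bound

print("limit:", 1.0 / 6.0)
```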
§ SL(2,ℤ)-WAY TO CUT CORNERS
Recall that SL(2,ℤ) is the set of matrices
[ a b; c d ] with a,b,c,d∈ℤ and ad-bc=1. With respect to matrix multiplication, SL(2,ℤ) is a group. We may identify such a matrix with the pair (a,b),(c,d)∈ℤ^2 of lattice vectors such that the area of the parallelogram spanned by them is one.
A vector v∈ℤ^2 is primitive if its coordinates are coprime. A polygon P⊂ℝ^2 is called unimodular if
* the sides of P have rational slopes;
* two primitive vectors in the directions of every pair of adjacent sides of P give a basis of ℤ^2.
Note that the property of a polygon being unimodular is SL(2,ℤ)-invariant.
The polygons P_0 and P_1 in Figure <ref> are unimodular.
Let P_0=[-1,1]^2 and D^2 be the unit disk inscribed in P_0, Figure <ref>, left. Cutting all corners of P_0 by tangent lines to D^2 in the directions (± 1,± 1) results in the octagon P_1 in which D^2 is inscribed, Figure <ref>, right.
Note that if we cut a corner of P_0 by any other tangent line to D^2, then the resulting 5-gon would not be unimodular.
For n≥ 0, the unimodular polygon P_{n+1} circumscribing D^2 is defined as the result of cutting all 4·2^n corners of P_n by tangent lines to D^2 in such a way that P_{n+1} is a unimodular polygon.
Note that the passage to P_{n+1} is unambiguous, because each unimodular corner of P_n is SL(2,ℤ)-equivalent to a corner of P_0, and the only way to unimodularly cut the corner of P_0 at the point (1,1) is to use the tangent line to D^2 of direction (-1,1).
The primitive vector (1,1) is orthogonal to a side S of P_1, belongs to the positive quadrant, and points outside P_1. The two vectors orthogonal to the sides of P_2 neighboring S are (2,1) and (1,2).
Let Q be a corner of P_n. Let v_1 and v_2 be the primitive vectors orthogonal to the sides of P_n at Q, pointing outwards. Then this corner is cut by the new side of P_{n+1} orthogonal to the direction v_1+v_2. Thus, we start with the four vectors (1,0),(0,1),(-1,0),(0,-1), the outward directions of the sides of P_0. To pass from P_n to P_{n+1} we order by angle all primitive vectors orthogonal to the sides of P_n and, for each pair of neighboring vectors v_1,v_2, we cut the corresponding corner of P_n by the tangent line to D^2 orthogonal to v_1+v_2.
In particular, every tangent to D^2 line with rational slope contains a side of P_n for n large enough.
We can reformulate the above observation as follows:
For all a,b,c,d∈ℤ_≥ 0 with ad-bc=1, such that (a,b),(c,d) belong to the same quadrant, there is a corner of P_n for some n≥ 0 supported by the primitive vectors (a,b) and (c,d). In P_{n+1} this corner is cropped by the line orthogonal to (a+c,b+d) and tangent to D^2.
The following lemma can be proven by direct computation.
In the above notation, the area of the cropped triangle is (1/2) f(a,b,c,d)^2.
We are going to prove that taking the limits of the lattice perimeters and areas of P_n produces our formulae in the abstract. The next lemma is obvious.
lim_n→∞Area(P_n)=Area(D^2), lim_n→∞Perimeter(P_n)=2π.
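The cutting procedure is easy to carry out on a computer; the sketch below (ours, not from the paper) generates the outward primitive normals of P_n by mediant insertion, intersects consecutive tangent lines w·p=|w| (neighboring normals have determinant one), and verifies the lemma:

```python
import math

def normals(n):
    # Outward primitive normals of P_n: n rounds of mediant insertion
    # between (1,0) and (0,1), then copied to the four quadrants.
    q = [(1, 0), (0, 1)]
    for _ in range(n):
        refined = [q[0]]
        for u, v in zip(q, q[1:]):
            refined.append((u[0] + v[0], u[1] + v[1]))
            refined.append(v)
        q = refined
    s = set()
    for a, b in q:
        for _ in range(4):
            s.add((a, b))
            a, b = -b, a                 # rotate by 90 degrees
    return sorted(s, key=lambda w: math.atan2(w[1], w[0]))

def perimeter_area(n):
    ws = normals(n)
    verts = []
    for (a, b), (c, d) in zip(ws, ws[1:] + ws[:1]):
        h1, h2 = math.hypot(a, b), math.hypot(c, d)
        det = a * d - b * c              # equals 1 for neighboring normals
        verts.append(((d * h1 - b * h2) / det, (a * h2 - c * h1) / det))
    per = sum(math.dist(u, v) for u, v in zip(verts, verts[1:] + verts[:1]))
    area = 0.5 * abs(sum(u[0] * v[1] - u[1] * v[0]
                         for u, v in zip(verts, verts[1:] + verts[:1])))
    return per, area

for n in range(7):
    print(n, perimeter_area(n))          # approaches (2*pi, pi)
```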
§ PROOFS
The area of the intersection of P_0∖ D^2 with the first quadrant is 1-π/4. Therefore, it follows from Lemmata <ref>, <ref> that
∑_{a,b,c,d} (1/2) f(a,b,c,d)^2 = 1-π/4,
which proves the second formula in the abstract.
Let v be a primitive vector. We define the lattice length of a vector kv, k∈ℤ_≥ 0, to be k.
In other words, the length is normalized in each direction in such a way that all primitive vectors have length one. Note that the lattice length is SL(2,ℤ)-invariant.
The lattice perimeter of P_n is the sum of the lattice lengths of its sides. For example, the usual perimeter of the octagon P_1 is 8√(2)-4 and the lattice perimeter is 2√(2)+4.
The lattice perimeter of P_n
* tends to zero as n→∞;
* is given by 4(2-∑ f(a,b,c,d)), where the sum runs over a,b,c,d∈_≥ 0, ad-bc =1, (a,b) and (c,d) are orthogonal to a pair of neighbor sides of some P_k with k≤ n.
The second statement follows from the cropping procedure. To prove the first statement, we note that for each primitive direction v the length of the side of P_n parallel to v tends to 0 as n→∞. The usual perimeter of P_n is bounded (it tends to 2π), and in the definition of the lattice length we divide by the lengths |v| of the primitive directions v of the sides of P_n.
Therefore, for each N>0, the sum of the lattice lengths of the sides of P_n parallel to directions v with |v|<N tends to zero, and the remaining part of the lattice perimeter of P_n is less than 2π/N, which concludes the proof by letting N→∞.
Finally, we deduce the first equality in the abstract from Lemmata <ref>, <ref>.
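Both formulae can be checked numerically by walking the binary tree of pairs (a,b),(c,d) with ad-bc=1 generated by the mediant construction; a minimal sketch (ours). The partial sums of f converge rather slowly, while those of f^2 converge quickly:

```python
import math

def f(a, b, c, d):
    return (math.hypot(a, b) + math.hypot(c, d)
            - math.hypot(a + c, b + d))

def sums(a, b, c, d, depth):
    # Children of the pair (a,b),(c,d): (a,b),(a+c,b+d) and (a+c,b+d),(c,d);
    # starting from (1,0),(0,1) this enumerates all pairs with ad - bc = 1.
    if depth == 0:
        return 0.0, 0.0
    v = f(a, b, c, d)
    s1, s2 = v, v * v
    for na, nb, nc, nd in ((a, b, a + c, b + d), (a + c, b + d, c, d)):
        t1, t2 = sums(na, nb, nc, nd, depth - 1)
        s1, s2 = s1 + t1, s2 + t2
    return s1, s2

for depth in (8, 14, 20):
    print(depth, sums(1, 0, 0, 1, depth))
print("limits:", 2.0, 2.0 - math.pi / 2)
```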
§ QUESTIONS
One may ask what happens for other powers of f(a,b,c,d). There is a partial answer in degree 3, which also reveals the source of our formulae.
For every primitive vector w consider the tangent line to D^2 consisting of all points p satisfying w·p+|w|=0. Consider a piecewise linear function F:D^2→ℝ given by
F(p)=inf_{w∈ℤ^2∖{0}}(w·p+|w|).
Performing verbatim the analysis of cropped tetrahedra applied to the graph of F one can prove the following lemma.
4-2∑ f(a,b,c,d)^3=3∫_D^2 F.
Now we describe the general idea behind the formulas.
Denote by C⊂ D^∘ the locus of all points p where the function F is not smooth. The set C is a locally finite tree (see Figure <ref>). In fact, it is naturally a tropical curve (see <cit.>). The numbers f(a,b,c,d) represent the values of F at the vertices of C and can be computed from the equations of tangent lines.
Below we list some direction which we find interesting to explore.
Coordinates on the space of compact convex domains. For every compact convex domain Ω we can define F_Ω as the infimum of all support functions with integral slopes, exactly as in (<ref>). Consider the values of F_Ω at the vertices of C_Ω, the corner locus of F_Ω. These values form a complete set of coordinates on the set of convex domains; therefore the characteristics of Ω, for example the area, can potentially be expressed in terms of these values. How can one relate these coordinates of Ω to those of the dual domain Ω^*?
Higher dimensions. We failed to reproduce this line of argument "by cropping" for three-dimensional bodies, but it seems that we need to sum over all quadruples of vectors v_1,v_2,v_3,v_4 such that ConvHull(0,v_1,v_2,v_3,v_4) contains no lattice points.
Zeta function. We may consider the sum ∑ f(a,b,c,d)^α as an analog of the Riemann zeta function. This motivates a bunch of questions. What is the minimal α such that this sum converges? We can prove that 2/3<α_min≤ 1. This problem boils down to evaluating the sum ∑ 1/(|v||w||v+w|)^α over all pairs of primitive lattice vectors v,w in the first quadrant such that the area of the parallelogram spanned by them is one. Can we extend this function to complex values of α?
Other proofs. It would be nice to reprove our formulae with the other methods which are used to prove (<ref>). Note that the vectors (a,b),(c,d) can be uniquely reconstructed from the vector (a+c,b+d), and our construction is strongly reminiscent of the Farey sequence. Can we interpret f(a,b,c,d) as the residue of a certain function at (a+b)+(c+d)i? The Riemann zeta function is related to the integers; could it be that f is related to the Gaussian integers?
Modular forms. We can extend f to the whole of SL(2,ℤ). If both vectors (a,b),(c,d) belong to the same quadrant, we use the same definition. For (a,b),(c,d) from different quadrants we could define
f(a,b,c,d)=√(a^2+b^2)+√(c^2+d^2)-√((a-c)^2+(b-d)^2).
Then
∑_{m∈ SL(2,ℤ)} f(m) = ∑_{a,b,c,d∈ℤ, ad-bc=1} f(a,b,c,d)
is well defined. Can we naturally extend this function to ℂ/SL(2,ℤ)? Can we make similar series for other lattices or tessellations of the plane?
§.§ Acknowledgements
We would like to thank an anonymous referee for the idea to discuss the Euler formula,
and also Fedor Petrov and Pavol Ševera for fruitful discussions. We want to thank the God and the universe for these beautiful formulae.
us
N. Kalinin and M. Shkolnikov.
Tropical curves in sandpile models (in preparation).
arXiv:1502.06284, 2015.
announce
N. Kalinin and M. Shkolnikov.
Tropical curves in sandpiles.
Comptes Rendus Mathematique, 354(2):125–130, 2016.
|
http://arxiv.org/abs/1701.08219v1 | 20170127233433 | Putting gravity in control | [
"C S Lopez-Monsalvo",
"I Lopez-Garcia",
"F Beltran-Carbajal",
"R Escarela"
] | math.OC | [
"math.OC",
"gr-qc",
"math-ph",
"math.MP"
] |
Universidad Autónoma Metropolitana Azcapotzalco, Avenida San Pablo Xalpa 180, Azcapotzalco, Reynosa Tamaulipas, 02200 Ciudad de México, México
The aim of the present manuscript is to present a novel proposal in Geometric Control Theory inspired by the principles of General Relativity and energy-shaping control.
It has been a pleasure to prepare this contribution to celebrate the two parallel lives of effort and inspiration in Theoretical Physics of Rodolfo Gambini and Luis Herrera. Albeit different in scope, their research shares the common thread of gravity and its geometric principles, together with their ultimate consequences ranging from the very nature of space and time to the astrophysical implications of thermodynamics and gravity. It is thus our wish to take this opportunity to present a different perspective on the use of the same guiding principles that led to General Relativity and explore, even if very briefly, its implications within the area of control theory.
General relativity brought a new paradigm to the understanding of physical reality, establishing a deep connection between geometry and physics by trading the motion of a test body subject to a universal force <cit.> for free motion in a curved manifold. In such a case, curvature becomes morally tantamount to the strength of a universal force field. This insight gave rise to a successful geometrization programme for field theories. However, due to the non-universal nature of the other known interactions, their geometrization is slightly different from that of gravity.
In geometric field theories, problems are of two kinds: given the sources, determine the curvature; or, given the curvature, determine the trajectories of test particles. In both types of problems it is the source which determines the background geometry for the motion. Thus, in general, changing the source distribution changes the curvature of the spacetime manifold, and hence the motion of the test particles. Therefore, in principle, one should be able to reproduce any observed motion in space by means of a suitable distribution of energy. By placing appropriate sources here and there, one can control the motion of test particles.
Of course, one cannot simply engineer an energy-momentum tensor so that test particles follow our desired trajectories. Most of such energy distributions are indeed unphysical from the spacetime point of view. However, this very principle may be applied to a different physical setting, one for which the background geometry is not the spacetime manifold. Such is the case of the geometrization of classical mechanics and of the control theory that can be built from it.
We begin by briefly revisiting the geometrization programme of classical mechanics. Let us consider a mechanical system characterized by n degrees of freedom (DoF) and defined by a Lagrangian function L:T𝒬⟶ℝ. Here, the configuration space is an n-dimensional manifold 𝒬 whose tangent bundle is denoted by T𝒬. The generic problem in geometric mechanics can be stated as follows. Given a pair of points p_1, p_2 ∈𝒬, find the curve γ⊂𝒬,
γ:[0,1] ⟶𝒬, γ(0)=p_1, γ(1)=p_2,
for which the functional
S[γ] = ∫_0^1 L[γ̃(τ)] dτ,
is an extremum. Here, γ̃ denotes the canonical lift of γ to T𝒬,
γ̃:[0,1] ⟶ T𝒬, γ̃:τ⟼(γ(τ),γ̇(τ)).
The fact that the natural evolution of a mechanical system connecting two given points of the configuration space 𝒬 follows precisely such a path is known as Hamilton's Principle. Thus, natural motions solve the Euler-Lagrange equations
ℰ(L) = 0,
where ℰ is the Euler operator <cit.>.
We will consider systems generated by Lagrangian functions of the form
L:T𝒬⟶ℝ, L = T - V,
where T and V represent the kinetic and potential energies, respectively. Moreover, we will restrict our analysis to the case where the kinetic energy is defined in terms of a symmetric, non-degenerate and (usually) positive definite second rank tensor field
M:T𝒬× T𝒬⟶ℝ.
Namely, those for which the kinetic energy at a given point is written as
T|_p = 1/2 M (U_p,U_p),
where
U_p = dγ/dτ|_{τ_0} ∈ T_p𝒬, with γ(τ_0) = p ∈𝒬
is the velocity of the trajectory at the point p. Such systems are called natural (cf. <cit.>).
The tensor field (<ref>) satisfies at each point of 𝒬 the properties of an inner product, promoting the configuration space to a Riemannian manifold. Thus, one could try to relate the curves extremizing the action functional S to geodesics of M in 𝒬. However, they coincide only in the case of free motion, that is, when there is neither potential energy nor external forces. Nonetheless, this observation gives rise to the problem of finding a metric tensor G on the configuration space 𝒬 such that the extremal curves of the action S coincide with the geodesics of G for a natural system defined by a Lagrangian function with potential energy V.
Recalling that geodesics are themselves extrema of the arc-length functional of 𝒬, it is a straightforward exercise to show (c.f. Chapter 4 in <cit.>) that, for conservative systems, the metric we are looking for is conformal to M. That is, the solutions to the geodesic equation
∇_U U = 0,
where ∇ is a connection compatible with the metric
G = 2[E - V(p)] M,
extremize the action functional (<ref>). Here, E is the energy of the initial conditions and, by assumption, it is a constant of the motion. The metric (<ref>) is called the Jacobi metric. There is a difference, however, in the geometric origin of these curves. On the one hand, they are the paths followed by the system in a potential whilst, on the other, those are the free paths of the purely kinetic Lagrangian
L_G = 1/2 G (U,U).
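To see concretely how the potential enters the geometry, the Christoffel symbols of the Jacobi metric can be generated symbolically; a small sketch (ours) for two degrees of freedom, assuming M is the identity (unit masses):

```python
import sympy as sp

x, y, E = sp.symbols('x y E')
q = (x, y)
V = sp.Function('V')(x, y)

G = 2 * (E - V) * sp.eye(2)       # Jacobi metric with M = identity
Ginv = G.inv()

def christoffel(i, j, k):
    # Gamma^i_{jk} = (1/2) G^{il} (d_j G_{lk} + d_k G_{lj} - d_l G_{jk})
    return sp.simplify(sum(
        Ginv[i, l] * (sp.diff(G[l, k], q[j]) + sp.diff(G[l, j], q[k])
                      - sp.diff(G[j, k], q[l]))
        for l in range(2)) / 2)

for i in range(2):
    for j in range(2):
        for k in range(2):
            print(f"Gamma^{i}_{j}{k} =", christoffel(i, j, k))
```

The output reduces to Γ^i_{jk} = -(δ^i_j ∂_k V + δ^i_k ∂_j V - δ_{jk} ∂_i V)/(2(E-V)): the potential enters the geodesic equation only through the conformal factor.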
Now that we can identify the trajectories in configuration space of the natural motion of a given system with geodesics in a Riemannian manifold, we would like to bend those paths so that the system evolves in a desired manner; that is, we want to re-shape the geometry so that the desired evolution corresponds to geodesic motion in a control Riemannian manifold. Thus, although both classical mechanics and control theory are based on the same dynamical principles, they differ in their objectives and goals. On the one hand, the generic problem of classical mechanics is that of finding the integral curves of a given Lagrangian vector field whilst, on the other hand, in control theory one is interested in finding the control inputs, generated by properly located actuators, such that the integral curves of a given system follow a designed or desired path in configuration space. Moreover, in both cases the problem of stability is of paramount relevance. In the following lines we present a proposal to address the stability problem in control theory from a Riemannian point of view.
The way one controls the evolution of a system is by acting directly upon a set of accessible degrees of freedom (ADoF) <cit.>, 𝒟 = {ê_(i)(p) }_{i=1}^m, where ê_(i)(p) ∈ T_p 𝒬 for all p ∈𝒬
is the ith element of a frame over the configuration space, so that we can place the control action directly into the Euler-Lagrange equations as
ℰ(L) = M^♯[u] ∈ T𝒬, u=∑_i=1^m u_i f̂^(i) with f̂^(i) = M^♭[ê_(i)],
where u represents our control input as an applied external force acting on 𝒟. Here, M^♯ and M^♭ denote the musical isomorphisms between the tangent and co-tangent bundles defined by the kinetic energy metric M, i.e.
M^♯: T_p^*𝒬⟶ T_p 𝒬 and M^♭:T_p𝒬⟶ T_p^*𝒬.
Similarly, we have the equivalent Riemannian problem
∇_U U = G^♯[ u ], with u=∑_i=1^m u_i θ̂^(i) with θ̂^(i) = G^♭[ê_(i)].
Note that u is the same as in equation (<ref>) but expressed in terms of a different co-frame, the one corresponding to the Jacobi metric G.
If span(𝒟) = T_p𝒬, then we say the system is fully actuated. Otherwise, we say it is under-actuated. Most of the relevant situations in control theory involve under-actuated systems, i.e. those for which span(𝒟) ⊂ T_p𝒬 <cit.>. Furthermore, if one provides an input force depending solely on time, controlling the evolution of the system in the configuration space 𝒬, then u is called an open loop control[In fact, open loop tracking controls for desired reference motion trajectories can be directly synthesized for a class of under-actuated controllable dynamical systems in the absence of uncertainty, e.g. differentially flat systems <cit.>. In such a case, the system variables (states and control) can be expressed in terms of a set of flat output variables and a finite number of their time derivatives]. However, the central interest in control theory is in those systems which can regulate themselves against unknown perturbations, that is, those whose control input is responsive to spontaneous variations of the system configuration. This is referred to as closed loop control. In such a case, the control input force depends on the state of the system at any given time, that is, u must be a section of the co-tangent bundle
and the control objective is that the integral curves of (<ref>)– or, equivalently, those satisfying (<ref>)– remain “close” to a reference path γ^* ⊂𝒬 with some desired equilibrium properties.
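As a toy illustration of the shaping idea (ours: a fully actuated pendulum, deliberately much simpler than the under-actuated geometric problem formalized below), the closed loop input replaces the open loop potential by one whose minimum sits at the target configuration and injects damping:

```python
import math

g, dt = 9.81, 1e-3            # gravity (unit mass and length), time step
kd = 1.0                      # damping injection gain
th_star = math.pi             # target: the open loop unstable equilibrium

V_prime = lambda th: g * math.sin(th)              # V(th) = -g cos(th)
Vcl_prime = lambda th: g * math.sin(th - th_star)  # shaped potential slope

th, om = 0.3, 0.0             # initial state near the hanging position
for _ in range(20000):        # 20 s of closed loop evolution
    u = V_prime(th) - Vcl_prime(th) - kd * om      # state feedback u(th, om)
    om += dt * (-V_prime(th) + u)                  # th'' = -V'(th) + u
    th += dt * om

print(th, th_star)            # the state settles at the shaped minimum
```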
To clearly state our Riemannian control problem, we propose a slight modification of Lewis' work <cit.>. Moreover, without any loss of generality and to keep our argument sufficiently simple, we will restrict ourselves to the case where no external or gyroscopic forces are present (cf. <cit.>). Thus, let us define an open loop control system as the triad
Σ_ol≡{𝒬,G_ol,𝒲},
where 𝒬 and G_ol are the configuration space and the Jacobi metric, respectively; and 𝒲⊂ T^*𝒬 is the control sub-bundle defined at each point as
𝒲_p = span(F_p), F_p = {θ̂^(i)(p) }_i=1^m ∀ p ∈𝒬,
so that equation (<ref>) above is satisfied for a certain u ∈𝒲 given some reference γ^*. The goal is to obtain a pair – the closed loop system – Σ_cl = {𝒬, G_cl},
such that the geodesics of G_cl match the open loop solutions of (<ref>). Here, we use the term match to indicate that the geodesics of the closed loop metric are not required to coincide at every point with the solutions of the open loop system, but merely that the vector fields share the same singular points together with their equilibrium properties, i.e. that the equilibria of the open loop system are the same as those of the closed loop geodesic vector field, and with the same asymptotic behaviour. In such a case we say that u∈𝒲 is a stabilizing input for the geometry shaping problem solved by G_cl <cit.>. Using this formulation, the stability properties of the closed loop system can be assessed directly by means of the Riemann tensor of G_cl through the geodesic deviation equation. Intuitively, unstable regions should correspond to subsets of the configuration space where the eigenvalues of the Riemann tensor take negative values and the geodesics in a congruence diverge, providing us with a completely geometric and coordinate-independent stability criterion.
Notice that the closed loop metric is not required to be a Jacobi metric; that is, it is not necessary to find a function V_cl = V_cl(p) such that G_cl is of the form (<ref>). Moreover, it need not even share the same signature with G_ol (cf. Remark 4.7 in <cit.>). Finally, recalling that the geodesics of the closed loop metric correspond to the integral curves of a purely kinetic Lagrangian vector field with kinetic energy metric G_cl, and that the open loop metric is a Jacobi metric, G_cl must satisfy the partial differential equation
G_ol^♯[∑_i=1^m u_i(U_p) θ̂^(i)(p) ] = (∇^cl - ∇^ol)[U_p,U_p ],
where ∇^cl and ∇^ol are the Levi-Civita connections of G_cl and G_ol, respectively; we have used the shorthand (∇^cl - ∇^ol)[U_p,U_p ] to denote the connection difference tensor, for every pair (p,U_p) denoting a possible state p̃∈ T𝒬 of the system [cf. equation (<ref>) above]. We have been emphatic about the fact that the stabilizing control input must depend on the state of the system. Equations of the form (<ref>) can be used to define further geometric structures on the spaces where they are defined, as has been done in the geometrization programme of thermodynamics and fluctuation theory developed in <cit.>.
In recent years, solutions to equation (<ref>) have been the object of various studies <cit.>. However, one should be aware that the existence of a solution of (<ref>) satisfying some desired properties might be severely constrained by the topology of 𝒬 <cit.>. Thus, in general, there is no generic criterion for deciding the solvability of (<ref>). Nevertheless, it has been shown that the case of one degree of under-actuation is fully stabilizable by means of (<ref>) <cit.>, modulo the difference introduced by using the open loop Jacobi metric in the problem's definition. Interestingly, in such a case equation (<ref>) resembles very closely the definition of the general relativistic elasticity difference tensor introduced by Karlovini and Samuelsson <cit.>,
S = P(∇ - ∇̃)
where ∇ and ∇̃ represent the connections of the spacetime and the (pulled back) matter space metrics, respectively (cf. <cit.> for a detailed presentation of matter spaces in relativistic elasticity and dissipation), and P denotes the orthogonal projection with respect to a free-falling geodesic congruence. Such a tensor has been used to recast the relativistic Euler equations in the form of Hadamard elasticity, a form which is elegant and useful in the study of wave propagation in relativistic elastic media <cit.>. This provides us with an interesting link between relativistic elasticity and geometric control theory which deserves further exploration.
Let us close this contribution by noting that there might be several stabilizing inputs for a given desideratum. In such a case, one might look for the `most suitable' geometry solving our control objective and use (<ref>) as a constraint for a variational problem. Its particular form should be fixed by some a priori known cost functional that is to be extremized by the sought-for closed loop metric. Such is the standard optimal control problem <cit.>. To preserve the geometric nature of the whole construction, the cost functional can only depend on scalars formed from the metric and its derivatives, that is,
𝒜[G] = ∫_𝒬 ℱ(G,G',G'',…) √(G) d^n q,
where √(G) d^n q is the invariant volume element on 𝒬. Thus, our search for geometries solving a control objective has led us to the study of cost functionals akin to those of the various classes of gravitational theories, where the extremum is achieved by geometries required to be compatible with the observed free-fall motion of certain spacetime observers <cit.>. Therefore, a complete solution to our problem – if it exists – should be a metric G_cl extremizing (<ref>)
such that equation (<ref>) is satisfied for a given open loop control. An exploration of the equations of motion stemming from the class of cost functionals constructed from curvature invariants in control theory will be the subject of further investigations.
In this sense, the lessons learned and the results obtained from the variational formulation of gravitational theories might find a novel application in the realm of geometric control theory.
§ ACKNOWLEDGEMENTS
CSLM wishes to express his gratitude to the Organizing Committee for their kind hospitality and strong efforts in providing us with such an inspiring venue for this celebration.
§ REFERENCES
reichenbach2012philosophy
H. Reichenbach.
The Philosophy of Space and Time.
Dover Books on Physics. Dover Publications, 2012.
olver2000applications
P.J. Olver.
Applications of Lie Groups to Differential Equations.
Applications of Lie Groups to Differential Equations. Springer New
York, 2000.
pettini2007geometry
M. Pettini.
Geometry and Topology in Hamiltonian Dynamics and Statistical
Mechanics.
Interdisciplinary Applied Mathematics. Springer New York, 2007.
lewis2004notes
Andrew D Lewis.
Notes on energy shaping.
In Proceedings of the 43rd IEEE Conference on Decision and
Control, volume 5, pages 4818–4823. Citeseer, 2004.
auckly1999control
Dave Auckly, Lev Kapitanski, and Warren White.
Control of nonlinear underactuated systems.
arXiv preprint math/9901140, 1999.
fliess1995flatness
Michel Fliess, Jean Lévine, Philippe Martin, and Pierre Rouchon.
Flatness and defect of non-linear systems: introductory theory and
examples.
International Journal of Control, 61(6):1327–1361, 1995.
bullo2004geometric
Francesco Bullo and Andrew D Lewis.
Geometric control of mechanical systems: modeling, analysis, and
design for simple mechanical control systems, volume 49.
Springer Science & Business Media, 2004.
Lewis:2007:WLD:1317140.1317151
Andrew D. Lewis.
Is it worth learning differential geometric methods for modeling and
control of mechanical systems?
Robotica, 25(6):765–777, November 2007.
gharesifard2008geometric
Bahman Gharesifard, Andrew D Lewis, Abdol-Reza Mansouri, et al.
A geometric framework for stabilization by energy shaping: Sufficient
conditions for existence of solutions.
Communications in Information & Systems, 8(4):353–398, 2008.
bravetti2015sasakian
A Bravetti and CS Lopez-Monsalvo.
Para-sasakian geometry in thermodynamic fluctuation theory.
Journal of Physics A: Mathematical and Theoretical,
48(12):125206, 2015.
bravetti2015conformal
Alessandro Bravetti, Cesar S Lopez-Monsalvo, and Francisco Nettel.
Conformal gauge transformations in thermodynamics.
Entropy, 17(9):6150–6168, 2015.
fernandez2015generalised
P Fernández de Córdoba and JM Isidro.
Generalised complex geometry in thermodynamical fluctuation theory.
Entropy, 17(8):5888–5902, 2015.
crasta2015matching
N Crasta, Romeo Ortega, and Harish K Pillai.
On the matching equations of energy shaping controllers for
mechanical systems.
International Journal of Control, 88(9):1757–1765, 2015.
ng2013energy
Wai Man Ng, Dong Eui Chang, and George Labahn.
Energy shaping for systems with two degrees of underactuation and
more than three degrees of freedom.
SIAM Journal on Control and Optimization, 51(2):881–905, 2013.
gharesifard2011stabilization
Bahman Gharesifard.
Stabilization of systems with one degree of underactuation with
energy shaping: a geometric approach.
SIAM Journal on Control and Optimization, 49(4):1422–1434,
2011.
karlovini2003elastic
Max Karlovini and Lars Samuelsson.
Elastic stars in general relativity: I. foundations and equilibrium
models.
Classical and Quantum Gravity, 20(16):3613, 2003.
carter1972foundations
B Carter and H Quintana.
Foundations of general relativistic high-pressure elasticity theory.
In Proceedings of the Royal Society of London A: Mathematical,
Physical and Engineering Sciences, volume 331, pages 57–83. The Royal
Society, 1972.
lopez2011covariant
César Simón López-Monsalvo.
Covariant thermodynamics and relativity.
arXiv preprint arXiv:1107.1005, 2011.
lopez2011thermal
Cesar S Lopez-Monsalvo and Nils Andersson.
Thermal dynamics in general relativity.
In Proceedings of the Royal Society of London A: Mathematical,
Physical and Engineering Sciences, volume 467, pages 738–759. The Royal
Society, 2011.
andersson2007relativistic
Nils Andersson and Gregory L Comer.
Relativistic fluid dynamics: physics for many different scales.
Living Rev. Relativity, 10(1), 2007.
vaz2008analysing
EGLR Vaz and Irene Brito.
Analysing the elasticity difference tensor of general relativity.
General Relativity and Gravitation, 40(9):1947–1966, 2008.
sethi2000optimal
Suresh P Sethi and Gerald L Thompson.
What is Optimal Control Theory? Springer, 2000.
lovelock1971einstein
David Lovelock.
The Einstein tensor and its generalizations.
Journal of Mathematical Physics, 12(3):498–501, 1971.
|
http://arxiv.org/abs/1701.08056v1 | 20170127140205 | Equilibration and Order in Quantum Floquet Matter | [
"R. Moessner",
"S. L. Sondhi"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn",
"cond-mat.stat-mech",
"cond-mat.str-el",
"quant-ph"
] | |
http://arxiv.org/abs/1701.07734v1 | 20170126150613 | Regularized characteristic boundary conditions for the Lattice-Boltzmann methods at high Reynolds number flows | [
"Gauthier Wissocq",
"Nicolas Gourdain",
"Orestis Malaspinas",
"Alexandre Eyssartier"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
ISAE,Altran,Cerfacs]Gauthier Wissocq
ISAE]Nicolas Gourdain
UGEN]Orestis Malaspinas
Altran]Alexandre Eyssartier
[ISAE]ISAE, Dpt. of Aerodynamics, Energetics and Propulsion, Toulouse, France
[Altran]Altran, DO ME, Blagnac, France
[Cerfacs]Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique (CERFACS), CFD Team, 42 avenue Gaspard Coriolis, 31057 Toulouse Cedex 01, France
[UGEN]SPC - Centre Universitaire d'Informatique, Université de Genève 7, route de Drize, CH-1227 Switzerland
This paper reports the investigations done to adapt the Characteristic Boundary Conditions (CBC) to the Lattice-Boltzmann formalism for high Reynolds number applications. Three CBC formalisms are implemented and tested in an open source LBM code: the baseline local one-dimensional inviscid (BL-LODI) approach, its extension including the effects of the transverse terms (CBC-2D) and a local streamline approach in which the problem is reformulated in the incident wave framework (LS-LODI). All implementations of the CBC methods are then tested for a variety of test cases, ranging from canonical problems (such as 2D plane and spherical waves and 2D vortices) to a 2D NACA profile at high Reynolds number (Re = 10^5), representative of aeronautic applications. The LS-LODI approach provides the best results for pure acoustic waves (plane and spherical waves). However, it is not well suited to the outflow of a convected vortex, for which the CBC-2D associated with a relaxation on density and transverse waves provides the best results. As regards numerical stability, a regularized adaptation is necessary to simulate high Reynolds number flows. The so-called regularized FD (Finite Difference) adaptation, a modified regularized approach where the off-equilibrium part of the stress tensor is computed with a finite difference scheme, is the only tested adaptation that can handle the high Reynolds number computation.
Lattice Boltzmann method; characteristic boundary conditions; LODI; high Reynolds number flows
§ INTRODUCTION
A better understanding of turbulent unsteady flows is a necessary step towards a breakthrough in the design of modern aircraft and propulsive systems. Due to the difficulty of predicting turbulence with complex geometry, the flow that develops in these engines remains difficult to predict. At this time, the most popular method to model the effect of turbulence is still the Reynolds Averaged Navier-Stokes (RANS) approach. However there is some evidence that this formalism is not accurate enough, especially when a description of time-dependent turbulent flows is desired (high incidence angle, laminar-to-turbulent transition, etc.) <cit.>. With the increase in computing power, Large Eddy Simulation (LES) applied to the Navier-Stokes equations emerges as a promising technique to improve both knowledge of complex physics and reliability of flow solver predictions <cit.>. It is still the most popular and mature approach to describe the behavior of turbulent flow in complex geometries (e.g. aircraft and gas turbines). However, the resolution of the NS equations requires adding artificial dissipation to ensure numerical stability <cit.>. The consequence is an over-dissipation which affects the flow and limits the capability to transport flow patterns (like turbulence) over long distances. In some specific cases, like aero-acoustics (far-field noise), NS solvers can thus face difficulties in predicting the flow.
In this context, there is an increasing interest in the fluid dynamics community for emerging methods, based on the Lattice Boltzmann approach <cit.>. The Lattice-Boltzmann Method (LBM) has already demonstrated its potential for complex geometries, thanks to immersed boundary conditions (that allow the use of cartesian grids) and low dissipation properties required for capturing the small acoustic pressure fluctuations <cit.>. LBM also provides the advantage of an easy parallelization, making it well suited for High-Performance Computing <cit.>. However, the most widely used Lattice-Boltzmann models still suffer from weaknesses like a lack of robustness for high Mach number flows (M > 0.4), a limitation to low compressible isothermal flows <cit.> and the use of artificial boundary conditions (Dirichlet/Neumann types can lead to the reflection of outgoing acoustic waves that have a significant influence on the flow field <cit.>). While the use of artificial boundary conditions is also critical for NS methods, it is more problematic for LBM due to the low dissipation of the method.
A potential way to avoid unphysical acoustic reflections at the boundary is to use a "sponge layer" inside the computational domain, in which artificial dissipation (by upwinding) is introduced or physical viscosity is increased (viscosity sponge zones). Acoustic waves (physical or not) are thus damped in such a zone, which eliminates or limits numerical reflections <cit.>. This solution has however important drawbacks. First, the calibration of sponge layers is difficult, as a balance must be found between an abrupt increase of the viscosity (which will itself generate acoustic reflections) and too low a dissipation that will not be effective. Such sponge layers also have an impact on the computational cost, since a part of the domain is dedicated to slowly increasing the viscosity. Lastly, some boundary conditions cannot be treated with a sponge layer, for instance an inlet with turbulence injection.
For NS methods, a successful approach is the use of non-reflective boundary conditions based on a treatment of the characteristic waves of the local flow <cit.>. However, the extension of this approach to LBM is not straightforward, given the difficulty of bridging the LBM, which describes the flow at the mesoscopic level (a population of particles), and the NS equations, based on a macroscopic description of the flow. But some progress has recently been made on the adaptation of characteristic boundary conditions to the LBM formalism. Izquierdo and Fueyo <cit.> used a pressure antibounceback boundary condition <cit.> adapted to the multiple relaxation time (MRT) collision scheme <cit.> to impose the Dirichlet density and velocity conditions given by the local one-dimensional inviscid (LODI) equations, which provided non-reflective outflow boundary conditions for one-dimensional waves. More recently, Jung et al. <cit.> extended the previous work to include the effects of transverse and viscous terms in the Characteristic Boundary Conditions (CBC) and showed good performance for vortex outflow. Meanwhile, Heubes et al. <cit.> adapted the solution given by a modified-Thompson approach by imposing the corresponding equilibrium populations and Schlaffer <cit.> assessed a modified Zou/He boundary condition <cit.>.
Still, previous studies are limited to low Reynolds number applications, while Latt et al. showed that increasing the Reynolds number can have a drastic impact on the numerical stability of the LBM boundary condition <cit.>. Even when the MRT collision is used, as in <cit.>, the numerical stability of the characteristic boundary conditions has not been demonstrated. The aim of this study is thus to develop a numerically stable adaptation of the CBC to the LBM formalism taking advantage of the regularized collision scheme <cit.>, which has proved to be numerically stable for high Reynolds number flows <cit.>, and the corresponding regularized boundary conditions <cit.>. Different kinds of CBC will also be evaluated. This article is structured as follows. The first section describes the LBM framework. Then, the second section presents three kinds of CBCs and three possible adaptations to the LBM formalism for 2D problems in the low compressible isothermal case. In the third section, these models are assessed for simple cases: normal, oblique and spherical waves and a convected vortex at Re=10^3. Finally, the method is assessed on a high Reynolds number application: a NACA0015 airfoil at Re=10^5.
§ NUMERICAL METHOD AND GOVERNING EQUATIONS
§.§ Lattice Boltzmann framework for isothermal flows
A description of the Lattice-Boltzmann Method can be found in <cit.>. The governing equations describe the evolution of the probability density of finding a set of particles with a given microscopic velocity at a given location:
f_i(𝐱+𝐜_𝐢Δ t , t+Δ t) = f_i(𝐱,t) + Ω_i (𝐱,t)
for 0≤ i < q, where 𝐜_𝐢 is a discrete set of q velocities, f_i(𝐱,t) is the discrete single particle distribution function corresponding to 𝐜_𝐢 and Ω_i is an operator representing the internal collisions of pairs of particles.
Macroscopic values such as density, ρ, and the flow velocity, 𝐮, can be deduced from the set of probability density functions f_i(𝐱,t), such as:
ρ = ∑_i=0^q-1 f_i, ρ𝐮 =∑_i=0^q-1 f_i 𝐜_𝐢.
Some of the most popular choices for the set of velocities are D2Q9 and D3Q27 lattices, respectively 9 velocities in 2D and 27 velocities in 3D (see Fig. <ref>). For both of these lattices, the sound speed in lattice units (normalized by the ratio between the spatial resolution and the time step Δ x/Δ t) is given by c_s = 1/√(3) <cit.>.
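As an illustration of these definitions, the short sketch below sets up the D2Q9 velocities and weights and evaluates the zeroth and first-order moments with NumPy. It is a minimal sketch, not Palabos code, and the ordering of the nine velocities is an implementation choice (here indices 1 to 3 carry c_x=-1, indices 5 to 7 carry c_x=+1 and indices 0, 4, 8 carry c_x=0, consistent with the outlet configuration discussed later, where f_1, f_2, f_3 enter from the right boundary):

import numpy as np

# D2Q9 lattice: discrete velocities c_i and Gaussian weights w_i
# (lattice units; the sound speed is c_s = 1/sqrt(3)).
c = np.array([[0, 0], [-1, 1], [-1, 0], [-1, -1], [0, -1],
              [1, -1], [1, 0], [1, 1], [0, 1]])
w = np.array([4/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9, 1/36, 1/9])
cs2 = 1.0 / 3.0   # squared lattice sound speed

def moments(f):
    """Density and velocity from populations f of shape (9, nx, ny)."""
    rho = f.sum(axis=0)
    u = np.tensordot(c.T, f, axes=([1], [0])) / rho   # shape (2, nx, ny)
    return rho, u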
The collision operator Ω_i is usually modelled with the Bhatnagar-Gross-Krook (BGK) approximation <cit.>, which consists in a relaxation, with a relaxation time τ, of every population to the corresponding equilibrium probability density function f_i^(eq):
Ω_i = -1/τ[f_i(x,t)-f_i^(eq)(x,t)].
The equilibrium distribution function f_i^(eq) is a local function that only depends on density and velocity in the isothermal case. It can be computed via a second-order expansion of the Maxwell-Boltzmann equilibrium function <cit.>:
f_i^(eq) = w_i ρ[ 1+𝐜_𝐢·𝐮/c_s^2 + (𝐜_𝐢·𝐮)^2/2c_s^4 - 𝐮^2/2c_s^2],
where w_i are the Gaussian weights of the lattice.
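A possible NumPy transcription of this equilibrium, reusing the D2Q9 constants c, w and cs2 defined in the sketch above, reads:

def f_eq(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium for all nine populations."""
    cu = np.tensordot(c, u, axes=([1], [0]))       # c_i . u, shape (9, nx, ny)
    usq = (u ** 2).sum(axis=0)
    return w[:, None, None] * rho * (1.0 + cu / cs2
                                     + 0.5 * (cu / cs2) ** 2
                                     - 0.5 * usq / cs2)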
A Chapman-Enskog expansion, based on the assumption that f_i is given by the sum of the equilibrium distribution plus a small perturbation f^(1)_i
f_i=f_i^(eq)+f_i^(1), with f_i^(1)≪ f_i^(eq),
can be applied to (<ref>) in order to recover the exact Navier-Stokes equation for quasi-incompressible flows in the long-wavelength limit. The pressure is thus given by p=c_s^2 ρ and the kinematic viscosity is linked to the BGK relaxation parameter through
ν = c_s^2(τ - 1/2).
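In practice this relation fixes the relaxation time from the target Reynolds number. A small helper (an illustrative sketch reusing cs2 from above; the function name is ours) makes the conversion explicit:

def tau_from_reynolds(re, u0, n):
    """BGK relaxation time for Reynolds number `re`, lattice velocity `u0`
    and a characteristic length of `n` cells (all in lattice units)."""
    nu = u0 * n / re            # kinematic viscosity in lattice units
    return nu / cs2 + 0.5       # inverts nu = cs^2 (tau - 1/2)

# e.g. the Re = 1000 convected vortex of the test section:
# tau_from_reynolds(1000, 0.1, 600) gives tau ~ 0.68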
The Chapman–Enskog expansion also relates the second order tensor Π^(1) defined as
Π^(1)=∑_i𝐜_𝐢𝐜_𝐢f^(1)_i,
with the strain rate tensor 𝐒=(∇𝐮+(∇𝐮)^T)/2 through the relation
Π^(1)=-2 c_s^2 ρτ𝐒.
In turn, to the leading order, f_i^(1) can be approximated by
f_i^(1)≅w_i/2 c_s^4𝐐_i:Π^(1),
where 𝐐_i≡(𝐜_i𝐜_i-c_s^2𝐈). The colon symbol stands for the double contraction operator and 𝐈 is the identity matrix. A regularization step, consisting of the reconstruction of the off-equilibrium parts using (<ref>) and (<ref>), can improve the precision and the numerical stability of the single relaxation time BGK collision <cit.>. This model will be used in the next parts for high Reynolds number flow simulations.
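The regularization step can be sketched as follows (a minimal NumPy version reusing c, w, cs2, moments and f_eq from the previous sketches; the tensor 𝐐_i is precomputed once):

# Q_i = c_i c_i - cs^2 I, shape (9, 2, 2)
Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)

def regularize(f):
    """Replace f by f_eq plus the regularized off-equilibrium part."""
    rho, u = moments(f)
    feq = f_eq(rho, u)
    f1 = f - feq
    pi1 = np.einsum('iab,i...->ab...', Q, f1)    # Pi^(1) = sum_i c_i c_i f_i^(1)
    f1_reg = (w[:, None, None] / (2.0 * cs2 ** 2)) * np.einsum('iab,ab...->i...', Q, pi1)
    return feq + f1_reg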
§.§ The Palabos open-source library
The LBM flow solver used in this work is the Palabos[Copyright 2011-2012 FlowKit Ltd.] open-source library. The Palabos library is a framework for general-purpose CFD with a kernel based on the lattice Boltzmann method. Written in C++, it is easy to install and run on most machines. It is thus possible to set up fluid flow simulations with relative ease and to extend the open-source library with new methods and models, which is of paramount importance for the implementation of new boundary conditions. The numerical scheme is divided in two steps:
* A collision step where the BGK model is applied:
f_i(𝐱, t+1/2) = f_i(𝐱,t) + 1/τ[f_i^(eq)(𝐱,t)-f_i(𝐱,t)],
with f_i^(eq) computed using the macroscopic values at time t; the populations f_i can additionally be regularized in order to increase numerical stability at high Reynolds numbers (a compact sketch of the full collide-and-stream cycle is given at the end of this subsection).
* A streaming step:
f_i(𝐱+𝐜_𝐢, t+1) = f_i(𝐱, t+1/2).
The streaming step consists in an advection of each discrete population to the neighbor node located in the direction of the corresponding discrete velocity. Since a boundary node has fewer neighbors than an internal node (fewer than 9 neighbors in 2D or 27 neighbors in 3D), some populations are missing at the boundary after each iteration. These populations need to be reconstructed, which is the purpose of the implementation of boundary conditions in LBM. Up to now, different methods can be used in Palabos, such as regularized BC <cit.>
or Zou/He BC <cit.>
to implement open boundaries. However, none of them can be used as they stand for an outflow boundary condition, and the use of sponge zones is necessary to avoid non-physical reflections. The next sections will aim at developing a more natural boundary condition that minimizes acoustic reflections at an outflow boundary, based on the Characteristic Boundary Conditions (CBC).
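For reference, the two-step scheme above can be sketched in a few lines of NumPy (periodic wrap-around; boundary populations are meant to be overwritten afterwards by the boundary condition). This reuses the constants and functions of the previous sketches and is not Palabos code:

def collide_and_stream(f, tau):
    """One BGK collision followed by streaming, for f of shape (9, nx, ny)."""
    rho, u = moments(f)
    f_post = f - (f - f_eq(rho, u)) / tau          # BGK collision
    for i in range(9):                             # streaming along c_i
        f_post[i] = np.roll(f_post[i], shift=tuple(c[i]), axis=(0, 1))
    return f_post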
§ ADAPTATION OF CHARACTERISTIC BOUNDARY CONDITIONS TO THE LBM FORMALISM
One of the most popular methods in the NS community for subsonic non reflective outflow boundary conditions is the CBC method <cit.>. The adaptation of the CBC to the LBM formalism is presented here for an isothermal flow, in lattice units (normalized by Δ x and Δ t). Acoustic waves thus propagate at the constant lattice sound speed c_s. In the isothermal case, pressure is defined as p=c_s^2 ρ. Three CBC methods will be introduced below: the local one-dimensional inviscid (LODI) approximation, a 2D extension of the LODI approximation including transverse waves and a last method called local-streamline LODI.
§.§ LODI Approximation (Baseline LODI)
Let us consider a domain outlet located at x=L, as depicted in Fig. <ref>. A diagonalisation of the x-derivative terms in the Navier-Stokes equation allows one to define five waves ℒ_i that propagate respectively at velocity u-c_s, u, u, u and u+c_s, where u is the x-component (streamwise) of the non-dimensional macroscopic velocity 𝐮= [u, v, w]. These waves are represented in Fig. <ref> at the inlet (x=0) and outlet (x=L) of a computational domain.
At the outlet (x=L on Fig. <ref>), ℒ_2, ℒ_3, ℒ_4 and ℒ_5 leave the computational domain and are obtained with the general expression of characteristic waves:
ℒ_2=u( c_s^2 ∂ρ/∂ x - ∂ p/∂ x)=0,
ℒ_3=u ∂ v/∂ x,
ℒ_4=u ∂ w/∂ x,
ℒ_5=(u+c_s) ( ∂ p/∂ x + ρ c_s ∂ u/∂ x).
Let us notice that ℒ_2, the entropy wave, vanishes in the isothermal case. The x-derivative terms can be computed using the interior points by one-sided finite difference. The treatment of ℒ_1 is different: since it comes from the outside, it cannot be computed using the interior points. The perfectly non-reflecting case is obtained by fixing ℒ_1=0, which eliminates the incoming wave. However, this is known to be unstable because of the lack of control of the outlet flow variables. A simple way to ensure well-posedness is to set
ℒ_1=K_1(p-p_∞),
where K_1=σ (1-M^2)c_s/L, p_∞ is the target pressure at the outlet, σ is a constant, M is the maximum Mach number in the flow and L is a characteristic size of the domain <cit.>.
The time-derivative of the primitive variables can be computed as a function of the wave amplitudes by examining a LODI problem:
∂ρ/∂ t + 1/c_s^2[ ℒ_2 + 1/2(ℒ_5+ℒ_1) ]=0,
∂ p/∂ t + 1/2(ℒ_5+ℒ_1)=0,
∂ u/∂ t+1/2ρ c_s(ℒ_5-ℒ_1)=0,
∂ v/∂ t + ℒ_3 = 0,
∂ w/∂ t+ℒ_4=0.
In the isothermal case, (<ref>) and (<ref>) are equivalent. Finally, with a temporal discretization using an explicit second-order scheme, the physical values that must be imposed at the next time step in order to avoid acoustic reflections can be computed. This implementation will be referred to as the baseline LODI (BL-LODI) in the rest of the paper.
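A minimal numerical sketch of this BL-LODI step at an x = L outlet is given below (2D isothermal case, reusing cs2 from the sketches above). For simplicity it uses first-order one-sided differences and an explicit Euler update rather than the second-order scheme mentioned above; all names are illustrative:

def lodi_update(rho_b, u_b, v_b, rho_in, u_in, v_in, p_inf, K1, dt=1.0):
    """One explicit LODI step on an outlet node; *_b are boundary values,
    *_in the first interior neighbour used for one-sided x-derivatives."""
    cs = cs2 ** 0.5
    dp_dx = cs2 * (rho_b - rho_in)        # p = cs^2 rho in the isothermal case
    du_dx = u_b - u_in
    dv_dx = v_b - v_in
    L5 = (u_b + cs) * (dp_dx + rho_b * cs * du_dx)   # outgoing acoustic wave
    L3 = u_b * dv_dx                                 # outgoing shear wave
    L1 = K1 * (cs2 * rho_b - p_inf)                  # incoming wave; K1 = 0 is perfectly non-reflecting
    rho_new = rho_b - dt * (L5 + L1) / (2.0 * cs2)   # L2 = 0 (isothermal)
    u_new = u_b - dt * (L5 - L1) / (2.0 * rho_b * cs)
    v_new = v_b - dt * L3
    return rho_new, u_new, v_new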
§.§ LODI approximation including transverse terms
The previous relations are perfectly non-reflecting for the only case of a pure 1D plane wave. For a non-normal wave, the LODI approximation is not verified and a reflected wave, all the more important as the incidence increases, can appear. To take this phenomenon into account, a possible solution is to add the influence of transverse waves in the LODI equations <cit.>:
∂ρ/∂ t + 1/2c_s^2(ℒ_5+ℒ_1)=1/2c_s^2(𝒯_5+𝒯_1),
∂ u/∂ t+1/2ρ c_s(ℒ_5-ℒ_1)=1/2ρ c_s(𝒯_5-𝒯_1),
∂ v/∂ t + ℒ_3 = 𝒯_3,
∂ w/∂ t+ℒ_4=𝒯_4.
The transverse waves can be computed as follows:
𝒯_1=-[ 𝐮_𝐭·∇_tp +p∇_t·𝐮_𝐭 -ρ c_s 𝐮_𝐭·∇_tu ],
𝒯_3=-[ 𝐮_𝐭·∇_tv + 1/ρ∂ p/∂ y],
𝒯_4=-[ 𝐮_𝐭·∇_tw + 1/ρ∂ p/∂ z],
𝒯_5=-[ 𝐮_𝐭·∇_tp +p∇_t·𝐮_𝐭 +ρ c_s 𝐮_𝐭·∇_tu ],
where 𝐮_𝐭=[v,w] and ∇_t=[∂_y, ∂_z].
The second transverse wave 𝒯_2 is not introduced here since it vanishes in the isothermal case, as does ℒ_2. The non-reflective outflow boundary condition now needs to be set as:
ℒ_1=K_1(p-p_∞) - K_2(𝒯_1-𝒯_1,exact) + 𝒯_1,
with K_1=σ (1-M^2)c_s/L, K_2 should be equal to the Mach number of the mean flow and 𝒯_1,exact is a desired steady value of 𝒯_1 <cit.>. In the rest of the paper, this method will be named CBC-2D in the perfectly non-reflecting case (K_1=K_2=0) and CBC-2D relaxed when a relaxation is applied to ℒ_1.
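In the 2D isothermal case (𝐮_𝐭 reduces to v, ∇_t to ∂_y, and 𝒯_2, 𝒯_4 drop out), the transverse terms along the outlet column can be sketched as follows, with centred y-derivatives and periodic wrapping; p, u, v and rho are 1D arrays along the boundary (an illustrative sketch reusing cs2):

def transverse_terms(p, u, v, rho):
    """T_1, T_3 and T_5 along an x = L outlet (2D isothermal case)."""
    cs = cs2 ** 0.5
    dy = lambda q: 0.5 * (np.roll(q, -1) - np.roll(q, 1))   # centred, periodic in y
    T1 = -(v * dy(p) + p * dy(v) - rho * cs * v * dy(u))
    T3 = -(v * dy(v) + dy(p) / rho)
    T5 = -(v * dy(p) + p * dy(v) + rho * cs * v * dy(u))
    return T1, T3, T5

# CBC-2D relaxed closure: L1 = K1 * (p - p_inf) - K2 * (T1 - T1_exact) + T1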
§.§ Local streamline LODI
Another potential solution is to solve the LODI equations in the local streamline-based frame R (Fig. <ref>) <cit.>. In order to compute the new characteristic waves ℒ̃_i, the non-dimensional velocity vector is projected into this frame, the difficulty being the evaluation of the x̃-derivative terms from the lattice discretization. A simple approximation is to set
∂ϕ̃/∂x̃ = ∂ϕ̃/∂ x,
and thus to compute it by a first-order upwind scheme using the lattice discretization. This implementation of the CBC condition will be referred to as the local streamline LODI (LS-LODI).
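The change of frame itself is a plain 2D rotation onto the local velocity direction; a small illustrative helper reads:

def streamline_frame(u, v):
    """Rotation angle of the local streamline frame and streamwise velocity.
    In this frame the velocity is (u_tilde, 0) by construction."""
    theta = np.arctan2(v, u)        # angle between the streamline and the x axis
    u_tilde = np.hypot(u, v)        # streamwise velocity component
    return theta, u_tilde

# The x~-derivatives are then approximated by plain x-derivatives, as in the
# equation above, and evaluated by a first-order upwind scheme.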
§.§ Adaptation to the Lattice Boltzmann scheme
The main difficulty in LBM is then to find a set of populations imposing both the physical values obtained by the CBC theory and the correct associated gradients. The possible adaptations can be divided in two families: those preserving the known particle populations (e.g. Zou/He BC <cit.>) and those replacing all particle populations (e.g. Regularized BC <cit.>). Izquierdo and Fueyo <cit.> and Jung et al. <cit.> chose to modify the missing populations by adapting a pressure anti-bounceback boundary condition, which can be used as long as the MRT collision scheme is adopted <cit.>. Heubes et al. <cit.> decided to impose the CBC physical values through the equilibrium populations, which is known to impose incorrect gradients at the boundary <cit.>. Three possibilities are introduced below and will be further evaluated: a modified Zou/He method and two declinations of the more stable regularized adaptation.
§.§.§ Adaptation with Zou/He boundary conditions
Let us consider an outflow boundary located at x=L on a D2Q9 lattice (Fig. <ref>). After streaming, three incoming populations are missing: f_1, f_2 and f_3.
The zeroth and first-order hydrodynamic moments are:
ρ = f_1+f_2+f_3+ρ_0+ρ_+,
ρ u = ρ_+ - (f_1+f_2+f_3),
where ρ_+ = f_5+f_6+f_7 and ρ_0=f_0+f_4+f_8. These equations can be combined, by eliminating the missing populations (f_1+f_2+f_3), to obtain
ρ = (ρ_0+2ρ_+)/(1-u),
where ρ and u are still non-dimensional. Thus, ρ and u are linked by relation (<ref>), which proves that it is impossible to impose both ρ_b and 𝐮_𝐛 at the boundary without modifying any of the known populations. The same relation can be obtained for every 1D, 2D or 3D lattice as long as there is only one level of velocity. This relation concerns every boundary condition of the first family (those preserving the known populations) and will be used later.
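With the velocity numbering adopted in the sketches above (indices 5 to 7 carrying c_x=+1 and indices 0, 4, 8 carrying c_x=0), this closure relation reads, for one outlet node:

def zou_he_density(f, u_b):
    """Density enforced by the known populations of one x = L node (f of
    shape (9,)) once the normal velocity u_b is prescribed."""
    rho0 = f[0] + f[4] + f[8]          # populations with c_x = 0
    rhop = f[5] + f[6] + f[7]          # known populations with c_x = +1
    return (rho0 + 2.0 * rhop) / (1.0 - u_b)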
Let us note g_i the corrected populations at the boundary. As proposed in <cit.>, the missing populations can be computed as follows:
g_1=f_1^(eq) + f_5^(neq) + 1/2(f_4^(neq) - f_8^(neq)),
g_2=f_2^(eq) + f_6^(neq),
g_3=f_3^(eq) + f_7^(neq) - 1/2(f_4^(neq) - f_8^(neq)),
while all other populations are kept unchanged:
g_i = f_i, i=0, 4, 5, 6, 7, 8,
with f_i^(neq)=f_i - f_i^(eq) and where the equilibrium populations are computed with the physical values imposed at the boundary, ρ_b and 𝐮_𝐛=(u_b, v_b).
As shown in <cit.>, corrections (<ref>), (<ref>) and (<ref>) then make it possible to impose the first-order moment ρ_b 𝐮_𝐛 at the boundary. However, density and velocity are still linked by (<ref>). This is not a problem for a Dirichlet Zou/He boundary condition where only one physical value (either ρ or 𝐮) is imposed and the other one is computed with (<ref>). On the contrary, for a non-reflective outflow, both of them need to be imposed as the result given by the CBC method, so that the only condition set by the user is the value of a characteristic wave. It is then necessary to modify at least one known population. Schlaffer suggests correcting all populations in order to impose the correct density <cit.>. The solution proposed here is to add a correction on the population associated with the null velocity only:
g_0=f_0 + ρ_b - (ρ_0 + 2ρ_+)/(1-u),
which ensures the value of ρ_b. This choice is motivated by the fact that this added correction will only affect the collision phase and will not be streamed into the computational domain.
To sum up this method, g_1, g_2, g_3 and g_0 are computed with respectively (<ref>), (<ref>), (<ref>) and (<ref>) while other populations are kept unchanged after streaming:
g_i=f_i, i=4, 5, 6, 7, 8.
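Gathering the corrections above, a single-node sketch of this modified Zou/He adaptation could look as follows (same velocity numbering and constants as before; f is the 9-vector of one outlet node after streaming, and rho_b, u_b, v_b the CBC target state):

def zou_he_cbc(f, rho_b, u_b, v_b):
    """Impose the CBC target state on one x = L outlet node."""
    ub = np.array([u_b, v_b])
    cu = c @ ub
    feq = w * rho_b * (1.0 + cu / cs2 + 0.5 * (cu / cs2) ** 2
                       - 0.5 * (ub @ ub) / cs2)
    fneq = f - feq
    g = f.copy()
    g[1] = feq[1] + fneq[5] + 0.5 * (fneq[4] - fneq[8])   # missing populations
    g[2] = feq[2] + fneq[6]
    g[3] = feq[3] + fneq[7] - 0.5 * (fneq[4] - fneq[8])
    # rest-population correction ensuring the target density rho_b
    rho0 = f[0] + f[4] + f[8]
    rhop = f[5] + f[6] + f[7]
    g[0] = f[0] + rho_b - (rho0 + 2.0 * rhop) / (1.0 - u_b)
    return g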
This method can be easily transposed to any 3D lattice (except high-order lattices <cit.>) by using the general formula of the Zou/He boundary conditions that can be found in <cit.> for the missing populations, and correction (<ref>) to ensure the value of ρ_b and consequently 𝐮_𝐛.
§.§.§ Adaptation with the regularized method
The Zou/He boundary condition provides the advantage of a very good precision in the definition of the boundary physical values. Unfortunately, this solution suffers from a lack of stability at large Reynolds numbers. For example, it has been shown that Zou/He boundary conditions become unstable at Re>100 for a given resolution N=200 nodes per characteristic length in a 2D channel flow <cit.>.
Another possible adaptation is to use the regularized boundary condition in order to impose the physical values computed by the CBC theory. This solution is less accurate but far more stable, as shown by Latt et al. <cit.>.
More details about the regularized method for boundary conditions can be found for example in <cit.>. The purpose
of this section is to explain how this particular boundary condition
is used to impose ρ_b and 𝐮_b on a flat boundary.
To leading order, the populations f_i can be expressed (see (<ref>) and (<ref>)) as
f_i=f_i^(eq)(ρ_b,𝐮_𝐛)+f_i^(1)(Π^(1)).
On a boundary node the density ρ_b and velocity 𝐮_b are imposed. Therefore in order
to be able to use this last equation one needs a way to compute Π^(1). This is achieved
by using the fact that 𝐐_i is a symmetric tensor with respect to i which means that
𝐐_i=𝐐_opp(i), where
opp(i)={j|𝐜_i=-𝐜_j}.
At the leading order, this property leads to
f^(1)_i=f^(1)_opp(i).
The known f_i^(1) can be straightforwardly computed by the following formula
f^(1)_i=f_i-f_i^(eq)(ρ_b,𝐮_b).
With the last two equations, the set of f_i^(1) is complete (they are all known) and can be used to compute Π^(1) through (<ref>).
Then using (<ref>) one recomputes regularized f_i^(1) populations and the total populations f_i are computed with the relation (<ref>). This method is valid in both 2D and 3D and will be called Regularized Bounceback (or Regularized BB) adaptation in the next sections. Another possibility is to compute Π^(1) by a second order finite difference scheme thanks to (<ref>) and recompute f^(1) using (<ref>). This method will be called the Regularized FD adaptation.
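A single-node sketch of the Regularized BB adaptation, under the same conventions as the earlier sketches (Q was precomputed in the regularization sketch), is given below; the Regularized FD variant only changes the way Π^(1) is obtained, as noted in the final comment:

def regularized_cbc(f, rho_b, u_b, v_b):
    """Regularized BB adaptation on one x = L outlet node (f of shape (9,))."""
    opp = np.array([0, 5, 6, 7, 8, 1, 2, 3, 4])     # opposite-direction indices
    ub = np.array([u_b, v_b])
    cu = c @ ub
    feq = w * rho_b * (1.0 + cu / cs2 + 0.5 * (cu / cs2) ** 2
                       - 0.5 * (ub @ ub) / cs2)
    f1 = f - feq
    for i in (1, 2, 3):                  # unknown incoming populations:
        f1[i] = f1[opp[i]]               # bounce-back of the off-equilibrium part
    pi1 = np.einsum('iab,i->ab', Q, f1)
    f1_reg = w / (2.0 * cs2 ** 2) * np.einsum('iab,ab->i', Q, pi1)
    return feq + f1_reg

# Regularized FD variant: instead of the bounce-back completion, set
# pi1 = -2 * cs2 * rho_b * tau * S, with the strain rate tensor S evaluated
# by an upwind finite-difference scheme on the velocity field.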
§.§ Summary of the method
The non reflecting outflow boundary condition using CBC theory for LBM can be summarized as follows, considering everything is known at (non-dimensional) time t:
* Computation of the physical values that must be imposed at t+1 to avoid non physical reflections by the CBC theory using either BL-LODI, CBC-2D or LS-LODI method. These values are stored to be used in the last step.
* Collision step.
* Streaming step: some populations are missing at the boundary.
* Correction of the set of populations at the boundary so that the physical values stored in the first step are imposed, by using the Zou/He adaptation, the so-called Regularized Bounceback adaptation or the Regularized FD adaptation (a compact driver tying these steps together is sketched below).
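Put together with the sketches of the previous sections, one iteration of the scheme could read as follows (illustrative only; the inlet treatment and the choice of CBC variant are omitted):

def lbm_step(f, tau, p_inf, K1):
    """One full iteration with a CBC outlet at x = L, f of shape (9, nx, ny)."""
    rho, u = moments(f)
    targets = []                       # step 1: CBC target state, node by node
    for k in range(f.shape[2]):
        targets.append(lodi_update(rho[-1, k], u[0, -1, k], u[1, -1, k],
                                   rho[-2, k], u[0, -2, k], u[1, -2, k],
                                   p_inf, K1))
    f = collide_and_stream(f, tau)     # steps 2 and 3
    for k, (r_b, u_b, v_b) in enumerate(targets):   # step 4: rebuild outlet populations
        f[:, -1, k] = zou_he_cbc(f[:, -1, k], r_b, u_b, v_b)
    return f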
§ APPLICATION TO ACADEMIC CASES
In this section, the CBC approach for LBM is assessed on simple 2D cases: a normal plane wave, a plane wave with different incidence angles, a spherical density wave and a convected vortex.
§.§ 2D normal plane wave
The computational domain, a square of 200×200 cells, is initialized with a Gaussian plane wave as follows (in lattice units)
ρ_0=1+0.1*exp(-(x-x_0)^2/2R_c^2),
u_0=0.1,
v_0=0.1*exp(-(x-x_0)^2/2R_c^2),
with x_0=110 and 2R_c^2=20 in lattice units (i.e. in number of cells). The Reynolds number, computed with the horizontal non-dimensional initial velocity u_0, the characteristic size of the box in number of voxels and the viscosity in lattice units, is equal to 100.
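Using the sketches above, this initialization can be written as follows (populations set to their equilibrium values, which neglects the initial off-equilibrium part):

nx = ny = 200
X = np.tile(np.arange(nx, dtype=float)[:, None], (1, ny))
g = np.exp(-(X - 110.0) ** 2 / 20.0)      # 2 R_c^2 = 20
rho = 1.0 + 0.1 * g
u = np.stack([0.1 * np.ones((nx, ny)), 0.1 * g])
f = f_eq(rho, u)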
The boundary conditions are: vertical periodicity, a reflecting inlet on the left and a perfectly non-reflecting outflow on the right. For this pure 1D case, the choice of the CBC condition (baseline, local streamline LODI or LODI with transverse terms) has no effect on the absorption rate. Moreover, all three adaptations provide the same results to within 10^-6 and no stability issues have been encountered with the Zou/He adaptation for this test case. Thus, the results presented below have been obtained with the baseline LODI and the Zou/He adaptation.
This test case has been chosen so that the reflection rate for every macroscopic value (ρ, u and v) can be computed, since two pressure and axial velocity waves propagate at speed (u-c_s) and (u+c_s) and one transverse velocity wave propagates at speed u. The computed density waves are represented on Fig. <ref> at two different time steps: before reflection of the (u+c_s) wave on the non-reflective outlet and shortly after reflection.
A very low reflected amplitude can be distinguished. The (u-c_s) wave propagates to the left of the domain without encountering any boundary at the two observed time steps. It is only affected by viscous dissipation and is used as a reference amplitude in the computation of the reflection rates of density and axial velocity. For the transverse velocity wave, one can compute the reflection rate as the ratio between the reflected wave amplitude and the amplitude shortly before reflection. The obtained results are presented on Table <ref>.
As often with CBC, the treatment is more difficult for the pressure wave, but the results obtained here are in good agreement with what is found in the literature <cit.>.
It is noticed that ρ=1 is not correctly recovered at the outlet after reflection, as the boundary has been set as perfectly non-reflecting (ℒ_1=0). In order to impose the correct boundary condition, a relaxation should be implemented, as in Eq. (<ref>). However, in that case, the reflection rate will increase, as shown in <cit.>.
§.§ 2D plane wave with incidence
At t=0, the computational domain is at rest (ρ_0=1, 𝐮_0=0) except for an oblique line on which the density is set to ρ_0=1.1 in order to generate a plane wave with an incidence α. The reflection coefficient is measured by computing the ratio of maximal amplitudes in density waves only, as this is the most critical hydrodynamic variable. The three implementations are tested for this case: BL-LODI, LS-LODI and CBC-2D in the perfectly non-reflecting case. Again, only the results obtained with the Zou/He adaptation will be presented here, as the same results to within 10^-6 have been obtained with the Regularized BB and Regularized FD adaptations.
[Figure: Schematic plot of the initialization of the plane wave test case with an incidence angle α. The wave front, inclined at α, propagates at speed c_s towards the non-reflective outlet (right boundary); a point M of the front lies at a distance h from one extremity. The left boundary is a reflective inlet and the top and bottom boundaries are periodic. Spherical waves instantaneously appear at the two extremities to distort the plane wave.]
Because of a spurious phenomenon appearing at high incidence angles, only angles below 45^∘ could be computed. Indeed, contrary to the previous normal wave test case, the incident wave cannot be infinite in its transverse direction. Then, spherical waves appear at each extremity of the oblique line initializing the wave, as shown on Fig. <ref>. Let us imagine a point M located at a distance h from one extremity and moving with the oblique plane wave at velocity c_s. The spherical wave reaches M after a time h/c_s. The point M reaches the non-reflective outlet boundary condition at a time h tan(α)/c_s. The condition for this point of the wave to reach the outlet before being distorted is:
(h/c_s) tan(α) < h/c_s⇔tan(α) < 1,
which means that only incidence angles below 45^∘ can be computed with such a test case. This problem is avoided in <cit.> by imposing an 'exact' solution obtained on a larger computational domain until the desired wave reaches the boundary. The exact solution is then switched to the CBC condition. It has not been tested in this paper in order to avoid the possible acoustic waves generated by an abrupt change in the boundary condition.
The reflection coefficient of the density wave with respect to the incidence angle is represented on Fig. <ref> for BL-LODI, LS-LODI and CBC-2D. The results are compared with what is obtained for the same test case with the modified Thompson method with a coefficient γ=3/4 as introduced in <cit.>.
As expected, for the baseline approach (a), the reflection coefficient increases as the incidence angle increases. For the modified Thompson approach, the results are close to what can be found in <cit.>: the reflection rate slightly increases and reaches 5% at 40^∘ of incidence. On the contrary, when the CBC method is extended with the effects of transverse waves as in <cit.>, the reflection rate decreases until 30^∘ and then begins to increase. In the case of a local streamline LODI implementation, the coefficient remains stable at around 2%.
§.§ 2D Spherical wave
The computational domain, a square of 600×600 cells, is initialized with a Gaussian density profile in order to generate a spherical wave, as follows:
ρ=1+0.1*exp(-((x-x_0)^2+(y-y_0)^2)/2R_c^2),
with R_c = 3.2, x_0=520 and y_0=300 nodes.
The boundary conditions are periodic in the vertical direction, a CBC condition is set on the right boundary and a reflective boundary condition on the left (located far enough to avoid its influence on the spherical wave at the studied time steps). Four cases are computed: (a) baseline LODI, (b) CBC with transverse terms, (c) local streamline LODI and (d) reference case where the domain is enlarged in the horizontal direction so that the spherical wave is not affected by the boundary (Fig. <ref>). As for the previous test cases, the LBM adaptation of the CBC condition had no impact on the results.
The local reflection coefficient for such a spherical wave can be computed by the following formula:
r = (R-A_ref)/|I|,
where I is the amplitude of the original density wave running towards the boundary condition, R is the amplitude of the density wave after reflection at the outlet and A_ref is the amplitude of the density wave at the same lattice node compared to the reference case (d) (no reflection).
A map of reflection rates after reflection at the outlet (after 400 iterations) is shown on Fig. <ref>. For the baseline approach, one can see that the reflected wave becomes more important as the local incidence of the spherical wave increases: the incident wave can be approximated as a locally normal plane wave at the center of the outlet while, in the corners, the incidence becomes large (up to 75^∘). When adding transverse terms without any relaxation as in <cit.>, the observation confirms the behavior anticipated from the plane wave simulations: even if the reflection rate is consistently reduced for angles below 40^∘, it reaches 20% for the largest observed angles. On the contrary, the reflection rate remains constant with the local streamline LODI implementation.
§.§ 2D convected vortex
A 2D vortex is convected from left to right and exits the computational domain, a square of 600×600 grid points. Particular attention must be paid to the initialization of the Lamb-Oseen vortex <cit.>, which has to be adapted to the isothermal case in order to avoid the spurious waves generated by the adaptation of an inconsistent initial density. The initial conditions, in lattice units, are imposed as follows:
u = u_0 - β u_0 (y-y_0)/R_cexp( -r^2/2R_c^2),
v = β u_0 (x-x_0)/R_cexp( -r^2/2R_c^2),
ρ = [ 1- (β u_0)^2/2C_vexp( -r^2/R_c^2) ] ^1/(γ-1),
where u_0=0.1 in lattice units, β=0.5, x_0=y_0=300 nodes (the vortex is initially centered on the box), R_c=20 nodes and r^2=(x-x_0)^2+(y-y_0)^2. With the BGK collision operator with a single relaxation time, the simulated gas has the following constants <cit.>:
γ=D+2/D,
C_v=D/2 c_s^2,
where D=2 is the dimension of the problem. The specific heat capacity at constant volume C_v appears instead of C_p in (<ref>) because of an error in the heat flux obtained in the Navier-Stokes equations after the Chapman-Enskog development for athermal lattices <cit.>. The Reynolds number based on u_0 and the size of the computational domain is equal to 1000 and the Regularized BGK scheme is chosen for the collision step. This convected vortex test case is a well known test for boundary conditions as it often reveals spurious distortions at boundaries due to the presence of transverse terms in the Navier-Stokes equation <cit.>. As previously, top and bottom boundary conditions are periodic, the left condition is a regularized inlet and four different CBC conditions will be evaluated at the right boundary (a numerical sketch of the vortex initialization is given after the list): (a) baseline LODI with K_1=0,
(b) local streamline LODI with K_1=0,
(c) CBC-2D with ℒ_1=𝒯_1,
(d) CBC-2D relaxed including transverse terms with:
ℒ_1=σ (1-M^2)c_s^3/R_c(ρ-1) + (1-M)𝒯_1,
where M=0.2 and σ=0.9.
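The initialization above translates, with the conventions of the earlier sketches, into the following illustrative routine:

def lamb_oseen_isothermal(nx=600, ny=600, u0=0.1, beta=0.5, Rc=20.0):
    """Initial fields of the isothermal convected-vortex test case."""
    D = 2
    gamma = (D + 2) / D
    Cv = (D / 2) * cs2
    x, y = np.meshgrid(np.arange(nx, dtype=float),
                       np.arange(ny, dtype=float), indexing='ij')
    x0 = y0 = 300.0
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    g = np.exp(-r2 / (2.0 * Rc ** 2))
    u = u0 - beta * u0 * (y - y0) / Rc * g
    v = beta * u0 * (x - x0) / Rc * g
    rho = (1.0 - (beta * u0) ** 2 / (2.0 * Cv)
           * np.exp(-r2 / Rc ** 2)) ** (1.0 / (gamma - 1.0))
    return rho, np.stack([u, v])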
For low Reynolds numbers (Re < 1000), simulations (not shown here) provided the same results to within 10^-6. However, for Re=1000, the Zou/He adaptation was no longer stable for this test case, contrary to both regularized methods. The results presented here have been obtained with the Regularized BB adaptation.
Figs. <ref> and <ref> show isovalues of non-dimensional longitudinal velocity for the four studied cases. First, it can be noticed that, contrary to what was observed in the previous test cases, the use of the local streamline boundary condition does not reduce non-physical reflections compared to the baseline LODI. A possible explanation would be that, inside the vortex, local streamlines are nearly perpendicular to the direction of propagation of the local wave, whereas they were aligned in the case of a pure acoustic wave. The LODI equations are thus applied in a wrong frame, which generates non-physical reflections. Results are slightly better for the BL-LODI, for which the local frame is the correct one at least for locally normal waves. The addition of the transverse waves in the CBC-2D boundary allows the vortex to keep a correct shape at least until 1000 iterations. However, once the first half of the vortex has reached the outlet, it is distorted as in the BL-LODI case. The addition of relaxation coefficients allows the vortex to keep its shape at the last iteration, even if it is a bit distorted.
It can be noticed that the configuration appears to be symmetric for boundaries (a) and (c), which is not the case for (b) and (d). Indeed, with the LS-LODI, the streamlines used in the change of frame are not symmetric and, in the last case, the density field appears to be asymmetric, which has an influence on ℒ_1 through the relaxed pressure and thus on the longitudinal velocity fields.
§.§ Stability analysis
An analysis of numerical stability of the previous convected vortex test case at different Reynolds numbers and different grid resolutions has been carried out. It has been noticed that the CBC type (BL-LODI, LS-LODI or CBC-2D) had no impact on the stability as long as ℒ_1 is not relaxed: the numerical stability comes from the LBM adaptation itself.
Fig. <ref> compares the stability of the Zou/He adaptation and the Regularized BB adaptation. As predicted by <cit.>, the classical regularized boundary condition is more stable than the Zou/He adaptation. However, a very high resolution is still required in order to reach high Reynolds numbers. On the contrary, the Regularized FD adaptation has shown to be unconditionally numerically stable: in every configuration of the convected vortex test case, the first numerical instabilities came from the inside of the domain and not from the CBC boundary condition.
§ APPLICATION TO A NACA0015 PROFIL AT HIGH REYNOLDS NUMBER
The objective of this section is to demonstrate the robustness of the CBCs in a case relevant for high Reynolds number aerodynamic applications. The configuration is a NACA0015 profile, in an 8C×8C domain (with C=1 m the chord of the profile). The Mach number is set to 0.04 and the Reynolds number is set to Re=10^5. The lattice dimension is Δ x=1/400 m, corresponding to a time step Δ t=4.25×10^-6 s. Each simulation is run for 200,000 time steps, but only the last 100,000 time steps are kept for data post-processing. The simulations are performed on a 3,200×3,200-point 2D grid, to ensure numerical stability and a proper resolution of the flow patterns with a LES formalism. The sub-grid scale model is the Smagorinsky model, with a constant C_s=0.18, associated with a Regularized BGK collision scheme. In order to generate complex flow patterns, the profile is inclined with an angle of 15^∘ compared to the freestream velocity direction, which is purely axial. Three CBCs are tested: BL-LODI, LS-LODI and CBC-2D. The CBC solutions are compared with a reference solution obtained on a 16C×8C domain, Fig. <ref>. In this reference case, vortices do not leave the extended computational domain at the observed time steps and the initial acoustic wave is evacuated thanks to a Neumann outlet (populations at the boundary are copied from the neighboring node) associated with a 4C-wide viscosity sponge zone inside which the relaxation time is increased up to Δ t with a sinusoidal shape in order to smoothly increase the numerical dissipation. Among the CBC computations, only the regularized FD adaptation has been sufficiently robust to complete the simulations.
At this Reynolds number, the real flow should be 3D in the vicinity of the profile and in the wake, due to the development of turbulent flow patterns. However, the present 2D approach is not able to reproduce such 3D effects. Despite that, the trajectory of the vortices exhibits a chaotic behavior, as observed in the time-averaged flow field in Fig. <ref>. The challenges with the present test case are to ensure that:
* the initial acoustic wave generated by the presence of the airfoil leaves the domain with minimum reflection,
* the vortices generated by the boundary layer separation are correctly convected beyond the outlet plane with minimum spurious wave reflection.
The difficulty for such a test case lies in the chaotic nature of the flow, since the trajectory of the vortices is affected by small perturbations. Because of this phenomenon, the visualization of error fields, computed as the difference between the reference solution and each tested CBC, would not be compelling since the vortices are not superimposed across computations. A better choice is to plot the root mean square of the pressure field, defined as p_RMS=√(⟨ p'^2⟩)/p with ⟨·⟩ a time average, to underline the behavior of each CBC during the whole computation, as in Fig. <ref>.
It can be noticed that the baseline LODI approach (a) evacuates the initial acoustic wave and the convected vortices with minimal reflection. The only differences with the reference p_RMS field are the small perturbations observed at the outlet and a background noise which can be due to the reflection of the initial acoustic wave. The local streamline LODI approach (b) is less efficient: the background noise is more pronounced and the asymmetry observed in the previous academic test cases is still present, which increases pressure fluctuations at the outlet. In the CBC-2D case (c), the map of pressure fluctuations is smoother and the vortices do not seem to produce spurious reflections. But the overall p_RMS is increased in the whole domain, which can be due to a drift of the mean pressure because of the absence of relaxation in the boundary condition.
To further quantify the performance of the three CBC approaches, the time-averaged fluctuations of pressure p_RMS and velocity u_RMS, v_RMS are shown in Fig. <ref> at x/C=7.5 (close to the outlet where the CBC is applied). All CBCs show a good ability to predict the mean flow as well as pressure and velocity fluctuations. This test case remains a challenge for CBCs since the pressure and velocity fluctuations outside the wake are four orders of magnitude lower than the mean field value. In that regard, all CBC methods predict pressure and velocity fluctuations of the same order of magnitude as in the reference case. The CBC that provides the best results in terms of accuracy for both pressure and velocity fluctuations is the BL-LODI approach. The LS-LODI gives satisfactory results outside the wake but it overestimates p_RMS by a factor of 2 in the wake (at y/C=4) because of the asymmetry in the vortex reflections. The CBC-2D approach overestimates p_RMS outside the wake by a factor of 4 but it gives good results in the wake, similar to those obtained with the BL-LODI approach. Similar conclusions can be drawn for axial and transverse velocity fluctuations, except that all methods predict the correct velocity fluctuations in the wake, including the LS-LODI approach.
§ SUMMARY AND CONCLUSION
An implementation of a non-reflective outflow boundary condition, which needs neither additional absorbing layers nor extended domains, has been proposed for a lattice Boltzmann solver. The methods presented here are based on the Characteristic Boundary Conditions (CBC) with the classical LODI approach (BL-LODI), its extension to transverse waves (CBC-2D) and the LODI approach in the local streamline based frame (LS-LODI). Three ways of computing the missing populations in order to impose the CBC physical values have been introduced. The first one is based on the classical Zou/He boundary condition, while the other ones are based on the more stable regularized boundary condition: the so-called "Regularized BB" adaptation, where the off-equilibrium part of the stress tensor is evaluated with a bounceback rule, and the "Regularized FD" adaptation, where it is computed with an upwind finite difference scheme. All these methods provided very good results in the test case of a normal plane wave, where computed reflection rates were about 1%. Test cases of a plane wave with incidence and a spherical wave showed that the reflection rate increases with the incidence angle for the BL-LODI adaptation. When adding the effect of transverse terms, the reflected wave is considerably reduced but begins to increase for incidence angles greater than 30^∘. For these pure acoustic waves, the LS-LODI adaptation provided the best results since the reflection rate remained below 5% whatever the incidence angle. However, this method is not adapted for a convected vortex, for which the CBC-2D adaptation with relaxation on density and transverse waves provided the best results and led to only slight distortions of the velocity fields. As regards the numerical stability of the implemented CBC, the regularized adaptations have proved to be far more stable than the Zou/He one. The regularized FD adaptation associated with a regularized BGK scheme made it possible to run the NACA0015 case at high Reynolds number (Re=10^5) in 2D with N=400 cells per chord length, which was not possible with the Zou/He or Regularized BB adaptations. Thus, the use of the regularized FD adaptation is well advised for high Reynolds number computations.
§ ACKNOWLEDGEMENTS
The authors are grateful to the Calmip computing center of the Federal University of Toulouse (Project account number P1425) for providing all resources that have been used in this work. The authors would also thank J.F. Boussuge from CERFACS for his help on post-processing and the discussion about the method.
elsarticle-num
10
url<#>1urlprefixURL href#1#2#2 #1#1
Tucker:2014
P. G. Tucker, J. R. DeBonis, Aerodynamics, computers and the environment,
Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences 372 (2022) (2014) 20130331–20130331.
Chen_AnnuRevFluid_30_1998
S. Chen, G. D. Doolen, Lattice Boltzmann method for fluid flows, Annual
Review of Fluid Mechanics 30 (1) (1998) 329–364.
Succi_2001
S. Succi, The Lattice Boltzmann Equation: For Fluid Dynamics and
Beyond, Numerical Mathematics and Scientific Computation, Clarendon Press,
2001.
Lallemand_PhysRevE_61_2000
P. Lallemand, L.-S. Luo, Theory of the lattice Boltzmann method:
Dispersion, dissipation, isotropy, Galilean invariance, and stability,
Phys. Rev. E 61 (2000) 6546–6562.
Buick_EPL_43_1998
J. M. Buick, C. A. Greated, D. M. Campbell, Lattice BGK simulation of
sound waves, EPL (Europhysics Letters) 43 (3) (1998) 235.
Marie_JCP_228_2009
S. Marié, D. Ricot, P. Sagaut, Comparison between lattice Boltzmann
method and Navier-Stokes high order schemes for computational
aeroacoustics, Journal of Computational Physics 228 (4) (2009) 1056 – 1070.
Heuveline_CMAP_58_2009
V. Heuveline, M. J. Krause, J. Latt, Towards a hybrid parallelization of
lattice Boltzmann methods, Computers & Mathematics with Applications
58 (5) (2009) 1071 – 1080, mesoscopic Methods in Engineering and Science.
Colonius_AnnuRevFluid_36_2004
T. Colonius, Modeling artificial boundary conditions for compressible flow,
Annual Review of Fluid Mechanics 36 (1) (2004) 315–345.
Bodony_JCP_212_2006
D. J. Bodony, Analysis of sponge zones for computational fluid mechanics,
Journal of Computational Physics 212 (2) (2006) 681 – 702.
Israeli_JCP_41_1981
M. Israeli, S. A. Orszag, Approximation of radiation boundary conditions,
Journal of Computational Physics 41 (1) (1981) 115 – 135.
Poinsot_JCP_101_1992
T. Poinsot, S. Lele, Boundary conditions for direct simulations of compressible
viscous flows, Journal of Computational Physics 101 (1) (1992) 104 – 129.
Yoo_CTM_9_2005
C. S. Yoo, Y. Wang, A. Trouvé, H. G. Im, Characteristic boundary conditions
for direct simulations of turbulent counterflow flames, Combustion Theory and
Modelling 9 (4) (2005) 617–646.
Lodato_JCP_227_2008
G. Lodato, P. Domingo, L. Vervisch, Three-dimensional boundary conditions for
direct and large-eddy simulation of compressible viscous flows, Journal of
Computational Physics 227 (10) (2008) 5105 – 5143.
Izquierdo_PhysRevE_78_2008
S. Izquierdo, N. Fueyo, Characteristic nonreflecting boundary conditions for
open boundaries in lattice Boltzmann methods, Phys. Rev. E 78 (2008)
046707.
Ginzburg:2008
I. Ginzburg, F. Verhaeghe, D. d'Humieres, Two-relaxation-time lattice boltzmann
scheme: About parametrization, velocity, pressure and mixed boundary
conditions, Comm. Comp. Phys. 3 (2008) 427–478.
Dhumieres:1992
D. D'Humieres, Generalized lattice-Boltzmann equations, Progress in
Astronautics and Aeronautics (159) (1992) 450–458.
Jung:2015
N. Jung, H. W. Seo, C. S. Yoo, Two-dimensional characteristic boundary
conditions for open boundaries in the lattice Boltzmann methods, Journal of
Computational Physics 302 (August) (2015) 191–199.
Heubes_JCAM_262_2014
D. Heubes, A. Bartel, M. Ehrhardt, Characteristic boundary conditions in the
lattice Boltzmann method for fluid and gas dynamics, Journal of
Computational and Applied Mathematics 262 (2014) 51 – 61, selected Papers
from NUMDIFF-13.
Schlaffer:2013
M. B. Schlaffer, Non-reflecting Boundary Conditions for the Lattice Boltzmann
Method, Ph.D. thesis, Technische Universität Münschen (2013).
Zou_PhysFluids_9_1997
Q. Zou, X. He, On pressure and velocity boundary conditions for the lattice
Boltzmann BGK model, Physics of Fluids 9 (6) (1997) 1591–1598.
Latt_PhysRevE_77_2008
J. Latt, B. Chopard, O. Malaspinas, M. Deville, A. Michler, Straight velocity
boundaries in the lattice Boltzmann method, Phys. Rev. E 77 (2008) 056703.
Latt:2006
J. Latt, B. Chopard, Lattice Boltzmann method with regularized pre-collision
distribution functions, Mathematics and Computers in Simulation 72 (2-6)
(2006) 165–168.
http://arxiv.org/abs/0506157 arXiv:0506157.
Malaspinas:2015
O. Malaspinas, Increasing stability and accuracy of the lattice Boltzmann
scheme: recursivity and regularization (2015) 1–31http://arxiv.org/abs/1505.06900 arXiv:1505.06900.
Bhatnaghar_PhysRev_94_1954
P. L. Bhatnagar, E. P. Gross, M. Krook, A Model for Collision Processes
in Gases. I. Small Amplitude Processes in Charged and Neutral
One-Component Systems, Phys. Rev. 94 (1954) 511–525.
Qian_EPL_17_1992
Y. H. Qian, D. D'Humières, P. Lallemand, Lattice BGK Models for
Navier-Stokes Equation, EPL (Europhysics Letters) 17 (6) (1992) 479.
Chapman_1952
S. Chapman, T. Cowling, The mathematical theory of non-uniform gases: an
account of the kinetic theory of viscosity, thermal conduction, and diffusion
in gases, no. vol. 2, University Press, 1952.
Malaspinas_CompFluids_49_2011
O. Malaspinas, B. Chopard, J. Latt, General regularized boundary condition for
multi-speed lattice Boltzmann models, Computers & Fluids 49 (1) (2011) 29
– 35.
Yoo:2007
C. S. Yoo, H. G. Im, Characteristic boundary conditions for simulations of
compressible reacting flows with multi-dimensional, viscous and reaction
effects, Combustion Theory and Modelling 11 (2) (2007) 259–286.
Albin_CompFluids_51_2011
E. Albin, Y. D’Angelo, L. Vervisch, Flow streamline based Navier-Stokes
Characteristic Boundary Conditions: Modeling for transverse and
corner outflows, Computers & Fluids 51 (1) (2011) 115 – 126.
Philippi2006
P. C. Philippi, L. A. Hegele, L. O. E. dos Santos, R. Surmas, From the
continuous to the lattice Boltzmann equation: The discretization problem and
thermal models, Physical Review E 73 (5) (2006) 056702.
Lamb:1932
H. Lamb, Hydrodynamics, 6th Edition, Cambridge University Press, 1932.
Guo:2007
Z. Guo, C. Zheng, B. Shi, T. S. Zhao, Thermal lattice Boltzmann equation for
low Mach number flows: Decoupling model, Phys. Rev. E 75 (3) (2007) 1–15.
Yoo_CTM_11_2007
C. S. Yoo, H. G. Im, Characteristic boundary conditions for simulations of
compressible reacting flows with multi-dimensional, viscous and reaction
effects, Combustion Theory and Modelling 11 (2) (2007) 259–286.
|
http://arxiv.org/abs/1701.08172v2 | 20170127190847 | Unveiling $ν$ secrets with cosmological data: neutrino masses and mass hierarchy | [
"Sunny Vagnozzi",
"Elena Giusarma",
"Olga Mena",
"Katherine Freese",
"Martina Gerbino",
"Shirley Ho",
"Massimiliano Lattanzi"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph"
] |
[email protected]
The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden
[email protected]
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720-8153, USA
Berkeley Center for Cosmological Physics, University of California, Berkeley, CA 94720, USA
[email protected]
Instituto de Física Corpuscular (IFIC), Universidad de Valencia-CSIC, E-46980, Valencia, Spain
The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden
Michigan Center for Theoretical Physics, Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA
The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden
McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Lawrence Berkeley National Laboratory (LBNL), Physics Division, Berkeley, CA 94720-8153, USA
Berkeley Center for Cosmological Physics, University of California, Berkeley, CA 94720, USA
Dipartimento di Fisica e Scienze della Terra, Università di Ferrara, I-44122 Ferrara, Italy
Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Ferrara, I-44122 Ferrara, Italy
Using some of the latest cosmological datasets publicly available, we derive the strongest bounds in the literature on the sum of the three active neutrino masses, M_ν, within the assumption of a background flat ΛCDM cosmology. In the most conservative scheme, combining Planck cosmic microwave background (CMB) temperature anisotropies and baryon acoustic oscillations (BAO) data, as well as the up-to-date constraint on the optical depth to reionization (τ), the tightest 95% confidence level (C.L.) upper bound we find is M_ν<0.151 eV. The addition of Planck high-ℓ polarization data, which however might still be contaminated by systematics, further tightens the bound to M_ν<0.118 eV. A proper model comparison treatment shows that the two aforementioned combinations disfavor the inverted mass hierarchy (IH) at ∼ 64% C.L. and ∼ 71% C.L. respectively. In addition, we compare the constraining power of measurements of the full-shape galaxy power spectrum versus the BAO signature, from the BOSS survey. Even though the latest BOSS full shape measurements cover a larger volume and benefit from smaller error bars compared to previous similar measurements, the analysis method commonly adopted results in their constraining power still being weaker than that of the extracted BAO signal. Our work uses only cosmological data; imposing the constraint M_ν>0.06 eV from oscillation data would raise the quoted upper bounds by O(0.1σ) and would not affect our conclusions.
Unveiling ν secrets with cosmological data: neutrino masses and mass hierarchy
Massimiliano Lattanzi
==============================================================================
§ INTRODUCTION
The discovery of neutrino oscillations, which resulted in the 2015 Nobel Prize in Physics <cit.>, has robustly established the fact that neutrinos are massive <cit.>. The results from oscillation experiments can therefore be successfully explained assuming that the three neutrino flavour eigenstates (ν_e, ν_μ, ν_τ) are quantum superpositions of three mass eigenstates (ν_1, ν_2, ν_3). In analogy to the quark sector, flavour and mass eigenstates are related via a mixing matrix parametrized by three mixing angles (θ_12, θ_13, θ_23) and a CP-violating phase δ_CP.
Global fits <cit.> to oscillation measurements have determined with unprecedented accuracy five oscillation parameters, namely sin^2θ_12, sin^2θ_13, sin^2θ_23, as well as the two mass-squared splittings governing the solar and the atmospheric transitions. The solar mass-squared splitting is given by Δ m_21^2 ≡ m^2_2 - m^2_1 ≃ 7.6× 10^-5 eV^2. Because of matter effects in the Sun, we know that the mass eigenstate with the larger electron neutrino fraction is the one with the smallest mass. We identify the lighter state with “1” and the heavier state (which has a smaller electron neutrino fraction) with “2”. Consequently, the solar mass-squared splitting is positive. The atmospheric mass-squared splitting is instead given by |Δ m_31^2| ≡ |m^2_3 - m^2_1|≃ 2.5× 10^-3 eV^2. Since the sign of the largest mass-squared splitting |Δ m_31^2| remains unknown, there are two possibilities for the mass ordering: the normal hierarchy (NH, Δ m_31^2 > 0, with m_1<m_2<m_3) and the inverted hierarchy (IH, Δ m_31^2 < 0, and m_3<m_1<m_2). Other unknowns in the neutrino sector are the presence of CP-violation effects (i.e. the value of δ_CP), the θ_23 octant, the Dirac versus Majorana neutrino nature, and, finally, the absolute neutrino mass scale; see Ref. <cit.> for a recent review on unknowns of the neutrino sector.
Cosmology can address two out of the above five unknowns: the absolute mass scale and the mass ordering. Through background effects, cosmology is to zeroth-order sensitive to the absolute neutrino mass scale, that is, to the quantity:
M_ν≡ m_ν_1 + m_ν_2 + m_ν_3 ,
where m_ν_i denotes the mass of the ith neutrino mass eigenstate. Indeed, the tightest current bounds on the neutrino mass scale come from cosmological probes, see for instance <cit.>. More subtle perturbation effects make cosmology in principle sensitive to the mass hierarchy as well (see e.g. <cit.> for comprehensive reviews on the impact of nonzero neutrino masses on cosmology), although not with current datasets.
As light massive particles, relic neutrinos are relativistic in the early Universe and contribute to the radiation energy density. However, when they turn non-relativistic at late times, their energy density contributes to the total matter density. Thus, relic neutrinos leave a characteristic imprint on cosmological observables, altering both the background evolution and the spectra of matter perturbations and Cosmic Microwave Background (CMB) anisotropies (see <cit.> as well as the recent <cit.> for a detailed review on massive neutrinos in cosmology, in light of both current and future datasets). The effects of massive neutrinos on cosmological observables will be discussed in detail in Sec. <ref>.
Cosmological probes are primarily sensitive to the sum of the three active neutrino masses M_ν. The exact distribution of the total mass among the three mass eigenstates induces sub-percent effects on the different cosmological observables, which are below the sensitivities of ongoing and near future experiments <cit.>. As a result, cosmological constraints on M_ν are usually obtained by making the assumption of a fully degenerate mass spectrum, with the three neutrinos sharing the total mass [m_ν_i=M_ν/3, with i=1,2,3, which we will later refer to as 3deg, see Eq.(<ref>)]. Strictly speaking, this is a valid approximation as long as the mass of the lightest eigenstate, m_0 ≡ m_1 [m_3] in the case of NH [IH], satisfies:
m_0 ≫| m_i-m_j | , ∀ i,j=1,2,3.
The approximation might fail in capturing the exact behaviour of massive neutrinos when M_ν is close to its minimal value allowed by oscillation measurements, √(Δ m_21^2)+√(|Δ m_31^2|)≃ 0.06 eV [√(|Δ m_31^2|)+√(|Δ m_31^2|+Δ m_21^2)≃ 0.1 eV] in the NH [IH] scenario <cit.>; see Appendix A for detailed discussions. Furthermore, it has been argued that the ability to reach a robust upper bound on the total neutrino mass below the IH minimal mass of 0.1 eV would imply having discarded the inverted hierarchy scenario at some statistical significance. In this case, one has to provide a rigorous statistical treatment of the preference for one hierarchy over the other <cit.>.
We will be presenting results obtained within the approximation of three massive degenerate neutrinos. That is, we consider the following mass scheme, which we refer to as 3deg:
m_1 = m_2 = m_3 = M_ν/3 (3deg) .
This approximation has been adopted by the vast majority of works when M_ν is allowed to vary. This includes the Planck collaboration, which recently obtained M_ν < 0.234 eV at 95% C.L. <cit.> through a combination of temperature and low-ℓ polarization anisotropy measurements, within the assumption of a flat ΛCDM+M_ν cosmology. Physically speaking, this choice is dictated by the observation that the impact of the NH and IH mass splittings on cosmological data is tiny if one compares the 3deg approximation to the corresponding NH and IH models with the same value of the total mass M_ν (see Appendix A for further discussions). For the purpose of comparison with previous work, in Appendix B we briefly discuss other less physical approximations which have been introduced in the recent literature, as well as some of the bounds obtained on M_ν within such approximations.
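As an illustration of the mass schemes discussed above, the minimal Python sketch below (not part of the original analysis; the splittings are the best-fit values quoted in the Introduction) solves for the individual masses given M_ν in the 3deg, NH, and IH cases:

import numpy as np
from scipy.optimize import brentq

DM21 = 7.6e-5   # solar mass-squared splitting, eV^2
DM31 = 2.5e-3   # atmospheric mass-squared splitting (absolute value), eV^2

def masses(M_nu, scheme="NH"):
    """Individual neutrino masses (eV) for a given total mass M_nu (eV)."""
    if scheme == "3deg":
        return np.full(3, M_nu / 3.0)
    if scheme == "NH":   # m1 < m2 < m3, the lightest state is m1
        total = lambda m0: m0 + np.sqrt(m0**2 + DM21) + np.sqrt(m0**2 + DM31)
    else:                # IH: m3 < m1 < m2, the lightest state is m3
        total = lambda m0: m0 + np.sqrt(m0**2 + DM31) + np.sqrt(m0**2 + DM31 + DM21)
    # Solve total(m0) = M_nu for the lightest mass; M_nu must exceed the
    # minimal value allowed by oscillations (0.06 eV for NH, 0.1 eV for IH).
    m0 = brentq(lambda m: total(m) - M_nu, 0.0, M_nu)
    if scheme == "NH":
        return np.array([m0, np.sqrt(m0**2 + DM21), np.sqrt(m0**2 + DM31)])
    return np.array([np.sqrt(m0**2 + DM31), np.sqrt(m0**2 + DM31 + DM21), m0])

for scheme in ("3deg", "NH", "IH"):
    print(scheme, masses(0.15, scheme))

For M_ν = 0.15 eV the three schemes already yield very similar spectra, which is why the 3deg approximation is adequate in the mass range probed here.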
We present the constraints in light of the most recent cosmological data publicly available. In particular, we make use of i) measurements of the temperature and polarization anisotropies of the CMB as reported by the Planck satellite in the 2015 data release; ii) baryon acoustic oscillations (BAO) measurements from the SDSS-III BOSS data release 11 CMASS and LOWZ samples, and from the Six-degree Field Galaxy Survey (6dFGS) and WiggleZ surveys; iii) measurements of the galaxy power spectrum of the CMASS sample from the SDSS-III BOSS data release 12; iv) local measurements of the Hubble parameter (H_0) from the Hubble Space Telescope; v) the latest measurement of the optical depth to reionization (τ) coming from the analysis of the high-frequency channels of the Planck satellite, and vi) cluster counts from the observation of the thermal Sunyaev-Zeldovich (SZ) effect by the Planck satellite.
In addition to providing bounds on M_ν, we also use these bounds to provide a rigorous statistical treatment of the preference for the NH over the IH. We do so by applying the simple but rigorous method proposed in <cit.>, and evaluate both posterior odds for NH against IH, as well as the C.L. at which current datasets can disfavor the IH.
The paper is organized as follows. In Sec. <ref>, we describe our analysis methodology. In Sec. <ref> we instead provide a careful description of the datasets employed, complemented with a full explanation of the physical effects of massive neutrinos on each of them. We showcase our main results in Sec. <ref>, with Sec. <ref> in particular devoted to an analysis of the relative constraining power of shape power spectrum versus geometrical BAO measurements, whereas in Sec. <ref> we provide a rigorous quantification of the exclusion limits on the inverted hierarchy from current datasets. Finally, we draw our conclusions in Sec. <ref>.
For the reader who wants to skip to the results: the most important results of this paper can be found in Tabs. <ref>, <ref>, <ref>. The first two of these tables present the most constraining 95% C.L. bounds on the sum of the neutrino masses using a combination of CMB (temperature and polarization), BAO, and other external datasets. The bounds in Tab. <ref> have been obtained using also small-scale CMB polarization data which may be contaminated by systematics, yet we present the results as they are useful for comparing to previous work. Finally Tab. <ref> presents exclusion limits on the Inverted Hierarchy neutrino mass ordering, which is disfavored at about 70% C.L. statistical significance.
§ ANALYSIS METHOD
In the following we shall provide a careful description of the statistical methods employed in order to obtain the bounds on the sum of the three active neutrino masses we show in Sec. <ref>, as well as caveats to our analyses. Furthermore, we provide a brief description of the statistical method adopted to quantify the exclusion limits on the IH from our bounds on M_ν. For more details on the latter, we refer the reader to <cit.> where this method was originally described.
§.§ Bounds on the total neutrino mass
In our work, we perform standard Bayesian inference (see e.g. <cit.> for recent reviews) to derive constraints on the sum of the three active neutrino masses. That is, given a model described by the parameter vector θ, and a set of data x, we derive posterior probabilities of the parameters given the data, p(θ|x), according to:
p(θ|x) ∝ L(x|θ)p(θ) ,
where L(x|θ) is the likelihood function of the data given the model parameters, and p(θ) denotes the data-independent prior. We derive the posteriors using a Markov Chain Monte Carlo (MCMC) sampler with an efficient sampling method <cit.>. To assess the convergence of the generated chains, we employ the Gelman and Rubin statistic <cit.> R-1, which we require to satisfy R-1<0.01 when the datasets do not include SZ cluster counts, and R-1<0.03 otherwise (this choice is dictated by time and resource considerations: to achieve the same convergence, runs involving SZ cluster counts are more computationally expensive than those that do not include them). In this way, the contribution from statistical fluctuations is roughly a few percent of the limits quoted. [Notice that this is a very conservative requirement, as a convergence of 0.05 is typically more than sufficient for the exploration of the posterior of a parameter whose distribution is unimodal <cit.>.]
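For concreteness, a schematic single-parameter version of the R-1 diagnostic reads as follows; this is a textbook sketch (standard samplers additionally split chains and apply degrees-of-freedom corrections):

import numpy as np

def gelman_rubin(chains):
    """R-1 for one parameter; `chains` has shape (n_chains, n_samples)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return var_hat / W - 1.0

# Toy check: chains sampling the same Gaussian should give R-1 close to 0.
rng = np.random.default_rng(0)
print(gelman_rubin(rng.normal(size=(4, 10_000))))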
We work under the assumption of a background flat ΛCDM Universe, and thus consider the following seven-dimensional parameter vector:
θ≡{Ω_b h^2, Ω _c h^2, Θ_s, τ, n_s, log(10^10A_s), M_ν} .
Here, Ω_bh^2 and Ω_ch^2 denote the physical baryon and dark matter energy densities respectively, Θ_s is the ratio of the sound horizon to the angular diameter distance at decoupling, τ indicates the optical depth to reionization, whereas the details of the primordial density fluctuations are encoded in the amplitude (A_s) and the spectral index (n_s) of its power spectrum at the pivot scale k_⋆ = 0.05 h Mpc ^-1. Finally, the sum of the three neutrino masses is denoted by M_ν. For all these parameters, a uniform prior is assumed unless otherwise specified.
Concerning M_ν, we impose the requirement M_ν≥ 0. Thus, we ignore prior information from oscillation experiments, which, as previously stated, set a lower limit of ∼0.06 eV [0.10 eV] for the NH [IH] mass ordering. If we instead had chosen not to ignore prior information from oscillation experiments, the result would be a slight shift of the center of mass of our posteriors on M_ν towards higher values. As a consequence of these shifts, the 95% C.L. upper limits we report would also be shifted to slightly higher values. Nonetheless, in this way we can obtain an independent upper limit on M_ν from cosmology alone, while at the same time making the least amount of assumptions. It also allows us to remain open to the possibility of cosmological models predicting a vanishing neutrino density today, or models where the effect of neutrino masses on cosmological observables is hidden due to degeneracies with other parameters (see e.g. <cit.>). One can get a feeling for the size of the shifts by comparing our results to those of <cit.>, where a prior M_ν≥ 0.06 eV was assumed. As we see, the size of the shifts is small, of O(0.1σ). We summarize the priors on cosmological parameters, as well as some of the main nuisance parameters, in Tab. <ref>.
All the bounds on M_ν reported in Sec. <ref> are 95% C.L. upper limits. These bounds depend more or less strongly on our assumption of a background flat ΛCDM model, and would differ if one were to consider extended parameter models, for instance scenarios in which the number of relativistic degrees of freedom N_eff and/or the dark energy equation of state w are allowed to vary, or if the assumption of flatness is relaxed, and so on. For recent related studies considering extensions to the minimal ΛCDM model we refer the reader to e.g. <cit.>, as well as Sec. <ref>. For other recent studies which investigate the effect of systematics or the use of datasets not considered here (e.g. cross-correlations between CMB and large-scale structure) see e.g. <cit.>.
§.§ Model comparison between mass hierarchies
As we discussed previously, several works have argued that reaching an upper bound on M_ν of order 0.1 eV would imply having discarded the IH at some statistical significance. In order to quantify the exclusion limits on the IH, a proper model comparison treatment, thus rigorously taking into account volume effects, is required. Various methods which allow the estimation of the exclusion limits on the IH have been devised in the recent literature, see e.g. <cit.>. Here, we will briefly describe the simple but rigorous model comparison method which we will use in our work, proposed by Hannestad and Schwetz in <cit.>, and based on previous work in <cit.>. The method allows the quantification of the statistical significance at which the IH can be discarded, given the cosmological bounds on M_ν. We refer the reader to the original paper <cit.> for further details.
Let us again consider the likelihood function L of the data x given a set of cosmological parameters θ, the mass of the lightest neutrino m_0 = m_1 [m_3] for NH [IH], and the discrete parameter H representing the mass hierarchy, with H=N [I] for NH [IH] respectively: L(x|θ,m_0,H). Then, given the prior(s) on cosmological parameters p(θ), we define the likelihood marginalized over cosmological parameters θ assuming a mass hierarchy H, E_H(m_0), as:
E_H(m_0) ≡∫ dθ L(x|θ,m_0,H)p(θ) = L(x| m_0, H) .
Imposing a uniform prior m_0 ≥ 0 eV and assuming factorizable priors for the other cosmological parameters, it is not hard to show that, as a consequence of Bayes' theorem, the posterior probability of a mass hierarchy H given the data x, p_H ≡ p(H|x), can be obtained as follows:
p_H = p(H)∫_0^∞ dm_0 E_H(m_0)/p(N)∫_0^∞ dm_0 E_N(m_0)+p(I)∫_0^∞ dm_0 E_I(m_0) ,
where p(N) and p(I) denote priors on the NH and IH respectively, with p(N)+p(I)=1. The posterior odds of NH against IH are then given by p_N/p_I, whereas the C.L. at which the IH is disfavored, which we refer to as CL_ IH, is given by:
CL_ IH = 1-p_I .
The expression in Eq. <ref> is correct as long as the assumed prior on m_0 is uniform, and the priors on the other cosmological parameters are factorizable. Different choices of priors on m_0 will of course lead to a larger or smaller preference for the NH. As an example, <cit.> considered the effect of logarithmic priors, showing that this leads to a strong preference for the NH (see, however, <cit.>).
Another valid possibility, which has not explicitly been considered in the recent literature, is that of performing model comparison between the two neutrino hierarchies by imposing a uniform prior on M_ν instead of m_0. In this case, it is easy to show that the posterior odds for NH against IH, p_N/p_I, are given by (considering for simplicity the case where NH and IH are assigned equal priors):
p_N/p_I≡∫_0.06 eV^∞dM_ν E(M_ν)/∫_0.10 eV^∞dM_ν E(M_ν) ,
where analogously to Eq. (<ref>), we define the marginal likelihood E(M_ν) as:
E(M_ν) ≡∫ dθ L(x|θ,M_ν)p(θ) = L(x| M_ν) .
It is actually easy to show that in the low-mass region of parameter space currently favoured by cosmological data, i.e. M_ν≲ 0.15 eV, the posterior odds for NH against IH one obtains by choosing a flat prior on M_ν [Eq. (<ref>)] or a flat prior on m_0 [Eq. (<ref>)] are to very good approximation equal. It is also interesting to note that, as is easily seen from Eq. (<ref>), cosmological data will always prefer the normal hierarchy over the inverted hierarchy, simply as a consequence of volume effects: that is, the volume of parameter space available to the normal hierarchy (M_ν>0.06 eV) is greater than that available to the inverted hierarchy (M_ν>0.1 eV). For this reason, the way the prior volume is weighted plays a crucial role in determining the preference for one hierarchy over the other (see discussions in <cit.>).
In our work, we choose to follow the prescription of <cit.> (based on a uniform prior on m_0) and hence apply Eq. (<ref>) to determine the preference for the normal hierarchy over the inverted one from cosmological data.
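In the low-mass regime currently favored by the data, the posterior odds above can be estimated directly from an MCMC chain for M_ν obtained with a flat M_ν ≥ 0 prior, by weighing the posterior mass above each hierarchy's threshold. The following Python sketch is an illustrative approximation of this procedure (the method of <cit.> that we actually adopt marginalizes over m_0 instead):

import numpy as np

def odds_NH_IH(mnu_samples):
    """Posterior odds p_N/p_I from samples of M_nu (in eV), assuming a flat
    prior on M_nu >= 0 and equal prior probabilities for the two hierarchies."""
    mnu = np.asarray(mnu_samples)
    return np.mean(mnu > 0.06) / np.mean(mnu > 0.10)

# Toy example: a half-Gaussian posterior peaked at M_nu = 0 with width 0.06 eV.
rng = np.random.default_rng(1)
samples = np.abs(rng.normal(0.0, 0.06, size=200_000))
odds = odds_NH_IH(samples)
print("p_N/p_I =", odds, " CL_IH =", odds / (1.0 + odds))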
§ DATASETS AND THEIR SENSITIVITY TO M_Ν
We present below a detailed description of the datasets used in our analyses and their modeling, discussing their sensitivity to the sum of the active neutrino masses. For clarity, all the denominations of the combinations of datasets we consider are summarized in Tab. <ref>. For plots comparing cosmological observables in the presence or absence of massive neutrinos, we refer the reader to <cit.> and especially Fig. 1 of the recent <cit.>.
§.§ Cosmic Microwave Background
Neutrinos leave an imprint on the CMB (both at the background and at the perturbation level) in, at least, five different ways, extensively explored in the literature <cit.>:
* By delaying the epoch of matter-radiation equality, massive neutrinos lead to an enhanced early integrated Sachs-Wolfe (EISW) effect <cit.>. This effect is due to the time-variation of gravitational potentials which occurs during the radiation-dominated, but not during the matter-dominated era, and leads to an enhancement of the first acoustic peak in particular. Traditionally this has been the most relevant neutrino mass signature as far as CMB data is concerned.
* Because of the same delay as above, light (f_ν<0.1) massive neutrinos actually increase the comoving sound horizon at decoupling r_s(z_dec), thus increasing the angular size of the sound horizon at decoupling Θ_s and shifting all the peaks to lower multipoles ℓ's <cit.>.
* By suppressing structure growth on small scales through their large thermal velocities (see further details in Sec. <ref>), massive neutrinos reduce the lensing potential and hence the smearing of the high-ℓ multipoles due to gravitational lensing <cit.>. This is a promising route towards determining both the absolute neutrino mass scale and the neutrino mass hierarchy, see e.g. Ref. <cit.>, because it probes the matter distribution in the linear regime at higher redshift, and because the unlensed background is precisely understood. CMB lensing suffers from systematics as well, although these tend to be of instrumental origin and hence decrease with higher resolution. In fact, a combination of CMB-S4 <cit.> lensing and DESI <cit.> BAO is expected to achieve an uncertainty on M_ν of 0.016 eV <cit.>.
* Massive neutrinos will also lead to a small change in the diffusion scale, which affects the photon diffusion pattern at high-ℓ multipoles <cit.>, although again this effect is important only for neutrinos which are non-relativistic at decoupling, i.e. for M_ν>0.6 eV.
* Finally, since the enhancement of the first peak due to the EISW depends, in principle, on the precise epoch of transition to the non-relativistic regime of each neutrino species, that is, on the individual neutrino masses, future CMB-only measurements such as those of <cit.> could, although only in a very optimistic scenario, provide some hints to unravel the neutrino mass ordering <cit.>. Current data instead has no sensitivity to this effect. [The effect is tiny at all multipoles, hence well beyond the reach of Planck. The effect will be below the reach of ground-based Stage-III experiments such as Advanced ACTPol <cit.>, SPT-3G <cit.>, the Simons Array <cit.> and the Simons Observatory <cit.>. It will most likely be below the reach of ground-based Stage-IV experiments such as CMB-S4 <cit.>, or next-generation satellites such as the proposed LiteBIRD <cit.>, COrE <cit.>, and PIXIE <cit.>.]
Although all the above effects may suggest that the CMB is exquisitely sensitive to the neutrino mass, in practice, the shape of the CMB anisotropy spectra is governed by several parameters, some of which are degenerate among themselves <cit.>. We refer the reader to the dedicated study of Ref. <cit.> (see also <cit.>).
To assess the impact of massive neutrinos on the CMB, all characteristic times, scales, and density ratios governing the shape of the CMB anisotropy spectrum should be kept fixed, i.e. keeping z_eq and the angular diameter distance to last-scattering d_A(z_dec) fixed. This would result in: a decrease in the late integrated Sachs-Wolfe (LISW) effect, which however is poorly constrained owing to the fact that the relevant multipole range is cosmic variance limited; a modest change in the diffusion damping scale for M_ν≳ 0.6 eV; and finally, a Δ C_ℓ/C_ℓ∼ -(M_ν/0.1 eV)% depletion of the amplitude of the C_ℓ's for 20 ≲ℓ≲ 200, due to a smaller EISW effect, which also contains a sub-percent effect due to the individual neutrino masses, essentially impossible to detect.
§.§.§ Baseline combinations of datasets used, and their definitions, I.
Measurements of the CMB temperature, polarization, and cross-correlation spectra from the Planck 2015 data release <cit.> are included. We consider a combination of the high-ℓ (30 ≤ℓ≤ 2508) TT likelihood, as well as the low-ℓ (2 ≤ℓ≤ 29) TT likelihood based on the CMB maps recovered with Commander: we refer to this combination as PlanckTT. We furthermore include the Planck polarization data in the low-ℓ (2 ≤ℓ≤ 29) likelihood, referring to it as lowP. Our baseline model, consisting of a combination of PlanckTT and lowP, is referred to as base.
In addition to the above, we also consider the high-ℓ (30 ≤ℓ≤ 1996) EE and TE likelihood, which we refer to as highP. In order to ease the comparison of our results to those previously presented in the literature, we shall add high-ℓ polarization measurements to our baseline model separately, referring to the combination of base and highP as basepol. For the purpose of clarity, we have summarized our nomenclature of datasets and their combinations in Tab. <ref>.
All the measurements described above are analyzed by means of the publicly available Planck likelihoods <cit.>. [http://www.cosmos.esa.int/web/planck/pla] When considering a prior on the optical depth to reionization τ we shall only consider the TT likelihood in the multipole range 2 ≤ℓ≤ 29. We do so to avoid double-counting of information, see Sec. <ref>. Of course, these likelihoods depend also on a number of nuisance parameters, which should be (and are) marginalized over. These nuisance parameters describe, for instance, residual foreground contamination, calibration, and beam-leakage (see Refs. <cit.>).
CMB measurements have been complemented with additional probes which will help break the parameter degeneracies discussed above. These additional datasets include large-scale structure probes and direct measurements of the Hubble parameter, and will be described in what follows. We make the conservative choice of not including lensing potential measurements, even though measuring M_ν via lensing potential reconstruction is the expected target of next-generation CMB experiments. This choice is dictated by the observation that lensing potential measurements via reconstruction through the temperature 4-point function are known to be in tension with the lensing amplitude as constrained by the CMB power spectra through the A_lens parameter <cit.> (see also <cit.> for relevant work).
§.§ Galaxy power spectrum
Once CMB data is used to fix the other cosmological parameters, the galaxy power spectrum could in principle be the most sensitive cosmological probe of massive neutrinos among those exploited here. Sub-eV neutrinos behave as a hot dark matter component with large thermal velocities, clustering only on scales below the neutrino free-streaming wavenumber k_fs <cit.>:
k_fs≃ 0.018 Ω_m^1/2 (M_ν/1 eV )^1/2 h Mpc^-1 .
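For orientation, evaluating the expression above at a few representative masses (taking Ω_m ≈ 0.31, an illustrative value rather than a fitted one):

def k_fs(M_nu, Omega_m=0.31):
    """Free-streaming wavenumber of the equation above, in h/Mpc (M_nu in eV)."""
    return 0.018 * Omega_m**0.5 * M_nu**0.5

for M in (0.06, 0.10, 0.30):
    print(f"M_nu = {M:.2f} eV  ->  k_fs ~ {k_fs(M):.4f} h/Mpc")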
On scales below the free-streaming scale (or, correspondingly, for wavenumbers larger than the free-streaming wavenumber), neutrinos cannot cluster as their thermal velocity exceeds the escape velocity of the gravitational potentials on those scales. Conversely, on scales well above the free-streaming scale, neutrinos behave as cold dark matter after the transition to the non-relativistic regime. Massive neutrinos leave their imprint on the galaxy power spectrum in several different ways:
* For wavenumbers k>k_fs, the power spectrum in the linear perturbation regime is subject to a scale-independent reduction by a factor of (1-f_ν)^2, where f_ν≡Ω_ν/Ω_m is defined as the ratio of the energy content in neutrinos to that in matter <cit.>.
* In addition, the power-spectrum for wavenumbers k>k_fs is further subject to a scale-dependent step-like suppression, starting at k_fs and saturating at k ∼ 1 h Mpc^-1. This suppression is due to the absence of neutrino perturbations in the total matter power spectrum, ultimately due to the fact that neutrinos do not cluster on scales k>k_fs. At k ∼ 1 h Mpc^-1, the suppression reaches a constant amplitude of Δ P(k)/P(k) ≃ -10f_ν <cit.> (the amplitude of the suppression is independent of redshift, however see the point below).
* The growth rate of the dark matter perturbations is reduced from δ∝ a to δ∝ a^1-3/5f_ν, due to the absence of gravitational back-reaction effects from free-streaming neutrinos. The redshift dependence of this suppression implies that this effect could be disentangled from that of a similar suppression in the primordial power spectrum by measuring the galaxy power spectrum at several redshifts, which amounts to measuring the time-dependence of the neutrino mass effect <cit.>.
* On very large scales (10^-3 h Mpc^-1≲ k ≲ 10^-2 h Mpc^-1), the matter power spectrum is enhanced by the presence of massive neutrinos <cit.>.
* As in the case of the EISW effect in the CMB, the step-like suppression in the matter power spectrum carries a non-trivial dependence on the individual neutrino masses, as it depends on the time of the transition to the non-relativistic regime for each neutrino mass eigenstate <cit.> (k_fs∝ m_ν_i^1/2), and thus is in principle extremely sensitive to the neutrino mass hierarchy. However, the effect is very small and very hard to measure, even with the most ambitious next-generation large-scale structure surveys <cit.>. Through the same effect, the lensed CMB as well as the lensing potential power spectrum could also be sensitive to the neutrino mass hierarchy.
In practice, however, the potential of this dataset is limited by several effects. Galaxy surveys have access to a region of k-space k_min<k<k_max where the step-like suppression effect is neither null nor maximal. The minimum wavenumber accessible is limited both by the signal-to-noise ratio and by systematic effects, and is typically of order k ∼ 10^-2 h Mpc^-1, meaning that the fourth effect outlined above is currently not appreciable. The maximum wavenumber accessible is instead limited by the reliability of the non-linear predictions for the matter power spectrum.
At any given redshift, there exists a non-linear wavenumber, above which the galaxy power spectrum is only useful insofar as one is able to model non-linear effects, redshift space distortions, and the possible scale-dependence of the bias (a factor relating the spatial distribution of galaxies and the underlying dark matter density field <cit.>) correctly. The non-linear wavenumber depends not only on the redshift of the sample but also on other characteristics of the sample itself (e.g. whether the galaxies are more or less massive). At the present time, the non-linear wavenumber is approximately k=0.15 h Mpc^-1, whereas for the galaxy sample we will consider (DR12 CMASS, at an effective redshift of z=0.57, see footnote 4 for the definition of effective redshift) we will show that wavenumbers smaller than k=0.2 h Mpc^-1 are safe against large non-linear corrections (see also Fig. <ref>, where the galaxy power spectrum has been evaluated for M_ν=0 eV given that the Coyote emulator adopted <cit.> does not fully implement corrections due to non-zero neutrino masses on small scales, and Ref. <cit.>). [The effective redshift consists of the weighted mean redshift of the galaxies of the sample, with the weights described in <cit.>.]
The issue of the scale-dependent bias is indeed more subtle than it might seem, given that neutrinos themselves induce a scale-dependent bias <cit.>. A parametrization of the galaxy power spectrum in the presence of massive neutrinos in terms of a scale-independent bias and a shot-noise component [see Eq.(<ref>)], which in itself adds two extra nuisance parameters, may not capture all the relevant effects at play. Despite these difficulties, the galaxy power spectrum is still a very useful dataset as it helps breaking some of the degeneracies present with CMB-only data, in particular by improving the determination of Ω_mh^2 and n_s, the latter being slightly degenerate with M_ν. Moreover, as we shall show in this paper, the galaxy power spectrum represents a conservative dataset (see Sec. <ref>).
Nonetheless, a great deal of effort is being invested into determining the scale-dependent bias from cosmological datasets. There are several promising routes towards achieving this, for instance through CMB lensing, galaxy lensing, cross-correlations of the former with galaxy or quasar clustering measurements, or higher order correlators of the former datasets, see e.g. Refs. <cit.>. A sensitivity on M_ν of 0.023 eV has been forecasted from a combination of Planck CMB measurements together with weak lensing shear auto-correlation, galaxy auto-correlation, and galaxy-shear cross-correlation from Euclid <cit.>, after marginalization over the bias, with the figure improving to 0.01 eV after including a weak lensing-selected cluster sample from Euclid <cit.>. Similar results are expected to be achieved for certain configurations of the proposed WFIRST survey <cit.>. It is worth considering that the sensitivity of these datasets would be substantially boosted by determining the scale-dependent bias as discussed above.
A conservative cut-off in wavenumber space, required in order to avoid non-linearities when dealing with galaxy power spectrum data, denies access to the modes where the signature of non-zero M_ν is greatest, i.e. those at high k where the free-streaming suppression effect is most evident. One is then brought to question the usefulness of such data when constraining M_ν. Actually, the real power of P(k) rests in its degeneracy breaking ability, when combined with CMB data. For example, P(k) data is extremely useful as far as the determination of certain cosmological parameters is concerned (e.g. n_s, which is degenerate with M_ν).
The degeneracy breaking effect of P(k), however, is most evident when in combination with CMB data. As an example, let us consider what is usually referred to as the most significant effect of non-zero M_ν on P(k), that is, a step-like suppression of the small-scale power spectrum. This effect is clearest when one increases M_ν while fixing (Ω_m,h). However, as we discussed in Sec. <ref>, the impact of non-zero M_ν on CMB data is best examined fixing Θ_s. If one adjusts h in order to keep Θ_s fixed, and in addition keeps Ω_bh^2 and Ω_ch^2 fixed, the power spectrum will be suppressed on both large and small scales, i.e. the result will be a global decrease in amplitude <cit.>. In other words, this reverses the fourth effect listed above. This is just an example of the degeneracy breaking power of P(k) data in combination with CMB data.
Galaxy clustering measurements are drawn from the Sloan Digital Sky Survey III (SDSS-III; <cit.>) Baryon Oscillation Spectroscopic Survey (BOSS; <cit.>) DR12 <cit.>. The SDSS-III BOSS DR12 CMASS sample covers an effective volume of V_eff≈ 7.4 Gpc^3 <cit.>. It contains 777202 massive galaxies in the range 0.43 < z < 0.7, at an effective redshift z = 0.57 (see footnote 4 for the definition of effective redshift), covering 9376.09 deg^2 over the sky. Here we consider the spherically averaged power spectrum of this sample, as measured by Gil-Marín et al. in <cit.>. We refer to this dataset as P(k). The measured galaxy power spectrum P_meas^g consists of a convolution of the true galaxy power spectrum P_true^g with a window function W(k_i,k_j), which accounts for correlations between the measurements at different scales due to the finite survey geometry:
P_meas^g(k_i) = ∑_j W(k_i,k_j)P_true^g(k_j)
Thus, at each step of the Monte Carlo, we need to convolve the theoretical galaxy power spectrum P_th at the given point in the parameter space with the window function, before comparing it with the measured galaxy power spectrum and constructing the likelihood.
Following previous works <cit.>, we model the theoretical galaxy power spectrum as:
P_th = b_HF^2P_HFν^m(k,z)+P_HF^s ,
where P_HFν^m denotes the matter power spectrum calculated at each step by the Boltzmann solver CAMB, corrected for non-linear effects using the halofit method <cit.>. We make use of the modified version of halofit designed by <cit.> to improve the treatment of non-linearities in the presence of massive neutrinos. In order to reduce the impact of non-linearities we make the conservative choice of considering a maximum wavenumber k_max = 0.2 h Mpc^-1. As we show in Fig. <ref> (for M_ν=0 eV), this region is safe against uncertainties due to non-linear evolution, and is also convenient for comparison with other works which have adopted a similar maximum wavenumber cutoff. The smallest wavenumber we consider is instead k_min = 0.03 h Mpc^-1, determined by the control over systematics, which dominate at smaller wavenumbers. The parameters b_HF and P_HF^s denote the scale-independent bias and the shot noise contributions: the former reflects the fact that galaxies are biased tracers of the underlying dark matter distribution, whereas the latter arises from the discrete point-like nature of the galaxies as tracers of the dark matter. We impose flat priors in the ranges [0.1,10] and [0,10000] for b_HF and P_HF^s, respectively.
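Schematically, the two equations above amount to the following Gaussian likelihood evaluation at each point in parameter space; the array names are illustrative, the non-linear matter spectrum is treated as a precomputed input (in the actual analysis it is recomputed at each MCMC step), and the diagonal covariance of the toy call is a simplification:

import numpy as np

def galaxy_pk_loglike(b_HF, P_shot, P_m_nl, window, P_data, cov):
    """Gaussian log-likelihood for the measured galaxy P(k).

    P_m_nl : non-linear matter power spectrum on the theory k bins.
    window : W(k_i, k_j) matrix mapping theory bins j onto measured bins i.
    """
    P_th = b_HF**2 * P_m_nl + P_shot        # theoretical galaxy spectrum
    P_conv = window @ P_th                   # window-function convolution
    r = P_data - P_conv
    return -0.5 * r @ np.linalg.solve(cov, r)

# Toy usage on the k range 0.03-0.2 h/Mpc used in the text.
k = np.linspace(0.03, 0.2, 30)
P_m = 1e4 * (k / 0.1) ** -1.5               # stand-in matter spectrum
W = np.eye(30)                               # trivial window, for illustration
data = 2.0**2 * P_m + 1500.0
print(galaxy_pk_loglike(2.0, 1500.0, P_m, W, data, np.diag((0.05 * data) ** 2)))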
Although in this simple model the bias and shot noise are assumed to be scale-independent, there is no unique prescription for the form of these quantities. In particular, concerning the bias, several theoretically well-motivated scale-dependent functional forms exist in the literature (such as the Q model of <cit.>, that of <cit.>, or that of <cit.> motivated by local primordial non-Gaussianity). It is beyond the scope of our paper to explore the impact of different bias function choices on the neutrino mass bounds. Instead, we simply note that it is not necessarily true that increasing the number of parameters governing the bias shape may result in broader constraints. Indeed, tighter constraints on M_ν may arise in some of the bias parameterizations with more than one parameter involved, because they might have comparable effects on the power spectrum.
§.§ Baryon acoustic oscillations
Prior to the recombination epoch, photons and baryons in the early Universe behave as a tightly coupled fluid, whose evolution is determined by the interplay between the gravitational pull of potential wells and the restoring force due to the large pressure of the radiation component. The resulting pressure waves, which freeze out at recombination, imprint a characteristic scale on the late-time matter clustering, in the form of a localized peak in the two-point correlation function, or a series of smeared peaks in the power spectrum. This scale corresponds to the sound horizon at the drag epoch, denoted by r_s(z_drag), where the drag epoch is defined as the time when baryons were released from the Compton drag of photons, see Ref. <cit.>. Then, r_s(z_drag) takes the form:
r_s(z_drag) = ∫_z_drag^∞ dz c_s(z)/H(z) ,
where c_s(z) denotes the sound speed and is given by c_s(z) = c/√(3(1+R)), with R=3ρ_b/4ρ_r being the ratio of the baryon to photon momentum density. Finally, the baryon drag epoch z_drag is defined as the redshift such that the baryon drag optical depth τ_drag is equal to one:
τ_drag(η_drag) = 4/3Ω_r/Ω_b∫_0^z_drag dz dη/daσ_T x_e(z)/1+z = 1 ,
where σ_T=6.65 × 10^-29 m^2 denotes the Thomson cross-section and x_e(z) represents the fraction of free electrons.
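A self-contained numerical sketch of the sound-horizon integral above, for a flat background with massless neutrinos, is given below; the parameter values and the fixed z_drag ≈ 1059 are illustrative inputs (in practice z_drag follows from the condition above or from a fitting formula):

import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458        # speed of light, km/s
OMEGA_G = 2.47e-5         # photon density Omega_gamma h^2 today

def r_s_drag(omega_b=0.0222, omega_c=0.1197, h=0.67, n_eff=3.046, z_drag=1059.0):
    """Comoving sound horizon at the drag epoch, in Mpc."""
    omega_r = OMEGA_G * (1.0 + 0.2271 * n_eff)   # photons + massless neutrinos
    omega_m = omega_b + omega_c
    omega_L = h**2 - omega_m - omega_r           # flat Universe

    def H(z):  # expansion rate, km/s/Mpc
        return 100.0 * np.sqrt(omega_r * (1 + z)**4 + omega_m * (1 + z)**3 + omega_L)

    def c_s(z):  # sound speed, km/s, with R = 3 rho_b / (4 rho_gamma)
        R = 0.75 * (omega_b / OMEGA_G) / (1.0 + z)
        return C_KMS / np.sqrt(3.0 * (1.0 + R))

    return quad(lambda z: c_s(z) / H(z), z_drag, np.inf, limit=200)[0]

print(r_s_drag())   # roughly 147 Mpc for Planck-like parameters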
BAO measurements contain geometrical information in the sense that, as a “standard ruler” of known and measured length, they allow for the determination of the angular diameter distance to the redshift of interest, and hence make it possible to map out the expansion history of the Universe after the last scattering. In addition, they are affected by uncertainties due to the non-linear evolution of the matter density field to a lesser extent than the galaxy power spectrum, making them less prone to systematic effects than the latter. An angle-averaged BAO measurement constrains the quantity D_v(z_eff)/r_s(z_drag), where the dilation scale D_v at the effective redshift of the survey z_eff is a combination of the physical angular diameter distance D_A(z) and the Hubble parameter H(z) (which control the radial and the tangential separations within a given cosmology, respectively):
D_v(z) = [ (1+z)^2D_A(z)^2cz/H(z) ]^1/3 .
D_v quantifies the dilation in distances when the fiducial cosmology is modified. The power of the BAO technique resides in its ability to resolve the degeneracies present when CMB data alone are used, in particular in sharpening the determination of Ω_m and of the Hubble parameter H_0, discarding the low values of H_0 allowed by the CMB data.
Massive neutrinos affect both the low-redshift geometry and the growth of structure, and correspondingly BAO measurements. If we increase M_ν, while keeping Ω_bh^2 and Ω_ch^2 fixed, the expansion rate at early times is increased, although only for M_ν>0.6 eV. Therefore, in order to keep fixed the angular scale of the sound horizon at last scattering Θ_s (which is very well constrained by the CMB acoustic peak structure), it is necessary to decrease Ω_Λ. As Ω_Λ decreases, it is found that H(z) decreases for z < 1 <cit.>. It can be shown that an increase in M_ν has a negligible effect on r_s(z_drag). Hence, we conclude that the main effect of massive neutrinos on BAO measurements is to increase D_v(z)/r_s(z_drag) and decrease H_0, as M_ν is increased (see <cit.>). It is worth noting that there is no parameter degeneracy which can cancel the effect of a non-zero neutrino mass on BAO data alone, as far as the minimal ΛCDM+M_ν extended model is concerned <cit.>.
§.§.§ Baseline combinations of datasets used, and their definitions, II.
In this work, we make use of BAO measurements extracted from a number of galaxy surveys. When using BAO measurements in combination with the DR12 CMASS P(k), we consider data from the Six-degree Field Galaxy Survey (6dFGS) <cit.>, the WiggleZ survey <cit.>, and the DR11 LOWZ sample <cit.>, as done in <cit.>. We refer to the combination of these three BAO measurements as BAO. When combining BAO with the base CMB dataset and the DR12 CMASS P(k) measurements, we refer to the combination as basePK. When combining BAO with the basepol CMB dataset and the DR12 CMASS P(k) measurements, we refer to the combination as basepolPK. Recall that we have summarized our nomenclature of datasets (including baseline datasets) and their combinations in Tab. <ref>.
The 6dFGS data consists of a measurement of r_s(z_drag)/D_V(z) at z = 0.106 (as per the discussion above, r_s/D_V decreases as M_ν is increased). The WiggleZ data instead consist of measurements of the acoustic parameter A(z) at three redshifts: z = 0.44, z = 0.6, and z = 0.73, where the acoustic parameter is defined as:
A(z) = 100D_v(z)√(Ω_mh^2)/cz .
Given the effect of M_ν on D_v(z), A(z) will increase as M_ν increases. Finally, the DR11 LOWZ data consists of a measurement of D_v(z)/r_s(z_drag) (which increases as M_ν is increased) at z = 0.32.
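The BAO observables defined above can be sketched as follows for a flat Universe, where (1+z)D_A(z) equals the comoving distance; the parameter values are illustrative:

import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458

def H(z, h=0.67, omega_m=0.14):   # km/s/Mpc, late-time flat LCDM
    Om = omega_m / h**2
    return 100.0 * h * np.sqrt(Om * (1 + z)**3 + (1.0 - Om))

def D_V(z, **cosmo):
    """Dilation scale, in Mpc (flat Universe, so (1+z)^2 D_A^2 = chi^2)."""
    chi = quad(lambda zp: C_KMS / H(zp, **cosmo), 0.0, z)[0]
    return (chi**2 * C_KMS * z / H(z, **cosmo)) ** (1.0 / 3.0)

def A_param(z, h=0.67, omega_m=0.14):
    """WiggleZ acoustic parameter (dimensionless)."""
    return 100.0 * D_V(z, h=h, omega_m=omega_m) * np.sqrt(omega_m) / (C_KMS * z)

print(D_V(0.57), A_param(0.60))   # ~2050 Mpc and ~0.45, respectively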
Since the BAO feature is measured from the galaxy two-point correlation function, to avoid double counting of information, when considering the base and basepol datasets we do not include the DR11 CMASS BAO measurements, as the DR11 CMASS and DR12 CMASS volumes overlap. However, if we drop the DR12 CMASS power spectrum from our datasets, we are allowed to add DR11 CMASS BAO measurements without this leading to double-counting of information. Therefore, for completeness, we consider this case as well. Namely, we drop the DR12 CMASS power spectrum from our datasets, replacing it with the DR11 CMASS BAO measurement. This consists of a measurement of D_v(z_eff)/r_s(z_drag) at z_eff = 0.57.
§.§.§ Baseline combinations of datasets used, and their definitions, III.
We refer to the combination of the four BAO measurements (6dFGS, WiggleZ, DR11 LOWZ, DR11 CMASS) as BAOFULL. We instead refer to the combination of the base CMB and the BAOFULL datasets with the nomenclature baseBAO. When high-ℓ polarization CMB data is added to this baseBAO dataset, the combination is referred to as basepolBAO, see Tab. <ref>. The comparison between basePK and baseBAO, as well as between basepolPK and basepolBAO, gives insight into the role played by large-scale structure datasets in constraining neutrino masses. In particular, it allows for an assessment of the relative importance of shape information in the form of the power spectrum against geometrical information in the form of BAO measurements when deriving the neutrino mass bounds. For clarity, all the denominations of the combinations of datasets we consider are summarized in Tab. <ref>.
All the BAO measurements used in this work are tabulated in Tab. <ref>. Note that we do not include BAO measurements from the DR7 main galaxy sample <cit.> or from the cross-correlation of DR11 quasars with the Lyα forest absorption <cit.>, and hence our results are not directly comparable to other existing studies which included these measurements.
§.§ Hubble parameter measurements
Direct measurements of H_0 are very important when considering bounds on M_ν. With CMB data alone, there exists a strong degeneracy between M_ν and H_0 (see e.g. <cit.>). When M_ν is varied, the distance to last scattering changes as well. Defining ω_b ≡Ω_bh^2, ω_c ≡Ω_ch^2, ω_m ≡Ω_mh^2, ω_r ≡Ω_rh^2, ω_ν≡Ω_νh^2, within a flat Universe, this distance is given by:
χ = c∫_0^z_decdz/√(ω_r(1+z)^4+ω_m(1+z)^3+ (h^2-ω_m)) ,
where ω_m = ω_c + ω_b + ω_ν. The structure of the CMB acoustic peaks leaves little freedom in varying ω_c and ω_b. Therefore, for what concerns the distance to the last scattering, a change in M_ν can be compensated essentially only by a change in h or, in other words, by a change in H_0. This suggests that M_ν and H_0 are strongly anti-correlated: the effect on the CMB of increasing M_ν can be easily compensated by a decrease in H_0, and vice versa.
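The anti-correlation can be made explicit numerically: keeping ω_b and ω_c fixed, treating the neutrinos as fully non-relativistic today (ω_ν ≈ M_ν/93.14 eV, the standard instantaneous approximation) and holding the radiation density fixed, one can solve for the h that keeps χ unchanged as M_ν grows. The sketch below is only indicative, since it ignores the epoch at which neutrinos actually become non-relativistic:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Z_DEC, OMEGA_R = 1090.0, 4.18e-5    # decoupling redshift; fixed radiation density

def chi(h, M_nu, omega_b=0.0222, omega_c=0.1197):
    """Comoving distance to last scattering (c = 1 units, arbitrary normalization)."""
    omega_nu = M_nu / 93.14                      # eV -> omega_nu conversion
    omega_m = omega_b + omega_c + omega_nu
    E = lambda z: np.sqrt(OMEGA_R * (1 + z)**4 + omega_m * (1 + z)**3
                          + (h**2 - omega_m - OMEGA_R))
    return quad(lambda z: 1.0 / E(z), 0.0, Z_DEC, limit=200)[0]

chi_fid = chi(0.67, 0.06)                        # fiducial: M_nu = 0.06 eV
for M in (0.06, 0.3, 0.6):
    h = brentq(lambda hh: chi(hh, M) - chi_fid, 0.4, 1.0)
    print(f"M_nu = {M:.2f} eV  ->  h = {h:.4f}")  # h decreases as M_nu grows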
In light of the above discussion, we expect a prior on the Hubble parameter to help pin down the allowed values of M_ν from CMB data. Here, we consider two different priors on the Hubble parameter. The first is based on a reanalysis of an older Hubble Space Telescope measurement, the original value being H_0 = (73.8 ± 2.4) km s^-1 Mpc^-1 <cit.>. The original measurement showed a ∼ 2.4σ tension with the value of H_0 derived from fitting CMB data <cit.>. The reanalysis, conducted by Efstathiou in Ref. <cit.>, used the revised geometric maser distance to NGC4258 of Ref. <cit.> as a distance anchor. This reanalysis obtains a more conservative value of H_0 = (70.6 ± 3.3) km s^-1 Mpc^-1, which agrees with the extracted H_0 value from CMB-only data within 1σ. We refer to this prior as H070p6.
The second prior we consider is based on the most recent HST 2.4% determination of the Hubble parameter in Ref. <cit.>. This measurement benefits from more than twice the number of Cepheid variables used to calibrate luminosity distances, with respect to the previous analysis <cit.>, as well as from improved determinations of distance anchors. The measured value of the Hubble parameter is H_0 = (73.02 ± 1.79) km s^-1 Mpc^-1, which is in tension with the CMB-only H_0 value by 3σ. We refer to the corresponding prior as H073p02. [We do not include here the latest 3.8% determination of H_0 by the H0LiCOW program. The measurement, based on gravitational time delays of three multiply-imaged quasar systems, yields H_0 = 71.9^+2.4_-3.0 km s^-1 Mpc^-1 <cit.>.]
A consideration is in order at this point. Given the strong degeneracy between M_ν and H_0, we expect the introduction of the two aforementioned priors (especially the H073p02 one) to lead to a tighter bound on M_ν. At the same time, we expect this bound to be less reliable and/or robust. In other words, such a bound would be quite artificial, as it would be driven by a combination of the tension between direct and primary CMB determinations of H_0 and the strong M_ν-H_0 degeneracy. We can therefore expect the fit to degrade when either of the two aforementioned priors is introduced. We nonetheless choose to include these priors for a number of reasons. Firstly, the underlying measurement in <cit.> has attracted significant attention and hence it is worth assessing its impact on bounds on M_ν, subject to the strict caveats we discussed, in light of its potential to break the M_ν-H_0 degeneracy. Secondly, our results including the H_0 priors will serve as a warning of the danger of combining datasets which are inconsistent with each other.
§.§ Optical depth to reionization
The first generation of galaxies ended the dark ages of the Universe. These galaxies emitted UV photons which gradually ionized the neutral hydrogen which had rendered the Universe transparent following the epoch of recombination, in a process known as reionization (see e.g. Ref. <cit.> for a review). So far, it is not entirely clear when cosmic reionization took place. Cosmological measurements can constrain the optical depth to reionization τ, which, assuming instantaneous reionization (a very common useful approximation), can be related to the redshift of reionization z_re.
Early CMB measurements of τ from WMAP favored an early-reionization scenario (z_re = 10.6 ± 1.1 in the instantaneous reionization approximation <cit.>), requiring the presence of sources of reionization at z≳ 10. This result was in tension with observations of Ly-α emitters at z≃ 7 (see e.g. <cit.>), that suggest that reionization ended by z≃ 6. However, the results delivered by the Planck collaboration in the 2015 public data release, using the large-scale (low-ℓ) polarization observations of the Planck Low Frequency Instrument (LFI) <cit.> in combination with Planck temperature and lensing data, indicate that τ = 0.066 ± 0.016 <cit.>, corresponding to a significantly lower value for the redshift of instantaneous reionization: z_re = 8.8^+1.2_-1.1 (see also <cit.> for an assessment of the role of the cleaning procedure on the lower estimate of τ, and <cit.> for an alternative indirect method for measuring large-scale polarization and hence constrain τ using only small-scale and lensing polarization maps), and thus reducing the need for high-redshift sources of reionization <cit.>.
The optical depth to reionization is a crucial quantity when considering constraints on the sum of neutrino masses, the reason being that there exist degeneracies between τ and M_ν (see e.g. <cit.>). If we consider CMB data only (focusing on the TT spectrum), an increase in M_ν, which results in a suppression of structure, reduces the smearing of the damping tail. This effect can be compensated by an increase in τ. Due to the well-known degeneracy between A_s and τ from CMB temperature data (which is sensitive to the combination A_se^-2τ), the value of A_s should also be increased accordingly. However, the value of A_s also determines the overall amplitude of the matter power spectrum, which is furthermore affected by the presence of massive neutrinos, which reduce the small-scale clustering. If, in addition to TT data, low-ℓ polarization measurements are considered, the degeneracy between A_s and τ is largely alleviated and, consequently, so are the mutual degeneracies among A_s, τ, and M_ν.
Recently, the Planck collaboration has identified, modeled, and removed previously unaccounted systematic effects in large angular scale polarization data from the Planck High Frequency Instrument (HFI) <cit.> (see also <cit.>). Using the new HFI low-ℓ polarization likelihood (that has not been made publicly available by the Planck collaboration), the constraints on τ have been considerably improved, with a current determination of τ = 0.055 ± 0.009 <cit.>, entirely consistent with the value inferred from LFI.
In this work, we explore the impact on the constraints on M_ν of adding a prior on τ. Specifically, we impose a Gaussian prior on the optical depth to reionization of τ = 0.055 ± 0.009, consistent with the results reported in <cit.>. We refer to this prior as τ0p055. We expect this prior to tighten our bounds on M_ν. However, a prior on τ is a proxy for low-ℓ polarization spectra (low-ℓ C_ℓ^EE, C_ℓ^BB, and C_ℓ^TE). Therefore, as previously stated, when adding a prior on τ, we remove the low-ℓ polarization data from our datasets, in order to avoid double-counting information, while keeping low-ℓ temperature data.
§.§ Planck SZ clusters
The evolution with mass and redshift of galaxy clusters offers a unique probe of both the matter density, Ω_m, and the present amplitude of density fluctuations, characterized by the root mean square of the linear overdensity in spheres of radius 8 h^-1Mpc, σ_8; for a review see e.g. <cit.>. Both quantities are of crucial importance when extracting neutrino mass bounds from large-scale structure, owing to the free-streaming nature of neutrinos.
CMB measurements are able to map galaxy clusters via the Sunyaev-Zeldovich (SZ) effect, which consists of an energy boost to the CMB photons, which are inverse Compton re-scattered by hot electrons (see e.g. <cit.>). Therefore, the thermal SZ effect imprints a spectral distortion to CMB photons traveling along the cluster line of sight. The distortion consists of an increase in intensity for frequencies higher than 220 GHz, and a decrease for lower frequencies.
We shall here make use of cluster counts from the latest Planck SZ cluster catalogue, consisting of 439 clusters detected via their SZ signal <cit.>. We refer to this dataset as SZ. The cluster count function dN/dz is given by the number of clusters with mass above a threshold M_min within a redshift bin [z,z+dz]:
dN/dz|_M > M_min = f_skydV(z)/dz∫_M_min^∞ dM dn/dM(M,z) .
The dependence on the underlying cosmological model is encoded in the differential volume dV/dz:
dV(z)/dz = 4π/H(z) [ ∫_0^z dz'/H(z') ]^2 ,
through the dependence of the Hubble parameter H(z) on the basic cosmological parameters, and further through the dependence of the cluster mass function dn/dM (calculated through N-body simulations) on the parameters Ω_m and σ_8.
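Restoring factors of c, the geometric part of the cluster-count integrand can be sketched as follows (flat Universe, full sky; in the expression above it is further multiplied by f_sky and by the integral of the mass function):

import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458

def H(z, h=0.67, Om=0.31):   # km/s/Mpc, late-time flat LCDM
    return 100.0 * h * np.sqrt(Om * (1 + z)**3 + 1.0 - Om)

def dV_dz(z, **cosmo):
    """Differential comoving volume, in Mpc^3 per unit redshift (full sky)."""
    chi = quad(lambda zp: C_KMS / H(zp, **cosmo), 0.0, z)[0]
    return 4.0 * np.pi * (C_KMS / H(z, **cosmo)) * chi**2

print(f"{dV_dz(0.3):.3e} Mpc^3")   # volume element at a typical cluster redshift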
The largest source of uncertainty in the interpretation of cluster count measurements resides in the masses of the clusters themselves, which in turn can be inferred from X-ray mass proxies, relying however on the assumption of hydrostatic equilibrium. This assumption can be violated by bulk motions or non-thermal sources of pressure, leading to biases in the derived value of the cluster mass. Further systematics in the X-ray analyses can arise e.g. due to instrument calibration or the temperature structure of the gas. Therefore, it is clear that determinations of cluster masses carry a significant uncertainty, with a typical Δ M/M∼ 10-20%, quantified via the cluster mass bias parameter, 1-b:
M_X = (1-b)M_500 ,
where M_X denotes the X-ray extracted cluster mass, and M_500 the true halo mass, defined as the total mass within a sphere of radius R_500, R_500 being the radius within which the mean overdensity of the cluster is 500 times the critical density at that redshift.
As the cluster mass bias 1-b is crucial in constraining the values of Ω_m and σ_8, and hence the normalization of the matter power spectrum, it plays an important role when constraining M_ν. We impose a uniform prior on the cluster mass bias in the range [0.1,1.3], as done in Ref. <cit.>, in which it is shown that this choice of 1-b leads to the most stringent bounds on the neutrino mass. There exist as well independent lensing measurements of the cluster mass bias, such as those provided by the Weighing the Giants project <cit.>, by the Canadian Cluster Comparison Project <cit.>, and by CMB lensing <cit.> (see also Ref. <cit.>). However, we shall not make use of 1-b priors based on these independent measurements, as the resulting value of σ_8 is in slight tension, at the level of 1-2σ, with primary CMB measurements (however, see <cit.>).
The value of σ_8 indicated by weak lensing measurements is smaller than that derived from CMB-only datasets, favoring therefore quite large values of M_ν, large enough to suppress the small-scale clustering in a significant way. Therefore, we restrict ourselves to the case in which the cluster mass bias is allowed to freely vary between 0.1 and 1.3. It has been shown in <cit.> that this choice leads to robust and unbiased neutrino mass limits. In this way, the addition of the SZ dataset can be considered truly reliable.
§ RESULTS ON M_Ν
We begin here by analyzing the results obtained for the different datasets and their combinations, assessing their robustness. The constraining power of geometrical versus shape large-scale structure datasets will be discussed in Sec. <ref>. In Sec. <ref> we apply the method of <cit.> and described in Sec. <ref> to quantify the exclusion limits on the inverted hierarchy given the bounds on M_ν presented in the following. The 95% C.L. upper bounds on M_ν we obtain are summarized in Tabs. <ref>, <ref>, <ref>, <ref>. The C.L.s at which our most constraining datasets disfavor the Inverted Hierarchy, CL_ IH, obtained through our analysis in Sec. <ref>, are reported in Tab. <ref>.
Table <ref> shows the results for the more conservative approach when considering CMB data; namely, neglecting high-ℓ polarization data. The limits obtained when the base dataset is considered are very close to those quoted in Ref. <cit.>, where a three-degenerate neutrino spectrum with a lower prior on M_ν of 0.06 eV was assumed, whereas we have taken a lower prior of 0 eV. Our choice is driven by the goal of obtaining independent bounds on M_ν from cosmology alone, making the least amount of assumptions. This different choice of prior is the reason for the (small) discrepancy between our 95% C.L. upper limit on M_ν (0.716 eV) and the limit found in Ref. <cit.> (0.754 eV), and, in general, for the small differences in all the bounds we shall describe in what follows. That is, these discrepancies are due to differences in the volume of the parameter space explored. When P(k) data are added to the base, CMB-only dataset, the neutrino mass limits improve considerably, reaching M_ν <0.299 eV at 95% C.L..
The limits reported in Table <ref>, while being consistent with those presented in Ref. <cit.> (obtained with an older BOSS full-shape power spectrum measurement, the DR9 CMASS P(k)), are slightly less constraining. We attribute this slight loss of constraining power to the fact that the DR12 P(k) appears slightly suppressed on small scales with respect to the DR9 P(k), see Fig. <ref>. This fact, already noticed for previous data releases, can ultimately be attributed to a very slight change in power following an increase in the mean galaxy density over time due to the tiling (observational) strategy of the survey <cit.>. The changes are indeed very small, and the broadband shapes of the power spectra for different data releases in fact agree very well within error bars. A small suppression in small-scale power, nonetheless, is expected to favor higher values of M_ν, which help explain the observed suppression, and this explains the slight difference between our results and those of Ref. <cit.>.
While the addition of external datasets, such as a prior on τ or Planck SZ cluster counts, leads to mild improvements in the constraints on M_ν, the tightest bounds are obtained when considering the H073p02 prior on the Hubble parameter, due to the large degeneracy between H_0 and M_ν at the CMB level, which is only partly broken by P(k) or BAO measurements. However, as previously discussed, the H073p02 measurement shows a significant tension with CMB estimates of the Hubble parameter. [See e.g. Refs. <cit.> for recent works examining this discrepancy and possible solutions.] Therefore, the 95% C.L. limits on M_ν of <0.164, <0.140, <0.136 eV for the basePK+H073p02, basePK+H073p02+τ0p055 and basePK+H073p02+τ0p055+SZ cases should be regarded as the most aggressive limits one can obtain when considering a prior on H_0 and neglecting high-ℓ polarization data. Indeed, when using the H070p6 prior, a less constraining limit of M_ν <0.219 eV at 95% C.L. is obtained in the basePK+H070p06 case, a value that is closer to the limits obtained when additional measurements (not related to H_0 priors) are added to the basePK data combination.
The tension between the H073p02 measurement and primary CMB determinations of H_0 implies that the very strong bounds obtained using such a prior are also the least robust and/or reliable. They are almost entirely driven by the aforementioned tension in combination with the strong M_ν-H_0 degeneracy, and hence are somewhat artificial. We expect in fact the quality of the fit to deteriorate in the presence of two inconsistent datasets (that is, CMB spectra and the H_0 prior). To quantify the worsening in fit, we compute the Δχ^2 associated to the bestfit, for a given combination of datasets before and after the addition of the H_0 prior. For example, for the basePK dataset combination, we find Δχ^2 ≡χ^2_min(basePK+H073p02)-χ^2_min(basePK)=+5.2, confirming as expected a substantial worsening in fit when the H073p02 prior is added to the basePK dataset. The above observation reinforces the fact that any bound on M_ν obtained using the H073p02 prior should be interpreted with considerable caution, as such a bound is most likely artificial.
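To make the bookkeeping explicit, the following minimal sketch (in Python; the individual χ^2_min values are purely illustrative and only their difference, the +5.2 quoted above, matters, while the conversion to an equivalent number of σ via a one-degree-of-freedom rule of thumb is only indicative) shows how such a fit-degradation estimate is obtained:

import numpy as np
from scipy.stats import chi2

def fit_degradation(chi2_min_with_prior, chi2_min_without_prior):
    """Worsening in fit when a (possibly inconsistent) prior is added to a dataset."""
    delta = chi2_min_with_prior - chi2_min_without_prior
    # rule of thumb: treat Delta chi^2 as a chi^2 variable with 1 d.o.f.
    return delta, np.sqrt(delta), chi2.sf(delta, df=1)

# illustrative chi^2_min values, chosen to reproduce the Delta chi^2 = +5.2 of the text
delta, n_sigma, p_value = fit_degradation(105.2, 100.0)
print(f"Delta chi^2 = {delta:.1f} (~{n_sigma:.1f} sigma equivalent, p = {p_value:.3f})")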
Table <ref> shows the equivalent to Tab. <ref> but including high-ℓ polarization data. Notice that the limits are considerably tightened. As previously discussed, the tightest bounds are obtained when the H073p02 prior is considered. For instance, we obtain M_ν<0.109 eV at 95% C.L. from the basepolPK+H073p02+τ0p055 data combination. We caution once more that the very tight bounds obtained with the H073p02 prior are most likely artificial. This is confirmed, for example, by the Δχ^2_min=+6.4 between the basepolPK+H073p02 and basepolPK datasets.
§.§ Geometric vs shape information
In the following, we shall compare the constraining power of geometrical probes in the form of BAO measurements versus shape probes in the form of power spectrum measurements. For that purpose, we shall replace here the DR12 CMASS P(k) and the BAO datasets by the BAOFULL dataset, which consists of BAO measurements from the BOSS DR11 (both CMASS and LOWZ samples) survey, the 6dFGS survey, and the WiggleZ survey, see Tab. <ref> for more details. The main results of this section are summarized in Tabs. <ref> and <ref>, as well as Figs. <ref> and <ref>.
Table <ref> shows the equivalent to the third, fourth, sixth, eighth and ninth rows of Tab. <ref>, but with the shape information from the BOSS DR12 CMASS spectrum replaced by the geometrical BAO information from the BOSS DR11 CMASS measurements. Firstly, we notice that all the geometrical bounds are, in general, much more constraining than the shape bounds, as previously noted in the literature (see e.g <cit.>, see also <cit.> for recent studies on the subject). These studies have shown that, within the minimal ΛCDM+M_ν scenario, BAO measurements provide tighter constraints on M_ν than data from the full power spectrum shape. Nevertheless, it is very important to assess whether these previous findings still hold with the improved statistics and accuracy of today's large-scale structure data (see the recent Ref. <cit.> for the expectations from future galaxy surveys).
We confirm that this finding still holds with current data. Therefore, with current analysis methods, large-scale structure datasets are still sensitive to massive neutrinos mainly through background rather than perturbation effects, even though the latter are in principle a much more sensitive probe of the effect of massive neutrinos on cosmological observables. However, as we mentioned earlier, this behaviour could be reversed once we are able to determine the amplitude and scale-dependence of the galaxy bias through CMB lensing, cosmic shear, galaxy clustering measurements, and their cross-correlations (see e.g. <cit.>).
Moreover, it is also worth recalling that BAO measurements do include non-linear information through the reconstruction procedure, whereas the same information is prevented from being used in the power spectrum measurements due to the cutoff we imposed at k=0.2 h Mpc^-1. In order to fully exploit the constraining power of shape measurements, improvements in our analysis methods are necessary: in particular, it is necessary to improve our understanding of the non-linear regime of the galaxy power spectrum in the presence of massive neutrinos, as well as further our understanding of the galaxy bias at a theoretical and observational level.
The addition of shape measurements requires at least two additional nuisance parameters, which in our case are represented by the bias and shot noise parameters. These two parameters relate the measured galaxy power spectrum to the underlying matter power spectrum, the latter being what one can predict once cosmological parameters are known. [Moreover, at least one additional nuisance parameter is required in order to account for systematics in the measured galaxy power spectrum, although the impact of this parameter is almost negligible, as we have checked (see Refs. <cit.>).] The prescription we adopted relating the galaxy to the matter power spectrum is among the simplest choices. However, it is not necessarily true that more sophisticated choices with more nuisance parameters would further degrade the constraining power of shape measurements, particularly if we were to obtain a handle on the functional form of the scale-dependent bias <cit.>. On the other hand, it remains true that the possibility of benefiting from a large number of modes by increasing the value of k_max (which remains one of the factors limiting the constraining power of shape information compared to the geometrical one) would require an exquisite knowledge of non-linear corrections, a topic which is the subject of many recent investigations, particularly in the scenario where massive neutrinos are present, see e.g. <cit.>. The conclusion, however, remains that improvements in our current analysis methods, as well as further theoretical and modeling advancements, are necessary to exploit the full constraining power of shape measurements (see also <cit.>).
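To make this prescription explicit, the following minimal sketch (in Python; the function name and the analytic stand-in for the matter power spectrum are purely illustrative, while the scale-independent bias b, the constant shot-noise term and the cutoff at k_max = 0.2 h Mpc^-1 reflect the choices described above) illustrates the mapping between matter and galaxy power spectra:

import numpy as np

K_MAX = 0.2  # h/Mpc, conservative cutoff avoiding strongly non-linear scales

def galaxy_power(k, pk_matter, bias, p_shot):
    """Simplest prescription: P_g(k) = b^2 P_m(k) + P_shot, restricted to k <= K_MAX."""
    k = np.asarray(k)
    mask = k <= K_MAX
    return k[mask], bias**2 * np.asarray(pk_matter)[mask] + p_shot

# illustrative usage: in practice pk_matter comes from a Boltzmann code
k = np.logspace(-3, 0, 200)                               # h/Mpc
pk_matter = 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02)**2.5)  # crude stand-in, not a real P(k)
k_cut, pk_galaxy = galaxy_power(k, pk_matter, bias=2.0, p_shot=3.0e3)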
Finally, we notice that, even without considering the high-ℓ polarization data, we obtain the very constraining bound of M_ν<0.114 eV at 95% C.L. for the baseBAO+H073p02+τ0p055+SZ datasets. We caution again against the artificiality of bounds obtained using the H073p02 prior, as the tension with primary CMB determinations of H_0 leads to a degradation in the quality of the fit. Nonetheless, even without considering the H_0 prior, we still obtain a very constraining bound of M_ν<0.151 eV at 95% C.L. In any case, results adopting these dataset combinations contribute to reinforcing the previous (weak) cosmological hints favouring the NH scenario <cit.>.
Table <ref> shows the equivalent to Tab. <ref> but with the high-ℓ polarization dataset included, i.e. adding the highP Planck dataset in the analyses. We note that the results are quite impressive, and it is interesting to explore how far one could currently get in pushing the neutrino mass limits by means of the most aggressive and least conservative datasets. The tightest limits we find are M_ν<0.093 eV at 95% C.L. using the basepolBAO+H073p02+τ0p055+SZ dataset, well below the minimal mass allowed within the IH. Therefore, within the less-conservative approach illustrated here, especially due to the use of the H073p02 prior, there exists a weak preference from present cosmological data for a normal hierarchical neutrino mass scheme. Neglecting the information from the H073p02 prior, which leads to an artificially tight bound as previously explained, the preference turns out to be weaker (M_ν<0.118 eV from the basepolBAO+τ0p055 dataset combination) but still present.
We end with a consideration, stemming from the observation that with our current analyses methods BAO measurements are more constraining than full-shape power spectrum ones. This suggests that, despite uncertainties in the modeling of the galaxy power spectrum due to the unknown absolute scale of the latter (in other words, the size of the bias) and non-linear evolution, the galaxy power spectrum actually represents a conservative dataset given that the bounds on M_ν obtained using the corresponding BAO dataset are considerably tighter.
In the remainder of the Section we will be concerned with providing a proper quantification of the statistical significance at which we can disfavor the IH, performing a simple but rigorous model comparison analysis.
§.§ Exclusion limits on the inverted hierarchy
Here we apply the method of <cit.>, described in Sec. <ref>, to determine the statistical significance at which the inverted hierarchy is disfavored given the bounds on M_ν just obtained. Our results are summarized in Tab. <ref>. In order to quantify the exclusion limits on the inverted hierarchy, we apply Eq. (<ref>) to our most constraining dataset combinations, where the criterion for choosing these datasets will be explained below.
Note that in Eq. (<ref>) we set p(N)=p(I)=0.5. That is, we assign equal priors to NH and IH, which not only is a reasonable choice when considering only cosmological datasets <cit.>, but is also the most uninformative and most conservative choice when there is no prior knowledge about the hierarchies. In any case, the formalism we adopt would allow us to introduce informative prior information on the two hierarchies, i.e. p(N) ≠ p(I) ≠ 0.5. It would in this way be possible to include information from oscillation experiments, which suggest a weak preference for the normal hierarchy due to matter effects (see e.g. <cit.>). Including this weak preference does not significantly affect our results, precisely because the current sensitivity to the neutrino mass hierarchy from both cosmology and oscillation experiments is extremely weak (see also e.g. <cit.>).
We choose to report the statistical significance at which the IH is discarded only for the most constraining dataset combinations, that is, those which disfavor the IH at >70% C.L.: we have checked that the threshold for reaching a ≈ 70% C.L. exclusion of the IH is met by dataset combinations whose 95% C.L. upper limit on M_ν lies below ≈ 0.12 eV. In fact, the most constraining bound within our conservative scheme, obtained through the baseBAO+τ0p055 combination (which avoids datasets exhibiting some tension with CMB or galaxy clustering measurements, and yields a 95% C.L. upper limit on M_ν of 0.151 eV), falls short of this threshold, and is only able to disfavor the IH at 64% C.L., providing posterior odds for NH versus IH of 1.8:1.
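The conversion between the quoted posterior odds and the exclusion C.L. is elementary; a minimal sketch (in Python, assuming as in Eq. (<ref>) equal priors p(N) = p(I) = 0.5, so that the posterior probabilities of the two hierarchies sum to one) reads:

def ih_exclusion_cl(odds_nh_over_ih):
    """C.L. at which the IH is disfavored, given posterior odds NH:IH and p(N) = p(I) = 0.5."""
    return odds_nh_over_ih / (1.0 + odds_nh_over_ih)

for odds in (1.8, 3.3):
    print(f"odds NH:IH = {odds}:1  ->  IH disfavored at {100.0 * ih_exclusion_cl(odds):.0f}% C.L.")
# 1.8:1 -> 64% C.L. and 3.3:1 -> 77% C.L., matching the values quoted in this section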
The hierarchy discrimination is improved when small-scale polarization is added to the aforementioned dataset combination, or when the H073p02 prior (and eventually SZ cluster counts) is added to the same combination, leading to a 71% C.L. and 72% C.L. exclusion of the IH respectively. Similar levels of statistical significance for the exclusion of the IH are reached when the dataset combinations basepolPK+H073p02+τ0p055, basepolPK+H073p02+τ0p055+SZ, and basepolBAO+H073p02 are considered, leading to 74% C.L., 71% C.L., and 72% C.L. exclusions of the IH respectively. However, it is worth recalling once more that the latter figures relied on the addition of the H073p02 prior, which leads to less reliable bounds. It is also worth noting that our most constraining dataset combination(s), that is, basepolBAO+H073p02+τ0p055(+SZ), only provide a 77% C.L. exclusion of the IH.
Our findings are fully consistent with those of <cit.> and suggest that an improved sensitivity of cosmological datasets is required in order to robustly disfavor the IH, even though current datasets are already able to substantially reduce the volume of parameter space available within this mass ordering. In fact, it has been argued in <cit.> that a sensitivity of at least ≈ 0.02 eV is required in order to provide a 95% C.L. exclusion of the IH. Incidentally, not only does such a sensitivity seem within the reach of post-2020 experiments <cit.>, but it would also provide a detection of M_ν at a significance of at least 3σ, unless non-trivial late-Universe effects are at play (see e.g. <cit.>).
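A rough back-of-the-envelope check of this target sensitivity (a sketch assuming an idealized Gaussian posterior on M_ν centered on the minimal NH mass, which ignores both the physical boundary M_ν ≥ 0.06 eV and parameter-space volume effects) can be written as:

from scipy.stats import norm

M_NU_TRUE = 0.06    # eV, minimal total mass in the NH
M_NU_IH_MIN = 0.10  # eV, minimal total mass allowed in the IH
SIGMA = 0.02        # eV, target sensitivity quoted above

# posterior mass left in the region accessible to the IH
p_ih_region = norm.sf(M_NU_IH_MIN, loc=M_NU_TRUE, scale=SIGMA)
print(f"P(M_nu > {M_NU_IH_MIN} eV) = {p_ih_region:.3f}")  # ~0.023, i.e. IH disfavored at ~98% C.L.
# in the same idealized limit M_nu itself would be detected at M_NU_TRUE/SIGMA = 3 sigma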
§.§ Bounds on M_ν in extended parameter spaces: a brief discussion
Thus far we have explored bounds on M_ν within the assumption of a flat background ΛCDM cosmology. We have used different dataset combinations, and have identified the baseBAO dataset combination (leading to an upper limit of M_ν<0.186 eV) as the one providing one of the strongest bounds while at the same time being one of the most robust to systematics and tensions between datasets.
However, we expect the bounds on M_ν to degrade if we were to open the parameter space: that is, if we were to vary additional parameters other than the 6 base ΛCDM parameters and M_ν. While there is no substantial indication for the need to extend the base set of parameters of the ΛCDM model (see e.g. <cit.>), one is nonetheless legitimately brought to wonder about the robustness of the obtained bounds against extended parameter spaces.
While a detailed study is left to a follow-up paper in progress <cit.>, we nonetheless present two examples of bounds on M_ν within minimally extended parameter spaces. That is, we allow in one case the dark energy equation of state w to vary within the range [-3,1] (parameter space denoted by ΛCDM+M_ν+w), and in the other case the curvature energy density Ω_k to vary freely within the range [-0.3,0.3] (parameter space denoted by ΛCDM+M_ν+Ω_k). Both parameters are known to be relatively strongly degenerate with M_ν, and hence we can expect allowing them to vary to lead to less stringent bounds on M_ν. In both cases we consider for simplicity the baseBAO dataset, for the reasons described above: therefore, the corresponding bound within the ΛCDM+M_ν parameter space to which we should compare our results is M_ν<0.186 eV at 95% C.L., as reported in the first row of Tab. <ref>.
For the ΛCDM+M_ν+w extension, where we leave the dark energy equation of state w free to vary within the range [-3,1], we can expect the bounds on M_ν to broaden due to a well-known degeneracy between M_ν and w <cit.>. Specifically, an increase in M_ν can be compensated by a decrease in w, due to the mutual degeneracy with Ω_m. Our results confirm this expectation. With the baseBAO data combination we find M_ν<0.313 eV at 95% C.L., and w=-1.08^+0.09_-0.08 at 68% C.L., with a correlation coefficient between M_ν and w of -0.56. [The correlation coefficient between two parameters i and j (in this case i=M_ν, j=w) is defined as R=C_ij/√(C_iiC_jj), with C the covariance matrix of cosmological parameters.] The degeneracy between M_ν and w is clearly visible in the triangle plot of Fig. <ref>.
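In practice, the correlation coefficient just quoted is estimated directly from the MCMC chains; a minimal sketch (in Python, with mock anti-correlated samples standing in for the real (M_ν, w) columns of a chain) is:

import numpy as np

def correlation(samples_i, samples_j, weights=None):
    """R = C_ij / sqrt(C_ii C_jj) from (optionally weighted) chain samples."""
    cov = np.cov(np.vstack([samples_i, samples_j]), aweights=weights)
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

# mock samples only; real chains would be read from the CosmoMC output files
rng = np.random.default_rng(0)
mnu, w = rng.multivariate_normal([0.15, -1.08],
                                 [[0.010, -0.005], [-0.005, 0.008]], size=5000).T
print(f"R(M_nu, w) = {correlation(mnu, w):.2f}")  # negative, as found above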
For the ΛCDM+M_ν+Ω_k extension, where we leave the curvature energy density Ω_k free to vary within the range [-0.3,0.3], we can again expect the bounds on M_ν to broaden due to the three-parameter geometric degeneracy between h, Ω_νh^2 and Ω_k <cit.>. For the baseBAO data combination we find M_ν<0.299 eV at 95% C.L., and Ω_k=0.001^+0.003_-0.004 at 68% C.L., with a correlation coefficient between M_ν and Ω_k of 0.60. The degeneracy between M_ν and Ω_k is clearly visible in the triangle plot of Fig. <ref>.
A clarification is in order here: when leaving the dark energy equation of state w and the curvature energy density Ω_k free to vary, it would be extremely useful to add supernovae data, given that these are extremely sensitive to these two quantities. We have however chosen not to do so in order to ease comparison with the bound M_ν<0.186 eV obtained for the same baseBAO combination within the ΛCDM+M_ν parameter space. Moreover, in this way we are able to reach a conservative conclusion concerning the robustness of M_ν bounds to the ΛCDM+M_ν+w and ΛCDM+M_ν+Ω_k parameter spaces, as the addition of supernovae data would lead to tighter bounds than the M_ν<0.313 eV and M_ν<0.299 eV quoted.
Of course, as expected, the bounds on M_ν degrade the moment we consider extended parameter spaces. Given our discussion in Sec. <ref>, this means that, within the extended parameter spaces considered, the preference for one hierarchy over another essentially vanishes. However, the last statement is not necessarily always true: for instance, in certain models of dynamical dark energy with specific functional forms of w(z), the constraints on M_ν can get tighter: an example is the holographic dark energy model, within which bounds on M_ν have been shown to be substantially tighter than within a ΛCDM Universe <cit.>. An interesting thing to note, however, is that within better than 1σ uncertainties (i.e. within ∼ 68% C.L.), both w and Ω_k are compatible with the values to which they are fixed within the minimal ΛCDM+M_ν parameter space, that is, -1 and 0 respectively.
§ CONCLUSIONS
Neutrino oscillation experiments provide information on the two mass splittings governing the solar and atmospheric neutrino transitions, but are unable to measure the total neutrino mass scale, M_ν. The sign of the largest mass splitting, the atmospheric mass gap, remains unknown. The two resulting possibilities are the so-called normal (positive sign) or inverted (negative sign) mass hierarchies. While in the normal hierarchy scheme neutrino oscillation results set the minimum allowed total neutrino mass M_ν to ∼0.06 eV, in the inverted one this lower limit is ∼0.1 eV.
Currently, cosmology provides the tightest bounds on the total neutrino mass M_ν, i.e. on the sum of the three active neutrino states. If these cosmological bounds turned out to be robustly and significantly smaller than the minimum allowed in the inverted hierarchy, then one would indeed determine the neutrino mass hierarchy via cosmological measurements. In order to prepare ourselves for the hierarchy extraction, an assessment of the cosmological neutrino mass limits, studying their robustness against different priors and assumptions concerning the neutrino mass distribution among the three neutrino mass eigenstates, is mandatory. Moreover, the development and application of rigorous model comparison methods to assess the preference for one hierarchy over the other is necessary. In this work, we have analyzed some of the most recent publicly available datasets to provide updated constraints on the sum of the three active neutrino masses, M_ν, from cosmology.
One very interesting aspect is whether the information concerning the total neutrino mass from the large-scale structure of the universe in its geometrical form (i.e. via the BAO signature) supersedes that of full-shape measurements of the power spectrum. While previous studies have addressed the question with former galaxy clustering datasets, it is timely to explore the situation with current galaxy catalogs, covering much larger volumes, benefiting from smaller error-bars and also from improved, more accurate descriptions of the mildly non-linear regime in the matter power spectrum.
We find that, even though the latest measurements of the galaxy power spectrum cover a vast volume of our universe, the BAO signature extracted from comparable datasets is still more powerful than the full-shape information, within the minimal ΛCDM+M_ν model studied here. This statement is expected to change within the context of extended cosmological models, such as those with non-zero curvature or a time-dependent dark energy equation of state, and we reserve this study to future work <cit.> (a short discussion of the robustness of the bounds on M_ν within extended parameter spaces was provided in Sec. <ref>).
The reason for the supremacy of BAO measurements over shape information is the cutoff in k-space imposed when treating the power spectrum. This cutoff is required to avoid the impact of non-linear evolution. It is worth recalling once more that BAO measurements do contain non-linear information through the reconstruction procedure; this same non-linear information cannot be used in the power spectrum due to the choice of the conservative cutoff in k-space. Moreover, the need for at least two additional nuisance parameters relating the galaxy power spectrum to the underlying matter power spectrum further degrades the constraining power of the latter. Therefore, the stronger constraints obtained through geometrical rather than shape measurements should not be seen as a limitation of the constraining power of the latter, but rather as a limitation of the methods currently used to analyze these datasets. A deeper understanding of the non-linear regime of the galaxy power spectrum in the presence of massive neutrinos, as well as further understanding of the galaxy bias at a theoretical and observational level, are required: it is worth noting that a lot of effort is being invested into tackling these issues.
Finally, in this work we have presented the tightest up-to-date neutrino mass constraints among those which can be found in the literature. Neglecting the debated prior on the Hubble constant of H_0 = (73.02 ± 1.79) km s^-1 Mpc^-1, the tightest 95% C.L. upper bound we find is M_ν<0.151 eV (assuming a degenerate spectrum), from CMB temperature anisotropies, BAO and τ measurements. Adding Planck high-ℓ polarization data tightens the previous bound to M_ν<0.118 eV. Further improvements are possible if a prior on the Hubble parameter is also added. In this less conservative approach, the 95% C.L. neutrino mass upper limit is brought down to the level of ∼ 0.09 eV, indicating a weak preference for the normal neutrino hierarchy due to volume effects. Our work also suggests that we can identify a restricted set of conservative but robust datasets: this includes CMB temperature data, as well as BAO measurements and galaxy power spectrum data, after adequate corrections for non-linearities. These datasets allow us to identify a robust upper bound of ∼ 0.15 eV on M_ν from cosmological data alone.
In addition to providing updated bounds on the total neutrino mass, we have also performed a simple but robust model comparison analysis, aimed at quantifying the exclusion limits on the inverted hierarchy from current datasets. Our findings indicate that, despite the very stringent upper bounds we have just outlined, current data is not able to conclusively favor the NH over the IH. Within our most conservative scheme, we are able to disfavor the IH with a significance of at most 64% C.L., corresponding to posterior odds of NH over IH of 1.8:1. Even the most constraining and less conservative datasets combinations are able at most to disfavor the IH at 77% C.L., with posterior odds of NH against IH of 3.3:1. This suggests that further improvements in sensitivity, down to the level of 0.02 eV, are required in order for cosmology to conclusively disfavor the IH. Fortunately, it looks like a combination of data from near-future CMB experiments and galaxy surveys should be able to reach this target.
We conclude that our findings, while unable to robustly disfavor the inverted neutrino mass ordering, significantly reduce the volume of parameter space allowed within this mass hierarchy. The more robustly future bounds will be able to disfavor the region of parameter space with M_ν > 0.1 eV, the more the IH will be put under pressure with respect to the NH. In other words, future cosmological data, in the absence of a neutrino mass detection, are expected to reinforce the current mild preference for the normal hierarchy mass ordering. On the other hand, if the underlying mass hierarchy is the inverted one, a cosmological detection of the neutrino mass scale could be fast approaching. In any case, we expect neutrino cosmology to remain an active and exciting field of discovery in the upcoming years.
§ APPENDIX A: THE 3DEG APPROXIMATION
Throughout the paper we have presented bounds within the 3deg approximation of a neutrino mass spectrum with three massive degenerate mass eigenstates. The choice was motivated, as discussed in Sec. <ref>, by the observation that the NH and IH mass splittings have a tiny effect on cosmological data, when compared to the 3deg approximation with the same value of the total mass M_ν. Here we discuss the conditions under which this approximation is mathematically speaking valid. We also briefly discuss why the 3deg approximation is nonetheless physically accurate given the sensitivity of current data.
Mathematically speaking, the 3deg approximation is valid as long as:
m_0 ≫ |m_i - m_j| , ∀ i,j = 1,2,3 ,
where m_0=m_1 [m_3] in the NH [IH] scenario (see Sec. <ref> for the definition of the labeling of the three mass eigenstates). Recall that, according to our convention, m_1<m_2<m_3 [m_3<m_1<m_2] in the NH [IH]. Therefore, the 3deg approximation is strictly speaking valid when the absolute neutrino mass scale is much larger than the individual mass splittings. A good candidate for a figure of merit to quantify the goodness of the 3deg approximation can then be obtained by considering the ratio of any given mass difference, over a quantity proportional to the absolute neutrino mass scale. This leads us to consider the following figure(s) of merit:
ζ_ij ≡ 3 |m_i - m_j| / M_ν ,
where the indices i,j run over i,j=1,2,3. The figures of merit ζ_ij quantify the goodness of the 3deg approximation. In the case where the 3deg approximation were exact (which, of course, is physically impossible given the non-zero mass-squared splittings), one would have ζ_ij=0. The 3deg approximation, then, can be considered valid from a practical point of view as long as ζ_ij is sufficiently small, where the amount of deviation from ζ_ij=0 one can tolerate defines what is sufficiently small and hence the validity criterion for the 3deg approximation.
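The figures of merit defined above are straightforward to evaluate; a minimal sketch (in Python, with indicative values for the two mass-squared splittings and a simple root-finder recovering the individual NH masses from M_ν) is the following:

import numpy as np
from scipy.optimize import brentq

DM21 = 7.5e-5  # eV^2, solar splitting (indicative value)
DM31 = 2.5e-3  # eV^2, atmospheric splitting (indicative value)

def nh_masses(m_nu_total):
    """Recover (m1, m2, m3) in the NH from the total mass M_nu."""
    f = lambda m1: m1 + np.sqrt(m1**2 + DM21) + np.sqrt(m1**2 + DM31) - m_nu_total
    m1 = brentq(f, 0.0, m_nu_total)
    return m1, np.sqrt(m1**2 + DM21), np.sqrt(m1**2 + DM31)

def zeta(m_i, m_j, m_nu_total):
    """Figure of merit zeta_ij = 3 |m_i - m_j| / M_nu."""
    return 3.0 * abs(m_i - m_j) / m_nu_total

m1, m2, m3 = nh_masses(0.15)
print(f"zeta_13(M_nu = 0.15 eV) = {zeta(m1, m3, 0.15):.2f}")  # ~0.4-0.5, in the ballpark quoted in the text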
In Fig. <ref> we plot our figure(s) of merit ζ_ij, for i,j=1,2 (red) and i,j=1,3 (blue) in Eq. (<ref>) and for the NH (solid) and IH (dashed) scenarios (see the caption for details), against the total neutrino mass M_ν. We plot the same quantities, but this time against the lightest neutrino mass m_0 = m_1 [m_3] for the NH [IH], in Fig. <ref>. As we discussed previously, the 3deg approximation would be exact if ζ_ij=0 (which of course cannot be displayed due to the choice of a logarithmic scale for the y axis).
As we already discussed, the decision of whether or not 3deg is a sensible approximation mathematically speaking depends on the amount of deviation from ζ_ij=0 that can be tolerated. As an example, from Fig. <ref> and Fig. <ref> we see that, for an indicative value of M_ν≈ 0.15 eV, ζ_13≈ 0.4, indicating a ≈ 40% deviation from the exact 3deg scenario, which can hardly be considered small.
This indicates that, within the remaining allowed region of parameter space, the 3deg approximation is mathematically speaking not valid. It is worth remarking that there is a degree of residual model dependency as this conclusion was reached taking at face value the indicative upper limit on M_ν of ≈ 0.15 eV, which has been derived under the assumption of a flat ΛCDM background. One can generically expect the bounds we obtained to be loosened to some extent if considering extended cosmological scenarios (although this needs not necessarily always be the case).
A different issue is, instead, whether the 3deg approximation is physically appropriate, given the sensitivity of current and near-future experiments. The issue has been discussed extensively in the literature, and in particular in some recent works <cit.>. It has been argued that, if M_ν > 0.1 eV, future cosmological observations, while measuring M_ν with high accuracy, will not be able to discriminate between the NH and the IH. In any case, cosmological measurements in combination with laboratory experiments will in this case (M_ν > 0.1 eV) play a key role in unravelling the hierarchy <cit.>. If M_ν < 0.1 eV, most of the discriminatory power in cosmological data between the NH and the IH is essentially due to volume effects: i.e., the fact that oscillation data force M_ν ≳ 0.1 eV in the IH, implying that the IH has access to a reduced volume of parameter space with respect to the NH.
Another example of the goodness of the 3deg approximation is provided in <cit.> considering a combination of forecasts for COrE, Euclid, and DESI data. Specifically, <cit.> considered a fiducial mock dataset generated implementing the full NH or IH, and then studied whether fitting the fiducial dataset using the 3deg approximation rather than the “true" NH or IH would lead to substantial biases. The findings suggest that, apart from small O(0.1σ) reconstruction biases (which can be removed for M_ν<0.1 eV), the 3deg approximation is able to recover the fiducial value of M_ν (as long as the free parameter is taken to be consistently either M_ν or m_0). This suggests that even with near-future cosmological data the 3deg approximation will still be sufficiently accurate for the purpose of estimating cosmological parameters, and further validates the goodness of the 3deg approximation in our work.
The conclusion is that current cosmological datasets are sensitive to the total neutrino mass M_ν rather than to the individual masses m_i, implying that the 3deg approximation is sufficiently precise for the purpose of obtaining reliable cosmological neutrino mass bounds for the time being. On the other hand, for future high precision cosmological data, which could benefit from increased sensitivity and could reliably have access to non-linear scales of the matter power spectrum, modelling the mass splittings correctly will matter.
In conclusion, although the 3deg approximation is not, mathematically speaking, valid in the remaining volume of parameter space, it is physically speaking a good approximation given the sensitivity of current datasets. However, quantitative claims about disfavoring the inverted hierarchy have to be drawn with care, making use of rigorous model comparison methods.
§ APPENDIX B: THE 1MASS APPROXIMATION
As argued in a number of works, the ability to robustly reach an upper bound on M_ν of ≈ 0.1 eV translates more or less directly into the ability to exclude the inverted hierarchy at a certain statistical significance, as we quantified in Sec. <ref>. In this case it is desirable to check whether one's conclusions are affected by assumptions on the underlying neutrino mass spectrum. Throughout our paper we have presented bounds on M_ν under the assumption of a spectrum of three massive degenerate neutrinos, denoted 3deg. As we have argued extensively (see e.g. Appendix A), given the sensitivity of current data, this assumption does not to any significant extent influence the resulting bounds. Nonetheless, it is interesting and timely to investigate the dependence of neutrino mass bounds on the assumed mass spectrum, which was recently partly done in <cit.>.
Here, as in <cit.>, we consider (in addition to the 3deg spectrum) the approximation spectrum featuring a single massive eigenstate carrying the total mass M_ν together with two massless species. We refer to this scheme by the name 1mass:
m_1 = m_2 = 0 , m_3 = M_ν (1mass) .
The motivation for the 1mass choice is twofold: i) it is the usual approximation adopted when performing cosmological analyses with the total neutrino mass fixed to M_ν = 0.06 eV, in order to mimic the minimal mass scenario in the case of the NH (m_1=0, m_2≪ m_3), and ii) it might provide a better description of the underlying neutrino mass ordering in the M_ν < 0.1 eV mass region, in which m_1∼ m_2≪ m_3, although a complete assessment goes beyond the scope of our work. The latter is the main motivation for exploring the 1mass approximation further, given the recent weak cosmological hints favoring the NH.
Before proceeding, it is useful to clarify why we have chosen to focus on results within the 3deg scheme. As we discussed, it has been observed that the impact of the NH and IH mass splittings on cosmological data is tiny if one compares the 3deg approximation to the corresponding NH and IH models with the same value of the total mass M_ν. However, this does not necessarily hold when the comparison is made between 3deg and 1mass, because the latter always has two pure dark radiation components (see footnote 8 for a definition of dark radiation) throughout the whole expansion history and, in particular, at the present time (on the other hand, NH and IH can have at most one pure radiation component at present time, a situation which occurs in the minimal mass scenario when m_0 = 0 eV and thus only for one specific point in neutrino mass parameter space) [Dark radiation consists of any weakly or non-interacting extra radiation component of the Universe, see e.g. <cit.> for a review and <cit.> for recent relevant work in connection to neutrino physics. For example, sterile neutrinos may in some models have contributed as dark radiation, see e.g. <cit.>, or possibly thermally produced cosmological axions <cit.>. Dark radiation might also arise in dark sectors with additional relativistic degrees of freedom which decouple from the Standard Model as, for instance, hidden photons (see e.g. <cit.>).]. The extra massless component(s) present in the 1mass case, but not in the NH and IH (1mass features only one extra component compared to the NH and IH if these happen to correspond to the minimal mass scenario where m_0 = 0 eV; if m_0 ≠ 0 eV, 1mass possesses two extra massless components), are known to have a non-negligible impact on cosmological observables, in particular the CMB anisotropy spectra <cit.>.
Let us now discuss how the bounds on M_ν change when passing from the 3deg to the 1mass approximation. We observe that when considering the base dataset combinations, and extensions thereof (i.e. the combinations considered in Tab. <ref>, where we report the 3deg results), the bounds obtained within the 1mass approximation are typically more constraining than the 3deg ones, by about ∼ 2-8%. For example, the 95% C.L. upper bound on M_ν is tightened from 0.716 eV to 0.658 eV for the base combination, from 0.299 eV to 0.293 eV for the base+P(k) combination, and from 0.246 eV to 0.234 eV for the basePK combination. When small-scale polarization data is added (see Tab. <ref> for the 3deg results), we observe a reversal in this behaviour: that is, the bounds obtained within the 1mass approximation are looser than the 3deg ones. For example, the 95% C.L. upper bound on M_ν is loosened from 0.485 eV to 0.619 eV for the basepol combination, from 0.275 eV to 0.300 eV for the basepol+P(k) combination, and from 0.215 eV to 0.228 eV for the basepolPK combination.
Regarding the baseBAO and basepolBAO dataset combinations and extensions thereof (see Tabs. <ref>, <ref> for the 3deg results), no clear trend emerges when passing from the 3deg to the 1mass approximation, although we note that the bounds typically degrade slightly: for example, the 95% C.L. upper bound on M_ν is loosened from 0.186 eV to 0.203 eV for the baseBAO combination, and from 0.153 eV to 0.155 eV for the basepolBAO combination.
We choose not to further investigate the reason behind these tiny but noticeable shifts because, as previously stated, the 1mass distribution is less “physical", owing to the presence of two unphysical dark radiation states. Instead, we report these numbers in the interest of noticing how these shifts suggest that, at present, cosmological measurements are starting to become sensitive (albeit in a very weak manner) to the late-time hot dark matter versus dark radiation distribution among the neutrino mass eigenstates, a conclusion which had already been reached in <cit.>.
One of the reasons underlying the choice of studying the 1mass approximation is that this scheme might represent an useful approximation to the minimal mass scenario in the NH. Of course, the possibility that the underlying neutrino hierarchy is inverted is far from being excluded. This raises the question of whether an analogous scheme, which we refer to as 2mass (already studied in <cit.>), might instead approximate the minimal mass scenario in the IH:
m_3 = 0 , m_1 = m_2 = M_ν/2 (2mass) .
Of course, the previously discussed considerations concerning the non-physicality of the 1mass approximation (due to the presence of extra pure radiation components) automatically apply to the 2mass approximation as well. Moreover, we note that bounds on M_ν obtained within the 2mass approximation (which features one pure radiation state) are always intermediate between those of the 3deg (which features no pure radiation state) and the 1mass (which features two pure radiation states) ones (see also e.g. <cit.>). This confirms once more that the discrepancy between bounds within these three different approximations are to be attributed to the impact of the unphysical pure radiation states on cosmological observables, in particular the CMB anisotropy spectra. In conclusion, we remark once more that, while the 3deg approximation is sufficiently accurate given the precision of current data, other approximations which introduce non-physical pure radiation states, such as the 1mass and 2mass ones, are not. Adopting these to obtain bounds on M_ν might instead lead to unphysical shifts in the determination of cosmological parameters, and hence should be avoided.
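For reference, the bookkeeping of the three approximation schemes compared in these appendices is summarized by the following minimal sketch (purely illustrative; the mass assignments are those of the definitions above):

def mass_eigenstates(m_nu_total, scheme):
    """Individual masses (m1, m2, m3) under the three approximation schemes."""
    if scheme == "3deg":   # three degenerate massive states
        return (m_nu_total / 3.0,) * 3
    if scheme == "1mass":  # one massive state plus two unphysical massless (radiation) states
        return (0.0, 0.0, m_nu_total)
    if scheme == "2mass":  # two degenerate massive states plus one massless state
        return (m_nu_total / 2.0, m_nu_total / 2.0, 0.0)
    raise ValueError(f"unknown scheme: {scheme}")

for scheme in ("3deg", "1mass", "2mass"):
    print(scheme, mass_eigenstates(0.12, scheme))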
SV, EG, and MG acknowledge Hector Gil-Marín for very useful discussions. SV and OM thank Antonio Cuesta for valuable correspondence. We are very grateful to the anonymous referee for a detailed and constructive report which enormously helped to improve the quality of our paper. This work is based on observations obtained with Planck (http://www.esa.int/Planck), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. We acknowledge use of the Planck Legacy Archive. We also acknowledge the use of computing facilities at NERSC. K.F. acknowledges support from DoE grant DE-SC0007859 at the University of Michigan as well as support from the Michigan Center for Theoretical Physics. M.G., S.V. and K.F. acknowledge support by the Vetenskapsrådet (Swedish Research Council) through contract No. 638-2013-8993 and the Oskar Klein Centre for Cosmoparticle Physics. M.L. acknowledges support from ASI through ASI/INAF Agreement 2014-024-R.1 for the Planck LFI Activity of Phase E2. O.M. is supported by PROMETEO II/2014/050, by the Spanish Grant FPA2014–57816-P of the MINECO, by the MINECO Grant SEV-2014-0398 and by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreements 690575 and 674896. O.M. would like to thank the Fermilab Theoretical Physics Department for its hospitality. E.G. is supported by NSF grant AST1412966. S.H. acknowledges support by NASA-EUCLID11-0004, NSF AST1517593 and NSF AST1412966.
99
nobel
“The Nobel Prize in Physics 2015”,
Nobelprize.org, Nobel Media AB 2014, Web, 18 Oct 2016,
http://www.nobelprize.org/nobel_prizes/physics/laureates/2015
superk
Y. Fukuda et al. [Super-Kamiokande Collaboration],
Phys. Rev. Lett. 81 (1998) 1562
[hep-ex/9807003].
sno
Q. R. Ahmad et al. [SNO Collaboration],
Phys. Rev. Lett. 89 (2002) 011301
[nucl-ex/0204008].
kamland
T. Araki et al. [KamLAND Collaboration],
Phys. Rev. Lett. 94 (2005) 081801
[hep-ex/0406035].
minos
P. Adamson et al. [MINOS Collaboration],
Phys. Rev. Lett. 101 (2008) 131802
[arXiv:0806.2237 [hep-ex]].
dayabay
F. P. An et al. [Daya Bay Collaboration],
Phys. Rev. Lett. 108 (2012) 171803
[arXiv:1203.1669 [hep-ex]].
reno
J. K. Ahn et al. [RENO Collaboration],
Phys. Rev. Lett. 108 (2012) 191802
[arXiv:1204.0626 [hep-ex]].
doublechooz
Y. Abe et al. [Double Chooz Collaboration],
Phys. Rev. D 86 (2012) 052008
[arXiv:1207.6632 [hep-ex]].
t2k
K. Abe et al. [T2K Collaboration],
Phys. Rev. Lett. 112 (2014) 061802
[arXiv:1311.4750 [hep-ex]].
fit1
M. C. Gonzalez-Garcia, M. Maltoni and T. Schwetz,
JHEP 1411, 052 (2014)
[arXiv:1409.5439 [hep-ph]].
fit2
D. V. Forero, M. Tortola and J. W. F. Valle,
Phys. Rev. D 90, no. 9, 093006 (2014)
[arXiv:1405.7540 [hep-ph]].
fit3
I. Esteban, M. C. González-García, M. Maltoni, I. Martínez-Soler and T. Schwetz,
arXiv:1611.01514 [hep-ph].
fit4
F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri and A. Palazzo,
Phys. Rev. D 95 (2017) no.9, 096014
[arXiv:1703.04471 [hep-ph]].
fit5
A. Caldwell, A. Merle, O. Schulz and M. Totzauer,
arXiv:1705.01945 [hep-ph].
Capozzi:2016rtj
F. Capozzi, E. Lisi, A. Marrone, D. Montanino and A. Palazzo,
Nucl. Phys. B 908 (2016) 218
[arXiv:1601.07777 [hep-ph]].
Giusarma:2014zza
E. Giusarma, E. Di Valentino, M. Lattanzi, A. Melchiorri and O. Mena,
Phys. Rev. D 90 (2014) no.4, 043507
[arXiv:1403.4852 [astro-ph.CO]].
Palanque-Delabrouille:2015pga
N. Palanque-Delabrouille et al.,
JCAP 1511 (2015) no.11, 011
[arXiv:1506.05976 [astro-ph.CO]].
DiValentino:2015wba
E. Di Valentino, E. Giusarma, M. Lattanzi, O. Mena, A. Melchiorri and J. Silk,
Phys. Lett. B 752 (2016) 182
[arXiv:1507.08665 [astro-ph.CO]].
DiValentino:2015sam
E. Di Valentino, E. Giusarma, O. Mena, A. Melchiorri and J. Silk,
Phys. Rev. D 93, no. 8, 083527 (2016)
[arXiv:1511.00975 [astro-ph.CO]].
Cuesta:2015iho
A. J. Cuesta, V. Niro and L. Verde,
Phys. Dark Univ. 13 (2016) 77
[arXiv:1511.05983 [astro-ph.CO]].
Huang:2015wrx
Q. G. Huang, K. Wang and S. Wang,
Eur. Phys. J. C 76 (2016) no.9, 489
[arXiv:1512.05899 [astro-ph.CO]].
DiValentino:2016ikp
E. Di Valentino, S. Gariazzo, M. Gerbino, E. Giusarma and O. Mena,
Phys. Rev. D 93, no. 8, 083523 (2016)
[arXiv:1601.07557 [astro-ph.CO]].
Giusarma:2016phn
E. Giusarma, M. Gerbino, O. Mena, S. Vagnozzi, S. Ho and K. Freese,
Phys. Rev. D 94 (2016) no.8, 083522
[arXiv:1605.04320 [astro-ph.CO]].
Lesgourgues:2006nd
J. Lesgourgues and S. Pastor,
Phys. Rept. 429 (2006) 307
[astro-ph/0603494].
Wong:2011ip
Y. Y. Y. Wong,
Ann. Rev. Nucl. Part. Sci. 61 (2011) 69
[arXiv:1111.1436 [astro-ph.CO]].
Lesgourgues:2012uu
J. Lesgourgues and S. Pastor,
Adv. High Energy Phys. 2012 (2012) 608515
[arXiv:1212.6154 [hep-ph]].
Abazajian:2013oma
K. N. Abazajian et al. [Topical Conveners: K.N. Abazajian, J.E. Carlstrom, A.T. Lee Collaboration],
Astropart. Phys. 63 (2015) 66
[arXiv:1309.5383 [astro-ph.CO]].
book
J. Lesgourgues, G. Mangano, G. Miele and S. Pastor,
“Neutrino Cosmology,”
Cambridge, UK: Cambridge University Press (2013) 378 p
Lesgourgues:2014zoa
J. Lesgourgues and S. Pastor,
New J. Phys. 16 (2014) 065002
[arXiv:1404.1740 [hep-ph]].
Archidiacono:2016lnv
M. Archidiacono, T. Brinckmann, J. Lesgourgues and V. Poulin,
JCAP 1702 (2017) no.02, 052
[arXiv:1610.09852 [astro-ph.CO]].
Lesgourgues:2004ps
J. Lesgourgues, S. Pastor and L. Perotto,
Phys. Rev. D 70 (2004) 045016
[hep-ph/0403296].
Pritchard:2008wy
J. R. Pritchard and E. Pierpaoli,
Phys. Rev. D 78 (2008) 065009
[arXiv:0805.1920 [astro-ph]].
DeBernardis:2009di
F. De Bernardis, T. D. Kitching, A. Heavens and A. Melchiorri,
Phys. Rev. D 80 (2009) 123509
[arXiv:0907.1917 [astro-ph.CO]].
Jimenez:2010ev
R. Jiménez, T. Kitching, C. Peña-Garay and L. Verde,
JCAP 1005 (2010) 035
[arXiv:1003.5918 [astro-ph.CO]].
Wagner:2012sw
C. Wagner, L. Verde and R. Jiménez,
Astrophys. J. 752 (2012) L31
[arXiv:1203.5342 [astro-ph.CO]].
Hannestad:2016fog
S. Hannestad and T. Schwetz,
JCAP 1611 (2016) no.11, 035
[arXiv:1606.04691 [astro-ph.CO]].
Xu:2016ddc
L. Xu and Q. G. Huang,
arXiv:1611.05178 [astro-ph.CO].
martinanew
M. Gerbino, M. Lattanzi, O. Mena and K. Freese,
arXiv:1611.07847 [astro-ph.CO].
blennow
M. Blennow,
JHEP 1401 (2014) 139
[arXiv:1311.3183 [hep-ph]].
planckcosmological
P. A. R. Ade et al. [Planck Collaboration],
Astron. Astrophys. 594 (2016) A13
[arXiv:1502.01589 [astro-ph.CO]].
bayes
R. Trotta,
Contemp. Phys. 49 (2008) 71
[arXiv:0803.4089 [astro-ph]].
trottareview
R. Trotta,
arXiv:1701.01467 [astro-ph.CO].
cosmomc1
A. Lewis and S. Bridle,
Phys. Rev. D 66, 103511 (2002)
[astro-ph/0205436].
cosmomc2
A. Lewis,
Phys. Rev. D 87, no. 10, 103529 (2013)
[arXiv:1304.4473 [astro-ph.CO]].
gelmanandrubin
S. Brooks and A. Gelman,
J. Comp. Graph. Stat. 7, 434-455 (1998).
lewiscosmocoffee
A. Lewis,
post on CosmoCoffee, http://cosmocoffee.info/viewtopic.php?t=350
Beacom:2004yd
J. F. Beacom, N. F. Bell and S. Dodelson,
Phys. Rev. Lett. 93 (2004) 121302
[astro-ph/0404585].
bellini
N. Bellomo, E. Bellini, B. Hu, R. Jiménez, C. Peña-Garay and L. Verde,
arXiv:1612.02598 [astro-ph.CO].
joudaki
S. Joudaki,
Phys. Rev. D 87 (2013) 083523
[arXiv:1202.0005 [astro-ph.CO]].
archidiaconogiusarmamelchiorrimena
M. Archidiacono, E. Giusarma, A. Melchiorri and O. Mena,
Phys. Rev. D 86 (2012) 043509
[arXiv:1206.0109 [astro-ph.CO]].
feeney
S. M. Feeney, H. V. Peiris and L. Verde,
JCAP 1304 (2013) 036
[arXiv:1302.0014 [astro-ph.CO]].
archidiacono1
M. Archidiacono, N. Fornengo, C. Giunti, S. Hannestad and A. Melchiorri,
Phys. Rev. D 87 (2013) no.12, 125034
[arXiv:1302.6720 [astro-ph.CO]].
archidiacono2
M. Archidiacono, S. Hannestad, A. Mirizzi, G. Raffelt and Y. Y. Y. Wong,
JCAP 1310 (2013) 020
[arXiv:1307.0615 [astro-ph.CO]].
mirizzi
A. Mirizzi, G. Mangano, N. Saviano, E. Borriello, C. Giunti, G. Miele and O. Pisanti,
Phys. Lett. B 726 (2013) 8
[arXiv:1303.5368 [astro-ph.CO]].
verde
L. Verde, S. M. Feeney, D. J. Mortlock and H. V. Peiris,
JCAP 1309 (2013) 013
[arXiv:1307.2904 [astro-ph.CO]].
gariazzo
S. Gariazzo, C. Giunti and M. Laveder,
JHEP 1311 (2013) 211
[arXiv:1309.3192 [hep-ph]].
archidiacono3
M. Archidiacono, N. Fornengo, S. Gariazzo, C. Giunti, S. Hannestad and M. Laveder,
JCAP 1406 (2014) 031
[arXiv:1404.1794 [astro-ph.CO]].
bergstrom
J. Bergström, M. C. González-García, V. Niro and J. Salvado,
JHEP 1410 (2014) 104
[arXiv:1407.3806 [hep-ph]].
rossi
G. Rossi, C. Yèche, N. Palanque-Delabrouille and J. Lesgourgues,
Phys. Rev. D 92 (2015) no.6, 063505
[arXiv:1412.6763 [astro-ph.CO]].
Zhang:2015rha
J. F. Zhang, M. M. Zhao, Y. H. Li and X. Zhang,
JCAP 1504 (2015) 038
[arXiv:1502.04028 [astro-ph.CO]].
divalentino1
E. Di Valentino, A. Melchiorri and J. Silk,
Phys. Rev. D 92 (2015) no.12, 121302
[arXiv:1507.06646 [astro-ph.CO]].
gerbino
M. Gerbino, M. Lattanzi and A. Melchiorri,
Phys. Rev. D 93 (2016) no.3, 033001
[arXiv:1507.08614 [hep-ph]].
divalentino2
E. Di Valentino, E. Giusarma, M. Lattanzi, O. Mena, A. Melchiorri and J. Silk,
Phys. Lett. B 752 (2016) 182
[arXiv:1507.08665 [astro-ph.CO]].
zhang
X. Zhang,
Phys. Rev. D 93 (2016) no.8, 083011
[arXiv:1511.02651 [astro-ph.CO]].
kitching
T. D. Kitching, L. Verde, A. F. Heavens and R. Jiménez,
Mon. Not. Roy. Astron. Soc. 459 (2016) no.1, 971
[arXiv:1602.02960 [astro-ph.CO]].
moresco
M. Moresco, R. Jiménez, L. Verde, A. Cimatti, L. Pozzetti, C. Maraston and D. Thomas,
JCAP 1612 (2016) no.12, 039
[arXiv:1604.00183 [astro-ph.CO]].
canac
N. Canac, G. Aslanyan, K. N. Abazajian, R. Easther and L. C. Price,
JCAP 1609 (2016) no.09, 022
[arXiv:1606.03057 [astro-ph.CO]].
archidiacono4
M. Archidiacono, S. Gariazzo, C. Giunti, S. Hannestad, R. Hansen, M. Laveder and T. Tram,
JCAP 1608 (2016) no.08, 067
[arXiv:1606.07673 [astro-ph.CO]].
kumar
S. Kumar and R. C. Nunes,
Phys. Rev. D 94 (2016) 123511
[arXiv:1608.02454 [astro-ph.CO]].
bouchet
E. Di Valentino and F. R. Bouchet,
JCAP 1610 (2016) no.10, 011
[arXiv:1609.00328 [astro-ph.CO]].
Kumar:2017dnp
S. Kumar and R. C. Nunes,
arXiv:1702.02143 [astro-ph.CO].
Guo:2017hea
R. Y. Guo, Y. H. Li, J. F. Zhang and X. Zhang,
JCAP 1705 (2017) no.05, 040
[arXiv:1702.04189 [astro-ph.CO]].
Zhang:2017rbg
X. Zhang,
Sci. China Phys. Mech. Astron. 60 (2017) no.6, 060431
[arXiv:1703.00651 [astro-ph.CO]].
Li:2017iur
E. K. Li, H. Zhang, M. Du, Z. H. Zhou and L. Xu,
arXiv:1703.01554 [astro-ph.CO].
Yang:2017amu
W. Yang, R. C. Nunes, S. Pan and D. F. Mota,
Phys. Rev. D 95 (2017) no.10, 103522
[arXiv:1703.02556 [astro-ph.CO]].
Feng:2017nss
L. Feng, J. F. Zhang and X. Zhang,
Eur. Phys. J. C 77 (2017) no.6, 418
[arXiv:1703.04884 [astro-ph.CO]].
Dirian:2017pwp
Y. Dirian,
arXiv:1704.04075 [astro-ph.CO].
Feng:2017mfs
L. Feng, J. F. Zhang and X. Zhang,
arXiv:1706.06913 [astro-ph.CO].
Lorenz:2017fgo
C. S. Lorenz, E. Calabrese and D. Alonso,
arXiv:1706.00730 [astro-ph.CO].
Couchot:2017pvz
F. Couchot, S. Henrot-Versillé, O. Perdereau, S. Plaszczynski, B. Rouillé d'Orfeuil, M. Spinelli and M. Tristram,
arXiv:1703.10829 [astro-ph.CO].
Doux:2017tsv
C. Doux, M. Penna-Lima, S. D. P. Vitenti, J. Tréguer, E. Aubourg and K. Ganga,
arXiv:1706.04583 [astro-ph.CO].
Simpson:2017qvj
F. Simpson, R. Jimenez, C. Pena-Garay and L. Verde,
JCAP 1706 (2017) no.06, 029
[arXiv:1703.03425 [astro-ph.CO]].
Schwetz:2017fey
T. Schwetz, K. Freese, M. Gerbino, E. Giusarma, S. Hannestad, M. Lattanzi, O. Mena and S. Vagnozzi,
arXiv:1703.04585 [astro-ph.CO].
Lewis:2006fu
A. Lewis and A. Challinor,
Phys. Rept. 429, 1 (2006)
[astro-ph/0601594].
Hall:2012kg
A. C. Hall and A. Challinor,
Mon. Not. Roy. Astron. Soc. 425 (2012) 1170
[arXiv:1205.6172 [astro-ph.CO]].
Ade:2013zuv
P. A. R. Ade et al. [Planck Collaboration],
Astron. Astrophys. 571 (2014) A16
[arXiv:1303.5076 [astro-ph.CO]].
s41
K. N. Abazajian et al. [Topical Conveners: K.N. Abazajian, J.E. Carlstrom, A.T. Lee Collaboration],
Astropart. Phys. 63 (2015) 66
[arXiv:1309.5383 [astro-ph.CO]].
s42
K. N. Abazajian et al.,
Astropart. Phys. 63 (2015) 55
[arXiv:1309.5381 [astro-ph.CO]].
s43
K. N. Abazajian et al. [CMB-S4 Collaboration],
arXiv:1610.02743 [astro-ph.CO].
Levi:2013gra
M. Levi et al. [DESI Collaboration],
arXiv:1308.0847 [astro-ph.CO].
Aghamousa:2016zmz
A. Aghamousa et al. [DESI Collaboration],
arXiv:1611.00036 [astro-ph.IM].
Aghamousa:2016sne
A. Aghamousa et al. [DESI Collaboration],
arXiv:1611.00037 [astro-ph.IM].
actpol1
E. Calabrese et al.,
JCAP 1408 (2014) 010
[arXiv:1406.4794 [astro-ph.CO]].
actpol2
S. W. Henderson et al.,
J. Low. Temp. Phys. 184 (2016) no.3-4, 772
[arXiv:1510.02809 [astro-ph.IM]].
spt-3g
B. A. Benson et al. [SPT-3G Collaboration],
Proc. SPIE Int. Soc. Opt. Eng. 9153 (2014) 91531P
[arXiv:1407.2973 [astro-ph.IM]].
sa
A. Suzuki et al. [POLARBEAR Collaboration],
J. Low. Temp. Phys. 184 (2016) no.3-4, 805
[arXiv:1512.07299 [astro-ph.IM]].
so
https://simonsobservatory.org/
litebird
T. Matsumura et al.,
J. Low. Temp. Phys. 176 (2014) 733
[arXiv:1311.2847 [astro-ph.IM]].
core1
F. R. Bouchet et al. [COrE Collaboration],
arXiv:1102.2181 [astro-ph.CO].
core2
E. Di Valentino et al. [the CORE Collaboration],
arXiv:1612.00021 [astro-ph.CO].
core3
F. Finelli et al. [CORE Collaboration],
arXiv:1612.08270 [astro-ph.CO].
pixie
A. Kogut et al.,
JCAP 1107 (2011) 025
[arXiv:1105.2044 [astro-ph.CO]].
bond
G. Efstathiou and J. R. Bond,
Mon. Not. Roy. Astron. Soc. 304 (1999) 75
[astro-ph/9807103].
howlett
C. Howlett, A. Lewis, A. Hall and A. Challinor,
JCAP 1204 (2012) 027
[arXiv:1201.3654 [astro-ph.CO]].
Gerbino:2016sgw
M. Gerbino, K. Freese, S. Vagnozzi, M. Lattanzi, O. Mena, E. Giusarma and S. Ho,
Phys. Rev. D 95 (2017) no.4, 043512
[arXiv:1610.08830 [astro-ph.CO]].
planckproducts
R. Adam et al. [Planck Collaboration],
Astron. Astrophys. 594 (2016) A1
[arXiv:1502.01582 [astro-ph.CO]].
plancklikelihood
N. Aghanim et al. [Planck Collaboration],
Astron. Astrophys. 594 (2016) A11
[arXiv:1507.02704 [astro-ph.CO]].
Calabrese:2008rt
E. Calabrese, A. Slosar, A. Melchiorri, G. F. Smoot and O. Zahn,
Phys. Rev. D 77 (2008) 123531
[arXiv:0803.2309 [astro-ph]].
DiValentino:2015ola
E. Di Valentino, A. Melchiorri and J. Silk,
Phys. Rev. D 92 (2015) no.12, 121302
[arXiv:1507.06646 [astro-ph.CO]].
DiValentino:2015bja
E. Di Valentino, A. Melchiorri and J. Silk,
Phys. Rev. D 93 (2016) no.2, 023513
[arXiv:1509.07501 [astro-ph.CO]].
Addison:2015wyg
G. E. Addison, Y. Huang, D. J. Watts, C. L. Bennett, M. Halpern, G. Hinshaw and J. L. Weiland,
Astrophys. J. 818 (2016) no.2, 132
[arXiv:1511.00055 [astro-ph.CO]].
neutrinofootprint
R. Jiménez, C. Peña-Garay and L. Verde,
Phys. Dark Univ. 15 (2017) 31
[arXiv:1602.08430 [astro-ph.CO]].
kaiserb
N. Kaiser,
Astrophys. J. 284 (1984) L9.
Heitmann:2008eq
K. Heitmann, M. White, C. Wagner, S. Habib and D. Higdon,
Astrophys. J. 715 (2010) 104
[arXiv:0812.1052 [astro-ph]].
Heitmann:2013bra
K. Heitmann, E. Lawrence, J. Kwan, S. Habib and D. Higdon,
Astrophys. J. 780 (2014) 111
[arXiv:1304.7849 [astro-ph.CO]].
Kwan:2013jva
J. Kwan, K. Heitmann, S. Habib, N. Padmanabhan, H. Finkel, E. Lawrence, N. Frontiere and A. Pope,
Astrophys. J. 810 (2015) no.1, 35
[arXiv:1311.6444 [astro-ph.CO]].
castorina1
F. Villaescusa-Navarro, F. Marulli, M. Viel, E. Branchini, E. Castorina, E. Sefusatti and S. Saito,
JCAP 1403 (2014) 011
[arXiv:1311.0866 [astro-ph.CO]].
loverde
M. LoVerde,
Phys. Rev. D 93 (2016) no.10, 103526
[arXiv:1602.08108 [astro-ph.CO]].
Raccanelli:2017kht
A. Raccanelli, L. Verde and F. Villaescusa-Navarro,
arXiv:1704.07837 [astro-ph.CO].
Gaztanaga:2011yi
E. Gaztañaga, M. Eriksen, M. Crocce, F. Castander, P. Fosalba, P. Martí, R. Miquel and A. Cabré,
Mon. Not. Roy. Astron. Soc. 422 (2012) no.4, 2904
[arXiv:1109.4852 [astro-ph.CO]].
Hand:2013xua
N. Hand et al.,
Phys. Rev. D 91 (2015) no.6, 062001
[arXiv:1311.6200 [astro-ph.CO]].
Bianchini:2014dla
F. Bianchini et al.,
Astrophys. J. 802 (2015) no.1, 64
[arXiv:1410.4502 [astro-ph.CO]].
Pullen:2015vtb
A. R. Pullen, S. Alam, S. He and S. Ho,
Mon. Not. Roy. Astron. Soc. 460 (2016) no.4, 4098
[arXiv:1511.04457 [astro-ph.CO]].
Bianchini:2015yly
F. Bianchini et al.,
Astrophys. J. 825 (2016) no.1, 24
[arXiv:1511.05116 [astro-ph.CO]].
Kirk:2015dpw
D. Kirk et al. [DES Collaboration],
Mon. Not. Roy. Astron. Soc. 459 (2016) no.1, 21
[arXiv:1512.04535 [astro-ph.CO]].
Pujol:2016lfe
A. Pujol et al.,
Mon. Not. Roy. Astron. Soc. 462 (2016) no.1, 35
[arXiv:1601.00160 [astro-ph.CO]].
Singh:2016xey
S. Singh, R. Mandelbaum and J. R. Brownstein,
Mon. Not. Roy. Astron. Soc. 464 (2016) no.2, 2120
[arXiv:1606.08841 [astro-ph.CO]].
Prat:2016xor
J. Prat et al. [DES Collaboration],
[arXiv:1609.08167 [astro-ph.CO]].
Laureijs:2011gra
R. Laureijs et al. [EUCLID Collaboration],
arXiv:1110.3193 [astro-ph.CO].
Carbone:2010ik
C. Carbone, L. Verde, Y. Wang and A. Cimatti,
JCAP 1103 (2011) 030
[arXiv:1012.2868 [astro-ph.CO]].
Joudaki:2011nw
S. Joudaki and M. Kaplinghat,
Phys. Rev. D 86 (2012) 023526
[arXiv:1106.0299 [astro-ph.CO]].
Carbone:2011by
C. Carbone, C. Fedeli, L. Moscardini and A. Cimatti,
JCAP 1203 (2012) 023
[arXiv:1112.4810 [astro-ph.CO]].
Hamann:2012fe
J. Hamann, S. Hannestad and Y. Y. Y. Wong,
JCAP 1211 (2012) 052
[arXiv:1209.1043 [astro-ph.CO]].
Basse:2013zua
T. Basse, O. E. Bjælde, J. Hamann, S. Hannestad and Y. Y. Y. Wong,
JCAP 1405 (2014) 021
[arXiv:1304.2321 [astro-ph.CO]].
Spergel:2013tha
D. Spergel et al.,
arXiv:1305.5422 [astro-ph.IM].
Poulin:2016nat
V. Poulin, P. D. Serpico and J. Lesgourgues,
JCAP 1608 (2016) no.08, 036
[arXiv:1606.02073 [astro-ph.CO]].
sdss
D. J. Eisenstein et al. [SDSS Collaboration],
Astron. J. 142 (2011) 72
[arXiv:1101.1529 [astro-ph.IM]].
bolton
A. S. Bolton et al. [Cutler Group, LP Collaboration],
Astron. J. 144 (2012) 144
[arXiv:1207.7326 [astro-ph.CO]].
dawson
K. S. Dawson et al. [BOSS Collaboration],
Astron. J. 145 (2013) 10
[arXiv:1208.0022 [astro-ph.CO]].
smee
S. Smee et al.,
Astron. J. 146 (2013) 32
[arXiv:1208.2233 [astro-ph.IM]].
alam
S. Alam et al. [SDSS-III Collaboration],
Astrophys. J. Suppl. 219 (2015) no.1, 12
[arXiv:1501.00963 [astro-ph.IM]].
12
S. Alam et al. [BOSS Collaboration],
[arXiv:1607.03155 [astro-ph.CO]].
reid
B. Reid et al.,
Mon. Not. Roy. Astron. Soc. 455 (2016) no.2, 1553
[arXiv:1509.06529 [astro-ph.CO]].
Gil-Marin:2015sqa
H. Gil-Marín et al.,
Mon. Not. Roy. Astron. Soc. 460 (2016) no.4, 4188
[arXiv:1509.06386 [astro-ph.CO]].
acousticscale
H. J. Seo et al.,
Astrophys. J. 761 (2012) 13
[arXiv:1201.2172 [astro-ph.CO]].
giusarmadeputterhomena
E. Giusarma, R. de Putter, S. Ho and O. Mena,
Phys. Rev. D 88 (2013) no.6, 063515
[arXiv:1306.5544 [astro-ph.CO]].
Lewis:1999bs
A. Lewis, A. Challinor and A. Lasenby,
Astrophys. J. 538 (2000) 473
[astro-ph/9911177].
halofit1
R. E. Smith et al. [VIRGO Consortium Collaboration],
Mon. Not. Roy. Astron. Soc. 341 (2003) 1311
[astro-ph/0207664].
halofit2
R. Takahashi, M. Sato, T. Nishimichi, A. Taruya and M. Oguri,
Astrophys. J. 761 (2012) 152
[arXiv:1208.2701 [astro-ph.CO]].
birdviel
S. Bird, M. Viel and M. G. Haehnelt,
Mon. Not. Roy. Astron. Soc. 420 (2012) 2551
[arXiv:1109.4416 [astro-ph.CO]].
bqq
S. Cole et al. [2dFGRS Collaboration],
Mon. Not. Roy. Astron. Soc. 362 (2005) 505
[astro-ph/0501174].
amendola
L. Amendola, E. Menegoni, C. Di Porto, M. Corsi and E. Branchini,
arXiv:1502.03994 [astro-ph.CO].
Dalal:2007cu
N. Dalal, O. Dore, D. Huterer and A. Shirokov,
Phys. Rev. D 77 (2008) 123514
[arXiv:0710.4560 [astro-ph]].
Eisenstein:1997ik
D. J. Eisenstein and W. Hu,
Astrophys. J. 496 (1998) 605
[astro-ph/9709112].
Hou:2011ec
Z. Hou, R. Keisler, L. Knox, M. Millea and C. Reichardt,
Phys. Rev. D 87 (2013) 083008
[arXiv:1104.2333 [astro-ph.CO]].
spt
Z. Hou et al.,
Astrophys. J. 782 (2014) 74
[arXiv:1212.6267 [astro-ph.CO]].
6dfgs
F. Beutler et al.,
Mon. Not. Roy. Astron. Soc. 416 (2011) 3017
[arXiv:1106.3366 [astro-ph.CO]].
wigglez
C. Blake et al.,
Mon. Not. Roy. Astron. Soc. 418 (2011) 1707
[arXiv:1108.2635 [astro-ph.CO]].
dr11
L. Anderson et al. [BOSS Collaboration],
Mon. Not. Roy. Astron. Soc. 441 (2014) no.1, 24
[arXiv:1312.4877 [astro-ph.CO]].
mgs
A. J. Ross, L. Samushia, C. Howlett, W. J. Percival, A. Burden and M. Manera,
Mon. Not. Roy. Astron. Soc. 449 (2015) no.1, 835
[arXiv:1409.3242 [astro-ph.CO]].
lymana
A. Font-Ribera et al. [BOSS Collaboration],
JCAP 1405 (2014) 027
[arXiv:1311.1767 [astro-ph.CO]].
giusarmadeputtermena
E. Giusarma, R. De Putter and O. Mena,
Phys. Rev. D 87 (2013) no.4, 043515
[arXiv:1211.2154 [astro-ph.CO]].
riess2011
A. G. Riess et al.,
Astrophys. J. 730 (2011) 119
Erratum: [Astrophys. J. 732 (2011) 129]
[arXiv:1103.2976 [astro-ph.CO]].
revisited
G. Efstathiou,
Mon. Not. Roy. Astron. Soc. 440 (2014) no.2, 1138
[arXiv:1311.3461 [astro-ph.CO]].
ngc4258
E. M. L. Humphreys, M. J. Reid, J. M. Moran, L. J. Greenhill and A. L. Argon,
Astrophys. J. 775 (2013) 13
[arXiv:1307.6031 [astro-ph.CO]].
riess2016
A. G. Riess et al.,
Astrophys. J. 826 (2016) no.1, 56
[arXiv:1604.01424 [astro-ph.CO]].
bonvin
V. Bonvin et al.,
arXiv:1607.01790 [astro-ph.CO].
inthebeginning
R. Barkana and A. Loeb,
Phys. Rept. 349 (2001) 125
[astro-ph/0010468].
Hinshaw:2012aka
G. Hinshaw et al. [WMAP Collaboration],
Astrophys. J. Suppl. 208, 19 (2013)
[arXiv:1212.5226 [astro-ph.CO]].
Stark:2010qj
D. P. Stark, R. S. Ellis, K. Chiu, M. Ouchi and A. Bunker,
Mon. Not. Roy. Astron. Soc. 408 (2010) 1628
[arXiv:1003.5244 [astro-ph.CO]].
Pentericci:2014nia
L. Pentericci et al.,
Astrophys. J. 793, no. 2, 113 (2014)
[arXiv:1403.5466 [astro-ph.CO]].
Schenker:2014tda
M. A. Schenker, R. S. Ellis, N. P. Konidaris and D. P. Stark,
Astrophys. J. 795, no. 1, 20 (2014)
[arXiv:1404.4632 [astro-ph.CO]].
Treu:2013ida
T. Treu, K. B. Schmidt, M. Trenti, L. D. Bradley and M. Stiavelli,
Astrophys. J. 775, L29 (2013)
[arXiv:1308.5985 [astro-ph.CO]].
Tilvi:2014oia
V. Tilvi et al.,
Astrophys. J. 794, no. 1, 5 (2014)
[arXiv:1405.4869 [astro-ph.CO]].
Lattanzi:2016dzq
M. Lattanzi et al.,
JCAP 1702 (2017) no.02, 041
[arXiv:1611.01123 [astro-ph.CO]].
Meerburg:2017lfh
P. D. Meerburg, J. Meyers, K. M. Smith and A. van Engelen,
arXiv:1701.06992 [astro-ph.CO].
mesinger
A. Mesinger, A. Aykutalp, E. Vanzella, L. Pentericci, A. Ferrara and M. Dijkstra,
Mon. Not. Roy. Astron. Soc. 446 (2015) 566
[arXiv:1406.6373 [astro-ph.CO]].
choudhury
T. R. Choudhury, E. Puchwein, M. G. Haehnelt and J. S. Bolton,
Mon. Not. Roy. Astron. Soc. 452 (2015) no.1, 261
[arXiv:1412.4790 [astro-ph.CO]].
Robertson:2015uda
B. E. Robertson, R. S. Ellis, S. R. Furlanetto and J. S. Dunlop,
Astrophys. J. 802, no. 2, L19 (2015)
[arXiv:1502.02024 [astro-ph.CO]].
Bouwens:2015vha
R. J. Bouwens, G. D. Illingworth, P. A. Oesch, J. Caruana, B. Holwerda, R. Smit and S. Wilkins,
Astrophys. J. 811, no. 2, 140 (2015)
[arXiv:1503.08228 [astro-ph.CO]].
mitrachoudhuryferrara
S. Mitra, T. R. Choudhury and A. Ferrara,
Mon. Not. Roy. Astron. Soc. 454 (2015) no.1, L76
[arXiv:1505.05507 [astro-ph.CO]].
Allison:2015qca
R. Allison, P. Caucal, E. Calabrese, J. Dunkley and T. Louis,
Phys. Rev. D 92 (2015) no.12, 123535
[arXiv:1509.07471 [astro-ph.CO]].
Liu:2015txa
A. Liu, J. R. Pritchard, R. Allison, A. R. Parsons, U. Seljak and B. D. Sherwin,
Phys. Rev. D 93 (2016) no.4, 043013
[arXiv:1509.08463 [astro-ph.CO]].
Calabrese:2016eii
E. Calabrese, D. Alonso and J. Dunkley,
arXiv:1611.10269 [astro-ph.CO].
plancktau
N. Aghanim et al. [Planck Collaboration],
arXiv:1605.02985 [astro-ph.CO].
planckreionization
R. Adam et al. [Planck Collaboration],
Astron. Astrophys. 596 (2016) A108
arXiv:1605.03507 [astro-ph.CO].
clusters
S. W. Allen, A. E. Evrard and A. B. Mantz,
Ann. Rev. Astron. Astrophys. 49 (2011) 409
[arXiv:1103.4829 [astro-ph.CO]].
sz1
Y. B. Zeldovich and R. A. Sunyaev,
Astrophys. Space Sci. 4 (1969) 301.
sz2
R. A. Sunyaev and Y. B. Zeldovich,
Astrophys. Space Sci. 7 (1970) 3.
sz3
R. A. Sunyaev and Y. B. Zeldovich,
Ann. Rev. Astron. Astrophys. 18 (1980) 537.
plancksz
P. A. R. Ade et al. [Planck Collaboration],
Astron. Astrophys. 594 (2016) A27
[arXiv:1502.01598 [astro-ph.CO]].
plancksz1
P. A. R. Ade et al. [Planck Collaboration],
Astron. Astrophys. 594 (2016) A24
[arXiv:1502.01597 [astro-ph.CO]].
wtg
A. von der Linden et al.,
Mon. Not. Roy. Astron. Soc. 443 (2014) no.3, 1973
[arXiv:1402.2670 [astro-ph.CO]].
melin
J. B. Melin and J. G. Bartlett,
Astron. Astrophys. 578 (2015) A21
[arXiv:1408.5633 [astro-ph.CO]].
zaldarriaga
M. Zaldarriaga and U. Seljak,
Phys. Rev. D 59 (1999) 123507
[astro-ph/9810257].
Kitching:2016zkn
T. D. Kitching, J. Alsing, A. F. Heavens, R. Jiménez, J. D. McEwen and L. Verde,
arXiv:1611.04954 [astro-ph.CO].
Dvorkin:2014lea
C. Dvorkin, M. Wyman, D. H. Rudd and W. Hu,
Phys. Rev. D 90 (2014) no.8, 083503
[arXiv:1403.8049 [astro-ph.CO]].
cuestapc
A. Cuesta, private communication.
h01
A. Pourtsidou and T. Tram,
Phys. Rev. D 94 (2016) no.4, 043518
[arXiv:1604.04222 [astro-ph.CO]].
h02
S. Grandis, D. Rapetti, A. Saro, J. J. Mohr and J. P. Dietrich,
Mon. Not. Roy. Astron. Soc. 463 (2016) no.2, 1416,
arXiv:1604.06463 [astro-ph.CO].
h03
E. Di Valentino, A. Melchiorri and J. Silk,
Phys. Lett. B 761 (2016) 242
[arXiv:1606.00634 [astro-ph.CO]].
h05
Q. G. Huang and K. Wang,
Eur. Phys. J. C 76 (2016) no.9, 506
[arXiv:1606.05965 [astro-ph.CO]].
h06
B. L'Huillier and A. Shafieloo,
JCAP 1701 (2017) no.01, 015
[arXiv:1606.06832 [astro-ph.CO]].
h07
Y. Chen, S. Kumar and B. Ratra,
Astrophys. J. 835 (2017) 86
[arXiv:1606.07316 [astro-ph.CO]].
h08
T. Tram, R. Vallance and V. Vennin,
JCAP 1701 (2017) no.01, 046
[arXiv:1606.09199 [astro-ph.CO]].
h09
J. L. Bernal, L. Verde and A. G. Riess,
JCAP 1610 (2016) no.10, 019
[arXiv:1607.05617 [astro-ph.CO]].
h010
V. V. Luković, R. D'Agostino and N. Vittorio,
Astron. Astrophys. 595 (2016) A109
[arXiv:1607.05677 [astro-ph.CO]].
h011
P. Ko and Y. Tang,
Phys. Lett. B 762 (2016) 462
[arXiv:1608.01083 [hep-ph]].
h012
T. Karwal and M. Kamionkowski,
Phys. Rev. D 94 (2016) no.10, 103523
[arXiv:1608.01309 [astro-ph.CO]].
h013
A. E. Romano,
arXiv:1609.04081 [astro-ph.CO].
h014
S. Joudaki et al.,
arXiv:1610.04606 [astro-ph.CO].
h015
A. Shafieloo and D. K. Hazra,
JCAP 1704 (2017) no.04, 012
[arXiv:1610.07402 [astro-ph.CO]].
h016
W. Cardona, M. Kunz and V. Pettorino,
JCAP 1703 (2017) no.03, 056
[arXiv:1611.06088 [astro-ph.CO]].
h017
S. Bethapudi and S. Desai,
Eur. Phys. J. Plus 132 (2017) no.2, 78
[arXiv:1701.01789 [astro-ph.CO]].
h018
I. Odderskov, S. Hannestad and J. Brandbyge,
JCAP 1703 (2017) no.03, 022
[arXiv:1701.05391 [astro-ph.CO]].
hamannetal
J. Hamann, S. Hannestad, J. Lesgourgues, C. Rampf and Y. Y. Y. Wong,
JCAP 1007 (2010) 022
[arXiv:1003.3999 [astro-ph.CO]].
Zhao:2016ecj
M. M. Zhao, Y. H. Li, J. F. Zhang and X. Zhang,
Mon. Not. Roy. Astron. Soc. 469 (2017) 1713
[arXiv:1608.01219 [astro-ph.CO]].
Wang:2016tsz
S. Wang, Y. F. Wang, D. M. Xia and X. Zhang,
Phys. Rev. D 94 (2016) no.8, 083519
[arXiv:1608.00672 [astro-ph.CO]].
brandbyge
J. Brandbyge, S. Hannestad, T. Haugbølle and Y. Y. Y. Wong,
JCAP 1009 (2010) 014
[arXiv:1004.4105 [astro-ph.CO]].
ichiki
K. Ichiki and M. Takada,
Phys. Rev. D 85 (2012) 063521
[arXiv:1108.4688 [astro-ph.CO]].
castorina2
E. Castorina, E. Sefusatti, R. K. Sheth, F. Villaescusa-Navarro and M. Viel,
JCAP 1402 (2014) 049
[arXiv:1311.1212 [astro-ph.CO]].
costanzi
M. Costanzi, F. Villaescusa-Navarro, M. Viel, J. Q. Xia, S. Borgani, E. Castorina and E. Sefusatti,
JCAP 1312 (2013) 012
[arXiv:1311.1514 [astro-ph.CO]].
castorina3
E. Castorina, C. Carbone, J. Bel, E. Sefusatti and K. Dolag,
JCAP 1507 (2015) no.07, 043
[arXiv:1505.07148 [astro-ph.CO]].
Carbone:2016nzj
C. Carbone, M. Petkova and K. Dolag,
JCAP 1607 (2016) no.07, 034
[arXiv:1605.02024 [astro-ph.CO]].
zennaro
M. Zennaro, J. Bel, F. Villaescusa-Navarro, C. Carbone, E. Sefusatti and L. Guzzo,
Mon. Not. Roy. Astron. Soc. 466 (2017) no.3, 3244
[arXiv:1605.05283 [astro-ph.CO]].
rizzo
L. A. Rizzo, F. Villaescusa-Navarro, P. Monaco, E. Munari, S. Borgani, E. Castorina and E. Sefusatti,
JCAP 1701 (2017) no.01, 008
[arXiv:1610.07624 [astro-ph.CO]].
Hand:2017ilm
N. Hand, U. Seljak, F. Beutler and Z. Vlah,
arXiv:1706.02362 [astro-ph.CO].
Modi:2017wds
C. Modi, M. White and Z. Vlah,
arXiv:1706.03173 [astro-ph.CO].
Seljak:2017rmr
U. Seljak, G. Aslanyan, Y. Feng and C. Modi,
arXiv:1706.06645 [astro-ph.CO].
jaffe
J. Errard, S. M. Feeney, H. V. Peiris and A. H. Jaffe,
JCAP 1603 (2016) no.03, 052
[arXiv:1509.06770 [astro-ph.CO]].
inprep
S. Vagnozzi et al., in preparation.
Raveri:2015maa
M. Raveri,
Phys. Rev. D 93 (2016) no.4, 043522
[arXiv:1510.00688 [astro-ph.CO]].
Heavens:2017hkr
A. Heavens, Y. Fantaye, E. Sellentin, H. Eggers, Z. Hosenie, S. Kroon and A. Mootoovaloo,
arXiv:1704.03467 [astro-ph.CO].
Hannestad:2005gj
S. Hannestad,
Phys. Rev. Lett. 95 (2005) 221301
[astro-ph/0505551].
Archidiacono:2013fha
M. Archidiacono, E. Giusarma, S. Hannestad and O. Mena,
Adv. High Energy Phys. 2013, 191047 (2013)
[arXiv:1307.0637 [astro-ph.CO]].
Banerjee:2016suz
A. Banerjee, B. Jain, N. Dalal and J. Shelton,
arXiv:1612.07126 [astro-ph.CO].
Brust:2017nmv
C. Brust, Y. Cui and K. Sigurdson,
arXiv:1703.10732 [astro-ph.CO].
DiValentino:2013qma
E. Di Valentino, A. Melchiorri and O. Mena,
JCAP 1311 (2013) 018
[arXiv:1304.5981 [astro-ph.CO]].
Roland:2016gli
S. B. Roland and B. Shakya,
JCAP 1705 (2017) no.05, 027
[arXiv:1609.06739 [hep-ph]].
Melchiorri:2007cd
A. Melchiorri, O. Mena and A. Slosar,
Phys. Rev. D 76 (2007) 041303
[arXiv:0705.2695 [astro-ph]].
Conlon:2013isa
J. P. Conlon and M. C. D. Marsh,
JHEP 1310 (2013) 214
[arXiv:1304.1804 [hep-ph]].
Ackerman:mha
L. Ackerman, M. R. Buckley, S. M. Carroll and M. Kamionkowski,
Phys. Rev. D 79 (2009) 023519
[arXiv:0810.5126 [hep-ph]].
Kaplan:2009de
D. E. Kaplan, G. Z. Krnjaic, K. R. Rehermann and C. M. Wells,
JCAP 1005 (2010) 021
[arXiv:0909.0753 [hep-ph]].
Cline:2012is
J. M. Cline, Z. Liu and W. Xue,
Phys. Rev. D 85 (2012) 101302
[arXiv:1201.4858 [hep-ph]].
CyrRacine:2012fz
F. Y. Cyr-Racine and K. Sigurdson,
Phys. Rev. D 87 (2013) no.10, 103515
[arXiv:1209.5752 [astro-ph.CO]].
Fan:2013yva
J. Fan, A. Katz, L. Randall and M. Reece,
Phys. Dark Univ. 2 (2013) 139
[arXiv:1303.1521 [astro-ph.CO]].
Vogel:2013raa
H. Vogel and J. Redondo,
JCAP 1402 (2014) 029
[arXiv:1311.2600 [hep-ph]].
Petraki:2014uza
K. Petraki, L. Pearce and A. Kusenko,
JCAP 1407 (2014) 039
[arXiv:1403.1077 [hep-ph]].
Foot:2014uba
R. Foot and S. Vagnozzi,
Phys. Rev. D 91 (2015) 023512
[arXiv:1409.7174 [hep-ph]].
Foot:2014osa
R. Foot and S. Vagnozzi,
Phys. Lett. B 748 (2015) 61
[arXiv:1412.0762 [hep-ph]].
Chacko:2015noa
Z. Chacko, Y. Cui, S. Hong and T. Okui,
Phys. Rev. D 92 (2015) 055033
[arXiv:1505.04192 [hep-ph]].
Foot:2016wvj
R. Foot and S. Vagnozzi,
JCAP 1607 (2016) no.07, 013
[arXiv:1602.02467 [astro-ph.CO]].
Boddy:2016bbu
K. K. Boddy, M. Kaplinghat, A. Kwa and A. H. G. Peter,
Phys. Rev. D 94 (2016) no.12, 123017
[arXiv:1609.03592 [hep-ph]].
|
http://arxiv.org/abs/1701.07610v2 | 20170126080743 | Charge States and FIP Bias of the Solar Wind from Coronal Holes, Active Regions, and Quiet Sun | [
"Hui Fu",
"Maria S. Madjarska",
"Lidong Xia",
"Bo Li",
"Zhenghua Huang",
"Zhipeng Wangguan"
] | astro-ph.SR | [
"astro-ph.SR"
] |
1Shandong Provincial Key Laboratory of Optical
Astronomy and Solar-Terrestrial Environment, Institute of Space Sciences, Shandong University, Weihai 264209, Shandong,China; [email protected]
2Max Planck Institute for Solar System Research, Justus-von-Liebig-Weg 3, 37077, Göttingen, Germany
Connecting in-situ measured solar-wind plasma properties with typical regions
on the Sun can provide an effective constraint
and test to various solar wind models.
We examine the statistical characteristics of the solar wind with an origin
in different types of source regions.
We find that the speed distribution of coronal hole (CH) wind is bimodal,
with the slow wind peaking at ∼400 km s^-1 and the fast wind at ∼600 km s^-1.
An anti-correlation between the solar wind speeds and the O^7+/O^6+ ion ratio
remains valid in all three types of solar wind, as well as during the three studied solar cycle activity phases, i.e. solar maximum, decline and minimum.
The N_Fe/N_O range and its average values all
decrease with increasing solar wind speed in the different types of solar wind.
The N_Fe/N_O range (0.06–0.40, FIP bias range 1–7) for AR wind is wider than for CH wind (0.06–0.20, FIP bias range 1–3), while the minimum value of N_Fe/N_O (∼0.06) does not change with
the variation of speed,
and it is similar for all source regions.
The two-peak distribution of CH wind and
the anti-correlation between the speed and O^7+/O^6+ in all three types of solar wind can be explained qualitatively
by both the wave-turbulence-driven (WTD) and
reconnection-loop-opening (RLO) models,
whereas the distribution features of N_Fe/N_O in the different source regions
of the solar wind can be explained more reasonably by the RLO models.
§ INTRODUCTION
It is common knowledge that the in-situ solar wind has two basic components:
a steady fast (∼800 km s^-1) and a variable slow (∼400 km s^-1) component
<cit.>.
While it is widely accepted that the fast solar wind (FSW) originates
in coronal holes
<cit.>,
the source regions of the slow solar wind (SSW) are still poorly understood.
One of the sources of the SSW has been linked to sources at the edges
of active regions
<cit.>
and it is also believed that the SSW originates in the quiet Sun
<cit.>.
Intuitively, identification of wind sources can be done by tracing
wind parcels back to the Sun.
By applying a potential-field-source-surface (PFSS) model,
<cit.>
mapped a low latitude solar wind back to the photosphere for nearly
three solar activity cycles. They showed, for instance, that polar coronal holes contribute to the solar wind only over about the half solar cycle, while for the rest of the time the low-latitude solar wind originates from “isolated low-latitude
and midlatitude coronal holes or polar coronal hole
extensions that have a flow character distinct from that
of the large polar hole flows”. Using a standard two-step mapping procedure,
<cit.>
traced solar wind parcels back to the solar surface
for four Carrington rotations during a solar maximum phase.
The solar wind was divided into two categories: coronal hole and
active region wind, and their statistical parameters were analyzed
separately.
The authors reported that the O^7+/O^6+ ion ratio is lower for the coronal-hole wind
in comparison to the active-region wind.
<cit.>
traced the solar wind back to its sources and classified
the solar wind by the type of the source region,
i.e. active region (AR), quiet Sun (QS) and coronal hole (CH) wind.
They found that the fractions occupied by each type of solar wind
change with the solar cycle activity and established that the quiet Sun regions are an important source of the solar wind during the solar minimum phase.
Alternatively, wind sources can be determined by examining in-situ charge
states and elemental abundances.
For the former, the charge states of species such as oxygen and carbon are regarded
as a telltale signature of the solar wind sources.
For example, the density ratio of n(O^7+) to n(O^6+) (i.e. the ionic charge state ratio, hereafter O^7+/O^6+) does not
vary with the distance
beyond several solar radii above the solar surface, and, therefore,
it reflects the electron temperature in the coronal sources
<cit.>.
As the temperatures in different source regions are different,
therefore, the source regions can be identified by the charge states
detected in situ
<cit.>.
The first ionization potential (FIP) effect describes the element anomalies in the
upper solar atmosphere and the solar wind (especially in the SSW),
i.e. the abundance increase of elements with a FIP of less than 10 eV (e.g., Mg, Si, and Fe) to those with a higher FIP (e.g., O, Ne, and He).
The in-situ measured FIP bias is usually represented by N_Fe/N_O
and it can be expressed as
FIP bias=(N_Fe/N_O)_solar wind/(N_Fe/N_O)_photosphere,
where N_Fe/N_O is the abundance ratio of iron (Fe) and
oxygen (O).
In the slow wind the FIP bias is ∼3,
while in the fast streams it is found to be smaller
but still above 1
<cit.>.
As the FIP bias in coronal holes, the quiet Sun, and active regions
has significant differences, the solar wind detected in situ
can be linked to those source regions
<cit.>.
In the present study, we only present the in-situ measurements of N_Fe/N_O
that can easily be translated to a FIP bias by considering N_Fe/N_O in the photosphere to be a constant at ∼0.06 <cit.>.
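As a concrete illustration of this conversion (a minimal Python sketch, not part of the original analysis; only the photospheric value of 0.06 comes from the text):

FE_O_PHOT = 0.06  # photospheric N_Fe/N_O, taken as constant

def fip_bias(fe_o_wind):
    # FIP bias = (N_Fe/N_O)_solar wind / (N_Fe/N_O)_photosphere
    return fe_o_wind / FE_O_PHOT

# e.g. the AR-wind N_Fe/N_O range 0.06-0.40 maps to a FIP bias range of ~1-7
print(fip_bias(0.06), fip_bias(0.40))  # -> 1.0, ~6.7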
Two theoretical frameworks, the wave-turbulence-driven (WTD) models and
the reconnection-loop-opening (RLO) models, have been proposed
to account for the observational results.
In the WTD models, the magnetic funnels are jostled by the
photospheric convection,
and waves are produced that propagate into the upper atmosphere.
These waves can dissipate to heat and
accelerate the nascent solar wind
<cit.>.
In the RLO models the magnetic-field lines of loops reconnect with open field lines, and
during this process mass and energy are released
<cit.>.
For more details on these two classes of models please see the review by
<cit.>.
There are two important differences between the two models.
First, in the WTD models the plasma escapes directly along open magnetic-field lines,
whereas in the RLO models the plasma is released from closed loops
through magnetic reconnection.
Second, in the WTD models the speed of the solar wind is determined
by the super radial expansion
<cit.> and curvature degree <cit.>
of the open magnetic-field lines.
In the RLO models, the speed of the solar wind depends on
the temperature of the loops that reconnect with
the open magnetic-field lines,
with hotter loops producing slow wind
and cooler loops producing fast wind
<cit.>.
Observations can be used to test any of the above mentioned models.
For this purpose, the following three questions need to be addressed.
First, where do the two components (a steady fast component and
a variable slow portion
<cit.>)
of the solar wind originate from?
Second, why does the O^7+/O^6+ charge state ratio anti-correlate with the solar wind speed
<cit.>?
Third, why is the FIP bias (FIP bias value range) higher (wider)
in the slow solar wind than in the fast wind
<cit.>?
Traditionally, solar wind is classified by its speeds.
The speed, however, is not the only characteristic feature of
the solar wind
<cit.>.
The plasma properties and magnetic-field structures can be
significantly different depending on the solar regions,
i.e. coronal holes, quiet Sun, and active regions.
The differences in the source regions would then influence the solar wind
streams they generate <cit.>.
Therefore, connecting in-situ measured solar-wind plasma properties with
typical regions on the Sun can provide an effective
constraint and test to various solar wind models
<cit.>.
In an earlier study,
we classified the solar wind by the source region
type (CHs, QS, and ARs)
<cit.>.
Here, we analyze the relationship between in-situ solar wind parameters and
source regions in different phases of the solar cycle activity.
We aim at answering the following outstanding questions:
1) Are there any differences in the speed, O^7+/O^6+, and N_Fe/N_O distributions
of the different types of solar wind, i.e. AR, QS and CH?
2) Is the anti-correlation between the solar wind speed and
the O^7+/O^6+ charge state ratio still valid for each type of solar wind?
3) What are the characteristics of the N_Fe/N_O distribution versus speed
for the different types of solar wind?
We also discuss our new results in the light
of the WTD and RLO models.
The paper is organized as follows.
In Section 2 we describe the data and the analysis methods.
The statistical results are discussed in Section 3.
The summary and concluding remarks are given in Section 4.
§ DATA AND ANALYSIS
In <cit.>, the source regions were categorized
into three groups:
CHs, ARs, and QS,
and the wind streams originating from these regions were given
the corresponding names
CH wind, AR wind, and QS wind, respectively. We used hourly averaged solar wind speeds
measured by the Solar Wind Electron, Proton, and Alpha Monitor (SWEPAM)
onboard the Advanced Composition Explorer (ACE)
<cit.>.
The O^7+/O^6+ charge state ratio and the N_Fe/N_O FIP bias measure (also hourly averaged)
were recorded by the
Solar Wind Ion Composition Spectrometer (SWICS)
<cit.>.
In the present study we are only interested in the non-transient solar wind, and therefore,
the intervals occupied by Interplanetary
Coronal Mass Ejections (ICMEs) were excluded.
We used the method suggested by <cit.>, in which hours with O^7+/O^6+ exceeding
6.008 exp(-0.00578v) (v being the solar wind speed) were discarded.
In this study, the threshold between FSW and SSW is chosen as 500 km s^-1 <cit.>.
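A minimal sketch of this selection (Python; the criterion and the 500 km s^-1 threshold are those quoted above, the array names are illustrative):

import numpy as np

def non_transient(v_kms, o7_o6):
    # Hours flagged as ICME plasma: O7+/O6+ > 6.008*exp(-0.00578*v)
    icme = o7_o6 > 6.008 * np.exp(-0.00578 * v_kms)
    return ~icme

v = np.array([380.0, 520.0, 750.0])   # hourly speeds [km/s]
q = np.array([0.35, 0.10, 0.30])      # hourly O7+/O6+ ratios
keep = non_transient(v, q)            # the 750 km/s, high-charge-state hour is dropped
fast = keep & (v >= 500.0)            # FSW
slow = keep & (v < 500.0)             # SSW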
The two-step mapping procedure
<cit.>
was applied to trace the solar wind parcels back to the solar surface.
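The first, ballistic step of this procedure can be sketched as follows (Python; the rotation rate and the 2.5 solar radii source surface are conventional assumptions, not values quoted in this paper):

import numpy as np

OMEGA_SUN = 14.38            # sidereal solar rotation [deg/day] (assumed)
AU_KM = 1.496e8              # Sun-Earth distance [km]
R_SS_KM = 2.5 * 6.957e5      # source surface at 2.5 solar radii [km]

def footpoint_longitude(lon_ace_deg, v_kms):
    # Map a parcel from ACE back to the source surface at constant speed;
    # the Sun rotates under the parcel during the transit time.
    transit_days = (AU_KM - R_SS_KM) / v_kms / 86400.0
    return (lon_ace_deg + OMEGA_SUN * transit_days) % 360.0

print(footpoint_longitude(0.0, 450.0))  # ~55 deg shift for a 450 km/s parcel

The second step, not sketched here, traces the open field line from that source-surface point down to the photosphere with the PFSS model.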
The footpoints were then placed on the EUV images observed by the
Extreme-ultraviolet Imaging Telescope (EIT)
and the photospheric magnetograms taken by the
Michelson Doppler Imager (MDI),
both onboard the Solar and Heliospheric Observatory (SoHO).
Here, the EIT 284 Å passband was used as coronal holes are best distinguishable there.
The scheme for classifying the source regions is illustrated
in the top panels (a–d) of Figure <ref>
where the footpoint locations (red crosses) are overplotted
on the EIT images (a1, b1, c1, d1) and the photospheric magnetograms (a2, b2, c2, d2).
The wind with footpoints located within CHs is classified as “CH wind".
A quantitative approach following <cit.>
is implemented to identify coronal hole boundaries.
In this approach, a rectangular box that includes the apparently dark area
and its brighter surroundings is chosen.
The intensity histogram of such a box shows a multipeak distribution
(see Figure 3 in <cit.> and Figure 2 in <cit.>).
The minimum between the first two peaks is defined
as the threshold for the CH boundary (see the green contours in Figure <ref>, a1–d1).
This scheme defines CH boundaries more objectively and is not influenced
by the variation of the coronal emission with the solar activity.
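A minimal sketch of this threshold extraction (Python; it assumes a reasonably smooth, well-binned histogram, which real EIT data may require smoothing to obtain):

import numpy as np
from scipy.signal import find_peaks

def ch_threshold(box_intensity, nbins=100):
    # Histogram the EIT 284 A intensities inside the box, locate the first
    # two peaks, and return the intensity at the minimum between them.
    hist, edges = np.histogram(box_intensity.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(hist)
    lo, hi = peaks[0], peaks[1]
    valley = lo + np.argmin(hist[lo:hi + 1])
    return centers[valley]

# ch_mask = box_intensity < ch_threshold(box_intensity)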
The definition of the AR wind relies on the magnetic field strength
at the photospheric level and corresponds to magnetically concentrated areas (MCAs). MCAs are areas enclosed by contour levels
set at 1.5–4 times the mean of the radial component
of the photospheric magnetic field.
We found that the morphology of an MCA is not sensitive to the contour level
within the above-mentioned range,
which means that the MCAs have a strong spatial gradient of the radial magnetic-field component.
by the solarmonitor [http://solarmonitor.org].
However, not all MCAs correspond to an AR numbered by NOAA.
As CH boundaries are defined quantitatively, we need only to consider
the regions outside CHs when we identify AR and QS regions.
An AR wind is defined when its footpoint is located inside an MCA that
is a numbered NOAA AR.
The QS wind is defined when the footpoints are located outside any MCA and CH.
The regions for which a footpoint is located in an MCA that is not numbered by
NOAA are labeled as “Undefined" in order to keep the selection of the three groups of solar wind “pure”.
The fractions of the undefined group range from ∼ 5% to ∼ 20%
for the years 2000 to 2008.
More details on the background work can be found in
<cit.>.
In the present study, the temporal resolution of the data used
for tracing the solar
wind back to the solar surface is enhanced to 12 hours.
The data for which the polarities are inconsistent at the two ends are removed
as done in <cit.> and <cit.>.
The statistical results for the solar wind parameters are almost the same as
those of <cit.>
in which the temporal resolution is 1-day.
More detailed analysis shows that
the footpoints stay in a particular region (CH, AR or QS)
for several days.
One example is shown in Figure 1 (a1), where a footpoint is located in
the same big equatorial hole for almost 7 days.
This means that a higher temporal resolution can only influence
the classification when a footpoint lies near the edge of a certain region.
The analysed data cover the time period from 2000 to 2008 which is further
divided into a solar maximum (2000–2001), decline (2002–2006),
and minimum phases (2007–2008)
based on the monthly sunspot number.
§ RESULTS AND DISCUSSION
§.§ Parameter distributions
To demonstrate the linkage of in-situ measured solar wind speeds,
O^7+/O^6+, and N_Fe/N_O to particular solar regions,
we describe in detail a randomly chosen example of a period of time with typical CH, AR, and QS wind.
Figure <ref> provides an illustration
of the classification scheme
of the solar wind (a–d) and
the solar wind parameters
for the time period from day 313 to day 361 of 2003 (e–g).
The footpoint of the solar wind parcel detected by ACE on day 316
is shown in Figure <ref>, a1 and a2, day 336 corresponds to b1 and b2,
day 341 to c1 and c2, day 346 to d1 and d2.
The solar wind was classified as CH, AR, QS and CH wind, respectively.
The periods from day 315 to day 322 and day 345 to day 349 show
two fast solar wind streams with average speeds of
∼700 and ∼800 km s^-1.
These two streams are separated by approximately 27 days
during which the streams
from two active regions, an equatorial coronal hole
and two quiet-Sun periods are identified.
The two fastest streams are associated with low O^7+/O^6+ charge state
ratios of
0.03±0.015 and 0.1±0.01, respectively.
As shown in Figure <ref>, d1, the fast stream
(from day 345 to day 349) originates from a big equatorial coronal hole.
Thus, the start (days 342.0–343.5) and end (days 350.5–352.0) periods
are considered as coronal-hole-boundary origin regions.
They have parameters characteristic of slow solar wind, i.e. the
O^7+/O^6+ charge state ratio is 0.14±0.07 and 0.13±0.03, and
N_Fe/N_O is 0.20±0.04 and 0.14±0.01, respectively.
In contrast to the two fastest CH streams, the AR wind has lower speeds of ∼500 and
∼400 km s^-1, and higher O^7+/O^6+ and N_Fe/N_O (O^7+/O^6+ of 0.20±0.05 and 0.29±0.05, and N_Fe/N_O of 0.15±0.01 and 0.14±0.02,
respectively).
The QS wind has parameters comparable to those of the AR wind: speeds of ∼450 km s^-1,
O^7+/O^6+ of 0.19±0.04 and 0.18±0.05, and N_Fe/N_O ratios of 0.10±0.01 and 0.12±0.02.
The speed, O^7+/O^6+, and N_Fe/N_O ratios for the wind that originates from the
equatorial CH (days 327.5–332.5) are
524±80 km s^-1, 0.10±0.04 and 0.12±0.01, respectively.
As the magnetic field configurations and plasma properties
are significantly different for the different types of source regions,
we investigate here whether there also exist differences
between the three sources of solar wind.
In Figure <ref>,
we show the normalized (to the maximum for each wind type) distributions
of the solar wind speed and of the O^7+/O^6+ and N_Fe/N_O ratios
for the solar wind as a whole (in order to compare with earlier studies) and the different source regions, i.e. AR, QS and CH.
As these parameters are expected to change with the solar activity
<cit.>,
the results are shown for three different phases of solar cycle 23.
From Figure <ref> (a1) we note that,
as expected, the average speed of the CH wind is higher than that of the AR and QS wind.
Further, we estimated the contribution of each of the source regions to the fast
and the slow solar wind (Table <ref>).
If the whole cycle is considered as one,
the CHs (39.3%) have just a ∼4% higher contribution to the FSW than the QS (35.5%), with ARs having the smallest input of 25.2%.
For a given solar cycle phase, however, the true contribution of each type of
solar wind becomes more evident.
During the maximum phase of the solar cycle, the ARs are the dominant FSW source
at 40.3% followed by the CHs at 34.1% and the QS at 25.6%.
At the time of the decline phase the CHs are prevailing at 48.2% and
the rest of the FSW input comes almost equally from the QS (24.0%) and ARs (27.8%).
The most dominant during the minimum is the QS at 64.3%.
With regard to the SSW, if all the whole cycle is examined,
the QS (43.7%) and ARs (42.9%) have an almost equal contribution.
At the maximum ARs are the main source of SSW at 58.8%,
while in the decline phase again the AR and QS have
almost the same contribution of ∼42%.
The predominant source of the SSW during the minimum of the solar activity
is the QS at 72.9%.
The fractional contribution of each wind for a given source region is shown in Table <ref>. Again, if all solar cycle phases are studied together, CHs produce ∼60% FSW (∼40% SSW) and ARs ∼77% SSW of their total solar wind input, while ∼29% of the total QS contribution goes into the FSW. More interesting is, however, how these fractions change during the different phases of the solar cycle activity. During the maximum, the CH SSW fraction rises to ∼60%, while the main contribution of ARs and QS (more than ∼80%) is to the SSW. In the decline phase the CHs produce more FSW (∼64%) than SSW (∼36%). Again the ARs and QS contribute predominantly to the SSW, at a bit more than ∼70% of their total wind input. During the minimum, the CH FSW contribution decreases slightly to ∼58% while the emission of SSW grows to ∼42%. In the minimum, the FSW contributions of the ARs and QS go further up to 34% and 37%, respectively, while their SSW contributions decrease.
It is important to point out that the above results only reflect the wind detected
by ACE which lies in the ecliptic plane.
Also we have to note that the heliospheric structures
(such as neutral line and heliospheric current sheet)
which may influence the statistical results were not removed during the investigated years.
Usually, those structures are associated with the boundaries between
different source regions of the solar wind
<cit.>.
Case and statistical studies have already shown that CHs are sources of
both the fast and slow wind
<cit.>.
<cit.>
compared the flow speed derived from Doppler dimming and
density observations by UVCS/SoHO (UltraViolet Coronagraph Spectrometer)
and suggested that the QS regions are an additional source
of the fast solar wind, thus questioning the
traditional belief
that the fast solar wind originates only from CHs.
There are many small regions (size of several arcsec across) in a typical QS region
that are similar in brightness to CH regions as observed in, for instance, coronal spectral lines Ne viii (T_max ∼ 6×10^5 K) and Mg x (T_max ∼ 10^6 K) (from SUMER), as well as EIT and TRACE coronal solar-disk images.
Thus, <cit.> speculated that those dark QS regions
may be the source regions of the fast solar wind suggested by
<cit.>.
In the present study, the solar wind is classified by source regions, which differs from classifications based on solar wind
parameters (such as the solar wind speed and the O^7+/O^6+ charge state ratio) <cit.>.
Bearing the uncertainties of our solar-wind classification scheme as discussed
in <cit.> (such as the reliability of the PFSS model and the simple ballistic treatment in the mapping procedure),
our results demonstrate the complexity of the fast and slow
solar wind origin.
A significant feature of the CH wind speeds is their two-peak
distribution in all three solar activity phases.
We suggest that the fast and slow distribution peaks
come from the CH center and the boundary regions that
include both CH open and QS/AR closed magnetic fields, respectively.
For instance, as shown in the wind-source identification example in
Figure <ref>,
the solar wind streams coming from CH core regions are faster,
while the CH boundary streams are slower.
Similar results are shown by
<cit.> and
<cit.>.
Several studies have suggested that structures like
coronal bright points and plumes which are located in
CH boundary regions may also be sources of the SSW
<cit.>.
This can be interpreted by both the WTD and RLO models.
In the WTD models, this means that the super radial expansion and curvature
degree of the open magnetic-field lines are smaller at the center of CHs than in the CH boundary regions.
RLO models suggest that loops that reconnect with open
magnetic-field lines have lower temperature at CH central regions than loops located in the boundary regions.
The two peak distribution of solar wind speeds may, therefore, reflect either
the super radial expansion and the curvature degree
of the open magnetic-field lines, or loop temperature.
As shown in Figure <ref> (b1), the CH solar-wind
speed distribution has a stronger peak at ∼400 km s^-1
and a second, weaker peak at ∼600 km s^-1 during the solar maximum.
During the decline and minimum phases,
the peak at ∼600 km s^-1 is stronger than the peak at lower speeds.
The two-peak distribution variation of the CH wind may be related to the
different average areas and physical properties (such as magnetic field strength)
of CHs during different solar activity phases <cit.>.
Another possible explanation is the difference of the heliospheric structure
during the different solar cycle phases.
During solar maximum, ACE may encounter longer periods of neutral-line or
heliospheric-current-sheet crossings, which usually correspond to the SSW;
thus the slow wind peak is higher.
The faster peak at ∼600 km s^-1 is stronger
because there are more
equatorial CHs during the decline phase
<cit.>.
As can be seen from the middle and bottom rows in
Figure <ref>,
in all three phases of the solar cycle discussed here the average values of O^7+/O^6+ and N_Fe/N_O are the highest for the AR wind
and the lowest for the CH wind, with the QS wind in between.
Consistent with
<cit.>,
our study demonstrates that the distributions of the
speed and of the O^7+/O^6+ and N_Fe/N_O ratios for the different types of solar wind
have a large overlap, and therefore,
it is hard to distinguish the source regions only by those wind parameters.
Further characteristics of the O^7+/O^6+ and N_Fe/N_O ratios are
discussed in Sections <ref> and <ref>.
§.§ Distributions in the space of speed vs O^7+/O^6+
Figure <ref> shows the scatter plot of O^7+/O^6+ versus the solar wind speed for the different source regions,
i.e. CH, AR, and QS.
For the purpose of being comparable to previous studies,
we show in column 1 of Figure <ref>
the relation between O^7+/O^6+ and the solar wind speeds
for all three regions summed together.
The distributions for each individual source regions
are shown in column 2, 3 and 4 of Figure <ref>.
The anti-correlation of the solar wind speed and O^7+/O^6+ remains valid for all three types of solar wind.
In the left column of Figure <ref>,
the linear fits of the distributions for the different source regions of
solar wind are overplotted,
showing that slopes and intercepts are nearly the same
for the different solar-wind source regions.
Quantitatively, the slopes are -0.0027, -0.0028 and -0.0028 for
CH, AR and QS wind, respectively, during solar maximum,
and -0.0026, -0.0025, -0.0030 at the decline phase,
and -0.0038, -0.0035, -0.0034 during the solar minimum.
The absolute values of the slopes for a certain type of
solar wind are almost constant during the solar maximum and decline phases
and larger during the solar minimum.
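These fits are straightforward to reproduce; a minimal sketch (Python, with illustrative array names):

import numpy as np

def fit_speed_vs_o7o6(speed, o7_o6, label):
    # One linear fit of O7+/O6+ against speed per source-region class;
    # e.g. the CH slope at solar maximum comes out near -0.0027 per km/s.
    for region in ('CH', 'AR', 'QS'):
        sel = (label == region)
        slope, intercept = np.polyfit(speed[sel], o7_o6[sel], 1)
        print(region, slope, intercept)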
The anti-correlation between the solar wind speed and O^7+/O^6+ was first reported
by observations from the International Sun-Earth Explorer 3 (ISEE 3)
and Ulysses <cit.>.
<cit.> suggested that this observational fact
supports the notion that the mechanism
suggested by the RLO models is the main physical process for heating and acceleration of the nascent solar wind.
Our statistical study shows that the anti-correlation
between solar wind speeds and O^7+/O^6+ is not only
present for the CH wind
<cit.>
but is also valid for the AR and QS wind.
This suggests that the mechanisms which account for the anti-correlation
between the solar wind speeds and the O^7+/O^6+ ratio are the same
in CH, QS, and AR wind.
Relating to the RLO models, this means that the correlation between
loop size and loop temperature
is similar in all three types of regions.
With regard to the WTD models, this suggests that the super radial expansion and
curvature degree of the open magnetic field lines
are proportional to the source region temperature.
The anti-correlation between the solar wind speed and O^7+/O^6+ can also be explained by
a scaling law, in which a higher coronal electron temperature
leads to more energy being lost through radiation for the SSW,
and vice versa, as suggested by
<cit.>, <cit.>, and <cit.>.
This scaling law is required for all
magnetically driven solar wind models.
Considering that the physical parameters and magnetic field configurations
for a certain type of region (CH, AR and QS)
have a large range,
our statistical results for the different types of solar wind provide a test
for this scaling law.
Our results demonstrate that the relationship between solar wind speed and O^7+/O^6+ is valid and almost the same
for the different types of solar wind,
which means that the scaling law holds in all three types of solar wind.
§.§ Distributions in the space of speed vs N_Fe/N_O
Figure <ref> presents the scatter plot of the
solar wind speed versus N_Fe/N_O,
again for all three regions together in column 1,
and for the individual source regions in columns 2, 3, and 4.
The distribution for all three regions together is similar to
earlier studies
<cit.>.
There are four important features concerning the relation between
N_Fe/N_O and the solar wind speeds.
First, the average value of N_Fe/N_O is the highest in the AR wind and the lowest
for the CH wind.
Second, the N_Fe/N_O range (0.06–0.40, FIP bias range 1–7) for the AR wind is wider than
for the CH wind (0.06–0.20, FIP bias range 1–3).
Third, similar to the wind as a whole,
the N_Fe/N_O ranges and their average values all decrease
with increasing solar wind speed in the different types of solar wind.
Fourth, the minimum value of N_Fe/N_O is similar (∼0.06, FIP bias ∼1)
for all source regions and it does not change with the speed of the solar wind.
The remote measurements in the solar corona given by
<cit.>, <cit.>, and
<cit.>
show that the FIP bias is higher
in AR regions (dominated by loops) than in CHs.
The FIP bias in CHs is between 1 and 1.5
<cit.>,
while in ARs
it can reach values larger than 4 in the case of older ARs
<cit.>.
The remote measurements of the solar corona
also show that the variation of FIP bias in ARs is larger than in CHs <cit.>.
Our results demonstrate that the N_Fe/N_O range and its average value are larger in the AR wind.
This means that the plasma stored in closed loops can escape into
interplanetary space,
and that this mass supply scenario is consistent with the RLO models.
The differences in N_Fe/N_O ranges and average values between the different
source regions of the solar wind can also be explained
qualitatively by the RLO models.
<cit.> reviewed the morphological features in the
upper atmosphere.
They showed that the small loops (10–20 arcsecs) are cooler (30 000 K to 0.7 MK)
and have shorter lifetimes (100 to 500 s)
in QS and CH regions.
There are also larger loops (tens to hundreds of arcsecs) which have
higher temperature (1.2–1.6 MK) and longer lifetime (1–2 days) in QS regions.
By reconstructing the magnetic field with the help
of a potential magnetic field model,
<cit.> suggest that the loops in CHs are
on average flatter and shorter than in the QS.
The ranges of loop sizes and temperatures are wider in AR regions,
including small (10–20 arcsec) cool (<0.1 MK) loops
<cit.>, as well as cool (0.1–1 MK),
warm (1–2 MK) and hot (>2 MK) loops with lengths ranging from
a few tens to a few hundreds of arcsec
<cit.>.
The above results mean that the ranges of temperatures and
loop sizes are wider in AR regions than in CHs.
Although this relation is not strictly proportional,
the lifetime of loops is connected to these parameters.
<cit.>
studied the FIP bias of four emerging active regions
and found that the FIP bias increases progressively after the emergence.
They concluded that the enrichment in low-FIP elements relates to
the age of coronal loops.
The AR wind may come from both new loops (with low FIP bias)
and old loops (with higher FIP bias).
Therefore, the N_Fe/N_O range and its average value in the AR wind are
wider (higher) than in the CH wind.
The fact that the N_Fe/N_O range and average value decrease with
increasing solar wind speed
in all three types of solar wind
(see Figure <ref>)
can also be explained qualitatively by the RLO models.
<cit.>
showed that the speed of the solar wind is inversely
related to the loop temperatures.
For the fast wind, the loops in the source regions are cooler and their
lifetime is shorter.
Thus, their FIP bias is lower and its distribution range is narrower.
In contrast, the slow wind is at the other extreme.
There are two possible interpretations for the fact that
the minimum value of N_Fe/N_O is similar (∼0.06, FIP bias ∼1)
for all source regions
and does not change with the speed of the solar wind.
First, in the RLO models,
some of the newborn loops (whether large or small)
reconnect with the open field lines,
producing solar wind with lower N_Fe/N_O (whether the wind speed is slow or fast).
Second, based on the WTD models the solar wind escapes directly
along the open magnetic field lines
and the FIP fractionation is restricted to
the top of the chromosphere
based on a model in which the FIP fractionation is caused
by the ponderomotive force
<cit.>.
Thus, the FIP bias is lower (∼1–2) on open field lines
compared with that in closed loops (∼2–7, see Tables 3 and 4 in <cit.>).
In all three types of regions, the solar wind which escapes directly from
open magnetic field
lines has a lower FIP bias, regardless of whether the speed is slow or fast.
§ SUMMARY AND CONCLUDING REMARKS
The main purpose of this work was to examine
the statistical properties of the solar wind
originating from different solar regions, i.e. CHs, ARs, and QS.
The solar wind speeds and the O^7+/O^6+ and N_Fe/N_O ratios were analyzed for different solar cycle phases
(maximum, decline, and minimum).
Our main results can be summarized as follows:
* We found in the present study that
the proportions of FSW and SSW are 59.3% and 40.7% for CH regions.
Fast solar wind is also found to emanate from ARs and the QS,
and the proportions of FSW from ARs and QS with respect to their total solar wind input are
13.7% and 17.0%, 25.8% and 28.4%, and 34.0% and 36.8%
during the solar maximum, decline, and minimum phases, respectively.
The distributions of the speed and of the O^7+/O^6+ and N_Fe/N_O ratios
for the different source regions of the solar wind have large overlaps,
indicating that it is hard to distinguish the source regions
only by those wind parameters.
* We found that the speed distribution of the CH wind
is bimodal in all three solar activity phases.
The peak of the fast wind from CHs for the period of time studied here is found to be at
∼600 km s^-1 and the slow wind peak is at ∼400 km s^-1.
The fast and slow wind components possibly come from
the center and boundary regions of CHs, respectively.
* This study demonstrates that the anti-correlation between the speed and the O^7+/O^6+ ratio remains valid in all three types of solar wind
and during the three studied solar cycle activity phases.
* We identify four features of the distribution of N_Fe/N_O
in the different solar wind types.
The average value of N_Fe/N_O is highest in the AR wind,
and lowest for the CH wind.
The average values and ranges of N_Fe/N_O all
decrease with the solar wind speed.
The N_Fe/N_O range in the AR wind is
larger (0.06–0.40) than in the CH wind (0.06–0.20).
The minimum value of N_Fe/N_O (∼0.06) does not change with
the variation of speed,
and it is similar for all source regions.
The statistical results indicate that
the solar wind streams that come from different source regions
are subject to similar constraints.
This suggests that the heating and acceleration mechanisms
of the nascent solar wind in coronal holes, active regions,
and quiet Sun have great similarities.
The two-peak distribution of the CH wind and
the anti-correlation between the speed and O^7+/O^6+ in all three types of solar wind can be explained qualitatively
by both the WTD and RLO models,
whereas the distribution features of N_Fe/N_O in the different source regions
of the solar wind can be explained more reasonably by the RLO models.
The authors thank the anonymous referee very much for the helpful comments and suggestions.
We thank the ACE SWICS, SWEPAM, and MAG instrument
teams and the ACE Science Center for providing the ACE data.
SoHO is a project of international cooperation
between ESA and NASA.
This research is supported by
the National Natural Science Foundation of China (41274178,41604147,
41404135, 41474150,
41274176, and 41474149).
H.F. thanks the Shandong provincial Natural Science Foundation (ZR2016DQ10).
Z.H. thanks the Shandong provincial Natural Science Foundation (ZR2014DQ006)
and the China Postdoctoral Science Foundation for financial supports.
|
http://arxiv.org/abs/1701.07972v2 | 20170127084152 | Spin-momentum locked polariton transport in the chiral strong coupling regime | [
"Thibault Chervy",
"Stefano Azzini",
"Etienne Lorchat",
"Shaojun Wang",
"Yuri Gorodetski",
"James A. Hutchison",
"Stéphane Berciaud",
"Thomas W. Ebbesen",
"Cyriaque Genet"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
ISIS & icFRC, Université de Strasbourg and CNRS, UMR 7006, F-67000 Strasbourg, France
ISIS & icFRC, Université de Strasbourg and CNRS, UMR 7006, F-67000 Strasbourg, France
Université de Strasbourg, CNRS, IPCMS, UMR 7504, F-67000 Strasbourg, France
Dutch Institute for Fundamental Energy Research, Eindhoven, The Netherlands
Mechanical Engineering and Mechatronics Department and Electrical Engineering and Electronics Departement, Ariel University, Ariel 40700, Israel
ISIS & icFRC, Université de Strasbourg and CNRS, UMR 7006, F-67000 Strasbourg, France
Université de Strasbourg, CNRS, IPCMS, UMR 7504, F-67000 Strasbourg, France
ISIS & icFRC, Université de Strasbourg and CNRS, UMR 7006, F-67000 Strasbourg, France
ISIS & icFRC, Université de Strasbourg and CNRS, UMR 7006, F-67000 Strasbourg, France
[Corresponding author:][email protected]
We demonstrate room temperature chiral strong coupling of valley excitons in a transition
metal dichalcogenide monolayer with spin-momentum locked surface plasmons.
In this regime, we measure spin-selective excitation of directional
flows of polaritons. Operating under strong light-matter coupling, our
platform yields robust intervalley contrasts and coherences, enabling us
to generate coherent superpositions of chiral
polaritons propagating in opposite directions. Our results reveal the rich and easy-to-implement possibilities offered by our
system in the context of chiral optical networks.
Spin-momentum locked polariton transport in the chiral strong coupling regime
Cyriaque Genet
Optical spin-orbit (OSO) interaction couples the polarization of a
light field with its propagation direction <cit.>. An important body of work has recently
described how OSO interactions can be exploited at the level of nano-optical
devices, involving dielectric <cit.> or plasmonic
architectures <cit.>, all able to confine the electromagnetic field
below the optical wavelength. Optical spin-momentum locking effects have been used to
spatially route the flow of surface plasmons depending on
the spin of the polarization of the excitation beam <cit.> or to
spatially route the flow of photoluminescence (PL) depending on
the spin of the polarization of the emitter transition
<cit.>. Such
directional coupling, also known as chiral coupling, has been
demonstrated in both the classical and in the quantum regimes <cit.>. Chiral coupling
opens new opportunities in the field of light-matter interactions
with the design of non-reciprocal devices,
ultrafast optical switches, non-destructive photon
detectors, and quantum memories and networks (see <cit.> and
references therein).
In this letter, we propose a new platform consisting of spin-polarized
valleys of a transition metal dichalcogenide (TMD) monolayer strongly
coupled to a plasmonic OSO mode, at room temperature (RT).
In this strong coupling regime, each spin-polarized valley exciton is
hybridized with a single plasmon mode of specific momentum.
The chiral nature of this interaction generates spin-momentum
locked polaritonic states, which we will refer to with the
portmanteau chiralitons.
A striking feature of our platform is its capacity to induce
RT robust valley contrasts, enabling the directional transport of chiralitons over
micron scale distances. Interestingly, the strong coupling regime also yields coherent intervalley dynamics whose contribution can still be observed in the steady-state. We hence demonstrate the generation of coherent superpositions (i.e. pairs) of chiralitons flowing in opposite directions.
These results, unexpected from the bare TMD monolayer RT
properties <cit.>, point towards the importance of the strong coupling regime where fast Rabi oscillations compete with TMD valley relaxation dynamics, as recently discussed <cit.>.
The small Bohr radii and reduced screening of monolayer TMD excitons
provide the extremely large oscillator strength required for light
matter interaction in the strong coupling regime, as already achieved
in Fabry-Pérot cavities <cit.>
and more recently in plasmonic resonators <cit.>.
In this context, Tungsten Disulfide (WS_2) naturally sets itself as
a perfect material for RT strong coupling <cit.> due to
its sharp and intense A-exciton absorption peak, well separated from
the higher energy B-exciton line (see Fig. <ref>(a)) <cit.>.
Moreover, the inversion symmetry breaking of the crystalline
order on a TMD monolayer,
combined with time-reversal symmetry, leads to spin-polarized
valley transitions at the K and K' points of the associated
Brillouin zone, as sketched in Fig. <ref>(b) <cit.>. This polarization property therefore makes atomically thin TMD semiconductors
very promising candidates with respect to the chiral aspect of the
coupling between the excitonic valleys and the plasmonic OSO modes <cit.>,
resulting in the strongly coupled energy diagram shown in Fig. <ref>(c).
Experimentally, our system, shown in
Fig. <ref>(a), consists of a mechanically exfoliated
monolayer of WS_2 covering a
plasmonic OSO hole array, with a 5 nm thick dielectric spacer (polymethyl
methacrylate).
The array, imaged in Fig. <ref>(b), is designed on a (x,y)
square lattice with a grating period Λ, and consists of rectangular
nano-apertures (160×90 nm^2) rotated stepwise along the
x-axis by an angle ϕ=π/6. The associated orbital period
6×Λ sets a rotation vector Ω=(ϕ /
Λ)ẑ, which combines with the spin σ of the
incident light into a geometric phase Φ_g=- Ωσ x
<cit.>. The gradient of this geometric phase
imparts a momentum k_g=-σ(ϕ / Λ)x̂
added to the matching condition on the array between the
plasmonic k_SP and incident in-plane k_in momenta:
k_SP = k_in + (2π/Λ)(n x̂ + m ŷ) + k_g.
This condition defines different (n,m) orders for the plasmonic
dispersions, which are transverse magnetic (TM) and transverse electric (TE)
polarized along the x and y axes
of the array respectively (see Fig. <ref>(b)).
The dispersive properties of such a resonator thus combine two modal responses: plasmon excitations directly determined on the square Bravais lattice of the grating for both
σ^+ and σ^- illuminations via (2π / Λ) (nx̂+mŷ), and spin-dependent plasmon OSO modes launched by the additional geometric momentum k_g.
It is important to
note that the contribution of the geometric phase impacts the TM
dispersions only. The period of our structure Λ=480 nm is
optimized to have n=+1 and n=-1 TM modes resonant with the
absorption energy of the A-exciton of WS_2 at 2.01 eV for
σ^+ and σ^- illuminations respectively. This
strict relation between n=±
1 and σ=± 1 is the OSO mechanism that breaks the left
vs. right symmetry of the modal response of the array, which in this
sense becomes chiral. Plasmon OSO modes are thus launched in
counter-propagating directions along the x-axis for
opposite spins σ of
the excitation light. In the case of a bare plasmonic OSO resonator,
this is clearly observed in Fig. <ref> (c).
We stress that similar arrangements of anisotropic apertures have previously been
demonstrated to allow for spin-dependent surface plasmon
launching <cit.>.
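As a quantitative illustration of this momentum budget (Python; the period and rotation step are those of the array described above, the rest is a sketch):

import numpy as np

LAM = 480e-9           # grating period [m]
PHI = np.pi / 6.0      # stepwise aperture rotation [rad]
G = 2 * np.pi / LAM    # Bravais grating momentum [rad/m]

def tm_in_plane_momentum(sigma, n, k_in=0.0):
    # x-component of the TM matching condition k_SP = k_in + n*G + k_g,
    # with the geometric momentum k_g = -sigma*PHI/LAM
    return k_in + n * G - sigma * PHI / LAM

# At normal incidence the two spins address counter-propagating plasmons
# of equal |k| (n=+1 for sigma=+1, n=-1 for sigma=-1):
print(tm_in_plane_momentum(+1, +1), tm_in_plane_momentum(-1, -1))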
As explained in the Supporting Information (Sec. A), the low transmission measured through our WS_2/plasmonic array
sample (Fig. <ref>(a)) enables us to obtain absorption spectra directly
from reflectivity spectra.
Angle-resolved white light absorption spectra are hence recorded and shown in Fig. <ref> (a) and (b) for left and
right circular polarizations. In each case, two strongly dispersing branches are
observed, corresponding to upper and lower chiralitonic
states. As detailed in the Supporting Information (Sec. A), a fit of a coupled dissipative oscillator model to the dispersions enables us to extract a branch splitting 2√((ħΩ_R)^2 - (ħγ_ex - ħΓ_OSO)^2) = 40 meV. With measured linewidths of the plasmonic mode ħΓ_OSO = 80 meV and of the excitonic mode ħγ_ex = 26 meV, this fitting yields a Rabi frequency
of ħΩ_R = 70 meV, close to our previous
observations on non-OSO plasmonic resonators <cit.>. We emphasize that this value clearly fulfills the strong coupling criterion with a figure-of-merit Ω_R^2/(γ_ex^2+Γ_OSO^2) = 0.69, larger than the 0.5 threshold that must be reached for strong coupling <cit.>. This demonstrates that our system does operate in the strong coupling regime, despite the relatively low visibility of the anti-crossing. The latter is only due (i) to spatial and spectral disorder which leaves, as always for collective systems, an inhomogeneous band of uncoupled states at the excitonic energy, and (ii) to the fact that an uncoupled Bravais plasmonic branch is always superimposed on the plasmonic OSO mode, leading to the asymmetric lineshapes clearly seen in Fig. <ref> (a) and (b). As shown in the Supporting Information (Sec. A), the anti-crossing can actually be fully resolved through a first-derivative analysis of our absorption spectra.
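For completeness, this figure of merit follows directly from the quoted values: Ω_R^2/(γ_ex^2+Γ_OSO^2) = 70^2/(26^2+80^2) = 4900/7076 ≈ 0.69, indeed above the 0.5 strong coupling threshold.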
In such strong coupling conditions,
the two dispersion diagrams also show a clear mirror symmetry breaking with respect to
the normal incidence axis (k_x=0) for the two opposite optical spins. This clearly
demonstrates the capability of our structure to act as a spin-momentum locked polariton
launcher. From
the extracted linewidth that gives the lifetime of the chiralitonic mode and the curvature of the dispersion relation that provides its group velocity, we can
estimate a chiraliton propagation length of the order of 4 μm, in
good agreement with the measured PL diffusion length presented in the Supporting Information (Sec. B).
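In equation form, the estimate just quoted reads L ≈ v_g τ, with the group velocity v_g given by the slope of the chiralitonic dispersion branch and the lifetime τ ≈ ħ/ΔE set by the measured linewidth ΔE.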
In view of chiral light-chiral matter interactions, we further
investigate the interplay between this
plasmonic chirality and the valley-contrasting chirality of the WS_2
monolayer. A first demonstration of such an interplay is found in the
resonant second harmonic (SH) response of the strongly coupled system.
Indeed, monolayer TMDs have been shown to give a high valley contrast in the
generation of a SH signal resonant with their A-excitons
<cit.>. As we show in the Supporting Information (Sec. G), such high SH valley contrasts are measured on a bare WS_2 monolayer. The optical selection rules for SH generation are opposite
to those in linear absorption since the process involves two
excitation photons, and are more robust since the SH process
is instantaneous.
The angle resolved resonant SH signals are shown in
Fig. <ref> (c) and (d) for right and left circularly polarized excitation. The SH signals are angularly exchanged when the spin of the excitation is reversed with a right vs. left contrast (ca. 20%) close to the one measured on
the reflectivity maps (ca. 15%). This unambiguously demonstrates the selective coupling of excitons in
one valley to surface plasmons propagating in one direction, thus
realizing valley-contrasting chiralitonic states with spins locked to
their propagation wavevectors:
|P^±_K,σ^+,-k_SP> = |g_K,1_σ^+,-k_SP> ± |e_K,0_σ^+,-k_SP>
|P^±_K',σ^-,+k_SP> = |g_K',1_σ^-,+k_SP> ± |e_K',0_σ^-,+k_SP>,
where e_i(g_i) corresponds to the presence (absence) of an
exciton in the valley i=(K,K') of WS_2, and 1_j(0_j) to 1 (0) plasmon
in the mode of polarization j=(σ^+,σ^-) and wavevector
± k_ SP.
The detailed features of the SH signal (crosscuts in Fig. <ref> (c) and (d)) reveal within the bandwidth of our pumping laser the contributions of both the uncoupled excitons and the upper chiraliton to the SH enhancement. The key observation, discussed in the Supporting Information (Sec. D), is the angular dependence of the main SH contribution. This contribution, shifted from the anticrossing region, is a feature that gives an additional proof of the strongly coupled nature of our system because it is determined by the excitonic Hopfield coefficient of the spin-locked chiralitonic state. In contrast, the residual SH signal related to the uncoupled (or weakly coupled) excitons simply follows the angular profile of the absorption spectra taken at 2 eV, thus observed over the anticrossing region. Resonant SH spectroscopy of our system therefore confirms the presence of the chiralitonic states, with the valley contrast
of WS_2 and the spin-locking property of the OSO plasmonic resonator
being imprinted on these new eigenstates of the system.
The spin-locking property of chiralitonic states revealed by these resonant SH measurements is, however, subject to different relaxation mechanisms during the dynamical evolution of the chiralitons. In particular, excitonic intervalley
scattering can erase valley contrast in WS_2 at RT
<cit.> -see below. In our configuration, this would transfer
chiraliton population from one valley to the other, generating, via optical spin-locking, a reverse flow that racemizes the chiraliton population. This picture, however, does not
account for the possibility of more robust valley contrasts in
strong coupling conditions, as recently reported with MoSe_2 in
Fabry-Pérot cavities <cit.>.
The chiralitonic flow is measured by performing angle resolved polarized PL
experiments, averaging the signal over the PL lifetime of ca. 200
ps (see Supporting Information, Sec. D and E). For these
experiments, the laser excitation
energy is chosen at 1.96 eV, slightly below the WS_2 band-gap. At this energy, the measured PL results
from a phonon-induced up-conversion process that minimizes intervalley scattering events <cit.>.
The difference between PL dispersions obtained with left and right
circularly polarized excitations is displayed in Fig. <ref> (a),
showing net flows of chiralitons with
spin-determined momenta. This is in agreement with the
differential white-light reflectivity map
R_σ^--R_σ^+ of Fig. <ref> (b). Considering that this map gives the sorting efficiency of our OSO resonator, such correlations in the PL imply that the effect of the initial spin-momentum determination of the chiralitons (see Fig. <ref> (e) and (f)) is still observed after 200 ps at RT.
After this PL lifetime, a net chiral flow
ℱ=I_σ^--I_σ^+ of ∼ 6% is extracted
from Fig. <ref> (a). This is the signature of a chiralitonic
valley polarization, in striking contrast with the absence of
valley polarization that we report for a bare WS_2 monolayer at RT
in the Supporting Information, Sec. G.
The extracted net flow is however limited by the
finite optical contrast 𝒞 of our OSO resonator, which we
measure at a 15% level from a cross-cut taken on
Fig. <ref> (b) at 1.98 eV. It is therefore possible to
infer that a chiralitonic valley contrast of
ℱ / 𝒞≃ 40% can be reached at RT for the strongly coupled WS_2 monolayer.
As mentioned above, we understand this surprisingly robust contrast by invoking the fact that under strong coupling conditions, valley relaxation is outweighed by the faster Rabi energy exchange between the exciton of each valley and the corresponding plasmonic OSO mode, as described in the Supporting Information (Sec. A). From the polaritonic point of view, the local dephasing and scattering processes at play on bare excitons -that erase valley contrasts on a bare WS_2 flake as observed in the Supporting Information (Sec. G)- are reduced by the delocalized nature of the chiralitonic state, a process akin to motional narrowing and recently observed on other polaritonic systems <cit.>.
As a consequence of this motional narrowing effect, such a strongly coupled system involving atomically thin crystals of
TMDs could then provide new ways to incorporate intervalley coherent
dynamics <cit.> into the
realm of polariton physics. To illustrate this, we now show
that two counter-propagating flows of chiralitons can evolve coherently. It is clear from Fig. <ref> (c) that within such a coherent superposition of counter-propagating chiralitons
|Ψ> = |P^±_K,σ^+,-k_ SP> + |P^±_K',σ^-,+k_ SP>
flow directions and spin polarizations become non-separable.
Intervalley coherence is expected to result in a non-zero
degree of linearly polarized
PL when excited by the same linear polarization. This can be monitored
by measuring the S_1=I_ TM-I_ TE
coefficient of the PL Stokes vector, where
I_ TM( TE) is the emitted PL intensity
analyzed in TM (TE) polarization.
This coefficient is displayed in the k_x-energy plane
in Fig. <ref>(c) for an incident
TM polarized excitation at 1.96 eV. Fig. <ref>(e) displays the
same coefficient under TE excitation.
A clear polarization anisotropy on the
chiraliton emission is observed for both TM and TE excitation
polarizations, both featuring the same symmetry along the k_x=0 axis
as the differential reflectivity dispersion map
R_ TM-R_ TE shown in
Fig. <ref>(d). As detailed in the
Supporting Information (Sec. F), the degree
of chiralitonic intervalley coherence can be directly quantified by the
difference (S_1^ out|_
TM-S_1^ out|_ TE)/2, which
measures the PL linear depolarization factor displayed (as m_11) in
Fig. <ref> (f). By this procedure, we retrieve a chiralitonic
intervalley coherence that varies between 5% and 8% depending on
k_x. Interestingly, these values that we reach at RT have magnitudes comparable to those reported on a bare WS_2 monolayer at 10 K <cit.>. This
unambiguously shows how such strongly coupled TMD systems can sustain
RT coherent dynamics robust enough to be observed despite the long exciton PL lifetimes and plasmonic propagation distances.
In summary, we demonstrate valley contrasting
spin-momentum locked chiralitonic states in an atomically thin TMD
semiconductor strongly coupled to a plasmonic OSO resonator. Likely, the observation of such contrasts even after 200 ps lifetimes is made possible by the unexpectedly robust RT coherences inherent to the strong coupling regime. Exploiting such robust coherences, we measure chiralitonic
flows that can evolve in superpositions over micron scale
distances.
Our results show that the combination of
OSO interactions with TMD valleytronics is an interesting path to
follow in order to explore and manipulate RT coherences in chiral quantum architectures <cit.>.
We thank David Hagenmüller for fruitful discussions.
This work was supported in part by the ANR Equipex “Union” (ANR-10-EQPX-52-01), ANR Grant (H2DH
ANR-15-CE24-0016), the Labex NIE projects
(ANR-11-LABX-0058-NIE) and USIAS within the Investissement d'Avenir
program ANR-10-IDEX-0002-02. Y. G. acknowledges support from the
Ministry of Science, Technology and Space, Israel. S. B. is a member
of the Institut Universitaire de France (IUF).
Author Contributions -
T. C. and S. A. contributed equally to this work.
§ SUPPORTING INFORMATION
§ A: LINEAR ABSORPTION DISPERSION ANALYSIS
Angle resolved absorption spectra
are obtained from the measured reflectivity of
the WS_2 flake on top of the plasmonic grating
R_sample with:
A = 1 - R_sample/R_substrate,
where R_substrate is the angle resolved reflectivity of
the optically thick (200 nm thickness) Au substrate.
The 1% max. transmission through the structure
can be safely neglected.
As we explain in the main text, the resulting dispersion spectra
are broadened by the contribution of different plasmonic modes as well
as coupled and uncoupled exciton populations.
In order to highlight the polaritonic contribution in the absorption spectra, we
calculate the first derivative of the reflectivity
dispersions d[
R_sample/R_substrate] / dE. The
derivative was approximated by interpolating the reflectivity spectra on
an equally spaced wavelength grid of step Δλ = 0.55 nm
and using the following finite difference expression
valid up to fourth order in the grid step:
dR/dλ(λ_0) ≃ [ 1/12 R(λ_-2) - 2/3 R(λ_-1) + 2/3 R(λ_+1) - 1/12 R(λ_+2) ] / Δλ,
where R(λ_n) is the reflectivity evaluated n steps away
from λ_0. The resulting first derivative reflectivity spectra were then converted to an
energy scale and plotted as dispersion diagrams in Fig. <ref> (a) and (b).
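For concreteness, the stencil above can be applied per wavelength column of the reflectivity maps; a minimal NumPy sketch (array names, shapes and the prior interpolation step are assumptions made for illustration):

import numpy as np

def derivative_map(R_sample, R_substrate, dl=0.55):
    # Normalized reflectivity on an equally spaced wavelength grid
    # (axis 0: wavelength, step dl in nm; axis 1: in-plane momentum k_x).
    R = R_sample / R_substrate
    # Fourth-order central finite difference, defined for rows 2..N-3.
    dR = (R[:-4] / 12 - 2 * R[1:-3] / 3
          + 2 * R[3:-1] / 3 - R[4:] / 12) / dl
    return dR  # convert the wavelength axis to energy before plotting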
In these first derivative reflectivity maps, the zero-crossing points correspond
to the peak positions of the modes, and the maxima and minima
indicate the inflection points of the reflectivity lineshapes.
At the excitonic asymptotes of the dispersion curves, where the
polaritonic linewidth is expected to match that of the bare WS_2
exciton, we read a linewidth of 26 meV from the maximum to
minimum energy difference of the derivative reflectivity maps.
This value is equal to the full width at half maximum (FWHM) ħγ_ exc that we measured
from the absorption spectrum of a bare WS_2 flake on a dielectric
substrate.
On the low energy plasmonic asymptotes, we clearly observe the effect
of the Bravais and OSO modes, partially overlapping in an asymmetric
broadening of the branches. In this situation, a measure of the mode half-widths can
be extracted from the (full) widths of the positive or negative regions of the first
differential reflectivity maps. This procedure yields an energy
half-width for the plasmonic modes of ħΓ_ OSO/2=40 meV. This width in energy
can be related to an in-plane momentum width of ca. 0.5
μm^-1 via the plasmonic group velocity v_G
= ∂ E/∂ k = 87
meV·μm, that we calculate from the branch curvature at 1.85 eV.
This in-plane momentum width results in a plasmonic propagation length
of about 4 μm. This value is in very good agreement with the measured PL
extension above the structure, as discussed in section B below, validating our estimation of the mode linewidth ħΓ_ OSO=80 meV.
The dispersive modes of the system can be modeled by a dipolar
Hamiltonian, where excitons in each valley are selectively
coupled to degenerated OSO plasmonic modes of opposite wavevectors
± k_ SP, as
depicted in Fig. 2 in the main text:
ℋ = ∑_k_x[ℋ_OSO(k_x) +
ℋ_ex +
ℋ_int(k_x)],
which consists of three different contributions:
ℋ_OSO(k_x) =
ħω_OSO(k_x)(a^†_k_xa_k_x +
a^†_-k_xa_-k_x),
ℋ_ex(k_x) =
ħω_ex(b^†_K-k_xb_K-k_x + b^†_K'+k_xb_K'+k_x),
ℋ_int(k_x) = ħ g(a^†_k_x
+ a_k_x)(b^†_K'+k_x + b_K'+k_x)
+ ħ g(a^†_-k_x
+ a_-k_x)(b^†_K-k_x + b_K-k_x),
where a(a^†) are the lowering (raising) operators of the
OSO plasmonic modes of energy ħω_OSO(k_x),
b(b^†) are the lowering (raising) operators of the
exciton fields of energy ħω_ex, and g = Ω_R/2 is the
light-matter coupling frequency. In this hamiltonian the chiral
light-chiral matter interaction is effectively accounted for by
coupling excitons of the valley K'(K) to plasmons propagating with
wavevectors k_x(-k_x). Moreover, the dispersion of
the exciton energy can be neglected on the scale of the plasmonic wavevector k_ SP.
Using the Hopfield procedure <cit.>, we can diagonalize
the total Hamiltonian by
finding polaritonic normal mode operators P^±_K(K') associated with each
valley, and obeying the following equation of motion at each k_x
[P^±_K(K'),ℋ] = ħω_±
P^±_K(K'),
with ω_±>0. In the rotating wave approximation (RWA), justified
here by the moderate coupling strength (see below), P^j_λ≃α^j_λa +β^j_λb, j∈{+,-} and λ∈{K,K'}.
The plasmonic and excitonic Hopfield coefficients α^j_λ(k_x)
and β^j_λ(k_x) are obtained by diagonalizing the following
matrix for every k_x
(
[ ħω_OSO iħΩ_ R 0 0; -iħΩ_ R ħω_ex 0 0; 0 0 -ħω_OSO iħΩ_ R; 0 0 -iħΩ_ R -ħω_ex ]).
The dynamics of the coupled system will be ruled by the
competition between the coherent evolution described by the Hamiltonian (<ref>) and the
different dissipative processes contributing to the uncoupled modes
linewidths. This can be taken into account by including the measured
linewidths as imaginary parts in the excitonic and plasmonic mode
energies (Weisskopf-Wigner approach). Under such conditions, we evaluate the eigenvalues ω_± of the matrix (<ref>). The real parts of ω_± are then fitted to the maxima of the angle resolved reflectivity maps presented in Fig. 3 in the main text, or to the zeros of the first derivative reflectivity maps
shown here in Fig. <ref> (a) and (b). Both procedures give the same best fit that yields the polaritonic branch splitting as <cit.>
ħ(ω_+-ω_-) = 2√((ħΩ_R)^2 - ( ħγ_ex- ħΓ_OSO)^2)
which equals 40 meV. From the determination (see above) of the FWHM of the excitonic ħγ_ex and plasmonic ħΓ_OSO modes, we evaluate a Rabi energy ħΩ_R=70 meV.
These values give a ratio
(ħΩ_R)^2 / ((ħγ_ex)^2 +
(ħΓ_OSO)^2) = 0.69,
above the 0.5 threshold which is the strong coupling criterion -see <cit.> for a detailed discussion.
This figure-of-merit of 0.69>0.5 therefore clearly demonstrates that our system is operating in the strong coupling regime.
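As a numerical cross-check, one can diagonalize the one-valley 2x2 block of the matrix above with the measured linewidths included as imaginary parts; the following sketch (the bare plasmon dispersion E_pl(k_x) is a placeholder to be taken from the data) reproduces the figure-of-merit quoted in the main text:

import numpy as np

hbar_Omega_R = 0.070  # fitted Rabi energy (eV)
gamma_ex = 0.026      # excitonic FWHM (eV)
Gamma_OSO = 0.080     # plasmonic FWHM (eV)

def branch_energies(E_pl, E_ex=2.0):
    # One-valley block with Weisskopf-Wigner imaginary parts added.
    H = np.array([[E_pl - 0.5j * Gamma_OSO, 1j * hbar_Omega_R],
                  [-1j * hbar_Omega_R, E_ex - 0.5j * gamma_ex]])
    return np.sort(np.linalg.eigvals(H).real)  # lower/upper branches

fom = hbar_Omega_R**2 / (gamma_ex**2 + Gamma_OSO**2)
print(round(fom, 2))  # 0.69 > 0.5: strong coupling criterion fulfilled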
Interestingly, the intervalley scattering rate ħγ_KK' does not
enter in this strong coupling criterion. Indeed, such
events corresponding to an inversion of the valley indices
K↔ K' do not contribute to
the measured excitonic linewidth, and are thus not detrimental to the
observation of strong coupling. In the ħΩ_R ≪ħγ_KK' limit, the Hamiltonian (<ref>) would reduce to the usual RWA
Hamiltonian and the valley contrasting chiralitonic behavior would be lost.
The results gathered in Fig. 4 in the main text clearly show that this is not the case for our system,
allowing us to conclude that the Rabi frequency overcomes such
intervalley relaxation rates. Remarkably, strong coupling thus
allows us to put an upper bound on those rates, in close relation with <cit.>.
§ B: CHIRALITON DIFFUSION LENGTH
The diffusion length of chiralitons can be estimated by measuring the
extent of their photoluminescence (PL) under a tightly focused
excitation. To measure this extent, we excite a part of a WS_2
monolayer located above the plasmonic hole array (Fig. S<ref>(a)
and (b)). This measurement is done on a home-built PL microscope,
using a 100× microscope
objective of 0.9 numerical aperture and exciting the PL with a HeNe laser at
1.96 eV, slightly below the exciton band-gap. A diffraction-limited
spot of 430 nm half-width is obtained (Fig. S<ref>(c)) by bringing the
sample in the focal plane of the
microscope while imaging the laser beam on a cooled CCD camera.
The PL is collected under excitation at 10 μW of
optical power, and is filtered from the scattered laser light by a
high-energy-pass filter. The resulting PL image is shown in
Fig. S<ref>(d), clearly demonstrating the propagating character of
the emitting chiralitons. The logarithmic cross-cuts (red curves in (c)
and (d)) reveal a propagation length of several microns.
We note that the PL of the WS_2 monolayer also extends further away
from the flake above the OSO array (Fig. S<ref>(b)). From this we extract a plasmon
1/e decay length of ∼ 3.4 μm. This
value nicely agrees with that obtained in section A from the linear dispersion
analysis.
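The 1/e length quoted here follows from a linear fit of the logarithmic cross-cut; schematically (the data arrays are assumptions):

import numpy as np

def decay_length(x_um, pl_counts):
    # log(PL) vs distance is linear for an exponential tail;
    # the fitted slope equals -1/L, with L the 1/e decay length.
    slope, _ = np.polyfit(x_um, np.log(pl_counts), 1)
    return -1.0 / slope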
§ C: RESONANT SECOND HARMONIC GENERATION ON A WS_2 MONOLAYER
As discussed in the main text, TMD monolayers have recently been
shown to give a high valley
contrast in the generation of a second harmonic (SH) signal resonant
with their A-excitons. We obtain a similar result when measuring a part
of the WS_2 monolayer sitting above
the bare metallic surface, i.e. aside from the plasmonic hole array.
In Fig. S<ref> we show the SH signal obtained in left and right
circular polarization for an incident femto-second pump beam (120 fs
pulse duration, 1 kHz repetition rate at 1.01 eV) in (a) left and (b)
right circular polarization.
This result confirms that the SH signal polarization is a good
observable of the valley degree of freedom of the WS_2 monolayer,
with a contrast reaching ca. 80%.
In Fig. 3 (c) and (d) in the main text, we show how this valley contrast is
imprinted on the chiralitonic states.
§ D: RESONANT SH GENERATION IN THE STRONG COUPLING REGIME
The resonant SH signal writes as <cit.>:
I(2ω)∝ (ρ_ωI_ω)^2· |χ^(2)(2ω) |^2 ·ρ_2ω
where I_ω is the pump intensity, χ^(2)(2ω) the second order susceptibility, ρ_ω the optical mode density of the resonator related to the fraction of the pump intensity that reaches WS_2 and ρ_2ω the optical mode density of the resonator that determines the fraction of SH intensity outcoupled into the far field. While ρ_ω can safely be assumed to be non-dispersive at ħω = 1 eV, the dispersive nature of the resonator leads to a ρ_2ω that depends strongly on the in-plane wave vector k_x. The optical mode density being proportional to the absorption, ρ_2ω(k_x) is given by the angular absorption spectrum crosscut at 2ħω = 2 eV, displayed in the lower panels of Fig. 3 (e) and (f) in the main text.
Under the same approximations of <cit.>, the resonant second order susceptibility can be written as
χ^(2)(2ω)=α^(1)(2ω)∑_nK_eng/ω_ng-ω
where ∑_n sums over virtual electronic transitions, and K_eng=⟨ e| p|g⟩⊗⟨ e| p|n⟩⊗⟨ n| p|g⟩ is a third-rank tensor built on the electronic dipole moments p taken between the e,n,g states. The prefactor α^(1)(2ω) is the linear polarizability of the system at frequency 2ω, yielding resonantly enhanced SH signal at every allowed |g⟩→ |e⟩ electronic transitions of the system.
With two populations of uncoupled and strongly coupled WS_2 excitons, the SH signal is therefore expected to be resonantly enhanced when the SH frequency matches the transition frequency of either uncoupled or strongly coupled excitons. When the excited state is an uncoupled exciton associated with a transition energy fixed at frequency 2ħω=2 eV for all angles, the tensor K_eng is non-dispersive and the SH signal is simply determined and angularly distributed from ρ_2ω(k_x).
When the excited state is a strongly coupled exciton, the resonant second order susceptibility becomes dispersive with χ^(2)(2ω,k_x). This is due to the fact that the tensor K_eng incorporates the excitonic Hopfield coefficient of the polaritonic state involved in the electronic transition |g⟩→ |e⟩ when the excited state is a polaritonic state. In our experimental conditions with a pump frequency at 1 eV, this excited state is the upper polaritonic state with |e⟩≡ |P^+_K(K'),σ^±,∓ k_ SP⟩ and therefore K_eng∝ [β_K(K')^+ (k_x)]^2. This dispersive excitonic Hopfield coefficient is evaluated by the procedure described in details above, Sec. A. The profile of the SH signal then follows the product between the optical mode density ρ_2ω(k_x) and |χ^(2)(2ω,k_x)|^2∝[β_K(K')^+ (k_x)]^4.
These two contributions are perfectly resolved in the SH data displayed in Fig. 3 (e) and (f) in the main text. The angular distribution of the main SH signal clearly departs from ρ_2ω(k_x), revealing the dispersive influence of β_K(K')^+ (k_x). This is perfectly seen on the crosscuts displayed in the lower panels of Figs. 3 (e) and (f) in the main text. This feature is thus an indisputable proof of the existence of chiralitonic states, i.e. of the strongly coupled nature of our system.
A residual SH signal is also measured which corresponds to the contribution of uncoupled excitons. This residual signal is measured in particular within the anticrossing region, as expected from the angular profile of ρ_2ω(k_x) shown in the lower panels of Figs. 3 (e) and (f) in the main text.
Finally, the angular features of the SH signal exchanged when the spin of the pump laser is flipped from σ^+ to σ^- reveal how valley contrasts have been transferred to the polariton states. These features therefore demonstrate the chiral nature of the strong coupling regime, i.e. the existence of genuine chiralitons.
§ E: PL LIFETIME MEASUREMENT ON THE STRONGLY COUPLED SYSTEM
The PL lifetime of the strongly coupled system is measured by
time-correlated single photon counting (TCSPC) under pico-second pulsed
excitation (instrument response time 120 ps, 20 MHz repetition rate at 1.94 eV).
The arrival time histogram of PL photons, when measuring a part of the
WS_2 monolayer located above the plasmonic hole array, gives the decay dynamic shown
in Fig. S<ref>(a). On this figure we also display the PL decay of a
reference WS_2 monolayer exfoliated on a dielectric substrate
(polydimethylsiloxane), as well as the instrument response function
(IRF) measured by recording the excitation pulse photons scattered by a gold
film. Following the procedure detailed in <cit.>, we
define the calculated decay times τ_ calc as the
area under the decay curves (corrected for their backgrounds) divided
by their peak values. This yields a calculated IRF time constant
τ_calc^IRF = 157 ps, and calculated PL decay constants τ_calc^ref = 1.39 ns and τ_calc^sample = 384 ps for the reference bare flake and the strongly coupled sample respectively. The real decay time constants τ_real corresponding to the calculated ones can then be estimated by convoluting different monoexponential decays with the measured IRF, computing the corresponding τ_calc and interpolating this calibration curve (Fig. S<ref>(b)) for the values of τ_calc^ref and τ_calc^sample. This results in τ_real^ref = 1.06 ns and τ_real^sample = 192 ps.
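A schematic implementation of this calibration procedure (the background handling is simplified; decay traces and IRF are assumed to share the same time base):

import numpy as np

def tau_calc(decay, dt):
    # 'Calculated' decay time: background-corrected area over peak value.
    d = decay - decay.min()
    return d.sum() * dt / d.max()

def tau_real(tau_measured, irf, dt, grid):
    # Convolve monoexponentials with the measured IRF, tabulate the
    # resulting tau_calc, and invert the calibration by interpolation.
    t = np.arange(len(irf)) * dt
    taus_calc = [tau_calc(np.convolve(irf, np.exp(-t / tr))[:len(t)], dt)
                 for tr in grid]  # grid: increasing trial lifetimes
    return np.interp(tau_measured, taus_calc, grid)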
While the long lifetime (ns) of the bare WS_2 exciton has been attributed to the trapping of the exciton outside the light-cone at RT through phonon scattering <cit.>, Fig. S<ref> simply shows that the exciton lifetime is reduced by the presence of the metal or the OSO resonator. Clearly, the competition induced under strong coupling conditions between the intra and inter valley relaxation rates and the Rabi oscillations must act under shorter time scales that are not resolved here.
§ F: OPTICAL SETUP
The optical setup used for PL polarimetry
experiments is shown in Fig. S<ref>. The WS_2
monolayer is excited by a continuous-wave HeNe laser at 1.96 eV (632 nm),
slightly below the direct band-gap of the atomic crystal, in order to
reduce phonon-induced inter-valley scattering effects at room
temperature. The pumping laser beam is filtered by a bandpass filter
(BPF) and its polarization state is controlled by a set of
polarization optics: a linear polarizer (LP), a half-wave plate
(HWP) and a quarter-wave plate (QWP). The beam is focused onto
the sample surface at oblique incidence angle by a microscope
objective, to a typical spot size of 100 μm^2. This corresponds to
a typical flux of 10 W·cm^-2. In such conditions of
irradiation, the PL only comes from the A-exciton. The emitted
PL signal is collected by a high
numerical aperture objective, and its polarization state is analyzed
by another set of broadband polarization optics (HWP, QWP, LP). A
short-wavelength-pass (SWP) tunable filter is placed on the optical path to block the scattered laser light. Adjustable slits (AS) placed at the image plane of the tube lens (TL) allow us to spatially select the PL signal coming only
from a desired area of the sample, whose Fourier-space (or real space) spectral
content can be imaged onto the entrance slits of the spectrometer by a
Fourier-space lens (FSL), or adding a real-space lens (RSL). The resulting image is recorded by a cooled CCD Si camera.
§ G: VALLEY CONTRAST MEASUREMENTS ON A BARE WS_2 MONOLAYER
The valley contrast ρ^± of a bare WS_2 monolayer exfoliated on a
dielectric substrate (polydimethylsiloxane) is computed from the
measured room temperature PL spectra obtained for left and right circular excitations,
analysed in the circular basis by a combination of a
quarter-wave plate and a Wollaston prism:
ρ^± = I_σ^±(σ^+) - I_σ^±(σ^-)/I_σ^±(σ^+) + I_σ^±(σ^-),
where I_j(l) is the measured PL spectrum for a
j=(σ^+,σ^-) polarized excitation and a
l=(σ^+,σ^-) polarized analysis.
A typical emission spectrum (I_σ^-(σ^-)) is shown in
Fig. S<ref>(a) and the valley contrasts ρ^± are displayed
in Fig. S<ref>(b). As discussed in the main text, this emission
spectrum consists of a phonon-induced up-converted PL <cit.>.
Clearly, there is no difference in the
I_j(l) spectra, hence no valley polarization at room temperature on
the bare WS_2 monolayer. These results are in striking contrast to those
reported in the main text for the strongly coupled system, under
similar excitation conditions. Note also
that the absence of valley contrast on our bare WS_2 monolayer
differs from the results of <cit.> reported however on WS_2
grown by chemical vapor deposition.
§ H: ANGLE-RESOLVED STOKES VECTOR POLARIMETRY
The optical setup shown in Fig. S<ref> is used to measure the
angle-resolved PL spectra for different combinations of excitation and
detection polarizations. Such measurements allow us to retrieve the
coefficients of the Mueller matrix ℳ of the system, characterizing how the
polarization state of the excitation beam affects the polarization
state of the chiralitons PL. As discussed in the main text, the
spin-momentum locking mechanism of our chiralitonic system
relates such PL polarization states to
specific chiraliton dynamics. An incident excitation in a given polarization state
is defined by a Stokes vector S^ in, on which
the matrix ℳ acts
to yield an output PL Stokes vector S^ out:
S^ out = (
[ I; I_V - I_H; I_45 - I_-45; I_σ^+ - I_σ^- ])_ out = ℳ(
[ I_0; I_V - I_H; I_45 - I_-45; I_σ^+ - I_σ^- ])_ in,
where I (I_0) is the emitted (incident) intensity, I_V - I_H
is the relative intensity in vertical and horizontal polarizations, I_45 - I_-45
is the relative intensity in +45^o and -45^o polarizations and
I_σ^+ - I_σ^- is the relative intensity in σ^+
and σ^- polarizations.
We recall that for our specific alignment of the OSO resonator with
respect to the slits of the spectrometer, the angle-resolved PL
spectra in V and H polarizations correspond to
transverse-magnetic (TM) and transverse-electric (TE) dispersions
respectively (see Fig. 2 (b) in the main text).
Intervalley chiraliton coherences, revealed by a non-zero
degree of linear polarization in the PL upon the same linear
excitation, are then measured by the S_1 = I_V - I_H coefficient of the PL
output Stokes vector. This coefficient is obtained
by analysing the PL in the linear basis, giving an angle-resolved PL
intensity (S_0^ out +(-) S_1^ out)/2,
for TM (TE) analysis. In order to obtain the polarization
characteristics of the chiralitons, we measure the four possible
combinations of excitation and detection polarization in the linear
basis:
I_ TM/TM = (m_00 + m_01 + m_10 + m_11)/2
I_ TM/TE = (m_00 + m_01 - m_10 - m_11)/2
I_ TE/TM = (m_00 - m_01 + m_10 - m_11)/2
I_ TE/TE = (m_00 - m_01 - m_10 + m_11)/2,
where I_p/a is the angle-frequency resolved intensity measured for
a pump polarization p=(TE,TM) and analysed in
a=(TE,TM) polarization, and m_i,j are the coefficients
of the 4x4 matrix ℳ. By solving this linear system of
equations, we obtain the first quadrant of the Mueller matrix: m_00,
m_01, m_10 and m_11.
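Solving this linear system amounts to simple pixel-wise sums and differences of the four measured maps; a sketch (each map is a 2D array over the k_x-energy plane):

def mueller_quadrant(I_TMTM, I_TMTE, I_TETM, I_TETE):
    # I_pa: pump polarization p, analysis polarization a.
    m00 = (I_TMTM + I_TMTE + I_TETM + I_TETE) / 2
    m01 = (I_TMTM + I_TMTE - I_TETM - I_TETE) / 2
    m10 = (I_TMTM - I_TMTE + I_TETM - I_TETE) / 2
    m11 = (I_TMTM - I_TMTE - I_TETM + I_TETE) / 2
    return m00, m01, m10, m11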
The S_1^ out|_ TM coefficient of the output Stokes vector
for a TM excitation is then directly given by m_10+m_11 as
can be seen from (<ref>) by setting I_V=1, I_H=0 and all
the other input Stokes coefficients to zero. This quantity,
normalized to S_0^ out, is
displayed in the k_x-energy plane in Fig. 4 (c) in the main text. Similarly, the
S_1^ out|_ TE coefficient is given by
m_10-m_11, which is the quantity displayed in Fig. 4 (d) in the main text.
As the dispersion of the OSO resonator is different for TE and TM
polarizations, the pixel-to-pixel operations performed to obtain
S_1^ out do not directly yield the chiraliton
inter-valley contrast. In particular, the observation of negative
value regions in S_1^ out|_ TM only reveals that the
part of the chiraliton population that lost inter-valley
coherence is dominating the total PL in the region of the dispersion where
the TE mode dominates over the TM mode (compare Fig. 4(c) and (e)). It
does not correspond to genuine anti-correlation of the chiraliton PL polarization
with respect to the pump polarization. To correct for such dispersive
effects and obtain the degree of chiraliton intervalley coherence, the
appropriate quantity is (S_1^ out|_ TM-S_1^ out|_ TE)/(2S_0^ out) = m_11,
resolved in the k_x-energy plane in
Fig. 4 (f) in the main text. This quantity can also be refered to as
a chiraliton linear depolarization factor.
For these polarimetry measurements, the base-line noise was determined by measuring the Mueller matrix associated with an empty setup, which is expected to be proportional to the identity matrix. With polarizer extinction coefficients smaller than 0.1%, white light (small) intensity fluctuations, and positioning errors of the polarization optics, we reach standard deviations from the identity matrix of the order of 0.4%. This corresponds to a base-line noise valid for all the polarimetry measurements presented in the main text. The noise level seen in Fig. 4 in the main text is thus mostly due to fluctuations in the WS_2 PL intensity.
On a conjecture of Sokal concerning roots of the independence polynomial

Han Peters, Guus Regts

January 27, 2017
A conjecture of Sokal <cit.> regarding the domain of non-vanishing for independence polynomials of graphs, states that given any natural number Δ≥ 3, there exists a neighborhood in ℂ of the interval [0, (Δ-1)^Δ-1/(Δ-2)^Δ) on which the independence polynomial of any graph with maximum degree at most Δ does not vanish. We show here that Sokal's Conjecture holds, as well as a multivariate version, and prove optimality for the domain of non-vanishing. An important step is to translate the setting to the language of complex dynamical systems.
Keywords: Independence polynomial, hardcore model, complex dynamics, roots, approximation algorithms.
§ INTRODUCTION
For a graph G=(V,E) and λ=(λ_v)_v∈ V∈^V, the multivariate independence polynomial, is defined as
Z_G(λ):=∑_I⊆ V
independent∏_v∈ Iλ_v.
We recall that a set I⊆ V is called independent if it does not span any edges of G.
The univariate independence polynomial, which we also denote by Z_G(λ), is obtained from the multivariate independence polynomial by plugging in λ_v=λ for all v∈ V.
In statistical physics the univariate independence polynomial is known as the partition function of the hardcore model. When λ=1, Z_G(λ) equals the number of independent sets in the graph G.
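For small graphs the definition can be evaluated directly by enumeration; the following Python sketch (exponential time, intended only as a sanity check against the recursions used below):

from itertools import combinations

def Z(G, lam):
    # G: dict mapping each vertex to its set of neighbours;
    # lam: dict of (possibly complex) vertex weights.
    total = 0
    V = list(G)
    for r in range(len(V) + 1):
        for I in combinations(V, r):
            S = set(I)
            if all(G[v].isdisjoint(S) for v in I):  # I is independent
                p = 1
                for v in I:
                    p *= lam[v]
                total += p
    return total

# Example: the path on 3 vertices has Z = 1 + 3*lam + lam**2.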
Motivated by applications in statistical physics Sokal <cit.> asked about domains of the complex plane where the independence polynomial does not vanish.
Just below Question 2.4 in <cit.>, Sokal conjectures: “there is a complex domain D_Δ containing at least the interval 0≤λ<1/(Δ-1) of the real axis — and possibly even the interval 0≤λ<λ_Δ:=(Δ-1)^Δ-1/(Δ-2)^Δ — on which Z_G(λ) does not vanish for all graphs of maximum degree at most Δ".
In this paper we confirm the strong form of his conjecture for the univariate independence polynomial.
In Section <ref> we will prove the following result:
Let Δ∈ℕ with Δ≥ 3. Then there exists a complex domain D_Δ containing the interval 0≤λ<λ_Δ such that for any graph G=(V,E) of maximum degree at most Δ and any λ∈ D_Δ, we have that Z_G( λ)≠ 0.
If we allow ourselves an epsilon bit of room, then the same result also holds for multivariate independence polynomial. This is the contents of Theorem <ref> in Section <ref>. We show in Appendix <ref> that the literal statement of Theorem <ref> does not hold in the multivariate setting.
It follows from nontrivial results in complex dynamical systems that the bound in Theorem <ref> is in fact optimal, in light of the following:
Let Δ∈ℕ with Δ≥ 3. Then there exist λ∈ℂ arbitrarily close to λ_Δ for which there exists a graph G of maximum degree Δ with Z_G(λ)=0.
This result is a direct consequence of Proposition <ref> in Subsection <ref>.
We discuss the underlying results from the theory of complex dynamical systems in Appendix <ref>.
Other results for the nonvanishing of the independence polynomial include a result of Shearer <cit.> that says that for any graph G=(V,E) of maximum degree at most Δ and any λ such that for each v∈ V, |λ_v|≤(Δ-1)^Δ-1/Δ^Δ one has Z_G(λ)≠ 0. See <cit.> for a slight improvement and extensions.
Moreover, Chudnovsky and Seymour <cit.> proved that the univariate independence polynomial of a claw-free graph (a graph G is called claw-free if it does not contain four vertices that induce a tree with three leaves), has all its roots on the negative real axis.
§.§.§ Motivation
Another motivation for Theorem <ref> comes from the design of efficient approximation algorithms for (combinatorial) partition functions.
In <cit.> Weitz showed that there is a (deterministic) fully polynomial time approximation scheme (FPTAS) for computing Z_G(λ) for any 0≤λ<λ_Δ for any graph of maximum degree at most Δ. His method is often called the correlation decay method and has subsequently been used and modified to design many other FPTAS's for several other types of partition functions; see e.g. <cit.>.
More recently, Barvinok initiated a line of research that led to quasi-polynomial time approximation algorithms for several types of partition functions and graph polynomials; see e.g. <cit.> and Barvinok's recent book <cit.>.
This approach is based on Taylor approximations of the log of the partition function/graph polynomial, and allows to give good approximations in regions of the complex plane where the partition function/polynomial does not vanish.
In his recent book <cit.>, Barvinok refers to this approach as the interpolation method.
Patel and the second author <cit.> recently showed that the interpolation method in fact yields polynomial time approximation algorithms for these partition functions/graph polynomials when restricted to bounded degree graphs.
In combination with the results in Section 4.2 from <cit.>, Theorem <ref> immediately implies that the interpolation methods yields a polynomial time approximation algorithm for computing the independence polynomial at any fixed 0≤λ<λ_Δ on graphs of maximum degree at most Δ, thereby matching Weitz's result.
In particular, Theorem <ref> gives evidence for the usefulness of the interpolation method.
§.§ Preliminaries
We collect some preliminaries and notational conventions here.
Graphs may be assumed to be simple, as vertices with loops attached to them can be removed from the graph and parallel edges can be replaced by single edges without affecting the independence polynomial.
Let G=(V,E) be a graph. For a subset U⊆ V we denote the graph induced by U by G[U].
For U⊂ V we denote the graph induced by V∖ U by G∖ U; in case U={u} we just write G-u.
For a vertex v∈ V we denote by N[v]:={u∈ V|{u,v}∈ E}∪{v} the closed neighborhood of v.
The maximum degree of G is the maximum number of neighbors of a vertex over all vertices of G. This is denoted by Δ(G).
For Δ∈ and k∈ we denote by T_Δ,k the rooted tree, recursively defined as follows: for k=0, T_Δ,0 consists of a single vertex; for k>0, T_Δ,k consists of the root vertex, which is connected to the Δ-1 root vertices of Δ-1 disjoint copies of T_Δ,k-1.
We will sometimes, abusing terminology, refer to the T_Δ,k as regular trees.
Note that the maximum degree of T_Δ,k equals Δ whenever k≥ 2 and equals Δ-1 when k=1.
Organization
The remainder of this paper is organised as follows. In the next section we translate the setting to the language of complex dynamical systems and we prove another non-vanishing result for the multivariate independence polynomial, cf. Theorem <ref>.
Section <ref> contains technical, yet elementary, derivations needed for the proof of our main result, which is given in Section <ref>. We conclude with some questions in Section <ref>.
In the appendix we discuss results from complex dynamical systems theory needed to prove Proposition <ref>.
§ SETUP
We will introduce our setup in this section.
Let us fix a graph G=(V,E), λ=(λ_v)_v∈ V∈^V and a vertex v_0∈ V.
The fundamental recurrence relation for the independence polynomial is
Z_G(λ)=λ_v_0 Z_G∖ N[v_0](λ)+Z_G-v_0(λ).
Let us define, assuming Z_G-v_0(λ)≠ 0,
R_G,v_0=λ_v_0 Z_G∖ N[v_0](λ)/Z_G-v_0(λ).
In the case that λ_v>0 for all v∈ V, the ratio (<ref>) is always defined.
This definition is inspired by Weitz <cit.>.
We note that by (<ref>),
R_G,v_0≠ -1 if and only if Z_G(λ)≠ 0.
So for our purposes it suffices to look at the ratio R_G,v_0.
§.§ Regular trees
We now consider the univariate independence polynomial for the trees T_Δ,k.
Let v_k denote the root vertex of T_Δ,k.
Then for k>0, T_Δ,k-v_k is equal to the disjoint union of Δ-1 copies of T_Δ,k-1.
Additionally, for k>1, T_Δ,k∖ N[v_k] is equal to the disjoint union of Δ-1 copies of T_Δ,k-1-v_k-1.
Using this we note that for k>2 (<ref>) takes the following form:
R_T_Δ,k,v_k =λ(Z_T_Δ,k-1-v_k-1/Z_T_Δ,k-1)^Δ-1
=λ(Z_T_Δ,k-1-v_k-1/λ Z_T_Δ,k-1∖ N[v_k-1]+Z_T_Δ,k-1-v_k-1)^Δ-1
=λ/(1+R_T_Δ,k-1,v_k-1)^Δ-1.
We denote the extended complex plane ∪{∞} by .
Define for λ∈ and d∈, f_d,λ:→ by
f_d,λ(x)=λ/(1+x)^d.
So (<ref>) gives that R_T_Δ,k,v_k=f_Δ-1,λ(R_T_Δ,k-1,v_k-1).
Noting that R_T_Δ,0,v_0=λ, we observe that R_T_Δ,k,v_k=f_Δ-1,λ^∘ k(λ).
So to understand under which conditions R_T_Δ,k,v_k equals -1 or not, it suffices to look at the orbits of f_Δ-1,λ with starting point λ, or equivalently with starting point -1.
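In particular, the ratio on T_Δ,k can be computed by direct iteration; a small sketch (complex λ is allowed):

def tree_ratio(d, lam, k):
    # R_{T_{Delta,k},v_k} for Delta = d + 1: iterate
    # f_{d,lam}(x) = lam / (1 + x)**d, starting from R_{T_{Delta,0}} = lam.
    x = lam
    for _ in range(k):
        x = lam / (1 + x) ** d
    return x  # by (<ref>), Z_{T_{Delta,k}}(lam) = 0 exactly when this
              # ratio equals -1 (given nonvanishing subtree partition functions)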
A somewhat similar relation between graphs and the iteration of rational maps was explored by Bleher, Roeder and Lyubich in <cit.> and <cit.>. While here one iteration of f_Δ-1,λ corresponds to adding an additional level to a tree, there one iteration corresponded to adding an additional refinement to a hierarchical lattice.
Let us denote by U_d⊂ the open set of parameters λ for which f_d,λ has an attracting fixed point.
Then
U_d={-α d^d/(d+α)^d+1| |α|<1}.
Indeed, writing f=f_d,λ, we note that if x is a fixed point of f we have
f^'(x)=-d/1+xλ/(1+x)^d=-dx/1+x.
Let α∈.
Then f^'(x)=α if and only if x=-α/d+α and consequently,
λ=x(1+x)^d=-α d^d/(d+α)^d+1.
A fixed point x = f(x) is attracting if and only if |f^'(x)|<1, which implies the description (<ref>). For parameters λ in the boundary ∂ U_Δ-1 the function f has a neutral fixed point, and for a dense set of parameters λ∈∂ U_Δ - 1 the fixed point is parabolic, i.e. the derivative at the fixed point is a root of unity. Classical results from complex dynamical systems allow us to deduce the following regarding the vanishing/non-vanishing of the independence polynomial:
Let Δ∈ be such that Δ≥ 3. Then
(i) for all k∈ and λ∈ U_Δ-1, Z_T_Δ,k(λ)≠ 0;
(ii) if λ∈∂ U_Δ-1, then for any open neighborhood U of λ there exists λ'∈ U and k∈ such that Z_T_Δ,k(λ')=0.
We note that for λ=-(Δ-1)^Δ-1/Δ^Δ part (ii) was proved by Shearer <cit.>; see also <cit.>.
Part (i) follows quickly from elementary results in complex dynamics, but the statements that imply part (ii) are less trivial. The necessary background from the complex dynamical systems, including the proof of Proposition <ref> and a counterexample to the multivariate statement of Theorem <ref>, will be discussed in Appendix <ref>. Note that Proposition <ref> from the introduction is a special case of Proposition <ref>.
So we can conclude that Sokal's conjecture is already proved for regular trees.
We now move to general (bounded degree) graphs.
§.§ A recursive procedure for ratios for all graphs
It will be convenient to have an expression similar to (<ref>) for all graphs.
Let G be a graph with fixed vertex v_0.
Let v_1,…,v_d be the neighbors of v_0 in G (in any order). Set G_0=G-v_0 and define for i=1,…,d, G_i:=G_i-1-v_i.
Then G_d=G∖ N[v_0].
The following lemma gives a recursive relation for the ratios and has been used before over the real numbers in e.g. <cit.>.
Suppose Z_G_i(λ)≠ 0 for all i=0,…,d.
Then
R_G,v_0=λ_v_0/∏_i=1^d (1+R_G_i-1,v_i).
Let us write
Z_G-v_0(λ)/Z_G∖ N[v_0](λ) =Z_G_0(λ)/Z_G_1(λ)Z_G_1(λ)/Z_G_2(λ)⋯Z_G_d-1(λ)/Z_G_d(λ)=∏_i=1^dZ_G_i(λ)+λ_v_iZ_G_i-1∖ N[v_i]/Z_G_i(λ)
=∏_i=1^d(1+R_G_i-1,v_i),
where in the second equality we use (<ref>).
As
R_G,v_0=λ_v_0 Z_G∖ N[v_0](λ)/Z_G-v_0(λ)=λ_v_0/Z_G-v_0(λ)/Z_G∖ N[v_0](λ),
the lemma follows.
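The recursion of the lemma translates directly into a recursive procedure for the ratios (exponential time in general); a sketch, assuming all intermediate partition functions are nonzero, as in the proofs below:

def ratio(G, lam, v0):
    # R_{G,v0} via the lemma: G_0 = G - v0, G_i = G_{i-1} - v_i.
    H = {u: G[u] - {v0} for u in G if u != v0}      # G - v0
    r = lam[v0]
    for vi in sorted(G[v0]):                        # neighbors, in any order
        r /= 1 + ratio(H, lam, vi)
        H = {u: H[u] - {vi} for u in H if u != vi}  # remove v_i
    return r  # Z_G(lam) != 0 iff this ratio differs from -1,
              # given nonvanishing subgraph partition functions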
As an illustration of Lemma <ref> we will now prove a result that shows that Z_G(λ) is nonzero as long as the norms and arguments of the λ_v are small enough. This result is implied by our main theorem for angles that are much smaller still, but the statement below is not implied by our main theorem, and is another contribution to Sokal's question <cit.>.
The proof moreover serves as warm up for the proof of our main result.
Let G=(V,E) be any graph of maximum degree at most Δ≥ 2.
Let ε>0 and let λ∈ℂ^V be such that |λ_v|≤tan(π/((2+ε)(Δ-1))), and such that |arg(λ_v)|<(ε/2)π/(2+ε) for all v∈ V. Then Z_G(λ)≠ 0.
Since the independence polynomial is multiplicative over the disjoint union of graphs, we may assume that G is connected.
Fix a vertex v_0 of G.
We will show by induction that for each subset U⊆ V∖{v_0} we have
(i) Z_G[U](λ)≠ 0,
(ii) if u∈ U has a neighbor in V∖ U, then |R_G[U],u|<tan(π/((2+ε)(Δ-1))),
(iii) if u∈ U has a neighbor in V∖ U, then Re(R_G[U],u)>0.
Clearly, if U=∅ both (i), (ii) and (iii) are true.
Now suppose U⊆ V∖{v_0} is nonempty and let H=G[U].
Let u_0∈ U be such that u_0 has a neighbor in V∖ U (u_0 exists as G is connected).
Let u_1,…,u_d be the neighbors of u_0 in H. Note that d≤Δ-1.
Define H_0=H-u_0 and set for i=1,…,d H_i=H_i-1-u_i.
Then by induction we know that for i=0,…,d, Z_H_i(λ)≠ 0 and for i≥ 1, Re(R_H_i-1,u_i)>0, implying that |1+R_H_i-1,u_i|≥ 1.
So by Lemma <ref> we know that
|R_H,u_0|=|λ_u_0|/∏_i=1^d |1+R_H_i-1,u_i|<|λ_u_0|≤tan(π/((2+ε)(Δ-1))),
showing that (ii) holds for U.
To see that (iii) holds we look at the angle α that R_H,u_0 makes with the positive real axis.
It suffices to show that |α|<π/2.
Since by induction Re(R_H_i-1,u_i)>0 and |R_H_i-1,u_i|≤tan(π/((2+ε)(Δ-1))), we see that the angle α_i that 1+R_H_i-1,u_i makes with the positive real axis satisfies |α_i|≤π/((2+ε)(Δ-1)).
This implies by Lemma <ref> that
|α|<(Δ-1)·π/((2+ε)(Δ-1))+(ε/2)π/(2+ε)= π/2,
showing that (iii) holds.
As by (iii), R_H,u_0 has strictly positive real part and hence does not equal -1 we conclude by (<ref>) that Z_H(λ)≠ 0.
So we conclude that (i), (ii) and (iii) hold for all U⊆ V∖{v_0}.
To conclude the proof, it remains to show that Z_G(λ)≠ 0.
Let v_1,…,v_d be the neighbors of v_0. Let G_i, for i=0,…,d, be defined as the graphs H_i above.
Then by (i) and (ii) we know that for i=0,…,d, Z_G_i(λ)≠ 0 and Re(R_G_i-1,v_i)>0 for i≥ 1.
So as above we have
that the angle α_i, that 1+R_G_i-1,v_i makes with the positive real line, satisfies |α_i|≤π/((2+ε)(Δ-1)).
So by Lemma <ref> the absolute value of the argument of R_G,v_0 is bounded by
(ε/2)π/(2+ε)+Δ·π/((2+ε)(Δ-1))≤(2+ε/2)π/(2+ε)<π,
using that Δ/Δ-1≤ 2.
This implies by (<ref>) that Z_G(λ)≠0 and finishes the proof.
Define for λ∈ and d∈ the map F_d,λ:^d→ by
(x_1,…,x_d)↦λ/∏_i=1^d(1+x_i).
Given ϵ>0, the proof of Theorem <ref> consisted mainly of finding a domain D⊂ not containing -1 such that if x_1, … ,x_d ∈ D, then F_d,λ(x_1, …, x_d) ∈ D for all 0 ≤ d ≤Δ-1.
To prove Theorem <ref>, we will similarly construct for each Δ a domain D, containing the interval [0, λ_Δ] but not the point -1, which is mapped inside itself by f_d,λ for all 0≤ d≤Δ-1 and all λ in a sufficiently small complex neighborhood of the interval [0,(1-ϵ)λ_Δ). Had these functions f_d,λ all been strict contractions on the interval [0, λ_Δ], the existence of such a domain D would have been immediate. Unfortunately the functions f_d,λ are typically not contractions, even for real valued λ. However, since the positive real line is contained in the basin of an attracting fixed point, it follows from basic theory of complex dynamical systems <cit.> that each f_d,λ is strictly contracting on [0,λ_Δ) with respect to the Poincaré metric of the corresponding attracting basin. While these Poincaré metrics vary with λ and d, this observation does give hope for finding coordinates with respect to which all the maps f_d,λ are contractions.
In the next section we will introduce explicit coordinates with respect to which f_Δ-1,λ_Δ becomes a contraction, and then show that for d ≤Δ-1 and λ∈ [0, λ_Δ) the maps f_d, λ are all strict contractions with respect to the same coordinates. We will then utilize these coordinates to give a proof of Theorem <ref> in Section <ref>.
§ A CHANGE OF COORDINATES
It is our aim in this section to find a coordinate change for each Δ≥ 3 so that the maps f_d,λ are contractions in these coordinates for any 0≤ d≤Δ-1 and any 0≤λ≤λ_Δ.
§.§ The case d=Δ-1 and λ=λ_Δ
We consider the coordinate changes.
z = φ_y(x) = log(1+ylog(1+x)),
with y>0. We note that a similar coordinate change using a double logarithm was used in <cit.>. The best argument for using the specific form above is that it seems to fit our purposes.
Our initial goal is to pick a y, depending on Δ such that the parabolic map f(x):=f_Δ-1,λ_Δ(x) becomes a contraction with respect to the new coordinates. Note that we call f parabolic if λ = λ_Δ.
In this case the fixed point of f is given by
x_Δ=1/Δ-2 = 1/d-1,
and has derivative f^'(x_Δ) = -1, and is thus parabolic.
In the z-coordinates we consider the map
g(z) = g_Δ-1,λ_Δ(z) = φ_y ∘ f ∘φ_y^-1.
Note that the function φ_y:ℝ_+ →ℝ_+ is bijective, and ℝ_+ is forward invariant under f. It follows that the composition g is well defined on ℝ_+.
We write z_Δ:= φ_y(x_Δ).
Then z_Δ is fixed under g, and one immediately obtains g^'(z_Δ) = -1. Thus, in order for |g^'| ≤ 1 we in particular need that g^''(z_Δ) = 0.
Let us start by computing g^' and g^''.
Writing x_1 = f(x_0) and z_0 = φ_y(x_0) we note that
g^'(z_0) = φ_y^'(x_1) · f^'(x_0) · (φ_y^-1)^'(z_0)
= φ_y^'(x_1)/φ_y^'(x_0)· f^'(x_0)
= 1+ ylog(1+x_0)/1+ylog(1+x_1)·1+x_0/1+x_1·-d x_1/1+x_0
= 1+ ylog(1+x_0)/1+ylog(1+x_1)·-d x_1/1+x_1.
Now note that
g^'' = ∂ g^'/∂ x_0·∂ x_0/∂ z_0,
and since ∂ x_0/∂ z_0 > 0, we look for points z_0 where ∂ g^'/∂ x_0(z_0) = 0. We obtain
∂ g^'/∂ x_0(z_0) = y/(1+x_0)/1+ylog(1+x_1)·-d x_1/1+x_1
+ (1+ ylog(1+x_0)) ·∂/∂ x_1( 1/1+ylog(1+x_1)·-d x_1/1+x_1) ·∂ x_1/∂ x_0.
By considering x_1 as a variable depending on x_0, and thus also on z_0, the presentation of the calculations here and later in this section becomes significantly more succinct. Since
∂ x_1/∂ x_0=-dx_1/1+x_0,
and since
∂/∂ x_1( 1/1+ylog(1+x_1)·-dx_1/1+x_1) = d ·x_1 y - (1+ylog(1+x_1))/(1+x_1)^2(1+ylog(1+x_1))^2,
we obtain
∂ g^'/∂ x_0(z_0) = y/(1+x_0)/1+ylog(1+x_1)·-d x_1/1+x_1
+(1+ ylog(1+x_0))-d^2x_1/1+x_0·x_1 y - (1+ylog(1+x_1))/(1+x_1)^2(1+ylog(1+x_1))^2.
The only value of y > 0 for which g^''(z_Δ) = 0 is given by
y = y_Δ := 1/2x_Δ - log(1+x_Δ).
Noting that x_1 = x_0 and dx_1/(1+x_1)=1 when x_0 = x_Δ, we obtain
∂ g^'/∂ x_0(z_Δ) = d ·1+ylog(1+x_Δ) - 2x_Δ y/(1+ylog(1+x_Δ))(1+x_Δ)^2.
Thus g^''(z_Δ) = 0 if and only if
y=y_Δ := 1/2x_Δ - log(1+x_Δ).
From now on we assume that y = y_Δ.
We have that |g^'(z)| ≤ 1 for all z ≥ 0.
Since
lim_z → +∞ |g^'(z)| = 0,
it suffices to show that |g^'(0)| < 1, which follows if we show that g^''(0)<0, for which it is sufficient to show that ∂ g^'/∂ x_0(0)<0.
Plugging in x_0=0 in (<ref>) we get
∂ g^'/∂ x_0(0) = dx_1(d - y(1+x_1))(1+ylog(1+x_1) - dx_1 y)/(1+ylog(1+x_1))^2(1+x_1)^2,
with x_1 = f(0) = λ. Hence we can complete the proof by showing that
d - y(1+λ) = d - y - yλ < 0.
Using that 1/y=2/(d-1)-log(d/(d-1)) we observe that
1/y<1/(d-1)+1/(2(d-1)^2)
and hence y > (d-1)^2/(d-1/2).
From this we obtain
d - y - yλ < (d(d-1/2)-(d-1)^2-(d-1)(d/(d-1))^d)/(d-1/2)
< (d(d-1/2)-(d-1)^2-(d-1)(1+d/(d-1)))/(d-1/2) = (-d/2)/(d-1/2) < 0,
which completes the proof.
In particular it follows that for all x ≥ 0 we have that f^∘ n(x) → x_Δ.
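These bounds can be checked numerically; a sketch evaluating g' through the closed expression above (scanning a grid of z_0 ≥ 0 at lam = λ_Δ and d = Δ-1 exhibits sup |g'| ≤ 1):

import numpy as np

def gprime(z0, d, lam, Delta):
    xD = 1.0 / (Delta - 2)
    y = 1.0 / (2 * xD - np.log1p(xD))    # y = y_Delta
    x0 = np.expm1((np.exp(z0) - 1) / y)  # inverse of phi_y
    x1 = lam / (1 + x0) ** d             # x1 = f_{d,lam}(x0)
    return ((1 + y * np.log1p(x0)) / (1 + y * np.log1p(x1))
            * (-d * x1) / (1 + x1))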
§.§ Smaller values of λ and d
We now consider the case where λ < λ_Δ, and the map f has degree d≤Δ-1.
We again consider the map
g_d,λ(z)=φ_y∘ f_d,λ∘φ^-1_y.
Again we will often just write g instead of g_d,λ.
Our goal is to show that |g^'(z_0)| < 1 for all z_0 ≥ 0.
To do so we will consider g^' as a function of λ,d and z_0.
We first look at the case where λ is fixed and d is varying.
Let Δ∈ with Δ≥ 3.
Let 0≤λ≤λ_Δ and let d∈{0,1,…,Δ-1}.
Let z_0≥ 0 be such that g^''_d,λ(z_0)=0. Then we have 0≥ g_d,λ^'(z_0)≥ g_Δ-1,λ^'(z_0).
We will consider the derivative of g^' with respect to d in the points z_0 where g^''(z_0)=0.
By (<ref>), g^''(z_0) is a multiple of
y/1+ylog(1+x_1)·1/1+x_1 + -d (1+ylog(1+x_0)) ·1+ylog(1+x_1)-x_1y/(1+x_1)^2(1+ylog(1+x_1))^2.
As g^''(z_0) =0, we obtain
y(1+x_1)(1+ylog(1+x_1)) = d (1+ylog(1+x_0)) · (1+ylog(1+x_1) - x_1 y).
In particular we get that
1 + y log(1+x_1) - x_1 y > 0,
and
dlog(1+x_0) = (1+x_1)(1+ylog(1+x_1))/1+ylog(1+x_1)-x_1y - d/y.
Now notice that by (<ref>) we have that ∂/∂ d g^' is a positive multiple of
-x_1/(1+x_1)(1+ylog(1+x_1)+∂ x_1/∂ d·∂/∂ x_1( 1/1+ylog(1+x_1)·-dx_1/1+x_1),
which by (<ref>) is a positive multiple of
- (1+x_1) (1 + y log(1+x_1)) + dlog(1+x_0) · (1 + y log(1+x_1) - x_1 y).
When we plug in equation (<ref>) to eliminate x_0 from this expression, we note that the term (1+x_1)(1+ylog(1+x_1)) cancels and we obtain that ∂/∂ dg^' is a positive multiple of
- d/y(1 + ylog(1+x_1) - x_1 y),
which is negative as observed in (<ref>).
So, we see that as we decrease d the value of g^'(z_0) increases and hence it follows that 0≥ g_d,λ^'(z_0) ≥ g_Δ-1,λ^'(z_0), as desired.
We next compute the derivative of g^' with respect to λ.
Note that x_1 depends on λ, but x_0 does not, hence
∂ g^'/∂λ (z_0) = (1+ y log(1+x_0)) ·∂/∂λ(-dx_1/(1+x_1)(1+ylog(1+x_1)))
= (1+ y log(1+x_0)) ·∂ x_1/∂λ·∂/∂ x_1(-dx_1/(1+x_1)(1+ylog(1+x_1))) .
Thus ∂ g^'/∂λ (z_0) = 0 if and only if
∂/∂ x_1(-dx_1/(1+x_1)(1+ylog(1+x_1))) = 0,
which, by (<ref>) is the case if and only if
x_1 y - (1+ylog(1+x_1)) = 0.
Let Δ≥ 5. For any λ≤λ_Δ and 0≤ d≤Δ-1, we have
x_1 y - (1+ylog(1+x_1))<0.
In particular g^'(z_0) is decreasing in λ for any z_0≥ 0.
We note that x_1 y - (1+ylog(1+x_1)) is increasing in x_1 for x_1>0.
So it suffices to plug in λ=λ_Δ and x_0=0, that is, plug in x_1=λ_Δ.
Note that this makes it independent of d.
Plugging in x_1=λ_Δ we get
λ_Δ y-(1+y log(1+λ_Δ))=y(λ_Δ-(1/y+log(1+λ_Δ))),
So as y>0 it suffices to show
c(Δ):=λ_Δ-(1/y+log(1+λ_Δ))<0.
By a direct computer calculation, we obtain the following approximate values for c(Δ) for Δ∈{5,6,7}:
[ Δ 5 6 7; c(Δ) -0.0450 -0.0809 -0.0887; ]
and we conclude that (<ref>) holds for Δ∈{5,6,7}.
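The direct computer calculation referred to above amounts to a one-line evaluation of c(Δ); a sketch:

from math import log

def c(Delta):
    xD = 1.0 / (Delta - 2)
    lamD = (Delta - 1) ** (Delta - 1) / (Delta - 2) ** Delta
    y = 1.0 / (2 * xD - log(1 + xD))
    return lamD - (1.0 / y + log(1 + lamD))

print([round(c(D), 4) for D in (5, 6, 7)])
# [-0.045, -0.0809, -0.0887]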
Using that x-x^2/2≤log(1+x)≤ x for all x≥ 0, we obtain
λ_Δ-(1/y+log(1+λ_Δ))≤ λ_Δ-(x_Δ+λ_Δ-λ_Δ^2/2)
= λ_Δ^2/2-1/(Δ-2).
Using that
λ_Δ=(Δ-1)/(Δ-2)^2·((Δ-1)/(Δ-2))^(Δ-2)≤ e(Δ-1)/(Δ-2)^2,
we obtain that
λ_Δ-(1/y+log(1+λ_Δ))≤ (e^2(1+1/(Δ-2))^2-2(Δ-2))/(2(Δ-2)^2).
Since the right-hand side of (<ref>) is negative for Δ=8 and since the numerator is clearly decreasing in Δ, we conclude that (<ref>) is true for all Δ≥ 8. This concludes the proof.
Let Δ∈{3,4}.
Let z_0>0 and λ_0>0 be such that
∂/∂λg^'_Δ-1,λ(z_0) = 0
for λ = λ_0. Then g^'_Δ-1,λ_0(z_0)≥ -0.92.
By assumption we have ∂ g^'(z_0)/∂λ = 0. Thus (<ref>) implies that
x_1 y = 1 + y log(1 + x_1),
This implies that for x_1 to be a solution to (<ref>), we need that x_1≥ x_Δ.
Indeed suppose that x_1<x_Δ. Then we have from (<ref>) that
x_1y=1+ylog(1+x_1)>1+yx_1-yx_1^2/2,
from which we obtain yx_1^2>2.
However, as y<1/x_Δ we have yx_1^2<yx_Δ^2<x_Δ<2, a contradiction.
Now (<ref>) combined with (<ref>) gives
g^'(z_0)=(1+y log(1+x_0))/(y x_1)·(-(Δ-1)x_1/(1+x_1)) = -(Δ-1)(1+y log(1+x_0))/(y(1+x_1)).
Now recall that y=y_Δ satisfies
2x_Δ y = 1+y log(1+x_Δ).
Now using that x_1≥ x_Δ and by combining (<ref>) and (<ref>) we obtain
x_1=(1+y log(1+x_1))/y=2x_Δ·(1+y log(1+x_1))/(1+y log(1+x_Δ))≥ 2x_Δ.
Using this we obtain
x_1= 2x_Δ·(1+y log(1+x_1))/(1+y log(1+x_Δ))≥ 2x_Δ·(1+y log(1+2x_Δ))/(1+y log(1+x_Δ))
= 2x_Δ·(1+log(1+2x_Δ)/(2x_Δ-log(1+x_Δ)))/(1+log(1+x_Δ)/(2x_Δ-log(1+x_Δ)))=2x_Δ+log((1+2x_Δ)/(1+x_Δ))=α_Δ x_Δ,
where α_3=2+log(3/2)≈ 2.405, and where α_4=2+2log(4/3)≈ 2.575.
This then implies that
1+x_0≤ (λ_Δ/x_1)^1/(Δ-1)≤α_Δ^-1/(Δ-1)(1+x_Δ).
Since (<ref>) is decreasing in x_0 and increasing in x_1, we can plug in x_0=α_Δ^-1/(Δ-1)(1+x_Δ) and x_1=α_Δ x_Δ to obtain
g_Δ-1,λ_0^'(z_0) > -(Δ-1)(1+y log(α_Δ^(-1/(Δ-1))(1+x_Δ)))/(y(1+α_Δ x_Δ))
=
(-2(Δ-1)x_Δ y + y log(α_Δ))/(y(1+α_Δ x_Δ))
= (-2(Δ-1)/(Δ-2) + log(α_Δ))/((Δ-2+α_Δ)/(Δ-2)) ≈ -0.9168 for Δ=3 and ≈ -0.8979 for Δ=4.
This finishes the proof.
We can now finally show that the coordinate changes works for all values of the parameters we are interested in.
Let Δ≥ 3 and let ε>0. Then there exists δ>0 such that if 0≤λ<(1-ε)λ_Δ, then |g^'_d,λ(z_0)|<1-δ for all z_0≥ 0 and d∈{0,1,…,Δ-1}.
Let J=[0,(1-ε)λ_Δ] and let
M:=min_z_0≥ 0,λ∈ J,d=0,…, Δ-1 g^'_d,λ(z_0).
As for any λ∈ J we have that lim_z_0→∞g^'_d,λ(z_0)=0 and as g^''(0)<0 by the proof of Corollary <ref> (which remains valid as (<ref>) is decreasing in d) it follows that we may assume that M is attained at some triple (z_0,λ_0,d) with z_0>0, λ_0∈ J and d∈{0,…,Δ-1}.
This then implies that g_d,λ_0^''(z_0)=0 and hence by Lemma <ref> we know that g^'_d,λ_0(z_0)≥ g^'_Δ-1,λ_0(z_0),
that is, we have that d=Δ-1.
If g^'_d,λ_0(z_0) attains its minimum (as a function of λ) at some λ<λ_Δ, then ∂/∂λ g^'(z_0)=0. So by Lemma <ref> we know that Δ∈{3,4}.
Then Lemma <ref> implies that M≥ -0.92.
So we may assume that g^' is strictly decreasing as a function of λ on [0,λ_Δ].
This then implies that λ_0=(1-ε)λ_Δ and so there exists δ>0 (and we may assume δ<0.08)
such that
M=g_d,λ_0^'(z_0)> (1-δ) g_d,λ_Δ^'(z_0)≥ -1+δ,
where the last inequality is by Corollary <ref>.
This finishes the proof.
§ PROOF OF THEOREM <REF>
Our proof will essentially follow the same pattern as the proof of Theorem <ref>, but instead of working with the function F_d,λ we now need to work with a conjugation of F_d,λ.
Let Δ≥ 3.
Recall from the previous section the function φ:_+→_+ defined by z = φ(x) = log(1+ylog(1+x)), with y=y_Δ.
We now extend the function φ to a neighborhood V ⊂ of _+ by taking the branch for both logarithms that is real for x >0. By making V sufficiently small we can guarantee that φ is invertible. Now define for d=0,…, Δ-1, the map G_d,λ: φ(V)^d → by
(z_1,…,z_d)↦φ(λ/∏_i=1^d 1+φ^-1(z_i)).
For a set A⊂ℂ and ε>0 we write 𝒩(A,ε):={z∈ℂ | |z-a|<ε for some a∈ A}.
Now define for ε>0 the set D(ε)⊂ℂ by
D(ε):=𝒩([0,φ(λ_Δ)],ε).
We collect a very useful property:
Let Δ≥ 3 and let ε>0.
Then there exist ε_1,ε_2>0 such that for any λ∈Λ(ε_2):=
𝒩([0,(1-ε)λ_Δ],ε_2), any d=0,…,Δ-1 and z_1,…,z_d∈ D(ε_1) we have G_d,λ(z_1,…,z_d)∈ D(ε_1).
We first prove this for the special case that z_1=z_2=…=z_d=z.
In this case we have G_d,λ(z_1,…,z_d)=g_d,λ(z).
By Proposition <ref> we know that there exists δ>0 such that for any d=0,…,Δ-1 we have
|g^'_d,λ(z)|<1-δ for all λ∈ [0,(1-ε)λ_Δ] and z∈ [0,φ(λ_Δ)].
By continuity of g^' as a function of z and λ there exist ε_1,ε_2>0 such that for all d=0,…,Δ-1 and each (z,λ)∈ D(ε_1)×Λ(ε_2)
we have
|g_d,λ^'(z)|≤ 1-δ/2.
We may assume that ε_2 is small enough so that for any d,
sup_λ∈Λ(ε_2),z∈ [-ε_1,φ(λ_Δ)]|∂/∂λ g_d,λ(z)|≤δε_1/(2ε_2).
Fix now λ∈Λ(ε_2) and d and let z∈ D(ε_1).
Let z'∈ [0,φ(λ_Δ)] be such that |z-z'|< ε_1 and let λ'∈ [0,(1-ε)λ_Δ] be such that |λ-λ'|<ε_2.
Then
|g_d,λ(z)-g_d,λ'(z')|≤ |g_d,λ(z)-g_d,λ(z')|+|g_d,λ(z')-g_d,λ'(z')|
< (1-δ/2)ε_1+ε_1δ/2 = ε_1,
implying that the distance of g_d,λ(z) to [0,φ(λ_Δ)] is less than ε_1, as g_d,λ'(z')∈ [0,φ(λ_Δ)]. Hence g_d,λ(z)∈ D(ε_1), which proves the lemma for z_1=z_2=…=z_d=z.
For the general case fix d, let λ∈Λ(ε_2) and consider x=∏_i=1^d (1+φ^-1(z_i)) for certain z_i∈ D=D(ε_1).
We want to show that x=∏_i=1^d (1+φ^-1(z)) = (1+φ^-1(z))^d for some z∈ D.
First of all note that
1+φ^-1(z_i)= exp(exp(z_i)-1/y).
Then
x=exp( ∑_i=1^d (exp(z_i)-1/y)),
which is equal to (1+φ^-1(z))^d for some z∈ D provided
1/d∑_i=1^d exp(z_i)=exp(z),
for some z∈ D. Consider the image of D under the exponential map. D is a smoothly bounded domain whose boundary consists of two arbitrarily small half-circles and two parallel horizontal intervals. Recall that the exponential image of a disk of radius less than 1 is strictly convex, a fact that can easily be checked by computing that the curvature of its boundary has constant sign. Therefore exp(D) is a smoothly bounded domain whose boundary consists of two radial intervals and two strictly convex curves, hence exp(D) must also be convex. See Figure <ref> for a sketch of the domain D and its image under the exponential map.
It follows that the convex combination 1/d∑_i=1^d exp(z_i) is contained in the image of D.
In other words, there exists z∈ D such that (<ref>) is satisfied.
This now implies that G_d,λ(z_1,…,z_d)=g_d,λ(z)∈ D, as desired.
§.§ Proof of Theorem <ref>
We first state and prove a more precise version of Theorem <ref> for the multivariate independence polynomial:
Let Δ∈ℕ with Δ≥ 3. Then for any ε>0 there exists δ>0 such that for any graph G=(V,E) of maximum degree at most Δ and any λ∈ℂ^V satisfying λ_v ∈𝒩([0,(1-ε)λ_Δ),δ) for each v∈ V, we have that Z_G( λ)≠ 0.
Let ε_1 and ε_2 be the two constants from Lemma <ref>, where ε_1 is chosen sufficiently small.
Let D=D(ε_1) and let δ=ε_2.
Let G be a graph of maximum degree at most Δ.
Since the independence polynomial is multiplicative over the disjoint union of graphs, we may assume that G is connected.
Fix a vertex v_0 of G.
We will show by induction that for each subset U⊆ V∖{v_0} we have
(i) Z_G[U](λ)≠ 0,
(ii) if u∈ U has a neighbor in V∖ U, then φ(R_{G[U],u})∈ D.
Clearly, if U=∅, then both (i) and (ii) are true.
Now suppose U⊆ V∖{v_0} is nonempty and let H=G[U].
Let u_0∈ U be such that u_0 has a neighbor in V∖ U (u_0 exists as G is connected).
Let u_1,…,u_d be the neighbors of u_0 in H. Note that d≤Δ-1.
Define H_0=H-u_0 and set H_i=H_{i-1}-u_i for i=1,…,d.
Then by induction we know that for i=0,…,d, Z_H_i(λ)≠ 0 and so the ratios R_H_i-1,u_i are well defined for i≥ 1 and by induction they satisfy φ(R_H_i-1,u_i)∈ D.
By Lemma <ref>
R_H,u_0=λ_u_0/∏_i=1^d(1+R_H_i-1,u_i).
Since φ(R_H_i-1,u_i)∈ D for i=1,…,d, we have by Lemma <ref> that φ(R_H,u_0)∈ D.
From this we conclude that R_H,u_0≠ -1, as -1∉φ^-1(D).
So by (<ref>) Z_H(λ)≠ 0.
This shows that (i) and (ii) hold for all subsets U⊆ V∖{v_0}.
To conclude the proof we need to show that Z_G(λ)≠ 0.
Let v_1,…,v_d be the neighbors of v_0 (in any order). Define G_0=G-v_0 and set G_i=G_{i-1}-v_i for i=1,…,d.
Then by (i)
we know that for i=0,…,d, Z_G_i(λ)≠ 0 and so the ratios R_G_i-1,v_i are well defined for i≥ 1 and by
(ii)
they satisfy φ(R_G_i-1,v_i)∈ D. Write for convenience z_i=R_G_i-1,v_i for i=1,…,d.
Then, by the same reasoning as above, we have
R_G,v_0(1+z_d)=λ_v_0/∏_i=1^d-1(1+z_i)∈φ^-1(D).
This implies that R_G,v_0 is not equal to -1, for if this were the case, we would have
-1∈ z_d+φ^-1(D). However, z_d∈φ^-1(D) and for _1 small enough, φ^-1(D) will have real part bounded away from -1/2, a contradiction.
We conclude that Z_G(λ)≠ 0.
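On trees the induction above is effective: the ratios can be computed leaf-to-root. The following minimal numerical sketch (Python; the uniform (Δ-1)-ary tree and all names are our own illustration, not part of the proof) shows the ratio staying away from -1 for λ below λ_Δ:

    # Sketch: leaf-to-root ratio recursion R_H = lambda / prod_i (1 + R_{H_i}),
    # evaluated here on a uniform (Delta-1)-ary tree of a given depth.
    def lambda_crit(Delta):
        d = Delta - 1
        return d**d / (d - 1)**(d + 1)   # lambda_Delta = (Delta-1)^(Delta-1)/(Delta-2)^Delta

    def ratio(depth, lam, Delta):
        if depth == 0:
            return lam                    # leaf: Z = 1 + lambda and Z_{G - v} = 1
        r = ratio(depth - 1, lam, Delta)  # all children are identical subtrees
        return lam / (1 + r)**(Delta - 1)

    Delta = 4
    for frac in (0.5, 0.9, 0.99):
        lam = frac * lambda_crit(Delta)
        R = ratio(40, lam, Delta)
        assert abs(1 + R) > 1e-6          # R != -1, hence Z != 0 along the recursion
        print(frac, R)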
Theorem <ref> is now an easy consequence.
Let, for ε>0, δ(ε) be the associated δ>0 from Theorem <ref>.
Consider a sequence ε_i → 0 and define
D_Δ:=⋃_i 𝒩([0,(1-ε_i)λ_Δ),δ(ε_i)).
The set D_Δ is clearly open and contains [0,λ_Δ).
Moreover, for any graph G of maximum degree at most Δ and λ∈ D_Δ we have Z_G(λ)≠ 0, as λ∈𝒩([0,(1-ε)λ_Δ),δ(ε)) for some ε>0.
Let us recall that the literal statement of Theorem <ref> is false in the multivariate setting as we will prove in the appendix.
However, by the same reasoning as above we do immediately obtain the following.
Let Δ∈ℕ with Δ≥ 3, and let n ∈ℕ. Then there exists a complex domain D_Δ containing [0,λ_Δ)^n such that for any graph G=(V,E) with V = {1,… , n} of maximum degree at most Δ and any λ∈ D_Δ, we have that Z_G( λ)≠ 0.
We remark that the difference between Corollary <ref> and Theorem <ref> is subtle. The set D_Δ is chosen of the form
D_Δ:=⋃_i 𝒩([0,(1-ε_i)λ_Δ),δ(ε_i))^n,
as above. In particular the set D_Δ is not of the form D^n for some open set D containing [0,λ_Δ), hence in this sense it is not a literal generalization of Theorem <ref>.
§ CONCLUDING REMARKS AND QUESTIONS
In this paper we have shown that Sokal's conjecture is true.
By results from <cit.> this gives as a direct application the existence of an efficient algorithm (different than Weitz's algorithm <cit.>) for approximating the independence polynomial at any fixed 0<λ<λ_Δ.
By a result of Sly and Sun <cit.> it is known that unless NP=RP there does not exist an efficient approximation algorithm for computing the independence polynomial at λ>λ_Δ for graphs of maximum degree at most Δ.
Very recently it was shown by Galanis, Goldberg and Štefankovič <cit.>, building on locations of zeros of the independence polynomial for certain trees, that it is NP-hard to approximate the independence polynomial at λ<-(Δ-1)^Δ-1/Δ^Δ for graphs of maximum degree at most Δ.
Recall from Proposition <ref> that at any λ contained in
U_Δ-1={λ_Δ(α)=-α (Δ-1)^Δ-1/(Δ-1+α)^Δ| |α|< 1},
the independence polynomial for regular trees does not vanish and that for any λ∈∂(U_Δ-1) there exists λ' arbitrarily close to λ for which there exists a regular tree T such that Z_T(λ')=0.
This naturally leads to the following two questions.
Let α∈ℂ be such that |α|>1. Let ε>0 and let Δ∈ℕ. Is it true that it is NP-hard to compute an ε-approximation[By an ε-approximation of Z_G(λ) we mean a nonzero number ζ∈ℂ such that e^{-ε}≤ |Z_G(λ)/ζ| ≤ e^{ε} and such that the angle between Z_G(λ) and ζ is at most ε.] of the independence polynomial at λ_Δ(α) for graphs G of maximum degree at most Δ?
This question has recently been answered positively, in a strong form, by Bezáková, Galanis, Goldberg, and Štefankovič <cit.>. They in fact showed that it is even #P hard to approximate the independence polynomial at non-positive λ contained in the complement of the closure of U_Δ-1.
Is it true that for any graph G of maximum degree at most Δ≥ 3 and any α∈ℂ with |α|<1 one has Z_G(λ_Δ(α))≠ 0?
The same question is also interesting for the multivariate independence polynomial.
We note that if this question too has a positive answer, it would lead to a complete understanding of the complexity of approximating the independence polynomial of graphs at any complex number λ in terms of the maximum degree.
Appendix
§ PARABOLIC BIFURCATIONS IN COMPLEX DYNAMICAL SYSTEMS, AND PROPOSITION <REF>
The proof of Proposition <ref> follows from results well known to the complex dynamical systems community, but not easily found in textbooks. In this appendix we give a short overview of the results needed, and outline how Proposition <ref> can be deduced from these results. The presentation is aimed at researchers who are not experts on parabolic bifurcations. Details of proofs will be given only in the simplest setting. Readers interested in working out the general setting are encouraged to look at the provided references.
We consider iteration of the rational function
f_λ(z) = λ/(1+z)^d,
where λ∈ℂ and d ≥ 2. We note that f_λ has two critical
points, -1 and ∞, and that f_λ(-1) = ∞.
If f_λ has an attracting or parabolic periodic orbit {x_1, … , x_k}, then the orbits of -1 and ∞ both converge to this orbit.
This statement is the immediate consequence of the following classical result, which can for example be found in <cit.>.
Let f be a rational function of degree d ≥ 2 with an attracting or parabolic cycle. Then the corresponding immediate basin must contain at least one critical point.
Let us say a few words about how to prove this result in the parabolic case. Recall that a period orbit is called parabolic if its multiplier, the derivative in case of a fixed point, is a root of unity. We consider the model case, where 0 is a parabolic fixed point with derivative 1, and f has the form
z_1 = z_0 - z_0^2 + h.o.t..
By considering the change of coordinates u = 1/z we obtain
u_1 = u_0 + 1 + O(1/u_0),
and we observe that if r>0 is chosen sufficiently small, the orbits of all initial values z ∈ D(r,r) ={|z-r|<r} converge to the origin tangent to the positive real axis. In fact, after a slightly different change of coordinates one can obtain the simpler map
u_1 = u_0 + 1.
These coordinates on D(r,r) are usually denoted by u = ϕ^i(z), and are referred to as the incoming Fatou coordinates. The Fatou coordinates are invertible on a sufficiently small disk D(r,r), and can be holomorphically extended to the whole parabolic basin by using the functional equation ϕ^i(f(z)) = ϕ^i(z) + 1.
By considering the inverse map z_1 = z_0 + z_0^2 + h.o.t. we similarly obtain the outgoing Fatou coordinates ϕ^o, defined on a small disk D(-r,r). It is often convenient to use the inverse map of ϕ^o, which we will denote by ψ^o. This inverse map can again be extended to all of ℂ by using the functional equation ψ^o(ζ-1) = f(ψ^o(ζ)).
Now let f be a rational function of degree at least 2, and imagine that the parabolic basin does not contain a critical point. Then ϕ^i extends to a biholomorphic map from ℂ to the parabolic basin. This gives a contradiction, as a parabolic basin must be a hyperbolic Riemann surface, i.e. its covering space is the unit disk, and therefore cannot be equivalent to ℂ. A similar argument can be given to deduce that any attracting basin must contain a critical point.
Let us return to the original maps f_λ. Recall that for fixed d ≥ 2, we denote the region in parameter space ℂ_λ for which f_λ has an attracting fixed point by U_d. The set U_d is an open and connected neighborhood of the origin. An immediate corollary of the above discussion is the following.
For each λ∈ U_d, the orbit of the initial value
z_0 = f^∘ 2_λ(∞) = λ
avoids the point -1.
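This is also easy to observe numerically; the sketch below (Python; our own illustration, with a sample parameter chosen so that the fixed point is attracting) iterates the free critical orbit and checks that it settles on a fixed point:

    # Sketch: orbit of z0 = f^2(infinity) = lambda under f(z) = lambda/(1+z)^d.
    def orbit(lam, d, n=500):
        z = lam
        for _ in range(n):
            z = lam / (1 + z)**d
        return z

    d = 3
    lam = 0.2 + 0.1j       # for small |lambda| the fixed point z* ~ lambda is
                           # attracting: |f'(z*)| = |d z*/(1+z*)| ~ d|lambda| < 1
    z = orbit(lam, d)
    print(z, abs(lam / (1 + z)**d - z))   # residual ~ 0: the orbit found the fixed point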
In fact, it turns out that one can prove the following stronger statement.
The region U_d is a maximal open set of parameters λ for which the orbit of z_0 avoids the critical point -1.
Observe that Lemma <ref> directly implies Proposition <ref>.
Note that the parameters λ for which there is a parabolic fixed point form a dense subset of ∂ U_d. Hence in order to obtain Lemma <ref> it suffices to prove that for any parabolic parameter λ_0 ∈∂ U_d and any neighborhood 𝒩(λ_0), there exists a parameter λ∈𝒩(λ_0) and an N ∈ℕ for which f_λ^∘ N(z_0) = -1. The fact that such λ and N exist is due to the following result regarding parabolic bifurcations.
Let f_ϵ be a one-parameter family of rational functions that vary holomorphically with ϵ. Assume that f = f_0 has a parabolic periodic cycle, and that this periodic cycle bifurcates for ϵ near 0. Denote one of the corresponding parabolic basins by ℬ_f, let z_0 ∈ℬ_f, and let w ∈ℂ̂∖ℰ_f. Then there exist sequences ϵ_j → 0 and N_j →∞ for which f_ϵ_j^∘ N_j(z_0) = w.
Here ℰ_f denotes the exceptional set, the largest finite completely invariant set, which by Montel's Theorem contains at most two points; see <cit.>. Since the set {-1,∞} containing the two critical points of the rational functions f_λ: z ↦λ/(1+z)^d does not contain periodic orbits, it quickly follows that the exceptional set of these functions is empty. Lemma <ref> follows from Theorem <ref> by taking w = -1 and considering a sequence (λ_j) that converges to a parabolic parameter λ_0 ∈∂ U_d.
Perturbations of parabolic periodic points play a central role in complex dynamical systems, and have been studied extensively, see for example the classical works of Douady <cit.> and Lavaurs <cit.>. We will only give an indication of how to prove Theorem <ref>, by discussing again the simplest model, f(z) = z - z^2 + h.o.t., and f_ϵ(z) = f(z) + ϵ^2. For ϵ≠ 0, the unique parabolic fixed point 0 = f(0) splits up into two fixed points. For ϵ>0 small these two fixed points are both close to the imaginary axis, forming a small “gate” for orbits to pass through.
For ϵ>0 small enough, the orbit of an initial value z_0 ∈ℬ_f, converging to 0 under the original map f, will pass through the gate between these two fixed points, from the right to the left half plane. The time it takes to pass through the gate is roughly π/ϵ. The following more precise statement was proved in <cit.>.
Let α∈ℂ, and consider sequences (ϵ_j) of complex numbers satisfying ϵ_j → 0, and positive integers (n_j) for which
π/ϵ_j - n_j →α.
Then the maps f_ϵ_j^∘ n_j converge, uniformly on compact subsets of ℬ_f, to the map ℒ_α = ψ^o ∘ T_α∘ϕ^i, where T_α denotes the translation x ↦ x+α.
Let w ∈ℂ̂∖ℰ, and let ζ_0 ∈ℂ for which ψ^o(ζ_0) = w.
Let α∈ℂ be given by
α = ζ_0 - ϕ^i(z_0)
such that ℒ_α(z_0) = w. Fix ρ>0 small, and for θ∈ [0,2π] write
α(θ) = α + ρ e^iθ
and
ϵ_n(θ) = π/α(θ) + n.
It follows that
f_ϵ_n(θ)^∘ n(z_0) ⟶ℒ_α(θ) := ψ^o ∘ T_α(θ)∘ϕ^i(z_0),
uniformly over all θ∈ [0,2π] as n →∞. Since the curve given by θ↦ℒ_α(θ)(z_0) winds around -1, it follows that for n sufficiently large there exists an α^'_n ∈𝒩(α,ρ) for which
f_ϵ_n^'^∘ n(z_0) = w
is satisfied for
ϵ_n^' = π/α_n^' + n.
The general proof of Theorem <ref> follows the same outline.
We end by proving that the literal statement of Theorem <ref> is false in the multivariate setting.
Let Δ≥ 3 and let D_Δ be any neighborhood of the interval [0, λ_Δ). Then there exists a graph G = (V,E) of maximum degree at most Δ and λ∈ D_Δ^V such that Z_G(λ) = 0.
We will in fact use regular trees G for which all vertices on a given level have the same value λ_{v_i}. In this setting we are dealing with a non-autonomous dynamical system given by the sequence
x_k = λ_k/(1+x_k-1)^Δ-1,
with x_0=0 and where each λ_k ∈ D_Δ. Hence Theorem <ref> is implied by the following proposition.
Given Δ and D_Δ as in Theorem <ref>, there exist an integer N ∈ℕ and λ_0, … ,λ_N ∈ D_Δ which give x_N = -1.
The proof follows from the following lemma, which can be found in <cit.> and is a direct consequence of Montel's Theorem.
Let f be a rational function of degree at least 2, let x lie in the Julia set of f, and let V be a neighborhood of x. Then
⋃_n ∈ℕ f^n(V) = ℂ̂∖ℰ_f,
where ℰ_f is the exceptional set of f.
Let Δ≥ 3 and λ≠ 0. As noted before in this appendix, the exceptional set of the function f_Δ-1, λ is empty. Thus, by compactness of the Riemann sphere, it follows that for any neighborhood V of a point in the Julia set there exists an N ∈ℕ such that f_Δ-1, λ^N(V) = ℂ̂.
To prove Proposition <ref>, let us denote the set of all possible values of points x_N by A. Then A contains D_Δ, so in particular a neighborhood V of the parabolic fixed point x_Δ of the function f_Δ-1, λ_Δ.
The parabolic fixed point x_Δ is contained in the Julia set of f_Δ-1, λ_Δ, thus it follows that there exists an N ∈ℕ for which f_Δ-1, λ_Δ^N(V) = ℂ̂. But then f_Δ-1, λ^N(V) = ℂ̂ holds for λ∈ D sufficiently close to λ_Δ, and thus A = ℂ̂. But then -1 ∈ A, which completes the proof of Proposition <ref>.
Note that in this construction the λ_i's take on exactly two distinct values. On the lowest level of the tree they are very close to x_Δ, and on all other levels they are very close to λ_Δ. The thinner the set D_Δ, the more levels the tree needs to have.
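For the autonomous part of this construction one can check numerically that, with λ_k ≡ λ slightly below λ_Δ, the orbit of x_0 = 0 indeed lands near the (almost parabolic) fixed point x_Δ = 1/(Δ-2). A small sketch (Python; our own illustration, not part of the proof):

    # Sketch: x_k = lambda_k/(1 + x_{k-1})^(Delta-1) with constant lambda_k = lambda.
    Delta = 3
    lam_crit = (Delta - 1)**(Delta - 1) / (Delta - 2)**Delta   # = 4 for Delta = 3
    x, lam = 0.0, 0.999 * lam_crit
    for _ in range(200000):                 # convergence is slow near the parabolic point
        x = lam / (1 + x)**(Delta - 1)
    print(x, 1 / (Delta - 2))               # x sits close to x_Delta = 1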
§ ACKNOWLEDGEMENT
We thank Heng Guo for pointing out an inaccuracy in an earlier version of this paper.
We moreover thank Roland Roeder and Ivan Chio for helpful comments and spotting some typos. We also thank an anonymous referee for helpful comments.
30
BGKNT7 M. Bayati, D. Gamarnik, D. Katz, C. Nair and P. Tetali, Simple deterministic approximation
algorithms for counting matchings. In Proceedings of the thirty-ninth
annual ACM symposium on Theory of computing (pp. 122–127), ACM, 2007.
B14aA. Barvinok, Computing the permanent of (some) complex matrices, Foundations of Computational Mathematics (2014), 1–14.
B14bA. Barvinok, Computing the partition function for cliques in a graph, Theory of Computing, Volume 11 (2015), Article 13 pp. 339–355.
B15A. Barvinok, Approximating permanents and hafnians of positive matrices, Discrete Analysis, 2017:2, 34 pp.
B17A. Barvinok, Combinatorics and Complexity of Partition Functions, Algorithms and Combinatorics vol. 30, Springer, Cham, Switzerland, 2017.
BS14aA. Barvinok and P. Soberón, Computing the partition function for graph homomorphisms, Combinatorica 37, (2017), 633–650.
BS14bA. Barvinok and P. Soberón, Computing the partition function for graph homomorphisms with multiplicities, Journal of Combinatorial Theory, Series A, 137 (2016), 1–26.
BGGS17 I. Bezáková, A. Galanis, L.A. Goldberg, and D. Štefankovič, Inapproximability of the independent set polynomial in the complex plane, arXiv preprint arXiv:1711.00282 (2017).
BLR2010P. Bleher, M. Lyubich and R. Roeder, Lee-Yang zeros for DHL and 2D rational dynamics, I. Foliation of the physical cylinder, Journal de Mathématiques Pures et Appliquées, 107 (2017), 491–590.
BLR2011P. Bleher, M. Lyubich and R. Roeder, Lee-Yang-Fisher zeros for DHL and 2D rational dynamics, II. Global Pluripotential Interpretation. Preprint, available online at arXiv:1107.5764, (2011), 36 pages.
CS7 M. Chudnovsky and P. Seymour, The roots of the independence polynomial of a
clawfree graph, Journal of Combinatorial Theory, Series B, 97 (2007), 350–357.
D94 A. Douady, Does a Julia set depend continuously on the polynomial? Complex dynamical
systems (Cincinnati, OH, 1994), 91–138, Proc. Sympos. Appl. Math., 49, Amer. Math.
Soc., Providence, RI, 1994.
GGS16A. Galanis, L.A. Goldberg, D. Štefankovič, Inapproximability of the independent set polynomial below the Shearer threshold, In LIPIcs-Leibniz International Proceedings in Informatics, vol. 80. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2017 (and arXiv preprint arXiv:1612.05832, 2016 Dec 17).
GK12D. Gamarnik and D. Katz: Correlation decay and deterministic FPTAS for counting list-colorings of a graph, Journal of Discrete Algorithms, 12 (2012), 29–47.
L89 P. Lavaurs, Systèmes dynamiques holomorphiques : explosion des points périodiques.
PhD Thesis, Univesité Paris-Sud (1989).
LL15J. Liu, P. Lu, FPTAS for # BIS with degree bounds on one side, in: Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing 2015 Jun 14 (pp. 549–556), ACM.
LY13 P. Lu and Y. Yin, Improved FPTAS for multi-spin systems, in: Approximation, Randomization,
and Combinatorial Optimization. Algorithms and Techniques, pp 639–654, Springer Berlin Heidelberg, 2013.
PR16V. Patel, G. Regts, Deterministic polynomial-time approximation algorithms for partition functions and graph polynomials, SIAM Journal on Computing, 46 (2017), 1893–1919.
M06 J. Milnor, Dynamics in one complex variable. Third edition. Annals of Mathematics Studies, 160. Princeton University Press, Princeton, NJ, 2006
R15G. Regts, Zero-free regions of partition functions with applications to algorithms and graph limits, to appear in Combinatorica (2017), https://doi.org/10.1007/s00493-016-3506-7.
SS05A.D. Scott and A.D. Sokal, The repulsive lattice gas, the independent-set polynomial, and the Lovász local lemma, Journal of Statistical Physics, 118 (2005), 1151–1261.
Sh85 J. B. Shearer. On a problem of Spencer, Combinatorica, 5 (1985), 241–245.
S1A. Sokal, A personal list of unsolved problems concerning lattice gases and antiferromagnetic Potts models, Markov Processes And Related Fields, 7 (2001), 21–38.
SS14Allan Sly and Nike Sun. Counting in two-spin models on d-regular graphs, Annals of Probability, 42 (2014), 2383–2416.
W6D. Weitz, Counting independent sets up to the tree threshold, in Proceedings of the
thirty-eighth annual ACM symposium on Theory of computing, STOC 06, pages 140–149,
New York, NY, USA, 2006. ACM.
|
http://arxiv.org/abs/1701.07939v2 | 20170127042744 | Locally optimal configurations for the two-phase torsion problem in the ball | [
"Lorenzo Cavallina"
] | math.OC | [
"math.OC",
"49Q10"
] |
We consider the unit ball Ω⊂ (N≥2) filled with two materials with different conductivities. We perform shape derivatives up to the second order to find out precise information about locally optimal configurations with respect to the torsional rigidity functional.
In particular we analyse the role played by the configuration obtained by putting a smaller concentric ball inside Ω.
In this case the stress function admits an explicit form which is radially symmetric: this allows us to compute the sign of the second order shape derivative of the torsional rigidity functional with the aid of spherical harmonics.
Depending on the ratio of the conductivities a symmetry breaking phenomenon occurs.
2010 Mathematics Subject classification. 49Q10
Keywords and phrases: torsion problem, optimization problem, elliptic PDE, shape derivative
§ INTRODUCTION
We will start by considering the following two-phase problem.
Let Ω⊂^N (N≥ 2) be the unit open ball centered at the origin.
Fix m∈ (0, Vol(Ω)), where here we denote the N-dimensional Lebesgue measure of a set by Vol(·).
Let ω⊂⊂Ω be a sufficiently regular open set such that Vol(ω)=m.
Fix two positive constants σ_-, σ_+ and consider the following distribution of conductivities:
σ:=σ_ω:= σ_- 1_ω+σ_+ 1_Ω∖ω,
where by 1_A we denote the characteristic function of a set A (i.e. 1_A(x)=1 if x∈ A and vanishes otherwise).
Consider the following boundary value problem:
-∇·(σ_ω∇ u )= 1 in Ω,
u=0 on ∂Ω.
We recall the weak formulation of (<ref>):
∫_Ωσ_ω∇ u ·∇φ = ∫_Ωφ for all φ∈ H^1_0(Ω).
Moreover, since σ_ω is piecewise constant, we can rewrite (<ref>) as follows
-σ_ωΔ u = 1 in ω∪(Ω∖ω),
σ_-∂_n u_-= σ_+∂_n u_+ on ∂ω,
u=0 on ∂Ω,
where the following notation is used: the symbol n is reserved for the outward unit normal and ∂_n:=∂/∂n denotes the usual normal derivative. Throughout the paper we will use the + and - subscripts to denote quantities in the two different phases (under this convention we have (σ_ω)_+=σ_+ and (σ_ω)_-=σ_- and our notations are “consistent" at least in this respect). The second equality of (<ref>) has to be intended in the sense of traces. In the sequel, the notation [f]:=f_+-f_- will be used to denote the jump of a function f through the interface ∂ω (for example, following this convention, the second equality in (<ref>) reads “[σ∂_n u]=0 on ∂ω”).
We consider the following torsional rigidity functional:
E(ω):= ∫_Ω σ_ω |∇ u_ω|^2=∫_ω σ_- |∇ u_ω|^2+∫_{Ω∖ω̅} σ_+ |∇ u_ω|^2=
∫_Ω u_ω,
where u_ω is the unique (weak) solution of (<ref>).
Physically speaking, we imagine our ball Ω being filled up with two different materials and the constants _± represent how “hard" they are. The second equality of (<ref>), which can be obtained by integrating by parts after splitting the integrals in (<ref>), is usually referred to as transmission condition in the literature and can be interpreted as the continuity of the flux through the interface .
The functional E, then, represents the torsional rigidity of an infinitely long composite beam. Our aim is to study (locally) optimal shapes of ω with respect to the functional E under the fixed volume constraint.
The one-phase version of this problem was first studied by Pólya.
Let D⊂ (N≥ 2) be a bounded Lipschitz domain.
Define the following shape functional
ℰ(D):= ∫_D | u_D|^2,
where the function u_D (usually called stress function) is the unique solution to
- u = 1 in D,
u=0 on ∂ D.
The value ℰ(D) represents the torsional rigidity of an infinitely long beam whose cross section is given by D.
The following theorem (see <cit.>) tells us that beams with a spherical section are the “most resistant”.
The ball maximizes ℰ among all Lipschitz domains with fixed volume.
Inspired by the result of Theorem <ref> it is natural to expect radially symmetrical configurations to be optimizers of some kind for E (at least in the local sense).
From now on we will consider ω:= B_R (the open ball centered at the origin, whose radius, 0<R<1, is chosen to verify the volume constraint |B_R|=m) and use shape derivatives to analyze this configuration.
This technique has already been used by Conca and Mahadevan in <cit.> and Dambrine and Kateb in <cit.> for the minimization of the first Dirichlet eigenvalue in a similar two-phase setting (Ω being a ball) and it can be applied with ease to our case as well.
A direct calculation shows that the function u, solution to (<ref>) where ω=B_R, has the following expression:
u(x)= (1-R^2)/(2Nσ_+)+(R^2-|x|^2)/(2Nσ_-) for |x|∈ [0,R],
(1-|x|^2)/(2Nσ_+) for |x|∈ [R,1].
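This explicit expression is easy to sanity-check; below is a small symbolic verification (Python/SymPy; a sketch of our own, not part of the original argument; the symbol names sigma_m, sigma_p stand for σ_-, σ_+) of the PDE in each phase and of the transmission condition:

    # Sketch: verify -sigma * Laplacian(u) = 1 in each phase and the flux
    # continuity sigma_- u'_-(R) = sigma_+ u'_+(R) for the radial solution above.
    import sympy as sy

    r, R, N, s_m, s_p = sy.symbols('r R N sigma_m sigma_p', positive=True)
    u_in = (1 - R**2)/(2*N*s_p) + (R**2 - r**2)/(2*N*s_m)
    u_out = (1 - r**2)/(2*N*s_p)
    lap = lambda w: sy.diff(w, r, 2) + (N - 1)/r*sy.diff(w, r)   # radial Laplacian

    print(sy.simplify(-s_m*lap(u_in)))    # -> 1
    print(sy.simplify(-s_p*lap(u_out)))   # -> 1
    print(sy.simplify((s_m*sy.diff(u_in, r) - s_p*sy.diff(u_out, r)).subs(r, R)))  # -> 0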
In this paper we will use the following notation for Jacobian and Hessian matrix respectively.
(D v)_ij:=∂ v_i/∂ x_j, (D^2 f)_ij= ∂^2 f/∂ x_i∂ x_j,
for all smooth real valued function f and vector field v=(v_1,…, v_N) defined on Ω.
We will introduce some differential operators from tangential calculus that will be used in the sequel. For smooth f and v defined on ∂ω we set
∇_τ f := ∇f̃-(∇f̃·n)n (tangential gradient),
div_τ v := div ṽ-n·( Dṽ n) (tangential divergence),
where f̃ and ṽ are some smooth extensions on the whole Ω of f and v respectively. It is known that the differential operators defined in (<ref>) do not depend on the choice of the extensions.
Moreover we denote by D_τ v the matrix whose i-th row is given by ∇_τ v_i.
We define the (additive) mean curvature of ∂ω as H:=div_τ n (cf. <cit.>). According to this definition, the mean curvature H of ∂ B_R is given by (N-1)/R.
A first key result of this paper is the following.
For all suitable perturbations that fix the volume (at least at first order), the first shape derivative of E at B_R vanishes.
An improvement of Theorem <ref> is given by the following precise result (obtained by studying second order shape derivatives).
Let _-,_+>0 and R∈(0,1). If _->_+ then B_R is a local maximizer for the functional E under the fixed volume constraint.
On the other hand, if _-<_+ then B_R is a saddle shape for the functional E under the fixed volume constraint.
In section 2 we will give the precise definition of shape derivatives and introduce the famous Hadamard forumlas, a precious tool for computing them. In the end of section 2 a proof of Theorem <ref> will emerge as a natural consequence of our calculations.
Section 3 will be devoted to the computation of the second order shape derivative of the functional E in the case ω=.
In Section 4 we will finally calculate the sign of the second order shape derivative of E by means of the spherical harmonics. The last section contains an analysis of the different behaviour that arises when volume preserving transformations are replaced by surface area preserving ones.
§ COMPUTATION OF THE FIRST ORDER SHAPE DERIVATIVE:
PROOF OF THEOREM <REF>
We consider the following class of perturbations with support compactly contained in Ω:
𝒜:={Φ∈𝒞^∞([0,1)×ℝ^N,ℝ^N) | Φ(0,·)= Id, ∃ K⊂⊂Ω s.t. Φ(t,x)=x ∀ t∈ [0,1) , ∀ x∈ℝ^N∖ K }.
For Φ∈𝒜 we will frequently write Φ(t) to denote Φ(t,·) and, for every domain D in ℝ^N, we will denote by Φ(t)(D) the set of all Φ(t,x) for x∈ D. We will also use the notation D_t:=Φ(t)(D) when it does not create confusion.
In the sequel the following notation for the first order approximation (in the “time” variable) of Φ will be used.
Φ(t)= Id + t h +o(t) as t→0,
where h is a smooth vector field. In particular we will write h_n:=h·n (the normal component of h) and h_τ := h - h_n n on the interface.
We are ready to introduce the definition of shape derivative of a shape functional J with respect to a deformation field Φ in 𝒜 as the following derivative along the path associated to Φ.
d/dt J(D_t)|_{t=0} = lim_{t→ 0} (J(D_t)-J(D))/t.
This subject is very deep. Many different formulations of shape derivatives associated to various kinds of deformation fields have been proposed over the years. We refer to <cit.> for a detailed analysis on the equivalence between the various methods. For the study of second (or even higher) order shape derivatives and their computation we refer to <cit.>.
The structure theorem for first and second order shape derivatives (cf. <cit.>, Theorem 5.9.2, page 220 and the subsequent corollaries) yields the following expansion.
For every shape functional J, domain D and perturbation field Φ in 𝒜, under suitable smoothness assumptions the following holds.
J(D_t)=J(D)+t l_1^J (D)(h_n)+t^2/2( l_2^J(D)(h_n,h_n)+l_1^J(D)(Z) ) + o(t^2) as t→ 0,
for some linear form l_1^J(D): 𝒞^∞ (∂ D)→ℝ and bilinear form l_2^J(D): 𝒞^∞ (∂ D)×𝒞^∞ (∂ D)→ℝ to be determined eventually.
Moreover for the ease of notation we have set
Z:=( V'+DV V)·n+ ((D_τ n) h_τ)·h_τ-2∇_τ h_n·h_τ,
where V(t,Φ(t)):=∂_t Φ(t) and V':=∂_t V(t,·).
According to the expansion (<ref>), the first order shape derivative of a shape functional depends only on the normal component h_n of the first order approximation of the perturbation.
On the other hand the second order derivative contains an “acceleration” term l_1^J(D)(Z).
It is worth noticing that (see Corollary 5.9.4, page 221 of <cit.>) Z vanishes in the special case when Φ=𝕀+t h with h_τ=0 on ∂ D (this will be a key observation to compute the bilinear form l_2^J).
We will state the following lemma, which will aid us in the computations of the linear and bilinear forms l_1^J(D) and l_2^J(D) for various shape functionals (cf. <cit.>, formula (5.17), page 176 and formulas (5.110) and (5.111), page 227).
Take Φ∈𝒜 and let f=f(t,x)∈𝒞^2([0,T), L^1(ℝ^N))∩𝒞^1([0,T), W^{1,1}(ℝ^N)). For every smooth domain D in ℝ^N define
J(D_t):=∫_{D_t} f(t)
(where we omit the space variable for the sake of readability). Then the following identities hold:
l_1^J(D)(h_n)= ∫_D ∂_t f|_{t=0} + ∫_{∂ D} f(0) h_n,
l_2^J(D)(h_n,h_n)=∫_D ∂^2_{tt} f|_{t=0}+∫_{∂ D} 2∂_t f|_{t=0} h_n+(H f(0)+∂_n f(0) )h_n^2.
Since we are going to compute second order shape derivatives of a shape functional subject to a volume constraint, we will need to restric our attention to the class of perturbations in that fix the volume of ω:
ℬ(ω):= {Φ∈𝒜 | Vol(Φ(t)(ω))=Vol(ω)=m for all t∈ [0,1) }.
We will simply write ℬ in place of ℬ(B_R).
Employing the use of Lemma <ref> for the volume functional and of the expansion (<ref>), for all Φ∈𝒜 we get
Vol(ω_t)=Vol(ω)+ t ∫_{∂ω} h_n + t^2/2( ∫_{∂ω} H h_n^2 + ∫_{∂ω} Z ) + o(t^2) as t→0.
This yields the following two conditions:
∫_{∂ω} h_n =0, (1^st order volume preserving)
∫_∂ω H h_n^2+ ∫_∂ωZ=0. (2^ nd order volume preserving)
For every admissible perturbation field Φ=𝕀+t h in 𝒜, with h satisfying (<ref>), we can find some perturbation field Φ̃∈ℬ such that Φ̃=𝕀 + t h + o(t) as t→ 0.
For example, the following construction works just fine:
Φ̃(t,x)=Φ(t,x)/( η(x)( Vol(Φ(t)(ω))/Vol(ω))^{1/N}+(1-η(x)) ),
where η is a suitable smooth cutoff function compactly supported in Ω that attains the value 1 on a neighbourhood of ω.
We will now introduce the concepts of “shape" and “material" derivative of a path of real valued functions defined on Ω.
Fix an admissible perturbation field Φ∈𝒜 and let u=u(t,x) be defined on [0,1)×Ω.
Computing the partial derivative with respect to t at a fixed point x∈Ω is usually called shape derivative of u; we will write:
u'(t_0,x):= ∂ u/∂ t (t_0,x), for x∈Ω, t_0∈ [0,1).
On the other hand differentiating along the trajectories gives rise to the material derivative:
u̇(t_0,x):= ∂ v/∂ t(t_0,x), x∈Ω, t_0∈ [0,1);
where v(t,x):=u(t, Φ(t,x)).
From now on for the sake of brevity we will omit the dependency on the “time" variable unless strictly necessary and write u(x), u'(x) and u̇(x) foru(0,x), u'(0,x) and u̇(0,x).
The following relationship between shape and material derivatives hold true:
u' =u̇- ∇u ·h.
We are interested in the case where u(t,·):=u_(B_R)_t i.e. it is the solution to problem (<ref>) when ω=Φ(t)(B_R).
In this case, since by symmetry we have ∇u = (∂_n u) n, the formula above admits the following simpler form on the interface ∂ B_R:
u'=u̇-(∂_n u) h_n.
It is natural to ask whether the shape derivatives of the functional E are well defined. Actually, by a standard argument using the implicit function theorem for Banach spaces (we refer to <cit.> for the details) it can be proven that the application mapping every smooth vector field h compactly supported in Ω to E((𝕀+h)(ω)) is of class 𝒞^∞ in a neighbourhood of h = 0. This implies the shape differentiability of the functional E for any admissible deformation field Φ∈𝒜. As a byproduct we obtain the smoothness of the material derivative u̇.
As already remarked in <cit.> (Remark 2.1), in contrast to material derivatives, the shape derivative u' of the solution to our problem has a jump through the interface. This is due to the presence of the gradient term in formula (<ref>) (recall that the transmission condition provides only the continuity of the flux). On the other hand we will still be using shape derivatives because they are easier to handle in computations (and writing Hadamard formulas using them is simpler).
For any given admissible Φ∈, the corresponding u' can be characterized as the (unique) solution to the following problem in the class of functions whose restriction to both and Ω∖ is smooth:
Δ u' =0 in B_R∪ (Ω∖B̅_R),
[σ∂_n u']=0 on ∂ B_R,
[u']=-[∂_n u] h_n on ∂ B_R,
u'= 0 on ∂Ω.
Let us now prove that u' solves (<ref>).
First we take the shape derivative of both sides of the first equation in (<ref>) at points away from the interface:
Δ u' = 0 in B_R∪ (Ω∖B̅_R).
In order to prove that [σ∂_n u'] vanishes on ∂ B_R we will proceed as follows.
We perform the change of variables y:=Φ(t,x) in (<ref>) and set φ(x)=:ψ( Φ(t,x) ). Taking the derivative with respect to t, bearing in mind the first order approximation of Φ given by (<ref>) yields the following.
∫_Ω σ( - Dh∇ u + ∇u̇)·∇ψ - ∫_Ω σ∇u ·( Dh∇ψ) + ∫_Ω σ∇u ·∇ψ div h = ∫_Ω ψ div h.
Rearranging the terms yields:
∫_Ω σ∇u̇·∇ψ +∫_Ω σ( -Dh - Dh^T + (div h) I ) ∇u ·∇ψ_⊛ =
-∫_Ω h·∇ψ.
Let x and y be two sufficiently smooth vector fields in ℝ^N such that D x= ( D x)^T and D y= ( D y)^T. It is easy to check that the following identity holds:
( -Dh - Dh^T + (div h) I ) x· y = div(( x· y) h) - ∇(h· x)· y- ∇(h· y)· x.
We can apply this identity with x= ∇u and y= ∇ψ to rewrite ⊛ as follows:
( -Dh - Dh^T + (div h) I ) ∇u ·∇ψ=div(( ∇u ·∇ψ) h)_1 -∇(h·∇u)·∇ψ_2 -∇(h·∇ψ)·∇u_3.
Thus
∫_Ω σ∇u̇·∇ψ - ∫_{∂ B_R} [σ∇u ·∇ψ] h_n- ∫_Ω σ∇(h·∇u)·∇ψ+∫_{∂ B_R} [σ∂_n u] (h·∇ψ) =0,
where we have split the integrals and integrated by parts to handle the terms coming from 1 and 3.
Now, merging together the integrals on Ω in the left hand side by (<ref>) and exploiting the fact that ∇u = (∂_n u) n on ∂ B_R, the above simplifies to
∫_Ω σ∇u' ·∇ψ = 0.
Splitting the domain of integration and integrating by parts, we obtain
0=-∫_{B_R} σ_- Δ u_-' ψ + ∫_{∂ B_R} σ_- ∂_n u_-' ψ -∫_{Ω∖B̅_R} σ_+ Δ u_+' ψ -∫_{∂ B_R} σ_+ ∂_n u_+' ψ+∫_{∂Ω} σ_+ ∂_n u_+' ψ
= -∫_{∂ B_R} [σ ∂_n u'] ψ,
where in the last equality we have used (<ref>) and the fact that ψ vanishes on ∂Ω.
By the arbitrariness of ψ∈ H_0^1(Ω) we can conclude that [σ∂_n u']=0 on ∂ B_R.
The remaining conditions of problem (<ref>) are a consequence of (<ref>).
To prove uniqueness for this problem in the class of functions whose restriction to both B_R and Ω∖B̅_R is smooth, just consider the difference between two solutions of such problem and call it w. Then w solves
Δ w =0 in B_R∪ (Ω∖B̅_R),
[σ∂_n w]=0 on ∂ B_R,
[w]=0 on ∂ B_R,
w=0 on ∂Ω;
in other words, w solves
-∇·(σ∇ w)=0 in Ω,
w=0 on ∂Ω.
Since the only solution to the problem above is the constant function 0, uniqueness for Problem (<ref>) is proven.
We emphasize that formulas (<ref>) and (<ref>) are valid only for f belonging at least to the class 𝒞^2([0,T), L^1(ℝ^N))∩𝒞^1([0,T), W^{1,1}(ℝ^N)).
We would like to apply them to f(t)=u_t and f(t)=σ_t |∇ u_t|^2, where _t and u_t are the distribution of conductivities and the solution of problem (<ref>) respectively corresponding to the case ω=()_t. On the other hand, u_t is not regular enough in the entire domain Ω, despite being fairly smooth in both ω_t and Ω_t∖ω_t: therefore we need to split the integrals in order to apply (<ref>) and (<ref>) (this will give rise to interface integral terms by integration by parts).
For all Φ∈𝒜 we have
l_1^E(B_R)(h_n)=-∫_{∂ B_R} [σ |∇ u|^2] h_n.
In particular, for all Φ satisfying the first order volume preserving condition (<ref>) we get l_1^E(B_R)(h_n)=0.
We apply formula (<ref>) to
E(ω_t)=∫_Ω u_t=∫_{ω_t} u_t+∫_{Ω∖ω̅_t} u_t
to get
l_1^E(B_R)(h_n)= ∫_{B_R} u_-'+∫_{∂ B_R} u_- h_n + ∫_{Ω∖B̅_R} u_+'
-∫_{∂ B_R} u_+ h_n.
Using the jump notation we rewrite the previous expression as follows
l_1^E(B_R)(h_n)= ∫_Ω u' - ∫_{∂ B_R} [u h_n]=∫_Ω u';
notice that the surface integral in (<ref>) vanishes as both u and h_n are continuous through the interface.
Next we apply (<ref>) to E(ω_t)=∫_Ω σ_t |∇ u_t|^2.
l_1^E(B_R)(h_n)= 2∫_{B_R} σ_- ∇ u_-·∇ u_-' + ∫_{∂ B_R} σ_- |∇ u_-|^2 h_n+
2∫_{Ω∖B̅_R} σ_+ ∇ u_+·∇ u_+' +∫_{∂Ω} σ_+ |∇ u_+|^2 h_n - ∫_{∂ B_R} σ_+ |∇ u_+|^2 h_n.
Thus we get the following:
l_1^E(B_R)(h_n)= 2∫_Ω σ∇ u·∇ u' -∫_{∂ B_R} [σ |∇ u|^2] h_n.
Comparing (<ref>) (choose ψ=u) with
(<ref>) gives
l_1^E(B_R)(h_n)= -∫_{∂ B_R} [σ |∇ u|^2] h_n.
By symmetry, the term [σ |∇ u|^2] is constant on ∂ B_R and can be moved outside the integral sign. Therefore we have
l_1^E(B_R)(h_n)= 0 for all Φ satisfying (<ref>).
This holds in particular for all Φ∈ℬ.
§ COMPUTATION OF THE SECOND ORDER SHAPE DERIVATIVE
The result of the previous chapter tells us that the configuration corresponding to B_R is a critical shape for the functional E under the fixed volume constraint. In order to obtain more precise information, we will need an explicit formula for the second order shape derivative of E.
The main result of this chapter consists of the computation of the bilinear form l_2^E(B_R)(h_n,h_n).
For all Φ∈𝒜 we have
l_2^E(B_R)(h_n,h_n)=-2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n -2∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2-∫_{∂ B_R} σ_- ∂_n u_- [∂_n u] H h_n^2.
Take Φ=𝕀+t h in 𝒜 with h_τ=0 on ∂ B_R. As remarked after (<ref>), Z vanishes in this case. We get
l_2^E(B_R)(h_n,h_n)=d^2/dt^2 E(Φ(t)(B_R))|_{t=0}.
Hence, substituting the expression of the first order shape derivative obtained in Theorem <ref> yields
l_2^E(B_R)(h_n,h_n)= -d/dt( ∫_{(∂ B_R)_t} [σ_t |∇ u_t|^2] (h_n∘Φ^{-1}(t)) ) |_{t=0}.
We unfold the jump in the surface integral above and apply the divergence theorem to obtain
l_2^E(B_R)(h_n,h_n)=d/dt( ∫_{(B_R)_t} div( σ_- |∇ u_t|^2 (h∘Φ^{-1}(t)) ) - ∫_{Ω∖ (B_R)_t} div( σ_+ |∇ u_t|^2 (h∘Φ^{-1}(t)) ) )|_{t=0}.
We will treat each integral individually.
By (<ref>) we have ∂_t ( Φ^{-1})|_{t=0}=-h, therefore ∂_t(h∘Φ^{-1})|_{t=0}=-Dh h. Now set f(t):=σ_- |∇ u_t|^2.
By (<ref>) we have
d/dt(∫_{(B_R)_t} div( f(t) (h∘Φ^{-1}(t)) ))|_{t=0} = ∫_{B_R} ∂_t( div( f(t) (h∘Φ^{-1}(t)) ) )|_{t=0}_(A)
+ ∫_{∂ B_R} div(f(0) h) h_n_(B).
We have
(A)=∫_{B_R}( div(∂_t f|_{t=0} h) + div(f(0) ∂_t (h∘Φ^{-1}(t))|_{t=0}) ) =
∫_{B_R} div( ∂_t f|_{t=0} h) - ∫_{B_R} div( f(0) Dh h) =
∫_{∂ B_R} ∂_t f|_{t=0} h_n - ∫_{∂ B_R} f(0) n· (Dh h).
On the other hand
(B)= ∫_{∂ B_R} div( f(0) h ) h_n = ∫_{∂ B_R} ( ∇f(0)· h + f(0) div h) h_n.
Using the fact that h=h_n n and div h - n· (Dh n)=: div_τ h=div_τ(h_n n)=H h_n (c.f. Equation (5.22), page 366 of <cit.>) we get
(A)+(B)= ∫_{∂ B_R} f' h_n + ∫_{∂ B_R} ( ∂_n f(0)+ Hf(0))h_n^2.
Substituting f(t)=σ_-|∇ u_t|^2 yields
(A)+(B)= 2∫_{∂ B_R} σ_- ∇ u_-·∇ u_-' h_n + 2∫_{∂ B_R} σ_- ∂_n u_- ∂^2_{nn} u_- h_n^2 + ∫_{∂ B_R} σ_- |∇ u_-|^2 H h_n^2.
The calculation for the integral over Ω∖ (B_R)_t in (<ref>) is analogous.
We conclude that
l_2^E(B_R)(h_n,h_n)= -2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n -2∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2- ∫_{∂ B_R} σ_- ∂_n u_- [∂_n u] H h_n^2.
The following is a first expression we obtain for E”(B_R)[h,h].
With the same notation as before, we have:
E”(B_R)[h,h]= 2∫_{∂ B_R} [∂_n u] h_n^2 -2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n
- 2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u] n· (Dh h) + ∫_{∂ B_R} H σ_- ∂_n u_- [∂_n u] h_n^2.
In order to prove Proposition <ref> we will need the following identities concerning shape derivatives of u (whose proofs will be postponed for the sake of clarity).
∫_Ω σ|∇ u'|^2= ∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n.
∫_Ω σ∇u·∇ u”= ∫_Ω u”+ 2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n
+ ∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2 + ∫_{∂ B_R} σ_- ∂_n u_- [∂_n u] n· (Dh h) .
First of all we write the torsional rigidity functional as E(ω_t)= ∫_Ω u_t and then apply formula (<ref>) to get
E”=E”(B_R)[h,h]= ∫_Ω u” - 2∫_{∂ B_R} [u'] h_n - ∫_{∂ B_R} H [u] h_n^2- ∫_{∂ B_R} [∂_n u] h_n^2.
Since u is continuous through the interface, the term containing [u] vanishes. Moreover, rewriting [u'] using (<ref>) and recalling that the material derivative u̇ has no jump through the interface, we get
E” = ∫_Ω u”+ ∫_{∂ B_R} [∂_n u] h_n^2.
Next we apply (<ref>) to the other expression E(ω_t)=∫_Ω σ_t |∇ u_t|^2:
E”= 2∫_Ω σ|∇ u'|^2+ 2∫_Ω σ∇u · ∇u” - 4∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n
- ∫_{∂ B_R} H σ_- ∂_n u_- [∂_n u] h_n^2 - 2∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2.
As usual our aim is to make all volume integrals disappear by wise substitutions.
By Lemma <ref>,
E”= 2∫_Ω σ∇u·∇ u” - 2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n
- ∫_{∂ B_R} H σ_- ∂_n u_- [∂_n u] h_n^2 - 2∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2.
Then, substituting the term containing ∫_Ω σ∇u·∇ u” by Lemma <ref> yields
E”= 2∫_Ω u” + 2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u] n· (Dh h) + 2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n- ∫_{∂ B_R} H σ_- ∂_n u_- [∂_n u] h_n^2.
We finally prove the claim by combining (<ref>) with (<ref>).
Let us now prove Lemmas <ref> to <ref>.
Let us multiply both sides of (<ref>) by u' and integrate on B_R and Ω∖B̅_R.
We get
0= ∫_{B_R} σ_- Δ u' u' +∫_{Ω∖B̅_R} σ_+ Δ u' u'.
Integrating by parts to get the boundary terms yields
0= - ∫_Ω σ|∇ u'|^2- ∫_{∂ B_R} [σ ∂_n u' u'].
Using the fact that σ∂_n u' has no jump (Proposition <ref>), we get
0= - ∫_Ω σ|∇ u'|^2- ∫_{∂ B_R} σ_- ∂_n u'_- [u'].
By (<ref>) we have
0= - ∫_Ω σ|∇ u'|^2+ ∫_{∂ B_R} σ_- ∂_n u'_- [∂_n u] h_n.
Rearranging the terms above and using the transmission condition we conclude as follows
∫_Ω σ|∇ u'|^2 = ∫_{∂ B_R} σ_- ∂_n u'_- [∂_n u] h_n= ∫_{∂ B_R} [σ ∂_n u' ∂_n u] h_n= ∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n.
Multiplying the first equation in (<ref>) by u” and integrating by parts on B_R and Ω∖B̅_R we get
∫_Ω u” = -∫_{B_R} σ_- Δ u u” - ∫_{Ω∖B̅_R} σ_+ Δ u u”= ∫_Ω σ∇ u ·∇ u” + ∫_{∂ B_R} σ_- ∂_n u_- [u”].
We conclude by rewriting [u”] using (<ref>).
Having finally proven the two lemmas above concludes the proof of Proposition <ref>. What we got here is an expression for the shape Hessian of E at B_R made up of the sum of four surface integrals. There are a few reasons why this is still not enough for our purposes. First of all, we succeeded in removing the dependence on u” but still we are left with a term involving the normal derivative of u'. Unlike the function u, its shape derivative u' depends on the choice of h_n and fails to have an explicit formula to express it; we will have to deal with this problem later in the paper.
Furthermore, the integral depending on the Jacobian of h can be simplified as follows.
Employing the use of Eq.(5.19) of page 366 of <cit.> we get
n· (Dh n) = div h - div_τ h on ∂ B_R,
then, since we are working with a constrained problem, we can restrict ourselves to deformation fields that are tangent to the constraint, i.e. in our case we can suppose without loss of generality that div h=0 in a neighbourhood of ∂ B_R (cf. <cit.>).
Therefore we may write, since h=h_n n on ∂ B_R,
n· (Dh h) = - h_n div_τ h= - h_n div_τ(h_n n)=-H h_n^2 on ∂ B_R,
where we have used Eq.(5.22) of page 366 of <cit.>.
Thus, for any divergence free admissible Φ=𝕀+t h, the result of Proposition <ref> simplifies to
E”(B_R)[h,h]= 2∫_{∂ B_R} [∂_n u] h_n^2 -2∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n + 3∫_{∂ B_R} H σ_- ∂_n u_- [∂_n u] h_n^2.
§ CLASSIFICATION OF THE CRITICAL SHAPE :
PROOF OF THEOREM <REF>
In order to classify the critical shape of the functional E under the volume constraint we will use the expansion shown in (<ref>).
For all Φ∈ℬ and t>0 small, it reads
E( Φ(t)(B_R))= E(B_R)+ t^2/2( l_2^E(B_R)(h_n,h_n) - ∫_{∂ B_R} [σ|∇u|^2] Z )+o(t^2).
Employing the use of the second order volume preserving condition (<ref>) and the fact that, by symmetry, the quantity [σ|∇u|^2] is constant on the interface we have
- ∫_{∂ B_R} [σ|∇u|^2] Z = ∫_{∂ B_R} [σ|∇u|^2] H h_n^2.
Combining this with the result of Theorem <ref> yields
E( Φ(t)(B_R))= E(B_R)+ t^2{ -∫_{∂ B_R} σ_- ∂_n u_- [∂_n u'] h_n -∫_{∂ B_R} σ_- ∂_n u_- [∂^2_{nn} u] h_n^2 }+o(t^2).
We will denote the expression between braces in the above by Q(h_n). Since u' depends linearly on h_n (see (<ref>)), it follows immediately that Q(h_n) is a quadratic form in h_n.
Since both u and u' verify the transmission condition (see (<ref>) and (<ref>)) we have
σ_- ∂_n u_- [∂_n u']= [σ ∂_n u ∂_n u']= σ_- ∂_n u'_- [∂_n u] on ∂ B_R.
Using the explicit expression of u given in (<ref>), after some elementary calculation we write
Q(h_n)=R/N·(1/σ_- -1/σ_+)·∫_{∂ B_R}( -σ_- ∂_n u'_- h_n +1/N h_n^2 ).
In the following we will try to find an explicit expression for u'. To this end we will perform the spherical harmonic expansion of the function h_n:∂ B_R→ℝ.
We set
h_n(Rθ)=∑_{k=1}^∞∑_{i=1}^{d_k} α_{k,i} Y_{k,i}(θ) for all θ∈∂ B_1
.
The functions Y_k,i are called spherical harmonics in the literature. They form a complete orthonormal system of L^2(∂ B_1) and are defined as the solutions of the following eigenvalue problem:
-Δ_τ Y_{k,i}=λ_k Y_{k,i} on ∂ B_1,
where Δ_τ:= div_τ ∇_τ is the Laplace-Beltrami operator on the unit sphere.
We impose the following normalization condition
∫_∂ B_1 Y_k,i^2=R^1-N.
The following expressions for the eigenvalues λ_k and the corresponding multiplicities d_k are also known:
λ_k= k(k+N-2), d_k= \binom{N+k-1}{k}-\binom{N+k-3}{k-2}.
Notice that the value k=0 had to be excluded from the summation in (<ref>) because h_n verifies the first order volume preserving condition (<ref>).
Let us pick an arbitrary k∈{1,2,…} and i∈{1,…, d_k}. We will use the method of separation of variables to find the solution of problem (<ref>) in the particular case when h_n(Rθ)=Y_{k,i}(θ), for all θ∈∂ B_1.
Set r:=|x| and, for x≠0, θ:=x/|x|.
We will be searching for solutions to (<ref>) of the form u'=u'(r,θ)=f(r)g(θ).
Using the well known decomposition formula for the Laplacian into its radial and angular components, the equation Δu'=0 in B_R∪ (Ω∖B̅_R) can be rewritten as
0= Δu'(x) = f_rr(r)g(θ)+(N-1)/r f_r(r)g(θ)+1/r^2 f(r)Δ_τ g(θ) for r∈(0,R)∪ (R,1), θ∈∂ B_1.
Take g=Y_k,i.
Under this assumption, we get the following equation for f:
f_rr+N-1/rf_r-λ_k/r^2f=0 in (0,R)∪ (R,1).
It can be easily checked that the solutions to the above consist of linear combinations of r^η and r^ξ, where
η =η(k)=1/2( 2-N + √( (N-2)^2+4λ_k ))=k,
ξ =ξ(k)= 1/2( 2-N - √( (N-2)^2+4λ_k ))=2-N-k.
Since equation (<ref>) is defined for r∈ (0,R)∪ (R,1), we have that the following holds for some real constants A, B, C and D;
f(r)=
Ar^2-N-k+Br^k for r∈(0,R),
Cr^2-N-k+Dr^k for r∈(R,1).
Moreover, since 2-N-k is negative, A must vanish, otherwise a singularity would occur at r=0.
The other three constants can be obtained by the interface and boundary conditions of problem (<ref>) bearing in mind that u'(r,θ)=f(r)Y_k,i(θ)=f(r)h_n(R θ).
We get the following system:
C R^{2-N-k}+ DR^k-BR^k= -R/(Nσ_-) + R/(Nσ_+),
σ_- kB R^{k-1}= σ_+ (2-N-k) C R^{1-N-k} + σ_+ k D R^{k-1},
C+D=0.
Although this system of equations could be easily solved completely for all its indeterminates, we will just need to find the explicit value of B in order to go on with our computations.
We have
B=B_k=R^{1-k}/(Nσ_-)·(k(σ_- -σ_+)R^k-(2-N-k)(σ_- -σ_+)R^{2-N-k})/(k(σ_- -σ_+)R^k+((2-N-k)σ_+ -kσ_-)R^{2-N-k}).
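This closed form can be cross-checked symbolically; a short sketch (Python/SymPy; our own consistency check, with sigma_m, sigma_p standing for σ_-, σ_+) that solves the linear system above and compares with B_k at a sample point:

    # Sketch: solve the 3x3 linear system for (B, C, D) and compare sol[B]
    # with the quoted closed form B_k (checked numerically for robustness).
    import sympy as sy

    B, C, D = sy.symbols('B C D')
    R, N, k, s_m, s_p = sy.symbols('R N k sigma_m sigma_p', positive=True)
    eqs = [
        sy.Eq(C*R**(2-N-k) + D*R**k - B*R**k, -R/(N*s_m) + R/(N*s_p)),
        sy.Eq(s_m*k*B*R**(k-1), s_p*(2-N-k)*C*R**(1-N-k) + s_p*k*D*R**(k-1)),
        sy.Eq(C + D, 0),
    ]
    sol = sy.solve(eqs, [B, C, D], dict=True)[0]
    Bk = R**(1-k)/(N*s_m) * (k*(s_m-s_p)*R**k - (2-N-k)*(s_m-s_p)*R**(2-N-k)) / (
         k*(s_m-s_p)*R**k + ((2-N-k)*s_p - k*s_m)*R**(2-N-k))
    vals = {R: sy.Rational(1, 2), N: 3, k: 2, s_m: 2, s_p: 1}
    print(sy.simplify((sol[B] - Bk).subs(vals)))   # -> 0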
Therefore, in the particular case when h_n(R ·)=Y_{k,i},
u'_-=u'_-(r,θ)=B_k r^k Y_k,i(θ), r∈[0,R), θ∈∂ B_1,
where B_k is defined as in (<ref>).
By linearity, we can recover the expansion of u'_- in the general case (i.e. when (<ref>) holds):
u'_-(r,θ)= ∑_{k=1}^∞∑_{i=1}^{d_k} α_{k,i} B_k r^k Y_{k,i}(θ), r∈[0,R), θ∈∂ B_1, and therefore
∂_n u'_-(R,θ)=∑_{k=1}^∞∑_{i=1}^{d_k} α_{k,i} B_k k R^{k-1} Y_{k,i}(θ), θ∈∂ B_1.
We can now diagonalize the quadratic form Q, in other words we can consider only the case h_n(R ·)=Y_{k,i} for all possible pairs (k,i).
We can write Q as a function of k as follows:
Q(h_n)=Q(k)=R/N·((σ_+-σ_-)/(σ_+σ_-))·(-σ_- B_k k R^{k-1} +1/N)=
R/N^2·((σ_+-σ_-)/(σ_+σ_-))·(1- k·(k(σ_- -σ_+)R^k-(2-N-k)(σ_- -σ_+)R^{2-N-k})/(k(σ_- -σ_+)R^k+((2-N-k)σ_+-kσ_-)R^{2-N-k})).
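Before turning to the monotonicity lemma, the sign pattern of Q(k) is easy to tabulate numerically; a sketch (Python; our own illustration with sample parameters, not part of the proof):

    # Sketch: numerical evaluation of Q(k); Q(1) has the sign of 1 - rho,
    # and Q(k) tends to -infinity as k grows (Lemma and Theorem below).
    def Q(k, R, N, s_m, s_p):
        rho = s_m / s_p
        num = k*(rho - 1)*R**k - (2 - N - k)*(rho - 1)*R**(2 - N - k)
        den = k*(rho - 1)*R**k + ((2 - N - k) - k*rho)*R**(2 - N - k)
        return R/N**2 * (1 - rho)/s_m * (1 - k*num/den)

    R, N = 0.5, 3
    for s_m, s_p in [(2.0, 1.0), (1.0, 2.0)]:   # sigma_- > sigma_+ and sigma_- < sigma_+
        print([round(Q(k, R, N, s_m, s_p), 5) for k in (1, 2, 5, 10, 20)])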
The following lemma will play a central role in determining the sign of Q(k) and hence proving Theorem <ref>.
For all R∈ (0,1) and σ_-,σ_+>0, the function k↦ Q(k) defined in (<ref>) is monotone decreasing for k≥ 1.
Let us denote by ρ the ratio of the conductivities, namely ρ:= σ_-/σ_+.
We get
Q(k)=R/N^2·((1-ρ)/σ_-)·(1- k·(k(ρ-1)R^k-(2-N-k)(ρ-1)R^{2-N-k})/(k(ρ-1)R^k+((2-N-k)-kρ)R^{2-N-k})).
In order to prove that the map k↦ Q(k) is monotone decreasing it will be sufficient to prove that the real function
j(x):= x·(x-(2-N-x)R^{2-N-2x})/((1-ρ)x+ (-2+N+x+ρ x )R^{2-N-2x})
is monotone increasing in the interval (1,∞). Notice that this does not depend on the sign of ρ-1.
From now on we will adopt the following notation:
L:=R^-1>1, M:=N-2≥ 0, P=P(x):= L^2x+M.
Using the notation introduced above, j can be rewritten as follows
j(x)=(x^2+(x^2+Mx)P)/((1-ρ)x+(x+M+ρ x)P).
In order to prove the monotonicity of j, we will compute its first derivative and then study its sign.
We get
j'(x)=( MP(MP+2Px+2x)+x^2(P+1)^2+ρ x^2 P (P-1/P-4xlog(L)-2Mlog(L)) )/( (1-ρ)x+(x+M+ρ x)P )^2.
The denominator in the above is positive and we claim that also the numerator is. To this end it suffices to show that the quantity multiplied by ρ x^2P in the numerator, namely P-1/P-4xlog(L)-2Mlog(L), is positive for x∈ (1,∞) (although, we will show a stronger fact, namely that it is positive for all x>0).
d/dx(P-1/P-4xlog(L)-2Mlog(L))= 2log(L)(P+1/P-2)>0 for x>0,
where we used the fact that L>1 and that P↦ P+P^-1-2 is a non-negative function vanishing only at P=1 (which does not happen for positive x).
We now claim that
(P-1/P-4xlog(L)-2Mlog(L))|_{x=0}= L^M-1/L^M-2M log(L)≥ 0.
This can be proven by an analogous reasoning: treating M as a real variable and differentiating with respect to it, we obtain
d/dM( L^M-1/L^M-2M log(L) ) = log(L)(L^M+1/L^M-2)≥ 0
(notice that the equality holds only when M=0), moreover,
( L^M-1/L^M-2M log(L) )|_{M=0}=0,
which proves the claim.
We are now ready to prove the main result of the paper.
Let σ_-,σ_+>0 and R∈(0,1). If σ_->σ_+
then
d^2/dt^2 E(Φ(t)(B_R))|_{t=0}<0 for all Φ∈ℬ.
Hence, B_R is a local maximizer for the functional E under the fixed volume constraint.
On the other hand, if σ_-<σ_+, then there exist some Φ_1 and Φ_2 in ℬ, such that
d^2/dt^2 E(Φ_1(t)(B_R))|_{t=0}<0, d^2/dt^2 E(Φ_2(t)(B_R))|_{t=0}>0.
In other words, B_R is a saddle shape for the functional E under the fixed volume constraint.
We have
Q(1)=R/N^2·((1-ρ)/σ_-)·Nρ/(ρ(1-R^N) +N-(1-R^N)).
Since N≥ 2, R∈ (0,1), we have N-(1-R^N)>0 and therefore it is immediate to see that Q(1) and 1-ρ have the same sign.
If σ_->σ_+, then, by Lemma <ref>, we get in particular that Q(k) is negative for all values of k≥ 1. This implies that the second order shape derivative of E at B_R is negative for all Φ∈ℬ and therefore B_R is a local maximizer for the functional E under the fixed volume constraint as claimed.
On the other hand, if σ_-<σ_+, by (<ref>) we have Q(1)>0.
An elementary calculation shows that, for all σ_-,σ_+>0,
lim_{k→∞} Q(k)=-∞.
Therefore, when σ_-<σ_+, B_R is a saddle shape for the functional E under the fixed volume constraint.
§ THE SURFACE AREA PRESERVING CASE
The method employed in this paper can be applied to other constraints without much effort. For instance, it might be interesting to see what happens when volume preserving perturbations are replaced by surface area preserving ones. Is B_R a critical shape for the functional E even in the class of domains of fixed surface area? If so, of what kind?
We set Per(D):=∫_{∂ D} 1 for every smooth bounded domain D⊂ℝ^N.
The following expansion for the functional Per can be obtained just as we did for (<ref>):
Per(ω_t)=Per(ω)+ t ∫_{∂ω} H h_n + t^2/2( l_2^{Per}(ω) (h_n,h_n) + ∫_{∂ω} H Z ) + o(t^2) as t→0,
where, (cf. <cit.>, page 225)
l_2^{Per}(ω)(h_n,h_n)=∫_{∂ω} |∇_τ h_n|^2 +∫_{∂ω}( H^2 - tr((D_τ n)^T D_τ n) ) h_n^2.
We get the following first and second order surface area preserving conditions.
∫_{∂ω} H h_n =0,
∫_{∂ω} |∇_τ h_n|^2 +∫_{∂ω}( H^2 - tr((D_τ n)^T D_τ n) ) h_n^2+ ∫_{∂ω} H Z=0.
Notice that when ω=B_R, the first order surface area preserving condition is equivalent to the first order volume preserving condition (<ref>) and therefore, by Theorem <ref>, B_R is a critical shape for E under the fixed surface area constraint as well.
The study of the second order shape derivative of E under this constraint is done as follows. Employing the use of (<ref>) together with the second order surface area preserving condition in (<ref>) we get
d^2/dt^2 E(Φ(t)(B_R))|_{t=0}= l_2^E(B_R)(h_n,h_n) + ([σ|∇u|^2]/H)· l_2^{Per}(B_R)(h_n,h_n).
In other words, we managed to write the shape Hessian of E as a quadratic form in h_n. We can diagonalize it by considering h_n(R·)=Y_{k,i} for all possible pairs (k,i), where we imposed again the normalization (<ref>). Under this assumption, by (<ref>) we get
∫_{∂ B_R} |∇_τ h_n|^2= λ_k/R^2=k(k+N-2)/R^2.
We finally combine the expression for l_2^E(B_R) of Theorem <ref> with that of l_2^{Per} (<ref>) to obtain
E(Φ(t)(B_R))= E(B_R) + t^2 Q̃(k) + o(t^2) as t→ 0,
where
Q̃(k)=R/N^2·((1-ρ)/σ_-)·(3/2 - k(k+N-2)/(2(N-1)) -k·(k(ρ-1)R^k-(2-N-k)(ρ-1)R^{2-N-k})/(k(ρ-1)R^k+((2-N-k)-kρ)R^{2-N-k})).
It is immediate to check that Q̃(1)=Q(1) and therefore, Q̃(1) is negative for σ_->σ_+ and positive otherwise. On the other hand, lim_{k→∞} Q̃(k)=∞ for σ_->σ_+ and lim_{k→∞} Q̃(k)=-∞ for σ_-<σ_+. In other words, under the surface area preserving constraint B_R is always a saddle shape, independently of the relation between σ_- and σ_+.
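This sign pattern is again easy to tabulate numerically; a sketch (Python; our own illustration, reusing the notation of the previous section):

    # Sketch: the surface-area-constrained quantity Qtilde(k); Qtilde(1) = Q(1),
    # while for large k its sign is opposite to that of Q(k).
    def Qtilde(k, R, N, s_m, s_p):
        rho = s_m / s_p
        num = k*(rho - 1)*R**k - (2 - N - k)*(rho - 1)*R**(2 - N - k)
        den = k*(rho - 1)*R**k + ((2 - N - k) - k*rho)*R**(2 - N - k)
        return R/N**2 * (1 - rho)/s_m * (1.5 - k*(k + N - 2)/(2*(N - 1)) - k*num/den)

    R, N = 0.5, 3
    for s_m, s_p in [(2.0, 1.0), (1.0, 2.0)]:
        print([round(Qtilde(k, R, N, s_m, s_p), 5) for k in (1, 2, 5, 10)])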
We can give the following geometric interpretation to this unexpected result.
Since the case k=1 corresponds to deformations that coincide with translations at first order, it is natural to expect a similar behaviour under both volume and surface area preserving constraints. On the other hand, high frequency perturbations (i.e. those corresponding to a very large eigenvalue) lead to the formation of indentations in the surface of B_R. Hence, in order to prevent the surface area of B_R from expanding, its volume must inevitably shrink (this is due to the higher order terms in the expansion of Φ). This behaviour can be confirmed by looking at the second order expansion of the volume functional under the effect of a surface area preserving transformation Φ on the ball:
Vol(Φ(t)(B_R))=Vol(B_R)+t^2/2·( 1/R-k(k+N-2)/((N-1)R)) +o(t^2) as t→ 0.
We see that the second order term vanishes when k=1, while becoming arbitrarily large and negative for k≫1.
Since this shrinking effect becomes stronger the larger k is, this suggests that the behaviour of E(Φ(t)(B_R)) for large k might be approximated by that of the extreme case ω=∅. For instance, when σ_->σ_+ we have that E(B_R)<E(∅) and this is coherent with what we found, namely Q̃(k)>0 for k≫ 1.
§ ACKNOWLEDGMENTS
This paper is prepared as a partial fulfillment of the author's doctoral degree at Tohoku University.
The author would like to thank Professor Shigeru Sakaguchi (Tohoku University) for his precious help in finding interesting problems and for sharing his naturally optimistic belief that they can be solved.
Moreover we would like to thank the anonymous referee, who suggested to study the surface area preserving case and helped us find a mistake in our calculations. Their detailed analysis and comments on the previous version of this paper, contributed to make the new version shorter and more readable.
10
L. Ambrosio, G. Buttazzo, An optimal design problem with perimeter penalization, Calc. Var. Part. Diff. Eq. 1 (1993): 55-69.
conca C.Conca, R.Mahadevan, L.Sanz, Shape derivative for a two-phase eigenvalue problem and optimal configuration in a ball. In CANUM 2008, ESAIM Proceedings 27, EDP Sci., Les Ulis, France, (2009): 311-321.
bandle C. Bandle, A. Wagner, Second domain variation for problems with Robin boundary conditions. J. Optim. Theory Appl. 167 (2015), no. 2: 430-463.
sensitivity M. Dambrine, D. Kateb, On the shape sensitivity of the first Dirichlet eigenvalue for two-phase problems. Applied Mathematics and Optimization 63.1 (Feb 2011): 45-74.
stability M. Dambrine, J. Lamboley, Stability in shape optimization with second variation. arXiv:1410.2586v1 [math.OC] (9 Oct 2014).
SG M.C. Delfour, Z.-P. Zolésio, Shapes and Geometries: Analysis, Differential Calculus, and Optimization. SIAM, Philadelphia (2001).
dephfig G. De Philippis, A. Figalli, A note on the dimension of the singular set in free interface problems, Differential Integral Equations Volume 28, Number 5/6 (2015): 523-536.
espfusc L. Esposito, N. Fusco, A remark on a free interface problem with volume constraint, J. Convex Anal. 18 (2011): 417-426.
GT D. Gilbarg, N.S. Trudinger, Elliptic Partial Differential Equation of Second Order, second edition, Springer.
henrot A. Henrot, M. Pierre, Variation et optimisation de formes. Mathématiques & Applications. Springer Verlag, Berlin (2005).
new R. Hiptmair, J. Li, Shape derivatives in differential forms I: an intrinsic perspective, Ann. Mate. Pura Appl. 192(6) (2013): 1077-1098.
lar C.J. Larsen, Regularity of components in optimal design problems with perimeter penalization, Calc. Var. Part. Diff. Eq. 16 (2003): 17-29.
lin F.H. Lin, Variational problems with free interfaces, Calc. Var. Part. Diff. Eq. 1 (1993):149-168.
structure A. Novruzi, M. Pierre, Structure of shape derivatives. Journal of Evolution Equations 2 (2002): 365-382.
polya G. Pólya, Torsional rigidity, principal frequency, electrostatic capacity and symmetrization. Q. Appl. Math. 6 (1948): 267-277.
simon J. Simon, Second variations for domain optimization problems, International Series of Numerical Mathematics, vol. 91. Birkhauser, Basel (1989): 361-378.
Research Center for Pure and Applied Mathematics, Graduate
School of
Information Sciences, Tohoku University, Sendai 980-8579
, Japan.
Electronic mail address:
[email protected]
|
http://arxiv.org/abs/1701.07641v1 | 20170126103236 | Pseudo bulges in galaxy groups: the role of environment in secular evolution | [
"Preetish K. Mishra",
"Yogesh Wadadekar",
"Sudhanshu Barway"
] | astro-ph.GA | [
"astro-ph.GA"
] |
We examine the dependence of the fraction of galaxies containing pseudo bulges on environment for a flux limited sample of ∼5000 SDSS galaxies. We have separated bulges into classical and pseudo bulge categories based on their position on the Kormendy diagram. Pseudo bulges are thought to be formed by internal processes and are a result of secular evolution in galaxies. We attempt to understand the dependence of secular evolution on environment and morphology. Dividing our sample of disc+bulge galaxies based on group membership into three categories: central and satellite galaxies in groups and isolated field galaxies, we find that the pseudo bulge fraction is almost equal for satellite and field galaxies. The fraction of pseudo bulge hosts among central galaxies is almost half the fraction of pseudo bulges in satellite and field galaxies. This trend also holds when only spiral or only S0 galaxies are considered. Using the projected fifth nearest neighbour density as a measure of local environment, we look for the dependence of pseudo bulge fraction on environmental density. Satellite and field galaxies show very weak or no dependence of pseudo bulge fraction on environment. However, the fraction of pseudo bulges hosted by central galaxies decreases with increasing local environmental density. We do not find any dependence of pseudo bulge luminosity on environment. Our results suggest that the processes that differentiate the bulge types are a function of environment while the processes responsible for the formation of pseudo bulges seem to be independent of environment.
galaxies: bulges – galaxies: evolution – galaxies: formation – galaxies: groups
§ INTRODUCTION
Recent progress in our understanding of the central component of disc galaxies has expanded our knowledge of galaxy formation and evolution. We now know that the central component, i.e. the bulge, comes in two flavours. Classical bulges are thought to be formed by mergers <cit.>, or by the sinking of giant gas clumps found in high redshift discs to the central region of the galaxy followed by the formation of these bulges through violent relaxation and starbursts <cit.>. Pseudo bulges, on the other hand, are thought to be the product of internal processes and to have evolved secularly through time <cit.>. The difference in the formation scenarios of these two bulge types makes them fundamentally different from one another, which is also reflected in their distinct properties. Pseudo bulges exhibit nuclear morphological features which are characteristic of galaxy discs, such as a nuclear bar, spiral or ring <cit.>, while classical bulges are featureless. Also, pseudo bulges are composed of a younger stellar population with a flatter radial velocity dispersion profile as compared to that of classical bulges <cit.>. The two types of bulges behave differently with respect to several well known correlations between structural parameters of galaxies. For example, <cit.> has shown that the two bulge types occupy different regions of the parameter space when plotted in different projections of the fundamental plane <cit.>. A smooth transition from one type of bulge to another as seen in these correlations also points to the possible existence of composite bulges with mixed properties <cit.>. Properties of composite bulges have been explored in detail in recent works, e.g. <cit.>, but much work is needed to put strong limits on the frequency of occurrence of composite bulges in galaxies.
Most previous work on bulges <cit.> has focussed on the relation between bulges and the properties of their host galaxies, which, in turn, have been used as criteria for the classification of bulge type. As a result, there has been a significant increase in our understanding of the importance of bulges in the evolution of their host galaxies and the associated black holes, AGN etc. <cit.>. At this time there exist a number of simulations which are successful in forming classical bulges via mergers <cit.> or via clump instabilities <cit.>. Also, our understanding of secular evolution is now detailed enough to qualitatively explain commonly occurring morphological features in galaxies such as nuclear rings, nuclear bars and pseudo bulges. It has been shown in simulations that bar driven secular evolution can form inner and outer rings, pseudo bulges and structures that resemble the nuclear spirals observed in disc galaxies <cit.>. On the other hand, studies attempting to quantitatively understand the process of secular evolution in diverse environments are not very common in the literature.
A study of the Virgo cluster by <cit.> shows that 2/3 of the stellar mass is in elliptical galaxies alone. Information on the other extreme of the environmental density regime comes from <cit.>. By studying galaxies within the local 11 Mpc volume in low density regions, they report that 1/4 of the stellar mass is contained within ellipticals and classical bulges, while the remaining 3/4 is distributed in pseudo bulges, discs and bulge-less galaxies. These two observations have led the authors to conclude that the process driving the distribution of bulge type appears to be a strong function of environment. There are only a few works which explore the effect of environment specifically on bulges. <cit.> have studied the colour of bulges and discs in clusters and found that the bulge colour does not depend on environment. <cit.> distinguished between galaxies having a quiescent bulge and a star forming bulge based on the strength of the 4000 Å break (the D_n(4000) index). They have associated quiescent bulges with classical bulges and star forming bulges with pseudo bulges. In their work, the classical bulge profile is modelled as a de Vaucouleurs profile with bulge Sérsic index n_b = 4 and pseudo bulges are modelled with an exponential profile with n_b = 1. Using the projected fifth nearest neighbour density as a measure of local environment, they show a strong increase in the fraction of galaxies hosting a classical bulge with increasing local density. On the other hand, the fraction of galaxies hosting a pseudo bulge decreases slowly as one goes from a lower to a higher local density environment. <cit.> have focussed on studying the dependence of the galaxy properties of classical bulges on the environment. There is a need to explore the properties of pseudo bulges over a wide range of environmental density in order to expand our knowledge of secular evolution and make it more quantitative. The dependence of the distribution of pseudo bulges and of their intrinsic properties on environment will help us understand how environment affects the processes that govern the distribution and formation of pseudo bulges.
In this work, we have explored the dependence of bulge type on
environment as well as on galaxy morphology. Our sample spans a wide
range of environmental density and is composed mainly of
isolated/field galaxies and galaxy groups, with available
morphological information for each object. We have identified and
classified our sample of S0 and spiral galaxies into classical and
pseudo bulge host galaxies. We further divide our sample by
galaxy group association into three categories: 1. field galaxies not
belonging to any group, 2. galaxies which reside in the centre of galaxy
groups and 3. their satellite galaxies. We investigate the dependence of
bulge type on environment in these three categories. The paper is
organised as follows. Section 2 describes the data and sample
selection. Section 3 describes our results and Section 4 summarizes
the findings and the implications. Throughout this work, we have used
the WMAP9 cosmological parameters : H_0 = 69.3 km
s^-1Mpc^-1, Ω_m = 0.287 and Ω_Λ=
0.713. Unless otherwise noted, photometric measurements used are in the
SDSS r band. All logarithms are to base 10.
§ DATA AND SAMPLE SELECTION
To aim for a sample suitable for studying bulges in galaxy groups, we
started with data from <cit.>, which provides a catalogue of
nearly 700,000 spectroscopically selected galaxies drawn from the SDSS
DR7 in the SDSS g, r and i bands <cit.>. This catalogue is a
flux limited sample, with the r-band Petrosian magnitudes of all galaxies
in the range 14<r<17.7, and provides 2D, PSF corrected
de Vaucouleurs, Sérsic, de Vaucouleurs+Exponential and
Sérsic+Exponential fits of the galaxies, with flags indicating the goodness of
the fit, obtained using the PyMorph pipeline <cit.>.
We cross matched the <cit.> catalogue with the data
provided in <cit.> which is a catalogue of detailed visual
classification for nearly 14,000 spectroscopically targeted galaxies
in the SDSS DR4. The <cit.> catalogue is a flux limited sample
with an extinction corrected limit of g<16 mag in SDSS g band,
spanning the redshift range 0.01 < z < 0.1. In addition to
morphological T-type classification it also provides measurements of
stellar mass of each object taken from <cit.> which used stellar absorption-line indices and broadband photometry from SDSS to estimate the stellar mass of galaxies. The <cit.> catalogue also contains information on
average environmental density from <cit.> and information
on galaxy groups from <cit.> such as group mass, group
luminosity, group halo mass, group richness etc. We have made use of
all relevant information on individual galaxies and groups as provided
in <cit.> in our work. Our cross match of these two
catalogues resulted in a sample of 8929 galaxies, on which we have
further imposed a requirement of a “good fit” as given in <cit.>,
which is described below.
To obtain our final sample of galaxies we have first removed galaxies with bad
fits and problematic two component fits, as indicated by flags in
<cit.>. The three categories of good fits in this catalogue
are as follows.
(i) Good two component fits: galaxies where both the bulge and disc components have reliable estimates.
(ii) Good bulge fits: galaxies where the disc estimates are unreliable while the bulge measurements are trustworthy.
(iii) Good disc fits: galaxies where the bulge estimates are unreliable but the disc measurements are trustworthy.
Since the focus of this study is on bulges, we have retained galaxies in the first two categories
and have discarded those in the third one. An additional constraint comes from the
fact that two component fits with bulge Sérsic index n ≥ 8 can be used
for total magnitude and radius measurements but have unreliable
subcomponents. We have taken a conservative approach and have retained
only the galaxies having bulge Sérsic index n < 8.
After applying all the selection criteria mentioned above, we are left with
4991 galaxies, 2026 of which are spirals (1 ≤ T ≤ 9), 1732 are
S0s (-3 ≤ T ≤ 0) and 1233 are ellipticals (T ≤ -4). From
here onwards, we will collectively refer to the population of
spirals+S0s as disc galaxies. Out of the 3758 disc galaxies in our sample,
we have information about the group properties of 3641 galaxies from <cit.>.
§ RESULTS
§.§ Identifying pseudo bulges
A common practice in studies of bulges is to classify them on the basis of the Sérsic index. In this method, pseudo bulges are defined as those having Sérsic index below a certain threshold. Usually this threshold value is taken to be n = 2 <cit.>. However, measurements of the Sérsic index from ground based telescopes are reported to have errors as large as 0.5 <cit.>. Also, the Sérsic index n and the effective radius (r_e) have degenerate errors, which leads to additional error in n due to the uncertainty in the measurement of r_e. Hence, using a specific Sérsic index threshold for bulge classification may lead to ambiguity. Therefore, we have refrained from using the Sérsic index to classify bulges, in favour of a better physically motivated classification criterion due to <cit.>, which has been used in recent works, e.g. <cit.>.
This criterion involves the classification of bulge types based on their position on the Kormendy diagram <cit.>. The Kormendy diagram is a plot of the average surface brightness of the bulge within its effective radius (μ_b(< r_e)) against the logarithm of the effective radius
r_e. Elliptical galaxies are known to obey a tight linear relation on this diagram. Classical bulges are thought to be structurally similar to ellipticals and therefore obey a similar relation, while pseudo bulges, being structurally different, lie away from it. Any bulge that deviates by more than three times the r.m.s. scatter from the best fit relation for ellipticals is classified as a
pseudo bulge by this criterion <cit.>.
The Kormendy diagram for our sample is shown in Figure <ref>. The best fit line was obtained by plotting elliptical galaxies using r band data. The equation of the best fit line is
⟨μ_b(<r_e)⟩ = (2.768 ± 0.0306) log r_e + (18.172 ± 0.0255)
The rms scatter in ⟨μ_b(<r_e)⟩ around the best fit line is 0.414. The dashed lines enclose the region within 3 times the rms scatter of the fit. All galaxies lying outside the region enclosed by the dashed lines are taken to host pseudo bulges.
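For illustration, this classification rule can be implemented in a few lines. The following sketch is hypothetical code (not from the paper); the slope, intercept and scatter are the r band values quoted above, and the input arrays are assumed to hold the measured bulge parameters.

```python
import numpy as np

# Best-fit Kormendy relation for ellipticals (r band) and its rms scatter,
# as quoted above.
SLOPE, INTERCEPT, RMS = 2.768, 18.172, 0.414

def classify_bulge(log_re, mu_e):
    """Classify bulges from their position on the Kormendy diagram.

    log_re : log10 of the bulge effective radius r_e
    mu_e   : mean surface brightness within r_e, <mu_b(<r_e)>
    Returns 'pseudo' where the bulge deviates by more than 3 times the
    rms scatter from the elliptical relation, 'classical' otherwise.
    """
    mu_fit = SLOPE * np.asarray(log_re) + INTERCEPT
    is_pseudo = np.abs(np.asarray(mu_e) - mu_fit) > 3.0 * RMS
    return np.where(is_pseudo, "pseudo", "classical")
```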
We have separated the disc galaxies having S0 and spiral morphology and have plotted them on the Kormendy diagram, as shown in Figures <ref> and <ref> respectively. It is clear from these figures that the number of pseudo bulge host galaxies is higher in spirals than in S0 galaxies. We have found that out of 2026 spiral galaxies, 338 (16.68 percent of the spiral population) host a pseudo bulge. On the other hand, only 93 (5.37 percent of S0s) out of a total of 1732 S0 galaxies are pseudo bulge hosts. This result is summarised in Table <ref>.
To test the robustness of our classification criterion, we have compared the classical and pseudo bulges in our sample with respect to properties in which the two bulge types are expected to show different behaviour. Secularly evolved pseudo bulges, due to their disc-like stellar kinematics, are dynamically colder systems compared to the merger generated classical bulges. This property is reflected in the different values of the velocity dispersion of the two bulge types. For example, <cit.> have shown that on average pseudo bulges have a lower central velocity dispersion than classical bulges. Classifying the bulge type based on the Sérsic index, they found that pseudo bulges have an average velocity dispersion of ∼90 km/s whereas this value is ∼160 km/s for classical bulges. We have obtained the values of the central velocity dispersion for the galaxies in our sample from <cit.>. After applying an aperture correction, we have plotted the distribution of central velocity dispersion for classical and pseudo bulge host galaxies in our sample, which is shown in Figure <ref>. One can see a bimodal distribution of central velocity dispersion with respect to the bulge type. Pseudo bulges are found to have a lower central velocity dispersion, with their distribution peaking around ∼60 km/s, in contrast to the distribution for classical bulges, which peaks around ∼160 km/s, in agreement with the expected trends.
Previous studies <cit.> have also indicated that pseudo bulges exhibit star forming activity, as opposed to classical bulges which are mainly composed of old stars. To separate old and young stellar populations in galaxies, we have used the strength of the 4000 Å spectral break, which arises due to the accumulation of absorption lines of mainly metals in the atmospheres of old, low mass stars, and due to a lack of hot, blue stars in galaxies. The strength of this break is quantified by the D_n(4000) index. Several definitions of this index are available in the literature; in this work, we have used the definition provided in <cit.>. A low value of the D_n(4000) index denotes a young stellar population. We have taken the D_n(4000) index measurements from the SDSS DR7 MPA/JHU catalogue[<http://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/>] and have plotted their distribution for the classical and pseudo bulges in our sample, as shown in Figure <ref>.
As expected, our pseudo bulges have lower values of the D_n(4000) index, with a distribution peaking around ∼1.2, as compared to the classical bulges peaking around ∼1.8. A similar bimodal distribution of the D_n(4000) index with respect to bulge type has also been found in works <cit.> that employ only a bulge Sérsic index cutoff of n=2 to classify bulges. <cit.> have compared this bimodal distribution of D_n(4000) when bulges are classified using the Kormendy relation only and when the classification is based on a threshold bulge Sérsic index. They find that the peaks of the D_n(4000) distributions for classical and pseudo bulges are closer when bulges are identified using the Sérsic index than when the Kormendy relation is used for classification. Our result for the distribution of D_n(4000) is consistent with the trend reported in <cit.>, which uses the Kormendy diagram for bulge classification. This gives support to the bulge classification criterion that we have used.
At this point, we would like to mention that our sample is a flux limited sample with an extinction corrected flux limit of g<16 mag in the SDSS g band, which makes it biased towards bright and massive galaxies. In Figure <ref> we have plotted the stellar mass distribution of the disc galaxies in our sample. It is clear that our sample is biased towards massive galaxies.
As will be seen in a later part of this paper, pseudo bulges are more common in galaxies having low stellar mass. Hence, the results presented here on the fraction of pseudo bulges are applicable only to bright and massive galaxies.
§.§ Bulge fraction as a function of environment
<cit.> provides the information on group membership and the number of galaxies in a particular group, or group richness, taken from <cit.>. Depending on the group ID and group richness (Ngr), a flag has been provided which indicates whether a galaxy is the most massive member of the group or a satellite galaxy. To study the effect of environment on the frequency of occurrence of each bulge type, we have divided our sample into three categories. We use the flags specified in the <cit.> catalogue to classify the galaxies as follows (a minimal sketch of this assignment is given after the list):
(i) Central galaxies: are the galaxies which are most massive in a particular group and have group richness Ngr > 1.
(ii) Satellite galaxies: are galaxies other than the central galaxy in groups with richness Ngr > 1.
(iii) Field galaxies: are galaxies having group richness Ngr = 1.
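A minimal sketch of this assignment (our own illustration; the variable names are invented) is:

```python
def group_category(ngr, is_most_massive):
    """Assign the central/satellite/field category from the group
    richness Ngr and the most-massive-member flag of the catalogue."""
    if ngr == 1:
        return "field"                        # isolated galaxy
    return "central" if is_most_massive else "satellite"
```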
One should keep in mind the fact that a group as defined by <cit.> refers to a collection of galaxies which reside in a common dark matter halo. Hence, according to this definition, clusters of galaxies having hundreds of members, or just two neighbouring galaxies, are labelled as groups as long as they reside in the same dark matter halo. Table <ref> provides the number distribution of the ellipticals in our sample which are central, satellite or field galaxies. Tables <ref>, <ref> and <ref> summarise the statistics of the total number of galaxies classified as central, satellite and field galaxies among disc galaxies as a whole, as well as among spiral and S0 galaxies separately. Comparing Tables <ref> and <ref> we see that the pseudo bulge fraction (defined as the number of pseudo bulge hosts divided by the total number of galaxies) in spirals is more than 3 times the fraction of pseudo bulge hosts in S0 galaxies. This applies to all three categories, i.e. central, satellite and field galaxies, of the spiral and S0 morphology classes. It is also interesting to note that for a given morphology, the pseudo bulge fraction is similar for satellites and fields but becomes less than half of this value in central galaxies.
<cit.> have reported a strong dependence of bulge type on host galaxy mass in low density environments. Since our sample spans a wide density range up to cluster environments, we checked the same dependence by plotting the fraction of each bulge type across different mass bins of the host galaxies. The masses of all galaxies in our sample are taken from <cit.>, and the resulting plot is shown in Figure <ref>, which shows that the pseudo bulge fraction decreases with increasing host galaxy mass while the trend is reversed for classical bulge hosts. The errors are taken as Poisson errors on the total numbers of pseudo and classical bulges and have been propagated to determine the error bars on the pseudo and classical bulge fractions. It is also evident from Figure <ref> that pseudo bulge hosts dominate when the host galaxy mass is less than 10^9.5M_⊙ while classical bulges are more common above this limit. Revisiting Tables <ref>, <ref> and <ref> with this information on the stellar mass dependence of pseudo bulge hosts, we note that the median stellar mass of central, satellite and field galaxies is similar. So the fact that the pseudo bulge fraction in central galaxies is half of the pseudo bulge fraction found in satellite and field galaxies seems to be an environmental effect.
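For reference, the error propagation assumed here is the standard one for a ratio of Poisson counts; a small sketch (our own, with invented names) follows.

```python
import numpy as np

def fraction_with_error(n_type, n_total):
    """Bulge-type fraction in a bin, f = n_type/n_total, with Poisson
    errors sqrt(n) on the counts propagated to the fraction."""
    f = n_type / n_total
    sigma_f = f * np.sqrt(1.0 / n_type + 1.0 / n_total)
    return f, sigma_f
```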
We now explore the dependence of the bulge fractions in central, satellite and field galaxies separately on the average environmental density parameter Σ, which is available in the <cit.> catalogue and is taken from
<cit.>. It is defined as Σ=N/(π d_N^2), where d_N is the projected comoving distance to the Nth nearest neighbour. The <cit.> catalogue gives a best estimate density
obtained by calculating the average density for N = 4 and N = 5, using only the spectroscopically confirmed members of the entire sample. For each category (central, satellite, field) of galaxies, we have divided Σ into different bins, and in each of these bins we have
calculated the fraction of galaxies hosting classical and pseudo bulges. Figures <ref>, <ref> and <ref> show the dependence of the
fraction of each bulge type on the average environmental density Σ for central, satellite and field galaxies respectively. A quick examination of these three plots tells us that, within the error bars, the pseudo bulge fractions of satellite and field galaxies show very minor variation with respect to each other, but for central galaxies we find a significant trend of the pseudo bulge fraction with average environmental density. At the lowest environmental densities the pseudo bulge fraction in central galaxies is around 21%,
which steadily decreases to about 5% and remains constant within the error bars for log Σ ≥ 0.0. A point to note here is that the total number of pseudo bulges in S0 galaxies (see Table <ref>) is significantly smaller than the total number of pseudo bulges in spiral galaxies (see Table <ref>) in all three classes, viz. central, satellite and field galaxies. As a result, the number of pseudo bulge hosting S0 galaxies is also significantly smaller than the number of pseudo bulge hosting spirals in each bin of environmental density. Hence these trends of the pseudo bulge fraction with environmental density are driven by the more numerous spiral galaxies.
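For completeness, the estimator Σ = N/(π d_N²) can be computed directly from projected comoving positions; the following is a hypothetical brute-force sketch, assuming positions in Mpc.

```python
import numpy as np

def projected_density(pos, n=5):
    """Local density Sigma = n / (pi * d_n^2), where d_n is the projected
    comoving distance (Mpc) to the n-th nearest neighbour."""
    pos = np.asarray(pos)
    sigma = np.empty(len(pos))
    for i, p in enumerate(pos):
        d = np.sort(np.hypot(pos[:, 0] - p[0], pos[:, 1] - p[1]))
        sigma[i] = n / (np.pi * d[n] ** 2)   # d[0] = 0 is the galaxy itself
    return sigma
```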
We need to check whether the dependence of pseudo bulge hosting central galaxies on environment is a direct effect or an indirect effect induced by their common dependence on stellar mass. To do this, we have plotted a 2D histogram of galaxy stellar mass vs. average
environmental density for all central disc galaxies, to check for the existence of any correlation. The resultant plot is shown in Figure <ref>, and we see no obvious dependence of stellar mass on the average environmental density Σ. We checked the possibility that the high fraction of pseudo bulges seen in the environmental density range -1.5 < log Σ < 0.5 in central galaxies is due to galaxies having low stellar mass dominating in
that density range. We have seen in Figure <ref> that pseudo bulge hosts dominate below a stellar mass of 10^9.5M_⊙. To find the stellar mass distribution of pseudo bulge host central galaxies across environment, we have plotted a 2D histogram of stellar mass vs. average environmental density (Σ) for only those central galaxies which host a pseudo
bulge. This plot is shown in Figure <ref>, and it is clear that the range -1.5 < log Σ < 0.5 is dominated by pseudo bulge hosts with mass > 10^9.5M_⊙, ruling out the possibility that, for the central galaxies, the high fraction of pseudo bulges seen in this density range is only a result of the stellar mass dependence of the pseudo bulge fraction. This result further supports the idea that the pseudo bulge fraction is dependent on environment.
To understand the influence of environment on the intrinsic properties of bulges rather than of their host galaxies, we have estimated the r band absolute magnitudes of the bulges in our sample. The dependence of bulge type on bulge absolute magnitude is shown in Figure <ref>, which shows that the pseudo bulge fraction increases in the low luminosity regime while the trend is reversed for classical bulge hosts. Figures <ref> and <ref> are 2D histogram plots of bulge absolute magnitude vs. average environmental density for the classical and pseudo bulges in our sample respectively. The large scatter in these two plots readily tells us that
there is no strong dependence of bulge luminosity on environment.
§ SUMMARY & DISCUSSION
We have presented a systematic study of the bulges of disc galaxies in the galaxy group environment. The groups are defined as galaxies that share the same dark matter halo, as identified by <cit.>. We use the position of a bulge on the Kormendy diagram as the defining criterion for the determination of bulge type. We find that 11.47% of the disc galaxies in our sample host pseudo bulges. Dividing this sample by morphology, we find that 16.68% of the spiral galaxies in our sample are pseudo bulge hosts while this percentage is 5.37% for S0 galaxies. Further division of the galaxies into the three group based categories of central, satellite and field galaxies tells us that the pseudo bulge fraction is similar for satellite and field galaxies. On the other hand, we find that the pseudo bulge fraction in central galaxies is less than half of the fraction in satellite and field galaxies, irrespective of morphology. We find a significant dependence of the pseudo bulge fraction hosted by central galaxies on average environmental density, such that pseudo bulges are more likely to be found in low density environments. We hardly see any such dependence of the pseudo bulge fraction on environment for satellite galaxies and those which are in the field. Since galaxies having mass < 10^9.5M_⊙ are dominated by pseudo bulge hosts, we have also checked whether the trend of pseudo bulges in central galaxies with environment is an indirect effect of the stellar mass dependence of bulge type. We find that the inferred dependence of pseudo bulge hosting central galaxies on environment is likely to be a direct effect of environment.
If pseudo bulges are formed through internal processes and evolve secularly, then it seems environment plays some role in affecting the process which determines the distribution of bulge type. Our finding of a higher number of pseudo bulges in less dense environments is consistent with earlier studies, e.g. <cit.>, where pseudo bulges and discs were found to be more dominant in underdense void-like environments as compared to galaxy clusters. We have also found that the bulge absolute magnitude, which is an intrinsic property of the bulge, does not depend on environmental density. It is thus likely that the processes which form and grow pseudo bulges are independent of environment and are governed by internal processes only.
However, while interpreting our results one should keep in mind the fact that bulge type depends on a number of parameters such as galaxy stellar mass, SFR, sSFR, morphology etc. Hence, to correctly determine the effect of environment on the distribution of bulge types,
one needs to separate out any effect from these parameters on which bulge type depends, as they might be contributing indirectly, to some extent, to the observed trends. Stellar mass and a number of parameters such as sSFR, SFR etc. are well correlated. Therefore, checking for any indirect effect of stellar mass that may show up in the trend of pseudo bulges with environmental density also takes care of other quantities which are well correlated with the stellar mass. Indirect effects of other parameters, such as morphology, on the environmental dependence of pseudo bulges have not been specifically checked in this work.
Finally, we would like to remind the reader about the sample bias. From Figure <ref> we know that pseudo bulges are more commonly found in galaxies having stellar mass < 10^9.5M_⊙, but as shown in Figure <ref> the number of such galaxies in our sample is very small. As a result, we have fewer pseudo bulges in our sample than one would expect in this mass range. Therefore, the results presented in this work on the pseudo bulge fraction should be understood keeping the sample bias in mind.
In future, we would like to explore the properties of pseudo bulges and their dependence on environment as well as on morphology in greater detail. Using Galaxy Zoo data <cit.>, which provides morphological information on nearly 900,000 galaxies, in combination with information on group properties from <cit.>, we will be able to explore many aspects of environmental secular evolution. Our result shows that the pseudo bulge fraction seems to depend on the distance of a galaxy from the group centre, with a lower fraction of pseudo bulges found in central galaxies as compared to satellites. With a large sample of galaxy groups, it will be possible to explore the dependence of the pseudo bulge fraction on the group centric distance of member galaxies. Dividing the group centric distance into different bins, one can also check the dependence of the pseudo bulge fraction on environment for galaxies which are nearer to and farther away from the galaxy group centre. Morphological information on a large number of galaxies will also help us to separate out the dependence of the pseudo bulge fraction on galaxy morphology, which might be contributing to some extent to its environmental dependence. We believe that work in these directions will help to understand secular evolution in a quantitative way.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for insightful comments that have improved both the content and presentation of this paper. PKM thanks Peter Kamphuis for useful discussions and Omkar Bait for help with Python programming. YW thanks IUCAA for hosting him on his sabbatical when this work was initiated. SB would like to acknowledge support from the National Research Foundation research grant (PID-93727). SB and YW acknowledge support from a bilateral grant under the Indo-South Africa Science and Technology Cooperation (PID-102296) funded by the Departments of Science and Technology (DST) of the Indian and South African Governments.
|
http://arxiv.org/abs/1701.07802v3 | 20170126181252 | Bounds for Substituting Algebraic Functions into D-finite Functions | Manuel Kauers, Gleb Pogudin | cs.SC | cs.SC |
Bounds for Substituting Algebraic Functions into D-finite Functions
Manuel KauersSupported by the Austrian Science Fund (FWF): Y464, F5004.
Institute for Algebra / Johannes Kepler University
4040 Linz, Austria
[email protected]
Gleb PogudinSupported by the Austrian Science Fund (FWF): Y464.
Institute for Algebra / Johannes Kepler University
4040 Linz, Austria
[email protected]
======================================================================================================================================================================================================================================================================================================================================================
It is well known that the composition of a D-finite function with an algebraic function is again D-finite.
We give the first estimates for the orders and the degrees of annihilating operators for the compositions.
We find that the analysis of removable singularities leads to an order-degree curve which is much more
accurate than the order-degree curve obtained from the usual linear algebra reasoning.
§ INTRODUCTION
A function f is called D-finite if it satisfies an ordinary linear differential equation
with polynomial coefficients,
p_0(x)f(x)+p_1(x)f'(x)+⋯+p_r(x)f^(r)(x)=0.
A function g is called algebraic if it satisfies a polynomial equation with polynomial
coefficients,
p_0(x)+p_1(x)g(x) + ⋯ + p_r(x)g(x)^r=0.
It is well known <cit.> that when f is D-finite and g is algebraic,
the composition f∘ g is again D-finite. For the special case f=id this reduces to Abel's theorem, which says
that every algebraic function is D-finite. This particular case was investigated closely
in <cit.>, where a collection of bounds was given for the orders and degrees of the
differential equations satisfied by a given algebraic function. It was also pointed out in <cit.>
that differential equations of higher order may have significantly lower degrees,
an observation that gave rise to a more efficient algorithm for transforming an algebraic equation
into a differential equation. Their observation has also motivated the study of order-degree
curves: for a fixed D-finite function f, these curves describe the boundary of the region of all pairs (r,d)∈ N^2
such that f satisfies a differential equation of order r and degree d.
We have fixed some randomly chosen operator
L∈ C[x][∂] of order r_L=3 and degree d_L=4
and a random polynomial P∈ C[x][y] of y-degree r_P=3 and x-degree d_P=4.
For some prescribed orders r, we computed the smallest degrees d such that there is an operator M
of order r and degree d that annihilates f∘ g for all solutions f of L
and all solutions g of P.
The points (r,d) are shown in the figure on the right.

[Figure: minimal degree d versus order r for this example. The degree falls steeply from d=316 at r=10 to d=126 at r=20, then levels off slowly, reaching d=90 at r=161; the points trace a hyperbola-like curve.]
Experiments suggested that order-degree curves are often just simple hyperbolas. A priori knowledge of these
hyperbolas can be used to design efficient algorithms. For the case of creative
telescoping of hyperexponential functions and hypergeometric terms, as well as for
simple D-finite closure properties (addition, multiplication, Ore-action), bounds for order-degree curves
have been derived <cit.>. However, it turned out that these bounds are often not
tight.
A new approach to order-degree curves has been suggested in <cit.>,
where a connection was established between order-degree curves and apparent
singularities. Using the main result of this paper, very accurate order-degree
curves for a function f can be written down in terms of the number and the
cost of the apparent singularities of the minimal order annihilating operator
for f. However, when the task is to compute an annihilating operator
from some other representation, e.g., a definite integral, then the information
about the apparent singularities of the minimal order operator is only a
posteriori knowledge. Therefore, in order to design efficient algorithms using the result of <cit.>,
we need to predict the singularity structure of the output operator in terms of
the input data. This is the program for the present paper.
First (Section <ref>), we derive an order-degree bound for
D-finite substitution using the classical approach of considering a suitable
ansatz over the constant field, comparing coefficients, and balancing variables
and equations in the resulting linear system. This leads to an order-degree
curve which is not tight. Then (Section <ref>) we estimate the
order and degree of the minimal order annihilating operator for the composition
by generalizing the corresponding result of <cit.> from f=id
to arbitrary D-finite f. The derivation of the bound is a bit more tricky in
this more general situation, but once it is available, most of the subsequent
algorithmic considerations of <cit.> generalize
straightforwardly. Finally (Section <ref>) we turn to the analysis of the
singularity structure, which indeed leads to much more accurate results. The
derivation is also much more straightforward, except for the required
justification of the desingularization cost. In practice, it is almost always
equal to one, and although this is the value to be expected for generic input,
it is surprisingly cumbersome to give a rigorous proof for this expectation.
Throughout the paper, we use the following conventions:
* C is a field of characteristic zero, C[x] is the usual commutative ring of
univariate polynomials over C. We write C[x][y] or C[x,y] for the commutative
ring of bivariate polynomials and C[x][∂] for the non-commutative ring of linear
differential operators with polynomial coefficients. In this latter ring, the multiplication
is governed by the commutation rule ∂ x=x∂ +1.
* L∈ C[x][∂] is an operator of order r_L:=ord_∂(L)
with polynomial coefficients of degree at most d_L:=deg_x(L).
* P∈ C[x,y] is a polynomial of degrees r_P:=deg_y(P) and d_P:=deg_x(P).
It is assumed that P is square-free as an element of C(x)[y] and that
it has no divisors in C̅[y], where C̅ is the algebraic closure of C.
* M∈ C[x][∂ ] is an operator such that for every solution f of L
and every solution g of P, the composition f∘ g is a solution of M.
The expression f ∘ g can be understood either as a composition of analytic functions
in the case C = ℂ, or in the following sense.
We define M such that for every α∈ C, for every solution g ∈ C[[x - α]] of P
and every solution f ∈ C[[x - g(α)]] of L, M annihilates f∘ g, which is a well-defined
element of C[[x - α]]. In the case C = ℂ these two definitions coincide.
§ ORDER-DEGREE CURVE BY LINEAR ALGEBRA
Let g be a solution of P, i.e., suppose that P(x,g(x))=0, and let f be a
solution of L, i.e., suppose that L(f)=0. Expressions involving g and f can
be manipulated according to the following three well-known observations:
* (Reduction by P) For each polynomial Q∈ C[x,y] with deg_y(Q)≥ r_P
there exists a polynomial Q̃∈ C[x,y] with deg_y(Q̃)≤deg_y(Q)-1 and deg_x(Q̃)≤deg_x(Q)+d_P
such that
Q(x,g) = 1/lc_y(P) · Q̃(x,g).
The polynomial Q̃ is the result of the first step of computing the pseudoremainder of Q by P w.r.t. y (see the sketch after this list).
* (Reduction by L) There exist polynomials v,q_j,k∈ C[x] of degree at most d_L d_P such that
f^(r_L)∘ g = 1/v ∑_j=0^r_P-1 ∑_k=0^r_L-1 q_j,k g^j · (f^(k)∘ g).
To see this, write L=∑_k=0^r_L l_k ∂^k for some polynomials l_k∈ C[x] of degree at most d_L. Then we have
f^(r_L)∘ g = -1/(l_r_L∘ g) ∑_k=0^r_L-1 (l_k∘ g) · (f^(k)∘ g).
By the assumptions on P, the denominator l_r_L∘ g cannot be zero. In other words, gcd(P(x,y), l_r_L(y))=1 in C(x)[y].
For each k=0,…,r_L-1, consider an ansatz AP+Bl_r_L=l_k for polynomials A,B∈ C(x)[y] of
degrees at most d_L-1 and r_P-1, respectively, and compare coefficients with respect to y.
This gives r_L inhomogeneous linear systems over C(x) with r_P+d_L variables and equations, which
only differ in the inhomogeneous part but have the same matrix M=Syl_y(P,l_r_L) for every k.
The claim follows using Cramer's rule, taking into account that the coefficient matrix of the system
has d_L many columns with polynomials of degree d_P and r_P many columns with polynomials of degree deg_x l_k(y)=0
(which is also the degree of the inhomogeneous part). Note that v=det(M) does not depend on k.
* (Multiplication by g') For each polynomial Q∈ C[x,y] with deg_y(Q)≤ r_P-1
there exist polynomials q_j∈ C[x] of degree at most deg_x(Q)+2r_P d_P such that
g' Q(x,g) = 1/(w lc_y(P)) ∑_j=0^r_P-1 q_j g^j,
where w∈ C[x] is the discriminant of P.
To see this, first apply Observation <ref> (Reduction by P) to rewrite -QP_x as T=1/lc_y(P) ∑_j=0^2r_P-2 t_j y^j
for some t_j∈ C[x] of degree deg_x(Q)+d_P.
Then consider an ansatz AP+BP_y=lc_y(P)·T with unknown polynomials A,B∈ C(x)[y] of degrees
at most r_P-2 and r_P-1, respectively, and compare coefficients with respect to y.
This gives an inhomogeneous linear system over C(x) with 2r_P-1 variables and equations.
The claim then follows using Cramer's rule.
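As announced in Observation 1, a single pseudo-reduction step is all that is needed there; the following sympy sketch (with made-up example polynomials) carries it out and checks that the y-degree drops.

```python
from sympy import symbols, Poly

x, y = symbols('x y')
P = Poly(2*y**3 + x*y - x**4 + 1, y)   # made-up example with r_P = 3
Q = Poly(y**5 + x**2*y + x, y)         # polynomial to reduce, deg_y(Q) = 5

def reduce_by_P(Q, P):
    """One pseudo-reduction step: Qt = lc_y(P)*Q - lc_y(Q)*y^(deg Q - deg P)*P.
    Since P(x, g) = 0, this gives Q(x, g) = Qt(x, g) / lc_y(P)."""
    shift = Poly(y**(Q.degree() - P.degree()), y)
    return P.LC() * Q - Q.LC() * shift * P

Qt = reduce_by_P(Q, P)
assert Qt.degree() <= Q.degree() - 1   # deg_y drops by at least one
```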
Let u = v w lc_y(P)^r_P, where v and w are as in Observations <ref> and <ref> above.
Let f be a solution of L and g be a solution of P.
Then for every ℓ∈ N there are polynomials e_i,j∈ C[x] of degree at most ℓ·deg(u) such that
∂^ℓ (f∘ g) = 1/u^ℓ ∑_i=0^r_P-1 ∑_j=0^r_L-1 e_i,j g^i · (f^(j)∘ g).
This is evidently true for ℓ=0. Suppose it is true for some ℓ. Then
∂^ℓ+1 (f∘g) =
∑_i=0^r_P-1 ∑_j=0^r_L-1 ( e_i,j/u^ℓ · g^i · (f^(j)∘g) )'
=
∑_i=0^r_P-1 ∑_j=0^r_L-1 (
(e_i,j' u - ℓ e_i,j u')/u^ℓ+1 · g^i · (f^(j)∘g)
+
e_i,j/u^ℓ · (i g^i-1 · (f^(j)∘g) + g^i · (f^(j+1)∘g)) g'
).
The first term in the summand expression already matches the claimed bound. To complete the proof, we
show that
(i g^i-1 · (f^(j)∘ g) + g^i · (f^(j+1)∘ g)) g' = 1/u ∑_k=0^r_P-1 ∑_m=0^r_L-1 q_k,m g^k · (f^(m)∘ g)
for some polynomials q_k,m of degree at most deg(u). Indeed, the only critical term is f^(r_L)∘ g. According to
Observation <ref>, f^(r_L)∘ g can be rewritten as 1/v ∑_j=0^r_P-1 ∑_k=0^r_L-1 q_j,k g^j · (f^(k)∘ g)
for some q_j,k∈ C[x] of degree at most d_L d_P. This turns the left hand side of (<ref>) into an expression
of the form 1/v ∑_j=0^2r_P-2 ∑_k=0^r_L-1 q̃_j,k g^j · (f^(k)∘ g) for some polynomials
q̃_j,k∈ C[x] of degree at most d_L d_P. An (r_P-1)-fold application of Observation <ref>
brings this expression to the form 1/(v lc_y(P)^r_P-1) ∑_j=0^r_P-1 ∑_k=0^r_L-1 q̅_j,k g^j · (f^(k)∘ g)
for some polynomials q̅_j,k∈ C[x] of degree at most d_L d_P+(r_P-1)d_P. Now Observation <ref>
completes the induction argument.
Let r,d∈ N be such that
r ≥ r_L r_P
and
d ≥ r(3r_P+d_L-1)d_P r_L r_P / (r+1-r_L r_P).
Then there exists an operator M∈ C[x][∂] of order ≤ r and degree ≤ d
such that for every solution g of P and every solution f of L the composition
f∘ g is a solution of M.
In particular, there is an operator M of order r=r_L r_P and degree
(3r_P+d_L-1)d_P r_L^2 r_P^2 = O((r_P+d_L)d_P r_L^2 r_P^2).
Let g be a solution of P and f be a solution of L.
Then we have P(x,g(x))=0 and L(f) = 0, and we seek an operator
M=∑_i=0^d ∑_j=0^r c_i,j x^i ∂^j ∈ C[x][∂] such that
M(f∘ g)=0. Let r≥ r_L r_P and consider an ansatz
M=∑_i=0^d ∑_j=0^r c_i,j x^i ∂^j
with undetermined coefficients c_i,j∈ C.
Let u be as in Lemma <ref>. Then applying M to f∘ g and multiplying by u^r gives an expression
of the form
∑_i=0^d+r·deg(u) ∑_j=0^r_P-1 ∑_k=0^r_L-1 q_i,j,k x^i g^j · (f^(k)∘ g),
where
the q_i,j,k are C-linear combinations of the undetermined coefficients c_i,j. Equating all the q_i,j,k
to zero leads to a linear system over C with at most (1+d+r·deg(u)) r_L r_P equations and exactly (r+1)(d+1)
variables. This system has a nontrivial solution as soon as
(r+1)(d+1) > (1+d+r·deg(u)) r_L r_P
⇔ (r+1-r_L r_P)(d+1) > r r_L r_P deg(u)
⇔ d > -1 + r r_L r_P deg(u)/(r+1-r_L r_P).
The claim follows because deg(u) ≤ d_P d_L+(2r_P-1)d_P+r_P d_P = (3r_P+d_L-1)d_P.
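The bound just proved is easy to tabulate. The following helper is our own sketch (not part of the paper); for the running example with r_L=3, d_L=4, r_P=3, d_P=4 it illustrates the systematic overshoot seen later in Example <ref>.

```python
import math

def degree_bound_linalg(r, rL, dL, rP, dP):
    """Degree bound d >= r(3rP+dL-1)dP*rL*rP / (r+1-rL*rP) from the
    linear algebra argument, valid for r >= rL*rP."""
    assert r >= rL * rP
    return math.ceil(r * (3*rP + dL - 1) * dP * rL * rP / (r + 1 - rL * rP))

for r in (9, 10, 20, 100):
    print(r, degree_bound_linalg(r, 3, 4, 3, 4))
# -> 3888, 2160, 720, 470: far above the true minimal degrees.
```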
§ A DEGREE BOUND FOR THE MINIMAL OPERATOR
According to Theorem <ref>, there is an operator M of order r=r_L r_P and degree d=O((r_P+d_L)d_P r_L^2 r_P^2).
Usually there is no operator of order less than r_Lr_P, but if such an operator
accidentally exists, Theorem <ref> makes no statement about its degree.
The result of the present section (Theorem <ref> below) is a degree bound
for the minimal order operator, which also applies when its order is less than r_Lr_P,
and which is better than the bound of Theorem <ref> if the minimal order
operator has order r_Lr_P.
The following Lemma is a variant of Lemma <ref> in which g is allowed to appear
in the denominator, and with exponents larger than r_P-1. This allows us to keep the
x-degrees smaller.
Let f be a solution of L and g be a solution of P.
For every ℓ∈ N, there exist polynomials E_ℓ,j∈ C[x, y] for 0 ≤ j < r_L
such that deg_x E_ℓ,j ≤ ℓ(2d_P - 1) and deg_y E_ℓ,j ≤ ℓ(2r_P + d_L - 1) for all 0 ≤ j < r_L, and
∂^ℓ(f ∘ g) = 1/U(x, g)^ℓ ∑_j=0^r_L-1 E_ℓ,j(x, g) · (f^(j)∘ g),
where U(x, y) = P_y^2(x, y)·l_r_L(y).
This is true for ℓ = 0. Suppose it is true for some ℓ. Then
∂^ℓ+1(f ∘ g) = ( 1/U(x, g)^ℓ ∑_j=0^r_L-1 E_ℓ,j(x, g) (f^(j)∘ g) )'
= ∑_j=0^r_L-1 ( -ℓ(U_x + g' U_y)/U^ℓ+1 · E_ℓ,j · (f^(j)∘ g)
+ 1/U^ℓ · ((E_ℓ,j)_x + g' · (E_ℓ,j)_y) (f^(j)∘ g) + 1/U^ℓ · E_ℓ,j g' · (f^(j+1)∘ g) ).
We consider the summands separately.
In -ℓ(U_x + g' U_y)/U^ℓ+1, the term U_x is already a polynomial in x and g of bidegree at most (2d_P - 1, 2r_P + d_L - 1).
Since g' = -P_x(x, g)/P_y(x, g) and U_y is divisible by P_y, g' U_y is also a polynomial with the same bound for the bidegree.
Furthermore, we can write
(E_ℓ,j)_x + g' · (E_ℓ,j)_y = 1/U · (U(E_ℓ,j)_x - P_x P_y l_r_L(g)(E_ℓ,j)_y),
where the expression in the parentheses satisfies the stated bound.
For j + 1 < r_L, the last summand can be written as
1/U^ℓ · E_ℓ,j g' · (f^(j+1)∘ g) = -P_x P_y l_r_L(g)/U^ℓ+1 · E_ℓ,j · (f^(j+1)∘ g).
For j = r_L - 1, due to Observation <ref>,
g' · (f^(r_L)∘ g) = P_x P_y/U ∑_j=0^r_L-1 l_j(g) (f^(j)∘ g).
The right-hand sides of both (<ref>) and (<ref>) satisfy the bound.
Let f_1, …, f_r_L be C-linearly independent solutions of L, and let g_1, …, g_r_P be distinct solutions of P.
By r we denote the C-dimension of the C-linear space V spanned by f_i ∘ g_j for all 1 ≤ i ≤ r_L and 1 ≤ j ≤ r_P.
The order of any operator annihilating V is at least r.
We will construct an operator of order r annihilating V using Wronskian-type matrices.
There exists a matrix A(x,y)∈ C[x,y]^(r + 1) × r_L such that
the bidegree of every entry of the i-th row of A(x, y) does not exceed (2rd_P - i + 1, r(2r_P + d_L - 1)) and
f ∈ V if and only if the vector (f, …, f^(r))^T lies in
the column space of the (r + 1) × r_Lr_P matrix [ A(x, g_1) ⋯ A(x, g_r_P) ].
With the notation of Lemma <ref>, let
A(x, y) be the matrix whose (i, j)-th entry is E_i-1,j-1(x, y)·U(x, y)^(r+1-i).
Then A(x, y) meets the stated degree bound.
By W_i we denote the (r + 1) × r_L Wronskian matrix for f_1 ∘ g_i, …, f_r_L∘ g_i.
Then f ∈ V if and only if the vector (f, …, f^(r))^T lies in the column space of the matrix [ W_1 ⋯ W_r_P ].
Hence, it is sufficient to prove that W_i and A(x, g_i) have the same column space.
The following matrix equality follows from the definition of the E_i,j:
W_i = 1/U(x, g_i)^r · A(x, g_i) ·
[ f_1∘g_i ⋯ f_r_L∘g_i ; f_1'∘g_i ⋯ f_r_L'∘g_i ; ⋮ ⋱ ⋮ ; f_1^(r_L-1)∘g_i ⋯ f_r_L^(r_L-1)∘g_i ].
The latter matrix is nondegenerate since it is a Wronskian matrix for the C-linearly independent power series
f_1 ∘ g_i, …, f_r_L∘ g_i with respect to the derivation (g_i')^-1∂.
Hence, W_i and A(x, g_i) have the same column space.
In order to express the above condition of lying in the column space in terms of vanishing of a single determinant, we want to “square” the matrix [ A(x, g_1), ⋯, A(x, g_r_P) ].
There exists a matrix B(y)∈ C[y]^(r_L r_P - r) × r_L such that the degree of every entry does
not exceed r_P - 1 and the (r_Lr_P + 1)× r_Lr_P matrix
C =
[ A(x, g_1) ⋯ A(x, g_r_P); B(g_1) ⋯ B(g_r_P) ]
has rank r_Lr_P.
Let D be the Vandermonde matrix for g_1, …, g_r_P,
and let I_r_L denote the identity matrix.
Then C_0 = D ⊗ I_r_L is nondegenerate and has the form [ B_0(g_1), …, B_0(g_r_P) ],
for some B_0(y)∈ C[y]^r_Lr_P × r_L with entries of degree at most r_P - 1.
Since C_0 is nondegenerate, we can choose r_Lr_P - r rows which span a complementary subspace to the row space of [ A(x, g_1), …, A(x, g_r_P) ].
Discarding all other rows from B_0(y), we obtain B(y) with the desired properties.
By C_ℓ (A_ℓ(x, y), resp.) we will denote the matrix C (A(x, y), resp.) without the ℓ-th row.
For every 1 ≤ ℓ ≤ r + 1 the determinant of C_ℓ is divisible by ∏_i<j (g_i - g_j)^r_L.
We show that det C_ℓ is divisible by (g_i - g_j)^r_L for every i ≠ j.
Without loss of generality, it is sufficient to show this for i = 1 and j = 2.
We have
det C_ℓ = det
[ A_ℓ(x, g_1)-A_ℓ(x, g_2)   A_ℓ(x, g_2)   ⋯   A_ℓ(x, g_r_P) ;
  B(g_1)-B(g_2)   B(g_2)   ⋯   B(g_r_P) ].
Since for every polynomial p(y) we have g_1 - g_2 | p(g_1) - p(g_2), every entry of the first r_L columns in the above matrix is divisible by g_1 - g_2.
Hence, the whole determinant is divisible by (g_1 - g_2)^r_L.
The minimal operator M ∈ C[x][∂] annihilating f ∘ g for every f and g such that L(f) = 0 and P(x, g(x)) = 0
has order r ≤ r_L r_P and degree at most
2r^2 d_P - (1/2)(r-2)(r-1) + r d_P r_L (2r_P + d_L - 1) - d_P r_L (r_P-1)
= O(r d_P r_L(d_L+r_P)).
We construct M using C_ℓ for 1 ≤ℓ≤ r + 1.
We consider some f and by F we denote the (r_Lr_P + 1)-dimensional vector (f, …, f^(r), 0, …, 0)^T.
If f ∈ V, then the first r + 1 rows of the matrix [ C F ] are linearly dependent, so it is degenerate.
On the other hand, if this matrix is degenerate, then Lemma <ref> implies that F is a linear combination of the columns of C, so Lemma <ref> implies that f ∈ V.
Hence f ∈ V ⇔ det C_1 · f - det C_2 · f' ± ⋯ + (-1)^r det C_r+1 · f^(r) = 0.
Due to Lemma <ref>, the latter condition is equivalent to c_1 f + ⋯ + c_r+1 f^(r) = 0,
where c_ℓ = (-1)^ℓ-1 det C_ℓ / ∏_i<j (g_i - g_j)^r_L.
Thus we can take M = c_1 + ⋯ + c_r + 1∂^r.
It remains to bound the degrees of the coefficients of M.
Combining Lemmas <ref>, <ref>, and <ref>, we obtain
d_X := deg_x c_ℓ ≤ ∑_i≠ℓ (2rd_P+1-i) ≤ 2r^2 d_P - (1/2)(r-2)(r-1),
d_Y := deg_g_i c_ℓ ≤ r r_L (2r_P + d_L - 1) - r_L(r_P - 1).
Since c_ℓ is symmetric with respect to g_1, …, g_r_P, it can be written as an element of C[x, s_1, …, s_r_P]
where s_j is the j-th elementary symmetric polynomial in g_1, …, g_r_P,
and the total degree of c_ℓ with respect to s_j's does not exceed d_Y.
Substituting s_j with the corresponding coefficient of (1/lc_y(P))·P(x, y) and clearing denominators, we obtain a polynomial
in x of degree at most d_X + d_Y d_P.
Since the order of M is equal to the dimension of the space of all compositions of the form f ∘ g,
where L(f) = 0 and P(x, g) = 0, M is the minimal annihilating operator for this space.
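For orientation, the degree bound of this theorem can be evaluated numerically; the helper below is our own sketch. For the running example (r = r_Lr_P = 9, r_L=3, d_L=4, r_P=3, d_P=4) it yields 1568, which is the conservative estimate quoted in Example <ref> below.

```python
def degree_bound_minimal(r, rL, dL, rP, dP):
    """Degree bound of the theorem above for the minimal operator,
    whose order satisfies r <= rL*rP."""
    return (2*r*r*dP - (r - 2)*(r - 1)//2
            + r*dP*rL*(2*rP + dL - 1) - dP*rL*(rP - 1))

print(degree_bound_minimal(9, 3, 4, 3, 4))   # -> 1568
```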
The proof of Theorem <ref> is a generalization of the proof of <cit.>.
Specializing r_L = 1, d_L = 0 in Theorem <ref> gives a slightly larger bound than the
bound in <cit.>, but with the same leading term.
Although the bound of Theorem <ref> for r=r_L r_P beats the bound of Theorem <ref>
for r=r_L r_P by a factor of r_P, it is apparently still not tight. Experiments
we have conducted with random operators lead us to conjecture that in fact, at least
generically, the minimal order operator of order r_L r_P has degree O(r_L r_P d_P (d_L + r_L r_P)).
By interpolating the degrees of the operators we found in our computations,
we obtain the expression in the following conjecture.
For every r_P,r_L,d_P,d_L≥2 there exist L and P such that the corresponding
minimal order operator M has order r_Lr_P and degree
r_L^2 (2 r_P(r_P-1) + 1) d_P
+r_L r_P (d_P(d_L+1) + 1)
+d_L d_P
-r_L^2 r_P^2
-r_L d_L d_P,
and there do not exist L and P for which the corresponding minimal operator M
has order r_Lr_P and larger degree.
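The conjectured degree is an explicit polynomial in the four parameters; the sketch below (our own helper) evaluates it. For the running example r_L=3, d_L=4, r_P=3, d_P=4 it gives 544, exactly the degree of the minimal operator in Example <ref>.

```python
def conjectured_degree(rL, dL, rP, dP):
    """Conjectured degree of the minimal operator of order rL*rP."""
    return (rL**2 * (2*rP*(rP - 1) + 1) * dP
            + rL*rP * (dP*(dL + 1) + 1)
            + dL*dP
            - rL**2 * rP**2
            - rL*dL*dP)

print(conjectured_degree(3, 4, 3, 4))   # -> 544
```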
§ ORDER-DEGREE CURVE BY SINGULARITIES
A singularity of the minimal operator M is a root of its leading coefficient polynomial lc_∂(M)∈ C[x].
In the notation and terminology of <cit.>, a factor p of this polynomial is called removable at cost n if there exists an operator
Q∈ C(x)[∂] of order ord_∂(Q)≤ n such that QM∈ C[x][∂] and gcd(lc_∂(QM), p)=1.
A factor p is called removable if it is removable at some finite cost n∈ N, and non-removable otherwise.
The following theorem <cit.> translates information about the removable singularities of a minimal operator
into an order-degree curve.
Let M∈ C[x][∂], and let p_1,…,p_m∈ C[x] be pairwise coprime
factors of lc_∂(M) which are removable at costs c_1,…,c_m, respectively. Let r ≥ ord_∂(M)
and
d ≥ deg_x(M) - ⌈ ∑_i=1^m (1 - c_i/(r - ord_∂(M) + 1))^+ deg_x(p_i) ⌉,
where we use the notation (x)^+ := max{x,0}. Then there exists an operator Q∈ C(x)[∂] such that
QM∈ C[x][∂] and ord_∂(QM)=r and deg_x(QM)=d.
The order-degree curve of Theorem <ref> is much more accurate than that of Theorem <ref>.
However, the theorem depends on quantities that are not easily observable when only L and P are known.
From Theorem <ref> (or Conjecture <ref>), we have a good bound for deg_x(M). In
the rest of the paper, we discuss bounds and plausible hypotheses for the degree and the cost of the removable factors.
The following example shows how knowledge about the degree of the operator and the degree and cost of its removable
singularities influence the curve.
The figure below compares the data of Example <ref> with the curve obtained from Theorem <ref>
using m=1, deg_x(M_min)=544, deg_x(p_1)=456, c_1=1.
This curve is labeled (a) below. Only for a few orders r, the curve slightly overshoots.
In contrast, the curve of Theorem <ref>, labeled (b) below, overshoots significantly
and systematically.
The figure also illustrates how the parameters affect the accuracy of the estimate.
The value deg_x(M_min)=544 is correctly predicted by Conjecture <ref>. If we
use the more conservative estimate deg_x(M_min)=1568 of Theorem <ref>,
we get the curve (e).
For curve (d) we have assumed a removability degree of deg_x(p_1)=408, as predicted by Theorem <ref>
below, instead of the true value deg_x(p_1)=456.
For (c) we have assumed a removability cost c_1=10 instead of c_1=1.
[Figure: the data points (r,d) of Example <ref>, now including the minimal order point (9,544), together with the curves (a)-(e) described in the text. Curve (a) follows the data closely and overshoots only for a few orders; curve (b) overshoots significantly and systematically; curves (c), (d) and (e) show the effect of the perturbed cost, removability degree, and deg_x(M_min), respectively.]
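Curve (a) is obtained by direct evaluation of the formula in Theorem <ref>; the sketch below (our own helper) does this for m=1, deg_x(M)=544, deg_x(p_1)=456, c_1=1 and reproduces the observed degrees, e.g. d=316 at r=10 and d=126 at r=20.

```python
import math

def degree_curve(r, ord_M, deg_M, deg_p, cost):
    """Order-degree curve of Theorem 4 for a single removable factor p
    of degree deg_p, removable at the given cost."""
    saving = max(1.0 - cost / (r - ord_M + 1), 0.0) * deg_p
    return deg_M - math.ceil(saving)

for r in (10, 20, 100):
    print(r, degree_curve(r, 9, 544, 456, 1))   # -> 316, 126, 92
```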
§.§ Degree of Removable Factors
Let P(x, y)∈ C[x, y] be a polynomial with deg_y P = d, and let R(x) = res_y(P, P_y).
Assume that α∈ C̅ is a root of R(x) of multiplicity k.
Then the squarefree part
S(y) = P(α, y) / gcd(P(α, y), P_y(α, y))
of P(α, y) has degree at least d - k.
of P(α, y) has degree at least d - k.
Let M(x) be the Sylvester matrix for P(x, y) and P_y(x, y) with respect to y.
The value R^(k)(α) is of the form ∑_i det M_i(α), where every M_i(x) has
at least 2d - 1 - k columns in common with M(x). Since R^(k)(α) ≠ 0, at least one of
these matrices is nondegenerate. Hence, corank M(α) ≤ k.
On the other hand, corank M(α) is equal to the dimension of the space of pairs of polynomials
(a(y), b(y)) such that a(y)P(α, y) + b(y)P_y(α, y) = 0 and deg b(y) < d.
Any such b(y) is divisible by S(y), and for every b(y) divisible by S(y) there exists exactly one a(y).
Hence, corank M(α) = d - deg S(y) ≤ k.
Let M be the minimal order operator annihilating all compositions f∘g of a solution of P with a solution of L.
The leading coefficient q = lc_∂(M)∈ C[x] can be factored as q = q_rem·q_non,
where q_rem and q_non are the products of all removable and all nonremovable factors of lc_∂(M), respectively. Then
deg q_non ≤ d_P(4r_L r_P - 2r_L + d_L).
For α∈C̅, by π_α (λ_α, μ_α, resp.) we denote r_P
(r_L or ord_∂ M, resp.)
minus the number of solutions of P(x, g(x)) = 0 (the dimension of the solution set of Lf(x) = 0 or Mf(x) = 0, resp.)
in C̅[[x - α]].
Corollary 4.3 from <cit.> implies that ord_α q_non (the minimal order at α, in the notation of <cit.>)
is equal to μ_α.
Summing over all α, we have ∑_α∈C̅ μ_α = deg q_non.
Bounding the degree of the nonremovable part of lc_∂(L) by d_L, we also have ∑_α∈C̅ λ_α ≤ d_L.
Let R(x) be the resultant of P(x, y) and P_y(x, y) with respect to y.
Lemma <ref> implies that the degree of the squarefree part of P(α, y) is at least r_P - k.
So, at most k roots are multiple, so at least r_P - 2k roots are simple.
Hence, P(x, y) = 0 has at least r_P - 2k solutions in C̅[[x - α]].
Thus ∑_α∈C̅π_α≤ 2 R ≤ 2d_P(2r_P - 1).
Let α∈C̅ and let g_1(x), …, g_r_P - π_α(x) ∈C̅[[x - α]]
be solutions of P(x, g(x)) = 0.
Let β_i = g_i(0) for all 1 ≤ i ≤ r_P - π_α.
Since the composition of a power series in x - β_i with g_i(x) is a power series in x - α,
μ_α≤ r_L π_α + ∑_i = 1^r_P - π_αλ_β_i.
We sum (<ref>) over all α∈C̅.
The number of occurrences of λ_β in this sum for a fixed β∈C̅ is equal to the number
of distinct power series of the form
g(x) = β + ∑ c_i (x - γ)^i such that P(x, g(x)) = 0.
Inverting these power series, we obtain distinct Puiseux series solutions of P(x, y) = 0 at y = β, so this number does not exceed d_P.
Hence
∑_α∈C̅ μ_α ≤ r_L ∑_α∈C̅ π_α + d_P ∑_β∈C̅ λ_β ≤ 2r_L d_P(2r_P - 1) + d_P d_L.
In order to use Theorem <ref>, we need a lower bound for deg q_rem.
Theorem <ref> gives us an upper bound for deg_x M, but we must also estimate the difference deg_x M - deg_x lc_∂(M).
By N_α we denote the Newton polygon for M at α∈C̅∪{∞}
(for definitions and notation, see <cit.>).
By H_α we denote the difference of the ordinates of the highest and the smallest vertices of N_α,
and we call this quantity the height of the Newton polygon.
Note that deg_x M - deg_x lc_∂(M) ≤ H_∞.
This estimate together with the Lemma above implies
deg q_rem ≥ deg_x(M) - H_∞ - d_P(4r_L r_P - 2r_L + d_L).
The equation P(x, y) = 0 has r_P distinct Puiseux series solutions g_1(x), …, g_r_P(x) at infinity.
For 1 ≤ i ≤ r_P, let β_i = g_i(∞) ∈C̅∪{∞}, and let ρ_i be the order of
zero of g_i(x) - β_i (1/g_i(x), resp.) at infinity if β_i ∈C̅ (β_i = ∞, resp.).
The numbers ρ_1, …, ρ_r_P are positive rationals and can be read off from Newton polygons of P (see <cit.>).
H_∞≤∑_i=1^r_Pρ_i H_β_i.
Writing L as L(x, ∂) ∈ C[x][∂], we have
M = lclm( L(g_1, (1/g_1')∂), …, L( g_r_P, (1/g_r_P')∂) ).
Hence, the set of edges of N_∞ is a subset of the union of sets of edges of Newton polygons of the operators L(g_i,1/g_i'∂),
so the height of N_∞ is bounded by the sum of the heights of the Newton polygons of these operators.
Consider g_1 and assume that β_1 ∈C̅.
Then the Newton polygon for L at β_1 is constructed from the set of monomials of L written as an element of C(x - β_1)[(x - β_1)∂].
Let L(x, ∂) = L̃(x - β_1, (x- β_1)∂), then
L(g_1, (1/g_1')∂)
= L̃( g_1 - β_1, ((g_1 - β_1)/g_1') ∂)
= L̃( x^-ρ_1 h_1(x), x h_2(x) ∂),
where h_1(∞) and h_2(∞) are nonzero elements of C̅.
Since h_1 and h_2 do not affect the shape of the Newton polygon at infinity, the Newton polygon at infinity for L(g_1, 1/g_1'∂)
is obtained from the Newton polygon for L at β_1 by stretching it vertically by the factor ρ_1,
so its height is equal to ρ_1H_β_1.
The case β_1 = ∞ is analogous using L = L̃( 1/x, -x∂).
Generically, the β_i's will be ordinary points of L, so it is
fair to expect H_β_i=0 for all i in most situations.
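The orders ρ_i can also be checked computationally; a small sympy sketch with a toy P of our own choosing:

import sympy as sp

x = sp.symbols('x', positive=True)

# toy algebraic function: P(x, y) = y^2 - x^3 - 1, so beta = g(infinity) = oo
# and 1/g has a zero of order rho = 3/2 at infinity
g = sp.sqrt(x**3 + 1)
rho = sp.Rational(3, 2)
print(sp.limit(x**rho / g, x, sp.oo))   # finite nonzero limit confirms rho = 3/2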
The following theorem is a consequence of Theorem <ref> and the discussion above.
Let ρ_1, …, ρ_r_P be as above.
Assume that all removable singularities of M are removable at cost at most c.
Let δ = ∑_i = 1^r_Pρ_i H_β_i + d_P(4r_Lr_P - 2r_L + d_L).
Let r ≥ ord_∂ M + c - 1 and
d ≥ δ·( 1 - c/(r - ord_∂ M + 1) ) + deg_x M · c/(r - ord_∂ M + 1).
Then there exists an operator Q ∈ C(x)[∂] such that QM ∈ C[x][∂], ord_∂(QM) = r and deg_x(QM) = d.
Note that deg_x(M) may be replaced with the expression from Theorem <ref>
or Conjecture <ref>.
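A plain numeric evaluation of this degree bound, assuming for illustration the generic situation H_β_i = 0 (so δ has no Newton-polygon term) and ord_∂ M = r_L r_P; both assumptions are ours, made only for the example:

def degree_bound(r_L, r_P, d_P, d_L, deg_x_M, r, c=1):
    ord_M = r_L * r_P                    # generic order of M (our assumption)
    delta = d_P * (4 * r_L * r_P - 2 * r_L + d_L)
    t = c / (r - ord_M + 1)
    return delta * (1 - t) + deg_x_M * t

print(degree_bound(r_L=2, r_P=2, d_P=3, d_L=3, deg_x_M=100, r=5))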
§.§ Cost of Removable Factors
The goal of this final section is to explain why in the case r_P > 1 one can almost always choose c=1 in Theorem <ref>.
For a differential operator L ∈ C[x][∂], by M(L) we denote the minimal operator M such that Mf(g(x)) = 0 whenever L f = 0 and P(x, g(x)) = 0.
We want to investigate the possible behaviour of a removable singularity at α∈ C when L varies and P with r_P>1 is fixed.
Without loss of generality, we assume that α = 0.
We will assume that:
(S1) P(0, y) is a squarefree polynomial of degree r_P;
(S2) g(0) is not a singularity of L for any root g(x) of P;
(G) the roots of P(x, g(x)) = 0 at zero are of the form g_i(x) = α_i + β_i x + γ_i x^2 + …, where β_2, …, β_r_P are nonzero,
and either β_1 or γ_1 is nonzero.
Conditions (S1) and (S2) ensure that zero is not a potential true singularity of M(L).
Condition (G) is an essential technical assumption on P.
We note that it holds at all nonsingular points (not just at zero) for almost all P, because this condition is violated at α iff some root of P(α, y) = P_x(α, y) = 0 (this means that at least one of β_i is zero) is also a root of either P_xx(α, y) = 0 (then γ_i is also zero) or P_xy(α, y) = 0 (then there are at least two such β's).
For a generic P this does not hold.
Under these assumptions we will prove the following theorem.
Informally speaking, it means that if M(L) has an apparent singularity at zero, then it almost surely is removable at cost one.
Let d_L be such that d_L ≥ (r_Lr_P - r_L + 1)r_P.
By V we denote the (algebraic) set of all L ∈C̅[x][∂] of order r_L and degree ≤ d_L
such that the leading coefficient of L does not vanish at α_1, …, α_r_P.
We consider two (algebraic) subsets in V
X = { L ∈V | M(L) has an apparent singularity at 0},
Y = { L ∈V | M(L) has an apparent singularity at 0
which is not removable at cost one }.
Then, dim X > dim Y as algebraic sets.
For α ∈ C̅, by D_α(r, d) we denote the space of differential operators in C̅[x - α][∂] of order at most r and degree at most d.
By D^∘_α(r, d) ⊂ D_α(r, d) we denote the set of L such that ord L = r and (lc_∂ L)(α) ≠ 0.
Then
V ⊂ D^∘_α_1(r_L, d_L) ∩ … ∩ D^∘_α_r_P(r_L, d_L).
To every operator L ∈ D^∘_α(r, d_0) and every d_1 ≥ r, we assign a fundamental matrix of degree d_1 at α, denoted by F_α(L, d_1).
It is defined as the r × (d_1 + 1) matrix such that the first r columns constitute the identity matrix I_r,
and every row consists of the first d_1 + 1 terms of some power series solution of L at x = α.
Since L ∈ D^∘_α(r, d_0), F_α(L, d_1) is well defined for every d_1.
By F(r, d) we denote the space of all possible fundamental matrices of degree d for operators of order r.
This space is isomorphic to 𝔸^r(d + 1 - r).
The following proposition says that a generic operator has generic and independent fundamental matrices, so we can work with these matrices instead of working with operators.
Let φ V →( F(r_L, r_Lr_P) )^r_P be the map sending L ∈ V to F_α_1(L, r_Lr_P) ⊕…⊕ F_α_r_P(L, r_Lr_P).
Then φ is a surjective map of algebraic sets, and all fibers of φ have the same dimension.
For the proof we need the following lemma.
Let ψ : D^∘_α(r, d) → F(r, d + r) be the map sending L to F_α(L, d + r).
Then ψ is surjective and all fibers have the same dimension.
First we assume that L is of the form L = ∂^r_L + a_r_L - 1(x) ∂^r_L - 1 + … + a_0(x),
and a_j(x) = a_j, dx^d + … + a_j, 0, where a_j, i∈C̅.
We also denote the truncated power series corresponding to the j + 1-st row of F(L, d + r_L) by f_j and write it as
f_j = x^j + ∑_i = 0^d b_j, i x^r_L + i, where b_j, i∈C̅.
We will prove the following claim by induction on i:
Claim. For every 0 ≤ j ≤ r_L - 1 and every 0 ≤ i ≤ d, b_j, i can be written as a polynomial in a_p, q with q < i and a_j, i.
And, vice versa, a_j, i can be written as a polynomial in b_p, q with q < i and b_j, i.
The claim would imply that ψ defines an isomorphism of algebraic varieties between F(r, d + r) and the subset of monic operators in D_α(r, d).
For i = 0, looking at the constant term of L (f_j), we obtain that j! a_j, 0 + r_L! b_j, 0 = 0.
This proves the base case of the induction.
Now we consider i > 0 and look at the constant term of ∂^i L(f_j).
The operator ∂^i L can be written as
∂^i L =
∂^i + r_L + a_r_L - 1^(i)(x) ∂^r_L - 1 + … + a_0^(i)(x)
+ ∑_k < i, l < i + r_L, s < r_L c_k, l, s a_s^(k)(x) ∂^l
Applying this to f_j, we obtain the following expression for the constant term:
(i + r_L)! b_j, i + j! i! a_j, i + ∑_k < i, l < i + r_L, s < r_L c̃_k, l, s a_s, k b_j, l - r_L = 0.
Applying the induction hypothesis to the equalities
b_j, i = -1/(i + r_L)! ( j! i! a_j, i + ∑_k < i, l < i + r_L, s < r_L c̃_k, l, s a_s, k b_j, l - r_L )
a_j, i = -1/i! j! ( (i + r_L)! b_j, i + ∑_k < i, l < i + r_L, s < r_L c̃_k, l, s a_s, k b_j, l - r_L )
we prove the claim.
The above proof also implies that F(L, d + r) is completely determined by the truncation of L at degree d + 1.
So, for arbitrary L ∈ D^∘_α(r, d), F_α(L, d) = F_α(L̃, d), where L̃ is the truncation of (1/lc_∂ L) · L at degree d + 1,
which is monic in ∂.
Hence, every fiber of ψ is isomorphic to the set of all polynomials of degree at most d with nonzero constant term.
This set is isomorphic to C̅^*×C̅^d.
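The construction in this proof is effective; a small sympy sketch of our own (not from the text) computes F_0(L, d_1) for a monic operator by solving the triangular coefficient system described above:

import sympy as sp

x = sp.symbols('x')

def fundamental_matrix(a, r, d1):
    # F_0(L, d1) at alpha = 0 for the monic operator
    # L = d^r + a[r-1](x) d^(r-1) + ... + a[0](x);
    # row j holds the first d1 + 1 coefficients of the solution x^j + O(x^r)
    rows = []
    for j in range(r):
        b = sp.symbols(f'b{j}_0:{d1 + 1 - r}')
        f = x**j + sum(b[i] * x**(r + i) for i in range(d1 + 1 - r))
        Lf = sp.expand(sp.diff(f, x, r) + sum(a[k] * sp.diff(f, x, k) for k in range(r)))
        sol = sp.solve([Lf.coeff(x, i) for i in range(d1 + 1 - r)], b, dict=True)[0]
        rows.append([f.coeff(x, i).subs(sol) for i in range(d1 + 1)])
    return sp.Matrix(rows)

# Airy-type toy example L = d^2 + x: the j = 0 row encodes 1 - x^3/6 + ...
print(fundamental_matrix([x, sp.S(0)], 2, 5))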
Let d_0=r_Lr_P - r_L.
We will factor φ as a composition
V →^φ_1 ⊕_i=1^r_P D^∘_α_i(r_L, d_0) →^φ_2 F(r_L, r_Lr_P)^r_P,
where φ_2 is the component-wise application of the maps ψ_α_i of Lemma <ref> and φ_1 sends L ∈ V to
a vector whose i-th coordinate is the truncation at degree d_0 + 1 of L written as an element of C̅[x - α_i][∂].
We will prove that both these maps are surjective with fibers of equal dimension.
The map φ_1 can be extended to
φ_1 : D_0(r_L, d_L) → D_α_1(r_L, d_0) ⊕ … ⊕ D_α_r_P(r_L, d_0).
This map is linear, so it is sufficient to show that the dimension of the kernel is equal to the difference of the dimensions of the source space and the target space.
The latter number is equal to (d_L + 1)(r_L + 1) - (d_0 + 1)(r_L + 1)r_P.
Let L ∈ ker φ_1.
This is equivalent to the fact that every coefficient of L is divisible by (x - α_i)^d_0 + 1 for every 1 ≤ i ≤ r_P.
The dimension of the space of such operators is equal to (r_L + 1)(d_L + 1 - r_P(d_0 + 1)) ≥ 0, so φ_1 is surjective.
Lemma <ref> implies that φ_2 is also surjective and all fibers are of the same dimension.
Let g_1(x), …, g_r_P(x) ∈C̅[[x]] be solutions of P(x, y) = 0 at zero.
Recall that g_i(x) = α_i + β_i x + … for all 1 ≤ i ≤ r_P, and by (G) we can assume that β_2, …, β_r_P are nonzero.
Consider A ∈ F(r_L, d), assume that its rows correspond to truncations of power series f_1, …, f_r_L∈C̅[[x - α_i]].
By ε(g_i, A) we denote the r_L × (d + 1)-matrix whose rows are truncations of f_1∘ g_i, …, f_r_L∘ g_i ∈C̅[[x]] at degree d + 1.
We can write ε(g_i, A) = A · T(g_i),
where T(g_i) is an upper triangular (d + 1) × (d + 1)-matrix depending only on g_i
with 1, β_i, …, β_i^d on the diagonal.
Furthermore, if β_i = 0 and g_i(x) = α_i + γ_i x^2 + …, then the k-th row of T(g_i)
is zero for k ≥ (d + 3)/2,
and starts with 2(k - 1) zeroes followed by γ_i^(k - 1) for k < (d + 3)/2.
Let the j-th row of A correspond to a polynomial f_j = (x - α_i)^(j - 1) + O((x - α_i)^r_L).
The substitution operation f_j → f_j ∘ g_i is linear with respect to the coefficients of f_j, so ε(g_i, A) = A · T(g_i) for some matrix T(g_i).
Since the coefficient of x^k in f_j ∘ g_i is a linear combination of coefficients of (x - α_i)^l with l ≤ k in f_j, the matrix T(g_i) is upper triangular.
Since (x - α_i)^k ∘ g_i = β_i^k x^k + O(x^k + 1), T(g_i) has 1, β_i, …, β_i^d on the diagonal.
The second claim of the lemma can be verified by a similar computation.
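A tiny sympy sketch of this matrix (the toy numbers are our own):

import sympy as sp

x = sp.symbols('x')

def T_matrix(g, alpha, d):
    # (d+1) x (d+1) substitution matrix: row k holds the coefficients of
    # (g(x) - alpha)^k truncated at degree d, so that eps(g, A) = A * T(g)
    rows = []
    for k in range(d + 1):
        p = sp.expand((g - alpha)**k)
        rows.append([p.coeff(x, i) for i in range(d + 1)])
    return sp.Matrix(rows)

T = T_matrix(1 + 2*x + 5*x**2, 1, 4)            # g(0) = alpha = 1, beta = 2
print(T.is_upper, [T[k, k] for k in range(5)])  # True, [1, 2, 4, 8, 16] = beta^k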
If β_i ≠ 0, then the matrix ε(g_i, A) has the form (A_0 A_1), where A_0 is an upper triangular matrix over C̅, and the entries of A_1 are linearly independent linear forms in the entries of A.
An element of the affine space W = ( F(r_L, r_L r_P))^r_P is a tuple of matrices N_1, …, N_r_P∈ F(r_L, r_Lr_P), where
every N_i has the form N_i = (I_r_L Ñ_i).
Entries of Ñ_1, …, Ñ_r_P are coordinates on W, so we will view entries of Ñ_i as a set X_i of algebraically independent variables.
We will represent N as a single (r_L r_P) × (r_L r_P + 1)-matrix
N = [ N_1; ⋮; N_r_P ], and set ε(N) = [ ε(g_1, N_1); ⋮; ε(g_r_P, N_r_P) ].
For any matrix A, by A_(1) and A_(2) we denote A without the last column and without the last but one column, respectively.
By π we denote the composition ε∘φ.
Since π(L) represents solutions of M(L) at zero truncated at degree r_Lr_P + 1, properties of the operator L ∈ V can be described in terms of the matrix π(L):
* M(L) has order less than r_L r_P or has an apparent singularity at zero iff π(L)_(1) is degenerate;
* M(L) has order less than r_L r_P or has an apparent singularity at zero which is either not removable at cost one or of degree greater than one iff both π(L)_(1) and π(L)_(2) are degenerate.
Let X_0 = { L ∈ V | det π(L)_(1) = 0} and Y_0 = { L ∈ V | det π(L)_(2) = 0},
then X_0 ∖ Y_0 ⊂ X ⊂ X_0 and Y ⊂ Y_0.
φ(X_0) is an irreducible subset of W, and φ(Y_0) is a proper algebraic subset of φ(X_0).
The above discussion and the surjectivity of φ imply that φ(X_0) = { N ∈ W | det ε(N)_(1) = 0}.
Hence, we need to prove that det ε(N)_(1) is a nonzero irreducible polynomial in R = C̅[ X_1, …, X_r_P ].
We set A = ε(N)_(1).
We claim that there is a way to reorder columns and rows of A such that it will be of the form
[ B C_1; C_2 D ],
where B and D are square matrices, and
* B is upper triangular with nonzero elements of C̅ on the diagonal;
* entries of D are algebraically independent over the subalgebra generated in R by entries of B, C_1, and C_2.
In order to prove the claim we consider two cases:
* β_1 ≠ 0. By Corollary <ref>, A is already of the desired form with B being an r_L × r_L-submatrix.
* β_1 = 0. Then (G) implies that g_1(x) = α_1 + γ_1 x^2 + … with γ_1 ≠ 0.
Then Lemma <ref> implies that the following permutations would give us the desired block structure with B being an ⌊ 3r_L / 2 ⌋×⌊ 3r_L / 2 ⌋-submatrix,
for columns:
1, 3, …, 2r_L - 1, 2, 4, …, 2⌊ r_L / 2⌋, ∗,
and for rows:
1, 2, …, r_L,r_L + 2, r_L + 4, …, r_L + 2⌊ r_L / 2 ⌋,∗,
where ∗ stands for all other indices in any order.
Using elementary row operations, we can bring A to the form
[ B ∗; 0 D ],
where the entries of D are still algebraically independent.
Hence, det A is proportional to det D, which is irreducible.
In order to prove that φ(Y_0) is a proper subset of φ(X_0), it is sufficient to prove that det ε(N)_(2) is not divisible by det ε(N)_(1).
This follows from the fact that these polynomials are both of degree r_Lr_P - r_L with respect to (algebraically independent)
entries of Ñ_2, …, Ñ_r_P, but involve different subsets of this variable set.
Now we can complete the proof of Theorem <ref>.
Proposition <ref> implies that dim φ(X_0) > dim φ(Y_0).
Since all fibers of φ have the same dimension, dim X_0 > dim Y_0.
Hence, dim X ≥ dim (X_0 ∖ Y_0) = dim X_0 > dim Y_0 ≥ dim Y.
Theorem <ref> is stated only for points satisfying (S1) and (S2).
However, the proof implies that every such point is generically nonsingular.
We expect that the same technique can be used to prove that generically no removable singularities occur
at points violating conditions (S1) and (S2).
This expectation agrees with our computational experiments with random operators and random polynomials.
We think that these experimental results and Theorem <ref> justify the choice c = 1 in Theorem <ref>
in most applications.
On the other hand, neither Theorem <ref> nor our experiments support the choice c=1 in the
case r_P=1. Instead, it seems that in this case the cost for removability is systematically larger. To
see why, consider the special case P=y-x^2 of substituting the polynomial g(x)=x^2 into a
solution f of a generic operator L. If the solution space of L admits a basis of the form
1 + a_1,r_Lx^r_L + a_1,r_L+1x^r_L+1 + ⋯,
x + a_2,r_Lx^r_L + a_2,r_L+1x^r_L+1 + ⋯,
⋮
x^r_L-1 + a_r_L-1,r_Lx^r_L + a_r_L-1,r_L+1x^r_L+1 + ⋯,
and M is the minimal operator for the composition, then its solution space obviously has the basis
1 + a_1,r_Lx^2r_L + a_1,r_L+1x^2r_L+2 + ⋯,
x^2 + a_2,r_Lx^2r_L + a_2,r_L+1x^2r_L+2 + ⋯,
⋮
x^2(r_L-1) + a_r_L-1,r_Lx^2r_L + a_r_L-1,r_L+1x^2r_L+2 + ⋯,
and so the indicial polynomial of M is λ(λ-2)⋯(λ-2(r_L-1)).
According to the theory of apparent singularities <cit.>, M has a removable
singularity at the origin and the cost of removability is as high as r_L.
More generally, if g is a rational function and α is a root of g', so that
g(x) = c + O((x-α)^2), a reasoning along the same lines confirms that such
an α will also be a removable singularity with cost r_L.
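For r_L = 2 this can be checked directly; a minimal sympy sketch (the operator M below is our own toy reconstruction):

import sympy as sp

x, lam = sp.symbols('x lambda')

# L = d^2 has solutions 1 and x; composing with g = x^2 gives solutions {1, x^2},
# annihilated by M = d^2 - (1/x) d, which has an apparent singularity at 0
M = lambda f: sp.diff(f, x, 2) - sp.diff(f, x) / x
print(sp.simplify(M(sp.S(1))), sp.simplify(M(x**2)))      # 0 0
print(sp.factor(sp.simplify(x**2 * M(x**lam) / x**lam)))  # lambda*(lambda - 2)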
Acknowledgement. We thank the referees for their constructive criticism.
|
http://arxiv.org/abs/1701.07443v2 | 20170125190048 | Dark Matter and Exotic Neutrino Interactions in Direct Detection Searches | [
"Enrico Bertuzzo",
"Frank F. Deppisch",
"Suchita Kulkarni",
"Yuber F. Perez Gonzalez",
"Renata Zukanovich Funchal"
] | hep-ph | [
"hep-ph"
] |
HEPHY-PUB 983/17
Dark Matter and Exotic Neutrino Interactions in Direct Detection Searches
Enrico Bertuzzo, Frank F. Deppisch, Suchita Kulkarni, Yuber F. Perez Gonzalez, Renata Zukanovich Funchal
December 30, 2023
==========================================================================
§ INTRODUCTION
The Standard Model (SM) of particle physics, despite its enormous success in describing experimental data, cannot explain dark matter (DM) observations. This has motivated a plethora of Beyond the Standard Model (BSM) extensions. Despite intense searches, none of these BSM extensions has been experimentally observed, leaving us with little knowledge of the exact nature of DM.
The lack of an experimentally discovered theoretical framework that connects the SM degrees of freedom with the DM sector has led to a huge activity in BSM model building. Among various DM scenarios, the Weakly Interacting Massive Particle (WIMP) DM remains the most attractive one, with several experiments actively searching for signs of WIMPs. Within this paradigm, DM is a stable particle by virtue of a 𝒵_2 symmetry under which it is odd. The WIMP interactions with the SM particles can be detected via annihilation (at indirect detection experiments), production (at collider experiments) and scattering (at direct detection experiments). If the WIMP idea is correct, the Earth is subjected to a wind of DM particles that interact weakly with ordinary matter, thus direct detection experiments form a crucial component in the experimental strategies to detect them.
At direct detection experiments, WIMP interactions are expected to induce nuclear recoil events in the detector target material. These nuclear recoils can be, in most detectors, discriminated from the electron recoils produced by other incident particles. Depending on the target material and the nature of the DM-SM interactions, two different kinds of DM interactions can be probed: spin-dependent and spin-independent DM-nucleus scattering. The current limits on spin-independent DM-nucleus interactions are considerably more stringent, and the next generation of direct detection experiments will probe spin-independent interactions even further by lowering the energy threshold and increasing the exposure.
A signal similar to DM scattering can also be produced by coherent neutrino scattering off nuclei (CNSN) in DM direct detection experiments <cit.>, hence constituting a background to the WIMP signal at these experiments. Unlike more conventional backgrounds, such as low energy electron recoil events or neutron scattering due to ambient radioactivity and cosmic ray exposure, the CNSN background cannot be reduced. The main sources contributing to the neutrino background are the fluxes of solar and atmospheric neutrinos <cit.>, both fairly well measured in neutrino oscillation experiments <cit.>. Within the SM, CNSN originates from the exchange of a Z boson via neutral currents. Given the minuscule SM cross-section σ_CNSN ≲ 10^-39 cm^2 for neutrino energies ≲ 10 MeV, and the insensitivity of the existing DM detectors to this cross-section, CNSN events have yet to be experimentally observed. The minimum DM-nucleus scattering cross-section at which the neutrino background becomes unavoidable is termed the neutrino floor <cit.>. In fact, the neutrino floor limits the DM discovery potential of direct detection experiments, so diminishing the uncertainties on the determination of the solar and atmospheric neutrino fluxes, as well as the direct measurement of CNSN, is very important. Fortunately, dedicated experiments are being developed to try to directly detect CNSN <cit.> in the very near future.
The absence of any WIMP signal at the existing direct detection experiments has resulted in the need for next generation experiments. It is expected that these experiments will eventually reach the sensitivity to measure solar (and perhaps atmospheric) neutrinos from the neutrino floor. It thus becomes important to analyse the capacity of these experiments to discriminate between DM and neutrino scattering events. It has been shown that a sufficiently strong Non-Standard Interaction (NSI) contribution to the neutrino – nucleus scattering can result in a signal at direct detection experiments <cit.>. Several attempts have been made to discriminate between DM scattering and neutrino scattering events <cit.>.
The cases considered so far involve the presence of BSM in either the neutrino scattering or the DM sector. However, it is likely that a BSM mediator communicates with both the neutrino and the DM sector, and the DM and a hidden sector may even be responsible for the light neutrino mass generation <cit.>. In such cases, it is important to consider the combined effect of neutrino and DM scattering at direct detection experiments. In this work we analyse quantitatively the effect of the presence of BSM physics communicating to both the neutrinos and the DM sector on the DM discovery potential at future direct detection experiments.
The paper is organized as follows. In section <ref> we define the simplified BSM models we consider while section <ref> is dedicated to calculational details of neutrino scattering and DM scattering at direct detection experiments. Equipped with this machinery, in section <ref>, we describe the statistical procedure used to derive constraints for the existing and future experiments. We consider the impact of the BSM physics in the discovery potential of direct detection experiments in section <ref>. Finally, we conclude in section <ref>.
§ THE FRAMEWORK
Working within the framework of simplified models, we consider scenarios where the SM is extended with one DM and one mediator field. The DM particle is odd under a 𝒵_2 symmetry, while the mediator and the SM content are 𝒵_2 even. The symmetry forbids the decay of DM to SM particles and leads to 2 → 2 processes between the SM and DM sectors, which result in the relic density generation, as well as signals at (in)direct detection experiments.
To be concrete, we extend the SM sector by a Dirac DM fermion, χ, with mass m_χ and consider two distinct possibilities for the mediator. In our analysis we will only specify the couplings which are relevant for CNSN and DM-nucleus scattering, namely, the couplings of the mediator to quarks, neutrinos and DM. We will explicitly neglect mediator couplings to charged leptons, and comment briefly on possible UV-complete models.
§.§ Vector mediator
In this scenario, we extend the SM by adding a Dirac fermion DM, χ, with mass m_χ and a real vector boson, V_μ, with mass m_V. The relevant terms in the Lagrangian are:
L_ vec = V_μ (J_f^μ + J_χ^μ)+ 1/2 m_V^2 V_μ V^μ ,
where the currents are (f={u,d,ν})
J_f^μ = ∑_f f̄ γ^μ (g_V^f + g_A^f γ_5 ) f ,
J_χ^μ = χ̄ γ^μ (g_V^χ + g_A^χ γ_5 ) χ .
Here, g_V^f and g_A^f are the vector and axial-vector couplings of SM fermions to the vector mediator V_μ, while g_V^χ and g_A^χ define the vector and axial-vector couplings between the mediator V_μ and χ. The Lagrangian contains both left-and right-handed currents, thus implicitly assuming the presence of an extended neutrino sector either containing sterile neutrinos or right-handed species. We will not go into the details of such an extended neutrino sector, simply assuming the presence of such left-and right-handed currents and dealing with their phenomenology.
Following the general philosophy of simplified DM models <cit.>, we write our effective theory after electroweak symmetry breaking (EWSB), assuming all the couplings to be independent. This raises questions about possible constraints coming from embedding such simplified models into a consistent UV-completion. The case of a U(1) gauge extension has for example been studied in <cit.>, where it has been shown that, depending on the vector-axial nature of the couplings between the vector and the fermions (either DM or SM particles), large regions of parameter space may be excluded.[Even without considering the additional fermionic content which may be needed to make the model anomaly free.] In our case we allow the possibility of different couplings between the members of weak doublets, i.e. terms with isospin breaking independent from the EWSB. As shown in <cit.>, such a possibility arises, for example, at the level of d=8 operators, implying that the couplings are likely to be very small. Moreover, we expect a non-trivial contribution of the isospin breaking sector to electroweak precision measurements, in particular to the T parameter. Since the analysis is however highly model dependent, we will not pursue it here.
We briefly comment here on collider limits for the new neutral vector boson. For m_V<209 GeV, limits from LEP I <cit.>, analyzing the channel e^+ e^- →μ^+ μ^-, impose that its mixing with the Z boson has to be ≲ 10^-3, implying the new gauge coupling to be ≲ 10^-2. This limit can be evaded in the case of a U(1) gauge extension, if the new charges are not universal and the new boson does not couple (or couples very weakly) to muons, or by small U(1) charges in extensions involving extra scalar fields (see, for instance, eq. (2.5) of ref. <cit.>). For m_V>209 GeV, there are also limits from LEP II <cit.>, Tevatron <cit.> and the LHC <cit.>. Since these limits depend on the fermion U(1) charges, they can be either avoided or highly suppressed.
§.§ Scalar mediator
For a scalar mediator, we extend the SM by adding a Dirac fermion DM, χ, with mass m_χ and a real scalar boson, S, with mass m_S. The relevant terms in the Lagrangian with the associated currents are
L_sc = S( ∑_f g_S^f f̄ f + g_S^χ χ̄ χ ) - 1/2 m_S^2 S^2 .
The couplings g_S^f and g_S^χ define the interaction between the scalar and the SM and the DM sectors, respectively. Similar to the vector mediator interactions, the presence of an extended neutrino sector is assumed in the scalar mediator Lagrangian as well. Since in this work we will focus on the spin-independent cross-section at direct detection experiments, we consider only the possibility of a CP even real scalar mediator.
For a scalar singlet it is easier to imagine a (possibly partial) UV-completion. Take, for example, the case of a singlet scalar field S added to the SM, which admits a quartic term V ⊃ λ_HS |H|^2 S^2 and takes the vacuum expectation value (VEV) v_S. Non-local dimension-6 operators such as S^2 ℓ̄_L H e_R are generated (with ℓ_L and e_R being the SM lepton doublets and singlets), which after spontaneous symmetry breaking and for energies below the Higgs boson mass m_H produce the couplings of eq. (<ref>) as g_S^f = y_f sin α, where α = (1/2) arctan( λ_HS v v_S/(m_S^2 - m_H^2) ) and y_f is the fermion Yukawa coupling. The coupling with neutrinos can be arranged, for example, in the case of a neutrinophilic 2HDM <cit.>. Typical values of g_S^f can be inferred from the specific realization of the simplified model, but in the remainder of this paper, we will remain agnostic about realistic UV-completions in which the simplified models we consider can be embedded, focusing only on the information that can be extracted from CNSN. We stress however that, depending upon the explicit UV-completion, other constraints apply and must be taken into account to assess the viability of any model. For example, in generic BSM scenarios with a new mediator coupling to fermions, the dijet and dilepton analyses at the LHC put important bounds, as has been exemplified in <cit.>. In this paper we aim to concentrate only on the constraints arising from direct detection experiments.
§ SCATTERING AT DIRECT DETECTION EXPERIMENTS
§.§ Neutrino and dark matter Scattering
Let us now remind the reader about the basics of CNSN. In the SM, coherent neutrino scattering off nuclei is mediated by neutral currents. The recoil energy released by the neutrino scattering can be measured in the form of heat, light or phonons. The differential cross-section in terms of the nuclear recoil energy E_R reads <cit.>
. dσ^ν/dE_R|_SM = ( Q_V^SM )^2 F^2(E_R) ( G_F^2 m_N/4π ) ( 1 - m_N E_R/(2 E_ν^2) ) ,
with the SM coupling factor
Q_V^ SM = N + (4 s_W^2 -1) Z .
Here, N and Z are the number of neutrons and protons in the target nucleus, respectively, F(E_R) the nuclear form factor, E_ν the incident neutrino energy and m_N the nucleus mass; G_F is the Fermi constant and s_W = sin θ_W is the sine of the weak mixing angle.
F(E_R) = 3 j_1(q(E_R) r_N)/(q(E_R) r_N) · exp( -1/2 [s q(E_R)]^2 ),
where j_1(x) is a spherical Bessel function, q(E_R)=√(2 m_n (N+Z) E_R) the momentum exchanged during the scattering, m_n ≃ 932 MeV the nucleon mass, s∼ 0.9 the nuclear skin thickness and r_N≃ 1.14 (Z+N)^1/3 is the effective nuclear radius.
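For reference, a small numerical sketch of this form factor (the keV/fm unit handling, with ħc = 197327 keV·fm, is our own choice, not from the text):

import numpy as np

def helm_form_factor(E_R_keV, Z, N, s=0.9):
    # Helm form factor F(E_R) of the equation above; E_R in keV,
    # r_N = 1.14 (Z+N)^(1/3) fm, s ~ 0.9 fm, m_n ~ 932 MeV
    A = Z + N
    q = np.sqrt(2.0 * 0.932e6 * A * E_R_keV) / 197327.0   # momentum transfer, fm^-1
    xq = q * 1.14 * A**(1.0 / 3.0)
    j1 = (np.sin(xq) - xq * np.cos(xq)) / xq**2           # spherical Bessel j_1
    return 3.0 * j1 / xq * np.exp(-0.5 * (q * s)**2)

print(helm_form_factor(10.0, 54, 77))   # e.g. xenon at E_R = 10 keV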
In the case of the vector model defined in eq. (<ref>), the differential cross-section gets modified by the additional V exchange. The total cross-section should be calculated as a coherent sum of SM Z and vector V exchange, and reads
. dσ^ν/dE_R|_V = G_V^2 . dσ^ν/dE_R|_SM , with G_V = 1 + (√2/G_F) (Q_V/Q_V^SM) (g_V^ν - g_A^ν)/(q^2 - m_V^2) .
Here, the coupling factor Q_V of the exotic vector boson exchange is given by <cit.>
Q_V = (2Z+N) g_V^u + (2N+Z) g_V^d ,
and q^2 = -2 m_N E_R is the square of the momentum transferred in the scattering process. To obtain eq. (<ref>), we assumed that the neutrino production in the Sun is basically unaffected by the presence of NP, in such a way that only LH neutrinos hit the target. As expected, if the new vector interacts only with RH neutrinos, g_V^ν = g_A^ν, the NP contribution vanishes completely and no modification of the CNSN is present. On the other hand, when g_V^ν ≠ g_A^ν, the interference term proportional to g_V^ν - g_A^ν can give both constructive and destructive interference; in particular, remembering that q^2 - m_V^2 = -(2 m_N E_R + m_V^2) is always negative, we have constructive interference for g_V^ν < g_A^ν. For a detailed discussion of the interference effects at direct detection using the effective theory formalism, see <cit.>. As a last remark, let us notice that, due to the same Dirac structure of the SM and NP amplitudes, the correction to the differential cross-section amounts to an overall rescaling of the SM one.
For the simplified model with a scalar mediator defined in eq. (<ref>), the differential cross-section has a different form,
. dσ^ν/dE_R|_S = . dσ^ν/dE_R|_SM + F^2(E_R) ( G_S^2 G_F^2 m_S^4/4π ) · E_R m_N^2/( E_ν^2 (q^2 - m_S^2)^2 ) , with G_S = |g_S^ν| Q_S/(G_F m_S^2) .
In this case, the modified differential cross-section is not simply a rescaling of the SM amplitude, but due to the different Dirac structure of the Sν̅ν vertex with respect to the SM vector interaction, it may in principle give rise to modification of the shape of the distribution of events as a function of the recoil energy. However, as we will see, for all practical purposes the impact of such modification is negligible.
Using the analysis presented in <cit.>, the coupling factor for the scalar boson exchange is given by
Q_S = Z m_n [∑_q=u,d,s g_S^q f_Tq^p/m_q +2/27( 1- ∑_q=u,d,s f_Tq^p ) ∑_q=c,b,tg_S^q/m_q]
+ N m_n [∑_q=u,d,s g_S^q f_Tq^n/m_q +2/27( 1- ∑_q=u,d,s f_Tq^n ) ∑_q=c,b,tg_S^q/m_q] .
The form factors f_Tq^p, f_Tq^n capture the effective low energy coupling of a scalar mediator to a proton and neutron, respectively, for a quark flavor q. For our numerical analysis we use
f_Tu^p = 0.0153, f_Td^p=0.0191, f_Tu^n = 0.011, f_Td^n = 0.0273 and f_Ts^p,n = 0.0447, which are the values found in micrOMEGAs <cit.>.
A more recent determination of some of these form factors can be found in Ref. <cit.>, we have used this estimation to
determine the effect of the form factors on our final result (see Sec. <ref>).
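As a numerical cross-check, the coupling factor can be evaluated directly; the quark masses below are indicative values assumed by us, not taken from the text:

m_q = {'u': 2.2e-3, 'd': 4.7e-3, 's': 0.095, 'c': 1.27, 'b': 4.18, 't': 173.0}  # GeV
fT = {'p': {'u': 0.0153, 'd': 0.0191, 's': 0.0447},
      'n': {'u': 0.011,  'd': 0.0273, 's': 0.0447}}
m_n = 0.932  # GeV

def Q_S(Z, N, g=1.0):
    # scalar coupling factor of the equation above for universal couplings g_S^q = g
    def per_nucleon(f):
        light = sum(g * f[q] / m_q[q] for q in 'uds')
        heavy = (2.0 / 27.0) * (1.0 - sum(f.values())) * sum(g / m_q[q] for q in 'cbt')
        return m_n * (light + heavy)
    return Z * per_nucleon(fT['p']) + N * per_nucleon(fT['n'])

print(Q_S(54, 77))   # xenon: roughly 1.4e3, in the ballpark of the value quoted below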
A comment about the normalization of G_V and G_S is in order. G_V = 1 indicates recovery of purely SM interactions with no additional contributions from exotic interactions. For the scalar case, the situation is quite different: G_S includes Q_S, a quantity that depends on the target material. For LUX, Q_S ≈ 1362 g_S^q, considering universal quark-mediator couplings, hence for |g_S^ν| ∼ 1, |g_S^q| ∼ 1 and m_S ∼ 100 GeV, natural values of G_S are ∼ 10^4.
Turning now to DM, its scattering off the nucleus can give rise to either spin-independent or spin-dependent interactions. In our analysis we will consider only the spin-independent scattering [For the mediators we consider here, the spin-dependent cross-section is in fact velocity suppressed by v^2, see for instance <cit.>.], as the next generation experiments sensitive to this interaction will also be sensitive to neutrino scattering events. The spin-independent differential cross-section in each of the two simplified models is given by
. dσ^χ_SI/dE_R|_V = F^2(E_R) (g_V^χ)^2 Q_V^2 m_χ m_N/( 4π E_χ (q^2 - m_V^2)^2 ) ,
. dσ^χ_SI/dE_R|_S = F^2(E_R) (g_S^χ)^2 Q_S^2 m_χ m_N/( 4π E_χ (q^2 - m_S^2)^2 ) ,
with the energy E_χ of the incident DM particle and all other variables as previously defined.
§.§ Recoil events induced by DM and neutrino scattering
Given the detector exposure, efficiency and target material, the above specified differential cross-sections can be converted into recoil event rates.
We first look at the recoil event rate induced by neutrino scattering, where the differential recoil rate is given by
.dR/dE_R|_ν = 𝒩 ∫_E^ν_min (dΦ/dE_ν) (dσ^ν/dE_R) dE_ν .
Here, 𝒩 is the number of target nuclei per unit mass, dΦ/dE_ν the incident neutrino flux and E^ν_ min=√(m_N E_R/2) is the minimum neutrino energy. dσ^ν/dE_R is the differential cross-section as computed in Eqs. (<ref>)-(<ref>) for the vector and scalar mediator models, respectively. For our numerical analysis, we use the neutrino fluxes from <cit.>. Integrating the recoil rate from the experimental threshold E_ th up to 100 keV <cit.>, one obtains the number of neutrino events
Ev^ν = ∫_E_th^100 keV .dR/dE_R|_ν ε(E_R) dE_R ,
to be computed for either the scalar or the vector mediator models. Here, E_th is the detector threshold energy and ε(E_R) is the detector efficiency function.
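Schematically (a sketch with our own function signatures, using scipy for the quadrature):

import numpy as np
from scipy.integrate import quad

def dR_dER_nu(E_R, flux, dsigma_dER, m_N, n_targets, E_nu_max=200.0):
    # differential neutrino recoil rate of the equation above: n_targets nuclei per
    # unit detector mass, flux(E_nu) the differential neutrino flux,
    # dsigma_dER(E_nu, E_R) the CNSN cross-section; E_nu_max cuts off the quadrature
    E_min = np.sqrt(m_N * E_R / 2.0)
    rate, _ = quad(lambda E_nu: flux(E_nu) * dsigma_dER(E_nu, E_R), E_min, E_nu_max)
    return n_targets * rate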
For the DM scattering off nuclei, the differential recoil rate also depends on astrophysical parameters such as the local DM density, the velocity distribution and it is given as
.dR/dE_R|_χ = ( 𝒩 ρ_0/(m_N m_χ) ) ∫_v_min v f(v) (dσ^χ_SI/dE_R) d^3v ,
where ρ_0=0.3 GeV/c^2/cm^3 is the local DM density <cit.>, v is the magnitude of the DM velocity, v_min(E_R) is the minimum DM speed required to cause a nuclear recoil with energy E_R for an elastic collision and f(v) the DM velocity distribution in the Earth's frame of reference. This distribution is in principle modulated in time due to the Earth's motion around the Sun, but we ignore this effect here as it is not relevant for our purposes. If the detector has different target nuclides, one has to sum over all their weighted contributions as, for instance, is done in Ref. <cit.>.
In what follows we will assume a Maxwell-Boltzmann distribution, given as
f(v) = 1/( N_esc (2π σ_v^2)^(3/2) ) exp[ -(v + v_lab)^2/(2σ_v^2) ] for |v + v_lab| < v_esc, and f(v) = 0 for |v + v_lab| ≥ v_esc,
where v_ esc=544 km s^-1, v_ lab=232 km s^-1 and N_ esc=0.9934 is a normalization factor taken from <cit.>.
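The velocity integral ∫_{v > v_min} f(v)/v d^3v entering the rate below can then be evaluated, e.g., by Monte Carlo; σ_v = v_0/√2 ≃ 156 km/s is our assumption, not stated in the text:

import numpy as np

def mean_inverse_speed(v_min, v_lab=232.0, v_esc=544.0, sigma_v=156.0, n=200000):
    # <1/v> over the boosted, escape-truncated Maxwellian, in (km/s)^-1:
    # halo-frame velocities u are sampled, then shifted to the Earth frame
    rng = np.random.default_rng(1)
    u = rng.normal(0.0, sigma_v, size=(n, 3))
    u = u[np.linalg.norm(u, axis=1) < v_esc]          # |v + v_lab| < v_esc cut
    v = np.linalg.norm(u - np.array([0.0, 0.0, v_lab]), axis=1)
    return np.mean(np.where(v > v_min, 1.0 / v, 0.0))

print(mean_inverse_speed(300.0))   # of order 2e-3 (km/s)^-1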
In order to constrain only the DM-nucleon interaction cross-section σ_χ n at zero momentum transfer, which is independent of the type of experiment, it is customary to write eq. (<ref>) as
.dR/dE_R|_χ = ( 𝒩 ρ_0 σ_χn (Z+N)^2 F^2(E_R)/(2 μ_n^2 m_χ) ) ∫_v_min f(v)/v d^3v ,
where μ_n = m_n m_χ / (m_n + m_χ) is the reduced mass of the DM-nucleon system. For the cases we are considering (NP = V or S), we have <cit.>
σ_χn = (μ_n^2/μ_N^2) (1/(Z+N)^2) ∫_0^(2 μ_N^2 v^2/m_N) .dσ_SI(E_R=0)/dE_R|_NP dE_R
= (g^χ_NP)^2 Q_NP^2 μ_n^2/( π m_NP^4 (Z+N)^2 ) ,
where μ_N = m_N m_χ / (m_N + m_χ) is the reduced mass of the DM-nucleus system and we used that F(0)=1. The number of DM events per ton-year can be obtained using an expression analogous to eq. (<ref>), explicitly
Ev ^χ = ∫_E_ th .dR/dE_R|_χ ε(E_R) dE_R .
§.§ Background free sensitivity in the presence of exotic neutrino interactions
The presence of CNSN at direct detection experiments implies a minimal DM-nucleon scattering cross-section below which CNSN events cannot be avoided; in this sense, direct detection experiments no longer remain background-free. This minimum cross-section is different for different experiments, depending on the detector threshold, exposure and target material. Using the definition given in eq. (<ref>), it is possible to represent the CNSN in the (m_χ, σ_χn) plane by introducing the so-called one neutrino event contour line. This line essentially defines the DM mass dependent threshold/exposure pairs that optimize the background-free sensitivity estimate at each mass while having a background of one neutrino event. The presence of additional mediators will modify this minimum cross-section with respect to the SM and hence modify the maximum reach of an experiment. In this section, we show how the one neutrino event contour line changes due to the additional vector and scalar mediators considered in Eqs. (<ref>) and (<ref>).
To compute the one neutrino event contour line we closely follow Ref. <cit.>. Considering, for instance, a fictitious Xe target experiment, we determine the exposure to detect a single neutrino event,
E_ν(E_th) = 1 / ( ∫_E_th .dR/dE_R|_ν dE_R ) ,
as a function of energy thresholds in the range 10^-4 keV ≤ E_th ≤ 10^2 keV, varied in logarithmic steps. For each threshold we then compute the background-free exclusion limit, defined at 90% C.L. as the curve along which we obtain 2.3 DM events for the computed exposure:
σ_χn^1ν = 2.3 / ( E_ν(E_th) ∫_E_th .dR/dE_R|_χ, σ_χn=1 dE_R ) .
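In pseudocode form (the rate functions are placeholders for the expressions above):

import numpy as np
from scipy.integrate import quad

def one_nu_sigma(E_th, nu_rate, dm_rate_unit, E_max=100.0):
    # background-free 90% C.L. cross-section at one threshold: exposure giving
    # one neutrino event, then the sigma_{chi n} yielding 2.3 DM events
    exposure = 1.0 / quad(nu_rate, E_th, E_max)[0]
    return 2.3 / (exposure * quad(dm_rate_unit, E_th, E_max)[0])

def one_nu_contour(nu_rate, dm_rate_unit, thresholds=np.logspace(-4, 2, 60)):
    # minimum over thresholds = best background-free sensitivity at this DM mass
    return min(one_nu_sigma(E, nu_rate, dm_rate_unit) for E in thresholds)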
If we now take the lowest cross-section of all limits as a function of the DM mass, we obtain the one neutrino event contour line, corresponding to the best background-free sensitivity achievable for each DM mass for a one neutrino event exposure. Let us stress that the one neutrino event contour line, as defined in this section, is computed with a 100% detector efficiency. The effect of a finite detector efficiency will be taken into account in Sec. <ref>, where we will compute how the new exotic neutrino interactions can affect the discovery potential of direct detection DM experiments. Comparing eq. (<ref>) with Eqs. (<ref>) and (<ref>), we see that the simplified models introduced in Sec. <ref> can modify the one neutrino event contour line. In fact, such modifications have been studied in specific models with light new physics, e.g. in <cit.>. We show in Fig. <ref> some examples of a modified one neutrino event contour line for our models, fixing the values of the parameters 𝒢_V and 𝒢_S as specified in the legends. These parameters have been chosen to be still allowed by current data, see sections <ref> and <ref>. The left panel of the figure describes changes in the one neutrino event contour line in the presence of a new vector mediator. As will be explained below, it is possible to have a cancellation between SM and exotic neutrino interactions leading to a lowering of the contour line, as shown for the case of G_V = 0.3. It is also worth recalling that G_V includes the SM contribution, i.e., G_V = 1 is the SM case. For the vector case the one neutrino event contour line is effectively a rescaling of the SM one. Figure <ref> (right panel), on the other hand, shows the modification of the contour line for a scalar mediator. Note that, unlike in the vector scenario, the factor G_S has a different normalization. No significant change in the one neutrino event contour line is expected in the scalar case.
There are a few remarks we should make here. First, it is possible, in the context of the vector mediator model, to cancel the SM contribution to CNSN and completely eliminate the neutrino background. For mediator masses heavy enough to neglect the q^2 dependence of the cross-sections, this happens when, cf. eq. (<ref>),
g^ν_V - g^ν_A = (Q_V^SM/Q_V) (G_F m_V^2/√2) = (a^ν_V/g_V^q) ( m_V/GeV )^2 ,
where for the last equality we assume g_V^u=g_V^d=g_V^q and a^ν_V is a numerical value that depends only on the target nucleus. We show in table <ref> the values of a^ν_V for various nuclei.
Second, in the case of the scalar scenario, it is possible to compensate for only part of the SM contribution to the CNSN. Inspecting Eqs. (<ref>) and (<ref>), we see that the positive scalar contribution can at most compensate the negative SM term proportional to E_R/E_ν^2, resulting in an effective increase of the cross-section. This is accomplished for
g^ν_S = (Q_V^SM/Q_S) (G_F m_S^2/√2) = (a^ν_S/g_S^q) ( m_S/GeV )^2 ,
where again a^ν_S is a numerical value that depends only on the target nucleus. Its value for different nuclei are shown in table <ref>. We show in the right panel of figure <ref> an example of this situation, orange line, G_S = 52.3, corresponding to the case of g_S^q = 1 and m_S = 100 GeV.
Finally, we should note that the one neutrino event contour line only gives us a preliminary estimate of the minimum cross-sections that can be reached by a DM direct detection experiment. It is worth recalling that this estimate is a background-free sensitivity. Interactions modifying both neutrino and DM sector physics will lead to a non-standard neutrino CNSN background which should be taken into account. Furthermore, the compatibility of the observed number of events should be tested against the sum of neutrino and DM events. In this spirit, to answer the question what is the DM discovery potential of an experiment? one has to compute the real neutrino floor. This will be done in Sec. <ref>, which will include a more careful statistical analysis taking into account background fluctuations and the experimental efficiency.
§ CURRENT AND FUTURE LIMITS ON DM-NEUTRINO INTERACTIONS
When new physics interacts with the DM and neutrino sector, the limits from direct detection experiments become sensitive to the sum of DM and neutrino scattering events. A natural question to ask is the capacity of current experiments to constrain this sum. The aim of this section is to assess these constraints and derive sensitivities for the next generation of direct detection experiments. For the analysis of the current limits we consider the results of the Large Underground Xenon (LUX) <cit.> experiment. This choice is based on the fact that this experiment is at present the most sensitive one probing the m_χ> 5 GeV region on which we focus. On the other hand, for the future perspectives we will consider two Xe target based detectors: the one proposed by the LUX-ZonEd Proportional scintillation in LIquid Noble gases (LUX-ZEPLIN) Collaboration <cit.> and the one proposed by the DARk matter WImp search with liquid xenoN (DARWIN) Collaboration <cit.>.
Current bounds. LUX is an experiment searching for WIMPs through a dual phase Xe time projection chamber. We will consider its results after a 3.35 × 10^4 kg-days run presented in 2016 <cit.>, performed with an energy threshold of 1.1 keV. We also use the efficiency function ε(E_R) reported in the same work.
In order to assess the constraining power of current LUX results for the two models presented in Eqs. (<ref>)-(<ref>), we compute the total number of nuclear recoil events expected at each detector as
Ev^ total = Ev^χ + Ev^ν.
Using this total number of events, we compute a likelihood function constructed from a Poisson distribution in order to use their data to limit the parameters of our models,
ℒ(θ̂|N) = P(θ̂|N) = (b+μ(θ̂))^N e^-(b+μ(θ̂))/N! ,
where θ̂ indicates the set of parameters of each model, N the observed number of events, b the expected background and μ(θ̂) is the total number of events Ev^ total. According to <cit.> we use N=2 for the number of observed events and b=1.9 for the estimated background. Maximizing the likelihood function we can obtain limits for the different planes of the parameter space.
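A one-line sketch of this likelihood for the quoted LUX counts:

from scipy.stats import poisson

def log_like(mu, N=2, b=1.9):
    # log of the Poisson likelihood above, with mu the model event count Ev^total
    return poisson.logpmf(N, b + mu)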
In the case of the vector model, we performed a scan of the parameter space in the ranges
0≤| g_V^ν - g_A^ν|≤ 10,
0 ≤| g_V^χ|≤ 1 ,
while we always choose g_A^χ=0 in order to avoid spin-dependent limits. We show our limits in figure <ref> for Λ^-2_V ≡ g^q_V/m_V^2 = 10^-6 GeV^-2 (left) [For g^q_V = 10^-2, 10^-1, 0.25, 0.5 and 1, this corresponds, respectively, to m_V ∼ 100 GeV, 315 GeV, 500 GeV, 710 GeV and 10^3 GeV.] and Λ^-2_V = √(4π) GeV^-2 (right), which corresponds to a light mediator with m_V = 1 GeV and a coupling at the perturbativity limit. In each case, we show the results for three values of the DM mass, m_χ = 10 GeV (violet), 15 GeV (red) and 50 GeV (green). We see that we can clearly distinguish two regions: for Λ^-2_V = 10^-6 GeV^-2, when | g^ν_V - g^ν_A | ≲ 3-4, the DM contribution is the dominant one (in particular, as | g^ν_V - g^ν_A | → 0 the contribution to the neutrino floor is at most the SM one), and sets | g^χ_V | ≲ 2 × 10^-3 (4 × 10^-4) for m_χ = 10 (50) GeV. On the other hand, for larger values of | g^ν_V - g^ν_A |, the number of neutrino events rapidly becomes dominant and no bound on the DM-mediator coupling can be set. For the extreme value Λ^-2_V = √(4π) GeV^-2, one can set the limits | g^χ_V | ≲ 4.3 × 10^-10 (1.2 × 10^-10) for m_χ = 10 (50) GeV and | g_V^ν - g_A^ν | ≲ a few × 10^-6.
Inspection of figure <ref> shows two peculiar features: an asymmetry between the bounds on positive and negative values of g_V^ν-g^ν_A, and the independence of these limits on the DM mass. We see from eq. (<ref>) that the asymmetry can be explained from the dependence of the interference term on the sign of g_V^ν-g^ν_A. Such interference is positive for g_V^ν-g^ν_A<0, explaining why the bounds on negative g_V^ν-g^ν_A are stronger. As for the independence of the g_V^ν-g^ν_A bounds from the DM mass, this can be understood from the fact that when g_V^χ becomes sufficiently small we effectively reach the g_V^χ→ 0 limit in which the DM mass is
not relevant.
Turning to the bounds that the current LUX results impose on the parameter space of the scalar model, we varied the parameters in the ranges
0 ≤ |g_S^ν| ≤ 5 0 ≤ |g_S^χ| ≤ 1 .
Our results are presented in figure <ref>. On the top left (right) panel, we fix Λ^-2_S ≡ g^q_S/m_S^2 = 10^-6 GeV^-2 (Λ^-2_S = √(4π) GeV^-2), for m_χ = 10 GeV (violet), 15 GeV (red) and 50 GeV (green). From these plots we see that LUX can limit | g^χ_S | ≲ 4.5 × 10^-4 (1 × 10^-4) for m_χ = 10 (50) GeV if | g^ν_S | < 0.5, when Λ^-2_S = 10^-6 GeV^-2. For the limiting case Λ^-2_S = √(4π) GeV^-2, we get the bound | g^χ_S | ≲ 1.3 × 10^-10 (3.2 × 10^-11) for m_χ = 10 (50) GeV if | g^ν_S | < 10^-7.
As g^ν_S → 0, the contribution to the neutrino floor tends to the SM one, except for a particular value of g^ν_S g^q_S, as discussed at the end of the previous section. In the opposite limit, i.e. where the neutrino floor dominates, g^χ_S → 0, the current limit is | g^ν_S | ≲ 0.7 (| g^ν_S | ≲ a few × 10^-7) for Λ^-2_S = 10^-6 (= √(4π)) GeV^-2. As in the vector case, we see that this bound does not depend on the DM mass, for the same reasons explained above.
Future sensitivity. To assess the future projected LUX-ZEPLIN sensitivity, we will assume an energy threshold of 6 keV, a maximum recoil energy of 30 keV and a future exposure of 15.34 t-years <cit.>. According to the same reference, we use a 50% efficiency for the nuclear recoil. For DARWIN, we will consider an aggressive 200 t-years exposure, no finite energy resolution but a 30% acceptance for nuclear-recoil events in the energy range of 5–35 keV <cit.>.
Let us now discuss the bounds that can be imposed on the parameter space of our models in case the future experiments LUX-ZEPLIN and DARWIN do not detect any signal. We scan the parameter space over the ranges of Eqs. (<ref>)
and (<ref>), obtaining the exclusion at 90% C.L. The results are presented in the bottom panels of figure <ref> (figure <ref>) for the vector (scalar) model.
In the region in which the DM events dominate, we see that LUX-ZEPLIN will be able to improve the bound on | g_V^χ | and | g_S^χ | by a factor between 2 and 10, depending on the DM mass, while another order of magnitude improvement can typically be reached with DARWIN. However, we also see that, somewhat contrary to expectations, the bounds on the neutrino couplings are expected to be less stringent than the present ones. While the effect is not particularly relevant in the vector case, in the scalar case the LUX-ZEPLIN sensitivity is expected to be about a factor of 4 worse than the current LUX limit. This is due to the higher threshold of the experiment, which limits the number of measurable solar neutrino events. As such, a larger | g_S^ν | is needed to produce a sufficiently large number of events, diminishing the constraining power of LUX-ZEPLIN. While in principle this is also true for the DARWIN experiment, the effect is compensated by the aggressive expected exposure.
§ SENSITIVITY TO DM-NUCLEON SCATTERING IN PRESENCE OF EXOTIC NEUTRINO INTERACTIONS
In section <ref> we computed the background-free sensitivity of direct detection experiments in the presence of exotic neutrino interactions. However, the question of the true 3σ discovery potential, given the background from exotic neutrino interactions, remains unanswered. In this section, we perform a detailed statistical analysis, taking into account the estimated background and the observed number of events, and comparing these against the DM and neutrino interactions via a profile likelihood analysis.
To assess the DM discovery potential of an experiment we calculate, as in Ref. <cit.>, the minimum value of the scattering cross-section σ_χ n as a function of m_χ that can be probed by an experiment. This defines a discovery limiting curve that is the true neutrino floor of the experiment. Above this curve the experiment has a 90% probability of observing a 3σ DM detection. This is done by defining a binned likelihood function <cit.>
L(σ_χ n,m_χ,ϕ_ν,Θ) = ∏_i=1^n_ b P( Ev^ obs_i | Ev^χ_i + ∑_j=1^n_ν Ev^ν_i(ϕ_ν^j);Θ) ×∏_j=1^n_ν L(ϕ_ν^j) ,
where we have a product of Poisson probability distribution functions (P) for each bin i (n_b = 100), multiplied by Gaussian likelihood functions parametrizing the uncertainties on each neutrino flux normalization, L(ϕ_ν^j) <cit.>. The neutrino (Ev^ν) and DM (Ev^χ) numbers of events were computed according to Eqs. (<ref>) and (<ref>), respectively. For each neutrino component j = 1, …, n_ν, the individual neutrino fluxes from solar and atmospheric neutrinos are denoted by ϕ_ν^j, while Θ is a collection of the extra parameters (g_V,S^q, g_V,S^ν, etc.) to be taken into account in the model under consideration. Since we will present the discovery limit in terms of the DM cross-section, note that we will keep the DM-mediator coupling g_V,S^χ free. For this study, we considered only the contributions of the ^8B and hep solar neutrinos and of atmospheric neutrinos, due to the thresholds of the considered experiments. For a fixed DM mass, we can use eq. (<ref>) to test the neutrino-only hypothesis H_0 against the neutrino+DM hypothesis H_1 by constructing the ratio
λ(0) = L(σ_χn = 0, ϕ̂̂_ν, Θ) / L(σ̂_χn, ϕ̂_ν, Θ) ,
where ϕ̂_ν and σ̂_χn are the values of the fluxes and DM cross-section that maximize the likelihood function L(σ̂_χn, ϕ̂_ν, Θ), while ϕ̂̂_ν maximizes L(σ_χn = 0, ϕ̂̂_ν, Θ). For each mass m_χ and cross-section σ_χn we build a probability density function p(Z|H_0) of the test statistic under H_0, the neutrino-only hypothesis. This is performed by constructing an ensemble of 500 simulated experiments, determining for each one the significance Z = √(-2 ln λ(0)) <cit.>. Finally, we compute the significance that can be achieved 90% of the times, Z_90, given by
∫_0^Z_90 p(Z| H_0) dZ =0.90 .
Therefore, the minimum value of the cross-section for which the experiment has a 90% probability of making a 3σ DM discovery is defined as the value of σ_χn that corresponds to Z_90 = 3.
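A compact sketch of this test statistic for one pseudo-experiment, profiling a single overall flux normalization φ with a Gaussian constraint (a simplification of the full per-flux treatment; the 10% width is our assumption):

import numpy as np
from scipy.optimize import minimize

def Z_of_pseudo(n_obs, nu_exp, dm_exp, sigma_phi=0.1):
    # Z = sqrt(-2 ln lambda(0)) for binned counts n_obs, with expected neutrino
    # (nu_exp) and unit-strength DM (dm_exp) templates per bin
    def nll(phi, mu):
        lam = np.clip(phi * nu_exp + mu * dm_exp, 1e-12, None)
        return np.sum(lam - n_obs * np.log(lam)) + 0.5 * ((phi - 1.0) / sigma_phi)**2
    free = minimize(lambda t: nll(t[0], t[1]), x0=[1.0, 1.0],
                    bounds=[(0.1, 3.0), (0.0, None)]).fun
    null = minimize(lambda t: nll(t[0], 0.0), x0=[1.0], bounds=[(0.1, 3.0)]).fun
    return np.sqrt(max(0.0, 2.0 * (null - free)))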
In figure <ref> we show the neutrino floor considering only the SM contribution to the CNSN (dark blue), as well as the result for some illustrative cases in the vector mediator scenario, for the LUX-ZEPLIN experiment with two different energy thresholds. The case G_V = 3.6 (light blue) can be considered an extreme one, corresponding to the current limit on | g_V^ν - g_A^ν | (∼ 10^-6) for Λ_V^-2 = √(4π) GeV^-2. Above this curve a 3σ DM discovery can be achieved by the experiment, while below this curve it is difficult to discriminate between a DM signal and a non-standard (vector mediated) contribution to the neutrino floor. We also show the case G_V = 0.3 (red), where the new vector contribution comes with the opposite sign to the SM one, so it actually cancels some of the standard signal. On the other hand, in the case corresponding to the threshold E_th = 0.1 keV, we find the same phenomenon noticed in the literature: close to a DM mass of 6 GeV, the discovery limit is substantially worsened because of the similarity of the spectra of ^8B neutrinos and the WIMP, see, for instance, <cit.>. However, the minimum cross-section that can be probed is different for each parameter 𝒢_V, due to the contribution of the vector mediator. For the case of E_th = 6 keV, we see that the vector mediator decreases or increases the discovery limit according to the value of G_V.
In figure <ref> we show the neutrino floor considering only the SM contribution to the CNSN (dark blue), as well as the result for some illustrative cases in the scalar mediator scenario, for the LUX-ZEPLIN experiment with two different energy thresholds. Here the case G_S = 82.8 (orange) can be considered an extreme one, since it corresponds to the current limit on | g_S^ν | (∼ 2 × 10^-7) for Λ_S^-2 = √(4π) GeV^-2. In the case of the lower threshold, we see that the point where the discovery limit is most affected by the ^8B neutrinos is displaced to a mass close to 7 GeV. This shift of the distribution is caused by the extra factor that appears in the scalar case with respect to the SM (see eq. (<ref>)). For the case of E_th = 6 keV, the scalar mediator does not modify the discovery limit significantly. Therefore, we see that, contrary to the vector case, the scalar contribution does not affect very much the discovery reach of the experiment as compared to the one limited by the standard CNSN.
In figure <ref> we show the behavior of the number of CNSN events as a function of the energy threshold of the detector, for a detector efficiency varying from 40% to 60%. From this we see that for G_V = 3.6 and G_S = 82.8, values that saturate the current limit of the LUX experiment for the vector and scalar mediator models, the numbers of neutrino events for E_th ∼ 1 keV are basically the same, and both are about 10 times larger than the SM contribution. However, for the choices E_th ∼ 0.1 keV (lower threshold) and E_th ∼ 6 keV (higher threshold) used in Figs. <ref> and <ref>, the number of CNSN events for the vector model is about 4 times larger than that for the scalar model, explaining the difference in sensitivity between the vector and scalar models at those thresholds. We see again that the SM and the vector mediator model numbers of CNSN events differ simply by a scale factor, independent of the energy threshold, as expected from eq. (<ref>). On the other hand, for the scalar case there is a non-trivial behavior with respect to the SM due to the extra term in the cross-section that depends on E_R m_N/E_ν^2 (see eq. (<ref>)), and thus on E_th. For the lower threshold, low energy ^8B neutrinos become accessible. However, the difference between the SM and the scalar mediator cross-sections diminishes more with lower E_th than it increases with lower E_ν, so the number of CNSN events differs only by a factor ∼ 3. For the higher threshold only atmospheric neutrinos are available, and both the SM and scalar contributions are expected to be of the same order, as the extra scalar contribution is suppressed by E_ν^-2. We also see that a detector efficiency between 40% and 60% does not affect the above discussion; consequently, we do not expect the neutrino floors calculated in this section to be very different had we used 40% or 60% efficiency instead of the 50% we adopted.
We have also performed an estimate of the effect of the uncertainty on the form factors f_Tq^p,n on the results of our calculation and concluded that they can affect the neutrino floor by ∼ 30%.
To exemplify the difficulty in discriminating between an energy spectrum produced by DM collisions and the modified neutrino floor in the two cases studied in this paper, we show in figure <ref> examples of the energy spectrum for the points corresponding to the red stars in figure <ref> (vector) and figure <ref> (scalar). We show explicitly the various contributions: the recoil spectrum produced by DM events only (green), by the standard CNSN (black), and by the non-standard CNSN due either to the vector or the scalar mediator (blue). In red we show the combined spectrum. In both cases, one would be able to discriminate the spectrum due to DM plus SM ν events (orange curve) from only CNSN events (black). However, if there is an extra contribution from non-standard interactions, increasing the neutrino background (blue), one can no longer discriminate this situation from the total spectrum, which also contains DM events (red). Both points were chosen in a region where solar neutrinos dominate the background and are only achievable for a very low energy threshold. For the nominal threshold of the LUX-ZEPLIN experiment, only the vector scenario will affect the sensitivity of the experiment, for σ_χn ≲ a few × 10^-47 cm^2; we do not present here our results for DARWIN as they are qualitatively similar to those of LUX-ZEPLIN.
§ CONCLUSIONS
Coherent neutrino scattering off nuclei is bound to become an irreducible background for the next generation of dark matter direct detection experiments, since the experimental signature is very similar to DM scattering off nuclei. In this work we have considered the case in which new physics interacts with both DM and neutrinos. In this situation, it becomes important to compute the neutrino floor while taking into account the contributions from exotic neutrino interactions. This sets the true discovery limit for direct detection experiments instead of a background-free sensitivity. For definiteness, we have focused on two simplified models, one with a vector and one with a scalar mediator interacting with the DM and the SM particles. We calculated the bounds on the parameter space of the two simplified models imposed by the latest LUX data. These are presented in Figs. <ref> and <ref>.
The most interesting case is, however, the one in which some signal could be detected in future DM direct detection experiments. In this case our models predict modifications to the standard neutrino floor. The main result of our analysis is shown in Figs. <ref> and <ref>, in which we show that it is possible to find points in the parameter space of the models where not only are the numbers of events produced by DM and by the modified CNSN compatible, but the spectra are also very similar. This immediately implies that the modified CNSN can mimic a DM signal above the standard neutrino floor, challenging the interpretation of a DM discovery signal. We show that the problem is more significant for experiments that can probe m_χ < 10 GeV or σ_χ n ≲ 10^-47 cm^2. Although a new scalar interaction will not, in practice, affect the discovery reach of future experiments such as LUX-ZEPLIN or DARWIN, a new vector interaction can mimic DM signals in a region above the standard neutrino floor of those experiments, challenging any discovery in this region.
It should be noted that the scenarios considered here lead to a variety of signatures apart from a modification of the CNSN at direct detection experiments. First and foremost, we did not account for any relic density constraints from DM annihilation; throughout the analysis we have assumed that the DM relic density constraint is satisfied. Secondly, DM annihilation to neutrinos will generate signals at indirect detection experiments, which will lead to additional constraints on the parameter space. Direct production of DM particles at the LHC, constrained by monojet searches, will also be an additional signature of interest. Finally, exotic neutrino interactions themselves are constrained by several neutrino experiments and should be taken into account for a more complete analysis.
Despite these possible extensions of the study, our analysis is new in the sense that it considers for the first time the combined effect of exotic neutrino and DM interactions at the direct detection experiments. We demonstrate the current limits on the combined parameter space for the DM and neutrino couplings and finally demonstrate the reach of direct detection experiments.
We are thankful to Achim Gütlein for several very useful discussions about neutrino floor calculations. We also would like to thank Geneviève Bélanger for helpful discussions. SK wishes to thank USP for hospitality during her visit, where this work originated. SK is supported by the `New Frontiers' program of the Austrian Academy of Sciences. This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Conselho Nacional de Ciência e Tecnologia (CNPq).
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 674896.
JHEP
|
http://arxiv.org/abs/1701.07852v2 | 20170126192941 | An Empirical Analysis of Feature Engineering for Predictive Modeling | [
"Jeff Heaton"
] | cs.LG | [
"cs.LG"
] |
An Empirical Analysis of Feature Engineering for Predictive Modeling
Jeff Heaton
McKelvey School of Engineering
Washington University in St. Louis
St. Louis, MO 63130
Email: [email protected]
December 30, 2023
==================================================================================================================================
Machine learning models, such as neural networks, decision trees, random forests, and gradient boosting machines, accept a feature vector and provide a prediction. These models learn in a supervised fashion where we provide feature vectors mapped to the expected output. It is common practice to engineer new features from the provided feature set. Such engineered features will either augment or replace portions of the existing feature vector. These engineered features are essentially calculated fields based on the values of the other features.
Engineering such features is primarily a manual, time-consuming task. Additionally, each type of model will respond differently to different kinds of engineered features. This paper reports empirical research to demonstrate what kinds of engineered features are best suited to various machine learning model types. We provide this recommendation by generating several datasets that we designed to benefit from a particular type of engineered feature. The experiment demonstrates to what degree the machine learning model can synthesize the needed feature on its own. If a model can synthesize a planned feature, it is not necessary to provide that feature. The research demonstrated that the studied models do indeed perform differently with various types of engineered features.
§ INTRODUCTION
Feature engineering is an essential but labor-intensive component of machine learning applications <cit.>. Most machine-learning performance is heavily dependent on the representation of the feature vector. As a result, data scientists spend much of their effort designing preprocessing pipelines and data transformations <cit.>.
To utilize feature engineering, the model must preprocess its input data by adding new features based on the other features <cit.>. These new features might be ratios, differences, or other mathematical transformations of existing features. This process is similar to the equations that human analysts design. They construct new features such as body mass index (BMI), wind chill, or Triglyceride/HDL cholesterol ratio to help understand existing features' interactions.
Kaggle and ACM's KDD Cup have seen feature engineering play an essential part in several winning submissions. Feature engineering was applied by hand to the winning KDD Cup 2010 competition entry <cit.>. Additionally, researchers won the Kaggle Algorithmic Trading Challenge with an ensemble of models and feature engineering; those engineered features were likewise created by hand.
Technologies such as deep learning <cit.> can benefit from feature engineering. Most research into feature engineering in the deep learning space has been in image and speech recognition <cit.>. Such techniques are successful in the high-dimension space of image processing and often amount to dimensionality reduction techniques <cit.> such as PCA <cit.> and auto-encoders <cit.>.
§ BACKGROUND AND PRIOR WORK
Feature engineering grew out of the desire to transform non-normally distributed linear regression inputs. Such a transformation can be helpful for linear regression. The seminal work by George Box and David Cox in 1964 introduced a method for determining which of several power functions might be a useful transformation for the outcome of linear regression <cit.>. This technique became known as the Box-Cox transformation.
The alternating conditional expectation (ACE) algorithm <cit.> works similarly to the Box-Cox transformation. An individual can apply a mathematical function to each component of the feature vector outcome. However, unlike the Box-Cox transformation, ACE can guarantee optimal transformations for linear regression.
Linear regression is not the only machine-learning model that can benefit from feature engineering and other transformations. In 1999, researchers demonstrated that feature engineering could enhance rules learning performance for text classification <cit.>. Feature engineering was successfully applied to the KDD Cup 2010 competition using a variety of machine learning models.
§ EXPERIMENT DESIGN AND METHODOLOGY
Different machine learning model types have varying degrees of ability to synthesize various types of mathematical expressions. If the model can learn to synthesize an engineered feature on its own, there is no reason to engineer the feature in the first place. Demonstrating empirically a model's ability to synthesize a particular type of expression shows whether engineered features of this type might be useful to that model. To explore these relations, we created sixteen datasets, each containing the inputs and outputs that correspond to a particular type of engineered feature. If the machine-learning model can learn to reproduce that feature with a low error, it means that the model could have learned that engineered feature without assistance.
For this research we considered only regression machine learning models. We chose the following four model types because of their relative popularity and differences in approach.
* Deep Neural Networks (DNN)
* Gradient Boosted Machines (GBM)
* Random Forests
* Support Vector Machines for Regression (SVR)
To mitigate the stochastic nature of some of these machine learning models, each experiment was run 5 times, and the best run's outcome was used for the comparison. These experiments were conducted in the Python programming language, using the following third-party packages: Scikit-Learn <cit.> and TensorFlow<cit.>. Using this combination of packages, model types of support vector machine (SVM) <cit.><cit.>, deep neural network <cit.>, random forest <cit.>, and gradient boosting machine (GBM) <cit.> were evaluated against the following sixteen selected engineered features:
* Counts
* Differences
* Distance Between Quadratic Roots
* Distance Formula
* Logarithms
* Max of Inputs
* Polynomials
* Power Ratio (such as BMI)
* Powers
* Ratio of a Product
* Rational Differences
* Rational Polynomials
* Ratios
* Root Distance
* Root of a Ratio (such as Standard Deviation)
* Square Roots
The techniques used to create each of these datasets are described in the following sections. The Python source code for these experiments can be downloaded from the author's GitHub page <cit.> or Kaggle <cit.>.
§.§ Counts
The count engineered feature counts the number of elements in the feature vector that satisfies a certain condition. For example, the program might generate a count feature that counts other features above a specified threshold, such as zero. Equation <ref> defines how a count feature might be engineered.
y=∑_i=1^n 1 if x_i > t else 0
The x-vector represents the input vector of length n. The resulting y contains an integer equal to the number of x values above the threshold (t). To generate a count dataset, the y-count was uniformly sampled from integers in the range [1,50], and the corresponding input vectors were then constructed to match. Algorithm <ref> demonstrates this process.
Several example rows of the count input vector are shown in Table <ref>. The y_1 value simply holds the count of the number of features x_1 through x_50 that contain a value greater than 0.
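As a concrete illustration, this generator can be written in a few lines of NumPy. The following is a minimal sketch, not the paper's actual code; the function name, the 10,000-row sample size, and the sampling windows on either side of the threshold are our own illustrative choices:

import numpy as np

def generate_counts(rows=10000, n=50, t=0.0, seed=42):
    # Sample the target counts uniformly from the integers [1, 50], then
    # build each input row with exactly that many values above t.
    rng = np.random.default_rng(seed)
    y = rng.integers(1, 51, size=rows)
    x = np.empty((rows, n))
    for i, cnt in enumerate(y):
        above = rng.uniform(t + 0.1, t + 1.0, size=cnt)
        below = rng.uniform(t - 1.0, t - 0.1, size=n - cnt)
        row = np.concatenate([above, below])
        rng.shuffle(row)   # hide the count in an arbitrary feature order
        x[i] = row
    return x, y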
§.§ Differences and Ratios
Differences and ratios are common choices for feature engineering. To evaluate this feature type, a dataset is generated with x observations uniformly sampled in the real number range [0,1]; a single y prediction is also generated as various differences and ratios of the observations. When sampling uniform real numbers for the denominator, the range [0.1,1] is used to avoid division by zero. The equations chosen are simple difference (Equation <ref>), simple ratio (Equation <ref>), power ratio (Equation <ref>), product power ratio (Equation <ref>) and ratio of a polynomial (Equation <ref>).
y=x_1 - x_2
y=x_1/x_2
y=x_1/x_2^2
y=x_1 x_2/x_3^2
y=1/5x+8x^2
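A minimal NumPy sketch of these generators might look as follows; the variable names and the 10,000-row sample size are assumptions made for illustration:

import numpy as np

rng = np.random.default_rng(0)
rows = 10000
num1, num2 = rng.uniform(0.0, 1.0, size=(2, rows))  # numerators in [0, 1]
den = rng.uniform(0.1, 1.0, size=rows)              # denominators in [0.1, 1]

y_diff = num1 - num2                   # simple difference
y_ratio = num1 / den                   # simple ratio
y_power_ratio = num1 / den**2          # power ratio
y_prod_power = num1 * num2 / den**2    # product power ratio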
§.§ Distance Between Quadratic Roots
It is also useful to see how capable the four machine learning models are at synthesizing ordinary mathematical equations. We generate the final synthesized feature from a distance between the roots of a quadratic equation. The distance between roots of a quadratic equation can easily be calculated by taking the difference of the two outputs of the quadratic formula, as given in Equation <ref>, in its unsimplified form.
y=| -b+√(b^2-4ac)/2a- -b-√(b^2-4ac)/2a|
The dataset for the transformation represented by Equation <ref> is generated by uniformly sampling x values from the real number range [-10,10]. We discard any invalid results.
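Because the root distance algebraically simplifies to √(b²−4ac)/|a|, both the generator and the validity filter are compact. The sketch below is illustrative; the coefficient sampling details and tolerance are our assumptions:

import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.uniform(-10, 10, size=(3, 10000))
disc = b**2 - 4*a*c
valid = (disc >= 0) & (np.abs(a) > 1e-9)  # discard complex roots, degenerate a
a, b, c, disc = a[valid], b[valid], c[valid], disc[valid]
y = np.sqrt(disc) / np.abs(a)             # |root1 - root2|
x = np.column_stack([a, b, c])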
§.§ Distance Formula
The distance formula contains a sum of squared differences inside a radical, as shown in Equation <ref>. The inputs are four x values uniformly sampled from the range [0, 10], and the outcome is the Euclidean distance between (x_1, x_2) and (x_3, x_4).
y=√((x_1 - x_2)^2 + (x_3 - x_4)^2)
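In NumPy this outcome is essentially a one-liner over the four sampled columns (a sketch with an assumed sample size):

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=(10000, 4))
y = np.hypot(x[:, 0] - x[:, 1], x[:, 2] - x[:, 3])  # Euclidean distance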
§.§ Logarithms and Power Functions
Statisticians have long used logarithms and power functions to transform the inputs to linear regression <cit.>. Researchers have shown the usefulness of these functions for transformation for other model types, such as neural networks <cit.>. The log and power transforms used in this paper are of the type shown in Equations <ref>,<ref>, and <ref>.
y=log(x)
y=x^2
y=x^1/2
This paper investigates using the natural log function, the second power, and the square root. For both the log and root transform, random x values were uniformly sampled in the real number range [1,100]. For the second power transformation, the x values were uniformly sampled in the real number range [1, 10]. A single x_1 observation is used to generate a single y_1 observation. The x_1 values are simply random numbers that produce the expected y_1 values by applying the logarithm function.
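All three single-input transforms can be generated as follows (a sketch with assumed sample sizes):

import numpy as np

rng = np.random.default_rng(3)
x_log = rng.uniform(1, 100, 10000);  y_log = np.log(x_log)     # natural log
x_root = rng.uniform(1, 100, 10000); y_root = np.sqrt(x_root)  # square root
x_sq = rng.uniform(1, 10, 10000);    y_sq = x_sq**2            # second power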
§.§ Max of Inputs
Ten random inputs are generated for the observations (x_1 - x_10). These random inputs are sampled uniformly in the range [1, 100]. The outcome is the maximum of the observations. Equation <ref> shows how this research calculates the max of inputs feature.
y=max(x_1 ... x_10)
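A sketch of this generator, with an assumed row count, reduces to a row-wise reduction:

import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1, 100, size=(10000, 10))
y = x.max(axis=1)  # outcome is the maximum of the ten observations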
§.§ Polynomials
Engineered features might take the form of polynomials. This paper investigated the machine learning models' ability to synthesize features that follow the polynomial given by Equation <ref>.
y=1+5x+8x^2
An equation such as this shows the models' ability to synthesize features that contain several multiplications and an exponent. The data set was generated by uniformly sampling x from real numbers in the range [0,2). The y_1 value is simply calculated based on x_1 as input to Equation <ref>.
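The corresponding generator is a direct transcription of the polynomial (a sketch; the sample size is an assumption):

import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 2, size=10000)
y = 1 + 5*x + 8*x**2  # Equation above, evaluated elementwise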
§.§ Rational Differences and Polynomials
Useful features might also come from combinations of rational equations of polynomials. Equations <ref> & <ref> show the types of rational combinations of differences and polynomials tested by this paper. We also examine a ratio power equation, similar to the body mass index (BMI) calculation, shown in Equations <ref>.
y=x_1-x_2/x_3-x_4
y=1/5x+8x^2
y=x_1/x_2^2
To generate a dataset containing rational differences (Equation <ref>), four observations are uniformly sampled from real numbers of the range [1,10]. Generating a dataset of rational polynomials, a single observation is uniformly sampled from real numbers of the range [1,10].
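A minimal sketch of both generators follows (names and sizes are illustrative):

import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(1, 10, size=(10000, 4))
y_rdiff = (x[:, 0] - x[:, 1]) / (x[:, 2] - x[:, 3])  # can blow up when x_3 ~ x_4

x_p = rng.uniform(1, 10, size=10000)
y_rpoly = 1.0 / (5*x_p + 8*x_p**2)                   # rational polynomial

Note that the near-singular denominator of the rational difference produces a heavy-tailed target, which is plausibly part of why no model synthesized this feature well.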
§ RESULTS ANALYSIS
To evaluate the effectiveness of the four model types over the sixteen different datasets we must account for the differences in ranges of the y values. As Table <ref> demonstrates, the maximum, minimum, mean, and standard deviation of the datasets varied considerably. Because an error metric, such as root mean square error (RMSE) is in the same units as its corresponding y values, some means of normalization is needed. To allow comparison across datasets, and provide this normalization, we made use of the normalized root-mean-square deviation (NRMSD) error metric shown in Equation <ref>. We capped all NRMSD values at 1.5; we considered values higher than 1.5 to have failed to synthesize the feature.
NRMSD=1/σ√(∑_t=1^T (ŷ_t - y_t)^2/T)
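A small helper implementing this metric, including the 1.5 cap described above, might look as follows (a sketch; the function name is ours):

import numpy as np

def nrmsd(y_true, y_pred, cap=1.5):
    # RMSD normalized by the standard deviation of the targets,
    # capped at 1.5 as described in the text.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmsd = np.sqrt(np.mean((y_pred - y_true)**2))
    return min(rmsd / np.std(y_true), cap)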
The results obtained by the experiments performed in this paper clearly indicate that some model types perform much better with certain classes of engineered features than other model types. The simple transformations that only involved a single feature were all easily learned by all four models. This included the log, polynomial, power, and root. However, none of the models were able to successfully learn the ratio difference feature. Table <ref> provides the scores for each equation type and model. The model specific results from this experiment are summarized in the following sections.
§.§ Neural Network Results
For each engineered feature experiment, we created an ADAM <cit.> trained deep neural network. We used a learning rate of 0.001, β_1 of 0.9, β_2 of 0.999, and ϵ of 1 × 10^-7, the default training hyperparameters for Keras ADAM.
The deep neural network contained the number of input neurons equal to the number of inputs needed to test that engineered feature type. Likewise, a single output neuron provided the value generated by the specified engineered feature. When viewed from the input to the output layer, there are five hidden layers, containing 400, 200, 100, 50, and 25 neurons, respectively. Each hidden layer makes use of a rectifier transfer function <cit.>, making each hidden neuron a rectified linear unit (ReLU). We provide the results of these deep neural network engineered feature experiments in Figure <ref>.
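One way to build this network with the Keras API shipped in TensorFlow is sketched below; the function name is ours, and everything else follows the hyperparameters stated above:

from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(n_inputs):
    model = keras.Sequential()
    model.add(layers.InputLayer(input_shape=(n_inputs,)))
    for units in (400, 200, 100, 50, 25):  # five hidden ReLU layers
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(1))             # single linear output neuron
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=0.001,
                                        beta_1=0.9, beta_2=0.999,
                                        epsilon=1e-7),
        loss="mse")
    return model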
The deep neural network performed well on all equation types except the ratio of differences. The neural network also performed consistently better on the remaining equation types than the other three models. An examination of the calculations performed by a neural network will provide some insight into this performance. A single-layer neural network is essentially a weighted sum of the input vector transformed by a transfer function, as shown in Equation <ref>.
f(x,w,b)= ϕ( ∑_n (w_i x_i) + b )
The vector x represents the input vector, the vector w represents the weights, and the scalar variable b represents the bias. The symbol ϕ represents the transfer function. This paper's experiments used the rectifier transfer function <cit.> for hidden neurons and a simple identity linear function for output neurons. The weights and biases are adjusted as the neural network is trained. A deep neural network contains many layers of these neurons, where each layer can form the input (represented by x) into the next layer. This fact allows the neural network to be adjusted to perform many mathematical operations and explain some of the results shown in Figure <ref>. The neural network can easily add, sum, and multiply. This fact made the counts, diff, power, and rational polynomial engineered features all relatively easy to synthesize by using layers of Equation <ref>.
§.§ Support Vector Machine Results
The two primary hyper-parameters of an SVM are C and γ. It is customary to perform a grid search to find an optimal combination of C and γ <cit.>. We tried 3 C values of 0.001, 1, and 100, combined with the 3 γ values of 0.1, 1, and 10. This selection resulted in 9 different SVMs to evaluate. The experiment results are from the best combination of C and γ for each feature type. A third hyper-parameter specifies the type of kernel that the SVM uses; we used a Gaussian (RBF) kernel. Because support vector machines benefit from their input feature vectors normalized to a specific range <cit.>, we normalized all SVM input to [0,1]. This required normalization step for the SVM does add additional calculations to the feature investigated. Therefore, the SVM results are not as pure of a feature engineering experiment as the other models. We provide the results of the SVM engineered features in Figure <ref>.
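Concretely, the grid search just described can be sketched in Scikit-Learn as follows; the pipeline step names and training-data variables are placeholders:

from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

pipe = Pipeline([("scale", MinMaxScaler()),   # map inputs to [0, 1] first
                 ("svr", SVR(kernel="rbf"))])
grid = GridSearchCV(pipe,
                    param_grid={"svr__C": [0.001, 1, 100],
                                "svr__gamma": [0.1, 1, 10]},  # 9 SVMs
                    scoring="neg_mean_squared_error")
# grid.fit(x_train, y_train); best_model = grid.best_estimator_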
The support vector machine found the max, quadratic root distance, ratio of differences, polynomial ratio, and simple ratio features all difficult to synthesize. All other feature experiments were within a low NRMSD level.
Smola and Vapnik extended the original support vector machine to include regression; we call the resulting algorithm a support vector regression (SVR) <cit.>. A full discussion of how an SVR is fitted and calculated is beyond the scope of this paper. However, for this paper's research, the primary concern is how an SVR calculates its final output. This calculation can help determine the transformations that an SVR can synthesize. The final output for an SVR is given by the decision function, shown in Equation <ref>.
y = ∑_i=1^n (α_i - α_i^*)K(x_i,x)+ρ
The vector x represents the input vector; the difference between the two alphas is called the SVR's coefficient. The weights of the neural network are somewhat analogous to the coefficients of an SVR. The function K represents a kernel function that introduces non-linearity. This paper used a radial basis function (RBF) kernel based on the Gaussian function. The variable ρ represents the SVR intercept, which is somewhat analogous to the bias of a neural network.
Like the neural network, the SVR can perform multiplications and summations. Though there are many differences between a neural network and SVR, the final calculations share many similarities.
§.§ Random Forest Results
Random forests are an ensemble model made up of decision trees. We randomly sampled the training data to produce a forest of trees that together will usually outperform the individual trees. The random forests used in this paper all use 100 classifier trees. This tree count is a hyper-parameter for the random forest algorithm. We show the result of the random forest model's attempt to synthesize the engineered features in Figure <ref>.
The random forest model had the most difficulty with the standard deviation, a ratio of differences, and sum.
§.§ Gradient Boosted Machine
The gradient boosted machine (GBM) model operates very similarly to random forests. However, the GBM algorithm uses the gradient of the training objective to produce optimal combinations of the trees. This additional optimization sometimes gives GBM a performance advantage over random forests. The gradient boosting machines used in this paper all used the same hyper-parameters. The maximum depth was ten levels, the number of estimators was 100, and the learning rate was 0.05. We provide the results of the GBM engineered features in Figure <ref>.
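Both ensemble models of the last two subsections can be instantiated directly in Scikit-Learn with the stated hyper-parameters; the following sketch uses placeholder training-data names:

from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100)       # 100 trees per forest
gbm = GradientBoostingRegressor(n_estimators=100,
                                max_depth=10,
                                learning_rate=0.05)
# rf.fit(x_train, y_train); gbm.fit(x_train, y_train)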
Like the random forest model, the gradient boosted machine had the most difficulty with the standard deviation, the ratio of differences, and sum.
§ CONCLUSION & FURTHER RESEARCH
Figures 1-4 clearly illustrate that machine learning models such as neural networks, support vector machines, random forests, and gradient boosting machines benefit from a different set of synthesized features. Neural networks and support vector machines generally benefit from the same types of engineered features; similarly, random forests and gradient boosting machines also typically benefit from the same set of engineered features. The results of this research allow us to make recommendations for both the types of features to use for a particular machine learning model type and the types of models that will work well with each other in an ensemble.
Based on the experiments performed in this research, the type of machine learning model used has a great deal of influence on the types of engineered features to consider. Engineered features based on a ratio of differences were not synthesized well by any of the models explored in this paper. Because these ratios of difference might be useful to a wide array of models, all models explored here might benefit from engineered features based on ratios with differences.
The research performed by this paper also empirically demonstrates one of the reasons why ensembles of models typically perform better than individual models. Because neural networks and support vector machines can synthesize different features than random forests and gradient boosting machines, ensembles made up of a model from each of these two groups might perform very well. A neural network or support vector machine might ensemble well with a random forest or gradient boosting machine.
We did not spend significant time tuning the models for each of the datasets. Instead, we made reasonably generic choices for the hyper-parameters chosen for the models. Results for individual models and datasets might have shown some improvement for additional time spent tuning the hyper-parameters.
Future research will focus on exploring other engineered features with a wider set of machine learning models. Engineered features that are made up of multiple input features seem a logical focus.
This paper examined 16 different engineered features for four popular machine learning model types. Further research is needed to understand what features might be useful for other machine learning models. Such research could help guide the creation of ensembles that use a variety of machine learning model types. We might also examine additional types of engineered features. It would be useful to see how more complex classes of features affect machine learning models' performance.
|
http://arxiv.org/abs/1701.08048v1 | 20170127133456 | Structural, thermodynamic, and transport properties of CH$_2$ plasma in the two-temperature regime | [
"D. V. Knyazev",
"P. R. Levashov"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
^1Joint Institute for High Temperatures RAS, Izhorskaya 13 bldg. 2, Moscow 125412, Russia
^2Moscow Institute of Physics and Technology (State University), Institutskiy per. 9, Dolgoprudny, Moscow Region 141700, Russia
^3State Scientific Center of the Russian Federation – Institute for Theoretical and Experimental Physics of National Research Centre “Kurchatov Institute”, Bolshaya Cheremushkinskaya 25, 117218, Moscow, Russia
^4Tomsk State University, Lenin Prospekt 36, Tomsk 634050, Russia
This paper covers calculation of radial distribution functions, specific energy and static electrical conductivity of CH_2 plasma in the two-temperature regime. The calculation is based on the quantum molecular dynamics, density functional theory and the Kubo-Greenwood formula.
The properties are computed at 5 kK ≤ T_i≤ T_e≤40 kK and ρ=0.954 g/cm^3 and depend severely on the presence of chemical bonds in the system. Chemical compounds exist at the lowest temperature T_i=T_e=5 kK considered; they are destroyed rapidly at the growth of T_i and slower at the increase of T_e.
A significant number of bonds are present in the system at 5 kK ≤ T_i≤ T_e≤10 kK. The destruction of bonds correlates with the growth of specific energy and static electrical conductivity under these conditions.
Structural, thermodynamic, and transport properties of CH_2 plasma in the two-temperature regime
D. V. Knyazev^1,2,3 and P. R. Levashov^1,4
April 24, 2017
================================================================================================
§ INTRODUCTION
Carbon-hydrogen plastics are widely used nowadays in various experiments on the interaction of intense energy fluxes with matter. One of these fruitful applications is described in the paper by Povarnitsyn et al.:<cit.> a polyethylene film may be used to block the prepulse, and thereby, to improve the contrast of an intense laser pulse.
Two types of prepulse are considered:<cit.> a nanosecond one (intensity I_ns=10^13 W/cm^2; duration t_ns=2 ns) and a picosecond one (I_ps=10^15 W/cm^2; t_ps=20 ps). Both prepulses absorbed by the film produce a state of plasma with an electron temperature T_e exceeding the ion temperature T_i. The conditions with T_e>T_i are often called a two-temperature (2T) regime. The appearance of the 2T-state may be explained as follows. The absorption of laser radiation by electrons assists the creation of the 2T-state: the larger the laser intensity I, the faster T_e-T_i grows. The electron-phonon coupling destroys the 2T-state: the larger the electron-phonon coupling constant G, the faster T_e-T_i decreases. Thus if the prepulse intensity I is great enough, a 2T-state with considerable T_e-T_i may be created.
The action of the prepulse may be described quantitatively using numerical simulation <cit.>. Modelling <cit.> shows that after a considerable part of the nanosecond prepulse has been absorbed, the following conditions are obtained: relative change of density ρ/ρ_0=10^-4–10^-2, T_i∼400 kK, T_e∼4·10^3 kK. A number of matter properties are required to simulate the action of the prepulse. In particular, an equation of state, a complex dielectric function and a thermal conductivity coefficient should be known. Paper <cit.> employs rather rough models of matter properties. Therefore, the need for better knowledge of plasma properties arises.
The matter properties should be known for all the states of the system: from the ambient conditions to the extreme parameters specified above. The required properties may be calculated via various techniques, including: the average atom model <cit.>, the chemical plasma model <cit.> and quantum molecular dynamics (QMD). None of these methods may yield data for all conditions emerging under the action of the prepulse. The QMD technique is a powerful tool for the calculation of properties in the warm dense matter regime (rather high densities and moderate temperatures).
QMD is widely used for the calculation of thermodynamic properties, including equation of state <cit.>, shock Hugoniots <cit.> and melting curves <cit.>. A common approach to obtain electronic transport and optical properties from a QMD simulation is to use the Kubo-Greenwood formula (KG). Here the transport properties encompass static electrical conductivity and thermal conductivity, whereas the optical properties include dynamic electrical conductivity, complex dieletric function, complex refraction index and reflectivity. The QMD+KG technique became particularly widespread after the papers <cit.>. Some of the most recent QMD+KG calculations address transport and optical properties of deuterium <cit.>, berillium <cit.>, xenon <cit.> and copper <cit.>.
Carbon-hydrogen plasma has also been explored by the QMD technique recently. Pure C_mH_n plasma (carbon and hydrogen ions are the only ions present in the system) was considered in papers <cit.>. Other works <cit.> study the influence of dopants. Transport and optical properties of carbon-hydrogen plasma were investigated in papers <cit.>. Some of the cited works are discussed in more detail in our previous work <cit.>.
None of the papers mentioned above studies the influence of the 2T-state on the properties of carbon-hydrogen plasma. The lack of such data was the first reason stimulating this work. In this paper we calculate specific energy and static electrical conductivity of CH_2 plasma in the 2T-state via the QMD+KG technique. The plasma of CH_2 composition corresponds to polyethylene heated by laser radiation. In this work the properties are calculated at the normal density of polyethylene ρ=0.954 g/cm^3 and at temperatures 5 kK ≤ T_i≤ T_e≤40 kK. These conditions correspond to the very beginning of the prepulse action; the temperatures after the prepulse are much larger. However, the beginning of the prepulse action should be also simulated carefully, since the spacial distribution of plasma at the initial stage influences the whole following process dramatically. Thus we have to know the properties of CH_2 plasma even for such moderate temperatures.
Properties of CH_2 plasma in the one-temperature (1T) case T_i=T_e were investigated in our previous work <cit.>. The properties were calculated at ρ=0.954 g/cm^3 and at temperatures 5 kK ≤ T_i=T_e=T≤100 kK.
The most interesting results obtained in <cit.> concern specific heat capacity and static electrical conductivity of CH_2 plasma. The specific heat capacity 𝒞_v(T) decreases at 5 kK ≤ T≤15 kK and increases at 15 kK ≤ T≤100 kK. The decrease of 𝒞_v(T) corresponds to the concave shape of the temperature dependence of specific energy ℰ(T). The temperature dependence of the static electrical conductivity σ_1_DC(T) demonstrates step-like behavior: it grows rapidly at 5 kK ≤ T≤10 kK and remains almost constant at 20 kK ≤ T≤60 kK. Similar step-like curves for reflectivity along principal Hugoniots of carbon-hydrogen plastics were obtained in the previous works <cit.>.
The second reason for the current work is the drive to explain the obtained ℰ(T) and σ_1_DC(T) dependences. During a 2T-calculation one of the temperatures (T_i or T_e) is kept fixed while the other one is varied. This helps to understand better the influence of T_i and T_e on the one-temperature ℰ(T) and σ_1_DC(T) dependences.
The structure of our paper is quite straightfoward. Sec. <ref> contains a brief description of the computation method. The technical parameters used during the calculation are available in Sec. <ref>. The results on ℰ and σ_1_DC of CH_2 plasma are presented in Sec. <ref>. The discussion of the results based on the investigation of radial distribution functions (RDFs) is also available in Sec. <ref>.
§ COMPUTATION TECHNIQUE
The computation technique is based on quantum molecular dynamics, density functional theory (DFT) in its Kohn-Sham formulation and the Kubo-Greenwood formula. The method of calculation for 1T case was described in detail in our previous work <cit.> and the papers <cit.>. An example of 2T-calculation is present in our previous paper <cit.>. Here we will give only a brief overview of the employed technique.
The computation method consists of three main stages: QMD simulation, precise resolution of the band structure and the calculation via the KG formula.
At the first stage N_C atoms of carbon and N_H=2N_C atoms of hydrogen are placed in a supercell with periodic boundary conditions. The total number of atoms N_at=N_C+N_H may be varied. At the given N_at the size of the supercell is chosen to yield the correct density ρ. Ions are treated classically. The ions of carbon and hydrogen are placed in the random nodes of the auxiliary simple cubic lattice. We have discussed the choice of the initial ionic configuration and performed an overview of the works on this issue previously <cit.>. Then the QMD simulation is performed.
The electronic structure is calculated at each QMD step within the Born-Oppenheimer approximation: electrons totally adjust to the current ionic configuration. This calculation is performed within the framework of DFT: the finite-temperature Kohn-Sham equations are solved. The occupation numbers used during their solution are set by the Fermi-Dirac distribution. The latter includes the electron temperature T_e; this is how the calculation depends on T_e.
The forces acting on each ion from the electrons and other ions are calculated at every step. The Newton equations of motion are solved for the ions using these forces; thus the ionic trajectories are calculated. Additional forces are also acting on the ions from the Nosé thermostat. These forces are used to bring the total kinetic energy of ions E_i^kin(t) to the average value 3/2(N_at-1)kT_i after some period of simulation; here k is the Boltzmann constant. This is how the calculation depends on T_i. QMD simulation is performed using the Vienna ab initio simulation package (VASP) <cit.>.
The ionic trajectories and the temporal dependence of the energy without the kinetic contribution of ions [E-E_i^kin](t) are obtained during the QMD simulation. The system comes to a two-temperature equilibrium state after some number of QMD steps is performed. In this state equilibrium exists only within the electronic and ionic subsystems separately. The exchange of energy between electrons and ions is absent, since the Born-Oppenheimer approximation is used during the QMD simulation. [E-E_i^kin](t) fluctuates around its average value in this two-temperature equilibrium state. A significant number of sequential QMD steps corresponding to the two-temperature equilibrium state are chosen. [E-E_i^kin](t) dependence is averaged over these sequential steps; thus the thermodynamic value [E-E_i^kin] is obtained. If the dependence on time is not mentioned, [E-E_i^kin] denotes the thermodynamic value of the energy without the kinetic contribution of ions here. The thermodynamic value is then divided by the mass of the supercell; this specific energy is designated by [ℰ-ℰ_i^kin].
The total energy of electrons and ions E may also be calculated. However, these data are not presented in this paper to understand better the temperature dependence of energy. The thermodynamic value of E_i^kin is 3/2(N_at-1)kT_i because of the interaction with the Nosé thermostat. Thus E_i^kin depends only on T_i in a rather simple way: E_i^kin(T_i)∼ T_i. [E-E_i^kin] may depend both on T_i and T_e in a complicated manner. The addition of E_i^kin(T_i) will just obscure the [E-E_i^kin](T_i,T_e) dependence. If necessary, the total energy may be reconstructed easily.
The separate configurations corresponding to the two-temperature equilibrium state are selected for the calculation of static electrical conductivity and optical properties. At the second stage the precise resolution of the band structure is performed for these separate configurations. The same Kohn-Sham equations as during the first stage are solved, though the technical parameters yielding a higher precision may be used. At this stage the Kohn-Sham eigenvalues, corresponding wave functions and occupation numbers are obtained. The precise resolution of the band structure is performed with the VASP package. Then the obtained wave functions are used to calculate the matrix elements of the nabla operator; this is done using the optics.f90 module of the VASP package.
At the third stage the real part of the dynamic electrical conductivity σ_1(ω) is calculated via the KG formula presented in our previous paper <cit.>; the formula includes matrix elements of the nabla operator, energy eigenvalues and occupation numbers calculated during the precise resolution of the band structure. We have created a parallel program to perform a calculation according to the KG formula, it uses data obtained by the VASP as input information. σ_1_j(ω) is obtained for each of the selected ionic configurations. These σ_1_j(ω) curves are then averaged to get the final σ_1(ω). The static electrical conductivity σ_1_DC is calculated via an extrapolation of σ_1(ω) to zero frequency. The simple linear extrapolation described in <cit.> is used.
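As an illustration of this last step, the zero-frequency extrapolation of the configuration-averaged σ_1(ω) can be sketched in Python as follows; the function name and the width of the low-frequency fitting window are our assumptions, not details fixed by the paper:

import numpy as np

def sigma_dc(omega, sigma1_runs, fit_below=0.1):
    # Average sigma_1(omega) over the selected ionic configurations, then
    # extrapolate linearly to omega -> 0. omega is in eV; fit_below sets
    # the frequency window used for the linear fit.
    sigma1 = np.mean(np.asarray(sigma1_runs), axis=0)
    mask = omega <= fit_below
    slope, intercept = np.polyfit(omega[mask], sigma1[mask], 1)
    return intercept  # sigma_1(omega -> 0)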
A number of sequential ionic configurations corresponding to the 2T equilibrium stage of the QMD simulation are also used to calculate RDFs. C-C, C-H and H-H RDFs are calculated with the Visual Molecular Dynamics program (VMD) <cit.>.
§ TECHNICAL PARAMETERS
The QMD simulation was performed with 120 atoms in the computational supercell (40 carbon atoms and 80 hydrogen atoms). At the initial moment the ions were placed in the random nodes of the auxiliary simple cubic lattice (discussed in <cit.>). Then 15000 steps of the QMD simulation were performed, one step corresponded to 0.2 fs. Thus the evolution of the system during 3 ps was tracked. The calculation was run in the framework of the local density approximation (LDA) with the Perdew–Zunger parametrization (set by Eqs. (C3), (C5) and Table XII, the first column, of paper<cit.>). We have applied the pseudopotentials of the projector augmented-wave (PAW <cit.>) type both for carbon and hydrogen. The PAW pseudopotential for carbon took 4 electrons into account (2s^22p^2); the core radius r_c was equal to 1.5a_B, here a_B is the Bohr radius. The PAW pseudopotential for hydrogen allowed for 1 electron per atom (1s^1), r_c=1.1a_B. The QMD simulation was performed with 1 k-point in the Brillouin zone (Γ-point) and with the energy cut-off E_cut=300 eV. All the bands with occupation numbers larger than 5×10^-6 were taken into account. The section of the QMD simulation corresponding to 0.5 ps ≤ t≤2.5 ps was used to average the temporal dependence [E-E_i^kin](t).
We have chosen 15 ionic configurations for the further calculation of σ_1_DC. The first of these configurations corresponded to t=0.2 ps, the time span between the neighboring configurations also was 0.2 ps. The band structure was calculated one more time for these selected configurations. The exchange-correlation functional, pseudopotential, number of k-points and energy cut-off were the same as during the QMD simulation. Additional unoccupied bands were taken into account, they spanned an energy range of 40 eV.
The σ_1_j(ω) curves were calculated for the selected ionic configurations at 0.005 eV ≤ω≤40 eV with a frequency step 0.005 eV. The δ-function in the KG formula was broadened by the Gaussian<cit.> function with the standard deviation of 0.2 eV.
The RDFs were calculated for the section of the QMD simulation corresponding to 1.8002 ps ≤ t≤2 ps for the distances 0.05 Å ≤ r≤4 Å with a step of Δ r=0.1 Å.
For computational results to be reliable, the convergence with respect to the technical parameters should be checked. However, a full investigation of convergence is very time-consuming. In our previous works we performed a full convergence study for aluminum <cit.> and a partial one for CH_2 plasma <cit.>. It was shown <cit.> that the number of atoms N_at and the number of k-points N_𝐤 during the precise resolution of the band structure contribute most to the error of σ_1_DC; the effects of these parameters are of the same order of magnitude. The size effects in CH_2 plasma were investigated earlier:<cit.> the results for σ_1_DC were the same within several percent for 120 and 249 atoms in the supercell. In our current paper we use the same moderate N_at and N_𝐤 values as in <cit.>. This introduces a small error into our results but speeds up the computations considerably.
§ RESULTS
The following types of calculations were performed:
* one-temperature case with T_i=T_e=T;
* 2T-computations at fixed T_i and varied T_e, so that T_e≥ T_i;
* 2T-computations at fixed T_e and varied T_i, so that T_i≤ T_e.
The overall range of temperatures under consideration is 5 kK ≤ T_i≤ T_e≤ 40 kK. The calculations were performed at fixed ρ=0.954 g/cm^3.
These calculations allow us to obtain the dependence of a quantity f both on T_i and T_e: f(T_i,T_e). Here f may stand for [ℰ-ℰ_i^kin] or σ_1_DC. The one-temperature dependences f(T)|_T_i=T_e were investigated previously <cit.>. In this paper we also present f(T_e)|_T_i=const and f(T_i)|_T_e=const. The slope of f(T_e)|_T_i=const equals (∂ f/∂ T_e)_T_i; the slope of f(T_i)|_T_e=const—(∂ f/∂ T_i)_T_e. The slope of f(T)|_T_i=T_e may be designated by (∂ f/∂ T)_T_i=T_e. Then the following equation is valid:
(∂ f/∂ T)_T_i=T_e=[(∂ f/∂ T_e)_T_i+(∂ f/∂ T_i)_T_e]_T_i=T_e.
If we compare the contributions of (∂ f/∂ T_e)_T_i and (∂ f/∂ T_i)_T_e to (∂ f/∂ T)_T_i=T_e we can understand, whether f(T)|_T_i=T_e dependence is mainly due to the change of T_i or T_e.
Now we can define the volumetric mass-specific heat capacity without the kinetic contribution of ions [𝒞_v-𝒞_v i^kin] for the one-temperature case T_i=T_e=T:
[𝒞_v-𝒞_v i^kin]=(∂[ℰ-ℰ_i^kin]/∂ T)_T_i=T_e.
Then Eq. (<ref>) for f=[ℰ-ℰ_i^kin] may be written as:
[𝒞_v-𝒞_v i^kin]=[(∂ [ℰ-ℰ_i^kin]/∂ T_e)_T_i+
+(∂ [ℰ-ℰ_i^kin]/∂ T_i)_T_e]_T_i=T_e.
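In practice, the two partial derivatives in this decomposition can be estimated from the tabulated two-temperature energies by finite differences on the (T_i, T_e) grid. A minimal Python sketch follows; the grid layout, one-sided stencils, and function name are our assumptions:

import numpy as np

def cv_contributions(E, k, dT):
    # E[i, e] holds [E - E_i^kin] on a regular (T_i, T_e) grid with
    # T_e >= T_i, so only one-sided differences are available at a
    # diagonal point k (where T_i = T_e):
    dE_dTe = (E[k, k + 1] - E[k, k]) / dT   # forward step in T_e, fixed T_i
    dE_dTi = (E[k, k] - E[k - 1, k]) / dT   # backward step in T_i, fixed T_e
    return dE_dTe, dE_dTi, dE_dTe + dE_dTi  # last term: [C_v - C_v,i^kin]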
Radial distribution functions were calculated for two cases only:
* one-temperature case 5 kK ≤ T_i=T_e=T≤40 kK (Fig. <ref>(b), Fig. <ref>(b), Fig. <ref>(b));
* two-temperature case T_i=5 kK; 5 kK ≤ T_e≤40 kK (Fig. <ref>(a), Fig. <ref>(a), Fig. <ref>(a)).
Carbon-carbon (Fig. <ref>), carbon-hydrogen (Fig. <ref>) and hydrogen-hydrogen (Fig. <ref>) RDFs are presented.
It is convenient to start the discussion of the results with the RDFs. The general view in Figs. <ref>–<ref> shows that there are peaks on the RDF curves at low temperatures; these peaks vanish at higher temperatures. In the further discussion we will assume that these peaks are due to chemical bonds.
Strictly speaking, the presence of chemical bonds may not be reliably established based on the analysis of RDFs only. The peaks at RDF curves have only the following meaning: the interionic distances possess in average certain values {r_peak} more often than other values. But nothing may be said about how long the ions are located at these {r_peak} distances from each other. We should check that ions are located at {r_peak} for certain periods of time {t_peak}; only in this case we may establish reliably the presence of chemical bonds. These periods {t_peak} may be called the lifetimes of chemical bonds. This complicated analysis may be found in the papers <cit.>. However, in the current work we will use only RDFs to register chemical bonds.
Fig. <ref>(a), Fig. <ref>(a), Fig. <ref>(a) show the RDFs at fixed T_i=5 kK and various T_e from 5 kK up to 40 kK. If T_e increases from 5 kK to 10 kK, the bonds are almost intact and the RDF curves almost do not change (only H-H bonds decay to some extent). At T_e = 20 kK C-C and C-H bonds are mostly intact (though the peaks become lower); only H-H bonds break almost totally. And only if T_e is risen to 40 kK all the bonds decay.
The situation is different in the one-temperature case T_i=T_e=T (Fig. <ref>(b), Fig. <ref>(b), Fig. <ref>(b)). The increase of T from 5 kK to 10 kK already makes the bonds decay. Given that the influence of T_e on the bonds is rather weak in this temperature range (see above), this breakdown of bonds is due entirely to the increase of T_i. At T=10 kK the H-H bonds are already destroyed totally (Fig. <ref>(b)), the C-H bonds—almost totally (Fig. <ref>(b)), and the C-C bonds—considerably (Fig. <ref>(b)). The increase of T to 20 kK leads to the almost complete decay of all bonds.
The following conclusions may be derived from the performed consideration of the RDF curves. If T_i is kept rather low (5 kK), T_e should be risen to 20 kK–40 kK to destroy the bonds. If both T_e and T_i are increased simultaneously (and T_e=T_i), temperatures 10 kK–20 kK are quite enough to break the bonds.
The temperature dependences of [ℰ-ℰ_i^kin] and σ_1_DC are presented in Figs. <ref>–<ref>. The behavior of the calculated properties depends largely on whether the bonds are destroyed or not.
The obtained results may be divided into three characteristic cases.
1) 5 kK ≤ T_i≤ T_e≤10 kK. The considerable number of bonds are present in the system under these conditions. The chemical bonds break rapidly as T_i grows, and decay rather slowly as T_e increases.
[ℰ-ℰ_i^kin] increases rapidly as T_i grows (Fig. <ref>(b)), and increases slowly as T_e grows (Fig. <ref>(a)). σ_1_DC increases rapidly as T_i grows (Fig. <ref>(b)) and increases slowly as T_e grows (Fig. <ref>(a)).
Thus the growth of [ℰ-ℰ_i^kin] and the growth of σ_1_DC correlate somewhat with the destruction of the chemical bonds: these processes occur rapidly if T_i rises, and slowly with the rise of T_e.
In the one-temperature situation (5 kK ≤ T_i=T_e=T≤10 kK) [ℰ-ℰ_i^kin](T) increases as T grows, this increase is mostly determined by T_i influence (i.e. by the second term in the right hand side of Eq. (<ref>)). The (∂[ℰ-ℰ_i^kin]/∂ T_i)_T_e contribution to [𝒞_v-𝒞_v i^kin] is larger than (∂[ℰ-ℰ_i^kin]/∂ T_e)_T_i (see Eq. (<ref>) and Fig. <ref>). Since the growth of [ℰ-ℰ_i^kin] correlates with the destruction of bonds here, we may assume that the energy supply necessary for the bond decay gives the main contribution to [𝒞_v-𝒞_v i^kin].
The rapid growth of σ_1_DC(T) in the one-temperature situation is mostly due to the influence of T_i.
2) 20 kK ≤ T_i≤ T_e≤40 kK. There are no chemical bonds in the system under these conditions.
[ℰ-ℰ_i^kin] grows as T_e increases (Fig. <ref>(a)) and is almost independent of T_i (Fig. <ref>(b)). [ℰ-ℰ_i^kin] includes the kinetic energy of electrons, electron-electron, electron-ion and ion-ion potential energies. The fact, that [ℰ-ℰ_i^kin] does not depend on T_i, is intuitively clear: there are no significant changes of the ionic structure under the conditions considered (the decay of chemical bonds could be mentioned as a possible example of such changes).
σ_1_DC increases as T_e grows (Fig. <ref>(a)) and decreases as T_i grows (Fig. <ref>(b)).
In the one-temperature situation (20 kK ≤ T_i=T_e=T≤40 kK) [ℰ-ℰ_i^kin](T) increases as T grows only due to T_e influence (i.e. due to the first term in the right hand side of Eq. (<ref>)). [𝒞_v-𝒞_v i^kin] totally equals (∂[ℰ-ℰ_i^kin]/∂ T_e)_T_i here (Eq. (<ref>)). We can assume that the temperature excitation of the electron subsystem determines [𝒞_v-𝒞_v i^kin] values in this situation.
σ_1_DC decreases as T_i grows and increases as T_e grows. In the one-temperature case these two opposite effects compensate each other totally and form σ_1_DC(T), that does not depend on T.
3) 5 kK ≤ T_i≤10 kK, 30 kK ≤ T_e≤40 kK. This case is qualitatively close to the second one. There are no chemical bonds in the system.
[ℰ-ℰ_i^kin] increases as T_e grows (Fig. <ref>(a)) and does not depend on T_i (Fig. <ref>(b)); σ_1_DC increases as T_e grows (Fig. <ref>(a)) and decreases as T_i grows (Fig. <ref>(b)).
§ CONCLUSION
In this paper we have calculated the properties of CH_2 plasma in the two-temperature case. First of all, the properties at T_e> T_i are of significant interest for the simulation of rapid laser experiments. The performed calculations also help us to understand better the properties in the one-temperature case T_i=T_e=T. Two characteristic regions of the one-temperature curves <cit.> may be considered.
The first region corresponds to the temperatures of 5 kK ≤ T ≤ 10 kK. The significant number of chemical bonds exist in the system in this case. These bonds decay if T is increased (mainly because of heating of ions). We assume, that the energy necessary for the destruction of bonds gives the main contribution to [𝒞_v-𝒞_v i^kin] in this region. The decay of bonds also correlates with the rapid growth of σ_1_DC.
The second region corresponds to the temperatures of 20 kK ≤ T≤ 40 kK. The system contains no chemical bonds under these conditions. The growth of [ℰ-ℰ_i^kin](T) is totally determined by heating of electrons. We assume, that the temperature excitation of the electron subsystem determines [𝒞_v-𝒞_v i^kin] values here. σ_1_DC is influenced by heating of both electrons and ions moderately and oppositely. These opposite effects form the plateau on σ_1_DC(T) dependence in the second region.
§ ACKNOWLEDGEMENT
The majority of computations, development of codes, and treatment of results were carried out in the Joint Institute for High Temperatures RAS under financial support of the Russian Science Foundation (Grant No. 16-19-10700). Some numerical calculations were performed free of charge on supercomputers of Moscow Institute of Physics and Technology and Tomsk State University.
[1] M. E. Povarnitsyn, N. E. Andreev, P. R. Levashov, K. V. Khishchenko, D. A. Kim, V. G. Novikov, and O. N. Rosmej, Laser Part. Beams 31, 663 (2013).
[2] M. E. Povarnitsyn, V. B. Fokin, P. R. Levashov, and T. E. Itina, Phys. Rev. B 92, 174104 (2015).
[3] A. A. Ovechkin, P. A. Loboda, V. G. Novikov, A. S. Grushin, and A. D. Solomyannaya, High Energy Density Phys. 13, 20 (2014).
[4] E. M. Apfelbaum, Contrib. Plasma Phys. 56, 176 (2016).
[5] C. Wang and P. Zhang, Phys. Plasmas 20, 092703 (2013).
[6] C. Wang, Y. Long, M.-F. Tian, X.-T. He, and P. Zhang, Phys. Rev. E 87, 043105 (2013).
[7] D. V. Minakov, P. R. Levashov, K. V. Khishchenko, and V. E. Fortov, J. Appl. Phys. 115, 223512 (2014).
[8] D. V. Minakov and P. R. Levashov, Phys. Rev. B 92, 224102 (2015).
[9] M. P. Desjarlais, J. D. Kress, and L. A. Collins, Phys. Rev. E 66, 025401 (2002).
[10] V. Recoules and J.-P. Crocombette, Phys. Rev. B 72, 104202 (2005).
[11] S. X. Hu, V. N. Goncharov, T. R. Boehly, R. L. McCrory, S. Skupsky, L. A. Collins, J. D. Kress, and B. Militzer, Phys. Plasmas 22, 056304 (2015).
[12] Ch.-Y. Li, C. Wang, Z.-Q. Wu, Z. Li, D.-F. Li, and P. Zhang, Phys. Plasmas 22, 092705 (2015).
[13] G. Norman, I. Saitov, V. Stegailov, and P. Zhilyaev, Phys. Rev. E 91, 023105 (2015).
[14] K. P. Migdal, Yu. V. Petrov, D. K. Il'nitsky, V. V. Zhakhovsky, N. A. Inogamov, K. V. Khishchenko, D. V. Knyazev, and P. R. Levashov, Appl. Phys. A 122, 408 (2016).
[15] T. R. Mattsson, J. M. D. Lane, K. R. Cochrane, M. P. Desjarlais, A. P. Thompson, F. Pierce, and G. S. Grest, Phys. Rev. B 81, 054103 (2010).
[16] C. Wang, X.-T. He, and P. Zhang, Phys. Plasmas 18, 082707 (2011).
[17] F. Lambert and V. Recoules, Phys. Rev. E 86, 026405 (2012).
[18] S. Hamel, L. X. Benedict, P. M. Celliers, M. A. Barrios, T. R. Boehly, G. W. Collins, T. Döppner, J. H. Eggert, D. R. Farley, D. G. Hicks, J. L. Kline, A. Lazicki, S. LePape, A. J. Mackinnon, J. D. Moody, H. F. Robey, E. Schwegler, and P. A. Sterne, Phys. Rev. B 86, 094113 (2012).
[19] T. L. Chantawansri, T. W. Sirk, E. F. C. Byrd, J. W. Andzelm, and B. M. Rice, J. Chem. Phys. 137, 204901 (2012).
[20] S. X. Hu, T. R. Boehly, and L. A. Collins, Phys. Rev. E 89, 063104 (2014).
[21] J.-F. Danel and L. Kazandjian, Phys. Rev. E 91, 013103 (2015).
[22] D. A. Horner, J. D. Kress, and L. A. Collins, Phys. Rev. B 81, 214301 (2010).
[23] R. J. Magyar, S. Root, K. Cochrane, T. R. Mattsson, and D. G. Flicker, Phys. Rev. B 91, 134109 (2015).
[24] G. Huser, V. Recoules, N. Ozaki, T. Sano, Y. Sakawa, G. Salin, B. Albertazzi, K. Miyanishi, and R. Kodama, Phys. Rev. E 92, 063108 (2015).
[25] P. Colin-Lalu, V. Recoules, G. Salin, and G. Huser, Phys. Rev. E 92, 053104 (2015).
[26] D. V. Knyazev and P. R. Levashov, Phys. Plasmas 22, 053303 (2015).
[27] D. V. Knyazev and P. R. Levashov, Comput. Mater. Sci. 79, 817 (2013).
[28] D. V. Knyazev and P. R. Levashov, Phys. Plasmas 21, 073302 (2014).
[29] G. Kresse and J. Hafner, Phys. Rev. B 47, 558 (1993).
[30] G. Kresse and J. Hafner, Phys. Rev. B 49, 14251 (1994).
[31] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[32] W. Humphrey, A. Dalke, and K. Schulten, J. Mol. Graphics 14, 33 (1996).
[33] J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
[34] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
[35] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
|
http://arxiv.org/abs/1701.07848v1 | 20170126191526 | Green formulation for studying electromagnetic scattering from graphene-coated wires of arbitrary section | [
"Claudio Valencia",
"Máximo A. Riso",
"Mauro Cuevas",
"Ricardo A. Depine"
] | physics.optics | [
"physics.optics"
] |
We present a rigorous electromagnetic method based on Green's second identity for studying the plasmonic response
of graphene–coated wires of arbitrary shape. The wire is illuminated perpendicular to its axis by a
monochromatic electromagnetic wave and the wire substrate is homogeneous and isotropic.
The field is expressed everywhere in terms of two unknown source functions evaluated on the graphene coating which can be obtained from the numerical solution of a coupled pair of inhomogeneous integral equations.
To assess the validity of the Green formulation, the scattering and absorption efficiencies obtained numerically in the particular case of circular wires are compared with those obtained from the multipolar Mie theory.
An excellent agreement is observed in this particular case, both for metallic and dielectric substrates.
To explore the effects that the break of the rotational symmetry of the wire section introduces in the plasmonic features of the scattering and absorption response, the Green formulation is applied to the case of graphene-coated wires of elliptical section.
As might be expected from symmetry arguments, we find a two-dimensional anisotropy in the angular optical
response of the wire, particularly evident in the frequency splitting of multipolar plasmonic resonances.
The comparison between the spectral position of the enhancements in the scattering and absorption efficiency
spectra for low–eccentricity elliptical and circular wires allows us to guess the multipolar order of each plasmonic resonance.
We present calculations of the near field distribution for different frequencies which explicitly reveal the
multipolar order of the plasmonic resonances. They also confirm the previous guess and serve as a further test
on the validity of the Green formulation.
Green formulation for studying electromagnetic scattering from graphene–coated wires of arbitrary section
Claudio Valencia^1, Máximo A. Riso^2, Mauro Cuevas^3, and Ricardo A. Depine^2,*
^1 Facultad de Ciencias, Universidad Autónoma de Baja California (UABC), Ensenada, BC 22860, México
^2Grupo de Electromagnetismo Aplicado, Departamento de Física, FCEN, Universidad de Buenos Aires and IFIBA, Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Ciudad Universitaria, Pabellón I, C1428EHA, Buenos Aires, Argentina
^3 Facultad de Ingeniería y Tecnología Informática, Universidad de Belgrano, Villanueva 1324, C1426BMJ, Buenos Aires, Argentina and Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
^*email: [email protected]
December 30, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Due to its particular electronic band structure, the atom-thick form of carbon known as graphene exhibits unique electronic and optical properties that have attracted tremendous attention in recent years <cit.>.
Several applications from terahertz (THz) to visible frequencies, including solar cells, touch screens, photodetectors, light-emitting devices and ultrafast lasers, are clear evidence of the rise of graphene in photonics and optoelectronics <cit.>. The strength of the mutual interaction between graphene and electromagnetic radiation plays a key role in a great majority of these applications. However, a single sheet of homogeneous graphene exhibits an optical absorbance of ∼ 2.3% <cit.>, a value strong enough to detect exfoliated monolayers by visual inspection under an optical microscope but not sufficiently strong as could be desirable in many photonics and optoelectronics applications.
The interaction between graphene and electromagnetic radiation can be improved in the presence of surface plasmons, whether by combining a graphene layer with conventional plasmonic nanostructures based on noble metals <cit.> or by taking advantage of the long-lived, electrically tunable surface plasmons supported by graphene <cit.>. The first alternative fits well in the visible and near-infrared frequencies where the interband loss becomes large and graphene behaves as a dielectric material, whereas the second alternative fits well in the
terahertz and infrared regions where doped graphene nanostructures can support surface plasmons <cit.>.
Surface plasmons can be roughly divided into two categories: surface plasmon polaritons (SPPs) propagating along waveguiding structures and localized surface plasmons (LSPs) supported by spatially limited structures, such as scattering particles.
Since the spatial periodicity associated with a surface plasmon propagating along a graphene monolayer is always less than the spatial periodicity which could be induced by an incident plane wave, plane waves cannot resonantly excite propagating surface plasmons at a flat graphene monolayer.
In contrast, in bounded geometries, localized surface plasmons (LSPs) can be resonantly excited by plane waves at discrete frequencies that depend on the size and shape of the object to which they are confined.
The plasmonic properties of graphene–wrapped particles have recently attracted the attention of researchers <cit.>.
Analytical solutions are only available for particles with a very simple shape, such as spheres
<cit.> or circular cylinders <cit.>.
In these cases, the application of Mie theory leads to multipole coefficients for the scattered field which have essentially the same form as those corresponding to the bare particles, except for additive corrections proportional to the graphene surface conductivity in the numerator and denominator.
Taking into account the significant progress made in the fabrication of particles with a variety of shapes and dimensions <cit.> and that wrapping up particles with graphene coatings brings extra freedom for scattering engineering and for improving the interaction between graphene and electromagnetic radiation via surface plasmon mechanisms, the investigation of the scattering characteristics of graphene–coated particles seems a particularly
interesting and very promising area to explore.
In the particular case of dielectric, almost transparent particles, graphene coatings introduce tunable plasmons which are absent in the bare particle, whereas for metallic or metallic-like particles, graphene coatings can modify in a controlled manner the LSPs already existing in the bare particle.
The particular relation between the shape of a graphene particle and its plasmonic characteristics –such as cross-section enhancement, interplay between near- and far–field quantities, plasmon resonance frequencies and linewidths– can be exploited in many THz applications, including the sensing of small changes in a host-medium refractive index caused by the change in concentration of an analyzed substance <cit.> or the design of sub–wavelength metamaterials <cit.>.
In order to realize graphene–wrapped plasmonic particles with tailored properties for specific applications, complete electromagnetic solutions are needed. The purpose of this paper is to present a rigorous, fully retarded method based on Green's second identity <cit.> for modeling the scattering characteristics of graphene–coated, homogeneous
dielectric or metallic two dimensional particles (wires) of rather arbitrary shape.
The paper is organized as follows. First, in Section <ref> we give exact expressions for the electromagnetic field scattered by a graphene–coated wire illuminated perpendicular to its axis by a p– or s–polarized monochromatic plane wave. The scattered field is expressed in terms of two unknown source functions evaluated on the wire surface,
one related to the total field in the medium of incidence and the other related to its normal derivative.
To find both surface source functions, a coupled pair of inhomogeneous integral equations must be
solved numerically.
In Section <ref> the numerical technique is illustrated and validated for wires of circular and elliptical section.
As a first validation
we consider metallic and dielectric circular wires and show that in this case the numerical results agree perfectly well with those obtained analytically using Mie's theory <cit.>.
To explore the effects that the departure from circular geometries has on the scattering and extinction cross-sections we then consider graphene–coated wires of elliptical section.
We present scattering and absorption efficiency spectra showing that, similarly to the plasmonic resonances of bare metallic wires, the break of the rotational symmetry of the wire section introduces a two-dimensional anisotropy in the angular optical response of the wire, particularly evident in a frequency splitting of the strong dipolar plasmonic resonance <cit.>, but also observable for higher multipolar resonances.
Using the Green formulation, we show near-field distributions which reveal that
the splittings correspond to strong localization of the near field at specific positions along the ellipse axes.
Besides, and as a further test on the validity of the Green formulation, we show that the multipolar order inferred
for ellipses with low eccentricities from the topology of the near–field distribution for a given resonant frequency
is in excellent agreement with the multipolar order inferred from the spectral proximity between the enhancement
peaks in the scattering efficiency spectra of an elliptical and a similar circular wire.
Finally, in Section <ref> we summarize and discuss the results obtained. The Gaussian system of units is used and an exp(-iω t) time–dependence is implicit throughout the paper, with ω the angular frequency, c the speed of light in vacuum, t the time, and i=√(-1).
§ GREEN'S APPROACH
We consider a wire in the form of an infinite cylinder whose axis lie along the ẑ axis and whose cross–section is defined by the planar curve Γ described by the vector valued function r_s(τ) = f(τ)x̂+g(τ)ŷ (see Figure <ref>).
The parameter τ can represent the arc length along the curve or any other convenient parameter, such as the angle θ.
The wire substrate (region 2) is characterized by the electric permittivity ε_2 and the magnetic permeability μ_2 and is embedded in a transparent medium (region 1) with electric permittivity ε_1 and magnetic permeability μ_1. This wire is coated with a graphene monolayer which can be considered as an infinitesimally thin, local and isotropic two-sided layer with frequency–dependent surface conductivity σ(ω)=σ ^intra+σ ^inter given by the Kubo formula <cit.>, with the intraband contribution
σ ^intra given by
σ^intra(ω) = 2ie^2 k_B T/[πħ^2(ω+iγ_c)] ln[2cosh(μ_c/(2k_B T))] ,
and the interband contribution σ ^inter given by
σ^inter = (e^2/4ħ) {Θ(ħω-2μ_c) - (i/π) ln|(ħω+2μ_c)/(ħω-2μ_c)|} ,
where μ_c is the chemical potential (controlled with the help of a gate voltage), γ_c the carriers scattering rate, Θ(x) the Heaviside function, e the electron charge, k_B the Boltzmann constant and ħ the reduced Planck constant.
The intraband contribution (<ref>) dominates for large doping μ_c>>k_B T and is a generalization of the Drude model for the case of arbitrary band structure, whereas the interband contribution (<ref>) dominates for large frequencies ħω≳μ_c.
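Both contributions are elementary to evaluate numerically. The following minimal Python sketch (the helper name and unit conventions are ours, not part of the original formulation) implements Eqs. (<ref>) and (<ref>) with all energies in eV and the result normalized by σ_0 = e^2/4ħ:

import numpy as np

KBT_300K = 0.02585  # k_B * 300 K in eV

def graphene_sigma(E_photon, mu_c, hbar_gamma=1e-4, kBT=KBT_300K):
    """sigma(omega)/sigma_0 for graphene, sigma_0 = e^2/(4*hbar).

    E_photon   : photon energy hbar*omega in eV (scalar or array)
    mu_c       : chemical potential in eV
    hbar_gamma : hbar*gamma_c in eV (0.1 meV in the paper)
    """
    E = np.asarray(E_photon, dtype=complex)
    # intraband (Drude-like) term, Eq. (1)
    intra = 8j * kBT / (np.pi * (E + 1j * hbar_gamma)) \
            * np.log(2.0 * np.cosh(mu_c / (2.0 * kBT)))
    # interband term, Eq. (2); diverges logarithmically at hbar*omega = 2*mu_c
    inter = np.heaviside(E.real - 2.0 * mu_c, 0.5) \
            - (1j / np.pi) * np.log(np.abs((E + 2.0 * mu_c) / (E - 2.0 * mu_c)))
    return intra + inter

# example: sigma near the dipolar resonance at 27.16 um for mu_c = 0.9 eV
print(graphene_sigma(1.2398 / 27.16, mu_c=0.9))

At THz frequencies and for μ_c ≫ k_B T the intraband term dominates, consistent with the remark above.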
When the wavevector of the incident plane wave
is perpendicular to the wire axis, the scattering problem can be decomposed into two independent scalar problems: electric field parallel to the main section of the cylindrical surface (p polarization, magnetic field along ẑ) and magnetic field parallel to the main section of the cylindrical surface (s polarization, electric field along ẑ).
In region j (j=1,2) and for each polarization mode we denote by ψ^(j)( r) the non-zero component of the total electromagnetic field along the axis of the cylinder,
evaluated at the observation point r=x x̂+ y ŷ.
These field components must satisfy Helmholtz equations in each region
(∇^2+(ω/c)^2 ε_j μ_j) ψ^(j)=0 , j=1,2 ,
As in the case of uncoated cylinders <cit.>, the starting point for the derivation of an exact expression for the scattered field is Green's second integral. Using (<ref>) in the exterior region and separating the total field in this region into contributions from the incident and scattered field, ψ^(1)(r)=ψ^(1)_inc(r)+ψ^(1)_sc(r),
we obtain
ψ^(1)(r)=ψ^(1)_inc(r)
+1/4π∫_Γ( ∂ G^(1)(r,r')/∂n̂'ψ^(1)(r')-G^(1)(r,r') ∂ψ^(1)(r')/∂n̂')dS',
where r' is a point on the boundary Γ with arc element dS' and G^(1)(r,r') is the Green function of (<ref>) in the exterior region. Analogously, using Green's second integral and (<ref>) in the interior region, we obtain
0=
∫_Γ( ∂ G^(2)(r,r')/∂n̂'ψ^(2)(r')-G^(2)(r,r') ∂ψ^(2)(r')/∂n̂')dS',
where G^(2)(r,r') is the Green function of (<ref>) in the interior region.
Due to the cylindrical symmetry both Green functions may be expressed in terms of the
zeroth-order Hankel function of the first kind H_0^(1),
G^(j)( r| r^ ')=iπ
H_0^(1)( k_j | r- r^ '|) ,
with k_j=ω/c√(ε_jμ_j) (j=1, 2).
By letting the point of observation r approach the surface r' in expressions (<ref>) and (<ref>), we obtain a pair of integral equations with four unknown functions: the values of the fields ψ^(j) and of their normal derivatives
∂ψ^(j) / ∂n̂,
j=1,2, at the boundary Γ. As in the case of uncoated cylinders, the number of unknowns can be reduced to two since the electromagnetic boundary conditions at Γ provide two additional relationships between the normal derivatives and the fields at the boundary.
Taking into account that because of the graphene coating the tangential components of the magnetic field H are no longer continuous across the boundary Γ –as they were in the case of uncoated cylinders– the boundary conditions for our case can be expressed as
(1/ε_1) ∂ψ^(1)/∂n̂ = (1/ε_2) ∂ψ^(2)/∂n̂ , ψ^(1) - ψ^(2) = (4iπσ/ωε_1) ∂ψ^(1)/∂n̂ ,
for p-polarization, and
ψ^(1) = ψ^(2) , (1/μ_1) ∂ψ^(1)/∂n̂ - (1/μ_2) ∂ψ^(2)/∂n̂ = -(4iπωσ/c^2) ψ^(1) ,
for s-polarization.
In equations (<ref>) and (<ref>) the fields and their normal derivatives are evaluated at r=r_s(τ).
The boundary conditions allow us to express ψ^(2) and
∂ψ^(2)/∂n̂
in terms of ψ^(1) and
∂ψ^(1)/∂n̂.
Therefore, eqs (<ref>) and (<ref>) can be rewritten as a set of coupled, inhomogeneous integral equations for the unknown exterior functions ψ^(1) and
∂ψ^(1)/∂n̂.
To find these functions, the system of integral equations is converted into matrix equations which are then solved numerically. We do this by using a set [t_1,...,t_N] for discretizing the interval where the parameter τ describing the boundary varies.
The evaluation of the matrix elements and the treatment of the Hankel functions singularities when the argument is zero follows closely that described in <cit.>.
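The elimination of the interior unknowns via the boundary conditions is itself a one-line substitution once ψ^(1) and ∂ψ^(1)/∂n̂ are tabulated at the boundary nodes. A minimal sketch (function name and array conventions are ours; Gaussian units, as in the paper):

import numpy as np

def interior_traces(psi1, dpsi1, omega, sigma, eps=(1.0, 3.9), mu=(1.0, 1.0),
                    pol='p', c=2.998e10):
    """psi^(2) and d(psi^(2))/dn from Eqs. (6)-(7), given the exterior traces."""
    e1, e2 = eps
    m1, m2 = mu
    if pol == 'p':
        dpsi2 = (e2 / e1) * dpsi1
        psi2 = psi1 - (4j * np.pi / (omega * e1)) * sigma * dpsi1
    else:  # s polarization
        psi2 = psi1.copy()
        dpsi2 = m2 * (dpsi1 / m1 + (4j * np.pi * omega * sigma / c**2) * psi1)
    return psi2, dpsi2

In the full solver these relations are substituted into the discretized versions of Eqs. (<ref>) and (<ref>), producing a 2N×2N linear system for the exterior source functions at the N quadrature nodes.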
Once the functions ψ^(1) and ∂ψ^(1)/∂n̂ are known, the scattered field, given by the second term in (<ref>), can be calculated at every point in the exterior region. Yet another application of Green's second integral in the interior region gives the following expression for the field at every point inside the wire
ψ^(2)(r) = -(i/4) ∫_Γ ( k_2 [n̂'·(r-r')] H_1^(1)(k_2|r-r'|)/|r-r'| ψ^(2)(r') - H_0^(1)(k_2|r-r'|) ∂ψ^(2)(r')/∂n̂' ) dS' ,
where the boundary conditions (<ref>) and (<ref>) must be used to obtain the interior source functions ψ^(2) and ∂ψ^(2)/∂n̂ in terms of the
exterior source functions ψ^(1) and ∂ψ^(1)/∂n̂.
Knowing the total electromagnetic field allows us to calculate optical characteristics such as the scattering,
absorption, and extinction cross sections <cit.>.
The time-averaged total power scattered by a two-dimensional particle can be evaluated by calculating the complex Poynting vector flux through an imaginary cylinder of length L and radius r_0 that encloses the particle
(see Figure <ref>)
P_sc= r_0 L ∫_0^ 2 π⟨ S_sc(r_0,θ) ⟩·r̂ dθ,
where
⟨ S_sc(r_0,θ) ⟩ =c^2/8π ω η Re(i F(r,θ) ×[∇× F(r,θ)]^* ),
and F(r,θ)=ψ_sc(r,θ) ẑ and η=μ_1 (s-polarization) or η=ε_1 (p-polarization). Introducing (<ref>) into (<ref>), we obtain
P_sc= c^2 r_0 L/8π ω η∫_0^ 2 π Re( i ψ_sc(r_0,θ) ∂ψ_sc^*/∂ r) dθ.
In the far-field region the calculation of the scattered fields –given by the second term in (<ref>)– can be greatily simplified using the asymptotic expansion of the Hankel function for large argument.
After some algebraic manipulation, the following results are found
ψ_sc(r,θ) = - i exp ( i [ k_1 r-π/4] ) /√(8 π k_1 r) F_ang(θ),
∂ψ_sc(r,θ)/∂ r = i k_1 ψ_sc(r,θ),
where the angular factor F_ang(θ) is given by
F_ang(θ)=∫_J(Γ)(ik_1 [-g'(τ)cosθ+
f'(τ)sinθ]ψ^(1)(τ)
+∂ψ^(1)/∂n̂(τ))
exp(-i k_1 [f(τ)cosθ+g(τ)sinθ]) dτ .
When Eqs. (<ref>) and (<ref>) are substituted into
Eq. (<ref>) we get
P_sc=c^2 L/64 π^2 ω η∫_0^2 π |F_ang(θ )|^2 dθ.
The scattering efficiency Q_s is defined as the ratio between the total power scattered by the two-dimensional particle, given by (<ref>), and the incident power P_inc intersected by the area DL (see Figure <ref>).
Analogously, the absorption efficiency Q_a is defined as the ratio between the power P_a
absorbed by the graphene-wrapped two-dimensional particle and P_inc.
P_a can be obtained as
P_a=-c^2 L/8 π ω η∫_J(Γ) Re ( i N(τ) ψ^(1)(τ) [∂ψ^(1)/∂n̂(τ) ]^* ) dτ
with N(τ)=√((f'(τ))^2+(g'(τ))^2).
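The quadratures entering Eqs. (<ref>) and (<ref>) are straightforward to implement once the boundary sources are known. A minimal sketch, assuming a smooth, uniformly sampled closed parametrization and a unit-amplitude incident wave; the prefactors c^2 L/(64π^2 ωη) and -c^2 L/(8πωη) are left to the caller:

import numpy as np

def far_field_quadratures(tau, f, g, psi1, dpsi1, k1, ntheta=720):
    """F_ang(theta) of Eq. (13) plus the integrals in Eqs. (15) and (17)."""
    fp = np.gradient(f, tau)          # f'(tau)
    gp = np.gradient(g, tau)          # g'(tau)
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    Fang = np.empty(ntheta, dtype=complex)
    for i, th in enumerate(theta):
        phase = np.exp(-1j * k1 * (f * np.cos(th) + g * np.sin(th)))
        Fang[i] = np.trapz((1j * k1 * (-gp * np.cos(th) + fp * np.sin(th)) * psi1
                            + dpsi1) * phase, tau)
    # angular integral of |F_ang|^2; multiply by c^2 L/(64 pi^2 omega eta) for P_sc
    I_sc = 2.0 * np.pi * np.mean(np.abs(Fang) ** 2)
    # line integral of Eq. (17); multiply by -c^2 L/(8 pi omega eta) for P_a
    N_tau = np.sqrt(fp ** 2 + gp ** 2)
    I_a = np.trapz(np.real(1j * N_tau * psi1 * np.conj(dpsi1)), tau)
    return theta, Fang, I_sc, I_a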
§ RESULTS AND DISCUSSION
§.§ Wires of circular section
To assess the validity of the integral formalism sketched in Section <ref> for investigating light scattering and absorption in graphene-coated wires near LSP resonances, we resort to circular geometries where
a reference solution exists <cit.>.
In Figure <ref> we compare the numerical results obtained with the integral formalism
described in this paper (solid curves) with the semi-analytical results obtained using Lorenz-–Mie-–Debye solution
for the scattered fields in the form of infinite series of cylindrical multipole partial waves (circles). The curves
in this figure represent the scattering efficiency spectra for a circular wire with a radius R=0.5 μm, made with a nonplasmonic, transparent material (ε_2=3.9, μ_2=1) in a vacuum (μ_1=ε_1=1). We used Kubo parameters T=300^∘ K, γ_c=0.1 meV, different values of μ_c,
excitation frequencies in the range between 5 THz (incident wavelength 60 μ m) and 30 THz (incident wavelength 10 μm) and p–polarized incident waves. The curve corresponding to the uncoated wire,
not showing any plasmonic feature in this spectral range, is given as a reference.
An excellent agreement between both formalisms is observed in Figure <ref>.
The spectral position of the dipolar graphene surface plasmon resonances –at wavelengths near
40.60 μm (μ_c=0.4 eV), 33.20 μm (μ_c=0.6 eV) and 27.16 μm (μ_c=0.9 eV)–
also agree well with those calculated using nonretarded analytical expressions <cit.>.
The agreement is also observed for the spectral position of the lower local maxima near 28.65 μm (for μ_c=0.4 eV), 23.40 μm (for μ_c=0.6 eV) and 19.13 μm (for μ_c=0.9 eV), which correspond
to quadripolar surface current distributions in the graphene coating and are associated with a complex pole in the second coefficient of the multipole expansion (the coefficient of the term that varies twice from positive to negative around the cylinder).
To further assess the suitability of the Green formalism presented here, we repeat the comparisons for circular geometries, but with metallic (intrinsically plasmonic) cores instead of dielectric (non plasmonic) cores.
The results are shown in Figure <ref>, where we show the spectral dependence of the
p–polarized scattering efficiency Q_s for a graphene-coated metallic wire with R=50nm,
illuminated from vacuum. We assume that the interior electric permittivity is described by the Drude model
ε_2(ω)=ε_∞-ω_p^2/(ω^2+iγ_mω)
with ε_∞=1, γ_m=0.01 eV and different values of ħω_p
(0.4 eV, 0.7 eV and 0.9 eV). Kubo parameters for σ(ω) are T=300^∘ K,
γ_c=0.1 meV and μ_c=0.5 eV.
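For reference, the Drude permittivity used for the core is a one-liner in the same eV conventions as above (helper name is ours):

def drude_eps(E_photon, E_p, hbar_gamma_m=0.01, eps_inf=1.0):
    # Drude model; E_photon = hbar*omega and E_p = hbar*omega_p, both in eV
    return eps_inf - E_p ** 2 / (E_photon ** 2 + 1j * hbar_gamma_m * E_photon)

# example: core permittivity at lambda = 30 um for hbar*omega_p = 0.7 eV
print(drude_eps(1.2398 / 30.0, 0.7))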
We observe that numerical results obtained with the integral formalism (solid curves) and with the Mie solution (circles) are all again in excellent agreement. As explained in <cit.>, the net effect of the graphene coating is to increase the charge density induced on the surface of the metallic particle, thus blueshifting the resonances of the metallic particle.
§.§ Wires of elliptic section
Having determined the suitability of the Green approach for the simulation of plasmon resonances in
graphene–covered wires, we next explore the effects that the departure from circular geometries has on
the spectrum of graphene LSPs supported by the wire.
Guided by previous research on metallic nanowires with a nonregular cross section <cit.>, the complexity of the resonance spectrum is expected to increase when the symmetry of the wire section decreases.
In order to proceed gradually and keep some degree of control
to verify the effectiveness of our Green theoretical formulation and numerical codes,
we consider wires with elliptical section, a geometry which includes the circle as a special case.
In Figure <ref> we plot the scattering efficiency Q_s for p–polarized incident waves and for various angles of incidence for a graphene-coated elliptical wire with major and minor semi–axes
a=0.55 μm and b=0.45 μm respectively.
The wire substrate is a nonplasmonic, transparent material (ε_2=3.9, μ_2=1), the medium of incidence is vacuum (μ_1=ε_1=1) and the Kubo parameters are T=300^∘ K, γ_c=0.1 meV and μ_c=0.9 eV.
The curve corresponding to a graphene–coated wire of circular section and a=b=0.5 μm is given as a reference.
We observe that, similar to the case of metallic LSPs <cit.>, the break of the rotational symmetry
of the wire section introduces a two-dimensional anisotropy in the angular optical response.
This anisotropy is particularly evident for the dipolar plasmonic resonance which for the circular case
(a=b=0.5 μm) occurs near 27.16 μm and that is split into two different resonant peaks,
one near 25.81 μm and the other near 28.78 μm.
The first peak corresponds to the illumination direction along the ellipse's major axis
while the second peak corresponds to the illumination direction perpendicular
to the major axis, as clearly indicated in Figure <ref> by the
fact that both resonances are decoupled for illumination directions parallel
to either of the ellipse's axes and that
the first (respectively second) peak is absent when the illumination direction is perpendicular to
(respectively along) the major axis.
We observe that while the graphene coating always introduces a minimum in the scattering efficiency of
circular rods –near 21.02 μm for the parameters in Figure <ref>, a relevant feature
in the context of graphene invisibility cloaks <cit.> and corresponding to a complex zero of the first
coefficient of the cylindrical multipole expansion <cit.>– the position and magnitude of this minimum depend strongly on the illumination direction, which is another manifestation of two-dimensional anisotropy in the angular optical response of the elliptical wire.
Although associated with weaker enhancements of the scattering efficiency, other peaks corresponding to multipolar modes higher than the dipolar mode can also be observed in figure <ref>.
These higher frequency graphene LSP modes are better appreciated in the near field, as shown in Figure
<ref> where we plot absorption efficiency Q_a spectra for the same elliptical wire and directions of incidence considered in Figure <ref>. The absorption spectrum corresponding to the circular case (a=b=0.5 μm) is also given as a reference.
Apart from the frequency splitting of the dipolar mode already noted in Figure <ref>, we also observe a splitting in the quadrupolar resonance, which in the circular case occurs at a wavelength near 19.13 μm and that in the elliptical case is split into a peak near 19.27 μm, corresponding to illumination directions parallel to the ellipse's axes, and another peak near 19.06 μm, the only resolved peak when the illumination direction
makes an angle of 45^∘ with the ellipse's axes.
In agreement with the results shown in Figure <ref> for graphene–coated metallic rods of circular section, in the elliptical case the enhancements of the scattering efficiency are significant only for the dipolar mode.
This is shown in Figure <ref>, where we observe that the dipolar resonance is now split
into two peaks corresponding to illumination directions perpendicular and parallel
to the ellipse's major axis. The curve corresponding to the bare elliptical wire illuminated at an angle of 45^∘ with the ellipse's axes is given as a reference. We note that both dipolar resonances are blueshifted compared to those
of the bare wire, in complete agreement with the fact that the net effect of the graphene coating is to increase the induced charge density on the cylindrical surface of the metallic wire <cit.>.
Graphene-coated plasmonic particles are particularly attractive because their scattering and absorption characteristics can be tuned by varying the graphene chemical potential μ_q with the help of a constant electric field (electric field effect, gate voltage), and not only by changing their size or their dielectric constant, as is the case for metallic particles.
To illustrate this tunability for elliptical wires, we show in Figure <ref>
the scattering efficiency Q_s for the wire considered in Figure <ref> and three different values
of the chemical potential, μ_c=0.5 eV, 0.7 eV and 1.0 eV.
The illumination direction makes an angle of 45^∘ with the major axis of the ellipse and the incident wave is p–polarized. We observe that the value of the chemical potential influences the position of the multipolar resonances and the magnitude of the splitting.
In Figure <ref> we plot the spatial distribution of the electric field normalized to the incident amplitude for the wire considered in Figures <ref> and <ref>. In Figure <ref>a the incident wavelength is
λ=25.81 μm and the illumination direction is along the ellipse's major axis (0^∘) whereas
in Figure <ref>b the incident wavelength is λ=28.78 μm and the illumination direction is
along the ellipse's minor axis (90^∘). We observe that in both cases the wire is behaving as an oscillating electric dipole oriented along the direction of the incident field, that is, along the ellipse's minor axis in Figure <ref>a or
along the ellipse's major axis in Figure <ref>b.
Figure <ref>c, corresponding to an incident wavelength λ=25.81 μm and illumination direction
making an angle of 45^∘ with the ellipse's axes, shows that in these conditions the electric field inside the wire is parallel to the minor axis, although the components of the incident field along both ellipse's axes have equal magnitude.
Figure <ref>d, corresponding to the incident wavelength λ=28.78 μm and illumination direction
at 45^∘ with the ellipse's axes, shows a similar behavior, except that now the electric field inside the wire is parallel to the major axis, despite the fact that the components of the incident field along both ellipse's axes have equal magnitude.
The results in Figures <ref>c and <ref>d are a clear confirmation that
the values λ=25.81 μm and λ=28.78 μm correspond to the splitting
of the degenerate dipolar resonance of the circular wire, as suggested in Figures <ref> and <ref> by the correspondence between the scattering and absorption efficiency
spectra for low eccentricity elliptical and circular wires.
The agreement between the multipolar order revealed by the topology of the near–field on the one hand and that suggested by
the correspondence between spectra for circular and for low–eccentricity elliptical wires on the other hand is also observed for
higher frequency peaks. This is illustrated by the panel in Figure <ref>, showing near–field maps for three illumination directions: at 45^∘ with the ellipse's axes (top row),
along the ellipse's minor axis (middle row)
and along the ellipse's major axis (bottom row).
The near–field maps in the left column correspond to the quadrupolar resonance, in the circular case (a=b=0.5 μm) located near 19.13 μm and split into two close resonant peaks, one at
λ=19.27 μm, associated with field enhancements near vertices and co–vertices, and the other at
λ=19.06 μm, associated with field enhancements near the diagonals of the rectangle circumscribed
to the ellipse.
The maps in the right column correspond to hexapolar resonances near λ=15.64 μm (near
15.61 μm in the circular case). Although the topology of the hexapolar near–field
when the illumination direction is at 45^∘ with the ellipse's axes is very different to the topology
when the illumination direction is along the axes, symmetry–breaking splittings of the hexapolar peaks are not
resolvable in the scattering and absorption efficiency spectra.
§ SUMMARY AND CONCLUSIONS
In conclusion, we have presented in this paper an electromagnetically rigorous method based on Green's second identity for studying the plasmonic response of graphene–coated wires of arbitrary shape.
To validate the method, we compare the numerically computed scattering and absorption efficiencies
in the particular case of graphene–coated wires of circular section with the results obtained from a
multipolar Mie theory. All the results agree excellently, both for metallic and dielectric substrates.
To explore the effects that the break of the rotational symmetry of the wire section has in the plasmonic features of the scattering and absorption response, we apply the Green formulation to the case of graphene-coated wires of elliptical section.
Compared with the scattering and absorption efficiency spectra for graphene–coated wires of circular section,
and as it might be expected on symmetry grounds, a frequency splitting of multipolar plasmonic resonances is
observed in the spectra for low–eccentricity elliptical wires. To illustrate the application of the Green method
in the near–field we investigate the spatial distribution of the electromagnetic field near the graphene coating for different frequencies and directions of incidence.
As a further test on the validity of the Green formulation, we show that the multipolar order revealed by the
topology of the near field agrees perfectly well with the multipolar order obtained from
the correspondence between spectra for circular and for low–eccentricity elliptical wires.
The method presented here should be useful in the design and engineering of graphene–wrapped particles
with tailored properties for specific plasmonic applications, including photovoltaic devices, nanoantennas,
switching, biosensing and even medical treatments.
Funding. Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET PIP 1800); Universidad de Buenos Aires (UBA 20020100100327); Universidad Autónoma de Baja California (UABC) and Consejo Nacional de Ciencia y Tecnología (CONACYT).
1
geim1
A. K. Geim, “Graphene: status and prospects," Science 324, 1530-34 (2009).
bonaccorso1
F. Bonaccorso et al, “Graphene, related two-dimensional crystals, and hybrid systems for energy conversion and storage," Science 347, (6217) (2015).
bonaccorso2
F. Bonaccorso et al, “Science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems," Nanoscale 7, 4598-4810 (2015).
ssc1
F. M. Kin, J. Long, W. Feng, and T. F. Heinz, “Optical spectroscopy of graphene: From the far infrared to the ultraviolet," Solid State Commun. 152, 1341–49 (2012).
conven1
M. Grande et al, “Fabrication of doubly resonant plasmonic nanopatch arrays on graphene," Appl. Phys. Lett. 102, 231111 (2013).
conven2
M. Hashemi, M. H. Farzad, N. A. Mortensen, and S. Xiao, “Enhanced absorption of graphene in the visible region by use of plasmonic nanostructures," J. Opt. 15, 055003 (2013).
sp-graf1
M. Jablan, M. Soljacic, and H. Buljan, “Plasmons in Graphene: Fundamental Properties and Potential Applications," Proceedings of the IEEE 101 (7), 1689-704 (2013).
sp-graf2
T. Low and P. Avouris, “Graphene Plasmonics for Terahertz to Mid–Infrared Applications," ACS nano 8, (2) 1086-101 (2014).
esfera2
M. Farhat, C. Rockstuhl, and H. Bagci,
“A 3D tunable and multi-frequency graphene plasmonic cloak," Opt. Express 21, 12592-603 (2013).
esfera1
T. Christensen, A. P. Jauho, M. Wubs, and N. Mortensen, “Localized plasmons in graphene-coated nanospheres," Phys. Rev. B 91, 125414 (2015).
esfera3
B. Yang, T. Wu, Y. Yang, and X. Zhang, “Tunable subwavelength strong absorption by graphene wrapped dielectric particles," J. Opt. 17, 035002 (2015).
cil4
Z. R. Huang et al, “A mid–infrared fast–tunable graphene ring resonator based on guided–plasmonic wave resonance on a curved graphene surface," J. Opt. 16, 105004 (2014).
cilindroconico
T. J. Arruda, A. S. Martinez, and F. A. Pinheiro, “Electromagnetic energy within coated cylinders at
oblique incidence and applications to graphene coatings," J. Opt. Soc. Am. A 31, (2014).
cilOE
J. Zhao et al, “Surface-plasmon-polariton whispering-gallery mode analysis of the graphene monolayer coated InGaAs nanowire cavity," Opt. Express 22, 5754–61 (2014).
cilindros1
R. J. Li, X. Lin, S. S. Lin, X. Liu, and H. S. Chen, “Tunable deep–subwavelength superscattering using graphene monolayers," Opt. Lett. 40, (2015).
cilindros2
M. Riso, M. Cuevas, and R. A. Depine, “Tunable plasmonic enhancement of light scattering and absorption in graphene-coated subwavelength wires," J. Opt. 17, 075001 (2015).
cilindros3
M. Cuevas, M. Riso, and R. A. Depine, “Complex frequencies and field distributions of localized surface plasmon modes in graphene-coated subwavelength wires," J. Quant. Spectrosc. Ra. 173, (2015).
cilindros4
E. Velichko, “Evaluation of a graphene-covered dielectric microtube as a refractive-index sensor in the terahertz range," J. Opt. 18, 035008 (2016).
fabric1
N. Kumar and S. Kumbhat, Essentials in Nanoscience and Nanotechnology (New York: Wiley, 2016).
C2gribonexp01
L. Ju et al, “Graphene plasmonics for tunable terahertz metamaterials," Nat. Nanotechnol. 6 (10) 630-34 (2011).
MMMM
A. A. Maradudin, T. Michel, A. McGurn, and E. R. Mendez,
“Enhanced backscattering of light from a random grating," Ann. Phys. 203, 255-307 (1990).
civ2NL C. Valencia, E. Méndez, and B. Mendoza, “Second harmonic generation in the scattering of light by two-dimensional particles," J. Opt. Soc. Am. B 20, 2150–61 (2003).
martin2
J. P. Kottmann, O. J. F. Martin, D. R. Smith, and S. Schultz, “Plasmon resonances of silver nanowires with a nonregular cross section," Phys. Rev. B 64, 235402 (2001).
martin3
J. P. Kottmann, O. J. F. Martin, D. R. Smith, and S. Schultz, “Field polarization and polarization charge distributions in plasmon resonant nanoparticles,"
New J. Phys. 2, 27 (2000).
kubo2
S. A. Mikhailov and K. Ziegler, “New Electromagnetic Mode in Graphene," Phys. Rev. Lett. 99, 016803 (2007).
kubo1
L. A. Falkovsky, “Optical properties of graphene and IV–VI semiconductors," Phys. Usp. 51, 887-897 (2008).
mishc1
M. Mishchenko, L. D. Travis, and A. A. Lacis, Scattering, Absorption, And Emission Of Light By Small Particles (Cambridge: Cambridge University Press, 2002).
bohren
C. F. Bohren and D. R. Huffman, Absorption and scattering of light by small particles (New York: Wiley, 1983).
|
http://arxiv.org/abs/1701.08790v1 | 20170126192340 | Vulnerability and co-susceptibility determine the size of network cascades | [
"Yang Yang",
"Takashi Nishikawa",
"Adilson E. Motter"
] | physics.soc-ph | [
"physics.soc-ph",
"cond-mat.dis-nn",
"cs.SI"
] |
Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA
Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA
Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL 60208, USA
Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA
Northwestern Institute on Complex Systems, Northwestern University, Evanston, IL 60208, USA
In a network, a local disturbance can propagate and eventually cause a substantial part of the system to fail, in cascade events that are easy to conceptualize but extraordinarily difficult to predict. Here, we develop a statistical framework that can predict cascade size distributions by incorporating two ingredients only: the vulnerability of individual components and the co-susceptibility of groups of components (i.e., their tendency to fail together). Using cascades in power grids as a representative example, we show that correlations between component failures define structured and often surprisingly large groups of co-susceptible components. Aside from their implications for blackout studies, these results provide insights and a new modeling framework for understanding cascades in financial systems, food webs, and complex networks in general.
Vulnerability and co-susceptibility determine the size of network cascades
Adilson E. Motter
==========================================================================
The stability of complex networks is largely determined by their ability to operate close to equilibrium—a condition that can be compromised by relatively small perturbations that can lead to large cascades of failures.
Cascades are responsible for a range of network phenomena, from power blackouts <cit.> and air traffic delay propagation <cit.> to secondary species extinctions <cit.> and large social riots <cit.>.
Evident in numerous previous modeling efforts <cit.> is that dependence between components is the building block of these self-amplifying processes and can lead to correlations among eventual failures in a cascade.
A central metric characterizing a cascade is its size.
While the suitability of a size measure depends on the context and purpose, a convenient measure is the number of network components (nodes or links) participating in the cascade (e.g., failed power lines, delayed airplanes, extinct species).
Since there are many known and unknown factors that can affect the details of cascade dynamics, the main focus in the literature has been on characterizing the statistics of cascade sizes rather than the size of individual events.
This leads to a fundamental question: what determines the distribution of cascade sizes?
In this Letter, we show that cascading failures (and hence their size distributions) are often determined primarily by two key properties associated with failures of the system components: the vulnerability, or the failure probability of each component, and the
co-susceptibility, or the tendency of a group of components to fail together.
The latter is intimately related to pairwise correlations between failures, as we will see below.
We provide a concrete algorithm for identifying groups of co-susceptible components for any given network.
We demonstrate this using the representative example of
cascades of overload failures in power grids
(Fig. <ref>).
Based on our findings, we develop the co-susceptibility model—a statistical modeling framework capable of accurately predicting the distribution of cascade sizes, depending solely on the vulnerability and co-susceptibility of component failures.
We consider a system of n components subject to cascading failures, in which a set of initial component failures can induce a sequence of failures in other components.
Here we assume that the initial failures and the propagation of failures can be modeled as stochastic and deterministic processes, respectively (although the framework also applies if the propagation or both are stochastic).
Thus, the cascade size N, defined here as the total number of components that fail after the initial failures, is a random variable that can be expressed as
N = ∑_ℓ=1^n F_ℓ,
where F_ℓ is a binary random variable representing the failure status of component ℓ (i.e., F_ℓ = 1 if component ℓ fails during the cascade, and F_ℓ = 0 otherwise).
While the n components may be connected by physical links, a component may fail as the cascade propagates even if none of its immediate neighbors have failed <cit.>.
For example, in the case of cascading failures of transmission lines in a power grid, the failure of one line can cause a reconfiguration of power flows across the network that leads to the overloading and subsequent failure of other lines away from the previous failures <cit.>.
A concrete example network we analyze throughout this Letter using the general setup above is the Texas power grid, for which we have 24 snapshots, representing on- and off-peak power demand in each season of three consecutive years.
Each snapshot comprises the topology of the transmission grid, the capacity threshold of each line, the power demand of each load node, and the power supply of each generator node (extracted from the data reported to FERC <cit.>).
For each snapshot we use a physical cascade model to generate K=5,000 cascade events.
In this model (which is a variant of that in Ref. <cit.> with the power re-balancing scheme from Ref. <cit.>),
an initial perturbation to the system (under a given condition) is modeled by the removal of a set of randomly selected lines.
A cascade following the initial failures is then modeled as an iterative process.
In each step, power flow is redistributed according to Kirchhoff's law and might therefore cause some lines to be overloaded and removed (i.e., to fail) due to overheating.
The temperature of the transmission lines is described by a continuous evolution model and the overheating threshold for line removal is determined by the capacity of the line <cit.>.
When a failure causes part of the grid to be disconnected, we re-balance power supply and demand under the constraints of limited generator capacity <cit.>.
A cascade stops when no more overloading occurs, and we define the size N of the cascade as the total number of removed lines (excluding the initial failures).
This model <cit.>, accounting for several physical properties of failure propagation, sits relatively high in the hierarchy of existing power-grid cascade models <cit.>, which ranges from the most detailed engineering models to simplest graphical or stochastic models.
The model has also been validated against historical data <cit.>.
In general, mutual dependence among the variables F_ℓ may be necessary to explain the distribution of the cascade size N.
We define the vulnerability p_ℓ≡⟨ F_ℓ⟩ of component ℓ to be the probability that this component fails in a cascade event (including events with N=0).
If the random variables F_ℓ are uncorrelated (and thus have zero covariance), then N would follow Poisson's binomial distribution <cit.>, with average μ̃=∑_ℓ p_ℓ and variance σ̃^2=∑_ℓ p_ℓ(1-p_ℓ).
However, the actual variance σ^2 of N observed in the cascade-event data is significantly larger than the corresponding value σ̃^2 under the no-correlation assumption for all 24 snapshots of the Texas power grid (with the relative difference, σ̅^2≡(σ^2 - σ̃^2)/σ̃^2, ranging from around 0.18 to nearly 39).
Thus, the mutual dependence must contribute to determining the distribution of N in these examples.
Part of this dependence is captured by the correlation matrix C, whose elements are the pairwise Pearson correlation coefficients among the failure status variables F_ℓ.
When the correlation matrix is estimated from cascade-event data, it has noise due to finite sample size, which we filter out using the following procedure.
First, we standardize F_ℓ by subtracting the average and dividing it by the standard deviation.
According to random matrix theory, the probability density of eigenvalues of the correlation matrix computed from K samples of T independent random variables follow the Marchenko-Pastur distribution <cit.>, ρ(λ) = K√((λ_+ - λ)(λ - λ_-))/(2 πλ T), where λ_± = [1 ±√(T/K) ]^2.
Since those eigenvalues falling between λ_- and λ_+ can be considered contributions from the noise, the sample correlation matrix C can be decomposed as C = C^(ran)+C^(sig), where C^(ran) and C^(sig) are its random and significant parts, respectively, which can be determined from the eigenvalues and the associated eigenvectors <cit.>.
In the network visualization of Fig. <ref>(a), we show the correlation coefficients C_ℓℓ'^(sig) between components ℓ and ℓ' estimated from the cascade-event data for the Texas grid under the 2011 summer on-peak condition.
Note that we compute correlation only between those components that fail more than once in the cascade events.
As this example illustrates, we observe no apparent structure in a typical network visualization of these correlations.
However, as shown in Fig. <ref>(b), after repositioning the nodes based on correlation strength, we can identify clusters of positively and strongly correlated components—those that tend to fail together in a cascade.
To more precisely capture this tendency of simultaneous failures, we define a notion of co-susceptibility: a given subset of m components ℐ≡{ℓ_1,…,ℓ_m} is said to be co-susceptible if
γ_ℐ ≡ (⟨ N_ℐ | N_ℐ≠0 ⟩ - n̅_ℐ)/(m - n̅_ℐ) > γ_th,
where N_ℐ≡∑_j=1^m F_ℓ_j is the number of failures in a cascade event among the m components, ⟨ N_ℐ| N_ℐ≠0 ⟩ denotes the average number of failures among these components given that at least one of them fails, n̅_ℐ≡∑_j=1^m p_ℓ_j/[1-∏_k=1^m(1-p_ℓ_k)] ≥ 1 is the value ⟨ N_ℐ| N_ℐ≠0 ⟩ would take if F_ℓ_1,…,F_ℓ_m were independent.
Here we set the threshold in Eq. (<ref>) to be γ_th=σ_N_ℐ/(m - n̅_ℐ), where σ_N_ℐ^2 ≡∑_j=1^m p_ℓ_j(1-p_ℓ_j)/[1-∏_k=1^m(1-p_ℓ_k)] - n̅_ℐ^2 ∏_k=1^m(1-p_ℓ_k) is the variance of N_ℐ given N_ℐ≠0 for statistically independent F_ℓ_1,…,F_ℓ_m.
By definition, the co-susceptibility measure γ_ℐ equals zero if F_ℓ_1,…,F_ℓ_m are independent.
It satisfies -(n̅_ℐ-1)/(m-n̅_ℐ) ≤γ_ℐ≤ 1, where the (negative) lower bound is achieved if multiple failures never occur and the upper bound is achieved if all m components fail whenever one of them fails.
Thus, a set of co-susceptible components are characterized by significantly larger number of simultaneous failures among these components, relative to the expected number for statistically independent failures.
While γ_ℐ can be computed for a given set of components, identifying sets of co-susceptible components in a given network from Eq. (<ref>) becomes infeasible quickly as n increases due to combinatorial explosion.
Here we propose an efficient two-stage algorithm for identifying co-susceptible components <cit.>.
The algorithm is based on partitioning and agglomerating the vertices of the auxiliary graph G_0 in which vertices represent the components that fail more than once in the cascade-event data, and (unweighted) edges represent the dichotomized correlation between these components.
Here we use C^(sig)_ℓℓ'>0.4 as the criteria for having an edge between vertices ℓ and ℓ' in G_0.
In the first stage [illustrated in Fig. <ref>(a)], G_0 is divided into non-overlapping cliques—subgraphs within which any two vertices are directly connected—using the following iterative process.
In each step k=1,2,…, we identify a clique of the largest possible size (i.e., the number of vertices it contains), denote this clique as Q_k, remove Q_k from the graph G_k-1, and then denote the remaining graph by G_k.
Repeating this step for each k until G_k is empty, we obtain a sequence Q_1, Q_2, …, Q_m of non-overlapping cliques in G_0, indexed in the order of non-increasing size.
In the second stage, we agglomerate these cliques, as illustrated in Fig. <ref>(b).
Initially, we set R_k = Q_k for each k.
Then, for each k=2,3,…,m, we either move all the vertices in R_k to the largest group among R_1,⋯, R_k-1 for which at least 80% of all the possible edges between that group and R_k actually exist, or we keep R_k unchanged if no group satisfies this criterion.
Among the resulting groups, we denote those groups whose size is at least three by R_1,R_2,…,R_m', m'≤ m.
A key advantage of our method over applying community-detection algorithms <cit.> is that the edge density threshold above can be optimized for the accuracy of cascade size prediction.
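A compact sketch of both stages using the networkx library is given below; the tie-breaking conventions are ours, and the authors' released code (see the source-code reference in the bibliography) is authoritative:

import networkx as nx

def co_susceptible_groups(C_sig, nodes, c_th=0.4, density=0.8, min_size=3):
    # stage 0: dichotomized correlation graph G_0
    G0 = nx.Graph()
    G0.add_nodes_from(nodes)
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if C_sig[i, j] > c_th:
                G0.add_edge(nodes[i], nodes[j])
    # stage 1: repeatedly peel off a largest clique Q_1, Q_2, ...
    G, cliques = G0.copy(), []
    while G.number_of_nodes():
        Q = set(max(nx.find_cliques(G), key=len))   # largest remaining clique
        cliques.append(Q)
        G.remove_nodes_from(Q)
    # stage 2: merge each clique into the largest earlier group sharing
    # at least `density` of the possible connecting edges in G_0
    groups = cliques[:1]
    for Q in cliques[1:]:
        for R in sorted(groups, key=len, reverse=True):
            links = sum(1 for u in R for v in Q if G0.has_edge(u, v))
            if links >= density * len(R) * len(Q):
                R |= Q
                break
        else:
            groups.append(Q)
    return [sorted(R) for R in groups if len(R) >= min_size]

Because node removal can only shrink the remaining cliques, the greedy peeling automatically yields the cliques in non-increasing order of size, as required.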
We test the effectiveness of our general algorithm on the Texas power grid.
As Figs. <ref>(a) and <ref>(b) show, the block-diagonal structure of C^(sig) indicating high correlation within each group and low correlation between different groups becomes evident when the components are reindexed according to the identified groups.
We note, however, that individual component vulnerabilities do not necessarily correlate with the co-susceptibility group structure [see Fig. <ref>(d), in comparison with Fig. <ref>(c)].
We find that the sizes of the groups of co-susceptible components vary significantly across the 24 snapshots of the Texas power grid, as shown in Fig. <ref>(e).
The degree of co-susceptibility, as measured by the total number of co-susceptible components, is generally lower under an off-peak condition than the on-peak counterpart [Fig. <ref>(e)].
This is consistent with the smaller deviation from the no-correlation assumption observed in Fig. <ref>(f), where this deviation is measured by the relative difference in the variance, σ̅^2 (defined above).
Since high correlation within a group of components implies a high probability that many of them fail simultaneously, the groups identified by our algorithm tend to have high values of γ_ℐ.
Indeed, our calculation shows that Eq. (<ref>) is satisfied for all the 171 co-susceptible groups found in the 24 snapshots.
Given the groups of components generated through our algorithm,
the co-susceptibility model is defined as the set of binary random variables F_ℓ (different from F_ℓ) following the dichotomized correlated Gaussian distribution <cit.> whose marginal probabilities (i.e., the probabilities that F_ℓ=1) equal the estimates of p_ℓ from the cascade-event data and whose correlation matrix C is given by
C_ℓℓ' = C^(sig)_ℓℓ' if ℓ,ℓ'∈ R_k for some k≤ m',
0 otherwise.
We are thus approximating the correlation matrix C by the block diagonal matrix C, where the blocks correspond to the sets of co-susceptible components.
In terms of the correlation network, this corresponds to using only those links within the same group of co-susceptible components for predicting the distribution of cascade sizes.
Since individual groups are assumed to be uncorrelated, this can be interpreted as model dimensionality reduction, in which the dimension reduces from n to the size of the largest group.
We sample F_ℓ using the code provided in Ref. <cit.>.
In this implementation, the computational time for sampling scales with the number of variables with an exponent of 3, so factors of 2.0 to 15.2 in dimensionality reduction observed for the Texas power grid correspond to a reduction of computational time by factors of more than 8 to more than 3,500.
We now validate the co-susceptibility model for the Texas grid.
We estimate the cumulative distribution function S_ N(x) of cascade size, N≡∑_ℓ F_ℓ, using 3,000 samples generated from the model.
As shown in Fig. <ref>(a), this function matches well with the cumulative distribution function S_N(x) of cascades size N computed directly from the cascade-event data.
This is validated more quantitatively in the inset; the (binned) probability p_ N(x) that x ≤ N≤ x+Δ x for the co-susceptibility model is plotted against the corresponding probability p_N(x) for the cascade-event data, using a bin size of Δ x = N_max/20, where N_max denotes the maximum cascade size observed in the cascade-event data.
The majority of the points lie within the 95% confidence interval for p_ N(x), computed using the estimated p_N(x).
To validate the co-susceptibility model across all 24 snapshots, we use the Kolmogorov-Smirnov (KS) test <cit.>.
Specifically, for each snapshot we test the hypothesis that the samples of N and the corresponding samples of N are from the same distribution.
Figure <ref>(b) shows the measure of distance between two distributions, sup_x |S_N(x)-S_ N(x)|, which underlies the KS test, as a function of the total amount of electrical load in the system.
We find that the null hypothesis cannot be rejected
at the 5% significance level
for most of the cases we consider [21/24=87.5%, blue dots in Fig. <ref>(b)]; it can be rejected in only three cases (red triangles, above the threshold distance indicated by the dashed line), all corresponding to high stress (i.e., high load) conditions.
We also see that more stressed systems are associated with larger distances between the distributions, and a higher likelihood of being able to reject the null hypothesis.
We believe this is mainly due to higher-order correlations not captured by p_ℓ and C.
The identification of co-susceptibility as a key ingredient in determining cascade sizes leads to two new questions:
(1) What gives rise to co-susceptibility?
(2) How to identify the co-susceptible groups?
While the first question opens an avenue for future research, the second question is addressed by the algorithm developed here (for which we provide a ready-to-use software <cit.>).
The co-susceptibility model is general and can be used for cascades of any type (of failures, information, or any other spreadable attribute) for which information is available on the correlation matrix and the individual “failure” probabilities.
Such information can be empirical, as in the financial data studied in Ref. <cit.>, or generated from first-principle models, as in the power-grid example used here.
Our approach accounts for correlations (a strength shared by some other approaches, such as the one based on branching processes <cit.>), and does so from the fresh, network-based perspective of co-susceptibility.
Finally, since co-susceptibility is often a nonlocal effect, our results suggest that we may need nonlocal strategies for reducing the risk of cascading failures, which bears implications for future research.
This work was supported by ARPA-E Award No. DE-AR0000702.
hines2009large
P. Hines, J. Apt, and S. Talukdar,
Large blackouts in North America: Historical trends and policy implications,
Energ. Policy 37, 5249 (2009).
fleurquin2013
P. Fleurquin, J. J. Ramasco, and V. M. Eguiluz,
Systemic delay propagation in the US airport network,
Sci. Rep. 3, 1159 (2013).
eco:11
J. A. Estes, J. Terborgh, J. S. Brashares, M. E. Power, J. Berger, W. J. Bond, and D. A. Wardle,
Trophic downgrading of planet Earth,
Science 333, 301 (2011).
sahasrabudhe2011rescuing
S. Sahasrabudhe and A. E. Motter,
Rescuing ecosystems from extinction cascades through compensatory perturbations,
Nature Commun. 2, 170 (2011).
watts2002
D. J. Watts,
A simple model of global cascades on random networks,
Proc. Natl. Acad. Sci. USA 99, 5766 (2002).
brummitt2015
C. D. Brummitt, G. Barnett, and R. M. D'Souza,
Coupled catastrophes: sudden shifts cascade and hop among interdependent systems,
J. Roy. Soc. Interface 12, 20150712 (2015).
kinney_2005
R. Kinney, P. Crucitti, R. Albert, and V. Latora,
Modeling cascading failures in the North American power grid,
Euro. Phys. J. B 46, 101 (2005).
buldyrev_2010
S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin,
Catastrophic cascade of failures in interdependent networks,
Nature 464, 1025 (2010).
goh_2003
K. I. Goh, D. S. Lee, B. Kahng, and D. Kim,
Sandpile on scale-free networks,
Phys. Rev. Lett. 91, 148701 (2003).
motter_2004
A. E. Motter,
Cascade control and defense in complex networks,
Phys. Rev. Lett. 93, 098701 (2004).
dobson2007complex
I. Dobson, B. A. Carreras, V. E. Lynch, and D. E. Newman,
Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization,
Chaos 17, 026103 (2007).
bak_1987
P. Bak, C. Tang, and K. Wiesenfeld,
Self-organized criticality: An explanation of the 1/f noise,
Phys. Rev. Lett. 59, 381 (1987).
Witthaut:2015
D. Witthaut and M. Timme,
Nonlocal effects and countermeasures in cascading failures,
Phys. Rev. E 92, 032809 (2015).
dobson_2016
I. Dobson, B. A. Carreras, D. E. Newman, and J. M. Reynolds-Barredo,
Obtaining statistics of cascading line outages spreading in an electric transmission network from standard utility data,
IEEE T. Power Syst. 99, 1 (2016).
anghel2007stochastic
M. Anghel, K. A. Werley, and A. E. Motter,
Stochastic model for power grid dynamics,
Proc. 40th Int. Conf. Syst. Sci. HICSS'07,
Big Island, HI, USA,
Vol. 1, 113 (2007).
FERC
The data for the snapshots are obtained from Federal Energy Regulatory Commission (FERC) Form 715.
Hines2011
P. Hines, E. Cotilla-Sanchez, and S. Blumsack,
Topological models and critical slowing down: Two approaches to power system blackout risk analysis,
Proc. 44th Int. Conf. Syst. Sci. HICSS'11,
Kauai, HI, USA, 1 (2011).
source-code
Source code is available for download at <https://github.com/yangyangangela/determine_cascade_sizes>
opaModel
B. A. Carreras, D. E. Newman, I. Dobson, and N. S. Degala,
Validating OPA with WECC data,
Proc. 46th Int. Conf. Syst. Sci. HICSS'13,
Maui, HI, USA, 2197 (2013).
henneaux2016
P. Henneaux, P. E. Labeau, J. C. Maun, and L. Haarla,
A two-level probabilistic risk assessment of cascading outages,
IEEE T. Power Syst. 31, 2393 (2016).
hines_2015
P. D. Hines, I. Dobson, and P. Rezaei,
Cascading power outages propagate locally in an influence graph that is not the actual grid topology,
arXiv:1508.01775 (2016).
dobson2012_vulnerability
B. A. Carreras, D. E. Newman, and I. Dobson,
Determining the vulnerabilities of the power transmission system,
Proc. 45th Int. Conf. Syst. Sci. HICSS'12,
Maui, HI, USA, 2044 (2012).
Yang:2016
Y. Yang, T. Nishikawa, and A. E. Motter (to be published).
Wang:1993
Y. H. Wang,
On the number of successes in independent trials,
Stat. Sinica. 3, 295 (1993).
mehta2004random
M. L. Mehta,
Random Matrices, 3rd edn.
(Academic Press, 2004).
MacMahon:2015
M. MacMahon and D. Garlaschelli,
Community Detection for Correlation Matrices,
Phys. Rev. X 5, 021006 (2015).
emrich1991method
L. J. Emrich and M. R. Piedmonte,
A method for generating high-dimensional multivariate binary variates,
Amer. Statist. 45, 302 (1991).
macke2009generating
J. H. Macke, P. Berens, A. S. Ecker, A. S. Tolias, and M. Bethge,
Generating spike trains with specified correlation coefficients,
Neural Comp. 21, 397 (2009).
massey1951kolmogorov
F. J. Massey, Jr.,
The Kolmogorov-Smirnov test for goodness of fit,
J. Amer. Statist. Assoc. 46, 68 (1951).
Plerou:2002
V. Plerou, P. Gopikrishnan, B. Rosenow, L. A. N. Amaral, T. Guhr, and H. E. Stanley,
Random matrix approach to cross correlations in financial data,
Phys. Rev. E 65, 066126 (2002).
dobson2012
I. Dobson,
Estimating the propagation and extent of cascading line outages from utility data with a branching process,
IEEE T. Power Syst. 27, 2146 (2012).
|
http://arxiv.org/abs/1701.07444v3 | 20170125190111 | Discovering the interior of black holes | [
"Ram Brustein",
"A. J. M. Medved",
"K. Yagi"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"hep-th"
] | |
http://arxiv.org/abs/1701.07798v1 | 20170126180836 | Estimating solar flux density at low radio frequencies using a sky brightness model | [
"Divya Oberoi",
"Rohit Sharma",
"Alan E. E. Rogers"
] | astro-ph.IM | [
"astro-ph.IM",
"astro-ph.SR"
] |
Divya Oberoi^1 (corresponding author, [email protected]), Rohit Sharma^1 ([email protected]), and Alan E. E. Rogers^2
^1 National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune 411007, India
^2 MIT Haystack Observatory, Westford MA 01886, USA
Running head: Oberoi et al., Estimating solar flux density at low radio frequencies using a sky brightness model
Sky models have been used in the past to calibrate individual low radio frequency telescopes.
Here we generalize this approach from a single antenna to a two element interferometer and formulate the problem in a manner that allows us to estimate the flux density of the Sun using the normalized cross-correlations (visibilities) measured on a low resolution interferometric baseline.
For wide field-of-view instruments, typically the case at low radio frequencies, this approach can provide robust absolute solar flux calibration for well characterized antennas and receiver systems.
It can provide a reliable and computationally lean method for extracting parameters of physical interest using a small fraction of the voluminous interferometric data, which can be prohibitively compute intensive to calibrate and image using conventional approaches.
We demonstrate this technique by applying it to data from the Murchison Widefield Array and assess its reliability.
§ INTRODUCTION
Modern low radio frequency arrays use active elements to provide sky noise dominated signal over large bandwidths (e.g., LOFAR, LWA and MWA).
At these long wavelengths it is hard to build test setups to determine the absolute calibration for antennas or antenna arrays.
Models of the radio emission from the sky have, however, successfully been used to determine absolute calibration of active element arrays <cit.>.
Briefly, the idea is that the power output of an antenna, W, can be modeled as:
W(LST) = W_i,i(LST) = g_i g_i^* <V_i V_i^*>,
where LST is the local sidereal time;
i is an index to label antennas;
g_i is the instrumental gain of the i^th antenna;
^* represents complex conjugation;
V_i is the voltage measured by the radio frequency probe
and angular brackets denote averaging over time.
In temperature units, V_i V_i^* is itself given by:
V_i,i = V_i V_i^* =1/2 A_eΔν∫_Ω T_Sky(s⃗) P_N(s⃗) dΩ + T_Rec,
where
A_e is the effective collecting area of the antenna;
Δν is the bandwidth over which the measurement is made;
s⃗ is a direction vector in a coordinate system tied to the antenna (e.g., altitude-azimuth);
T_Sky(s⃗) is the sky brightness temperature distribution;
P_N(s⃗) is the normalized power pattern of the antenna;
dΩ represents the integration over the entire solid angle; and T_Rec is the receiver noise temperature and also includes all other terrestrial contributions to the antenna noise.
The sky rotates above the antenna at the sidereal rate, hence the measured W is a function of the local sidereal time.
The presence of strong large scale features in T_Sky lead to a significant sidereal variation in W, even when averaged over the wide beams of the low radio frequency antennas.
If a model for T_sky(s⃗) is available at the frequency of observation and P_N(s⃗) is independently known, the integral in Eq. <ref> can be evaluated.
Assuming that either g is stable over the period of observation, or that its variation can be independently calibrated, the only unknowns left in Eqs. <ref> and <ref> are T_Rec and the instrumental gain g.
<cit.> successfully fitted the measurements with a model for the expected sidereal variation and determined both these free parameters.
This method requires observations spanning a large fraction of a sidereal day to be able to capture significant variation in W.
As T_sky(s⃗) does not include the Sun, which usually dominates the antenna temperature, T_Ant, at low radio frequencies, such observations tend to avoid the times when the Sun is above the horizon.
Our aim is to achieve absolute flux calibration for the Sun.
With this objective, we generalize the idea of using T_Sky for calibrating an active antenna described above, from total power observations with a single element to a two element interferometer with well characterized active antenna elements and receiver systems.
Further, we pose the problem of calibration in a manner which allows us to take advantage of the known antenna parameters to compute the solar flux using a sky model.
We demonstrate this technique on data from the Murchison Widefield Array (MWA) which operates in the 80–300 MHz band.
A precursor to the Square Kilometre Array, the MWA is located in the radio quiet Western Australia.
Technical details about the MWA design are available in <cit.>.
The MWA science case is summarized in <cit.> and includes solar, heliospheric and ionospheric studies among its key science focii.
With its densely sampled u-v plane and the ability to provide spectroscopic imaging data at comparatively high time and spectral resolution, the MWA is very well suited for imaging the spectrally complex and dynamic solar emission <cit.>.
Using the MWA imaging capabilities to capture the low level variations seen in the MWA solar data requires imaging at high time and frequency resolutions, 0.5 s and about hundred kHz, respectively.
In addition, the large number of the elements of the MWA (128), which give it good imaging capabilities, also lead to an intrinsically large data rate (about 1 TB/hour for the usual solar observing mode).
Hence, imaging large volumes of solar MWA data is challenging from perspectives ranging from data transport logistics to the computational and human resources needed.
Hence, a key motivation for this work was to develop a computationally inexpensive analysis technique capable of extracting physically interesting information from a small fraction of these data without requiring full interferometric imaging.
Such a technique also needs to be amenable to automation so that it can realistically be used to analyze large data-sets spanning thousands of hours.
The basis of this technique is formulated in Sec. <ref> and its implementation for the MWA data is described in Sec. <ref>.
The results and a study of the sources of random and systematic errors are presented in sections <ref> and <ref>, respectively.
A discussion is presented in Sec. <ref>, followed by the conclusions in Sec. <ref>.
§ FORMALISM
The response of a baseline to the sky brightness distribution, I(s⃗), can be written as
V(b⃗) = 1/2 A_e Δν∫_Ω I(s⃗) P_N(s⃗) e^-2 π i νb⃗·s⃗/c dΩ,
where V(b⃗) is the measured cross correlation for the baseline b⃗;
A_e is the effective collecting area of the antennas;
Δν is the bandwidth over which the measurement is made;
I(s⃗) is the sky brightness distribution
and
P_N(s⃗) is the normalized antenna power pattern <cit.>.
The antennas forming the baseline are assumed to be identical.
In terms of vector components this can be expressed as follows:
V(u,v,w) = 1/2 A_e Δν∫∫ I(l,m) P_N(l,m)
e^-2 π i {ul + vm + w(√(1 - l^2 - m^2)-1)}dl dm /√(1 - l^2 -m^2),
where
u,v and w are the components of the baseline vector b⃗, expressed in units of λ, in a right handed Cartesian coordinate system with u pointing towards the local east, v towards the local north and w along the direction of the phase center,
and
l,m and n are the corresponding direction cosines with their origin at the phase center <cit.>.
Compensation for the geometric delay between the signals arriving at the two ends of the baseline prior to their multiplication leads to the introduction of the minus one in the coefficient of w in the exponential.
It is assumed that Δν is narrow enough that variations of P and I with ν can be ignored.
We assume both the signal chains involved to also be identical.
In terms of the various sources of signal and noise contributing noise power to an interferometric measurement, the normalized cross-correlation coefficient, r_N, measured by a baseline can be written as
r_N = T_b⃗/T_Sky + T_Rec + T_Pick-up.
The numerator represents the signal power which is correlated between the two elements forming the baseline, T_b⃗, and the denominator is the sum of all the various contributions to the noise power of the individual elements.
T_Rec represents the noise contribution of the signal chain, T_Pick-up the noise power picked up from the ground and T_Sky the beam-averaged noise contribution from the sky visible to the antenna beam.
T_Sky is given by <cit.>:
T_Sky = 1/Ω_P∫_Ω T_Sky(s⃗) P_N(s⃗) dΩ.
Here Ω_P is the solid angle of the normalized antenna beam, P_N.
The advantage of using a normalized quantity like r_N is that, unlike Eq. <ref>, it is independent of instrumental gains, g_is.
For reasonably well characterized instruments, reliable estimates for P(s⃗), T_Rec and T_Pick-up are available from a mix of models and measurements.
Prior work by <cit.> has demonstrated that for antennas with wide fields of view, which allow for averaging over small angular scale variations, the 408 MHz all sky map by <cit.>, scaled using an appropriate spectral index, can be used as a suitable sky model.
As the sky model does not include the Sun, for solar observations T_Sky is modeled as the sum of contribution of the sky model as given in Eq. <ref> and the beam-averaged contribution of the Sun, T_⊙, P.
The angular size of radio Sun can be a large fraction of a degree.
So for estimating solar flux, one needs baselines short enough that their angular resolution is much larger than the angular size of the Sun.
The best suited baselines are the ones over which the bulk of the smooth, large angular scale Galactic emission averages out over the wide field-of-view, while practically the entire solar emission is retained.
In order to account appropriately for the sky model emission picked up by the baseline, T_b⃗ Sky, the numerator of Eq. <ref> is modeled as
T_b⃗ = T_⊙, P + T_b⃗ Sky.
Given the geometry of the baseline, T_b⃗ Sky can be computed by incorporating the phase term reflecting the baseline response from Eq. <ref> in Eq. <ref>,
T_b⃗,Sky = | 1/Ω_P∫_Ω T_Sky(s⃗) P_N(s⃗)
e^-2 π i (ul + vm + w(√(1-l^2-m^2)-1) dΩ |.
Thus, for solar observations, Eq. <ref> can be written as
r_N, ⊙ = T_⊙, P + T_b⃗ Sky/T_Sky + T_⊙, P + T_Rec + T_Pick-up.
The LHS of Eq. <ref> is the measured quantity and once T_Sky and T_b⃗ Sky are available from a model, the only remaining unknown on the RHS is T_⊙, P.
Once T_⊙, P has been computed, the flux density of the Sun, S_⊙, is given by
S_⊙ = 2 k T_⊙,P/λ^2 Ω_P.
One can thus estimate S_⊙ using measurements from a single interferometric baseline from a wide field of view instrument.
Additionally, if the angular size of the Sun, Ω_⊙, is independently known, the average brightness temperature of the Sun, T_⊙, is given by
T_⊙ = T_⊙, P Ω_P/Ω_⊙.
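To make the bookkeeping explicit, here is a minimal Python sketch of this inversion chain (Eqs. <ref>, <ref> and <ref>); all names are ours, temperatures are in K, wavelength in m, and solid angles in sr:

K_B = 1.380649e-23   # Boltzmann constant, J/K
SFU = 1.0e-22        # 1 solar flux unit, W m^-2 Hz^-1

def solar_flux_from_rn(r_n, T_sky, T_b_sky, T_rec, T_pickup,
                       wavelength, omega_p, omega_sun=None):
    # Solve r_n = (T_sun_p + T_b_sky) / (T_sky + T_sun_p + T_rec + T_pickup)
    T_sun_p = (r_n * (T_sky + T_rec + T_pickup) - T_b_sky) / (1.0 - r_n)
    S_sun = 2.0 * K_B * T_sun_p * omega_p / wavelength**2 / SFU
    T_sun = None if omega_sun is None else T_sun_p * omega_p / omega_sun
    return T_sun_p, S_sun, T_sun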
§ IMPLEMENTATION
To illustrate this approach we use data from the MWA taken on September 3, 2013 as a part of the solar observing proposal G0002 from 04:00:40 to 04:04:48.
One GOES C1.3 class and 4 GOES B class flares were reported on this day.
Six minor type III radio bursts and one minor type IV radio burst were also reported.
Overall the level of activity reported on this day was classified as low by solarmonitor.org.
The MWA provides the flexibility to spread the observing bandwidth across the entire RF band in 24 pieces, each 1.28 MHz wide, providing a total observing bandwidth of 30.72 MHz.
These data were taken in the so called picket-fence mode where 12 groups of 2 contiguous coarse channels were distributed across the 80–300 MHz band in a roughly log-spaced manner.
Here we work with the 10 spectral bands at 100 MHz and above.
The time and frequency resolution of these data are 0.5 s and 40 kHz, respectively.
Some of the spectral channels suffer from instrumental artifacts, and were not used in this study.
For an interferometric baseline Eq. <ref> can be generalized to
W_i,j = g_i g_j^* <V_i V_j^*>,
where i and j are labels for the antennas comprising the baseline,
and the corresponding generalization for Eq. <ref> is given in Eqs. <ref> or <ref>.
The LHS of Eq. <ref> is the measurable and is constructed as given below from the observed quantities:
r_N,⊙ = W_i,j/√(W_i,i× W_j,j),
It is evident from Eqs. <ref> and <ref> that r_N,⊙ is independent of the instrumental gain terms, making it suitable for the present application.
In the following sub-sections we discuss the various terms in Eq. <ref> needed for estimating T_⊙, P.
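In code, the measurable on the left-hand side of Eq. <ref> is a one-liner on the complex correlator outputs (a hedged sketch with numpy imported as np; taking the modulus assumes, as in Eq. <ref>, that the model for r_N,⊙ is real and non-negative):

import numpy as np
r_n = np.abs(W_ij) / np.sqrt(W_ii.real * W_jj.real)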
§.§ P_N(s⃗) and T_Pick-up
Detailed electromagnetic simulations of the MWA antenna elements, referred to as tiles, including the effects of mutual coupling and finite ground screen, have been done to compute reliable models for P_N(s⃗) in the 100–300 MHz band.
These simulations compute the embedded patterns using a numerical electromagnetic code (FEKO).
The MWA tiles comprise 16 dual-polarization active elements arranged in a 4×4 grid placed on a 5m × 5m ground screen.
For efficiency of computing, we assume that a tile has only 4 distinct embedded patterns from which all 16 can be obtained by rotation and mirror reflection.
The full geometry is placed on welded wire ground screen over a dielectric earth.
These simulations also allow us to compute the ground loss.
The beam pattern for a given direction is the vector sum of the embedded patterns for each of the 16 elements using the appropriate geometric phase delays.
The embedded patterns change slowly with frequency and we compute and store the real and imaginary parts for each polarization every 10 MHz at a 1^∘× 1^∘ azimuth-elevation grid for all the 4 embedded patterns.
Using these embedded pattern files, we can interpolate to compute the beam patterns for any given frequency and direction.
We also determine the ground loss as a function of frequency in terms of noise power which will be added to the receiver and sky noise.
This contribution to noise power, referred to as T_Pick-up, varies between 10–20 K.
An example MWA beam pattern at 238 MHz is shown in the top panel of Fig. <ref> for an azimuth and zenith angle of 0.0^∘ and 36.4^∘, respectively.
§.§ T_Rec
The T_Rec for the MWA has been modeled based on the radio frequency design and signal chain, and successfully verified against field measurements.
Though all MWA tiles are identical in design, they lie at differing distances from the receiver units where the data is digitized.
A few different flavors of cables are used to connect them to the receivers.
The value of T_Rec for a tile depends on the length and characteristics of this cable.
Here we have chosen to work only with the tiles using the shortest cable runs (90 m), which also give the best T_Rec performance.
For these tiles the T_Rec is close to 35 K at 100 MHz, drops gradually to about 20 K at 180 MHz and then increases smoothly to about 30 K at 300 MHz.
§.§ Choice of sky model and spectral index
The <cit.> all-sky map at 408 MHz, with an angular resolution of 0.85^∘, a zero level offset estimated to be better than 2 K, and random temperature errors on the final maps < 0.5 K <cit.>, is the best suited map for our application.
Its reliability has been independently established in prior work <cit.> and it is routinely used as the sky model at low radio frequencies.
A spectral index is used to translate the map to the frequency of interest.
The observed radio emission comes from both Galactic and extra-galactic sources and it is commonly assumed that the emission spectrum, averaged over sufficiently large patches in the sky, can be described simply by a spectral index, α, typically defined in temperature units as T ∝ν^α.
The α can vary from one part of the sky to another so, in principle, one needs an all-sky α map to account for its variation across the sky.
In practice, when averaged over the large angular scales corresponding to fields-of-view of low frequency elements (order 10^3 deg^2), α converges to a rather stable value.
There have been a few independent estimates of the spectral index of the Galactic background radiation and its variation as a function of direction <cit.>.
The most recent of these studies computed a spectral index between 45 MHz and 408 MHz.
It concluded that over most of the sky the spectral index is between 2.5 and 2.6, which is reduced by thermal absorption in much of the |b| < 10^∘ region to values between 2.1 and 2.5.
This study also provided a spectral index map.
Here we work with only one pointing direction which is chosen to avoid the Galactic plane and use α=-2.55, which is appropriate for this direction.
The middle panel of Fig. <ref> shows the model sky at 238 MHz derived from the <cit.> 408 MHz map.
§.§ Computing T_Sky and T_b⃗, Sky
Once T_Sky(s⃗) and P_N(s⃗) are known, T_Sky can be computed using Eq. <ref>.
Computing T_b⃗, Sky requires choosing a baseline.
The integrand in Eq. <ref> includes a phase term which is responsible for a given baseline averaging out the spatially smooth part of the emission in the beam.
For this application, the ideal baselines are the ones which are short enough for a source approaching 1^∘ to appear like an unresolved point source (Sec. <ref>), while the contribution of the smoothly varying Galactic emission drops dramatically as it gets averaged over multiple fringes of the phase term in Eq. <ref>.
The heavily centrally condensed MWA array configuration provides many suitable baselines.
The cosine part of the phase term mentioned above is shown in the bottom panel of Fig. <ref> for an example baseline.
Table <ref> lists the T_b⃗, Sky for all the different frequencies considered here for this baseline.
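A discretized sketch of these two integrals (Eqs. <ref> and <ref>), with T_map, beam, and the direction-cosine grids l, m all sampled on the same pixel grid, and pix_sr the per-pixel solid angle; the names are ours:

import numpy as np

def beam_averaged_t_sky(T_map, beam, pix_sr):
    # Eq. <ref>: beam-weighted mean sky temperature
    omega_p = np.sum(beam * pix_sr)
    return np.sum(T_map * beam * pix_sr) / omega_p

def t_baseline_sky(T_map, beam, pix_sr, u, v, w, l, m):
    # Eq. <ref>: sky-model power picked up by baseline (u, v, w) in wavelengths
    n = np.sqrt(1.0 - l**2 - m**2)
    phase = np.exp(-2j * np.pi * (u * l + v * m + w * (n - 1.0)))
    omega_p = np.sum(beam * pix_sr)
    return np.abs(np.sum(T_map * beam * phase * pix_sr)) / omega_p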
§.§ Choice of angular size of Sun
While S_⊙ can be computed unambiguously in this formalism, computing T_⊙ requires an additional piece of information, Ω_⊙ (Eq. <ref>).
The MWA data can provide the images from which Ω_⊙ can, in principle, be measured.
Deceptively, however, this involves some complications.
Given the imaging dynamic range and the resolution of the MWA, the Sun usually appears as an asymmetric source with a somewhat complicated morphology.
This rules out the approach of fitting elliptical Gaussians to estimate the radio size of the Sun used in some of the earlier work <cit.>.
Associating an angular size with such a source requires one to define a threshold and integrate the region enclosed within this contour to give the angular size of the Sun.
The solar emission at these frequencies comes from the corona and this emission does not have a sharp boundary.
The choice of the threshold is, hence, bound to be somewhat subjective.
Also, as mentioned in Sec. <ref>, a key objective of this work is to develop a technique which is numerically much less intensive than interferometric imaging, so we cannot expect these solar radio images to be available.
In absence of more detailed information, our best recourse is to assume the Sun to be effectively a circular disc with a frequency dependent size given by the following empirical relationship (Stephen White; private communication):
θ_⊙ = 32.0 + 2.22 ×ν_GHz ^-0.60,
where θ_⊙ is the expected effective solar diameter in arcmin and ν_GHz, the observing frequency in GHz.
The solar radio images are well known to have non-circular appearance and their equatorial and polar diameters can be different by as much as 30% at metre wavelengths <cit.>.
θ_⊙ represents an effective diameter yielding the same surface area as the true solar brightness distribution.
Values of θ_⊙ are tabulated in Table <ref>.
While the solid angle subtended by the Sun is expected to vary with the presence of coronal features like streamers and coronal holes, and the phase of the solar cycle, this expected variation is only a fraction of its mean angular size.
Additionally, we note that the active emissions are usually expected to come from compact sources, so in presence of solar activity, this leads to a large underestimate of the true T_⊙ for active regions.
In spite of these limitations this approach provides very useful estimates of T_⊙, especially for the quiet Sun.
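The empirical size relation and the corresponding disc solid angle, as needed in Eq. <ref>, reduce to the following Python sketch (names ours):

import numpy as np

def theta_sun_arcmin(nu_ghz):
    # Eq. <ref>: effective solar diameter in arcmin
    return 32.0 + 2.22 * nu_ghz**(-0.60)

def omega_sun_sr(nu_ghz):
    # solid angle of the equivalent circular disc, in steradians
    radius_rad = np.radians(theta_sun_arcmin(nu_ghz) / (2.0 * 60.0))
    return np.pi * radius_rad**2

For example, at 238 MHz this gives θ_⊙ ≈ 37.3 arcmin.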
§.§ Choice of MWA pointing direction
The MWA tiles are pointed towards the chosen direction in the sky by introducing appropriate delays between the signals from the dipoles comprising a tile.
These delays are implemented by switching in one or more of five independent delay lines, which provide delays in steps of two, for each of the dipoles <cit.>.
A consequence of this discreteness in the delay settings is that all the different signals can be delayed by exactly the required amounts only for certain specific directions.
These directions are referred to as sweet spots and the MWA beams are expected to be closest to the modeled values towards these directions.
For this reason, for solar observations we point to the sweet spot nearest to the Sun, rather than the Sun itself.
For the data presented here, the nearest sweet spot was located at a distance of 4.18^∘ from the Sun and implies that P_N is not unity towards the direction of the Sun.
Figure <ref> plots the value of P_N towards the direction of the Sun as a function of frequency for both the polarizations.
The pointing center was also used as the phase center for computing the cross-correlations.
We account carefully for the loss of flux on the short baselines due to this, using our chosen model for solar radio emission (Sec. <ref>).
The amplitude of the integral over the phase term in Eq. <ref> over a circular disc of size given by Eq. <ref> located with an appropriate offset with respect to the phase center measures the solar flux picked up by a given baseline.
This quantity is also shown in Fig. <ref> as a fraction of the flux recovered by some example baselines as a function of frequency.
Both of these effects are corrected for in all subsequent analysis.
§ RESULTS
To provide an estimate of the magnitudes of the different terms in Eq. <ref>, Table <ref> lists representative values of these terms for all the ten observing bands spanning 100–300 MHz for the XX polarization for the baseline Tile011-Tile022.
Figure <ref> shows the dynamic spectra for S_⊙ for the bands listed in Table <ref>.
As the bulk of the emission at these radio frequencies is thermal emission from the million K coronal plasma, the broadband featureless emission is not expected to have significant linear polarization.
The consistency between T_⊙,P computed for the XX and the YY polarizations, for all the ten spectral bands, is demonstrated in Fig <ref>.
The availability of many MWA baselines of suitably short lengths provides a convenient way to check for consistency between estimates of T_⊙,P from different baselines.
Here, we consider all six baselines formed between the following four tiles – Tile011, Tile021, Tile022 and Tile023.
Table <ref> shows the mean of the various parameters of interest over these six baselines, and RMS of these values computed over these baselines.
Figure <ref> shows a comparison of the T_⊙,P computed using the data for the same polarization (XX) for these baselines.
The variations in the median values of these histograms of ratios of T_⊙,P measured on different baselines are shown as a function of frequency in Fig. <ref>, along with the FWHM of these histograms.
§ UNCERTAINTY ESTIMATES
The key quantity of physical interest is S_⊙.
The intrinsic measurement uncertainty in S_⊙ due to thermal noise is given by
δ S_⊙,Th = 2 k/A_effT_Sys/√(Δν Δ t),
where T_Sys, the system temperature, is the sum of all the terms in the denominator of Eq. <ref>, A_eff is the effective collecting area of an MWA tile in m^2 (given by λ^2/Ω_P), and Δν and Δ t the bandwidth and the durations of individual measurements, respectively.
For the data presented here, δ S_⊙,Th lies in the range 0.02–0.06 SFU (Table <ref>).
The uncertainty in S_⊙ due to thermal noise is at most a few % and usually <1%.
Figure <ref> shows S_⊙, δ S_⊙,Th and δ S_⊙,Obs, the observed RMS on S_⊙, as a function of ν.
δ S_⊙,Obs exceeds δ S_⊙,Th by factors ranging from a few to almost two orders of magnitude, even during relatively quiet times.
This establishes that δ S_⊙,Obs is dominated by intrinsic changes in S_⊙ and demonstrates the sensitivity of these observations to low level changes in S_⊙.
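A sketch of Eq. <ref> in code (names ours; A_eff = λ^2/Ω_P as stated above, with Δν in Hz and Δt in s):

def delta_s_thermal_sfu(T_sys, wavelength, omega_p, dnu, dt):
    k_b, sfu = 1.380649e-23, 1.0e-22
    a_eff = wavelength**2 / omega_p
    return (2.0 * k_b / a_eff) * T_sys / (dnu * dt)**0.5 / sfu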
In addition to the random errors discussed above, estimates of S_⊙ will also suffer from systematic errors.
In fact, the uncertainty in the estimate of S_⊙, δ S_⊙, is expected to be dominated by these systematic errors.
Following the usual principles of propagation of error and assuming the different sources of errors to be independent and uncorrelated, we estimate δ S_⊙ considering the various known sources of error.
Rearranging Eq. <ref>, the primary measurable, T_⊙,P, can be expressed as:
T_⊙,P = r_N, ⊙(T_Sky + T_Rec + T_Pick-up) - T_b⃗, Sky/1 - r_N, ⊙,
and δT_⊙,P,Abs the absolute error in, T_⊙,P, is given by:
δT_⊙,P,Abs^2 =
δ r_N,⊙^2 (T_Sky+T_Rec+T_Pick-up-T_b⃗,Sky/(1-r_N,⊙)^2)^2
+ ( δT_Sky^2 + δ T_Rec^2 + δ T_Pick-up^2 ) ( r_N,⊙/1-r_N,⊙)^2
+ δ T_b⃗, Sky^2 ( 1/1-r_N,⊙)^2,
where the pre-fix δ indicates the error in that quantity.
δ S_⊙,Abs can be computed from δT_⊙,P,Abs using an equation similar to Eq. <ref>, though it is more convenient to discuss the different error contributions in temperature units.
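A direct transcription of Eq. <ref> (Python sketch; the d_* arguments are the uncertainties of the corresponding quantities, and all names are ours):

def delta_t_sun_abs(r_n, d_r_n, T_sky, d_T_sky, T_rec, d_T_rec,
                    T_pickup, d_T_pickup, T_b_sky, d_T_b_sky):
    one_minus = 1.0 - r_n
    v1 = (d_r_n * (T_sky + T_rec + T_pickup - T_b_sky) / one_minus**2)**2
    v2 = (d_T_sky**2 + d_T_rec**2 + d_T_pickup**2) * (r_n / one_minus)**2
    v3 = d_T_b_sky**2 / one_minus**2
    return (v1 + v2 + v3)**0.5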
For the MWA tiles used here, a generous estimate of δ T_Rec is about 30 K.
Due to effects like change in the dielectric constant of the ground with the level of moisture and uncertainties in modeling, δ T_Pick-up is expected to be about 50%.
Prior work has estimated the intrinsic error in sky brightness distribution at 408 MHz from <cit.> to be about 3% <cit.>.
Scaling T_Sky using a spectral index leaves the fractional error unchanged.
Averaging over a large solid angle patch, as is done here, is expected to reduce it to a lower level.
Due to antenna-to-antenna variations, arising from manufacturing tolerances and imperfections in instrumentation, and the tilt and rotation of the antenna beams due to the gradients in the terrain, the true P_N(s⃗) will differ at some level from the model P_N(s⃗) used here.
These errors in P_N(s⃗) become fractionally larger with increasing angular distance from the beam center.
Hence, they are less important for the location of the Sun which lies close to the beam center.
These variations have recently been studied in detail by <cit.>.
They estimate the net uncertainty due to all the causes considered to be of order ±10–20% (1 σ) near the edge of the main-lobe (∼20^∘ from the beam center) and in the side-lobe regions.
At such distances from the beam center, P_N(s⃗) drops by factors of many to an order of magnitude and can be expected to contribute an independent error of a few percent.
As the intrinsic uncertainties in the sky model and the P_N(s⃗) cannot be disentangled in our framework, we combine them both in the δT_Sky term and regard it to be about 5%.
Given that the geometry of the baseline is known to a high accuracy, δ T_b⃗, Sky is primarily due to δT_Sky, which is discussed above.
We assume it to scale similarly to δT_Sky and set it to 5%.
The δ S_⊙,Abs, based on the uncertainties discussed above is shown in Fig <ref>.
To obtain a realistic estimate, the observed value of δ r_N,⊙ from Table <ref> was used.
Including the known systematics pushes δ S_⊙,Abs to 10–60%, for most frequencies, and generally increases with frequency.
§ DISCUSSION
§.§ Flux estimates
This technique estimates S_⊙ to lie in the range ∼2–18 SFU in the 100–300 MHz band (Figs. <ref> and <ref>, and Tables <ref> and <ref>).
Being a non-imaging technique, S_⊙ measures the integrated emission from the entire solar disc.
For the values listed in Tables <ref> and <ref> we use a period which shows a comparatively low level of time variability, or a comparatively quiet time.
These values compare well with earlier measurements <cit.>.
The spectrum shown in Fig. 7 peaks at about 240 MHz, in good agreement with theoretical models for thermal solar emission <cit.>.
Independent solar flux estimates are available from the Radio Solar Telescope Network (RSTN) at a few fixed frequencies, one of which lies in the range covered here.
The 245 MHz flux reported by Learmonth station in Australia, in the same interval as presented in Table <ref>, is ∼18 SFU.
The nearest frequency in our data-set is 238 MHz, at which we estimate a flux density of 17.1±0.2 SFU.
Though our measurements are simultaneous, they are not at overlapping frequencies.
The period for this comparison was chosen carefully to avoid short-lived emission spikes which often do not have wide enough bandwidth to be seen across even nearby frequencies simultaneously.
A description of the analysis procedure followed at RSTN and the uncertainty associated with these measurements is not available in the literature, though the latter is expected to be about 2–3 SFU (private communication, Stephen White).
Given the associated uncertainties, these measurements are remarkably consistent.
§.§ Polarization
The bulk of the data in Fig. <ref> clearly follows the x=y line, as is expected for unpolarized thermal emission.
To minimize geometric polarization leakage, the observations presented here were centered at azimuth of 0^∘.
At the low temperature end, the distribution of data points around the x=y line shows a larger spread, which is reduced at higher temperatures.
The lower temperature measurements come from lower frequencies and this is a manifestation of fractionally larger δ S_⊙, Th, or poorer SNR, in this part of the band.
A gradual improvement in the tightness of the distribution along the x=y line is observed as the measured temperature increases and SNR improves with increasing frequency.
Table <ref> shows that data corresponding to quiescent emission lies in the range 50–500 K.
The data points lying at higher temperatures come from the type III-like event close to the start of the observing period and the numerous fibrils of emission seen at some of the higher frequencies (Fig. <ref>), which are not assured to be unpolarized.
The bulk of the points farther away from the x=y line lie beyond 500 K.
Even for the thermal part of the emission, the three highest frequencies show a systematic departure from the x=y line, which is yet to be understood.
Similar behavior is seen for other baselines which were studied.
§.§ Comparison across baselines
To build a quantitative sense for the uncertainty in the estimates of T_⊙, we examine the ratios of T_⊙,P measured on different baselines (Figs. <ref> and <ref>).
The histograms of this ratio are symmetric and Gaussian-like in appearance.
At the frequency band where the widest spread is seen (167 MHz), the medians of these histograms lie in the range 0.8–1.2.
For many of the frequencies this spread is larger than the expectations based on the widths of these Gaussians.
The ratios of baseline pairs exhibit smooth trends, as opposed to showing random fluctuations.
The observed spread is likely due to the systematic effects which give rise to antenna-to-antenna differences, and were not accounted for in the analysis.
Table <ref> lists the mean and rms values of various quantities of interest over all six baselines.
The RMS in the estimates of S_⊙ due to these systematic effects across many baselines (Table <ref>) is usually less than that due to the intrinsic variations in S_⊙ on a given baseline (Table <ref>).
It is usually a few percent or less and the largest value observed is about 8%.
§.§ Uncertainty analysis
An analysis of various known sources of systematic errors leads to an uncertainty in the absolute values of S_⊙ not exceeding ±60%, except at 240 MHz, where the value is larger due to the much larger variation in δ r_N, ⊙ (Fig. <ref>, Table <ref> and Sec. <ref>).
The observed baseline-to-baseline variation is significantly smaller than this uncertainty estimate (Fig. <ref>).
This suggests that at least some of the values for uncertainties in the individual parameters of the system considered in Sec. <ref> are over-estimates.
We also note that the uncertainty in relative values of S_⊙ from a given interferometric baseline, which is determined primarily by the δ S_⊙, Th, is much smaller than that in its absolute value.
These measurements, hence have the ability to quantify variations in the observed values of S_⊙, as small as about a percent.
§ CONCLUSIONS
We have demonstrated that this technique provides a convenient and robust approach for determining the solar flux using radio interferometric observations from a handful of suitable baselines.
It is much less intensive in terms of the data (<0.1% in the case of MWA), and the human and computational effort it requires, when compared to conventional interferometric analysis.
It provides flux estimates with fair absolute accuracy and can reliably measure relative changes of order a percent.
As it provides flux estimates at the native resolution of the data, this technique is equally applicable for quiet and active solar emissions.
Further, on assuming an angular size for the Sun, this approach can also provide the average brightness temperature of the Sun.
Good solar science requires monitoring-type observations, where a given instrument observes the Sun for the longest duration feasible every day of the year.
However, given the enormous rate at which data is generated by the new technology low radio frequency interferometers and the requirements of solar imaging to maintain high time and spectral resolution in the image domain, it is currently not possible for the conventional analysis methods to keep up with the rate of data generation.
Efficient techniques and algorithms need to be developed not only to image these data, but also to synthesize the information made available in the 4D image cubes in a meaningful humanly understandable form.
In the interim, algorithms like the one presented here enable some of the novel and interesting science made accessible by these data.
For wide-band low radio frequency observations, which are increasingly becoming more common, this technique will allow simultaneous characterization of the coronal emissions over a large range of coronal heights.
This technique can naturally be implemented for other existing and planned wide field-of-view instruments with similar levels of characterization, like LOFAR, LWA and SKA-Low.
Amenability of this technique to automation will enable studies involving large volumes of data, which will, in turn, open the doors to multiple novel investigations addressing fundamental questions ranging from variations in the solar flux as a function of solar cycle to quantifying the short-lived narrow-band weak emission features seen in the wideband low radio frequency solar data and exploring their role in coronal heating.
We acknowledge Randall Wayth and Budi Juswardy, both at Curtin University, Australia, for helpful discussions and providing estimates of T_Rec and δ T_Rec.
We also acknowledge helpful comments from Stephen White (Air Force Research Laboratory, Kirtland, NM, USA) and David Webb (Boston College, MA, USA) on an earlier version of the manuscript.
This scientific work makes use of the Murchison Radio-astronomy Observatory, operated by CSIRO. We acknowledge the Wajarri Yamatji people as the traditional owners of the Observatory site. Support for the operation of the MWA is provided by the Australian Government Department of Industry and Science and Department of Education (National Collaborative Research Infrastructure Strategy: NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. We acknowledge the iVEC Petabyte Data Store and the Initiative in Innovative Computing and the CUDA Center for Excellence sponsored by NVIDIA at Harvard University.
Facilities: Murchison Widefield Array.
[Bowman et al.(2013)Bowman, Cairns, Kaplan, Murphy,
Oberoi, Staveley-Smith, Arcus, Barnes, Bernardi, Briggs, Brown,
Bunton, Burgasser, Cappallo, Chatterjee, Corey, Coster,
Deshpande, deSouza, Emrich, Erickson, Goeke, Gaensler,
Greenhill, Harvey-Smith, Hazelton, Herne, Hewitt,
Johnston-Hollitt, Kasper, Kincaid, Koenig, Kratzenberg, Lonsdale,
Lynch, Matthews, McWhirter, Mitchell, Morales, Morgan, Ord,
Pathikulangara, Prabu, Remillard, Robishaw, Rogers, Roshi,
Salah, Sault, Shankar, Srivani, Stevens, Subrahmanyan, Tingay,
Wayth, Waterson, Webster, Whitney, Williams, Williams, &
Wyithe]Bowman2013-MWA-scienceBowman, J. D., Cairns, I., Kaplan, D. L., et al. 2013, PASA, 30, 31
[Chambe(1978)]Chambe1978
Chambe, G. 1978, , 70, 255–263
[Guzmán et al.(2011)Guzmán, May, Alvarez, &
Maeda]Guzman2011-Tsky-spectral-indexGuzmán, A. E., May, J., Alvarez, H., & Maeda, K. 2011, , 525,
A138
[Haslam et al.(1981)Haslam, Klein, Salter, Stoffel,
Wilson, Cleary, Cooke, & Thomasson]Haslam1981
Haslam, C. G. T., Klein, U., Salter, C. J., et al. 1981, , 100, 209
[Haslam et al.(1982)Haslam, Salter, Stoffel, &
Wilson]Haslam1982-408-map
Haslam, C. G. T., Salter, C. J., Stoffel, H., & Wilson, W. E. 1982,
, 47, 1
[Lantos, et al.(1992)Lantos, Alissandrakis, & Rigaud]Lantos1992
Lantos, P., Alissandrakis, C. E. & Rigaud, D. 1992, , 137, 225-256
[Lawson et al.(1987)Lawson, Mayer, Osborne, &
Parkinson]Lawson1987-GB-spectral-index
Lawson, K. D., Mayer, C. J., Osborne, J. L., & Parkinson, M. L. 1987,
, 225, 307
[Lonsdale et al.(2009)Lonsdale, Cappallo, Morales, Briggs,
Benkevitch, Bowman, Bunton, Burns, Corey, Desouza, Doeleman,
Derome, Deshpande, Gopala, Greenhill, Herne, Hewitt, Kamini,
Kasper, Kincaid, Kocz, Kowald, Kratzenberg, Kumar, Lynch,
Madhavi, Matejek, Mitchell, Morgan, Oberoi, Ord,
Pathikulangara, Prabu, Rogers, Roshi, Salah, Sault, Shankar,
Srivani, Stevens, Tingay, Vaccarella, Waterson, Wayth, Webster,
Whitney, Williams, & Williams]Lonsdale2009-MWA-design
Lonsdale, C. J., Cappallo, R. J., Morales, M. F., et al. 2009, IEEE
Proceedings, 97, 1497
[Martyn (1948) Martyn]Martyn1948
Martyn, D. F. 1948, Proc. of the Royal Society of London Series A, 193, 44–59
[McLean & Sheridan(1985)]McLean-Sheridan-1985
McLean, D. J., & Sheridan, K. V. 1985, The quiet sun at metre
wavelengths, ed. D. J. McLean & N. R. Labrum, 443–466
[Mercier & Chambe(2012)]Mercier2012
Mercier, C. & Chambe, G. 2012, , 540, A18
[Neben et al.(2016)Neben, Hewitt, Bradley, Dillon,
Bernardi, Bowman, Briggs, Cappallo, Corey, Deshpande, Goeke,
Greenhill, Hazelton, Johnston-Hollitt, Kaplan, Lonsdale,
McWhirter, Mitchell, Morales, Morgan, Oberoi, Ord, Prabu,
Udaya Shankar, Srivani, Subrahmanyan, Tingay, Wayth, Webster,
Williams, & Williams]Neben-2016-MWA_beams
Neben, A. R., Hewitt, J. N., Bradley, R. F., et al. 2016, , 820, 44
[Oberoi et al.(2011)Oberoi, Matthews, Cairns, Emrich,
Lobzin, Lonsdale, Morgan, Prabu, Vedantham, Wayth, Williams,
Williams, White, Allen, Arcus, Barnes, Benkevitch, Bernardi,
Bowman, Briggs, Bunton, Burns, Cappallo, Clark, Corey,
Dawson, DeBoer, De Gans, deSouza, Derome, Edgar, Elton,
Goeke, Gopalakrishna, Greenhill, Hazelton, Herne, Hewitt,
Kamini, Kaplan, Kasper, Kennedy, Kincaid, Kocz, Koeing,
Kowald, Lynch, Madhavi, McWhirter, Mitchell, Morales, Ng,
Ord, Pathikulangara, Rogers, Roshi, Salah, Sault, Schinckel,
Udaya Shankar, Srivani, Stevens, Subrahmanyan, Thakkar, Tingay,
Tuthill, Vaccarella, Waterson, Webster, & Whitney]Oberoi2011
Oberoi, D., Matthews, L. D., Cairns, I. H., et al. 2011, , 728,
L27
[Rogers & Bowman(2008)]Rogers2008-Tsky-spectral-index
Rogers, A. E. E. & Bowman, J. D. 2008, , 136, 641
[Rogers et al.(2004)Rogers, Pratap, Kratzenberg, &
Diaz]Rogers2004-cal-using-Gbg
Rogers, A. E. E., Pratap, P., Kratzenberg, E., & Diaz, M. A. 2004,
Radio Science, 39, 2023
[Smerd (1950) Smerd]Smerd1950
Smerd, S. F. 1950, Aus. J. of Sci. Res. A Phy. Sci., 3, 34
[Taylor et al.(1999)Taylor, Carilli, &
Perley]Synthesis-Imaging-1999
Taylor, G. B., Carilli, C. L., & Perley, R. A., eds. 1999, Astronomical
Society of the Pacific Conference Series, Vol. 180, Synthesis Imaging in
Radio Astronomy II
[Tingay et al.(2013)Tingay, Goeke, Bowman, Emrich, Ord,
Mitchell, Morales, Booler, Crosse, Wayth, Lonsdale, Tremblay,
Pallot, Colegate, Wicenec, Kudryavtseva, Arcus, Barnes,
Bernardi, Briggs, Burns, Bunton, Cappallo, Corey, Deshpande,
Desouza, Gaensler, Greenhill, Hall, Hazelton, Herne, Hewitt,
Johnston-Hollitt, Kaplan, Kasper, Kincaid, Koenig, Kratzenberg,
Lynch, Mckinley, Mcwhirter, Morgan, Oberoi, Pathikulangara,
Prabu, Remillard, Rogers, Roshi, Salah, Sault, Udaya-Shankar,
Schlagenhaufer, Srivani, Stevens, Subrahmanyan, Waterson,
Webster, Whitney, Williams, Williams, &
Wyithe]Tingay2013-MWA-design
Tingay, S. J., Goeke, R., Bowman, J. D., et al. 2013, PASA, 30, 7
|
http://arxiv.org/abs/1701.08003v1 | 20170127104218 | On convergence criteria for incompressible Navier-Stokes equations with Navier boundary conditions and physical slip rates | [
"Yasunori Maekawa",
"Matthew Paddick"
] | math.AP | [
"math.AP",
"math-ph",
"math.MP"
] |
On convergence criteria for incompressible Navier-Stokes equations with Navier boundary conditions and physical slip rates
Yasunori Maekawa
Department of Mathematics, Graduate School of Science, Kyoto University
[email protected]
Matthew Paddick
Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions
[email protected]
We prove some criteria for the convergence of weak solutions of the 2D incompressible Navier-Stokes equations with Navier slip boundary conditions to a strong solution of incompressible Euler.
The slip rate depends on a power of the Reynolds number, and it is increasingly apparent that the power 1 may be critical for L^2 convergence, as hinted at in <cit.>.
§ THE INVISCID LIMIT PROBLEM WITH NAVIER-SLIP BOUNDARY CONDITIONS
In this brief note, we shed some light on how some well-known criteria for L^2 convergence in the inviscid limit for incompressible fluids work when the boundary condition is changed.
We consider the two-dimensional Navier-Stokes equation on the half-plane Ω = {(x,y)∈ℝ^2 | y>0},
{[ ∂_t u^ + u^·∇ u^ - Δ u^ + ∇ p^ = 0; ÷ u^ = 0; u^|_t=0 = u^_0, ].
and study the inviscid limit problem. This involves taking → 0, and the question of whether the solutions of (<ref>) converge towards a solution of the formal limit, the Euler equation,
{[ ∂_t v + v·∇ v + ∇ q = 0; ÷ v = 0; v|_t=0 = v_0, ].
in presence of a boundary is one of the most challenging in fluid dynamics. This is because the boundary conditions required for (<ref>) are different to those for (<ref>).
In the inviscid model, there only remains the non-penetration condition
v· n|_y=0 = v_2|_y=0 = 0,
hence inviscid fluids are allowed to slip freely along the boundary, while viscous fluids adhere to it when the most commonly used boundary condition, homogeneous Dirichlet,
u^|_y=0 = 0,
is used. As goes to zero, solutions of the Navier-Stokes equation are expected to satisfy the following ansatz,
u^(t,x,y) = v(t,x,y) + V^(t,x,y/√()),
where V^ is a boundary layer, such that V^(t,x,0) = -v(t,x,0).
However, the validity of such an expansion is hard to prove, and, in some cases, such as when v is a linearly unstable 1D shear flow, it is wrong in the Sobolev space H^1, as shown by E. Grenier <cit.>.
General validity results require considerable regularity on the data. M. Sammartino and R. Caflisch proved the stability of Prandtl boundary layers in the analytic case <cit.>, and the first author <cit.> proved it in the case when the initial Euler vorticity is located away from the boundary.
Recently, this has been extended to the Gevrey framework by the first author in collaboration with D. Gérard-Varet and N. Masmoudi <cit.>.
Precisely, in <cit.> a Gevrey stability of shear boundary layer is proved when the shear boundary layer profile satisfies some monotonicity and concavity conditions. One of the main objectives there is the system
{[ ∂_t v^ - Δ v^ + V^∂_x v^ + v^_2 ∂_y V^ e_1 + ∇ p^ = -v^·∇ v^ ,; ÷ v^ = 0 ,; v^|_y=0 =0 , v^|_t=0 = v^_0 . ].
Here V^ (y) = U^E (y) -U^E(0) + U (y/√()), and (U^E,0) describes the outer shear flow and U is a given boundary layer profile of shear type.
In <cit.> the data is assumed to be periodic in x, and the following Gevrey class is introduced:
X_γ,K = { f∈ L^2_σ (𝕋×ℝ_+) |
‖f‖_X_γ,K = sup_n∈ℤ (1+|n|)^10 e^K|n|^γ ‖f̂ (n,·)‖_L^2_y (ℝ_+)<∞} .
Here K>0, γ≥ 0, and f̂(n,y) is the nth Fourier mode of f (·,y).
The key concavity condition on U and the regularity conditions on U^E and U are stated as follows:
(A1) U^E, U∈ BC^2(ℝ_+), and ∑_k=0,1,2sup_Y≥ 0 (1+Y^k) | ∂_Y^k U (Y)| <∞.
(A2) ∂_Y U>0 for Y≥ 0, U(0)=0, and lim_Y→∞ U(Y) = U^E(0).
(A3) There is M>0 such that -M∂_Y^2 U≥ (∂_Y U)^2 for Y≥ 0.
[<cit.>] Assume that (A1)-(A3) hold. Let K>0, γ∈ [2/3,1]. Then there exist C, T, K', N>0 such that for all small  and v_0^∈ X_γ,K with ‖v_0^‖_X_γ,K≤^N, the system (<ref>) admits a unique solution v^∈ C([0,T]; L^2_σ (𝕋×ℝ_+)) satisfying the estimate
sup_0≤ t≤ T ( ‖v^ (t)‖_X_γ,K' + ( t)^1/4 ‖v^ (t)‖_L^∞ + ( t)^1/2 ‖∇ v^ (t)‖_L^2 ) ≤ C ‖v_0^‖_X_γ,K .
In Theorem <ref> the condition γ≥2/3 is optimal at least at the linear level, due to the Tollmien-Schlichting instability; see Grenier, Guo, and Nguyen <cit.>.
The situation remains delicate when the Dirichlet boundary condition (<ref>) is replaced by (<ref>) plus a mixed boundary condition such as the Navier friction boundary condition,
∂_y u^_1|_y=0 = a^ u^_1|_y=0 .
This was derived by H. Navier in the XIXth century <cit.> by taking into account the molecular interactions with the boundary.
To be precise, the Navier condition expresses proportionality between the tangential part of the normal stress tensor and the tangential velocity, thus prescribing how the fluid may slip along the boundary.
As indicated, the coefficient a^ may depend on the viscosity. Typically, we will look at
a^ = a/^β,
with a>0 and β≥ 0. A previous paper by the second author <cit.> showed that nonlinear instability remains present for this type of boundary condition, in particular for the case of boundary-layer-scale data,
β=1/2, where there is strong nonlinear instability in L^∞ in the inviscid limit. However, the same article also showed general convergence in L^2 when β<1.
[Theorem 1.2 in <cit.>]
Let u^_0∈ L^2(Ω) and u^ be the Leray solution of (<ref>) with initial data u^_0, satisfying the Navier boundary conditions (<ref>) and (<ref>), with a^ as in (<ref>) with β<1.
Let v_0∈ H^s(Ω) with s>2, so that v is a global strong solution of the Euler equation (<ref>)-(<ref>), and assume that u^_0 converges to v_0 in L^2(Ω) as →0.
Then, for any T>0, we have the following convergence result:
sup_t∈[0,T] ‖u^(t)-v(t)‖_L^2(Ω) = O(^(1-β)/2).
This theorem is proved using elementary energy estimates and Grönwall's lemma, and it extended results by D. Iftimie and G. Planas <cit.>, and X-P. Wang, Y-G. Wang and Z. Xin <cit.>.
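For orientation, here is a schematic of that energy argument (a sketch only, with constants and the boundary bookkeeping suppressed; the precise estimate is in <cit.>): setting w = u^ - v, subtracting (<ref>) from (<ref>), and testing with w leaves

1/2 d/dt ‖w‖_L^2^2 ≤ ‖∇ v‖_L^∞ ‖w‖_L^2^2 + C ^1-β,

once the boundary term is controlled through (<ref>) and (<ref>), and Grönwall's lemma then gives ‖w(t)‖_L^2^2 ≲ e^Ct (‖w(0)‖_L^2^2 + ^1-β), i.e. the O(^(1-β)/2) rate.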
It is worth noting, on one hand, that convergence breaks down for β=1, and on the other, that a comparable result is impossible to achieve in the no-slip case, since the boundary term ∫_∂Ω ∂_y u^_1 v_1 dx cannot be dealt with.
The first remark is important since β=1 is what we call the “physical” case, because this was the dependence on the viscosity predicted by Navier in <cit.>, and because it is indeed
the Navier condition that one obtains when deriving fluid equations from kinetic models with a certain scaling (see <cit.> for the Stokes-Fourier system; recently, <cit.> extended the result to Navier-Stokes-Fourier).
One purpose of this work is therefore to further explore whether or not β=1 is effectively critical for convergence.
By using the L^2 convergence rate and interpolation, we can obtain a range of numbers p for which convergence in L^p(Ω) occurs depending on β, which also breaks down when β=1. The following extends Theorem <ref>.
Let u^_0∈ L^2(Ω) and u^ be the Leray solution of (<ref>) with initial data u^_0, satisfying the Navier boundary conditions (<ref>) and (<ref>), with a^ as in (<ref>) with β<1.
Let v_0∈ H^s(Ω) with s>2, so that v is a global strong solution of the Euler equation (<ref>)-(<ref>), and assume that u^_0 converges to v_0 in L^2(Ω) as →0.
Then, for any T>0, we have the following convergence result:
lim_→ 0 sup_t∈[0,T] ‖u^(t)-v(t)‖_L^p(Ω) = 0 if 2≤ p < 2(1+3β)/(5β-1).
The convergence rate is ^{(1-β)/2 - (p-2)(1+3β)/(4p)}.
On the second remark, relating to the Dirichlet case, even though no general result like Theorem <ref> is known, there are necessary and sufficient criteria for L^2 convergence. We sum two of these up in the following statement.
Let u^_0∈ L^2(Ω) and u^ be the Leray solution of (<ref>) with initial data u^_0, satisfying the Dirichlet boundary condition (<ref>).
Let v_0∈ H^s(Ω) with s>2, so that v is a global strong solution of the Euler equation (<ref>)-(<ref>), and assume that u^_0 converges to v_0 in L^2(Ω) as →0.
Then, for any T>0, the following propositions are equivalent:
* lim_→ 0 sup_t∈[0,T] ‖u^(t)-v(t)‖_L^2(Ω) = 0;
* lim_→ 0 √() ∫_0^T ‖∂_y u_1^(t)‖_L^2(Γ_κ) dt = 0, where Γ_κ={(x,y)∈Ω | y<κ} for κ smaller than some κ_0≤ 1 (a variant of T. Kato <cit.>);
* lim_→ 0  ∫_0^T ∫_∂Ω (v_1 ∂_y u^_1)|_y=0 dx dt = 0 (S. Matsui <cit.>, Theorem 3).
Regarding the key statement b. in Theorem <ref>, the original condition found by Kato <cit.> was
lim_→0  ∫_0^T ‖∇ u^ (t)‖_L^2 (Γ_κ)^2 d t =0.
This criterion has been refined by several authors: R. Temam and X. Wang <cit.>, X. Wang <cit.>, J. P. Kelliher <cit.>, and P. Constantin, I. Kukavica, and V. Vicol <cit.>. In fact, the argument of <cit.> provides the inequality
lim sup_→ 0 sup_t∈ [0,T] ‖u^ (t) - v(t)‖_L^2(Ω)^2
≤ C e^{2 ∫_0^T ‖∇ v‖_L^∞ (Ω) d t} lim sup_→ 0  | ∫_0^T ⟨∂_y u_1^, rot Ṽ^κ⟩_L^2(Ω) d t |.
Here C is a numerical constant and Ṽ^κ (t,x,y) = Ṽ (t,x,y/κ), with a sufficiently small κ∈ (0,1], is the boundary layer corrector used in <cit.>.
Indeed, Kato's result relied on the construction of a boundary layer at a different scale than in the ansatz presented earlier. It involved an expansion like this,
u^ε(t,x,y) = v(t,x,y) + Ṽ(t,x,y/(κε)),
thus convergence in the Dirichlet case is governed by the vorticity's behaviour in a much thinner layer than the physical boundary layer.
The direction from b. to a. follows from (<ref>).
Meanwhile, Matsui's result is proved using the energy estimates.
We will show that Theorem <ref> extends `as is' to the Navier boundary condition case.
Let u^_0∈ L^2(Ω) and u^ be the Leray solution of (<ref>) with initial data u^_0, satisfying the Navier boundary conditions (<ref>) and (<ref>) with a^≥ 0.
Let v_0∈ H^s(Ω) with s>2, so that v is a global strong solution of the Euler equation (<ref>)-(<ref>), and assume that u^_0 converges to v_0 in L^2(Ω) as →0.
Then, for any T>0, convergence in L^∞(0,T;L^2(Ω)) as in Theorem <ref> is equivalent to the same Kato and Matsui criteria in the sense as in Theorem <ref>.
Indeed, we will show that (<ref>) is valid also for the case of Navier boundary conditions (<ref>) and (<ref>).
Note that the right-hand side of (<ref>) is bounded from above by
C e^{2∫_0^T ‖∇v‖_L^∞(Ω) dt} lim sup_{ε→0} κ^{-1/2} ε^{1/2} ∫_0^T ‖∂_y u_1^ε‖_L^2(Ω) dt
≤ C e^{2∫_0^T ‖∇v‖_L^∞(Ω) dt} κ^{-1/2} lim sup_{ε→0} ‖u^ε_0‖_L^2(Ω) T^{1/2}.
As a direct consequence, we have
Under the assumptions of Theorem <ref> or <ref>, we have
lim sup_{ε→0} sup_{t∈[0,T]} ‖u^ε(t) - v(t)‖_L^2(Ω) ≤ C e^{∫_0^T ‖∇v‖_L^∞(Ω) dt} ‖v_0‖_L^2(Ω)^{1/2} T^{1/4},
for some numerical constant C.
Estimate (<ref>) shows that the permutation of limits
lim_{T→0} lim_{ε→0} sup_{t∈[0,T]} ‖u^ε(t) - v(t)‖_L^2(Ω) = lim_{ε→0} lim_{T→0} sup_{t∈[0,T]} ‖u^ε(t) - v(t)‖_L^2(Ω)
is justified, and that this limit is zero, which is nontrivial since ε→0 is a singular limit.
In particular, at least for a short time period, independent of ε, the bulk of the energy of u^ε(t) is given by the Euler flow v(t).
Initially, we hoped to get a result with a correcting layer which could be more tailor-made to fit the boundary condition, but it appears that Kato's Dirichlet corrector yields the strongest statement.
Whenever we change the ε-scale layer's behaviour at the boundary, we end up having to assume both Kato's criterion and another condition at the boundary.
So this result is actually proved identically to Kato's original theorem, and we will explain why in section 3. We will also see that Matsui's criterion extends with no difficulty, but it has more readily available implications.
Indeed, the Navier boundary condition gives information on the value of ∂_y u^ε_1 at the boundary. Assuming that a^ε = a ε^{-β} as in (<ref>), we see that
ε (v_1 ∂_y u^ε_1)|_{y=0} = a ε^{1-β} (v_1 u^ε_1)|_{y=0}.
Simply applying the Cauchy-Schwarz inequality to the integral in the Matsui criterion and using the energy inequality of the Euler equation, we have
ε ∫_0^T ∫_∂Ω (v_1 ∂_y u^ε_1)|_{y=0} dx dt ≤ a ε^{1-β} ‖v_0‖_L^2(Ω) ∫_0^T ‖u^ε_1(t)|_{y=0}‖_L^2(∂Ω) dt.
As the energy inequality for Leray solutions of the Navier-Stokes equation with the Navier boundary condition shows that
a ε^{1-β} ∫_0^T ‖u^ε_1(t)‖_L^2(∂Ω)^2 dt ≤ ‖u^ε(0)‖_L^2(Ω)^2,
the right-hand side of (<ref>) behaves like C ε^{(1-β)/2}, and thus converges to zero when β<1. The Matsui criterion therefore confirms Theorem <ref>, without being able to extend it to the physical case.
Once again, the physical slip rate appears to be critical.
§ PROOF OF L^P CONVERGENCE
To prove Theorem <ref>, we rely on a priori estimates in L^∞ and interpolation.
First, since the vorticity, ω^ε = ∂_x u^ε_2 - ∂_y u^ε_1, satisfies a parabolic transport-diffusion equation, the maximum principle shows that
‖ω^ε‖_L^∞((0,T)×Ω) ≤ max ( ‖ω^ε|_{t=0}‖_L^∞(Ω) , a ε^{-β} ‖u^ε_1|_{y=0}‖_L^∞((0,T)×∂Ω) )
by the Navier boundary condition (<ref>) and (<ref>). To estimate u^ε_1 on the boundary, we use the Biot-Savart law:
u^ε_1(t,x,0) = (1/2π) ∫_Ω y'/(|x-x'|^2+|y'|^2) ω^ε(t,x',y') dx'dy'.
Let us denote κ(x,x',y') the kernel in this formula. We split the integral on y' into two parts, ∫_0^K and ∫_K^{+∞}, with K to be chosen. On one hand, we have
| ∫_0^K ∫_ℝ y'/(|x-x'|^2+|y'|^2) ω^ε(t,x',y') dx'dy' | ≤ C_0 K ‖ω^ε‖_L^∞((0,T)×Ω)
by integrating in the variable x' first and recognising the derivative of the arctangent function.
On the other, we integrate by parts, integrating the vorticity ω^, so
∫_K^+∞∫_ℝκ(x,x',y') ω^(t,x',y') dx'dy'
= -∫_K^+∞∫_ℝ u^·∇_x',y'^κ dx'dy' + ∫_ℝ (κ u^_1)|_y'=K dx'.
The first two terms are easily controlled using the Cauchy-Schwarz inequality: ‖u^ε(t)‖_L^2 is uniformly bounded by the energy estimate for weak solutions of Navier-Stokes,
while quick explicit computations show that ‖∇_{x',y'} κ‖_L^2 ≤ C/K. Likewise, in the boundary term, the kernel is also O(1/K) in L^2(ℝ),
but we must now control the L^2 norm of the trace of u^ε_1 on the set {y'=K}: by the trace theorem and interpolation, we have
‖u^ε_1‖_L^2({y'=K}) ≤ √( ‖u^ε_1‖_L^2(Ω) ‖ω^ε‖_L^2(Ω) ),
and both of these are uniformly bounded. Hence, in total,
‖ω^ε‖_L^∞((0,T)×Ω) ≤ ‖ω^ε(0)‖_L^∞(Ω) + a ε^{-β} C_0 K ‖ω^ε‖_L^∞((0,T)×Ω) + a ε^{-β} C/K.
By choosing K ∼ ε^β so that a ε^{-β} C_0 K < 1/2, we can move the second term on the right-hand side to the left, and we conclude that, essentially,
‖ω^ε‖_L^∞((0,T)×Ω) ≤ C ε^{-2β}.
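To record the elementary balance behind this choice (a sketch, with generic constants): taking K = ε^β/(2aC_0) gives a ε^{-β} C_0 K = 1/2, so the self-referential bound above becomes
(1/2) ‖ω^ε‖_L^∞((0,T)×Ω) ≤ ‖ω^ε(0)‖_L^∞(Ω) + 2a^2 C C_0 ε^{-2β},
and since the initial vorticity is independent of ε, the right-hand side is indeed O(ε^{-2β}).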
Using the Gagliardo-Nirenberg interpolation inequality from <cit.>, we can now write that, for p≥ 2,
‖u^ε(t) - v(t)‖_L^p(Ω) ≤ C ‖u^ε(t) - v(t)‖_L^2^{1-q} ‖∇(u^ε - v)(t)‖_L^∞^q,
where q = (p-2)/(2p). By Theorem <ref>, the first term of this product converges to zero with a rate ε^{(1-q)(1-β)/2} when β<1, while we have just shown that the second behaves like ε^{-2qβ}, so the bound is
‖u^ε(t) - v(t)‖_L^p(Ω) ≤ C ε^{(1-β)/2 - q(1+3β)/2} .
It remains to translate this into a range of numbers p such that this quantity converges, which happens when q < (1-β)/(1+3β). Recalling the value of q, we get that weak solutions of the Navier-Stokes equation
converge in L^p towards a strong solution of the Euler equation if
2 ≤ p < 2(1+3β)/(5β-1),
and the right-hand bound is equal to 2 when β=1.
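For completeness, the endpoint algebra (an elementary verification): with q = (p-2)/(2p), the condition q < (1-β)/(1+3β) is equivalent to
(p-2)(1+3β) < 2p(1-β) ⟺ p(1+3β) - 2(1+3β) < 2p(1-β) ⟺ p(5β-1) < 2(1+3β),
that is, p < 2(1+3β)/(5β-1) when β > 1/5 (and no upper restriction when β ≤ 1/5); as β → 1 this threshold decreases to 2, so only the L^2 convergence survives at the physical slip rate.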
§ ABOUT THE KATO AND MATSUI CRITERIA
The starting point for both criteria is the weak formulation for solutions of the Navier-Stokes equation.
If E is a function space on Ω, we denote E_σ the set of 2D vector-valued functions in E that are divergence free and tangent to the boundary.
Recall that, throughout the rest of the paper, a^ε is a non-negative function of ε>0 (not necessarily of the same form as in (<ref>)).
A vector field u^: [0,T]×Ω→ℝ^2 is a Leray solution of the Navier-Stokes equation (<ref>) with Navier boundary conditions (<ref>)-(<ref>) if:
* u^∈ C_w([0,T],L^2_σ) ∩ L^2([0,T],H^1_σ) for every T>0,
* for every φ∈ H^1([0,T],H^1_σ), we have
⟨u^ε(T), φ(T)⟩_L^2(Ω) - ∫_0^T ⟨u^ε, ∂_t φ⟩_L^2(Ω) + a^ε ∫_0^T ∫_∂Ω (u^ε_1 φ_1)|_{y=0}
+ ε ∫_0^T ⟨ω^ε, rot φ⟩_L^2(Ω) - ∫_0^T ⟨u^ε ⊗ u^ε, ∇φ⟩_L^2(Ω) = ⟨u^ε(0), φ(0)⟩_L^2(Ω),
* and, for every t≥ 0, u^ε satisfies the following energy equality (in 3D, this is an inequality):
(1/2) ‖u^ε(t)‖_L^2(Ω)^2 + a^ε ∫_0^t ∫_∂Ω (|u^ε_1|^2)|_{y=0} + ε ∫_0^t ‖ω^ε‖_L^2(Ω)^2 = (1/2) ‖u^ε(0)‖_L^2(Ω)^2 .
When formally establishing the weak formulation (<ref>), recall that
-∫_Ω Δu^ε · φ = ∫_Ω (ω^ε rot φ - ∇ div u^ε · φ) + ∫_∂Ω (ω^ε φ · n^⊥)|_{y=0},
where n^⊥ = (n_2, -n_1) is orthogonal to the normal vector n. In the flat boundary case with condition (<ref>) on the boundary, we get the third term of (<ref>).
The differences with the Dirichlet case are two-fold: first, the class of test functions is wider (in the Dirichlet case, the test functions must vanish on the boundary), and second, there is a boundary
integral in (<ref>) and (<ref>) due to u^_1 not vanishing there.
We will not go into great detail for the proof of Theorem <ref>, since it is virtually identical to Theorem <ref>. In particular, Matsui's criterion is shown with no difficulty,
as only the boundary term in (<ref>), with φ=u^-v, is added in the estimates, and this is controlled as a part of the integral I_3 in equality (4.2) in <cit.>, page 167.
This proves the equivalence a.⇔c.
We take more time to show the equivalence a.⇔b., Kato's criterion. In <cit.>, Kato constructed a divergence-free corrector Ṽ^κ, acting at a range O(ε)
of the boundary and such that v|_{y=0} = Ṽ^κ|_{y=0}, and used φ = v - Ṽ^κ as a test function in (<ref>) to get the desired result. We re-run this procedure, which finally leads to the identity
⟨u^ε(t), v(t) - Ṽ^κ(t)⟩_L^2(Ω)
= ⟨u^ε_0, v_0⟩_L^2(Ω) - ⟨u^ε_0, Ṽ^κ(0)⟩_L^2(Ω) - ∫_0^t ⟨u^ε, ∂_t Ṽ^κ⟩_L^2(Ω)
+ ∫_0^t ⟨u^ε - v, (u^ε - v)·∇v⟩_L^2(Ω) - ε ∫_0^t ⟨ω^ε, rot v⟩_L^2(Ω)
+ ε ∫_0^t ⟨ω^ε, rot Ṽ^κ⟩_L^2(Ω) - ∫_0^t ⟨u^ε ⊗ u^ε, ∇Ṽ^κ⟩_L^2(Ω).
In deriving this identity, one has to use the Euler equations which v satisfies and also ⟨ v, (u^ - v) ·∇ v⟩ _L^2(Ω) =0. On the other hand, we have from (<ref>),
‖u^ε(t) - v(t)‖_L^2(Ω)^2 = ‖u^ε(t)‖_L^2(Ω)^2 + ‖v(t)‖_L^2(Ω)^2 - 2⟨u^ε(t), v(t) - Ṽ^κ(t)⟩_L^2(Ω)
- 2⟨u^ε(t), Ṽ^κ(t)⟩_L^2(Ω)
= -2a^ε ∫_0^t ‖u^ε_1‖_L^2(∂Ω)^2 - 2ε ∫_0^t ‖ω^ε‖_L^2(Ω)^2
+ ‖u^ε_0‖_L^2(Ω)^2 + ‖v_0‖_L^2(Ω)^2 - 2⟨u^ε(t), Ṽ^κ(t)⟩_L^2(Ω)
- 2⟨u^ε(t), v(t) - Ṽ^κ(t)⟩_L^2(Ω)
Combining (<ref>) with (<ref>), we arrive at the identity which was essentially reached by Kato in <cit.> for the no-slip case:
‖u^ε(t) - v(t)‖_L^2(Ω)^2
= -2a^ε ∫_0^t ‖u^ε_1‖_L^2(∂Ω)^2 - 2ε ∫_0^t ‖ω^ε‖_L^2(Ω)^2 + ‖u^ε_0 - v_0‖_L^2(Ω)^2
- 2⟨u^ε(t), Ṽ^κ(t)⟩_L^2(Ω) + 2⟨u^ε_0, Ṽ^κ(0)⟩_L^2(Ω)
+ 2∫_0^t ⟨u^ε, ∂_t Ṽ^κ⟩_L^2(Ω) + 2ε ∫_0^t ⟨ω^ε, rot v⟩_L^2(Ω)
- 2∫_0^t ⟨u^ε - v, (u^ε - v)·∇v⟩_L^2(Ω)
+ 2∫_0^t ⟨u^ε ⊗ u^ε, ∇Ṽ^κ⟩_L^2(Ω) - 2ε ∫_0^t ⟨ω^ε, rot Ṽ^κ⟩_L^2(Ω).
Let us run down the terms in this equality. The first line is comprised of negative terms and the initial difference, which is assumed to converge to zero.
The terms on the second and third lines of (<ref>) tend to zero as ε→0 with the order 𝒪((κε)^{1/2}), since the boundary corrector has thickness 𝒪(κε).
Meanwhile, on the fourth line, we have
-2∫_0^t ⟨u^ε - v, (u^ε - v)·∇v⟩_L^2(Ω) ≤ 2∫_0^t ‖∇v‖_L^∞ ‖u^ε - v‖_L^2(Ω)^2,
which will be harmless when we apply the Grönwall inequality later. For the Navier-slip condition case, a little adaptation is necessary to control the fifth line,
I := ∫_0^t ⟨u^ε ⊗ u^ε, ∇Ṽ^κ⟩_L^2(Ω) .
In the Dirichlet case, the nonlinear integral I is bounded by using the Hardy inequality, since u^ vanishes on the boundary. In our case with the Navier condition, however, u^_1 does not vanish, so we need to explain this part.
Let us first manage the terms in I which involve u^ε_2, which does vanish on the boundary. Recall that Ṽ^κ has the form Ṽ(t,x,y/(κε)) and is supported in Γ_κ = {(x,y)∈Ω | 0<y<κε}, so we write
| ∫_Ω (u^ε_2)^2 ∂_y Ṽ^κ_2 | = | ∫_Γ_κ (u^ε_2/y)^2 y^2 ∂_y Ṽ^κ_2 | ≤ C ‖y^2 ∂_y Ṽ^κ_2‖_L^∞ ‖∇u^ε_2‖_L^2(Ω)^2,
in which we have used the Hardy inequality. Note that ∂_y Ṽ^κ is of order (κε)^{-1}, so y^2 ∂_y Ṽ^κ_2 is bounded by Cκε in L^∞(Γ_κ), and we conclude that
| ∫_Ω (u^ε_2)^2 ∂_y Ṽ^κ_2 | ≤ Cκε ‖∇u^ε‖_L^2(Ω)^2.
Here C is a numerical constant.
This is what happens on all terms in <cit.>, and the same trick works for ∫_Ω u^ε_1 u^ε_2 ∂_x Ṽ^κ_2;
this term is in fact better, since the x-derivatives do not make us lose uniformity in ε. Using the fact that ‖u^ε‖_L^2 is bounded courtesy of the energy estimate (<ref>), we have
| ∫_Ω u^ε_1 u^ε_2 ∂_x Ṽ^κ_2 | ≤ Cκε ‖u^ε‖_L^2(Ω) ‖∇u^ε‖_L^2(Ω).
The term ∫_Ω u^ε_1 u^ε_2 ∂_y Ṽ^κ_1 is trickier, since the y-derivative is bad for uniformity in ε, and we only have one occurrence of u^ε_2 to compensate for it.
Let us integrate this by parts: using the divergence-free nature of u^ε, we quickly get
∫_Ω u^ε_1 u^ε_2 ∂_y Ṽ^κ_1 = ∫_Ω u^ε_1 ∂_x u^ε_1 Ṽ^κ_1 - ∫_Ω ∂_y u^ε_1 u^ε_2 Ṽ^κ_1
= -(1/2) ∫_Ω (u^ε_1)^2 ∂_x Ṽ^κ_1 - ∫_Ω ∂_y u^ε_1 u^ε_2 Ṽ^κ_1.
The second term can be dealt with using the Hardy inequality as above, and its estimate is identical to (<ref>). The first term, meanwhile, is the same as the remaining one in I.
To handle ∫_Ω (u^ε_1)^2 ∂_x Ṽ^κ_1, in which no factor vanishes on the boundary, we proceed using the Sobolev embedding and interpolation. Indeed, we have
| ∫_Ω (u^ε_1)^2 ∂_x Ṽ^κ_1 | ≤ 2 ‖(u^ε_1 - v_1)^2‖_L^2(Ω) ‖∂_x Ṽ^κ_1‖_L^2(Ω) + 2 ‖v_1^2‖_L^2(Γ_κ) ‖∂_x Ṽ^κ_1‖_L^2(Ω)
≤ C (κε)^{1/2} ‖u^ε_1 - v_1‖_L^4(Ω)^2 + C κε ‖v‖_L^∞(Ω)^2.
Here we have used that ‖∂_x Ṽ^κ_1‖_L^2(Ω) ≤ C(κε)^{1/2}, while
‖u^ε_1 - v_1‖_L^4(Ω)^2 ≤ C ‖u^ε_1 - v_1‖_L^2(Ω) ‖u^ε_1 - v_1‖_H^1(Ω),
and so, in total, we conclude that
|I| ≤ C ( κε ‖∇u^ε‖_L^2(Ω)^2 + κε ‖u^ε‖_L^2(Ω) ‖∇u^ε‖_L^2(Ω)
+ ‖u^ε - v‖_L^2(Ω)^2 + (κε)^{1/2} ‖∇v‖_L^2(Ω) ‖u^ε - v‖_L^2(Ω) + κε ‖v‖_L^∞(Ω)^2 ).
Here C is a numerical constant.
Then, by virtue of the identity ‖ω^ε‖_L^2(Ω) = ‖∇u^ε‖_L^2(Ω),
the term Cκε ‖∇u^ε‖_L^2(Ω)^2 in the right-hand side of (<ref>) can be absorbed by the dissipation in the first line of (<ref>) if κ>0 is sufficiently small.
We come to the final linear term -2ε ∫_0^t ⟨ω^ε, rot Ṽ^κ⟩_L^2(Ω) in the fifth line of (<ref>). Using ω^ε = ∂_x u^ε_2 - ∂_y u^ε_1, we have from the integration by parts,
-2ε ∫_0^t ⟨ω^ε, rot Ṽ^κ⟩_L^2(Ω)
= 2ε ∫_0^t ⟨∂_y u^ε_1, rot Ṽ^κ⟩_L^2(Ω) + 2ε ∫_0^t ⟨u^ε_2/y, y ∂_x rot Ṽ^κ⟩_L^2(Ω)
≤ 2ε ∫_0^t ⟨∂_y u^ε_1, rot Ṽ^κ⟩_L^2(Ω) + C κ^{1/2} ε^{3/2} ∫_0^t ‖∇u^ε_2‖_L^2(Ω).
Collecting all these estimates, we get from (<ref>) that for 0<t≤T,
‖u^ε(t) - v(t)‖_L^2(Ω)^2
≤ -2a^ε ∫_0^t ‖u^ε_1‖_L^2(∂Ω)^2 - ε ∫_0^t ‖ω^ε‖_L^2(Ω)^2 + ‖u^ε_0 - v_0‖_L^2(Ω)^2
+ C(κε)^{1/2} + ∫_0^t (C_0 + 2‖∇v‖_L^∞(Ω)) ‖u^ε - v‖_L^2(Ω)^2
+ 2ε ∫_0^t ⟨∂_y u^ε_1, rot Ṽ^κ⟩_L^2(Ω).
Here C depends only on T, ‖u^ε_0‖_L^2(Ω), and ‖v_0‖_H^s(Ω), while C_0 is a numerical constant. Inequality (<ref>) is valid also for the no-slip (Dirichlet) case; indeed, we can drop the negative term -2a^ε ∫_0^t ‖u^ε_1‖_L^2(∂Ω)^2.
By applying the Grönwall inequality and by taking the limit ε→0, we arrive at (<ref>).
This is enough to extend Kato's criterion to the Navier boundary condition case; the rest is identical to Kato's proof in <cit.>.
We have achieved this result by re-using the Dirichlet corrector because, since the test function φ=v-Ṽ^κ vanishes at y=0, the boundary integral in (<ref>) does not contribute.
This does not feel quite satisfactory. One would have hoped to get criteria by constructing more appropriate correctors, such as one so that the total satisfies the Navier boundary condition, but, as we have just mentioned,
a boundary integral appears and it is not clear that we can control it. In fact, this boundary term is similar to the one in the Matsui criterion, which, as we have proved, is equivalent to Kato's.
We observe that when considering a corrector which does not vanish on the boundary, convergence of Navier-Stokes solutions to Euler solutions happens if and only if both b. and c. are satisfied.
It appears difficult to get refinements of criteria for L^2 convergence in the inviscid limit problem according to the boundary condition.
Acknowledgements. These results were obtained during MP's visit to Kyoto University, the hospitality of which is warmly acknowledged. The visit was supported by the JSPS Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers,
`Development of Concentrated Mathematical Center Linking to Wisdom of the Next Generation', which is organized by the Mathematical Institute of Tohoku University.
MP is also supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program Grant agreement No 63765, project `BLOC', as well as the French Agence Nationale de la Recherche project `Dyficolti' ANR-13-BS01-0003-01.
|
http://arxiv.org/abs/1701.07803v1 | 20170126181629 | Synergies between Asteroseismology and Three-dimensional Simulations of Stellar Turbulence | [
"W. David Arnett",
"E. Moravveji"
] | astro-ph.SR | [
"astro-ph.SR"
] |
1Steward Observatory, University of Arizona, Tucson, AZ 85721, [email protected]
2Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
Turbulent mixing of chemical elements by convection has fundamental effects on the evolution of stars.
The standard algorithm at present, mixing-length theory (MLT), is intrinsically local, and must be supplemented
by extensions with adjustable parameters.
As a step toward reducing this arbitrariness, we compare asteroseismically inferred internal structures of
two Kepler slowly pulsating B stars (SPB's; M∼ 3.25 M_⊙) to predictions of 321D turbulence theory,
based upon well-resolved, truly turbulent three-dimensional simulations <cit.> which include
boundary physics absent from MLT.
We find promising agreement between the steepness and shapes of the
theoretically-predicted composition profile outside the convective region in 3D simulations and in asteroseismically
constrained composition profiles in the best 1D models of the two SPBs.
The structure and motion of the boundary layer, and the generation of waves, are discussed.
§ INTRODUCTION
Observational and computational advances over the past decade
have placed us in a
unique era, in which study of the
interiors of
stars can be pursued at much higher precision than before.
The recent observations from space of pulsating stars (through the MOST, CoRoT, Kepler, K2, and
BRITE-constellation missions), and the planned future missions (like TESS and PLATO) supply precise photometry
of
stars, with near-continuous time sampling for durations of weeks to years.
Among all targets, those more massive than ∼1.4 M_⊙ have a critical feature in common:
during their main sequence lives, they harbor a convective core and a radiative envelope.
The gravity (g) modes propagate in the radiative envelope and are reflected from the convective core,
providing valuable information regarding the physical conditions near the boundary.
Thus, g-mode pulsating stars are excellent tests of
the physics of core convection.
In parallel, two- and three-dimensional (2D/3D) implicit large eddy simulations (ILES) of turbulent convection at
different evolutionary phases (e.g., <cit.>,
and many more)
have shed light on the
behavior of stellar convection, and in particular on
the interface between convective and radiative zones.
This allows
the development of non-local time-dependent convective theories
which are consistent with the
3D simulations. The numerical data provides closure to the Reynolds-averaged Navier-Stokes
(RANS) equations <cit.>,
converting numerical experiments into theory.
This approach is called 321D, and aims to provide alternatives (of increasing sophistication)
to the classical Mixing Length Theory of convection <cit.>,
for use in
one-dimensional (1D) stellar evolution codes.
Despite this progress in computation,
simplifications are necessary. Attainable numerical resolution allows numerical Reynolds numbers Re > 10^4, which are definitely turbulent, but stars have far higher Reynolds numbers <cit.>. To attain highly turbulent simulations, only a fraction of the star is computed (a “box-in-star” approach) and rotation and magnetic fields are ignored. The simulations extend from the integral scale down into the Richardson-Kolmogorov cascade, and the subgrid dissipation merges with the Kolmogorov “four-fifths” law <cit.>.
In addition to the Reynolds number issue, the simulations have negligible radiation diffusion
(infinite Péclet number) because of vigorous neutrino cooling,
rather than large but finite Péclet numbers found
during hydrogen and helium burning <cit.>.
These simulations represent turbulent solutions
to the Navier-Stokes equations, and a step beyond mixing-length theory:
they can resolve boundaries.
It is timely to compare
the results of observations, asteroseismic modeling
and 2D/3D simulations
<cit.>.
Although these are independent approaches, it is possible to
understand many
underlying similarities between the state-of-the-art simulations and modeling.
Such synergies shall allow us improve our treatment of turbulent convection in 1D models, and
account for the convectively-induced mixing in the radiative interior (through overshooting and internal
gravity waves) in a more consistent way.
We review the recent asteroseismic modeling of two Kepler
slowly pulsating B stars (SPB's) in <ref>.
In <ref> we compare the theoretical descriptions in MLT and 321D, region by region.
In <ref> we compare the shapes of the abundance profiles at the convective boundaries, as inferred from astreroseismology
and from the 3D simulations of O-burning shell in a 23 M_⊙ model <cit.> and of C-burning shell in a 15 M_⊙ model <cit.>. Our conclusions are summarized in <ref>.
§ INPUT MODELS
<cit.> recently did
in-depth forward seismic modeling of two SPB stars
having the richest seismic spectra known so far.
Both KIC 10526294 <cit.>, and KIC 7760680 <cit.> are of
spectral class B8V (3.25 M_⊙), and exhibit long and uninterrupted series of dipole (ℓ=1) g-modes.
In both cases, the relative frequency differences between the observations and models is less than 0.5%.
In addition to placing tight constraints on the extent of overshooting beyond the core, and additional diffusive mixing
in the radiative interior, the authors concluded that the exponentially-decaying diffusive mixing profile for
core overshoot outperforms the step-function prescription.
Thus, convectively-induced mixing beyond the formal core boundary seems to have a radial dependence, and decays
outwards.
Figure <ref> shows the internal mixing profile (colored regions) of the best
model of KIC 7760680, with selected positions in the enclosed mass coordinate labeled O, S, M, and W.
We will discuss each region, comparing and contrasting the MLT, and a 321D theory based upon
well-resolved numerical simulations
(see <cit.> and extensive references and discussion therein).
§ COMPARING MLT AND 321D
There are four distinct regions in Fig. <ref>, which we elaborate below: (1) a Schwarzschild core, (2) a braking (overshoot) region, (3) a composition gradient, and (4) a radiative envelope.
§.§ Region OS: Schwarzschild core
§.§.§ MLT
In Figure <ref>, the region extending from the origin at O to S is well mixed; at S the Schwarzschild criterion changes sign, so that within OS buoyancy drives convection.
For a composition gradient of zero, -∇_μ = ∇_Y → 0, where Y ≡ Σ_i X_i/A_i, and X_i and A_i are the mass fraction and mass number of each nucleus i.
The Ledoux discriminant,
L = Δ∇ = ∇ - ∇_ ad - ∇_Y,
reduces to the the Schwarzschild discriminant,
S = ∇ - ∇_ ad,
which is positive.
∇ and ∇_ ad are actual and adiabatic temperature gradients, respectively, and ∇_μ is the corresponding dimensionless gradient in mean molecular weight.
The convective velocity is approximately
u^2 ∼ g ℓΔ∇ >0,
where ℓ is the free parameter, the mixing length,
the temperature excess is Δ∇ = ∇ - ∇_ ad -∇_Y, and g is the gravitational acceleration.
For the turbulent velocity, the MLT as given by Eq. <ref> is an adequate
approximation over region OS, and is the commonly used choice.
This is a steady-state approximation which may be inferred from the turbulent kinetic energy equation
<cit.> or alternatively, from the balance of buoyant acceleration and deceleration by turbulent
friction (u|u|/ℓ); see <cit.>, <cit.>, <cit.>, and <cit.>.
Using the sound speed s^2 =γ PV, the Mach number M = u/s, and the pressure scale height H_P = PV/g, we may write Eq. <ref> as
M^2 ∼ (ℓ / H_P) Δ∇ .
For quasi-static stellar evolution, the convective Mach numbers are small (M≤ 0.01). Since ℓ/H_P ∼ 1, Δ∇≪ 1 giving ∇∼∇_ ad for a well-mixed convection zone in a stellar interior.
§.§.§ 321D
Fully turbulent 3D simulations of convection may be represented as solutions to a simple differential equation for either the turbulent kinetic energy, or (as shown here) for the turbulent velocity u <cit.>,
∂_t u + ( u ·∇) u = B - D
where the buoyancy term[For strongly stratified media, there is an additional driving term due to “pressure dilatation” <cit.>.] is B≈ gβ_T Δ∇ (β_T = (∂lnρ/ ∂ln T )_P is the thermodynamic factor[Sometimes denoted δ or Q.] to convert temperature excess to density deficit at constant pressure), and the drag term (chosen to be consistent with the Kolmogorov cascade) is D≈ u/τ, with the dissipation time τ = ℓ_d / |u|.
Here ℓ_d is the dissipation length, a property of the turbulent cascade, and may differ from the mixing length used in MLT.
We average over angles and take the steady state limit, which gives for the radial convective speed u_r,
u_r d u_r/dr = gβ_T Δ∇ - u_r|u_r|/ℓ_d.
Away from the convective boundaries the gradient of u is small, and this equation resembles Eq. <ref>
for appropriately chosen ℓ_d.
Eq. <ref> implies a heating rate due to the dissipation of flow at small scales <cit.>, ϵ_K = u^2/τ = u^3/ℓ, a feature ignored in MLT. This is the frictional cost to the star for moving enthalpy by turbulence. In practice this term is small but not negligible. The 321D algorithm accounts for this additional heating term in the computation of 1D models <cit.>.
§.§ Region SM: Braking (overshoot)
§.§.§ MLT
The Schwarzschild discriminant,
S = Δ∇ = ∇ - ∇_ ad
changes sign at boundary S,
and ∇_Y ≈ 0 there, so Eq. <ref> implies an imaginary convective speed. To deal with this
singular
behavior, different physics is traditionally introduced. A region SM is defined as the “overshoot” region, in which the luminosity is presumably carried entirely by radiative diffusion (∇ = ∇_ rad) and a new algorithm is defined, replacing the variable u by a new variable, the effective diffusion rate D_ov; see <cit.> for a short and clear discussion.
Inside SM, ∇_ rad-∇_ ad≤ S≤0.
The coordinate M designates the layer at which this effective “convective diffusion” is no longer able to destroy the composition gradient, so over region SM we still have ∇_Y ≈ 0.
§.§.§ 321D
Because the Ledoux discriminant,
L = Δ∇ = ∇ - ∇_ ad - ∇_Y
changes sign at S, B < 0, and
the region SM is subjected to buoyant deceleration. Mixing continues over SM so that all of the region OSM is mixed, even though L is negative over SM. Thus SM is the overshoot region, in which the flow turns back to complete its overturn. The vector velocity u has different signs in upflow and downflow; it becomes horizontal at coordinate M, not zero; <cit.>. The coordinate M is a shear layer.
Here ∇_Y → 0, so we have ∇ - ∇_ ad < 0, and g ℓΔ∇ <0.
Eq. <ref> is not possible.
Near the boundaries the velocity gradient terms dominate over the Kolmogorov term in Eq. <ref>, so
u_r du_r/dr = d (u_r^2/2)/dr ∼ g β_T Δ∇.
For negative buoyancy (Δ∇ <0), the buoyant deceleration acts to decrease the radial
component of the turbulent kinetic energy.
<cit.> show that this is essentially the same as defining the boundary by the gradient Richardson
criterion (Ri = N^2/(∂ u /∂ r)^2 > 1 / 4).
The Schwarzschild criterion is only a linear instability condition, derived by assuming infinitesimal perturbations. The Richardson criterion is a nonlinear condition, which indicates whether the turbulent kinetic energy can overcome the potential energy implied by stable stratification. For a well-mixed region L→ S.
Use of L in a stellar code can give
fictitious boundaries due to small abundance gradients, which are blown away in a 3D fluid dynamic simulation;
the Richardson criterion insures against these.
The boundary of a convective region has a negligible radial velocity of turbulence; Eq. <ref>
indicates where this occurs, so that by a solution of Eq. <ref>, 321D automatically determines where the
boundary of convection is.
This contrasts with MLT for which boundaries are undefined and thus a thorny issue.
§.§.§ Radiation diffusion effects
In order to compare the asteroseismic models from hydrogen burning to simulations from later burning stages, allowance must be made for the difference in strength of radiative diffusion in the two cases. The duration of neutrino-cooled stages is much shorter than the radiative leakage time from the core, while core hydrogen burning takes much longer than its corresponding radiative leakage time.
In the terminology of fluid mechanics, the Péclet number is significantly different in these two cases <cit.>, so their flows may be significantly different too. This is especially important for thin layers, for which the radiative diffusion time is shorter.
For oxygen burning and carbon burning, the flow follows a nearly adiabatic trajectory,
having an entropy deficit in the braking region (Fig. 4 in <cit.>). If
radiative diffusion is not negligible, heat will flow into this region of entropy deficit, reducing the strength of the buoyancy braking and thus widening the overshoot layer. This causes the actual gradient to deviate from the adiabatic gradient (∇_ ad) and approach the radiative one (∇_ rad).
The temperature gradient in the overshooting region is expected to lie between the adiabatic and radiative ones ∇_ rad<∇<∇_ ad <cit.>.
Calibration of overshooting algorithms using observations of hydrogen-burning stars may be inadequate for later burning stages because of the differences in Péclet number <cit.>. This
could be a problem for helium burning <cit.>, and will certainly get worse for the neutrino-cooled stages.
§.§ Region MW: Composition Gradient
<cit.> found it desirable to add an extra diffusive mixing (their D_ ext) beyond the well-mixed region OSM,
because the agreement between observed and modeled frequencies improve – in χ^2 sense – by
a factor 11. This is an important clue regarding the structure of this region.
The physical basis for extra mixing seems to be at least two-fold. Because coordinate M is a shear layer,
Kelvin-Helmholtz instabilities will mix matter above and below coordinate M.
In Eq. <ref> we saw that the radial velocity could be decelerated at a boundary. As the flow turns, a horizontal velocity u_h must develop <cit.>. This variable does not appear in MLT (the “blob disappears back into the environment"). A finite u_h implies a shear instability may occur.
A necessary condition for instability (due to
Rayleigh and to Fjørtoft, see 8.2 in <cit.>) is that
d^2 u_h/dr^2 (u_h-u_0) < 0,
somewhere as we move in radius through the flow at the boundary.
Here u_h is the horizontal velocity and u_0 is that
velocity at the radius at which d^2u_h/dr^2 = 0 (the point of inflection).
While a stably-stratified composition gradient may tend to inhibit the instability, Eq. <ref> illustrates a basic feature:
the velocity field drives the instability.
If this horizontal flow is stable, a composition gradient can be maintained. Turbulent fluctuations may cause the stability criterion to be violated, and thus erode the layer until it is again stable.
If there is entrainment at the boundary, as the 3D simulations show <cit.>, then the boundary moves away from the center of the convection zone.
The result is that the composition gradient will be left near the margin of instability. This gives the sigmoid shape, found in 3D simulations of oxygen burning <cit.> and carbon burning <cit.>.
At mass coordinate M the average turbulent velocity in the radial direction is zero.
Because the flow is turbulent, it is zero only on average, so ⟨ u ⟩ = 0, but has a finite rms value, ⟨ u^2 ⟩≠ 0.
Over the regions SM and MW, Δ∇ < 0, so the Brunt-Väisälä frequency is real.
Thus, this region can support waves, which the velocity fluctuations necessarily excite. The turbulent fluctuations couple best to g-modes at low Mach numbers <cit.>.
The mixing induced here is much slower than in the convection region[Most of the turbulent energy is at large wavelengths (g-modes), so the induced waves are less able to cause mixing than shorter wavelengths might.], and allows a composition gradient to be sustained over MW. Because the mixing is slow, the composition structure may be
a combination of previous history and slow modification.
§.§ Region W: Radiative envelope
To the extent that nonlinear interactions of waves generate entropy, the circulation theorem implies that slow currents will be induced, leading to more “extra mixing” in radiative regions such as those above coordinate W.
In 1D stellar
evolution, this type of mixing is treated diffusively (D_ ext), and
asteroseismology can tightly constraint it.
§ COMPARING ABUNDANCE PROFILES
Figure <ref> shows the inferred profile of hydrogen in the two Kepler SPB's.
This may be well-fitted by a logistic function
strikingly similar to the composition profile
shapes resulting from 3D simulations. We choose to fit the abundance profiles X (in mass fraction) to
X(z) = θ + (ϕ - θ)/(1 + exp[-η (z-1)]),
where z=r/r_ mid is the normalized radius
and r_mid is the radius at the mid-point of the abundance gradient.
The steepness of the composition profile is encoded in η.
At the “core”, -η(z-1) ≫ 1, and X(z) →θ.
At r=r_ mid , X_ mid= (X_ core + X_ surf)/2,
so z=1 and X_ mid=0.5(θ+ϕ).
At the stellar “surface” (z ≫ 1) and for large η, X_ surf=ϕ.
Convective boundaries in stars are often narrow with respect to radius.
The mathematical form in Eq. <ref> stretches the abundance gradient over the
whole z axis.
Thus, it is straightforward to fit the sigmoid shape to any abundance profile from stellar models at any
evolutionary phase from 1D or 2D/3D (after temporal and angular averaging).
The normalization used in Eq. <ref> tends to separate shape from absolute scale, so that it may be used for different compositions (burning stages).
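In practice, fitting Eq. <ref> to a measured or simulated profile is a three-parameter nonlinear least-squares problem; a minimal sketch in Python is given below (the profile samples, noise level, and initial guesses are hypothetical, for illustration only).

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(z, theta, phi, eta):
    # Eq. (1): X -> theta for z well below 1, X -> phi for z well above 1.
    return theta + (phi - theta) / (1.0 + np.exp(-eta * (z - 1.0)))

# Hypothetical abundance profile sampled around the boundary, z = r / r_mid.
rng = np.random.default_rng(1)
z = np.linspace(0.9, 1.1, 200)
X = sigmoid(z, 0.50, 0.76, 105.0) + 0.005 * rng.standard_normal(z.size)

# Initial guesses: plateaus taken from the data, and a moderately steep jump.
popt, pcov = curve_fit(sigmoid, z, X, p0=[X.min(), X.max(), 50.0])
theta_fit, phi_fit, eta_fit = popt
print(f"theta = {theta_fit:.3f}, phi = {phi_fit:.3f}, eta = {eta_fit:.1f}")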
Radiative transfer is negligible for carbon burning and oxygen burning, so the Peclét number is essentially
infinite <cit.>, while radiative transfer does affect hydrogen burning.
In the braking region, radial deceleration (turning of the flow) is due to negative buoyancy. At the top of a convection zone, rising matter becomes cooler (has lower entropy) than its surroundings. Radiative transfer tends to heat this cooler
matter, reducing the negative buoyancy (radial braking) so that the matter must continue further before it is turned. This results in a wider braking layer (smaller η); see discussion in <cit.>.
Boundary layers are narrower for neutrino-cooled stages, so that calibration of boundary widths from
photon-cooled stages are systematically in error.
Simulations in 3D of H burning are difficult without artificial scaling of the heating rate <cit.>.
Because of negligible
radiative diffusion in carbon and oxygen burning, those
corresponding values for η in Table <ref> should tend to be higher than would be expected in hydrogen burning, as indeed they are.
Simulations of oxygen burning give behavior similar to carbon burning, but are more complicated to interpret because of a significant initial readjustment of convective shell size, and an episode of ingestion of ^20Ne; see <cit.> and references therein to earlier works. Despite this, the large value of η reasonably represents the time- and angle-averaged O-profiles.
In Table <ref> we compare upper boundaries because
core H burning has only an upper boundary.
The large values of η≫ 1 imply that the composition boundaries are narrow
relative to the radius, in all cases.
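A convenient way to read η (an elementary consequence of Eq. <ref>, not a statement from the fits themselves): the layer over which X traverses from 10% to 90% of the jump ϕ - θ has normalized width
Δz = 2 ln 9 / η ≈ 4.4/η,
so η of order 10^2 corresponds to a boundary whose full width is only a few percent of r_mid.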
The η for KIC 10526294 is roughly twice that of KIC 7760680.
The former star is very young (X_ core=0.63) with a
steeper composition jump; the latter is more evolved (X_ core=0.50) with broader
μ-gradient zone.
Further investigation of precisely which physical processes determine the parameter η is underway (see <ref>).
In Fig. <ref>, the maximum and minimum mass fractions of fuel are simply a function of the chosen consumption by burning, fixing ϕ and θ. The position of the boundary is normalized relative to the center of the gradient r_mid (which avoids the important issue of entrainment). What is important here is the narrowness of the composition gradient (the large value of η) and the shape of the curve joining the high and low fuel abundances, both of which are predicted by the 3D simulations, and independently inferred from asteroseismology, giving an encouraging agreement.
§ SUMMARY
Asteroseismically-inferred composition gradients are strikingly similar to those found in
3D simulations; MLT cannot predict boundary structure.
We compare, zone by zone, the predictions of MLT and 321D, from the convective core to the radiative mantle.
Advantages of 321D are its time-dependence, non-locality, incorporation of the
Kolmogorov cascade and turbulent heating, and resolution of convective boundaries.
The properties of the fully convective regions are similar in 321D and MLT.
Use of the 321D can avoid imaginary convective velocities in the braking regions, which are
related to the development of singularities in boundary layers (<cit.>, §40; <cit.>).
The 321D can provide a continuous description of the convective boundary – from fully
convective to radiative – avoiding the awkward patching characteristic of MLT.
The 321D procedure promises a dynamical treatment of the overshooting and wave generation
in stellar models.
Possible effects on convective flow from radiative diffusion should be further explored for regions with moderate Péclet numbers <cit.>, between existing deep interior and atmospheric simulations.
Better boundaries can provide a possible solution to the behavior of
convective helium burning cores, which the MLT fails to represent well.
The sigmoid fits to the hydrogen profiles of the two Kepler targets come from 1D (MESA) including
ad-hoc core overshooting and extra diffusive mixing to match the observed g-mode frequencies.
In contrast the simulations of C and O burning shells have no free parameters to adjust.
Despite dealing with such different burning stages, the similarity shown in Table <ref>
and Fig. <ref> is striking.
Special thanks are due to Simon Campbell and to Andrea Cristini for access to their simulation data prior to publication, and to Raphael Hirschi, Casey Meakin, Cyril Georgy, Maxime Viallet, John Lattanzio and Miro Mocák for helpful discussions. We thank an anonymous referee who helped to improve the manuscript.
This work was supported in part by the Theoretical Program in Steward Observatory,
and by the People Programme (Marie
Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA
grant agreement N^∘ 623303 (ASAMBA).
[Aerts & Rogers (2015)]conny Aerts, C., & Rogers, T. M. 2015, , 806, 33
[Arnett, Meakin & Young(2009)]amy09vel Arnett, W. D., Meakin, C., & Young, P. A., 2009, , 690, 1715
[Arnett, et al.(2015)]321D Arnett, W. D., Meakin, C. A., Viallet, M., Campbell, S. W., Lattanzio, J. C. & Mocák, M., 2015, , 809, 30
[Arnett & Meakin(2016)]ropp Arnett, W. D. & Meakin, C. A., 2016, Reports on Progress in Physics, in press
[Bazàn & Arnett(1994)]ba94 Bazàn, G., & Arnett, D.,
1994, , 433, L41
[Böhm-Vitense(1958)]bv58 Böhm-Vitense, E., 1958, , 46, 108
[Constantino, et al.(2015)]tom15 Constantino, T., Campbell, S., Christensen-Dalsgaard, J., Lattanzio, J., Stello, D., 2015, , 452, 123
[Cristini, et al.(2015)]andrea2015 Cristini, A., Hirschi, R., Georgy, C., Meakin, C., Arnett, D., & Viallet, M., 2015, IAUS 307, 459
[Cristini, et al.(2016)]andrea2016 Cristini, A., Hirschi, R., Georgy, C., Meakin, C., Arnett, D., & Viallet, M., 2016, submitted to
[Drazin(2002)]drazinDrazin, P. G., 2002, Introduction to Hydrodynamic Stability, Cambridge University Press, Cambridge, U. K.
[Freytag, Ludwig, & Steffen(1996)]fls96 Freytag, B., Ludwig, H.-G., & Steffen, M., 1996, , 313, 497
[Frisch(1995)]frisch Frisch, U., 1995, Turbulence, Cambridge University Press, Cambridge
[Ghasemi, et al.(2016)]ghasemi16 Ghasemi, H., Moravveji, E., Aerts, C., et al. 2017, , 465, 1518
[Gough(1977)]gough77 Gough, D. O., 1977, 38th Coll., Problems of Stellar Convection (Berlin: Springer), 71, 799
[Kitiashvili, et al.(2016)]kitiashvili Kitiashvili, I N., Kosovichev, A. G., Mansour, N. N., & Wray, A. A. 2016, , 821, 17
[Kolmogorov(1941)]kolmg41 Kolmogorov, A. N., 1941, Dokl. Akad. Nauk SSSR, 30, 299
[Kolmogorov(1962)]kolmg Kolmogorov, A. N., 1962, J. Fluid Mech., 13, 82
[Landau & Lifshitz(1959)]llfm Landau, L. D. & Lifshitz, E. M., 1959, Fluid Mechanics,
Pergamon Press, London.
[Meakin & Arnett(2006)]ma06 Meakin, C. A., & Arnett, W. D., 2006, , 637, 53
[Meakin & Arnett(2007a)]ma07a Meakin, C. A., & Arnett, W. D., , 665, 690
[Meakin & Arnett(2007b)]ma07b Meakin, C. A., & Arnett, W. D., 2007b, , 667, 448
[Miesch(2005)]miesch Miesch, M. S., 2005, Living Reviews in Solar Physics, 2, 1
[Moravveji, et al.(2015)]ehsan1 Moravveji, E., Aerts, C., Pápics, et al. 2015, , 580, 27
[Moravveji, et al.(2016)]ehsan2 Moravveji, E., Townsend, R. H. D., Aerts, C., Mathis, S., 2016, , 823, 130
[Mosser et al.(2014)]mosser Mosser, B., Benomar, O., Belkacem, K., et al., 2014, , 572, 5
[Nordlund & Stein(1995)]ns95 Nordlund, A., & Stein, R., 1995, Stellar Evolution: What Should Be Done, 32nd Liège Int. Astroph. Coll., 32, 75
[Nordlund, Stein, & Asplund(2009)]nsa Nordlund, A., Stein, R., & Asplund, M., 2009,
<http://www.livingreviews.org/lrsp-2009-2>
[Papics et al. (2014)]papics1 Pápics, P. I., Moravveji, E., Aerts, C., et al. 2014, , 570, 8
[Papics et al. (2015)]papics2 Pápics, P. I., Tkachenko, A., Aerts, C., et al. 2015, , 803, 25
[Schindler, et al.(2015)]jt2015 Schindler, J.-T., Green, E. M., & Arnett, W. D., 2015, , 806, 178
[Smith & Arnett(2014)]nathan2014 Smith, Nathan, & Arnett, W. D., 2014, , 785, 82
[Viallet, et al.(2013)]viallet2013 Viallet, M., Meakin, C., Arnett, D., Mocák, M., 2013, , 769, 1
[Viallet, et al.(2015)]viallet2015 Viallet, Maxime, Meakin, C., Prat, V., & Arnett, D., 2015, , 580, 61
[Zahn(1991)]zahn91 Zahn, J.-P. 1991, , 252, 179
[Zhang(2016)]zhang-2016-01 Zhang, Q. S. 2016, , 818, 146
|
http://arxiv.org/abs/1701.07453v4 | 20170125191349 | Iterative methods for solving factorized linear systems | [
"Anna Ma",
"Deanna Needell",
"Aaditya Ramdas"
] | math.NA | [
"math.NA"
] |
A. [email protected], B. [email protected]
Stochastic iterative algorithms such as the Kaczmarz and Gauss-Seidel methods have gained recent attention because of their speed, simplicity, and the ability to approximately solve large-scale linear systems of equations without needing to access the entire matrix. In this work, we consider the setting where we wish to solve a linear system in a large matrix A that is stored in a factorized form, A = UV; this setting either arises naturally in many applications or may be imposed when working with large low-rank datasets for reasons of space required for storage. We propose a variant of the randomized Kaczmarz method for such systems that takes advantage of the factored form, and avoids computing the product UV. We prove an exponential convergence rate and supplement our theoretical guarantees with experimental evidence demonstrating that the factored variant yields significant acceleration in convergence.
§ INTRODUCTION
Recently, revived interest in stochastic iterative methods like the Kaczmarz <cit.> and Gauss-Seidel <cit.> methods has grown due to the need for large-scale approaches for solving linear systems of equations. Such methods utilize simple projections and require access to only a single row in a given iteration, hence having a low memory footprint. For this reason, they are very efficient and practical for solving extremely large, usually highly overdetermined, linear systems. In this work, we consider algorithms for solving linear systems when the matrix is available in a factorized form. As we discuss below, such a factorization may arise naturally in the application, or may be constructed explicitly for efficient storage and computation. We seek a solution to the original system directly from its factorized form, without the need to perform matrix multiplication.
To that end, borrowing the notation of linear regression from statistics, suppose we want to solve the linear system Ax = b with A ∈ ℂ^{m × n}.
However, instead of the full system matrix A, we only have access to factors U and V, such that A = UV.
In this case, we want to solve the linear system:
UVx = b ,
where U ∈ ℂ^{m × k} and V ∈ ℂ^{k × n}.
Instead of taking the product of U and V to form A, which may not be desirable, we approach this problem using stochastic iterative methods to solve the individual subsystems
Uy = b
Vx = y ,
in an alternating fashion.
Note that y in (<ref>) is the vector of unknowns that we want to solve for in (<ref>), and y in (<ref>) is the known right hand side vector of (<ref>). If we substitute (<ref>) into (<ref>), we acquire the full linear system (<ref>).
There are some situations when approximately knowing would suffice. We assume that (for reasons of interpretability, or for downstream usage) the scientist is genuinely interested in solving the full system, i.e. she is interested in the vector , not in .
It is arguably of practical interest to give special importance to the case of k < min(m,n), which arises in modern data science as motivated by the following examples, but we discuss other settings later.
§.§ Motivation
If is large and low-rank, one may have many reasons to work with a factorization of . We shall discuss three reasons below — algorithmic, infrastructural, and statistical.
Consider data matrices encountered in “recommender systems” in machine learning <cit.>. For concreteness, consider the Netflix (or Amazon, or Yelp) problem, where one has a users-by-movies matrix A whose entries correspond to ratings given by users to movies. A is usually quite well approximated by low-rank matrices — intuitively, many rows and columns are redundant because every row is usually similar to many other rows (corresponding to users with similar tastes), and every column is usually similar to many other columns (corresponding to similar quality movies in the same genre). Usually we have observed only a few entries of A, and wish to infer the unseen ratings in order to provide recommendations to different users based on their tastes. Algorithms for “low-rank matrix completion” have proved to be quite successful in the applied and theoretical machine learning community <cit.>. One popular algorithm, alternating-minimization <cit.>, chooses a (small) target rank k, and tries to find U, V such that A_ij ≈ (UV)_ij for all the observed entries (i,j) of A. As its name suggests, the algorithm alternates between solving for U keeping V fixed and then solving for V keeping U fixed. In this case, at no point does the algorithm even form the entire completed (inferred) matrix UV, and the algorithm only has access to the factors U, V, simply due to algorithmic choices.
There may be other instances where a data scientist may have access to the full matrix A, but in order to reduce the memory storage footprint, or to communicate the data, may explicitly choose to decompose A ≈ UV and discard A to work with the smaller matrices U, V instead.
Consider an example motivated by “topic modeling” of text data.
Suppose Google has scraped the internet for English documents (or maybe a subset of documents like news articles), to form a document-by-word matrix A, where each entry of the matrix indicates the number of times that word occurred in that document. Since many documents could be quite similar in their content (like articles about the same incident covered by different newspapers), this matrix is easily seen to be low-rank. This is a classic setting for applying a machine learning technique called “non-negative matrix factorization” <cit.>, where one decomposes A as the product of two low-rank non-negative matrices U, V; the non-negativity is imposed for human interpretability, so that U can be interpreted as a documents-by-topics matrix, and V as topics-by-words. In this case, we do not have access to A as a result of systems infrastructure constraints (memory/storage/communication).
Often, even for modestly sized data matrices, the relevant “signal” is contained in the leading singular vectors corresponding to large singular values, and the tail of small singular values is often deemed to be “noise”. This is precisely the idea behind the classical topic of principal component analysis (PCA), and the modern machine learning literature has proposed and analyzed a variety of algorithms to approximate the top k left and right singular vectors in a streaming/stochastic/online fashion <cit.>. Hence, the factorization may arise from a purely statistical motivation.
Given a vector b (representing age, or document popularity, for example), suppose the data scientist is interested in regressing b onto A, for the purpose of scientific understanding or to take future actions. Can we utilize the available factorization efficiently, designing methods that work directly on the lower dimensional factors U and V rather than computing the full system A?
Our goal will be to propose iterative methods that work directly on the factored system, eliminating the need for a full matrix product and potentially saving computations on the much larger full system.
§.§ Main contribution
We propose two stochastic iterative methods for solving system (<ref>) without computing the product of U and V.
Both methods utilize iterates of well studied algorithms for solving linear systems.
When the full system is consistent, the first method, called RK-RK, interlaces iterates of the Randomized Kaczmarz (RK) algorithm applied to each subsystem and finds the optimal solution.
When the full system is inconsistent, we introduce the REK-RK method, an interlacing of Randomized Extended Kaczmarz (REK) iterates to solve (<ref>) with RK iterates to solve (<ref>), that converges to the so-called ordinary least squares solution.
§.§ Outline
In the next section, we provide background and discuss existing work on stochastic methods that solve linear systems.
In particular, we describe the RK and REK algorithms as well as the Randomized Gauss-Seidel (RGS) and Randomized Extended Gauss-Seidel (REGS) algorithms.
In Section <ref> we investigate variations of settings for subsystems (<ref>) and (<ref>) that arise depending on the consistency and size of .
Section <ref> introduces our proposed methods, RK-RK and REK-RK. We provide theory that shows linear convergence in expectation to the optimal solution for both methods.
Finally, we present experiments in Section <ref> and conclude with final remarks and future work in Section <ref>.
§.§ Notation
Here and throughout the paper, matrices and vectors are denoted with boldface letters (uppercase for matrices and lowercase for vectors).
We call A^i the i^th row of the matrix A and A_j the j^th column of A.
The Euclidean norm is denoted by ‖·‖_2 and the Frobenius norm by ‖·‖_F. Lastly, A^* denotes the adjoint (conjugate transpose) of the matrix A. Motivated by applications, we allow A to be rank deficient and assume that U and V are full rank.
In this section we summarize existing work on stochastic iterative methods and different variations of linear systems.
§.§ Linear Systems
Linear systems take on one of three settings determined by the size of the system, rank of the matrix , and the existence of a solution. First we discuss solutions to systems with full rank matrices then remark on how rank deficiency affects the desired solution.
In the full rank underdetermined case, m < n and the system has infinitely many solutions; here, we often want to find the least Euclidean norm solution to (<ref>):
x_LN := A^*(AA^*)^{-1} b .
Clearly, A x_LN = b, and all other solutions to an underdetermined system can be written as x = x_LN + z where Az = 0.
In the overdetermined setting, we have m > n and the system can have a unique (exact) solution or no solution. If there is a unique solution, the linear system is called an overdetermined consistent system.
When A is full rank, the optimal unique solution is x_uniq such that A x_uniq = b:
x_uniq := (A^*A)^{-1} A^* b .
If there is no exact solution, the system is called an overdetermined inconsistent system. When a system is inconsistent and A is full rank, we often seek to minimize the sum of squared residuals, i.e. to find the ordinary least squares solution
x_LS := (A^*A)^{-1} A^* b .
The residual can be written as r = b - A x_LS. Note that A^* r = 0, which can be easily seen by substituting b = A x_LS + r into (<ref>).
If the matrix A in the linear system Ax = b is rank deficient, then there are infinitely many solutions to the system regardless of the size of m and n. In this case, we again want the least norm solution in the underdetermined case and the “least-norm least-squares” solution in the overdetermined case,
x_opt :=
A^*(AA^*)^† b , if m < n
(A^*A)^† A^* b , if m > n,
where (·)^† is the pseudo-inverse. General solutions to the linear system can be written as x = x_opt + z where Az = 0 — note that Ax = A(x_opt + z) = A x_opt + Az = A x_opt. Similar to the full rank case, when the low-rank system is inconsistent, we can write b = A x_opt + r, again where A^* r = 0.
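Both branches above coincide with the Moore-Penrose solution A^† b, and the residual orthogonality A^* r = 0 is easy to verify numerically; a small illustration with synthetic data (all values hypothetical):

import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 30, 5                       # overdetermined, rank-deficient A
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
b = rng.standard_normal(m)                # generically inconsistent

x_opt = np.linalg.pinv(A) @ b             # least-norm least-squares solution
r = b - A @ x_opt
print(np.linalg.norm(A.conj().T @ r))     # approximately 0 (round-off): r is in null(A^*)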
§.§ Randomized Kaczmarz and its Extension
The Kaczmarz Algorithm <cit.> solves a linear system Ax = b by cycling through rows of A and projecting the estimate onto the solution space given by the chosen row.
It was initially proposed by Kaczmarz <cit.> and has recently regained interest in the setting of computer tomography where it is known as the Algebraic Reconstruction Technique <cit.>.
The randomized variant of the Kaczmarz method introduced by Strohmer and Vershynin <cit.> was proven to converge linearly in expectation for consistent systems.
Formally, given A and b of (<ref>), RK chooses row i ∈ {1, 2, ..., m} of A with probability ‖A^i‖^2_2/‖A‖^2_F,
and projects the previous estimate onto that row with the update
x_t := x_{t-1} + ((b_i - A^i x_{t-1})/‖A^i‖^2_2) (A^i)^*.
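For concreteness, the update above admits a direct implementation; a sketch in Python (row sampling follows the stated probabilities):

import numpy as np

def randomized_kaczmarz(A, b, iters=10000, seed=0):
    """RK for A x = b; row i is drawn with probability ||A^i||_2^2 / ||A||_F^2."""
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype)
    row_norms = np.linalg.norm(A, axis=1) ** 2
    probs = row_norms / row_norms.sum()
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane A^i x = b_i.
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i].conj()
    return x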
Needell <cit.> later studied the inconsistent case and showed that RK does not converge to the least squares solution for inconsistent systems, but rather converges linearly only until it reaches a fixed radius (a convergence horizon) around the solution.
To remedy this, Zouzias and Freris <cit.> proposed the Randomized Extended Kaczmarz (REK) algorithm to solve linear systems in all settings.
For REK, row i ∈ {1, 2, ..., m} and column j ∈ {1, ..., n} of A are chosen at random with probabilities
P(row = i) = ‖A^i‖^2_2/‖A‖^2_F , P(column = j) = ‖A_j‖^2_2/‖A‖^2_F,
and starting from x_0 = 0 and z_0 = b, every iteration computes
x_t := x_{t-1} + ((b^i - z^i_t - A^i x_{t-1})/‖A^i‖^2_2) (A^i)^*, z_t := z_{t-1} - (⟨A_j, z_{t-1}⟩/‖A_j‖_2^2) A_j.
REK finds the optimal solution in all linear system settings.
In the consistent setting, it behaves as RK.
In the overdetermined inconsistent setting, z_t estimates the residual vector r and allows x_t to converge to the true least squares solution of the system.
REK was shown to converge linearly in expectation to the least-squares solution by Zouzias and Freris <cit.>.
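A corresponding sketch of REK in Python: the auxiliary iterate z_t approximates the residual r, so that x_t effectively runs RK on the asymptotically consistent system A x = b - z_t.

import numpy as np

def randomized_extended_kaczmarz(A, b, iters=10000, seed=0):
    """REK for A x = b, following the updates above (x_0 = 0, z_0 = b)."""
    m, n = A.shape
    x = np.zeros(n, dtype=A.dtype)
    z = b.astype(A.dtype)
    row_norms = np.linalg.norm(A, axis=1) ** 2
    col_norms = np.linalg.norm(A, axis=0) ** 2
    p_row = row_norms / row_norms.sum()
    p_col = col_norms / col_norms.sum()
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        j = rng.choice(n, p=p_col)
        # Remove from z its component along column A_j.
        z -= (A[:, j].conj() @ z) / col_norms[j] * A[:, j]
        i = rng.choice(m, p=p_row)
        x += (b[i] - z[i] - A[i] @ x) / row_norms[i] * A[i].conj()
    return x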
§.§ Randomized Gauss-Seidel and its Extension
The Gauss-Seidel method was originally published by Seidel but it was later discovered that Gauss had studied this method in a letter to his student <cit.>.
Instead of relying on rows of a matrix, the Gauss-Seidel method relies on columns of A.
The randomized variant was studied by Leventhal and Lewis <cit.> shortly after RK was published.
The randomized variant (RGS) requires a column j to be chosen randomly with probability ‖A_j‖^2_2/‖A‖^2_F,
and updates at every iteration
x_t := x_{t-1} + (A_j^*(b - A x_{t-1})/‖A_j‖^2_2) e_(j),
where e_(j) is the j^th basis vector (a vector with 1 in the j^th position and 0 elsewhere). Leventhal and Lewis <cit.> showed that RGS converges linearly in expectation when A is overdetermined.
However, it fails to find the least norm solution for an underdetermined linear system <cit.>.
The Randomized Extended Gauss-Seidel (REGS) resolves this problem, much like REK did for RK in the case of overdetermined systems.
The method chooses a random row and column of A exactly as in REK, and then updates at every iteration
x_t := x_{t-1} + (A_j^*(b - A x_{t-1})/‖A_j‖^2_2) e_(j) ,
P_i := I_n - (A^i)^* A^i/‖A^i‖_2^2 ,
z_t := P_i (z_{t-1} + x_t - x_{t-1}),
and at any fixed time t, outputs x_t - z_t as the estimated solution to Ax = b. Here, I_n denotes the n × n identity matrix. This extension works for all variations of linear systems and was proven to converge linearly in expectation by Ma et al. <cit.>.
The RK and RGS methods along with their extensions are extensively studied and compared in <cit.>.
Table <ref> summarizes the convergence properties of each of the randomized methods and their extensions.
In this paper, we focus on using combinations of RK and REK but also discuss RGS and REGS for comparison.
We choose to focus on RK and REK because their updates consist only of scalar operations and inner products as opposed to REGS which requires an outer product. The methods proposed are easily extendable to RGS and REGS.
§ VARIATIONS OF FACTORED LINEAR SYSTEMS
Our proposed methods rely on interleaving solution estimates to subsystem (<ref>) and subsystem (<ref>).
Because the convergence of RK, RGS, REK, and REGS is heavily dependent on the number of rows and columns in the linear system, it is important to discuss how the settings of (<ref>) and (<ref>) are determined by the size of k relative to m and n.
In this section, we will discuss when we can expect our methods to solve the full system.
For simplicity in notation, we will denote x_*, y_*, and x̂ as the “optimal” solutions of the full system (<ref>), of subsystem (<ref>), and of subsystem (<ref>) with y_* as its right-hand side, respectively, as summarized in Table <ref>.
By “optimal” solution for (<ref>) and (<ref>), we mean the unique, the least norm, or the least squares solution, depending on the type of system (overdetermined consistent, underdetermined, overdetermined inconsistent). Since we assume that X may be low-rank, x_* is going to be the least norm solution as described in (<ref>).
Table <ref> presents such a summary depending on the size of k with respect to m and n.
We spend the rest of this section justifying the shading in Table <ref>. For this, we split Table <ref> into three scenarios: (S1) Uy = b is overdetermined and consistent, (S2) Uy = b is underdetermined, (S3a and S3b) Xx = b is overdetermined and inconsistent. It should be noted that in Scenarios S1 and S2, the overdetermined-ness or underdetermined-ness of each subsystem follows immediately from the sizes of m, n, and k and the assumption that the subsystems are full rank. We use the over/underdetermined-ness of each subsystem to show x̂ = x_* for Scenario S1 and x̂ ≠ x_* for Scenario S2. In Scenario S3, a little more work needs to be done to conclude the consistency of each subsystem. For Scenarios S3a and S3b, we first investigate how inconsistency in (<ref>) affects the consistency of (<ref>) and (<ref>), then show that for Scenario S3a, x̂ ≠ x_* and for Scenario S3b, x̂ = x_*. This section provides the intuition on when we should expect our methods (or similar ones based on interleaving solutions to the subsystems) to work. However, one may also skip ahead to the next section where we formally present our algorithms and main results.
* Scenario S1: Uy = b overdetermined, consistent.
When Uy = b is overdetermined and consistent, we find that solving (<ref>) and (<ref>) gives us the optimal solution of (<ref>).
Indeed, in the case where V is overdetermined, we have:
x̂ = (V^*V)^{-1}V^* y_*
= (V^*V)^{-1}V^* (U^*U)^{-1}U^* b
= (V^*V)^{-1}V^* (U^*U)^{-1}U^* UVx_*
= (V^*V)^{-1}V^* Vx_*
= x_*.
In the case where V is underdetermined, we have:
x̂ = V^*(VV^*)^{-1} y_*
= V^*(VV^*)^{-1} (U^*U)^{-1}U^* UVx_*
= V^*(VV^*)^{-1} Vx_*.
Since X is possibly low-rank, we still need to argue that this implies that x̂ = x_*, i.e. x̂ is indeed the least norm solution. Suppose towards a contradiction that x_* is the least norm solution of the full system but not of subsystem (<ref>); in other words assume that x_* = x̄ + w where Vw = 0 and w is nontrivial. Multiplying both sides by X = UV, we see that Xx_* = UVx̄ = Xx̄;
since x_* has a nontrivial component w such that Xw = 0, it cannot be the least norm solution to the full system as assumed, reaching a contradiction.
Therefore, in the consistent case when Uy = b is overdetermined, we have that x̂ = x_*, and may hope that our proposed methods will be able to solve the full system (<ref>) utilizing the subsystems (<ref>) and (<ref>).
* Scenario S2: U underdetermined.
When U is underdetermined, solving (<ref>) and (<ref>) for their optimal solutions does not guarantee the optimal solution of the full system.
Intuitively, (<ref>) has infinitely many solutions and y_* = y_{LN} ≠ Vx_*.
Mathematically, investigating x̂, we find that
x̂ = V^*(VV^*)^{-1} y_*
= V^*(VV^*)^{-1} U^*(UU^*)^{-1} b
= V^*(VV^*)^{-1} U^*(UU^*)^{-1} UVx_*
≠ x_* if V is underdetermined, and
x̂ = (V^*V)^{-1}V^* (y_* - y_V)
= (V^*V)^{-1}V^* U^*(UU^*)^{-1} b
≠ x_* if V is overdetermined (and Vx = y_* possibly inconsistent),
where we rewrite y_* = ŷ + y_V for y_V ∈ null(V^*) since subsystem (<ref>) may be inconsistent. Note that if subsystem (<ref>) is consistent then we simply have y_V = 0 and the above calculation still carries through.
Therefore, we do not expect our proposed methods to succeed when U is underdetermined. Fortunately, this case seems to be of little practical interest, since factoring an underdetermined system does not typically save any computation.
* Scenario S3: Xx = b inconsistent.
Before we discuss whether it is possible to recover the optimal solution to the full system, we must first discuss what Xx = b being inconsistent implies about the subsystems (<ref>) and (<ref>).
In particular, one needs to determine whether inconsistency in the full system creates inconsistencies in the individual subsystems.
If Xx = b is inconsistent then we have Xx_* + w = b where x_* is the optimal solution of (<ref>) and X^*w = 0.
Now, consider decomposing w = w_1 + w_2 where U^*w_1 = 0, U^*w_2 ≠ 0, and V^*U^*w_2 = 0.
Notice that X^*w = V^*U^*(w_1 + w_2) = V^*U^*w_1 + V^*U^*w_2 = 0, as desired.
We want to decompose the full system UVx + w = b into two subsystems.
Following a similar thought process as before, we choose to decompose our full system into the following:
Uy + w_1 + w_2 = b,
Vx = y.
Clearly, (<ref>) is inconsistent since U^*w_1 = 0 and U^*w_2 ≠ 0.
Because Uy = b must be overdetermined to be inconsistent, y_* = (U^*U)^{-1}U^*b = (U^*U)^{-1}U^*(b - w_1) = Vx_* + (U^*U)^{-1}U^*w_2 is the least squares solution to (<ref>).
* Case S3a: V overdetermined.
Note that the second subsystem (<ref>) is possibly inconsistent (since there may be a component of y_* in the null space of V^*).
Writing y_* = ŷ + y_V such that y_V ∈ null(V^*), we have
x̂ = (V^*V)^{-1}V^* (y_* - y_V)
= (V^*V)^{-1}V^* (Vx_* + (U^*U)^{-1}U^* w_2)
= x_* + (V^*V)^{-1}V^* (U^*U)^{-1}U^* w_2
≠ x_*.
Similar to Scenario S2, if subsystem (<ref>) is indeed consistent then y_V = 0 and the above calculation still carries through.
Therefore, in this case, we do not expect to find the optimal solution to (<ref>).
* Case S3b: V underdetermined.
In this case, w_2 = 0 (V^* is injective since V has full row rank, so V^*U^*w_2 = 0 forces U^*w_2 = 0) and solving (<ref>) and (<ref>) obtains the optimal solution to the full system since
x̂ = V^*(VV^*)^{-1} y_*
= V^*(VV^*)^{-1} (U^*U)^{-1}U^*(b - w_1)
= V^*(VV^*)^{-1} (U^*U)^{-1}U^* UVx_*
= V^*(VV^*)^{-1} Vx_*.
Following the same argument as in Scenario S1 when V is underdetermined, we reach the conclusion that x̂ = x_*. Thus, in this case our methods have the potential to solve the full system.
These three scenarios fully explain the shading in Table <ref>.
The focus of the remainder of this paper will be the case in which k < m,n (i.e. left column of Table <ref>) since, as mentioned, it is practically the most relevant setting.
§ METHODS AND MAIN RESULTS
Our approach intertwines two iterative methods to solve subsystem (<ref>) followed by subsystem (<ref>).
For the consistent setting, we propose Algorithm <ref> which uses an iterate of RK on (<ref>) intertwined with an iterate of RK to solve (<ref>).
For the inconsistent setting, we propose using REK to solve subsystem (<ref>) followed by RK to solve subsystem (<ref>) as shown in Algorithm <ref>. We view the latter method as a more practical approach, and the former as interesting from a theoretical point of view.
Recall that y^p is the p^th element of the vector y.
Standard stopping criteria include terminating when the difference in the iterates is small or when the residual is less than a predetermined tolerance. To avoid adding complexity to the algorithm, the residual should be computed and checked approximately every m iterations.
We propose an approach that interlaces solving subsystems (<ref>) and (<ref>); this has a couple of advantages over solving each subsystem separately.
First, if we are given some tolerance ϵ that we allow on the full system, it is unclear when we should stop the iterates of the first subsystem to obtain such an error — if solving the first subsystem is terminated prematurely, the error may propagate through iterates when solving the second subsystem.
Second, the interlacing allows for opportunities to implement these algorithms in parallel.
We leave the specifics of such an implementation as future work as it is outside the scope of this paper.
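To make the interlacing concrete, both schemes can be sketched in a single routine (our NumPy reconstruction from the description above, for real-valued data; the function name, loop structure, and sampling code are ours and not the paper's reference implementation):

```python
import numpy as np

def factored_solve(U, V, b, iters, extended=True, seed=0):
    """Interlace one update for U y = b (RK or REK) with one RK update for V x = y."""
    rng = np.random.default_rng(seed)
    m, k = U.shape
    n = V.shape[1]
    u_rows = np.linalg.norm(U, axis=1) ** 2
    u_cols = np.linalg.norm(U, axis=0) ** 2
    v_rows = np.linalg.norm(V, axis=1) ** 2
    y, x = np.zeros(k), np.zeros(n)
    z = b.astype(float).copy()                  # used only by the REK branch
    for t in range(iters):
        i = rng.choice(m, p=u_rows / u_rows.sum())
        if extended:                            # REK step on U y = b
            j = rng.choice(k, p=u_cols / u_cols.sum())
            z -= (U[:, j] @ z) / u_cols[j] * U[:, j]
            y += (b[i] - z[i] - U[i] @ y) / u_rows[i] * U[i]
        else:                                   # plain RK step on U y = b
            y += (b[i] - U[i] @ y) / u_rows[i] * U[i]
        p = rng.choice(k, p=v_rows / v_rows.sum())
        x += (y[p] - V[p] @ x) / v_rows[p] * V[p]   # RK step on V x = y_t
    return x
```

With `extended=False` this corresponds to the RK-RK scheme; with `extended=True`, to REK-RK.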
§.§ Main result
Our main result shows that Algorithm <ref> and Algorithm <ref> converge linearly to the desired solution. The convergence rate, as expected, is a function of the conditioning of the subsystems, and hence we introduce the following notation. Here and throughout, for any matrix A we write
α_A := 1 - σ_min^2(A)/‖A‖_F^2 ,
κ_A^2 := σ_max^2(A)/σ_min^2(A) ,
where σ_min^2(A) is the smallest non-zero squared singular value of A, and κ_A^2 is the squared condition number of A.
Recall that the optimal solution to a system is either the least-norm, unique, or least-squares solution depending on whether the system is underdetermined, overdetermined consistent, or overdetermined inconsistent, respectively.
Let X be low rank, X = UV such that U ∈ ℂ^{m × k} and V ∈ ℂ^{k × n} are full rank, let the systems Xx = b, Uy = b, and Vx = y_* have optimal solutions x_*, y_*, and x̂ respectively, and let α_U, α_V, κ_U^2 be as defined in (<ref>) and (<ref>). Setting x_0 = 0, y_0 = 0 and assuming k < m,n, we have
* if Xx = b is consistent, then x̂ = x_* and Algorithm <ref> converges with expected error
E‖x_t - x_*‖^2 ≤ α_V^t ‖x_*‖^2 + (1-γ_1)^{-1} α_max^t ‖y_*‖^2/‖V‖_F^2, if α_U ≠ α_V
α_V^t ‖x_*‖^2 + t α_max^t ‖y_*‖^2/‖V‖_F^2 , else
where α_max = max{α_U, α_V} and γ_1 = min{α_U/α_V, α_V/α_U}.
* if Xx = b is inconsistent, then x̂ = x_* and Algorithm <ref> converges with expected error
E‖x_t - x_*‖^2 ≤ α_V^t ‖x_*‖^2 + (1-γ_2)^{-1} α̃_max^{t-1}(1 + 2κ_U^2)‖y_*‖^2/‖V‖_F^2, if √(α_U) ≠ α_V
α_V^t ‖x_*‖^2 + t α̃_max^{t-1}(1 + 2κ_U^2)‖y_*‖^2/‖V‖_F^2, else
where α̃_max = max{√(α_U), α_V} and γ_2 = min{√(α_U)/α_V, α_V/√(α_U)}.
Remarks.
1. Theorem <ref>(a) also applies to the setting in which Uy = b is overdetermined, consistent, and n < k < m. In the proof of Lemma <ref>, one must simply note that the bound (<ref>) still holds for this setting and all other steps in the proof analogously follow.
2. Empirically, our experiments in the next section suggest that U and V can be substantially better conditioned than X.
3. Algorithm <ref> is interesting to discuss from a theoretical standpoint but in applications Algorithm <ref> is more practical as linear systems are typically inconsistent. Algorithm <ref> can be utilized in applications if error in the solution is tolerable. In particular, if Xx = b is inconsistent then Algorithm <ref> will converge in expectation to within some convergence horizon. This can be seen by replacing the use of Proposition <ref> in the bound (<ref>) with the convergence bound of RK on inconsistent linear systems found in Theorem 2.1 of <cit.>.
4. While not the main focus of this paper, we briefly note here that for matrices large enough that they cannot be stored entirely in memory, there is an additional cost that must be paid in terms of moving data between the disk and RAM. In a truly large-scale implementation, the RK-RK algorithm might be more scalable than the REK-RK algorithm since RK only accesses random rows of both U and V, which is efficient if both matrices are stored in row major form, but REK accesses both random rows and columns of U, and hence storing U in either row major or column major format will be slow for one of the two operations.
§.§ Supporting results
To prepare for the proof of the above theorem (the central theoretical result of the paper), we state a few supporting results which will help simplify the presentation of the proof. We begin by stating known results on the convergence of RK and REK on linear systems.
Let E_b denote the expected value taken over the choice of rows in V, and E_x the expected value taken over the choice of rows in U and, when necessary, the choice of columns in U.
Also, let E denote the full expected value (over all random variables and iterations) and E^{t-1} be the expectation conditional on the first t-1 iterations.
(<cit.>) Given a consistent linear system Ax = b, the Randomized Kaczmarz algorithm, with initialization x_0 = 0, as described in Section <ref> converges to the optimal solution x_* with expected error
E‖x_t - x_*‖^2 ≤ α_A^t ‖x_*‖^2,
where α_A is as defined in (<ref>).
(<cit.>) Given a linear system Ax = b, the Randomized Extended Kaczmarz algorithm, with initialization x_0 = 0, as described in Section <ref> converges to the optimal solution x_* with expected error
E‖x_t - x_*‖^2 ≤ α_A^{⌊ t/2 ⌋} (1 + 2κ_A^2) ‖x_*‖^2,
where α_A is as defined in (<ref>) and κ_A^2 is as defined in (<ref>).
The proof of Theorem <ref> builds directly on two useful lemmas. Lemma <ref> addresses the impact of intertwining the algorithms. In particular, it shows useful relationships involving x̂_t, the RK update solving the linear system Vx = y_* at the t^th iteration (with x_{t-1} as the previous estimate), and our update x_t. Lemma <ref> states that conditional on the first t-1 iterations, we can split the norm squared error ‖x_t - x̂‖^2 into two terms relating to the error from solving subsystem (<ref>) and the error from solving subsystem (<ref>). To complete the proof of Theorem <ref>, we bound the error from solving (<ref>) depending on whether we use RK (as in Algorithm <ref>) or REK (as in Algorithm <ref>), then apply the law of iterated expectations to bound the error from solving (<ref>). We now state the aforementioned lemmas, and then formally prove the theorem.
Let x̂_t = x_{t-1} + ((y_*^p - V_p x_{t-1})/‖V_p‖^2) V_p^*. In Algorithm <ref> and Algorithm <ref> we have that:
* E_b^{t-1}⟨x_t - x̂_t, x̂_t - x̂⟩ = 0,
* ‖x̂_t - x̂‖^2 = ‖x_{t-1} - x̂‖^2 - ‖x̂_t - x_{t-1}‖^2.
In words, part (a) states that the difference between an RK iterate solving the exact linear system Vx = y_* and our RK iterate (which solves the linear system Vx = y_t resulting from intertwining) is orthogonal, in expectation, to x̂_t - x̂. This will come in handy in Lemma <ref>. Part (b) is a Pythagoras-style statement, which follows from well-known orthogonality properties of RK updates, included here for simplicity and completeness.
To prove statement (b), we note that (x̂_t - x_{t-1}) is parallel to V_p^* and (x̂_t - x̂) is perpendicular to V_p^*, since V_p(x̂_t - x̂) = V_p(x_{t-1} + ((y_*^p - V_p x_{t-1})/‖V_p‖^2) V_p^* - x̂) = V_p x_{t-1} + y_*^p - V_p x_{t-1} - y_*^p = 0. We apply the Pythagorean Theorem to obtain the desired result.
We prove statement (a) by direct substitution and expansion, as follows:
E_b^{t-1}⟨x_t - x̂_t, x̂_t - x̂⟩
= E_b^{t-1}⟨((y_t^p - y_*^p)/‖V_p‖^2) V_p^*, x_{t-1} - x̂ + ((y_*^p - V_p x_{t-1})/‖V_p‖^2) V_p^*⟩
(i)= E_b^{t-1}⟨((y_t^p - y_*^p)/‖V_p‖^2) V_p^*, ((y_*^p - V_p x_{t-1})/‖V_p‖^2) V_p^*⟩ + E_b^{t-1}⟨((y_t^p - y_*^p)/‖V_p‖^2) V_p^*, x_{t-1} - x̂⟩
(ii)= E_b^{t-1}[(y_t^p - y_*^p)(y_*^p - V_p x_{t-1})/‖V_p‖^2] + ⟨E_b^{t-1}[((y_t^p - y_*^p)/‖V_p‖^2) V_p^*], x_{t-1} - x̂⟩
(iii)= ∑_p (y_t^p - y_*^p)(y_*^p - V_p x_{t-1})/‖V‖_F^2 + ⟨∑_p (y_t^p - y_*^p) V_p^*/‖V‖_F^2, x_{t-1} - x̂⟩
= (y_t - y_*)^*(y_* - Vx_{t-1})/‖V‖_F^2 + ⟨V^*(y_t - y_*)/‖V‖_F^2, x_{t-1} - x̂⟩
(iv)= ⟨(y_t - y_*)/‖V‖_F^2, V(x̂ - x_{t-1})⟩ + ⟨(y_t - y_*)/‖V‖_F^2, V(x_{t-1} - x̂)⟩
= 0.
Step (i) follows from linearity of inner products, step (ii) simplifies the inner product of two parallel vectors, and step (iii) computes the expectation over all possible choices of rows of V. In step (iv), we use the fact that for k < m,n, subsystem (<ref>) is always consistent (since V is underdetermined) to make the substitution y_* = Vx̂.
In Algorithm <ref> and Algorithm <ref>, we can bound the expected norm squared error of x_t - x̂ as
E^{t-1}‖x_t - x̂‖^2 ≤ α_V ‖x_{t-1} - x̂‖^2 + E_x^{t-1}‖y_t - y_*‖^2/‖V‖_F^2 .
We investigate the expectation of the norm squared error of x_t - x̂ conditional on the first t-1 iterations and over the choice of rows of V. We keep E_x^{t-1} in our bound as this expectation will depend on whether Algorithm <ref> or Algorithm <ref> is being used.
E^{t-1}‖x_t - x̂‖^2
= E^{t-1}‖x_t - x̂_t + x̂_t - x̂‖^2
= E^{t-1}‖x_t - x̂_t‖^2 + E^{t-1}‖x̂_t - x̂‖^2 + 2 E^{t-1}⟨x_t - x̂_t, x̂_t - x̂⟩
(iii)= E^{t-1}‖x_t - x̂_t‖^2 + E^{t-1}‖x̂_t - x̂‖^2
(iv)= E^{t-1}‖x_t - x̂_t‖^2 - E^{t-1}‖x̂_t - x_{t-1}‖^2 + E^{t-1}‖x_{t-1} - x̂‖^2
(v)= ‖x_{t-1} - x̂‖^2 - E^{t-1}‖((y_*^p - V_p x_{t-1})/‖V_p‖^2) V_p^*‖^2 + E^{t-1}‖((y_t^p - y_*^p)/‖V_p‖^2) V_p^*‖^2
= ‖x_{t-1} - x̂‖^2 - E^{t-1}[|y_*^p - V_p x_{t-1}|^2/‖V_p‖^2] + E^{t-1}[|y_t^p - y_*^p|^2/‖V_p‖^2].
Steps (iii) and (iv) are applications of Lemma <ref>(a) and Lemma <ref>(b) respectively, and step (v) follows from the definition of each term and simplification using the fact that y_* = Vx̂.
Now, we evaluate the conditional expectation on the choices of rows of V to complete the proof:
E^{t-1}‖x_t - x̂‖^2
(vi)= ‖x_{t-1} - x̂‖^2 - E_b^{t-1}[|y_*^p - V_p x_{t-1}|^2/‖V_p‖^2] + E_x^{t-1}E_b^{t-1}[|y_t^p - y_*^p|^2/‖V_p‖^2]
= ‖x_{t-1} - x̂‖^2 - ∑_{p=1}^k (|y_*^p - V_p x_{t-1}|^2/‖V_p‖^2)(‖V_p‖^2/‖V‖_F^2) + E_x^{t-1}∑_{p=1}^k (|y_t^p - y_*^p|^2/‖V_p‖^2)(‖V_p‖^2/‖V‖_F^2)
= ‖x_{t-1} - x̂‖^2 - ‖y_* - Vx_{t-1}‖^2/‖V‖_F^2 + E_x^{t-1}[‖y_t - y_*‖^2/‖V‖_F^2]
(vii)≤ ‖x_{t-1} - x̂‖^2 - σ_min^2(V)‖x_{t-1} - x̂‖^2/‖V‖_F^2 + E_x^{t-1}[‖y_t - y_*‖^2/‖V‖_F^2]
= α_V ‖x_{t-1} - x̂‖^2 + E_x^{t-1}[‖y_t - y_*‖^2/‖V‖_F^2] .
In step (vi), we use iterated expectations to split the expected value E^{t-1} = E_x^{t-1}E_b^{t-1}. Step (vii) uses the fact that ‖V(x_{t-1} - x̂)‖^2 ≥ σ_min^2(V)‖x_{t-1} - x̂‖^2 since x_{t-1} - x̂ is in the row span of V for all t. We simplify and obtain the desired bound.
§.§ Proof of main result
We now have all the ingredients we need to prove Theorem <ref>, which we now proceed to below.
The fact that x̂ = x_* was already argued in Scenarios S1 and S3(b) in the previous section, so we do not reproduce its argument here. Given this fact, to prove Theorem <ref>, we only need to invoke the statement of Lemma <ref> and bound the term E_x^{t-1}‖y_t - y_*‖^2 using Proposition <ref> or Proposition <ref> depending on whether we are using Algorithm <ref> or Algorithm <ref>, respectively.
* For Algorithm <ref>, plugging Proposition <ref> into the statement of Lemma <ref> yields
E^{t-1}‖x_t - x_*‖^2 ≤ α_V‖x_{t-1} - x_*‖^2 + α_U^t ‖y_*‖^2/‖V‖_F^2 .
Let α_max = max{α_U, α_V}, γ_1 = min{α_U/α_V, α_V/α_U}, and note that γ_1 < 1 if α_U ≠ α_V. Taking expectations over the randomness from the first t-1 iterations and using the Law of Iterated Expectation, we have
E‖x_t - x_*‖^2 ≤ α_V^t ‖x_*‖^2 + (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α_U^{t-h} α_V^h
= α_V^t ‖x_*‖^2 + α_U (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α_U^{t-1-h} α_V^h
(i)= α_V^t ‖x_*‖^2 + α_U α_max^{t-1} (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} γ_1^h
(ii)≤ α_V^t ‖x_*‖^2 + α_max^t (‖y_*‖^2/‖V‖_F^2) (1/(1-γ_1)),
where step (i) uses the fact that the summation is symmetric with respect to α_U and α_V. Step (ii) uses the fact that α_U α_max^{t-1} = α_max^t if α_max = α_U and α_U α_max^{t-1} < α_max^t if α_max = α_V. Now, if α_U = α_V = α_max then we have
E‖x_t - x_*‖^2 ≤ α_V^t ‖x_*‖^2 + (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α_max^{t-h} α_max^h
(ii)= α_V^t ‖x_*‖^2 + t α_max^t ‖y_*‖^2/‖V‖_F^2 ,
where the second term of (ii) approaches 0 as t →∞ since α_max < 1.
* For Algorithm <ref>, plugging Proposition <ref> into the statement of Lemma <ref> yields
E^{t-1}‖x_t - x_*‖^2 ≤ α_V‖x_{t-1} - x_*‖^2 + (1+2κ_U^2) α_U^{⌊ t/2 ⌋} ‖y_*‖^2/‖V‖_F^2 .
Taking expectations over the remaining randomness, and using the fact that α_U^{⌊(t-h)/2⌋} ≤ α_U^{(t-h-1)/2} since α_U < 1, we have
E‖x_t - x_*‖^2
≤ α_V^t ‖x_*‖^2 + (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α_U^{⌊(t-h)/2⌋} α_V^h
≤ α_V^t ‖x_*‖^2 + (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α_U^{(t-h-1)/2} α_V^h
= α_V^t ‖x_*‖^2 + (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} √(α_U)^{t-1-h} α_V^h .
Using the same techniques as in (a), let α̃_max = max{√(α_U), α_V} and γ_2 = min{√(α_U)/α_V, α_V/√(α_U)} and again noting γ_2 < 1 when √(α_U) ≠ α_V, we can write:
E‖x_t - x_*‖^2
≤ α_V^t ‖x_*‖^2 + (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} √(α_U)^{t-1-h} α_V^h
= α_V^t ‖x_*‖^2 + (1+2κ_U^2) α̃_max^{t-1} (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} γ_2^h
≤ α_V^t ‖x_*‖^2 + α̃_max^{t-1} (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) (1/(1-γ_2)) .
When √(α_U) = α_V = α̃_max,
E‖x_t - x_*‖^2
≤ α_V^t ‖x_*‖^2 + (1+2κ_U^2) (‖y_*‖^2/‖V‖_F^2) ∑_{h=0}^{t-1} α̃_max^{t-1-h} α̃_max^h
(i)= α_V^t ‖x_*‖^2 + t α̃_max^{t-1} (1+2κ_U^2) ‖y_*‖^2/‖V‖_F^2 ,
where the second term of (i) goes to 0 as t goes to infinity.
This concludes the proof of the theorem.
§ EXPERIMENTS
In this section we discuss experiments done on both simulated and real data using different algorithms in different settings.
The naming convention for the remainder of the paper will be to refer to ALG1-ALG2 as an interlaced algorithm where ALG1 is the algorithm iterate used to solve subsystem (<ref>) and ALG2 is the algorithm used to solve subsystem (<ref>). When an algorithm's name is used alone, we imply applying the algorithm on the full system (<ref>).
In Figure <ref> we show our first set of experiments. Entries of U, V, and x_* are drawn from a standard Gaussian distribution. We set X = UV and b = Xx_* if X is consistent, and b = Xx_* + w, where w ∈ null(X^*) (computed in Matlab via a null-space basis of X^*), if X is inconsistent. In this first set of experiments, m,n,k ∈ {100, 150, 200} depending on the desired size of k with respect to the over or underdetermined-ness of X. For example, if k < m,n and X is overdetermined then k = 100, m = 200, and n = 150. The plots show iteration vs ℓ_2-error, ‖x_t - x_*‖^2, of each method averaged over 40 runs and allowing each algorithm to run 7 × 10^4 iterations. The layout of Figure <ref> is exactly as in Table <ref>. For each row, we have a different setting for X and, for each column, we vary the size of k with respect to m and n. Looking at the overall trends, we see that when k < m,n and when X is overdetermined, consistent and n < k < m, there is a method that obtains the optimal solution for the system. These results align with the expectations set in Table <ref>. Looking at each individual subplot, we also find what one would expect according to Table <ref>. In other words, if Uy = b or Vx = y is in one of the settings where RK or RGS are expected to fail, then RK-RK or RGS-RGS fail as well.
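The synthetic problems above can be generated along the following lines (a NumPy sketch of our reading of the Matlab setup; the projection places w in null(X^*) by removing the component of a random vector that lies in range(X)):

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 200, 100, 150
U = rng.standard_normal((m, k))
V = rng.standard_normal((k, n))
x_star = rng.standard_normal(n)
X = U @ V
b = X @ x_star                                    # consistent right-hand side
g = rng.standard_normal(m)
w = g - X @ np.linalg.lstsq(X, g, rcond=None)[0]  # component in null(X^*)
b_inconsistent = b + w
```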
When X is overdetermined, inconsistent and k < m,n, we have that V is underdetermined. In this case, we don't need to interlace iterates of REK and REK together. To work on an underdetermined system, using RK is enough to find the optimal solution of that subsystem. This motivated interlacing iterates of RK with REK. Figure <ref> has the same set up as discussed in the previous experiment with the exception of using larger random matrices with X: 1200 × 750 and k = 500. In Figure <ref> we plot iteration vs ℓ_2-error and in Figure <ref> we plot FLOPS vs ℓ_2-error. The errors are averaged over 40 runs, with shaded regions representing error within two standard deviations from the mean. Note that Algorithm <ref> performs excellently in practice, better than our theoretical upper bound as computed in Theorem <ref>. In this experiment, we see that REK-RK and REK-REK perform comparably in error and that REK-RK is more efficient in FLOPS.
In addition to simulated experiments, we also show the usefulness of these algorithms on real world data sets on wine quality, bike rental data, and Yelp reviews. In all following experiments, we plot the average ℓ_2-error at the t^th iteration over 40 runs, with shaded regions representing the ℓ_2-error within two standard deviations. In addition to empirical performance, we also plot the theoretical convergence bound derived in Theorem <ref> (labeled “BND” in the legends). From these experiments, it is clear that the algorithms perform even better in practice than the worst-case theoretical upper bound. The data sets on wine quality and bike rental data are obtained from the UCI Machine Learning Repository <cit.>. The wine data set is a sample of m=1599 red wines with n = 11 physicochemical properties of each wine. We choose k = 5 and compute U and V using Matlab's function for nonnegative matrix factorization (recall the motivations from the first section). Figure <ref> shows the results from this experiment. The conditioning of X, U, and V is κ_X^2 = 2.46 × 10^3, κ_U^2 = 25.96, and κ_V^2 = 4.20 respectively. We plot the ℓ_2-error averaged over 40 runs. Since X has such a large condition number, this impacts the convergence of REK on the full system negatively, as shown by the seemingly horizontal line (the error is actually decreasing, but incredibly slowly). We also see that REK-RK and REK-REK work comparably and significantly faster than REK alone. This can be explained by the better conditioning of U and V.
The bike data set contains hourly counts of rental bikes in a bike share system, along with date as well as weather and seasonal data. There are m = 17379 samples and n = 9 attributes per sample. We choose k = 8 and compute U and V in the same way as with the wine data set. Figure <ref> shows the results from this experiment. The conditioning of X, U, and V is κ_X^2 = 94.27, κ_U^2 = 54.91, and κ_V^2 = 2.99 respectively. Similar to Figure <ref>, we see that the convergence of REK suffers from the poorly conditioned matrix X. We also see again that REK-REK and REK-RK behave similarly and outperform REK.
To show the advantage of our algorithms on large systems, we create extremely large standard Gaussian matrices U: 10^6 × 10^3 and V: 10^3 × 10^4. These matrices are so large that the matrix product X = UV cannot be computed in Matlab due to memory constraints. These results are shown in Figure <ref>. We see that without needing to do the matrix computation, we are still able to find the solution to the linear system UVx = b.
Lastly, we present the performance of our methods on a large real world data set. We use the Yelp challenge data set <cit.>. In our setting, we let X: 10^5 × 10^4 be a document term frequency matrix where each row represents a Yelp review and each column represents a word feature. The elements of X contain the frequency at which the word is used in the review. We only use a subset of the amount of data available due to Matlab memory constraints. Here, b is a vector that represents the number of stars a review received. We choose k = 5000. Figure <ref> shows the results from this experiment using REK, REK-REK, and REK-RK. The conditioning of X, U, and V is κ_X^2 = 127.3592, κ_U^2 = 24.274, and κ_V^2 = 19.096 respectively. In this large real world data set, we can again see the usefulness of our proposed methods when we are given X = UV.
These experiments complement and verify our theoretical findings. In settings in which we expect the methods to fail to obtain the least squares or least norm solutions, our experiments show that they do indeed fail. Additionally, where we expect that the optimal solution is obtainable, the experiments show the proposed methods can obtain such solutions and in many instances outperform the original algorithm on the full system. We see that empirically, subsystems are better conditioned than full systems, thus explaining their better performance.
§ CONCLUSION
We have proposed two methods interlacing Kaczmarz updates to solve factored systems. For large-scale applications in which the system is stored in factored form for efficiency or the factorization arises naturally, our methods allow one to solve the system without the need to perform the large-scale matrix product first. Our main result proves that our methods provide linear convergence in expectation to the (least-squares or least-norm) solution of (overdetermined or underdetermined) linear systems. Our experiments support these results, and show that our methods provide significant computational advantages for factored systems. The interlaced structure of our methods suggests they can be implemented in parallel which would lead to even further computational gains. We leave such details for future work. Additional future work includes the design and analysis of methods that converge to the solution in the settings not covered in this paper, i.e. the gray cells of Table <ref>. Although its practical implications are not immediately clear to us, these may still be of theoretical interest.
§ ACKNOWLEDGMENTS
Needell was partially supported by NSF CAREER grant #1348721, and the Alfred P. Sloan Fellowship. Ma was supported in part by NSF CAREER grant #1348721, the CSRC Intellisis Fellowship, and the Edison International Scholarship. The authors would like to thank Wutao Si for pointing out a flaw in the original proof of Theorem 1. In addition, the authors also thank the Institute of Pure and Applied Mathematics (IPAM) where this collaboration started.
http://arxiv.org/abs/1701.07856v3 | 20170126194102 | A 3D model for CO molecular line emission as a potential CMB polarization contaminant | ["Giuseppe Puglisi", "Giulio Fabbian", "Carlo Baccigalupi"] | astro-ph.CO | ["astro-ph.CO", "astro-ph.GA"]
We present a model for simulating Carbon Monoxide (CO) rotational line
emission in molecular clouds, taking account of their 3D spatial
distribution in galaxies with different geometrical properties. The model
implemented is based on recent results in the literature and has been
designed for performing Monte-Carlo simulations of this emission. We
compare the simulations produced with this model and calibrate them, both
on the map level and on the power spectrum level, using the second release
of data from the Planck satellite for the Galactic plane, where the
signal-to-noise ratio is highest. We use the calibrated model to
extrapolate the CO power spectrum at low Galactic latitudes where no high
sensitivity observations are available yet. We then forecast the level of
unresolved polarized emission from CO molecular clouds which could
contaminate the power spectrum of Cosmic Microwave Background (CMB)
polarization B-modes away from the Galactic plane. Assuming realistic
levels of the polarization fraction, we show that the level of
contamination is equivalent to a cosmological signal with r ≲
0.02. The Monte-Carlo MOlecular Line Emission (MCMole3D)
package, which implements this model, is being made publicly available.
Interstellar Medium: molecules, magnetic fields, lines and bands
Cosmology: observations, cosmic background radiation
§ INTRODUCTION
The Carbon monoxide (CO) molecule is one of the most interesting molecules
present in molecular clouds within our Galaxy. Although the most abundant
molecule in Galactic molecular clouds is molecular hydrogen (H_2),
it is inconvenient to use its emission as a tracer: H_2 is difficult
to detect since it has a low dipole moment and is therefore a very
inefficient radiator. We therefore need to resort to alternative techniques
for tracing molecular clouds using rotational or vibrational transitions of
other molecules such as CO. Observations of CO emission are commonly used
to infer the mass of molecular gas in the Milky Way by assuming a linear
proportionality between the CO and H_2 densities via the CO-to-H_2
conversion factor, X_CO. A commonly accepted value for X_CO is 2× 10^20 molecules· cm^-2 (K km
s^-1)^-1, although this could vary with position in the
Galactic plane, particularly in the outer Galaxy <cit.>.
The most intense CO rotational transition lines are the J = 1→0,
2→1, 3→2 transitions at sub-millimetre wavelengths (115,
230 and 345 GHz respectively). These can usually be observed in optically
thick and thermalized regions of the interstellar medium. Traditionally, the
observations of standard ^12CO emission are complemented by
measurements of ^13CO lines. Being less abundant (few percent),
this isotopologue can be exploited for inferring the dust extinction in
nearby clouds and hence providing a better constraint for measuring the
H_2 abundance <cit.>.
However, there is growing evidence that ^13CO regions could be
associated with colder and denser environments, whereas ^12CO
emission originates from a diffuse component of molecular gas
<cit.>.
The spatial distribution of the CO line emission reaches a peak in the inner
Galaxy and is mostly concentrated close to or within the spiral arms, in a
well-defined ring, the so-called molecular ring between about 4 - 7
kpc from the Galactic centre. This property is not unique to the Milky Way but
is quite common in barred spiral galaxies (see <cit.> for
further references). The emission in the direction orthogonal to the Galactic
plane is confined within a Gaussian slab with roughly 90 pc full width half
maximum (FWHM) in the inner Galaxy getting broader towards the outer Galactic
regions, reaching a FWHM of several hundred parsecs outside the solar
circle.
In the centre of the Galaxy, we can also identify a very dense CO emission zone, rich in neutral gas and individual stars, stretching out to about 700 light
years (ly) from the centre and known as the Central Molecular Zone.
Since the 1970s, many CO surveys of the Galactic plane have been carried out
with ground-based telescopes, leading to accurate catalogues of molecular
clouds <cit.>. Usually these surveys
have observed a strip of |b|≲ 5 around the Galactic plane. At
higher Galactic latitudes (|b|>30), the low opacity regions of both gas
and dust, together with a relatively low stellar background which is useful for
spotting extinction regions, complicate the observation of CO lines making this
very challenging. In fact, only ≈100 clouds have been detected so far
in these regions.
The Planck satellite team recently released CO emission maps of the lowest
rotational lines, J = 1-0, 2-1 ,3-2 observed in the 100, 217, 353 GHz
frequency channels of the High Frequency Instrument (HFI)
<cit.>. These were sensitive enough to map the CO
emission even though the widths of these lines are orders of magnitude narrower
than the bandwidth of the frequency channels. These single frequency maps
have been processed with a dedicated foreground cleaning procedure so as to
isolate this emission. The maps were found to be broadly consistent with
the data from other CO surveys <cit.>, although
they might still be affected by residual astrophysical emissions and
instrumental systematics. In <ref>, we show the so called
Type 1 map of the CO J:1-0 line <cit.>[http://pla.esac.esa.int/pla] which will be
used in the following.
Many current and future CMB polarization experiments[For a complete
list of the operating and planned probes see e.g. <lambda.gfsc.nasa.gov>]
are designed to exploit the faint B-mode signal of CMB polarization as a
cosmological probe, in particular to constrain the physics of large scale
structure formation or the inflationary mechanism in the early universe
<cit.>. One of the main challenges in the way
of achieving these goals is the contamination of the primordial CMB signal by
diffuse Galactic emission. In this respect, the synchrotron and thermal dust
emission are known to be potentially the most dangerous contaminants, because
they are intrinsically polarized. In fact, several analyses conducted on Planck
and Wilkinson Microwave Anisotropy Probe (WMAP) data from intermediate and high
Galactic latitudes at high <cit.> and low frequencies
<cit.> showed that these emissions are
dangerous at all microwave frequencies and locations on the sky (even if far
from the galactic plane), confirming early studies using the WMAP satellite
<cit.>.
Appropriate observations, theoretical investigations, and modelling of all polarized foreground emissions at sub-mm frequencies are
therefore crucial for the success of future experiments. As these will observe
at frequencies overlapping with the CO lines, unresolved CO line emission could
significantly contaminate these measurements as well.
CO lines are in fact expected to be polarized at the percent level or below
<cit.> because of interaction of the magnetic moment of the molecule
with the Galactic magnetic field. This causes the so-called Zeeman
splitting of the rotational quantum levels J into the magnetic sub-levels
M which are intrinsically polarized. Moreover, if molecular clouds are
somehow anisotropic (e.g when in the presence of expanding or collapsing envelopes
in star formation regions) or are asymmetric, population imbalances of the M
levels can arise. This leads to different line intensities depending on the
directions (parallel or perpendicular to the magnetic field) and to a net
linearly polarized emission. <cit.> detected polarization in five
star-forming regions near to the Galactic Centre while observing
the CO lines J=2-1, 3-2 and the J=2-1 of the isotopologue
^13CO. The degree of polarization ranged from 0.5
to 2.5 %. Moreover, the deduced magnetic field direction was
found to be consistent with previous measurements coming from dust polarimetry,
showing that the polarized CO emission could become a sensitive tracer of
small-scale Galactic magnetic fields.
The goal of this paper is to propose a statistical 3D parametric model of CO
molecular cloud emission, in order to forecast the contamination of CMB signal
by this, including in the polarization. Being able to perform statistical
simulation of this emission is crucial for assessing the impact of foreground residual uncertainties on cosmological constraints coming from the CMB. In addition, the capability of modeling the Galactic foreground emission in its full complexity taking into account line-of-sight effects is becoming necessary in light of the latest experimental results and the expected level of sensitivity for the future experiments <cit.>. In <ref> we present the assumptions made for building the model and the simulation pipeline for its implementation. In
<ref> we describe the methodology for calibrating the CO
simulations to match observations.
In <ref> we show how the parameters describing molecular cloud
distribution shape the angular power spectrum of CO emission. Finally, in
<ref> we forecast the expected level of polarized CO
contaminations for the B-modes at high Galactic latitudes using our
calibrated simulation of <ref> to infer statistically the
emission at high Galactic latitude, where current observations are less
reliable.
§ BUILDING A STATISTICAL 3D CO EMISSION MODEL
In order to build an accurate description of CO emission in the Galaxy, we
collected the most up to date astrophysical data present in the literature
concerning the distribution of molecular gas as a function of the Galactic
radius (R) and the vertical scale of the Galactic disk (z) as well as of
the molecular size and the mass function. The model has been implemented in a
Python package named MCMole3D[https://github.com/giuspugl/MCMole3D]
which is being made publicly available, and we present details of it in this
Section[In the following we will refer to this model as the MCMole3D model for the sake of clarity.]. The model builds on and extends the method
proposed by <cit.> who conducted a series of analyses
distributing statistically a relative large number of molecular cloud objects
according to the axisymmetric distribution of H_2 observed in the
Galaxy <cit.>.
§.§ CO cloud spatial distribution
As mentioned in the introduction, the CO emission is mostly concentrated around the molecular ring. We have considered and implemented two different spatial distributions of the molecular clouds: an axisymmetric ring-shaped one and one with 4 spiral arms, as shown in <ref>(b) and (a) respectively.
The first is a simplified model and is parametrized by R_ring and σ_ring, which are the radius and the width of the molecular ring respectively.
On the other hand, the spiral arm distribution is in principle closer to the symmetry of our Galaxy and is therefore more directly related to observations. The distribution is described by two more parameters than for the axisymmetric case: the arm width and the spiral arm pitch angle. For the analysis conducted in the following sections, we fixed the value of the pitch angle to be i ∼ -13 following the latest measurements of <cit.> and fixed the arm half-width to be 340 pc <cit.>.
<cit.> found that the vertical profile of the CO emissivity can be optimally described by a Gaussian function of z centred on z_0 and having a half-width z_1/2 from the Galactic plane at z=0. Both of the parameters z_0 and z_1/2 are in general functions of the Galactic radius R (see <cit.> for recent measurements).
Since we are interested in the overall distribution of molecular clouds mainly in regions close to the Galactic plane, where data are more reliable, we adopted this parametrization but neglected the effects of the mid-plane displacement z_0 and set it to a constant value z_0=0, following <cit.>. The vertical profile is then parametrized just by z_1/2 and mimics the increase of the vertical thickness scatter that is observed when moving from the inner Galaxy towards the outer regions:
z_1/2(R) ∝ σ_z(R) = σ_z,0 cosh(R/h_R),
where σ_z,0 = 0.1 kpc and h_R = 9 kpc corresponds to the radius where the vertical thickness starts increasing. The half-width z_1/2 is related to σ_z through the usual relation z_1/2 = √(2 ln 2) σ_z.
The final vertical profile is then:
P(z; R) = 1/(√(2π) σ_z(R)) exp[-(z/(√(2) σ_z(R)))^2].
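In code, drawing the z-coordinate of a cloud therefore amounts to sampling a Gaussian whose width grows with R. The sketch below is our illustration (treating σ_z,0 and h_R in kpc, as assumed above):

```python
import numpy as np

def sample_z(R, rng, sigma_z0=0.1, h_R=9.0):
    """Draw z (kpc) from a Gaussian of width sigma_z(R) = sigma_z0 * cosh(R / h_R)."""
    return rng.normal(loc=0.0, scale=sigma_z0 * np.cosh(R / h_R))
```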
§.§ CO cloud emission
The key ingredients for modeling the molecular cloud emission are the dimension of the cloud and its typical emissivity. We assume an exponential CO emissivity profile which is a function of the Galactic radius following <cit.>:
ϵ_0(R) = ϵ_c exp(-R/R_em),
where ϵ_c is the typical emissivity of a particular CO line observed towards the centre of the Galaxy and R_em the scale length over which the emissivity profile changes. Clouds observed in the outer Galaxy are in fact dimmer.
We then assume a distribution of cloud sizes ξ(L) defined by their typical size scale L_0, the range of sizes [L_min, L_max] and two power laws with spectral indices <cit.>
ξ(L) = dn/dL ∝ L^0.8 for L_min < L < L_0, and ∝ L^{-α_L} for L_0 < L < L_max,
with α_L= 3.3, 3.9 for clouds inside or outside the solar circle respectively. From the cloud size function ξ(L) we derive the corresponding probability 𝒫(L) of having clouds with sizes smaller than L:
𝒫(<L)=∫_L_min ^L d L'ξ(L') .
The probability functions for different choices of the spectral index α_L are shown in <ref>.
We then inverted <ref> to get the cloud size L(p) associated with a given probability p, where p is drawn from a uniform distribution in [0,1]. The histograms of the sizes generated following this probability function are shown in the top panel of <ref> and are peaked around the most typical size L_0. In the analysis presented in the following, L_0 is considered as a free parameter.
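The inversion can be carried out numerically, for instance by tabulating 𝒫(<L) on a grid and interpolating. The following sketch is our illustration (the grid resolution and the default size range are assumptions), with the two power laws matched at L_0 so that ξ(L) is continuous:

```python
import numpy as np

def sample_sizes(n, rng, L_min=0.3, L_max=60.0, L0=20.0, alpha_L=3.3):
    """Draw n cloud sizes (pc) from the broken power law xi(L) by inverse CDF."""
    L = np.logspace(np.log10(L_min), np.log10(L_max), 2048)
    xi = np.where(L < L0, L ** 0.8, L0 ** (0.8 + alpha_L) * L ** (-alpha_L))
    cdf = np.cumsum(xi * np.gradient(L))
    cdf /= cdf[-1]                       # normalized P(<L) in [0, 1]
    return np.interp(rng.uniform(size=n), cdf, L)
```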
Finally, we assume a spherical shape for each of the simulated molecular clouds once they are projected on the sky. However, we implemented different emissivity profiles that are functions of the distance from the cloud centre, such as Gaussian or cosine profiles. These are particularly useful because, by construction, they give zero emissivity at the boundaries[For the Gaussian profile, we set σ in order to have the cloud boundaries at 6σ, i.e. where the Gaussian function is zero to numerical precision.] and the maximum of the emissivity in the centre of the projected cloud on the sky. This not only mimics a decrease of the emission towards the outer regions of the cloud, where the density decreases, but also allows us to minimize numerical artifacts when computing the angular power spectrum of the simulated maps (see <ref>). An abrupt top-hat transition at the boundary of each cloud would in fact cause ringing effects that could bias the estimate of the power spectrum.
§.§ Simulation procedure
The model outlined in the previous Section enables statistical simulations of CO emission in our Galaxy to be performed for a given set of free parameters Θ^CO that can be set by the user:
Θ^CO= { N_clouds, ϵ_c, R_em, R_ring, σ_ring,
σ_z,0, h_R, L_min,L_max, L_0}.
The values chosen for our analysis are listed in <ref>.
For each realization of the model, we distribute by default 40,000 clouds within our Galaxy. This number is adopted for consistency with observations when observational cuts are applied (for further details see <cit.>). The product of each simulation is a map, similar to the one in <ref>, in the Hierarchical Equal Area Latitute Pixelization (HEALPIX, <cit.> ) [http://healpix.sourceforge.net] pixelization scheme including all the simulated clouds as seen by an observer placed in the solar system. This map can be smoothed to match the resolution of a specific experiment and/or convolved with a realistic frequency bandwidth. When we compare with the maps described in <ref>, we convolve the simulated maps to the beam resolution of the 100 GHz channel (∼ 10 arcmin).
The procedure implemented for each realization is the following:
* assign the (R_gal,ϕ,z) Galacto-centric positions. In particular:
* R_gal is extracted from a Gaussian distribution defined by the R_ring and σ_ring parameters. However, the σ_ring is large enough to give non-zero probability at R_gal≤ 0. All of the negative values of R_gal are either automatically set to R_gal=0 (axisymmetric case), or recomputed extracting new positive values from a normal distribution centred at R=0 and with the r.m.s given by the scale of the Galactic bar (spiral-arm case). This choice allows us to circumvent not only the issue of negative values of R_gal due to a Gaussian distribution, but also to produce the high emissivity of the Central Molecular zone (see <cit.> for a similar approach).
* the z-coordinate is drawn randomly from the distribution in <ref>.
* the azimuth angle ϕ is computed from a uniform distribution ranging over [0,2π ) in the case of the axial symmetry. Conversely, in the case of spiral arms, ϕ follows the logarithmic spiral polar equation
ϕ(R)= A logR +B ,
where A=(tan i)^-1 and B=-log R_bar are, respectively, functions of the mean pitch angle and the starting radius of the spiral arm. In our case we set i=-12 deg, R_bar=3 kpc (a condensed sketch of this sampling step is given after this list);
* assign cloud sizes given the probability function 𝒫(L) (<ref>);
* assign emissivities to each cloud from the emissivity profile (see <ref>);
* convert (R_gal,ϕ,z) positions into the heliocentric coordinate frame (ℓ,b,d_⊙).
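The sketch below condenses steps (i) and (iv) for the axisymmetric geometry (an illustration only: the solar radius R_⊙ = 8.5 kpc and the sign convention for ℓ are our assumptions, and the MCMole3D package should be consulted for the reference implementation):

```python
import numpy as np

def sample_positions(n, rng, R_ring=4.5, sigma_ring=2.5, R_sun=8.5):
    """Draw galactocentric (R, phi, z) in kpc and convert to (l, b, d_sun)."""
    R = np.clip(rng.normal(R_ring, sigma_ring, n), 0.0, None)   # step (i)(a)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)                      # step (i)(c)
    z = rng.normal(0.0, 0.1 * np.cosh(R / 9.0))                 # step (i)(b)
    # Step (iv): heliocentric conversion with the Sun at (R_sun, 0, 0).
    dx = R * np.cos(phi) - R_sun
    dy = R * np.sin(phi)
    d = np.sqrt(dx ** 2 + dy ** 2 + z ** 2)
    l = np.arctan2(dy, -dx)              # l = 0 towards the Galactic centre
    b = np.arcsin(z / d)
    return l, b, d
```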
In <ref> we show an example of the 3D distribution of the emission as well as the distribution of the location of the simulated clouds using both of the geometries implemented in the package.
§.§ Simulation results
In <ref> we show two typical realizations of maps of CO emission for the axisymmetric and spiral-arm geometries prior to any smoothing. As we are interested in the statistical properties of the CO emission, we report a few examples of the angular power spectrum 𝒞_ℓ corresponding to different distributions of CO emission in <ref>. In the plots shown subsequently the spectra are 𝒟_ℓ, which encode the normalization factor 𝒟_ℓ = ℓ(ℓ+1)𝒞_ℓ/2π.
We can observe two main features in the morphology of the power spectrum: a bump around ℓ∼ 100 and a tail at higher ℓ. We interpret both of these features as the projection of the distribution of clouds from a reference frame off-centred (on the solar circle).
The bump reflects the angular scale (∼ 1 deg) related to the clouds which have the most likely size, parametrized by the typical size parameter, L_0, and which are close to the observer. On the other hand, the tail at ℓ≳ 600 (i.e. the arcminute scale) is related to the distant clouds which lie in the diametrically opposite position with respect to the observer. This is the reason why the effect is shifted to smaller angular scales.
The L_0 and σ_ring parameters modify the power spectrum in two different ways. For a given typical size, if the width of the molecular ring zone σ_ring increases, the peak around ℓ∼ 100 shifts towards lower multipoles, i.e. larger angular scales, and its amplitude increases proportionally to σ_ring, see for instance the bottom right panel in <ref>.
This can be interpreted as corresponding to the fact that the larger is σ_ring, the more likely it is to have clouds closer to the observer at the solar circle with a typical size given by L_0.
On the other hand, if we choose different values for the size parameter (left panels in <ref>) the tail at small angular scales moves downwards and flattens as L_0 increases. Vice versa, if we keep L_0 constant (<ref> bottom right panel), all of the tails have the same amplitude and an ℓ^2 dependency. In fact, if L_0 is small, the angular correlation of the simulated molecular clouds looks very similar to the one of point sources which has Poissonian behaviour.
Conversely if the typical size increases, the clouds become larger and they behave effectively as a coherent diffuse emission and less as point sources.
Far from the Galactic plane, the shape of the power spectrum is very different. In <ref> we show an example of the average power spectrum of 100 MC realizations of CO emission at high Galactic latitudes, i.e. |b|>30 deg, for both the axisymmetric and spiral-arm geometries. For this run we choose the so-called best fit values for the L_0 and σ_ring parameters discussed later in <ref>. In addition to the different shape depending on the assumed geometry, one can notice a significant amplitude difference with respect to the power spectrum at low latitudes. Moreover, this is in contrast with the trend observed in the Galactic plane, where the spiral-arm geometry tends to predict a power spectrum of higher amplitude. In both cases, however, the model suppresses the emission in these areas, as shown in <ref>. In the spiral-arm case, the probability of finding clouds in regions in between spiral arms is further suppressed and could explain this feature. The emission is dominated by clouds relatively close to the observer for both geometries, and so the angular correlation is mostly significant at large angular scales (of the order of a degree or more) and is damped rapidly at small angular scales.
§ COMPARISON WITH PLANCK DATA
§.§ Dataset
The Planck collaboration released three different kinds of CO molecular line emission maps, described in <cit.>. We decided to focus our analysis on the so-called Type 1 CO maps, which have been extracted exploiting differences in the spectral transmission of a given CO emission line in all of the bolometer pairs relative to the same frequency channel.
Despite being the noisiest set of maps, Type 1 are in fact the cleanest maps in terms of contamination coming from other frequency channels and astrophysical emissions. In addition, they have been obtained at the native resolution of the frequency channels, and so allow full control of the effective beam window function for each map.
For this study we considered in particular the CO 1-0 line, which has been observed in the 100 GHz channel of the HFI instrument. This channel is in fact the most sensitive to the CO emission in terms of signal-to-noise ratio (SNR) and the 1-0 line is also the one for which we have the most detailed external astrophysical observations.
However, the frequency bands were designed to observe the CMB and foreground emissions which gently vary with frequency and, thus, they do not have the spectral resolution required to resolve accurately the CO line emission.
To be more quantitative, the spectral response at 100 GHz is roughly 3 GHz, which corresponds to ∼ 8000 km s^-1, i.e. about 8 orders of magnitude larger than the CO rotational line width (which can be easily approximated as a Dirac delta). Therefore, the CO emission observed by along each line of sight is integrated over the whole channel frequency band. Further details about the spectral response of the HFI instrument can be found in <cit.>.
§.§ Observed CO angular power spectrum
Since one of the goals of this paper is to understand the properties of diffuse CO line emission, we computed the angular power spectrum of the Type 1 1-0 CO map to compare qualitatively the properties of our model with the single realization given by the emission in our Galaxy. We distinguish two regimes of comparison, low Galactic latitudes (|b|≤30) and high Galactic latitude (|b|>30). While at low Galactic latitudes the signal is observed with high sensitivity, at high latitudes it is substantially affected by noise and by the fact that the emission in this region is faint due to its low density with respect to the Galactic disk.
In <ref> we show the angular power spectra of the first three CO rotational line maps observed by as well as the expected noise level at both high and low Galactic latitudes computed using a pure power spectrum estimator <cit.>. This is a pseudo power spectrum method <cit.> which corrects the so called E-to-B-modes leakage in the polarization field that arises in the presence of incomplete sky coverage <cit.>. Although this feature is not strictly relevant for the analysis of this section, because we are considering the unpolarized component of the signal, it is important for the forecast presented in <ref>. We estimated the noise as the mean of 100 MC Gaussian simulations based on the the diagonal pixel-pixel error covariance included in the maps. One may notice how the noise has a level comparable to that of the CO power spectrum at high Galactic latitude. However, we note that the released Type 1 maps are obtained from the full mission data from Planck, and not from subsets of the data (e.g. using the so called half-rings or half-mission splits). Thus, it was not possible to test whether the observed flattening of the power spectrum at large angular scale is due to additional noise correlation not modelled by the Gaussian uncorrelated model discussed above. We notice that, if these maps were present, we could have had an estimate of this correlation using the noise given by the difference between the map auto-spectra and the noise-bias free signal obtained from the cross-spectra of the maps from data subsets. Since even for the 1-0 line, the noise becomes dominant on scales ℓ≈ 20 we decided to limit the comparison at low Galactic latitude where the signal to noise ratio is very high.
We note that in the following we considered the error bars on the power spectrum as coming from the gaussian part of the variance, i.e., following <cit.>
ΔC̃_ℓ = √(2/ν)(C_ℓ + N_ℓ)
where ν is the number of degrees of freedom taking into account the finite number of modes going into the power spectrum calculation in each ℓ mode and the effective sky coverage. N_ℓ represents the noise power spectrum and the C_ℓ is the theoretical model describing the CO angular power spectrum with the tilde denoting measured quantities. Because we do not know the true CO theoretical power spectrum we assumed that C_ℓ + N_ℓ = C̃_ℓ. The gaussian approximation however underestimates the error bars. The CO field is in fact a highly non-gaussian field with mean different from zero. As such, its variance should contain contributions coming from the expectation value of its 1 and 3 point function in the harmonic domain that are zero in the gaussian approximation. These terms are difficult to compute and we considered the gaussian approximation sufficient for the level of accuracy of this study.
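Schematically, the noise term and the error bars can be obtained as follows (a simplified healpy-based sketch: we use the standard anafast estimator with an f_sky correction rather than the pure estimator adopted in the analysis, and sigma_pix denotes the per-pixel noise rms from the released error maps):

```python
import numpy as np
import healpy as hp

def noise_spectrum(sigma_pix, mask, nside, n_mc=100, lmax=1024, seed=0):
    """Mean pseudo power spectrum of Gaussian pixel-noise realizations."""
    rng = np.random.default_rng(seed)
    fsky = mask.mean()
    cls = [hp.anafast(rng.standard_normal(hp.nside2npix(nside)) * sigma_pix * mask,
                      lmax=lmax) / fsky for _ in range(n_mc)]
    return np.mean(cls, axis=0)

def gaussian_errorbars(cl_measured, nu):
    """Delta C_ell = sqrt(2/nu) (C_ell + N_ell), with C + N ~ the measured spectrum."""
    return np.sqrt(2.0 / nu) * cl_measured
```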
As can be seen in <ref>, all of the power spectra of CO emission at low Galactic latitudes have a broad peak around the multipole 100÷ 300, i.e. at the ≈1 angular scale. The signal power starts decreasing up to ℓ∼ 600 and then grows again at higher ℓ due to the Planck instrumental noise contamination.
Such a broad peak suggests that there is a correlated angular scale along the Galactic plane. This can be understood with a quick order of magnitude estimate. If we assume that most of the CO emission is localized at a distance of 4 kpc (in the molecular ring) and molecular clouds have a typical size of 30 pc, we find that each cloud subtends a ∼ 0.5 deg area in the sky. This corresponds to a correlated scale in the power spectrum at an ℓ of the order of a few hundred but the detail of this scale depends on the width of the molecular ring zone.
§.§ Galactic plane profile emission comparison
As a first test we compared the profile of CO emission in the Galactic plane predicted by the model and the one observed in the data. Since we are mostly interested in a comparison as direct as possible with the observed data, we convolved the maps with a Gaussian beam of 10 arcmin FWHM, corresponding to the nominal resolution of the 100 GHz channel of HFI, prior to any further processing.
In order to compare the data and the simulations, we constrained the total flux of the simulated CO maps with the one observed in the data. This is necessary, otherwise the emission would be directly proportional to the number of clouds distributed in the simulated Galaxy. Such a procedure also breaks possible parameter degeneracies with respect to the amplitude of the simulated power spectra (see next section). Following <cit.>, we therefore computed the integrated flux of the emission along the two Galactic latitudes and longitudes (l, b) defined as
I^X(l)=∫ db I^X(l, b),
I_tot^X=∫ dl db I^X(l,b),
where X refers both to the model and to the observed CO map. We then rescaled the simulated maps by the factor f defined as:
f=I_tot^observ/I_tot^model.
We estimated the integrals in <ref> and <ref> by considering a narrow strip of Galactic latitudes within [-2,2] degrees. We found that the value of f is essentially independent of the width of the Galactic latitude strip used to compute the integrals because most of the emission comes from a very thin layer along the Galactic plane of amplitude |b|≲ 2 deg.
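In code, the normalization amounts to comparing integrated fluxes within the strip (a healpy sketch with names of our choosing; the two maps are assumed to share resolution and units):

```python
import numpy as np
import healpy as hp

def strip_flux(co_map, nside, b_cut_deg=2.0):
    """Flux integrated over |b| <= b_cut: sum of I times the pixel area."""
    _, lat = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)), lonlat=True)
    return co_map[np.abs(lat) <= b_cut_deg].sum() * hp.nside2pixarea(nside)

def rescale_model(model_map, observed_map, nside):
    f = strip_flux(observed_map, nside) / strip_flux(model_map, nside)
    return f * model_map
```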
In <ref> we show the comparison between I^observ(l) and I^model(l) as defined in <ref>, computed as the mean of 100 MC realizations of galaxies populated by molecular clouds for both the axisymmetric and spiral-arm models, as well as their typical standard deviation. In particular, we chose for these simulations the default parameters in <ref>.
The emission profiles are quite consistent in the regions from which most of the CO emission comes, i.e. in the inner Galaxy, the I and the IV quadrants (longitude in [ -90 ,90 ] deg[We stress that the definition of quadrants comes from the Galactic coordinates centred on the Sun. The I and IV quadrants are related to the inner Galaxy, while the II and the III ones look at its outer regions.]). On the contrary, the emission in the other two quadrants looks to be under-estimated but within the scatter of the simulations. In fact, the observed emissions in both the II and III quadrants come mainly from the closer and more isolated system of clouds. These are actually more difficult to simulate because in that area (at Galactic longitudes |l|>100 deg) the presence of noise starts to be non-negligible (see shaded blue in <ref>).
In addition, we note that the bump in the profile at l ≃ 60 - 70 deg, where we see a lack of power in both the axisymmetric and spiral-arm cases, corresponds to the complex region of Cygnus-X, which contains the very well known X-ray source Cyg X-1, massive protostars and one of the most massive molecular clouds known, 3× 10^6 M_⊙, 1.4 kpc distant from the Sun.
Given the assumptions made in <ref>, these large and closer clouds are not easy to simulate with MCMole3D, especially where they are unlikely to be found, as in inter-arm regions.
Despite this, one can notice an overall qualitatively better agreement with observations for the spiral-arm model than for the axisymmetric one. The latter reconstructs the global profile very well, but the former contains more peculiar features such as the central spike due to the Central Molecular Zone within the bar, or the complexes of clouds at longitudes around ∼ -140, -80, 120 deg. We will perform a more detailed comparison of the two geometries in the following section and in <ref>.
§.§ Constraining the model with Planck data
After comparing the CO emission profiles, we checked whether the model is capable of reproducing the characteristic shape of the Planck CO angular power spectrum. Given the knowledge we have of the shape of the Milky Way, we decided to adopt the geometry as a baseline for this comparison, and to fix the parameters for the specific geometry to the values describing the shape of our Galaxy (see <ref>). For the sake of completeness, we report the results of the same analysis adopting an geometry in <ref>.
We left the typical cloud size L_0 and σ_ring (the width of the molecular ring) as free parameters of the model. While the former is directly linked to the observed angular size of the clouds, the role of the latter is less trivial, especially if we adopt the more realistic 4-spiral-arm distribution. Intuitively, it changes the probability of observing more clouds closer to the observer, and mostly affects the amplitude of the power on the larger angular scales.
We defined a large interval, reported in <ref>, where L_0 and σ_ring are allowed to vary. Looking at the series of examples reported in <ref>, we can see that suitable parameter ranges which yield power spectra close to the observations are L_0 = 10-30 pc and σ_ring = 2-3 kpc. It is interesting to note that these are in agreement with estimates available in the literature (see e.g. <cit.>).
We then identified a set of values within the intervals just mentioned for which we computed the expected theoretical power spectrum of the specific model. Each theoretical model is defined as the mean of the angular power spectrum of 100 MC realizations of the model computed with . For each realization of CO distribution we rescaled the total flux following the procedure outlined in the previous section before computing its power spectrum.
Once the expected angular power spectra for each point of the parameter domain had been computed, we built the hyper-surface ℱ(ℓ; σ_ring, L_0) which, for a given set of values (σ_ring, L_0), retrieves the model power spectrum by interpolating it from its values at the closest grid points using splines. We checked that alternative interpolation methods did not impact our results significantly. We then computed the best-fit parameters of the model by performing a χ^2 minimization with the Planck CO power spectrum data. For this procedure we introduced a further global normalization parameter A_CO to take into account the bandpass effects or other possible miscalibrations of the model. These might come either from variations from the scaling laws employed in the model (that are thus not captured by the total flux normalization described earlier), or from calibration differences between the data and the surveys used to derive the scaling laws themselves. The bandpass effects tend to decrease the overall amplitude of the simulated signal because each line gets diluted over the width of the frequency band.
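The fitting scheme just described can be condensed as follows; the parameter grid, the precomputed mean MC spectra Cl_grid and the data vectors cl_data, cl_err are placeholders (filled here with dummy values so that the sketch runs), and the SciPy spline and Nelder-Mead minimizer stand in for whichever interpolation and minimization routines are preferred.

```python
# Grid interpolation of the hyper-surface F(ell; sigma_ring, L0) plus chi^2 fit.
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

sigma_grid = np.linspace(1.0, 4.0, 7)    # kpc, illustrative sampling
L0_grid = np.linspace(5.0, 40.0, 8)      # pc
ells = np.arange(2, 401)                 # fit restricted to ell <= 400
Cl_grid = np.random.rand(sigma_grid.size, L0_grid.size, ells.size)  # dummy MC means
cl_data = np.random.rand(ells.size)                                 # dummy data
cl_err = np.full(ells.size, 0.1)                                    # dummy errors

# One 2-D spline per multipole.
splines = [RectBivariateSpline(sigma_grid, L0_grid, Cl_grid[:, :, k])
           for k in range(ells.size)]

def model_cl(sigma_ring, L0):
    return np.array([s(sigma_ring, L0)[0, 0] for s in splines])

def chi2(p):
    sigma_ring, L0, A_CO = p             # A_CO: global normalization
    r = cl_data - A_CO * model_cl(sigma_ring, L0)
    return np.sum((r / cl_err) ** 2)

best = minimize(chi2, x0=[2.5, 15.0, 1.0], method="Nelder-Mead")
```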
Since the theoretical model has been estimated from Monte Carlo simulations, we added linearly to the sample variance error of the data an additional uncertainty budget corresponding to the uncertainty of the mean theoretical power spectrum estimated from MC. We note that when we compute the numerator of the f rescaling factor, we include not only the real flux coming from the CO lines but also an instrumental noise contribution. We therefore estimated the expected noise contribution to f by computing the integral of <ref> on the error map and found it to be equal to 10%. We propagated this multiplicative uncertainty to the power spectrum level, rescaling the mean theoretical MC error bars by the square of this factor.
We limited the range of angular scales involved in the fit to ℓ≤400, in order to avoid the region that displays an unusual bump at scales of around ℓ≈ 500, which is not captured by any realization of our model (see next section). The best-fit parameters are reported in <ref>:
L_0 = 14.50 ± 0.58 pc,
σ_ring= 2.76 ± 0.19 Kpc,
A_CO = 0.69 ± 0.06 .
The values are within the ranges expected from the literature. As can be seen in <ref>, the power spectrum corresponding to the model with the best-fit parameters describes the data reasonably well. The minimum χ^2 obtained by the minimization process is 1.48, corresponding to a p-value of 13%. We note, however, that all of the parameters are highly correlated. This is somewhat expected, as the larger σ_ring is, the closer the clouds get to the observer placed on the solar circle. This effect can be compensated by an overall decrease of the typical size of the molecular clouds, as shown in <ref>(d).
Finally, we note that A_CO≲ 1 suggests that, despite the rescaling procedure constraining the overall power spectrum amplitude quite well, the spatial distribution seems to be more complex than the one implemented in the model. This might partially be explained by the fact that we do not model explicitly any realistic bandpass effect of the channel or the finite width of the CO line. Additional sources of signal overestimation could be residual contamination by the ^13CO 1-0 line or thermal dust in the map, or variations of the emissivity profile in <ref>.
§.§ Consistency checks on other maps
The collaboration released multiple CO maps extracted using different component separation procedures.
We can test the stability of our results by using CO maps derived with these different approaches, in particular the so-called Type 2 maps. These have been produced by exploiting the intensity maps of several frequencies (multi-channel approach) to separate the CO emission from the astrophysical and CMB signals <cit.>. The maps are smoothed to a common resolution of 15 arcmin and have a better S/N ratio than the Type 1 ones. However, the CO is extracted under several simplifying assumptions, which may result in contamination from foreground residuals and systematics, as explained in <cit.>.
We repeated the procedure outlined in <ref> and <ref> using the Type 2 1-0 map. The values of the best fit parameters are summarized in <ref> and we show in <ref> the best-fit model power spectrum together with the power spectrum of Type 2 map data. We found that the values of A_CO obtained for Type 2 are inconsistent with the one obtained for the Type 1 maps. However, this discrepancy is consistent with the overall inter calibration difference between the two maps reported in <cit.>.
Such differences are mainly related to a combination of bandpass uncertainties in the observations and presence of a mixture of ^12 CO and ^13 CO (emitted at 110 GHz) lines for the Type 1 maps.
While σ_ring is consistent between the two maps, the Type 2 L_0 parameter is in slight tension at the 2.7σ level. The overall correlation of the parameters is increased and the overall agreement between data and model is reduced, although it remains acceptable. We cannot exclude, however, that this is a sign of additional systematic contamination in the Type 2 maps.
The collaboration provided maps of the 2-1 line for both of the methods, and we could use our model to constrain the relative amplitudes of the lines while fixing the parameters of the cloud distribution. However, such an analysis is challenging and might be biased by the presence of variations of the local physical properties of the clouds (opacity and temperature) or by the red- or blueshift of the CO line within the bandpass induced by the motion of the clouds themselves <cit.>. For these reasons, we decided to restrict our analysis to the CO 1-0 line only, since it is the one for which the observational data are most robust.
We finally note that the observed angular power spectra of the maps display an oscillatory behaviour at a scale of ℓ≥ 400 with a clear peak at around ℓ≈ 500. The fact that this feature is present in all of the lines and for all of the CO extraction methods means that it can reasonably be considered as a meaningful physical signature. Because a single cloud population produces an angular power spectrum with a characteristic peak scale, we speculate that this could be the signature of the presence of an additional cloud population with a different typical size or location. We however decided to leave the investigation of this feature for a future work.
§.§ Comparison with data at high Galactic latitudes
In <ref>, we compare the CO 1-0 power spectrum at high Galactic latitudes with the average power spectrum of 100 MC realizations of the model for the same region of the sky. We assumed for these runs the best-fit values of the L_0, σ_ring parameters reported in <ref> and a distribution. Because the maps at these latitudes are dominated by noise, we subtracted our MC estimates of the noise bias data power spectrum so as to have a better estimate of the signal (blue circles).
As can be observed in <ref>, some discrepancy arises when comparing the power spectrum expected from the simulation of at high Galactic latitudes with the noise debiased data. This is rather expected because the model has larger uncertainties at high Galactic latitudes than in the Galactic mid-plane (where the best-fit parameters are constrained) given the lack of high sensitivity data. The discrepancy seems to point to an overestimation of the vertical profile parameters σ_z,0 and h_R (see <ref>) which gives a higher number of clouds close to the observer at high latitude.
However, we also point out that, as explained in <ref>, the error bars in <ref> might be underestimated especially at the largest angular scales where we are signal dominated. Therefore, discrepancies of order ≈ 3σ do not seem unlikely.
Since we are mainly interested in using the model to forecast the impact of unresolved CO emission far from the Galactic plane (|b|>30), we investigated whether removing the few high Galactic latitude clouds in the simulation that appear close to the observer would improve the agreement with the data. All of these clouds have, in fact, a flux exceeding the CO map noise in the same sky area and they should have already been detected in real data. We will refer to this specific choice of cut as the High Galactic Latitudes (HGL) cut in the following. The power spectrum of the simulated maps obtained after the application of the HGL cut is shown in <ref>.
We found that the model calibrated at low latitudes and after the application of the HGL-cut agrees very well with the data on the angular scales where the signal slightly dominates, i.e. ℓ≲ 80. We could not extend the comparison to smaller angular scales because the data become noise dominated and the residual increase of power observed on the power spectrum is dominated by a noise bias residual.
§ POLARIZATION FORECASTS
As noted in <ref>, CO lines are polarized and could contaminate sensitive CMB polarization measurements together with other polarized Galactic emission (synchrotron and the thermal dust) at sub-millimeter wavelengths.
Future experiments will preferentially perform observations at intermediate and high Galactic latitudes, to minimize contamination from strong Galactic emissions close to the plane. Since CO data at high Galactic latitudes are not sensitive enough to perform accurate studies of this emission, we provide two complementary estimates of the possible contamination from its polarized counterpart to the CMB B-mode power spectrum in this sky region.
§.§ Data-based order of magnitude estimate
Starting from the measured power spectrum at low Galactic latitudes, one can extrapolate a very conservative value of the CO power spectrum at higher latitudes. Assuming that all of the variance observed in the high Galactic latitude region is distributed among the angular scales in the same way as in the Galactic plane, we can write
𝒞_ℓ ^high, CO= 𝒞_ℓ ^Galvar(high)/var(Gal) .
This is a somewhat conservative assumption because we know that the bulk of the CO line emission is concentrated close to the Galactic disk and also because it assumes that the noise at high Galactic latitudes is diffuse CO emission. The variance of the CO map is 0.3 K^2 (km s^-1)^2, at |b|>30 deg, while for |b|<30 deg we get a variance of 193.5 K^2 (km s^-1)^2.
Taking 1% as the polarization fraction, p_CO, of the CO emission and an equal power in E and B-modes of polarized CO, we can convert 𝒞_ℓ^CO high into its B-mode counterpart as 𝒞_ℓ ^CO high, EE=𝒞_ℓ ^CO high, BB= 𝒞_ℓ^high, COp_CO^2/2. We then apply the conversion factors of <cit.> to convert the CO power spectrum into thermodynamic units (from K_RJ km s^-1 to μK). We can compare 𝒞_ℓ ^CO high, BB to the amplitude of equivalent cosmological CMB inflationary B-modes with tensor-to-scalar ratio r=1 at ℓ=80. In terms of 𝒟^BB_ℓ, this is equal to ∼ 6.67 × 10^-2μK^2 for a fiducial Planck 2015 cosmology. We found that the amplitude of the extrapolated CO B-mode power spectrum is equal to a primordial B-mode signal having r_CO =0.025.
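The arithmetic of this order-of-magnitude estimate can be summarized as follows; D_ell80_gal and the unit conversion factor to_muK are placeholders for the Galactic-plane band power and the conversion of <cit.>, and only the numbers quoted in the text are real inputs.

```python
# Data-based extrapolation of the CO B-mode contamination at high latitudes.
var_high, var_gal = 0.3, 193.5      # map variances, K^2 (km/s)^2
p_CO = 0.01                         # assumed polarization fraction
D_BB_r1 = 6.67e-2                   # muK^2: r = 1 primordial BB at ell = 80

D_ell80_gal = 1.0                   # placeholder Galactic-plane band power
to_muK = 1.0                        # placeholder unit conversion of <cit.>

# Rescale by the variance ratio and split the power equally between E and B.
D_BB_CO = D_ell80_gal * (var_high / var_gal) * p_CO ** 2 / 2.0 * to_muK ** 2
r_CO = D_BB_CO / D_BB_r1            # ~0.025 once the real inputs are inserted
```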
§.§ Simulation estimate
In order to verify and refine the estimate given in the previous Section, we used the model presented in <ref> to infer the level of contamination from unresolved polarized CO emission.
For doing this, we first set the free parameter of the model to the best-fit value derived in <ref>.
From the total unpolarized emission in each sky pixel of the simulation, I^CO, we can then extract its linearly polarized part by taking into account the global properties of the Galactic magnetic field. Following <cit.>, the Q and U Stokes parameters of each CO cloud can be related to the unpolarized emission as
Q(n̂)^CO =p_CO g_d(n̂)I(n̂)^CO cos(2 ψ(n̂)),
U(n̂)^CO = p_CO g_d(n̂) I(n̂)^CO sin(2 ψ(n̂)),
where p_CO is the intrinsic polarization fraction of the CO lines, while g_d is the geometric depolarization factor which accounts for the induced depolarization of the light when integrated along the line of sight. The polarization angle ψ describes the orientation of the polarization vector and, for the specific case of Zeeman emission, it is related to the orientation of the component of the Galactic magnetic field orthogonal to the line of sight B_. Following the findings of <cit.>, we adopted a conservative choice of a constant p_CO=1 % for each molecular cloud of the simulation. Because the polarized emission in molecular clouds is correlated with the polarized dust emission <cit.>, we used the g_d and ψ templates for the Galactic dust emission available in the public release of the Planck Sky Model suite[<http://www.apc.univ-paris7.fr/ delabrou/PSM/psm.html>] <cit.>. These have been derived from 3D simulations of the Galactic magnetic field (including both a coherent and a turbulent component) and data of the WMAP satellite.
Since we assumed a constant polarization fraction, the geometrical depolarization effectively induces a change in the polarization fraction as a function of Galactic latitudes decreasing it when moving away from the poles.
This effect has already been confirmed by observations <cit.> of thermal dust, whose polarization fraction increases at high latitudes.
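In practice, the construction of the polarized maps amounts to the following pixel-level operation; here I_CO, g_d and psi are assumed to be arrays defined on a common pixelization (e.g. HEALPix RING ordering), with psi in radians.

```python
# Q/U Stokes maps from the unpolarized CO emission, per the relations above.
import numpy as np

def co_polarization(I_CO, g_d, psi, p_CO=0.01):
    """Return the (Q, U) maps for a constant intrinsic polarization fraction
    p_CO, geometric depolarization factor g_d and polarization angle psi."""
    P = p_CO * g_d * I_CO
    return P * np.cos(2.0 * psi), P * np.sin(2.0 * psi)
```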
In order to forecast the contamination of unresolved CO polarized emission alone, we apply the HGL-cut as described in <ref> to each realization of the model for consistency.
Once the Q^CO and U^CO maps have been produced, we computed the angular power spectrum using .
In <ref> we show the mean and standard deviation of the B-mode polarization power spectrum extracted from 100 MC realizations of the CO emission following the procedure just outlined. Even though in <ref> we showed that our model tends to slightly overestimate the normalization of the power spectrum, we decided not to apply the best-fit amplitude A_CO to the amplitude of the B-mode power spectrum in order to provide the most conservative estimates of the signal.
As can be seen from <ref>, there is a significant dispersion compared to the results of the MC simulations at low Galactic latitudes (see <ref>). This simply reflects the fact that the observations, and hence our model, do not favour the presence of molecular clouds at high Galactic latitudes; therefore their number can vary significantly between realizations. We repeated this test using the geometry and changing the parameter σ_ring, and the result is stable with respect to these assumptions. We found that the spatial scaling of the average E and B-mode power spectrum can be approximated by a decreasing power law 𝒟_ℓ∼ℓ^α, with α = -1.78.
Our simulations suggest that the level of polarized CO emission from unresolved clouds, despite being significantly lower than synchrotron or thermal dust, can nevertheless significantly bias the primordial B-mode signal if not taken into account. The signal concentrates mainly on large angular scales and at ℓ∼ 80, 𝒟_ℓ=(1.1 ± 0.8)× 10^-4μK^2 where the uncertainty corresponds to the error in the mean spectra estimated from the 100 MC realizations. Therefore, the level of contamination is comparable to a primordial B-mode signal induced by tensor perturbations of amplitude r_CO= 0.003 ± 0.002, i.e. below the recent upper limit r<0.07 reported by the BICEP2 Collaboration <cit.> but higher than the r=0.001 target of upcoming experiments <cit.>. The contamination quickly becomes sub-dominant on small angular scales (ℓ≈ 1000) where the B-modes are mostly sourced by the gravitational lensing.
We finally note that these estimates are conservative, since the assumed polarization fraction of 1% is close to the high end of the polarization fractions observed in CO clouds.
§ CONCLUSIONS
In this work we have developed a parametric model for CO molecular line emission which takes into account the 3D distribution of CO clouds within our Galaxy for different geometries, as well as the most recent observational findings concerning their sizes, locations, and emissivities.
Despite most of the observations having so far been confined to the Galactic plane, we have built the model to simulate the emission over the full sky. The code implementing it is being made publicly available.
We have compared the results of our simulations with CO data on the map level and statistically (by matching angular power spectra). We found that:
* the parameters of the size function, L_0, and the width of the Galactic radial distributions σ_ring play a key role in shaping the power spectrum;
* the choice of symmetries in the cloud distribution changes the profile of the integrated emission in the Galactic plane (<ref>) but not the power spectrum morphology;
* our model is capable of reproducing fairly well the observations at low Galactic latitudes (see <ref>) and the power spectrum at high latitudes (<ref>).
We used our model to fit the observed CO power spectrum and to estimate the most relevant parameters of the CO distribution, such as the typical size of clouds and the thickness of the molecular ring, finding results in agreement with values reported in the literature.
The model which we have developed could easily be generalized and extended whenever new data become available. In particular, its accuracy at high Galactic latitudes would greatly benefit from better sub-mm measurements going beyond the Planck sensitivity, as well as from better information about the details of the CO polarization properties.
Finally, we used the best-fit parameters obtained from comparing the model with data to forecast the unresolved CO contamination of the B-mode power spectrum at high Galactic latitudes. We conservatively assumed a polarization fraction of p_CO=1%, which corresponds to the high end of those observed at low latitudes, since no polarized CO cloud has yet been observed far from the Galactic plane due to the weakness of this emission.
We found that this signal could mimic a B-mode signal with tensor-to-scalar ratio 0.001≲ r≲ 0.025.
This level of contamination is indeed relevant for accurate measurements of CMB B-modes. It should therefore be investigated further in light of the achievable sensitivities of upcoming and future CMB experiments, together with the main diffuse polarized foregrounds (thermal dust and synchrotron). From the experimental point of view, dedicated instrumental solutions for minimizing the impact of the CO emission lines appear particularly worthwhile given these results.
§ ACKNOWLEDGEMENTS
We would like to thank Françoise Combe for many useful comments and suggestions for the development of this study, as well as Alessandro Bressan, Luigi Danese, Andrea Lapi, Akito Kusaka, Davide Poletti and Luca Pagano for several enlightening discussions. We thank John Miller for his careful reading of this work. We thank Guillaume Hurier for several clarifications about the CO products.
This work was supported by the RADIOFOREGROUNDS grant of the European Union's Horizon 2020 research and innovation programme (COMPET-05-2015, grant agreement number 687312) and by the Italian National Institute of Nuclear Physics (INFN) INDARK project. GF acknowledges support of the CNES postdoctoral programme.
Some of the results in this paper have been derived using the HEALPIX <cit.> package.
§ BEST-FIT WITH GEOMETRY
In this appendix we present the results of the analysis described in <ref> to constrain the CO distribution using the model with an geometry instead of the one. Following the procedure of <ref>, we construct a series of ℱ(ℓ; σ_ring, L_0) hyper-surfaces sampled on an ensemble of specific values of the L_0 and σ_ring parameters within the same ranges reported in <ref>.
In <ref> we show the results of the fit of the axisymmetric model to the CO power spectrum of the Type 1 and Type 2 CO maps in the Galactic plane. We summarize the best-fit values of these parameters in <ref>. As can be seen from the results of the χ^2 test in <ref>, the model does not fit the data satisfactorily. Moreover, one of the parameters of the model, the typical cloud size L_0, is in practice unconstrained. For this reason we decided to adopt the geometry as the baseline choice for the forecasts presented in <ref>. Nevertheless, for the sake of completeness, we extended the comparison between the two geometries to the high Galactic latitude area.
In <ref> we show the comparison between the data for Type 1 maps and the axisymmetric best-fit model after the application of the HGL cut described in the paper. The axisymmetric model describes the data similarly to the model on the larger scales. The difference in the signal amplitude is in fact less than 30% for angular scales ℓ≲ 100, and the two models are compatible within the error bars. This seems to indicate that, in this regime, the details of the CO distribution in the high Galactic latitude region are mainly determined by the properties of the vertical profile rather than by the geometry of the distribution. Conversely, the difference between the two geometries becomes important at smaller angular scales, reaching a factor of ≈ 2 at ℓ≈ 1000.
We finally performed a series of polarized simulations as in <ref> to assess the level of contamination of the CMB B-mode power spectrum with the best-fit model, and found r_CO≲0.001.
Moreover, the slope of the BB power spectrum in <ref>(b) is -2.2, similar to the one computed with the geometry.
Because the model describes the data in both the high and low Galactic latitude areas, we consider the upper limit derived with this setup more reliable, and take it as the reference estimate for the contamination of the cosmological signal due to the CO polarized emission.
|
http://arxiv.org/abs/1701.08120v1 | 20170127171356 | Cooperative photometric redshift estimation | [
"Stefano Cavuoti",
"Crescenzo Tortora",
"Massimo Brescia",
"Giuseppe Longo",
"Mario Radovich",
"Nicola R. Napolitano",
"Valeria Amaro",
"Civita Vellucci"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Cooperative photometric redshift estimation
S. Cavuoti, C. Tortora, M. Brescia, G. Longo, M. Radovich, N. R. Napolitano, V. Amaro, C. Vellucci
January 27, 2017
==================================================================================
In modern galaxy surveys, photometric redshifts play a central role in a broad range of studies, from gravitational lensing and dark matter distribution to galaxy evolution.
Using a dataset of ∼25,000 galaxies from the second data release of the Kilo Degree Survey (KiDS), we obtain photometric redshifts with five different methods: (i) Random Forest, (ii) Multi Layer Perceptron with Quasi Newton Algorithm, (iii) Multi Layer Perceptron with an optimization network based on the Levenberg-Marquardt learning rule, (iv) the Bayesian Photometric Redshift model (BPZ), and (v) a classical SED template fitting procedure (Le Phare). We show how SED fitting techniques can provide useful information on the galaxy spectral type, which can be used to improve the capability of machine learning methods to constrain systematic errors and reduce the occurrence of catastrophic outliers. We use this classification to train specialized regression estimators, demonstrating that such a hybrid approach, involving SED fitting and machine learning in a single collaborative framework, is capable of improving the overall prediction accuracy of photometric redshifts.
§ INTRODUCTION
Photometric redshifts produced by modern multi-band digital sky surveys are crucial to provide reliable distance estimates for a large number of galaxies, to be used for several tasks in precision cosmology, to mention just a few: weak gravitational lensing to constrain dark matter and dark energy, the identification of galaxy clusters and groups, the search for strong lensing and ultra-compact galaxies, as well as the study of the mass function of galaxy clusters.
We can derive photometric redshifts (hereafter photo-z) thanks to the existence of a hidden (and complex) correlation among the fluxes in the different broad bands, the spectral type of the object itself, and its real distance.
Although hidden, the mapping function from the photometric space to the redshift space can be approximated in several ways, and the existing methods can be broadly divided into two main classes: theoretical and empirical.
In previous work <cit.> we already applied an empirical method, the Multi Layer Perceptron with Quasi Newton Algorithm, MLPQNA (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>), to a dataset extracted from the Kilo Degree Survey (KiDS).
Here we apply five different photo-z techniques to the same dataset and then analyze the behavior of these methods, with the aim of finding a way to combine their features in order to optimize the accuracy of the photo-z estimation; a similar, but reversed, approach was followed recently by <cit.>.
§ THE DATA
As stated before, we used the photometric data from the KiDS optical survey <cit.>. The KiDS data releases consist of tiles which are observed in the u, g, r, and i bands.
The sample of galaxies on which we performed our analysis is mostly extracted from KiDS-DR2 <cit.>, which contains 148 tiles observed in all filters during the first two years of survey regular operations. We added 29 extra tiles, not included in the DR2 at the time this was released, that will be part of the forthcoming KiDS data release, thus covering an area of 177 square degrees.
We used the multi-band source catalogs, based on source detection in the r-band images. While magnitudes are measured in all filters, the star-galaxy separation, as well as the positional and shape parameters, are derived from the r-band data only, which typically offers the best image quality and seeing (∼ 0.65”), thus providing the most reliable source positions and shapes. The KiDS survey area is split into two fields, KiDS-North and KiDS-South: KiDS-North is completely covered by the combination of SDSS and the 2dF Galaxy Redshift Survey (2dFGRS), while KiDS-South corresponds to the 2dFGRS south Galactic cap region. Further details about the data reduction steps and catalog extraction are provided in <cit.> and <cit.>.
Aperture photometry in the four ugri bands measured within several radii was derived using S-Extractor <cit.>. In this work we use magnitudes MAGAP_4 and MAGAP_6, measured within the apertures of diameters 4” and 6”, respectively. These apertures were selected to reduce the effects of seeing and to minimize the contamination from mis-matched sources. The limiting magnitudes are: MAGAP_4_u =25.17, MAGAP_6_u =24.74, MAGAP_4_g =26.03, MAGAP_6_g =25.61, MAGAP_4_r =25.89, MAGAP_6_r =25.44, MAGAP_4_i =24.53, MAGAP_6_i =24.06. To correct for residual offsets in the photometric zero points, we used the SDSS as reference: for each KiDS tile and band we matched bright stars with the SDSS catalog and computed the median difference between KiDS and SDSS magnitudes (psfMag). For more details about data preparation and pre-processing see <cit.> and <cit.>.
In order to build the spectroscopic Knowledge Base (KB) we cross-matched the KiDS data with the spectroscopic samples available in the GAMA data release 2, <cit.>, and SDSS-III data release 9 <cit.>.
The detailed procedure adopted to obtain the data used for the experiments was as follows: (i) we excluded objects having low photometric quality (i.e., with flux error higher than one magnitude); (ii) we removed all objects having at least one missing band (or labeled as Not-a-Number or NaN), thus obtaining the cleaned catalogue used to create the training and test sets, in which all the photometric and spectroscopic information required is complete for all objects; (iii) we performed a randomly shuffled splitting into a training and a blind test set, using the 60% / 40% percentages, respectively; (iv) we applied the cuts on limiting magnitudes (see <cit.> for details); (v) we selected objects with IMA_FLAGS equal to zero in the g, r and i bands, i.e., sources that have not been flagged because located in proximity of saturated pixels, star haloes, image borders or reflections, or within noisy areas, see <cit.>. The u band is not considered in such selection since the masked regions relative to this band are less extended than in the other three KiDS bands.
The final KB consists of 15,180 objects to be used as training set and 10,067 for the test set.
§ THE METHODS
We chose three machine learning methods, among the ones which are publicly available in the DAta Mining & Exploration Web Application REsource or simply DAMEWARE <cit.> web-based infrastructure: the Random Forest (RF; <cit.>), and two versions of the Multi Layer Perceptron with different optimization methods, i.e., the Quasi Newton Algorithm <cit.> and the Levenberg-Marquardt rule <cit.>, respectively; furthermore we made use of a SED fitting method: Le Phare <cit.> and BPZ <cit.>, a Bayesian photo-z estimation based on a template fitting method which is the last method involved in our experiments.
The results were evaluated using only the objects of the blind test set, by calculating the following set of standard statistical estimators for the quantity Δ z = (z_spec - z_phot)/(1 + z_spec): (i) bias: defined as the mean value of the residuals Δ z; (ii) σ: the standard deviation of the residuals; (iii) σ_68: the radius of the region that includes 68% of the residuals close to 0; (iv) NMAD: Normalized Median Absolute Deviation of the residuals, defined as NMAD(Δ z) = 1.48 × Median (|Δ z|); (v) fraction of outliers with |Δ z| > 0.15.
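For reference, an illustrative implementation of these estimators, for arrays z_spec and z_phot of spectroscopic and photometric redshifts, is:

```python
# Standard photo-z quality estimators on the blind test set.
import numpy as np

def photoz_stats(z_spec, z_phot, outlier_cut=0.15):
    dz = (z_spec - z_phot) / (1.0 + z_spec)       # Delta z
    return {
        "bias": np.mean(dz),                      # mean of the residuals
        "sigma": np.std(dz),                      # standard deviation
        "sigma68": np.percentile(np.abs(dz), 68.0),  # 68% radius around 0
        "NMAD": 1.48 * np.median(np.abs(dz)),
        "outliers": np.mean(np.abs(dz) > outlier_cut),
    }
```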
§ EXPERIMENTS
After a preliminary evaluation of the photometric redshifts based on each of the five methods, by analyzing the results on the basis of the spectral type classification performed by Le Phare (i.e., the class of the template which shows the best fit), we noticed that the ML methods have a better performance, although strongly dependent on the spectral type itself.
Therefore, we decided to exploit the capability of Le Phare to produce such spectral type classifications in order to train a specific regressor for each class. The workflow is described in Fig. <ref>.
It goes without saying that the training of a specific regression model for each class can be effective only if the subdivision itself is as accurate as possible.
After having obtained the preliminary results, we started by creating a reference spectral type classification of data objects through Le Phare model. By bounding the fitting procedure with the spec-z's, Le Phare provided the templates with the best fit. In this way it was possible to assign a specific spectral type class to each object. Afterwards, by replacing the spec-z's with the photo-z's estimated by the preliminary experiment and by alternating the 5 photo-z estimates (one for each applied model) as redshift constraint for the fitting procedure, the Le Phare model was used to derive five different spectral type classifications for each object of the KB.
A normalized confusion matrix was used to identify the best classification, defined as the class of the best-fit template derived from Le Phare in the experiment using the photometric redshifts produced by the Random Forest. By comparing the five matrices, the case of the RF model presents the best behavior for all classes. Therefore, we considered as the best classification the one obtained by using the photo-z's provided by the RF model.
We then subdivided the KB on the base of the five spectral type classes, thus obtaining five different subsets used to perform distinct training and blind test experiments, one for each individual class.
The final stage of the workflow consisted in the combination of the five subsets to produce the overall photo-z estimation, which was compared with the preliminary experiment in terms of the statistical estimators described in Sec. 3. The combined statistics were calculated on the whole dataset, after having gathered together all the objects of all classes, and are reported in the last two rows of Table <ref>. As with single classes, all the statistical estimators show an improvement in the combined approach, with the exception of a slightly worse bias.
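The logic of the workflow can be sketched as follows; the scikit-learn random forest is used here as a stand-in for the MLPQNA regression experts actually employed, and spec_class denotes the Le Phare spectral-type labels assigned as described above.

```python
# One regression "expert" per spectral type (E, E/S0, Sab, Scd, SB), then
# recombination of the per-class predictions into a single photo-z catalog.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_experts(mags, z_spec, spec_class):
    experts = {}
    for c in np.unique(spec_class):
        sel = spec_class == c
        experts[c] = RandomForestRegressor(n_estimators=500).fit(
            mags[sel], z_spec[sel])
    return experts

def predict(experts, mags, spec_class):
    z = np.empty(len(mags))
    for c, model in experts.items():
        sel = spec_class == c
        z[sel] = model.predict(mags[sel])
    return z
```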
As Table <ref> shows, the proposed combined approach induces an estimation improvement for each class, as well as for the whole dataset.
§ DISCUSSION AND CONCLUSIONS
In this work we described an original workflow designed to improve the photo-z estimation accuracy through a combined use of theoretical (SED fitting) and empirical (machine learning) methods.
The data sample used for the analysis was extracted from the ESO KiDS DR2 photometric galaxy data, using a knowledge base derived from the SDSS and GAMA spectroscopic samples. For a catalog of about 25,000 galaxies with spectroscopic redshifts, we estimated photo-z's using five different methods: (i) Random Forest; (ii) MLPQNA (Multi Layer Perceptron with the Quasi Newton learning rule); (iii) LEMON (Multi Layer Perceptron with the Levenberg-Marquardt learning rule); (iv) Le Phare SED fitting and (v) the bayesian model BPZ. The results obtained with the MLPQNA on the complete KiDS DR2 data have been discussed in <cit.>, and further details are provided there.
The spectral type classification provided by the SED fitting method allows us to derive, also for the ML models, the statistical errors as a function of spectral type, thus leading to a more accurate characterization of the errors. Therefore, it is possible to assign a specific spectral type attribute to each object and to evaluate single-class statistics. This fact, by itself, can be used to derive a better characterization of the errors. Furthermore, as has been shown, the combination of SED fitting and ML methods also allows one to build specialized (i.e., expert) regression models for each spectral type class, thus refining the process of redshift estimation.
Although the spec-z's are in principle the most accurate information available to bound the SED fitting techniques, using them would make it impossible to produce a wide catalogue of photometric redshifts that also includes objects not observed spectroscopically. Thus, it appears reasonable to identify the best solution by making use of predicted photo-z's to bound the fitting, in order to obtain a reliable spectral type classification for the widest set of objects. This approach, which can also employ arbitrary ML and SED fitting methods, makes the proposed workflow widely usable in any survey project.
By looking at Table <ref>, our procedure shows clearly how the MLPQNA regression method benefits from the knowledge contribution provided by the combination of SED fitting (Le Phare in this case) and machine learning (RF in the best case). This allows us to use a set of regression experts based on the MLPQNA model, specialized to predict redshifts for objects belonging to specific spectral type classes, thus gaining in terms of a better photo-z estimation.
By analyzing the results of Table <ref> in more detail, the improvement in photo-z quality is significant for all classes and for all statistical estimators. Only the two classes Scd and SB show a less evident improvement, since their residual distributions appear almost comparable in both experiment types, as confirmed by their very similar values of statistical parameters σ and σ_68.
This leads to obtain a more accurate photo-z prediction by considering the whole test set.
The only apparent exception is the mean (column bias of Table <ref>), which suffers from the alternation of positive and negative values in the hybrid case; this causes the algebraic sum to come out slightly worse than in the standard case (the effect occurs at the fourth decimal digit, see column bias of the last two rows of Table <ref>). This is not statistically relevant, because the bias is one order of magnitude smaller than σ and σ_68 and therefore negligible.
We note that in some cases, the hybrid approach leads to the almost complete disappearance of catastrophic outliers. This is the case, for instance of the E type galaxies. The reason is that for the elliptical galaxies the initial number of objects is lower than for the other spectral types in the KB. In the standard case, i.e., the standard training/test of the whole dataset, such small amount of E type representatives is mixed together with other more populated class objects, thus causing a lower capability of the method to learn their photometric/spectroscopic correlations. Instead, in the hybrid case, using the proposed workflow, the possibility to learn E type correlations through a regression expert increases the learning capabilities, thus improving the training performance and the resulting photo-z prediction accuracy.
The confusion matrices allow us to compare classification statistics. The most important statistical estimators are: (i) the purity or precision, defined as the ratio between the number of correctly classified objects of a class (the block on the main diagonal for that class) and the number of objects predicted in that class (the sum of all blocks of the column for that class); (ii) the completeness or recall, defined as the ratio between the number of correctly classified objects in that class (the block on the main diagonal for that class) and the total number of (true) objects of that class originally present in the dataset (the sum of all blocks of the row for that class); (iii) the contamination, automatically defined as the complement of the purity (i.e., 1 − purity).
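In terms of a confusion matrix M, with rows indexing the true classes and columns the predicted ones, these estimators read:

```python
# Per-class purity, completeness and contamination from a confusion matrix,
# with M[i, j] = number of true-class-i objects assigned to class j.
import numpy as np

def class_statistics(M):
    M = np.asarray(M, dtype=float)
    purity = np.diag(M) / M.sum(axis=0)        # correct / predicted in class
    completeness = np.diag(M) / M.sum(axis=1)  # correct / true in class
    return purity, completeness, 1.0 - purity  # contamination = 1 - purity
```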
Scd and SB spectral type classes are well classified by all methods. This is also confirmed by their statistics, since the purity is on average on all five cases around 88% for Scd and 87% for SB, with an averaged completeness of, respectively, 91% in the case of Scd and 82% for SB.
Moreover the three classifications based on the machine learning models maintain a good performance in the case of E/S0 spectral type class, reaching on average a purity and a completeness of 89% for both estimators.
In the case of Sab class, only the RF-based classification is able to reach a sufficient degree of efficiency (78% of purity and 85% of completeness). In particular, for the two cases based on photo-z's predicted by SED fitting models, for the Sab class the BPZ-based results are slightly more pure than those based on Le Phare (68% vs 66%) but much less complete (49% vs 63%).
Finally, by analyzing the results on the E spectral type class, the classification performance is on average the worst case, since only the RF-based case is able to maintain a sufficient compromise between purity (77%) and completeness (63%). The classification based on Le Phare photo-z's reaches a 69% of completeness on the E class, but shows an evident high level of contamination between E and E/S0, thus reducing its purity to the 19%. We also note that the intrinsic major difficulty to separate E objects from E/S0 class is due to the partial co-presence of both spectral types in the class E/S0, that may partially cause wrong evaluations by the classifier.
Furthermore, the fact that later Hubble types are less affected may be easily explained by considering that their templates are, on average, more homogeneous than for early type objects.
All the above considerations lead to the clear conclusion that the classification performed by Le Phare model and based on RF photo-z's achieves the best compromise between purity and completeness of all spectral type classes. Therefore, its spectral classification has been taken as reference throughout the further steps of the workflow.
At the final stage of the proposed workflow, the photo-z quality improvements obtained by the expert MLPQNA regression estimators on single spectral types of objects induce a reduction of σ from 0.026 to 0.023 and of σ_68 from 0.018 to 0.016 for the overall test set, in addition to a more significant improvement for the E class (σ from 0.029 to 0.020 and σ_68 from 0.028 to 0.017). This is mostly due to the reduction of catastrophic outliers. This result, together with the generality of the workflow in terms of the choice of the classification/regression methods, demonstrates the possibility of optimizing the accuracy of photo-z estimation through the collaborative combination of theoretical and empirical methods.
§ ACKNOWLEDGMENTS
CT is supported through an NWO-VICI grant (project number 639.043.308). MB and SC acknowledge financial contribution from the agreement ASI/INAF I/023/12/1. MB acknowledges the PRIN-INAF 2014 Glittering kaleidoscopes in the sky: the multifaceted nature and role of Galaxy Clusters.
99
[Ahn et al. (2012)]ahn2012 Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, ApJS, 203, 21
[Benitez (2000)]Benitez Benitez, N., 2000, ApJ, 536, 571
[Bertin & Arnouts (1996)]bertin1996 Bertin, E., Arnouts, S., 1996, A&AS, 117, 393
[Breiman (2001)]breiman2001 Breiman, L., 2001, Machine Learning, Springer Eds., 45, 1, 25-32
[Brescia et al. (2015)]brescia2015 Brescia, M., Cavuoti, S., Longo, G., De Stefano, V., 2015, A&A, 568, A126.
[Brescia et al. (2014)]brescia2014 Brescia, M., Cavuoti, S., Longo, G., et al., 2014, PASP, 126, 942, 743-797
[Brescia et al. (2013)]brescia2013 Brescia M., Cavuoti S., D'Abrusco R., Mercurio A., Longo G., 2013, ApJ, 772, 140
[Byrd et al. (1994)]byrd1994 Byrd, R.H., Nocedal, J., Schnabel, R.B., 1994, Mathematical Programming, 63, 129-156
[Cavuoti et al. (2017)]cavuoti2017 Cavuoti, S., et al., 2017, MNRAS 465 (2): 1959-1973.
[Cavuoti et al. (2015a)]Cavuoti+15_KIDS_I Cavuoti, S., Brescia, M., Tortora, C., et al. 2015, MNRAS, 452, 3, 3100-3105
[Cavuoti et al. (2015b)]cavuoti2015 Cavuoti, S., et al., 2015, Experimental Astronomy, Springer, Vol. 39, Issue 1, 45-71
[Cavuoti et al. (2012)]cavuoti2012 Cavuoti, S., Brescia, M., Longo, G., Mercurio, A., 2012, A&A, 546, 13
[de Jong et al. (2015)]deJong+15_KIDS_paperI de Jong, J. T. A., Verdoes Kleijn, G. A., Boxhoorn, D. R., et al., 2015, A&A, 582, A62
[Fotopoulou et al. (2016)]fotopoulou2016Fotopoulou, S. et al. submitted to MNRAS
[Ilbert et al. (2006)]ilbert2006 Ilbert, O., Arnouts, S., McCracken, H. J., et al., 2006, A&A, 457, 841
[Liske et al. (2015)]liske2015 Liske, J., Baldry, I. K., Driver, S. P., et al., 2015, MNRAS, 452, 2, 2087-2126
[Nocedal & Wright (2006)]nocedal2006 Nocedal, J., Wright, S. J., 2006, Numerical Optimization, 2nd Edition. Springer
[Tortora et al. (2016)]Tortora+15_KiDS_compacts Tortora, C., La Barbera, F., Napolitano, N. R., et al., 2016, MNRAS, 457, 3, 2845-2854
|
http://arxiv.org/abs/1701.08192v1 | 20170127210551 | A construction of hyperkähler metrics through Riemann-Hilbert problems II | [
"César Garza"
] | math.CA | [
"math.CA"
] |
Department of Mathematics, IUPUI, Indianapolis, USA
[email protected]
[2010]Primary
We develop the theory of Riemann-Hilbert problems necessary for the results in <cit.>. In particular, we obtain solutions for a family of non-linear Riemann-Hilbert problems through classical contraction principles and saddle-point estimates. We use compactness arguments to obtain the required smoothness property on solutions. We also consider limit cases of these Riemann-Hilbert problems where the jump function develops discontinuities of the first kind together with zeroes of a specific order at isolated points in the contour. Solutions through Cauchy integrals are still possible and they have at worst a branch singularity at points where the jump function is discontinuous and a zero for points where the jump vanishes.
A construction of hyperkähler metrics through Riemann-Hilbert problems II
C. Garza
=========================================================================
§ INTRODUCTION
This article presents the analytic results needed in <cit.>. As stated in said article, in order to construct complete hyperkähler metrics g on a special case of complex integrable systems (where moduli spaces of Higgs bundles constitute the prime example of this), one must obtain solutions to a particular infinite-dimensional, nonlinear family of Riemann-Hilbert problems. The analytical methods used to obtain these solutions and the smoothness results can be studied separately from the geometric motivations and so we present them in this article.
For limiting values of the parameter space, the Riemann-Hilbert problem degenerates in the sense that discontinuities appear in the jump function G(ζ) at ζ = 0 and ζ = ∞ in the contour Γ. Moreover, G(ζ) may vanish at isolated pairs of points in Γ. We study the behavior of the solutions to this boundary value problem near such singularities and we obtain their general form, proving that these functions do not develop singularities even in the presence of these pathologies, thus proving the existence of the hyperkähler metrics in <cit.>.
The paper is organized as follows:
In Section <ref> we state the Riemann-Hilbert problems to be considered. As shown in <cit.>, this arises from certain complex integrable systems satisfying a set of axioms motivated by the theory of moduli spaces of Higgs bundles, but we shall not be concerned about the geometric aspects in this paper.
In Section <ref> we solve the Riemann-Hilbert problem by iterations running estimates based on saddle-point analysis. Under the right Banach space, these estimates show that we have a contraction, proving that solutions exist and are unique. We then apply the Arzela-Ascoli theorem and uniform estimates to show that the solutions are smooth with respect to the parameter space.
In Section <ref> we consider the special case when the parameter a approaches 0 yielding a Riemann-Hilbert problem whose jump function has discontinuities and zeroes along the contour. We apply Cauchy integral techniques to obtain the behavior of the solutions near the points on the contour with these singularities. We show that a discontinuity of the jump function induces a factor ζ^η in the solutions, where η is determined by the discontinuities of the jump function G. A zero of order k at ζ_0 induces a factor (ζ - ζ_0)^k on the left-side part of the solutions. The nature of these solutions are exploited in <cit.> to reconstruct a holomorphic symplectic form ϖ(ζ) and, ultimately, a hyperkähler metric.
Acknowledgment: The author would like to thank Professor Alexander Its for many illuminating conversations that greatly improved the manuscript.
§ FORMULATION OF THE RIEMANN-HILBERT PROBLEM
§.§ Monodromy data
We state the monodromy data we will use in this paper. For a more geometric description of the assumptions we make, see <cit.>.
Since we will only consider the manifolds ℳ in <cit.> from a local point of view, we can consider it as a trivial torus fibration. With that in mind, here are the key ingredients we need in order to define our Riemann-Hilbert problem:
* A neighborhood U of 0 in ℂ with coordinate a. On U we have a trivial torus fibration U × T^2 := U × (S^1)^2 with θ_1, θ_2 the torus coordinates.
* Γ≅^2 is a lattice equipped with an antisymmetric integer valued pairing ⟨ , ⟩. We also assume we can choose primitive elements γ_1, γ_2 in Γ forming a basis for the lattice and such that ⟨γ_1, γ_2⟩ = 1.
* A homomorphism Z from Γ to the space of holomorphic functions on U.
* A function Ω : Γ→ such that Ω(γ) = Ω(-γ), γ∈Γ and such that, for some K > 0,
|Z_γ|/‖γ‖ > K
for a positive definite norm ‖·‖ on Γ and for all γ for which Ω(γ) ≠ 0.
For the first part of this paper we work with the extra assumption
(5) Z_γ_1(a), Z_γ_2(a) ≠ 0 for any a in U.
Later in this paper we will relax this condition.
Observe that the torus coordinates θ_1, θ_2 induce a homomorphism θ from Γ to the space of functions on T^2 if we assign γ_k↦θ_k, k = 1, 2. We denote by θ_γ, γ∈Γ the result of this map.
We consider a different complex plane with coordinate ζ. Let R > 0 be an extra real parameter that we consider. We define the “semiflat” functions 𝒳^sf_γ : U × T^2 × ℂ^× → ℂ for any γ∈Γ as
𝒳^sf_γ (a, θ_1, θ_2, ζ) = exp( π R Z_γ(a)/ζ + iθ_γ + π R ζ Z̅_γ(a))
As in the case of the map θ, it suffices to define 𝒳^sf_γ_1 and 𝒳^sf_γ_2.
For each a ∈ U and γ∈Γ such that Ω(γ) ≠ 0, the function Z_γ defines a ray ℓ_γ(a) in given by
ℓ_γ(a) = {ζ∈ : ζ = -t Z_γ(a), t > 0 }
Given a pair of functions 𝒳_k : U × T^2 ×→, k = 1, 2, we can extend this with the basis {γ_1, γ_2} as before to a collection of functions 𝒳_γ, γ∈Γ. Each element γ in the lattice also defines a transformation 𝒦_γ for these functions in the form
𝒦_γ𝒳_γ' = 𝒳_γ' (1 - 𝒳_γ)^⟨γ', γ⟩
For each ray ℓ from 0 to ∞ in we can define a transformation
S_ℓ = ∏_γ : ℓ_γ(u) = ℓ𝒦_γ^Ω(γ)
Observe that all the γ's involved in this product are multiples of each other, so the 𝒦_γ commute and the order for the product is irrelevant.
We can now state the main type of Riemann-Hilbert problem we consider in this paper. We seek to obtain two functions 𝒳_k : U × T^2 × ℂ^× → ℂ, k = 1, 2 with the following properties:
* Each 𝒳_k depends piecewise holomorphically on ζ, with discontinuities only at the rays ℓ_γ(a) for which Ω(γ) ≠ 0. The functions are smooth on U × T^2.
* The limits 𝒳_k^± as ζ approaches any ray ℓ from both sides exist and are related by
𝒳^+_k = S_ℓ^-1∘𝒳^-_k
* 𝒳 obeys the reality condition
𝒳_-γ(-1/ζ̅) = 𝒳̅_γ(ζ)
* For any γ∈Γ, lim_ζ→ 0𝒳_γ(ζ) / 𝒳^sf_γ(ζ) exists and is real.
§.§ Isomonodromic Deformation
It will be convenient for the geometric applications to move the rays to a contour that is independent of a. Even though the rays ℓ_γ defining the contour for the Riemann-Hilbert problem above depend on the parameter a, we can assume the open set U ⊂ ℂ is small enough so that there is a pair of rays r, -r such that for all a ∈ U, half of the rays lie inside the half-plane ℍ_r of vectors making an acute angle with r, and the other half of the rays lie in ℍ_-r. We call such rays admissible rays. We are allowing the case that r is one of the rays ℓ_γ, as long as it satisfies the above condition.
For γ∈Γ, we define γ > 0 (resp. γ < 0) as ℓ_γ∈ℍ_r (resp. ℓ_γ∈ℍ_-r). Our Riemann-Hilbert problem will have only two anti-Stokes rays, namely r and -r. In this case, the Stokes factors are the concatenation of all the Stokes factors S^-1_ℓ in (<ref>) in the counterclockwise direction:
S_r = ∏_γ > 0𝒦^Ω(γ; a)_γ
S_-r = ∏_γ < 0𝒦^Ω(γ; a)_γ
Thus, we reformulate the Riemann-Hilbert problem in terms of two functions 𝒴_k : U × T^2 × ℂ^× → ℂ, k = 1, 2 with discontinuities at the admissible rays r, -r, by replacing condition <ref> above with
𝒴^+_k = S_r ∘𝒴^-_k along r,
𝒴^+_k = S_-r∘𝒴^-_k along -r
The other conditions remain the same:
* The functions 𝒴_k are smooth on U × T^2.
* 𝒴 obeys the reality condition
𝒴_-γ(-1/ζ̅) = 𝒴̅_γ(ζ)
* For any γ∈Γ, lim_ζ→ 0𝒴_γ(ζ) / 𝒳^sf_γ(ζ) exists and is real.
In the following section we will prove the main theorem of this paper:
There exists a pair of functions 𝒴_k : U × T^2 × ℂ^× → ℂ, k = 1, 2 satisfying (<ref>) and conditions (<ref>), (<ref>), (<ref>). These functions are unique up to multiplication by a real constant.
§ SOLUTIONS
We start working on a proof of Theorem <ref>. As in the classical scalar Riemann-Hilbert problems, we obtain the solutions 𝒴_k by solving the integral equation
𝒴_k(a,ζ) = 𝒳_γ_k^sf(a,ζ) exp( 1/4π i{∫_r K(ζ, ζ') log(S_r 𝒴_k) + ∫_-r K(ζ, ζ') log(S_-r𝒴_k) }), k = 1, 2
where we abbreviated (dζ'/ζ')·(ζ'+ζ)/(ζ'-ζ) as K(ζ',ζ). The dependence of 𝒴_k on the torus coordinates θ_1, θ_2 has been omitted to simplify notation. We will write 𝒴_γ to denote the function resulting from the (multiplicative) homomorphism from Γ to nonzero functions on U × T^2 × ℂ^× induced by 𝒴_k, k = 1, 2.
It will be convenient to write
𝒴_γ(a, ζ, θ) = 𝒳_γ^sf(a, ζ, Θ),
for Θ_k : U × T^2 × ℂ^× → ℂ, k = 1, 2. We abuse notation and write θ for (θ_1, θ_2), as we do with Θ.
If we take the power series expansion of log(S_r 𝒴_k), log(S_-r𝒴_k) and decompose the terms into their respective components in each γ∈Γ, we can rewrite the integral equation (<ref>) as
𝒴_γ(a,ζ) = 𝒳_γ^sf(a,ζ)exp( 1/4π i{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒴_γ'(a,ζ') + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒴_γ'(a,ζ')})
where
f^γ' = c_γ'⟨γ, γ' ⟩,
c_γ' a rational constant obtained by power series expansion.
[The Pentagon case]
As our main example of this families of Riemann-Hilbert problems, we have the Pentagon case, studied in more detail in <cit.>. Here the jump functions S_r, S_-r are of the form
S_r: 𝒴_1 ↦𝒴_1(1-𝒴_2), 𝒴_2 ↦𝒴_2(1-𝒴_1(1-𝒴_2))^-1,
and, similarly,
S_-r: 𝒴_1 ↦𝒴_1(1-𝒴^-1_2)^-1, 𝒴_2 ↦𝒴_2(1-𝒴^-1_1(1-𝒴^-1_2)).
If we expand log(S_r𝒴_k), k = 1, 2 etc. we obtain
f^iγ_1+jγ_2= {[ -(1/j)⟨γ, γ_2⟩ if i=0; ((-1)^j/i)binom(|i|,|j|)⟨γ, γ_1⟩ if 0≤ j≤ i or i ≤ j ≤ 0; 0 otherwise. ].
Back in the general case, our approach for a solution to (<ref>) is to work with iterations. For ν∈ℕ:
𝒴^(ν+1)_γ(a,ζ) = 𝒳_γ^sf(a,ζ)exp( 1/4π i{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒴^(ν)_γ'(a,ζ') + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒴^(ν)_γ'(a,ζ')})
Formula (<ref>) requires an explanation. Assuming 𝒴^(ν-1)_γ', γ' ∈Γ has been constructed, by definition, 𝒴^(ν)_γ' has jumps at r and -r. By abuse of notation, 𝒴^(ν)_γ' in (<ref>) denotes the analytic continuation to the ray r (resp. -r) along ℍ_r (resp. ℍ_-r) in the case of the first (resp. second) integral.
By using (<ref>), we can write (<ref>) as an additive Riemann-Hilbert problem where we solve the integral equation
e^iΘ_γ = e^iθ_γexp( 1/4π i{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ) + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ)})
As in (<ref>), the solution of (<ref>) is obtained through iterations:
Θ^(0)(ζ,θ) = θ,
e^iΘ_γ^(ν+1) = e^iθ_γexp(1/4π i{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν)) + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))})
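To illustrate the scheme numerically, the following sketch computes the first iterate Θ_1^(1) in a toy truncation of the Pentagon case that keeps only the charges ±γ_2 (the ±γ_1 terms drop out of the γ_1 component, since ⟨γ_1, γ_1⟩ = 0). All numerical values are illustrative; later iterates would require evaluating Θ^(ν) at the quadrature nodes, with care near the singular kernel.

```python
# First step (nu = 0 -> 1) of the iteration, with Theta^(0) = theta constant.
import numpy as np

R = 2.0
Z1 = 1.0 + 0.2j          # Z_{gamma_1}(a): used only to fix an admissible ray
Z2 = 0.3 + 1.0j          # Z_{gamma_2}(a)
th2 = -0.4               # theta_{gamma_2}
phi_r = np.angle(-(Z1 + Z2))          # direction of the admissible ray r
s = np.linspace(-8.0, 8.0, 4001)      # zeta' = e^s e^{i phi}: dzeta'/zeta' = ds
ds = s[1] - s[0]

def X_sf(Z, th, zp):
    # semiflat function evaluated with the constant Theta^(0) = theta
    return np.exp(np.pi * R * Z / zp + 1j * th + np.pi * R * zp * np.conj(Z))

def Theta1_minus_theta1(zeta):
    """First iterate Theta_1^(1)(zeta) - theta_1; zeta must avoid r and -r."""
    total = 0.0 + 0.0j
    # (f^{gamma'}, Z_{gamma'}, theta_{gamma'}, ray phase); Pentagon values
    # f^{gamma_2} = -1, f^{-gamma_2} = +1 for the gamma_1 component.
    for fc, Z, th, phi in [(-1.0, Z2, th2, phi_r),
                           (+1.0, -Z2, -th2, phi_r + np.pi)]:
        zp = np.exp(s) * np.exp(1j * phi)      # quadrature nodes on the ray
        K = (zp + zeta) / (zp - zeta)          # kernel (zeta'+zeta)/(zeta'-zeta)
        total += fc * np.trapz(K * X_sf(Z, th, zp), dx=ds)
    return -total / (4.0 * np.pi)

print(Theta1_minus_theta1(0.5 * np.exp(1j * (phi_r + 1.5))))  # O(e^{-2 pi R |Z2|})
```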
We need to show that Θ^(ν) = (Θ_1^(ν), Θ_2^(ν)) converges uniformly in a to well-defined functions Θ_k : U × T^2 × ℂ^× → ℂ, k = 1, 2 with the right smoothness properties in a and ζ. Define X as the completion of the space of bounded functions of the form Φ: U × T^2 × ℂ^× → ℂ^2 that are smooth on U × T^2 under the norm
Φ = sup_ζ,θ, aΦ(ζ, θ, a) _^2,
where ℂ^2 is assumed to have as norm the maximum of the Euclidean norms of its coordinates. Notice that we have not put any restriction on the functions Φ in the ζ slice, except that they must be bounded. Our strategy will be to solve the Riemann-Hilbert problem in X and show that, for sufficiently big (but finite) R, we can get uniform estimates on the iterations yielding such solutions and on any derivative with respect to the parameters a, θ. The Arzela-Ascoli theorem will then give us that the solution Φ not only lies in X, but also preserves all the smoothness properties. The very nature of the integral equation will guarantee that its solution is piecewise holomorphic in ζ, as desired.
We're assuming as in <cit.> that Γ has a positive definite norm satisfying the Cauchy-Schwarz property
|⟨γ, γ' ⟩| ≤‖γ‖‖γ'‖
as well as the “Support property” (<ref>).
For any Φ∈X, let Φ_k denote the composition of Φ with the kth projection π_k : ^2→, k = 1, 2. Instead of working with the full Banach space X, let X^* be the collection of Φ∈X in the closed ball
‖Φ - θ‖≤ϵ,
for an ϵ > 0 so small that
sup_ζ,θ,a| e^iΦ_k| ≤ 2,
for k = 1, 2. In particular, X^* is closed, hence complete. Note that by (<ref>), if Φ∈X^*, then e^iΦ∈X. Furthermore, by (<ref>), the dependence on ζ enters only through an integral transformation, so Θ^(ν) is holomorphic in each of the half planes ℍ_r and ℍ_-r.
§.§ Saddle-point Estimates
We will prove the first of our uniform estimates on Θ^(ν).
Θ^(ν)∈ X^* for all ν.
We follow <cit.>, using induction on ν. The statement is clearly true for ν = 0 by (<ref>). Assuming Θ^(ν)∈X^*, take the log in both sides of (<ref>):
Θ^(ν+1)_k - θ_k = -1/4π{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν)) + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν)) }, k = 1, 2
For general Φ∈X^*, Φ can be very badly behaved in the ζ slice, but by our inductive construction, Θ^(ν+1) is even holomorphic in ℍ_r and ℍ_-r. Consider the integral
∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))
The function Θ^(ν) can be analytically extended along the ray r so that it is holomorphic on the sector V bounded by r and ℓ_γ', γ' > 0 (see Figure <ref>). By Cauchy's theorem, we can move (<ref>) to one along the ray ℓ_γ', possibly at the expense of a residue of the form
4π i exp[iΘ_γ'^(ν) + π R ( Z_γ'/ζ + Z̄_γ'ζ) ]
if ζ lies in V. This residue is under control. Indeed, by the induction hypothesis, | e^i Θ_γ'^(ν)| < 2^‖γ'‖, independent of ν. Moreover, we pick up a residue only if ζ lies in the sector S bounded by the first and last ℓ_γ_k, γ_k ∈{γ_1, γ_2}, included in ℍ_r traveling in the counterclockwise direction. This sector is strictly smaller than ℍ_r (see Figure <ref>), so arg Z_γ' - arg ζ∈ (-π,π) and, since r makes an acute angle with all rays ℓ_γ', γ'>0:
| arg Z_γ' - arg ζ| > const > π/2 for all γ' > 0, ζ∈ S.
In particular,
cos( arg Z_γ' - arg ζ) < -const <0 for all γ' > 0, ζ∈ S.
Using the fact that inf (|ζ| + 1/|ζ|) = 2, the sum of residues of the form (<ref>) is bounded by:
∑_γ' >0 | f^γ'| 2^‖γ'‖ e^{-const·R|Z_γ'|}
Recall that ‖γ'‖ < const |Z_γ'|, so (<ref>) can be simplified to
∑_γ' >0 |f^γ'| e^{(-const·R + δ)|Z_γ'|}
for a constant δ. We're assuming that Ω(γ') do not grow too quickly with γ', by the support property (<ref>), so |f^γ'| is dominated by the exponential term and the above sum can be made arbitrarily small if R is big enough. This bound can be chosen to be independent of ν, ζ and the basis element γ_k (by choosing the maximum among the γ_1, γ_2). The exact same argument can be used to show that the residues of the integrals along -r are in control. In fact, let ϵ > 0 be given. Choose R > 0 so that the total sum of residues Res(ζ) is less than ϵ/2.
Thus, we can assume the integrals are along ℓ_γ' and consider
∫_ℓ_γ' K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))
The next step is to do a saddle point analysis and obtain the asymptotics for large R. Since this type of analysis will be of independent interest to us, we leave these results to a separate Lemma at the end of this section.
By (<ref>), | exp( i Θ^(ν)_γ'(ζ_0)) | ≤ 2^‖γ'‖. Thus, by Lemma <ref>, for ζ away from the saddle ζ_0, we can bound the contribution from the integral by
const | f^γ'| 2^‖γ'‖ e^{-2π R|Z_γ'|}/√(R |Z_γ'|)
if R is big enough.
The case of ζ = ζ_0 is, by Lemma <ref>, as in (<ref>) except without the √(R) term in the denominator. In any case, by (<ref>), and since | exp( i Θ^(ν)_γ'(ζ_0))| ≤ 2^‖γ'‖ by (<ref>) and by (<ref>),
| ∑_γ' f^γ'∫_ℓ_γ' K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν)) | < const ∑_γ' | f^γ'| e^{(-2π R + δ)|Z_γ'|}.
The δ constant is the same appearing in (<ref>). This sum is convergent by the tameness condition on the Ω(γ') coefficients, and can be made arbitrarily small if R is big enough. Putting everything together:
sup_ζ,θ | Θ^(ν+1)_γ - θ_γ| ≤ const ∑_γ' | f^γ'| e^{(-2π R + δ)|Z_γ'|} + Res(ζ) < ϵ/2 + ϵ/2 = ϵ.
Therefore ‖Θ^(ν+1) - θ‖ < ϵ. In particular, ‖Θ^(ν+1)‖ < ∞, so Θ^(ν+1)∈X^*. Since ϵ was arbitrary, Θ^(ν+1) satisfies the side condition (<ref>), and thus Θ^(ν)∈X^* for all ν if R is big enough.
We finish this subsection with the proof of some saddle-point analysis results used in the previous lemma.
For every ν consider an integral of the form
F(ζ) = ∫_ℓ_γ' K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))
Let ζ_0 = -e^{i arg Z_γ'}. Then, for ζ≠ζ_0, we can estimate the above integral as
F(ζ) = -\frac{ζ_0 + ζ}{ζ_0 - ζ} exp( i Θ^(ν)(ζ_0)) \frac{1}{√(R|Z_γ'|)} e^{-2π R |Z_γ'|} + O( e^{-2π R |Z_γ'|}/R), as R →∞
For ζ = ζ_0,
F(ζ_0) = O( e^-2π R |Z_γ'|/R), as R →∞
Equation (<ref>) is of the type
h(R) = ∫_ℓ_γ' g(ζ') e^{π R f(ζ')} dζ'
where
g(ζ') = \frac{ζ'+ζ}{ζ'(ζ'-ζ)}, f(ζ') = \frac{Z_γ'}{ζ'} + ζ' Z̄_γ'.
The function f has a saddle point ζ_0 = -e^{i arg Z_γ'} at the intersection of the ray ℓ_γ' with the unit circle. Moreover, f(ζ_0) = -2|Z_γ'|. The ray ℓ_γ' and the unit circle are the locus of Im f(ζ') = Im f(ζ_0) = 0. It's easy to see that on ℓ_γ', f(ζ') < f(ζ_0) if ζ' ≠ζ_0, so ℓ_γ' is the path of steepest descent (see Figure <ref>).
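For reference, the saddle data can be verified directly; this is a routine computation we record for the reader. Writing Z_γ' = |Z_γ'| e^{iϑ}:
f'(ζ') = -Z_γ'/ζ'^2 + Z̄_γ' = 0 ⟹ ζ'^2 = e^{2iϑ}, and the root on ℓ_γ' is ζ_0 = -e^{iϑ};
f(ζ_0) = -|Z_γ'| - |Z_γ'| = -2|Z_γ'|;
f''(ζ_0) = 2Z_γ'/ζ_0^3 = -2|Z_γ'| e^{-2iϑ},
so |f''(ζ_0)| = 2|Z_γ'| and arg f''(ζ_0) = π - 2ϑ, which is the angle α used below.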
Introduce τ by
1/2 (ζ' - ζ_0)^2 f”(ζ_0) + O((ζ' -ζ_0)^3) = -τ^2
and so
ζ' - ζ_0 = {-2/f”(ζ_0)}^1/2τ + O(τ^2)
for an appropriate branch of {f''(ζ_0)}^{1/2}. Let α = arg f''(ζ_0) = -2 arg Z_γ' + π. The branch of {f''(ζ_0)}^{1/2} is chosen so that τ > 0 in the part of the steepest descent path outside the unit disk in Figure <ref>. That is, τ > 0 when arg(ζ' - ζ_0) = π/2 - α/2, and so {f''(ζ_0)}^{1/2} = i√(2|Z_γ'|) e^{-i arg Z_γ'}. Thus (<ref>) simplifies to
ζ' - ζ_0 = -ζ_0/√(|Z_γ'|)τ + O(τ^2)
We expand g(ζ'(τ)) as a power series[In our case, g depends also on the parameter R, so this is an expansion on ζ']:
g(ζ'(τ)) = g(ζ_0) + g'(ζ_0) {-2/f”(ζ_0)}^1/2τ + O(τ^2)
As in <cit.>,
h(R) ∼ e^{π R f(ζ_0)} g(ζ_0) {-2/f''(ζ_0)}^{1/2}∫_{-∞}^{∞} e^{-π R τ^2} dτ + …
and so
h(R) = √(2/(R|f''(ζ_0)|)) g(ζ_0) e^{π R f(ζ_0) + (i/2)(π - α)} + O( e^{π R f(ζ_0)}/R).
In our case, since ζ_0 = -e^{i arg Z_γ'}, g(ζ_0) = (ζ_0+ζ)/(ζ_0(ζ_0-ζ)) and e^{(i/2)(π-α)} = e^{i arg Z_γ'} = -ζ_0, this reduces to
h(R) = -\frac{ζ_0 + ζ}{ζ_0 - ζ} exp( i Θ^(ν)(ζ_0)) \frac{1}{√(R|Z_γ'|)} e^{-2π R |Z_γ'|} + O( e^{-2π R |Z_γ'|}/R), as R →∞.
This shows (<ref>).
If ζ→ζ_0, we take a different path of integration, consisting of 3 parts ℓ_1, ℓ_2, ℓ_3 (see Figure <ref>).
If we parametrize the ray ℓ_γ' as ζ' = -e^{t + i arg Z_γ'} = e^t ζ_0, -∞ < t < ∞, the ℓ_2 part is a small semicircle joining t = -ϵ to t = ϵ, for small ϵ. The contribution from ℓ_2 is clearly (up to a factor of 2 π i) half of the residue of the function in (<ref>). As in (<ref>), this residue is:
2 π i exp( i Θ^(ν)(ζ_0) -2 π R |Z_γ'|).
If we denote by exp( iΘ^(ν)(t) ) the evaluation exp( iΘ^(ν)(e^tζ_0) ), the contributions from ℓ_1 and ℓ_3 in the integral are of the form
lim_ϵ→ 0 { ∫_{-∞}^{-ϵ} dt \frac{-e^t + 1}{-e^t - 1} exp( iΘ^(ν)(t) ) exp( -π R |Z_γ'| (e^t + e^{-t}))
+ ∫_{ϵ}^{∞} dt \frac{-e^t + 1}{-e^t - 1} exp( iΘ^(ν)(t)) exp( -π R |Z_γ'| (e^t + e^{-t})) }
If we do the change of variables t ↦ -t in the first integral, (<ref>) simplifies to
∫_0^∞ dt \frac{-e^t + 1}{-e^t - 1} [ exp( iΘ^(ν)(t)) - exp( iΘ^(ν)(-t)) ] exp( -π R |Z_γ'| (e^t + e^{-t}))
(<ref>) is of the type (<ref>), with
g(ζ') = ζ'+ζ_0/ζ'(ζ'-ζ_0)[ exp( i Θ^(ν)(ζ')) - exp( i Θ^(ν)(1/ζ')) ]
Since ζ̄_0 = 1/ζ_0, the apparent pole at ζ_0 of g(ζ') is removable and the integral can be estimated by the same steepest descent methods as in (<ref>). The only difference is that the saddle point now lies at one of the endpoints. This only introduces a factor of 1/2 in the estimates (see <cit.>). If g(ζ_0) ≠ 0 in this case, the integral is just
\frac{g(ζ_0)}{2√(R|Z_γ'|)} e^{-2π R |Z_γ'| + i arg Z_γ'} + O( e^{-2π R |Z_γ'|}/R)
If g(ζ_0) = 0, then the estimate is at least of the order O( e^-2π R |Z_γ'|/R). This finishes the proof of (<ref>).
§.§ Uniform Estimates on Derivatives
Now let β = (β_1, β_2, β_3, β_4) be a multi-index in ℕ^4, and let D^β be a differential operator acting on the iterations Θ^(ν):
D^βΘ^(ν)_γ = \frac{∂^{|β|}}{∂θ_1^{β_1}∂θ_2^{β_2}∂ a^{β_3}∂ā^{β_4}}Θ^(ν)_γ
We need to uniformly bound the partial derivatives of Θ^(ν) on compact subsets:
Let K be a compact subset of U × T^2. Then
sup_{ζ, (a,θ)∈ K} |D^βΘ^(ν)| < C_{β,K}
for a constant C_β,K independent of ν.
Lemma <ref> is the case |β| := ∑β_i = 0, with ϵ as C_{0,K}. To simplify notation, we'll drop the K subscript in these constants. Assume by induction we have already done this for |β| = k - 1 derivatives and for the first ν≥ 0 iterations, the case ν = 0 being trivial. Take partial derivatives with respect to θ_s, for s = 1, 2 in (<ref>). This introduces a factor of the form
i∂/∂θ_sΘ^(ν)_γ'
By induction on ν, the above can be bounded by ‖γ'‖ C_{β'}, where β' = (1,0,0,0) or (0,1,0,0), depending on the index s. When we take the partial derivatives with respect to a in (<ref>), we add a factor of
π R/ζ'∂/∂ a Z_γ'(a) + i ∂/∂ aΘ^(ν)_γ'
in the integrals (<ref>). Similarly, a partial derivative with respect to ā adds a factor of
π R ζ' \frac{∂}{∂ā} Z̄_γ'(ā) + i \frac{∂}{∂ā}Θ^(ν)_γ'
As in (<ref>), the second term in (<ref>) and (<ref>) can be bounded by ‖γ'‖ C_{β'} for |β'| = 1. Since Z_γ' is holomorphic on U ⊂ℂ, and since K ⊂ U × T^2 is compact,
| \frac{∂^k}{∂ a^k} Z_γ'| ≤ k! ‖γ'‖ C
for all k and some constant C, independent of k and a. Likewise for ā and Z̄_γ'. Thus if we take D^βΘ^(ν+1)_γ in (<ref>) for a multi-index β with |β| = k, the right side of (<ref>) becomes:
-1/4π{∑_γ' > 0 f^γ'∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))P_γ'(a,ζ',θ) + ∑_γ' < 0 f^γ'∫_-r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))Q_γ'(a,ζ',θ) },
where each P_γ' or Q_γ' is a polynomial obtained as follows:
Each 𝒳^sf_γ'(a,ζ',Θ^(ν)) is a function of the type e^g, for some g(a,a̅,θ_1, θ_2). If {x_1, …, x_k} denotes a choice of k of the variables a,a̅, θ_1, θ_2 (possibly with multiplicities), then by the Faà di Bruno Formula:
∂^k/∂ x_1 ⋯∂ x_k e^g = e^g ∑_π∈Π∏_B ∈π∂^|B|g/∏_j ∈ B∂ x_j:= e^g P_γ'
where
* π runs through the set Π of all partitions of the set {1, …, k}.
* B ∈π means the variable B runs through the list of all of the “blocks” of the partition π, and
* |B| is the size of the block B.
The resulting monomials in P_γ' (the same holds for Q_γ') are products of the variables given by (<ref>), (<ref>), (<ref>) or their subsequent partial derivatives in θ, a, ā. For each monomial, the sum of powers and total derivatives of terms must add up to k by (<ref>). For instance, when computing
∂^3/∂θ_1 ∂ a^2𝒳^sf_γ'(a,ζ',Θ^(ν)) = ∂^3/∂θ_1 ∂ a^2 e^g,
a monomial that appears in the expansion is:
∂ g/∂θ_1[ ∂ g/∂ a]^2 = i∂/∂θ_1Θ^(ν)_γ'[ π R/ζ'∂/∂ a Z_γ'(a) + i ∂/∂ aΘ^(ν)_γ']^2
There are a total of (possibly repeated) B_k monomials in P_γ', where B_k is the Bell number, the total number of partitions of the set {1, …, k} and B_k ≤ k!. We can assume, without loss of generality, that any constant C_β is considerably larger than any of the C_β' with |β'| < |β|, by a factor that will be made explicit. First notice that since there is only one partition of {1, …, k} consisting of 1 block, the Faà di Bruno Formula (<ref>) shows that P_γ' contains only one monomial with the factor D^βΘ^(ν). The other monomials have factors D^β'Θ^(ν) for |β'| < |β|. We can do a saddle point analysis for each integrand of the form
∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))P^i_γ'(a,ζ',θ),
for P^i_γ' (or Q^i_γ') one of the monomials of P_γ' (Q_γ'). The saddle point analysis and the induction step for the previous Θ^(ν) give the estimate
C_β·const ∑_γ' |f^γ'| e^{(-2π R + δ)|Z_γ'|}
for the only monomial with D^βΘ^(ν) in it. The estimates for the other monomials contain the same exponential decay term, along with powers s of C_{β'}, C such that s · |β'| ≤ |β|, and constant terms. By making C_β significantly bigger than the previous C_{β'}, we can bound the entire (<ref>) by C_β, completing the induction step.
To see better the estimates we obtained in the previous proof, let's consider the particular case k = |β| = 3. If k = 3, there are a total of \binom{4+3-1}{3} = 20 different third partial derivatives for each Θ^(ν + 1). There are a total of 5 different partitions of the set {1, 2, 3} and correspondingly
∂^3 /∂ x_1 ∂ x_2 ∂ x_3 e^g =
e^g [ ∂^3 /∂ x_1 ∂ x_2 ∂ x_3g + ( ∂^2 /∂ x_1 ∂ x_2 g) ( ∂/∂ x_3g) + ( ∂^2 /∂ x_1 ∂ x_3 g) ( ∂/∂ x_2g) .
. + ( ∂^2 /∂ x_2 ∂ x_3 g) ( ∂/∂ x_1g) + ( ∂/∂ x_1 g ) ( ∂/∂ x_2 g ) ( ∂/∂ x_3 g ) ]
If x_1 = x_2 = x_3 = a,
∂^3/∂ a^3𝒳^sf_γ'(a,ζ',Θ^(ν)) = 𝒳^sf_γ'(a,ζ',Θ^(ν)) [ π R/ζ'∂^3/∂ a^3 Z_γ' + i ∂^3/∂ a^3Θ^(ν)_γ'.
+ 3 (π R/ζ'∂^2/∂ a^2 Z_γ' + i ∂^2/∂ a^2Θ^(ν)_γ')( π R/ζ'∂/∂ a Z_γ' + i ∂/∂ aΘ^(ν)_γ')
. + ( π R/ζ'∂/∂ a Z_γ' + i ∂/∂ aΘ^(ν)_γ')^3 ]
= 𝒳^sf_γ'(a,ζ',Θ^(ν))P(Θ^(ν)_γ')
There is one and only one term containing ∂^3/∂ a^3Θ^(ν)_γ'. By induction on ν, |∂^3/∂ a^3Θ^(ν)_γ'| < ‖γ'‖ C_β. For the estimates of
i f^γ'∫_r K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))∂^3/∂ a^3Θ^(ν)_γ',
we do exactly the same as in the proof of Lemma <ref>. Namely, move the ray r to the corresponding BPS ray ℓ_γ', possibly at the expense of gaining a residue bounded by
C_β·const| f^γ'| e^(-2π R + δ)|Z_γ'|
The sum of all these residues over those γ' such that ⟨γ, γ'⟩≠ 0 is just a fraction of C_β. After moving the contour we estimate
i f^γ'∫_ℓ_γ' K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν))∂^3/∂ a^3Θ^(ν)_γ'
As in (<ref>), we run a saddle point analysis and obtain a similar estimate (<ref>) as in Lemma <ref>. The result is that the estimate for this monomial is an arbitrarily small fraction of C_β.
If we take other monomials, like say
P^1_γ' = 3 ( π R/ζ')^2 ∂^2/∂ a^2 Z_γ'∂/∂ a Z_γ'
and estimate
3 f^γ'∂^2/∂ a^2 Z_γ'∂/∂ a Z_γ'∫_r ( π R/ζ')^2 K(ζ,ζ') 𝒳^sf_γ'(a,ζ',Θ^(ν)),
we do as before, computing residues and doing saddle point analysis. The difference with these terms is that partial derivatives of Z_γ' are bounded by (<ref>), and at most second derivatives of Θ^(ν) (for this specific monomial, there are no such terms) appear. The extra powers of π R/ζ' that appear here don't affect the estimates, since 𝒳^sf_γ' has exponential decay on π R/ζ'. The end result is an estimate of the type
C_β'_1^s_1⋯ C_β'_m^s_m C^j ·const| f^γ'| e^(-2π R + δ)|Z_γ'|
with all s_i · |β'_i| ≤ |β| and j ≤ |β|. By induction on |β|, we can make C_β big enough so that the terms (<ref>) add up to just a small fraction of C_β. This completes the illustration of the previous proof, for β = (0,0,3,0), of the fact that sup |D^βΘ^(ν + 1)| < C_β on the compact set K.
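The two counts used in this illustration are easy to confirm mechanically. The following sketch (added here purely for illustration) enumerates the set partitions behind the Faà di Bruno expansion and the multi-indices behind the derivative count:

```python
# Illustrative check of the combinatorics quoted above.
from math import comb

def partitions(s):
    # recursively enumerate all set partitions of the list s
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def bell(k):
    # Bell numbers via the Bell triangle recurrence
    row = [1]
    for _ in range(k - 1):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[-1]

print(sum(1 for _ in partitions([1, 2, 3])))  # 5 monomials for k = 3
print(bell(3))                                # B_3 = 5
print(comb(4 + 3 - 1, 3))                     # 20 distinct third partials in four variables
```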
Now we're ready to prove the main part of Theorem <ref>, that of the existence of solutions to the Riemann-Hilbert problem.
The sequence {Θ^(ν)} converges in X. Moreover, its limit Θ is piecewise holomorphic in ζ with jumps along the rays r, -r and continuous on the closed half-planes determined by these rays. Θ is C^∞ in a, ā, θ_1, θ_2.
We first show the contraction of the Θ^(ν) in the Banach space X, thus proving convergence. We will use the fact that e^x is locally Lipschitz and the Θ^(ν) are arbitrarily close to θ if R is big. In particular,
sup_ζ,θ,a | e^iΘ_γ^(ν) - e^iΘ_γ^(ν-1)| < const· sup_ζ,θ,a | Θ_γ^(ν) - Θ_γ^(ν-1)| ≤ const ‖Θ^(ν) - Θ^(ν-1)‖,
for γ one of the basis elements γ_1, γ_2. For arbitrary γ', recall that if γ' = c_1 γ_1 + c_2 γ_2, then Θ_γ'^(ν) = c_1 Θ_γ_1^(ν) + c_2Θ_γ_2^(ν). It follows from the last inequality that
sup_ζ,θ | e^iΘ_γ'^(ν) - e^iΘ_γ'^(ν-1)| < const^‖γ'‖ ‖Θ^(ν) - Θ^(ν-1)‖
We estimate
‖Θ^(ν+1) - Θ^(ν)‖ = \frac{1}{4π} ‖ ∑_γ'>0 f^γ'∫_r K(ζ,ζ') [ 𝒳^sf_γ'(a,ζ',Θ^(ν)) - 𝒳^sf_γ'(a,ζ',Θ^(ν-1)) ] + ∑_γ'<0 f^γ'∫_{-r} K(ζ,ζ') [ 𝒳^sf_γ'(a,ζ',Θ^(ν)) - 𝒳^sf_γ'(a,ζ',Θ^(ν-1)) ] ‖
≤ \frac{1}{4π}∑_γ'>0 |f^γ'| ∫_r |K(ζ,ζ')| | 𝒳^sf_γ'(a,ζ',θ)| | e^iΘ_γ'^(ν) - e^iΘ_γ'^(ν-1)|
+ \frac{1}{4π}∑_γ'<0 |f^γ'| ∫_{-r} |K(ζ,ζ')| | 𝒳^sf_γ'(a,ζ',θ)| | e^iΘ_γ'^(ν) - e^iΘ_γ'^(ν-1)|
As in the proof of Lemma <ref>, we can move the integrals to the rays ℓ_γ', introducing an arbitrarily small contribution from the residues. The differences of the form
| e^iΘ_γ'^(ν) - e^iΘ_γ'^(ν-1)|
can be expressed in terms of ‖Θ^(ν) - Θ^(ν-1)‖ by (<ref>).
The sum of the resulting integrals can be made arbitrarily small if R is big by a saddle point analysis as from (<ref>) onwards. By (<ref>):
‖Θ^(ν+1) - Θ^(ν)‖ < const ∑_γ' |f^γ'| e^{(-2π R + δ)|Z_γ'|} ‖Θ^(ν) - Θ^(ν-1)‖,
By making R big, we get the desired contraction in X and the convergence is proved.
The holomorphic properties of Θ on ζ are clear since Θ solves the integral equation (<ref>) and the right side of it is piecewise holomorphic, regardless of the integrand.
Finally, by Lemma <ref>, {D^βΘ^(ν)} is an equicontinuous and uniformly bounded family on compact sets K for any differential operator D^β as in (<ref>). By Arzela-Ascoli, a subsequence converges uniformly and hence its limit is of type C^k for any k. Since we just showed that Θ^(ν) converges, this has to be the limit of any subsequence. Thus such limit Θ must be of type C^∞ on U × T^2, as claimed.
By Theorem <ref>, the functions 𝒴_k(a, ζ, θ) := 𝒳^sf_k(a, ζ, Θ), k = 1, 2 satisfy (<ref>) and condition (<ref>). It remains to show that the functions also satisfy the reality conditions.
For 𝒴_k(a, ζ, θ) defined as above and with γ = c_1 γ_1 + c_2 γ_2 ∈Γ, we define 𝒴_γ = 𝒴_1^{c_1}𝒴_2^{c_2}. Then
\overline{𝒴_{-γ}(-1/ζ̄)} = 𝒴_γ(ζ)
Ignoring the parameters a, θ_1, θ_2 for the moment, it suffices to show
\overline{Θ_k(-1/ζ̄)} = Θ_k(ζ), k = 1, 2
We show that this is true for all Θ^(ν) defined as in (<ref>) by induction on ν. For ν = 0, Θ^(0) = (θ_1, θ_2) which are real torus coordinates and independent of ζ, so (<ref>) is true.
Assuming (<ref>) is true for ν, we obtain Θ^(ν+1) as in (<ref>). If we write ζ as te^iφ, t > 0 for some angle φ, and if we parametrize the admissible ray r as se^iρ, s > 0, then (<ref>) for ν+1 follows by induction and by rewriting the integrals in (<ref>) after the reparametrization s →1/s. An essential part of the proof is the form of the symmetric kernel
K(ζ, ζ') = dζ'/ζ'ζ' + ζ/ζ' - ζ
which inverts the roles of 0 and ∞ after the reparametrization.
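For completeness, the symmetry can be checked in one line (schematically, suppressing the conjugations): substituting ζ' = -1/s and ζ = -1/w gives
dζ'/ζ' = -ds/s and (ζ'+ζ)/(ζ'-ζ) = -(s+w)/(s-w),
so the two sign changes cancel and K(-1/w, -1/s) = K(w, s). This is exactly the invariance, exchanging 0 and ∞, used in the induction.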
To verify the last property of 𝒴_k, we prove
For 𝒴_k(a, ζ, θ) defined as above
lim_ζ→ 0𝒴_γ(ζ) / 𝒳^sf_γ(ζ)
exists and is real.
Write Θ_k^0 for lim_ζ→ 0Θ_k. In a similar way we can define Θ_k^∞. It suffices to show that Θ_k^0 - θ_k is imaginary. This follows from Lemma <ref> by letting ζ→ 0.
Observe that this and the reality condition give
\overline{Θ_k^0} = Θ_k^∞
To finish the proof of Theorem <ref>, we apply the classical arguments: given two solutions 𝒴_k, 𝒵_k satisfying the conditions of the theorem, the functions 𝒴_k 𝒵^-1_k are entire functions bounded at ∞, so each must be a constant. By the reality condition (<ref>), this constant must be real. This finishes the proof of Theorem <ref>.
§ SPECIAL CASES
In our choice of admissible rays r, -r, observe that due to the exponential decay of 𝒳^sf_k, k = 1, 2 along these rays (see (<ref>)) and the rays ℓ_γ, the jumps S_ℓ or S_r, S_-r are asymptotic to the identity transformation as ζ→ 0 or ζ→∞ along these rays. Thus, one can define a Riemann-Hilbert problem whose contour is a single line composed of the rays r, -r, the latter with orientation opposite to the one in the previous section. The jump S along the contour decomposes as S_r, S_-r^-1 in the respective rays and we can proceed as in the previous section with a combined contour.
§.§ Jump Discontinuities
In <cit.>, we will be dealing with a modification of the Riemann-Hilbert problem solved in <ref>. In particular, that paper deals with the new condition
(5') Z_γ_2(a) ≠ 0 for any a in U, but Z_γ_1 attains its unique zero at a = 0.
Because of this condition, the jumps lose the exponential decay along those rays and are no longer asymptotic to identity transformations. In fact, in <cit.> we show that this causes the jump function S(ζ) to develop discontinuities of the first kind at ζ = 0 and ζ = ∞.
In this paper we obtain the necessary theory of scalar boundary-value problems to obtain solutions to this special case of Riemann-Hilbert problems appearing in <cit.>. We consider a general scalar boundary value problem consisting in finding a sectionally analytic function X(ζ) with discontinuities at an oriented line ℓ passing through 0. If X^+(t) (resp. X^-(t)) denotes the limit from the left-hand (resp. right-hand) side of ℓ, for t ∈ℓ, they must satisfy the boundary condition
X^+(t) = G(t) X^-(t), t ∈ℓ
for a function G(t) that is Hölder continuous on ℓ except for jump discontinuities at 0 and ∞. We require a symmetry condition on these singularities: if Δ_i, i = 0 or ∞, denotes the jump of the function G near either of these points,
Δ_0 = lim_{t → 0^+} G(t) - lim_{t → 0^-} G(t), etc.
Then we assume
Δ_0 = -Δ_∞
Near 0 or ∞, we require the analytic functions X^+(ζ), X^-(ζ) to have only one integrable singularity of the form
|X^±(ξ)| < C/|ξ|^η, (0 ≤η < 1)
for ξ a coordinate centered at either 0 or ∞. By (<ref>), each function X^± is then asymptotic to 0 near the other point of the set {0, ∞}.
There exist functions X^+(ζ), X^-(ζ), analytic on the opposite half-planes of ℂ determined by the contour ℓ and continuous on the closed half-planes, such that, along ℓ, the functions obey (<ref>) and (<ref>), with a Hölder continuous jump function G(t) satisfying (<ref>). The functions X^+(ζ), X^-(ζ) are unique up to multiplication by a constant.
We follow <cit.> for the solution of this exceptional case. As seen above, we only have jump discontinuities at 0 and ∞. For any point t_0 in the contour ℓ, and a function f with discontinuities of the first kind on ℓ at t_0, we denote by f(t_0 - 0) (resp. f(t_0+0)) the left (resp. right) limit of f at t_0, according to the given orientation of ℓ.
Let
η_0 = 1/2π ilogG(0-0)/G(0+0)
Similarly, define
η_∞ = 1/2π ilogG(∞-0)/G(∞+0)
Since G obeys condition <ref>, η_0 = - η_∞. Observe that by definition, |η_0| < 1, and hence the same is true for η_∞.
Let D^+ be the region in ℂ bounded by ℓ with the positive, counterclockwise orientation. Denote by D^- the region where ℓ as a boundary has the negative orientation. We look for solutions of the homogeneous boundary problem (<ref>). To solve this, pick a point ζ_0 ∈ D^+ and introduce two analytic functions
(ζ - ζ_0)^η_0, ζ^η_0
Make a cut in the ζ-plane from the point ζ_0 to ∞ through 0, with the segment of the cut from ζ_0 to 0 wholly in D^+. Consider the functions
ω^+(ζ) = ζ^η_0, ω^- = ( ζ/ζ - ζ_0)^η_0
Due to our choice of cut, ω^+ is analytic in D^+ and ω^- is analytic in D^-. Introduce new unknown functions Y^± setting
X^± (ζ) = ω^± (ζ) Y^± (ζ)
The boundary condition (<ref>) now takes the form
Y^+(t) = G_1(t) Y^-(t), t ∈ℓ
where
G_1(t) = ω^-(t)/ω^+(t) G(t) = (t - ζ_0)^-η_0 G(t), t ∈ℓ
By the monodromy of the function (ζ - ζ_0)^{-η_0} around 0 and infinity, and since η_∞ = -η_0, it follows that G_1 is continuous on the entire line ℓ. Hence, we have reduced the problem (<ref>) to a problem (<ref>) with continuous coefficient, which can be solved with classical Cauchy integral methods.
By assumption, we seek solutions of (<ref>) with only one integrable singularity i.e. estimates of the form (<ref>). The notion of index (winding number) for G(t) in the contour ℓ is given by (see <cit.>) ϰ = ⌊η_0⌋ + ⌊η_∞⌋ + 1 = 0, so the usual method of solution of (<ref>) as
Y = exp( 1/2π i∫_ℓ K(ζ',ζ) log G_1(ζ') )
(for a suitable kernel K(ζ',ζ) that makes the integral along ℓ convergent) needs no modification. We can also see from (<ref>) that X^± has an integrable singularity at 0 (resp. ∞) if η_0 is negative (resp. positive).
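The classical Cauchy-integral recipe can be seen at work on a toy example. The sketch below is our illustration only: the jump G(t) = 1 + 0.5 e^{-t²} on the real line, the grid, and the test point are all ad hoc choices with index zero, standing in for the reduced continuous-coefficient problem. It builds X(ζ) = exp((1/2πi)∫ log G(t)/(t-ζ) dt) and checks numerically that the Plemelj jump X^+/X^- reproduces G:

```python
# Toy scalar Riemann-Hilbert problem on the real line, solved by the
# Cauchy-integral formula; the jump X^+/X^- should approximate G by Plemelj.
import numpy as np

t = np.linspace(-40.0, 40.0, 400001)
G = 1 + 0.5 * np.exp(-t**2)           # smooth, non-vanishing toy jump, -> 1 at +-inf
logG = np.log(G)

def X(z):
    # X(z) = exp( (1/(2 pi i)) * int logG(t)/(t - z) dt ), for z off the real line
    return np.exp(np.trapz(logG / (t - z), t) / (2j * np.pi))

z0, eps = 1.3, 1e-2
print(X(z0 + 1j * eps) / X(z0 - 1j * eps))   # ~ G(1.3) = 1 + 0.5*exp(-1.69)
```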
We need to show that for different choices of ζ_0 ∈ D^+ the solutions X only differ by a constant. To see this, by taking logarithms in (<ref>) it suffices to show uniqueness of solutions to the homogeneous additive boundary problem
Φ^+(t) - Φ^-(t) = 0, t ∈ℓ
and with the assumption that Φ vanishes at a point, and that at the points of discontinuity of G(ζ), Φ^± satisfies an estimate as in (<ref>). The relation (<ref>) indicates that the functions Φ^+, Φ^- are analytically extendable through the contour ℓ and, consequently, constitute a unified analytic function in the whole plane. This function has, at worst, isolated singularities, but according to the estimates (<ref>), these singularities cannot be poles or essential singularities, and hence they can only be branch points. But a single valued function with branch points must have lines of discontinuity, which contradicts the fact that Φ^+ = Φ^- is analytic (hence continuous) on the entire plane except possibly at isolated points. Therefore, the problem (<ref>) has only the trivial solution.
§.§ Zeroes of the boundary function
Because of condition <ref>, yet another special kind of Riemann-Hilbert problem arises in <cit.>. We still want to find a
sectionally analytic function X(ζ) satisfying the conditions (<ref>) with G(t) having jump discontinuities at 0, ∞ with
the properties (<ref>) and (<ref>). In this subsection, we allow the case of G(t) having zeroes of integer order on finitely many
points α_1, …, α_μ along ℓ. Thus, we consider a Riemann-Hilbert problem of the form
X^+(t) = ∏_j = 1^μ (t - α_j)^m_j G_1(t) X^-(t), t ∈ℓ
where m_j are integers and G_1(t) is a non-vanishing function as in <ref>, still with discontinuities at 0 and ∞ as in (<ref>).
For a scalar Riemann-Hilbert problem as in <ref> and with G_1(t) a non-vanishing function with discontinuities of the first kind at 0 and ∞ obeying (<ref>), there exist solutions X^±(ζ) unique up to multiplication by a constant. At all points α_j as above, both analytic functions X^+(ζ), X^-(ζ) are bounded and X^+ has a zero of order m_j.
By Lemma <ref>, there exists non-vanishing analytic functions Y^+(ζ), Y^-(ζ) on opposite half-planes D^+, D^- determined by ℓ and continuous along the boundary such that
G_1(t) = Y^+(t)/Y^-(t), t ∈ℓ
We can define
X^+(ζ) = ∏_j = 1^μ (ζ - α_j)^m_j Y^+(ζ)
X^-(ζ) = Y^-(ζ)
This clearly satisfies (<ref>) and, since Y^+ is non-vanishing on D^+, it shows that X^+ has a zero of order m_j at α_j ∈ℓ. To show uniqueness of solutions, note that if X^+, X^- are any solutions to the Riemann-Hilbert problem, we can write the boundary condition (<ref>) in the form
\frac{X^+(t)}{Y^+(t) ∏_{j = 1}^{μ} (t - α_j)^{m_j}} = \frac{X^-(t)}{Y^-(t)}, t ∈ℓ
The last relation indicates that the functions
\frac{X^+(ζ)}{Y^+(ζ) ∏_{j = 1}^{μ} (ζ - α_j)^{m_j}}, \frac{X^-(ζ)}{Y^-(ζ)}
are analytic in the domains D^+, D^- respectively and they constitute the analytical continuation of each other through the contour ℓ. The points α_j cannot be singular points of this unified analytic function, since this would contradict the assumption of boundedness of X^+ or X^-. The behavior of X^± at 0 or ∞ is that of Y^±, so by Liouville's Theorem,
\frac{X^+(ζ)}{Y^+(ζ) ∏_{j = 1}^{μ} (ζ - α_j)^{m_j}} = \frac{X^-(ζ)}{Y^-(ζ)} = C
for C a constant. This forces X^+(ζ), X^-(ζ) to be of the form (<ref>), (<ref>).
amsplain
http://arxiv.org/abs/1701.08177v1 | 20170127192503 | Convective Quenching of Field Reversals in Accretion Disc Dynamos | ["Matthew S. B. Coleman", "Evan Yerger", "Omer Blaes", "Greg Salvesen", "Shigenobu Hirose"] | astro-ph.HE | ["astro-ph.HE"]
Convective Quenching of Field Reversals in Accretion Disc Dynamos
Matthew S. B. Coleman, Evan Yerger, Omer Blaes, Greg Salvesen, Shigenobu Hirose
Accepted —. Received —; in original form —
=========================================================================================================
Vertically stratified shearing box simulations of magnetorotational turbulence
commonly exhibit a so-called butterfly diagram of quasi-periodic azimuthal
field reversals. However, in the presence of hydrodynamic convection, field
reversals no longer occur. Instead, the azimuthal field strength fluctuates
quasi-periodically while maintaining the same polarity, which can
either be symmetric or antisymmetric about the disc midplane.
Using data from the simulations
of <cit.>, we demonstrate that the lack of field reversals in the presence
of convection is due to hydrodynamic mixing of magnetic field from the
more strongly magnetized upper layers into the midplane, which then annihilate
field reversals that are starting there.
Our convective simulations differ in several respects from those reported in previous work by others, in which stronger magnetization likely plays a more important role than convection.
accretion, accretion discs, dynamos, convection — MHD — turbulence — stars: dwarf novae.
§ INTRODUCTION
As a cloud of gas contracts under the influence of gravity, it is likely to reach a point where net rotation dominates the dynamics and becomes a bottleneck restricting further collapse. This scenario naturally leads to a disc structure, thus explaining why accretion discs are so prevalent in astrophysics. The question of how these discs transport angular momentum to facilitate accretion still remains. For sufficiently
electrically conducting
discs, there is a reasonable consensus that the magnetorotational instability (MRI) is the predominate means of transporting angular momentum, at least on local scales <cit.>. There has also been significant work on understanding
non-local mechanisms of angular momentum transport, such as spiral waves <cit.>. While these global structures may be important in discs with low conductivity, local MRI simulations of fully ionized accretion discs produce values of α consistent with those inferred from
observations for accretion discs in binary systems.
This is even true for the case of dwarf novae, for which MRI simulations
lacking net vertical magnetic flux previously had trouble with matching
observations <cit.>.
This is because hydrodynamic convection
occurs in the vicinity of the hydrogen ionization regime, and enhances
the time-averaged <cit.> alpha-parameter <cit.>.
Enhancement of MRI turbulent stresses by convection was also independently claimed by <cit.>.
Whether this enhancement is enough to reproduce observed dwarf nova light curves remains
an unanswered question, largely due to uncertain physics in the quiescent
state and in the propagation of heating and cooling fronts <cit.>.
The precise reason as to why convection enhances the turbulent stresses
responsible for angular momentum transport is still not fully understood.
<cit.> conducted simulations with fixed thermal diffusivity and impenetrable vertical boundary conditions, and
found that when this diffusivity was low, the time and horizontally-averaged
vertical density profiles became very flat or even slightly inverted, possibly due to
hydrodynamic convection taking place. They
suggested that either these flat profiles, or the overturning convective
motions themselves, might make the magnetohydrodynamic dynamo more efficient.
<cit.> used radiation MHD simulations with outflow vertical boundary conditions, and the hydrogen ionization
opacities and equation of state that are relevant to dwarf novae. They found
intermittent episodes of convection separated by periods of radiative
diffusion. The beginning of the convective episodes were associated with
an enhancement of energy in vertical magnetic field relative to the horizontal
magnetic energy, and this was then followed by a rapid growth of
horizontal magnetic energy. These authors therefore suggested that
convection seeds the axisymmetric magnetorotational instability, albeit in
a medium that is already turbulent. In addition, the phase lag between
stress build up and heating which causes pressure to build up also contributes
to an enhancement of the alpha parameter.
The mere presence of vertical hydrodynamic convection is not sufficient
to enhance the alpha parameter, however; the Mach number of the convective
motions also has to be sufficiently high, and in fact the alpha parameter
appears to be better correlated with the Mach number of the convective motions
than with the fraction of heat transport that is carried by convection
<cit.>.
Hydrodynamic convection does not simply enhance the turbulent stress
to pressure ratio, however.
It also fundamentally alters the character of the MRI dynamo. In the
standard weak-field MRI, vertically stratified shearing box simulations
exhibit
quasi-periodic field reversals of the azimuthal magnetic field
(B_y) with periods of ∼ 10 orbits <cit.>. These reversals start near the
midplane and propagate outward making a pattern (see top-left panel of
Figure <ref> below) which resembles a time inverse of the solar sunspot
butterfly diagram.
The means by which these field reversals propagate away from the midplane is
likely the buoyant advection of magnetic flux tubes
<cit.>, and many studies have also suggested that magnetic buoyancy is important in accretion discs <cit.>. Magnetic buoyancy
is consistent with the Poynting flux which tends to be oriented
outwards (see top-right panel of Fig. <ref>), and we
give further evidence supporting this theory below.
While this explains how field reversals propagate through the disc, it does not explain how these magnetic field reversals occur in the first place, and despite numerous dynamo models there is currently no consensus on the physical mechanism driving the reversals <cit.>.
However, in the presence of convection,
the standard pattern of azimuthal field reversals is disrupted.
Periods of convection appear to be characterized by longer term
maintenance of a particular azimuthal field polarity, and this persistent
polarity can be of even <cit.> or odd parity with respect to the disk midplane.
As we discuss in this paper, the simulations of <cit.> also exhibit
this pattern of persistent
magnetic polarity during the intermittent periods of convection, but the
field reversals associated with the standard butterfly diagram return during
the episodes of radiative diffusion (see Fig. <ref> below).
Here we exploit this intermittency to
try and understand the cause of the persistent magnetic polarity in the
convective episodes. We demonstrate that this is due to hydrodynamic
mixing of magnetic field from strongly magnetized regions at high altitude
back toward the midplane.
This paper is organized as follows.
In Section 2 we discuss the butterfly diagram in detail and how it changes character when convection occurs. In Section 3 we describe magnetic buoyancy and the role it plays in establishing the butterfly diagram,
and the related thermodynamics.
We explain how convection acts to alter these effects in Section 4. The implications of this work are discussed in Section 5, and our results are summarized in Section 6.
§ THE BUTTERFLY DIAGRAM
To construct the butterfly diagram and explore its physical origin, it is useful to
define the following quantities related to some fluid variable f: the horizontal
average of this quantity, the variation with respect to this horizontal average,
and a version of the variable that is smoothed in time over one orbit. These are
defined respectively by
⟨f⟩(t,z) ≡\frac{1}{L_xL_y}∫_{-L_x/2}^{L_x/2} dx ∫_{-L_y/2}^{L_y/2} dy f(t,x,y,z),
δf ≡ f - ⟨f⟩,
{f}_t ≡\frac{∫_{t-1/2}^{t+1/2} f(t') dt'}{1 orbit}.
Here L_x, L_y, and L_z are the radial, azimuthal and vertical extents of the simulation domain, respectively (listed in Table <ref>). Additionally we define the quantity f_ conv as a means to estimate the fraction of vertical energy transport which is done by convection:
f_ conv(t) ≡{∫{<(e+E)v_z>}_t sign(z)<P_th>dz∫{<F_ tot,z>}_t sign(z)<P_th>dz}_t,
where e is the gas internal energy density, E is the radiation energy density, v_z is the vertical velocity, P_ th is the thermal pressure (gas plus radiation), and F_ tot,z is the total energy flux in the vertical direction, including Poynting
and radiation diffusion flux.
These quantities will assist us in analyzing and discussing the interactions between convection and dynamos in accretion discs.
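In practice these diagnostics reduce to a few array reductions over simulation snapshots. The sketch below is our illustration only: the array layout f[z, y, x], the uniform grid, and the omission of the one-orbit time smoothings {·}_t are simplifying assumptions, not the analysis code used for the paper.

```python
# Minimal sketch of the horizontal average <f> and a snapshot version of f_conv,
# assuming fields are stored as arrays of shape (nz, ny, nx) on a uniform grid.
import numpy as np

def horizontal_average(f):
    # <f>(z): average over the two horizontal (y, x) axes
    return f.mean(axis=(1, 2))

def f_conv_snapshot(e, E, vz, F_tot_z, P_th, z):
    # pressure-weighted ratio of the advected energy flux to the total flux;
    # the one-orbit smoothings {.}_t of the text are omitted here
    w = np.sign(z) * horizontal_average(P_th)
    adv = np.trapz(horizontal_average((e + E) * vz) * w, z)
    tot = np.trapz(horizontal_average(F_tot_z) * w, z)
    return adv / tot
```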
The butterfly diagram is obtained by plotting ⟨B_y⟩ as a function of time and distance from the disc midplane (see left frames of Fig. <ref>). The radiative simulation ws0429 <cit.> shows the standard pattern of field reversals normally associated with the butterfly diagram, which appear to start at the midplane and propagate outwards. This outward propagation of magnetic field is consistent with the Poynting flux (also shown in Fig. <ref>), which generally points outwards away from the midplane.
When simulations with convection are examined (e.g. ws0446 listed in Table <ref>), however, the butterfly diagram looks
completely different (see bottom left frame of Fig. <ref>), as first discussed
by <cit.>. Similar to the lack of the azimuthal magnetic field reversals found
by these authors, we find that when convection is present in the <cit.>
simulations, there is also a lack of field reversals.
Additionally, we find that the azimuthal magnetic field in the
high altitude “wings” of the butterfly diagram
is better characterized by quasi-periodic pulsations, rather
than quasi-periodic field reversals. These pulsations have roughly the same period as the field reversals found
in radiative epochs.
For example, the convective simulation ws0446 shown in Fig. <ref> has a radiative
epoch where field reversals occur (centered near 55 orbits), and the behavior of this
epoch resembles that of the radiative simulation ws0429. However, during convective
epochs where f_ conv is high, the field maintains its polarity and pulsates with a period of ∼10 orbits. In fact, the
only time field reversals occur is when f_ conv dips to low values[As discussed in <cit.> f_ conv can be slightly negative. This can happen when energy is being advected inwards to the disc midplane.], indicating
that radiative diffusion is dominating convection.
This lack of field reversals during convective epochs locks the vertical structure of
B_y into either an even parity or odd parity state, where B_y maintains sign across
the midplane or it changes sign, respectively. (Compare orbits 10-40 to orbits 70-100
in the bottom left panel of Fig. <ref>). This phenomenon of the parity of
B_y being held fixed throughout a convective epoch shall henceforth be referred to as
parity locking.
During even parity epochs (e.g. orbits 10-40 and 120-140 of ws0446), there are field
reversals in the midplane, but they are quickly quenched, and what field concentrations
are generated here do not migrate away from the midplane as they do during radiative
epochs.
Also during even parity convective epochs, the Poynting flux tends to be oriented inwards roughly half way between the photospheres and the midplane. For odd parity convective epochs the behavior of the Poynting flux is more complicated but is likely linked to the
motion of the B_y=0 surface.
In summary, we seek to explain the following ways in which convection alters the
butterfly diagram:
* Magnetic field reversals near the midplane are quickly quenched
during convective epochs.
* Magnetic field concentrations do not migrate away
from the midplane during convective epochs as they do during radiative epochs.
* During convective epochs, the magnetic field in the
wings of the butterfly diagram
is better characterized by quasi-periodic pulsations, rather
than quasi-periodic field reversals, with roughly the same period.
* B_y is held fixed in either an odd or even parity state during convective
epochs.
§ THERMODYNAMICS AND MAGNETIC BUOYANCY
Much like in <cit.>, we find that that during radiative epochs, nonlinear
concentrations of magnetic field form in the midplane regions, and these concentrations
are underdense and therefore buoyant. The resulting upward motion of these field
concentrations is the likely cause of the vertically outward moving field pattern observed in
the standard butterfly diagram. In our simulations, this magnetic buoyancy appears to
be more important when radiative diffusion, rather than convection, is the predominate
energy transport process. This is due to the different opacities and rates of radiative
diffusion between these two regimes, which alter the thermodynamic conditions of the
plasma.
When the disc is not overly opaque, and convection is therefore never present, temperature
variations (δ T) at a given height are rapidly suppressed by radiative diffusion.
This causes horizontal variations in gas pressure (δ P_ gas) and mass density (δρ) to be highly correlated (see Fig. <ref>), and allows
us to simplify our analysis by assuming δ T=0.
This should be contrasted with convective simulations
(see Fig. <ref>) which show a much noisier relation and show a tendency towards adiabatic fluctuations during convective epochs.
By computing rough estimates of the thermal time we can see how isothermal and adiabatic
behaviour arise for radiative and convective epochs, respectively.
The time scale to smooth out temperature fluctuations over a length scale Δ L
is simply the photon diffusion time times the ratio of gas internal energy density
e to photon energy density E,
t_ th≃\frac{3κ_ R ρ (Δ L)^2}{c}\frac{e}{E}.
For the midplane regions of the radiative simulation ws0429 at times 75-100 orbits,
the density ρ≃7×10^-7g cm^-3,
e≃2× 10^7 erg cm^-3, E≃9×10^5 erg cm^-3, and
the Rosseland mean opacity κ_R≃10 cm^2 g^-1.
Hence, t_ th≃30(Δ L/H)^2 orbits. Radiative diffusion is therefore
extremely fast in smoothing out temperature fluctuations on scales of order several
tenths of a scale height, and thus horizontal fluctuations are roughly isothermal.
Isothermality (T = ⟨T⟩) in combination with pressure equilibrium (P_ tot = ⟨P_ tot⟩) leads to the following equation:
⟨P_ tot⟩ = \frac{ρ k ⟨T⟩}{μ m_p} + P_ mag,
where radiation pressure has been neglected, as P_ rad≪ P_ gas.
Thus, during radiative epochs, it is clear that regions of highly concentrated magnetic
field (e.g. flux tubes) must be under-dense. Figure <ref> confirms this
for the radiative simulation ws0429 by depicting a 2D histogram of magnetic pressure
and density fluctuations. A clear anticorrelation is seen which extends up to very
nonlinear concentrations of magnetic field, all of which are underdense. This
anticorrelation was also observed in radiation pressure dominated simulations
appropriate for high luminosity black hole accretion discs in <cit.>.
This anti-correlation causes the buoyant rise of magnetic field which would explain
the outward propagation seen in the butterfly diagram and is also consistent with the
vertically outward Poynting flux (see top panels of Figure <ref>).
On the other hand, for the midplane regions of the convective simulation ws0446 at
the times 80-100 orbits, ρ≃2×10^-7g cm^-3,
e≃3×10^6 erg cm^-3, E≃1×10^3 erg cm^-3, and
κ_ R≃7×10^2 cm^2 g^-1.
Hence, t_ th≃4×10^4(Δ L/H)^2 orbits. All fluctuations in
the midplane regions that are resolvable by the simulation are therefore roughly adiabatic.
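The contrast between the two regimes follows directly from the scaling above. A quick numerical check (using the one-significant-figure midplane values quoted in this section, in cgs units; an illustration we add, not a calculation from the paper) gives a convective-to-radiative thermal-time ratio of order 10^3, consistent with the 30 versus 4×10^4 orbit estimates to within the rounding of the inputs:

```python
# Thermal-time scaling t_th ~ (3 kappa rho (dL)^2 / c) * (e / E), cgs units,
# evaluated per unit (dL)^2 for the midplane values quoted in the text.
c = 3e10
rad  = dict(kappa=10.0, rho=7e-7, e=2e7, E=9e5)  # ws0429, radiative epoch
conv = dict(kappa=7e2,  rho=2e-7, e=3e6, E=1e3)  # ws0446, convective epoch

def t_th_prefactor(p):
    return 3 * p['kappa'] * p['rho'] / c * (p['e'] / p['E'])

ratio = t_th_prefactor(conv) / t_th_prefactor(rad)
print(f"convective/radiative thermal-time ratio ~ {ratio:.0f}")  # ~3e3
```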
Perhaps somewhat coincidentally, Γ_1≈1.3 in the midplane regions of
the convective simulation, so the pressure-density fluctuations, even though
adiabatic, are in any case close to an isothermal relationship[
This reduction in the adiabatic gradient within the hydrogen ionization transition
actually contributes significantly to establishing a convectively unstable situation
in our dwarf nova simulations. We typically find that the adiabatic temperature gradient
∇_ ad within
the hydrogen ionization transition is significantly less than the value 0.4 for a monatomic gas. In fact, the gas pressure weighted average value of
∇_ ad can be as low as 0.18. For ∼ 60% of the convective simulations
of <cit.> and <cit.>, the temperature gradient ∇ is superadiabatic
but less than 0.4 during convective epochs.].
However, the biggest difference between the radiative and convective cases
is caused by the departure from isothermality in convective epochs, allowing for the possibility of highly magnetized regions to be overdense. This leads to much larger scatter in the probability distribution of the density perturbations in convective epochs.
How this affects magnetic buoyancy in
radiative and
convective epochs will be discussed
in detail in the next section.
§ EFFECTS OF CONVECTION
In this section we lay out the main mechanisms by which convection acts to modify the
dynamics of the dynamo and thereby fundamentally alter the large scale magnetic field
structure in the simulations.
§.§ Mixing from the Wings
As a convective cell brings warm underdense plasma from the midplane outward towards
the photosphere (i.e. the wing region of the butterfly diagram), it must also circulate
cold overdense material from the wing down towards the midplane.
As is typical of stratified shearing box simulations of MRI turbulence (e.g. <cit.>), the horizontally and time-averaged magnetic energy density peaks away from the midplane, and the surface photospheric regions are magnetically dominated.
Dense fluid parcels that sink down toward the midplane are therefore likely to carry
significant magnetic field inward.
These fluid parcels that originated from high altitude can actually be identified
in the simulations because the high opacity, which contributes to the onset of
convection, prevents cold fluid parcels from efficiently thermalizing with their
local surroundings. Hence they retain a lower
specific entropy compared to their surroundings as they are brought to the midplane by
convective motions. We therefore expect negative specific entropy fluctuations in the
midplane regions to be correlated with high azimuthal magnetic field strength of the
same polarity as the photospheric regions during a convective epoch.
Figures <ref> and <ref> show that this is indeed the case.
Figure <ref> shows a 2D histogram of entropy fluctuations and azimuthal
field strength B_y in the midplane regions for the even parity convective epoch
15≤ t≤40
in simulation ws0446 (see Fig. <ref>). The yellow vertical population is
indicative of adiabatic fluctuations (i.e. δ s=0) at every height which are
largely uncorrelated with B_y. However, the upper left quadrant of this figure
shows a significant excess of cells with lower than average entropy for their height
and large positive B_y. It is important to note that this corresponds to the sign of
the azimuthal field at high altitude, even though near the midplane B_y is
often negative (see bottom panel of Fig. <ref>). This is strong evidence that
convection is advecting low entropy magnetized fluid parcels from the near-photosphere
regions into the midplane.
Figure <ref> shows the same thing for the odd parity convective epoch
80≤ t≤100. Because the overall sign of the horizontally-averaged azimuthal
magnetic field flips across the midplane, cells that are near but above the
midplane are shown in the left panel, while cells that are near but below the midplane
are shown on the right. Like in Figure <ref>, there is a significant excess
of negative entropy fluctuations that are correlated with azimuthal field strength and
have the same sign as the field at higher altitudes on the same side of the
midplane. Again, these low entropy regions represent fluid parcels that have
advected magnetic field inward from higher altitude. The correlation between
negative entropy fluctuation and azimuthal field strength is somewhat weaker than in
the even parity case (Fig. <ref>), but that is almost certainly due to the
fact that inward moving fluid elements can overshoot across the midplane.
In contrast, Figure <ref> shows a similar histogram of entropy fluctuations
and B_y for the radiative simulation ws0429, and it completely lacks this correlation
between high azimuthal field strength and negative entropy fluctuation. This is in
part due to the fact that fluid parcels are no longer adiabatic, but isothermal. But
more importantly, it is because there is no mixing from the highly magnetized regions
at high altitude down to the midplane.
Instead, the tight crescent shaped correlation of Figure <ref> arises simply
by considering the linear theory of isothermal, isobaric fluctuations at a particular
height. Such fluctuations have perturbations in entropy given by
δ s = \frac{k}{μ m_p}\frac{B^2 - ⟨B^2⟩}{2⟨P_ gas⟩}.
This is shown as the dotted line in Figure <ref>, and fits the observed
correlation very well.
The inward flux of magnetic energy from high altitude is also energetically large enough to quench field reversals in the midplane regions. To demonstrate this, we examined the divergence of the Poynting flux and compared it to the time derivative of the magnetic pressure.
During radiative epochs when the magnetic field is growing after a field reversal, typical values for dP_ mag/ dt in the midplane are about half of the typical value of - dF_ Poynt,z/ dz near the midplane during convective epochs. This shows that the magnetic energy being transported by the Poynting flux during convective epochs is strong enough to quench the field reversals that would otherwise exist. The sign of the divergence of the Poynting flux during convective epochs is also consistent with magnetic energy being removed from high altitude (positive) and deposited in the midplane (negative).
To conclude, by using specific entropy as a proxy for where a fluid parcel was last
in thermal equilibrium, we have shown that convection advects field inward from high
altitude, which is consistent with the inward Poynting flux seen during even parity
convective epochs (see Fig. <ref>). The lack of such clear inward Poynting flux
during the odd parity convective epochs is likely related to the movement of the
B_y=0 surface by convective overshoot across the disc midplane.
However, in both even and odd convective epochs
dF_ Poynt,z/ dz is typically a few times dP_ mag/ dt and is consistent with enough magnetic energy being deposited in the midplane to quench field reversals that would otherwise take place. This further suggests that
regardless of parity,
this convective mixing from high altitude
to the midplane is quenching field reversals in the midplane by
mixing in field of a consistent polarity.
§.§ Disruption of Magnetic Buoyancy
In addition to quenching magnetic field reversals, convection
and the associated high opacities
act to disrupt magnetic buoyancy which transports field away from the midplane, thereby preventing any reversals which do occur, from propagating vertically outwards.
The large opacities which contribute to the onset of convection also allow for thermal fluctuations on a given horizontal slice (δ T) to persist for several orbits. This breaks one of the approximations which lead to the formulation of Eqn. <ref>, allowing for the possibility for large magnetic pressures to be counterbalanced by low temperatures, reducing the anti-correlation between density and temperature.
Additionally, convective turbulence also generates density perturbations that are uncorrelated with magnetic fields, and combined with the lack of isothermality can cause fluid parcels to have both a high magnetic pressure and be overdense (see Fig. <ref>).
These overdense over-magnetized regions can be seen in the right (convective) frame of Fig. <ref> where the probability density at
δρ ≈ 0.5⟨ρ⟩, δ P_ mag ≈ ⟨P_ mag⟩
is only about one order of magnitude below its peak value. This should be contrasted with the left (radiative) frame of the same figure and with Fig. <ref> where the probability density at this coordinate is very small or zero respectively.
Hence while the overall anti-correlation between magnetic pressure and density still exists in convective epochs, indicating some magnetic buoyancy, the correlation is weakened by the presence of overdense high magnetic field regions. Magnetic buoyancy is therefore weakened compared to radiative epochs.
§.§ Parity Locking
The effects described above both prevent magnetic field reversals in the midplane and reduce the tendency for any reversals which manage to occur from propagating outwards. Therefore convection creates an environment which prevents field reversals, and leads to the parity of the field being locked in place. Due to the variety of parities seen, it appears likely
that it is simply the initial conditions when a convective epoch is initiated that set
the parity for the duration of that epoch.
§ DISCUSSION
It is important to note that we still do not understand many aspects of the MRI
turbulent dynamo. Our analysis here has not shed any light on the actual origin of
field reversals in the standard (non-convective) butterfly diagram, nor have we provided
any explanation for
the quasi-periodicities observed in both the field reversals of the standard diagram
and the pulsations that we observe at high altitude during convective epochs. However,
the outward moving patterns in the standard butterfly wings combined with the fact
that the horizontally-averaged Poynting flux is directed outward strongly suggests that
field reversals are driven in the midplane first and then propagate out by magnetic
buoyancy. On the other hand, we continue to see the same quasi-periodicity
at high altitude in convective epochs as we do in the field reversals in the radiative
epochs. Moreover, it is clear from Figure <ref> that field reversals occasionally
start to manifest in the midplane regions during convective epochs, but they simply
cannot be sustained because they are annihilated by inward advection of magnetic
field of sustained polarity. This suggests
perhaps that there are two spatially separated dynamos which are operating.
This modification of the dynamo by convection presents a challenge to dynamo models. However, potentially promising dynamo mechanisms have recently been discovered in stratified and unstratified shearing boxes.
Recently, <cit.> found that quasi-periodic azimuthal field reversals
occur even in unstratified, zero net flux shearing box simulations, provided
they are sufficiently tall (L_z/L_x ≥ 2.5).
Furthermore, they found
that the magnetic shear-current effect <cit.> was responsible for
this dynamo; however, they were not able to explain why the reversals occurred.
The shear-current effect can also apparently be present during hydrodynamic
convection <cit.>, implying that this dynamo mechanism might be
capable of persisting through convective epochs.
Most of the work on the MRI dynamo has been done with vertically stratified
shearing box simulations. One of the earliest examples of this is <cit.>, who found that multiple dynamos can work in conjunction with the MRI on different scales.
This is consistent with the findings of <cit.>, that an “indirect” larger scale dynamo should coexist with the MRI, and they propose two candidates: a Parker-type dynamo <cit.>, and a “buoyant” dynamo caused
by the Lorentz force. Furthermore, <cit.> find that the αΩ dynamo produces cycle frequencies comparable to that of the butterfly diagram, and that there is a non-local relation between electromotive forces and the mean magnetic field which varies vertically throughout the disc. This sort of non-local description may be necessary to understand how the midplane and high altitude regions differ from each other, and we hope to pursue such an analysis in future
work.
§.§ Departures from Standard Disc Dynamo: Comparison with Other Works
We find that some properties of the dynamo during convective epochs are similar to the convective simulations of <cit.>, such as prolonged states
of azimuthal magnetic field polarity and an enhancement of Maxwell stresses
compared to purely radiative simulations (see, e.g. Figure 6 of
).
However, there are some dynamo characteristics observed by <cit.> that are not present in our simulations.
For example, their simulations typically evolved to a strongly magnetized
state, something which we never find.
We also
find that in our simulations, which exhibit intermittent convection,
the time-averaged Maxwell stress in the midplane regions is approximately
the same in both the convective and radiative epochs[Although the
Maxwell stress is approximately the same between radiative and convective epochs
in a given simulation, the α parameter is enhanced during convective
epochs because the medium is cooler and the time-averaged midplane pressure is
smaller.],
and is independent of the vertical parity of the azimuthal field. In contrast,
<cit.> find substantially less Maxwell stress during odd parity epochs.
Azimuthal field reversals occasionally occur during their convective simulations, whereas we never see such reversals during our convective epochs. Finally, their simulations exhibit a strong preference for epochs of even parity, and a lack of quasi-periodic pulsations in the wings of the butterfly diagram.
Remarkably, all of these aforementioned properties of their simulations arise in strongly magnetized shearing box simulations <cit.>. We suggest here that these properties of the dynamo that <cit.> attribute to convection are actually a manifestation of strong magnetization. To demonstrate this, we start by noting that the simulations presented in <cit.> adopted: (1) impenetrable vertical boundary conditions that prevented outflows; thus, trapping magnetic field within the domain, and (2) initial configurations with either zero or non-zero net vertical magnetic flux.
We first consider the <cit.> simulations with net vertical magnetic flux. Figure 10 of <cit.> shows that for increasing net vertical flux, the strength of the azimuthal field increases and field reversals decrease in frequency with long-lived (short-lived/transitionary) epochs of even (odd) parity. No dynamo flips in the azimuthal field were seen for the strongest net flux case. Figure <ref> shows that the isothermal net vertical flux simulations of <cit.> reproduce all features of the butterfly diagrams in the convective simulations of <cit.>, for the same range of initial plasma-β. This remarkable similarity between these simulations with and without convective heat transport suggests that strong magnetization (i.e., β∼ 1 at the disc mid-plane) is responsible for the conflicts listed in the previous paragraph with the <cit.> simulations under consideration.
We now seek to understand the role of convection on dynamo behavior in the zero net vertical magnetic flux simulations of <cit.>. These simulations also developed into a strongly magnetized state and exhibited similarly dramatic departures from the standard butterfly pattern as their net flux counterparts. <cit.> demonstrated that zero net vertical flux shearing box simulations with constant thermal diffusivity and the same impenetrable vertical boundaries adopted by <cit.> lead to the following: (1) A butterfly pattern that is irregular, yet still similar to the standard pattern that is recovered for outflow boundaries. However, the box size in <cit.> was comparable to the smallest domain considered in <cit.>, which was not a converged solution. (2) Maxwell stresses that are enhanced by a factor of ∼ 2 compared to the simulation with outflow boundaries. This is likely because impenetrable boundaries confine magnetic field, which would otherwise buoyantly escape the domain <cit.>. (3) Substantial turbulent convective heat flux, which is significantly reduced when using outflow boundaries. Therefore, perhaps the enhanced convection resulting from using impenetrable boundaries is indeed responsible for the strongly magnetized state and dynamo activity seen in the “Case D” zero net flux simulation of <cit.>.
Despite starting with identical initial and boundary conditions, the zero net vertical flux simulation M4 of <cit.> does not evolve to the strongly magnetized state seen in the <cit.> simulations. The reason for this discrepancy is unclear. In an attempt to reproduce Case D in <cit.> and following <cit.>, we ran an isothermal, zero net vertical flux shearing box simulation that had an initial magnetic field, 𝐁 = B_0 sin( 2 π x / L_x), where B_0 corresponded to β_0 = 1600. This simulation, labeled ZNVF-βZ1600, had domain size ( L_x, L_y, L_z) = (24H_0, 18H_0, 6H_0) with H_0 being the initial scale height due to thermal pressure support alone, resolution 24 zones / H_0 in all dimensions, and periodic vertical boundaries that trap magnetic field, which we believe to be the salient feature of the impenetrable boundaries discussed above. Figure <ref> shows that the space-time diagram of the horizontally-averaged azimuthal magnetic field for this simulation does not reproduce Case D, but instead evolves to a weakly magnetized state with a conventional butterfly pattern.
However, we note that <cit.> found that the standard butterfly diagram is recovered when replacing impenetrable boundaries with outflow boundaries. Similarly,
<cit.> initialized two zero net vertical flux simulations with a purely azimuthal magnetic field corresponding to β_0 = 1. In simulation ZNVF-P, which adopted periodic boundary conditions that prevent magnetic field from buoyantly escaping, the butterfly pattern was not present and the azimuthal field locked into a long-lived, even parity state. However, for simulation ZNVF-O, which adopted outflow vertical boundaries, the initially strong magnetic field buoyantly escaped and the disc settled down to a weakly magnetized configuration with the familiar dynamo activity <cit.>. Therefore, vertical boundaries that confine magnetic field may dictate the evolution of shearing box simulations without net poloidal flux.
Based on the discussion above, we suggest that the dynamo behavior in the <cit.> simulations with net vertical magnetic flux is a consequence of strong magnetization and not convection. For the zero net vertical flux simulations with impenetrable vertical boundaries, the relative roles of convection vs. strong magnetization in influencing the dynamo is less clear. The main result of this paper — that convection quenches azimuthal field reversals in accretion disc dynamos — applies to the case of zero net vertical magnetic flux and realistic outflow vertical boundaries. Future simulations in this regime with larger domain size will help to determine the robustness of this result.
§.§ Quasi-Periodic Oscillations
In addition to the outburst cycles observed in dwarf novae, cataclysmic
variables in general exhibit shorter timescale
variability such as dwarf nova oscillations (DNOs) on ∼10 s time scales
and quasi-periodic oscillations (QPOs) on ∼ minute to hour
time scales <cit.>. A plausible explanation for DNOs in CVs involving
the disk/white dwarf boundary layer
has been proposed <cit.>. However, a substantial number of QPOs
remain unexplained. It is possible that the quasi-periodic
magnetic field reversals seen in the MRI butterfly diagram are responsible for
some of these QPOs and other variability. Temporally, one would expect
variations from the butterfly diagram to occur on minute-to-hour timescales:
τ_bf = 223 s × r_9^{3/2} (M/0.6 M_⊙)^{-1/2} (τ_bf/10 τ_orb) ,
where τ_bf is the period of the butterfly cycle, r_9 is the radial location of the variability in units of 10^9 cm, M is the mass of the white dwarf primary, and τ_orb is the orbital period; the final factor normalizes the butterfly period to ten orbital periods.
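As a quick numerical illustration of this scaling (our own sketch, not taken from the simulations), the following Python snippet evaluates τ_bf at a few radii; the default arguments (a 0.6 M_⊙ primary and a butterfly period of ten orbits) are simply the normalizations of the formula, not fitted values.

```python
# Hedged sketch: evaluates the tau_bf scaling above; defaults are the
# normalizations of the formula (0.6 solar masses, 10 orbits per cycle).
def tau_bf_seconds(r9=1.0, mass_msun=0.6, bf_period_in_10_orbits=1.0):
    """Butterfly-cycle period in seconds at radius r9 * 10^9 cm."""
    return 223.0 * r9**1.5 * (mass_msun / 0.6)**(-0.5) * bf_period_in_10_orbits

for r9 in (0.5, 1.0, 2.0):
    print(f"r = {r9:.1f}e9 cm -> tau_bf ~ {tau_bf_seconds(r9):.0f} s")
```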
Indeed, this suggestion has already been made
in the context of black hole X-ray binaries <cit.>, although no plausible emission mechanism to convert these field reversals into radiation has yet been identified. It is nevertheless noteworthy that quasi-periodic azimuthal field
reversals have also been seen in global accretion disk simulations with substantial coherence over
broad ranges of radii <cit.>, indicating that this phenomenon is not unique to shearing box simulations.
If QPOs in dwarf novae are in fact associated with azimuthal field reversals,
then our work here further suggests that these QPOs will differ between
quiescence and outburst, because the convective quenching of field reversals (i.e., of the butterfly diagram) occurs only in outburst; this quenching may leave an observable mark on the variability of dwarf novae.
§ CONCLUSIONS
We analyzed the role of convection in altering the dynamo in the shearing box simulations of <cit.>. Throughout this paper we explained how convection acts to:
* quickly quench magnetic field reversals near the midplane;
* weaken magnetic buoyancy which transports magnetic field concentrations away
from the midplane;
* prevent quasi-periodic field reversals, leading to quasi-periodic pulsations in the
wings of the butterfly diagram instead; and
* hold the parity of B_y fixed in either an odd or even state.
All of these are dramatic departures from how the standard quasi-periodic field reversals and resulting butterfly diagram work during radiative epochs.
The primary role of convection in disrupting the butterfly diagram is to mix magnetic field from high altitude (the wings) down into the midplane. This mixing was identified through correlations between entropy and B_y (see Figs. <ref>, <ref>). Due to the high opacity of the convective epochs, perturbed fluid parcels maintain their entropy for many dynamical times. Hence the observed low-entropy highly-magnetized fluid parcels found in the midplane must have been mixed in from the wings. The sign of B_y for these parcels also correspond to the wing which is closest and tend to oppose the sign found in the midplane, quenching field reversals there.
The high opacity which allows for the fluid parcels to preserve their entropy also allows for thermal fluctuations to be long lived. This combined with the turbulence generated by convection weakens the anti-correlation between magnetic pressure and density found in radiative simulations (contrast Figs. <ref> and <ref>) and creates an environment where some flux tubes can be overdense. This acts to weaken, but not quench, magnetic buoyancy thereby preventing the weak and infrequent field reversals which do occur from propagating outwards to the wings.
It is through these mechanisms that convection acts to disrupt the butterfly diagram and
prevent field reversals, even though it is clear that the MRI turbulent dynamo continues
to drive field reversals in the midplane regions. This results in the sign of B_y and its parity across the midplane being locked in place for the duration of a convective epoch. The quenching of
field reversals and the maintenance of a particular parity (odd or even) across the
midplane is a hallmark of convection in our simulations, and we hope that it may shed
some light on the behaviour of the MRI turbulent dynamo in general.
§ ACKNOWLEDGEMENTS
We thank the anonymous referee for a constructive report that led to improvements in this paper.
We also wish to acknowledge Tobias Heinemann, Amitiva Bhattacharjee, and Johnathan Squire for their useful discussions and insight generated from their work.
This research was supported by the United States National Science Foundation
under grant AST-1412417 and also in part by PHY11-25915.
We also acknowledge support from the UCSB Academic Senate, and the Center for
Scientific Computing from the CNSI, MRL: an NSF MRSEC (DMR-1121053) and NSF
CNS-0960316.
SH was supported by Japan JSPS KAKENHI 15K05040 and the joint research
project of ILE, Osaka University.
GS is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1602169.
Numerical calculation was
partly carried out on the Cray XC30 at CfCA, National Astronomical
Observatory of Japan, and on SR16000 at YITP in Kyoto University.
This work also used the Janus supercomputer, which is supported by the National Science Foundation (award number CNS-0821794) and the University of Colorado Boulder. The Janus supercomputer is a joint effort of the University of Colorado Boulder, the University of Colorado Denver, and the National Center for Atmospheric Research.
|
http://arxiv.org/abs/1701.07446v1 | 20170125190159 | Linear and Unconditionally Energy Stable Schemes for the binary Fluid-Surfactant Phase Field Model | [
"Xiaofeng Yang",
"Lili Ju"
] | math.NA | [
"math.NA"
] |
|
http://arxiv.org/abs/1701.07609v1 | 20170126080658 | Generating controllable type-II Weyl points via periodic driving | [
"Raditya Weda Bomantara",
"Jiangbin Gong"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"quant-ph"
] |
Department of Physics, National University of Singapore, Singapore 117543
[email protected]
Department of Physics, National University of Singapore, Singapore 117543
NUS Graduate School for Integrative Science and Engineering, Singapore 117597
Type-II Weyl semimetals are a novel gapless topological phase of matter discovered recently in 2015. Similar to normal (type-I) Weyl semimetals, type-II Weyl semimetals consist of isolated band touching points. However, unlike type-I Weyl semimetals which have a linear energy dispersion around the band touching points forming a three dimensional (3D) Dirac cone, type-II Weyl semimetals have a tilted cone-like structure around the band touching points. This leads to various novel physical properties that are different from type-I Weyl semimetals. In order to study further the properties of type-II Weyl semimetals and perhaps realize them for future applications, generating controllable type-II Weyl semimetals is desirable. In this paper, we propose a way to generate a type-II Weyl semimetal via a generalized Harper model interacting with a harmonic driving field. When the field is treated classically, we find that only type-I Weyl points emerge. However, by treating the field quantum mechanically, some of these type-I Weyl points may turn into type-II Weyl points. Moreover, by tuning the coupling strength, it is possible to control the tilt of the Weyl points and the energy difference between two Weyl points, which makes it possible to generate a pair of mixed Weyl points of type-I and type-II. We also discuss how to physically distinguish these two types of Weyl points in the framework of our model via the Landau level structures in the presence of an artificial magnetic field. The results are of general interest to quantum optics as well as
ongoing studies of Floquet topological phases.
42.50.Ct, 42.50.Nn, 03.65.Vf, 05.30.Rt
Generating controllable type-II Weyl points via periodic driving
Jiangbin Gong
December 30, 2023
================================================================
§ INTRODUCTION
Since the discovery and realization of topological insulators <cit.>, topological phases of matter have attracted a lot of interest from both theoretical and practical points of view. Topological insulators are characterized by the existence of metallic surface states at the boundaries, which are very robust against small perturbations as long as the topology is preserved. These stable edge states are linked to the topological invariant defining the topological insulator via the bulk-edge correspondence <cit.>. As a consequence of their topological properties, topological insulators are potentially useful for generating the magnetoelectric effect more efficiently than multiferroic materials, owing to the presence of the axionic term in the electrodynamic Lagrangian <cit.>. In addition, Ref. <cit.> shows that by placing a topological insulator next to a superconductor, the proximity effect will modify its metallic surface states and turn them into superconducting states. These superconducting states can in turn be used to realize and manipulate Majorana fermions, which have potential applications in the area of topological quantum computation <cit.>.
The interesting properties and potential applications of topological insulators have led to the development of other topological phases. In 2011, Ref. <cit.> discovered a gapless topological phase called Weyl semimetal. Weyl semimetals are characterized by several isolated band touching points in the 3D Brillouin zone, called Weyl points, around which the energy dispersion is linear along any of the quasimomenta forming a 3D Dirac cone. Near these Weyl points, the system can be described by a Weyl Hamiltonian, and the quasiparticle behaves as a relativistic Weyl fermion. Unlike other gapless materials such as Graphene, the Weyl points in Weyl semimetal are very robust against perturbations and cannot be destroyed easily, provided the perturbations respect both translational invariance and charge conservation <cit.>. Each Weyl point is characterized by a topological charge known as chirality. Under open boundary conditions (OBC), edge states are observed in Weyl semimetals. In particular,
a pair of edge states meets along a line connecting two Weyl points of opposite chiralities, which is called a Fermi arc <cit.>. Weyl semimetals are known to exhibit novel transport properties, such as negative magnetoresistance <cit.>, anomalous Hall effect <cit.>, and chiral magnetic effect <cit.>. In 2015, a new type of Weyl semimetal phase called type-II Weyl semimetals was discovered <cit.>. In type-II Weyl semimetals, the energy dispersion near the Weyl points forms a tilted cone. As a result, the quasiparticle near these Weyl points behaves as a new type of quasiparticle which does not respect Lorentz invariance and has thus never been encountered in high energy physics. Moreover, type-II Weyl semimetals possess novel transport properties different from normal (type-I) Weyl semimetals. For example, in type-II Weyl semimetals, chiral anomaly exists only if the direction of the magnetic field is within the tilted cone <cit.> and the anomalous Hall effect depends on the tilt parameters <cit.>.
Despite the increasing efforts to realize these topological phases, engineering a controllable topological phase is quite challenging. One proposal to attain a controllable topological phase is to introduce a driving field (time periodic term) into a system. By using Floquet theory <cit.>, it can be shown that such a driving field can modify the topology of the system's band structure. This method has been used to generate several topological phases such as Floquet topological insulators <cit.> and Floquet Weyl semimetals <cit.>. Our recent studies have also shown how a variety of novel topological phases emerge in a periodically driven system <cit.>. Note, however, that when the coupling with the driving field is sufficiently strong and the field itself is weak, it becomes necessary to treat the driving field quantum mechanically as a collection of photons. On the one hand,
the total Hamiltonian including the photons has a larger dimension; on the other hand, it becomes time independent and our intuition about static systems can be useful again. This can sometimes offer an advantage over Floquet descriptions in the classical driving field case. As a result, several works have also been done on the constructions of nontrivial topological phases induced by a quantized field <cit.>.
In this paper, we show another example of topological phase engineering via interaction with a driving field. Our starting static system is the generalized Harper model, i.e., Harper model with an off-diagonal modulation. This effectively one dimensional (1D) model has been known to simulate a Weyl semimetal phase with the help of its two periodic parameters which serve as artificial dimensions <cit.>. In our previous work <cit.>, we have shown that adding a driving term in the form of a series of Dirac delta kicks leads to the emergence of new Weyl points. Here, we consider a more realistic driving term of the form ∝cos(Ω t), with Ω being its frequency, to replace the kicking term in our previous model. As a result, our model is now more accessible experimentally. In addition, the simplicity of the model allows us to treat the driving term quantum mechanically and consider the full quantum picture of the system, which can then be compared with the semiclassical picture, i.e., by treating the particle quantum mechanically and the driving term classically. We find that when the driving term is treated classically, only type-I Weyl points emerge. However, by treating the driving term quantum mechanically, some of these type-I Weyl points may turn into type-II Weyl points. Moreover, by tuning the coupling strength, we can control the tilt of the Weyl points and the energy difference between two Weyl points. This makes it possible to generate a pair of mixed Weyl points, with one belonging to type-I while the other belongs to type-II.
This paper is organized as follows. In Sec. <ref>, we introduce the details of the model studied in this paper and set up some notation. In Sec. <ref>, we focus on the semiclassical case when the driving field is treated classically. We elucidate from both numerical and analytical perspectives how new type-I Weyl points emerge when the coupling strength is increased, and discuss its implications on the formation of edge states and quantization of adiabatic pump. In Sec. <ref>, we briefly explain the comparison with the static version of the model. In Sec. <ref>, we focus on the fully quantum version when the driving field is treated quantum mechanically. We show that the Weyl points are formed at the same locations as those in the semiclassical case. However, some of these Weyl points are now tilted and the energy at which they emerge is shifted by an amount which depends on the coupling strength. In Sec. <ref>, we briefly propose some possible experimental realizations. In Sec. <ref>, we examine a way to distinguish type-II Weyl points from type-I Weyl points in our system based on the formation of Landau levels when a synthetic magnetic field is applied <cit.>. In Sec. <ref>, we summarize our results and discuss possible further studies.
§ THE MODEL
In this paper, we focus on the following Hamiltonian,
H(t) = ∑_n=1^N-1{[J+(-1)^nλcos(ϕ_y)]|n⟩⟨ n+1 | +h.c.} + ∑_n=1^N (-1)^n [V_1+V_2cos(Ω t)] cos(ϕ_z) |n⟩⟨ n | ,
where n is the lattice site index, N is the total number of lattice sites, J and λ are parameters related to the hopping strength, V_1 is the onsite potential, V_2 represents the coupling with the harmonic driving field, and Ω=2π/T with T being the period of the driving field. The parameters ϕ_y and ϕ_z can take any value in (-π,π], so that they can be regarded as the quasimomenta along two artificial dimensions <cit.>. As a result, although Eq. (<ref>) is physically a 1D model, it can be used to simulate 3D topological phases. For example, if V_2=0, Eq. (<ref>) reduces to the off-diagonal Harper model (ODHM), which has been shown to exhibit a topological Weyl semimetal phase <cit.>. For nonzero V_2, the system is effectively coupled to a periodic driving field, and thus its topological properties are expected to change depending on the values of V_2. We shall refer to this system as the continuously driven off-diagonal Harper model (CDODHM), which is a modification of the off-diagonal kicked Harper model (ODKHM) considered in our previous work <cit.>.
Under periodic boundary conditions (PBC), Eq. (<ref>) is invariant under translation by two lattice sites. Therefore, Eq. (<ref>) can be expressed in terms of the quasimomentum k by using Fourier transform as
H(t) = ∑_k ℋ_k (t) ⊗ |k⟩⟨ k | ,
where |k⟩ is a basis state representing the quasimomentum k, and ℋ_k is the momentum space Hamiltonian given by
ℋ_k (t) = 2Jcos(k)σ_x +2λcos(ϕ_y)sin(k) σ_y +[V_1+V_2cos(Ω t)] cos(ϕ_z) σ_z
= ℋ_k,0+V_2cos(Ω t) cos(ϕ_z) σ_z ,
with σ_x, σ_y, and σ_z are Pauli matrices representing the sublattice degrees of freedom.
§ CLASSICAL DRIVING FIELD
§.§ Emergence of type-I Weyl points
Since the Hamiltonian described by Eq. (<ref>) is time periodic, its properties can be captured by diagonalizing its corresponding Floquet operator (U), which is defined as the one-period time evolution operator. In particular, under PBC, by diagonalizing the momentum space Floquet operator (𝒰_k) as a function of k, ϕ_y, ϕ_z over the whole 3D Brillouin zone, i.e., the region (-π,π]× (-π,π]× (-π,π] (with the lattice constant set to 1 for simplicity), we can obtain its Floquet band (quasienergy band). Fig. <ref> shows a typical quasienergy spectrum of the CDODHM in units where T=ħ=1 and the parameters J, λ, V_1, and V_2 are dimensionless. Here, the quasienergy (ε) is defined as the phase of the eigenvalue of the Floquet operator, i.e., 𝒰_k |ψ⟩ = exp(iε)|ψ⟩. By construction, ε is only defined modulo 2π, and thus ε=-π and ε=π are identical. Therefore, unlike the ODHM, which only exhibits band touching points at energy 0, in the CDODHM, it is possible for the two bands to touch at both quasienergy 0 and π, which is evident from Fig. <ref>.
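For concreteness, the quasienergy bands described here can be computed with a few lines of Python; the following minimal sketch (ours, with illustrative parameters J=λ=1 and V_1=V_2=4, so that the l=1 touching exists, and units T=ħ=1 as in the text) accumulates the time-ordered product of the momentum-space Hamiltonian over one period:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_k(t, k, phi_y, phi_z, J=1.0, lam=1.0, V1=4.0, V2=4.0, Omega=2*np.pi):
    """Momentum-space Hamiltonian of the CDODHM at time t."""
    return (2*J*np.cos(k)*sx + 2*lam*np.cos(phi_y)*np.sin(k)*sy
            + (V1 + V2*np.cos(Omega*t))*np.cos(phi_z)*sz)

def quasienergies(k, phi_y, phi_z, steps=400):
    """Phases of the Floquet operator, built as a time-ordered product."""
    dt = 1.0 / steps
    U = np.eye(2, dtype=complex)
    for j in range(steps):
        U = expm(-1j * dt * H_k((j + 0.5)*dt, k, phi_y, phi_z)) @ U
    return np.sort(np.angle(np.linalg.eigvals(U)))

phi_1 = np.arccos(np.pi / 4.0)   # the l = 1 touching point for V1 = 4
print(quasienergies(np.pi/2, np.pi/2, phi_1))         # degenerate at +/- pi
print(quasienergies(np.pi/2 + 0.05, np.pi/2, phi_1))  # linear gap opening
```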
Near these band touching points, time dependent perturbation theory can be applied to obtain an approximate analytical expression of the momentum space Floquet operator. Relegating the technical details to Appendix <ref>, we find that the momentum space Floquet operator around a band touching point at (k,ϕ_y,ϕ_z)=(π/2,π/2, ϕ_l), with ϕ_l=arccos(lπ/V_1), is given by
𝒰(k_x, k_y, k_z) = exp{-i{ lπ-[ V_1 k_zsin(ϕ_l)σ_z+2Jk_x J_l(l c)σ_x +2λ k_y J_l(l c)σ_y]}} ,
where k_x≡ k-π/2, k_y≡ϕ_y-π/2, k_z≡ϕ_z-ϕ_l, c=V_2/V_1, and J_l is the Bessel function of the first kind. By comparing Eq. (<ref>) with the general form 𝒰=exp[-iℋ_eff] of the momentum space Floquet operator, with ℋ_eff the momentum-space effective Hamiltonian, it is found that
ℋ_eff = lπ-[ V_1 k_zsin(ϕ_l)σ_z+2Jk_x J_l(l c)σ_x +2λ k_y J_l(l c)σ_y] .
Eq. (<ref>) is in the form of a Weyl Hamiltonian with chirality χ = -sgn[V_1Jλsin(ϕ_l)] <cit.> and quasienergy
ε = {[ ±[π-√(V_1^2k_z^2sin^2(ϕ_l)+4J^2k_x^2 J_l^2(lc)+4λ^2k_y^2 J_l^2(lc))] if l is odd; ±√(V_1^2k_z^2sin^2(ϕ_l)+4J^2k_x^2 J_l^2(lc)+4λ^2k_y^2 J_l^2(lc)) if l is even ]. .
In particular, because of the absence of any tilting term <cit.> in Eq. (<ref>), it describes a type-I Weyl Hamiltonian. Consequently, the band touching point at (k,ϕ_y,ϕ_z)=(π/2,π/2, ϕ_l) corresponds to a type-I Weyl point.
In order to verify their topological signature, Fig. <ref> shows the quasienergy spectrum of the Floquet operator associated with Eq. (<ref>) under OBC, i.e., by taking a finite N=100. Fig. <ref>a shows that two dispersionless edge states (marked by red circles and green crosses) emerge at quasienergy π connecting two Weyl points with opposite chiralities when viewed at a fixed ϕ_z. These edge states are analogous to Fermi arcs in static Weyl semimetal systems <cit.>, and they arise as a consequence of the topology of the Floquet Su-Schrieffer-Heeger (SSH) model <cit.>. When viewed at a constant |ϕ_y|<π/2, as shown in Fig. <ref>b, two edge states are shown to traverse the gap between the two Floquet bands and meet at the projection of the Weyl points onto the plane of constant ϕ_y. These edge states emerge due to the topology of two mirror copies of Floquet Chern insulators <cit.>, and disappear when |ϕ_y|>π/2, due to the topological transition from Floquet Chern to normal insulators. Floquet Fermi arcs observed in Fig. <ref>a are formed by joining these meeting points starting from the plane ϕ_y=-π/2 to the plane ϕ_y=π/2, i.e., the locations of two Weyl points with opposite chiralities. The 3D nature of the CDODHM can therefore be constructed by stacking a series of Floquet Chern insulators sandwiched by normal insulators. The Weyl points emerge at the interface separating the Floquet Chern and normal insulators.
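A minimal sketch of the corresponding OBC computation is given below: it builds the real-space H(t) of Eq. (<ref>) for a finite chain and diagonalizes the resulting Floquet operator; modes pinned at quasienergy π signal the Floquet Fermi-arc states. The chain length, step number, and parameter values are modest illustrative choices of ours.

```python
import numpy as np
from scipy.linalg import expm

def H_real(t, phi_y, phi_z, N=40, J=1.0, lam=1.0, V1=4.0, V2=4.0, Omega=2*np.pi):
    """Real-space H(t) of Eq. (1); site labels start at 1, hence (-1)**(n+1)."""
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        hop = J + (-1)**(n + 1) * lam * np.cos(phi_y)
        H[n, n + 1] = H[n + 1, n] = hop
    H += np.diag([(-1)**(n + 1) * (V1 + V2*np.cos(Omega*t)) * np.cos(phi_z)
                  for n in range(N)])
    return H

def obc_quasienergies(phi_y, phi_z, N=40, steps=200):
    dt = 1.0 / steps
    U = np.eye(N, dtype=complex)
    for j in range(steps):
        U = expm(-1j * dt * H_real((j + 0.5)*dt, phi_y, phi_z, N)) @ U
    return np.angle(np.linalg.eigvals(U))

q = obc_quasienergies(0.0, np.arccos(np.pi / 4.0))
print(np.sort(np.pi - np.abs(q))[:4])  # values near zero indicate pi modes
```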
The topological charge (chirality) of the Weyl points can also be manifested in terms of the quantization of adiabatic transport. According to our previous work <cit.>, by preparing a certain initial state and driving it adiabatically along a closed loop in the parameter space (ϕ_y and ϕ_z), the change in position expectation value after one full cycle is given by
Δ⟨ X⟩ = a χ_enc ,
where a is the effective lattice constant, which is equal to 2 in this case since one unit cell consists of two lattice sites, and χ_enc is the total chirality of the Weyl points enclosed by the loop. By following the same procedure in Ref. <cit.>, we prepare the following initial state,
| Ψ(t=0) ⟩ = 1/2π∫_-π^π |ψ_-(k,ϕ_y(0),ϕ_z(0)) ⟩ dk ,
where |ψ_-(k,ϕ_y,ϕ_z)⟩ is the Floquet eigenstate associated with the lower band in Fig. <ref>, and ϕ_y and ϕ_z are tuned adiabatically according to ϕ_y=ϕ_y,0+rcos[θ(t)+Φ] and ϕ_z=ϕ_z,0+rsin[θ(t)+Φ], with Φ a constant phase and θ(t)=2π i/M for i-1<t≤ i and 0<i≤ M. The adiabatic limit is reached by taking M very large. Fig. <ref> shows the change in position expectation value of Eq. (<ref>) after it is driven along various closed loops in parameter space. It is evident from the figure that Eq. (<ref>) is satisfied. For instance, when the loop is chosen to enclose two Weyl points with the same chirality, i.e., Fig. <ref>a, <ref>b, and <ref>e, Δ⟨ X⟩/2=± 2 after one full cycle, whereas if it encloses Weyl points with opposite chiralities or no Weyl point, i.e., Fig. <ref>c and <ref>d, Δ⟨ X⟩/2=0 after one full cycle.
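The bookkeeping of Eq. (<ref>) can be packaged into a small helper like the sketch below; the listed Weyl-point positions and chiralities are hypothetical placeholders for illustration only (in practice they follow from χ = -sgn[V_1Jλsin(ϕ_l)] and its sign flips across the Brillouin zone):

```python
import numpy as np

def predicted_pump(center, radius, weyl_points, a=2):
    """Eq. (6): displacement per cycle = a * total chirality enclosed by a
    circular loop in the (phi_y, phi_z) plane; a = 2 is the unit-cell size."""
    cy, cz = center
    chi_enc = sum(chi for (py, pz, chi) in weyl_points
                  if np.hypot(py - cy, pz - cz) < radius)
    return a * chi_enc

# two hypothetical Weyl points of equal chirality, for illustration only
pts = [(np.pi/2, +0.67, +1), (np.pi/2, -0.67, +1)]
print(predicted_pump((np.pi/2, 0.0), 1.0, pts))  # both enclosed -> 4
print(predicted_pump((np.pi/2, 2.5), 0.5, pts))  # none enclosed -> 0
```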
§.§ Comparison with the ODHM
According to our findings in Sec. <ref>, the CDODHM can host arbitrarily many type-I Weyl points, simply by increasing the parameter V_1. As V_1 increases, there are more integers l satisfying lπ≤ V_1, and hence more type-I Weyl points emerge. On the other hand, in the ODHM, i.e., the V_2=0 case, no matter what the values of the parameters J, λ, and V_1 are, there are only 8 type-I Weyl points touching at energy 0, corresponding to (k,ϕ_y,ϕ_z)=(±π/2, ±π/2, ±π/2). This can be understood as follows. If l≠ 0, then J_l(lc)=J_l(0)=0, which implies that terms proportional to Pauli matrices σ_x and σ_y in Eq. (<ref>) are missing. As a result, Eq. (<ref>) no longer describes a Weyl Hamiltonian, and the band touching point at (k,ϕ_y,ϕ_z)=(±π/2,±π/2, ϕ_l) for l≠ 0 is not a Weyl point. If however l=0, i.e., ϕ_l=ϕ_0=±π/2, then J_l(lc)=J_0(0)=1, and the terms proportional to Pauli matrices σ_x and σ_y in Eq. (<ref>) remain nonzero. Consequently, Eq. (<ref>) still describes a type-I Weyl Hamiltonian, and the band touching point at (k,ϕ_y,ϕ_z)=(±π/2,±π/2, ±π/2) corresponds to a type-I Weyl point.
The emergence of the additional type-I Weyl points in the CDODHM can be understood as follows. First, we separate the time dependent and independent part of Eq. (<ref>). The time independent part is simply the ODHM momentum space Hamiltonian, whereas the time dependent part can be understood as its interaction with the driving field, which can in general induce transition between the two energy bands of the ODHM, and hence modify its band structure. When V_1≥ lπ, there exists a point in the Brillouin zone at which the energy difference between the two bands of the ODHM is equal to 2lπ. In the unit we choose, this energy difference also represents the transition frequency between the two energy levels, which is on resonance with the frequency of the driving field Ω=2π. As a result, the two energy levels will be dynamically connected with each other, yielding a type-I Weyl point in the quasienergy spectrum.
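This resonance picture can be checked directly from the static ODHM band splitting, as in the following sketch (with the same illustrative parameters as before): at k=ϕ_y=π/2 the splitting reduces to 2V_1|cos ϕ_z|, so the l-th resonance with Ω=2π sits exactly at ϕ_z=ϕ_l=arccos(lπ/V_1).

```python
import numpy as np

J, lam, V1 = 1.0, 1.0, 4.0

def splitting(k, phi_y, phi_z):
    """Energy splitting 2|E| of the two static ODHM bands."""
    return 2*np.sqrt((2*J*np.cos(k))**2
                     + (2*lam*np.cos(phi_y)*np.sin(k))**2
                     + (V1*np.cos(phi_z))**2)

l = 1
phi_l = np.arccos(l*np.pi / V1)
print(splitting(np.pi/2, np.pi/2, phi_l), 2*np.pi*l)  # both ~ 6.2832
```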
§ QUANTUM DRIVING FIELD
§.§ Quantized model
Quantum mechanically, the Hamiltonian of the driving field takes the form of the harmonic oscillator Hamiltonian, which can be written as
H_ field= Ω a^† a ,
where a (a^†) is the photon destruction (creation) operator, and the zero point energy 1/2Ω has been suppressed since it will not contribute to our present analysis. In the Heisenberg picture, the time dependence of a and a^† can be found by solving the following equation of motion,
da/dt = -[H_field, a]/i
= -iΩ a .
It can be immediately verified from Eq. (<ref>) that a(t)=a(0)exp(-iΩ t ) and a^†(t)=a^†(0)exp(iΩ t ). By including the quantized driving field as part of our system, the total Hamiltonian can be written as
H_tot = I_p ⊗ H_ODHM+H_field⊗ I_ODHM + H_int ,
where H_ODHM is the ODHM Hamiltonian (the time independent part of Eq. (<ref>)), I_p and I_ODHM are the identity operator in the photon and the ODHM space respectively, and H_int is the interaction Hamiltonian describing the coupling between the ODHM and the driving field. The form of H_int can be obtained from the time dependent part of Eq. (<ref>). By writing cos(Ω t) =1/2[exp(-iΩ t )+exp(iΩ t )] in Eq. (<ref>), we can identify exp(-iΩ t ) and exp(iΩ t ) terms as a(t) and a^†(t) respectively. The time dependence of a and a^† can be transferred to the corresponding basis states in the photon space (by changing from the Heisenberg to the Schrodinger picture) <cit.>, so that Eq. (<ref>) is time independent, with H_int given by
H_int = ∑_n^N (-1)^n V_2cos(ϕ_z)/2(a+a^†) ⊗ |n⟩⟨ n | .
Under PBC, the momentum space Hamiltonian associated with Eq. (<ref>) is given by
ℋ_ tot = I_p ⊗ℋ_k,0+H_field⊗ I_2 + ℋ_int ,
where ℋ_k,0 is given by Eq. (<ref>), I_2 is a 2× 2 identity matrix, and
ℋ_int = V_2cos(ϕ_z)/2(a+a^†) ⊗σ_z .
Fig. <ref> and Fig. <ref> show a typical energy band structure of the model under PBC and OBC, obtained by diagonalizing Eq. (<ref>) and Eq. (<ref>), respectively. It is observed from Fig. <ref> that in addition to the Weyl points at (k,ϕ_y,ϕ_z)=(±π/2,±π/2,±π/2), new Weyl points emerge at some other points. Fermi arc surface states connecting each pair of these new Weyl points, similar to what we observed in Sec. <ref>, are also evident from Fig. <ref>a, which confirms their topological nature. Near these new Weyl points, the energy dispersion forms a tilted cone (blue circle in Fig. <ref>b), suggesting that they might be categorized as type-II Weyl points. In Sec. <ref>, we show analytically that these type-II Weyl points emerge at the same locations where the additional type-I Weyl points would emerge were the driving field treated classically, as elucidated in Sec. <ref>. This result suggests that in the quantum limit, some type-I Weyl points will turn into type-II Weyl points.
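Spectra like those of Fig. <ref> can be reproduced, up to a photon-number cutoff, with a direct construction of Eq. (<ref>); the sketch below truncates the Fock space at n_max levels (a convergence parameter of our choosing) and evaluates the spectrum at the l=1 touching point, where levels pair up at the shifted energies π(2n-1)-πV_2^2/8V_1^2 derived in the next subsection:

```python
import numpy as np

n_max = 60                                    # Fock-space cutoff (assumption)
a = np.diag(np.sqrt(np.arange(1, n_max)), 1)  # photon annihilation operator
Ip, I2 = np.eye(n_max), np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_tot(k, phi_y, phi_z, J=1.0, lam=1.0, V1=4.0, V2=4.0, Omega=2*np.pi):
    """Momentum-space quantum Hamiltonian above, with a truncated photon space."""
    Hk0 = (2*J*np.cos(k)*sx + 2*lam*np.cos(phi_y)*np.sin(k)*sy
           + V1*np.cos(phi_z)*sz)
    Hint = 0.5*V2*np.cos(phi_z)*np.kron(a + a.conj().T, sz)
    return np.kron(Ip, Hk0) + Omega*np.kron(a.conj().T @ a, I2) + Hint

E = np.linalg.eigvalsh(H_tot(np.pi/2, np.pi/2, np.arccos(np.pi/4.0)))
# lowest level is unpaired; higher levels form near-degenerate pairs at
# pi*(2*n - 1) - pi*V2**2/(8*V1**2)
print(E[:5])
```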
§.§ Emergence of type-II Weyl points
Although Eq. (<ref>) is time independent, it now has a larger dimension since it includes the photon space. By introducing the quadrature operators X and P satisfying the commutation relation [X,P]=i and are related to a and a^† by
a = 1/√(2)(X+i P) ,
a^† = 1/√(2)(X-i P) ,
Eq. (<ref>) becomes (at k=ϕ_y=π/2)
ℋ_tot,±(π/2,π/2,ϕ_z)=2π{P^2/2+1/2[X±V_2cos(ϕ_z)/2√(2)π]^2-1/2}± V_1cos(ϕ_z) -V_2^2cos^2(ϕ_z)/8π .
Eq. (<ref>) is simply the harmonic oscillator Hamiltonian with shifted “position" expectation value. Near (k,ϕ_y,ϕ_z)=(π/2,π/2, ϕ_1 ), with ϕ_1=arccos(π/V_1), it is shown in Appendix <ref> that the energy dispersion is given by
E_n,± = π (2n-1)-π V_2^2/8V_1^2 +V_2^2/4V_1k_z sin(ϕ_1)
±√(V_1^2 sin^2(ϕ_1) k_z^2+4J^2J_1(√(n)V_2/V_1)^2 k_x^2+4λ^2 J_1(√(n)V_2/V_1)^2 k_y^2) ,
where, similar to our previous notation, k_x=k-π/2, k_y=ϕ_y-π/2, and k_z=ϕ_z-ϕ_1. Furthermore, Eq. (<ref>) will be block diagonal in the basis spanned by the eigenstates associated with Eq. (<ref>) in Appendix <ref>, where each subblock consists of 2× 2 matrix which can be written in the following form,
[ℋ_q]_n = π (2n-1)-π V_2^2/8V_1^2 +V_2^2/4V_1k_z sin(ϕ_1)-V_1 sin(ϕ_1) k_zτ_z -(2Jk_x τ_x + 2λ k_y τ_y) J_1(√(n)V_2/V_1) ,
where τ_x, τ_y, and τ_z take the form of Pauli matrices. Eq. (<ref>) is in the form of a Weyl Hamiltonian, which resembles a similarity with Eq. (<ref>) found in Sec. <ref>, apart from the extra tilting term V_2^2/4V_1k_z sin(ϕ_1) and the energy shift -π V_2^2/8V_1^2. These extra terms in turn lead to novel phenomena which are not captured if the driving field is treated classically. First, because of the tilting term, it is possible for the Dirac cone around the Weyl point described by Eq. (<ref>) to tip over at a sufficiently large matter-field coupling V_2, so that it is categorized into type-II Weyl points. According to the classification in Ref. <cit.>, this Weyl point is a type-II Weyl point if V_2>2V_1. Second, the energy shifting term will shift the energy at which the Weyl point is formed, so that it is not an integer multiple of π.
These two phenomena are the main results of this paper, which have some fascinating implications. First, they show the difference between quantum and classical treatments of light, which is one of the main interests in the studies of quantum optics <cit.>. Second, since both the tilting and energy shifting terms are proportional to V_2^2, they can be easily controlled by simply tuning V_2. Moreover, we note that these two terms will not affect the Weyl points at (k,ϕ_y,ϕ_z)=(±π/2, ±π/2, ±π/2), which can be easily verified by expanding Eq. (<ref>) up to first order near these points. By following the same procedure that leads to Eq. (<ref>), it can be shown that both the second (the energy shifting) and the third (the tilting) terms are missing. As a result, these Weyl points always correspond to type-I Weyl points and are located at a fixed energy regardless of V_2. This implies that by tuning V_2, it is possible to generate a pair of mixed Weyl points, with one belonging to type-I while the other belongs to type-II, separated by a controllable energy difference. This might serve as a good starting point to study further the properties of such mixed Weyl semimetal systems. For example, by fixing ϕ_y and ϕ_z in between a pair of mixed Weyl points and applying a magnetic field, one could explore the possibility of generating the chiral magnetic effect <cit.>, i.e., the presence of dissipationless current along the direction of the magnetic field, which is known to depend on the energy difference between two type-I Weyl points <cit.>.
Despite the difference between the semiclassical and fully quantum results described above, they share some similarities in terms of the quantization of adiabatic transport. Fig. <ref> shows the change in position expectation value when an initial state similar to Eq. (<ref>) is driven adiabatically along various closed loops by tuning ϕ_y and ϕ_z in the same manner as that elucidated in Sec. <ref>. Similar to what we observed in Sec. <ref>, the change in position expectation value after one full cycle still obeys Eq. (<ref>) regardless of the type of the Weyl points enclosed. This indicates clearly that a transition from type-I to type-II Weyl point will preserve its chirality. This makes sense since such a transition is induced by a term that does not depend on any of the Pauli matrices, and hence will not affect its chirality.
We end this section by presenting a comparison between Eq. (<ref>) and Eq. (<ref>). By identifying V_2 in Eq. (<ref>) as V_2√(n) in Eq. (<ref>), it can be immediately shown that Eq. (<ref>) will reduce to Eq. (<ref>) when V_2→ 0 while n→∞, such that V_2√(n) remains finite. In this regime, Eq. (<ref>) will be periodic with a modulus of 2π, which is the same as Eq. (<ref>). This explains why the extra tilting and energy shifting terms are not observed in the classical driving field case. Since these two terms are proportional to V_2^2, their effect will diminish as we move from the quantum to classical driving field regime. This observation can also be understood more physically as follows. In both quantum and classical field regime, the additional Weyl points emerge as a result of the resonance between the particle transition frequency and the frequency of the driving field. Since the interaction between the particle and a single photon depends on the parameter ϕ_z, it is expected in general that the modification of the band structure near the resonant points (the additional Weyl points) also depends on ϕ_z, resulting in the emergence of the tilting term in the full quantum field regime. Since the Weyl points at (k,ϕ_y,ϕ_z)=(±π/2, ±π/2, ±π/2) are not resonant with the driving field, the interaction effect will be quite small, and the ϕ_z dependence effect of the interaction will not be visible near these Weyl points, which explains the absence of the tilting term even in the full quantum field regime. The energy shifting term in the full quantum field regime is a result of the change in the energy difference between the Weyl points at (k,ϕ_y,ϕ_z)=(±π/2, ±π/2, ±π/2) and the resonant points before and after the driving field is introduced. Finally, in the classical field regime, the interaction between the particle and a single photon is very weak. Although there are infinitely many photons in the classical field case, both the tilting and energy shifting terms depend only on the interaction strength with a single photon even near the resonant points. Therefore, the most visible effect of the interaction with all the photons is to just dress the band structure near the Weyl points, which is uniform up to first order in ϕ_z.
§ DISCUSSIONS
§.§ Possible experimental realizations
There have already been several proposals to experimentally realize the Harper model in the framework of ultracold atom systems <cit.> as well as optical waveguides <cit.>. The semiclassical version of our model can be easily realized by slightly modifying some of these experimental methods to incorporate the time periodic driving field. For example, in the ultracold atom realizations of the Harper model <cit.>, which make use of a non-interacting Bose-Einstein condensate (BEC) in a 1D optical lattice, the time dependent term ∝cos(Ω t) can be obtained by linearly chirping the frequencies of two counter-propagating waves <cit.>. Meanwhile, in the optical waveguide realization proposed by Ref. <cit.>, where time is simulated by the propagation distance of the light, the time dependent term ∝cos(Ω t) can be implemented by varying the refractive index of each waveguide periodically along its length.
In order to realize the fully quantum version of our model, ultracold atom realizations of the Harper model <cit.> might be more suitable as a starting point. Interaction with a quantized driving field can be simulated by placing the non-interacting BEC systems inside a quantum LC circuit <cit.>. Alternatively, as proposed by Ref. <cit.>, optical cavity setups can be used, and a single mode photon field can be selected from a ladder of cavity modes by using a dispersive element and dielectric mirrors. The coupling strength V_2 can be tuned by varying the position of the mirrors. Finally, we note that the strong coupling regime between optical cavities and atomic gases or various qubit systems has been achieved experimentally <cit.>. This opens up many other possibilities to realize our model.
§.§ Towards possible detection of type-II Weyl points
Here we discuss one possible way to manifest type-II Weyl points and distinguish them from type-I Weyl points by applying an artificial magnetic field. It was shown recently that the tilting term in the Weyl Hamiltonian causes a "squeezing" of the Landau level solutions if the direction of the magnetic field is perpendicular to the direction of the tilt <cit.>.
Under such a magnetic field, as the Weyl points undergo a transition from type-I to type-II, the Landau levels are expected to collapse <cit.>, namely, the two bands in the vicinity of the type-II Weyl points start to overlap with each other. For our CDODHM with only one physical dimension, an artificial magnetic field <cit.> can be introduced to simulate the effect of a magnetic field in real 3D systems. For example, in order to simulate a magnetic field along the y direction, which corresponds to the vector potential 𝒜=(0,0,-Bx) in the Landau gauge, the Peierls substitution amounts to modifying ϕ_z→ϕ_z+e B x, so that Eq. (<ref>) becomes,
H(B) = ∑_n {[J+(-1)^nλcos(ϕ_y)]ĉ^†_n+1ĉ_n+h.c. }
+∑_n (-1)^n [V_1+V_2cos(Ω t)]cos(ϕ_z+eB n)ĉ_n^†ĉ_n
in the semiclassical case. It is seen above that such artificial magnetic field is achieved by a lattice-site-dependent phase modulation introduced to ϕ_z. In the quantum case, cos(Ω t)→a+a^†/2 and H_field as given by Eq. (<ref>) is added into the Hamiltonian.
By diagonalizing the Floquet operator associated with Eq. (<ref>) numerically, the quasienergy spectrum can be obtained for the semiclassical case, which is shown in Fig. <ref>. In order to make a comparison with the fully quantum case, we are focusing on the Weyl points at quasienergy π, which may turn into type-II Weyl points in the quantum regime, and hence we choose the region of the quasienergy to be in [0,2π]. As is evident from the figure, in the vicinity of the Weyl points at quasienergy π (Weyl points marked by the green dashed line), the Landau level structures remain qualitatively the same regardless of the value of the coupling strength V_2 when the lattice-site-dependent phase modulation is added. In order to understand the robustness of the Landau level structures near the Weyl points, we calculate the quasienergies associated with Eq. (<ref>) but now under such a lattice-site-dependent phase modulation. Because here we treat an effective Hamiltonian exactly like that of a Dirac Hamiltonian in the presence of a magnetic field, we easily find
ε_n≠ 0 = lπ -sgn(n) √(v_0^2 k_y^2+|n|ω_c^2) ,
ε_0 = lπ +v_0 k_y ,
where v_0= 2λ J_l(lc) and ω_c =√(4eV_1 Jsin(ϕ_l)J_l(lc)B). Eq. (<ref>) and Eq. (<ref>) imply that the quasienergy solutions are independent of ϕ_z (which is somewhat expected because eigenvalues of Landau levels should not depend on where electrons are). This explains the observation of plateaus in the vicinity of the Weyl points in
Fig. <ref>. In addition, when k_y=0 (ϕ_y=π/2), the zeroth Landau level quasienergy ε_0 is equal to an integer multiple of π. Finally, we note that the only effect of the coupling strength V_2 (c ≡ V_2/V_1) here in Eq. (<ref>) and Eq. (<ref>) is to renormalize v_0 and ω_c via the Bessel function J_l(lc), without modifying the form of the quasienergy solutions.
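For completeness, a sketch evaluating these semiclassical Landau levels at k_y=0 is given below; B and the other parameter values are illustrative assumptions, and V_2 enters only through the Bessel factor J_l(lc), consistent with the robustness just described:

```python
import numpy as np
from scipy.special import jv

J, lam, V1, V2, l, B, e = 1.0, 1.0, 4.0, 4.0, 1, 0.01, 1.0
phi_l = np.arccos(l*np.pi / V1)
v0 = 2*lam*jv(l, l*V2/V1)
wc = np.sqrt(4*e*V1*J*np.sin(phi_l)*jv(l, l*V2/V1)*B)

def eps(n, ky=0.0):
    """Semiclassical Landau levels around the l-th Weyl point."""
    if n == 0:
        return l*np.pi + v0*ky
    return l*np.pi - np.sign(n)*np.sqrt(v0**2*ky**2 + abs(n)*wc**2)

print([round(eps(n), 3) for n in (-2, -1, 0, 1, 2)])  # symmetric about l*pi
```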
In the fully quantum case, the first four bands of the energy spectrum have also been obtained numerically in Fig. <ref>. By focusing on the Weyl points along the green dotted line, which acquire a tilt as the coupling strength is tuned (i.e., these Weyl points in panel (c) have more tilting compared to those in panel (a)), it is evident that when the tilting term is not too large (the Weyl points still belong to type-I), the Landau level structures around the green dotted line in the vicinity of these Weyl points remain qualitatively the same, as shown in Fig. <ref>b. However, as the tilting term gets larger such that a transition from type-I to type-II Weyl points takes place, these Landau level structures collapse (levels start to overlap one another), as is depicted in Fig. <ref>d around the green dotted line in the vicinity of the original type-II Weyl points. By contrast, the Weyl points along the red dotted line do not acquire any tilt as the coupling strength is varied. As a result, in both Fig. <ref>b and Fig. <ref>d, the Landau level structures around the red dotted line do not change much. This observation can also be understood in terms of the Landau level solutions of the effective Hamiltonian near these Weyl points. Near the Weyl points marked by the red dotted line, the effective Hamiltonian takes the same form as Eq. (<ref>), thus leading to similar quasienergy solutions and properties (i.e., robustness of the quasienergy structures under a change in the phase parameter ϕ_z and coupling strength V_2) as Eq. (<ref>) and Eq. (<ref>) we have elucidated earlier. Near the Weyl points marked by the green dotted line, the technique introduced in <cit.> can be applied to derive the energy solutions associated with Eq. (<ref>) under the lattice-site-dependent phase modulation introduced to k_z. The derivations are not trivial <cit.> and we finally obtain
E_n,m≠ 0 = π(2n-1)-π V_2^2/8V_1^2-sgn(m)√(α^2 v_0^2 k_y^2 +|m|α^3 ω_c^2) ,
E_n,0 = π(2n-1)-π V_2^2/8V_1^2+α v_0 k_y ,
where α=√(1-β^2), β=V_2/2V_1, v_0 and ω_c are similar to those in Eq. (<ref>) and Eq. (<ref>) with J_l(lc) replaced by J_1(√(n) V_2/V_1). Due to the additional factors of α in Eq. (<ref>) and Eq. (<ref>), the spacing between the Landau levels decreases. Moreover, for type-II Weyl points, we have V_2>2V_1, which implies β>1. As a result, Eq. (<ref>) and Eq. (<ref>) become imaginary and no longer correctly describe the energy structures near such Weyl points, i.e., the Landau level solutions collapse <cit.>.
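The collapse is easy to see numerically from these expressions, as in the hedged sketch below (evaluated at k_y=0, with illustrative parameter values of ours): α is real only for β=V_2/2V_1<1, and the function returns None once the levels collapse.

```python
import numpy as np
from scipy.special import jv

def quantum_landau_levels(V1, V2, B, m_list, n=1, J=1.0, e=1.0):
    """Quantum-case Landau levels above, at ky = 0; None when beta >= 1."""
    beta = V2 / (2*V1)
    if beta >= 1.0:
        return None                      # alpha imaginary: level collapse
    alpha = np.sqrt(1 - beta**2)
    phi1 = np.arccos(np.pi / V1)
    wc2 = 4*e*V1*J*np.sin(phi1)*jv(1, np.sqrt(n)*V2/V1)*B
    E0 = np.pi*(2*n - 1) - np.pi*V2**2/(8*V1**2)
    return [round(E0 - np.sign(m)*np.sqrt(abs(m)*alpha**3*wc2), 3)
            for m in m_list]

print(quantum_landau_levels(4.0, 4.0, 0.01, [-2, -1, 0, 1, 2]))
print(quantum_landau_levels(4.0, 9.0, 0.01, [-2, -1, 0, 1, 2]))  # None
```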
The above observed Landau level collapse in the vicinity of type-II Weyl points suggests a possible detection of type-II Weyl points by using ideas borrowed from standard means such as the Shubnikov-de Haas oscillations or the scanning tunneling spectroscopy (STS) as mentioned in Ref. <cit.>. In addition, since the generation of an artificial magnetic field only involves the modification of the phase parameter ϕ_z, it should be feasible in terms of the experimental proposals elucidated in Sec. <ref>. The measurement of the Landau level structures under the introduction of such a lattice-site-dependent phase modulation thus provides a physical way to distinguish type-II from type-I Weyl points in our physically 1D model.
§ CONCLUSIONS
In this paper, we consider an extension to our previous work <cit.> to explore the generation of novel topological phases by using a more realistic driving term, i.e., in the form of a harmonic driving field. We then show that an interaction between the ODHM and a harmonic driving field leads to the emergence of additional Weyl points, similar to the ODKHM studied in <cit.>. However, the simplicity of the model considered in this paper allows us to study the system in both full quantum (quantum field) and semiclassical (classical field) pictures.
When the driving field is treated classically as a time dependent potential, we have found using Floquet theory the locations at which new Weyl points emerge. By expanding the Floquet operator around the Weyl points, we are able to show that these Weyl points belong to type-I Weyl points. The topological signatures of these Weyl points are confirmed by the existence of Fermi arc edge states connecting each pair of Weyl points of opposite chiralities when the Floquet operator is diagonalized under OBC. Furthermore, by driving a localized Wannier state along a closed loop in parameter space, the change in its position expectation value is proportional to the total chirality of the Weyl points enclosed.
When the field is treated quantum mechanically, i.e., by taking both the atom and photons as a single system, we have shown that Weyl points emerge at the same locations as those found in the classical field case. However, some of these Weyl points acquire extra tilting and energy-shifting terms which depend on the matter-light coupling strength V_2. As a result, when V_2 is sufficiently large, it is possible for some of these type-I Weyl points to transform into type-II Weyl points. In addition, since neither extra term affects the Weyl points at (k,ϕ_y,ϕ_z)=(±π/2, ±π/2, ±π/2), it is possible to generate a pair of mixed Weyl points with tunable energy difference, which opens up a possibility to realize or explore further the properties of such mixed Weyl semimetals. We have also verified that Fermi arc edge states connecting two Weyl points of opposite chiralities emerge. Moreover, via the quantization of adiabatic transport, we confirm that the chirality of the Weyl points is preserved under the transition from type-I to type-II. Possible experimental realizations have also been briefly discussed for both the semiclassical and fully quantum case. Finally, a scheme to distinguish type-II from type-I Weyl points discovered in our 1D system has also been elucidated.
Following this paper, we could now focus on studying the properties of more general Weyl semimetal systems which possess both type-I and type-II Weyl points, e.g., chiral anomaly induced transport properties, and verify them experimentally by designing an experimental realization of our model. It might also be interesting to design an experimental scheme which can realize both semiclassical and full quantum versions of our model within a single framework to observe the quantum to classical transition occurring in the model. There are some other aspects that deserve further exploration. For example, given that an interaction with a single photon mode gives rise to such controllable novel topological phases, considering multimode photon fields is expected to be even more fruitful. However, even with just a single photon mode, a possible future direction might be to consider its interaction with a topologically nontrivial many-body system (such a setup is also related to the superradiant phase transition <cit.>). Finally, it is hoped that the controllable mixed Weyl semimetal system discovered here can be useful for future devices.
Acknowledgements: We thank Longwen Zhou for helpful discussions.
§ DERIVATION OF EQ. (<REF>)
Consider a rotating frame which corresponds to a transformation |ψ⟩→ R|ψ⟩, where R=exp(iV_2cos(ϕ_z)sin(Ω t)/ħΩσ_z). The Hamiltonian in this new frame is given by
ℋ_k' = [2Jcos(k) cos(2a)+2λsin(k)cos(ϕ_y) sin(2a)]σ_x
+ [-2Jcos(k) sin(2a)+2λsin(k)cos(ϕ_y) cos(2a)]σ_y +V_1 cos(ϕ_z) σ_z ,
where a=V_2cos(ϕ_z)sin(Ω t)/ħΩ. Near a band touching point at (k,ϕ_y,ϕ_z)=(π/2,π/2, ϕ_l), where ϕ_l is as defined in the main text, Eq. (<ref>) can be approximated as
ℋ_k' ≈ {-2Jk_xcos[lc sin(Ω t)]-2λ k_y sin[lcsin(Ω t)]}σ_x
+{ 2Jk_xsin[lc sin(Ω t)]-2λ k_y cos[lcsin(Ω t)]}σ_y +[lπ-V_1 k_zsin(ϕ_l)]σ_z
= ℋ_pert+ [lπ-V_1 k_zsin(ϕ_l)]σ_z ,
where k_x, k_y, k_z, and c are as defined in the main text. By applying the time dependent perturbation theory, a one period time evolution operator in the interaction picture can be obtained as <cit.>,
U_I (1,0) ≈ I-∫_0^1 exp{i[lπ-V_1 k_zsin(ϕ_l)]t }ℋ_pertexp{-i[lπ-V_1 k_zsin(ϕ_l)]t } dt
= I+i∫_0^1 dt (2Jk_xσ_x+2λ k_y σ_y) cos{ 2[lπ-V_1 k_zsin(ϕ_l)]t +lcsin(Ω t)}
+i∫_0^1 dt (2Jk_xσ_x-2λ k_y σ_y) sin{ 2[lπ-V_1 k_zsin(ϕ_l)]t+lcsin(Ω t)}
= I+i(2Jk_xσ_x+2λ k_y σ_y) J_l-V_1 k_z sin(ϕ_l)/2π(lc)
≈ I+i(2Jk_xσ_x+2λ k_y σ_y) J_l(lc) .
Finally, in order to obtain the Floquet operator, which is interpreted as a one period time evolution operator in the Schrodinger picture, i.e., 𝒰(k_x,k_y,k_z)=U(1,0), we need to convert Eq. (<ref>) back to the Schrodinger picture. Therefore,
U(1,0) ≈ exp{-i [lπ-V_1 k_zsin(ϕ_l)]σ_z}[I+i(2Jk_xσ_x+2λ k_y σ_y) J_l(lc)]
≈ exp(-ilπ)[I+i V_1 k_zsin(ϕ_l)σ_z][I+i(2Jk_xσ_x+2λ k_y σ_y) J_l(lc)]
≈ exp(-ilπ)[I+i V_1 k_zsin(ϕ_l)σ_z+i(2Jk_xσ_x+2λ k_y σ_y) J_l(lc)]
≈ exp{-i{ lπ-[ V_1 k_zsin(ϕ_l)σ_z+2Jk_x J_l(l c)σ_x +2λ k_y J_l(l c)σ_y]}} ,
which proves Eq. (<ref>).
§ DERIVATION OF EQ. (<REF>)
We start by introducing the following unit vectors,
n̂ = 2Jcos(k)x̂+2λsin(k)cos(ϕ_y)ŷ+V_1cos(ϕ_z)ẑ/1/2ω ,
m̂ = -2λsin(k) cos(ϕ_y) x̂+2Jcos(k)ŷ/1/2ω' ,
l̂ = -ω'/ωẑ+V_1cos(ϕ_z)2λsin(k)cos(ϕ_y)ŷ+2Jcos(k)x̂/1/4ωω' ,
where x̂, ŷ, and ẑ are unit vectors along x, y, and z direction, 1/2ω =√(1/4ω'^2+V_1^2cos^2(ϕ_z)) and 1/2ω' =√(4J^2cos^2(k)+4λ^2sin^2(k)cos^2(ϕ_y)). It can be verified that l̂, m̂, and n̂ are three unit vectors that form a right-handed system similar to x̂, ŷ, and ẑ. Next, we define σ_± =l̂·σ±im̂·σ. If |ψ_±⟩ is the eigenstate of n̂·σ corresponding to eigenvalue ± 1, then σ_+ |ψ_+ ⟩ = σ_- |ψ_- ⟩ = 0, σ_+ |ψ_-⟩ =2 c_+ |ψ_+ ⟩ and σ_- |ψ_+⟩ =2 c_- |ψ_- ⟩, where c_± is a unit complex numbers which depends on the representation of the eigenstates. It can also be shown that σ_± and n̂·σ satisfy the following algebra,
[σ_-, σ_+] = -4n̂·σ ,
[ σ_± , n̂·σ] = ∓ 2σ_± .
In terms of the notations defined above, Eq. (<ref>) can be recast in the following form,
ℋ_q = 1/2ωn̂·σ +Ω a^† a -V_2cos(ϕ_z)ω'/4ω(a+a^†)[σ_+ + σ_- -4V_1cos(ϕ_z)/ω'n̂·σ] .
In X representation and in the basis { |ψ_+⟩ , |ψ_-⟩}, where X is one of the quadrature operators defined in the main text, the energy eigenvalue equation associated with Eq. (<ref>) near (k,ϕ_y,ϕ_z)=(π/2, π/2,ϕ_1 ), up to first order in k_x, k_y, and k_z defined in the main text, can be written as
([ A(x)+B(x) C(x)ω' c_-; C(x) ω' c_+ A(x)-B(x) ]) ([ f_1(x); f_2(x) ]) = E ([ f_1(x); f_2(x) ]) ,
where x is the eigenvalue of X, E is the energy eigenvalue, and
A(x) = 1/2Ω(x^2 -∂^2/∂ x^2-1) ,
B(x) ≈ 1/2ω +V_1V_2[π^2/V_1^2+2π k_z/V_1sin(ϕ_1)]/ω√(2) x ,
C(x) ≈ -V_2/4V_1√(2) x .
Since k_x and k_y are very small quantities, ω' is also very small by construction and thus the off-diagonal terms in Eq. (<ref>) can be treated as perturbations. Without the off-diagonal terms, Eq. (<ref>) reduces to two uncoupled harmonic oscillator eigenvalue equations, which can readily be solved for the energy E^(0) and the eigenfunctions f_1(x) and f_2(x). In particular,
E_n,±^(0) = π (2n± 1) ∓ V_1 k_zsin(ϕ_1) -π V_2^2/8V_1^2 +V_2^2/4V_1k_z sin(ϕ_1) ,
where n is a non-negative integer.
To understand the effect of the off-diagonal term, we define the following operators,
𝒜_+ = 1/√(2)[(X+√(2)V_2/2V_1)+i P] ,
𝒜_- = 1/√(2)[(X-√(2)V_2/2V_1)+i P] .
The off-diagonal perturbation term and the unperturbed diagonal term can then be written as, respectively,
H_off = -V_2/8V_1ω' (𝒜_+^† +𝒜_-)(σ_+ +σ_-) ,
H_on = [π -V_1sin(ϕ_1)k_z] ([ 1 0; 0 -1 ]) -π V_2^2/8V_1^2+V_2^2/4V_1k_z sin(ϕ_1)+2π([ 𝒜_+^†𝒜_+ 0; 0 𝒜_-^†𝒜_- ]) .
Since ω' is a very small quantity, rotating wave approximation (RWA) could be made if 𝒜_+^† and σ_-, as well as 𝒜_- and σ_+, are governed by approximately the same frequency of evolution. Therefore, let's first analyze the equations of motion for 𝒜_± and σ_± (in the interaction picture):
dσ_±/dt = [σ_±, H_on]/i
≈ ∓ 2πσ_±/i∓2V_2π/i V_1(𝒜_+^† +𝒜_-)σ_± ,
d𝒜_±/dt = [𝒜_±, H_on]/i
= 2π𝒜_±/i∓π V_2/i V_1+π V_2/i V_1([ 1 0; 0 -1 ]) .
Let's first assume V_2 to be sufficiently small, so that the solutions to the above equations are approximately σ_± (t) ≈σ_± (0) e^±i 2π t and 𝒜_± (t) ≈𝒜_± (0) e^- i 2π t. RWA can then be invoked, and the total Hamiltonian can be divided into subblocks spanned by the states |n,-⟩ and |n-1,+⟩, which are eigenstates of H_on corresponding to E_n,-^(0) and E_n-1,+^(0) as given in Eq. (<ref>) respectively. The reduced 2× 2 Hamiltonian in {|n,-⟩, |n-1,+⟩} basis is,
[H_q]_n =̂ ([ E_n-1,+^(0) -V_2/4V_1ω' √(n) c_+; -V_2/4V_1ω' √(n) c_- E_n,-^(0) ]) .
By considering a representation where c_-=4Jk_x/ω'+i4λ k_y/ω', and τ_x, τ_y, and τ_z take the usual Pauli matrices form, the reduced Hamiltonian can be written more compactly as,
[H_q]_n = π (2n-1)-π V_2^2/8V_1^2 +V_2^2/4V_1k_z sin(ϕ_1)-V_1 sin(ϕ_1) k_zτ_z -(2J k_x τ_x + 2λ k_y τ_y) √(n)V_2/2V_1 .
Let's now relax the assumption that V_2 is sufficiently small. We notice that √(n)V_2/2V_1 corresponds to the lowest order term in the series expansion of a certain function, e.g. J_1(√(n)V_2/V_1). Since the Hamiltonian is required to reduce to Eq. (<ref>) in the classical limit n→∞ and V_2→ 0, we argue that for an arbitrary value of V_2 (not necessarily small), Eq. (<ref>) needs to be modified by replacing the √(n)V_2/2V_1 factor in the τ_x and τ_y terms by J_1(√(n)V_2/V_1), so that Eq. (<ref>) follows. Although it is not obvious how to justify this argument analytically, it is still possible to verify Eq. (<ref>) numerically by comparing the eigenvalues of Eq. (<ref>) obtained directly from exact diagonalization with the eigenvalues of Eq. (<ref>) near a Weyl point when V_2 is of the same order as the other parameters, as confirmed in Fig. <ref>.
|
http://arxiv.org/abs/1701.07762v1 | 20170126162358 | A negative answer to a conjecture arising in the study of selection-migration models in population genetics | [
"Elisa Sovrano"
] | math.AP | [
"math.AP",
"q-bio.PE",
"92D25, 35K57, 34B18"
] |
[t1]This work was performed under the auspices of
Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro
Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
Department of Mathematics, Computer Science and Physics, University of Udine,
via delle Scienze 206, 33100 Udine, Italy
We deal with the evolution of the allelic frequencies, at a single locus,
for a population distributed continuously over a bounded habitat.
We consider evolution which occurs under the joint action of selection and arbitrary migration,
that is independent of genotype, in the absence of mutation and random drift.
The focus is on a conjecture, raised in the population genetics literature,
about the possible uniqueness of polymorphic equilibria, which are known as clines,
under particular circumstances.
We study the number of these equilibria, making use of topological tools,
and we give a negative answer to that question by means of two examples.
Indeed, we provide numerical evidence of multiplicity of positive solutions
for two different Neumann problems satisfying the requirements of the conjecture.
Migration; Selection; Cline; Polymorphism; Indefinite weight; Neumann problem
[2010] 92D25 35K57 34B18.
§ INTRODUCTION
Population genetics is a field of biology concerning the genetic structure of populations.
Its main interest is the understanding of the evolutionary processes that make the complexity of Nature so intriguing.
One of the main causes of the diversity among organisms is the occurrence of changes in the genetic sequence.
The genome evolution is influenced by selection, recombination, harmful and beneficial mutations, among others.
This way, population genetics becomes helpful in tackling a very broad class of issues,
ranging from epidemiology and animal or plant breeding to demography and ecology.
The birth of “modern population genetics” can be traced back to the need to interlace Darwin's theory of evolution with the Mendelian laws of inheritance.
This took place in the 1920s and early 1930s, when Fisher, Haldane and Wright
developed mathematical models in order to analyze how natural selection, along with other factors,
would modify the genetic composition of a population over time.
Accordingly, a landmark moment in the history of this field of genetics is the “Sixth International Congress of Genetics” in Ithaca,
where all three fathers of the genetical theory of evolution presented their pioneering works, see <cit.>.
Mathematical models of population genetics can be described by relative genotypic frequencies or
relative allelic frequencies, that may depend on both space and time.
A common assumption is that individuals mate at random in a habitat
(which can be bounded or not) with respect to the locus under consideration.
Furthermore, the population is usually considered large enough so that frequencies can be treated as deterministic.
This way, a probability is associated to the relative frequencies of genotypes/alleles.
The dynamics of gene frequencies are the result of some genetic principles along with several environmental influences,
such as selection, segregation, migration, mutation, recombination and mating, that lead to different evolutionary processes
like adaptation and speciation, see <cit.>.
Amongst these influences, by natural selection we mean that some genotypes enjoy a survival or reproduction advantage over other ones.
This way, the genotypic and allelic frequencies change according to the proportion of progeny contributed to the next generation by
the various genotypes, which is named fitness.
When modeling real-life populations, we have to take into account that it is unusual for the selection factor to act alone.
Since every organism lives in environments that are heterogeneous,
another considerable factor is the natural subdivision of the population into groups that mate at random only locally.
Thus, migration is often considered as a factor that affects the amount of genetic change.
There are two different ways to model the dispersion or the migration of organisms:
one is of discrete type and the other one of a continuous nature.
If the population size is sufficiently large and the selection is restricted to a single locus with two alleles,
then deterministic models continuous in time and space lead to mathematical problems
which involve a single nonlinear partial differential equation of reaction-diffusion type.
In this direction, a seminal paper is <cit.>.
In that work, the frequency of an advantageous gene was studied for a uniformly distributed population in a one-dimensional habitat,
spreading through a selection term of constant intensity.
Accordingly, a mathematical model of a cline was built up as a non-constant stationary solution
of the nonlinear diffusion equation in question.
The term cline was coined by J. Huxley in <cit.>:
“Some special term seems desirable to direct attention to variation within groups, and I propose the term cline,
meaning a gradation in measurable characters.”
One of the major causes of the occurrence of clines is migration, or selection which favors an allele in one region of the habitat
and a different one in another region.
The steepness of a cline is considered an indicator of the level of geographical variation.
Another contribution comes from J.B. Haldane in <cit.>, who studied the stability of clines by considering
as selection term a stepwise function which depends on the space variable and changes its sign.
Some meaningful generalizations of these models have been performed, for example,
in <cit.> by introducing a linear spatial dependence in the selection term; in
<cit.> by considering a different diffusion term that can model barriers and in
<cit.> by taking into account population not necessarily uniformly distributed and
terms of migration-selection that depend on both space and time.
During the past decade, these mathematical treatments have opened the door to a great amount of works that
investigated the existence, uniqueness and stability of clines.
Since a complete list of references on the further analysis of clines is beyond the scope of this work,
we limit ourselves to citing some of the earliest contributions in the literature that have inspired the succeeding ones,
see for instance <cit.>.
Understanding the processes that act in order to have non-constant genetic polymorphisms
(i.e., loci that occur in more than one allelic form) is an important challenge in population genetics.
In the present work, we deal with a class of diallelic migration-selection models in continuous space and time
introduced by W.H. Fleming in <cit.> and D. Henry in <cit.>.
We focus on a conjecture stated in <cit.>
which, for such a kind of reaction-diffusion equations, predicts the uniqueness of the cline (instead of the existence of multiple ones).
In a one-dimensional setting, we will give a negative answer to that conjecture,
by providing two examples with multiplicity of non-constant steady states.
This type of treatment is inspired by the result on the multiplicity of positive solutions for indefinite weight problems
with Dirichlet boundary conditions obtained in <cit.>.
Although the approach to the problem is of a topological nature, numerical simulations are given in order to support it.
The plan of the paper is the following.
In Section <ref>, we present the class of migration-selection models
considered and the state of the art which has led to the formulation of the conjecture of Lou and Nagylaki,
with reference to the genetical and mathematical literature.
In Section <ref>, we build up two examples giving a negative answer to this conjecture.
In Section <ref>, we conclude with a discussion.
§ MIGRATION-SELECTION MODEL: THE CONJECTURE OF LOU AND NAGYLAKI
To ease the understanding of the conjecture raised in <cit.>,
we introduce some notation.
We also provide an overview of the classical migration-selection model, continuous in space and in time,
of a population in which the genetic diversity occurs in one locus with two alleles, A_1 and A_2.
Let us consider a population continuously distributed in a bounded habitat, say Ω.
In our context, genetic diversity is the result only of the joint action of dispersal within Ω
and selective advantage for some genotypes,
so that no mutation nor genetic drift will be considered.
This way, the gene frequencies, after random mating, are given by the Hardy-Weinberg relation.
The genetic structure of the population is measured by the frequencies p(x,t) and
q(x,t):=(1-p(x,t)) at time t and location x∈Ω of A_1 and A_2, respectively.
Thus, by the assumptions made, the mathematical formulation of this migration-selection model
leads to the following semilinear parabolic PDE:
∂ p/∂ t = Δ p + λ w(x) f(p) in Ω×]0,∞[,
where Δ denotes the Laplace operator and Ω⊆ℝ^N is a bounded open connected set, with N≥1,
whose boundary ∂Ω is C^2.
The term λ w(x) f(u) models the effect of the natural selection.
More in detail, the real parameter λ > 0 plays the role of the ratio of the selection intensity and
the function w∈ L^∞(Ω) represents the local selective advantage (if w(x) > 0),
or disadvantage (if w(x) < 0), of the gene at the position x∈Ω.
Moreover, following <cit.> and <cit.>, the nonlinear term we treat is a general function f: [0,1]→ℝ of class C^2 satisfying
f(0) = f(1) =0, f(s) > 0 ∀ s∈ ]0,1[ , f'(0) > 0 > f'(1).
(f_*)
We also impose that there is no-flux of genes into or out of the habitat Ω,
namely we assume that
∂ p/∂ν = 0 on ∂Ω×]0,∞[,
where ν is the outward unit normal vector on ∂Ω.
Since p(t,x) is a frequency, then we are interested only in positive solutions of (<ref>)–(<ref>) such that 0≤ p≤ 1.
By the analysis developed in <cit.>,
we know that, if the conditions in (f_*) hold and 0≤ p(·,0) ≤ 1 in Ω, then 0≤ p(x,t) ≤ 1
for all (x,t)∈Ω×]0,∞[ and equation (<ref>) defines a dynamical system in
X:={p∈ H^1(Ω): 0 ≤ p(x) ≤ 1, a.e. in Ω},
where H^1(Ω) is the standard Sobolev space of integrable functions whose first derivative is also square integrable.
Moreover, the stability of the solutions is determined by the equilibrium solutions in the space X.
Clearly, a stationary solution of the problem is a function p(·) satisfying 0 ≤ p≤ 1,
-Δ p = λ w(x) f(p) in Ω
and the Neumann boundary condition
∂ p/∂ν = 0 on ∂Ω.
Notice that p≡ 0 and p≡ 1 are constant trivial solutions
to the problem above, and correspond to monomorphic equilibria,
namely when, in the population, the allele A_2 or A_1, respectively, has gone to fixation.
So, one is interested in finding non-constant stationary solutions or, in other words, polymorphic equilibria.
Indeed, our main interest is the existence of clines for system (<ref>)–(<ref>).
The maintenance of genetic diversity is examined by seeking the existence
of polymorphic stationary solutions/clines,
that are solutions p(·) to system
-Δ p = λ w(x) f(p) in Ω,
∂ p/∂ν = 0 on ∂Ω,
(𝒩_λ)
with 0 < p(x) < 1 for all x∈Ω.
In this respect, the assumption f(s) > 0 for every s > 0 implies that a necessary condition
for positive solutions of problem (𝒩_λ) is that the function w changes its sign.
In fact, by integrating (<ref>) over Ω, we obtain
0=∫_ΩΔ p +λ w(x) f(p) dx=
∫_∂Ω∂ p/∂ν dx+λ∫_Ω w(x) f(p) dx =
λ∫_Ω w(x) f(p) dx.
Notice that we can see the function w in (𝒩_λ)
as a weight term which attains both positive and negative values,
so that such a kind of system is usually known as a problem with indefinite weight.
It is a well-known fact that the existence of positive solutions of (𝒩_λ)
depends on the sign of
w̅:=∫_Ωw(x) dx.
Indeed, for the linear eigenvalue problem -Δ p(x)=λ w(x)p(x),
under Neumann boundary condition on Ω, the following facts hold:
if w̅<0, then there exists a unique positive eigenvalue having an associated eigenfunction which does not change sign;
on the contrary, if w̅≥0 such an eigenvalue does not exist and 0 is the only non-negative eigenvalue
for which the corresponding eigenfunction does not vanish, see <cit.>.
Furthermore, under the additional assumption of concavity for the nonlinearity:
f”(s)≤0, ∀ s>0,
it follows that, if w̅<0,
then there exists λ_0>0 such that for each λ>λ_0
the problem above has at most one nonconstant stationary positive solution
(i.e., cline) which is asymptotically stable, see <cit.>.
After these works a great deal of contributions appeared in order
to complement these results of existence and uniqueness on population genetics, see for instance <cit.>;
or to consider also unbounded habitats as done in <cit.>;
or even to treat more general uniformly elliptic operators, as in <cit.>.
Taking into account these works, in <cit.> the migration-selection model
with an isotropic dispersion, that is identified with the Laplacian operator,
was generalized to an arbitrary migration, which involves a strongly uniformly elliptic differential operator of second order
(see also <cit.> for the derivation of this model as a continuous approximation of the discrete one).
By modeling single locus diallelic populations, there is an interesting family of nonlinearities
which satisfies the conditions in (f_*) and
allows to consider different phenotypes of alleles, A_1 and A_2.
This family can be obtained by considering
the map f_k:ℝ^+→ℝ^+ such that
f_k(s):= s(1-s)(1+k-2ks),
where -1≤ k≤1 represents the degree of dominance of the alleles independently of the space variable, see <cit.>.
In this special case, if k=0 then the model does not present any kind of dominance;
instead, if k=1 or k=-1 then the allelic dominance is relative to A_1, in the first case, and to A_2 in the second one
(the latter being equivalent to saying that A_1 is recessive).
In view of this, we can make mainly the following two observations.
In the case of no dominance, i.e. k=0, from (<ref>) we have f_0(s)=s(1-s) which is a concave function.
Therefore, we fall within the setting considered by <cit.>.
So if w(x)>0 on a set of positive measure in Ω and w̅<0,
then for λ sufficiently large there exists a unique positive non trivial equilibrium of the equation
∂ p/ ∂ t = Δ p + λ w(x) p(1-p) for every (x,t)∈Ω×]0,∞[ under
the boundary condition (<ref>).
In the case of completely dominance of allele A_2, i.e. k=-1, from (<ref>)
we have f_-1(s)=2s^2(1-s) which is not a concave function.
Thanks to the results in <cit.>, if w(x)>0 on a set of positive measure in Ω and w̅<0,
then for λ sufficiently large there exist at least two positive non-trivial equilibria of the equation
∂ p/ ∂ t = Δ p + λ w(x) 2 p^2(1-p) for every (x,t)∈Ω×]0,∞[ under
the boundary condition (<ref>).
We observe that the map s↦ f_0(s)/s is strictly decreasing with f_0(s) concave.
On the contrary, the map s↦ f_-1(s)/s is not strictly decreasing with f_-1(s) not concave.
Thus, from Remark <ref> and Remark <ref>, a natural question arises, which involves the possibility
of weakening the concavity assumption (<ref>) to the mere monotonicity of the map s↦ f(s)/s, in order to get
uniqueness results for the nontrivial equilibria of the problem.
This is still an open question, firstly appeared in <cit.>, known as the “conjecture of Lou and Nagylaki”.
Conjecture “Suppose that w(x)>0 on a set of positive measure in Ω and such that w̅=∫_Ω w dx<0.
If the map s↦ f(s)/s is monotone decreasing in ]0,1[,
then has at most one nontrivial equilibrium p(t,x) with 0<p(0,x)<1 for every x∈Ω̅,
which, if it exists, is globally asymptotically stable.” <cit.>.
The study of existence, uniqueness and multiplicity of positive solutions for nonlinear indefinite weight problems
is a very active area of research, starting from the Seventies,
and several types of boundary conditions along with a wide variety of nonlinear functions, classified according to growth conditions,
were taken into account.
Several authors have addressed this topic, see <cit.>,
just to recall the first main papers dedicated.
Instead, the recent literature about multiplicity results for positive solutions of indefinite weight problems with
Dirichlet or Neumann boundary conditions is really very rich.
In order to cover most of the results achieved with different techniques so far,
we refer to the following bibliography <cit.>.
Nevertheless, as far as we know, there is no answer to the conjecture of Lou and Nagylaki.
It is interesting to notice that the study of the concavity of f(s) versus the monotonicity of f(s)/s has significance also
in the investigation on the uniqueness of positive solutions for a particular class of indefinite weight problems with Dirichlet boundary conditions.
More in detail, these problems involve positive nonlinearities which have linear growth at zero and sublinear growth at infinity,
namely
-Δ p = λ w(x) g(p) in Ω,
p = 0 on ∂Ω,
(𝒟_λ)
where g:ℝ^+→ℝ^+ is a continuous function satisfying
g(0)=0 , g(s) > 0 ∀ s>0 , lim_s→0^+g(s)/s>0 , lim_s→+∞g(s)/s=0.
(g_*)
The state of the art on this topic refers mainly on two papers.
From the results achieved in <cit.>, it follows that,
if (g_*) holds and, moreover, the map s→ g(s)/s is strictly decreasing,
then there exists at most one positive solution of (𝒟_λ), but only
when the weight function satisfies w(x) > 0 for a.e. x∈Ω.
On the other hand, from <cit.>, if the conditions in (g_*) are satisfied for a smooth concave nonlinear term g
and the weight w is a smooth and sign-changing function, then there exists at most one positive solution of (𝒟_λ).
Therefore, if the weight function is positive, then the hypothesis of Brezis-Oswald, concerning the monotonicity of g(s)/s,
is more general than the requirement of Brown-Hess about the concavity of g(s).
At this point one could query whether something similar to the conjecture of Lou and Nagylaki happens
also for this family of Dirichlet problems.
This was done in <cit.>, where it was shown that the monotonicity of the map s↦ g(s)/s is not enough
to guarantee the uniqueness of positive solutions for problems as in (𝒟_λ) with an indefinite weight.
Through numerical evidence, more than one positive solution has been detected for an exemplary
two-point boundary value problem (𝒟_λ).
§ MULTIPLICITY OF CLINES: THE CONJECTURE HAS NEGATIVE ANSWER
In this section we look at the framework of the conjecture of Lou and Nagylaki.
So, from now on we tacitly consider a nonlinear function f: [0,1]→ℝ of class C^2
which is not concave, satisfies (f_*) and is such that the map s↦ f(s)/s is strictly decreasing.
We concentrate on the one-dimensional case N=1
and we take as a habitat an open interval Ω:=]ω_1,ω_2[ with ω_1,ω_2∈ℝ
such that ω_1<0<ω_2.
This type of habitat, confined to one-dimensional spaces, has an intrinsic interest in modeling phenomena which occur,
for example, in neighborhoods of rivers, sea shores or hills, see <cit.>.
As in <cit.>, we assume that the weight term w is step-wise.
Hence, let us consider the following class of indefinite weight functions
w(x):=
-α x∈[ω_1,0[,
1 x∈]0,ω_2],
such that
w̅ = -ω_1α +ω_2<0,
with w̅ defined as in (<ref>).
In these settings, the indefinite Neumann problem (𝒩_λ) reads as follows
p” + λ w(x) f(p) = 0,
p'(ω_1)=0=p'(ω_2),
with 0 < p(x) < 1 for all x∈[ω_1,ω_2].
Inspired by the results in <cit.>, we will consider two particular functions f
in order to provide a negative reply to the conjecture under examination.
In both cases, we are going to use a topological argument, the so-called shooting method,
and, with the aid of some numerical computations, we give evidence
of multiplicity of positive solutions for the corresponding problems in (<ref>).
The shooting method relies on the study of the deformation of planar continua under the action of the vector field associated to
the second order scalar differential equation in (<ref>), whose formulation, in the phase-plane (u,v),
is equivalent to the first order planar system
u '=v,
v '= -λ w(x) f(u).
Solutions p(·) of problem (<ref>) we are looking for are also solutions (u(·),v(·)) of system (<ref>),
such that v(ω_1)=0=v(ω_2).
We set the interval [0,1] contained in the u-axis as follows
ℒ_{v=0}:={(u,v)∈ℝ^2: 0≤ u≤ 1, v=0}.
This way, as a real parameter r ranges between 0 and 1, we are interested in the solution,
(u(· ;ω_1,(r,0)),v(· ;ω_1,(r,0))), of the Cauchy problem
with initial conditions
u(ω_1)=r,
v(ω_1)=0,
such that (u(ω_2 ;ω_1,(r,0)),v(ω_2 ;ω_1,(r,0)))∈ℒ_{v=0}.
Hence, let us consider the planar continuum Γ obtained by shooting ℒ_{v=0} forward
from ω_1 to ω_2, namely
Γ:={(u(ω_2;r),v(ω_2;r))∈ℝ^2 : r∈[0,1] }.
We define the set of the intersection points between this continuum and the segment [0,1] contained in the u-axis, as
𝒮:= Γ∩ℒ_{v=0}.
Then, there exists an injection from the set of the solutions p(·) of (<ref>) such that 0 < p(x) < 1 for all x∈[ω_1,ω_2]
into the set 𝒮∖( {(0,0)}∪{(1,0)}).
More formally, we denote
by ζ(· ; ω_0,z_0)=(u(· ; ω_0,z_0),v(· ; ω_0,z_0))
the solution of (<ref>) with ω_0∈[ω_1,ω_2] and initial condition
ζ(ω_0 ; ω_0,z_0)=z_0=(u_0,v_0)∈ℝ^2.
The uniqueness of the solutions of the associated initial value problems guarantees that
the Poincaré map associated to system (<ref>) is well defined.
Recall that, for any τ_1,τ_2∈[ω_1,ω_2], the Poincaré map for system (<ref>),
denoted by Φ_τ_1^τ_2,
is the planar map which at any point z_0=(u_0,v_0)∈ℝ^2 associates the point (u(τ_2),v(τ_2))
where (u(·),v(·)) is the solution of (<ref>) with (u(τ_1),v(τ_1))=z_0.
Notice that Φ^τ_2_τ_1 is a global diffeomorphism of the plane onto itself.
Under these notations, the recipe of the shooting method is the following.
A solution p(·) of (<ref>) such that 0<p(x)<1 for all x∈[ω_1,ω_2]
is identified by a point (c,0)∈ℒ_{v=0} whose image through the action of the Poincaré map,
say C:=Φ_ω_1^ω_2((c,0))∈Γ, belongs to ℒ_{v=0}.
This way, the solution p(·) of the Neumann problem with p(ω_1)=c is found
looking at the first component of the map
x↦Φ_ω_1^x((c,0))=(u(x),v(x)),
since, by construction, p'(ω_1)=v(ω_1)=0 and p'(ω_2)=v(ω_2)=0.
This means that the set 𝒮 is made of points, each of which univocally determines
an initial condition, of the form (<ref>), for which the solution (u(·),v(·)) of the Cauchy problem
associated to (<ref>) verifies v(ω_1)=0=v(ω_2).
The study of the uniqueness of the clines is based on the study, in the phase plane (u,v), of the qualitative properties
of the shape of the continuum Γ which is the image of ℒ_{v=0}
under the action of the Poincaré map Φ_ω_1^ω_2.
More in detail, we are interested in find real values c∈]0,1[ such that
Φ^ω_2_ω_1((c,0))∈Φ_ω_1^ω_2(ℒ_{v=0})∩ℒ_{v=0}.
Indeed, our aim is looking for values c∈]0,1[ such that the point C=Φ^ω_2_ω_1((c,0)) belongs to 𝒮.
So, let us show now that there exist Neumann problems as in (<ref>) that admit more than one positive solution.
Namely, there exist more than one polymorphic stationary solution for the equation:
∂ p/∂ t=p”+λ w(x)f(p).
Roughly speaking, if Γ crosses the u-axis more than once, away from the points (0,0) and (1,0),
then #(𝒮∖( {(0,0)}∪{(1,0)}))>1 and so we expect a result of non-uniqueness of clines
for equation (<ref>).
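The recipe just described is straightforward to implement numerically. The following is a minimal sketch in Python (an illustration of ours: the routine names, the choice of integrator and the tolerances are our own and do not appear in the original computations), which takes f, w, λ and the endpoints ω_1, ω_2 as inputs and brackets the points of 𝒮:

import numpy as np
from scipy.integrate import solve_ivp

def poincare_map(r, f, w, lam, w1, w2):
    # Follow the solution of u' = v, v' = -lam*w(x)*f(u) from x = w1 to x = w2,
    # starting at the point (r, 0) of the segment L_{v=0}.
    rhs = lambda x, z: [z[1], -lam * w(x) * f(z[0])]
    sol = solve_ivp(rhs, (w1, w2), [r, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

def find_crossings(f, w, lam, w1, w2, n=2001):
    # Scan r in [0,1]: a sign change of the v-component of Gamma between two
    # consecutive values of r brackets a point of S, i.e. a candidate cline.
    rs = np.linspace(0.0, 1.0, n)
    vs = np.array([poincare_map(r, f, w, lam, w1, w2)[1] for r in rs])
    idx = np.where(np.sign(vs[:-1]) * np.sign(vs[1:]) < 0)[0]
    return [(rs[i], rs[i + 1]) for i in idx]

Each bracket returned by find_crossings can then be refined, for instance by bisection on the map r ↦ v(ω_2; r).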
§.§ First example
Taking into account the definition of the functions in (<ref>), given a real parameter h>0, let us consider the family of maps
f̂_h:[0,1]→ℝ of class C^2 such that
f̂_h(s):=s(1-s)(1-h s+h s^2).
By definition f̂_h(0)=0=f̂_h(1). Moreover, to have s↦f̂_h(s)/s monotone decreasing in ]0,1[ it is sufficient to assume 0<h≤ 3.
If the parameter h ranges in ]0,3], then it is straightforward to check that f̂_h is not concave and
f̂_h(s)>0 for every s∈]0,1[.
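Indeed, the sufficiency of 0<h≤ 3 can be checked directly (a computation we add here for the reader's convenience): writing f̂_h(s)/s = (1-s)(1-hs+hs^2) =: g_h(s), one finds
g_h'(s) = -3hs^2 + 4hs - (1+h) ,
and, since the maximum of -3hs^2+4hs is 4h/3, attained at s=2/3, we have g_h'(s) ≤ 0 on ]0,1[ exactly when 4h/3 ≤ 1+h, that is when h ≤ 3.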
Let us fix h=3. Then, in this case, all the conditions in (f_*) are verified and
f̂_3(s)=s(1-s)(1-3s+3s^2).
As a consequence, we point out the following result of multiplicity.
Let f:[0,1]→ℝ be such that
f(s):=s(1-s)(1-3s+3s^2).
Assume w:[ω_1,ω_2]→ℝ be defined as in (<ref>)
with α=1, ω_1=-0.21 and ω_2=0.2.
Then, for λ=45 the problem (<ref>)
has at least 3 solutions such that 0 < p(x) < 1 for all x∈[ω_1,ω_2].
Notice that w̅=-0.01<0, so we are in the hypotheses of the conjecture.
Now we follow the scheme of the shooting method, in order to detect three polymorphic stationary solutions
for the equation (<ref>).
This approach, with the help of numerical estimates, will enable us to prove Proposition <ref>.
In the phase-plane (u,v), Figure <ref> shows the existence of at least four points
(r_i,0)∈ℒ_{v=0} with i=1,…,4 such that,
by defining their images through the Poincaré map Φ^ω_2_ω_1 as
R_i:=(R_i^u,R_i^v)=Φ^ω_2_ω_1((r_i,0))∈Γ
for every i∈{1,…,4},
the following conditions
R_i^v<0 for i=1,3, R_i^v>0 for i=2,4,
are satisfied.
This is done, for example, with the choice of the values r_1=0.1, r_2=0.4, r_3=0.65 and r_4=0.75.
The solutions of the Cauchy problems associated to system (<ref>), with initial conditions (r_i,0) for i=1,…,4,
assume at x=ω_2 the values R_1=(0.230, -0.066), R_2= (0.922, 0.165), R_3=(0.790, 0.036)
and R_4=(0.533, 0.055), truncated at the third significant digit.
Therefore, we have R_1^v<0<R_2^v, R_2^v>0>R_3^v and R_3^v<0<R_4^v.
Then, by a continuity argument (that is, an application of the Intermediate Value Theorem),
there exist at least three real values c_1,c_2 and c_3 such that
r_j<c_j<r_j+1 and
C_j:=Φ^ω_2_ω_1((c_j,0))∈𝒮∖( {(0,0)}∪{(1,0)}),
for every j∈{1,2,3}. So, let us see how to find such values.
The curve Γ is the result of the integration of several copies of the system of differential equations (<ref>),
with initial conditions taken within a uniform discretization of the interval [0,1],
followed by the interpolation of the approximated values of each solution ζ(x ;ω_0,z_0) at x=ω_2.
Hence, Γ represents the approximation of the image of the interval [0,1]
under the action of the Poincaré map Φ^ω_2_ω_1.
As Figure <ref> suggests, the projection of Γ on its first component
is not necessarily contained in the interval [0,1], which contains the only values of biological pertinence.
Nonetheless, this does not prevent the existence of solutions p(·) of the problem (<ref>) such that 0 <p(x)<1
for all x∈[ω_1,ω_2].
This way, by means of a fine discretization of ℒ_{v=0},
we have found the approximate values of the intersection points C_j∈Γ∩ℒ_{v=0},
with j=1,2,3. In this case they are:
C_1=(0.273,0), C_2=(0.601,0) and C_3=(0.833,0), truncated at the third significant digit (see Figure <ref>).
The intersection points between ℒ_{v=0} and its image Γ
through the Poincaré map Φ^ω_2_ω_1, namely C_j with j=1,2,3, are in agreement with the previous predictions.
In the end, we computed the values c_1=0.125, c_2=0.479 and c_3=0.683, which verify the required conditions (<ref>).
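These values can be reproduced with the scanning sketch presented after the description of the shooting method; a possible instantiation (ours, with the data of Proposition <ref>) is

f = lambda s: s * (1 - s) * (1 - 3 * s + 3 * s ** 2)
w = lambda x: -1.0 if x < 0 else 1.0
brackets = find_crossings(f, w, lam=45.0, w1=-0.21, w2=0.2)
# each bracket is expected to isolate one of c_1 ~ 0.125, c_2 ~ 0.479, c_3 ~ 0.683

where each returned bracket can then be refined by bisection.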
For j=1,2,3, in Figure <ref> are represented the trajectories of the solutions of the initial value problem
p”+λ w(x)f(p)=0,
p(ω_1)=c_j,
p'(ω_1)=0,
that, by construction, satisfy p'(ω_2)=0.
We observe also that the values of each solution p(·) of the three different
initial value problems range in ]0,1[ as desired.
Once the values c_j with j=1,2,3 have been found, a numerical result of multiplicity of clines is achieved.
Indeed, in Figure <ref>, we display the approximation of the three nontrivial stationary solutions p(·)
of equation (<ref>) that are identified by the points
C_j∈(𝒮∖( {(0,0)}∪{(1,0)})),
with j=1,2,3.
§.§ Second example
We refer now to the application given in <cit.> and we adapt it to our purposes.
So we consider the nonlinear term f̃:ℝ^+→ℝ^+ defined by
f̃(s):= 10 s e^-25s^2 + s/(|s|+1).
It is straightforward to check that f̃ is not concave and the map s↦f̃(s)/s is strictly decreasing.
Moreover, f̃(0)=0 and f̃(s)>0 for every s>0, but f̃ does not take the value zero at s=1,
since f̃(1)=10 e^-25+1/2≠0.
To satisfy all the conditions in (f_*), it is sufficient to multiply f̃ by the term arctan(m(1-s)) with m>0.
This way, the following result holds.
Let f:[0,1]→ℝ be such that
f(s):=(10 s e^-25s^2 + s/(|s|+1)) arctan(10-10s).
Assume w:[ω_1,ω_2]→ℝ be defined as in (<ref>)
with α=2.4, ω_1=-0.255 and ω_2=0.6.
Then, for λ=3 the problem (<ref>)
has at least 3 solutions such that 0 < p(x) < 1 for all x∈[ω_1,ω_2].
Notice that, under the assumptions of Proposition <ref>, the hypotheses of the conjecture are now all satisfied since w̅=-0.012<0.
To prove the existence of at least three clines for the equation (<ref>),
we exploit again the shooting method.
So, our main interest is in finding real values r_i∈]0,1[ with i∈ℕ such that,
given R_i:=(R_i^u,R_i^v)=Φ^ω_2_ω_1((r_i,0)), it follows
R_i^v<0 for i=2ℓ +1, R_i^v>0 for i=2ℓ, with ℓ∈ℕ.
In this case, the features of the nonlinear term along with the joint action of the indefinite weight
give rise to an involved deformation of the segment ℒ_{v=0}.
Nevertheless, looking at Figure <ref>, we can see that there exist more than one intersection point between the continuum
Γ and the u-axis whose abscissae are contained in the open interval ]0,1[.
This way, the previous observation suggests the following analysis. By choosing the values
r_1=0.01, r_2=0.1, r_3=0.45 and r_4=0.9 we compute the points R_i for i=1,…,4.
All the results achieved are truncated at the third significant digit and so we obtain
R_1^v=-0.639<0, R_2^v=2.160>0, R_3^v=-0.036<0 and R_4^v=1.392>0.
The numerical details are thus represented in Figure <ref>.
At this point, an application of the Intermediate Value Theorem guarantees the existence of at least three initial conditions
(c_j,0) with j=1,2,3, such that each respective solution of the initial value problem (<ref>) is also a positive solution
of the Neumann problem (<ref>) we are looking for.
Indeed, the values c_1=0.436, c_2=0.776 and c_3=0.854 satisfy the conditions in (<ref>).
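The same scanning routine used for the first example applies verbatim here; a possible instantiation (again ours) is

import numpy as np
f = lambda s: (10 * s * np.exp(-25 * s ** 2) + s / (abs(s) + 1)) * np.arctan(10 - 10 * s)
w = lambda x: -2.4 if x < 0 else 1.0
brackets = find_crossings(f, w, lam=3.0, w1=-0.255, w2=0.6)
# expected to bracket c_1 ~ 0.436, c_2 ~ 0.776, c_3 ~ 0.854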
Finally, we display the approximation of the three nontrivial stationary solutions p(·)
of equation (<ref>) in Figure <ref>.
§ DISCUSSION
When the selection gradient is described by a piecewise constant coefficient,
we have studied the conjecture proposed in <cit.> within a finite and one-dimensional environment.
Summing up: we have found multiplicity of positive solutions
for two different indefinite Neumann problems defined as in (<ref>) where the nonlinear term is an application
f:[0,1]→ℝ which assumes two particular forms, the one in (<ref>) or that in (<ref>).
In our examples, the nonlinearity f is a function of class C^2 such that
f(0) = f(1) =0, f(s) > 0 ∀ s∈ ]0,1[ , f'(0) > 0 > f'(1),
(f_*)
and
f is not concave, s↦ f(s)/s is strictly decreasing.(H)
Hence, uniqueness of positive solutions in general is not guaranteed for indefinite Neumann problems
whose nonlinear term f is a function satisfying (f_*) and (H), and
the indefinite weight w is defined on a bounded domain Ω with ∫_Ωw(x) dx<0.
Nevertheless, in the case of the family of functions f_k(s)=s(1-s)(1+k-2ks)
with k∈[-1,1], there are questions in this direction that have not yet been answered.
So, a question still open is the following:
under the action of gene flow, what is the minimal set of assumptions
under which a selection gradient will maintain a unique gene frequency cline?
The delicate matter of the comparison between the concavity
versus a condition about monotonicity arises also in other context than the Neumann one.
With this respect, indefinite weight problems under Dirichlet boundary conditions
have been considered in <cit.> where an example of multiplicity of positive solutions was given.
As far as we know, the mathematical literature lacks a rigorous multiplicity result in both cases.
This way, we see how these issues deserve to be studied in depth, being of natural mathematical
and genetical interest.
§ ACKNOWLEDGMENTS
I thank prof. Reinhard Bürger for inspiring me to work on this problem.
I am also very grateful to profs. Fabio Zanolin, Carlota Rebelo and Alessandro Margheri
for providing useful comments
that greatly improved the manuscript.
|
http://arxiv.org/abs/1701.07560v2 | 20170126030726 | 3D Printing of Fluid Flow Structures | [
"Kunihiko Taira",
"Yiyang Sun",
"Daniel Canuto"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
3D Printing of Fluid Flow Structures[Part of this work was supported by the US Air Force Office of Scientific Research (Program manager: Douglas Smith, Grant number FA9550-13-1-0091).]
Kunihiko Taira and Yiyang Sun
Department of Mechanical Engineering
Florida State University
[email protected] and [email protected]
Daniel Canuto
Department of Mechanical and Aerospace Engineering
University of California, Los Angeles
[email protected]
Updated: July 05, 2017
=============================================================================================================================================================================================================================================================================
We discuss the use of 3D printing to physically visualize (materialize) fluid flow structures. Such 3D models can serve as a refreshing hands-on means to gain deeper physical insights into the formation of complex coherent structures in fluid flows. In this short paper, we present a general procedure for taking 3D flow field data and producing a file format that can be supplied to a 3D printer, with two examples of 3D printed flow structures. A sample code to perform this process is also provided. 3D printed flow structures can not only deepen our understanding of fluid flows but also allow us to showcase our research findings to be held up-close in educational and outreach settings.
Keywords: 3D printing, visualization, coherent structures, wakes
§ INTRODUCTION
For students starting their training in fluid mechanics, one of the major challenges they encounter is that most fluid flows are not readily visible. This is precisely why the black-and-white, 1960s collection of National Committee for Fluid Mechanics Films that contains beautiful flow visualizations to explain fluid mechanics, has been long cherished as a great educational resource even to this date <cit.>. Over the course of advancement in fluid mechanics, the development of flow visualization techniques has been critical to enable experimental and computational investigations <cit.>. The beauty of the visualized complex and delicate flow structures has attracted researchers to uncover their formation mechanisms and their effects on the overall dynamics.
In experiments, there are a number of techniques to visualize the flow field. Since most fluid flows are transparent, fluid mechanicians have relied on visible media or tracers to highlight flow features. Dye and bubbles have been extensively used to identify flow patterns <cit.>. Presently, one of the de facto standards in capturing the velocity field is particle image velocimetry (PIV), which uses tracer particles and cross correlation techniques to determine displacement vectors <cit.>. While PIV was originally limited to two dimensions, development of stereoscopic and tomographic PIV techniques has enabled the extraction of full 3D velocity vectors along a plane and over a volume, respectively <cit.>.
Moreover, computational fluid dynamics simulations have allowed full access to a variety of flow field data and enabled visualization of coherent structures in flows, even down to the finest details. The use of isosurfaces and structural identification schemes have provided us with deep insights into the formation of such structures over space and time <cit.>. Vortex identification techniques, such as the Q and λ_2 criteria, have revealed the formation and evolution of vortical structures <cit.>. Such identification techniques are also used for experimental data.
Here, we discuss the use of 3D printing to materialize flow structures based on computational and experimental flow field data. 3D printing <cit.> has become widely available in industry and academia, enabling rapid prototyping of parts out of plastics, metals, ceramics, and other materials. There are also emerging efforts to print biological organs and prosthetics <cit.>. In fluid mechanics, we can take advantage of these 3D printing technologies[Another major use of 3D printing is the fabrication of scaled models for wind and water tunnel testings.] to present representative flow structures to an audience, akin to material scientists or roboticists showcasing their sample solid structures or robots, respectively.
This new technology offers an opportunity for fluid mechanicians to materialize coherent structures and share them with students in classrooms, technical conferences, public outreach activities, and even as artistic objects. The ability to physically hold, touch, and view printed flow structures provides unique opportunities for firsthand studies of the flow structures in research and educational settings.
§ PREPARATION OF DATA FOR 3D PRINTING
We can print 3D models of fluid flow structures from computational or experimental flow field data. One of the most straightforward approaches is to consider a 3D isosurface for printing. Once the 3D structure is generated, its surface data can be exported to a CAD data format. We provide a sample MATLAB script (see Code 1) that takes 3D flow field data and outputs a STereoLithography (STL) file, which is a file format widely used in 3D printing. When this STL file is created, it can be sent to a 3D printer for fabrication of the 3D model.
Depending on the 3D printer specifications, care must be taken to check the fine-scale details of the model to be produced. At the moment, sharp edges and small-scale flow features can be a challenge for 3D printing, especially those seen in turbulent flows. The material properties of the print medium influence models' maximum fineness. For flows with multi-component structures, we can print each piece separately. In the following section, we present two examples in which the 3D models are comprised of single as well as multiple pieces. We note in passing that some printers can print in multiple colors, enabling color map projection onto a 3D model (e.g., pressure distribution).
§ EXAMPLES OF 3D PRINTED FLOW STRUCTURES
§.§ Vortices behind a pitching low-aspect-ratio wing
The formation of three-dimensional vortices behind a low-aspect-ratio rectangular wing is known to be complex <cit.>. As the vortices develop from the leading-edge, trailing-edge, and tips of the wing, they interact while convecting downstream. Unsteady wing maneuvers such as pitching and acceleration can also influence the vortex dynamics <cit.>.
Here, we consider printing the data obtained from DNS of an unsteady incompressible wake behind a rectangular flat-plate wing of aspect ratio 2, undergoing a pitching maneuver at Re = 500 <cit.>. The simulation is performed with the immersed boundary projection method <cit.>, and the vortical structures in the wake are captured by a Q-criterion isosurface <cit.>. These structures and the flat-plate wing are printed with a 3D printer, as shown in Figure <ref>. The color projected on the model represents the streamwise coordinate to highlight the wake structure location with respect to the wing.
The 3D model reveals the intricate details of the formation of the leading-edge and tip vortices at an early stage of the dynamics. Viewing the printed model from the rear, we can observe how the legs of wake vortices are intricately wrapped around each other while pinned to the top surface of the wing, satisfying Helmholtz's second theorem. The ability to hold and examine the model provides firsthand insights into the complex vortex dynamics caused by the unsteady wing motion and the separated flow.
§.§ Global stability mode inside a cavity
We can also print structures from modal decomposition and stability analyses <cit.>. These flow structures hold importance in understanding the dynamics and stability of fluid flows. In particular, the dominant global stability mode highlights how a perturbation in a flow can grow or decay about its base state.
As an example, we 3D print the dominant stability mode determined from global stability analysis of compressible flow over a spanwise-periodic rectangular cavity at Re = 1500 and M_∞ = 0.6 with aspect ratio L/D=2 <cit.>. The stability analysis is performed here with a global stability analysis code (large-scale eigenvalue problem) developed upon the finite-volume solver CharLES <cit.>. Shown in Figure <ref> are two sets of isosurfaces (positive and negative) of the dominant global stability mode in terms of the spanwise velocity with two different colors.
We print these structures as separate pieces, which are not in contact with the cavity (fabricated separately). Hence, thin metal wires and adhesives are used to hold the modal structures in place, as shown in the inserted figure. The spanwise size of these structures indicates the wavelength of this particular mode, and the temporal frequency influences the size of the individual structures. The structure of the mode can be studied to assess where sensors and actuators may be placed for flow control. The 3D printed stability mode offers a chance to examine its structure up-close, which is not ordinarily visible in cavity flows. The model can also facilitate the conveyance of this specialized concept of global stability to a general audience through physical contact and viewing.
§ CONCLUDING REMARKS
We have discussed the use of 3D printing to physically visualize fluid flow structures. By having a materialized structure, we believe that such printed models can further deepen our understanding of fluid flows, and allow us to showcase our research in educational or outreach settings. At the moment, most 3D printers cannot output models that have very fine structures, which may limit the Reynolds number of printable flows. However, with continuous improvement of 3D printing technology, this limitation may be relaxed or eliminated in the future. Moreover, use of transparent printing media and future reduction in cost may allow for complex flows resulting from turbulence to be physically printed. We hope this short paper stimulates readers to try 3D printing of fluid flow structures obtained from their computational and experimental studies and share their models in various opportunities to highlight the beauties of fluid flows.
|
http://arxiv.org/abs/1701.07583v1 | 20170126053555 | Lyapunov exponents for random perturbations of some area-preserving maps including the standard map | [
"Alex Blumenthal",
"Jinxin Xue",
"Lai-Sang Young"
] | math.DS | [
"math.DS",
"37D25 (Primary), 37H15 (Secondary)"
] |
Lyapunov exponents for random perturbations of some area-preserving maps including the standard map
Alex Blumenthal, Jinxin Xue and Lai-Sang Young
====================================================================================================
We consider a large class of 2D area-preserving diffeomorphisms that are
not uniformly hyperbolic but have strong hyperbolicity properties on large regions of
their phase spaces. A prime example is the standard map. Lower bounds for
Lyapunov exponents of such systems are very hard to estimate, due to the potential
switching of “stable" and “unstable" directions. This paper shows
that with the addition of (very) small random perturbations, one obtains with
relative ease Lyapunov exponents reflecting the geometry of the deterministic maps.
§ INTRODUCTION
A signature of chaotic behavior in dynamical systems is sensitive dependence on
initial conditions. Mathematically, this is captured by the positivity of Lyapunov
exponents: a differentiable map F of a Riemannian manifold M is said to have
a positive Lyapunov exponent (LE) at x ∈ M if ‖dF^n_x‖ grows exponentially fast
with n. This paper is about volume-preserving diffeomorphisms, and
we are interested in behaviors that occur on positive Lebesgue measure sets.
Though the study of chaotic systems occupies a good part of smooth ergodic theory, the hypothesis of positive LE is extremely difficult to verify
when one is handed a concrete map defined by a specific equation – except
where the map possesses a continuous family of invariant cones.
An example that has come to symbolize the enormity of the challenge is the standard map, a mapping
Φ=Φ_L of the 2-torus given by
Φ (I, θ) = (I + L sinθ, θ + I + L sinθ)
where both coordinates I, θ are taken modulo 2π and
L ∈ is a parameter. For L ≫ 1, the map Φ_L has strong expansion
and contraction, their directions separated by clearly defined invariant cones
on most of the phase space – except on two narrow strips near θ = ±π/2
on which vectors are rotated violating cone preservation.
As the areas of these “critical regions" tend to zero as L →∞, one might
expect LE to be positive, but this problem has remained unresolved:
no one has been able to prove, or disprove, the positivity of Lyapunov exponents
for Φ_L for any one L, however large,
in spite of considerable
effort by leading researchers. The best result known <cit.> is that the LE of Φ_L is positive on sets of Hausdorff dimension 2 (which are very far from having positive Lebesgue measure). The presence of elliptic islands, which has been shown
for a residual set of parameters <cit.>, confirms that the obstructions to proving the positivity
of LE are real.
In this paper, we propose that this problem can be more tractable if one accepts
that dynamical systems are inherently noisy. We show,
for a class of 2D maps F that includes the standard map, that by adding a
very small, independent random perturbation at each step, the resulting
maps have a positive LE that correctly reflects the rate
of expansion of F – provided that F has sufficiently large expansion
to begin with. More precisely, if dF∼ L, L ≫ 1, on a large portion of the phase space, then random perturbations of size O(e^-L^2-ε) are sufficient for
guaranteeing a LE ∼log L.
Our proofs for these results, which are very short compared to previous works
on establishing nonuniform hyperbolicity for deterministic maps
(e.g. <cit.>)
are based on the following idea: We view the random process as a Markov chain
on the projective bundle of the manifold on which the random maps act, and
represent LE as an integral. Decomposing this integral into a “good part" and
a “bad part", we estimate the first leveraging the strong hyperbolicity of the
unperturbed map, and obtain a lower bound for the second provided
the stationary measure is not overly concentrated in certain “bad regions".
We then use
a large enough random perturbation to make sure that the stationary measure
is sufficiently diffused.
We expect that with more work, this method can be extended both to higher
dimensions and to situations where conditions on the unperturbed map are
relaxed.
Relation to existing results. Closest to the present work are the unpublished results of Carleson and Spencer <cit.>,
who showed for very carefully selected parameters L ≫ 1 of the standard map that
LE are positive when the map's derivatives are randomly perturbed.
For comparison, our first result applies to all L ≫ 1 with a slightly larger
perturbation than in <cit.>, and our second result
assumes additionally a finite condition on a finite set; we avoid the rather delicate
parameter selection by perturbing the maps themselves, not just
their derivatives.
Parameter selections similar to
those in <cit.> were used – without random perturbations –
to prove the positivity of LE
for the Hénon maps <cit.>, quasi-periodic cocycles <cit.>, and rank-one attractors <cit.>, building on earlier
techniques in 1D, see e.g. <cit.>.
See also <cit.>, which estimates LE from below
for Schrödinger cocycles over the standard map.
Relying on random perturbation alone – without parameter deletion – are <cit.>, which contains results analogous to ours in 1D, and
<cit.>, which applied random rotations to twist maps.
We mention also <cit.>, which uses hyperbolic toral automorphisms
in lieu of random perturbations.
Farther from our setting, the literature on LE is vast. Instead of
endeavoring to give reasonable citation of individual papers, let us mention
several categories of results in the literature that have attracted much attention,
together with a small sample of results in each. Furstenberg's work <cit.> in the early 60's initiated extensive research on criteria for the LE of random matrix products to be distinct (see e.g. <cit.>).
Similar ideas were exploited to study LE of cocycles over hyperbolic
and partially hyperbolic systems (see e.g. <cit.>), with a generalization to deterministic maps <cit.>. Unlike the results in the first two paragraphs, these results
do not give quantitative estimates; they assert only that LE are simple, or nonzero.
We mention as well the formula of Herman <cit.> and the related work <cit.>, which use subharmonicity to estimate Lyapunov exponents, and the substantial body of work on 1D Schrödinger operators (e.g. <cit.>). We also note the C^1 genericity of zero Lyapunov exponents of volume-preserving surface diffeomorphisms away from Anosov <cit.> and its higher-dimensional analogue <cit.>. Finally, we acknowledge results on the continuity or stability of LE, as in, e.g., <cit.>.
This paper is organized as follows: We first state and prove two results in a relatively
simple setting: Theorem <ref>, which contains the core idea of this paper, is proved in
Sections 3 and 4, while Theorem <ref>, which
shows how perturbation size can be decreased if some mild
conditions are assumed, is proved in Section 5.
We also describe a slightly more general setting
which includes the standard map, and observe in Section 6 that the proofs
given earlier in fact apply, exactly as written, to this broader setting.
§ RESULTS AND REMARKS
§.§ Statement of results
We let ψ : 𝕊^1 →ℝ be a C^3 function for which the following hold:
(H1) C_ψ' = {x̂∈𝕊^1 : ψ'(x̂) = 0} and C_ψ” = {ẑ∈𝕊^1 : ψ”(ẑ) = 0} have finite cardinality.
(H2) min_x̂∈ C_ψ' |ψ”(x̂)| > 0 and min_ẑ∈ C_ψ” |ψ”'(ẑ)| > 0.
For L > 1 and a ∈ [0,1), we define
f=f_L,a: 𝕊^1 →ℝ
f(x) = L ψ(x) + a .
Let 𝕋^2 = 𝕊^1 ×𝕊^1 be the 2-torus.
The deterministic map to be perturbed is
F=F_L,a: 𝕋^2 →𝕋^2
F(x,y) = ( [ f(x) - y (mod 1); x ]) .
We have abused notation slightly in Eq (<ref>):
We have made sense of f(x)-y
by viewing y ∈𝕊^1 as belonging in [0,1), and have
written “z (mod 1)" instead of
π(z) where π:ℝ→𝕊^1 ≅ℝ/ℤ is the usual
projection.
Observe that F is an area-preserving diffeomorphism of 𝕋^2.
We consider compositions of random maps
F^n_ = F_ω_n∘⋯∘ F_ω_1 n=1,2, …,
where
F_ω = F ∘ S_ω ,
S_ω(x,y) = ( x + ω (mod 1), y) ,
and the sequence =(ω_1, ω_2, …) is chosen i.i.d. with respect
to the uniform distribution ν^ on [- , ] for some > 0. Thus our sample space can be
written as Ω = [- , ]^,
equipped with the probability =(ν^)^.
Throughout, we let Leb denote Lebesgue measure on ^2.
Assume ψ obeys (H1),(H2), and fix a ∈ [0,1). Then
(a) for every L>0 and ε>0,
λ_1^ε = lim_n →∞1/n log‖ (d F_ω̅^n)_(x,y)‖
exists and is independent of (x, y, ω̅) for every (x,y) ∈𝕋^2 and ℙ-a.e. ω̅∈Ω;
(b) given α, β∈(0,1), there is a constant C = C_α, β > 0 such
that for all L, ε, where L is sufficiently large (depending on ψ, α, β)
and ε≥ L^- CL^1 - β, we have
λ_1^ε≥α log L .
Theorem <ref>
assumes no information whatsoever on dynamical properties of F beyond its
definition in Eq (<ref>).
Our next result shows, under some minimal, easily checkable, condition on
the first iterates of F, that the bound above on λ^ε_1
continues to hold for a significantly smaller ε. Let 𝒩_c(C_ψ') denote the c-neighborhood of C_ψ' in 𝕊^1. We formulate the following condition
on f=f_L,a:
(H3)(c) For any x̂, x̂' ∈ C_ψ', we have that f(x̂) - x̂'
(mod 1) ∉𝒩_c(C_ψ') .
Observe that for L large, the set of a for which (H3)(c) is satisfied
has measure tending to 1 as c → 0.
Let ψ be as above, and fix an arbitrary
c_0>0. Then given α, β∈(0,1), there is a constant C = C_α, β > 0 such that for all L, a, ε, where
– L is sufficiently large (depending on ψ, c_0, α, β),
– a ∈[0,1) is chosen so that f = f_L, a satisfies (H3)(c_0), and
– ε≥ L^- CL^2 - β,
then we have
λ_1^ε≥α log L .
A slight extension
Let ψ: 𝕊^1 →ℝ be as above. For L > 0 and a ∈[0,1),
we write f_0 = f_ψ, L, a = Lψ + a, and for ε > 0 define 𝒰_ε, L(f_0) = { f : 𝕊^1 →ℝ such that ‖ f-f_0‖_C^3 < L ε} . We let C'_f and C”_f denote the zeros of f' and f”. Below, (H3)(c) is to be read with C'_f in the place of C'_ψ.
We write F_f(x,y) = (f(x) - y , x) for f ∈𝒰_ε, L(f_0).
Let ψ : 𝕊^1 →ℝ satisfy (H1), (H2) as before. For a ∈ [0,1) and L >1, let f_0 be as defined
small so that
(1) Theorems <ref> and <ref> hold for F = F_f
for all L > 0 sufficiently large and f ∈𝒰_ε, L(f_0);
(2) L depends only on ψ as before but a in Theorem <ref>
depends on f.
The Chirikov standard map is defined as follows: a parameter L > 0 is fixed, and the map (I, θ) ↦ (I̅, θ̅), sending [0, 2π)^2 into itself, is defined by
I̅ = I + 2 π L sinθ ,
θ̅ = θ + I̅ = θ + I + 2 π L sinθ ,
where both coordinates I, θ are taken modulo 2π.
Let L be sufficiently large. Then:
* Theorem <ref> holds for the standard map.
* If additionally the map f(x) = L sin(2 π x) + 2 x satisfies (H3)(c) for some c > 0, then Theorem <ref> holds for the standard map for this value of L.
Theorem <ref> and Corollary <ref> are proved in Section 6. All discussions prior to Section 6
pertain to the setting described at the beginning of this section.
§.§ Remarks
Remark 1. Uniform hyperbolicity on large but non-invariant regions
of the phase space. An important property of the deterministic map F is that
cone fields can be defined on all of 𝕋^2 in such a way
that they are preserved by dF_(x,y) for (x,y) in a large but non-invariant
region in 𝕋^2. For example, let C_1/5 = {v=(v_x, v_y): |v_y/v_x| ≤ 1/5}.
Then for (x,y) ∉{|f'| < 10}, which by (H1) and (H2) is comprised
of a finite number of very narrow vertical strips in 𝕋^2 for L large,
one checks easily that dF_(x,y) maps C_1/5 into C_1/5, and expands vectors in these cones
uniformly. It is just as easy to see that this cone invariance property
cannot be extended across the strips in {|f'| < 10}, and that F is not uniformly hyperbolic.
These “bad regions" where the invariant cone property fails
shrink in size as L increases. More precisely,
let K_1 > 1 be such that |ψ'(x)| ≥ K_1^-1 d(x, C_ψ');
that such a K_1 exists follows from (H1), (H2) in Sect. 2.1. It is easy to check that
for any η∈(0,1),
d(x, C_ψ') ≥ K_1 L^-1 + η ⟹ |f'(x)| ≥ L^η ,
and this strong expansion in the x-direction
is reflected in dF_(x,y) for any y.
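As an informal illustration of this cone property (a check, not part of the proofs), one can verify numerically that dF = [[f', -1], [1, 0]] maps C_1/5 into itself and expands vectors whenever |f'| ≥ 10:

```python
import numpy as np

def cone_preserved(fp, bound=0.2):
    """Check that dF = [[fp, -1], [1, 0]] maps C_{1/5} into C_{1/5} and
    expands, for a given derivative value fp with |fp| >= 10."""
    for s in np.linspace(-bound, bound, 41):       # slopes in [-1/5, 1/5]
        v = np.array([1.0, s])
        w = np.array([fp * v[0] - v[1], v[0]])     # image vector; slope = 1/(fp - s)
        if abs(w[1] / w[0]) > bound or np.linalg.norm(w) <= np.linalg.norm(v):
            return False
    return True

print(all(cone_preserved(fp) for fp in (10.0, -10.0, 50.0, -37.0)))  # True
```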
We must stress, however, that regardless of how small these “bad regions" are, the positivity of Lyapunov exponents is not guaranteed for the deterministic map F – except for the Lebesgue measure zero set of orbits that never venture into
these regions. In general, tangent vectors that have expanded in the good regions
can be rotated into contracting directions when the orbit visits a bad region.
This is how elliptic islands are formed.
Remark 2. Interpretation of condition (H3). We have seen that visiting neighborhoods of V_x̂ := {x=x̂} for x̂∈ C'_ψ can lead to
a loss in hyperbolicity, yet
at the same time it is unavoidable that the “typical" orbit will visit these “bad regions".
Intuitively, it is logical to expect the situation to improve if we do not permit orbits to visit these bad regions two iterates in a row – except
that such a condition is impossible to arrange: since F(V_x̂') = {y=x̂'},
it follows that F(V_x̂') meets V_x̂ for every x̂, x̂' ∈ C'_ψ.
In Theorem <ref>,
we assert that in the case of random maps, to reduce the size of ε it suffices to impose the condition that no orbit can be in C'_ψ×𝕊^1 for three consecutive iterates. That is to say, suppose F(x_i,y_i) = (x_i+1,y_i+1), i=1,2,…. If x_i, x_i+1∈ C'_ψ,
then x_i+2 must stay away from C'_ψ. This is a rephrasing of (H3).
Such a condition is both realizable and checkable, as it involves
only a finite number of iterates for a finite set of points.
Remark 3. Potential improvements.
Condition (H3) suggests that one may be able to shrink ε further by imposing
similar conditions on one or two more iterates
of F. Such conditions will cause the combinatorics in Section 5 to be more involved,
and since our ε, which is ∼ L^-L^2-β, is already extremely small for large L,
we will not pursue these possibilities here.
§ PRELIMINARIES
The results of this section apply to all L, ε > 0 unless otherwise stated.
§.§ Relevant Markov chains
Our random maps system {F^n_ω}_n ≥1 can be seen as a
time-homogeneous Markov chain 𝒳 := {(x_n, y_n)} given by
(x_n, y_n) = F^n_ω(x_0, y_0) = F_ω_n(x_n-1, y_n-1) .
That is to say, for fixed ε, the transition probability starting from (x,y) ∈𝕋^2 is
P((x,y), A) = P^ε((x,y), A) = ν^ε{ω∈ [-ε, ε] : F_ω(x,y) ∈ A }
for Borel A ⊂𝕋^2. We write P^(k)((x,y), ·) (or P^(k)_(x,y)) for the corresponding k-step transition probability. It is easy to see that for this chain, Lebesgue measure is stationary,
meaning for any Borel set A ⊂𝕋^2,
Leb(A) = ∫ P((x,y), A) d Leb(x,y) .
Ergodicity of this chain is easy and we dispose of it quickly.
Lebesgue measure is ergodic.
For any (x,y) ∈𝕋^2 and ω_1, ω_2 ∈ [-ε, ε],
F_ω_2∘ F_ω_1(x,y) = F ∘ F ∘ S'_ω_1, - ω_2(x,y)
where S'_ω, ω'(x,y) = ( x + ω (mod 1), y + ω' (mod 1) ). That is to say, P^(2)_(x,y) is supported on
the set F^2([x-ε, x+ε] × [y-ε, y+ε]), on which it is equivalent to
Lebesgue measure. From this one deduces immediately
that (i) every ergodic stationary measure of 𝒳 = {(x_n, y_n)} has a density,
and (ii) all nearby points in 𝕋^2
are in the same ergodic component. Thus there can be at most one ergodic
component.
Part (a) of Theorem <ref> follows immediately from Lemma <ref> together with the
Multiplicative Ergodic Theorem for random maps.
Next we introduce a Markov chain 𝒳̂ on 𝕋^2 ×ℙ^1, the projective bundle over 𝕋^2. Associating θ∈ℙ^1 ≅ [0, π) with the unit vector u_θ = (cosθ, sinθ), F_ω induces a mapping F̂_ω: 𝕋^2 ×ℙ^1 →𝕋^2 ×ℙ^1 defined by
F̂_ω (x,y,θ) = (F_ω(x,y), θ') where u_θ' =
±(dF_ω)_(x,y) u_θ/‖(dF_ω)_(x,y) u_θ‖ .
Here ± is chosen to ensure that θ' ∈ [0,π). The Markov chain 𝒳̂ := {(x_n,y_n,θ_n)} is then defined by
(x_n, y_n,θ_n) = F̂_ω_n(x_n-1, y_n-1, θ_n-1) .
We write P̂ for its transition operator, P̂^(n) for the n-step transition operator, and use Leb to denote also Lebesgue measure on 𝕋^2 ×ℙ^1.
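A minimal sketch of one step of F̂_ω follows, again for the illustrative choice ψ(x) = sin(2πx); the parameter defaults are our assumptions:

```python
import numpy as np

def F_hat(x, y, theta, omega, L=30.0, a=0.0):
    """One step of the projectivized chain, with psi(x) = sin(2*pi*x) assumed."""
    ys = (x + omega) % 1.0                         # new y-coordinate
    fp = 2 * np.pi * L * np.cos(2 * np.pi * ys)    # f'(x + omega)
    xn = (L * np.sin(2 * np.pi * ys) + a - y) % 1.0
    u = np.array([np.cos(theta), np.sin(theta)])
    w = np.array([fp * u[0] - u[1], u[0]])         # (dF_omega) u_theta
    return xn, ys, np.arctan2(w[1], w[0]) % np.pi  # representative in [0, pi)
```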
For any stationary probability measure μ̂ of the Markov chain 𝒳̂ = {(x_n,y_n,θ_n)}, define
λ(μ̂) = ∫log ‖(dF_ω)_(x,y) u_θ‖ dμ̂(x,y,θ)
dν^ε(ω) .
For any stationary probability measure μ̂ of
the Markov chain 𝒳̂, we have
λ_1^ε≥λ(μ̂) .
By the additivity of the
cocycle (x,y,θ) ↦log ‖d(F_ω)_(x,y) u_θ‖, we have, for any n ∈ℕ,
λ (μ̂) = ∫1/nlog ‖(dF^n_ω)_(x,y) u_θ‖ dμ̂(x,y,θ) d (ν^ε)^n(ω)
≤ ∫1/nlog ‖(dF^n_ω)_(x,y)‖ d Leb(x,y) d (ν^ε)^n(ω) .
That μ̂ projects to Lebesgue measure on 𝕋^2 is used in passing from
the first to the second line, and the latter converges to
λ_1^ε as n →∞ by the Multiplicative Ergodic Theorem.
Thus to prove part (b) of Theorem <ref>, it suffices to prove that λ(μ̂) ≥αlog L for some μ̂. Uniqueness of μ̂ is not required. On the other hand, once we have
shown that λ_1^ε > 0, it will follow that there can be at most
one μ̂ with λ(μ̂) > 0.
Details are left to the reader.
We remark also that while Theorems <ref>–<ref> hold for arbitrarily large values
of ε, we will treat only the case ε≤ 1/2, leaving the very minor
modifications needed for the ε > 1/2 case to the reader.
Finally, we will omit from time to time the notation “(mod 1)" when the meaning
is obvious, e.g. instead of the technically correct
but cumbersome f(x+ω) - y (mod 1),
we will write f(x+ω) - y.
§.§ A 3-step transition
In anticipation for later use, we compute here the transition probabilities P̂^(3)((x,y,θ), ·), also denoted P̂^(3)_(x,y,θ).
Let (x_0, y_0, θ_0) ∈𝕋^2 ×ℙ^1 be fixed.
We define H = H^(3)_(x_0, y_0, θ_0) : [-ε,ε]^3 →𝕋^2 ×ℙ^1 by
H(ω_1, ω_2, ω_3) = F̂_ω_3∘F̂_ω_2∘F̂_ω_1 (x_0, y_0, θ_0) .
Then P̂^(3)_(x_0,y_0,θ_0) = H_*((ν^ε)^3), the pushforward of (ν^ε)^3 on [-ε,ε]^3 by H. Write (x_i, y_i,θ_i) = F̂_ω_i (x_i-1, y_i-1,θ_i-1), i = 1,2,3.
Let ε∈ (0, 1/2]. Let (x_0, y_0, θ_0) ∈𝕋^2 ×ℙ^1 be fixed, and let
H = H^(3)_(x_0, y_0, θ_0) be as above. Then
(i)
det dH(ω_1, ω_2, ω_3) = sin^2 (θ_3) tan^2(θ_2) tan^2(θ_1) f”(x_0 + ω_1) ;
(ii) assuming θ_0 ≠π/2, we have that
det dH ≠ 0 on V where V ⊂
[-ε,ε]^3 is an open and dense set having full Lebesgue measure in [-ε,ε]^3;
(iii) H is at most #(C_ψ”)-to-one, i.e., no point in 𝕋^2 ×ℙ^1 has more than # C_ψ” preimages.
The projectivized map F̂_ω can be written as
F̂_ω(x,y,θ) = ( f(x+ω)-y , x+ω , arctan( 1/(f'(x+ω)-tanθ) ) ) ,
where arctan is chosen to take values in [0,π).
(i) It is convenient to write k_i = tanθ_i, so that k_i+1 = (f'(y_i+1) - k_i)^-1. Note as well that x_i+1 = f(y_i+1) - y_i. Then
dx_3∧ dy_3∧ dθ_3
= (f'(y_3)dy_3-dy_2)∧ dy_3∧(∂θ_3/∂ y_3dy_3+∂θ_3/∂ k_2dk_2
)
= -dy_2∧ dy_3∧(∂θ_3/∂ k_2dk_2)
= -dy_2∧(dω_3+f'(y_2)dy_2-dy_1)∧(∂θ_3/∂ k_2)(∂ k_2/∂ y_2dy_2+∂ k_2/∂ k_1dk_1)
= -dy_2∧ d(ω_3-y_1)∧(∂θ_3/∂ k_2∂ k_2/∂ k_1dk_1)
= -(dω_2+f'(y_1)dy_1)∧ d(ω_3-y_1)∧(∂θ_3/∂ k_2∂ k_2/∂ k_1∂ k_1/∂ y_1 dy_1)
= -dω_2∧ dω_3∧(∂θ_3/∂ k_2∂ k_2/∂ k_1∂ k_1/∂ y_1 dω_1).
It remains to compute the parenthetical term. The second two partial derivatives are straightforward. The first partial derivative is computed by taking the partial derivative of the formula tanθ_3 = (f'(y_3) - k_2)^-1 with respect to k_2 on both sides.
We obtain as a result
∂θ_3/∂ k_2∂ k_2/∂ k_1∂ k_1/∂ y_1= - sin^2θ_3tan^2θ_2 tan^2θ_1 f”(x_0+ω_1) .
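As an informal sanity check of (i) (not part of the proof), one can compare a finite-difference Jacobian of H with the closed-form determinant, for the sample choice f(x) = L sin(2πx) + a; the concrete values below are arbitrary:

```python
import numpy as np

L, a = 7.0, 0.3
f   = lambda x: L * np.sin(2 * np.pi * x) + a
fp  = lambda x: 2 * np.pi * L * np.cos(2 * np.pi * x)
fpp = lambda x: -(2 * np.pi) ** 2 * L * np.sin(2 * np.pi * x)

def H(w, x=0.12, y=0.55, th=0.7):
    """Three projectivized steps, without taking coordinates mod 1
    (which does not affect the Jacobian)."""
    thetas = []
    for wi in w:
        ys = x + wi
        x, y, th = f(ys) - y, ys, np.arctan2(1.0, fp(ys) - np.tan(th))
        thetas.append(th)
    return np.array([x, y, th]), thetas

w0, h = np.array([0.01, -0.02, 0.015]), 1e-6
J = np.column_stack([(H(w0 + h * e)[0] - H(w0 - h * e)[0]) / (2 * h)
                     for e in np.eye(3)])
t1, t2, t3 = H(w0)[1]
closed = np.sin(t3) ** 2 * np.tan(t2) ** 2 * np.tan(t1) ** 2 * fpp(0.12 + w0[0])
print(np.isclose(abs(np.linalg.det(J)), abs(closed)))  # True
```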
(ii) For x ∈ [0,1) and θ∈ [0,π) ∖{π/2}, define U(x, θ) = {ω∈ [-ε, ε] : f'(x + ω) - tanθ≠ 0}. Note that U(x, θ) has full Lebesgue measure in [-ε, ε] by (H1). We define
V = {(ω_1, ω_2, ω_3) ∈ [-ε, ε]^3 : ω_1 ∈ U(x_0, θ_0), ω_2 ∈ U(x_1, θ_1),
ω_3 ∈ U(x_2, θ_2) , and f”(x_0 + ω_1) ≠ 0} .
By (H1) and Fubini's Theorem, V has full measure in [-ε, ε]^3, and it is clearly open
and dense. To show det dH ≠ 0,
we need θ_i ≠ 0 for i=1,2,3 on V. This follows from the fact that for
θ_i-1≠π/2, if ω_i ∈ U(x_i-1, θ_i-1) then θ_i ≠ 0, π/2.
(iii) Given (x_3, y_3, θ_3), we solve for (ω_1, ω_2, ω_3)
so that H(ω_1, ω_2, ω_3)=(x_3, y_3, θ_3). Letting
(x_i, y_i, θ_i), i = 1,2, be the intermediate images, we note that
y_2 is uniquely determined by x_3 = f(y_3) - y_2, θ_2 is determined
by tanθ_3 = (f'(y_3) - tanθ_2)^-1, as is θ_1 once θ_2 and y_2 are fixed. This in turn determines f'(x_0 + ω_1), but here uniqueness of
solutions breaks down.
Let ω_1^(i)∈ [-ε, ε], i=1, …, n, give the required value of f'(x_0+ω_1^(i)). We observe that each ω_1^(i) determines uniquely
y_1^(i) = x_0 + ω_1^(i), x_1^(i)=f(y_1^(i))-y_0,
ω_2^(i) = y_2-x_1^(i), x_2^(i) = f(y_2) - y_1^(i), and
finally ω_3^(i) = y_3 - x_2^(i).
Thus the number of H-preimages of any one point in 𝕋^2 ×ℙ^1 cannot exceed n.
Finally, we have n ≤ 2 for ε small, and n ≤# (C_ψ”) for ε as large as 1/2.
For any stationary probability μ̂ of 𝒳̂,
we have μ̂(𝕋^2 ×{π/2}) = 0, and for any
(x_0,y_0,θ_0) with θ_0 ≠π/2 and any (x_3, y_3,θ_3) ∈𝕋^2 ×ℙ^1,
the density of P̂^(3)_(x_0,y_0,θ_0) at (x_3, y_3,θ_3)
is given by
1/(2ε)^3(∑_ω_1 ∈ℰ(x_3, y_3,θ_3)1/|f”(x_0 + ω_1)|) 1/ρ(x_3,y_3,θ_3)
where
ℰ(x_3, y_3,θ_3) = {ω_1 : ∃ ω_2, ω_3 such that
H(ω_1, ω_2, ω_3) = (x_3, y_3,θ_3)}
and
ρ(x,y,θ) = sin^2 (θ) [ f'(f(y) - x) (f'(y) - cotθ) - 1 ]^2 .
To show μ̂(𝕋^2 ×{π/2}) = 0, it suffices to show that
given any x ∈ [0,1) and any θ∈ [0,π),
ν^ε{ω∈ [-ε, ε] : f'(x + ω) = tanθ} = 0, and
that is true because C”_ψ is finite by (H1). The formula in (<ref>)
follows immediately
from the proof of Lemma <ref>, upon expressing tan^2(θ_2) tan^2(θ_1)
in terms of (x_3,y_3,θ_3) as was done in the proof of Lemma <ref>(iii).
§ PROOF OF THEOREM <REF>
The idea of our proof is as follows: Let μ̂ be any stationary probability
of the Markov chain 𝒳̂.
To estimate the integral in λ(μ̂), we need
to know the distribution of μ̂ in the θ-direction.
Given that the maps F_ω are strongly uniformly hyperbolic on a large part of the phase space
with expanding directions well aligned with the x-axis (see Remark 1),
one can expect that under dF^N_ω for large N, μ̂ will be pushed toward a neighborhood of {θ=0} over much of 𝕋^2 ×ℙ^1,
and that is consistent with λ_1^ε≈log L. This reasoning, however,
is predicated on μ̂ not being concentrated, or stuck, on very small sets far away
from {θ≈0}, a scenario not immediately ruled out
as the densities of transition probabilities are not bounded.
We address this issue directly by proving
in Lemma <ref> an a priori bound
on the extent to whichμ̂-measure can be concentrated on (arbitrary) small sets.
This bound is used in Lemma <ref> to estimate theμ̂-measure of the
set in^2not yet attracted to{θ=0}inNsteps. The rest of the proof
consists of checking that these bounds are adequate for our purposes.
In the rest of the proof, letμ̂be an arbitrary invariant
probability measure of𝒳̂.
Let A ⊂{θ∈ [π/4, 3π/4]} be a Borel subset of 𝕋^2 ×ℙ^1. Then for L large enough,
μ̂(A) ≤Ĉ/L^1/4(1 + 1/ε^3 L^2 Leb(A) ) ,
for all ε∈ (0, 1/2], where Ĉ > 0 is a constant independent of ε, L, or A.
By the stationarity of μ̂, we have, for every Borel set
A ⊂𝕋^2 ×ℙ^1,
μ̂(A) = ∫_𝕋^2 ×ℙ^1P̂^(3)_(x_0, y_0, θ_0)(A)
d μ̂(x_0, y_0, θ_0) .
Our plan is to decompose this integral into a main term and “error terms", depending
on properties of the density of P̂^(3)_(x_0, y_0, θ_0). The decomposition
is slightly different depending on whether ε≤ L^-1/2 or ε≥ L^-1/2.
The case ε≤ L^-1/2. Let K_2 ≥ 1 be such that |ψ”(x)| ≥ K_2^-1 d(x,C”_ψ); such
a K_2 exists by (H1),(H2). Define B” = {(x, y) : d(x,C_ψ”) ≤ 2K_2 L^- 1/2}.
Then splitting the right side of (<ref>) into
∫_B”× [0, π) + ∫_(𝕋^2 × [0, π)) ∖ (B”× [0, π)) ,
we see that the first integral is ≤ Leb(B”) ≤4K_2 M_2/√(L) where M_2 = # C_ψ”. As for (x_0,y_0) ∉ B”, since |f”(x_0+ω)| ≥ L^1/2, the density of
P̂^(3)_(x_0, y_0, θ_0) is ≤ [(2ε)^3 M_2^-1 L^1/2ρ]^-1
by Corollary <ref>.
To bound the second integral in (<ref>), we need to consider the zeros of ρ. As A ⊂𝕋^2 × [π/4, 3π/4],
we have sin^2(θ_3) ≥ 1/2. The form of ρ in Corollary <ref> prompts us to decompose A into
A = (A ∩Ĝ) ∪ (A ∖Ĝ)
where Ĝ = G × [0,π) and
G = {(x,y) : d(y, C'_ψ) > K_1 L^-1/2, d(f(y)-x, C'_ψ) ≥ K_1 L^-1/2} .
Then on Ĝ∩ A, we have
ρ≥1/2 (1/2 L)^2 for L sufficiently large. This gives
∫_(𝕋^2 × [0, π)) ∖ (B”× [0, π))P̂^(3)_(x_0, y_0, θ_0)(A ∩Ĝ) d μ̂≤C/ε^3 √(L)1/L^2 Leb(A) .
Finally, by the invariance of μ̂,
∫_(𝕋^2 × [0, π)) ∖ (B”× [0, π))P̂^(3)_(x_0, y_0, θ_0)(A ∖Ĝ) d μ̂ ≤ μ̂(A ∖Ĝ) ≤ μ̂(Ĝ^c) =
Leb(𝕋^2 ∖ G) .
We claim that this is ≲ L^-1/2. Clearly, Leb{d(y, C'_ψ) ≤ K_1 L^-1/2}
≈ L^-1/2. As for the second condition,
{y: f(y) ∈ (z-K_1 L^-1/2, z+ K_1 L^-1/2)}
= {y: ψ(y) ∈ (z'-K_1 L^-3/2, z'+ K_1 L^-3/2)}
which in the worst case has Lebesgue measure ≲ L^-3/4 by (H1), (H2).
The case ε≥ L^-1/2. Here we let B̃” = {(x,y) : d(x,C_ψ”) ≤ K_2 L^- 3/4}, and decompose the right side of (<ref>) into
∫ (P̂^(3)_(x_0, y_0, θ_0))_1(A) d μ̂ + ∫ (P̂^(3)_(x_0, y_0, θ_0))_2(A) d μ̂
where, in the notation in Sect. 3.2,
(P̂^(3)_(x_0, y_0, θ_0))_1 =
H_* ((ν^ε)^3|_{x_0 + ω_1 ∈B̃”})
(P̂^(3)_(x_0, y_0, θ_0))_2 =
H_* ((ν^ε)^3|_{x_0 + ω_1 ∉B̃”}) .
Then the first integral is bounded above by
sup_x_0 ∈𝕊^1ν^ε{ω_1 ∈B̃” - x_0}≲ε^-1 Leb(B̃”) ≤ Const · L^- 1/4 ,
while the density of (P̂^(3)_(x_0, y_0, θ_0))_2 is ≤ [(2ε)^3 M_2^-1
L^1/4·ρ(x_3,y_3,θ_3)]^-1. The second integral is
treated as in the case of ε≤ L^-1/2.
As discussed above, we now proceed to estimate
the Lebesgue measure of the set that remains far
away from {θ=0} after N steps, where N is arbitrary for now.
For fixed ω = (ω_1, …, ω_N), we write (x_i,y_i) = F^i_ω(x_0,y_0) for 1 ≤ i ≤ N, and define G_N = G_N(ω_1, …, ω_N) by
G_N = {(x_0, y_0) ∈𝕋^2 : d(x_i + ω_i+1, C_ψ') ≥ K_1 L^-1 + β for all 0 ≤ i ≤ N-1 } .
We remark that for (x_0, y_0) ∈ G_N, the orbit F^i_ω(x_0,y_0), i ≤ N,
passes through uniformly hyperbolic regions of 𝕋^2, where invariant
cones are preserved and |f'(x_i + ω_i+1)| ≥ L^β for each i<N; see Remark 1 in Section 2.
We further define Ĝ_N = { (x_0,y_0,θ_0) : (x_0,y_0) ∈ G_N}.
Let β>0 be given. We assume
L is sufficiently large (depending on β).
Then for any N ∈ℕ, ε∈ (0, 1/2] and ω_1, …, ω_N ∈ [-ε, ε],
μ̂(Ĝ_N ∩{|tanθ_N| > 1}) ≤Ĉ/L^1/4(1 + 1/ε^3 L^2+β N) .
For (x_0,y_0) ∈ G_N, consider the singular value decomposition of
(dF^N_ω)_(x_0, y_0). Let ϑ^-_0 denote the angle corresponding to the most contracted direction of (dF^N_ω)_(x_0, y_0) and ϑ_N^- its
image under (dF^N_ω)_(x_0, y_0), and let σ > 1 > σ^-1 denote
the singular values of (dF^N_ω)_(x_0, y_0). A straightforward computation gives
1/2 L^β≤ |tanϑ^-_0|, |tanϑ_N^-| and σ≥( 1/3 L^β)^N .
It follows immediately that for fixed (x_0,y_0),
{θ_0 : |tanθ_N| >1}⊂ [π/4, 3π/4] and
Leb{θ_0 : |tanθ_N| >1} < L^-β N .
Applying Lemma <ref> with A = Ĝ_N ∩{ | tanθ_N| > 1},
we obtain the asserted bound.
By the stationarity of μ̂, it is true for any N that
λ(μ̂) = ∫( ∫log ‖(dF_ω_N+1)_(x_N, y_N) u_θ_N‖ d (F̂_ω_N∘…∘F̂_ω_1)_* μ̂)
dν^ε(ω_1) ⋯ d ν^ε(ω_N+1) .
We have chosen to estimate λ(μ̂) one sample path at a time because
we have information from Lemma <ref> on (F̂_ω_N∘…∘F̂_ω_1)_*μ̂ for each sequence ω_1, …, ω_N.
Let α, β∈(0,1). Then, there are constants C = C_α, β > 0
and C' = C'_α, β > 0 such that for any L sufficiently large, we have the following. Let N = ⌊ C' L^1 - β⌋, ε∈ [ L^- CL^1 - β, 1/2], and fix arbitrary ω_1, ⋯, ω_N+1∈ [-ε, ε]. Then,
I := ∫_𝕋^2 ×ℙ^1log ‖(d F_ω_N+1)_(x_N, y_N) u_θ_N‖ d μ̂(x_0,y_0, θ_0) ≥ αlog L .
Integrating (<ref>) over (ω_1, …, ω_N+1) gives λ(μ̂) ≥αlog L.
As λ_1^ε≥λ(μ̂), part (b) of Theorem <ref> follows
immediately from this proposition.
The number N will be determined in the course of the proof,
and L will be enlarged a finite number of times as we go along.
As usual, we will split I, the integral in (<ref>), into one on a good and
a bad set.
The good set is essentially the one in Lemma <ref>, with an additional condition on
(x_N,y_N), where dF will be evaluated. Let
G^*_N = {(x_0, y_0) ∈ G_N : d(x_N + ω_N + 1, C_ψ') ≥ K_1 m } ,
where m > 0 is a small parameter to be specified later.
As before, we let Ĝ^*_N = G^*_N × [0,π).
Then 𝒢 := Ĝ^*_N ∩{|tanθ_N| ≤ 1} is the good set;
on 𝒢, the integrand in (<ref>) is ≥log( m L/4). Elsewhere we use the worst
lower bound - log (2 ‖ψ'‖_C^0 L). Altogether we have
I ≥ log (1/4 m L) - log( m ‖ψ'‖_C^0 L^2/2 ) μ̂(ℬ) ,
where
ℬ = (𝕋^2 ×ℙ^1) ∖𝒢 =
((𝕋^2 ×ℙ^1) ∖Ĝ^*_N) ∪ (Ĝ^*_N
∩{|tanθ_N| > 1}) .
We now bound μ̂(ℬ). First,
μ̂((𝕋^2 ×ℙ^1) ∖Ĝ^*_N) = 1 - Leb(G^*_N) ≤ K_1 M_1 (m + NL^-1 + β) ,
where M_1 = # C_ψ'. Letting N = ⌊ C'L^1-β⌋ and m = C' = p/(4K_1M_1),
where p is a small number to be determined,
we obtain μ̂((𝕋^2 ×ℙ^1) ∖Ĝ^*_N) ≤1/2 p.
From Lemma <ref>,
μ̂(Ĝ^*_N ∩{|tanθ_N| > 1}) ≤Ĉ/L^1/4(1 + 1/(ε L^β N/3)^3) .
For N as above and ε in the designated range (with C = β C'/3),
the right side of (<ref>) is easily made < 1/2 p
by taking L large, so we have μ̂(ℬ) ≤ p.
Plugging into (<ref>), we see that
I ≥ (1 - 2p) log L - C_p ,
where C_p is a constant depending only on p and ψ.
Setting p = 1/4 (1 - α) and taking L large enough, one ensures that
I > αlog L.
§ PROOF OF THEOREM <REF>
We now show that with the additional assumption (H3), the same result holds
for ε≥ L^- CL^2-β.
§.§ Proof of theorem modulo main proposition
As the idea of the proof of Theorem <ref> closely parallels that of Proposition <ref>, it is useful to recapitulate the main ideas:
1. The main Lyapunov exponent estimate is carried on the subset
{(x_0,y_0,θ_0): (x_0,y_0) ∈ G_N,
|tanθ_N|<1} of 𝕋^2 ×ℙ^1, where G_N
consists of points whose orbits stay
≳ L^-1+β away from C'_ψ×𝕊^1 in their first N iterates.
2. Since Leb(G^c_N) ∼ NL^-1+β, we must take N ≲ L^1-β.
3. By the uniform hyperbolicity of F^N_ω on G_N, Leb{|tanθ_N|>1}∼ L^-cN.
4. For μ̂{|tanθ_N|>1} to be small, we must have (1/ε^3) L^-cN≪ 1 (Lemma <ref>).
Items 2–4 together suggest that we require ε > L^-cN/3≥
L^-c'L^1-β, and we checked that for this ε the proof goes through.
The proof of Theorem <ref> we now present differs from the above in the following way:
The set G_N, which plays the same role as in Theorem <ref>, will be different.
It will satisfy
(A) Leb(G^c_N) ∼ NL^-2+β, and
(B) the composite map dF^N_ω is uniformly hyperbolic on G_N.
The idea is as follows: To decrease ε, we must increase N, while keeping
the set G^c_N small. This can be done by allowing the random orbit to come closer to C_ψ' ×𝕊^1, but with that, one cannot expect uniform hyperbolicity
in each of the first N iterations, so we require only (B). This is the main difference
between Theorems <ref> and <ref>.
Once G_N is properly identified and properties (A) and (B) are proved, the rest of
the proof follows that of Theorem <ref>: Property (A) permits us to take N ∼ L^2-β in item 2, and item 3 is valid by Property (B).
Item 4 is general and therefore unchanged, leading
to the conclusion that it suffices to assume ε > L^-c'L^2-β.
As the arguments follow those in Theorem <ref> verbatim modulo the bounds above and
accompanying constants, we will not repeat
the proof. The rest of this section is focused on
producing G_N with the required properties.
It is assumed from here on that (H3)(c_0) holds,
and L, a and ε are as in Theorem <ref>. Having proved Theorem <ref>,
we may assume ε≤ L^-1. In light of the discussion above, ω_1, ⋯, ω_N, ω_N+1∈ [-ε,ε] will be fixed throughout, and (x_i,y_i) = F^i_ω(x_0,y_0) as before.
Definition of G_N. For arbitrary N we define G_N to be
G_N = {(x_0, y_0) ∈𝕋^2 :
(a) for all 0 ≤ i ≤ N-1,
(i) d(x_i + ω_i+1, C_ψ') ≥ K_1 L^-2 + β ,
(ii) d(x_i + ω_i+1, C_ψ') · d(x_i+1 + ω_i+2, C_ψ') ≥ K_1^2 L^-2 + β/2 ,
(b) d(x_0 + ω_1, C_ψ'), d(x_N-1 + ω_N, C_ψ') ≥ p/(16M_1)
}
where M_1 = # C_ψ' and p = p(α) is a small number to be determined.
Notice that (a)(i) implies only |f'(x_i+ω_i+1)| ≥ L^-1 + β,
not enough to guarantee expansion in the horizontal direction. We remark also that even though (a)(ii)
implies |f'(x_i + ω_i+1) f'(x_i+1 + ω_i+2)| ≥ L^β/2,
hyperbolicity does not follow without control of the angles of the vectors involved.
There exists C_2 ≥ 1 such that for all N,
Leb(G_N^c) ≤ C_2 NL^-2+β + p/4 .
Let
A_1 = {x ∈ [0,1) : d(x, C_ψ') ≥ K_1 L^-2 + β} ,
A_2 = {(x, y) ∈𝕋^2 : x ∈ A_1, and d(x, C_ψ') · d(f x - y, C_ψ') ≥ K_1^2 L^-2 + β/2} .
We begin by estimating Leb(A_2). Note that Leb(A_1^c) ≤ 2 M_1 K_1 L^-2 + β, and for each fixed x ∈ A_1,
Leb{y ∈ [0,1) : d(f x - y, C_ψ') < K_1^2 L^-2 + β/2/d(x, C_ψ')}≤2 M_1 K_1^2 L^-2 + β/2/d(x, C_ψ') ,
hence
Leb(A_2^c) ≤ Leb(A_1^c) + ∫_x ∈ A_12 M_1 K_1^2 L^-2 + β/2/d(x, C_ψ') dx .
Let ĉ = 1/2min{ d(x̂,x̂') : x̂, x̂' ∈ C_ψ', x̂≠x̂'} . We split the integral above into ∫_d(x, C_ψ') > ĉ + ∫_K_1 L^-2 + β≤ d(x, C_ψ') ≤ĉ. The first one is bounded from above by 2 M_1 K_1^2 ĉ^-1 L^-2 + β/2, and the second by
4 M_1^2 K_1^2 L^-2 + β/2∫_K_1 L^-2 + β^ĉdu/u≤ 4 (2 - β) M_1^2 K_1^2 L^-2 + β/2log L ,
(having used that - log K_1 and logĉ are < 0) and so on taking L large enough so that L^β/2≥log L, it follows that Leb(A_2^c) ≤ C_2 L^-2 + β, where C_2 = C_2, ψ depends on ψ alone.
Let G̃_N be equal to G_N with condition (b) removed. Then
G̃_N = ⋂_i = 0^N-1 (F^i_ω)^-1( A_2 - (ω_i+1, 0) ) ,
so Leb(G̃_N) ≥ 1 - C_2 N L^-2 + β. The rest is obvious.
For any N ≥ 2,
(dF^N_ω)_(x_0, y_0) is hyperbolic on G_N with the following uniform bounds:
The larger singular value σ_1 of (dF^N_ω)_(x_0, y_0) satisfies
σ_1( (dF^N_ω)_(x_0, y_0)) ≥ L^β N/15 ,
and if ϑ_0^- ∈ [0,π) denotes the most contracting direction of (dF^N_ω)_(x_0, y_0) and ϑ_N^- ∈ [0,π) its image, then
|ϑ_0^- - π/2|, |ϑ_N^- - π/2| ≤ L^-1 + β/4 .
The bulk of the work in the proof of Theorem <ref> goes into proving this proposition.
§.§ Proof of Property (B) modulo technical estimates
Let c = c_ψ≪ c_0 where c_0 is as in (H3); we stipulate additionally that c ≤ p/(16 M_1), where p = p_α and M_1 are as before. First we introduce the following symbolic encoding of 𝕋^2. Let
B = 𝒩_√(c/L)( C'_ψ) ×𝕊^1,
I = 𝒩_c( C'_ψ) ×𝕊^1 ∖ B,
G = 𝕋^2 ∖ (B ∪ I) .
To each (x_0,y_0) ∈𝕋^2 we associate a symbolic sequence
(x_0,y_0) ↦W̅ = W_N-1⋯ W_1 W_0 ∈{B, I, G}^N ,
where (x_i + ω_i+1, y_i) ∈ W_i. We will refer to any symbolic sequence of length ≥ 1,
e.g. V̅ = GBBG, as a word, and use Len(V̅) to denote the length
of V̅, i.e., the number of letters it contains. We also write G^k as shorthand for a word consisting
of k copies of G. Notice that symbolic sequences are to be read from right
to left.
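A small sketch of this encoding, for the illustrative profile ψ(x) = sin(2πx) whose critical set is C'_ψ = {1/4, 3/4}; all parameter values below are assumptions:

```python
import numpy as np

CRIT = np.array([0.25, 0.75])   # C'_psi for psi(x) = sin(2*pi*x)

def letter(z, c, L):
    d = np.min(np.abs(((z - CRIT) + 0.5) % 1.0 - 0.5))  # circular distance to C'_psi
    if d <= np.sqrt(c / L):
        return "B"
    return "I" if d <= c else "G"

def word(x0, y0, omegas, c, L, a=0.0):
    x, y, letters = x0, y0, []
    for om in omegas:
        xs = (x + om) % 1.0
        letters.append(letter(xs, c, L))
        x, y = (L * np.sin(2 * np.pi * xs) + a - y) % 1.0, xs
    return "".join(reversed(letters))   # words are read from right to left

rng = np.random.default_rng(1)
print(word(0.1, 0.2, rng.uniform(-1e-3, 1e-3, size=20), c=0.05, L=50.0))
```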
The following is a direct consequence of (H3).
Assume that ε < L^-1. Let (x_0,y_0) ∈𝕋^2 be such that (x_0 + ω_1, y_0) ∈ B ∪ I and (x_1 + ω_2, y_1) ∈ B.
Then (x_2 + ω_3, y_2) ∈ G.
Let x̂_0, x̂_1 ∈ C'_ψ (possibly x̂_0 = x̂_1)
be such that d(x_0,x̂_0 )< c and d(x_1,x̂_1) < √(c/L).
Since (f(x̂_1) - x̂_0) ∉𝒩_c_0( C'_ψ)
by (H3), it suffices to show |x_2 - (f(x̂_1) - x̂_0) | ≪ c_0:
|x_2 - (f(x̂_1) - x̂_0) | =
|(f(x_1 + ω_2)-y_1) - (f(x̂_1) - x̂_0) |
≤ | f(x_1 + ω_2) -f(x̂_1)| + d(y_1, x̂_0) .
To see that this is ≪ c_0, observe that for large L, we have
|f(x_1 + ω_2) - f(x̂_1)| < 1/2 L ‖ψ”‖_C^0(√(c/L) +
L^-1)^2 < ‖ψ”‖_C^0 c ,
and d(y_1, x̂_0) = d(x_0+ω_1, x̂_0) < 2c.
Next we apply Lemma <ref> to put constraints on the set of all possible words W̅ associated with (x_0, y_0) ∈ G_N.
Let W̅ be associated with (x_0, y_0) ∈ G_N. Then
W̅ must have the following form:
W̅ = G^k_MV̅_M G^k_M-1V̅_M-1⋯ G^k_1V̅_1 G^k_0 ,
where M ≥ 0, k_0, k_1, ⋯, k_M ≥ 1, and if M > 0, then each
V̅_i is one of the words in
𝒱 = {B, BB, B I^k B, I^k B, I^k, B I^k : k ≥ 1 } .
The sequence W̅ starts and ends with G by the definition of G_N and the stipulation that c ≤ p / (16 M_1); thus a decomposition of the form (<ref>) is obtained with words {V̅_i}_i = 1^M formed from the letters {I, B}. To show that the words {V̅_i}_i = 1^M must be of the prescribed form, observe that
* BB occurs only as a subword of GBBG;
* BI only occurs as a subword of GBI;
* IB only occurs as a subword of IBG.
Each of these constraints follows from Lemma <ref>; for the third,
G is the only letter that can precede IB. It follows from the last two bullets that all
the Is must be consecutive, and B can appear at most twice.
With respect to the representation in (<ref>), we view each V̅_i as representing
an excursion away from the “good region" G. In what follows,
we will show that G_N and (H3) are chosen so that
for (x_0,y_0) ∈ G_N, vectors are not rotated by too much during these excursions,
and hyperbolicity is restored with each visit to G.
To prove this, we introduce the following cones in tangent space:
𝒞_n = 𝒞(L^-1 + β/4) , 𝒞_1 = 𝒞(1) , and 𝒞_w = 𝒞(L^1 - β/4) ,
where 𝒞(s) refers to the cone of vectors whose slopes have absolute value ≤ s. The letters n, w stand for `narrow' and `wide', respectively.
Let (x_0,y_0) ∈ G_N and suppose for some m and l, {(x_m+i-1 + ω_m+i, y_m+i-1)}_i=1^l corresponds to the word V̅ = V_l ⋯ V_1 ∈𝒱.
To simplify notation, we write (x̃_i, ỹ_i) = (x_m+i-1 + ω_m+i, y_m+i-1) and
dF̃^l = dF_(x̃_l, ỹ_l)∘⋯∘ dF_(x̃_1, ỹ_1) .
be as above. Then
dF̃^l (𝒞_n) ⊂𝒞_w ,
(dF̃^l) ^* (𝒞_n) ⊂𝒞_w , min_u ∈𝒞_n, u = 1 dF̃^l u ≥ 1/2 L^/5 n_I (V̅)
where n_I(V̅) is the number of appearances of the letter I in the word V̅.
We defer the proof of Proposition <ref> to the next subsection.
For (x_0, y_0) ∈ G_N, let W̅ be as in (<ref>). It is easy to check
that if (x_m + ω_m+1, y_m) ∈ G, then
(dF_ω_m+1)_(x_m ,y_m) (𝒞_w) ⊂𝒞_n
min_u ∈𝒞_w, ‖u‖ = 1‖(dF_ω_m+1)_(x_m ,y_m) u‖≥1/4 L^β/4 .
Applying (<ref>) and Proposition <ref> alternately, we obtain
(dF^N_ω)_(x_0, y_0) (𝒞_w) ⊂𝒞_n .
Identical considerations for the adjoint yield the cones relation (dF^N_ω)_(x_0, y_0)^* 𝒞_w ⊂𝒞_n. We now use the following elementary fact from linear algebra: if M is a 2 × 2 real matrix with distinct real eigenvalues η_1 > η_2 and corresponding eigenvectors v_1, v_2 ∈ℝ^2, and if 𝒞 is any closed convex cone with nonempty interior for which M 𝒞⊂𝒞, then v_1 ∈𝒞.
We therefore conclude that the maximal expanding direction ϑ_0^+ for (dF^N_)_(x_0, y_0) and its image ϑ_N^+ both belong to 𝒞_n. The estimates for ϑ_0^-, ϑ_N^- now follow on recalling that ϑ_0^- = ϑ_0^+ + π/2 (mod π), ϑ_N^- = ϑ_N^+ + π/2 (mod π).
It remains to compute σ_1( (dF^N_ω)_(x_0, y_0)). From
(<ref>) and the derivative bound in Proposition <ref>, we get
min_u ∈𝒞_w, ‖u‖ = 1‖(dF^N_ω)_(x_0, y_0) u‖≥ L^(β/5) ( (k_0 - 1) + (k_1 - 1) + ⋯ + (k_M-1 - 1) + k_M + ∑_i = 1^M (n_I(V̅_i) + 1) )
As there cannot be more than two copies of B in each V̅∈𝒱,
we have
(n_I(V̅) + 1)/(Len(V̅) + 1) ≥1/3 ,
and the asserted bound follows.
§.§ Proof of Proposition <ref>
Cones relations for adjoints are identical to those of the original, and so are omitted: hereafter we work exclusively with the original (unadjointed) derivatives. We will continue to use the notation in Proposition <ref>. Additionally,
in each of the assertions below, if dF̃^l is applied to the cone 𝒞, then min refers to the minimum
taken over all unit vectors u ∈𝒞.
The proof consists of enumerating all cases of V̅∈𝒱.
We group the estimates as follows:
(a) For V̅ = I: dF̃ (𝒞_1) ⊂𝒞_1 and min ‖dF̃ u‖≥1/(2K_1)√(c)√(L)≫ L^1/4.
(b) For V̅ = B: dF̃ (𝒞_n) ⊂𝒞_w and min ‖dF̃ u‖≥ 1/2.
The next group consists of two-letter words the treatment of which will rely on
condition (a)(ii) in the definition of G_N.
(c) For V̅ = BB: dF̃^2(𝒞_n) ⊂𝒞_w and min ‖dF̃^2 u‖≥ L^β/3.
(d) For V̅ = BI: dF̃^2(𝒞_1) ⊂𝒞_w and min ‖dF̃^2 u‖≥min{1/(2K_1)√(c)√(L) , L^β/3}≥ L^β/5.
(e) For V̅ = IB: dF̃^2 (𝒞_n) ⊂𝒞_1 and min ‖dF̃^2 u‖≥ L^β/3.
This leaves us with the following most problematic case:
(f) For V̅ = BIB: dF̃^3 (𝒞_n) ⊂𝒞_w and min ‖dF̃^3 u‖≥ L^β/5 .
We go over the following checklist:
* V̅ = B or BB was covered by (b) and (c); total growth on 𝒞_n is ≥1/2.
For k ≥ 1,
* V̅ = I^k follows from (a); total growth on 𝒞_n is ≥ L^k β/4≫ L^(β/5) k.
* V̅ = I^k B = I^k-1(IB) follows from concatenating (e) and (a); total growth on 𝒞_n is
≥ L^(k-1) β/4· L^β/3≫ L^(β/5) k.
* V̅ = B I^k = (BI)I^k-1 follows from concatenating (a) and (d); total growth on 𝒞_n is
≥ L^β/5· L^(k-1) β/4≫ L^(β/5) k.
Lastly,
* V̅ = BIB follows from (f); total growth on 𝒞_n is ≥ L^β/5,
and
* for k ≥ 2, V̅ = BI^kB = (BI) I^k-2 (IB) follows by concatenating (e), followed
by (a) then
(d); total growth on 𝒞_n is ≥ L^β/5· L^(k-2) β/4· L^β/3≫ L^(β/5) k.
This completes the proof.
Lemma <ref> is easy and left to the reader; it is a straightforward application of the formulae
tanθ_1 = 1/(f'(x̃_1) - tanθ_0) , ‖dF̃ u_θ_0‖ = √((f'(x̃_1) cosθ_0 - sinθ_0)^2 + cos^2 θ_0) ,
where θ_1 ∈ [0, π) denotes the angle of the image vector dF̃ u_θ_0.
Below we let K be such that |f'| ≤ KL.
We write u = u_θ_0 and θ_1, θ_2 ∈ [0,π) for the angles of the images dF̃ u, dF̃^2 u respectively. Throughout, we use the following `two step' formulae:
tanθ_2 = (f'(x̃_1) - tanθ_0)/(f'(x̃_1) f'(x̃_2) - f'(x̃_2) tanθ_0 - 1) ,
‖dF̃^2 u_θ_0‖≥ |( f'(x̃_1) f'(x̃_2) - 1) cosθ_0| - | f'(x̃_2) sinθ_0| .
The estimate |f'(x̃_1) f'(x̃_2)| ≥ L^β/2 (condition (a)(ii) in the definition
of G_N) will be used repeatedly throughout.
We first handle the vector growth estimates. For (c) and (e), as u = u_θ_0∈𝒞_n, the right side of (<ref>) is
≥1/2 L^β/2 - 2 K L^β/4≫ L^β/3.
For (d) we break into the cases
(d.i) |f'(x̃_2)| ≥ L^β/4 and
(d.ii) |f'(x̃_2)| < L^β/4.
In case (d.i), by (a) we have that u_θ_1∈𝒞_1 and ‖d F_(x̃_1, ỹ_1) u_θ_0‖≥1/(2K_1)√(c)√(L). Thus |tanθ_2| ≤ 2 L^- β/4≪ 1 and ‖dF_(x̃_2, ỹ_2) u_θ_1‖≥1/2 L^β/4≫ 1, completing the proof. In case (d.ii), the right side of (<ref>) is
≥1/√(2) (L^β/2 - 1) - 1/√(2) L^β/4≫ L^β/3 .
We now check the cones relations for (c) – (e). For (c),
|tanθ_2| ≤ (|f'(x̃_1)| + |tanθ_0|)/(|f'(x̃_1) f'(x̃_2)| - |f'(x̃_2) tanθ_0| - 1) ≤ (K L + L^-1 + β/4)/(L^β/2 - K L^β/4 - 1) ≤ 2 K L^1 - β/2≪ L^1 - β/4 ,
so that u_θ_2∈𝒞_w as advertised. The case (d.i) has already been
treated. For (d.ii), the same bound as in (c) gives
|tanθ_2| ≤ (K L + 1)/(L^β/2 - L^β/4 - 1) ≤ 2 K L^1 - β/2≪ L^1 - β/4 ,
hence u_θ_2∈𝒞_w.
For (e) we again distinguish the cases
(e.i) |f'(x̃_1)| ≥ L^β/4 and
(e.ii) |f'(x̃_1)| < L^β/4.
In case (e.i), one easily checks that dF_(x̃_1, ỹ_1) (𝒞_n) ⊂𝒞_1 and then dF_(x̃_2, ỹ_2) (𝒞_1) ⊂𝒞_1 by (a). In case (e.ii) we compute directly that
|tanθ_2| ≤ (|f'(x̃_1)| + |tanθ_0|)/(|f'(x̃_1) f'(x̃_2)| - |f'(x̃_2) tanθ_0| - 1) ≤ (L^β/4 + L^-1 + β/4)/(L^β/2 - K L^β/4 - 1) ≪ 1 ,
hence u_θ_2∈𝒞_1.
We let u = u_θ_0∈𝒞_n (i.e. |tanθ_0| ≤ L^-1 + β/4) and write θ_1, θ_2, θ_3 ∈ [0,π) for the angles associated to the subsequent images of u. We break into two cases:
(I) |f'(x̃_3)| ≥ |f'(x̃_1)| and
(II) |f'(x̃_3)| < |f'(x̃_1)|.
In case (I), we compute
|tanθ_2| ≤ (|f'(x̃_1)| + |tanθ_0|)/(|f'(x̃_1) f'(x̃_2)| - |f'(x̃_2) tanθ_0| - 1) ≤ 2 |f'(x̃_1)|/(L^β/2 - 2 K L^β/4 - 1) ≤ 4 |f'(x̃_1)| L^- β/2 ,
having used that |f'(x̃_1)| ≥ L^-1 + β and |tanθ_0| ≤ L^-1 + β/4 in the second inequality. Now,
|tanθ_3| ≤ 1/(|f'(x̃_3)| - |tanθ_2|) ≤ 1/(|f'(x̃_1)| - 4 |f'(x̃_1)| L^- β/2) ≤2/|f'(x̃_1)|≤ 2 L^1 - β≪ L^1 - β/4 .
In case (II), we use
|tanθ_1| ≤ 1/(|f'(x̃_1)| - |tanθ_0|) ≤ 1/(|f'(x̃_3)| - L^-1 + β/4) ≤2/|f'(x̃_3)| ,
again using that |f'(x̃_3)| ≥ L^-1 + β, and then
|tanθ_3| ≤ (|f'(x̃_2)| + |tanθ_1|)/(|f'(x̃_2) f'(x̃_3)| - |f'(x̃_3) tanθ_1| - 1)
≤ (K L + 2 |f'(x̃_3)|^-1)/(L^β/2 - 3) ≤ (K L + 2 L^1 - β)/(L^β/2 - 3) ≤ 2 K L^1 - β/2≪ L^1 - β/4 .
For vector growth, observe that from (e) we have
dF̃^2(𝒞_n) ⊂𝒞_1 and min ‖dF̃^2 u‖≥ L^β/3.
So, if |f'(x̃_3)| ≥ L^β/12 then
‖dF_(x̃_3, ỹ_3) u_θ_2‖≥1/√(2) (L^β/12 - 1) ≫ 1 .
Conversely, if |f'(x̃_3)| < L^β/12 then we can use the crude estimate ‖(dF_(x̃, ỹ))^-1‖≤√(|f'(x̃)|^2 + 1) applied to (x̃, ỹ) = (x̃_3, ỹ_3), yielding
‖(dF_(x̃_3, ỹ_3))^-1‖≤√(L^2β/12 + 1)≤ 2 L^β/12 ,
hence ‖dF̃^3 u‖≥1/2 L^β/3 - β/12 = 1/2 L^β/4≫ L^β/5, completing the proof.
§ THE STANDARD MAP
Let ψ and f_0 = f_ψ, L, a = Lψ + a be as defined in Sect. 2.1.
There exists ε > 0 and K_0 > 1, depending only on ψ, for which the following holds: for all L > 0 and f ∈𝒰_ε, L(f_0),
(a) max{‖f'‖_C^0, ‖f”‖_C^0, ‖f”'‖_C^0}≤ K_0 L,
(b) The cardinalities of C'_f and C”_f are equal to those of f_0 (equivalently
those of ψ),
(c) min_x̂∈ C_f' |f”(x̂)| , min_ẑ∈ C_f” |f”'(ẑ)| ≥ K_0^-1 L , and
(d) min_x̂≠x̂' ∈ C'_f d(x̂,x̂') , min_ẑ≠ẑ' ∈ C”_f
d(ẑ, ẑ') ≥ K_0^-1 .
The proof is straightforward and is left to the reader.
We claim – and leave it to the reader to check – that the proofs in Sections 3–5 (with C_f', C_f” replacing C_ψ', C_ψ”)
use only the form of the maps F= F_f as defined in Sect. 2.1,
and the four properties above. Thus they prove Theorem <ref> as well.
Under the (linear) coordinate change x = θ/2π, y = (θ - I)/2π, the standard map conjugates to the map
(x,y) ↦ (L sin (2 π x) + 2 x - y, x)
defined on 𝕋^2, with both coordinates taken modulo 1. This map is of the form F_f, with f(x) = f_0(x) + 2x and f_0(x) := L sin (2 π x); here a = 0 and ψ(x) = sin(2 π x).
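This conjugacy is easy to verify numerically; the following sketch (with arbitrary sample values) checks one step:

```python
import numpy as np

def standard_map(I, theta, L):
    I1 = (I + 2 * np.pi * L * np.sin(theta)) % (2 * np.pi)
    return I1, (theta + I1) % (2 * np.pi)

def F_f(x, y, L):                      # f(x) = L sin(2 pi x) + 2 x, a = 0
    return (L * np.sin(2 * np.pi * x) + 2 * x - y) % 1.0, x % 1.0

L, I, theta = 3.7, 2.1, 0.8
x, y = theta / (2 * np.pi), ((theta - I) / (2 * np.pi)) % 1.0
I1, theta1 = standard_map(I, theta, L)
print(np.allclose(F_f(x, y, L),
                  [theta1 / (2 * np.pi), ((theta1 - I1) / (2 * np.pi)) % 1.0]))
```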
Let ε>0 be given by Theorem <ref> for this choice of ψ.
Then f clearly belongs in 𝒰_ε, L(f_0) for large
enough L.
siam |
http://arxiv.org/abs/1701.07599v2 | 20170126073518 | Topological and Algebraic Characterizations of Gallai-Simplicial Complexes | [
"Imran Ahmed",
"Shahid Muhmood"
] | math.AT | [
"math.AT",
"math.AC",
"05E25, 55U10, 13P10 (Primary), 06A11, 13H10 (Secondary)"
] |
COMSATS Institute of Information Technology, Lahore, Pakistan
[email protected]
COMSATS Institute of Information Technology, Lahore, Pakistan
[email protected]
Topological and Algebraic Characterizations of Gallai-Simplicial
Complexes
Imran Ahmed, Shahid Muhmood
December 30, 2023
==========================================================================
We first recall the Gallai-simplicial
complex Δ_Γ(G) associated to the Gallai graph Γ(G)
of a planar graph G, see <cit.>. The Euler characteristic is
a very useful topological and homotopic invariant to classify
surfaces. In Theorems <ref> and <ref>, we compute Euler
characteristics of Gallai-simplicial complexes associated to
triangular ladder and prism graphs, respectively.
Let G be a finite simple graph on n vertices of the form n=3l+2 or 3l+3. In Theorem
<ref>, we prove that G will be f-Gallai graph for the following types of constructions of G.
Type 1. When n=3l+2. G=𝕊_4l is a graph
consisting of two copies of star graphs S_2l and S'_2l with
l≥ 2 having l common vertices.
Type 2. When n=3l+3.
G=𝕊_4l+1 is a graph consisting of two star graphs
S_2l and S_2l+1 with l≥ 2 having l common vertices.
Key words: Euler characteristic, simplicial complex and f-ideals.
2010 Mathematics Subject Classification: Primary 05E25, 55U10, 13P10 Secondary 06A11, 13H10.
§ INTRODUCTION
Let X be a finite CW complex of dimension N. The Euler
characteristic is a function χ which associates to each X an
integer χ(X). More explicitly, the Euler characteristic of X
is defined as the alternating sum
χ(X)=∑_k=0^N(-1)^kβ_k(X)
with β_k(X)=rank(H_k(X)) the k-th Betti number of X.
The Euler characteristic is a very useful topological and homotopic
invariant to classify surfaces. The Euler characteristic is uniquely
determined by excision χ(X)=χ(C)+χ(X\ C), for
every closed subset C⊂ X. The excision property has a dual
form χ(X)=χ(U)+χ(X\ U), for every open subset
U⊂ X, see <cit.> and <cit.> for more details.
We consider a planar graph G. The Gallai graph Γ(G) of G
is a graph having the edges of G as its vertices, that is,
V(Γ(G))=E(G), and two distinct edges of G are adjacent in
Γ(G) if they are adjacent in G but do not span a triangle. The
buildup of the 2-dimensional Gallai-simplicial complex
Δ_Γ(G) from a planar graph G is an abstract idea
similar to building an origami shape from a plane sheet of paper by
defining a crease pattern, see <cit.>.
Let S=k[x_1,…,x_n] be a polynomial ring over an infinite
field k. There is a natural bijection between a square free
monomial ideal and a simplicial complex written as
Δ↔ I_𝒩(Δ), where
I_𝒩(Δ) is the Stanley-Reisner ideal or non-face
ideal of Δ, see for instance <cit.>. In <cit.>, Faridi
introduced another correspondence Δ↔
I_ℱ(Δ), where I_ℱ(Δ) is the
facet ideal of Δ. She discussed its connections with the
theory of Stanley-Reisner rings.
In <cit.> and <cit.>, the authors investigated the
correspondence δ_ℱ(I)↔
I↔δ_𝒩(I), where
δ_ℱ(I) and δ_𝒩(I) are facet
and non-face simplicial complexes associated to the square free
monomial ideal I (respectively). A square free monomial ideal I
in S is said to be an f-ideal if and only if both
δ_ℱ(I) and δ_𝒩(I) have the
same f-vector. The concepts of f- ideals is important in the
sense that it discovers new connections between both the theories.
The complete characterization of f-ideals in the polynomial ring
S over a field k can be found in <cit.>. A simple finite
graph G is said to be the f-graph if its edge ideal I(G) is an
f-ideal of degree 2, see <cit.>.
In Theorems <ref> and <ref>, we compute Euler characteristics
of Gallai-simplicial complexes associated to triangular ladder and
prism graphs, respectively.
Let G be a finite simple graph on n vertices of the form n=3l+2 or 3l+3. In Theorem
<ref>, we prove that G will be f-Gallai graph for the following types of constructions of G.
Type 1. When n=3l+2. G=𝕊_4l is a graph
consisting of two copies of star graphs S_2l and S'_2l with
l≥ 2 having l common vertices.
Type 2. When n=3l+3.
G=𝕊_4l+1 is a graph consisting of two star graphs
S_2l and S_2l+1 with l≥ 2 having l common vertices.
§ PRELIMINARIES
A simplicial complex Δ on [n]={1,…, n}
is a collection of subsets of [n] with the property that {i}∈Δ for all i, and if F∈Δ then every subset of F
will belong to Δ (including empty set). The elements of
Δ are called faces of Δ and the dimension of a face
F∈Δ is defined as |F|-1, where |F| is the number of
vertices of F. The faces of dimensions 0 and 1 are called
vertices and edges, respectively, and dim ∅= -1.
The maximal faces of Δ under inclusion are called
facets. The dimension of Δ is denoted by Δ and is
defined as:
Δ=max{ F | F∈Δ}.
A simplicial complex is
said to be pure if it has all the facets of the same dimension. If
{F_1,… ,F_q} is the set of all the facets of Δ, then
Δ=<F_1,… ,F_q>.
We denote by Δ_n the closed n-dimensional simplex. Every
simplex Δ_n is homotopic to a point and thus
χ(Δ_n)=1, ∀ n≥ 0.
Note that ∂Δ_n is homeomorphic to the (n-1)-sphere
S^n-1. Since S^0 is a union of two points, we have
χ(S^0)=2. In general, the n-dimensional sphere is a union of
two closed hemispheres intersecting along the Equator which is a
(n-1) sphere. Therefore,
χ(S^n)=2χ(Δ_n)-χ(S^n-1)=2-χ(S^n-1).
We deduce inductively
2=χ(S^n)+χ(S^n-1)=…=χ(S^1)+χ(S^0)
so that χ(S^n)=1+(-1)^n. Now, note that the interior of
Δ_n is homeomorphic to ℝ^n so that
χ(ℝ^n)=χ(Δ_n)-χ(∂Δ_n)=1-χ(S^n-1)=(-1)^n.
The excision property implies the following useful formula. Suppose
∅⊂Δ^(0)⊂…⊂Δ^(N)=Δ
is an increasing filtration of Δ by closed subsets. Then,
χ(Δ)=χ(Δ^(0))+χ(Δ^(1)\Δ^(0))
+…+χ(Δ^(N)\Δ^(N-1)).
We denote by
Δ^(k) the union of the simplices of dimension ≤ k.
Then, Δ^(k)\Δ^(k-1) is the union of
interiors of the k-dimensional simplices. We denote by
f_k(Δ) the number of such simplices. Each of them is
homeomorphic to ℝ^k and thus its Euler characteristic is
equal to (-1)^k. Consequently, the Euler characteristic of
Δ is given by
χ(Δ)=∑_k=0^N(-1)^kf_k(Δ),
see <cit.> and <cit.>.
Let Δ be a simplicial complex of dimension N. We define its
f-vector to be the (N+1)-tuple f=(f_0,…, f_N), where f_i is
the number of i-dimensional faces of Δ.
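Both the f-vector and χ(Δ) are easy to evaluate mechanically from a facet list; the following brute-force sketch (ours, purely for illustration) enumerates all faces of a complex given by its facets:

```python
from itertools import combinations

def euler_characteristic(facets):
    """chi(Delta) = sum_k (-1)^k f_k, enumerating all faces from the facets."""
    faces = set()
    for F in facets:
        F = tuple(sorted(F))
        for k in range(1, len(F) + 1):
            faces.update(combinations(F, k))   # every nonempty subset is a face
    return sum((-1) ** (len(f) - 1) for f in faces)

# The boundary of the 3-simplex triangulates S^2, so chi = 1 + (-1)^2 = 2:
print(euler_characteristic([(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]))  # 2
```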
The following definitions serve as a bridge between the
combinatorial and algebraic properties of the simplicial complexes
over the finite set of vertices [n].
Let Δ be a simplicial complex over the vertex set
{v_1,…,v_n} and S=k[x_1,…,x_n] be the polynomial
ring on n variables. We define the facet ideal of Δ by
I_ℱ(Δ), which is an ideal of S generated by
square free monomials x_i_1… x_i_s where
{v_i_1,…,v_i_s} is a facet of Δ. We define the
non-face ideal or the Stanley-Reisner ideal of Δ by
I_𝒩(Δ), which is an ideal of S generated by
square free monomials x_i_1… x_i_s where
{v_i_1,…,v_i_s} is a non-face of Δ.
Let I=(M_1,…,M_q) be a square free monomial ideal in the
polynomial ring S=k[x_1,…,x_n], where {M_1,…,M_q} is
a minimal generating set of I. We define a simplicial complex
δ_ℱ(I) over a set of vertices
v_1,…,v_n with facets F_1,…,F_q, where for each
i, F_i={v_j | x_j|M_i, 1≤ j≤ n}.
δ_ℱ(I) is said to be the facet complex of I. We
define a simplicial complex δ_𝒩(I) over a set of
vertices v_1,…,v_n, where {v_i_1,…,v_i_s} a
face of δ_𝒩(I) if and only if the product
x_i_1… x_i_s does not belong to I. We call
δ_𝒩(I) the non-face complex or the
Stanley-Reisner complex of I.
To proceed further, we define the Gallai-graph Γ(G), which is
a nice combinatorial buildup, see <cit.> and <cit.>.
Let G be a graph. Then Γ(G) is said to be the Gallai graph of G if
the following conditions hold:
1. Each edge of G represents a vertex of Γ(G).
2. If two edges are adjacent in G that do not span a triangle in
G then their corresponding vertices will be adjacent in
Γ(G).
The graph G and its
Gallai graph Γ(G) are given in figures (i) and (ii),
respectively.
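The construction of Γ(G) is mechanical; the following sketch (illustrative, not from the source) builds it from an edge list:

```python
from itertools import combinations

def gallai_graph(edges):
    """Vertices of Gamma(G) are the edges of G; two adjacent edges of G are
    joined in Gamma(G) unless the pair spans a triangle of G."""
    E = [frozenset(e) for e in edges]
    Eset = set(E)
    gallai_edges = []
    for e1, e2 in combinations(E, 2):
        common = e1 & e2
        if len(common) == 1 and ((e1 | e2) - common) not in Eset:
            gallai_edges.append((tuple(sorted(e1)), tuple(sorted(e2))))
    return gallai_edges

# A triangle yields an edgeless Gallai graph, while the path P_3 does not:
print(len(gallai_graph([(1, 2), (2, 3), (1, 3)])),
      len(gallai_graph([(1, 2), (2, 3)])))  # 0 1
```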
To define Gallai-simplicial complex Δ_Γ(G) of a planar
graph G, we introduce first a few notions, see <cit.>.
<cit.> Let G be a finite simple graph with vertex set
V(G)=[n] and edge set E(G)={e_i,j={i,j} | i,j∈ V(G)}.
We define the set of Gallai-indices Ω(G) of the graph G as
the collection of subsets of V(G) such that if e_i,j and
e_j,k are adjacent in Γ(G), then F_i,j,k={i,j,k}∈Ω(G) or if e_i,j is an isolated vertex in Γ(G) then
F_i,j={i,j}∈Ω(G).
<cit.> A Gallai-simplicial complex Δ_Γ(G) of G is
a simplicial complex defined over V(G) such that
Δ_Γ(G)=<F | F∈Ω(G)>,
where Ω(G) is
the set of Gallai-indices of G.
For the graph G shown in figure below, its Gallai-simplicial
complex Δ_Γ(G) is given by
Δ_Γ(G)=<{2,4}, {1,2,3},{1,3,4},{1,2,5},{1,4,5}>.
§ CHARACTERIZATIONS OF GALLAI-SIMPLICIAL COMPLEXES
The ladder graph L_n is a planar undirected graph with 2n
vertices and 3n-2 edges. The ladder graph L_n is the cartesian
product of two path graphs P_n and P_2, that is L_n=P_n×
P_2 and looks like a ladder with n rungs. The path graph P_n is
a graph whose vertices can be listed in an order v_1,…,v_n
such that {v_i,v_i+1} is an edge for 1≤ i≤ n-1. If we
add a cross edge between every two consecutive rungs of the ladder
then the resulting graph is said to be a triangular ladder graph
L^∗_n with 2n vertices and 4n-3 edges.
Let L^∗_n be the triangular ladder graph on 2n vertices with
fixing the label of the edge-set E(L^∗_n) as follows;
E(L^∗_n)={e_1,2, e_2,3,…,e_2n-1,2n,e_1,2n-1,e_1,2n,…,e_n-1,n+1,e_n-1,n+2}.
Then, we have
Ω(L^∗_n)={F_1,2,3,…,F_n-2,n-1,n,F_n,n+1,n+2,…,F_2n-2,2n-1,2n,
F_1,2,2n,F_2,3,2n-1,…,F_n-1,n,n+2,F_1,2,2n-2,…,F_n-2,n-1,n+1,
F_1,2n-2,2n-1,…,F_n-2,n+1,n+2,F_2,2n-1,2n,…,F_n-1,n+2,n+3}.
By definition, it is clear that F_i,i+1,i+2∈Ω(L^∗_n)
because i, i+1, i+2 are consecutive vertices of the 2n-cycle and
edges e_i,i+1 and e_i+1,i+2 do not span a triangle except
F_n-1,n,n+1 and F_2n-1,2n,1 as the edge sets
{e_n-1,n,e_n,n+1} and {e_2n-1,2n,e_2n,1} span
triangles in the triangular ladder graph L^∗_n. Moreover,
F_i,i+1,j∈Ω(L^∗_n) for indices of types 1≤ i≤
n-1; j=2n+1-i and 1≤ i≤ n-2; j=2n-1-i. Also,
F_i,j,j+1∈Ω(L^∗_n) for indices of types 1≤ i≤
n-2; j=2n-1-i and 2≤ i≤ n-1; j=2n+1-i. Hence the
result.
Let Δ_Γ(L^∗_n) be the Gallai simplicial complex of
triangular ladder graph L^∗_n with 2n vertices for n≥ 3.
Then, the Euler characteristic of Δ_Γ(L^∗_n) is
χ(Δ_Γ(L^∗_n))=∑_k=0^N(-1)^kf_k=0.
Since the triangular ladder graph has 2n vertices, we
have f_0=2n.
Moreover, for {l,j,k}∈Δ_Γ(L^∗_n) with 1≤
l≤ 2n-2 and j,k∈ [2n], we have
* |{1,j,k}|=4 with {j,k}∈{{2,3},{2,2n-2},{2,2n},{2n-2,2n-1}};
* |{l,j,k}|=5(n-3) for 2≤ l≤ n-2 and
{j,k}∈{{l+1,l+2},{l+1,2n-1-l},{l+1,2n+1-l},{2n-1-l,2n-l},{2n+1-l,2n+2-l}};
* |{n-1,j,k}|=2 with {j,k}∈{{n,n+2},{n+2,n+3}};
* |{l,l+1,l+2}|=n-1 for n≤ l≤ 2n-2.
Adding the results from (1) to (4), we get
|{l,j,k}|=4+5(n-3)+2+(n-1)=6n-10
with 1≤ l≤ 2n-2 and j,k∈ [2n]. Therefore, f_2=6n-10.
Now, for {j,k}∈Δ_Γ(L^∗_n) with 1≤ j≤
2n-1 and k∈ [2n], we have
* |{1,k}|=5, where k∈{2,3,2n-2,2n-1,2n};
* |{j,k}|=6(n-3) with 2≤ j≤ n-2 and k∈{j+1,j+2,2n-1-j,2n-j,2n+1-j,2n+2-j};
* |{n-1,k}|=4, where k∈{n,n+1,n+2,n+3};
* |{j,k}|=2(n-1) with n≤ j≤ 2n-2 and k∈{j+1,j+2};
* |{2n-1,2n}|=1.
Adding the results from (5) to (9), we obtain
|{j,k}|=5+6(n-3)+4+2(n-1)+1=8n-10,
where 1≤ j≤ 2n-1 and k∈ [2n]. Therefore, f_1=8n-10.
Thus, we compute
χ(Δ_Γ(L^∗_n))=f_0-f_1+f_2=2n-(8n-10)+(6n-10)=0,
which is the desired result.
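The theorem can be checked by brute force for small n. In the sketch below, the edge list of L*_n is our reconstruction of the labeling in this section (the 2n-1 path edges e_i,i+1 together with the chord families e_i,2n-i and e_i,2n+1-i), so it should be treated as an assumption; the same helper applied to the prism edge set of the next section should report χ = 3(n-1).

```python
from itertools import combinations

def gallai_euler(edges):
    """Euler characteristic of the Gallai-simplicial complex Delta_Gamma(G).

    Omega(G) collects {i,j,k} for each Gallai-adjacent edge pair and {i,j}
    for each isolated vertex of Gamma(G); faces are all nonempty subsets."""
    E = [frozenset(e) for e in edges]
    Eset = set(E)
    omega, nonisolated = [], set()
    for e1, e2 in combinations(E, 2):
        common = e1 & e2
        if len(common) == 1 and ((e1 | e2) - common) not in Eset:
            omega.append(e1 | e2)          # Gallai-adjacent: no spanned triangle
            nonisolated.update({e1, e2})
    omega += [e for e in E if e not in nonisolated]
    faces = set()
    for F in omega:
        F = tuple(sorted(F))
        for k in range(1, len(F) + 1):
            faces.update(combinations(F, k))
    return sum((-1) ** (len(f) - 1) for f in faces)

def triangular_ladder(n):
    """Our reconstruction of E(L*_n): path edges plus two chord families."""
    path = [(i, i + 1) for i in range(1, 2 * n)]
    chords = [(i, 2 * n - i) for i in range(1, n)] + \
             [(i, 2 * n + 1 - i) for i in range(1, n)]
    return path + chords

print([gallai_euler(triangular_ladder(n)) for n in range(3, 7)])
# expected: [0, 0, 0, 0] by the theorem above
```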
The Prism graph Y_3,n is a simple graph defined by the cartesian
product Y_3,n=C_3× P_n with 3n vertices and 3(2n-1)
edges. We label the edge-set of Y_3,n in the following way;
E(Y_3,n)={e_1,2,
e_2,3,e_3,1,e_4,5,e_5,6,e_6,4,…,e_3i+1,3i+2,e_3i+2,3i+3,e_3i+3,3i+1,…,
e_3n-2,3n-1,e_3n-1,3n,e_3n,3n-2,e_1,4,e_4,7,…,e_3n-5,3n-2,e_2,5,e_5,8,…,e_3n-4,3n-1,e_3,6,
e_6,9,…,e_3n-3,3n}, where
e_3i+1,3i+2,e_3i+2,3i+3,e_3i+3,3i+1 for 0≤ i≤ n-1
are the edges of (i+1)-th C_3-cycle.
Let Y_3,n be a prism graph on the vertex set [3n] and edge set E(Y_3,n), with labeling of edges
given above. Then, we have
Ω(Y_3,n)={F_1,2,4,F_1,2,5,F_2,3,5,F_2,3,6,
F_4,5,1,F_4,5,2,F_4,5,7,F_4,5,8,F_5,6,2,
F_5,6,3,F_5,6,8,
F_5,6,9,…,
F_3n-5,3n-4,3n-8,F_3n-5,3n-4,3n-7,F_3n-5,3n-4,3n-2,
F_3n-5,3n-4,3n-1,
F_3n-4,3n-3,3n-7,F_3n-4,3n-3,3n-6,F_3n-4,3n-3,3n-1,F_3n-4,3n-3,3n,
F_3n-2,3n-1,3n-5,
F_3n-2,3n-1,3n-4,F_3n-1,3n,3n-4,F_3n-1,3n,3n-3,
F_3,1,6,F_3,1,4,F_6,4,3, F_6,4,1,F_6,4,9,F_6,4,7,
…,F_3n-3,3n-5,3n-6,F_3n-3,3n-5,3n-8,F_3n-3,3n-5,3n,
F_3n-3,3n-5,3n-2, F_3n,3n-2,3n-3,
F_3n,3n-2,3n-5,
F_1,4,7,…,F_3n-8,3n-5,3n-2,F_2,5,8,
…,F_3n-7,3n-4,3n-1, F_3,6,9,
…, F_3n-6,3n-3,3n}.
By definition, one can easily see that F_3i+1,3i+2,3i+3 does not
belong to Ω(Y_3,n) because 3i+1,3i+2,3i+3 with 0≤
i≤ n-1 are vertices of (i+1)-th C_3-cycle. Therefore, from
construction of all possible triangles in
prism graph Y_3,n, we have
(i) F_j,j+1,j-3,F_j,j+1,j-2∈Ω(Y_3,n) for 4≤
j≤
3n-1 with j not a multiple of 3;
(ii) F_j,j+1,j+3,F_j,j+1,j+4∈Ω(Y_3,n) for 1≤
j≤
3n-4 with j not a multiple of 3;
(iii) F_3j,3j-2,3j-3,F_3j,3j-2,3j-5∈Ω(Y_3,n) for
2≤
j≤ n;
(iv) F_3j,3j-2,3j+3,F_3j,3j-2,3j+1∈Ω(Y_3,n) for
1≤
j≤ n-1;
(v) F_j,j+3,j+6∈Ω(Y_3,n) for 1≤ j≤
3n-6.
Hence the proof.
Let Δ_Γ(Y_3,n) be the Gallai-simplicial complex of
prism graph Y_3,n with 3n vertices for n≥ 3. Then, the
Euler characteristic of Δ_Γ(Y_3,n) is
χ(Δ_Γ(Y_3,n))=∑_k=0^N(-1)^kf_k=3(n-1).
Since the prism graph has 3n vertices, we have
f_0=3n.
Now, for {3l+i,j,k}∈Δ_Γ(Y_3,n) with 0≤ l≤
n-2 and j,k∈[3n] such that i=1,2,3, we have
* |{3l+1,j,k}|=7(n-2) with 0≤ l≤ n-3 and
{j,k}∈{{3l+2,3l+4},{3l+2,3l+5},{3l+3,3l+4},{3l+3,3l+6},{3l+4,3l+5},{3l+4,3l+6},{3l+4,3l+7}};
* |{3l+2,j,k}|=5(n-2) for 0≤ l≤ n-3 and
{j,k}∈{{3l+3,3l+5},{3l+3,3l+6},{3l+4,3l+5},{3l+5,3l+6},{3l+5,3l+8}};
* |{3l+3,j,k}|=3(n-2) for 0≤ l≤ n-3 and {j,k}∈{{3l+4,3l+6},{3l+5,3l+6},{3l+6,3l+9}};
* |{3n-5,j,k}|=6, where {j,k}∈{{3n-4,3n-2},{3n-4,3n-1},{3n-3,3n-2},{3n-3,3n},{3n-2,3n-1},{3n-2,3n}};
* |{3n-4,j,k}|=4, where {j,k}∈{{3n-3,3n-1},{3n-3,3n},{3n-2,3n-1},{3n-1,3n}};
* |{3n-3,j,k}|=2, where {j,k}∈{{3n-2,3n},{3n-1,3n}}.
Adding the results from (1) to (6), we get
f_2=7(n-2)+5(n-2)+3(n-2)+6+4+2=15n-18.
Next, for {3j+i,k}∈Δ_Γ(Y_3,n) with 0≤ j≤
n-2 and k∈[3n] such that i=1,2,3, we obtain
* |{3j+1,k}|=6(n-2) with 0≤ j≤ n-3 and
k∈{3j+2,3j+3,3j+4,3j+5,3j+6,3j+7};
* |{3j+2,k}|=5(n-2) with 0≤ j≤ n-3 and k∈{3j+3,3j+4,3j+5,3j+6,3j+8};
* |{3j+3,k}|=4(n-2) with 0≤ j≤ n-3 and k∈{3j+4,3j+5,3j+6,3j+9};
* |{3n-5,k}|=5, where k∈{3n-4,3n-3,3n-2,3n-1,3n};
* |{3n-4,k}|=4, where k∈{3n-3,3n-2,3n-1,3n};
* |{3n-3,k}|=3, where k∈{3n-2,3n-1,3n}.
Moreover, we have
* |{3n-2,k}|=2, where k∈{3n-1,3n};
* |{3n-1,3n}|=1.
Adding the results from (7) to (14), we get
f_1=6(n-2)+5(n-2)+4(n-2)+5+4+3+2+1=15n-15.
Hence, we compute
χ(Δ_Γ(Y_3,n))=f_0-f_1+f_2=3n-(15n-15)+(15n-18)=3(n-1),
which is the desired result.
§ CONSTRUCTION OF F-GALLAI GRAPHS
We introduce first the f-Gallai graph.
A finite simple graph G is said to be f-Gallai graph, if
the edge ideal I(Γ(G)) of the Gallai graph Γ(G) is an
f-ideal.
The following theorem provided us a construction of f-graphs.
<cit.>.
Let G be a simple graph on n vertices. Then for the
following
constructions, G will be f-graph:
Case(i) When n= 4l. G consists of two components G_1 and G_2
joined with l-edges, where both G_1 and G_2 are the complete
graphs on 2l vertices.
Case(ii) When n= 4l+ 1. G consists of two components G_1 and
G_2 joined with l-edges, where G_1 is the complete graph on
2l vertices and G_2 is the complete graph on 2l+ 1 vertices.
The star graph S_n is a complete bipartite graph K_1,n on
n+1 vertices and n edges formed by connecting a single vertex
(central vertex) to all other vertices.
We establish now the following result.
Let G be a finite simple graph on n vertices of the form n=3l+2 or 3l+3. Then for the following constructions, G will
be f-Gallai graph.
Type 1. When n=3l+2. G=𝕊_4l is a graph
consisting of two copies of star graphs S_2l and S'_2l with
l≥ 2 having l common vertices.
Type 2. When n=3l+3.
G=𝕊_4l+1 is a graph consisting of two star graphs
S_2l and S_2l+1 with l≥ 2 having l common vertices.
Type 1. When n=3l+2, the number of edges in
𝕊_4l will be 4l, as shown in figure 𝕊_12
with l=3. Let {e_1,…,e_2l} and {e'_1,…,e'_2l}
be the edge sets of the star graphs S_2l and S'_2l,
respectively such that e_i and e'_i have a common vertex for
each i=1,…,l. While finding Gallai graph
Γ(𝕊_4l) of the graph 𝕊_4l, we observe
that the edges e_1,…,e_2l of the star graph S_2l in
𝕊_4l will induce a complete graph Γ(S_2l) on
2l vertices in the Gallai graph Γ(𝕊_4l), as
shown in figure Γ(𝕊_12) with l=3. Similarly, the
edges e'_1,…,e'_2l of the star graph S'_2l will induce
another complete graph Γ(S'_2l) on 2l vertices in
Γ(𝕊_4l). As, e_i and e'_i are the adjacent
edges in 𝕊_4l for each i=1,…,l. Therefore, e_i
and e'_i will be incident vertices in Γ(𝕊_4l)
for every i=1,…,l. Thus, Gallai graph
Γ(𝕊_4l) having 4l vertices consists of two
components Γ(S_2l) and Γ(S'_2l) joined with l-
edges, where both Γ(S_2l) and Γ(S'_2l) are
complete graphs on 2l vertices. Therefore, by Theorem <ref>,
Γ(𝕊_4l) is f-Gallai graph.
Type 2. When n=3l+3, the number of edges in
𝕊_4l+1 will be 4l+1, see figure 𝕊_13
(where l=3). Let {e_1,…,e_2l} and
{e'_1,…,e'_2l+1} be the edge sets of the star graphs
S_2l and S_2l+1 (respectively) such that e_i and e'_i
share a common vertex for each i=1,…,l. One can easily see
that the edges e_1,…,e_2l of S_2l in
𝕊_4l+1 will induce a complete graph Γ(S_2l) on
2l vertices in the Gallai graph Γ(𝕊_4l+1), see
figure Γ(𝕊_13) (where l=3). Similarly, the edges
e'_1,…,e'_2l+1 of S_2l+1 will induce another complete
graph Γ(S_2l+1) on 2l+1 vertices in
Γ(𝕊_4l+1). Since e_i and e'_i are the adjacent
edges in 𝕊_4l+1 for every i=1,…,l. Therefore,
e_i and e'_i will be incident vertices in the Gallai graph
Γ(𝕊_4l+1) for each i=1,…,l. Thus, the
Gallai graph Γ(𝕊_4l+1) having 4l+1 vertices
consists of two components Γ(S_2l) and Γ(S_2l+1)
joined with l-edges, where Γ(S_2l) and Γ(S_2l+1)
are complete graphs on 2l and 2l+1 vertices, respectively.
Hence, by Theorem <ref>, Γ(𝕊_4l+1) is
f-Gallai graph.
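The f-Gallai property of Type 1 can also be verified by brute force for small l: for a degree-2 edge ideal, the facet complex of I(Γ(G)) is Γ(G) itself (assuming no isolated vertices), and the non-face complex is the independence complex of Γ(G), so it suffices to compare f-vectors. The vertex labels (c1, c2, m_i, a_i, b_i) in this sketch are our own:

```python
from itertools import combinations

def gallai(edges):
    """Gallai graph on edge-indices: (number of vertices, edge list)."""
    E = [frozenset(e) for e in edges]
    Eset = set(E)
    GE = [(i, j) for i, j in combinations(range(len(E)), 2)
          if len(E[i] & E[j]) == 1 and ((E[i] | E[j]) - (E[i] & E[j])) not in Eset]
    return len(E), GE

def is_f_graph(nv, GE):
    """Compare the f-vector of the graph (facet complex of its edge ideal)
    with that of its independence complex (the non-face complex)."""
    adj = set(GE)
    def independent(S):
        return all(p not in adj for p in combinations(S, 2))
    f_ind = []
    for k in range(1, nv + 1):
        cnt = sum(1 for S in combinations(range(nv), k) if independent(S))
        if cnt == 0:
            break
        f_ind.append(cnt)
    return f_ind == [nv, len(GE)]

def S_4l(l):
    """The graph S_{4l}: two stars with centers c1, c2 sharing l leaves m_i."""
    return ([("c1", f"m{i}") for i in range(l)] +
            [("c2", f"m{i}") for i in range(l)] +
            [("c1", f"a{i}") for i in range(l)] +
            [("c2", f"b{i}") for i in range(l)])

print([is_f_graph(*gallai(S_4l(l))) for l in (2, 3)])  # expected: [True, True]
```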
One can easily see that the Gallai graph of the
line graph L_n is isomorphic to L_n-1 and that of cyclic graph
C_n is isomorphic to C_n. Therefore, both Γ(L_n) and
Γ(C_n) are f-Gallai graphs if and only if n=5, see
<cit.>.
1
AAAB G. Q. Abbasi, S. Ahmad, I. Anwar, W. A. Baig, f-ideals of degree
2, Algebra Colloquium, 19 (2012), no. 1, 921-926.
AKNS I. Anwar, Z. Kosar and S. Nazir, An Efficient Algebraic Criterion For Shellability, arXiv: 1705.09537.
AMBZ I. Anwar, H. Mahmood, M. A. Binyamin and M. K. Zafar, On the Characterization of
f-Ideals, Communications in Algebra 42 (2014), no. 9, 3736-3741.
WBH W. Bruns and J. Herzog, Cohen-Macaulay Rings, Revised
Edition, Cambridge Studies in Advanced Marthematics, Vol. 39,
Cambridge University Press, Cambridge, 1998.
SF S. Faridi, The Facet Ideal of a Simplicial
Complex, Manuscripta Mathematica, 109 (2002), 159-174.
TG T. Gallai, Transitiv Orientierbare Graphen, Acta Math.
Acad. Sci. Hung., 18 (1967), 25-66.
H A. Hatcher, Algebraic Topology, Cambridge
University Press, 2002.
VBL V. B. Le, Gallai Graphs and Anti-Gallai Graphs, Discrete
Math., 159 (1996), 179-189.
MAZ H. Mahmood, I. Anwar, M. K. Zafar, A Construction of Cohen-Macaulay f-Graphs, Journal of Algebra and its
Applications, 13 (2014), no. 6, 1450012-1450019.
M W.S. Massey, Algebraic Topology, An Introduction,
Springer-Verlag, New York, 1977.
|
http://arxiv.org/abs/1701.08006v2 | 20170127104545 | Quasi-homography warps in image stitching | [
"Nan Li",
"Yifang Xu",
"Chao Wang"
] | cs.CV | [
"cs.CV"
] |
Quasi-homography Warps in Image Stitching
Nan Li, N. Li is with the Center for Applied Mathematics, Tianjin University, Tianjin 300072, China. E-mail: [email protected] Xu^*, and Chao Wang
Y. Xu is with the Center for Combinatorics, Nankai University, Tianjin 300071, China. Email: [email protected].
C. Wang is with the Department of Software, Nankai University, Tianjin 300071, China. Email: [email protected].
January 27, 2017
====================================================================================================================================================================================================================================================================================================================================================================================================
The naturalness of warps is gaining extensive attention in image stitching.
Recent warps, such as SPHP and AANAP, use global similarity warps to mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistencies). In this paper, we propose a novel quasi-homography warp, which effectively balances the perspective distortion against the projective distortion in the non-overlapping region to create a more natural-looking panorama. Our approach formulates the warp as the solution of a bivariate system, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization respectively. Because our proposed warp relies only on a global homography, it is totally parameter-free. A comprehensive experiment shows that a quasi-homography warp outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that it wins most users' favor, compared to homography and SPHP.
Image stitching, image warping, natural-looking, projective distortion, perspective distortion.
§ INTRODUCTION
Image stitching plays an important role in many multimedia applications, such as panoramic videos <cit.> and virtual reality <cit.>. Conventionally, image stitching is a process of composing multiple images with overlapping fields of view, to produce a wide-view panorama <cit.>, where the first stage is to determine a warp for each image and transform it into a common coordinate system, then the warped images are composed <cit.> and blended <cit.> into a final mosaic.
Evaluations of warping include the alignment quality in the overlapping region and the naturalness quality in the non-overlapping region.
Early warps focus on the alignment quality, which is measured in two different aspects:
* (global) the root mean squared error on the set of feature correspondences,
* (local) the patch-based mean error along a stitching seam.
Global warps, such as similarity or homography warps <cit.>, aim to minimize alignment errors between overlapping pixels via a uniform transformation. Homography is the most frequently used warp, because it is the most flexible planar transformation which preserves all straight lines. For a better global alignment quality, recent spatially-varying warps <cit.> use multiple local transformations instead of a single global one to address the large parallax issue in the overlapping region. Some seam-driven warps <cit.> address the same problem by pursuing a better local alignment quality in the overlapping region, such that there exists a local region to be seamlessly
blended.
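For a global homography, the first (global) alignment measure above is straightforward to compute; the following sketch (with a synthetic H and exact matches, purely for illustration) evaluates the RMSE over feature correspondences:

```python
import numpy as np

def homography_rmse(H, src, dst):
    """Root mean squared alignment error of a global homography H (3x3)
    on matched point sets src, dst of shape (N, 2)."""
    pts = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective divide
    return np.sqrt(np.mean(np.sum((proj - dst) ** 2, axis=1)))

# Synthetic example: exact correspondences give (numerically) zero RMSE.
H = np.array([[1.2, 0.1, 5.0], [-0.1, 1.2, -3.0], [1e-4, 0.0, 1.0]])
rng = np.random.default_rng(0)
src = rng.random((20, 2)) * 100
hom = np.hstack([src, np.ones((20, 1))]) @ H.T
dst = hom[:, :2] / hom[:, 2:3]
print(homography_rmse(H, src, dst))  # ~0
```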
Recent warps concentrate more on the naturalness quality, which is embodied in two consistency properties:
* (local) the region of any object should be consistent with its appearance in the original,
* (global) the perspective relation of the same object should be consistent between different images.
Violations of consistencies lead to two types of distortion:
* (projective) the region of an object is enlarged, compared to its appearance in the original (see people and trees in Fig. <ref>(a)),
* (perspective) the perspectives of an object in two images are inconsistent with each other (see buildings and signs in Fig. <ref>(b)).
Similarity warps automatically satisfy the local consistency, since they purely involve translation, rotation and uniformly scaling, but may suffer from perspective distortion.
Homography warps conventionally satisfy the global consistency, if a good alignment quality is guaranteed,
but may suffer from projective distortion.
Warps such like the shape-preserving half-projective (SPHP) warp <cit.> and the adaptive as-natural-as-possible (AANAP) warp <cit.> use a spatial combination of homography and similarity warps to mitigate projective distortion in the non-overlapping region.
Other warps address the same problem via a joint optimization of alignment and naturalness qualities, which either constrains the warp resembles a similarity as a whole <cit.>, or constrains the warp preserves detected straight lines <cit.>.
In this paper, we propose a quasi-homography warp, which balances the projective distortion against the perspective distortion in the non-overlapping region, to create a more natural-looking mosaic (see Fig. <ref>(c)). Our proposed warp only relies on a global homography, thus it is totally parameter-free. The rest of the paper is organized as follows. Section <ref> describes some recent works. Section <ref> provides a naturalness analysis of image warps, where Section <ref> presents two intuitive tools to demonstrate projective distortion and perspective distortion via mathematical derivations. Section <ref> and <ref> employ such tools to analyze homography and SPHP warps in aspects of local and global consistencies. Our quasi-homography warp is defined as a solution of a bivariate system in Section <ref>, which is based on a new formulation of homography in Section <ref>. Implementation details (including two-image stitching and multiple-image stitching) and method variations (including orientation rectification and partition refinement) are proposed in Section <ref>. Section <ref> presents a comparison experiment and a user study, which demonstrate that the quasi-homography warp not only outperforms some state-of-the-art warps in urban scenes, but also wins most users' favor. Finally, conclusions are drawn in Section <ref> and some mathematical formulas are explained in Appendix.
§ RELATED WORK
In this section, we review some recent works of image warps in aspects of alignment and naturalness qualities respectively. For more fundamental concepts about image stitching, please refer to a comprehensive survey <cit.> by Szeliski.
§.§ Warps for Better Alignment
Conventional stitching methods always employ global warps, such as similarity, affine and homography, to align images in the overlapping region <cit.>. Global warps are robust but often not flexible enough to provide accurate alignment. Gao et al. <cit.> proposed a dual-homography warp to address scenes with two dominant planes by a weighted sum of two homographies. Lin et al. <cit.> proposed a smoothly varying affine (SVA) warp to replace a global affine warp with a smoothly varying affine stitching field, which is more flexible and maintains much of the motion generalization properties of affine or homography warps. Zaragoza et al. <cit.> proposed an as-projective-as-possible (APAP) warp in a moving DLT framework, which is able to accurately register images that differ by more than a pure rotation. Lou et al. <cit.> proposed a piecewise alignment method, which approximates regions of the image with planes by incorporating piecewise local geometric models.
Other methods combine image alignment with seam-cutting approaches <cit.>, to find a locally registered area which is seamlessly blended instead of aligning the overlapping region globally. Gao et al. <cit.> proposed a seam-driven framework, which searches a homography with minimal seam costs instead of minimal alignment errors on a set of feature
correspondences. Zhang and Liu <cit.> proposed a parallax-tolerant warp, which combines homography and content-preserving warps to locally register images. Lin et al. <cit.> proposed a seam-guided local alignment warp, which iteratively improves the warp by adaptive feature weighting according to the distance to current seams.
§.§ Warps for Better Naturalness
Many efforts have been devoted to mitigate distortion in the non-overlapping region for creating a natural-looking mosaic. A pioneering work <cit.> uses spherical or cylindrical warps to produce multi-perspective results to address this problem, but it necessarily curves straight lines.
Recently, some methods take advantage of global similarity (preserves the original perspective) to mitigate projective distortion in the non-overlapping region. Chang et al. <cit.> proposed a SPHP warp that spatially combines a homography warp and a similarity warp, which makes the homography maintain good alignment in the overlapping region while the similarity keep the original perspective in the non-overlapping region. Lin and Pankanti <cit.> proposed an AANAP warp, which combines a linearized homography warp and a global similarity warp with the smallest rotation angle to create natural-looking mosaics.
Other methods model their warps as mesh deformations via energy minimization, which address naturalness quality issues by enforcing different constraints. Chen et al. <cit.> proposed a global-similarity-prior (GSP) warp, which constrains the warp resembles a similarity as a whole. Zhang et al. <cit.> proposed a warp that produces an orthogonal projection of a wide-baseline scene by constraining it preserves extracted straight lines,
and allows perspective corrections via scale preservation.
§ NATURALNESS ANALYSIS OF IMAGE WARPS
This section describes a naturalness analysis of image warps. First, the global consistency is characterized as line-preserving, where the perspective distortion is illustrated through a mesh-to-mesh transformation, and the local consistency is characterized as uniformly-scaling, where the projective distortion is demonstrated via the linearity of a scaling function. Then, we analyze the naturalness of homography and SPHP warps with these tools.
§.§ Mathematical Setup
Let I and I^' denote the target image and the reference image respectively.
A warp ℋ is a planar transformation <cit.>, which relates pixel coordinates (x,y)∈ I to (x',y')∈ I^', where
x^'=f(x,y), y^'=g(x,y).
If ℋ is of global consistency, then it must be line-preserving, i.e., a straight line l={(x+z,y+kz)|z∈ℝ}∈ I should be mapped to a straight line l^'={(x^'+z^',y^'+k^'z^')|z^'∈ℝ}∈ I^'. Actually, the calculation of the slope k^' provides a criterion to validate line-preserving, i.e., ℋ is line-preserving, if and only if
k^' = g(x+z,y+kz)-g(x,y)/f(x+z,y+kz)-f(x,y)
is independent of z. The proof is straightforward. Given a point (x,y)∈ I and a slope k, they define a straight line l={(x+z,y+kz)|z∈ℝ}∈ I. If k^' calculated by (<ref>) is a constant, then l is mapped to a straight line l^'={(x^'+z^',y^'+k^'z^')|z^'∈ℝ}∈ I^', which is defined by (x',y')∈ I^' and k^'.
Since k^' only depends on (x,y) and k, we denote it by slope(x,y,k).
Suppose ℋ is line-preserving and 𝒞^1 continuous, then
slope(x,y,k) =lim_z→0g(x+z,y+kz)-g(x,y)/f(x+z,y+kz)-f(x,y)
=g_x(x,y)+kg_y(x,y)/f_x(x,y)+kf_y(x,y),
where f_x,f_y,g_x,g_y denote the partial derivatives of f and g.
In fact, there exists a mesh-to-mesh transformation that maps all horizontal lines and vertical lines to straight lines with slopes
slope(x,y,0) =g_x(x,y)/f_x(x,y),
slope(x,y,∞) =g_y(x,y)/f_y(x,y),
which are independent of x and y respectively.
Consequently, any point (x,y)∈ I can be expressed as the intersection point of a horizontal line and a vertical line, which is corresponding to the point (x^',y^')∈ I^' as the intersection point of two lines with slopes slope(x,y,0) and slope(x,y,∞). In the rest of the paper, we constantly employ this mesh-to-mesh transformation to demonstrate perspective distortion comparisons among different warps (see Fig. <ref>).
On the other hand, if ℋ is of local consistency, then it must be uniformly-scaling. Actually, the local consistency automatically holds for similarity warps, because they purely involve translation, rotation and uniform scaling. Suppose ℋ is not only line-preserving but also uniformly-scaling, then a line segment s={(x+z,y+kz)|z∈[z_1,z_2]}∈ I should be mapped to a line segment s^'={(x^'+z^',y^'+k^'z^')|z^'∈[z^'_1,z^'_2]}∈ I^' with a uniform scaling factor.
Conversely, the linearity of a scaling function on arbitrary line is a necessary condition of the local consistency.
By assuming cameras are oriented and motions are horizontal, there should exist a horizontal line l_x={(x,y_*)|x∈ℝ}∈ I which remains a horizontal line l_x^'={(x^',y_*^')|x^'∈ℝ}∈ I^', if a good alignment is guaranteed. In fact, l_x is roughly located in the horizontal plane of cameras, and y_* satisfies
slope(x,y_*,0)=g_x(x,y_*)/f_x(x,y_*)=0.
Given a point (x_*,y_*)∈ l_x, then for ∀(x,y_*)∈ l_x, |f(x,y_*)-f(x_*,y_*)| should equal to a uniform scaling factor times |x-x_*|. In other words, f(x,y_*) should be linear in x. In the rest of the paper, we constantly employ the linearity to demonstrate projective distortion comparisons among different warps (see Fig. <ref>).
Some other notations are stated as follows. Let 𝒪 denote the overlapping region and l_y={(x_*,y) | y∈ℝ} denote a vertical line which divides ℝ^2 into half spaces R_O={(x,y)|x≤ x_*} and R_Q={(x,y)|x_*< x}, such that 𝒪⊂ R_O. Our proposed warp ℋ_† is a spatial combination of a homography warp ℋ_0 within R_O and a squeezed homography warp ℋ_* within R_Q, where R_O^' and R_Q^' are respective half spaces after warping.
§.§ Naturalness Analysis of Homography
A homography warp ℋ_0 is the most flexible warp for better alignment, which is normally defined as
f_0(x,y) =h_1x+h_2y+h_3/h_7x+h_8y+1,
g_0(x,y) =h_4x+h_5y+h_6/h_7x+h_8y+1,
where h_1-h_8 are eight parameters. It is easy to certify that ℋ_0 is line-preserving, since the slope k^' in (<ref>) is independent of z. To illustrate the property more intuitively, we draw a mesh-to-mesh transformation (see Fig. <ref>(b)), where horizontal lines and vertical lines are mapped to straight lines with slopes
slope(x,y,0) = [(h_4h_8-h_5h_7)y+(h_4-h_6h_7)] / [(h_1h_8-h_2h_7)y+(h_1-h_3h_7)],
slope(x,y,∞) = [(h_4h_8-h_5h_7)x+(h_6h_8-h_5)] / [(h_1h_8-h_2h_7)x+(h_3h_8-h_2)].
Under the assumption that cameras are oriented and motions are horizontal, for l_x={(x,y_*)|x∈ℝ}∈ I, we derive
y_* = (h_6h_7-h_4)/(h_4h_8-h_5h_7),
by solving the equation (<ref>). For ∀(x,y_*)∈ l_x,
f_0(x,y_*)=h_1x+h_2y_*+h_3/h_7x+h_8y_*+1,
is non-linear in x when h_7≠0 (see Fig. <ref>(b)), which shows that uniformly-scaling fails.
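These two diagnostics are easy to check numerically. The following sketch (with hypothetical parameter values for h_1-h_8; the slope formula is the closed form derived above) verifies that slope(x,y,0) agrees with g_x/f_x, that the special line y_* stays horizontal, and that f_0(x,y_*) is non-linear in x whenever h_7≠0.

```python
import numpy as np

# Hypothetical homography parameters h1-h8 (the constant term is normalized to 1).
h1, h2, h3, h4, h5, h6, h7, h8 = 1.05, 0.02, 5.0, 0.01, 1.0, 2.0, 1e-4, 2e-5

def f0(x, y):
    return (h1*x + h2*y + h3) / (h7*x + h8*y + 1.0)

def g0(x, y):
    return (h4*x + h5*y + h6) / (h7*x + h8*y + 1.0)

def slope0(y):  # slope(x, y, 0): image slope of a horizontal line, depends only on y
    return ((h4*h8 - h5*h7)*y + (h4 - h6*h7)) / ((h1*h8 - h2*h7)*y + (h1 - h3*h7))

# slope(x, y, 0) should equal g_x/f_x; check against finite differences.
x, y, eps = 120.0, 80.0, 1e-6
fx = (f0(x + eps, y) - f0(x, y)) / eps
gx = (g0(x + eps, y) - g0(x, y)) / eps
assert abs(gx/fx - slope0(y)) < 1e-6

# The special horizontal line y_* that stays horizontal: slope(x, y_*, 0) = 0.
y_star = (h6*h7 - h4) / (h4*h8 - h5*h7)
assert abs(slope0(y_star)) < 1e-9

# f0(x, y_star) is non-linear in x when h7 != 0: second differences do not vanish.
xs = np.array([0.0, 200.0, 400.0])
vals = f0(xs, y_star)
print('second difference:', vals[0] - 2*vals[1] + vals[2])  # nonzero => non-linear
```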
In summary, homography warps conventionally satisfy the global consistency if a good alignment is guaranteed, however they usually suffer from projective distortion in the non-overlapping region (see Table <ref>).
For example, the people and the tree are enlarged in Fig. <ref>(a) comparing to the original.
§.§ Naturalness Analysis of SPHP
To overcome such drawbacks of homography warps, Chang et al. <cit.> proposed a shape-preserving half-projective (SPHP) warp, which is a spatial combination of a homography warp and a similarity warp, to create a natural-looking
multi-perspective panorama.
Specifically, after adopting the change of coordinates, SPHP divides ℝ^2 into three regions. 1. R_H={(u,v)|u≤ u_1}, where a homography warp is applied to achieve a good alignment. 2. R_S={(u,v) | u_2≤ u}, where a similarity warp is applied to mitigate projective distortion. 3. R_T={(u,v) | u_1<u<u_2}, a buffer region where a warp is applied to gradually change a homography warp to a similarity warp. Consequently, a SPHP warp 𝒲 is defined as
w(u,v) =
H(u,v), if (u,v)∈ R_H,
T(u,v), if (u,v)∈ R_T,
S(u,v), if (u,v)∈ R_S,
where u_1 and u_2 are parameters, such that 𝒲 can approach a similarity warp as much as possible. Note that, the change of coordinates plays an important role in SPHP, since a similarity simply combines a homography via a single partition line.
Both homography and similarity are line-preserving, thus 𝒲 is certainly of global consistency in R_H and R_S respectively. However, 𝒲 may suffer from line-bending within R_T, because of its non-linearity.
Moreover, perspectives of R_H and R_S may contradict each other.
For example, parallels remain parallels in R_S, while they do not in R_H (see Fig. <ref>(c)).
𝒲 is certainly of local consistency in R_S, because a similarity warp is applied (see Fig. <ref>(c)).
In summary, SPHP warps achieve alignment quality as good as homography warps in R_H, and local consistency as good as similarity warps in R_S. However, SPHP warps may suffer from line-bending in R_T and perspective distortion between R_H and R_S (see Table <ref>). Note that the non-linearity of T(u,v) in R_T only bends certain lines in theory, and it is still possible to preserve visible straight lines in practice; many results in <cit.> justify that SPHP is capable of doing so. Unfortunately, this gets worse in urban scenes, which are filled with visible lines and visible parallels (see the sign in Fig. <ref>(b)).
It is also worth noting that, SPHP creates a multi-perspective panorama, thus different perspectives may contradict each other (see buildings in Fig. <ref>(b)).
These naturalness analysis of homography and SPHP warps motivate us to construct a warp, which achieves a good balance between the perspective distortion and the projective distortion in the non-overlapping region,
via relaxing the local and global consistencies such that they are both partially satisfied.
§ PROPOSED WARPS
This section presents how to construct a warp for balancing perspective distortion against projective distortion
in the non-overlapping region. First, we propose a different formulation of the homography warp to characterize the global consistency as slope preservation while the local consistency as scale linearization respectively. Then, we describe how to adopt this formulation to present a quasi-homography warp, which squeezes the mesh of the corresponding homography warp but without varying its shape.
§.§ Review of Homography
Given eight parameters h_1-h_8, we formulate a homography warp ℋ_0 in another way, as the solution of a bivariate system
y^'-g_0(x_*,y)/x^'-f_0(x_*,y) =slope(x,y,0),
y^'-g_0(x,y_*)/x^'-f_0(x,y_*) =slope(x,y,∞),
where (x_*,y) and (x,y_*) are projections of a point (x,y) onto l_y={(x_*,y) | y∈ℝ} and l_x={(x,y_*) | x∈ℝ} respectively (see Fig. <ref>(a)). Besides, equations of f_0, g_0 and slope(x,y,0), slope(x,y,∞) are given in (<ref>,<ref>,<ref>,<ref>).
Our formulation (<ref>,<ref>) is equivalent to (<ref>,<ref>).
In fact, it is easy to check that (<ref>,<ref>) is a solution of (<ref>,<ref>). Furthermore, it is the unique solution, because the Jacobian is invertible if and only if slope(x,y,0)≠slope(x,y,∞).
Then, comparing with (<ref>,<ref>), our formulation (<ref>,<ref>) characterizes the global consistency as slope preservation while the local consistency as scale linearization respectively. Intuitively, slope(x,y,0) and slope(x,y,∞) formulate the shape of the mesh, while f_0(x,y_*) and g_0(x_*,y) formulate the density of the mesh (see Fig. <ref>(b)). It should be noticed that we made no assumptions on x_* or y_* in the above analysis. In the next subsection, we will assume that l_y isolates the overlapping region 𝒪 and l_x remains horizontal under ℋ_0, for stitching multiple images captured by oriented cameras via horizontal motions.
§.§ Quasi-homography
Our proposed warp makes use of the formulation (<ref>,<ref>) to balance perspective distortion
against projective distortion in the non-overlapping region. First, we divide ℝ^2 by the vertical line l_y={(x_*,y) | y∈ℝ} into half spaces R_O={(x,y)|x≤ x_*} and R_Q={(x,y)|x_*< x}, where the overlapping region 𝒪⊂ R_O.
Then, we formulate our warp ℋ_† as the solution of a bivariate system
y^'-g_0(x_*,y)/x^'-f_0(x_*,y) =slope(x,y,0),
y^'-g_0(x,y_*)/x^'-f_†(x,y_*) =slope(x,y,∞),
where y_* satisfies (<ref>) and f_† (x,y_*) is defined as
f_† (x,y_*)=
f_0(x,y_*), if (x,y_*)∈ R_O,
f_*(x,y_*), if (x,y_*)∈ R_Q,
f_*(x,y_*)=f_0(x_*,y_*)+f_0^'(x_*,y_*)(x-x_*),
on the horizontal line l_x={(x,y_*) | x∈ℝ}.
In fact, f_*(x,y_*) is the first-order truncation of the Taylor's series for f_0(x,y_*) at x=x_*,
which successfully makes f_† (x,y_*) piece-wise 𝒞^1 continuous and linear in x within R_Q.
Because the Jacobian of (<ref>,<ref>) is invertible, it possesses a unique solution ℋ_† as
ℋ_† =
ℋ_0, if (x,y)∈ R_O,
ℋ_*, if (x,y)∈ R_Q,
where
x^'=f_†(x,y) =
f_0(x,y), if (x,y)∈ R_O,
f_*(x,y), if (x,y)∈ R_Q,
y^'=g_†(x,y) =
g_0(x,y), if (x,y)∈ R_O,
g_*(x,y), if (x,y)∈ R_Q,
where f_*(x,y) and g_*(x,y) are rational functions in variables x and y, whose coefficients are polynomial functions in h_1-h_8 and x_*.
The detailed derivations are presented in Appendix.
In fact, the warp ℋ_† just squeezes the mesh of the homography in the horizontal direction without varying its shape (see Fig. <ref>(c)). In this sense, we call ℋ_† a quasi-homography warp corresponding to a homography warp ℋ_0. A quasi-homography warp maintains good alignment in R_O as a homography warp does, and it mitigates perspective distortion and projective distortion simultaneously via slope preservation and scale linearization in R_Q. Intuitively, ℋ_† relaxes line-preserving for arbitrary lines to preserving only the shape of the mesh (see Fig. <ref>(d)), while relaxing uniform scaling everywhere to only uniforming the density of the mesh on l_x in R_Q (see Fig. <ref>(d)).
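A minimal per-pixel sketch of the forward map is given below. It reuses the hypothetical parameters of the earlier sketch, assumes a partition line x_* (in practice chosen where l_y isolates the overlapping region), and solves the 2×2 linear system (<ref>,<ref>) numerically instead of using the closed-form expressions derived in the Appendix.

```python
import numpy as np

# Hypothetical parameters; f0, g0 and the slope formulas are as in the earlier sketch.
h1, h2, h3, h4, h5, h6, h7, h8 = 1.05, 0.02, 5.0, 0.01, 1.0, 2.0, 1e-4, 2e-5

def f0(x, y): return (h1*x + h2*y + h3) / (h7*x + h8*y + 1.0)
def g0(x, y): return (h4*x + h5*y + h6) / (h7*x + h8*y + 1.0)
def slope0(y): return ((h4*h8 - h5*h7)*y + (h4 - h6*h7)) / ((h1*h8 - h2*h7)*y + (h1 - h3*h7))
def slope_inf(x): return ((h4*h8 - h5*h7)*x + (h6*h8 - h5)) / ((h1*h8 - h2*h7)*x + (h3*h8 - h2))

y_star = (h6*h7 - h4) / (h4*h8 - h5*h7)   # horizontal line that stays horizontal
x_star = 400.0                            # assumed partition line l_y

def df0_dx(x, y):                         # partial derivative of f0 w.r.t. x
    num, den = h1*x + h2*y + h3, h7*x + h8*y + 1.0
    return (h1*den - num*h7) / den**2

def f_dagger_lx(x):                       # scale linearization on l_x (first-order Taylor)
    if x <= x_star:
        return f0(x, y_star)
    return f0(x_star, y_star) + df0_dx(x_star, y_star) * (x - x_star)

def quasi_homography(x, y):
    if x <= x_star:                       # R_O: plain homography
        return f0(x, y), g0(x, y)
    # R_Q: slope preservation and scale linearization give two linear
    # equations in the unknowns (x', y'); solve the 2x2 system.
    k0, kinf = slope0(y), slope_inf(x)
    A = np.array([[-k0, 1.0], [-kinf, 1.0]])
    b = np.array([g0(x_star, y) - k0 * f0(x_star, y),
                  g0(x, y_star) - kinf * f_dagger_lx(x)])
    xp, yp = np.linalg.solve(A, b)
    return xp, yp

# The two pieces glue continuously across the partition line:
print(quasi_homography(x_star - 1e-6, 50.0))
print(quasi_homography(x_star + 1e-6, 50.0))
```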
On the other hand, since ℋ_† just squeezes the mesh of ℋ_0 but without varying its shape, ℋ_† is an injection if ℋ_0 is an injection.
Given (x^',y^')∈ R_O^', then (x,y)∈ R_O is determined by ℋ_0^-1. Given (x^',y^')∈ R_Q^', then (x,y)∈ R_Q is determined by solving (<ref>,<ref>) (regarding x,y as unknowns):
x is obtained as a root of the quadratic equation m_1x^2+m_2x+m_3=0,
y = [(h_6h_7-h_4)x'+(h_1-h_3h_7)y'+(h_3h_4-h_1h_6)] / [(h_4h_8-h_5h_7)x'+(h_2h_7-h_1h_8)y'+(h_1h_5-h_2h_4)],
where m_1-m_3 are polynomial functions in x',y',x_*, and h_1-h_8. The detailed derivations are presented in Appendix.
Note that, though both SPHP and quasi-homography warps adopt a spatial combination of a homography warp and another warp to create more natural-looking mosaics, their motivations and frameworks are different. SPHP focuses on the local consistency, to create a natural-looking multi-perspective
panorama. Quasi-homography concentrates on balancing global and local consistencies, to generate a natural-looking single-perspective panorama. SPHP introduces a change of coordinates such that a similarity combines a homography via a single partition line, and a buffer region such that a homography gradually changes into a similarity. Quasi-homography reorganizes homography's point correspondences via solving the bivariate system (<ref>,<ref>), where the shape is preserved and the size is squeezed.
It is worth noting that the construction of quasi-homography makes no assumptions on the special horizontal line l_x and the vertical partition line l_y. For stitching multiple images captured by oriented cameras via horizontal motions, the horizontal line that remains horizontal best measures the projective distortion. Therefore, quasi-homography can preserve horizontal lines or nearly-horizontal lines better than SPHP (see result comparisons in Section <ref>), and ordinary users prefer such stitching results in urban scenes (see user study in Section <ref>).
In summary, quasi-homography warps achieve a good alignment quality as homography warps in R_O, while partially possess the local consistency and the global consistency in R_Q, such that perspective distortion and projective distortion are balanced (see Table <ref>). Note that the warp may still suffer from diagonal line-bending and vertical region-enlarging within R_Q, because line-preserving and uniformly-scaling are relaxed to partially valid. Please see more details in Section <ref>.
§ IMPLEMENTATION
In this section, we first present more implementation details of our quasi-homography in two-image stitching and multiple-image stitching, then we propose two variations of the method including orientation rectification and partition refinement.
§.§ Two-image Stitching
Given a pair of images, which are captured by oriented cameras via
horizontal motions, if a homography warp ℋ_0 can be estimated with a good alignment quality in the overlapping region, then a quasi-homography warp ℋ_† can be calculated, which smoothly extrapolates from ℋ_0 in R_O into ℋ_* in R_Q. A brief algorithm is given in Algorithm <ref>.
§.§ Multiple-image Stitching
Given a sequence of multiple images, which are captured by oriented cameras via horizontal motions, our warping method consists of three stages. In the first stage, we pick a reference image as the standard perspective, such that the other images should be consistent with it. Then we estimate a homography warp for each image, transform them into the coordinate system of the reference image via bundle adjustment as in <cit.>, and calculate pairwise quasi-homography warps of adjacent images. Finally, we concatenate the other images to the reference one by a chained composite map of pairwise quasi-homography warps.
Fig. <ref> illustrates an example of the concatenation procedure for stitching five images. First we select I_3 as the reference image, such that the perspectives of the other four images should agree with it.
Then we estimate homography warps ℋ_0^1→3, ℋ_0^2→3, ℋ_0^4→3, ℋ_0^5→3 via bundle adjustment <cit.> and calculate pairwise quasi-homography warps ℋ_†^1→2, ℋ_†^2→3, ℋ_†^4→3, ℋ_†^5→4. Finally, we concatenate I_1 and I_5 to I_3 by
ℋ_†^1→3=ℋ_†^2→3∘ℋ_†^1→2,
ℋ_†^5→3=ℋ_†^4→3∘ℋ_†^5→4.
Therefore, the concatenation warp for every image is a chained composite map of pairwise quasi-homography warps.
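In code, the chained composition amounts to function composition; the pairwise warps named in the comment are hypothetical handles, e.g. built from the quasi-homography sketch above.

```python
def compose(outer, inner):
    """Chained composite map: (outer o inner)(x, y)."""
    return lambda x, y: outer(*inner(x, y))

# e.g., concatenating I_1 and I_5 to the reference I_3:
# H_1to3 = compose(H_2to3, H_1to2)
# H_5to3 = compose(H_4to3, H_5to4)
```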
§.§ Orientation Rectification
In urban scenes, users are accustomed to taking pictures with oriented cameras via horizontal motions; hence any vertical line in the target image is expected to be transformed to a vertical line in the warped result. However, enforcing this inevitably sacrifices some alignment quality in the overlapping region.
In order to achieve orientation rectification, we incorporate an extra constraint into the homography estimation, which enforces that the external vertical boundary of the target image remains vertical in the warped result.
Then for a homography warp ℋ_0 as (<ref>,<ref>), it should satisfy
f_0(w,0)=f_0(w,h) ⇔ h_8=h_2(h_7w+1)/(h_1w+h_3),
where w and h are the width and the height of I respectively.
A global homography is then estimated by solving
min_h ∑_i=1^N ‖a_ih‖^2 s.t. ‖h‖=1, h_8=h_2(h_7w+1)/(h_1w+h_3).
Because the quasi-homography warp just squeezes the mesh of a homography warp without varying its shape, the external vertical boundary still remains vertical (see Fig. <ref>).
§.§ Partition Refinement
In our analysis of quasi-homography warps in Section <ref>, the uniform scaling factor on the special horizontal line l_x in R_Q depends on the linearized scaling function (<ref>). Moreover, it depends on the determination of the partition point (x_*,y_*). In fact, the factor is more accurate if (x_*,y_*) is better aligned.
Hence, for further refinement, we move the partition line l_y from the border between the overlapping and non-overlapping regions to the external border of the seam (see Fig. <ref>).
§ EXPERIMENTS
We experimented with our proposed method on a range of images captured through both rear and front cameras in urban scenes. In our experiments, we employ SIFT <cit.> to extract and match features, RANSAC <cit.> to estimate a global homography, and seam-cutting <cit.> to blend the overlapping region. Codes are implemented in OpenCV 2.4.9 and generally take 1s to 2s on a desktop PC with an Intel i5 3.0GHz CPU and 8GB memory to stitch two images with 800× 600 resolution by Algorithm <ref>, where the calculation of the quasi-homography warp only takes 0.1s (including the forward map and the backward map). We used the codes of AutoStitch[http://matthewalunbrown.com/autostitch/autostitch.html] and SPHP[http://www.cmlab.csie.ntu.edu.tw/∼frank/] from the authors' homepages in the experiment.
§.§ Result Comparisons
We compared our quasi-homography warp to state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. Because our method focuses on the naturalness quality in the non-overlapping region, we only compare with methods using global homography alignment in the overlapping region,
while not comparing to methods using spatially-varying warps.
Nevertheless, some urban scenes with repetitive structures still cause alignment issues <cit.>, which may limit the application of our method.
Therefore, we use a more robust feature matching method RepMatch <cit.>, and a more robust RANSAC solution USAC <cit.> for estimating a global homography, to generalize our proposed method in urban scenes. Non-planar scenes may cause outlier removal issues <cit.>, but fortunately, <cit.> justifies that a simple RANSAC-driven homography still works reasonably well even for such cases.
In order to highlight the comparison of the naturalness quality in the non-overlapping region, for homography, SPHP and quasi-homography, we use the same homography alignment and the same seam-cutting composition in the overlapping region.
Fig. <ref> illustrates a naturalness comparison for stitching two and three images from data sets of DHW <cit.> and SPHP <cit.>.
Homography preserves straight lines, but it enlarges the regions of cars and people. SPHP preserves respective perspectives, but it causes contradictions in the ground and wires.
AutoStitch uses a spherical projection to produce a multi-perspective stitching result. Quasi-homography uses a planar projection to produce a single-perspective stitching result, which appears as oriented line-preserving and uniformly-scaling. More results from other data sets including DHW <cit.>, SPHP <cit.>, GSP <cit.> and APAP <cit.> are available in the supplementary material.
Fig. <ref> illustrates a naturalness comparison for stitching two sequences of ten and nine images. Homography stretches cars, trees and people. AutoStitch presents a nonlinear-view stitching result. Quasi-homography creates a natural-looking linear-view stitching result. More results for stitching long sequences of images are available in the supplementary material.
§.§ User Study
To investigate whether quasi-homography is more preferred by users in urban scenes, we conduct a user study to compare our results to homography and SPHP. We invite 17 participants to rank 20 unannotated groups of stitching results, including 5 groups from the front cameras and 15 groups from rear ones. For each group, we adopt the same homography alignment and the same seam-cutting composition, and all parameters are set to produce optimal final results. In our study, each participant ranks three unannotated stitching results in each group, and a score is recorded by assigning weights 4, 2 and 1 to Rank 1, 2 and 3. Twenty groups of stitching results are available in the supplementary material.
Table <ref> shows a summary of rank votes and total scores for three warps, and the histogram of three scores is shown in Fig. <ref> in three aspects. This user study demonstrates that stitching results of quasi-homography warps win most users' favor in urban scenes.
Table: Score results of user study.

Methods             Rank 1   Rank 2   Rank 3   Total score
Homography            69      210       61        757
SPHP <cit.>           19       56      265        453
Quasi-homography     252       74       14       1170
[Figure: Histogram of scores.]
§.§ Failure Cases
Experiments show that quasi-homography warps usually balance the projective distortion against the perspective distortion in the non-overlapping region, but there still exist some limitations. For example, diagonal lines may not stay straight anymore and regions of objects may suffer from vertical stretches (especially when stitching images from different planes). Two failure examples are shown in Fig. <ref>.
§ CONCLUSION
In this paper, we propose a quasi-homography warp, which balances the perspective distortion against the projective distortion in the non-overlapping region, to create natural-looking single-perspective panoramas. Experiments show that stitching results of quasi-homography outperform some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that quasi-homography wins most users' favor as well, compared to homography and SPHP.
Future works include generalizing quasi-homography warps into spatially-varying warping frameworks like in <cit.> to improve alignment qualities as well in the overlapping region, and incorporating our method in SPHP to create more natural-looking multi-perspective panoramas. Multimedia applications that are relevant to image stitching could also be considered, such as feature selection <cit.>, video composition <cit.>, cross-media stitching <cit.> and action recognition <cit.>.
[]
The forward map (<ref>,<ref>) and the backward map (<ref>,<ref>) of a quasi-homography warp (<ref>,<ref>) are solved by the computer algebra system Maple via commands
solve({(<ref>),(<ref>)},{x^',y^'}),
where x^',y^' are unknowns and x,y,x_*,y_*,h_1-h_8 are parameters,
solve({(<ref>),(<ref>)},{x,y})
where x,y are unknowns and x',y',x_*,y_*,h_1-h_8 are parameters.
Because the analytic solutions of (<ref>,<ref>) contain over one thousand monomials, we omit their complicated expressions here.
A Maple worksheet is available for download at http://cam.tju.edu.cn/%7enan/QH.html for readers to verify the correctness.
Actually, if the parameters h_1-h_8 and a pair of x,y or x^',y^' are given, we plug their values into (<ref>,<ref>) or (<ref>,<ref>) then solve the forward or the backward map directly, without using these analytic solutions.
A symbolic proof for the line-preserving property of homography warps and the equivalence of two homography formulations are included in the worksheet as well.
Tzavidas2005Multicamera
S. Tzavidas and A. K. Katsaggelos, “A multicamera setup for generating stereo
panoramic video,” IEEE Transactions on Multimedia, vol. 7, no. 5, pp.
880–890, 2005.
Sun2005Region
X. Sun, J. Foote, D. Kimber, and B. S. Manjunath, “Region of interest
extraction and virtual camera control based on panoramic video capturing,”
IEEE Transactions on Multimedia, vol. 7, no. 5, pp. 981–990, 2005.
Gaddam2016Tiling
V. R. Gaddam, M. Riegler, R. Eg, and P. Halvorsen, “Tiling in interactive
panoramic video: Approaches and evaluation,” IEEE Transactions on
Multimedia, vol. 18, no. 9, pp. 1819–1831, 2016.
Shum2005A
H. Y. Shum, K. T. Ng, and S. C. Chan, “A virtual reality system using the
concentric mosaic: construction, rendering, and data compression,”
IEEE Transactions on Multimedia, vol. 7, no. 1, pp. 85–95, 2005.
Tang2005A
W. K. Tang, T. T. Wong, and P. A. Heng, “A system for real-time panorama
generation and display in tele-immersive applications,” IEEE
Transactions on Multimedia, vol. 7, no. 2, pp. 280–292, 2005.
Zhao2013Cube2Video
Q. Zhao, L. Wan, W. Feng, and J. Zhang, “Cube2Video: Navigate between cubic
panoramas in real-time,” IEEE Transactions on Multimedia, vol. 15,
no. 8, pp. 1745–1754, 2013.
szeliski2006image
R. Szeliski, “Image alignment and stitching: A tutorial,” Found. Trends
Comput. Graph. Vis., vol. 2, no. 1, pp. 1–104, 2006.
peleg1981elimination
S. Peleg, “Elimination of seams from photomosaics,” Comput. Graph.
Image Process., vol. 16, no. 1, pp. 90–94, 1981.
duplaquet1998building
M.-L. Duplaquet, “Building large image mosaics with invisible seam lines,” in
Proc. SPIE Visual Information Processing VII, 1998, pp. 369–377.
davis1998mosaics
J. Davis, “Mosaics of scenes with moving objects,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recog., June. 1998, pp. 354–360.
efros2001image
A. A. Efros and W. T. Freeman, “Image quilting for texture synthesis and
transfer,” in Proc. ACM SIGGRAPH, 2001, pp. 341–346.
mills2009image
A. Mills and G. Dudek, “Image stitching with dynamic elements,” Image
Vis. Comput., vol. 27, no. 10, pp. 1593–1602, 2009.
burt1983multiresolution
P. J. Burt and E. H. Adelson, “A multiresolution spline with application to
image mosaics,” ACM Trans. Graphics, vol. 2, no. 4, pp. 217–236,
1983.
Perez:2003
P. Pérez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM
Trans. Graphics, vol. 22, no. 3, pp. 313–318, 2003.
levin2004seamless
A. Levin, A. Zomet, S. Peleg, and Y. Weiss, “Seamless image stitching in the
gradient domain,” in Proc. Eur. Conf. Comput. Vis., May 2004, pp.
377–389.
hartley2003multiple
R. Hartley and A. Zisserman, Multiple view geometry in computer
vision.1em plus 0.5em minus 0.4emCambridge univ. press, 2003.
gao2011constructing
J. Gao, S. J. Kim, and M. S. Brown, “Constructing image panoramas using
dual-homography warping,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recog., Jun. 2011, pp. 49–56.
lin2011smoothly
W.-Y. Lin, S. Liu, Y. Matsushita, T.-T. Ng, and L.-F. Cheong, “Smoothly
varying affine stitching,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recog., Jun. 2011, pp. 345–352.
zaragoza2013projective
J. Zaragoza, T.-J. Chin, M. S. Brown, and D. Suter, “As-projective-as-possible
image stitching with moving DLT,” in Proc. IEEE Conf. Comput. Vis.
Pattern Recog., Jun. 2013, pp. 2339–2346.
Lou2014Image
Z. Lou and T. Gevers, “Image alignment by piecewise planar region matching,”
IEEE Transactions on Multimedia, vol. 16, no. 7, pp. 2052–2061, 2014.
gao2013seam
J. Gao, Y. Li, T.-J. Chin, and M. S. Brown, “Seam-driven image stitching,”
Eurographics, pp. 45–48, 2013.
zhang2014parallax
F. Zhang and F. Liu, “Parallax-tolerant image stitching,” in Proc. IEEE
Conf. Comput. Vis. Pattern Recog., May 2014, pp. 3262–3269.
lin2016seam
K. Lin, N. Jiang, L.-F. Cheong, M. Do, and J. Lu, “SEAGULL: Seam-guided
local alignment for parallax-tolerant image stitching,” in Proc. Eur.
Conf. Comput. Vis., Oct. 2016.
chang2014shape
C.-H. Chang, Y. Sato, and Y.-Y. Chuang, “Shape-preserving half-projective
warps for image stitching,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recog., May 2014, pp. 3254–3261.
lin2015adaptive
C.-C. Lin, S. U. Pankanti, K. N. Ramamurthy, and A. Y. Aravkin, “Adaptive
as-natural-as-possible image stitching,” in Proc. IEEE Conf. Comput.
Vis. Pattern Recog., Jun. 2015, pp. 1155–1163.
Chen:2016:NIS
Y.-S. Chen and Y.-Y. Chuang, “Natural image stitching with the global
similarity prior,” in Proc. Eur. Conf. Comput. Vis., 2016, pp.
186–201.
zhang2016multi
G. Zhang, Y. He, W. Chen, J. Jia, and H. Bao, “Multi-viewpoint panorama
construction with wide-baseline images,” IEEE Trans. Image Process.,
vol. 25, no. 7, pp. 3099–3111, 2016.
boykov2001fast
Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via
graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23,
no. 11, pp. 1222–1239, Nov. 2001.
agarwala2004interactive
A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless,
D. Salesin, and M. Cohen, “Interactive digital photomontage,” ACM
Trans. Graphics, vol. 23, no. 3, pp. 294–302, 2004.
kwatra2003graphcut
V. Kwatra, A. Schödl, I. Essa, G. Turk, and A. Bobick, “Graphcut textures:
image and video synthesis using graph cuts,” ACM Trans. Graphics,
vol. 22, no. 3, pp. 277–286, 2003.
Eden:2006
A. Eden, M. Uyttendaele, and R. Szeliski, “Seamless image stitching of scenes
with large motions and exposure differences,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recog., vol. 2, Jun. 2006, pp. 2498–2505.
Brown:2007
M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant
features,” Int. J. Comput. Vis., vol. 74, no. 1, pp. 59–73, 2007.
lowe2004distinctive
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,”
Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
fischler1981random
M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for
model fitting with applications to image analysis and automated
cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981.
zaragoza2014projective
J. Zaragoza, T.-J. Chin, Q.-H. Tran, M. S. Brown, and D. Suter,
“As-projective-as-possible image stitching with moving DLT,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 36, no. 7, pp. 1285–1298, 2014.
kushnir2014epipolar
M. Kushnir and I. Shimshoni, “Epipolar geometry estimation for urban scenes
with repetitive structures,” IEEE Trans. Pattern Anal. Mach. Intell.,
vol. 36, no. 12, pp. 2381–2395, 2014.
lin2016repmatch
W.-Y. Lin, S. Liu, N. Jiang, M. N. Do, P. Tan, and J. Lu, “RepMatch: Robust
feature matching and pose for reconstructing modern cities,” in Proc.
Eur. Conf. Comput. Vis., 2016, pp. 562–579.
raguram2013usac
R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J.-M. Frahm, “USAC: a
universal framework for random sample consensus,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 35, no. 8, pp. 2022–2038, 2013.
Tran2012
Q.-H. Tran, T.-J. Chin, G. Carneiro, M. S. Brown, and D. Suter, “In defence of
RANSAC for outlier rejection in deformable registration,” in Proc.
Eur. Conf. Comput. Vis., 2012, pp. 274–287.
yang2013feature
Y. Yang, Z. Ma, A. G. Hauptmann, and N. Sebe, “Feature selection for
multimedia analysis by sharing information among multiple tasks,” IEEE
Transactions on Multimedia, vol. 15, no. 3, pp. 661–669, 2013.
VideoPuzzle
Q. Chen, M. Wang, Z. Huang, Y. Hua, Z. Song, and S. Yan, “VideoPuzzle:
Descriptive one-shot video composition,” IEEE Transactions on
Multimedia, vol. 15, no. 3, pp. 521–534, Apr. 2013.
Xu2016
Y. Yan, F. Nie, W. Li, C. Gao, Y. Yang, and D. Xu, “Image classification by
cross-media active learning with privileged information,” IEEE
Transactions on Multimedia, vol. 18, no. 12, pp. 2494–2502, Dec. 2016.
action
Y. Li, P. Li, D. Lei, Y. Shi, and L. Tan, “Investigating image stitching for
action recognition,” Multimedia Tools and Applications, Aug. 2017.
[Online]. Available: <https://doi.org/10.1007/s11042-017-5072-4>
|
http://arxiv.org/abs/1701.07447v3 | 20170125190232 | An Explicit, Coupled-Layer Construction of a High-Rate Regenerating Code with Low Sub-Packetization Level, Small Field Size and $d< (n-1)$ | [
"Birenjith Sasidharan",
"Myna Vajha",
"P. Vijay Kumar"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
|
http://arxiv.org/abs/1701.07808v4 | 20170126183734 | Linear convergence of SDCA in statistical estimation | [
"Chao Qu",
"Huan Xu"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
In this paper, we consider stochastic dual coordinate ascent (SDCA) without the strongly convex assumption or even the convex assumption. We show that SDCA converges linearly under mild conditions termed restricted strong convexity. This covers a wide array of popular statistical models including Lasso, group Lasso, logistic regression with ℓ_1 regularization, corrected Lasso and linear regression with SCAD regularizer. This significantly improves previous convergence results on SDCA for problems that are not strongly convex. As a byproduct, we derive a dual free form of SDCA that can handle general regularization terms, which is of independent interest.
§ INTRODUCTION
First order methods have again become a central focus of research in optimization, and particularly in machine learning, in recent years, thanks to their ability to address very large scale empirical risk minimization problems that are ubiquitous in machine learning, a task that is often challenging for other algorithms such as interior point methods. The randomized (dual) coordinate version of first order methods samples one data point and updates the objective function at each time step, which avoids computation of the full gradient and pushes the speed to a higher level. Related methods have been implemented in various software packages <cit.>. In particular, the randomized dual coordinate method considers the following problem.
min_w∈Ω F(w) :=1/n∑_i=1^n f_i (w)+λ g(w)
= f(w)+λ g(w),
where f_i(w) is a convex loss function of each sample and g(w) is the regularization, Ω is a convex compact set.
Instead of directly solving the primal problem, it considers the dual problem
D(α)=1/n∑_i=1^n-ψ_i^* (-α_i)-λ g^*( 1/λ n∑_i=1^nX_iα_i ),
where it is assumed that the loss function f_i(w) has the form ψ_i(X_i^Tw). <cit.> considered this dual form and proved linear convergence of the stochastic dual coordinate ascent method (SDCA) when g(w)=w_2^2. They further extended the result to a general form, which allows the regularizer g(w) to be a general strongly convex function <cit.>. <cit.> proposed AdaSDCA, an adaptive variant of SDCA, which allows the method to adaptively change the probability distribution over the dual variables through the iterative process. Their experimental results outperform the non-adaptive methods. <cit.> consider a primal-dual framework and extend SDCA to the non-strongly convex case with a sublinear rate. <cit.> further improve the convergence speed using a novel non-uniform sampling that selects each coordinate with a probability proportional to the square root of its smoothness parameter. Other acceleration techniques <cit.>, as well as mini-batch and distributed variants of the coordinate method <cit.>, have been studied in the literature. See <cit.> for a review of the coordinate method.
Our goal is to investigate how to extend SDCA to non-convex statistical problems. Non-convex optimization problems have attracted fast-growing attention due to the rise of numerous applications, notably non-convex M-estimators (e.g., SCAD <cit.>, MCP <cit.>), deep learning <cit.> and robust regression <cit.>. <cit.> proposes a dual free version of SDCA and proves its linear convergence, which addresses the case where each individual loss function f_i(w) may be non-convex but their sum f(w) is strongly convex. This extends the applicability of SDCA.
From a technical perspective, due to non-convexity of f_i(w), the paper avoids explicitly using its dual (hence the name “dual free”), by introducing pseudo-dual variables.
In this paper, we consider using SDCA to solve M-estimators that are not strongly convex, or even non-convex. We show that under the restricted strong convexity condition, SDCA converges linearly. This setup includes well known formulations such as Lasso, group Lasso, logistic regression with ℓ_1 regularization, linear regression with SCAD regularizer <cit.>, and corrected Lasso <cit.>, to name a few. This significantly improves upon existing theoretical results <cit.>, which only established sublinear convergence for convex objectives.
To this end, we first adapt SDCA <cit.> into a generalized dual free form in Algorithm <ref>. This is because to apply SDCA in our setup, we need to introduce a non-convex term and thus demand a dual free analysis.
We remark that the theory of dual free SDCA established in <cit.> does not apply to our setup for the following reasons:
In <cit.>, F(w) needs to be strongly convex, and thus the result does not apply to Lasso or non-convex M-estimators.
* <cit.> only studies the special case where g(w)=1/2w_2^2, while the M-estimators we consider include non-smooth regularization such as ℓ_1 or ℓ_1,2 norm.
We illustrate the update rule of Algorithm <ref> using an example. While our main focus is on non-strongly convex problems, we start with a strongly convex example to obtain some intuition. Suppose f_i(w)=1/2(y_i-w^T x_i)^2 and g(w)=1/2w_2^2+ w_1. It is easy to see that ∇ f_i(w)= x_i(w^T x_i-y_i) and w^t=argmin_w∈Ω1/2w-v^t_2^2+ w_1.
It is then clear that to apply SDCA, g(w) needs to be strongly convex, since otherwise the proximal step w^t=argmax_w∈Ω w^Tv^t-w_1 becomes ill-defined: w^t may be infinite (if Ω is unbounded) or not unique. This observation motivates the following preprocessing step: if g(·) is not strongly convex, for example in Lasso where f_i(w)=1/2(y_i-w^T x_i)^2 and g(w)=w_1, we redefine the formulation by adding a strongly convex term to g(w) and subtracting it from f(w).
and apply Algorithm <ref> on 1/n+1∑_i=1^n+1ϕ_i(w)+λ̃g̃(w) (which is equivalent to Lasso), where the value of λ̃ will be specified later. Our analysis is thus focused on this alternative representation. The main challenge arises from the non-convexity of the newly defined ϕ_n+1, which precludes the use of the dual method as in <cit.>. While in <cit.>, a dual free SDCA algorithm is proposed and analyzed, the results do not apply to out setting for reasons discussed above.
Our contributions are two-fold. 1. We prove linear convergence of SDCA for a class of problems that are not strongly convex or even non-convex, making use of the concept of restricted strong convexity (RSC). To the best of our knowledge, this is the first work to prove linear convergence of SDCA in this setting, which includes several statistical models such as Lasso, group Lasso, logistic regression, linear regression with SCAD regularization, and corrected Lasso, to name a few. 2. As a byproduct, we derive a dual free form of SDCA that extends the work of <cit.> to account for more general regularization g(w).
Related work. <cit.> prove linear convergence of the batch gradient and composite gradient methods for Lasso and the low rank matrix completion problem using the RSC condition. <cit.> extend this to a class of non-convex M-estimators. In spirit, our work can be thought of as a stochastic counterpart. Recently, <cit.> consider SVRG and SAGA in a similar setting to ours, but they look at the primal problem. <cit.> considers dual free SDCA, but the analysis does not apply to the case where F(·) is not strongly convex. Similarly, <cit.> consider the non-strongly convex setting in SVRG^++, where F(w) is convex and each individual f_i(w) may be non-convex, but only establish sub-linear convergence. Recently, <cit.> consider SVRG with a zero-norm constraint and prove linear convergence for this specific formulation. In comparison, our results hold more generally, covering not only the sparsity model but also corrected Lasso with noisy covariates, the group sparsity model, and beyond.
§ PROBLEM SETUP
In this paper we consider two setups, namely, (1) convex but not strongly convex F(w) and (2) non-convex F(w).
For the convex case, we consider
min_g(w)≤ρ F(w) :=f(w)+λ
g(w)
:=1/n∑_i=1^n f_i (w)+λ g(w),
where ρ>0 is some pre-defined radius, f_i(w) is the loss function for sample i, and g(w) is a norm.
Here we assume each f_i(w) is convex and L_i-smooth.
For the non-convex F(w) we consider
min_d_λ(w)≤ρF(w): =f(w)+d_λ,μ(w)
=1/n∑_i=1^n f_i(w)+d_λ,μ(w) ,
where ρ>0 is some pre-defined radius, f_i(w) is convex and L_i smooth, and d_λ,μ(w) is a non-convex regularizer depending on a tuning parameter λ and a parameter μ explained in section <ref>. This M-estimator includes a side constraint depending on d_λ(w), which needs to be a convex function admitting the lower bound d_λ(w)≥w_1, and which is closely tied to d_λ,μ(w); more details are deferred to section <ref>.
We list some examples that belong to these two setups.
* Lasso: F(w)=∑_i=1^n1/2(y_i-w^Tx_i)^2+λw_1.
* Logistic regression with ℓ_1 penalty:
F(w)=∑_i=1^nlog (1+exp (-y_ix_i^Tw))+λw_1.
* Group Lasso F(w)=∑_i=1^n1/2(y_i-w^Tx_i)^2+λw_1,2.
* Corrected Lasso <cit.>:
F(w)=∑_i=1^n1/2n (⟨ w,x_i ⟩-y_i)^2-1/2w^TΣ w+λw_1, where Σ is some positive definite matrix.
* linear regression with SCAD regularizer <cit.>: F(w)=∑_i=1^n1/2n (⟨ w,x_i ⟩-y_i)^2+SCAD(w).
The first three examples belong to first setup while the last two belong to the second setup.
§.§ Restricted Strong Convexity
Restricted Strong Convexity (RSC) was proposed in <cit.> and has been explored in several other works <cit.>. We say the loss function f(w) satisfies the RSC condition with curvature κ and tolerance parameter τ with respect to the norm g(w) if
Δ f(w_1,w_2)≜ f(w_1)-f(w_2)-⟨∇ f(w_2),w_1-w_2 ⟩
≥κ/2w_1-w_2_2^2-τ g^2(w_1-w_2).
When f(w) is γ strongly convex, it is RSC with κ=γ and τ=0. However, in many cases, f(w) may not be strongly convex, especially in the high dimensional setting where the ambient dimension p>n. On the other hand, RSC is easier to satisfy. Take Lasso as an example: under some mild conditions, it is shown that <cit.>
Δ f(w_1,w_2)≥ c_1 w_1-w_2_2^2-c_2log p/nw_1-w_2_1^2,
where c_1, c_2 are some positive constants. Besides Lasso, the RSC condition holds for a large range of statistical models including log-linear model, group sparsity model, and low-rank model. See <cit.> for more detailed discussions.
§.§ Assumption on convex regularizer g(w)
Decomposability is the other ingredient needed to analyze the algorithm.
A regularizer is decomposable with respect to a pair of subspaces 𝒜⊆ℬ if
g(α+β)=g(α)+g(β) for all α∈𝒜, β∈ℬ^⊥,
where ⊥ means the orthogonal complement.
A concrete example is ℓ_1 regularization for sparse vectors supported on a subset S. We define the subspace pairs with respect to the subset S⊂{1,...,p},
𝒜(S)={w∈ℝ^p| w_j=0 for all j∉ S } and ℬ(S)=𝒜(S). The decomposability is thus easy to verify. Other widely used examples include non-overlap group norms such as·_1,2, and the nuclear norm | ·|_* <cit.>. In the rest of the paper, we denote w_ℬ as the projection of w on the subspace ℬ.
§.§.§ Subspace compatibility constant
For any subspace 𝒜 of ℝ^p, the subspace compatibility constant with respect to the pair (g,·_2) is given by
Ψ (𝒜)=sup_u∈𝒜\{0}g(u)/u_2.
That is, it is the Lipschitz constant of the regularizer restricted in 𝒜. For example, for the above-mentioned sparse vector with cardinality s, Ψ(𝒜)=√(s) for g(u)=u_1.
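One can also estimate Ψ(𝒜) numerically by sampling directions in the subspace; for the s-sparse subspace and g(·)=·_1 the supremum √s follows from Cauchy-Schwarz and is attained at vectors with equal-magnitude coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
p, s = 50, 5
S = np.arange(s)                          # assumed support of the subspace A(S)
ratios = []
for _ in range(100000):
    u = np.zeros(p)
    u[S] = rng.standard_normal(s)         # random direction inside A(S)
    ratios.append(np.abs(u).sum() / np.sqrt(u @ u))
print(max(ratios), np.sqrt(s))            # the maximum ratio approaches sqrt(s)
```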
§.§ Assumption on non-convex regularizer d_λ,μ (w)
We consider regularizers that are separable across coordinates, i.e., d_λ,μ(w)=∑_j=1^pd̅_λ,μ (w_j).
Besides the separability, we make further assumptions on the univariate function d̅_λ,μ (t):
* d̅_λ,μ(·) satisfies d̅_λ,μ(0)=0 and is symmetric about zero (i.e., d̅_λ,μ(t)=d̅_λ,μ(-t)).
* On the nonnegative real line, d̅_λ,μ (·) is nondecreasing.
* For t>0, d̅_λ,μ (t)/t is nonincreasing in t.
* d̅_λ,μ(·) is differentiable for all t≠0 and subdifferentiable at t=0, with lim_t→0^+d̅_λ,μ'(t)=λ L_d.
* d̅_λ(t):= (d̅_λ,μ(t)+μ/2t^2)/λ is convex.
For instance, SCAD satisfies these assumptions:
SCAD_λ,ζ(t)=
λ |t|, for |t|≤λ,
-(t^2-2ζλ|t|+λ^2)/(2(ζ-1)), for λ<|t|≤ζλ,
(ζ+1)λ^2/2, for |t|>ζλ,
where ζ>2 is a fixed parameter. It satisfies the assumptions with L_d=1 and μ=1/(ζ-1) <cit.>.
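The penalty and a numerical check of Assumption 5 are sketched below (ζ=3.7, the standard choice, is assumed here for illustration).

```python
import numpy as np

def scad(t, lam, zeta):
    """SCAD penalty (elementwise), zeta > 2."""
    a = np.abs(t)
    mid = -(a**2 - 2*zeta*lam*a + lam**2) / (2*(zeta - 1))
    return np.where(a <= lam, lam*a,
                    np.where(a <= zeta*lam, mid, (zeta + 1)*lam**2 / 2))

lam, zeta = 0.5, 3.7
mu = 1.0 / (zeta - 1)                            # weak-convexity parameter
ts = np.linspace(-5, 5, 4001)
d = (scad(ts, lam, zeta) + 0.5*mu*ts**2) / lam   # Assumption 5: should be convex
assert np.all(np.diff(d, 2) >= -1e-9)            # nonnegative second differences
```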
§.§ Applying SDCA
§.§.§ Convex F(w)
Following a similar line as we did for Lasso, to apply the SDCA algorithm, we define
ϕ_i(w)=n+1/n f_i(w) for i=1,...,n , ϕ_n+1(w)=-λ̃ (n+1)/2w_2^2 , g̃(w)= 1/2w_2^2+λ/λ̃g(w).
Correspondingly, the new smoothness parameters are L̃_i=n+1/nL_i for i=1,...,n and L̃_n+1=λ̃(n+1). Problem (<ref>) is thus equivalent to the following
min_w∈Ω 1/1+n∑_i=1^n+1ϕ_i(w)+ λ̃g̃(w).
This enables us to apply Algorithm <ref> to the problem with ϕ_i, g̃, L̃_i, λ̃. In particular, while ϕ_n+1 is not convex, g̃ is still convex (1-strongly convex). We exploit this property in the proof and define g̃^*(v)=max_w∈Ω⟨ w,v⟩-g̃(w), where Ω is a convex compact set. Since g̃(w) is 1-strongly convex, g̃^*(v) is 1-smooth <cit.>.
§.§.§ Non-convex F(w)
Similarly, we define
ϕ_i(w)=n+1/n f_i(w) for i=1,...,n,
ϕ_n+1(w)=- (λ̃+μ) (n+1)/2w_2^2,
g̃(w)= 1/2w_2^2+λ/λ̃d_λ(w),
and then apply Algorithm <ref> on
min_w∈Ω 1/1+n∑_i=1^n+1ϕ_i(w)+ λ̃g̃(w).
The update rule of proximal step for different g(·),d_λ(·) (such as SCAD and MCP) can be found in <cit.>.
§ THEORETICAL RESULT
In this section, we present the main theoretical results, and some corollaries that instantiate the main results in several well known statistical models.
§.§ Convex F(w)
To begin with, we define several terms related to the algorithm.
* w^* is true unknown parameter. g^*(·) is the dual norm of g(·). Conjugate function g̃^*(v)=max_w∈Ω⟨ w,v ⟩-g̃(w), where Ω={ w| g(w)≤ρ}.
* ŵ is the optimal solution to problem <ref>, and we assume ŵ is in the interior of Ω w.l.o.g. by choosing Ω large enough.
* (ŵ,v̂) is an optimal solution pair satisfying g̃(ŵ)+g̃^*(v̂)=⟨ŵ,v̂⟩.
* A_t=∑_j=1^n+11/q_ja_j^t-â_j_2^2, where â_j=-∇ϕ_j(ŵ). B_t=2(g̃^*(v^t)-⟨∇g̃^*(v̂),v^t-v̂⟩-g̃^*(v̂)).
We remark that our definition on the first potential A_t is the same as in <cit.>, while the second one B_t is different. If g̃ (w)=1/2w_2^2, our definition on B_t reduces to that in <cit.>, i.e., w^t-ŵ_2^2. To see this, when g̃ (w)=1/2w_2^2, g̃^*(v)=1/2v_2^2 and v^t=w^t, v̂=ŵ. We then define another potential C_t, which is a combination of A_t and B_t. C_t=η/(n+1)^2 A_t +λ̃/2 B_t. Notice that using smoothness of g̃^*(·) and Lemma <ref> and <ref> in the appendix, it is not hard to show B_t≥ w^t-ŵ_2^2. Thus if C_t converges, so does w^t-ŵ_2^2.
For notational simplicity, we define the following two terms used in the theorem.
* Effective RSC parameter: κ̃=κ-64τΨ^2 (ℬ).
* Tolerance: δ=c_1τδ_stat^2, where δ_stat^2=(Ψ(ℬ)ŵ-w^* _2+g (w^*_𝒜^⊥))^2, c_1 is a universal positive constant.
Assume each f_i(w) is L_i smooth and convex, f(w) satisfies the RSC condition with parameter (κ,τ), w^* is feasible, the regularizer is decomposable w.r.t ( 𝒜, ℬ ) such that κ̃≥ 0, and the Algorithm <ref> runs with η≤min{1/16 (λ̃+L̅),1/4λ̃ (n+1)}, where L̅=1/n∑_i=1^n L_i, λ̃ is chosen such that 0<λ̃≤κ̃. If we choose the regularization parameter λ such that λ≥max(2g^*(∇ f(w^*)), c τρ) where c is some universal positive constant, then we have
𝔼(C_t)≤ (1-ηλ̃)^t C_0,
until F(w^t)-F(ŵ)≤δ, where the expectation is for the randomness of sampling of i in the algorithm.
Some remarks are in order for interpreting the theorem.
* In several statistical models, the requirement of κ̃> 0 is easy to satisfy under mild conditions. For instance, in Lasso we have Ψ^2 (ℬ)=s. τ≃log p/n, κ=1/2 if the feature vector x_i is sampled from N(0,I). Thus, if 128 slog p/n≤ 1, we have κ̃≥1/4.
* In some models, we can choose the pair of subspace (𝒜, ℬ) such that w^*∈𝒜 and thus δ=c_1 τΨ^2(ℬ) ŵ-w^*_2^2. In Lasso, as we mentioned above τ≃log p/n, thus δ≃ c_1s log p/nŵ-w^*_2^2, i.e., this tolerance is dominated by statistical error if s is small and n is large.
We know B_t≥w^t-ŵ_2^2 and A_t≥ 0, thus C_t≥λ̃/2w^t-ŵ_2^2; hence, if C_t converges, so does w^t-ŵ_2^2.
When F(w^t)-F(ŵ)≤δ, using Lemma <ref> in the supplementary material, it is easy to get w^t-ŵ_2^2≤4δ/κ̃.
Combining these remarks, Theorem <ref> states that the optimization error decreases geometrically until it achieves the tolerance δ/κ̃, which is dominated by the statistical error ŵ-w^*_2^2 and thus can be ignored from the statistical viewpoint.
If g(w) in Problem (<ref>) is indeed 1-strongly convex, we have the following proposition, which extends dual-free SDCA <cit.> into the general regularization case. Notice we now directly apply the Algorithm <ref> on Problem (<ref>) and change the definitions of A_t and B_t correspondingly. In particular, A_t=∑_j=1^n1/q_ja_j^t-â_j_2^2, B_t=2(g^*(v^t)-⟨∇ g^*(v̂),v^t-v̂⟩-g^*(v̂)), where â_j=-∇ f_j(ŵ). C_t is still defined in the same way, i.e., C_t=η/n^2 A_t +λ/2 B_t.
Suppose each f_i (w) is L_i smooth, f(w) is convex, g(w) is 1- strongly convex, ŵ is the optimal solution of Problem (<ref>), and L̅=1/n∑_i=1^nL_i. Then we have:
(I) If each f_i(x) is convex, we run the Algorithm
<ref> with η≤min{1/4L̅,1/4λ n}, then
𝔼(C_t)≤ (1-ηλ )^t C_0. (II) Otherwise, we run the Algorithm <ref> with η≤min{λ/4L̅^2,1/4λ n}, and 𝔼(C_t)≤ (1-ηλ )^t C_0. Note that B_t≥w^t-ŵ_2^2, A_t≥ 0, thus w^t-ŵ_2^2 decreases geometrically.
In the following we present several corollaries that instantiate Theorem <ref> with several concrete statistical models. This essentially requires to choose appropriate subspace pair (𝒜,ℬ) in these models and check the RSC condition.
§.§.§ Sparse linear regression
Our first example is Lasso, where f_i(w)=1/2(y_i-w^Tx_i)^2 and g(w)=w_1. We assume each feature vector x_i is generated from Normal distribution N(0,Σ) and the true parameter w^* ∈ℝ ^p is sparse with cardinality s. The observation y_i is generated by y_i=(w^*)^T x_i+ξ_i, where ξ_i is a Gaussian noise with mean 0 and variance σ^2. We denote the data matrix by X∈ℝ^n× p and X_j is the jth column of X. Without loss of generality, we assume X is column normalized, i.e., X_j_2/√(n)≤ 1 for all j=1,2,...,p. We denote σ_min (Σ) as the smallest eigenvalue of Σ, and ν (Σ)=max_i=1,2,..,pΣ_ii.
Assume w^* is the true parameter supported on a subset with cardinality at most s, and we choose the parameter λ,λ̃ such that λ≥max(6σ√(log p /n), c_1 ρν (Σ) log p/n) and 0< λ̃≤κ̃ hold, where κ̃=1/2σ_min(Σ)-c_2ν (Σ) s log p/n, where c_1,c_2 are some universal positive constants. Then we run the Algorithm <ref> with η≤min{1/16(λ̃ +L̅),1/4λ̃ (n+1)} and have
𝔼(C_t)≤ (1-ηλ̃)^t C_0
with probability at least 1-exp(-3log p)-exp (-c_3 n),
until F(w^t)-F(ŵ)≤δ, where δ=c_4 ν(Σ) slog p /nŵ-w^*_2^2. c_1,c_2,c_3,c_4 are some universal positive constants.
The requirement λ≥ 6σ√(log p /n) is documented in the literature <cit.> to ensure that Lasso is statistically consistent. And λ≥ c_1 ρν (Σ) log p/n is needed for fast convergence of optimization algorithms, which is similar to the condition proposed in <cit.> for the batch optimization algorithm. When slog p/n=o(1), which is necessary for statistical consistency of Lasso, we have 1/2σ_min (Σ)-c_2 ν (Σ) s log p/n≥ 0, which guarantees the existence of λ̃. Also notice that under this condition, δ= c_3ν(Σ)slog p/nŵ-w^*_2^2 is of a lower order than ŵ-w^*_2^2. Using remark 3 of Theorem <ref>, we have w^t-ŵ_2^2≤4δ/κ̃, which is dominated by the statistical error ŵ-w^*_2^2 and hence can be ignored from the statistical perspective. To sum up, Corollary <ref> states that the optimization error decreases geometrically until it achieves the statistical limit of Lasso.
§.§.§ Group Sparsity Model
<cit.> introduce the group Lasso to allow predefined groups of covariates to be selected into or out of a model together. The most commonly used regularizer to encourage group sparsity is ·_1,2. In the following, we define group sparsity formally. We assume the groups are disjoint, i.e., 𝒢={G_1,G_2,...,G_N_𝒢} and G_i ∩ G_j=∅ for i≠ j. The regularizer is w_𝒢,q≜∑_g=1^N_𝒢w_G_g_q. When q=2, it reduces to the commonly used group Lasso <cit.>; another popular case is q=∞ <cit.>. We require the following condition, which generalizes the column normalization condition from the Lasso case. Given a group G of size m and X_G∈ℝ^n× m, the associated operator norm ||| X_G_i|||_q → 2≜max_w_q=1X_Gw_2 satisfies
||| X_G_i|||_q→ 2/√(n)≤ 1 for all i=1,2,...,N_𝒢.
This condition reduces to the column normalization condition when each group contains only one feature (i.e., Lasso).
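As a small illustration (function names are ours), the grouped norm and the q=2 operator norm from the condition above can be computed as follows; for q=2 the operator norm |||X_G|||_2→2 is just the largest singular value of X_G.

import numpy as np

def group_norm(w, groups, q=2):
    # ||w||_{G,q} = sum over disjoint index groups g of ||w_g||_q
    return sum(np.linalg.norm(w[idx], ord=q) for idx in groups)

def op_norm_2_to_2(X_G):
    # |||X_G|||_{2->2}: largest singular value of the group submatrix
    return np.linalg.svd(X_G, compute_uv=False)[0]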
We now define the subspace pair (𝒜,ℬ) in the group sparsity model. For a subset S_𝒢⊆{1,...,N_𝒢} with cardinality s_𝒢= |S_𝒢|, we define the subspace
𝒜 (S_𝒢)={w|w_G_i=0 for all i∉ S_𝒢} ,
and 𝒜=ℬ.
The orthogonal complement is
ℬ^⊥(S_𝒢) = {w|w_G_i=0 for all i∈ S_𝒢}.
We can easily verify that
α +β_𝒢,q=α_𝒢,q+β_𝒢,q,
for any α∈𝒜(S_𝒢) and β∈ℬ^⊥ (S_𝒢).
In the following corollary, we use q=2, i.e., group Lasso, as an example. We assume the observation y_i is generated by y_i=x_i^T w^*+ξ_i, where x_i∼ N(0,Σ), and ξ_i∼ N(0,σ^2).
Assume w∈ℝ^p and each group has m parameters, i.e., p=m N_𝒢. Denote by s_𝒢 the cardinality of non-zero group, and we choose parameters λ, λ̃ such that
λ ≥max(4σ (√(m/n)+√(log N_𝒢/n)), c_1ρσ_2(Σ) (√(m/n)+√(3 log N_𝒢/n) )^2);
0 < λ̃≤κ̃, κ̃=σ_1(Σ)-c_2σ_2(Σ)s_𝒢 (√(m/n)+√(3 log N_𝒢/n) )^2;
where σ_1 (Σ) and σ_2 (Σ) are positive constants depending only on Σ. If we run Algorithm 1 with η≤min{1/16(λ̃ +L̅),1/4λ̃ (n+1)}, then we have
𝔼(C_t)≤ (1-ηλ̃)^t C_0
with probability at least 1-2exp (-2 log N_𝒢)-c_2 exp(-c_3n) ,
until F(w^t)-F(ŵ)≤δ, where δ=c_3σ_2 (Σ)s_𝒢(√(m/n)+√(3 log N_𝒢/n))^2 ŵ-w^*_2^2.
We offer some discussion to interpret the corollary.
To satisfy the requirement κ̃≥ 0, it suffices to have
s_𝒢(√(m/n)+√(3 log N_𝒢/n))^2=o(1).
This is a mild condition, as it is needed to guarantee the statistical consistency of group Lasso <cit.>. Notice that the condition is easily satisfied when s_𝒢 and m are small. Under this same condition, since
δ=c_3σ_2(Σ)s_𝒢(√(m/n)+√(3 log N_𝒢/n))^2 ŵ-w^*_2^2,
we conclude that δ is dominated by ŵ-w^*_2^2.
Again, this implies that the optimization error decreases geometrically up to the scale o(ŵ-w^*_2^2), which is dominated by the statistical error of the model.
§.§.§ Extension to generalized linear model
We consider the generalized linear model of the following form,
min_w∈Ω1/n∑_i=1^n (Φ (w ,x_i)-y_i⟨ w,x_i ⟩)+λw_1,
which covers such cases as Lasso (where Φ (θ)=θ^2/2) and logistic regression (where Φ(θ)=log (1+exp (θ))). In this model, we have
Δ f(w_1,w_2)=1/n∑_i=1^nΦ” (⟨ w_t,x_i ⟩) ⟨ x_i, w_1-w_2 ⟩^2,
where w_t=t w_1+(1-t)w_2 for some t∈ [0,1 ].
The RSC condition thus is equivalent to:
1/n∑_i=1^nΦ” (⟨ w_t,x_i ⟩)⟨ x_i, w_1-w_2 ⟩ ^2
≥ κ/2w_1-w_2_2^2-τ g^2 (w_1-w_2) for w_1,w_2∈Ω.
Here we require Ω to be a bounded set <cit.>. This requirement is essential since in some generalized linear models Φ”(θ) approaches zero as θ diverges. For instance, in logistic regression, Φ”(θ)=exp (θ)/(1+exp(θ))^2, which tends to zero as θ→∞. For a broad class of generalized linear models, RSC holds with τ=clog p/n, thus the same result as for Lasso holds, modulo a change of constants.
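To see numerically why Ω must be bounded, one can evaluate Φ and Φ” for logistic regression: the curvature Φ”(θ) decays to zero for large |θ|, so restricted curvature can only be guaranteed on a bounded set. This is an illustrative sketch with our own function names.

import numpy as np

def logistic_phi(theta):
    # Phi(theta) = log(1 + exp(theta)), evaluated stably
    return np.logaddexp(0.0, theta)

def logistic_phi_pp(theta):
    # Phi''(theta) = sigma(theta)*(1 - sigma(theta)) -> 0 as |theta| -> inf
    s = 0.5 * (1.0 + np.tanh(0.5 * np.asarray(theta, dtype=float)))
    return s * (1.0 - s)

print(logistic_phi_pp(np.array([0.0, 5.0, 50.0])))  # curvature decays to 0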
§.§ Non-convex F(w)
In the non-convex case, we assume the following RSC condition:
Δ f(w_1,w_2)≥κ/2w_1-w_2_2^2-τw_1-w_2_1^2
with τ=θlog p/n for some constant θ. We again define the potentials A_t, B_t and C_t in the same way as in the convex case. The main difference is that now ϕ_n+1=-(λ̃+μ)(n+1)/2w_2^2 and the effective RSC parameter κ̃ is different. The notation needed to state the theorem is listed below:
* w^* is the unknown true parameter that is s-sparse. Conjugate function g̃^*(v)=max_w∈Ω⟨ w,v ⟩-g̃(w), where Ω={ w| d_λ(w)≤ρ}. Note Ω is convex due to convexity of d_λ(w).
* ŵ is the global optimum of Problem (<ref>), we assume it is in the interior of Ω w.l.o.g.
* (ŵ,v̂) is an optimal solution pair satisfying g̃(ŵ)+g̃^*(v̂)=⟨ŵ,v̂⟩.
* A_t=∑_j=1^n+11/q_ja_j^t-â_j_2^2, B_t=2(g̃^*(v^t)-⟨∇g̃^*(v̂),v^t-v̂⟩-g̃^*(v̂)), C_t=η/(n+1)^2 A_t +λ̃/2 B_t, where â_j=-∇ϕ_j(ŵ).
* Effective RSC parameter: κ̃=κ-μ-64τ s, where τ=θlog p/n for some constant θ. Tolerance: δ=c_1τ s ŵ-w^*_2^2, where c_1 is a universal positive constant.
Suppose w^* is s sparse, ŵ is the global optimum of Problem (<ref>). Assume each f_i(w) is L_i smooth and convex, f(w) satisfies the RSC condition with parameter (κ,τ), where τ=θlog p/n for some constant θ, d_λ,μ (w) satisfies the Assumption in section <ref>, λ L_d≥max{ cρθlog p /n , 4 ∇ f (w^*)_∞} where c is some universal positive constant, and κ̃-μ≥λ̃> 0, the Algorithm <ref> runs with η≤min{1/16 (λ̃+L̅),1/4λ̃ (n+1)}, where L̅=1/n∑_i=1^n L_i, then we have
𝔼(C_t)≤ (1-ηλ̃)^t C_0,
until F(w^t)-F(ŵ)≤δ, where the expectation is for the randomness of sampling of i in the algorithm.
Some remarks are in order to interpret the theorem.
* We require λ̃ to satisfy 0<λ̃≤κ-2μ-64θlog p/n s. Thus if the non-convex parameter μ is too large, we cannot find such a λ̃.
* Note that δ=c_1sθlog p/nŵ-w^*_2^2 is dominated by ŵ-w^*_2^2 when the model is sparse and n is large. As in the convex case, by B_t≥w^t-ŵ_2^2, the theorem says the optimization error decreases linearly up to the fundamental statistical error of the model.
The first non-convex model we consider is linear regression with SCAD. Here f_i(w)=1/2 (⟨ w, x_i ⟩-y_i)^2 and d_λ,μ(·) is SCAD(·) with parameters λ and ζ. The data (x_i,y_i) are generated similarly to the Lasso case.
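For reference, below is a sketch of the standard SCAD penalty of Fan and Li, which we assume instantiates the d_λ,μ(·) used here (its second derivative is bounded below by -1/(ζ-1), matching μ=1/(ζ-1)); the implementation is illustrative.

import numpy as np

def scad_penalty(w, lam, zeta):
    # standard SCAD penalty applied coordinatewise, zeta > 2
    t = np.abs(np.asarray(w, dtype=float))
    small = lam * t                                        # |t| <= lam
    mid = (2 * zeta * lam * t - t**2 - lam**2) / (2 * (zeta - 1))
    big = np.full_like(t, lam**2 * (zeta + 1) / 2)         # |t| > zeta*lam
    return np.where(t <= lam, small,
                    np.where(t <= zeta * lam, mid, big)).sum()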
Suppose we have n i.i.d. observations {(x_i,y_i)}, w^* is s-sparse, ŵ is the global optimum, and we choose λ and λ̃ such that λ≥max{ c_1ρν (Σ) log p /n , 12σ√(log p/n)} and κ̃ -1/ζ-1≥λ̃>0. If we run the algorithm with η≤min{1/16 (λ̃+L̅),1/4λ̃ (n+1)}, where L̅=1/n∑_i=1^n L_i, then we have
𝔼(C_t)≤ (1-ηλ̃)^t C_0,
with probability at least 1-exp(-3log p)-exp (-c_2 n), until F(w^t)-F(ŵ)≤δ, where κ̃=1/2σ_min(Σ)-c_3 ν(Σ) s log p/n-1/ζ-1 and δ=c_4ν (Σ)slog p/nŵ-w^*_2^2. Here c_1, c_2, c_3, c_4 are universal positive constants.
If the non-convex parameter 1/ζ-1 is small, s is small, and n is large, then we can choose a positive λ̃ that guarantees convergence of the algorithm. Under this setting, the tolerance δ is dominated by ŵ-w^*_2^2.
The second example is the corrected Lasso. In many applications, the covariates may be observed subject to corruption. In this section, we consider the corrected Lasso proposed by <cit.>. Suppose data are generated according to a linear model y_i=x_i^Tw^*+ξ_i, where ξ_i is a random zero-mean sub-Gaussian noise with variance σ^2, and each data point x_i is i.i.d. sampled from a zero-mean normal distribution, i.e., x_i∼ N(0,Σ). We denote the data matrix by X∈ℝ^n× p, the smallest eigenvalue of Σ by σ_min (Σ), and the largest eigenvalue of Σ by σ_max (Σ).
The observation z_i of x_i is corrupted by additive noise; in particular, z_i=x_i+ς_i, where ς_i∈ℝ^p is a random vector independent of x_i, say zero-mean with known covariance matrix Σ_ς. Define Γ̂=Z^TZ/n-Σ_ς and γ̂=Z^Ty/n. Our goal is to estimate w^* based on y_i and z_i (but not x_i which is not observable), and the corrected Lasso proposes to solve the following:
ŵ∈min_w_1≤ρ1/2 w^T Γ̂ w-γ̂ w+ λw_1.
Equivalently, it solves
min_w_1≤ρ1/2n∑_i=1^n (y_i-w^Tz_i)^2-1/2w^TΣ_ς w +λw_1 .
Notice that due to the term -1/2w^TΣ_ς w, the optimization problem is non-convex.
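A direct transcription of this objective into code may help; the following sketch (names ours) evaluates the non-convex corrected Lasso objective from Γ̂ and γ̂ exactly as defined above.

import numpy as np

def corrected_lasso_objective(w, Z, y, Sigma_noise, lam):
    # (1/2) w^T Gamma_hat w - gamma_hat^T w + lam*||w||_1,
    # with Gamma_hat = Z^T Z / n - Sigma_noise and gamma_hat = Z^T y / n
    n = Z.shape[0]
    Gamma_hat = Z.T @ Z / n - Sigma_noise
    gamma_hat = Z.T @ y / n
    return 0.5 * w @ Gamma_hat @ w - gamma_hat @ w + lam * np.abs(w).sum()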
Suppose we are given i.i.d. observations {(z_i,y_i)} from the linear model with additive noise, w^* is s-sparse, and Σ_ς=γ_ς I. Let ŵ be the global optimum. We choose λ≥max{ c_0ρlog p /n , c_1 φ√(log p/n)} and a positive λ̃ such that κ̃-γ_ς≥λ̃>0, where φ=(√(σ_max (Σ))+√(γ_ς))(σ+√(γ_ς)w^*_2), and
κ̃=1/2σ_min(Σ)-c_2 σ_min(Σ)max( (σ_max (Σ)+γ_ς/σ_min(Σ))^2,1 ) slog p/n-γ_ς,
then if we run the algorithm with η≤min{1/16(λ̃ +L̅),1/4λ̃ (n+1)}, we have
𝔼(C_t)≤ (1-ηλ̃)^t C_0,
with probability at least 1-c_3 exp(-c_4 nmin( σ^2_min (Σ)/( σ_max(Σ)+γ_ς)^2,1 ) )-c_5exp(-c_6 log p) , until F(w^t)-F(ŵ)≤δ, where δ=c_7 σ_min(Σ)max( (σ_max (Σ)+γ_ς/σ_min(Σ))^2,1 )slog p/nŵ-w^*_2^2, and c_0 to c_7 are some universal positive constants.
Some remarks of the corollary are in order.
* The result can be easily extended to more general Σ_ς≼γ_ς I.
* The requirement on λ is similar to that of its batch counterpart <cit.>.
* Similar to the setting in Lasso, we need slog p/n=o(1). To ensure the existence of such λ̃, the non-convex parameter γ_ς can not be too large, which is similar to the result in <cit.>.
§ EXPERIMENTAL RESULTS
We report numerical experiments to validate our theoretical findings, namely, that without strong convexity, SDCA still achieves linear convergence under our setup. The experimental setup is similar to that in <cit.>. On both synthetic and real datasets we report results of SDCA and compare with several other algorithms: Prox-SVRG <cit.>, SAGA <cit.>, Prox-SAG (a proximal version of the algorithm in <cit.>), proximal stochastic gradient (Prox-SGD), the regularized dual averaging method (RDA) <cit.>, and the proximal full gradient method (Prox-GD) <cit.>. For the algorithms with a constant learning rate (i.e., Prox-SAG, SAGA, Prox-SVRG, SDCA, Prox-GD), we tune the learning rate over the exponential grid { 2, 2/2^1,...,2/2^12} and choose the one with the best performance. Below are some further remarks.
* Convergence rates of Prox-SGD and RDA are sub-linear according to the analysis in <cit.>. The stepsize of Prox-SGD is set as η_k=η_0/√(k) as suggested in <cit.>, and that of RDA is β_k=β_0 √(k) as suggested in <cit.>. η_0 and β_0 are chosen to attain the best performance among powers of 10.
* <cit.> prove that Prox-GD converges linearly in the setting of our experiment.
* The convergence rate of Prox-SVRG and SAGA is linear in our setting, shown recently in <cit.>.
* To the best of our knowledge, linear convergence of Prox-SAG in our setting has not been established, although in practice it works well.
§.§ Synthetic dataset
In this section we test the above algorithms on Lasso, group Lasso, corrected Lasso and SCAD.
§.§.§ Lasso
We generate the feature vectors x_i∈ℝ^p independently from N(0,Σ), where Σ_ii=1 for i=1,...,p and Σ_ij=b for i≠ j. The response y_i is generated as follows: y_i=x_i^T w^*+ξ_i. Here w^*∈ℝ^p is a sparse vector with cardinality s, whose non-zero entries are ± 1 drawn from the Bernoulli distribution with probability 0.5. The noise ξ_i follows the standard normal distribution. We set λ=0.05 in Lasso and choose λ̃=0.25 in SDCA. In the following experiments, we set p=5000, n=2500 and try different settings of s and b.
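A sketch of this data-generating process could look as follows; we use an exact one-factor construction for the equicorrelated covariance (Σ_ii=1, Σ_ij=b), and all names are ours.

import numpy as np

def make_lasso_data(n=2500, p=5000, s=100, b=0.4, seed=0):
    rng = np.random.default_rng(seed)
    # x = sqrt(b)*z + sqrt(1-b)*g with shared z: cov is b off-diagonal, 1 on it
    z = rng.standard_normal((n, 1))
    X = np.sqrt(b) * z + np.sqrt(1.0 - b) * rng.standard_normal((n, p))
    w_star = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    w_star[support] = rng.choice([-1.0, 1.0], size=s)   # +/-1 w.p. 0.5
    y = X @ w_star + rng.standard_normal(n)             # standard normal noise
    return X, y, w_star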
Figure <ref> presents simulation results on Lasso. In all four settings, SDCA, SVRG and SAG converge at linear rates. When b=0.4 and s=100, SDCA outperforms the other two. When b=0, Prox-GD works, although at a slower rate; when b is nonzero, Prox-GD does not work well due to the large condition number. RDA and SGD converge slowly in all settings because of the large variance in the gradient.
§.§.§ Group Lasso
We report the experimental results on group Lasso in Figure <ref>. As for Lasso, we generate the observations y_i=x_i^T w^*+ξ_i with the feature vectors independently sampled from N(0,Σ), where Σ_ii=1 and Σ_ij=b for i≠ j. The number of non-zero groups is s_𝒢, and the non-zero entries are sampled uniformly from [-1, 1]. In the following experiments, we try different settings of b, group size m and group sparsity s_𝒢.
Similar to the results for Lasso, SDCA, SVRG and SAG perform well in all settings. When m=20 and s_𝒢=20, SDCA outperforms the other two. Prox-GD converges at a linear rate when m=10, s_𝒢=10, b=0 but does not work well in the other three settings. SGD and RDA converge slowly in all four settings.
§.§.§ Corrected Lasso
We generate data as follows: y_i=x_i^Tw^*+ξ_i, where each data point x_i∈ℝ^p is drawn from the normal distribution N(0,I), and the noise ξ_i is drawn from N(0,1). The coefficient vector w^* is sparse with cardinality s, where the non-zero coefficients equal ± 1, generated from the Bernoulli distribution with probability 0.5. We set the covariance matrix Σ_ς=γ_ς I. We choose λ=0.05 in the formulation and λ̃=0.1 in SDCA. The results are presented in Figure <ref>.
In both settings, we have κ̃> λ̃+γ_ς, so according to our theory SDCA converges linearly. In both figures (a) and (b), SDCA, Prox-SVRG, Prox-SAG and Prox-GD converge linearly; SDCA performs better in the second setting. SGD and RDA converge slowly due to the large variance in the gradient.
§.§.§ SCAD
The data are generated in the same way as for Lasso, except that x_i∈ℝ^p is drawn from the normal distribution N(0,2I) to satisfy the requirements on κ̃, γ_ς and λ̃. We set λ=0.05 in the formulation and choose λ̃=0.1 in SDCA. We present the results in Figure <ref> for two different settings of n, p, s, ζ. In both cases, SDCA, Prox-SVRG and Prox-SAG converge linearly with similar performance. Prox-GD also converges linearly but at a slower rate. According to our theory, κ̃≥ 1 and λ̃+μ≤ 0.5 in both cases, so SDCA converges linearly, and the simulation results verify our theory.
§.§ Real dataset
§.§.§ Sparse Classification Problem
In this section, we evaluate the performance of the algorithms when solving the logistic regression with ℓ_1 regularization: min_w∑_i=1^nlog (1+exp (-y_ix_i^Tw))+λw_1.
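For concreteness, a minimal evaluation of this objective (labels assumed to be ±1; names ours) might look as follows.

import numpy as np

def l1_logreg_objective(w, X, y, lam):
    # sum_i log(1 + exp(-y_i * x_i^T w)) + lam * ||w||_1
    margins = y * (X @ w)
    return np.logaddexp(0.0, -margins).sum() + lam * np.abs(w).sum()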
We conduct experiments on two real-world data sets, namely, rcv1 <cit.> and sido0 <cit.>. The regularization parameters are set as λ=2· 10^-5 in rcv1 and λ=10^-4 in sido0, as suggested in <cit.>. For SDCA, we choose λ̃=0.002 and λ̃=0.001 in these two experiments, respectively.
In Figures <ref> and <ref> we report the performance of the different algorithms. In Figure <ref>, Prox-SVRG performs best, closely followed by SDCA, SAGA, and then Prox-SAG. We observe that Prox-GD converges much more slowly, albeit in theory it should converge at a linear rate <cit.>, possibly because its contraction factor is close to one. Prox-SGD and RDA converge slowly due to the variance in the stochastic gradient; their objective gaps remain significant even after 1000 passes over the whole dataset. In Figure <ref>, as before, Prox-SVRG, SAGA, SDCA and Prox-SAG perform well (part of the Prox-SAG curve overlaps with SDCA). On this dataset SAGA performs best, followed by SDCA, Prox-SAG and then Prox-SVRG. The performance of Prox-GD is even worse than Prox-SGD, and RDA converges the slowest.
§.§.§ Sparse Regression Problem
In Figure <ref>, we present the results of Lasso on the IJCNN1 dataset (n=49990, p=22) <cit.> with λ=0.02. SDCA performs best, followed by SAGA, Prox-SAG, SVRG and Prox-GD. Prox-SGD and RDA do not work well.
We apply linear regression with SCAD regularization to the IJCNN1 dataset <cit.> and present the results in Figure <ref>. On this dataset, SAGA and SDCA have almost identical performance, followed by Prox-SVRG, Prox-SAG and Prox-GD. Prox-SGD converges fast at the beginning, but retains a large optimality gap (10^-4). RDA does not work at all.
In Figure <ref>, we consider a group sparse regression problem on the Boston Housing dataset (n=506, p=13) <cit.>. As suggested in <cit.>, to capture the non-linear relationship between variables and response, a polynomial expansion up to the third degree is applied to each feature; in particular, the terms x, x^2 and x^3 are grouped together (see the sketch after this paragraph). We consider the group Lasso model on this problem with λ=0.1. We choose m=2n in SVRG and λ̃=0.1 in SDCA. Figure <ref> shows the objective gap of the various algorithms versus the number of passes over the dataset. Prox-SVRG, SDCA, SAGA and Prox-SAG have almost the same performance. Prox-SGD does not converge: its objective gap oscillates between 0.1 and 1. Both Prox-GD and RDA converge, but at much slower rates.
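The grouped expansion can be sketched as follows, assuming each original feature x contributes the group (x, x^2, x^3); names are ours.

import numpy as np

def cubic_group_expansion(X):
    blocks, groups, col = [], [], 0
    for j in range(X.shape[1]):
        x = X[:, j:j + 1]
        blocks.append(np.hstack([x, x**2, x**3]))   # one group per feature
        groups.append(list(range(col, col + 3)))
        col += 3
    return np.hstack(blocks), groups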
§ CONCLUSION AND FUTURE WORK
In this paper, we adapt SDCA into a dual-free form to solve non-strongly convex and non-convex problems. Under the RSC condition, we prove that this dual-free SDCA converges at a linear rate, covering several important statistical models.
From a high level, our results re-confirm the well-observed phenomenon that statistically easy problems tend to be more computationally friendly. We believe this intriguing fact indicates that there are fundamental relationships between statistics and optimization, and that understanding such relationships may yield deep insights.
§ PROOFS
To begin with, we present a technical lemma that will be used repeatedly in the following proofs.
Suppose the function f(x) is convex and L-smooth; then we have
f(x)-f(y)-⟨∇ f(y),x-y ⟩≥1/2L∇ f(x)-∇ f(y)_2^2.
Define p(x)=f(x)-⟨∇ f(x_0),x⟩. Since ∇ p(x_0)=0, convexity implies that p(x_0) is the minimum value of p(x).
Since f(x) is L smooth, so is p(x), and we have
p(x_0)≤ p(x-1/L∇ p(x))≤ p(x)-1/2L∇ p(x)_2^2.
That is
f(x_0)-⟨∇ f(x_0),x_0 ⟩≤ f(x)-⟨∇ f(x_0),x ⟩-1/2L∇ f(x)-∇ f(x_0)_2^2.
Rearranging the terms, we have
f(x)-f(x_0)-⟨∇ f(x_0),x-x_0 ⟩≥1/2L∇ f(x)-∇ f(x_0)_2^2.
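As a quick numerical sanity check of this bound (illustrative only), one can test it on a smooth convex quadratic f(x)=1/2Ax_2^2, whose smoothness constant is the largest squared singular value of A.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
L = np.linalg.svd(A, compute_uv=False)[0] ** 2     # smoothness constant of f

def f(x):                      # f(x) = ||Ax||^2 / 2, convex and L-smooth
    return 0.5 * np.sum((A @ x) ** 2)

def grad_f(x):
    return A.T @ (A @ x)

x, y = rng.standard_normal(5), rng.standard_normal(5)
lhs = f(x) - f(y) - grad_f(y) @ (x - y)
rhs = np.sum((grad_f(x) - grad_f(y)) ** 2) / (2 * L)
assert lhs >= rhs - 1e-9       # the inequality of the lemma holds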
The following lemma presents a well-known fact on the conjugate of a strongly convex function; we extract it from Theorem 1 of <cit.>.
Define p^*(v)=max_w∈Ω⟨ w,v⟩-p(w), where Ω is a convex compact set, p(w) is a 1-strongly convex function, then p^*(v) is 1-smooth and ∇ p^*(v)=w̅ where w̅=max_w∈Ω⟨ w,v⟩-p(w).
§.§ Proof of results for Convex F(w)
In this section we establish the results for convex F(w), namely, Theorem <ref>.
Recall the problem we want to optimize is
min_w∈Ω F(w):=f(w)+λ g(w)=1/n∑_i=1^n f_i (w)+λ g(w),
where Ω is {w∈ℝ^p | g(w)≤ρ}.
Recall that instead of directly minimizing the above objective function, we consider the following form.
min_w∈Ω F(w):=ϕ(w)+λ̃g̃(w)= 1/1+n∑_i=1^n+1ϕ_i(w)+ λ̃g̃(w).
Also recall that L̃_i is the smoothness parameter of ϕ_i(w) and we define L̃≜1/n+1∑_i=1^n+1L̃_i.
Notice that Algorithm 1 maintains the relation v^t-1=1/λ̃ (n+1)∑_i=1^n+1a_i^t-1, and thus we have:
𝔼(η_i (∇ϕ_i(w^t-1)+a_i^t-1)) =η/n+1∑_i=1^n+1 (∇ϕ_i(w^t-1)+a_i^t-1)
=η (1/n+1∑_i=1^n+1∇ϕ_i(w^t-1)+λ̃ v^t-1)
=η (∇ϕ(w^t-1)+λ̃ v^t-1).
Before we start the proof of the main theorem, we present several technical lemmas. The following two lemmas are similar to their batch counterparts in <cit.>.
Suppose f(w) is convex and g(w) is decomposable with respect to (𝒜,ℬ), g(w^*)≤ρ, if we choose λ≥ 2g^*(∇ f(w^*)), define the error term Δ^*=ŵ-w^*, then we have the following condition holds
g(Δ^*_ℬ^⊥)≤ 3 g(Δ^*_ℬ)+4g(w^*_𝒜^⊥),
which implies g (Δ^*)≤ g(Δ^*_ℬ^⊥)+g(Δ^*_ℬ)≤ 4 g(Δ^*_ℬ)+4g(w^*_𝒜^⊥).
Using the optimality of ŵ, we have
f(ŵ)+λ g(ŵ)-f(w^*)-λ g(w^*)≤ 0.
So we get
λ g(w^*)-λ g(ŵ)≥ f(ŵ)-f(w^*)≥⟨∇ f(w^*), ŵ-w^* ⟩≥ -g^*(∇ f(w ^*)) g (Δ^*),
where the second inequality holds from the convexity of f(w), and the third one holds by Holder's inequality. Using triangle inequality, we have
g(Δ^*)≤ g(Δ^*_ℬ)+g(Δ^*_ℬ^⊥), which leads to
λ g(w^*)-λ g(ŵ)≥ -g^*(∇ f(w ^*)) (g(Δ^*_ℬ)+g(Δ^*_ℬ^⊥) ).
Notice
ŵ=w^*+Δ^*= w^*_𝒜+w^*_𝒜^⊥+Δ^*_ℬ+Δ^*_ℬ^⊥.
Now we obtain
g (ŵ)-g(w^*) (a)≥ g (w^*_𝒜+Δ^*_ℬ^⊥)-g(w^*_𝒜^⊥)-g(Δ^*_ℬ)-g(w^*)
(b)= g (w^*_𝒜)+g (Δ^*_ℬ^⊥)-g(w^*_𝒜^⊥)-g(Δ^*_ℬ)-g(w^*)
(c)≥ g (w^*_𝒜)+g (Δ^*_ℬ^⊥)-g(w^*_𝒜^⊥)-g(Δ^*_ℬ)-g(w^*_𝒜)-g(w^*_𝒜^⊥)
≥ g (Δ^*_ℬ^⊥)-2g(w^*_𝒜^⊥)-g(Δ^*_ℬ),
where (a) and (c) hold from the triangle inequality, and (b) uses the decomposability of g(·).
Substitute the above to (<ref>), and and use the assumption that λ≥ 2g^*(∇ f(w^*)), we obtain
-λ/2 (g(Δ^*_ℬ)+g(Δ^*_ℬ^⊥) )+λ (g (Δ^*_ℬ^⊥)-2g(w^*_𝒜^⊥)-g(Δ^*_ℬ))≤ 0,
which implies
g(Δ^*_ℬ^⊥)≤ 3 g(Δ^*_ℬ)+4g(w^*_𝒜^⊥).
Suppose f(w) is convex and g(w) is decomposable with respect to (𝒜,ℬ). If we choose λ≥ 2g^*(∇ f(w^*)), and suppose there exist a tolerance ξ and a time T such that F(w^t)-F(ŵ)≤ξ for all t> T, then for the error term Δ^t=w^t-w^* we have
g(Δ^t_ℬ^⊥)≤ 3 g(Δ^t_ℬ)+4g(w^*_𝒜^⊥)+ 2min{ξ/λ, ρ},
which implies
g(Δ^t)≤ 4 g(Δ^t_ℬ)+4g(w^*_𝒜^⊥)+ 2 min{ξ/λ,ρ}.
First notice that F(w^t)-F(w^*)≤ξ holds, since F(w^t)-F(ŵ)≤ξ by assumption and F(w^*)≥ F(ŵ).
So we have
f(w^t)+λ g(w^t)-f(w^*)-λ g(w^*)≤ξ.
Following the same steps as in the proof of Lemma <ref>, we have
g(Δ^t_ℬ^⊥)≤ 3 g(Δ^t_ℬ)+4g(w^*_𝒜^⊥)+ 2ξ/λ.
Using the fact that w^* and w^t both belong to Ω={w∈ℝ^p | g(w)≤ρ}, we have g(Δ^t)≤ g (w^*)+g(w^t)≤ 2ρ. This leads to
g(Δ^t_ℬ^⊥)≤ g(Δ^t_ℬ)+2ρ,
by triangle inequality g(Δ^t_ℬ^⊥)≤ g(Δ^t_ℬ)+g(Δ^t).
Combine this with the above result we have
g(Δ^t_ℬ^⊥)≤ 3 g(Δ^t_ℬ)+4g(w^*_𝒜^⊥)+ 2min{ξ/λ,ρ}.
The second statement follows immediately from g(Δ^t)≤ g(Δ^t_ℬ)+g(Δ^t_ℬ^⊥).
Under the same assumption of Lemma <ref>, we have
F(w^t)-F(ŵ)≥(κ/2-32τΨ^2(ℬ ))Δ̂^t _2^2- ϵ^2(Δ^*,𝒜,ℬ),
and
ϕ(ŵ)-ϕ(w^t)-⟨ŵ-w^t,∇ϕ(w^t)⟩
≥[(κ-λ̃/2-32τΨ^2(ℬ ))Δ̂^t _2^2- ϵ^2(Δ^*,𝒜,ℬ)],
where Δ̂^t=w^t-ŵ, ϵ^2(Δ^*,𝒜,ℬ)=2τ (δ_stat+δ)^2, δ=2min{ξ/λ, ρ}, and δ_stat= 8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥).
We begin the proof by establishing a simple fact on Δ̂^t=w^t-ŵ. We adapt the argument in Lemma <ref> (which is on Δ^t) to Δ̂^t:
g(Δ̂^t) ≤ g (Δ^t)+g(Δ^*)
≤ 4 g(Δ^t_ℬ)+4g(w^*_𝒜^⊥)+ 2min{ξ/λ, ρ}+4 g(Δ^*_ℬ)+4g(w^*_𝒜^⊥)
≤ 4 Ψ(ℬ)Δ^t_2+4Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ},
where the first inequality holds from the triangle inequality, the second inequality uses Lemma <ref> and <ref>, the third holds because of the definition of subspace compatibility.
We know
f(w^t)-f(ŵ)-⟨∇ f(ŵ), Δ̂^t ⟩≥κ/2Δ̂^t _2^2-τ g^2(Δ̂^t)
which implies F(w^t)-F(ŵ)≥κ/2Δ̂^t _2^2-τ g^2(Δ̂^t), since ŵ is the optimal solution to the problem <ref> and g(w) is convex.
Notice that
g(Δ̂^t) ≤ 4 Ψ(ℬ)Δ^t_2+4Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ}
≤ 4 Ψ(ℬ)Δ̂^t_2+8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ},
where the second inequality uses the triangle inequality.
Using the inequality (a+b)^2≤ 2 a^2+2b^2, we can upper bound g^2 (Δ̂^t).
g^2 (Δ̂^t)≤ 32Ψ^2(ℬ)Δ̂^t_2^2+2[8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ}]^2.
We now use the above result to rewrite the RSC condition.
Substituting this upper bound into the RSC condition, we have
F(w^t)-F(ŵ)≥(κ/2-32τΨ^2(ℬ ))Δ̂^t _2^2-2τ[8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ}]^2.
Notice by δ=2min{ξ/λ,ρ}, δ_stat= 8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥), and ϵ^2(Δ^*,𝒜,ℬ)=2τ(δ_stat+δ)^2, we obtain
ϵ^2(Δ^*,𝒜,ℬ)=2τ(8Ψ(ℬ)Δ^*_2+8g(w^*_𝒜^⊥)+2min{ξ/λ,ρ})^2.
We thus conclude
F(w^t)-F(ŵ)≥(κ/2-32τΨ^2(ℬ ))Δ̂^t _2^2- ϵ^2(Δ^*,𝒜,ℬ).
Recall ϕ(w)= f(w)-λ̃/2w_2^2, and hence we have
ϕ(ŵ)-ϕ(w^t)-⟨ŵ-w^t,∇ϕ(w^t)⟩
= f(ŵ)-f(w^t)-⟨∇ f(w^t), ŵ-w^t⟩-λ̃/2ŵ-w^t_2^2
≥ κ/2Δ̂^t_2^2-τ g^2 (Δ̂^t)-λ̃/2Δ̂^t_2^2,
where the inequality is due to the RSC condition.
Now we plug in the upper bound of g^2(Δ̂^t) from Equation (<ref>) and rearrange the terms to establish Equation (<ref>).
Recall we define two potentials
A_t =∑_j=1^n+11/q_ia_j^t-â_j_2^2,
B_t =2(g̃^*(v^t)-⟨∇g̃^*(v̂),v^t-v̂⟩-g̃^*(v̂)).
Notice that by Theorem 1 in <cit.>, g̃^* (v) is 1-smooth and w=∇g̃^* (v).
We remark that the potential A_t is defined the same as in <cit.>, while we define B_t differently to handle a general regularizer g(w). When g̃(w)=1/2w_2^2, B_t reduces to v^t-v̂_2^2=w^t-ŵ_2^2, the same as in <cit.>.
Step 1. The first step is to lower bound A_t-1-A_t, in particular, to establish that
𝔼[A_t-1-A_t]=ηλ̃( A_t-1+∑_i=1^n+11/q_i (-u_i-â_i_2^2+(1-β_i)m_i_2^2) ).
This step is indeed the same as in <cit.>, which we present for completeness. Define u_i=-∇ϕ_i(w^t-1), β_i=η_iλ̃ (n+1) and m_i=-u_i+a_i^t-1 for notational simplicity. Suppose coordinate i is picked at time t; then we have
A_t-1-A_t= -1/q_ia_i^t-â_i_2^2+1/q_ia_i^t-1-â_i_2^2
=-1/q_i(1-β_i) (a_i^t-1-â_i)+β_i(u_i-â_i) _2^2+1/q_ia_i^t-1-â_i_2^2
≥ -1/q_i( (1-β_i)a_i^t-1-â_i _2^2+β_i u_i-â_i_2^2-β_i (1-β_i) a_i^t-1-u_i_2^2)
+ 1/q_ia_i^t-1-â_i_2^2
=β_i/q_i(a_i^t-1-â_i_2^2-u_i-â_i_2^2+(1-β_i)m_i_2^2).
Taking expectation on both sides with respect to the random sampling of i at time step t, we establish Equation (<ref>).
Step 2. We now look at the evolution of B_t. In particular, we will prove that
𝔼(B_t-1-B_t)=2⟨ w^t-1-ŵ, η ( ∇ϕ(w^t-1)+λ̃ v^t-1) ⟩- η^2/(n+1)^2∑_i=1^n+11/q_im_i_2^2.
To this end, notice that
1/2(B_t-1-B_t)
=g̃^*(v^t-1)-⟨∇g̃^*(v̂),v^t-1-v̂⟩-g̃^*(v̂)- (g̃^*(v^t)-⟨∇g̃^*(v̂),v^t-v̂⟩-g̃^*(v̂))
=g̃^*(v^t-1)-g̃^*(v^t)-⟨∇g̃^*(v̂),v^t-1-v^t ⟩
≥⟨∇g̃^*(v^t-1), v^t-1-v^t⟩-1/2v^t-1-v^t_2^2-⟨∇g̃^*(v̂),v^t-1-v^t ⟩
= ⟨∇g̃^*(v^t-1)- ∇g̃^*(v̂),v^t-1-v^t ⟩-1/2v^t-1-v^t_2^2
=⟨ w^t-1-ŵ, v^t-1-v^t ⟩-1/2v^t-1-v^t_2^2,
where the inequality holds because
g̃^*(v^t)-g̃^*(v^t-1)≤⟨∇g̃^* (v^t-1),v^t-v^t-1⟩+1/2v^t-v^t-1_2^2,
which results from g̃^*(v) being 1-smooth; and
the last equality holds from the fact that ∇g̃^* (v^t-1)=w^t-1 using Lemma <ref>.
Taking expectation on both sides of Equation (<ref>) and using the result in Equation (<ref>), we establish Equation (<ref>).
Step 3. We define a new potential C_t=c_a A_t+c_b B_t, and prove in this step that
𝔼(C_t-1-C_t)≥ c_aηλ̃ A_t-1-c_aηλ̃∑_i=1^n+11/q_iu_i-â_i_2^2+c_bηλ̃ B_t-1
+2c_bη(F(w^t-1)-F(ŵ))
+2c_bη ( ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1) ).
From the definition of C_t, and using Equation (<ref>) and (<ref>), we have
𝔼(C_t-1-C_t)≥ c_aηλ̃ A_t-1-c_aηλ̃∑_i=1^n+11/q_iu_i-â_i_2^2+2c_bη⟨ w^t-1-ŵ,∇ϕ(w^t-1)+λ̃v^t-1⟩
+∑_i=1^n+11/q_im_i_2^2 (c_aηλ̃ (1-β_i)-c_b η^2/(n+1)^2).
We next show that
⟨ w^t-1-ŵ,∇ϕ(w^t-1)+λ̃v^t-1⟩ -λ̃/2B_t-1-(F(w^t-1)-F(ŵ))=ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1)⟩.
This holds by directly verifying as follows:
⟨ w^t-1-ŵ,∇ϕ(w^t-1)+λ̃v^t-1⟩ -λ̃/2B_t-1-(F(w^t-1)-F(ŵ))
= ⟨ w^t-1-ŵ,∇ϕ(w^t-1)+λ̃v^t-1⟩-λ̃ (g̃^*(v^t-1)-⟨∇g̃^*(v̂),v ^t-1-v̂⟩-g̃^*(v̂))-F(w^t-1)+F(ŵ)
= ⟨ w^t-1,λ̃ v^t-1⟩-λ̃g̃^*(v^t-1) -⟨ŵ,λ̃v̂⟩+λ̃g̃^*(v̂)+⟨ŵ,-∇ϕ(w^t-1) ⟩-F(w^t-1)+F(ŵ)
+ ⟨ w^t-1,∇ϕ(w^t-1) ⟩
= λ̃g̃(w^t-1)-λ̃g̃(ŵ)-F(w^t-1)+F(ŵ)+⟨ w^t-1-ŵ,∇ϕ(w^t-1)⟩
= λ̃g̃(w^t-1)-λ̃g̃(ŵ)-λ̃g̃(w^t-1)-ϕ(w^t-1)+λ̃g̃(ŵ)+ϕ(ŵ)+⟨ w^t-1-ŵ,∇ϕ(w^t-1) ⟩
= ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1)⟩,
where the second equality uses the fact that ŵ=∇g̃^*(v̂), and the third equality holds using the definition of g̃^*(v).
Thus, substituting the equation into (<ref>), we get
𝔼(C_t-1-C_t)≥ c_aηλ̃ A_t-1-c_aηλ̃∑_i=1^n+11/q_iu_i-â_i_2^2+c_bηλ̃ B_t-1+2c_bη(F(w^t-1)-F(ŵ))
+∑_i=1^n+11/q_im_i_2^2 (c_aηλ̃ (1-β_i)-c_b η^2/(n+1)^2)+2c_bη [ ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1) ].
We can choose η≤q_i/2λ̃ and c_b/c_a=λ̃ (n+1)^2/2η so that β_i≤ 1/2, and the term ∑_i=1^n+11/q_im_i^2 (c_aηλ̃ (1-β_i)-c_b η^2/(n+1)^2) is non-negative. Since q_i≥1/2(n+1) for every i, we can choose η≤1/4λ̃ (n+1).
Using these conditions, we establish (<ref>).
Step 4. We now bound ∑_i=1^n+11/q_iu_i-â_i_2^2.
Notice that for i=1,...,n, since ϕ_i is convex, we can apply Lemma <ref> together with the optimality condition that ξ+∇ f(ŵ)=0 for some ξ∈λ∂ g(ŵ):
∑_i=1^n1/q_iu_i-â_i_2^2 =∑_i=1^n1/q_i∇ϕ_i(w^t-1)-∇ϕ_i(ŵ) _2^2
≤(2maxL̃_i/q_i) ∑_i=1^n(ϕ_i(w^t-1)-ϕ_i(ŵ)-⟨∇ϕ_i(ŵ),w^t-1-ŵ⟩)
=n+1/n(2maxL̃_i/q_i) ∑_i=1^n(f_i(w^t-1)-f_i(ŵ)-⟨∇ f_i(ŵ),w^t-1-ŵ⟩)
≤n+1/n(2maxL̃_i/q_i) n( f(w^t-1)-f(ŵ)+λ g(w^t-1)-λ g(ŵ) )
≤(2maxL̃_i/q_i)(n+1) (F(w^t-1)-F(ŵ)).
As for i=n+1, we have
1/q_n+1∇ϕ_n+1(w^t-1)-∇ϕ_n+1(ŵ)_2^2 ≤λ̃^2 (n+1)^2/q_n+1w^t-1-ŵ_2^2
=2(n+1)L̃_n+1/q_n+1λ̃/2w^t-1-ŵ_2^2.
Step 5. We now analyze the progress of the potential by induction.
We need to relate λ̃/2w^t-1-ŵ_2^2 to F(w^t-1)-F(ŵ). At a high level, we divide the time steps t=1,2,... into epochs, i.e., ([ T_0,T_1), [T_1,T_2),...). At the end of each epoch j, we prove that C_t decreases at a linear rate until the optimality gap F(w^t)-F(ŵ) decreases below some tolerance ξ_j. We then prove that (ξ_1,ξ_2,ξ_3,...) is a decreasing sequence and finish the proof.
Assuming that time step t-1 is in epoch j, we use Equation (<ref>) in Lemma <ref> and the fact that λ̃≤κ̃ to obtain
1/q_n+1∇ϕ_n+1(w^t-1)-∇ϕ_n+1(ŵ)_2^2≤ 2(n+1)L̃_n+1/q_n+1 (F(w^t-1)-F(ŵ)+ϵ_j^2 (Δ^*,𝒜,ℬ)).
Combining the above two results on ϕ_i(·) together, we have
∑_i=1^n+11/q_iu_i-â_i_2^2≤ 4(n+1) (max_ i∈{1,..,n+1 }L̃_̃ĩ/q_i ) (F(w^t-1)-F(ŵ)+1/2ϵ_j^2 (Δ^*,𝒜,ℬ))
≤ 8(n+1)^2L̃ (F(w^t-1)-F(ŵ))+4(n+1)^2L̃ϵ_j^2 (Δ^*,𝒜,ℬ),
where we use the fact L̃_i/q_i=2(n+1)L̃L̃_i/L̃_i+L̃≤ 2(n+1) L̃.
Replace the corresponding term in equation (<ref>), we have
𝔼(C_t-1-C_t)≥ c_aηλ̃ A_t-1-8c_aηλ̃(n+1)^2L̃ (F(w^t-1)-F(ŵ))-4c_aηλ̃ (n+1)^2L̅ϵ_j^2 (Δ^*,𝒜,ℬ)
+c_bηλ̃ B_t-1+2c_bη(F(w^t-1)-F(ŵ))+2c_bη ( ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1) )
≥ηλ̃ C_t-1+η(2c_b-8c_aλ̃ (n+1)^2L̃)(F(w^t-1)-F(ŵ)) -4c_aηλ̃(n+1)^2L̃ϵ_j^2 (Δ^*,𝒜,ℬ)
+(c_bη (κ̃-λ̃) w^t-ŵ_2^2 -2c_bηϵ_j^2 (Δ^*,𝒜,ℬ)),
where the second inequality is due to Lemma <ref>.
We choose 2c_b=16c_a λ̃ (n+1)^2L̃, and use the fact that c_b/c_a=λ̃ (n+1)^2/2η, we have η=1/16L̃ and
𝔼(C_t-1-C_t)≥ηλ̃ C_t-1+ η c_b (F(w^t-1)-F(ŵ))-3c_bηϵ_j^2 (Δ^*,𝒜,ℬ),
where we use the assumption that κ̃≥λ̃ and the assumption n>4.
Recall that in epoch j, we have
ϵ_j^2(Δ^*,𝒜,ℬ) =2τ (δ_stat+δ_j-1)^2, δ_j-1=2min{ξ_j-1/λ,ρ},
δ_stat = 8Ψ(ℬ)Δ^*+8g(w^*_𝒜^⊥).
The epoch is determined by comparing F(w^t)-F(ŵ) with 3ϵ_j^2 (Δ^*,𝒜,ℬ).
In the first epoch, we choose δ_0=2ρ.
Thus ϵ_1^2 (Δ^*,𝒜,ℬ)=2τ (δ_stat+2ρ)^2.
We choose T_1 such that
F(w^T_1-1)-F(ŵ)≥ 3ϵ_1^2 (Δ^*,𝒜,ℬ) and F(w^T_1)-F(ŵ)≤ 3ϵ_1^2 (Δ^*,𝒜,ℬ).
If no such T_1 exists, then F(w^t)-F(ŵ)≥ 3ϵ_1^2 (Δ^*,𝒜,ℬ) holds for all t, so 𝔼(C_t)≤ (1-ηλ̃) C_t-1 for all t and C_t converges geometrically to zero; since C_t≥λ̃/2w^t-ŵ_2^2, this forces w^t→ŵ and hence F(w^t)→ F(ŵ), contradicting F(w^t)-F(ŵ)≥ 3ϵ_1^2 (Δ^*,𝒜,ℬ)>0.
Now we know F(w^T_1)-F(ŵ)≤ 3ϵ_1^2 (Δ^*,𝒜,ℬ), and hence we choose ξ_1= 6τ (δ_stat+δ_0)^2.
In the second epoch we use the same argument:
ϵ_2^2 (Δ^*,𝒜,ℬ)=2τ (δ_stat+δ_1)^2, where δ_1=2 min{ξ_1/λ,ρ}.
We choose T_2 such that
F(w^T_2-1)-F(ŵ)≥ 3ϵ_2^2 (Δ^*,𝒜,ℬ) and F(w^T_2)-F(ŵ)≤ 3ϵ_2^2 (Δ^*,𝒜,ℬ).
Then we choose ξ_2= 6τ (δ_stat+δ_1)^2.
Similarly in epoch j, we choose T_j such that
F(w^T_j-1)-F(ŵ)≥ 3ϵ_j^2 (Δ^*,𝒜,ℬ) and F(w^T_j)-F(ŵ)≤ 3ϵ_j^2 (Δ^*,𝒜,ℬ),
and ξ_j= 6τ (δ_stat+δ_j-1)^2.
In this way, we arrive at recursive equalities of the tolerance {ξ_j}_j=1^∞ where
ξ_j= 6τ (δ_stat+δ_j-1)^2 and δ_j-1=2 min{ξ_j-1/λ,ρ}.
We claim that the following holds until δ_j=δ_stat.
(I) ξ_k+1≤ξ_k/ (4^2^k-1)
(II) ξ_k+1/λ≤ρ/4^2^k for k=1,2,3,...
The proof of Equation (<ref>) is the same as that of Equation (60) in <cit.>, which we present here for completeness.
We assume δ_0≥δ_stat (otherwise the statement is true trivially), so ξ_1≤ 96 τρ^2. We make the assumption that λ≥ 384τρ, so ξ_1/λ≤ρ/4 and ξ_1≤ξ_0.
In the second epoch we have
ξ_2 ≤^(1) 12τ (δ^2_stat+δ^2_1)≤ 24τδ_1^2≤ 96τξ_1^2/λ^2 ≤^(2) 96τρξ_1/4λ ≤^(3) ξ_1/4,
where (1) holds from the fact that (a+b)^2≤ 2a^2+2b^2, (2) holds using ξ_1/λ≤ρ/4, and (3) uses the assumption on λ. Thus,
ξ_2/λ≤ξ_1/4λ≤ρ/16.
In the (j+1)-th step, by a similar argument and the induction assumption, we have
ξ_j+1≤96τξ_j^2/λ^2≤96τξ_j/4^2^jλ≤ξ_j/4^2^j-1
and
ξ_j+1/λ≤ξ_j/4^2^j-1λ≤ρ/4^2^j.
Thus we know ξ_j is a decreasing sequence, and 𝔼 (C_t)≤ (1-ηλ̃) C_t-1 holds until F(w^t)-F(ŵ)≤ 6τ (2δ_stat)^2, where η is set to satisfy η≤min (1/16L̃, 1/4λ̃(n+1)).
Observing that (n+1)L̃=n+1/n(∑_i=1^nL_i)+λ̃ (n+1)=(n+1) (λ̃ +1/n∑_i=1^n L_i), we establish the theorem.
§.§ Strongly convex F(w)
If g(w) is 1-strongly convex, we can apply the algorithm directly to Equation (<ref>). In this case, RSC is not needed and the proof is significantly simpler. As the main roadmap of the proof is similar, we only mention the differences in the following.
We define the potential,
A_t =∑_j=1^n1/q_ia_j^t-â_j_2^2,
B_t =2(g^*(v^t)-⟨∇ g^*(v̂),v^t-v̂⟩-g^*(v̂)).
Notice in this setup, the potential B_t is defined on g rather than g̃.
The evolution of A_t follows from the same analysis as Step 1 in the proof of Theorem <ref>, except that we replace λ̃ by λ and we only have n terms rather than n+1. This gives
𝔼[A_t-1-A_t]=ηλ ( A_t-1+∑_i=1^n1/q_i (-u_i-â_i_2^2+(1-β_i)m_i_2^2) ).
Note that g^*(v) is 1-smooth; following the same analysis as Step 2 in the proof of Theorem <ref>, specifically the derivation in (<ref>), we obtain
1/2(B_t-1-B_t)≥⟨ w^t-1-ŵ, v^t-1-v^t ⟩-1/2v^t-1-v^t_2^2.
Combining the above two equations, we have
𝔼(C_t-1-C_t)≥ c_aηλ A_t-1-c_aηλ∑_i=1^n1/q_iu_i-â_i_2^2+2c_bη⟨ w^t-1-ŵ,∇ f(w^t-1)+λ v^t-1⟩
+∑_i=1^n1/q_im_i_2^2 (c_aηλ (1-β_i)-c_b η^2/n^2).
We can choose η≤q_i/2λ and c_b/c_a=λ n^2/2η so that β_i≤ 1/2, and the term ∑_i=1^n1/q_im_i_2^2 (c_aηλ (1-β_i)-c_b η^2/n^2) is non-negative. According to the definition of q_i, we know q_i ≥1/2n, thus we can choose η≤1/4λ n to ensure that η≤q_i/2λ.
We can show that
⟨ w^t-1-ŵ,∇ f(w^t-1)+λ v^t-1⟩ -λ/2B_t-1-(F(w^t-1)-F(ŵ))
= f(ŵ)-f(w^t-1)-⟨ŵ-w^t-1,∇ f(w^t-1)⟩≥ 0
This follows from the same derivation as in Equation (<ref>), modulo replacing λ̃, g̃(w) and ϕ_i(w) by λ, g(w) and f_i(w) correspondingly. The last step follows immediately from the convexity of f(w).
The proof now proceeds by discussing two cases separately: (1) each f_i(w) is convex; (2) f_i(w) is not necessarily convex, but the sum f(w) is convex.
Case 1: Since each f_i(w) is convex in this case, bounding ∑_i=1^n1/q_iu_i-â_i_2^2 is easier, via the following.
∑_i=1^n1/q_iu_i-â_i_2^2 = ∑_i=1^n1/q_i∇ f_i(w^t-1)-∇ f_i(ŵ) _2^2
≤ (2 max_i L_i/q_i)∑_i=1^n (f_i(w^t-1)-f_i(ŵ)-⟨∇ f_i(ŵ),w^t-1-ŵ⟩ )
≤ (2 max_i L_i/q_i) n (F (w^t-1)-F(ŵ)),
where the first inequality follows from Lemma <ref>, and the second one follows from convexity of g(w) and optimality condition of ŵ.
The definition of q_i in the Algorithm 1 implies that for every i,
L_i/q_i=2nL̅L_i/L_i+L≤ 2n L̅.
Substituting this into the corresponding terms in (<ref>), we have
𝔼(C_t-1-C_t)
≥ c_a ηλ A_t-1+ c_b ηλ B_t-1 +2c_bη (F(w^t-1)-F(ŵ))-4c_aηλ n^2L̅ (F(w^t-1)-F(ŵ)).
Recall that c_b=c_a λ n^2/2η; using the choice η≤1/4 L̅, we have
𝔼(C_t)≤ (1-ηλ) C_t-1,
where η=min{1/4L̅,1/4λ n}.
Case 2: Using strong convexity of F(·), we have
c_aηλ∑_i^n1/q_iu_i-â_i_2^2=c_aηλ∑_i=1^n1/q_i∇ f_i(w^t-1)-∇ f_i(ŵ)_2^2
≤ c_aηλ∑_i=1^nL_i^2/q_iw^t-1-ŵ_2^2≤ 2c_aη(F(w^t-1)-F(ŵ)) ∑_i=1^nL^2_i/q_i,
where the first inequality uses the smoothness of f_i(·), and the second one holds from the strong convexity of F(·) and optimal condition of ŵ.
Using Equation (<ref>), we have
⟨ w^t-1-ŵ,∇ f(w^t-1)+λ v^t-1⟩≥λ/2B_t-1+(F(w^t-1)-F(ŵ)).
Then, replacing the corresponding terms in (<ref>), we have
𝔼 (C_t-1-C_t)≥ηλ C_t-1 + 2η(c_b- c_a ∑_i=1^nL_i^2/q_i)(F(w^t-1)-F(ŵ)).
Since we know L_i/q_i≤ 2nL̅, we have
𝔼 (C_t-1-C_t)≥ηλ C_t-1+2η (c_b-2n^2L̅^2 c_a) (F(w^t-1)-F(ŵ)).
The last term is non-negative if c_b/c_a≥ 2n^2L̅^2. Since we choose c_b/c_a=λ n^2/2η, this condition is satisfied when we choose η≤λ/4L̅^2.
§.§ Proof on non-convex F(w)
The proof for the non-convex case follows a similar line to that of the convex case. To avoid redundancy, we focus on pointing out the differences. We start with some technical lemmas. The following lemma is adapted from Lemma 6 of <cit.>, which we present for completeness.
For any vector w∈ℝ^p, let A denote the index set of its s largest elements in magnitude. Under the assumptions on d_λ,μ in Section <ref> of the main body of the paper, we have
d_λ,μ(w_A)-d_λ,μ (w_A^c)≤λ L_d (w_A_1-w_A^c_1) .
Moreover, for an arbitrary vector w∈ℝ^p, we have
d_λ,μ (w^*)-d_λ,μ (w)≤λ L_d (ν_A_1-ν_A^c_1),
where ν=w-w^* and w^* is s sparse.
The next lemma is non-convex counterparts of Lemma <ref> and Lemma <ref>.
Suppose d_λ,μ(·) satisfies the assumptions in Section <ref> of the main body of the paper, w^* is feasible, λ L_d≥8ρθlog p /n, λ≥4/L_d∇ f (w^*)_∞, and there exist ξ and T such that
F(w^t)-F(ŵ)≤ξ, ∀ t> T.
Then for any t> T, we have
w^t-ŵ_1≤ 4√(s)w^t-ŵ_2+8√(s)w^*-ŵ_2+2min(ξ/λ L_d, ρ).
For an arbitrary feasible w, define Δ=w-w^*.
Suppose F(w)-F(ŵ)≤ξ; since F(ŵ)≤ F(w^*), we have F(w)≤ F(w^*)+ξ, which implies
f(w^*+Δ)+d_λ,μ (w^*+Δ)≤ f(w^*)+d_λ,μ(w^*) +ξ .
Subtracting ⟨∇ f(w^*),Δ⟩ and using the RSC condition (recall that τ=θlog p/n in the assumptions of Theorem 2), we have
κ/2Δ_2^2-θlog p/nΔ_1^2+d_λ,μ (w^*+Δ)-d_λ,μ(w^*)
≤ ξ-⟨∇ f(w^*),Δ⟩
≤ ξ+∇ f(w^*)_∞Δ_1,
where the last inequality holds from Holder's inequality.
Rearranging terms, using the fact that Δ_1≤ 2ρ (by feasibility of w and w^*) and the assumptions λ L_d≥ 8ρθlog p /n and λ≥4/L_d∇ f (w^*)_∞, we get
ξ+1/2λ L_dΔ_1+d_λ,μ(w^*)-d_λ,μ(w^*+Δ)≥κ/2Δ_2^2≥ 0.
By Lemma <ref>, we have
d_λ,μ (w^*)-d_λ,μ (w)≤λ L_d (Δ_A_1-Δ_A^c_1) ,
where A is the set of indices of the top s components of Δ in magnitude, which thus leads to
3λ L_d/2Δ_A_1-λ L_d/2Δ_A^c_1+ξ≥ 0.
Consequently
Δ_1≤Δ_A_1+Δ_A^c_1≤ 4Δ_A_1+2ξ/λ L_d≤ 4√(s)Δ_2 +2ξ/λ L_d.
Combining this with the fact that Δ_1≤ 2ρ, we obtain
Δ_1≤ 4√(s)Δ_2 +2min{ξ/λ L_d, ρ}.
So we have w^t-w^*_1≤ 4√(s)w^t-w^*_2+2min{ξ/λ L_d, ρ}.
Notice that F(w^*)-F(ŵ)≥ 0, so following the same steps with ξ=0 we have ŵ-w^*_1≤ 4√(s)ŵ-w^*_2.
Combining the two together, we get
w^t-ŵ_1≤w^t-w^*_1+w^*-ŵ_1≤ 4√(s)w^t-ŵ_2+8√(s)w^*-ŵ_2+2min (ξ/λ L_d, ρ).
Now we provide a counterpart of Lemma <ref> in the non-convex case.
Under the same assumptions as those of Lemma <ref>, we have
F(w^t)-F(ŵ)≥κ̃/2w^t-ŵ_2^2-ϵ^2 (Δ^*,s );
ϕ(ŵ)-ϕ(w^t)-⟨∇ϕ(w^t), ŵ-w^t ⟩≥ [κ̃-λ̃/2w^t-ŵ_2^2-ϵ^2 (Δ^*,s )],
where κ̃=κ-μ-64sτ, Δ^*=ŵ-w^*, and ϵ^2 (Δ^*,s )=2τ (8√(s)ŵ-w^*_2+2min(ξ/λ L_d,ρ))^2.
Notice that
F(w^t)-F(ŵ)
= f(w^t)-f(ŵ)-μ/2w^t_2^2+μ/2ŵ_2^2+ λ d_λ(w^t)-λ d_λ(ŵ)
≥ ⟨∇ f(ŵ),w^t-ŵ⟩+κ/2w^t-ŵ_2^2- ⟨μŵ, w^t-ŵ⟩-μ/2w^t-ŵ_2^2
+ λ d_λ(w^t)-λ d_λ(ŵ)-τw^t-ŵ_1^2
≥ ⟨∇ f(ŵ),w^t-ŵ⟩+κ/2w^t-ŵ_2^2- ⟨μŵ, w^t-ŵ⟩-μ/2w^t-ŵ_2^2
+ λ⟨∂ d_λ(ŵ),w^t-ŵ⟩-τw^t-ŵ_1^2
= κ-μ/2w^t-ŵ_2^2-τw^t-ŵ_1^2,
where the first inequality uses RSC condition, the second inequality uses the convexity of d_λ(w), and the last equality holds from the optimality condition of ŵ.
Using Lemma <ref>, and the inequality (a+b)^2≤ 2a^2+2b^2, we have
w^t-ŵ_1^2≤ (4√(s)w^t-ŵ_2+8√(s)w^*-ŵ_2+2min (ξ/λ L_d, ρ))^2
≤ 32 s w^t-ŵ_2^2+2 (8√(s)ŵ-w^*_2+2min(ξ/λ L_d,ρ))^2.
Substitute this into Equation (<ref>), we obtain
F(w^t)-F(ŵ)≥ (κ-μ/2-32sτ)w^t-ŵ_2^2-2τ (8√(s)ŵ-w^*_2+2min(ξ/λ L_d,ρ))^2.
Recall
ϕ(w)= f(w)-λ̃/2w_2^2-μ/2w_2^2.
So we have
ϕ(ŵ)-ϕ(w^t)-⟨ŵ-w^t,∇ϕ(w^t)⟩
= f(ŵ)-f(w^t)-⟨∇ f(w^t), ŵ-w^t⟩-λ̃+μ/2ŵ-w^t_2^2
≥ κ/2ŵ-w^t_2^2-τŵ-w^t_1^2-λ̃+μ/2ŵ-w^t_2^2.
Then, using the upper bound on w^t-ŵ_1^2 in (<ref>) and rearranging terms, we establish the lemma.
We are now ready to prove the main theorem of the non-convex case, i.e., Theorem 2.
Recall that in the non-convex case, for i=1,...,n, the definition of ϕ_i(w) is the same as in the convex case.
The difference is ϕ_n+1(w), which is defined as follows:
ϕ_n+1(w)=-(n+1) (λ̃+μ)/2w_2^2.
Recall the definition of g̃(w) is as follows
g̃(w)=1/2w_2^2+λ/λ̃d_λ (w).
Following similar steps as in the proof of Theorem <ref>, we have
E(C_t-1-C_t)≥ c_aηλ̃ A_t-1-c_aηλ̃∑_i=1^n+11/q_iu_i-â_i_2^2+c_bηλ̃ B_t-1+2c_bη(F(w^t-1)-F(ŵ))
+∑_i=1^n+11/q_im_i_2^2 (c_aηλ̃ (1-β_i)-c_b η^2/(n+1)^2)+2c_bη [ ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1) ].
Following the same steps as in Equation (<ref>), with λ̃ replaced by λ̃+μ, we have
1/q_n+1∇ϕ_n+1(w^t-1)-∇ϕ_n+1(ŵ)_2^2
≤ 2(n+1)L̃_n+1/q_n+1λ̃+μ/2w^t-1-ŵ_2^2.
As in the convex case, we then divide the time steps t=1,2,... into epochs, i.e., ([ T_0,T_1), [T_1,T_2),...). At the end of each epoch j, we prove that C_t decreases at a linear rate until the optimality gap F(w^t)-F(ŵ) decreases below some tolerance ξ_j.
We then apply Lemma <ref> and using the assumption that κ̃≥λ̃+μ to relate w^t-1-ŵ_2^2 to F(w^t)-F(ŵ):
1/q_n+1∇ϕ_n+1(w^t-1)-∇ϕ_n+1(ŵ)_2^2≤ 2(n+1)L̃_n+1/q_n+1 (F(w^t-1)-F(ŵ)+ϵ_j^2 (Δ^*,s)).
Thus
∑_i=1^n+11/q_iu_i-â_i_2^2≤ 8(n+1)^2L̃ (F(w^t-1)-F(ŵ))+4(n+1)^2L̃ϵ_j^2 (Δ^*,s).
We choose 2c_b=16c_a λ̃ (n+1)^2L̃ and use the fact that c_b/c_a=λ̃ (n+1)^2/2η, which gives η=1/16L̃. Combining all pieces, we get
𝔼(C_t-1-C_t)≥ c_aηλ̃ A_t-1-8c_aηλ̃(n+1)^2L̃ (F(w^t-1)-F(ŵ))-4c_aηλ̃ (n+1)^2L̅ϵ_j^2 (Δ^*,s)
+c_bηλ̃ B_t-1+2c_bη(F(w^t-1)-F(ŵ))+2c_bη ( ϕ(ŵ)-ϕ(w^t-1)-⟨ŵ-w^t-1,∇ϕ(w^t-1) )
≥ηλ̃ C_t-1+η(2c_b-8c_aλ̃ (n+1)^2L̃)(F(w^t-1)-F(ŵ)) -4c_aηλ̃(n+1)^2L̃ϵ_j^2 (Δ^*,s)
+(c_bη (κ̃-λ̃) w^t-ŵ_2^2 -2c_bηϵ_j^2 (Δ^*,s))
≥ηλ̃ C_t-1+ η c_b (F(w^t-1)-F(ŵ))-3c_bηϵ_j^2 (Δ^*,s),
where the second inequality uses Lemma <ref> and the assumption κ̃≥λ̃.
The rest of the proof is almost identical to the convex case. In the first epoch, we have
ϵ_1^2 (Δ^*,s)=2τ (δ_stat+2ρ)^2 , ξ_1= 6τ (δ_stat+2ρ)^2
Then we choose T_1 such that
F(w^T_1-1)-F(ŵ)≥ 3ϵ_1^2 (Δ^*,s) and F(w^T_1)-F(ŵ)≤ 3ϵ_1^2 (Δ^*,s).
In the second epoch we use the same argument:
ϵ_2^2 (Δ^*,s)=2τ (δ_stat+δ_1)^2, where δ_1=2ξ_1/λ L_d.
We repeat this strategy in every epoch. The only difference from the proof of the convex case is that we now replace λ by λ L_d.
We use the same argument to prove ξ_k is a decreasing sequence and conclude the proof.
§.§ Proof of Corollaries
We now prove the corollaries that instantiate our main theorems for different statistical estimators.
To begin with, we present the following lemma on the RSC condition proved in <cit.>, and then use it in the case of Lasso.
If each data point x_i is i.i.d. sampled from the distribution N(0,Σ), then there exist universal constants c_0 and c_1 such that
XΔ_2^2/n≥1/2Σ^1/2Δ_2^2-c_1ν(Σ)log p/nΔ_1^2, Δ∈ℝ^p,
with probability at least 1-exp(-c_0n). Here X is the data matrix where each row is data point x_i.
Since w^* is supported on a subset S with cardinality s, we choose
ℬ(S)={ w∈ℝ^p | w_j=0 for all j∉ S }.
It is straightforward to choose 𝒜(S)=ℬ(S) and notice that w^*∈𝒜(S).
In Lasso, f(w)=1/2ny-Xw_2^2, and it is easy to verify that
f(w+Δ)-f(w)-⟨∇ f(w),Δ⟩≥1/2nXΔ_2^2≥1/4Σ^1/2Δ_2^2-c_1/2ν(Σ)log p/nΔ_1^2.
Notice that g(·) is ·_1 in Lasso, thus Ψ(ℬ)=sup_w∈ℬ\{0}w_1/w_2=√(s). So we have
κ̃=1/2σ_min (Σ)-64c_1 ν(Σ) slog p/n .
On the other hand, the tolerance is
δ =24τ(8Ψ(ℬ)ŵ-w^*_2+8g(w^*_𝒜^⊥))^2
=c_2ν (Σ)slog p/nŵ-w^*_2^2,
where we use the fact that w^*∈𝒜(S) which implies g(w^*_𝒜^⊥)=0.
The last piece to check is that λ≥ 2g^*(∇ f(w^*)). In Lasso we have g^*(·)=·_∞. Using the fact that y_i=x_i^Tw^*+ξ_i, this is equivalent to requiring λ≥2/nX^Tξ_∞. Thus, by our choice λ≥ 6σ√(log p/n), the condition is satisfied by invoking the following inequality, which holds by applying the Gaussian tail bound and the union bound:
ℙ(2/nX^Tξ_∞≤ 6σ√(log p/n)) ≥ 1-exp(-3log p).
We use the following fact on the RSC condition of the group Lasso <cit.>, <cit.>. If each data point x_i is i.i.d. sampled from the distribution N(0,Σ), then there exist strictly positive constants (σ_1,σ_2) depending only on Σ such that
XΔ_2^2/2n≥σ_1(Σ) Δ_2^2-σ_2(Σ) (√(m/n)+√(3 log N_𝒢/n) )^2 Δ^2_𝒢,2, Δ∈ℝ^p
with probability at least 1-c_3exp (-c_4n).
Recall we define the subspace
𝒜 (S_𝒢)={w|w_G_i=0 for all i∉ S_𝒢} ,
and 𝒜(𝒮_𝒢)=ℬ (𝒮_𝒢), where S_𝒢 corresponds to the non-zero groups of w^*. The subspace compatibility can be computed as
Ψ(ℬ)=sup_w∈ℬ\{0}w_𝒢,2/w_2=√(s_𝒢).
The effective RSC parameter is given by
κ̃=σ_1(Σ)-64σ_2(Σ)s_𝒢(√(m/n)+√(3 log N_𝒢/n))^2 .
We then check that the requirement on λ holds. Since the group Lasso regularizer is the ℓ_1,2 grouped norm, its dual norm is the (∞,2) grouped norm.
So we need
λ≥ 2 max_i=1,...,N_𝒢1/n(X^Tξ)_G_i_2.
Using Lemma 5 in <cit.>, we know
max_i=1,...,N_𝒢1/n(X^Tξ)_G_i_2≤ 2σ(√(m/n)+√(log N_𝒢/n))
with probability at least 1-2exp (-2log N_𝒢). Thus it suffices to choose λ≥ 4σ (√(m/n)+√(log N_𝒢/n)), as suggested in the corollary.
Finally, the tolerance can be computed as
δ=c_3 s_𝒢σ_2(Σ)(√(m/n)+√(3 log N_𝒢/n))^2 ŵ-w^*_2^2.
The proof is similar to that of Lasso. Recall in the proof of the Lasso example we have
∇ f(w^*)_∞=1/nX^Tξ_∞≤ 3 σ√(log p/n),
and the RSC condition is
XΔ_2^2/n≥1/2Σ^1/2Δ_2^2-c_1ν(Σ)log p/nΔ_1^2.
Recalling that μ=1/ζ-1 and L_d=1, we establish the result.
Notice
∇ f(w^*)_∞=Γ̂w^*-γ̂_∞=γ̂-Σ w^* +(Σ-Γ̂)w^*_∞≤γ̂-Σ w^*_∞+(Σ-Γ̂)w^*_∞.
As shown in Lemma 2 of <cit.>, both terms are bounded by
c_1 φ√(log p/n) with probability at least 1-c_1exp(-c_2log p), where φ=(√(σ_max (Σ))+√(γ_ς)) (σ+√(γ_ς)w^*_2).
To obtain the RSC condition, we apply Lemma 1 in <cit.> to get
1/nΔ^TΓ̂Δ≥σ_min (Σ)/2Δ_2^2-c_3 σ_min(Σ)max( (σ_max (Σ)+γ_ς/σ_min(Σ))^2,1 ) log p/nΔ_1^2,
with probability at least 1-c_4 exp(-c_5 nmin( σ^2_min (Σ)/( σ_max(Σ)+γ_ς)^2,1 ) ).
Combining these pieces, we establish the result.
|
http://arxiv.org/abs/1701.07617v1 | 20170126085234 | Limiting curves for polynomial adic systems | [
"Aleksei Minabutdinov"
] | math.DS | [
"math.DS"
] |
Limiting curves for polynomial adic systems.[Supported by the RFBR (grant 14-01-00373)]
A. R. Minabutdinov, National Research University Higher School of Economics, Department of Applied Mathematics and Business Informatics, St. Petersburg, Russia, e-mail:
December 30, 2023
==========================================================================================================================================================================
We prove the existence and describe limiting curves resulting from deviations in partial sums in the ergodic theorem for cylindrical functions and polynomial adic systems. For a general ergodic measure-preserving transformation and a summable function we give a necessary condition for a limiting curve to exist. Our work generalizes results by É. Janvresse, T. de la Rue and Y. Velenik and answers several questions from their work.
Key words: Polynomial adic systems, ergodic theorem, deviations in ergodic theorem.
MSC: 37A30, 28A80
§ INTRODUCTION
In this paper we develop the notion of a limiting curve introduced by É. Janvresse, T. de la Rue and Y. Velenik in <cit.>. Limiting curves were studied for the Pascal adic in <cit.> and <cit.>. In this paper we study it for a wider class of adic transformations.
Let T be a measure preserving transformation defined on a Lebesgue probability space (X, ℬ,μ) with an invariant ergodic probability measure μ. Let g denote a function in L^1(X,μ). Following <cit.>, for a point x∈ X and a positive integer j we denote the partial sum ∑_k=0^j-1g(T^kx) by S_x^g(j). We extend the function S_x^g(j) to a real-valued argument by linear interpolation and denote the extended function by F_x^g(j), or simply F(j), j≥ 0.
Let (l_n)_n=1^∞ be a sequence of positive integers. We consider the continuous functions on [0,1] given by φ_n(t) = F(t· l_n(x)) - t · F(l_n)/R_n( ≡φ_x,l_n^g(t) ), where the normalizing coefficient R_n is canonically defined as the maximum over t∈[0,1] of |F(t· l_n(x)) - t · F(l_n)|.
If there is a sequence l^g_n(x)∈ℕ such that the functions φ_x,l^g_n(x)^g converge to a (continuous) function φ_x^g in the sup-metric on [0,1], then
the graph of the limiting function φ=φ^g_x is called a limiting curve, the sequence l_n=l_n^g(x) is called a stabilizing sequence and the sequence R_n = R_x,l^g_n(x)^g is called a normalizing sequence. The quadruple (x, (l_n)_n=1^∞, (R_n)_n=1^∞, φ) is called a limiting bridge.
Heuristically, the limiting curve describes the small fluctuations of (suitably renormalized) ergodic sums 1/lF(l), l∈(l_n), along the forward trajectory x, T(x), T^2(x),…. More specifically, for l∈(l_n) it holds that F(t· l) = t F(l)+R_l φ(t)+o(R_l), where t∈[0,1].
In this paper we always assume that T is an adic transformation. Adic transformations were introduced into ergodic theory by A. M. Vershik in <cit.> and have been extensively studied since. The following important theorem shows that the adicity assumption is not restrictive at all:
(A. M. Vershik, <cit.>).
Any ergodic measure preserving transformation on a Lebesgue space is isomorphic to some adic transformation. Moreover, one can find such an isomorphism that any given countable dense invariant subalgebra of measurable sets goes over into the algebra of cylinder sets.
In <cit.> the authors encouraged studying different approaches to the combinatorics of Markov compacta (sets of paths in Bratteli diagrams).
In particular, it is interesting to find a natural class of adic transformations such that limiting bridges exist for cylindric functions. Moreover, it is interesting to study the joint growth rates of stabilizing and normalizing sequences.
In this paper we give a necessary condition for a limiting curve to exist.
Next we find necessary and sufficient conditions for almost sure (in x) existence of limiting curves for a class of self-similar adic transformations and cylindric functions. These transformations (in slightly less generality) were considered by X. Méla and S. Bailey in <cit.> and <cit.>. Our work extends <cit.> and answers several questions from that research.
§ LIMITING CURVES AND COHOMOLOGOUS TO A CONSTANT FUNCTIONS
In this section we show that a necessary condition for limiting curves to exist is the unbounded growth of the normalizing coefficients R_n. Conversely, we show that the normalizing coefficients are bounded if and only if the function g is cohomologous to a constant. In particular, this implies that there are no limiting curves for cylindric functions for an ordinary odometer.
§.§ Notions and definitions
Let B=B(𝒱,ℰ) denote a Bratteli diagram defined by the set of vertices 𝒱 and the set of edges ℰ. Vertices at level n are numbered k=0 through L(n). We associate to a Bratteli diagram B the space X = X(B) of infinite edge paths beginning at the vertex v_0 = (0, 0). Following the fundamental paper <cit.>, we assume that there is a linear order ≤_n,k defined on the set of edges with terminal vertex (n,k), 0≤ k≤ L(n). These linear orders define a lexicographical order on the set of edge paths in X that belong to the same class of the tail partition. We denote by ≼ the corresponding partial order on X. The set of maximal (minimal) paths is denoted by X_max (correspondingly, X_min).
The adic transformation T is defined on X∖(X_max∪ X_min) by setting Tx, x∈ X, equal to the successor of x, that is, the smallest y that satisfies y ≻ x.
Let ω be a path in X. We denote by (n,k_n(ω)) a vertex through which ω passes at level n.
For a finite path c=(c_1,…,c_n) we denote k_n(c) simply by k(c). A cylinder set C=[c_1c_2… c_n]={ω∈ X|ω_1=c_1,ω_2=c_2,…, ω_n=c_n} of rank n is completely determined by a finite path from the vertex (0,0) to the vertex (n,k)=(n,k(c)). The sets π_n,k of lexicographically ordered finite paths c=(c_0,c_1,…,c_n-1) with k(c)=k are in one-to-one correspondence with the towers τ_n,k made up of the corresponding cylinder sets C_j = τ_n,k(j), 1⩽ j⩽ H_n,k.
The dimension H_n,k of the vertex (n,k) is the total number of such finite paths (the rungs of the tower).
We denote by num(c) the index of a finite path c in the lexicographically ordered set π_n,k(c). Evidently, 1≤ num(c)≤ H_n,k. For a given level n the set of towers {τ_n,k}_0≤ k≤ L(n) defines an approximation of the transformation T, see <cit.>, <cit.>.
We can consider a vertex (n,k) of the Bratteli diagram B as the origin of a new diagram B'_n,k=(𝒱',ℰ'). The set of vertices 𝒱', the set of edges ℰ' and the edge paths X(B'_n,k) are naturally defined. As above, the partial order ≼' on X(B') is induced by the linear orders ≤_n',k', n'>n.
An ordered Bratteli diagram (B,≼) is called self-similar if the ordered diagrams (B,≼) and (B'_n,k,≼'_n,k) are isomorphic for all n∈ℕ, 0⩽ k⩽ L(n).
Let ℱ denote the set of all functions f:X→ℝ.
We denote by ℱ_N the space of cylindric functions of rank N (i.e. functions that are constant on cylinders of rank N).
Let g∈ℱ_N, N< n. We denote by F_n,k^g the linearly interpolated partial sums S^g_x, x∈τ_n,k(1). Assume that the self-similar Bratteli diagram B has L+1 vertices at level N and let ω∈π_n,k, 0⩽ k ⩽ L(n), be a finite path such that its initial segment ω'=(ω_1,ω_2,…, ω_N) is a maximal path, i.e. num(ω')=H_N,k(ω'). Let E^N,l_n,k denote the number of paths from (0,0) to (n,k) passing through the vertex (N,l), 0≤ l≤ L, and not exceeding the path ω. We denote by ∂^N,l_n,k(ω) the ratio of E^N,l_n,k to the dimension H_N,l. It is not hard to see
that the partial sum F_n,k^g evaluated at j=num(ω) has the following expression:
F^g_n,k(j) = ∑_l=0^L h^g_N,l∂^N,l_n,k(ω),
where coefficients h^g_N,l are equal to F_N,l^g(H_N,l), 0⩽ l⩽ L.
Expression (<ref>) is a generalization of Vandermonde's convolution formula.
§.§ A necessary condition for existence of limiting curves
Let (X,T) be an ergodic measure-preserving transformation with invariant measure μ. Let g be a summable function and a point x∈ X. We consider a sequence of functions φ^g_x,l_n and normalizing coefficients R_x,l_n^g given by the identity
φ_x,l_n(x)^g(t) = S_x^g([t· l_n(x)]) - t · S_x^g(l_n(x))/R_x,l_n(x)^g,
where R_x,l_n(x)^g equals the maximum over t∈[0,1] of the absolute value of the numerator. Without loss of generality, we assume that the limit g^*(x)=lim_n→∞1/nS^g_x(n) exists at the point x. The following theorem generalizes Lemma 2.1 from <cit.> to an arbitrary summable function.
If a continuous limiting curve φ_x^g=lim_nφ^g_x,l_n exists for μ-a.e. x, then the normalizing coefficients R_x,l_n^g are unbounded in n.
Assume to the contrary that |R_x,l_n^g|⩽ K. For simplicity we introduce the following notation: S=S_x^g, φ_n = φ^g_x,l_n, R_n = R_x,l_n^g and φ = φ_x. Since φ≠0, there is j∈ℕ such that 1/jS(j)≠ g^*. This in turn implies lim inf_n|φ_n(j/l_n)|= lim inf_n 1/R_n|S(j)-j S(l_n)/l_n|⩾1/K|S(j)-jg^*|=j/K|1/jS(j)-g^*|>0,
contradicting continuity of the limiting curve φ at the origin.
A function g∈ L^∞(X,μ) (μ-a.e.) of the form g = c+h∘ T - h for some c∈ℝ and h∈ L^∞(X,μ) is called cohomologous to a constant in L^∞.
Normalizing sequence R_x,l_n^g is bounded if and only if function g is cohomologous to a constant.
The sums ∑_j=0^n-1 (g-g^*)∘ T^j of a cohomologous function are μ-a.e. bounded; therefore the normalizing coefficients R_x,l_n^g are μ-a.e. bounded too.
The proof of the converse statement exploits the result by A. G. Kachurovskiy from <cit.>. Assume that the normalizing coefficients R_x,l_n^g are bounded. Then for μ-a.e. point x∈ X and for any j∈ℕ the following inequality holds |S^g_x(j)-j/l_nS^g_x(l_n)|⩽ C. Going to the limit in n, we see that
|∑_i=1^jf∘ T^i(x)|⩽ C,
where f = g-g^*. By Theorem 19 from <cit.> (see also G. Halász, <cit.>), the inequality |S^f_x(j)|⩽ C for all j is equivalent to the existence of a function h∈ L^∞ such that f = h∘ T-h. Therefore g equals h∘ T-h+g^*.
Let B be a Bratteli diagram such that there is only one vertex at each level, and let the edge ordering be such that the edges increase from left to right. The corresponding adic transformation is called an odometer. A stationary odometer
is an odometer for which the number of edges connecting consecutive levels is constant.
Let (X,T) be an odometer. Any cylindric function g∈ℱ_N is cohomologous to a constant. Therefore there is no limiting curve for a cylindric function.
There is only one vertex (n,0) at each level n. Expression (<ref>) for the partial sum F_n,0^g(i) is evidently valid for any odometer (even without the assumption of self-similarity). Moreover, expression (<ref>) is determined by the single coefficient h^g_N,0 and therefore is proportional to H_N = H_N,0. We can subtract from g such a constant C that the equality h^g-C_N,0=0 holds. But this is equivalent to the following: the function g-C belongs to the linear space spanned by the functions f_j - f_j∘ T, 1⩽ j⩽ H_N, where f_j is the indicator function of the j-th rung of the tower τ_N,0. Therefore the function g-C is cohomologous to zero.
§ EXISTENCE OF LIMITING CURVES FOR POLYNOMIAL ADIC SYSTEMS
In this part we show that any cylindric function that is not cohomologous to a constant in a polynomial adic system has a limiting curve. This generalizes Theorem 2.4 from <cit.>.
§.§ Polynomial adic systems
Let p(x) = a_0 + a_1 x +… + a_d x^d be an integer polynomial of degree d∈ℕ with positive integer coefficients a_i, 0≤ i≤ d. The Bratteli diagram B_p =(𝒱,ℰ)_p associated to the polynomial p(x) is defined as follows:
* Number of vertices grows linearly: |𝒱_0|=1 |𝒱_n| = |𝒱_n-1|+d=nd+1, n∈ℕ.
* If 0⩽ j⩽ d vertices (n,k) and (n+1,k+j) are connected by a_j edges.
Polynomial p(x) is called a generating polynomial of the diagram B_p, see paper <cit.> by S. Bailey.
Since the number of edges into vertex (n, k)
is exactly p(1)=a_0+a_1+…+a_d it is natural to use the alphabet 𝒜 = {0,1,…,a_0+a_1+…+a_d-1} for edges labeling. We call a lexicographical order defined in <cit.> a canonical order. It is defined as follows: Edges connecting (0, 0) with
(1, d) are labeled through 0 to a_d-1 (from left to right); edges connecting (0,0) and (1,d-1), are indexed by a_d to a_d+a_d-1-1, etc. Edges connecting (0,0) and (1,0), are indexed through a_0+a_1+…+a_d-1 to a_0+a_1+…+a_d.
Infinite paths are totally defined by this labeling and may be considered as one sided infinite sequences in 𝒜^ℕ. We denote the path space by X_p.
We denote by T_p the adic transformation associated with the canonical ordering.
Remark. Any self-similar Bratteli diagram is either a diagram of a stationary odometer or is associated to some polynomial p(x). Any non-canonical ordering is obtained from the canonical one by some substitution σ.
Everywhere below we stick to the canonical order. The case of a general order requires several straightforward changes that are left to the reader.
The dimension of the vertex (n,k) of the diagram B_p equals the coefficient of x^k in the polynomial (p(x))^n and is called a generalized binomial coefficient. We denote it by C_p(n,k). For n>1 the coefficients C_p(n,k) can be evaluated by the recursion C_p(n,k)=∑_j=0^da_jC_p(n-1,k-j).
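These coefficients are straightforward to compute by iterating the recursion; the following sketch (names ours) returns the full coefficient list of (p(x))^n.

def gen_binom(a, n):
    # coefficients of p(x)^n for p(x) = a[0] + a[1]x + ... + a[d]x^d,
    # via C_p(n,k) = sum_j a[j] * C_p(n-1, k-j)
    d = len(a) - 1
    C = [1]                       # p(x)^0 = 1
    for _ in range(n):
        new = [0] * (len(C) + d)
        for k, c in enumerate(C):
            for j, aj in enumerate(a):
                new[k + j] += aj * c
        C = new
    return C

# example: p(x) = 1 + x recovers Pascal's triangle rows
assert gen_binom([1, 1], 4) == [1, 4, 6, 4, 1]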
In <cit.> and <cit.> X. Méla and S. Bailey showed that the fully supported invariant ergodic measures of the system (X_p,T_p) are the one-parameter family of Bernoulli measures:
(S. Bailey, <cit.>, X. Méla, <cit.>) 1. Let q∈(0,1/a_0) and let t_q be the unique solution in (0,1) to the equation
a_0q^d+a_1q^d-1t+…+a_dt^d-q^d-1=0,
then the invariant, fully supported, ergodic probability measures for the adic transformation T_p are the one-parameter family of Bernoulli measures μ_q, q∈(0,1/a_0),
μ_q=∏_0^∞(q,…,q (a_0 times), t_q,…,t_q (a_1 times), t_q^2/q,…,t_q^2/q (a_2 times), …, t_q^d/q^d-1,…,t_q^d/q^d-1 (a_d times)).
2. The invariant measures that
are not fully supported are
∏_0^∞(1/a_0,…,1/a_0 (a_0 times),0,…,0) and ∏_0^∞(0,…,0,1/a_d,…,1/a_d (a_d times)).
A polynomial adic system associated with the polynomial p(x) is the triple (X_p,T_p,μ_q), q∈(0,1/a_0).
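As an illustration, t_q and the corresponding one-dimensional distribution can be computed numerically; a minimal sketch (bisection works because the left-hand side of the defining equation is increasing in t; p(x)=1+x+x^2 is again an arbitrary example):

```python
# Solve a_0 q^d + a_1 q^{d-1} t + ... + a_d t^d = q^{d-1} for t = t_q in (0,1),
# then build the probability vector (q, ..., t_q^d/q^{d-1}) of the theorem above.
a = (1, 1, 1)                  # example polynomial p(x) = 1 + x + x^2
d = len(a) - 1

def t_of_q(q, tol=1e-14):
    f = lambda t: sum(a[j] * q ** (d - j) * t ** j for j in range(d + 1)) - q ** (d - 1)
    lo, hi = 0.0, 1.0          # f(0) < 0 < f(1) for q in (0, 1/a_0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

q = 0.3
t = t_of_q(q)
weights = [t ** i / q ** (i - 1) for i in range(d + 1) for _ in range(a[i])]
assert abs(sum(weights) - 1.0) < 1e-12   # a probability vector on r = p(1) letters
print(t, weights)
```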
In particular, if p(x)=1+x, the system (X_p,T_p,μ_q), q∈(0,1), is the well-known Pascal adic transformation. The transformation was defined in <cit.> by A. M. Vershik [However, an isomorphic transformation was used earlier by <cit.> and <cit.>.] and was studied in many works <cit.>, see a more complete list in the last two papers. For the Pascal adic the space X_p is the infinite dimensional unit cube I={0,1}^∞, while the measures μ_q are the dyadic Bernoulli measures ∏_1^∞(q,1-q). The transformation T_p=P is defined by the following formula (see <cit.>)[P^k(x), k∈ℤ, is defined for all x except eventually diagonal ones, i.e., except those x for which there exists n ∈ℕ such that either x_k = 0 for all k ≥ n or x_k = 1 for all k ≥ n]:
x↦ Px; P(0^m-l1^l10…)=1^l0^m-l01…
(that is, only the first m+2 coordinates of x are changed). De Finetti's theorem and the Hewitt–Savage 0–1 law imply that all P-invariant ergodic measures are the Bernoulli measures μ_p = ∏_1^∞(p,1-p), where 0<p<1.
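The map is easy to implement on finite prefixes; a minimal sketch (paths whose prefix is too short, or eventually diagonal paths, are rejected):

```python
# Pascal adic map: P(0^{m-l} 1^l 1 0 ...) = 1^l 0^{m-l} 0 1 ...; equivalently,
# if x starts with a zeros, then b >= 1 ones, then a zero, the image starts with
# (b-1) ones, (a+1) zeros and a one, the tail being unchanged.  (Illustrative.)
def pascal_adic(x):
    x = list(x)
    a = 0
    while a < len(x) and x[a] == 0:      # leading zeros: a = m - l
        a += 1
    b = a
    while b < len(x) and x[b] == 1:      # block of ones of length l + 1
        b += 1
    ones = b - a
    assert ones >= 1 and b < len(x) and x[b] == 0, "prefix too short or diagonal"
    return [1] * (ones - 1) + [0] * (a + 1) + [1] + x[b + 1:]

print(pascal_adic([0, 1, 1, 0, 1]))      # -> [1, 0, 0, 1, 1]; number of 1s kept
```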
Below we enlist several known properties of the polynomial systems:
* Polynomial systems are weakly Bernoulli (the proof essentially follows <cit.> and is performed in <cit.> and <cit.>).
* Complexity function has polynomial growth rate (for the Pascal adic first term of asymptotic expansion is known to be equal to n^3/6, see <cit.>).
* Polynomial system (X_p,T_p,μ_q) defined by a polynomial p(x) = a_0+a_1x with a_0 a_1>1 has a non-empty set of non-constant eigenfunctions.
Authors of <cit.> studied limiting curves for the Pascal adic transformation (I,P,μ_q), q∈(0,1).
(<cit.>, Theorem 2.4.) Let P be the Pascal adic transformation defined on Lebesgue probability space (I, ℬ,μ_q),q∈(0,1), and g be a cylindric function from ℱ_N. Then for μ_q-a.e. x
limiting curve φ^g_x ∈ C[0,1] exists if and only if g is not cohomologous to a constant.
For the Pascal adic limiting curves can be described by nowhere differentiable functions, that generalizes Takagi curve.
(<cit.>, Theorem 1.) Let P be the Pascal adic transformation defined on the Lebesgue space (I, ℬ,μ_q), N∈ℕ and g∈ℱ_N be a not cohomologous to a constant cylindric function. Then for μ_q-a.e. x there is a stabilizing sequence l_n(x) such that the limiting function is α_g,x𝒯^1_q, where α_g,x∈{-1,1}, and 𝒯^1_q is given by the identity
𝒯^1_q (x) = ∂ F_μ_q/∂ q∘ F_μ_q^-1(x), x∈[0,1],
where F_μ_q is the distribution function[More precisely F_μ_q is distribution function of measure μ̃_q, that is image of μ_q under canonical mapping ϕ:I→[0,1], ϕ(x)= ∑_i=1^∞x_i/2^i. ] of μ_q.
The graph of 1/2𝒯^1_1/2 is the famous Takagi curve, see <cit.>.
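For numerical experiments the curve is conveniently generated from the classical series T(x)=∑_n≥0 2^-n dist(2^n x, ℤ); a short sketch:

```python
# Takagi (blancmange) function via the classical series; by the theorem above
# its graph coincides with that of (1/2) T^1_{1/2}.  (Illustrative sketch.)
def takagi(x, terms=50):
    s, scale, y = 0.0, 1.0, x % 1.0
    for _ in range(terms):
        s += scale * min(y, 1.0 - y)     # distance from y to the nearest integer
        y = (2.0 * y) % 1.0
        scale *= 0.5
    return s

print(takagi(1 / 3))                     # = 2/3, a classical value
```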
For a function g correlated with the indicator functions of i-th coordinate 1_{x_i=0}, x=(x_j)_j=1^∞∈ X Theorem <ref> was proved in <cit.>.
§.§ Combinatorics of finite paths in the polynomial adic systems
In this section we specify representation (<ref>) for polynomial adic systems.
For a finite path ω=(ω_1,ω_2,…, ω_n) we set k^1(ω) equal to nd-k(ω). Using the self-similarity of the diagram B_p we can inductively prove the following explicit expression for Num(ω):
The index Num(ω) of a finite path ω=(ω_j)_j=1^n in the lexicographically ordered set π_n,k(ω) is defined by the equality:
Num(ω) = ∑_j=2^r ∑_i=0^ω_a_j-1C_P(a_j-1;k^1(ω)-k^1(i)-m_j) +Num(ω_1),
where m_j =∑_t=j^r-1k^1(ω_a_t),2⩽ j⩽ r-1, m_r=0, and polynomial P(x) is given by the identity P(x) = x^dp(x^-1).
Remark.
If initial segment (ω_1,ω_2,…,ω_N) of ω∈π_n,k is a maximal path to some vertex (N,l), then (<ref>) can be rewritten as follows:
Num(ω) = ∑_j=N+1^r ∑_i=0^ω_a_j-1C_P(a_j-1;k^1(ω)-k^1(i)-m_j) +C_P(a_N;k^1(ω)-m_l) .
Let N and l, 0⩽ l⩽ Nd, be positive integers and ω∈π_n,k be a finite path. Function ∂_k^1^N,l: ℤ_+→ℤ_+ , k^1=nd-k, is defined by the identity
∂_k^1^N,lω = ∑_j=N+1^r ∑_i=0^ω_a_j-1C_P(a_j-1-N;k^1-k^1(i)-m_j-l),
where positive integers a_j,k^1(i),m_j, are defined as in (<ref>).
Parameters N and l correspond to shifting the origin vertex (0,0) to the vertex (N,l). Therefore the value of the function ∂_k^1^N,lω, k^1=k^1(ω), equals the number of paths from the vertex (0,0) going through the vertex (N,l) to the vertex (n,k), k=nd-k^1, not exceeding the path ω, divided by dim(N,l).
Let K_M,n,k, 1⩽ M⩽ n, denote indexes (in lexicographical order) of those paths ω=(ω_1,…, ω_n)∈π_n,k, such that their initial segment (ω_1,ω_2,…,ω_M) is maximal (as a path from (0,0) to some vertex (M,l)).
Let g∈ℱ_N. Function F̃^g,M_n,k: K_M,n,k→ℝ (where M, N⩽ M⩽ n is a positive integer) is defined by the identity
F̃_n,k^g,M(j) =∑_l=0^Ndh_M,l^g ∂_nd-k^M,l ω,
where Num(ω)=j, j∈ K_M,n,k, ω∈π_n,k.
We extend the domain of the function F̃_n,k^g,M to the whole interval [1,H_n,k] using linear interpolation. Expression (<ref>) implies that for j∈ K_M,n,k the identity F̃^g,M_n,k(j)=F^g_n,k(j) holds. Loosely speaking,
higher values of the parameter M, M>N, make the functions F̃_n,k^g,M coarser and coarser approximations of the function F^g_n,k, and the points from K_M,n,k correspond to the nodes of this approximation.
Let 1⩽ j⩽ H_n,k and g∈ℱ_N. There exists a constant C=C(g), such that the following inequality holds for all n,k:
|F̃^g,N_n,k(j)-F_n,k^g(j)|⩽ C.
Remark. If the function g equals 1 and Num(ω)=dim(n,k)≡ C_P(n,k^1(ω)), then expression (<ref>) (as well as (<ref>)) reduces to:
C_P(n,k) = ∑_l=0^NdC_P(N,l)C_P(n-N,k-l),
that is Vandermonde's convolution formula for generalized binomial coefficients.
§.§ A generalized r-adic number system on [0,1]
Let the parameter q∈(0,1/a_0) and the number t_q∈(0,1/a_1) be defined as in Theorem <ref>. We denote by r=p(1) the number of letters in the alphabet 𝒜.
Let ω=(ω_i)_i=1^∞∈ X_p,ω_i∈𝒜, be an infinite path. It is also natural to consider ω as a path in an infinite perfectly balanced tree ℳ_r.
By s̅_n = (s_n^0,…s_n^r-1)^T we denote r-dimensional vector with j-th, 0⩽ j⩽ r-1, component equal to number of occurrences of letter j among (ω_1,ω_2,…,ω_n). Let a̅_i,0⩽ i⩽ r-1, denote r-dimensional vector
(0,0,…,0_∑_j=0^i-1a_j,1,1,…,1_a_i,0,0,…,0_∑_j=i+1^da_j),
Let u· v denote scalar product of r-dimensional vectors u and v. We define mapping θ_q: X → [0,1] by the following identity:
x = ∑_j=1^∞ I_q(ω_j)q^j(t_q/q)^a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j,
where I_q(w) = a_0q^h+1/t_q^h+1+a_1t_qq^h/t_q^h+1+…+a_hq/t_q+s with w=∑_i=0^ha_i+s, 0⩽ s<a_h+1,0⩽ h<d.
Let X_0 denote the set of stationary paths.
The function θ_1/r is a canonical bijection ϕ=θ_1/r: X∖ X_0→[0,1]∖ G, G = ϕ(X_0). The function ϕ maps the measure μ_q, q∈(0,1), defined on X to the measure μ̃_q on [0,1], and the family of towers {τ_n,k}_k=0^nd to the family {τ̃_n,k}_k=0^nd of disjoint intervals. This defines an isomorphic realization T̃_p on [0,1]∖ G of the polynomial adic transformation T_p.
As shown by A. M. Vershik, any adic transformation has a cutting-and-stacking realization on a subset of full measure of the interval [0,1]. However, the nice explicit expression (<ref>) requires some regularity of the Bratteli diagram.
Conversely, any point x∈[0,1] can be represented by the series (<ref>). We call this the q-r-adic representation associated with the polynomial p(x). (If r=2, representation (<ref>) for q=1/2 is the usual dyadic representation of x∈(0,1).) Let G_q^m denote the set (vector) of all stationary numbers of rank m, m∈ℕ, i.e. numbers with a finite representation
x = ∑_j=1^m I(ω_j)∏_i=0^r-1p_i^s_j^i,
and let G_q=∪_m G_q^m be the set of all q-r-stationary numbers.
Let l∈ℕ and let x be a path in X_p. We consider the r^l-dimensional vectors K̃_n=K_n-l,n,k_n(x) and the renormalization mappings D_n,k:[1, C_p(n,k)]→[0,1] defined by D_n,k(j)=j/C_p(n,k). Using the ergodic theorem it is straightforward to show that for μ_q-a.e. x it holds lim_n→∞D_n,k_n(x)(K̃_n)=G^l_q (where convergence is the componentwise convergence of vectors).
§.§ Existence of limiting curves for polynomial adic systems
In this part we generalize Theorem 2.4 from <cit.> for polynomial adic systems (X_p,T_p) associated with positive integer polynomial p.
First we prove a combinatorial variant of the theorem. Let, as above, x∈ X_p be an infinite path going through vertices (n,k_n(x))∈ B_p. Below we write vertex (n,k_n(x)) as (n,k_n) or simply as (n,k). To simplify notation, the dimension (n,k)=C_p(n,k) is denoted by H_n,k.
We define function φ_n,k^g=φ_x∈τ_n,k(1),H_n,k^g:[0,1]→ℝ by identity
φ_n,k^g(t) = [F_n,k^g(tH_n,k)-tF_n,k^g(H_n,k)]/R_n,k^g.
Let F be a function defined on [1,H_n,k]. Define function ψ_F,n,k on [0,1] by
ψ_F(t) = [F(tH_n,k)-tF(H_n,k)]/R_n,k,
where R_n,k is a canonically defined normalization coefficient. Then the following identity holds ψ_F_n,k^g=φ_n,k^g.
Let g∈ℱ_N be a cylindric function not cohomologous to a constant.
Theorem <ref> implies that normalization sequence (R_n,k_n^g)_n⩾1 monotonically increases. Lemma <ref> shows that
|| ψ_F_n,k^g - ψ_F_n,k^g,N||_∞→ 0, n→∞.
We want to show that there is a sequence (n_j)_j⩾ 1 and a continuous function φ(t),t∈[0,1], such that
lim_j→∞|| ψ_F_n_j,k_n_j^g,N - φ||_∞=0.
Following <cit.>, we consider an auxiliary object: a family of polygonal functions ψ^M_n = ψ_F_n,k^g,n-M+N, N+1⩽ M⩽ n. The graph of each function ψ^M_n is defined by a 2r^M-dimensional array (x^M_i(n),y^M_i(n))_i=1^r^M, such that ψ^M_n(x^M_i(n))=y^M_i(n). The results of Section 3.3 show that the vector (x^M_i(n))_i=1^r^M converges pointwise to the q-r-stationary numbers G^M_q of rank M, given by the polynomial p(x).
Let l and M be positive integers, such that N+1⩽ l<M<n. Functions F_n,k^g,n-M and F_n,k^g,n-l coincide at each point from K_n-l,n,k_n, therefore functions ψ^M_n and ψ^l_n also coincide at (x^l_i(n))_i=1^r^l. Moreover, Proposition <ref> (it generalizes Proposition 3.1 from <cit.>) provides the following estimate:
|| ψ^M_n_j -ψ^l_n_j||_∞⩽ C_1e^-C_2(M-l),
with C_1,C_2>0. For a fixed M we can extract a subsequence (n_j) such that the polygonal functions ψ^M_n_j converge to a polygonal function φ^M in the sup-metric. Then, as in <cit.>, using a standard diagonalization procedure we can find a subsequence (again denoted by (n_j)_j) such that convergence to some function continuous on [0,1] holds for any M:
lim_M→∞lim sup_j→∞|| ψ_n_j^M - φ||_∞=0.
Auxiliary functions φ^M are polygonal approximations to the function φ.
Therefore we have proved the following claim, generalizing Theorem <ref>:
Let (X,T,μ_q), q∈(0,1/a_0), be a polynomial adic transformation defined on a Lebesgue probability space, and let g∈ℱ_N be a cylindric function not cohomologous to a constant. Then for μ_q-a.e. x passing through vertices (n,k_n(x)) we can extract a subsequence (n_j) such that φ_n_j,k_n_j(x)^g converges in the sup-metric to a continuous function on [0,1].
Each limiting curve φ is a limit in j of polygonal curves ψ_n_j^m,m⩾ 1, with nodes at stationary points G_q⊂[0,1]. Therefore, its values
φ(t) can be obtained as the limits lim_j→∞ψ_n_j^m(Num(ω)/H_n_j,k_n_j), where t∈ G^m_q and lim_j→∞Num(ω)/H_n_j,k_n_j = t, with Num(ω)∈ K_n_j-m,n_j,k_n_j.
The self-similar structure of the towers simplifies this task. We simply write n for n_j(x), F for F_n,k^g and R_n for R_n,k^g. The following lemma in fact generalizes the results of Section 3.1 of <cit.>:
The limiting curve is completely determined by the following limits as n→∞:
lim_n→∞1/R_n(F(L_m,i,n,k) - L_m,i,n,k/H_n,kF(H_n,k)),
where L_m,i,n,k=∑_j=0^ma_jH_n-i,k(ω)+j-di,0⩽ m⩽ d.
We may assume that δ<k_n(x)/n<d-δ for some δ>0.
First, suppose that some typical n=n(x)≫ m and k_n are taken.
We consider the set of ingoing finite paths of length m to the vertex (n,k), d≤ k≤ d(n-1), n≫ m. The self-similarity of B_p implies that these paths can be considered as paths going from the origin to some vertex (m,j), 0≤ j≤ md, of B_p, see Fig. <ref>. As shown in Section 3.3 above, each such path corresponds to a point from G_q^m, which in its turn corresponds to a q-r-adic interval of rank m. Let x_m,j, 0⩽ j⩽ md, denote the length of such an interval and y_m,j denote the increment of the function ψ_n^M on the interval (m,j). The values (x_m,j, y_m,j) may be defined inductively: for m=0 by x_0,0=1, y_0,0=0, and for m>0 and indices j such that (m-1)d<j⩽ md by x_m,j=∑_i=0^j a_iC_p(n-m,nd-k+j-di)/H_n,k, y_m,j =ψ_n^M(x_m,j); for other values of j by the recursive expressions x_m-1,i=∑_j=0^d a_jx_m,i-j, y_m-1,i=∑_j=0^d a_jy_m,i-j. Therefore the function ψ_n_j^M is totally defined by its values at x_m,j, 1⩽ m⩽ M, (m-1)d<j⩽ md. Passing to the limit we obtain the claim.
The stochastic version of Theorem <ref> is obtained from the following claim: for any ε>0 and μ_q-a.e. x there exists a subsequence n_j(x) such that Num(w^j), w^j=(x_1,… x_n_j), satisfies the condition Num(w^j)/H_n_j,k_n_j<ε. In fact, an even stronger result holds. It follows from the recurrence property of the one-dimensional random walk and was first proved by É. Janvresse and T. de la Rue in <cit.> to show that the Pascal adic transformation is loosely Bernoulli. Later it was generalized in
<cit.> for the polynomial adic systems.
For any ε>0 and μ_q×μ_q-a.e. pair of paths (x,y)∈ X× X there is a subsequence n_j such that k_n_j(x)=k_n_j(y) and the indices Num(ω_x), Num(ω_y) of the paths ω_x=(x_1,x_2,… x_n_j) and ω_y=(y_1,y_2,… y_n_j) satisfy the inequality Num(ω_z)/H_n_j,k_n_j(x)< ε, z∈{x,y}, for each j∈ℕ.
(Stochastic variant of Theorem <ref>.)
Let (X,T,μ_q), q∈(0,1/a_0), be a polynomial adic system and g be a cylindric function from ℱ_N. Then for μ_q-a.e. x
the limiting curve φ^g_x ∈ C[0,1] exists if and only if the function g is not cohomologous to a constant.
Follows from Lemma <ref>, Theorem <ref> and Theorem <ref>.
Remark
Lemma <ref> implies that an appropriate choice of the stabilizing sequence l_n(x) can provide the same limiting curve φ_x^g, lim_j→∞||φ_x,l_j(x)^g-φ_x^g||=0, for μ_q-a.e. x.
Finally we prove Proposition <ref> used above. It generalizes Proposition 3.1 from <cit.>. However, its proof needs an additional statement due to the possible non-unimodality of the generalized binomial coefficients C_p(n,k):
Let p(x)=a_0+a_1x+…+a_dx^d be a positive integer polynomial. Then the following holds:
1. There exist n_1∈ℕ and C_1>0, depending only on {a_0,…,a_d}, such that max_k{C_p(n,k+1)/C_p(n,k),C_p(n,k)/C_p(n,k+1)}⩽ C_1n for n>n_1.
2. C_p(n-1, k-i)⩽1/a_imax{k/n, 1-k/n}C_p(n,k), 0⩽ i ⩽ d.
1.
Let X be a discrete random variable on {0,1,…, d} with the distribution associated to the polynomial p(x), that is, Prob(X=k) =a_k/p(1), 0⩽ k⩽ d. The distribution of a sum Y_n=X_1+X_2+…+X_n of i.i.d. random variables X_k, 1⩽ k⩽ n, with distributions associated to the polynomial p(x), is associated to the polynomial p^n(x), i.e. Prob(Y_n=k) =C_p(n,k)/p^n(1), 0⩽ k⩽ nd. A. Odlyzko and L. Richmond showed in <cit.> that the function f_n(k)≡Prob(Y_n=k) is asymptotically unimodal, i.e., for n⩾ n_1 the coefficients C_p(n,k), 0⩽ k ⩽ nd, first increase (in k) and then decrease.
We denote by C and c the maximum and the minimum values of the coefficients {C_p(n_1,k)}_k=0^n_1d of the polynomial p^n_1(x). We also denote by a the maximum of the coefficients {a_0,…,a_d} of the polynomial p(x).
We will use induction in n to prove that C_p(n,k+1)/C_p(n,k)⩽C/cdn, 0⩽ k⩽ nd-1, n⩾ n_1. (The second estimate C_p(n,k)/C_p(n,k+1)⩽C/cdn can be proved in the same way).
We start with the base case: for n=n_1 it obviously holds that C_p(n_1, k+1)/C_p(n_1, k)⩽C/c⩽Cd/c, 0⩽ k⩽ n_1d-1.
Now assume that we have already shown C_p(n-1, k)/C_p(n-1, k-1)⩽Cd/c(n-1), where 1⩽ k⩽ d (n-1) and n≥ n_1. We need to show that C_p(n, k)/C_p(n, k-1)⩽Cd/cn, 1⩽ k⩽ dn.
C_p(n, k+1)/C_p(n, k) = ∑_i=0^d a_iC_p(n-1,k+1-i)/∑_i=0^d a_iC_p(n-1,k-i)⩽
⩽C_p(n-1,k)(a_0+a_1+…+a_d-1+d a_dC/c(n-1))/a_dC_p(n-1, k)⩽
⩽a_0+a_1+…+a_d-1-d/a_d+ C d n/c⩽ Cd/c n.
2. The statement follows directly from the following identity for the generalized binomial coefficients:
∑_i=1^dC_p(n-1,k-i)a_ii=k/nC_p(n,k).
To show it, we differentiate the identity p^n(x)=∑_k≥0 C_p(n,k) x^k, obtaining
np^n-1(x)p'(x) = ∑_k≥0 kC_p(n,k) x^k-1, p'(x)=a_1+2a_2x+…+da_dx^d-1.
It remains to compare the coefficients of x^k-1 on the two sides.
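The identity is easy to test numerically with exact rational arithmetic; a short self-contained sketch (the test polynomial is arbitrary):

```python
# Check sum_{i=1}^d i a_i C_p(n-1, k-i) = (k/n) C_p(n, k) for a sample polynomial.
from fractions import Fraction

a = (2, 1, 3)                  # p(x) = 2 + x + 3x^2 (arbitrary test case)
d = len(a) - 1

def C(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n * d:
        return 0
    return sum(a[j] * C(n - 1, k - j) for j in range(d + 1))

for n in range(1, 8):
    for k in range(n * d + 1):
        lhs = sum(i * a[i] * C(n - 1, k - i) for i in range(1, d + 1))
        assert Fraction(lhs) == Fraction(k, n) * C(n, k)
print("identity verified")
```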
The following proposition generalizes Proposition 3.1 from <cit.>. We preserve the original notation where possible.
Let N⩾ 1 be a positive integer and δ∈(0,1/4) be a small parameter. Let A=A(n̅,k̅)∈ B_p be a vertex with coordinates (n̅,k̅) satisfying
2δn̅⩽k̅⩽(d-2δ)n̅ and
2δn̅⩽n̅d-k̅⩽(d-2δ)n̅.
Let α_l, 0⩽ l⩽ Nd, be real numbers such that ∑_l=0^Ndα_l^2>0. Let n, N⩽ n⩽n̅, and let B=B(n,k) be a vertex with coordinates satisfying 0⩽ k⩽k̅, 0⩽ n-k⩽n̅-k̅. Define
γ_n,k = 1/R∑_l=0^Ndα_l C_p(n-N,k-l),
where R = R(A,B,δ) is a renormalization constant such that |γ_n,k| are uniformly in n and k from 0⩽ k⩽k̅,0⩽ n-k⩽n̅- k̅, N⩽ n⩽n̅ bounded by 2. Then there exist a constant C = C(δ,N), such that, provided n̅ is large enough, the following inequality holds for all n,k:
|γ_n,k|⩽ 3e^-C(n̅-n).
Conditions on the vertex A δ-separate it from "boundary" vertices (n̅,0) and (n̅,dn̅). Conditions on the vertex B=B(n,k) provides it can be considered as a vertex in a "flipped" graph and that it can be connected with the vertex A, see Fig. <ref>.
We can assume that n̅>2n_1, where n_1=n_1(a_0,a_1,…,a_d), is defined in the proof of Lemma <ref>.
Let l_0, 0⩽ l_0⩽ Nd, be such that coefficient α_l_0 is nonzero. We can rewrite the right hand side of (<ref>) as follows:
Rγ_n,k= C_p(n-N,k-l_0)P(n,k,l_0), N⩽ n⩽n̅, 0⩽ k⩽ nd,
where P(n,k,l_0) is defined by ∑_l=0^Ndα_l C_p(n-N,k-l)/C_p(n-N,k-l_0).
Let α denote the maximum of |α_l|, 0⩽ l⩽ Nd.
We want to show that there is a polynomial Q(x) of degree deg(Q)≤ Nd such that
|P(n,k,l_0) - P(n̅,k̅,l_0)|⩽ Q(n̅).
It is enough to show that there is c_1>0 such that |C_p(n-N,k-l)/C_p(n-N,k-l_0)|⩽ c_1 n^Nd, 0⩽ l⩽ Nd, N⩽ n⩽n̅.
The latter inequality follows from an Nd-fold application of part 1 of Lemma <ref>.
Define function Q̃ by Q̃ =P(n,k,l_0) - P(n̅,k̅,l_0). We can write
γ_n,k =1/RC_p(n-N,k-l_0)P(n,k,l_0) =
=C_p(n-N,k-l_0)/C_p(n̅-N,k̅-l_0)C_p(n̅-N,k̅-l_0)P(n,k,l_0)/R =
=C_p(n-N,k-l_0)/C_p(n̅-N,k̅-l_0)C_p(n̅-N,k̅-l_0)(P(n̅,k̅,l_0)+Q̃)/R.
By the assumption we have |γ_n̅,k̅|=|1/RP(n̅,k̅,l_0)C_p(n̅-N,k̅-l_0)|⩽ 2. Therefore inequality (<ref>) can be written as |Q̃|⩽ Q. We get
|γ_n,k|⩽3Q(n̅)C_p(n-N,k-l_0)/C_p(n̅-N,k̅-l_0).
Applying the estimate from part 2 of Lemma <ref> (n̅-n) times and using assumptions on the vertices A and B,
we obtain that C_p(n-N,k-l_0)/C_p(n̅-N,k̅-l_0)⩽ 3e^-C̃(δ)(n̅-n) for some C̃(δ)>0.
Finally we get an estimate independent of the initial choice of l_0:
|γ_n,k|⩽3Q(n̅)C_p(n-N,k-l_0)/C_p(n̅-N,k̅-l_0)⩽ 3e^-C(δ)(n̅-n)
for some C(δ)>0.
§.§ Examples of limiting curves
Let q_1 and q_2 be two numbers (parameters) from (0, 1).
We consider the function S^p_q_1,q_2:[0,1]→ [0,1] that maps a number x with q_1-r-adic representation
x = ∑_j=1^∞ I_q_1(ω_j)q_1^j(t_q_1/q_1)^a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j to
S^p_q_1,q_2(x) = ∑_j=1^∞ I_q_2(ω_j)q_2^j(t_q_2/q_2)^a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j.
For any q_1-r-stationary point x_0=∑_j=1^m I_q_1(ω_j)q_1^j(t_q_1/q_1)^a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j
and any x ∈ [0, 1] the function S^p_q_1,q_2 satisfies the following self-affinity property:
S^p_q_1,q_2(x_0+r_q_1x) = S^p_q_1,q_2(x_0) + r_q_2S^p_q_1,q_2(x),
where r_q_i = q_i^m(t_q_i/q_i)^a̅_1·s̅_m+2a̅_2·s̅_m+…+d a̅_d·s̅_m, i=1,2.
Expression (<ref>) means that the graph of S^p_q_1,q_2 considered on the q-r-adic interval [x_0, x_0+r_q_1] coincides after renormalization with the graph of S^p_q_1,q_2 on the whole interval [0,1]. Also for q_1=1/r function S^p_1/r,q_2 is the distribution function of the measure μ̃_q_2.
Functions S^p_q_1,q_2(·) allow us to define new functions
T_p,q_1^k := ∂^k S^p_q_1,q_2/∂ q_2^k|_q_2=q_1, k∈ℕ.
If k=0 we will assume that T_p,q^0 (x) =x.
For q=1/2 and k=1 the function 1/2T_1+x,1/2^1 is the Takagi function, see <cit.>. The function T_p,q_1^k on the interval [x_0, x_0+r_q_1] can be expressed as a linear combination of the functions T_p,q_1^j, 0⩽ j⩽ k. (The expression can be easily obtained by differentiating identity (<ref>) with respect to the parameter q_2 and then setting q_2 equal to q_1.)
Functions T_p,q^k, q∈(0,1/a_0), k≥1, are continuous functions on [0,1].
The proof is based on the fact that any two points x and y from the same q-r-adic interval of rank m have the same coordinates (ω_1, ω_2,…, ω_m) in q-r-adic expansion. This provides a straightforward estimate for the difference
|T_p,q^1(x) - T_p,q^1(y)|.
Let b=b_q denote the ratio t_q/q. As shown in Section 3.3 above any x in (0,1) can be coded by a path ω=(ω_i)_i=1^∞, ω_i∈{0,1…,r-1}=𝒜, in r-adic (perfectly balanced) tree ℳ_r. The function
T_p,q^1 maps x=x_q∈[0,1] with the q-r-adic series representation
x=∑_j=1^∞(∑_i=0^ω_j-1b^i)q^jb^a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j,
to z=∂/∂ q x_q.
Let s̃_j denote the sum a̅_1·s̅_j+2a̅_2·s̅_j+…+d a̅_d·s̅_j.
Derivative ∂/∂ q(q^jb^l) equals to q^j-1b^l-1[(j-l)b+lt'_q],
where l =s̃_j-ω_j+i.
Using implicit function theorem we find that
t'_q = -a_0d q^d-1+a_1(d-1)q^d-2t_q+…+a_d-1t_q^d-1-(d-1)q^d-2/a_1 q^d-1+2a_2q^d-2t_q+… a_dt_q^d-1d.
Let a denote the maximum of the coefficients {a_0,…,a_d} of the polynomial p(x).
We have |t'_q|≤2ad^2/q. Let p_max∈(0,1) denote the maximum of {q,t_q,t_q^2/q,…,t_q^d/q^d-1}.
Assume y is the left boundary of some q-r-adic interval of rank m, containing point x. Then the following inequality holds (we simply write T for T_p,q^1):
|T(y)-T(x)| ≤∑_j=m^∞∑_i=0^ω_j-1|∂/∂ q( q^jb^s̃_j+i)|.
Using estimate | q^jb^s̃_j+i |≤ (p_max)^j, 0≤ i≤ r-1, we see that the absolute value of ∂/∂ q(q^jb^l) for j>2 is estimated by expression P(j,q)(p_max)^j-2, where P(j,q) is some polynomial. Define ε to be equal to 0.99. Then for m large enough it holds:
|T(y)-T(x)|≤∑_j=m^∞∑_i=0^ω_j-1P(j,q)(p_max)^j-2≤ C(p_max)^mε,
where C is some constant.
In general case we can assume that points x and x+δ are from some q-r-adic interval of rank m=m(δ), lim_δ→ 0m(δ)=+∞, and let y be the left boundary point of this interval. Then
|T(x+δ)-T(x)| ≤ |T(y)-T(x)|+|T(y)-T(x+δ)|≤ 2Cp_max^mε
For k>1 we can use a similar argument based on the following estimate for the k-th derivative: |∂^k/∂ q^kq^jb^l| ≤ P_k(j,q)(p_max)^j-k-1, where j>k and P_k(j,q) is some polynomial.
For a cylindrical function g=- ∑_j=0^dja_j1_{k^1(x_1)=j}∈ℱ_1 and for μ_q-a.e. x there is a stabilizing sequence l_n(x) such that the limiting function is T_p,q^1.
For simplicity we will present the proof for p(x)=1+x+x^2.
The general case follows the same steps. Theorem <ref> implies that we can find the limiting function φ(x) as lim_n→∞φ_n,k, where (by the law of large numbers) k_n/n→𝔼_μ_qk_1. Lemma <ref> implies that it is sufficient to show that the function φ(x) coincides with T_p,q^1 at x=q^j and x=q^j-1(q+t_q), where j∈ℕ.
The function T_p,q^1 maps the point x =q^j to ∂/∂ q q^j =jq^j-1 and the point x=q^j-1(q+t_q) to q^j-2(jq+(j-1)t_q+t'_qq).
Using expression (<ref>) we see that
t'_q =[1-(2q+t_q)]/(2t_q+q).
Identity (<ref>) implies that h_n,k^g= k/nH_n,k. We need to find the following limits for i∈ℕ, n→∞ and k_n/n→𝔼_μ_qk_1 = 2q+t_q (we write F for F_n,k):
* lim1/R_n(F(H_n-i,k-2i) - H_n-i,k-2i/H_n,kF(H_n,k))
* lim1/R_n(F(H_n-i,k-2i+H_n-i,k-2i+1) - H_n-i,k-2i+H_n-i,k-2i+1/H_n,kF(H_n,k))
We define the normalizing coefficient R_n by
R_n=qH_n,k/n(2-𝔼_μ_qk_1). After some computations we see that the first limit equals iq^i-1, and the second one equals q^i-2(iq+(i-1)t_q+t'_qq). This shows that the limiting function φ coincides with the function T_p,q^1 on a dense set of q-r-stationary points. Therefore, by Theorem <ref> these functions coincide.
Numerical simulations show that the limiting functions T_p,q^k, k⩾1, and their linear combinations arise as limiting functions
lim_n→∞φ_n,k_n^g for a general cylindrical function g∈ℱ_N.
We do not have a proof of this statement except for the case of the Pascal adic, see Theorem <ref> above. Expression (<ref>) shows that for a cylindrical function g∈ℱ_N the partial sum F^g_n,k is defined by the coefficients h_N,k^g, 0≤ k≤ Nd. It seems useful to define h_N,k^g_m by the generating function h_N,k^g_m=coeff[v^m] (h_0+h_1v+…+h_dv^d)^k(p(v))^Nd-k, where the functions g_m form an orthogonal basis. (For the Pascal adic the function (1-av)^k(1+v)^n-k, a = (1-q)/q, is the generating function of the Krawtchouk polynomials and the basis g_m is the basis of Walsh functions, see <cit.>).
§ LIMIT OF LIMITING CURVES
In this section we answer the question by É. Janvresse, T. de la Rue and Y. Velenik from <cit.>, page 20, Section 4.3.1.
Let q∈ (0,1) and t_q∈(0,1) be the unique solution in (0,1) of the equation
q^d+q^d-1t+…+t^d=q^d-1.
As above, we denote by b=b_q the ratio t_q/q. Any x in (0,1) has an almost unique (d+1)-adic representation:
x = ∑_j=1^∞(∑_i=0^ω_j-1b^i)q^jb^s_j^1+2s_j^2+… ds_j^d-ω_j,
where ω=(ω_i)_i=1^∞, ω_i∈{0,1…,d}=𝒜, is a path in (d+1)-adic (perfectly balanced) tree ℳ_d+1 and s_j^k is the number of occurrences of letter k among (ω_1,ω_2, …, ω_j).
We denote by S_q(x) the (analytic in the parameter q) function defined by the (uniformly summable in x) series (<ref>). We put q_* equal to 1/(d+1) (this is the so-called symmetric case t_q_*=q_*). If d=1, representation (<ref>) for q_*=1/2 is the usual dyadic representation of x∈(0,1).
The authors of <cit.> were interested in the limiting behavior of the graph of the function
T_d: x ↦1/(d+1) ∂ S_q/∂ q|_q=q_*
for large values of d (we also introduced a vertical normalization by d+1; if d=1 the graph of 1/2T_1 is the Takagi curve). On the basis of a series of numerical simulations they noticed that the limiting curves for d→∞ seem to converge to a smooth curve. Below we will show that the limiting curve for d=∞ is actually a parabola, see Fig. <ref>.
We are going to split the unit interval into d+1 subintervals I_i=(i/(d+1); (i+1)/(d+1)), 0≤ i≤ d, of equal length and evaluate the function T_d at the (left) boundary point of each of these intervals.
We also want to show that the function T_d is uniformly in d bounded on these intervals. After that we pass to the limit in d.
Symmetry assumption q = t_q = q_* and implicit function theorem (see (<ref>)) imply that
t'_q_* = -(2-d)/d.
In its turn this implies b'_q|_q=q_* = (t'_q-b_q)/q|_q=q_* = -2(d+1)/d.
Finally we find that ∂/∂ q (q^jb^r)|_q=q_* = jq^j-1b^r+rb^r-1q^jb'_q |_q=q_* =(q_*)^j-1(j+rq_*(-2(d+1)/d))= (q_*)^j-1(j - 2r/d).
Note that the left boundary point a_d of I_ad, a∈[0,1], ad ≡ [ad] ([·] is the integer part), equals a_d=ad/(d+1) and is coded by the stationary path ω = (ω_j)_j=1^∞∈ℳ_d+1 with ω_1 = ad and ω_j ≡ 0, j≥2.
We have
T_d(a_d) = 1/(d+1)∑_i=0^ad-1(1-2i/d) = (d+1)/d· a_d(1-a_d).
This shows that the smooth curve (if it exists) should be a parabola.
To complete the proof of the theorem it only remains to show that |T_d(x)-T_d(a_d)| is uniformly small in d on the intervals I_ad. Analogously to (<ref>) we see that for x∈ I_ad it holds
|T_d(x)-T_d(a_d)| = |1/(d+1)∑_j=2^∞ q_*^j-1∑_i=0^ω_j-1(j-2(s_j^1+2s_j^2+… + ds_j^d-ω_j+i)/d)|≤100d/(d+1)∑_j=2^∞ j q_*^j-1≤ 400d/(d+1)^2.
That finishes our proof.
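The computation can be cross-checked by evaluating T_d directly from the digit series; a sketch (truncation depth and test points are arbitrary):

```python
# Evaluate T_d(x) at q_* = 1/(d+1) from the base-(d+1) digits of x and compare
# with the parabola (d+1)/d * x(1-x) derived above.  (Illustrative sketch.)
def T_d(x, d, terms=40):
    qs = 1.0 / (d + 1)
    total, prefix, y = 0.0, 0, x % 1.0   # prefix = omega_1 + ... + omega_{j-1}
    for j in range(1, terms + 1):
        y *= d + 1
        w = int(y)                       # digit omega_j
        y -= w
        # sum_{i=0}^{w-1} q_*^{j-1} (j - 2(prefix + i)/d), summed in closed form
        total += qs ** (j - 1) * (w * j - 2.0 * (w * prefix + w * (w - 1) / 2.0) / d)
        prefix += w
    return total / (d + 1)

d = 50
for x in (0.2, 0.5, 0.8):
    print(T_d(x, d), (d + 1) / d * x * (1 - x))   # agree up to O(1/d) corrections
```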
§.§ Question.
We may heuristically interpret the results of Section 4 as the existence of a limiting curve for a dynamical system defined by a diagram with an "infinite" number of edges.
This leads us to the following questions: Does this system really exist? How can it be defined correctly? Which properties does it have?
10
Ver81
A. M. Vershik, Uniform algebraic approximations of shift and multiplication operators, Sov. Math. Dokl., 24:3 (1981), 97–100.
ver82
A. M. Vershik, A theorem on periodical Markov approximation in ergodic theory, J. Sov. Math., 28 (1982), 667–674.
VerLiv92 A. M. Vershik and A. N. Livshits,
Adic models of ergodic transformations, spectral theory, and related topics,
Adv. in Soviet Math. AMS Transl., 9, 1992, 185–204.
Vershik11
A. M. Vershik, The Pascal automorphism has a continuous spectrum,
Funct. Anal. Appl., 45:3 (2011), 173–186.
Ver14 A. M. Vershik,
The problem of describing central measures on the path spaces of graded graphs,
Funct Anal Its Appl .,
48:4 (2014), 256–271.
Ver15 A. M. Vershik, Several Remarks on Pascal Automorphism
and Infinite Ergodic Theory, Armenian Journal of Mathmatics, 7:2 (2015), 85–96.
Kac96 A. G. Kachurovskii,
The rate of convergence in ergodic theorems,
Russian Mathematical Surveys, 51, 4, (1996)
653–703.
MinMan13
I. E. Manaev, A. R. Minabutdinov, The Kruskal-Katona Function, Conway Sequence, Takagi Curve, and Pascal Adic, Transl: J. Math. Sci.(N.Y.), 196:2 (2014), 192–198.
Min14
A. R. Minabutdinov, Random Deviations of Ergodic Sums for the Pascal Adic Transformation in the Case of the Lebesgue Measure, Transl: J. Math. Sci.(N.Y.), 209:6, (2015), 953–978.
MinKraw
A. R. Minabutdinov, A higher-order asymptotic expansion of the Krawtchouk polynomials, Transl.: J. Math. Sci.(N.Y.) 215:6 (2016), 738–747.
LodMin15
A. A. Lodkin,A. R. Minabutdinov,
Limiting Curves for the Pascal Adic Transformation,
Transl: J. Math. Sci.(N.Y.),
216:1 (2016),
94–119.
Bailey2006
S. Bailey,
Dynamical properties of some non-stationary, non-simple Bratteli-Vershik systems,
Ph.D. thesis, University of North Carolina, Chapel Hill,
2006.
HajanItoKakutani A. Hajan, Y. Ito, S. Kakutani,
Invariant measure and orbits of dissipative transformations,
Adv. in Math., 9:1 (1972), 52–65.
Halasz1976
G. Halasz,
Remarks on the remainder in Birkhoff's ergodic theorem,
Acta Mathematica Academiae Scientiarum Hungarica,
28:3-4 (1976), 389–395.
PascalLooselyBernoulli
É. Janvresse, T. de la Rue,
The Pascal adic transformation is loosely Bernoulli,
Annales de l'Institut Henri Poincaré (B) Probability and
Statistics, 40:2 (2004), 133 – 139.
DeLaRue
É. Janvresse, T. de la Rue, and Y. Velenik,
Self-similar corrections to the ergodic theorem for the Pascal-adic
transformation, Stoch. Dyn., 5:1 (2005), 1–25.
Kakutani1976 S. Kakutani, A problem of equidistribution on the unit interval [0, 1], in: Lecture Notes in Math., vol. 541, Springer-Verlag, Berlin, (1976) 369–375.
Kruppel M. Krüppel, De Rham's singular function, its partial derivatives with respect to the parameter and binary digital sums,
Rostocker Math. Kolloq., 64 (2009), 57–74.
Mela2006 X. Méla,
A class of nonstationary adic transformations,
Ann. Inst. H. Poincaré Prob. and Stat.,
42:1 (2006), 103–123.
MelaPetersen
X. Méla, K. Petersen, Dynamical properties of the Pascal adic transformation,
Ergodic Theory Dynam. Systems, 25:1 (2005), 227–256.
OldyzkoRichmond1985 A. M. Odlyzko, L. B. Richmond, On the Unimodality of High Convolutions of Discrete Distributions,
Ann. Probab.,
13 (1985), 299–306.
Takagi1903 T. Takagi,
A simple example of the continuous function without derivative,
Proc. Phys.-Math. Soc., 1 (1903), 176–177.
|
http://arxiv.org/abs/1701.08175v1 | 20170127191655 | Electromagnetically Induced Transparency with Superradiant and Subradiant states | [
"Wei Feng",
"Da-Wei Wang",
"Han Cai",
"Shi-Yao Zhu",
"Marlan O. Scully"
] | physics.atom-ph | [
"physics.atom-ph"
] |
Beijing Computational Science Research Center, Beijing 100193,
China
Texas A&M University, College Station, TX 77843, USA
[email protected]
Texas A&M University, College Station, TX 77843, USA
Texas A&M University, College Station, TX 77843, USA
Beijing Computational Science Research Center, Beijing 100193,
China
Department of Physics, Zhejiang University, Hangzhou 310027, China
Texas A&M University, College Station, TX 77843, USA
Baylor University, Waco, Texas 76706, USA
Xi'an Jiaotong University, Xi'an, Shaanxi 710048, China
We construct electromagnetically induced transparency (EIT) by dynamically coupling a superradiant state with a subradiant state. The superradiant
and subradiant states, with enhanced and inhibited decay rates, act as the excited and metastable states in EIT, respectively. Their energy difference,
determined by the distance between the atoms, can be measured from the EIT spectra, which renders this method useful for subwavelength metrology.
The scheme can also be applied to many atoms in nuclear quantum optics, where the transparency point due to counter-rotating wave terms can be observed.
42.50.Nn, 42.50.Ct
Electromagnetically Induced Transparency with
Superradiant and Subradiant states
Marlan O. Scully
December 30, 2023
================================================================================
Introduction.–Electromagnetically induced transparency
(EIT) <cit.> is a quantum optical mechanism that is
responsible for important
phenomena such as slow light <cit.>,
quantum memory <cit.> and enhanced
nonlinearity <cit.>. A probe field that resonantly
couples the transition from the ground state |g⟩ to an excited
state |e⟩ of an atom experiences a transparency point at
the original Lorentzian absorption peak, if the excited state is coherently
and resonantly coupled to a metastable state |m⟩. EIT involves
at least three levels and naturally three-level atoms
are used in most cases. However, proper three-level structures are not available
in some optical systems, such as in atomic nuclei
<cit.>
and biological fluorescent molecules <cit.>, in which
EIT can have important applications once realized. Interestingly, it has been shown that
even with only two-level systems, EIT-like spectra can
be achieved by locally addressing the atomic ensembles
<cit.>.
However, a strict EIT scheme with a dynamic coupling field is still absent in two-level optical systems.
Superradiance and subradiance are the enhanced and inhibited
collective radiation of many atoms
<cit.>, associated with the collective Lamb shifts <cit.>. The superradiance and subradiance of two interacting atoms have attracted much interest both theoretically <cit.> and experimentally <cit.>.
In this Letter, we use superradiance and subradiance to construct
EIT and investigate the new features in the EIT absorption spectrum involving the cooperative effect and the counter-rotating wave terms. For only two
atoms, the symmetric (superradiant) state has a much larger
decay rate than the anti-symmetric (subradiant) state when the distance between the two atoms
is much smaller than the transition wavelength. These two states serve as the excited and the metastable states and their splitting, depending on the distance between the atoms, can be measured by the EIT spectra.
In addition, the counter-rotating wave terms in the effective coupling field between the superradiant and subradiant states bring an additional transparency point, which is usually not achievable in traditional EIT systems with three-level atoms.
Mechanism.–Two two-level atoms have four quantum
states, a ground state |gg⟩, two first excited states |ge⟩
and |eg⟩, and a doubly excited state |ee⟩. Considering
the interaction between the two atoms, the
eigenbasis of the first excited states is composed of the symmetric and anti-symmetric states,
|+⟩ = 1/√(2)[|eg⟩ + |ge⟩],   |-⟩ = 1/√(2)[|eg⟩ - |ge⟩],
with decay rates γ_±=γ_0±γ_c and energy
shifts Δ_±=±Δ_c. Here γ_0
is the single atom decay rate, γ_c and Δ_c are the
collective decay rate and energy shift (see Supplementary Material
<cit.>). When the distance between the two atoms r≪λ
where λ is the transition wavelength, we have
γ_c→γ_0
and thus γ_+→2γ_0 and γ_-→0.
The collective energy shift Δ_c diverges as 1/r^3.
A weak probe field can only resonantly excite |+⟩ from |gg⟩
since the collective energy shift Δ_c moves the transition
between |+⟩ and |ee⟩ out of resonance with the probe
field <cit.>. We can neglect the two-photon absorption for a weak probe field <cit.>.
The states |gg⟩, |+⟩ and |-⟩ form a three-level
system, as shown in Fig. <ref> (a). The symmetric and the anti-symmetric states satisfy the requirement
on the decoherence rates for EIT, i.e., γ_+≫γ_-
when r≪λ. The eigenenergies of |±⟩ states are
split by the collective energy shift.
The challenge is how to resonantly couple
|+⟩ and |-⟩ states. The key result of this Letter is that |+⟩ and |-⟩ states can be coupled
by two off-resonant counter-propagating plane waves with different frequencies ν_1 and ν_2. If the frequency
difference ν=ν_1-ν_2 matches the splitting 2Δ_c between the |+⟩ and |-⟩ states, we obtain
resonant coupling via two Raman transitions, as shown in Fig. <ref> (b).
The resulting Hamiltonian is (assuming ħ=1) <cit.>,
H= ω_+|+⟩⟨+|+ω_-|-⟩⟨-|+Ω_c(t)(|+⟩⟨-|+|-⟩⟨+|)
-Ω_p(e^-iν_pt|+⟩⟨ gg|+h.c.),
where Ω_c(t)=Ω_0sin(kr)sin(ν t-ϕ)
with k=ν_s/c, ν_s=(ν_1+ν_2)/2, r=x_1-x_2 and ϕ=k(x_1+x_2) with x_1,2 being
the coordinates of the two atoms along the propagation of the plane waves. The coupling strength Ω_0=E^2d^2/(ω-ν_s) with E being the amplitude of the electric field of the plane waves, d being the transition matrix element of the atoms and ω being the single atom transition frequency. The transition frequencies of |±⟩ states are ω_±=ω±Δ_c+δ_u(t) with δ_u(t)=Ω_0[1+cos(kr)cos(ν t-ϕ)] being a universal Stark shift induced by the two plane waves.
The absorption spectra can be calculated by the Liouville equation,
∂ρ/∂ t= -i[H,ρ]+∑
_j=+,-γ_j/2[2|gg⟩⟨ j|ρ|j⟩⟨ gg|
-|j⟩⟨ j|ρ-ρ|j⟩⟨ j|].
Since H is time-dependent with frequency ν, the coherence
can be expanded ⟨+|ρ|gg⟩=∑_nρ_+gg^[n]e^inν t.
Eq.(<ref>) can be solved with the Floquet theorem <cit.> and the
absorption
is proportional to Imρ_+gg^[0], the imaginary part
of the zero frequency coherence.
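A brute-force numerical sketch of this procedure (a simple RK4 integration of the master equation in the frame rotating at the probe frequency; all parameter values below are illustrative assumptions, not those of the figures):

```python
import numpy as np

g0 = 1.0                          # gamma_0, our unit of rate
gc = 0.9 * g0                     # gamma_c for r < lambda (assumed value)
gp, gm = g0 + gc, g0 - gc         # gamma_+ and gamma_-
Dc = 20.0 * g0                    # collective shift Delta_c (assumed)
W0, kr, phi = 2.0 * g0, 0.3, 0.0  # Omega_0, k*r and phase (assumed)
nu = 2.0 * Dc                     # resonant modulation nu = 2 Delta_c
Wp = 0.02 * g0                    # weak probe Rabi frequency

ket = np.eye(3)                   # basis: 0 -> |gg>, 1 -> |+>, 2 -> |->
J = [np.outer(ket[0], ket[1]), np.outer(ket[0], ket[2])]   # decay |+/-> -> |gg>
rates = [gp, gm]

def H(t, dp):                     # rotating-frame Hamiltonian, dp as in the text
    du = W0 * (1.0 + np.cos(kr) * np.cos(nu * t - phi))    # universal Stark shift
    Wc = W0 * np.sin(kr) * np.sin(nu * t - phi)            # coupling Omega_c(t)
    h = np.zeros((3, 3))
    h[1, 1] = dp - W0 + du
    h[2, 2] = dp - W0 - 2.0 * Dc + du
    h[1, 2] = h[2, 1] = Wc
    h[0, 1] = h[1, 0] = -Wp
    return h

def drho(t, r, dp):               # right-hand side of the Liouville equation
    out = -1j * (H(t, dp) @ r - r @ H(t, dp))
    for g, L in zip(rates, J):
        out += g * (L @ r @ L.T - 0.5 * (L.T @ L @ r + r @ L.T @ L))
    return out

def absorption(dp, T=80.0, dt=2e-3):
    r = np.outer(ket[0], ket[0]).astype(complex)           # start in |gg>
    acc, navg = 0.0, 0
    for s in range(int(T / dt)):
        t = s * dt
        k1 = drho(t, r, dp); k2 = drho(t + dt/2, r + dt/2 * k1, dp)
        k3 = drho(t + dt/2, r + dt/2 * k2, dp); k4 = drho(t + dt, r + dt * k3, dp)
        r += dt / 6 * (k1 + 2*k2 + 2*k3 + k4)
        if t > 0.75 * T:          # time-average Im rho_{+gg} after the transient
            acc += r[1, 0].imag; navg += 1
    return acc / navg

for dp in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(dp, absorption(dp))     # a dip near dp = 0 signals the transparency
```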
The counter-rotating wave terms of Ω_c(t) can be neglected
for small distance between the two atoms and weak coupling field when
Ω_0sin(kr)≪Δ_c.
We obtain typical EIT absorption spectra with two absorption peaks
and one transparency point, as shown in the black curve of Fig.
<ref> (a). Here the probe detuning δ_p=ω+Δ_c+Ω_0-ν_p has
taken into account all the static energy shifts of |+⟩ state, including
Ω_0, the static part of the universal Stark shift δ_u(t). The
effect of the counter-rotating wave terms and the universal shift δ_u(t)
emerges either when we increase the distance (reduce Δ_c) between the two atoms
or increase the dynamic Stark shift Ω_0 (proportional
to the intensity of the standing wave), as demonstrated by the multiple side
peaks in Fig. <ref> (a).
We can use the following procedure for the subwavelength metrology,
as shown in Fig. <ref> (b). We first reduce
the intensity of the standing wave to only allow two peaks to appear
in the spectra. Then we tune the frequency difference ν until
the two absorption peaks become symmetric, which yields the collective
energy shift Δ_c=ν/2. The distance between the two atoms
can be obtained by the relation between Δ_c(r)
and r <cit.>. Since Δ_c(r)∝1/r^3 for small
distance
r≪λ, the sensitivity δΔ_c/δ r∝1/r^4.
Compared with the existing proposals for subwavelength imaging of two interacting atoms with fluorescence <cit.>, a natural advantage of this EIT metrology is that both the dressing and the probe fields are weak. This is in
particular useful for biological
samples that cannot sustain strong laser fields.
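The scaling can be summarized by a two-line error propagation (only the stated 1/r^3 behavior is used; the prefactor A below is hypothetical):

```python
# If Delta_c = A / r^3, then r = (A / Delta_c)^(1/3) and dr/r = dDc / (3 Dc),
# i.e. |d Delta_c / d r| = 3A / r^4 grows rapidly at small r.  (Illustrative.)
A = 1.0                       # orientation-dependent prefactor (assumed)
Dc, dDc = 8.0, 0.1            # measured shift and its uncertainty (in units of A)

r = (A / Dc) ** (1.0 / 3.0)
dr = r * dDc / (3.0 * Dc)
print(r, dr)                  # the relative precision improves as r decreases
```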
The above mechanism can also be understood as a dynamic modulation of the transition frequency difference between the two atoms <cit.>. We notice that the difference
between |+⟩ and |-⟩ states is a relative π
phase factor between |eg⟩ and |ge⟩ states.
If we can control the transition frequencies of the two atoms
such that the states |eg⟩ and |ge⟩ have energy shifts
Ω_c and -Ω_c respectively, an initial state of the symmetric state
|ψ(0)⟩=|+⟩ evolves with time
|ψ(t)⟩=(e^-iΩ_ct|eg⟩+e^iΩ_ct|ge⟩)/√(2).
At t=π/(2Ω_c), we obtain |ψ(t)⟩=-i|-⟩. Therefore,
the states |+⟩ and |-⟩ are coupled by an
energy difference between the two atoms. In our scheme, the two counter propagating plane waves create a moving standing wave that induces a time-dependent dynamic Stark shift difference between the two atoms, Ω_c(t), which serves as the coupling field. This picture enables us to generalize the mechanism to many atoms, as shown later.
The single atom EIT <cit.> and the superradiance and subradiance of
two ions <cit.> have been observed in experiments. The coupling
between the symmetric and anti-symmetric states has also been realized with two
atoms trapped in an optical lattice <cit.>. In particular, the
cryogenic fluorescence of two interacting terrylene molecules has been used for
spectroscopy with nanometer resolution <cit.>. Due to different
local electric fields, the two molecules have different transition frequencies,
which corresponds to a static coupling field Ω_c. By introducing an
oscillating electric field gradient or a moving standing wave, such a system can
be exploited for the current EIT experiment of superradiance and subradiance. Very recently, superradiance was also observed from two silicon-vacancy centers embedded in diamond photonic crystal cavities <cit.>, which provide another platform to realize this mechanism.
Generalization to many atoms.–The mechanism
can be extended to large ensembles of two-level systems. Let us consider
two atomic ensembles, one with |e⟩ and |g⟩, and
the other with |a⟩ and |b⟩ as their excited and
ground states. Each ensemble has N atoms and both ensembles are
spatially mixed together. The transition frequency difference between the two atomic ensembles is within the linewidth such that a single photon
can excite the two ensembles to a superposition of two timed Dicke
states <cit.>,
|+_𝐤⟩=1/√(2)(|e_𝐤⟩+|a_𝐤⟩)
where
|e_𝐤⟩=1/√(N)∑
_n=1^Ne^i𝐤·𝐫_n|g_1,...,e_n,...,g_N⟩⊗|b_1,b_2,...,b_N⟩,
|a_𝐤⟩=|g_1,g_2,...,g_N⟩⊗1/√(N)∑
_n=1^Ne^i𝐤·𝐬_n|b_1,...,a_n,...,b_N⟩.
Here 𝐫_n and 𝐬_n are the positions of
the nth atom in the two ensembles. 𝐤 is the wave vector
of the single photon. The timed Dicke states |e_𝐤⟩
and |a_𝐤⟩ are excited from the same ground state
|G⟩≡|g_1,g_2,...,g_N⟩⊗|b_1,b_2,...,b_N⟩
by a single photon. They have directional emission in the direction
of 𝐤, so as their superposition state |+_𝐤⟩,
associated with enhanced decay rate and collective Lamb shift. On
the other hand, the state
|-_𝐤⟩=1/√(2)(|e_𝐤⟩-|a_𝐤⟩),
is a subradiant state in the sense that its decay rate is estimated to be similar to that of a single atom
<cit.>. The directional emissions of |e_𝐤⟩
and |a_𝐤⟩ are canceled because of the relative
phase factor -1 between them. The collective Lamb shift of |-_𝐤⟩ can be very
different from that of the |+_𝐤⟩ state.
We can dynamically couple |+_𝐤⟩ and |-_𝐤⟩
states in a well studied nuclear quantum optical system <cit.>, as shown in Fig. <ref>. The nuclei embedded in a
waveguide are ^57Fe with the transition frequency ω=14.4keV
and the linewidth γ_0=4.7neV. In the
presence of a magnetic field, the ground and excited states with
I_g=1/2
and I_e=3/2 split into multiplets with Zeeman energy splitting
δ_j (j=e,g).
Applying a magnetic field 𝐁 parallel to the incident and outgoing
electric fields 𝐄_in and 𝐄_out and
perpendicular to 𝐤, the linearly polarized
input x-ray can only couple two transitions, as shown in Fig. <ref>. At
room temperature, the populations on the two magnetic sublevels of the
ground state are approximately equal <cit.>. Here we can use a magnetically soft ^57FeNi absorber foil with zero magnetostriction <cit.> to avoid the mechanical sidebands and other complications in a time-dependent external magnetic field.
The Hamiltonian in the interaction picture can be written as,
H
= Ω_c(|+_𝐤⟩⟨ -_𝐤|e^-iω_0t
+|-_𝐤⟩⟨
+_𝐤|e^iω_0t)
-Ω_p(e^-iδ_pt|+_𝐤⟩⟨ G|+h.c.),
where Ω_c=Ω_1cos(ν t), with Ω_1=(δ_g+δ_e)/2, is induced by a magnetic field B=B_0cos(ν t).
ω_0 is the collective Lamb shift difference between the states
|+_𝐤⟩ and |-_𝐤⟩.
δ_p is the probe detuning from the |+_𝐤⟩ state.
The reflectance of the thin film cavity is dominated by the coherence |ρ_+G|^2 where ρ_+G≡⟨ +_𝐤|ρ|G⟩ (see <cit.>),
|R|^2∝lim_T→∞1/T∫_0^T|ρ_+G(t)|^2
dt=∑_n|ρ_+G^[n]|^2,
where we have averaged over a time interval T≫ 1/ν. The
coherence ρ_+G has
multiple frequency components, ρ_+G(t)=∑_nρ_+G^[n]e^i2nν t, due to the
counter-rotating wave terms. Only for ν=0 is no time average needed.
The typical collective Lamb shift
of ^57Fe nuclear ensemble is 5∼10γ_0
<cit.>.
The internal magnetic field in the ^57Fe
sample can be tens of Tesla in an external radio-frequency field
<cit.>.
The effective coupling field Rabi frequency Ω_1
can be easily tuned from zero to 20γ_0. The magnetic field
amplitudes corresponding to the effective coupling strengths
Ω_1=5γ_0
and Ω_1=20γ_0 taken in Fig. <ref> (a) and (b) are
B_0=5.3T
and B_0=21.3T, respectively.
The reflectance spectra can be used to investigate the effect of the
counter-rotating wave terms of the coupling field and to determine the collective
Lamb shift. For a relatively small Ω_1, there are two dips in a single
Lorentzian peak, as shown in Fig. <ref> (a). The left and right ones
correspond to the rotating and counter-rotating wave terms of the coupling field,
respectively. The distance between the two dips is approximately 2ν. When
ν=0, these two dips merge and the spectrum is the same as the one of the
previous EIT experiments with a static coupling between two ensembles mediated by
a cavity <cit.>. For a larger Ω_1=20γ_0 in Fig.
<ref> (b), we still have the two dips since Ω_1<γ_+ and the
vacuum induced coherence still exists <cit.>, but we also have two
peaks basically corresponding to the two magnetic transitions in Fig. <ref>.
Compared with the result in <cit.> where |+_𝐤⟩ and
|-_𝐤⟩ have the same energy and the magnetic field is static,
here the two peaks are not symmetric for ν=0 due to a finite Lamb shift
difference. Therefore, the results can be compared with experimental data to
obtain the collective Lamb shifts.
In conclusion, we construct an EIT scheme by dynamically coupling the
superradiant state with the subradiant state. The interaction between atoms can
be measured by the EIT spectra. Compared with the EIT-like schemes with a static
coupling in atomic ensembles <cit.>, the local dynamical modulation of the transition frequencies of
the atoms introduces a tunable detuning for the coupling field. Therefore, our
scheme contains all the ingredients of EIT. In particular, for the systems where
the splitting between the superradiant and subradiant states is larger than the
decay rate of the superradiant state, the dynamic modulation can bring the EIT
dip to the Lorentzian absorption peak of the superradiant state, as shown in
Fig. <ref> (b). The dynamic modulation enables a
precise measurement of the distance between two atoms and brings new physics of
the EIT point due to counter-rotating wave terms.
The authors thank G. Agarwal, A. Akimov, A. Belyanin, J. Evers, O.
Kocharovskaya, R. Röhlsberger and A. Sokolov for insightful discussions. We acknowledge the support of National Science Foundation Grant EEC-0540832 (MIRTHE ERC), Office of Naval Research (Award No. N00014-16-1-3054) and Robert A. Welch Foundation (Grant No. A-1261). Wei Feng was supported by China Scholarship Council (Grant No. 201504890006). H. Cai is supported by the Herman F. Heep and Minnie Belle Heep Texas A&M University Endowed Fund held/administered by the Texas A&M Foundation.
10
Harris1991K.-J. Boller, A. Imamoğlu, and S. E. Harris,
Phys. Rev. Lett. 66, 2593 (1991).
FleischhauerRMPM. Fleischhauer, A. Imamoğlu, and J.P. Marangos, Rev. Mod. Phys. 77, 633 (2005).
Hau1999L. V. Hau, S. E. Harris, Z. Dutton, and C. H.
Behroozi, Nature (London) 397, 594 (1999).
Kash1999M. M. Kash, V. A. Sautenkov, A. S. Zibrov, L. Hollberg, G. R.
Welch, M. D. Lukin, Y. Rostovtsev, E. S. Fry, and M. O. Scully, Phys. Rev. Lett.
82, 5229 (1999).
Kocharovskaya2001O. Kocharovskaya, Y. Rostovtsev, and
M. O. Scully, Phys. Rev. Lett. 86, 628 (2001).
Lukin2000PRLM. Fleischhauer and M. D. Lukin, Phys. Rev.
Lett. 84, 5094 (2000).
Lukin2002PRAM. Fleischhauer and M. D. Lukin, Phys. Rev.
A 65, 022314 (2002).
LukinM. D. Lukin, Rev. Mod. Phys. 75, 457 (2003).
Harris1990S. E. Harris, J. E. Field, and A. Imamoglu, Phys.
Rev. Lett. 64, 1107 (1990).
Jain1996M. Jain, H. Xia, G. Y. Yin, A. J. Merriam, and
S. E. Harris, Phys. Rev. Lett. 77, 4326 (1996).
Rohlsberger2010R. Röhlsberger, K. Schlage,
B. Sahoo, S. Couet, and R. Rüffer, Science 328, 1248 (2010).
Anisimov2007P. Anisimov, Y. Rostovtsev, and O. Kocharovskaya,
Phys. Rev. B 76, 094422 (2007).
Tittonen1992I. Tittonen, M. Lippmaa, E. Ikonen,
J. Lindén, and T. Katila, Phys. Rev. Lett. 69, 2815 (1992).
Bates2005 Mark Bates, Timothy R. Blosser, and Xiaowei
Zhuang, Phys. Rev. Lett. 94, 108101 (2005).
Rohlsberger2012R. Röhlsberger, H.-C. Wille, K. Schlage,
and B. Sahoo, Nature (London) 482, 199
(2012).
Xu2013 D. Z. Xu, Yong Li, C. P. Sun, and Peng Zhang, Phys. Rev. A 88, 013823 (2013).
Makarov2015A. A. Makarov, Phys. Rev. A 92,
053840 (2015).
Dicke1954R. H. Dicke, Phys. Rev. 93,
99 (1954).
Lehmberg1970R. H. Lehmberg, Phys. Rev. A 2, 883
(1970); 2, 889 (1970).
Agarwal1974G. S. Agarwal, in Quantum Statistical
Theories of Spontaneous Emission and Their Relation to Other Approaches,
edited by G. Höhler, Springer Tracts in Modern Physics Vol. 70 (Springer,
Berlin, 1974).
Scully2009PRLM. O. Scully, Phys. Rev. Lett.
102,
143601 (2009).
Scully 07 LaserPhysicsM. Scully, Laser Phys.
17, 635 (2007) .
Dawei2010PRAD. W. Wang, Z. H. Li, H. Zheng, and S.-Y. Zhu,
Phys. Rev. A 81, 043819 (2010).
Petrosyan2002D. Petrosyan and G. Kurizki, Phys. Rev. Lett. 89, 207902 (2002).
Muthukrishnan2004A. Muthukrishnan, G. S. Agarwal, and M. O. Scully, Phys. Rev. Lett. 93, 093002 (2004).
Grangier1985 P. Grangier, A. Aspect, and J. Vigue, Phys. Rev. Lett. 54, 418 (1985).
DeVoe1996 R. G. DeVoe and R. G. Brewer, Phys. Rev. Lett. 76,
2049 (1996).
Hettich2002 C. Hettich, C. Schmitt, J. Zitzmann, S. Kühn, I.
Gerhardt, and V. Sandoghdar, Science 298, 385 (2002).
Gaetan2009 A. Gaetan, Y. Miroshnychenko, T. Wilk, A. Chotia, M. Viteau, D. Comparat, P. Pillet, A. Browaeys, and P. Grangier,
Nature Physics, 5 (2), 115 (2009).
McGuyer2015 B. H. McGuyer, M. McDonald, G. Z. Iwata, M. G. Tarallo, W. Skomorowski, R. Moszynski, and T. Zelevinsky, Nature Physics 11, 32 (2015).
SupplementalSee Supplemental Material at XXXX, which
includes Refs. XXXX.
Varada1992G. V. Varada and G. S. Agarwal, Phys. Rev. A 45,
6721 (1992).
Agarwal1984 G. S. Agarwal and N. Nayak, J. Opt. Soc. Am. B 1,
164 (1984).
Wang2015 D.-W. Wang, R.-B. Liu, S.-Y. Zhu, and M. O. Scully, Phys. Rev. Lett. 114, 043602 (2015).
Chang2006 J.-T. Chang, J. Evers, M. O. Scully, and M. S. Zubairy, Phys. Rev. A 73, 031803 (2006).
Mucke2010 Martin Mücke, Eden Figueroa, Joerg Bochmann, Carolin Hahn,
Karim Murr, Stephan Ritter, Celso J. Villas-Boas, and Gerhard Rempe, Nature
465, 755 (2010).
Trotzky2010 S. Trotzky, Y.-A. Chen, U. Schnorrberger, P.
Cheinet, and I. Bloch, Phys. Rev. Lett. 105, 265303 (2010).
Sipahigil2016 A. Sipahigil, R. E. Evans, D. D. Sukachev, M. J. Burek, J. Borregaard, M. K. Bhaskar, C. T. Nguyen, J. L. Pacheco, H. A. Atikian, C. Meuwly, R. M. Camacho, F. Jelezko, E. Bielejec, H. Park, M. Loncar, and M. D. Lukin, Science 354, 847 (2016).
Scully2006M. O. Scully, E. S. Fry, C. H. R. Ooi, and K.Wodkiewicz,
Phys. Rev. Lett. 96, 010501 (2006).
Evers2013K. P. Heeg and J. Evers, Phys. Rev. A 88,
043828 (2013).
Scully2015PRLM. O. Scully, Phys. Rev. Lett.
115, 243602 (2015).
Heeg2013K. P. Heeg, H.-C. Wille, K. Schlage, T.
Guryeva, D. Schumacher, I. Uschmann,
K. S. Schulze, B. Marx, T. Kämpfer, G. G. Paulus, R.
Röhlsberger, and J. Evers, Phys. Rev. Lett. 111, 073601 (2013).
FeNuclearJ. Hannon and G. Trammell, Hyperfine Interact.
123-124, 127 (1999).
Heeg2015K. P. Heeg and J. Evers, Phys. Rev. A 91,
063803 (2015).
|
http://arxiv.org/abs/1701.07975v1 | 20170127085224 | The calorimeter of the Mu2e experiment at Fermilab | [
"N. Atanov",
"V. Baranov",
"J. Budagov",
"F. Cervelli",
"F. Colao",
"M. Cordelli",
"G. Corradi",
"E. Dané",
"Yu. I. Davydov",
"S. Di Falco",
"E. Diociaiuti",
"S. Donati",
"R. Donghia",
"B. Echenard",
"K. Flood",
"S. Giovannella",
"V. Glagolev",
"F. Grancagnolo",
"F. Happacher",
"D. G. Hitlin",
"M. Martini",
"S. Miscetti",
"T. Miyashita",
"L. Morescalchi",
"P. Murat",
"G. Pezzullo",
"F. Porter",
"F. Raffaelli",
"T. Radicioni",
"M. Ricci",
"A. Saputi",
"I. Sarra",
"F. Spinella",
"G. Tassielli",
"V. Tereshchenko",
"Z. Usubov",
"R. Y. Zhu"
] | physics.ins-det | [
"physics.ins-det"
] |
§ INTRODUCTION
After the discovery of lepton flavor violation in neutrino oscillations, the search for Charged Lepton Flavor Violation (CLFV) is one of the most important activities in particle physics. In the Standard Model of particle interactions the occurrence of such a process is predicted to be extremely rare, far below the possible experimental reach. On the other hand, many extensions of the Standard Model predict CLFV rates that may be observed by the next generation of experiments <cit.>.
The Mu2e experiment <cit.> at Fermilab aims to observe the neutrinoless conversion of a muon into an electron in the field of an Aluminum atom. In this two-body process the energy of the emerging electron is fixed (104.967 MeV) and the possible sources of background can be very efficiently suppressed.
In 3 years of running, ∼10^20 protons will be delivered to Mu2e and ∼10^18 muons will be stopped in the Aluminum stopping target. This huge amount of data will allow for a factor 10^4 improvement on the sensitivity to the ratio between the rate of the neutrinoless muon conversions into electrons and the rate of ordinary muon capture in the Al nucleus:
R_μ e=Γ(μ^-+Al→ e^-+Al)/Γ(μ^-+Al→ν_μ + Mg)
Even in case that no signal is observed, Mu2e will achieve a remarkable result: a limit R_μ e<6× 10^-17 at 90% confidence level, that is 10^4 times better than the current limit set by the Sindrum II experiment <cit.>.
§ THE MU2E EXPERIMENT
The Mu2e experimental apparatus (figure <ref>) consists of 3 superconducting solenoids: the production solenoid, where an 8 GeV proton beam is sent against a tungsten target and the pions and kaons produced in the interactions are guided by a graded magnetic field towards the transport solenoid; the transport solenoid, with a characteristic 'S' shape, which transfers the negative particles with the desired momentum (∼50 MeV/c) to the detector solenoid and absorbs most of the antiprotons thanks to a thin window of low Z material that separates the two halves of the solenoid; the detector solenoid, where the Aluminum muon stopping target is located and a graded field directs the electrons coming from the muon conversion to the tracker and the calorimeter.
The 8 GeV proton beam has the pulsed structure shown in figure <ref>. Each bunch lasts ∼ 250 ns and contains ∼ 3× 10^7 protons. The bunch period of ∼ 1.7μ s facilitates exploitation of the time difference between the muonic Aluminum lifetime (τ = 864 ns) and the prompt backgrounds due to pion radiative decays, muon decays in flight and beam electrons, that are all concentrated within few tens of ns from the bunch arrival: a live search window delayed by 700 ns with respect to the bunch arrival suppresses these prompt backgrounds to a negligible level.
In order to achieve the necessary background suppression, it is important to have a fraction of protons out of bunch, or extinction factor, lower than 10^-10. The current simulations of the accelerator optics predict an extinction factor better than required. The extinction factor will be continuously monitored by a dedicated detector located downstream of the production target.
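A back-of-envelope check of this timing arithmetic (the end of the live window is an assumption; only numbers quoted above are used):

```python
import math

tau = 864.0               # muonic-Al lifetime, ns
t1, t2 = 700.0, 1695.0    # delayed live window inside the ~1.7 us period (assumed)

live_frac = math.exp(-t1 / tau) - math.exp(-t2 / tau)
print(live_frac)          # ~0.30 of the mu-Al decays fall inside the window
```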
The Mu2e Tracker consists of about 21000 low mass straw tubes oriented transverse to the solenoid axis and grouped into 18 measurement stations distributed over a distance of ∼3 m (figu-re <ref>.left). Each straw tube is
instrumented on both sides with TDCs to measure the particle crossing time and ADCs to measure the specific energy loss dE/dX, which can be used to separate electrons from highly ionizing particles.
The central hole of radius R∼380 mm precludes detection of charged particles with momentum lower than ∼ 50 MeV/c (figure <ref>.right).
The core of the momentum resolution for 105 MeV electrons is expected to be better than 180 keV/c, sufficient to suppress background electrons produced in the decays of muons captured by Aluminum nuclei.
The background due to cosmic muons (δ rays, muon decays or misidentified muons) is suppressed by a cosmic ray veto system covering the whole detector solenoid and half of the transport solenoid (figure <ref>.left). The detector consists of four layers of polystyrene scintillator counters interleaved with Aluminum absorbers (figure <ref>.right). Each scintillator is read out via two embedded wavelength shifting fibers by silicon photomultipliers (SiPMs) located at each end.
The veto is given by the coincidence of three out of four layers.
An overall veto efficiency of 99.99% is expected. This corresponds to ∼ 1 background event in 3 years of data taking. An additional rejection factor will be provided by particle identification obtained by combining tracker and calorimeter information[An irreducible background of ∼0.1 electrons induced by cosmic muons in 3 years will nonetheless survive particle identification <cit.>.].
§ THE MU2E ELECTROMAGNETIC CALORIMETER
The Mu2e electromagnetic calorimeter (ECAL) is needed to:
* identify the conversion electrons;
* provide, together with tracker, particle identification to suppress muons and pions mimicking the conversion electrons;
* provide a standalone trigger to measure tracker trigger and track
reconstruction efficiency;
* (optional) seed the tracker pattern recognition to reduce the number of possible hit combinations.
ECAL must operate in a harsh experimental environment:
* a magnetic field of 1 T;
* a vacuum of 10^-4 Torr;
* a maximum ionizing dose of 100 krad for the hottest region at lower radius and ∼ 15 krad for the region at higher radius (integrated in 3 years including a safety factor of 3);
* a maximum neutron fluence of 10^12 n/cm^2 (integrated in 3 years including a safety factor of 3);
* a high particle flux also in the live search window.
The solution adopted for the Mu2e calorimeter (figure <ref>) consists of two annular disks of
undoped CsI crystals placed at a relative distance of ∼ 70 cm, which is approximately half a pitch of the conversion electron helix in the magnetic field.
The disks have an inner radius of 37.4 cm and an outer radius of 66 cm. The design minimizes the number of low-energy particles that intersect the calorimeter while maintaining a high acceptance for the signal.
Each disk contains 674 undoped CsI crystals of 20×3.4×3.4 cm^3. This granularity has been optimized taking into account the light collection of the readout photosensors, particle pileup, and the time and energy resolution.
Each crystal is read out by two arrays of UV-extended silicon photomultiplier sensors (SiPM). The SiPM signals are amplified and shaped by the Front-End Electronics (FEE) located on their back. The voltage regulators and the digital electronics, used to digitize the signals, are located in crates arranged around the disks.
§.§ CsI crystals
The characteristics of the pure CsI crystals are reported in table <ref>. These crystals have been preferred to the other candidates because of their emission frequency, well matching the sensitivity of commercial photosensors, their good time and energy resolution and their reasonable cost.
Each crystal will be wrapped with 150 μ m of Tyvek 4173D.
Quality tests on a set of pure CsI crystals from SICCAS (China), Optomaterial (Italy) and ISMA (Ukraine) have been performed at Caltech and at the INFN Laboratori Nazionali di Frascati (LNF) <cit.>.
The results can be summarized as follows:
* a light yield of 100 p.e./MeV when measured with a 2” UV extended EMI PMT;
* an emission weighted longitudinal transmittance varying from 20% to 50% depending on the crystal surface quality;
* a light response uniformity corresponding to a variation of 0.6%/cm;
* a decay time τ∼ 30 ns with, in some cases, a small slow component with τ∼ 1.6μ s;
* a light output reduction lower than 40% after an irradiation with a total ionizing dose of 100 krad;
* a negligible light output reduction but a small worsening of longitudinal response uniformity after an irradiation with a total fluence of 9× 10^11 n/cm^2;
* a radiation induced readout noise in the Mu2e radiation environment equivalent to less than 600 keV.
§.§ Photosensors
Figure <ref> shows one of the two SiPM arrays used to read each crystal.
The array is formed by two series of 3 SiPMs. The two series are connected in parallel by the Front End electronics to provide a ×2 redundancy.
The series connection reduces the global capacitance, improving the signal decay time to less than 100 ns. It also minimizes the output current and the power consumption.
Each SiPM has an active surface of 6x6 mm^2 and is UV-extended with a photon detection efficiency (PDE) at the CsI emission peak (∼315 nm) of ∼ 30%.
Tests on single SiPM prototypes from different vendors (Hamamatsu, SENSL, Advansid) have been performed at LNF and INFN Pisa.
The gain is better than 10^6 at an operating voltage V_OP=V_BR+3V, where V_BR is the breakdown voltage of the SiPM. When coupled in air with the CsI crystal, the yield is ∼ 20 p.e./MeV.
The noise corresponds to an additional energy resolution of ∼ 100 keV.
A test of neutron irradiation with a fluence of 4× 10^11 neutrons/cm^2 1 MeV equivalent[Since SiPMs are partially shielded by the crystals, this corresponds to a safety factor of ∼2 for the SiPMs over the 3 years of Mu2e running.], with the SiPM temperature kept stable at 25^oC, produced a dark current increase from 60 μ A to 12 mA and a gain decrease of 50%.
A test with photon radiation corresponding to a total ionizing dose of 20 krad has produced negligible effects on gain and dark current.
In order to reduce the effects of radiation damage and to keep the power consumption at a reasonable level, the SiPM temperature will be kept stable at 0^oC.
The qualification tests of the SiPM array preproduction are in progress at Caltech, LNF and INFN Pisa and will evaluate:
* the I-V characteristics of the single SiPMs and of each series;
* the breakdown voltage V_BR and the operating voltage V_OP=V_BR+3V of the single SiPMs and of each series;
* the absolute gain and the PDE relative to a reference sensor at V_OP for the single SiPMs and for each series;
* the mean time to failure (MTTF) through an accelerated aging test at 55^oC;
* the radiation damage due to neutron, photons and heavy ions.
§.§ Read out electronics
In the front end electronics board, directly connected to the SiPM array,
the signals coming from the two series are summed in parallel and then shaped and amplified in order to obtain a signal similar to the one shown in figure <ref>.left. This shaping aims to reduce the pileup of energy deposits due to different particles and to optimize the resolution on the particle arrival time.
The shaped signals are sent to waveform digitizer boards where they are digitized at a sampling frequency of 200 MHz using a 12 bit ADC.
The most critical components of the waveform digitizer board are: the SM2150T-FC1152 Microsemi SmartFusion2 FPGA, the Texas Instruments ADS4229 ADC and the Linear Technology LTM8033 DC/DC converter.
The FPGA is already qualified by the vendor as SEL and SEU free and will be tested only together with the assembled board.
The DC/DC converter, tested in a 1 T magnetic field, still maintains an efficiency of ∼ 65%. Negligible effects on output voltage and efficiency have been observed after neutron and photon irradiation corresponding to 3 years of Mu2e running.
The ADCs have also been irradiated with neutrons and photons equivalent to 3 years of Mu2e running and have shown no bit flips or loss of data.
§.§ Energy and time calibration
The energy and time calibration of the Mu2e calorimeter could be performed using different calibration sources <cit.>.
A 6 MeV activated liquid source (Fluorinert) <cit.> can be circulated into pipes located in front of each disk to set the absolute energy scale.
A laser calibration system can be used to pulse each crystal to equalize the time offset and the energy response of each SiPM array channel.
Also minimum ionizing cosmic muons can be used to equalize the time offset and the energy response.
The energy-momentum matching for electrons produced by muons decaying in the orbit of the Al atom or by pion two body decays can be used to set the energy scale and to determine the time offset with respect to the tracker. These low momentum electrons mostly pass through the hole of the calorimeter disks but can be used in special calibration runs with reduced magnetic field.
§ BEAM TEST OF A SMALL MATRIX
A calorimeter prototype consisting of a 3×3 matrix of 3×3×20 cm^3 undoped CsI crystals wrapped in 150 μ m of Tyvek and read by one 12×12 mm^2 SPL TSV SiPM by Hamamatsu has been tested with an electron beam at the Beam Test Facility (BTF) in Frascati during April 2015.
The results obtained are coherent with the ones predicted by the simulation:
* a time resolution better than 150 ps for 100 MeV electrons;
* an energy resolution of ∼ 7% for 100 MeV electrons with a 50^o incidence angle[This is the most probable incidence angle for conversion electrons reaching the Mu2e calorimeter. The values range from 40^o to 60^o.], dominated by the energy leakage due to the few Molière radii of the prototype.
§ CALORIMETER PERFORMANCES PREDICTED BY SIMULATION
A detailed Monte Carlo simulation has been developed to optimize the calorimeter design.
The simulation corresponding to the final design predicts the following performances for 100 MeV electrons:
* a time resolution of ∼110 ps;
* an energy resolution of ∼4%;
* a position resolution of 1.6 cm in both the transverse coordinates.
Combining the calorimeter and tracker time and energy/momentum information it is possible to distinguish between 100 MeV electrons and muons with the same momentum: an electron efficiency of 94% and a corresponding muon rejection factor of 200 have been obtained.
The calorimeter information can be used to seed the track reconstruction improving the reconstruction efficiency and contributing to remove the background due to tracks with poorly reconstructed momentum.
A standalone software trigger based on the calorimeter information only is able to achieve an efficiency of 60% on conversion electrons while suppressing the background trigger rate by a factor 400.
A combined software trigger using both tracker and calorimeter information is able to achieve an efficiency of 95% on conversion electrons with a background rejection factor of 200.
§ CONCLUSIONS AND OUTLOOK
The Mu2e calorimeter is a key component of the Mu2e experiment.
The calorimeter design is now mature:
quality tests have shown that the chosen components are able to operate in the Mu2e harsh environment.
Monte Carlo simulation, supported by test beam results, shows that the
current design meets the requirements on muon identification, seeding of track reconstruction and trigger selection.
A beam test of a larger scale prototype with 50 pure CsI crystals read by 100 SiPM arrays will be performed in the next months.
This work was supported by the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 690835.
99
degouvea
A. de Gouvea, N. Saoulidou, Fermilab's intensity frontier, Ann.Rev.Nucl.Part.Sci. 60 (2010) 513-538.
TDR
Mu2e Collaboration (L. Bartoszek et al.), Mu2e Technical Design Report, arXiv:1501.05241.
sindrum2
SINDRUM II Collaboration (W.H. Bertl et al.), A Search for muon to electron conversion in muonic gold, Eur.Phys.J. C47 (2006) 337-346.
lnfcsi
M. Angelucci et al., Longitudinal uniformity, time performances and irradiation test of pure CsI crystals, Nucl.Instrum.Meth. A824 (2016) 678-680.
nimecal
N. Atanov et al., Design and status of the Mu2e electromagnetic calorimeter, Nucl.Instrum.Meth. A824 (2016) 695-698.
babar
B.Aubert et al., The BABAR detector, Nucl.Instrum.Meth. A479
(2002) 1.
|
http://arxiv.org/abs/1701.08196v2 | 20170127212606 | A priori error estimates of Adams-Bashforth discontinuous Galerkin methods for scalar nonlinear conservation laws | [
"Charles Puelz",
"Beatrice Riviere"
] | math.NA | [
"math.NA"
] |
A priori error estimates of Adams–Bashforth
discontinuous Galerkin methods for scalar nonlinear conservation laws
Charles Puelz and Béatrice Rivière
December 30, 2023
====================================================================================================================
In this paper we show theoretical convergence of a second–order Adams–Bashforth discontinuous Galerkin method for approximating smooth solutions to scalar nonlinear conservation laws with E-fluxes. A priori error estimates are also derived for a first–order forward Euler discontinuous Galerkin method. Rates are optimal in time and suboptimal in space; they are valid under a CFL condition.
§ INTRODUCTION
We consider approximating smooth solutions to the following nonlinear partial differential equation posed with initial conditions:
∂ u/∂ t + ∂/∂ xf(u) = s(u), in ℝ× (0,T],
u = u_0, in ℝ×{0},
where u:ℝ× [0,T] →ℝ and f,s:ℝ→ℝ. The function s is assumed to be Lipschitz. As typical for the numerical analysis of such problems <cit.>, we do not consider boundary conditions, and instead assume the solution has compact support in some interval [0,L].
The focus of this work is the analysis of the second order Adams–Bashforth method in time combined with the discontinuous Galerkin method in space. The main motivation for studying this discretization is its popularity in the hemodynamic modeling community for approximating a nonlinear hyperbolic system describing blood flow in an elastic vessel <cit.>. For a selection of work simulating this model with a discontinuous Galerkin spatial discretization coupled to the second order Adams–Bashforth scheme, see <cit.>. To the best of our knowledge, there is little analysis for this fully discrete scheme. The results presented in this paper for scalar hyperbolic equations provide a first step towards theoretically understanding the numerical approximation of the hyperbolic system modeling blood flow.
In addition, we provide an error analysis for the first order forward Euler in time combined with discontinuous Galerkin in space.
Discontinuous Galerkin schemes for hyperbolic conservations laws have been extensively studied, especially when coupled with Runge–Kutta methods for the time discretization. This class of schemes was introduced in the series of papers by Cockburn, Shu, and co-authors <cit.>. We recall the work from Zhang, Shu, and others analyzing Runge–Kutta discontinuous Galerkin methods applied to scalar conservation laws and symmetrizable systems <cit.>. These papers establish error estimates for smooth solutions for both second and third order Runge–Kutta schemes. Their analysis requires the CFL condition Δ t = O(h^4/3) for the second order Runge–Kutta scheme and piecewise polynomials of degree two and higher. The CFL condition Δ t = O(h) may be used for the third order Runge-Kutta scheme for piecewise polynomials of degree one and higher and for the second order Runge–Kutta scheme with piecewise linear polynomials.
Recent stability and convergence results have been obtained for IMEX (implicit–explicit) multistep schemes applied to a nonlinear convection diffusion equation, i.e. (<ref>)–(<ref>) augmented with a nonzero diffusion term <cit.>. These schemes implicitly discretize the diffusion term and explicitly discretize the hyperbolic term. It is not immediately clear how to adapt the analysis to the case of zero diffusion since the estimates depend on the reciprocal of the diffusion parameter.
A summary of the paper is as follows. In Section <ref>, we introduce the numerical schemes, properties of the numerical flux, and inequalities related to projections. The main results are also stated. Section <ref> and <ref> contain the proofs of the convergence results. In Section <ref> we provide some numerical results for inviscid Burger's equation and a nonlinear hyperbolic system modeling blood flow in an elastic vessel. Conclusions follow.
§ SCHEME AND MAIN RESULTS
We define notation relevant for the spatial discretization of (<ref>)–(<ref>) by the discontinuous Galerkin method. To do this, we make a similar technical modification to the flux function as in <cit.>. If the initial condition u_0 takes values within some open set Ω, then locally in time the solution to (<ref>)–(<ref>) also takes values in Ω <cit.>. We assume the flux function f ∈ C^3(ℝ) vanishes outside of Ω so derivatives up to third order are uniformly bounded, i.e. there exists some constant C depending only on f and its derivatives satisfying:
|f^(γ)(v)| ≤ C, ∀ v ∈ℝ, γ = 1,2,3.
Let the collection of intervals ( I_j )_j=0^N be a uniform partition of the interval [0,L], with I_j = [x_j,x_j+1] of size h. Let ℙ^k(I_j) denote the space of polynomials of degree k on the interval I_j. The approximation space is
𝕍_h={ϕ_h:[0,L] →ℝ s.t. ϕ_h|_I_j∈ℙ^k(I_j), ∀ j = 0, …, N }.
The space L^2(0,L) is the standard L^2 space; let (·,·) denote the L^2 inner-product over Ω, with associated norm ‖·‖.
Let Π_h be the L^2 projection into 𝕍_h:
(Π_h v, ϕ_h) = (v,ϕ_h), ∀ϕ_h∈𝕍_h, ∀ v∈ L^2(0,L).
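As an illustration of how Π_h can be computed in practice, the sketch below (ours, not taken from the paper) assembles the elementwise projection in a Legendre modal basis; the k+1-point Gauss quadrature is exact only when u_0 is a polynomial of degree ≤ k+1, so for general data it yields an approximation of the projection.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def l2_project(u0, x_left, h, k):
    """Modal Legendre coefficients of Pi_h u0 on [x_left, x_left + h]."""
    nodes, weights = leggauss(k + 1)
    x = x_left + 0.5 * h * (nodes + 1.0)   # map [-1, 1] to the element
    coeffs = np.empty(k + 1)
    for m in range(k + 1):
        Pm = Legendre.basis(m)(nodes)
        # (u0, P_m) / (P_m, P_m): the element Jacobian h/2 cancels,
        # and (P_m, P_m) = 2/(2m+1) on the reference interval
        coeffs[m] = np.dot(weights, u0(x) * Pm) / (2.0 / (2 * m + 1))
    return coeffs
```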
Define the notation for traces of a function ϕ:[0,L]→ℝ to the boundaries of the intervals:
ϕ^±|_x_j = lim_ε→ 0, ε > 0ϕ(x_j ±ε), 1≤ j≤ N,
ϕ^+|_x_0 = lim_ε→ 0, ε > 0ϕ(x_0 + ε),
ϕ^-|_x_N+1 = lim_ε→ 0, ε > 0ϕ(x_N+1 -ε).
The standard notation for jumps and averages at the interior nodes is given as follows:
[ϕ]|_x_j = ϕ^-|_x_j - ϕ^+|_x_j, 1≤ j≤ N,
{ϕ}|_x_j = 1/2(ϕ^-|_x_j + ϕ^+|_x_j), 1≤ j≤ N.
Let f̂ denote the numerical flux, that is assumed to be Lipschitz continuous and consistent.
There is a constant C_L > 0 such that for any p, q, u, v ∈ℝ:
|f̂(p, q) - f̂(u, v)| ≤ C_L ( |p - u| + |q - v| ),
and
f̂(v,v) = f(v), ∀ v ∈ℝ.
We also assume that f̂ belongs to the class of E–fluxes <cit.>.
The numerical flux f̂ is an E–flux, which means it satisfies, for all w between v^- and v^+,
( f̂(v^-,v^+) - f(w) ) [v]|_x_j≥ 0, 1 ≤ j ≤ N.
An example of a numerical flux that satisfies Assumption <ref> and Assumption <ref> is the local Lax-Friedrichs flux, f̂_LF, defined by:
f̂_LF(v^-, v^+)|_x_j = { f(v) }|_x_j + 1/2J(v^-, v^+) [v]|_x_j, ∀ 1≤ j ≤ N,
with
J(v^-, v^+)|_x_j = max_min(v^-|_x_j, v^+|_x_j) ≤ w ≤max(v^-|_x_j, v^+|_x_j) |f'(w)| , ∀ j = 1, …, N.
Finally, we define a discrete function α at each interior node. The fact that α is nonnegative and
uniformly bounded is a key ingredient in the error analysis.
α(v)|_x_j =
{[ [v]^-1( f̂(v^-, v^+) - f({v}) )|_x_j, if [v]|_x_j≠ 0,; 1/2|f'({v}|_x_j)|, if [v]|_x_j = 0. ].
There exist constants C_α, C_0 and C_1 such that
0 ≤α(v)|_x_j ≤ C_α, ∀ (v^-, v^+) ∈ℝ^2, ∀ 1≤ j≤ N,
1/2|f'({v}|_x_j)| ≤α(v)|_x_j + C_0 |[v]|_x_j|, ∀ (v^-, v^+) ∈ℝ^2, ∀ 1≤ j≤ N,
1/8 f”({v}|_x_j)[v]|_x_j ≤α(v)|_x_j + C_1 [v]^2|_x_j, ∀ (v^-, v^+) ∈ℝ^2, ∀ 1≤ j≤ N.
The constants C_0 and C_1 depend on the derivatives of f.
The proof of Lemma <ref> follows the one in <cit.>; the definition for α slightly differs
from the one given in <cit.> so that it is suitable for the error analysis of the
Adams–Bashforth scheme.
An additional assumption is made for the numerical flux.
There is a constant C > 0 such that for any v_h∈𝕍_h and v∈𝒞(0,L):
|α(v_h)|_x_j -α(v)|_x_j|≤ C ‖ v_h-v‖_∞, ∀ 1≤ j≤ N.
Assumption <ref> is used in the error analysis for the Adams–Bashforth scheme. It is easy to check
that the local Lax-Friedrichs flux defined by (<ref>) satisfies (<ref>).
We now introduce the discontinuous Galerkin discretization on each interval.
ℋ_j(v, ϕ_h) = ∫_I_jf(v) d ϕ_h/dx
+ ∫_I_js(v) ϕ_h -f̂(v^-, v^+)|_x_j+1ϕ_h^-|_x_j+1
+ f̂(v^-, v^+)|_x_jϕ_h^+|_x_j ∀ 1≤ j≤ N-1,
ℋ_0(v, ϕ_h) = ∫_I_0f(v) d ϕ_h/dx
+ ∫_I_0s(v) ϕ_h -f̂(v^-, v^+)|_x_1ϕ_h^-|_x_1,
ℋ_N(v, ϕ_h) =∫_I_Nf(v) d ϕ_h/dx
+ ∫_I_Ns(v) ϕ_h + f̂(v^-, v^+)|_x_Nϕ_h^+|_x_N.
For some integer M>0, define Δ t = T / M.
The second order in time Adams–Bashforth scheme is:
given u_h^0 ∈𝕍_h and u_h^1 ∈𝕍_h, for n = 1, …, M-1, seek u_h^n+1∈𝕍_h satisfying
∫_I_ju_h^n+1ϕ_h = ∫_I_j u_h^nϕ_h + Δ t 3/2ℋ_j(u_h^n, ϕ_h) - Δ t 1/2ℋ_j(u_h^n-1, ϕ_h),
∀ϕ_h ∈𝕍_h, ∀ 0≤ j≤ N.
Since (<ref>) is a multi-step method, two starting values are needed. We choose
u_h^0 = Π_h u_0 for the initial value, and we choose u_h^1 = ũ_h^1 where
ũ_h^1 satisfies the first-order in time
forward Euler scheme defined below.
With the choice ũ_h^0 = Π_h u_0, for n = 0, …, M-1, seek ũ_h^n+1∈𝕍_h satisfying
∫_I_jũ_h^n+1ϕ_h = ∫_I_jũ_h^nϕ_h + Δ t ℋ_j(ũ_h^n, ϕ_h), ∀ϕ_h ∈𝕍_h, ∀ 0≤ j≤ N.
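A sketch (ours) of the resulting time marching, assuming a Legendre modal basis so that the element mass matrix is diagonal, M_mm = (h/2)·2/(2m+1) = h/(2m+1); `rhs(C)` is assumed to assemble the residuals ℋ_j over all elements, e.g. with the routine sketched above together with the boundary variants ℋ_0 and ℋ_N.

```python
import numpy as np

def mass_inv(h, k):
    # inverse of the diagonal element mass matrix, M_mm = h/(2m+1)
    return (2.0 * np.arange(k + 1) + 1.0) / h

def evolve_ab2(C0, dt, n_steps, h, k, rhs):
    """C0: (n_elements, k+1) modal coefficients of Pi_h u_0; returns
    the coefficients after n_steps of the Adams-Bashforth scheme,
    with u_h^1 bootstrapped by one forward Euler step."""
    Minv = mass_inv(h, k)
    R_old = rhs(C0)                       # H(u_h^0)
    C = C0 + dt * Minv * R_old            # forward Euler for u_h^1
    for n in range(1, n_steps):
        R = rhs(C)                        # H(u_h^n)
        C = C + dt * Minv * (1.5 * R - 0.5 * R_old)
        R_old = R
    return C
```

In practice one may take smaller substeps in the bootstrap so that the starting-value assumption (<ref>) on u_h^1 holds.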
The initial value u_h^1 is computed using (<ref>) with a time step that is small enough so that the following
assumption holds:
‖ u_h^1 - Π_h u^1‖≤ h^k+1/2.
Theorem <ref> below shows that (<ref>) is a reasonable assumption if the time step used for
the forward Euler method is small enough.
The main result of this paper is the convergence result for the Adams-Bashforth scheme (<ref>).
Assume the exact solution u belongs to 𝒞^2([0,T];H^k+1(Ω)).
Let u_h^1 satisfy (<ref>).
Under Assumptions <ref>, <ref>, <ref> and the CFL condition
Δ t = O(h^2), there is a constant C independent of h and Δ t such that,
for h sufficiently small, and for k ≥ 2:
max_n = 0, …, M‖ u^n - u_h^n‖≤ C (Δ t^2 + h^k+1/2).
The proof of Theorem <ref> is given in Section <ref>. An easy modification of the proof yields the
following convergence result for the forward Euler scheme (<ref>). Its proof is outlined in Section <ref>.
Assume the exact solution u belongs to 𝒞^2([0,T];H^k+1(Ω)).
Let (ũ_h^n)_n satisfy (<ref>).
Under Assumptions <ref>, <ref> and the CFL condition
Δ t = O(h^2), for h sufficiently small, and
for k ≥ 1, there is a constant C independent of h and Δ t such that:
max_n = 0, …, M‖ u^n - ũ_h^n‖≤ C (Δ t + h^k+1/2).
We remark that von Neumann stability analysis conducted in <cit.> suggests a less restrictive CFL condition Δ t = O(h^4/3) for the second order Adams–Bashforth scheme. Our theoretical estimates require Δ t = O(h^2); at the moment we are unable to relax this condition.
We finish this section by recalling inverse inequalities, trace inequalities and approximations results.
Let ‖ v‖_∞ = max_x ∈ [0,L]|v(x)| denote the sup-norm. There exists a constant C independent of h such that
‖ϕ_h‖_∞ ≤ C h^-1/2‖ϕ_h‖, ∀ϕ_h∈𝕍_h,
|ϕ_h^±|_x_j| ≤ C h^-1/2‖ϕ_h‖_L^2(I_j), ∀ 1≤ j≤ N, ∀ϕ_h∈𝕍_h,
( ∑_j = 0^N‖ dϕ_h/dx‖_L^2(I_j)^2 )^1/2 ≤ C h^-1‖ϕ_h‖, ∀ϕ_h∈𝕍_h.
For simplicity we denote by u^n the function u evaluated at the time t^n = n Δ t. The approximation error is denoted
η^n = u^n - Π_h u^n,
and it satisfies the optimal a priori bounds
‖η^n‖ ≤ C h^k+1,
|η^n,±|_x_j| ≤ C h^k+1/2, ∀ 1≤ j≤ N,
‖η^n‖_∞ ≤ C h^k+1/2,
‖η^n+1 - η^n‖ ≤ C Δ t h^k+1.
The constant C is independent of h, Δ t but depends on the exact solution u and its derivatives.
§ PROOF OF THEOREM <REF>
For the error analysis, we denote
χ^n = u_h^n-Π_h u^n.
The proof of Theorem <ref> is based on an induction hypothesis:
‖χ^ℓ‖≤ h^3/2, ∀ 0≤ℓ≤ M.
Since χ^0=0, the hypothesis (<ref>) is trivially satisfied for ℓ=0.
With the assumption (<ref>), it is also true for ℓ=1.
Fix ℓ∈{2,…,M} and assume that
‖χ^n‖≤ h^3/2, ∀ 0≤ n ≤ℓ-1.
We will show that (<ref>) is valid for n = ℓ. We begin by deriving an error inequality.
We fix an interval I_j for 0≤ j≤ N.
It is easy to see that the scheme is consistent in space and the exact solution satisfies
3/2∫_I_j u_t^n ϕ_h - 1/2∫_I_ju_t^n-1ϕ_h = 3/2ℋ_j(u^n, ϕ_h) - 1/2ℋ_j(u^n-1, ϕ_h), ∀ 1≤ n ≤ M-1.
In the above, the notation u_t^n is used for the time derivative of u evaluated at t^n.
Subtracting (<ref>) from (<ref>) and rearranging terms, one obtains:
∫_I_j (u_h^n+1 -u_h^n - Δ t3/2 u_t^n + Δ t1/2 u_t^n-1)ϕ_h
= Δ t3/2(ℋ_j(u_h^n, ϕ_h) - ℋ_j(u^n, ϕ_h) ) - Δ t1/2 (ℋ_j(u_h^n-1, ϕ_h) - ℋ_j(u^n-1, ϕ_h) ), ∀ 1≤ n ≤ M-1.
Summing over the elements j = 0, …, N and adding and subtracting the L^2 projection
of u at t^n and t^n+1 yields the equality:
∫_0^L (χ^n+1 - χ^n) ϕ_h = ∫_0^L (u^n -u^n+1 + Δ t 3/2 u_t^n - Δ t1/2 u_t^n-1)ϕ_h + ∫_0^L (η^n+1 - η^n) ϕ_h + b^n(ϕ_h),
with the following definition for n ≥ 1
b^n(ϕ_h) = Δ t 3/2∑_j = 0^N ( ℋ_j(u_h^n, ϕ_h) - ℋ_j(u^n, ϕ_h)) - Δ t 1/2∑_j = 0^N ( ℋ_j(u_h^n-1, ϕ_h) - ℋ_j(u^n-1, ϕ_h) ).
The second term on the right hand side of (<ref>) vanishes due to the property (<ref>) of the local L^2 projection. To handle the first term, we use the following Taylor expansions, valid for some ζ̃∈ [t^n-1,t^n] and some ζ∈ [t^n, t^n+1]:
u^n+1 - u^n = Δ t u_t^n + 1/2Δ t^2 u_tt^n + 1/6Δ t^3 u_ttt|_ζ,
u_t^n-1 - u_t^n = -Δ t u_tt^n + 1/2Δ t^2 u_ttt|_ζ̃.
Thus we have
u^n -u^n+1 + Δ t 3/2 u_t^n - Δ t1/2 u_t^n-1 = - Δ t^3 (1/6u_ttt|_ζ +1/4u_ttt|_ζ̃).
Hence (<ref>) becomes:
∫_0^L (χ^n+1 - χ^n ) ϕ_h ≤ C Δ t^3 ∫_0^L |ϕ_h| + b^n(ϕ_h).
The Cauchy–Schwarz and Young inequalities imply:
∫_0^L (χ^n+1 - χ^n ) ϕ_h ≤ C Δ t^5 + Δ t ‖ϕ_h‖^2 + b^n(ϕ_h).
We choose ϕ_h = χ^n in inequality (<ref>) to obtain:
∫_0^L (χ^n+1 - χ^n ) χ^n ≤ C Δ t^5 + Δ t ‖χ^n‖^2 + b^n(χ^n).
So, the following error inequality holds for n ≥ 1:
1/2‖χ^n+1‖^2 - 1/2‖χ^n‖^2 ≤
C Δ t^5 + Δ t ‖χ^n‖^2
+1/2‖χ^n+1 - χ^n‖^2
+ b^n(χ^n).
It remains to handle the last two terms in (<ref>). The proofs of the following two lemma are given in the next section.
Assume that Δ t = O(h^2). The following holds for n ≥ 1:
‖χ^n+1 - χ^n‖^2 ≤ C Δ t^6 + C Δ t ( ‖χ^n‖^2 + ‖χ^n-1‖^2) + C Δ t h^2k+2.
Let n≥ 2 and assume ‖χ^n‖≤ h^3/2, ‖χ^n-1‖≤ h^3/2, and Δ t = O(h^2). The following holds:
b^n(χ^n) ≤ C Δ t (‖χ^n‖^2+‖χ^n-1‖^2)
+ C Δ t^6
+ C Δ t (1+ 2 ε^-1) h^2k+1
-(1/2-2 ε) Δ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]^2|_x_j
-(1/2-2 ε) Δ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j, ∀ε >0.
For n = 1 one has the following:
b^1(χ^1) ≤ C Δ t ‖χ^1‖^2
+ C Δ t (1+ 2 ε^-1) h^2k+1
+3‖χ^1‖^2-(1/2-2 ε) Δ t ∑_j=1^N α(u_h^1)|_x_j [χ^1]^2|_x_j, ∀ε >0.
Substituting the bounds from (<ref>), (<ref>), (<ref>) (with ε = 1/4), and using the fact that
α(u_h^n) and α(u_h^n-1) are nonnegative, the error inequality (<ref>) simplifies to:
‖χ^n+1‖^2 - ‖χ^n‖^2 ≤ C Δ t^5 + C Δ t ( ‖χ^n‖^2 + ‖χ^n-1‖^2 + ‖χ^n-2‖^2) + C Δ t h^2k+1,
n≥ 2,
and
‖χ^n+1‖^2 - ‖χ^n‖^2 ≤ C Δ t^5 + C Δ t ‖χ^n‖^2 + C Δ t h^2k+1
+ C ‖χ^n‖^2,
n = 1.
Summing (<ref>) from n = 2, …, ℓ-1 and adding to (<ref>) one obtains:
‖χ^ℓ‖^2 ≤ C Δ t^4 + Ch^2k+1 + C‖χ^1‖^2 + C Δ t ∑_n = 0^ℓ-1‖χ^n‖^2.
Gronwall's inequality and assumption (<ref>) immediately gives
‖χ^ℓ‖^2 ≤ C_2 T e^T ( Δ t^4 + h^2k+1),
where C_2 is independent of ℓ, h and Δ t. Employing the CFL condition Δ t = O(h^2), one has:
‖χ^ℓ‖≤(C_2 T e^T )^1/2( h^4 + h^k+1/2).
The induction proof is complete if h is small enough so that
C_2 T e^T h < 1,
implying that for k ≥ 2:
‖χ^ℓ‖≤(C_2 T e^T )^1/2 h ( h^3 + h^k-1/2) ≤ h^3/2.
Since ‖η^n‖≤ C h^k+1 and ‖ u^n - u_h^n‖≤‖η^n‖ + ‖χ^n‖ we can conclude:
‖ u^n - u_h^n‖≤ C(Δ t^2 + h^k+1/2).
§.§ Proof of Lemma <ref>
Choose ϕ_h = χ^n+1 - χ^n in (<ref>) and use Cauchy-Schwarz's and Young's inequalities to obtain:
‖χ^n+1 - χ^n‖^2 ≤ CΔ t^6 + 2 b^n(χ^n+1 - χ^n).
We will now obtain a bound for b^n(ϕ_h) for any ϕ_h ∈𝕍_h.
By definition, we write
b^n(ϕ_h) = θ_1 + θ_2 + θ_3,
where
θ_1 = 3/2Δ t ∑_j=0^N ∫_I_j (f(u_h^n)-f(u^n))dϕ_h/dx
-1/2Δ t ∑_j=0^N ∫_I_j (f(u_h^n-1)-f(u^n-1))dϕ_h/dx
-3/2Δ t ∑_j=1^N (f({u_h^n})-f(u^n))|_x_j [ϕ_h]|_x_j
+1/2Δ t ∑_j=1^N (f({u_h^n-1})-f(u^n-1))|_x_j [ϕ_h]|_x_j,
θ_2 = Δ t ∑_j=0^N ∫_I_j(3/2 (s(u_h^n)-s(u^n))-1/2 (s(u_h^n-1)-s(u^n-1)))ϕ_h,
θ_3 = -3/2Δ t ∑_j=1^N (f̂(u_h^n,-,u_h^n,+)-f({u_h^n}))|_x_j [ϕ_h]|_x_j
+1/2Δ t ∑_j=1^N (f̂(u_h^n-1,-,u_h^n-1,+)-f({u_h^n-1}))|_x_j [ϕ_h]|_x_j.
Using Taylor expansions, we write for some ζ_1^n, ζ_2^n, ζ_1^n-1 and ζ_2^n-1:
f(u_h^n) - f(u^n) = f'(ζ_1^n)(u_h^n - u^n) = f'(ζ_1^n)(χ^n-η^n),
f({u_h^n}) - f(u^n) = f'(ζ_2^n)({u_h^n} - {u^n}) = f'(ζ_2^n)({χ^n} - {η^n}),
f(u_h^n-1) - f(u^n-1) = f'(ζ_1^n-1)(u_h^n-1 - u^n-1) = f'(ζ_1^n-1)(χ^n-1-η^n-1),
f({u_h^n-1}) - f(u^n-1) = f'(ζ_2^n)({u_h^n-1} - {u^n-1}) = f'(ζ_2^n-1)({χ^n-1} - {η^n-1}).
Using the above expansions in the definition of θ_1, trace inequalities and the CFL condition Δ t =𝒪(h^2), we
can obtain for any ε > 0
|θ_1| ≤ε‖ϕ_h‖^2 + C ε^-1Δ t (‖χ^n‖^2 +‖χ^n-1‖^2)
+C ε^-1 Δ t h^2k+2.
The term θ_2 is bounded using Lipschitz continuity of s, approximation results, Cauchy-Schwarz's and Young's inequalities.
For any ε>0, we have
θ_2
≤ C ε^-1Δ t^2 h^2k+2 + Cε^-1Δ t^2 (χ^n^2+‖χ^n-1‖^2)
+ εϕ_h^2.
Lastly, the term θ_3 can be rewritten using the definition (<ref>).
θ_3 = -3/2Δ t ∑_j = 1^N α(u_h^n)|_x_j [u_h^n]|_x_j [ϕ_h]|_x_j
+1/2Δ t ∑_j = 1^N α(u_h^n-1)|_x_j [u_h^n-1]|_x_j [ϕ_h]|_x_j
= -3/2Δ t ∑_j = 1^N α(u_h^n)|_x_j [χ^n-η^n]|_x_j [ϕ_h]|_x_j
+1/2Δ t ∑_j = 1^N α(u_h^n-1)|_x_j [χ^n-1-η^n-1]|_x_j [ϕ_h]|_x_j.
Using Young's and Cauchy-Schwarz's inequalities, approximation results, trace inequalities, boundedness of α and the CFL condition, we have
|θ_3 | ≤ε‖ϕ_h‖^2
+ C ε^-1Δ t (‖χ^n‖^2+‖χ^n-1‖^2)
+ C ε^-1Δ t h^2k+2.
Combining the bounds above yields
b^n(ϕ_h) ≤ε‖ϕ_h‖^2 +
C ε^-1Δ t (‖χ^n‖^2+‖χ^n-1‖^2)
+ C ε^-1Δ t h^2k+2, ∀ε >0, ∀ϕ_h∈𝕍_h.
We choose ε = 1/4 and ϕ_h = χ^n+1-χ^n in (<ref>) and substitute the bound
in (<ref>) to obtain (<ref>).
‖χ^n+1 - χ^n‖^2 ≤ CΔ t^6
+ C Δ t (‖χ^n‖^2+‖χ^n-1‖^2)
+ C Δ t h^2k+2.
§.§ Proof of Lemma <ref>
As in the proof of Lemma <ref>, we write
b^n(χ^n) = θ_1 + θ_2 + θ_3,
where the definitions of θ_1, θ_2, θ_3 are given in (<ref>), (<ref>) and (<ref>) respectively
for the particular choice ϕ_h = χ^n. Unfortunately we cannot make use of the bound (<ref>) since the factor
Δ t is missing in front of ε‖ϕ_h ‖^2. A more careful analysis is needed, and we will take
advantage of the CFL condition.
Define
ℱ(n,ϕ_h) =
Δ t ∑_j=0^N ∫_I_j (f(u_h^n)-f(u^n))dϕ_h/dx
- Δ t ∑_j=1^N (f({u_h^n})-f(u^n))|_x_j [ϕ_h]|_x_j.
Using the function ℱ which is linear in its second argument, we rewrite the term θ_1 as
θ_1 = 3/2ℱ(n,χ^n) -1/2ℱ(n-1,χ^n-1) + 1/2ℱ(n-1,χ^n-1-χ^n).
We now state a bound for the term ℱ(n,χ^n).
ℱ(n,χ^n)≤ C Δ t ‖χ^n‖^2
+ C(1+ε^-1) Δ t h^2k+1
+εΔ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]^2|_x_j, ∀ε > 0.
The proof of (<ref>) is technical and can be found in Appendix <ref>.
The bound for ℱ(n-1,χ^n-1) is identical.
ℱ(n-1,χ^n-1)≤ C Δ t ‖χ^n-1‖^2
+ C(1+ε^-1) Δ t h^2k+1
+εΔ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j, ∀ε > 0.
We are left with bounding ℱ(n-1,χ^n-1-χ^n). Following the technique used for bound (<ref>), we can obtain
ℱ(n-1,χ^n-1-χ^n) ≤‖χ^n-1-χ^n‖^2
+ C Δ t ‖χ^n-1‖^2 +C Δ t h^2k+2.
Combining the above with (<ref>), we have for n≥ 2
θ_1 ≤ C Δ t (‖χ^n‖^2+ ‖χ^n-1‖^2 + ‖χ^n-2‖^2)
+ C (1+2 ε^-1) Δ t h^2k+1
+εΔ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]^2|_x_j
+εΔ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j
+ C Δ t^6, ∀ε>0.
For n=1, since χ^0 = 0, inequalities (<ref>) and (<ref>) imply
θ_1 ≤ C Δ t ‖χ^1‖^2
+ C (1+ε^-1) Δ t h^2k+1
+εΔ t ∑_j=1^N α(u_h^1)|_x_j [χ^1]^2|_x_j
+ ‖χ^1‖^2,
∀ε>0.
The term θ_2 is bounded using Lipschitz continuity of s, approximation results, Cauchy-Schwarz's
inequality:
θ_2 ≤ C Δ t (‖χ^n‖+‖η^n‖+‖χ^n-1‖+‖η^n-1‖) ‖χ^n‖≤ C Δ t (‖χ^n‖^2 +‖χ^n-1‖^2) + C Δ t h^2k+2.
For the term θ_3, we use the definition (<ref>) and write
θ_3 = -3/2Δ t ∑_j = 1^N α(u_h^n)|_x_j [χ^n-η^n]|_x_j [χ^n]|_x_j
+1/2Δ t ∑_j = 1^N α(u_h^n-1)|_x_j [χ^n-1-η^n-1]|_x_j [χ^n]|_x_j.
After some manipulation we rewrite θ_3 as:
θ_3 = -1/2Δ t ∑_j = 1^N α(u_h^n)|_x_j [χ^n]^2|_x_j
-1/2Δ t ∑_j = 1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j
+Δ t ∑_j=1^N (α(u_h^n-1)-α(u_h^n))|_x_j [χ^n-1-η^n-1]|_x_j [χ^n-1]|_x_j
-1/2Δ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1-η^n-1]|_x_j [χ^n-1-χ^n]|_x_j
+Δ t ∑_j=1^N α(u_h^n)|_x_j [χ^n-1-η^n-1]|_x_j [χ^n-1-χ^n]|_x_j
+Δ t ∑_j=1^N α(u_h^n)|_x_j [(χ^n-1-χ^n)-(η^n-1-η^n)]|_x_j [χ^n]|_x_j
+1/2Δ t ∑_j=1^N α(u_h^n)|_x_j [η^n]|_x_j [χ^n]|_x_j
+1/2Δ t ∑_j=1^N α(u_h^n-1)|_x_j [η^n-1]|_x_j [χ^n-1]|_x_j.
We now bound the terms in the right-hand side of (<ref>) except for the first two terms.
We write
α(u_h^n-1)|_x_j - α(u_h^n)|_x_j
=(α(u_h^n-1)|_x_j - α(u^n-1)|_x_j)
+(α(u^n-1)|_x_j - α(u^n)|_x_j)
-(α(u_h^n)|_x_j-α(u^n)|_x_j).
From (<ref>) and (<ref>), we have
|α(u_h^n-1)|_x_j - α(u_h^n)|_x_j|
≤ C‖ u_h^n-1 - u^n-1‖_∞ + C ‖ u_h^n-u^n‖_∞
+1/2| | f'(u^n-1)|_x_j| -| f'(u^n)|_x_j| |.
With a Taylor expansion, we obtain
| α(u_h^n-1)|_x_j - α(u_h^n)|_x_j| ≤ C (‖ u^n-1 - u_h^n-1‖_∞ + ‖ u^n - u_h^n‖_∞ + Δ t ), ∀ 1≤ j≤ N.
With the assumption ‖χ^n‖≤ h^3/2 and ‖χ^n-1‖≤ h^3/2, bound (<ref>)
and approximation results, we have
| α(u_h^n-1)|_x_j - α(u_h^n)|_x_j| ≤ C (h + Δ t), ∀ 1≤ j ≤ N.
Using trace inequalities, we then have
Δ t ∑_j=1^N (α(u_h^n-1)|_x_j-α(u_h^n)|_x_j)[χ^n-1]^2|_x_j≤ C Δ t (1 + h^-1Δ t) ‖χ^n-1‖^2.
With the CFL condition, we conclude
Δ t ∑_j=1^N (α(u_h^n-1)|_x_j-α(u_h^n)|_x_j)[χ^n-1]^2|_x_j≤ C Δ t ‖χ^n-1‖^2.
Similarly we have
-Δ t ∑_j=1^N (α(u_h^n-1)|_x_j-α(u_h^n)|_x_j)[η^n-1]|_x_j[χ^n-1]|_x_j≤ C Δ t ‖χ^n-1‖^2 + C Δ t h^2k+1.
The fourth term in (<ref>) is bounded by Cauchy-Schwarz's inequality, trace inequalities, approximation results, the
CFL condition and (<ref>):
1/2Δ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1-η^n-1]|_x_j[χ^n-1-χ^n]|_x_j ≤‖χ^n-1-χ^n‖^2 + C Δ t^2 h^-2‖χ^n-1‖^2 + C Δ t^2 h^2k
≤‖χ^n-1-χ^n‖^2 + C Δ t ‖χ^n-1‖^2 + C Δ t h^2k+2.
The fifth term in (<ref>) is handled exactly like the fourth term.
Similarly the first part in the sixth term has the following bound:
Δ t ∑_j=1^N α(u_h^n)|_x_j [χ^n-1-χ^n]|_x_j [χ^n]|_x_j≤‖χ^n-1-χ^n‖^2 + C Δ t ‖χ^n‖^2.
For the second part, we use a Taylor expansion in time and the CFL condition:
Δ t ∑_j=1^N α(u_h^n)|_x_j [η^n-1-η^n]|_x_j [χ^n]|_x_j≤ C Δ t^2 h^k‖χ^n‖≤ C Δ t ‖χ^n‖^2 + C Δ t h^2k+2.
The last two terms in (<ref>) are treated almost identically, using approximation results, and
the boundedness of α:
1/2Δ t ∑_j=1^N α(u_h^n)|_x_j [η^n]|_x_j [χ^n]|_x_j +1/2Δ t ∑_j=1^N α(u_h^n-1)|_x_j [η^n-1]|_x_j [χ^n-1]|_x_j≤ C ε^-1Δ t h^2k+1
+ εΔ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]^2|_x_j
+ εΔ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j, ∀ε>0.
To summarize, with (<ref>), the term θ_3 is bounded as:
θ_3 ≤ C Δ t (‖χ^n‖^2+‖χ^n-1‖^2)
+ C Δ t^6
+ C Δ t (1+ε^-1) h^2k+1
-(1/2-ε) Δ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]^2|_x_j
-(1/2-ε) Δ t ∑_j=1^N α(u_h^n-1)|_x_j [χ^n-1]^2|_x_j, ∀ε >0, n≥ 2.
For n=1, the term θ_3 is simply bounded as:
θ_3 ≤ C Δ t ‖χ^1‖^2
+ C Δ t (1+ε^-1) h^2k+1
-(1/2-ε) Δ t ∑_j=1^N α(u_h^1)|_x_j [χ^1]^2|_x_j
+ 2‖χ^1‖^2,
∀ε >0.
Combining the bounds above for θ_i, 1≤ i≤ 3, we conclude the proof.
§ PROOF OF THEOREM <REF>
The proof for the forward Euler scheme is also done by induction. It is a less technical proof than
for the Adams–Bashforth scheme. We skip many details and give an outline of the proof.
Denote
ξ^n = ũ_h^n-Π_h u^n.
The induction hypothesis is less restrictive than for the Adams-Bashforth method, which yields
a convergence result that is valid for polynomials of degree one and above.
‖ξ^ℓ‖≤ h, ∀ 0≤ℓ≤ M.
Since ξ^0=0, the hypothesis (<ref>) is trivially satisfied for ℓ=0.
Fix ℓ∈{1,…,M} and assume that
‖ξ^n‖≤ h, ∀ 0≤ n ≤ℓ-1.
We now have to show that (<ref>) is valid for n = ℓ. We begin by deriving an error inequality.
We fix an interval I_j for 0≤ j≤ N. Using consistency in space of the scheme:
∫_I_j u_t^n ϕ_h = ℋ_j(u^n, ϕ_h), 0≤ n≤ M,
we obtain, after some manipulation, the error equation:
∫_I_j(ξ^n+1 - ξ^n) ϕ_h = ∫_I_j(Δ t u_t^n - u^n+1 + u^n ) ϕ_h + ∫_I_j(η^n+1 - η^n ) ϕ_h + Δ t(ℋ_j(u_h^n, ϕ_h) - ℋ_j(u^n, ϕ_h)).
The first term in the right-hand side of (<ref>) is bounded using a Taylor expansion, whereas
the second term vanishes due to (<ref>). Summing over the elements from j = 0, …, N
results in
∫_0^L (ξ^n+1 - ξ^n ) ϕ_h ≤ C Δ t^2 ∫_0^L |ϕ_h| + Δ t ∑_j = 0^N ( ℋ_j(u_h^n, ϕ_h) - Δ t ℋ_j(u^n, ϕ_h) ).
Define
b̃^n(ϕ_h) = Δ t ∑_j = 0^N ( ℋ_j(u_h^n, ϕ_h) - Δ t ℋ_j(u^n, ϕ_h) ).
Then equation (<ref>) becomes
∫_0^L (ξ^n+1 - ξ^n ) ϕ_h ≤ C Δ t^2 ∫_0^L |ϕ_h| +b̃^n(ϕ_h),
and Cauchy Schwarz's and Young's inequalities imply
∫_0^L (ξ^n+1 - ξ^n ) ϕ_h ≤ C Δ t^3 + C Δ t ‖ϕ_h‖^2 +b̃^n(ϕ_h).
We now choose ϕ_h = ξ^n to obtain:
∫_0^L (ξ^n+1 - ξ^n ) ξ^n ≤ C Δ t^3 + C Δ t ξ^n^2 + b̃^n(ξ^n).
It then follows that
1/2‖ξ^n+1‖^2 - 1/2‖ξ^n‖^2 ≤1/2‖ξ^n+1 - ξ^n‖^2 + C Δ t^3 + C Δ t ‖ξ^n‖^2 + b̃^n(ξ^n).
The terms ‖ξ^n+1-ξ^n‖ and b̃^n(ξ^n) are bounded by:
‖ξ^n+1-ξ^n‖^2 ≤ C Δ t^4 + C Δ t ‖ξ^n‖^2 + C Δ t h^2k+2,
b̃^n(ξ^n) ≤ C Δ t ‖ξ^n‖^2 + C Δ t h^2k+1.
Proof of (<ref>) follows closely the proof of Lemma <ref> but is
less technical. We skip it. Proof of (<ref>) differs from the proof of Lemma <ref> and details are given in
Appendix <ref>.
The error inequality simplifies to:
‖ξ^n+1‖^2 - ‖ξ^n‖^2 ≤ C Δ t^3 + C Δ t ‖ξ^n‖^2 + C Δ t h^2k+1.
Summing from n = 0, …, ℓ-1, and using the fact that ξ^0 = 0, one obtains:
‖ξ^ℓ‖^2 ≤ C Δ t^2 + C h^2k+1 + C Δ t ∑_n=0^ℓ-1‖ξ^n‖^2.
We now apply Gronwall's inequality:
‖ξ^ℓ‖^2 ≤ C_4 T e^T (Δ t^2 + h^2k+1),
where C_4 is independent of ℓ. Employing the CFL condition Δ t = O(h^2), one has:
‖ξ^ℓ‖≤(C_4 T e^T )^1/2( Δ t + h^k+1/2)
= (C_4 T e^T )^1/2( h^2 + h^k+1/2).
Hence the induction is complete if h is small enough so that
C_4 T e^T h < 1.
Since ‖η^ℓ‖≤ C h^k+1 and ‖ u^ℓ - u_h^ℓ‖≤‖η^ℓ‖ + ‖ξ^ℓ‖ one obtains:
‖ u^ℓ - u_h^ℓ‖≤ C (Δ t + h^k+1/2),
and we conclude the proof.
§ NUMERICAL RESULTS
§.§ Scalar case
In this section, we use the method of manufactured solutions to numerically verify convergence rates. Solutions to the inviscid Burgers equation,
∂ u/∂ t + ∂/∂ x( 1/2 u^2 ) = 0,
are approximated using the Adams–Bashforth scheme (<ref>).
We consider the following exact solution to (<ref>) posed in the interval [0,1]:
u(x,t) = cos(2 π x) sin(t) + sin(2 π x) cos(t).
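Since this u does not solve the homogeneous equation (<ref>) exactly, the method of manufactured solutions adds the residual of the PDE as a source term; a short symbolic computation (ours) of that forcing is:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.cos(2*sp.pi*x)*sp.sin(t) + sp.sin(2*sp.pi*x)*sp.cos(t)

# residual of u_t + (u^2/2)_x = 0; adding this as the source s(x,t)
# makes u above an exact solution of the forced Burgers equation
source = sp.simplify(sp.diff(u, t) + sp.diff(u**2 / 2, x))
print(source)
```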
Convergence rates in space, given in Table <ref>, are calculated for polynomial degrees k = 1, 2, 3 by fixing a small timestep Δ t = 10^-4 so the temporal error is small compared to the spatial error. The spatial discretization parameter h = 1/2^m for m = 1, …, 5, and we evolve the solution for ten timesteps. Our results yield a rate of k+1 in space, verifying the fact that the convergence estimate in Theorem <ref> is suboptimal.
Errors and rates in time are provided in Table <ref>. We fix h = 1/4, vary Δ t = 1/2^m, m = 10, …, 13, and consider high polynomial degrees k = 8,9 so the spatial error is smaller than the temporal error. We evolve the solution to the final time T = 1 s. We recover the expected second order rate in time.
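For reference, the observed orders reported in the tables are computed in the standard way; for errors e_m obtained on successively halved h (or Δt), e.g.:

```python
import numpy as np

def observed_rates(errors):
    """log2(e_m / e_{m+1}) for errors on successively halved h or dt."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])
```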
§.§ System case
In this section we compute convergence rates for a hyperbolic system that is the motivation for this work: a model which describes one–dimensional blood flow in an elastic vessel:
∂/∂ t[ A; Q ]
+
∂/∂ x[ Q; αQ^2/A + 1/ρ(Aψ - Ψ) ]
=
[ 0; -2 πν (α/(α-1)) Q/A ],
p = p_0 + ψ(A; A_0), Ψ = ∫_A_0^A ψ(ξ;A_0) d ξ.
The variables are vessel cross sectional area A and fluid momentum Q. The parameters are the reference pressure p_0 = 0 dynes/cm^2, the reference cross sectional area A_0 = 1 cm^2, the non–dimensional Coriolis coefficient α = 1.1, the fluid density ρ = 1.06 g/cm^3, and the kinematic viscosity ν = 3.302 × 10^-2 cm^2/s. For these computations we use a typical form for the function relating area to pressure <cit.>:
ψ = β(A^1/2 - A_0^1/2),
with β = 1 dynes/cm^3. In defining the numerical flux for our computations, we use a version of the local Lax–Friedrichs flux suggested for nonlinear hyperbolic systems in <cit.>. With U = [A,Q]^T and λ_1( U) and λ_2( U) the eigenvalues of the Jacobian of the flux function in (<ref>), the flux is defined with:
J( U^-|_x_j, U^+|_x_j) = max( |λ_1( U^-|_x_j)|, |λ_1( U^+|_x_j)| , |λ_2( U^-|_x_j)| , |λ_2( U^+|_x_j)| ).
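For completeness we note the closed form of the eigenvalues entering J: differentiating the flux in (<ref>) and using Ψ_A = ψ gives, with u = Q/A and c^2 = (A/ρ)∂ψ/∂ A = β√(A)/(2ρ) for the tube law above, λ_1,2 = α u ∓√(c^2 + α(α-1)u^2). The sketch below (our derivation, not quoted from the literature) evaluates these and the wave-speed bound:

```python
import numpy as np

rho, alpha_c, beta = 1.06, 1.1, 1.0   # parameter values from the text

def eigenvalues(A, Q):
    """lambda_{1,2} = alpha*u -/+ sqrt(c^2 + alpha*(alpha-1)*u^2),
    with u = Q/A and c^2 = beta*sqrt(A)/(2*rho) for this tube law."""
    u = Q / A
    c2 = beta * np.sqrt(A) / (2.0 * rho)
    d = np.sqrt(c2 + alpha_c * (alpha_c - 1.0) * u**2)
    return alpha_c * u - d, alpha_c * u + d

def J_llf(Um, Up):
    """Wave-speed bound J(U^-, U^+) in the system Lax-Friedrichs flux;
    Um and Up are the (A, Q) traces at a node."""
    lams = eigenvalues(*Um) + eigenvalues(*Up)   # tuple concatenation
    return max(abs(l) for l in lams)
```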
To compute errors and rates, we solve (<ref>) in the interval [0,1] with the following exact solution:
A(x,t) = cos(2 π x) cos(t) + 2, Q(x,t) = sin(2 π x) cos(t).
The discretization for a hyperbolic system follows the same procedure as for a scalar hyperbolic equation. For these simulations, we employ the second–order Adams–Bashforth scheme (<ref>) with the local Lax–Friedrichs numerical flux.
Errors and convergence rates in space, provided in Tables <ref> and <ref>, are determined by fixing a small time step Δ t = 2 × 10^-5 s and taking h = 1 / 2^m for m = 1, … 5. We consider k = 1, 2, 3 and evolve the solution for ten time steps.
To calculate the rate in time, we make the error in space small by choosing high order polynomials k = 8,9 on a mesh with size h=1/4. By taking h to be constant, we avoid overly refining Δ t due to the CFL condition. The time step Δ t = 1/2^m for m = 10, …, 13 and we evolve the solution to the final time T = 1 s. Results are displayed in Tables <ref> and <ref>.
The computed rates in space and time indicate that results analogous to Theorems <ref> and <ref> can be expected for such numerical discretizations of nonlinear hyperbolic systems. Numerical analysis for systems will be the subject of future work.
§ CONCLUSIONS
In this paper we prove a priori error estimates for fully discrete schemes approximating scalar conservation laws, where the spatial discretization is a discontinuous Galerkin method and the temporal discretization is either the second order Adams–Bashforth method or the forward Euler method. The estimates are valid for polynomial degree greater than or equal to two for the second order method and greater than or equal to one
for the first order method in time. A CFL condition of the form Δ t = O(h^2) is required. In future work, we will consider a priori error estimates for numerical methods approximating nonlinear hyperbolic systems like those describing blood flow in an elastic vessel.
§ APPENDIX
§.§ Proof of bound (<ref>)
Using Taylor expansions up to third order, we write
f(u_h^n) - f(u^n) = f'(u^n)(u_h^n - u^n) +1/2 f”(u^n)(u_h^n-u^n)^2
+1/6 f”'(ζ_1^n) (u_h^n-u^n)^3
= f'(u^n)(χ^n-η^n) +1/2 f”(u^n)(χ^n-η^n)^2
+1/6 f”'(ζ_1^n) (χ^n-η^n)^3 ,
= f'(u^n)χ^n
+1/2 f”(u^n) (χ^n)^2
-f'(u^n) η^n
- f”(u^n) χ^n η^n
+1/2 f”(u^n) (η^n)^2
+1/6 f”'(ζ_1^n) (χ^n-η^n)^3
= β_1+…+β_6,
f({u_h^n}) - f(u^n) = f'(u^n)({u_h^n} - {u^n})
+1/2 f”(u^n)({u_h^n}-u^n)^2
= f'(u^n)({χ^n} - {η^n}) + 1/2 f”(u^n)({χ^n}-{η^n})^2
+1/6 f”'(ζ_2^n) ({χ^n}-{η^n})^3,
= f'(u^n){χ^n}
+1/2 f”(u^n) ({χ^n})^2
-f'(u^n) {η^n}
- f”(u^n) {χ^n}{η^n}
+1/2 f”(u^n) ({η^n})^2
+1/6 f”'(ζ_2^n) ({χ^n}-{η^n})^3
= γ_1+…+γ_6,
where ζ^n_1 and ζ_2^n are some points between u_h^n and u^n, and {u_h^n} and u^n respectively.
We substitute these expansions in the terms ℱ(n,χ^n) and write:
ℱ(n,χ^n) = X_1 + … + X_6,
with
X_i = Δ t ∑_j = 0^N ∫_I_jβ_id χ^n/dx
- Δ t ∑_j = 1^N γ_i |_x_j [χ^n]|_x_j, 1≤ i≤ 6.
We integrate by parts the first term in the definition of X_1 and use the fact that f' vanishes at the endpoints of the domain, namely
at x_0 and x_N+1. The term X_1 then simplifies to
X_1 = -1/2Δ t ∑_j=0^N ∫_I_j (∂/∂ x f'(u^n)) (χ^n)^2
≤ C Δ t ‖χ^n ‖^2.
Using the assumption ‖χ^n‖≤ h^3/2 and trace inequalities, we have
X_2 = 1/2Δ t ∑_j=0^N ∫_I_j f”(u^n) (χ^n)^2 d χ^n/dx
-1/2Δ t ∑_j=1^N f”(u^n)|_x_j ({χ^n})^2|_x_j [χ^n]|_x_j
≤ C Δ t ‖χ^n‖_∞ h^-1‖χ^n‖^2
≤ C Δ t ‖χ^n‖^2.
To bound the term X_3 we define the following piecewise
constant function u_c^n elementwise as:
u_c^n|_I_j(x) = u^n|_x_j, ∀ x ∈ I_j, ∀ 0≤ j ≤ N.
We note that
‖ f'(u^n) - f'(u_c^n)‖_∞≤ Ch.
We then rewrite the term X_3
X_3 = -Δ t ∑_j = 0^N ∫_I_j f'(u^n)η^n d χ^n/dx
+ Δ t ∑_j = 1^N f'(u^n){η^n} |_x_j [χ^n]|_x_j
= -Δ t ∑_j = 0^N ∫_I_j (f'(u^n)-f'(u_c^n))η^n d χ^n/dx
-Δ t ∑_j = 0^N f'(u_c^n) ∫_I_jη^n d χ^n/dx
+ Δ t ∑_j = 1^N (f'(u^n)-f'({u_h^n}))|_x_j{η^n} |_x_j [χ^n]|_x_j
+ Δ t ∑_j = 1^N f'({u_h^n})|_x_j{η^n} |_x_j [χ^n]|_x_j.
The second term above vanishes because of (<ref>).
The first term is bounded using approximation properties and (<ref>).
Δ t ∑_j = 0^N ∫_I_j (f'(u^n)-f'(u_c^n))η^n d χ^n/dx≤ C Δ t h^2k+2 +
CΔ t ‖χ^n‖^2.
Using a Taylor expansion, for some ζ_3^n we have
|f'(u^n)-f'({u_h^n})| = |f”(ζ_3^n) {u^n-u_h^n}| ≤ C (‖χ^n‖_∞ + ‖η^n‖_∞).
Using the assumption ‖χ^n‖≤ h^3/2 we then have
Δ t ∑_j = 1^N (f'(u^n)-f'({u_h^n})){η^n} |_x_j [χ^n]|_x_j≤ C Δ t ‖χ^n‖^2 + C Δ t h^2k+2.
For the last term in (<ref>) we employ (<ref>) to obtain:
Δ t ∑_j = 1^N f'({u_h^n})|_x_j{η^n} |_x_j [χ^n]|_x_j≤ CΔ t ∑_j = 1^N (α(u_h^n)|_x_j + C |[u_h^n]|_x_j | ) |{η^n} |_x_j| | [χ^n]|_x_j|
= CΔ t ∑_j = 1^Nα(u_h^n)|_x_j|{η^n} |_x_j| | [χ^n]|_x_j|
+ CΔ t ∑_j = 1^N |[u_h^n]| |{η^n} |_x_j| | [χ^n]|_x_j| .
Using Cauchy-Schwarz's and Young's inequalities, approximation results and the assumption ‖χ^n‖≤ h^3/2, we obtain
Δ t ∑_j = 1^N f'({u_h^n})|_x_j{η^n} |_x_j [χ^n]|_x_j≤εΔ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]|_x_j^2
+ Cε^-1Δ t h^2k+1
+ C Δ t ‖χ^n‖^2.
In summary we have
X_3 ≤εΔ t ∑_j=1^N α(u_h^n)|_x_j [χ^n]|_x_j^2
+ Cε^-1Δ t h^2k+1
+ C Δ t ‖χ^n‖^2 + C Δ t h^2k+1.
The bounds for X_4, X_5, and X_6 are standard applications of Cauchy Schwarz's inequality, Young's inequality, the induction hypothesis, assumption (<ref>), and inequalities (<ref>), (<ref>), and (<ref>)–(<ref>):
X_4 = -Δ t ∑_j=0^N ∫_I_j f”(u^n) χ^n η^n d χ^n/dx
+Δ t ∑_j=1^N f”(u^n)|_x_j{χ^n}|_x_j{η^n}|_x_j [χ^n]|_x_j
≤ C Δ t ‖χ^n‖^2 + CΔ t h^2k+1,
X_5 = 1/2Δ t ∑_j=0^N ∫_I_j f”(u^n) (η^n)^2 d χ^n/dx
-1/2Δ t ∑_j=1^N f”(u^n) {η^n}^2|_x_j [χ^n]|_x_j
≤ C Δ t h^2k+2 + C Δ t ‖χ^n‖^2,
X_6 = 1/6Δ t ∑_j=0^N ∫_I_j f”'(ζ_1^n) (χ^n-η^n)^3 d χ^n/dx
-1/6Δ t ∑_j=1^N f”'(ζ_2^n) ({χ^n}-{η^n})^3|_x_j [χ^n]|_x_j
≤ C Δ t ‖χ^n‖^2 + C Δ t h^2k+1.
We can then conclude by combining all the bounds above.
§.§ Proof of bound (<ref>)
We rewrite, using the definition of α
b̃^n(ξ^n) = θ_1 + θ_2 + θ_3,
with
θ_1 =
Δ t ∑_j=0^N ∫_I_j (f(ũ_h^n)-f(u^n))dξ^n/dx
- Δ t ∑_j=1^N (f({ũ_h^n})-f(u^n))|_x_j [ξ^n]|_x_j,
θ_2 = Δ t ∑_j=1^N ∫_I_j (s(ũ_h^n)-s(u^n))ξ^n,
θ_3 = -Δ t ∑_j=1^N α(ũ_h^n)|_x_j [ũ_h^n]|_x_j [ξ^n]|_x_j.
We note that the bound for θ_1
follows the argument of the proof of (<ref>), where we substitute χ^n by ξ^n.
As in the previous section, we use Taylor expansions up to third order and write the term θ_1 as a sum
of six terms, X_i, 1≤ i≤ 6. Bounds for the X_i are obtained in a similar fashion, except for the term X_2,
which is bounded differently because the
induction hypothesis for the forward Euler scheme is weaker than the hypothesis for the Adams–Bashforth scheme.
We have
X_2 = Δ t 1/2∑_j = 0^N ∫_I_jf”(u^n)(ξ^n)^2d ξ^n/dx - Δ t 1/2∑_j = 1^N f”(u^n)({ξ^n})^2 |_x_j [ξ^n]|_x_j.
We rewrite the first term above: integrating by parts and using the fact that f” vanishes at the endpoints of the interval gives:
Δ t 1/2∑_j = 0^N ∫_I_jf”(u^n)(ξ^n)^2d ξ^n/dx = Δ t 1/6∑_j = 0^N ∫_I_jf”(u^n)d (ξ^n)^3/dx
= Δ t 1/6∑_j = 1^N f”(u^n)|_x_j [(ξ^n)^3]|_x_j -Δ t 1/6∑_j = 0^N ∫_I_j∂ f”(u^n)/∂ x (ξ^n)^3.
Now, we use the identity [ξ^3] = 2{ξ}^2[ξ] + {ξ^2}[ξ] to rewrite the first term in the right-hand side of (<ref>):
X_2 = Δ t 1/6∑_j = 1^N f”(u^n)|_x_j( {(ξ^n)^2} - {ξ^n}^2) [ξ^n]|_x_j - Δ t 1/6∑_j = 0^N ∫_I_j∂ f”(u^n)/∂ x (ξ^n)^3.
Employing the identity {ξ^2} - {ξ}^2 = 1/4[ξ]^2 for the first term and inductive hypothesis ξ^n_∞≤ h^1/2 on the second term gives:
X_2 ≤Δ t 1/24∑_j = 1^N f”(u^n)|_x_j [ξ^n]^3|_x_j + C Δ t ξ_∞ξ^2 ≤Δ t 1/24∑_j = 1^N f”(u^n)|_x_j [ξ^n]^3|_x_j + C Δ t ξ^2.
The first term in (<ref>) is broken into two parts:
Δ t 1/24∑_j = 1^N f”(u^n)|_x_j [ξ^n]^3|_x_j = Δ t 1/24∑_j = 1^N (f”(u^n)|_x_j - f”({ũ_h^n})|_x_j) [ξ^n]^3|_x_j + Δ t 1/24∑_j = 1^N f”({ũ_h^n})|_x_j [ξ^n]^3|_x_j.
We use for the first term in (<ref>) a Taylor expansion f”(u^n) - f”({ũ_h^n}) = f”'(ζ^n) {η^n-ξ^n}
with the inductive hypothesis to obtain the following bound:
Δ t 1/24∑_j = 1^N (f”(u^n)|_x_j-f”({ũ_h^n})|_x_j) [ξ^n]^3|_x_j
≤ CΔ t ‖ξ^n‖^2.
For the last term in (<ref>), since [u^n] = 0, we rewrite it using the identity [ξ^n] = [η^n] + [ũ_h^n]:
Δ t 1/24∑_j = 1^N f”({ũ_h^n})|_x_j [ξ^n]^3|_x_j
= Δ t 1/24∑_j = 1^N f”({ũ_h^n})|_x_j [η^n] |_x_j[ξ^n]^2|_x_j
+ Δ t 1/24∑_j = 1^N f”({ũ_h^n})|_x_j [ũ_h^n]|_x_j [ξ^n]^2|_x_j.
The first term in (<ref>) can be estimated with trace inequalities and approximation results. The second term in (<ref>) is bounded using inequality (<ref>) and the induction hypothesis:
Δ t 1/24∑_j = 1^N f”({ũ_h^n})|_x_j [ξ^n]^3|_x_j ≤ C Δ t h^-1‖η^n‖_∞‖ξ^n‖^2
+ Δ t 1/3∑_j = 1^N (α(ũ_h^n)|_x_j + C |[ũ_h^n]|^2|_x_j) [ξ^n]^2|_x_j
≤ C Δ t ‖ξ^n‖^2 + Δ t 1/3∑_j = 1^Nα(ũ_h^n)|_x_j [ξ^n]^2|_x_j + C Δ t h^-1 (max_j|[ũ_h^n]|_x_j|)^2 ‖ξ^n‖^2
≤ C Δ t ‖ξ^n‖^2 + Δ t 1/3∑_j = 1^Nα(ũ_h^n)|_x_j [ξ^n]^2|_x_j.
Combining all the estimates gives:
X_2 ≤Δ t 1/3∑_j = 1^Nα(ũ_h^n)|_x_j [ξ^n]^2|_x_j + CΔ t‖ξ^n‖^2.
This bound is added to the bounds for the other terms X_i's to obtain:
θ_1 ≤ C Δ t ‖ξ^n‖^2
+ (1/3 + ε) Δ t ∑_j = 1^Nα(ũ_h^n)|_x_j [ξ^n]^2|_x_j
+ C Δ t (1+ε^-1) h^2k+1.
The term θ_2 is bounded using Lipschitz continuity of s:
θ_2 ≤ C Δ t ‖ξ^n‖^2 + C Δ t h^2k+2.
The term θ_3 is rewritten as
θ_3 = -Δ t ∑_j=1^N α(ũ_h^n)|_x_j [ξ^n]^2|_x_j
+ Δ t ∑_j=1^N α(ũ_h^n)|_x_j [η^n]|_x_j [ξ^n]|_x_j.
Using Young's inequality and approximation results we obtain
θ_3 ≤ (-1+ε) Δ t ∑_j=1^N α(ũ_h^n)|_x_j [ξ^n]^2|_x_j
+ C Δ t h^2k+1, ∀ε>0.
This means that by choosing ε = 1/3 in the above, we conclude
b̃^n(ξ^n) ≤ C Δ t ‖ξ^n‖^2 + C Δ t h^2k+1.
plain
|
http://arxiv.org/abs/1701.08169v1 | 20170127190105 | The HI Chronicles of LITTLE THINGS BCDS III: Gas Clouds in and around Mrk 178, VII Zw 403, AND NGC 3738 | [
"Trisha Ashley",
"Caroline E. Simpson",
"Bruce G. Elmegreen",
"Megan Johnson",
"Nau Raj Pokhrel"
] | astro-ph.GA | [
"astro-ph.GA"
] |
December 30, 2023
Department of Physics, Florida International University
11200 SW 8th Street, CP 204, Miami, FL 33199
[email protected]
1Current Address: NASA Ames Research Center, Moffett Field, CA, 94035
Department of Physics, Florida International University
11200 SW 8th Street, CP 204, Miami, FL 33199
[email protected]
IBM T. J. Watson Research Center,
1101 Kitchawan Road, Yorktown Heights, New York 10598
[email protected]
CSIRO Astronomy & Space Science
P.O. Box 76, Epping, NSW 1710 Australia
[email protected]
Department of Physics, Florida International University
11200 SW 8th Street, CP 204, Miami, FL 33199
[email protected]
In most blue compact dwarf (BCD) galaxies, it remains unclear what triggers their bursts of star formation. We study the H I of three relatively isolated BCDs, Mrk 178, VII Zw 403, and NGC 3738, in detail to look for signatures of star formation triggers, such as gas cloud consumption, dwarf-dwarf mergers, and interactions with companions. High angular and velocity resolution atomic hydrogen (H I) data from the Very Large Array (VLA) dwarf galaxy survey, Local Irregulars That Trace Luminosity Extremes, The Nearby Galaxy Survey (LITTLE THINGS), allow us to study the detailed kinematics and morphologies of the BCDs in H I. We also present high sensitivity H I maps from the NRAO Green Bank Telescope (GBT) of each BCD to search their surrounding regions for extended tenuous H I emission or companions. The GBT data do not show any distinct galaxies obviously interacting with the BCDs. The VLA data indicate several possible star formation triggers in these BCDs. Mrk 178 likely has a gas cloud impacting the southeast end of its disk or it is experiencing ram pressure stripping. VII Zw 403 has a large gas cloud in its foreground or background that shows evidence of accreting onto the disk. NGC 3738 has several possible explanations for its stellar morphology and its H I morphology and kinematics: an advanced merger, strong stellar feedback, or ram pressure stripping. Although apparently isolated, the H I data of all three BCDs indicate that they may be interacting with their environments, which could be triggering their bursts of star formation.
§ INTRODUCTION
Dwarf galaxies are typically inefficient star-formers <cit.>. However, blue compact dwarf (BCD) galaxies are low-shear, high-gas-mass fraction dwarfs that are known for their dense bursts of star formation in comparison to other dwarfs <cit.>. It is often suggested that the enhanced star formation rates in BCDs come from interactions with other galaxies or that they are the result of dwarf-dwarf mergers <cit.>. Yet, there are still many BCDs that are relatively isolated with respect to other galaxies, making an interaction or merger scenario less likely <cit.>. Other methods for triggering the burst of star formation in BCDs have been suggested, from accretion of intergalactic medium (IGM) to material sloshing in dark matter potentials <cit.>, but it remains unknown what has triggered the burst of star formation in a majority of BCDs.
Understanding star formation triggers in BCDs is important for understanding how/whether BCDs evolve into/from other types of dwarf galaxies. So far, attempts to observationally place BCDs on an evolutionary path between different types of dwarf galaxies have been largely unsuccessful <cit.>. BCDs have vastly different stellar characteristics than irregular and elliptical dwarf galaxies. Interaction at a distance between dwarf galaxies has been suggested as a possible pathway to BCD formation <cit.>; however, recent studies show observationally that dwarf galaxy pairs do not have enhanced star formation, only extended neutral gas components <cit.>. Some authors have had success modeling the formation of BCDs through consumption of IGM <cit.> and dwarf-dwarf mergers <cit.>, and there is some evidence that these processes may contribute to the formation of individual BCDs; however, these processes have yet to be confirmed observationally for BCDs as a group.
There has been evidence, in case studies of individual BCDs, that external gas clouds and mergers could be important for triggering bursts of star formation. <cit.> found tidal features in the H I of the BCD II Zw 40. This galaxy has no known nearby companion and therefore could be an example of an advanced merger. Haro 36 is a BCD that is thought to be relatively isolated, has a kinematically distinct gas cloud in the line of sight, an H I tidal tail, and may be showing some signs of an associated stellar tidal tail <cit.>. These features led <cit.> to conclude that Haro 36 is likely the result of a merger. IC 10 is also an interesting BCD that could be experiencing IGM accretion or is the result of a merger. <cit.> present a new H I extension that extends to the north of IC 10's main disk. <cit.> find that IC 10's northern extension and the southern plume are likely IGM filaments being accreted onto IC 10's main disk or extensive tidal tails that are evidence of IC 10 being an advanced merger. For each of the above examples, the galaxies were studied as individuals rather than just one galaxy in a very large sample. Studying the properties of BCDs as individual galaxies may therefore be the key to understanding what has triggered their burst of star formation, since each BCD is morphologically and kinematically so distinct.
In this paper we present high angular and high velocity resolution Very Large Array[The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.] (VLA) H I data from the LITTLE THINGS[Local Irregulars That Trace Luminosity Extremes, The Nearby Galaxy Survey; <https://science.nrao.edu/science/surveys/littlethings>] project <cit.> in order to investigate the internal H I morphologies and kinematics of Mrk 178, VII Zw 403, and NGC 3738. For basic information on these galaxies see Table <ref>. We also present higher sensitivity H I data from the Robert C. Byrd Green Bank Telescope<ref> (GBT) encompassing a total area of 200 kpc×200 kpc and a velocity range of 2500 km s^-1. These data are used to study each BCD individually in order to look for evidence of star formation triggers.
lccccccc
Basic Galaxy Information
0pt
Galaxy RA (2000.0) Dec (2000.0) Distancea Systemic R_Db log SFR_Dc M_Va
Name (hh mm ss.s) (dd mm ss) (Mpc) Velocity () (kpc) (M_ yr^-1 kpc^-2) (mag)
Mrk 178 11 33 29.0 49 14 24 3.9 250 0.33±0.01 -1.60±0.01 -14.1
VII Zw 403 11 27 58.2 78 59 39 4.4 -103 0.52±0.02 -1.71±0.01 -14.3
NGC 3738 11 35 49.0 54 31 23 4.9 229 0.78±0.01 -1.66±0.01 -17.1
a<cit.>
bR_D is the V-band disk exponential scale length <cit.>.
cSFR_D is the star formation rate, measured from Hα data, normalized to an area of πR_D^2 <cit.>
§ SAMPLE
§.§ Mrk 178
Mrk 178 (=UGC 6541) is a galaxy that has a confusing classification record in the literature; it has been classified as a merger <cit.> and two separate articles have suggested that Mrk 178 has a nearby companion (a different companion in each paper) at a velocity close to its own velocity <cit.>. Upon further investigation, none of these claims can be verified <cit.>. <cit.> cite <cit.> to support the idea that Mrk 178 is a merger; however, <cit.> do not claim that it is a merger, merely a dwarf irregular. <cit.> suggest that UGC 6538 is a companion to Mrk 178; however, the velocity difference of these two galaxies is almost 2800 km s^-1 (velocities taken from NED[NASA/IPAC Extragalactic Database (NED) http://ned.ipac.caltech.edu/]), making UGC 6538 more likely a background galaxy. <cit.> suggest that Mrk 178 is a close pair with UGC 6549; however, the velocity difference between these two galaxies is more than 9000 km s^-1 (velocities taken from NED), making it a very unlikely candidate for a companion to Mrk 178 and probably a spatially coincident background galaxy. The classification of Mrk 178 in <cit.> has led NED to label Mrk 178 as a galaxy in a pair. <cit.> find, from a NED search, that the closest galaxy to Mrk 178 within ±150 km s^-1 is NGC 3741 at a distance of 410 kpc and a velocity difference of 20 km s^-1. Assuming that the line of sight velocity difference between Mrk 178 and NGC 3741 is comparable to their transverse velocity, this distance is too large for Mrk 178 to have recently interacted with NGC 3741; using their relative velocity and their approximate distance, they would require 20 Gyr to meet, making an interaction between the two highly unlikely. Mrk 178 is located roughly in the Canes Venatici I group of galaxies <cit.>.
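As a quick arithmetic check of this timescale (ours, taking 1 kpc ≈ 3.086×10^16 km and 1 Gyr ≈ 3.156×10^16 s):
t ≈ d/v = (410 kpc × 3.086×10^16 km kpc^-1)/(20 km s^-1) ≈ 6.3×10^17 s ≈ 20 Gyr,
consistent with the estimate above; the analogous timescales quoted below for VII Zw 403 and NGC 3738 follow from the same arithmetic.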
The stellar and gas components of Mrk 178 have been previously studied <cit.>. <cit.> studied Mrk 178's H I distribution using the Westerbork Synthesis Radio Telescope (WSRT). Their integrated H I intensity map, at a resolution of 13″, shows a broken ring-like structure. Mrk 178 is also part of a survey that uses the Giant Metrewave Radio Telescope (GMRT) called
the Faint Irregular Galaxies GMRT Survey (FIGGS). <cit.> present Mrk 178's FIGGS data, but do not discuss the morphology or kinematics of Mrk 178 as an individual galaxy. Their integrated H I intensity maps, at resolutions of 22″.68 and 11″.71, show that Mrk 178 has a highly irregular shape to its gaseous disk.
Mrk 178's stellar population is well known for its young Wolf-Rayet (WR) features <cit.>. A detailed study of Mrk 178's WR population by <cit.> revealed a large number of WR stars in its brightest stellar component. Mrk 178's stellar population was studied in detail by <cit.>. Their results indicate that Mrk 178 had a higher star formation rate 0.5 Gyr ago when compared to its current star formation rate. Their results also indicate that Mrk 178 has an old underlying stellar population.
§.§ VII Zw 403
VII Zw 403 (=UGC 6456) is a well known isolated BCD without obvious signatures of tidal interaction <cit.>. It sits just beyond the M81 group and is falling in towards the M81 group <cit.>. <cit.> found that the closest galaxy to VII Zw 403 within ±150 km s^-1 is KDG 073, at a distance of 900 kpc and a velocity difference of 32 km s^-1. Companions that are currently interacting are not likely to be more than 100 kpc away from each other and are likely to be close in velocity <cit.>. Using the relative velocity and distance of VII Zw 403 and KDG 073 to estimate the time since they would have last passed each other, and assuming that their line of sight velocity difference is comparable to their transverse velocity, we note that their last interaction would have been 28 Gyr ago, roughly twice the age of the universe, making it highly unlikely that they have interacted.
VII Zw 403's HI has been studied using the GBT, Nançay, the GMRT, and the VLA <cit.>. The HI data reveal a galaxy that has a disturbed velocity field and irregular morphology. The VLA data presented in <cit.> reveal a break in the major axis of the isovelocity contours and a possible HI hole. <cit.> conclude that VII Zw 403 may have experienced an accretion event in its past, which is now difficult to detect.
<cit.> modeled the star formation history of VII Zw 403 using far infrared Hubble Space Telescope data. They concluded that VII Zw 403's star formation has been continuous and not episodic, with an increased star formation rate over the past Gyr. <cit.> also modeled VII Zw 403's star formation history and showed a starburst occurring in the 600-800 Myr interval of VII Zw 403's past, followed by a lower star formation rate. X-ray emission was detected in VII Zw 403 using the PSPC instrument on the ROSAT satellite <cit.>. There was apparent extended X-ray emission in the form of three diffuse arms, which all emanated from a central source. A point source located at the central X-ray source was later confirmed by two studies <cit.>, but the diffuse emission seen in <cit.> was not detected by Chandra or ROSAT's HRI instrument.
§.§ NGC 3738
NGC 3738 (=UGC 6565) is a galaxy that has not received a significant amount of individual attention. NGC 3738 is not always classified as a BCD; however, <cit.> suggest that the light profile properties of NGC 3738 are those of a BCD. <cit.> also support this idea, with their light profile of NGC 3738 being most similar to the other eight BCDs in their sample. NGC 3738 has been included in several other large surveys <cit.>; however, it is not discussed in detail. The HI map from <cit.> reveals a very clumpy and irregular morphology at a resolution of 13.5″. <cit.> find that the closest companion to NGC 3738 within ±150 km s^-1 is NGC 4068, at a distance of 490 kpc and a velocity difference of 19 km s^-1. With these parameters as rough estimates (assuming their line of sight velocity difference is comparable to their transverse velocity), it is unlikely that these two galaxies would have met, since they would require 25 Gyr to travel to each other. NGC 3738 is located roughly in the Canes Venatici I group of galaxies <cit.>.
§ OBSERVATIONS AND DATA REDUCTION
§.§ The Very Large Array Telescope
The VLA data of Mrk 178, VII Zw 403, and NGC 3738 were collected as part of the LITTLE THINGS project. LITTLE THINGS is a survey of 41 dwarf galaxies; each galaxy in the survey has high angular (6″) and high velocity (≤2.6 km s^-1) resolution data from the B, C, and D configurations of the VLA. For more information about LITTLE THINGS see <cit.>. Basic observational parameters for Mrk 178, VII Zw 403, and NGC 3738 are given in Table <ref>.
A detailed description of data calibration and mapping techniques used for LITTLE THINGS can be found in <cit.>. The VLA maps were made using the Multi-Scale (M-S) clean algorithm as opposed to the classical clean algorithm. M-S clean convolves the data to several beam sizes (0″, 15″, 45″, 135″ for LITTLE THINGS maps) and then searches the convolved data for the region of highest flux amongst all of the convolutions. That region is then used for clean components. The larger angular scales map the tenuous HI structure, while the smaller angular scales map the high resolution details of the HI. Therefore, M-S clean allows us to recover tenuous emission while maintaining the high angular resolution details in the images. Basic information on the VLA maps of Mrk 178, VII Zw 403, and NGC 3738 can be found in Table <ref>. For more information on the advantages of M-S Clean see <cit.>.
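To make the scale-selection step concrete, the short Python sketch below illustrates the logic described above. It is a toy illustration only: the Gaussian kernels, scale values in pixel units, and the omission of dirty-beam handling and flux-scale corrections are our simplifying assumptions, not the actual AIPS/CASA implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def ms_clean_sketch(dirty, scales=(0, 15, 45, 135), gain=0.1, niter=200):
        # Toy multi-scale CLEAN loop: smooth the residual to each scale,
        # pick the (scale, pixel) pair with the highest smoothed flux,
        # and subtract a component of that scale at that position.
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        for _ in range(niter):
            best = None
            for s in scales:
                smoothed = gaussian_filter(residual, s) if s > 0 else residual
                idx = np.unravel_index(np.argmax(smoothed), smoothed.shape)
                if best is None or smoothed[idx] > best[0]:
                    best = (smoothed[idx], s, idx)
            peak, s, idx = best
            component = np.zeros_like(dirty)
            component[idx] = gain * peak
            if s > 0:  # spread the point component over its scale
                component = gaussian_filter(component, s)
            model += component
            residual -= component
        return model, residual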
Table: VLA Observing Information

Galaxy Name   Configuration   Date Observed                     Project ID   Time on Source (hours)
Mrk 178       B               08 Jan 15, 08 Jan 21, 08 Jan 27   AH927        10.4
              C               08 Mar 23, 08 Apr 15              AH927        5.75
              D               08 Jul 8, 08 Jul 24, 08 Jul 25    AH927        1.75
VII Zw 403    B               06 Sep 10                         AH907        8.55
              C               92 Apr 11                         AH453        3.7
              D               97 Nov 10                         AH623        4
NGC 3738      B               08 Jan 15, 08 Jan 21, 08 Jan 27   AH927        10.87
              C               08 Mar 23, 08 Apr 15              AH927        5.22
              D               08 Jul 8, 08 Jul 24, 08 Jul 25    AH927        1.69
Table: VLA Map Information

Galaxy Name   Weighting        Synthesized      Linear            Velocity              RMS over 10 km s^-1
              Scheme           Beam Size (″)    Resolution (pc)   Resolution (km s^-1)  (10^19 atoms cm^-2)
Mrk 178       Robust (r=0.5)   6.19 × 5.48      120               1.29                  4.6
              Natural          12.04 × 7.53     230               1.29                  1.6
VII Zw 403    Robust (r=0.5)   9.44 × 7.68      200               2.58                  3.7
              Natural          17.80 × 17.57    380               2.58                  0.74
NGC 3738      Robust (r=0.5)   6.26 × 5.51      150               2.58                  4.7
              Natural          13.05 × 7.79     310               2.58                  1.5
§.§ The Green Bank Telescope
Mrk 178, VII Zw 403, and NGC 3738 were also observed in HI with the GBT by two projects. The GBT data are higher sensitivity and lower resolution than the VLA data; therefore, the GBT maps were used to search the surrounding regions for companion galaxies and extended, tenuous emission, while the VLA maps were used to see the detailed morphology and kinematics of the HI. For basic GBT observing information, see Table <ref>. The first project (Proposal ID GBT/12B-312; P.I. Johnson) covered a 2°×2° field around each galaxy (140 kpc×140 kpc for Mrk 178, 150 kpc×150 kpc for VII Zw 403, and 170 kpc×170 kpc for NGC 3738). These data were combined with the second project (P.I. Ashley; Proposal ID GBT13A-430), which covered a 200 kpc×200 kpc region around each BCD and a total velocity range of 2500 km s^-1 to search for any extended emission and nearby companions <cit.>. In order to make the maps' sensitivity uniform throughout a 200 kpc×200 kpc region, the second project also observed the regions around the 2°×2° maps from the first project to fill in the 200 kpc×200 kpc region.
On-the-fly mapping was used for both projects, scanning in a raster pattern in Galactic latitude and longitude, and sampling at the Nyquist rate. With a 12.5 MHz bandwidth, in-band frequency switching with a central frequency switch of 3.5 MHz was implemented to calibrate the data. Using a code written by NRAO staff, the data were first corrected for stray radiation (with the exception of Mrk 178's data, see below) and then Hanning smoothed to increase the signal-to-noise ratio. Next, standard calibration was done using the GBTIDL[Developed by NRAO; for documentation see <http://gbtidl.sourceforge.net>.] task getfs. Radio frequency interference (RFI) spikes were then manually removed by using the values of neighboring channels to linearly interpolate over the spike in frequency space[Mrk 178's data suffered significant RFI throughout most of the integrations; some integrations were flagged entirely. We removed as much RFI as possible from other integrations; however, with about 50 hours of data and a 3 second integration time, there are about 60,000 integrations, each with 2 polarizations, so we were unable to remove all RFI. The RFI also often remained at low levels in individual integrations until scans were averaged together, making it difficult to find with available RFI-finding visual tools and requiring manual removal.]. After further smoothing, the spectral baselines were fit with third- or fourth-order polynomials to remove residual instrumental effects[NGC 3738's data contained two sources that are not near NGC 3738 in velocity but, due to the frequency switching, appeared close to NGC 3738 in frequency space. For more information on how these sources were calibrated see Appendix <ref>.].
Mrk 178's baselines contained low-level sinusoidal waves throughout each session, possibly due to a resonance in the receiver. In order to remove the sinusoidal features, prior to any other calibration steps, we used a code written by Pisano, Wolfe, and Pingel (Pingel, private communication) that uses the ends of each row in the GBT map as faux off-positions. These faux off-positions in each row are subtracted from the rest of the row, resulting in a stable baseline. This procedure should not result in any loss of extended flux around Mrk 178 since it uses only blank pixels on the edges of each row in the map to subtract baselines from the corresponding row of pixels. The code also removed the need for stray radiation corrections, as much of the Milky Way contribution was removed through subtraction of the faux off-positions.
After calibration, the data were imaged in AIPS. dbcon was used to combine all of the sessions from both projects and sdgrd was then used to spatially grid the data. The final GBT maps were made in xmom and had their coordinates transformed from Galactic to Equatorial using the task flatn in AIPS. Rotating the data in flatn to align the coordinates so that north faces up and east to the left (like the VLA maps) requires the data to be re-gridded in the process. The re-gridding results in a slight change of flux values for each pixel, therefore, any measurements (mass, noise, etc.) obtained for the GBT data were taken prior to the rotation of the maps. Also, each map was compared before and after the rotation occurred to look for any features in the map that may have been morphologically distorted. The effects of re-gridding were inconsequential at the resolution of the GBT maps. The rotation was done as a visual aid for the reader to easily compare the orientation of the VLA maps to the GBT maps. For basic information on the individual GBT maps, see Table <ref>.
Table: GBT Observing Information

Galaxy Name   Total Time Observing^a (hours)   Angular Size Observed^b (degrees)
Mrk 178       58.5                             2.9 × 2.9
VII Zw 403    43.5                             2.6 × 2.6
NGC 3738      34.25                            2.3 × 2.3

^a Total time spent on observations, including overhead (e.g., moving the telescope, setting up for observations, and time on calibrators).
^b These angular sizes represent the 200×200 kpc^2 fields of the galaxies. For observing purposes these fields had 0.1 degrees added horizontally and vertically as a buffer, to account for the time between when the telescope would start/stop moving and when it began recording data.
Table: GBT Map Information

Galaxy Name   Beam Size (″)     Linear             Velocity               RMS over 10 km s^-1
                                Resolution (kpc)   Resolution (km s^-1)   (10^16 atoms cm^-2)
Mrk 178       522.85 × 522.85   10                 0.9661                 6.7
VII Zw 403    522.23 × 522.23   11                 0.9661                 3.9
NGC 3738      522.81 × 522.81   12                 0.9661                 3.6
§ RESULTS: MRK 178
§.§ Mrk 178: Stellar Component
The FUV, Hα, and V-band maps of Mrk 178 are shown in Figure <ref>. The FUV data were taken with GALEX <cit.>, and the Hα and V-band data were taken with the Lowell Observatory 1.8m Perkins Telescope <cit.>. The FUV, Hα, and V-band surface brightness limits are 28.5, 28, and 27 mag arcsec^-2, respectively <cit.>. All three of these maps share a common feature in their morphology (most easily seen in the Hα map): there is a region of high stellar density to the south, and a curved structure of stars that lies north of the high stellar density region and curves to the west (right).
§.§ Mrk 178: VLA Morphology
The VLA natural-weighted integrated intensity map is shown in Figure <ref>a. There are two distinct regions of high density; one to the north and one to the south. The dense region to the north has three peaks, while the region to the south has one peak. These two high density regions appear to be part of a ring-like structure that was also seen in <cit.>. There is also tenuous HI to the northwest that creates an overall cometary morphology in the HI map.
The VLA robust-weighted integrated intensity map is shown in Figure <ref>a. This map shows the broken ring-like structure at a higher resolution. Plots of the FUV, Hα, and V-band contours over the colorscale of the robust-weighted integrated intensity map are also shown in Figures <ref>b-<ref>d, respectively. In all three figures the curved feature in the stellar components follows the morphology of the broken ring-like structure in the northeast. In both the robust-weighted data (Figure <ref>d) and the natural-weighted data (Figure <ref>c), the V-band disk extends further southeast than the HI.
§.§ Mrk 178: VLA Velocity and Velocity Dispersion Field
The VLA velocity field for Mrk 178 can be seen in Figure <ref>b. If the tenuous gas creating the extension to the northwest is not included, then the velocity field of Mrk 178 is reminiscent of solid body rotation with a kinematic major axis at a position angle (PA) of roughly 230° (estimated by eye). This first kinematic major axis is nearly perpendicular to the stellar morphological major axis <cit.>. The tenuous extension to the northwest has isovelocity contours that are nearly perpendicular to the isovelocity contours of the rest of the HI, with a kinematic major axis at a PA of roughly 135° (estimated by eye). This second kinematic major axis also nearly aligns with the stellar morphological major axis. A position-velocity (P-V) diagram for each of the kinematic axes in Mrk 178's velocity field can be seen in Figure <ref>. Figure <ref> was created in kpvslice, which is part of the Karma[Documentation is located at <http://www.atnf.csiro.au/computing/software/karma/>.] software package <cit.>. In Karma the user draws a line over a map of the galaxy and Karma plots the velocity of the gas at every position along the line. The P-V diagrams of the natural-weighted cubes have color bars that begin at the 1σ level (0.64 mJy/beam). The white box in the top left P-V diagram encompasses the tenuous gas in the northwest end of Mrk 178 that is rotating about one of the two kinematic major axes (indicated by the red slice in the velocity map to the top right of Figure <ref>). The emission at more negative angular offsets than this box is associated with the morphological peak of emission to the south identified in Section <ref>. The P-V diagram in the bottom-left of Figure <ref> shows the velocity of the gas in the head of the cometary shape increasing from the northeast to the southwest.
The northwest extension has a length of 920 pc. The length of the extension was taken from the natural-weighted map, from the tip of the northwest edge of the 2σ contour to the southeast tip of the 245 km s^-1 contour. The 245 km s^-1 contour (the red contour in Figure <ref>b) was chosen as the cutoff for the length because it is the most southeastern isovelocity contour with a kinematic major axis that is nearly aligned with the stellar morphological major axis.
The tenuous component to the northwest has a maximum velocity difference of 17 km s^-1 from the systemic velocity, and the southeast kinematic component has a maximum velocity difference of 16 km s^-1 from the systemic velocity. Mrk 178 is therefore rotating slowly and probably has a shallow potential well. Most of the velocity dispersion map contains velocity dispersions of 9-13 km s^-1 (Figure <ref>e).
Neither of the two kinematic major axes is likely due to dispersions in the gas; the velocity dispersion map in Figure <ref>e has a gradient of 2-4 km s^-1, while the velocity gradient is 10-16 km s^-1 over the radius of the galaxy. Instead, one of the major kinematic axes of Mrk 178 is likely from rotation in the disk and the other could be from an extragalactic impact or some other disturbance, as discussed in Section <ref>. The V-band image of Mrk 178 (Figure <ref>) indicates that Mrk 178's outer stellar isophotes are elliptical, as would be expected of a disk. Disk galaxies typically have rotation associated with their disk. As previously mentioned, the northwestern edge of the gaseous disk has a kinematic major axis that follows the stellar disk's morphological major axis (see Figure <ref>d), as would be expected from gas rotating with the disk of the galaxy. Therefore, there are likely two real kinematic major axes in the gas of Mrk 178, with the kinematic major axis to the northeast rotating like a typical disk.
§.§ Mrk 178: GBT Morphology And Velocity Field
The integrated intensity map as measured with the GBT is shown in the top of Figure <ref>. An arrow is used to identify Mrk 178, which is very difficult to distinguish from the noise in this map. Mrk 178's GBT data had several problems, including the large amounts of difficult-to-remove RFI noted in Section <ref>. The data cube was inspected with the histogram tool in CASA's <cit.> viewer window. From this inspection it was concluded that the other bright spots in Figure <ref>a are likely due to RFI. The northwestern region of the map is significantly noisier than the rest of the map (about 1.5 times noisier). Due to the noisiness of the GBT maps, we will not discuss the results of Mrk 178's GBT maps much throughout the rest of the paper. However, Mrk 178's HI emission detected with the GBT is likely real and not noise; in the bottom left of Figure <ref>, Mrk 178's VLA outer contour is plotted on top of a close up of Mrk 178's GBT map, and these two maps have the same general morphology with an extension of gas to the northwest. Mrk 178's GBT velocity field was also inspected; however, due to the resolution of the map, no new information can be gleaned from it.
§.§ Mrk 178: Mass
The HI mass of each galaxy was calculated using the AIPS task ispec, and the masses of individual galaxy features were measured using the AIPS task blsum. ispec will sum the flux within a user-specified box over a user-specified range of velocity channels. The sum can then be used to calculate the HI mass using the following equation:
M_HI(M_⊙) = 235.6 D^2 ∑_i S_i ΔV
where D is the distance of the galaxy in units of Mpc, S_i is the flux in mJy in channel i, and ΔV is the channel width in km s^-1. blsum also results in the sum of a flux within a given region; however, the region can be any shape drawn by the user on the map. Mrk 178's total HI mass from the VLA data is 8.7×10^6 M_⊙. The HI mass of the northwest extension in the VLA natural-weighted map was also calculated using blsum. The border which separates the northwest extension from the rest of the HI in Mrk 178 was again defined by the 245 km s^-1 isovelocity contour that is furthest northwest in the map. The mass of the northwest extension is 7.5×10^5 M_⊙, or 8.6% of the total VLA HI mass. Mrk 178's total HI mass from the GBT data is 1.3×10^7 M_⊙. Mrk 178's VLA maps recovered 67% of the GBT mass. Using the same velocity width used to measure Mrk 178's GBT mass, the uncertainty in Mrk 178's GBT mass is 5×10^5 M_⊙, making Mrk 178 a significant detection in the GBT maps with an HI mass at 26σ.
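For reference, the ispec-style sum in the equation above is straightforward to evaluate outside of AIPS. The minimal Python sketch below uses made-up channel fluxes purely for illustration; only the distance (3.9 Mpc) and channel width (1.29 km s^-1) match Mrk 178's values:

    def hi_mass_msun(distance_mpc, channel_fluxes_mjy, channel_width_kms):
        # M_HI (M_sun) = 235.6 * D^2 * sum_i(S_i) * dV
        return 235.6 * distance_mpc**2 * sum(channel_fluxes_mjy) * channel_width_kms

    # Example with three fictitious channel fluxes (mJy):
    print(hi_mass_msun(3.9, [20.0, 35.0, 28.0], 1.29))  # ≈3.8e5 M_sun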
§ DISCUSSION: MRK 178
Mrk 178 has odd HI and stellar morphologies, including an overall cometary shape in its HI disk, two kinematic major axes in the velocity map, and a stellar disk that extends beyond the natural-weighted HI map due to a lack of tenuous gas to the southeast of the disk. We discuss four possible explanations for these kinematics and morphologies.
§.§ Mrk 178 Has A Large Hole In The
In Figures <ref>a and <ref>a a large HI hole-shell structure is visible in the southern region of Mrk 178's disk. The high density regions in the north and south of Mrk 178's disk could be part of a shell that has been created by the hole between them. This hole was identified as part of a LITTLE THINGS project cataloguing and characterizing all of the holes in the 41 dwarf galaxy sample using the hole quality checks outlined in <cit.> (Pokhrel et al., in prep.). To initially be included in the catalog, the hole structure must be visible in 3 consecutive velocity channels; Mrk 178's hole is visible in 6 consecutive velocity channels in the natural-weighted cube (251 to 258 km s^-1). <cit.> also assign each hole a quality value of 1-9 (low to high quality) based on the number of velocity channels that contain the hole, whether the location of the hole's center changes across the velocity channels, the difference in surface brightness between the hole and its surroundings (at least 50%), and how elliptical the appearance of the hole is in a P-V diagram. Based on these criteria, Mrk 178's hole has a quality value of 6, which is average quality.
In P-V diagrams, the emission around an HI hole will create an empty ring or partial ring appearance when a hole is present <cit.>. The left side of Figure <ref> shows the P-V diagram for Mrk 178's hole. The black ellipse indicates the parameter space of the hole in the P-V diagram; the hole creates a partial ring defined by angular offsets of about -11.4″ and +11.4″, and a central velocity of 260 km s^-1. The right side of Figure <ref> is the natural-weighted HI map of Mrk 178, with the red arrow indicating the location of the slice used for the P-V diagram and the white ellipse indicating the location of Mrk 178's hole. The higher velocity side of the ring may be composed of very tenuous gas, or that side of the hole may have blown out of the disk.
With the higher velocity side of the shell not clearly defined, the estimated expansion rate uncertainty will be high; however, the velocity of the dense edge of the shell can be used to get an estimate of the expansion. The velocity of the center of this hole is found to be 260 km s^-1 and the velocity of the intact side of the shell was taken to be 244 km s^-1, resulting in an expansion velocity of 16 km s^-1. The radius of the hole was taken to be the square root of the product of the major and minor axes of the hole, resulting in a radius of 180 pc. A hole of this size, expanding at 16 km s^-1, would have taken roughly 11 Myr to form. This calculated age of the hole is a rough estimate, as the expansion of the hole may have been much faster when it first formed, and the hole looks as though it has blown out of one side of the disk, which means the hole may be older than indicated by its current expansion rate.
The energy needed to create Mrk 178's hole can be estimated using Equation 26 from <cit.>, which calculates the energy from the initial supernova burst:
E_0=5.3×10^-7 n_0^1.12 v_sh^1.40 R^3.12
where E_0 is the initial energy in units of 10^50 ergs, n_0 is the initial volumetric density of the gas in atoms cm^-3, v_sh is the velocity of the hole's expansion in km s^-1, and R is the radius of the hole in pc. For Mrk 178's hole, n_0 was taken to be the approximate density of the surrounding gas. A column density of 7.69×10^20 cm^-2, the average density of the surrounding HI, was used as the initial column density for the hole. Assuming a scale height of 1740 pc (Pokhrel et al., in prep.), n_0 is approximately 0.14 cm^-3. Using these parameters results in an energy of 3.1×10^51 ergs. Assuming that the energy of a supernova explosion is 10^51 ergs, it would take approximately 3 supernova explosions to create a hole of this size. This is a very small number of supernova explosions, which could easily have occurred over a period as short as a Myr at Mrk 178's current star formation rate <cit.>.
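These estimates can be checked in a few lines of Python. The sketch below simply plugs in the numbers quoted above; the unit-conversion constants are the only assumptions:

    KM_PER_PC = 3.086e13      # km in one parsec
    S_PER_MYR = 3.156e13      # seconds in one Myr

    def hole_age_myr(radius_pc, v_exp_kms):
        # Kinematic age: radius / expansion velocity
        return radius_pc * KM_PER_PC / v_exp_kms / S_PER_MYR

    def hole_energy_erg(n0_cm3, v_sh_kms, radius_pc):
        # Equation above, with E_0 in units of 1e50 erg
        return 5.3e-7 * n0_cm3**1.12 * v_sh_kms**1.40 * radius_pc**3.12 * 1e50

    print(hole_age_myr(180.0, 16.0))             # ≈11 Myr
    E0 = hole_energy_erg(0.14, 16.0, 180.0)      # ≈3.1e51 erg
    print(E0 / 1e51)                             # ≈3 supernovae at 1e51 erg each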
The southern region of high stellar density in the V-band emission has two bright components, centered on 11h 33m 28.7s, 49°14′15.9″ (see Figure <ref>). The component located closer to the center of Mrk 178's stellar disk, at 11h 33m 28.5s, 49°14′18.3″, does look as though it could be in the hole, as can be seen in Figure <ref>d. <cit.> calculate the ages of the stars in the V-band's bright southern stellar concentration to be 9 Myr. The estimated time needed to create the hole was calculated to be roughly 11 Myr, which, given the uncertainties in the hole age calculation, is roughly in line with the 9 Myr age of the stellar concentration.
Although interesting on its own, the potential hole does not easily explain the rest of the morphology and kinematics: the HI is rotating about two separate kinematic major axes (as discussed in Section <ref>) and the stellar component extends further than the VLA natural-weighted HI (as discussed in Section <ref>). The stellar component that extends further than the HI data is not located near the hole and is therefore not likely a result of the hole, if the hole is real. The two kinematic axes indicate that the gas has been significantly disturbed in the past.
§.§ Mrk 178 Has Recently Interacted With Another Galaxy
Two kinematic major axes and an asymmetric HI distribution (which does not cover the stellar disk) could indicate that Mrk 178 has recently interacted or merged with another galaxy. There are no known companions close to Mrk 178, and the GBT maps do not show any companion to Mrk 178 at the sensitivity and resolution of the maps. Therefore, it is unlikely that Mrk 178 has recently had an interaction with a nearby gas-rich companion. Instead, it is possible that Mrk 178 is interacting with a gas-poor companion, or that Mrk 178 is the result of a merger still in the process of settling into a regular rotation pattern.
However, if an interaction or merger has caused a significant asymmetry in the gas morphology of the disk and two kinematic major axes, then why does the outer V-band disk not show any signs of a morphological disturbance such as tidal tails, bridges, or significant asymmetries? The gaseous disk is collisional and thus has a short term memory of past events (on the order of one dynamical period). The stellar disk is non-collisional and therefore has a longer memory of past mergers than the gaseous disk. So, if the gaseous disk is still significantly kinematically and morphologically disturbed, then the outer regions of the older stellar disk (reaching 27 mag/arcsec^2) would likely still show significant signs of disturbance and yet it appears to be relatively elliptical in shape. Haro 36 is a BCD with a tidal tail visible in its outer V-band disk which has a limiting surface brightness of 25.5 mag/arcsec^2. Since Haro 36 has a distance of 9.3 Mpc, it is reasonable to assume that Mrk 178, at a limiting surface brightness of 27 mag/arcsec^2 and a distance of only 3.9 Mpc, would show signs of a tidal disturbance in its outer V-band disk if it existed. Since there are no signs of tidal disturbances in the outer V-band disk, Mrk 178 is not likely a merger remnant.
§.§ Mrk 178 Is Experiencing Ram Pressure Stripping
Mrk 178 could be interacting with intergalactic gas through ram pressure stripping. The HI intensity map in Figure <ref>a has a generally cometary appearance, with a bifurcated head of star formation and two gas peaks. Each head (HI peak) is also cometary and points in the same direction as the whole galaxy, particularly the northern one. These heads are likely an indication of a subsonic shock front since they are near a sharp density edge. To make this structure, as well as clear the gas from the southeastern part of the V-band disk, the galaxy could be moving in the southeast direction at several tens of km s^-1 into a low density IGM that may be too ionized to see in HI. If this motion also had a component toward us, then it could account for the strong velocity perturbation in the south (and the lack of velocity perturbation in the northwest), where all the HI gas redshifted relative to the rest of the galaxy is the gas being stripped from the southeast edge of the galaxy and moving away from us. The HI `hole' between the two peaks discussed in Section <ref> could be a hydrodynamical effect of the streaming intergalactic gas; the intergalactic gas could be moving between and around each peak as the galaxy moves through the IGM. Alternatively, the ram pressure from this motion could have made or compressed the head region, promoting rapid star formation there or at the leading shock front. That star formation could then have made the hole discussed in Section <ref>. This scenario is analogous to that in NGC 1569 <cit.>, where an incoming stream apparently compressed the galaxy disk and triggered two super star clusters, which are now causing significant clearing of the peripheral gas.
Some parameters of the interaction can be estimated from the pressure in the head region of Mrk 178 as observed in HI and starlight. From <cit.> the pressure in the interstellar medium (assuming comparable stellar and gas disk thicknesses) is approximately:
P ≈ (π/2) G Σ_gas (Σ_gas + (σ_g/σ_s) Σ_stars)
where Σ_gas and Σ_stars are the gaseous and stellar surface densities, σ_g and σ_s are the gaseous and stellar velocity dispersions, and G is the gravitational constant. In the second term of Equation <ref>, σ_g/σ_s is approximately equal to 1 because the gaseous and stellar velocity dispersions of dwarf irregular galaxies are similar <cit.>. The average HI column density in the southern star formation peak is ∼1×10^21 cm^-2 in Figure <ref>a, which is Σ_gas=11 M_⊙ pc^-2, or 2.3×10^-3 g cm^-2 when corrected for He and heavy elements by multiplying by 1.35. <cit.> calculate the properties of a stellar clump in the same region as the star formation peak seen in our Figure 1. They note that the southern clump of stars (see their Figure 2) has a mass of 1.3×10^5 M_⊙ inside its galactocentric radius of 390 pc. Assuming the mass is evenly spread over a circular region, Σ_stars=0.27 M_⊙ pc^-2, or 5.6×10^-5 g cm^-2. If we assume that this stellar surface density is also the stellar surface density of the interior of the southern clump, then the pressure in the southern clump is approximately P=5.7×10^-13 dyne cm^-2. Now we can calculate the ambient density of the IGM, ρ_IGM, assuming that the internal cloud pressure, P, is equal to the ram pressure, P_ram=ρ_IGM v_IGM^2, where v_IGM is Mrk 178's velocity relative to the intergalactic medium. In order for the IGM to sweep away the southeastern gas and perturb the velocity field on only one side of the galaxy, the velocity of Mrk 178 relative to the IGM has to be comparable to the internal rotational speed of the galaxy; so, the minimum velocity of Mrk 178 relative to the IGM is 20 km s^-1. Therefore, the IGM density that makes the ram pressure, P_ram, equal to the internal cloud pressure is:
ρ_IGM = 1.4×10^-25 (20 km s^-1 / v_IGM)^2 g cm^-3
or
ρ_IGM = 0.08 (20 km s^-1 / v_IGM)^2 atoms cm^-3.
The velocity of Mrk 178 relative to the IGM is likely much higher than 20 km s^-1, and closer to 100's of km s^-1; therefore, at v_IGM=100 km s^-1, ρ_IGM would be 0.0032 cm^-3. At this density, even if the IGM has a line-of-sight thickness of 5 kpc, and assuming that the gas is all neutral and not ionized (the IGM is probably at least partially ionized), the IGM would have a column density of 5×10^19 cm^-2, or less than 1σ in the VLA maps in Figure <ref>, meaning that the IGM would be lost in the noise. The GBT maps would be able to pick up this level of emission; however, the cloud is still expected to be partially ionized, dropping the column density of the HI. Also, the relative velocity between Mrk 178 and the IGM could be higher than 100 km s^-1, which would lower the required external column density. Therefore the IGM may not be visible in the GBT maps either.
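The chain of numbers in this section can be reproduced with the short script below (a sketch; the physical constants and the 5 kpc path length are the only assumptions beyond the surface densities stated above):

    G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
    M_H = 1.67e-24     # hydrogen atom mass, g
    PC_CM = 3.086e18   # cm in one parsec

    # Midplane pressure with sigma_g/sigma_s ~ 1 (pressure equation above)
    sigma_gas, sigma_stars = 2.3e-3, 5.6e-5            # g cm^-2
    P = 0.5 * 3.141593 * G * sigma_gas * (sigma_gas + sigma_stars)
    print(P)                                           # ≈5.7e-13 dyne cm^-2

    # IGM density for P_ram = P at v_IGM = 100 km/s
    v_igm = 100.0e5                                    # cm s^-1
    n_igm = P / v_igm**2 / M_H
    print(n_igm)                                       # ≈0.003 cm^-3

    # Column density for a 5 kpc line-of-sight path of purely neutral gas
    print(n_igm * 5.0e3 * PC_CM)                       # ≈5e19 cm^-2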
The relative motion of Mrk 178 and the IGM could have cleared the gas out of the southern part of the galaxy, produced a cometary appearance to the main clumps at the leading edge of the remaining interstellar gas in addition to the gas in the galaxy as a whole, and produced the large velocity perturbation and dispersion that are observed in the southern region.
§.§ Mrk 178 Has A Gas Cloud Running Into It
<cit.> suggest that the overall cometary shapes of galaxies, such as that seen prominently in Mrk 178's disk, can be explained by extragalactic gas impacting the disk of the galaxy. The gas in the south of Mrk 178's disk may have experienced a collision with a cloud in the southeast side of the galaxy, pushing the redshifted side of the galaxy to the west. This scenario would leave the gas in the northwestern edge of the disk relatively undisturbed. When the gas cloud impacted Mrk 178's disk, it would have created a dense shock front in the gas as it moved gas from the southeastern edge of the disk to the west. Therefore, it is possible that the impacting gas cloud is showing up as a region of high density in the south of Mrk 178's disk. The hole discussed in Section <ref> could have also been created by the cloud collision. A gas cloud running into Mrk 178's disk is a situation that is similar to that of ram pressure stripping discussed in Section <ref>. The main difference between these two situations is that ram pressure stripping is a steady pressure and a gas cloud impacting the disk is a short lived pressure.
The location of the collision would likely be indicated by the morphological peak in HI to the south. To the west of and around the location of the morphological peak, Mrk 178's velocity dispersions are slightly increased, near 11h 32m 29s, 49°14′15″, to about 13-16 km s^-1 (the surrounding regions have dispersions of ≲10 km s^-1), indicating that the gas has been disturbed in this region. Assuming that the northwest side of Mrk 178's disk has been relatively undisturbed by the impacting gas cloud, we can assume that the southeast side of the disk used to rotate with velocities redshifted with respect to Mrk 178's systemic velocity. If a gas cloud impacted the disk opposite to the rotation, we would expect a large increase in velocity dispersion and we would also expect the redshifted velocities to generally get closer to the systemic velocity of the galaxy as the disk gas gets slowed by the impact. However, the redshifted gas in the southern end of the disk has a velocity relative to the systemic velocity of 10-16 km s^-1, which is similar to the blueshifted gas in the northwestern edge of the disk. This indicates that the gas cloud struck Mrk 178's disk either co-rotating with it or in a radial direction parallel to the plane, pushing the gas that was in the southeast part of the disk west and away from us relative to the plane of the galaxy.
If the impacting gas cloud had enough energy to move the eastern edge of the disk to the west, then the binding energy of the gas that was pushed west will be approximately equal to the excess energy left behind by the impacting cloud in the southern region of the disk. We will assume for simplicity that the angular momentum of the system has a relatively small effect on the energies calculated, since the forced motion of the gas in the disk is nearly the same as the rotation speed, so the gas in the disk of the galaxy does not have enough time to turn a significant amount. The binding energy of the gas that was originally in the southeast end of the disk would be about 0.5 m_se v^2, where m_se is the mass of the gas that was in the southeast edge of the galaxy (that has now been pushed west) and v is the rotational velocity that the gas in the southeast end of the disk had before the impact. Assuming that the galaxy was once symmetric, we can use the mass of the northwest `extension' (see Section <ref>) as the approximate m_se, 7.5×10^5 M_⊙, and the observed velocity of the edge of the disk in the northeast, corrected for inclination <cit.>, to get the rotational velocity of the gas, 18 km s^-1. Using these numbers, we estimate that the binding energy of the gas that was in the southeast of the disk is 1.2×10^8 M_⊙ km^2 s^-2, or 2.4×10^51 erg.
Next we can estimate the excess energy in the southern clump: 0.5 m_sc(σ_sc^2-σ_a^2), where m_sc is the mass of the gas in the southern clump of high density (now a mixture of the gas originally in the southeastern edge of the disk and the gas cloud that ran into the disk), σ_sc is the velocity dispersion of the gas in the southern clump of high density, and σ_a is the ambient velocity dispersion. ispec was used to calculate the mass of the southern clump of high density by using a box that contained emission south of the Declination 49°14′18.5″, resulting in an HI mass of 2.3×10^6 M_⊙. The velocity dispersion of the dense southern clump is 13 km s^-1 and the velocity dispersion of the ambient gas is 9 km s^-1. Using these numbers, the excess energy in the southern dense clump is 1.0×10^8 M_⊙ km^2 s^-2, or 2.0×10^51 erg. This energy is comparable to the estimated binding energy of the gas that was originally in the southeast of the disk. Therefore, it is possible that the southeastern edge of Mrk 178's disk was struck by a gas cloud that pushed the gas west and away from us, and increased the velocity dispersion.
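A compact numerical check of this energy comparison, using the values quoted above (the only added assumption is the M_⊙ km^2 s^-2 to erg conversion):

    ERG_PER_MSUN_KM2_S2 = 1.989e43    # 1 M_sun km^2 s^-2 in erg

    # Binding energy of the displaced southeastern gas: 0.5 * m_se * v^2
    E_bind = 0.5 * 7.5e5 * 18.0**2                 # ≈1.2e8 M_sun km^2 s^-2
    print(E_bind * ERG_PER_MSUN_KM2_S2)            # ≈2.4e51 erg

    # Excess energy in the southern clump: 0.5 * m_sc * (sig_sc^2 - sig_a^2)
    E_excess = 0.5 * 2.3e6 * (13.0**2 - 9.0**2)    # ≈1.0e8 M_sun km^2 s^-2
    print(E_excess * ERG_PER_MSUN_KM2_S2)          # ≈2.0e51 erg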
§ RESULTS: VII ZW 403
§.§ VII Zw 403: Stellar Component
VII Zw 403's FUV and V-band data are shown in Figure <ref>. The FUV data were taken with GALEX and the V-band data were taken with the Lowell Observatory 1.8m Perkins Telescope <cit.>. The FUV and V-band surface brightness limits are 29.5 and 27 mag arcsec^-2, respectively <cit.>. The FUV morphology is similar to the inner morphology of the V-band.
§.§ VII Zw 403: VLA Morphology
VII Zw 403's natural-weighted integrated intensity map as measured by the VLA is shown in Figure <ref>a. The emission has a morphological major axis in the north-south direction and is centrally peaked. There is also some detached, tenuous HI emission just to the south of the main disk that may be associated with VII Zw 403. The robust-weighted integrated intensity map in Figure <ref>a reveals some structure in the inner region of the HI. Just to the north of the densest region, the fourth contour from the bottom reveals an HI structure that curves toward the east. This structure was also seen in <cit.>. The FUV and V-band stellar contours are both plotted over the colorscale of VII Zw 403's robust-weighted integrated intensity map in Figures <ref>b and <ref>c, respectively. The highest isophotes from the FUV and V-band data are located on the highest HI column density in projection and extend just north of it.
§.§ VII Zw 403: VLA Velocity and Velocity Dispersion Field
The VLA velocity field of VII Zw 403 is shown in Figure <ref>b. The kinematics of the east side of the galaxy resemble solid body rotation with a major kinematic axis that does not align with the morphological major axis. The kinematics of the west side of the galaxy are generally disturbed, with some possible organized rotation in the south. The velocity dispersion field is shown in Figure <ref>c. The dispersions reach near 17 km s^-1, with the highest dispersions being centrally located.
§.§ VII Zw 403: GBT Morphology And Velocity Field
VII Zw 403's integrated intensity map, as measured with the GBT, is shown in the left side of Figure <ref>. The tenuous emission beyond VII Zw 403's HI emission (located at the center of the map) is from the Milky Way. VII Zw 403's velocity range overlaps partially with the velocity range of the Milky Way. These GBT maps were integrated to allow some of the Milky Way to appear in order to search as many channels as possible for any extended emission or companions near VII Zw 403. Yet, after using multiple velocity ranges for the integration of the data cube and inspection of individual channels, no companions or extended emission from VII Zw 403 were found before confusion with the Milky Way emission became a problem. Therefore, VII Zw 403 does not appear to have any extra emission or companions nearby at the sensitivity of this map. VII Zw 403's GBT velocity field was also inspected; however, there was no discernible velocity gradient.
§.§ VII Zw 403: Mass
VII Zw 403's total HI mass detected in the VLA natural-weighted data is 4.2×10^7 M_⊙, while its mass from the GBT data is 5.1×10^7 M_⊙. The Milky Way emission could be contributing to some of the mass measured from the GBT data; however, that is unlikely: the velocity range used in ispec was the same as that used to make the integrated intensity map (see Figure <ref>) and the box in which the flux was summed tightly enclosed the VII Zw 403 emission. The VLA was able to recover 82% of the GBT mass, although VII Zw 403's GBT mass should be higher since some channels that contain emission from VII Zw 403 were excluded from the mass measurement to avoid confusion with the Milky Way.
§ DISCUSSION: VII ZW 403
The most noticeable morphological peculiarity in VII Zw 403's VLA data is the detached gas cloud to the south of the disk in the natural-weighted integrated intensity map (Figure <ref>). Because this feature appears in more than three consecutive channels in the 25″×25″ convolved natural-weighted data cube, as can be seen in Figure <ref>, it is unlikely to be noise in the map and is therefore considered real emission. However, Figure <ref> shows that there is overlap in velocity between VII Zw 403 and HI in the Milky Way. To make sure that the southern detached cloud is not Milky Way emission in the foreground of VII Zw 403, the channels that contained the southern detached cloud were checked for Milky Way emission (see Figure <ref>); none was found. We are therefore confident that the cloud is a real feature connected with VII Zw 403.
Kinematically, the east side of the galaxy has rotation that resembles solid body rotation, while the velocity field on the west side shows a break in the isovelocity contours from northeast to southwest. Strikingly, the velocity dispersions in the natural-weighted data show higher values along this break. The alignment of the two features can be seen in Figure <ref>, where the velocity dispersion field contours have been plotted over the colorscale of the velocity field.
A P-V diagram through the western velocity disturbance, using the natural-weighted data cube, is shown in Figure <ref>. The gas goes from blueshifted to redshifted velocities in a solid body manner as the P-V diagram moves towards positive offsets. However, in the P-V diagram there is a thin horizontal streak of high brightness from 8″ to 58″ and -110 to -100 km s^-1, as outlined in the black box. Most of the rest of the tenuous gas at this angular offset range appears to be above -100 km s^-1, consistent with the rest of the gas on the west side rotating in a solid body fashion with the east side of the disk. It is possible that this density enhancement is an external gas cloud in the line of sight that is disturbing the velocity field of VII Zw 403.
Since the gas at anomalous velocities appears at higher negative velocities than the rest of the gas in the disk in the same line of sight (see Figure <ref>), and since the velocity disturbance is highly directional (northeast to southwest), the emission associated with it morphologically extends from the southwestern edge of the disk when individual velocity channels are viewed (as in the left image of Figure <ref>). Both to remove this anomalous gas component from the line of sight and also to examine whether it could be an external gas cloud, the AIPS task blank was used to manually mark out the regions containing the emission. This was done twice: once to blank out the emission from the potential cloud, resulting in a data cube just containing the disk emission; and a second time using the first blanked data cube as a mask to retain just the cloud emission. A map of an unblanked channel and a map of the same channel with the potential cloud emission blanked are shown in Figure <ref> as an example.
The results of blanking are shown in Figure <ref>. Figure <ref>a is the velocity field of the galaxy without most of the emission from the gas cloud, Figure <ref>b is the velocity field of the gas cloud, and Figure <ref>d compares the two by showing the contours of the first and the colorscale of the second. Figure <ref>c is the original velocity field from Figure <ref> for comparison. With most of the gas cloud emission removed, the velocity field on the west side of the galaxy now generally follows the solid body trend seen on the east side of the galaxy. The emission from the gas cloud also has a generally smooth transition in velocities. The HI mass of the gas cloud in the line of sight of VII Zw 403 is 7.5×10^6 M_⊙, or 18% of the total VLA HI mass measured. The projected length of the cloud at the distance of VII Zw 403 is 4.5 kpc when the curves of the cloud are included. The V-band image of VII Zw 403 (a cropped version is shown in Figure <ref>) extends to the new gas cloud presented in Figure <ref>; however, it does not show any emission in that region above the limiting surface brightness of 27 mag arcsec^-2.
The remaining velocity field of VII Zw 403's disk shows that the disk does not appear to have a kinematic major axis aligned with the morphological major axis of the gaseous or stellar disk. The PA of the kinematic major axis is roughly 235° (estimated by eye), while the stellar morphological major axis is at 169.2° <cit.>. The stellar morphological axis matches the morphological disk major axis well without the gas cloud, as can be seen in Figure <ref>a. The misalignment of the stellar morphological and kinematic major axes indicates that the gaseous disk of VII Zw 403 is disturbed, possibly from past gas consumption <cit.>, a past interaction, or a past dwarf-dwarf merger. It is also possible that VII Zw 403 is highly elongated or bar-like along the line of sight, resulting in an offset kinematic major axis.
Below we discuss two main possible explanations for the curved gas cloud in Figure <ref>b.
§.§ Gas Expelled From A Hole
The northern edge of the gas cloud in Figure <ref>b spatially lines up well with a potentially stalled hole. <cit.> suggested that the structure just to the north of the densest region, denoted by a white circle in Figure <ref>a, may be a stalled hole that has broken out of the disk. The alignment of these two features can be seen in Figure <ref>b, where the robust-weighted intensity map colorscale has been plotted with the contours of the intensity map of the gas cloud in Figure <ref>b and the location of the potential hole has been denoted by a white circle. If a hole has broken through the disk, then that hole could be ejecting material into the line of sight of the western side of the disk. The ejected material could then be distorting the velocity field and creating the higher velocity dispersions seen in the galaxy. The velocity dispersions could also, in this case, be the result of disturbed gas on both the west and the east side of the disk, since the hole could have been expanding into the other side of the disk. The gas cloud in Figure <ref>b does have a large mass, at 18% of the total mass for VII Zw 403, but, according to dwarf galaxy models in <cit.>, dwarf galaxies can have most of their gas expelled, at least momentarily, beyond the stellar disk due to large amounts of stellar feedback (see their Figure 2). Therefore, this cloud is not necessarily too large to be material ejected by stellar feedback. However, it should be noted that the data cubes were searched for holes as part of the LITTLE THINGS project and no holes in VII Zw 403 passed the hole quality checks outlined in <cit.> (Pokhrel et al., in prep.), including the stalled hole suggested by <cit.>. Nevertheless, in the interest of following up on every potential explanation for VII Zw 403's distorted velocity field, the possibility of a stalled hole ejecting material in the line of sight of the western side of the disk is explored below.
The velocities of the gas cloud in Figure <ref>b indicate that the most recently ejected material (closest in projection to the potential hole) is blueshifted with respect to the systemic velocity of VII Zw 403. This indicates that the material would be in the foreground of the disk (being pushed towards us). The hole would eject material nearly perpendicular to the disk into the foreground of the west side of the galaxy, orienting the disk so that the east side is closest to us and the west side is furthest away. Such an orientation is perpendicular to that expected from the velocity contours.
The rough estimates made by <cit.>, using the equations in <cit.>, show that this cavity would have to be made by 2800 stars with masses greater than 7 M_⊙. <cit.> did not see evidence for this very large number of stars and suggested that the cavity may have instead been made over a long period of time, or was made through consumption when the star formation rate was higher around 600-800 Myr ago. If the cavity is a hole that broke through the disk roughly 600-800 Myr ago and the foreground gas cloud is gas that has been expelled from the disk by a now stalled hole, then the hole would require an outflow velocity of only 8 km s^-1 to have moved the gas 4.5 kpc. This is a reasonable rough estimate for an outflow velocity. However, when looking at the velocities of the gas cloud, the tip nearest to the potentially stalled hole has a velocity of about -125 km s^-1 and the disk has a velocity of about -100 km s^-1 at the same location. This is a velocity difference of 25 km s^-1, which is three times larger than our calculated outflow velocity. With an outflow velocity this high, the approximate time that the gas would require to move 4.5 kpc would be 180 Myr, when the SFR of the disk was lower <cit.>. The velocity field of the underlying disk does still appear disturbed in Figure <ref>a, so it is likely that the gas cloud extends further to the east to a different velocity. Yet, it would be expected to continue the velocity trend seen in the rest of the gas cloud, since there is a clear gradient of higher negative velocities towards the northeast tip of the gas cloud. Therefore, the gas cloud's velocity is unlikely to get closer to the velocity of the disk near the potentially stalled hole.
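The travel-time argument is simple enough to verify directly (a sketch; only the kpc and Myr conversions are assumed):

    KM_PER_KPC = 3.086e16
    S_PER_MYR = 3.156e13

    def travel_time_myr(distance_kpc, v_kms):
        # Time for gas to cross the projected cloud length at constant speed
        return distance_kpc * KM_PER_KPC / v_kms / S_PER_MYR

    print(travel_time_myr(4.5, 8.0))    # ≈550 Myr, of order the 600-800 Myr interval
    print(travel_time_myr(4.5, 25.0))   # ≈180 Myr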
The velocity dispersions of VII Zw 403's disk without the gas cloud, and the velocity dispersions of the gas cloud alone, are shown in Figure <ref>. In Figure <ref>a, of the underlying disk velocity dispersions, we note that the higher velocity dispersions seen in Figure <ref>d are still there, with the exception of some decrease where the cloud is in the line of sight. However, the southwest edge of the disk has higher velocity dispersions than in Figure <ref>d; this may be due to the removal of too much gas from the edge of the disk, resulting in the edge of the disk having velocities in Figure <ref> that are not the true values. In Figure <ref>b the velocity dispersions of the gas cloud are generally 10 km s^-1 or less. Such low velocity dispersions are inconsistent with turbulent gas that has been ejected from the disk by a hole with outflow velocities of 25 km s^-1. For example, the dwarf irregular galaxy NGC 4861 has three outflow regions with expansion velocities of 25 km s^-1 and velocity dispersions of 20 km s^-1 <cit.>. The low velocity dispersions in VII Zw 403's gas cloud are instead consistent with cold gas in the line of sight of VII Zw 403. Considering all of the evidence against outflow from an HI hole, this gas cloud has probably not been expelled by supernova explosions and stellar winds.
§.§ An External Gas Filament
Another possible explanation for the gas cloud in Figure <ref>b is an external cloud of gas (potentially IGM) in the line of sight of VII Zw 403. An external gas cloud would explain VII Zw 403's high HI mass-to-light ratio relative to other BCDs <cit.>. With a mass of 7.5×10^6 M_⊙, the gas cloud is a reasonable size to be a starless gas cloud near the disk with IGM origins <cit.>.
The integrated intensity map of the external gas cloud is shown in Figure <ref>c in colorscale, with the contours of the remaining disk's integrated intensity map. The cloud has an HI column density peak near a declination of 78°59′. It is likely that some of the southwestern edge of the disk was included in the map of the line-of-sight gas cloud during the blanking process, which could explain the entirety of the cloud's peak in HI. This can be seen in Figure <ref>a, as the southwestern edge of the disk looks as though it may be missing; but it is also possible that the peak in the colorscale of Figure <ref>c is, at least in part, the peak of a gas cloud being consumed by VII Zw 403.
It is possible that some of the external gas cloud has not been removed from the main disk in Figures <ref> and <ref> due to the nature of the manual identification of the cloud or due to overlapping velocities in the disk and gas cloud. For example, some of the structure that <cit.> labeled as a potentially stalled hole denoted by the white circle in Figure <ref>b can still be seen in the contours of the remaining disk in Figure <ref>c as a small second peak to the north of the dense region. The structure inside and on the north and west edges of the white circle in Figure <ref>a lines up well with the external gas cloud's northern edge and could actually be resolved structure of the external gas cloud.
The velocities of the gas cloud, in this case, cannot tell us which side of the disk (foreground/background) the cloud is on. If the northeastern edge of the cloud is closest to the disk, then the gas cloud would be in the background of the disk falling towards it and potentially impacting it. If the southeastern edge of the cloud is closest to the disk, the cloud is then falling away from us and towards the disk in the foreground of the disk.
The velocity dispersions are nearly constant along the east-west axis in Figure <ref>. This implies that if there is a foreground/background gas cloud extending across VII Zw 403's kinematic major axis, then that cloud must have a velocity gradient similar to that of VII Zw 403's disk (with offset velocities in the same line of sight). Assuming that the velocity gradient of the gas cloud seen in Figure <ref> continues into the line of sight of the disk, then it seems plausible that the velocity dispersion could stay nearly constant through the kinematic major axis of VII Zw 403. Also, if we assume that the cloud is falling into VII Zw 403, then the gas cloud would acquire about the same velocity gradient as VII Zw 403's disk.
It is possible that the velocity dispersions on the east side of the galaxy are from the external gas cloud running into the disk and pushing gas in the center of the disk from the west, which in turn, is pushing on gas on the east side of the disk. If this is the case, then the northeastern end of the external gas cloud would be impacting the disk and the gas cloud would be falling towards us and towards the disk from behind the disk.
The location of the star forming region may also have been affected by the external gas cloud running into the disk over some time. The densest star forming region, seen in Figure <ref>, is offset to the east of the center of the main V-band disk. When the gas cloud impacted the disk, it would push gas from behind the disk towards us and to the east. This may have caused star formation in the compressed gas that is located in the east of the disk and extending out towards us. If this is the case, then the new stars would visually be in the line of sight of the far side of the stellar disk since they are in the foreground and presumably not at the edge of the disk. The V-band image in Figure <ref> thus shows that the west side of the disk would be the near side of the disk.
As the external gas cloud continues to be consumed by VII Zw 403, it could trigger another burst of star formation in the galaxy. The cloud has a mass of 7.5×10^6 M_⊙, and <cit.> suggest that a mass of ≥10^7 M_⊙ is required for a burst of star formation. Considering the uncertainties associated with distance calculations, the gas cloud in the line of sight of VII Zw 403 is potentially large enough to trigger another burst of star formation in VII Zw 403 if it strikes the disk retrograde to its rotation, as discussed in <cit.>.
The gas cloud may also be the remnants of a past merger. However, without tidal arms or double central cores, there is not much evidence for something as dramatic as a merger in VII Zw 403.
§ RESULTS: NGC 3738
§.§ Stellar Component
The FUV and V-band data for NGC 3738 are shown in Figure <ref>. The V-band data were taken with the 1.1 m Hall Telescope at Lowell Observatory, and the FUV data were taken with GALEX <cit.>. The FUV and V-band surface brightness limits are 30 and 27 mag arcsec^-2, respectively <cit.>. The FUV and V-band data have very similar morphologies, with the V-band disk being more extended than the FUV disk.
§.§ VLA Morphology
NGC 3738's natural-weighted integrated intensity map as measured by the VLA is shown in Figure <ref>a. There are several regions of HI emission surrounding the disk. The robust-weighted map is shown in Figure <ref>a. Figures <ref>b and <ref>c show the colorscale of the robust-weighted integrated intensity map and contours of the FUV and V-band data, respectively. The V-band data stretch beyond the gaseous disk even in the natural-weighted map (Figure <ref>c).
§.§ VLA Velocity And Velocity Dispersion Field
The intensity-weighted velocity field is shown in Figure <ref>b. The disk is participating in near solid body rotation, with some small kinks in the isovelocity contours, as can be seen in Figure <ref>. The velocities of the separate regions of emission that surround the disk are all near the systemic velocity of the galaxy of 229 km s^-1, indicating that they could be associated with NGC 3738. The intensity-weighted FWHM of the line profiles is shown in Figure <ref>d. Velocity dispersions reach up to 35 km s^-1 and are above 20 km s^-1 throughout most of the disk.
§.§ GBT Morphology And Velocity Field
NGC 3738's HI integrated intensity map, as measured with the GBT, is shown in the left side of Figure <ref>. NGC 3738 is at the center of the map, surrounded by some noise and part of another galaxy to the south. A close up of NGC 3738's GBT velocity field is shown in the right side of Figure <ref>. There is a gradient evident in NGC 3738's emission from bottom left to top right, which matches the gradient direction seen in the VLA maps. NGC 3738 does not appear to have any companions at the sensitivity of the GBT map.
§.§ Mass
The total HI mass of NGC 3738 measured from the VLA emission, including the separate regions of emission, is 9.5×10^7 M_⊙, and the total HI mass measured from the GBT emission for NGC 3738 is 1.7×10^8 M_⊙. The VLA was able to recover 56% of the GBT mass. The masses of the individual regions of gas external to the disk in NGC 3738's VLA data add up to 1.3×10^7 M_⊙, or 14% of the total VLA mass.
§ DISCUSSION: NGC 3738
NGC 3738 is morphologically and kinematically disturbed. In the VLA data, NGC 3738 has high velocity dispersions in its disk and several gas clouds around its disk, and it does not appear to have an extended, tenuous outer HI disk like those seen in other BCDs <cit.>. The mass measurements from the GBT maps, however, indicate that there may be a significant amount of tenuous HI surrounding the disk. Additionally, the stellar morphological major axis, as measured by <cit.>, is 179.6°, which is offset from the kinematic major axis, measured to be 115° (estimated by eye); however, the inner isophotes of the V-band image do appear to be more closely aligned with the kinematic major axis of NGC 3738 (see Figures <ref>b and <ref>c).
The small regions of emission around the disk of NGC 3738 in the natural-weighted VLA maps could be noise. As discussed in Section <ref>, the 25″×25″ convolved cube can be checked to see if these features are real (i.e., whether they appear in three or more consecutive channels). In Figure <ref>, the cloud features to the north and west of NGC 3738's main body are visible in more than three consecutive channels each and therefore are likely real.
With several regions of separate emission surrounding the disk, it is possible that the high dispersions are due to a gas cloud(s) in the line of sight of NGC 3738's disk. P-V diagrams were made to search the disk for kinematically distinct gas clouds. Figure <ref> is a P-V diagram of a slice that generally traces the kinematic major axis and shows the only kinematically distinct gas cloud that was found in the line of sight of the disk. The overall trend seen in the gas through this slice is an increase in velocity moving towards positive offsets. However, at an angular offset range of 10″ to 30″ there is gas with a velocity decreasing from 220 km s^-1 to 190 km s^-1 (the emission circled in the P-V diagram and approximately located on the black segment of the pink slice in Figure <ref>). This kinematically distinct gas cloud could be a foreground/background gas cloud or gas in the disk. It has a large velocity range of 30 km s^-1 and is just below the velocities of the surrounding separate regions of emission also seen in the VLA map. The approximate location and extent of the cloud are also outlined with a magenta ellipse in Figures <ref>b and d. This kinematically distinct cloud is likely causing the distortion of the isovelocity contours seen in the same region and some of the high velocity dispersions seen in the disk of the galaxy. The kinematically distinct gas in Figure <ref> cannot, however, account for the high velocity dispersions seen in the northeast side of the galaxy.
There are no tidal tails or bridges apparent in the maps of NGC 3738; however, it is possible that NGC 3738 is an advanced merger. If NGC 3738 is an advanced merger, then the tidal tails in the HI and stellar disk may have had enough time to dissipate, or strong tidal tails may not have formed, depending on the initial trajectories of the two merged galaxies <cit.>. Most of the disk, at least, would have had time to get back onto a regular rotation pattern, as seen in Figure <ref>. Also, mergers can result in efficient streaming of HI towards the center of the galaxy <cit.>, causing a central starburst and, perhaps in NGC 3738's case, leaving behind a tenuous outer HI pool that is detected by the GBT and not the VLA. Some of the tenuous HI detected by the GBT and not the VLA may also be remnants left behind by the progenitor galaxies as they merged. The gas clouds surrounding the disk in Figure <ref> could also be material that was thrown outside of the main disk during the merger. These gas clouds could then re-accrete back onto the galaxy later. Although it is possible that NGC 3738 is an advanced merger, the lack of obvious tidal tails in either the stellar or gaseous disk means that we do not have evidence to state with certainty that NGC 3738 is an advanced merger.
Unique features of NGC 3738, such as the gas clouds to the west and north of the main disk, an optical disk that extends beyond the main disk in the VLA natural-weighted intensity maps, and high velocity dispersions (see Figure <ref>), could be an indication that ram pressure stripping is taking place. Like Mrk 178, NGC 3738 belongs to the Canes Venatici I group of galaxies; therefore, it may be being stripped by tenuous, ionized gas (see Section <ref>). However, unlike Mrk 178, the V-band emission of NGC 3738 stretches beyond the HI disk on all sides. Therefore the IGM would need to sweep the gas uniformly out of the edges of the disk, meaning that NGC 3738 would be moving through the IGM nearly face on. Ram pressure stripping may also have left NGC 3738 with a tenuous outer HI pool which is being detected by the GBT, but which may be too tenuous to be detected in the VLA maps, explaining why the GBT maps yield nearly twice the mass of the VLA maps.
It is also possible that NGC 3738's gas has been pushed into its halo by the outflow winds created by a burst of star formation, resulting in the HI emission to the west and the north of the main disk. Three HI holes were found in NGC 3738's disk that met the quality selection criteria outlined in <cit.> and have quality values (discussed in Section <ref>) of 6-7. All three holes are of the same type: they appear to have blown out both sides of the disk (Pokhrel, in prep.). Thus, we are unable to get any accurate estimates of expansion velocities and other properties of the holes. Feedback can explain not only the holes and emission outside of the main disk, but also the stellar disk. Models presented in <cit.> show that the stellar disk can expand during periods of strong feedback along with the gaseous disk (see their Figure 2). Therefore, the outer regions of NGC 3738's gaseous disk may have such low density from the expansion that they do not appear to cover NGC 3738's expanded stellar disk in the natural-weighted maps. The regions of emission to the north and west of the main disk could then reaccrete onto the disk at later times, resulting in another period of increased star formation activity.
§ COMPARISON TO OTHER DWARF GALAXIES
All three of these BCDs appear isolated with respect to other galaxies, but their VLA data indicate that they have not likely been evolving in total isolation, whether from other galaxies in the past or from nearby gas clouds (perhaps with the exception of NGC 3738, which may instead have experienced strong stellar feedback). This is the same conclusion reached in previous papers of this series: <cit.> and <cit.>. Haro 29 and Haro 36 are BCDs that appear to be advanced mergers or to have interacted with a nearby companion <cit.>. Like Mrk 178, VII Zw 403, and NGC 3738, there are no companions in the GBT maps (shown in Figure <ref>) of Haro 29 and Haro 36 that are clearly interacting with the BCDs at the sensitivity of the map (these data were taken after that work was published). Haro 36's GBT map appears very suggestive of a potential interaction between NGC 4707 and Haro 36; however, at the sensitivity of the map, this interaction cannot be confirmed because there is no continuous bridge connecting them. Also, Haro 36 and NGC 4707 are 530 kpc apart with a relative velocity of 34 km s^-1. Therefore, these galaxies are not likely to have interacted, since they would require ∼15 Gyr (longer than the age of the universe) to travel that far away from each other (assuming that their line-of-sight velocity difference is comparable to their transverse velocity). This could indicate that Haro 29 and Haro 36 are advanced mergers, that their companion is gas poor, or that they formed in some other manner. IC 10 is another BCD that <cit.> conclude is an advanced merger or is accreting IGM, as indicated by an extension of HI to the north <cit.>. <cit.> also found evidence of tidal features and no potential companion in the BCD II Zw 40. Several surveys have shown that external gas clouds exist around several other BCDs <cit.>; however, it is not clear if these clouds are being expelled from the galaxies or if they are being accreted, as is thought to be occurring in the dwarf irregulars NGC 1569 <cit.> and NGC 5253 <cit.>.
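The ∼15 Gyr travel time quoted here follows directly from the projected separation and the velocity difference (an order-of-magnitude estimate, assuming the transverse velocity is comparable to the line-of-sight difference): t ≈ d/v ≈ (530 kpc × 3.1×10^16 km kpc^-1)/(34 km s^-1) ≈ 4.8×10^17 s ≈ 15 Gyr.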
Other dwarf galaxies in LITTLE THINGS can also provide insight into characteristics of BCDs. A general feature that is prominent in all of the BCDs (perhaps with the exception of NGC 3738, unless the external gas clouds are counted) is significant morphological asymmetries in the gaseous disk <cit.>. This is not an uncommon feature in the LITTLE THINGS sample overall; several galaxies also have morphological asymmetries: DDO 69, DDO 155, DDO 167, DDO 210, DDO 216, NGC 4163, LGS 3, and DDO 46 <cit.>. Five of these galaxies have low star formation rates: DDO 46, DDO 69, DDO 210, DDO 216, and LGS 3, ranging from log SFR_D of −4.10 to −2.71 <cit.>. The remaining three dwarf irregulars with significant morphological asymmetries, DDO 155, DDO 167, and NGC 4163, do have higher star formation rates of −2.28 to −1.41 <cit.>. DDO 155 and NGC 4163 have even been labeled as BCDs in the literature <cit.>. However, morphological asymmetries in a dwarf galaxy do not appear to imply high star formation rates, a defining feature of BCDs. This is not unexpected, as dwarf irregular galaxies are named for their irregular shape. DDO 216, for example, has a cometary appearance in HI <cit.>, but it has one of the lowest star formation rates, log SFR_D=−4.10. Therefore morphological asymmetries alone, including cometary appearances, do not imply increased star formation rates.
Other indications of the disturbed gas in these BCDs are the second-moment maps. Haro 36, IC 10, and NGC 3738 all have gas velocity dispersions reaching at least 15-20 km s^-1 throughout large regions of their disks. Two other dwarf galaxies in the LITTLE THINGS sample also have velocity dispersions in excess of 15 km s^-1: NGC 1569 and NGC 2366 <cit.>. Both of these dwarf irregular galaxies have heightened star formation rates. NGC 1569 is believed to have a gas cloud impacting the disk <cit.>. NGC 2366 is a dwarf irregular with an HI kinematic major axis that is offset from its stellar and HI morphological axes, a supergiant H II region, and is occasionally referred to as a BCD <cit.>. The increased velocity dispersions in NGC 2366's disk do not appear to be associated with the large star forming regions in the disk, and therefore their cause is unknown <cit.>. Haro 36, IC 10, NGC 3738, and NGC 1569 all have higher velocity dispersions throughout their disks, including regions of star formation. All four of these dwarf galaxies are likely interacting with their environment through consumption of external gas, mergers, or ram pressure stripping <cit.>; it is therefore possible that their interaction with the environment is the cause of their high velocity dispersions and that their star formation also contributes to these high dispersions.
Two distinct kinematic major axes are seen in the disks of Haro 36, Mrk 178, and VII Zw 403 (and arguably also in Haro 29), but not in other dwarf galaxies in the LITTLE THINGS sample <cit.>. This feature is usually indicative of a recent disturbance to the gas. For Haro 36, Mrk 178, and VII Zw 403, the two kinematic major axes indicated that they were the result of a merger, were experiencing ram pressure stripping, and/or had an impacting external gas cloud <cit.>. Strong interactions with the environment may therefore be playing an important role in fueling the bursts of star formation in BCDs.
§ CONCLUSIONS
Mrk 178, VII Zw 403, and NGC 3738 all have disturbed kinematics and morphology. Both VII Zw 403 and Mrk 178 have a significant offset of their kinematic major axis from their stellar morphological major axis, while both Mrk 178 and NGC 3738 have stellar disks that extend beyond their natural-weighted HI maps. This indicates that all three galaxies have been significantly perturbed in the past.
Mrk 178 has strange stellar and HI morphologies. At the sensitivity of the VLA maps, Mrk 178 does not have HI extending as far as the stellar component indicated in the V-band. It is possible that Mrk 178's VLA morphology is dominated by a large hole and an extension to the northwest. However, the hole shape in the VLA data may also be a red herring: a hole cannot easily explain why the northwest region of the disk appears to be rotating about a kinematic major axis that is nearly perpendicular to the kinematic major axis of the rest of Mrk 178's body. Another scenario that could explain the morphology and kinematics of Mrk 178's VLA maps is a gas cloud running into the galaxy. This gas cloud would be running into the disk from the southeast and pushing gas in the disk to the west. Since Mrk 178 appears to be a low-mass galaxy, it could be disturbed very easily. Another possible explanation for Mrk 178 is that it is experiencing ram pressure stripping by ionized intergalactic medium. Ram pressure stripping would explain the lack of gas in the southeast region of the galaxy and the overall cometary morphology in Mrk 178.
VII Zw 403 appears to have a gas cloud in the line of sight that is rotating differently from the main body. The low velocity dispersions in the cloud point to it being a cold gas cloud. The gas cloud is likely an external cloud impacting the main disk from behind and pushing it to the east. The underlying disk of VII Zw 403, once most of the emission associated with the cloud in the line of sight has been removed, has a kinematic major axis that is misaligned with the stellar and HI morphological major axes. It is possible that VII Zw 403 is elongated or bar-like along the line of sight, resulting in the appearance of an offset kinematic major axis.
NGC 3738 is a BCD with a stellar V-band disk that extends further than the VLA natural-weighted HI disk. The GBT maps also pick up almost two times the HI mass of the VLA natural-weighted maps. It is therefore possible that NGC 3738 has a tenuous extended HI halo. It does not have any nearby companions at the sensitivity of the GBT map; therefore, it has not recently interacted with a currently gas-rich galaxy. There are also multiple gas clouds around the disk that are moving at approximately the systemic velocity of NGC 3738. These gas clouds may have been pushed outside of NGC 3738's main disk by stellar feedback. It is also possible that the gas clouds external to the disk were ejected from the main disk during a past merger that NGC 3738 has undergone. Another possibility is that NGC 3738 is experiencing face-on ram pressure stripping by ionized IGM.
Whether they have had their starbursts triggered through mergers, interactions, or consumption of IGM, it is apparent that each BCD is different and requires individual assessment to understand what has happened to it. The same may be true for modeling what each BCD's past and future may look like. If BCDs have been triggered through different means, then there is no guarantee that they will evolve into/from the same type of object. Different triggers for BCDs may also explain why it is so difficult to derive strict parameters to define this classification of galaxies <cit.>. If they have been triggered differently, then their parameters likely span a much larger range than if they were all triggered in the same manner.
We would like to acknowledge Jay Lockman for his valuable conversations and help with Mrk 178's data. We would also like to acknowledge Nick Pingel, Spencer Wolfe, and D.J. Pisano for their code and help with the Mrk 178 data. We would also like to thank the anonymous referee for their many helpful comments that improved this paper. Trisha Ashley was supported in part by the Dissertation Year Fellowship at Florida International University. This project was funded in part by the National Science Foundation under grant numbers AST-0707563 AST-0707426, AST-0707468, and AST 0707835 to Deidre A. Hunter, Bruce G. Elmegreen, Caroline E. Simpson, and Lisa M. Young. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration (NASA). The National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation.
§ NGC 3738'S BASELINE FITTING
During calibration, the off-frequency spectrum is shifted to the on-frequency spectrum's central frequency and subtracted from the on-spectrum. If a source is within 3.5 MHz of the target's usable frequency range (also 3.5 MHz in width, so that the other source can be up to 5.25 MHz away from the central frequency of the target source), then the unwanted source that was originally 1.75-5.25 MHz away from the spectrum will appear closer in frequency space to the target source as a negative dip in intensity. The unwanted source reduces the amount of frequency space that can be used to estimate the zero-emission baseline for subtraction. Two sources were located outside of the target frequency range in NGC 3738's data and, with calibration, have been reflected into NGC 3738's target frequency range: NGC 3733 (north of NGC 3738) and NGC 3756 (south of NGC 3738). The real emission from NGC 3733 and NGC 3756 is very far away from NGC 3738 in velocity space (differences of 960 and 1100 km s^-1, respectively). It is therefore very unlikely that these sources are interacting with NGC 3738.
The spectra of the approximate space that NGC 3733 covered were summed and averaged to get a clear picture of where it is in frequency space; the resulting labeled spectrum can be seen in Figure <ref>. This spectrum has not had a baseline fit to it or the RFI removed from it, which is why the spectrum appears to have no zero line of emission and there are also spikes at 1414 MHz and 1420 MHz. The Milky Way and its reflection have been labeled as such. The source with emission located at 1414.3-1415.3 MHz is NGC 3733, which has been reflected into the target frequency range at 1417.8-1418.8 MHz. The frequency range that NGC 3733 occupies in the target frequency range could therefore no longer be used as zero-emission space in a baseline fit. If it were used, even partially, this could result in a fake source at its location and generally poor fits, since a low-order polynomial is being fit to the baseline. An example of a fit using the frequency range that includes the reflections of these sources can be seen in the top of Figure <ref>. A second example of a fit using frequency ranges that do not overlap with the reflected source can be seen in the bottom of Figure <ref>.
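For reference, the quoted ranges are consistent with a uniform 3.5 MHz shift of the off-source spectrum (upward in frequency for this source): ν_ reflected = ν_ true + 3.5 MHz, i.e. 1414.3+3.5=1417.8 MHz and 1415.3+3.5=1418.8 MHz, matching the reflected range of NGC 3733 given above.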
An averaged spectrum of NGC 3738's emission is shown in Figure <ref>. In this spectrum, NGC 3738 peaks at a frequency of 1419.3 MHz, and its reflection appears at a frequency of 1415.8 MHz. The reflection now occurs outside of the target frequency range. The emission of NGC 3738 and the reflection of NGC 3733 consume most of the target frequency range, leaving very little room to fit a baseline. Some of the emission- and reflection-free space outside of the target frequency range could be used for baseline fitting when the baseline appears continuous, but using a significant portion of the target frequency range is necessary for a good fit. There is also the added problem of another source, NGC 3756, being reflected into the target frequency range. NGC 3756's averaged spectrum is shown in Figure <ref>. In this figure, NGC 3756 has emission at 1415 MHz and a reflection at 1418.5 MHz.
With NGC 3733, NGC 3756, and NGC 3738 stretching throughout the target frequency range in the GBT maps, it was impossible to fit one single baseline to all three sources. Luckily, all three sources are separated in Galactic latitude. Therefore, the solution to the baseline problem was to split the GBT map into four spatial regions, with different baseline fit parameters for each region. The first region included the raster scan rows (in Galactic latitude) that contained only NGC 3733. The second region included only the raster scan rows that contained NGC 3738. The third region included NGC 3756. The fourth region included everything else. The baseline fit used for NGC 3738 was also used for this fourth region so that any sources close to NGC 3738 in velocity could be fit properly. The rows with NGC 3733 and NGC 3756 also had their emission-free pixels searched for any possible emission that could be related to NGC 3738 or a companion. This was done by averaging 10 spectra at a time and looking for possible emission in the resulting averaged spectrum. The exceptions to this 10-spectra averaging were the spectra close to the reflected sources and the last four spectra of each row, where fewer spectra were available for averaging but were still averaged for inspection. No sources related to NGC 3738 were found in the target frequency range. This method gives the four regions slightly different noise levels, but the effect is relatively small, with the rms values of the regions over 10 km s^-1 ranging from 3.8 to 4.5 × 10^16 cm^-2. These rms values are likely higher than the overall rms over 10 km s^-1 for the entirety of NGC 3738's GBT map because they were measured over smaller regions.
[Ashley (2013)]ashley13 Ashley, T., Simpson, C. E., Elmegreen, B. G. 2013, , 146, 42
[Ashley (2014)]ashley14 Ashley, T., Elmegreen, B. G., Johnson, M., Nidever, D., Simpson, C. E., Pokhrel, N. R. 2014, , 148, 130
[Bagetakos (2011)]bagetakos11 Bagetakos, I., Brinks, E., Walter, F., 2011, , 141, 23
[Banerjee (2011)]banerjee11 Banerjee, A., Jog, C. J., Brinks, E., & Bagetakos, I. 2011, , 415, 687
[Barnes & Hernquist(1996)]barnes96 Barnes & Hernquist 1996, , 471, 115
[Bekki(2008)]bekki08 Bekki, K. 2008, , 388, L10
[Begum (2008)]begum08 Begum, A., Chengalur, J. N., Karachentsev, I. D., Sharina, M. E., & Kaisin, S. S. 2008, , 386, 1667
[Brinchmann (2008)]brinchmann08 Brinchmann, J., Kunth, D., & Durret, F. 2008, 485, 657
[Brosch et al.(2004)]brosch04 Brosch, N., Almoznino, E., Heller, A. B. 2004, , 349, 357
[Brooks (2009)]brooks09 Brooks, A. M., Governato, F., Quinn, T., Brook, C. B., & Wadsley, J. 2009, , 694, 396
[Chevalier(1974)]chevalier74 Chevalier, R. A. 1974, , 188, 501
[Chynoweth (2011)]chynoweth11 Chynoweth, K., Holley-Bockelmann, K., Polisensky, E., & Langston, G. 2011, , 142, 137
[Cornwell(2008)]cornwell08 Cornwell, T. J. 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 793
[Côté (2000)]cote00 Côté, S., Carignan, C., & Freeman, K. C. 2000, , 120, 3027
[de Blok (2008)]deblok08 de Blok, W. J. G. Walter, F., Brinks, E. 2008, , 136, 2648
[El-Badry (2016)]elbadry16 El-Badry, K., Werzel, A., Geha, M. 2016, , 820, 131
[Elmegreen(1989)]e89 Elmegreen, B.G. 1989, ApJ, 338, 178
[Elmegreen et al.(2012)]e12 Elmegreen, B.G., Zhang, H.-X., & Hunter, D.A. 2012, ApJ, 747, 105
[Gil de Paz & Madore(2003)]gil03 Gil de Paz, A., Madore, B. F., & Pevunova, O. 2003, ApJS, 147, 29
[Gil de Paz & Madore(2005)]gil05 Gil de Paz, A. & Madore, B. F. 2005, ApJS, 156, 345
[González-Riestra (1988)]gonzalez88 González-Riestra, R., Rego, M., & Zamorano, J. 1988, , 202, 27
[Gooch(1996)]gooch96 Gooch, R.E., 1996, Astronomical Data Analysis Software and Systems V, ASP Conf. Series, 101, 80
[Guseva (2000)]guseva00 Guseva, N. G., Izotov, Y. I., & Thuan, T. X. 2000, , 531, 776
[Helmi et al.(2012)]helmi12 Helmi, A., Sales, L. V., Starkenburg, E. et al. 2012, , 758, L5
[Herrmann (2013)]herrmann13 Herrmann, K. A., Hunter, D. A., & Elmegreen, B. G. 2013, , 146, 5
[Hibbard & Mihos(1995)]hibbard95 Hibbard & Mihos 1995, , 110, 140
[Hoffman (2003)]hoffman03 Hoffman, G. L., Brosch, N., Salpeter, E. E., Carle, N. J. 2003, , 126, 2774
[Hunter (2001)]hunter01 Hunter, D. A., Elmegreen, B. G., & van Woerden, H. 2001, , 566, 773
[Hunter & Elmegreen(2004)]hunter04 Hunter, D. A. & Elmegreen, B. G. 2004, , 128, 2170
[Hunter & Elmegreen(2006)]hunter06 Hunter, D. A. & Elmegreen, B. G. 2006, , 162, 49
[Hunter (2010)]hunter10 Hunter, D. A., Elmegreen, B. G., & Ludka, B. C. 2010, , 139, 447
[Hunter (2012)]hunter12 Hunter, D. A., Ficut-Vicas, D., Ashley, T. 2012, , 144, 134
[Johnson (2012)]johnson12 Johnson, M., Hunter, D. A., Oh, S.-H., 2012, , 144, 152
[Johnson (2015)]johnson15 Johnson, M., Hunter, D. A., Wood, S., 2015, , 149, 196
[Karachentsev (2002)]karachentsev02 Karachentsev, I. D., Dolphin, A. E., Grisler, D., 2002, , 383, 125
[Karachentsev (2003)]karachentsev03 Karachentsev, I. D., Sharina, M. E., Dolphin, A. E., 2003, , 398, 467
[Kehrig (2013)]kehrig13 Kehrig, Pérez-Montero, E., Vílchez, J. M., Brinchmann, J., 2013, , 432, 2731
[Keres̆ (2005)]keres05 Keres̆, D., Katz, N., Weinberg, D. H., &, Davé 2005, , 363, 2
[Lelli(2013)]lelli13 Lelli, F. 2013, PhD Thesis, Starbursts and Gas Dynamics in Low-Mass Galaxies, http://www.astro.rug.nl/ lelli/Lelli.PhDthesis.pdf
[Leroy (2008)]leroy08 Leroy, A. K., Walter, F., Brinks, E., 2008, , 136, 2782
[Lira (2000)]lira00 Lira, P., Lawrence, A., & Johnson, R. A. 2000, , 319, 17
[López-Sánchez (2012)]lopez12 López–Sánchez, Á. R., Korubalski, B. S., van Eymeren, J., 2012, 419, 1051
[Lynds (1998)]lynds98 Lynds, R., Tolstoy, E., O'Niel, E. J. Jr., Hunter, D. A. 1998, , 116, 146
[Martínez-Delgado (2012)]delgado12 Martínez-Delgado, D., Romanowsky, A. J., Gabany, R. J. 2012, , 784, L24
[Mazzarella (1991)]mazzarella91 Mazzarella, J. M., Bothun, G. D., & Boroson, T. A. 1991, , 101, 2034
[McCray & Kafatos(1987)]mccray87 McCray, R. & Kafatos, M. 1987, , 317, 190
[McConnachie (2007)]mcconnachie07 McConnachie, A. W., Venn, K. A., Irwin, M. J., Young, L. M., & Geehan, J. J. 2007, , 671, L33
[McMullin (2007)]mcmullin07 McMullin, J. P., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, Astronomical Data Analysis Software and Systems XVI (ASP Conf. Ser. 376), ed. R. A. Shaw, F. Hill, & D. J. Bell (San Francisco, CA: ASP), 127
[McQuinn (2009)]mcquinn09 McQuinn, K. B. W., Skillman, E. D., Cannon, J. M., 2009, , 695 561
[Nicholls (2011)]nicholls11 Nicholls, D. C., Dopita, M. A., Jerjen, H, & Meurer, G. R.2011, AJ, 142, 83
[Nidever (2013)]nidever13 Nidever, D. L., Ashley, T., Slater, C. T., . 2013, , 779, L15
[Noeske (2001)]noeske01 Noeske, K. G., Iglesias-Páramo, J., Vílchez, J. M., Papaderos, P., & Fricke, K. J. 2001, , 371, 806
[Ott (2005)]ott05 Ott, J., Walter, F., & Brinks, E. 2005, , 358, 1423
[Pearson (2016)]pearson16 Pearson, S., Besla, G., Putman, M. E., 2016, , 459, 1827
[Papaderos (1994)]papaderos94 Papaderos, P., Fricke, K. J., Thuan, T. X., & Loose, H.-H. 1994, , 291, L13
[Papaderos (1996)]papaderos96b Papaderos, P., Loose, H.-H., Thuan, T. X, & Fricke, K. J. 1996, , 120, 207
[Paturel (2003)]paturel03 Paturel, G., Theureau, G., Bottinelli, L., 2003, , 412, 57
[Peterson(1979)]peterson79 Peterson, S. D. 1979, , 40, 527
[Pustilnik (2001)]pustilnik01 Pustilnik, S. A., Kniazev, A. Y., Lipovetsky, V. A., & Ugryumov, A. V. 2001, , 373, 24
[Ramya (2009)]ramya09 Ramya, S., Kantharia, N. G., & Prabhu, T. P. 2009, in The Low-Frequency Radio Universe (ASP Conf. Ser. 407), ed. D. J. Saikia, D. A. Green, Y. Gupta, & T. Venturi (San Francisco, CA: ASP), 114
[Roychowdhury (2009)]roychowdhury09 Roychowdhury, S., Chengalur, J. N., Begum, A., & Karachentsev, I. D. 2009, , 397, 1435
[Sánchez Almeida (2013)]sanchez13 Sánchez Almeida, J., Muñoz-Tuñón, C., Elmegreen, D. M., Elmegreen, B. G., & Méndez-Abreu, J. 2013, , 767, 74
[Sancisi (2008)]sancisi08 Sancisi, R., Fraternali, F., Osterloo, T., & van der Hulst, T. 2008, , 15, 189
[Schulte-Ladbeck & Hopp(1998)]schulte98 Schulte-Ladbeck, R. E., & Hopp, U.1998, , 116, 2886
[Schulte-Ladbeck (2000)]schulte00 Schulte-Ladbeck, R. E., Hopp, U., Greggio, L., & Crone, M. M. 2000, , 120, 1713
[Schulte-Ladbeck (2001)]schulte01 Schulte-Ladbeck, R. E., Hopp, U., Greggio, L., Crone, M. M., & Drozdovsky, I. O. 2001, , 121, 3007
[Simpson et al.(2011)]simpson11 Simpson, C. E., Hunter, D. A., Nordgren, T. E., et al. 2011, , 142, 82
[Stevens & Strickland(1998)]stevens98 Stevens, I. R. & Strickland, D. K. 1998, 294, 523
[Stil & Israel(2002a)]stil02a Stil, J. M. & Israel, F. P. 2002a, , 389, 29
[Stil & Israel(2002b)]stil02b Stil, J. M. & Israel, F. P. 2002b, , 392, 473
[Sullivan (2006)]sullivan06 Sullivan, M., Le Borgne, D., Pritchet, C. J. 2006, ApJ, 684, 868
[Tajiri & Kamaya(2002)]tajiri02 Tajiri, Y. Y. & Kamaya, H. 2002, , 389, 367
[Tamburro (2009)]tamburro09 Tamburro, D., Rix, H.-W., Leroy, A. K. 2009, , 137, 4424
[Taylor (1994)]taylor94 Taylor C. L., Brinks E., Pogge R. W., Skillman E. D. 1994, , 107, 971
[Taylor (1995)]taylor95 Taylor, C. L., Brinks, E., Grashuis, R. M., & Skillman, E. D. 1995, , 99, 427
[Taylor(1997)]taylor97 Taylor, C. L. 1997, , 480, 524
[Thuan & Martin(1981)]thuan81 Thuan, T. X. & Martin, G. E. 1981, , 247, 823
[Thuan (2004)]thuan04 Thuan, T. X., Hibbard, J. E., & Lévrier, F. 2004, , 128, 617
[Thilker (2004)]thilker04 Thilker, D. A., Braun, R., Walterbos, R. A. M., 2004, , 601, L39
[Toomre & Toomre(1972)]toomre72 Toomre, A. & Toomre, J. 1972, , 178, 623
[Tully (1981)]tully81 Tully, R. B., Boesgaard, A. M., Dyck, H. M., & Schempp 1981, , 246, 38
[Vaduvescu (2006)]vaduvescu06 Vaduvescu, O., Richer, M. G., & McCall, M. L. 2006, 131, 1318
[van Eymeren (2009)]eymeren09 van Eymeren, J. Marcelin, M., Koribalski, B. S. 2009, , 505, 105
[van Zee (1997)]vanzee97 van Zee, L., Haynes, M. P., Salzer, J. J., & Broeils, A. H. 1997, , 113, 1618
[van Zee (1998)]vanzee98 van Zee, L., Skillman, E. D., Salzer, J. J. 1998, , 116, 1186
[van Zee(2001)]vanzee01b van Zee, L., Salzer, J. J., & Skillman, E. D. 2001, , 122, 121
[Verbeke (2014)]verbeke14 Verbeke, R., De Rijcke, S., Cloet-Osselaer, A., Vandenbroucke, B., & Schroyen, J. 2014, , 442, 1830
[Walter & Brinks(1999)]walter99 Walter, F. & Brinks, E. 1999, , 118, 273
[Warren (2012)]warren12 Warren, S. R., Skillman, E. D., Stilp, A. M. 2012, , 757, 84
[Westmeier (2008)]westmeier08 Westmeier, T., Brüns, C., & Kerp, J. 2008, , 390, 1691
[Wilcots & Miller(1998)]wilcots98 Wilcots, E. M. & Miller B. W. 1998, , 116, 2363
Emergence of a stellar cusp by a dark matter cusp in a low-mass compact ultra-faint dwarf galaxy

Shigeki Inoue
=================================================================================================
Recent observations have been discovering new ultra-faint dwarf galaxies as small as ∼20 pc in half-light radius and ∼3 km s^-1 in line-of-sight velocity dispersion. In these galaxies, dynamical friction on a star against dark matter can be significant and alter their stellar density distribution. The effect can strongly depend on the central density profile of dark matter, i.e. cusp or core. In this study, I perform computations using a classical and a modern analytic formula and N-body simulations to study how dynamical friction changes a stellar density profile and how different it is between cuspy and cored dark matter haloes. This study shows that, if a dark matter halo has a cusp, dynamical friction can cause a shrivelling instability which results in the emergence of a stellar cusp in the central region, r≲2 pc. On the other hand, if it has a constant-density core, dynamical friction is significantly weaker and does not generate a stellar cusp even if the galaxy has the same line-of-sight velocity dispersion. In such a compact and low-mass galaxy, since the shrivelling instability by dynamical friction is inevitable if it has a dark matter cusp, the absence of a stellar cusp implies that the galaxy has a dark-matter core. I expect that this could be used to diagnose the dark matter density profile in these compact ultra-faint dwarf galaxies.
instabilities – methods: numerical – methods: analytical – galaxies: dwarf – galaxies: kinematics and dynamics.
§ INTRODUCTION
Dark matter (DM) density profiles in dwarf galaxies have long been debated. Theoretical studies such as cosmological N-body simulations have demonstrated that DM density increases toward the galactic centre independent of halo mass <cit.>. On the other hand, observations have suggested that dwarf galaxies seem to have nearly constant DM densities in their central regions <cit.>.[Low surface brightness galaxies have also been observed to have DM cores <cit.>, although I do not discuss these galaxies.]
As a possible solution, if DM haloes consist of warm or self-interacting particles, all dwarf galaxies are expected to have central DM cores. It has also been proposed, alternatively, that (recursive) baryonic feedback can turn a cusp into a core by flattening the inner slopes of the primordial DM density profiles in dwarf galaxies as massive as M_ DM∼ 10^10–10^11 M_⊙ <cit.>. If the latter scenario is the case, since the dynamical masses of some ultra-faint dwarf galaxies (UFDs) in the Local Group have been observed to be significantly smaller than the mass threshold above which the baryonic effect influences their central DM densities, they could be expected to preserve their primordial DM density profiles, which may be cuspy. Accordingly, it is interesting to try to determine the DM density profiles of such low-mass UFDs. It is, however, still impossible to know whether their DM haloes have cusps or cores because only a handful of stars are observable by spectroscopy to measure their line-of-sight velocities (LOSVs) and model their DM haloes. Hence, it is worthwhile looking for an alternative method to deduce which type of DM profile the low-mass galaxies have, cusp or core. For example, <cit.> have proposed a method using the fraction of wide binaries, which can be disrupted by tidal force depending on the DM potential in UFDs.
The smallest UFDs are as tiny as R_ h≲30 pc in half-light radius and L ∼10^2-3 L_⊙ in luminosity <cit.>, although current observations still cannot reject the possibility that some of them are extended globular clusters. Recently, <cit.> has analytically argued that dynamical friction (DF) on a star against dark matter may be marginally effective on a timescale of ∼10 Gyr in Draco II — observed physical properties of which are R_ h=19^+8_-6 pc, brightness M_ v=-2.9±0.8, and LOSV dispersion σ_ h=2.9±2.1 km s^-1 measured within R_ h <cit.> — by applying the Chandrasekhar DF formula <cit.> to his singular isothermal DM halo model. His result implies that DF against DM could significantly change the stellar distribution in UFDs more compact and/or less massive than Draco II, which will be discovered by future observations.
The effect of DF strongly depends on the DM density profile. It has been known that the DF drag force becomes significantly weaker in a cored density distribution than in a cuspy one, once a massive particle enters the core <cit.>. Studies using N-body simulations have demonstrated that the drag force by DF does cease practically in a constant-density core, probably by non-linear effects <cit.>. Therefore, if an extremely low-mass UFD has a constant-density core of DM, DF could be too weak to affect the stellar distribution. On the other hand, if such a UFD has a DM cusp, DF against DM could be strong enough to make alterations to its stellar distribution, such as the emergence of a stellar cusp or the formation of a nucleus cluster as a remnant of stars fallen into the galactic centre. Current observations of low-mass compact UFDs are limited to close distances of d≲30 kpc from the sun because of their faintness. At this distance, each star in a UFD can be resolved since the typical size of observational smearing is smaller than the mean separation of stars even at the galactic centres. Therefore, the expected stellar cusp and nucleus cluster would be observed as a dense group of stars at the galactic centre if they exist.
This study addresses the effect of DF by DM on the stellar distribution in an extremely low-mass and compact UFD and focuses on how different it is between cuspy and cored DM density profiles. In Section <ref>, I perform analytical estimations of stellar shrivelling due to DF based on a classical and a modern DF formula. In Section <ref>, I perform N-body simulations resolving every single star and demonstrate the same result as the analytical estimation presented in Section <ref>. Finally, I present a discussion and summary of this work in Section <ref>.
§ ANALYSIS USING DYNAMICAL FRICTION FORMULAE
In this study, I describe a density profile of a DM halo by a Dehnen model <cit.>,
ρ_ DM(r) = ρ_ DM,0 r_ s^4 / [r^γ(r+r_ s)^{4-γ}],
where ρ_ DM,0 and r_ s are scale density and radius, and γ is an inner density slope. I assume a cuspy DM halo to be represented by setting γ=1, which corresponds to the Hernquist model profile <cit.>, and a cored DM halo is represented by γ=0. Local velocity dispersion of DM is generally computed by solving Jeans equation,
σ_ DM^2(r) = 1/ρ_ DM∫^∞_rρ_ DMGM_ DM(r')/r'^2 dr',
where G is the gravitational constant, and M_ DM(r) is the mass of DM enclosed within r. Here, I assume that the gravity of the baryon component is negligible and that the velocity distribution is isotropic. The analytic solutions of σ_ DM for γ=0 and 1 can be found in <cit.> and <cit.>.
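As a concrete illustration, the density profile and the Jeans integral above can be evaluated numerically. The following Python sketch (my own illustration, not part of the original analysis; scipy is assumed, and the quadrature settings are arbitrary choices) computes ρ_ DM, M_ DM and σ_ DM for the Dehnen family; for γ=0 and 1 it can be checked against the analytic solutions cited above.

import numpy as np
from scipy.integrate import quad

G = 4.301e-3  # gravitational constant in pc M_sun^-1 (km/s)^2

def rho_dm(r, rho0, rs, gamma):
    # Dehnen density profile defined above
    return rho0 * rs**4 / (r**gamma * (r + rs)**(4.0 - gamma))

def mass_dm(r, rho0, rs, gamma):
    # enclosed DM mass; analytic for the Dehnen family
    mtot = 4.0 * np.pi * rho0 * rs**3 / (3.0 - gamma)
    return mtot * (r / (r + rs))**(3.0 - gamma)

def sigma_dm(r, rho0, rs, gamma):
    # isotropic velocity dispersion from the Jeans equation above
    f = lambda rp: rho_dm(rp, rho0, rs, gamma) * G * mass_dm(rp, rho0, rs, gamma) / rp**2
    integral, _ = quad(f, r, np.inf, limit=200)
    return np.sqrt(integral / rho_dm(r, rho0, rs, gamma))

# e.g. the cuspy (gamma = 1) model with r_s = 125 pc and the rho_DM,0 quoted later in the text:
print(sigma_dm(20.0, rho0=0.11, rs=125.0, gamma=1.0))  # km/s at r = 20 pc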
This study discusses how the DF against DM affects the stellar distribution and how different it is between cuspy and cored DM density profiles. I use a Plummer model for the stellar distribution of a compact UFD,
ρ_⋆(r) = 3M_⋆/4π r_⋆^3(1+r^2/r_⋆^2)^-5/2,
where M_⋆ and r_⋆ are the total mass and scale radius of the stars. This model has a core of stars in r≪ r_⋆, where the density is nearly constant. The two-dimensional half-light radius R_ h and the integrated mass M_ h inside R_ h are obtained by assuming a constant mass-to-luminosity ratio and integrating equation (<ref>). In this study, I assume r_⋆=20 pc (R_ h=r_⋆ in a Plummer model). The luminosity-weighted LOSV dispersion inside R_ h is given as
σ^2_ h = 4π/M_ h∫^∞_0dz∫^R_ h_0ρ_⋆σ^2_⋆(R',z)R' dR',
where the stellar velocity dispersion σ_⋆ is computed from equation (<ref>) with ρ_⋆ substituted for ρ_ DM. Since stellar gravity is now assumed to be negligible, setting σ_ h gives ρ_ DM,0 when the other parameters in equation (<ref>) are fixed:[When σ_ h=1.5 km s^-1, the values of ρ_ DM,0 in the cored DM models are 9.8, 7.0, 5.8 and 5.1×10^-1 M_⊙ pc^-3 for r_ s=125, 250, 500 pc and 1 kpc. Those in the cuspy models are 1.1, 0.45, 0.20 and 0.096×10^-1 M_⊙ pc^-3, respectively.] σ_ h^2∝ρ_ DM,0. In what follows, I discuss the two cases of σ_ h=1.5 and 3.0 km s^-1.[The cusp and the core models of equation (<ref>) have finite total masses, M_ cusp,tot=2πρ_ DM,0r_ s^3 and M_ core,tot=(4/3)πρ_ DM,0r_ s^3, respectively. When σ_ h=1.5 km s^-1, for r_ s=125 pc, the total masses of the cuspy and cored haloes are M_ cusp,tot=1.3×10^6 M_⊙ and M_ core,tot=8.0×10^6 M_⊙. For r_ s=1 kpc, M_ cusp,tot=6.0×10^7 M_⊙ and M_ core,tot=2.2×10^9 M_⊙.] Fig. <ref> illustrates the radial profiles of σ_ DM, the circular velocity v_ circ≡√(GM_ DM(r)/r) and σ_⋆ normalised by σ_ h in my cuspy and cored halo models with r_ s=125 pc and 1 kpc.
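Because σ_ h^2 scales linearly with ρ_ DM,0, the normalisation can be fixed by evaluating the luminosity-weighted dispersion once for a trial density and rescaling. A hedged sketch continuing the code above (reusing G and mass_dm; the grid extent and the finite upper limit on z are my own choices):

import numpy as np
from scipy.integrate import quad, dblquad
from scipy.interpolate import interp1d

def rho_star(r, rstar, mstar_tot=1.0):
    # Plummer profile above; only the shape matters here
    return 3.0 * mstar_tot / (4.0 * np.pi * rstar**3) * (1.0 + (r / rstar)**2)**-2.5

def sigma_h(rho0, rs, gamma, rstar):
    # luminosity-weighted LOSV dispersion within R_h = rstar
    def sig2(r):  # sigma_star^2 from the Jeans equation with rho_star as the tracer
        f = lambda rp: rho_star(rp, rstar) * G * mass_dm(rp, rho0, rs, gamma) / rp**2
        return quad(f, r, np.inf, limit=200)[0] / rho_star(r, rstar)
    grid = np.logspace(-6, 4.5, 200)                    # pc; tabulate once for speed
    sig2_tab = interp1d(grid, [sig2(r) for r in grid])
    num = dblquad(lambda z, R: rho_star(np.hypot(R, z), rstar) * sig2_tab(np.hypot(R, z)) * R,
                  0.0, rstar, lambda R: 0.0, lambda R: 1.0e4)[0]
    mh = dblquad(lambda z, R: rho_star(np.hypot(R, z), rstar) * R,
                 0.0, rstar, lambda R: 0.0, lambda R: 1.0e4)[0]
    return np.sqrt(num / mh)

# rescale a trial rho_DM,0 so that sigma_h hits the target value (sigma_h^2 ∝ rho_DM,0):
trial = sigma_h(rho0=0.1, rs=125.0, gamma=1.0, rstar=20.0)
rho0 = 0.1 * (1.5 / trial)**2   # target sigma_h = 1.5 km/s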
§.§ Analytic formulae of dynamical friction
§.§.§ The Chandrasekhar formula
I consider DF against DM acting on a star. The Chandrasekhar DF formula under a Maxwellian velocity distribution[Although <cit.> also proposed more general forms of analytic DF not relying on the Maxwellian distribution, I refer to equation (<ref>) as the Chandrasekhar formula in this paper.] is given as
F_ DF = -4πlnΛ G^2ρ_ DMm_⋆^2/v_⋆^2[ erf(X)-2X/√(π)exp(-X^2)],
where m_⋆ and v_⋆ are the mass and velocity of a star, X≡ v_⋆/(√(2)σ_ DM), and Λ is a parameter whose proper value is still under debate <cit.>[Basically, Λ is defined to be the ratio between the maximum and minimum impact parameters of two-body gravitational interaction, i.e., Λ≡ b_ max/b_ min, where b_ min∼ Gm_⋆/v_⋆^2 and b_ max∼ r_ s in the classical formula <cit.>.]. The direction of F_ DF is presumed to be opposite to the velocity vector v_⋆. By assuming a circular orbit, i.e. v_⋆=v_ circ, equation (<ref>) can be solved with σ_ DM from equation (<ref>). In the case considered here, equation (<ref>) is independent of ρ_ DM,0 (i.e. σ_ h) since ρ_ DM, v_⋆^2 and σ_ DM^2 are all proportional to ρ_ DM,0, although the DF timescale, ∼ m_⋆σ_ h/F_ DF, does depend on ρ_ DM,0. In addition, because the term in square brackets in equation (<ref>) is independent of m_⋆, F_ DF/m_⋆^2 is independent of m_⋆.
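In code, the Chandrasekhar formula for a circular orbit is a direct transcription (a sketch reusing G, rho_dm, mass_dm and sigma_dm from the earlier sketch; lnΛ=15 anticipates the choice made in Section <ref>):

import numpy as np
from scipy.special import erf

def f_df_chandrasekhar(r, rho0, rs, gamma, mstar=0.5, lnL=15.0):
    # Chandrasekhar DF force for a circular orbit, v_star = v_circ
    v = np.sqrt(G * mass_dm(r, rho0, rs, gamma) / r)
    X = v / (np.sqrt(2.0) * sigma_dm(r, rho0, rs, gamma))
    maxwellian = erf(X) - 2.0 * X / np.sqrt(np.pi) * np.exp(-X**2)
    # with G in pc M_sun^-1 (km/s)^2, the force comes out in M_sun (km/s)^2 pc^-1
    return -4.0 * np.pi * lnL * G**2 * rho_dm(r, rho0, rs, gamma) * mstar**2 / v**2 * maxwellian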
The left panel of Fig. <ref> shows the Chandrasekhar DF force of equation (<ref>) with lnΛ=15, normalised by m_⋆^2, in the cuspy and the cored haloes with various r_ s. The Figure indicates that the strength of DF is remarkably different between the cuspy and the cored DM haloes; DF increases monotonically towards the centre in a DM cusp (the blue lines), whereas it is approximately constant or gently decreases in a core (the red lines). The behaviour of DF is almost independent of r_ s in the cuspy haloes, whereas DF in a core becomes weaker when r_ s is larger. Although the difference of DF inside r_ s between the cusp and the core becomes smaller with decreasing r_ s, it is still quite large in r≲10 pc even in the case of r_ s=125 pc. This means that, if the stellar component of a UFD is deeply embedded in a DM halo (i.e. R_ h≪ r_ s), one can expect that DF strongly depends on the density profile of DM and could change the stellar distribution if the compact UFD has a cusp of DM. Moreover, in a cuspy halo, a star undergoing DF migrates into an inner radius, at which DF is even stronger (see Section <ref>).
§.§.§ Petts et al. formula
The Chandrasekhar formula of equation (<ref>) is, however, based on several assumptions such as the Maxwellian velocity distribution and the invariable parameter of Λ. Therefore, improved formulae have been invented by previous studies. Recently, <cit.> proposed more sophisticated DF modelling based of the general Chandrasekhar formula <cit.>, which uses a distribution function instead of the Maxwellian distribution and takes high-velocity encounters into account. They demonstrated that their improved DF model can reproduce orbits of infalling particles in cuspy and cored density fields better than the classical formula. They formulate DF force[<cit.> proposed two models of DF: `P16' and `P16f'. I use their P16f model in this study since they concluded that P16f is more accurate than P16.] as
F_ DF = -2π^2 G^2ρ_ DMm_⋆^2/v_⋆^2∫^v_ esc_0J(v_ DM)f(v_ DM)v_ DM dv_ DM,
J= ∫^v_⋆+v_ DM_|v_⋆-v_ DM|(1+v_⋆^2-v_ DM^2/V^2)log(1+b_ max^2V^4/G^2m_⋆^2) dV,
where f(v_ DM) represents a distribution function of DM, which is defined so that 4π∫ f(v_ DM)v_ DM^2 dv_ DM=1, the escape velocity is v_ esc=√(-2Φ), and V corresponds to the relative velocity of an encounter. In equation (<ref>), the maximum impact parameter is
b_ max = min( ρ_ DM(r)/|dρ_ DM/dr|, r).
The value of ρ_ DM/|dρ_ DM/dr| can be taken as the distance within which the density field can be considered to be homogeneous <cit.>. However, since it can diverge in a constant-density core, b_ max is limited to be ≤ r <cit.>. Equation (<ref>) can be solved analytically (see Appendix <ref>).
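The two velocity integrals above can likewise be evaluated by nested quadrature. The sketch below (reusing the functions defined earlier) is illustrative only: for brevity it approximates f(v_ DM) by a Maxwellian truncated at v_ esc with the local σ_ DM, whereas <cit.> and the computations in this paper use the self-consistent distribution function.

import numpy as np
from scipy.integrate import quad

def phi_dm(r, rho0, rs, gamma):
    # Dehnen potential (valid for gamma != 2)
    mtot = 4.0 * np.pi * rho0 * rs**3 / (3.0 - gamma)
    return -G * mtot / ((2.0 - gamma) * rs) * (1.0 - (r / (r + rs))**(2.0 - gamma))

def f_df_petts(r, rho0, rs, gamma, mstar=0.5):
    v_star = np.sqrt(G * mass_dm(r, rho0, rs, gamma) / r)
    sig = sigma_dm(r, rho0, rs, gamma)
    v_esc = np.sqrt(-2.0 * phi_dm(r, rho0, rs, gamma))
    # b_max = min(rho / |d rho / d r|, r); 1/|dln rho/dr| for the Dehnen profile
    b_max = min(1.0 / (gamma / r + (4.0 - gamma) / (r + rs)), r)

    def J(v_dm):  # inner integral over the encounter velocity V
        f = lambda V: (1.0 + (v_star**2 - v_dm**2) / V**2) \
                      * np.log(1.0 + b_max**2 * V**4 / (G * mstar)**2)
        return quad(f, abs(v_star - v_dm), v_star + v_dm, limit=200)[0]

    # truncated Maxwellian stand-in for f(v_DM), normalised so 4*pi*int f v^2 dv = 1
    norm = quad(lambda v: np.exp(-0.5 * (v / sig)**2) * v**2, 0.0, v_esc)[0]
    fv = lambda v: np.exp(-0.5 * (v / sig)**2) / (4.0 * np.pi * norm)
    outer = quad(lambda v: J(v) * fv(v) * v, 0.0, v_esc, limit=100)[0]
    return -2.0 * np.pi**2 * G**2 * rho_dm(r, rho0, rs, gamma) * mstar**2 / v_star**2 * outer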
In addition, <cit.> also introduced a `tidal-stalling' radius, r_ TS, at which the tidal radius of a massive particle is equal to its orbital radius. The tidal radius is
r_ t = [Gm_⋆/(Ω^2-d^2Φ/dr^2)]^{1/3},
where Ω^2=GM_ DM(r)/r^3. They argued that DF ceases within the tidal-stalling radius because of non-linear effects, and showed that r_ TS matches well the radius of DF cessation in a core found in N-body simulations (see Section <ref>). In their DF model, F_ DF=0 in r<r_ TS, although equation (<ref>) still returns a non-zero value.
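Since dΦ/dr=GM_ DM/r^2 implies d^2Φ/dr^2=4π Gρ_ DM-2GM_ DM/r^3, the denominator above equals 3GM_ DM/r^3-4π Gρ_ DM, and r_ TS follows from solving r_ t(r)=r by root-finding. A sketch reusing the earlier functions (the bracketing interval is my own choice and may need adjusting per model):

import numpy as np
from scipy.optimize import brentq

def r_ts(rho0, rs, gamma, mstar=0.5):
    # tidal-stalling radius: solve r_t(r) = r
    def rt_minus_r(r):
        denom = 3.0 * G * mass_dm(r, rho0, rs, gamma) / r**3 \
                - 4.0 * np.pi * G * rho_dm(r, rho0, rs, gamma)   # Omega^2 - d2Phi/dr2
        return (G * mstar / denom)**(1.0 / 3.0) - r
    return brentq(rt_minus_r, 1.0e-4, 0.5 * rs)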
The right panel of Fig. <ref> shows the DF force of equation (<ref>) normalised by m_⋆^2. Unlike the Chandrasekhar formula, F_ DF/m_⋆^2 now depends weakly on σ_ h and m_⋆. Here, I assume σ_ h=1.5 km s^-1 and m_⋆=0.5 M_⊙; however, the results hardly change between σ_ h=1.5 and 3.0 km s^-1. The radii r_ TS become about 1.4 times smaller when σ_ h=3.0 km s^-1. In the right panel of Fig. <ref>, although the differences between the cuspy and the cored haloes are still quite large, it is remarkable that the Petts et al. formula predicts DF significantly stronger than the classical formula in the central regions of the cored haloes <cit.>. On the other hand, the DF in the cuspy haloes is similar to that given by the Chandrasekhar formula.
§.§ Orbital integration with the formulae
Using the models and the DF formulae described above, I perform orbital integration of stars under the potential given by the DM distribution of equation (<ref>). With the initial spatial distribution of equation (<ref>), the velocity distribution of stars is given by Eddington's formula <cit.> with isotropy. I do not take into account mutual interactions between the stars; therefore, the result is independent of the number of stars. For the sake of statistics, I use a random sample of ten million stars in each run. While integrating their orbits with respect to time, the stars are decelerated every timestep by the DF represented by the analytic formulae. I assume m_⋆=0.5 M_⊙ as a typical mass of a star as old as ∼10 Gyr <cit.>. The analytic DF is considered to work until a star reaches the radius r_ limit at which M_ DM(r_ limit)=m_⋆. In my models, r_ limit≃0.1 and 0.5 pc in the cuspy and the cored DM haloes, respectively. When a star enters r_ limit with a velocity slower than v_ circ|_r=r_ limit, the star is stopped there and considered to have fallen into the galactic centre by DF. When the Chandrasekhar formula is applied, I set lnΛ=15. When the Petts et al. formula is applied, DF ceases within r_ TS (i.e. F_ DF=0). I use a second-order leap-frog integrator for the orbital computations with a constant and shared timestep of Δ t=0.01× r_ limit/v_ circ|_r=r_ limit. I confirmed the convergence of my results with respect to Δ t. The orbits of stars are time-integrated until t=10 Gyr, and I obtain stellar surface densities as functions of radius in the runs.
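In essence, this scheme is a leapfrog with an extra deceleration antiparallel to the velocity. A condensed single-star sketch follows (either DF function above can be passed in; in practice one would tabulate F_ DF(r) beforehand, since evaluating the quadratures at every step is slow, and the stopping conditions are simplified here):

import numpy as np

def integrate_orbit(x, v, rho0, rs, gamma, f_df, mstar=0.5,
                    dt=0.01, n_steps=1_000_000, r_limit=0.1):
    # kick-drift-kick leapfrog; lengths in pc, velocities in km/s, so one time unit
    # is 1 pc / (1 km/s) ~ 0.98 Myr (10^6 steps of dt = 0.01 reach ~9.8 Gyr)
    def accel(x, v):
        r = np.linalg.norm(x)
        a = -G * mass_dm(r, rho0, rs, gamma) / r**3 * x                  # halo gravity
        a += f_df(r, rho0, rs, gamma, mstar) / mstar * v / np.linalg.norm(v)  # F_DF < 0
        return a
    a = accel(x, v)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a
        x = x + dt * v_half
        # the velocity-dependent drag is evaluated at the half-step velocity,
        # which is adequate for this sketch
        a = accel(x, v_half)
        v = v_half + 0.5 * dt * a
        if np.linalg.norm(x) < r_limit:   # reached r_limit where M_DM(r_limit) = m_star
            break
    return x, v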
Fig. <ref> shows the results of the orbital integration using the Chandrasekhar (top) and the Petts et al. (bottom) formulae in the cuspy and the cored haloes with r_ s=125 pc and σ_ h=1.5 km s^-1. Although the stellar surface density is nearly constant in R≲5 pc in the initial state (black line), the density profiles are significantly steepened after t∼4 Gyr in the cuspy DM halo (blue lines). In addition, sharp stellar cusps like nucleus clusters emerge at the galactic centres, which contain five and four per cent of the total number of stellar particles within R<0.5 pc in the top and the bottom panels, respectively. The cusps mainly consist of stars that have fallen towards the centres by DF. The steepened stellar distribution profiles are nearly exponential outside the stellar cusps. Because the Chandrasekhar and the Petts et al. formulae are not significantly different in the cuspy halo (Fig. <ref>), the results of Fig. <ref> are similar there. These results corroborate the expectation that, as <cit.> proposed, the stellar distribution in a low-mass compact UFD can be affected by DF against DM if it has a cusp.
If a DM halo has a core, however, the DF approximated by the analytic formulae is significantly less efficient at steepening the stellar profile (the red lines), in spite of the same σ_ h, which implies similar DM masses within R_ h. When the Chandrasekhar formula is applied (the top panel), the stellar density hardly changes even at t=10 Gyr in the cored DM halo. Although the Petts et al. formula (the bottom panel) steepens the stellar density profile more than the classical formula, the density slope is clearly shallower than that in the cuspy DM halo, and a stellar cusp does not form. No stars fall into the centre with either DF modelling. The absence of the stellar cusp is due to the weak DF in the DM core and the DF cessation assumed in r<r_ TS in the Petts et al. model.
Fig. <ref> shows the same results but for different settings of r_ s and σ_ h. The effect of DF becomes weaker for larger r_ s and σ_ h (i.e. higher ρ_ DM,0), but the steepening of the surface density profile and the formation of a stellar cusp by a DM cusp can be seen even when r_ s=1 kpc and σ_ h=3.0 km s^-1. In the case of the cored DM haloes, on the other hand, the stellar density profiles are almost intact even though σ_ h and r_ s are the same as in the cuspy halo models. In the case of the cored halo with σ_ h=3.0 km s^-1 and r_ s=125 pc, the Petts et al. formula predicts weak steepening, but a stellar cusp does not emerge. Thus, the significance of the DF effect strongly depends on whether the DM halo has a cusp or a core, even if r_ s and σ_ h are the same. The most noticeable difference is the emergence of a stellar cusp in a DM cusp.
The total mass of the nucleus remnants consisting of stars fallen into the centre can depend not only on the DM density but also on the stellar distribution. If R_ h is larger, stars have a more extended distribution; therefore, the DF timescale becomes longer on average. Thus, a larger R_ h leads a smaller fraction of stars to fall into the centre by DF. As a result, a less prominent stellar cusp would form in such an extended galaxy.
§ N-BODY SIMULATIONS
As I showed in Section <ref>, the analytic formulae are useful to estimate the magnitude of DF. The formulae, however, still ignore non-linear effects. For example, they treat DF as a collective effect of two-body interactions and do not take into account the orbital periodicity of particles or the reaction of the field particles. To address further the effect of DF using more realistic models, I perform N-body simulations in which the models are fully self-consistent, and the DF drag force naturally arises from mutual interactions between particles.
§.§ Settings
The initial conditions of my N-body simulations are the same as the DM and stellar models (equations <ref> and <ref>) with the parameters used in Fig. <ref>: r_ s=125 pc, σ_ h=1.5 km s^-1 (ignoring the stellar potential) and r_⋆=20 pc. The total stellar mass is set to M_⋆=2500 M_⊙, and the mass of a single stellar particle is m_⋆=0.5 M_⊙, i.e. the number of stellar particles is N_⋆=5000. A stellar particle has a softening length of ϵ_⋆=0.1 pc. The velocity distribution is given by Eddington's formula taking into account the total potential of the DM and the stars. Although the actual LOSV dispersion of stars inside R_ h is slightly higher than 1.5 km s^-1 because of the self-gravity of the stars, the increase is only a few per cent. Although every single star is resolved with a point-mass particle, the interactions between stars in the simulations are still collisionless (see Appendix <ref>).
DF can arise if m_⋆≫ m_ DM, where m_ DM is the mass of a DM particle in the simulations. Since m_⋆=0.5 M_⊙ in my simulations, m_ DM should be ≲0.05 M_⊙. Achieving such a high resolution requires approximately 2.6 and 16.0×10^7 particles for the cuspy and the cored DM haloes, respectively. To lighten the heavy burden of the N-body computations, I employ an orbit-dependent refinement method for a multi-mass spherical model proposed by <cit.>. This method divides a DM halo into i shells and a central sphere (the zeroth shell). Basically, each shell is resolved into DM particles with its own mass resolution m_ DM,i and softening length ϵ_i (see Table <ref>). After assigning a DM particle in the i-th shell its initial position and velocity and computing its pericentre distance in the fixed potential, if the pericentre intrudes into an inner j-th shell, the particle is split into m_ DM,i/m_ DM,j particles with mass m_ DM,j and softening length ϵ_j.[Therefore, the mass ratio m_ DM,i/m_ DM,j has to be a natural number.] The split particles are distributed at random positions keeping the initial radius of the parent particle, and the directions of their tangential velocities are randomly reassigned while keeping the initial radial velocity and the kinetic energy of the parent particle. This refinement method can, by a substantial factor, reduce the computational run time by decreasing the number of DM particles in the outer regions, which are not important to this study, while preventing the outer particles with larger masses from entering the innermost region resolved with the smallest particle mass. After the refinement, 1.53 and 6.88×10^7 particles are required to represent the cuspy and the cored DM haloes, respectively.
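The refinement logic can be sketched as follows: the pericentre follows from energy and angular-momentum conservation in the fixed potential (the radial turning points solve 2[E-Φ(r)]=L^2/r^2), and a particle is split when its pericentre reaches a shell with finer resolution. The shell edges and masses below are placeholders rather than the values of Table <ref>, phi_dm is as defined in the earlier sketch, and the turning-point bracketing assumes the particle is not exactly at a turning point.

import numpy as np
from scipy.optimize import brentq

def pericentre(x, v, rho0, rs, gamma):
    r0 = np.linalg.norm(x)
    E = 0.5 * np.dot(v, v) + phi_dm(r0, rho0, rs, gamma)
    L = np.linalg.norm(np.cross(x, v))
    f = lambda r: 2.0 * (E - phi_dm(r, rho0, rs, gamma)) - (L / r)**2
    if f(1.0e-6) > 0.0:          # orbit plunges (numerically) to the centre
        return 0.0
    return brentq(f, 1.0e-6, r0)

def refine(particles, shell_edges, shell_mass, rho0, rs, gamma):
    # particles: list of (x, v, m); shell_edges: outer radii of the shells, ascending;
    # shell_mass[j]: DM particle mass used inside shell j
    out = []
    for x, v, m in particles:
        j = np.searchsorted(shell_edges, pericentre(x, v, rho0, rs, gamma))
        n = int(round(m / shell_mass[j]))
        r = np.linalg.norm(x)
        vr = np.dot(x, v) / r
        vt = np.sqrt(max(np.dot(v, v) - vr**2, 0.0))
        for _ in range(n):
            u = np.random.normal(size=3); u /= np.linalg.norm(u)      # random radial direction
            t = np.cross(u, np.random.normal(size=3)); t /= np.linalg.norm(t)
            # same radius and radial velocity, random tangential direction, same kinetic energy
            out.append((r * u, vr * u + vt * t, shell_mass[j]))
    return out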
I use the simulation code ASURA <cit.>,[ASURA is an N-body/smoothed particle hydrodynamics (SPH) code, although this study only uses the N-body part.] in which a symmetric form of a Plummer softening kernel <cit.>, a parallel tree method with the computational accelerator GRAPE <cit.> and a second-order leap-frog integrator with individual timesteps are used. The number of stellar particles, N_⋆=5000, in my simulations may be too small to obtain a statistically certain density profile. To reinforce this point, I perform ten runs with the same initial condition but different random-number seeds.
§.§ Results
§.§.§ Evolution of the stellar density profiles
I obtain three surface density profiles observed from mutually perpendicular angles for each of the ten runs. Then, I compute a stacking of the thirty profiles of stellar surface density at the same time t for each case of the cuspy and the cored halo. The centre of the stellar distribution is defined to be the median position among all stellar particles in each snapshot.
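This measurement can be written compactly; a sketch (the radial binning is my own choice):

import numpy as np

def surface_density(pos, m_star=0.5, n_bins=20):
    # pos: (N, 3) stellar positions in pc; returns three profiles, one per viewing axis,
    # centred on the median position as described in the text
    c = np.median(pos, axis=0)
    edges = np.logspace(-1, 2, n_bins + 1)                  # 0.1 - 100 pc
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    profiles = []
    for i, j in ((0, 1), (0, 2), (1, 2)):
        R = np.hypot(pos[:, i] - c[i], pos[:, j] - c[j])    # projected radius
        profiles.append(m_star * np.histogram(R, bins=edges)[0] / area)
    return edges, np.array(profiles)                        # M_sun pc^-2

# stacking: collect the 3 x 10 profiles per output time, then take the mean and
# the 16th/84th percentiles bin by bin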
Fig. <ref> shows the stackings of the stellar surface density profiles in the cuspy (blue) and the cored (red) DM haloes at t=4, 7 and 10 Gyr. The shaded regions indicate the ranges of the upper and lower 1σ deviations of the stackings. In the DM cusp, the stellar density profile clearly demonstrates the emergence of a stellar cusp after t=7 Gyr; the density slope becomes remarkably steeper in R≲2 pc than that in R≳2 pc. On the other hand, such a stellar cusp does not emerge in the cored DM halo, although the outer density slope of the stars in R≳2 pc is similar to that in the cuspy DM halo, which is nearly exponential with radius. From this result, it can be seen that a DM cusp can generate a stellar cusp by DF even if the stellar density profile is initially flat. In the N-body simulations, the stellar cusps have masses of 33.8^+4.5_-5.3 M_⊙ within R<2 pc, which corresponds to 1.4 per cent of the total stellar mass. Additionally, the difference between the two cases is significant in spite of the same σ_ h, which means that the two halo models would be considered similar in observations. The emergence and the absence of stellar cusps in the cuspy and cored DM haloes are approximately consistent with the results of my orbital integration models using the analytic DF formulae (Figs <ref> and <ref>).
Fig. <ref> shows the evolution of R_ h and σ_ h during the N-body simulations. The stellar half-mass radii R_ h decrease slightly: by ≃0.5 and 1 pc in the cored and cuspy DM haloes, respectively. The LOSV dispersions σ_ h within R_ h are almost constant even after the emergence of the stellar cusps. This means that DF is not effective for most stars around the half-mass radius, although the central regions in r≪ R_ h are significantly affected.
§.§.§ Evolution of the DM density profiles
As I showed above, DM exerts DF on stars and can cause a low-mass compact UFD to have a stellar cusp if it has a cuspy DM halo. On the other hand, the DM particles can be kinematically heated by the stars spiraling into the centre, as the back-reaction of DF. Previous studies have shown that a DM cusp can be disrupted or made shallower by objects spiraling into the centre <cit.>. <cit.> also demonstrated that a DM cusp disrupted by infalling objects can be revived if a central remnant of the infalling objects is sufficiently massive. Hence, it is also interesting to look into the evolution of the dark matter density profiles in my N-body simulations, i.e. whether the cuspy halo is still cuspy or cored after the creation of the stellar cusp.
Fig. <ref> shows DM density profiles in my N-body simulations of the cuspy halo model, in which I make a stacking of the ten runs. Here, the halo centre is defined to be the position of the particle that has the highest DM density in each snapshot. I use an SPH-like method to compute the local DM densities for the centring; a cubic spline kernel is applied to the 128 neighbouring DM particles. The figure indicates that the DM cusp in the initial state is significantly weakened at r≲1 pc at t=4 Gyr (orange). Eventually, the initial DM cusp is turned into a core extending to r≃2 pc at t=10 Gyr (green). This result means that the central DM is kinematically heated by infalling stars, and the DM density is decreased at r≲2 pc. The size of this region where DM is affected is consistent with the size of the stellar cusp in the N-body simulations (Fig. <ref>). Since the softening length of DM particles is ϵ_0=0.1 pc, the peaks of DM densities at r≃0.1 pc may be transient fluctuations.
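The density-based centring can be sketched as follows (Python); the smoothing length convention h = half the distance to the outermost of the 128 neighbours is one common choice and an assumption here, as the text does not specify it:

```python
import numpy as np
from scipy.spatial import cKDTree

def cubic_spline_W(r, h):
    """Standard 3D cubic spline (M4) kernel with compact support 2h."""
    q = r / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return w / (np.pi * h**3)

def halo_centre(pos, mass, n_ngb=128):
    """Position of the particle with the highest SPH-like local density."""
    dist, idx = cKDTree(pos).query(pos, k=n_ngb)
    h = 0.5 * dist[:, -1:]            # support radius 2h encloses the neighbours
    rho = (mass[idx] * cubic_spline_W(dist, h)).sum(axis=1)
    return pos[np.argmax(rho)]
```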
From the consistency of the sizes between the stellar cusps and the DM cores created, the size of a stellar cusp may be regulated by the DM density profile flattened by infalling stars. If this is the case, a larger number of stars falling into the centre can create a larger DM core and a broader stellar cusp. In the central region of a cuspy DM halo, the significance of DF basically depends on σ_ h.[F_ DF is almost independent of r_ s in the formula for cuspy haloes (Fig. <ref>).] In addition, the number of stars in the central region where stars can reach the centre by DF within ∼10 Gyr depends on the initial stellar distribution, i.e. M_⋆ and R_ h.
§ DISCUSSION AND SUMMARY
§.§ Summary and interpretation of the results
As I showed in Sections <ref> and <ref>, DF on stars against DM can largely alter the stellar density distribution if the galaxy has a cuspy DM distribution and a compact stellar component having R_ h≃20 pc and σ_ h≲3 km s^-1. The most important result obtained from my N-body simulations is that the DF by the DM cusp can cause the emergence of a stellar cusp in the galactic centre, R≲2 pc. On the other hand, if a DM halo has a core, DF is not efficient enough to generate such a stellar cusp, in spite of the same σ_ h, although the stellar density can be affected and increase slightly over a wide range, R≲10 pc.
The results mentioned above can be explained by the differences of density and velocity dispersion between a cuspy and a cored DM haloes. According to the analytic formulae, DF becomes stronger when a background density is higher and a velocity dispersion is lower. In a cuspy halo, DM density increases toward the centre, and velocity dispersion decreases (see Fig. <ref>), therefore DF becomes stronger towards the centre. In this case, orbital shrinkage by DF brings a star to an inner region where DF is even stronger: `DF shrivelling instability' <cit.>. On the other hand, in a cored halo, density and velocity dispersion are nearly constant in the central region, therefore DF drag force is approximately independent of radius; it actually decreases gently towards the centre (Fig. <ref>). This means that a cored halo is relatively stable against the DF shrivelling of stars.
Interpretation of the above results should be considered carefully. It should be noted that the presence of a stellar cusp in a low-mass compact UFD is not necessarily evidence proving a DM cusp. This is because we do not know the initial condition of the stellar density; a galaxy can create a stellar cusp at its birth even if its DM halo has a core. It could be said, however, that a stellar cusp is inevitable if a low-mass compact UFD has a DM cusp. In other words, if a low-mass compact UFD is observed to have no stellar cusp and to be old enough, it suggests that its DM halo would have a large core with a size of r_ s≳100 pc. I discuss UFDs in current observations on this point in Section <ref>.
§.§ The analytic formulae vs N-body simulations
It is interesting to compare the results of the analytic formulae with those of the N-body simulations, although it is not the main purpose of this study. Early studies using numerical simulations have argued that the Chandrasekhar formula assuming a Maxwellian velocity distribution and an invariant Λ can give a quite accurate estimate of DF force in various cases <cit.>. It was also reported, however, that the formula can be inaccurate in some specific cases; analyses and N-body simulations have shown that DF can be enhanced around a constant-density core, and then suppressed in the core <cit.>. These phenomena are inconsistent with predictions by the simplified Chandrasekhar formula. Various physical mechanisms for the deviation from the analytic formula have been proposed: orbital resonance between a massive and field particles <cit.>, a coherent velocity field among particles <cit.>, a non-Maxwellian velocity distribution <cit.>, a decrease of low-velocity particles <cit.> and inhomogeneity of the background density with a variable Λ <cit.>.
In the case of a DM cusp, the two analytic formulae predict similar DF forces in Fig. <ref>, and the results of my orbital integration models are qualitatively consistent with the N-body simulations. However, the stellar cusps in my N-body simulations have a size of R≃2 pc, which could be attributed to the weakened DM cusp shown in Fig. <ref>, where DF becomes less efficient. In addition, the DM density centre is not necessarily fixed onto the stellar centre in the simulations, and the offset between the centres can broaden the stellar cusp. Therefore, the broadness of the stellar cusp in the simulations does not necessarily mean inaccuracy of the DF modellings. However, the stellar density slopes outside the stellar cusp are steeper in the orbital integration models. Moreover, in my N-body simulations, the total masses of the stellar cusps within R=2 pc are about four times smaller than those predicted by the analytic DF modellings. On these points, both analytic formulae would be overestimating DF in the cuspy haloes.
In the case of a DM core, on the other hand, DF cannot generate a stellar cusp in either the orbital integration or the N-body models. However, there is a difference worthy of special mention: the simplified Chandrasekhar formula hardly changes the stellar density slopes, whereas the N-body simulations show a significant increase of the stellar densities over a wide range, R≲10 pc (Fig. <ref>). This result shows that the Chandrasekhar formula assuming a Maxwellian distribution and an invariable Λ underestimates DF in the cored halo, even though the classical formula overestimates it in the cuspy halo. It is noteworthy that using the Petts et al. formula with their tidal-stalling model can dramatically improve the reproducibility of the DF effect in the cored haloes, and the resulting stellar density profile is almost consistent with the N-body simulations (see the bottom panels of Fig. <ref> and <ref>). Thus, the DF modelling proposed by <cit.> seems to be more accurate than the simplified Chandrasekhar formula, although it may not be perfect yet in a DM cusp. Although it is beyond the scope of this study to investigate the physical reasons for the differences between the analytic formulae and my N-body simulations, I consider that the N-body simulations would be physically more credible than the analytic models.
§.§ Comparison with observations
Here, I discuss the validity of my models of low-mass compact UFDs and the results in comparison with current observations. First, it is still very difficult or impossible to determine masses and sizes of DM haloes of UFDs with accuracy in current observations. I have to note, therefore, that the parameters in my DM halo models may be somewhat arbitrary. Recent observational studies have argued that galaxies have a universal DM surface density, μ_ DM≡ρ_ DM,0r_ s, over quite a wide range of luminosity when they are assumed to have cored DM haloes <cit.>.[This universality can be explained by assuming the Faber-Jackson law for DM haloes <cit.>.] <cit.> and <cit.> derived μ_ DM=140^+80_-30 and 70±4 M_⊙ pc^-2 from their galaxy samples including some satellite galaxies of the Milky Way. Although it has to be noted that they assumed different models for their cored haloes, my cored DM models of equation (<ref>) have μ_ DM=122 and 176 M_⊙ pc^-2 for r_ s=125 and 250 pc, respectively, when σ_ h=1.5 km s^-1. Accordingly, my cored halo model with r_ s=125–250 pc and σ_ h=1.5 km s^-1 would be consistent with the observed universality of μ_ DM if it is extrapolated to extremely low-mass galaxies.
For my stellar model, I assume the uniform stellar mass of m_⋆=0.5 M_⊙. However, of course, stars generally have different masses according to their initial mass function and stellar evolution. Although stellar scattering by massive stars would not be efficient, since encounters between stars are expected to be rare in a low-mass compact UFD (see Appendix <ref>), mass segregation does occur on the same timescale as DF because of the mass-dependence of DF. Because more massive stars sink faster into the centre of a DM cusp, a stellar cusp would mainly consist of massive stars.[The massive objects are stellar remnants. Even the typical mass of white dwarfs is larger than 0.5 M_⊙.] The massive objects in the stellar cusp could be a heating source for less massive stars around it and prevent the less massive stars from falling into the centre. Thus, I have to note that my N-body simulations lack this effect.
The observed UFD most similar to my N-body model (R_ h≃20 pc, σ_ h=1.5 km s^-1 and log(M_⋆/M_⊙)=3.4) would be Draco II, which has R_ h∼19^+8_-6 pc, σ_ h=2.9±2.1 km s^-1, a total luminosity log(L_ V/L_⊙)=3.1±0.3 and an age ∼12 Gyr <cit.>. Although the most probable value of σ_ h observed is nearly twice that in my model, it is within the error range. If I adopt a stellar mass-to-luminosity ratio M/L_ V=2–3 M_⊙/L_⊙ for a metal-poor system of 12 Gyr from a simple stellar population model of <cit.>, the stellar mass of Draco II is approximately log(M_⋆/M_⊙)=3.1–3.9. At the Heliocentric distance of Draco II, 20±3 kpc <cit.>, the size of a stellar cusp expected from my N-body simulations, ≃2 pc, corresponds to ≃0.3 arcmin. Unfortunately, the cusp region is smaller than the size of the innermost bin of the stellar surface density profile shown in fig. 3 of <cit.>, therefore it might still be challenging for current observations to detect a stellar cusp in Draco II even if it is present. In addition, the observed number of stars belonging to Draco II may be too small to reveal a stellar cusp in the current observations. Although DM density profiles of UFDs may differ from one another even if they have the same sizes and LOSV dispersions, stacking the profiles of UFDs having similar and sufficiently small R_ h and σ_ h, as in Fig. <ref>, could improve the statistics for proving the absence of stellar cusps. Because of the faintness of UFDs as small as Draco II, observations are limited to close distances from the solar system: d≲30 kpc. It can be expected, however, that future observations will explore vaster regions and discover more such faint UFDs.
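As a quick sanity check of the quoted angular size, the small-angle conversion gives

```latex
\theta \simeq \frac{R_{\rm cusp}}{d}
      = \frac{2\,{\rm pc}}{20\,{\rm kpc}} = 10^{-4}\,{\rm rad}
      = 10^{-4}\times\frac{180\times 60}{\pi}\,{\rm arcmin}
      \approx 0.34\,{\rm arcmin},
```

consistent with the quoted ≃0.3 arcmin.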
§ ACKNOWLEDGMENTS
The author thanks the referee for his/her useful comments that helped improve
the article greatly, and Takayuki R. Saitoh for kindly providing the simulation code ASURA. This study was supported by World Premier International Research Center Initiative (WPI), MEXT, Japan and CREST, JST. The numerical computations presented in this paper were carried out on Cray XC30 at Center for Computational Astrophysics, National Astronomical Observatory of Japan.
Antonini F., Merritt D., 2012, , 745, 83
Arca-Sedda M., Capuzzo-Dolcetta R., 2014, , 785, 51
Arca-Sedda M., Capuzzo-Dolcetta R., 2017, , 464, 3060
Belokurov V., et al., 2009, , 397, 1748
Binney J., Tremaine S., 2008, Galactic Dynamics, Second Edition. Princeton Univ. Press, Princeton
Bontekoe T. R., van Albada T. S., 1987, , 224, 349
Chandrasekhar S., 1943, ApJ, 97, 255
Cole D. R., Dehnen W., Wilkinson M. I., 2011, , 416, 1118
de Blok W. J. G., 2010, Adv. Astron., 2010, 789293
Dehnen W., 1993, , 265, 250
Di Cintio A., Brook C. B., Dutton A. A., Macciò A. V., Obreja A., Dekel A., 2017, , 466, L1
Donato F., et al., 2009, , 397, 1169
Dosopoulou F., Antonini F., 2016, preprint (astro-ph/1611.06573)
Dubinski J., Carlberg R. G., 1991, , 378, 496
El-Badry K., Wetzel A., Geha M., Hopkins P. F., Kereš D., Chan T. K., Faucher-Giguère C.-A., 2016, , 820, 131
Gilmore G., Wilkinson M. I., Wyse R. F. G., Kleyna J. T., Koch A., Evans N. W., Grebel E. K., 2007, ApJ, 663, 948
Goerdt T., Moore B., Read J. I., Stadel J., 2010, , 725, 1707
Goerdt T., Moore B., Read J. I., Stadel J., Zemp M., 2006, MNRAS, 368, 1073
Governato F., et al., 2010, Nat, 463, 203
Gradshteyn I. S., Ryzhik I. M., Jeffrey A., Zwillinger D., 2007, Table of Integrals, Series, and Products
Hayashi K., Chiba M., 2012, , 755, 145
Hayashi K., Chiba M., 2015, , 803, L11
Hernandez X., 2016, , 462, 2734
Hernandez X., Gilmore G., 1998, , 297, 517
Hernquist L., 1990, , 356, 359
Homma D., et al., 2016, , 832, 21
Inoue S., 2009, MNRAS, 397, 709
Inoue S., 2011, , 416, 1181
Inoue S., Saitoh T. R., 2011, , 418, 2527
Ishiyama T., et al., 2013, , 767, 146
Just A., Khan F. M., Berczik P., Ernst A., Spurzem R., 2011, , 411, 653
Just A., Peñarrubia J., 2005, , 431, 861
Klypin A., Kravtsov A. V., Bullock J. S., Primack J. R., 2001, ApJ, 554, 903
Kormendy J., Freeman K. C., 2016, , 817, 84
Kroupa P., 2002, Science, 295, 82
Laevens B. P. M., et al., 2015, , 813, 44
Lin D. N. C., Tremaine S., 1983, , 264, 364
Makino J., 2004, PASJ, 56, 521
Maraston C., 2005, , 362, 799
Martin N. F., et al., 2016, , 458, L59
Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
Ogiya G., Mori M., 2014, , 793, 46
Oh S.-H., de Blok W. J. G., Brinks E., Walter F., Kennicutt Jr. R. C., 2011, AJ, 141, 193
Peñarrubia J., Ludlow A. D., Chanamé J., Walker M. G., 2016, , 461, L72
Petts J. A., Gualandris A., Read J. I., 2015, , 454, 3778
Petts J. A., Read J. I., Gualandris A., 2016, , 463, 858
Pontzen A., Governato F., 2012, , 421, 3464
Read J. I., Goerdt T., Moore B., Pontzen A. P., Stadel J., 2006, MNRAS, 373, 1451
Saitoh T. R., Daisaka H., Kokubo E., Makino J., Okamoto T., Tomisaka K., Wada K., Yoshida N., 2008, PASJ, 60, 667
Saitoh T. R., Daisaka H., Kokubo E., Makino J., Okamoto T., Tomisaka K., Wada K., Yoshida N., 2009, PASJ, 61, 481
Saitoh T. R., Makino J., 2009, ApJL, 697, L99
Saitoh T. R., Makino J., 2010, PASJ, 62, 301
Saitoh T. R., Makino J., 2012, , 17, 76
Saitoh T. R., Makino J., 2013, , 768, 44
Silva J. M., Lima J. A. S., de Souza R. E., Del Popolo A., Le Delliou M., Lee X.-G., 2016, , 5, 021
Simon J. D., et al., 2016, preprint (astro-ph/1610.05301)
Spano M., Marcelin M., Amram P., Carignan C., Epinat B., Hernandez O., 2008, MNRAS, 383, 297
Springel V., et al., 2008, MNRAS, 391, 1685
Tanikawa A., Yoshikawa K., Nitadori K., Okamoto T., 2013, , 19, 74
Willman B., et al., 2005, , 129, 2692
Zelnikov M. I., Kuskov D. S., 2016, , 455, 3597
Zemp M., Moore B., Stadel J., Carollo C. M., Madau P., 2008, , 386, 1543
§ ANALYTIC SOLUTION OF INTERACTION INTENSITY
Equation (<ref>) integrates the intensity of interactions with DM particles over the possible relative velocities and impact parameters. <cit.> have noted that their formula (equation <ref>) containing J(v_ DM) requires a double integral, which is quite expensive in numerical computations. For the sake of practical use of their formula, here I give the analytic solution of equation (<ref>).
By letting A ≡ v_⋆^2-v_ DM^2 and B ≡ b_ max^2/(G^2m_⋆^2), the indefinite integral of equation (<ref>) can be obtained as
j(V) = ∫(1+A/V^2)log(1+BV^4) dV
     = (V-A/V)log(BV^4+1) - 4V + (I_ 1 + Im(I_ 2))/(√(2)B^1/4) + C,
where C is an integration constant, and
I_ 1(V) = (A√(B)-1)log[(√(B)V^2+1-√(2)B^1/4V)/(√(B)V^2+1+√(2)B^1/4V)],
I_ 2(V) = (A√(B)+1)log[(√(B)V^2-1-√(2)iB^1/4V)/(√(B)V^2-1+√(2)iB^1/4V)].
Furthermore, Im(I_ 2) is transformed as follows,[Im[log(x+iy)]=arctan(y/x) <cit.>.]
Im(I_ 2) = -2(A√(B)+1)arctan[√(2)B^1/4V/(√(B)V^2-1)].
Eventually, the definite integral, equation (<ref>), is
J(v_ DM) = j(v_⋆+v_ DM) - j(|v_⋆-v_ DM|).
Thus, since equation (<ref>) can be solved analytically, equation (<ref>) actually requires not a double but only a single integration in the numerical computations.
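A direct transcription of this solution into code, together with a quadrature cross-check, could read as follows (Python; the test values are arbitrary, chosen so that both integration limits lie above the branch point V = B^{-1/4} of the arctan term):

```python
import numpy as np
from scipy.integrate import quad

def j_analytic(V, A, B):
    """Closed-form antiderivative j(V) of (1 + A/V^2) * log(1 + B*V^4)."""
    sB, B4 = np.sqrt(B), B**0.25
    I1 = (A * sB - 1.0) * np.log((sB * V**2 + 1 - np.sqrt(2) * B4 * V)
                                 / (sB * V**2 + 1 + np.sqrt(2) * B4 * V))
    ImI2 = -2.0 * (A * sB + 1.0) * np.arctan(np.sqrt(2) * B4 * V / (sB * V**2 - 1.0))
    return (V - A / V) * np.log(B * V**4 + 1.0) - 4.0 * V + (I1 + ImI2) / (np.sqrt(2) * B4)

def J(v_star, v_dm, A, B):
    """J(v_DM) = j(v_star + v_dm) - j(|v_star - v_dm|)."""
    return j_analytic(v_star + v_dm, A, B) - j_analytic(abs(v_star - v_dm), A, B)

# cross-check against direct numerical quadrature
A, B, v_star, v_dm = -0.3, 50.0, 1.2, 0.7
num, _ = quad(lambda V: (1 + A / V**2) * np.log(1 + B * V**4),
              abs(v_star - v_dm), v_star + v_dm)
print(J(v_star, v_dm, A, B), num)   # the two values should agree
```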
§ COLLISIONLESSNESS OF THE SIMULATIONS
In my N-body simulations, every single star is resolved with a point-mass particle, and I should ascertain whether the gravitational interactions between the stellar particles are collisional or collisionless. Perturbations on a star by the others can be approximated as
Δ v_⊥^2 ≃ 8N_⋆[Gm_⋆/(r_⋆ v_⋆)]^2lnΛ_⋆,
where
Λ_⋆ ≡ b_⋆, max/b_⋆, min ∼ v_⋆^2r_⋆/(Gm_⋆),
with b_⋆, min∼ Gm_⋆/v_⋆^2 and b_⋆, max∼ r_⋆. Here, using a stellar mass fraction f_⋆, v_⋆^2∼ GN_⋆ m_⋆/(r_⋆ f_⋆), and the number of crossings required for dynamical relaxation is
n_ relax ≡ v_⋆^2/Δ v_⊥^2 = (N_⋆/f_⋆^2)/[8ln(N_⋆/f_⋆)].
f_⋆≃0.03 within r_⋆ in both the cuspy and the cored haloes. In the initial settings of my simulations, n_ relax=5.8×10^4. The crossing time is t_ cross∼ r_⋆/v_⋆≃ r_⋆/σ_ h=13.0 Myr. Eventually, I estimate the relaxation timescale to be approximately t_ relax≡ n_ relax× t_ cross∼10^3 Gyr, which is significantly longer than the age of the Universe, ∼10 Gyr. Therefore, the stellar interactions in my N-body simulations can be regarded as collisionless.
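Evaluating this estimate for the simulation parameters (a short Python check):

```python
import numpy as np

N_star, f_star = 5000, 0.03          # stellar particle number and mass fraction
t_cross_Myr = 13.0                   # crossing time ~ r_star / sigma_h

Lambda = N_star / f_star
n_relax = (N_star / f_star**2) / (8.0 * np.log(Lambda))
t_relax_Gyr = n_relax * t_cross_Myr * 1e-3
print(f"n_relax = {n_relax:.2e}, t_relax = {t_relax_Gyr:.0f} Gyr")
# -> n_relax ~ 5.8e4 crossings and t_relax ~ 7.5e2 Gyr, i.e. of order 10^3 Gyr
```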
|
http://arxiv.org/abs/1701.07535v2 | 20170126012036 | Stratified Splitting for Efficient Monte Carlo Integration | [
"Radislav Vaisman",
"Robert Salomone",
"Dirk P. Kroese"
] | math.ST | [
"math.ST",
"stat.TH"
] |
|
http://arxiv.org/abs/1701.08055v1 | 20170127140153 | Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes | [
"Franz J. Király",
"Zhaozhi Qian"
] | stat.ML | [
"stat.ML",
"cs.LG",
"stat.AP",
"stat.ME"
] |
Modelling Competitive Sports: Bradley-Terry-Élő Models for Supervised and On-Line Learning of Paired Competition Outcomes

Franz J. Király^1 <[email protected]>, Zhaozhi Qian^1,2 <[email protected]>

^1 Department of Statistical Science, University College London, Gower Street, London WC1E 6BT, United Kingdom
^2 King Digital Entertainment plc, Ampersand Building, 178 Wardour Street, London W1F 8FY, United Kingdom

January 27, 2017
============================================================================================================================
Prediction and modelling of competitive sports outcomes have received much recent attention,
especially from the Bayesian statistics and machine learning communities. In the real world setting of outcome prediction,
the seminal Élő update still remains, after more than 50 years, a valuable baseline
which is difficult to improve upon, though in its original form it is a heuristic and not a proper statistical “model”.
Mathematically, the Élő rating system is very closely related to the Bradley-Terry models, which are usually
used in an explanatory fashion rather than in a predictive supervised or on-line learning setting.
Exploiting this close link between these two model classes and some newly observed similarities,
we propose a new supervised learning framework with close similarities to logistic regression, low-rank matrix completion and neural networks.
Building on it, we formulate a class of structured log-odds models, unifying the desirable properties found in the
above: supervised probabilistic prediction of scores and wins/draws/losses, batch/epoch and on-line learning, as well as the possibility to incorporate features in the prediction, without having to sacrifice simplicity, parsimony of the Bradley-Terry models, or computational efficiency of Élő's original approach.
We validate the structured log-odds modelling approach in synthetic experiments and English Premier League outcomes,
where the added expressivity yields the best predictions reported in the state of the art, close to the quality of contemporary betting odds.
§ INTRODUCTION
§.§ Modelling and predicting competitive sports
Competitive sports refers to any sport that involves two teams or individuals
competing against each other to achieve higher scores. Competitive
team sports include some of the most popular and most watched games
such as football, basketball and rugby. Such sports are played both
in domestic professional leagues such as the National Basketball Association,
and international competitions such as the FIFA World Cup. For football
alone, there are over one hundred fully professional leagues in 71
countries globally. It is estimated that the Premier League, the top
football league in the United Kingdom, attracted a (cumulative) television
audience of 4.7 billion viewers in the last season <cit.>.
The outcome of a match is determined by a large number of factors.
Just to name a few, they might involve the competitive strength of
each individual player in both teams, the smoothness of collaboration
between players, and the team's strategy of playing. Moreover, the
composition of any team changes over the years, for example because players leave
or join the team. The team composition may also change within the
tournament season or even during a match because of injuries or penalties.
Understanding these factors is, by the prediction-validation nature of the scientific method,
closely linked to predicting the outcome of a pairing. By Occam's razor, the
factors which empirically help in prediction are exactly those that one may hypothesize to
be relevant for the outcome.
Since keeping track of all relevant factors is unrealistic,
of course one cannot expect a certain prediction of a competitive sports outcome.
Moreover, it is also unreasonable to believe that all factors can be measured or controlled,
hence it is reasonable to assume that unpredictable, or non-deterministic statistical “noise”
is involved in the process of generating the outcome (or subsume the unknowns as such noise).
A good prediction will, hence, not exactly predict the outcome, but will anticipate the “correct” odds more precisely.
The extent to which the outcomes are predictable may hence be considered as a surrogate quantifier of how much
the outcome of a match is influenced by “skill” (as surrogated by determinism/prediction), or by
“chance”[We expressly avoid use of the word “luck” as in vernacular use it often means “chance”,
jointly with the belief that it may be influenced by esoterical, magical or otherwise metaphysical means.
While in the suggested surrogate use, it may well be that the “chance” component of a model subsumes
possible points of influence which simply are not measured or observed in the data, an
extremely strong corpus of scientific evidence implies that these will not be metaphysical, only unknown
- two qualifiers which are obviously not the same, despite strong human tendencies to believe the contrary.]
(as surrogated by the noise/unknown factors).
Phenomena which cannot be specified deterministically
are in fact very common in nature. Statistics and probability theory provide ways
to make inference under randomness. Therefore, modelling and predicting
the results of competitive team sports naturally falls into the area
of statistics and machine learning. Moreover, any interpretable predictive model
yields a possible explanation of what constitutes factors influencing the outcome.
§.§ History of competitive sports modelling
Research on modelling competitive sports has a long history. In its early
days, research was often closely related to sports betting or player/team ranking <cit.>.
The two most influential approaches are due to <cit.> and <cit.>.
The Bradley-Terry and Élő models allow estimation of player rating;
the Élő system additionally contains algorithmic heuristics to easily update a player's rank,
which have been in use for official chess rankings since the 1960s.
The Élő system is also designed to predict the odds of a player winning or losing to the opponent.
In contemporary practice, Bradley-Terry and Élő type models are broadly used in modelling of sports outcomes and ranking of players,
and it has been noted that they are very close mathematically.
In more recent days, relatively diverse modelling approaches originating from the Bayesian statistical framework
<cit.>,
and also some inspired by machine learning principles <cit.> have been applied for modelling competitive sports.
These models are more expressive and remove some of the Bradley-Terry and Élő models' limitations, though usually at the price of
interpretability, computational efficiency, or both.
A more extensive literature overview on existing approaches will be given later in Section <ref>, as literature
spans multiple communities and, in our opinion, a prior exposition of the technical setting
and simultaneous straightening of thoughts benefits the understanding and allows us
to give proper credit and context for the widely different ideas employed in competitive sports modelling.
§.§ Aim of competitive sports modelling
In literature, the study of competitive team sports may be seen to
lie between two primary goals. The first goal is to design models that make good predictions for future
match outcomes. The second goal is to understand the key factors that influence
the match outcome, mostly through retrospective analysis <cit.>.
As explained above, these two aspects are intrinsically connected,
and in our view they are the two
facets of a single problem: on one hand, proposed influential factors
are only scientifically valid if confirmed by falsifiable
experiments such as predictions on future matches. If the predictive
performance does not increase when information about such factors
enters the model, one should conclude by Occam's razor that these factors are actually
irrelevant[... to distinguish/characterize the observations, which in some cases may
plausibly pertain to restrictions in set of observations, rather than to causative relevance.
Hypothetical example: age of football players may be identified as unimportant for the outcome -
which may plausibly be due to the fact that the data contained no players of ages 5 or 80, say,
as opposed to player age being unimportant in general. Rephrased, it is only unimportant for cases
that are plausible to be found in the data set in the first place.].
On the other hand, it is plausible to assume that
predictions are improved by making use of relevant factors (also known
as “features”) as these become available, for example because they
are capable of explaining unmodelled random effects (noise). In light
of this, the main problem considered in this work is the
(validable and falsifiable) prediction problem, which
in machine learning terminology is also known as the supervised learning task.
§.§ Main questions and challenges in competitive sports outcomes prediction
Given the above discussion, the major challenges may be stated as follows:
On the methodological side, what are suitable models
for competitive sports outcomes? No current model is at the same time
interpretable, easily computable, able to use feature information on the teams/players,
and able to predict scores or ternary outcomes.
It is an open question how to achieve this in the best way, and this manuscript
attempts to highlight a possible path.
The main technical difficulty lies in the fact that off-the-shelf methods do not apply
due to the structured nature of the data:
unlike in individual sports such as running and swimming where the outcome
depends only on the given team, and where the prediction task may
be dealt with classical statistics and machine learning technology
(see <cit.> for a discussion of this in the context of running),
in competitive team sports the outcome may
be determined by potentially complex interactions between two opposing teams.
In particular, the performance of any team is not measured directly using a simple metric,
but only in relation to the opposing team's performance.
On the side of domain applications, which in this manuscript is Premier League football,
it is of great interest to determine the relevant factors determining the outcome,
the best way to predict, and which ranking systems are fair and appropriate.
All these questions are related to predictive modelling, as well as the availability of
suitable amounts of quality data. Unfortunately, the scarcity of features available
in systematic presentation places a hurdle to academic research in competitive team sports, especially
when it comes to assessing important factors such as team member characteristics,
or strategic considerations during the match.
Moreover, closely linked is also the question to which extent the outcomes are determined by
“chance” as opposed to “skill”. If, at one hypothetical extreme, results proved to be completely
unpredictable, there would be no empirical evidence to distinguish the matches from a game of chance
such as flipping a coin. On the other hand, importance of a measurement for predicting would strongly
suggest its importance for winning (or losing), though without an experiment not necessarily a causative link.
We attempt to address these questions in the case of Premier League football within the confines of readily available data.
§.§ Main contributions
Our main contributions in this manuscript are the following:
(i) We give what we believe to be the first comprehensive literature review of state-of-art competitive sports modelling that comprises the multiple communities (Bradley-Terry models, Élő type models, Bayesian models, machine learning) in which research so far has been conducted mostly separately.
(ii) We present a unified Bradley-Terry-Élő model which combines the statistical rigour of the Bradley-Terry models with fitting and update strategies similar to that found in the Élő system. Mathematically only a small step, this joint view is essential in a predictive/supervised setting as it allows efficient training and application in an on-line learning situation. Practically, this step solves some problems of the Élő system (including ranking initialization and choice of K-factor), and establishes close relations to logistic regression, low-rank matrix completion, and neural networks.
(iii) This unified view on Bradley-Terry-Élő allows us to introduce classes of joint extensions, the structured log-odds models, which unites desirable properties of the extensions found in the disjoint communities: probabilistic prediction of scores and wins/draws/losses, batch/epoch and on-line learning, as well as the possibility to incorporate features in the prediction, without having to sacrifice structural parsimony of the Bradley-Terry models, or simplicity and computational efficiency of Élő's original approach.
(iv) We validate the practical usefulness of the structured log-odds models in synthetic experiments and in
answering domain questions on English Premier League data, most prominently on the importance of features, fairness of the ranking,
as well as on the “chance”-“skill” divide.
§.§ Manuscript structure
Section <ref> gives an overview of the mathematical setting in competitive sports prediction.
Building on the technical context, Section <ref> presents a more extensive review of the literature related to the prediction problem
of competitive sports, and introduces a joint view on Bradley-Terry and Élő type models.
Section <ref> introduces the structured log-odds models, which are validated in
empirical experiments in Section <ref>.
Our results and possible future directions for research are discussed in section <ref>.
§.§ Authors' contributions
This manuscript is based on ZQ's MSc thesis, submitted September 2016 at University College London, written under supervision of FK.
FK provided the ideas of re-interpretation and possible extensions of the Élő model.
Literature overview is jointly due to ZQ and FK, and in parts follows some very helpful pointers by I. Kosmidis (see below).
Novel technical ideas in Sections <ref> to <ref>,
and experiments (set-up and implementation) are mostly due to ZQ.
The present manuscript is a substantial re-working of the thesis manuscript, jointly done by FK and ZQ.
§.§ Acknowledgements
We are thankful to Ioannis Kosmidis for comments on an earlier form of the manuscript,
for pointing out some earlier occurrences of ideas presented in it but not given proper credit,
as well as relevant literature in the “Bradley-Terry” branch.
§ THE MATHEMATICAL-STATISTICAL SETTING
This section formulates the prediction task in competitive sports and fixes notation,
casting it as an instance of supervised learning with several non-standard structural aspects of relevance.
§.§ Supervised prediction of competitive outcomes
We introduce the mathematical setting for outcome prediction in competitive team sports.
As outlined in the introductory Section <ref>, three crucial features need to be taken into account in this setting:
(i) The outcome of a pairing cannot be exactly predicted prior to the game, even with perfect knowledge of all determinants.
Hence it is preferable to predict a probabilistic estimate for all possible match outcomes (win/draw/loss) rather than deterministically
choosing one of them.
(ii) In a pairing, two teams play against each other, one as a home team and the other as the away or guest team. Not all pairs may play against each other, while others may play multiple times. As a mathematically prototypical (though inaccurate) sub-case one may consider all pairs playing exactly once, which gives the observations an implicit matrix structure (row = home team, column = away team). Outcome labels and features crucially depend on the teams constituting the pairing.
(iii) Pairings take place over time, and the expected outcomes are plausibly expected to change with (possibly hidden) characteristics of the teams.
Hence we will model the temporal dependence explicitly to be able to take it into account when building and checking predictive strategies.
§.§.§ The Generative Model.
Following the above discussion, we will fix a generative model as follows:
as in the standard supervised learning setting, we will consider a generative joint random variable (X,Y) taking values in
𝒳×𝒴, where 𝒳 is the set of features (or covariates, independent variables) for each pairing, while 𝒴 is the set of labels (or outcome variables, dependent variables).
In our setting, we will consider only the cases 𝒴 = {win, lose} and 𝒴 = {win, lose, draw},
in which case an observation from 𝒴 is a so-called match outcome, as well as the case 𝒴 = ℕ^2, in which case an observation
is a so-called final score (in which case, by convention, the first component of the score is that of the home team), or the case of
score differences where 𝒴 = ℤ (in which case, by convention, a positive number is in favour of the home team).
From the official rule set of a game (such as football), the match outcome is uniquely determined by a score or score difference.
As all the above sets are discrete, predicting will amount to supervised classification
(the score difference problem may be phrased as a regression problem, but we will abstain from doing so for technical reasons that become apparent later).
The random variable X and its domain shall include information on the teams playing, as well as on the time of the match.
We will suppose there is a set 𝒬 of teams, and for i,j∈𝒬 we will denote by (X_ij,Y_ij) the random variable (X,Y)
conditioned on the knowledge that i is the home team, and j is the away team.
Note that information in X_ij can include any knowledge on either single team i or j, but also information corresponding uniquely to the pairing (i,j).
We will assume that there are Q:=#𝒬 teams, which means that the X_ij and Y_ij may be arranged in (Q× Q) matrices each.
Further there will be a set 𝒯 of time points at which matches are observed. For t∈𝒯 we will denote by (X(t),Y(t)) or (X_ij(t),Y_ij(t))
an additional conditioning that the outcome is observed at time point t.
Note that the indexing X_ij(t) and Y_ij(t) formally amounts to a double conditioning and could be written as X|I = i, J = j, T = t and Y|I = i, J = j, T = t, where I,J,T are random variables denoting the home team, the away team, and the time of the pairing. Though we do believe that the index/bracket notation is easier to carry through and to follow (including an explicit mirroring of the the “matrix structure”) than the conditional or “graphical models” type notation, which is our main reason for adopting the former and not the latter.
§.§.§ The Observation Model.
By construction, the generative random variable (X,Y) contains all information on having any pairing playing at any time.
However, observations in practice will concern two teams playing at a certain time,
hence they will only include independent samples of (X_ij(t),Y_ij(t)) for some i,j∈𝒬, t∈𝒯, and never full observations of (X,Y), which can be interpreted as a latent variable.
Note that the observations can be, in principle, correlated (or unconditionally dependent) if the pairing (i,j) or the time t is not made explicit (by the conditioning which is implicit in the indices i,j,t).
An important aspect of our observation model will be that whenever a value of X_ij(t) or Y_ij(t) is observed, it will always come together with the information of the playing teams (i,j)∈𝒬^2 and the time t∈𝒯 at which it was observed. This fact will be implicitly made use of in the description of algorithms and validation methodology.
(Formally, this could be achieved by explicitly exhibiting/adding 𝒬×𝒬×𝒯 as a Cartesian factor of the sampling domains 𝒳 or 𝒴, which we will not do for reasons of clarity and readability.)
Two independent batches of data will be observed in the exposition. We will consider:
𝒟 := {(X^(1)_i_1j_1(t_1),Y^(1)_i_1j_1(t_1)),…,(X^(N)_i_Nj_N(t_N),Y^(N)_i_Nj_N(t_N))}
𝒟^* := {(X^(1*)_i^*_1j^*_1(t^*_1),Y^(1*)_i^*_1j^*_1(t^*_1)),…,(X^(M*)_i^*_Mj^*_M(t^*_M),Y^(M*)_i^*_Mj^*_M(t^*_M))}
where (X^(i),Y^(i)) and (X^(i*),Y^(i*)) are i.i.d. samples from (X,Y).
Note that unfortunately (from a notational perspective), one cannot omit the superscripts κ as in X^(κ) when defining the samples, since the figurative “dies” should be cast anew for each pairing taking place. In particular, if all games would consist of a single pair of teams playing where the results are independent of time, they would all be the same (and not only identically distributed) without the super-index, i.e., without distinguishing different games as different samples from (X,Y).
§.§.§ The Learning Task.
As set out in the beginning, the main task we will be concerned with is predicting future outcomes given past outcomes and features, observed from the process above. In this work, the features will be assumed to change over time slowly. It is not our primary goal to identify the hidden features in (X,Y), as they are never observed and hence not accessible as ground truth which can validate our models. However, these will be of secondary interest and considered empirically validated by a well-predicting model.
More precisely, we will describe methodology for learning and validating predictive models of the type
f: 𝒳×𝒬×𝒬×𝒯→𝒫(𝒴),
where 𝒫(𝒴) is the set of (discrete probability) distributions on 𝒴.
That is, given a pairing (i,j) and a time point t at which the teams i and j play, and information of type x=X_ij(t), make a probabilistic prediction f(x,i,j,t) of the outcome.
Most algorithms we discuss will not use added information in 𝒳, hence will be of type f:𝒬×𝒬×𝒯→𝒫(𝒴). Some will disregard the time in 𝒯. Indeed, the latter algorithms are to be considered scientific baselines above which any algorithm using information in 𝒳 and/or 𝒯 has to improve.
The models f above will be learnt on a training set 𝒟, and validated on an independent test set 𝒟^* as defined above.
In this scenario, f will be a random variable which may implicitly depend on 𝒟 but will be independent of 𝒟^*.
The learning strategy - which is f depending on 𝒟 - may take any form and is considered in a full black-box sense.
In the exposition, it will in fact take the form of various parametric and non-parametric prediction algorithms.
The goodness of such an f will be evaluated by a loss
L: 𝒫(𝒴)×𝒴→ℝ which compares a probabilistic prediction to the true observation.
The best f will have a small expected generalization loss
ε (f|i,j,t) := 𝔼_(X,Y)[L(f(X_ij(t),i,j,t),Y_ij(t))]
at any future time point t and for any pairing i,j.
Under mild assumptions, we will argue below that this quantity is estimable from the observed data and only mildly dependent on t,i,j.
However, a good form for L is not a priori clear. Also, it is unclear under which assumptions ε (f|t) is estimable, due to the conditioning on (i,j,t) in the training set. These special aspects of the competitive sports prediction setting will be addressed in the subsequent sections.
§.§ Losses for probabilistic classification
In order to evaluate different models, we need a criterion to measure
the goodness of probabilistic predictions. The most common error metric
used in supervised classification problems is the prediction accuracy.
However, the accuracy is often insensitive to probabilistic predictions.
For example, on a certain test case model A predicts a win probability of 60%, while model B predicts a win probability of 95%. If the
actual outcome is not a win, both models are wrong. In terms of prediction
accuracy (or any other non-probabilistic metric), they are equally wrong because both of them made
one mistake. However, model A should be considered better than model B, since it assigned the higher probability to the true outcome.
Similarly, if a large number of outcomes of a fair coin toss have been observed as training data, a model that predicts 50% for
both outcomes on any test data point should be considered more accurate than a model that predicts 100% for either outcome 50% of the time.
There exist two commonly used criteria that take into account the probabilistic nature of predictions, which we adopt. The first one is the Brier score (Equation <ref> below)
and the second is the log-loss or log-likelihood loss (Equation <ref> below).
Both losses compare a distribution to an observation, hence mathematically have the signature of a function 𝒫(𝒴)×𝒴→ℝ.
By (very slight) abuse of notation, we will identify a distribution on a (discrete) 𝒴 with its probability mass function; for a distribution p and y∈𝒴, we write p_y for the mass on the observation y (= the probability to observe y in a random experiment following p).
With this convention, the log-loss L_ℓ and the Brier loss L_Br are defined as follows:
L_ℓ: (p,y)↦ - log p_y
L_Br: (p,y)↦ (1-p_y)^2 + ∑_y'∈𝒴∖{y} p_y'^2
The log-loss and the Brier loss functions have the following properties:
(i) the Brier score is only defined on a 𝒴 with an addition/subtraction and a norm defined.
This is not necessarily the case in our setting, where it may be that 𝒴 = {win, lose, draw}.
In literature, this is often identified with 𝒴 = {1,0,-1}, though this identification is arbitrary, and the Brier score may change depending on which numbers are used.
On the other hand, the log-loss is defined for any 𝒴 and remains unchanged under any renaming or renumbering of a discrete 𝒴.
(ii) For a joint random variable (X,Y) taking values in 𝒳×𝒴, it can be shown that the expected losses 𝔼[L_ℓ(f(X),Y)] and 𝔼[L_Br(f(X),Y)] are minimized by the “correct” prediction f: x↦(p_y = P(Y=y|X=x))_y∈𝒴.
The two loss functions are usually introduced as empirical losses on a test set 𝒟^*, i.e.,
ε_𝒟^*(f) = 1/#𝒟^*∑_(x,y)∈𝒟^* L_*(f(x),y).
The empirical log-loss is the (negative log-)likelihood of the test predictions.
The empirical Brier loss, usually called the “Brier score”, is a straightforward translation of the mean squared error
used in regression problems to the classification setting, as the expected
mean squared error of predicted confidence scores.
However, in certain cases, the Brier score is hard to interpret and may behave in
unintuitive ways <cit.>, which may partly be seen as a phenomenon caused
by above-mentioned lack of invariance under class re-labelling.
Given this and the interpretability of the empirical log-loss as a likelihood,
we will use the log-loss as principal evaluation
metric in the competitive outcome prediction setting.
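For concreteness, both losses can be evaluated directly from a predicted distribution and an observed label (Python; the probabilities are made-up example values):

```python
import numpy as np

def log_loss(p, y):
    """Log-loss L_l(p, y) = -log p_y for a discrete predicted distribution.
    p maps each outcome to its predicted probability; y is the observed outcome."""
    return -np.log(p[y])

def brier_loss(p, y):
    """Brier loss L_Br(p, y) = (1 - p_y)^2 + sum of p_z^2 over z != y."""
    return (1.0 - p[y])**2 + sum(q**2 for z, q in p.items() if z != y)

# two example predictions, evaluated against an observed home win
p_A = {"win": 0.60, "draw": 0.25, "lose": 0.15}
p_B = {"win": 0.95, "draw": 0.03, "lose": 0.02}
for name, p in [("A", p_A), ("B", p_B)]:
    print(name, log_loss(p, "win"), brier_loss(p, "win"))
```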
§.§ Learning with structured and sequential data
The dependency of the observed data on pairing and time makes the prediction task at hand non-standard.
We outline the major consequences for learning and model validation, as well as the implicit assumptions which allow us to tackle these.
We will do this separately for the pairing and the temporal structure, as these behave slightly differently.
§.§.§ Conditioning on the pairing
Match outcomes are observed for given pairings (i,j), that is, each feature-label-pair will be of form (X_ij,Y_ij), where as above the subscripts denote conditioning on the pairing. Multiple pairings may be observed in the training set, but not all; some pairings may never be observed.
This has consequences for both learning and validating models.
For model learning, it needs to be ensured that the pairings to be predicted can be predicted from the pairings observed. In other words, the label Y^*_ij in the test set that we want to predict is (in a practically substantial way) dependent on the training set 𝒟 = {(X^(1)_i_1j_1,Y^(1)_i_1j_1),…,(X^(N)_i_Nj_N,Y^(N)_i_Nj_N) }. Note that smart models will be able to predict the outcome of a pairing even if it has not been observed before, and even if it has, they will use information from other pairings to improve their predictions.
For various parametric models, “predictability” can be related to completability of a data matrix with Y_ij as entries. In section <ref>,
we will relate Élő type models to low-rank matrix completion algorithms; prediction can be understood as low-rank completion,
hence predictability corresponds to completability. Though, working out completability exactly is not the primary aim of this manuscript,
and for our data of interest, the English Premier League, all pairings are observed in any given year, so completability is not an issue.
Hence we refer to <cit.> for a study of low-rank matrix completability. General parametric models may be treated along similar lines.
For model-agnostic model validation, it should hold that the expected generalization loss
ε (f|i,j) := 𝔼_(X,Y)[L(f(X_ij,i,j),Y_ij)]
can be well-estimated by empirical estimation on the test data. For league level team sports data sets, this can be achieved by having multiple years of data available.
Since even if not all pairings are observed, usually the set of pairings which is observed is (almost) the same in each year, hence the pairings will be similar in the training and test set if whole years (or half-seasons) are included.
Further we will consider an average over all observed pairings, i.e., we will compute the empirical loss on the test set 𝒟^* as
ε (f) := 1/#𝒟^*∑_(X_ij,Y_ij)∈𝒟^* L(f(X_ij,i,j),Y_ij)
By the above argument, the set of all observed pairings in any given year is plausibly modelled as similar, hence it is plausible to conclude that this empirical loss estimates some expected generalization loss
ε(f) := 𝔼_X,Y,I,J[L(f(X_IJ,I,J),Y_IJ)]
where I,J (possibly dependent) are random variables that select teams which are paired.
Note that this type of aggregate evaluation does not exclude the possibility that predictions for single teams (e.g., newcomers or after re-structuring) may be inaccurate, but only that the “average” prediction is good. Further, the assumption itself may be violated if the whole league changes between training and test set.
§.§.§ Conditioning on time
As a second complication, match outcome data is gathered through time. The data
set might display temporal structure and correlation with time. Again, this has consequences for learning and validating the models.
For model learning, models should be able to intrinsically take into
account the temporal structure - though as a baseline, time-agnostic models should be tried.
A common approach for statistical models is to assume a temporal structure in the latent
variables that determine a team's strength. A different and somewhat
ad-hoc approach proposed by <cit.> is to assign
lower weights to earlier observations and estimate parameter by maximizing
the weighted log-likelihood function. For machine learning models,
the temporal structure is often encoded with handcrafted features.
Similarly, one may opt to choose a model that can be updated as time progresses.
A common ad-hoc solution is to re-train the model after a certain amount
of time (a week, a month, etc), possibly with temporal discounting, though there is no general consensus about
how frequently the retraining should be performed.
Further there are genuinely updating models, so-called on-line learning models, which update model
parameters after each new match outcome is revealed.
For model evaluation, the sequential nature of the data poses a severe restriction:
Any two data points were measured at certain time points, and one cannot assume that they are not correlated through time.
That such correlation exists is quite plausible in the domain application, as a team would be expected to perform more similarly at close time points than at distant time points.
Also, we would like to make sure that we fairly test the models for their prediction accuracy -
hence the validation experiment needs to mimic the “real world” prediction process, in which the predicted outcomes will be in the temporal future of the training data.
Hence the test set, in a validation experiment that should quantify goodness of such prediction, also needs to be in the temporal future of the training set.
In particular, the common independence assumption that allows application of re-sampling strategies such as the K-fold cross-validation method <cit.>,
which guarantees the expected loss to be estimated by the empirical loss, is violated. In the presence of temporal correlation,
the variance of the error metric may be underestimated, and the error metric itself will, in general, be mis-estimated.
Moreover, the validation method
will need to accommodate the fact that the model may be updated on-line
during testing. In literature, model-independent validation strategies for data
with temporal structure are largely an unexplored (since technically difficult) area. Nevertheless, developing
a reasonable validation method is crucial for scientific model assessment. A plausible
validation method is introduced in section
<ref> in detail.
It follows similar lines as the often-seen “temporal cross-validation” where training/test splits are always temporal, i.e., the training data points are in the temporal past of the test data points, for multiple splits. An earlier occurrence of such a validation strategy may be found in <cit.>.
This strategy comes without strong estimation guarantees and is part heuristic; the empirical loss will estimate the generalization loss as long as statistical properties do not change as time shifts forward, for example under stationarity assumptions. While this implicit assumption may be plausible for the English Premier League, this condition is routinely violated in financial time series, for example.
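A minimal sketch of such a temporal split scheme (Python; the equal-width folds and the fit/evaluate helpers are illustrative assumptions, not the exact scheme of the later section):

```python
def temporal_splits(matches, n_folds=5):
    """Yield (train, test) splits in which every training match strictly
    precedes every test match; `matches` is assumed sorted by time."""
    fold = len(matches) // (n_folds + 1)
    for k in range(1, n_folds + 1):
        yield matches[:k * fold], matches[k * fold:(k + 1) * fold]

# usage: average the out-of-sample log-loss over all splits, e.g.
# losses = [evaluate(fit(train), test) for train, test in temporal_splits(data)]
```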
§ APPROACHES TO COMPETITIVE SPORTS PREDICTION
In this section, we give a brief overview over the major approaches to prediction in competitive sports found in literature. Briefly, these are:
(a) The Bradley-Terry models and extensions.
(b) The Élő model and extensions.
(c) Bayesian models, especially latent variable models and/or graphical models for the outcome and score distribution.
(d) Supervised machine learning type models that use domain features for prediction.
(a) The Bradley-Terry model is the most influential statistical approach to ranking based on competitive
observations <cit.>.
With its original applications in psychometrics, the goal of the class of Bradley-Terry models is to
estimate a hypothesized rank or skill level from observations of pairwise competition outcomes (win/loss).
Literature in this branch of research is, usually, primarily concerned not with prediction, but estimation of
a “true” rank or skill, existence of which is hypothesized, though prediction
of (binary) outcome probabilities or odds is well possible within the paradigm.
A notable exception is the work of <cit.> where the problem is in essence formulated
as supervised prediction, similar to our work.
Mathematically, Bradley-Terry models may be seen as log-linear two-factor models that, at the state of the art, are usually
estimated by (analytic or semi-analytic) likelihood maximization <cit.>.
Recent work has seen many extensions of the Bradley-Terry models, most notably for modelling of ties <cit.>,
making use of features <cit.>, or for explicitly modelling the time dependency of skill <cit.>.
(b) The Élő system is one of the earliest attempts to model competitive sports
and, due to its mathematical simplicity, well-known and widely-used by practitioners <cit.>.
Historically, the Élő system is used for chess rankings, to assign a rank score to chess players.
Mathematically, the Élő system only uses information about the historical match outcomes. The Élő
system assigns to each team a parameter, the so-called Élő rating.
The rating reflects a team's competitive skills: the team with higher
rating is stronger.
As such, the Élő system is, originally, not a predictive model or a statistical model in the usual sense.
However, the Élő system also gives a probabilistic prediction for the binary match outcome based
on the ratings of two teams.
After what appears to have been a period of parallel development that is still partly ongoing,
it has been recently noted by members of the Bradley-Terry community that the Élő prediction heuristic
is mathematically equivalent to the prediction via the simple Bradley-Terry
model <cit.>.
The Élő ratings are learnt via an update rule that is applied whenever a new outcome is observed.
This suggested update strategy is inherently algorithmic and will later be shown to be closely related to
on-line learning strategies for neural networks; to our knowledge it appears first in Élő's work
and is not found in the Bradley-Terry strain.
(c) The Bayesian paradigm offers a natural framework to model match outcomes probabilistically,
and to obtain probabilistic predictions as the posterior predictive distribution.
Bayesian parametric models also allow researchers to inject expert
knowledge through the prior distribution. The prediction function
is naturally given by the posterior distribution of the scores, which
can be updated as more observations become available.
Often, such models explicitly model not only the outcome but also the score distribution,
such as Maher's model <cit.> which models outcome scores
based on independent Poisson random variables with team-specific means.
<cit.>
extend Maher's model by introducing a correlation effect between
the two final scores.
More recent models also include dynamic components to model
temporal dependence <cit.>.
Most models of this type only use historical match outcomes as features,
see <cit.> for an exception.
(d) More recently, the method-agnostic supervised machine learning paradigm has been
applied to prediction of match outcomes <cit.>.
The main rationale in this branch of research is that the best model is not known, hence
a number of off-shelf predictors are tried and compared in a benchmarking experiment.
Further, these models are able to make use of features other than previous outcomes easily.
However, usually, the machine learning models are trained in-batch, i.e., not following a dynamic update or on-line learning strategy,
and they need to be re-trained periodically to incorporate new observations.
In this manuscript, we will re-interpret the Élő model and its update rule as
the simplest case of a structured extension of predictive logistic (or generalized linear) regression models, and the canonical gradient ascent update of its likelihood
- hence, in fact, giving it a parametric form not entirely unlike the models mentioned in (c).
In the subsequent sections, this will allow us to complement it with the beneficial properties of the machine learning approach (d),
most notably the addition of possibly complex features, paired with the Élő update rule, which can be shown to generalize to an on-line update strategy.
A more detailed literature and technical overview is given in the subsequent sections.
The Élő model and its extensions, as well as its novel parametric interpretation, are reviewed in Section <ref>.
Section <ref>
reviews other parametric models for predicting final scores. Section <ref> reviews the use of
machine learning predictors and feature engineering for sports prediction.
§.§ The Bradley-Terry-Élő models
This section reviews the Bradley-Terry models, the Élő system, and closely related variants.
We give the above-mentioned joint formulation, following the modern rationale of considering as a “model” not only a generative specification, but also algorithms for training, predicting and updating its parameters.
As the first seems to originate with the work of <cit.>, and the second in the on-line update heuristic of <cit.>,
we argue that for giving proper credit, it is probably more appropriate to talk about Bradley-Terry-Élő models
(except in the specific hypothesis testing scenario covered in the original work of Bradley and Terry).
Later, we will attempt to understand the Élő system as an on-line update of a structured logistic odds model.
§.§.§ The original formulation of the Élő model
We will first introduce the original version of the Élő model, following <cit.>.
As stated above, its original form which is still applied for determining the official chess ratings (with minor domain-specific modifications),
is neither a statistical model nor a predictive model in the usual sense.
Instead, the original version is centered around the ratings θ_i for each team i.
These ratings are updated via the Élő model rule, which we explain (for sake of clarity) for the case of no draws:
After observing a match between (home) team i and (away) team j, the ratings of teams i and j are updated as
θ_i ← θ_i+K[S_ij-p_ij]
θ_j ← θ_j-K[S_ij-p_ij]
where K, often called “the K factor”, is an arbitrarily chosen constant, that is, a model parameter usually set per hand.
S_ij is 1 if team/player i has been observed to win, and 0 otherwise.
Further, p_ij is the probability of i winning against j
which is predicted from the ratings prior to the update by
p_ij=σ(θ_i-θ_j)
where σ: x↦(1+exp(-x))^-1 is the logistic function (which has a sigmoid shape, hence is also often called “the sigmoid”).
Sometimes a home team parameter h is added to account for home advantage, and the predictive equation becomes
p_ij=σ(θ_i-θ_j + h)
Élő's update rule (Equation <ref>) makes sense intuitively because the term (S_ij-p_ij)
can be thought of as the discrepancy between what is expected, p_ij,
and what is observed, S_ij. The update will be larger if the
current parameter setting produces a large discrepancy. However, a concise
theoretical justification has not been articulated in literature.
In fact, Élő himself commented that “the logic of the equation
is evident without algebraic demonstration” <cit.>
- which may be true in his case, but is not satisfactory
in an applied scientific or a theoretical/mathematical sense.
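To make the rule concrete, the following is a minimal sketch of the prediction and update equations in Python; the rating values, K and h below are illustrative and not calibrated to any data set.

```python
import math

def elo_predict(theta_i, theta_j, h=0.0):
    # p_ij = sigma(theta_i - theta_j + h), the predicted home win probability
    return 1.0 / (1.0 + math.exp(-(theta_i - theta_j + h)))

def elo_update(theta_i, theta_j, s_ij, K=0.1, h=0.0):
    # one-outcome Elo update: shift both ratings by K times the residual
    p_ij = elo_predict(theta_i, theta_j, h)
    delta = K * (s_ij - p_ij)
    return theta_i + delta, theta_j - delta

# illustrative: home team (rating 0.5) beats away team (rating 0.2)
print(elo_update(0.5, 0.2, s_ij=1, K=0.1, h=0.1))
```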
As an initial issue, it has been noted that the whole model is invariant under joint re-scaling of the θ_i, and the parameters K,h, as well as under arbitrary choice of zero for the θ_i (i.e., adding of a fixed constant c∈ℝ to all θ_i).
Hence, fixed domain models will usually choose zero and scale arbitrarily. In chess rankings, for example, the
formula includes additional scaling constants of the form p_ij=(1+10^-(θ_i-θ_j)/400)^-1;
scale and zero are set through fixing some historical chess players'
rating, which happens to set the “interesting” range in the positive thousands[A common misunderstanding here is that no Élő ratings below zero may occur.
This is, in-principle, wrong, though it may be extremely unlikely in practice if the arbitrarily chosen zero is chosen low enough.].
One can show that there are no more parameter redundancies, hence scaling/zeroing turns out not to be a problem if kept in mind.
However, three issues are left open in this formulation:
(i) How the ratings for players/teams are determined who have never played a game before.
(ii) The choice of the constant/parameter K, the “K-factor”.
(iii) If a home parameter h is present, its size.
These issues are usually addressed in everyday practice by (more or less well-justified) heuristics.
The parametric and probabilistic supervised setting in the following sections yields more principled ways to address these issues:
step (i) will become unnecessary by pointing out a batch learning method;
the constant K in (ii) will turn out to be the learning rate in a gradient update,
hence it can be cross-validated or entirely replaced by a different strategy for learning the model.
Parameters such as h in (iii) will be interpretable as a logistic regression coefficient.
See for this the discussions in Sections <ref>, <ref> for (i),(ii), and Section <ref> for (iii).
§.§.§ Bradley-Terry-Élő models
As outlined in the initial discussion, the class of Bradley-Terry models introduced by <cit.> may be interpreted as a
proper statistical model formulation of the Élő prediction heuristic.
Despite their close mathematical vicinity, it should be noted that classically, Bradley-Terry and Élő models are usually applied and interpreted differently, and consequently fitted/learnt differently: while both models estimate a rank or score, the primary (historical) purpose of the Bradley-Terry model is to estimate the rank, while the Élő system is additionally intended to supply easy-to-compute updates as new outcomes are observed, a feature for which it has historically paid by a lack of mathematical rigour.
The Élő system is often invoked to predict future outcome probabilities, while the Bradley-Terry models usually do not see predictive use
(despite their capability to do so, and the mathematical equivalence of both predictive rules).
However, as mentioned above and as noted for example by <cit.>, a joint mathematical formulation can be found, and as we will show,
the different methods of training the model may be interpreted as variants of likelihood-based batch or on-line strategies.
The parametric formulation is quite similar to logistic regression models, or generalized linear models,
in that we will use a link function and define a model for the outcome odds.
Recall, the odds for a probability p are odds(p) := p/(1-p), and the logit function is logit: x↦ log odds(x) = log x - log(1-x)
(sometimes also called the “log-odds function” for obvious reasons).
A straightforward calculation shows that logit^(-1) = σ, or equivalently, σ(logit(x)) = x for any x, i.e., the logistic function is the inverse of the logit (and vice versa logit(σ(x)) = x by the symmetry theorem for the inverse function).
Hence we can posit the following two equivalent equations in latent parameters θ_i as definition of a predictive model:
p_ij = σ(θ_i-θ_j)
logit(p_ij) = θ_i-θ_j
That is, p_ij in the first equation is interpreted as a predictive probability; i.e., Y_ij∼ Bernoulli(p_ij).
The second equation interprets this prediction in terms of a generalized linear model with a response function that is linear in the θ_i.
We will write θ for the vector of θ_i; hence the second equation could also be written, in vector notation,
as logit(p_ij) = ⟨ e_i - e_j, θ⟩. Hence, in particular, the matrix with entries logit(p_ij) has rank (at most) two.
Fitting the above model means estimating its latent variables θ.
This may be done by considering the likelihood of the latent parameters θ_i given the training data.
For a single observed match outcome Y_ij, the log-likelihood of θ_i and θ_j is
ℓ (θ_i,θ_j|Y_ij) = Y_ijlog (p_ij) + (1-Y_ij)log (1-p_ij),
where the p_ij on the right hand side need to be interpreted as functions of θ_i,θ_j (namely, as in equation <ref>).
We call ℓ (θ_i,θ_j|Y_ij) the one-outcome log-likelihood as it is based on a single data point.
Similarly, if multiple training outcomes 𝒟 = {Y_i_1j_1^(1),…,Y_i_Nj_N^(N)} are observed, the log-likelihood of the vector θ is
ℓ (θ|𝒟) = ∑_k=1^N [Y^(k)_i_kj_k log (p_i_kj_k) + (1-Y_i_kj_k^(k)) log (1-p_i_kj_k)]
We will call ℓ (θ|𝒟) the batch log-likelihood as the training set 𝒟 contains more than one data point.
The derivative of the one-outcome log-likelihood is
∂ℓ (θ_i,θ_j|Y_ij)/∂θ_i = Y_ij (1- p_ij) - (1-Y_ij) p_ij = Y_ij - p_ij,
hence the K in the Élő update rule (see equation <ref>) may be updated as a gradient ascent rate or learning coefficient in an on-line likelihood update.
We also obtain a batch gradient from the batch log-likelihood:
∂ℓ (θ|𝒟)/∂θ_i = [Q_i - ∑_(i,j)∈ G_i p_ij],
where Q_i is team i's number of wins minus number of losses observed in 𝒟, and G_i is the (multi-)set of (unordered) pairings team i has participated in within 𝒟.
The batch gradient directly gives rise to a batch gradient update
θ_i←θ_i+K·[Q_i-∑_(i,j)∈ G_i p_ij].
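As an illustration, a minimal sketch of this batch gradient ascent on a toy data set (pairings, outcomes and the learning constant are invented for the example):

```python
import numpy as np

def fit_bradley_terry(pairings, outcomes, n_teams, K=0.1, n_iter=500):
    """Batch gradient ascent on the Bradley-Terry log-likelihood.
    pairings: list of (i, j) with i the home team; outcomes: 1 iff i won."""
    theta = np.zeros(n_teams)
    for _ in range(n_iter):
        grad = np.zeros(n_teams)
        for (i, j), y in zip(pairings, outcomes):
            p_ij = 1.0 / (1.0 + np.exp(-(theta[i] - theta[j])))
            grad[i] += y - p_ij    # residual enters team i's gradient ...
            grad[j] -= y - p_ij    # ... and with opposite sign for team j
        theta += K * grad          # gradient ascent step
        theta -= theta.mean()      # fix the arbitrary zero of the rating scale
    return theta

pairings = [(0, 1), (1, 2), (0, 2), (2, 1)]
outcomes = [1, 1, 0, 0]   # in the last two matches, the away team won
print(fit_bradley_terry(pairings, outcomes, n_teams=3))
```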
Note that the above model highlights several novel, interconnected, and possibly so far unknown
(or at least not jointly observed) aspects of Bradley-Terry and Élő type models:
(i) The Élő system can be seen as a learning algorithm for a logistic odds model with latent variables, the Bradley-Terry model
(and hence, by extension, as a full fit/predict specification of a certain one-layer neural network).
(ii) The Bradley-Terry and Élő model may simultaneously be interpreted as Bernoulli observation models of a rank two matrix.
(iii) The gradient of the Bradley-Terry model's log-likelihood gives rise to a (novel) batch gradient and a single-outcome gradient ascent update.
A single iteration per-sample of the latter (with a fixed update constant) is Élő's original update rule.
These observations give rise to a new family of models: the structured log-odds models that will be discussed in Section <ref> and <ref>,
together with concomitant gradient update strategies of batch and on-line type.
This joint view also makes extensions straightforward, for example, the “home team parameter”h in the common extension p_ij=σ(θ_i-θ_j + h)
of the Élő system may be interpreted as Bradley-Terry model with an intercept term, with log-odds (p_ij) = ⟨ e_i - e_j, θ⟩ + h,
that is updated by the one-outcome Élő update rule.
Since more generally, the structured log-odds models arise by combining the parametric form of the Bradley-Terry model with Élő's update strategy,
we also argue for synonymous use of the term “Bradley-Terry-Élő models” whenever Bradley-Terry models are updated batch, or epoch-wise,
or whenever they are, more generally, used in a predictive, supervised, or on-line setting.
§.§.§ Glickman's Bradley-Terry-Élő model
For sake of completeness and comparison, we discuss the probabilistic formulation of <cit.>.
In this fully Bayesian take on the Bradley-Terry-Élő model, it is assumed that there is a latent random variable
Z_i associating with team i. The latent variables are statistically
independent and they follow a specific generalized extreme value (GEV)
distribution:
Z_i∼GEV(θ_i,1,0)
where the mean parameter θ_i varies across teams, and the
other two parameters are fixed at one and zero.
The density function of GEV(μ,1,0),
μ∈ℝ is
p(x|μ)=exp(-(x-μ))·exp(-exp(-(x-μ)))
The model further assumes that team i wins over team j in a
match if and only if a random sample (Z_i, Z_j) from the
associated latent variables satisfies Z_i>Z_j.
It can be shown that the difference variables (Z_i-Z_j) then happen to follow a logistic
distribution with mean θ_i-θ_j and scale parameter
1, see <cit.>.
Hence, the (predictive) winning probability for
team i is eventually given by Élő's original equation <ref> which is equivalent to the Bradley-Terry-odds.
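This equivalence is easily checked by simulation; a small sketch (using that GEV(μ,1,0) is the Gumbel distribution with location μ, as provided by numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_i, theta_j = 0.8, 0.2

# latent skills: Z ~ Gumbel(location=theta, scale=1), i.e. GEV(theta, 1, 0)
z_i = rng.gumbel(loc=theta_i, scale=1.0, size=1_000_000)
z_j = rng.gumbel(loc=theta_j, scale=1.0, size=1_000_000)

empirical = np.mean(z_i > z_j)                          # P(team i wins)
logistic = 1.0 / (1.0 + np.exp(-(theta_i - theta_j)))   # Bradley-Terry/Elo
print(empirical, logistic)   # the two agree up to Monte Carlo error
```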
In fact, the arguably strange parametric form for the distribution of the Z_i
gives the impression of being chosen for this particular, singular reason.
We argue that Glickman's model makes unnecessary assumptions
through the latent random variables Z_i, which furthermore carry an unnatural distribution.
This is certainly true in the frequentist interpretation, as the parametric model in Section <ref>
is not only more parsimonious as it does not assume a process that generates the θ_i,
but also it avoids to assume random variables that are never directly observed (such as the Z_i).
This is also true in the Bayesian interpretation, where a prior is assumed on the θ_i which then indirectly
gives rise to the outcome via the Z_i.
Hence, one may argue by Occam's razor, that modelling the Z_i is unnecessary, and,
as we believe, may put obstacles on the path to the existing and novel extensions in Section <ref> that would otherwise appear natural.
§.§.§ Limitations of the Bradley-Terry-Élő model and existing remedies
We point out some limitations of the original Bradley-Terry and Élő models which we attempt to address in Section <ref>.
Modelling draws
The original Bradley-Terry and Élő models do not model the possibility of a draw. This
might be reasonable in official chess tournaments where players play on
until draws are resolved. However, in many competitive sports a significant
number of matches end up as a draw - for example, in the English Premier
League about twenty percent of the matches. Modelling
the possibility of draw outcome is therefore very relevant.
One of the first extensions of the Bradley-Terry model, the ternary outcome model by <cit.>,
was suggested to address exactly this shortcoming. The strategy
for modelling draws in the joint framework, closely following this work,
is outlined in Section <ref>.
Using final scores in the model
The Bradley-Terry-Élő model only takes into account the binary outcome
of the match. In sports such as football, the final scores for both
teams may contain more information. Generalizations exist to tackle
this problem. One approach is adopted by the official FIFA Women’s
football ranking <cit.>, where the actual outcome of the
match is replaced by the Actual Match Percentage,
a quantity that depends on the final scores. FiveThirtyEight, an online
media, proposed another approach <cit.>. It introduces
the “Margin of Victory Multiplier” in the rating system to adjust
the K-factor for different final scores.
In a survey paper, <cit.> showed empirical evidence
that rating methods that take into account the final scores often
outperform those that do not. However, it is worth noticing that the existing
methods often rely on heuristics and their mathematical justifications
are often unpublished or unknown. We describe a principled way
to incorporate final scores in Section <ref>
into the framework, following ideas of <cit.>.
Using additional features
The Bradley-Terry-Élő model only takes into account very limited information.
Apart from previous match outcomes, the only feature it uses is the
identity of home and away teams. There are many other potentially
useful features. For example, whether the team is recently promoted
from a lower-division league, or whether a key player is absent from
the match. These features may help make better prediction if they
are properly modeled. In Section <ref>, we
extend the Bradley-Terry-Élő model to a logistic odds model
that can also make use of features, along lines similar to the feature-dependent
models of <cit.>.
§.§ Domain-specific parametric models
We review a number of parametric and Bayesian models that have been considered in literature to model competitive sports outcomes.
A predominant property of this branch of modelling is that the final scores are explicitly modelled.
§.§.§ Bivariate Poisson regression and extensions
<cit.> proposed to model the final scores as
independent Poisson random variables. If team i is playing at home
field against team j, then the final scores S_i and S_j
follows
S_i ∼ Poisson(α_iβ_jh)
S_j ∼ Poisson(α_jβ_i)
where α_i and α_j measure the 'attack' rates,
and β_i and β_j measure the 'defense' rates of the
teams. The parameter h is an adjustment term for home advantage.
The model further assumes that all historical match outcomes are independent.
The parameters are estimated from maximizing the log-likelihood function
of all historical data. Empirical evidence suggests that the Poisson
distribution fits the data well. Moreover, the Poisson distribution can
be derived as the expected number of events during a fixed time period
at a constant risk. This interpretation fits into the framework of
competitive team sports.
<cit.> proposed two modifications to Maher's
model. First, the final scores S_i and S_j are allowed to
be correlated when they are both less than two. The model employs
a free parameter ρ to capture this effect. The joint probability
function of S_i,S_j is given by the bivariate Poisson distribution
<ref>:
P(S_i=s_i,S_j=s_j)=τ_λ,μ(s_i,s_j)·(λ^s_i exp(-λ)/s_i!)·(μ^s_j exp(-μ)/s_j!)
where
λ = α_iβ_jh
μ = α_jβ_i
and
τ_λ,μ(s_i,s_j)=
1-λμρ if s_i=s_j=0,
1+λρ if s_i=0, s_j=1,
1+μρ if s_i=1, s_j=0,
1-ρ if s_i=s_j=1,
1 otherwise.
The function τ_λ,μ adjusts the probability function
so that drawing becomes less likely when both scores are low. The
second modification is that the Dixon-Coles model no longer assumes
match outcomes are independent through time. The modified log-likelihood
function of all historical data is represented as a weighted sum of
log-likelihood of individual matches illustrated in equation <ref>,
where t represents the time index. The weights are heuristically
chosen to decay exponentially through time in order to emphasize more
recent matches.
ℓ=∑_t=1^T exp(-ξ(T-t))·log[P(S_i(t)=s_i(t), S_j(t)=s_j(t))]
The parameter estimation procedure is the same as Maher's model. Estimates
are obtained from batch optimization of modified log-likelihood.
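For concreteness, a sketch of the Dixon-Coles joint probability mass function as given above (the parameter values in the usage example are arbitrary, not fitted to any league):

```python
import math

def tau(s_i, s_j, lam, mu, rho):
    # Dixon-Coles low-score adjustment factor
    if s_i == 0 and s_j == 0:
        return 1 - lam * mu * rho
    if s_i == 0 and s_j == 1:
        return 1 + lam * rho
    if s_i == 1 and s_j == 0:
        return 1 + mu * rho
    if s_i == 1 and s_j == 1:
        return 1 - rho
    return 1.0

def dixon_coles_pmf(s_i, s_j, alpha_i, beta_i, alpha_j, beta_j, h, rho):
    # P(S_i = s_i, S_j = s_j) for home team i against away team j
    lam = alpha_i * beta_j * h     # home scoring rate
    mu = alpha_j * beta_i          # away scoring rate
    poisson = (lam ** s_i * math.exp(-lam) / math.factorial(s_i)
               * mu ** s_j * math.exp(-mu) / math.factorial(s_j))
    return tau(s_i, s_j, lam, mu, rho) * poisson

print(dixon_coles_pmf(1, 0, alpha_i=1.2, beta_i=0.9,
                      alpha_j=1.0, beta_j=1.1, h=1.3, rho=-0.1))
```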
<cit.> explored several other possible parametrization
of the bivariate Poisson distribution including those proposed by
<cit.>, and <cit.>.
The authors performed a model comparison between Maher's independent
Poisson model and various bivariate Poisson models based on AIC and
BIC. However, the comparison did not include the Dixon-Coles model.
<cit.> performed a more comprehensive model
comparison based on their forecasting performance.
§.§.§ Bayesian latent variable models
<cit.> proposed a Bayesian parametric model based
on the bivariate Poisson model. In addition to the paradigm change,
there are three major modifications on the parameterization. First
of all, the distribution for scores are truncated: scores greater
than four are treated as the same category. The authors argued that
the truncation reduces the extreme case where one team scores many
goals. Secondly, the final scores S_i and S_j are assumed
to be drawn from a mixture model:
P(S_i=s_i,S_j=s_j)=(1-ϵ)P_DC+ϵ P_Avg
The component P_DC is the truncated version of the Dixon-Coles
model, and the component P_Avg is a truncated bivariate Poisson
distribution (<ref>) with μ and λ equal
to the average value across all teams. Thus, the mixture model encourages
a reversion to the mean. Lastly, the attack parameters α and
defense parameters β for each team changes over time following
a Brownian motion. The temporal dependence between match outcomes
are reflected by the change in parameters. This model does not have
an analytical posterior for parameters. The Bayesian inference procedure
is carried out via Markov Chain Monte Carlo method.
<cit.> proposed another Bayesian formulation
of the bivariate Poisson model based on the Dixon-Coles model. The
parametric form remains unchanged, but the attack parameters α_i's
and defense parameter β_i's changes over time following an
AR(1) process. Again, the model does not have an analytical posterior.
The authors proposed a fast variational inference procedure to conduct
the inference.
<cit.> proposed a further extension to the bivariate
Poisson model proposed by <cit.>. The authors
noted that the correlation between final scores are parametrized explicitly
in previous models, which seems unnecessary in the Bayesian setting.
In their proposed model, both scores are conditionally independent
given an unobserved latent variable. This hierarchical structure naturally
encodes the marginal dependence between the scores.
§.§ Feature-based machine learning predictors
In recent publications, researchers reported that machine learning
models achieved good prediction results for the outcomes of competitive
team sports. The strengths of the machine learning approach lie in
the model-agnostic and data-centric modelling using available off-shelf methodology,
as well as the ability to incorporate features in model building.
In this branch of research, the prediction problems are usually studied as a supervised classification problem,
either binary (home team win/lose or win/other), or ternary, i.e., where the outcome of a match falls into three distinct classes: home team
win, draw, and home team lose.
<cit.> applied logistic regression, support vector
machines with different kernels, and AdaBoost to predict NCAA football
outcomes. For this prediction problem, the researchers hand crafted
210 features.
<cit.> explored more machine learning predictors
in the context of sports prediction. The predictors include naïve
Bayes classifiers, Bayes networks, LogitBoost, k-nearest neighbors, Random forest,
and artificial neural networks. The models are trained on 20 features
derived from previous match outcomes and 10 features designed subjectively
by experts (such as team's morale).
<cit.> conducted a similar study. The predictors
are commercial implementations of various Decision Tree and ensembled
trees algorithms as well as a hand-crafted Bayes Network. The models
are trained on a subset of 320 features derived from the time series
of betting odds. In fact, this is the only study so far where the
predictors have no access to previous match outcomes.
<cit.> explored the possibility of predicting
match outcome from Tweets. The authors applied naïve Bayes classifiers, Random
forests, logistic regression, and support vector machines to a feature
set composed of 12 match outcome features and a number of Tweets features.
The Tweets features are derived from unigrams and bigrams of the Tweets.
§.§ Evaluation methods used in previous studies
In all studies mentioned in this section, the authors validated their
new model on a real data set and showed that the new model performs
better than an existing model. However, complication arises when we
would like to aggregate and compare the findings made in different
papers. Different studies may employ different validation settings,
different evaluation metrics, and different data sets. We report on this
with a focus on the following, methodologically crucial aspects:
(i) Studies may or may not include a well-chosen benchmark for comparison.
If this is not done, then it may not be concluded that the new method outperforms
the state-of-art, or a random guess.
(ii) Variable selection or hyper-parameter tuning procedures
may or may not be described explicitly. This may raise doubts about the validity
of conclusions, as “hand-tuning” parameters is implicit overfitting,
and may lead to underestimating the generalization error in validation.
(iii) Last but equally importantly, some studies do not
report the error measure on evaluation metrics (standard deviation
or confidence interval). In these studies, we cannot rule out the
possibility that the new model is outperforming the baselines just
by chance.
In Table <ref>, we summarize the benchmark evaluation
methodology used in previous studies. One may remark that the size of testing
data sets vary considerably across different studies, and most studies
do not provide a quantitative assessment on the evaluation metric.
We also note that some studies perform the evaluation on the training
data (i.e., in-sample). Without further argument, these evaluation results
only show the goodness-of-fit of the model on the training data, as they do not provide
a reliable estimate of the expected predictive performance (on unseen data).
§ EXTENDING THE BRADLEY-TERRY-ÉLŐ MODEL
In this section, we propose a new family of models for the outcome
of competitive team sports, the structured log-odds models. We will
show that both Bradley-Terry and Élő models belong to this family (section
<ref>), as well as logistic regression.
We then propose several new models with added flexibility (section <ref>)
and introduce various training algorithms (section <ref>
and <ref>).
§.§ The structured log-odds model
Recall our principal observations obtained from the joint discussion of Bradley-Terry and Élő models in Section <ref>:
(i) The Élő system can be seen as a learning algorithm for a logistic odds model with latent variables, the Bradley-Terry model
(and hence, by extension, as a full fit/predict specification of a certain one-layer neural network).
(ii) The Bradley-Terry and Élő model may simultaneously be interpreted as Bernoulli observation models of a rank two matrix.
(iii) The gradient of the Bradley-Terry model's log-likelihood gives rise to a (novel) batch gradient and a single-outcome gradient ascent update.
A single iteration per-sample of the latter (with a fixed update constant) is Élő's original update rule.
We collate these observations in a mathematical model, and highlight relations to well-known model classes,
including the Bradley-Terry-Élő model, logistic regression, and neural networks.
§.§.§ Statistical definition of structured log-odds models
In the definition below, we separate added assumptions and notations for the general set-up, given in the paragraph
“Set-up and notation”, from model-specific assumptions, given in the paragraph “model definition”.
Model-specific assumptions, as usual, need not hold for the “true” generative process, and the mismatch of the assumed model structure
to the true generative process may be (and should be) quantified in a benchmark experiment.
Set-up and notation.
We keep the notation of Section <ref>; for the time being, we assume that there
is no dependence on time, i.e., the observations follow a generative joint random variable (X_ij,Y_ij).
The variable Y_ij models the outcomes of a pairing where home team i plays against away team j.
We will further assume that the outcomes are binary home team win/lose = 1/0, i.e., Y_ij∼ Bernoulli(p_ij).
The variable X_ij models features relevant to the pairing.
From it, we may single out features that pertain to a single team i, as a variable X_i.
Without loss of generality (for example, through introduction of indicator variables), we will assume that X_ij takes values in ℝ^n, and X_i takes values in ℝ^m.
We will write X_ij,1,X_ij,2,…, X_ij,n and X_i,1,…, X_i,m for the components.
The two restrictive assumptions (independence of time, binary outcome) are temporary and are made for expository reasons.
We will discuss in subsequent sections how these assumptions may be removed.
We have noted that the double sub-index notation easily allows us to consider p_* in matrix form.
We will denote by P the (real) matrix with entry p_ij in the i-th row and j-th column.
Similarly, we will denote by Y the matrix with entries Y_ij.
We do not fix a particular ordering of the entries in P, as the numbering of teams does not matter,
however the indexing needs to be consistent across P, Y, and any matrix of this format that we may define later.
A crucial observation is that the entries of the matrix P can be plausibly expected to not be arbitrary.
For example, if team i is a strong team, we should expect
p_ij to be larger for all j's. We can make a similar argument
if we know team i is a weak team. This means the entries in the matrix
P are not completely independent from each other (in an algebraic sense); in other words,
the matrix P can be plausibly assumed to have an inherent structure.
Hence, prediction of P should be more accurate if the correct structural assumption is made on P,
which will be one of the cornerstones of the structured log-odds models.
For mathematical convenience (and for reasons of scientific parsimony which we will discuss),
we will not directly endow the matrix P with structure, but the matrix L := logit(P),
where as usual and as in the following, univariate functions are applied entry-wise
(e.g., P = σ(L) is also a valid statement and equivalent to the above).
Model definition.
We are now ready to introduce the structured log-odds models for competitive team sports.
As the name says, the main assumption of the model is that the log-odds matrix L is a structured
matrix, alongside with the other assumptions of the Bradley-Terry-Élő model in Section <ref>.
More explicitly, all assumptions of the structured log-odds model may be written as
Y ∼ Bernoulli(P)
P = σ(L)
where we have not made the structural assumptions on L explicit yet.
The matrix L may depend on X_ij, X_i, though a sensible model
may be already obtained from a constant matrix L
with restricted structure. We will show that the Bradley-Terry and Élő models are of this subtype.
Structural assumptions for the log-odds.
We list a few structural assumptions that may or may not be present in some form,
and will be key in understanding important cases of the structured log-odds models.
These may be applied to L as a constant matrix to obtain the simplest class of log-odds models,
such as the Bradley-Terry-Élő model, as we will explain in the subsequent section.
Low-rankness. A common structural restriction for a matrix (and arguably the most scientifically or mathematically parsimonious one)
is the assumption of low rank: namely, that the rank of the matrix of relevance is
less than or equal to a specified value r. Typically, r is far less than
either size of the matrix, which heavily restricts the number of (model/algebraic) degrees of freedom in an
(m× n) matrix from mn to r(m+n-r).
The low-rank assumption essentially reflects a belief that the unknown matrix is determined by only a small number
of factors, corresponding to a small number of prototypical rows/columns, with the small number being equal to r.
By the singular value decomposition theorem, any rank r matrix A∈ℝ^m× n may be written as
A = ∑_k=1^r λ_k· u^(k)·(v^(k))^⊤,   A_ij = ∑_k=1^r λ_k· u^(k)_i · v^(k)_j
for some λ_k∈ℝ, pairwise orthogonal u^(k)∈ℝ^m, pairwise orthogonal v^(k)∈ℝ^n;
equivalently, in matrix notation, A = U·Λ· V^⊤ where Λ∈ℝ^r× r is diagonal, and U^⊤ U = V^⊤ V = I (and where U∈ℝ^m× r, V ∈ℝ^n× r, and u^(k), v^(k) are the rows of U,V).
Anti-symmetry. A further structural assumption is symmetry or anti-symmetry of a matrix.
Anti-symmetry arises in competitive outcome prediction naturally as follows:
if all matches were played on neutral fields (or if home advantage is modelled separately),
one should expect that p_ij=1-p_ji, which means the probability
for team i to beat team j is the same regardless of where the
match is played (i.e., which one is the home team).
Hence,
L_ij = logit(p_ij) = log(p_ij/(1-p_ij)) = log((1-p_ji)/p_ji) = -logit(p_ji) = -L_ji,
that is, L is an anti-symmetric matrix, i.e., L = -L^⊤.
Anti-symmetry and low-rankness. It is known that any real antisymmetric matrix always has even rank <cit.>.
That is, if a matrix is assumed to be low-rank and anti-symmetric simultaneously, it will have rank 0 or 2 or 4 etc.
In particular, the simplest (non-trivial) anti-symmetric low-rank matrices have rank 2.
One can also show that any real antisymmetric matrix A∈ℝ^n× n with rank 2r'
can be decomposed as
A=∑_k=1^r' λ_k·(u^(k)·(v^(k))^⊤-v^(k)·(u^(k))^⊤) ,
A_ij = ∑_k=1^r' λ_k·(u^(k)_i · v^(k)_j-u^(k)_j · v^(k)_i)
for some λ_k∈ℝ, pairwise orthogonal u^(k)∈ℝ^n, pairwise orthogonal v^(k)∈ℝ^n;
equivalently, in matrix notation, A = U·Λ· V^⊤ - V·Λ· U^⊤
where Λ∈ℝ^r'× r' is diagonal, and U^⊤ U = V^⊤ V = I (and where U, V ∈ℝ^n× r', and u^(k), v^(k) are the rows of U,V).
Separation. In the above, in general, the factors u^(k),v^(k) give rise to interaction constants (namely: u^(k)_i· v^(k)_j) that are specific to the pairing.
To obtain interaction constants that only depend on one of the teams, one may additionally assume that one of the factors is constant,
or a vector of ones (without loss of generality from the constant vector). Similarly, a matrix with constant entries corresponds to an effect independent of the pairing.
Learning/fitting of structured log-odds models will be discussed in Section <ref>.
after we have established a number of important sub-cases and the full formulation of the model.
In a brief preview summary, it will be shown that the log-likelihood function has in essence the same form for
all structured log-odds models. Namely, for any parameter θ on which L or P may depend,
it holds for the (one-outcome) log-likelihood that
ℓ (θ|Y_ij) = Y_ij log (p_ij) + (1-Y_ij) log (1-p_ij) = Y_ij·L_ij + log(1-p_ij).
Similarly, for its derivative one obtains
∂ℓ (θ|Y_ij)/∂θ = (Y_ij/p_ij)·∂ p_ij/∂θ - ((1-Y_ij)/(1-p_ij))·∂ p_ij/∂θ,
where the partial derivatives on the right hand side will have a different form for different structural assumptions,
while the general form of the formula above is the same for any such assumption.
Section <ref> will expand on this for the full model class.
§.§.§ Important special cases
We highlight a few important special types of structured log-odds models that we have already seen, or that are prototypical
for our subsequent discussion:
The Bradley-Terry model and, via identification, the Élő system are obtained under the structural assumption
that L is anti-symmetric and of rank 2, with one factor a vector of ones.
Namely, recalling equation <ref>, we recognize that the log-odds
matrix in the Bradley-Terry model is
given by L_ij=θ_i-θ_j,
where θ_i and θ_j are the Élő ratings.
Using the rule of matrix multiplication, one can verify that this is equivalent to
L = θ·𝟙^⊤ - 𝟙·θ^⊤
where 𝟙 is a vector of ones and θ is the vector
of Élő ratings. For general θ, the
log-odds matrix will have rank two (except in the degenerate case θ_i=θ_j for all i,j).
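This structure is easy to verify numerically; a small sketch with arbitrary ratings:

```python
import numpy as np

theta = np.array([0.8, 0.2, -0.5, -0.3])   # arbitrary Elo-type ratings
ones = np.ones_like(theta)

L = np.outer(theta, ones) - np.outer(ones, theta)   # L = theta 1^T - 1 theta^T

print(np.allclose(L, -L.T))        # True: anti-symmetric
print(np.linalg.matrix_rank(L))    # 2, as long as theta is not constant
print(1 / (1 + np.exp(-L)))        # P = sigma(L), matrix of win probabilities
```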
By the exposition above, making the three assumptions is equivalent to positing the Bradley-Terry or Élő model.
Two interesting observations may be made:
First, the ones-vector being a factor entails that the winning chance
depends only on the difference between the team-specific ratings θ_i,θ_j, without any further interaction term.
Second, the entry-wise exponential of L is a matrix of rank (at most) one.
The popular Élő model with home advantage is obtained from the Bradley-Terry-Élő model under the structural assumption
that L is a sum of a low-rank matrix and a constant; equivalently, from an assumption
of rank 3 which is further restricted by fixing some factors to each other or to vectors of ones.
More precisely, from equation <ref>, one can recognize that for the
Élő model with home advantage, the log-odds matrix decomposes as
L = θ·𝟙^⊤ - 𝟙·θ^⊤ + h·𝟙·𝟙^⊤
Note that the log-odds matrix is no longer anti-symmetric, due to the constant term
with home advantage parameter h that is (algebraically) independent of the playing teams.
Also note that the anti-symmetric part, i.e., 1/2·(L - L^⊤),
is equivalent to the constant-free Élő model's log-odds, while the symmetric
part, i.e., 1/2·(L + L^⊤), is exactly the new constant home advantage term.
More factors: full two-factor Bradley-Terry-Élő models may be obtained by dropping the
separation assumption from either Bradley-Terry-Élő model, i.e.,
keeping the assumption of anti-symmetric rank two, but allowing
an arbitrary second factor not necessarily being the vector of ones.
The team's competitive strength is then determined by two interacting factors
u, v, as
L = u· v^⊤ - v· u^⊤.
Intuitively, this may cover, for example, a situation where the benefit from being much better may be
smaller (or larger) than being a little better, akin to a discounting of extremes.
If the full two-factor model predicts better than the Bradley-Terry-Élő model, this may be evidence for
different interaction strengths in different ranges of the Élő scores.
A home advantage factor (a constant) may or may not be added, yielding a model of total rank 3.
Raising the rank: higher-rank Bradley-Terry-Élő models may be obtained
by relaxing the assumption of rank 2 (or 3) to a higher rank.
We will consider the next more expressive model, of rank four.
The rank four Bradley-Terry-Élő model which we will consider will add
a full anti-symmetric rank two summand to the log-odds matrix, which
hence is assumed to have the following structure:
L = u· v^⊤ - v· u^⊤ + θ·𝟙^⊤ - 𝟙·θ^⊤
The team's competitive strength is captured by
three factors u, v and θ; note that we have kept the vector of ones as a factor.
Also note that setting either of u,v to 𝟙 would not result in a model extension
as the resulting matrix would still have rank two.
The rank-four model may intuitively make sense if there are (at least) two distinguishable qualities
determining the outcome - for example physical fitness of the team and strategic competence.
Whether there is evidence for the existence of more than one factor, as opposed to assuming
just a single one (as a single summary quantifier for good vs bad) may be checked by comparing predictive capabilities of the respective models.
Again, a home advantage factor may be added, yielding a log-odds matrix of total rank 5.
We would like to note that a mathematically equivalent model, as well as models with more factors, have already been considered by <cit.>,
though without making explicit the connection to matrices which are of low rank, anti-symmetric or structured in any other way.
Logistic regression may also be obtained as a special case of
structured log-odds models. In the simplest form of logistic regression,
the log-odds matrix is a linear functional in the features.
Recall that in the case of competitive outcome prediction, we consider
pairing features X_ij taking values in ^n, and team features X_i taking values in ^m.
We may model the log-odds matrix as a linear functional in these, i.e., model that
L_ij = ⟨λ^(ij), X_ij⟩ + ⟨β^(i), X_i⟩ + ⟨γ^(j), X_j⟩ + α,
where λ^(ij)∈ℝ^n, β^(i),γ^(j)∈ℝ^m, α∈ℝ.
If λ^(ij) = 0, we obtain a simple two-factor logistic regression model.
In the case that there are only two teams playing only with each other, or (the mathematical correlate of) a single team playing only with itself,
the standard logistic regression model is recovered.
Conversely, a way to obtain the Bradley-Terry model as a special case of
classical logistic regression is as follows:
consider the indicator feature X_ij:= e_i - e_j.
With a coefficient vector β, the logistic odds will be
L_ij=⟨β, X_ij⟩ = β_i-β_j.
In this case, the coefficient vector
β corresponds to a vector of Élő ratings.
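This reduction is easy to carry out with off-shelf software; a sketch using scikit-learn on toy data (team indices and outcomes are invented; the intercept is suppressed and regularization weakened so that the fit is close to maximum likelihood):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

pairings = [(0, 1), (1, 2), (0, 2), (2, 1), (1, 0)]
outcomes = [1, 1, 1, 0, 1]   # 1 iff the home team won
n_teams = 3

# indicator features: X_ij = e_i - e_j
X = np.zeros((len(pairings), n_teams))
for row, (i, j) in enumerate(pairings):
    X[row, i], X[row, j] = 1.0, -1.0

model = LogisticRegression(fit_intercept=False, C=1e6).fit(X, outcomes)
print(model.coef_)   # the coefficients play the role of Elo ratings
```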
Note that in the above formulation, the coefficient vectors λ^(ij), β^(i) are explicitly allowed to depend on the teams.
If we further allow α to depend on both teams, the model includes the Bradley-Terry-Élő models above as well; we could also
make the β depend on both teams.
However, allowing the coefficients to vary in full generality is not very sensible, and as for the constant term which
may yield the Élő model under specific structural assumptions, we need to endow all model parameters with
structural assumptions to prevent combinatorial explosion of parameters and overfitting.
These subtleties in incorporating features, and more generally
how to combine features with hidden factors
will be discussed in the separate, subsequent Section <ref>.
§.§.§ Connection to existing model classes
Close connections to three important classes of models become apparent through the discussion in the previous sections:
Generalized Linear Models generalize both linear and log-linear models (such as the Bradley-Terry model) through
so-called link functions, or more generally (and less classically) link distributions,
combined with flexible structural assumptions on the target variable.
The generalization aims at extending prediction with linear functionals through
the choice of link which is most suitable for the target <cit.>.
Particularly relevant for us are generalized linear models for ordinal outcomes, which include the
ternary (win/draw/lose) case, as well as link distributions for scores. Some existing extensions of this type,
such as the ternary outcome model of <cit.> and the score model of <cit.>,
may be interpreted as specific choices of suitable linking distributions.
How these ideas may be used as a component of structured log-odds models will be discussed in Section <ref>.
Neural Networks (vulgo “deep learning”) may be seen as a generalization of logistic regression which is
mathematically equivalent to a single-layer network with softmax activation function. The generalization is achieved
through functional nesting which allows for non-linear prediction functionals, and greatly expands the capability of regression models to handle
non-linear features-target-relations <cit.>.
A family of ideas which immediately transfers to our setting are strategies for training and model fitting.
In particular, on-line update strategies as well as training in batches and epochs yields a natural
and principled way to learn Bradley-Terry-Élő and log-odds models in an on-line setting or to potentially improve its predictive power in a
static supervised learning setting.
A selection of such training strategies for structured log-odds models will be explored in Section <ref>.
This will not include variants of stochastic gradient descent which we leave to future investigations.
It is also beyond the scope of this manuscript to explore the implications of using multiple layers
in a competitive outcome setting, though it seems to be a natural idea given the closeness of the model classes
which certainly might be worth exploring in further research.
Low-rank Matrix Completion is the supervised task of filling in some
missing entries of a low-rank matrix, given others and the information that the rank is small.
Many machine learning applications can be viewed as estimation
or completion of a low-rank matrix, and different solution strategies exist <cit.>.
The feature-free variant of structured log-odds models (see Section <ref>) may be regarded as a
low-rank matrix completion problem: from observations of Y_ij∼ Bernoulli(σ(L_ij)), for (i,j)∈ E where the set of observed pairings
E may be considered as the set of observed positions, estimate the underlying low-rank matrix L, or
predict Y_kℓ for some (k,ℓ) which is possibly not contained in E.
One popular low-rank matrix completion strategy in estimating model parameters or completing missing entries uses the idea of replacing the discrete rank constraint
by a continuous spectral surrogate constraint, penalizing not rank but the nuclear norm ( = trace norm = 1-Schatten-norm)
of the matrix modelled to have low rank <cit.>.
The advantage of this strategy is that no particular rank needs to be a-priori assumed, instead the objective implicitly selects a low rank
through a trade-off with model fit. This strategy will be explored in Section <ref> for the structured log-odds models.
Further, identifiability of the structured log-odds models is closely linked to the question whether a given entry of a low-rank matrix
may be reconstructed from those which have been observed.
Somewhat straightforwardly, one may see that reconstructability in the algebraic sense, see <cit.>,
is a necessary condition for identifiability under respective structure assumptions.
However, even though many results of <cit.> directly generalize,
completability of anti-symmetric low-rank matrices with or without vectors of ones being factors has not been studied explicitly in literature to our knowledge,
hence we only point this out as an interesting avenue for future research.
We would like to note that a more qualitative and implicit mention of this, in the form of noticing connection to the general area of collaborative filtering,
is already made in <cit.>, in reference to the multi-factor models studied by <cit.>.
§.§ Predicting non-binary labels with structured log-odds models
In Section <ref>, we have not introduced all
aspects of structured log-odds models in favour of a clearer exposition.
In this section, we discuss these aspects
that are useful for the domain application more precisely, namely:
(i) How to use features in the prediction.
(ii) How to model ternary match outcomes (win/draw/lose) or score outcomes.
(iii) How to train the model in an on-line setting with a batch/epoch strategy.
For point (i) “using features”, we will draw from the structured log-odds models' closeness to logistic regression;
the approach to (ii) “general outcomes” may be treated by choosing an appropriate link function as with generalized linear models;
for (iii), parallels may be drawn to training strategies for neural networks.
§.§.§ The structured log-odds model with features
As highlighted in Section <ref>, pairing features X_ij taking values in ^n, and team features X_i taking values in ^m
may be incorporated by modelling the log-odds matrix as
L_ij = ⟨λ^(ij), X_ij⟩ + ⟨β^(i), X_i⟩ + ⟨γ^(j), X_j⟩ + α_ij,
where λ^(ij)∈ℝ^n, β^(i),γ^(j)∈ℝ^m, α_ij∈ℝ. Note that differently from the simpler exposition in
Section <ref>, we allow all coefficients, including α_ij, to vary with i and j.
Though, allowing λ^(ij) and β^(i),γ^(j) to vary completely freely may lead to over-parameterisation or overfitting,
similarly to an unrestricted (full rank) log-odds matrix of α_ij in the low-rank Élő model,
especially if the number of distinct observed pairings is of similar magnitude as the number of total observed outcomes.
Hence, structural restriction of the degrees of freedom may be as important for the feature coefficients as for the constant term.
The simplest such assumption is that all λ^(ij) are equal, all β^(i) are equal, and all γ^(j) are equal, i.e., assuming that
L_ij = ⟨λ, X_ij⟩ + ⟨β, X_i⟩ + ⟨γ, X_j⟩ + α_ij,
for some λ∈ℝ^n, β,γ∈ℝ^m, and where α_ij may follow the assumptions of the feature-free log-odds models.
This will be the main variant, which we will refer to as the structured log-odds model with features.
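A small sketch of the resulting predictive equation, with the low-rank Bradley-Terry-Élő term as the constant part (all parameter values below are hypothetical placeholders):

```python
import numpy as np

def log_odds(i, j, theta, h, lam, X_pair, beta, gamma, X_team):
    """L_ij = <lam, X_ij> + <beta, X_i> + <gamma, X_j> + theta_i - theta_j + h,
    with feature coefficients shared across all pairings."""
    return (lam @ X_pair[i, j] + beta @ X_team[i] + gamma @ X_team[j]
            + theta[i] - theta[j] + h)

rng = np.random.default_rng(1)
n_teams, n_pair_feat, n_team_feat = 3, 2, 2
theta = rng.normal(size=n_teams)        # Elo-type ratings
lam = rng.normal(size=n_pair_feat)      # pairing feature coefficients
beta = rng.normal(size=n_team_feat)     # home team feature coefficients
gamma = rng.normal(size=n_team_feat)    # away team feature coefficients
X_pair = rng.normal(size=(n_teams, n_teams, n_pair_feat))
X_team = rng.normal(size=(n_teams, n_team_feat))

L01 = log_odds(0, 1, theta, 0.1, lam, X_pair, beta, gamma, X_team)
print(1 / (1 + np.exp(-L01)))   # predicted probability that team 0 beats team 1
```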
However, the assumption that constants are independent of the pairing i,j may be too restrictive, as it may be plausible
that, for example, teams of different strength profit differently from or are impaired differently by the same circumstance,
e.g., injury of a key player.
To address such a situation, it is helpful to re-write Equation <ref> in matrix form:
L = Λ ∘_3 𝒳 + B·X_*^⊤ + X_*·Γ^⊤ + A,
where X_* is the matrix whose rows are the X_i, where B and Γ are matrices whose rows are the β^(i),γ^(j), and where
A is the matrix with entries α_ij.
The symbols Λ and 𝒳 denote tensors of degree 3 (= 3D-arrays)
whose (i,j,k)-th elements are λ^(ij)_k and X_ij,k. The symbol ∘_3 stands for the index-wise product of degree-3-tensors which eliminates
the third index and yields a matrix, i.e.,
(Λ ∘_3 𝒳)_ij = ∑_k=1^n λ^(ij)_k· X_ij,k.
A natural parsimony assumption for Λ, B, Γ, and A is, again, that of low rank.
For the matrices B, Γ, A, one can explore the same structural assumptions as in Section <ref>:
low-rankness and factors of ones are reasonable to assume for all three, while anti-symmetry seems natural for A but not for B, Γ.
A low tensor rank (Tucker or Waring) appears to be a reasonable assumption for Λ. As an ad-hoc definition of tensor (decomposition) rank of Λ,
one may take the minimal r such that there is a decomposition into real vectors u^(i),v^(i),w^(i) such that
Λ_ijk = ∑_ℓ=1^r u^(ℓ)_i· v^(ℓ)_j· w^(ℓ)_k.
Further reasonable assumptions are anti-symmetry in the first two indices, i.e., Λ_ijk = - Λ_jik, as well as some factors u^(ℓ), v^(ℓ) being vectors of ones.
Exploring these possible structural assumptions on the coefficients of features in experiments is possibly interesting both from
a theoretical and practical perspective, but beyond the scope of this manuscript.
Instead, we will restrict ourselves to the case of Λ = 0, of B and Γ having the same entry each, and
A following one of the low-rank structural assumptions of Section <ref>, as in the feature-free model.
We would like to note that variants of the Bradley-Terry model with features have already been proposed and implemented in a package for R <cit.>, though isolated from other aspects of the Bradley-Terry-Élő model class such as modelling draws,
or structural restrictions on hidden variables or the coefficient matrices and tensors, and the Élő on-line update.
§.§.§ Predicting ternary outcomes
This section addresses the issue of modeling draws raised in <ref>.
When it is necessary to model draws, we assume that the outcome of
a match is an ordinal random variable of three so-called levels: win ≻
draw ≻ lose. The draw is treated as a middle outcome. The extension
of structured log-odds model is inspired by an extension of logistic
regression: the Proportional Odds model.
The Proportional Odds model is a well-known family of models for ordinal
random variables <cit.>. It extends the
logistic regression to model ordinary target variables. The model
parameterizes the logit transformation of the cumulative probability
as a linear function of features. The coefficients associated
with feature variables are shared across all levels, but there is
an intercept term α_k which is specific to a certain level.
For a generic feature-label distribution (X,Y), where X takes values in ℝ^n
and Y takes values in a discrete set of ordered levels, the proportional odds model
may be written as
log(P(Y ≻ k)/P(Y ≼ k))=α_k+⟨β, X⟩
where β∈ℝ^n, α_k∈ℝ, and k ranges over the ordered levels of Y.
The model is called Proportional Odds model because the odds for any
two different levels k, k', given an observed feature set, are proportional with a constant
that does not depend on features; mathematically,
(P(Y≻ k)/P(Y≼ k))/(P(Y≻ k')/P(Y≼ k'))=exp(α_k-α_k')
Using a similar formulation in which we closely follow <cit.>, the structured log-odds model can be
extended to model draws, namely by setting
log(P(Y_ij=win)/(P(Y_ij=draw)+P(Y_ij=lose))) = L_ij
log((P(Y_ij=draw)+P(Y_ij=win))/P(Y_ij=lose)) = L_ij+ϕ
where L_ij is the entry in the structured log-odds matrix and ϕ
is a free parameter that affects the estimated probability of a draw.
Under this formulation, the probabilities for different outcomes are
given by
P(Y_ij=win) = σ(L_ij)
P(Y_ij=lose) = σ(-L_ij-ϕ)
P(Y_ij=draw) = σ(-L_ij)-σ(-L_ij-ϕ)
Note that this may be seen as a choice of ordinal link distribution in a “generalized” structured odds model,
and may be readily combined with feature terms as in Section <ref>.
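A minimal sketch of the three outcome probabilities as functions of the log-odds entry L_ij and the draw parameter φ (the numerical values are illustrative):

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def ternary_probs(L_ij, phi):
    # win/draw/lose probabilities in the proportional-odds extension
    p_win = sigma(L_ij)
    p_lose = sigma(-L_ij - phi)
    p_draw = sigma(-L_ij) - sigma(-L_ij - phi)   # the remaining mass
    return p_win, p_draw, p_lose

p = ternary_probs(L_ij=0.4, phi=0.5)
print(p, sum(p))   # the three probabilities sum to one
```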
§.§.§ Predicting score outcomes
Several models have been considered in Section <ref>
that use score differences to update the Élő ratings.
In this section, we derive a principled way to predict scores, score differences
and/or learn from scores or score differences.
Following the analogy to generalized linear models, we will be able to
tackle this by using a suitable linking distribution, through which the model can utilize the additional
information in the final scores.
The simplest natural assumption one may make on scores is obtained from assuming
a dependent scoring process, i.e., both home and away team's scores
are Poisson-distributed with a team-dependent parameter and possible correlation.
This assumption is frequently made in literature <cit.>
and eventually leads to a (double) Poisson regression when combined with structured log-odds models.
The natural linking distributions for differences of scores
are Skellam distributions which are obtained as difference distributions of two (possibly correlated) Poisson
distributions <cit.>, as it has been suggested by <cit.>.
In the following, we discuss only the case of score differences in detail,
predicting both team's score distributions can be obtained similarly
as predicting the correlated Poisson variables with the respective parameters instead of the Skellam difference distribution.
We first introduce some notation.
As a difference of Poisson distributions, whose support is ℕ, the support of a Skellam distribution is the set of integers ℤ.
The probability
mass function of Skellam distributions takes two positive parameters μ_1 and μ_2,
and is given by
P(z|μ_1,μ_2)=e^(-(μ_1+μ_2))·(μ_1/μ_2)^(z/2)·I_|z|(2√(μ_1 μ_2))
where I_α is the modified Bessel function of first kind with parameter α, given
by
I_α(x):=∑_k=0^∞ 1/(k!·Γ(α+k+1))·(x/2)^(2k+α)
If random variables Z_1 and Z_2 follow Poisson distributions with mean parameters λ_1 and λ_2 respectively,
and their correlation is ρ= (Z_1,Z_2), then their difference Z̃=Z_1-Z_2
follows a Skellam distribution with mean parameters μ_1=λ_1-ρ√(λ_1λ_2)
and μ_2=λ_2-ρ√(λ_1λ_2).
Now we are ready to extend the structured log-odds model to incorporate
historical final scores. We will use a Skellam distribution as the
linking distribution: we assume that the score difference of a match
between team i and team j, that is, Y_ij (taking values in 𝒴 = ℤ),
follows a Skellam distribution with (unknown) parameters
exp(L_ij) and exp(L'_ij).
Note that hence there are now two structured matrices L and L', each of which
may be subject to constraints such as in Section <ref>,
or constraints connecting them to each other, and each of which may
depend on features as outlined in Section <ref>.
A simple (and arguably the simplest sensible) structural assumption is that L^⊤ = L', and that L
is rank two, with factors of ones, as follows:
L = 𝟙· u^⊤ + v·𝟙^⊤;
equivalently, that exp(L) has rank one and only non-negative entries.
As mentioned above, features such as home
advantage may be added to the structured parameter matrices L or L' in the
way introduced in Section <ref>.
Also note that the above yields a strategy to make ternary predictions while training on the scores.
Namely, a prediction for ternary match outcomes may simply be derived from predicted
score differences Y_ij, through defining
P(win) = P(Y_ij>0)
P(draw) = P(Y_ij=0)
P(lose) = P(Y_ij<0)
In contrast to the direct method in
Section <ref>,
the probability of draw can now be calculated without introducing
an additional cut-off parameter.
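As an illustration of this strategy, the following sketch uses scipy's implementation of the Skellam distribution to turn the two structured parameters into ternary outcome probabilities; the function name and interface are our own illustrative choices.

```python
import numpy as np
from scipy.stats import skellam

def ternary_from_skellam(L_ij, Lp_ij):
    """Win/draw/lose probabilities implied by the Skellam score-difference
    model with mean parameters exp(L_ij) and exp(L'_ij)."""
    mu1, mu2 = np.exp(L_ij), np.exp(Lp_ij)
    p_win = skellam.sf(0, mu1, mu2)     # P(Y_ij > 0)
    p_draw = skellam.pmf(0, mu1, mu2)   # P(Y_ij = 0)
    p_lose = skellam.cdf(-1, mu1, mu2)  # P(Y_ij < 0)
    return p_win, p_draw, p_lose
```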
§.§ Training of structured log-odds models
In this section, we introduce batch and on-line learning strategies for structured log-odds models,
based on gradient descent on the parametric likelihood.
The methods are generic in the sense that the exact structural assumptions of the model will affect the
exact form of the log-likelihood, but not the main algorithmic steps.
§.§.§ The likelihood of structured log-odds models
We derive a number of re-occurring formulae for the likelihood of structured log-odds models.
For this, we will subsume all structural assumptions on L in the form of a parameter θ
on which L may depend, say in the cases mentioned in Section <ref>.
In each case, we consider θ to be a real vector of suitable length.
The form of the learning step(s) is slightly different depending on the chosen link function/distribution, hence
we start with our derivations in the case of binary prediction, where 𝒴 = {1,0}, and discuss ternary
and score outcomes further below.
In the case of binary prediction, it holds for the (one-outcome log-likelihood) that
ℓ (θ|X_ij,Y_ij) = Y_ijlog (p_ij) + (1-Y_ij)log (1-p_ij)
= Y_ij L_ij + log(1-p_ij) = Y_ij L_ij - L_ij + log(p_ij).
Similarly, for its derivative one obtains
∂ℓ (θ|X_ij,Y_ij)/∂θ = ∂/∂θ[Y_ijlog p_ij+(1-Y_ij)log(1-p_ij)]
= [Y_ij/p_ij - 1-Y_ij/1-p_ij]·∂ p_ij/∂θ
= [Y_ij-p_ij]·∂ L_ij/∂θ
where we have used definitions for the first equality, the chain rule for the second, and for the last equality that
∂σ(x)/∂ x = σ(x) (1-σ(x)),   ∂ p_ij/∂ x = p_ij(1-p_ij)·∂ L_ij/∂ x.
In all the above, derivatives with respect to θ are to be interpreted as (entry-wise) vector derivatives; equivalently, the equations hold for any
coordinate of θ in place of θ.
As an important consequence of the above, the derivative of the log-likelihood
almost has the same form (<ref>) for different model variants, and
differences only occur in the gradient
term ∂ L_ij/∂θ; the term
[Y_ij-p_ij] may be interpreted as a prediction residual, with p_ij depending
on X_ij for a model with features. This fact enables
us to obtain unified training strategies for a variety of structured log-odds models.
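To make the role of the gradient term concrete, consider the feature-free rank-one case L_ij = θ_i - θ_j of the Bradley-Terry-Élő model, where ∂L_ij/∂θ = e_i - e_j, a difference of indicator vectors. A minimal sketch (reusing the sigmoid helper from above; names are ours):

```python
def grad_one_outcome(theta, i, j, y):
    """Gradient of the one-outcome log-likelihood in the rank-one case,
    where L_ij = theta[i] - theta[j] and y is 1 (win) or 0 (loss)."""
    p_ij = sigmoid(theta[i] - theta[j])
    residual = y - p_ij                 # the [Y_ij - p_ij] term
    grad = np.zeros_like(theta)
    grad[i] += residual                 # dL_ij / dtheta_i = +1
    grad[j] -= residual                 # dL_ij / dtheta_j = -1
    return grad
```

A single gradient-ascent step with learning rate K on this expression has the form θ_i ← θ_i + K(Y_ij - p_ij), which is exactly an update of the classical Élő type.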
For multiple class prediction as in the ordinal or score case, the above
generalizes relatively straightforwardly. The one-outcome log-likelihood is given as
ℓ (θ|X_ij,Y_ij) = ∑_y∈𝒴 Y_ij[y] log p_ij[y]
where, abbreviatingly, p_ij[y] = P(Y_ij = y), and Y_ij[y] is one iff Y_ij takes the value y, otherwise zero.
For the derivative of the log-likelihood, one hence obtains
∂ℓ (θ|X_ij,Y_ij)/∂θ = ∂/∂θ∑_y∈𝒴 Y_ij[y] log (p_ij[y])
= ∑_y∈𝒴Y_ij[y]/p_ij[y]·∂ p_ij[y]/∂θ
= ∑_y∈𝒴[Y_ij[y]· (1-p_ij[y])]·∂ L_ij[y]/∂θ,
where L_ij[y]:= logit p_ij[y].
This is in complete analogy to the binary case, except for the very final cancellation which does not occur.
If Y_ij is additionally assumed to follow a concrete distributional form (say Poisson or Skellam), the expression may be further simplified.
In the subsequent sections, however, we will continue with the binary case only, due to the relatively straightforward analogy through the above.
In either case, we note the similarity with back-propagation in neural networks, where
the derivatives ∂ L_ij[y]/∂θ correspond to a “previous layer”. Though we would like to note
that differently from the standard multilayer perceptron, additional structural constraints on this layer
are encoded through the structural assumptions in the structured log-odds model.
Exploring the benefit of such constraints in general neural network layers is beyond the scope of this manuscript,
but a possibly interesting avenue to explore.
§.§.§ Batch training of structured log-odds models
We now consider the case where a batch of multiple training outcomes
𝒟 = {(X^(1)_i_1j_1,Y^(1)_i_1j_1),…,(X^(N)_i_Nj_N,Y^(N)_i_Nj_N)}
have been observed, and we would like to train the model parameters by maximizing the log-likelihood, compare the discussion in Section <ref>.
In this case, the batch log-likelihood of the parameters θ and its derivative take the form
ℓ (θ|𝒟) = ∑_k=1^N ℓ(θ|(X_i_kj_k^(k),Y_i_kj_k^(k)))
= ∑_k=1^N [ Y_i_kj_k^(k)log(p_i_kj_k^(k)) + (1-Y_i_kj_k^(k))log(1-p_i_kj_k^(k))]
∂/∂θℓ (θ|𝒟) = ∑_k=1^N [Y_i_kj_k^(k)-p_i_kj_k^(k)]·∂ L_i_kj_k^(k)/∂θ
Note that in general, both p_i_kj_k^(k) and L_i_kj_k^(k) will depend on the respective features X_i_kj_k^(k) and the
parameters θ, which is not made explicit for notational convenience.
The term [Y_i_kj_k^(k)-p_i_kj_k^(k)] may again be interpreted as a sample of prediction residuals, similar to the one-sample case.
By the maximum likelihood method, the maximizer θ̂ := argmax_θ ℓ (θ|𝒟) is an estimate for the generative θ.
In general, unfortunately, an analytic solution will not exist; nor need the optimization be convex — for the Bradley-Terry-Élő model it is (being equivalent to logistic regression), but the higher-rank variants are not.
Hence, gradient ascent and/or non-linear optimization techniques need to be employed.
An interesting property of the batch optimization is that a-priori setting a “K-factor” is not necessary.
While it may re-enter as the learning rate in a gradient ascent strategy, such parameters may be tuned
in re-sampling schemes such as k-fold cross-validation.
It also removes the need for a heuristic that determines new players' ratings (or more generally: factors),
as the batch training procedure may simply be repeated with such players' outcomes included.
§.§.§ On-line training of structured log-odds models
In practice, the training data accumulate through time, so we need
to re-train the model periodically in order to capture new information.
That is, we would like to address the situation where training data X_ij(t),Y_ij(t) are observed
at subsequent different time points.
The above-mentioned vicinity of structured log-odds models to neural networks
and standard stochastic gradient descent strategies directly yields a
family of possible batch/epoch on-line strategies for structured log-odds models.
To be more mathematically precise (and noting that the meaning of batch and epoch is not consistent across literature):
Let 𝒟 = {(X^(1)_i_1j_1(t_1),Y^(1)_i_1j_1(t_1)),…, (X^(N)_i_Nj_N(t_N),Y^(N)_i_Nj_N(t_N))}
be the observed training data points, at the (not necessarily distinct) time points 𝒯 = {t_1,…, t_N} (hence 𝒯 can be a multi-set).
We will divide the time points into blocks 𝒯_0,…, 𝒯_B in a sequential way,
i.e., such that ∪_i=0^B 𝒯_i = 𝒯, and for any two distinct k,ℓ, either x<y for all x∈𝒯_k,y∈𝒯_ℓ, or x>y
for all x∈𝒯_k,y∈𝒯_ℓ. These time blocks give rise to the training data
batches 𝒟_i:={(x,y)∈𝒟 : (x,y) is observed at a time t∈𝒯_i}.
The cardinality of 𝒟_i is called the batch size of the i-th batch.
We single out the 0-th batch as the “initial batch”.
The stochastic gradient descent update will be carried out, for the i-th batch, τ_i times.
The i-th epoch is the collection of all such updates using batch _i, and τ_i is called the epoch size (of epoch i).
Usually, all batches except the initial batch will have equal batch sizes and epoch sizes.
The general algorithm for the parameter update is summarized as stylized pseudo-code as Algorithm <ref>.
Of course, any more sophisticated variant of stochastic gradient descent/ascent may be used here as well,
though we did not explore such possibilities in our empirical experiments and leave this for interesting future investigations.
Important such variants include re-initialization strategies, selecting the epoch size τ_i data-dependently by convergence criteria,
or employing smarter gradient updates, such as with data-dependent learning rates.
Note that the update rule applies for any structured log-odds model as long as
∂ℓ (θ|𝒟_i)/∂θ is easily obtainable,
which should be the case for any reasonable parametric form and constraints.
Note that the online update rule may also be used to update, over time, structural model parameters such as home advantage and feature coefficients.
Of course, some parameters may also be regarded as classical hyper-parameters and tuned via grid or random search on a validation set.
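For illustration, the following stylized sketch implements the core loop of Algorithm <ref> for the rank-one case, assuming a one-outcome gradient function grad_one_outcome as sketched earlier; it is a minimal rendering of the pseudo-code under these assumptions, not a tuned implementation.

```python
def fit_online(theta, batches, tau, eta):
    """Stylized batch/epoch on-line training (cf. Algorithm <ref>).

    batches : list of batches, each a list of (i, j, y) outcome triples,
              ordered in time; batches[0] is the initial batch
    tau     : epoch size, i.e., number of gradient passes per batch
    eta     : learning rate, playing the role of the K-factor
    """
    for batch in batches:
        # predictions for this batch would be made with the current theta,
        # *before* the parameters are updated on the batch's outcomes
        for _ in range(tau):
            grad = np.zeros_like(theta)
            for (i, j, y) in batch:
                grad += grad_one_outcome(theta, i, j, y)
            theta = theta + eta * grad  # gradient *ascent* on log-likelihood
    return theta
```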
There are multiple trade-offs involved in choosing the batches and epochs:
(i) Using more, possibly older outcomes vs emphasizing more recent outcomes.
Choosing a larger epoch size will yield a parameter closer to the maximizer of the likelihood given the most recent batch(es).
It is widely hypothesized that the team's performance changes gradually over time.
If the factors change quickly, then more recent outcomes should be emphasized via larger epoch size.
If they do not, then using more historical data via smaller epoch sizes is a better idea.
(ii) Expending less computation for a smooth update vs expending more computation for a more accurate update.
Choosing a smaller learning rate will avoid “overshooting” local maximizers of the likelihood, or oscillations,
though it will make a larger epoch size necessary for convergence.
We single out multiple variants of the above to investigate the above trade-off and empirical merits of different on-line training strategies:
(i) Single-batch max-likelihood, where there is only the initial batch (B=0), and a very large number of epochs (until convergence of the log-likelihood).
This strategy, in essence, disregards any temporal structure and is equivalent to the classical maximum likelihood approach under the given model assumptions.
It is the “no time structure” baseline, i.e., it should be improved upon for the claim that there is temporal structure.
(ii) Repeated re-training applies the single-batch max-likelihood strategy afresh at regular intervals.
Strictly speaking not a special case of Algorithm <ref>, this is a less sophisticated and possibly much more computationally expensive baseline.
(iii) On-line learning is Algorithm <ref> with all batch and epoch sizes equal, parameters tuned on a validation set.
This is a “standard” on-line learning strategy.
(iv) Two-stage training, where the initial batch and epoch size is large, and all other batch and epoch sizes are equal, parameters tuned on a validation set.
This is single-batch max-likelihood on a larger corpus of not completely recent historical data, with on-line updates starting only in the recent past.
The idea is to get an accurate initial guess via the larger batch which is then continuously updated with smaller changes.
In this manuscript, the most recent model will only be used to predict the labels/outcomes in the most recent batch.
§.§ Rank regularized log-odds matrix estimation
All the structured log-odds models we discussed so far make explicit
assumptions about the structure of the log-odds matrix. An alternative
way is to encourage the log-odds matrix to be more structured by imposing
an implicit penalty on its complexity. In this way, there is no need to specify
the structure explicitly. The trade-off between the log-odds matrix's
complexity and its ability to explain observed data is tuned by validation
on an evaluation data set.
The discussion will be based on the binary outcome model from Section <ref>.
Without any further
assumption about the structure of P or L, the maximum
likelihood estimate for each p_ij is given by
p̂_ij:=W_ij/N_ij
where W_ij is the number of matches in which team i beats
team j, and N_ij is the total number of matches between team
i and team j. As we have assumed observations of wins/losses
to be independent, this immediately yields
P̂ := W / N, as the maximum likelihood estimate for P,
where P̂, W, N are the matrices with p̂_ij,W_ij,N_ij
as entries and division is entry-wise.
Using the invariance of the maximum likelihood estimate under the bijective transformation
L_ij = logit(p_ij), one obtains the maximum likelihood estimate for L_ij as
L̂_ij=log(p̂_ij/(1-p̂_ij))= log W_ij - log W_ji,
or, more concisely, L̂ = log W - log W^⊤, where the log is entry-wise.
We will call the matrix L̂ the empirical log-odds matrix. It is worth noticing that the empirical
log-odds matrix gives the best explanation in a maximum-likelihood sense,
in the absence of any further structural restrictions.
Hence, any log-odds matrix additionally restricted by structural assumptions will achieve a lower likelihood on the observed data.
However, in practice
the empirical log-odds matrix often has very poor predictive performance
because the estimate tends to have very large variance, whose asymptotic behaviour is governed by the
number of times each entry is observed (which in practice is usually very small or even zero).
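As a minimal sketch, the empirical log-odds matrix may be computed as follows; the additive smoothing constant eps is our own assumption here, anticipating the ϵ-regularization used in the experiments below, and guards against infinite entries when some W_ij is zero.

```python
import numpy as np

def empirical_log_odds(W, eps=0.01):
    """Empirical log-odds matrix L-hat = log W - log W^T, with additive
    smoothing so that zero win counts do not produce infinite entries."""
    Ws = W + eps
    return np.log(Ws) - np.log(Ws.T)
```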
This variance may be reduced by regularising the complexity
of the estimated log-odds matrix. Common complexity measures of a
matrix are its matrix norms <cit.>.
A natural choice is the nuclear norm or trace norm, which is a continuous surrogate for
rank and has found a wide range of machine-learning applications including matrix completion
<cit.>.
Recall, the trace norm of an (n× n) matrix A is defined as
A_*=∑_k=1^nσ_k
where σ_k is the k^th singular value of the matrix A.
The close relation to the rank of A stems from the fact that the rank is the number of non-zero singular values.
When used in optimization, the trace norm behaves similar to the one-norm in LASSO type models,
yielding convex loss functions and forcing some singular values to be zero.
This principle can be used to obtain the following optimization program for regularized log-odds matrix estimation:
min_L ‖L - L̂‖_F^2 + λ‖L‖_*
subject to L+L^⊤=0
The first term is a Frobenius norm “error term”, equivalent to a squared loss
‖L - L̂‖_F^2 = ∑_i,j(L_ij-L̂_ij)^2,
which is used instead of the log-likelihood function in order to ensure convexity of
the objective function.
There is a well-known bound on the trace norm of a matrix <cit.>:
For any X∈ℝ^n× m, and t∈ℝ, ||X||_*≤ t
if and only if there exists A∈𝕊^n and B∈𝕊^m
such that [[ A X; X^⊤ B ]]≽0 and 1/2(tr(A)+tr(B))≤ t. Using this bound,
we can introduce two auxiliary matrices A and B and solve an
equivalent problem:
min_{A,B,L} ‖L - L̂‖_F^2+λ/2(tr(A)+tr(B))
subject to [[ A L; L^⊤ B ]]≽0
L+L^⊤=0
This is a Quadratic Program with a positive semi-definite constraint
and a linear equality constraint. It can be efficiently solved by
the interior point method <cit.>, and
alternative algorithms for large scale settings also exist <cit.>.
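In practice, a convex modelling tool such as cvxpy accepts the program directly in its original form and performs a suitable reformulation internally; a minimal sketch (with lam standing for λ, and the function name being ours):

```python
import cvxpy as cp

def regularized_log_odds(L_hat, lam):
    """Trace-norm regularized estimate of the log-odds matrix."""
    n = L_hat.shape[0]
    L = cp.Variable((n, n))
    objective = cp.Minimize(cp.sum_squares(L - L_hat) + lam * cp.normNuc(L))
    constraints = [L + L.T == 0]  # antisymmetry of the log-odds matrix
    cp.Problem(objective, constraints).solve()
    return L.value
```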
The estimation procedure can be generalized to model ternary match
outcomes. Without any structural assumption, the maximum likelihood
estimate for p_ij[k]:=P(Y_ij=k) is given by
p̂_ij[k] := W_ij[k]/N_ij
where Y_ij is the ternary match outcome between team i and
team j, and k takes values in a discrete set of ordered levels.
W_ij[k] is the number of matches between i and j in which
the outcome is k. N_ij is the total number of matches between
the two teams as before.
We now define
L_ij^(1) := log(p_ij[win]/(p_ij[draw] + p_ij[lose]))   L_ij^(2) := log((p_ij[win]+ p_ij[draw])/p_ij[lose])
The maximum likelihood estimate for L_ij^(1) and L_ij^(2)
can be obtained by replacing p_ij[k] with the
corresponding p̂_ij[k] in the definitions above,
yielding maximum likelihood estimates L̂_ij^(1) and L̂_ij^(2).
As in Section <ref>, we make an implicit assumption of proportional odds
for which we will regularize, namely that L_ij^(2)=L_ij^(1)+ϕ. For this, we obtain a new
convex objective function
min_{L,ϕ} ‖L̂^(1)-L‖_F^2+‖L̂^(2)-L-ϕ·1·1^⊤‖_F^2+λ‖L‖_*.
The optimal value of L is a regularized estimate of L^(1), and L + ϕ·1·1^⊤
is a regularized estimate of L^(2).
The regularized log-odds matrix estimation method is quite experimental
as we have not established a mathematical proof for the error bound. Further research is also needed
to find an on-line update formula for this method.
We leave these as open questions for future investigations.
§ EXPERIMENTS
We perform two sets of experiments to validate the practical usefulness of
the novel structured log-odds models, including the Bradley-Terry-Élő model.
More precisely, we validate
(i) in the synthetic experiments in Section <ref> that the (feature-free) higher-rank models in Section <ref> outperform the standard Bradley-Terry-Élő model
if the generative process is higher-rank.
(ii) in real world experiments on historical English Premier League pairings, in Section <ref>,
structured log-odds models that use features as proposed in
Section <ref>, and the two-stage training method as proposed in Section <ref> outperform methods that do not.
In either setting, the methods outperform naive baselines, and their performance is similar to predictions derived from betting odds.
§.§ Synthetic experiments
In this section, we present the experiment results over synthetic
data sets. The goal of these experiments is to show that the newly
proposed structured log-odds models perform better than the original
Élő model when the data were generated following the new
models' assumptions. The experiments also show the validity of the
parameter estimation procedure.
The synthetic data are generated according to the assumptions of the
structured log-odds models (<ref>). To recap, the
data generation procedure is the following.
* The binary match outcome y_ij is sampled from a Bernoulli distribution
with success probability p_ij,
* The corresponding log-odds matrix L has a certain structure,
* The match outcomes are sampled independently (there is no temporal
effect)
As the first step in the procedure, we randomly generate a ground
truth log-odds matrix with a certain structure. The structure depends
on the model in question and the matrix generation procedure is different
for different experiments. The match outcomes y_ij's are sampled
independently from the corresponding Bernoulli random variables with
success probabilities p_ij derived from the true log-odds matrix.
For a given ground truth matrix, we generate a validation set and
an independent test set in order to tune the hyper-parameter. The
hyper-parameters are the K factor for the structured log-odds
models, and the regularizing strength λ for regularized
log-odds matrix estimation. We perform a grid search to tune the hyper-parameter.
We choose the hyper-parameter to be the one that achieves the best
log-likelihood on the validation set. The model with the selected
hyper-parameter is then evaluated on the test set. This validation
setting is sound because of the independence assumption (<ref>).
The tuned model gives a probabilistic prediction for each match in
the test set. Based on these predictions, we can calculate the mean
log-likelihood or the mean accuracy on the test set. If two models
are evaluated on the same test set, the evaluation metrics for the
two models form a paired sample. This is because the metrics depend
on the specific test set.
In each experiment, we replicate the above procedure many times.
In each replication, a new ground truth log-odds matrix is generated,
and the models are tuned and evaluated. Each replication hence produces
a paired sample of evaluation metrics, because the metrics for different
models are computed on the same test set within a replication.
We would like to know which model performs better given the data generation
procedure. This question can be answered by performing hypothesis
testing on paired evaluation metrics produced by the replications.
We will use the paired Wilcoxon test because the normality assumption
required by the paired t-test is violated.
The experiments do not aim at comparing different training methods
(<ref>). Hence, all models in an
experiment are trained using the same method to enable an apples-to-apples
comparison. In experiments <ref> and <ref>,
the structured log-odds models and the Bradley-Terry-Élő model are trained
by the online update algorithm. Experiment (<ref>)
concerns the regularized log-odds matrix estimation, whose online
update algorithm is yet to be derived. Therefore, all models in section
<ref> are trained using batch
training method.
The experiments all involve 47 teams [Forty-seven teams played in the English Premier league between 1993
and 2015]. Both validation and test set include four matches between each pair
of teams.
§.§.§ Two-factor Bradley-Terry-Élő model
This experiment is designed to show that the two-factor
model is superior to the Bradley-Terry-Élő model if the true log-odds matrix
is a general rank-two matrix.
Components in the two factors u and v are independently generated
from a Gaussian distribution with μ=1 and σ=0.7. The
true log-odds matrix is calculated as in equation <ref>
using the generated factors. The rest of the procedure is carried
out as described in section <ref>. This procedure
is repeated for two hundred times.
The two hundred samples of paired mean accuracy and paired mean log-likelihood
are visualized in figure <ref> and <ref>.
Each point represents an independent paired sample.
Our hypothesis is that if the true log-odds matrix is a general rank-two
matrix, the two-factor Élő model is likely to perform better
than the original Élő model. We perform Wilcoxon test on
the paired samples obtained in the experiments. The two-factor Élő
model produces significantly better results in both metrics (one-sided
p-value is 0.046 for accuracy and less than 2^-16 for mean log-likelihood).
§.§.§ Rank-four Bradley-Terry-Élő model
These two experiments are designed to compare the rank-four Élő
model to the two-factor Élő model when the true log-odds
matrix is a rank-four matrix.
The first experiment considers the scenario when all singular values
of the true log-odds matrix are big. In this case, the best rank-two
approximation to the true log-odds matrix will give a relatively large
error because the third and fourth singular components cannot be recovered.
The log-odds matrices considered in this experiment takes the following
form
L=s_1· u· v^⊤+s_2·θ·1^⊤-s_1· v· u^⊤-s_2·1·θ^⊤
, where s_1 and s_2 are the two distinct singular values,
1 denotes the normalized vector of ones, and the vectors
1, u, v and θ are orthonormal. This
formulation is based on the decomposition of a real antisymmetric
matrix stated in section <ref>. The
true log-odds matrix L has four non-zero singular values: s_1
and s_2, each with multiplicity two. In the experiment, s_1=25
and s_2=24.
The rest of the data generation and validation setting is the same
as the experiments in section <ref>. The procedure
is repeated for 100 times. We applied the paired Wilcoxon test to
the 100 paired evaluation results. The test results support the hypothesis
that the rank-four Élő model performs significantly better
in both metrics (one-sided p-value is less than 2^-16 for both
accuracy and mean log-likelihood).
In the second experiment, the components in factors u, v and
θ are independently generated from a Gaussian distribution
with μ=1 and σ=0.7. The log-odds matrix is then calculated
using equation <ref> directly. The factors are
no longer orthogonal and the second pair of singular values are often
much smaller than the first pair. In this case, the best rank-two
approximation will be close to the true log-odds matrix.
The procedure is repeated for 100 times again using the same data
generation and validation setting. Paired Wilcoxon test shows rank-four
Élő model achieves significantly higher accuracy on the test
data (one-sided p-value is 0.015), but the mean log-likelihood is
not significantly different (p-value is 0.81).
The results of the above two experiments suggest that the rank-four
Élő model will have significantly better performance when
the true log-odds matrix has rank four and it cannot be approximated
well by a rank-two matrix.
§.§.§ Regularized log-odds matrix estimation
In the following two experiments, we want to compare the regularized
log-odds matrix estimation method with various structured log-odds
models.
To carry out regularized log-odds matrix estimation, we need to first
get an empirical estimate of log-odds on the training set. Since there
are only four matches between any pair of teams in the training data,
the estimate of log-odds often turn out to be infinity due to division
by zero. Therefore, I introduced a small regularization term in the
estimation of empirical winning probability p̂=n_win+ϵ/n_total+2ϵ,
where ϵ is set to be 0.01. Then, we obtain the smoothed
log-odds matrix by solving the optimization problem described in section
<ref>. A sequence of λ's
are fitted, and the best one is chosen according to the log-likelihood
on the evaluation set. The selected model is then evaluated on the
testing data set.
Structured log-odds models with different structural assumptions are
used for comparison. We consider the Élő model, two-factor
Élő model, and rank-four Élő model. For each of
the three models, we first tune the hyper-parameter on a further split
of training data. Then, we evaluate the models with the best hyper-parameter
on the evaluation set and select the best model. Finally, we test
the selected model on the test set to produce evaluation metrics.
This experiment setting imitates the real application where we need
to select the model with best structural assumption.
In order to compare fairly with the trace norm regularization method
(which is currently a batch method), the structured log-odds models
are trained with batch method and the selected model is not updated
during testing.
In the first experiment, it is assumed that the structure of log-odds
matrix follows the assumption of the rank-four Élő model.
The log-odds matrix is generated using equation (<ref>)
with s_1=25 and s_2=2.5. The data generation and hypothesis
testing procedure remains the same as previous experiments. Paired
Wilcoxon test is performed to examine the hypothesis that regularized
log-odds model produces higher out-of-sample log-likelihood. The testing
result is in favour of this hypothesis (p-value is less than 10^-10).
In the second experiment, it is assumed that the structure of log-odds
matrix follows the assumption of the Élő model (section <ref>).
The true Élő ratings are generated using a normal distribution
with mean 0 and standard deviation 0.8. Paired Wilcoxon test
shows no significant difference in out-of-sample likelihood between
the tuned structured log-odds model and trace norm regularization
(two-sided p-value = 0.09).
The experiments show that regularized log-odds estimation can adapt
to different structures of the log-odds matrix by varying the regularization
parameter. Its performance on the simulated data sets is not worse than
that of the tuned structured log-odds models.
§.§ Predictions on the English Premier League
§.§.§ Description of the data set
The whole data set under investigation consists of English Premier
League football matches from 1993-94 to 2014-15 season. There are
8524 matches in total. The data set contains the date of the match, the home team, the away
team, and the final scores for both teams. The English Premier League is chosen as a representative
of competitive team sports because of its high popularity. In each
season, twenty teams will compete against each other using the double
round-robin system: each team plays the others twice, once at the
home field and once as guest team. The winner of each match scores
three championship points. If the match draws, both teams score one
point. The final ranking of the teams is determined by the championship
points scored in the season. The team with the highest rank will be
the champion and the three teams with the lowest rank will move to
Division One (a lower-division football league) next season. Similarly,
three best performing teams will be promoted from Division One into
the Premier League each year. In the data set, 47 teams has played
in the Premier League. The data set is retrieved
from http://www.football-data.co.uk/.
The algorithms are allowed to use all available information prior
to the match to predict the outcome of the match (win, lose, draw).
§.§.§ Validation setting
In the study of the real data set, we need a proper way to quantify
the predictive performance of a model. This is important for two reasons.
Firstly, we need to tune the hyper-parameters in the model by performing
model validation. The hyper-parameters that bring best performance
will be chosen. More importantly, we wish to compare the performance
of different types of models scientifically. Such comparison is impossible
without a quantitative measure on model performance.
It is a well-known fact that the errors made on the training data
will underestimate the model's true generalization error. The common
approaches to assess the goodness of a model include cross validation
and bootstrapping <cit.>. However,
both methods assume that the data records are statistically independent.
In particular, the records should not contain temporal structure.
In the literature, the validation for data with temporal structure
is largely an unexplored area. However, the independence assumption
is plausibly violated in this study and it is highly likely to affect
the result. Hence, we designed an set of ad-hoc validation methods
tailored for the current application.
The validation method takes two disjoint data sets, the training data
and the testing data. We concatenate the training and testing data
into a single data set and partition it into batches
following the definitions given in <ref>. We then run Algorithm <ref> on 𝒟,
but only collect the predictions of matches in the testing data. Those
predictions are then compared with the real outcomes in the testing
data and various evaluation metrics can be computed.
The exact way to obtain batches will depend on the
training method we are using. In the experiments, we are mostly interested
in the repeated batch re-training method (henceforth batch training
method), the on-line training method and the two-stage training method.
For these three methods, the batches are defined as follows.
* Batch training method: the whole training data forms the initial batch
𝒟_0; the testing data is partitioned into similar-sized
batches based on time of the match.
* On-line training method: all matches are partitioned into similar-sized
batches based on time of the match.
* Two-stage method: the same as batch training method with a different
batch size on testing data.
In general, a good validation setting should resemble the usage of
the model in practice. Our validation setting guarantees that no future
information will be used in making current predictions. It is also
naturally related to the training algorithm presented in <ref>.
§.§.§ Prediction Strategy
Most models in this comparative study have tunable hyper-parameters.
Those hyper-parameters are tuned using the above validation settings.
We split the whole data set into three disjoint subsets, the training set, the tuning set and the testing set.
The first match in the training set is the one between Arsenal and Coventry on 1993-08-04, and the first match in the tuning set is the one between Aston Villa and Blackburn on 2005-01-01. The first match in the testing data is the match between Stoke and Fulham on 2010-01-05, and the last match in the testing set is between Stoke and Liverpool on 2015-05-24. The testing set has 2048 matches in total.
In the tuning step, we supply the training set and the tuning set to the validation procedure as the training and testing data.
To find the best hyper-parameter, we perform a grid search and the hyper-parameter which
achieves the highest out-of-sample likelihood is chosen.
In theory, the batch size and epoch size are tunable hyper-parameters, but in the experiments we choose these parameters based on our prior knowledge. For the on-line and two-stage method, each individual match in testing data is regarded as a batch. The epoch size is chosen to be one. This reflects the usual update rule of the conventional Élő ratings: the ratings are updated immediately after the match outcome becomes available. For the batch training method, matches taking place in the same quarter of the year are allocated to the same batch.
The model with the selected hyper-parameters is tested using
the same validation settings. The training data now consists of both training set and tuning set. The testing data is supplied with the testing set.
This prediction strategy ensures that the training-evaluating-testing
split is the same for all training methods, which means that the model
will be accessible to the same data set regardless of what training
method is being used. This ensures that we can compare different training methods fairly.
All the models will also be compared with a set of benchmarks. The first benchmark is a naive baseline which always predicts home team to win the match. The second benchmark is
constructed from the betting odds given by bookmakers. For each match, the bookmakers provide three odds for the three outcomes, win, draw and lose. The betting odds and the probabilities have the following relationship: P=1/odds. The probabilities implied by the betting odds are used as the prediction. However,
the bookmaker's odds include a vigorish, so the implied “probabilities” do
not sum to one. They are normalized by dividing each term by the sum to give valid probabilities.
The historical odds are also obtained from http://www.football-data.co.uk/.
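A minimal sketch of this de-vigging step (the function name is ours):

```python
import numpy as np

def odds_to_probs(odds):
    """Convert bookmaker odds for (win, draw, lose) into probabilities,
    renormalizing away the vigorish."""
    raw = 1.0 / np.asarray(odds, dtype=float)
    return raw / raw.sum()
```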
§.§.§ Quantitative comparison for the evaluation metrics
We use log-likelihood and accuracy on the testing data set as evaluation
metrics. We apply statistical hypothesis testing on the validation
results to compare the models quantitatively.
We calculate the log-likelihood on each test case for each model.
If we are comparing two models, the evaluation metrics for each test
case will form a paired sample. This is because test cases might be
correlated with each other and model's performance is independent
given the test case. The paired t-test is used to test whether there
is a significant difference in the mean of log-likelihood. We draw
independent bootstrap samples with replacement from the log-likelihood
values on test cases, and calculate the mean for each sample. We then
calculate the 95% confidence interval for the mean log-likelihood
based on the empirical quantiles of bootstrapped means <cit.>.
Five thousand bootstrap samples are used to calculate these intervals.
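A minimal sketch of this percentile bootstrap (parameter names and defaults are ours, mirroring the description above):

```python
import numpy as np

def bootstrap_mean_ci(values, n_boot=5000, level=0.95, seed=None):
    """Percentile bootstrap confidence interval for the mean of a sample
    of per-test-case metrics, e.g., log-likelihood values."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    means = np.array([rng.choice(values, size=values.size, replace=True).mean()
                      for _ in range(n_boot)])
    alpha = (1.0 - level) / 2.0
    return np.quantile(means, [alpha, 1.0 - alpha])
```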
The confidence interval for accuracy is constructed assuming the model's
prediction for each test case, independently, has a probability p
to be correct. The reported 95% confidence interval for Binomial
random variable is calculated from a procedure first given in <cit.>.
The procedure guarantees that the confidence level is at least 95%,
but it may not produce the shortest-length interval.
§.§.§ Performance of the structured log-odds model
We performed the tuning and validation of the structured log-odds
models using the method described in section <ref>.
The following list shows all models examined by this experiment:
* The Bradley-Terry-Élő model (section <ref>)
* Two-factor Bradley-Terry-Élő model (section <ref>)
* Rank-four Bradley-Terry-Élő model (section <ref>)
* The Bradley-Terry-Élő model with score difference (section <ref>)
* The Bradley-Terry-Élő model with two additional features (section <ref>)
All models include a free parameter for home advantage (see section
<ref>), and they are also able to capture the probability
of a draw (section <ref>). We have introduced
two covariates in the fifth model. These two covariates indicate whether
the home team or away team was just promoted from Division One this
season. We have also tested the trace norm regularized log-odds model,
but as indicated in section <ref>
the model still has many limitations for the application to the real
data. The validation results are summarized in table <ref>
and table <ref>.
The testing results help us understand the following two scientific
questions:
* Which training method brings the best performance to structured log-odds
models?
* Which type of structured log-odds model achieves best performance
on the data set?
In order to answer the first question, we test the following hypothesis:
(H1): Null hypothesis: for a certain model, two-stage training
method and online training method produce the same mean out-of-sample
log-likelihood. Alternative hypothesis: for a certain model two-stage
training method produces a higher mean out-of-sample log-likelihood
than online training method.
Here we compare the traditional on-line updating rule with the newly developed two-stage method.
The paired t-test is used to assess the above hypotheses. The p-values
are shown in table <ref>. The cell associated with the
Élő model with covariates is empty because the online training
method does not update the coefficients for features. The first column
of the table gives strong evidence that the two-stage training method
should be preferred over online training. All tests are highly significant
even if we take into account the issue of multiple testing.
In order to answer the second question, we compare the four new models
with the Bradley-Terry-Élő model. The hypothesis is formulated as
(H2): Null hypothesis: using the best training method, the new
model and the Élő model produce the same mean out-of-sample
log-likelihood. Alternative hypothesis: using the best training method,
the new model produces a higher mean out-of-sample log-likelihood
than the Élő model.
The p-values are listed in the last column of table <ref>.
The result also shows that adding more factors in the model does not
significantly improve the performance. Neither the two-factor model nor
the rank-four model outperforms the original Bradley-Terry-Élő model on the
testing data set. This might provide evidence for and justification of
using the Bradley-Terry-Élő model on real data sets. The model that uses
the score difference performs slightly better than the original Bradley-Terry-Élő
model. However, the difference in out-of-sample log-likelihood is
not statistically significant (the p-value for one-sided test is 0.24
for likelihood). Adding additional covariates about team promotion
significantly improves the Bradley-Terry-Élő model.
§.§.§ Performance of the batch learning models
This experiment compares the performance of batch learning models.
The following list shows all models examined by this experiment:
* GLM with elastic net penalty using multinomial link function
* GLM with elastic net penalty using ordinal link function
* Random forest
* Dixon-Coles model
The first three models are machine learning models that can be trained
on different features. The following features are considered in this
experiment:
* Team id: the identity of home team and away team
* Ranking: the team's current ranking in Championship points and goals
* VS: the percentage of time that home team beats away team in last
3, 6, and 9 matches between them
* Moving average: the moving average of the following monthly features
using lag 3, 6, 12, and 24
* percentage of winning at home
* percentage of winning away
* number of matches at home
* number of matches away
* championship points earned
* number of goals won at home
* number of goals won away
* number of goals conceded at home
* number of goals conceded away
The testing accuracy and out-of-sample log-likelihood are summarized
in table <ref> and table <ref>.
All models perform better than the baseline benchmark, but no model
seems to outperform the state-of-the-art benchmark (betting odds).
We applied statistical testing to understand the following questions
* Does the GLM with ordinal link function perform better than the GLM
with multinomial link function?
* Which set of features are most useful to make prediction?
* Which model performs best among GLM, Random forest, and Dixon-Coles
model?
For question one, we formulate the hypothesis as:
(H3): Null hypothesis: for a given set of features, the GLM with
ordinal link function and the GLM with multinomial link function produce
the same mean out-of-sample log-likelihood. Alternative hypothesis:
for a given set of features, the mean out-of-sample log-likelihood
is different for the two models.
The p-values for these tests are summarized in table <ref>.
In three out of four scenarios, the test is not significant. There
does not seem to be enough evidence against the null hypothesis. Hence,
we retain our belief that the GLMs with different link functions have
the same performance in terms of mean out-of-sample log-likelihood.
For question two, we observe that models with the moving average feature
have achieved better performance than the same model trained with
other features. We formulate the hypothesis as:
(H4): Null hypothesis: for a given model, the moving average
feature and an alternative feature set produce the same mean out-of-sample
log-likelihood. Alternative hypothesis: for a given model, the mean
out-of-sample log-likelihood is higher for the moving average feature.
The p-values are summarized in table <ref>. The
tests support our belief that the moving average feature set is the
most useful one among those examined in this experiment.
Finally, we perform comparison among different models. The comparisons
are made between the GLM with multinomial link function, Random forest,
and Dixon-Coles model. The features used are the moving average feature
set. The p-values are summarized in table <ref>.
The tests detect a significant difference between GLM and Random forest,
but the other two pairs are not significantly different. We apply
the p-value adjustment using Holm's method in order to control family-wise
type-one error <cit.>. The adjusted p-values are
not significant. Hence, we retain our belief that the three models
have the same predictive performance in terms of mean out-of-sample
log-likelihood.
§.§ Fairness of the English Premier League ranking
“Fairness” as a concept is statistically undefined and, due to its subjectivity, is not empirical unless based on people's opinions.
The latter may wildly differ and are not systematically accessible from our data set or in general.
Hence we will base our study of the Premier League ranking scheme's “fairness” on a surrogate derived
from the following plausibility considerations:
Ranking in any sport should plausibly be based on the participants' skill in competing in official events of that sport.
By definition the outcomes of such events measure the skill in competing at the sport, distorted by a possible component of “chance”.
The ranking, derived exclusively from such outcomes, will hence also be determined by the so-measured skills and a component of “chance”.
A ranking system may plausibly be considered fair if the final ranking is only minimally affected by whatever constitutes “chance”,
while accurately reflecting the ordering of participating parties in terms of skill, i.e., of being better at the game.
Note that such a definition of fairness is disputable, but it may agree with the general intuition when ranking players of games with a strong chance
component such as card or dice games, where cards dealt or numbers thrown in a particular game should, intuitively, not affect a player's rank,
as opposed to the player's skills of making the best out of a given dealt hand or a dice throw.
Together with the arguments from Section <ref> which argue for predictability-in-principle surrogating skill,
and statistical noise surrogating chance, fairness may be surrogated as the stability of the ranking under the best possible prediction
that surrogates the “true odds”.
In other words, if we let the same participants, under exactly the same conditions, repeat the whole season, and all that changes is
the dealt cards, the thrown numbers, and similar possibly unknown occurrences of “chance”,
are we likely to end up with the same ranking as the first time?
While of course this experiment is unlikely to be carried out in real life for most sports, the best possible prediction which is surrogated by the prediction by the best accessible predictive model yields a statistically justifiable estimate for the outcome of such a hypothetical real life experiment.
To obtain this estimate, we consider as the “best accessible predictive model”
the Bradley-Terry-Élő model with features, learnt by the two-stage update rule (see Section <ref>),
yielding a probabilistic prediction for every game in the season.
From these predictions, we may independently sample match outcomes and
final rank tables according to the official scoring and ranking rules.
Figure <ref> shows estimates for the distribution of ranks of Premier League teams participating in the 2010 season.
It may be observed that none of the teams, except Manchester United, ends up with the same rank they achieved in reality in more than 50% of the cases.
For most teams, the middle 50% are spread over 5 or more ranks, and for all teams, over 2 or more.
From a qualitative viewpoint, the outcome for most teams appears very random, hence the allocation of the final rank seems qualitatively similar to a game of chance;
notable exceptions are Manchester United and Chelsea, whose true final ranks fall within a narrow expected/predicted range. It is also worth noting that Arsenal
was predicted/expected among the first three with high confidence, but eventually ranked fourth.
The situation is qualitatively similar for later years, though not shown here.
§ DISCUSSION AND SUMMARY
We discuss our findings in the context of our questions regarding prediction
of competitive team sports and modelling of English Premier League outcomes, compare Section <ref>.
§.§ Methodological findings
As the principal methodological contribution of this study, we have formulated the Bradley-Terry-Élő model
in a joint form, which we have extended to the flexible class of structured log-odds models.
We have found structured log-odds models to be potentially useful in the following way:
(i) The formulation of the Bradley-Terry-Élő model as a parametric model within a supervised on-line setting solves a number of open issues of the heuristic Élő model, including setting of the K-factor and new players/teams.
(ii) In synthetic experiments, higher rank Élő models outperform the Bradley-Terry-Élő model in predicting competitive outcomes if the generative truth is higher rank.
(iii) In real world experiments on the English Premier league, we have found that the extended capability of structured log-odds models to make use of features is useful as it allows better prediction of outcomes compared to not using features.
(iv) In real world experiments on the English Premier league, we have found that our proposed two-stage training strategy for on-line learning with structured log-odds models is useful as it allows better prediction of outcomes compared to using standard on-line strategies or batch training.
We would like to acknowledge that many of the mentioned suggestions and extensions are already found in existing literature, while, similarly to the Bradley-Terry and Élő models in which parsimonious parametric form and on-line learning rule have been separated, those ideas usually appear in isolation rather than joined into a whole.
We also anticipate that the highlighted connections to generalized linear models, low-rank matrix completion and neural networks may prove fruitful in future investigations.
§.§ Findings on the English Premier League
The main empirical findings on the English Premier League data may be described as follows.
(i) The best predictions, among the methods we compared, are obtained from a structured log-odds model with rank one and added covariates (league promotion), trained via the two-stage strategy. Not using covariates or the batch training method makes the predictions (significantly) worse (in terms of out-of-sample likelihood).
(ii) All our models and those we adapted from literature were outperformed by the Bet365 betting odds.
(iii) However, all informed models were very close to each other and to the Bet365 betting odds in performance, and not much better than the uninformed baseline of a team-independent home-team win/draw/lose distribution.
(iv) Ranking tables obtained from the best accessible predictive model (as a surrogate for the actual process by which it is obtained, i.e., the games proper) are, qualitatively, quite random, to the extent that most teams may end up in wildly different parts of the final table.
While we were able to present a parsimonious and interpretable state-of-art model for outcome prediction for the English Premier League,
we found it surprising how little the state-of-art improves above an uninformed guess which already predicts almost half the (win/lose/draw) outcomes correctly,
while differences between the more sophisticated methods range in the percents.
Given this, it is probably not surprising that a plausible surrogate for humanity's “secret” or non-public knowledge of competitive sports prediction,
the Bet365 betting odds, is not much better either. Note that this surrogate property is strongly plausible from noticing that offering odds leading to a worse prediction
leads to an expected loss in money, hence the market indirectly forces bookmakers to disclose their best prediction[
The expected log-returns of a fractional portfolio where a fraction q_i of the money is bet on outcome i against a bookmaker whose odds correspond to probabilities p_i
are [L_ℓ (p,Y)] - [L_ℓ (q,Y)] - c where L_ℓ is the log-loss and c is a vigorish constant. In this utility quantifier, portfolio composition and bookmaker odds
are separated, hence in a game theoretic adversarial minimax/maximin sense, the optimal strategies consist in the bookmaker picking p and the player picking q to be their best possible/accessible prediction, where “best” is measured through expected log-loss (or an estimate thereof). Note that this argument does not take into account behavioural aspects
or other utility/risk quantifiers such as a possible risk premium, so one should consider it only as an approximation, though one that is plausibly sufficient for the qualitative discussion in-text.
].
The continued existence of betting companies may hence lead to the belief that their business is possibly sustained by
predictions of ordinary people engaged in betting that are worse than uninformed, rather than by the betting companies' capability of predicting better.
Though we have not extensively studied betting companies empirically, hence this latter belief is entirely conjectural.
Finally, the extent to which the English Premier League is unpredictable raises an important practical concern:
influential factors cannot be determined from the data if prediction is impossible, since by recourse to the scientific method
we take an influential factor to be one that improves prediction.
Our results above allow us to definitively conclude only three such factors which are observable, namely a general “good vs bad” quantifier for whatever one may consider as a team's “skills”, which of the teams is at home, and whether the team is new to the league.
As an observation, this is not very deep or unexpected - the surprising aspect is that we were not able to find evidence for more.
On a similar note, it is surprising how volatile a team's position in the final ranking tables seems to be, given the best prediction we were able to achieve.
Hence it may be worthwhile to attempt to understand the possible sources of the observed nigh-unpredictability.
On one hand, it can simply be that the correct models are unknown to us and the right data to make a more accurate prediction have been disregarded by us.
Though this is made implausible by the observation that the betting odds are similarly bad in predicting, which is somewhat surprising as we have not used much of possibly available detail data such as in-match data and/or player data (which are heavily advertised by commercial data providers these days).
On the other hand, unpredictability may be simply due to a high influence of chance inherent to English Premier League games,
similar to a game of dice that is not predictable beyond the correct odds.
Such a situation may plausibly occur if the “skill levels” of all the participating teams are very close - in an extreme case,
where 20 copies of the same team play against each other, the outcome would be entirely up to chance as the skills match exactly, no matter how good or bad these are.
Rephrased differently, a game of skill played between two players of equal skill becomes a game of chance.
Other plausible causes of the situation are that the outcome of a Premier League game is more governed by chance and coincidence than by skills in the first place,
or that there are unknown influential factors which are unobserved and possibly distinct from both chance or playing skills.
Of course, the mentioned causes do not exclude each other and may be present in varying degrees not determinable from the data considered in this study.
From a team's perspective, it may hence be interesting to empirically re-evaluate measures that are very costly or resource consuming under the aspect of
predictive influence in a similar analysis, say.
§.§ Open questions
A number of open research questions and possible further avenues of investigation have already been pointed out in-text.
We summarize what we believe to be the most interesting avenues for future research:
(i) A number of parallels have been highlighted between structured log-odds models and neural networks.
It would be interesting to see whether adding layers or other ideas of neural network flavour are beneficial in any application.
(ii) The correspondence to low-rank matrix completion has motivated a nuclear norm regularized algorithm; yielding acceptable results in a synthetic scenario, the algorithm did not perform better than the baseline on the Premier League data. While this might be due to the above-mentioned issues with that data, general benefits of this alternative approach to structured log-odds models may be worth studying - as opposed to training approaches closer to logistic regression and neural networks.
(iii) The closeness to low-rank matrix completion also motivates to study identifiability and estimation variance bounds on particular entries of the log-odds matrix, especially in a setting where pairings are not independently or uniformly sampled.
(iv) While our approach to structured log-odds is inherently parametric, it is not fully Bayesian - though naturally, the benefit of such an approach may be interesting to study.
(v) We did not investigate in too much detail the use of features such as player data, and structural restrictions on the feature coefficient matrices and tensors. Doing this, not necessarily in the context of the English Premier League, might be worthwhile, though such a study would have to rely on good sources of added feature data to have any practical impact.
On a more general note, the connection between neural networks and low-rank or matrix factorization principles apparent in this work may also be an interesting direction to explore,
not necessarily in a competitive outcome prediction context.
|
http://arxiv.org/abs/1701.08206v1 | 20170127220313 | Galaxies in the Illustris simulation as seen by the Sloan Digital Sky Survey - II: Size-luminosity relations and the deficit of bulge-dominated galaxies in Illustris at low mass | [
"Connor Bottrell",
"Paul Torrey",
"Luc Simard",
"Sara L. Ellison"
] | astro-ph.GA | [
"astro-ph.GA"
] |
The interpretive power of the newest generation of large-volume hydrodynamical simulations of galaxy formation rests upon their ability to reproduce the observed properties of galaxies. In this second paper in a series, we employ bulge+disc decompositions of realistic dust-free galaxy images from the Illustris simulation in a consistent comparison with galaxies from the Sloan Digital Sky Survey (SDSS). Examining the size-luminosity relations of each sample, we find that galaxies in Illustris are roughly twice as large and 0.7 magnitudes brighter on average than galaxies in the SDSS. The trend of increasing slope and decreasing normalization of size-luminosity as a function of bulge-fraction is qualitatively similar to observations. However, the size-luminosity relations of Illustris galaxies are quantitatively distinguished by higher normalizations and smaller slopes than for real galaxies. We show that this result is linked to a significant deficit of bulge-dominated galaxies in Illustris relative to the SDSS at stellar masses logM_⋆/M_⊙≲11. We investigate this deficit by comparing bulge fraction estimates derived from photometry and internal kinematics. We show that photometric bulge fractions are systematically lower than the kinematic fractions at low masses, but with increasingly good agreement as the stellar mass increases.
galaxies: structure – hydrodynamics – surveys – astrophysics
§ INTRODUCTION
The observed relationship between size and luminosity is a crucial benchmark within the framework of hierarchical assembly of galaxies <cit.>. The morphologies that are determined by photometric analyses are governed by the growth and evolution of stellar populations and their distribution within galaxies. Reproducing the observed size-luminosity relation of galaxies within hydrodynamical simulations requires adequate numerical resolution and a broad physical model that includes key physical processes: stellar and gas kinematics; gas-cooling; star formation, feedback, and quenching; stellar population synthesis and evolution; black hole feedback, and the influence of galaxy interactions and merging on these processes <cit.>. The similarities and differences between the size-luminosity relations of the simulated and observed galaxies reflect the successes and trappings of the models employed by the simulations.
The sizes of galaxies in hydrodynamical simulations have only recently demonstrated consistency with observations. In particular, the formation of realistic disc galaxies in earlier generations of simulations was recognized as a significant challenge within a framework of hierarchical assembly (e.g., ). Simulated discs were too centrally concentrated, too small, and rotated too quickly at fixed luminosity. The resulting Tully-Fisher relations and disc angular momenta in early disc-formation experiments yielded stark contrasts with observations (e.g., ; review by ; also see comparison of various hydrodynamical codes by ). The inclusion of energetic feedback has been shown to mitigate the differences between simulated and observed discs by preventing overcooling of gas, runaway star formation at early times, and angular momentum deficiency in simulated disc galaxies (e.g., ). High-resolution hydrodynamical simulations that include efficient feedback have yielded more reasonable disc sizes in small galaxy samples and for targeted mass ranges <cit.>. However, reproducing the size-mass and size-luminosity relations for galaxy populations remains challenging in cosmological simulations. The size-mass and size-luminosity relations of galaxies depend sensitively on stellar mass and luminosity functions, the M_⋆-M_halo relations, and feedback models – which must all be accurate to reproduce observed galaxy sizes <cit.>.
Galaxy sizes in statistically meaningful samples from hydrodynamical simulations and their dependencies on sub-grid models for star-formation and energetic feedback have been studied in several recent works (e.g., OWLS z=2: ; OWLS z=0: ; GIMIC z=0: ). The sub-grid feedback parameters generally have large uncertainties and are often calibrated to reproduce global scaling relations for galaxy populations at specific epochs (e.g., see ). In the Illustris simulation, the parameters for the efficiency of energetic feedback are calibrated to roughly reproduce the history of cosmic star-formation rate density and the z=0 stellar mass function <cit.>. However, the evolution of these relations are predictions of the simulation. <cit.> performed an image-based comparison using non-parametric morphologies derived from mock Sloan Digital Sky Survey (SDSS) observations of galaxies from the Illustris simulation <cit.> to show that galaxies in Illustris were roughly twice the size of observed galaxies for the same stellar masses at z=0. In the EAGLE simulation <cit.>, the feedback efficiency parameters were calibrated to reproduce the galactic stellar mass function and the sizes of discs at z=0 and the observed relation between stellar mass and black-hole mass <cit.>. EAGLE has been shown to successfully reproduce the evolution of passive and star-forming galaxy sizes out to z=2 using inferred scalings between physical and photometric properties <cit.>. Comparison of the predictions from large-volume and high-fidelity hydrodynamical simulations such as Illustris and EAGLE to observations enables improved constraints on the sub-grid physics that govern galaxy sizes. However, it is important that such comparisons be made in a fair way by deriving the properties of galaxies consistently in observations and simulations. Realistic mock observations of simulated galaxies make it possible to perform a direct, image-based comparison between models and real data.
Creating mock observations of simulated galaxies is the most direct way to consistently derive galaxy properties for comparisons with observations – as the same analysis tools can be used to derive the photometric and structural properties of each. Mock observations of galaxies from hydrodynamical simulations have been successful in reproducing observed trends for targeted morphologies – but have been limited to small samples of galaxies. <cit.> (see also ) used high-resolution zoom-in hydrodynamical cosmological simulations and dust-inclusive radiative transfer to produce mock observations of a sample of eight disc galaxies. Bulge+disc decompositions of the surface-brightness profiles were performed on B-band images to estimate the disc scale-length, r_d (or often, h, in the literature) and magnitudes of the bulge and disc components. The size-luminosity relation of the discs agreed well with observed discs at redshifts z=0 and z=1 <cit.>. In particular, the z=0 discs were consistent with the size-luminosity relations of the observational samples within as large of dynamic range in magnitude as for the observations.
The size-luminosity relation for the bulges within discs has also been compared with observational constraints using mock photometry. <cit.> identified two galaxies with significant bulge components within the high-resolution disc galaxies simulated by <cit.>. The properties of the bulge components were derived from H-band photometry using bulge+disc decompositions – consistently with photometric decompositions of observed bulges in late-type galaxies and in ellipticals <cit.>. The properties of the bulges identified in the simulations were in broad agreement with the size-luminosity relation derived from observations but within a more narrow magnitude range than previously shown for the discs and a significantly more limited sample size.
The principal limitations of previous studies aimed at comparing simulated and observed structural relations include: (1) small sample sizes; (2) inconsistent derivations of simulated and observed galaxy properties; (3) incomplete observational realism that biases the distributions of derived properties of simulated galaxies in comparisons with observations. Each of these limitations can be addressed using realistic mock-observations of galaxies from large-volume cosmological hydrodynamical simulations. The current generation of large-volume hydrodynamical simulations contain sizeable populations of galaxies which can be used to compare the distributions of galaxies and their morphologies on the global size-luminosity relation. Mock observations of galaxies from these simulations (e.g., ) and observational realism () ensure that their derived properties are affected by the same observational biases as real galaxies.
In <cit.> we detailed the design and implementation of a new methodology for performing image-based comparisons between galaxies from cosmological simulations and observational galaxy redshift surveys. Our goal was to remove prior limitations to consistent morphological comparisons between theory and observations. In addition to using mock images, we also applied observational realism to these mock images to ensure the same biases were present in the mock and real data. In the first implementation of the methodology, we presented catalogs of parametric bulge+disc decompositions for ∼7000 galaxies in the z∼0 Illustris simulation snapshot with SDSS realism and a technical characterization of the effects of observational biases on some key parameters. In particular, the catalogs enable consistent comparisons with the existing bulge+disc decomposition catalogs of for 1.12 million galaxies in the SDSS.
The distribution of a population of galaxies on the size-luminosity plane is governed by the distribution of stellar populations within the physical components of galaxies. The size-luminosity relation is therefore well suited to examine the successes and discrepancies in the structural morphologies of galaxies from the Illustris simulation using the methods and catalogs from our previous paper. In this second paper in a series, we employ our mock and real galaxy structural parameter catalogs to make the comparison between Illustris and the Sloan Digital Sky Survey (SDSS) as fair as currently possible. A review of our methods and description of our samples are presented in Section <ref>. In Section <ref>, we compare the size-luminosity relations of Illustris and SDSS. In Section <ref>, we examine the impact of morphological differences between the observed and simulated galaxy populations on the size-luminosity relations. In Section <ref>, we investigate the connection between morphology and stellar mass in SDSS and Illustris and compare the photometric and kinematic bulge-to-total fractions of galaxies from the Illustris simulation. We discuss and summarize our results in Sections <ref> and <ref>, respectively.
§ METHODS
§.§ Illustris simulation
A detailed description of the Illustris simulation can be found in <cit.>, <cit.>, and <cit.>. In this section, we briefly summarize the Illustris simulation properties that are most relevant to the creation of the synthetic images and our comparison.
Illustris is a cosmological hydrodynamical simulation run in a large cubic periodic volume of side-length L = 106.5 Mpc. The simulation is evolved using the moving-mesh code arepo <cit.> and a broad physical model that includes a sub-resolution inter-stellar medium (ISM), star-formation, and associated feedback <cit.>, gas cooling <cit.>, stellar evolution and enrichment <cit.>, heating and ionization by a UV background <cit.>, black hole seeding, merging, and active galactic nucleus (AGN) feedback <cit.>. Details of the physical model employed in Illustris can be found in <cit.> and <cit.>. The volume contains N_DM=1820^3 dark matter particles (m_DM=6.3×10^6 M_⊙) and N_baryon=1820^3 gas resolution elements (m_baryon≈1.3×10^6 M_⊙). Stellar particles (m_⋆≈1.3×10^6 M_⊙) are formed stochastically out of cool, dense gas resolution elements and inherit the metallicity of the local ISM gas. Stellar particles then gradually return mass to the ISM to account for mass-loss from aging stellar populations. The age, birth mass, and time-dependent current mass are tracked for each stellar particle within the simulation. The gravitational softening lengths of dark and baryonic particles are ϵ_DM=1420 pc and ϵ_baryon=710 pc, respectively. The smallest gas resolution elements at z=0 have a typical extent (fiducial radius) r_cell^min=48 pc. Haloes are defined in the Illustris simulation using a Friends-of-Friends (FoF) algorithm (e.g., ) with a linking length of 0.2 times the mean particle separation to identify bound haloes. Individual galaxies are defined with the subfind halo-finder <cit.>.
The initial conditions for the simulation assume a ΛCDM model consistent with WMAP-9 measurements <cit.>: Ω_M = 0.2726; Ω_Λ = 0.7274; Ω_b = 0.0456; σ_8 = 0.809; n_s= 0.963; and H_0 = 100 h km s^-1Mpc^-1 where h=0.704. Free parameters within the Illustris model were calibrated in smaller simulation volumes to roughly reproduce the observed galaxy stellar mass function at z=0 and star-formation rate density across cosmic time.
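For reference, these parameters can be encoded with the astropy package as in the sketch below. This is our own illustration and not part of the Illustris machinery; note that σ_8 and n_s enter only the initial conditions and have no counterpart in this object.

from astropy.cosmology import FlatLambdaCDM

# WMAP-9 parameters adopted by Illustris
h = 0.704
cosmo = FlatLambdaCDM(H0=100.0 * h,   # km/s/Mpc
                      Om0=0.2726,     # total matter density
                      Ob0=0.0456)     # baryon density

# Omega_Lambda follows from flatness: 1 - Om0 = 0.7274
print(cosmo.Ode0)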
<cit.>, <cit.>, and <cit.> examine the physical properties of galaxies from Illustris in a comparison with several key observations. The cosmic star-formation rate density and galaxy stellar mass functions agree reasonably well with observations, by construction. Still, Illustris produces slightly too many galaxies with masses logM_⋆/M_⊙<10 and logM_⋆/M_⊙>11.5 relative to observations <cit.> (though it must be noted that these observations have significant measurement uncertainties at the high-mass end). The cosmic star-formation rate density in Illustris, while accurately reproducing the observed trend between z∼1-10 <cit.>, is also slightly too large at z=0 – corresponding to larger fractions of star-forming/blue galaxies for stellar masses logM_⋆/M_⊙<10.5 <cit.>. Nonetheless, the global passive/red and star-forming/blue fractions for galaxies with logM_⋆/M_⊙>9 at z=0 agrees reasonably well with observations (though colours become less accurate within specific stellar mass domains).
The Illustris r-band galaxy luminosity function at z=0 also reasonably agrees with local observations from the SDSS for M_r∼-15 to -24 <cit.> as shown by <cit.>. Visualization of galaxies with over 10^5 stellar particles, logM_⋆/M_⊙≳11, demonstrates that Illustris produces diverse morphological structures including populations of star-forming blue discs and passive red bulge-dominated galaxies. Kinematic bulge-to-total stellar mass fractions of these well-resolved galaxies were used to demonstrate that Illustris accurately describes the transition from late- to early-types as a function of total stellar mass – finding reasonable agreement with observational photometric morphological classifications presented in <cit.>. <cit.> cautions that the morphological comparison should not be over-interpreted because the morphologies from Illustris were classified physically, where the observed morphologies were classified visually. On the other hand, the methodology used in this paper is particularly well-suited to perform a consistent and detailed comparison of galaxy morphologies using the same classification methods.
Detailed comparisons between Illustris and the observed galaxy scaling relations can be found in <cit.> and <cit.>, which specifically examine the galaxy luminosity functions, stellar mass functions, star formation main sequence, Tully-Fisher relations, stellar-age stellar-mass relations, among others. Those papers show that Illustris broadly reproduces the redshift z=0 galaxy luminosity function, the evolving galaxy stellar mass function, and Tully-Fisher relations better than previous simulations that included less developed feedback models <cit.>. However, there are some areas (e.g., the mass-metallicity relation, size-mass relation, or stellar-age stellar-mass relation) where significant tension remains between simulated and observed results. In this paper, we consider a more stringent test for comparing the Illustris simulation results against observations by applying even-handed analysis to synthetic Illustris observations, and real SDSS images. Our approach is aimed at identifying specific conflicts with the models and observations that can be used to refine future generations of galaxy formation models.
§.§ Stellar mocks
We employ mock observations of galaxies from the redshift z=0 snapshot of the Illustris simulation taken from the synthetic image catalog of . Each synthetic image of a galaxy is centred on the galaxy's gravitational potential minimum with field-of-view dimensions equal to 10 times the stellar half-mass radii rhm_⋆ of the galaxy (i.e., using particles/cells defined by subfind). Stellar particles within the full FoF group are each assigned a spectral energy distribution (SED) based on their mass, age, and metallicity values using the starburst99 (SB99) single-age stellar population SED templates <cit.>. The SEDs assume a Chabrier Initial Mass Function <cit.>, as does the Illustris simulation itself. The images are produced using the sunrise radiative transfer code <cit.> with four viewing angles for each galaxy. The viewing angles are oriented along the arms of a tetrahedron defined in the coordinates of the simulation volume (e.g., CAMERA 0 parallel to the positive z-axis of the simulation volume). Each pinhole camera is placed 50 Mpc away from the centre of the tetrahedron – which is positioned at the gravitational potential minimum of the galaxy. The projection of the stellar light from a galaxy is, therefore, effectively randomly oriented with respect to the galaxy's rotation axis. The fiducial camera resolution is 256×256 pixels. 10^8 photon packets are used in the Monte Carlo photon propagation scheme. The resulting mock position-wavelength data cube may then be convolved with an arbitrary transmission function and have its pixel resolution degraded to match the desired instrument. confirmed that the number of photons used in the propagation is sufficient such that the resulting synthetic images are well converged. Still, warn that caution should be exercised when examining the detailed structure of low-surface brightness features due to the residual Monte Carlo noise that can manifest as fluctuations in pixel-to-pixel intensity.
The synthetic images are created without the dust absorption/emission functionalities of the sunrise code. A truly comprehensive procedure for creating realistic images of galaxies from a cosmological simulation that can be compared with observations must include an accurate treatment of dust. In Section 2.2.3 of <cit.>, we summarize the challenges (detailed in ) of generating a proper treatment of dust for synthetic images of galaxies from simulations that do not resolve the complex structure of the interstellar medium on spatial scales required to properly model the dust distribution (≪1 kpc). Indeed, the challenges extend beyond numerical convergence for dust-inclusive radiative transfer in sunrise that might be overcome at higher computational expense for a sub-sample of galaxies. We acknowledge the limitation that the lack of dust presents to our comparison with observations and reserve treatment of dust until such a time that the effects and uncertainties associated with (particular) dust models on the synthetic galaxy images are resolved. Nonetheless, our current comparisons will be valuable standards for future comparisons that employ comprehensive, dust-inclusive radiative transfer to create synthetic galaxy images. Indeed, the methodology presented in was designed to enable seamless integration of such dust models in the radiative transfer. Owing to the lack of dust in the synthetic images, we expect our optical luminosities and sizes to represent upper limits – particularly for edge-on viewing angles of discs.
The images used in this paper use SB99 SED templates without nebular emission line contributions or H II region modelling. examined a model that accounts for the impact of nebular emission and dust obscuration from unresolved birth clouds on the emergent SEDs from young stellar particles. Nebular emission from young stars can contribute substantially to the flux in certain broad-band filters. accounted for birth cloud emission/obscuration by replacing the SB99 emission of young stellar particles (t_age<10^7 yr) with mappings-III model emission assuming partially obscured young stellar spectra <cit.> with added contributions from H II regions <cit.>. found that the spatial distribution of light in the resulting synthetic images were very similar to those that did not employ the nebular emission model. Therefore, opted not to include modelling of nebular emission in their fiducial (public) synthetic images in order to minimize post-processing uncertainties (as for the dust) while still creating sufficiently realistic images that comparisons can be drawn against observations.
A stellar light distribution (SLD) scheme is required to map discretized stellar particles to continuous light distributions. We employ the fiducial adaptive 16^th nearest-neighbour SLD scheme from the public release. In <cit.>, we showed that estimates of size and luminosity for most galaxies are largely invariant to the choice of SLD scheme. However, systematics from internal segmentation are appreciable for galaxies with stellar half-mass radii rhm_⋆ > 8 kpc and total stellar masses logM_⋆/M_⊙<11 and are not alleviated by any of the SLD schemes that were examined in <cit.>. Additionally, large constant smoothing radii (∼1 kpc) tend to produce galaxies with systematically smaller (B/T) than in the fiducial scheme. Ultimately, we followed the philosophy that no SLD scheme is more physically motivated than another, and the choice to use the fiducial scheme is motivated largely by its simplicity.
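The fiducial adaptive scheme can be summarized in a few lines. The sketch below is our own scipy-based illustration with hypothetical variable names; it returns, for each stellar particle, the distance to its 16th nearest neighbour, which serves as that particle's light-smoothing radius.

import numpy as np
from scipy.spatial import cKDTree

def adaptive_smoothing_lengths(positions, n_neighbour=16):
    """positions: (N, 3) stellar particle coordinates in kpc.
    Returns each particle's distance to its n-th nearest neighbour."""
    tree = cKDTree(positions)
    # k = n_neighbour + 1 because the nearest "neighbour" is the particle itself
    dist, _ = tree.query(positions, k=n_neighbour + 1)
    return dist[:, -1]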
The synthetic image catalog includes 6891 galaxies with stellar masses logM_⋆/M_⊙>10 corresponding to a N_⋆≳10^4 stellar particle number cut. All synthetic images are artificially redshifted to z=0.05 and are convolved with SDSS g and r filters. The raw synthetic images include no observational realism or noise apart from some residual Monte Carlo noise that may manifest in pixel-to-pixel intensity fluctuations for low surface brightness features.
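For orientation, the mapping between the physical field of view and SDSS pixels at the mock redshift follows from the adopted cosmology. The arithmetic below is our own worked example; the 0.396 arcsec SDSS pixel scale is standard, and the 5 kpc half-mass radius is a hypothetical value.

import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.4, Om0=0.2726)
# physical scale at the mock redshift: roughly 1 kpc per arcsec
scale = cosmo.kpc_proper_per_arcmin(0.05).to(u.kpc / u.arcsec)

rhm = 5.0 * u.kpc                      # hypothetical stellar half-mass radius
fov = (10 * rhm / scale).to(u.arcsec)  # ~50 arcsec field of view
n_pix_sdss = fov / (0.396 * u.arcsec)  # ~130 SDSS pixels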
§.§ Observational realism
To enable consistent comparisons with observations, galaxies from the simulation must be mock observed with the same realism that affects observations of real galaxies. Building on previous work (e.g., ), in <cit.> we designed an extensive methodology for adding observational biases to the synthetic images. Our method is designed to achieve the same statistics for sky brightness, resolution, and crowding as galaxies in observational catalogs by assigning insertions into real image fields probabilistically based on the observed positional distribution of galaxies. Specifically, the synthetic image fluxes are convolved with the reconstructed point-spread function (PSF), have Poisson noise added, and are inserted into SDSS g and r band corrected images following the projected locations of galaxies from the bulge+disc decomposition catalog of . The realism procedure ensures that the biases on the decomposition model parameters from resolution, signal-to-noise, and crowding are statistically consistent for simulated and real galaxies.
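The two pixel-level steps named above (PSF convolution and Poisson noise) can be sketched as follows. This is an illustrative simplification of ours; the function name and the representative gain value are assumptions, and the actual pipeline additionally inserts the result into real SDSS corrected frames.

import numpy as np
from scipy.signal import fftconvolve

def add_basic_realism(image_adu, psf, gain=4.7, seed=0):
    """image_adu: 2D synthetic image in ADU.
    psf: 2D kernel reconstructed at the insertion point.
    gain: e-/ADU; 4.7 is a representative SDSS value (assumption)."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(image_adu, psf / psf.sum(), mode="same")
    # Poisson realization of the photo-electron counts, converted back to ADU
    electrons = np.clip(blurred * gain, 0, None)
    return rng.poisson(electrons) / gain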
The biases associated with the added realism on structural measurements are characterized in <cit.> and summarized here. The dominant contribution to error in the measured parameters is internal segmentation in galaxies with clumps of locally bright features in otherwise diffuse surface brightness profiles (roughly characterized by stellar half-mass radii rhm_⋆ > 8 kpc and total stellar masses logM_⋆/M_⊙<11). Internal segmentation in galaxies with locally bright features occurs because a deblending procedure is required to separate external sources from the galaxy photometry. Unrealistic stellar light distributions can lead to parts of a galaxy-of-interest being confused as an external source by the deblending. In extreme situations, photometric analysis may be reduced to a fraction of the original galaxy surface brightness distribution – leading to large systematic and random errors in both magnitude and size for particular galaxies, as well as spurious measurements of (B/T). Our analysis showed that some degree of internal segmentation occurs in roughly 30% of galaxies from Illustris. However, for galaxies not affected by internal segmentation, we showed that magnitude and half-light radii were robust to observational biases. Random errors in (B/T) were generally larger for galaxies in which the bulge and disc components are both appreciable, but is a trend that is qualitatively consistent with decompositions of analytic bulge+disc models in the SDSS (see , Appendix B).
§.§ Bulge+disc decompositions
Bulge+disc decompositions were performed on the mock observations from Illustris with the surface-brightness decomposition software gim2d <cit.>. As described in <cit.>, we model every realization of a galaxy with a single-component pure Sérsic profile (free Sérsic index, n_pS) and a two-component bulge+disc decomposition model with fixed bulge Sérsic index, n_b=4, and exponential disc, n_d=1, profiles. We focus on the bulge+disc decomposition results in this paper; a sketch of this fixed-index model is given after the catalog descriptions below. The following catalogs were defined by <cit.> and are used again in this paper:
catalog: A single bulge+disc decomposition for all galaxies and each of four camera angles (∼ 28,000 decompositions). Each camera angle incarnation of a galaxy is inserted into the SDSS following Section <ref>. Decompositions from the catalog are employed in our comparisons between mock-observed and real galaxies.
catalog: Multiple decompositions of a representative Illustris galaxy (RIG) sample of 100 galaxies that uniformly sample the stellar half-mass radius and total stellar mass distribution of Illustris galaxies from . Selection of the RIG sample is described in <cit.>. Each RIG is inserted into roughly 100 SDSS sky areas following <ref> and all four camera angle incarnations of a galaxy are fitted at each location (leading to ∼40,000 decompositions in the catalog). The catalog enables measurement of collective uncertainties from biases such as resolution, sky brightness, and crowding on median measurements from the distributions of structural parameters.
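As referenced above, the fixed-index bulge+disc model underlying these catalogs can be sketched as a one-dimensional profile. This is our own illustration with function and variable names of our choosing; the actual gim2d fits are two-dimensional, include ellipticities and position angles, and are convolved with the PSF.

import numpy as np

def sersic(r, I_e, r_e, n):
    # b_n ~ 2n - 1/3 is a standard approximation to the Sersic b(n)
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

def bulge_plus_disc(r, I_e, r_e, I_0, r_d):
    bulge = sersic(r, I_e, r_e, n=4.0)  # de Vaucouleurs bulge, n_b = 4
    disc = I_0 * np.exp(-r / r_d)       # exponential disc, n_d = 1
    return bulge + disc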
§.§ Selection of an SDSS comparison sample
The catalog of 1.12 million quantitative morphologies of galaxies from the SDSS by represents a reservoir from which we can draw populations of galaxies for comparisons to the simulated galaxies in the catalog. We compare our results with the n_b=4, n_d=1 bulge+disc decomposition results from the catalogs. However, several important criteria must be met for the comparison to be fair. While the design of the catalog ensured that biases from crowding, resolution, and sky were consistent between mock and real galaxies, we recall that all of our galaxies are inserted into the SDSS at redshift z=0.05. One observational bias that we have therefore not explored is the robustness of our parameter estimates to surface brightness degradation as a function of redshift. A criterion of the SDSS control sample that is consequently necessary is that the control galaxies are confined to a thin redshift range around z=0.05 so that any biases that arise from surface brightness improvements or degradation do not enter into the comparison. Such a criterion for the redshift of a galaxy further requires that the estimate of the redshift is accurate – which requires the additional criterion that galaxies must have spectroscopically measured redshifts. We therefore impose the following criteria on the catalog:
(1) Galaxies are selected from the Spectroscopic Sample of the SDSS DR7 Legacy Survey (∼660,000 galaxies)
(2) Galaxies are confined to the volume corresponding to the spectroscopic redshift range 0.04<z<0.06 (∼68,000 galaxies)
Biases from volume incompleteness in the samples of simulated and real galaxies are removed by sampling the galaxies in the catalog to match the normalized stellar mass distribution of the SDSS over 0.04<z<0.06 with a lower mass cutoff of logM_⋆/M_⊙>10. The latter criterion is imposed because it is the stellar mass lower limit of Illustris galaxies for which there are synthetic images <cit.>. The SDSS stellar masses are derived from combined surface brightness profile model estimates and SED template fitting by <cit.>. Galaxies are drawn with replacement from the 28,000 galaxies in the catalog using a Monte Carlo accept-reject scheme to match the stellar mass distribution of the SDSS sample. The stellar masses for Illustris galaxies are computed from the sum of stellar particle masses that belong to a galaxy as identified by subfind. The differing methodologies for computing the stellar masses may introduce biases in the stellar mass matching. <cit.> showed that photometric masses derived from the synthetic galaxy SEDs from Illustris were broadly similar to the idealized subfind masses from the simulation (with some systematics identified therein). However, to obtain stellar mass estimates for simulated galaxies with synthetic photometry that have the same inherent biases and uncertainties as the observationally derived masses would require dust-inclusive radiative transfer – which first requires an accurate model for dust. Therefore, we acknowledge that comparing the photometrically-derived observed galaxy masses with the subfind masses of simulated galaxies may bias components of our analysis that depend on stellar mass matching. The exact role of the mass matching biases may be characterized using future high-resolution simulations that are better equipped to generate dust-inclusive synthetic photometry.
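A minimal version of the accept-reject draw described above is sketched below; this is our own implementation, and the binning and variable names are assumptions. The acceptance probability in each stellar-mass bin is proportional to the ratio of the target density to the pool density.

import numpy as np

def match_mass_distribution(logM_pool, logM_target, n_draw, bins=30, seed=0):
    """Draw indices into logM_pool (with replacement) whose mass
    distribution matches that of logM_target."""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([logM_pool, logM_target]), bins)
    p_target, _ = np.histogram(logM_target, bins=edges, density=True)
    p_pool, _ = np.histogram(logM_pool, bins=edges, density=True)
    safe_pool = np.where(p_pool > 0, p_pool, np.inf)
    ratio = p_target / safe_pool
    bin_of = np.clip(np.digitize(logM_pool, edges) - 1, 0, len(ratio) - 1)
    accept = ratio[bin_of] / ratio.max()
    drawn = []
    while len(drawn) < n_draw:          # draw with replacement
        i = rng.integers(len(logM_pool))
        if rng.random() < accept[i]:
            drawn.append(i)
    return np.asarray(drawn)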
§ GALAXY SIZE-LUMINOSITY RELATIONS
The left panel of Figure <ref> shows the distributions of Illustris (red, filled contours) and SDSS (blue contours) in the plane of r-band half-light radius (as measured through circular aperture curve of growth photometry) and absolute r-band magnitude from the bulge+disc decompositions. The luminosities in each sample span roughly 4 magnitudes – except for a low-luminosity tail in Illustris at the 99^th percentile. However, the Illustris luminosities are brighter by roughly 0.7 magnitudes (factor of 2) on average. The left panel of Figure <ref> also demonstrates a discrepancy in sizes between the distributions of Illustris and SDSS galaxies. Galaxies with high luminosities, M_r≲-21.5, are systematically larger in Illustris than galaxies observed in the real universe by roughly +0.4 dex (or a factor of 2 larger, consistent with ). There is also a discrepancy in the correlation between size and luminosity for Illustris galaxies with respect to the SDSS for the same stellar masses. The slope of the global size-luminosity relation for galaxies in Illustris is significantly shallower than for galaxies in the SDSS – implying a weaker relationship between galaxy size and stellar mass in Illustris.
The large offset in magnitude between Illustris and the SDSS in Figure <ref> occurs despite the known systematics from internal segmentation in an appreciable fraction of galaxies in the catalog <cit.>. Roughly 30% of galaxies in Illustris are affected by internal segmentation to some degree. The effect of internal segmentation on measured fluxes and sizes can be significant – with reductions in total flux by up to a factor of six <cit.>. However, the offset in magnitude that is seen in Figure <ref> for size-luminosity distributions of Illustris relative to the SDSS galaxies is negative – opposite to the positive magnitude bias from internal segmentation. Indeed, the systematically larger magnitude estimates from internally segmented galaxies seem only to broaden the high-magnitude tail of the 99% contour for Illustris galaxies.
The right panel of Figure <ref> shows that replacing the decomposition results with galaxy properties derived directly from the synthetic images, M_r,synth and rhl_r,synth, affects only the sizes and fluxes at low-luminosities, M_r,b+d≳-20.5, and removes the low-luminosity outliers from Illustris. Ultimately, replacing the decomposition results with the synthetic image properties in the left panel of Figure <ref> generates an Illustris size-luminosity relation that is shifted by an additional 0.2 magnitudes brighter with respect to the SDSS and no particular improvement to agreement in slope with SDSS. The biases from internal segmentation are therefore insufficient to explain the discrepant offset in magnitude and difference in slope in the size-luminosity relations of Illustris and the SDSS. Although informative on the effect of internal segmentation, it should be noted that the comparison in the right panel of Figure <ref> is biased. The quantities for the SDSS galaxies are derived from the decomposition models, while for the Illustris galaxies they are derived directly from the synthetic images. The differences in the Illustris size-luminosity distributions in the left and right panels cannot strictly be interpreted as arising from internal segmentation alone. However, in <cit.>, we characterized the biases on half-light radii and magnitudes from various sources – showing that in the absence of internal segmentation, half-light radii and magnitudes that are computed from the models and from the synthetic images are broadly consistent.
Our dust-free synthetic images do not permit a characterization of the effects of dust in the differences in the size-luminosity distributions. The systematics from dust on model parameters for the bulge and disc differ – complicating speculative arguments on how exactly global galaxy properties should be affected. In general, however, the optical luminosities shown here for Illustris should represent upper limits to the luminosities of a dusty galaxy population – which is consistent with the shift to brighter magnitudes in Illustris. In Section <ref>, we discuss the role of dust in the context of bulge+disc decompositions of dusty galaxies and in our dust-free analysis.
§ IMPACT OF BULGE AND DISC MORPHOLOGIES
§.§ Morphological dependence of the size-luminosity relation
Observational studies by <cit.> and <cit.> confirm differences between the size-luminosity relations of visually classified late-type and early-type galaxies. Discs are generally larger than bulges and samples that contain galaxies with dominant disc components are offset on the size-luminosity plane from samples of galaxies containing dominant bulges at fixed luminosity. The disc relation is also shallower and has more scatter than the size-luminosity relations of bulge-dominated galaxies.
<cit.> also performed a comparison of the size-luminosity relations for observed classical and pseudo-bulges – finding that the size-luminosity relation of classical bulges is the same as for ellipticals. Pseudo-bulges, which are sometimes classified by Sérsic index, n≲2, have a steeper slope than classical bulges and ellipticals on the size-luminosity relation but significantly greater scatter <cit.>. The discrepancy between pseudo-bulges and classical bulges is expected as pseudo-bulges are believed to have a different formation mechanism and to be structurally different from classical bulges (e.g., , and references therein). Classification of bulges into pseudo-bulges and classical bulges using Sérsic index is generally imperfect <cit.>, but can be considered as an approximation <cit.>. Many bulges with n<2 follow the tight size-luminosity relation for classical bulges and, conversely, some bulges that are offset from this relation have Sérsic indices that are consistent with the distribution of classical bulges <cit.>.
Our analysis does not discern between classical and pseudo-bulges – as our bulge+disc decompositions use a bulge component with fixed Sérsic index, n=4. However, our analysis of the simulated and observed galaxy populations is internally consistent. If pseudo-bulges in the simulations are equally represented and structurally similar to observed pseudo-bulges, then their effect on the distribution of structural parameters will be the same. Therefore, while the structural estimates for pseudo-bulge properties may be inaccurate in our bulge+disc decompositions, any discrepancies in the distributions of bulge properties between the simulations and observations will be sourced by true structural differences of these components.
The presence and growth of a stellar bulge component in galaxy morphologies is strongly linked to many key processes of galaxy formation theory. The photometric bulge-to-total fractions obtained in the structural decompositions of galaxies provide estimates of the relative contribution of the bulge to their structure. Given the importance of bulges in various scaling relations, including size-luminosity, the bulge-to-total fractions are well-suited to identifying the morphological differences between Illustris and the SDSS. In this section, the observed size-luminosity relations of late-type (disc-dominated) and early-type (spheroid- or bulge-dominated) galaxies are examined to provide context for a morphological comparison using the photometric bulge-to-total fraction and total stellar mass.
§.§ Bulge and disc fractions in Illustris and the SDSS
The distinct size-luminosity relations of bulge- and disc-dominated galaxies are obvious in the SDSS sample, when populations are separated either by visual morphology or by quantitative bulge fractions. Figure <ref> shows the size-luminosity relation of the visual classification sample of <cit.> using the half-light radii rhl_g,B+D and absolute g-band magnitudes M_g,B+D from the bulge+disc decompositions of <cit.>. The left panel of Figure <ref> shows that roughly splitting the full sample into late- and early-types by Hubble T-type generates two distinct size-luminosity relations in slope and scatter. The result agrees well with other analyses of morphology dependence in the size-luminosity relations of galaxies (e.g., ). The right panel of Figure <ref> demonstrates that the morphological dependence of the galaxy size-luminosity relation is also seen when bulge-to-total fractions are used as a galaxy morphology indicator. The slope, scatter, and normalization of the size-luminosity relations of bulge- and disc-dominated systems separated by visual classification are almost exactly reproduced using bulge-to-total ratios from the quantitative bulge+disc decompositions. Therefore, (B/T) estimates may enable sensitive investigation into the morphological differences between galaxies in the SDSS and Illustris that drive their size-luminosity relations.
Figure <ref> shows the size-luminosity relations of Illustris and the SDSS classified morphologically by (B/T)_r using the same samples as for Figure <ref>. The Illustris size-luminosity relations demonstrate qualitatively similar changes with (B/T) morphology as seen for the SDSS galaxies: the slope of the size-luminosity relation increases in higher (B/T) classification groups and the normalization similarly decreases. However, though qualitatively similar, they are quantitatively distinct. Galaxies in Illustris have higher normalizations and shallower slopes across all (B/T) classifications. Indeed, the fractions in each (B/T) classification are also indicated – forecasting a discrepancy in the morphological distributions of Illustris and the SDSS.
Given that morphology is clearly critical in driving the normalization, slope, and scatter of the size luminosity relation, it is germane to compare the (B/T) distributions of the SDSS and Illustris samples. Figure <ref> shows the distribution of r-band photometric (B/T) as a function of total stellar mass in the SDSS (left panel) and Illustris (right panel) taken from the catalog. The samples are the same as for Figure <ref> which were matched in stellar mass. Figure <ref> shows that the SDSS has a diversity of morphological populations including bulge-dominated, disc-dominated, and a large number of composite galaxies within this mass distribution. The diversity is not shared by Illustris – which is lacking in bulge dominated galaxies, particularly at low masses. Only for stellar masses logM_⋆/M_⊙≳10.6 do galaxies with significant bulge components become more common – indicating a stronger correlation between bulge fraction and total stellar mass within Illustris than exists in the observations. The right marginal for the Illustris distribution shows that roughly 72% of galaxies in Illustris have (B/T)_r<0.05, a fraction which rapidly declines for larger photometric bulge fractions (less than 1% of galaxies in the Illustris sample have (B/T)_r>0.6).
The difference in morphological distributions between Illustris and SDSS shown in Figure <ref> has an obvious implication for a comparison of their size-luminosity relations. Illustris contains a much larger disc population than the SDSS for the sample matched by total stellar mass to the distribution of spectroscopic SDSS galaxies in 0.04<z<0.06. Figure <ref> showed that disc-dominated galaxies are elevated in the size-luminosity relation and have a shallower slope than early-types. The SDSS size-luminosity distribution in the comparison with Illustris in Figure <ref> is analogous to the background distributions for SDSS from Figure <ref> – showing the contributions of bulge- and disc-dominated galaxies to the relations. The galaxy size-luminosity relation for the SDSS is broadened vertically at low luminosities and has bent contours because it contains populations of discs, bulges, and composite systems. The bulges in SDSS weight the distribution to more compact half-light radii at low luminosities, as shown in Figure <ref>. However, Illustris is deficient in bulges at low stellar masses (luminosities). Therefore, there is no downward weight from bulge-dominated galaxies at the low-luminosity end of Illustris to bring the slope and scatter of the galaxy size-luminosity relation into agreement with the SDSS.
The impact of morphological differences between the SDSS and Illustris on the size-luminosity relation can be determined by matching samples in both total stellar mass and bulge-to-total ratio. If the infrequency of bulge-dominated morphologies at low stellar masses in Illustris is responsible for the discrepancy in the size-luminosity relations of Illustris and the SDSS, then matching the SDSS morphologies (which are more diverse) and stellar masses to the Illustris galaxies from Figure <ref> should bring the size-luminosity relations into better agreement.[Note that matching in the other direction does not work for SDSS galaxies in 0.04<z<0.06. Illustris contains too few galaxies with high (B/T) to be matched to the much larger population of bulges in the SDSS at these masses. In order to maintain the same stellar mass distributions as in our previous comparisons, we match galaxies one-to-one from the SDSS by stellar mass and (B/T) to the sample of Illustris galaxies from Figures <ref> and <ref> that are matched to the distribution of stellar masses in SDSS in 0.04<z<0.06.]
Figure <ref> shows the size-luminosity relations for the SDSS and Illustris for samples now matched in both stellar mass and bulge-to-total fraction. The scaling and normalization between size and luminosity in Figure <ref> for Illustris and SDSS galaxies are brought into greater agreement by the morphology matching. At low luminosities, the large discrepancy in galaxy sizes at fixed luminosity from Figure <ref> is largely removed. The improved agreement at low luminosities is consistent with a deficit of bulges in Illustris relative to galaxies in the SDSS – as seen in Figure <ref>. Matching samples by morphology largely removes the bulge-dominated systems in the SDSS – effectively leaving the size-luminosity relation of the discs. Still, the remaining disagreement indicates that, while the bulge deficit in Illustris may play a significant role in driving contrast with the SDSS size-luminosity relation, the morphology matching alone cannot provide a complete explanation for the differences in the size-luminosity relations of SDSS and Illustris.
The remaining disagreement in the average sizes and luminosities of the mass-morphology matched samples could be due the lack of dust in the synthetic images. Dust corrections in synthetic images of galaxies have recently been considered for galaxies in the EAGLE simulation <cit.>. Proper treatment and inclusion of dust effects might yield greater consistency in the observational realism of mock observations of galaxies.
§ THE DEFICIT OF BULGES IN LOW MASS ILLUSTRIS GALAXIES
The results from the previous section demand further investigation into the lack of bulge-dominated galaxies in Illustris as determined by gim2d decompositions. The result contrasts with past problems in hydrodynamical simulations – in which the physical parameters of galaxies indicated that they were too bulge-dominated <cit.>. The question is whether (a) photometric bulges (photo-bulges), as identified by gim2d, systematically do not exist in Illustris; or (b) photo-bulges are simply under-represented in the mass-matched sample that was used to examine the size-luminosity relation; or (c) true kinematic bulges are just not well extracted by gim2d. However, in the sample matched to the SDSS stellar mass distribution over 0.04<z<0.06, galaxies at the high mass end (logM_⋆/M_⊙≳11) of the z=0 stellar mass function of Illustris are not represented – despite the larger cosmic volume in the observed sample.[There are a number of reasons for the larger number of high-mass galaxies in Illustris relative to the SDSS volume to which we performed the matching. One possible reason is that Illustris slightly over-predicts the redshift z=0 stellar mass function (SMF) at the high-mass end <cit.>. Further biases could arise in the analysis of galaxies at the centres of rich clusters – which may provide discrepant stellar mass estimates with respect to the known masses from the simulations. A dedicated study of the systematics on photometric stellar mass estimates for all morphologies is required to fully understand the biases in the mass matching.] It is possible that the higher-mass galaxies have significant bulge components and are being missed in our analysis due to the standard of consistency we aim to achieve by matching in mass and redshift. Examination of the bulge fractions at higher masses in Illustris and comparisons with observations will yield insights on the discrepancy between the morphological dependence on stellar mass in Illustris and the SDSS.
In this section, the relationships between morphological (B/T) fractions and stellar mass in the full populations of Illustris and SDSS are examined to provide insight on the deficit of bulges at low stellar mass in Illustris seen in the previous section. We discuss possible scenarios that could cause the discrepancies with observations. A comparison of the kinematically defined stellar bulge-to-total fraction in the simulation with the photometric fraction is also performed to examine the consistency between the information taken from the stellar orbits and the stellar light.
§.§ Morphological dependence on stellar mass
Galaxies in Illustris contain bulges – albeit few at low stellar masses. Figure <ref> shows the distribution of bulge-to-total fractions in the SDSS and Illustris with matching to the stellar mass distribution of Illustris galaxies from the catalog. We mitigate statistical biases by matching each galaxy from the catalog by stellar mass to the 15 nearest-mass neighbours in the SDSS z<0.2 control pool. We select from the z<0.2 SDSS volume to access galaxies that can be matched to the high-mass end of the Illustris stellar mass distribution – for which there are too few in 0.04<z<0.06. Taking galaxies from z<0.2 of the SDSS means that spatial resolution biases are not controlled in this comparison – though it is apparent that the SDSS distribution does not differ substantially from Figure <ref>. The right panel of Figure <ref> shows that there is a stronger relationship between (B/T) and stellar mass in Illustris. At stellar masses logM_⋆/M_⊙>11 the correlation between (B/T) and stellar mass is significantly more apparent in Illustris than in the SDSS – where disc, bulge, and composite morphologies are more evenly distributed as a function of stellar mass.
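The control matching described above can be sketched as follows; this is our own illustration with hypothetical variable names. For each Illustris galaxy, the k pool galaxies with the nearest stellar masses are returned.

import numpy as np

def nearest_mass_controls(logM_illustris, logM_sdss_pool, k=15):
    """Return, for each Illustris mass, the indices of its k
    nearest-stellar-mass neighbours in the SDSS control pool."""
    order = np.argsort(logM_sdss_pool)
    sorted_m = logM_sdss_pool[order]
    controls = []
    for m in logM_illustris:
        j = np.searchsorted(sorted_m, m)
        # a window of 2k candidates around the insertion point is
        # guaranteed to contain the k nearest masses
        lo, hi = max(0, j - k), min(len(sorted_m), j + k)
        cand = np.arange(lo, hi)
        nearest = cand[np.argsort(np.abs(sorted_m[cand] - m))[:k]]
        controls.append(order[nearest])
    return np.asarray(controls)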
In Figure <ref>, over 80% of Illustris galaxies are completely disc-dominated at logM_⋆/M_⊙ < 10.5. At slightly higher masses, 10.5<logM_⋆/M_⊙<11, the SDSS has fewer disc-dominated systems and more composites, but Illustris still contains ∼50% discs with (B/T)<0.05 and few composites or bulge-dominated systems. The distributions become more similar in 11<logM_⋆/M_⊙<11.5. Illustris contains an appreciable number of galaxies with higher bulge fractions in 11<logM_⋆/M_⊙<11.5, which is similar to the observations.
Figure <ref> demonstrates that although both simulated and real galaxies have a (B/T)-stellar mass dependence, there is a stronger correlation between photometric bulge-to-total ratio and total stellar mass in Illustris than in observed galaxies from the SDSS. Galaxies in the SDSS have diverse morphologies within each mass division – whereas (B/T) morphologies in galaxies from Illustris are strongly dependent on total stellar mass. Both populations share the trend that bulges become more frequent at higher stellar masses, but the dependence is stronger in Illustris. We now discuss possible biases that may explain these differences.
§.§ Impact of stellar particle resolution/smoothing
Accurate photometry for the inner region of the surface brightness distribution of a galaxy is essential for interpreting the bulge component <cit.>. In <cit.>, we showed that the choice of stellar light distribution (SLD) scheme did not bias the global properties of the galaxy such as total integrated magnitude and half-light radius, but could strongly bias the structural parameter (B/T). We showed that broader smoothing kernels (such as the constant 1 kpc kernel relative to the N=16 nearest-neighbour kernel) artificially limit the spatial resolution in the inner regions of galaxies. Broad, constant smoothing kernels reduced the concentration of flux in the central regions of the galaxy and systematically reduced estimates of (B/T) (to zero in many cases, even for galaxies with (B/T) as large as 0.6 in the fiducial scheme). While it is possible that the fiducial SLD scheme may be biasing (B/T) towards smaller bulge fractions in our decompositions, that leads to the notion that there is a “correct” SLD scheme for the particle mass resolution of Illustris (and the particle resolution of any hydrodynamical simulation). Some SLD schemes will give a better physical representation than others. But ultimately, the upper limit to the spatial resolution that is accessible through any SLD scheme is set by the stellar particle mass/spatial resolution in the simulation.
The choice of the N_16 nearest-neighbour smoothing as the fiducial model was motivated by simplicity <cit.>. However, the comparisons of SLD schemes with narrower and broader smoothing kernels and their effects on the measured (B/T) are qualitatively analogous to comparisons using higher and lower stellar particle resolution, respectively. Higher particle mass resolutions (lower total stellar mass/particle) tend to reduce the typical spatial separation between stellar particles in galaxies produced in hydrodynamical simulations and effectively increase the spatial resolution (at least when using adaptive SLD schemes). Figure <ref> showed that the majority of high mass systems (logM_⋆/M_⊙≳11) in Illustris (that contain larger numbers of particles) have bulges and that low mass galaxies largely do not. The spatial distribution of particles determines the surface brightness distribution of a simulated galaxy. Larger numbers of particles reduce the smoothing radii in our fiducial SLD scheme and generally improve the spatial resolution of a galaxy surface brightness distribution. Improvements to the spatial resolution in the bulge surface brightness distribution (in particular to the inner 100 pc that are essential for discerning its profile from a disc) may facilitate greater accuracy in modelling of the bulge component. If so, the strong mass dependence for the bulge-to-total fraction in Illustris seen in Figure <ref> may arise from inadequate particle resolution for creating realistic photo-bulges in synthetic images of low mass galaxies with smaller numbers of stellar particles.
One way to test the particle resolution dependence on bulge fractions directly is to perform hydrodynamical zoom-in simulations of lower mass systems in Illustris with the same numerical techniques and simulation models (e.g., for Illustris: ). Comparison of the decomposition results from the high-resolution and low-resolution simulations would yield insight on the effects of particle resolution on (B/T) estimates from mock observations. Alternatively, an investigation of the biases on structural morphology from the particle resolution and the simulation models that regulate the formation of structure may be performed by comparing our decomposition results with consistent decompositions of galaxies from other large hydrodynamical simulations such as EAGLE which has comparable mass resolution <cit.>. However, in such a comparison, the biases from differences in the simulation models on the morphological estimates would need to be carefully examined in order to assess whether particle resolution is the main culprit of the strong mass-dependence on (B/T) estimates in the mock observations. Investigating simulations with different resolutions is beyond the scope of this paper.
§.§ Comparison with kinematic B/T
An interesting test that is feasible from our image-based decompositions of simulated galaxies is a comparison between the properties derived from kinematic and photometric information for the stellar particles. Comparisons of photometric and kinematic bulge fractions in galaxies have been performed previously in the literature without the large numbers or extensive realism considerations provided here. <cit.> used bulge+disc decompositions of mock observations of Milky Way-mass galaxies from hydrodynamical simulations with similar mass resolution to Illustris (m_⋆∼10^6 M_⊙) to show that (B/T) is systematically lower from photometry relative to the kinematics. <cit.> reproduced this result in a sample of 18 cosmological zoom-in simulations of galaxies. Each galaxy from <cit.> had a photometric bulge-to-total ratio of (B/T)≈0 but kinematic ratios ∼0.5. However, the zoom-in simulations from <cit.> used adaptive particle mass resolution for each halo – making it difficult to ascertain the effects of particle mass resolution. Still, the implication of each study is that the exponential structure of mock-observed surface brightness profiles of simulated galaxies does not imply a cold rotationally-supported kinematic disc (i.e. a low photometric B/T does not necessarily indicate the lack of a kinematic bulge). These results are further complicated by <cit.> who, while not specifically investigating the differences between photometry and kinematics, demonstrated that realistic mock-observed photometric bulges can be produced in high-resolution simulations that match well with the photometric properties of real bulges. The implication from these studies is that while galaxies with realistic photo-bulges may be produced, bulge fractions inferred from photometry may not straightforwardly couple with kinematic bulge classifications.
In the simulations, the angular momentum for each particle about the principal rotational axis in a galaxy can be derived using the particle velocities and locations relative to the galactic potential. Stars that belong to the bulges of galaxies tend to have a Gaussian distribution of velocities (see for a recent review) whereas stars in the disc have rotationally supported orbits and generally larger, coherent angular momenta. Stars (and particles representing stars) can be approximately associated with their stellar components using this angular momentum information to estimate the kinematic bulge-to-total or disc-to-total fractions (e.g., ). One definition of the kinematic (B/T) that is common in the literature (see ) is:
(B/T)_kin = 2 N_⋆(J_z / J(E) < 0) / N_⋆,tot
where N_⋆,tot is the total number of star particles belonging to the galaxy, J_z is an individual particle's component of angular momentum about the principal rotation axis (computed from the angular momenta of all stars within 10 half-mass radii), and J(E) is the maximum angular momentum of stellar particles ranked by binding energy (U_gravity + v^2) within 50 ranks of the particle in question. N_⋆(J_z / J(E)<0) is an approximation to the number of stars whose motions are not coherent with the bulk rotation. Because the velocities in the bulge are expected to be normally distributed about J_z / J(E)=0, symmetry provides that 2× N_⋆(J_z / J(E)<0) should approximate the number of stars in the bulge of a galaxy – which can be normalized by the total number of stars for the kinematic bulge-to-total ratio.
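This definition translates directly into code. The sketch below is ours, with hypothetical array names (jz for J_z and energy for the binding energy used in the ranking); it approximates J(E) as the maximum |J_z| within 50 binding-energy ranks of each particle.

import numpy as np

def j_of_e(jz, energy, window=50):
    """Approximate J(E): the maximum |J_z| among particles within
    'window' ranks in binding energy of each particle."""
    order = np.argsort(energy)
    jmax = np.empty_like(jz, dtype=float)
    for rank, i in enumerate(order):
        lo = max(0, rank - window)
        hi = min(len(jz), rank + window + 1)
        jmax[i] = np.abs(jz[order[lo:hi]]).max()
    return jmax

def kinematic_bulge_fraction(jz, energy):
    ratio = jz / j_of_e(jz, energy)
    # bulge assumed symmetric about J_z/J(E) = 0: double the counter-rotating count
    return min(2.0 * np.sum(ratio < 0) / len(jz), 1.0)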
The left panel of Figure <ref> compares the kinematic and photometric estimates of (B/T) using the decompositions from the catalog (Section <ref>). The galaxies are used to provide a sense of the uncertainties associated with photometric (B/T) by employing the distributions of decomposition results from all placements and camera angles for each galaxy. The vertical position of each point represents the median photometric (B/T) over all placements and camera angles for each galaxy. The error bars show the 95% range centred on the median of the distribution of estimates. The horizontal position for each system is the kinematic (B/T) derived from the stellar orbits. Note that none of the galaxies in this sample have kinematic bulge fractions less than (B/T)_kin=0.2 – which creates immediate tension with our photometric decomposition results. Many galaxies with high kinematic bulge fractions have no photo-bulge. So whilst the kinematic (B/T) indicate that many galaxies are bulge dominated, the photometric results are completely disc dominated! Furthermore, no discernible correlation is seen between the kinematic and photometric bulge fractions. The results are consistent with previous findings that the photometric bulge-to-total fractions are systematically lower than the kinematic fractions <cit.>.
Three RIGs in the left panel of Figure <ref> are highlighted by star symbols and have labels corresponding to image panels to the right. The highlighted RIGs were selected to enable visual inspection of galaxies with (B/T)_phot>(B/T)_kin (upper right row), (B/T)_phot≈(B/T)_kin (middle right row), (B/T)_phot<(B/T)_kin (bottom right row). The panels show gri composites of our synthetic images and mock SDSS Galaxy Zoo visual classification images[The mock SDSS Galaxy Zoo images were designed to enable consistent visual classifications with observed SDSS galaxies – so they are Illustris galaxies placed in real image fields, but are not convolved with the SDSS PSF or inserted into SDSS fields in a way that reproduces crowding, resolution, and sky brightness statistics. These higher-order biases are unimportant for consistency in visual classifications of galaxies (as most galaxies in the vicinity of closely projected sources are rejected from visual classification samples) but are important in decompositions <cit.>.] with realism from <cit.> for a visual impression of each highlighted RIG. Visual inspection of the morphology of the RIG with (B/T)_phot>(B/T)_kin shows that it has a strong bulge component, but one that is embedded within a disc (edge-on at this camera angle). The uncertainties from the distribution of decomposition results are not consistent with the kinematic estimate. However, the photometric (B/T) fraction for this galaxy is reconcilable with its visual appearance. The photometric (B/T) fraction for the galaxy with (B/T)_phot≈(B/T)_kin in the middle row of images in Figure <ref> is also reconcilable with its visual appearance. While the galaxy appears to contain a bar that may affect the photometric (B/T), it is consistent with the kinematically derived quantity. The galaxy shown in the bottom row of images in Figure <ref> is most intriguing. The galaxy shown in the bottom row has (B/T)_phot=0 for all placements but (B/T)_kin≈1. However, there is no visual presence of a bulge or a disc – yet the kinematic information indicates that it is almost a pure bulge. Figure <ref> shows that photometrically derived morphologies can achieve similar results to the kinematics. However, photometric (B/T) estimates for the majority of galaxies in the RIG sample are systematically lower than their kinematic counterparts.
The markers corresponding to each RIG in Figure <ref> are coloured according to their total stellar masses. Colour-coding by mass enables inspection of the dependence of the photometric and kinematic bulge-to-total estimates on stellar mass. As expected from Figure <ref>, only galaxies with stellar masses logM_⋆/M_⊙≳11 contain appreciable photometric bulge fractions. Furthermore, galaxies with low stellar masses logM_⋆/M_⊙≲10.5 have the largest systematic errors between the photometric and kinematic estimates of (B/T). The small number of photometric bulges at low masses, coupled with the presence of kinematic bulges, is consistent with the bias expected from particle resolution. However, the galaxy shown in the bottom right row of images in Figure <ref> does not inspire confidence in the kinematic estimates for low-mass galaxies; alternatively, it tells us that kinematics sometimes have nothing to do with visual or photometric morphology. Either way, the high kinematic bulge fractions of galaxies with no visual bulge, such as that system, make it more challenging to pin down particle resolution as the driving bias behind the reduced photometric (B/T) estimates. Still, the stronger correlation between kinematic and photometric (B/T) for galaxies with logM_⋆/M_⊙≳11 and the systematically low photometric (B/T) for galaxies with masses logM_⋆/M_⊙≲10.5 present a strong case for the suppression of photometric bulge-to-total fractions by particle mass resolution limitations.
Any relationship between the correlation of kinematic and photometric (B/T) and total stellar mass can be examined using the full Illustris sample (i.e., the catalog). Figure <ref> shows the difference between the kinematic and photometric bulge fractions, Δ (B/T), as a function of total stellar mass for all Illustris galaxies. The suspicion of a mass dependence in Δ (B/T) from the representative sample in Figure <ref> is confirmed in Figure <ref>. Galaxies at low stellar masses, logM_⋆/M_⊙≲11, have systematically lower photometric bulge fractions than inferred from the stellar kinematics. However, increasing total stellar mass yields increasing agreement between the kinematics and photometry. Indeed, at logM_⋆/M_⊙≳11.2, photometric and kinematic (B/T) are broadly consistent.
§ DISCUSSION
§.§ Remaining challenges to realism: Dust
Our comparisons of the global size-luminosity relations of SDSS and Illustris indicate that there is no single sufficient explanation for their discrepancy. In Sections <ref> and <ref>, we demonstrated that (1) internal segmentation had little overall effect in our comparisons; and (2) the difference in morphologies between the SDSS and Illustris sample has a crucial role in generating the discrepancy. However, Figure <ref> showed that while matching by morphology improves agreement in the slope and offset of the size-luminosity relations, galaxies in Illustris remain slightly brighter and larger on average for the same stellar masses and bulge fractions. Differences in each may be explained in part by contributions from neglecting treatment of dust in our mock observations of Illustris galaxies. However, the inability to resolve dust physics with the spatial resolution achievable for simulations of the scale of Illustris makes it difficult to examine the role of dust quantitatively. Our comparison of the size-luminosity relations of Illustris and the SDSS galaxies are therefore complicated by the presence of dust in the real universe – which is not distributed uniformly within galaxies (e.g., see and references therein).
<cit.> showed that when dust is present in galaxies, measurements of galaxy properties with bulge+disc decompositions are affected. In their study, disc scale-lengths of analytic bulge+disc systems with dust were systematically over-estimated and this was exacerbated by inclination (with edge-on discs biased most strongly). Meanwhile, the indices and effective radii of bulges and spheroids were systematically underestimated <cit.>. In populations of discs and spheroids, the effects of dust may therefore serve to increase the scatter and modify the scaling between size and luminosity. However, the differences in size at the low-luminosity end of Figure <ref> cannot be reconciled with this dust model because, at low luminosities, galaxies in the SDSS are systematically smaller than in Illustris for the same luminosities – whereas real galaxies with dust should cause over-estimates of disc sizes. For dust to cause this shift would first require a significantly larger population of low-luminosity spheroids whose sizes would be systematically underestimated due to dust. Therefore, while dust may partially explain the offset in luminosities between Illustris and the SDSS, a difference in morphologies between the two populations is first necessary to cause the changes to the scaling between size and luminosity from dust. Ultimately, the absence of dust in the simulation and radiative transfer code used to produce the synthetic images presents a limitation in the realism of the mock observations. However, the choice to not employ a dust model in the radiative transfer is motivated by the uncertainties involved in the distribution of dust in galaxies. Future analysis of the biases from dust-inclusive radiative transfer for our measurements would yield interesting results that could be compared with the dust-less models, but is beyond the scope of the current work.
Furthermore, a deficit of photometrically derived bulges relative to observed galaxies is not expected for simulated galaxies in which there is no dust. Broadly following the arguments in the previous paragraph, the inclusion of a dust model in the radiative transfer would serve to further systematically under-estimate (B/T) in Illustris by reducing the overall brightness of bulges and of the pixels corresponding to the peak of the bulge surface brightness distribution <cit.>. The de Vaucouleurs n=4 model for the bulge depends strongly on the surface brightness of the inner 100 pc of a galaxy's light profile. Attenuation of the light by dust at the centre of the bulge drives the free index and bulge half-light radius down in pure Sérsic models <cit.>. In fixed-n bulge+disc decompositions, the bulge model brightness is forced down to accommodate the decrease in flux from the central pixels, and the exponential disc model is driven up. The combined effects lead to a reduction in bulge integrated magnitude and half-light radius, an increase in disc integrated magnitude and half-light radius, and a corresponding reduction in bulge-to-total fraction. In other words, the inclusion of dust tends to weaken the photo-bulge relative to the disc. Therefore, the exclusion of dust in the radiative transfer should not cause the photo-bulge deficit.
§.§ Disconnect between kinematics and photometry
Future directions in investigating how well the photometric structure of Illustris galaxies reproduces the structure of real galaxies will require better treatment of dust radiative transfer, particle resolution and smoothing of stellar light, and comparisons with observations. The facts that (1) high-mass galaxies in Illustris contain photo-bulges that are consistent with the kinematics and (2) mock observations from high particle-resolution simulations also produce bulges <cit.> support a scenario in which resolution plays a role in the interpretation of bulges in mock observations of simulated galaxies. To test the hypothesis that spatial or particle resolution is preventing adequate sampling of the bulge component, a deeper examination of alternative SLD schemes and, more directly, comparisons with zoom-in simulations of Illustris are required. In particular, a recent zoom-in simulation of Illustris using the same model and hydrodynamics improved the particle resolution to 40× that of the full volume <cit.>. The zoom-in would place the particle resolution of low-mass galaxies, logM_⋆/M_⊙≲ 10.5, on a level footing with the high-mass galaxies in our current comparison – which appear to contain more substantial numbers of bulges. If bulges can be resolved in synthetic images of the zoomed-in low-mass galaxies, then new constraints can be placed on the resolution necessary to resolve bulges for realistic comparisons with observations.
Furthermore, whether photo-bulges genuinely correlate with kinematic bulges is another test that may be performed by comparing both kinematic and photometric structural estimates of individual galaxies in the zoom-in and the full volume. Given that our lowest mass galaxies had the largest discrepancies between the kinematic and photometric bulge fractions, one could tackle the question of whether a forty-fold increase in particle resolution can alleviate the discrepancy.
It may also be possible to explore alternative kinematic definitions of a bulge. In this paper, we used a simple prescription that assumes that the angular momentum distribution of the bulge component is symmetric about the principle angular momentum axis of the galaxy – centred at the minimum in the gravitational potential. In this way, bulge fraction can be easily computed assuming this symmetry, since stellar particles belonging to a rotationally supported disc should have an angular momentum distribution that should be offset from zero. However, there may be several caveats to this definition. A comprehensive study of caveats to this definition is beyond the scope of this paper. Still, one of them could be the stellar clumps in Illustris galaxies identified in <cit.>. Should a handful of large stellar clumps have orbits in which a large amount of the angular momentum is radial, this would cause the kinematic bulge fraction to systematically increase – despite the clumps having no physical connection to a stellar bulge. The contributions of these clumps to the angular momentum distributions of higher mass galaxies may be smaller, leading to generally better relationships between kinematic and photometric bulges. Although speculative at this point, now that we have seen that coupling between the kinematic and photometric bulge fractions is possible, it may be a good opportunity to review and improve upon conventional definitions of the bulge inferred from simulations.
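To make the prescription above concrete, a minimal sketch is given below (in Python; the function name is ours, and we assume stellar particle positions and velocities have already been centred on the potential minimum and the galaxy rest frame). It reads the symmetry assumption as follows: every counter-rotating particle is mirrored by an equal co-rotating bulge particle, so the bulge mass is estimated as twice the counter-rotating mass.

import numpy as np

def kinematic_bulge_fraction(pos, vel, mass):
    """Estimate (B/T)_kin by doubling the counter-rotating stellar mass.

    Assumes pos/vel are already centred on the potential minimum and the
    galaxy rest frame (shapes: [N, 3]); mass has shape [N].
    """
    # Principal angular momentum axis of the stellar component
    j = np.cross(pos, vel)                      # specific angular momenta
    J_tot = (mass[:, None] * j).sum(axis=0)
    z_hat = J_tot / np.linalg.norm(J_tot)

    # Component of each particle's angular momentum along that axis
    j_z = j @ z_hat

    # Bulge assumed symmetric about j_z = 0: twice the counter-rotating mass
    m_counter = mass[j_z < 0.0].sum()
    return min(1.0, 2.0 * m_counter / mass.sum())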
§ SUMMARY
In this second paper in a series, we have employed our new procedure described in <cit.> for deriving image-based quantitative morphologies of simulated galaxies in a fair comparison with observations. In this section, the results of a comparison between properties derived from bulge+disc decompositions of galaxies from the Illustris simulation and the SDSS are summarized.
§.§ Comparison with SDSS galaxies
The bulge+disc decomposition results from the catalog <cit.> were used to perform a comparison with the observed size-luminosity and (B/T)-stellar mass relations. Comparisons between simulated and real galaxy size-luminosity relations have been previously performed (e.g., ) but may be compromised by at least one of the following factors: (1) the limitations on statistical meaningfulness in comparisons to observations due to small simulated galaxy samples; (2) inconsistent derivations of simulated and observed galaxy properties; (3) incomplete observational realism that biases the distributions of derived properties of simulated galaxies in comparisons with observations. Each of these caveats is addressed by using mock observations of a representative population of simulated galaxies, applying extensive observational realism to enable an unbiased image-based comparison, and employing identical methods for deriving galaxy properties in simulated and observed galaxies.
* Size-luminosity relation: Illustris galaxies were matched by stellar mass to the stellar mass distribution of the SDSS – taking the SDSS galaxy population over 0.04<z<0.06 with a lower mass cut of logM_⋆/M_⊙ > 10 as the comparison sample. In Section <ref>, we compared the size-luminosity relations of the SDSS and Illustris for the matched sample and showed that Illustris galaxies are generally larger and brighter for the same stellar masses as galaxies from the SDSS. Furthermore, the correlation between size and luminosity is not as strong in Illustris (and appears flat) relative to the SDSS relation. We concluded that such a discrepancy could not be explained by the known biases from internal segmentation of the galaxy surface brightness distributions identified in <cit.>. In Section <ref>, the morphological dependence of the size-luminosity relations in Illustris and SDSS was examined using our bulge-to-total fractions. While Illustris qualitatively reproduces the observed trend of increasing slope and decreasing normalization with increasing bulge fractions, the size-luminosity relations of Illustris are quantitatively distinct, having smaller slopes and higher normalizations across all (B/T) classifications (Figure <ref>).
* Bulge and disc morphologies: Distributions of (B/T) as a function of total stellar mass were compared using the mass-matched samples from the size-luminosity comparison. We showed that Illustris is dominated by disc-dominated morphologies at all masses in the sample – whereas the SDSS demonstrates diverse morphologies. Still, Illustris contains bulge-dominated galaxies, but the relationship between stellar mass and (B/T) is stronger than in the observations (i.e., Illustris contains too many discs at low mass, and only high-mass galaxies contain appreciable bulge fractions). The size-luminosity relations of bulge- and disc-dominated galaxies differ significantly in observations. These results hinted that the morphological differences between the Illustris and SDSS samples were affecting the comparison of their size-luminosity relations.
* Size-luminosity relation – Impact of morphology: The size-luminosity relations of SDSS and Illustris were revisited – this time matching by stellar mass and (B/T) morphology by re-sampling SDSS galaxies to match the (B/T)-stellar mass distribution of Illustris. The comparison demonstrated that, indeed, the discrepancy in the previous comparison of the size-luminosity relations owed in large part to the fact that Illustris contained predominantly disc-dominated galaxies in that sample. By additionally matching by morphology, the agreement between the size-luminosity relations (which is essentially the disc size-luminosity relation) is significantly improved – leaving a reduced magnitude and size offset between the relations. The remaining offset is difficult to characterize without a detailed quantification of the effects of dust in the creation of the synthetic images and on our decomposition results.
§.§ The deficit of bulge-dominated galaxies in Illustris at low stellar mass
Based on our decompositions with gim2d, Illustris contains too few bulge/spheroid-dominated galaxies at low stellar masses, logM_⋆/M_⊙≲ 11, relative to the observations, and only has appreciable populations of galaxies with bulges at logM_⋆/M_⊙≳ 11. The deficit of bulge-dominated galaxies contrasts with previous generations of simulations that tended to produce galaxies that were too compact, dense, and rotated too quickly (e.g., ). A relationship between bulge fraction and stellar mass is not surprising in a framework of galaxy evolution that is based on hierarchical assembly of galaxies through mergers. However, the significantly stronger dependence of bulge fraction on total stellar mass in Illustris, relative to the observations, is puzzling. Still, several scenarios may provide some explanation: first, the inability to resolve the bulge in galaxies with low total stellar mass, where the spatial resolution is limited by the number of stellar particles and/or the method for distributing stellar light; and second, the adequacy of the mechanisms by which bulges form and survive within the simulation model. Application of the methods utilized in this paper to zoom-in cosmological simulations with the same physical model at high resolution (e.g., ) and to alternative models (e.g., EAGLE: ) may yield insight into the validity of these hypotheses.
Lastly, the photometric bulge-to-total fractions of Illustris galaxies were compared with the bulge fractions derived from the internal kinematics of simulated galaxies. Confirming previous work using a larger sample and similar resolution <cit.>, we showed in Section <ref> that the photometric estimates for (B/T) are systematically lower than the kinematic estimates. In our first look using a representative sample of Illustris galaxies, no discernible correlation between the photometric and kinematic (B/T) is seen. However, taking all galaxies in Illustris, we showed that while galaxies with logM_⋆/M_⊙≲ 11 have photometric (B/T) that are systematically lower than the kinematic (B/T), galaxies with higher stellar masses demonstrated broad consistency between photometric and kinematic bulge fractions. We showed that there is a strong relationship between the correlation of photometric and kinematic (B/T) and total stellar mass – with the correlation improving with increasing masses.
Several low-mass galaxies logM_⋆/M_⊙≲ 10.5 with high kinematic (B/T) and no visible photo-bulge were inspected – implying that (a) the spatial resolution is insufficient in these galaxies to resolve the bulge; (b) the kinematic estimate for the bulge that we employed does not always reflect the true presence of a bulge; (c) there is no underlying connection between kinematics and visual or photometric morphology. A combination of (a) and (b) is also possible. In such a scenario, galaxies that are poorly resolved (both spatially in the images and by particles in the kinematics) may have reduced photometric estimates of (B/T) and have intrinsically large uncertainties in the kinematic estimates.
§ ACKNOWLEDGEMENTS
We thank the reviewer for suggestions which greatly improved the quality of this paper. We thank Greg Snyder for useful discussions and input. PT acknowledges support for Program number HST-HF2-51384.001-A was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This research made use of a University of Victoria computing facility funded by grants from the Canadian Foundation for Innovation and the British Columbia Knowledge and Development Fund. We thank the system administrators of this facility for their gracious support. Funding for the Sloan Digital Sky Survey IV has been provided by
the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
Science, and the Participating Institutions. SDSS-IV acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrofísica de Canarias, The Johns Hopkins University,
Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, Lawrence Berkeley National Laboratory,
Leibniz Institut für Astrophysik Potsdam (AIP),
Max-Planck-Institut für Astronomie (MPIA Heidelberg),
Max-Planck-Institut für Astrophysik (MPA Garching),
Max-Planck-Institut für Extraterrestrische Physik (MPE),
National Astronomical Observatory of China, New Mexico State University,
New York University, University of Notre Dame,
Observatário Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Autónoma de México, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth,
University of Utah, University of Virginia, University of Washington, University of Wisconsin,
Vanderbilt University, and Yale University.
|
http://arxiv.org/abs/1701.07731v2 | 20170126145811 | Fast Xor-based Erasure Coding based on Polynomial Ring Transforms | [
"Jonathan Detchart",
"Jérôme Lacan"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Fast Xor-based Erasure Coding based on Polynomial Ring Transforms
Jonathan Detchart, Jérôme Lacan
ISAE-Supaéro, Université de Toulouse, France
=======================================================================================
The complexity of software implementations of MDS erasure codes depends mainly on the efficiency of the implementation of finite field operations.
In this paper, we propose a method to reduce the complexity of the finite field multiplication by using simple transforms between a field and a ring to perform the multiplication in a ring.
We show that moving to a ring reduces the complexity of the operations. Then, we show that this construction allows the use of simple scheduling to reduce the number of operations.
§ INTRODUCTION
Most practical Maximum Distance Separable (MDS) packet erasure codes are implemented in software. In various applications, such as packet erasure channels <cit.> or distributed storage systems <cit.>, the coding/decoding process performs operations over finite fields. The efficiency of the implementation of these finite field operations is thus critical for these applications.
To speed up this operation, <cit.> described an implementation of finite field multiplications which only uses simple XOR operations, in contrast to classic software multiplications, which are based on lookup tables (LUTs). The complexity of multiplying by an element, i.e., the number of XOR operations, depends on the size of the finite field and also on the element itself. This kind of complexity is studied for Maximum-Distance Separable (MDS) codes in <cit.>. Other work has been done to reduce redundant operations by applying scheduling <cit.>.
Independently, in the context of large finite fields for cryptographic applications, <cit.> proposed an XOR-based method to perform fast hardware implementations of multiplications
by transforming each element of a field into an element of a larger ring. In this polynomial ring, where the operations on polynomials are done modulo x^n+1, the multiplication by a monomial is much simpler, as the modular reduction is just a cyclic shift. The authors identified two classes of fields based on irreducible polynomials with binary coefficients
that allow each field element to be transformed into a ring element by adding additional "ghost bits".
In this paper, we extend their approach to define fast software implementations of XOR-based erasure codes. We propose an original method called PYRIT (PolYnomial RIng Transform) to perform operations between elements of a finite field in a bigger ring by using fast transforms between these two structures. Working in such a ring is much easier than working in a finite field. First, it reduces the coding complexity by design. Second, it allows the use of simple scheduling to reduce the number of operations thanks to the properties of the ring structure.
The next section presents the algebraic framework allowing to define the various transforms between the finite field and some subsets of the ring. Then we discuss about the choice of these transforms and their properties. We also detail the complexity analysis before introducing some scheduling results.
§ ALGEBRAIC CONTEXT
The algebraic context of this paper is finite fields and ring theory. A more detailed presentation of this context, including the proofs of the following propositions, can be found in <cit.> or <cit.>.
Let 𝔽_q^w be the finite field with q^w elements.
Let R_q,n=𝔽_q[x]/(x^n-1) denote the quotient ring of polynomials obtained from the polynomial ring 𝔽_q[x] quotiented by the ideal generated by the polynomial x^n-1.
Let p_1^u_1(x)p_2^u_2(x)… p_r^u_r(x)=x^n-1 be the decomposition of x^n-1 into irreducible polynomials over 𝔽_q.
When n and q are relatively prime, it can be shown that u_1=u_2=…=u_r=1 (see <cit.>). In other words, if q=2 and n is odd, we simply have p_1(x)p_2(x)… p_r(x)=x^n-1.
In the rest of this document, we assume that n and q are relatively prime.
The ring R_q,n is equal to the direct sum of its r minimal ideals A_i=((x^n-1)/p_i(x)), for i=1,…,r.
Moreover, each minimal ideal contains a unique primitive idempotent θ_i(x). A construction of this idempotent is given in <cit.>, Chap. 8, Theorem 6.
Since 𝔽_q[x]/(p_i(x)) is isomorphic to the finite field B_i=𝔽_q^w_i, where p_i(x) is of degree w_i, we have:
R_q,n is isomorphic to the following Cartesian product:
R_q,n≃ B_1 ⊗ B_2 ⊗… B_r
For each i=1,…,r, A_i is isomorphic to B_i. The isomorphism is:
ϕ_i : [ B_i → A_i; b(x) → b(x)θ_i(x) ]
and the inverse isomorphism is:
ϕ_i^-1 : [ A_i → B_i; a(x) → a(α_i); ]
where α_i is a root of p_i(x).
Let us now assume that q=2. Let us introduce a special class of polynomials:
The All One Polynomial (AOP) of degree w is defined as
p(x) = x^w+x^w-1+x^w-2+…+x+1
The AOP of degree w is irreducible over 𝔽_2 if and only if w+1 is a prime and 2 generates 𝔽^*_w+1 (i.e., 2 is a primitive root modulo w+1), where 𝔽^*_w+1 is the multiplicative group of 𝔽_w+1 <cit.>. The values of w+1 such that the AOP of degree w is irreducible form the sequence A001122 in <cit.>. The first values of this sequence are: 3, 5, 11, 13, 19, 29, …. In this paper, we only consider irreducible AOPs.
According to Proposition <ref>, R_2,w+1 is equal to the direct sum of its principal ideals A_1=((x^(w+1)+1)/p(x))=(x+1) and A_2=((x^(w+1)+1)/(x+1))=(p(x)), and R_2,w+1 is isomorphic to the direct product of B_1=𝔽_2[x]/(p(x))=𝔽_2^w and B_2=𝔽_2[x]/(x+1)=𝔽_2.
It can be shown that the primitive idempotent of A_1 is θ_1=p(x)+1. This idempotent is used to build the isomorphism ϕ_1 between B_1 and A_1.
§ TRANSFORMS BETWEEN THE FIELD AND THE RING
This section presents different transforms between the field B_1=𝔽_2^w=𝔽_2[x]/(p(x)) and the ring R_2,w+1=𝔽_2[x]/(x^(w+1)+1).
§.§ Isomorphism transform
The first transform is simply the application of the basic isomorphism between B_1 and the ideal A_1 of R_2,w+1 (see Prop. <ref>).
By definition of the isomorphism, we have:
ϕ_1^-1( ϕ_1(u(x)).ϕ_1(v(x)) ) = u(x).v(x)
So, ϕ_1 can be used to send the elements of the field into the ring, then to perform the multiplication, and then to come back into the field. We show in the following Proposition that the isomorphism admits a simplified version.
Let W(b(x)), the weight of b(x), be defined as the number of monomials in the polynomial representation of b(x).
ϕ_1 (b_B(x)) = b_A(x) = {[ b_B(x) if W(b_B(x)) is even; b_B(x)+p(x) else ].
ϕ_1^-1(b_A(x)) = b_B(x) ={[ b_A(x) if b_w=0; b_A(x)+p(x) else ].
where b_w is the coefficient of the monomial of degree w of b_A(x).
For the first point, we have ϕ_1 (b(x))=b(x)θ_1(x)=b(x)(p(x)+1)=b(x)p(x)+b(x). We can observe that b(x)p(x)=0 when W(b(x)) is even and b(x)p(x)=p(x) when W(b(x)) is odd. The first point is thus obvious.
For the second point, it can be observed that, from the first point of this proposition, if an element of A_1 has a coefficient b_w≠0, then it was necessarily obtained from the second rule, i.e. by adding p(x). Then, its image into B_1 can be obtained by subtracting (adding in binary) p(x). If b_w=0, then nothing has to be done to obtain b_B(x).
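In software, Proposition <ref> reduces these two transforms to a parity test and a conditional XOR with the bitmask of p(x). A minimal sketch in Python (our notation: a polynomial is stored as an integer whose bit i holds the coefficient of x^i; here for w=4, so the ring is R_2,5):

W = 4
P_MASK = (1 << (W + 1)) - 1          # p(x) = x^4 + x^3 + x^2 + x + 1

def phi1(b):
    """Field -> ideal A_1: add p(x) when the weight of b is odd."""
    return b if bin(b).count("1") % 2 == 0 else b ^ P_MASK

def phi1_inv(a):
    """Ideal A_1 -> field: subtract p(x) when the degree-w coefficient is set."""
    return a ^ P_MASK if (a >> W) & 1 else a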
§.§ Embedding transform
Let us denote by ϕ_E the embedding function which simply consists in considering the element of the field as an element of the ring without any transformation. This function was initially proposed in <cit.>.
Note that the images of the elements of B_1 do not necessarily belong to A_1. However, let us define the function ϕ̅_1^-1 from R_2,w+1 to B_1 by ϕ̅_1^-1(b_A(x))=b_A(α), where α is a root of p(x). This function can be seen as an extension of the function ϕ_1^-1 to the whole ring.
<cit.>
For any u(x) and v(x) in B_1, we have:
ϕ̅_1^-1( ϕ_E(u(x)).ϕ_E(v(x)) ) = u(x).v(x)
The embedding function corresponds to a multiplication by 1 in the ring. In fact, 1 is equal to the sum of the idempotents θ_i(x) of the ideals A_i, for i=1,…,r <cit.>. Thus, ϕ_E(u(x))=u(x).∑_i=1^rθ_i(x). Then, ϕ_E(u(x)).ϕ_E(v(x)) is equal to u(x).v(x).(∑_i=1^rθ_i(x))^2. Thanks to the properties of idempotents, θ_i(x).θ_j(x) is equal to θ_i(x) if i=j and 0 else. Thus, ϕ_E(u(x)).ϕ_E(v(x)) is equal to u(x).v(x).(∑_i=1^rθ_i(x)). The function ϕ̅_1^-1 is the computation of the remainder modulo p_1(x)=p(x), and θ_i(x) mod p(x) is equal to 1 if i=1 and 0 otherwise.
This proposition proves that the Embedding function can be used to perform a multiplication in the ring instead of doing it in the field. The isomorphism also has this property, but its transforms between the field and the ring are more costly.
§.§ Sparse transform
Let us define the transform ϕ_S from B_1 to R_2,w+1:
ϕ_S (b_B(x)) = b_A(x) = ϕ_1(b_B(x)) + δ . p(x)
where δ=1 if W(ϕ_1(b_B(x)) + p(x)) < W(ϕ_1(b_B(x))), and 0 otherwise.
For any u(x) and v(x) in B_1, we have:
ϕ̅_1^-1( ϕ_S(u(x)).ϕ_S(v(x)) ) = u(x).v(x)
As observed in the proof of Prop. <ref>, ϕ̅_1^-1 is just the computation of the remainder modulo p(x). Moreover, according to the definition of ϕ_S, ϕ_S(u(x)).ϕ_S(v(x)) is equal to u(x).v(x) plus a multiple of p(x) (possibly equal to 0). Thus, the remainder of ϕ_S(u(x)).ϕ_S(v(x)) modulo p(x) is equal to u(x).v(x).
This proposition shows that ϕ_S can be used to perform the multiplication in the ring. The main interest of this transform is that the weight of the image of ϕ_S is small, which reduces the complexity of the multiplication in the ring.
§.§ Parity transform
The ideal A_1 is composed of the set of elements of R_2,w+1 with even weight.
We can observe from Proposition <ref> that all the images of ϕ_1 have even weight. Since the number of even-weight elements of R_2,w+1 is equal to the number of elements of A_1, A_1 is composed of the set of elements of R_2,w+1 with even weight.
Let us consider the function ϕ_P, from B_1 to R_2,w+1, which adds a single parity bit to the vector corresponding to the finite field element. The obtained element has an even weight (by construction), and thus, according to the previous Proposition, it belongs to A_1.
Since the images by ϕ_P of two distinct elements are distinct, ϕ_P is a bijection between B_1 and A_1. The inverse function, ϕ_P^-1, consists just in removing the last coefficient of the ring element.
It should be noted that ϕ_P is not an isomorphism, but just a bijection between B_1 and A_1. However, it will be shown in the next section that this function can be used in the context of erasure codes.
§ APPLICATION OF TRANSFORMS
In typical XOR-based erasure coding systems <cit.>, the encoding process consists in multiplying an information vector by the generator matrix. Since, in software, XORs are performed using machine words of l bits, l interleaved codewords are encoded in parallel.
We consider a system with k input data blocks and m output parity blocks.
The total number of XORs of the encoding is thus defined by the generator matrix, which must be as sparse as possible. First, we use a k × (k+m) systematic generator matrix built from a k × k identity matrix concatenated to a k × m Generalized Cauchy (GC) matrix <cit.>. A GC matrix generates a systematic MDS code, and it contains only 1s on its first row and on its first column. Then, to improve the sparsity of the generator matrix in the ring, we use the Sparse transform ϕ_S. This has to be done only once, since the ring matrix is the same for all the codewords.
For the information vectors, it is not efficient to use ϕ_S, since the XORs of machine words do not take into account the sparsity of the XOR-ed vectors. We thus use the Embedding or Parity transforms, which are less complex than ϕ_1.
When Embedding is used for information vectors and Sparse is used for the generator matrix, the obtained result in the ring can be sent into the field by using ϕ_1^-1 (proof similar to the proofs of Propositions <ref> or <ref>).
When Parity is used for the information vector, the image of the vector in the ring only contains elements of the ideal A_1. Since these elements are multiplied by the generator matrix (in the ring), the obtained result only contains elements of the ideal A_1. These elements have even weight, so it is not necessary to keep the parity bit before sending them on the "erasure channel". Since the Parity transform is not an isomorphism, these data cannot be decoded by another method. Indeed, to decode, it is necessary to apply ϕ_P (add the parity bit), then to decode by multiplying by the inverse matrix, and then to apply ϕ_P^-1 (remove the parity bit on the correct information vector).
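To make the encoding path explicit, the sketch below (Python; all names are ours) multiplies a ring element by a sparse constant using only cyclic shifts and word XORs, and applies the Parity transform. As described above, each of the w+1 polynomial coefficients is one machine word carrying l interleaved codewords:

W = 4
N = W + 1   # ring length: one machine word per coefficient of x^0 ... x^W

def ring_mul_by_constant(words, shifts):
    # Multiply the ring element `words` by the constant sum of x^s over `shifts`.
    # Multiplying by x^s modulo x^N + 1 is a cyclic shift of the coefficients,
    # so the whole product costs len(shifts) * N word XORs.
    out = [0] * N
    for s in shifts:
        for j in range(N):
            out[(j + s) % N] ^= words[j]
    return out

def phi_P(data_words):
    # Parity transform: append the XOR of the w coefficient words, producing
    # an even-weight ring element (i.e., an element of the ideal A_1).
    parity = 0
    for d in data_words:
        parity ^= d
    return data_words + [parity]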
§ COMPLEXITY ANALYSIS
In this section, we determine the total number of operations done in the coding and the decoding processes.
§.§ Coding complexity
The coding process is composed of three phases: the field to ring transform, the matrix vector multiplication and the ring to field transform. We assume that the information vector is a vector of k elements of the field 𝔽_2^w.
For the first and the third phases, Table <ref> gives the complexities of Embedding and Parity transforms obtained from their definition in Section <ref>.
The choice between the two methods thus depends on the values of the parameters: if k>m, Parity transform has lower complexity. Else, "Embedding" complexity is better.
For the matrix vector operation, let us first consider the multiplication of two ring elements. As explained in the previous section, the first element (which corresponds to an information symbol) is handled by the software implementation as machine words. So the complexity of the multiplication only depends on the weight of the second element, denoted by w_2∈{0, 1, …,w+1}. The complexity of this multiplication is thus (w+1).w_2.
Now, we can consider the specificities of the various transforms. In the Parity transform, the last bit of the parity blocks is not used (i.e., it is not transmitted on the erasure channel), so it is not necessary to compute it. It follows that the complexity of the multiplication is only w.w_2.
Similarly, for the Embedding transform, the last bit of the input vector is always equal to 0. So, we also have a complexity equal to w.w_2.
To have an average number of operations done in the multiplication of the generator matrix by the input data blocks, we have to evaluate the average weight of the entries of the generator matrix in the ring.
The generator matrix is a k × m GC matrix whose first column and first row are filled with 1s. The other elements can be considered as random nonzero elements. They are generated by ϕ_S, which chooses the lower-weight ring element among the two corresponding to the field element. Let us denote their average weight by w_ϕ_S.
For this case, the average number of XORs is thus:
(k+m-1).w + (k-1).(m-1).w.w_ϕ_S
This leads to the following general expression of the coding complexity:
(min(k,m)+k+m-1).w+(k-1).(m-1).w.w_ϕ_S
To estimate the complexity on a practical example, we fix the value of w to 4. Classic combinatorial evaluation (not presented here) gives the average weight of the nonzero images of ϕ_S:
w_ϕ_S = (w+1)/(2^(w+1)-2) · (2^w - C(w, w/2)), where C(w, w/2) denotes the binomial coefficient.
So, w_ϕ_S=1.66. We plot in Figure <ref> the evolution of the factor over optimal (used e.g. in <cit.>, table III) which is the density of the matrix normalized by the minimal density, k.m.w. We vary the value of k for three values of m: 3, 5 and 7. For each pair (k,m), we generate 10000 random GC matrices and keep the best we found.
We can observe that the values are very low. For example, <cit.> gives the lowest density of Cauchy matrices for the field 𝔽_2^4, and our values are always lower than those.
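The closed-form expression for w_ϕ_S can also be checked by direct enumeration; the following sketch (Python, our names) averages the weight of the sparser ring representative over all nonzero field elements and matches the formula for w=4:

from math import comb

def avg_sparse_weight(w):
    """Average weight of the Sparse-transform image over nonzero field elements."""
    p_mask = (1 << (w + 1)) - 1
    weight = lambda v: bin(v).count("1")
    total = 0
    for b in range(1, 1 << w):
        r = b if weight(b) % 2 == 0 else b ^ p_mask   # phi_1(b)
        total += min(weight(r), weight(r ^ p_mask))    # sparser of the two representatives
    return total / ((1 << w) - 1)

closed_form = lambda w: (w + 1) / (2**(w + 1) - 2) * (2**w - comb(w, w // 2))
# avg_sparse_weight(4) == closed_form(4) == 1.666...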
To reduce the complexity in specific cases, we can observe that the ring contains w elements whose corresponding matrices are optimal (a single diagonal). By using these elements, we can search by brute force for MDS matrices built only from these optimal elements.
For example, let us consider the elements of the field 𝔽_2^4 sent into the ring R_2,5. The Vandermonde matrix defined by:
V = ( x^i.j)_i=0,…,4; j=0,…,4
where x is a monomial in R_2,5, has the minimal number of 1s. It can be verified that this matrix can be used to build a systematic MDS code. For this matrix, the total number of XORs done in the generation of the parity packets (including the field-to-ring and ring-to-field transforms) is
(min(k,m)+k+m-1).w+(k-1).(m-1).w = k.w + k.m.w
Its factor over optimal is equal to 1.2 which is lower than the values given in Figure <ref> and which is close to the lowest bound given in <cit.>.
§.§ Decoding complexity
As the decoding consists of a matrix inversion and a matrix vector multiplication, we can use the same approach to perform the multiplication: we first invert the sub-matrix in the field, then we transform each entry of this matrix into a ring element, and finally we perform the ring multiplication.
The complexity of the decoding thus depends on the complexity of the matrix inversion and on the complexity of the matrix vector multiplication.
The complexity of the matrix vector multiplication was studied in the previous paragraph.
The complexity of an r × r matrix inversion is generally O(r^3) operations in the field. But if the matrix has a Cauchy structure, this complexity can be reduced to O(r^2) <cit.>.
Note that, contrarily to the matrix vector multiplication, the matrix inversion complexity does not depend on the size of the source and parity blocks. And thus, it becomes negligible when the size of the blocks increase.
§ SCHEDULING
An interesting optimization for MDS erasure codes under an XOR-based representation is the scheduling of operations.
Such techniques are proposed in <cit.>, <cit.>, <cit.> and <cit.>. The general principle consists in "factorizing" some operations which are done several times to generate the parity blocks.
We show in the next two paragraphs that these techniques can be used very efficiently on ring elements.
Indeed, matrices defined over rings have two main advantages.
§.§ Complexity reduction
Over finite fields, scheduling consists in searching for common patterns in the binary representation of the generator matrices. The w × w matrices representing multiplication by field elements do not have any particular structure, and thus they must be considered in their entirety by the scheduling algorithm.
This is not the case for the (w+1) × (w+1) matrices corresponding to ring elements because, thanks to the form of the polynomial x^(w+1)+1, they are composed of diagonals either full of 0s or full of 1s. This means that they can be represented in the scheduling algorithm just by their first column or, equivalently, by the ring polynomial.
This drastically reduces the complexity of the scheduling algorithm and thus allows bigger matrices to be handled. From a polynomial point of view, the search for a schedule just consists in finding common patterns in the equations generating the parity blocks.
Let us assume that n=5 and that three data polynomials a_0(x), a_1(x) and a_2(x) are combined to generate the three parities p_0(x)=(1+x^4)a_0(x)+x^2a_1(x)+x^3a_2(x), p_1(x)=a_0(x)+x^3a_1(x)+(1+x^3)a_2(x) and p_2(x)=a_0(x)+a_1(x)+x^3a_2(x).
In this case, the scheduling just consists in computing p'(x)= a_0(x)+x^3a_2(x) and then p_0(x)=p'(x)+x^4a_0(x)+x^2a_1(x), p_1(x)= p'(x)+x^3a_1(x)+a_2(x) and p_2(x)=p'(x)+a_1(x).
To estimate the complexity, we can consider the number of sums of polynomials. Without scheduling, we need 11 sums (4 for p_0(x), 4 for p_1(x), and 3 for p_2(x)), whereas with scheduling we only need 10 sums (2 for p'(x), 3 for p_0(x), 3 for p_1(x) and 2 for p_2(x)).
§.§ Additional patterns
Ring-based matrices allow more common patterns to be found than field-based matrices. The main idea is to observe that, in the ring, we can "factorize" not only common operations, but also operations which are monomial multiples (i.e., cyclic shifts) of operations done in other equations. This is possible only because the multiplications are done modulo x^(w+1)+1.
Let us assume that n=5 and that three data polynomials a_0(x), a_1(x) and a_2(x) are combined to generate the parities
p_0(x)=a_0(x) + x^2a_1(x)+(1+x^2)a_2(x),
p_1(x)=x^2a_0(x)+ x^3a_1(x)+(x+x^4)a_2(x) and
p_2(x)=x^2a_0(x)+ a_1(x)+(x^2+x^3)a_2(x).
We can observe that, with "simple" scheduling, no operations can be factorized here.
However, by rewriting the polynomials, we can reveal factorizations: p_1(x)=x^2a_0(x)+x(x^2a_1(x)+a_2(x))+x^4a_2(x) and p_2(x)=x^2a_0(x)+x^3(x^2a_1(x)+a_2(x))+x^2a_2(x). So, if p'(x)=x^2a_1(x)+a_2(x), we have p_0(x)=p'(x)+a_0(x)+x^2a_2(x), p_1(x)=xp'(x)+x^2a_0(x)+x^4a_2(x) and p_2(x)=x^3p'(x)+x^2a_0(x)+x^2a_2(x).
Counting as in the previous example, we need 11 polynomial additions with scheduling, compared to the 12 additions necessary without scheduling.
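The identities of this second example (with the x^4a_2(x) term) can be checked mechanically. The sketch below (Python; single-bit coefficients stand in for the machine words used in practice) verifies that the scheduled evaluation reproduces the direct one:

import random

N, MASK = 5, (1 << 5) - 1

def shift(a, s):
    # Multiply the bitmask polynomial a by x^s modulo x^5 + 1 (cyclic shift)
    s %= N
    return ((a << s) | (a >> ((N - s) % N))) & MASK

# Random data polynomials for the second example
a0, a1, a2 = (random.randrange(1 << N) for _ in range(3))

# Direct evaluation of the parities
p0 = a0 ^ shift(a1, 2) ^ a2 ^ shift(a2, 2)
p1 = shift(a0, 2) ^ shift(a1, 3) ^ shift(a2, 1) ^ shift(a2, 4)
p2 = shift(a0, 2) ^ a1 ^ shift(a2, 2) ^ shift(a2, 3)

# Scheduled evaluation through the shared pattern p'
pp = shift(a1, 2) ^ a2
assert p0 == pp ^ a0 ^ shift(a2, 2)
assert p1 == shift(pp, 1) ^ shift(a0, 2) ^ shift(a2, 4)
assert p2 == shift(pp, 3) ^ shift(a0, 2) ^ shift(a2, 2)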
§.§ Scheduling results
To evaluate the potential gain of the scheduling, we have implemented an exhaustive search of the best patterns on generator matrices.
This algorithm was applied to several codes over the field 𝔽_2^4. Table <ref> presents the results in terms of "factor over optimal", which is defined as the total number of 1s in the matrix divided by the number of 1s in the optimal MDS matrix, i.e., k.m.w.
When working in a ring, we include in the complexity the operations needed to apply the transforms. In this case, the Embedding transform has the lower complexity, so we added m.w to the number of 1s in the matrix resulting from the scheduling algorithm.
For each case, we have generated 100 random Generalized Cauchy matrices.
The measured parameters are:
* average field matrix: average number of 1 in the GC matrices divided by k.m.w
* best field matrix: lowest number of 1 among the GC matrices divided by k.m.w
* average ring matrix: average number of 1 in the ring matrices (without scheduling) + ring-field correspondence divided by k.m.w
* best ring matrix: best number of 1 among the ring matrices (without scheduling) + ring-field correspondence divided by k.m.w
* average with scheduling: average number of XORs with scheduling + ring-field correspondence, divided by k.m.w
* best with scheduling: best number of XORs with scheduling + ring-field correspondence, divided by k.m.w
This table confirms that, even without scheduling, ring matrices have a lower density than field matrices, thanks to the Sparse transform.
Applying scheduling to these matrices allows a significant gain of complexity. Indeed, it reduces the complexity by more than 20% on the best matrices. The final results are similar to the results obtained (without scheduling) on the optimal matrix in Section <ref>. To the best of our knowledge, other scheduling approaches do not reach this level of sparsity for these parameters.
§ CONCLUSION
In this paper, we have presented a new method to build MDS erasure codes with low complexity. By using transforms between a finite field and a polynomial ring, sparse generator matrices can be obtained. This significantly reduces the complexity of the matrix-vector multiplication.
It also enables simple schedulers that further reduce the number of operations.
Similar results can be obtained with Equally-Spaced Polynomials (ESP) <cit.>, but they are not presented here due to lack of space.
|
http://arxiv.org/abs/1701.07849v1 | 20170126191651 | An Analytic Criterion for Turbulent Disruption of Planetary Resonances | [
"Konstantin Batygin",
"Fred C. Adams"
] | astro-ph.EP | [
"astro-ph.EP",
"math.DS"
] |
^1Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125
^2Department of Physics, University of Michigan, Ann Arbor, MI 48109
^3Department of Astronomy, University of Michigan, Ann Arbor, MI 48109
[email protected]
Mean motion commensurabilities in multi-planet systems are an expected
outcome of protoplanetary disk-driven migration, and their relative
dearth in the observational data presents an important challenge to
current models of planet formation and dynamical evolution. One natural
mechanism that can lead to the dissolution of commensurabilities is
stochastic orbital forcing, induced by turbulent density fluctuations
within the nebula. While this process is qualitatively promising, the
conditions under which mean motion resonances can be broken are not
well understood. In this work, we derive a simple analytic criterion that
elucidates the relationship among the physical parameters of the
system, and find the conditions necessary to drive planets out of resonance. Subsequently, we confirm our findings with
numerical integrations carried out in the perturbative regime, as well
as direct N-body simulations. Our calculations suggest that turbulent
resonance disruption depends most sensitively on the planet-star mass
ratio. Specifically, for a disk with properties comparable to the
early solar nebula with α=10^-2, only planet pairs with cumulative mass ratios smaller than (m_1+m_2)/M≲10^-5∼3M_⊕/M_⊙ are
susceptible to breaking resonance at semi-major axis of order
a∼0.1AU. Although turbulence can sometimes compromise resonant
pairs, an additional mechanism (such as suppression of resonance
capture probability through disk eccentricity) is required to
adequately explain the largely non-resonant orbital architectures of
extrasolar planetary systems.
An Analytic Criterion for Turbulent Disruption of Planetary Resonances
Konstantin Batygin^1 & Fred C. Adams^2,3
December 30, 2023
======================================================================
§ INTRODUCTION
Despite remarkable advances in the observational characterization of
extrasolar planetary systems that have occurred over the last two
decades, planet formation remains imperfectly understood. With the
advent of data from large-scale radial velocity and photometric
surveys <cit.>, the origins of a newly
identified census of close-in Super-Earths (planets with orbital
periods that span days to months, and masses between those of the
Earth and Neptune) have emerged as an issue of particular
interest. Although analogs of such short-period objects are absent
from our solar system, statistical analyses have demonstrated that
Super-Earth type planets are extremely common within the Galaxy, and
likely represent the dominant outcome of planet formation <cit.>.
An elusive, yet fundamentally important aspect of the Super-Earth
conglomeration narrative is the role played by orbital transport.
A key question is whether these planets experience accretion
in-situ <cit.>,
or if they migrate to their close-in orbits having formed at large
orbital radii, as a consequence of disk-planet interactions
<cit.>.
Although this question remains a subject of active research, a number
of recent studies <cit.> have pointed
to a finite extent of migration as an apparent requirement for
successful formation of Super-Earths. Moreover, structural models
<cit.> show that the majority of Super-Earths have
substantial gaseous envelopes, implying that they formed in gas-rich
environments, where they could have actively exchanged angular
momentum with their surrounding nebulae.
Establishment of mean motion resonances in multi-planet systems has
long been recognized as a signpost of the planetary migration
paradigm. Specifically, the notion that slow, convergent evolution of
orbits towards one another produces planetary pairs with orbital
periods whose ratio can be expressed as a fraction of (typically
consecutive) integers, dates back more than half a century
<cit.>.
While distinct examples of resonant planetary systems exist within the
known aggregate of planets[Archetypical examples of short-period resonant systems include GJ 876 <cit.>, Kepler-36 <cit.>, Kepler-79 <cit.>, and Kepler-227 <cit.>.], the
overall orbital distribution shows little preference for mean motion
commensurabilities (Figure <ref>). Therefore, taken at face
value, the paradigm of orbital migration predicts consequences for the
dynamical architectures of Super-Earths that are in conflict with the
majority of observations <cit.>. Accordingly,
the fact that mean motion commensurabilities are neither
common nor entirely absent in the observational census of extrasolar
planets presents an important challenge to the present understanding
of planet formation theory.
Prior to the detection of thousands of planetary candidates by the
Kepler spacecraft, the expectations of largely resonant
architectures of close-in planets were firmly established by global
hydrodynamic, as well as N-body simulations
<cit.>. An important
distinction was drawn by the work of <cit.>, who pointed out that
resonances can be destabilized by random density fluctuations produced
by turbulence within the protoplanetary disk. Follow-up studies
demonstrated that a rich variety of outcomes can be attained as a
consequence of stochastic forcing within the disk
<cit.>, and that in specific cases, turbulence
can be conducive to the reproduction of dynamical architecture
<cit.>.
While the prediction of the infrequency of resonant systems made by
<cit.> was confirmed by the Kepler dataset, recent work
has shown that turbulent forcing is not the only mechanism through
which resonances can be disrupted. Specifically, the work of
<cit.> proposed that a particular relationship between
the rates of eccentricity damping and semi-major axis decay can render
resonances metastable, while <cit.> showed that
probability of resonance capture can be dramatically reduced in
slightly non-axisymmetric disks. In light of the ambiguity associated
with a multitude of theoretical models that seemingly accomplish the
same thing, it is of great interest to inquire which, if any, of the
proposed mechanisms plays the leading role in sculpting the
predominantly non-resonant architectures of known exoplanetary
systems.
Within the context of the aforementioned models of resonant
metastability and capture suppression, the necessary conditions for
passage through commensurability are relatively clear. Resonant
metastability requires the outer planet to be much more massive than
the inner planet <cit.>, while the capture suppression
mechanism requires disk eccentricities on the order of a few percent
to operate <cit.>. In contrast, the complex interplay
between planet-planet interactions, turbulent forcing, and dissipative
migration remains poorly quantified, making the turbulent disruption
mechanism difficult to decisively confirm or refute (see e.g., ). As a result, a key goal of this work is to
identify the regime of parameter space for which the stochastic
dissolution of mean motion resonances can successfully operate. In
doing so, we aim to gain insight into the evolutionary stages of young
planetary systems during which disk turbulence can prevent the
formation of resonant pairs of planets.
The paper is organized as follows. In Section <ref>, we present
the details of our model. In Section <ref>, we employ methods
from stochastic calculus to derive an analytic criterion for
turbulent disruption of mean motion resonances. In Section
<ref>, we confirm our results with both perturbative numerical
integrations and an ensemble of full N-body simulations. The paper
concludes in Section <ref> with a summary of our results and a
discussion of their implications.
§ ANALYTIC MODEL
The model we aim to construct effectively comprises three ingredients:
(1) first-order (k:k-1) resonant planet-planet interactions, (2) orbital migration and
damping, as well as (3) stochastic turbulent forcing. In this section,
we outline our treatment of each of these processes. A cartoon
depicting the geometric setup of the problem is shown in Figure
<ref>. Throughout much of the manuscript, we make the
so-called “compact" approximation, where we assume that the
semi-major axis ratio ξ≡ a_1/a_2→ 1. While
formally limiting, the agreement between results produced under this
approximation and those obtained within N-body integrations is
well-known to be satisfactory, particularly for k⩾3 (see,
e.g., ), where the integer k
specifies the resonance <cit.>.
Being made up of analytic components, the model constructed here
cannot possibly capture all of the intricate details of the dynamical
evolution that planets are subjected to, within protoplanetary disks.
By sacrificing precision on a detailed level, however, we hope to
construct an approximate description of the relevant physical
processes that will illuminate underlying relationships. These
findings can then be used to constrain the overall regime over which
turbulent fluctuations can effect the dynamical evolution of nascent
planetary systems.
§.§ Planet-Planet Interactions
In the late twentieth century, it was recognized that a perturbative
Hamiltonian that represents the motion of a massive pair of planets
residing on eccentric orbits, in the vicinity of a mean-motion
commensurability, can be cast into integrable form
<cit.>. More recently,
this formalism has been used to provide a geometric representation of
resonant dynamics <cit.>, study the onset of chaos
<cit.>, generalize the theory of resonant capture
<cit.>, as well as to elucidate overstable librations
<cit.>. A key advantage of this treatment is that it translates the full, unrestricted three-body problem into the same mathematical form as that employed for the well-studied circular restricted problem <cit.>. Here, we make use of this framework once again. Because detailed derivations of the aforementioned resonant normal
form are spelled out in the papers quoted above, we will not reproduce
it here, and instead restrict ourselves to employing the results.
The Hamiltonian that describes planet-planet interactions in the
vicinity of a k:k-1 mean motion resonance can be written as follows:
ℋ = 3(ε+1) ((x^2+y^2)/2) - ((x^2+y^2)/2)^2 - 2x,
where the variables (ε,x,y) are defined below.
A Hamiltonian of this form is typically referred to as the second
fundamental model for resonance
<cit.>, and behaves as a
forced harmonic oscillator at negative values of the proximity
parameter, ε, while possessing a pendulum-like phase-space
structure at large positive values of ε. This integrable
model approximates the real N-body dynamics at low eccentricities and
inclinations, and formally assumes that the orbits do not cross,
although this latter assumption is routinely violated without much
practical consequence (see, e.g.,
). In the well-studied case
of the restricted circular three-body problem, the canonical variables
(x,y) are connected to the test particle's eccentricity and the
resonant angle, while ε is a measure of how close the
orbits are to exact resonance. Within the framework of the full
planetary resonance problem (where neither mass nor eccentricity of
either secondary body is assumed to be null), the variables take on
slightly more complex physical meanings.
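For reference, applying Hamilton's equations to equation (<ref>), treating (x,y) as a canonically conjugate pair, gives
ẋ = ∂ℋ/∂y = [3(ε+1) - (x^2+y^2)] y
ẏ = -∂ℋ/∂x = -[3(ε+1) - (x^2+y^2)] x + 2,
so that the fixed points lie at y=0, with x a root of the cubic x^3 - 3(ε+1)x + 2 = 0. This cubic has a single real root for ε<0 and three real roots for ε>0, recovering the transition from the forced-oscillator regime to the pendulum-like regime described above.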
In order to convert between Keplerian orbital elements and the
dimensionless canonical variables used here, we first define a
generalized composite eccentricity
σ= √(e_1^2 + e_2^2- 2 e_1 e_2 cos(Δϖ)),
where subscripts 1 and 2 refer to the inner and outer planets
respectively, e is eccentricity, and ϖ is the longitude of
periastron. Additionally, we define units of action and time
according to
[A] = (1/2) ((15/(4k)) (M/(m_1+m_2)))^(2/3)
[T] = (1/n) ((5/(√6 k^2)) (M/(m_1+m_2)))^(2/3),
where m is planetary mass, M is stellar mass, and n=√( M/a^3)
is the mean motion. Then, in the compact limit, the variables in the
Hamiltonian become <cit.>:
x = σ ((15/(4k)) (M/(m_1+m_2)))^(1/3) cos(kλ_2-(k-1)λ_1-ω̃)
y = σ ((15/(4k)) (M/(m_1+m_2)))^(1/3) sin(kλ_2-(k-1)λ_1-ω̃)
ε = (1/3) ((15/(4k)) (M/(m_1+m_2)))^(2/3) (σ^2 - Δξ/k),
where ξ=a_1/a_2, and the quantity
ω̃ ≡ arctan[(e_2 sin ϖ_2 - e_1 sin ϖ_1)/(e_1 cos ϖ_1 - e_2 cos ϖ_2)]
represents a generalized longitude of perihelion.
The specification of resonant dynamics is now complete. While
application of Hamilton's equations to equation (<ref>) only
yields the evolution of σ and the corresponding resonant angle,
the behavior of the individual eccentricities and apsidal lines can be
obtained from the conserved[When the system is subjected to
slow evolution of the proximity parameter ε, ρ is
no longer a strictly conserved quantity. Instead, ρ becomes an
adiabatic invariant that is nearly constant, except when the system
encounters a homoclinic curve <cit.>.]
quantity ρ=m_1 e_1^2+m_2 e_2^2+m_1 m_2 e_1 e_2 cos(Δϖ). In addition, we note that the definitions of the variables
(<ref>) are independent of the individual planetary masses
m_1, m_2, and depend only on the cumulative planet-star mass ratio
(m_1+m_2)/M. This apparent simplification is a consequence of
taking the limit ξ≡ a_1/a_2→ 1, and is qualitatively
equivalent to the Öpik approximation <cit.>.
§.§ Planet-Disk Interactions
Dating back to early results on ring-satellite interactions
<cit.>, it has been evident that planets can
exchange orbital energy and angular momentum with their natal
disks. For planets that are not sufficiently massive to open gaps
within their nebulae, this exchange occurs through local excitation
of spiral density waves (i.e., the so-called “type-I" regime), and
proceeds on the characteristic timescale:
τ_wave = (1/n) (M/m) (M/(Σ a^2)) (h/r)^4,
where Σ is the local surface density, and h/r is the aspect
ratio of the disk. For an isothermal equation of state and a surface
density profile that scales inversely with the orbital radius
<cit.>, the corresponding rates of eccentricity and semi-major
axis decay are given by <cit.>:
(1/a) da/dt ≡ -1/τ_mig ≃ -(4 f/τ_wave) (h/r)^2
(1/e) de/dt ≡ -1/τ_dmp ≃ -(3/4) (1/τ_wave).
A different, routinely employed approach to modeling disk-driven
semi-major axis evolution is to assume that it occurs on a timescale
that exceeds the eccentricity decay time by a numerical factor
𝒦. To this end, we note that the value of 𝒦∼10^2
adopted by many previous authors <cit.> is in rough
agreement with equation (<ref>) which yields
𝒦∼(h/r)^-2.
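The following sketch (Python, cgs units) evaluates these timescales for the nominal parameters adopted later in the text; the prefactors follow the scalings quoted above, while the exact coefficients depend on the disk model.

```python
import numpy as np

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13    # cgs

def disk_timescales(m, M, a, Sigma, h_r, f=1.0):
    """Type-I timescales from the scalings above (cgs units): returns
    (tau_wave, tau_mig, tau_dmp) with tau_mig = tau_wave/(4 f (h/r)^2)
    and tau_dmp = (4/3) tau_wave. A sketch; exact prefactors depend on
    the disk model."""
    n = np.sqrt(G * M / a**3)                  # mean motion
    tau_wave = (M / m) * (M / (Sigma * a**2)) * h_r**4 / n
    return tau_wave, tau_wave / (4.0 * f * h_r**2), (4.0 / 3.0) * tau_wave

# nominal parameters used below: (m1+m2)/M = 1e-5, a = 0.1 AU, Sigma = 17,000
tw, tmig, tdmp = disk_timescales(1e-5 * M_sun, M_sun, 0.1 * AU, 17000.0, 0.05)
print(tmig / tdmp)    # K = tau_mig/tau_dmp = 3/(16 f (h/r)^2) ~ 10^2
```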
While eccentricity damping observed in numerical simulations
(e.g., <cit.>) is well matched by equation
(<ref>), state-of-the-art disk models show that both the rate
and direction of semi-major axis evolution can be significantly
affected by entropy gradients within the nebula <cit.>.
Although such corrections alter the migration histories on a detailed
level, convergent migration followed by resonant locking remains an
expected result in laminar disks <cit.>. For
simplicity, in this work, we account for this complication by
introducing an adjustable parameter f into equation (<ref>).
In addition to acting as sources of dissipation, protoplanetary disks
can also drive stochastic evolution. In particular, density
fluctuations within a turbulent disk generate a random gravitational
field, which in turn perturbs the embedded planets <cit.>. Such
perturbations translate to effectively diffusive evolution of the
eccentricity and semi-major axis <cit.>. In the ideal
limit of MRI-driven turbulence, the corresponding eccentricity and
semi-major axis diffusion coefficients can be constructed from
analytic arguments (e.g., see <cit.>)
to obtain the expressions
𝒟_ξ = 𝒟_a/a^2 ∼ 2 𝒟_e ∼ (α/2) (Σ a^2/M)^2 n,
where α is the Shakura-Sunayev viscosity parameter
<cit.>. Although non-ideal effects can modify the
above expressions on the quantitative level <cit.>, for the
purposes of our simple model we neglect these explicit corrections. We
note, however, that such details can be trivially incorporated into
the final answer by adjusting the value of α accordingly.
§ CRITERION FOR RESONANCE DISRUPTION
With all components of the model specified, we now evaluate the
stability of mean motion resonances against stochastic
perturbations. In order to obtain a rough estimate of the interplay
between turbulent forcing, orbital damping, and resonant coupling, we
can evaluate the diffusive progress in semi-major axis and
eccentricity against the width of the resonance. Specifically, the
quantities, whose properties we wish to examine are χ≡
n_2/n_1 - k/(k-1) and x. Keep in mind that this latter quantity
is directly proportional to the generalized eccentricity σ
(see equation [<ref>]).
§.§ Diffusion of Semi-major Axes
In the compact limit a_1≈ a_2, the time evolution of the
parameter χ can be written in the approximate form
dχ/dt ≃ (3/(2⟨a⟩)) (da_1/dt - da_2/dt),
where ⟨a⟩ is a representative average semi-major axis. For the
purposes of our simple model, we treat a_1 and a_2 as uncorrelated
Gaussian random variables with diffusion coefficients 𝒟_a; we note, however, that in reality significant correlations may exist
between these quantities and such correlations could potentially alter the nature of the random walk <cit.>. Additionally, for comparable-mass planets, we
may adopt 1/τ_mig as a characteristic drift rate, replacing m with
m_1+m_2 in equation (<ref>). Note that this assumption leads
to the maximum possible rate of orbital convergence.
With these constituents, we obtain a stochastic differential equation of the form
dχ = (3/2) √(2 𝒟_ξ) dw - (3/2) (χ/τ_mig) dt,
where w represents a Wiener process (i.e., a continuous-time random
walk; <cit.>). The variable χ will thus take on a distribution of values
as its evolution proceeds. Adopting the t→∞
standard deviation of the resulting distribution function as a
characteristic measure of progress in χ, we have:
δχ = √(3 𝒟_ξ τ_mig /2) =
(1/4) (h/r) √( 3 α Σ ⟨a⟩^2 / (f (m_1+m_2)) ).
The approximate extent of stochastic evolution that the system can
experience and still remain in resonance is given by the resonant
bandwidth, Δχ. At its inception[A resonance can
only be formally defined when a homoclinic curve (i.e., a separatrix)
exists in phase-space. For a Hamiltonian of the form (<ref>),
a separatrix appears at ε=0, along with an unstable
(hyperbolic) fixed point, that bifurcates into two fixed points (one
stable and one unstable) at ε>0.], the width of the
resonance <cit.> is given by
Δχ≃ 5 [ √(k) (m_1+m_2)/M]^2/3.
Accordingly, a rough criterion for turbulent disruption of the resonance is
δχ/Δχ ∼ (1/20) (h/r) (M/(m_1+m_2)) √(3 α/f)
× [ (Σ ⟨a⟩^2/(k M)) √( Σ ⟨a⟩^2/(m_1+m_2) ) ]^{1/3} ≳ 1.
Keep in mind that δχ is a measure of the width of the
distribution in the variable χ due to stochastic evolution,
whereas Δχ is the change in χ necessary to
compromise the resonance.
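A direct numerical transcription of this criterion reads as follows (Python, cgs units); for the nominal disk parameters it returns a value near unity at (m_1+m_2)/M = 10^{-5}, consistent with the transitional behavior discussed below.

```python
import numpy as np

M_sun, AU = 1.989e33, 1.496e13    # cgs

def disruption_ratio(mu, Sigma, a, M=M_sun, h_r=0.05, alpha=1e-2, f=1.0, k=3):
    """Left-hand side of the semi-major-axis disruption criterion,
    delta_chi / Delta_chi; mu = (m1+m2)/M, cgs units. Values of order
    unity or larger indicate that turbulence disrupts the resonance."""
    S = Sigma * a**2 / M                      # dimensionless disk-mass factor
    return (h_r / (20.0 * mu)) * np.sqrt(3.0 * alpha / f) \
        * ((S / k) * np.sqrt(S / mu)) ** (1.0 / 3.0)

# the transitional case discussed below, (m1+m2)/M = 1e-5 at 0.1 AU:
print(disruption_ratio(1e-5, 17000.0, 0.1 * AU))    # ~ 1
```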
The above expression for resonance disruption depends sensitively on
the planet-star mass ratio. This relationship is illustrated in Figure
<ref>, where the expression (<ref>) is
shown as a function of the quantity (m_1+m_2)/M, assuming system
properties α=10^-2, h/r=0.05, ⟨ a⟩ = 0.1AU,
f=1, and k=3. The disk profile is taken to have the form
Σ=Σ_0 (r_0/r), with Σ_0=1700 g/cm^2 and
r_0=1AU, such that the local surface density at ⟨ a⟩
is ⟨Σ⟩ = 17,000 g/cm^2. Notice that the
disruption criterion (<ref>) also depends on (the square
root of) the surface density of the disk. A family of curves
corresponding to lower values of the surface density (i.e.,
0.1,0.2,…0.9,1 × ⟨Σ⟩) are also shown,
and color-coded accordingly.
While Figure <ref> effectively assumes a maximal rate of orbital convergence, we reiterate that hydrodynamical simulations suggest that both the speed and sense of type-I migration can have a wide range of possible values <cit.>. To this end, we note that setting f=0 in equation (<ref>) yields δχ/Δχ → ∞ > 1, meaning that in the case of no net migration, an arbitrarily small turbulent viscosity is sufficient to eventually bring the resonant angles into circulation. Furthermore, a negative value of f, which corresponds to divergent migration, renders our criterion meaningless, since resonance capture cannot occur in this instance <cit.>.
§.§ Diffusion of Eccentricities
An essentially identical calculation can carried out for stochastic
evolution of x (or y). To accomplish this, we assume that the
generalized eccentricity σ diffuses with the coefficient
√2 𝒟_e. Accounting for conversion factors between
conventional quantities and the dimensionless coordinates (given by
equation [<ref>]), we obtain
𝒟_x ≃ α (Σ ⟨a⟩^2/M)^2
(M/(√k (m_1+m_2)))^{4/3}.
Similarly, the damping timescale takes the form
τ_x ≃ ( (128/(225 k)) M/(m_1+m_2) )^{1/3} k M/(Σ ⟨a⟩^2) (h/r)^4,
where, as before, we adopted the total planetary mass as an
approximation for m in the expression (<ref>).
In direct analogy with equation (<ref>), we obtain
the stochastic equation for the time evolution of x,
dx = √(2 _x) dw - x/τ_x dt,
so that the distribution of x is characterized by the standard
deviation δx = √(𝒟_x τ_x). At the same time, we
take the half-width of the resonant separatrix to be given by
Δx = 2 (e.g., <cit.>). Combining
these two results, we obtain a second criterion for resonance
disruption, i.e.,
δx/Δx ∼ (h/r)^2 (√2 k/15)^{1/3} √( α Σ ⟨a⟩^2/M )
× (M/(m_1+m_2))^{5/6} ≳ 1 .
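As before, this criterion is straightforward to evaluate numerically; a sketch (same conventions as above):

```python
import numpy as np

M_sun, AU = 1.989e33, 1.496e13    # cgs

def ecc_disruption_ratio(mu, Sigma, a, M=M_sun, h_r=0.05, alpha=1e-2, k=3):
    """Left-hand side of the eccentricity criterion, delta_x / Delta_x,
    with delta_x = sqrt(D_x tau_x) and Delta_x = 2; mu = (m1+m2)/M."""
    S = Sigma * a**2 / M
    return h_r**2 * (np.sqrt(2.0) * k / 15.0) ** (1.0 / 3.0) \
        * np.sqrt(alpha * S) * mu ** (-5.0 / 6.0)

print(ecc_disruption_ratio(1e-5, 17000.0, 0.1 * AU))    # ~ 0.01, well below
                                                        # delta_chi/Delta_chi
```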
§.§ Semi-major Axis vs Eccentricity
In order to construct the simplest possible model that still captures
the dynamical evolution adequately, it is of interest to evaluate the
relative importance of stochastic evolution in the degrees of freedom
related to the semi-major axis and eccentricity. Expressions
(<ref>) and (<ref>) both represent conditions
under which resonant dynamics of a planetary pair will be short-lived,
even if capture occurs. To gauge which of the two criteria is more
stringent, we can examine the ratio
(δx/Δx) / (δχ/Δχ) ∼ 5 √f (h/r) [ k^2 (m_1+m_2)/M ]^{1/3} ≪ 1.
The fact that this expression evaluates to a number substantially
smaller than unity means that diffusion in semi-major axes (equation
[<ref>]) dominates over diffusion in eccentricities
(equation [<ref>]) as a mechanism for disruption of
mean-motion commensurabilities. Although the relative importance of
_a compared to _e is not obvious a priori, it likely
stems in large part from the fact that the orbital convergence
timescale generally exceeds the eccentricity damping timescales by a
large margin.
§ NUMERICAL INTEGRATIONS
In order to derive a purely analytic criterion for turbulent
disruption of mean motion resonances, we were forced to make a series
of crude approximations in the previous section. To assess the
validity of these approximations, in this section we test the
criterion (<ref>) through numerical integrations. We
first present a perturbative approach (Section <ref>) and then
carry out a series of full N-body simulations (Section <ref>).
§.§ Perturbation Theory
The dynamical system considered here is described by three equations
of motion, corresponding to the variations in x, and y, and
ε. Although the resonant dynamics itself is governed by
Hamiltonian (<ref>), to account for the stochastic and
dissipative evolution, we must augment Hamilton's equations with terms
that describe disk-driven evolution. As before, we adopt
τ_dmp as the decay timescale for the generalized
eccentricity, σ, and take τ_mig as the
characteristic orbital convergence time. The full equations of motion
are then given by:
dx/dt = -3 y (1+ε) + y (x^2+y^2) - x/(τ_dmp/[T])
dy/dt = -2 + 3 x (1+ε) - x (x^2+y^2) - y/(τ_dmp/[T])
dε/dt = (2/3) ( [A]/(k τ_mig/[T]) - (x^2+y^2)/(τ_dmp/[T]) ) + ℱ.
In the above expression, ℱ represents a source of
stochastic perturbations. For computational convenience, we
implemented this noise term as a continuous sequence of analytic
pulses, which had the form 2 ζsin(π t/Δ t )/Δ t,
where ζ is a Gaussian random variable. The pulse time interval
was taken to be Δ t=0.1, and the standard deviation of ζ
was chosen such that the resulting diffusion coefficient
𝒟_ζ = σ_ζ^2/Δ t matched that given
by equation (<ref>).
Note that here, we have opted to only implement stochastic
perturbations into the equation that governs the variation of
ε. Qualitatively, this is equivalent to only retaining
semi-major axis diffusion and neglecting eccentricity diffusion. To
this end, we have confirmed that including (appropriately scaled)
turbulent diffusion into equations of motion for x and y does not
alter the dynamical evolution in a meaningful way, in agreement with
the discussion surrounding equation (<ref>).
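As an illustration, the system above can be integrated with a simple Euler-Maruyama scheme; in the sketch below (Python) the analytic pulse train is replaced by Gaussian increments with the same diffusion coefficient, and the parameter values are placeholders rather than the calibrated values described above.

```python
import numpy as np

def integrate_resonance(eps0=-1.0, A=1.0, k=3, tau_dmp=50.0, tau_mig=5000.0,
                        D=1e-6, dt=1e-2, t_max=1e4, seed=0):
    """Euler-Maruyama sketch of the perturbed equations of motion above,
    with time and the timescales measured in units of [T]. The analytic
    pulse train is replaced by Gaussian increments with the same diffusion
    coefficient D; all parameter values here are illustrative only."""
    rng = np.random.default_rng(seed)
    x, y, eps = 0.0, 0.0, eps0
    traj = []
    for _ in range(int(t_max / dt)):
        r2 = x * x + y * y
        dx = -3.0 * y * (1.0 + eps) + y * r2 - x / tau_dmp
        dy = -2.0 + 3.0 * x * (1.0 + eps) - x * r2 - y / tau_dmp
        deps = (2.0 / 3.0) * (A / (k * tau_mig) - r2 / tau_dmp)
        x, y = x + dx * dt, y + dy * dt
        # stochastic forcing acts on the proximity parameter only
        eps += deps * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        traj.append((x, y, eps))
    return np.asarray(traj)    # columns: x, y, eps
```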
Turbulent fluctuations aside, the equation of motion for the parameter
ε indicates that there exists an equilibrium value of the
generalized eccentricity
σ_eq=√(τ_dmp/(2 k τ_mig)) that
corresponds to stable capture into resonance. Analogously, the
equilibrium value of x_eq=σ_eq√(2[A])
parallels the strictly real fixed point of Hamiltonian (<ref>).
As a result, if we neglect the small dissipative contributions and set
dx/dt=0,dy/dt=0,x=x_eq,y=0 in the first and second equations
in expression (<ref>), we find an equilibrium value of the
proximity parameter, ε_eq, that coincides with
resonant locking. An ensuing crucial point is that if resonance is
broken, the system will attain values of ε substantially
above the equilibrium value ε_eq.
In order to maintain a close relationship with the results presented
in the preceding section, we retained the same physical parameters for
the simulations as those depicted in Figure <ref>. In
particular, we adopted α=10^-2, h/r=0.05, ⟨ a
⟩=0.1AU, f=1, and k=3. Additionally, we again chose a
surface density profile with Σ_0=1700g/cm^2 at
r_0=1AU, that scales inversely with the orbital radius, such that
the nominal surface density at r=⟨ a ⟩ is ⟨Σ⟩=17,000g/cm^2. We also performed a series of
simulations that span a lower range of surface densities
(0.1,0.2,…,0.9,1 × ⟨Σ⟩).
All of the integrations were carried out over a time span of
τ_mig=100 τ_wave, with the system initialized
at zero eccentricity (x_0=y_0=0), on orbits exterior to exact
commensurability (ε_0=-1).
We computed three sets of evolutionary sequences, corresponding to
planet-star mass ratios (m_1+m_2)/M=10^-6,10^-5, and 10^-4.
As can be deduced from Figure <ref>, the qualitative
expectations for the outcomes of these simulations (as dictated by
equation [<ref>]) are unequivocally clear. Resonances
should be long-term stable for (m_1+m_2)/M=10^-4 and long-term
unstable for (m_1+m_2)/M=10^-6. Meanwhile, temporary resonance
locking, followed by turbulent disruption of the commensurability
should occur for (m_1+m_2)/M=10^-5.
Figure <ref> depicts numerically computed evolution of
ε for the full range of local surface densities under
consideration (color-coded in the same way as in Figure <ref>)
as a function of time. These numerical results are in excellent
agreement with our theoretical expectations from Section <ref>.
The proximity parameter always approaches its expected equilibrium
value ε_eq for large mass ratios
(m_1+m_2)/M=10^-4 (top panel), but never experiences long-term
capture for small mass ratios (m_1+m_2)/M=10^-6 (bottom
panel). Resonance locking does occur for the intermediate case
(m_1+m_2)/M=10^-5 (middle panel). However, two evolutionary
sequences corresponding to Σ=0.7 ⟨Σ⟩ and
Σ=0.9 ⟨Σ⟩ show the system breaking out of
resonance within a single orbital convergence time, τ_mig.
It is sensible to assume that other evolutionary sequences within this
set would also break away from resonance if integrations were
extended over a longer time period.
Figure <ref> shows the phase-space counterpart of the
evolution depicted in Figure <ref>. Specifically, the x-y
projections of the system dynamics are shown for cases with surface
densities Σ=0.3 ⟨Σ⟩ (blue) and
Σ=⟨Σ⟩ (black), where the background depicts
the topology of the Hamiltonian (<ref>). In each panel, the
black curve designates the separatrix of ℋ, given the
equilibrium value of the proximity parameter
ε=ε_eq. The background color scale is a
measure of the value of ℋ. The three equilibrium points of
the Hamiltonian are also shown, as transparent green dots.
As in Figure <ref>, three representative ratios of
planet mass to stellar mass are shown. In the right panel (for mass ratio
(m_1+m_2)/M=10^-4), turbulent diffusion plays an essentially
negligible role and the system approaches a null libration amplitude
under the effect of dissipation. In the middle panel (for mass ratio
(m_1+m_2)/M=10^-5), resonant capture is shown, but the libration
amplitude attained by the orbit is large, particularly in the case of
Σ=⟨Σ⟩. In the left panel (for mass ratio
(m_1+m_2)/M=10^-6), the trajectory is initially advected to high
values of the action, but inevitably breaks out of resonance and
decays towards the fixed point at the center of the internal
circulation region of the portrait.
§.§ N-body Simulations
In order to fully evaluate the approximations inherent to the
perturbative treatment of the dynamics employed thus far, and to
provide a conclusive test of the analytic criterion
(<ref>), we have carried out a series of direct N-body
simulations. The integrations utilized a Bulirsch–Stoer
integration scheme (e.g., <cit.>) that included the full
set of 18 phase space variables for the 3-body problem consisting of
two migrating planets orbiting a central star. For the sake of
definiteness, the physical setup of the numerical experiments was
chosen to closely mirror the systems used in the above discussion.
Specifically, two equal-mass planets were placed on initially circular
orbits slightly outside of the 2:1 mean motion resonance, so that the
initial period ratio was 0.45. The planets were then allowed to
evolve under the influence of mutual gravity, as well as disk-driven
convergent migration, orbital damping, and turbulent perturbations.
Following <cit.>, we incorporated the
orbital decay and eccentricity damping using accelerations
of the form:
dv⃗/dt = -v⃗/τ_mig - 2 [ (v⃗·r⃗)/(r⃗·r⃗) ] r⃗/τ_dmp,
where v⃗ and r⃗ denote the orbital velocity and radius
respectively.[Note that we have neglected disk-induced damping
of the orbital inclination, because of the planar setup of the problem.]
While both planets were subjected to eccentricity
damping, inward (convergent) migration was only experienced by the
outer planet. Simultaneously, for computational convenience, the
semi-major axis of the inner planet was re-normalized to a_1=0.1AU
at every time step[Qualitatively, this procedure is equivalent
to changing the unit of time at every time step <cit.>.].
The characteristic timescales τ_mig and τ_dmp
were kept constant, given by equation (<ref>), adopting
identical physical parameters of the disk to those employed above.
Finally, following previous treatments <cit.>,
turbulent fluctuations were introduced into the equations of motion
through random velocity kicks, whose amplitude was tuned such that the
properties of the diffusive evolution of an undamped isolated orbit
matched the coefficients from equation (<ref>). For completeness, we have also included the leading order corrections due to general relativity <cit.>.
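Schematically, the disk-driven terms enter the N-body force loop as follows (Python); the kick amplitude sigma_v is a stand-in whose calibration against the diffusion coefficients of equation (<ref>) is assumed rather than derived here.

```python
import numpy as np

def disk_acceleration(v, r, tau_mig, tau_dmp, outer=True):
    """Dissipative acceleration of the form above: eccentricity damping
    for both planets, inward migration for the outer planet only.
    v, r are 3-vectors; a sketch to be added to the gravitational force."""
    a = -2.0 * r * np.dot(v, r) / (np.dot(r, r) * tau_dmp)
    if outer:
        a = a - v / tau_mig
    return a

def turbulent_kick(sigma_v, dt, rng):
    """Random velocity kick standing in for turbulent density fluctuations.
    sigma_v must be tuned so that an isolated, undamped orbit diffuses with
    the coefficients of the diffusion equation above (assumed calibration)."""
    return sigma_v * np.sqrt(dt) * rng.standard_normal(3)
```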
As in the previous sub-section, we computed the orbital evolution of
three representative cases with mass ratios
(m_1+m_2)/M=10^-4,10^-5, and 10^-6 (corresponding to migration timescales of τ_mig≃1.5×10^3,10^4, and 10^5 years respectively) over a time span of
0.1Myr. The numerical results are shown in Figure <ref>,
and show excellent agreement with the analytic criterion from equation
(<ref>). In particular, the system with mass ratio
(m_1+m_2)/M=10^-4 exhibits long-term stable capture into a 3:2
mean motion resonance, as exemplified by the ensuing low-amplitude
libration of the resonant angles ϕ[3:2] = 3λ_2 - 2
λ_1-ϖ_1 and ψ[3:2] = 3λ_2 - 2 λ_1-ϖ_2,
shown in red and blue in the bottom panel of Figure <ref>.
Correspondingly, both the period ratio (top panel) and the
eccentricities (middle panel) rapidly attain their resonant
equilibrium values, and remain essentially constant throughout the
simulation.
The case with mass ratio (m_1+m_2)/M=10^-5, for which equation
(<ref>) yields δχ / Δχ∼1,
perfectly exemplifies the transitory regime. As shown in the top
panel of Figure <ref>, where this experiment is represented
in gray, the system exhibits temporary capture into the 3:2 as well
as the 4:3 commensurabilities, and subsequently locks into a
meta-stable 7:6 resonance at time ∼15,000 years. Although evolution
within this resonance is relatively long-lived, the bottom panel of
Figure <ref> shows that the corresponding resonant angles
ϕ[7:6] = 7λ_2 - 6λ_1-ϖ_1 (green) and
ψ[7:6] = 7λ_2 - 6λ_1 - ϖ_2 (gray) maintain
large amplitudes of libration, due to the nearly perfect balance
between orbital damping and turbulent excitation. As a result, the
system eventually breaks out of its resonant state. After a period of
chaotic scattering, the orbits switch their order, and the period
ratio increases.
Finally, the case with mass ratio (m_1+m_2)/M=10^-6 represents a
system that never experiences resonant locking. As the period ratio
evolves towards unity (purple curve in the top panel), encounters with
mean motion commensurabilities only manifest themselves as impulsive
excitations of the orbital eccentricities (purple/orange curves in the
middle panel) of the planets. As such, the planets eventually
experience a brief phase of close encounters, and subsequently
re-enter an essentially decoupled regime, after the orbits reverse.
We note that because turbulence introduces a fundamentally stochastic
component into the equations of motion, each realization of the N-body
simulations is quantitatively unique. However, having carried out tens
of integrations for each set of parameters considered in Figure
<ref>, we have confirmed that the presented solutions are
indeed representative of the evolutionary outcomes. As a result, we
conclude that the analytic expression (<ref>) represents
an adequate description of the requirement for resonance disruption,
consistent with the numerical experiments.
§ CONCLUSION
While resonant locking is an expected outcome of migration theory <cit.>, the current sample of exoplanets shows only a mild tendency for systems to be
near mean motion commensurabilities <cit.>. Motivated by
this observational finding, this paper derives an analytic criterion
for turbulent disruption of planetary resonances and demonstrates its
viability through numerical integrations. Our specific results are
outlined below (Section <ref>), followed by a conceptual
interpretation of the calculations (Section <ref>), and finally a
discussion of the implications (Section <ref>).
§.§ Summary of Results
The main result of this paper is the derivation of the constraint
necessary for turbulent fluctuations to compromise mean motion
resonance (given by equation [<ref>]). This criterion
exhibits a strong dependence on the ratio of planetary mass to stellar
mass, but also has significant dependence on the local surface
density. That is, turbulence can successfully disrupt mean motion
resonances only for systems with sufficiently small mass ratios and/or large
surface densities (see Figure <ref>).
The analytic estimate (<ref>) for the conditions required
for turbulence to remove planet pairs from resonance has been verified by
numerical integrations. To this end, we have constructed a model of disk-driven resonant dynamics
in the perturbative regime, and have calculated the time evolution of
the resonance proximity parameter ε (Section
<ref>). The results confirm the analytical prediction that given nominal disk parameters, systems with mass ratios
smaller than (m_1+m_2)/M∼10^-5∼ 3M_⊕/M_⊙ are forced out of resonance by turbulence, whereas
systems with larger mass ratios survive (Figure <ref>). We have
also performed full N-body simulations of the problem (Section
<ref>). These calculations further indicate that planetary systems with
small mass ratios are readily moved out of resonance by turbulent
fluctuations, whereas systems with larger mass ratios are not (Figure
<ref>). Accordingly, the purely analytic treatment, simulations performed within the framework of perturbation
theory, and the full N-body experiments all yield consistent results.
For circumstellar disks with properties comparable to the minimum mass
solar nebula <cit.>, the results of this paper suggest that compact Kepler-type planetary systems are relatively close to the borderline for stochastic disruption
of primordial mean motion commensurabilities. Nonetheless, with a cumulative mass ratio that typically lies in the range of (m_1+m_2)/M∼10^-5 - 10^-4 (Figure <ref>), the majority of these planets are
sufficiently massive that their resonances can survive in the face of
turbulent disruption, provided that the perturbations operate at the expected amplitudes
(this result also assumes that the stochastic fluctuations act over a
time scale that is comparable to the migration time).
Given critical combinations of parameters (for which equation [<ref>] evaluates to a value of order unity), resonant systems can ensue, but
they routinely come out of the disk evolution phase with large libration
amplitudes. This effect has already been pointed out in previous work
<cit.>, which focused primarily on numerical
simulations with limited analytical characterization. Importantly, this notion suggests that the stochastic forcing mechanism may be critical to setting up
extrasolar planetary systems like GJ 876 and Kepler-36 that exhibit
rapid dynamical chaos <cit.>.
Although this work has mainly focused on the evolution of sub-Jovian planets, we can reasonably speculate that turbulent fluctuations are unlikely to strongly affect mean motion
resonances among giant planets. In addition to having mass ratios well
above the critical limit, the influence that the disk exerts on large
planets is further diminished because of gap-opening <cit.>. However, one
complication regarding this issue is that the damping rate of
eccentricity is also reduced due to the gap (e.g., <cit.>). Since both the excitation
and damping mechanisms are less effective in the gap-opening regime,
a minority of systems could in principle allow for excitation to dominate.
§.§ Conceptual Considerations
The analysis presented herein yields a practical measure that informs the outcome of dynamical evolution of multi-planetary systems embedded in turbulent protoplanetary disks. While numerical experiments confirm that the analytic theory indeed provides an acceptable representation of perturbed N-body dynamics, the phenomenological richness inherent to the problem calls for an additional, essentially qualitative account of the results. This is the purpose of the following discussion.
Within the framework of our most realistic description of the relevant physics (i.e., the N-body treatment), the effect of turbulent fluctuations is
to provide impulsive changes to the planet velocities. The turbulence
has a coherence time of order one orbital period, so that the
fluctuations provide a new realization of the random gravitational
field on this time scale <cit.>. With these impulses, the orbital elements of
the planets, specifically the semi-major axis a and eccentricity
e, execute a random walk. In other words, as the elements vary, the changes in a and e accumulate
in a diffusive manner <cit.>. Simultaneously, the interactions between planets and the spiral density waves they induce in the nebula lead to smooth changes in the orbital periods, as well as damping of the planetary eccentricities <cit.>.
In contrast with the aforementioned disk-driven effects, the bandwidth of a planetary resonance is typically described in terms of the maximal libration amplitude of a critical angle ϕ that obeys the d'Alembert rules (e.g., see Chapter 8 of <cit.>). Thus, the conceptual difficulty lies in connecting how the extrinsic forcing of orbital elements translates to the evolution of this angle. Within the framework of our theoretical model, this link is enabled by the Hamiltonian model of mean motion resonance (equation [<ref>]; <cit.>).
In the parameter range relevant to the problem at hand, the behavior of Hamiltonian (<ref>) is well-approximated by that of a simple pendulum <cit.>. Specifically, the equilibrium value of ε dictates the value of the pendulum's action, Φ, at which zero-amplitude libration of the angle ϕ can occur, as well as the location of the separatrix. Correspondingly, oscillation of the angle ϕ translates to variations of the action Φ, which is in turn connected to the eccentricities (equations [<ref>]) as well as the semi-major axes, through conservation of the generalized Tisserand parameter <cit.>.
In this picture, there are two ways to drive an initially stationary pendulum to circulation: one is to perturb the ball of the pendulum directly (thereby changing the energy-level of the trajectory), and the other is to laterally rock the base (thus modulating the separatrix along the Φ-axis). These processes are directly equivalent to the two types of diffusion considered in our calculations. That is, [1] diffusion in the dynamic variables x and y themselves (explicitly connected to eccentricities and resonant angle) is analogous to direct perturbations to the ball of the pendulum, while [2] diffusion in the proximity parameter ε (explicitly connected to the semi-major axes) corresponds to shaking the base of the pendulum back and forth.
Meanwhile, consequences of eccentricity damping and convergent migration are equivalent to friction that acts to return the ball of the pendulum back to its undisturbed state, and restore the separatrix to its equilibrium position, respectively. In the type-I migration regime however, eccentricity damping by the disk is far more efficient than orbital decay <cit.>, meaning that the ball of the pendulum is effectively submerged in water, while the base of the pendulum is only subject to air-resistance (in this analogy). As a result, the latter process — diffusion in proximity parameter ε — ends up being more important for purposes of moving planets out of mean motion resonance (see equation [<ref>]).
§.§ Discussion
The work presented herein suggests that turbulent forcing is unlikely to be the single dominant effect that sculpts the final orbital distribution of exoplanets. At the same time, the functional form of expression (<ref>) yields important insight into the evolutionary aspects of the planet formation process. Particularly, because the resonance disruption criterion
depends on the disk mass, it implies a certain time-dependence of the
mechanism itself (as the nebula dissipates, the critical mass ratio below which the mechanism operates
decreases from a value substantially above the Earth-Sun mass
ratio, to one below). This means that even though the turbulent disruption mechanism becomes ineffective
in a weaning nebula, it may be key to facilitating growth in the early stages of evolution of
planetary systems, by allowing pairs of proto-planets to skip over mean-motion commensurabilities and merge, instead of forming resonant chains. In essence, this type of dynamical behavior is seen in the large-scale numerical experiments of <cit.>.
For much of this work, the system parameters that we use effectively
assume a maximum rate of orbital convergence. Because the quantitative
nature of migration can change substantially in the inner nebula, the
actual rate of orbital convergence may be somewhat lower
<cit.>. This change would make planetary resonances more
susceptible to stochastic disruption. At the same time, we have not
taken into account the inhibition of the random gravitational field
through non-ideal magnetohydrodynamic effects <cit.>,
which would weaken the degree of stochastic forcing. Both of these
effects can be incorporated into the criterion of equation
(<ref>) by lowering the migration factor f and the
value of α accordingly. However, because both of these quantities
appear under a square root in the expression, the sensitivity
of our results to these corrections is not expected to be extreme.
This work assumes that turbulence operates in circumstellar
disks at the expected levels. The presence of turbulence is most commonly attributed to the magneto-rotational-instability <cit.>,
which in turn requires the disk to be sufficiently ionized. Although
the innermost regions of the disk are expected to be ionized by
thermal processes, dead zones could exist in intermediate part of the
disk (<cit.>; see also <cit.>), and ionization by cosmic rays can be suppressed
in the outer disk <cit.>. Indeed, suppressed levels of
ionization are now inferred from ALMA observations of young star/disk
systems <cit.>, implying that the assumption of sufficient
ionization — and hence active MRI turbulence — is not guaranteed. At the same time, our model is agnostic towards the origins of turbulent fluctuations themselves, and can be employed equally well if a purely hydrodynamic source of turbulence were responsible for angular momentum transport within the nebula <cit.>.
In light of the aforementioned uncertainties inherent to the problem at hand, it is of considerable interest to explore if simply adjusting the parameters can, in principle, yield consistency between the model and the observations. That is, can reasonable changes to the migration rate, etc., generate agreement between the turbulent resonance disruption hypothesis and the data? Using equation (<ref>), we find that increasing the local surface density by an order of magnitude (Σ=10⟨Σ⟩=170,000g/cm^2) while lowering the orbital convergence rate a hundred-fold (f=0.01) and retaining h/r=0.05, ⟨ a⟩=0.1AU, α=0.01 yields (m_1+m_2)/M≃2× 10^-4∼60M_⊕/M_⊙ as the critical mass ratio, thus explaining the full range of values shown in Figure <ref>. Correspondingly, rough agreement between observations and the stochastic migration scenario is reproduced in the work of <cit.>, where the amplitude of turbulent forcing was tuned to give consistency with data.
Although this line of reasoning may appear promising, it is important to note that as the disk accretes onto the star, the local surface density will diminish, causing the critical mass ratio to decrease as well. Meanwhile, even with a reduction factor of f=0.01, the type-I migration timescale remains shorter than the ∼few Myr lifetime of the nebula, as long as Σ≳0.1⟨Σ⟩=170g/cm^2. As a result, we argue that any realistic distribution of the assumed parameters is unlikely to allow turbulence to provide enough resonance disruption to explain the entire set of observations.
If disk turbulence does not play the defining role in generating an
observational census of extrasolar planets that is neither dominated
by, nor devoid of, mean motion resonances, than what additional
processes are responsible for the extant data set? As already
mentioned in the introduction, there are two other ways in which
planets can avoid resonant locking – resonant metastability <cit.> and capture
probability suppression <cit.>. The first mechanism requires that the outer
planet is more massive than the inner planet to compromise
resonance <cit.>. As a result, observed resonant systems would almost always
have a more massive inner planet, but this ordering is not reflected
in the data. On the other hand, the (second) capture suppression
mechanism requires disk eccentricities of order
e_disk∼0.02 to explain the data. Importantly, disk eccentricities of
this magnitude (and greater) are not only an expected result of
theoretical calculations, they are invoked to explain observations of
asymmetric glow of dust <cit.>.
In conclusion, turbulent fluctuations probably do not explain the
entire ensemble of observed planetary systems, which exhibit only a weak preference for mean motion commensurability. In addition to
turbulent forcing, many other physical processes are likely at work,
where perhaps the most promising mechanism is capture suppression due to nonzero
disk eccentricities. Nonetheless, a subset of exotic planetary systems
that exhibit large-amplitude resonant librations likely require a
turbulent origin. The relative duty cycle of this mechanism, and
others, poses an interesting problem for further exploration.
Acknowledgments: We would like to thank Juliette Becker, Tony Bloch, Wlad Lyra and Chris Spalding for useful discussions, as well as the referee, Hanno Rein, whose insightful report led to a considerable improvement of the manuscript. K.B. acknowledges support from the NSF AAG program AST1517936, and from Caltech. F.C.A. acknowledges support from the NASA Exoplanets Research Program NNX16AB47G, and from the University of Michigan.
[Adams et al.(2008)]alb
Adams, F. C., Laughlin, G., & Bloch, A. M. 2008, ApJ, 683, 1117
[Allan(1969)]Allan1969
Allan, R. R. 1969, AJ, 74, 497
[Allan(1970)]Allan1970
Allan, R. R. 1970, Celest. Mech., 2, 121
[Balbus & Hawley(1991)]mri
Balbus, S. A., & Hawley, J. F. 1991, ApJ, 376, 214
[Bai & Stone(2013)]2013ApJ...769...76B Bai, X.-N., & Stone, J. M. 2013, ApJ, 769, 76
[Batalha et al.(2013)]Batalha
Batalha, N. M. et al. 2013, ApJS, 204, 24
[Batygin(2015)]Batygin2015
Batygin, K. 2015, MNRAS, 451, 2589
[Batygin et al.(2015)]BatDeckHol
Batygin, K., Deck, K. M., & Holman, M. J. 2015, AJ, 149, 167
[Batygin et al. (2011)]batynmorbid
Batygin, K., & Morbidelli, A. 2011, Celest. Mech., 111, 219
[Batygin & Morbidelli(2013)]BatMorby2013b
Batygin, K., & Morbidelli, A. 2013, A&A, 556, A28
[Bitsch & Kley(2011)]Bitsch
Bitsch, B., & Kley, W. 2011, A&A, 536, A77
[Bitsch et al.(2015)]2015A A...575A..28B Bitsch, B., Johansen, A., Lambrechts, M., & Morbidelli, A. 2015, A&A, 575, A28
[Borders & Goldreich(1984)]BordersGoldreich1984
Borderies, N., & Goldreich, P. 1984, Celest. Mech., 32, 127
[Chiang & Laughlin(2013)]ChiangLaughlin
Chiang, E., & Laughlin, G. 2013, MNRAS, 431, 3444
[Cleeves et al.(2013)]cleevesone
Cleeves, L. I., Adams, F. C., & Bergin, E. A. 2013, ApJ, 772, 5
[Cleeves et al.(2015)]cleeves
Cleeves, L. I., Bergin, E. A., Qi, C., Adams, F. C., & Öberg, K. I.
2015, ApJ, 799, 204
[Coleman & Nelson(2016)]ColemanNelson
Coleman, G.A.L., & Nelson, R. P. 2016, MNRAS, 457, 2480
[Cresswell & Nelson(2008)]CresswellNelson2008
Cresswell, P., & Nelson, R. P. 2008, A&A, 482, 677
[Crida et al.(2006)]2006Icar..181..587C Crida, A., Morbidelli, A., & Masset, F. 2006, Icarus, 181, 587
[Crida et al.(2008)]Crida2008
Crida, A., Sándor, A., & Kley, W., 2008, A&A, 483, 325
[D'Angelo & Bodenheimer(2016)]DAngeloBodenheimer2016 D'Angelo, G., & Bodenheimer, P. 2016, ApJ, 828, 33
[Deck et al.(2012)]Decketal
Deck, K. M. et al. 2012, ApJ, 755, 21
[Deck et al.(2013)]Deck2013
Deck, K. M., Payne, M., & Holman, M. J. 2013, ApJ, 774, 129
[Deck & Batygin(2015)]DeckBatygin
Deck, K. M., & Batygin, K. 2015, ApJ, 810, 119
[Duffell & MacFadyen(2013)]2013ApJ...769...41D Duffell, P. C., & MacFadyen, A. I. 2013, ApJ, 769, 41
[Duffell & Chiang(2015)]DuffellChiang2015 Duffell, P. C., & Chiang, E. 2015, ApJ, 812, 94
[Fabrycky et al.(2014)]Fabrycky2014
Fabrycky, D. C. et al. 2014, ApJ, 790, 146
[Foreman-Mackey et al.(2014)]Forman-Mackey
Foreman-Mackey, D., Hogg, D. W., & Morton, T. D. 2014, ApJ,
795, 64
[Fressin et al.(2013)]Fressin2013 Fressin, F., Torres, G., Charbonneau, D., et al. 2013, ApJ, 766, 81
[Gammie(1996)]gammie
Gammie, C. F. 1996, ApJ, 457, 355
[Goldreich(1965)]Goldreich1965
Goldreich, P. 1965, MNRAS, 130, 159
[Goldreich & Schlichting(2014)]GoldShicht2014
Goldreich, P., & Schlichting, H. E. 2014, AJ, 147, 32
[Goldreich & Tremaine(1980)]GoldreichTremaine1980
Goldreich, P., & Tremaine, S. 1980, ApJ, 241, 425
[Goldreich & Tremaine(1982)]GoldreichTremaineRing
Goldreich, P., & Tremaine, S. 1982, ARA&A, 20, 249
[Hansen & Murray(2015)]MurrayHansen
Hansen, B.M.S., & Murray, N. 2015, MNRAS, 448, 1044
[Hayashi(1981)]1981PThPS..70...35H Hayashi, C. 1981, Progress of Theoretical Physics Supplement, 70, 35
[Henrard(1986)]Henrard1986
Henrard, J., & Lemaitre, A. 1986, Celest. Mech., 39, 213
[Henrard & Lamaitre(1983)]HenrardLemaitre1983 Henrard, J., & Lamaitre, A. 1983, Celestial Mechanics, 30, 197
[Horn et al.(2012)]lyra
Horn, B., Lyra, W., Mac Low, M-M., & Sándor, Z. 2012, ApJ, 750, 34
[Howard et al.(2012)]Howard
Howard, A. W. et al. 2012, ApJS, 201, 15
[Johnson et al.(2006)]johnson
Johnson, E. T., Goodman, J., & Menou, K. 2006, ApJ, 647, 1413
[Jontof-Hutter et al.(2014)]2014ApJ...785...15J Jontof-Hutter, D., Lissauer, J. J., Rowe, J. F., & Fabrycky, D. C. 2014, ApJ, 785, 15
[Ketchum et al.(2011)]ketchum
Ketchum, J. A., Adams, F. C., & Bloch, A. M. 2011, ApJ, 726, 53
[Kley & Nelson(2012)]KleyNelson2012
Kley, W., & Nelson, R. P. 2012, ARA&A, 50, 211
[Laughlin et al.(2004)]lsa
Laughlin, G., Steinacker, A., & Adams, F. C. 2004, ApJ, 608, 489
[Lecoanet et al.(2009)]lecoanet
Lecoanet, D., Adams, F. C., & Bloch, A. M. 2009, ApJ, 692, 659
[Lee & Peale(2002)]leepeale
Lee, M.-H., & Peale, S. J. 2002, ApJ, 567, 596
[Lee & Chiang(2015)]Lee2015
Lee, E. J., & Chiang, E. 2015, ApJ, 811, 41
[Lee & Chiang(2016)]Lee2016
Lee, E. J., & Chiang, E. 2016, ApJ, 817, 90
[Lin & Youdin(2015)]2015ApJ...811...17L Lin, M.-K., & Youdin, A. N. 2015, ApJ, 811, 17
[Malhotra(1993)]Malhotra1993
Malhotra, R. 1993, Nature, 365, 819
[Mestel(1963)]Mestel63
Mestel, L. 1963, MNRAS, 126, 553
[Mills et al.(2016)]2016Natur.533..509M Mills, S. M., Fabrycky, D. C., Migaszewski, C., et al. 2016, Nature, 533, 509
[Mittal & Chiang(2015)]2015ApJ...798L..25M Mittal, T., & Chiang, E. 2015, ApJ, 798, L25
[Morbidelli(2002)]Morby
Morbidelli, A. 2002, Modern Celestial Mechanics: Aspects of Solar
System Dynamics (London: Taylor & Francis)
[Mulders et al.(2015)]Mulders2015 Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 798, 112
[Murray & Dermott(1999)]md99
Murray, C. D., & Dermott, S. F. 1999, Solar System Dynamics
(Cambridge: Cambridge Univ. Press)
[Nelson & Papaloizou(2004)]nelson
Nelson R. P., & Papaloizou J.C.B. 2004, MNRAS, 350, 849
[Nelson et al.(2013)]2013MNRAS.435.2610N Nelson, R. P., Gressel, O., & Umurhan, O. M. 2013, MNRAS, 435, 2610
[Nesvorný & Morbidelli(2008)]morbyone
Nesvorný, D., & Morbidelli, A. 2008, Icarus, 688, 636
[Nobili & Roxburgh(1986)]genrel Nobili, A., & Roxburgh, I. W. 1986, IAUS, 114, 105
[Ogihara et al.(2015)]Ogihara2015insitu Ogihara, M., Morbidelli, A., & Guillot, T. 2015, A&A, 578, A36
[Ogihara et al.(2015)]Ogihara2015mig Ogihara, M., Morbidelli, A., & Guillot, T. 2015, A&A, 584, L1
[Okuzumi & Hirose(2011)]okuzumi
Okuzumi, S., & Hirose, S. 2011, ApJ, 742, 65
[Okuzumi & Ormel(2013)]OkuzumiOrmel2013 Okuzumi, S., & Ormel, C. W. 2013, , 771, 43
[Öpik(1976)]Opik1976
Öpik, E. J. 1976, Interplanetary Encounters:
Close-range gravitational interactions (New York: Elsevier)
[Ormel & Okuzumi(2013)]ormelpaper
Ormel, C. W., & Okuzumi, S. 2013, ApJ, 771, 44
[Paardekooper(2014)]Paardekooper Paardekooper, S.-J. 2014, MNRAS, 444, 2031
[Papaloizou & Larwood(2000)]PapaloizouLarwood2000
Papaloizou, J.C.B., & Larwood, J. D. 2000, MNRAS, 315, 823
[Peale(1976)]Peale1976
Peale, S. J. 1976, ARA&A, 14, 215
[Petigura et al.(2013)]Petigura
Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013, PNAS, 110, 19273
[Press et al.(1992)]Press1992
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery,
B. P. 1992, Numerical Recipes in FORTRAN: The Art of Scientific
Computing (Cambridge: Cambridge Univ. Press)
[Quillen(2006)]quill
Quillen, A. C. 2006, MNRAS, 365, 1367
[Rein & Papaloizou(2009)]rein
Rein, H., & Papaloizou, J.C.P. 2009, A&A, 497, 595
[Rein et al.(2010)]2010A A...510A...4R Rein, H., Papaloizou, J. C. B., & Kley, W. 2010, A&A, 510, A4
[Rein(2012)]Rein2012 Rein, H. 2012, MNRAS, 427, L21
[Rivera et al.(2010)]2010ApJ...719..890R Rivera, E. J., Laughlin, G., Butler, R. P., et al. 2010, ApJ, 719, 890
[Rogers(2015)]Rogers2015
Rogers, L. A. 2015, ApJ, 801, 41
[Schlichting(2014)]Schlichting
Schlichting, H. 2014, ApJ, 795, 15
[Sessin & Ferraz-Mello(1984)]SessinFerraz-Mello1984
Sessin, W., & Ferraz-Mello, S. 1984, Celest. Mech., 32, 307
[Sinclair(1970)]Sinclair1970
Sinclair, A. T. 1970, MNRAS, 148, 325
[Sinclair(1972)]Sinclair1972
Sinclair, A. T. 1972, MNRAS, 160, 169
[Shakura & Sunyaev(1973)]ShakuraSunayev1973
Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
[Tanaka et al.(2002)]Tanaka2002
Tanaka, H., Takeuchi, T., Ward, W. R. 2002, ApJ, 565, 1257
[Tanaka & Ward(2004)]Tanaka2004
Tanaka, H., & Ward, W. R. 2004, ApJ, 602, 388
[Terquem & Papaloizou(2007)]TerquemPap2007
Terquem, C., Papaloizou, J.C.B. 2007, ApJ, 654, 1110
[Van Kampen(2001)]vankampen
Van Kampen, N. G. 2001, Stochastic Processes in Physics
and Chemistry (Amsterdam: North Holland)
[Weiss & Marcy(2014)]WeissMarcy2014
Weiss, L. M., & Marcy, G. W. 2014, ApJ, 783, 6
[Winn & Fabrycky(2015)]WinnFabrycky2015 Winn, J. N., & Fabrycky, D. C. 2015, ARA&A, 53, 409
[Xu & Lai(2016)]2016arXiv161106463X Xu, W., & Lai, D. 2016, arXiv:1611.06463
[Wisdom(1986)]Wisdom1986
Wisdom, J. 1986, Celest. Mech., 38, 175
|
http://arxiv.org/abs/1701.07805v3 | 20170126182811 | On extractable shared information | [
"Johannes Rauh",
"Pradeep Kr. Banerjee",
"Eckehard Olbrich",
"Jürgen Jost",
"Nils Bertschinger"
] | cs.IT | [
"cs.IT",
"math.IT",
"94A15, 94A17"
] |
We consider the problem of quantifying the information shared by a pair of random variables X_1,X_2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f(S) for any function f.
We show that our measure leads to a new nonnegative decomposition of the mutual information I(S;X_1X_2) into shared, complementary and unique components.
We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information.
We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.
Keywords: Information decomposition; multivariate mutual information; left monotonicity; Blackwell order
§ INTRODUCTION
A series of recent papers have focused on the bivariate information decomposition problem <cit.>. Consider three random variables S, X_1, X_2 with finite alphabets , and , respectively. The total information that the pair (X_1,X_2) convey about the target S can have aspects of shared or redundant information (conveyed by both X_1 and X_2), of unique information (conveyed exclusively by either X_1 or X_2), and of complementary or synergistic information (retrievable only from the joint variable (X_1,X_2)). In general, all three kinds of information may be present concurrently. One would like to express this by decomposing the mutual information I(S;X_1X_2) into a sum of nonnegative components with a well-defined operational interpretation.
One possible application area is in the neurosciences.
In <cit.>, it is argued that such a decomposition can provide a framework to analyze neural information processing using information theory that can integrate and go beyond previous attempts.
For the general case of k finite source variables (X_1,…,X_k), Williams and Beer <cit.> proposed the partial information lattice framework that specifies how the total information about the target S is shared across the singleton sources and their disjoint or overlapping coalitions.
The lattice is a consequence of certain natural properties of shared information (sometimes called the Williams–Beer axioms).
In the bivariate case (k=2), the decomposition has the form
I(S;X_1X_2) = SI(S;X_1,X_2) (shared) + CI(S;X_1,X_2) (complementary) + UI(S;X_1∖X_2) (unique, X_1 wrt X_2) + UI(S;X_2∖X_1) (unique, X_2 wrt X_1),
I(S;X_1) = SI(S;X_1,X_2) + UI(S;X_1∖X_2) ,
I(S;X_2) = SI(S;X_1,X_2) + UI(S;X_2∖X_1) ,
where SI(S;X_1,X_2), UI(S;X_1∖X_2), UI(S;X_2∖X_1), and CI(S;X_1,X_2) are nonnegative functions that depend continuously on the joint distribution of (S,X_1,X_2).
The difference between shared and complementary information is the familiar co-information <cit.> (or interaction information <cit.>), a symmetric generalization of the mutual information for three variables,
CoI(S;X_1,X_2) = I(S;X_1) - I(S;X_1|X_2) = SI(S;X_1,X_2) - CI(S;X_1,X_2).
Equations (<ref>) to (<ref>) leave only a single degree of freedom, i.e., it suffices to specify either a measure for SI, for UI, or for CI.
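Since any one of the four measures determines the remaining three through these equations, the bookkeeping can be made explicit; the following sketch (Python, with a joint distribution stored as an array p[s, x1, x2] and a supplied candidate value for SI) computes the induced unique and complementary parts.

```python
import numpy as np

def mutual_info(pxy):
    """I(X;Y) in bits for a 2-D joint distribution array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return float((pxy[m] * np.log2(pxy[m] / (px * py)[m])).sum())

def complete_decomposition(p, si):
    """Given p[s, x1, x2] and a candidate value si = SI(S;X1,X2), return
    (UI(S;X1 minus X2), UI(S;X2 minus X1), CI(S;X1,X2)) as fixed by the
    equations above; one measure determines the remaining three."""
    ns, n1, n2 = p.shape
    ui1 = mutual_info(p.sum(axis=2)) - si              # I(S;X1) - SI
    ui2 = mutual_info(p.sum(axis=1)) - si              # I(S;X2) - SI
    ci = mutual_info(p.reshape(ns, n1 * n2)) - si - ui1 - ui2
    return ui1, ui2, ci
```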
Williams and Beer not only introduced the general partial information framework, but also proposed a measure of SI to
fill this framework. While their measure has subsequently been criticized for “not measuring the right thing”
<cit.>,
there has been no successful attempt to find better measures, except for the bivariate case
(k=2) <cit.>. One problem seems to
be the lack of a clear consensus on what an ideal measure of shared (or unique or complementary) information should look
like and what properties it should satisfy. In particular, the Williams–Beer axioms only put crude bounds on the values
of the functions , and . Therefore, additional axioms have been proposed by various
authors <cit.>.
Unfortunately, some of these properties contradict each other <cit.>, and the question for
the right axiomatic characterization is still open.
The Williams–Beer axioms do not say anything about what should happen when the target variable S undergoes a local transformation.
In this context, the following left monotonicity property was proposed in <cit.>:
(LM) SI(S;X_1,X_2) ≥ SI(f(S);X_1,X_2) for any function f. (left monotonicity)
Left monotonicity for unique or complementary information can be defined similarly.
The property captures the intuition that shared information should only decrease if the target performs some local operation (e.g., coarse graining) on her variable S.
As argued in <cit.>, left monotonicity of shared and unique information are indeed desirable properties. Unfortunately, none of the measures of shared information proposed so far satisfy left monotonicity.
In this contribution, we study a construction that enforces left monotonicity. Namely, given a measure of shared information SI, define
\widetilde{SI}(S;X_1,X_2) := sup_{f:𝒮→𝒮'} SI(f(S);X_1,X_2),
where the supremum runs over all functions f:𝒮→𝒮' from the domain of S to an arbitrary finite set 𝒮'.
By construction, \widetilde{SI} satisfies left monotonicity, and it is the smallest function bounded from below by SI that satisfies left monotonicity.
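On small alphabets the supremum can be evaluated by brute force: SI(f(S);X_1,X_2) depends on f only through the induced partition of 𝒮, so it suffices to range over maps into {0,…,|𝒮|-1}. A sketch (Python; the argument si is any base measure evaluated on a joint array q[s', x1, x2]):

```python
from itertools import product
import numpy as np

def extractable(si, p):
    """Brute-force sup_f si(f(S);X1,X2) over all functions f on the alphabet
    of S. si: callable on a joint array q[s', x1, x2]; p: array p[s, x1, x2].
    Feasible only for small alphabets (|S|^|S| candidate maps)."""
    ns = p.shape[0]
    best = -np.inf
    for f in product(range(ns), repeat=ns):    # f[s] is the label of s
        q = np.zeros_like(p)
        for s in range(ns):
            q[f[s]] += p[s]                    # push p forward along f
        best = max(best, si(q[:max(f) + 1]))   # unused letters carry no mass
    return best
```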
Changing the definition of shared information in the information decomposition framework (Equations (<ref>)–(<ref>)) leads to new definitions of unique and complementary information:
\widetilde{UI}(S;X_1∖X_2) := I(S;X_1) - \widetilde{SI}(S;X_1,X_2),
\widetilde{UI}(S;X_2∖X_1) := I(S;X_2) - \widetilde{SI}(S;X_1,X_2),
\widetilde{CI}(S;X_1,X_2) := I(S;X_1X_2) - \widetilde{SI}(S;X_1,X_2) - \widetilde{UI}(S;X_1∖X_2) - \widetilde{UI}(S;X_2∖X_1).
In general, \widetilde{UI}(S;X_1∖X_2) ≠ sup_{f:𝒮→𝒮'} UI(f(S);X_1∖X_2). Thus, our construction cannot enforce left monotonicity for
both UI and SI in parallel.
Lemma <ref> shows that \widetilde{SI}, \widetilde{UI}, and \widetilde{CI} are nonnegative and thus define a
nonnegative bivariate decomposition.
We study this decomposition in Section <ref>. In Theorem <ref>, we show that our construction is not compatible with a decision-theoretic interpretation of unique information proposed in <cit.>. In Section <ref>, we ask whether it is possible to find an information decomposition in which both shared and unique information measures are left monotonic. Our construction cannot directly be generalized to ensure left monotonicity of two functions simultaneously. Nevertheless, it is possible that such a decomposition exists, and in Proposition <ref>, we prove bounds on the corresponding shared information measure.
Our original motivation for the definition of \widetilde{SI} was to find a bivariate decomposition in which the shared information satisfies left monotonicity. However, one could also ask whether left monotonicity is a required property of shared information, as put forward in <cit.>.
In contrast, <cit.> argue that redundancy can also arise by means of a mechanism.
Applying a function to S corresponds to such a mechanism that singles out a certain aspect from S. Even if all the X_i share nothing about the whole S, they might still share information about this aspect of S, which means that the shared information will increase. With this intuition, we can interpret \widetilde{SI} not as an improved measure of shared information, but as a measure of extractable shared information, because it asks for the maximal amount of shared information that can be extracted from S by further processing S by a local mechanism. More generally, one can apply a similar construction to arbitrary information measures. We explore this idea in Section <ref> and discuss probabilistic generalizations and relations to other information measures.
In Section <ref>, we apply our construction to existing measures of shared information.
§ PROPERTIES OF INFORMATION DECOMPOSITIONS
§.§ The Williams–Beer Axioms
Although we are mostly concerned with the case k=2, let us first recall the three axioms that Williams and
Beer <cit.> proposed for a measure of shared information for arbitrarily many arguments:
(S) SI(S;X_1,…,X_k) is symmetric under permutations of X_1,…,X_k, (Symmetry)
(SR) SI(S;X_1) = I(S;X_1), (Self-redundancy)
(M) SI(S;X_1,…,X_{k-1},X_k) ≤ SI(S;X_1,…,X_{k-1}),
with equality if
X_i=f(X_k) for some i<k and some function f. (Monotonicity)
Any measure of SI satisfying these axioms is nonnegative.
Moreover, the axioms imply the following:
(RM) SI(S;X_1,…,X_k) ≥ SI(S;f_1(X_1),…,f_k(X_k)) for all
functions f_1,…,f_k. (right monotonicity)
Williams and Beer also defined a function
I_min(S;X_1,…,X_k) = ∑_sP_S(s)min_i{∑_x_iP_X_i|S(x_i|s)logP_S|X_i(s|x_i)/P_S(s)}
and showed that I_min satisfies their axioms.
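For reference, a direct transcription of this formula in the bivariate case (Python; p[s, x1, x2] a joint distribution array, result in bits), verified here on the Copy example discussed below:

```python
import numpy as np

def i_min(p):
    """Williams-Beer I_min(S;X1,X2) in bits: the expected minimum over the
    two sources of the specific information about each outcome s."""
    ps = p.sum(axis=(1, 2))
    total = 0.0
    for s in np.flatnonzero(ps):
        spec = []
        for axis in (2, 1):                    # keep (S,X1), then (S,X2)
            psx = p.sum(axis=axis)             # joint of S and one source
            px = psx.sum(axis=0)
            m = psx[s] > 0
            cond = psx[s, m] / ps[s]           # P(x | s)
            # specific information: sum_x P(x|s) log2[P(s|x)/P(s)]
            spec.append(float((cond * np.log2(psx[s, m]
                                              / (px[m] * ps[s]))).sum()))
        total += ps[s] * min(spec)
    return total

# Copy example: independent uniform bits, S = (X1, X2)
p = np.zeros((4, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[2 * x1 + x2, x1, x2] = 0.25
print(i_min(p))    # 1.0 bit, the value criticized below
```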
§.§ The Copy example and the Identity Axiom
Let X_1,X_2 be independent uniformly distributed binary random variables, and consider the copy function
Copy(X_1,X_2):=(X_1,X_2). One point of criticism of I_min is the fact that
X_1 and X_2 share I_min(Copy(X_1,X_2);X_1,X_2) = 1 bit about
Copy(X_1,X_2) according to I_min, even though they are independent.
<cit.> argue that the shared information about the copied pair should equal the mutual information:
(Id) SI(Copy(X_1,X_2);X_1,X_2) = I(X_1;X_2). (Identity)
Ref. <cit.> also proposed a bivariate
measure of shared information that satisfies
(Id). Similarly, the measures of bivariate shared information proposed in
<cit.> satisfies (Id).
However, (Id) is incompatible with a nonnegative information decomposition according to the Williams–Beer axioms for k≥ 3 <cit.>.
On the other hand, Ref. <cit.> uses an example from game theory to give an intuitive explanation how even independent variables X_1 and X_2 can have nontrivial shared information. However, in any case the value of 1bit assigned by I_min is deemed to be too large.
§.§ The Blackwell property and property (∗)
One of the reasons that it is so difficult to find good definitions of shared, unique or synergistic information is that a clear operational idea behind these notions is missing. Starting from an operational idea about decision problems,
Ref. <cit.> proposed the following property for the unique information, which we now
propose to call Blackwell property:
(BP) For a given joint distribution P_SX_1X_2, UI(S;X_1∖X_2) vanishes if and only if there exists a random
variable X_1' such that S-X_2-X_1' is a Markov chain and P_SX_1'=P_SX_1.
In other words, the channel S → X_1 is a garbling or degradation of the channel S → X_2. Blackwell's
theorem <cit.> implies that this garbling property is equivalent to the fact that any decision
problem in which the task is to predict S can be solved just as well with the knowledge of X_2 as with the
knowledge of X_1. We refer to Section 2 in <cit.> for the details.
Ref. <cit.> also proposed the following property:
(*) SI and UI depend only on the marginal distributions P_SX_1 and P_SX_2 of the
pairs (S,X_1) and (S,X_2).
This property was in part motivated by (BP), which also depends only on the channels S→ X_1 and S→ X_2 and thus on P_SX_1 and P_SX_2. Most information decompositions proposed so far satisfy property (*).
§ EXTRACTABLE INFORMATION MEASURES
One can interpret \widetilde{SI} as a measure of extractable shared information. We explain this idea in a more general setting.
For fixed k, let IM(S;X_1,…,X_k) be an arbitrary information measure that measures one aspect of the
information that X_1,…,X_k contain about S. At this point, we do not specify what precisely an
information measure is, except that it is a function that assigns a real number to any joint distributions of
S,X_1,…,X_k. The notation is, of course, suggestive of the fact that we mostly think about one of the measures SI, UI, or CI, in which the first argument plays a special role. However, IM could also be the mutual information I(S;X_1),
the entropy H(S), or the coinformation CoI(S;X_1,X_2).
We define the corresponding extractable information measure as
\widetilde{IM}(S;X_1,…,X_k) := sup_f IM(f(S);X_1,…,X_k),
where the supremum runs over all functions f:𝒮↦𝒮' from the domain of S to an arbitrary finite set 𝒮'. The intuition is that \widetilde{IM} is the maximal possible amount of IM one can “extract” from (X_1,…,X_k) by transforming S. Clearly, the precise interpretation depends on the interpretation of IM.
This construction has the following general properties:
* Most information measures satisfy IM(O;X_1,…,X_k)=0 when O is a constant random variable. Thus, in this case, \widetilde{IM}(S;X_1,…,X_k) ≥ 0. For example, even though the coinformation can be negative, the extractable coinformation is never negative.
* Suppose that IM satisfies left monotonicity.
Then, IM = IM. For example, entropy H and mutual information I satisfy left monotonicity, and so H=H
and I=I. Similarly, as shown in <cit.>, the measure of unique
information defined in <cit.> satisfies left monotonicity, and so
=.
* In fact, \widetilde{IM} is the smallest left monotonic information measure that is at least as large as IM.
The next result shows that our construction preserves monotonicity properties of the other arguments of IM.
It follows that, by iterating this construction, one can construct an information measure that is monotonic in all arguments.
Let f_1,…,f_k be fixed functions. If IM satisfies IM(S;f_1(X_1),…,f_k(X_k))≤
IM(S;X_1,…,X_k) for all S, then ĨM(S;f_1(X_1),…,f_k(X_k))≤ĨM(S;X_1,…,X_k) for all S.
Let f^* = argmax_f{IM(f(S);f_1(X_1),…,f_k(X_k))}. Then,
ĨM(S;f_1(X_1),…,f_k(X_k))
= IM(f^*(S);f_1(X_1),…,f_k(X_k))
≤^(a) IM(f^*(S);X_1,…,X_k)≤sup_f IM(f(S);X_1,…,X_k)
=ĨM(S;X_1,…,X_k),
where (a) follows from the assumptions.
As a generalization of the construction, instead of looking at “deterministic extractability,” one can also look at
“probabilistic extractability” and replace f by a stochastic matrix. This leads to the definition
ÎM(S;X_1,…,X_k) := sup_P_S'|S IM(S';X_1,…,X_k),
where the supremum now runs over all random variables S' that are independent of X_1,…,X_k given S. The
function ÎM is the smallest function bounded from below by IM that satisfies
(PLM) IM(S;X_1,X_2) ≥ IM(S';X_1,X_2) whenever S' is independent of X_1,X_2 given S.
An example of this construction is the intrinsic conditional information
I(X;Y↓ Z) := min_P_Z'|Z I(X;Y|Z'),
which was defined in <cit.> to study the secret-key rate, which is the maximal rate at which a secret can be generated by two agents knowing X or Y, respectively, such that a third agent who knows Z has arbitrarily small information about this key. The min instead of the max in the definition implies that I(X;Y↓ Z) is “anti-monotone” in Z.
In this paper, we restrict ourselves to the deterministic notions, since many of the
properties we want to discuss can already be explained using deterministic extractability.
Moreover, the optimization problem (<ref>) is a finite optimization problem and thus much easier to solve
than Equation (<ref>).
§ EXTRACTABLE SHARED INFORMATION
We now specialize to the case of shared information. The first result is that when we apply our construction to a measure of shared information that belongs to a bivariate information decomposition, we again obtain a bivariate information decomposition.
Suppose that SI is a measure of shared information, coming from a nonnegative bivariate information decomposition (satisfying Equations (<ref>) to (<ref>)). Then, S̃I defines a nonnegative information decomposition; that is, the derived functions
ŨI(S;X_1∖ X_2) := I(S;X_1) - S̃I(S;X_1,X_2),
ŨI(S;X_2∖ X_1) := I(S;X_2) - S̃I(S;X_1,X_2),
and C̃I(S;X_1,X_2) := I(S;X_1X_2) - S̃I(S;X_1,X_2) - ŨI(S;X_1∖ X_2) - ŨI(S;X_2∖ X_1)
are nonnegative. These quantities relate to the original decomposition by
a) S̃I(S;X_1,X_2) ≥ SI(S;X_1,X_2),
b) C̃I(S;X_1,X_2) ≥ CI(S;X_1,X_2),
c) UI(f^*(S);X_1∖ X_2) ≤ ŨI(S;X_1∖ X_2)
≤ UI(S;X_1∖ X_2),
where f^* is a function that achieves the supremum in Equation (<ref>).
a) S̃I(S;X_1,X_2) ≥ SI(S;X_1,X_2) ≥ 0,
b) C̃I(S;X_1,X_2)=S̃I(S;X_1,X_2)-CoI(S;X_1,X_2)
≥ SI(S;X_1,X_2) - CoI(S;X_1,X_2)
= CI(S;X_1,X_2) ≥ 0,
c) ŨI(S;X_1∖ X_2)=I(S;X_1)-S̃I(S;X_1,X_2)
≤ I(S;X_1) - SI(S;X_1,X_2)=UI(S;X_1∖ X_2),
ŨI(S;X_1∖ X_2)= I(S;X_1)-S̃I(S;X_1,X_2)
≥ I(f^*(S);X_1)-SI(f^*(S);X_1,X_2)
= UI(f^*(S);X_1∖ X_2) ≥ 0,
where we have used the data processing inequality.
* If SI satisfies (∗), then S̃I also satisfies (∗).
* If SI is right monotonic, then S̃I is also right monotonic.
(1) is direct, and (2) follows from Lemma <ref>.
Without further assumptions on SI, we cannot say much about when ŨI vanishes. However, the condition that ŨI vanishes has strong consequences.
Suppose that ŨI(S;X_1∖ X_2) vanishes, and let f^* be a function that achieves the supremum in Equation (<ref>). Then, X_1 - f^*(S) - S is a Markov chain. Moreover, UI(f^*(S);X_1∖ X_2)=0.
Suppose that ŨI(S;X_1∖ X_2)=0. Then,
I(S;X_1)=S̃I(S;X_1,X_2)=SI(f^*(S);X_1,X_2)≤ I(f^*(S);X_1) ≤ I(S;X_1). Thus, the data
processing inequality holds with equality. This implies that X_1 - f^*(S) - S is a Markov chain. The identity
UI(f^*(S);X_1∖ X_2)=0 follows from the same chain of inequalities.
If UI has the Blackwell property, then ŨI does not have the Blackwell property.
As shown in the example in the appendix,
there exist random variables S, X_1, X_2 and a function f that satisfy
* S and X_1 are independent given f(S).
* The channel f(S)→ X_1 is a garbling of the channel f(S)→ X_2.
* The channel S→ X_1 is not a garbling of the channel S→ X_2.
We claim that f solves the optimization problem (<ref>). Indeed, for an arbitrary function f',
SI(f'(S);X_1,X_2)≤ I(f'(S);X_1)≤ I(S;X_1) = I(f(S);X_1) = SI(f(S);X_1,X_2).
Thus, f solves the maximization problem (<ref>).
If UI satisfies the Blackwell property, then (2) and (3) imply UI(f(S);X_1∖ X_2) = 0 and UI(S;X_1∖ X_2) > 0. On the other hand,
ŨI(S;X_1∖ X_2)
= I(S;X_1) - S̃I(S;X_1,X_2)
= I(S;X_1) - SI(f(S);X_1,X_2)
= I(S;X_1) - I(f(S);X_1) + UI(f(S);X_1∖ X_2)
= 0.
Thus, ŨI does not satisfy the Blackwell property.
There is no bivariate information decomposition in which UI satisfies the Blackwell property and SI satisfies
left monotonicity.
If SI satisfies left monotonicity, then S̃I=SI. Thus, ŨI = UI cannot satisfy the Blackwell
property by Theorem <ref>.
§ LEFT MONOTONIC INFORMATION DECOMPOSITIONS
Is it possible to have an extractable information decomposition? More precisely, is it possible to have an information decomposition in which all information measures are left monotonic? The obvious strategy of starting with an arbitrary information decomposition and replacing each partial information measure by its extractable analogue does not work, since this would mean increasing all partial information measures (unless they are extractable already), but then their sum would also increase. For example, in the bivariate case, when SI is replaced by the larger function S̃I, then UI needs to be replaced by a smaller function, due to the constraints (<ref>) and (<ref>).
As argued in <cit.>, it is intuitive that SI be left monotonic. As argued above (and in <cit.>), it is also desirable that UI be left monotonic. The intuition for synergy is much less clear.
In the following, we restrict our focus to the bivariate case and study the implications of requiring both SI and UI to be left monotonic.
Proposition <ref> gives bounds on the corresponding measure of shared information.
Suppose that SI, UI and CI define a bivariate information decomposition, and suppose that SI and
UI are left monotonic. Then,
SI(f(X_1,X_2);X_1,X_2) ≤ I(X_1;X_2)
for any function f.
Before proving the proposition, let us make some remarks.
Inequality (<ref>) is related to the identity axiom. Indeed, it is easy to
derive Inequality (<ref>) from the identity axiom and from the assumption that SI is left monotonic.
Although Inequality (<ref>) may not seem counterintuitive at first sight, none of the
information decompositions proposed so far satisfy this property (the function I_⋏
from <cit.> satisfies left monotonicity and has been proposed as a measure of shared information, but it does not lead to a nonnegative information decomposition).
If SI is left monotonic, then
SI(f(X_1,X_2);X_1,X_2) ≤ SI(Copy(X_1,X_2);X_1,X_2)
= I(Copy(X_1,X_2);X_1) - UI(Copy(X_1,X_2);X_1∖ X_2).
If UI is left monotonic, then
UI(Copy(X_1,X_2);X_1∖ X_2) ≥ UI(X_1;X_1∖ X_2)
= I(X_1;X_1) - SI(X_1;X_1,X_2).
Note that I(X_1;X_1) = H(X_1) = I(Copy(X_1,X_2);X_1) and
SI(X_1;X_1,X_2)=I(X_1;X_2)-UI(X_1;X_2∖ X_1)=I(X_1;X_2).
Putting these inequalities together, we obtain SI(f(X_1,X_2);X_1,X_2) ≤ I(X_1;X_2).
§ EXAMPLES
In this section, we apply our construction to Williams and Beer's measure I_min <cit.> and to the bivariate measure of shared information SI proposed in <cit.>.
First, we make some remarks on how to compute the extractable information measure ĨM (under the assumption that one knows how to compute the underlying information measure itself). The optimization problem (<ref>) is a discrete optimization problem. The search space is the set of functions from the support of S to some finite set 𝒮'. For the information measures that we have in mind, we may restrict to surjective functions f, since the information measures only depend on events with positive probabilities. Thus, we may restrict to sets 𝒮' with |𝒮'|≤|𝒮|.
Moreover, the information measures are invariant under permutations of the alphabet 𝒮. Therefore, the only thing that matters about f is which elements of 𝒮 are mapped to the same element in 𝒮'. Thus, any function f:𝒮→𝒮' corresponds to a partition of 𝒮, where s,s'∈𝒮 belong to the same block if and only if f(s)=f(s'), and it suffices to look at all such partitions. The number of partitions of a finite set is the Bell number B_|𝒮|.
The Bell numbers increase super-exponentially, and for larger sets 𝒮, the search space of the optimization problem (<ref>) becomes quite large. For smaller problems, enumerating all partitions in order to find the maximum is still feasible. For larger problems, one would need a better understanding of the optimization problem.
For reference, some Bell numbers include:
n     3   4    6     10
B_n   5   15   203   115975
As always, symmetries may help, and so in the Copy example discussed below, where |𝒮|=4, it suffices to study six functions instead of B_4 = 15.
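For concreteness, a minimal Python sketch of this brute-force search could look as follows; the callable im and the dictionary encoding of a joint distribution (outcome tuples (s, x_1, …, x_k) mapped to probabilities) are illustrative conventions of ours, not taken from the references:

def partitions(items):
    # Recursively enumerate all set partitions of `items` (B_n many).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):             # put `first` into an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part                 # or into a new block of its own

def extractable(im, p):
    # Brute-force evaluation of sup_f IM(f(S); X_1, ..., X_k):
    # every relevant f corresponds to a partition of the support of S.
    s_support = sorted({key[0] for key in p})
    best = float('-inf')
    for part in partitions(s_support):
        f = {s: i for i, block in enumerate(part) for s in block}
        q = {}                                 # joint law of (f(S), X_1, ..., X_k)
        for (s, *xs), prob in p.items():
            key = (f[s], *xs)
            q[key] = q.get(key, 0.0) + prob
        best = max(best, im(q))
    return best

The finest partition recovers IM itself, so the result is never smaller than IM, in line with the general properties listed above.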
We now compare the measure Ĩ_min, an extractable version of Williams and Beer's measure I_min (see Equation (<ref>) above),
to the measure S̃I, an extractable version of the measure SI proposed in <cit.>.
For the latter, we briefly recall the definitions.
Let Δ be the set of all joint distributions of random variables (S,X_1,X_2) with given state spaces 𝒮, 𝒳_1, 𝒳_2. Fix P=P_SX_1X_2∈Δ. Define Δ_P as the set of all distributions Q_SX_1X_2 that preserve the marginals of the pairs (S,X_1) and (S,X_2), that is,
Δ_P := {Q_SX_1X_2∈Δ: Q_SX_1=P_SX_1, Q_SX_2=P_SX_2}.
Then, define the functions
UI(S;X_1∖ X_2) := min_Q∈Δ_P I_Q(S;X_1|X_2),
UI(S;X_2∖ X_1) := min_Q∈Δ_P I_Q(S;X_2|X_1),
SI(S;X_1,X_2) := max_Q∈Δ_P CoI_Q(S;X_1,X_2),
CI(S;X_1,X_2) := I(S;X_1X_2) - min_Q∈Δ_P I_Q(S;X_1X_2),
where the index Q in I_Q or CoI_Q indicates that the corresponding quantity is computed with respect to the joint distribution Q.
The decomposition corresponding to SI satisfies the Blackwell property and the identity axiom <cit.>.
UI is left monotonic, but SI is not <cit.>. In particular, S̃I ≠ SI. SI can be characterized as the smallest measure of shared information that satisfies property (∗). Therefore, S̃I is the smallest left monotonic measure of shared information that satisfies property (∗).
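For binary predictors this optimization is easy to carry out numerically: on Δ_P the mutual informations I_Q(S;X_1) and I_Q(S;X_2) are fixed, so maximizing the coinformation amounts to minimizing I_Q(S;X_1X_2), and Q is parametrized per value s by t_s = Q(X_1=1,X_2=1|S=s) inside the box allowed by the margins. A sketch of ours (not code from the cited references; same dictionary format as above):

import numpy as np
from scipy.optimize import minimize

def mutual_info(pxy):
    # I(X;Y) in bits for a joint probability array (rows: x, columns: y).
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

def si_broja_binary(p):
    # SI(S;X_1,X_2) = I(S;X_1) + I(S;X_2) - min_{Q in Delta_P} I_Q(S;X_1X_2)
    # for binary X_1, X_2.
    states = sorted({s for (s, _, _) in p})
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    ps, a, b = np.zeros(n), np.zeros(n), np.zeros(n)
    psx1, psx2 = np.zeros((n, 2)), np.zeros((n, 2))
    for (s, x1, x2), pr in p.items():
        i = idx[s]
        ps[i] += pr; a[i] += pr * x1; b[i] += pr * x2
        psx1[i, x1] += pr; psx2[i, x2] += pr
    a, b = a / ps, b / ps                      # P(X_i = 1 | S = s)
    lo, hi = np.maximum(0.0, a + b - 1.0), np.minimum(a, b)

    def i_s_x(t):                              # I_Q(S; X_1 X_2) given the t_s
        q = np.stack([1 - a - b + t, b - t, a - t, t], axis=1) * ps[:, None]
        return mutual_info(np.clip(q, 1e-15, None))

    res = minimize(i_s_x, (lo + hi) / 2, bounds=list(zip(lo, hi)))
    return mutual_info(psx1) + mutual_info(psx2) - res.fun

and_dist = {(x1 & x2, x1, x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(si_broja_binary(and_dist))               # about 0.311 bits

The minimization is a smooth box-constrained problem of dimension |𝒮|, so a standard local optimizer is a reasonable first attempt for such small examples.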
Let 𝒳_1=𝒳_2={0,1} and let X_1, X_2 be independent uniformly distributed random variables. Table <ref> collects values of shared information about f(X_1,X_2) for various functions f (in bits).
The function f_1:{00,01,10,11}→{0,1,2} is defined as
f_1(X_1,X_2) :=
X_1, if X_2=1,
2, if X_2=0.
The Sum function is defined as f(X_1,X_2) := X_1 + X_2.
Table <ref> contains (up to symmetry) all possible non-trivial functions f.
The values for the extractable measures are derived from the values of the corresponding non-extractable measures.
Note that the values for the extractable versions differ only for Copy from the original ones. In these examples, Ĩ_min=I_min, but as shown in <cit.>, I_min is not left monotonic in general.
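Williams and Beer's measure is simple enough to compute directly; the following sketch (same hypothetical dictionary format as above) reproduces, for instance, the value of 1 bit that I_min assigns to Copy, and combined with the routine extractable from above it yields Ĩ_min:

from math import log2

def i_min(p):
    # Williams-Beer I_min(S;X_1,X_2) = sum_s p(s) min_i I(S=s;X_i),
    # with I(S=s;X_i) the specific information of X_i about the outcome s.
    ps, px, psx = {}, [{}, {}], [{}, {}]
    for (s, x1, x2), pr in p.items():
        ps[s] = ps.get(s, 0.0) + pr
        for i, x in enumerate((x1, x2)):
            px[i][x] = px[i].get(x, 0.0) + pr
            psx[i][s, x] = psx[i].get((s, x), 0.0) + pr
    total = 0.0
    for s, p_s in ps.items():
        spec = [sum(pr / p_s * (log2(pr / px[i][x]) - log2(p_s))
                    for (t, x), pr in psx[i].items() if t == s)
                for i in (0, 1)]
        total += p_s * min(spec)
    return total

copy = {(2 * x1 + x2, x1, x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(i_min(copy), extractable(i_min, copy))   # 1.0 and 1.0, i.e. Ĩ_min = I_min here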
§ CONCLUSIONS
We introduced a new measure of shared information that satisfies the left monotonicity property with respect to local operations on the target variable. Left monotonicity corresponds to the idea that local processing will remove information in the target variable and thus should lead to lower values of measures which quantify information about the target variable. Our measure fits the bivariate information decomposition framework; that is, we also obtain corresponding measures of unique and synergistic information.
However, we also have shown that left monotonicity for the shared information contradicts the Blackwell property of the unique information, which limits the value of a left monotonic measure of shared information for information decomposition.
We also presented an alternative interpretation of the construction used in this paper. Starting from an arbitrary measure of shared information SI (which need not be left monotonic), we interpret the left monotonic measure S̃I as the amount of shared information that can be extracted from S by local processing.
Our initial motivation for the construction of S̃I was the question to what extent shared information originates from the redundancy between the predictors X_1 and X_2 or is created by the mechanism that generated S.
These two different flavors of redundancy were called source redundancy and mechanistic redundancy, respectively, in <cit.>.
While S̃I cannot be used to completely disentangle source and mechanistic redundancy, it can be seen as a measure of the maximum amount of redundancy that can be created from S using a (deterministic) mechanism.
In this sense, we believe that it is an important step forward towards a better understanding of this problem and related questions.
§ APPENDIX: COUNTEREXAMPLE IN THEOREM <REF>
Consider the joint distribution
f(s)   s   x_1   x_2   P_f(S)SX_1X_2
0      0   0     0     1/4
0      1   0     1     1/4
0      0   1     0     1/8
0      1   1     0     1/8
1      2   1     1     1/4
and the function f:{0,1,2}→{0,1} with f(0)=f(1)=0 and f(2)=1. Then, X_1 and X_2 are independent uniform binary random variables, and f(S) = And(X_1,X_2). In addition, S-f(S)-X_1 is a Markov chain. By symmetry, the joint distributions of the pairs (f(S), X_1) and (f(S), X_2) are identical, and so the two channels f(S)→ X_1 and f(S)→ X_2 are identical, and, hence, trivially, one is a garbling of the other. However, one can check that the channel S→ X_1 is not a garbling of the channel S→ X_2.
This example is discussed in more detail in <cit.>.
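The non-garbling claim (3) can also be verified mechanically: the channel S→X_1 is a garbling of S→X_2 if and only if a row-stochastic matrix T with k_1 = k_2 T exists, which is a linear feasibility problem. A sketch using SciPy (the channel matrices are read off from the table above; availability of the "highs" solver is assumed):

import numpy as np
from scipy.optimize import linprog

def is_garbling(k1, k2):
    # Feasibility LP: does a row-stochastic T exist with k1 = k2 @ T?
    ns, m = k2.shape
    n = k1.shape[1]
    a_eq, b_eq = [], []
    for s in range(ns):                        # k2[s, :] @ T == k1[s, :]
        for j in range(n):
            row = np.zeros(m * n)
            for i in range(m):
                row[i * n + j] = k2[s, i]
            a_eq.append(row)
            b_eq.append(k1[s, j])
    for i in range(m):                         # rows of T sum to one
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        a_eq.append(row)
        b_eq.append(1.0)
    res = linprog(np.zeros(m * n), A_eq=np.array(a_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (m * n), method="highs")
    return res.success

k1 = np.array([[2/3, 1/3], [2/3, 1/3], [0, 1]])   # P(X_1 | S = 0, 1, 2)
k2 = np.array([[1, 0], [1/3, 2/3], [0, 1]])       # P(X_2 | S = 0, 1, 2)
print(is_garbling(k1, k2))                        # False, confirming (3)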
|
http://arxiv.org/abs/1701.07973v1 | 20170127084516 | Frequency conversion in ultrastrong cavity QED | [
"Anton Frisk Kockum",
"Vincenzo Macrì",
"Luigi Garziano",
"Salvatore Savasta",
"Franco Nori"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"physics.optics"
] |
arrows
|
http://arxiv.org/abs/1701.07859v4 | 20170126195605 | Geometric Ergodicity of the multivariate COGARCH(1,1) Process | [
"Robert Stelzer",
"Johanna Vestweber"
] | math.PR | [
"math.PR",
"60G10, 60G51, 60J25"
] |
Transport Effects on Multiple-Component Reactions in Optical Biosensors
This work was done with the support of the National Science Foundation under award number NSF-DMS 1312529. The first author was also partially supported by the National Research Council through an NRC postdoctoral fellowship.
Ryan M. Evans David A. Edwards
For the multivariate COGARCH(1,1) volatility process we show sufficient conditions for the existence of a unique stationary distribution, for the geometric ergodicity and for the finiteness of moments of the stationary distribution by a Foster-Lyapunov drift condition approach. The test functions used are naturally related to the geometry of the cone of positive semi-definite matrices and the drift condition is shown to be satisfied if the drift term of the defining stochastic differential equation is sufficiently “negative”. We show easily applicable sufficient conditions for the needed irreducibility and aperiodicity of the volatility process living in the cone of positive semidefinite matrices, if the driving Lévy process is a compound Poisson process.
AMS Subject Classification 2010: Primary: 60J25 Secondary: 60G10, 60G51
Keywords:
Feller process, Foster-Lyapunov drift condition, Harris recurrence, irreducibility, Lévy process, MUCOGARCH, multivariate stochastic volatility model
§ INTRODUCTION
General autoregressive conditionally heteroscedastic (GARCH) time series models, as introduced in <cit.>, are of high interest for financial economics. They capture many typical features of observed financial data, the so-called stylized facts (see <cit.>). A continuous time extension, which captures the same stylized facts as the discrete time GARCH model, but can also be used for irregularly-spaced and high-frequency data, is the COGARCH process, see e.g. <cit.>. The use in financial modelling is studied e.g. in <cit.> and the statistical estimation in <cit.>, for example. Furthermore, an asymmetric variant is proposed in <cit.> and an extension allowing for more flexibility in the autocovariance function in <cit.>.
To model and understand the behavior of several interrelated time series as well as to price derivatives on several underlyings or to assess the risk of a portfolio multivariate models for financial markets are needed. The fluctuations of the volatilities and correlations over time call for employing stochastic volatility models which in a multivariate set-up means that one has to specify a latent process for the instantaneous covariance matrix. Thus, one needs to consider appropriate stochastic processes in the cone of positive semi-definite matrices. Many popular multivariate stochastic volatility models in continuous time, which in many financial applications is preferable to modelling in discrete time, are of an affine type, thus falling into the framework of <cit.>. Popular examples include the Wishart (see e.g. <cit.>) and the Ornstein-Uhlenbeck type stochastic volatility model (see <cit.>, for example).
Thus the price processes have two driving sources of randomness and the tail-behavior of their volatility process is typically equivalent to the one of the driving noise (see <cit.>). A very nice feature of GARCH models is that they have only one source of randomness and their structure ensures heavily-tailed stationary behavior even for very light tailed driving noises (<cit.>).
In discrete time one of the most general multivariate GARCH versions (see <cit.> for an overview) is the BEKK model, defined in <cit.>, and the multivariate COGARCH(1,1) (shortly MUCOGARCH(1,1)) process introduced and studied in <cit.>, is the continuous time analogue, which we are investigating further in this paper.
The existence and uniqueness of a stationary solution as well as the convergence to the stationary solution is of high interest and importance. Geometric ergodicity ensures fast convergence to the stationary regime in simulations and paves the way for statistical inference. By the same argument as in <cit.> geometric ergodicity and the existence of some p-moments of the stationary distribution provide exponential β-mixing for Markov processes. This in turn can be used to show a central limit theorem for the process, see for instance <cit.>, and so allows to prove for example asymptotic normality of estimators (see e.g. <cit.> in the context of univariate COGARCH(1,1) processes). In a similar way <cit.> employ the results of the present paper to analyse moment based estimators of the parameters of MUCOGARCH(1,1) processes.
In many applications involving time series (multivariate) ARMA-GARCH models (see e.g. <cit.>) turn out to be adequate and geometric ergodicity is again key to understand the asymptotic behaviour of statistical estimators. In continuous time a promising analogue currently investigated in <cit.> seems to be a (multivariate) CARMA process (see e.g. <cit.>) driven by a (multivariate) COGARCH process. The present paper also lays foundations for the analysis of such models.
For the univariate COGARCH process geometric ergodicity was shown by <cit.> and <cit.> discussed it for the BEKK GARCH process. In <cit.> for the MUCOGARCH process sufficient conditions for the existence of a stationary distribution are shown by tightness arguments, but the paper failed to establish uniqueness or convergence to the stationary distribution. In this paper we deduce, under the assumption of irreducibility, sufficient conditions for the uniqueness of the stationary distribution, for the convergence to it at an exponential rate and for the finiteness of some p-th moment of the stationary distribution of the MUCOGARCH volatility process Y. To show this we use the theory of Markov processes, see e.g. <cit.>. A further result of this theory is that our volatility process is positive Harris recurrent. If the driving Lévy process is a compound Poisson process, we show easily applicable conditions ensuring irreducibility of the volatility process in the cone of positive semidefinite matrices.
Like in the discrete time BEKK case the non-linear structure of the SDE prohibits us from using well-established results for random recurrence equations like in the one-dimensional case, and due to the rank one jumps establishing irreducibility is a very tricky issue. To obtain the latter <cit.> in discrete time used techniques from algebraic geometry (see also <cit.>), whereas we use a direct probabilistic approach reducing the question to the existence of a density for a Wishart distribution. However, we restrict ourselves to processes of order (1,1) while in the discrete time BEKK case general orders were considered. The reason is that on the one hand order (1,1) GARCH processes seem sufficient in most applications and on the other hand multivariate COGARCH(p,q) processes can be defined in principle (<cit.>), but no reasonable conditions on the possible parameters are known. Already in the univariate case these conditions are quite involved (cf. <cit.>, <cit.>). On the other hand we look at the finiteness of an arbitrary p-th moment (of the volatility process) and use drift conditions related to it, whereas <cit.> only looked at the first moment for the BEKK case. In contrast to <cit.> we avoid any vectorizations, work directly in the cone of positive semi-definite matrices and use test functions naturally in line with the geometry of the cone.
After a brief summary of some preliminaries, notations and Lévy processes, the remainder of the paper is organized as follows: In Section 3 we recall the definition of the MUCOGARCH(1,1) process and some of its properties of relevance later on. In Section 4 we present our first main result: sufficient conditions ensuring the geometric ergodicity of the volatility process Y. Furthermore, we compare the conditions for geometric ergodicity to previously known conditions for (first order) stationarity. Moreover, we discuss the applications of the obtained results and illustrate them by exemplary simulations. In Section <ref> we establish sufficient conditions for the irreducibility and aperiodicity of Y needed to apply the previous results on geometric ergodicity. Section <ref> first gives a brief repetition of the Markov theory we use and then develops the proofs of our results.
§ PRELIMINARIES
Throughout we assume that all random variables and processes are defined on a given filtered probability space
(Ω,ℱ,ℙ,(ℱ_t)_t∈𝒯) with 𝒯=ℕ in the discrete time case and 𝒯=ℝ^+ in the
continuous one. Moreover, in the continuous time setting we assume the usual conditions (complete, right continuous filtration) to be satisfied.
For Markov processes in discrete
and continuous time we refer to <cit.> and <cit.>, respectively.
A summary of the most relevant notions and results from Markov processes is given in Section <ref> for the convenience of the reader.
§.§ Notation
The set of real m × n matrices is denoted by M_m,n(ℝ) or only by M_n(ℝ) if m=n. For the invertible n × n matrices
we write GL_n(ℝ). The linear subspace of symmetric matrices is denoted by 𝕊_n, the closed cone of positive semi-definite
matrices by 𝕊_n^+ and the open cone of positive definite matrices by 𝕊_n^++. Further we denote by I_n the n × n identity matrix.
We introduce the natural ordering on 𝕊_n and denote it by ≼, that is for A,B ∈ 𝕊_n it holds A≼ B ⇔ B-A ∈ 𝕊_n^+.
The tensor (Kronecker) product of two matrices A,B is written as A⊗ B. vec denotes the well-known vectorization operator that maps
the n×n matrices to ℝ^{n^2} by stacking the columns of the matrices below another. The spectrum of a matrix is denoted by σ(·) and the spectral radius by ρ(·). For a matrix with only real eigenvalues λ_max(·) and λ_min(·) denote the largest and the smallest eigenvalue. ℜ(x) is the real part of a complex number x. Finally, A^⊤ is the transpose of a matrix A∈M_m,n(ℝ).
By ‖·‖_2 we denote both the Euclidean norm for vectors and the
corresponding operator norm for matrices and by ‖·‖_F the
Frobenius norm for matrices.
Furthermore, we employ an intuitive notation with respect to the (stochastic) integration with matrix-valued integrators, referring to any of
the standard texts (e.g. <cit.>) for a comprehensive treatment of the theory of stochastic integration. For an M_m,n(ℝ)-valued Lévy process L, and M_d,m(ℝ) resp. M_n,p(ℝ)-valued processes X,Y integrable with respect to L, the term ∫_0^t X_s dL_s Y_s is to be understood as the d×p (random) matrix with (i,j)-th entry ∑_k=1^m ∑_l=1^n ∫_0^t X_s^ik dL_s^kl Y_s^lj.
If (X_t)_t∈ℝ^+ is a semi-martingale in ℝ^m and (Y_t)_t∈ℝ^+ one in ℝ^n then the quadratic variation ([X,Y]_t)_t∈ℝ^+ is
defined as the finite variation process in M_m,n(ℝ) with components [X,Y]_ij,t=[X_i,Y_j]_t for t∈ℝ^+ and i=1,…,m, j=1,…,n.
§.§ Lévy processes
Later on we use Lévy processes (see e.g. <cit.>) in ℝ^d and in the symmetric matrices 𝕊_d.
We consider a Lévy process L=(L_t)_t∈ℝ^+ (where L_0=0 a.s.)
in ℝ^d determined by its characteristic function in the
Lévy-Khintchine form E[e^i⟨u,L_t⟩]=exp{tψ_L(u)} for t∈ℝ^+ with
ψ_L(u)=i⟨γ_L,u⟩-1/2⟨ u,τ_L u⟩+∫_ℝ^d
(e^i⟨ u,x⟩-1-i⟨ u,x⟩ I_[0,1](‖x‖)) ν_L(dx), u∈ ℝ^d,
where γ_L∈ℝ^d, τ_L∈𝕊_d^+ and the Lévy measure ν_L is a measure on ℝ^d satisfying ν_L({0})=0 and ∫_ℝ^d(‖x‖^2∧1)
ν_L(dx)<∞. If ν_L is a finite measure, L is a compound Poisson process.
Moreover, ⟨·,·⟩ denotes the usual Euclidean scalar product on ℝ^d.
We always assume L to be càdlàg and denote its jump measure by μ_L, i.e. μ_L is the Poisson random measure on ℝ^+×ℝ^d∖{0} given by μ_L(B)=♯{s≥0: (s,L_s-L_s-)∈B}
for any
measurable set B⊂ℝ^+×ℝ^d∖{0}. Likewise, μ̃_L(ds,dx)=μ_L(ds,dx)-ds ν_L(dx) denotes the compensated jump measure.
Regarding matrix-valued Lévy processes, we will only encounter matrix subordinators (see <cit.>), i.e. Lévy processes with paths in 𝕊_d^+. Since matrix subordinators are of finite variation and tr(X^*Y) (with X,Y∈𝕊_d and tr denoting the usual trace functional) defines a
scalar product on 𝕊_d linked to the Euclidean scalar product on ℝ^{d^2} via tr(X^*Y)=vec(X)^*vec(Y)=⟨vec(Y), vec(X)⟩, the characteristic function of a matrix subordinator can be represented as
E(e^i tr(L_t^*Z)) =exp(tψ_L(Z)), Z∈𝕊_d, ψ_L(Z)=i tr(γ_L Z)+∫_𝕊_d^+(e^i tr(XZ)-1)ν_L(dX)
with drift γ_L∈𝕊_d^+ and Lévy measure ν_L.
The discontinuous part of the quadratic variation of any Lévy process L in ℝ^d,
[L,L]^d_t:=∫_0^t∫_ℝ^d xx^*μ_L(ds,dx)=∑_0≤ s≤ t(Δ L_s)(Δ L_s)^*,
is a matrix subordinator with drift zero and Lévy measure
ν_[L,L]^d(B)=∫_ℝ^d I_B(xx^*)ν_L(dx)
for all Borel sets B⊆𝕊_d.
§ MULTIVARIATE COGARCH(1,1) PROCESS
In this section we present the definition of the MUCOGARCH(1,1) process and some relevant properties mainly based on <cit.>.
Let L be an ℝ^d-valued Lévy process, A,B ∈ M_d(ℝ) and C ∈𝕊_d^++.
The MUCOGARCH(1,1) process G=(G_t)_t ≥ 0 is defined as the solution of
dG_t = V_t-^1/2 dL_t,
V_t = Y_t + C,
dY_t = (BY_t- + Y_t-B^⊤) dt + A V_t-^1/2 d [L,L]^d_t V_t-^1/2 A^⊤,
with initial values G_0∈ℝ^d and Y_0∈𝕊_d^+.
The process Y=(Y_t)_t≥ 0 is called MUCOGARCH(1,1) volatility process.
Since we only consider MUCOGARCH(1,1) processes, we
often simply write MUCOGARCH.
Equations (<ref>) and (<ref>) directly give us an SDE for the covariance matrix processV:
dV_t = (B(V_t- -C)+(V_t--C)B^⊤)dt + AV_t-^1/2 d[L,L]^d_t
V_t-^1/2 A^⊤.
Provided σ(B) ⊂ (-∞,0) + iℝ, we see that V, as long as no jumps occur,
returns to the level C at an exponential rate determined by B. Since all
jumps are positive semidefinite, C is not a mean level but a lower bound.
To have the MUCOGARCH process well-defined, we have to know that a unique
solution of the SDE system exists and that the solution Y (and V) does
not leave the set 𝕊_d^+. In the following we always understand that our processes live on 𝕊_d. Since 𝕊_d^++ is an open subset of 𝕊_d, we now are in the most natural setting for SDEs and we get:
Let A,B∈ M_d(ℝ), C∈𝕊_d^++ and L be a d-dimensional Lévy process.
Then the SDE (<ref>) with initial value Y_0∈𝕊_d^+ has a unique positive semi-definite solution (Y_t)_t∈ℝ^+.
The solution (Y_t)_t∈ℝ^+ is locally bounded and of finite variation.
Moreover, it satisfies
Y_t=e^BtY_0e^B^⊤ t+∫_0^te^B(t-s)A(C+Y_s-)^1/2 d[L,L]_s^d (C+Y_s-)^1/2A^⊤ e^B^⊤(t-s)
for all t ∈ℝ^+ and thus Y_t≽ e^BtY_0e^B^⊤ t
for all t∈ℝ^+.
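For a compound Poisson driver, the representation (<ref>) translates directly into an exact simulation scheme: between jumps Y evolves deterministically via Y_t = e^B(t-s) Y_s e^B^⊤(t-s), and at a jump time the rank-one increment A(C+Y_t-)^1/2 ΔL_t ΔL_t^⊤ (C+Y_t-)^1/2 A^⊤ is added. A minimal Python sketch (the rate γ and standard normal jumps are illustrative assumptions, not part of the model definition):

import numpy as np
from scipy.linalg import expm, sqrtm

def simulate_Y(A, B, C, Y0, T, gamma, rng):
    # Exact simulation of the MUCOGARCH(1,1) volatility process Y on [0, T]
    # when L is compound Poisson with rate gamma and N(0, I_d) jumps.
    d = C.shape[0]
    t, Y = 0.0, Y0.copy()
    times, path = [0.0], [Y0.copy()]
    while True:
        dt = rng.exponential(1.0 / gamma)   # exponential waiting time
        if t + dt > T:
            break
        t += dt
        E = expm(B * dt)
        Y = E @ Y @ E.T                     # deterministic decay between jumps
        v = A @ np.real(sqrtm(C + Y)) @ rng.standard_normal(d)
        Y = Y + np.outer(v, v)              # positive semi-definite rank-one jump
        times.append(t)
        path.append(Y.copy())
    return times, path

rng = np.random.default_rng(42)
d = 2
times, path = simulate_Y(0.14 * np.eye(d), -0.01 * np.eye(d), np.eye(d),
                         np.zeros((d, d)), T=100.0, gamma=1.0, rng=rng)

Since no discretization of the continuous dynamics is needed, the scheme is exact in law at the jump times.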
In particular, whenever (<ref>) is started with an initial value Y_0∈𝕊_d^+ (or (<ref>) with V_0≽ C) the solution stays in 𝕊_d^+ (respectively in C+𝕊_d^+) at all times. This can be straightforwardly seen from (<ref>) and the fact that for any M∈ M_d(ℝ) maps of the form X↦ MXM^⊤ map 𝕊_d^+ into itself (see <cit.> for a more detailed discussion).
At first sight, it appears superfluous to introduce the process Y instead of directly working with the process V. However, in the following it is more convenient to work with Y, as then the matrix C does not appear in the drift and the state space of the Markov process analysed is the cone of positive semi-definite matrices itself and not a translation of this cone.
(i) The MUCOGARCH process (G,Y) as well as its volatility process Y alone are
temporally homogeneous strong Markov processes on ℝ^d×𝕊_d^+ and
𝕊_d^+, respectively, and they have the weak C_b-Feller property.
(ii) Y is non-explosive and has the weak C_0-Feller property. Thus it is a Borel right process.
<cit.> is (i).
C_0-Feller property of Y: Let f∈ C_0(𝕊_d^+). We have to show that for all
t≥0, P^t f(x) → 0 for x→∞,
where we understand x →∞ in the sense of ‖x‖_2→∞.
Since P^t f(x)=𝔼(f(Y_t(x))), where Y_t(x) denotes the process started in Y_0=x, it is enough to show that Y_t goes to infinity for x →∞:
‖Y_t‖_2 ≥ ‖e^Bt Y_0 e^B^⊤ t‖_2 ≥ ‖e^-Bt‖_2^-2 ‖x‖_2 →∞ for ‖x‖_2 →∞.
As argued in Section <ref> any C_0-Feller process is a Borel right process and the non-explosivity property is shown in the proof of Theorem 6.3.7 in
<cit.>.
For ℝ^d-valued solutions to Lévy-driven stochastic differential equations
dX_t = σ(X_t-)dL_t, <cit.> gives necessary and sufficient conditions for the rich Feller property, which includes the C_0-Feller property, if σ is continuous and of (sub-)linear growth. But since our direct proof is quite short, we prefer it instead of trying to adapt the result of <cit.> to our state space.
§ GEOMETRIC ERGODICITY OF THE MUCOGARCH VOLATILITY PROCESS Y
In Theorem <cit.> sufficient conditions for the existence of a stationary distribution for the volatility process Y or V (with certain moments finite) are shown, but neither the uniqueness of the stationary distribution nor that it is a limiting distribution are obtained. Our main theorem now gives sufficient conditions for geometric ergodicity and thereby for the existence of a unique stationary distribution to which the transition probabilities converge exponentially fast in total variation (and in stronger norms).
On the proper closed convex cone 𝕊_d^+ the trace tr: 𝕊_d^+→ℝ^+ is well-known to define a norm which is also a linear functional. So do actually all the maps 𝕊_d^+→ℝ^+, X↦ tr(η X) with η∈𝕊_d^++, as 𝕊_d^+ is also generating, self-dual and has interior 𝕊_d^++ (cf. <cit.>, for instance). The latter can also be easily seen using the following Lemma which is a consequence of <cit.>.
Let X∈𝕊_d, Y∈𝕊_d^+. Then
λ_min(X) tr(Y) ≤ tr(XY) ≤ λ_max(X) tr(Y).
To be in line with the geometry of 𝕊_d^+ we thus use the above norms to define appropriate test functions and look at the trace norm for the finiteness of moments (which, of course, is independent of the actually employed norm).
For p=1 and p ≥2 it is shown in <cit.> that the finiteness of 𝔼(‖L_1‖_2^{2p}) and 𝔼(‖Y_0‖_2^p) implies the finiteness of 𝔼(‖Y_t‖_2^p) for all t. We improve this to all p >0.
Let Y be a MUCOGARCH volatility process and p >0. If 𝔼((tr Y_0)^p) < ∞, ∫_‖y‖_2≤ 1 ‖y‖_2^{2∧2p} ν_L(dy) < ∞ and 𝔼(‖L_1‖_2^{2p}) < ∞, then 𝔼((tr(η Y_t))^p)< ∞ for all t ≥ 0, η∈𝕊_d^++ and t ↦𝔼((tr(η Y_t))^p) is locally bounded.
Observe that 𝔼(‖L_1‖_2^{2p}) < ∞ is equivalent to ∫_‖y‖_2≥ 1 ‖y‖_2^{2p} ν_L(dy) < ∞ and that ∫_‖y‖_2≤ 1 ‖y‖_2^{2∧2p} ν_L(dy) < ∞ is always true for p≥1 and otherwise means that the 2p-variation of L has to be finite.
Let Y be a MUCOGARCH volatility process which is μ-irreducible with the support of μ having
non-empty interior and aperiodic.
If one of the following conditions is satisfied
(i) setting p=1 there exists an η∈𝕊_d^++ such that ∫_‖y‖_2≥ 1 ‖y‖_2^2 ν_L(dy)<∞ and
η B+B^⊤η+ A^⊤η A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 ∈ -𝕊_d^++,
(ii) setting p=1 there exists an η∈𝕊_d^++ such that ∫_‖y‖_2≥ 1 ‖y‖_2^2 ν_L(dy)<∞ and
η B+B^⊤η+ λ_max(A^⊤η A)∫_ℝ^d yy^⊤ν_L(dy) ∈ -𝕊_d^++,
(iii) there exist a p∈ (0,1] and an η∈𝕊_d^++ such that ∫_ℝ^d ‖y‖_2^{2p} ν_L(dy) < ∞ and
∫_ℝ^d( ( 1+K_η,A ‖y‖_2^2)^p - 1)ν_L(dy)
+K_η,B p < 0,
where K_η,B:=max_x∈𝕊_d^+, tr(x)=1 tr((η B+B^⊤η)x)/tr(η x) and K_η,A:=max_x∈𝕊_d^+, tr(x)=1 tr(A^⊤η Ax)/tr(η x),
(iv) there exist a p∈ [1,∞) and an η∈𝕊_d^++ such that ∫_‖y‖_2≥ 1 ‖y‖_2^{2p} ν_L(dy)<∞ and
∫_ℝ^d(2^{p-1}( 1+K_η,A ‖y‖_2^2)^p - 1)ν_L(dy)
+K_η,B p < 0,
where K_η,B, K_η,A are as in (iii),
(v) there exist a p∈ [1,∞) and an η∈𝕊_d^++ such that ∫_‖y‖_2≥ 1 ‖y‖_2^{2p} ν_L(dy)<∞ and
max{2^{p-2},1} K_η,A ∫_ℝ^d ‖y‖_2^2 ( 1 + ‖y‖_2^2 K_η,A)^{p-1} ν_L(dy)+K_η,B < 0,
where K_η,B, K_η,A are as in (iii),
then a unique stationary distribution for the MUCOGARCH(1,1)
volatility process Y exists, Y is positive Harris recurrent, geometrically ergodic (even ((tr(η ·))^p+1)-uniformly ergodic) and the
stationary distribution has a finite p-th moment.
(i) For p=1 the cases (i) and (ii) give us quite strong conditions comparable to the ones known for affine processes (cf. <cit.>). For p≠ 1 it seems that the non-linearity of our SDE implies that we need to use inequalities that are somewhat crude.
(ii) Condition (<ref>) demands that the driving Lévy process is compound Poisson for p>1. To overcome this restriction is the main motivation for considering case (v).
(iii) For p=1 the Conditions (<ref>), (<ref>), (<ref>) agree.
In dimension one they also agree with Conditions (<ref>) and (<ref>). Moreover, then the above sufficient conditions agree with the necessary and sufficient conditions of <cit.> for a univariate COGARCH(1,1) process to have a stationary distribution with finite first moments, as follows from <cit.>.
(iv) Observing that
K_η,A ∫_ℝ^d ‖y‖_2^2 ν_L(dy)+K_η,B ≥ max_x∈𝕊_d^+, tr(x)=1 tr((η B+B^⊤η+A^⊤η A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2)x)/tr(η x),
the self-duality of 𝕊_d^+ shows that for p=1 the equivalent Conditions (<ref>), (<ref>), (<ref>) imply that (<ref>) holds.
Conversely the upcoming Examples <ref>, <ref> show that (<ref>) and (<ref>) are less restrictive than the equivalent Conditions (<ref>), (<ref>), (<ref>) and that (<ref>) does not imply (<ref>) and vice versa.
(v) Arguing as in <cit.>, one sees that if (<ref>) is satisfied for some p>0 it is also satisfied for all smaller ones. Using similar elementary arguments, the same can be shown for (<ref>), and for (<ref>) this property is obvious.
Note, however, that ∫_ℝ^d ‖y‖_2^{2p} ν_L(dy) < ∞ for some 0<p≤ 1 does not imply that ∫_ℝ^d ‖y‖_2^{2p̅} ν_L(dy) is finite for 0<p̅≤ p.
(vi) From the exercise on p. 98 of <cit.> (taking e.g. A=[ 1 0; 10 1 ] there) we see immediately that if (<ref>), (<ref>), (<ref>), (<ref>), or (<ref>) is satisfied for one η∈𝕊_d^++ it can be violated for η̅∈𝕊_d^++, η̅≠η. Hence, the possibility to choose η∈𝕊_d^++ freely is important.
(vii) In the Case (iii), ∫_ℝ^d K_η,A^p ‖y‖_2^{2p} ν_L(dy)
+K_η,B p < 0 implies (<ref>), as (x+y)^p-x^p≤ y^p for x,y≥ 0 and p∈(0,1].
The above Conditions (<ref>), (<ref>), (<ref>) appear enigmatic at a first sight. However, essentially they are related to the drift being negative in an appropriate way.
(i) If one of the Conditions (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) is satisfied, then η B+B^⊤η ∈ -𝕊_d^++.
(ii) If one of the Conditions (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) is satisfied, then ℜ(σ(B))<0.
(iii) If ℜ(σ(B))<0, then there exists an η∈𝕊_d^++ such that η B+B^⊤η ∈ -𝕊_d^++.
(i) This is obvious for (<ref>), (<ref>).
In the other cases we get that K_η,B<0 must hold. Therefore max_x∈𝕊_d^+, tr(x)=1 tr((η B+B^⊤η)x)<0. By the self-duality of 𝕊_d^+ this shows η B+B^⊤η ∈ -𝕊_d^++.
(ii) and (iii) now follow from <cit.>.
So in the end what we demand is that the drift is “negative” enough to compensate for a positive effect from the jumps. The Conditions (<ref>), (<ref>) can be related to eigenvalues of linear maps on 𝕊_d.
(i) Condition (<ref>) holds if and only if the linear map 𝕊_d→𝕊_d, X↦ XB+B^⊤ X +A^⊤ X A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 has only eigenvalues with strictly negative real part.
(ii) Condition (<ref>) holds if the linear map 𝕊_d→𝕊_d, X↦ XB+B^⊤ X + tr(A^⊤ X A)∫_ℝ^d yy^⊤ν_L(dy) has only eigenvalues with strictly negative real part.
(i) Follows from <cit.>, as it is easy to see that the map is quasi monotone increasing and as a linear map and its adjoint have the same eigenvalues.
(ii) Follows analogously after noting that 0≤λ_max( A^⊤ X A)≤ tr(A^⊤ X A).
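Part (i) is also convenient numerically: using vec(XB)=(B^⊤⊗ I)vec(X), vec(B^⊤X)=(I⊗ B^⊤)vec(X) and vec(A^⊤XA)=(A^⊤⊗ A^⊤)vec(X), the map becomes a d^2×d^2 matrix whose eigenvalues can be computed directly. A sketch (σ_L stands for the scalar ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2, assumed to be given; the check is over all of M_d(ℝ), which is a sufficient criterion since 𝕊_d is an invariant subspace of the map):

import numpy as np

def condition_i_holds(A, B, sigma_L):
    # The map X -> XB + B^T X + sigma_L * A^T X A should have only
    # eigenvalues with strictly negative real part (Lemma (i)).
    d = B.shape[0]
    I = np.eye(d)
    op = np.kron(B.T, I) + np.kron(I, B.T) + sigma_L * np.kron(A.T, A.T)
    return np.linalg.eigvals(op).real.max() < 0

# For B = beta*I_d, A = alpha*I_d and standard normal jumps at rate gamma
# (so sigma_L = gamma), this reduces to 2*beta + alpha^2*gamma < 0:
alpha, beta, gamma = 0.14, -0.01, 1.0
print(condition_i_holds(alpha * np.eye(2), beta * np.eye(2), gamma))   # True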
[Relation to first order stationarity]
We say that a process is first order stationary, if it has finite first moments at all times, if the first moment converges to a finite limit independent of the initial value as time goes to infinity and if the first moment is constant over time when the process is started at time zero with an initial value whose first moment equals the limiting value.
According to <cit.> sufficient conditions for asymptotic first-order stationarity of the MUCOGARCH volatility Y are:
(a) there exists a constant σ_L ∈ℝ^+ such that ∫_ℝ^d y y^⊤ν_L(dy) = σ_L I_d,
(b) σ(B) ⊂ (-∞, 0) + iℝ,
(c) σ( B ⊗ I + I ⊗ B + σ_L(A ⊗ A) ) ⊂ (-∞, 0) + iℝ.
An inspection of the arguments given there shows that (c) only needs to hold for the linear operator B ⊗ I + I ⊗ B + σ_L(A ⊗ A) restricted to the set vec(𝕊_d). Under (a) ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2=σ_L and devectorizing thus shows that B ⊗ I + I ⊗ B + σ_L(A ⊗ A) is just the linear operator in Lemma <ref> (i).
Hence, under (a) our Condition (<ref>) implies (b) and (c). So our conditions for geometric ergodicity with a finite first moment are certainly not worse than the previously known conditions for just first order stationarity.
The constantsK_η,B, K_η,Aare related to changing norms and may be somewhat tedious to obtain. The following lemma follows immediately from Lemma <ref> and implies that we can replace them by eigenvalues in the Conditions (<ref>), (<ref>), (<ref>).
It holds that:
* K_η,B≤λ_max(η B+B^⊤η)/λ_min(η).
* K_η,A≤λ_max(A^⊤η A)/λ_min(η).
* K_I_d,B=λ_max(B+B^⊤).
* K_I_d,A=λ_max(A^⊤ A).
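In fact, the maxima defining K_η,B and K_η,A can be computed exactly: the ratio tr(Mx)/tr(η x) is invariant under scaling of x, and substituting z = η^1/2 x η^1/2 turns it into tr(η^-1/2 M η^-1/2 z)/tr(z), which by the trace inequality above is maximized over 𝕊_d^+ by λ_max(η^-1/2 M η^-1/2). A small numerical sketch of this observation (ours, not taken from the cited references):

import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def K(M, eta):
    # max over x in S_d^+ of tr(M x) / tr(eta x)
    # = lambda_max(eta^{-1/2} M eta^{-1/2}) for symmetric M.
    r = np.linalg.inv(np.real(sqrtm(eta)))
    return eigvalsh(r @ M @ r).max()

B, A, eta = np.diag([-2.0, -4.0]), np.eye(2), np.eye(2)
print(K(eta @ B + B.T @ eta, eta))   # K_{eta,B} = lambda_max(B + B^T) = -4 here
print(K(A.T @ eta @ A, eta))         # K_{eta,A} = lambda_max(A^T A) = 1 here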
If B is symmetric, we have that ‖A⊗ A‖_2=‖A‖_2^2=λ_max(A^⊤A) and that λ_max(B+B^⊤)=2λ_max(B). So we see that in this case our Condition (<ref>) implies Condition (4.4) of <cit.> for the existence of a stationary distribution for p=k=1.
For a symmetric B and p=k>1 the conditions are very similar, only that an additional factor of 2^{p-1} appears inside the integral in the conditions ensuring geometric ergodicity. So the previously known conditions for the existence of a stationary distribution with a finite p-th moment for p>1 may be somewhat less restrictive than our conditions for geometric ergodicity with a finite p-th moment for p>1. But in contrast to <cit.> we do not need to restrict ourselves to B being diagonalizable and integer moments p.
Let us now consider the case where the driving Lévy process is a compound Poisson process with rate γ>0 and jump distribution P_L. So P_L is a probability measure on ℝ^d and ν_L=γ P_L.
As in many applications where discrete time multivariate GARCH processes are employed the noise is taken to be an iid standard normal distribution, a particular choice would be to take P_L as the standard normal law.
However, we shall only assume that P_L has finite second moments, mean zero and covariance matrix Σ_L. Observe that the upcoming Section <ref> shows that the needed irreducibility and aperiodicity properties hold as soon as P_L has an absolutely continuous component with a strictly positive density around zero (again definitely satisfied when choosing the standard normal distribution).
Then we have that
‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 = γ ‖Σ_L‖_2 = γ λ_max(Σ_L),
∫_ℝ^d ‖y‖_2^2 ν_L(dy) = γ tr(Σ_L).
If we assume that B=β I_d, A=α I_d for some α,β∈ℝ and η=I_d, then (<ref>) and (<ref>) are equivalent to
2β+α^2γλ_max(Σ_L)<0,
whereas
(<ref>), (<ref>), (<ref>) become
2β+α^2γ tr(Σ_L)<0
for p=1. Note that in this particular set-up it is straightforward to see that the choice of η has no effect on (<ref>), (<ref>), (<ref>), (<ref>). The latter is also the case for (<ref>) if Σ_L is additionally assumed to be a multiple of the identity.
As, for example, λ_max(I_d)=1, but tr(I_d)=d, this also illustrates that Conditions (<ref>), (<ref>), (<ref>) are considerably more restrictive than Conditions (<ref>) and (<ref>) unless we are in the univariate case.
For p≠ 1 it is not sufficient to only specify the mean and variance of the jump distribution to check Conditions (<ref>), (<ref>), (<ref>). For a concrete specification of P_L it is, however, straightforward to check them by (numerical) integration.
We consider the same basic set-up as in Example <ref> in dimension d=2.
If we take A=I_2, B=[ -2 0; 0 -4 ], γ=1, Σ_L=[ 3 0; 0 6 ] and η=I_2, then Condition (<ref>) is violated whereas (<ref>) is satisfied.
If we take A=[ √(3) 0; 0 √(6) ] , B=[ -2 0; 0 -4 ], γ=1, Σ_L=I_2 and η=I_2, then Condition (<ref>) is violated whereas (<ref>) is satisfied.
In both cases Conditions (<ref>), (<ref>), (<ref>) are violated for p=1.
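Both assertions are quickly confirmed numerically by testing negative definiteness of the matrices in (<ref>) and (<ref>); a sketch for the first parameter set:

import numpy as np

def neg_def(M):
    # Strict negative definiteness of a symmetric matrix.
    return np.linalg.eigvalsh((M + M.T) / 2).max() < 0

A, B = np.eye(2), np.diag([-2.0, -4.0])
Sigma_L, gamma, eta = np.diag([3.0, 6.0]), 1.0, np.eye(2)
int_yy = gamma * Sigma_L                       # integral of y y^T against nu_L
cond_9 = neg_def(eta @ B + B.T @ eta
                 + np.linalg.norm(int_yy, 2) * A.T @ eta @ A)
cond_10 = neg_def(eta @ B + B.T @ eta
                  + np.linalg.eigvalsh(A.T @ eta @ A).max() * int_yy)
print(cond_9, cond_10)                         # False, True as asserted above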
An inspection of our proofs shows that we can also consider the stochastic differential equation
dY_t = (BY_t- + Y_t-B^⊤) dt + A (C+Y_t-)^1/2 dL_t (C+Y_t-)^1/2 A^⊤
with A,B ∈ M_d(ℝ) and C ∈𝕊_d^++, initial value Y_0∈𝕊_d^+ and L being a d× d matrix subordinator with drift γ_L=0 and Lévy measure ν_L.
All results we obtained in this section remain valid when replacing ∫_ℝ^d by ∫_𝕊_d^+, yy^⊤ by y, ‖y‖_2^2 by ‖y‖_2 and 𝔼(‖L_1‖_2^{2p}) < ∞ by 𝔼(‖L_1‖_2^p) < ∞.
The geometric ergodicity results of this section are of particular relevance for at least the following:
* Model choice:
In many applications stationary models are called for and one should use models which have a unique stationary distribution. Our results are the first giving sufficient criteria for a MUCOGARCH model to have a unique stationary distribution (for the volatility process).
* Statistical inference:
<cit.> investigate in detail a moment based estimation method for the parameters of MUCOGARCH(1,1) processes. This involves establishing identifiability criteria, which is a highly non-trivial issue, and calculating moments explicitly, which requires assumptions on the moments of the driving Lévy process. However, the asymptotics of the estimators derived there hinge centrally on the results of the present paper. Actually, we cannot see any other reasonable way to establish consistency and asymptotic normality of estimators for the MUCOGARCH parameters than to use the geometric ergodicity conditions for the volatility from the present paper and to establish that under stationarity they imply that the increments of the MUCOGARCH process G are ergodic and strongly mixing (see <cit.> for details).
* Simulations:
Geometric ergodicity implies that simulations (of a Markov process) can be started with an arbitrary initial value and that after a (not too long) burn-in period the simulated path behaves essentially like one following the stationary dynamics. On top of that, V-uniform ergodicity (which is what we actually obtain above) provides results on the finiteness of certain moments and the convergence of the moments to the stationary case. We illustrate this now in some exemplary simulations.
We consider the set-up of Example <ref> in dimension two and choose α=0.14, β=-0.01, C=I_2, γ=1 and the jumps to be standard normally distributed.
Then condition (<ref>) (or (<ref>)) is satisfied (but not conditions (<ref>), (<ref>), (<ref>)) and so we have geometric ergodicity with convergence of the (absolute) first moment. However, this is a rather extreme case as (given all the other parameter choices) (<ref>) is satisfied if and only if |α|<√(0.02)≈ 0.1414, and thus some convergences are only to be seen clearly for very long paths. We have simulated a path up to time 20 million with Matlab (see Figure <ref>). The paths themselves look immediately stationary (at this time scale) but extremely heavy tailed. In the long run (though rather slowly) the running empirical means converge reasonably well (to their true values of 50 for the variances and 0 for the covariance). In sharp contrast to this there seems to be no convergence for the empirical second moments at all and actually we strongly conjecture that the stationary covariance matrix is infinite in this case. Looking at lower moments, like the 0.25th (absolute) empirical moments, there is again convergence to be seen which is faster than the one for the first moment (as is to be expected). Zooming into the paths at the beginning in Figure <ref> (left plot) shows that the process clearly does not start off like a stationary one, but the behaviour looks stationary to the eye very soon.
Changing only the parameter α to 0.142 and using the same random numbers we obtain the results depicted in Figure <ref>. The initial part of the paths (see right plot in Figure <ref>) looks almost unchanged (observe that the scaling of the vertical axes in the plots of Figure <ref> differs). The paths in total clearly appear to be even more heavy tailed. Now neither the empirical second moments nor the empirical means seem to converge (and we conjecture that both mean and variance are infinite for the stationary distribution) . Now condition (<ref>) does not hold any more and so the simulations suggest that this condition implying geometric ergodicity with convergence of the first moments is pretty sharp, as expected from the theoretical insight. The simulations also seem to clearly indicate that the 1/4th (absolute) moments still converge. Numerical integration shows for (<ref>) (again the choice of η makes no difference in this special case):
∫_ℝ^2( ( 1+α^2 ‖y‖_2^2)^1/4 - 1)ν_L(dy)
+2β/4 ≈ 0.0098 - 0.005 > 0.
So our sufficient condition (<ref>) for convergence of the 1/4th moment is not satisfied, but due to the inequalities involved we did not expect it to be too sharp anyway. Actually, numerically not even the logarithmic condition <cit.> for the existence of a stationary distribution seems to be satisfied. This illustrates that these conditions involving norm estimates are not too sharp, as our simulations and the fact that condition (<ref>) is almost satisfied strongly indicate that geometric ergodicity with convergence of some moments 0<p<1 should hold.
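Such numerical integrations are straightforward to reproduce, e.g. by plain Monte Carlo over the standard normal jump distribution. A sketch for the present case (η=I_2, so that K_I_2,A=α^2 and K_I_2,B=2β by the lemma above, and γ=1):

import numpy as np

rng = np.random.default_rng(0)
alpha, beta, p = 0.142, -0.01, 0.25
y = rng.standard_normal((10**7, 2))              # jump law N(0, I_2), gamma = 1
sq_norms = (y ** 2).sum(axis=1)                  # squared Euclidean norms of jumps
integral = np.mean((1.0 + alpha**2 * sq_norms) ** p - 1.0)
print(integral + 2 * beta * p)                   # approx 0.0098 - 0.005 > 0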
Finally, we consider an example illustrating that our conditions for p≠ 1 are more precise if one coordinate dominates the others.
We consider the same model as in the previous Example <ref> with the only difference that we now assume the jumps to have a two-dimensional normal distribution with covariance matrix Σ_L=[ 1 0; 0 σ ] for σ>0. For σ→ 0 we conclude from the formulae given in Example <ref> that for p=1 (<ref>), (<ref>), (<ref>) are asymptotically equivalent to (<ref>) and as in Example <ref> the choice of η does not matter (they are also equivalent to (<ref>) for η=I_2, but there the choice of η may now matter). Note that (<ref>) is not affected by changes in σ as long as σ≤ 1.
Choosing σ=1/1000 and α=0.142 as in the second part of Example <ref>, numerical integration shows for (<ref>) with p=1/4:
∫_ℝ^2( ( 1+α^2 ‖y‖_2^2)^1/4 - 1)ν_L(dy)
+2β/4 ≈ 0.0049 - 0.005 < 0.
Figure <ref> shows the results of a simulation using the same time horizon and random numbers as in Example <ref>. The plots of the paths, the empirical means and the second moments show that the first variance component dominates the others by far. One immediately sees that the empirical mean and second moment of V_11 do not converge. In line with our theory all 0.25th absolute moments seem to converge pretty fast.
§ IRREDUCIBILITY AND APERIODICITY OF THE MUCOGARCH VOLATILITY PROCESS
So far we have assumed away the issue of irreducibility and aperiodicity focusing on establishing a Foster-Lyapunov drift condition to obtain geometric ergodicity. As this is intrinsically related to the question of the existence of transition densities and support theorems, this is a very hard problem of its own. In our case the degeneracy of the noise, i.e. that all jumps of the Lévy process are rank one and that thus the jump distribution has no absolutely continuous component, makes our case particularly challenging.
We now establish sufficient conditions for the irreducibility and aperiodicity of Y by combining at least d jumps to get an absolutely continuous component of the increments.
Let Y be a MUCOGARCH volatility process driven by a compound
Poisson process L and with A∈ GL_d(ℝ), ℜ(σ(B))<0. If the jump distribution of L has a non-trivial absolutely
continuous component equivalent to the Lebesgue measure on ℝ^d,
then Y is irreducible with respect to the Lebesgue measure on
𝕊_d^+ and aperiodic.
We can soften the conditions on the jump distribution of the compound Poisson process:
Let Y be a MUCOGARCH volatility process driven by a compound Poisson process L and with A∈ GL_d(ℝ), ℜ(σ(B))<0. If the jump distribution of L has a non-trivial absolutely continuous component equivalent to the Lebesgue measure on ℝ^d restricted to an open neighborhood of zero, then Y is irreducible w.r.t. the Lebesgue measure restricted to an open neighborhood of zero in 𝕊_d^+ and aperiodic.
(i) If the driving Lévy process is an infinite activity Lévy process whose Lévy measure has a non-trivial absolutely continuous component with density strictly positive in a neighborhood of zero, we can show that Y is open-set irreducible w.r.t. the Lebesgue measure restricted to an open neighborhood of zero in 𝕊_d^+. For strong Feller processes open-set irreducibility provides irreducibility, but the strong Feller property is to the best of our knowledge hard to establish for Lévy-driven SDEs. A classical way to show irreducibility is by using density or support theorems based on Malliavin calculus, see e.g. <cit.>. But they all require that the coefficients of the SDE have bounded derivatives, which is not the case for the MUCOGARCH volatility process. So finding criteria for irreducibility in the infinite activity case appears to be a very challenging question beyond the scope of the present paper.
(ii) In the univariate case establishing irreducibility and aperiodicity is much easier, as one can easily use the explicit representation of the COGARCH(1,1) volatility process to establish that one has weak convergence to a unique stationary distribution which is self-decomposable and thus has a strictly positive density. This way is blocked in the multivariate case as we do not have the explicit representation and the linear recurrence equation structure.
Note that the condition that the Lévy measure has an absolutely continuous component with a support containing zero is the obvious analogue to the condition on the noise in <cit.>.
Let us return to the SDE (<ref>) driven by a general matrix subordinator which we introduced in Remark <ref>. It is straightforward to see that then the conclusions of Theorem <ref> or Corollary <ref>, respectively, are valid for the process Y, if the matrix subordinator L is a compound Poisson process with the jump distribution of L having a non-trivial absolutely
continuous component equivalent to the Lebesgue measure on 𝕊_d^+ (restricted to an open neighborhood of zero).
§ PROOFS
To prove our results we use the stability concepts for Markov processes of <cit.>.
§.§ Markov processes and ergodicity
For the convenience of the reader, we first give a short introduction to the definitions and results for general continuous time Markov processes following mainly
<cit.>.
Let the state space X be an open or closed subset of a finite-dimensional real vector space equipped with the usual topology and the Borel σ-algebra ℬ(X). We consider a continuous time Markov process Φ=(Φ_t)_t≥0 on X with transition probabilities P^t(x,A)=ℙ_x(Φ_t ∈ A) for x∈ X, A ∈ℬ(X).
The operator P^t from the associated transition semigroup acts on a bounded
measurable function f as
P^t f(x) = ∫_X P^t(x,dy)f(y)
and on a σ-finite measure μ on X as
μ P^t(A) = ∫_X μ(dy) P^t(y,A).
To define non-explosivity, we consider a fixed family { O_n : n ∈ℕ} of open precompact sets, i.e. the closure of O_n is a compact subset of X,
for which O_n ↗ X as n →∞. With T^m we denote the first
entrance time to O_m^c and by ξ the exit time of the process, defined as
ξ := lim_m →∞ T^m.
We call the process Φ non-explosive if ℙ_x(ξ = ∞)=1
for all x∈ X.
By C_b(X) we denote the set of all continuous and bounded functions f: X →ℝ and
by C_0(X) those continuous and bounded functions which vanish at infinity.
Let (P^t)_t∈ℝ^+ be the transition semigroup of a time homogeneous
Markov process Φ.
(i) (P^t)_t∈ℝ^+ or Φ is called
stochastically continuous if
lim_t→ 0, t≥ 0 P^t(x, 𝒩(x)) = 1
for all x ∈ X and open neighborhoods 𝒩(x) of x.
(ii) (P^t)_t∈ℝ^+ or Φ is a (weak)
C_b-Feller semigroup or process if it is stochastically continuous
and
P^t (C_b(X)) ⊆ C_b(X) for all t≥ 0.
(iii) If in (ii) we have instead of (<ref>)
P^t (C_0(X)) ⊆ C_0(X) for all t≥ 0,
we call the semigroup or the process (weak) C_0-Feller.
Combining the definition of strongly continuous contraction
semigroups, Theorem 4.1.1 and the definition of Feller processes in <cit.> shows that a C_0-Feller process is a Borel right process (cf. <cit.> for a definition).
From now on we assume that Φ is a non-explosive Borel right process. For the definitions and details of
the existence and structure see <cit.>.
A σ-finite measure π on ℬ(X) with the property
π = π P^t, ∀ t ≥ 0,
is called invariant.
Notation: By π we always denote an invariant measure of Φ, if it exists.
Φ is called exponentially ergodic,
if an invariant measure π exists and satisfies for all x ∈ X
‖P^t(x,·) - π‖_TV ≤ M(x) ρ^t, ∀ t≥0,
for some finite M(x), some ρ<1 and where ‖μ‖_TV:=sup_|g|≤ 1, g measurable | ∫μ(dy)g(y)| denotes the total variation norm.
If this convergence holds for the f-norm ‖μ‖_f:=sup_|g|≤ f, g measurable | ∫μ(dy)g(y)| (for any signed measure μ),
where f is a measurable function from the state space X to [1,∞), we
call the process f-exponentially ergodic.
A seemingly stronger formulation of V-exponential ergodicity is V-uniform ergodicity: we require that M(x) = V(x) · D with some finite constant D.
Φ is called V-uniformly ergodic, if a measurable
function V: X → [1,∞) exists such that for all x ∈ X
‖P^t(x,·) - π‖_V ≤ V(x) D ρ^t, t≥0,
holds for some D<∞, ρ <1.
To prove ergodicity we need the notions of irreducibility and aperiodicity.
For any σ-finite measure μ on ℬ(X) we call the process
Φ μ-irreducible if for any B ∈ℬ(X)
with μ(B)>0
𝔼_x (η_B) > 0, ∀ x ∈ X,
holds, where η_B := ∫_0^∞ 1_{Φ_t ∈ B} dt is the occupation time.
This is obviously the same as requiring
∫_0^∞ P^t(x,B) dt >0, ∀ x ∈ X.
If Φ is μ-irreducible, there exists a maximal irreducibility measure ψ such that every other irreducibility measure ν is absolutely continuous with respect to ψ (see <cit.>).
We write ℬ^+(X) for the collection of all measurable subsets A∈ℬ(X) with ψ(A)>0.
In <cit.> it was shown that if the discrete time h-skeleton of a process, the P^h-chain,
is ψ-irreducible for some h>0,
then so is the continuous time process. If the P^h-chain is ψ-irreducible for every h>0,
we call the process simultaneously ψ-irreducible.
One probabilistic form of stability is the concept of Harris recurrence.
(i) Φ is called Harris recurrent, if either
* ℙ_x(η_A = ∞)=1 whenever ϕ(A)>0 for some σ-finite measure ϕ, or
* ℙ_x(τ_A < ∞)=1 whenever μ(A)>0 for some σ-finite measure μ, where τ_A:= inf{t≥0 : Φ_t ∈ A } is the first hitting time of A.
(ii) Suppose that Φ is Harris recurrent with finite invariant measure π, then Φ is called positive Harris recurrent.
To define the class of subsets of X called petite sets, we suppose that a is a probability distribution
on ℝ^+. We define the Markov transition function K_a for the process sampled by a as
K_a(x,A):= ∫_0^∞ P^t(x,A) a(dt), ∀ x∈ X, A∈ℬ(X).
A nonempty set C∈ℬ(X) is called ν_a-petite, if ν_a is a nontrivial measure on ℬ(X)
and a is a sampling distribution on (0,∞) satisfying
K_a(x,·) ≥ν_a(·), ∀ x∈ C.
When the sampling distribution a is degenerate, i.e. a single point mass, we call the set C small.
As in the discrete time Markov chain theory, the set C is small if there exist an m>0 and a nontrivial measure
ν_m on ℬ(X) such that for all x∈ C, B∈ℬ(X),
P^m(x,B) ≥ν_m(B)
holds.
For discrete time chains there exists a well known concept of periodicity, see for example <cit.>.
For continuous time processes this definition is not adaptable, since there are no fixed time steps.
But a similar concept is the definition of aperiodicity for continuous time Markov processes as introduced in <cit.>.
A ψ-irreducible Markov process is called aperiodic if for some small set C ∈ℬ^+(X) there exists a T
such that
𝒫^t(x,C) > 0 for all t≥ T and all x∈ C.
When Φ is simultaneously ψ-irreducible then we know from <cit.> that
every skeleton chain is aperiodic in the sense of a discrete time Markov chain.
For discrete time Markov processes there exist conditions such that every compact set is petite and every petite set is small:
(i) If Φ, a discrete time Markov process, is a Ψ-irreducible Feller chain with supp(Ψ) having non-empty interior, every compact set is petite.
(ii) If Φ is irreducible and aperiodic, then every petite set is small.
Proposition <ref> (i) is also true for continuous time
Markov processes, see <cit.>.
To introduce the Foster-Lyapunov criterion for ergodicity we need the concept of the extended generator of a Markov process.
𝒟(𝒜) denotes the set of all functions f: X × ℝ_+ → ℝ
for which a function g: X × ℝ_+ → ℝ exists, such that ∀ x
∈ X, t>0
𝔼_x(f(Φ_t,t)) = f(x,0) + 𝔼_x( ∫_0^t g(Φ_s,s) ds),
𝔼_x( ∫_0^t |g(Φ_s,s)| ds) < ∞
holds. We write 𝒜f := g and call 𝒜 the extended generator of Φ.
𝒟(𝒜) is called the domain of 𝒜.
The next theorem from <cit.> gives for an irreducible and aperiodic
Markov process a sufficient criterion to be V-uniformly ergodic. This is a modification of the Foster-Lyapunov drift
criterion of <cit.>.
Let (Φ_t)_t≥ 0 be a μ-irreducible and aperiodic Markov process. If
there exist
constants b,c > 0 and a petite set C in ℬ(X) as well as a measurable function V:
X → [1,∞) such that
𝒜V ≤ -bV + c 1_C,
where 𝒜 is the extended generator, then (Φ_t)_t≥ 0 is
V-uniformly ergodic.
§.§ Proofs of Section <ref>
We prove the geometric ergodicity of the MUCOGARCH volatility process by using
Theorem <ref>. The main task is to show the validity of a Foster-Lyapunov drift condition and that the function used belongs to the domain of the extended generator. For the latter we need the existence of moments given in Lemma <ref>. The different steps require similar inequalities, with the most precise estimates needed for the Foster-Lyapunov drift condition. We therefore first prove the Foster-Lyapunov drift condition, assuming for the moment that the test function lies in the domain of the extended generator, and establish this membership and the finiteness of moments afterwards, as the inequalities needed there are often obvious from the proof of the drift condition.
§.§.§ Proof of Theorem <ref>
The first and main part of the proof is to show the geometric ergodicity, as the positive Harris recurrence is essentially a consequence of it.
To prove the geometric ergodicity, and hence the existence and uniqueness of a stationary distribution, of the MUCOGARCH volatility process, it is enough to show that the Foster-Lyapunov drift condition of Theorem <ref> holds. All other conditions of Theorem <ref> follow from Theorem <ref> or the assumptions.
The extended generator
Using e.g. the results on SDEs, the stochastic symbol and the infinitesimal generator of <cit.>, <cit.> and <cit.>, and that tr(XY) is the canonical scalar product on 𝕊_d, we see that for a sufficiently regular function u: 𝕊_d^+ → ℝ in the domain of the (extended/infinitesimal) generator 𝒜 we have
𝒜u(x) = tr((Bx+xB^⊤) ∇ u(x)) + ∫_ℝ^d ( u(x + A(C+x)^1/2 yy^⊤(C+x)^1/2A^⊤) - u(x) ) ν_L(dy)
=: 𝒟u(x) + 𝒥u(x).
We abbreviate the first summand, the drift part, by 𝒟u(x) and the second, the jump part, by 𝒥u(x).
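A rough numerical illustration of this drift/jump decomposition: the sketch below evaluates 𝒜u(x) = 𝒟u(x) + 𝒥u(x) by Monte Carlo for the test function u(x) = tr(η x)^p + 1 used below, assuming a compound Poisson driver with standard normal jumps; all matrices and the rate are arbitrary illustrative choices, not values from the model.

```python
# Sketch: Monte Carlo evaluation of A u(x) = D u(x) + J u(x) for
# u(x) = tr(eta x)^p + 1, assuming a compound Poisson driver with rate
# `lam` and standard normal jumps.  All parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, p, lam = 2, 1.0, 1.0
B = np.array([[-1.0, 0.2], [0.0, -1.5]])      # sigma(B) in the left half plane
A = 0.3 * np.eye(d)
C = np.eye(d)
eta = np.eye(d)

def sqrtm_psd(M):                              # symmetric psd square root
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def u(x):
    return np.trace(eta @ x) ** p + 1.0

def drift_part(x):                             # D u(x) = tr((Bx + xB^T) grad u(x))
    return p * np.trace(eta @ x) ** (p - 1) * np.trace((B @ x + x @ B.T) @ eta)

def jump_part(x, n=20_000):                    # J u(x), Monte Carlo over nu_L
    S = sqrtm_psd(C + x)
    Z = rng.standard_normal((n, d)) @ (A @ S).T    # rows: A (C+x)^{1/2} y
    return lam * np.mean([u(x + np.outer(z, z)) - u(x) for z in Z])

x = np.eye(d)
# For p = 1 the exact value is tr((eta B + B^T eta) x) + lam tr(A^T eta A (C+x)).
print("Monte Carlo A u(x):", drift_part(x) + jump_part(x))
print("exact for p = 1   :", np.trace((eta @ B + B.T @ eta) @ x)
      + lam * np.trace(A.T @ eta @ A @ (C + x)))
```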
Foster-Lyapunov drift inequality
As test function we choose u(x) = tr(η x)^p + 1, thus u(x) ≥ 1. Note that the gradient of u is given by ∇u(x) = p tr(η x)^p-1 η.
For p ∈ (0,1) the gradient of u has a singularity at 0, but in the end we look at tr((Bx+xB^⊤) ∇u(x)), which turns out to be continuous at 0.
Now we need to look at the cases (i) - (v) separately, as the proofs work along similar lines, but differ in important details.
Case (i)
We have
𝒥 u(x) = ∫_ℝ^d ( tr(η(x + A(C+x)^1/2 yy^⊤(C+x)^1/2A^⊤)) - tr(η x) ) ν_L(dy)
= ∫_ℝ^d tr( η^1/2A(C+x)^1/2 yy^⊤(C+x)^1/2A^⊤η^1/2 ) ν_L(dy)
= tr( η^1/2A(C+x)^1/2 ∫_ℝ^d yy^⊤ν_L(dy) (C+x)^1/2A^⊤η^1/2 )
≤ tr( η^1/2A(C+x)A^⊤η^1/2 ) ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2.
In the last step we used X ≼ ‖X‖_2 I_d = λ_max(X) I_d for X∈𝕊_d^+, that tr is monotone in the natural order on 𝕊_d^+, and that maps of the form 𝕊_d→𝕊_d, X↦ZXZ^⊤ with Z∈M_d(ℝ) are order preserving.
Hence
𝒥 u(x) ≤ tr( A^⊤η A(C+x) ) ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2.
For the drift part we get
𝒟 u(x) = tr((Bx+xB^⊤) η) = tr((η B+B^⊤η)x).
Together we have
𝒜 u(x) ≤ tr(( η B+B^⊤η + A^⊤η A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 ) x ) + tr( A^⊤η AC ) ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2.
Since η B+B^⊤η + A^⊤η A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 ∈ -𝕊_d^++, there exists a c>0 such that
tr(( η B+B^⊤η + A^⊤η A ‖∫_ℝ^d yy^⊤ν_L(dy)‖_2 ) x ) ≤ -c tr(η x).
Summarizing, there exist c,d>0 such that
𝒜u(x) ≤ -c tr(η x) + d.
For tr(x) > k and k big enough there exists 0 < c_1 < c such that
𝒜u(x) ≤ -c_1 ( tr(η x) + 1 ).
For tr(x) ≤ k we have
-c_1 tr(η x) + d = -c_1 ( tr(η x) + 1 ) + e,
with e := c_1+d > 0.
Altogether we have
𝒜u(x) ≤ -c_1 u(x) + e 1_{D_k},
where D_k := { x: tr(x) ≤ k } is a compact set. By Proposition <ref> (i) this is also a petite set. Therefore the Foster-Lyapunov drift condition is proved.
Case (ii)
Compared to Case (i) we just change the inequality for the jump part. Using Lemma <ref> we have
𝒥 u(x) = ∫_ℝ^d tr( A^⊤η A (C+x)^1/2 yy^⊤(C+x)^1/2 ) ν_L(dy) ≤ λ_max(A^⊤η A) tr( (C+x) ∫_ℝ^d yy^⊤ν_L(dy) )
and hence
𝒜 u(x) ≤ tr(( η B+B^⊤η + λ_max(A^⊤η A) ∫_ℝ^d yy^⊤ν_L(dy) ) x ) + λ_max(A^⊤η A) tr( C ∫_ℝ^d yy^⊤ν_L(dy) ).
Now we can proceed as before.
Cases (iii) and (iv)
We have for the drift part
𝒟 u(x) = p tr((η B+B^⊤η)x) tr(η x)^p-1 ≤ K_η,B p tr(η x)^p.
For the jump part we get, using that ‖yy^⊤‖_2 = ‖y‖_2^2 for y∈ℝ^d,
𝒥 u(x) = ∫_ℝ^d ( tr(η(x + A(C+x)^1/2 yy^⊤(C+x)^1/2A^⊤))^p - tr(η x)^p ) ν_L(dy)
≤ ∫_ℝ^d (( tr(η x) + ‖y‖^2_2 tr( η A(C+x)A^⊤) )^p - tr(η x)^p ) ν_L(dy)
≤ ∫_ℝ^d (( tr(η x) + ‖y‖^2_2 tr(η ACA^⊤) + ‖y‖^2_2 K_η,A tr(η x) )^p - tr(η x)^p ) ν_L(dy).
Using the elementary inequality (x+y)^p ≤ max{2^p-1,1} (x^p + y^p) for all x,y≥0 we obtain
𝒥 u(x)
≤ tr(η x)^p ∫_ℝ^d ( max{2^p-1,1} ( 1+K_η,A ‖y‖^2_2 )^p - 1 ) ν_L(dy)
+ max{2^p-1,1} tr(η ACA^⊤)^p ∫_ℝ^d ‖y‖^2p_2 ν_L(dy).
Putting everything together, using (<ref>) or (<ref>) respectively, we have that there exist c,d>0 such that
𝒜u(x) ≤ -c tr(η x)^p + d.
and we can proceed as in Case (i).
Case (v)
We again only argue differently in the jump part compared to Cases (iii) and (iv), and may assume p>1 due to Remark <ref> (iii). For the jump part we again have
𝒥 u(x)
≤ ∫_ℝ^d (( tr(η x) + ‖y‖^2_2 tr(η ACA^⊤) + ‖y‖^2_2 K_η,A tr(η x) )^p - tr(η x)^p ) ν_L(dy).
Next we use the following immediate consequence of the mean value theorem and the already used elementary inequality forp-th powers of sums.
For x,y ≥ 0, p ≥ 1 it holds that
0≤ (x+y)^p - x^p ≤ p y (x+y)^p-1.
This gives (using Landau O(·) notation)
𝒥 u(x)
≤ ∫_ℝ^d (( tr(η x) + ‖y‖^2_2 tr(η ACA^⊤) + ‖y‖^2_2 K_η,A tr(η x) )^p - tr(η x)^p ) ν_L(dy)
≤ p ( tr(η ACA^⊤) + K_η,A tr(η x) )
× ∫_ℝ^d ‖y‖^2_2 ( tr(η x) + ‖y‖^2_2 tr(η ACA^⊤) + ‖y‖^2_2 K_η,A tr(η x) )^p-1 ν_L(dy)
= p K_η,A tr(η x) ∫_ℝ^d ‖y‖^2_2 ( tr(η x) + ‖y‖^2_2 tr(η ACA^⊤) + ‖y‖^2_2 K_η,A tr(η x) )^p-1 ν_L(dy)
+ O(tr(η x)^p-1)
≤ p max{2^p-2,1} K_η,A tr(η x)^p ∫_ℝ^d ‖y‖^2_2 (1 + ‖y‖^2_2 K_η,A)^p-1 ν_L(dy) + O(tr(η x)^max{p-1,1}).
Combining this with the estimate on the drift part already given for Cases (iii) and (iv), and using (<ref>), we have that there exist c,d>0 such that
𝒜u(x) ≤ -c tr(η x)^p + d
and we can proceed as in Case (i).
Test function belongs to the domain
We now show that the chosen test function u(x) = tr(η x)^p + 1 belongs to the domain of the extended generator and that 𝒜u indeed has the claimed form. So we have to show that for all initial values Y_0=x and all t ≥ 0 it holds that
𝔼_x(u(Y_t)) = u(x) + 𝔼_x( ∫_0^t 𝒜u(Y_s) ds)
and
𝔼_x( ∫_0^t |𝒜u(Y_s)| ds) < ∞.
To show (<ref>) we show that M_t := u(Y_t) - u(x) - ∫_0^t 𝒜u(Y_s-) ds is a martingale. For this we apply Itô's formula for processes of finite variation to u(x) = tr(η x)^p + 1. For a moment we ignore the discontinuity of ∇u at 0 for p ∈ (0,1) by considering u on 𝕊_d^+ ∖ {0}.
u(Y_t) - u(Y_0) = ∫_0+^t tr(∇ u(Y_s-) dY^c_s) + ∑_0 < s ≤ t ( u(Y_s) - u(Y_s-) )
= ∫_0+^t p tr(η Y_s-)^p-1 tr((B Y_s- + Y_s- B^⊤) η) ds
+ ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds).
Above we implicitly assumed that ∑_0 < s ≤ t ( u(Y_s) - u(Y_s-) ) exists. Noting that tr(η Y_s) ≥ tr(η Y_s-), we easily see from the inequalities obtained for the jump part in the proof of the Foster-Lyapunov drift condition that this is the case whenever L is of finite 2p-variation, which is ensured by the assumptions of the theorem.
Similarly we see that
∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) ν_L(dy) ds
is finite.
Then we get
M_t = u(Y_t) - u(x) - ∫_0^t 𝒜u(Y_s-) ds
= ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) (μ_L(dy,ds) - ν_L(dy) ds).
By the compensation formula (see <cit.> for the version for conditional expectations), (M_t)_t≥0 is a martingale if
𝔼[ ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) ν_L(dy) ds ] < ∞,
as the integrand is non-negative.
Just like in the deduction of the Foster-Lyapunov drift condition, we get ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) ν_L(dy) ≤ c ‖Y_s-‖^p + d for some constants c, d > 0. By Lemma <ref> it holds that 𝔼(‖Y_t‖^p) < ∞ for all t ≥ 0 and t ↦ 𝔼(‖Y_t‖^p) is locally bounded. Thus (<ref>) follows by Fubini's theorem.
It remains to prove the validity of Itô's formula on the whole state space 𝕊_d^+ and for p∈(0,1). For this note that if Y_0 ≠ 0 it follows that Y_t ≠ 0 for all t > 0. Thus we only need to consider the case Y_0 = 0. If the jumps are of compound Poisson type we define τ := inf{ t ≥ 0: Y_t ≠ 0 }, which is the first jump time of the driving Lévy process. For t < τ we have Y_t = 0, and at t = τ Itô's formula is obviously fulfilled, because Lemma <ref> implies that C tr(η Y_s-) ≥ |tr((B Y_s- + Y_s- B^⊤) η)| ≥ c tr(η Y_s-) for some c, C > 0. For t > τ we can reduce it to the case 𝕊_d^+ ∖ {0}. Thus Itô's formula stays valid if the driving Lévy process is a compound Poisson process.
Now we assume that the driving Lévy process has infinite activity. In <cit.> it has been shown that we can approximate Y_t by approximating the driving Lévy process. We use the same notation as in <cit.>. We fix ω∈Ω and some T > 0. The proof of <cit.> shows that then Y_n,t for all n∈ℕ and Y_t are uniformly bounded on [0,T]. It follows that x ↦ tr(η x)^p is uniformly continuous on the (bounded) set of attained values. We need to show that
u(Y_n,t) - u(Y_0)
= ∫_0+^t p tr(η Y_n,s-)^p-1 tr((B Y_n,s- + Y_n,s- B^⊤) η) ds
+ ∫_0+^t ∫_ℝ^d ( u(Y_n,s- + A(C+Y_n,s-)^1/2 yy^⊤(C+Y_n,s-)^1/2A^⊤) - u(Y_n,s-) ) μ_L_n(dy,ds)
converges uniformly on [0,T]. Since ω∈Ω and T > 0 were arbitrary, once established this shows that Itô's formula holds almost surely uniformly on compacts. For the first summand, uniform convergence follows directly from the uniform convergence of Y_n,t and the uniform continuity of the functions applied to it.
For the second summand we observe that the jumps of L are necessarily bounded on [0,T] and that ∫_0+^t ∫_ℝ^d ‖y‖_2^2p μ_L_n(dy,ds) ≤ ∫_0+^t ∫_ℝ^d ‖y‖_2^2p μ_L(dy,ds). We have that
| ∫_0+^t ∫_ℝ^d ( u(Y_n,s- + A(C+Y_n,s-)^1/2 yy^⊤(C+Y_n,s-)^1/2A^⊤) - u(Y_n,s-) ) μ_L_n(dy,ds)
- ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds) |
≤ ∫_0+^T ∫_ℝ^d | ( u(Y_n,s- + A(C+Y_n,s-)^1/2 yy^⊤(C+Y_n,s-)^1/2A^⊤) - u(Y_n,s-) )
- ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) | μ_L(dy,ds)
+ ∫_0+^T ∫_ℝ^d | u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) | (μ_L(dy,ds) - μ_L_n(dy,ds)).
Now the first integral converges to zero again by uniform continuity arguments for the integrand, and the second one due to uniform boundedness of the integrand and the uniform convergence of L_n to L.
It remains to show (<ref>).
To show this we first deduce a bound for 𝒜u(x). Using the triangle inequality we split 𝒜u(x) again into the drift part and the jump part. For the absolute value of the jump part we can use the upper bounds from the proof of the Foster-Lyapunov drift condition, since the jumps are non-negative. The absolute value of the drift part is bounded as follows:
|𝒟u(x)| ≤ p C tr(η x)^p.
Adding both parts together we get
|𝒜u(x)| ≤ c_2 u(x)
for some constant c_2 > 0.
With that, (<ref>) follows by Lemma <ref>.
Harris recurrence and finiteness of moments
To show the positive Harris recurrence of the volatility process Y and the finiteness of the p-moments of the stationary distribution we use the skeleton chains. In <cit.> it is shown that the Foster-Lyapunov condition for the extended generator, as established above, implies a Foster-Lyapunov drift condition for the skeleton chains. Further observe that petite sets are small, since by the assumption of irreducibility we can use the same arguments as in the upcoming proof of Theorem <ref>. With that we can apply <cit.> and get the positive Harris recurrence for every skeleton chain and the finiteness of the p-moments of the stationary distribution. By definition, the positive Harris recurrence of every skeleton chain implies it also for the volatility process Y.
§.§ Proof of Lemma <ref>
Note that 𝔼(tr(Y_0)^p) < ∞ implies 𝔼(tr(η Y_0)^p) < ∞ for all η∈𝕊_d^++.
We apply Itô's formula to u(Y_t) = tr(η Y_t)^p. The validity of Itô's formula was shown above in the section “Test function belongs to the domain”. We fix some T > 0 and let t ∈ [0,T]. As in the proof before we get with Itô's formula
u(Y_t) - u(Y_0)
= ∫_0+^t p tr(η Y_s-)^p-1 tr((B Y_s- + Y_s- B^⊤) η) ds
+ ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds)
≤ ∫_0+^t c_1 tr(η Y_s-)^p ds
+ ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds)
for some c_1 > 0.
So we have
𝔼(tr(η Y_t)^p)
≤ 𝔼(tr(η Y_0)^p) + ∫_0^t c_1 𝔼(tr(η Y_s)^p) ds
+ 𝔼( ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds) ).
Using the compensation formula and the bounds of the proof of the Foster-Lyapunov drift condition we get that there exist c_2,d>0 such that
𝔼( ∫_0+^t ∫_ℝ^d ( u(Y_s- + A(C+Y_s-)^1/2 yy^⊤(C+Y_s-)^1/2A^⊤) - u(Y_s-) ) μ_L(dy,ds) )
≤ ∫_0^t ( c_2 𝔼(tr(η Y_s)^p) + d ) ds.
Combined we have
𝔼(tr(η Y_t)^p) ≤ 𝔼(tr(η Y_0)^p) + ∫_0^t (c_1+c_2) 𝔼(tr(η Y_s)^p) ds + dT.
Applying Gronwall's inequality shows that 𝔼(tr(η Y_t)^p) ≤ ( 𝔼(tr(η Y_0)^p) + dT ) e^(c_1+c_2)t. Since T > 0 was arbitrary, 𝔼(‖Y_t‖^p) is finite for all t ≥ 0 and t ↦ 𝔼(‖Y_t‖^p) is locally bounded.
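The Gronwall step admits a quick numerical sanity check: the sketch below saturates the integral inequality on a grid, with a and b standing in for 𝔼(tr(η Y_0)^p) + dT and c_1 + c_2 (both arbitrary here), and confirms the exponential bound.

```python
# Sketch: discrete check of Gronwall's inequality
#   phi(t) <= a + b * int_0^t phi(s) ds   implies   phi(t) <= a * exp(b t).
# a and b are arbitrary stand-ins for E(tr(eta Y_0)^p) + dT and c_1 + c_2.
import numpy as np

a, b, T, n = 1.0, 0.7, 5.0, 100_000
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]

phi = np.empty(n)
phi[0] = a
integral = 0.0
for k in range(1, n):
    integral += phi[k - 1] * dt     # left Riemann sum of int_0^{t_k} phi ds
    phi[k] = a + b * integral       # saturate the integral inequality

bound = a * np.exp(b * t)
assert np.all(phi <= bound + 1e-9)
print("max phi/bound:", np.max(phi / bound))  # approaches 1 from below
```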
§.§ Proofs of Section <ref>
§.§.§ Proof of Theorem <ref>
Let ν be the Lévy measure of L. We have ν = ν_ac + ν̃, where ν_ac is the absolutely continuous component and ν̃(ℝ^d) < ∞. Moreover we can split L into the corresponding processes L 𝒟= L_ac + L̃, where L_ac and L̃ are independent. We set B_T = {ω∈Ω | L̃_t = 0 ∀ t ∈ [0,T]}. Then for all T > 0 we have ℙ(B_T) > 0, and for any event A it holds that ℙ(A∩B_T) > 0 ⇔ ℙ(A|B_T) > 0 and ℙ(A∩B_T) > 0 ⇒ ℙ(A) > 0.
So in the following we assume w.l.o.g. L̃ = 0, as otherwise the below arguments and the independence of L_ac and L̃ imply that ℙ(Y_t ∈ A | Y_0 = x) > 0 results from ℙ(Y_t ∈ A | Y_0 = x, B_t) > 0.
*Irreducibility:
By Remark <ref>, to prove the irreducibility of
the MUCOGARCH volatility process it is enough to
show it for a skeleton chain.
Let δ > 0 and set t_n := nδ, ∀ n∈ℕ_0. We consider the skeleton chain
Y_t_n = e^Bt_n Y_t_0 e^B^⊤t_n + ∫_0^t_n e^B(t_n-s) A
(C+Y_s-)^1/2 d[L,L]^d_s (C+Y_s-)^1/2 A^⊤ e^B^⊤(t_n-s).
To show irreducibility w.r.t. λ_𝕊_d^+ we have to show that for any 𝒜 ∈ℬ(𝕊_d^+)
with λ_𝕊_d^+(𝒜) > 0 and any y_0 ∈ 𝕊_d^+ there exists an l such that
ℙ(Y_t_l ∈ 𝒜 | Y_0 = y_0) > 0.
With
ℙ(Y_t_l ∈ 𝒜 | Y_t_0 = y_0)
≥ ℙ(Y_t_l ∈ 𝒜, exactly one jump in every
time interval (t_0,t_1], …, (t_l-1,t_l] | Y_t_0 = y_0)
= ℙ(Y_t_l ∈ 𝒜 | Y_t_0 = y_0, exactly one
jump in every time interval (t_0,t_1], …, (t_l-1,t_l])
· ℙ(exactly one jump in every time interval
(t_0,t_1], …, (t_l-1,t_l]),
and the fact that the last factor is strictly positive, we can w.l.o.g. assume that we have exactly one jump in every time interval (t_k-1, t_k], ∀ k=1, …, l.
We denote by τ_k the jump time of our Lévy process in (t_k-1, t_k]. With the assumption that we only have one jump in every time interval, the skeleton chain can be represented by the sum of the jumps X_i, where L_t = ∑_i=1^N_t X_i is the used representation for the compound Poisson process L. We fix the number of time steps l ≥ d and get:
Y_t_l= e^Bt_l Y_0 e^B^⊤ t_l
+ ∑_i=1^l e^B(t_l- τ_i) A (C+e^B(τ_i-t_i-1)Y_t_i-1e^B^⊤(τ_i-t_i-1))^1/2 X_iX_i^⊤ (C+e^B(τ_i-t_i-1)Y_t_i-1e^B^⊤(τ_i-t_i-1))^1/2 A^⊤
e^B^⊤(t_l-τ_i).
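This representation is straightforward to simulate; the sketch below draws one standard normal jump per interval for arbitrary illustrative matrices B, A, C and checks that with l ≥ d jumps the simulated Y_{t_l} is positive definite, in line with the density argument that follows.

```python
# Sketch: simulate the "exactly one jump per interval" representation of
# Y_{t_l}.  Matrices and the standard normal jump law are illustrative only.
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(1)
d, l, delta = 2, 5, 1.0                      # l >= d intervals of length delta
B = np.array([[-1.0, 0.2], [0.0, -1.5]])
A = 0.3 * np.eye(d)
C = np.eye(d)

Y = np.zeros((d, d))                         # start in Y_0 = 0
for i in range(l):
    tau = rng.uniform(0.0, delta)            # single jump time in the interval
    Y = expm(B * tau) @ Y @ expm(B.T * tau)  # deterministic flow up to the jump
    S = np.real(sqrtm(C + Y))
    X = rng.standard_normal(d)
    Y = Y + A @ S @ np.outer(X, X) @ S @ A.T     # rank-one jump
    Y = expm(B * (delta - tau)) @ Y @ expm(B.T * (delta - tau))

print("eigenvalues of Y_{t_l}:", np.linalg.eigvalsh(Y))  # a.s. > 0 once l >= d
```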
First we show that the sum of jumps in (<ref>) has a positive density on 𝕊_d^+.
Note that every single jump is of rank one.
We define
Z_i^(l) := e^B(t_l- τ_i) A (C+e^B(τ_i-t_i-1)Y_t_i-1e^B^⊤(τ_i-t_i-1))^1/2 X_i
and with (<ref>) we have
Z_i^(l) = e^B(t_l- τ_i) A ( C + e^Bt_i-1 Y_0 e^B^⊤ t_i-1 + ∑_j=1^i-1 e^B(t_i-1-t_l) Z_j^(l)Z_j^(l)^⊤ e^B^⊤ (t_i-1-t_l))^1/2 X_i.
By assumption X_1, X_2, … are iid and absolutely continuous w.r.t. Lebesgue measure on ℝ^d with a (Lebesgue-a.e.) strictly positive density. We see
immediately that
Z_1^(l) | Y_0, τ_1
is absolutely continuous with a strictly positive density f_Z_1^(l) | Y_0, τ_1. Iteratively we get that
every
Z_i^(l) | Y_0, Z_1^(l), ⋯, Z_i-1^(l), τ_i
is absolutely continuous with a strictly positive density f_Z_i^(l) | Y_0, Z_1^(l), …, Z_i-1^(l), τ_i
for all i=2, …, l.
We denote by f_Z^(l)|Y_0, τ_1, …, τ_l the density of Z^(l) = (Z_1^(l), …, Z_l^(l))^⊤ given Y_0, τ_1, …, τ_l. Note that given Z_j^(l), j<i, Z_i^(l) is independent of τ_j. By the rules for conditional densities we get
f_Z^(l)|Y_0, τ_1, …, τ_l = f_Z_1^(l)| Y_0,τ_1· f_Z_2^(l) | Y_0, Z_1^(l), τ_2⋯ f_Z_l^(l)| Y_0, Z_1^(l), ⋯, Z_l-1^(l), τ_l
is strictly positive on ℝ^dl. Thus an equivalent measure ℚ∼ℙ exists such that Z_1^(l)|Y_0, τ_1, …, τ_l, …, Z_l^(l)|Y_0, τ_1, …, τ_l are iid normally distributed. In <cit.> it is shown that for l ≥ d
Γ := ∑_i=1^l Z_i^(l)|_Y_0, τ_1, …, τ_l · Z_i^(l)⊤|_Y_0, τ_1, …, τ_l
has a strictly positive density under ℚ w.r.t. the Lebesgue measure
on 𝕊_d^+. But since ℚ and ℙ are equivalent, Γ also has a
strictly positive density under ℙ w.r.t. Lebesgue measure
on 𝕊_d^+.
This yields
ℙ(Y_t_l ∈ 𝒜 | Y_0 = y_0) = ∫_ℝ_+^l ℙ( e^Bt_l Y_0 e^B^⊤t_l + Γ ∈ 𝒜 | Y_0 = y_0, τ_1 = k_1, …, τ_l = k_l ) dℙ_(τ_1,…,τ_l)(k_1,…,k_l) > 0
if ℙ( e^Bt_l Y_0 e^B^⊤t_l + Γ ∈ 𝒜 | Y_0 = y_0, τ_1 = k_1, …, τ_l = k_l ) > 0. Here we use that the joint distribution of the jump times ℙ_(τ_1,…,τ_l) is not trivial, since τ_1, …, τ_l are the jump times of a compound Poisson process.
Above we have shown that ℙ( e^Bt_l Y_0 e^B^⊤t_l + Γ ∈ 𝒜 | Y_0 = y_0, τ_1 = k_1, …, τ_l = k_l ) > 0 if
λ_𝕊_d^+( 𝒜 ∩ { x ∈ 𝕊_d^+ | x ≽ e^Bt_l Y_0 e^B^⊤t_l } ) > 0.
As we assumed σ(B) ⊂ (-∞,0) + iℝ, e^Bt Y_0 e^B^⊤t → 0 for t→∞. Thus we can choose l big enough such that λ_𝕊_d^+( 𝒜 ∩ { x ∈ 𝕊_d^+ | x ≽ e^Bt_l Y_0 e^B^⊤t_l } ) > 0 for any 𝒜 ∈ℬ(𝕊_d^+) with λ_𝕊_d^+(𝒜) > 0 (note λ_𝕊_d^+(∂𝕊_d^+) = 0). This shows the claimed irreducibility, and as δ was arbitrary, even simultaneous irreducibility.
*Aperiodicity:
The
simultaneous irreducibility and Proposition
<ref> show that every skeleton chain is aperiodic. Using Proposition <ref> we know for every skeleton chain, that every compact set is also small.
We define the set
𝒞 := { x ∈ 𝕊_d^+ | ‖x‖_2 ≤ K},
with a constant K > 0. Obviously 𝒞 is a compact set and thus a small set for every skeleton chain. By Remark <ref> it is also small for the continuous time Markov process (Y_t)_t≥0.
To show aperiodicity for (Y_t)_t≥0 in the sense of Definition <ref> we prove that
there exists a T > 0 such that
𝒫^t(x,𝒞) > 0
holds for all x∈𝒞 and all t≥T.
Using
𝒫^t(x,𝒞) ≥ 𝒫^t(x, 𝒞 ∩ { no jump up to time
t })
we consider Y_t under the condition “no jump up to time t”. With Y_0 = x ∈ 𝒞 we have
Y_t = e^Bt x e^B^⊤t.
Since λ = max(Re(σ(B))) < 0 there exist δ > 0 and C
≥ 1 such that ‖e^Bt‖_2 ≤ Ce^-δt and hence we have
‖e^Bt x e^B^⊤t‖_2 ≤ C^2 e^-2δt ‖x‖_2 ≤ C^2 e^-2δt K ≤ K
for all t ≥ ln(C)/δ. Hence Y_t ∈ 𝒞 for all t ≥ ln(C)/δ and thus (<ref>) holds.
§.§.§ Proof of Corollary <ref>
The proof is similar to that of Theorem <ref>, with the difference that we now assume for the jump sizes X_i that they have a density which is strictly positive in a neighborhood of zero, i.e. ∃ k > 0 such that
every X_i has a strictly positive density on { x ∈ ℝ^d : ‖x‖ ≤ k }.
We use the same notation as in the previous proof, but we omit the superscripts ^(l).
By the definition of Z_i and the same iteration as in the first case we show that Z := (Z_1, …, Z_l)|Y_0, τ_1, …, τ_l has a strictly positive density on a suitable neighborhood of the origin.
For A∈M_d(ℝ) we define j(A) := min_{x∈ℝ^d∖{0}} ‖Ax‖_2/‖x‖_2 as the modulus of injectivity, which has the following properties: 0 ≤ j(A) ≤ ‖A‖_2 and ‖Ax‖_2 ≥ j(A)‖x‖_2, as well as j(AB) ≥ j(A)j(B) for A,B ∈ M_d(ℝ).
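Numerically, j(A) is the smallest singular value of A, so the listed properties are easy to verify; a minimal sketch:

```python
# Sketch: j(A) = min_{x != 0} ||Ax||_2/||x||_2 is the smallest singular value;
# check 0 <= j(A) <= ||A||_2 and j(AB) >= j(A) j(B) on random matrices.
import numpy as np

rng = np.random.default_rng(2)

def j(M):
    return np.linalg.svd(M, compute_uv=False)[-1]

for _ in range(1000):
    A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
    assert j(A @ B) >= j(A) * j(B) - 1e-12
    assert -1e-12 <= j(A) <= np.linalg.norm(A, 2) + 1e-12
print("properties of j verified on 1000 random pairs")
```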
With that we get for Z_1
‖Z_1‖_2 = ‖e^B(t_l-τ_1) A (C+e^Bτ_1 Y_0 e^B^⊤τ_1)^1/2 X_1‖_2 ≥ j( e^B(t_l-τ_1) A (C+e^Bτ_1 Y_0 e^B^⊤τ_1)^1/2 ) ‖X_1‖_2
≥ j(e^B(t_l-τ_1)) j(A) j( (C+e^Bτ_1 Y_0 e^B^⊤τ_1)^1/2 ) ‖X_1‖_2 ≥ j(e^Bt_l) j(A) j(C^1/2) ‖X_1‖_2
and thus Z_1|Y_0, τ_1 has a strictly positive density on {x∈ℝ^d : ‖x‖ ≤ k̃}, where k̃ := j(e^Bt_l) j(A) j(C^1/2) k.
Iteratively we get that every Z_i|Y_0, τ_i, Z_1, …, Z_i-1 has a strictly positive density on {x∈ℝ^d : ‖x‖ ≤ k̃}, and as in the first case this shows that Z = (Z_1, …, Z_l)|Y_0, τ_1, …, τ_l has a strictly positive density on { x=(x_1,…,x_l)^⊤ ∈ ℝ^d·l : ‖x_i‖ ≤ k̃ ∀ i=1,…,l }. We fix a k̂ with 0 < k̂ < k̃ and set 𝒦̂ := { x=(x_1,…,x_l)^⊤ ∈ ℝ^d·l : ‖x_i‖ ≤ k̂ ∀ i=1,…,l }. Now we can construct random variables Z̃_i, i=1,…,l, such that Z̃ := (Z̃_1, …, Z̃_l)|Y_0, τ_1, …, τ_l has a strictly
positive
density on ℝ^d·l and
1_𝒦̂ · Z̃|Y_0, τ_1, …, τ_l 𝒟= 1_𝒦̂ · Z|Y_0, τ_1, …, τ_l.
Due to the first case we can now choose a measure ℚ such that the Z̃_i|_Y_0, τ_1, …, τ_l, Z_1, …, Z_i-1 are iid normally distributed and the random variable Γ̃ := ∑_i=1^l Z̃_i|_Y_0, τ_1, …, τ_l, Z_1, …, Z_i-1 Z̃_i^⊤|_Y_0, τ_1, …, τ_l, Z_1, …, Z_i-1 has a strictly positive density on 𝕊_d^+.
With the equivalence of ℚ and ℙ, also ℙ(Γ̃ ∈ 𝒜) > 0 for every 𝒜 ∈ℬ(𝕊_d^+) with λ_𝕊_d^+(𝒜) > 0.
Further we define ℰ := { x∈𝕊_d^+ : x = ∑_i=1^l z_i z_i^⊤, z_i∈ℝ^d, ‖x‖_2 ≤ k̂ } and 𝒦 := { x∈𝕊_d^+ : x = ∑_i=1^l z_i z_i^⊤ for z_1, …, z_l ∈ ℝ^d implies ‖z_i‖_2 ≤ k̂ ∀ i=1,…,l }. Let x = ∑_i=1^l z_i z_i^⊤ ∈ ℰ. Then x = ∑_i=1^l z_i z_i^⊤ ≽ z_j z_j^⊤ for all j=1,…,l and thus ‖z_j z_j^⊤‖_2 = ‖z_j‖_2^2 ≤ k̂, which means that x ∈ 𝒦 and thereby ℰ ⊆ 𝒦.
Now let 𝒜 ∈ℬ(𝕊_d^+) and note that 1_𝒦 · Γ̃ 𝒟= 1_𝒦 · Γ. Finally we get
ℙ(Γ ∈ 𝒜∩ℰ) = ℙ(Γ ∈ 𝒜∩ℰ∩𝒦) = ℙ(Γ̃ ∈ 𝒜∩ℰ) > 0
if λ_𝕊_d^+(𝒜 ∩ℰ) > 0.
With the same conditioning argument as in the proof of Theorem <ref>, and again using the fact that by assumption there always exists an l such that λ_𝕊_d^+( 𝒜 ∩ℰ ∩ { x ∈ 𝕊_d^+ | x ≽ e^Bt_l Y_0 e^B^⊤t_l } ) > 0 if λ_𝕊_d^+(𝒜 ∩ℰ) > 0, we get simultaneous irreducibility w.r.t. the measure λ_𝕊_d^+ ∩ℰ defined by λ_𝕊_d^+ ∩ℰ(B) := λ_𝕊_d^+(B ∩ℰ) for all B ∈ℬ(𝕊_d^+).
Aperiodicity follows as in the proof of Theorem <ref>, since there we only used the compound Poisson structure of L and not the assumption on the jump distribution.
§ ACKNOWLEDGEMENTS
The authors are grateful to the editor and an anonymous referee for their reading and insightful comments which improved the paper considerably.
The second author gratefully acknowledges support by the DFG Graduiertenkolleg 1100.
|
http://arxiv.org/abs/1701.07969v2 | 20170127083352 | Coherent microwave-to-optical conversion via six-wave mixing in Rydberg atoms | [
"Jingshan Han",
"Thibault Vogt",
"Christian Gross",
"Dieter Jaksch",
"Martin Kiffner",
"Wenhui Li"
] | physics.atom-ph | [
"physics.atom-ph",
"physics.optics"
] |
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543^1
MajuLab, CNRS-UNS-NUS-NTU International Joint Research Unit UMI 3654, Singapore 117543^2
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom^3
Department of Physics, National University of Singapore, 117542, Singapore^4
42.50.Gy,42.65.Ky,32.80.Ee
We present an experimental demonstration of converting a microwave field to an
optical field via frequency mixing in a cloud of cold ^87Rb atoms, where the
microwave field strongly couples to an electric dipole transition between
Rydberg states. We show that the conversion allows the phase information of the
microwave field to be coherently transferred to the optical field. With the
current energy level scheme and experimental geometry, we achieve a photon
conversion efficiency of ∼ 0.3% at low microwave intensities and a broad
conversion bandwidth of more than 4 MHz. Theoretical simulations agree well with
the experimental data, and indicate that near-unit efficiency is possible in
future experiments.
Coherent Microwave-to-Optical Conversion via Six-Wave Mixing in Rydberg Atoms
Wenhui Li^1,4
Received: date / Accepted: date
=============================================================================
Coherent and efficient conversion from microwave and terahertz radiation into
optical fields and vice versa has tremendous potential for developing
next-generation classical and quantum technologies. For example, these methods
would facilitate the detection and imaging of millimeter waves with various
applications in medicine, security screening and
avionics <cit.>.
In the quantum domain, coherent microwave-optical conversion is essential for
realizing quantum hybrid systems <cit.> where spin systems or
superconducting qubits are coupled to optical photons that can be transported
with low noise in optical fibers <cit.>.
The challenge in microwave-optical conversion is to devise a suitable platform
that couples strongly to both frequency bands, which are separated by several
orders of magnitude in frequency, and provides an efficient link between them.
Experimental work on microwave-optical conversion has been based on
ferromagnetic magnons <cit.>, frequency mixing in Λ-type
atomic ensembles <cit.>,
whispering gallery resonators <cit.>, or nanomechanical
oscillators <cit.>. All of these schemes
include cavities to enhance the coupling to microwaves. The realization of near-unit
conversion efficiencies as e.g. required for transmitting quantum information
remains an outstanding and important goal.
Recently, highly excited Rydberg atoms have been identified as
a promising alternative <cit.> as they feature strong electric dipole
transitions in a wide frequency range from microwaves to terahertz <cit.>.
In this letter, we demonstrate coherent microwave-to-optical conversion of
classical fields via six-wave mixing in Rydberg atoms. Due to the strong
coupling of millimeter waves to Rydberg transitions, the conversion is realized
in free space. In contrast to
millimeter-wave induced optical fluorescence <cit.>, frequency mixing
is employed here to convert a microwave field into a unidirectional single
frequency optical field. The long lifetime of Rydberg states allows us to make
use of electromagnetically induced transparency
(EIT) <cit.>, which significantly enhances the conversion
efficiency <cit.>. A free-space photon-conversion efficiency of
0.3% with a bandwidth of more than 4 MHz is achieved with our current experimental
geometry. Optimized geometry and energy level configurations should enable the
broadband inter-conversion of microwave and optical fields with near-unit
efficiency <cit.>. Our results thus constitute a major step
towards using Rydberg atoms for transferring quantum states between optical and
microwave photons.
The energy levels for the six-wave mixing are shown in Fig. <ref>(a), and
the experimental setup is illustrated in Fig. <ref>(b). The conversion of
the input microwave field M into the optical field L is achieved via frequency
mixing with four input auxiliary fields P, C, A, and R in a cold atomic cloud.
Starting from the spin polarized ground state |1⟩, the auxiliary fields
and the microwave field M, all of which are nearly resonant with the
corresponding atomic transitions, create a coherence between
the states |1⟩ and |6⟩. This induces the emission of the light
field L with frequency ω_L= ω_P + ω_C
- ω_A + ω_M - ω_R such that the
resonant six-wave mixing loop is completed, where ω_X is the
frequency of field X
(X∈{P,R,M,C,L,A}). The
emission direction of field L is determined by the phase matching
condition 𝐤_L = 𝐤_P + 𝐤_C
- 𝐤_A + 𝐤_M - 𝐤_R, where
𝐤_X is the wave vector of the corresponding field. The wave
vectors of the microwave fields 𝐤_A and 𝐤_M
are negligible since they are much smaller than
those of the optical fields and to an excellent approximation, they cancel each
other. Moreover, we have 𝐤_C≈𝐤_R,
thus the converted light field L propagates in the same direction as the input
field P. The transverse profile of the converted light field L resembles that of
the auxiliary field P due to pulse matching <cit.> as
illustrated in Fig. <ref>(b).
An experimental measurement begins with the preparation of a cold cloud of
^87Rb atoms in the |5S_1/2, F=2, m_F=2⟩ state in a magnetic field
of 6.1 G, as described previously in <cit.>. At this stage,
the atomic cloud has a temperature of about 70 μK, a 1/e^2 radius
of w_z=1.85(10) mm along the z direction, and a peak atomic
density n_0 = 2.1 (2) × 10^10 cm^-3. We then switch on
all the input laser and microwave fields simultaneously for frequency mixing.
The beams for both C and R fields are derived from a single 482 nm laser, while
that of the P field comes from a 780 nm laser, and the two lasers are frequency
locked to a single high-finesse temperature stabilized Fabry-Perot cavity
<cit.>. The 1/e^2 beam radii of these Gaussian fields at the
center of the atomic cloud are w_P=25(1) μm,
w_C=54(2) μm, and w_R=45(1) μm,
respectively; and
their corresponding peak Rabi frequencies are
Ω^(0)_P=2 π×1.14(7) MHz, Ω^(0)_C=2 π×9.0(5)
MHz, and Ω^(0)_R=2 π×6.2(3) MHz. The two
microwave fields M and A, with a frequency separation of around 450 MHz, are
generated by two different microwave sources via frequency multiplication. They
are emitted from two separate horn antennas, and propagate in the horizontal
plane through the center of the atomic cloud, as shown in Fig. <ref>(b).
The Rabi frequencies Ω_M and Ω_A are
approximately uniform across the atomic cloud volume that intersects the laser
beams. The Rabi frequency of the A field is Ω_A=2 π×1.0(1) MHz, while the Rabi frequency of the M field Ω_M is
varied in different measurements. The details of the microwave Rabi frequency
calibrations are presented in <cit.>. The P and L fields that emerge
from the atomic cloud are collected by a
diffraction-limited optical system <cit.>, and separated using a
quarter-wave plate and a polarization beam splitter (PBS). Their respective powers
are measured with two different avalanche photodiode detectors. Each optical power measurement is an average of the recorded time-dependent signal in the
range from 6 to 16 μs after switching on all the fields simultaneously,
where the delay ensures the steady state is fully reached.
We experimentally demonstrate the coherent microwave-to-optical conversion via
the six-wave mixing process by two measurements. First, we scan the detuning Δ_P of
the P field across the atomic resonance and measure the power of the transmitted field P (P_P),
and the power of the converted optical field L (P_L) simultaneously.
All other input fields are held on resonance.
The results of this measurement are shown in Fig. <ref>(a),
where the spectrum of the transmitted field P (red squares) exhibits a double peak structure.
The signature of the six-wave mixing process is the converted field L (purple circles), and its spectrum
features a pronounced peak around Δ_P=0.
Second, to verify the coherence of the conversion, we perform
optical heterodyne measurements between the L field and a reference field that
is derived from the same laser as the P field. Fig. <ref>(b) shows that the
Fourier spectrum of a 500 μs long beat note signal has a transform-limited
sinc function dependence. The central frequency of the spectrum confirms that
the frequency of the converted field L is determined by the resonance
condition for the six-wave mixing process. Furthermore, we phase modulate the M
field with a triangular modulation function and observe the recovery of the
phase modulation in the optical heterodyne measurements, as shown in
Fig. <ref>(c). This demonstrates that the phase information is coherently
transferred in the conversion, as expected for a nonlinear frequency mixing
process.
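The transform-limited sinc shape of the heterodyne spectrum is a generic consequence of the finite 500 μs record; the sketch below reproduces it for a rectangular-windowed sinusoid (the beat frequency and sampling rate are placeholders, not the experimental values).

```python
# Sketch: spectrum of a finite-duration beat note.  A sinusoid observed for a
# time T has a transform-limited |sinc| spectrum of width ~1/T around the beat
# frequency.  f_beat and fs are placeholders, not the experimental values.
import numpy as np

T, fs, f_beat = 500e-6, 50e6, 2.0e6
t = np.arange(0.0, T, 1.0 / fs)
signal = np.cos(2 * np.pi * f_beat * t)

pad = 8 * len(t)                            # zero-pad to resolve the sinc lobe
spec = np.abs(np.fft.rfft(signal, n=pad))
freqs = np.fft.rfftfreq(pad, 1.0 / fs)
fwhm = np.count_nonzero(spec > spec.max() / 2) * (freqs[1] - freqs[0])
print(f"peak at {freqs[np.argmax(spec)] / 1e6:.3f} MHz, "
      f"FWHM ~ {fwhm / 1e3:.2f} kHz (1.21/T = {1.21 / T / 1e3:.2f} kHz)")
```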
We simulate the experimental spectra by modelling the interaction of the laser
and microwave fields with the atomic ensemble within the framework of coupled
Maxwell-Bloch equations <cit.>. The time evolution of the atomic density
operator is given by a Markovian master equation (ħ is the reduced Planck constant),
∂_t ρ = - i/ħ [ H , ρ ] + L_γρ + L_dephρ,
where H is the Hamiltonian describing the interaction of an independent
atom with the six fields, and the term L_γ describes
spontaneous decay of the excited states. The last term
L_deph in Eq. (<ref>) accounts for dephasing of
atomic coherences involving the Rydberg states |3⟩, |4⟩, and
|5⟩ with the dephasing rates γ_d, γ_DD, and
γ_d', respectively <cit.>. The sources of decoherence are the finite laser linewidths, atomic collisions, and dipole-dipole interactions
between Rydberg atoms.
The dephasing rates affect the P and L spectra and are found
by fitting the steady state solution of coupled Maxwell-Bloch equations to the experimental spectra in Fig. <ref>(a).
All other parameters are taken from independent
experimental measurements and calibrations. We obtain γ_d = 2 π×150 kHz, γ_DD = 2 π× 150 kHz and γ_d' = 2 π× 560 kHz and keep these values
fixed in all simulations.
The system in Eq. (<ref>) exhibits an approximate dark state <cit.>
|D⟩∝(Ω_M^*Ω_C^*|1⟩-Ω_M^*Ω_P|3⟩
+Ω_A^*Ω_P|5⟩)
for
Ω_L/Ω_P=-Ω_A^*Ω_R^*/(Ω_M^*Ω_C^*),
where Ω_L is the Rabi frequency of field L.
This state has non-zero population only in
metastable states |1⟩, |3⟩, and |5⟩, and is decoupled from all
the fields. The population in |D⟩ increases with the build-up of the converted light
field along the z direction, and thus P_L saturates when all atoms are trapped in this state. Fig. <ref> shows the dependence of the output
power P_L on the optical depth D_P∝ n_0 w_z of the atomic cloud, and the theory curve agrees well with the experimental data.
The predicted saturation at D_P≈ 20 is consistent with the population in |D⟩ exceeding 99.8% at this optical depth.
Next we analyze the dependence of the conversion process on detuning and
intensity of the microwave field M. All auxiliary fields are kept on
resonance and at constant intensity. Fig. <ref>(a) shows P_L as a function of the microwave detuning
Δ_M. We find that the spectrum of the L field
can be approximated by a squared Lorentzian function centered at Δ_M=0,
and its full width at half maximum (FWHM) is ≈ 6 MHz.
The FWHM extracted from microwave spectra at
different intensities I_M is plotted in Fig. <ref>(b). The FWHM has a finite value > 4 MHz in the low intensity limit, and increases slowly
with I_M due to power broadening.
This large bandwidth is one of the distinguishing features of our scheme
and is essential for extending the conversion scheme to the single photon
level <cit.>.
In Fig. <ref>(c), we show measurements of P_L vs. the intensity of the microwave field I_M at
Δ_M = 0. We find that the converted power P_L increases approximately linearly at
low microwave intensities, and thus our conversion scheme is expected to work in the limit of very weak input fields.
The decrease of P_L at large intensities arises
because the six-wave mixing process becomes inefficient if the Rabi frequency Ω_M is
much larger than the Rabi frequency Ω_A of the auxiliary microwave.
All the theoretical curves in Fig. <ref> agree well with the experimental data.
We evaluate the photon conversion efficiency of our
setup by considering the cylindrical volume 𝒱 where the atomic cloud
and all six fields overlap. This volume has a diameter ∼ 2 w_P
and a length ∼ 2 w_z [see Fig. <ref>(b)].
We define the conversion efficiency as
η = (P_L/ħω_L) / (I_M S_M/ħω_M),
where S_M = 4 w_P w_z is the cross-section of the volume 𝒱 perpendicular to
𝐤_M. The efficiency η gives the ratio of the photon flux in L leaving volume 𝒱 over
the photon flux in M entering 𝒱. As shown in Fig. <ref>(d), the conversion efficiency is approximately
η≈ 0.3% over a range of low intensities and then decreases with increasing I_M.
Note that η in Eq. (<ref>) is a measure of the efficiency of the physical conversion
process in the Rydberg medium based on the
microwave power I_M S_M impinging on S_M. This power is smaller than the total power emitted by the horn antenna since the M field has not been focused on 𝒱 in our setup.
The good agreement between our model and the experimental data allows us to theoretically explore other geometries.
To this end we consider that the microwave fields M and A are co-propagating with the P field,
and assume that all other parameters are the same <cit.>.
We numerically evaluate the generated light power P_L^∥ for this setup and calculate the efficiency η^∥
by replacing P_L with P_L^∥ and S_M with S_M^∥=π w_P^2 in Eq. (<ref>).
We find η^∥≈ 26%, which is approximately two orders of magnitude larger than η. This increase is mostly due to the
geometrical factor S_M/S_M^∥≈ 91, since P_L^∥∼ P_L. Note that such a value
for η^∥ is consistent with the efficiency achieved by a similar near-resonance frequency
mixing scheme in the optical domain <cit.>.
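The geometric factor entering this comparison follows directly from the quoted beam parameters; the sketch below reproduces it (the small difference from the quoted value of about 91 presumably reflects rounding of w_P and w_z).

```python
# Sketch: cross-sections and geometric enhancement factor from the quoted
# beam parameters w_P = 25 um and w_z = 1.85 mm.
import math

w_P, w_z = 25e-6, 1.85e-3            # meters

S_M = 4 * w_P * w_z                  # transverse microwave geometry
S_M_par = math.pi * w_P ** 2         # co-propagating geometry

print(f"S_M      = {S_M * 1e6:.3f} mm^2")
print(f"S_M_par  = {S_M_par * 1e6:.5f} mm^2")
print(f"S_M / S_M_par = {S_M / S_M_par:.0f}  (quoted: about 91)")
```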
In conclusion, we have demonstrated coherent microwave-to-optical conversion via
a six-wave mixing process utilizing the strong coupling of electromagnetic
fields to Rydberg atoms. We have established the coherence of the conversion by
a heterodyne measurement and demonstrated a large bandwidth by measuring the generated light
as a function of the input microwave frequency.
Coherence and large bandwidth are essential for taking our scheme
to the single photon level and using it in quantum technology applications. Our
results are in good agreement with theoretical simulations based on an independent
atom model thus showing a limited impact of atom-atom interaction on
our conversion scheme.
This work has focussed on the physical conversion mechanism in Rydberg systems and
provides several possibilities for future studies and applications.
Alkali atom transitions offer a wide range of frequencies in the optical and microwave
domain with properties similar to those exploited in this work. For example, the
conversion of a microwave field to telecommunication wavelengths is
possible by switching to different optical transitions and/or using different atomic
species <cit.>, which makes our approach
promising for classical and quantum communication applications.
Moreover, it has been theoretically shown that bidirectional conversion with near-unit efficiency is possible by using a different Rydberg excitation scheme and well-chosen detunings of the auxiliary fields <cit.>. Such non-linear conversion with near-unit efficiency has only been experimentally realized in the optical domain <cit.>. Reaching this level of efficiency requires good mode-matching between the millimeter waves and the auxiliary optical fields <cit.>, which can be achieved either by tightly focusing the millimeter wave, or by confining it to a waveguide directly coupled to the conversion medium <cit.>. Eventually, extending our conversion scheme to millimeter waves in a cryogenic environment <cit.> would pave the way towards quantum applications.
The authors thank Tom Gallagher for useful discussions and acknowledge the
support by the National Research Foundation, Prime
Ministers Office, Singapore and the Ministry of Education, Singapore under the
Research Centres of Excellence programme. This work is supported by Singapore
Ministry of Education Academic Research Fund Tier 2 (Grant No.
MOE2015-T2-1-085). M.K. would like to acknowledge the use of the University of
Oxford Advanced Research Computing (ARC) facility
(http://dx.doi.org/10.5281/zenodo.22558).
[1] A. J. L. Adam, Review of near-field terahertz measurement methods and their applications, J. Infrared Milli. Terahz. Waves 32, 976 (2011).
[2] W. L. Chan, J. Deibel, and D. M. Mittleman, Imaging with terahertz radiation, Rep. Prog. Phys. 70, 1325 (2007).
[3] M. Tonouchi, Cutting-edge terahertz technology, Nat. Photon. 1, 97 (2007).
[4] X.-C. Zhang, A. Shkurinov, and Y. Zhang, Extreme terahertz science, Nat. Photon. 11, 16-18 (2017).
[5] Z.-L. Xiang, S. Ashhab, J. Q. You, and F. Nori, Hybrid quantum circuits: Superconducting circuits interacting with other quantum systems, Rev. Mod. Phys. 85, 623-653 (2013).
[6] H. J. Kimble, The quantum internet, Nature 453, 1023-1030 (2008).
[7] R. Hisatomi, A. Osada, Y. Tabuchi, T. Ishikawa, A. Noguchi, R. Yamazaki, K. Usami, and Y. Nakamura, Bidirectional conversion between microwave and light via ferromagnetic magnons, Phys. Rev. B 93, 174427 (2016).
[8] L. A. Williamson, Y.-H. Chen, and J. J. Longdell, Magneto-optic modulator with unit quantum efficiency, Phys. Rev. Lett. 113, 203601 (2014).
[9] C. O'Brien, N. Lauk, S. Blum, G. Morigi, and M. Fleischhauer, Interfacing superconducting qubits and telecom photons via a rare-earth-doped crystal, Phys. Rev. Lett. 113, 063603 (2014).
[10] S. Blum, C. O'Brien, N. Lauk, P. Bushev, M. Fleischhauer, and G. Morigi, Interfacing microwave qubits and optical photons via spin ensembles, Phys. Rev. A 91, 033834 (2015).
[11] M. Hafezi, Z. Kim, S. L. Rolston, L. A. Orozco, B. L. Lev, and J. M. Taylor, Atomic interface between microwave and optical photons, Phys. Rev. A 85, 020302 (2012).
[12] D. V. Strekalov, H. G. L. Schwefel, A. A. Savchenkov, A. B. Matsko, L. J. Wang, and N. Yu, Microwave whispering-gallery resonator for efficient optical up-conversion, Phys. Rev. A 80, 033810 (2009).
[13] A. Rueda, F. Sedlmeir, M. C. Collodo, U. Vogl, B. Stiller, G. Schunk, D. V. Strekalov, C. Marquardt, J. M. Fink, O. Painter, G. Leuchs, and H. G. L. Schwefel, Efficient microwave to optical photon conversion: an electro-optical realization, Optica 3, 597-604 (2016).
[14] J. Bochmann, A. Vainsencher, D. D. Awschalom, and A. N. Cleland, Nanomechanical coupling between microwave and optical photons, Nat. Phys. 9, 712-716 (2013).
[15] R. W. Andrews, R. W. Peterson, T. P. Purdy, K. Cicak, R. W. Simmonds, C. A. Regal, and K. W. Lehnert, Bidirectional and efficient conversion between microwave and optical light, Nat. Phys. 10, 321-326 (2014).
[16] T. Bagci, A. Simonsen, S. Schmid, L. G. Villanueva, E. Zeuthen, J. Appel, J. M. Taylor, A. Sørensen, K. Usami, A. Schliesser, et al., Optical detection of radio waves through a nanomechanical transducer, Nature 507, 81-85 (2014).
[17] M. Kiffner, A. Feizpour, K. T. Kaczmarek, D. Jaksch, and J. Nunn, Two-way interconversion of millimeter-wave and optical fields in Rydberg gases, New J. Phys. 18, 093030 (2016).
[18] B. T. Gard, K. Jacobs, R. McDermott, and M. Saffman, Microwave-to-optical frequency conversion using a cesium atom coupled to a superconducting resonator, Phys. Rev. A 96, 013833 (2017).
[19] T. F. Gallagher, Rydberg Atoms (Cambridge University Press, Cambridge, 1994).
[20] C. G. Wade, N. Šibalić, N. R. de Melo, J. M. Kondo, C. S. Adams, and K. J. Weatherill, Real-time near-field terahertz imaging with atomic optical fluorescence, Nat. Photon. 11, 40 (2016).
[21] A. K. Mohapatra, T. R. Jackson, and C. S. Adams, Coherent optical detection of highly excited Rydberg states using electromagnetically induced transparency, Phys. Rev. Lett. 98, 113003 (2007).
[22] M. Fleischhauer, A. Imamoǧlu, and J. P. Marangos, Electromagnetically induced transparency: Optics in coherent media, Rev. Mod. Phys. 77, 633 (2005).
[23] A. J. Merriam, S. J. Sharpe, M. Shverdin, D. Manuszak, G. Y. Yin, and S. E. Harris, Efficient nonlinear frequency conversion in an all-resonant double-Λ system, Phys. Rev. Lett. 84, 5308-5311 (2000).
[24] A. J. Merriam, S. J. Sharpe, H. Xia, D. Manuszak, G. Y. Yin, and S. E. Harris, Efficient gas-phase generation of coherent vacuum ultraviolet radiation, Opt. Lett. 24, 625-627 (1999).
[25] S. E. Harris, Electromagnetically induced transparency with matched pulses, Phys. Rev. Lett. 70, 552 (1993).
[26] S. E. Harris, Normal modes for electromagnetically induced transparency, Phys. Rev. Lett. 72, 52 (1994).
[27] J. Han, T. Vogt, M. Manjappa, R. Guo, M. Kiffner, and W. Li, Lensing effect of electromagnetically induced transparency involving a Rydberg state, Phys. Rev. A 92, 063824 (2015).
[28] See Supplemental Material for details about the theoretical model and the calibration of experimental parameters.
[29] The intensity I_M is related to the Rabi frequency Ω_M by I_M = (1/2)ε_0 c (ħΩ_M/d_45)^2, where c is the speed of light, ε_0 the electric constant, ħ the reduced Planck constant, and d_45 the electric dipole moment between states |4⟩ and |5⟩ (see Ref. <cit.>).
[30] S. L. Gilbert, Frequency stabilization of a fiber laser to rubidium: a high-accuracy 1.53-μm wavelength standard, in Applications in Optical Science and Engineering (International Society for Optics and Photonics, 1993), pp. 146-153.
[31] M. A. Bouchiat, J. Guéna, P. Jacquier, M. Lintz, and L. Pottier, The Cs 6S-7S-6P_3/2 forbidden three-level system: analytical description of the inhibited fluorescence and optical rotation spectra, Journal de Physique 50, 157-199 (1989).
[32] S. D. Hogan, J. A. Agner, F. Merkt, T. Thiele, S. Filipp, and A. Wallraff, Driving Rydberg-Rydberg transitions from a coplanar microwave waveguide, Phys. Rev. Lett. 108, 063004 (2012).
[33] C. Hermann-Avigliano, R. Celistrino Teixeira, T. L. Nguyen, T. Cantat-Moltrecht, G. Nogues, I. Dotsenko, S. Gleyzes, J. M. Raimond, S. Haroche, and M. Brune, Long coherence times for Rydberg qubits on a superconducting atom chip, Phys. Rev. A 90, 040502 (2014).
[34] D. Cano, H. Hattermann, B. Kasch, C. Zimmermann, R. Kleiner, D. Koelle, and J. Fortágh, Experimental system for research on ultracold atomic gases near superconducting microstructures, Eur. Phys. J. D 63, 17 (2011).
|
http://arxiv.org/abs/1701.07996v1 | 20170127101656 | Orthogonal Polynomials related to g-fractions with missing terms | [
"Kiran Kumar Behera",
"A. Swaminathan"
] | math.CA | [
"math.CA",
"42C05, 33C47, 30B70"
] |
amsplain
Department of Mathematics
Indian Institute of Technology Roorkee-247 667,
Uttarakhand, India
[email protected]
Department of Mathematics
Indian Institute of Technology Roorkee-247 667,
Uttarakhand, India
[email protected], [email protected]
The purpose of the present paper is to investigate some
structural and qualitative aspects of two different
perturbations of the parameters of g-fractions.
In this context the concept of gap g-fractions is introduced. While tail sequences of a continued fraction play a
significant role in the first perturbation,
Schur fractions are used
in the second perturbation of the g-parameters that are considered.
Illustrations are provided using Gaussian hypergeometric functions.
Using a particular gap g-fraction, some members
of the class of Pick functions are also identified.
Orthogonal Polynomials related to g-fractions with missing terms
A. Swaminathan
January 27, 2017
==================================================================
§ INTRODUCTION
Given an arbitrary real sequence {g_k}_k=0^∞,
a continued fraction expansion of the form
1/(1 - (1-g_0)g_1z/(1 - (1-g_1)g_2z/(1 - (1-g_2)g_3z/(1 - ⋯)))),
z∈ℂ,
is called a g-fraction if the parameters
g_j∈[0,1], j∈ℕ∪{0}.
It terminates and equals a rational function if
g_j∈{0,1} for some j∈ℕ∪{0}.
If 0<g_j<1, j∈ℕ∪{0},
the g-fraction (<ref>)
still converges uniformly on all compact subsets of the slit
domain ℂ∖[1,∞)
(see <cit.> and
<cit.>),
and in this case,
(<ref>)
will represent an
analytic function, say ℱ(z).
Such g-fractions have found applications in diverse areas
like number theory
<cit.>,
dynamical systems
<cit.>,
moment problems and analytic
function theory
<cit.>.
In particular, <cit.>,
the Hausdorff moment problem
ν_j=∫_0^1σ^jdν(σ),
j≥0,
has a solution if and only if (<ref>)
corresponds to a power series of the form
1+ν_1z+ν_2z^2+⋯, z∈ℂ∖[1,∞).
Further, the g-fractions have also been used to study the geometric properties of ratios of Gaussian hypergeometric functions as well as their q-analogues
(see the proofs of <cit.>
and <cit.>).
Among several such results,
one of the most fundamental results concerning g-fractions is
<cit.> in which holomorphic functions
having positive real part in ℂ∖[1, ∞) are characterised.
Precisely, Re(√(1+z) ℱ(z)) is positive
if and only if ℱ(z) has a continued fraction expansion
of the form (<ref>).
Moreover, ℱ(z) has the integral representation
ℱ(z) = ∫_0^1 dϕ(t)/(1-zt),
z∈ℂ∖[1,∞),
where ϕ(t) is a bounded non-decreasing function
having a total increase 1.
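These statements are easy to probe numerically: a truncated g-fraction can be evaluated by backward recurrence, and the positivity of Re(√(1+z) ℱ(z)) checked at sample points. In the sketch below the g_j are arbitrary numbers in (0,1), the principal branch of the square root is used, and the sample points stay away from both slits.

```python
# Sketch: evaluate a truncated g-fraction by backward recurrence and check
# Re( sqrt(1+z) F(z) ) > 0.  The g_j are arbitrary values in (0, 1).
import numpy as np

rng = np.random.default_rng(3)
g = rng.uniform(0.05, 0.95, size=200)        # g_0, g_1, ..., g_199

def g_fraction(z, g):
    """F(z) = 1/(1 - (1-g_0)g_1 z/(1 - (1-g_1)g_2 z/(1 - ...)))."""
    tail = 0.0
    for k in range(len(g) - 1, 0, -1):
        tail = (1 - g[k - 1]) * g[k] * z / (1 - tail)
    return 1.0 / (1.0 - tail)

for z in [-0.5, 0.5j, -0.9 + 0.3j, 0.9]:     # sample points in C \ [1, inf)
    val = np.sqrt(1 + z + 0j) * g_fraction(z, g)
    print(f"z = {z}: Re(sqrt(1+z) F(z)) = {val.real:.6f}")
```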
Many interesting results are also available in literature if we consider
subsets of ℂ∖[1,∞).
For instance, let 𝕂 be the class of
holomorphic functions having a positive real part on the unit disk
𝔻:={z:|z|<1}. Such functions, denoted by 𝒞(z),
are called Carathéodory functions and have the Riesz-Herglotz
representation <cit.>
𝒞(z)=
∫_0^2π (e^it+z)/(e^it-z) dϕ(t) + iq,
where q=Im 𝒞(0).
Further, if 𝒞(z)∈𝕂 is such that
𝒞(ℝ)⊆ℝ
and normalised by 𝒞(0)=1,
then the following continued fraction expansion can be derived
<cit.>
(1-z)/(1+z) 𝒞(z) =
1/(1 - g_1ω/(1 - (1-g_1)g_2ω/(1 - (1-g_2)g_3ω/(1 - ⋯)))),
z∈𝔻,
where ω = -4z/(1-z)^2. Note that here g_0=0.
Closely related to the Carathéodory functions are the Schur functions
f(z) given by
𝒞(z) = (1+zf(z))/(1-zf(z)),
z∈𝔻.
From (<ref>),
it is clear that f(z) maps the unit disk 𝔻
to the closed unit disk 𝔻̅.
In fact, if
𝔹 = { f :
f(𝔻) ⊆ 𝔻̅ },
(<ref>)
describes a one-one correspondence between the classes of
holomorphic functions 𝔹 and 𝕂.
The method by which J. Schur studied the class 𝔹
<cit.>
is the well-known Schur algorithm.
This algorithm generates a sequence of rational functions
{f_n(z)}_n=0^∞ from a given sequence
{α_n}_n=0^∞ of complex numbers lying in
𝔻̅.
Then, with α_n, n≥0, satisfying some
positivity conditions, f_n(z)→ f(z),
n→∞, where f(z)∈𝔹
is a Schur function
<cit.>.
It is interesting to note that α_n=f_n(0),
n≥0 where f_0(z)≡ f(z). Moreover, using these
parameters {α_n}_n≥0,
the following Schur fraction can be
obtained <cit.>
α_0 +
(1-|α_0|^2)z/(α̅_0z + 1/(α_1 + (1-|α_1|^2)z/(α̅_1z + 1/(α_2 + (1-|α_2|^2)z/(α̅_2z + ⋯))))),
where α_j is related to the g_j
occurring in (<ref>)
by α_j=1-2g_j, j≥1.
Similar to g-fractions, the Schur
fraction also terminates if |α_n|=1 for some n∈ℤ_+.
It may be noted that such a case occurs if and only if
f(z) is a finite Blaschke product <cit.>.
Let A_n(z) and B_n(z) denote the n^th partial
numerator and denominator of
(<ref>) respectively.
Then, with the initial values A_0(z)=α_0,
B_0(z)=1, A_1(z)=z
and
B_1(z)=α̅_0z,
the following recurrence relations hold
<cit.>
A_2n(z) =α_nA_2n-1(z)+A_2n-2(z)
B_2n(z) =α_nB_2n-1(z)+B_2n-2(z),
n≥1,
A_2n+1(z) =α̅_nzA_2n(z)+(1-|α_n|^2)zA_2n-1(z)
B_2n+1(z) =α̅_nzB_2n(z)+(1-|α_n|^2)zB_2n-1(z),
n≥1.
Using
(<ref>)
in
(<ref>),
we get
A_2n+1(z) = zA_2n-1(z) + α̅_n z A_2n-2(z)
B_2n+1(z) = zB_2n-1(z) + α̅_n z B_2n-2(z).
The relations
(<ref>)
and
(<ref>)
are sometimes written in the more
precise matrix form as
(
[ A_2p+1 B_2p+1; A_2p B_2p ]) =
(
[ z α̅_pz; α_p 1 ])
(
[ A_2p-1 B_2p-1; A_2p-2 B_2p-2 ]),
p≥1.
It is also known that
<cit.>
A_2n+1(z) =zB_2n^∗(z)
; B_2n+1(z)=zA_2n^∗(z)
A_2n(z) =B_2n+1^∗(z)
; B_2n(z)=A_2n+1^∗(z).
Here, and in what follows,
P_n^∗(z) = z^n \overline{P_n(1/z̅)} (the bar denoting complex conjugation)
for any polynomial P_n(z)
with complex coefficients and of degree n.
From (<ref>),
it follows that
A_2n+1(z)/B_2n+1(z) =
(A_2n^∗(z)/B_2n^∗(z))^-1,
A_2n(z)/B_2n(z) =
(A_2n+1^∗(z)/B_2n+1^∗(z))^-1.
Further, the even approximants of the Schur fraction
(<ref>)
coincide with the n^th approximants of the Schur algorithm,
so that
A_2n(z)/B_2n(z) converges to the
Schur function f(z) as n→∞.
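The recurrences (<ref>)-(<ref>) and the identities (<ref>) lend themselves to a direct numerical check; the sketch below builds A_n, B_n from random Schur parameters in 𝔻 and verifies A_2n+1(z) = zB_2n^∗(z) at a sample point, together with the even approximant staying in the closed unit disk.

```python
# Sketch: Schur-fraction recurrences for A_n, B_n with random alpha_n in the
# unit disk; check A_{2n+1}(z) = z B_{2n}^*(z), where
# P^*(z) = z^(deg P) * conj(P(1/conj(z))).
import numpy as np

rng = np.random.default_rng(4)
N = 6
alpha = 0.8 * (rng.random(N) - 0.5) + 0.8j * (rng.random(N) - 0.5)

def polyval(c, z):                   # ascending coefficient order
    return sum(ck * z ** k for k, ck in enumerate(c))

def star(c, z):                      # P^*(z), using the true degree of P
    c = np.trim_zeros(np.asarray(c), "b")
    return z ** (len(c) - 1) * np.conj(polyval(c, 1.0 / np.conj(z)))

def padd(p, q):                      # add polynomials of different lengths
    m = max(len(p), len(q))
    return np.pad(p, (0, m - len(p))) + np.pad(q, (0, m - len(q)))

def shift(c):                        # multiply a polynomial by z
    return np.concatenate(([0.0 + 0.0j], c))

A = [np.array([alpha[0]]), np.array([0.0, 1.0], dtype=complex)]    # A_0, A_1
B = [np.array([1.0 + 0.0j]), np.array([0.0, np.conj(alpha[0])])]   # B_0, B_1

for n in range(1, N):
    A.append(padd(alpha[n] * A[2 * n - 1], A[2 * n - 2]))            # A_{2n}
    B.append(padd(alpha[n] * B[2 * n - 1], B[2 * n - 2]))            # B_{2n}
    A.append(padd(np.conj(alpha[n]) * shift(A[2 * n]),
                  (1 - abs(alpha[n]) ** 2) * shift(A[2 * n - 1])))   # A_{2n+1}
    B.append(padd(np.conj(alpha[n]) * shift(B[2 * n]),
                  (1 - abs(alpha[n]) ** 2) * shift(B[2 * n - 1])))   # B_{2n+1}

z0 = 0.3 + 0.4j
for n in range(N):
    assert np.isclose(polyval(A[2 * n + 1], z0), z0 * star(B[2 * n], z0))
f_even = polyval(A[2 * N - 2], z0) / polyval(B[2 * N - 2], z0)
print("A_{2n+1} = z B_{2n}^* verified; |A_2n/B_2n|(z0) =", abs(f_even))
```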
From the point of view of their applications, it is obvious
that the parameters g_n of the g-fraction
(<ref>) contain hidden
information about the properties of the dynamical systems or the
special functions they represent.
One way to explore this hidden
information is through perturbation;
that is, through a study of the consequences when some disturbance
is introduced in the parameter sequence {g_n}.
The main objective of the present manuscript is to study the
structural and qualitative aspects of two
perturbations.
The first is when a finite number of the parameters g_j are missing,
in which case we call the corresponding g-fraction a gap-g-fraction.
The second case is replacing {g_n}_n=0^∞ by
a new sequence {g_n^(β_k)}_n=0^∞ in which the
k^th term g_k is replaced by g_k^(β_k).
The first case is illustrated
using Gaussian hypergeometric functions,
where we use the fact that
many g-fractions converge to ratios of
Gaussian hypergeometric functions in slit complex domains.
The second case is studied by applying the technique of coefficient
stripping <cit.> to the sequence of Schur parameters {α_j}.
This follows from the fact that the Schur fraction and the g-fraction are
completely determined by the related Schur parameters α_k's
and the g-parameters respectively, and that a perturbation in α_j
produces a unique change in the g_j and vice-versa.
The manuscript is organised as follows.
Section <ref>
provides structural relations for the three different cases
provided by the gap g-fraction. A particular ratio
of Gaussian hypergeometric functions is used to illustrate the results.
The modified g-fractions given in
Section <ref>
have a shift of g_k to g_k+1, and so on, for any fixed k.
Instead, the effect of changing g_k to any other value
g_k^(β_k) is discussed in
Section <ref>.
Illustrations of the results obtained in
Section <ref>,
leading to the characterization of a class of ratios of hypergeometric functions
as Pick functions, are outlined in
Section <ref>.
§ GAP G-FRACTIONS AND STRUCTURAL RELATIONS
As the name suggests, gap-g-fractions correspond to the
g-sequence {g_k}_k=0^∞
with missing parameters.
We study three cases in this section and in each the concept of tail sequences
of a continued fraction plays an important role. For more information on the tails
of a continued fraction, we refer to
<cit.>.
For z∈ℂ∖[1,∞), let ℱ(z) be the continued fraction
(<ref>) and
ℱ(k;z) =
1/(1 - (1-g_0)g_1z/(1 - ⋯ - (1-g_k-2)g_k-1z/(1 - (1-g_k-1)g_k+1z/(1 - (1-g_k+1)g_k+2z/(1 - ⋯))))).
Note that
(<ref>)
is obtained from
(<ref>)
by removing g_k for some arbitrary k
which cannot be obtained by letting g_k=0.
Let,
ℋ_k+1(z) =
g_k+1z/(1 - (1-g_k+1)g_k+2z/(1 - (1-g_k+2)g_k+3z/(1 - ⋯))),
so that -(1-g_k)ℋ_k+1(z) is the (k+1)^th
tail of ℱ(z).
We note that
<cit.>,
the existence of ℱ(z) guarantees the
existence of ℋ_k+1(z).
Further, if
h(k;z)=(1-g_k-1)ℋ_k+1(z),
k≥1,
then, from (<ref>)
and (<ref>)
we obtain the rational function
𝒳_k(h(k;z);z)/𝒴_k(h(k;z);z) =
1/(1 - (1-g_0)g_1z/(1 - ⋯ - (1-g_k-2)g_k-1z/(1 - h(k;z)))).
It is known
<cit.>,
that the k^th approximant of
(<ref>)
is given by the rational function
𝒳_k(0;z)/𝒴_k(0;z) =
1/(1 - (1-g_0)g_1z/(1 - ⋯ - (1-g_k-2)g_k-1z/1)) =
𝒮_k(0,z),
and that
𝒳_k(h(k;z);z)/𝒴_k(h(k;z);z) =
(𝒳_k(0;z) - h(k;z)𝒳_k-1(0;z))/(𝒴_k(0;z) - h(k;z)𝒴_k-1(0;z)).
Then,
𝒳_k(h(k;z);z)/𝒴_k(h(k;z);z) -
𝒳_k(0;z)/𝒴_k(0;z)
= h(k;z)[𝒳_k(0;z)𝒴_k-1(0;z) - 𝒳_k-1(0;z)𝒴_k(0;z)] / (𝒴_k(0;z)[𝒴_k(0;z) - h(k;z)𝒴_k-1(0;z)])
= h(k;z)z^k-1∏_j=1^k-1(1-g_j-1)g_j / (𝒴_k(0;z)[𝒴_k(0;z) - h(k;z)𝒴_k-1(0;z)]),
where the last equality follows from <cit.>.
Denoting d_j=(1-g_j-1)g_j, j≥1, we have from
(<ref>)
𝒳_k(h(k;z);z)/𝒴_k(h(k;z);z) =
𝒳_k(0;z)/𝒴_k(0;z) -
∏_j=1^k-1 d_j z^k-1 h(k;z) / (𝒴_k-1(0;z)𝒴_k(0;z)h(k;z) - [𝒴_k(0;z)]^2).
In the sequel, by ℱ(z) we will mean the
unperturbed g-fraction as given in
(<ref>)
with g_k∈[0,1], k∈ℤ_+.
Further, as the notation suggests, the rational function
𝒮_k(0;z) is independent of the parameter g_k
and is known whenever ℱ(z) is given.
The information of the missing parameter g_k
at the k^th position
is stored in h(k;z) and hence the notation
ℱ(k;z).
It may also be noted that the polynomials
𝒴_k(0;z)
can be easily computed from the Wallis recurrence
<cit.>
𝒴_j(0;z)=𝒴_j-1(0;z)-
(1-g_j-2)g_j-1z𝒴_j-2(0;z),
j≥2,
with the initial values 𝒴_0(0;z)=𝒴_1(0;z)=1.
Thus, we state our first result.
Suppose ℱ(z) is given.
Let ℱ(k;z) denote the
perturbed g-fraction in which the parameter g_k is missing. Then, with
d_j=(1-g_j-1)g_j, j≥1
ℱ(k;z) = 𝒮_k(0;z) -
∏_j=1^k-1 d_j z^k-1 h(k;z) / (𝒴_k-1(0;z)𝒴_k(0;z)h(k;z) - [𝒴_k(0;z)]^2),
where 𝒴_k(0;z), 𝒮_k(0;z) and
-(1-g_k-1)^-1(1-g_k)h(k;z) are respectively,
the k^th partial denominator,
the k^th approximant and
the (k+1)^th tail of ℱ(z).
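Theorem <ref> is easy to test numerically: evaluate the gap fraction directly by deleting g_k from the parameter list, and assemble the right-hand side from 𝒮_k, the Wallis recurrence for 𝒴_k, and the tail h(k;z). A Python sketch under arbitrary choices of parameters and truncation depths (all helper names are ours):

def cf_value(numerators, z):
    # backward evaluation of 1/(1 - n1 z/(1 - n2 z/(1 - ...)))
    t = 0j
    for n in reversed(numerators):
        t = n * z / (1 - t)
    return 1 / (1 - t)

g = [1.0 / (j + 2) for j in range(400)]          # g_0, g_1, ... in [0, 1]
z, k = 0.4 + 0.2j, 5
d = lambda j: (1 - g[j - 1]) * g[j]              # d_j = (1 - g_{j-1}) g_j

g_gap = g[:k] + g[k + 1:]                        # delete g_k
lhs = cf_value([(1 - g_gap[j]) * g_gap[j + 1] for j in range(300)], z)

S_k = cf_value([d(j) for j in range(1, k)], z)   # S_k(0; z)
Y = [1 + 0j, 1 + 0j]                             # Y_0, Y_1 (Wallis recurrence)
for j in range(2, k + 1):
    Y.append(Y[-1] - d(j - 1) * z * Y[-2])
H = 0j                                           # tail H_{k+1}(z), truncated
for j in range(k + 300, k + 1, -1):
    H = d(j) * z / (1 - H)
H = g[k + 1] * z / (1 - H)
h = (1 - g[k - 1]) * H                           # h(k; z)
prod_d = 1.0
for j in range(1, k):
    prod_d *= d(j)
rhs = S_k + prod_d * z ** (k - 1) * h / (Y[k] * (Y[k] - h * Y[k - 1]))
print(abs(lhs - rhs))                            # should be ~ 0

(The last line uses the form of the correction term displayed in the proof, which is algebraically identical to the one in the theorem.)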
It may be observed that the right side of
(<ref>)
is of the form
(a(z)h(k;z) + b(z)) / (c(z)h(k;z) + d(z)),
with a(z), b(z), c(z), d(z) being well defined polynomials.
Rational functions of such form are said to be
rational transformation of h(k;z) and occur frequently in
perturbation theory of orthogonal polynomials.
For example, see <cit.>.
A similar result for the perturbed g-fraction in which a finite number
of consecutive parameters are missing can be obtained by an analogous
argument; we state it directly as follows.
Let ℱ(z) be given. Let
ℱ(k,k+1,⋯,k+l-1;z) denote the perturbed g-fraction in which
the l consecutive parameters g_k,g_k+1,⋯,g_k+l-1 are missing.
Then,
ℱ(k,k+1,⋯,k+l-1;z) =
𝒮_k(0;z) - ∏_j=1^k-1 d_j z^k-1 h(k,k+1,⋯,k+l-1;z) / (𝒴_k-1(0;z)𝒴_k(0;z)h(k,k+1,⋯,k+l-1;z) - [𝒴_k(0;z)]^2),
where -(1-g_k-1)^-1(1-g_k+l-1)h(k,k+1,⋯,k+l-1;z)
is the (k+l)^th tail of ℱ(z).
The next result is about the perturbation in which only two parameters
g_k and g_l are missing, where l need not be k±1.
Let ℱ(z) be given.
Let ℱ(k,l;z) denote the perturbed g-fraction
in which two parameters g_k and g_l are missing, where
we assume l=k+m+1, m≥1.
Then
ℱ(k,l;z) = 𝒮_k(0;z) -
∏_j=1^k-1 d_j z^k-1 h(k,l;z) / (𝒴_k-1(0;z)𝒴_k(0;z)h(k,l;z) - [𝒴_k(0;z)]^2),
where -(1-g_k-1)^-1(1-g_k)h(k,l;z) is the perturbed (k+1)^th tail of
ℱ(z) in which g_l is missing and is given by
-(1-g_k-1)^-1(1-g_k)h(k,k+m+1;z) =
𝒮_m^(k+1)(0,z) -
∏_j=k+1^k+m d_j z^m h(k+m+1;z) / ([𝒴^(k+1)_m(0;z)]^2 -
𝒴^(k+1)_m-1(0;z)𝒴^(k+1)_m(0;z)h(k+m+1;z)),
where 𝒴^(k+1)_m(0;z) and 𝒮_m^(k+1)(0,z)
are respectively, the
m^th partial denominator and
m^th approximant of the
(k+1)^th tail of ℱ(z).
Here, -(1-g_l-1)^-1(1-g_l)h(l;z) is the (l+1)^th tail
of ℱ(z).
Let
ℋ_k+1(l;z) =
g_k+1z/(1 - (1-g_k+1)g_k+2z/(1 - ⋯ - (1-g_l-1)g_l+1z/(1 - (1-g_l+1)g_l+2z/(1 - ⋯))))
so that -(1-g_k)ℋ_k+1(l;z) is the perturbed
(k+1)^th tail of ℱ(z) in which g_l is missing.
Then we can write
ℱ(k,l;z) =
𝒳_k(h(k,l;z);z)/𝒴_k(h(k,l;z);z) =
1/(1 - (1-g_0)g_1z/(1 - ⋯ - (1-g_k-2)g_k-1z/(1 - h(k,l;z)))),
where h(k,l;z)=(1-g_k-1)ℋ_k+1(l;z).
Now, proceeding as in
Theorem <ref>,
we obtain
ℱ(k,l;z) = 𝒮_k(0;z) -
∏_j=1^k-1 d_j z^k-1 h(k,l;z) / (𝒴_k-1(0;z)𝒴_k(0;z)h(k,l;z) - [𝒴_k(0;z)]^2).
Hence all that remains is to find the expression for h(k,l;z)
or ℋ_k+1(l;z).
Now, let
ℋ_l+1(z) =
g_l+1z/(1 - (1-g_l+1)g_l+2z/(1 - (1-g_l+2)g_l+3z/(1 - ⋯))),
and h(l;z)=(1-g_l-1)ℋ_l+1(z).
From
(<ref>)
and
<cit.>,
we have
-(1-g_k)ℋ_k+1(l;z)
= -(1-g_k)g_k+1z/(1 - (1-g_k+1)g_k+2z/(1 - ⋯ - (1-g_l-2)g_l-1z/(1 - h(l;z))))
= 𝒳^(k+1)_l-k-1(h(l;z);z)/𝒴^(k+1)_l-k-1(h(l;z);z).
It is clear that the rational function
𝒳^(k+1)_l-k-1(0;z)/𝒴^(k+1)_l-k-1(0;z)
is the (l-k-1)^th approximant of the (k+1)^th tail
-(1-g_k)ℋ_k+1(z)
of ℱ(z).
Then, using <cit.>
we obtain
𝒳^(k+1)_l-k-1(h(l;z);z)/𝒴^(k+1)_l-k-1(h(l;z);z) -
𝒳^(k+1)_l-k-1(0;z)/𝒴^(k+1)_l-k-1(0;z) =
-h(l;z)∏_j=k^l-1[d_jz] / (𝒴^(k+1)_l-k-1(0;z)[𝒴^(k+1)_l-k-1(0;z) - h(l;z)𝒴^(k+1)_l-k-2(0;z)]).
Finally, using the fact that l=k+m+1, we obtain
-(1-g_k)ℋ_k+1(k+m+1;z) =
𝒮_m^(k+1)(0,z) -
∏_j=k+1^k+m d_j z^m h(k+m+1;z) / ([𝒴^(k+1)_k+m+1(0;z)]^2 - 𝒴^(k+1)_k+m(0;z)𝒴^(k+1)_k+m+1(0;z)h(k+m+1;z)),
where
𝒮_m^(k+1)(0,z) =
-(1-g_k)g_k+1z/(1 - (1-g_k+1)g_k+2z/(1 - ⋯ - (1-g_k+m-1)g_k+mz/1))
is the m^th approximant of the (k+1)^th tail of ℱ(z).
As mentioned earlier, from
(<ref>),
(<ref>) and
(<ref>),
it is clear that tail sequences play a
significant role in deriving the
structural relations for the gap g-fractions.
We now illustrate the role of tail
sequences
using particular g-fraction expansions.
§.§ Tail sequences using hypergeometric functions
The Gaussian hypergeometric function, with the
complex parameters a, b and c is defined by the power series
F(a,b;c;ω) =
∑_n=0^∞ ((a)_n(b)_n/((c)_n(1)_n)) ω^n,
|ω|<1,
where c≠0,-1,-2,⋯ and
(a)_n=a(a+1)⋯(a+n-1) is the Pochhammer symbol.
Two hypergeometric functions
F(a_1,b_1;c_1;ω) and F(a_2,b_2;c_2,ω)
are said to be contiguous if the difference between the corresponding parameters
is at most unity. A linear combination of two contiguous hypergeometric
functions is again a hypergeometric function. Such relations are called contiguous relations
and have been used to explore many hidden properties of
the hypergeometric functions; for example, many special functions
can be represented by ratios of Gaussian hypergeometric functions.
For more details, we refer to <cit.>.
Consider the Gauss continued fraction <cit.>
(with b↦ b-1 and c↦ c-1)
F(a,b;c;ω)/F(a,b-1;c-1;ω) =
1/(1 - (1-g_0)g_1ω/(1 - (1-g_1)g_2ω/(1 - (1-g_2)g_3ω/(1 - ⋯)))),
where
g_2p = (c-a+p-1)/(c+2p-1),
g_2p+1 = (c-b+p)/(c+2p),
p≥0.
Let k_p=1-g_p, p≥0. We aim to find the ratio of
hypergeometric functions given by the continued fraction
1/(1 - k_1ω/(1 - (1-k_1)k_2ω/(1 - (1-k_2)k_3ω/(1 - ⋯)))).
For this, first note that from
(<ref>),
we can write
ℛ(ω) =
1 - (1/k_0)[1 - F(a,b-1;c-1;ω)/F(a,b;c;ω)] =
1 - (1-k_1)ω/(1 - k_1(1-k_2)ω/(1 - k_2(1-k_3)ω/(1 - ⋯))).
Now, replacing b↦ b-1 and c↦ c-1 in
the contiguous relation
<cit.>
we obtain
F(a,b;c;ω) - F(a,b-1;c-1;ω) =
(a(c-b)/((c-1)c)) ω F(a+1,b;c+1;ω).
Hence, with k_0 = 1-g_0 = a/(c-1), we have
ℛ(ω)
= 1 - ((c-1)/a)[(F(a,b;c;ω) - F(a,b-1;c-1;ω))/F(a,b;c;ω)]
= 1 - ((c-b)/c) ω F(a+1,b;c+1;ω)/F(a,b;c;ω)
= (1-ω) F(a+1,b;c;ω)/F(a,b;c;ω),
where the last equality follows from the relation
F(a,b;c;ω) = (1-ω)F(a+1,b;c;ω) + ((c-b)/c) ω F(a+1,b;c+1;ω),
which is easily proved by comparing the coefficients of
ω^k on both sides.
Finally, using the well known result
<cit.>, we obtain
F(a+1,b;c;ω)/F(a,b;c;ω) =
1/(1 - (b/c)ω/(1 - ((c-b)(a+1)/(c(c+1)))ω/(1 - ((c-a)(b+1)/((c+1)(c+2)))ω/(1 - ⋯))))
Note that the continued fraction
(<ref>)
has also been derived by different means in
<cit.> and studied
in the context of geometric properties of
hypergeometric functions.
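This relation is also easy to confirm numerically. A short sketch using mpmath (truncation depth and parameter values are arbitrary choices of ours):

from mpmath import mp, hyp2f1
mp.dps = 30
a, b, c, w = 0.7, 1.3, 2.1, mp.mpc(0.25, 0.15)

def k_par(n):
    # k_{2p} = (a+p)/(c+2p-1), k_{2p+1} = (b+p)/(c+2p)
    return (a + n // 2) / (c + n - 1) if n % 2 == 0 else (b + n // 2) / (c + n - 1)

num = [k_par(1)] + [(1 - k_par(n - 1)) * k_par(n) for n in range(2, 150)]
t = mp.mpf(0)
for u in reversed(num):
    t = u * w / (1 - t)
print(1 / (1 - t) - hyp2f1(a + 1, b, c, w) / hyp2f1(a, b, c, w))  # ~ 0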
For further analysis, we establish the following
formal continued fraction expansion:
F(a+1,b;c+1;ω)/F(a,b;c;ω) =
1/(1 - (b(c-a)/(c(c+1)))ω/(1 - ((a+1)(c-b+1)/((c+1)(c+2)))ω/(1 - ((b+1)(c-a+1)/((c+2)(c+3)))ω/(1 - ⋯))))
Suppose for now, the left hand side of
(<ref>)
is denoted by 𝒢_1^(a,b,c)(ω). Again using the Gauss
continued fraction <cit.>
with a↦ a+1 and c↦ c+1,
we arrive at
[𝒢_1^(a,b,c)]^-1(ω) = 1 - (b(c-a)/(c(c+1))) ω F(a+1,b+1;c+2;ω)/F(a+1,b;c+1;ω).
In the relation
(<ref>),
interchanging a and b and then replacing a↦ a+1 and
c↦ c+1, we get
1 - (b(c-a)/(c(c+1))) ω F(a+1,b+1;c+2;ω)/F(a+1,b;c+1;ω) =
F(a,b;c;ω)/F(a+1,b;c+1;ω),
which implies
𝒢_1^(a,b,c)(ω) = F(a+1,b;c+1;ω)/F(a,b;c;ω).
As mentioned, the right hand side is only a
formal expansion for the left hand side in
(<ref>).
However, note the fact that the sequence {𝒫_j(ω)}_j=0^∞,
𝒫_2j(ω) =F(a+j,b+j;c+2j;ω),
𝒫_2j+1(ω) =F(a+j+1,b+j;c+2j+1;ω),
j≥0
satisfies the difference equation
𝒫_j(ω) = 𝒫_j+1(ω) - d_j+1ω𝒫_j+2(ω),
j≥0,
where
d_n = (b+j)(c-a+j)/((c+2j)(c+2j+1)) for n=2j+1≥1, j≥0,
d_n = (a+j)(c-b+j)/((c+2j-1)(c+2j)) for n=2j≥2, j≥1.
Thus, using
<cit.>,
we can conclude that the right side of
(<ref>)
indeed corresponds as well as converges to the
left side of
(<ref>),
which we state as
The following correspondence and convergence properties hold.
* With a, b, c≠0,-1,-2,⋯ complex constants,
F(a+1,b;c+1;ω)/F(a,b;c;ω) ∼ 1/(1 - (b(c-a)/(c(c+1)))ω/(1 - ((a+1)(c-b+1)/((c+1)(c+2)))ω/(1 - ((b+1)(c-a+1)/((c+2)(c+3)))ω/(1 - ⋯))))
* The continued fraction on the right side of
(<ref>)
converges to the meromorphic function f(ω) in the cut plane 𝔇,
where f(ω)=F(a+1,b;c+1;ω)/F(a,b;c;ω) and
𝔇={ω∈ℂ:|arg(1-ω)|<π}. The convergence is uniform
on every compact subset of {ω∈𝔇:f(ω)≠∞}.
We would like to mention here that the polynomial sequence
{𝒬_n(ω)}
corresponding to {𝒫_n(ω)}
and that arises during the discussion of the convergence of
Gauss continued fractions
<cit.>
is given by
𝒬_2j(ω) =F(a+j,b+j;c+2j;ω), j≥0
𝒬_2j+1(ω) =F(a+j,b+j+1;c+2j+1;ω), j≥0.
Also note that the continued fraction used in the right side of
(<ref>)
is
1/(1 - k_1(1-k_2)ω/(1 - k_2(1-k_3)ω/(1 - k_3(1-k_4)ω/(1 - ⋯)))).
Here, we recall that k_n=1-g_n, n≥0, where {g_n} are the parameters appearing in the Gauss continued fraction
(<ref>).
The following result gives a kind of generalization of
Proposition <ref>.
The correspondence and convergence properties of the
continued fractions involved can be discussed similar to the one for
𝒢_1^(a,b,c)(ω).
Let,
𝒢_n^(a,b,c)(ω) =
1/(1 - k_n(1-k_n+1)ω/(1 - k_n+1(1-k_n+2)ω/(1 - k_n+2(1-k_n+3)ω/(1 - ⋯))))
Then,
𝒢_2j^(a,b,c)(ω) = F(a+j,b+j;c+2j;ω)/F(a+j,b+j-1;c+2j-1;ω),   j≥1,
𝒢_2j+1^(a,b,c)(ω) = F(a+j+1,b+j;c+2j+1;ω)/F(a+j,b+j;c+2j;ω),   j≥0.
The case j=0 has already been established in
Proposition <ref>.
Comparing the continued fractions for
𝒢_2j+1^(a,b,c)(ω)
and 𝒢_2j-1^(a,b,c)(ω),
j≥1,
it can be seen that
𝒢_2j+1^(a,b,c)(ω)
can be obtained for
𝒢_2j-1^(a,b,c)(ω) j≥1
by shifting
a↦ a+1, b↦ b+1 and c↦ c+2.
For n=2j, j≥1,
we note that the continued fraction in right side of
(<ref>)
is nothing but the Gauss continued fraction
<cit.>
with the shifts a↦ a+j, b↦ b+j-1
and c↦ c+2j-1
in the parameters.
Instead of starting with k_n(1-k_n+1), as the first partial numerator
term in the continued fraction
(<ref>),
a modification by inserting a new term changes the hypergeometric ratio given in Proposition <ref>, thus leading to interesting consequences. We state this result as follows.
Let
ℱ_n^(a,b,c)(ω) =
1/(1 - k_nω/(1 - (1-k_n)k_n+1ω/(1 - (1-k_n+1)k_n+2ω/(1 - ⋯)))).
Then,
ℱ_2j+1^(a,b,c)(ω) = F(a+j+1,b+j;c+2j;ω)/F(a+j,b+j;c+2j;ω),   j≥0,
ℱ_2j+2^(a,b,c)(ω) = F(a+j+1,b+j+1;c+2j+1;ω)/F(a+j+1,b+j;c+2j+1;ω),   j≥0.
Denoting,
ℰ_n+1^(a,b,c)(ω) =
1 - (1/k_n)(1 - 1/𝒢_n^(a,b,c)(ω)) =
1 - (1-k_n+1)ω/(1 - k_n+1(1-k_n+2)ω/(1 - ⋯)),
n≥1,
we find from <cit.>,
ℱ_n+1^(a,b,c)(ω) =
ℰ_n+1^(a,b,c)(ω)/(1-ω) =
1/(1 - k_n+1ω/(1 - (1-k_n+1)k_n+2ω/(1 - ⋯))),
n≥1.
Hence, we need to derive the functions
ℰ_n+1^(a,b,c)(ω). For n=2j, j≥1,
using (<ref>)
and k_2j=(a+j)/(c+2j-1), we find that
(1/k_2j)[1 - 1/𝒢_2j^(a,b,c)(ω)] =
((c-b+j)/(c+2j)) ω F(a+j+1,b+j;c+2j+1;ω)/F(a+j,b+j;c+2j;ω).
Shifting a↦ a+j, b↦ b+j and c↦ c+2j in
(<ref>),
we find that
ℰ_2j+1^(a,b,c)(ω) = (1-ω) F(a+j+1,b+j;c+2j;ω)/F(a+j,b+j;c+2j;ω),
so that
ℱ_2j+1^(a,b,c)(ω) =
F(a+j+1,b+j;c+2j;ω)/F(a+j,b+j;c+2j;ω) =
1/(1 - k_2j+1ω/(1 - (1-k_2j+1)k_2j+2ω/(1 - ⋯))),
j≥1.
Repeating the above steps, we find that for n=2j+1, j≥0 and
k_2j+1=(b+j)/(c+2j), j≥0,
ℰ_2j+2^(a,b,c)(ω) = (1-ω) F(a+j+1,b+j+1;c+2j+1;ω)/F(a+j+1,b+j;c+2j+1;ω),
j≥0,
so that
ℱ_2j+2^(a,b,c)(ω) =
F(a+j+1,b+j+1;c+2j+1;ω)/F(a+j+1,b+j;c+2j+1;ω) =
1/(1 - k_2j+2ω/(1 - (1-k_2j+2)k_2j+3ω/(1 - ⋯))),
j≥0.
For particular values of ℱ_n^(a,b,c)(ω), further properties of these ratios of hypergeometric functions can be derived. One particular case, together with a few such ratios and their properties, is given in
Section <ref>.
Before proving that specific case, we consider another type of perturbation of g-fractions in the next section.
§ PERTURBED SCHUR PARAMETERS
As mentioned in Section <ref>,
the case of a single parameter
g_k being replaced by g_k^(β_k) can be studied using the
Schur parameters. It is obvious that this is equivalent to studying the
perturbed sequence {α_j^(β_k)}_j=0^∞,
where
α_j^(β_k) = α_j for j≠k, and α_k^(β_k) = β_k.
Hence, we start with a given Schur function and study the perturbed
Carathéodory function and its corresponding g-fraction.
The following theorem gives the structural relation between the Schur
function and the perturbed one. The proof follows the transfer matrix
approach, which has also been used earlier
in literature (see for example
<cit.>).
Let A_n(z) and B_n(z) be the n^th partial numerator
and denominator of the Schur fraction associated with the sequence
{α_n}_n=0^∞. If A_n(z;k)
and B_n(z;k) are the n^th partial numerator
and denominator of the Schur fraction associated with the sequence
{α_j^(β_k)}_j=0^∞ as defined in
(<ref>),
then the following structural relations hold for
p≥ 2k, k≥1.
z^k-1∏_j=0^k(1-|α_j|^2)
(
[ A_2p+1(z;k) A_2p(z;k); B_2p+1(z;k) B_2p(z;k); ])=
𝔗(z;k)
(
[ A_2p+1(z) A_2p(z); B_2p+1(z) B_2p(z); ]),
where the entries of the transfer matrix
𝔗(z;k) are given by
(
[ 𝔗_(1,1) 𝔗_(1,2); 𝔗_(2,1) 𝔗_(2,2); ])
= (
[ p_k(z,k)A_2k-1(z)+q_k^∗(z,k)A_2k-2(z) q_k(z,k)A_2k-1(z)+p_k^∗(z,k)A_2k-2(z); p_k(z,k)B_2k-1(z)+q_k^∗(z,k)B_2k-2(z) q_k(z,k)B_2k-1(z)+p_k^∗(z,k)B_2k-2(z); ]),
with
p_k(z,k) =(α_k-β_k)B_2k-1(z)+(1-βα̅_k)B_2k-2(z)
q_k(z;k) =(β_k-α_k)A_2k-1(z)-(1-α̅_kβ_k)A_2k-2(z).
Let
Ω_p(z;α)=
(
[ A_2p+1(z) B_2p+1(z); A_2p(z) B_2p(z); ])
Ω_p(z;α;k)=
(
[ A_2p+1(z;k) B_2p+1(z;k); A_2p(z;k) B_2p(z;k); ]).
Then the matrix relation
(<ref>)
can be written as
Ω_p(z;α) =T_p(α_p)·Ω_p-1(z;α)
=T_p(α_p)· T_p-1(α_p-1)·⋯·
T_1(α_1)·Ω_0(z;α),
p≥1,
with
T_p(α_p)=
(
[ z α̅_pz; α_p 1; ])
Ω_0(z;α)=
(
[ z α̅_0z; α_0 1; ])=
T_0(α_0).
From (<ref>), it is clear that
Ω_p(z;k;α) = T_p(α_p)⋯T_k+1(α_k+1)· T_k(β_k)· T_k-1(α_k-1)⋯T_1(α_1)·Ω_0(z;α).
Defining the associated polynomials of order k+1 as
Ω_p-(k+1)^(k+1)(z;α)=
T_p(α_p)T_p-1(α_p-1)⋯ T_k+1(α_k+1)Ω_0(z;α),
we have
T_p(α)T_p-1(α_p-1)⋯ T_k+1(α_k+1)=
Ω_p-(k+1)^(k+1)(z;α)[Ω_0(z;α)]^-1,
where [Ω_0(z;α)]^-1 denotes the matrix inverse of
Ω_0(z;α).
Now, using
(<ref>)
and
(<ref>)
in
(<ref>),
we get
Ω_p(z;k;α)=Ω_p-(k+1)^(k+1)(z;α)·[Ω_0(z;α)]^-1· T_k(β_k)·Ω_k-1(z;α).
Again from (<ref>),
Ω_p(z;α) =T_p(α_p)⋯ T_k+1(α_k+1)·T_k(α_k)⋯ T_1(α_1)Ω_0(z;α)
=Ω_p-(k+1)^(k+1)(z;α)Ω_0^-1(z;α)·Ω_k(z;α),
which means
Ω_p-(k+1)^(k+1)(z;α)=
Ω_p(z;α)[Ω_k(z;α)]^-1Ω_0(z;α).
Using
(<ref>)
in
(<ref>), we get
Ω_p(z;k;α)=Ω_p(z,α)[Ω_k(z,α)]^-1Ω_0(z;α)·
[Ω_0(z;α)]^-1· T_k(β_k)·Ω_k-1(z,α),
which implies
[Ω_p(z;k;α)]^T=[T_k(β_k)Ω_k-1(z,α)]^T·
[Ω_k(z,α)]^-T·
[Ω_p(z,α)]^T.
where [Ω_p(z,α)]^T denotes the matrix transpose of
Ω_p(z,α).
After a brief calculation, and using the relations
(<ref>),
it can be proved that the product
[T_k(β_k)Ω_k-1(z,α)]^T·Ω_k^-T(z,α)
precisely gives the transfer matrix
𝔗(z;k)
leading to
(<ref>).
As an important consequence of Theorem
(<ref>),
we have,
z^k-1∏_j=0^k(1-|α_j|^2)
(
[ A_2p(z;k); B_2p(z;k); ])=
𝔗_k(z;k)
(
[ A_2p(z); B_2p(z); ]),
which implies,
A_2p(z;k)/B_2p(z;k) =
(𝔗_(1,2) + 𝔗_(1,1)(A_2p(z)/B_2p(z))) / (𝔗_(2,2) + 𝔗_(2,1)(A_2p(z)/B_2p(z))).
This gives the perturbed Schur function as
f^(β_k)(z;k) =
(𝔗_(1,2) + 𝔗_(1,1)f(z)) / (𝔗_(2,2) + 𝔗_(2,1)f(z)).
We next consider a non-constant Schur function of the form f(z)=cz+d, where
|c|+|d|≤1, and apply the perturbation α_1↦β_1. Then
p_1(z,1) = (α_1-β_1)α̅_0z + (1-α̅_1β_1),   p_1^∗(z,1) = (1-α_1β̅_1)z + (α̅_1-β̅_1)α_0,
q_1(z,1) = (β_1-α_1)z - (1-α̅_1β_1)α_0,   q_1^∗(z,1) = (β̅_1-α̅_1) - (1-α_1β̅_1)α̅_0z.
The matrix entries are
τ_(1,1) = (α_1-β_1)z^2 + [(1-β_1α̅_1) - (1-α_1β̅_1)|α_0|^2]z + (β̅_1-α̅_1)α_0,
τ_(1,2) = (β_1-α_1)z^2 + [(1-α_1β̅_1)α_0 - (1-α̅_1β_1)α_0]z + (α̅_1-β̅_1)α_0^2,
τ_(2,1) = (α_1-β_1)(α̅_0)^2z^2 + [(1-β_1α̅_1)α̅_0 - (1-α_1β̅_1)α̅_0]z + (β̅_1-α̅_1),
τ_(2,2) = (β_1-α_1)α̅_0z^2 + [(1-α_1β̅_1) - (1-α̅_1β_1)|α_0|^2]z + (α̅_1-β̅_1)α_0.
The transformed Schur function is a rational function given by
f^(β_1)(z,1) =
(Az^3 + Bz^2 + Cz + D)/(Âz^3 + B̂z^2 + Ĉz + D̂),
where
A = (α_1-β_1)α̅_0c,
B = (β_1-α_1)(1-α̅_0d) + c(1-β_1α̅_1) - c|α_0|^2(1-α_1β̅_1),
C = (1-α_1β̅_1)(α_0-d|α_0|^2) + (1-α̅_1β_1)(d-α_0) + cα_0(β̅_1-α̅_1),
D = (β̅_1-α̅_1)(d-α_0)α_0,
and
 =(α_1-β_1)(α̅_0)^2c,
B̂=(β_1-α_1)(1-α̅_0d)α̅_0+
c(1-β_1α̅_1)α̅_0-
c(1-α_1β̅_1)α̅_0,
Ĉ =(1-α_1β̅_1)(1-dα̅_0)+
(1-β_1α̅_1)(dα̅_0-|α_0|^2)+
c(β̅_1-α̅_1),
D̂ =(β̅_1-α̅_1)(d-α_0).
This leads to the following easy consequence of Theorem
<ref>.
Let f(z)=cz+α_0, where |c|≤ 1-|α_0|, be a Schur function.
Then, with the perturbation α_1↦β_1,
the resulting Schur function is the rational function given by
f^(β_1)(z;1) = (Az^2+Bz+C)/(Âz^2+B̂z+Ĉ),
A≠0, Â≠0.
We now consider an example illustrating the above discussion.
Consider the sequence of Schur parameters
{α_n}_n=0^∞
given by
α_0=1/2 and α_n=2/(2n+1), n≥1.
Then, as in <cit.>,
the Schur function is f(z)=(1+z)/2
with
A_2m(z) = 1/2 + (2z^m+2 - 2(m+1)z^2 + 2mz)/((2m+1)(z-1)^2),
B_2m(z) = 1 + (z^m+2 + z^m+1 - (2m+1)z^2 + (2m-1)z)/((2m+1)(z-1)^2),
A_2m+1(z) = (z + z^2 - (2m+3)z^m+2 + (2m+1)z^m+3)/((2m+1)(z-1)^2),
B_2m+1(z) = z^m+1/2 + (2z - 2(m+1)z^m+1 + 2mz^m+2)/((2m+1)(z-1)^2).
We study the perturbation α_1↦β_1=1/2.
For the transfer matrix 𝔗(z;k),
the following polynomials are required.
p_1(z) = z/12 + 2/3,   p_1^∗(z) = 2z/3 + 1/12,
q_1(z) = -z/6 - 1/3,   q_1^∗(z) = -z/3 - 1/6.
The entries of 𝔗(z;k) are
𝔗_(1,1) = z^2/12 + z/2 - 1/12,   𝔗_(1,2) = -z^2/6 + 1/24,
𝔗_(2,1) = z^2/24 - 1/6,   𝔗_(2,2) = -z^2/12 + z/2 + 1/12.
Hence, the transformed Schur function obtained from
(<ref>)
is
f^(1/2)(z;1) = 2(z^2+3z+5)/(z^2-3z+20).
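This value can be double-checked against the Schur algorithm itself: one inverse Schur step with β_1=1/2 applied to the tail f_2(z)=2/(5-3z) (the second Schur iterate of f, an easy computation), followed by one step with α_0=1/2, must reproduce f^(1/2)(z;1). A small Python sketch (the function name is ours):

def inv_schur_step(alpha, tail_value, z):
    # f(z) = (alpha + z * f_next(z)) / (1 + conj(alpha) * z * f_next(z))
    t = z * tail_value
    return (alpha + t) / (1 + alpha.conjugate() * t)

z = 0.3 - 0.2j
f2 = 2 / (5 - 3 * z)                        # tail with parameters 2/5, 2/7, ...
f1_new = inv_schur_step(0.5 + 0j, f2, z)    # perturbed parameter beta_1 = 1/2
f_new = inv_schur_step(0.5 + 0j, f1_new, z)
print(f_new - 2 * (z**2 + 3*z + 5) / (z**2 - 3*z + 20))  # ~ 0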
Observe that f(z) and f^(1/2)(z;1) are analytic in 𝔻 with f(0)=f^(1/2)(0;1)=1/2, and
ω(z) = f^-1(f^(1/2)(z;1)) =
3z(z+5)/(z^2-3z+20),
where ω(z) is analytic in 𝔻 with ω(0)=0 and |ω(z)|<1.
Further, by the Schwarz lemma, |ω(z)|<|z| for 0<|z|<1 unless ω(z)
is a pure rotation. Consequently, the range of f^(1/2)(z;1) is contained in the range of f(z). The function f^(1/2)(z;1) is said to be subordinate to f(z), written f^(1/2)(z;1)≺ f(z)
for z∈𝔻
<cit.>.
We plot the ranges of both the Schur functions below.
In Figure <ref>, the outermost circle
is the unit circle while the middle one is the image of
|z|=0.9 under f(z) which is again a
circle with center at 1/2. The innermost figure is the image of
|z|=0.9 under f^(1/2)(z;1).
§.§ The change in Carathéodory function
Let the Carathéodory function associated with the
perturbed Schur function f^(β_k)(z;k) be
denoted by 𝒞^(β_k)(z;k).
Then, using
(<ref>),
we can write
𝒞^(β_k)(z;k) =1+zf^(β_k)(z;k)1-zf^(β_k)(z;k)
=(𝔗_2,2+z𝔗_1,2)+(𝔗_2,1+z𝔗_1,1)f(z)(𝔗_2,2-z𝔗_1,2)+(𝔗_2,1-z𝔗_1,1)f(z).
Further, using the relation
(<ref>),
we have
𝒞^(β_k)(z;k) =
(𝒴^-(z) + 𝒴^+(z)𝒞(z)) / (𝒲^-(z) + 𝒲^+(z)𝒞(z)),
where
𝒴^±(z) =
z(𝔗_(2,2)+z𝔗_(1,2))
± (𝔗_(2,1)+z𝔗_(1,1))
𝒲^±(z) =
z(𝔗_(2,2)-z𝔗_(1,2))
± (𝔗_(2,1)-z𝔗_(1,1)).
As an illustration, for the Schur function f(z)=(1+z)/2,
it is easy to verify that
𝒞(z) = (2+z+z^2)/(2-z-z^2)   and   𝒞^(1/2)(z;1) = (2z^3+7z^2+7z+20)/(-2z^3-5z^2-13z+20).
We plot these Carathéodory functions below.
In Figures <ref> and <ref>,
the ranges of both the original and perturbed Carathéodory functions
are plotted for |z|=0.9.
Interestingly, the range of 𝒞(z) becomes unbounded as |z|→1
(Figure <ref>), which is clear since z=1 is a pole of
𝒞(z).
The perturbed function 𝒞^(1/2)(z;1) retains the simple pole at z=1, since its denominator factors as -(z-1)(2z^2+7z+20), but its remaining poles (-7± i√(111))/4 lie outside the closed unit disk, and the perturbation visibly reshapes the plotted range (Figure <ref>).
As shown in
<cit.>,
the sequence
{γ_j}_j=0^∞
satisfying the recurrence relation
γ_p+1 = (γ_p-α̅_p)/(1-α_pγ_p),
p≥0.
where γ_0=1 and α_j's are the Schur parameters
plays an important role in the g-fraction expansion for a special
class of Carathéodory functions.
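Being a Möbius-type update, the recurrence is immediate to iterate numerically; the following small sketch (parameter choices ours) also illustrates that γ_p stays on the unit circle, since the map w ↦ (w-α̅_p)/(1-α_pw) preserves |w|=1:

def gamma_sequence(alphas):
    # gamma_0 = 1; gamma_{p+1} = (gamma_p - conj(alpha_p)) / (1 - alpha_p gamma_p)
    g = [1 + 0j]
    for a in alphas:
        g.append((g[-1] - a.conjugate()) / (1 - a * g[-1]))
    return g

print(gamma_sequence([0.5 + 0j, 2/3 + 0j, 0.4 + 0j]))        # all equal to 1
print([abs(x) for x in gamma_sequence([0.3 + 0.4j, 0.5j])])  # all of modulus 1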
Let {γ_j^(β_k)} correspond to the perturbed
Carathéodory function 𝒞^(β_k)(z;k).
Since only α_j is perturbed,
it is clear that γ_j remains unchanged for j=0,1,⋯,k.
The first change,
γ_k+1 to γ_k+1^(β_k),
occurs when α_k is replaced by β_k.
Consequently, γ_k+j, j≥2,
change to
γ_k+j^(β_k), j≥2, respectively.
We now show that γ_j
can be expressed as a bilinear transformation
of γ_j^(β_k) for j≥ k+1.
Let {γ_j}_j=0^∞ be the sequence corresponding to
{α_j}_j=0^∞ and {γ_j^(β_k)}_j=0^∞
that to {α_j^(β_k)}_j=0^∞. Then,
γ_k+j^(β_k) = (a̅_k+jγ_k+j - b_k+j)/(-b̅_k+jγ_k+j + a_k+j),
j≥1,
where
* a_k+1 = (1-α̅_kβ_k)/(1-|β_k|^2) and
b_k+1 = (β̅_k-α̅_k)/(1-|β_k|^2), (j=1).
* For j≥2,
(
[ a_k+j; b_k+j; ]) =
(1/(1-|α_k+j-1|^2))(
[ 1 α_k+j-1; α̅_k+j-1 1; ])
(
[ a_k+j-1-α̅_k+j-1b̅_k+j-1; b_k+j-1-α̅_k+j-1a̅_k+j-1; ]).
Consider first the expression
(a_k+1γ_k+1^(β_k) + b_k+1)/(b̅_k+1γ_k+1^(β_k) + a̅_k+1).
Substituting
γ_k+1^(β_k) = (γ_k-β̅_k)/(1-β_kγ_k)
and the given values of a_k+1 and b_k+1,
it simplifies to
[(1-α̅_kβ_k)(γ_k-β̅_k) + (β̅_k-α̅_k)(1-β_kγ_k)] / [(β_k-α_k)(γ_k-β̅_k) + (1-α_kβ̅_k)(1-β_kγ_k)]
= ((γ_k-α̅_k)(1-|β_k|^2)) / ((1-α_kγ_k)(1-|β_k|^2))
= γ_k+1.
Since |a_k+1|^2-|b_k+1|^2=(1-|α_k|^2)/(1-|β_k|^2)≠0,
(<ref>)
is proved for j=1.
Next, let
(a_k+2γ_k+2^(β_k) + b_k+2)/(b̅_k+2γ_k+2^(β_k) + a̅_k+2)
= [(a_k+2-α_k+1b_k+2)γ_k+1^(β_k) + (b_k+2-α̅_k+1a_k+2)] / [(b̅_k+2-α_k+1a̅_k+2)γ_k+1^(β_k) + (a̅_k+2-α̅_k+1b̅_k+2)]
= N(γ_k)/D(γ_k).
Substituting first the given values of
a_k+2 and b_k+2,
the numerator becomes
N(γ_k) = (1-|α_k+1|^2)[(a_k+1-α̅_k+1b̅_k+1)γ_k+1^(β_k) + (b_k+1-α̅_k+1a̅_k+1)],
and then using
γ_k+1^(β_k) = (a̅_k+1γ_k+1-b_k+1)/(-b̅_k+1γ_k+1+a_k+1),
N(γ_k) = (1-|α_k+1|^2)(|a_k+1|^2 - |b_k+1|^2)(γ_k+1-α̅_k+1).
With similar calculations, we obtain
D(γ_k) = (1-|α_k+1|^2)(|a_k+1|^2 - |b_k+1|^2)(1-α_k+1γ_k+1).
This means
N(γ_k)/D(γ_k) =
(γ_k+1-α̅_k+1)/(1-α_k+1γ_k+1) =
γ_k+2,
where
|a_k+2|^2-|b_k+2|^2=|a_k+1|^2-|b_k+1|^2≠0,
thus proving
(<ref>)
for j=2.
The remaining part of the proof follows by a simple induction on j.
With the condition |α_p|<1, it is clear that
(<ref>)
gives analytic self-maps of the unit disk.
Similar to the changes in mapping properties obtained as
a consequence of perturbation,
it is expected that
(<ref>)
may lead to interesting results in fractal geometry and complex dynamics.
Since γ_j and γ_j^(β_k) are related by a bilinear
transformation, it is clear that the expressions for a_k+j and
b_k+j, j≥1, are not unique.
It is known that if 𝒞(z) is real for real z, then
the α_p's are real and
γ_p=1, p=0,1,⋯.
Further, it is clear from
(<ref>)
that γ_j^(β_k)=1 whenever γ_j=1.
In this case, the following g-fraction is obtained
for k≥0.
(1-z)/(1+z)·𝒞^(β_k)(z) =
1/(1 - g_1ω/(1 - (1-g_1)g_2ω/(1 - ⋯ - (1-g_k)g_k+1^(β_k)ω/(1 - (1-g_k+1^(β_k))g_k+2ω/(1 - ⋯)))))
where g_j=(1-α_j-1)/2, j=1,⋯,k,k+2⋯,
g_k+1^(β_k)=(1-β_k)/2
and ω=-4z/(1-z)^2.
§ A CLASS OF PICK FUNCTIONS AND SCHUR FUNCTIONS
Let the Hausdorff sequence {ν_j}_j≥0
with ν_0=1 be given so that there
exists a bounded non-decreasing measure ν on [0,1]
satisfying
ν_j=∫_0^1σ^jdν(σ),
j≥0.
By <cit.>, the existence of
dν(σ) is equivalent to the power series
F(ω) = ∑_j≥0 ν_jω^j =
∫_0^1 dν(σ)/(1-σω)
having a continued fraction
expansion of the form
∫_0^1 dν(σ)/(1-σω) =
ν_0/(1 - (1-g_0)g_1ω/(1 - (1-g_1)g_2ω/(1 - (1-g_2)g_3ω/(1 - ⋯)))),
where ν_0≥0 and 0≤ g_p≤ 1, p≥0.
Such functions F(ω) are analytic in the slit domain
ℂ∖[1,∞)
and belong to the class of Pick functions.
We note that the Pick functions are analytic in the
upper half plane and have a positive imaginary part
<cit.>.
In the next result, we characterize some members of the class of Pick
functions using the gap g-fraction ℱ_2^(a,b,c)(ω).
The proof is similar to that of
<cit.> and
follows from <cit.>, given earlier
as <cit.>.
If a, b, c∈ℝ with -1<a≤ c and 0≤ b≤ c,
then the functions
ω ↦ F(a+1,b+1;c+1;ω)/F(a+1,b;c+1;ω) ;   ω ↦ ωF(a+1,b+1;c+1;ω)/F(a+1,b;c+1;ω)
ω ↦ F(a+2,b+1;c+2;ω)/F(a+1,b;c+1;ω) ;   ω ↦ ωF(a+2,b+1;c+2;ω)/F(a+1,b;c+1;ω)
ω ↦ F(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω) ;   ω ↦ ωF(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω)
are analytic in ℂ∖[1,∞) and each
function map both the open unit disk 𝔻 and the
half plane {ω∈ℂ: Re ω<1} univalently
onto domains that are convex in the direction of the
imaginary axis.
We would like to note here that by a domain convex in the direction
of imaginary axis, we mean that every line parallel to the imaginary
axis has either connected or empty intersection with the
corresponding domain
<cit.> (see also <cit.>).
With the given restrictions on a, b and c, ℱ_2^(a,b,c)(ω)
has a g-fraction expansion and hence by <cit.>, there
exists a non-decreasing function ν_0:[0,1]↦ [0,1] with a total increase of 1 and
F(a+1,b+1;c+1;ω)/F(a+1,b;c+1;ω) =
∫_0^1 dν_0(σ)/(1-σω),
ω∈ℂ∖[1,∞),
which implies
ωF(a+1,b+1;c+1;ω)/F(a+1,b;c+1;ω) =
∫_0^1 (ω/(1-σω)) dν_0(σ),
ω∈ℂ∖[1,∞).
Now, if we define
ν_1(σ) = (1/k_2)∫_0^σ ρ dν_0(ρ),
where k_2=(a+1)/(c+1)>0, it can be easily seen that
ν_1:[0,1]↦ [0,1] is again a non-decreasing
map with ν_1(1)-ν_1(0)=1.
Further, using the contiguous relation
F(a+1,b;c;ω) - F(a,b;c;ω) = (b/c) ω F(a+1,b+1;c+1;ω),
we obtain
ωF(a+2,b+1;c+2;ω)/F(a+1,b;c+1;ω) =
∫_0^1 (ω/(1-σω)) dν_1(σ),
ω∈ℂ∖[1,∞),
and hence
F(a+1,b+1;c+1;ω)/F(a+1,b;c+1;ω) =
1 + k_2∫_0^1 (ω/(1-σω)) dν_1(σ),
ω∈ℂ∖[1,∞).
Further, noting that the coefficient of ω in
F(a+2,b+1;c+2;ω)/F(a+1,b;c+1;ω) is
[(b+1)(c-a)]/[(c+1)(c+2)]=k_3+(1-k_3)k_2,
we define
ν_2(σ) = (1/(k_3+k_2(1-k_3)))∫_0^σ ρ dν_1(ρ),
and find that
F(a+2,b+1;c+2;ω)/F(a+1,b;c+1;ω) =
1 + [k_3+k_2(1-k_3)]∫_0^1 (ω/(1-σω)) dν_2(σ).
Finally from Gauss continued fraction
(<ref>),
we conclude that
F(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω) has a g-fraction
expansion and so there exists a map ν_3:[0,1]↦[0,1]
which is non-decreasing, ν_3(1)-ν_3(0)=1 and
ωF(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω) =
∫_0^1 (ω/(1-σω)) dν_3(σ),
ω∈ℂ∖[1,∞).
Defining for a<c
ν_4(σ) = (1/((1-k_2)k_3))∫_0^σ ρ dν_3(ρ),
so that (1-k_2)k_3>0, and using the fact that the coefficient
of ω in F(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω) is (1-k_2)k_3,
we obtain
F(a+2,b+1;c+2;ω)/F(a+1,b+1;c+1;ω) =
1 + (1-k_2)k_3∫_0^1 (ω/(1-σω)) dν_4(σ).
Thus, with ν_j, j=0,1,2,3,4, satisfying
the conditions of
<cit.>,
<cit.>,
the proof of the theorem is completed.
Ratios of Gaussian hypergeometric functions having mapping properties described in Theorem <ref>
are also found in <cit.>
but for the ranges -1≤ a≤ c and 0<b≤ c. Hence, for the common range -1<a≤ c and 0<b≤ c, two different ratios of hypergeometric functions belonging to the class of Pick functions are obtained, which suggests that more such ratios can be found for every admissible range.
It may be noted that the ratio of Gaussian hypergeometric functions in (<ref>)
denoted here as ℱ(z)
has the mapping properties given in Theorem
<ref>,
which is proved in <cit.>.
We now consider its g-fraction expansion with the parameter k_2 missing.
Using the contiguous relation
(<ref>)
and the notations used in Theorems
<ref>
and
<ref>,
it is clear that
ℱ_3^(a,b,c)(ω)=F(a+2,b+1;c+2;ω)/F(a+1,b+1;c+2;ω)
and
ℋ_3(ω) =
1 - 1/ℱ_3^(a,b,c)(ω) =
((b+1)/(c+2)) ω F(a+2,b+2;c+3;ω)/F(a+2,b+1;c+2;ω).
Then
h(2;ω) = (1-k_1)ℋ_3(ω) =
((c-b)(b+1)/(c(c+2))) ω F(a+2,b+2;c+3;ω)/F(a+2,b+1;c+2;ω).
Then, from Theorem <ref> (applied to the g-fraction (<ref>), whose parameters are g_0=0 and g_j=k_j, j≥1),
ℱ(2;ω)
= 1/(1-k_1ω) - k_1ω h(2;ω)/([1-k_1ω]h(2;ω) - [1-k_1ω]^2)
= c/(c-bω) - bcω h(2;ω)/(c(c-bω)h(2;ω) - (c-bω)^2),
which implies
ℱ(2;ω) =
c/(c-bω) - ((b(b+1)(c-b)/(c+2)) ω^2 R(ω)) / (((c-b)(b+1)(c-bω)/(c+2)) ω R(ω) - (c-bω)^2),
where R(ω) = F(a+2,b+2;c+3;ω)/F(a+2,b+1;c+2;ω);
that is, ℱ(2;ω) is given as a rational transformation of a new ratio of hypergeometric functions. It may also be noted that
for -1≤ a≤ c and 0<b≤ c,
<cit.>,
both ℱ(ω) and
ℱ(2;ω)
will map both the unit disk 𝔻 and the half plane
{ω∈ℂ:Re ω<1} univalently onto domains that are convex in the direction of the imaginary axis.
As an illustration, we plot both these functions in figures
(<ref>) and (<ref>).
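Numerically, the closed form above can be compared against a direct evaluation of the gap fraction; a sketch with mpmath (truncation depths are our arbitrary choices), using the parameters k_n of the Gauss continued fraction:

from mpmath import mp, hyp2f1
mp.dps = 30
a, b, c, w = 0.5, 1.2, 2.4, mp.mpc(0.2, 0.1)
k = lambda n: (a + n // 2) / (c + n - 1) if n % 2 == 0 else (b + n // 2) / (c + n - 1)

# direct evaluation: partial numerators of F(w) with k_2 deleted
num = [k(1), (1 - k(1)) * k(3)] + [(1 - k(n - 1)) * k(n) for n in range(4, 150)]
t = mp.mpf(0)
for u in reversed(num):
    t = u * w / (1 - t)
lhs = 1 / (1 - t)

# closed form built from h(2; w)
R = hyp2f1(a + 2, b + 2, c + 3, w) / hyp2f1(a + 2, b + 1, c + 2, w)
hw = (c - b) * (b + 1) / (c * (c + 2)) * w * R
rhs = c / (c - b * w) - b * c * w * hw / (c * (c - b * w) * hw - (c - b * w) ** 2)
print(lhs - rhs)  # ~ 0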
§.§ A class of Schur functions
From Theorem <ref>
we obtain
k_2j+1ω/(1 - (1-k_2j+1)k_2j+2ω/(1 - (1-k_2j+2)k_2j+3ω/(1 - ⋯)))
= 1 - F(a+j,b+j;c+2j;ω)/F(a+j+1,b+j;c+2j;ω)
= ((b+j)/(c+2j)) ω F(a+j+1,b+j+1;c+2j+1;ω)/F(a+j+1,b+j;c+2j;ω),
where the last equality follows from the contiguous relation
(<ref>).
Hence using <cit.>
we get
((1-z)/2)·(1-f_2j(z))/(1+zf_2j(z)) =
((b+j)/(c+2j)) F(a+j+1,b+j+1;c+2j+1;ω)/F(a+j+1,b+j;c+2j;ω),
j≥1,
where f_n(z) is the Schur function and
ω and z are related as ω=-4z/(1-z)^2.
Similarly, interchanging a and b in
(<ref>)
we obtain
((1-z)/2)·(1-f_2j+1(z))/(1+zf_2j+1(z)) =
((a+j+1)/(c+2j+1)) F(a+j+2,b+j+1;c+2j+2;ω)/F(a+j+1,b+j+1;c+2j+1;ω),
j≥0,
where ω=-4z/(1-z)^2.
Moreover, using the relation α_j-1=1-2k_j, j≥1,
the related sequence of Schur parameters is given by
α_j = (c-2b)/(c+j) for j=2n, n≥0, and α_j = (c-2a-1)/(c+j) for j=2n+1, n≥1.
We note the following particular case. For a=b-1/2 and
c=b, the resulting Schur parameters are
α_j^(b)=-b/(b+j), j≥0.
Such parameters have been considered in
<cit.>
(when b∈ℝ) in the
context of orthogonal polynomials on the unit circle.
These polynomials are known in modern literature as
Szegö polynomials and we suggest the interested readers to
refer <cit.> for further information.
Finally, as an illustration we note that while the Schur function
associated with the parameters
{α_j^(b)}_j≥0 is
f(z)=-1,
that associated with the parameters
{α^(b)_j}_j≥1
is given by
((1-z)/2)·(1-f^(b)(z))/(1+zf^(b)(z)) =
((b+1/2)/(b+1)) F(b+3/2,b+1;b+2;ω)/F(b+1/2,b+1;b+1;ω),
where ω=-4z/(1-z)^2.
We remark that this section presented specific illustrations of the results of Section <ref>, leading to the characterization of a class of ratios of hypergeometric functions. A similar characterization of functions involving ω=-4z/(1-z)^2, as in Section <ref>,
may yield further consequences of such perturbations of g-fractions.
99
Andrews-Askey-Roy-book
G. E. Andrews, R. Askey and R. Roy,
Special functions,
Encyclopedia of Mathematics and its Applications, 71,
Cambridge Univ. Press, Cambridge, 1999.
Swami-mapping-prop-BHF-2014-JCA
Á. Baricz and A. Swaminathan,
Mapping properties of basic hypergeometric functions,
J. Class. Anal.
5 (2014), no. 2, 115–128.
Castillo-pert-szego-rec-2014-jmaa
K. Castillo, On perturbed Szegő recurrences,
J. Math. Anal. Appl.
411 (2014), no. 2, 742–752.
Castillo-copolynomials-2015-jmaa
K. Castillo, F. Marcellán and J. Rivero,
On co-polynomials on the real line,
J. Math. Anal. Appl.
427 (2015), no. 1, 469–483.
Donoghue-interpolation-pick-function-rocky-mountain
W. F. Donoghue, Jr.,
The interpolation of Pick functions,
Rocky Mountain J. Math.
4 (1974), 169–173
Duren-book
P. L. Duren,
Univalent functions,
Grundlehren der Mathematischen Wissenschaften,
259, Springer-Verlag, New York(1983).
Garza-Marcellan-szego-spectral-JCAM-2009
L. Garza and F. Marcellán,
Szegő transformations and rational spectral transformations for associated polynomials,
J. Comput. Appl. Math.
233 (2009), no. 3, 730–738.
Ismail-Merkes-Styer-starlike-1990-complex-variables
M. E. H. Ismail, E. Merkes and D. Styer,
A generalization of starlike functions,
Complex Variables Theory Appl.
14 (1990), no. 1-4, 77-84.
JNT-Survey-Schur-PC-Szego
W. B. Jones, O. Njåstad and W. J. Thron,
Schur fractions, Perron Carathéodory fractions and Szegő polynomials, a survey,
in
Analytic theory of continued fractions, II (Pitlochry/Aviemore, 1985),
127–158, Lecture Notes in Math., 1199, Springer, Berlin.
Jones-Njasad-Thron-Moment-OP-CF-1989-BLMS
W. B. Jones, O. Njåstad and W. J. Thron,
Moment theory, orthogonal polynomials, quadrature, and continued fractions
associated with the unit circle,
Bull. London Math. Soc.
21 (1989), no. 2, 113–152.
Jones-Thron-book
W. B. Jones and W. J. Thron,
Continued fractions,
Encyclopedia of Mathematics and its Applications, 11,
Addison-Wesley Publishing Co., Reading, MA, 1980.
Kustner-g-fractions-2002-CMFT
R. Küstner,
Mapping properties of hypergeometric functions and
convolutions of starlike or convex functions of order α,
Comput. Methods Funct. Theory
2 (2002), no. 2, 597–610.
Kustner-g-fractions-JMAA-2007
R. Küstner,
On the order of starlikeness of the shifted Gauss hypergeometric function,
J. Math. Anal. Appl.
334 (2007), no. 2, 1363–1385.
Lisa-Waadeland-book-cf-with-application
L. Lorentzen and H. Waadeland,
Continued fractions with applications,
Studies in Computational Mathematics,
3, North-Holland, Amsterdam, 1992.
Merkes-AMS-1959-typically-real
E. P. Merkes,
On typically-real functions in a cut plane,
Proc. Amer. Math. Soc.
10 (1959), 863–868.
Njaastad-convernce-of-schur-algo-1990-PAMS
O. Njåstad,
Convergence of the Schur algorithm,
Proc. Amer. Math. Soc.
110 (1990), no. 4, 1003–1007.
Schur-papers-1917-1918
J. Schur,
Über Potenzreihen, die im Innern des Einheitskreises
beschränkt sind,
J. reine angewandte Math.
147 (1917), 205–232,
148 (1918/19), 122–145.
Simon-book-vol1
B. Simon,
Orthogonal polynomials on the unit circle. Part 1,
American Mathematical Society Colloquium Publications, 54, Part 1,
Amer. Math. Soc., Providence, RI, 2005.
Ranga-szego-polynomials-2010-AMS
A. Sri Ranga,
Szegő polynomials from hypergeometric functions,
Proc. Amer. Math. Soc.
138 (2010), no. 12, 4259–4270.
Szego-book
G. Szegő,
Orthogonal polynomials, fourth edition,
Amer. Math. Soc., Providence, RI, 1975.
Alexei-Runckel-points-RJ
A. V. Tsygvintsev,
On the convergence of continued fractions at Runckel's points,
Ramanujan J.
15 (2008), no. 3, 407–413.
Alexei-ABC-flow-2013-JAT
A. Tsygvintsev,
Bounded analytic maps, Wall fractions and ABC-flow,
J. Approx. Theory
174 (2013), 206–219.
Wall-cf-and-bdd-analytic-function-1944-BAMS
H. S. Wall,
Continued fractions and bounded analytic functions,
Bull. Amer. Math. Soc.
50 (1944), 110–119.
Wall_book
H. S. Wall,
Analytic Theory of Continued Fractions,
D. Van Nostrand Company, Inc., New York, NY, 1948.
Zhedanov-rational-spectral-JCAM-1997
A. Zhedanov,
Rational spectral transformations and orthogonal polynomials,
J. Comput. Appl. Math.
85 (1997), no. 1, 67–86.
|
http://arxiv.org/abs/1701.07646v1 | 20170126104429 | Brain State Flexibility Accompanies Motor-Skill Acquisition | [
"Pranav G. Reddy",
"Marcelo G. Mattar",
"Andrew C. Murphy",
"Nicholas F. Wymbs",
"Scott T. Grafton",
"Theodore D. Satterthwaite",
"Danielle S. Bassett"
] | q-bio.NC | [
"q-bio.NC"
] |
[1]Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104 USA
[2]Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
[3]Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
[4]Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, MD 21218 USA
[5]Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106 USA
[6]Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104, USA
[7]Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104 USA
[8]To whom correspondence should be addressed: [email protected].
Learning requires the traversal of inherently distinct cognitive states to produce behavioral adaptation. Yet, tools to explicitly measure these states with non-invasive imaging – and to assess their dynamics during learning – remain limited. Here, we describe an approach based on a novel application of graph theory in which points in time are represented by network nodes, and similarities in brain states between two different time points are represented as network edges. We use a graph-based clustering technique to identify clusters of time points representing canonical brain states, and to assess the manner in which the brain moves from one state to another as learning progresses. We observe the presence of two primary states characterized by either high activation in sensorimotor cortex or high activation in a frontal-subcortical system. Flexible switching among these primary states and other less common states becomes more frequent as learning progresses, and is inversely correlated with individual differences in learning rate. These results are consistent with the notion that the development of automaticity is associated with a greater freedom to use cognitive resources for other processes. Taken together, our work offers new insights into the constrained, low dimensional nature of brain dynamics characteristic of early learning, which give way to less constrained, high-dimensional dynamics in later learning.
§ KEYWORDS:
motor sequence learning, graph theory, discrete sequence production, brain state flexibility
§ INTRODUCTION
The human brain is an inherently adaptive <cit.>, plastic <cit.> organ. Its fundamental malleability supports changes to its architecture and function that are advantageous to human survival. Importantly, such changes can occur on multiple time scales: from the long time scales of evolution <cit.> to the shorter time scales of multi-year development <cit.>, or even short-term learning <cit.>. Notably, even in the shortest time scales of learning, adaptation can occur over multiple spatial scales <cit.>, from the level of single neurons <cit.> to the level of large-scale systems <cit.>. Moreover, this adaptation can affect functional dynamics <cit.> or can evoke a direct change in the structure of neuroanatomy, driving new dendritic spines <cit.>, axon collaterals <cit.>, and myelination <cit.>.
Malleability, adaptability, and plasticity often manifest as a variability in quantitative statistics that describe the structure or function of a system. In the large-scale human brain, such statistics can include measures of neurophysiological noise <cit.> or changes in patterns of resting state functional connectivity <cit.>. More recently, dynamic reconfiguration of putative functional modules in the brain – groups of functionally connected areas identified using community detection algorithms <cit.> – has been used to define a notion of network flexibility <cit.>, which differs across individuals and is correlated with individual differences in learning <cit.>, cognitive flexibility <cit.>, and executive function <cit.>.
Indeed, in the context of motor skill learning, dynamic network techniques have proven to be particularly advantageous for longitudinal designs, where data is collected from the same participants at multiple time points interspersed throughout the learning process <cit.>. Using a 6-week longitudinal design where participants trained motor sequences while undergoing functional magnetic resonance imaging, motor system activity was found to be associated with both increasing and decreasing motor system activity, with sequence-specific representations varying across multiple distinct timescales <cit.>. With a network modeling approach based on coherent activity between brain regions, the same dataset revealed the existence of a core-periphery structure that changes over the course of training and predicts individual differences in learning success <cit.>. And more recently, these changes were shown to reflect a growing autonomy between sensory and motor cortices, and the release of cognitive control hubs in frontal and cingulate cortices <cit.>. Yet despite these promising advances, dynamic network reconfiguration metrics are fundamentally unable to assess changes in the patterns of activity that are characteristic of brain dynamics, as they require the computation of functional connectivity estimates over extended time windows <cit.>.
To overcome this weakness, we developed an alternative technique inspired by network science to identify temporal activation patterns and to assess their flexibility <cit.>. Leveraging the same longitudinal dataset from the above studies, we begin by defining a brain state as a pattern of regional activity – for instance, estimated from functional magnetic resonance imaging (fMRI) – at a single time point <cit.> (Fig. <ref>). Time points with similar activity patterns are then algorithmically clustered using a graph-based clustering technique <cit.>, producing sets of similar brain states. Finally, by focusing on the transitions from one state to another, we estimate the rate of switching between states. This approach is similar to techniques being concurrently developed in the graph signal processing literature <cit.>, and it allows us to ask how activation patterns in the brain change as a function of learning. We address this question in the context of the explicit acquisition of a novel motor-visual skill, which is a quintessential learning process studied in both human and animal models. As participants practice the task, we hypothesize that the brain traverses canonical states differently, that characteristics of this traversal predict individual differences in learning, and that the canonical states themselves are inherently different in early versus late learning.
To test these hypotheses, 20 healthy adult human participants practiced a set of ten-element motor sequences. Execution of the sequences involved the conversion of a visual stimulus to a motor response in the form of a button press (see Fig. <ref>). BOLD data was acquired during task performance on 4 separate occasions each two weeks apart (Fig. <ref>a-b); between each consecutive pair of scanning sessions, participants practiced the sequences at home in approximately ten home training sessions. To assess behavioral change, we defined movement time (MT) to be the time between the first button press and the last button press for any given sequence, and learning rate was quantified by the exponential drop-off parameter of a two-term exponential function fit to the MT data. To assess the change in brain activity related to behavioral change, we divided the brain into 112 cortical and subcortical regions, and calculated regional BOLD time series (Fig. <ref>c). We defined a brain state to be a pattern of BOLD magnitudes across regions, at each time point. We then quantified the similarities in brain states across time with a rank correlation measure between all pairs of states to create a symmetrical correlation matrix for each trial block (Fig. <ref>d). Within each trial block we used network-based clustering algorithms to find recurrent brain states independent of their temporal order (Fig. <ref>e-f).
We observed three to five communities – brain states, with two anti-correlated “primary states” occurring more frequently than the rest. We also observed that “state flexibility” – the flexible switching among all brain states – increases with task practice, being largely driven by contributions from brain regions traditionally associated with task learning and memory. Moreover, individuals with higher state flexibility learned faster than individuals with less switching between brain states. These results demonstrate that the global pattern of brain activity offers important insights into neurophysiological dynamics supporting adaptive behavior, underscoring the utility of a state-based assessment of whole brain dynamics in understanding higher order cognitive functions such as learning.
§ MATERIALS AND METHODS
§.§ Experiment and Data Collection.
Ethics statement. In accordance with the guidelines set out by the Institutional Review Board of the University of California, Santa Barbara, twenty-two right-handed participants (13 females and 9 males) volunteered to participate and provided informed consent in writing. Separate analyses of the data acquired in this study are reported elsewhere <cit.>.
Experimental Setup and Procedure. Head motion was calculated for each subject as mean relative volume-to-volume displacement. Two participants were excluded from the following analyses: one failed to complete the entirety of the experiment and the other had persistent head motion greater than 5 mm during MRI scanning. The 20 remaining participants all had normal or corrected vision and none had any history of neurological disease or psychiatric disorders. In total, each participant completed at least 30 behavioral training sessions over the course of 6 weeks, a pre-training fMRI session and three test fMRI sessions. The training used a module that was installed on the participant's laptop by an experimenter. Participants were given instructions on how to use the module and were required to train at minimum ten days out of each of 3 fourteen day periods. Training began immediately after the pre-training fMRI session and test scans were conducted approximately fourteen days after each previous scan (during which training also took place). Thus a total of 4 scans were acquired over the approximately 6 weeks of training.
Training and trial procedure. Participants practiced a set of ten-element sequences in a discrete sequence-production (DSP) task, which required participants to generate these responses to visual stimuli by pressing a button on a laptop keyboard with their right hand (see Figure <ref>). Sequences were represented by a horizontal array of five square stimuli, where the thumb corresponded to the leftmost stimuli and the pinky corresponded to the rightmost stimuli. The imperative stimulus was highlighted in red and the next square to be pressed in the sequence was highlighted immediately after a correct key press. The sequence only continued once the appropriate key was pressed. Participants had an unlimited amount of time to complete each trial, and were encouraged to remain accurate rather than swift.
Each participant trained on the same set of six different ten element sequences, with three different levels of exposure: extensively trained (EXT) sequences that were practiced for 64 trials each, moderately trained (MOD) sequences that were practiced for 10 trials each, and minimally trained (MIN) sequences that were practiced for 1 trial each. Sequences included neither repetitions ("11", for example) nor patterns such as trills ("121", for example) or runs ("123", for example). All trials began with a sequence-identity cue, which informed participants which sequence they would have to type. Each identity cue was associated with only a single sequence and was composed of a unique shape and color combination. EXT sequences, for example, were indicated by a cyan or magenta circle, MOD sequences by a red or green triangle, and MIN sequences by orange or white stars. Participants reported no difficulty viewing the identity cues. After every set of ten trials, participants were given feedback about the number of error-free sequences produced and the mean time to produce an error-free sequence.
Each test session in a laboratory environment was completed after approximately ten home training sessions (over the course of fourteen days) and each participant took part in three test sessions, not including the pre-training session, which was identical to the training sessions. To familiarize the participants with the task, we introduced the mapping between the fingers and DSP stimuli and explained each of the identity cues prior to the pre-training session.
As each participant's training environment at home was different than the testing environment, arrangements were made to ease the transition to the testing environment (see Figure <ref> for the key layout during testing). Padding was placed under the participants' knees for comfort and participants were given a fiber optic response box with a configuration of buttons resembling that of the typical laptop used in training. For example, the distance between the centers of buttons in the top row was 20 mm (similar to the 20 mm between the "G" and "H" keys on a MacBook Pro) and the distance between the top row and lower left button was 32 mm (similar to the 37 mm between the "G" and spacebar keys on a MacBook Pro). The position of the box itself was adjustable to accommodate participants' different reaches and hand sizes. In addition, padding was placed both under the right forearm to reduce strain during the task and also between the participant and head coil of the MRI scanner to minimize head motion.
Participants were tested on the same DSP task that they practiced at home, and, as in the training sessions, participants were given unlimited time to complete the trials with a focus on maintaining accuracy and responding quickly. Once a trial was completed, participants were notified with a “+” which remained on their screen until the next sequence-identity cue was presented. All sequences were presented with the same frequency to ensure a sufficient number of events for each type. Participants were given the same feedback after every ten trials as they were in training sessions. Each set of ten trials (referred to hereafter as trial blocks) belonged to a single exposure type (EXT, MOD, or MIN) and had five trials for each sequence, which were separated by an inter-trial interval that lasted between 0 and 6 seconds. Each epoch was composed of six blocks (60 trials) with 20 trials for each exposure and each test session contained five epochs and thus 300 trials. Participants had a variable number of brain scans depending on how quickly they completed the tasks. However, the number of trials performed was the same for all participants, with the exception of two abbreviated sessions resulting from technical problems. In both cases, participants had only completed four out of five scan runs for that session when scanning was stopped. Data from these sessions are included in this study.
Behavioral apparatus. The modules on participants' laptop computers were used to control stimulus presentation. These laptops were running Octave 3.2.4 along with the Psychtoolbox version 3. Test sessions were controlled using a laptop running MATLAB version 7.1 (Mathworks, Natick, MA). Key-press responses and response times were measured using a custom fiber optic button box and transducer connected via a serial port (button box, HHSC-1×4-l; transducer, fORP932; Current Designs, Philadelphia, PA).
Behavioral estimates of learning. We defined movement time (MT) as the time between the first button press and the last button press for any single sequence. For the sequences of a single type, we fit a double exponential function to the MT <cit.> data in order to estimate learning rate. We used robust outlier correction in MATLAB (through the "fit.m" function in the Curve Fitting Toolbox with option "Robust" and type "Lar"): MT = D_1e^(-κt) + D_2e^(-λt), where t is time, κ is the exponential drop-off parameter used to describe the fast rate of improvement (which we called the learning rate), λ is the exponential drop-off parameter used to describe the slow, sustained rate of improvement, and D_1 and D_2 are real and positive constants. The magnitude of κ determines the shape of the learning curve, where individuals with larger κ values have a steeper drop-off in MT and thus are thought to be quicker learners (see Figure <ref>) <cit.>. This decrease in MT has been an accepted indicator of learning for several decades <cit.> and various forms have been tried for the fit of MT <cit.>, with variants of an exponential model being the most statistically robust choices. Importantly, this approach is also not dependent on an individual's initial performance or performance ceiling.
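The original fits were performed in MATLAB; a rough Python analogue of the robust two-term exponential fit (function names and the synthetic data below are ours, with soft-L1 loss standing in for the LAR option) is:

import numpy as np
from scipy.optimize import least_squares

def mt_model(p, t):
    D1, kappa, D2, lam = p
    return D1 * np.exp(-kappa * t) + D2 * np.exp(-lam * t)

t = np.arange(1, 200, dtype=float)                    # trial index
mt = mt_model([2.0, 0.15, 1.0, 0.005], t)             # synthetic MT data
mt += 0.05 * np.random.default_rng(0).standard_normal(t.size)

fit = least_squares(lambda p: mt_model(p, t) - mt,
                    x0=[1.0, 0.1, 1.0, 0.01], loss="soft_l1")
D1, kappa, D2, lam = fit.x                            # kappa = learning rate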
§.§ fMRI imaging.
Imaging procedures. Signals were acquired using a 3.0 T Siemens Trio with a 12-channel phased-array head coil. Each whole-brain scan epoch was created using a single-shot echo planar imaging sequence that was sensitive to BOLD contrast to acquire 37 slices per repetition time (repetition time (TR) of 2000 ms, 3 mm thickness, 0.5 mm gap) with an echo time of 30 ms, a flip angle of 90 degrees, a field of view of 192 mm, and a 64×64 acquisition matrix. Before the first round of data collection, we acquired a high-resolution T1-weighted sagittal sequence image of the whole brain (TR of 15.0 ms, echo time of 4.2 ms, flip angle of 90 degrees, 3D acquisition, field of view of 256 mm, slice thickness of 0.89 mm, and 256×256 acquisition matrix).
fMRI data preprocessing. Imaging data was processed and analyzed using Statistic Parametric Mapping (SPM8, Wellcome Trust Center for Neuroimaging and University College London, UK). We first realigned raw functional data, then coregistered it to the native T1 (normalized to the MNI-152 template with a resliced resolution of 3×3×3 mm), and then smoothed it with an isotropic Gaussian kernel of 8-mm full width at half-maximum. To control for fluctuations in signal intensity, we normalized the global intensity across all functional volumes. Using this pipeline of standard realignment, coregistration, normalization, and smoothing, we were able to correct for motion effects due to volume-to-volume fluctuations relative to the first volume in a scan run. The global signal was not regressed out of the voxel time series, given its controversial application to resting-state fMRI data <cit.> and the lack of evidence of its utility in analysis of task-based fMRI data. Furthermore, the functional connectivity matrices that we produce showed no evidence of strong global functional correlations but instead showed discrete organization in motor, visual and non-motor, non-visual areas <cit.>.
§.§ Network construction and analysis.
Partitioning into regions of interest We divided the brain into regions based on a standardized atlas <cit.>. There exist a number of atlases and the decision of which to use has been the topic of several recent studies on structural <cit.>, resting-state <cit.>, and task-based network architectures <cit.>. Consistent with prior graph-based studies of task-based fMRI <cit.>, we divided the brain into 112 cortical and subcortical regions using the Harvard-Oxford (HO) atlas of the FMRIB (Oxford Centre for Functional Magnetic Resonance Imaging of the Brain) Software Library <cit.>. For each participant and for each of the 112 regions, the regional mean BOLD was computed by separately averaging across all voxels in that area (see Figure <ref> a & b).
Wavelet decomposition. Historically, wavelet decomposition has been applied to fMRI data <cit.> to detect small signal changes in nonstationary time series with noisy backgrounds <cit.>. Here, we use the maximum-overlap discrete wavelet transform, which has been used extensively <cit.> to decompose regional time series into wavelet scales corresponding to specific frequency bands <cit.>. Because our sampling frequency was 2 s (1 TR), wavelet scale 1 corresponded to 0.125–0.25 Hz, and scale 2 to 0.06–0.125 Hz. To enhance sensitivity to task-related changes in BOLD magnitudes <cit.>, we examined wavelet scale 2, consistent with our previous work <cit.>. For a lengthier discussion of methodological considerations, see <cit.>.
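As a rough illustration, the stationary wavelet transform in PyWavelets is an undecimated transform closely related to the MODWT used here; a sketch (the wavelet family and data are placeholders of ours):

import numpy as np
import pywt

rng = np.random.default_rng(1)
bold = rng.standard_normal((60, 112))        # TRs x regions, synthetic stand-in

scale2 = np.empty_like(bold)
for r in range(bold.shape[1]):
    # level-2 detail coefficients span ~0.06-0.125 Hz at TR = 2 s
    coeffs = pywt.swt(bold[:, r], "db4", level=2)
    scale2[:, r] = coeffs[0][1]              # deepest level is listed first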
Constructing Time-by-Time Networks. We were interested in studying the similarities between brain states as individuals learn. We defined a brain state as a pattern of BOLD activity across brain regions at a single instant in time <cit.>. We measured the similarities between these states in each trial block, which comprised approximately 40–60 repetition times (TRs). We calculated the Spearman correlation of regional BOLD magnitudes between all possible pairs of time points (TRs). This procedure creates an undirected, weighted graph or network in which nodes represent time points and edges between nodes represent the correlation between brain states at different time points (see Figure <ref> c & d). Intuitively, this matrix – which we refer to as a “time-by-time” network – provides the necessary information to uncover common brain states <cit.>, and to study transitions between brain states, as a participant learns.
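A minimal sketch of this construction, assuming `bold` is an array of shape (n_timepoints, n_regions) of wavelet-filtered regional BOLD magnitudes:

```python
# Sketch: building the "time-by-time" network for one trial block.
import numpy as np
from scipy.stats import spearmanr

def time_by_time_network(bold):
    # spearmanr correlates columns, so transpose: variables = time points,
    # observations = the regional magnitudes at each time point.
    rho, _ = spearmanr(bold.T)
    np.fill_diagonal(rho, 0.0)   # ignore self-similarity of a time point
    return rho                   # (n_timepoints x n_timepoints) weighted graph
```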
Isolating Brain States Using Community Detection. To uncover common brain states in the “time-by-time” network, we used a network-based clustering technique known as community detection <cit.>. In particular, we chose a common community detection approach known as modularity maximization, where we optimize the following modularity quality function <cit.> using a Louvain-like <cit.> locally greedy heuristic algorithm <cit.>:
Q_0 = ∑_ij [A_ij - γP_ij] δ(g_i, g_j)
where 𝐀 is the time-by-time matrix, times i and j are assigned respectively to community g_i and g_j, the Kronecker delta δ ( g_i, g_j ) = 1 if g_i= g_j (and zero otherwise), γ is the structural resolution parameter, and P_ij is the expected weight of the edge between regions i and j under some null model. Consistent with prior work <cit.>, we used the Newman-Girvan null model <cit.>:
P_ij = k_ik_j/(2m)
where k_i =∑_j A_ij is the strength of region i and m = 1/2∑_ij A_ij. Importantly, the algorithm we use is a heuristic that implements a non-deterministic optimization <cit.>. Consequently we repeated the optimization 100 times <cit.>, and we report results summarized over those iterations by building what are known as consensus partitions <cit.> (see Figure <ref>). In order to do this, we construct a nodal association matrix 𝐀 from a set of N partitions, where A_i,j is equal to the number of times in the N partitions that node i and node j are in the same community. Furthermore, we construct a null nodal association matrix 𝐀^n, constructed from random permutations of the N partitions. This null association matrix indicates the number of times any two nodes will be assigned to the same community by chance. We then create the thresholded matrix 𝐀^T by setting any element A_i,j that is less than the corresponding null element A^n_i,j to 0. This procedure removes random noise from the nodal association matrix 𝐀. Subsequently, we use a Louvain-like method to obtain N new partitions of A^T into communities, where each of the N partitions is typically identical, and each of which is a consensus partition of the N original partitions.
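A compact sketch of this clustering step follows. It substitutes networkx's Louvain implementation (assumed available, networkx ≥ 2.8) for the authors' locally greedy Louvain-like code, clips negative correlations to zero for simplicity (an assumption, not the authors' choice), and builds the nodal association matrix; the thresholding against the permuted null association matrix is omitted for brevity.

```python
# Sketch: community detection on the time-by-time matrix A, plus the
# association matrix used to build consensus partitions.
import numpy as np
import networkx as nx

def detect_states(A, gamma=1.0, n_runs=100, seed=0):
    G = nx.from_numpy_array(np.clip(A, 0, None))
    partitions = [nx.community.louvain_communities(
                      G, weight="weight", resolution=gamma, seed=seed + i)
                  for i in range(n_runs)]
    # Nodal association matrix: how often two time points share a community.
    n = A.shape[0]
    assoc = np.zeros((n, n))
    for part in partitions:
        for community in part:
            idx = np.fromiter(community, dtype=int)
            assoc[np.ix_(idx, idx)] += 1
    return assoc / n_runs
```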
Recurrent Brain States. Each community obtained in the aforementioned pipeline includes a set of TRs that show similar patterns of regional BOLD magnitudes, and could thus be interpreted as representing a single, repeated brain state in a single trial block. We first sought to aggregate these brain states over trial blocks. To this end, we average the pattern of regional BOLD magnitudes across all TRs assigned to that community in that trial block. We then repeat community detection across all representative brain states found in the trial blocks to find sets of representative brain states for each subject at each scan. By averaging the pattern of BOLD magnitudes of the brain states in each set, we find a group of representative brain states for every subject at every scan.
Second, we sought to aggregate these subject-scan representative brain states over all scans to identify a group of representative brain states for each scan. We thus repeat community detection over the set of all subject-scan representative brain states, separated by scan and again average the pattern of regional BOLD magnitudes across all subject-scan representative brain states assigned to the same community. This final set of brain-states we consider to be scan-representative brain states for each scan of learning.
Finally, we sought to find analogous communities in each scan. Therefore, we repeated the community detection algorithm for these communities and interpreted two scan-representative brain states assigned to the same community as analogous. In summary, we repeatedly use this brain state isolation procedure hierarchically to first isolate representative brain states for each subject-scan combination, then for each scan, and finally to find brain states in each scan that are similar to one another.
§ RESULTS:
§.§ Time by time network analysis identifies frontal and motor states
Our first goal was to characterize the average anatomical distribution of BOLD magnitudes across all subjects and scans, to better understand the whole-brain activation patterns accompanying motor skill learning. To achieve this goal, we create a time-by-time network where nodes represent individual time points, and edges represent the Spearman correlation coefficient between the vector of regional BOLD magnitudes at time point i and time point j. We represented the time-by-time network as a graph, and from these graphs we identified 3 recurrent brain states, two of which were strongly anti-correlated (Pearson correlation coefficient r(446)=-0.4291, p=1.6951×10^{-21}). These anti-correlated states make up 95.67% of all time subjects spent learning, and are also the only states to be present in all scans. We therefore refer to these states as "primary states" and focus our analysis upon them. We refer to the first state as the “motor state,” characterized by strong activation of the extended motor system and anterior cingulate, as well as simultaneous deactivation of the medial primary visual cortex (Fig. <ref> A, Table <ref>). We refer to the second state as the “frontal state,” characterized by strong activation of a distributed set of regions in frontal and temporal cortices, as well as subcortical structures (Fig. <ref> B, Table <ref>).
While these two states were statistically present across the entire experiment, we did observe small fluctuations in the magnitudes of the regional activity of both states. Thus, natural questions to ask are (i) did either state become stronger or weaker with training? and (ii) did the frequency of primary states change with learning? To address the first question, we calculated the mean BOLD magnitude among all brain regions for each state. Using a repeated measures ANOVA, we found no significant differences among scans in either state (F(3,669)=1.17, p=0.3221; Figure <ref>). This suggests that the activation of these two states did not significantly change – on average – with the level of training. To address the second question, we calculated the proportion of all states that the primary states make up in each scan. Using a repeated measures ANOVA, we found no significant differences among scans (F(3,57)=0.17, p=0.9163). This suggests that the frequency of primary states remained the same during learning.
§.§ State flexibility increases with task practice
How does the brain traverse these states? Do individuals' traversals change with learning? To examine how the pattern of traversals through brain states changes during learning, we defined a “state flexibility” metric. Following <cit.>, we specified state flexibility (F) to be the number of state transitions (T) observed relative to the number of states (S), or F= T/S. Intuitively, state flexibility is a measure of the volatility versus rigidity in brain dynamics, directly representing the frequency of dynamic state changes. We observed that state flexibility increased monotonically with the number of trials practiced (repeated measures ANOVA: F(9, 171)=9.97, p=3.0417 × 10^-12, Figure <ref>). This suggests that as subjects learned the sequences, regional patterns of BOLD magnitudes became more variable, indicating more frequent transitions between different brain states.
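In code, the metric is a one-liner once consecutive time points have been assigned community labels; the sketch below assumes `labels` is the sequence of state assignments for one scan.

```python
# Sketch: state flexibility F = T / S for one scan.
import numpy as np

def state_flexibility(labels):
    labels = np.asarray(labels)
    transitions = np.count_nonzero(labels[1:] != labels[:-1])  # T
    n_states = len(np.unique(labels))                          # S
    return transitions / n_states
```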
An important question to consider is whether this change in state flexibility is related to the length of time that participants take to complete the practice trials. Specifically, because the experiment is self-paced, the length of time to complete the sequences decreased as participants practiced; subjects became quicker with experience. To ensure that the length of time to complete a sequence was not driving the observed changes in state flexibility, we constructed a non-parametric permutation-based null model by permuting the adjacency matrix 𝐀 uniformly at random while maintaining symmetry. Critically, this null model displayed a decrease in state flexibility with the number of trials practiced (F(9, 171)=2.6, p=0.0078, Figure <ref>), suggesting that neither the reduced trial duration nor the correlation values themselves can explain the observed increase in state flexibility; rather, the temporal structure of the data is required to produce it.
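A minimal version of this null model, feeding the same pipeline as above, shuffles the upper-triangle weights and mirrors them:

```python
# Sketch: permutation null -- shuffle off-diagonal entries of the
# time-by-time matrix uniformly at random while keeping it symmetric.
import numpy as np

def permuted_adjacency(A, rng=None):
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)
    B = np.zeros_like(A)
    B[iu] = rng.permutation(A[iu])   # shuffle upper-triangle weights
    return B + B.T                   # mirror to maintain symmetry
```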
§.§ Regional contributions to state flexibility vary by function
Do regions contribute differentially to state flexibility? To answer this question we conducted a “lesioning” analysis, where we calculated state flexibility for each subject and scan while “lesioning out,” or excluding, a single region. We then calculated the average difference between the true state flexibility for that subject and scan, and the lesioned state flexibility for each region across all trial groups. We normalized these values by subtracting off the mean effect of lesioning on flexibility.
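Schematically, and reusing the sketches above (with a hypothetical helper `labels_from` that extracts consensus labels from the association matrix; this name is a placeholder, not part of the original pipeline), the lesioning loop looks as follows:

```python
# Sketch: regional contributions to state flexibility via "lesioning".
# `bold` is (n_timepoints, 112); `true_flex` is the unlesioned flexibility.
import numpy as np

def regional_contributions(bold, true_flex):
    n_regions = bold.shape[1]
    contributions = np.zeros(n_regions)
    for r in range(n_regions):
        lesioned = np.delete(bold, r, axis=1)        # exclude region r
        labels = labels_from(detect_states(time_by_time_network(lesioned)))
        contributions[r] = true_flex - state_flexibility(labels)
    return contributions - contributions.mean()      # normalize by mean effect
```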
To assess statistical significance, we created a matrix of the contributions to state flexibility for all regions and subjects. We found the contribution from seven regions to be significant (p<0.05 for each region, df = 19) by computing a t-test between the ablated state flexibilities and the true state flexibility, while correcting for multiple comparisons across the 112 brain regions using the false discovery rate. By calculating the average of these contributions for each region, we identify negative contributors, the removal of which increases state flexibility, and positive contributors, the removal of which decreases state flexibility. We find that the significant negative contributors to state flexibility are associated generally with motor and visual function (supplementary motor area, cuneus cortex, and the postcentral gyrus). In contrast, the significant positive contributors to state flexibility are associated with more integrative processing in heteromodal association areas (temporal occipital fusiform cortex and planum polare on the temporoparietal junction) (Fig. <ref>, Table <ref>).
§.§ State flexibility is correlated with learning rate
The results thus far indicate that state flexibility is an important global feature of brain dynamics that significantly changes as individuals learn a new motor-visual skill. Yet, they do not address the question of how such brain dynamics relate directly to changes in behavior. Therefore, we next asked the question: are individual differences in state flexibility related to individual differences in learning rate (as defined in Methods)? Here, we focus solely on the most trained sequences, as the effects of learning are most dramatic in the most frequently practiced sequences <cit.>. Specifically, we estimate the correlation between the learning rate in each session and the corresponding change in state flexibility between sessions. All correlations are estimated using a linear mixed effects model that accounts for the effect of either subject or scan.
Using this method, we observe a significant positive correlation between state flexibility and learning rate (p=1.163×10^{-7}; Figure <ref>) when accounting for the effects of subject. That is, because subjects tend to be inherently better or worse at learning than other subjects, we normalize for inter-subject differences in learning rate, and we find a significant correlation between the change in state flexibility and learning rate. The change in state flexibility is positively correlated with learning rate, and thus larger increases in flexibility are associated with faster learning. Furthermore, we observe a significant correlation between the change in state flexibility and learning rate (p=0.0376) when accounting for the effects of scan. Learning rate tends to decrease as the number of scans increases; we therefore build this trend into our model and still find a significant correlation between state flexibility and learning rate. Extending our previous assertion, these results suggest that both individual differences and the larger patterns of change are correlated with learning rate.
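A sketch of this model in Python, assuming a data frame `df` with one row per subject-session pair and columns "learning_rate", "flex_diff", and "subject" (all names are placeholders, not taken from the original analysis):

```python
# Sketch: mixed-effects correlation between the change in state flexibility
# and learning rate, with subject as the grouping (random) factor.
import statsmodels.formula.api as smf

model = smf.mixedlm("learning_rate ~ flex_diff", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())   # the flex_diff coefficient tests the correlation
```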
§ DISCUSSION
In this work, we studied task-based fMRI data collected at 4 time points separated by about 2 weeks during which healthy adult participants learned a set of six 10-note finger sequences. During learning, we hypothesized that the brain would show a change in the manner in which it traversed brain states. We defined a state as a pattern of BOLD magnitude across 112 anatomically-defined brain regions. We identified two canonical states characteristic of the entire period of task performance, which showed high activation of motor cortex and frontal cortex, respectively. Interestingly, we observed that the flexibility with which participants switched among these canonical states and other less common states was lowest early in training and highest late in training, indicating the emergence of state flexibility. We find that the positive contributors to state flexibility are associated with integrative processing while the negative contributors are associated with motor and visual function. Finally, we observe that changes in state flexibility were correlated with learning rate: increasing state flexibility was correlated with higher learning rates.
§.§ Extensions of graph theoretical tools to the temporal domain
Over the past decade, tools from graph theory have offered important insights into the structure and function of the human and animal brain, both at rest and during cognitively demanding tasks <cit.>. In these applications, the nodes of the graph are traditionally thought of as neurons or brain areas, and the edges of the graph are defined by either anatomical tracts <cit.> or by functional connections <cit.>. Yet, the tools of graph theory are in fact much more general than these initial applications <cit.>. Indeed, recent extensions have brought these tools to other domains – from genetics <cit.> to orthopedics <cit.> – by carefully defining alternative graph representations of relational data. As a concrete example, a graph can be used to encode the relationships between movements or behaviors, by treating a movement as a node, and by linking subsequent movements (or actions) by the inter-movement interval <cit.>. Similarly, a graph can be used to code the temporal dependencies between stimuli, by treating a stimulus as a node, and by linking pairs of stimuli by their temporal transition probabilities <cit.>.
While these applications may initially seem vastly different, they in fact all share a common property: that entities are related to one another by some facet of time. Here, by contrast, we construct the edge-vertex dual of this more common form. We ask: How are times related to one another by some other entity? Specifically, we study how the brain state in one time point is related to the brain state in another time point, and we define a brain state as the vector of activation magnitudes across all regions of interest <cit.>. The notion that a pattern of activation reflects a brain state is certainly not a new one <cit.>. In the context of fMRI data, a common approach is to study the multi-voxel pattern of activation in a region of interest to better understand the representation of a stimulus <cit.>. And in the context of EEG and MEG data, the pattern of power or amplitude in a set of sensors or a set of reconstructed sources is frequently referred to as a microstate <cit.>. The composition and dynamics of these microstates have shown interesting cognitive and clinical utility, predicting working memory <cit.> and disease <cit.>. Yet, while patterns of activation are acknowledged as an important representation of a brain or cognitive state, little is known about how these states evolve into one another. Recent advances have made this possible by coding the relationships between brain states in a graph <cit.>. Here we capitalize on these advances to extract the community structure in such a graph, to identify canonical states, and to quantify the transitions between them. It will be interesting in future to broaden the analytical framework applied here to study other properties of the graph – including local clustering and global efficiency – to better understand how the brain traverses states over time.
§.§ Brain states characteristic of discrete sequence production
Using this unusual graph theory approach in which network nodes represent time points and network edges represent similarities in brain states across two time points, we were able to identify two canonical brain states that characterized the task-evoked activity dynamics across the entire experiment, extending across 6 weeks of intensive training. The most common state, perhaps unsurprisingly, was characterized by high BOLD magnitudes in regions of the extended motor cortex, including the bilateral precentral gyrus, left postcentral gyrus, bilateral superior parietal lobule, bilateral supramarginal gyrus, bilateral supplementary motor area, bilateral parietal operculum cortex, and bilateral Heschl's gyrus <cit.>. This map is consistent with the fact that this is an intensive motor-learning paradigm <cit.> in which participants acquire the skill necessary to perform a sequence of 10 finger movements over a short period of time. The second most common state was composed of a frontal-temporal-subcortical system, containing the anterior middle temporal gyrus, medial frontal cortex, parahippocampal gyrus, caudate, nucleus accumbens, and hippocampus. These areas are thought to play critical roles in sequence learning <cit.> facilitated by higher-order cognitive processes including reward learning <cit.>, cognitive control and executive function <cit.>, predicting the nature and timing of action outcomes <cit.>, and subcortical storage of motor sequence information <cit.>. This system is particularly interesting because it displayed a competitive relationship with the motor state, with a strongly anticorrelated activation profile, suggesting that frontal-subcortical circuitry affects control by transient, desynchronized interactions.
§.§ State flexibility, task practice, and learning rate
Beyond the anatomy of the states that characterize extended training on a discrete sequence production task, it is also useful to study the degree to which those states are expressed, and the manner in which one state moves into another state. The two primary states that we observed characterized 95.67% of all time points, indicating their canonical nature. Temporally, the brain frequently switched back and forth between these two states, with less frequent traversal of other non-primary states. We quantified this switching using a brain state flexibility measure <cit.>, and observed that flexibility increased significantly over the course of the 6 weeks of training. Moreover, brain state flexibility was negatively correlated with learning rate, being lowest early in training when behavioral adaptivity was greatest. These results suggest that consistent activation patterns characterize early training, when participants must learn the mapping of visual cues to motor responses, the use of the button box, and the patterns of finger movements. Later in learning, when the skill has become relatively automatic, participants display more varied progressions of activation patterns (higher brain state flexibility), potentially mirroring the greater freedom of their cognitive resources for other processes <cit.>.
Importantly, these results offer a complement to prior efforts to quantify network flexibility based on estimates of functional connectivity <cit.> where the nodes are brain regions and the edges are temporally defined correlations between those regions. Network flexibility appears to peak early in finger sequence training <cit.>, followed by a growing autonomy of motor and visual systems <cit.>. In combination with our results, these prior data suggest that there may be distinct time scales associated with brain variability at the level of activity (where variability may peak late) in comparison to the level of connectivity (where variability may peak early). Such a hypothesis could be directly validated in additional studies that reproduce the results we present here. The apparent separation in time scales of these processes over learning also supports the growing notion that the information housed in patterns of activity can be quite independent from information housed in patterns of connectivity <cit.>. For example, earlier studies have demonstrated that patterns of beta weights from a GLM do not necessarily map onto patterns of strong or weak functional connectivity <cit.>, the temporal dynamics of an activity time trace do not necessarily map onto patterns of functional connectivity <cit.>, and phenotypes indicative of psychiatric disease can be identified in functional connectivity while being invisible to methods focused on activity <cit.>. Together, these studies indicate that activity and connectivity can provide distinct information regarding the neurophysiological processes relevant for cognition and disease. They also in principle support the possibility of differential time scales of flexibility in activity and connectivity as a function of learning.
§.§ Methodological Considerations
There are several important methodological and conceptual considerations pertinent to this work. The first consideration we would like to discuss is a relatively philosophical one. It pertains to our use of the term “brain state”. It is important to disambiguate the use of brain state as a quantifiable and quantified object, defined as the pattern of activation magnitudes over all brain areas (strung out in a vector <cit.>), and other more conceptual notions of mental state or cognitive state. These latter notions can be difficult to quantify directly from imaging data, even if they may have relatively specific definitions from both psychological and clinical perspectives <cit.>. It will be important in future uses of our brain state detection and characterization technique to maintain clarity in the use of these terms.
The second important consideration relevant to this work is that the data that we study here were collected with a traditional 2 second TR. It would be very interesting to test for similar phenomena with the high-resolution BOLD imaging techniques available now, for example using multiband acquisitions. Such higher sampling could offer heightened sensitivity to changes in brain state flexibility related to individual differences in learning. Moreover, it could provide enhanced sensitivity to variations in brain state flexibility across different frequency bands, particularly higher frequency bands that have been shown to be sensitive to shared genetic variance <cit.>.
Finally, on a computational note, it is important to emphasize that the results described here are obtained via the application of a clustering technique <cit.> to identify brain states from the temporal graph. Importantly, the technique that we use – based on modularity maximization <cit.> – is a hard partitioning algorithm that seeks to solve an NP-hard problem using a clever heuristic <cit.>. Although modularity maximization can accurately recover planted network modules in synthetic tests <cit.>, it does have important limitations <cit.>. Therefore it would be interesting in future to examine the sensitivity of results to other clustering techniques available in the literature.
§ CONCLUSION
In summary, in this study we seek to better understand the changes in brain state that accompany the acquisition of a new motor skill over the course of extended practice. We treat the brain as a dynamical system whose states are characterized by a recognizable pattern of activation across anatomically defined cortical and subcortical regions. We apply tools from graph theory to study the temporal transitions (network edges) between brain states (network nodes). Our data suggest that the emergence of automaticity is accompanied by an increase in brain state flexibility, or the frequency with which the brain switches between activity states. Broadly, our work offers a unique perspective on brain variability, noise, and dynamics <cit.>, and its role in human learning.
§ ACKNOWLEDGMENTS
DSB would like to acknowledge support from the John D. and
Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the Army Research Office through contract number W911NF-14-1-0679, the National Institute of Health (1R01HD086888-01), and the National Science Foundation (BCS-1441502, CAREER PHY-1554488, BCS-1631550, and CNS-1626008). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.
|
http://arxiv.org/abs/1701.07640v2 | 20170126103203 | Detector-Independent Verification of Quantum Light | [
"J. Sperling",
"W. R. Clements",
"A. Eckstein",
"M. Moore",
"J. J. Renema",
"W. S. Kolthammer",
"S. W. Nam",
"A. Lita",
"T. Gerrits",
"W. Vogel",
"G. S. Agarwal",
"I. A. Walmsley"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
National Institute of Standards and Technology, 325 Broadway, Boulder, Colorado 80305, USA
National Institute of Standards and Technology, 325 Broadway, Boulder, Colorado 80305, USA
National Institute of Standards and Technology, 325 Broadway, Boulder, Colorado 80305, USA
Institut für Physik, Universität Rostock, Albert-Einstein-Straße 23, D-18059 Rostock, Germany
Texas A&M University, College Station, Texas 77845, USA
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
We introduce a method for the verification of nonclassical light which is independent of the complex interaction between the generated light and the material of the detectors.
This is accomplished by means of a multiplexing arrangement.
Its theoretical description yields that the coincidence statistics of this measurement layout is a mixture of multinomial distributions for any classical light field and any type of detector.
This allows us to formulate bounds on the statistical properties of classical states.
We apply our directly accessible method to heralded multiphoton states which are detected with a single multiplexing step only and two detectors, which are in our work superconducting transition-edge sensors.
The nonclassicality of the generated light is verified and characterized through the violation of the classical bounds without the need for characterizing the used detectors.
Detector-Independent Verification of Quantum Light
I. A. Walmsley
December 30, 2023
==================================================
*Introduction.—
The generation and verification of nonclassical light is one of the main challenges for realizing optical quantum communication and computation <cit.>.
Therefore, robust and easily applicable methods are required to detect quantum features for real-world applications; see, e.g., <cit.>.
The complexity of producing reliable sensors stems from the problem that new detectors need to be characterized.
For this task, various techniques, such as detector tomography <cit.>, have been developed.
However, such a calibration requires many resources, for example, computational or numerical analysis, reference measurements, etc.
From such complex procedures, the interaction between quantum light and the bulk material of the detector can be inferred and quantum features can be uncovered.
Nevertheless, the verification of nonclassicality also depends on the bare existence of criteria that are applicable to this measurement.
Here, we prove that detectors with a general response to incident light can be employed in a well-characterized optical detection scheme to identify nonclassical radiation fields through simple nonclassicality conditions.
The concept of device independence has recently gained a lot of attention because it allows one to employ even untrusted devices; see, e.g., <cit.>.
For instance, device-independent entanglement witnesses can be used without relying on properties of the measurement system <cit.>.
It has been further studied to perform communication and computation tasks <cit.>.
Detector independence has been also applied to state estimation and quantum metrology <cit.> to gain knowledge about a physical system which might be too complex for a full characterization.
In parallel, remarkable progress has been made in the field of well-characterized photon-number-resolving (PNR) detectors <cit.>.
A charge-coupled-device camera is one example of a system that can record many photons at a time.
However, it also suffers inherent readout noise.
Still, the correlation between different pixels can be used to infer quantum correlated light <cit.>.
Another example of a PNR device is a superconducting transition-edge sensor (TES) <cit.>.
This detector requires a cryogenic environment, and its operation is based on superconductivity.
Hence, a model for this detector would require the quantum mechanical treatment of a solid-state bulk material which interacts with a quantized radiation field in the frame of low-temperature physics.
Along with the development of PNR detectors, multiplexing layouts define another approach to realize photon-number resolution <cit.>.
The main idea is that an incident light field, which consists of many photons, is split into a number of spatial or temporal modes, which consist of a few photons only.
These resulting beams are measured with single-photon detectors which do not have any photon-number-resolution capacity.
They can only discriminate between the presence (“click”) and absence of absorbed photons.
Hence, the multiplexing is used to get some insight into the photon statistics despite the limited capacity of the individual detectors.
With its resulting binomial click-counting statistics, one can verify nonclassical properties of correlated light fields <cit.>.
Recently, a multiplexing layout has been used in combination with TESs to characterize quantum light with a mean photon number of 50 and a maximum number of 80 photons for each of the two correlated modes <cit.>.
In this Letter, we formulate a method to verify nonclassical light with arbitrary detectors.
This technique is based on a well-defined multiplexing scheme and individual detectors which can discriminate different measurement outcomes.
The resulting correlation measurement is always described as a mixture of multinomial distributions in classical optics.
Based on this finding, we formulate nonclassicality conditions in terms of covariances to directly certify nonclassical light.
Nonclassical light is defined in this work as a radiation field which cannot be described as a statistical mixture of coherent light <cit.>.
We demonstrate our approach by producing heralded photon-number states from a parametric down-conversion (PDC) source.
Already a single multiplexing step is sufficient to verify the nonclassicality of such states without the need to characterize the used TESs.
In addition to our method presented here, a complementary study is provided in Ref. <cit.>.
There we use a quantum-optical framework to perform additional analysis of the measurement layout under study.
*Theory.—
The detection scenario is shown in Fig. <ref>.
Its robustness to the detector response is achieved by the multiplexing layout whose optical elements, e.g., beam splitters, are much simpler and better characterized than the detectors.
Our only broad requirement is that the measured statistics of the detectors are relatively similar to each other.
Here we are not using multiplexing to improve the photon-number detection (see, e.g., Ref. <cit.>).
Rather, we employ this scheme to get nonclassicality criteria that are independent of the properties of the individual detectors.
First, we consider a single coherent, classical light field.
The detector can resolve arbitrary outcomes k=0,…,K—or, equivalently, K+1 bins <cit.>—which have a probability p_k.
If the light is split by 50/50 beam splitters as depicted in Fig. <ref> and measured with N individual and identical detectors, we get the probability p_k_1⋯ p_k_N to measure k_1 with the first detector, k_2 with the second detector, etc.
Now, N_k is defined as the number of individual detectors which measure the same outcome k.
This means we have N_0 times the outcome 0 together with N_1 times the outcome 1, etc., from the N detectors, N=N_0+⋯+N_K.
For example, k_1=K and k_2=k_3=k_4=0 yields N_K=1 and N_0=3 for N=4 detectors (N_k=0 for all 0<k<K).
The probability to get any given combination of outcomes, N_0,…,N_K, from the probabilities p_k_1⋯ p_k_N is known to follow a multinomial distribution <cit.>,
c(N_0,…,N_K)=[N!/(N_0!⋯ N_K!)] p_0^{N_0}⋯ p_K^{N_K}.
To ensure general applicability, we counter any deviation from the 50/50 splitting and differences between the individual detectors by determining a corresponding systematic error (on the order of 1% in our experiment); see the Supplemental Material <cit.> for the error analysis.
For a different intensity, the probabilities p_k of the individual outcomes k might change.
Hence, we consider in the second step a statistical mixture of arbitrary intensities.
This generalizes the distribution in Eq. (<ref>) by averaging over a classical probability distribution P,
c(N_0,…,N_K)=⟨[N!/(N_0!⋯ N_K!)] p_0^{N_0}⋯ p_K^{N_K}⟩
= ∫ dP(p_0,…,p_K) [N!/(N_0!⋯ N_K!)] p_0^{N_0}⋯ p_K^{N_K}.
Because any light field in classical optics can be considered as an ensemble of coherent fields <cit.>, the measured statistics of the setup in Fig. <ref> follows a mixture of multinomial distributions (<ref>).
This is not necessarily true for nonclassical light as we will demonstrate.
The distribution (<ref>) applies to arbitrary detectors and includes the case of on-off detectors (K=1), which yields a binomial distribution <cit.>.
Also, we determine the number of outcomes, K+1, directly from our data.
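To illustrate Eq. (<ref>), the following sketch samples the classical prediction for N=2 detectors; the gamma-distributed intensity fluctuations and the Poisson-like map from intensity to bin probabilities are placeholders chosen only for illustration, since the derivation holds for any classical P and any detector response.

```python
# Sketch: a P-mixture of multinomials, the classical coincidence statistics
# after one multiplexing step.
import numpy as np

rng = np.random.default_rng(1)
N, K = 2, 7                                   # detectors, highest bin index

def bin_probabilities(intensity):             # placeholder detector response
    weights = np.exp(-intensity) * intensity ** np.arange(K + 1)
    return weights / weights.sum()

counts = np.zeros([N + 1] * (K + 1))          # histogram over (N_0, ..., N_K)
for _ in range(100_000):
    mu = rng.gamma(2.0, 2.0)                  # classical intensity fluctuations
    outcome = rng.multinomial(N, bin_probabilities(mu / N))
    counts[tuple(outcome)] += 1
c = counts / counts.sum()                     # sampled c(N_0, ..., N_K)
```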
Let us now formulate a criterion that allows for the identification of quantum correlations.
The mean values of multinomial statistics obey \overline{N_k}=Np_k <cit.>.
Averaging over P yields
\overline{N_k}=N⟨ p_k⟩.
In the same way, we get for the second-order moments, \overline{N_kN_k'}=N(N-1)p_kp_k'+δ_k,k'Np_k <cit.> with δ_k,k'=1 for k=k' and δ_k,k'=0 otherwise, an averaged expression
\overline{N_kN_k'}= N(N-1)⟨ p_kp_k'⟩+δ_k,k'N⟨ p_k⟩.
Thus, we find the covariance from Eqs. (<ref>) and (<ref>),
\overline{Δ N_kΔ N_k'}
= N⟨ p_k⟩(δ_k,k'-⟨ p_k'⟩)
+N(N-1)⟨Δ p_kΔ p_k'⟩,
where the overline denotes the mean over the counting statistics and ⟨·⟩ the average over the classical distribution P.
Note that the multinomial distribution has the covariances \overline{Δ N_kΔ N_k'}=Np_k(δ_k,k'-p_k') <cit.>.
Multiplying Eq. (<ref>) with N and using Eq. (<ref>), we can introduce the (K+1)×(K+1) matrix
M= (N\overline{Δ N_kΔ N_k'}-\overline{N_k}(Nδ_k,k'-\overline{N_k'}))_k,k'=0,…,K
= N^2(N-1)(⟨Δ p_kΔ p_k'⟩)_k,k'=0,…,K.
As the covariance matrix (⟨Δ p_kΔ p_k'⟩)_k,k' is nonnegative for any classical probability distribution P, we can conclude:
We have a nonclassical light field if
0≰(N\overline{Δ N_kΔ N_k'}-\overline{N_k}(Nδ_k,k'-\overline{N_k'}))_k,k'=0,…,K;
i.e., the symmetric matrix M in Eq. (<ref>) has at least one negative eigenvalue.
In other words, M≱ 0 means that fluctuations of the parameters p_k in (⟨Δ p_kΔ p_k'⟩)_k,k' are below the classical threshold of zero.
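In practice, criterion (<ref>) can be evaluated directly from the recorded coincidence events. A minimal sketch, assuming `samples` is an integer array of shape (number of events, K+1) whose rows are the recorded tuples (N_0,…,N_K), each summing to N:

```python
# Sketch: sampled matrix M and its minimal eigenvalue; classical light
# requires M >= 0, so a negative eigenvalue certifies nonclassicality
# (statistical significance then follows from the sampling error of M).
import numpy as np

def nonclassicality_matrix(samples, N):
    mean = samples.mean(axis=0)                 # sampled mean of N_k
    cov = np.cov(samples, rowvar=False)         # sampled covariances
    return N * cov - (np.diag(N * mean) - np.outer(mean, mean))

min_eig = np.linalg.eigvalsh(nonclassicality_matrix(samples, N=2)).min()
```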
Based on condition (<ref>), we will experimentally certify nonclassicality.
*Experimental setup.—
Our experimental implementation is shown in Fig. <ref>(a).
A PDC source produces correlated photons.
Conditioned on the detection of k clicks from the heralding detector, we measure the click-counting statistics c(N_0,…,N_K), Eq. (<ref>).
The key components of our experiment are (i) the PDC source and (ii) the three TESs used as our heralding detector and as our two individual detectors after the multiplexing step.
(i) PDC source.
Our PDC source is a waveguide-written 8 mm-long periodically poled potassium titanyl phosphate crystal.
We pump a type-II spontaneous PDC process with laser pulses at 775 nm and a full width at half maximum of 2 nm at a repetition rate of 75 kHz.
The heralding idler mode (horizontal polarization) is centered at 1554 nm, while the signal mode (vertical polarization) is centered at 1546 nm.
The output signal and idler pulses are spatially separated with a polarizing beam splitter (PBS).
The pump beam is discarded using an edge filter.
Subsequently, the other beams are filtered by 3 nm bandpass filters in order to filter out the broadband background which is typically generated in dielectric nonlinear waveguides <cit.>.
(ii) TES detectors.
We use superconducting TESs <cit.> as our detectors.
They consist of 25 μ m×25 μ m× 20 nm slabs of tungsten inside an optical cavity designed to maximize absorption at 1500 nm.
They are maintained at their transition temperature by Joule heating caused by a voltage bias, which is self-stabilized via an electrothermal feedback effect <cit.>.
When photons are absorbed, the increase in temperature causes a corresponding electrical signal which is picked up and amplified by a superconducting quantum interference device (SQUID) module and amplified at room temperature.
This results in complex time-varying signals of about 5 μ s duration.
Our TESs are operated within a dilution refrigerator with a base temperature of about 70 mK.
The estimated detection efficiency is 0.98^{+0.02}_{-0.08} <cit.>.
The electrical throughput is measured using a waveform digitizer and assigns a bin (described below) to each output pulse <cit.>.
We process incoming signals at a speed of up to 100 kHz.
The time integral of the measured signal results in an energy whose counts are shown in Fig. <ref>(b) for the heralding TES.
It also indicates a complex, nonlinear response of the TESs <cit.>.
The energies are binned into K+1 different intervals.
One typically fits those counts with a number of functions or histograms to get the photon statistics via numerical reconstruction algorithms for the particular detector.
Our bins—also the number of them—are solely determined from the measured data by simply dividing our recorded signal into disjoint energy intervals [Fig. <ref>(b)].
This does not require any detector model or reconstruction algorithms.
Above a threshold energy, no further peaks can be significantly resolved and those events are collected in the last bin.
No measured event is discarded.
Our heralding TES allows for a resolution of K+1=12 outcomes.
Because of the splitting of the photons on the beam splitter in the multiplexing step, the data from the other two TESs yield a reduced distinction between K+1=8 outcomes.
*Results.—
Condition (<ref>) can be directly applied to the measured statistics c(N_0,…,N_K) by sampling mean values, variances, and covariances [Eq. (<ref>)].
In Fig. <ref>, we show the resulting nonclassicality of the heralded states.
As the minimal eigenvalue of M has to be non-negative for classical light, this eigenvalue is depicted in Fig. <ref>.
To discuss our results, we compare our findings with a simple, idealized model.
Our produced PDC state can be approximated by a two-mode squeezed-vacuum state which has a correlated photon statistics, p(n,n')=(1-λ)λ^nδ_n,n', where n(n') is the signal(idler) photon number and r≥ 0 (λ=tanh^2r) is the squeezing parameter which is a function of the pump power of the PDC process <cit.>.
Heralding with an ideal PNR detector, which can resolve any photon number with a finite efficiency η̃, we get a conditioned statistics of the form
p(n|k)= 𝒩_knkη̃^k(1-η̃)^n-k(1-λ)λ^n,
with 𝒩_k= (1-λ)(λη̃)^k/[1-λ(1-η̃)]^k+1,
for the kth heralded state and p(n|k)=0 for n<k and λ^0=1.
Here 𝒩_k is a normalization constant as well as the probability that the kth state is realized.
The signal includes at least n≥ k photons if k photoelectric counts have been recorded by the heralding detector.
In the ideal case, the heralding to the 0th bin yields a thermal state [Eq. (<ref>)] and in the limit of vanishing squeezing a vacuum state, p(n|0)=δ_n,0 for λ→ 0.
Hence, we expect that the measured statistics is close to a multinomial, which implies M≈ 0.
Our data are consistent with this consideration, cf. Fig. <ref>.
Using an ideal detector, a heralding to higher bin numbers would give a nonclassical Fock state with the corresponding photon number.
The nonclassical character of the experimentally realized multiphoton states is certified in Fig. <ref>.
The generation of k photon pairs in the PDC is less likely for higher photon numbers, 𝒩_k∝λ^k.
Hence, this reduced count rate of events results in the increasing contribution of the statistical error in Fig. <ref>.
The highest significance of nonclassicality is found for lower heralding bins.
Furthermore, we studied our criterion (<ref>) as a function of the pump power in Fig. <ref> to demonstrate its impact on the nonclassicality.
The conditioning to zero clicks of the heralding TES is consistent with a classical signal.
For higher heralding bins, we observe that the nonclassicality is larger for decreasing pump powers as the distribution in Eq. (<ref>) becomes closer to a pure Fock state.
We can also observe in Fig. <ref> that the error is larger for smaller pump powers as fewer photon pairs are generated (𝒩_k∝λ^k).
Note that the nonclassicality is expressed in terms of the photon-number correlations.
If our detector would allow for a phase resolution, we could observe the increase of squeezing with increasing pump power.
This suggests a future enhancement of the current setup.
Moreover, an implementation of multiple multiplexing steps (N>2) would allow one to measure higher-order moments <cit.>, which renders it possible to certify nonclassicality beyond second-order moments <cit.>.
*Conclusions.—
We have formulated and implemented a robust and easily accessible method that can be applied to verify nonclassical light with arbitrary detectors.
Based on a multiplexing layout, we showed that a mixture of multinomial distributions describes the measured statistics in classical optics independently of the specific properties of the individual detectors.
We derived classical bounds on the covariance matrix whose violation is a clear signature of nonclassical light.
We applied our theory to an experiment consisting of a single multiplexing step and two superconducting transition-edge sensors.
We successfully demonstrated the nonclassicality of heralded multiphoton states.
We also studied the dependence on the pump power of our spontaneous parametric-down-conversion light source.
Our method is a straightforward technique that also applies to, e.g., temporal multiplexing or other types of individual detectors, e.g., multipixel cameras.
It also includes the approach for avalanche photodiodes <cit.> in the special case of a binary outcome.
Because our theory applies to general detectors, one challenge was to apply it to superconducting transition-edge sensors whose characteristics are less well understood than those of commercially available detectors.
Our nonclassicality analysis is only based on covariances between different outcomes which requires neither sophisticated data processing nor a lot of computational time.
Hence, it presents a simple and yet reliable tool for characterizing nonclassical light for applications in quantum technologies.
*Acknowledgements.—
The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 665148 (QCUMbER).
A. E. is supported by EPSRC EP/K034480/1.
J. J. R. is supported by the Netherlands Organization for Scientific Research (NWO).
W. S. K. is supported by EPSRC EP/M013243/1.
S. W. N., A. L., and T. G. are supported by the Quantum Information Science Initiative (QISI).
I. A. W. acknowledges an ERC Advanced Grant (MOQUACINO).
The authors thank Johan Fopma for technical support.
The authors gratefully acknowledge helpful comments by Tim Bartley and Omar Magaña-Loaiza.
*Note.—
This work includes contributions of the National Institute of Standards and Technology, which are not subject to U.S. copyright.
KLM01
E. Knill, R. Laflamme, and G. J. Milburn,
A scheme for efficient quantum computation with linear optics,
https://doi.org/10.1038/35051009Nature (London) 409, 46 (2001).
KMNRDM07
P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn,
Linear optical quantum computing with photonic qubits,
https://doi.org/10.1103/RevModPhys.79.135Rev. Mod. Phys. 79, 135 (2007).
GT07
N. Gisin and R. Thew,
Quantum communication,
https://doi.org/10.1038/nphoton.2007.22Nat. Photon. 1, 165 (2007).
S09
J. H. Shapiro,
The Quantum Theory of Optical Communications,
https://doi.org/10.1109/JSTQE.2009.2024959IEEE J. Sel. Top. Quantum Electron. 15, 1547 (2009).
TSDZDW11
N. Thomas-Peter, B. J. Smith, A. Datta, L. Zhang, U. Dorner, and I. A. Walmsley,
Real-World Quantum Sensors: Evaluating Resources for Precision Measurement,
https://doi.org/10.1103/PhysRevLett.107.113603Phys. Rev. Lett. 107, 113603 (2011).
BFM15
F. E. Becerra, J. Fan, and A. Migdall,
Photon number resolution enables quantum receiver for realistic coherent optical communications,
https://doi.org/10.1038/nphoton.2014.280Nat. Photon. 9, 48 (2015).
LS99
A. Luis and L. L. Sánchez-Soto,
Complete Characterization of Arbitrary Quantum Measurement Processes,
https://doi.org/10.1103/PhysRevLett.83.3573Phys. Rev. Lett. 83, 3573 (1999).
AMP04
G. M. D'Ariano, L. Maccone, and P. Lo Presti,
Quantum Calibration of Measurement Instrumentation,
https://doi.org/10.1103/PhysRevLett.93.250407Phys. Rev. Lett. 93, 250407 (2004).
LKKFSL08
M. Lobino, D. Korystov, C. Kupchak, E. Figueroa, B. C. Sanders, and A. I. Lvovsky,
Complete characterization of quantum-optical processes,
https://doi.org/10.1126/science.1162086Science 322, 563 (2008).
LFCPSREPW09
J. S. Lundeen, A. Feito, H. Coldenstrodt-Ronge, K. L. Pregnell, C. Silberhorn, T. C. Ralph, J. Eisert, M. B. Plenio, and I. A. Walmsley,
Tomography of quantum detectors,
https://doi.org/10.1038/nphys1133Nat. Phys. 5, 27 (2009).
ZDCJEPW12
L. Zhang, A. Datta, H. B. Coldenstrodt-Ronge, X.-M. Jin, J. Eisert, M. B. Plenio, and I. A. Walmsley,
Recursive quantum detector tomography,
https://doi.org/10.1088/1367-2630/14/11/115005New J. Phys. 14, 115005 (2012).
BCDGMMPPP12
G. Brida, L. Ciavarella, I. P. Degiovanni, M. Genovese, A. Migdall, M. G. Mingolla, M. G. A. Paris, F. Piacentini, and S. V. Polyakov,
Ancilla-Assisted Calibration of a Measuring Apparatus,
https://doi.org/10.1103/PhysRevLett.108.253601Phys. Rev. Lett. 108, 253601 (2012).
KW16
J. Kaniewski and S. Wehner,
Device-independent two-party cryptography secure against sequential attacks,
https://doi.org/10.1088/1367-2630/18/5/055004New J. Phys. 18, 055004 (2016).
BRLG13
C. Branciard, D. Rosset, Y.-C. Liang, and N. Gisin,
Measurement-Device-Independent Entanglement Witnesses for All Entangled Quantum States,
https://doi.org/10.1103/PhysRevLett.110.060405Phys. Rev. Lett. 110, 060405 (2013).
ZYM16
Q. Zhao, X. Yuan, and X. Ma,
Efficient measurement-device-independent detection of multipartite entanglement structure,
10.1103/PhysRevA.94.012343Phys. Rev. A 94, 012343 (2016).
LKMBTZ14
C. C. W. Lim, B. Korzh, A. Martin, F. Bussieres, R. Thew, and H. Zbinden,
Detector-Device-Independent Quantum Key Distribution,
https://doi.org/10.1063/1.4903350Appl. Phys. Lett. 105, 221112 (2014).
GKW15
A. Gheorghiu, E. Kashefi, and P. Wallde,
Robustness and device independence of verifiable blind quantum computing,
https://doi.org/10.1088/1367-2630/17/8/083040New J. Phys. 17, 083040 (2015).
CKS14
M. Cooper, M. Karpinski, and B. J. Smith,
Quantum state estimation with unknown measurements,
https://doi.org/10.1038/ncomms5332Nat. Commun. 5, 4332 (2014).
AGSB16
M. Altorio, M. G. Genoni, F. Somma, and M. Barbieri,
Metrology with Unknown Detectors,
https://doi.org/10.1103/PhysRevLett.116.100802Phys. Rev. Lett. 116, 100802 (2016).
S07
C. Silberhorn,
Detecting quantum light,
https://doi.org/10.1080/00107510701662538Contemp. Phys. 48, 143 (2007).
H09
R. H. Hadfield,
Single-photon detectors for optical quantum information applications,
https://doi.org/10.1038/nphoton.2009.230Nat. Photon. 3, 696 (2009).
BDFL08
J.-L. Blanchet, F. Devaux, L. Furfaro, and E. Lantz,
Measurement of Sub-Shot-Noise Correlations of Spatial Fluctuations in the Photon-Counting Regime,
https://doi.org/10.1103/PhysRevLett.101.233604Phys. Rev. Lett. 101, 233604 (2008).
MMDL12
P.-A. Moreau, J. Mougin-Sisini, F. Devaux, and E. Lantz,
Realization of the purely spatial Einstein-Podolsky-Rosen paradox in full-field images of spontaneous parametric down-conversion,
https://doi.org/10.1103/PhysRevA.86.010101Phys. Rev. A 86, 010101(R) (2012).
CTFLMA16
V. Chille, N. Treps, C. Fabre, G. Leuchs, C. Marquardt, and A. Aiello,
Detecting the spatial quantum uncertainty of bosonic systems,
https://doi.org/10.1088/1367-2630/18/9/093004New J. Phys. 18, 093004 (2016).
LMN08
A. E. Lita, A. J. Miller, and S. W. Nam,
Counting nearinfrared single-photons with 95% efficiency,
https://doi.org/10.1364/OE.16.003032Opt. Express 16, 3032 (2008).
BCDGLMPRTP12
G. Brida, L. Ciavarella, I. P. Degiovanni, M. Genovese, L. Lolli, M. G. Mingolla, F. Piacentini, M. Rajteri, E. Taralli, and M. G. A. Paris,
Quantum characterization of superconducting photon counters,
https://doi.org/10.1088/1367-2630/14/8/085001New J. Phys. 14, 085001 (2012).
RFZMGDFE12
J. J. Renema, G. Frucci, Z. Zhou, F. Mattioli, A. Gaggero, R. Leoni, M. J. A. de Dood, A. Fiore, and M. P. van Exter,
Modified detector tomography technique applied to a superconducting multiphoton nanodetector,
https://doi.org/10.1364/OE.20.002806Opt. Express 20, 2806 (2012).
PTKJ96
H. Paul, P. Törmä, T. Kiss, and I. Jex,
Photon Chopping: New Way to Measure the Quantum State of Light,
https://doi.org/10.1103/PhysRevLett.76.2464Phys. Rev. Lett. 76, 2464 (1996).
KB01
P. Kok and S. L. Braunstein,
Detection devices in entanglement-based optical state preparation,
https://doi.org/10.1103/PhysRevA.63.033812Phys. Rev. A 63, 033812 (2001).
ASSBW03
D. Achilles, C. Silberhorn, C. Śliwa, K. Banaszek, and I. A. Walmsley,
Fiber-assisted detection with photon number resolution,
https://doi.org/10.1364/OL.28.002387Opt. Lett. 28, 2387 (2003).
FJPF03
M. J. Fitch, B. C. Jacobs, T. B. Pittman, and J. D. Franson,
Photon-number resolution using time-multiplexed single-photon detectors,
https://doi.org/10.1103/PhysRevA.68.043814Phys. Rev. A 68, 043814 (2003).
CDSM07
S. A. Castelletto, I. P. Degiovanni, V. Schettini, and A. L. Migdall,
Reduced deadtime and higher rate photon-counting detection using a multiplexed detector array,
https://doi.org/10.1080/09500340600779579J. Mod. Opt. 54, 337 (2007).
SPDBCM07
V. Schettini, S.V. Polyakov, I.P. Degiovanni, G. Brida, S. Castelletto, and A.L. Migdall,
Implementing a Multiplexed System of Detectors for Higher Photon Counting Rates,
https://doi.org/10.1109/JSTQE.2007.902846IEEE J. Sel. Top. Quantum Electron. 13, 978 (2007).
SVA12
J. Sperling, W. Vogel, and G. S. Agarwal,
Sub-Binomial Light,
https://doi.org/10.1103/PhysRevLett.109.093601Phys. Rev. Lett. 109, 093601 (2012).
BDJDBW13
T. J. Bartley, G. Donati, X.-M. Jin, A. Datta, M. Barbieri, and I. A. Walmsley,
Direct Observation of Sub-Binomial Light,
https://doi.org/10.1103/PhysRevLett.110.173602Phys. Rev. Lett. 110, 173602 (2013).
SBVHBAS15
J. Sperling, M. Bohmann, W. Vogel, G. Harder, B. Brecht, V. Ansari, and C. Silberhorn,
Uncovering Quantum Correlations with Time-Multiplexed Click Detection,
https://doi.org/10.1103/PhysRevLett.115.023601Phys. Rev. Lett. 115, 023601 (2015).
HSPGHNVS16
R. Heilmann, J. Sperling, A. Perez-Leija, M. Gräfe, M. Heinrich, S. Nolte, W. Vogel, and A. Szameit,
Harnessing click detectors for the genuine characterization of light states,
https://doi.org/10.1038/srep19489Sci. Rep. 6, 19489 (2016).
SBDBJDVW16
J. Sperling, T. J. Bartley, G. Donati, M. Barbieri, X.-M. Jin, A. Datta, W. Vogel, and I. A. Walmsley,
Quantum Correlations from the Conditional Statistics of Incomplete Data,
https://doi.org/10.1103/PhysRevLett.117.083601Phys. Rev. Lett. 117, 083601 (2016).
HBLNGS15
G. Harder, T. J. Bartley, A. E. Lita, S. W. Nam, T. Gerrits, and C. Silberhorn,
Single-Mode Parametric-Down-Conversion States with 50 Photons as a Source for Mesoscopic Quantum Optics,
https://doi.org/10.1103/PhysRevLett.116.143601Phys. Rev. Lett. 116, 143601 (2016).
TG86
U. M. Titulaer and R. J. Glauber,
Correlation functions for coherent fields,
https://doi.org/10.1103/PhysRev.140.B676Phys. Rev. 140, B676 (1965).
M86
L. Mandel,
Non-classical states of the electromagnetic field,
https://doi.org/10.1088/0031-8949/1986/T12/005Phys. Scr. T12, 34 (1986).
TheArticle
J. Sperling, et al.,
Identification of nonclassical properties of light with multiplexing layouts,
https://arxiv.org/abs/1701.07642arXiv:1701.07642 [quant-ph].
CommentBin
The notion of a bin is used here synonymous with measurement outcome.
It should not be confused with the concept of a temporal bin, which is used to describe time-bin multiplexing detectors <cit.>.
FEHP11
See
C. Forbes, M. Evans, N. Hastings, and B. Peacock,
Statistical Distributions, 4th ed.
http://www.wiley.com/WileyCDA/WileyTitle/productCd-1118097823.html(Wiley & Sons, Hoboken, New Jersey, USA, 2011),
Chap. 30.
SupplementalMaterial
See Supplemental Material, which includes Refs. <cit.>, for the error analysis and an additional study of the nonlinear detector response of the TESs.
SVA12a
J. Sperling, W. Vogel, and G. S. Agarwal,
True photocounting statistics of multiple on-off detectors,
https://doi.org/10.1103/PhysRevA.85.023820Phys. Rev. A 85, 023820 (2012).
ECMS11
A. Eckstein, A. Christ, P. J. Mosley, and C. Silberhorn,
Highly Efficient Single-Pass Source of Pulsed Single-Mode Twin Beams of Light,
https://doi.org/10.1103/PhysRevLett.106.013603Phys. Rev. Lett. 106, 013603 (2011).
I95
K. D. Irwin,
An application of electrothermal feedback for high resolution cryogenic particle detection,
https://doi.org/10.1063/1.113674Appl. Phys. Lett. 66, 1998 (1995).
HMGHLNNDKW15
P. C. Humphreys, B. J. Metcalf, T. Gerrits, T. Hiemstra, A. E. Lita, J. Nunn, S. W. Nam, A. Datta, W. S. Kolthammer, and I. A. Walmsley,
Tomography of photon-number resolving continuous-output detectors,
https://doi.org/10.1088/1367-2630/17/10/103044New J. Phys. 17, 103044 (2015).
Getal11
T. Gerrits, et al.,
On-chip, photon-number-resolving, telecommunication-band detectors for scalable photonic information processing,
https://doi.org/10.1103/PhysRevA.84.060301Phys. Rev. A 84, 060301(R) (2011).
A13
See an analysis in
G. S. Agarwal,
Quantum Optics
http://www.cambridge.org/gb/academic/subjects/physics/optics-optoelectronics-and-photonics/quantum-optics-1(Cambridge University Press, Cambridge, 2013),
including Fig. 3.2.
AT92
G. S. Agarwal and K. Tara,
Nonclassical character of states exhibiting no squeezing or sub-Poissonian statistics,
https://doi.org/10.1103/PhysRevA.46.485Phys. Rev. A 46, 485 (1992).
APHM16
I. I. Arkhipov, J. Peřina, O. Haderka, and V. Michálek,
Experimental detection of nonclassicality of single-mode fields via intensity moments,
https://doi.org/10.1364/OE.24.029496Opt. Express 24, 29496 (2016).
|
http://arxiv.org/abs/1701.07770v4 | 20170126165237 | Bounds for several-disk packings of hyperbolic surfaces | [
"Jason DeBlois"
] | math.GT | [
"math.GT"
] |
Department of Mathematics, University of Pittsburgh; [email protected]
For any given k∈ℕ, this paper gives upper bounds on the radius of a packing of a complete hyperbolic surface of finite area by k equal-radius disks in terms of the surface's topology. We show that these bounds are sharp in some cases and not sharp in others.
Bounds for several-disk packings of hyperbolic surfaces
Jason DeBlois
18 November 2016
=======================================================
By a packing of a metric space we will mean a collection of disjoint open metric balls. This paper considers packings of a fixed radius on finite-area hyperbolic (i.e. constant curvature -1) surfaces, and our methods are primarily those of low-dimensional topology and hyperbolic geometry. But before describing the main results in detail I would like to situate them in the context of the broader question below, so I will first list some of its other instances, survey what is known toward their answers, and draw analogies with the setting of this paper:
For a fixed k∈ℕ and topological manifold M that admits a complete constant-curvature metric of finite volume, what is the supremal density of packings of M by k balls of equal radius, taken over all such metrics with fixed curvature? (Here the density of a packing is the ratio of the sum of the balls' volumes to that of M.)
The botanist P.L. Tammes posed a positive-curvature case of Question <ref> which is now known as Tammes' problem, where M = 𝕊^2 with its (rigid) round metric, in 1930. Answers (i.e. sharp density bounds) are currently known for k≤ 14 and k=24 after work of many authors, with the k=14 case only appearing in 2015 <cit.> (the problem's history is surveyed in 1.2 there).
In the Euclidean setting, the case of Question <ref> with M an n-dimensional torus is equivalent to the famous lattice sphere packing problem, see eg. <cit.>, when k=1. When k>1 it is the periodic packing problem; and finding the supremum over all k is equivalent to the sphere packing problem, which is solved only in dimensions 2, 3 <cit.> and, very recently, 8 <cit.> and 24 <cit.>. The lattice sphere packing problem is solved in all additional dimensions up to 8, see <cit.>.
The hyperbolic case is also well studied. Here a key tool, known in low-dimensional topology as "Böröczky's theorem", asserts that any packing of ℍ^n by balls of radius r has local density bounded above by the density, within an equilateral n-simplex with side length 2r, of the simplex's intersection with the balls of radius r centered at its vertices <cit.>. Rogers proved the analogous result for Euclidean packings earlier <cit.>. We note that in the hyperbolic setting the bound depends on r as well as n.
Böröczky's theorem yields bounds towards an answer to Question <ref> for an arbitrary k∈ℕ and complete hyperbolic manifold M, since a packing of M has a packing of ℍ^n as its preimage under the universal cover ℍ^n→ M. Analogously, Rogers' result yields bounds toward the lattice sphere packing problem, which until recently were still the best known in some dimensions (see <cit.>). This basic observation is particularly useful in low dimensions: for instance Rogers' lattice sphere packing bound is sharp only in dimension 2 (where it is usually attributed to Gauss). In the three-dimensional hyperbolic setting, where Böröczky's Theorem was actually proved earlier by Böröczky–Florian <cit.>, its “r=∞” (i.e. horoball packing) case yields sharp lower bounds on the volumes of cusped hyperbolic 3-orbifolds <cit.> and 3-manifolds <cit.>, for example.
In dimension two, C. Bavard observed that Böröczky's theorem implies answers to the k=1 case of Question <ref> for every closed hyperbolic surface F <cit.>; that is, it yields sharp bounds on the maximal radius of a ball embedded in such a surface. The same argument yields bounds towards Question <ref> for arbitrary k and hyperbolic surfaces F, as we will observe in Section <ref>. This was already shown for closed genus-two surfaces when k=2, by Kojima–Miyamoto <cit.>.
For a non-compact surface F of finite area, it is no longer true that a maximal-density packing of F pulls back to a packing of ℍ^2 with maximal local density: the cusps of F yield “empty horocycles", large vacant regions in the preimage packing. In <cit.> I introduced a new technique for proving two-dimensional packing theorems and used it to settle the k=1 case of Question <ref> for arbitrary complete, orientable hyperbolic surfaces of finite area. M. Gendulphe has since released a preprint which resolves the non-orientable k=1 cases by another method <cit.>. But the restriction to orientable surfaces in <cit.> is not necessary, as I show in Section <ref> below. The first main result here records density bounds for arbitrary k∈ℕ and complete, finite-area hyperbolic surfaces (orientable or not) which follow from the packing theorems of <cit.>.
In fact we bound the radius of packings. But this is equivalent to bounding their density, since the area of a hyperbolic disk is determined by its radius, and by the Gauss–Bonnet theorem the area of a complete, finite-area hyperbolic surface is determined by its topological type.
Proposition <ref>. For χ < 0, n ≥ 0 and k ∈ ℕ, if a complete hyperbolic surface F of finite area, with Euler characteristic χ and n cusps, admits a packing by k disks of radius r, then r ≤ r_χ,n^k, where r_χ,n^k is defined by (6 - (6χ+3n)/k)α(r_χ,n^k) + (2n/k)β(r_χ,n^k) = 2π. Here α(r) is the vertex angle of an equilateral hyperbolic triangle with side lengths 2r, and β(r) is the angle at either finite vertex of a horocyclic ideal triangle whose compact side has length 2r. If r = r_χ,n^k then F decomposes into equilateral triangles and exactly n horocyclic ideal triangles, all with compact side lengths 2r_χ,n^k, whose vertex set is the set of disk centers.
Above, a horocyclic ideal triangle is the convex hull in ℍ^2 of three points, two of which lie on a horocycle C with the third at the ideal point of C. We prove Proposition <ref> in Section <ref>.
To my knowledge, the bounds of Proposition <ref> are the best in the literature for every k, χ and n. They coincide with those from Böröczky's Theorem in the closed (n=0) case but are otherwise stronger, see Proposition <ref>. But as we show below, they are not sharp in general; indeed, it is easy to see that they are not attained in general. For a closed surface F with Euler characteristic χ attaining the bound r_χ,0^k, the equilateral triangles that decompose F all have equal vertex angles, since they have equal side lengths. It follows that its triangulation must be regular; that is, each vertex must see the same number of triangles. This imposes the condition that k divide 6χ, since the number of such triangles is 2(k-χ) by a simple computation.
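Explicitly, if such a closed surface is triangulated with k vertices, e edges, and f triangles then 2e = 3f, so χ = k - e + f = k - f/2 and f = 2(k-χ); a regular triangulation then has 3f/k = 6 - 6χ/k triangles at each vertex, which is an integer only when k divides 6χ.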
In this sense Proposition <ref> is akin to L. Fejes Tóth's general bound toward Tammes' problem <cit.>, which is attained only for k=3, 4, 6, and 12: those for which S^2 has a regular triangulation with k vertices. (The last three are realized by the boundaries of a tetrahedron, octahedron, and icosahedron, respectively). When the bound is attained in the non-compact (n>0) case, one correspondingly expects both equilateral and horocyclic ideal triangle vertices to be “evenly distributed”. Lemma <ref> formulates this precisely, and shows that it imposes the conditions that k divide both 6χ and n. The second main result of this paper shows that the bound is attained when these conditions hold.
Theorem <ref>. For any χ < 0 and n ≥ 0, and any k ∈ ℕ that divides both 6χ and n, there is a complete hyperbolic surface of finite area with Euler characteristic χ and n cusps that admits a packing by k disks of radius r_χ,n^k.
The non-trivial part of the proof is the purely topological Proposition <ref>, which constructs closed surfaces of Euler characteristic χ+n, triangulated with k+n vertices of which n have valence one, when k divides both 6χ and n. (The valence-one vertices correspond to cusps of F.) Previous work of Edmonds–Ewing–Kulkarni <cit.> covers the n=0 cases of Proposition <ref> but its techniques, which exploit the existence of certain branched coverings in this setting, do not naturally extend to the n>0 cases. The proof here is independent of <cit.>, even for n=0.
I could not prove in the n>0 case that the bound r_χ,n^k is attained only if k divides both 6χ and n. But I do show in Lemma <ref> that for a given χ<0 and n>0, it is attained only at finitely many k. Section <ref> goes on to establish:
Theorem <ref>. For any fixed χ < 0 and n > 0, there are at most finitely many k ∈ ℕ for which some complete, finite-area hyperbolic surface with Euler characteristic χ and n cusps admits a packing by k disks of radius r_χ,n^k.
This follows immediately from Lemma <ref> and the main result Proposition <ref> of Section <ref>, which asserts for each χ and n that the function on the entire moduli space of all (orientable or non-orientable) hyperbolic surfaces with Euler characteristic χ and n cusps which measures the maximal k-disk packing radius does attain a maximum. (This is not obvious because the function in question is not proper, see <cit.> in the case k=1.) Key to the proof of Proposition <ref> is Lemma <ref>, which asserts that the thick part of a surface with a short geodesic can be inflated while increasing the length of the geodesic. Proposition <ref> and its proof were suggested by a referee for <cit.>, and sketched by Gendulphe <cit.>, in the case k=1.
I'll end this introduction with a couple of further questions. First, recall from the discussion above that Lemma <ref> is likely not best possible. It would be interesting to know whether the bound of Theorem <ref> is attained in any cases beyond those covered by Theorem <ref>.
For χ< 0 and n> 0, does there exist any k not dividing both n and 6χ for which a complete hyperbolic surface of finite area with n cusps and Euler characteristic χ admits a packing by k disks that each have radius r_χ,n^k?
For any example not covered by Theorem <ref>, some observations from the proof of Lemma <ref> can be used to show that α(r_χ,n^k) and β(r_χ,n^k) must satisfy an additional algebraic dependence beyond what is prescribed in Theorem <ref>. While this seems unlikely, it is not clear (to me at least) that it does not ever occur. In a follow-up paper I will consider the case of two disks on the three-punctured sphere, showing that at least in this simplest possible case it does not.
It would also be interesting to know whether k-disk packing radius has local but non-global maximizers. We phrase the question below in the language introduced above Proposition <ref>.
For which smooth surfaces Σ and k∈ℕ does the maximal k-disk packing radius function of Definition <ref> have a local maximum on 𝔗(Σ) that is not a global maximum? In particular, can this occur if k divides both 6χ and n, where χ is the Euler characteristic of Σ and n its number of cusps?
Gendulphe answered this “no” for k=1 <cit.>, extending my result on the orientable case <cit.>.
§.§ Acknowledgements
Thanks to Dave Futer for a keen observation, and for pointing me to <cit.>, and to Ian Biringer for a helpful conversation. Thanks also to the anonymous referee for a careful reading and helpful comments.
§ DECOMPOSITIONS OF ORIENTABLE AND NON-ORIENTABLE SURFACES
The bound of <cit.>, which Proposition <ref> generalizes, is proved using the centered dual complex plus as defined in Proposition 5.9 of <cit.>. This is a decomposition of a finite-area hyperbolic surface F that is canonically determined by a finite subset of F. In this section we will first recap the construction of the centered dual plus, and show that <cit.> carries through to the non-orientable case without revision. Then we will prove Proposition <ref>.
Let F be a complete, finite-area hyperbolic surface, 𝒮 ⊂ F a finite set, and π: ℍ^2→ F a locally isometric universal cover; set 𝒮̃ = π^-1(𝒮). We will assume here that F is non-orientable, since the orientable case is covered by previous work, and let F_0 be the orientable double cover of F and 𝒮_0 be the preimage of 𝒮 in F_0. Note that π factors through a locally isometric universal cover p: ℍ^2→ F_0, so in fact 𝒮̃ = p^-1(𝒮_0).
This set is invariant under the isometric actions of π_1 F and π_1 F_0 by covering transformations.
Theorem 5.1 of <cit.>, which is a rephrasing of <cit.> for surfaces, asserts the existence of a π_1 F_0-invariant Delaunay tessellation of p^-1(𝒮_0) characterized by the following empty circumcircles condition:
For each circle or horocycle S of ℍ^2 that intersects 𝒮̃ and bounds a disk or horoball B with B∩𝒮̃ = S∩𝒮̃, the closed convex hull of S∩𝒮̃ in ℍ^2 is a Delaunay cell. Each Delaunay cell has this form.
Since this characterization is in purely geometric terms it implies that the Delaunay tessellation is invariant under every isometry that leaves 𝒮̃ invariant. In particular, it is π_1 F-invariant. In fact all of <cit.> extends to the non-orientable case; one needs only additionally observe that the parabolic fixed point sets of π_1 F and π_1 F_0 are identical. We will now run through the remaining results of Sections 5.1 and 5.2 of <cit.>, which build to Proposition 5.9 there, and comment on their extensions to the non-orientable case.
Corollary 5.2 of <cit.> makes three assertions about properties of the Delaunay tessellation's image in the quotient surface, which all extend directly to the non-orientable case. The first, on finiteness of the number of π_1 F_0-orbits of Delaunay cells, implies the same for π_1 F-orbits. The original proof (in <cit.>) of the second, on interiors of compact cells embedding, applies directly. In our setting, the third assertion is that for each non-compact Delaunay cell C_u, which is invariant under some parabolic subgroup Γ_u of π_1 F_0, p|_𝑖𝑛𝑡 C_u factors through an embedding of 𝑖𝑛𝑡 C_u/Γ_u to a cusp of F_0. Here we have:
The stabilizer of C_u in π_1 F is also Γ_u; each cusp of F_0 projects homeomorphically to F; and π|_𝑖𝑛𝑡 C_u factors through an embedding of 𝑖𝑛𝑡 C_u/Γ_u to a cusp of F.
The stabilizer of C_u in the full group of isometries of ℍ^2 is the stabilizer of the ideal point u of the horocycle S in which it is inscribed. Using the upper half-plane model and translating u to ∞, standard results on the classification of isometries (see eg. <cit.>) imply that this group is the semidirect product
{([ 1 x; 0 1 ]) : x∈ℝ}⋊{z↦ -z̅}
with the index-two translation subgroup preserving orientation. One sees directly from the classification that every orientation-reversing element here reflects about a geodesic. Since π_1 F acts freely on ℍ^2, every element that stabilizes C_u preserves orientation. But π_1 F_0 consists precisely of those elements of π_1 F that preserve orientation. The lemma's first assertion follows directly, and the latter two follow from that one.
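(Explicitly, every orientation-reversing element of this group has the form z ↦ x - z̅ for some x∈ℝ, and its fixed-point set is the vertical geodesic Re z = x/2.)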
The main result of Section 5.1 of <cit.>, Corollary 5.6, still holds if its orientability hypothesis is dropped. This implies for us that the centered dual complex of 𝒮̃ is π_1 F-invariant and its two-cells project to F homeomorphically on their interiors. The centered dual complex is constructed in <cit.>; in particular see Definition 2.26 there, and it is the object to which the main packing results Theorems 3.31 and 4.16 of <cit.> apply.
The proof of <cit.> given there in fact applies without revision in the non-orientable case. To this end we note that the Voronoi tessellation of 𝒮̃ and its geometric dual (see <cit.> and the exposition above it) are by their construction invariant under every isometry preserving 𝒮̃. And the proofs of Lemmas 5.4 and 5.5, and Corollary 5.6, do not use the hypothesis that elements of π_1 F preserve orientation, only that they act isometrically and are fixed point-free.
Section 5.2 of <cit.> builds to the description of the centered dual complex plus in Proposition 5.9, its final result. The issue here is that if F has cusps then the underlying space of the centered dual decomposition is not necessarily all of ℍ^2, but rather the union of all geometric dual cells (which are precisely the compact cells of the Delaunay tessellation, by <cit.>) with possibly some horocyclic ideal triangles. But Lemma 5.8 of <cit.> shows that each non-compact Delaunay cell C_u intersects this underlying space in a sub-union of the collection of horocyclic ideal triangles obtained by joining each of its vertices to its ideal point u by a geodesic ray. So the centered dual plus is obtained by simply adding two-cells (and their edges and ideal vertices) to the centered dual, one for each horocyclic ideal triangle obtained from each C_u as above that does not already lie in a centered dual two-cell.
The proof of <cit.> again extends without revision to the non-orientable setting, and provides the final necessary ingredient for:
Proposition 5.9 of <cit.> still holds if F is not assumed oriented.
Having established this, we turn to the first main result.
The first part of this proof closely tracks that of Theorem 5.11 in <cit.>. Given a complete, finite-area hyperbolic surface F of Euler characteristic χ with n cusps, equipped with an equal-radius packing by k disks of radius r, we let 𝒮 be the set of disk centers. Fixing a locally isometric universal cover π: ℍ^2→ F, let 𝒮̃ = π^-1(𝒮) and, applying Proposition <ref>, enumerate a complete set of representatives for the π_1 F-orbits of cells of the centered dual complex plus as {C_1, …, C_m}. By the Gauss–Bonnet theorem the C_i satisfy:
Area(C_1) + ⋯ + Area(C_m) = -2πχ
Take C_i non-compact if and only if i≤ m_0 for some fixed m_0≤ m, and for each i≤ m let n_i be the number of edges of C_i. Each compact edge of the decomposition has length at least d = 2r, since it contains disjoint open subsegments of length r adjacent to each of its endpoints: its intersections with the two packed disks centered there. For i≤ m_0 we therefore have the key area inequality obtained from <cit.> and some calculus:
Area(C_i) ≥ D_0(∞,d,∞) + (n_i-3)D_0(d,d,d),
with equality holding if and only if n_i=3 and the compact side length is d. Here D_0(∞,d,∞) is the area of a horocyclic ideal triangle with compact side length d, and D_0(d,d,d) is the area of an equilateral hyperbolic triangle with all side lengths d.
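In terms of the angle functions of Proposition <ref>, the angle-deficit formula for hyperbolic triangle areas gives D_0(d,d,d) = π - 3α(d/2) and D_0(∞,d,∞) = π - 2β(d/2); these expressions reappear with d = 2r on the final line of the displayed estimate below.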
For m_0 < i ≤ m, again as described in the proof of Theorem 5.11 of <cit.>, Theorem 3.31 there and some calculus give
Area(C_i) ≥ (n_i-2) D_0(d,d,d),
with equality if and only if n_i=3 and all side lengths are d. We therefore have:
-2πχ ≥ m_0 · D_0(∞,d,∞) + (∑_i=1^m (n_i-2) - m_0)· D_0(d,d,d)
≥ n · D_0(∞,d,∞) + (∑_i=1^m n_i - 2m - n) · D_0(d,d,d)
= n· (π-2β(r)) + (2e-2m - n)· (π-3α(r))
In moving from the first to the second line above, we have used the fact that m_0≥ n and D_0(∞,d,∞) > D_0(d,d,d) (again see the proof of <cit.>) to trade m_0-n instances of D_0(∞,d,∞) down for the same number of D_0(d,d,d). In moving from the second to the third we have rewritten ∑_i=1^m n_i as 2e, where e is the total number of edges, and used the angle deficit formula for areas of hyperbolic triangles (recall from above that d=2r).
We will now apply an Euler characteristic identity satisfied by the closed surface F̅ obtained from F by compactifying each cusp with a unique marked point. By Proposition <ref> the C_i project to faces of a cell decomposition of F̅. This satisfies v-e+f = χ(F̅), where v, e and f are the total number of vertices, edges, and faces, respectively. For us this translates to
n+k - e + m = χ+n,
since χ(F̅) = χ+n. Here there are n+k vertices of the centered dual plus decomposition of F̅: k that are disk centers, and n that are marked points. As above we have called the number of edges e, and the number of faces is m. Rearranging (<ref>) gives e - m = k - χ, so 2e - 2m = 2k - 2χ; substituting this on the final line of (<ref>) and simplifying, we obtain
(6 - (6χ+3n)/k)α(r) + (2n/k)β(r) ≥ 2π
Since α and β are decreasing functions of r we obtain that r≤ r_χ,n^k.
As we noted above the inequalities (<ref>), equality holds on the first line if and only if n_i=3 for each i, i.e. each C_i is a triangle, and all compact sides have length d=2r. This implies in particular that the compact centered dual two-cells (the C_i for i>m_0) are all Delaunay two-cells, and the non-compact cells are obtained by dividing horocyclic Delaunay cells of 𝒮̃ into horocyclic ideal triangles by rays from vertices. As we noted below (<ref>), equality holds on the second line if and only if m_0 = n; that is, each cusp corresponds to a unique horocyclic ideal triangle. Thus when r = r^k_χ,n, F decomposes into equilateral and n horocyclic ideal triangles with all compact sidelengths equal to 2r_χ,n^k.
In this case, one finds by inspecting the definition of the centered dual plus that every compact centered dual two-cell is a Delaunay triangle, and every non-compact cell is obtained from a Delaunay monogon by subdividing it by a single arc from its vertex out the cusp it encloses. To prove the Proposition it thus only remains to show that a surface of Euler characteristic χ and n cusps which decomposes into equilateral triangles and n horocyclic ideal triangles, all with compact sidelength 2r_χ,n^k, with k vertices, admits a packing by k disks of radius r_χ,n^k centered at the vertices.
This follows the line of argument from Examples 5.13 and 5.14 of <cit.>. Lemma 5.12 there implies in particular that an open metric disk of radius r_χ,n^k centered at a vertex v of an equilateral triangle T in ℍ^2 with all sidelengths 2r_χ,n^k intersects T in a full sector with angle measure equal to the interior angle of T at v. It is not hard to show that the same holds for a disk centered at a finite vertex of a horocyclic ideal triangle. In both cases it is clear moreover that disks of radius r centered at distinct (finite) vertices of such triangles do not intersect. Therefore for a surface decomposed into a collection of equilateral and horocyclic ideal triangles with all compact sidelengths 2r_χ,n^k, a collection of disjoint embedded open disks of radius r_χ,n^k centered at the vertices of the decomposition is assembled from disk sectors in each triangle around each of its vertices.
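Though not needed for the argument, the defining equation of r_χ,n^k is straightforward to solve numerically. The sketch below (Python; the function names are ours) uses the formula α(r) = 2sin^-1(1/(2cosh r)) quoted in the next section, together with the standard expression β(r) = 2tan^-1(e^-r) for the angle at either finite vertex of a horocyclic ideal triangle with compact side length 2r; the left-hand side decreases strictly from 2π(1 - χ/k) to 0 as r runs from 0 to ∞, so bisection applies.

import math

def alpha(r):
    # Vertex angle of an equilateral hyperbolic triangle with side length 2r.
    return 2 * math.asin(1 / (2 * math.cosh(r)))

def beta(r):
    # Angle at either finite vertex of a horocyclic ideal triangle whose
    # compact side has length 2r (a standard formula, not derived in this paper).
    return 2 * math.atan(math.exp(-r))

def r_bound(chi, n, k):
    # Solve (6 - (6*chi + 3*n)/k)*alpha(r) + (2*n/k)*beta(r) = 2*pi for r > 0.
    assert chi < 0 and n >= 0 and k >= 1
    F = lambda r: (6 - (6 * chi + 3 * n) / k) * alpha(r) + (2 * n / k) * beta(r)
    lo, hi = 0.0, 1.0
    while F(hi) > 2 * math.pi:  # F is strictly decreasing, so bracket the root
        hi *= 2
    for _ in range(100):        # bisect to machine precision
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) > 2 * math.pi else (lo, mid)
    return (lo + hi) / 2

For instance r_bound(-1, 3, 1) solves 3α(r) + 6β(r) = 2π, the case of one disk on a thrice-punctured sphere; since 1 divides both 6χ = -6 and n = 3, Theorem <ref> asserts that this radius is attained.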
§ COMPARING BOUNDS
In this section we give brief accounts of two arguments that give alternative bounds on the k-disk packing radius: a naive bound r_A^k and a bound from Böröczky's Theorem <cit.> that turns out to simply be r_χ,0^k. These arguments are not original, in particular the latter can essentially be found in <cit.> and <cit.>. We will then compare the bounds in Proposition <ref>.
[The naive bound] If k disks of radius r are packed in a complete finite-area surface F then the sum of their areas is no more than that of F. The area of a hyperbolic disk of radius r is 2π(cosh r - 1) (see eg. <cit.>), and the Gauss–Bonnet theorem implies that the area of a complete, finite-area hyperbolic surface F with Euler characteristic χ is -2πχ. Therefore 2π(cosh r -1)· k ≤ -2πχ, whence r ≤ r_A^k defined by
cosh(r_A^k) = 1-χ/k
[Böröczky's bound] Suppose again that k disks of radius r are packed in a complete, finite-area surface F. Fix a locally isometric universal cover π: ℍ^2→ F and consider the preimage of the disks packed on F, which is a packing of ℍ^2 by disks of radius r. Each disk D in the preimage determines a Voronoi 2-cell V (see eg. Section 1 of <cit.>), and for α as in Theorem <ref> Böröczky's theorem asserts the following bound on the density of D in V:
Area(D)/Area(V) ≤ 3α(r)(cosh r - 1)/(π - 3α(r)) ⇒ 2π(π/(3α(r)) - 1) ≤ Area(V)
We obtain the right-hand inequality above upon substituting for Area(D) and simplifying.
The packing of ℍ^2 by the preimage of the disks on F is invariant under the action of π_1 F by covering transformations, so this is also true of its Voronoi tessellation. Moreover since there is a one-to-one correspondence between disks and Voronoi 2-cells, a full set {D_1, …, D_k} of disk orbit representatives determines a full set {V_1, …, V_k} of Voronoi cell orbit representatives. Their areas thus sum to that of F. If F has Euler characteristic χ then summing the right-hand inequalities above and applying the Gauss–Bonnet theorem yields
2π(π/(3α(r)) - 1)· k ≤ -2πχ ⇒ π ≤ (1-χ/k)· 3α(r)
From the formula α(r) = 2sin^-1(1/(2cosh r)), we see that α decreases with r. Comparing with the equation defining r_χ,n^k in Theorem <ref>, we thus find that this inequality implies r≤ r_χ,0^k.
The relationship between the bounds r_A^k, r_χ,0^k and r_χ,n^k is perhaps not immediately clear from their formulas in all cases. The result below clarifies this.
For any fixed χ<0 and n≥ 0, and k∈ℕ, r_χ,n^k ≤ r_χ,0^k < r_A^k. The inequality r_χ,n^k≤ r_χ,0^k is strict if and only if n>0.
We first compare r_χ,0^k with r_χ,n^k when n>0. We will apply Corollary 5.15 of <cit.>, which asserts for any r>0 that 2β(r) < 3α(r). Slightly rewriting the equation defining r_χ,n^k gives:
2π = (6 - 6χ/k)α(r_χ,n^k) + (n/k)(2β(r_χ,n^k) - 3α(r_χ,n^k)) < (6 - 6χ/k)α(r_χ,n^k)
Since α decreases with r, and r_χ,0^k is defined by setting the right side of the inequality above equal to 2π, it follows that r_χ,n^k < r_χ,0^k.
We now show that r_χ,0^k < r_A^k for each χ and k. Recall that α(r) = 2sin^-1(1/(2cosh r)). The inverse sine function is concave up on (0,1) and takes the value 0 at 0 and π/6 at 1/2, so α(r_χ,0^k) < π/(3cosh r_χ,0^k). Plugging back into the definition of r_χ,0^k, and comparing with that of r_A^k, yields the desired inequality.
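A hypothetical numerical check of these inequalities, reusing alpha and r_bound from the earlier sketch:

def r_naive(chi, k):
    # The naive bound: cosh(r_A^k) = 1 - chi/k.
    return math.acosh(1 - chi / k)

# e.g. with chi = -2, n = 2, k = 2:
#   r_bound(-2, 2, 2) ≈ 1.186  <  r_bound(-2, 0, 2) ≈ 1.277  <  r_naive(-2, 2) ≈ 1.317

(The middle value is r_χ,0^k, obtained by taking n = 0 in the defining equation.)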
§ SOME EXAMPLES SHOWING SHARPNESS
In this section we'll prove Theorem <ref>, which asserts that the bound r_χ,n^k of Proposition <ref> is attained under certain divisibility hypotheses. We begin with a simple counting lemma recording the combinatorial condition that motivates these hypotheses.
Suppose F is a complete, orientable hyperbolic surface of finite area with Euler characteristic χ<0 and n≥ 0 cusps that decomposes into a collection of compact and horocyclic ideal triangles that intersect pairwise (if at all) only at vertices or along entire edges, such that there are k vertices and exactly n horocyclic ideal triangles.
If there exist fixed i and j such that each vertex of the decomposition of F is the meeting point of exactly i compact and j horocyclic ideal triangle vertices, then k divides both n and 6χ, and
i = 6 - (6χ+3n)/k and j = 2n/k.
The closed surface F̅ obtained from F by compactifying each cusp with a single point has Euler characteristic χ+n, and the given decomposition determines a triangulation of F̅ with k+n vertices, where each horocyclic ideal triangle has been compactified by the addition of a single vertex at its ideal point. Since F has n cusps and n horocyclic ideal triangles, each horocyclic ideal triangle encloses a cusp, its non-compact edges are identified in F, and the quotient of these edges has one endpoint at the added vertex (which has valence one).
Noting that the numbers e, of edges, and f, of faces of the triangulation of F̅ satisfy 2e = 3f, computing its Euler characteristic gives:
v-e+f = k+n - f/2 = χ+n
Therefore F has a total of 2(k-χ) triangles, of which 2(k-χ)-n are compact. Since each compact triangle has three vertices and each horocyclic ideal triangle has two, even distribution of vertices determines the counts i and j above. These imply in particular that k must divide both 2n and 6χ+3n. But we note that in fact k must divide n, since the two vertices of each horocyclic ideal triangle are identified in F, and therefore k must also divide 6χ.
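Concretely, the 2(k-χ)-n compact triangles contribute 3(2(k-χ)-n) = 6k-6χ-3n vertices in all, and the n horocyclic ideal triangles contribute 2n finite vertices; dividing each total by k recovers (<ref>).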
A condition equivalent to the hypothesis of Lemma <ref> on a topological surface in fact ensures the existence of a hyperbolic structure satisfying the conclusion of Theorem <ref>.
For χ<0 and n≥ 0, suppose S is a closed surface with Euler characteristic χ+n and n marked points that is triangulated with k+n vertices, including the marked points, with the following properties:
* Each marked point is contained in exactly one triangle, which we also call .
* There exist fixed i,j≥ 0 such that exactly i non-marked and j marked triangle vertices meet at each of the remaining k vertices.
Then there is a complete hyperbolic surface F of finite area that decomposes into a collection of equilateral and exactly n horocyclic ideal triangles intersecting pairwise only at vertices or along entire edges, if at all; and there is a homeomorphism f S-𝒫→ F, where 𝒫 is the set of marked points, taking non-marked triangles to equilateral triangles and each marked triangle, less its marked vertex, to a horocyclic ideal triangle.
F is unique with this property up to isometry. That is, for any complete hyperbolic surface F' and homeomorphism f' S-𝒫→ F' satisfying the conclusion above, there is an isometry ϕ F'→ F such that f is properly isotopic, preserving triangles, to ϕ∘ f'. Also, F has a packing by k disks of radius r_χ,n^k, each centered at the image of a non-marked vertex of S.
Here a triangulation of a surface, possibly with boundary, is simply a homeomorphism to the quotient space of a finite disjoint union of triangles by homeomorphically pairing certain edges. If the surface has boundary then not all edges must be paired.
In this section we will also prove existence of the triangulations required in Proposition <ref>.
For any χ< 0 and n≥ 0, and any k∈ℕ that divides both 6χ and n, there is a closed non-orientable surface with Euler characteristic χ+n and n marked points that is triangulated with k+n vertices, including the marked points, with the following properties:
* Each marked point is contained in exactly one triangle, which we also call .
* Exactly i non-marked and j marked triangle vertices meet at each of the remaining k vertices, where i and j are given by (<ref>).
If χ+n is even then there is also an orientable surface of Euler characteristic χ+n with n marked points and such a triangulation.
The main result of this section follows directly from combining Proposition <ref> with <ref>.
§.§ Geometric surfaces from triangulations
We now proceed to prove the Propositions above. This subsection gives a standard argument to prove Proposition <ref>.
Let S be a closed topological surface with a collection 𝒫 of n marked points, triangulated with k+n vertices satisfying the Proposition's hypotheses. Applying the Euler characteristic argument from the proof of Lemma <ref> with S in the role of F̅ there gives that there are 2(k-χ)-n non-marked triangles, and hence that i and j are given by (<ref>). By definition, S is a quotient space of a disjoint union of triangles by pairing edges homeomorphically. We will produce our hyperbolic surface F by taking a disjoint union of equilateral and horocyclic ideal triangles in ℍ^2 corresponding to the triangles of S and pairing their edges to match.
Number the non-marked triangles of the disjoint union giving rise to S from 1 to m, where m = 2(k- χ)-n. Let T_1, …, T_m be a collection of disjoint equilateral triangles in ℍ^2 with side lengths 2r_χ,n^k, and for 1≤ s≤ m fix a homeomorphism from T_s to the non-marked triangle numbered s. Number the marked triangles giving rise to S from 1 to n, let H_1, …, H_n be disjoint horocyclic ideal triangles each with compact side length 2r_χ,n^k, and for each t fix a homeomorphism from H_t to the complement, in the t^th marked triangle, of its marked vertex.
We now form a triangulated complex F as a quotient space of (⨆ T_s)⊔(⨆ H_t) by identifying edges of the T_s and H_t in pairs. For each s and t such that the images of T_s and T_t (or T_s and H_t, or H_s and T_t) share an edge, identify the corresponding edges of the geometric triangles by an isometry, choosing the one that is isotopic to the homeomorphism that pairs their images in the edge-pairing of marked and non-marked triangles that has quotient S. Also isometrically identify the two non-compact edges of each H_t. Since S is closed, and each of its marked vertices lies in a single triangle (hence also a single edge), this pairs off all edges of the T_s and H_t in F. And our choices of edge-pairings ensure that the homeomorphism from (⨆ T_s)⊔(⨆ H_t) to the corresponding collection of triangles for S (less n vertices) can be adjusted by an isotopy to induce a homeomorphism F→ S-𝒫. Its inverse is the map f from the Proposition's statement.
A standard argument now shows that F is a hyperbolic surface by describing a family of chart maps to ℍ^2 with isometric transition functions. This argument is essentially that of, say, <cit.>, so we will only give a bare sketch of ideas. The quotient map (⨆ T_s)⊔(⨆ H_t)→ F has a well-defined inverse on the interior of each triangle, and this yields charts for points that lie outside the triangulation's one-skeleton. For a point p in the interior of an edge of intersection between the images of, say, T_s and T_t, there is a chart that maps p to its preimage p̃ in T_s. It sends the intersection of a neighborhood of p with the image of T_t into an isometric translate of T_t that intersects T_s along the edge containing p̃.
Similarly, for each vertex p of the triangulation, a chart around p is given by choosing a preimage p̃ of p in some triangle whose image contains p, then isometrically translating all other triangles whose images contain p so that they have a vertex at p̃. The idea is to choose these isometries so that each translate intersects the translate of the triangle before it (as their images are encountered, proceeding around the boundary of a small neighborhood U of p in F) in an edge containing p̃. It is key that by the definition of r_χ,n^k in Proposition <ref> we have
(6 - (6χ+3n)/k)α(r_χ,n^k) + (2n/k)β(r_χ,n^k) = 2π.
And since p lies in 6 - (6χ+3n)/k equilateral triangle vertices and 2n/k horocyclic ideal triangle vertices by hypothesis, upon proceeding all the way around the boundary of U we find that the union of isometric translates entirely encloses a neighborhood of p̃ in ℍ^2 that is isometric to U.
For each horocyclic ideal triangle H_t, the isometry that identifies the two non-compact edges of H_t is a parabolic fixing its ideal point u. This implies that each cross-section of H_t by a horocycle with ideal point u has its endpoints identified, so Theorem 11.1.4 of <cit.> implies that F is complete. Proposition <ref> now implies that F has a packing by k disks of radius r_χ,n^k centered at the vertices of the triangulation of F.
Any two equilateral triangles in ℍ^2 with the same side length are isometric, and the full combinatorial symmetry group of any equilateral triangle is realized by isometries (these facts are standard). Analogously, two horocyclic ideal triangles with the same compact side length are isometric (this is easy to prove by hand, or cf. <cit.>), and every one has a reflection exchanging its two finite vertices in ℍ^2. It follows that if f': S-𝒫→ F' has the same properties as F then an isometry ϕ: F'→ F can be defined triangle-by-triangle so that f and ϕ∘ f' take each vertex, edge, and triangle of S to identical corresponding objects in F, and that their restrictions to any edge are properly isotopic through maps to an edge of F. Adjusting ϕ∘ f' further on each triangle yields a proper isotopy to f.
§.§ Constructing triangulated surfaces
In this subsection we construct surfaces with prescribed triangulations to prove Proposition <ref>. We will treat the orientable and non-orientable cases separately, and it will be useful at times to think in terms of genus rather than Euler characteristic. Here the genus of an orientable (or, respectively, non-orientable) closed surface is the number of summands in a decomposition as a connected sum of tori (resp. projective planes). We declare the genus of a compact surface with boundary to be that of the closed surface obtained by adjoining a disk to each boundary component. We now recall the fundamental relationship between the genus, number of boundary components, and Euler characteristic:
χ(F_or) = 2 - 2g_or - b,  χ(F_non) = 2 - g_non - b
On the left side above, F_or is a compact, orientable surface of genus g_or ≥ 0, and on the right, F_non is non-orientable of genus g_non ≥ 1, with b ≥ 0 boundary components in each case.
In proving Proposition <ref>, we will find it convenient to track the “triangle valence" of vertices.
We take the triangle valence of a vertex v of a triangulated surface to be the number of triangle vertices identified at v.
Since the link of a vertex v in a closed triangulated surface is a circle, the triangle valence of v coincides with its valence as usually defined: the number of edge endpoints at v. However for a vertex on the boundary of a triangulated surface with boundary, the triangle valence is one less than the valence. It is convenient to track triangle valence since it is additive under the operation of identifying surfaces with boundary along their boundaries.
The proof is a bit lengthy and technical, though completely elementary, so before embarking on it we give an overview. We build every triangulated surface by identifying boundary components in pairs from a fixed collection of “building blocks” constructed in a sequence of Examples. Each building block is a compact surface with boundary which is triangulated with all non-marked vertices on the boundary, an equal number of vertices per boundary component, and all (non-marked) vertices of equal valence. To give an idea of what we will construct, we have collected data on our orientable building blocks without marked vertices in Table <ref>.
In the Table, columns correspond to the triangulated building blocks Σ_g,b (of orientable genus g with b boundary components), rows to the number of vertices per boundary component, and table entries to the triangle valence of each vertex. And the number in parentheses directly below each Σ_g,b refers to the Example where it is triangulated.
We use these building blocks in Lemma <ref> to prove the orientable closed (i.e. without marked vertices) case of Proposition <ref>. We then proceed to the non-orientable closed case, in Lemma <ref>, after adding a few non-orientable building blocks to the mix in Examples <ref> and <ref>. Each Lemma's proof has several cases, featuring different combinations of building blocks, determined by certain divisibility conditions on the total number of vertices k.
As we remarked in the introduction, the closed case of Proposition <ref> is the p=3 case of the main theorem of Edmonds–Ewing–Kulkarni <cit.>, which is proved by a different method involving branched covers. The advantage of our proof is that each closed surface constructed in Lemmas <ref> and <ref> has a collection of disjoint simple closed curves, coming from the building blocks' boundaries, which are unions of edges and whose union contains every vertex. This allows us to extend to the case of n>0 marked vertices (a case not covered in <cit.>) by “unzipping” each edge in each such curve and inserting a copy of a final building block constructed in Example <ref>, homeomorphic to a disk, with marked vertices in its interior. We handle this case below that Example, completing the proof of Proposition <ref>.
It is a simple exercise to show that an annulus Σ_0,2 can be triangulated with one, two, or three vertices on each boundary component, and each such triangulation can be arranged so that each vertex has triangle valence three. We will also use three- and six-holed spheres Σ_0,3 and Σ_0,6 triangulated with two vertices per boundary component, and a four-holed sphere Σ_0,4 with three vertices per boundary component. The three- and four-holed spheres are pictured in Figure <ref> on the left and in the middle, respectively. Inspection of each reveals that each vertex has triangle valence four.
The right side of Figure <ref> shows a triangulated dodecagon, two copies of which are identified along alternating edges to produce a triangulated copy of Σ_0,6. Precisely, for each even i we identify the edge e_i in one copy homeomorphically with e_i+6 in the other so that for each i, v_i = e_i∩ e_i+1 in the first copy is identified with v_i+6 in the other. Note that for each i ≅0 modulo four, the vertex v_i is contained in 4 triangle vertices of the dodecagon, whereas v_i is in 3, 1, or 2 vertices for i≅1, 2, or 3, respectively. Thus in the quotient six-holed sphere Σ_0,6, each vertex has triangle valence 5. Each boundary component is the union of two copies of e_i along their endpoints for some odd i and so contains two vertex quotients, of v_i-1 and v_i.
Here we will triangulate the orientable genus-g surface Σ_g,1 with one boundary component, for g≥ 1. In fact we describe triangulations with one, two, and three vertices, all on the boundary and all with the same valence.
The standard construction of the closed, orientable genus-g surface takes a 4g-gon P_4g with edges labeled e_0, …, e_4g-1 in counterclockwise order, and identifies e_i to e_i+2 via an orientation-reversing homeomorphism for each i<4g congruent to 0 or 1 modulo 4. (Here the e_i inherit their orientation from the counterclockwise orientation on ∂ P_4g.) We produce Σ_g,1 by identifying the first 4g edges of a (4g+1)-gon P_4g+1 in the same way, and making no nontrivial identifications on points in the interior of the final edge. Any triangulation of P_4g+1 projects to a one-vertex triangulation of Σ_g,1 with its single vertex v on ∂Σ_g,1. Such a triangulation has 4g-1 triangles, so v has triangle valence 12g-3. The case g=1 is pictured on the left in Figure <ref>.
We may construct two- or three-vertex triangulations of Σ_g,1, each with all vertices on the boundary, by inserting one or two vertices, respectively, in the interior of the non-identified edge e_4g of P_4g+1. If one vertex is inserted then we begin by joining it to each vertex e_i-1∩ e_i for 1≤ i ≤ 4g-1. If two are inserted then the nearer one to e_0∩ e_4g is joined to e_i-1∩ e_i for 1≤ i ≤ 2g, and the other one is joined to e_i-1∩ e_i for 2g≤ i ≤ 4g-1. See the middle and right side of Figure <ref> for the case g=1. Note that in the resulting triangulations of Σ_g,1, the inserted vertices have lower valence than the quotient vertex v of those of P_4g+1.
We can even out the valence by flipping edges. An edge e that is the intersection of distinct triangles T and T' of a triangulated surface is flipped by replacing it with the other diagonal of the quadrilateral T∪ T'. This yields a new triangulation in which each endpoint of e has its (triangle) valence reduced by one, and each vertex opposite e has it increased by one. In Σ_g,1, triangulated as prescribed in the paragraph above, the projection of each e_i begins and ends at v, and each vertex opposite e_i is an inserted vertex. So flipping e_i reduces the valence of v and increases the valence of the inserted vertex, each by 2.
In the two-vertex triangulation of Σ_g,1 described above, the inserted vertex has triangle valence 4g. Since there are a total of 4g triangles there are 12g triangle vertices total. So after flipping e_i for g distinct i, each vertex of Σ_g,1 has triangle valence 6g. In the three-vertex triangulation, each inserted vertex has triangle valence 2g+1, and there are 3(4g+1) triangle vertices total. So after flipping all 2g distinct e_i, all vertices have triangle valence 4g+1.
For each g≥ 1 we now triangulate the two-holed orientable surface Σ_g,2 of genus g with two boundary components. Each triangulation will have all vertices on the boundary, each boundary component will have the same number of vertices (either one, two, or three) and each vertex will have the same valence. We construct Σ_g,2 by identifying all but two of the edges of a 4n-gon P_4n in pairs, where n = g+1.
Label the edges of P_4n as e_0, …, e_4n-1 in counterclockwise order, and for each j≠ 0,2n identify e_j with its diametrically opposite edge e_j+2n by an orientation-reversing homeomorphism. The edge orientations in question here are inherited from the boundary orientation on ∂ P_4n. So the initial vertex v_1 = e_0∩ e_1 of e_1 is identified with the terminal vertex v_2n+2 of e_2n+1. Since v_2n+2 is the initial vertex of e_2n+2 it is also identified with the terminal vertex v_3 of e_2 and so on, so that in the end all vertices v_i for odd i<2n are identified with all vertices v_j for even j > 2n. In particular the endpoints v_0 = v_4n and v_1 of e_0 are identified in the quotient.
Similarly, for all even i with 0<i≤ 2n, the vertices v_i are identified together with the vertices v_j for all odd j > 2n; and in particular the endpoints of e_2n are identified in the quotient. The quotient Σ_g,2 by these identifications is thus a surface with two boundary components, one from e_0 and one from e_2n, each containing one of the two equivalence classes of vertices of P_4n. Note that the 180-degree rotation of P_4n preserves the identifications and so induces an automorphism ρ of Σ_g,2 that exchanges its two boundary components and the vertex quotients they contain.
We may triangulate P_4n using arcs joining v_0 to v_j for 2n+1≤ j < 4n-1, v_2n to v_i for 1 ≤ i < 2n-1, and v_1 to v_2n+1. The cases g=1 and g=2 (so P_8 and P_12, respectively) of this construction are pictured on the top line of Figure <ref>. The resulting triangulations of Σ_g,2 are ρ-invariant, since for example ρ takes v_1 to v_2n+1, so the two vertices of Σ_g,2 have the same valence. A triangulation of P_4n has 4n-2 triangles, so this is the number f of faces of the triangulation of Σ_g,2. There are two vertices, and the number e of edges satisfies 2e-2 = 3f (note that the edges e_0 and e_2n belong to only one triangle each), so e = 3/2f+1. Therefore Σ_g,2 has Euler characteristic 2-2n. Since it has two boundary components its genus is g as asserted.
We may re-triangulate Σ_g,2 with an additional one or two vertices per boundary component. We first describe how to add one vertex, yielding a total of two vertices per boundary component. We begin by adding vertices w_0 and w_2n to P_4n in the edges e_0 and e_2n, respectively. Then triangulate the resulting (4n+2)-gon by joining w_0 to w_2n and each v_j for 2n < j ≤ 4n-1, and joining w_2n to each v_i for 1 ≤ i < 2n. This is illustrated on the middle line of Figure <ref>. The resulting triangulation of Σ_g,2 is ρ-invariant, so the two quotient vertices of the v_i have identical valence, as do the projections of w_0 and w_2n.
However it is plain to see that w_0 has triangle valence 2n+1: it is one vertex of each of the 2n triangles in the upper half of P_4n, and one of a unique triangle in the lower half. But since there are 4n triangles there are 12n triangle endpoints, so the two quotient vertices of the v_i must each have triangle valence 4n-1. As in Example <ref>, we even out the valence by flipping some of the e_i. For each i between 1 and 2n-1, one of the triangles in Σ_g,2 containing e_i has w_0 as its opposite vertex, and the other has w_2n in this role. If we flip e_i we thus increases the triangle valence of each of w_0 and w_2n by one, and since e_i has one endpoint at each quotient vertex of the v_i, it decreases each of their triangle valences by one. Thus after flipping e_1 through e_n-1, all vertices of the new triangulation of Σ_g,2 have triangle valence 3n.
To re-triangulate Σ_g,2 with three vertices per boundary component, we begin by placing vertices u_0 and w_0 in the interior of e_0, in that order, and u_2n and w_2n in e_2n so that the 180-degree rotation of P_4n exchanges u_0 with u_2n and w_0 with w_2n. Using line segments join w_0 to each of v_2, …, v_n; join u_2n to each of v_n+1, …, v_2n-1; join w_2n to each of v_2n+2, …, v_3n; and join u_0 to each of v_3n+1, …, v_4n-1. Note that the collection of such line segments is rotation-invariant. It divides P_4n into triangles and a single region with vertices u_0, w_0, v_n, v_n+1, u_2n, w_2n, v_3n and v_3n+1. Triangulate this region by joining u_0 to w_2n and v_3n, w_0 to w_2n and u_2n, and v_n to u_2n.
The resulting triangulation of P_4n is still rotation-invariant, and moreover each of u_0 and w_0 has triangle valence n+2. There are a total of 4n+2 triangles, with a total of 12n+6 vertices. So flipping e_i for all i between 0 and 2n except n yields a triangulation of Σ_g,2 in which each vertex has triangle valence 2n+1 = 2g+3.
In the special case g=1, the construction of Example <ref> yields a one-holed torus Σ_1,1 triangulated with one, two, or three vertices per boundary component such that each vertex has triangle valence 9, 6 or 5, respectively. We may construct a b-holed torus Σ_1,b as a b-fold cover of Σ_1,1, where each boundary component projects homeomorphically to that of Σ_1,1. One easily constructs such a cover by, say, joining a disk to Σ_1,1 along its boundary, taking a b-fold cover of the resulting torus, removing the preimage of the disk's interior and lifting a triangulation of Σ_1,1. The triangle valence of vertices remains the same.
We now have enough building blocks to handle the orientable closed case of Proposition <ref>.
For any g≥ 2 and any k∈ℕ that divides 12(g-1), the closed, orientable surface of genus g has a triangulation with k vertices, all of equal valence.
For the case k=1 we observe that any triangulation of the 4g-gon P_4g descends to a one-vertex triangulation of the genus-g surface under the “standard construction” mentioned in Example <ref>. So we assume below that k>1.
By an Euler characteristic calculation, a k-vertex triangulation of the genus-g surface has 4(g-1) + 2k triangles. If the vertices have equal valence then each has valence 12(g-1)/k + 6. Note that the number of vertices and their valence determines the genus, so below it is enough to exhibit k-vertex triangulations of closed, connected surfaces with vertices of the correct valence. We break the proof into sub-cases depending on some congruence conditions satisfied by k.
Case 1: 2 and 3 do not divide k In this case k has no common factor with 12, so k divides g-1. For g_0 = (g-1)/k, we claim that the desired surface is obtained by joining one copy of the one-holed genus-g_0 surface Σ_g_0,1 of Example <ref> to each of the k boundary components of Σ_1,k, where all building blocks are triangulated with one vertex per boundary component, so that vertices are identified.
This follows from the fact that the vertex on Σ_g_0,1 has triangle valence 12g_0-3 and each vertex of Σ_1,k has triangle valence 9; that triangle valence adds upon joining boundary components; and that triangle valence coincides with valence for vertices of triangulated closed surfaces. So each vertex of the resulting closed surface has valence:
(12g_0-3)+9 = 12(g-1)/k + 6
Case 2: 2 divides k, 3 and 4 do not Now k/2 has no common factor with 12 and hence divides g-1. So we use k/2 copies of Σ_g_0,2 from Example <ref>, where g_0 = 2(g-1)/k, each triangulated with one vertex on each boundary component. We arrange them in a ring with a copy of the annulus Σ_0,2 placed between each pair of subsequent copies. Each vertex of the resulting closed surface has valence:
(6g_0+3)+3 = 12(g-1)/k + 6
Case 3: 4 divides k, 3 does not In this case we use k/4 copies of Σ_g_0,2, where g_0 = 4(g-1)/k, each triangulated with two vertices on each boundary component. As in the previous case we arrange them in a ring interspersed with copies of Σ_0,2, so after joining boundaries each vertex has valence (3g_0+3) + 3 = 12(g-1)/k + 6.
Case 4: 3 divides k, 2 does not We take g_0 = 3(g-1)/k∈ℕ and join k/3 copies of Σ_g_0,1, each triangulated with 3 vertices per boundary component, to Σ_1,k/3 with the corresponding triangulation to produce a closed triangulated surface. Its vertices each have valence (4g_0+1)+5 = 12(g-1)/k + 6.
Case 5: 6 divides k, 4 does not We take g_0 = 6(g-1)/k∈ℕ and arrange k/6 copies of Σ_g_0,2, each triangulated with three vertices per boundary component, in a ring interspersed with copies of Σ_0,2. The resulting closed, triangulated surface has vertices of valence (2g_0+3)+3 = 12(g-1)/k + 6.
Case 6: 12 divides k If 12(g-1)/k is even then we let g_0 = 12(g-1)/(2k), so that
2g_0+3 = 12(g-1)/k + 3
As in the previous case we join k/6 copies of Σ_g_0,2, triangulated with three vertices per boundary component, in a ring interspersed with copies of Σ_0,2. If 12(g-1)/k is odd then we let g_0 = (12(g-1)/k-1)/2, so that
2g_0+3 = 12(g-1)/k + 2
We build the surface in this case from k/6 copies of Σ_g_0,2, triangulated with three vertices per boundary component, and k/12 copies of Σ_0,4, each triangulated as in Figure <ref>. (If g_0 = 0, as it may be, then Σ_g_0,2 is the annulus of Example <ref>.) Given any bijection from the set of boundary components of the Σ_g_0,2 to those of the Σ_0,4, homeomorphically identifying each boundary component of a Σ_g_0,2 with its image taking vertices to vertices produces a closed, triangulated surface. We must choose the bijection to make the resulting surface connected, an easy exercise equivalent to constructing a connected four-valent graph with k/12 vertices (treating the Σ_0,4 as vertices and the Σ_g_0,2 as edges).
Each case above constructs a closed, connected surface triangulated with k vertices, each of valence 12(g-1)/k + 6. These quantities determine the number of edges and faces of the triangulation and show that the surface constructed has Euler characteristic 2-2g; choosing each boundary identification to reverse the boundary orientations induced from fixed orientations on the building blocks makes the result orientable, hence of genus g.
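The bookkeeping of the six cases can be summarized in a short script (a sanity check of the arithmetic above, not part of the proof): given g and k with k dividing 12(g-1), it selects the case and confirms that the glued blocks produce vertices of valence 12(g-1)/k + 6.

def check_case(g, k):
    # Integrality of each g0 below is guaranteed by the case analysis.
    assert g >= 2 and 12 * (g - 1) % k == 0
    target = 12 * (g - 1) // k + 6
    if k % 2 and k % 3:                 # Case 1: 2 and 3 do not divide k
        valence = (12 * ((g - 1) // k) - 3) + 9
    elif k % 4 and k % 3:               # Case 2: 2 divides k; 3 and 4 do not
        valence = (6 * (2 * (g - 1) // k) + 3) + 3
    elif k % 3:                         # Case 3: 4 divides k, 3 does not
        valence = (3 * (4 * (g - 1) // k) + 3) + 3
    elif k % 2:                         # Case 4: 3 divides k, 2 does not
        valence = (4 * (3 * (g - 1) // k) + 1) + 5
    elif k % 12:                        # Case 5: 6 divides k, 4 does not
        valence = (2 * (6 * (g - 1) // k) + 3) + 3
    else:                               # Case 6: 12 divides k
        q = 12 * (g - 1) // k
        g0 = q // 2 if q % 2 == 0 else (q - 1) // 2
        valence = (2 * g0 + 3) + (3 if q % 2 == 0 else 4)
    return valence == target

# all(check_case(g, k) for g in range(2, 40)
#     for k in range(1, 12 * (g - 1) + 1) if 12 * (g - 1) % k == 0) is True.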
We now prove the same result in the non-orientable case. Recall below that the genus of a non-orientable surface is the maximal number of ℝP^2-summands in a connected sum decomposition. We begin by adding some non-orientable building blocks to the mix.
In Example <ref> the edge identifications between the two copies of the dodecagon comprising Σ_0,6 preserve orientation, so these copies inherit opposite orientations from any orientation on Σ_0,6. Therefore the involution that exchanges the two dodecagons while rotating Σ_0,6 by 180-degrees reverses orientation. Since it is also triangulation-preserving and fixed point-free, its quotient is a three-holed ℝP^2, triangulated with two vertices per boundary component where again each vertex has triangle valence 5.
For any g≥ 2 and a 2g-gon with edges oriented counterclockwise and subsequently labeled e_0, …, e_2g-1, identifying e_i with e_i+1 by an orientation-preserving homeomorphism for each even i<2g yields a non-orientable surface of genus g. Any triangulation of the 2g-gon projects to a one-vertex triangulation of this surface with 2g-2 triangles. For g≥ 1 a one-vertex triangulation of a one-holed genus-g nonorientable surface Υ_g,1 is produced analogously by identifying all edges but one of a (2g+1)-gon. This triangulation thus has 2g-1 triangles, so the vertex has triangle valence 6g-3.
We produce two- or three-vertex triangulations with vertices of constant valence analogously to the orientable case of Example <ref>. These triangulations have 2g and 2g+1 triangles, respectively, so each vertex has triangle valence 3g or 2g+1.
For any g≥ 3 and any k∈ℕ that divides 6(g-2), the closed, nonorientable genus-g surface has a triangulation with k vertices, all of equal valence.
The k=1 case is given at the beginning of Example <ref> above. For the case k>1, as in the proof of Lemma <ref> we consider several sub-cases.
Case 1: 2 and 3 do not divide k For g_0 = (g-2)/k we glue one copy of Υ_g_0,1, triangulated with one vertex per boundary component, to each boundary component of Σ_1,k, triangulated to match. The resulting closed surface has each vertex of valence (6g_0 -3)+9 = 6(g-2)/k + 6.
Case 2: 2 divides k, 3 does not For g_0 = 2(g-2)/k, join k/2 copies of Υ_g_0,1 to boundary components of Σ_1,k/2, all triangulated with two vertices per boundary component. In the resulting surface each vertex has valence 3g_0+6=6(g-2)/k + 6.
Case 3: 3 divides k, 2 does not For g_0 = 3(g-2)/k, join k/3 copies of Υ_g_0,1 to boundary components of Σ_1,k/3, all triangulated with three vertices per boundary component. In the resulting surface each vertex has valence (2g_0+1) + 5=6(g-2)/k + 6.
Case 4: 6 divides k If 6(g-2)/k is even then take g_0 = 6(g-2)/(2k) and join k/3 copies of Υ_g_0,1 to boundary components of Σ_1,k/3, all triangulated with three vertices per boundary component. In the resulting surface each vertex has valence (2g_0+1)+5 = 6(g-2)/k+6. If 6(g-2)/k is odd and congruent to 0 modulo three, take g_0=6(g-2)/(3k) and join k/2 copies of Υ_g_0,1 to boundary components of Σ_1,k/2, all triangulated with two vertices per boundary component. In the resulting surface each vertex has valence 3g_0 + 6 = 6(g-2)/k+6.
If 6(g-2)/k is odd and congruent to 1 modulo three, take g_0 = (6(g-2)/k+2)/3 and take g_1 = g_0-1. We join k/6 copies of Σ_0,3, triangulated with two vertices per boundary component as in Figure <ref>, to k/6 copies of each of Υ_g_0,1 and the two-holed orientable building block Σ_g_1,2, triangulated to match, as follows: arrange the copies of Σ_g_1,2 in a ring and join boundary components of each pair of subsequent copies to two boundary components of a fixed copy of Σ_0,3. This leaves one free boundary component on each copy of Σ_0,3, which we cap off with a copy of Υ_g_0,1. The result is a connected, closed triangulated surface with k vertices, each of valence 3g_0+4 = 3(g_1+1)+4 = 6(g-2)/k+6.
If 6(g-2)/k is odd and congruent to 2 modulo three then we perform the same construction as in the previous case, except that we take g_0 = [6(g-2)/k+1]/3 and replace each copy of Σ_0,3 with a copy of the three-holed ℝP^2 from Example <ref>. The resulting closed, triangulated surface now has k vertices that each have valence 3g_0+5 = 3(g_1+1) + 5 = 6(g-2)/k+6.
As in the proof of Lemma <ref>, the fact that each non-orientable surface constructed above has k vertices, each of valence 6(g-2)/k+6, implies that it has genus g.
[Triangulated complexes with ideal vertices] Here we will produce a triangulated complex X_l homeomorphic to a disk for each l∈ℕ, with l+1 vertices of which l lie in the interior and each have triangle valence one. We call these vertices “marked”. The remaining vertex lies on the boundary of X_l. The triangulation of X_l is comprised of 2l-1 triangles in two classes: “marked” triangles H_1, …, H_l, which each have one marked vertex, and “non-marked” triangles E_1, …, E_l-1.
We begin by identifying a subset of the edges of the E_i in pairs so that their union is homeomorphic to a disk, for instance according to the scheme indicated in Figure <ref>, so that (for l>2) E_1 has exactly two free edges and each E_i has at least one. An Euler characteristic calculation shows that a total of 2l-4 edges of the E_i are identified, so the boundary of ⋃ E_i is a union of l+1 free edges. For each free edge e of ⋃ E_i except one belonging to E_1, we join one of the H_i to ⋃ E_i by identifying the edge opposite its marked vertex to e via a homeomorphism. We then finish by identifying the two edges of each H_i that contain its marked vertex to each other by a boundary orientation-reversing homeomorphism.
In the resulting quotient X_l, the interior of each H_i forms an open neighborhood of its marked vertex, and its edge opposite the marked vertex descends to a loop based at the non-marked vertex quotient. There is therefore a unique non-marked vertex quotient, with triangle valence 5l-3, where 3l-3 vertices of non-marked triangles are identified together with 2l non-marked vertices of marked triangles.
Let Y_l be the complex obtained by joining a non-marked triangle E_0 to X_l along its sole free edge (of E_1). Then Y_l is still homeomorphic to a disk with l non-marked vertices in its interior, but its boundary is a union of two edges: the free edges of E_0. The non-marked vertex quotient of X_l has triangle valence 5l-1 in Y_l, with 3l-1 non-marked triangle vertices identified there, since it picks up an extra two from E_0. The other non-marked vertex quotient in Y_l is the single vertex shared by the two free edges of E_0, which therefore has triangle valence one in Y_l.
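The counts above can be cross-checked against the Euler characteristic. Tallying edge classes of X_l gives (2l-4)/2 interior edges from the pairings among the E_i, l attaching edges for the marked triangles, the one remaining boundary edge, and l folded marked-triangle edges — a total of 3l-1 (this tally is ours; it is not stated above). A quick check that χ(X_l) = 1, as befits a disk, together with the stated valence 5l-3:

```python
def x_l_counts(l):
    # V, E, F for the complex X_l described above (valid for l >= 2)
    V = l + 1                              # l marked vertices + 1 boundary vertex
    E = (2 * l - 4) // 2 + l + 1 + l       # pairings + gluings + boundary + folded
    F = 2 * l - 1                          # l marked + (l - 1) non-marked triangles
    return V, E, F

for l in range(2, 200):
    V, E, F = x_l_counts(l)
    assert V - E + F == 1                          # Euler characteristic of a disk
    assert (3 * l - 3) + 2 * l == 5 * l - 3        # valence of the non-marked vertex
print("X_l has the Euler characteristic of a disk for all tested l")
```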
We will begin by re-stating the Proposition separately in the orientable and non-orientable cases. In the non-orientable case it asserts that for g≥ 1 and n≥ 0 such that 2-g-n<0, and any k∈ℕ that divides both 6(2-g) and n, there is a closed non-orientable surface of genus g with n marked points which is triangulated with k+n vertices with the following two properties: each marked point is a valence-one vertex; and calling the triangle containing it marked, each remaining vertex is a quotient of 2n/k marked and
6 + (6(g-2) + 3n)/k
non-marked triangle vertices. Here we recall from the Proposition that the surface is required to have Euler characteristic χ+n, so from the non-orientable case of (<ref>) with b=0 we have χ+n = 2-g. Thus χ = 2 - g -n < 0, and k divides both 6(2-g) and n if and only if it divides both 6χ and n.
The Proposition's orientable case asserts that for g, n≥ 0 such that 2-2g-n<0, and any k dividing both 12(g-1) and n, that there is a closed, orientable surface of genus g with n marked points which is triangulated in analogous fashion to the non-orientable case except that each non-marked vertex is contained in
6 + (12(g-1)+3n)/k
non-marked triangle vertices. The valence computations here and in the non-orientable case are obtained by substituting 2-g-n and 2-2g-n, respectively, for χ in the formula 6-(6χ+3n)/k in the Proposition's original statement.
From these restatements it is clear that Lemmas <ref> and <ref> address the n=0 cases, so we assume below that n>0. We first consider the orientable case, beginning with the subcase k=1. For g≥ 1 join a copy of the one-holed building block Σ_g,1 of Example <ref>, triangulated with one vertex, to a copy of X_n along their boundaries. The non-marked vertex is then a quotient of
(12g-3) + (3n-3) = 12(g-1) + 3n + 6
non-marked triangle vertices, and 2n marked triangle vertices. In the case g=0 (so with n≥ 3), for natural numbers i and j such that i+j=n we join a copy of X_i to a copy of X_j along their boundaries. The result is homeomorphic to a triangulated sphere with n marked vertices, where the non-marked vertex is a quotient of (3i-3) + (3j-3) = 3n -6 non-marked triangle vertices.
Now take k≥2, and suppose g≥ 2. Lemma <ref> supplies a closed, oriented, surface S_0 of genus g with no marked points and a k-vertex triangulation, where each vertex has valence 12(g-1)/k+6. By its construction, S_0 is endowed with a collection of disjoint simple closed curves, each a union of one, two, or three edges, that separate it into a union of building blocks. Orient each such curve on S_0, and for an edge e contained in such a curve let e inherit its orientation. Then cut out the interior of e; that is, take the path-completion of S_0-e. The resulting space has a single boundary component which is a union of two edges, and S_0 is recovered by identifying them.
Construct S by removing the interior of each such edge e, orienting the two new edges to match that of e, and joining each new edge to one of a copy of Y_l in orientation-preserving fashion, where l = n/k and the edges of Y_l are oriented pointing away from the free vertex of E_0. Each vertex of S_0 is the initial vertex of one oriented edge, and the terminal vertex of one oriented edge. It therefore picks up one additional non-marked triangle vertex from one copy of Y_l and 3l-1 from another, as well as 2l marked triangle vertices. This vertex thus has the required valence in S.
For the orientable case g = 1 and k ≥ 2 we join one copy of X_l to each boundary component of Σ_1,k, triangulated as in Example <ref> with one vertex per boundary component, where l=n/k. Then each non-marked vertex is contained in 9+3l-3 = 3n/k + 6 non-marked triangle vertices and 2n/k vertices from marked triangles.
We finally come to the orientable case g=0 and k≥ 2. Since k must divide 12(g-1) = -12 it can be only 2, 3, 4, 6 or 12. For the case k=2, noting that n is thus even, we join one copy of X_n/2 to each boundary component of the annulus Σ_0,2, triangulated with one vertex per boundary component. For k=3 we produce a sphere S_0 by doubling a triangle across its boundary, orient the cycle of triangle edges, and perform the construction described above to yield S, using three copies of Y_n/3. For k=6 and k=12 we begin with Σ_0,2 or Σ_0,4, respectively, each triangulated with three vertices per boundary component as in Example <ref>; construct S_0 by capping off each boundary component with a single triangle; and proceed similarly.
For the case k=4 we first construct a space Z_l, l∈ℕ, as follows: join two copies of Y_l along a single edge of E_0 in each so that the free vertex of E_0 in one is identified to the non-trivial, non-marked vertex quotient in the other. Then Z_l is homeomorphic to a disk with two vertices on its boundary, each a quotient of 3l non-marked triangle vertices and 2l non-marked vertices of marked triangles, and 2l marked vertices in its interior. Returning to the k=4 subcase, we attach one copy of Z_n/4 to each boundary component of the annulus Σ_0,2, triangulated with two vertices per boundary component. This yields a sphere with four non-marked vertices, each a quotient of 3n/4+3 non-marked triangle vertices as required. The valence requirements are also easily checked in the other orientable subcases with g=0 and k≥ 2.
The non-orientable subcase k=1 is analogous to the corresponding orientable subcase, with the orientable building block Σ_g,1 replaced by Υ_g,1 from Example <ref>. (There is no analog of the g=0 sub-subcase here.) And the non-orientable subcase k≥ 2, g≥ 3 is analogous to the orientable subcase k≥ 2, g≥ 2, with the surface S_0 provided here by Lemma <ref> instead of <ref>.
The non-orientable subcase k≥ 2, g=2 is analogous to the orientable subcase k≥ 2, g=1, but with the k-holed torus Σ_1,k replaced by a k-holed Klein bottle (the non-orientable genus-two surface) triangulated with one vertex per boundary component. Its construction is analogous to that of Σ_1,k in Example <ref>: fill the hole of the triangulated one-holed Klein bottle Υ_2,1, take a k-fold cyclic cover, and remove the preimage of the interior of the added disk. In all cases so far the valence criteria are straightforward to check.
We finally come to the subcase k≥ 2 and g=1. As in the corresponding orientable subcase (g=0), we note that possible values of k are tightly restricted by the requirement that k divides 6(g-2) = -6: it must be either 2, 3, or 6. For k=2 or 3 we note that the construction of Example <ref> supplies Υ_1,1, a one-holed ℝP^2 triangulated with one, two or three vertices on its boundary. In each case each vertex is contained in three non-marked triangle vertices. For k=2 we attach a copy of Z_n/2 to Υ_1,1, triangulated with two vertices, along their boundaries. For k=3 we triangulate Υ_1,1 with three vertices, cap off its boundary with a triangle to produce a surface S_0, and produce S by suturing in three copies of Y_n/3 as above. For k=6 we cap off the boundary components of the three-holed ℝP^2 from Example <ref>, triangulated with two vertices per boundary component, with three copies of Z_n/6 as defined above; the index n/6 is forced, since the three copies of Z_l contribute 3· 2l marked interior vertices and these must account for all n marked points.
§ GENERIC NON-SHARPNESS
In this section we prove Theorem <ref>, that the bound r_χ,n^k of Proposition <ref> is “generically” not sharp. We begin by observing that r_χ,n^k is generically not attained.
For any χ<0 and k∈ℕ that does not divide 6χ, there is no closed hyperbolic surface with Euler characteristic χ and a packing by k disks of radius r_χ,0^k. For any fixed χ< 0 and n>0, there are only finitely many k∈ℕ for which a complete, finite-area hyperbolic surface with Euler characteristic χ and n cusps exists which admits a packing by k disks of radius r_χ,n^k.
We first consider the closed (n=0) case. The main observation here is that by the equation from Proposition <ref> defining r_χ,n^k, α(r_χ,0^k) is an integer submultiple of 2π if and only if k divides 6χ. By the Proposition, any closed surface that admits a radius-r_χ,0^k disk packing is triangulated by a collection of equilateral triangles that all have vertex angle α(r_χ,n^k). The total angle around any vertex v of the triangulation is thus i·α(r_χ,n^k), where i is the number of triangle vertices identified at v. But since v is a point of a hyperbolic surface this angle is 2π, whence k must divide 6χ.
Let us now fix χ<0 and n>0, and recall from Proposition <ref> that r_χ,n^k>0 is determined by the equation f_k(r_χ,n^k) = 2π, where for α(r) = 2sin^-1(1/2cosh r) and β(r) = sin^-1(1/cosh r),
f_k(r) = (6 - (6χ+3n)/k)α(r) + (2n/k)β(r) = (6 - (6χ+n)/k)α(r) + (2n/k)(β(r) - α(r)).
Rewriting f_k as on the right above makes it clear that for any fixed r>0 and k∈ℕ, f_k+1(r) < f_k(r). For since the inverse sine is concave up on (0,1) we have β(r) > α(r) for any r, and inspecting the equations of (<ref>) shows in all cases that -6χ-n > 0. Moreover, these equations show in all cases that -6χ-3n≥ -3, so the coefficient of α(r) above is positive and f_k decreases with r since both α and β do.
We now claim that α(r_χ,n^k) strictly increases to π/3 and β(r_χ,n^k) to π/2 as k→∞. For each k, by the above we have f_k+1(r_χ,n^k) < f_k(r_χ,n^k) = 2π, so since each f_k decreases with r it follows that r_χ,n^k+1 < r_χ,n^k. Thus in turn α(r_χ,n^k+1)>α(r_χ,n^k) and similarly for β. Note that for each r, f_k(r) → 6α(r) as k→∞. The limit is a decreasing function of r that takes the value 2π at r=0. Since the sequence {r_χ,n^k}_k∈ℕ is decreasing and bounded below by 0, it converges to its infimum ℓ. If ℓ were greater than 0 then for large enough k we would have f_k(r_χ,n^k) < f_k(ℓ) < 2π, a contradiction. Thus r_χ,n^k→ 0 as k→∞, and the claim follows by a simple computation.
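These monotonicity and limit claims are easy to probe numerically; in the sketch below the parameters χ=-1, n=1 are illustrative choices, and bisection applies because f_k decreases in r:

```python
import math

def alpha(r):   # vertex angle of an equilateral hyperbolic triangle with side 2r
    return 2 * math.asin(1 / (2 * math.cosh(r)))

def beta(r):    # angle of a horocyclic ideal triangle at its compact side of length 2r
    return math.asin(1 / math.cosh(r))

def f(k, r, chi=-1, n=1):
    return (6 - (6 * chi + 3 * n) / k) * alpha(r) + (2 * n / k) * beta(r)

def r_k(k):
    lo, hi = 1e-9, 10.0
    for _ in range(200):                    # bisection; f is decreasing in r
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(k, mid) > 2 * math.pi else (lo, mid)
    return lo

radii = [r_k(k) for k in range(1, 200)]
assert all(a > b for a, b in zip(radii, radii[1:]))   # r_k strictly decreasing
assert 0 < math.pi / 3 - alpha(radii[-1]) < 0.01      # alpha(r_k) -> pi/3 from below
print("r_k decreases to 0 and alpha(r_k) tends to pi/3")
```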
Now let us recall the geometric meaning of α and β: α(r) is the angle at any vertex of an equilateral triangle with side length 2r, and β(r) is the angle of a horocyclic triangle with compact side of length 2r, at either endpoint of this side. By Proposition <ref>, if there is a complete hyperbolic surface F of finite area with Euler characteristic χ and n cusps and a packing by k disks of radius r_χ,n^k then F decomposes into equilateral triangles and exactly n horocyclic ideal triangles, all with compact sidelength 2r_χ,n^k.
For such a surface F, since F has n cusps and there are n horocyclic ideal triangles, each horocyclic ideal triangle has its edges identified in F to form a monogon encircling a cusp. At most n disk centers can be at the vertex of such a monogon, so for k > n there is a disk center which is only at the vertex of equilateral triangles. The angles around this vertex must sum to 2π, so α(r_χ,n^k) must equal 2π/i for some i∈ℕ. But for k large enough we have 2π/7 < α(r_χ,n^k) < π/3, and this is impossible.
We will spend the rest of the section showing that among finite-area complete hyperbolic surfaces with any fixed topology, the maximal k-disk packing radius does attain a maximum. To make this more precise requires some notation. Below for a smooth surface Σ of hyperbolic type — that is, which is diffeomorphic to a complete, finite-area hyperbolic surface — let 𝔗(Σ) be the Teichmüller space of Σ. This is the set of pairs (F,ϕ) up to equivalence, where F is a complete, finite-area hyperbolic surface and ϕ : Σ→ F is a diffeomorphism (called a marking), and (F',ϕ') is equivalent to (F,ϕ) if there is an isometry f : F→ F' such that f∘ϕ is homotopic to ϕ'. We take the usual topology on 𝔗(Σ), which we will discuss further in Lemma <ref> below.
For a surface Σ of hyperbolic type and k∈ℕ, define _k : 𝔗(Σ)→ (0,∞) by taking _k(F,ϕ) to be the maximal radius of an equal-radius packing of F by k disks. Refer by the same name to the induced function on the moduli space 𝔐(Σ) of Σ.
The moduli space 𝔐(Σ) is the quotient of 𝔗(Σ) by the action of the mapping class group, or, equivalently, the collection of complete, finite-area hyperbolic surfaces homeomorphic to Σ, taken up to isometry, endowed with the quotient topology from 𝔗(Σ). Since the value of _k on (F,ϕ) depends only on F, it induces a well-defined function on 𝔐(Σ). We are aiming for:
For any surface Σ of hyperbolic type, and any k∈ℕ, _k attains a maximum on 𝔐(Σ).
Assuming the Proposition, we immediately obtain this section's main result:
This is because for fixed χ and n there are at most two n-punctured surfaces of hyperbolic type with Euler characteristic χ: one orientable and one non-orientable. By Lemma <ref>, for all but finitely many k the maximum of _k on each of their moduli spaces (which exists by the Proposition) is less than r_χ,n^k.
To prove the Proposition we need some preliminary observations on _k, some basic facts about the topology of moduli space, and some non-trivial machinery due originally to Thurston.
For any surface Σ of hyperbolic type, and any k∈ℕ there exist c_k>0 and ϵ_k>0 such that _k(F)≥ c_k for every F∈𝔐(Σ), and any packing of F by k disks of radius _k(F) is contained in the ϵ_k-thick part of F. For each such F, there is a packing of F by k disks of radius _k(F).
There is a universal lower bound of sinh^-1(2/√(3)) on the maximal injectivity radius of a hyperbolic surface. (This was shown by Yamada <cit.>.) The value of _k on any hyperbolic surface F is therefore at least the maximum radius c_k of k disks embedded without overlapping in a single disk of radius sinh^-1(2/√(3)). This proves the lemma's first assertion.
For a maximal-radius packing of F by k disks, the center of each disk is thus contained in the c_k-thick part F_[c_k,∞) of F, the set of x∈ F such that the injectivity radius of F at x is at least c_k. The second assertion follows immediately from the (surely standard) fact below:
For any r less than the two-dimensional Margulis constant, and any hyperbolic surface F, the r-neighborhood in F of F_[r,∞) is contained in F_[r',∞), where r' = sinh^-1(1/2(1-e^-2r)).
We will use the following hyperbolic trigonometric fact: for a quadrilateral with base of length δ and two sides of length h, each meeting the base at right-angles, the lengths δ of the base, h of the sides it meets, and ℓ of the remaining side are related by
sinh(ℓ/2) = cosh hsinh(δ/2).
Now fix x∈∂ F_[r,∞) and let U be the component of the ϵ-thin part of F that contains x, where ϵ is the two-dimensional Margulis constant. Suppose first that U is an annulus, and let h be the distance from x to the core geodesic γ of U. If γ has length δ then applying (<ref>) in the universal cover we find that there is a closed geodesic arc of length ℓ (as defined there) based at x and freely homotopic (as a closed loop) to γ; since x lies on the boundary of the r-thick part, ℓ = 2r. It thus follows that cosh h = sinh r/sinh(δ/2). Let us now assume that δ≤ 2sinh^-1(sinh r/cosh r), so that h ≥ r. For a point y at distance h-r from γ, the geodesic arc based at y in the free homotopy class of γ has length ℓ' given by
sinh(ℓ'/2) = cosh(h-r)sinh(δ/2) = sinh r(cosh r - √(sinh^2 r - sinh^2(δ/2)))
(using the “angle addition” formula for hyperbolic cosine.) As a function of δ, ℓ' is increasing, and its infimum at δ = 0 satisfies sinh(ℓ'/2) = sinh r(cosh r - sinh r) = 1/2(1-e^-2r). Therefore the injectivity radius at y is at least r' defined in the claim. Note also that at δ = 2sinh^-1(sinh r/cosh r) we have ℓ' = δ as expected. So when h≤ r, i.e. when U is contained in the r-neighborhood of F_[r,∞), the injectivity radius at every point of U is at least δ/2 > r'.
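The estimates of this paragraph can be checked directly; a minimal numerical sketch (function and variable names are ours):

```python
import math

def ell_prime(r, delta):
    # length of the shortest geodesic arc at the point y at distance h - r
    # from the core, via the displayed formula above
    s = math.sinh(r) * (math.cosh(r)
                        - math.sqrt(math.sinh(r) ** 2 - math.sinh(delta / 2) ** 2))
    return 2 * math.asinh(s)

for r in (0.05, 0.2, 0.5):
    # identity sinh(r)(cosh(r) - sinh(r)) = (1 - e^{-2r})/2 behind the value of r'
    assert abs(math.sinh(r) * (math.cosh(r) - math.sinh(r))
               - 0.5 * (1 - math.exp(-2 * r))) < 1e-12
    dmax = 2 * math.asinh(math.tanh(r))   # threshold delta = 2 sinh^{-1}(sinh r/cosh r)
    assert abs(ell_prime(r, dmax) - dmax) < 1e-9          # l' = delta at the threshold
    samples = [ell_prime(r, dmax * t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
    assert all(u < v for u, v in zip(samples, samples[1:]))  # l' increases with delta
print("thin-part injectivity radius estimates check out")
```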
Another hyperbolic trigonometric calculation shows that if U is a cusp component of the ϵ-thin part of F then for y at distance r from x along the geodesic ray from x out the cusp, the shortest geodesic arc based at y has length ℓ' satisfying the same formula as above. Hence in this case as well, the injectivity radius is at least r' = ℓ'/2 at every point of U in the r-neighborhood of F_[r,∞).
For the Lemma's final assertion we note that _k(F) is the supremum of the function on F^k which records the maximal radius of a packing by equal-radius disks centered at x_1,…,x_k for any (x_1,…,x_k)∈ F^k. (If x_i = x_j for some j≠ i then we take its value to be 0.) This function is clearly continuous on F^k. By the Lemma's first assertion it attains a value of c_k, so it approaches its supremum on the compact subset (F_[c_k,∞))^k of F^k.
For any surface Σ of hyperbolic type, and any k∈ℕ, _k is continuous on 𝔗(Σ), therefore also on 𝔐(Σ).
We will take the topology on 𝔗(Σ) to be that of approximate isometries as in <cit.>, with a basis consisting of K-neighborhoods of (F,ϕ)∈𝔗(Σ) for K∈(1,∞). Such a neighborhood consists of those (F',ϕ')∈𝔗(Σ) for which there exists a K-bilipschitz diffeomorphism Φ F→ F' with Φ∘ϕ homotopic to ϕ'. (This is equivalent to the “algebraic topology” of <cit.>, which is in turn equivalent to that induced by the Teichmüller metric, cf. <cit.>. In particular Mumford's compactness criterion holds for the induced topology on 𝔐(Σ), see <cit.>.)
It is obvious that _k is continuous with this topology. For, given a geodesic arc γ of length ℓ on F and a K-bilipschitz diffeomorphism Φ : F→ F', the geodesic arc in the based homotopy class of Φ∘γ has length between ℓ/K and Kℓ. Now for a packing of F by k disks of radius _k(F) there are finitely many geodesic arcs of length ℓ≐ 2_k(F) joining the disk centers, and every other arc based in this set has length at least some ℓ_0>ℓ. Choosing K near enough to 1 that ℓ_0/K >Kℓ we find for every (F',ϕ') in the K-neighborhood of (F,ϕ) that _k(F)/K ≤_k(F') ≤ K_k(F).
The main ingredient in the proof of Proposition <ref> is Lemma <ref> below. It was suggested by a referee for <cit.>, who sketched the proof I give here.
Let Σ be a surface of hyperbolic type, and fix r_1 > r_0 >0, both less than the two-dimensional Margulis constant. For any essential simple closed curve 𝔠 on Σ, and any S∈𝔗(Σ) such that the geodesic representative γ of 𝔠 has length less than 2r_0 in S, there exists S_0∈𝔗(Σ) where the geodesic representative γ_0 of 𝔠 has length between 2r_0 and 2r_1, and a path-Lipschitz map (one which does not increase the length of paths) from S_0-γ_0 onto a region in S containing the complement of the component of the r_1-thin part containing γ.
The construction uses the strip deformations described by Thurston <cit.>. We will appeal to a follow-up by Papadopolous–Théret <cit.> for details and a key geometric assertion. Below we take 𝔠 to have geodesic length less than 2r_0 in S. We will assume first that 𝔠 is non-separating. The separating case raises a few complications that we will deal with after finishing this case.
To begin the strip deformation construction, cut S open along the geodesic representative γ of to produce a surface with two geodesic boundary components γ_1 and γ_2. Then adjoin a funnel to this surface along each γ_i to produce its Nielsen extension S. Now fix a properly embedded geodesic arc α in the cut-open surface that meets each of γ_1 and γ_2 perpendicularly at an endpoint. One can obtain such an arc α starting with a simple closed curve a in S that intersects γ once: if a∩γ = {x} then regarding a as a closed path in S based at x, any lift ã to the universal cover has its endpoints on distinct components γ̃_1 and γ̃_2 of the preimage of γ; this lift is then homotopic through arcs with endpoints on γ̃_1 and γ̃_2 to their common perpendicular α̃, which projects to α.
In S, α determines a bi-infinite geodesic α̂ with one end running out each funnel. We now cut S open along α̂ and glue in an “ϵ-strip” B (terminology as in <cit.>), which is isometric to the region in ℍ^2 between two ultraparallel geodesics whose common perpendicular (the core of B) has length ϵ, by an isometry from ∂ B to the two copies of α̂ in the cut-open S. This isometry should take the endpoints of the core of B to the midpoints of the copies of α in the two copies of α̂. Let us call the resulting surface S_0. There is a quotient map fS_0→S taking B to α̂, which is the identity outside B and on B collapses arcs equidistant from the core. By <cit.>, f is 1-Lipschitz. (In <cit.>, applying f is called “peeling an ϵ-strip”, and the respective roles of S_0 and S here are played by X̂ and Ŷ_B there.)
For i=1,2, let γ̂_i be the geodesic in S_0 that is freely homotopic to f^-1(γ_i), which is the union of the open geodesic arc γ_i-α with an equidistant arc from the core of B. By gluing the endpoints of the core of B to the midpoints of the copies of α we ensured that the γ̂_i have equal length, which depends on ϵ (we will expand on this assertion below). We now remove the funnels that the γ̂_i bound in S_0, then isometrically identify the resulting boundary components yielding a finite-area surface S_0 (the choice of isometry is not important here).
Let γ_0 be the quotient of the γ̂_i in S_0. The marking Σ→ S induces one from Σ to S_0 that takes 𝔠 to a simple closed curve with geodesic representative γ_0. The restriction of f also induces a map from S_0-γ_0 to a region in S. This is because the γ̂_i lie inside the preimage f^-1(S-γ), where S-γ refers to the closure (S-γ)∪γ_1∪γ_2 of S-γ inside its Nielsen extension. This in turn follows from the fact that the nearest-point retraction from S_0 to f^-1(S-γ), which is modeled on B∩(S_0-f^-1(S-γ)) by the orthogonal projection to a component of an equidistant locus to the core of B and on the rest of S_0-f^-1(S-γ) by the retraction of a funnel to its boundary, is distance-reducing.
Since f does not increase the length of paths, the same holds true for the induced map S_0-γ_0 → S. Let us call U the component of the r_1-thin part of S containing γ. We will show that ϵ can be chosen so that the γ̂_i, and hence also γ_0, have length at least 2r_0 and less than 2r_1. Since f is Lipschitz, and the f(γ̂_i) are freely homotopic in S to γ, it will follow that their images lie in U and thus that the image of S_0-γ_0 contains S-U.
It seems intuitively clear that the length of the γ̂_i increases continuously and without bound with ϵ, and that it limits to the length of γ as ϵ→ 0. If this is so then it is certainly possible to choose ϵ so that the γ̂_i, hence also their quotient in S_0, have length strictly between 2r_0 and 2r_1. We will thus spend a few lines describing the hyperbolic trigonometric calculations to justify the assertions above on the length of the γ̂_i, completing the proof of the non-separating case.
For i = 1,2, γ̂_i is freely homotopic in S_0 to the broken geodesic that is the union of the open geodesic arc γ_i-α with the geodesic arc contained in the ϵ-strip B that joins its endpoints. Dropping perpendiculars to γ̂_i from the midpoints of these two arcs divides the region in S_0 between γ̂_i and the broken geodesic into the union of two copies of a pentagon 𝒫 with four right angles. Let x be the length of the base of 𝒫, half the length of γ̂_i. The sides opposite the base have lengths δ/2 and z, where δ and 2z are the respective lengths of γ, and the arc in B joining the endpoints of γ-α. See Figure <ref>.
The side of length z is also a side of a quadrilateral 𝒬⊂ B with its other sides in the core of B (with length ϵ/2), the core's perpendicular bisector, and a copy of α (with length h). This quadrilateral has three right angles, and trigonometric formulas for such quadrilaterals (see <cit.>) describe z and the non-right angle θ in terms of ϵ/2 and h:
sinh z = cosh h sinh (ϵ/2), sinθ = cosh(ϵ/2)/cosh z.
The non-right angle of 𝒬 has measure π/2+θ, so from trigonometric laws for pentagons with four right angles <cit.>, we now obtain:
cosh x = cosh hsinh(ϵ/2)sinh(δ/2) + cosh(ϵ/2)cosh(δ/2)
Recall that x is half the length of the γ̂_i. And indeed we find that x increases with ϵ, continuously and without bound, and it limits to δ/2 (half the length of γ) as ϵ→ 0.
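These assertions about x follow from the displayed formula and can be confirmed numerically; a minimal sketch with illustrative values of h and δ:

```python
import math

def half_length(h, eps, delta):
    # half-length x of the deformed geodesic, from the pentagon formula above:
    # cosh x = cosh(h) sinh(eps/2) sinh(delta/2) + cosh(eps/2) cosh(delta/2)
    cx = (math.cosh(h) * math.sinh(eps / 2) * math.sinh(delta / 2)
          + math.cosh(eps / 2) * math.cosh(delta / 2))
    return math.acosh(cx)

h, delta = 1.3, 0.4
xs = [half_length(h, e, delta) for e in (1e-8, 0.1, 0.5, 1.0, 2.0, 4.0)]
assert abs(xs[0] - delta / 2) < 1e-6            # x -> delta/2 as eps -> 0
assert all(u < v for u, v in zip(xs, xs[1:]))   # x increases with eps, without bound
# so eps can be chosen with 2 r_0 < 2x < 2 r_1 whenever delta < 2 r_0 < 2 r_1
print("x(eps):", [round(v, 4) for v in xs])
```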
Now suppose 𝔠 is separating. In this case we choose geodesic arcs α_1 and α_2, each properly embedded in the surface obtained by cutting S along the geodesic representative γ of 𝔠 and meeting the boundary perpendicularly, one in each component. These may again be obtained from a simple closed curve a⊂ S, this time one which intersects γ twice, such that no isotopy of a reduces the number of intersections. Lifting the arcs a_1 and a_2 of a on either side of γ to the universal cover, separately homotoping each to a common perpendicular between lifts of γ, then pushing back down to S and cutting along γ yields the α_i.
After cutting S along γ, we take γ_i to be the geodesic boundary of the component containing α_i, for i=1,2. After constructing S as before, we cut it open along both α̂_1 and α̂_2 then, for each i=1,2, attach an ϵ_i-strip B_i along the boundary of the component containing γ_i. In the resulting disconnected surface S_0, for each i let γ̂_i be the geodesic in the free homotopy class of the union of the two arcs of γ_i-α_i with the two arcs in B_i joining the endpoints of these arcs that are on the same side of its core.
We now customize ϵ_1 so that γ̂_1 has length greater than 2r_0 but less than 2r_1, then customize ϵ_2 so that γ̂_2 has the same length as γ̂_1. Then as in the previous case we can cut off the funnels that the γ̂_i bound in S_0 and glue the resulting boundary components isometrically to produce a surface S_0 with a 1-Lipschitz map to f^-1(K), for K as before. Again what remains is to verify that the length of each γ̂_i increases continuously and without bound with ϵ_i, and that it approaches the length δ of γ as ϵ_i→ 0.
The extra complication in this case arises from the fact alluded to above, that each α_i intersects γ twice. And we cannot guarantee that the points of intersection are evenly spaced along γ. So for each i the funnel of S_0 bounded by γ̂_i, which contains the broken geodesic f^-1(γ_i), has only one reflective symmetry. Its fixed locus is the disjoint union of the perpendicular bisectors of the two components of γ_i-α_i, which divide the region between γ̂_i and f^-1(γ_i) into two isometric hexagons with four right angles. One such hexagon ℋ is pictured in Figure <ref>.
One side of ℋ has length x, half the length of γ̂_i as in the previous case. The side opposite this one has length 2z, for z as in (<ref>). (Here we are suppressing the dependence on i for convenience, but note that α_1 and α_2 may have different lengths, so “z” refers to different lengths in the cases i=1 and 2.) The angle at each endpoint of the side with length z is π/2+θ, where again θ is given by (<ref>) (with the same caveat as for z). The sides meeting this one are contained in γ_i-α_i, and calling their lengths a and b, we have that a+b = δ/2 is half the length of γ.
When ϵ is small, the geodesics containing the sides of lengths a and b meet at a point p in B. Their sub-arcs that join p to the endpoints of the side of length 2z form an isosceles triangle with two angles of π/2-θ. Applying some hyperbolic trigonometric laws and simplifying gives that the angle ψ at p and the length y of the two equal-length sides respectively satisfy:
cosψ = 2sinh^2 hsinh^2(ϵ/2) - 1, cosh y = cosh(ϵ/2)/√(1 - sinh^2 h sinh^2(ϵ/2)).
From these formulas we find in particular that this case holds when sinh(ϵ/2) < 1/sinh h. From the same law for pentagons as in the non-separating case we now obtain:
cosh x = sinh(y+a)sinh(y+b) - cosh(y+a)cosh(y+b)cosψ
= 1/2[(1-cosψ)cosh(2y+a+b) - (1+cosψ)cosh(a-b) ]
The second line above is obtained by applying the equation sinh xsinh y = 1/2(cosh(x+y) - cosh(x-y)) and its analog for hyperbolic cosine. Applying hyperbolic trigonometric identities and substituting from (<ref>), after some work we may rewrite the right side above as:
coshϵcosh (a+b) + sinhϵcosh hsinh(a+b) + sinh^2 hsinh^2(ϵ/2)(cosh(a+b) - cosh(a-b))
This makes it clear that x increases continuously with ϵ, and that x→ a+b as ϵ→ 0. Recalling that x is half the length of γ̂_i and a+b = δ/2 half that of γ_i, we have all but one of the desired properties of γ̂_i. But there is a “phase transition” at ϵ=2sinh^-1(1/sinh h). Note that as ϵ approaches this from below, from (<ref>) we have
sinh z →coth h, sinθ→cosh h/√(cosh^2 h + sinh^2 h).
From Proposition <ref> we now recall that the angle β at a finite vertex of a horocyclic ideal triangle and half the length r of its compact side satisfy sinβ = 1/cosh r. One easily verifies that the limit values of cosh z and of sin(π/2-θ) = cosθ are reciprocal, so geometrically, what is happening as ϵ→2sinh^-1(1/sinh h) is that p is sliding away from the core of B along its perpendicular bisector, reaching an ideal point at ϵ = 2sinh^-1(1/sinh h).
For ϵ>2sinh^-1(1/sinh h) we therefore expect the geodesics containing a and b to have ultraparallel intersection with B. Their common perpendicular is then one side of a quadrilateral in B with opposite side of length 2z. If d is the length of this common perpendicular and y the length of the two remaining sides, from standard hyperbolic trigonometric formulas <cit.> we have:
cosh d = 2sinh^2 hsinh^2(ϵ/2)-1, cosh y = sinh z/sinh(d/2) = cosh hsinh(ϵ/2)/√(sinh^2 hsinh^2(ϵ/2)-1).
We now apply the hyperbolic law of cosines for right-angled hexagons <cit.>, yielding:
cosh x = sinh(y+a)sinh(y+b) cosh d-cosh(y+a)cosh(y+b)
= 1/2[cosh (2y+a+b)(cosh d - 1) - cosh(a-b)(cosh d+1)]
Manipulating this formula along the lines of the previous case now establishes that x increases continuously and without bound with ϵ, and comparing the resulting formula with the previous one shows that its limits coincide as ϵ approaches 2sinh^-1(1/sinh h) from above and below. The lemma follows.
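Both regimes, and the matching of their limits at the transition value ϵ = 2sinh^-1(1/sinh h), can be checked numerically. Indeed the rewritten expression above turns out to agree with the hexagon-based formula on the far side of the transition as well, which the following sketch confirms (parameter values are illustrative):

```python
import math

def closed_form(h, eps, a, b):
    # cosh(x) via the unified rewritten expression in the text
    S2 = (math.sinh(h) * math.sinh(eps / 2)) ** 2
    return (math.cosh(eps) * math.cosh(a + b)
            + math.sinh(eps) * math.cosh(h) * math.sinh(a + b)
            + S2 * (math.cosh(a + b) - math.cosh(a - b)))

def regime_form(h, eps, a, b):
    # cosh(x) from the triangle (S < 1) or right-angled-hexagon (S > 1) computation
    S2 = (math.sinh(h) * math.sinh(eps / 2)) ** 2
    if S2 < 1:   # the geodesics meet at a point p inside the strip
        cos_psi = 2 * S2 - 1
        y = math.acosh(math.cosh(eps / 2) / math.sqrt(1 - S2))
        return 0.5 * ((1 - cos_psi) * math.cosh(2 * y + a + b)
                      - (1 + cos_psi) * math.cosh(a - b))
    d = math.acosh(2 * S2 - 1)   # ultraparallel case
    y = math.acosh(math.cosh(h) * math.sinh(eps / 2) / math.sqrt(S2 - 1))
    return 0.5 * (math.cosh(2 * y + a + b) * (math.cosh(d) - 1)
                  - math.cosh(a - b) * (math.cosh(d) + 1))

h, a, b = 0.8, 0.3, 0.7
eps_star = 2 * math.asinh(1 / math.sinh(h))   # the phase transition
for eps in (0.2, 0.5 * eps_star, 0.99 * eps_star, 1.01 * eps_star, 2 * eps_star):
    assert abs(closed_form(h, eps, a, b) - regime_form(h, eps, a, b)) < 1e-7
print("both regimes agree with the unified formula; limits match at eps*")
```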
Fix k∈ℕ and a surface Σ of hyperbolic type, and let r_1 = min{ϵ_0/2,ϵ_k} and r_0 = r_1/2, where ϵ_0 is the two-dimensional Margulis constant and ϵ_k is as in Lemma <ref>. For any S∈𝔗(Σ) with a geodesic of length less than 2r_0, we claim that there exists S'∈𝔗(Σ) with all geodesics of length at least 2r_0 and _k(S')≥_k(S).
Let 𝔠_1,…,𝔠_n be the curves in Σ whose geodesic representatives have length less than 2r_0 in S, and let U be the component of the r_1-thin part of S containing the geodesic representative γ_1 of 𝔠_1. Lemma <ref> supplies some S_1'∈𝔗(Σ), with a path-Lipschitz map f from S'_1-γ_1' to a region in S containing S-U, where the geodesic representative γ_1' of 𝔠_1 in S_1' has length between 2r_0 and 2r_1. A packing of S by k disks of radius _k(S) is entirely contained in S-U by Lemma <ref> and our choice of r_1. Therefore if x_1,…,x_k are the disk centers in S, points x_1',…,x_k' of S_1' mapping to the x_i under f are the centers of a packing of S_1' by disks of the same radius, since f does not increase the length of any based geodesic arc.
We note that for a point x∈ S-U where the injectivity radius is at least r_0, the injectivity radius is also at least r_0 at any x'∈ S_1' mapping to x: every geodesic arc based at x' that intersects γ_1' has length greater than the Margulis constant, and the length of every other arc is not increased by f. It follows that only 𝔠_2,…,𝔠_n may have geodesic length less than 2r_0 in S_1'. Iterating now yields the claim.
Now for a sequence (S_i) of surfaces in 𝔐(Σ) on which _k approaches its supremum, we use the claim to produce a sequence (S_i'), with _k(S_i') ≥_k(S_i) for each i, such that S_i' has systole length at least 2r_0 for each i. This sequence has a convergent subsequence in 𝔐(Σ), by Mumford's compactness criterion <cit.>, and since _k is continuous on 𝔐(Σ) it attains its maximum at the limit.
|
http://arxiv.org/abs/1701.07938v1 | 20170127041713 | Geometric interpretation of generalized distance-squared mappings of $\mathbb{R}^2$ into $\mathbb{R}^\ell$ $(\ell \geq 3)$ | [
"Shunsuke Ichiki"
] | math.DG | [
"math.DG",
"53A04, 57R45, 57R50"
] |
Generalized distance-squared mappings
are quadratic mappings of ℝ^m into ℝ^ℓ of a special type.
In the case that the matrix A
constructed from the coefficients of a generalized distance-squared mapping
of ℝ^2 into ℝ^ℓ (ℓ≥3) has full rank,
the mapping having a generic central point has the following properties:
in the case of ℓ=3, it has exactly one singular point,
while in the case of ℓ>3, it has no singular points.
Hence, in this paper, the reason why this dichotomy occurs is explained
by giving a geometric interpretation of these phenomena.
[2010]53A04, 57R45, 57R50
Geometric interpretation of generalized distance-squared mappings
of ℝ^2 into ℝ^ℓ (ℓ≥3)
Shunsuke Ichiki
=========================================================
§ INTRODUCTION
Throughout this paper, positive integers are expressed by i, j, ℓ and m.
In this paper, unless otherwise stated, all mappings belong to class C^∞.
Two mappings f : ℝ^m→ℝ^ℓ and
g:ℝ^m→ℝ^ℓ are said
to be 𝒜-equivalent if
there exist two diffeomorphisms h : ℝ^m→ℝ^m and
H : ℝ^ℓ→ℝ^ℓ satisfying
f=H∘ g ∘ h^-1.
Let p_i=(p_i1, p_i2, …, p_im) (1≤ i≤ℓ)
(resp., A=(a_ij)_1≤ i≤ℓ, 1≤ j≤ m)
be a point of ℝ^m
(resp., an ℓ× m matrix with non-zero entries).
Set p=(p_1,p_2,…,p_ℓ)∈ (ℝ^m)^ℓ.
Let G_(p, A):ℝ^m →ℝ^ℓ be the mapping
defined by
G_(p, A)(x)=(
∑_j=1^m a_1j(x_j-p_1j)^2,
∑_j=1^m a_2j(x_j-p_2j)^2,
…,
∑_j=1^m a_ℓ j(x_j-p_ℓ j)^2
),
where x=(x_1, x_2, …, x_m)∈ℝ^m.
The mapping G_(p, A) is called a generalized distance-squared mapping,
and the ℓ-tuple of points p=(p_1,… ,p_ℓ)∈ (ℝ^m)^ℓ
is called the central point
of the generalized distance-squared mapping G_(p,A).
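For concreteness, G_(p,A) is immediate to implement; a minimal sketch in Python (the array conventions and names are ours):

```python
import numpy as np

def G(p, A, x):
    # generalized distance-squared mapping G_{(p,A)}: R^m -> R^l
    # p: (l, m) array of central points; A: (l, m) coefficient matrix; x: point in R^m
    p, A, x = np.asarray(p), np.asarray(A), np.asarray(x)
    return (A * (x - p) ** 2).sum(axis=1)   # i-th entry: sum_j a_ij (x_j - p_ij)^2

# the ordinary distance-squared mapping D_p is the special case A = all ones
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(G(p, np.ones((3, 2)), np.array([0.5, 0.5])))
```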
For a given matrix A, a property of
generalized distance-squared mappings will be said to be
true for a generalized distance-squared mapping
having a generic central point
if there exists a subset Σ with Lebesgue measure zero
of (ℝ^m)^ℓ such that for any p ∈ (ℝ^m)^ℓ-Σ,
the mapping G_(p,A) has the property.
A distance-squared mapping D_p
(resp., Lorentzian distance-squared mapping
L_p) is the mapping G_(p,A)
satisfying that each entry of A is 1
(resp., a_i1=-1 and a_ij=1 (j 1)).
In <cit.> (resp., <cit.>),
a classification result on distance-squared mappings D_p
(resp., Lorentzian distance-squared mappings
L_p) is given. Moreover, in <cit.> (resp., <cit.>), a classification result on generalized distance-squared mappings G_(p,A) of ℝ^2 into ℝ^2
(resp., ℝ^m+1 into ℝ^ℓ (ℓ≥ 2m+1)) is given.
The important motivation for these investigations is as follows.
Height functions and distance-squared functions have been investigated
in detail so far,
and they are well known as a useful tool
in the applications of singularity theory to differential geometry
(for example, see <cit.>).
The mapping in which each component is a height function is
nothing but a projection.
Projections as well as height functions or distance-squared functions
have been investigated so far.
On the other hand,
the mapping in which each component
is a distance-squared function is a distance-squared mapping.
Besides, the notion of generalized distance-squared mappings is
an extension of the distance-squared mappings.
Therefore, it is natural to investigate generalized distance-squared mappings
as well as projections.
In <cit.>,
a classification result on
generalized distance-squared mappings of ℝ^2 into ℝ^2
is given.
If the rank of A is two,
the generalized distance-squared mapping
having a generic central point
is a mapping whose singular points are all fold points
except for exactly one cusp point,
and its singular set is a rectangular hyperbola.
Moreover, in <cit.>, a
geometric interpretation of a singular set of
generalized distance-squared mappings of ℝ^2 into ℝ^2 having a generic central point is also given in the case of rank A=2.
By the geometric interpretation,
the reason why the mappings have only one cusp point is explained.
On the other hand, in <cit.>,
a classification result on
generalized distance-squared mappings
of ℝ^m+1 into ℝ^ℓ (ℓ≥ 2m+1)
is given.
As the special case of m=1, we have the following.
Let ℓ be an integer satisfying ℓ≥ 3.
Let A=(a_ij)_1≤ i ≤ℓ, 1≤ j ≤ 2 be an ℓ× 2 matrix with
non-zero entries satisfying rank A=2.
Then, the following hold:
* In the case of ℓ=3,
there exists a proper algebraic subset Σ _A⊂ (ℝ^2)^3
such that for any p∈ (ℝ^2)^3-Σ _A,
the mapping G_(p,A) is 𝒜-equivalent to
the normal form of Whitney umbrella
(x_1,x_2)↦ (x_1^2, x_1x_2, x_2).
* In the case of ℓ>3,
there exists a proper algebraic subset Σ _A⊂ (ℝ^2)^ℓ
such that for any p∈ (ℝ^2)^ℓ-Σ _A,
the mapping G_(p,A) is 𝒜-equivalent to the inclusion
(x_1,x_2)↦ (x_1, x_2, 0,… ,0).
As described above, in <cit.>, a geometric interpretation of
generalized distance-squared mappings of ℝ^2 into ℝ^2
having a generic central point is given in the case that the matrix A is full rank.
On the other hand, in this paper,
a geometric interpretation of
generalized distance-squared mappings of ℝ^2 into ℝ^ℓ
(ℓ≥3)
having a generic central point is given in the case that the matrix A is full rank
(for the reason why we concentrate on the case that the matrix A is full rank,
see Remark <ref>).
Hence, by <cit.> and this paper,
geometric interpretations of generalized distance-squared mappings of the plane
having a generic central point in the case that the matrix A is full rank
are completed.
The main purpose of this paper is to give a geometric interpretation of
Theorem <ref>; namely, to answer the following question.
Let ℓ be an integer satisfying ℓ≥ 3.
Let A=(a_ij)_1≤ i ≤ℓ, 1≤ j ≤ 2 be an ℓ× 2 matrix with
non-zero entries satisfying rank A=2.
* In the case of ℓ=3,
why do generalized distance-squared mappings
G_(p,A):ℝ^2→ℝ^3 having a generic central point have only one singular point?
* On the other hand,
in the case of ℓ>3,
why do generalized distance-squared mappings
G_(p,A):ℝ^2→ℝ^ℓ having a generic central point have
no singular points?
§.§ Remark
In the case that the matrix A is not full rank ( rank A=1),
for any ℓ≥ 3,
the generalized distance-squared mapping
of ℝ^2 into ℝ^ℓ having a generic central point
is 𝒜-equivalent to only the inclusion (x_1,x_2)↦ (x_1, x_2, 0, … , 0) (see Theorem 3 in <cit.>).
On the other hand, in the case that the matrix A is full rank ( rank A=2),
the phenomenon in the case of ℓ=3 is completely different from the phenomenon in
the case of ℓ>3 (see Theorem <ref>).
Hence, we concentrate on the case that the matrix A is full rank.
In Section <ref>,
some assertions and definitions are prepared for
answering Question <ref>, and
the answer to Question <ref> is stated.
In Section <ref>, the proof of a lemma of Section <ref> is given.
§ THE ANSWER TO QUESTION <REF>
Firstly, in order to answer Question <ref>,
some assertions and definitions
are prepared.
By Theorem <ref>, it is clearly seen that the following assertion holds.
The assertion is important for giving an geometric interpretation.
Let ℓ be an integer satisfying ℓ≥ 3.
Let A=(a_ij)_1≤ i ≤ℓ, 1≤ j ≤ 2
(resp., B=(b_ij)_1≤ i ≤ℓ, 1≤ j ≤ 2) be an ℓ× 2 matrix with
non-zero entries
satisfying rank A=2 (resp., rank B=2).
Then, there exist proper algebraic subsets Σ_A and Σ_B of
(ℝ^2)^ℓ such that for any p ∈ (ℝ^2)^ℓ-Σ_A
and for any q ∈ (ℝ^2)^ℓ-Σ_B, the mapping
G_(p,A) is 𝒜-equivalent to the mapping G_(q, B).
Let F_p:ℝ^2→ℝ^ℓ (ℓ≥3) be the mapping
defined by
F_p(x_1,x_2)
= (a(x_1-p_11)^2+b(x_2-p_12)^2, (x_1-p_21)^2+(x_2-p_22)^2, … , (x_1-p_ℓ 1)^2+(x_2-p_ℓ 2)^2),
where 0<a<b and p=(p_11,p_12, … ,p_ℓ 1,p_ℓ 2).
Remark that the mapping F_p is the generalized distance-squared mapping G_(p,B),
where
B=
(
[ a b; 1 1; ⋮ ⋮; 1 1; ]).
Since the rank of the matrix B is two,
by Corollary <ref>, in order to answer
Question <ref>, it is sufficient to answer the following question.
Let ℓ be an integer satisfying ℓ≥ 3.
* In the case of ℓ=3,
why do the mappings F_p : ℝ^2→ℝ^3 having a generic central point have only one singular point?
* On the other hand,
in the case of ℓ>3,
why do the mappings
F_p : ℝ^2→ℝ^ℓ having a generic central point have
no singular points?
The mapping F_p=(F_1,… ,F_ℓ) determines ℓ-foliations
𝒞_p_1(c_1), … ,𝒞_p_ℓ(c_ℓ)
in the plane defined by
𝒞_p_i(c_i)={(x_1, x_2)∈ℝ^2 | F_i(x_1,x_2)=c_i},
where c_i≥ 0 (1≤ i ≤ℓ) and
p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ.
For a given central point p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ,
a point q∈ℝ^2 is a singular point of the mapping F_p
if and only if the ℓ-foliations 𝒞_p_i(c_i) (1≤ i ≤ℓ)
defined by the point p are tangent at the point q,
where (c_1,… ,c_ℓ)=F_p(q).
For a given central point p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ, in the case that a point q∈ℝ^2 is a singular point of
the mapping F_p, there may exist an integer i such that the foliation 𝒞_p_i(c_i) is merely a point, where c_i=F_i(q).
However, the following lemma shows that
this degenerate phenomenon occurs only for central points in a set of Lebesgue measure zero
(for the proof of Lemma <ref>, see Section <ref>).
Let ℓ be an integer satisfying ℓ≥ 3.
Then, there exists a proper algebraic subset Σ⊂ (ℝ^2)^ℓ such that
for any central point p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ-Σ,
if a point q∈ℝ^2 is a singular point of the mapping F_p, then
the ℓ-foliations 𝒞_p_1(c_1) and 𝒞_p_i(c_i) (2≤ i ≤ℓ) are an ellipse and (ℓ-1)-circles, respectively, where (c_1,… ,c_ℓ)=F_p(q).
§.§ Answer to Question <ref>
As described above, in order to answer Question <ref>,
it is sufficient to answer Question <ref>.
* We will answer (1) of Question <ref>.
The phenomenon that the mapping F_p:ℝ^2→ℝ^3 having
a generic central point has only one singular point
can be explained by the following geometric interpretation. Namely,
constants c_i ≥0 (i=1,2,3) such that three foliations
𝒞_p_1(c_1), 𝒞_p_2(c_2) and 𝒞_p_3(c_3) defined by the central point p=(p_1, p_2, p_3)∈ (ℝ^2)^3 are tangent are uniquely determined,
and the tangent point is also unique.
Moreover, in the case, remark that by Lemma <ref>,
the three foliations 𝒞_p_1(c_1), 𝒞_p_2(c_2) and 𝒞_p_3(c_3) defined by almost all (in the sense of Lebesgue measure) (p_1,p_2,p_3)∈ (ℝ^2)^3 are an ellipse and two circles, respectively.
Furthermore, by the geometric interpretation,
we can also see the location of the singular point of the mapping F_p
having a generic central point
(for example, see Figure <ref>).
* We will answer (2) of Question <ref>.
The phenomenon that the mapping F_p:ℝ^2→ℝ^ℓ (ℓ >3) having
a generic central point has no singular points
can be explained by the following geometric interpretation. Namely,
for any constants c_i≥0 (1≤ i ≤ℓ),
ℓ-foliations 𝒞_p_1(c_1), … , 𝒞_p_ℓ(c_ℓ) defined by the central point p=(p_1, … , p_ℓ)∈ (ℝ^2)^ℓ are not tangent at any points
(for example, see Figure <ref>); see also the numerical sketch following this list.
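The singular points of F_p are exactly the common zeros of the 2×2 minors of the Jacobian matrix JF_p, and both answers above can be illustrated numerically by locating such zeros. In the sketch below, the random model for p, the tolerances, and the helper names are our choices:

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
a, b = 1.0, 2.0                      # ellipse coefficients, 0 < a < b
p = rng.normal(size=(4, 2))          # random centers; the first 3 serve the l = 3 case

def minors(x, centers):
    # all 2x2 minors of JF_p at x; they vanish together exactly at singular points
    rows = [[a * (x[0] - centers[0][0]), b * (x[1] - centers[0][1])]]
    rows += [[x[0] - c[0], x[1] - c[1]] for c in centers[1:]]
    n = len(rows)
    return [rows[i][0] * rows[j][1] - rows[i][1] * rows[j][0]
            for i in range(n) for j in range(i + 1, n)]

# l = 3: two minor equations in two unknowns give isolated candidates; keep those
# where every minor vanishes (this discards the trivial zero row at x = p_1).
roots = set()
for x0 in rng.normal(size=(200, 2)):
    sol, _, ok, _ = fsolve(lambda x: minors(x, p[:3])[:2], x0, full_output=True)
    if ok == 1 and max(abs(m) for m in minors(sol, p[:3])) < 1e-7:
        roots.add(tuple(sol.round(5)))
print("l = 3 singular points found:", roots)      # expect exactly one point

# l = 4: six minor equations in two unknowns are generically inconsistent.
hits = 0
for x0 in rng.normal(size=(200, 2)):
    sol, _, ok, _ = fsolve(lambda x: minors(x, p)[:2], x0, full_output=True)
    if ok == 1 and max(abs(m) for m in minors(sol, p)) < 1e-7:
        hits += 1
print("l = 4 points where all minors vanish:", hits)   # expect 0
```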
§.§ Remark
The geometric interpretation (the answer to Question <ref>) has one more advantage.
By the interpretation, we get the following assertion from the viewpoint of the contacts amongst one ellipse and some circles.
Let a, b be real numbers satisfying 0<a<b.
* There exists a proper algebraic subset Σ of (ℝ^2)^3
such that for any (p_1,p_2,p_3)∈ (ℝ^2)^3-Σ,
constants c_i>0 (i=1,2,3) such that one ellipse a(x_1-p_11)^2+b(x_2-p_12)^2=c_1 and two circles (x_1-p_i1)^2+(x_2-p_i2)^2=c_i (i=2, 3) are
tangent are uniquely determined, where p_i=(p_i1, p_i2). Moreover, the tangent point
is also unique.
* On the other hand, in the case of ℓ>3,
there exists a proper algebraic subset Σ of (ℝ^2)^ℓ
such that for any (p_1,…, p_ℓ)∈ (ℝ^2)^ℓ-Σ,
for any c_i>0 (i=1,… ,ℓ),
the one ellipse a(x_1-p_11)^2+b(x_2-p_12)^2=c_1 and the (ℓ-1)-circles
(x_1-p_i1)^2+(x_2-p_i2)^2=c_i (i=2,… ,ℓ) are not tangent
at any points, where p_i=(p_i1, p_i2).
§ PROOF OF LEMMA <REF>
The Jacobian matrix of the mapping F_p at (x_1,x_2) is the following.
JF_p(x_1,x_2) =2(
[ a(x_1-p_11) b(x_2-p_12); ⋮ ⋮; x_1-p_ℓ 1 x_2-p_ℓ 2 ]).
Let Σ_i be the subset of (ℝ^2)^ℓ consisting of those
p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ such that
p_i∈ℝ^2 is a singular point of F_p (1≤ i ≤ℓ).
Namely,
for example, Σ_1 is the subset of (ℝ^2)^ℓ consisting of
p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ satisfying
rank (
[ 0 0; p_11-p_21 p_12-p_22; ⋮ ⋮; p_11-p_ℓ 1 p_12-p_ℓ 2 ])<2.
By ℓ≥ 3, it is clearly seen that Σ_1 is a proper algebraic subset of
(ℝ^2)^ℓ.
Similarly, for any i (2≤ i ≤ℓ), we see that Σ_i is also a proper algebraic subset
of
(ℝ^2)^ℓ.
Set Σ=∪_i=1^ℓΣ_i. Then, Σ is also a proper algebraic subset
of (ℝ^2)^ℓ.
Let p=(p_1,… ,p_ℓ)∈ (ℝ^2)^ℓ-Σ be a central point, and let q be
a singular point of the mapping F_p defined by the central point.
Then, suppose that there exists an integer i such that
the foliation 𝒞_p_i(c_i) is not an ellipse or a circle,
where c_i=F_i(q) (F_p=(F_1, … ,F_ℓ)).
Then, we get c_i=0. Hence, we have q=p_i.
This contradicts the assumption p∈ (ℝ^2)^ℓ-Σ.
§ ACKNOWLEDGEMENTS
The author is grateful to Takashi Nishimura for his kind advices.
The author is supported by JSPS KAKENHI Grant Number 16J06911.
[CS] J. W. Bruce and P. J. Giblin, Curves and Singularities (second edition), Cambridge University Press, Cambridge, 1992.
[GG] M. Golubitsky and V. Guillemin, Stable mappings and their singularities, Graduate Texts in Mathematics 14, Springer, New York, 1973.
[D] S. Ichiki and T. Nishimura, Distance-squared mappings, Topology Appl., 160 (2013), 1005–1016.
[L] S. Ichiki and T. Nishimura, Recognizable classification of Lorentzian distance-squared mappings, J. Geom. Phys., 81 (2014), 62–71.
[G2] S. Ichiki and T. Nishimura, Generalized distance-squared mappings of ℝ^n+1 into ℝ^2n+1, Contemporary Mathematics, Amer. Math. Soc., Providence RI, 675 (2016), 121–132.
[G1] S. Ichiki, T. Nishimura, R. Oset Sinha and M. A. S. Ruas, Generalized distance-squared mappings of the plane into the plane, Adv. Geom., 16 (2016), 189–198.
|
http://arxiv.org/abs/1701.07922v2 | 20170127015154 | Dissipation induced quantum transport on a finite one-dimensional lattice | [
"Roland Cristopher F. Caballar",
"Bienvenido M. Butanas Jr.",
"Vladimir P. Villegas",
"Mary Aileen Ann C. Estrella"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
[email protected]
National Institute of Physics, College of Science, University of the Philippines, Diliman, 1101 Quezon City
National Institute of Physics, College of Science, University of the Philippines, Diliman, 1101 Quezon City
Department of Physics, Central Mindanao University, University Town, Musuan, Maramag, Bukidnon, 8710 Philippines
National Institute of Physics, College of Science, University of the Philippines, Diliman, 1101 Quezon City
National Institute of Physics, College of Science, University of the Philippines, Diliman, 1101 Quezon City
Manila Business Consulting Inc., Unit 703 Loyola Heights Condominium, Loyola Heights, Quezon City
We construct a dissipation induced quantum transport scheme by coupling a finite lattice of N two-level systems to an environment with a discrete number of energy levels. With the environment acting as a reservoir of energy excitations, we show that the coupling between the system and the environment gives rise to a mechanism for excited states of the system to be efficiently transported from one end of the lattice to another. We also show that we can adjust the efficiency of the quantum transport scheme by varying the spacing between energy levels of the system, by decreasing the ground state energy level of the environment, and by weakening the coupling between the system and the environment. A possible realization of this quantum transport scheme using ultracold atoms in a lattice coupled to a reservoir of energy excitations is briefly discussed at the end of this paper.
Dissipation induced quantum transport on a finite one-dimensional lattice
Mary Aileen Ann C. Estrella
December 30, 2023
=========================================================================
§ INTRODUCTION
Dissipation in quantum mechanics has, in recent times, attracted an increasing amount of interest. This is primarily due to its role in inducing decoherence in a quantum system which is coupled to an environment <cit.>. In this context, dissipation can be viewed as a bane in quantum mechanics, and efforts have been made to reduce dissipative effects due to correlation between a quantum system and the surrounding environment <cit.>.
However, recent work has shown that dissipation can be used as a resource in quantum mechanics, wherein it drives the evolution of a quantum system towards a unique steady state <cit.>. In particular, if a system is coupled to an environment in the manner of an open quantum system, then with the proper choice of environment, and by adjusting the strength of coupling between the system and the environment, the resulting time evolution equation for the system will have a unique steady state. The existence of this unique steady state can then be attributed to dissipative effects due to the coupling between the system and the environment. As such, one can then use dissipation as a resource in quantum computation and quantum state preparation, as shown in Refs. <cit.>.
Aside from quantum computation and quantum state preparation, dissipation can also be used as a resource in quantum state transport. An illustration of how this can be done was carried out by Rebentrost et al. <cit.>, wherein they considered an interacting N-body system in the presence of a single excitation. They were able to show that by coupling this system with a fluctuating environment, the quantum transport of excited energy states can be enhanced, with the efficiency of the process dependent on the energy mismatch between states and the hopping terms in the system Hamiltonian. This quantum transport scheme has been applied to the analysis of electronic energy transfer in photosynthetic structures <cit.>, in non-Markovian open quantum systems <cit.>, as well as in the analysis of quantum transport in various systems <cit.>.
Another possible mechanism for efficient dissipation-assisted quantum transport was provided in Refs. <cit.>, which makes use of open quantum random walks. In this mechanism, a system with internal and spatial degrees of freedom is coupled to an environment, with the coupling between the system and the environment causing the system to undergo a quantum random walk. Open quantum random walks have been shown to obey a central limit theorem <cit.>, which implies that quantum systems undergoing open quantum random walks will evolve towards a unique steady state.
Having shown that dissipation can be treated as a resource in quantum mechanics, and that it can be used to enhance and create efficient quantum transport mechanisms, we then ask if it is possible to create other dissipation induced quantum transport mechanisms. In this paper, we show that it is possible by considering a system comprised of a lattice of two-level systems coupled to an environment with a discrete number of energy levels. We show that if the system and the environment are weakly coupled to each other, it is possible to create an efficient dissipation induced quantum transport scheme for excited states of the system from one end of the lattice to the other. Maximum efficiency can be achieved if the number of energy levels present in the environment is roughly of the same order of magnitude as the system's, if the spacing between energy levels of the system is relatively large and if the ground state energy of the environment is much less than that of the system's.
The rest of the paper is divided into the following sections. Section 2 gives a general description of the system and environment in terms of their respective Hamiltonians, and specifies as well the form of the Hamiltonian describing their interactions. Section 3 outlines the derivation of the master equation describing the dynamics of the system, while Section 4 provides a description of the dynamics of the system by examining the properties of the numerical solution of the master equation of the system. We summarize our results in Section 6.
§ A LATTICE OF TWO-LEVEL SYSTEMS COUPLED TO AN ENVIRONMENT
The system S considered in this paper consists of a one-dimensional lattice of two-level systems. The Hilbert space corresponding to this system is given as ℋ_S=ℋ_S,int⊗ℋ_S,pos, where ℋ_S,int and ℋ_S,pos are the subspaces of the system's Hilbert space corresponding to the system's energy and position, respectively. In this Hilbert space, the system's Hamiltonian has the following form:
H_S=∑_n=1^2∑_j=1^Nε_nâ^†_n â_n⊗|j⟩⟨j|,
where |j⟩ is a basis vector in ℋ_S,pos corresponding to node j in the lattice, which is finite, 1-dimensional and has a total of N nodes. Also, the operator â_n is the annihilation operator defined in the Hilbert space ℋ_S,int corresponding to the energy level ε_n for the system's internal degrees of freedom.
Let the system S be coupled to an environment B, which has a Hamiltonian H_B whose explicit form is defined in a Hilbert space ℋ_B as
H_B=∑_k=1^ME_kb̂^†_kb̂_k,
where b̂_k is the annihilation operator, defined in ℋ_B, corresponding to the energy level E_k of the environment B.
To describe the interaction between the system and the environment, we assume that the coupling between them is linear in operators defined on ℋ_S and ℋ_B. Furthermore, we let those operators be â_n⊗|j⟩⟨j| and b̂^†_k, which are defined in ℋ_S and ℋ_B respectively. Then the interaction between the system and environment is described by the following Hamiltonian <cit.>:
H_SB= ∑_k∑_n=1^2∑_j=1^Ng_nkjâ_n⊗|j+1⟩⟨j|⊗b̂^†_k
+g^*_nkjâ^†_n⊗|j⟩⟨j+1|⊗b̂_k,
where g_nkj is the coupling constant describing the strength of coupling between the system S and the environment B. We evolve the coupling Hamiltonian over time, making use of the evolution equation
H_SB(t)=exp(-i/ħ(H_S+H_B)t)H_SBexp(i/ħ(H_S+H_B)t).
In doing so, we obtain the following expression:
H_SB(t)=∑_n,k∑_j=1^N e^-i/ħ(ε_n-E_k)tg_nkjâ_n⊗|j+1⟩⟨j|⊗b̂^†_k
+e^i/ħ(ε_n-E_k)tg^*_nkjâ^†_n⊗|j⟩⟨j+1|⊗b̂_k.
From the form of H_SB, we can then see that the coupling between the system and the environment induces a form of quantum transport of excitations in the system from one lattice site to another, and in doing so either raises or lowers the energy of the environment. This quantum transport process of excitations can be described more explicitly using a quantum master equation for the system, which will be derived in the next section.
§ MASTER EQUATION FOR THE LATTICE OF TWO-LEVEL SYSTEMS COUPLED TO AN ENVIRONMENT
Allowing the system to interact with the environment means that it is now an open quantum system, whose dynamics are described by a master equation specifying the time evolution of the system's density matrix. This master equation can be obtained from the integral form of the von Neumann equation in the interaction picture, together with the Born approximation and the assumption that the initial state is a product state <cit.>. In doing so, we obtain the Redfield equation, whose general form is given by
d/dtρ_S(t)=-∫_0^tds[𝐇_SB(t),[𝐇_SB(s),ρ_S(t)⊗ρ_B]].
Here, ρ_S(t) is the density matrix describing the system S. We make use of the Redfield equation rather than the Born-Markov equation to describe the dynamics of the open quantum system because we are interested in determining the dynamics of the system over intermediate timescales, rather than over long timescales. Such timescales are more realistic and experimentally realizable, which implies that the resulting master equation will be of greater use in experimental investigations of the open quantum system. In deriving this equation, we make use of the Born approximation, which states that the total density matrix of the system coupled to the environment has the form
ρ(t)=∑_j=1^Nρ_S(t)⊗|j⟩⟨j|⊗ρ_B,
where ρ_S(t) and ρ_B are the density matrices, defined in the Hilbert spaces ℋ_S,int and ℋ_B, respectively, describing the state of the internal degrees of freedom of the system and of the environment at node j of the lattice and at the instant of time t.
Now for a system undergoing a quantum walk, the density matrix ρ_S(t) describing the system at time t can be written as
ρ_S(t)=∑_n,jρ_n,j(t),
where ρ_n,j(t) describes the state of the system at energy level n and node j in the lattice. Inserting equations <ref> and <ref> into equation <ref>, we then obtain the following expression:
∑_j,nd/dtρ_n,j(t)⊗|j⟩⟨j|=
∑_j=1^N∑_n(-iΓ_nj(t)[â_nâ^†_n,ρ_n,j(t)]
+γ_nj(t)(2â_nρ_n,j+1(t)â^†_n-{â^†_nâ_n,ρ_n,j(t)}))⊗|j⟩⟨j|.
Details about the derivation of this master equation are given in the appendix of this paper. Here, the coefficients Γ_nj(t) and γ_nj(t) have the following form:
Γ_nj(t)=∑_kħ/ϵ_n-E_k|g_kn|^2(1-cos(ϵ_n-E_k/ħt)),
γ_nj(t)=∑_kħ/ϵ_n-E_k|g_kn|^2sin(ϵ_n-E_k/ħt).
The resulting master equation is block diagonal in the system Hilbert space ℋ_S=ℋ_S,int⊗ℋ_S,pos just like the density matrix of the system. However, the Lindblad operator of the master equation, given by
ℒ_n,j(ρ_n,j(t))
=γ_nj(t)(2â_nρ_n,j+1(t)â^†_n-{â^†_nâ_n,ρ_n,j(t)}),
involves two internal states of the system, namely the internal state of the system at lattice site j+1 and the internal state of the system at lattice site j. This, together with the time dependence of the coefficients γ_nj(t) of the Lindblad operator, signifies that ℒ_n,j is not in Lindblad form, which implies that the time evolution of the internal states of the system is non-Markovian. We will explore the consequences of this non-Markovian behavior of the system in the next section as we examine the dynamics of the system by numerically solving the master equation.
§ DYNAMICS OF THE SYSTEM
Having derived the master equation describing the dynamics of the system, we now solve it numerically to enable us to analyze the dynamical behavior of the system. In doing so, we will also be able to examine the quantum transport mechanism described by the system-environment interaction Hamiltonian given in section 2, and determine the efficiency of such a process. In our analysis, we make use of natural units.
For the initial state, we assume that it is localized at node j=1, and it is then given as ρ(0)=ρ_S(0)⊗|1⟩⟨1|⊗ρ_B, where ρ_S(0) is the density matrix describing the initial internal state of the system. Furthermore, we assume that the system and the environment are weakly coupled to each other, i.e., γ_nk<<1 and Γ_nk<<1. In all computations, we assume that the initial internal state ρ_S(0) of the system localized at node j=1 is the ground energy state of the system. To analyze the dynamics of the system, we then compute the probability that node |j⟩ is occupied at time t as follows:
P_j(t)=|Tr(⟨j|ρ_S(t)|j⟩)|,
where the trace is taken over the internal state ρ(t) occupying node j at time t.
By computing the occupation probability P_j(t) for various nodes, we find that an optimal quantum transport scheme results from the coupling of the two-level system to the environment if the spacing between energy levels in the system and the environment is large (Δ_E = E_e-E_g>>E_g), if the number of energy levels in the environment is small, and if the ground state energy of the environment is much smaller than the ground state energy of the system (E_g,B<<E_g,S). This is shown in Fig. <ref>, which is a plot of the occupation probability P_j(t) taken over all nodes at each time t. The plot shows that initially the state is localized at the origin, but as it evolves, the probability that it will be found at the end node increases while the probability that it is found at any other node decreases. Eventually, the state is localized at the end node of the lattice at an instant of time t<N, where N is the number of nodes in the lattice. Hence, if we define the speed of transport of the state as
v_N=t_max,N/N,
where t_max,N is the instant of time when the occupation probability at the end node of the lattice reaches its maximum value, then efficient quantum transport of a state from one end of the lattice to the other requires v_N<1. This is exactly what is observed for the quantum transport scheme due to the coupling between the system and the environment considered in this paper, under the conditions specified above.
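As a concrete illustration, the following minimal sketch propagates the site-resolved master equation with a fourth-order Runge-Kutta step and evaluates P_j(t) and v_N. The choice of â_1 = σ̂^-, â_2 = σ̂^+ as the two system operators, the energies, and the couplings are illustrative assumptions rather than values fixed by the text (natural units, ħ = 1).

```python
import numpy as np

# Sketch: propagate the site-resolved master equation and extract P_j(t), v_N.
# Assumptions: two system operators a_1 = sigma^-, a_2 = sigma^+ with a common
# transition energy eps; two discrete environment levels; weak couplings.
N_sites = 10
sm = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # sigma^-
sp = sm.conj().T                                          # sigma^+
ops = [sm, sp]

eps = 1.0                               # system transition energy (assumed)
E_env = np.array([0.05, 0.10])          # two environment levels (assumed)
g = 0.05 * np.ones_like(E_env)          # weak couplings, |g| << 1

def Gamma(t):
    d = eps - E_env
    return np.sum(np.abs(g)**2 / d * (1.0 - np.cos(d * t)))

def gamma(t):
    d = eps - E_env
    return np.sum(np.abs(g)**2 / d * np.sin(d * t))

def rhs(t, rho):
    """rho has shape (N_sites, 2, 2): one internal density matrix per node."""
    out = np.zeros_like(rho)
    for a in ops:
        ad = a.conj().T
        for j in range(N_sites):
            r_j, r_next = rho[j], rho[(j + 1) % N_sites]   # |N+1> = |1>
            comm = a @ ad @ r_j - r_j @ a @ ad
            diss = 2.0 * a @ r_next @ ad - (ad @ a @ r_j + r_j @ ad @ a)
            out[j] += -1j * Gamma(t) * comm + gamma(t) * diss
    return out

def rk4(t, rho, dt):
    k1 = rhs(t, rho)
    k2 = rhs(t + dt / 2, rho + dt / 2 * k1)
    k3 = rhs(t + dt / 2, rho + dt / 2 * k2)
    k4 = rhs(t + dt, rho + dt * k3)
    return rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.zeros((N_sites, 2, 2), dtype=complex)
rho[0, 0, 0] = 1.0                      # ground state localized at node |1>

dt, t_final = 0.01, 8.0
times = np.arange(0.0, t_final, dt)
P_end = np.empty(len(times))
for i, t in enumerate(times):
    P_end[i] = abs(np.trace(rho[-1]))   # occupation probability of node |N>
    rho = rk4(t, rho, dt)

v_N = times[np.argmax(P_end)] / N_sites
print("v_N =", v_N)                     # transport is efficient if v_N < 1
```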
The effect of decreasing the spacing between the ground and excited energy levels of the system is shown in Fig. <ref>. In particular, we see that for a relatively large energy gap between the ground state and excited state of the system, the coupling between the system and the environment drives the system towards a steady state which occupies the end node of the lattice, beginning at an instant of time less than the number of nodes in the lattice. This signifies that the transport process due to the coupling between the system and the environment is efficient, since the amount of time it takes for the end node of the lattice to be occupied is less than the total number of nodes in the lattice. On the other hand, the smaller the energy gap between the ground state and excited state of the system, the smaller the maximum value of the probability that the end node of the lattice is occupied.
Furthermore, as Fig. <ref> shows, as the energy gap between the ground and excited states of the system decreases, the more likely that the steady state of the system will not be one that occupies the end node of the lattice. This implies that decreasing the energy gap between the system's energy levels also decreases the efficiency of the quantum transport process due to the coupling between the system and the environment.
As for the effect of increasing the number of energy levels available in the environment, Fig. <ref> shows that increasing the number of energy levels decreases the efficiency of the quantum transport process due to the coupling between the system and the environment. In particular, as the number of energy levels in the environment increases, the maximum of the occupation probability of the lattice endpoint occurs at an earlier time. However, the maximum value of this probability also decreases. This implies a decrease in the efficiency of the quantum transport process, since there is a nonzero probability that nodes other than the endpoint of the lattice are occupied.
While the number of energy levels present in the environment has an effect on the efficiency of the quantum transport scheme, the spacing between energy levels in the environment apparently has no effect whatsoever. Therefore, an environment with a small number of discrete energy levels of arbitrary spacing from each other, coupled to a lattice of two-level systems, will create an efficient quantum transport scheme from one end of the lattice to the other.
Finally, there is the question of exactly which state is transported to the end node of the lattice. To determine this, we compute the trace distance between the state at the end of the lattice, given by the density matrix ρ_N(t)=ρ(t)⊗|N⟩⟨N|, and a desired final state ρ_N,f=ρ_f⊗|N⟩⟨N|. The trace distance is defined as
T(ρ_N(t),ρ_N,f)=1/2Tr(√((ρ_N(t)-ρ_N,f)^2))=1/2∑_j|λ_j|,
where λ_j are the eigenvalues of the Hermitian matrix ρ_N(t)-ρ_N,f. As shown in Fig. <ref>, the trace distance drops off to zero if ρ_N,f is an excited energy state of the system, while it rises to one if ρ_N,f is a ground energy state of the system. This signifies that if the initial state of the system is the ground energy state, then the state transported to the end of the lattice is the excited energy state of the system. This is further illustrated in Fig. <ref>, wherein the occupation probability at the end of the lattice reaches its maximum value at the same instant that the trace distance between ρ_N(t) and ρ_N,f equals zero, signifying that the state transported to the end of the lattice is indeed an excited energy state of the system.
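Numerically, the trace distance is a few lines of linear algebra (sketch; the basis ordering of ground and excited states is an assumption):

```python
import numpy as np

def trace_distance(rho_a, rho_b):
    """T = (1/2) sum_j |lambda_j|, lambda_j eigenvalues of rho_a - rho_b."""
    lam = np.linalg.eigvalsh(rho_a - rho_b)   # Hermitian difference
    return 0.5 * np.sum(np.abs(lam))

rho_g = np.diag([1.0, 0.0])   # ground state (assumed ordering)
rho_e = np.diag([0.0, 1.0])   # excited state
print(trace_distance(rho_g, rho_e))   # -> 1.0 for orthogonal states
```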
We note that the calculations and results obtained up to this point in this section depend on the initial state of the system being in the ground energy state. If, however, the initial state is not the ground state, then the efficiency of the quantum transport scheme will be greatly affected. In particular, as shown in Fig. <ref>, the probability that the endpoint of the lattice will be occupied decreases if the initial state localized at the beginning of the lattice has the form
ρ_S(0)=αρ_g+βρ_e,
where ρ_g and ρ_e are the density matrices corresponding to the ground and excited states of the system. In fact, the maximum value of P_N will be equal to the coefficient α in the expression for the initial state ρ_S(0) as given by Eq. <ref>.
§ COMPARISON OF DYNAMICS OF THE SYSTEM WITH THAT OF A SIMILAR MARKOVIAN SYSTEM
Having analyzed the dynamics of the open quantum system, we now compare it to the dynamics of a similar system whose master equation was obtained using the Born-Markov approximation. In this approximation, the timescale over which the environment correlations decay is much shorter than the timescale over which the system varies. The system, environment and interaction Hamiltonians of this open quantum system are given by Eqs. <ref>, <ref> and <ref> respectively, and the derivation of its master equation follows lines similar to those given in the Appendix. However, the point of departure in deriving the open quantum system's master equation comes after Eq. <ref>, because the double commutator is integrated over a semi-infinite time interval [0,∞) instead of over a finite time interval [0,t]. In doing so, we obtain the following form of the master equation for this system:
∑_j,nd/dtρ_n,j(t)⊗|j⟩⟨j|=
∑_j=1^N∑_n(-iΓ_nj[â_nâ^†_n,ρ_n,j(t)]⊗|j⟩⟨j|.
.+γ_nj(2â_nρ_n,j+1(t)â^†_n-{â^†_nâ_n,ρ_n,j(t)}))⊗|j⟩⟨j|.
Here, the coefficients Γ_nj and γ_nj are time-independent, and have the explicit form
Γ_nj=∑_k|g_kn|^2∫_0^∞ds sin(ϵ_n-E_k/ħs),
γ_nj=∑_k|g_kn|^2∫_0^∞ds cos(ϵ_n-E_k/ħs).
We note that unlike in the master equation given by Eq. <ref>, the coefficients of the master equation given by Eq. <ref> are time-independent. As in the previous section, to analyze the dynamics of the system described by Eq. <ref>, we compute the probability, given by Eq. <ref>, that node |j⟩ is occupied at time t when the system's initial state is an excited energy state localized at node |1⟩, the starting point of the one-dimensional lattice in which the system moves. We then compare this to the corresponding probability for the open quantum system described by Eq. <ref>, whose initial state is also an excited energy state localized at node |1⟩. Our results are summarized in Fig. <ref>. The figure shows that for both systems the occupation probability of the node |N⟩, the endpoint of the one-dimensional lattice, attains its maximal value at an instant of time t<N; these maximal values imply that both systems can be used for efficient quantum transport of excited states from one end of a one-dimensional lattice to the other. However, we also find that the system described by Eq. <ref> attains this maximum much earlier than the system described by Eq. <ref>. This suggests that if one uses the Born-Markov approximation to obtain the master equation of the open quantum system considered in this paper, the resulting dissipation-assisted transport of an excited state from one end of the lattice to the other appears even more efficient than in the description that does not invoke the Born-Markov approximation. We note, however, that the Redfield equation is more general than the Born-Markov approximation, since the former does not impose any requirements on the timescales over which the system and the environment vary. As such, the Born-Markov approximation provides a qualitative description of the efficiency of the dissipation-induced quantum transport scheme considered in this paper, while the Redfield equation allows us to consider a wider class of systems that can physically realize this transport mechanism.
§ ANALYSIS AND DISCUSSION
From the previous section, we found that it is possible to construct a dissipation induced quantum transport mechanism for excited states of a two-level system in a lattice by weakly coupling it to an environment with a small number of discrete energy levels. The resulting quantum transport mechanism will be efficient, in that the amount of time it will take to transport the excited energy state of the system from one end of the lattice to the other will be less than the number of nodes in the lattice.
Such a quantum transport mechanism has the advantage of eliminating active control over the system throughout the process. Rather, it is the interaction between the system and the environment that allows the quantum transport scheme to be carried out. All that is necessary to allow the quantum transport scheme to be carried out is to prepare and localize the initial state at one end of the lattice.
We note that the quantum transport scheme for the system described in this paper is optimized for transporting excited states from one end of the lattice to another, with the initial state being the ground state for the system. However, the state that is transported to the other end is orthogonal to the initial state of the system localized in one end. There is no other mechanism present in the system other than its coupling with the environment to explain why the initial state localized in one end of the lattice is different from the state transported to the other end. As such, not only does the coupling between the system and the environment induce transportation of excited states from one end of the lattice to another, it also excites the initial state of the system from the ground to the excited energy level. Hence, a full description of the dissipation induced quantum transport scheme due to the coupling between the system and the environment described in this paper can be given as follows: if the initial state of the system is in the ground state and is localized at one end of the lattice, the interaction between the system and the environment will first raise the energy of the initial state to the excited level, then will cause the excited state to be transported to the other end of the lattice at an instant of time less than the number of nodes in the lattice.
The results obtained in the previous section also imply that even if the initial state of the system is not entirely in the ground state, but rather is a superposition of the system's ground and excited states, coupling the system to the environment described in this paper will still cause an excited state of the system to be transported from one end of the lattice to the other. However, the efficiency of this quantum transport scheme in transporting the system's excited state from one end of the lattice to the other will be reduced, since the probability that the state transported to the end of the lattice is an excited state will be less than one. Instead, what is transported to the other end of the lattice is the ground state of the system. Nevertheless, the coupling between the system and the environment still results in a dissipative quantum transport scheme for either ground or excited energy states of the system, which first changes the energy of the initial state before transporting the resulting excited or deexcited energy state of the system from one end of the lattice to the other end.
Finally, we note that the qualitative behavior of this dissipative quantum transport mechanism can be obtained using a master equation derived with the Born-Markov approximation. Such an approximation gives a much simpler form of the master equation, since its coefficients are time-independent. However, it imposes more constraints on the system, in particular requiring that the environment correlations decay much faster than the system evolves, a requirement which may limit the types of systems that can realize this quantum transport mechanism. Nevertheless, the use of the Born-Markov approximation to describe this quantum transport scheme is still useful, due to the simplicity of the resulting master equation, which allows for quicker and easier analysis of the dynamical behavior of the mechanism, and due to the qualitative similarity of the dynamics it describes with those described by the Redfield equation for this same quantum transport scheme.
However, there are still issues that need to be resolved with this quantum transport scheme, foremost of which is its physical realizability. In particular, there is the question of the explicit form of the system and the environment that can be used to realize this quantum transport scheme. One possible physical realization of this quantum transport scheme can be accomplished by taking an ensemble of two-level ultracold atoms confined to a lattice as our system, and weakly coupling these two-level atoms in the lattice to a Bose-Einstein Condensate (BEC) whose ground state energy is much less than the ground state energy of the two-level atoms in the lattice. We note that BECs have been proposed as environments coupled to particular systems before, in particular in Refs. <cit.>. They are able to absorb and emit excitations from and into the system to which they are coupled, in effect serving as energy reservoirs. In doing so, they can be used to realize dissipative quantum preparation and transport schemes for a variety of states. This makes them a natural choice as the environment for the dissipation induced quantum transport scheme described in this paper. We leave the issue of the physical realization of this quantum transport scheme, as well as other possible issues arising from the formulation of the scheme, for future work.
§ ACKNOWLEDGEMENTS
This work is supported by a grant from the National Research Council of the Philippines as NRCP Project no. P-022. B.M.B. and V.P.V. acknowledge support from the Department of Science and Technology (DOST) as DOST-ASTHRDP scholars.
§ DERIVATION OF THE MASTER EQUATION
To obtain Eq. <ref>, we first evaluate the double commutator [H_SB(t),[H_SB(s),∑_jρ_S(t)⊗|j⟩⟨j|⊗ρ_B]]. Substituting equations <ref> and <ref> into this expression, we obtain the following:
[H_SB(t),[H_SB(s),∑_jρ_S(t)⊗|j⟩⟨j|⊗ρ_B]]=
∑_j=1^N∑_n,n'∑_k,k'e^i/ħ(ε_n'-E_k')te^-i/ħ(ε_n-E_k)sg^*_n'k'g_kn
×(â^†_n'â_nρ_n,j(t)⊗|j⟩⟨j|⊗b̂_k'b̂^†_kρ_B.
.-â_nρ_n,j(t)â^†_n'⊗|j+1⟩⟨j+1|⊗b̂^†_kρ_Bb̂_k'.
.-â^†_n'ρ_n,j(t)â_n⊗|j⟩⟨j|⊗b̂_k'ρ_Bb̂^†_k.
.+ρ_n,j(t)â_nâ^†_n'⊗|j+1⟩⟨j+1|⊗ρ_Bb̂^†_kb̂_k')
+∑_j=1^N∑_n,n'∑_k,k'e^-i/ħ(ε_n'-E_k')te^i/ħ(ε_n-E_k)sg_n'k'^*g_kn
×(â_n'â^†_nρ_n,j(t)⊗|j+1⟩⟨j+1|⊗b̂^†_k'b̂^†_kρ_B.
.-â^†_nρ_n,j(t)â_n'⊗|j⟩⟨j|⊗b̂_kρ_Bb̂^†_k'.
.-â_n'ρ_n,j(t)â^†_n⊗|j+1⟩⟨j+1|⊗b̂^†_k'ρ_Bb̂_k.
.+ρ_n,j(t)â^†_nâ_n'⊗|j⟩⟨j|⊗ρ_Bb̂_kb̂^†_k').
We then take the trace of Eq. <ref> over the environment variables, noting that Tr(b̂^†_k'b̂_kρ_B)=0 and Tr(b̂_k'b̂^†_kρ_B)=δ_k',k. Thus, we obtain the following expression:
Tr_B[H_SB(t),[H_SB(s),∑_n,jρ_n,j(t)⊗|j⟩⟨j|⊗ρ_B]]=
∑_j=1^N∑_n,n'∑_ke^i/ħ(ε_n'-E_k)te^-i/ħ(ε_n-E_k)sg^*_n'kg_kn
×(-â_n'ρ_n,j(t)â^†_n⊗|j⟩⟨j|+ρ_n,j(t)â^†_nâ_n'⊗|j+1⟩⟨j+1|)
+∑_j=1^N∑_n,n'∑_ke^-i/ħ(ε_n'-E_k)te^i/ħ(ε_n-E_k)sg_n'k^*g_kn
×(â^†_n'â_nρ_n,j(t)⊗|j+1⟩⟨j+1|-â_nρ_n,j(t)â^†_n'⊗|j⟩⟨j|).
We then make use of the boundary condition |N+1⟩=|1⟩, and in doing so we can factor out the position matrices |j⟩⟨j| in the double commutator. Next, we integrate the double commutator over the time variable s, and simplify the resulting expression. This results in the following expression:
∫_0^tds Tr_B[H_SB(t),[H_SB(s),∑_jρ_S(t)⊗|j⟩⟨j|⊗ρ_B]]=
∑_j=1^N∑_n,n'∑_kiħ/ε_n-E_k(e^-i/ħ(ε_n-E_k)t-1)e^i/ħ(ε_n'-E_k)tg^*_n'kg_kn
×(-â_n'ρ_n,j+1(t)â^†_n+ρ_n,j(t)â^†_nâ_n')⊗|j⟩⟨j|
-∑_j=1^N∑_n,n'∑_kiħ/ε_n-E_k(e^i/ħ(ε_n-E_k)t-1)e^-i/ħ(ε_n'-E_k)tg_n'k^*g_kn
×(â^†_n'â_nρ_n,j(t)-â_nρ_n,j+1(t)â^†_n')⊗|j⟩⟨j|.
Next, we apply a rotating wave approximation to Eq. <ref>, wherein we set ε_n'≈ε_n, and in doing so diagonalize the expression in n. In doing so, we can simplify Eq. <ref>, giving us the following equation:
∫_0^tds Tr_B[H_SB(t),[H_SB(s),∑_jρ_S(t)⊗|j⟩⟨j|⊗ρ_B]]=
∑_j=1^N∑_n∑_kiħ/ε_n-E_k(1-e^i/ħ(ε_n-E_k)t)|g_kn|^2
×(-â_nρ_n,j+1(t)â^†_n+ρ_n,j(t)â^†_nâ_n)⊗|j⟩⟨j|
-∑_j=1^N∑_n∑_kiħ/ε_n-E_k(1-e^-i/ħ(ε_n-E_k)t)|g_kn|^2
×(â^†_nâ_nρ_n,j(t)-â_nρ_n,j+1(t)â^†_n)⊗|j⟩⟨j|.
Finally, adding up both sums in Eq. <ref>, we obtain the master equation given by Eq. <ref> with the coefficients defined as in Eq. <ref>.
99
breuer H-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, 2nd ed., Oxford Univ. Press, 2010.
weiss U. Weiss, Quantum Dissipative Systems, 4th ed., World Scientific, 2012.
nielsen M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information, 10th Anniv. Ed., Cambridge Univ, Press, 2010.
verstraete F. Verstraete, M. M. Wolf and J. I Cirac, Nat. Phys. 5, 633 (2009).
diehl S. Diehl, A. Micheli, A. Kantian, B. Kraus, H. P. Büchler, and P. Zoller, Nat. Phys. 4, 878 (2008).
diehl2 B. Kraus, H. P. Büchler, S. Diehl, A. Kantian, A. Micheli and P. Zoller, Phys. Rev. A 78, 042307 (2008).
dallatorre E. G. Dalla Torre, J. Otterbach, E. Demler, V. Vuletic and M. D. Lukin, Phys. Rev. Lett 110, 120402 (2013)
caballar R. C. F. Caballar, S. Diehl, H. Mäkelä, M. Oberthaler and G. Watanabe, Phys. Rev. A 89, 013620 (2014).
rebentrost P. Rebentrost, M. Mohseni, I. Kassal, S. Lloyd and A. Aspuru-Guzik, New J. Phys. 11, 033003 (2009).
palmeiri B. Palmieri, D. Abramavicius and S. Mukamel, J. Chem. Phys. 130, 204512 (2009)
ishizaki A. Ishizaki and G. Fleming, J. Chem. Phys. 130, 234110 (2009)
fassioli F. Fassioli and A. Olaya-Castro, New J. Phys. 12, 085006 (2010)
panitchayangkoon G. Panitchayangkoon, D. V. Voronine, D. Abramavicius, J. R. Caram, N. H. C. Lewis, S. Mukamel and G. S. Engel, Proc. Nat. Acad. Sci. 108, 20908 (2010)
cai J. Cai, S. Popescu and H. J. Briegel, Phys. Rev. E 82, 021921 (2010)
ishizaki2 A. Ishizaki and G. R. Fleming, Ann. Rev. Cond. Matt. Phys. 3, 333 (2012)
liang X-T. Liang, Phys. Rev. E 82, 051918 (2010)
sweke R. Sweke, M. Sanz, I. Sinayskiy, F. Petruccione and E. Solano, Phys. Rev. A 94, 022317 (2016)
mulken O. Mülken and T. Schmid, Phys. Rev. E 82, 042104 (2010)
caruso F. Caruso, N. Spagnolo, C. Vitelli, F. Sciarrino, and M. B. Plenio, Phys. Rev. A 83, 013811 (2011)
sinayskiy I. Sinayskiy, A. Marais, F. Petruccione, and A. Ekert, Phys. Rev. Lett. 108, 020602 (2012)
antezza P. Doyeux, R. Messina, B. Leggio, and M. Antezza, Phys. Rev. A 95, 012138 (2017)
attal S. Attal, I. Sinayskiy and F. Petruccione, Phys. Lett. A 376, 1545 (2012)
attal2 S. Attal, F. Petruccione, C. Sabot and I. Sinayskiy, J. Stat. Phys. 47, 832 (2012)
sinayskiy2 I. Sinayskiy and F. Petruccione, Phys. Rev. A 92, 032105 (2015).
konno N. Konno and H. J. Yoo, J. Stat. Phys. 150, 299 (2013)
attal3 S. Attal, N. Guillotin-Plantard, C. Sabot, Ann. Henri Poincaré 16, 15 (2015)
|
http://arxiv.org/abs/1701.08138v1 | 20170127182124 | Supremacy of the quantum many-body Szilard engine with attractive bosons | [
"Jakob Bengtsson",
"Mikael Nilsson Tengstrand",
"Andreas Wacker",
"Peter Samuelsson",
"Masahito Ueda",
"Heiner Linke",
"Stephanie M. Reimann"
] | quant-ph | [
"quant-ph"
] |
^1NanoLund, Lund University, P.O.Box 118, SE-22100 Lund, Sweden
^2 Mathematical Physics, Lund University, Box 118, 22100 Lund, Sweden
^3 Solid State Physics, Lund University, Box 118, 22100 Lund, Sweden
^4Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 11
3-0033, Japan
^5RIKEN Center for Emergent Matter Science (CEMS), Wako, Saitama 351-0198, Japan
In a classic thought experiment, Szilard <cit.> suggested a heat engine
where a single particle, for example an atom or a molecule, is confined in a container coupled to a single heat bath.
The container can be separated into two parts by a moveable wall acting as a piston.
In a single cycle of the engine, work can be extracted from the information on
which side of the piston the particle resides. The work output
is consistent with Landauer's principle that the erasure of one bit of
information costs the entropy k_B ln 2 <cit.>,
exemplifying the fundamental relation between work, heat and information <cit.>.
Here we apply the concept of the Szilard engine to a fully interacting quantum many-body system.
We find that a working medium of a number of N ≥ 2 bosons
with attractive interactions is clearly superior to other previously discussed
setups <cit.>.
In sharp contrast to the classical case, we find that the average work output increases
with the particle number. The highest overshoot occurs for a small but finite
temperature, showing an intricate interplay between thermal and quantum effects.
We anticipate that our finding will shed new light on the role of information in controlling thermodynamic fluctuations in the deep quantum regime, which are strongly influenced by quantum correlations in interacting systems <cit.>.
Supremacy of the quantum many-body Szilard engine with attractive bosons
J. Bengtsson^1,2, M. Nilsson Tengstrand^1,2, A. Wacker^1,2, P. Samuelsson^1,2, M. Ueda^4,5, H. Linke^1,3 & S. M. Reimann^1,2
December 30, 2023
=================================================================================================================================
§ INTRODUCTION
The Szilard engine was originally designed as a thought experiment with only a single classical
particle <cit.> to illustrate the role of information in
thermodynamics (see, for example, <cit.> for a recent review).
The apparent conflict with the second law could be resolved by
properly accounting for the work cost associated with the information
processing <cit.>.
Although Szilard's suggestion dates back to 1929, only recently was the conversion between information and energy demonstrated experimentally, using a Brownian particle <cit.>.
A direct realisation of the classical Szilard cycle was reported by
Roldán et al. <cit.> for a colloidal particle in an optical double-well trap.
In a different scenario, Koski et
al. <cit.> measured k_B T ln 2 of work for
one bit of information using a single electron moving between
two small metallic islands.
A quantum version of the single-particle Szilard engine was first
discussed by Zurek <cit.>. In contrast to the classical
case, insertion or removal of a wall in a quantum system shifts the
energy levels, implying that the process must be associated with
non-zero work <cit.>. Kim et
al. <cit.> showed that the amount of work that can be
extracted crucially depends on the underlying quantum statistics: two
non-interacting bosons were found superior to the classical
equivalent, as well as to the corresponding fermionic case.
Many different facets of the quantum Szilard engine have been studied, including
optimisation of the cycle <cit.> or the effect of
spin <cit.> and parity <cit.>, but all for
non-interacting particles. The case of two attractive bosons was
discussed in Ref. <cit.>; however, the authors assigned the
increased work output to a classical effect. The question thus remains
how the information-to-work conversion in many-body quantum systems
is affected by interactions between the particles.
Here, we present a full quantum many-body treatment of spin-0 bosonic
particles in a Szilard engine with realistic attractive interactions between the particles, as they
commonly occur in, for example, ultra-cold atomic gases <cit.>.
We demonstrate quantum supremacy in the few-body limit for N≤ 5, where a solution to the full many-body problem can be obtained with very high numerical accuracy.
A perturbative approach indicates that the supremacy further increases for larger particle numbers.
Surprisingly, the highest overshoot of work compared to W_1=k_BT ln 2 (i.e., the highest possible classical work output) occurs for a finite temperature, exemplifying the relation between thermodynamic fluctuations and the many-particle excitation spectrum.
§ MANY-BODY SZILARD CYCLE FOR BOSONS WITH ATTRACTIVE INTERACTIONS.
Our claim is based on a fully ab
initio simulation of the quantum many-particle Szilard cycle by
exact numerical diagonalisation, i.e., the full configuration
interaction method (as further described in the supplementary material).
A hard-walled one-dimensional container of length L confines N bosons that constitute
the working medium. We model the interactions by the usual two-body pseudopotential of
contact type <cit.>, gδ (x_1-x_2), where the strength of the interaction g is
given in units of g_0=ħ^2/(Lm). The single-particle ground
state energy E_1=ħ^2π^2/2mL^2 sets the
energy unit, where m is the mass of a single particle.
The cycle of the Szilard engine goes
through four steps, assumed to be carried out quasi-statically and in
thermodynamic equilibrium with a single surrounding heat bath at
temperature T: (i) insertion of a wall dividing the quantum
many-body system at a position ℓ^ins, followed by
(ii) a measurement of the actual particle number n on the left side
of the wall, (iii) reversible translation of the wall to its final
position ℓ_n^rem depending on the outcome n of the
measurement, and finally (iv) removal of the barrier at
ℓ_n^rem.
The total average work output of a single cycle with processes
(i)-(iv) has been determined <cit.> as
W = -k_BT ∑^N_n = 0 p_n(ℓ^ins) ln[ p_n(ℓ^ins)/p_n(ℓ^rem_n)] .
Here, p_n(ℓ) denotes the probability to find n particles to
the left of the wall located at position ℓ, and N-n
particles to the right, if the combined system is in thermal
equilibrium. The N-particle
eigenstates Ψ_i with energy E_i,
obtained by numerical diagonalisation, can be classified by the particle
number n_i in the left subsystem with 0<x<ℓ.
Then we find that p_n(ℓ)=∑_iδ_n_i,n e^-E_i(ℓ)/k_BT/Z with Z=∑_i
e^-E_i(ℓ)/k_BT.
Measuring the particle number on one side after insertion
of the wall, one gains the Shannon
information <cit.>
I=-∑_n=0^N p_n(ℓ^ins) ln p_n(ℓ^ins).
Going back to the original state in the cycle, this information is
lost, associated with an average increase of entropy Δ
S=k_BI. This increase in entropy of the system allows one to extract the
average amount of work W≤ k_BT I which can be positive.
Here, the equality only holds if all p_n(ℓ^rem_n)≡
1. In this case the removal of the barrier is reversible for each
observed particle number. This reversibility had been associated with
the conversion of the full information gain into
work <cit.>, as explicitly assumed in
Ref. <cit.>. While p_n(ℓ^rem_n)≡ 1 is
straightforward for the single-particle case with
ℓ^rem_0=0 and ℓ^rem_1=L, this is hard
to realise for N≥ 2 <cit.>.
For our case of a moving piston, the full work can typically not be extracted. To
optimise W, we choose the optimal ℓ^rem_n,
maximising p_n(ℓ^rem_n) for all systems considered here.
(The procedure is similar to the non-interacting case <cit.>).
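For orientation, the single-particle version of this bookkeeping can be written down in a few lines. The sketch below evaluates p_n(ℓ) from box partition functions, the information gain I, and Eq. (1) with the full-sweep removal positions ℓ_0^rem=0 and ℓ_1^rem=L (units ħ = m = L = k_B = 1, so E_1 = π²/2; the temperature value is illustrative):

```python
import numpy as np

def Z_box(ell, T, kmax=200):
    """Partition function of one particle in a hard-wall box of length ell."""
    if ell <= 0.0:
        return 0.0
    k = np.arange(1, kmax + 1)
    return np.sum(np.exp(-(np.pi * k)**2 / (2.0 * ell**2 * T)))

def probs(ell, T):
    """p_0, p_1: no particle / one particle to the left of the wall at ell."""
    Z0, Z1 = Z_box(1.0 - ell, T), Z_box(ell, T)
    return np.array([Z0, Z1]) / (Z0 + Z1)

T = 0.1 * np.pi**2 / 2.0                   # k_B T = 0.1 E_1 (illustrative)
p = probs(0.5, T)                          # insertion at ell = L/2
I = -np.sum(p * np.log(p))                 # Shannon information of the outcome
W = -T * np.sum(p * np.log(p))             # Eq. (1); here p_n(ell_rem) = 1
print(I / np.log(2), W / (T * np.log(2)))  # -> 1 bit, and W = k_B T ln 2
```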
The highest relative work output is obtained for a many-body system of
attractive bosons at a finite temperature.
This is the white region in the top panel of
Fig. <ref> (a), where the work output W/W_1≈ 1.12
for a system of four attractive bosons surpasses the
results for noninteracting (middle panel with W/W_1≲ 1.08)
as well as for repulsive (lowest panel) bosons. For comparison a system of
four classical particles has W/W_1≲ 0.886 (not shown here).
We also note that for interaction strengths g≤ 0, the maximum work
output always occurs if the wall is inserted in the middle of the
container (ℓ^ins=L/2) for an engine operating in the
deep quantum regime. (For larger temperatures, other insertion positions can become favorable, see the Supplementary Material). For T→ 0, the work output vanishes
if ℓ^ins≠ L/2. In this limit all non-interacting
bosons occupy the lowest single-particle quantum level. After
insertion of the wall, the energetically lowest-lying level is in the
larger region. For ℓ^ins≠ L/2 we know beforehand the
location of the particles and measuring the number of particles does
not provide any new information, i.e., I=0. Consequently, no
work can be extracted in the cycle. Attractive interactions obviously
enhance this feature. However, this does not hold for repulsive
interactions, g>0, as shown in the lowest panel of
Fig. <ref> (a). In this case, the particles spread out
on different sides of the wall in the ground state. Here,
degeneracies between different many-particle states occur at
particular values of ℓ^ins, which allow an information
gain in the measurement. This explains the N distinct peaks as a
function of ℓ^ins for low temperature in the lowest panel of Fig. <ref> (a).
The maximum of W/W_1 for attractive bosons increases with
particle number, as shown in Fig. <ref> (b). The
optimal relative work output is higher for
attractive bosons (solid red line) than for non-interacting
bosons (red dashed line) and clearly beats the corresponding
result for classical particles (blue dashed line).
Here, the data for N≤ 5 were obtained by exact diagonalisation while a
perturbative approach (see supplementary material) was applied for N>5.
The peak work output
for bosons with attractive interactions at a finite temperature is a
general feature, which holds for a wide range of interaction strengths,
see Fig. <ref> (c) for the case of N=3 bosons. Indeed,
the temperature at which the peak occurs increases with larger interaction
strengths.
§ ONSET OF THE PEAK AT AN INTERMEDIATE TEMPERATURE.
For systems with attractive interactions, g<0,
the work output equals k_BT ln 2 at low temperatures, independent of N.
Due to the dominance of the attractive interaction, all N particles will be
found on one side of the barrier. When the barrier is inserted symmetrically,
we have p_0(L/2)=p_N(L/2)=1/2, while all other
p_n(L/2)=0. At the same time, the removal position
ℓ_0^rem=0 and ℓ_N^rem=L provide
p_0(ℓ_0^rem)=1 and p_N(ℓ_N^rem)=1, so that
Eq. (<ref>) provides the work output W=k_BT ln 2 for the
entire cycle as observed in Fig. <ref>(c). This case, with two
possible measurement outcomes and a full sweep of the piston,
resembles the single-particle case. One might wonder, whether the
increased particle number should not imply a higher pressure on the
piston and thus, more work. This, however, is not the case, as the
attraction between the particles reduces the pressure.
Also, when inserting the barrier, the
difference in work due to the interactions has to be taken into account.
With increasing temperature (i.e., k_BT∼ -3g(N-1)/L≈
-0.6(N-1)E_1g/g_0, for weak interactions as shown in the supplementary material) other
measurement outcomes than n=0 or n=N become probable. Since p_0 and
p_N now decrease with temperature we see a deviation from the
performance of the single-particle engine.
§ THE TWO-PARTICLE INTERACTING ENGINE.
To get a better understanding of the physics behind the enhancement of work output for
bosons with attractive interactions at finite temperatures, let us look at the two-particle
case in some more detail. For a central insertion of the barrier, we find
p_0(L/2) = p_2(L/2). For the same symmetry reasons,
p_1(ℓ ) has a maximum at this barrier position. No work can thus be
extracted in cycles where the two particles are measured on different
sides of the central barrier, since
p_1(ℓ ^ins) / p_1(ℓ _1^rem)≥ 1 in Eq. (<ref>).
Thus, the only contributions to the work output result from
p_0 and p_2. Together with p_0(ℓ^rem_0=0) = p_2(ℓ^rem_2=L) = 1 we obtain
W = - 2k_B T p_0(L/2)ln p_0(L/2)
This function has its peak at p_0 = 1/e with the peak value
W ≈ 1.061 k_BT ln 2, see Fig. <ref>.
This implies a finite value p_1=1-2/e. Even if no work can be
extracted with one particle on either side of the barrier, a non-zero
probability p_1 of such a measurement outcome can be preferable.
Two attractive bosons, initially at T→ 0 and with p_0=1/2,
will for increasing T continuously approach the classical limit of
p_0=1/4. Hence, at a certain temperature, depending on the
interaction strength, p_0 passes through p_0=1/e producing a peak in
the relative work. Physically, one may understand this property of the
engine as follows: At low temperatures, the two attractive bosons will
always end up on the same side of the barrier, bound together by their attraction.
The cycle is then operating similar to the single
particle case, which explains that W = k_BT ln 2 when
p_0 = 1/2. A less correlated system (obtained with increasing T) provides
a larger expansion work for cycles in which both particles are on one
side of the barrier. On the other hand, cycles with one particle on
each side of the barrier, from which no work can be extracted, become
more frequent.
For 1/e < p_0 < 1/2, the enhanced pressure is more important and the average work output
increases with decreasing p_0.
For lower values of p_0, i.e. p_0 < 1/e, too few cycles
contribute on average to the work production. The average work output decreases with decreasing p_0 despite the corresponding increase in pressure.
Importantly, we note the absence of a similar maximum in the non-interacting case,
where W/W_1 is found to decrease steadily towards the classical limit with increasing T.
§ SZILARD ENGINES WITH N>2 ATTRACTIVE BOSONS.
The maximum of W/W_1 tends to increase with the particle number N (as previously discussed in connection with Fig. <ref> b). The reason lies in the fact that work can be extracted from a larger number of measurement outcomes.
Similar to the two-particle engine, the combined contribution to the average work
output from cycles in which all
particles are on the same side of a barrier inserted at
ℓ^ins=L/2 is given by Eq. (<ref>). However, also cycles with n=1,2,…,N-1
(except if n = N/2) on the left side of the barrier do contribute to
the average work output, and work output even higher than in
the two-particle case is possible.
The maximum of p_1(ℓ) and that of p_N-1(ℓ) occurs for ℓ≠ L/2, as clearly indicated
by the probabilities for different measurement outcomes shown for N=4 in
Fig. <ref>. This means that
p_n(ℓ ^ins) /p_n (ℓ _n^rem)≤ 1
is possible for ℓ ^ins=L/2 and that work may be extracted in agreement with Eq. (<ref>).
For all systems considered here,
with insertion of the barrier at the midpoint the optimum is reached for p_0 = p_N ≈ 0.3 (see the example for N=4 in Fig. <ref>), which is close to the
optimal value of 1/e for the corresponding two-particle engine.
§ REPULSIVE BOSONS
Finally, we consider the repulsive interactions between bosons, see
Figure <ref>(c). In the low-temperature
limit, the relative work output is very similar to that of
non-interacting spin-less fermions discussed in Refs. <cit.>. This resemblance becomes even more pronounced with
increasing interaction strength. This in fact is no coincidence, but
rather a property of one-dimensional bosons with strong, repulsive
interactions that have an impenetrable core: Indeed, in the limit of
infinite repulsion, bosons act like spin-polarised non-interacting
fermions. This is the well-known Tonks-Girardeau regime <cit.>.
Both for non-interacting fermions and strongly repulsive bosons, the region
where the quantum Szilard engine exceeds the classical single-particle maximum of work
output, has disappeared.
§ CONCLUSIONS
We have demonstrated that the work output of the quantum Szilard engine can be significantly boosted by short-ranged attractive interactions for a bosonic working medium. We based our claim on the (numerically) exact solution of the full many-body Schrödinger equation for up to five bosons.
It is likely that the effect is even further enhanced for larger particle numbers; however, despite the simple one-dimensional setup, the numerical effort grows very significantly (and beyond our feasibility) for larger N.
By increasing the strength of the interparticle attraction, the engine's work output can be increased significantly also at higher temperatures, where the work that can be extracted generally is of larger magnitude. While we here restrict our analysis to idealised quasi-static processes, it would be of much interest
to consider a finite speed in the ramping of the barrier, enabling transitions to excited states which, by coupling to baths, will lead to dissipation. Extending our approach to quantify irreversibility in real processes on the basis of a fully ab initio quantum description may in the future allow us to study
dissipative aspects in the kinetics of the conversion between information and work.
Supplementary Material
1. Work output of the quantum Szilard engine
In contrast to other conventional heat engines that operate by
exploiting a temperature gradient, as discussed in many textbooks on
thermodynamics, the Szilard engine <cit.>
allows for work to be extracted also when
connected to a single heat bath at constant temperature.
It is propelled by the information obtained about the working
medium and its microscopical properties.
In the supplementary material, we briefly outline the theoretical description of the quantum
Szilard engine, in close analogy to that of Refs. <cit.>.
An idealised version of the Szilard engine cycle consists of four
well-defined steps: (i) insertion, (ii) measurement, (iii) expansion and finally
(iv) removal. First, an impenetrable barrier is introduced
(i) that effectively splits the working medium into two
halves. Then, the number of particles on each side of the barrier is
measured (ii). Depending on the outcome of this measurement, the
barrier moves (iii) to a new position and contraction-expansion
work can be extracted in the process. Finally, the barrier is removed
(iv) which completes a single cycle of the engine.
All four steps, (i)-(iv), of the Szilard engine are assumed to
be carried out quasi-statically and in thermodynamic equilibrium with
a surrounding heat bath at temperature T. Now, the work associated
with an isothermal process can be obtained from
W ≤ -Δ F = k_B T Δ( ln Z),
where k_B, F and Z are the Boltzmann constant, the Helmholtz free
energy and the partition function
Z = ∑_j e^-E_j/(k_BT),
where the sum runs over the energies E_j of, in principle, all micro
(or quantum) states of the considered system. In practice, however, we
construct an approximate partition function from a finite number of
energy states. Note that the work in Eq. (<ref>) is chosen
to be positive if done by the system. Also, the equivalence between
W and -Δ F is reserved for reversible processes alone.
We now turn to the work associated with the individual steps of the
quantum Szilard engine. For simplicity, we consider an engine with N
particles initially confined in a one-dimensional box of size L. All
steps of the engine are, as previously mentioned, carried out
quasi-statically and in thermal equilibrium with the surrounding heat
bath at temperature T. To maximise the work output, we further
assume that all involved processes are reversible, unless specified
otherwise.
(i) Insertion. A wall is slowly introduced at ℓ^ins,
where 0 ≤ℓ^ins≤ L. In the end, the initial system is divided
into left and right sub-systems of sizes ℓ^ins and L-ℓ^ins
respectively. Based on Eq. (<ref>), the work of this process
is given by
W_ (i) = k_BT ln [ ∑_n=0^N Z_n(ℓ^ins)/Z_N(L) ],
where Z_n(ℓ^ins) is the short-hand notation for the
partition function obtained with n particles in the left sub-system
and N-n in the right one. With this notation, Z_N(L) is thus
equivalent to the partition function of the initial system, before the
insertion of the barrier. Also, note that prior to measurement, the
number of particles on either side of the barrier is not yet a
characteristic property of the new system. We need thus to sum over
all possible particle numbers in the numerator of
Eq. (<ref>). Finally, we want to stress the fact that, unlike for
a classical description of the engine, the insertion of a barrier
costs energy in the form of work, due to the associated change in the
potential landscape.
(ii) Measurement. The number of particles located on the different
sides of the barrier is now measured. Here, following <cit.>, we assume that the
measurement process itself costs no work, i.e., we assume that
W_ (ii)=0 (see main text). The probability that n particles are measured to be
on the left side of the barrier (and N-n on the right side) is given by
p_n(ℓ^ins) = Z_n(ℓ^ins)/∑_n^'=0^N Z_n^'(ℓ^ins).
(iii) Expansion. The barrier introduced in (i) is assumed to move without
friction. During this expansion/contraction process, the number of
particles on either side of the barrier remains fixed. In other words,
the barrier is assumed high enough such that tunnelling may be
neglected. If the barrier moves from ℓ^ins to
ℓ^rem_n when n particles are measured in the left
sub-system, the average work extracted from this step of the cycle
reads
W_ (iii)= k_BT ∑_n=0^N p_n(ℓ^ins) ln[Z_n(ℓ^rem_n)/Z_n(ℓ^ins)],
where p_n are the probabilities given by Eq. (<ref>).
(iv) Removal. The barrier at ℓ^rem_n, that
separates the left sub-system with n particles from the right one
with N-n particles, is now slowly removed. As the height of the barrier shrinks, particles will
eventually start to tunnel between the two sub-systems. This transfer
of particles makes the removal of the barrier an irreversible
process. Clearly, if we instead were to start without a barrier and
introduce one at ℓ^rem_n, then we can generally not be
certain to end up with n particles to the left of the
partitioner. Assuming that the particles are fully delocalised between
the two sub-systems already in the infinite height barrier limit, then
the average work associated with the removal process is given by
W_(iv) = k_B T ∑_n=0^N p_n(ℓ^ins) ln[Z_N(L)/∑_n^'=0^N
Z_n^'(ℓ^rem_n)].
Finally, the averaged combined work output of a single Szilard cycle, W, is given by the sum
of the partial works associated with the four steps (i)-(iv), i.e.
W=W_ (i)+W_ (ii)+W_ (iii)+W_ (iv), and simplifies into
W = -k_BT ∑^N_n = 0 p_n(ℓ^ins) ln[ p_n(ℓ^ins)/p_n(ℓ^rem_n)].
which is the central equation (1) in the main article.
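The cancellation leading to this expression can be verified numerically for the single-particle engine by summing W_(i), W_(iii) and W_(iv) explicitly (sketch; units ħ = m = L = k_B = 1, and the temperature value is illustrative):

```python
import numpy as np

def Z_box(ell, T, kmax=400):
    """Partition function of one particle in a hard-wall box of length ell."""
    if ell <= 0.0:
        return 0.0
    k = np.arange(1, kmax + 1)
    return np.sum(np.exp(-(np.pi * k)**2 / (2.0 * ell**2 * T)))

def Z_n(n, ell, T):
    """Z of the divided box with n (= 0 or 1) particles left of the wall."""
    return Z_box(ell, T) if n == 1 else Z_box(1.0 - ell, T)

T, ins, rem = 2.0, 0.5, [0.0, 1.0]            # removal positions for n = 0, 1
Ztot = lambda ell: Z_n(0, ell, T) + Z_n(1, ell, T)
p = [Z_n(n, ins, T) / Ztot(ins) for n in (0, 1)]

W_i   = T * np.log(Ztot(ins) / Z_box(1.0, T))
W_iii = T * sum(p[n] * np.log(Z_n(n, rem[n], T) / Z_n(n, ins, T)) for n in (0, 1))
W_iv  = T * sum(p[n] * np.log(Z_box(1.0, T) / Ztot(rem[n])) for n in (0, 1))

p_rem = [Z_n(n, rem[n], T) / Ztot(rem[n]) for n in (0, 1)]
W_eq  = -T * sum(p[n] * np.log(p[n] / p_rem[n]) for n in (0, 1))
print(W_i + W_iii + W_iv, W_eq)               # the two results agree
```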
2. The interacting many-body Hamiltonian and exact diagonalisation
To keep the schematic setup of the many-body Szilard cycle as simple as possible, we consider a quantum system of N interacting particles, initially confined in a one-dimensional box of size L that is separated by a barrier inserted at a certain position ℓ.
We note that for contact interactions between the particles, as defined in the main text, the exact energies E_j to the fully interacting many-body Hamiltonian Ĥ are those given in terms of two independent systems with n and N-n particles.
In order to construct the partition functions and compute the probabilities p_n, the entire exact many-body energy spectrum is needed. For the simple case of non-interacting particles (or single-particle systems) these energies are known analytically. For interacting particles, however, they must be determined by solving the full many-body problem. We here apply
the configuration interaction method where we use a basis of the 5th order B-splines <cit.>, with a linear distribution of knot-points within each left/right sub-system, to determine the energies of each sub-system and parity at each stage. For N = 3, we used 62 B-splines (or one-body states) to construct the many-body basis for each sub-system. Since the dimension of the many-body problem grows drastically with N, we needed to decrease the number of B-splines to 32 for N=4. Consequently, in this case we could not go to equally high temperatures and interaction strengths.
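While the production runs use the large B-spline basis described above, the structure of such a configuration-interaction calculation can be illustrated with a minimal sine-basis analogue for N = 2 bosons (sketch, not the B-spline machinery used in the paper; units ħ = m = L = 1, so E_1 = π²/2 and g is in units of g_0; the basis size and the value of g are illustrative):

```python
import numpy as np
from itertools import combinations_with_replacement

M = 12                                        # single-particle modes kept
x = np.linspace(0.0, 1.0, 2001)
phi = np.array([np.sqrt(2.0) * np.sin(n * np.pi * x) for n in range(1, M + 1)])
E_sp = np.array([(n * np.pi)**2 / 2.0 for n in range(1, M + 1)])

pairs = list(combinations_with_replacement(range(M), 2))  # symmetrized |ij>

def chi(pair):
    """Symmetrized two-boson amplitude evaluated on the diagonal x1 = x2 = x."""
    i, j = pair
    s = np.sqrt(2.0) if i != j else 1.0
    return s * phi[i] * phi[j]

g = -2.0                                      # attractive contact coupling
H = np.zeros((len(pairs), len(pairs)))
for A, pa in enumerate(pairs):
    for B, pb in enumerate(pairs):
        H[A, B] = g * np.trapz(chi(pa) * chi(pb), x)   # <pa| g delta(x1-x2) |pb>
    H[A, A] += E_sp[pa[0]] + E_sp[pa[1]]               # kinetic (box) part

E = np.linalg.eigvalsh(H)
print(E[:3] / (np.pi**2 / 2.0))               # lowest levels in units of E_1
```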
3. Perturbative approach for weakly attractive bosons at
low temperatures
Here we consider the case k_BT≪ E_1, so that only the lowest quantum
levels in each part are thermally occupied. In the case of vanishing
interaction, the state with n particles in the lowest level to the left of the wall positioned at ℓ (and N-n particles in the lowest level of the right part) has the energy
E_n^(0)(ℓ)=n E_1(L/ℓ)^2+
(N-n)E_1(L/(L-ℓ))^2
Applying the wave function Ψ_0(x)=sin (π
x/ℓ)√(2/ℓ) for the left side,
the mutual interaction energy between two particles in this
level is
U(ℓ)=g∫_0^ℓ dx |Ψ_0(x)|^4=3g/2ℓ
Now we assume that
this interaction energy (times the number of interacting partners)
is much smaller than the level spacing, i.e. (n-1)U(ℓ)≪
3 E_1(L/ℓ)^2, which is satisfied for
|g|≪π^2 g_0/(N-1). Then we may determine
the energy of the many-particle state by first-order perturbation theory.
This results in the interaction energy
E_n^(1)(ℓ)=n(n-1)/2U(ℓ)+(N-n)(N-n-1)/2U(L-ℓ)
.
Setting E_n(ℓ)≈ E_n^(0)(ℓ)+E_n^(1)(ℓ), we obtain an analytical
expression for the probabilities p_n(ℓ) without any need for numerical
diagonalisation.
Using ℓ^ins=L/2 and
determining the optimal removal positions ℓ^rem_n numerically,
we get the work output by Eq. (<ref>). Again the optimal
temperature needs to be chosen to obtain the results plotted in
Fig. <ref>(b).
4. Estimate of the peak temperature
For the symmetric wall position, the
ground state of the system with attractive bosons has all particles on one
side, say the left one. Using the perturbative approach discussed above, the
interaction energy is E_N^(1)(L/2). If one boson is transferred from the
left side to the right side, the interaction energy changes to
E_N-1^(1)(L/2), while the level energies E_n^(0)(L/2) are
independent of n for the symmetric wall position. Thus thermal excitations
become likely for k_BT∼ E_N-1^(1)(L/2)-E_N^(1)(L/2)=-3(N-1)g/L. For
these temperatures the particles do not cluster on the same side of the wall
any longer and we have p_0<1/2.
5. Temperature dependence of the work output for different interaction
strengths
As a complement to Fig. 1(c) of the main article, we show the case for N=4
here in Fig. <ref>.
For small to medium couplings -g_0≲ g <0,
the peaks have approximately the same
height and they are shifted proportionally to g. This shift follows the
deviations from the low temperature limit W=W_1, which set in
at k_BT≈ -0.6(N-1)E_1g/g_0, as shown by the approximative approach in
the main article.
As discussed in the method section, for g≈ -π^2 g_0/(N-1),
correlation effects become important and we find a reduced peak at g=-10
g_0, similar to the case for N=3 in Fig. 1(c) of the main article.
Due to the high cost of the
numerical diagonalisation, we did not obtain results for larger |g| in
the case N=4, while for N=3 an increase of the peak height for even larger
|g| is observed.
For all interaction strengths g<0, the peak height
is actually larger than the peak for the attractive two-particle case
W_2≈ 1.061k_BTln(2) depicted in Fig. 2 of the main article. This is
due to the fact, that Eq. (2) of the main article is a lower bound for the
work output and p_0(L/2) necessarily moves from 1/2 at T→ 0 to the
classical result 1/2^N at large temperatures. Thus the maximum for p_0=1/e
is taken at some intermediate temperature.
6. Operation of the Quantum Szilard engine at high temperatures
For N=4 particles, Fig. <ref> shows that the symmetric
insertion point ℓ^ins=L/2 is not optimal for high temperatures.
For classical particles, the optimal work output is given by
W_tot = -Nk_BT[(ℓ^ins/L)^Nln(ℓ^ins/L)
+(1-ℓ^ins/L)^Nln(1-ℓ^ins/L)]
- k_BT∑_m=1^N-1\binom{N}{m}(ℓ^ins/L)^m (1-ℓ^ins/L)^N-m
×ln[(ℓ^ins/L)^m
(1-ℓ^ins/L)^N-m/((m/N)^m (1-m/N)^N-m)],
where \binom{N}{m} denotes the binomial coefficient.
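This expression can be scanned over insertion points x = ℓ^ins/L in a few lines (sketch; k_B = T = 1):

```python
import numpy as np
from math import comb

def W_classical(N, x):
    """Classical optimal average work (k_B = T = 1) for insertion at x."""
    W = -N * (x**N * np.log(x) + (1.0 - x)**N * np.log(1.0 - x))
    for m in range(1, N):
        p_ins = comb(N, m) * x**m * (1.0 - x)**(N - m)
        ratio = (x / (m / N))**m * ((1.0 - x) / (1.0 - m / N))**(N - m)
        W -= p_ins * np.log(ratio)
    return W

xs = np.linspace(0.01, 0.99, 981)
for N in (2, 3, 4):
    W = np.array([W_classical(N, x) for x in xs])
    print(N, xs[np.argmax(W)], W.max() / np.log(2.0))   # optimal x and W/W_1
```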
A numerical scan of different insertion positions shows that an
asymmetric insertion point ℓ^ins≠ L/2 (as shown by the blue
dashed line in Fig. <ref>) provides the highest work output.
In contrast, the symmetric position is optimal for N≤ 3
classical particles. For the non-repulsively interacting bosons (g≤ 0)
studied here, the symmetric insertion is favorable in the low temperature limit as thoroughly discussed in
the main article. On the other hand, for large temperatures the classical
result needs to be recovered. This occurs via a pitchfork bifurcation<cit.> at an
intermediate temperature as shown in Fig. <ref>.
For noninteracting bosons it occurs at k_BT_c≈ 50 E_1 for N=4, and
at slightly larger values if attractive interactions are included.
|
http://arxiv.org/abs/1701.07442v1 | 20170125190015 | Dust radiative transfer modelling of the infrared ring around the magnetar SGR 1900$+$14 | [
"G. Natale",
"N. Rea",
"D. Lazzati",
"R. Perna",
"D. F. Torres",
"J. M. Girart"
] | astro-ph.HE | [
"astro-ph.HE"
] |
G. Natale
[email protected]
Jeremiah Horrocks Institute, University of Central Lancashire, Preston, PR1 2HE, UK
Institute of Space Sciences (IEEC–CSIC), Campus UAB, Carrer de Can Magrans S/N, 08193 Barcelona, Spain.
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL–1090 GE Amsterdam, the Netherlands.
Department of Physics, Oregon State University, 301 Weniger Hall, Corvallis, OR 97331, USA
Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY, 11794, USA
Institute of Space Sciences (IEEC–CSIC), Campus UAB, Carrer de Can Magrans S/N, 08193 Barcelona, Spain.
Institució Catalana de Recerca i Estudis Avançats (ICREA), E-08010 Barcelona, Spain
Institute of Space Sciences (IEEC–CSIC), Campus UAB, Carrer de Can Magrans S/N, 08193 Barcelona, Spain.
A peculiar infrared ring-like structure was discovered by Spitzer around the strongly magnetised neutron star SGR 1900+14. This infrared structure was suggested to be due to a dust-free cavity, produced by the SGR Giant Flare occurred in 1998, and kept illuminated by surrounding stars. Using a 3D dust radiative transfer code, we aimed at reproducing the emission morphology and the integrated emission flux of this structure assuming different spatial distributions and densities for the dust, and different positions for the illuminating stars. We found that a dust-free ellipsoidal cavity can reproduce the shape, flux, and spectrum of the ring-like infrared emission, provided that the illuminating stars are inside the cavity and that the interstellar medium has high gas density (n_H∼1000 cm^-3). We further constrain the emitting region to have a sharp inner boundary and to be significantly extended in the radial direction, possibly even just a cavity in a smooth molecular cloud. We discuss possible scenarios for the formation of the dustless cavity and the particular geometry that allows it to be IR-bright.
§ INTRODUCTION
Strongly magnetized neutron stars <cit.> are extremely powerful X-ray and soft gamma-ray emitters, in particular in the form of large flares. These flares can reach luminosities that, in our Galaxy, are second only to supernova explosions. Magnetars emit a large variety of flares and outbursts on timescales from fractions of a second to years, over a vast range of luminosities from 10^38 to ∼10^47<cit.>. The most energetic events they have ever emitted, called Giant Flares, have been detected three times in the past few decades, from three magnetars: the Soft Gamma-ray Repeater (SGR) 0526-66 on 1979 March 5 <cit.>, SGR 1900+14 on 1998 August 27 <cit.>, and the last and most energetic one, on 2004 December 27, from SGR 1806-20 <cit.>. All of them are characterized by a very luminous initial spike (∼10^44-47), lasting less than a second, which decays rapidly into a softer tail (modulated at the neutron star spin period) lasting several hundreds of seconds (with luminosities of ∼10^43). The nature of the steady and flaring high-energy emission from these sources has been intriguing all along. In fact, the X-ray luminosity of magnetars is in general too high to be produced by pulsar rotational energy losses alone, as for the more common isolated radio pulsars, and the lack of any companion star excludes an accretion scenario. It is now well established that the peculiarities of these extreme, highly magnetized objects (10^14-15 Gauss) are related to the strength and instability of their magnetic field, which at times might stress the stiff neutron star crust <cit.>, rearrange itself locally in small twisted bundles, or disrupt and reconnect higher up in the magnetosphere, producing large ejections of particles. The ages of the ∼25 magnetars known <cit.>, derived from their rotational properties (t_c∼ P/Ṗ), indicate a young population, typically a few thousand years old. In three or four cases there are reasonably well-accepted associations with supernova remnants <cit.>, as well as with massive star clusters <cit.>.
SGR1900+14 is one of the youngest magnetars known (∼1 kyr). A very prolific burster <cit.>, it is embedded in a cluster of very massive stars <cit.> and it is one of the three magnetars which have shown a Giant Flare. This magnetar was observed using all three instruments onboard the NASA Spitzer Space Telescope in 2005 and 2007 <cit.>.
Surprisingly, these observations have revealed a prominent ring-like structure (see Figure 1) in the 16μm and 24μm wave-bands, not detected in the 3.6–8.0 μm observations. A formal elliptical fit to the ring indicates semi-major and semi-minor axes of angular lengths ∼36” and ∼19”, respectively, centred at the position of the magnetar SGR1900+14. No equivalent feature was observed at radio or X-ray wavelengths (L_ 332 MHz≤ 2.7 × 10^29 d_12.5^2 erg s^-1, and L_ 2-10 keV≤ 1.8 × 10^33 d_12.5^2 erg s^-1; with d_12.5 being the distance in units of 12.5 kpc; <cit.>).
The Spitzer images are dominated by the bright emission from two nearby M supergiants that mark the centre of a compact cluster of massive stars at a distance of ∼12.5 kpc, believed to have hosted the magnetar progenitor star <cit.>. The physical size of the ring at this distance is ∼ 2.18×1.15 pc, it has a temperature ∼ 80 K <cit.>, and a flux of 0.4±0.1 Jy and 1.2±0.2 Jy at 16 and 24μm, respectively <cit.>. However, the inevitable difficulties in the analysis of this faint and complicated structure make these values rather uncertain and preclude a detailed assessment of the true ring morphology. The ring-like structure has been interpreted as due to illumination from nearby stars of a dust free cavity produced by the Giant Flare <cit.>.
In this work we show the results of a series of radiation transfer (RT) calculations performed with a 3D dust radiative transfer code DART-Ray <cit.> aimed at reproducing the emission of the infrared ring around SGR1900+14, assuming different plausible distributions for the dust illuminated by the nearby stars. In <ref> we introduce our initial assumptions, method, and the RT calculations. Results and Discussion follow in <ref> and <ref>.
§ RADIATION TRANSFER CALCULATIONS
In order to set up radiation transfer models appropriate to reproduce the observed ring emission, we first defined the distribution of stars and dust within the volume over which the RT calculations are performed (a cube of size 10 pc). In this section, we describe how this has been done given the constraints provided by the observations. Specifically, in <ref> the assumed ring distance, physical sizes and fluxes are given. In <ref> we explain how the dust distributions have been defined. In <ref> we describe how we found the appropriate viewing angles for the observer such that the projected ellipse is approximately of the same size and orientation as the dust emission ring. In <ref> we show how we derived the intrinsic 3D positions of the stars, relative to the magnetar, by using the constraint given by their projected positions on the sky. Finally, in <ref> we describe the specific RT models we considered and how the RT and dust emission calculations have been performed.
§.§ Ring distance, sizes and mid-infrared fluxes
We set up the size and geometry of the assumed 3D dust distributions by comparison with the observations (see Figure <ref>). We considered the distance measurement of <cit.>, who found d=12.5±1.4 kpc using optical spectroscopy of the nearby stars. Assuming d=12.5 kpc, the lengths of the projected ring major and minor semi-axes are 2.18 and 1.15 pc. The major axis of the ring is rotated by about 22° from the R.A. axis. We considered the integrated fluxes for the dust ring in the mid-infrared as measured by <cit.>: F(24μm)=1.2±0.2 Jy and F(16μm)=0.4±0.1 Jy.
§.§ Assumed ellipsoidal dust distributions
The 2D ring-like emission morphology seen in the Spitzer data is compatible with being the projection of a 3D ellipsoidal structure. We modelled it with a thin ellipsoidal shell, a uniform dust distribution around an ellipsoidal cavity, or a more complicated distribution resulting from a stellar wind dust density profile whose inner region has been depleted of dust. In this section, we show how we have defined dust density profiles representative of each of these cases. The three profiles we have chosen should be considered as simplified representations of the complex dust distributions determined by the physical processes plausibly giving rise to the 2D ring we observe. Given the large uncertainties on the details of these physical processes, we could only assume simple shapes which qualitatively reproduce the expected dust distributions.
The ellipsoidal surfaces, needed to define the above ellipsoidal structures, are described by the following formula for a constant normalized radius R:
R^2=x^2/a^2+y^2/b^2+z^2/c^2
where a, b and c are the lengths of the three semi-axes of the "reference ellipsoid" with R=1. For a given (x,y,z), R>1 or R<1 means that the point belongs to an ellipsoidal surface (with the same axis ratios) lying outside or inside the reference ellipsoid, respectively. In order to consider the volume within an ellipsoidal shell, that is, the volume embedded between two ellipsoidal surfaces with different normalized radii, we consider all the points (x,y,z) satisfying the following relation:
√(| R^2 -1| ) < Δ R
where Δ R represents the semi-width of the ellipsoidal shell (in normalized R units). For the elliptical shell distribution, the dust density is assumed to be constant within the volume defined by Eq.<ref> and zero outside. For the ellipsoidal cavity distribution, we assumed that the dust density is constant for R>1 and zero for R<1. Finally, for the stellar wind distribution, we assumed the following dust density radial profile:
ρ_d(R)=ρ_d(R_d)(R/R_d)^2 if R<R_d,
ρ_d(R)=ρ_d(R_d)(R/R_d)^-2 if R>R_d
with R_d=1. The wind density profile for R>1 decreases as R^-2 and thus resembles that expected in a stellar wind with elliptical symmetry. The profile for R<1 is hard to predict theoretically, since it is due to dust destruction processes with unknown parameters. For the sake of simplicity, we assumed it rises as R^2 until R=1. The three types of density profiles we defined are shown in Figure <ref>.
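For reference, the three toy profiles are simple enough to be written down explicitly. The following is a minimal Python sketch (the naming is ours, not DART-Ray code) that evaluates the dust density for each case; rho0 is the density at the reference ellipsoid and delta_R the shell semi-width of the condition above:

```python
import numpy as np

def normalized_radius(x, y, z, a, b, c):
    """Normalized ellipsoidal radius R; R = 1 on the reference ellipsoid."""
    return np.sqrt((x / a)**2 + (y / b)**2 + (z / c)**2)

def dust_density(x, y, z, a, b, c, rho0, profile="cavity", delta_R=0.10):
    """Dust density at (x, y, z) for the three toy profiles of this section.

    rho0 is the density at the reference ellipsoid (R = 1).
    """
    R = normalized_radius(x, y, z, a, b, c)
    if profile == "shell":      # constant density inside the thin shell
        return np.where(np.sqrt(np.abs(R**2 - 1.0)) < delta_R, rho0, 0.0)
    elif profile == "cavity":   # empty inside the cavity, uniform outside
        return np.where(R < 1.0, 0.0, rho0)
    elif profile == "wind":     # rises as R^2 inside, falls as R^-2 outside
        return np.where(R < 1.0, rho0 * R**2, rho0 * R**-2)
    raise ValueError(profile)
```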
§.§ Derivation of the lines-of-sight reproducing the ring apparent sizes
For all the above dust distribution profiles, the axis lengths and orientation of the ring are determined by the intrinsic lengths of the ellipsoid axis as well as by three angles: two angles (θ_ obs and ϕ_ obs) which specify the line-of-sight direction (see left panel of Fig. <ref>), and one angle that specifies the orientation of the 2D reference frame on the observer plane. By "observer plane", we mean the plane over which the ellipsoid is projected (that is, simply the plane of the data map). The ellipsoid projection is degenerate in the sense that different combinations of the ellipsoid intrinsic parameters can produce the same projected ellipse just by choosing appropriate observer viewing angles.
In order to be able to predict the shape of the projected ellipse for arbitrary combinations of the ellipsoid and observer parameters, we have used the formulae derived by <cit.>. Given an ellipsoid and the observer plane, these authors derived analytical formulae for the projected ellipse by determining all the points on the observer plane where the normal to the plane is tangent to the three-dimensional ellipsoid. By using equations 25 of GS81, we are able to predict the projected ellipse semi-axis and orientation given the ellipsoid semi-axis a, b, c and the observer line of sight angles θ_ obs and ϕ_ obs[The formulae in GS81 are written in terms of three rotation angles θ_GS, ϕ_GS and ψ_GS (see Fig. 3 in GS81) which can be connected to our definition of observer viewing angles θ_ obs and ϕ_ obs in the following way:
θ_GS=π/2 ϕ_GS=π/2-θ_ obs ψ_GS=ϕ_ obs-π/2.
By using the first relation, we force one axis of the 2D reference frame on the observer plane to be the projection of the z-axis of the 3D frame "xyz", where the ellipsoid is defined. In this way, θ_ obs and ϕ_obs are easily related to the angles ϕ_GS and ψ_GS in GS81 using the second and third relation above.].
Then, in order to handle the inverse problem of finding the combinations of the observer angles θ_ obs and ϕ_ obs that produce a projected ellipse with the same parameters as the observed dust emission ring, we wrote an optimization program which finds the right θ_ obs and ϕ_ obs for a given combination of ellipsoid semi-axes a, b, and c. Specifically, we chose to fix the values of b and c to 2.18 pc, the length of the semi-major axis of the projected ellipse. We then assumed b/a= 2 or 4. For these different combinations of ellipsoid parameters, we found the values of θ_ obs and ϕ_ obs which allow us to reproduce the shape and orientation of the projected ellipse (see <ref>). The values for θ_ obs and ϕ_ obs we derived are listed in Table <ref>.
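As an illustration of this inverse problem, the sketch below reconstructs the projected ellipse numerically, via the shape matrix of the projected ellipsoid (equivalent in content to the GS81 formulae, which are not reproduced here), and then searches for the viewing angles with a standard minimizer. All names are ours, and the position-angle convention must be matched to that of the data frame before use:

```python
import numpy as np
from scipy.optimize import minimize

def projected_ellipse(a, b, c, theta, phi):
    """Orthogonal projection of x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 along the
    line of sight n(theta, phi): returns (semi-major, semi-minor, position
    angle). Avoid theta = 0, where the plane basis below degenerates."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    u = np.array([0.0, 0.0, 1.0]) - n * n[2]   # projected z-axis (footnote convention)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    P = np.stack([u, v], axis=1)               # 3x2 basis of the observer plane
    M = P.T @ np.diag([a**2, b**2, c**2]) @ P  # shape matrix of the projected ellipse
    w, V = np.linalg.eigh(M)                   # eigenvalues in ascending order
    return np.sqrt(w[1]), np.sqrt(w[0]), np.arctan2(V[1, 1], V[0, 1])

def mismatch(angles, a, b, c, target=(2.18, 1.15, np.deg2rad(22.0))):
    maj, mnr, pa = projected_ellipse(a, b, c, *angles)
    # sin(dpa) handles the 180-degree degeneracy of the position angle
    return ((maj - target[0])**2 + (mnr - target[1])**2
            + np.sin(pa - target[2])**2)

b = c = 2.18                                   # pc, fixed as in the text
a = b / 2.0                                    # axis ratio b/a = 2
fit = minimize(mismatch, x0=np.deg2rad([60.0, 40.0]), args=(a, b, c),
               method="Nelder-Mead")
theta_obs, phi_obs = fit.x                     # radians
```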
§.§ Derivation of the intrinsic 3D positions of the stars from their sky position
In all the RT models that we calculated, we placed the magnetar at the origin of the 3D reference frame xyz, since the magnetar appears at the centre of the projected ellipse. For the two supergiant stars we had to find a way to place them such that their projected position on the observer plane coincides with their sky coordinates R.A. and Dec (see Table <ref>).
Because this is the only constraint we have, each star can in principle be located at any point on a line perpendicular to the observer plane and intersecting it at (R.A., Dec), as shown in the right panel of Fig.<ref>. We parametrize the position of a star along this line in terms of its distance d_s from the magnetar. As shown in Fig.<ref>, for each value of d_s there can be up to two possible positions for the star (just one, or even none, if d_s is too small). If two possible positions exist, one of the two is chosen as the location of the star. In Table <ref> we show the positions we derived for the stars assuming they are located at different distances d_s from the magnetar (that is, d_s = 1 and 3 pc). These distances are chosen such that the stars are either within the ellipsoidal cavity, or outside it but not too far from its border. We call these two types of star locations the "IN" and "OUT" configurations.
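The geometry of this construction reduces to a quadratic condition along the line of sight. A minimal sketch, assuming the orthonormal observer-plane basis (u, v) and line of sight n of the previous snippet, with the magnetar at the origin:

```python
import numpy as np

def star_positions(p_u, p_v, d_s, u, v, n):
    """3D positions of a star at distance d_s from the magnetar whose
    projection on the observer plane is (p_u, p_v).

    Returns 0, 1 or 2 candidate positions, as described in the text."""
    p2 = p_u**2 + p_v**2
    if d_s**2 < p2:
        return []                      # d_s too small: no solution exists
    t = np.sqrt(d_s**2 - p2)           # offset along the line of sight
    base = p_u * np.asarray(u) + p_v * np.asarray(v)
    sols = [base + t * np.asarray(n)]
    if t > 0:
        sols.append(base - t * np.asarray(n))
    return sols
```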
§.§ 3D dust radiative transfer and dust emission calculations
In order to calculate the dust heating from the stars and the resulting dust emission, we used the 3D ray-tracing dust radiative transfer code DART-Ray <cit.>. This code solves the 3D dust radiative transfer problem for arbitrary distributions of dust and stars <cit.>: it follows the propagation of the light emitted by the stars within an RT model, including absorption and multiple scattering by the dust. By calculating the variation of the light specific intensity I_λ throughout an RT model, it also derives the distribution of the energy density u_λ(x⃗) of the UV/optical/near-infrared radiation field:
u_λ(x⃗)=∫ I_λ(x⃗,n⃗)dΩ/c
From u_λ and the dust density distribution, the dust emission can then be calculated at each position. We performed the dust emission calculation taking into account the stochastic heating of small grains <cit.>. This is important to consider since small grains tend not to reach equilibrium with the interstellar radiation field and, heated by single photons carrying energies comparable to their internal energies, experience large temperature fluctuations. Thus, taking stochastic heating into account is important to correctly derive the dust emission spectra, in particular in the mid-infrared. Unlike the equilibrium case, where a grain is characterized by only one dust temperature, in the stochastically heated case a grain has a certain probability P(T) to be at a certain temperature T. At each position inside the RT model, the probability function P(T) is calculated for each grain size a and composition k following <cit.> <cit.>.
The function P(T) depends on both the local value of u_λ and the grain absorption coefficient Q^ abs_λ(a,k). Once P(T) is derived, the dust emission luminosity for a single grain ϵ_λ(a,k) can then be calculated as:
ϵ_λ(a,k)=4π^2 a^2Q^ abs_λ(a,k)∫_0^∞ P(T)B_λ(T)dT
where B_λ is the Planck function. The total dust emission at each position is derived after integration over the grain size distribution (for each chemical species).
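Schematically, once P(T) has been tabulated, the emission integral above is a one-dimensional quadrature. A minimal cgs-units sketch, where the P(T) array is an input standing in for the Voit/DART-Ray computation:

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16       # Planck, light speed, Boltzmann (cgs)

def planck(lam_cm, T):
    """Planck function B_lambda(T) in erg s^-1 cm^-2 cm^-1 sr^-1."""
    x = H * C / (lam_cm * KB * T)
    return 2 * H * C**2 / lam_cm**5 / np.expm1(x)

def grain_luminosity(lam_cm, a_cm, Q_abs, T_grid, P_grid):
    """Emission integral above: monochromatic luminosity of one stochastically
    heated grain of radius a_cm, given its temperature probability P(T)."""
    PB = P_grid * planck(lam_cm, T_grid)
    return 4 * np.pi**2 * a_cm**2 * Q_abs * np.trapz(PB, T_grid)
```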
Finally, by projecting the dust emission at each position onto the observer plane and by convolving the resulting maps to the instrument angular resolution (FWHM∼5.3 and 6 arcsec for Spitzer 16 and 24 μm), dust emission maps can be derived as they would be obtained by an external observer.
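The last step can be approximated with a Gaussian beam; a short sketch (the pixel scale value in the usage line is only illustrative):

```python
from scipy.ndimage import gaussian_filter

def observe(model_map, fwhm_arcsec, pixscale_arcsec):
    """Convolve a projected dust-emission map with a Gaussian beam that
    approximates the Spitzer PSF (FWHM = 5.3 / 6.0 arcsec at 16 / 24 um)."""
    sigma_pix = fwhm_arcsec / 2.3548 / pixscale_arcsec   # FWHM = 2 sqrt(2 ln 2) sigma
    return gaussian_filter(model_map, sigma_pix)

# e.g. map16 = observe(model16, 5.3, 1.8)   # 1.8 arcsec/pixel is hypothetical
```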
As input, DART-Ray needs only a 3D Cartesian adaptive grid where, for each element, the dust density and the stellar volume emissivity are specified. Both distributed stellar emission and stellar point sources can be included. We created input grids representing the three types of dust distributions described in <ref>: an ellipsoidal shell, an ellipsoidal cavity and an ellipsoidal wind.
As we already mentioned in <ref>, we assumed b=c=2.18 pc and b/a=2 or 4. In the case of the ellipsoidal shell dust distribution, we also assumed Δ R=0.10. We chose this value for Δ R because it produces a dust emission ring with approximately the same width as the observed ring.
To set the dust optical properties (absorption and scattering efficiency, scattering phase angle parameter, dust-to-gas mass ratio), we assumed the dust model BARE-GR-S of <cit.> as implemented in the TRUST dust RT benchmark project <cit.>. In this dust model, the dust is composed of a mix of silicates, graphite and PAH molecules <cit.>. The abundance of each component has been chosen to match observational data for the average extinction curves, chemical abundances and dust emission in the Milky Way.
To set the dust/gas density in the RT model, we assumed several choices for the optical depth per unit parsec at 1 μm which, for the dust model mentioned above, correspond to a wide range of values of the hydrogen number density n_H. Since the density can vary within an RT model, the density values we mention hereafter refer to the density at the reference ellipsoid (that is, at R = 1). Because of the long time required for a single RT/dust emission calculation (several hours), the density values have not been varied automatically to minimize the disagreement with the data, but have simply been chosen to represent a typical diffuse interstellar medium (ISM) density (n_H=10 cm^-3) as well as a much denser medium (n_H=100 and 1000 cm^-3, see Table <ref>). The ring integrated fluxes for the models with n_H=1000 cm^-3 are compatible with the observed fluxes, as will be shown in the next section.
Following the procedure described in <ref>, we positioned the two M supergiant stars at either d_s=1 or d_s=3 pc from the magnetar (the two stars are at the same distance from the magnetar but at two different 3D positions, since they are projected onto two different points on the observer plane). We assume these stars emit as blackbodies with effective temperatures and bolometric luminosities as found by <cit.> (see Table <ref>).
§ RESULTS
In this section we show the results we obtained for the various assumed dust ellipsoidal distributions (see Figure <ref>), dust densities (defined by the 1 μm optical depth per unit length or, equivalently, the hydrogen number density n_H), the stellar positions (which can be in the IN or OUT configurations) and the reference ellipsoid axis-ratio b/a.
The parameters of the calculated RT models are shown in Table <ref>. In the same table we also show the fluxes we measured for the dust emission ring appearing on the 16 and 24 μm model maps. The ring photometry has been performed within an elliptical aperture containing the ring, in analogy to the measurement of <cit.> on the Spitzer maps. The 16 μm emission model maps, corresponding to all the RT models listed in Table <ref>, are shown in Figures <ref>, <ref> and <ref>. The variation of the parameters listed above has a clear effect on the morphology of the synthetic maps and/or on the total flux.
§.§ Ellipsoidal shell models
The first two panels on the left in Figure <ref> show the maps obtained for the ellipsoidal shell RT model with n_H=10 cm^-3 and b/a=2 for the IN and OUT configurations. As one can see, only the IN configuration gives rise to a clear dust ring shape, while the OUT configuration presents an additional bright feature within the ring-like shape. Although this enhancement resembles the one seen on the Spitzer maps, its nature is very different. The enhancement on the data maps appears at the position of the supergiant stars, is a PSF-convolved point source, and is much brighter than the emission from the ring. Being much brighter than the expected blackbody emission from those stars, this emission is probably due to dust in the circumstellar material around the M supergiants. On the other hand, the bright feature observed in the OUT models is extended and shifted with respect to the position of the stars. This emission is due to dust in the elliptical shell that is closer to the stars located outside the shell and is thus heated more strongly by them.
The effect of modifying the b/a ratio, while keeping the stars inside the cavity, can be seen in the middle panel. In this case, although a dust ring is visible, the map also shows an emission enhancement of comparable brightness to the ring. This is due to the more elongated shape of the cavity, some parts of whose internal boundary are significantly more heated by the stars. In terms of emission flux, in this model the integrated ring flux is about half that of the model with b/a=2 (although the flux integrated over the entire map is higher). More importantly, the change of total flux due to the variation of b/a is too small to allow the observed fluxes to be reproduced while keeping n_H=10 cm^-3. This is because the b/a ratio can be varied, realistically, only within a rather small range of values (≈ 1-10).
On the other hand, the dust density can in principle be varied over a wide range of values and can thus give rise to a substantial change in the total flux without affecting the emission morphology. We performed calculations with dust densities corresponding to n_H=100 and 1000 cm^-3 for a model with b/a=2 and stars within the cavity. As one can see from the right panels of Figure <ref> and the ring fluxes in Table <ref>, the predicted emission has the same morphology as the n_H=10 cm^-3 model but is much more luminous. The dust emission SEDs of the ring for these two models are shown in Fig. <ref> (left panel). This plot shows that for the model with n_H=1000 cm^-3 the fluxes we obtained are within 1.5σ of those measured by <cit.>.
§.§ Ellipsoidal cavity models
We also performed RT calculations assuming a dust cavity geometry, where the dust is uniformly distributed outside the reference ellipsoid. This is expected in the case that the flare from the magnetar simply destroyed the dust within the cavity and did not significantly affect the density of the surrounding medium (so that there is no significant enhancement of the gas/dust density close to the border of the cavity). For n_H=10 cm^-3 and b/a=2, we calculated the maps for the IN and OUT geometry. The corresponding 16 μm maps are shown in Figure <ref>. In both cases the emission is much more extended compared to that of the shell models (which do not contain any dust apart from that within the shell). However, the ring emission morphology is clearly recovered only in the case where the stars are inside the cavity. If the stars are outside, they illuminate the dust close to them very strongly. The emission from this dust dominates the emission seen on the maps, while the ring-like emission is barely noticeable, with a brightness close to that of the background emission. We point out that the bright dust emission seen in this case is much more extended than the point-source emission seen on the data map at the position of the stars. We also calculated higher density models (n_H=100-1000 cm^-3) for the IN configuration. The maps shown in Fig.<ref> are characterized by a higher surface brightness, compared to the calculation with n_H=10 cm^-3, but a similar emission morphology.
The ring fluxes for the ellipsoidal cavity model in the IN configuration are listed in Table <ref>. These can be compared with the ring fluxes for the corresponding shell models with the same parameters. As one can see, there are some differences between the shell and cavity models with the same parameters, but the fluxes are similar. We also show the ring dust emission SEDs for the highest density models in Fig. <ref> (middle panel). The cavity model with n_H=1000 cm^-3 fits the observed integrated dust emission within the data error bars. This shows that a thin dust shell and a dust cavity RT model give similar results for the ring integrated fluxes and morphology when the stars are located inside the cavity. Overall, the results we obtained from the dust cavity models further suggest that the presence of the stars (or other radiation sources) within the cavity is a necessary requirement to recover the observed dust emission morphology.
§.§ Ellipsoidal dusty wind models
The last geometry we explored is that expected in the case that the dust around the magnetar is distributed as in a stellar wind (with elliptical symmetry) and has been only partially destroyed within the cavity region. The assumed density profile rises until the border of the cavity and then decreases as R^-2, as shown in Sect.<ref>. Given the previous results for the shell and cavity models, we ran RT calculations only with n_H=1000 cm^-3 at R=1 for this model. The results are shown in Fig. <ref> for both the IN and OUT configurations of the stars. When the stars are located inside, the morphology of the ring is recovered, although not as clearly as we found before for some of the shell and cavity models. There is substantial diffuse emission coming from the regions enclosed by the ring. Interestingly, a quite narrow and bright dust emission feature appears at the position of the stars. This feature resembles the bright dust emission source seen on the data at the star positions. A sharper profile, rising more rapidly close to R=1, would certainly reduce the brightness of this diffuse emission, as observed for the two previous configurations. However, given the large uncertainties on dust grain sizes and the properties of the destroying flare, we adopt this dust density shape as a toy model to test the wind scenario.
We note that for the wind model too we do not recover the ring morphology when the stars are located outside the cavity region. The dust emission SED of the ring predicted when the stars are located inside the cavity region is shown in Fig.<ref> (right panel). The model is able to fit the total fluxes at 16 and 24 μm (see also the integrated fluxes in Table <ref>). This result confirms that a density of n_H∼1000 cm^-3 at the border of the cavity region is necessary to fit the data, independently of the assumed dust distribution.
§.§ Comparison of the average surface brightness profiles for the high density models
In order to gain more quantitative insight into the similarities between the ring-like emission on the data and on the model maps, we compared average surface brightness profiles derived from the maps. We limited this comparison to the models with the highest density (that is, n_H=1000 cm^-3), which are the only ones able to reproduce the total ring emission flux (see above). We derived the average surface brightness profiles in the following way. Firstly, we masked the emission from the brightest stars on the data maps and the corresponding regions on the model maps. Then, from the background-subtracted maps, we derived average profiles by averaging the values of the non-masked pixels within elliptical rings with the same centre, axis-ratio and orientation as the infrared ring on the data. Finally, we normalized all profiles to their maximum value. The surface brightness profiles so derived at 16 and 24 μm are shown in Figure <ref> for the data and for the high density RT models for all the assumed dust density distributions. The radial distance R_ map on the x-axis corresponds to the length of the semi-major axis of the elliptical rings used to derive the profiles.
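A sketch of this profile extraction, with our own naming conventions (axis_ratio is the minor-to-major ratio of the ring and pa_rad its orientation; r_edges are the semi-major-axis bin edges in pixels):

```python
import numpy as np

def elliptical_profile(img, x0, y0, axis_ratio, pa_rad, r_edges, mask=None):
    """Average surface brightness in elliptical annuli with fixed centre,
    axis ratio and orientation (those of the observed ring)."""
    yy, xx = np.indices(img.shape)
    dx, dy = xx - x0, yy - y0
    # Rotate into the ellipse frame.
    xr = dx * np.cos(pa_rad) + dy * np.sin(pa_rad)
    yr = -dx * np.sin(pa_rad) + dy * np.cos(pa_rad)
    r_ell = np.sqrt(xr**2 + (yr / axis_ratio)**2)   # elliptical radius
    good = np.ones_like(img, bool) if mask is None else ~mask
    prof = np.array([img[(r_ell >= lo) & (r_ell < hi) & good].mean()
                     for lo, hi in zip(r_edges[:-1], r_edges[1:])])
    return prof / prof.max()                        # normalized to the peak
```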
From both the 16 and 24 μm profiles several interesting features are evident. For radii larger than ∼1.2 pc, the profile for the shell dust distribution ends too sharply and is thus unable to reproduce the tail of the emission observed on the data, which is much more extended. On the other hand, the tails of the profiles for the cavity and wind dust distributions decrease in a smoother way that is much closer to that observed on the data (see also the average discrepancies in Table <ref>). This finding implies that a thin dust shell is not able to reproduce the ring emission profile: the presence of a large amount of dust beyond the outer edge of the cavity is required. Furthermore, the comparison for the inner part of the emission profiles (R≤1.2 pc) shows that the wind dust distribution gives rise to excess emission inside the ring, which is incompatible with the data. A sharp rise of the dust distribution at small radii provides much better agreement with the data (see Table <ref>). The preferred dust distribution is thus an almost dust-free cavity with a sharp transition to a dust-rich environment. Beyond the cavity, the dust and/or interstellar medium densities decrease with distance more slowly than the R^-2 wind model.
§ DISCUSSION
We have performed several dust RT calculations assuming elliptical
dust shell/cavity geometries as well as a disrupted wind profile, and
by positioning the two supergiant stars inside or outside the dust
cavity. We have found that the dust ring morphology, similar to that
found on the Spitzer data of SGR1900+14, is recovered only in the
cases where the stars are inside the cavity. Furthermore, we
approximately reproduce the total integrated fluxes at 16 and
24 μm only by assuming a gas density of
n_H∼1000 cm^-3 for all the dust geometries we assumed.
The corresponding mass of the dust responsible for the ring emission
is M_ dust∼2 M_⊙.
Given these results, the first question to ask is whether the
models that reproduce the observed dust emission morphology and total
flux are realistic. In particular the gas density, implied by
our modelling to explain the Spitzer infrared luminosity by dust
illumination, appears to be very high compared to that of the diffuse
galactic ISM. Before discussing the possible nature of this high
density, we first clarify what assumptions/parameters in our modelling
might have caused an artificially high gas density, not representative
of the real ISM density around the magnetar. Firstly, we point out
that the gas density is not measured directly from the gas emission
but inferred from the dust density divided by a dust-to-gas ratio of
0.00619 (which is characteristic of the assumed Milky Way dust model,
see section <ref>). However, this dust-to-gas ratio,
representative of the kpc scale ISM of the nearby Milky Way regions,
presents significant local variations in the ISM <cit.>. Furthermore, since the assumed size distribution
of the grains is also representative of the local Milky Way, this also
has an effect on the derived dust density. In fact, the MIR emission
in our modelling is mainly produced by small grains (sizes
a∼10^-3-10^-2 μm) which are stochastically heated. If the
grain size distribution is more skewed towards smaller grain sizes,
compared to the one we are assuming, this would require significantly
less dust mass to reproduce the observed MIR emission. The grain size
distribution is known to be affected by both dust destruction and
formation processes, but it is not possible to constrain it further
with our observations. On the other hand, we also note that if the
cavity has been created by dust destruction, the grain size
distribution there should instead favour the presence of large dust
grains rather than small ones <cit.>. In
fact, a number of studies <cit.> have
shown that the X-ray flux is more effective at destroying small
grains than larger ones. The precise evolution of the dust grain
distribution is dependent on both the spectral shape and overall
intensity of the illuminating source, on the composition of the
grains, as well as on the relative importance of the processes of
X-ray Heating, Coulomb Explosion and Ion Field Emission, the last
two of which are particularly uncertain <cit.>. However, even within these uncertainties, all the models
generally predict that smaller grains will be destroyed to larger
distances than larger ones[If Coulomb Explosion dominates,
then the distance to which grains of size a are destroyed by a
source with energy spectral index α scales as
a^-0.5-α/3 if grain charging is limited by internal
energy losses, or as a^-1-α/2 if limited by the
electrostatic potential.]. Therefore, there would be a region in which only selective destruction took place, leaving behind a dust distribution skewed towards large grains: most skewed at the inner edge, and progressively changing into the undisturbed (pre-burst) distribution at larger distances.
An attempt at modeling these
effects would be worthwhile if the quality of the data were to allow
a comparison with observations, but this is not possible with the
current data.
Finally, in our modelling we only assumed the two supergiant stars,
the most luminous stars in the field, to be heating the dust
shell/cavity. However, other sources of radiation might well play a
role (e.g. other fainter stars within the cavity) and, in this case,
the needed gas density to match the observed fluxes would be lower. On
the other hand, note that the constant ∼10^34 erg/s X-ray
luminosity emitted by the magnetar is too low to power the dust
emission. In fact, the wavelength–integrated dust emission luminosity
for the models that fit the MIR fluxes is in the range 3.7-4 ×
10^35 erg/s. An additional mechanism that could heat the dust is
collisional heating in a hot plasma, in which the grains are heated by
collisions with high energy electrons. This is expected if the dust is
embedded in shocked gas with temperatures of order of 10^6
K. However, in this case we would also expect to see diffuse X-ray
emission from the hot gas around the magnetar, which is not
observed.
Vrba et al. (2000) argued that SGR 1900+14 is associated with a cluster
of young stars (much fainter in apparent magnitude than the two M
supergiants we considered in our modelling) which are probably
embedded in a dense medium. This interpretation is qualitatively
consistent with our results. As proposed by <cit.>, the
1998 Giant Flare could have produced the cavity by destroying the dust
within it. Assuming a constant dust density within the cavity region,
corresponding to n_H=1000 cm^-3, we estimated a total dust mass
of order 3 M_⊙ that was plausibly present before being
destroyed by the flare. An energy of about E∼ 6×10^45 erg
would suffice to destroy this amount of dust, consistent with the
estimates by <cit.> based on Eq. 25 in
<cit.>. The size of the region with destroyed dust
would be larger for smaller grains, as discussed above.
In this scenario, the high density we
derived would simply reflect the high density of the ISM around the
magnetar. Furthermore, we note that high ISM densities
(n_H=10^5–10^7 cm^-3) have been found in the environments
surrounding GRBs <cit.>, which should be similar to those
where magnetars are located.
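As a consistency check of the numbers quoted above, the ∼3 M_⊙ pre-flare dust mass follows directly from the adopted density, the BARE-GR-S dust-to-gas ratio and the cavity volume. A short sketch (the cavity semi-axes 1.09 and 2.18 pc are assumed here, and only the hydrogen mass is counted):

```python
import numpy as np

PC, MSUN, MPROT = 3.086e18, 1.989e33, 1.673e-24   # cm, g, g (cgs)
a, b, c = 1.09, 2.18, 2.18                        # cavity semi-axes [pc]
n_H, dust_to_gas = 1.0e3, 0.00619                 # cm^-3; BARE-GR-S ratio

V = 4.0 / 3.0 * np.pi * a * b * c * PC**3         # cavity volume [cm^3]
M_gas = n_H * MPROT * V                           # hydrogen mass [g]
M_dust = dust_to_gas * M_gas / MSUN
print(f"pre-flare dust mass in the cavity: {M_dust:.1f} Msun")  # ~3.3 Msun
```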
The wind model we considered was meant to resemble the scenario in which the dust distribution outside the cavity was mainly determined by the wind of the magnetar progenitor, while being internally disrupted by the Giant Flare. However, gas densities at 1 pc distance in a typical stellar wind are expected to be several orders of magnitude lower than those we found (n_H∼1 cm^-3, estimated from the mass-loss rates Ṁ of <cit.> assuming n_H ∝Ṁ/(v r^2) with v = 1000 km/s). Thus, this last scenario is unlikely if the ring density is indeed so high.
Another possibility is that the dust emission ring is the infrared emission from the supernova remnant (SNR) of the magnetar progenitor. The dust mass associated with the shell model with n_H=1000 cm^-3 is M_dust=1.9 M_⊙, which is a factor 3–4 higher than the measurement of the dust mass around SN 1987A by <cit.>. However, given the large uncertainties in the inferred dust masses, and since our value for the dust density might have been overestimated for the reasons given above, it may well be that the amount of dust needed for the shell model is compatible with that of SNRs. The SNR scenario was also considered by <cit.> but discarded because of the lack of observed radio and X-ray emission from the ring. However, if we consider i) the IR/X luminosity ratio of ∼10^-1-10^2 measured by <cit.> for many SNRs, ii) the total IR luminosity of the ring in our models (∼4×10^35 erg/s), and iii) the X-ray detection limit for the ring <cit.>, this structure is still compatible with being a SNR with a high IR/X ratio.
On the other hand, we can also compare the 24 μm luminosity with the expected X-ray luminosity, according to Figure 12 of <cit.>, which studied a sample of SNRs in the Large Magellanic Cloud. If the magnetar were located at the distance of the LMC (50 kpc), we would have ν F_ν(24μ m)= 1.5×10^-10 erg/s/cm^2, and the corresponding expected X-ray flux would be 2×10^-10 erg/s/cm^2. This translates into an intrinsic X-ray luminosity of ∼6×10^37 erg/s, which should have been clearly detected in the case of SGR 1900+14. Hence, given the information we have at hand, we cannot discard the SNR scenario on the basis of the observed IR/X-ray luminosity ratio, although it would be a rather peculiar remnant compared with what we see around other Galactic pulsars or magnetars <cit.>.
However, if we also consider the shape of the normalized average surface brightness profiles, shown in Fig.<ref>, we find strong evidence that 1) there is very little dust inside the cavity and 2) the emitting dust is much more extended than a simple thin shell. These findings are compatible with the scenario in which the cavity was produced by the Giant Flare within a high density medium. However, the SNR scenario would still be acceptable if the transition in density between the shell and the surrounding ISM were smoother than what we assumed in our modelling.
Regardless of the origin or the exact distribution of the illuminated dust, or the exact nature of the dust free cavity, our models show that we are able to observe this illuminated dust structure only because of two favourable characteristics: 1) the high dust density in the local region, and 2) the fact that the illuminating stars coincidentally lie inside the shell. Similar dust structures might potentially be present around many other magnetars or pulsars, but they would be invisible to us because of the lack of either one of the two above local properties of this particular object.
GN acknowledges support by the EU COST Action MP1304, for the Short Term Scientific Mission where this project was completed, and by the Leverhulme Trust research project grant RPG-2013-41. N.R. acknowledges funding in the framework of the NWO Vidi award A.2320.0076, and via the European COST Action
MP1304 (NewCOMPSTAR). N.R. and D.F.T. are supported by grants AYA2015-71042-P and SGR2014-1073. RP acknowledges support from the NSF under grant AST-1616157. JMG is supported by grant AYA2014-57369-C3-1-P.
[Camps et al. (2015)]Camps15
Camps, P. et al., 2015, A&A, 580, 87
[Davies et al. (2009)]Davies09
Davies B. et al. 2009, ApJ, 707, 844
[Draine (2003)]Draine03
Draine, B. T. 2003, ARA&A, 41, 241
[Duncan & Thompson (1992)]Duncan92
Duncan, R. C. & Thompson, C. 1992, ApJ, 392, 9
[Eikenberry et al. (2001)]Eikenberry01
Eikenberry, S. S. et al. 2001, ApJ, 563, 133
[Fruchter et al. (2001)]Fruchter01
Fruchter, A. et al. 2001, ApJ, 563, 597
[Gaensler et al. (2001)]Gaensler01
Gaensler, B. M. et al. 2001, ApJ, 559, 963
[Gendzwill & Stauffer (1981)]Gendzwill81
Gendzwill, D. J. & Stauffer, M. R. 1981, Journal of the International Association for Mathematical Geology, 13, 135
[Green (1984)]Green84
Green D. A., 1984, MNRAS, 209, 449
[Hurley et al. (1999)]Hurley99
Hurley K., et al., 1999, Nature, 397, 41
[Hurley et al. (2005)]Hurley05
Hurley, K. et al. 2005, Nature, 434, 1098
[Israel et al. (2008)]Israel08
Israel, G. L. et al. 2008, ApJ, 685, 1114
[Kaplan et al. (2003)]Kaplan03
Kaplan D. L. et al., 2003, ApJ, 590, 1008
[Koo et al. (2016)]Koo16
Koo, B. et al. 2016, ApJ, 821, 20
[Kudritzki (2002)]Kudritzki02
Kudritzki, Rolf P. 2002, ApJ, 577, 389
[Lazzati & Perna (2002)]Lazzati02
Lazzati, D., Perna, R., 2002, MNRAS.330, 383
[Martin et al. (2014)]Martin14
Martin J., Rea N., Torres D. F., Papitto A., 2014, MNRAS, 444, 2910
[Matsuura et al. (2011)]Matsuura11
Matsuura, M. et al. 2011, Sci, 333, 1258
[Mazets et al. (1979)]Mazets79
Mazets E. P., Golentskii S. V., Ilinskii V. N., Aptekar R. L., Guryan I. A., 1979, Natur, 282, 587
[Muno et al. (2006)]Muno06
Muno M. P., Law C., Clark J. S., Dougherty S. M., de Grijs R., Portegies Zwart S., Yusef-Zadeh F., 2006, ApJ, 650, 203
[Natale et al. (2014)]Natale14
Natale G. et al., 2014, MNRAS, 438, 3137
[Natale et al. (2015)]Natale15
Natale G. et al., 2015, MNRAS, 449, 243
[Olausen & Kaspi (2014)]Olausen14
Olausen, S. A. & Kaspi, V. M. 2014, ApJS, 212, 60
[Perna & Lazzati (2002)]Perna02
Perna, R. & Lazzati, D. 2002, ApJ, 580, 261
[Perna, Lazzati & Fiore (2003)]Perna03
Perna, R., Lazzati, D. & Fiore, F. 2003, ApJ, 585, 775
[Perna & Pons (2011)]Perna11
Perna, R. & Pons, J. A., 2011, ApJ, 727L, 51
[Rea & Esposito (2011)]Rea11
Rea, N. & Esposito, P. 2011, ASSP, 21, 247
[Reach et al. (2015)]Reach15
Reach, W. T. et al. 2015, ApJ, 811, 118
[Seok et al. (2013)]Seok13
Seok, J. Y. et al. 2013, ApJ, 779, 134
[Steinacker et al. (2013)]Steinacker13
Steinacker, J. et al. 2013, ARA&A, 51, 63
[Thompson & Duncan (1993)]Thompson93
Thompson, C. & Duncan, R. C. 1993, ApJ, 408, 194
[Thompson & Duncan (1995)]Thompson95
Thompson, C. & Duncan, R. C. 1995, MNRAS, 175, 255
[Turolla et al. (2015)]Turolla15
Turolla, R. et al. 2015, RPPh, 78, 6901
[Voit (1991)]Voit91
Voit, G. M. 1991, ApJ, 379, 122
[Vrba et al. (1996)]Vrba96
Vrba, F. J. et al. 1996, ApJ, 468, 225
[Vrba et al. (2000)]Vrba00
Vrba, F. J. et al. 2000, ApJ, 533, 17
[Wachter et al. (2008)]Wachter08
Wachter et al. 2008, Nature, 453, 626
[Waxman & Draine (2000)]Waxman00
Waxman, E. & Draine, B. T. 2000, ApJ, 537, 796
[Zubko et al. (2004)]Zubko04
Zubko et al., 2004, ApJS, 152, 211
|
http://arxiv.org/abs/1701.08019v3 | 20170127113353 | Perturbative Power Counting, Lowest-Index Operators and Their Renormalization in Standard Model Effective Field Theory | [
"Yi Liao",
"Xiao-Dong Ma"
] | hep-ph | [
"hep-ph"
] |
Perturbative power counting, lowest-index operators and their renormalization in standard model effective field theory
Yi Liao ^a,b,c[[email protected]] and Xiao-Dong Ma ^a[[email protected]]
^a School of Physics, Nankai University, Tianjin 300071, China
^b CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,
Chinese Academy of Sciences, Beijing 100190, China
^c Synergetic Innovation Center for Quantum Effects and Applications,
Hunan Normal University, Changsha, Hunan 410081, China
Abstract
We study two aspects of higher dimensional operators in standard model effective field theory. We first introduce a perturbative power counting rule for the entries in the anomalous dimension matrix of operators with equal mass dimension. The power counting is determined by the number of loops and the difference of the indices of the two operators involved, which in turn is defined by assuming that all terms in the standard model Lagrangian have an equal perturbative power. Then we show that the operators with the lowest index are unique at each mass dimension d, i.e., (H^† H)^d/2 for even d≥ 4, and (L^Tϵ H)C(L^Tϵ H)^T(H^† H)^(d-5)/2 for odd d≥ 5. Here H, L are the Higgs and lepton doublet, and ϵ, C the antisymmetric matrix of rank two and the charge conjugation matrix, respectively. The renormalization group running of these operators can be studied separately from other operators of equal mass dimension at the leading order in power counting. We compute their anomalous dimensions at one loop for general d and find that they are enhanced quadratically in d due to combinatorics. We also make connections with classification of operators in terms of their holomorphic and anti-holomorphic weights.
We study in this short paper two general aspects of standard model effective field theory (SMEFT). One is a power counting rule in perturbation theory for the anomalous dimension matrix of higher dimensional operators with equal mass (canonical) dimension that is induced by standard model (SM) interactions. We show that the leading power of each entry in the anomalous dimension matrix is determined in terms of the loop order and the difference of the indices of the two operators involved. The other concerns the lowest-index operators. We find that they are unique at each dimension and can be renormalized independently of other operators of equal dimension at the leading order in SM interactions. We compute their one-loop anomalous dimensions, and find that they increase quadratically with their dimension due to combinatorics.
Regarding the SM as an effective field theory below a certain high scale, the low energy effects of high scale physics can be parameterized in terms of higher dimensional operators:
ℒ_SMEFT=ℒ_4+ℒ_5+ℒ_6+ℒ_7+⋯.
Here the leading terms are the SM Lagrangian
ℒ_4 = -1/4∑_X X_μνX^μν + (D_μ H)^†(D^μ H) - λ(H^† H - 1/2v^2)^2 + ∑_ΨΨ̅ iD̸Ψ - [Q̅Y_u u H̃ + Q̅Y_d d H + L̅Y_e e H + h.c.],
where X sums over the three gauge field strengths of couplings g_1,2,3, and Ψ extends over the lepton and quark left-handed doublets L, Q and right-handed singlets e, u, d. The Higgs field H develops the vacuum expectation value v/√(2), and H̃_i=ϵ_ijH^*_j. D_μ is the usual gauge covariant derivative, and Y_u,d,e are Yukawa coupling matrices.
The higher dimensional operators, collected in ℒ_5,6,7 and the ellipses in Eq. (<ref>), are composed of the above SM fields, and respect the SM gauge symmetries but not necessarily accidental symmetries like lepton or baryon number conservation. They are generated from high scale physics by integrating out heavy degrees of freedom, with their Wilson coefficients naturally suppressed by powers of a certain high scale. It is thus consistent to leave aside those Wilson coefficients when we do power counting for their renormalization running effects due to SM interactions. The higher dimensional operators start at dimension five (dim-5), where the operator turns out to be unique <cit.>. The complete and independent lists of dim-6 and dim-7 operators have been constructed in Refs. <cit.> and <cit.> respectively. The number of operators increases very rapidly with their dimension; for discussions on dim-8 operators and beyond, see the recent papers <cit.>.
If the SM is augmented by light sterile neutrinos, there will be additional operators at each dimension; see Refs. <cit.> for discussions of operators up to dim-7 that involve sterile neutrinos.
Now we consider power counting in the anomalous dimension matrix γ of higher dimensional operators that is induced by SM interactions. We restrict ourselves in this work to the mixing of operators with equal mass dimension, because this is the leading renormalization effect due to SM interactions that is not suppressed by a high scale. Since the power counting is additive, it is natural to assign an index of power counting χ[𝒪] to the operator 𝒪 which in turn is a sum of the indices for the elements involved in 𝒪. For the purpose of power counting, we denote g as a generic coupling in SM. Suppose an effective interaction C_i𝒪_i in ℒ_SMEFT is dressed by SM interactions at n loops to induce an effective interaction, Δ_ji𝒪_j (no sum over j), involving the operator 𝒪_j of equal dimension. The SM n-loop factor of g^2n is shared by the difference of the indices of the two operators χ[𝒪_j]-χ[𝒪_i] and the induced ultraviolet divergent coefficient Δ_ji. As Δ_ji𝒪_j contributes a counterterm to the effective interaction C_j𝒪_j from which γ_ji is determined for the running of C_j, we obtain the power counting for the entry γ_ji in the anomalous dimension matrix
χ[γ_ji] = 2n + χ[𝒪_i] - χ[𝒪_j].
The issue now becomes defining an index for operators up to a constant, χ[𝒪], which could be understood as an intrinsic power counting of SM couplings for the operator 𝒪.
Since we are concerned with overall power counting in SM interactions, it is plausible to treat all terms in ℒ_4 on the same footing by assuming an equal index of perturbative power counting once the kinetic terms have been canonically normalized.
A similar assumption was made previously in chiral perturbation theory involving chiral fermions coupled to electromagnetism <cit.>. Denoting generically
χ[H]=x, χ[λ]=2y,
so that χ[ℒ_4]=4x+2y, it is straightforward to determine the indices of the other components in ℒ_4:
χ[Ψ]=3/2x+1/2y, χ[X_μν]=2x+y,
χ[D_μ]=x+y, χ[g_1,2,3]=χ[Y]=y.
It is evident that the x term actually counts canonical dimension and the y counts the power of g. Since we are concerned with renormalization mixing of operators with equal dimension, the power counting for their anomalous dimension matrix depends only on the y term according to Eq. (<ref>). Although our χ[γ_ij] does not depend on x, we find it most convenient to work with x=0 and y=1, so that the nonvanishing indices for power counting are
χ[Ψ]=1/2, χ[X_μν]=1, χ[D_μ]=1, χ[g_1,2,3]=χ[Y]=1, χ[λ]=2.
The lowest index that an operator could have is zero in this convention. Using a different x amounts to shifting the indices of all fields and derivatives by a multiplier of their mass dimensions without changing χ[γ_ij], and choosing y=1 simply fits the usual convention that all gauge and Yukawa couplings count as g^1 while the scalar self-coupling λ counts as a quartic gauge coupling g^2.
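The bookkeeping implied by the two relations above is elementary and can be automated. Below is a minimal Python sketch, with our own (hypothetical) encoding of the building blocks, that evaluates χ[𝒪] and the leading power of g in γ_ji:

```python
from collections import Counter

# Indices chi of the SMEFT building blocks in the convention x = 0, y = 1.
CHI = {"H": 0.0, "Hdag": 0.0, "Psi": 0.5, "X": 1.0, "D": 1.0}

def chi_operator(fields):
    """chi[O]: the sum of the indices of the operator's components."""
    return sum(CHI[f] * n for f, n in Counter(fields).items())

def chi_gamma(op_j, op_i, loops=1):
    """Leading power of the generic SM coupling g in gamma_ji."""
    return 2 * loops + chi_operator(op_i) - chi_operator(op_j)

# Example with two dim-6 operators: (H^dag H)^3 and (H^dag H)^2 D^2.
O_H6 = ["H"] * 3 + ["Hdag"] * 3                   # chi = 0
O_H4D2 = ["H"] * 2 + ["Hdag"] * 2 + ["D"] * 2     # chi = 2
print(chi_gamma(O_H6, O_H4D2))                    # 4.0: gamma ~ g^4 at one loop
```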
We can now associate an index of power counting χ[𝒪] to a higher dimensional operator by simply adding up the indices of its components according to Eq. (<ref>). The entry γ_ji in the anomalous dimension matrix for a set of operators 𝒪_k due to SM interactions at n loops has the index of power counting shown in Eq. (<ref>) in terms of a generic coupling g, which denotes g_1,2,3, Y_e,u,d, and √(λ). Our results for dim-6 and dim-7 operators are shown in Table <ref> and Table <ref> respectively. The one-loop γ matrix for dim-6 operators has been computed in a series of papers <cit.>, and is consistent with the power counting in Table <ref>. The γ submatrix for baryon number violating dim-7 operators became available recently <cit.>, and also matches the power counting in Table <ref>. Note that some entries in the tables may actually vanish due to the structure of one-loop Feynman diagrams or nonrenormalization theorems <cit.>. Since at least one vertex of SM interactions is involved in one-loop diagrams, γ counts as g^1 or higher. This explains the presence of zeros in the last two columns of the tables. The power counting in the explicit result of the one-loop γ matrix for dim-6 operators has also been explained in Ref. <cit.> using the arguments of naive dimensional analysis developed for strong dynamics <cit.>, which rescale operators forth and back by factors of couplings and powers of 4π. Our analysis above is more straightforward and assumes only the uniform application of SM perturbation theory.
With the above definition of the index of power counting for an operator, we make an interesting observation that the operator with the lowest index is unique at each mass dimension. To show this, we notice that out of the building blocks (H, Ψ, D_μ, X_μν) for higher dimensional operators only H has a vanishing index. This means that it should appear as many times as possible in the lowest-index operators for a given mass dimension d. For d even, this is easy to figure out, i.e.,
𝒪^d_H = (H^† H)^d/2.
These operators represent a correction to the SM scalar potential from high scale physics, and could impact the vacuum properties. For d odd, additional building blocks must be introduced. In the absence of fermions, X_μν and D_μ have to appear at least twice due to Lorentz invariance, which costs no less than two units of index; in addition, this cannot yield an operator of odd dimension. The cheapest possible way is to introduce two fermion fields in a scalar bilinear form on top of the Higgs fields, resulting in an operator of index unity. It turns out that gauge symmetries require the fermions to be leptons. Sorting out the quantum numbers of the lepton fields [The bilinear form (L̅e) must couple to an odd total number of H^† and H, thus resulting in an even dim-d operator. The bilinear (ee) requires four more powers of H than H^† to balance hypercharge, which then cannot be made weak isospin invariant. This leaves the only possibility as shown.], we arrive at the unique operator at odd dimension d,
𝒪^d pr_LH = [(L^T_pϵ H)C (L^T_rϵ H)^T](H^† H)^(d-5)/2,
where p, r are lepton flavor indices. This is the generalized dim-d Weinberg operator for Majorana neutrino mass whose uniqueness was established previously in Ref. <cit.> using Young tableau.
The lowest-index operators are of interest because their renormalization running under SM interactions is governed at the leading order by their own anomalous dimensions; i.e., they are only renormalized at the next-to-leading order by higher-index operators of the same canonical dimension. This is evident from Eq. (<ref>) and the last row in Tables <ref> and <ref>. The uniqueness of the lowest-index operators at each dimension further simplifies the consideration of their renormalization running, which will be taken up in the remaining part of this work. Before that, we make a connection to the classification of operators in terms of their holomorphic and anti-holomorphic weights ω, ω̅ <cit.>. The weights are defined as ω(𝒪)=n(𝒪)-h(𝒪), ω̅(𝒪)=n(𝒪)+h(𝒪)
for an operator 𝒪, where n(𝒪) is the minimal number of particles for on-shell amplitudes that the operator can generate and h(𝒪) the total helicity of the operator. The claim is that our lowest-index operators 𝒪_H^d, 𝒪_LH^d are also the ones with the largest weights, i.e., both their ω and ω̅ are the largest among operators of a given canonical dimension. To show this, we introduce some notation. We denote by Ψ the left-handed fermion fields, i.e., L, Q, e^C, u^C, d^C, and by Ψ̅ the right-handed ones, and X^μν_±=X^μν∓(i/2)ϵ^μνρσX_ρσ. The pairs of weights have the values (ω,ω̅)=(1,1), (1,1), (3/2,1/2), (1/2,3/2), (0,0), (0,2), (2,0) for the building blocks of operators H, H^†, Ψ, Ψ̅, D, X_-, X_+, respectively. The weights (ω(𝒪^d),ω̅(𝒪^d)) of an operator 𝒪^d of dimension d are the sums of the corresponding weights of its components:
ω(𝒪^d) = n_H + n_H^† + 1/2(3n_Ψ + n_Ψ̅) + 2n_X_+ = d - (n_Ψ̅ + n_D + 2n_X_-) ≤ d,
ω̅(𝒪^d) = n_H + n_H^† + 1/2(n_Ψ + 3n_Ψ̅) + 2n_X_- = d - (n_Ψ + n_D + 2n_X_+) ≤ d,
where n_B denotes the power of the component B appearing in 𝒪^d. The largest ω and ω̅ that an operator can have are thus equal to its canonical dimension. For d even, this is easy to realize by setting n_X_±=n_D=n_Ψ=n_Ψ̅=0, i.e., the operator with the highest weights is the lowest-index operator 𝒪_H^d made up purely of the Higgs field. For d odd, it is known that all operators in SMEFT necessarily involve fermion fields <cit.>, with the minimal choice being n_Ψ+n_Ψ̅=2. This can be arranged by choosing n_Ψ=2, n_X_±=n_D=n_Ψ̅=0, resulting in the operator 𝒪_LH^d of the highest weights (d,d-2), or by choosing instead n_Ψ̅=2, resulting in its Hermitian conjugate 𝒪_LH^d†. The alternative choice n_Ψ=n_Ψ̅=1 would require a factor of D due to Lorentz symmetry, which reduces ω (or ω̅) by two units compared with 𝒪_LH^d (or 𝒪_LH^d†). This establishes the claim. As a side remark, the above equations together with Lorentz symmetry also imply that the operators at even (odd) dimension have even (odd) holomorphic and anti-holomorphic weights.
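The same kind of bookkeeping verifies the weight assignments. For instance, for the d=7 lowest-index operator one recovers (ω,ω̅)=(7,5)=(d,d-2); a minimal sketch with our own encoding of the building blocks:

```python
# (omega, omega_bar) of the building blocks, as listed in the text.
W = {"H": (1.0, 1.0), "Hdag": (1.0, 1.0), "Psi": (1.5, 0.5),
     "Psibar": (0.5, 1.5), "D": (0.0, 0.0), "X-": (0.0, 2.0), "X+": (2.0, 0.0)}

def weights(fields):
    """(omega, omega_bar) of an operator: sums of component weights."""
    return (sum(W[f][0] for f in fields), sum(W[f][1] for f in fields))

# d = 7: O_LH^7 = (L eps H) C (L eps H)^T (H^dag H) contains 2 L, 3 H, 1 H^dag.
O_LH7 = ["Psi"] * 2 + ["H"] * 3 + ["Hdag"]
print(weights(O_LH7))     # (7.0, 5.0) = (d, d-2), the largest weights at d = 7
```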
Now we compute the anomalous dimensions at leading order for the lowest-index operators 𝒪_H^d at even dim-d and 𝒪_LH^d pr at odd dim-d in Eqs. (<ref>,<ref>). The Feynman diagrams shown in Figs. <ref> and <ref> are for 𝒪_H^6 and 𝒪_LH^7 pr respectively. At higher dimensions one has to be careful with combinatorics due to the powers of H^† H involved in the operators. We perform the calculation in dimensional regularization and the minimal subtraction scheme, and in the general R_ξ gauge. The cancellation of the ξ parameters in the final answer then serves as a useful check. The renormalization group equations for the Wilson coefficients of the above two operators are, at leading order in perturbation theory,
16π^2μd/dμC^d_H = [3d^2λ -3/4dg_1^2 -9/4dg_2^2 +dW_H ]C^d_H,
16π^2μd/dμC^d pr_LH = [(3d^2-18d+19)λ -3/4(d-5)g_1^2 -3/4(3d-11)g_2^2 +(d-3)W_H]C^d pr_LH
-3/2[(Y_eY^†_e)_vpC^d vr_LH+(Y_eY^†_e)_vrC^d pv_LH],
where W_H=Tr[3(Y^†_uY_u)+3(Y^†_dY_d)+(Y^†_eY_e)] comes from the field strength renormalization of H.
We make some final comments on the above result. The terms in the anomalous dimensions due to the Higgs self-coupling λ increase quadratically with the canonical dimension d due to combinatorics, making renormalization running effects more and more important for higher dimensional operators. The Yukawa terms in Eq. (<ref>) are independent of d because the lepton field L cannot connect to (H^† H)^(d-5)/2 to yield a nonvanishing contribution, due to weak isospin symmetry. The large numerical factor in the λ term for C_H^6 was observed previously in <cit.>, and our leading order results indeed match that work. Including a symmetry factor of 1/2 in the λ term of Eq. (<ref>), which appears in graphs (4)-(5) in Fig. <ref> at d=4, our result also applies to the renormalization of the λ coupling and is consistent with <cit.> upon noting different conventions for λ. The renormalization of the Weinberg operator 𝒪_LH^5 pr was finally given in Ref. <cit.> and corresponds to graphs (1)-(5) in Fig. <ref>. Our result at d=5 is again consistent with that work after taking into account the different conventions for λ. The λ term of the γ function for 𝒪_LH^d pr increases significantly with d, for the first two operators in particular: from 4λ at d=5 to 40λ at d=7.
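The d-dependence of the λ coefficients quoted here, and the leading-log solution of the RGE for C_H^d with frozen couplings, can be checked numerically; the electroweak-scale coupling values below are illustrative only:

```python
import numpy as np

def gamma_LH_lambda(d):
    """Coefficient of lambda in the anomalous dimension of C_LH^d above."""
    return 3 * d**2 - 18 * d + 19

print([gamma_LH_lambda(d) for d in (5, 7)])      # [4, 40], as quoted

def run_CH(C_high, d, mu, Lam, lam=0.13, g1=0.36, g2=0.65, WH=2.7):
    """Leading-log solution of the RGE for C_H^d with couplings frozen
    (illustrative EW-scale values; WH ~ 3 y_t^2)."""
    gam = 3 * d**2 * lam - 0.75 * d * g1**2 - 2.25 * d * g2**2 + d * WH
    return C_high * (mu / Lam)**(gam / (16 * np.pi**2))

# e.g. running C_H^6 from 10 TeV down to 1 TeV:
print(run_CH(1.0, 6, mu=1e3, Lam=1e4))           # ~0.7: a sizeable suppression
```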
In summary, we have provided a simple perturbative power counting for the renormalization effects of higher dimensional operators due to SM interactions in the framework of SMEFT. In the course of our analysis we introduced an index that parametrizes the perturbative order of operators. We found that the lowest-index operators are unique at each mass dimension, and that their renormalization running under SM interactions is determined at leading perturbative order by their own anomalous dimensions. We computed the anomalous dimensions of these operators for any mass dimension and found that they increase quadratically with the mass dimension. This will be useful in the study of the effective scalar potential and the generation of tiny Majorana neutrino masses in the framework of SMEFT.
§ ACKNOWLEDGEMENT
This work was supported in part by the Grants No. NSFC-11025525, No. NSFC-11575089 and by the CAS Center for Excellence in Particle Physics (CCEPP).
100
Weinberg:1979sa
S. Weinberg,
Phys. Rev. Lett. 43, 1566 (1979).
Buchmuller:1985jz
W. Buchmuller and D. Wyler,
Nucl. Phys. B 268, 621 (1986).
Grzadkowski:2010es
B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek,
JHEP 1010, 085 (2010)
[arXiv:1008.4884 [hep-ph]].
Lehman:2014jma
L. Lehman,
Phys. Rev. D 90, 125023 (2014)
[arXiv:1410.4193 [hep-ph]].
Liao:2016hru
Y. Liao and X. D. Ma,
JHEP 1611, 043 (2016)
[arXiv:1607.07309 [hep-ph]].
Lehman:2015via
L. Lehman and A. Martin,
Phys. Rev. D 91, 105014 (2015)
[arXiv:1503.07537 [hep-ph]].
Henning:2015daa
B. Henning, X. Lu, T. Melia and H. Murayama,
Commun. Math. Phys. 347, no. 2, 363 (2016)
[arXiv:1507.07240 [hep-th]].
Lehman:2015coa
L. Lehman and A. Martin,
JHEP 1602, 081 (2016)
[arXiv:1510.00372 [hep-ph]].
Henning:2015alf
B. Henning, X. Lu, T. Melia and H. Murayama,
arXiv:1512.03433 [hep-ph].
Aparici:2009fh
A. Aparici, K. Kim, A. Santamaria and J. Wudka,
Phys. Rev. D 80, 013010 (2009)
[arXiv:0904.3244 [hep-ph]].
delAguila:2008ir
F. del Aguila, S. Bar-Shalom, A. Soni and J. Wudka,
Phys. Lett. B 670, 399 (2009)
[arXiv:0806.0876 [hep-ph]].
Bhattacharya:2015vja
S. Bhattacharya and J. Wudka,
Phys. Rev. D 94, no. 5, 055022 (2016)
[arXiv:1505.05264 [hep-ph]].
Liao:2016qyd
Y. Liao and X. D. Ma,
Phys. Rev. D 96, no. 1, 015012 (2017)
[arXiv:1612.04527 [hep-ph]].
Urech:1994hd
R. Urech,
Nucl. Phys. B 433, 234 (1995)
[hep-ph/9405341].
Knecht:1999ag
M. Knecht, H. Neufeld, H. Rupertsberger and P. Talavera,
Eur. Phys. J. C 12, 469 (2000)
[hep-ph/9909284].
Nyffeler:1999ap
A. Nyffeler and A. Schenk,
Phys. Rev. D 62, 113006 (2000)
[hep-ph/9907294].
chiralNDA
For recent discussions and debates on chiral dimensional counting and naive dimensional analysis, see:
G. Buchalla, O. Cata and C. Krause
Phys. Lett. B 731, 80 (2014)
[arXiv:1312.5624 [hep-ph]];
B. M. Gavela, E. E. Jenkins, A. V. Manohar and L. Merlo,
Eur. Phys. J. C 76, 485 (2016)
[arXiv:1601.07551 [hep-ph]];
G. Buchalla, O. Cata, A. Celis and C. Krause,
arXiv:1603.03062 [hep-ph].
Grojean:2013kd
C. Grojean, E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1304, 016 (2013)
[arXiv:1301.2588 [hep-ph]].
Elias-Miro:2013gya
J. Elias-Miro, J. R. Espinosa, E. Masso and A. Pomarol,
JHEP 1308, 033 (2013)
[arXiv:1302.5661 [hep-ph]].
Elias-Miro:2013mua
J. Elias-Miro, J. R. Espinosa, E. Masso and A. Pomarol,
JHEP 1311, 066 (2013)
[arXiv:1308.1879 [hep-ph]].
Jenkins:2013zja
E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1310, 087 (2013)
[arXiv:1308.2627 [hep-ph]].
Jenkins:2013wua
E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1401, 035 (2014)
[arXiv:1310.4838 [hep-ph]].
Alonso:2013hga
R. Alonso, E. E. Jenkins, A. V. Manohar and M. Trott,
JHEP 1404, 159 (2014)
[arXiv:1312.2014 [hep-ph]].
Alonso:2014zka
R. Alonso, H. M. Chang, E. E. Jenkins, A. V. Manohar and B. Shotwell,
Phys. Lett. B 734, 302 (2014)
[arXiv:1405.0486 [hep-ph]].
Alonso:2014rga
R. Alonso, E. E. Jenkins and A. V. Manohar,
Phys. Lett. B 739, 95 (2014)
[arXiv:1409.0868 [hep-ph]].
Elias-Miro:2014eia
J. Elias-Miro, J. R. Espinosa and A. Pomarol,
Phys. Lett. B 747, 272 (2015)
[arXiv:1412.7151 [hep-ph]].
Cheung:2015aba
C. Cheung and C. H. Shen,
Phys. Rev. Lett. 115, no. 7, 071601 (2015)
[arXiv:1505.01844 [hep-ph]].
Jenkins:2013sda
E. E. Jenkins, A. V. Manohar and M. Trott,
Phys. Lett. B 726, 697 (2013)
[arXiv:1309.0819 [hep-ph]].
Manohar:1983md
A. Manohar and H. Georgi,
Nucl. Phys. B 234, 189 (1984).
Liao:2010ku
Y. Liao,
Phys. Lett. B 694, 346 (2011)
[arXiv:1009.1692 [hep-ph]].
Degrande:2012wf
C. Degrande, N. Greiner, W. Kilian, O. Mattelaer, H. Mebane, T. Stelzer, S. Willenbrock and C. Zhang,
Annals Phys. 335, 21 (2013)
[arXiv:1205.4231 [hep-ph]].
Arason:1991ic
H. Arason, D. J. Castano, B. Keszthelyi, S. Mikaelian, E. J. Piard, P. Ramond and B. D. Wright,
Phys. Rev. D 46, 3945 (1992).
Antusch:2001ck
S. Antusch, M. Drees, J. Kersten, M. Lindner and M. Ratz,
Phys. Lett. B 519, 238 (2001)
[hep-ph/0108005].
|
http://arxiv.org/abs/1701.07581v2 | 20170126052158 | Interaction-induced Bloch Oscillation in a Harmonically Trapped and Fermionized Quantum Gas in One Dimension | [
"Lijun Yang",
"Lihong Zhou",
"Wei Yi",
"Xiaoling Cui"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
Key Laboratory of Quantum Information, University of Science and Technology of China,
CAS, Hefei, Anhui, 230026, People's Republic of China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Key Laboratory of Quantum Information, University of Science and Technology of China,
CAS, Hefei, Anhui, 230026, People's Republic of China
Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China
[email protected]
Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
Motivated by a recent experiment by F. Meinert et al,
arxiv:1608.08200, we study the dynamics of an impurity moving in the
background of a harmonically trapped one-dimensional Bose gas in the
hard-core limit. We show that due to the hidden “lattice" structure
of background bosons, the impurity effectively feels a
quasi-periodic potential via impurity-boson interactions that can
drive the Bloch oscillation under an external force, even in the absence of real lattice potentials.
Meanwhile, the inhomogeneous density of trapped bosons imposes an
additional harmonic potential to the impurity, resulting in a
similar oscillation dynamics but with a different period and
amplitude. We show that the sign and strength of the impurity-boson coupling
can significantly affect the above two potentials in determining the impurity dynamics.
Interaction-induced Bloch Oscillation in a Harmonically Trapped and Fermionized Quantum Gas in One Dimension
Xiaoling Cui
December 30, 2023
============================================================================================================
§ INTRODUCTION
Bloch oscillation (BO) describes a striking quantum phenomenon, in which the motion of a particle under a periodic potential and an external force is oscillatory, rather than linear, as time evolves <cit.>. It has been observed in semiconductor superlattices <cit.> and in cold atoms with optical lattices <cit.>. Physically, this phenomenon is due to the Bragg scattering of the particle at the edge of the Brillouin zone, which is supported by the translational invariance of lattice potentials. It is then interesting to ask the question: is translational invariance necessary for the occurrence of BO? A recent experiment at Innsbruck <cit.> seems to suggest that the answer is no. In this experiment, oscillatory dynamics of an impurity interacting with hard-core bosons trapped in one dimension have been observed, in the absence of any periodic confinement. In Ref. <cit.> and earlier theories <cit.>, this phenomenon was attributed to the Bragg scattering with bosons at the edge of an emergent Brillouin zone, which causes the impurity momentum to change by twice the Fermi momentum of the bosons with no energy cost.
In this work, we give an alternative interpretation of the oscillation dynamics observed in Ref. <cit.>, by adopting the concept of effective spin chain in strongly interacting atomic gases in one dimensional (1D) traps <cit.>. The idea of spin chain is based on the fact that for fermionized (or impenetrable) particles in one dimension, their spatial order is fixed, and the probability peak of finding the i-th ordered particle (see ρ_i(x) in Fig. <ref>) is well separated from those of the neighboring ordered particles. Therefore an underlying “lattice" chain is automatically formed by mapping the order index to the corresponding site index <cit.>.
In this way, various spin-chain Hamiltonians can be constructed in response to different external perturbations, including interactions, gauge fields and trapping potentials, which have been used to address and engineer novel spin spirals and magnetic orders in 1D systems <cit.>.
Experimentally, an anti-ferromagnetic spin chain has been recently confirmed in a small cluster of 1D trapped fermions <cit.>.
Now we apply the idea of the spin chain to the Innsbruck experiment <cit.>. Note that the “spin chain" here refers to the ordered (hardcore) bosons in coordinate space. As the impurity-boson interaction can be converted into the impurity interacting with all the ordered particles (see Fig. <ref>), it then becomes clear that the impurity should effectively feel a “lattice" potential originating from the “lattice" structure of the ordered bosons. Such a “lattice" potential is quasi-periodic, with lattice spacing approximately the inter-particle distance of the bosons and lattice depth proportional to the impurity-boson coupling strength. This lattice potential then gives rise to the BO of the impurity under an external force. This picture, compared to that in Ref. <cit.>, provides more information on the role of the impurity-boson interaction in the impurity dynamics, as we will elaborate in this paper.
In this work, we will also consider another factor which influences the impurity dynamics, i.e., the external confinement of bosons. Such a confining potential gives rise to an inhomogeneous boson density distribution, which leads to an additional potential for the impurity via the impurity-boson interactions. In fact, it is such a harmonic confinement that breaks the translational invariance of the whole system, while its effect on the impurity dynamics has not been considered in previous studies <cit.>. Here we explicitly relate the two effective potentials, i.e., the periodic and the harmonic ones, to the coupling strength between the impurity and bosons. We show that the period and amplitude of the impurity dynamics sensitively depend on the sign and strength of the impurity-boson coupling.
Our results offer insights into the impurity dynamics under a general class of background systems as long as they are fermionized.
The rest of the paper is organized as follows. In section II, we set up the basic model for the impurity moving in the background of hardcore bosons. Based on the model, we study the impurity dynamics under an external force in section III. Section IV is devoted to the discussion and summary of our results.
§ MODEL
We start from the Hamiltonian of the system following the setup in Ref. <cit.> (ħ=1 throughout the paper):
H=H_b(x_1,⋯,x_N)-1/2m∂^2/∂ x^2+g_ib∑_i=1^Nδ(x-x_i)-F x.
Here x and x_i (i=1,2,…,N) are, respectively, the coordinates of the impurity and of the N bosons; g_ib is the impurity-boson coupling strength; F is the external force acting on the impurity starting from time t=0^+; H_b is the Hamiltonian for the hard-core bosons:
H_b=∑_i(-1/2m∂^2/∂ x_i^2+1/2mω_ho x_i^2)+g_bb∑_i<jδ(x_i-x_j).
In the limit g_bb→∞, the ground state of the bosons can be written as Ψ_b(x_1,...,x_N)=|ϕ_F(x_1,...,x_N)|, where ϕ_F is the Slater determinant describing N identical fermions occupying the lowest N levels of a 1D harmonic oscillator. As the spatial order of impenetrable particles is fixed, one can write down the probability density of finding the i-th ordered particle at x, denoted ρ_i(x), as
ρ_i(x)=∫ dx⃗ |ϕ_F|^2 θ(x_1<...<x_i<...<x_N) δ(x-x_i).
It has been shown that each ρ_i(x) is well separated from its neighbors and follows a Gaussian distribution centered at x̅_i=∫ dx x ρ_i(x) with width σ_i <cit.>:
ρ_i(x)→1/√(π)σ_ie^-(x-x̅_i)^2/σ_i^2.
By mapping the order index to site index, one can consider each ordered particle as localized in the corresponding lattice site with a finite distribution width. Following this idea one can construct various spin chain models as studied in the literature <cit.>.
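To make the “lattice" structure quantitative, ρ_i(x) can be estimated numerically: the hard-core ground state satisfies |Ψ_b|^2=|ϕ_F|^2, which for a harmonic trap is a squared Vandermonde determinant times a Gaussian, so Metropolis sampling of |ϕ_F|^2 followed by sorting the coordinates yields histograms of the ordered-particle distributions. A minimal Python sketch follows (step size and chain length are illustrative choices; units ħ=m=ω_ho=1):

    import numpy as np

    def log_prob(xs):
        # |phi_F|^2 for N trapped free fermions: Vandermonde^2 * exp(-sum x^2)
        vdm = np.prod([xs[j] - xs[i] for i in range(len(xs))
                       for j in range(i + 1, len(xs))])
        return 2.0 * np.log(np.abs(vdm) + 1e-300) - np.sum(xs**2)

    def sample_ordered(N=10, nsteps=200000, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        xs = rng.normal(size=N)
        lp, samples = log_prob(xs), []
        for t in range(nsteps):
            prop = xs + step * rng.normal(size=N)
            lp2 = log_prob(prop)
            if np.log(rng.random()) < lp2 - lp:
                xs, lp = prop, lp2
            if t % 10 == 0:
                samples.append(np.sort(xs))  # column i samples rho_i(x)
        return np.array(samples)

Histograms of the i-th sorted coordinate then reproduce the well-separated Gaussians of Eq. (<ref>).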
Assuming that the impurity-boson interaction is weak enough compared to the Fermi energy of the hard-core bosons, i.e., g_ib/d≪ E_F=Nω_ho (d is the inter-particle distance of the bosons), the ground-state profile of the bosons will not be significantly changed by the impurity. In this case, we can
write down an effective potential for the impurity due to the impurity-boson interaction:
V(x)=g_ib∑_i ρ_i(x).
Therefore the “lattice" structure of ρ_i(x) is naturally transferred to a “lattice" potential on the impurity. The resulting Hamiltonian for the impurity is
H_imp=-1/2m∂^2/∂ x^2+V(x)-F x.
In the ideal situation where the ρ_i(x) (Eq. <ref>) are equally spaced with identical widths, i.e., x̅_i+1-x̅_i≡ d, σ_i≡σ, V reduces to the ideal lattice potential V_L for large N:
V_L(x)=g_ib∑_i 1/√(π)σe^-(x-x̅_i)^2/σ^2, x̅_i=(i-N-1/2)d
which is translationally invariant and can support BO dynamics of the impurity under an external force. Remarkably, here the “lattice" is induced by the finite impurity-boson coupling g_ib, and therefore the property of BO is highly tunable by g_ib. This is the unique aspect of such an interaction-induced BO.
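For reference, the following Python sketch builds V_L from Eq. (<ref>); the Gaussian width σ is an assumed value (in practice it is read off from fits of ρ_i(x)), and the spacing d is estimated from the LDA central density of the trapped hardcore bosons (units ħ=m=ω_ho=a_ho=1):

    import numpy as np

    def lattice_potential(x, g_ib, N, sigma=0.6):
        # V_L(x) = g_ib * sum_i exp(-(x-xbar_i)^2/sigma^2)/(sqrt(pi)*sigma),
        # xbar_i = (i-(N+1)/2)*d, with d ~ pi/sqrt(2N) the LDA central spacing
        d = np.pi / np.sqrt(2.0 * N)
        centers = (np.arange(1, N + 1) - 0.5 * (N + 1)) * d
        gauss = np.exp(-(x[None, :] - centers[:, None])**2 / sigma**2)
        return g_ib * gauss.sum(axis=0) / (np.sqrt(np.pi) * sigma)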
Meanwhile, it should be noted that one essential deviation of the actual V from V_L arises from the inhomogeneity of the boson density in the trap. In particular, the height of ρ_i changes with the index i, decaying gradually from the trap center to the edge. This generates an additional potential V'(x) on top of V_L, as shown by the dashed line in Fig. 1. V' can be estimated through the local density approximation (LDA), and for small x it is
V'(x)∼ -sgn(g_ib)1/2mω'^2 x^2, ω'=√(|g_ib|ω_ho/π R),
here R=(2N)^1/2 a_ho (a_ho=1/√(mω_ho)) is the Thomas-Fermi radius of the hardcore bosons under the LDA. Eq. <ref> shows that V' is simply a harmonic potential whose frequency scales as |g_ib|^1/2. A subtle case arises when g_ib is repulsive: V' is then concave, and one has to impose another harmonic potential to hold the impurity initially at the trap center. We will discuss this case later.
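For the parameters used in Sec. III below (N=10, g̃_ib=-2), Eq. (<ref>) is a two-line evaluation; in trap units (ħ=m=ω_ho=a_ho=1) this sketch reproduces the quoted value ω'≈0.37ω_ho:

    import numpy as np

    N, g_ib = 10, -2.0              # g_ib in units of omega_ho * a_ho
    R = np.sqrt(2 * N)              # Thomas-Fermi radius in units of a_ho
    omega_p = np.sqrt(abs(g_ib) / (np.pi * R))
    print(round(omega_p, 2))        # -> 0.37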
Now we can approximate V (Eq. <ref>) as the sum of a lattice potential V_L (Eq. <ref>) and a harmonic one V' (Eq. <ref>). Such an approximation is expected to work well for the impurity dynamics near the center of the trap, but not near the edges, where the assumption of a uniform Gaussian distribution in V_L breaks down. The individual effects of V_L and V' on the impurity dynamics are analyzed as follows. V_L induces BO with period 2π/(Fd) and an amplitude proportional to the band width, which is a decreasing function of |g_ib|; V' also induces periodic dynamics due to the linear interference between different harmonic levels, with the resulting period and amplitude of the oscillation depending on ω', i.e., on g_ib (see Eq. <ref>). So under the combined effects of V_L and V', the impurity is expected to undergo oscillatory dynamics with properties crucially relying on the coupling g_ib.
§ RESULTS
In our numerical simulation of the impurity dynamics under H_imp (Eq.<ref>), we have chosen the initial state as the ground state of H_imp without external force (F=0). We then turn on a finite F at time t=0^+ and solve the time-dependent Schrödinger equation i∂ψ/∂ t= H_impψ, with ψ the impurity
wave function. In the simulation, we discretize the coordinate space in a sufficiently large region (with size much larger than the Thomas-Fermi radius R) in order to numerically obtain the ground state and the dynamics. Here we define the dimensionless parameters
g̃_ib≡ g_ib/(ω_ho a_ho), and F̃≡ Fa_ho/ω_ho.
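A self-contained Python sketch of this protocol (finite-difference grid plus Crank-Nicolson propagation) is given below; the grid, time step, and Gaussian width σ of V_L are illustrative assumptions, while d, R, and ω' follow the estimates discussed in Sec. II:

    import numpy as np

    # Units: hbar = m = omega_ho = a_ho = 1; parameters of Fig. 2.
    N, g_ib, F = 10, -2.0, 0.05
    R = np.sqrt(2 * N)                        # Thomas-Fermi radius
    d, sigma = np.pi / np.sqrt(2 * N), 0.6    # LDA spacing; assumed width
    x = np.linspace(-1.5 * R, 1.5 * R, 1024)
    dx = x[1] - x[0]

    centers = (np.arange(1, N + 1) - 0.5 * (N + 1)) * d
    V_L = g_ib * np.exp(-(x[None, :] - centers[:, None])**2
                        / sigma**2).sum(axis=0) / (np.sqrt(np.pi) * sigma)
    V_h = -np.sign(g_ib) * 0.5 * (abs(g_ib) / (np.pi * R)) * x**2  # V'(x)

    off = -0.5 / dx**2 * np.ones(len(x) - 1)  # -(1/2) d^2/dx^2 stencil
    H0 = np.diag(1.0 / dx**2 + V_L + V_h) + np.diag(off, 1) + np.diag(off, -1)

    vals, vecs = np.linalg.eigh(H0)           # ground state at F = 0
    psi = vecs[:, 0].astype(complex) / np.sqrt(dx)
    H = H0 - np.diag(F * x)                   # quench on the force at t = 0+

    dt, nsteps, mean_x = 0.1, 1500, []
    U = np.linalg.solve(np.eye(len(x)) + 0.5j * dt * H,
                        np.eye(len(x)) - 0.5j * dt * H)
    for _ in range(nsteps):
        psi = U @ psi
        mean_x.append((x * np.abs(psi)**2).sum() * dx)

The trajectory mean_x then shows the oscillatory ⟨x⟩(t) of Fig. <ref>.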
In Fig. <ref>, we plot the time evolution of the mean displacement ⟨
x⟩≡⟨ψ |x|ψ⟩ and the mean momentum ⟨ k⟩≡⟨ψ |-i∂/∂ x |ψ⟩ for the impurity
moving in the background of N=10 bosons, taking
g̃_ib=-2 and F̃=0.05 for instance. In this case,
the resulting ω'=0.37ω_ho according to Eq. <ref>.
As expected, both ⟨ x⟩ and ⟨ k⟩
oscillate periodically in time t, and the exact results (by
simulating Eq. <ref>) can be fitted quantitatively well by
replacing the potential (V) with the sum of lattice and harmonic
potentials (V_L+V').
In Fig. <ref>, we present the impurity momentum distribution, n(k)≡∫ dx e^ik(x-x')ψ^*(x)ψ(x'), in the parameter plane of momentum k and time t. In general, the behavior of n(k) will depend on the strengths of impurity-boson coupling g_ib and external force F. To see the effect of the external force, we choose a small F̃=0.05 in Fig. <ref>(a) and a larger one F̃=0.3 in Fig. <ref>(b). We see that the sharp Bragg reflection, as has been discussed in Ref. <cit.>, is more visible for larger F̃ (Fig. <ref>(b)). As the harmonic potential V' can only give rise to a periodic oscillation of n(k), such a reflection can only be attributed to the effect of the underlying periodic potential V_L, or the hidden "lattice" structure of hardcore bosons.
Nevertheless, here we would like to point out that a sharp momentum reflection in n(k) is a sufficient, but not a necessary, signature of BO dynamics under a periodic potential. As observed earlier in the optical lattice experiment <cit.>, such a reflection in n(k) is visible only for shallow lattices, but not for deep ones. This is because for a deep lattice each crystal-momentum state is a superposition of many plane-wave states, and the time evolution of all these plane-wave momenta contributes to n(k). This results in a perfectly periodic oscillation of n(k), as shown in Ref. <cit.>.
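Since the impurity here is in a pure state, n(k) factorizes as |ψ̃(k)|^2 and can be obtained from the propagated wave function by a single Fourier transform; a short sketch, reusing the uniform grid of the propagation sketch above:

    import numpy as np

    def momentum_distribution(psi, dx):
        # n(k) = |psi_tilde(k)|^2 for a pure state on a uniform grid
        k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(psi), d=dx))
        psi_k = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi)
        return k, np.abs(psi_k)**2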
To see clearly the role of g_ib in the dynamics, we
extract the period T_x (T_k) and amplitude A_x (A_k) for
⟨ x⟩ (⟨ k⟩) and plot them as functions
of g_ib in Fig. <ref> (g_ib<0) and Fig. <ref> (g_ib>0). Again we see that the results from the combined periodic and harmonic potentials (V_L+V') agree well with the exact results from the total potential (V).
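The periods and amplitudes can be read off from the simulated trajectories by combining the dominant Fourier component with the peak-to-peak excursion; the helper below (a hypothetical estimator acting on the mean_x array and time step dt of the propagation sketch) illustrates one such extraction:

    import numpy as np

    def period_amplitude(mean_x, dt):
        sig = np.asarray(mean_x) - np.mean(mean_x)
        amp = 0.5 * (sig.max() - sig.min())     # half peak-to-peak
        spec = np.abs(np.fft.rfft(sig))
        freq = np.fft.rfftfreq(len(sig), d=dt)
        f0 = freq[1 + np.argmax(spec[1:])]      # dominant nonzero frequency
        return 1.0 / f0, amp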
For the attractive g_ib case in Fig. <ref>, we see that all the periods (T_x,T_k) and amplitudes (A_x,A_k) decrease as |g_ib| increases. This can be attributed to two effects generated by increasing |g_ib|. First, it increases the depth of V_L and produces a narrower band width, which reduces the amplitudes A_x and A_k. Second, it produces a tighter confinement V' (see Eq. <ref>), which further reduces A_x and A_k as well as T_x and T_k. In fact, as |g_ib| becomes larger, the effect of V' dominates and the dynamics essentially follow the harmonic prediction (red-dotted lines in Fig. 3).
For repulsive g_ib, as discussed before, one has to impose another harmonic potential, V”(x)=1/2 mω”^2 x^2, to compensate for the effect of the concave potential V' (Eq. <ref>) and to hold the impurity initially at the trap center (⟨ x⟩_t=0=0). In order to highlight the role of V_L in the dynamics, we have chosen ω” just a bit larger than ω', i.e., ω”=ω'+0.2ω_ho, and the results are shown in Fig. <ref>. We see that as g_ib increases, A_x, A_k and T_x, T_k all decrease. Similar to the g_ib<0 case, this can be attributed to the narrower band width produced by larger g_ib (and thus deeper V_L), as well as to the combined effects of V_L and the residual harmonic potential V'+V”.
Actually, by expanding H_imp(Eq.<ref>) in terms of the
lowest-band Wannier functions that are supported by V_L, we can write
down an effective lattice model for the impurity:
H_imp^eff=-∑_(i,j)t_ij(c_i^† c_j+h.c.)+ ∑_i
(V^h_i-F_i) c_i^† c_i,
here V^h_i and F_i are respectively the on-site potential
generated by the total harmonic confinement (sum of V' and V”)
and the force Fx; the hopping is
t_ij = -∫ w_0^*(x)(-1/2m∂^2/∂ x^2+V_L(x))w_0(x-(j-i)d)dx.
Note that the lattice model (<ref>) is valid under the adiabaticity condition <cit.>, which requires Fd≪ E_gap (E_gap is the band gap) to ensure that the dynamics stay within the lowest band. Apparently this condition is satisfied for large |g_ib| (and thus a large band gap). We have checked that it is not satisfied for the range of g_ib considered in Fig. <ref> and Fig. <ref>, where higher-band effects should play an essential role in determining the dynamics.
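The band width and band gap entering this adiabaticity check can be estimated by diagonalizing the ideal lattice V_L in a plane-wave basis. In the sketch below, the Fourier coefficients V_m = (g_ib/d) e^{-(mπσ/d)^2} follow from the Gaussian form of Eq. (<ref>), with σ again an assumed width (units ħ=m=1):

    import numpy as np

    def lowest_bands(g_ib, d, sigma, n_pw=21, n_q=101):
        half = n_pw // 2
        G = 2 * np.pi / d * np.arange(-half, half + 1)
        E = []
        for q in np.linspace(-np.pi / d, np.pi / d, n_q):
            H = np.diag(0.5 * (q + G)**2).astype(complex)
            for i in range(n_pw):
                for j in range(n_pw):
                    if i != j:
                        m = i - j
                        H[i, j] = (g_ib / d) * np.exp(-(m * np.pi * sigma / d)**2)
            E.append(np.linalg.eigvalsh(H)[:2])
        E = np.array(E)
        width = E[:, 0].max() - E[:, 0].min()   # lowest band width
        gap = E[:, 1].min() - E[:, 0].max()     # gap to first excited band
        return width, gap

Comparing F*d with the returned gap then checks whether the lowest-band model applies.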
§ DISCUSSION AND SUMMARY
Our results offer a number of insights into the oscillatory impurity dynamics in Ref. <cit.>. First, such dynamics is induced purely by the impurity-boson interaction g_ib. Increasing |g_ib| generally leads to faster oscillatory dynamics with smaller amplitudes, which is qualitatively consistent with what was observed in Ref. <cit.>. Second, both the induced “lattice" potential (due to the “lattice" structure of the bosons) and the harmonic confinement (due to the inhomogeneous boson density) play important roles in the resulting dynamics. Their individual effects can be examined by tuning g_ib (repulsive or attractive) or by applying additional confining potentials to the impurity. Third, the impurity dynamics does not rely on the statistics of the background system, but rather on the fact that the system is fermionized. Physically, this is because any fermionized system has the same density profile, regardless of whether it is composed of bosons, fermions or boson-fermion mixtures. Fermionized backgrounds thus affect the impurity similarly via density-density interactions.
It is worthwhile to point out that the periodic dynamics in this work relies on the assumption of an unaffected boson profile. Once the coupling g_ib is strong enough to invalidate this assumption, boson excitations should be taken into account, which are expected to bring more modes into the dynamics and cause damping, as observed in the Innsbruck experiment <cit.>. The study of the dynamics in this regime is beyond the scope of the present work.
In summary, we have demonstrated the interaction-induced oscillatory dynamics of an impurity moving in the background of 1D trapped hard-core bosons. Because of the hidden “lattice" structure of the bosons, the impurity dynamics essentially mimics the BO in conventional lattices, despite the lack of lattice translational invariance. Moreover, we also point out that the inhomogeneous density of the trapped bosons provides another harmonic potential that can strongly affect the dynamics. These results provide a new perspective on the recent observation in the Innsbruck experiment <cit.>.
Acknowledgment.
This work is supported by the National Natural Science Foundation of China (No.11374177, 11374283, 11626436, 11421092, 11534014, 11522545), and the National Key Research and Development Program of China (No.2016YFA0300603, 2016YFA0301700). W. Y. acknowledges support from the “Strategic Priority Research Program(B)” of the Chinese Academy of Sciences, Grant No. XDB01030200.
Note Added: While preparing this paper, we became aware of the preprint by Yang and Pu <cit.>, who studied the impurity dynamics in a different interaction regime (g_ib→+∞).
BlochF. Bloch, Z. Phys. 52, 555 (1929).
ZenerC. Zener, Proc. R. Soc. London A 145, 523 (1934).
BO_superlatticeC. Waschke, H. Roskos, R. Schwedler, K. Leo, H. Kurz,
and K. Köhler, Phys. Rev. Lett. 70, 3319 (1993).
BO_OL1M. Ben Dahan, E. Peik, J. Reichel, Y. Castin, and C. Salomon,
Phys. Rev. Lett. 76, 4508 (1996).
BO_OL2B. P. Anderson and M. A. Kasevich, Science 282, 1686 (1998).
BO_OL3M. Fattori, C. D'Errico, G. Roati, M. Zaccanti, M. Jona-Lasinio, M. Modugno, M. Inguscio, and G. Modugno, Phys. Rev. Lett. 100, 080405 (2008).
BO_OL4F. Meinert, M. J. Mark, E. Kirilov, K. Lauber, P. Weinmann, M. Gröbner, and H.-C. Nägerl, Phys. Rev. Lett. 112, 193003 (2014).
expt F. Meinert, M. Knap, E. Kirilov, K. Jag-Lauber, M. B. Zvonarev, E. Demler,
and H.-C. Nägerl, arXiv:1608.08200.
Gangardt1D. M. Gangardt and A. Kamenev, Phys. Rev. Lett. 102, 070402 (2009).
Gangardt2M. Schecter, D. M. Gangardt, and A. Kamenev, Ann. Phys. 327, 639 (2012).
Gangardt3M. Schecter, D. M. Gangardt, and A. Kamenev, New J. Phys. 18, 065002 (2016).
SantosF. Deuretzbacher, D. Becker, J. Bjerlin, S. M. Reimann, and L. Santos, Phys. Rev. A 90, 013611 (2014).
ZinnerA. G. Volosniev, D. V. Fedorov, A. S. Jensen, M. Valiente, and N. T. Zinner, Nature Communications 5, 5300 (2014).
Pu L. Yang, L. Guan, and H. Pu, Phys. Rev. A 91, 043634 (2015).
Levinsen J. Levinsen, P. Massignan, G. M. Bruun, M. M. Parish, Science Advances 1, e1500197 (2015)
YangL. Yang and X. Cui, Phys. Rev. A 93, 013617 (2016).
CuiX. Cui and T.-L. Ho, Phys. Rev. A 89, 013629 (2014).
BlumeQ. Guan and D. Blume, Phys. Rev. A 92, 023641 (2015).
Zinner2A. G. Volosniev, D. Petrosyan, M. Valiente, D. V. Fedorov, A. S. Jensen, and N. T. Zinner, Phys. Rev. A 91, 023620 (2015); R. E. Barfknecht, A. Foerster, N. T. Zinner, arXiv:1612.01570.
Levinsen2P. Massignan, J. Levinsen, and M. M. Parish, Phys. Rev. Lett. 115, 247202 (2015).
Cui2L. Yang, X.-W. Guan and X. Cui, Phys. Rev. A 93, 051605 (R) (2016).
ChenHaiping Hu, Lei Pan, Shu Chen, Phys. Rev. A 93, 033636 (2016).
Pu2L. Yang and H. Pu, Phys. Rev. A 94, 033614 (2016).
Santos2F. Deuretzbacher, D. Becker, J. Bjerlin, S. M. Reimann, L. Santos, arXiv:1611.04418.
Jochim_exptS. Murmann, F. Deuretzbacher, G. Zürn, J. Bjerlin, S. M. Reimann, L. Santos, T. Lompe, S. Jochim, Phys. Rev. Lett. 115, 215301 (2015).
Pu3 L. Yang and H. Pu, arXiv:1701.05264.
|
http://arxiv.org/abs/1701.07533v2 | 20170126010151 | Constructing tame supercuspidal representations | [
"Jeffrey Hakim"
] | math.RT | [
"math.RT",
"20G25, 22E50"
] |
Constructing Tame Supercuspidal Representations
Jeffrey Hakim
December 30, 2023
===============================================
A new approach to Jiu-Kang Yu's construction of tame supercuspidal representations of p-adic reductive groups is presented. Connections with the theory of cuspidal representations of finite groups of Lie type and the theory of distinguished representations are also discussed.
§ INTRODUCTION
This paper provides a new approach to the construction of tame supercuspidal representations of p-adic reductive groups. It grew out of a desire to unify the theory of distinguished tame supercuspidal representations from <cit.> with Lusztig's analogous theory for cuspidal representations of finite groups of Lie type <cit.>.
Our construction may be viewed as a revision of Yu's construction <cit.> that associates supercuspidal representations to certain representations of compact-mod-center subgroups. In the applications to distinguished representations, it plays a role analogous to Deligne-Lusztig “induction” of cuspidal representations from certain characters of tori <cit.> (but our approach is not cohomological).
Yu's tame supercuspidal representations are parametrized by complicated objects called generic, cuspidal G-data. (See <cit.>.)
The fact that Yu's representations should be associated to simpler objects (such as characters of tori) is already evident in Murnaghan's paper <cit.>, Kaletha's paper <cit.>, and the general theory of the local Langlands correspondence.
In <cit.> and <cit.>, generic, cuspidal G-data are manufactured from simpler data and then Yu's construction is applied.
By contrast, our intent is to simplify Yu's construction and make it more amenable to applications, such as the applications to distinguished representations discussed in <ref>.
The main source of simplification is the elimination of Howe factorizations in Yu's construction. (See <cit.> and <cit.>.) Recall that to construct a tame supercuspidal representation with Yu's construction, one has to make a noncanonical choice of a factorization. Removing the need for this choice, as we do, allows one to more easily develop applications, because one no longer needs technical lemmas proving the existence of application-friendly factorizations.
Another technical notion required in Yu's construction is that of “special isomorphisms.” Though it is already known that special isomorphisms can be chosen canonically and their choice does not affect the isomorphism class of the constructed representation, special isomorphisms have remained a misunderstood and unpleasant aspect of the theory of tame supercuspidal representations. (Here, we are really referring to the “relevant” special isomorphisms of <cit.>. See 3.4, especially Proposition 3.26, <cit.> for more details. Yu's original definition of “special isomorphism” is in <cit.>.)
Our construction takes as its input a suitable representation ρ (called a “permissible representation”) of a compact-mod-center subgroup of our p-adic group G and then canonically constructs the character of a representation κ of an open compact-mod-center subgroup such that κ induces a supercuspidal representation π of G.
The construction of the character of κ (and hence the equivalence classes of κ and π) is canonical, reasonably simple, and makes no reference to special isomorphisms. We do, in fact, use special isomorphisms to establish the existence of a representation κ with the given character, but the meaning and role of special isomorphisms in the theory becomes more transparent.
We also observe that, in many cases, there is no need to explicitly impose genericity conditions in our construction, but, instead, the necessary genericity properties are automatic.
We are indebted to Tasho Kaletha for explaining to us that Yu's main genericity condition, condition GE1, is built into our construction.
In general, the influence of Kaletha's ideas on this paper is hard to adequately acknowledge.
After the initial draft of the paper was circulated, the author became aware of Kaletha's work <cit.> on regular supercuspidal representations. This prompted a major revision of the paper, during which various tameness conditions and other technicalities were removed.
The best evidence of the utility of our construction lies in the proof of Theorem <ref>, a result that is stated in this paper but proven in a companion paper <cit.>. The proof uses the theory in this paper to establish stronger versions of the author's main results with Fiona Murnaghan, with considerably less effort.
Theorem <ref> links the representation theories of finite and p-adic groups by providing a uniform formula for the dimensions of the spaces of invariant linear forms for distinguished representations.
For finite groups of Lie type, our formula is essentially a reformulation of the main result of <cit.>. For p-adic reductive groups, it is a reformulation of the main result of <cit.>.
In both cases, these reformulations refine earlier reformulations in <cit.>.
This paper is structured in an unorthodox way to accommodate a variety of readers, each of whom is only interested in select aspects of the theory.
The construction itself is expeditiously explained (without proofs)
in <ref>. The main result is Theorem <ref>.
The proofs are taken up in <ref>.
For the most part, we expect that the connection between our construction and Yu's construction will be self-evident in <ref>, but <ref> formalizes this connection. Finally, in <ref>, we describe the connection between our construction and the Deligne-Lusztig construction in applications to the theory of distinguished representations.
We gratefully acknowledge the advice and generous help of Jeffrey Adler and Joshua Lansky throughout the course of this project.
§ THE CONSTRUCTION
§.§ The inducing data
Let F be a finite extension of a field ℚ_p of p-adic numbers for some odd prime p,
and let F̄ be an algebraic closure of F.
Let 𝐆 be a connected reductive F-group.
We are interested in constructing supercuspidal representations π of G = 𝐆(F).
The input for our construction is a suitable representation (ρ , V_ρ) of a suitable compact-mod-center subgroup H_x of G, and the map from ρ to π has some formal similarities to Deligne-Lusztig induction of cuspidal representations of finite groups of Lie type.
Let us now specify H_x and ρ more precisely.
Fix an F-subgroup 𝐇 of 𝐆 that is a Levi subgroup of 𝐆 over some tamely ramified finite extension of F.
We require that the quotient 𝐙_𝐇/𝐙_𝐆 of the centers of 𝐇 and 𝐆 is F-anisotropic.
Fix a vertex x in the reduced building ℬ_ red(𝐇,F). Let H = 𝐇(F) and let H_x be the stabilizer of x in H. Let H_x,0 be the corresponding (maximal) parahoric subgroup in H and let H_x,0+ be its prounipotent radical.
Note that the reduced building ℬ_ red(𝐇,F) of 𝐇 is, by definition, identical to the extended building ℬ(𝐇_ der , F) of the derived group 𝐇_ der of 𝐇.
Let 𝐇_ sc→𝐇_ der be the universal cover of 𝐇_ der.
We identify the buildings ℬ (𝐇_ sc,F) and ℬ(𝐇_ der,F) (as in <ref>) and let H_ der,x,0+^♭ denote the image of H_ sc,x,0+ in H_ der.
A representation of H_x is permissible if it is a smooth, irreducible, complex representation (ρ ,V_ρ) of H_x such that:
(1) ρ induces an irreducible (and hence supercuspidal) representation of H,
(2) the restriction of ρ to H_x,0+ is a multiple of some character ϕ of H_x,0+,
(3) ϕ is trivial on H_ der,x,0+^♭,
(4) the dual cosets (ϕ |Z^i,i+1_r_i)^* (defined in <ref>) contain elements that satisfy Yu's condition GE2 (stated below in <ref>).
When the order of the fundamental group π_1(𝐇_ der) of 𝐇_ der is not divisible by p, we show in Lemma <ref> that H_ der,x,0+^♭=H_ der,x,0+. In this case, we show in Lemma <ref> that every permissible representation
ρ may be expressed as ρ_0 ⊗ (χ|H_x), where ρ_0 has depth zero and χ is a quasicharacter of H that extends ϕ. (So our representations ρ_0 are essentially the same as the representations denoted by ρ in <cit.>.) This allows one to appeal to the depth zero theory of Moy and Prasad <cit.> to explicitly describe the representations ρ. Even when p divides the order of π_1 (𝐇_ der), one can apply depth zero Moy-Prasad theory by lifting to a z-extension, as described in <ref>. In this case, one obtains a factorization over the z-extension, but the factors may not correspond to representations of H_x.
In light of <cit.>, condition (4) is only required when
p is a torsion prime for the dual of the root datum of 𝐆 (in the sense of <cit.> and <cit.>).
Using Steinberg's results (Lemma 2.5 and Corollary 1.13 of <cit.>), we can make this more explicit as follows. Since we assume p ≠ 2, condition (4) is needed precisely when
* p=3 and 𝐆_ der has a factor of type E_6, E_7, E_8 or F_4,
* p=5 and 𝐆_ der has a factor of type E_8, or
* 𝐆_ der has a factor of type A and
p is a torsion prime for X/Φ, where Φ is the root lattice and X is the character lattice relative to some maximal torus of 𝐆.
In particular, condition (4) is unnecessary for 𝐆𝐋_n, but is required for 𝐒𝐋_n.
A given permissible representation can correspond to supercuspidal representations for two different groups. For example, suppose Ψ = (𝐆⃗,y,ρ_0,ϕ⃗) is a generic cuspidal G-datum with d>0 and ϕ_d =1, with notations as in <cit.>. Let Ψ' be the datum obtained from Ψ by deleting the last components of 𝐆⃗ and ϕ⃗. Then the tame supercuspidal representations of G and G^d-1 associated to Ψ and Ψ', respectively, are both associated to the same permissible representation ρ_0⊗ (∏_i=0^d-1(ϕ_i|G^0_[y])).
§.§ Basic notations
We use boldface letters for F-groups and non-boldface for the groups of F-rational points. Gothic letters are used for Lie algebras, and the subscript “der" is used for derived groups of algebraic groups and their Lie algebras. For example, G = 𝐆(F), 𝖌 = Lie(𝐆), 𝔤 = 𝖌(F), and 𝐆_ der is the derived group of 𝐆. We let G_ der denote 𝐆_ der(F), but caution that this is not necessarily the derived group of G.
In general, our notations tend to follow <cit.>, which, in turn, mostly follows <cit.>. This applies, in particular, to groups G_y,f associated to concave functions, such as Moy-Prasad groups G_y,s and groups G⃗_y,s⃗ associated to twisted Levi sequences and admissible sequences s⃗. Many of these notations are recalled in <ref>, as well as conventions regarding variants of exponential maps (such as Moy-Prasad isomorphisms).
We use colons in our notations for quotients, for example,
G_y,s:r = G_y,s/G_y,r. Similar notations apply to the Lie algebra filtrations and to F-groups other than 𝐆.
Moy-Prasad filtrations over F are defined with respect to the standard valuation v_F on F. Moy-Prasad filtrations over extensions of F are also defined with respect to v_F. Thus we have intersection formulas such as 𝐆(E)_y,r∩ G = G_y,r.
Let
E_r =E∩ v_F^-1([r,∞])
and, for r>0,
E_r^× = 1+ E_r.
Given a point y in the reduced building ℬ_ red(𝐆 , F), we let 𝖦_y denote the associated reductive group defined over the residue field 𝔣 of F. Thus 𝖦_y(𝔉) = G(F^ un)_y,0:0+, where F^ un is the maximal unramified extension of F in F̄, and 𝔉 is the residue field of F^ un (which is an algebraic closure of 𝔣).
§.§ The torus 𝐓 and the subtori 𝐓_𝒪
The construction requires the choice of a maximal, elliptic F-torus 𝐓 in 𝐇 with the properties:
* The splitting field E in F̄ of 𝐓 over F is a tamely ramified extension of F. (Note that E/F is automatically a finite Galois extension.)
* The point x is the unique fixed point of Gal(E/F) in the apartment 𝒜_ red(𝐇,𝐓, E).
In choosing 𝐓, we are appealing to some known facts:
* ℬ_ red(𝐇 , F)= ℬ_ red(𝐇,E)^ Gal(E/F), since E/F is tamely ramified, according to a result of Rousseau <cit.>. (See also <cit.>.)
* 𝒜_ red (𝐇,𝐓,F) consists of a single point, since 𝐓 is elliptic.
* Choose an elliptic maximal f-torus 𝖳 in 𝖧_x.(See Lemma 2.4.1 <cit.>.)
* Choose a maximal F^ un-split torus in such that
* is defined over F,
* x∈𝒜_ red(,,F^ un), and
* 𝖳(F) is the image of (F^ un)∩ (F^ un)_x,0 in 𝖧_x(F).
(See Lemma 2.3.1 <cit.>.)
* Take to be the centralizer of in .
Such tori are called “maximally unramified elliptic maximal tori" in 3.4.1 <cit.>, and various reformulations of the definition are provided there. For example, an elliptic maximal F-torus is maximally unramified if it is contained in a Borel subgroup of 𝐇 over F^ un. Lemma 3.4.4 <cit.> says that any two maximally unramified elliptic maximal tori in 𝐇 corresponding to the same point in ℬ_ red(𝐇,F) must be H_x,0+-conjugate.
Lemma 3.4.18 <cit.> says every regular depth zero supercuspidal representation of H comes from a regular depth zero character of the F-rational points of a maximally unramified elliptic maximal torus of 𝐇.
It is necessarily the case that T⊂ H_x since T preserves both 𝒜_ red(𝐇,𝐓,E) and ℬ_ red (𝐇,F), and the intersection of the latter spaces is { x}.
When H_ der is compact, every F-torus in 𝐇 is elliptic and ℬ_ red (𝐇,F) is a point x, and so H_x =H. In this case, every maximal F-torus in 𝐇 is elliptic and x is (trivially) the unique Gal(E/F)-fixed point in 𝒜_ red(𝐇,𝐓,E). This example illustrates, in the extreme, that the point x does not determine the torus 𝐓.
Let Φ = Φ (𝐆,𝐓) and Γ = Gal(F̄/F). Given a Γ-orbit 𝒪 in Φ, let 𝐓_𝒪 be the torus generated by the tori 𝐓_a = image(ǎ) as a varies over 𝒪.
For positive depths, the Moy-Prasad filtration of T_𝒪 = 𝐓_𝒪(F) is given using the norm map
N_E/F: 𝐓_𝒪(E) → T_𝒪
t ↦ ∏_γ∈ Gal(E/F)γ (t)
as follows. If a∈𝒪 and r>0 then, according to Lemma <ref>,
T_𝒪,r =N_E/F(ǎ (E_r^×))= N_E/F(𝐓_a(E)_r)= N_E/F(𝐓_𝒪(E)_r),
for all a∈𝒪.
Though the tori 𝐓_𝒪 are not explicitly considered in <cit.>, the norm map on 𝐓_a(E) plays a prominent role there, and our use of the norm map has been influenced by the theory in <cit.>. (See <cit.>.)
§.§ Recovering r⃗ and 𝐆⃗
A key step in our construction is to recursively construct from ϕ a sequence
𝐆⃗ = (𝐆^0,… , 𝐆^d)
of subgroups
𝐇 = 𝐆^0⊊⋯⊊𝐆^d=𝐆
and a sequence
r⃗ = (r_0,… ,r_d)
of real numbers
0≤ r_0 <⋯ < r_d-1≤ r_d.
We construct the groups recursively in the order 𝐆^d,… ,𝐆^0 (and similarly for r_d,… , r_0), so d should be treated initially as an unknown nonnegative integer whose value only becomes evident at the end of the recursion.
This indexing is compatible with <cit.>, but it is the opposite of what is done in <cit.>.
To begin the recursion, we take 𝐆^d = 𝐆 and
r_d = depth(ρ) = depth(ϕ).
When 𝐇 = 𝐆, we declare that d=0.
Now assume 𝐇⊊𝐆. Suppose i∈{ 0,… , d-1} and 𝐆^i+1 has been defined and strictly contains 𝐇.
In general, given 𝐆^i+1, we take
𝐇^i = 𝐆^i+1_ der∩𝐇, 𝐓^i = 𝐆^i+1_ der∩𝐓.
Note that 𝐓^i is the
torus generated by the tori 𝐓_𝒪 associated to Γ-orbits 𝒪 in Φ (𝐆^i+1,𝐓).
(See <cit.>.)
To say 𝐓 is elliptic in 𝐆 means that 𝐓/𝐙_𝐆 is F-anisotropic or, equivalently, 𝐆_ der∩𝐓 is F-anisotropic. Since we assume 𝐙_𝐇/𝐙_𝐆 is F-anisotropic, 𝐓 must be elliptic in 𝐆. So 𝐓^d-1, and hence 𝐓^i, must be F-anisotropic.
It follows that x determines unique points (that we also denote x) in ℬ_ red(𝐆^i+1,F) and ℬ_ red (𝐆^i,F).
Our conventions regarding embeddings of buildings are similar to those in <cit.>. See, in particular, Remark 3.3 <cit.>, regarding the embedding of ℬ_ red(𝐆^i,F) in ℬ_ red(𝐆^i+1,F).
It should be understood that the images of x in the various buildings depend on the choice of 𝐓. As in <cit.>, it will be easy to see that varying 𝐓 does not change the isomorphism class of the resulting supercuspidal representation. (See the discussion after Lemma 1.3 <cit.>.)
At this point, we encounter a technical issue involving the residual characteristic p.
Suppose first that p does not divide the order of the fundamental group π_1(𝐆_ der).
Then we define
r_i = depth(ϕ|H^i_x,0+) = depth(ϕ |T^i_0+).
The latter identity of depths is a consequence of Lemma <ref>.
If p divides the order of π_1(𝐆_ der), we can simply replace 𝐆 by a z-extension 𝐆^♯ of the type considered in <ref>. In other words, we identify G with G^♯ /N, where 𝐍 is the kernel of the z-extension, and apply our construction to 𝐆^♯ instead of 𝐆.
Though it is somewhat tedious, the latter case can also be translated into terms that are intrinsic to G. (In other words, one does not need to refer to the z-extension, but one does need to refer to the universal cover of _ der.)
To appreciate the source of complications, consider, for example, the character ϕ of H_x,0+. According to Lemma 3.5.3 <cit.>, we have a surjection H^♯_x,0+→ H_x,0+, and thus the pullback ϕ^♯ of ϕ to H^♯_x,0+ captures all of the information carried by ϕ.
But we may not have similar surjections associated to the groups H^i_x,0+ and T^i_0+, and this creates difficulties. The appropriate depths r_i for the construction may no longer be the depths of the restrictions of ϕ to H^i_x,0+ and T^i_0+. Therefore, we cannot simply define the r_i's as above.
In light of the previous remarks, we generally assume p does not divide the order of π_1(𝐆_ der) when we describe our construction, though we do consider more general p when considering the complete family of representations that the construction captures. (See Lemma <ref>.)
Continuing with the construction, we now take 𝐆^i to be the (unique) maximal subgroup of 𝐆 such that:
* 𝐆^i is defined over F.
* 𝐆^i contains 𝐓.
* 𝐆^i is a Levi subgroup of 𝐆 over E.
* ϕ |H^i-1_x,r_i=1.
(It follows from Lemma <ref> that the condition ϕ |H^i-1_x,r_i=1 is equivalent to the condition ϕ |T^i-1_r_i=1.)
As soon as 𝐆^i = 𝐇, the recursion ends. The value of i for which 𝐆^i=𝐇 is declared to be i=0. The values of d and the other indices also become evident at this point.
There is an alternate way to construct r⃗ and 𝐆⃗ that is direct (as opposed to being recursive) and clarifies the meaning of G^i and r_i, as well as the fact that these objects are well defined. It exploits the choice of 𝐓 and is inspired by 3.7 <cit.>.
One can define
r_0<⋯ < r_d-1
to be the sequence of positive numbers that occur as depths of the various characters ϕ |T_𝒪,0+ for 𝒪∈Γ\Φ. (This does not preclude the possibility that ϕ |T_𝒪,0+=1 for some orbits 𝒪.)
Next, when i<d, we take
Φ^i = ⋃_𝒪∈Γ\Φ, ϕ|T_𝒪,r_i=1𝒪.
One can define 𝐆^i to be the unique E-Levi subgroup of 𝐆 that contains 𝐓 and is such that Φ (𝐆^i,𝐓) = Φ^i.
To connect our narrative with Yu's construction, it might be helpful to think in terms of Howe's theory of factorizations of admissible quasicharacters (and its descendants), even though our approach avoids factorization theory. In the present context, a factorization of ϕ would consist of quasicharacters ϕ_i : G^i →ℂ^× such that ϕ = ∏_i=0^d (ϕ_i|H_x,0+). But the factors ϕ_i+1,… , ϕ_d are trivial on [G^i+1,G^i+1]∩ H_x,0+ and the factors ϕ_0,… , ϕ_i-1 are trivial on H_x,r_i. Therefore, on [G^i+1,G^i+1] ∩ H_x,r_i, the character ϕ coincides with the quasicharacter ϕ_i. Lemma <ref> (with 𝐇 replaced by 𝐆^i+1) implies that [G^i+1,G^i+1] ∩ H_x,r_i = H^i_x,r_i, so long as p does not divide the order of the fundamental group of 𝐆^i+1_ der.
Hence, ϕ|H^i_x,r_i = ϕ_i|H^i_x,r_i. Our present approach to constructing supercuspidal representations is partly motivated by the heuristic that the essential information carried by the factor ϕ_i is contained in the restriction ϕ_i|H^i_x,r_i and, since this restriction agrees with ϕ |H^i_x,r_i, there is no need to use factorizations. (See Lemma <ref> for more details.)
Sequences such as 𝐆⃗ = (𝐆^0,… ,𝐆^d) are called “tamely ramified twisted Levi sequences" (Definition 2.43 <cit.>), and one may view 𝐆^i+1 as being constructed from 𝐆^i by adding unipotent elements. By contrast, if 𝐓^d = 𝐓 and 𝐓⃗ = (𝐓^0,… ,𝐓^d) then each 𝐓^i+1 is obtained from 𝐓^i by adding semisimple elements.
§.§ The inducing subgroup K
The objective of our construction is to associate to ρ an equivalence class of supercuspidal representations of G. These representations are induced from a certain compact-mod-center subgroup K of G that is defined in this section. Later, we define an equivalence class of representations κ of K by canonically defining the (common) character of these representations κ. The representation π of G induced from such a κ will lie in the desired equivalence class of supercuspidal representations π associated to ρ.
For each i∈{ 0,… , d-1}, let
s_i= r_i/2
and
J^i+1 = (G^i,G^i+1)_x,(r_i,s_i)
J^i+1_+ = (G^i,G^i+1)_x,(r_i,s_i+)
J^i+1_++ = (G^i,G^i+1)_x,(r_i+,s_i+).
(The notations on the right hand side follow <cit.>.
See also the discussion in <ref> below regarding the groups 𝐆⃗_x,t⃗ associated to the twisted Levi sequence 𝐆⃗ = (𝐆^i,𝐆^i+1) and admissible sequences of numbers.)
Our desired inducing subgroup K is
given by
K= H_xJ,
where
J = J^1⋯ J^d.
§.§ The subgroup K_+
The (equivalent) representations κ of K mentioned above will turn out to restrict to a multiple of a certain canonical character ϕ̂ of a certain subgroup K_+.
The subgroup K_+ is
given by
K_+ = H_x,0+L_+,
where
L_+ = (ϕ)L^1_+⋯ L^d_+,
L^i+1_+ = G^i+1_ der∩ J^i+1_++.
The character ϕ̂ is the inflation
ϕ̂= inf_H_x,0+^K_+ (ϕ )
of ϕ to K_+, that is, ϕ̂(hℓ) = ϕ (h), when h∈ H_x,0+ and ℓ∈ L_+. More details regarding the definition of ϕ̂ are provided in <ref>.
The inflation of characters from H_x,0+ to K_+ just considered should not be confused with a different inflation procedure considered in <cit.>, where we are using the decomposition K_+ = H_x,0+J^1_+⋯ J^d_+.
§.§ The dual cosets
Fix i∈{ 0,… , d-1} and define 𝐙^i,i+1 to be the
F-torus
𝐙^i,i+1 = (𝐆^i+1_ der∩𝐙^i)^∘,
where 𝐙^i is the center of 𝐆^i.
Fix a character ψ of F that is trivial on the maximal ideal 𝔓_F but nontrivial on the ring of integers 𝔒_F.
The dual coset (ϕ |Z^i,i+1_r_i)^* of ϕ |Z^i,i+1_r_i is the coset in z^i,i+1,*_-r_i:(-r_i)+ consisting of elements
X^*_i in z^i,i+1,*_-r_i that represent ϕ |Z^i,i+1_r_i in the sense that
ϕ ( exp (Y+z^i,i+1_r_i+)) = ψ (X^*_i(Y)), ∀ Y∈z^i,i+1_r_i.
(The notation “exp” here is for the Moy-Prasad isomorphism on z^i,i+1_r_i:r_i+. See <ref> for more details.)
Let z^i,i+1 be the Lie algebra of Z^i,i+1, and let z^i,i+1,* be the dual of z^i,i+1.
The decomposition
g^i = z^i⊕g^i_ der
restricts to a decomposition
h^i = z^i,i+1⊕h^i-1.
Using this, we associate to each coset in z^i,i+1,*_-r_i:(-r_i)+
a character of H^i_x,r_i:r_i+ that is trivial on H^i-1_x,r_i:r_i+. The character of H^i_x,r_i:r_i+ associated to the dual coset (ϕ |Z^i,i+1_r_i)^* is just the restriction of ϕ since ϕ |H^i-1_x,r_i=1.
In other words, if X∈z^i,i+1_r_i and Y∈h^i-1_x,r_i then
ϕ (exp (X+Y+ h^i_x,r_i+)) = ψ (X^*_i(X)),
where X^*_i is any element of (ϕ |Z^i,i+1_r_i)^* and
exp: h^i_x,r_i:r_i+≅ H^i_x,r_i:r_i+
is the isomorphism defined in <ref>.
Lemma <ref> establishes that every element X^*_i of the dual coset (ϕ |Z^i,i+1_r_i)^* is 𝐆^i+1-generic of depth -r_i. (Here, we are using condition (4) in Definition <ref>, as well as Lemma 8.1 <cit.>.)
The decomposition h^i = z^i,i+1⊕h^i-1
restricts to
t^i = z^i,i+1⊕t^i-1,
where t^i and t^i-1 are generated by the t_𝒪's associated to Φ (𝐆^i+1,𝐓) and Φ (𝐆^i,𝐓), respectively. We caution that z^i,i+1 is usually not generated by t_𝒪's and, in fact, may not contain any t_𝒪's.
§.§ The construction
For each i∈{ 0,… ,d-1}, the
elements X^*_i of the dual coset (ϕ |Z^i,i+1_r_i)^* also represent a character of J^i+1_+ that we denote by ζ_i. By construction, ζ_i is the inflation of a 𝐆^i+1-generic character of G^i_x,r_i of depth r_i.
Since G^i_x,r_i:r_i+ is isomorphic to a finite vector space over a finite field of characteristic p, every character of G^i_x,r_i:r_i+, in particular ζ_i, must take values in the group μ_p of complex p-th roots of unity.
The space V of our inducing representation κ will be a tensor product
V = V_ρ⊗ V_0⊗⋯⊗ V_d-1
of V_ρ with spaces V_i, such that each V_i is the space of a Heisenberg representation τ_i attached to the character ζ_i.
The next step is to define these Heisenberg representations.
Let W_i be the quotient J^i+1/J^i+1_+ viewed as an 𝔽_p-vector space with a nondegenerate (multiplicative) symplectic form
⟨ uJ^i+1_+ ,vJ^i+1_+⟩ = ζ_i(uvu^-1v^-1).
(The fact that this is a nondegenerate symplectic form is shown in <cit.>.)
Let
ℋ_i be the (multiplicative) Heisenberg p-group such that
ℋ_i = W_i×μ_p
with multiplication given by
(w_1,z_1)(w_2,z_2) = (w_1w_2, z_1z_2 ⟨ w_1,w_2⟩^(p+1)/2).
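As a quick consistency check on this multiplication law (a short verification in the notation above, not taken from the source): since the form is alternating, ⟨ w,w⟩ =1, so the inverse of (w,z) is (-w,z^-1), and a direct computation gives the group commutator
(w_1,1)(w_2,1)(w_1,1)^-1(w_2,1)^-1 = (0, ⟨ w_1,w_2⟩^p+1) = (0, ⟨ w_1,w_2⟩),
since the form takes values in μ_p. Thus the commutator pairing of ℋ_i recovers exactly the symplectic form on W_i, as it should for a Heisenberg p-group.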
Let (τ_i,V_i) be a Heisenberg representation of ℋ_i whose central character is the identity map on μ_p. (Up to isomorphism, τ_i is unique.)
Let 𝒮_i be the symplectic group Sp(W_i). We let 𝒮_i act on ℋ_i via its natural action on the first factor of W_i×μ_p. The semidirect product 𝒮_i⋉ℋ_i is the Cartesian product 𝒮_i×ℋ_i with multiplication
(s_1,h_1)(s_2,h_2)= (s_1s_2, (s_2^-1h_1)h_2).
Except for the case in which p=3 and dim_𝔽_p(W_i)=2, there is a unique extension τ̂_i of τ_i to a representation
τ̂_i :𝒮_i⋉ℋ_i→ (V_i).
When p=3 and dim_𝔽_p(W_i)=2, we also have a canonical lift τ̂_i that is specified in <cit.>.
Given h∈ H_x, let
ω_i(h) = τ̂_i( Int(h),1),
where Int(h) comes from the conjugation action of h on J^i+1.
We now state our main result:
Suppose ρ is a permissible representation of H_x.
Up to isomorphism, there is a unique representation κ = κ (ρ) of K such that:
(1) The character of κ has support in H_xK_+.
(2) κ |K_+ is a multiple of ϕ̂.
(3) κ | H_x = ρ⊗ω_0⊗⋯⊗ω_d-1.
Then π (ρ)= ind_K^G(κ (ρ)) is an irreducible, supercuspidal representation of G whose isomorphism class is canonically associated to ρ. The isomorphism class of every tame supercuspidal representation of G constructed by Yu in <cit.> contains a representation π (ρ).
The fact that the equivalence class of κ (ρ) is well-defined and canonically associated to ρ is proven in Lemma <ref>.
The proof of the latter result also establishes that κ (ρ) is determined by the properties listed above.
The fact that π (ρ) is irreducible is proven in Lemma <ref>. This implies that π (ρ) is also supercuspidal, as explained on page 12 of <cit.>.
Lemma <ref> establishes the connection between our construction and Yu's construction.
If ρ is permissible and Θ_π is the smooth function on the regular set of G that represents the character of π = π(ρ) then Θ_π is given by the Frobenius formula
Θ_π (g) =∑_C∈ K\G/K ∑_h∈ Cχ̇_κ (h gh^-1),
where χ̇_κ is the function on G defined by
χ̇_κ (jk) = ϕ̂(k)· tr(ρ (j))· tr (ω_0(j))⋯ tr (ω_d-1(j)),
for j∈ H_x and k∈ K_+, and χ̇_κ≡ 0 on G-H_xK_+.
In particular, Θ_π (g) =0 when g does not lie in the G-invariant set generated by H_xK_+.
We refer to <cit.> for details regarding the Frobenius formula.
§ TECHNICAL DETAILS
This chapter is essentially a collection of technical appendices to the previous chapter.
§.§ z-extensions
A z-extension of 𝐆 (over F)
consists of a connected reductive F-group 𝐆^♯ with simply connected derived group together with an associated exact sequence
1→𝐍→𝐆^♯→𝐆→ 1
of F-groups
such that 𝐍 is an induced torus that embeds in the center of 𝐆^♯.
To say that 𝐍 is an induced torus means that it is a product of tori of the form Res_L/F(𝔾_m), where each L is a finite extension of F.
This implies that the Galois cohomology H^1(F,𝐍) is trivial and hence
the cohomology sequence associated to the above exact sequence yields
an exact sequence
1→ N→ G^♯→ G→ 1
of F-points.
Therefore every representation of G may be viewed as a representation of G^♯ that factors through G.
Not only do z-extensions always exist, but we can always choose a z-extension 𝐆^♯ of 𝐆 such that 𝐍 is a product of factors Res_E/F(𝔾_m), where, as usual, E is the splitting field of 𝐓 over F.
In this case, the preimage 𝐓^♯ of 𝐓 in 𝐆^♯ is an E-split maximal torus in 𝐆^♯.
Assume, at this point, that we have fixed such a z-extension.
The terminology “z-extension” and our definition are derived from <cit.>, but Kottwitz refers to Langlands's article
<cit.> as the source of the notion. Existence results for z-extensions can be found in <cit.> and <cit.>.
The Philosophy of z-extensions. When studying the representation theory of general connected reductive F-groups, it suffices to consider the representation theory of groups with simply connected derived group. For groups of the latter type, technicalities involving bad primes (such as those related to the Kneser-Tits problem) are minimized (or eliminated).
Here, we are using the terminology “bad primes” to informally refer to any technical issues that might cause a general theory of representations of connected reductive F-groups to break down for a finite number of residual characteristics p of the ground field F.
Let 𝐇^♯ be the preimage of 𝐇 in 𝐆^♯.
Note that we have a natural identification of Φ (𝐆,𝐓) with Φ (𝐆^♯,𝐓^♯). Then the center Z(𝐇) is the intersection of the kernels of the roots in Φ (𝐇,𝐓). It is easy to see that the center of 𝐇^♯ must be the intersection of the kernels of the roots in Φ (𝐆^♯,𝐓^♯) that correspond to elements of Φ (𝐇,𝐓).
Since 𝐇^♯ is the centralizer in 𝐆^♯ of the torus Z(𝐇^♯)^∘, it must be the case that 𝐇^♯ is a Levi subgroup of 𝐆^♯ (over E). It follows that the fundamental group of 𝐇^♯_ der embeds in the (trivial) fundamental group of 𝐆^♯_ der.
(This is observed in <cit.> and is easy to prove directly.)
It also follows
that our z-extension of 𝐆 restricts to a z-extension
1→𝐍→𝐇^♯→𝐇→ 1
of 𝐇.
Next, we observe that we have a natural Gal(E/F)-equivariant identification of reduced buildings
ℬ_ red(𝐇^♯ ,E)≅ℬ_ red(𝐇,E)
as simplicial complexes. If we identify the groups
𝐇^♯ (E) / 𝐍(E) = (𝐇^♯ /𝐍)(E) ≅𝐇 (E)
then 𝐇(E) has the same action on ℬ_ red(𝐇^♯ ,E) and ℬ_ red(𝐇,E).
(Recall, from <cit.>, that associated to each Chevalley basis of the Lie algebra 𝔥_ der of 𝐇_ der is a point in ℬ_ red(𝐇,E). But 𝔥_ der is naturally identified with the Lie algebra of 𝐇^♯_ der.
So a Chevalley basis determines a point in both ℬ_ red(𝐇,E)
and ℬ_ red(𝐇^♯ ,E). Identifying these points determines our identification of reduced buildings.)
Given x∈ℬ_ red(𝐇,E), we have an exact sequence
1→𝐍(E)→𝐇^♯(E)_x→𝐇 (E)_x→ 1
of stabilizers of x, as well as exact sequences
1→𝐍(E)_r→𝐇^♯ (E)_x,r→𝐇(E)_x,r→ 1
for all r≥ 0.
There are similar exact sequences
1→ N→ H^♯_x→ H_x→ 1
and
1→ N_r→ H^♯_x,r→ H_x,r→ 1
for F-rational points.
(See <cit.>.)
Suppose ρ is a representation of H_x and ρ^♯ is its pullback to a representation of H^♯_x. Then ρ is permissible if and only if ρ^♯ is. The representation π^♯ of G^♯ associated to ρ^♯ is precisely the pullback of the representation π of G associated to ρ. The correspondence π^♯↔π gives a bijection between the representations of G that come from applying our construction to G^♯ and the representations that come from applying our construction directly to G.
To end this section, we note a complication that occurs for the group 𝐒𝐋_n. Even though 𝐒𝐋_n is simply connected, its dual PGL_n is not, and this introduces the issues discussed above in Remark <ref>. Thus taking z-extensions does not remove all problems related to the residual characteristic.
§.§ Commutators
In this section, we collect a few facts about commutators.
Let 𝐇(E)^+ be the (normal) subgroup of 𝐇(E) generated by the E-rational elements of the unipotent radicals of parabolic subgroups of 𝐇 that are defined over E.
We recall now some basic facts about 𝐇(E)^+. (See <cit.> and <cit.>.)
Since E is a perfect field, 𝐇(E)^+ may also be described as the group generated by the E-rational unipotent elements in 𝐇(E).
Since E is a field with at least 4 elements, any subgroup of 𝐇(E) that is normalized by 𝐇(E)^+ either contains 𝐇(E)^+ or is central. (This follows from the main theorem of <cit.>.)
In particular, 𝐇(E)^+⊂ [𝐇(E),𝐇(E)]. But since 𝐇 is E-split, 𝐇(E)/𝐇(E)^+ must be abelian and hence we deduce that
𝐇(E)^+ = [𝐇(E),𝐇(E)].
If x∈ℬ_ red(𝐇,F) then we have inclusions
H^♭_ der,x,0+ ⊂ [H_ der,H_ der]∩ H_x,0+⊂ [H,H]∩ H_x,0+
⊂ [𝐇(E),𝐇(E)]∩ H_x,0+
= 𝐇(E)^+ ∩ H_x,0+
⊂ H_ der,x,0+.
If the order of the fundamental group π_1(𝐇_ der) is not divisible by p then all of these inclusions are equalities.
The first step is to show H^♭_ der,x,0+⊂ [H_ der,H_ der]∩ H_x,0+.
We observe that
the universal cover 𝐇_ sc of 𝐇_ der decomposes as a direct product
𝐇_ sc = ∏_i 𝐇_i
of F-simple factors 𝐇_i, and,
for each i, we have 𝐇_i = Res_F_i/F𝐇'_i, where 𝐇'_i is an absolutely simple, simply connected F_i-group and F_i is a finite extension of F.
(See <cit.>.)
To prove the desired inclusion, it suffices to show that for each i we have
ξ (H_i,x,0+)⊂ [ξ(H_i),ξ(H_i)],
where ξ is the covering map 𝐇_ sc→𝐇_ der.
Suppose first that 𝐇_i is F-isotropic or, equivalently, 𝐇'_i is F_i-isotropic. We have
H_i^+⊂ [H_i,H_i]⊂ H_i= H_i^+,
where the first inclusion follows from <cit.> and the equality follows from <cit.>. Therefore,
ξ (H_i,x,0+) ⊂ξ([H_i,H_i]) ⊂ [ξ(H_i),ξ(H_i)].
Now suppose 𝐇_i is F-anisotropic. Then H_i = SL_1(D_i), where D_i is a central division algebra over F_i. According to <cit.>, we have
[H_i,H_i]= H_i ∩ (1+𝔓_D_i) = H_i,x,0+,
where 𝔓_D_i is the maximal ideal in D_i. Again, we obtain
ξ (H_i,x,0+)⊂ [ξ(H_i),ξ(H_i)] and hence H^♭_ der,x,0+⊂ [H_ der,H_ der]∩ H_x,0+.
The remaining inclusions follow from the inclusions
[H_ der,H_ der]⊂ [H,H]⊂ [𝐇(E),𝐇(E)]
= 𝐇(E)^+ ⊂𝐇_ der(E).
Now suppose the order of the fundamental group π_1(𝐇_ der) is not divisible by p. The fact that H^♭_ der,x,0+= H_ der, x,0+ is discussed in the proof of <cit.> and it follows from <cit.>.
The previous proof uses standard methods that can also be found in <cit.> and in the proof of <cit.>.
There are several uses for the latter result.
The fact that one often has H^♭_ der,x,0+= H_ der, x,0+ allows one to replace H^♭_ der,x,0+
in the definition of “permissible representation” with a simpler object.
The groups involving commutators are included partly to allow one to link
our theory with Yu's theory, where quasicharacters of H play more of a role. In this regard, we note that a character of H_x,0+ extends to a quasicharacter of H precisely when it is trivial on [H,H]∩ H_x,0+. On the other hand,
the group [H_ der,H_ der]∩ H_x,0+ is relevant to Kaletha's theory and, in particular,
<cit.>.
Another application is the following:
Suppose ρ is a permissible representation of H_x and χ is a quasicharacter of G. Then ρ⊗ (χ|H_x) is also permissible. The representation of G associated to ρ⊗ (χ|H_x) is equivalent to π⊗χ, where π is the representation of G associated to ρ.
Showing that ρ⊗ (χ|H_x) is permissible amounts to showing that
χ | H^♭_ der,x,0+ is trivial. But this results from the inclusions
H^♭_ der,x,0+⊂ [H,H]⊂ [G,G],
which are implied by Lemma <ref>. The second assertion follows directly from the definition of our construction.
Next, we discuss commutators of groups associated to concave functions (as in the previous section) and, in particular, groups associated to admissible sequences.
Given two concave functions f_1 and f_2, a function f_1 ∨ f_2 is defined in <cit.>, and it is shown in Lemma 5.22
of <cit.> that, under modest restrictions, [G_x,f_1, G_x,f_2] ⊂ G_x,f_1∨f_2. (The latter result is a refinement of <cit.>.)
For groups associated to admissible sequences, Corollary 5.18 <cit.> gives the following simpler statement of the result in Lemma 5.22 <cit.>. (The notion of “admissible sequence” is more general in <cit.> than in <cit.>.)
Given admissible sequences a⃗ = (a_0,… , a_d) and b⃗ = (b_0,… , b_d),
then [G⃗_x,a⃗ , G⃗_x,b⃗]⊂G⃗_x,c⃗, where
c⃗ = (c_0,… , c_d), with
c_j =
min{ a_0+b_0,… , a_d+b_d}, if j=0,
min{ a_j+m_b,m_a+b_j, a_j+1+b_j+1,… , a_d+b_d}, if j>0,
and m_a = min{ a_0,… , a_d} and m_b = min{ b_0,… , b_d}.
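As a sanity check on the formula, consider the following toy instance (with d = 1 and sequences chosen for illustration only; (2,1) is an admissible sequence of the familiar shape (r, r/2)):
% d = 1, a = b = (2,1), so m_a = m_b = 1. The formula gives
\[
c_0 = \min\{a_0+b_0,\; a_1+b_1\} = \min\{4,\,2\} = 2,
\qquad
c_1 = \min\{a_1+m_b,\; m_a+b_1\} = 2,
\]
% so c = (2,2) and the lemma asserts [G_{x,(2,1)}, G_{x,(2,1)}] ⊂ G_{x,(2,2)}.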
We frequently will use the previous lemma (explicitly and implicitly) in combination with the following observation: to show that one subgroup H_1 of G normalizes another subgroup H_2 is equivalent to showing [H_1,H_2]⊂ H_2.
§.§ Twists of depth zero representations
The purpose of this section is to examine conditions under which a permissible representation ρ of H_x can be expressed as a tensor product ρ_0⊗χ, where ρ_0 has depth zero and χ is a quasicharacter of H_x. Such a decomposition is useful in applications because the depth zero representations ρ_0 have a simple description, according to the work of Moy and Prasad <cit.>. A secondary goal is to contrast our construction of supercuspidal representations with Yu's construction in the special case in which 𝐇 = 𝐆 or, equivalently, d=0.
We start with a definition:
A quasicharacter χ of H_x is H-normal if it
satisfies one of the following two equivalent conditions:
* χ is trivial on [H,H_x]∩ H_x.
* The intertwining space
I_h(χ) = Hom_hH_xh^-1∩ H_x(^hχ, χ)
is nonzero for all h∈ H.
The next result and its corollary have obvious proofs:
If ρ is an irreducible, smooth representation of H_x and χ is an H-normal quasicharacter of H_x then there is a set-theoretic identity
I_h(ρ) = I_h(ρ⊗χ),
for all h∈ H, where
I_h(ρ)= Hom_hH_x h^-1∩ H_x(^hρ ,ρ).
If ρ is an irreducible, smooth representation of H_x and χ is an H-normal quasicharacter of H_x then the representation ind_H_x^H(ρ) is irreducible if and only if ind_H_x^H(ρ⊗χ) is irreducible.
The latter result raises the question of when, given a permissible representation ρ of H_x, one can twist ρ by an H-regular quasicharacter to obtain a depth zero representation, that is, a representation of H_x whose restriction to H_x,0+ is a multiple of the trivial representation.
This is answered in the following:
Suppose ρ is a permissible representation of H_x that restricts on H_x,0+ to the character ϕ. Then the following are equivalent:
* There exists an H-normal quasicharacter χ of H_x and a depth zero representation ρ_0 of H_x such that ρ = ρ_0⊗χ.
* ϕ extends to an H-normal quasicharacter χ of H_x.
* ϕ |([H,H_x]∩ H_x,0+)=1.
For Yu's tame supercuspidal representations, the conditions in the previous lemma can be replaced by the following more restrictive conditions:
* There exists a quasicharacter χ of H and a depth zero representation ρ_0 of H_x such that ρ = ρ_0⊗ (χ |H_x).
* ϕ extends to a quasicharacter χ of H.
* ϕ |([H,H]∩ H_x,0+)=1.
If p does not divide the order of π_1(𝐇_der) then every permissible representation ρ of H_x may be expressed as ρ_0⊗ (χ|H_x), where ρ_0 is a depth zero representation of H_x and χ is a quasicharacter of H.
Under the condition on p, Lemma <ref> implies that H^♭_der,x,0+ = [H,H]∩ H_x,0+. Therefore the character ϕ of H_x,0+ associated to ρ extends to a quasicharacter χ of H. The representation ρ_0 = ρ⊗ (χ^-1|H_x) has depth zero.
If p divides the order of π_1(𝐇_der), then the possibility that H^♭_der,x,0+ is strictly contained in [H,H]∩ H_x,0+
opens up the possibility of new supercuspidal representations not captured by Yu's construction.
We now compare our theory with Yu's theory in the case of 𝐇 = 𝐆. In this case, both theories minimally extend the Moy-Prasad theory of depth zero supercuspidal representations.
Let 𝔣 be the residue field of F, viewed as a subfield of the residue field 𝔉 of the maximal unramified extension F^un of F contained in F̄.
Let 𝖦_x be the 𝔉-group with 𝖦_x (𝔉) = 𝐆(F^un)_x,0:0+ and the natural Galois action. The group 𝖦_x (𝔣) is identified with G_x,0:0+.
Recall from <cit.> that if ρ^∘ is a representation of G_x,0 then
(G_x,0,ρ^∘) is called a depth zero minimal K-type
if it is the inflation of an irreducible cuspidal representation of G_x,0:0+ = 𝖦_x (𝔣).
According to <cit.> and <cit.>, every irreducible, depth zero, supercuspidal representation π of G has the form ind_G_x^G (ρ_0), where ρ_0 is a smooth, irreducible representation of G_x whose restriction to G_x,0+ is a multiple of the trivial representation and whose restriction to G_x,0 contains a depth zero minimal K-type.
In fact, every smooth, irreducible representation of G_x that induces π must contain a depth zero minimal K-type, since any two representations of G_x that induce π must be conjugate by an element of G that normalizes G_x.
Yu's tame supercuspidal representations π in the special case of 𝐇 = 𝐆 are constructed as follows. One starts with a pair (ρ_0 ,χ), where ρ_0|G_x,0+ is 1-isotypic, ρ_0 induces an irreducible representation of G, and χ is any quasicharacter of G, and then one puts
π = _G_x^G (ρ_0⊗ (χ |G_x)).
Consider now the representation ρ =ρ_0 ⊗ (χ|G_x). In order for ρ to be permissible, it must satisfy the following:
(1) ρ induces an irreducible representation of G,
(2) ρ|G_x,0+ is a multiple of some character ϕ,
(3) ϕ| H^♭_ der,x,0+=1.
Condition (2) is satisfied with ϕ = χ|G_x,0+.
Let χ^♯ be the pullback of χ to a quasicharacter of a z-extension G^♯ of G. Then Lemma 3.5.1 <cit.> implies that χ^♯ is trivial on G^♯_ der,x,0+. Condition (3) follows. But the restriction of χ to G_x is G-normal and hence condition (1) is satisfied by Corollary <ref>. So ρ must be permissible.
It is now evident that when 𝐇 = 𝐆 our framework captures all of Yu's supercuspidal representations for which d=0, and perhaps more in view of Remarks <ref> and <ref>.
At first glance, it appears that conditions (2) and (3) in the definition of “permissible representation” (Definition <ref>) should be eliminated since they are either unnecessary or they reduce the number of supercuspidal representations constructed. On the other hand, using condition (1) alone yields a theory with no content.
When 𝐇 is a proper subgroup of 𝐆, it will be readily apparent how all three conditions are needed to construct supercuspidal representations.
§.§ Root space decompositions and exponential maps
In this section, with F as usual, 𝐆 can be taken to be any connected reductive group that splits over E. We also fix a point x in the reduced building ℬ_red(𝐆,F).
Let ℳ consist of the following data:
* a maximal F-torus 𝐓 that splits over E and contains x in its apartment 𝒜_red(𝐆,𝐓,E),
* a ℤ-basis χ_1,…, χ_n for the character group X^*(𝐓),
* a linear ordering of Φ = Φ(𝐆,𝐓).
The theory of mock exponential maps was developed by Adler <cit.> and our approach is adapted from his, together with theory from <cit.> and <cit.>.
For positive r in
ℝ̃ = ℝ ∪ { s+ : s∈ℝ} ∪ {∞},
the mock exponential map on 𝔱(E)_r,
ℳ-exp: 𝔱(E)_r → 𝐓(E)_r,
is defined by the condition
χ_i (ℳ-exp (X)) = 1+dχ_i (X),
for all X∈𝔱(E)_r and 1≤ i≤ n.
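For instance, if one takes 𝐓 to be the diagonal torus in GL_2 with the coordinate characters χ_1, χ_2 (a hypothetical choice of ℳ, made here only for illustration), the defining condition forces:
% dχ_i(diag(X_1,X_2)) = X_i, so the defining property reads
\[
\mathcal{M}\text{-}\exp\big(\operatorname{diag}(X_1,X_2)\big)
= \operatorname{diag}(1+X_1,\; 1+X_2),
\]
% which lies in T(E)_r because X_1, X_2 have depth at least r > 0.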
Next, for each root a∈Φ, we have an exponential map
exp: 𝔤_a(E)_x,r → 𝐔_a(E)_x,r.
Recall that 𝐔_a(E)_x,r is the group (denoted X_α in <cit.>) that Tits associates to the affine function α that has gradient a and satisfies α (x) = r. Thus 𝐔_a(E)_x,r = exp(E_α(*)X_a), where * is the point in 𝒜_red(𝐆,𝐓,E) that Tits associates to some Chevalley basis that contains the root vector X_a∈𝔤_a. Since α (*) = r−⟨ x−*,a⟩, we have
𝐔_a(E)_x,r = exp(𝔤_a(E)_x,r),
where
𝔤_a(E)_x,r = E_r−⟨ x−*,a⟩ X_a.
Now fix a function
f: Φ∪{ 0}→ (0,∞ )
that is concave in the sense that
f( ∑_i a_i) ≤∑_i f(a_i),
where we are summing over a set of elements of Φ∪{0} whose sum is also in Φ∪{ 0}.
Then Bruhat-Tits consider the multiplication map
μ : 𝐓(E)_f(0) × ∏_a∈Φ 𝐔_a(E)_x,f(a) → 𝐆(E)_x,f,
where the product is taken according to some fixed (but arbitrary) ordering of Φ, and 𝐆(E)_x,f is the group generated by 𝐓(E)_f(0) and the 𝐔_a(E)_x,f(a)'s.
It is shown in <cit.> that μ is bijective. (See also <cit.>.) We will consider the map μ associated to the ordering of Φ coming from ℳ.
We refer the reader to <cit.> for more details on our application of <cit.>. Yu explains that the choice of a valuation of the root datum, in the sense of <cit.>, is equivalent to the choice of a point in the reduced building. In particular, it determines filtrations of the various root groups. So it should be understood that when we apply the Bruhat-Tits result, we are choosing the valuation of root data corresponding to our given point x.
The corresponding Lie algebra map
ν : 𝔱(E)_f(0) × ∏_a∈Φ 𝔤_a(E)_x,f(a) → 𝔤(E)_x,f,
is obviously bijective and, in fact, is an E-linear isomorphism.
The mock exponential map
ℳ-exp: 𝔤(E)_x,f → 𝐆(E)_x,f
is defined as the composite
𝔤(E)_x,f  →(ℳ-exp)→  𝐆(E)_x,f
   ↓ ν^-1                                      ↑ μ
𝔱(E)_f(0) × ∏_a∈Φ 𝔤_a(E)_x,f(a)  →  𝐓(E)_f(0) × ∏_a∈Φ 𝐔_a(E)_x,f(a),
where the bottom map is obtained via the (mock) exponential maps discussed above on the factors.
Once ℳ and x are fixed, the various mock exponential maps we get for different f are all compatible in the sense that they are all restrictions of the map associated to the constant 0+.
The functions f of most interest to us are the functions
f(a) =
r_0, if a ∈ Φ(𝐆^0,𝐓) ∪ {0},
r_i, if a ∈ Φ(𝐆^i,𝐓) − Φ(𝐆^{i−1},𝐓) and i > 0,
associated to tamely ramified twisted Levi sequences 𝐆⃗ = (𝐆^0,…,𝐆^d) and admissible sequences r⃗ = (r_0,…, r_d), as in <cit.>. In this case, 𝐆(E)_x,f is denoted 𝐆⃗(E)_x,r⃗. When r⃗ and s⃗ are admissible sequences such that
0<s_i≤ r_i≤min (s_i,… , s_d)+min (s_0,…, s_d),
for all i∈{ 0,… , d}, then the mock exponential determines an isomorphism
exp : 𝔤⃗(E)_x,s⃗:r⃗ → 𝐆⃗(E)_x,s⃗:r⃗
of abelian groups that is independent of ℳ. (See <cit.>.) The latter isomorphism is Gal(E/F)-equivariant and yields an isomorphism
g⃗_x,s⃗:r⃗≅G⃗_x,s⃗:r⃗ on F-points. (See <cit.>.)
The canonical reference for facts about groups associated to concave functions has historically been <cit.>. More concise and modern treatments of this theory can be found in <cit.> and in the work of Yu. Of particular importance to us are the results on commutators that are discussed in the next section.
Suppose χ is a character of G_x,0+ that is trivial on 𝐆(E)^+ ∩ G_x,0+. Then the depth of χ is identical to the depth of χ|T_0+.
Suppose r>0. Then we have a commutative diagram
𝔤(E)_x,r:r+  →(ℳ-exp)→  𝐆(E)_x,r:r+
   ↓ ν^-1                                        ↑ μ
𝔱(E)_r:r+ × ∏_a∈Φ 𝔤_a(E)_x,r:r+  →  𝐓(E)_r:r+ × ∏_a∈Φ 𝐔_a(E)_x,r:r+
of isomorphisms.
We are especially interested in μ, but its properties may be deduced from the properties of the other maps, which are studied in <cit.>.
It is easy to see that μ^-1 maps G_x,r:r+ to a set of the form T_r:r+× S, where S is a subgroup of ∏_a∈Φ 𝐔_a(E)_x,r:r+ such that μ(S) is the image in G_x,r:r+ of a subset of 𝐆(E)^+ ∩ G_x,r.
Suppose χ is a character of G_x,0+ that is trivial on G_x,r+. Then χ |G_x,r determines a character of G_x,r:r+ that factors through μ^-1 to a character of T_r:r+× S that is the product of χ |T_r:r+ with the trivial character of S. It follows that if χ has depth r then its restriction to T_0+ also has depth r.
Similarly, if χ is a character of G_x,0+ that is trivial on T_r+ and (E)^+∩ G_x,0+ then χ determines a character of G_x,r:r+ and a corresponding character of T_r:r+× S that is trivial on S.
It follows that if χ|T_0+ has depth r then χ also has depth r.
§.§ The norm map and the filtrations of the _𝒪's
Fix an orbit 𝒪 of Γ = Gal(F̄/F) in Φ = Φ(𝐆,𝐓), and recall that we have let 𝐓_𝒪 denote the F-torus generated by the E-tori
𝐓_a = image(ǎ) for a∈𝒪.
In this section, we study the Moy-Prasad filtration of the T_𝒪 using
the norm map
N_E/F: 𝐓_𝒪(E) → T_𝒪
t ↦ ∏_γ∈ Gal(E/F)γ (t).
Our main result is:
If a∈𝒪 and r>0 then
T_𝒪,r = N_E/F(ǎ(E_r^×)) = N_E/F(𝐓_a(E)_r) = N_E/F(𝐓_𝒪(E)_r)
for all a∈𝒪.
Given a surjective homomorphism
𝐀 → 𝐁
of F-tori that split over E
the associated homomorphism
A_r→ B_r
is surjective for all r>0, so long as the kernel of the original map is connected or finite with order not divisible by p. (See Lemma 3.1.1 <cit.>.)
We can apply this to the norm map
N_E/F: R_E/F(𝐓_𝒪) → 𝐓_𝒪,
since it is surjective and its kernel is a torus. (See the remarks below, following the proof.)
We deduce that
N_E/F(𝐓_𝒪(E)_r) = T_𝒪,r.
Similarly, the surjection
ǎ : R_E/F(𝔾_m) → 𝐓_a
yields surjections
𝐓_a(E)_r = ǎ(E_r^×)
for all r>0.
(If the kernel is nontrivial then it must have order two, in which case we invoke our assumption that p ≠ 2.)
Since 𝐓_𝒪(E)_r is generated by the groups 𝐓_b(E)_r with b∈𝒪, it follows that
N_E/F(𝐓_a(E)_r) = N_E/F(𝐓_𝒪(E)_r).
Our assertions now follow.
The latter result asserts that, in a certain sense, the norm map “preserves depth,” but it does not say that N_E/F preserves the depths of individual elements.
This is similar to the fact that, since E/F is tamely ramified,
tr_E/F(E_r) = F_r for all real numbers r, even though the trace clearly does not preserve depths of individual elements.
It is not true in general that 𝐓_a(E) = ǎ(E^×) for all roots a that are defined over E.
Consider, for example, 𝐆 = PGL_2 and the subgroup 𝐓_a consisting of cosets g𝐙, where g lies in the diagonal torus of GL_2 and 𝐙 is the center of GL_2. Then ǎ(E^×) consists of cosets
[ u 0; 0 u^-1 ] = [ u^2 0; 0 1 ],
with u∈ E^×, while _a(E) consists of cosets
[ u 0; 0 1 ],
with u∈ E^×.
Let 𝐓 be an F-torus. If L is a field containing E then the norm map
N_E/F: R_E/F(𝐓) → 𝐓
over L is equivalent to the multiplication map
𝐓^n = 𝐓 × ⋯ × 𝐓 → 𝐓.
Accordingly, the norm map must be surjective and its kernel must be a torus. If 𝐓 is E-split then the choice of an E-diagonalization of 𝐓 yields a realization of R_E/F(𝐓) as a product of R_E/F(𝔾_m) factors. Thus the norm gives an explicit realization of every F-torus as a quotient of an induced torus. Similarly, the inclusion of 𝐓 in R_E/F(𝐓) explicitly shows that every F-torus is contained in an induced torus.
(See Propositions 2.1 and 2.2 <cit.>.)
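To make the remark concrete, consider the most basic case (a standard example, included only for orientation):
% E/F quadratic with Gal(E/F) = {1, γ} and T = G_m, so that
% R_{E/F}(G_m)(F) = E^× and the norm map is
\[
N_{E/F}(t) = t\,\gamma(t), \qquad t \in E^{\times}.
\]
% Its kernel is the one-dimensional norm-one torus T^1, which is generally
% not induced; yet the inclusion T^1 ⊂ R_{E/F}(G_m) realizes T^1 inside an
% induced torus, and the norm realizes G_m as a quotient of one.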
§.§ Yu's genericity conditions
This section is a recapitulation of facts from <cit.> that are, implicitly or explicitly, based on results from <cit.>.
Fix i∈{0,…,d−1} and suppose X^*_i lies in the dual coset (ϕ|Z^i,i+1_r_i)^* ∈ 𝔷^i,i+1,*_−r_i:(−r_i)+.
For each root a∈Φ^i+1 = Φ(𝐆^i+1,𝐓), let H_a = dǎ(1) be the associated coroot in the Lie algebra 𝔤^i+1.
In Lemma <ref>, we will show that X^*_i necessarily satisfies Yu's condition (<cit.>):
GE1. v_F(X^*_i(H_a))=-r_i, ∀ a∈Φ^i+1-Φ^i.
For the rest of this section, we assume GE1 is satisfied.
Now let 𝔉 be the residue field of the algebraic closure F̄ of F or, equivalently, the algebraic closure of the residue field 𝔣.
Choose ϖ_r_i ∈ F̄ of depth r_i and note that ϖ_r_iX^*_i has depth 0.
Given χ∈ X^*(𝐓), we have an associated linear form dχ : 𝔱 → 𝔾_a, and the map χ↦ dχ extends to an isomorphism
X^*(𝐓)⊗_ℤ F̄ ≅ 𝔱^*⊗F̄.
Under the latter isomorphism, the element ϖ_r_iX^*_i corresponds to an element of X^*(𝐓)⊗_ℤ O_F̄.
Let X̄^*_i be the residue class of ϖ_r_iX^*_i modulo
X^*(𝐓)⊗_ℤ P_F̄.
Thus
X̄^*_i ∈ X^*(𝐓)⊗_ℤ 𝔉.
Though X̄^*_i is only well-defined up to multiplication by a scalar in 𝔉^× (depending on the choice of ϖ_r_i), it is easy to see that this scalar does not affect our discussion. (In other words, one could treat X̄^*_i as a projective point.)
Let
Φ^i+1_X̄^*_i = { a∈Φ^i+1 : X̄^*_i(H_a)=0}.
Φ^i+1_X̄^*_i = Φ^i.
GE1 implies that X̄^*_i(H_a) ≠ 0 for all a∈Φ^i+1−Φ^i. Since X^*_i∈𝔷^i,* and H_a∈𝔤^i_der for a∈Φ^i, we must have X̄^*_i(H_a)=0 for all a∈Φ^i.
We now observe that the Weyl group
W(Φ^i+1) = ⟨ r_a : a∈Φ^i+1⟩
acts on
X^*(𝐓)⊗_ℤ 𝔉 and we let Z_W(Φ^i+1)(X̄^*_i) denote the stabilizer of X̄^*_i.
Yu's second genericity condition (<cit.>) is:
GE2. Z_W(Φ^i+1)(X̄^*_i) = W(Φ^i) = W(Φ^i+1_X̄^*_i).
W(Φ^i) = W(Φ^i+1_X̄^*_i) is identical to the subgroup of
Z_W(Φ^i+1)(X̄^*_i) generated by the reflections in Z_W(Φ^i+1)(X̄^*_i).
If a∈Φ^i+1 then
r_a(X̄^*_i) = X̄^*_i − X̄^*_i(H_a) a,
and thus r_a ∈ Z_W(Φ^i+1)(X̄^*_i) precisely when a∈Φ^i+1_X̄^*_i = Φ^i.
Since every reflection in W(Φ^i+1) has the form r_a for some a∈Φ^i+1, our claim follows.
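In the simplest rank-one configuration this is transparent (a toy case, not one treated in the text):
% Φ^{i+1} of type A_1, say Φ^{i+1} = {±a}, so W(Φ^{i+1}) = {1, r_a}.
% GE1 forces \bar{X}^*_i(H_a) ≠ 0, so r_a moves \bar{X}^*_i and hence
\[
Z_{W(\Phi^{i+1})}(\bar{X}^{*}_{i}) = \{1\} = W(\Phi^{i}),
\]
% consistent with Φ^i = Φ^{i+1}_{\bar{X}^*_i} = ∅ in this case.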
W(Φ^i) is a normal subgroup of Z_W(Φ^i+1)(X̄^*_i) and the quotient group
Z_W(Φ^i+1)(X̄^*_i)/W(Φ^i) is isomorphic to a subgroup of the torsion subgroup of X^*(𝐓)/ℤΦ^i+1, where ℤΦ^i+1 is the lattice generated by Φ^i+1.
The desired result follows directly from Theorem 4.2(a) in <cit.> upon equating Steinberg's objects
(H,A,L,^AL,Σ^*,W, Z_W(H))
with the objects
(X̄^*_i, 𝔉, X^*(𝐓), X^*(𝐓)⊗_ℤ𝔉, Φ^i+1, W(Φ^i+1), Z_W(Φ^i+1)(X̄^*_i)),
respectively.
Note that Z_W(H)^0 in <cit.> denotes the subgroup of Z_W(H) generated by the reflections in Z_W(H).
(<cit.>)
If p is not a torsion prime for the dual of the root datum of 𝐆^i+1, that is, (X_*(𝐓), Φ̌^i+1, X^*(𝐓), Φ^i+1), then GE1 implies GE2.
§.§ Genericity of the dual coset
This section involves the dual cosets defined in Definition <ref>.
The result we prove is inspired by Lemma 3.7.5 <cit.>.
If i∈{0,…,d−1} then every element X^*_i in the dual coset (ϕ|Z^i,i+1_r_i)^* is 𝐆^i+1-generic of depth −r_i.
Given a∈Φ^i+1-Φ^i, let H_a= dǎ (1).
Let 𝒪 be the Γ-orbit of a.
With the obvious notations, we have the Lie algebra analogue
of Lemma <ref>:
𝔱_𝒪,r_i = tr_E/F(dǎ(E_r_i)) = tr_E/F(E_r_iH_a)
= tr_E/F(𝔱_a(E)_r_i) = tr_E/F(𝔱_𝒪(E)_r_i).
(The proof is analogous to that of Lemma <ref>.)
For u∈ E_r_i, we therefore have
tr_E/F(uH_a) ∈ 𝔱_𝒪,r_i ⊂ 𝔱^i_r_i ⊂ 𝔥^i_x,r_i and
ψ(tr_E/F(u X^*_i(H_a)))
= ψ(X^*_i(tr_E/F(uH_a)))
= ϕ(exp(tr_E/F(uH_a)))
= ϕ(N_E/F(exp(uH_a)))
= ϕ(N_E/F(exp(dǎ(u))))
= ϕ(N_E/F(ǎ(1+u))).
Here, exp is the Moy-Prasad isomorphism 𝔥^i_x,r_i:r_i+ ≅ H^i_x,r_i:r_i+, and we observe that exp restricts to the Moy-Prasad isomorphism 𝔱^i_r_i:r_i+ ≅ T^i_r_i:r_i+.
We now observe that ϕ|T_𝒪,r_i ≠ 1 for all 𝒪∈Γ\Φ^i+1 and, according to Lemma <ref>, we also know that if 𝒪 is the orbit of a then N_E/F(ǎ(E^×_r_i)) = T_𝒪,r_i. Therefore, there exists u∈ E_r_i such that
ψ(tr_E/F(u X^*_i(H_a))) ≠ 1.
This implies v_F (X^*_i (H_a))=-r_i, which shows that condition GE1 of <cit.> is satisfied.
Now we invoke Lemma 8.1 <cit.> or condition (4) in Definition <ref> to conclude that Yu's condition GE2 is also satisfied.
§.§ Heisenberg p-groups and finite Weil representations
Fix i∈{ 0,… ,d-1} and let μ_p be the group of complex p-th roots of unity.
We have a surjective homomorphism
ζ_i : J^i+1_+ →μ_p
which we have used to define a symplectic form on the space
W_i= J^i+1/J^i+1_+
by
⟨ uJ^i+1_+,vJ^i+1_+⟩ = ζ_i (uvu^-1v^-1).
We have also defined
a (multiplicative) Heisenberg p-group structure on
ℋ_i = W_i×μ_p
using the multiplication rule
(w_1,z_1)(w_2,z_2) = (w_1w_2, z_1z_2 ⟨ w_1,w_2⟩^(p+1)/2).
The center of ℋ_i is
𝒵_i = 1×μ_p
and we use z↦ (1,z) to identify μ_p with 𝒵_i.
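The multiplication rule encodes the symplectic form in commutators; the following routine verification (using only that ⟨·,·⟩ is alternating, bimultiplicative, and μ_p-valued) justifies this description of the center:
% Since ⟨w,w⟩ = 1, one has (w,z)^{-1} = (w^{-1}, z^{-1}), and with
% k = (p+1)/2 all cross terms cancel, leaving
\[
(w_1,z_1)(w_2,z_2)(w_1,z_1)^{-1}(w_2,z_2)^{-1}
= \big(1,\ \langle w_1,w_2\rangle^{2k}\big)
= \big(1,\ \langle w_1,w_2\rangle\big),
\]
% because 2k = p+1 ≡ 1 (mod p). Thus (w,z) is central exactly when w lies
% in the radical of the (nondegenerate) form, i.e., w = 1.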
We let H_x act on ℋ_i by
h· (jJ_+^i+1,z) = (hjh^-1J_+^i+1,z).
We are interested in (central) group extensions of W_i by μ_p that are isomorphic to ℋ_i (as group extensions). To say that an extension
1 → μ_p → W̃_i → W_i → 1
of W_i by μ_p is isomorphic to ℋ_i as a group extension means that there exists an isomorphism ι : W̃_i → ℋ_i such that the diagram
1 → μ_p → W̃_i → W_i → 1
      ∥          ↓ ι          ∥
1 → 𝒵_i → ℋ_i → W_i → 1
commutes. Such an isomorphism ι is called a special isomorphism.
(Note that, by the Five Lemma, if we merely assume ι is a homomorphism making the latter diagram commute then it is automatically an isomorphism.)
Recall that the cohomology classes in H^2(W_i,μ_p) correspond to group extensions
W̃_i
of W_i by μ_p.
Given W̃_i, we get a 2-cocycle
c: W_i× W_i→μ_p
by choosing a section s: W_i → W̃_i and defining c by
s(w_1)s(w_2) = c(w_1,w_2) s(w_1w_2).
To say that c is a 2-cocycle means that
c(w_1,w_2) c(w_1w_2,w_3) = (s(w_1)c(w_2,w_3)s(w_1)^-1) c(w_1,w_2w_3).
Indeed, both sides of the latter equation equal
s(w_1)s(w_2)s(w_3)s(w_1w_2w_3)^-1.
Changing the section s has the effect of modifying c by a 2-coboundary.
Indeed, any other section s' must be related to s by
s'(w) = f(w)s(w),
where f is a function f: W_i→μ_p, and then c' = c· df, where
(df)(w_1,w_2) = f(w_1) f(w_2) f(w_1w_2)^-1.
On the other hand, given a 2-cocycle
c : W_i× W_i→μ_p,
we get an extension W̃_i of W_i by μ_p by taking W̃_i = W_i×μ_p with multiplication defined by
(w_1,z_1)(w_2,z_2) = (w_1w_2, c(w_1,w_2) z_1z_2).
Associativity of multiplication is equivalent to the cocycle condition.
The cocycle
c(w_1,w_2) = ⟨ w_1,w_2⟩^(p+1)/2
yields the Heisenberg p-group ℋ_i and we are only interested in cocycles that are cohomologous to this cocycle.
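One checks directly that this prescription satisfies the cocycle condition; since the extension is central, the conjugation in the general identity is trivial and the verification reduces to bimultiplicativity (a routine computation):
% With k = (p+1)/2 and c(w_1,w_2) = ⟨w_1,w_2⟩^k:
\[
c(w_1,w_2)\,c(w_1w_2,w_3)
= \langle w_1,w_2\rangle^{k}\langle w_1,w_3\rangle^{k}\langle w_2,w_3\rangle^{k}
= c(w_2,w_3)\,c(w_1,w_2w_3),
\]
% using ⟨w_1w_2,w_3⟩ = ⟨w_1,w_3⟩⟨w_2,w_3⟩ and ⟨w_1,w_2w_3⟩ = ⟨w_1,w_2⟩⟨w_1,w_3⟩.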
In fact, we are only interested in special isomorphisms that come from homomorphisms ν_i :J^i+1→ℋ_i.
A special homomorphism is an H_x-equivariant homomorphism ν_i : J^i+1→ℋ_i that factors to a special isomorphism J^i+1/ker ζ_i → ℋ_i.
We observe that when ν_i is a special homomorphism then
ker ν_i = ker ζ_i and
the diagram
1 → J^i+1_+/ker ζ_i → J^i+1/ker ζ_i → W_i → 1
          ↓ ζ_i                      ↓ ν_i               ∥
1 →       𝒵_i         →          ℋ_i        →  W_i → 1
commutes.
Our notion of special homomorphism is a variant of the notion of “relevant special isomorphism” in <cit.> which, in turn, is derived from Yu's definition of “special isomorphism” <cit.>. The existence of special homomorphisms follows from the existence of relevant special isomorphisms. (See <cit.>.)
Both ℋ_i and J^i+1/ζ_i are Heisenberg p-groups that are canonically associated to ρ. Special homomorphisms yield isomorphisms between these two Heisenberg p-groups, but the existence of automorphisms of the Heisenberg p-groups leads to the lack of uniqueness of special isomorphisms. One can choose canonical special homomorphisms, however, there is no one canonical choice that is most convenient for all applications. (These matters are discussed in <cit.>.)
We have chosen a Heisenberg representation (τ_i ,V_i) of ℋ_i and we have pulled back τ_i to a representation (τ_ν_i,V_i) of J^i+1. Thus τ_ν_i =τ_i∘ν_i.
We have extended τ_i to a representation τ̂_i of 𝒮_i⋉ℋ_i on V_i and we have let ω_i (h) = τ̂_i ( Int(h),1), for all h∈ H_x.
The next lemma follows routinely from the definitions, but we include a proof to make evident where the H_x-equivariance of ν_i is used.
τ_ν_i(h^-1jh) = ω_i(h)^-1τ_ν_i(j) ω_i (h),
whenever j∈ J^i+1 and h∈ H_x.
Given j∈ J^i+1 and h∈ H_x, our assertion follows from the computation
τ_ν_i(h^-1jh)
= τ̂_i (1,ν_i (h^-1jh))
= τ̂_i (1, Int(h)^-1ν_i(j))
= ω_i(h)^-1τ_ν_i(j) ω_i(h).
§.§ Verifying that ϕ̂ is well defined
Recall from <ref> the following definitions:
L^i+1_+ = G^i+1_der∩ J^i+1_++,
L_+ = (ker ϕ)L^1_+⋯ L^d_+,
K_+ = H_x,0+L_+,
𝐇^i = 𝐆^i+1_der ∩ 𝐇.
We have also stated a provisional definition for the character ϕ̂ of K_+, namely, ϕ̂ = inf_H_x,0+^K_+(ϕ) or, equivalently, ϕ̂(hℓ) = ϕ(h), when h∈ H_x,0+ and ℓ∈ L_+.
The purpose of this section is to verify that the definition of ϕ̂ makes sense.
(1) L_+ is a group that is normalized by H_x,0+.
(2) K_+ = G⃗_x,(0+,s_0+,…, s_d-1+)= H_x,0+J_+.
(3) H ∩ L_+ = ker ϕ.
(4) ϕ̂ is a well-defined character of K_+.
(5) ϕ̂(h) =ζ_i (h), when h∈ G^i+1_ der∩ J^i+1_+ and i∈{ 0,… , d-1}.
Our definition of K_+ is different than the definition in <cit.> and <cit.>, however, Lemma <ref> shows that the different definitions coincide. The notation G⃗_x,(0+,s_0+,…, s_d-1+) is a special case of the notation from <ref> for groups associated to admissible sequences.
By definition, L_+ = (ker ϕ)L^1_+⋯ L^d_+ and since H_x,0+ normalizes each factor L^j_+, so does ker ϕ.
Moreover, L^j_1_+ normalizes L^j_2_+ whenever j_1≤ j_2. (See Remark <ref>.)
It follows from induction on i that the sets
(ker ϕ)L^1_+⋯ L^i_+
are groups, for i = 1,… ,d.
This yields (1).
Now suppose i∈{ 0,… , d-1} and let
𝐔^i,i+1(E)_x,s_i+ = ∏_a∈Φ^i+1−Φ^i 𝐔_a(E)_x,s_i+,
where 𝐔_a is the root group associated to the root a. In the latter definition, we assume we have fixed an ordering of the set Φ^i+1−Φ^i and we use this ordering to determine the order of multiplication in the product.
The resulting set depends on this ordering.
We observe that
𝐔^i,i+1(E)_x,s_i+ ⊆ 𝐆^i+1_der(E) ∩ (𝐆^i,𝐆^i+1)(E)_x,(r_i+,s_i+) ⊆ 𝐆^i+1(E)_x,s_i+
and, according to <cit.>,
𝐆⃗(E)_x,(0+,s_0+,…, s_d-1+)
= 𝐇(E)_x,0+ 𝐆^1(E)_x,s_0+⋯ 𝐆^d(E)_x,s_d-1+,
The latter identity may be sharpened as
𝐆⃗(E)_x,(0+,s_0+,…, s_d-1+)
= 𝐇(E)_x,0+ 𝐔^0,1(E)_x,s_0+⋯ 𝐔^d-1,d(E)_x,s_d-1+,
where we are using <cit.>
and
<cit.>.
More precisely, the Bruhat-Tits result can be used to show that one can rearrange the various products so that the contributions of the individual root groups occur in any specified order. In particular, one can first group together the roots from Φ^0, then the roots from Φ^1 - Φ^0, and so forth.
It follows that
𝐆⃗(E)_x,(0+,s_0+,…, s_d-1+)
= 𝐇(E)_x,0+ ∏_i=0^d-1(𝐆^i+1_der(E) ∩ (𝐆^i,𝐆^i+1)(E)_x,(r_i+,s_i+)).
Using the Bruhat-Tits result or an argument as in the proof of <cit.>, we have:
G⃗_x,(0+,s_0+,…, s_d-1+) = H_x,0+∏_i=0^d-1 L^i+1_+
= H_x,0+L_+ = K_+.
The identity
G⃗_x,(0+,s_0+,…, s_d-1+)= H_x,0+J_+
is established in <cit.>. Thus we have proven (2).
Assertion (3) follows from:
H∩ L^i+1_+ = G^i+1_der∩ H_x,r_i+ = H^i_x,r_i+ ⊂ H^i_x,r_i+1 ⊂ ker ϕ.
We have defined ϕ̂ by ϕ̂(hℓ) = ϕ(h), for h∈ H_x,0+ and ℓ∈ L_+. The fact that this is a well-defined function on K_+ is a consequence of the fact that H_x,0+∩ L_+ = ker ϕ.
We now use (1) and the computation
ϕ̂(h_1ℓ_1h_2ℓ_2) = ϕ̂(h_1h_2(h_2^-1ℓ_1 h_2)ℓ_2)
= ϕ(h_1h_2)
= ϕ̂(h_1ℓ_1)ϕ̂(h_2ℓ_2),
for h_1,h_2∈ H_x,0+ and ℓ_1,ℓ_2∈ L_+
to deduce that ϕ̂ is a character and prove (4).
Finally, we prove (5). The methods used earlier in this proof can be used (with essentially no modification) to show that
G^i+1_ der∩ J^i+1_+ = H^i_x,r_i L^1_+⋯ L^i+1_+.
We will show that ϕ̂ and ζ_i agree on each of the factors on the right hand side of the latter identity.
On the factor H^i_x,r_i, the characters ϕ̂ and ζ_i coincide since both are represented by any element X^*_i in the dual coset (ϕ |Z^i,i+1_r_i)^* defined in <ref>.
Now consider a factor L^j+1_+, with j∈{ 0,… , i-1}. By definition, ϕ̂ is trivial on such a factor. The fact that ζ_i is also trivial on L^j+1_+ follows from the fact that the dual coset (ϕ |Z^i,i+1_r_i)^* is contained in (𝔷^i,*)_-r_i which is orthogonal to 𝔤^i_ der, x,r_i. Here, we are using the fact that L^j+1_+ ⊂ G^i_ der∩ G^i_x,r_i. We also know that both ϕ̂ and ζ_i are trivial on the factor L^i+1_+. This completes our proof.
§.§ Intertwining theory for Heisenberg p-groups
The following section was prompted by an email message from Loren Spice and it was developed in the course of conversations with him. It addresses an error in the proofs of Proposition 14.1 and Theorem 14.2 of <cit.> that can be fixed using ingredients already present in <cit.>.
Roughly speaking, our point of view on intertwining is that two representations of overlapping groups intertwine when they can be glued together to form a larger representation of a larger group. This approach reveals more of the inherent geometric structure present in Yu's theory.
As usual, 𝐆 will be a connected reductive F-group, but now 𝐆' will be an F-subgroup of 𝐆 that is a Levi subgroup over E. In other words, (𝐆',𝐆) is a twisted Levi sequence in 𝐆. Let x be a point in ℬ_red(𝐆',F).
As in <cit.>, we put J= (G',G)_x,(r,s), J_+= (G',G)_x,(r,s+)
and we let ϕ be a 𝐆-generic character of G'_x,r of depth r. We let ζ denote the character of J_+ associated to ϕ. (In other words, ζ is the character denoted by ϕ̂|J_+ in <cit.>.)
We remark that the quotient W= J/J_+ is canonically isomorphic to the corresponding Lie algebra quotient and thus our discussion could be carried out on the Lie algebra.
For subquotients S of G, let ^g S denote the subquotient obtained by conjugating by g.
For a representation π of a subquotient S of G, let ^gπ be the representation of ^g S given by ^g π (s) = π (g^-1sg).
Fix g∈ G. Let Φ = Φ(𝐆,𝐓), Φ' = Φ(𝐆',𝐓) and
let Φ_0 be the set of roots a∈Φ such that a(gx−x)=0.
We define groups J_0 and J_0,+ by imitating the definitions of J and J_+ except that we only use roots that lie in Φ_0.
More precisely, let J_0(E) be the group generated by 𝐓(E)_r and the groups 𝐔_a(E)_x,r, with a∈Φ'∩Φ_0, and the groups 𝐔_a(E)_x,s, with a∈(Φ−Φ')∩Φ_0. Define J_0,+(E) similarly, but replace s by s+. Now let J_0 = J_0(E)∩ G and J_0,+ = J_0,+(E)∩ G.
Let μ_p be the group of complex p-th roots of unity. Note that all of our symplectic forms are multiplicative and take values in μ_p. We let I_p denote the identity map on μ_p viewed as a (nontrivial) character of μ_p.
Define W_0 to be the space J_0/J_0,+, viewed as a (multiplicative) 𝔽_p-vector space with μ_p-valued symplectic form given by
⟨ uJ_0,+,vJ_0,+⟩ = ζ (uvu^-1v^-1 )= ^g ζ (uvu^-1v^-1 ).
(The latter equality follows from <cit.>.)
The embedding J_0↪ J yields an embedding W_0↪ W such that the symplectic form on W, given by
⟨ uJ_+,vJ_+⟩ = ζ (uvu^-1v^-1 ),
restricts to the symplectic form on W_0.
Similarly, the embedding ^g J_0↪^g J yields an embedding W_0↪^gW such that the symplectic form on ^gW, given by
⟨ u ^gJ_+,v ^g J_+⟩ = ^gζ (uvu^-1v^-1 ),
restricts to the symplectic form on W_0.
We therefore have a fibered sum (a.k.a., pushout) diagram
W_0 → ^gW
  ↓         ↓
 W  →  W^⋆ .
The fibered sum
W^⋆ = W ⊔_W_0^gW
is
the quotient of W×^g W by the subgroup of elements (w_0,w_0^-1), where w_0 varies over W_0.
Let W_1 denote the image of ^gJ_+∩ J in W. Then, according to <cit.>, the space W_1 is totally isotropic and its orthogonal complement W_1^⊥ in W is identical to the image of ^gJ∩ J in W. We have W_1^⊥ = W_1⊕ W_0.
As with W_0, Yu (canonically) defines a subgroup J_1 such that W_1 = J_1J_+/J_+, except that he uses the roots a such that a(x-gx)<0.
He also defines a subgroup J_3 by using the roots a such that a(x-gx)>0. Taking W_3 = J_3J_+/J_+, gives a vector space such that we have an orthogonal direct sum decomposition W = W_13⊕ W_0, where the nondegenerate space W_13 has polarization W_1⊕ W_3.
We also have canonical identifications
W= W_13⊕ W_0
and
^gW = W_0⊕^g W_13.
(See Lemmas 12.8 and 13.6 <cit.>.)
Since the symplectic forms that W_0 inherits from W and ^gW coincide, the space W^⋆ inherits a nondegenerate symplectic structure such that
W^⋆ = W_13⊕ W_0⊕^gW_13
is an orthogonal direct sum of nondegenerate symplectic spaces.
Let
ℋ^⋆ = W^⋆×μ_p
with the multiplication
(w_1,z_1)(w_2,z_2) = (w_1w_2, z_1z_2⟨ w_1,w_2⟩^(p-1)/2).
Let (τ^⋆ , V_τ^⋆) be a Heisenberg representation of ℋ^⋆ whose central character is the identity map I_p on μ_p.
For concreteness, one can realize τ^⋆ as follows.
Fix a polarization
W_0 = W_2⊕ W_4
of W_0 and define a polarization
W^⋆ = W^⋆ +⊕ W^⋆ -
by taking
W^⋆ + = W_1⊕ W_2 ⊕^gW_1
and
W^⋆ - = W_3⊕ W_4 ⊕^gW_3.
Now take
τ^⋆ = Ind_W^⋆ +×μ_p^ℋ^⋆(1× I_p ).
Restriction of functions from ℋ^⋆ to W^⋆- = W^⋆-× 0 identifies the space V_τ^⋆ of τ^⋆ with the space ℂ[W^⋆-] of complex-valued functions on W^⋆-.
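As a consistency check on this model (a standard dimension count), note that W^⋆- is one member of a polarization of W^⋆, hence:
% #W^{⋆-} = p^{(1/2) dim W^⋆}, so
\[
\dim_{\mathbb{C}} V_{\tau^\star}
= \#\,W^{\star -}
= p^{\frac{1}{2}\dim_{\mathbb{F}_p} W^{\star}},
\]
% which matches the dimension of any Heisenberg representation of a
% Heisenberg p-group whose symplectic quotient has this F_p-dimension.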
Let 𝒮^⋆ = Sp(W^⋆) be the symplectic group of W^⋆ and let
τ̂^⋆ :𝒮^⋆⋉ℋ^⋆→ (V_τ^⋆)
be the unique extension of τ^⋆. We call this the “Weil-Heisenberg representation” of 𝒮^⋆⋉ℋ^⋆. Its restriction
ω^⋆ :𝒮^⋆→ (V_τ^⋆)
to 𝒮^⋆ is called the “Weil representation of 𝒮^⋆.”
Let ℋ = W×μ_p be the Heisenberg group associated to W, and embed ℋ in ℋ^⋆ in the obvious way.
In our above model for the Heisenberg representation τ^⋆, the space
V_τ = V_τ^⋆^^gW_1
of ^g W_1-fixed vectors corresponds to the space ℂ[W^-] of functions in ℂ[W^⋆-] that are supported in the totally isotropic space
W^- = W_3⊕ W_4.
It is an irreducible summand in the decomposition of τ^⋆ |ℋ, and restriction of functions from ℋ^⋆ to ℋ identifies V_τ with the space of the Heisenberg representation
τ = Ind_W^+×μ_p^ℋ(1× I_p),
where
W^+ = W_1⊕ W_2.
Similarly, we take
V_^g τ = V_τ^⋆^W_1
and observe that this corresponds to the space ℂ[^gW^-]. Note that
^g W^- = W_4⊕^g W_3
since ^g W_4 and W_4 are identified in W^⋆.
Restriction of functions from ℋ^⋆ to ^gℋ identifies V_^gτ with the space of the Heisenberg representation
^gτ = Ind_^gW^+×μ_p^^gℋ(1× I_p),
where
^g W^+ = ^g W_1⊕ W_2.
Let ℋ_0 be the subgroup
ℋ_0 =ℋ∩^gℋ= W_0×μ_p
of ℋ^⋆. This is the Heisenberg group associated to W_0.
One may view ℋ^⋆ as a fibered sum
ℋ^⋆ = ℋ⊔_ℋ_0^gℋ.
Let
V_τ_0 = V_τ∩ V_^gτ= V_τ^⋆^W_1×^gW_1.
This is a common irreducible summand in the decompositions of τ^⋆ |ℋ_0, τ |ℋ_0 and ^gτ|ℋ_0 into irreducibles.
Restriction of functions from ℋ^⋆ to ℋ_0 identifies V_τ_0 with the space of
τ_0 = Ind_W_2×μ_p^ℋ_0(1× I_p).
Restricting functions to W_4 = W_4× 0 identifies V_τ_0 with ℂ[W_4].
At this point, we have embedded the representations τ and ^gτ within the representation τ^⋆ and shown that they are identical on the overlap of their domains ℋ_0 =ℋ∩^gℋ.
This explicitly exhibits that τ and ^gτ intertwine or, in other words, that the space
Hom_ℋ_0(^gτ ,τ) is nonzero. In fact, Yu shows that the latter space has dimension one, and hence τ_0 occurs uniquely in τ and in ^gτ.
We turn now to the intertwining of the Heisenberg-Weil and Weil representations.
The symplectic group 𝒮 = Sp(W) embeds in 𝒮^⋆ in an obvious way. The Weil-Heisenberg representation τ̂^⋆ restricts to the Weil-Heisenberg representation
τ̂:𝒮⋉ℋ→( V_τ)
that extends τ. Let ω denote the resulting Weil representation of 𝒮.
Similar remarks apply with the roles of W and ^gW interchanged and we use the natural notations in this context.
The symplectic group 𝒮_0 = Sp(W_0) has a natural embedding in 𝒮^⋆ as a proper subgroup of 𝒮∩^g𝒮.
We obtain the Weil-Heisenberg representation
τ̂_0 :𝒮_0 ⋉ℋ_0→(V_τ_0)
by restricting any of the Weil-Heisenberg representations τ̂^⋆, τ̂ or ^gτ̂.
Let ω_0 be the associated Weil representation of 𝒮_0.
At this point, it is easy to deduce that ω and ^gω intertwine, however, we caution that it is not enough to simply observe
that ω and ^gω both restrict to ω_0 on V_τ_0, since 𝒮_0 does not coincide with 𝒮∩^g𝒮.
Instead, we simply observe that
the group 𝒮∩^g𝒮 stabilizes V_τ_0 and thus
V_τ_0 may be viewed as a common (𝒮∩^g𝒮)⋉ℋ_0-submodule of τ̂ and ^gτ̂.
We have now shown:
The intertwining space Hom_ℋ_0(^gτ,τ) is 1-dimensional and identical to
Hom_(^g 𝒮∩𝒮)⋉ℋ_0(^g τ̂,τ̂). One can associate to each complex number c an intertwining operator ℐ_c: V_^gτ→ V_τ by taking
* ℐ_c(v) = cv for all v∈ V_τ_0,
* ℐ_c ≡ 0 on all irreducible ℋ_0-modules other than V_τ_0 occurring in V_^gτ.
Proposition 14.1 and Theorem 14.2 of <cit.> are valid.
Our claim follows from Lemma <ref> upon pulling back our representations via a special homomorphism and observing that the image of ^gK∩ K in 𝒮^⋆ lies in ^g𝒮∩𝒮.
Note that there is a representation of 𝒮∩^g𝒮 on V_τ_0 defined by using the natural projection 𝒮∩^g𝒮→𝒮_0 and then composing with ω_0.
Let ω_0^♯ denote this representation.
Let ω̂_0 be the representation of 𝒮∩^g𝒮 on V_τ_0 given by restricting ω^⋆ (or ω or ^gω). There must be a character χ of 𝒮∩^g𝒮 such that ω̂_0 = ω_0^♯⊗χ, but this character χ need not be trivial. In the proof of Lemma 14.6 <cit.>, it is incorrectly stated that "N_0 is contained in the commutator subgroup of P_0." This leads to problems in the statements of Lemma 14.6 and Proposition 14.7 in <cit.>. These problems involve the fact that χ need not be trivial; however, as we have indicated, the nontriviality of χ is ultimately irrelevant for the required intertwining results.
§.§ The construction of κ
Fix a permissible representation ρ of H_x. Theorem <ref> states, in part, that,
up to isomorphism, there is a unique representation κ = κ (ρ) of K such that:
(1) The character of κ has support in H_xK_+.
(2) κ |K_+ is a multiple of ϕ̂.
(3) κ | H_x = ρ⊗ω_0⊗⋯⊗ω_d-1.
In this section, we construct such a representation κ. Since Conditions (1) – (3) completely determine the character of κ, once we have proven the necessary existence result, the uniqueness, up to isomorphism, will follow.
Since K = H_xJ^1⋯ J^d, and since κ is determined on H_x by Condition (3), it suffices to define κ |J^i+1, for all i∈{ 0,… , d-1}.
Now suppose i∈{ 0,… , d-1}. Fix a special homomorphism ν_i :J^i+1→ℋ_i, in the sense of Definition <ref>, and use it to pull back τ_i to a representation τ_ν_i of J^i+1.
Let
J^i+1_♭ = G^i+1_ der∩ J^i+1
J^i+1_♭ ,+ = G^i+1_ der∩ J^i+1_+.
Since [J^i+1,J^i+1]⊂ J^i+1_♭, the quotient J^i+1/J^i+1_♭ is a compact abelian group with subgroup J^i+1_+/J^i+1_♭ ,+.
Choose a character χ_i of J^i+1 that occurs as an irreducible component of the induced representation
Ind_J^i+1_+/J^i+1_♭,+^J^i+1/J^i+1_♭( ζ_i^-1(ϕ̂|J^i+1_+)).
Here, we are using Lemma <ref>(5).
Note that Frobenius Reciprocity implies
ϕ̂|J^i+1_+ = (χ_i |J^i+1_+)·ζ_i.
Define κ on J^i+1 by
κ (j) = χ_i(j) (1_ρ⊗ 1_0⊗⋯⊗ 1_i-1⊗τ_ν_i (j)⊗ 1_i+1⊗⋯⊗ 1_d-1),
where 1_ℓ denotes the identity map on V_ℓ.
The definitions of κ |H_x, κ |J^1, …, κ |J^d are compatible and define a representation κ : K→ (V) such that κ |K_+ is a multiple of ϕ̂.
There are three things to prove:
(i) The given definitions of κ on H_x, J^1, …, J^d are compatible and hence yield a well-defined mapping κ : K→ (V).
(ii) The mapping κ is a homomorphism.
(iii) κ |K_+ is a multiple of ϕ̂.
It is convenient to start by showing that the definitions of κ on H_x, J^1, …, J^d are compatible with (iii).
First, we show that the restriction of κ |H_x to H_x∩ K_+ =H_x,0+ is a multiple of ϕ̂|H_x,0+ = ϕ.
By definition,
κ | H_x = ρ⊗ω_0⊗⋯⊗ω_d-1
and ρ | H_x,0+ is a multiple of ϕ.
So it suffices to show that each factor ω_i when restricted to H_x,0+ is a multiple of the trivial representation.
The latter statement follows directly from the fact that H_x,0+ acts trivially by conjugation on W_i = J^i+1/J^i+1_+.
(In fact, Lemma 4.2 <cit.> implies that [H_x,0+,J^i+1]⊂ J^i+1_++⊂ J^i+1_+ which, in turn, implies that H_x,0+ acts trivially by conjugation on W_i = J^i+1/J^i+1_+. See also Remark <ref> below for more general information about commutators.)
Now fix i∈{ 0,… , d-1}. The restriction of κ |J^i+1 to J^i+1∩ K_+ =J^i+1_+ is a multiple of (χ_i |J^i+1_+)·ζ_i.
But since ϕ̂|J^i+1_+ = (χ_i |J^i+1_+)·ζ_i, we see that the restriction of κ |J^i+1 to J^i+1_+ is compatible with (iii).
Next, we observe that
K_+ = H_x,0+J^1_+⋯ J^d_+ = (H_x∩ K_+)(J^1∩ K_+)⋯ (J^d∩ K_+).
Therefore, if (i) and (ii) hold then (iii) must also hold, according to the previous discussion about compatibility with (iii) of the various restrictions of κ.
It remains to prove (i) and (ii). We prove both things using a single induction.
Suppose we are given i∈{ 0,… , d-1} such that the definitions of κ on H_x,J^1,⋯ , J^i are compatible and yield a representation κ |K^i of K^i = H_xJ^1⋯ J^i. (When i=0, this amounts to the trivial assumption that κ |H_x is well defined.)
To show that we have a well-defined representation κ : K→ (V), it suffices to show that the definitions of κ|K^i and κ |J^i+1 are compatible, and that the resulting function κ |K^i+1 is a homomorphism K^i+1→ (V).
Compatibility is equivalent to the condition that κ |K^i and κ|J^i+1 agree on K^i∩ J^i+1 = G^i_x,r_i.
Suppose i≥ 1. Then since G^i_x,r_i = J^i∩ J^i+1, compatibility reduces to the condition that κ |J^i and κ |J^i+1 agree on G^i_x,r_i.
The latter condition indeed holds since both κ |J^i and κ |J^i+1 restrict to a multiple of ϕ̂|G^i_x,r_i. A similar argument may be used when i=0.
It follows that we have a well-defined mapping
κ |K^i+1: K^i+1→(V).
We now verify that this mapping is a homomorphism.
Suppose k,k'∈ K^i and j,j'∈ J^i+1. It is easy to see that the condition
κ (kj)κ (k'j') = κ (kjk'j')
is equivalent to the condition
κ (k'^-1jk') = κ (k')^-1κ (j)κ (k').
Now write
k' = hj_1⋯ j_i,
with h∈ H_x, j_1∈ J^1, …, j_i∈ J^i and substitute for κ (k') the product
κ (h)κ(j_1)⋯κ (j_i). Then we obtain the condition
χ_i (k'^-1jk') τ_ν_i(h^-1jh) = χ_i (j) τ_ν_i(j).
In light of Lemma <ref>, this reduces to
χ_i (k'^-1jk') = χ_i (j)
or, equivalently,
χ_i |[K^i,J^i+1]=1.
But, by construction, χ_i is trivial on J^i+1_♭ = G^i+1_ der∩ J^i+1.
Thus our claim follows from the fact that [K^i,J^i+1]⊂ J^i+1_♭.
The character of κ has support in H_xK_+.
Consequently, up to isomorphism, the representation κ is independent of the choices of the ν_i's and χ_i's.
The second assertion follows from the first one,
since conditions (2) and (3) in Theorem <ref> are independent of the choices of the ν_i's and χ_i's.
So it suffices to prove that the character χ_κ of κ has support in H_xK_+.
By definition,
κ (hj_1⋯ j_d) = (∏_i=0^d-1χ_i(j_i+1)) ρ(h)⊗( ⊗_i=0^d-1ω_i(h) τ_ν_i(j_i+1)),
where h∈ H_x, j_1∈ J^1,… , j_d∈ J^d.
Taking traces of the latter finite-dimensional operators gives
χ_κ(hj_1⋯ j_d) = (∏_i=0^d-1χ_i(j_i+1)) tr ρ(h) ∏_i=0^d-1 tr(ω_i(h) τ_ν_i(j_i+1)).
The desired fact about the support of χ_κ now follows from Howe's computation of the support of the finite Heisenberg/Weil representations. (See <cit.> and <cit.>.)
The previous proof is similar to the proof of Proposition 4.24 in <cit.>.
Recall that κ |J^i+1 is the product of a (noncanonical) character χ_i of J^i+1 with the pullback τ_ν_i of a Heisenberg representation τ_i via a special homomorphism ν_i : J^i+1→ℋ_i. The character χ_i is chosen in such a way that κ |J^i+1_+ is a multiple of ϕ̂|J^i+1_+.
A consequence of Lemma <ref> is that the equivalence classes of the representations κ |J^i+1 are invariants that are canonically associated to ρ. A priori, knowing the equivalence classes of κ|H_x and the κ |J^i+1's is not enough to determine the equivalence class of κ.
§.§ Irreducibility of π
We begin by recapitulating a basic fact from <cit.>.
Suppose 𝒢 is a totally disconnected group with center 𝒵, and suppose 𝒦 is an open subgroup of 𝒢 such that 𝒦 contains 𝒵 and the quotient 𝒦/𝒵 is compact.
Assume (μ,V_μ) is an irreducible, smooth, complex representation of 𝒦. When we write ind_𝒦^𝒢(μ), we are referring to the space of functions
f:𝒢→ V_μ
such that:
* f(kg) = μ (k) f(g), for all k∈𝒦, g∈𝒢,
* f has compact support modulo 𝒵 or, equivalently, the support of f has finite image in 𝒦\𝒢,
* f is fixed by right translations by some open subgroup 𝒦_f of 𝒢.
We view ind_𝒦^𝒢(μ) as a 𝒢-module, where 𝒢 acts on functions by right translations.
A well known and fundamental fact, due to Mackey, is that the representation ind_𝒦^𝒢(μ)
is irreducible precisely when
I_g(μ) ≠ 0 implies g∈𝒦, where
I_g(μ) = Hom_g 𝒦g^-1∩𝒦(^gμ,μ),
for g∈𝒢.
We use this repeatedly in this section.
Now suppose κ is a representation of K as in the statement of Theorem <ref> and let π = ind_K^G(κ).
The representation π is irreducible.
We may as well assume d>0, since there is nothing to prove if d=0.
Assume g_d∈ G and I_g_d(κ) ≠ 0.
It suffices to show g_d∈ K.
We start the proof with a recursive procedure.
Assume that i∈{0,…,d−1} and we are given g_i+1∈ G^i+1 such that I_g_i+1(κ) ≠ 0. We will show that there exists
g_i∈ G^i ∩ J^i+1g_i+1J^i+1.
For such g_i, we also show that it is necessarily the case that I_g_i(κ) ≠ 0.
Since κ|K_+ is a multiple of ϕ̂, the assumption I_g_i+1(κ) ≠ 0 implies that I_g_i+1(ϕ̂|K^i+1_+) ≠ 0
or, equivalently,
ϕ̂|([g_i+1^-1,K^i+1_+]∩ K^i+1_+)=1.
Here, K^i+1_+ is the group
K^i+1_+
= H_x,0+J^1_+⋯ J^i+1_+
= H_x,0+L^1_+⋯ L^i+1_+
= (G^0,…, G^i+1)_x,(0+,s_0+,… ,s_i+).
We have
ζ_i |([g_i+1^-1,J^i+1_+]∩ J^i+1_+)= ϕ̂|([g_i+1^-1,J^i+1_+]∩ J^i+1_+)=1,
since, according to Lemma <ref>(5), the characters ζ_i and ϕ̂ agree on G^i+1_ der∩ J^i+1_+.
According to <cit.>, there exist elements g_i∈ G^i and j_i+1,j'_i+1∈ J^i+1 such that
g_i = j_i+1g_i+1j'_i+1.
Then
Λ↦κ (j_i+1)∘Λ∘κ (j'_i+1)
determines a bijection
I_g_i+1(κ)≅ I_g_i(κ).
Consequently, I_g_i(κ) ≠ 0.
Thus we may continue the recursion until we have produced a sequence g_d,…, g_0, where g_i∈ G^i and I_g_i(κ) ≠ 0.
In particular, we obtain g_0∈ H such that
g_d ∈ J^d⋯ J^1 g_0 J^1⋯ J^d⊂ Kg_0 K.
So to show g_d lies in K, it suffices to show g_0 lies in H_x = K∩ H.
Hence, it suffices to show that when h∈ H and I_h(κ) ≠ 0 then it must be the case that h∈ H_x.
So suppose h∈ H and I_h(κ) ≠ 0. The intertwining space I_h(κ)
consists of linear endomorphisms Λ of the space V of κ such that
Λ (^h κ (k)v) =κ (k) Λ(v), ∀ k∈ hK h^-1∩ K .
Given such Λ∈ I_h (κ ), there is an associated
λ∈ Hom_hKh^-1∩ K (^h κ⊗κ̃,1),
where hKh^-1∩ K is embedded diagonally in K × K and (κ̃, Ṽ) is the contragredient of (κ,V). It is defined on elementary tensors in V ⊗ Ṽ by
λ (v⊗ṽ) = ⟨Λ (v),ṽ⟩.
The map Λ↦λ gives a linear isomorphism
I_h (κ) ≅ Hom_hKh^-1∩ K (^hκ⊗κ̃,1).
The latter remarks apply very generally to intertwining, not just to κ, and we will use them for various representations.
We observe that for each i∈{ 0,… , d-1} the intertwining space I_h (τ_ν_i) has dimension one, according to <cit.>. For each such i, fix a nonzero element Λ_i∈ I_h (τ_ν_i) and let
λ_i be the corresponding element of Hom_hJ^i+1h^-1∩ J^i+1(^h τ_ν_i⊗τ̃_ν_i ,1).
Suppose
v= v_ρ⊗ v_0⊗⋯⊗ v_d-1∈ V_ρ⊗ V_0⊗⋯⊗ V_d-1 = V
and
ṽ = ṽ_ρ⊗ṽ_0⊗⋯⊗ṽ_d-1 ∈ Ṽ_ρ⊗Ṽ_0⊗⋯⊗Ṽ_d-1 = Ṽ.
Suppose i∈{ 0,… ,d-1}.
Fix all of the components of v and ṽ except for v_i and ṽ_i and define a linear form λ'_i : V_i⊗Ṽ_i → ℂ by
λ'_i (v_i⊗ṽ_i) = λ (v⊗ṽ).
Then for j∈ hJ^i+1h^-1∩ J^i+1 we have
λ'_i (v_i⊗ṽ_i)
= λ (κ (h^-1jh)v⊗κ̃(j)ṽ)
= χ_i (h^-1jhj^-1) λ'_i (τ_ν_i (h^-1jh)v_i⊗τ̃_ν_i (j)ṽ_i)
= λ'_i (τ_ν_i (h^-1jh)v_i⊗τ̃_ν_i (j)ṽ_i),
where the triviality of χ_i (h^-1jhj^-1) follows from the fact that χ_i is trivial on J^i+1_♭ = G^i+1_ der∩ J^i+1.
Therefore, λ'_i lies in
Hom_hJ^i+1h^-1∩ J^i+1(^h τ_ν_i⊗τ̃_ν_i ,1) = ℂλ_i.
Using a “multiplicity one” argument as in the proof of Lemma 5.24 <cit.>, one can show that
for every linear form
λ∈ Hom_hKh^-1∩ K (^hκ⊗κ̃,1)
there exists a unique linear form
λ_ρ∈ Hom(V_ρ⊗Ṽ_ρ, ℂ)
such that
λ (v⊗ṽ) = λ_ρ (v_ρ⊗ṽ_ρ)·∏_i=0^d-1λ_i (v_i⊗ṽ_i).
Roughly speaking, the argument goes as follows. Fix v_d-1 and ṽ_d-1 so that λ_d-1(v_d-1⊗ṽ_d-1) is nonzero.
Define
∂ v = v_ρ⊗ v_0⊗⋯⊗ v_d-2
in
∂ V = V_ρ⊗ V_0⊗⋯⊗ V_d-2
and
∂ṽ = ṽ_ρ⊗ṽ_0⊗⋯⊗ṽ_d-2
in
∂Ṽ = Ṽ_ρ⊗Ṽ_0⊗⋯⊗Ṽ_d-2.
Then there is a linear form on ∂V⊗∂Ṽ given on elementary tensors by
∂λ
(∂ v⊗∂ṽ) = λ (v⊗ṽ)/λ_d-1(v_d-1⊗ṽ_d-1).
Repeating this procedure one coordinate at a time, one gets a sequence ∂^jλ of linear forms on
∂^j V⊗∂^j Ṽ
defined on elementary tensors by
∂^jλ
(∂^j v⊗∂^jṽ)
= λ (v⊗ṽ)/∏_i=1^jλ_d-i(v_d-i⊗ṽ_d-i).
This ultimately yields the desired linear form λ_ρ.
We claim that
λ_ρ∈ Hom_hH_xh^-1∩ H_x(^hρ⊗ρ̃,1).
Indeed, if k∈ hH_xh^-1∩ H_x then, according to Corollary <ref>, we have
λ ( v ⊗ṽ) = λ (κ (h^-1kh)v ⊗κ̃(k)ṽ)
= λ_ρ (ρ(h^-1kh)v_ρ⊗ρ̃(k)ṽ_ρ)∏_i=0^d-1λ_i (ω_i (h^-1kh)v_i ⊗ω̃_i (k)ṽ_i)
= λ_ρ (ρ(h^-1kh)v_ρ⊗ρ̃(k)ṽ_ρ)∏_i=0^d-1λ_i ( v_i ⊗ṽ_i).
Since λ_ρ is necessarily nonzero, there must be a corresponding nonzero element Λ_ρ∈ I_h (ρ). Since I_h(ρ) is nonzero and ρ induces an irreducible representation of H, we deduce that h lies in H_x, which completes the proof.
In Proposition 4.6 <cit.>, it is shown that cuspidal G-data that satisfy conditions SC1_i, SC2_i, and SC3_i of <cit.> yield irreducible (hence supercuspidal) representations of G. In Theorem 15.1 in <cit.>, it is shown that generic, cuspidal G-data satisfy the SC conditions. Theorem 9.4, Theorem 11.5, and Theorem 14.2 of <cit.>, respectively, are the results that specifically show that generic data satisfy the conditions SC1_i, SC2_i, and SC3_i, respectively. Our proof of Lemma <ref> uses Theorem 9.4 <cit.> and Theorem 14.2 of <cit.> (with the revised proof in Corollary <ref> above).
As just indicated, our proof of Lemma <ref> is based partly on Yu's proof of his Proposition 4.6. But in the latter part of his proof, Yu uses an inductive argument due to Bushnell-Kutzko <cit.>.
By contrast, we have adapted the proof of Lemma 5.24 <cit.>.
(The latter result was not intended to handle intertwining issues, but it is interesting that it can be used for this purpose.)
§ THE CONNECTION WITH YU'S CONSTRUCTION
The theory of generic cuspidal G-data was introduced by Yu in <cit.>, but in discussing this theory we follow the notations and terminology of <cit.>.
Fix a generic cuspidal G-datum Ψ = (𝐆⃗, y, ρ_0, ϕ⃗) and a torus 𝐓 as in <cit.>.
Then Yu's construction associates to Ψ a representation κ (Ψ) of an open compact-mod-center subgroup K(Ψ) of G. The representation κ (Ψ) induces an irreducible, supercuspidal representation π (Ψ) of G. (See <cit.>.)
Let 𝐆 = 𝐆^d, 𝐇 = 𝐆^0, and y come from the given datum Ψ, and let
ρ = ρ(Ψ)= ρ_0 ⊗∏_i=0^d (ϕ_i |H_x ),
where x is the vertex in ℬ_ red (,F) associated to y.
Let r⃗ = (r_0,… , r_d) be the sequence of depths associated to Ψ (as in <cit.>). Let
ϕ = ∏_i=0^d (ϕ_i|H_x,0+).
Then ϕ and ϕ_i agree on [G^i+1,G^i+1]∩ H_x,r_i-1+.
For each i∈{ 0,… , d-1}, we have a symplectic space W_i associated to Ψ as in <cit.>.
Associated to W_i, we can define a Heisenberg group structure on
ℋ_i = W_i×μ_p, as in <ref>.
Define another Heisenberg group W_i^♯ by letting
W_i^♯ = W_i × (J^i+1_+/ker ζ_i)
with multiplication
defined as in <cit.>.
Then (w,z)↦ (w,ζ_i(z)) defines an isomorphism W^♯_i≅ℋ_i of Heisenberg groups.
Fix a Heisenberg representation (τ_i ,V_i) of ℋ_i. Let (τ_i^♯ , V_i) be the Heisenberg representation of W_i^♯ obtained via the given isomorphism W^♯_i≅ℋ_i.
Extending τ_i as in <ref>, we obtain a Weil representation 𝒮_i→(V_i). This is the same as the Weil representation that comes from τ_i^♯ in <cit.>.
We have a map H_x→𝒮_i given by conjugation. Pulling back the Weil representation via the latter map yields a representation
ω_i : H_x→(V_i).
In Yu's construction and ours, we must arbitrarily choose for each i∈{ 0,… ,d-1} a Heisenberg representation within a prescribed isomorphism class.
Implicit in the statement of the next result is the assumption that we choose the same family of Heisenberg representations when we construct κ (Ψ) and κ (ρ).
More precisely, we first choose a family of Heisenberg representations τ_i^♯ as in <cit.>, then, after it is established that 𝐆⃗(Ψ) = 𝐆⃗(ρ) and r⃗(Ψ) = r⃗(ρ), we use in the construction of κ (ρ) the Heisenberg representations τ_i associated to the τ_i^♯'s as above.
Suppose Ψ is a generic cuspidal G-datum and ρ is the corresponding representation of H_x. Then:
* ρ is a permissible representation.
* The given sequences 𝐆⃗ = (𝐆^0,…, 𝐆^d) and r⃗ = (r_0,…, r_d) associated to Ψ are identical to the corresponding sequences associated to ρ.
* The representations κ (ρ) and κ (Ψ) are defined on the same group K acting on the same space V and the two representations are identical on the subgroups H_x, J^1_+, …, J^d_+.
* The characters of both representations have support in H_xJ^1_+⋯ J^d_+.
Consequently, the characters of κ (Ψ) and κ (ρ) are identical and thus these representations of K (and the associated tame supercuspidal representations of G) are equivalent.
Fix Ψ = (𝐆⃗, y, ρ_0, ϕ⃗) and 𝐓, as above.
We may as well replace y in the datum with the point x=[y], since only x is relevant to Yu's construction. As noted in <cit.>, if 𝐆^♯ is a z-extension of 𝐆 as in <ref>, then Ψ pulls back to a generic, cuspidal G^♯-datum Ψ^♯ = (𝐆⃗^♯, x, ρ_0^♯, ϕ⃗^♯) such that Yu's representation π (Ψ) of G pulls back to his representation π (Ψ^♯) of G^♯.
Therefore, we may as well assume that 𝐆_der is simply connected.
(Note that the complications regarding the definition of the r_i's in <ref> essentially disappear when considering Yu's construction, since Yu imposes more restrictive conditions on his objects. For example, a given factor ϕ_i always has the same depth as its pullback ϕ_i^♯, since we have surjections G^i,♯_x,r→ G^i_x,r for all r≥ 0, according to Lemma 3.5.3 <cit.>.)
Let
ρ = ρ(Ψ)= ρ_0 ⊗∏_i=0^d (ϕ_i |H_x ).
In our construction, we assume that there is a character ϕ of H_x,0+ such that ρ|H_x,0+ is a multiple of ϕ. Such a character ϕ exists in the present situation and, according to <cit.>, it is given by
ϕ = ∏_j=0^d(ϕ_j|H_x,0+).
In general, there are various objects, such as the 𝐆^i's, associated to Ψ and, in our construction, there are similar objects associated to ϕ.
We need to show that the objects 𝐆⃗, r⃗ and ϕ⃗ coming from Ψ can be recovered from
ϕ exactly as in our construction.
Assume i∈{ 1,… , d-1}.
Let 𝐙^i denote the center of 𝐆^i and let 𝐙^i,i+1 = (𝐆^i+1_der∩𝐙^i)^∘.
Choose a G^i+1-generic element Z^*_i∈𝔷^i,*_-r_i of depth -r_i that represents ϕ_i |G^i_x,r_i.
Using the decomposition
𝔷^i,*_-r_i = 𝔷^i+1,*_-r_i ⊕ 𝔷^i,i+1,*_-r_i,
we write Z^*_i = Y^*_i +X^*_i.
Then Y^*_i represents the trivial character of H^i_x,r_i since 𝔷^i+1,* is orthogonal to 𝔤^i+1_ der. Therefore, Z^*_i and X^*_i represent the same character of H^i_x,r_i, namely, ϕ_i |H^i_x,r_i.
Since we assume _ der is simply connected, it follows from Lemma <ref> (with replaced by ^i+1) that
[G^i+1,G^i+1] ∩ H_x,r_i = H^i_x,r_i
and, consequently, ϕ|H^i_x,r_i = ϕ_i|H^i_x,r_i. Therefore, X^*_i must lie in the dual coset (ϕ|Z^i,i+1_r_i)^* ∈ 𝔷^i,i+1,*_-r_i:(-r_i)+.
We observe now that, just as in our construction,
r_i must be the depth of ϕ|H^i_x,0+, and 𝐆^i is the unique maximal subgroup of 𝐆 containing 𝐇 such that 𝐆^i is defined over F and is an E-Levi subgroup of 𝐆, and ϕ|H^i-1_x,r_i = 1.
Thus, we have established that the sequences 𝐆⃗ and r⃗ associated to Ψ and ϕ coincide. It also follows from the remarks in <ref> that ρ is permissible.
The fact that 𝐆⃗ and r⃗ coincide for Ψ and ρ implies that the associated subgroups of G, such as K, are the same for Ψ and ϕ.
It is routine to verify from the definitions that κ (Ψ)|H_x and κ (ρ)|H_x are the identical representation of the group H_x on the same space V.
We refer the reader to <cit.> for the definition of κ (Ψ) and we note that it is easy to see that the representation ω_i defined just before the statement of the present lemma is given by ω_i (h) = τ̂_i^♯ (f'_i(h)), in the notation of in <cit.>.
We also observe that both κ (Ψ) and κ (ρ) act according to a multiple of the character ϕ̂ on each of the subgroups J^i+1_+.
In the case of κ (Ψ), this is <cit.>.
The fact that the support of the character of κ (Ψ) is contained in H_xJ^1_+⋯ J^d_+ is shown in the proof of <cit.>. The corresponding fact for κ (ρ) is our Lemma <ref>.
§ DISTINGUISHED CUSPIDAL REPRESENTATIONS FOR P-ADIC GROUPS AND FINITE GROUPS OF LIE TYPE
The motivation for this paper was a desire to unify the p-adic and finite field theories of distinguished cuspidal representations. We now state in a uniform way a theorem that applies both to p-adic and finite fields. The proof appears in a companion paper <cit.>.
In this section, we simultaneously address two cases that we refer to as “the p-adic case” and “the finite field case.”
In the p-adic case, F is, as usual, a finite extension of ℚ_p with p odd. In the finite field case, F = 𝔽_q where q is a power of an odd prime p. Let 𝐆 be a connected, reductive F-group and let G = 𝐆(F).
In the p-adic case, we let ρ be a permissible representation of H_x. In the finite field case, we let ρ be a character in general position of 𝐓(𝔽_q), where 𝐓 is an F-elliptic maximal F-torus.
For the sake of unity, we let L denote H_x in the p-adic case and 𝐓(𝔽_q) in the finite field case.
Let π (ρ) be the irreducible supercuspidal or cuspidal Deligne-Lusztig representation of G associated to ρ. Let ℐ be the set of F-automorphisms of 𝐆 of order two, and let G act on ℐ by
g·θ = Int(g)∘θ∘ Int(g)^-1,
where Int(g) is conjugation by g. Fix a G-orbit Θ in ℐ.
Given θ∈Θ, let G_θ be the stabilizer of θ in G. Let G^θ be the group of fixed points of θ in G.
When ϑ is an L-orbit in Θ, let
m_L (ϑ) = [G_θ :G^θ (G_θ∩ L)],
for some, hence all, θ∈ϑ.
Let ⟨Θ,ρ⟩_G denote the dimension of the space Hom_G^θ(π (ρ),1) of ℂ-linear forms on the space of π (ρ) that are invariant under the action of G^θ for some, hence all, θ∈Θ.
For each θ such that θ (L) = L, we define a character
ε_L,θ : L^θ→{± 1}
as follows.
In the finite field case,
ε_L,θ(h) = det(Ad(h)|𝔤^θ).
One can show that this is the same as the character ε
defined in <cit.>.
In the p-adic case,
ε_L,θ(h) = ∏_i=0^d-1(det_𝔣(Ad(h)|𝔚_i^+)/P_F)_2,
where our notations are as follows.
First, we take
𝔚_i^+
= ((⊕_a∈Φ^i+1-Φ^i 𝔤_a)^Gal(F̄/F))^θ_x,s_i:s_i+,
viewed as a vector space over the residue field 𝔣 of F.
In other words, 𝔚_i^+ is, roughly speaking, the space of θ-fixed points in the Lie algebra of W_i= J^i+1/J^i+1_+.
Next, for u∈𝔣^×, we let (u/P_F)_2 denote the quadratic residue symbol. This is related to the ordinary Legendre symbol by
(u/P_F)_2
= (N_𝔣/𝔽_p(u)/p) = (N_𝔣/𝔽_p(u))^(p-1)/2 = u^(q_F-1)/2.
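A tiny numerical instance (with hypothetical residue data q_F = 7, chosen only for illustration):
% For q_F = 7 the squares in F_7^× are {1, 2, 4}. Taking u = 3:
\[
\Big(\tfrac{3}{P_F}\Big)_2 = 3^{(7-1)/2} = 27 \equiv -1 \pmod{7},
\]
% so 3 is a nonsquare, while u = 2 gives 2^3 = 8 ≡ 1, so 2 is a square
% (indeed 2 ≡ 3^2 mod 7).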
We remark that in the p-adic case, ε_L,θ is the same as the character η'_θ defined in <cit.>.
In both the finite field and p-adic cases, one can, at least in most cases, compute the determinants arising in the definition of ε_L,θ and provide elementary expressions for them in terms of Gal(F̄/F)-orbits of roots.
When ϑ is an L-orbit in Θ, we write ϑ ∼ ρ when θ(L) = L for some, hence all, θ ∈ ϑ, and when
the space Hom_{L^θ}(ρ, ε_{L,θ}) is nonzero.
When ϑ∼ρ, we define
⟨ϑ, ρ⟩_L = dim Hom_{L^θ}(ρ, ε_{L,θ}),
where θ is any element of ϑ. (The choice of θ does not matter.)
With these notations, we may now state the following:
⟨Θ,ρ⟩_G = ∑_ϑ∼ρ m_L(ϑ) ⟨ϑ , ρ⟩_L.
Note that in the special case in which
* 𝐆 is a product 𝐆_1 × 𝐆_1,
* Θ contains the involution θ(x,y) = (y,x),
* ρ has the form ρ_1 × ρ̃_2,
where ρ̃_2 is the contragredient of ρ_2,
we have
⟨Θ, ρ⟩_G = dim Hom_G(π(ρ_1), π(ρ_2)).
In the finite field case, our theorem in the latter situation is consistent with the Deligne-Lusztig inner product formula <cit.>. (See <cit.>, for more details.)
|
http://arxiv.org/abs/1701.07783v1 | 20170126172757 | Universality and the dynamical space-time dimensionality in the Lorentzian type IIB matrix model | [
"Yuta Ito",
"Jun Nishimura",
"Asato Tsuchiya"
] | hep-th | [
"hep-th",
"gr-qc",
"hep-lat"
] | |
http://arxiv.org/abs/1701.07760v2 | 20170126162318 | Degrees of Iterates of Rational Maps on Normal Projective Varieties | [
"Nguyen-Bac Dang"
] | math.AG | [
"math.AG",
"math.DS",
"37F10, 14C25"
] |
Let X be a normal projective variety defined over an algebraically closed field of arbitrary characteristic.
We study the sequence of intermediate degrees of the iterates of a dominant rational self-map of X, recovering former results by Dinh, Sibony <cit.>, and by Truong <cit.>.
Precisely, we give a new proof of the submultiplicativity properties of these degrees and of their birational invariance.
Our approach relies heavily on positivity properties in the space of numerical cycles of arbitrary codimension.
In particular, we prove an algebraic version of an inequality first obtained by Xiao <cit.> and Popovici <cit.>, which generalizes Siu's inequality (see <cit.>) to algebraic cycles of arbitrary codimension. This allows us to show that
the degree of a map is controlled up to a uniform constant by the norm of its action by pull-back
on the space of numerical classes in X.
§ INTRODUCTION
Let f : X ⇢ X be any dominant rational self-map of a normal projective variety X of dimension n defined over an algebraically closed field of arbitrary characteristic.
If X is not normal then one can always consider its normalization. Moreover, if the field is not algebraically closed, then we shall take its algebraic closure.
Given any big and nef (e.g ample) Cartier divisor H_X on X, and any integer 0 ⩽ i ⩽ n, one defines the i-th degree of f as the integer:
deg_{i,H_X}(f) = (π_1^* H_X^{n-i} · π_2^* H_X^i),
where π_1 and π_2 are the projections from the normalization of the graph of f in X × X onto the first and the second factor respectively, and where ( · ) denotes the intersection product on this graph.
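For instance (a standard example, not part of this paper's statements), when X = ℙ^n, H_X is a hyperplane and f is defined by homogeneous polynomials of degree d without a common factor, one recovers the algebraic degree:
deg_{1,H_X}(f) = (π_1^* H_X^{n-1} · π_2^* H_X) = d,
and the log-concavity of the sequence i ↦ deg_{i,H_X}(f), a consequence of the Khovanskii-Teissier inequalities, then yields deg_{i,H_X}(f) ⩽ d^i for all 0 ⩽ i ⩽ n.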
Our main theorem can be stated as follows.
Let X be a normal projective variety of dimension n and let H_X be a big and nef Cartier divisor on X.
(i) There is a positive constant C>0
such that for any dominant rational self-maps f, g on X, one has:
deg_{i,H_X}(f ∘ g) ⩽ C deg_{i,H_X}(f) deg_{i,H_X}(g).
(ii) For any big nef Cartier divisor H_X' on X, there exists a constant C>0 such that for any rational self-map f on X, one has:
1/C ⩽ deg_{i,H_X}(f) / deg_{i,H_X'}(f) ⩽ C.
Observe that Theorem <ref>.(ii) implies that the degree growth of f is a birational invariant, in the sense that there is a positive constant C such that for any birational map g : X' ⇢ X with X' projective, and any big nef Cartier divisor H_X' on X', one has
1/C ⩽ deg_{i,H_X}(f^p) / deg_{i,H_X'}(g^{-1} ∘ f^p ∘ g) ⩽ C,
for any p ∈ℕ.
Indeed, by applying Theorem <ref>.(ii) for the induced action by f on the normalization of the graph of g, one deduces that the growth of the degrees on the graph of g and on X and X' are controlled by a strictly positive constant.
Fekete's lemma and Theorem <ref>.(i) also imply the existence of the dynamical degree (first introduced in <cit.> for rational maps of the projective space) as the following quantity:
λ_i(f) := lim_{p → +∞} deg_{i,H_X}(f^p)^{1/p}.
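In more detail (a routine verification): if C denotes the constant of Theorem <ref>.(i), the sequence a_p = log(C deg_{i,H_X}(f^p)) is subadditive, since
a_{p+q} = log(C deg_{i,H_X}(f^{p+q})) ⩽ log(C deg_{i,H_X}(f^p)) + log(C deg_{i,H_X}(f^q)) = a_p + a_q,
so Fekete's lemma gives the existence of lim_{p→+∞} a_p/p = inf_p a_p/p, hence of λ_i(f).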
The independence of λ_i(f) from the choice of H_X and its birational invariance are consequences of Theorem <ref>.(ii).
When the base field is ℂ, Theorem <ref> was proved by Dinh and Sibony in <cit.>, and further generalized to compact Kähler manifolds in <cit.>.
The core of their argument relied on a procedure of regularization for closed positive currents of any bidegree (<cit.>) and was therefore transcendental in nature.
When the base field has characteristic zero, it embeds into ℂ by the Lefschetz principle (<cit.>), and Dinh and Sibony's argument proves that the i-th dynamical degree of any dominant rational map is well-defined.
Recently, Truong <cit.> managed to get around this problem and proved Theorem <ref> for arbitrary smooth varieties using an appropriate Chow-type moving lemma.
He went further in <cit.> and obtained Theorem <ref> for any normal variety in all characteristic by applying de Jong's alteration theorem (<cit.>).
Note however that he had to deal with correspondences since a rational self-map can only be lifted as a correspondence through a general alteration map.
Our approach avoids this technical difficulty.
To illustrate our method, let us explain the proof of Theorem <ref>, when X is smooth, i=1 and f, g are regular following the method initiated in <cit.>.
Recall that a divisor α on X is pseudo-effective and one writes α⩾ 0 if for any ample Cartier divisor H on X, and any rational ϵ >0, a suitable multiple of the ℚ-divisor α + ϵ H is linearly equivalent to an effective one.
Recall also the fundamental Siu inequality[this inequality is also referred to as the weak transcendental holomorphic Morse inequality in <cit.>] (<cit.>, <cit.>, <cit.>) which states:
α ⩽ n ((α · β^{n-1})/(β^n)) β,
for any nef divisor α, and any big and nef divisor β.
Since the pullback by a dominant morphism of a big nef divisor remains big and nef,
we may apply (<ref>) to the big nef divisors α = g^* f^* H_X and β = g^* H_X, and we get
g^* f^* H_X ⩽ n (deg_{1,H_X}(f)/(H_X^n)) g^* H_X.
Intersecting with the cycle H_X^n-1 yields the submultiplicativity of the degrees with the constant C = n/(H_X^n).
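In formulas, since f and g are regular here:
deg_{1,H_X}(f ∘ g) = (g^* f^* H_X · H_X^{n-1}) ⩽ n (deg_{1,H_X}(f)/(H_X^n)) (g^* H_X · H_X^{n-1}) = (n/(H_X^n)) deg_{1,H_X}(f) deg_{1,H_X}(g).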
We observe that the previous inequality (<ref>) can be easily extended to complete intersections by cutting out by suitable ample sections.
In particular, we get a positive constant C such that for any big nef divisors α and β, one has:
α^i ⩽ C ((α^i · β^{n-i})/(β^n)) × β^i.
Such inequalities have been obtained by Xiao (<cit.>) and Popovici (<cit.>) in the case where the base field is ℂ.
Their proof uses the resolution of complex Monge-Ampère equations and yields the constant C = \binom{n}{i}.
On the other hand, our proof applies in arbitrary characteristic and in fact to more general classes than complete intersection ones.
We refer to Theorem <ref> below and the discussion preceding it for more details.
Note however that we only obtain C = (n-i+1)^i, far from the expected optimal constant C = \binom{n}{i} of Popovici.
Once (<ref>) is proved, Theorem <ref> follows by a similar argument as in the case i = 1.
Going back to the case where X is a complex smooth projective variety, recall that the degree of f is controlled up to a uniform constant by the norm of the linear operator f^∙,i, induced by pullback on the de Rham cohomology space H^2i_dR (X)_ℝ (<cit.>).
One way to construct f^∙,i is to use the Poincaré duality isomorphisms ψ_X: H^2i_dR(X,ℝ) →H_2n-2i(X,ℝ), ψ_Γ_f : H^2i_dR(Γ_f,ℝ) →H_2n-2i(Γ_f,ℝ) where H_i(X,ℝ) denotes the i-th simplicial homology group of X.
The operator f^∙,i is then defined following the commutative diagram below:
H_dR^2i(Γ_f, ℝ)[r]^-ψ_Γ_f H_2n-2i(Γ_f,ℝ) [r]^π_1_* H_2n-2i(X,ℝ) [d]^ψ_X^-1
H_dR^2i(X, ℝ) [rr]_f^∙,i[u]^π_2^* H_dR^2i(X,ℝ),
where Γ_f is a desingularization of the graph of f in X × X, and π_1, π_2 are the projections from Γ_f onto the first and second factor respectively.
In order to state an analogous result in our setting, we need to find a replacement for the de Rham cohomology group H^2i_dR(X)_ℝ and define suitable pullback operators.
When X is smooth, one natural way to proceed is to consider the spaces N^i(X)_ℝ of algebraic ℝ-cycles of codimension i modulo numerical equivalence.
The operator f^{∙,i} is then simply given by the composition π_{1*} ∘ π_2^* : N^i(X)_ℝ → N^i(X)_ℝ.
When X is singular, then the situation is more subtle because one cannot intersect arbitrary cycle classes in general [an arbitrary curve can only be intersected with a Cartier divisor, not with a general Weil divisor.].
One can consider two natural spaces of numerical cycles N^i(X)_ℝ and N_i(X)_ℝ, on which pullback operations and pushforward operations by proper morphisms are defined respectively.
More specifically, the space of numerical i-cycles N_i(X)_ℝ is defined as the group of ℝ-cycles of dimension i modulo the relation z ≡ 0 if and only if (p^* z · D_1 · … · D_{e+i}) = 0 for any proper flat surjective map p : X' → X of relative dimension e and any Cartier divisors D_j on X'.
One can prove that N_i(X)_ℝ is a finite dimensional vector space and one defines N^i(X)_ℝ as its dual Hom(N_i(X)_ℝ, ℝ).
Note that our presentation differs slightly from Fulton's definition (see Appendix <ref> for a comparison), but we also recover the main properties of the numerical groups. This approach is more suitable to compare cycles using positivity estimates on complete intersections.
As in the complex case, we are able to construct Poincaré duality maps ψ_X : N^i(X)_ℝ → N_{n-i}(X)_ℝ and ψ_{Γ_f} : N^i(Γ_f)_ℝ → N_{n-i}(Γ_f)_ℝ, but they are not necessarily isomorphisms due to the presence of singularities. As a consequence, we are only able to define a linear map f^{∙,i} as f^{∙,i} := π_{1*} ∘ ψ_{Γ_f} ∘ π_2^* : N^i(X)_ℝ → N_{n-i}(X)_ℝ between two distinct vector spaces.
Despite this limitation, we prove a result analogous to one of Dinh and Sibony. The next theorem was obtained by Truong for smooth varieties (<cit.>).
Let X be a normal projective variety of dimension n. Fix any norms on N^i(X)_ℝ and N_{n-i}(X)_ℝ, and denote by ‖·‖ the induced operator norm on linear maps from N^i(X)_ℝ to N_{n-i}(X)_ℝ. Then there is a constant C > 0 such that for any rational self-map f : X ⇢ X, one has:
1/C ⩽ ‖f^{∙,i}‖ / deg_{i,H_X}(f) ⩽ C.
Our proof of Theorem <ref> exploits a natural notion of positive classes in N^i(X)_ℝ combined with a strengthening of (<ref>) to these classes, which we state below (see Theorem <ref>).
To simplify our exposition, let us suppose again that X is smooth.
As in codimension 1, one can define the pseudo-effective cone Psef^i(X) as the closure in N^i(X)_ℝ of the cone generated by effective cycles of codimension i. Its dual with respect to the intersection product is the nef cone Nef^{n-i}(X), which however does not behave well when i ⩾ 2 (see <cit.>).
Some alternative notions of positive cycles have been introduced by Fulger and Lehmann in <cit.>, among which the notion of basepoint free classes emerges. Basepoint free classes have many good properties such as being both pseudo-effective and nef, being invariant by pull-backs by morphisms and by intersection products, and forming a salient convex cone with non-empty interior. The terminology comes from the fact that basepoint free classes always admit a representative cycle which intersects any subvariety in the expected dimension.
Denote by BPF^i(X) the cone of basepoint free classes. It is defined as the closure in N^i(X)_ℝ of the cone generated by ℝ-cycles of the form p_*(D_1 · … · D_{e+i}) where the D_j are ample Cartier ℝ-divisors and p : X' → X is a flat surjective proper morphism of relative dimension e.
For basepoint free classes, we are able to prove the following generalization of (<ref>).
Let X be a normal projective variety of dimension n. Then there exists a constant C > 0 such that for any basepoint free class α ∈ BPF^i(X) and for any big nef divisor β, one has in N^i(X)_ℝ:
α ⩽ C ((α · β^{n-i})/(β^n)) × β^i.
Theorem <ref> follows from (<ref>) by observing that f^{∙,i}(BPF^i(X)) ⊂ Psef_{n-i}(X), so that the operator norm ‖f^{∙,i}‖ can be computed by evaluating f^{∙,i} only on basepoint free classes.
In the singular case, the proof of Theorem <ref> is completely similar, but the spaces N^i(X)_ℝ and N_{n-i}(X)_ℝ are not necessarily isomorphic in general.
As a consequence, several dual notions of positivity appear in N^i(X)_ℝ and N_i(X)_ℝ
that make the arguments more technical.
Finally, using the techniques developed in this paper, we give a new proof of the product formula of Dinh, Nguyen, Truong (<cit.>, <cit.>), which they proved when the base field is ℂ and which was later generalized by Truong (<cit.>) to normal projective varieties over any field.
The setup is as follows.
Let q : X → Y be any proper surjective morphism between normal projective varieties, and
fix two big and nef divisors H_X, H_Y on X and Y respectively.
Consider two dominant rational self-maps f : X ⇢ X, g : Y ⇢ Y, which are semi-conjugated by
q, i.e. which satisfy q ∘ f = g ∘ q. To simplify notation we shall write
(f,g) : X/_q Y ⇢ X/_q Y when these assumptions hold true.
Recall that the i-th relative degree of (f,g) : X/_q Y ⇢ X/_q Y is given by the intersection product
deg_i(f) := (π_1^*( H_X^{dim X - dim Y - i} · q^* H_Y^{dim Y}) · π_2^* H_X^i),
where π_1 and π_2 are the projections from the graph of f in X × X onto the first and the second component respectively.
One can show a relative version of Theorem <ref> (see Theorem <ref>) and define, as in the absolute case, the i-th relative dynamical degree λ_i(f, X/Y)
as the limit lim_{p→+∞} deg_i(f^p)^{1/p}.
It is also a birational invariant in the sense that if φ : X' ⇢ X and ψ : Y' ⇢ Y are birational maps such that q' = ψ^{-1} ∘ q ∘ φ is regular, then λ_i(φ^{-1} ∘ f ∘ φ, X'/Y') = λ_i(f, X/Y), and it does not depend on the choices of H_X and H_Y.
When q : X ⇢ Y is merely rational and dominant, then we define (see Section <ref>) the i-th relative degree of f by replacing X with the normalization of the graph of q.
We prove the following theorem.
Let X,Y be normal projective varieties.
For any dominant
rational self-maps f : X ⇢ X, g : Y ⇢ Y which are semi-conjugated by
a dominant rational map q : X ⇢ Y, we have
λ_i(f) = max_{max(0, i-l) ⩽ j ⩽ min(i,e)} ( λ_{i-j}(g) λ_j(f, X/Y) ),
where l = dim Y and e = dim X - dim Y.
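As an illustration (a standard consequence of this statement, not proved here): take X = X_1 × X_2, dominant rational self-maps f_j : X_j ⇢ X_j, f = f_1 × f_2 and q : X ⇢ X_2 the second projection, so that q semi-conjugates f to g = f_2. One checks that λ_j(f, X/X_2) = λ_j(f_1), and the theorem then gives
λ_i(f_1 × f_2) = max_{max(0, i - dim X_2) ⩽ j ⩽ min(i, dim X_1)} λ_j(f_1) λ_{i-j}(f_2).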
Our proof follows closely Dinh and Nguyen's method from <cit.> and relies on a fundamental inequality (see Corollary <ref> below) which follows from the Künneth formula, at least when the base field is ℂ.
To state it precisely, consider π : X' → X a surjective generically finite morphism and q : X → Y a surjective morphism, where X', X and Y are normal projective varieties such that n = dim X = dim X', l = dim Y and e = n - l.
We prove that for any basepoint free classes α∈^i(X') and β∈^n-i(X'), one has:
(β·α) ⩽ C ∑_max(0 , i-l) ≤ j ≤min(i,e) U_j(α) × ( β·π^* (q^* H_Y^i-j· H_X^j)),
where H_Y and H_X are big and nef divisors on Y and X respectively,
and U_j(α) is the intersection product given by U_j(α) = (π^*(q^*H_Y^l-i+j· H_X^e-j) ·α).
In the singular case, Truong has obtained this inequality using Chow's moving intersection lemma.
We replace this argument by a suitable use of Siu's inequality and Theorem <ref> in order to prove a positivity property for a class given by the difference between a basepoint free class in X' × X' and the fundamental class of the diagonal of X' in X' × X' (see Theorem <ref>).
Inequality (<ref>) is a weaker version of <cit.> proved by Dinh-Nguyen when Y is a complex projective variety, and was extended to a field arbitrary characteristic by Truong when Y is smooth (<cit.>).
§.§ Organization of the paper
In the first Sections <ref> and <ref>, we review the background on the Chow groups and recall the definitions of the spaces of numerical cycles and provide their basic properties.
In <ref>, we discuss the various notions of positivity of cycles and prove Theorem <ref>.
In <ref>, we define relative numerical cycles and canonical morphisms, which are analogous to the Poincaré morphisms ψ_X in a relative setting.
In <ref>, we prove Theorem <ref>, Theorem <ref> and Theorem <ref>.
Finally we give an alternate proof of Dinh-Sibony's theorem in the Kähler case (<cit.>) in <ref> using Popovici <cit.> and Xiao's inequality <cit.>. Note that these inequalities allow us to avoid regularization techniques of closed positive currents but rely on a deep theorem of Yau.
In Section <ref>, we prove that our presentation and Fulton's definition of numerical cycles are equivalent, hence proving that any numerical cycles can be pulled back by a flat morphism.
Firstly, I would like to thank my advisor C. Favre for his patience and our countless discussions on this subject. I also thank S. Boucksom for some helpful discussions and for pointing out the right argument for the appendix, and S. Cantat, L. Fantini, M. Fulger, T. Truong, B. Lehmann, R. Mboro and J. Xie for their precious comments on my previous drafts and for providing me with some references.
The author is supported by the ERC-starting grant project "Nonarcomp" no.307856, and is supported by ANR project “Lambda” ANR-13-BS01-0002
§ CHOW GROUP
§.§ General facts
Let X be a normal projective variety of dimension n defined over an algebraically closed field of arbitrary characteristic.
The space of cycles Z_i(X) is the free abelian group generated by irreducible subvarieties of X of dimension i,
and Z_i(X)_ℚ, Z_i(X)_ℝ will denote the tensor products Z_i(X) ⊗_ℤℚ and Z_i(X) ⊗_ℤℝ.
Let q : X → Y be a morphism where Y is a normal projective variety. Since X and Y are projective, the map q is proper. Following <cit.>, we define the proper pushforward of the cycle [V] ∈ Z_i(X) as the element of Z_i(Y) given by:
q_*[V] = 0 if dim(q(V)) < dim V, and q_*[V] = [κ(η) : κ(q(η))] × [q(V)] if dim V = dim(q(V)),
where V is an irreducible subvariety of X of dimension i, η is the generic point of V and κ(η), κ(q(η)) are the residue fields of the local rings 𝒪_η and 𝒪_{q(η)} respectively.
We extend this map by linearity and obtain a morphism of abelian groups q_* : Z_i(X) → Z_i(Y).
Let C be any closed subscheme of X of dimension i and denote by C_1, …, C_r its i-dimensional irreducible components. Then C defines a fondamental class [C]∈ Z_i(X) by the following formula:
[C] := ∑_{j=1}^r l_{𝒪_{C_j,C}}(𝒪_{C_j,C}) [C_j],
where l_A(M) denotes the length of an A-module M (<cit.>).
For any flat morphism q : X → Y of relative dimension e between normal projective varieties, we can define a flat pullback of cycles q^* : Z_i(Y) → Z_{i+e}(X) (see <cit.>). If C is any subscheme of Y of dimension i, the cycle q^*[C] is by definition the fundamental class of the scheme-theoretic inverse image by q:
q^*[C] := [q^{-1}(C)] ∈ Z_{i+e}(X).
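For instance (an elementary example), if Z is a normal projective variety of dimension e and q : X × Z → X is the first projection, which is flat of relative dimension e, then for any subvariety C of X of dimension i one has
q^*[C] = [q^{-1}(C)] = [C × Z] ∈ Z_{i+e}(X × Z).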
Let W be a subvariety of X of dimension i+1 and let φ be a nonzero rational function on W. Then we define a cycle on X by:
[div(φ)] := ∑_V ord_V(φ) [V],
where the sum is taken over all irreducible subvarieties V of dimension i of W ⊂ X, and ord_V(φ) denotes the order of vanishing of φ along V.
A cycle α defined this way is rationally equivalent to 0, and in that case we shall write α ∼ 0.
The i-th Chow group A_i(X) of X is the quotient of the abelian group Z_i(X) by the subgroup generated by the cycles that are rationally equivalent to zero. We denote by A_∙(X) the abelian group ⊕ A_i(X).
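As a basic example, take X = W = ℙ^1 and let φ = z be the standard affine coordinate viewed as a rational function on ℙ^1; then
[div(φ)] = [0] - [∞] ∼ 0,
so the classes of any two points of ℙ^1 coincide in A_0(ℙ^1).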
We recall now the functorial operations on the Chow group, which result from the intersection theory developped in <cit.>.
Let q: X → Y be a morphism between normal projective varieties. Then we have:
(i)
The morphism of abelian groups q_* : Z_i(X) → Z_i(Y) induces a morphism of abelian groups q_* : A_i(X) → A_i(Y).
(ii)
If the morphism q is flat of relative dimension e, then the morphism q^* : Z_i(Y) → Z_i+e(X) induces a morphism of abelian groups q^* : A_i(Y) → A_i+e(X).
Assertion (i) is proved in <cit.> and assertion (ii) is given in <cit.>.
Let q : X → Y be a flat morphism of normal projective varieties. Suppose α ∈ A_i(Y) is represented by an effective cycle α ∼ ∑ n_j [V_j] where the n_j are positive integers. Then q^*α is also represented by an effective cycle.
Any cycle α ∈ Z_0(X)_ℤ is of the form ∑ n_j [p_j] where the p_j are closed points of X and n_j ∈ ℤ. We define the degree of α to be deg(α) := ∑ n_j, and we shall simply write:
(α) := deg(α) = ∑ n_j.
The morphism of abelian groups deg : Z_0(X)_ℤ → ℤ induces a morphism of abelian groups deg : A_0(X) → ℤ.
§.§ Intersection with Cartier divisors
Let X be a normal projective variety and D be a Cartier divisor on X. Let V be a subvariety of dimension i in X and denote by j : V ↪ X the inclusion of V in X. We define the intersection of D with [V] as the class:
D · [V] := j_* [D'] ∈ A_i-1(X),
where D' is a Cartier divisor on V such that the line bundles j^* 𝒪_X(D) and 𝒪_V(D') are isomorphic.
Observe that D' exists since the exact sequence 0 → 𝒪^*_V → ℳ_V^* → ℳ_V^*/𝒪_V^* → 0 induces a surjective map from the group of divisors H^0(V, ℳ_V^*/𝒪_V^*) on V onto the Picard group Pic(V) = H^1(V, 𝒪_V^*), where ℳ_V^* is the sheaf of non-zero rational functions on V.
We extend this map by linearity into a morphism of abelian groups D · :Z_i(X) → A_i-1(X).
Let X be a normal projective variety and D be a Cartier divisor on X.
The map D · : Z_i(X) → A_i-1(X) induces a morphism of abelian groups D · : A_i(X) → A_i-1(X).
Moreover, the following properties are satisfied:
* For all Cartier divisors D and D' on X, for all class α∈ A_i(X), we have:
(D' + D) ·α = D' ·α + D ·α.
* (Projection formula) Let q: X → Y be a morphism between normal projective varieties. Then for all class β∈ A_i(X) and all Cartier divisor D on Y, we have in A_i-1(Y):
q_* (q^*D ·β) = D · q_*(β).
§.§ Characteristic classes
Let X be a normal projective variety of dimension n and L be a line bundle on X. There exists a Cartier divisor D on X such that the line bundles L and 𝒪_X(D) are isomorphic. We define the first Chern class of L as:
c_1(L) := [D] ∈ A_n-1(X).
For all normal projective varieties X, the group CI^i(X) is the free group generated by elements of the form D_1 · … · D_i where D_1, …, D_i are Cartier divisors on X.
Let X be a normal projective variety and E be a vector bundle of rank e+1 on X.
Given any vector bundle E on X, we shall denote by ℙ(E) the projective bundle of hyperplanes in E, following the convention of Grothendieck.
Let p be the projection from ℙ(E^*) to X and ξ = c_1(𝒪_{ℙ(E^*)}(1)).
We define the i-th Segré class s_i(E) as the morphism s_i(E) · : A_∙(X) → A_{∙-i}(X) given by:
s_i(E) α := p_*(ξ^{e+i} · p^*α).
When X is smooth of dimension n, we can define an intersection product A_i(X) × A_l(X) → A_{i+l-n}(X) on the Chow groups (see <cit.>) which is compatible with the intersection with Cartier divisors and satisfies the projection formula (see <cit.>).
Applying the projection formula to (<ref>), we get
s_i(E) α = p_*(ξ^{e+i}) · α,
so that s_i(E) is represented by an element in A_{n-i}(X). To simplify, we shall also denote this element by s_i(E).
As Segré classes of vector bundles are operators on the Chow groups A_∙(X), the composition of such operators defines a product.
(cf <cit.>) Let q: X → Y be a morphism between normal projective varieties.
For any vector bundle E and F on Y, the following properties hold.
(i) For all α∈ A_i(Y) and all j< 0, we have s_j(E) α = 0.
(ii) For all α∈ A_i(Y), we have s_0(E) α = α.
(iii) For all integers j,m, we have s_j(E) ( s_m(F) α) = s_m(F) (s_j(E) α).
(iv) (Projection formula) For all β∈ A_i(X) and any integer j, we have q_*(s_j(q^* E) β) = s_j(E) q_* β.
(v) If the morphism q: X → Y is flat, then for all α∈ A_i(Y) and any integer j, we have s_j(q^*E) q^* α = q^* (s_j(E) α)).
The j-th Chern class c_j(E) of a vector bundle E on X is an operator c_j(E) : A_∙(X) → A_∙ -j defined formally as the coefficients in the inverse power series:
(1 + s_1(E)t + s_2(E)t^2 + …)^{-1} = 1 + c_1(E)t + c_2(E)t^2 + …
A direct computation yields for example
c_1(E) = - s_1(E), c_2(E) = (s_1(E)^2 - s_2(E)).
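Expanding the inverse power series one step further gives, for instance,
c_3(E) = -s_1(E)^3 + 2 s_1(E) s_2(E) - s_3(E),
as one checks by setting the coefficient of t^3 in the product (1 + s_1(E)t + s_2(E)t^2 + …)(1 + c_1(E)t + c_2(E)t^2 + …) = 1 equal to zero.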
Let X be a normal projective variety.
The abelian group A^i(X) is the subgroup of Hom(A_∙(X), A_{∙-i}(X)) generated by products of Chern classes c_{i_1}(E_1) · … · c_{i_p}(E_p) where i_1, …, i_p are integers satisfying i_1 + … + i_p = i and where E_1, …, E_p are vector bundles over X. We denote by A^∙(X) the group ⊕ A^i(X).
Observe that by definition, A^i(X) contains the image of ^i(X).
Recall that the Grothendieck group K^0(X) is the free group generated by vector bundles on X quotiented by the subgroup generated by relations of the form [E_1] + [E_3] - [E_2] where there is an exact sequence of vector bundles:
0 → E_1 → E_2 → E_3 → 0.
Moreover, the group K^0(X) has a structure of rings given by the tensor product of vector bundles.
Recall also that the Chern character is the unique morphism of rings ch : (K^0(X), +, ⊗) → (A^∙(X), +, ·) satisfying the following properties (see <cit.>).
* If L is a line bundle on X, then one has:
ch(L) = ∑_{i ⩾ 0} c_1(L)^i / i!.
* For any morphism q : X' → X and any vector bundle E on X, we have q^* ch(E) = ch(q^*E).
For any vector bundle E on X, we will denote by ch_i(E) the term of ch(E) in A^i(X).
We recall Grothendieck-Riemann-Roch's theorem for smooth varieties.
(see <cit.>) Let X be a smooth variety. Then the Chern character induces an isomorphism:
ch(·) [X] : E ∈ K^0(X) ⊗ ℚ ↦ ch(E) [X] ∈ A_∙(X) ⊗ ℚ.
We also recall the definition of Schur polynomials.
Consider a vector bundle E of rank e on X.
Fix two integers e,i and a decreasing partition λ =(λ_1, …, λ_i) of i with terms lower or equal than e.
The Schur class s_λ(E) is the class given by:
s_λ(E) = det ( c_{λ_1}(E)  c_{λ_1+1}(E)  …  c_{λ_1+i-1}(E) ; c_{λ_2-1}(E)  c_{λ_2}(E)  …  c_{λ_2+i-2}(E) ; … ; c_{λ_i-i+1}(E)  c_{λ_i-i+2}(E)  …  c_{λ_i}(E) ),
i.e. the determinant of the i × i matrix whose (j,k) entry is c_{λ_j-j+k}(E).
If E is a vector bundle of rank e on X, then the Schur class s_λ(E) ∈ A^i(X) is the Schur polynomial in the variables given by the Chern classes c_1(E), … , c_e(E).
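For instance, in codimension i = 2 the two decreasing partitions λ = (2) and λ = (1,1) give the two classical Schur classes
s_{(2)}(E) = c_2(E) and s_{(1,1)}(E) = det ( c_1(E)  c_2(E) ; c_0(E)  c_1(E) ) = c_1(E)^2 - c_2(E),
where c_0(E) = 1.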
When the vector bundle E is globally generated, then the Schur classes can be interpreted as degeneracy loci (see <cit.>).
§ SPACE OF NUMERICAL CYCLES
§.§ Definitions
In all this section, X, Y,X_1,X_2,X_3 and X' are normal projective varieties and X is of dimension n.
Two cycles α and β in Z_i(X) are said to be numerically equivalent, and we will write α ≡ β, if for all flat morphisms p_1 : X_1 → X of relative dimension e and all Cartier divisors D_1, …, D_{e+i} on X_1, we have:
( D_1 · … · D_{e+i} · p_1^* α ) = ( D_1 · … · D_{e+i} · p_1^* β ).
The group of numerical classes of dimension i is the quotient N_i(X) = Z_i(X) / ≡.
By construction, the group N_i(X) is torsion free and there is a canonical surjective morphism A_i(X) → N_i(X) for any integer i.
Observe also that for i = 0, two cycles are numerically equivalent if and only if they have the same degree. Since smooth points are dense in X (see <cit.>) and are of degree 1, this proves that the degree realizes the isomorphism N_0(X) ≃ ℤ.
We set N_i(X)_ℚ and N_i(X)_ℝ to be the two vector spaces obtained by tensoring by ℚ and ℝ respectively.
This definition allows us to pull back numerical classes by any flat morphism q : X → Y of relative dimension e.
Our presentation is slightly different from the classical one given in <cit.>. We refer to Appendix <ref> for a proof of the equivalence of these two approaches.
Let q : X → Y be a morphism. Then the morphism of groups q_* : Z_i(X) → Z_i(Y) induces a morphism of abelian groups q_* : N_i(X) → N_i(Y).
Let n be the dimension of X and l be the dimension of Y, and
let α be a cycle in Z_i(X) such that α is numerically trivial.
We need to prove that q_* α is also numerically trivial.
Take p_1 : Y_1 → Y a flat morphism of relative dimension e_1.
Let X_1 be the fibred product X ×_Y Y_1 and let p_1' and q' be the natural projections from X_1 to X and Y_1 respectively.
X_1 --p_1'--> X
 | q'          | q
 v             v
Y_1 --p_1--> Y
Since flatness is preserved by base change (<cit.>), the morphism p_1' is flat and q' is proper.
Pick any cycle γ whose class is in CI^{e_1+i}(Y_1).
We want to prove that ( γ·p_1^* q_*α) = 0.
By <cit.>, we have that p_1^* q_* α = q'_* p_1'^* α in Z_{e_1+i}(Y_1). Applying the projection formula, we get:
γ · p_1^* q_* α = γ · q'_* p_1'^* α ∼ q'_*(q'^* γ · p_1'^* α).
Because p_1' is flat and q'^* γ ∈ CI^{e_1+i}(X_1), we have (q'^* γ · p_1'^* α) = 0, so that (γ · p_1^* q_* α) = 0 as required.
The numerical classes defined above are hard to manipulate, and we want to define a pullback of numerical classes by any proper morphism.
To this end, we define dual classes.
We denote by Z^i(X) = Hom_ℤ(Z_i(X), ℤ) the space of cocycles. If p_1 : X_1 → X is a flat morphism of relative dimension e_1, then any element γ ∈ CI^{e_1+i}(X_1) induces an element [γ] in Z^i(X) by the following formula:
[γ] : α ∈ Z_i(X) ↦ (γ · p_1^* α) ∈ ℤ.
The abelian group N^i(X) is the subgroup of Z^i(X) generated by elements of the form [γ] where γ ∈ CI^{e_1+i}(X_1) and X_1 is flat over X of relative dimension e_1.
By definition, the map deg : Z_0(X) → ℤ is naturally an element of Z^0(X).
Moreover, one has using Theorem <ref>.(ii) that:
z ∈ Z_0(X) ↦ (s_0(E) z) = deg(z) ∈ ℤ,
for any vector bundle E on X.
Hence, deg defines an element of N^0(X) by definition of Segré classes (Definition <ref>).
By definition of the numerical equivalence relation, any element of N^i(X) induces an element of the dual Hom_ℤ(N_i(X), ℤ). Hence, we can define a natural pairing between N^i(X) and N_i(X). For any normal projective variety, the pairing N^i(X) × N_i(X) → ℤ is non-degenerate (i.e. the canonical morphism from N^i(X) to Hom_ℤ(N_i(X), ℤ) is injective).
This follows directly from the definitions of N^i(X) and N_i(X).
A priori, an element of N^i(X) is a combination of elements [γ_1] + [γ_2] + … + [γ_j].
The following proposition proves that one can always take j = 1, at least if we tensor all spaces by ℚ.
Any element of N^i(X)_ℚ is induced by some γ ∈ CI^{e_1+i}(X_1)_ℚ where p_1 : X_1 → X is a flat morphism of relative dimension e_1.
By an immediate induction argument, we are reduced to proving the assertion for the sum of two elements [γ_1] + [γ_2] where γ_j ∈ CI^{e_j+i}(X_j)_ℚ and p_j : X_j → X are flat morphisms of relative dimension e_1 and e_2 respectively.
Let us consider X' the fibre product X_1 ×_X X_2 and p_j' the flat projections from X' to X_j for j = 1, 2.
By linearity, we only need to show that there exists an element γ_1' ∈ CI^{e_1+e_2+i}(X') such that [γ_1'] = [γ_1] in N^i(X).
        X_1 ×_X X_2
      p_2' ↙      ↘ p_1'
     X_1              X_2
      p_1 ↘      ↙ p_2
              X
Take an ample Cartier divisor H_{X_2} on X_2 and λ_2 an integer such that p_{2*} H_{X_2}^{e_2} ≡ λ_2 [X].
Setting γ_1' = (1/λ_2) p_1'^* H_{X_2}^{e_2} · p_2'^* γ_1, we need to prove that for any α ∈ Z_i(X), one has (γ_1 · p_1^* α) = (γ_1' · p_2'^* p_1^* α).
By <cit.>, we have the equality p_2'_* p_1'^* H_{X_2}^{e_2} = p_1^* p_{2*} H_{X_2}^{e_2} of cycle classes on X_1, hence:
p_2'_* p_1'^* H_{X_2}^{e_2} ≡ λ_2 p_1^* [X].
Since X_1 is reduced and p_1^* [X] is a cycle of codimension 0 in X_1, we have p_1^* [X] = [X_1].
Hence by the projection formula, we have:
(1/λ_2) p_2'_*( p_2'^*(γ_1 · p_1^* α) · p_1'^* H_{X_2}^{e_2} )
 = (1/λ_2) (p_1^* α · γ_1) · p_2'_* p_1'^* H_{X_2}^{e_2}
 = (1/λ_2) (p_1^* α · γ_1) · λ_2 [X_1]
 = p_1^* α · γ_1.
In particular, the degrees are equal and [γ_1] = [γ_1'] ∈ N^i(X) as required.
By the same argument, there exists a class γ_2' ∈ CI^{e_1+e_2+i}(X_1 ×_X X_2) such that [γ_2] = [γ_2'] ∈ N^i(X), hence [γ_1] + [γ_2] = [γ_1'] + [γ_2'] = [γ_1' + γ_2'] ∈ N^i(X) as required.
We define N_∙(X) (resp. N^∙(X)) by ⊕_i N_i(X) (resp. ⊕_i N^i(X)).
§.§ Algebra structure on the space of numerical cycles
We now define a structure of algebra on N^∙(X), and prove that N_∙(X) has a structure of N^∙(X)-module.
Pick γ ∈ CI^{e_1+i}(X_1)_ℚ where p_1 : X_1 → X is a flat morphism of relative dimension e_1. The element γ induces a morphism on the Chow groups:
γ · : α ∈ A_l(X) ↦ p_{1*}(γ · p_1^* α) ∈ A_{l-i}(X).
The morphism γ · : A_l(X) → A_{l-i}(X) induces a morphism of abelian groups from N_l(X) to N_{l-i}(X).
Any element α ∈ N^i(X)
induces a morphism α · : N_∙(X) → N_{∙-i}(X) such that the following conditions are satisfied.
(i) If α is induced by γ ∈ CI^{e_1+i}(X_1)_ℚ where p_1 : X_1 → X is a flat morphism of relative dimension e_1,
then for any integer l and any z ∈ N_l(X), one has in N_{l-i}(X):
α z = γ z.
(ii) For any α, β ∈ N^i(X) and any z ∈ N_l(X), we have:
(α + β) z = α z + β z.
Let us consider α ∈ N^i(X) and suppose it is induced by γ_1 ∈ CI^{e_1+i}(X_1)_ℚ where p_1 : X_1 → X is a flat morphism of relative dimension e_1.
We define the map α · as:
α z = γ_1 z,
for any z ∈ N_l(X).
We show that the morphism does not depend on the choice of the class γ_1, and (i) then follows from Proposition <ref>. Assertion (ii) follows from the linearity of the intersection product, whose proof follows closely the proof of Proposition <ref>.
Suppose that [γ_1] = [γ_2] ∈ N^i(X) where γ_2 ∈ CI^{e_2+i}(X_2)_ℚ and p_2 : X_2 → X is a flat morphism of relative dimension e_2; then we need to prove that:
p_{1*}(γ_1 · p_1^* z) ≡ p_{2*}(γ_2 · p_2^* z),
for any fixed z ∈ Z_l(X).
Take β ∈ CI^{e_3+l-i}(X_3) where p_3 : X_3 → X is a flat morphism of relative dimension e_3; we only need to show that:
(β · p_3^* p_{1*}(γ_1 · p_1^* z)) = (β · p_3^* p_{2*}(γ_2 · p_2^* z)).
Let X_1' and X_2' be the fibre products X_1 ×_X X_3 and X_2 ×_X X_3, and let p_1' : X_1' → X_3, p_3' : X_1' → X_1, q_2 : X_2' → X_3, q_3 : X_2' → X_2 be the corresponding flat projection morphisms, so that we obtain the following commutative diagrams:
       X_1'                      X_2'
 p_3' ↙    ↘ p_1'          q_3 ↙    ↘ q_2
 X_1          X_3          X_2          X_3
 p_1 ↘    ↙ p_3            p_2 ↘    ↙ p_3
        X                         X
As above, we have p_3^* p_{1*} = p_1'_* p_3'^*, hence:
(β · p_3^* p_{1*}(γ_1 · p_1^* z)) = (β · p_1'_* p_3'^*(γ_1 · p_1^* z))
 = (p_1'^* β · p_3'^*(γ_1 · p_1^* z))
 = (p_3'^* γ_1 · p_1'^* p_3^* z · p_1'^* β)
 = (γ_1 · p_3'_* p_1'^*(p_3^* z · β))
 = (γ_1 · p_1^* p_{3*}(p_3^* z · β))
 = (γ_2 · p_2^* p_{3*}(p_3^* z · β)).
By a similar argument, we show that (β · p_3^* p_{2*}(γ_2 · p_2^* z)) = (γ_2 · p_2^* p_{3*}(p_3^* z · β)), which implies the desired equality:
(β · p_3^* p_{1*}(γ_1 · p_1^* z)) = (β · p_3^* p_{2*}(γ_2 · p_2^* z)).
There exists a unique structure of commutative graded ring on N^∙(X), with unit the degree class deg ∈ N^0(X), given by (α,β) ∈ N^∙(X) × N^∙(X) ↦ α·β ∈ N^∙(X), which satisfies the following properties:
(i) For any α, β ∈ N^∙(X) and any z ∈ N_∙(X), one has:
(α·β) z = α (β z) = β (α z).
(ii) For any z ∈ N_∙(X), we have deg z = z.
(iii) The morphism of abelian groups given by
(α, z) ∈ N^∙(X) × N_∙(X) ↦ α z ∈ N_∙(X)
is bilinear.
Hence, the abelian group N_∙(X) has the structure of a graded N^∙(X)-module.
Take α_1 ∈ N^i(X) and α_2 ∈ N^l(X) and define φ ∈ Z^{i+l}(X) by the formula:
φ : z ∈ Z_{i+l}(X) ↦ (α_1 (α_2 z)).
We prove that φ is an element of N^{i+l}(X).
By linearity, we can suppose that α_1 is induced by γ_1 ∈ CI^{i+e_1}(X_1) and α_2 by γ_2 ∈ CI^{l+e_2}(X_2), where p_j : X_j → X are flat morphisms of relative dimension e_j for j = 1, 2.
Let X' = X_1 ×_X X_2 be the fibre product, and let p_2' and p_1' be the projections from X' to X_1 and X_2 respectively, so that we have the commutative diagram:
         X'
  p_2' ↙    ↘ p_1'
 X_1            X_2
  p_1 ↘    ↙ p_2
         X
By the projection formula, we obtain for all z ∈ Z_{i+l}(X):
φ(z) = (p_1'^* γ_2 · p_2'^* γ_1 · p_2'^* p_1^* z).
In particular, we have shown that φ is induced by p_1'^* γ_2 · p_2'^* γ_1 ∈ CI^{e_1+e_2+i+l}(X'), hence φ is an element of N^{i+l}(X).
Moreover, the commutativity of the intersection product in (<ref>) proves that (α_2 (α_1 z)) = (α_1 (α_2 z)) for any z ∈ N_{i+l}(X), hence α_1·α_2 = α_2·α_1.
Pick a vector bundle E on X.
As the element deg ∈ N^0(X) is equal to z ↦ (s_0(E) z) in N^0(X) (see Remark <ref>), we get using Theorem <ref>.(ii) that:
(α z) = (α (s_0(E) z)) = (s_0(E) (α z)) = ((α·deg) z) = ((deg·α) z),
for any z ∈ N_l(X) and any α ∈ N^l(X).
Hence, deg is a unit of N^∙(X).
§.§ Pullback on dual numerical classes
Consider a proper morphism q : X → Y.
We define for any integer i the pullback q^* : N^i(Y) → Hom_ℤ(N_i(X), ℤ) as the dual of the pushforward operation q_* : N_i(X) → N_i(Y) with respect to the pairing defined in Proposition <ref>.
Let q : X → Y be a proper morphism.
The morphism q^* induces a morphism of graded rings q^* : N^∙(Y) → N^∙(X) which satisfies the projection formula:
∀ α ∈ N^i(Y), ∀ z ∈ N_l(X), q_*(q^* α z) = α q_* z.
We only need to prove that the image q^*(N^i(Y)) is contained in N^i(X) and that the projection formula is satisfied, as this directly implies that q^* : N^∙(Y) → N^∙(X) is a morphism of rings since:
(α·β) q_* z = q^*(α·β) z = α q_*(q^* β z) = (q^* α · q^* β) z,
for any α ∈ N^i(Y), β ∈ N^l(Y) and any z ∈ N_{i+l}(X).
Consider a class α ∈ N^i(Y) which is induced by γ ∈ CI^{e_1+i}(Y_1) where p_1 : Y_1 → Y is a flat proper morphism of relative dimension e_1.
Setting X_1 to be the fibre product Y_1 ×_Y X and p_1', q' the projections from X_1 to X and Y_1 respectively, one remarks using the equality q'_* p_1'^* = p_1^* q_* (<cit.>) that q^* α is induced by q'^* γ, hence q^* α ∈ N^i(X) as required.
The projection formula then follows easily from the projection formula for divisors (Theorem <ref>.(ii)).
Let us sum up all the properties of numerical classes proven so far:
Let q : X → Y be a proper morphism. For any integers 0 ⩽ i ⩽ dim X and 0 ⩽ l ⩽ dim Y:
(i) The pushforward morphism q_* : Z_i(X) → Z_i(Y) induces a morphism of abelian groups q_* : N_i(X) → N_i(Y).
(ii) The dual morphism q^* : Z^l(Y) → Z^l(X) maps N^l(Y) into N^l(X).
(iii) The induced morphism q^* : N^∙(Y) → N^∙(X) preserves the structure of graded rings.
(iv) (Projection formula) For all α ∈ N^l(Y) and all z ∈ N_i(X), we have q_*(q^* α z) ≡ α q_* z in N_{i-l}(Y).
§.§ Canonical morphism
The morphism ψ_X : α ∈ N^i(X) ↦ α [X] ∈ N_{n-i}(X) is the unique morphism which satisfies the following properties.
(i) The image of the morphism deg : Z_0(X) → ℤ, seen as an element of Z^0(X), is given by ψ_X(deg) = [X].
(ii) The morphism ψ_X is N^∙(X)-equivariant, i.e. for all α ∈ N^i(X) and all β ∈ N^l(X), we have:
ψ_X(α·β) = α ψ_X(β).
(iii) Suppose q : X → Y is a generically finite morphism where Y is of dimension n; then we have the following identity:
q_* ∘ ψ_X ∘ q^* = deg(q) × ψ_Y.
Recall that deg is the unit in N^∙(X), hence ψ_X(deg) = [X], and (ii) follows directly from the definition and Proposition <ref>.
Assertion (iii) is then a consequence of the projection formula (see Theorem <ref>.(iv)) and the fact that q_*[X] = deg(q) [Y].
Let us prove that ψ_X is unique. Suppose that φ : N^i(X) → N_{n-i}(X) satisfies the hypothesis of the theorem.
Since φ(deg) = [X] and since deg is the unit element of the ring N^∙(X), we have that for any α ∈ N^i(X), α = α·deg. By (ii),
φ(α) = φ(α·deg) = α φ(deg) = α [X] = ψ_X(α),
as required.
Now we prove some properties of ψ_X in some particular cases.
The following properties are satisfied.
(i) If X is smooth, then for all integers 0 ⩽ i ⩽ n, the induced morphism ψ_X : N^i(X)_ℚ → N_{n-i}(X)_ℚ is an isomorphism.
(ii) If X is smooth and q : X → Y is a surjective generically finite morphism where Y is a normal projective variety, then we have for all integers i:
q^*( ψ_Y(N^{n-i}(Y)_ℚ)^⊥ ) = q^*(N^i(Y)_ℚ) ∩ ker( q_* ∘ ψ_X : N^i(X)_ℚ → N_{n-i}(Y)_ℚ ).
(i) Let us show that ψ_X is surjective.
By the Grothendieck-Riemann-Roch theorem (Theorem <ref>), the Chern character induces an isomorphism:
ch(·) [X] : E ∈ K^0(X) ⊗ ℚ ↦ ch(E) [X] ∈ A_∙(X) ⊗ ℚ.
This implies that the morphism ψ_X : N^i(X)_ℚ → N_{n-i}(X)_ℚ is surjective, because any Chern class is the image of a product of Cartier divisors by a flat map (see Remark <ref>).
We now prove that ψ_X : N^i(X)_ℚ → N_{n-i}(X)_ℚ is injective.
Take α_1 ∈ N^i(X)_ℚ such that ψ_X(α_1) = 0. By Proposition <ref>, the class α_1 is induced by γ_1 ∈ CI^{e_1+i}(X_1)_ℚ where p_1 : X_1 → X is a flat morphism of relative dimension e_1.
The condition ψ_X(α_1) = 0 is equivalent to the equality p_{1*} γ_1 = 0 ∈ N_{n-i}(X).
We need to show that (γ_1 · p_1^*z)= 0 for any cycle z ∈ Z_i(X).
As X is smooth, we may compute intersection products inside the Chow group A_∙(X) directly by Remark <ref> and we get:
(γ_1 · p_1^* z) = (p_{1*}(γ_1 · p_1^* z)) = (p_{1*} γ_1 · z) = 0,
as the class z ∈ N_i(X) is the image of an element of N^{n-i}(X)_ℚ by surjectivity of ψ_X.
(ii) We have the following series of equivalences:
β ∈ ψ_Y(N^{n-i}(Y)_ℚ)^⊥
 ⇔ ∀ α ∈ N^{n-i}(Y)_ℚ, (β ψ_Y(α)) = 0
 ⇔ ∀ α ∈ N^{n-i}(Y)_ℚ, (β (q_* ψ_X q^* α)) = 0
 ⇔ ∀ α ∈ N^{n-i}(Y)_ℚ, (q^* β · q^* α) = 0
 ⇔ ∀ α ∈ N^{n-i}(Y)_ℚ, (α q_* ψ_X q^* β) = 0
 ⇔ q^* β ∈ ker(q_* ∘ ψ_X : N^i(X)_ℚ → N_{n-i}(Y)_ℚ),
where the second equivalence follows from Theorem <ref>.(iii), the third and the fourth equivalences from the projection formula, and the last equivalence is a consequence of the fact that ψ_X is self-adjoint:
(β ψ_Y(α)) = (β (α [Y])) = (α (β [Y])) = (α ψ_Y(β)),
where α ∈ N^i(Y) and β ∈ N^{n-i}(Y).
The proof of Theorem <ref>.(i) shows that when X is smooth, N_i(X)_ℚ is the quotient of Z_i(X)_ℚ by the cycles z ∈ Z_i(X)_ℚ such that for any cycle z' ∈ Z_{n-i}(X)_ℚ, one has (z · z') = 0.
When X is smooth and the base field is ℂ, denote by H^{2i}_alg(X) the subgroup of the de Rham cohomology H^{2i}(X, ℂ) generated by classes of algebraic cycles of codimension i in X. Then there is a surjective morphism H^{2i}_alg(X) → N^i(X)_ℚ.
§.§ Numerical spaces are finite dimensional
Both ℚ-vector spaces N_i(X)_ℚ and N^i(X)_ℚ are finite dimensional.
If X is smooth, then using Remark <ref>, N_i(X)_ℚ is the quotient of Z_i(X)_ℚ by the equivalence relation which identifies cycles α and β in Z_i(X)_ℚ if for any cycle z ∈ Z_{n-i}(X)_ℚ, (z · α) = (z · β).
In particular, the vector space N_i(X)_ℚ is finitely generated (see <cit.> for a reference), and so is N^i(X)_ℚ using Theorem <ref>.(i).
If X is not smooth, by de Jong's alteration theorem (cf. <cit.>), there exists a smooth projective variety X' and a generically finite surjective morphism q : X' → X. We only need to show that the pushforward q_* : N_i(X')_ℚ → N_i(X)_ℚ is surjective.
Indeed, this first implies that N_i(X)_ℚ is finite dimensional. Since the natural pairing N^i(X)_ℚ × N_i(X)_ℚ → ℚ is non-degenerate, we get an injection of N^i(X)_ℚ into Hom_ℚ(N_i(X)_ℚ, ℚ), which is also finite dimensional.
We take V an irreducible subvariety of codimension i in X. If dim q^{-1}(V) = dim V, then the class q_*[q^{-1}(V)] in N_{dim V}(X)_ℚ is represented by a cycle of dimension dim V which is included in V. As V is irreducible, we have q_*[q^{-1}(V)] ≡ λ [V] for some λ ∈ ℕ^*.
If the dimension of q^{-1}(V) is strictly greater than dim V, we take W an irreducible component of q^{-1}(V) such that its image by q_{|W} : W → V is dominant.
We write the dimension of W as dim V + r where r > 0 is an integer.
Fix an ample divisor H_{X'} on X'.
The class H_{X'}^r [W] ∈ N_{dim V}(X')_ℚ is represented by a cycle of dimension dim V in W. So the image of the class q_*(H_{X'}^r [W]) ∈ N_{dim V}(X)_ℚ is a multiple of [V], which implies the surjectivity of q_*.
For any integer 0 ⩽ i ⩽ n, the pairing N^i(X)_ℝ × N_i(X)_ℝ → ℝ is perfect (i.e. the canonical morphism from N^i(X)_ℝ to Hom_ℝ(N_i(X)_ℝ, ℝ) is an isomorphism).
Suppose that the dimension of X is 2. Then the morphism ψ_X : N^1(X)_ℚ → N_1(X)_ℚ is an isomorphism.
We apply (<ref>) to an alteration X' of X, where q : X' → X is a proper surjective morphism and X' is a smooth projective surface. This proves that ψ_X : N^1(X)_ℚ → N_1(X)_ℚ is surjective. By duality, this gives that ψ_X : N^1(X)_ℚ → N_1(X)_ℚ is injective. As a consequence, ψ_X : N^1(X)_ℚ → N_1(X)_ℚ is an isomorphism.
Let X be a complex normal projective variety with at most rational singularities. We suppose that X is numerically ℚ-factorial in the sense of <cit.>. Then the morphisms ψ_X : N^1(X)_ℚ → N_{n-1}(X)_ℚ and ψ_X : N^{n-1}(X)_ℚ → N_1(X)_ℚ are isomorphisms.
Using <cit.>, any Weil divisor which is numerically ℚ-Cartier is ℚ-Cartier. In particular, ψ_X : N^1(X)_ℚ → N_{n-1}(X)_ℚ is surjective. Using (<ref>) applied to an alteration of X with i = 1, we have that ψ_X : N^1(X)_ℚ → N_{n-1}(X)_ℚ is injective. Hence N^1(X)_ℚ and N_{n-1}(X)_ℚ are isomorphic, and by duality N^{n-1}(X)_ℚ and N_1(X)_ℚ are also isomorphic.
Let X = X(Δ) be a toric variety associated to a complete fan Δ.
The map ψ_X : N^1(X)_ℚ → N_{n-1}(X)_ℚ is an isomorphism if and only if Δ is a simplicial fan. Indeed, denote by N the lattice containing Δ and by M = Hom(N, ℤ) its dual.
For any cone σ ⊂ N, we denote by M(σ) the vector space defined by M(σ) = { l ∈ M | ⟨l, v⟩ = 0, ∀ v ∈ σ }.
The proposition in <cit.> implies that any class in N_{n-1}(X)_ℚ is represented by a torus-invariant Weil ℚ-divisor D = ∑ a_i [V_i] in X(Δ). Since every maximal cone σ in the fan Δ ⊂ N is full-dimensional, one has M(σ) = {0}, and D is ℚ-Cartier if and only if for each such σ there exists an element u(σ) ∈ M/M(σ) = M such that for any 1-dimensional ray v_i ∈ σ, one has:
⟨u(σ), v_i⟩ = -a_i.
Such an element u(σ) exists for every choice of the a_i if and only if the rays v_i ∈ σ are linearly independent (i.e. Δ is simplicial).
§ POSITIVITY
The notion of positivity is relatively well understood for cycles of codimension 1 and of dimension 1.
For cycles of intermediate dimension, the situation is however more subtle and was only recently seriously considered (see <cit.>, <cit.>, <cit.> and the recent series of papers by Fulger and Lehmann (<cit.>, <cit.>)).
For our purpose, we will first review the notions of pseudo-effective and numerically effective classes. Then we generalize the construction of the basepoint free cone introduced in <cit.> to normal projective varieties. This cone is suitable for stating generalized Siu inequalities (see Section <ref>).
§.§ Pseudo-effective and numerically effective cones
As in the previous section, X is a normal projective variety of dimension n.
To ease notation, we shall also write N^i(X) and N_i(X) for the real vector spaces N^i(X)_ℝ and N_i(X)_ℝ.
A class α ∈ N_i(X) is pseudo-effective if it is in the closure of the cone generated by effective classes. This cone is denoted Psef_i(X).
When i = 1, Psef_1(X) is the Mori cone (see e.g. <cit.>), and when i = n-1, Psef_{n-1}(X) is the classical cone of pseudo-effective divisors, whose interior is the big cone.
A class β ∈ N^i(X) is numerically effective (or nef) if for any class α ∈ Psef_i(X), (β α) ⩾ 0. We denote this cone by Nef^i(X).
When i = 1, the cone Nef^1(X) is the cone of numerically effective divisors; its interior is the ample cone.
We can also define a notion of effectivity in the dual N^i(X).
A class α ∈ N^i(X) is pseudo-effective if ψ_X(α) ∈ Psef_{n-i}(X). We will write this cone as Psef^i(X).
A class z ∈ N_i(X) is numerically effective if for any class α ∈ Psef^i(X), one has (α z) ⩾ 0. This cone is denoted Nef_i(X).
By convention, we will write α ⩽ β (resp. α ⩽ β) for any α, β ∈ N_i(X) (resp. α, β ∈ N^i(X)) if β - α ∈ Psef_i(X) (resp. Psef^i(X)).
When X is smooth, the morphism ψ_X induces an isomorphism between N^i(X) and N_{n-i}(X), and we can identify these cones:
Psef^i(X) = Psef_{n-i}(X),
Nef^i(X) = Nef_{n-i}(X).
§.§ Pliant classes
We recall the definition of pliant classes introduced in <cit.> and their main properties. Their definition involves Schur classes, which were introduced in Section <ref>.
The pliant cone PL^∙(X) is defined as the convex cone generated by products of Schur classes of globally generated vector bundles.
We denote by PL^i(X) the set of pliant classes of codimension i in X.
(see <cit.>)
The pliant cone PL^i(X) satisfies the following properties.
(i) The cone PL^i(X) is a closed convex salient cone with non-empty interior in N^i(X)_ℝ.
(ii) The cone PL^i(X) contains products of ample Cartier divisors in its interior.
(iii) For all integers i, l, we have PL^i(X) · PL^l(X) ⊂ PL^{i+l}(X).
(iv) For any (proper) morphism q : X → Y, one has q^* PL^i(Y) ⊂ PL^i(X).
We recall another proposition which we will reuse in our proofs.
(cf. <cit.>) Let 𝔾 be a Grassmannian variety. Then PL^i(𝔾) = Nef^i(𝔾).
§.§ Basepoint free cone on normal projective varieties
In this section, we define a cone BPF^i(X) and prove in Corollary <ref> that this cone is equal to the basepoint free cone defined by Fulger-Lehmann when X is smooth.
This generalizes <cit.> to normal projective varieties, and our proof follows closely Fulger-Lehmann's approach.
Recall that a complete intersection γ ∈ CI^{i+e}(X') on X', where p : X' → X is a flat morphism of relative dimension e and X' is an equidimensional projective scheme, induces naturally (see Definition <ref>) an element [γ] ∈ N^i(X)_ℝ = Hom_ℝ(N_i(X)_ℝ, ℝ) by intersecting the class γ with the pullback by p of an i-dimensional cycle in X. We also refer to Proposition <ref> for the definition of the product N^i(X)_ℝ × N^l(X)_ℝ → N^{i+l}(X)_ℝ.
The cone BPF^i(X) is the closure of the convex cone in N^i(X)_ℝ generated by products of the form [γ_1] · … · [γ_l] where each γ_j is a product of e_j + i_j ample Cartier divisors on an equidimensional projective scheme X_j which is flat over X of relative dimension e_j, and where the i_j are integers satisfying i_1 + … + i_l = i.
By definition, the cone BPF^i(X) contains products of ample Cartier divisors and Segré classes of anti-ample vector bundles.
Recall also that if q : X → Y is a flat morphism of relative dimension e between projective schemes, then the pushforward is well-defined on dual numerical classes,
q_* : N^i(X)_ℝ → N^{i-e}(Y)_ℝ (see Corollary <ref>).
The cone BPF^i(X) satisfies the following properties.
(i) The cone BPF^i(X) is a salient, closed, convex cone with non-empty interior in N^i(X)_ℝ.
(ii) The cone BPF^i(X) contains products of ample Cartier divisors in its interior.
(iii) For all integers i and l, we have BPF^i(X) · BPF^l(X) ⊂ BPF^{i+l}(X).
(iv) For any (proper) morphism q : X → Y, we have q^* BPF^i(Y) ⊂ BPF^i(X).
(v) For any integer i, we have BPF^i(X) ⊂ Nef^i(X) ∩ Psef^i(X).
(vi) In codimension 1, one has BPF^1(X) = Nef^1(X).
(vii) For any flat morphism q : X → Y between equidimensional projective schemes of relative dimension e and any integer i ⩾ e, we have q_* BPF^i(X) ⊂ BPF^{i-e}(Y).
Moreover, BPF(X) is the smallest cone satisfying properties (iii), (vi) and (vii).
We prove successively items (iii), (vii), (v), (vi), (iv), (ii) and (i).
(iii), (vii) These follow from the definition of BPF^i(X).
(v) It is sufficient to prove that for any effective cycle z ∈ Z_{n-l}(X) and any basepoint free class α ∈ BPF^i(X), one has α z ∈ Psef_{n-i-l}(X). Indeed, applying this successively to z = [X] and to z ∈ Psef_i(X) gives the inclusions BPF^i(X) ⊂ Psef^i(X) and BPF^i(X) ⊂ Nef^i(X).
By definition of basepoint free classes and by linearity, we can suppose that α is equal to a product [γ_1] · … · [γ_p] where the γ_j ∈ CI^{e_j+i_j}(X_j)_ℝ are products of ample Cartier divisors on X_j, where p_j : X_j → X is a flat proper morphism of relative dimension e_j and where the i_j are integers such that i_1 + … + i_p = i.
By definition, one has
[γ_1] z = p_{1*}(γ_1 · p_1^* z).
Because the cycle z is pseudo-effective, the cycle p_1^* z remains pseudo-effective as p_1 is a flat morphism. As γ_1 is a positive combination of products of ample Cartier divisors, we deduce that the cycle γ_1 · p_1^* z is pseudo-effective. Hence, [γ_1] z ∈ Psef_{n-i_1-l}(X). Iterating the same argument, we get that α z ∈ Psef_{n-i-l}(X) as required.
(vi) The interior of Nef^1(X) is equal to the ample cone of X, so by definition:
int(Nef^1(X)) ⊂ BPF^1(X).
As the closure of the ample cone is the nef cone by <cit.>, one gets Nef^1(X) ⊂ BPF^1(X). Conversely, the cone BPF^1(X) is included in the cone Nef^1(X) by (v), so we get BPF^1(X) = Nef^1(X).
(iv) By linearity and stability by products, we are reduced to treating the case of a class [D] induced by an ample Cartier divisor D on Y_1, where p_1 : Y_1 → Y is a flat proper morphism, and to proving that q^*[D] is a limit of classes of ample divisors on a scheme which is flat over X.
Let X_1 be the fibre product Y_1 ×_Y X and let q' be the natural projection from X_1 to Y_1; observe that q^*[D] is induced by q'^* D, which remains nef on X_1 as q' is proper.
In particular, it is a limit of classes of ample divisors in N^1(X_1).
(i)
Take α ∈ BPF^i(X) such that -α ∈ BPF^i(X). Then for all z ∈ Psef_i(X), one has that (α z) = 0, as α is nef by (v).
Since effective classes of dimension i generate Z_i(X), it follows that (α z) = 0 for any z ∈ N_i(X)_ℝ, which implies by definition that α = 0.
This shows that BPF^i(X) is salient.
(ii) We now show that BPF^i(X) contains products of ample divisors in its interior.
To do so, we prove that PL^i(X) ⊂ BPF^i(X) for any integer i ⩾ 1.
For i = 1, BPF^1(X) = Nef^1(X) by (vi); since a degree 1 Schur class is the first Chern class of a globally generated vector bundle, it is nef, hence lies in BPF^1(X), and we are done.
Take a globally generated vector bundle E of rank r on X and consider the induced morphism ϕ given by:
ϕ : X → 𝔾 = G(r, ℙ(H^0(X, E)^*)).
Since PL^i(X) is generated by the pullbacks ϕ^* PL^i(𝔾) and since these cones are preserved by pullbacks, we are then reduced to proving that
PL^i(𝔾) ⊂ BPF^i(𝔾).
Denote by G = PGL(H^0(X,E)^*) the projective linear group of the vector space H^0(X,E)^*, and consider a class α ∈ PL^i(𝔾)_ℝ = Nef^i(𝔾)_ℝ (Proposition <ref>).
Since 𝔾 is smooth, ψ_𝔾 : N^i(𝔾)_ℝ → N_{n-i}(𝔾)_ℝ is an isomorphism by Theorem <ref>, and α is represented by an effective cycle z ∈ Z_{n-i}(𝔾)_ℝ.
Consider W the Zariski closure in G × 𝔾 of the set:
W = { (g, g·x) : g ∈ G, x ∈ |z| } ⊂ G × 𝔾,
where |z| denotes the support of z. By construction, W is a quasi-projective scheme and the projection p : W → 𝔾 onto 𝔾 is a flat morphism.
Denote by q: W → G the projection onto G.
Fix H a very ample divisor on G and denote by M the dimension of the group PGL(H^0(X,E)^*). Then there exists an open embedding j : W → ℙ^M_𝔾 such that π ∘ j = p and such that h ∘ j coincides with q composed with the embedding G ↪ ℙ^M defined by H,
where π : ℙ^M_𝔾 → 𝔾 is the projection onto 𝔾 and h : ℙ^M_𝔾 → ℙ^M is the projection onto ℙ^M.
By construction, the general fiber of q over an element g ∈ G is numerically equivalent to α and, since we can choose H to be a hyperplane of ℙ^M, we have:
(1/(H^M)) p_* q^* H^M = α ∈ N_{n-i}(𝔾)_ℝ.
Moreover <cit.> implies that p_* j^* = π_* in Z_{n-i}(ℙ^M_𝔾), hence:
p_* q^* H^M = π_* h^* H^M = (H^M) α ∈ N_{n-i}(𝔾)_ℝ.
Since H is ample, h^* H is nef and the class h^* H^M belongs to BPF(ℙ^M_𝔾).
Assertion (vii) thus implies that the class π_* h^* H^M / (H^M) = α belongs to BPF^i(𝔾), as required.
Since PL^i(X) has non-empty interior in N^i(X)_ℝ by Theorem <ref>.(ii), we have proved (ii).
Let us prove that the cone BPF^i(X) is the smallest cone satisfying properties (iii), (vi) and (vii).
Denote by C' the minimal cone satisfying these conditions. We have C'^i(X) ⊂ BPF^i(X) by minimality.
Take q : X_1 → X a flat morphism of relative dimension e where X_1 is an equidimensional projective scheme, and consider α ∈ CI^{i+e}(X_1) a product of ample Cartier divisors on X_1.
Since α ∈ C'^{i+e}(X_1) by (iii) and (vi), we have that q_* α ∈ C'^i(X) by (vii); hence BPF^i(X) ⊂ C'^i(X) as required.
We recall Fulger-Lehmann's construction of the basepoint free cone.
A class α ∈ N_{n-i}(X)_ℝ is strongly basepoint free if there is:
∙ an equidimensional quasi-projective scheme U of finite type over the base field,
∙ a flat proper morphism s : U → X,
∙ and a proper morphism p : U → W of relative dimension n-i to a quasi-projective scheme W such that each component of U surjects onto W,
such that
s_{|F_p *}([F_p]) = α,
where [F_p] is the fundamental class of a general fiber of p.
We denote by BPF'^i(X) the closure of the convex cone generated by strongly basepoint free classes in this sense.
The cone BPF'(X) as above was defined by Fulger-Lehmann, and they proved that this cone satisfies Theorem <ref> when X is smooth (<cit.>). The following result proves that the cones BPF'(X) and BPF(X) are equal in this case.
Suppose X is smooth; then the cone BPF^i(X) is equal to the basepoint free cone BPF'^i(X).
Our construction of the cone BPF(X) allows us to generalize Fulger-Lehmann's result to normal varieties. This improvement is due to the fact that we are able to push forward dual numerical classes by flat morphisms.
By <cit.>, the cone BPF'(X) satisfies the conditions of Theorem <ref>, hence BPF^i(X) ⊂ BPF'^i(X).
Let us prove the reverse inclusion BPF'^i(X) ⊂ BPF^i(X).
Take p : U → W a projective morphism onto an equidimensional quasi-projective variety W, where U is a quasi-projective scheme, and a flat map s : U → X such that s_*[F_p] = α where F_p is a general fiber of p.
Take H_W an ample divisor on W; then the class α satisfies:
α = s_* p^* H_W^{i+e} ∈ N_{n-i}(X)_ℝ,
where e denotes the relative dimension of s.
Choose an ample divisor H on U; since the class p^* H_W is nef, for any ϵ > 0 the divisor p^* H_W + ϵ H is ample.
Since the morphism s : U → X is quasi-projective, there exists an integer l (which depends on ϵ) such that s = π ∘ f_ϵ,
where f_ϵ : U → ℙ^l_X is an immersion induced by p^* H_W + ϵ H and π : ℙ^l_X → X is the flat projection onto X.
Let ξ be the relative class c_1(𝒪_{ℙ^l_X}(1)) on ℙ^l_X; then one has for any cycle z ∈ Z_i(X)_ℝ:
((p^* H_W + ϵ H)^{i+e} · s^* z) = (ξ^{i+e} · π^* z),
since f_ϵ^* ξ = p^* H_W + ϵ H.
Hence, we obtain:
(s_*(p^* H_W + ϵ H)^{i+e} · z) = (π_* ξ^{i+e} · z).
Since the class ξ^{i+e} is a product of nef divisors and since these cones are stable by flat pushforward, we have π_*(ξ^{i+e}) ∈ BPF^i(X).
Taking the limit as ϵ → 0, we have that s_*(p^* H_W + ϵ H)^{i+e} → α = s_* p^* H_W^{i+e}, hence α ∈ BPF^i(X) since each class s_*(p^* H_W + ϵ H)^{i+e} ∈ N^i(X)_ℝ belongs to BPF^i(X).
We give here a detailed proof of the fact that the pseudo-effective cone is salient (see also <cit.>). The proof relies on a proposition that will also be useful later on.
Let α ∈ Psef_{n-i}(X) be a pseudo-effective class on X and let γ ∈ BPF^{n-i}(X) be a class lying in the interior of the basepoint free cone.
Then we have (γ α) = 0 if and only if α = 0.
Fix a norm ‖·‖ on N^{n-i}(X)_ℝ. As γ is in the interior of BPF^{n-i}(X) by Theorem <ref>.(ii), there exists a positive constant C > 0 such that for any β ∈ BPF^{n-i}(X), one has:
C ‖β‖_{N^{n-i}(X)_ℝ} γ - β ∈ BPF^{n-i}(X).
Intersecting with α ∈ Psef_{n-i}(X) and using Theorem <ref>.(v) together with the assumption (γ α) = 0, we get that (β α) = 0.
Since the basepoint free cone BPF^{n-i}(X) generates all of N^{n-i}(X)_ℝ by Theorem <ref>.(i), we have proved that (β' α) = 0 for any β' ∈ N^{n-i}(X), hence α = 0 as required.
The pseudo-effective cone Psef_{n-i}(X) is a closed, convex, full-dimensional salient cone in N_{n-i}(X)_ℝ.
Take u ∈ Psef_{n-i}(X) such that -u ∈ Psef_{n-i}(X); then for any ample Cartier divisor H_X on X, the intersection numbers (H_X^{n-i} · u) and (-u · H_X^{n-i}) are non-negative, hence (u · H_X^{n-i}) = 0. This implies that u = 0 by Proposition <ref>.
§.§ Siu's inequality in arbitrary codimension
We recall Siu's inequality:
(<cit.>) Let V be a closed subscheme of dimension r in X and
let A, B be two nef ℚ-divisors on X such that A_|V is big, then we have in _r-1(X),
B [V] ⩽ r ((A^r-1· B) [V])/((A^r) [V]) × A [V].
The case V= X is a consequence of the bigness criterion given in <cit.>, however we will need the result for possibly non-reduced subschemes of X.
The proof of the previous proposition implies that B_|V⩽ r (A^r-1· B [V])/ (A^r [V]) × A_|V in the Chow group A^1(V). However, since we want to work in the numerical group, we compare these classes in X (we look at their pushforward by the inclusion of V in X).
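As a sanity check (ours, not taken from <cit.>): for V = X = ℙ^r with hyperplane class h and A = a h, B = b h with a, b > 0, the right-hand side of the proposition equals
r (a^r-1 b / a^r) × a h = r b h,
so the inequality reads b h ⩽ r b h: it holds on projective space with a loss of exactly the factor r.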
The proof is the same as in <cit.>, that is to find a section of the line bundle 𝒪_V(m(A -B)).
Up to replacing A and B by small perturbations A + ϵ H and B + ϵ H with ϵ→ 0, we can suppose that A and B are ample.
Moreover, by taking a high multiple of A and B, we can suppose that they are also both very ample.
Since B is very ample, we choose m general elements E_j of the linear system |B| and consider the exact sequence:
0 [r] 𝒪_V(mA - mB) [r] 𝒪_V(mA) [r] 𝒪_∪ E_j(mA)[r] 0 .
Taking the associated long exact sequence, one obtains the lower bound:
h^0(V, 𝒪_V(mA-mB)) ⩾ h^0(V , 𝒪_V(mA)) - h^0( ∪_j=1^m E_j, 𝒪_∪_j=1^m E_j(mA)).
Observe that [∪ E_j] = ∑_j=1^m [E_j] = m B [V].
Applying <cit.> to the nef divisor A, we get h^0(V, 𝒪_V(mA))= m^r/(r!) (A^r [V]) + o(m^r) and
h^0(∪ E_j , 𝒪_∪_j=1^m E_j(mA)) = ∑_j=1^m m^r-1/(r-1)! (A^r-1· B [V]) + o(m^r) = m^r/(r-1)! (A^r-1· B [V]) + o(m^r).
Hence,
h^0(V, 𝒪_V(mA-mB)) ⩾ m^r/r! ((A^r - r A^r-1· B) [V]) + o(m^r).
In particular, if ((A^r - r A^r-1· B) [V]) > 0, then 𝒪_V(m(A-B)) has sections for m large, so that A [V] - B [V] is an effective class. Applying this to tA and B for any t > r ((A^r-1· B) [V])/((A^r) [V]) and letting t decrease to this threshold yields the required inequality.
The next result is a key for our approach to controlling degrees of dominant rational maps.
Let i be an integer and V be a closed subscheme of dimension r in X. For any Cartier divisors α_1, …, α_i and β which are big and nef on V, then there exists a constant C>0 depending only on r and i such that:
(α_1 ·…·α_i) [V] ⩽ (r-i+1)^i ((α_1 ·…·α_i ·β^r-i) [V])/((β^r) [V]) ×β^i [V] ∈_r-i(X).
Observe that ((β^r) [V]) > 0 since β is big and nef on V.
By continuity, we can suppose that α_1, …, α_i and β are ample Cartier divisors.
We apply successively Siu's inequality by restriction to subschemes representing the classes α_2 ·…·α_i [V] , β·α_3 ·…·α_i [V], …, β^i-1·α_i [V]:
α_1 ·α_2 ·…·α_i [V] ⩽ (r-i+1) ((α_1 ·…·α_i ·β^r-i) [V])/((β^r-i+1·α_2 ·…·α_i) [V]) ×β·α_2 ·…·α_i [V],
β·α_2 ·…·α_i [V] ⩽ (r-i+1) ((β^r-i+1·α_2 ·…·α_i) [V])/((β^r-i+2·α_3 ·…·α_i) [V]) ×β^2 ·α_3 ·…·α_i [V],
…
β^i-1·α_i [V] ⩽ (r-i+1) ((β^r-1·α_i) [V])/((β^r) [V]) ×β^i [V].
This gives the required inequality:
α_1 ·…·α_i [V] ⩽ (r-i+1)^i ((α_1 ·…·α_i ·β^r-i) [V])/((β^r) [V]) ×β^i [V].
Let i be an integer, then for any a ∈^i(X) and any big nef Cartier divisor β on X, one has:
a ⩽ (n-i+1)^i (a ·β^n-i)/(β^n) ×β^i.
By linearity and stability by product, we just need to prove the inequality for a = D_1 ·…· D_e_1+i∈^e_1+i(X_1), where the D_j are ample Cartier divisors on X_1 and p_1: X_1 → X is a flat proper morphism of relative dimension e_1. We apply Theorem <ref> to a' = D_e_1+1·…· D_e_1+i· Z and β' = p_1^* β_|Z where Z = D_1 ·…· D_e_1. We obtain:
a ⩽ (n-i+1)^i (a · p_1^*β^n-i)/(p_1^* β^n · Z) × p_1^* β^i· Z.
As the restriction of p_1 to Z is generically finite, the projection formula gives:
a ⩽ (n-i+1)^i (a·β^n-i)/(β^n) ×β^i.
The previous inequality can be applied when we have positivity hypothesis on a birational model as follows.
Let X, Y be two normal projective varieties of dimension n. Let β be a class in ^i(Y); we suppose there exists a birational morphism q: X → Y and an ample Cartier divisor A on X such that A^i ⩽ q^* β. Then there exists a class β^* ∈_i(Y)_ℝ∩_i(Y) such that for any class α∈^i(Y), we have:
α⩽ ( αβ^* ) ×β.
We just have to set β^* = (n-i+1)^i/(A^n) × q_* ψ_X(A^n-i) and apply Corollary <ref> to q^*α and A, together with the projection formula.
We conjecture that for any basepoint free class a ∈^i(X) and any big nef divisor b, one has
a ⩽\binom{n}{i} (a · b^n-i)/(b^n) × b^i.
One can show that this inequality (if true) is optimal since equality can happen when X is an abelian variety.
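For comparison (an elementary observation of ours): one always has \binom{n}{i}⩽ (n-i+1)^i, so the conjecture would sharpen the constant of Corollary <ref>. On X = ℙ^n with a = h^i and b = h, both statements are compatible, since (a · b^n-i)/(b^n) = 1 and h^i ⩽ C h^i for every constant C ⩾ 1.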
§.§ Norms on numerical classes
In this section, the positivity properties combined with Siu's inequality allows us to define some norms on _i(X)_ℝ and on ^i(X)_ℝ.
§.§.§ Norms on _i(X)_ℝ
Let i ⩽ n be an integer and let γ∈^i(X) be a basepoint free class on X. Any cycle z ∈_i(X)_ℝ can be written z =z^+ - z^- where z^+ and z^- are pseudo-effective. We define :
F_γ(z) := inf{ (γ z^+) + (γ z^-) | z = z^+ - z^-, z^+, z^- ∈_i(X) }.
For any class γ∈^i(X) lying in the interior of the basepoint free cone, the function F_γ defines a norm on _i(X)_ℝ. In particular, if we fix a norm ||· ||__i(X)_ℝ on _i(X)_ℝ, there exists a constant C> 0 such that for any pseudo-effective class z ∈_i(X), one has:
1/C || z ||__i(X)_ℝ⩽ (γ z) ⩽ C || z ||__i(X)_ℝ.
The only point to clarify is that F_γ(z) = 0 implies z = 0.
Observe that Proposition <ref> implies the result for z ∈_i(X).
In general, pick any two sequences
(z_p^+)_p∈ℕ and (z_p^-)_p ∈ℕ in _i(X) such that z = z_p^+ - z_p^- and such that
γ· z_p^+ + γ· z_p^- ⟶ 0.
Since z_p^+ and z_p^- are pseudo-effective and γ is basepoint free, it follows from Theorem <ref>.(v) that
lim_p→ + ∞ (γ· z_p^+) = lim_p → + ∞ (γ· z_p^-) = 0 .
As γ lies in the interior of ^i(X), given any β in ^i(X), one has that C γ - β is still in ^i(X) for some sufficiently large constant C>0.
Intersecting with the pseudo-effective classes z_p^+ and z_p^- and using Theorem <ref>.(v), we have lim_p →∞(β z_p^+)=lim_p →∞(β z_p^-)= 0, thus (β z)= 0. Since the basepoint free cone ^i(X) generates all ^i(X) by Theorem <ref>.(i), we conclude that z = 0 as required.
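As a simple illustration (ours): for X = ℙ^n, the space _i(X)_ℝ is one dimensional, generated by the class [L_i] of an i-dimensional linear subspace, and _i(X) = ℝ_⩾ 0 [L_i]. Taking γ = h^n-i with h the hyperplane class, any z = d [L_i] decomposes as z = d^+ [L_i] - d^- [L_i] with d^±⩾ 0, and
F_γ(z) = inf_d = d^+ - d^- (d^+ + d^-) = |d|,
which is indeed a norm on _i(X)_ℝ≅ℝ.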
§.§.§ Norms on ^i(X)_ℝ
We define the subcone _0^i(X) of ^i(X) as the classes α∈^i(X) such that for any birational map q : X' → X, there exists an ample Cartier divisor A on X' such that q^* α⩾ A^i.
When X is smooth, the cone _0^1(X) is equal to the big nef cone. In particular _0^i is neither closed nor open in general.
Take α∈^1(X)_ℝ a big nef divisor. Then for any birational map q : X' → X and any ample Cartier divisor A, one has by Theorem <ref> applied to A and q^*α:
A ⩽ n (A · q^*α^n-1)(α^n) q^* α.
Hence, α∈^1_0(X).
Conversely, take a class α∈^1_0(X), then there exists an ample divisor A on X such that α⩾ A.
Since ample divisors are big, we have that α is big.
Moreover, since the basepoint free cone coincides with the nef cone in codimension 1, we have that α is nef, hence big and nef as required.
The cone _0^i(X) is a convex open subset of ^i(X) that contains the classes induced by products of big nef divisors.
The cone _0^i(X) contains the products of big and nef Cartier divisors.
The fact that _0^i(X) is convex is a consequence of Siu's inequality. We take two elements α and β in _0^i(X) and any birational map q: X' → X.
By definition, there exist ample Cartier divisors A and B on X' such that q^*α⩾ A^i and q^*β⩾ B^i.
As A and B are ample, there is a constant C>0 such that A^i ⩾ C B^i, using the generalization of Siu's inequality (Theorem <ref>). This proves that q^* (t ×α + (1-t)×β) ⩾ (t C + (1-t)) × B^i for any t ∈ [0, 1]. Hence t×α + (1-t) ×β∈_0^i(X) and the cone _0^i(X) is convex.
We prove that _0^i(X) is an open subset of ^i(X). We take α∈_0^i(X). We take any ample Cartier divisor H_X on X such that α - t H_X^i is in ^i(X) for small t> 0. We just need to show that α - tH_X^i stays in _0^i(X) when t is small enough.
Let q: X' → X be a birational map where X' is projective and normal. By definition of α, there exists an ample Cartier divisor A on X' such that q^*α⩾ A^i. By Siu's inequality, there exists a constant C such that:
q^* H_X^i ⩽ C (A^i · q^* H_X^n-i)/(H_X^n) × A^i.
This implies the inequality:
q^* α - t q^* H_X^i ⩾ (1 - t C (A^i · q^*H_X^n-i)/(H_X^n)) × A^i.
As A^i ⩽ q^* α, we have the following upper bound:
(A^i · q^*H_X^n-i) ⩽ (q^* α· q^* H_X^n-i) = (α· H_X^n-i).
We get the following lower bound, which depends only on α and H_X:
1 - tC (α· H_X^n-i)/(H_X^n) ⩽ 1 - tC (A^i · q^*H_X^n-i)/(H_X^n).
Using (<ref>) and (<ref>), one gets that for t < (H_X^n)/(C (α· H_X^n-i)), the class α - t H_X^i is in _0^i(X).
The cone _0^i(X) is not always equal to the cone generated by complete intersections. Following <cit.>, there exists a smooth toric threefold such that the cone generated by complete intersections in _1(X)_ℝ is not convex, so it cannot be equal to _0^2(X) by the previous proposition.
Let X be a normal projective variety of dimension n. Any class α∈^i(X)_ℝ can be decomposed as α^+ - α^- where α^+ and α^- are basepoint free classes. For any γ∈_0^n-i(X), we define the function:
G_γ (α) := inf{ (γ·α^+) + (γ·α^-) | α = α^+ - α^-, α^+, α^- ∈^i(X) }.
For any γ∈_0^n-i(X), the function G_γ defines a norm on ^i(X)_ℝ. In particular, for any norm || · ||_^i(X)_ℝ on ^i(X)_ℝ, there is a constant C>0 such that for any class α∈^i(X):
1/C || α ||_^i(X)_ℝ⩽ (γ·α) ⩽ C || α ||_^i(X)_ℝ.
The only point which is not immediate is that G_γ(α) = 0 implies α = 0. We are reduced to treating the case where α∈^i(X).
Suppose first that X is smooth.
Since γ belongs to the interior of the basepoint free cone by Proposition <ref>, one has that for any basepoint free class β∈^n-i(X), there exists a constant C>0 such that:
C || β || γ - β∈^n-i(X).
In particular, since α is nef, one has:
0 = G_γ(α) = (γ·α), hence C || β || (γ·α) = 0 ⩾ (β·α) ⩾ 0.
Hence (β·α) = 0 for any basepoint free class β∈^n-i(X) and α = 0 ∈^i(X)_ℝ since the basepoint free cone generates all ^n-i(X)_ℝ by Theorem <ref>.(i).
Suppose that X is not smooth. Fix an ample Cartier divisor H_X on X.
Take an alteration π : X' → X of X. Since the morphism π^* : ^i(X)_ℝ→^i(X')_ℝ is injective, we are reduced to prove that π^* α = 0.
By the projection formula, we have that:
(π^* γ·π^* α) = deg(π) (α·γ) = 0.
Since γ belongs to the interior of the basepoint free cone, there exists a constant C>0 such that:
H_X^n-i⩽ C γ.
In particular, this implies that:
(π^*H_X^n-i·π^*α) = deg(π) (H_X^n-i·α) = 0.
Since π^*H_X is a big nef Cartier divisor, the class π^* H_X^n-i belongs to ^n-i_0(X') by Proposition <ref>, hence π^* α = 0 by the previous argument.
In fact, the above proof gives a stronger statement:
for any generically finite morphism q : X' → X and any γ∈_0^n-i(X), the function G_q^* γ defines a norm on ^i(X')_ℝ.
§ RELATIVE NUMERICAL CLASSES
§.§ Relative classes
In this section, we fix q:X → Y a surjective proper morphism between normal projective varieties where X=n, Y = l and we denote by e = X - Y the relative dimension of q.
The abelian group _i(X/Y) is the subgroup of _i(X) generated by classes of subvarieties V of X such that the image q(V) is a point in Y.
Observe that by definition, there is a natural injection from _i(X/Y) into _i(X):
0 [r] _i(X/Y) [r] _i(X) .
The abelian group ^i(X/Y) is the quotient of Z^i(X) by the equivalence relation ≡_Y where α≡_Y 0 if for any cycle z ∈ Z_i(X) whose image by q is a collection of points in Y, we have (α z)= 0.
Therefore, one has the following exact sequence:
^i(X) [r] ^i(X/Y) [r] 0 .
As before, we write _i(X/Y)_ℝ = _i(X/Y) ⊗_ℤℝ, ^i(X/Y)_ℝ = ^i(X/Y) ⊗ℝ, _∙(X/Y) = ⊕_i(X/Y) and ^∙(X/Y) = ⊕^i(X/Y).
The abelian groups _i(X/Y) and ^i(X/Y) are torsion free and of finite type. Moreover, the pairing _i(X/Y)_ℚ×^i(X/Y)_ℚ→ℚ induced by the pairing _i(X)_ℚ×^i(X)_ℚ→ℚ is perfect.
Since _i(X/Y) is a subgroup of _i(X), it is torsion free and of finite type. The group ^i(X/Y) is also torsion free.
Indeed pick α∈ Z^i(X) such that p α≡_Y 0 for some integer p, then for any cycle z whose image by q is a union of points, we have (p α z) = p (α z) = 0 hence α≡_Y 0.
Finally, since there is a surjection from ^i(X) to ^i(X/Y), the group ^i(X/Y) is also of finite type.
Let us show that the pairing is well defined and non degenerate.
Take a cycle z ∈ Z_i(X)_ℚ such that q(z) is a finite number of points in Y, then if α∈^i(X) such that its image is 0 in ^i(X/Y), then (α z)= 0 and the pairing ^i(X/Y) ×_i(X/Y) →ℤ is well-defined.
Let us suppose that for any α∈^i(X/Y)_ℚ, (α z)=0. This implies that for any β∈^i(X), the intersection product (β z) = 0, thus z ≡ 0.
Conversely, suppose that (α z)=0 for any z ∈_i(X/Y), then by definition α≡_Y 0.
When Y is a point, we have _i(X/Y) = _i(X) and ^i(X/Y) = ^i(X).
If the morphism q : X → Y is finite, then we have ^0(X/Y)_ℚ = _0(X/Y)_ℚ = ℚ and ^i(X/Y) = _i(X/Y) = {0 } for i⩾ 1 since X is irreducible.
When i=1, the group _1(X/Y) is generated by curves contracted by q so that ^1(X/Y) is the relative Neron-Severi group and its dimension is the relative Picard number (see <cit.>).
When i is greater than the relative dimension, the relative classes might not be trivial. For example if q : X → Y is a birational map, then e=0 but the space ^1(X/Y)_ℝ is generated by classes of exceptional divisors of q.
The intersection product on ^∙(X) induces a structure of algebra on ^∙(X/Y). Moreover, the action from ^∙(X) on _∙(X) induces an action from ^∙(X/Y) on _∙(X/Y), so that the vector space _∙(X/Y)_ℝ becomes a ^∙(X/Y)_ℝ-module.
Observe that if z ∈ Z_i(X) such that q(z) is a union of points in Y and α∈^l(X), then α z lies in _i-l(X/Y). Indeed, by definition, the class α z is represented by a cycle supported in z, so its image by q is a collection of points in Y.
Let us now prove that the product is well-defined in ^∙(X/Y). Take α∈^i(X) such that α = 0 in ^i(X/Y) and β∈^l(X), we must prove that α·β =0 in ^i(X/Y). Pick a cycle z ∈ Z_i+l(X) whose image by q is a collection of points, by the properties of the intersection product, ((α·β ) z)= ( α (β z)). As β z is in _i(X/Y), we get that ( (α·β) z )= 0 as required.
As an illustration, we give an explicit description of these groups in a particular example.
Suppose q : X = ℙ(E) → Y where E is a vector bundle of rank e+1 on Y. Then for any integer 0 ⩽ i ⩽ e, one has:
_i(X/Y)_ℚ = ℚ ξ^e-i q^* [pt] ,
^i(X/Y)_ℚ = ℚ ξ^i,
where ξ = c_1(𝒪_(E)(1)).
Since the pairing ^i(X/Y)_ℚ×_i(X/Y)_ℚ→ℚ is non degenerate and since (ξ^i ( ξ^e-i q^*[pt])) = 1, the second equality is an immediate consequence of the first one.
We suppose first that i > 0.
Pick α∈ Z_i(X) which defines a class in _i(X/Y)_ℚ.
Using <cit.>, α is rationally equivalent to ∑_ e-i ⩽ j ⩽ eξ^j q^* α_j where α_j is an element of the Chow group A_i-e+j(Y)_ℚ.
Since the image of α by q is a union of points in Y, we have that q_* α = 0 in A_i(Y)_ℚ.
Observe that
q_* (ξ^e q^* α_e) = α_e,
and that for any j < e, one has that
q_* (ξ^j q^* α_j ) = 0
since the support of the cycle α_j is of dimension i-e+j < i and q_*(ξ^j q^* α_j) belongs to A_i(Y).
Hence the conditions q_* α = 0 implies that α_e = 0 in A_i(Y)_ℚ.
Since ξ^j α also defines a class in _i-j(X/Y)_ℚ, this implies that α_e-j = 0 in A_i-j(Y)_ℚ for any j < i. We have finally that in _i(X/Y)_ℚ:
α = ξ^e-i q^* α_e-i.
Since α_e-i belongs to A_0(Y)_ℚ and _0(Y) = ℚ [pt], the ℚ-module _i(X/Y) is generated by ξ^e-i q^*[pt] for i> 0.
For i = 0, the groups _0(X)_ℚ and _0(X/Y)_ℚ are isomorphic to ℚ, so we get the desired conclusion.
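For a concrete instance (a sanity check of ours): take Y = ℙ^1 and E = 𝒪⊕𝒪(1), so that X = ℙ(E) is a Hirzebruch surface and e = 1. The proposition gives _1(X/Y)_ℚ = ℚ [F], where F = q^-1(pt) is a fiber, and ^1(X/Y)_ℚ = ℚξ, in accordance with the fact that the relative Picard number of a ℙ^1-bundle over a curve is 1.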
§.§ Pullback and pushforward
In this section, we fix any two (proper) surjective morphisms q_1 : X_1 → Y_1, q_2 : X_2 → Y_2 between normal projective varieties.
To simplify the notation, we write X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 when we have two regular maps f : X_1 → X_2 and g : Y_1 → Y_2 such that q_2 ∘ f = g ∘ q_1 and we shall say that X_1/_q_1 Y_1 fg X_2/_q_2Y_2 is a morphism.
When f : X_1 X_2 and g : Y_1 Y_2 are merely rational maps, then we write X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 and we shall call it a rational map.
Let X_1/_q_1 Y_1 fg X_2/_q_2Y_2 be a morphism. Then the morphism of abelian groups f_* : _i(X_1) →_i(X_2) induces a morphism of abelian groups f_* : _i(X_1/Y_1) →_i(X_2/Y_2).
Take a cycle z ∈ Z_i(X_1) such that q_1(z) is a union of points of Y_1.
Then the image of the cycle z by q_2 ∘ f is also a union of points of Y_2 due to the fact that q_2 ∘ f = g ∘ q_1. Hence f_* maps _i(X_1/Y_1) to _i(X_2/Y_2).
Let X_1/_q_1 Y_1 fg X_2/_q_2Y_2 be a morphism. Then the morphism of graded rings f^* : ^∙(X_2) →^∙(X_1) induces a morphism of graded rings f^* : ^∙(X_2/Y_2)_ℚ→^∙(X_1/Y_1)_ℚ.
This result follows immediately by duality from the previous proposition, since the pairing ^i(X_i/Y_i)_ℚ×_i(X_i/Y_i)_ℚ→ℚ is non degenerate.
§.§ Restriction to a general fiber and relative canonical morphism
Recall that dim X = n, dim Y = l and that the relative dimension of q: X→ Y is e.
There exists a unique class α_X/Y∈^l(X)_ℚ satisfying the following conditions.
* The image ψ_X(α_X/Y) belongs to the subspace _e(X/Y)_ℚ of _e(X)_ℚ.
* For any class β∈_l(X)_ℚ, q_* β = (α_X/Yβ) [Y].
Moreover, for any open subset V of Y such that the restriction q to U=q^-1(V) is flat, and for all y ∈ V and any irreducible component F of the scheme-theoretic fiber X_y, we have:
ψ_X(α_X/Y) = [X_y] = r [F],
where r is a rational number which only depends on F and
where [X_y] (resp. [F]) denotes the fundamental class of X_y (resp. F) viewed as an element of _e(X/Y).
More explicitly, the class α_X/Y is given by
α_X/Y = 1/(H_Y^l) × q^* H_Y^l ∈^l(X/Y)_ℚ,
where H_Y is an ample divisor on Y.
Recall that by generic flatness (see <cit.>), one can always find an open subset V of Y such that the restriction of q to q^-1(V) is flat over V.
Fix an ample Cartier divisor H_Y on Y, we set
α_X/Y := 1/(H_Y^l) × q^* H_Y^l ∈^l(X)_ℚ.
Write the class H_Y^l in A_0(Y) as:
H_Y^l = ∑ a_j [p_j]
where p_j ∈ V() are points in V and a_j are positive integers satisfying ∑ a_j = (H_Y^l).
By the projection formula (Theorem <ref>.(iv)), the class α_X/Y satisfies (i) and (ii) .
Let us show that a class satisfying (i) and (ii) is unique.
Suppose there is another one α' ∈^l(X)_ℚ.
Then for any class β∈_l(X)_ℚ, ((α_X/Y - α') β) = 0, so that α_X/Y = α' since the pairing ^l(X)_ℚ×_l(X)_ℚ→ℚ is non degenerate.
Let us prove the last assertion.
By generic flatness <cit.>, let V be an open subset of Y such that the restriction q_|q^-1(V) : q^-1(V) → V is flat and such that the dimension of every fiber is e.
Since H_Y is ample, we can find hypersurfaces H_1, …, H_l ⊂ Y such that H_1 ∩…∩ H_l represents the class H_Y^l and such that H_1 ∩…∩ H_l ⊂ V.
In particular, by <cit.>, the pullback q^* H_Y^l is represented by a cycle supported in q^-1(H_1 ∩…∩ H_l). Denote by g : V → Y and u : U → X the inclusion maps of V and U into Y and X respectively.
The morphisms u and g are open embeddings, hence flat. Moreover we have the following commutative diagram.
U [d]^q_|U[r]^u X [d]^q
V [r]^g Y
Using <cit.>, one has that for any β∈ A_l(X):
(q^* H_Y^l β) = ( q_|U^* g^*(H_Y^l) u^*β).
Using (<ref>), one obtains in A_e(X):
q_|U^* g^* H_Y^l = q_|U^* g^*(∑ a_j [p_j]) = ∑ a_j [q^-1(p_j)],
which is well-defined since the restriction of q on U is flat.
By <cit.>, we have that [X_p_j] = [X_y] ∈_e(X) for any p_j, y ∈ V. In particular, we have:
ψ_X(q^* H_Y^l) = (∑ a_j) [X_y] = (H_Y^l) [X_y] ∈_e(X),
where y is a point in V, which proves that ψ_X(α_X/Y) = [X_y] in _e(X)_ℚ for any point y in V.
By the Stein factorization theorem, there exists a morphism q' : X → Y' with connected fibres and a finite morphism f : Y' → Y such that q = f ∘ q'.
Since (H_Y^l)[X_y] = q^* H_Y^l = q'^*f^* H_Y^l and since f^* H_Y^l ∈^l(Y')_ℝ, which is canonically isomorphic to ℝ, we have that f^* H_Y^l = p · [y'] ∈^l(Y')_ℝ where p is an integer and where [y'] is a general point in f^-1(y).
We have thus proven that:
[X_y] = p/(H_Y^l) × [q'^-1(y')] ∈_e(X),
and q'^-1(y') is an irreducible component of X_y as required.
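For instance (an immediate check), if X = Y × F is a product and q is the first projection, then every scheme-theoretic fiber is isomorphic to F, the morphism q is flat everywhere, and formula (<ref>) gives
ψ_X(α_X/Y) = [{y}× F] ∈_e(X/Y)_ℚ
for any y ∈ Y, with r = 1 in (<ref>).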
The class previously constructed allows us to define a restriction morphism.
Suppose that dim Y = l and that H_Y is an ample Cartier divisor on Y, then we define _X/Y: _∙(X)_ℚ→_∙ -l(X/Y)_ℚ by setting:
_X/Y (β) := 1/(H_Y^l) × q^* H_Y^l β = α_X/Yβ.
This morphism does not depend on the choice of H_Y.
We shall denote by _X/Y^* : β∈^∙(X/Y)_ℚ→α_X/Y·β∈^∙ + l(X)_ℚ the dual morphism induced by _X/Y with respect to the pairing ^∙(X/Y)_ℚ×_∙(X/Y)_ℚ→ℚ.
Recall that dim Y = l. The following properties are satisfied.
* For any class α∈^∙(X)_ℚ, one has:
ψ_X ∘_X/Y^* (α) = _X/Y∘ψ_X (α).
* For any morphism X'/_q'Y' fg X/_q Y where X' = X=n and Y' = Y = l such that the topological degree of g is d, we have for any α∈^i-l(X/Y)_ℚ:
d ×_X'/Y'^* ∘ f^* α = f^* ∘_X/Y^* α.
The definition of the restriction morphism gives a natural way to generalize the definition of the canonical morphism ψ_X : ^i (X) →_n-i(X) to the relative case.
Recall that the relative dimension of the morphism q: X → Y is e. For any integer i⩾ 0, we define the canonical morphism ψ_X/Y by:
ψ_X/Y := ψ_X ∘_X/Y^* : β∈^i(X/Y)_ℚ→ψ_X(α_X/Y·β) ∈_e-i(X/Y)_ℚ.
When i> e by convention the map ψ_X/Y is zero.
We give here a situation where this map is an isomorphism.
Suppose q: X → Y is a smooth morphism of relative dimension e, then for any integer 0 ⩽ i ⩽ e, the map ψ_X/Y: ^i(X/Y)_ℚ→_e-i(X/Y)_ℚ is an isomorphism.
Since the pairing ^i(X/Y)_ℚ×_i(X/Y)_ℚ→ℚ is perfect by Proposition <ref>, we have that the dual morphism ψ_X/Y^* : ^e-i(X/Y)_ℚ→_i(X/Y)_ℚ of ψ_X/Y is surjective whenever ψ_X/Y: ^i(X/Y)_ℚ→_e-i(X/Y)_ℚ is injective.
We are thus reduced to prove the injectivity of ψ_X/Y : ^i(X/Y)_ℚ→_e-i(X/Y)_ℚ.
Take a ∈^i(X/Y)_ℚ such that ψ_X/Y(a) = 0, and choose a class α∈^i(X)_ℚ representing a.
We fix a subvariety V of dimension i in a fiber X_y of q where y is a point in Y.
We need to prove that (α [V]) = 0.
By Proposition <ref>, the condition ψ_X/Y(α) = 0 implies that:
α [X_y] = 0 ∈_e-i(X)_ℚ.
As the morphism q: X → Y is smooth, the fiber X_y over y is smooth.
By Theorem <ref>, there exists a class β∈^e-i(X_y)_ℚ such that:
β [X_y] = [V].
In particular, we get:
(α [V] )= (α (β [X_y]))= (β (α [X_y])) = 0
as required.
If X = ℙ(E) where E is a vector bundle on Y, then Proposition <ref> implies that ψ_X/Y : ^i(X/Y)_ℚ→_e-i(X/Y)_ℚ is an isomorphism for any integer 0 ⩽ i ⩽ e.
If X is the blow-up of ℙ^1 ×ℙ^1 at a point and q is the composition of the blow-down from X to ℙ^1 ×ℙ^1 with the projection onto the first factor Y = ℙ^1, then the morphism ψ_X/Y : ^0(X/Y)_ℚ→_1(X/Y)_ℚ is not surjective and ψ_X/Y: ^1(X/Y)_ℚ→_0(X/Y)_ℚ is not injective.
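Let us justify this last example (our own verification). Denote by E the exceptional curve and by F̃ the strict transform of the fiber through the blown-up point, so that a general fiber of q has class f_0 = F̃ + E. Both E and F̃ are contracted by q, hence _1(X/Y)_ℚ = ℚF̃⊕ℚ E has dimension 2, while ψ_X/Y sends ^0(X/Y)_ℚ≅ℚ onto the line ℚ f_0: it is not surjective. Dually, since
E · f_0 = E ·F̃ + E^2 = 1 - 1 = 0,
the class of E in ^1(X/Y)_ℚ is non-zero (it pairs non-trivially with E ∈_1(X/Y)_ℚ) but lies in the kernel of ψ_X/Y : ^1(X/Y)_ℚ→_0(X/Y)_ℚ.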
§ APPLICATION TO DYNAMICS
In this section, we shall consider various normal projective varieties X_j and Y_j respectively of dimension n and l and we write e = n-l
Recall from Section <ref> that the notation X_j /_q_j Y_j means that q_j : X_j → Y_j is a surjective morphism of relative dimension e and that X/_q Y fg X'/_q' Y' means that f: X X' and g : Y Y' are dominant rational maps such that q' ∘ f = g ∘ q.
We shall also fix H_X_j and H_Y_j big and nef Cartier divisors on X_j and Y_j respectively.
In this section we prove Theorem <ref> and Theorem <ref>.
They will follow from Theorem <ref> and Theorem <ref> respectively.
§.§ Degrees of rational maps
Let us consider a rational map
X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 and let Γ_f (resp. Γ_g) be the normalization of the graph of f (resp. g) in X_1 × X_2 (resp. Y_1 × Y_2).
We denote by Γ̃_̃f̃ the normalization of the graph of the map induced by q ∘ f from Γ_f to Γ_g, we thus have the following diagram.
Γ̃_̃f̃[ld]_π_1[rd]^π_2@/^1pc/[ddd]^ϖ
X_1 [d]^q_1@–>[rr]^f X_2 [d]^q_2
Y_1 @–>[rr]^g Y_2
Γ_g [ul]^π_1'[ur]_π_2'
The i-th relative degree of f is defined by the formula:
_i(f) : = ( π_1^*(H_X_1^e-i· (q_1^*H_Y_1)^l) ·π_2^*(H_X_2)^i).
When Y_1 and Y_2 are reduced to a point, the relative degree reduces to the usual i-th degree of f (computed with respect to H_X_1 and H_X_2).
If e = 0, then _i(f) = (q_1^*H_Y_1^l) if i = 0 and _i(f) = 0 for i >0.
Observe that in the above diagram, the map ϖ : Γ̃_f →Γ_g is a regular surjective morphism.
Note that the degrees always depend on the choice of the big nef divisors, but to simplify the notations, we deliberately omit it.
We now explain how to associate to any rational map X_1 /_q_1 Y_1 fg X_2/_q_2 Y_2 a pullback operator (f,g)^∙,i.
Let X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 be a rational map and let π_1 and π_2 be the projections from Γ̃_f onto X_1 and X_2 respectively, as in the diagram above.
We define the linear morphisms (f,g)^∙,i and (f,g)_∙,i by the following formula:
(f,g)^∙,i : α∈^i(X_2/Y_2)_ℝ⟶ (π_1_* ∘ψ_Γ̃_̃f̃/Γ_g∘π_2^* )(α) ∈_e-i(X_1/Y_1)_ℝ.
(f,g)_∙,i : β∈^i(X_1/Y_1)_ℝ⟶ (π_2_* ∘ψ_Γ̃_̃f̃/Γ_g∘π_1^* )(β) ∈_e-i(X_2/Y_2)_ℝ.
When Y_1 and Y_2 are reduced to a point, then we simply write f^∙,i (α) := (f, _{pt})^∙,i (α) and f_∙,i (β) := (f, _{pt})_∙,i (β).
Since ^i(X/Y) = 0 and _e-i(X) = 0 when i >e, it implies that
(f,g)^∙,i and (f,g)_∙,i are identically zero for any i > e.
§.§ Sub-multiplicativity
Let us consider the composition X_1/_q_1Y_1 f_1g_2 X_2/_q_2 Y_2 f_2g_2 X_3/_q_3Y_3 of dominant rational maps.
Then for any integer 0 ⩽ i ⩽ e, there exists a constant C>0 which depends only on the choice of H_X_2, H_Y_2, i, l and e such that:
_i(f_2 ∘ f_1 ) ⩽ C _i(f_1) _i (f_2).
More precisely, C= (e-i+1)^i/ (H_X_2^e · q_2^* H_Y_2^l).
We denote by Γ̃_̃f̃_̃1̃ (resp. Γ̃_̃f̃_̃2̃, Γ_g_1, Γ_g_2) the normalization of the graph of q_2 ∘ f_1 (resp. q_3∘ f_2,g_1,g_2) and π_1, π_2 (resp.π_3,π_4, π_1', π_2' and π_3', π_4') the projections onto the first and the second factor respectively. We set Γ as the graph of the rational map π_3^-1∘ f_1 ∘π_1 : Γ̃_̃f̃_̃1̃Γ̃_̃f̃_̃2̃, u and v the projections from Γ onto Γ̃_̃f̃_̃1̃ and Γ̃_̃f̃_̃2̃ and ϖ_i the restriction on Γ̃_̃f̃_̃ĩ of the projection from X_i × X_i+1 to Y_i × Y_i+1 for each i=1,2. We have thus the following diagram.
Γ[ld]_u [rd]^v
Γ̃_̃f̃_̃1̃@/^1pc/[ddd]^ϖ_1[ld]_π_1[rd]^π_2 Γ̃_̃f̃_̃2̃[ld]_π_3[rd]^π_4@/^1pc/[ddd]^ϖ_2
X_1 [d]_q_1@–>[rr]_f_1 X_2[d]_q_2@–>[rr]_f_2 X_3 [d]_q_3
Y_1@–>[rr]^g_1 Y_2@–>[rr]^g_2 Y_3
Γ_g_1[ul]^π'_1[ru]_π'_2 Γ_g_2[ul]^π'_3[ru]_π'_4
By Proposition <ref> applied to q_2 ∘π_2 ∘ u : Γ→ Y_2, the class ψ_Γ( u^* π_2^* q_2^* H_Y_2^l) is represented by the fundamental class [V] where V is a subscheme of dimension e in Γ which is a general fiber of q_2 ∘π_2 ∘ u.
We apply Theorem <ref> by restriction to V to the class a = v^* π_4^* H_X_3^i [V] and b = u^* π_2^* H_X_2 [V]. We obtain:
v^* π_4^* H_X_3^i [V] ⩽ (e-i+1)^i ((v^* π_4^* H_X_3^i · u^* π_2^* H_X_2^e-i) [V])/((u^* π_2^* H_X_2^e) [V]) × u^* π_2^* H_X_2^i [V] ∈_e-i(Γ).
Let us simplify the right hand side of inequality (<ref>).
Since π_2 ∘ u = π_3 ∘ v, ψ_Γ( u^* π_2^* q_2^* H_Y_2^l) = [V] ∈_e(Γ) and since the morphism v is generically finite, one has that:
( v^* π_4^* H_X_3^i·u^* π_2^* H_X_2^e-i [V] ) = (v^* (π_4^* H_X_3^i ·π_3^* H_X_2^e-i·π_3^* q_2^* H_Y_2^l) )= d ×_i(f_2),
where d is the topological degree of v.
The same argument gives:
(u^* π_2^* H_X_2^e [V]) = d × (H_X_2^e · q_2^* H_Y_2^l).
Using (<ref>), (<ref>), inequality (<ref>) can be rewritten as:
u^* π_2^* q_2^*H_Y_2^l· v^* π_4^* H_X_3^i ⩽ C _i( f_2) u^* π_2^* H_X_2^i· u^* π_2^* q_2^*H_Y_2^l∈^l+i(Γ),
where C = (e-i+1)^i / (H_X_2^e · q_2^* H_Y_2^l). Since the class u^*π_1^* H_X_1^e-i∈^e-i (Γ) is nef, we can intersect this class in the previous inequality to obtain:
(u^*(π_1^* H_X_1^n-l-i·π_2^* q_2^*H_Y_2^l) · v^* π_4^* H_X_3^i) ⩽ C _i(f_2) (u^*(π_2^* H_X_2^i·π_2^* q_2^*H_Y_2^l·π_1^* H_X_1^n-l-i)).
Let us simplify the expressions in inequality (<ref>).
Because π_2^* q_2^* H_Y_2^l = ϖ_1^* π_2'^* H_Y_2^l and
_l(g_1) = (π_2'^* H_Y_2^l), we deduce that:
π_2^* q_2^* H_Y_2^l = _l(g_1)/(H_Y_1^l) ×ϖ_1^* π'_1^* H_Y_1^l = _l(g_1)/(H_Y_1^l) ×π_1^* q_1^* H_Y_1^l.
Applying (<ref>), the inequality (<ref>) can be translated as:
_l(g_1)/(H_Y_1^l) × (u^*π_1^*(H_X_1^n-l-i· q_1^*H_Y_1^l) · v^* π_4^* H_X_3^i) ⩽ C _l(g_1)/(H_Y_1^l) ×_i(f_2) (u^*(π_2^* H_X_2^i·π_1^* q_1^*H_Y_1^l·π_1^* H_X_1^n-l-i)).
We obtain thus:
_l(g_1)/(H_Y_1^l) ×_i(f_2 ∘ f_1) ⩽ C _l(g_1)/(H_Y_1^l) ×_i(f_1) _i(f_2).
This concludes the proof of the inequality after dividing by _l(g_1)/ (H_Y_1^l).
§.§ Norms of operators associated to rational maps
The proof of Theorem <ref> relies on an easy but crucial lemma which is as follows.
Let us consider (V, || · ||) a finite dimensional normed ℝ-vector space and let 𝒞 be a closed convex cone with non-empty interior in V. Then there exists a constant C > 0 such that any vector v ∈ V can be decomposed as v = v^+ - v^- where v^+ and v^- are in 𝒞 and such that:
|| v^±|| ⩽ C ||v||.
Let us define the map f : V →ℝ^+ given by:
f (v) = inf{ ||v'|| +|| v' - v|| | v' ∈𝒞 , v' - v ∈𝒞}.
We check easily that f defines a norm on V which is similar to the proof of Proposition <ref>.
Since V is finite dimensional, there exists a constant C such that for any v ∈ V, one has:
f(v) ⩽ C ||v||,
Hence ||v^+|| ⩽ C ||v || and ||v^-|| ⩽ C || v ||.
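As a toy example (ours): take V = ℝ^2 with the sup norm and 𝒞 the closed first quadrant. Any v = (x,y) decomposes as v = v^+ - v^- with
v^+ = (max(x,0), max(y,0)) ∈𝒞 and v^- = (max(-x,0), max(-y,0)) ∈𝒞,
and || v^±||_∞⩽ || v ||_∞, so the lemma holds here with C = 1.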
Let X/_q Y fg X/_qY be a rational map.
We fix an integer i ⩽ e, some norms on ^i(X/Y)_ℝ, on _e-i(X/Y)_ℝ.
Then there is a constant C >0 such that for any rational map X/_q Y fg X/_q Y, we have:
1/C ⩽ || (f, g)^∙,i || / _i(f) ⩽ C.
In particular, the i-th relative dynamical degree of f satisfies the following equality:
λ_i( f,X/Y) = lim_p→ + ∞ || (f^p, g^p)^∙,i ||^1/p.
Moreover, when Y is reduced to a point, we obtain:
λ_i(f)=lim_p → +∞ ||(f^p)^∙,i ||^1/p.
The proof of Theorem <ref> follows directly from Theorem <ref> since ^i(X/Y) = ^i(X) and _e-i(X/Y) = _e-i(X) when Y is reduced to a point.
We denote by π_1 and π_2 the projections from the normalization of the graph Γ̃_̃f̃ of q ∘ f onto the first and the second component respectively as in Definition <ref>.
Since we want to control the norm of f^∙,i by the i-th relative degree of f, we first find an appropriate norm to relate the norm on _e-i(X)_ℝ with an intersection product.
As _e-i(X/Y)_ℝ is a subspace of _e-i(X)_ℝ, we can extend the norm || · ||__e-i(X/Y)_ℝ into a norm on _e-i(X)_ℝ.
As _e-i(X)_ℝ is a finite dimensional vector space and since H_X^e-i is a class in the interior of the basepoint free cone ^e-i(X), we can suppose by equivalence of norms that the norm on _e-i(X)_ℝ is given by
|| z || = inf{ (H_X^e-i z^+) + (H_X^e-i z^-) | z = z^+ - z^-, z^+, z^- ∈_e-i(X) }
as in Proposition <ref>.
Let us prove that the lower bound of || (f,g)^∙,i|| / _i(f) is 1.
We denote by φ : ^i(X) →^i(X/Y) the canonical surjection. Since H_X^i is basepoint free, the class (f,g)^∙,i(φ(H_X^i)) ∈_e-i(X/Y)_ℝ⊂_e-i(X)_ℝ is pseudo-effective; in particular, its norm is exactly _i(f). We have thus by definition:
|| (f,g)^∙,i || / _i(f) = || (f,g)^∙,i || / || (f,g)^∙,iφ(H_X^i)|| ⩾ 1,
as required.
Let us find an upper bound for || (f,g )^∙,i ||/|| (f,g)^∙,iφ(H_X^i)||.
First we fix a morphism s: ^i(X/Y)_ℝ→^i(X)_ℝ such that φ∘ s = id.
Take α∈^i(X/Y)_ℝ of norm 1, then the class u = s(α) ∈^i(X)_ℝ is a representative of α. By construction, the norm of u is bounded by || u ||_^i(X)_ℝ⩽ C_1 || α||_^i(X/Y)_ℝ = C_1 where C_1 is the norm of the operator s. Since by Proposition <ref>.(ii), _Γ_f/Γ_g^* ∘π_2^* = (1/_l(g)) ×π_2^* ∘_X/Y^*, we have therefore:
(f,g)^∙,iα = 1/_l(g) × (π_1_*∘ψ_Γ_f∘π_2^* ∘_X/Y^*)(α) = _X/Y(f^∙,i u).
By Theorem <ref>, the pliant cone ^i(X) has a non-empty interior in ^i(X)_ℝ and we can apply Lemma <ref>.
There exists a constant C_2>0 which depends only on ^i(X) and the choice of the norm on ^i(X)_ℝ such that the class u can be decomposed as u = u_1 - u_2 where u_j ∈^i(X) and ||u_j||_^i(X)_ℝ⩽ C_2 || u ||_^i(X)_ℝ for j = 1,2. We set α_j = φ(u_j) for all j ∈{1,2}.
By the triangular inequality, we have:
||(f,g)^∙,iα ||__e-i(X/Y) || (f,g)^∙,iφ(H_X^i)||⩽|| (f,g)^∙,iα_1 ||__e-i(X)_ℝ|| (f,g)^∙,iφ(H_X^i)|| +
|| (f,g)^∙,iα_2 ||__e-i(X)_ℝ|| (f,g)^∙,iφ(H_X^i)|| .
We have to find an upper bound of || (f,g)^∙,iα_j ||__e-i(X/Y)_ℝ for each j = 1,2.
Applying Siu's inequality (Corollary <ref>) to a = π_2^* u_j and b = π_2^* H_X and then composing with _X/Y∘π_1_*∘ψ_Γ_f gives
_X/Y (f^∙,i(u_j)) ⩽ C_3 || u_j ||_^i(X)_ℝ/(H_X^n) ×_X/Y(f^∙,i(H_X^i)),
where C_3 is a positive constant which depends only on the choice of big nef divisors.
This implies, by intersecting with H_X^e-i, the inequality:
|| (f,g)^∙,i(α_j) ||__e-i(X/Y)_ℝ⩽ C_3 || u_j ||_^i(X)_ℝ/(H_X^n) × || (f,g)^∙,i(φ(H_X^i)) ||__e-i(X/Y)_ℝ.
In particular, taking the supremum over the classes α of norm 1, we have shown that:
||(f,g)^∙,iα ||__e-i(X/Y)_ℝ / || (f,g)^∙,iφ(H_X^i)||__e-i(X/Y)_ℝ⩽ 2 C_1 C_2 C_3/(H_X^n),
which concludes the proof.
§ SEMI-CONJUGATION BY DOMINANT RATIONAL MAPS
In this section, we consider a more general situation than in the previous section. We still suppose that the varieties X_i and Y_i are of dimension n and l respectively such that the relative dimension is e= n-l, but we suppose the maps q_i : X_i Y_i merely rational and dominant: they may exhibit indeterminacy points.
Recall also that H_X_i and H_Y_i are again big and nef Cartier divisors on X_i and Y_i respectively.
Let f : X_1 X_2, g : Y_1 Y_2, q_1: X_1 Y_1 and q_2 : X_2 Y_2 be four dominant rational maps such that q_2∘ f = g ∘ q_1.
We define the i-th relative degree of f (still denoted _i(f)) as the relative degree _i(f̃) with respect to the rational map Γ_q_1/Y_1 f̃gΓ_q_2/Y_2, where Γ_q_i is the normalization of the graph of q_i in X_i × Y_i for i∈{1,2} and f̃: Γ_q_1Γ_q_2 is the rational map induced by f.
(i) Consider now the following commutative diagram:
X_1 @–>[r]^f_1@–>[d]^q_1 X_2 @–>[r]^f_2@–>[d]^q_2 X_3 @–>[d]^q_3
Y_1 @–>[r]^g_1 Y_2 @–>[r]^g_2 Y_3
where f_j : X_j X_j+1 and g_j : Y_j Y_j+1 for j∈{1,2} and q_j : X_j Y_j for j∈{1,2,3} are dominant rational maps such that q_j+1∘ f_j = g_j ∘ q_j for any integer j∈{1,2}. Then there exists a constant C >0 which depends only on e, i and the choice of big nef Cartier divisors such that:
_i(f_2 ∘ f_1) ⩽ C _i(f_2) _i(f_1).
(ii) Consider now the following commutative diagram:
X_1' @–>[lld]_φ_1@–>[rrr]^f̃@–>[dd] X_2' @–>[lld]_φ_2@–>[dd]
X_1 @–>[rrr]^f@–>[dd]^q_1 X_2 @–>[dd]_<<<<q_2
Y_1' @–>[rrr]^g̃@–>[lld]_ϕ_1 Y_2' @–>[lld]^ϕ_2
Y_1 @–>[rrr]^g Y_2 ,
where f:X_1 X_2, g : Y_1 Y_2, q_1: X_1 Y_1, q_2 : X_2 Y_2 are four dominant rational maps such that q_2∘ f= g ∘ q_1.
We consider some birational maps φ_i : X_i' X_i and ϕ_i : Y_i' Y_i for i = 1,2 such that f̃ = φ_2^-1∘ f ∘φ_1 and g̃ = ϕ_2^-1∘ g ∘ϕ_1.
Then for any integer 0 ⩽ i ⩽ e, there exists a constant C> 0 which depends on e, i, on the choice of big nef Cartier divisors and on the rational maps φ_1 and φ_2 such that:
1C_i(f) ⩽_i(f̃) ⩽ C _i(f).
(i)
Recall that the normalization of the graph of q_j in X_j × Y_j is birational to X_j for j∈{1,2,3}; hence one can define f̃_j : Γ_q_jΓ_q_j+1, the rational map induced by f_j on the graph Γ_q_j of q_j, for j ∈{1,2}. Then (i) results directly from Theorem <ref> applied to the composition Γ_q_1/Y_1 f̃_1g_1Γ_q_2/Y_2 f̃_2g_2Γ_q_3/Y_3.
(ii) Let us suppose first that the maps q_j : X_j → Y_j and q_j': X_j' → Y_j' are all regular for j=1,2.
Let us apply successively Theorem <ref> to the composition X_1'/_q_1' Y_1' φ_1ϕ_1 X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 φ_2^-1ϕ_2^-1 X_2' /_q_2' Y_2'.
We obtain :
_i ( φ_2^-1∘ f ∘φ_1) ⩽ C_2 _i(f ∘φ_1) _i(φ_2^-1) ⩽ C_1 C_2 _i(f) _i(φ_1) _i(φ_2^-1),
where C_1 = (e-i+1)^i / (H_X_1^e · q_1^*H_Y_1^l) and C_2 = (e-i+1)^i / (H_X_2^e · q_2^* H_Y_2^l).
This proves that:
_i(φ_2^-1∘ f ∘φ_1) ⩽ C _i(f),
where
C = (e-i+1)^2i_i(φ_1) _i(φ_2^-1) (H_X_1^e· q_1^* H_Y_1^l) (H_X_2^e · q_2^* H_Y_2^l).
The proof follows easily from the regular case since the maps Γ_q_1'Γ_q_1 and Γ_q_2'Γ_q_2 are birational where Γ_q_i' are the graphs of q_i' in X_i' × Y_i' for i=1,2.
Proof of Theorem <ref>:
(i) We apply Theorem <ref> to Y_1 = Y_2 = Y_3 = {pt}, X_1 = X_2 = X_3 = X and H_X_1 = H_X_2 = H_X_3 = H_X; we thus get the desired conclusion:
_i (g ∘ f) ⩽ (n-i+1)^i/(H_X^n) ×_i(f) _i(g).
(ii) Applying Theorem <ref>.(ii) to the varieties X_1' = X_2' = X_1 = X_2 = X, Y_1' = Y_2' = Y_1 = Y_2 = {pt}, to the choice of big nef divisors H_X_1' = H_X_2' = H_X', H_Y_1' = H_Y_2' = H_Y', H_X_1 = H_X_2 = H_X and to the rational maps φ_1 = φ_2 = id_X, ϕ_1 = ϕ_2 = g = id_{pt}, f : X X yields the desired result.
§ MIXED DEGREE FORMULA
Let us consider three dominant rational maps f : X X, q : X Y, g : Y Y such that q ∘ f = g ∘ q. Theorem <ref>.(i) implies that for any integer i⩽ e the sequence _i(f^n) is submultiplicative. Define i-th relative dynamical degree as follows.
λ_i(f, X/Y) := lim_p → + ∞ (_i(f^p))^1/p.
When Y is reduced to a point, then we simply write λ_i(f) := λ_i( f , X/ {pt}).
Since _i(f^p) ∈ℕ is an integer, one has that λ_i(f, X/Y) ⩾ 1.
Theorem <ref>.(ii) implies that λ_i(f, X/Y) is invariant by birational conjugacy, i.e λ_i(f,X/Y) does not depend on the choice of big nef Cartier divisors and on any choice of varieties X' and Y' which are birational to X and Y respectively.
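As an illustration (a computation of ours, under the simplifying assumption that g is a morphism): suppose X = Y ×ℙ^e, q is the first projection and f = g × h, where h : ℙ^eℙ^e is dominant. Write H_X = p_Y^* H_Y + p_ℙ^e^* H with H the hyperplane class. In the intersection number defining _i(f^p), every term containing a positive power of a pullback of H_Y is killed by the factor q^* H_Y^l, and one finds
_i(f^p) = (H_Y^l) ×_i(h^p),
so that λ_i(f, X/Y) = λ_i(h): the relative dynamical degrees only measure the dynamics in the fibers.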
Our aim in this section is to prove Theorem <ref>.
To that end, we follow the approach from <cit.>. The main ingredient (Corollary <ref>) is an inequality relating basepoint free classes which generalizes to arbitrary fields (see <cit.> and <cit.>).
This inequality is a direct consequence of Theorem <ref> which estimates the positivity of the diagonal in a quite general setting.
After this, we prove in Theorem <ref> the submultiplicativity formula for the mixed degrees.
Once the submultiplicativity of these mixed degrees holds, the proof follows from a linear algebra argument.
§.§ Positivity estimate of the diagonal
In this section, we prove the following theorem.
Let q: X → Y be a surjective morphism such that Y = l and such that q is of relative dimension e. There exists a constant C>0 such that for any surjective generically finite morphism π : X' → X and any class γ∈^l+e(X'× X'):
(γ [Δ_X']) ⩽ C × ( γ· (π×π)^* (H_X^e · H_Y^l)) ,
where p_1 and p_2 are the projections from X× X to the first and the second factor respectively, H_X = p_1^* H_X + p_2^* H_X and H_Y = p_1^* q^* H_Y + p_2^* q^* H_Y, and where Δ_X' (resp. Δ_X) is the diagonal of X' (resp. of X) in X'× X' (resp. in X× X).
The fact that the constant C>0 does not depend on π but only on H_X, H_Y is crucial in the applications.
Theorem <ref> implies that the difference (π×π)^* (H_X^e · H_Y^l) - [Δ_X'] belongs to the dual cone of the cone _e+l(X'× X')_ℝ with respect to the intersection product, however we conjecture that this class should be pseudo-effective:
[Δ_X'] ⩽ C ψ_X'× X'((π×π)^* (H_X^e · H_Y^l)) ∈_l+e(X'× X')_ℝ.
We shall use several times the following lemma which is proved at the end of this section.
Let X_1/_q_1 Y_1 fg X_2/_q_2 Y_2 be two dominant rational maps where dim Y_1 = dim Y_2 = l and dim X_1 = dim X_2 = e+l and where q_1, q_2 are regular surjective morphisms.
We denote by Γ_f and Γ_g the normalizations of the graph of f and g in X_1× X_2 and Y_1× Y_2 respectively, π_1,π_2, π_1', π_2' are the projections from Γ_f and Γ_g on the first and the second factor respectively.
Then there exists a constant C>0 such that for any surjective generically finite morphism π: X' →Γ_f, any integer 0 ⩽ j ⩽ l and any class β∈^e+l-j(X'), one has:
(β·π^*π_2^* q_2^* H_Y_2^j) ⩽ C _j(g)/(H_Y_1^l) × (β·π^* π_1^* q_1^* H_Y_1^j),
where _j(g) is the j-th degree of the rational map g with respect to the divisors H_Y_1 and H_Y_2.
Proof of Theorem <ref>.
By Siu's inequality, we can suppose that both the classes H_X and H_Y are ample in X and Y respectively.
We proceed in three steps. Fix π : X' → X.
Step 1: We suppose first that X = ℙ^l ×ℙ^e, Y = ℙ^l and q is the projection onto the first factor.
Since X× X is smooth, the pullback (π×π)^* is well-defined in _l+e(X× X)_ℝ because the morphism ψ_X× X : ^l+e(X× X)_ℝ→_l+e(X× X)_ℝ is an isomorphism.
Our objective is to prove that there exists a constant C_1>0 such that
[Δ_X'] ⩽ C_1 ψ_X'× X'((π×π)^* (H_X^e · H_Y^l)) ∈_l+e(X'× X')_ℝ.
As X× X is homogeneous, we apply the following lemma analogous to <cit.> which we prove at the end of the section.
Let X be a homogeneous projective variety of dimension n and let π : X' → X be a surjective generically finite morphism. Then one has that :
[Δ_X'] ⩽ (π×π)^* [Δ_X] ∈_n(X'× X')_ℝ.
We denote by p_1', p_2' (resp. p_1”, p_2”) the projections from Y× Y (resp. from X× X) onto the first and the second factor respectively.
Since the basepoint free cone has a non-empty interior by Theorem <ref>.(i) and since the class p_1'^*H_Y+ p_2'^* H_Y is ample on Y × Y, there exists a constant C_2>0 such that the class - [Δ_Y] + C_2 (p_1'^* H_Y + p_2'^* H_Y)^l∈^l(Y× Y)_ℝ is basepoint free.
Since Δ_X = Δ_Y ×Δ_ℙ^e, by pulling back and intersecting we obtain that the class:
-[Δ_X] + C_2 H_Y^l · p^*[Δ_ℙ^e] ∈^e+l(X× X)
is basepoint free, where p denotes the projection from X× X to ℙ^e×ℙ^e.
By the same argument, there exists a constant C_3>0 such that the class -p^*[Δ_ℙ^e] + C_3 H_X^e ∈^e(X× X)_ℝ is basepoint free.
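In fact, this last constant can be made explicit (a standard Künneth computation, recalled here as a sanity check): in ^e(ℙ^e×ℙ^e) one has
[Δ_ℙ^e] = ∑_j=0^e p_1^* h^j · p_2^* h^e-j,
with h the hyperplane class and p_1, p_2 the two projections of ℙ^e×ℙ^e, so that
(p_1^* h + p_2^* h)^e - [Δ_ℙ^e] = ∑_j=0^e (\binom{e}{j} - 1) p_1^* h^j · p_2^* h^e-j
is a non-negative combination of products of nef classes, hence basepoint free.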
We have proved that the class:
- [Δ_X] + C_2 C_3 H_Y^l · H_X^e ∈^e+l(X× X)_ℝ
is basepoint free.
Since the basepoint free cone is stable by pullback, we have thus:
[Δ_X'] ⩽ (π×π)^* [Δ_X] ⩽ C_1 ψ_X'× X'( (π×π)^* (H_Y^l · H_X^e)) ∈_l+e(X'× X')_ℝ,
where C_1 = C_2 × C_3 as required.
Step 2: We now suppose that X = Y ×ℙ^e.
Since Y is projective, there exists a dominant rational map ϕ : Y ℙ^l (ϕ can be chosen as the composition of an embedding in ℙ^N with a generic linear projection onto ℙ^l).
Let Y' be the normalization of the graph of ϕ in Y ×ℙ^l and we denote by ϕ_1 and φ_1 the projections from Y' onto the first and the second factor respectively.
Let φ_2: Y' ×ℙ^e →ℙ^l ×ℙ^e (resp. ϕ_2 : Y'×ℙ^e → X) be the map induced by φ_1 (resp. ϕ_1).
Let X” be the fibred product of X' with Y'×ℙ^e over X, so that ϕ_3, π' are the projections from X” onto X' and Y'×ℙ^e respectively.
We obtain the following commutative diagram:
X' [d]^π [l]^ϕ_3 X”[d]^π'
Y ×ℙ^e [dd]^q [l]^ϕ_2 Y' ×ℙ^e [rd]^φ_2[dd]^p_Y'
ℙ^l ×ℙ^e [dd]^p_ℙ^l
Y [l]^ϕ_1 Y' [rd]^φ_1
ℙ^l
where p_Y' and p_ℙ^l are the projections from Y' ×ℙ^e and ℙ^l ×ℙ^e onto Y' and ℙ^l respectively and where the horizontal arrows are birational maps.
Let us prove that there exists a constant C_4>0 which does not depend on the morphism π: X' → X such that for any basepoint free class γ' ∈^e+l(X”× X”), one has:
(γ' [Δ_X”] ) ⩽ C_4 (γ' · (ϕ_3 ×ϕ_3)^* (π×π)^* (H_X^e · H_Y^l)).
Fix a class γ' ∈^e+l(X”× X”).
We apply the conclusion of the first step to the surjective generically finite morphism π” := φ_2 ∘π' : X”→ℙ^l ×ℙ^e.
There exists a constant C_1>0 such that
[Δ_X”] ⩽ C_1 ψ_X”× X”((π”×π”)^* (H_ℙ^l^l · H_ℙ^l ×ℙ^e^e)) ∈_l+e(X”× X”)_ℝ,
where H_ℙ^l×ℙ^e is an ample Cartier divisor on (ℙ^l ×ℙ^e)^2 and H_ℙ^l is the pullback by p_ℙ^l× p_ℙ^l of an ample Cartier divisor on ℙ^l ×ℙ^l.
Let us apply Theorem <ref> to the class (π”×π”)^* H_ℙ^l ×ℙ^e^e and to the class (π'×π')^* (ϕ_2×ϕ_2)^* H_X; there exists a constant C_5>0 such that:
(π”×π”)^* H_ℙ^l ×ℙ^e^e ⩽ C_5 ((π'×π')^* ((ϕ_2×ϕ_2)^* H_X^2l+e· (φ_2×φ_2)^* H_ℙ^l ×ℙ^e^e))/((π'×π')^*(ϕ_2×ϕ_2)^* H_X^2(l+e)) × (π'×π')^* (ϕ_2×ϕ_2)^* H_X^e ∈^e(X”× X”)_ℝ.
Since ((π'×π')^* α) = deg(π'×π') (α) for any class α∈^2l+2e((Y'×ℙ^e)^2)_ℝ, we have thus:
(π”×π”)^* H_ℙ^l ×ℙ^e^e ⩽ C_6 (π'×π')^* (ϕ_2×ϕ_2)^* H_X^e ∈^e(X”× X”)_ℝ,
where C_6 = C_5 ((ϕ_2×ϕ_2)^* H_X^2l+e· (φ_2×φ_2)^* H_ℙ^l×ℙ^e^e)/((ϕ_2×ϕ_2)^* H_X^2l+2e) > 0 does not depend on π : X' → X.
Using (<ref>) and (<ref>), we obtain:
[Δ_X”] ⩽ C_7 ψ_X”× X”((π”×π”)^* H_ℙ^l^l · (ϕ_3×ϕ_3)^*(π×π)^* H_X^e) ∈_l+e(X”× X”)_ℝ,
where C_7 = C_6× C_1.
Since the basepoint free cone is contained in the nef cone by Theorem <ref>.(v), we have thus:
(γ' [Δ_X”]) ⩽ C_7 (γ' · (ϕ_3 ×ϕ_3)^* (π×π)^*H_X^e · (π”×π”)^* H_ℙ^l^l).
Let us denote by X_1 = (Y ×ℙ^e)^2, X_2 = (ℙ^l ×ℙ^e)^2, Y_1 = Y × Y, Y_2 = ℙ^l ×ℙ^l and let f := (φ_2 ∘ϕ_2^-1×φ_2∘ϕ_2^-1) : X_1 X_2 and g := (φ_1 ∘ϕ_1^-1×φ_1∘ϕ_1^-1) : Y_1 Y_2 be the corresponding dominant rational maps.
Let us apply Lemma <ref> to the class (π'×π')^* (φ_2×φ_2)^* H_ℙ^l^l and to the class (π'×π')^* (ϕ_2 ×ϕ_2)^* H_Y^l: there exists a constant C_8>0, independent of the morphism π'×π' : X”× X”→ (Y'×ℙ^e)^2, such that for any class β∈^2e+l(X”× X”):
(β· (π'×π')^* (φ_2×φ_2)^* H_ℙ^l^l) ⩽ C_8 _l(g)/(H_Y^2l) × (β· (ϕ_3×ϕ_3)^*(π×π)^* H_Y^l).
Using (<ref>) and (<ref>) with the class β = γ' · (ϕ_3×ϕ_3)^* (π×π)^*H_X^e ∈^l+2e(X”× X”), we obtain:
(γ' [Δ_X”]) ⩽ C_4 (γ' · (ϕ_3×ϕ_3)^*(π×π)^* (H_X^e · H_Y^l)),
where C_4 = C_7 × C_8 _l(g)/(H_Y^2l) > 0 does not depend on π.
The conclusion of the theorem follows from the projection formula and from the fact that ϕ_3 ×ϕ_3 is a birational map.
Indeed, we apply the previous inequality to γ' = (ϕ_3 ×ϕ_3)^* γ where γ∈^l+e(X'× X'), we obtain
(γ [Δ_X']) = ( (ϕ_3 ×ϕ_3)^* γ [Δ_X”]) ⩽ C_4 ( γ· (π×π)^* (H_X^e · H_Y^l))
as required.
Step 3: We prove the theorem in the general case.
Suppose q : X → Y is a surjective morphism of relative dimension e and fix a class β∈^l+e(X'× X').
Since X is projective over Y, there exists a closed immersion
i : X → Y×ℙ^N such that q = p'_Y∘ i where p'_Y is the projection of Y ×ℙ^N onto Y.
Let us choose a generic linear projection Y ×ℙ^N Y ×ℙ^e so that the composition with i gives a dominant rational map f : X Y ×ℙ^e.
Let us denote by Γ_f the normalization of the graph of f in X × Y ×^e and π_1, π_2 the projections of Γ_f onto the first and the second factor respectively.
We set X” the fibred product of X' with Γ_f and we denote by π' and ϕ the projection of X” to Γ_f and X' respectively.
We get the following commutative diagram:
X' [d]^π [l]^ϕ[d]^π' X”
X [d]^q@–>[rd]^f Γ_f [d]^π_2[l]_π_1
Y [l]^p_Y Y ×ℙ^e
where p_Y is the projection of Y×ℙ^e onto Y.
We apply the result of Step 2 to the class (ϕ×ϕ)^* β∈^l+e(X”× X”) and to the diagonal of X”.
There exists a constant C_4>0 which does not depend on π such that:
((ϕ×ϕ)^* β [Δ_X”]) ⩽ C_4 ((ϕ×ϕ)^*(β· (π×π)^* H_Y^l) · ((π_2 ∘π')× (π_2 ∘π'))^* H_Y×ℙ^e^e).
Let us apply Theorem <ref> to the class ((π_2 ∘π')× (π_2 ∘π'))^* H_Y×ℙ^e^e and to the class (ϕ×ϕ)^* (π×π)^* H_X.
There exists a constant C_9>0 such that:
((π_2 ∘π')× (π_2 ∘π'))^* H_Y×ℙ^e^e ⩽ C_9 ((π'×π')^* ((π_2×π_2)^* H_Y×ℙ^e^e · (π_1×π_1)^* H_X^2l+e))/((π'×π')^*(π_1×π_1)^* H_X^2l+2e) × (ϕ×ϕ)^* (π×π)^* H_X^e ∈^e(X”× X”)_ℝ.
Since ((π'×π')^* ((π_2×π_2)^* H_Y×ℙ^e^e · (π_1×π_1)^* H_X^2l+e))/((π'×π')^*(π_1×π_1)^* H_X^2l+2e) = _e(f× f)/(H_X^2l+2e), and using (<ref>), we obtain:
((ϕ×ϕ)^* β [Δ_X”] )⩽ C ( (ϕ×ϕ)^* ( β· (π×π)^* (H_X^e · H_Y^l)) ),
where C=C_4 C_9 _e(f× f) / (H_X^2l+2e).
Since the morphism π_1: Γ_f → X is birational, the map ϕ: X”→ X' is also birational and we conclude using the projection formula and since (ϕ×ϕ)_* [Δ_X”] = [Δ_X']:
(β [Δ_X'] ) ⩽ C (β· (π×π)^* (H_X^e · H_Y^l)).
Recall that X,Y are normal projective varieties and H_X,H_Y are ample divisors on X and Y respectively.
Let q: X → Y be a surjective morphism of relative dimension e where Y = l. Then there exists a constant C>0 such that for any surjective generically finite morphism π : X' → X such that for any class α∈^i(X') and any class β∈^l+e-i(X'), one has:
(β·α) ⩽ C ∑_max(0, i-l) ⩽ j ⩽min(i,e) U_j(π_* ψ_X'(α))× (β·π^* (q^* H_Y^i-j· H_X^j)),
where U_j(π_* ψ_X'(α)) = ( H_X^e-j· q^* H_Y^l-i+jπ_* ψ_X'(α)).
Note that when i⩽ e, then the inequality is already a consequence of Siu's inequality (Theorem <ref>).
Indeed, the term on the right hand side of (<ref>) with j = i corresponds exactly to the term C (π^*H_X^n-i·α) ×π^* H_X^i.
Equation (<ref>) proves that the class
-ψ_X'(α) + C ∑_max(0, i-l) ⩽ j ⩽min(i,e) U_j(π_* ψ_X'(α)) ψ_X'(π^* (q^* H_Y^i-j· H_X^j)) ∈_n-i(X')_ℝ
is in the dual of the basepoint free cone ^n-i(X').
Moreover, if (<ref>) is satisfied, then this class is pseudo-effective.
We apply Theorem <ref> to the class γ = p_1^* β· p_2^* α∈^n(X'× X').
There exists a constant C_1>0 such that for any surjective generically finite morphism π: X'→ X and any class γ∈^n(X'× X'), one has:
(γ [Δ_X']) ⩽ C_1 (γ· (π×π)^* (H_X^e · H_Y^l)).
We denote by p_1 and p_2 the projections of X'× X' onto the first and the second factors respectively.
Fix α∈^i(X') and β∈^n-i(X').
Let us apply the previous inequality to γ = p_1^* β· p_2^* α∈^n(X'× X').
We obtain:
(β·α) = (p_1^* β· p_2^* α [Δ_X']) ⩽ C_1( p_1^*β· p_2^* α· (π×π)^* (H_X^e · H_Y^l)).
Since (p_1^*(π^*(H_X^m · q^*H_Y^j) ·β) · p_2^*(π^*(H_X^e-m· q^* H_Y^l-j) ·α)) = 0 when m + j ≠ i, we obtain:
(β·α ) ⩽ C ∑_max( 0 ,i-l) ⩽ j ⩽min(e,i) (π^*(q^* H_Y^l-i+j· H_X^e-j ) ·α) ( π^*(q^* H_Y^i-j· H_X^j) ·β ) .
where C = C_1 (1 + max_j \binom{e}{j}\binom{l}{i-j}).
Hence by the projection formula, we have proved the required inequality:
( β·α) ⩽ C ∑_max( 0 ,i-l) ⩽ j ⩽min(e,i) U_j(π_* ψ_X' (α) ) × ( β·π^* (q^* H_Y^i-j· H_X^j))) .
Proof of Lemma <ref>: (see <cit.>)
Since X is homogeneous, it is smooth.
Let G be the automorphism group of X × X, we denote by · the (transitive) action of G on X× X.
By generic flatness (see <cit.>), there exists a non empty open subset V ⊂ X × X such that the restriction of π×π to U := (π×π)^-1(V) is flat over V.
Recall that two subvarieties F ⊂ X× X and W ⊂ X× X intersect properly in X × X if dim(F ∩ W) = dim F + dim W - 2n.
Since G acts transitively on X × X, there exists by <cit.> a Zariski dense open subset O ⊂ G such that for any point g ∈ O,
the cycle g · [Δ_X] intersects properly every component of X× X ∖ V.
In particular, there exists a one parameter subgroup τ : 𝔾_m → G such that τ(1) = id ∈ G and such that τ maps the generic point of 𝔾_m to a point in O.
Let S be the closure in X'× X'×ℙ^1 of the set { (x',t) ∈ U ×𝔾_m | (π×π)(x') ∈τ(t) ·Δ_X }.
Let p: X'× X' ×ℙ^1 → X'× X' be the projection onto X'× X' and let f : S →ℙ^1 be the morphism induced by the projection of X'× X' ×ℙ^1 onto ℙ^1.
As in <cit.>, we denote by S_t := p_*[f^-1(t)] ∈ Z_n(X'× X') for any t ∈𝔾_m.
By construction the cycle S_1 ∈ Z_n(X'× X') is effective and its support contains the diagonal Δ_X' in X'× X', hence:
[Δ_X'] ⩽ S_1 ∈_n(X'× X')_ℝ.
Let t ∈𝔾_m be such that τ(t) ∈ O.
Since S_1 = S_t ∈ A_n(X'× X') for any t ∈ℙ^1, we have thus:
[Δ_X'] ⩽ S_t ∈_n(X'× X')_ℝ.
Since the cycle τ(t) · [Δ_X] intersects properly every component of X× X∖ V and since the restriction of π×π to U = (π×π)^-1(V) is flat over V, <cit.> asserts that the pullback (π×π)^* τ(t) · [Δ_X] is rationally equivalent to the cycle [(π×π)_|U^-1 (τ(t) ·Δ_X)]. We have thus:
S_t = [ (π×π)_|U^-1 (τ(t) ·Δ_X) ] = (π×π)^* [Δ_X] ∈ A_n(X'× X').
Hence:
[Δ_X'] ⩽ (π×π)^* [Δ_X] ∈_n(X'× X')_ℝ.
Proof of Lemma <ref>.
Observe that one has the following commutative diagram:
X' [d]^π
Γ_f [ld]_π_1[rd]^π_2
X_1 @–>[rr]^f[d]^q_1 X_2[d]^q_2
Y_1 @–>[rr]^g Y_2
Γ_g [ul]^π_1'[ur]_π_2'
Fix a class β∈^e+l-j(X').
By linearity and by Proposition <ref>, we can suppose that the class β is induced by a product of nef divisors D_1 ·…· D_e_1+e+l-j, where the D_m are nef divisors on X_1' and p: X_1' → X' is a flat morphism of relative dimension e_1.
The intersection (β·π^* π_2^* q_2^* H_Y_2^j) is thus given by the formula:
(β·π^* π_2^* q_2^* H_Y_2^j )= (D_1 ·…· D_e_1 + e + l-j· p^* π^* π_2^* q_2^* H_Y_2^j).
Take A an ample Cartier divisor on X_1' and set α_ϵ = (D_1 + ϵ A) ·…· (D_e_1+e + ϵ A) ∈^e_1+e(X_1')_ℝ for any ϵ>0.
Since the class α_ϵ is a complete intersection and since the morphisms q_i are surjective, there exists a cycle V_ϵ∈ Z_l(X'_1)_ℝ such that ψ_X'_1 (α_ϵ) = { V_ϵ}∈_l(X'_1)_ℝ and such that the restrictions of the morphisms π_1 ∘π∘ p and π_2 ∘π∘ p to the support of V_ϵ are surjective and generically finite onto Y_1 and Y_2 respectively.
We apply Theorem <ref> by restriction to V_ϵ to the class (p^*π^* π_2^* q_2^* H_Y_2^j)_|V_ϵ and to (p^*π^* π_1^* q_1^* H_Y_1)_|V_ϵ; we get:
p^* π^* π_2^* q_2^* H_Y_2^j ·α_ϵ⩽ C ((p^*π^*(π_2^* q_2^* H_Y_2^j ·π_1^* q_1^* H_Y_1^l-j)) {V_ϵ})/((p^* π^*π_1^* q_1^* H_Y_1^l) {V_ϵ}) × p^*π^* π_1^* q_1^* H_Y_1^j ·α_ϵ∈^j+e_1+e(X'_1)_ℝ.
By the projection formula applied to the morphism π∘ p, we have that
((p^*π^*(π_2^* q_2^* H_Y_2^j ·π_1^* q_1^* H_Y_1^l-j)) {V_ϵ})/((p^*π^*π_1^* q_1^* H_Y_1^l) {V_ϵ}) = _j(g)/(H_Y_1^l),
hence:
p^*π^* π_2^* q_2^* H_Y_2^j ·α_ϵ⩽ C _j(g)/(H_Y_1^l) × p^*π^* π_1^* q_1^* H_Y_1^j ·α_ϵ∈^j+e_1+e(X'_1)_ℝ.
We intersect with the class (D_e_1+e+1·…· D_e_1+e+l-j) ∈^l-j(X_1')_ℝ and take the limit as ϵ tends to zero. We obtain:
(β·π^* π_2^* q_2^* H_Y_2^j) = (D_1 ·…· D_e_1+e+l-j· p^* π^* π_2^* q_2^* H_Y_2^j) ⩽ C _j(g)/(H_Y_1^l) × (β·π^* π_1^* q_1^* H_Y_1^j),
as required.
§.§ Submultiplicativity of mixed degrees
Let X_1/_q_1 Y_1 fg X_2/_q_2Y_2 be rational maps where e = X_i - Y_i and l = Y_i for i=1,2. We fix some ample divisors H_X_i and H_Y_i on each variety respectively. We define for any integer 0 ⩽ i ⩽ n:
a_i,j(f) := ((q_1^* H_Y_1^l-j· H_X_1^e+j-i) f^∙,i(H_X_2^i)) if max(0, i-e) ⩽ j ⩽ l, and a_i,j(f) := 0 otherwise.
For j = 0, it is the i-th relative degree a_i,0 (f) = _i(f) and
when j= l, it corresponds to the i-th degree of f, a_i,l(f) = _i(f).
Let q_1: X_1 → Y_1, q_2 : X_2 → Y_2, q_3: X_3 → Y_3 be three surjective morphisms such that X_i = e+ l and Y_i = l for all i ∈{ 1,2,3}.
Then there exists a constant C>0 such that for any rational maps X_1/_q_1 Y_1 f_1g_1 X_2/_q_2Y_2, X_2/_q_2 Y_2 f_2g_2 X_3/_q_3Y_3 and for all integers 0 ⩽ j_0 ⩽ l:
a_i,j_0(f_2 ∘ f_1) ⩽ C ∑_max(0,i-l ) ⩽ j ⩽min (e , i)_i-j(g_1) a_i, i-j(f_2) a_j, j + j_0 -i(f_1).
Since we are in the same situation as Theorem <ref>, we can consider the diagram (<ref>) and we keep the same notations.
We denote by n=e+l the dimension of X_i.
Let us denote by d the topological degree of the generically finite morphism v.
We apply Corollary <ref> to the pliant class α := (1/d) v^* π_4^* H_X_3^i ∈^i(Γ), to the class β := u^* π_1^* (H_X_1^e-i+j_0· q_1^* H_Y_1^l-j_0) ∈^n-i(Γ) and to the morphism π = π_3 ∘ v.
There exists a constant C_1 >0 which depends only on the choice of the divisors H_X_2 and H_Y_2 such that:
a_i,j_0(f_2∘ f_1) ⩽ C_1 ∑_max(0,i-l ) ⩽ j ⩽min (e , i) U_j(π_* ψ_Γ(α)) ( β·π^* (H_X_2^j · q_2^* H_Y_2^i-j)),
where U_j( γ) = (H_X_2 ^e-j· q_2^*H_Y_2^l-i+jγ) for any class γ∈_n-i(X_2)_ℝ.
We observe that U_j(π_* ψ_Γ(α))= a_i,i-j(f_2).
We have thus:
a_i,j_0(f_2∘ f_1) ⩽ C_1 ∑_max(0,i-l ) ⩽ j ⩽min (e , i) a_i,i-j(f_2) (u^* (π_1^* (H_X_1^e-i+j_0· q_1^* H_Y_1^l-j_0)·π_2^*(H_X_2^j · q_2^* H_Y_2^i-j))).
Applying Lemma <ref> to the class u^* π_2^* q_2^* H_Y_2^i-j∈^i-j(Γ) and to β' = β· u^* π_2^* H_X_2^j ∈^n-i+j(Γ), there exists a constant C_2 >0 such that :
( β' · u^* π_2^* q_2^* H_Y_2^i-j) ⩽ C_2 _i-j(g_1) ( u^* (π_1^* (H_X_1^e-i+j_0· q_1^*H_Y_1^l-j_0 + i-j) ·π_2^* H_X_2^j)).
Since the map u : Γ→Γ_f_1 is birational, we have that:
(u^* (π_1^* (H_X_1^e-i+j_0· q_1^* H_Y_1^l-j_0)·π_2^*(H_X_2^j · q_2^* H_Y_2^i-j))) ⩽ C_2 _i-j(g_1) a_j, j_0 +j -i(f_1).
Finally, (<ref>) and (<ref>) imply:
a_i,j_0(f_2∘ f_1) ⩽ C ∑_max(0,i-l ) ⩽ j ⩽min (e , i) a_i,i-j(f_2) a_j,j_0 + j-i(f_1) _i-j(g_1),
where C = C_2 C_1>0 is a constant which is independent of f_1 and f_2 as required.
§.§ Proof of Theorem <ref>
Recall that we want to prove the following formula:
λ_i(f) = max_j⩽ i (λ_j(f, X/Y) λ_i-j(g)).
By definition of the relative degrees, we are reduced to prove the theorem when q: X→ Y is a proper surjective morphism.
Recall that X = n and Y = l such that q: X → Y has relative dimension e= n-l.
Let us consider the following commutative diagram:
Γ_f [rd]^π_2[ld]_π_1@/^1pc/[ddd]^ϖ
X @–>[rr]^f[d]^q X [d]^q
Y@–>[rr]^g Y
Γ_g [ru]_π_2'[lu]^π_1'
where f: X X, g : Y Y are dominant rational maps, Γ_f, Γ_g are the normalization of the graph of f and g respectively, π_1, π_2, π_1', π_2' are the projections from Γ_f and Γ_g onto the first and second factor respectively and ϖ: Γ_f →Γ_g is the restriction of q × q to Γ_f.
The following lemma proves that max_j ⩽ i ( λ_j(f,X/Y) λ_i-j(g)) ⩽λ_i(f).
For any integer max(0, i-l) ⩽ j ⩽min( i , e), there exists a constant C>0 such that for any rational map X/_q Y fg X/_q Y, we have _i-j(g ) _j(f) ⩽ C _i(f).
Granting the above lemma, and applying it to the iterates f^p and g^p before taking p-th roots, we obtain the lower bound on λ_i(f):
λ_i(f) ⩾λ_j(f,X/Y) λ_i-j(g).
It suffices to consider the product (π_1^* (H_X^e-j· q^* H_Y^l-i+j) ·π_2^* (H_X^j · q^* H_Y^i-j)). Since π_i ∘ q = ϖ∘π_i' for i ∈{ 1,2}, we obtain:
(π_1^* (H_X^e-j· q^* H_Y^l-i+j) ·π_2^* (H_X^j · q^* H_Y^i-j)) =( ϖ^* (π_1'^* H_Y^l-i+j·π_2'^* H_Y^i-j) ·π_1^* H_X^e-j·π_2^* H_X^j ).
Moreover, one has that π_1'^* H_Y^l-i+j·π_2'^* H_Y^i-j = (π_1'^* H_Y^l-i+j·π_2'^* H_Y^i-j) [p_0] = _i-j(g) [p_0] in _0(Γ_g)_ℝ = ℝ [p_0], where p_0 is a general point of Γ_g.
We can hence apply Proposition <ref> to the morphism ϖ: Γ_f →Γ_g and obtain:
(π_1^* (H_X^e-j· q^* H_Y^l-i+j) ·π_2^* (H_X^j · q^* H_Y^i-j)) = _i-j(g) (π_1^* H_X^e-j·π_2^* H_X^j [(Γ_f)_p_0]).
Since π_1' is a birational morphism, a general fiber of ϖ is equal to a general fiber of π_1' ∘ϖ.
In other words, we have that _Γ_f/Γ_g = _Γ_f/Y and since π_1^* H_X^e-j·π_2^* H_X^j [(Γ_f)_p_0] = _Γ_f/Γ_g(π_1^* H_X^e-j·π_2^* H_X^j), we obtain:
(π_1^* (H_X^e-j· q^* H_Y^l-i+j) ·π_2^*( H_X^j · q^* H_Y^i-j)) = _i-j(g) ×_j(f).
As H_X is ample, we apply Theorem <ref> to the classes π_2^*q^* H_Y and π_2^*H_X:
π_2^* q^*H_Y^i-j⩽ (n-i+j+1)^i-j (π_2^* q^* H_Y^i-j·π_2^* H_X^n-i+j)/(π_2^* H_X^n) ×π_2^*H_X^i-j = C_1 π_2^* H_X^i-j∈^i-j(Γ_f)_ℝ,
where C_1 = (n-i+j+1)^i-j (q^* H_Y^i-j· H_X^n-i+j)/(H_X^n) depends only on n, i and the choice of big nef Cartier divisors.
Intersecting with π_1^* (H_X^e-j· q^* H_Y^l-i+j) ·π_2^* H_X^j, one obtains:
_i-j(g) ·_j(f) ⩽ C_1 (π_2^* H_X^i·π_1^* (H_X^e-j· q^* H_Y^l-i+j)).
By the same argument, there exists a constant C_2>0 which depends only on H_Y, H_X and i such that:
π_1^* q^* H_Y^l-i+j⩽ C_2 π_1^* H_X^l-i+j.
Hence, we obtain:
_i-j(g) _j(f) ⩽ C _i(f),
where C = C_1 C_2.
Let us prove the converse inequality.
We fix an integer 0 ⩽ i ⩽ n.
Let us apply Theorem <ref> to f_1=f, f_2=f^p, g_1 = g and g_2=g^p, we can rewrite the inequality as:
a_i,j_0 (f^p+1) ⩽ C ∑_max(0 , i-e) ⩽ j ⩽min(i,l)_j(g) a_i-j, j_0 -j(f) a_i, j(f^p).
Let us denote by U_i(f) the column vector given by:
U_i(f) = (a_i,j(f))_0 ⩽ j ⩽ l = (a_i,0(f), …, a_i,l(f))^T.
Let us also denote by M_i(f) the (l+1)× (l+1) lower-triangular matrix given by:
M_i(f) := ( _j(g) a_i-j,m-j(f) ×χ_[max(0,i-e), min(i,l)](j) )_0 ⩽ m ⩽ l, 0 ⩽ j ⩽ l,
where χ_A denotes the characteristic function of the set A.
Therefore, (<ref>) can be rewritten as:
U_i(f^p+1) ⩽ C M_i(f) · U_i(f^p),
where · denotes the linear action on ℤ^l+1.
A simple induction proves:
U_i(f^p) ⩽ C^p (M_i(f))^p-1· U_i(f)
Since the (l+1)-th entry of the vector U_i(f^p) corresponds to _i(f^p), we deduce that:
_i(f^p)^1/p⩽ C ⟨ e_l , ( M_i(f) )^p· U_i(f)⟩^1/p,
where (e_0, …, e_l) denotes the canonical basis of ℤ^l+1.
In particular, _i(f^p)^1/p is controlled up to a constant by the eigenvalues of the matrix M_i(f) which are _j(g) _i-j(f) for max(0,i-e)⩽ j⩽min(i,l) since M_i(f) is lower-triangular.
Applying (<ref>) to f^r, we get:
_i(f^pr)^1/(pr)⩽ C^1/r || U_i(f^r)||^1/prmax_max(0,i-e) ⩽ j ⩽min(i,l) (_j(g^r) _i-j (f^r))^1/r.
We conclude by taking the lim sup as r → +∞, p→ + ∞:
λ_i(f) ⩽max_max(0, i-l)⩽ j ⩽min(i,e)λ_i-j(g) λ_j(f,X/Y).
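The eigenvalue control used above lends itself to a quick numerical sanity check. The sketch below (Python/NumPy; the matrix entries are arbitrary placeholders rather than degrees of an actual map) illustrates that ⟨ e_l , M^p· U⟩^1/p converges to the largest diagonal entry of a positive lower-triangular matrix, which is the mechanism behind the upper bound on λ_i(f):

```python
import numpy as np

# Placeholder lower-triangular matrix standing in for M_i(f); its diagonal
# entries play the role of the products deg_j(g) * deg_{i-j}(f).
rng = np.random.default_rng(0)
M = np.tril(rng.random((4, 4))) + np.diag([3.0, 5.0, 2.0, 7.0])
U = rng.random(4)                    # stands in for the vector U_i(f)
e_l = np.eye(4)[-1]                  # last basis vector, as in the proof

v, p = U.copy(), 40
for _ in range(p):
    v = M @ v                        # iterate U -> M U, cf. the recursion above

print(np.dot(e_l, v) ** (1.0 / p))          # approaches the spectral radius...
print(np.abs(np.linalg.eigvals(M)).max())   # ...i.e. the largest diagonal entry
```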
Note that the previous theorem gives information only on the dynamical degrees of f. Lemma <ref> provides a lower bound on the degree of f^p. However, one cannot find an upper bound for _i(f^p) which would depend only on the relative degrees and the degree on the base. If X = E × E is a product of two elliptic curves and if f : (z, w) ∈ E × E → (z , z + w) is an automorphism of X, then the degree growth of f^p is equivalent to p^2 whereas the degrees on the base and on any fiber are trivial.
§ KÄHLER CASE
We prove the submultiplicativity of the k-th degrees in the case where (X,ω) is a complex compact Kähler manifold. For any closed smooth (p,q)-form α on X, we denote by {α} its class in the Dolbeault cohomology H^p,q (X)_ℝ.
Let (X,ω) be a compact Kähler manifold. A class α∈ H^1,1 (X)_ℝ is nef if for any ϵ >0, the class α + ϵ{ω} is represented by a Kähler metric.
A class α of degree (i,i) is pseudo-effective if it can be represented by a closed positive current T. Moreover, one says that α is big if there exists a constant δ>0 such that T -δω is a closed positive current and we write T ⩾δω^i.
(cf <cit.>, <cit.>) Let (X,ω) be a compact Kähler manifold of dimension n. Let i be an integer and α,β be two nef classes in H^1,1(X) such that α^i ∈ H^i,i(X) is big and such that ∫_X α^n - \binom{n}{i}∫_X α^n-i∧β^i >0. Then the class α^i - β^i is big.
Recall that the degree of a meromorphic selfmap f : X X when (X,ω) is given by:
_i(f) := ∫_Γ_fπ_1^* ω^n-i∧π_2^* ω^i,
where Γ_f is the desingularization of the graph of f and π_j are the projections from Γ_f onto the first and the second factor respectively.
When X is a projective variety and ω represents the class of a hyperplane section H_X, then the intersection of the form coincides with the cup-product in cohomology, hence _i(f) = _i,H_X(f).
Let (X_1,ω_X_1), (X_2,ω_X_2) and (X_3,ω_X_3) be some compact Kähler manifolds of dimension n. Then there exists a constant C> 0 which depends only on the choice of the Kähler classes ω_X_j such that for any dominant meromorphic maps f_1: X_1 X_2 and f_2 : X_2 X_3, one has:
_i(f_2 ∘ f_1) ⩽ C _i (f_1) _i(f_2).
Moreover, the constant C may be chosen to be equal to \binom{n}{i} / (∫_X_2ω_X_2^n).
The previous theorem gives that for any big nef class β^i ∈ H^i,i(X), for any nef class α∈ H^1,1(X), one has:
α^i ⩽\binom{n}{i}∫_X α^i ∧β^n-i∫_X β^n ×β^i.
Then, the proof is formally the same as Theorem <ref>. Indeed, one only needs to consider the diagram (<ref>) where Y_1 = Y_2 = Y_3 is reduced to a point and where Γ_f_1, Γ_f_2, Γ are the desingularizations of the graphs of f_1, f_2 and π_3^-1∘ f_1 ∘π_1 respectively.
We apply (<ref>) to α = v^* π_4^* ω_X_3 and β = v^* π_3^* ω_X_2 to obtain:
v^* π_4^* ω_X_3^i ⩽\binom{n}{i}_i(f_2)∫_X_2ω_X_2^n× v^* π_3^* ω_X_2^i.
By intersecting the previous inequality with the class u^* π_1^* ω_X_1^n-i, we finally get:
_i(f_2 ∘ f_1) ⩽\binom{n}{i}_i(f_2) _i(f_1)∫_X_2ω_X_2^n.
§ COMPARISON WITH FULTON'S APPROACH
In <cit.>, a cycle z ∈ Z_i(X) on a variety X is defined to be numerically trivial if (c z) = 0 for any product c =c_i_1(E_1) ·…· c_i_p(E_p) ∈ A^i(X) of Chern classes c_i_j(E_j), where E_j is a vector bundle on X and i_1 + … + i_p = i. This appendix is devoted to the proof of the following result:
Let X be a normal projective variety of dimension n. For any z ∈ Z_i(X), the following conditions are equivalent:
(i) For any product of Chern classes c = c_i_1(E_1) ·…· c_i_p(E_p) ∈ A^i(X)_ℝ where E_j are vector bundles on X and i_1 + … + i_p = i, we have (c z) = 0.
(ii) For any integer e, any flat morphism p_1 : X_1 → X of relative dimension e where X_1 is a projective scheme and any Cartier divisors D_1 , … , D_e+i on X_1, we have (D_1 ·…· D_e+i p_1^*z) =0.
(iii) For any integer e, any flat morphism p_1 : X_1 → X of relative dimension e between normal projective varieties and any Cartier divisors D_1 , … , D_e+i on X_1, we have (D_1 ·…· D_e+i p_1^*z) =0.
The implication (ii) ⇒ (i) follows immediately from the definition of Chern classes.
The implication (ii) ⇒ (iii) is also straightforward.
For the converse implications (i) ⇒ (ii) and (i) ⇒ (iii), we rely on the following proposition.
Let q : X → Y be a flat morphism of relative dimension e where X is a projective scheme and Y is a normal projective variety. For any ample Cartier divisors D_1, … , D_e+i on X, there exist vector bundles E_j and a homogeneous polynomial c=P(c_i_1(E_1), … , c_i_p(E_p)) of degree i with respect to the weight (i_1, … , i_p), with rational coefficients, such that for any cycle z ∈ Z_i(X), (c · z ) = (D_1 ·…· D_e+i· q^* z).
We take some ample Cartier divisors D_1 , … , D_e+i on X. We denote by ℒ_i the line bundle 𝒪_X(D_i).
By Grauert's Theorem (cf <cit.>), the sheaves R^i q_*(ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i) are locally free.
By <cit.>, we have that R^i q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i) = 0 for i>0 and m_i large enough since the line bundle ℒ_i are ample.
So the sheaf q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i) is locally free and we have in K_0(Y):
q_* [ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i] = ∑ (-1)^i [R^i q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)] =[ q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)].
For any j ⩽ i:
* The function (m_1, …, m_e+i) →_j ( q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) ∈^j(Y)_ℝ is a polynomial of degree e+j with coefficients in ^j(Y).
* For any cycle z ∈ Z_j(Y), the coefficient in m_1 ·…· m_e+i in (_j(q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) z) is ((D_1 ·…· D_e+i) q^* z).
Let us set ℱ = ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i. We prove the result by induction on 0 ⩽ j ⩽ i.
For j = 0, choosing a point y ∈ Y(), the number _0(q_* (ℱ)) is equal to h^0( X_y , ℱ_|X_y ).
By asymptotic Riemann-Roch, for m_1, …, m_e+i large enough, it is a polynomial of degree dim X_y = e. Moreover, Snapper's theorem (see <cit.>) states that the coefficient in m_1 ·…· m_e+i is the number (D_1 ·…· D_e+i [X_y]).
We suppose by induction that _m( q_* (ℱ)) is a polynomial of degree e+m for any m ⩽ j, where j ⩽ i-1. For any subvariety V of dimension j+1 in Y, we denote by W its scheme-theoretic preimage by q.
For any scheme V, let us denote by τ_V the morphisms:
τ_V : K_0(V) ⊗ℚ→ A_∙ (V) ⊗ℚ.
We refer to <cit.> for the construction of this morphism and its properties.
We apply Grothendieck-Riemann-Roch's theorem for singular varieties (see <cit.>) and using (<ref>), we get in A_∙(Y)_ℚ:
( q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i) ) τ_V( 𝒪_V ) = q_* ( (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i) τ_W(𝒪_W)).
The term in A_0(Y)_ℚ in the left handside of the previous equation is equal to:
_j+1 ( q_* (ℱ)) [V] + ∑_i ⩽ j_i ( q_*(ℱ)) τ_V,i (𝒪_V),
where τ_V,i (𝒪_V) is the term in A_i(Y) of τ_V(𝒪_V).
By the induction hypothesis, every _m(q_* ℱ) with m ⩽ j is a polynomial of degree e+m, and the right hand side of equation (<ref>) is a polynomial of degree e+j + 1, so _j+1(q_*( ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) is also a polynomial of degree e + j + 1. Now we identify the coefficients in m_1 ·…· m_e+i of the term in _0(Y) in equation (<ref>).
It follows from <cit.> that τ_W(𝒪_W) = [W] + R_W where R_W is a linear combination of cycles of dimension < e+i. Therefore, the coefficient in m_1 ·…· m_e+i of the right hand side of equation (<ref>) in _0(Y) is ((D_1 ·…· D_e+i) [W]) if j+1 = i or 0 otherwise.
We have proved that the coefficient of _j+1(q_* ( ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) [V] is ((D_1 ·…· D_e+i) [W]) if dim V = i, and 0 otherwise. Extending by linearity, one gets the desired result.
We have that _i( q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) is by definition a polynomial in Chern classes of vector bundles on Y. Using the previous lemma, the coefficient U(D_1, … , D_e+i) in m_1 ·…· m_e+i of _i( q_* (ℒ_1^m_1⊗…⊗ℒ_e+i^m_e+i)) is equal to P(c_i_1(E_1) , … , c_i_p(E_p)) where P is a homogeneous polynomial with rational coefficients of degree i with respect to the weight (i_1, …, i_p) and E_i are vector bundles on Y. We have proven that for any cycle z ∈ Z_i(Y):
(P(c_i_1(E_1) , … ,c_i_p(E_p) ) z ) = ((D_1 ·…· D_e+i ) q^*z).
As any Cartier divisor can be written as a difference of ample Cartier divisors, the proposition provides a proof for the implication (i) ⇒ (ii) of Theorem <ref>.
In codimension 1, the intersection product (D_1 ·…· D_e+1 q^*z) is represented by Deligne's product I_X(𝒪_X(D_1), … , 𝒪_X(D_e+1)) ∈^1(X)_ℝ (see <cit.> for a reference). Indeed, one has by <cit.> that for any cycle z ∈_1(X):
c_1( I_X( 𝒪_X(D_1), … , 𝒪_X(D_e+1)) ) z = D_1 ·…· D_e+1 q^*z.
This gives an answer to the question of numerical pullback formulated in <cit.>.
Let q: X → Y be a flat morphism of relative dimension e between normal projective varieties. Then the morphism q^* : A_∙(Y)_ℚ→ A_e+ ∙(X)_ℚ induces a morphism of abelian groups q^* : _∙(Y)_ℚ→_e+∙(X)_ℚ.
By duality, the morphism q_* : A^∙(X)_ℚ→ A^∙ - e(Y)_ℚ induces a morphism of abelian groups q_*: ^∙(X)_ℚ→^∙ - e(Y)_ℚ.
amsalpha
Nguyen-Bac Dang
CMLS, École polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex,
France
[email protected]
|
http://arxiv.org/abs/1701.07587v1 | 20170126063747 | Robustness of reference-frame-independent quantum key distribution against the relative motion of the reference frames | [
"Tanumoy Pramanik",
"Byung Kwon Park",
"Young-Wook Cho",
"Sang-Wook Han",
"Sang-Yun Lee",
"Yong-Su Kim",
"Sung Moon"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Department of Nano-Materials Science and Engineering, Korea University of Science and Technology, Daejeon, 34113, Republic of Korea
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
[email protected]
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Department of Nano-Materials Science and Engineering, Korea University of Science and Technology, Daejeon, 34113, Republic of Korea
Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul, 02792, Republic of Korea
Reference-Frame-Independent quantum key distribution (RFI-QKD) is known to be robust against slowly varying reference frames. However, other QKD protocols such as BB84 can also provide secret keys if the speed of the relative motion of the reference frames is slow enough. While there have been a few studies to quantify the speed of the relative motion of the reference frames in RFI-QKD, it is not yet clear if RFI-QKD provides better performance than other QKD protocols under this condition. Here, we analyze and compare the security of RFI-QKD and the BB84 protocol in the presence of the relative motion of the reference frames. In order to compare their security in real-world implementations, we also consider the QKD protocols with the decoy state method. Our analysis shows that RFI-QKD provides more robustness than the BB84 protocol against the relative motion of the reference frames.
42.25.-p, 03.67.Ud, 03.67.-a, 42.50.Ex
Robustness of reference-frame-independent quantum key distribution
against the relative motion of the reference frames
Sung Moon
December 30, 2023
========================================================================================================================
§ INTRODUCTION
Quantum key distribution (QKD) promises enhanced communication security based on the laws of quantum physics <cit.>. Since the first QKD protocol was introduced in 1984, there has been a lot of theoretical and experimental effort to improve the security and the practicality of QKD <cit.>. These days, QKD research is no longer limited to laboratories <cit.> but has also moved into industry <cit.>.
In general, QKD requires a shared common reference frame between the two communicating parties, Alice and Bob. For example, interferometric stability or the alignment of the polarization axes is required for fiber-based QKD using phase encoding and polarization-encoding free-space QKD, respectively. However, it can be difficult and costly to maintain the shared reference frame in real-world implementations. For instance, it is highly impractical to establish common polarization axes in earth-to-satellite QKD due to the revolution and rotation of the satellite with respect to the ground station <cit.>.
A recently proposed reference-frame-independent QKD (RFI-QKD) provides an efficient way to bypass this shared reference frame problem <cit.>. In RFI-QKD, Alice and Bob share the secret keys via a decoherence-free basis while checking the communication security with the other bases. Both free-space <cit.> and telecom fiber <cit.> based RFI-QKD have been successfully implemented. It is remarkable that the reference-frame-independent concept can also be applied to measurement-device-independent QKD <cit.>.
Despite its name, however, the security of the original theory of RFI-QKD is guaranteed only when the relative motion of the reference frames is slow compared to the system repetition rate <cit.>. If the reference frames of Alice and Bob deviate by a fixed angle, one can easily compensate for the deviation and implement an ordinary QKD protocol. Therefore, the effectiveness of RFI-QKD over other QKD protocols becomes clear when there is rapid relative motion of the reference frames during the QKD communication. There have been a few studies to quantify the speed of the relative motion of the reference frames in RFI-QKD <cit.>. Without a performance comparison with other QKD protocols, however, these studies do not show the effectiveness of RFI-QKD over other QKD protocols.
In this paper, we analyze the security of RFI-QKD and the BB84 protocol in the presence of the relative motion of the reference frames of Alice and Bob. In order to compare the performances in real-world implementations, we also consider the decoy state method. By comparing the security analyses, we find that RFI-QKD is more robust than the BB84 protocol against the relative motion of the reference frames.
§ QKD WITH A FIXED REFERENCE FRAME DEVIATION
In this section, we review the security proofs of RFI-QKD and the BB84 protocol with a fixed reference frame deviation. A shared reference frame is required both for fiber-based QKD with phase encoding and for free-space QKD with polarization encoding; it corresponds to the interferometric stability and the polarization axes, respectively. In the following, we consider free-space QKD with polarization encoding for simplicity. However, we note that our analysis is also applicable to fiber-based QKD with phase encoding.
§.§ RFI-QKD protocol
Figure <ref> shows the polarization axes of Alice and Bob with a deviation angle θ. Since Alice and Bob should face each other in order to transmit the optical pulses, their Z-axes, which corresponds to left- and right-circular polarization states, are always well aligned. On the other hand, the relation of their X and Y-axes, which correspond to linear polarization states such as horizontal, vertical, +45, and -45 polarization states, depend on θ. The relations of the polarization axes are
X_B = X_Acosθ+Y_Asinθ,
Y_B = Y_Acosθ-X_Asinθ,
Z_B = Z_A,
where the subscripts A and B denote Alice and Bob, respectively.
In RFI-QKD, Alice and Bob share the secret keys via the Z-axis, as it is unaffected by the deviation of the polarization axes. In this basis, the quantum bit error rate (QBER) becomes
Q_ZZ=1-⟨ Z_AZ_B ⟩/2.
Here, the subscripts ij where i,j∈{X,Y,Z} denote that Alice sends a state in i basis while Bob measures it in j basis. The probability distributions of the measurement outcomes in X and Y-axes are used to estimate the knowledge of an eavesdropper, Eve. Her knowledge can be estimated by a quantity C which is defined as
C= ⟨ X_A X_B⟩^2 + ⟨ X_A Y_B⟩^2 + ⟨ Y_A X_B⟩^2 + ⟨ Y_A Y_B⟩^2.
Note that the quantity C is independent of the deviation angle θ.
The knowledge of Eve is bounded by
I_E[Q_ZZ,C] = (1-Q_ZZ) H[ 1+u/2] + Q_ZZ H[1+v/2],
where
u = min[ 1/1-Q_ZZ√(C/2),1 ],
v = 1/Q_ZZ√(C/2 - (1-Q_ZZ)^2 u^2),
and H[x]=-xlog_2x - (1-x)log_2(1-x) is the Shannon entropy of x.
The secret key rate in the RFI-QKD protocol is given by <cit.>
r_RFI = 1-H[Q_ZZ] - I_E[Q_ZZ,C].
It is notable that Eq. (<ref>) is independent of a fixed deviation angle θ <cit.>. The security proof shows that r_RFI≥0 for Q_ZZ≲15.9%.
In practice, the effective quantum state that Bob receives from Alice can have errors due to transmission noise and experimental imperfections. Assuming the noise and the imperfections are polarization independent, we can model Bob's received quantum state ρ_B as
ρ_B= pρ_A + 1-p/2I,
where ρ_A, 1-p and I are the state prepared by Alice, the strength of noise, and a two dimensional identity matrix, respectively. Note that ⟨ℱ_A𝒢_B⟩ can be written as a state dependent form of
⟨ℱ_A𝒢_B⟩=[(ℱ_A⊗𝒢_B)·ρ_AB]
where ℱ,𝒢∈{X,Y,Z}, and ρ_AB=ρ_A⊗ρ_B. Therefore, the QBER Q_ZZ and the quantity C become
Q_ZZ = 1-p/2,
C = 2 p^2= 2 (1-2 Q_ZZ)^2.
In this case, r_RFI≥0 for Q_ZZ≲ 12.6 %.
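The two thresholds quoted above are straightforward to reproduce numerically. The following minimal sketch (Python; the function and variable names are ours) evaluates the key-rate bound with Eve's information as defined above, using the depolarizing-channel relation C = 2(1-2Q_ZZ)^2:

```python
import numpy as np

def H2(x):
    """Binary Shannon entropy H[x]."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def r_RFI(Qzz, C):
    """Lower bound on the RFI-QKD secret key rate."""
    u = min(np.sqrt(C/2)/(1 - Qzz), 1.0)
    v = np.sqrt(max(C/2 - (1 - Qzz)**2*u**2, 0.0))/Qzz
    IE = (1 - Qzz)*H2((1 + u)/2) + Qzz*H2((1 + v)/2)
    return 1 - H2(Qzz) - IE

for Q in (0.10, 0.12, 0.126, 0.13):
    print(Q, r_RFI(Q, 2*(1 - 2*Q)**2))   # sign change near Q_ZZ ~ 12.6%
```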
§.§ BB84 protocol
In this section, we consider the secret key rate of BB84 with a fixed reference frame deviation. Due to the symmetry, the QBERs of the X and Y-axes are the same, and they are given as
Q_XX = 1-⟨ X_AX_B⟩/2,
= 1-pcosθ/2=Q_YY.
If Alice and Bob utilize X and Y axes, the overall QBER Q_XY is
Q_XY = 1/2(Q_XX+Q_YY)
= 1/2( 1- p cosθ).
Since the Z-axis is rotation invariant, one can obtain a lower QBER by using the Z-axis instead of the Y-axis. In this case, the overall QBER Q_XZ is
Q_XZ = 1/2(Q_XX+Q_ZZ)
= 1/2( 1- pcos^2θ/2).
The secret key rate of BB84 with {X,Y} ({X,Z}) bases is given by <cit.>
r^XZ (XY)_BB84 = 1- 2 H[Q_XZ (XY)].
Apparently, Eq. (<ref>) depends on the reference frame deviation θ. However, one can easily compensate for the deviation if θ is invariant during the QKD communication. For the BB84 protocol, r^XZ (XY)_BB84≥0 for Q_ZZ≲ 11% when θ=0.
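For comparison, a hedged sketch of the BB84 bound (Python; the noise strength p and deviation θ below are arbitrary example values, not measured ones):

```python
import numpy as np

def H2(x):                                       # binary Shannon entropy
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def r_BB84(Q):
    """Lower bound 1 - 2 H[Q] on the BB84 secret key rate."""
    return 1 - 2*H2(Q)

p, theta = 0.94, np.pi/12                        # example noise and deviation
print(r_BB84(0.5*(1 - p*np.cos(theta))))         # {X,Y} bases: Q_XY above
print(r_BB84(0.5*(1 - p*np.cos(theta/2)**2)))    # {X,Z} bases: Q_XZ above
```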
§ QKD IN THE PRESENCE OF THE RELATIVE MOTION OF REFERENCE FRAMES
In this section, we study the effect of the relative motion of the reference frames of Alice and Bob during the QKD communication. Let us consider the case where θ varies from -δ to δ as depicted in Fig. <ref>. For simplicity, we assume that θ is centered at 0 and uniformly distributed over θ∈[-δ,δ].
The quantities Q_XX, Q_YY, and C are affected by the relative motion of the reference frames. However, Q_ZZ is unchanged since Z_A=Z_B at all times. In order to quantify the effect of the relative motion, we need to calculate the average values of the observed quantities ⟨ℱ_A𝒢_B⟩, which are given by
⟨ℱ_A𝒢_B⟩ = 1/2δ∫_-δ^δ ⟨ℱ_A𝒢_B⟩ dθ
= sin2δ/δ⟨ℱ_A𝒢_A⟩.
With Eq. (<ref>), one can represent the quantity C as a function of Q_ZZ and δ,
C[ Q_ZZ,δ ] = 2 (1-2 Q_ZZ)^2 (sin2δ/2 δ)^2.
Therefore, one can estimate the secret key rate of RFI-QKD in the presence of the relative motion of the reference frames by inserting Eqs. (<ref>) and (<ref>) into Eq. (<ref>).
The average QBERs Q_XX and Q_YY are also expressed as functions of Q_ZZ and δ, and they become
Q_XX[ Q_ZZ,δ ]
= Q_YY[ Q_ZZ,δ ]
= 1/2δ∫_-δ^δ1-⟨ X_AX_B⟩/2 dθ
= 1/2 - (1-2 Q_ZZ) sin2δ/4 δ.
Therefore, the average QBERs Q_XY and Q_XZ become
Q_XY[ Q_ZZ,δ ] = 1/2(Q_XX[ Q_ZZ,δ ] + Q_YY[ Q_ZZ,δ ]),
= 1/2 - (1-2Q_ZZ) sin2δ/4δ.
and
Q_XZ[ Q_ZZ,δ ] = 1/2(Q_XX[ Q_ZZ,δ ] + Q_ZZ[ Q_ZZ,δ ]),
= 1+2Q_ZZ/4 - (1-2Q_ZZ) sin2δ/8δ.
By inserting Eq. (<ref>) or (<ref>) into Eq. (<ref>), one can estimate the secret key rate of BB84 with either {X,Y} or {X,Z} bases in the presence of the relative motion of the reference frames.
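Putting the averaged expressions together, the following sketch (Python; the function names are ours) compares the two bounds for a uniformly distributed deviation:

```python
import numpy as np

def H2(x):                               # binary Shannon entropy
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def s(d):                                # sin(2*delta)/(2*delta), -> 1 as d -> 0
    return 1.0 if d == 0 else np.sin(2*d)/(2*d)

def rates(Qzz, delta):
    C   = 2*(1 - 2*Qzz)**2*s(delta)**2             # averaged C from above
    Qxz = (1 + 2*Qzz)/4 - (1 - 2*Qzz)*s(delta)/4   # averaged Q_XZ from above
    u   = min(np.sqrt(C/2)/(1 - Qzz), 1.0)
    v   = np.sqrt(max(C/2 - (1 - Qzz)**2*u**2, 0.0))/Qzz
    IE  = (1 - Qzz)*H2((1 + u)/2) + Qzz*H2((1 + v)/2)
    return 1 - H2(Qzz) - IE, 1 - 2*H2(Qxz)         # (r_RFI, r_BB84^XZ)

print(rates(0.03, np.pi/7))              # RFI-QKD retains the larger rate
```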
In Fig. <ref>, we compare the lower bounds of the secret key rates for RFI-QKD and BB84 with {X,Y} and {X,Z} bases with respect to Q_ZZ and δ. It clearly shows that RFI-QKD is more robust than the BB84 protocol against the channel depolarization as well as the relative motion of the reference frames. In the case of the BB84 protocol, one can obtain better results by using {X,Z} bases instead of {X,Y} bases.
§ RFI-QKD AND BB84 PROTOCOL USING DECOY STATE METHOD
In this section, we apply the security analysis to a real-world implementation using weak coherent pulses with the decoy state method. In the following, we consider the two-decoy-state method, which is most widely used in real-world implementations <cit.>. According to this method, Alice randomly modulates the intensity of the weak coherent pulses with mean photon numbers per pulse of μ, ν, and 0, where μ>ν. They are usually called signal, decoy, and vacuum pulses, respectively.
Assuming that the bases chosen by Alice and Bob are i and j where i,j∈{X,Y,Z}, the lower bound of the secret key rate is given by <cit.>
r = -Y_μ H[Q_ij|μ] + μ y_1^L exp[-μ]( 1- I_E).
where Y_μ and y_1^L are the gain of the signal pulses with QBER Q_ij|μ and the lower bound of the gain of single-photon pulses, respectively. Here, we assume that the error correction efficiency is unity. The values of Y_μ and Q_ij|μ can be obtained from the experiment; however, y_1^L should be estimated with the decoy and vacuum pulses. The lower bound of the gain of single-photon pulses is given by <cit.>
y_1^L = μ^2 Y_ν e^ν-ν^2 Y_μ e^μ - ( μ^2-ν^2)Y_0/μ(μν -ν^2),
where Y_ν and Y_0 are the gains of the decoy and vacuum pulses, respectively.
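As a minimal sketch (Python; the function name is ours), the bound above translates directly into code:

```python
import numpy as np

def y1_lower(Y_mu, Y_nu, Y_0, mu, nu):
    """Decoy-state lower bound on the single-photon gain (the display above)."""
    return (mu**2*Y_nu*np.exp(nu) - nu**2*Y_mu*np.exp(mu)
            - (mu**2 - nu**2)*Y_0)/(mu*(mu*nu - nu**2))
```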
The QBER for the signal state Q_ij|μ can be calculated by using
Y_μ Q_ij|μ = ∑_n=0^∞ y_n μ^n q_n,ij/n!exp[-μ],
where y_n and q_n,ij are the gain and QBER of the n-photon state, respectively. Here, Y_μ and q_n,ij are given as
Y_μ = ∑_i=0^∞ y_i μ^i/i! exp[-μ],
= 1-exp[-η μ] + p_d
q_n,ij = e_ij η_n + 1/2 p_d/y_n,
where η, p_d, and e_ij denote the overall detection efficiency including the channel transmission, the dark count probability, and the erroneous detection probability, respectively. The detection efficiency of the n-photon state η_n is given by η_n=1-(1-η)^n, where η can be represented in terms of the loss L in dB and Bob's detection efficiency η_B as η=η_B10^-L/10.
The deviation between the reference frames of Alice and Bob contributes to the erroneous detections; hence, e_ij=Q_ij. Note that Q_XY=1/2δ∫_-δ^δ1-⟨ X_A Y_B⟩/2dθ= 1/2, Q_YX=1/2δ∫_-δ^δ1-⟨ Y_A X_B⟩/2dθ=1/2, and Q_ii are given by Eqs. (<ref>), and (<ref>).
According to the decoy-state theory, the upper bound of the QBER generated by single-photon states of the signal pulses is given by <cit.>
q_1,ij|μ^U = Q_μ ij Y_μ - 1/2Y_0exp[-μ]/μ y_1^L exp[-μ].
The lower bound of the QBER due to single-photon states of the signal pulses is given by
q_1,ij|μ^L= 1-(1-Q_μ ij)Y_μ-1/2 Y_0 exp[-μ]/μ y_1^Lexp[μ].
Here, we assume that the QBER of n-photon states for n≥ 2 is q_n,ij=1.
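For completeness, a sketch of the channel model and the single-photon QBER bound (Python; the closed form for the signal QBER follows from summing the Poisson photon-number distribution against y_n q_n,ij = e_ij η_n + p_d/2, under the detection model just stated):

```python
import numpy as np

def signal_stats(m, L_dB, e_ij, eta_B=0.35, p_d=1e-6):
    """Gain and QBER of a pulse with mean photon number m; the Poisson
    sum over n collapses to a closed form for this detection model."""
    eta = eta_B*10**(-L_dB/10)
    Y = 1 - np.exp(-eta*m) + p_d
    Q = (e_ij*(1 - np.exp(-eta*m)) + 0.5*p_d)/Y
    return Y, Q

def q1_upper(Q_mu, Y_mu, Y_0, y1L, mu):
    """Upper bound on the single-photon QBER of the signal pulses."""
    return (Q_mu*Y_mu - 0.5*Y_0*np.exp(-mu))/(mu*y1L*np.exp(-mu))
```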
§.§ RFI-QKD protocol using decoy state
In the RFI-QKD scenario, the lower bound of the secret key rate r_RFI becomes <cit.>
r_RFI = -Y_μ H[Q_ZZ|μ] + μ y_1^L exp[-μ]( 1- I_E).
The upper bound of Eve's information I_E is given by
I_E = (1-q_1,ZZ|μ^U) H[1+u_max/2]
+ q_1,ZZ|μ^U H[1+v/2],
where q_1,ZZ|μ^U is given by Eq. (<ref>), and
u_max = min[1/1-q_1,ZZ|μ^U√(C_1^L/2), 1],
v = 1/q_1,ZZ|μ^U√(C_1^L/2-(1-q_1,ZZ|μ^U)^2 u_max^2).
Here, C_1^L is the optimal lower bound of C for single-photon states, which is given by <cit.>
C_1^L = max[α, 2(α^')^2] + max[β, 2(β^')^2],
where
α = ∑_j=X,Y(1-2max(1/2,q_1 μ Xj^L))^2,
β = ∑_j=X,Y(1-2max(1/2,q_1 μ Yj^L))^2,
α^' = (1.70711-Q_μ XX-Q_μ XY)Y_μ-0.70711 Y_0 exp[-μ] /μ y_1^L
- 0.70711,
β^' = (1.70711-Q_μ YX-Q_μ YY)Y_μ-0.70711 Y_0 exp[-μ] /μ y_1^L
-0.70711.
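The optimal bound transcribes directly into code; a hedged sketch follows (Python; the dict keys and function names are ours, and the grouping of the 0.70711 terms follows the display above):

```python
import numpy as np

def C1_lower(Q, q1L, Y_mu, Y_0, y1L, mu):
    """Sketch of the optimal lower bound C_1^L. Q and q1L are dicts keyed
    by 'XX', 'XY', 'YX', 'YY' holding the signal QBERs Q_{mu ij} and the
    single-photon QBER lower bounds q^L_{1 mu ij}."""
    alpha = sum((1 - 2*max(0.5, q1L[k]))**2 for k in ('XX', 'XY'))
    beta  = sum((1 - 2*max(0.5, q1L[k]))**2 for k in ('YX', 'YY'))
    c = 0.70711                       # = 1/sqrt(2); note 1.70711 = 1 + c
    a = ((1 + c - Q['XX'] - Q['XY'])*Y_mu - c*Y_0*np.exp(-mu))/(mu*y1L) - c
    b = ((1 + c - Q['YX'] - Q['YY'])*Y_mu - c*Y_0*np.exp(-mu))/(mu*y1L) - c
    return max(alpha, 2*a**2) + max(beta, 2*b**2)
```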
§.§ BB84 protocol using decoy state
For the BB84 protocol, the lower bound of the secret key rate with the {X, Z} ({X, Y}) bases is given as <cit.>
r_BB84^XZ (XY) = -Y_μ H[Q_XZ|μ (XY|μ) ]
+ μ y_1^L exp[-μ]( 1- H[q_1,XZ|μ (1,XY|μ)^U]).
Note that q_1,XZ|μ (1,XY|μ)^U is provided by Eq. (<ref>), and Q_XZ|μ and Q_XY|μ can be calculated with the help of Eqs. (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>).
§.§ Result and discussion
Figure <ref> shows the lower bounds of the secret key rates of the RFI-QKD and BB84 protocols with respect to loss. For the simulation, we choose Y_0=p_d=10^-6, μ=0.5, ν=0.05, and η_B=0.35, which are widely accepted values for the earth-to-satellite QKD scenario <cit.>.
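A hedged end-to-end sketch of the loss scan (Python; using the parameters just quoted) shows where the decoy bounds collapse:

```python
import numpy as np

Y0 = pd = 1e-6
mu, nu, eta_B = 0.5, 0.05, 0.35

def gain(m, L_dB):                 # gain of a pulse with mean photon number m
    return 1 - np.exp(-eta_B*10**(-L_dB/10)*m) + pd

def y1L(L_dB):                     # decoy-state single-photon gain bound
    Ymu, Ynu = gain(mu, L_dB), gain(nu, L_dB)
    return (mu**2*Ynu*np.exp(nu) - nu**2*Ymu*np.exp(mu)
            - (mu**2 - nu**2)*Y0)/(mu*(mu*nu - nu**2))

for L in (20, 30, 40, 50):
    print(L, gain(mu, L), y1L(L))
# once mu * y1L * exp(-mu) approaches the dark-count floor ~ Y0, the bound
# q_1^U tends to 1/2 and the key rate collapses, setting the maximum
# tolerable loss seen in the figure.
```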
Figure <ref>(a) shows the secret key rates when there is no intrinsic QBER due to the noise, i.e., p=1. The solid red, blue, and green lines are for RFI-QKD, BB84 with {X,Z}, and BB84 with {X,Y} bases with δ=0, respectively. The dotted lines are for δ=π/7. When there is no relative motion between the reference frames of Alice and Bob, i.e., δ=0, the secret key rates for all QKD protocols are comparable. As the range of the relative motion increases, however, the secret key rate of BB84 with {X,Y} basis rapidly decreases.
Figure <ref>(b) shows a more realistic case in which there is intrinsic QBER due to the noise and the experimental imperfections. Here, we assume an intrinsic QBER of Q_ZZ=3%, which corresponds to p=0.94. The solid, small-dashed, and dotted lines are for δ=0, π/8, and π/7, respectively. It clearly shows that RFI-QKD outperforms the BB84 protocol in real-world implementations. For example, for p=0.94 and δ=π/7, the maximum loss at which QKD communication is possible in RFI-QKD is reduced by 12% from the ideal case of p=1, δ=0. On the other hand, the maximum loss for the BB84 protocol using {X, Z} and {X,Y} bases is reduced by 90% and 100% from the ideal case, respectively.
§ CONCLUSIONS
To summarize, we have studied both reference-frame-independent quantum key distribution (RFI-QKD) and the BB84 protocol in the presence of the relative motion between the reference frames of Alice and Bob. We have also considered an overall noise model with a depolarizing channel between Alice and Bob. In order to compare the secret key rates in real-world implementations, we have also applied the security analyses to the decoy state method. We found that RFI-QKD provides more robustness than the BB84 protocol in the presence of the relative motion between the reference frames.
§ ACKNOWLEDGEMENT
This work was supported by the ICT R&D program of MSIP/IITP (B0101-16-1355), and the KIST research programs (2E27231, 2V05340).
99
BB84 C. H. Bennett and G. Brassard, Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing (Bangalore, India, December 1984), p. 175.
Ekert91 A. Ekert, Phys. Rev. Lett. 67, 661 (1991).
gisin02 N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
scarani09 V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dusek, N. Lutkenhaus, and M. Peev, Rev. Mod. Phys. 81, 1301 (2009).
lo12 H.-K. Lo, M. Curty, and B. Qi, Phys. Rev. Lett. 108, 130503 (2012).
patel14 K. A. Patel, J. F. Dynes, M. Lucamarini, I. Choi, A. W. Sharpe, Z. L. Yuan, R. V. Penty, and A. J. Shields, Appl. Phys. Lett. 104, 051123 (2014).
guan15 J.-Y. Guan, Z. Cao, Y. Liu, G.-L. Shen-Tu, J. S. Pelc, M. M. Fejer, C.-Z. Peng, X. Ma, Q. Zhang, and J.-W. Pan, Phys. Rev. Lett. 114, 180502 (2015).
jeong16 Y.-C. Jeong, Y.-S. Kim, Y.-H. Kim, Phys. Rev. A 93, 012322 (2016).
choi16 Y. Choi, O. Kwon, M. Woo, K. Oh, S.-W. Han, Y.-S. Kim, and S. Moon, Phys. Rev. A 93, 032319 (2016).
QKD_commer For example, ID Quantique, MagiQ Technologies, QuintessenceLabs, and SeQureNet.
rarity02 J. G. Rarity, P. R. Tapster, P. M. Gorman, and P. Knight, New J. Phys. 4, 82 (2002).
ursin07 R. Ursin, F. Tiefenbacher, T. Schmitt-Manderbach, H. Weier, T. Scheidl, M. Lindentha, B. Blauensteiner, T. Jennewein, J. Perdigues, P. Trojek, B. Ömer, M. Fürst, M. Meyenburg, J. Rarity, Z. Sodnik, C. Barbieri, H. Weinfurter, and A. Zeilinger, Nature Physics 3, 481 (2007).
schmitt07 T. Schmitt-Manderbach, H. Weier, M. Fürst, R. Ursin, F. Tiefenbacher, T. Scheidl, J. Perdigues, Z. Sodnik, C. Kurtsiefer, J. G. Rarity, A. Zeilinger, and H. Weinfurter, Phys. Rev. Lett. 98, 010504 (2007).
bonato09 C. Bonato, A. Tomaello, V. D. Deppo, G. Naletto, and P. Villoresi, New J. Phys. 11, 045017 (2009).
meyer11 E. Meyer-Scott, Z. Yan, A. MacDonald, J.-P. Bourgoin, H. Hübel, and T. Jennewein, Phys. Rev. A 84, 062326 (2011).
bourgoin13 J.-P. Bourgoin, E. Meyer-Scott, B. L. Higgins, B. Helou, C. Erven, H HÖbel, B. Kumar, D. Hudson, I. D'Souza, R. Girard, R. Laflamme and T. Jennewein, New J. Phys. 15, 023006 (2013).
bourgoin15 J.-P. Bourgoin, B. L. Higgins, N. Gigov, C. Holloway, C. J. Pugh, S. Kaiser, M. Cranmer, and T. Jennewein, Opt. Express 23, 33437 (2015).
laing10 A. Laing, V. Scarani, J. G. Rarity, and J. L. O'Brien, Phys. Rev. A 82, 012304 (2010).
wabnig13 J. Wabnig, D. Bitauld, H. W. Li, A. Laing, J. L. O'Brien, and A O Niskanen, New J. Phys. 15, 073001 (2013).
zhang14 P. Zhang, K. Aungskunsiri, E. Martın-Lopez, J. Wabnig, M. Lobino, R. W. Nock, J. Munns, D. Bonneau, P. Jiang, H. W. Li, A. Laing, J. G. Rarity, A. O. Niskanen, M. G. Thompson, and J. L. O'Brien, Phys. Rev. Lett. 112, 130501 (2014).
liang14 W.-Y. Liang, S. Wang, H.-W. Li, Z.-Q. Yin, W. Chen, Y. Yao, J.-Z. Huang, G.-C. Guo, and Z.-F. Han, Sci. Rep. 4, 3617 (2014).
yin14 Z. Q. Yin, S. Wang, W. Chen, H. W. Li, G. C. Guo, and Z. F. Han, Quantum Inf. Process. 13, 1237 (2014).
wang15 C. Wang, X.-T. Song, Z.-Q. Yin, S. Wang, W. Chen, C.-M. Zhang, G.-C. Guo, and Z.-F. Han, Phys. Rev. Lett. 115, 160502 (2015).
sheridan10 L. Sheridan, T. P. Le, and V. Scarani, New J. Phys., 12, 123019 (2010).
wang16 F. Wang, P. Zhang, X. Wang, and F. Li, Phys. Rev. A 94, 062330 (2016).
Decoy2 X. Ma, B. Qi, and H.-K. Lo, Phys. Rev. A 72, 012326 (2005).
Decoy H.-K. Lo, X. Ma, and K. Chen, Phys. Rev. Lett. 94, 230504 (2005).
|
http://arxiv.org/abs/1701.07578v2 | 20170126045322 | Semiclassical Boltzmann transport theory for multi-Weyl semimetals | [
"Sanghyun Park",
"Seungchan Woo",
"E. J. Mele",
"Hongki Min"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
[email protected]
^1 Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea
^2 Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
Multi-Weyl semimetals (m-WSMs) are a new type of Weyl semimetal that have linear dispersion along one symmetry direction but anisotropic nonlinear dispersion along the two transverse directions, with a topological charge larger than one. Using the Boltzmann transport theory and fully incorporating the anisotropy of the system, we study the dc conductivity as a function of carrier density and temperature. We find that the characteristic density and temperature dependences of the transport coefficients at the level of Boltzmann theory are controlled by the topological charge of the multi-Weyl point and distinguish m-WSMs from their linear Weyl counterparts.
Semiclassical Boltzmann transport theory for multi-Weyl semimetals
Hongki Min^1,2
December 30, 2023
==================================================================
Introduction.
There has been a growing interest in three-dimensional (3D) analogs of graphene called Weyl semimetals (WSMs) where bands disperse linearly in all directions in momentum space around a twofold point degeneracy. Most attention has been devoted to novel response functions in elementary WSMs which exhibit a linear dispersion; however, recently it has been realized that these are just the simplest members of a family of multi-Weyl semimetals (m-WSMs) <cit.> which are characterized instead by double (triple) Weyl-nodes with a linear dispersion along one symmetry direction but quadratic (cubic) dispersion along the remaining two directions. These multi-Weyl nodes have a topologically protected charge (also referred to as chirality) larger than one, a situation that can be stabilized by point group symmetries <cit.>.
Noting that multilayer graphenes with certain stacking patterns support two-dimensional (2D) gapless low energy spectra with high chiralities, these m-WSMs can be regarded as the 3D version of multilayer graphenes.
One can expect that their modified energy dispersion and spin- or pseudospin-momentum locking textures will have important consequences for various physical properties due both to an enhanced density of states (DOS) and the anisotropy in the energy dispersion, distinguishing m-WSMs from elementary WSMs. In this Rapid Communication, we demonstrate that this emerges already at the level of dc conductivity in the strong scattering limit described by semiclassical Boltzmann transport theory. The transport properties of conventional linear WSMs have recently been explored theoretically by several authors <cit.>, and there have been theoretical works on the stability of charge-neutral double-Weyl nodes in the presence of Gaussian disorder <cit.> and the thermoelectric transport properties in double-Weyl semimetals <cit.>. However, as we show below, the density and temperature dependences of the dc conductivity for m-WSMs require an understanding of the effect of anisotropy in the nonlinear dispersion on the scattering. We develop this theory and find that it predicts characteristic power-law dependences of the conductivity on density and temperature that depend on the topological charge of the Weyl node and distinguish m-WSMs from their linear counterparts.
Model.
The low-energy effective Hamiltonian for m-WSMs with chirality J near a single Weyl point is given by <cit.>
H_J = ε_0 [ (k_- k_0)^J σ_+ + (k_+ k_0)^Jσ_-]+ħ v_z k_z σ_z,
where k_±=k_x± i k_y, σ_±=1 2(σ_x± iσ_y), σ are the Pauli matrices acting in the space of the two bands that make contact at the Weyl point,
and k_0 and ε_0 are the material-dependent parameters in units of momentum and energy, respectively. For simplicity, here we assumed an axial symmetry around the k_z axis. The eigenenergies of the Hamiltonian are given by ε_±=±ε_0√(k̃_∥^2J+c_z^2 k̃_z^2), where k̃=k/k_0, k̃_∥=√(k̃_x^2+k̃_y^2), and c_z=ħ v_z k_0/ε_0, thus the Hamiltonian H_J has a linear dispersion along the k_z direction for k_x=k_y=0, whereas a nonlinear dispersion ∼ k_∥^J along the in-plane direction for k_z=0. Note that the system described by the Hamiltonian in Eq. (<ref>) has a nontrivial topological charge characterized by the chirality index J <cit.>. [See Sec. <ref> in the Supplemental Material (SM) <cit.> for the eigenstates and DOS for m-WSMs.]
Boltzmann transport theory in anisotropic systems.
We use semiclassical Boltzmann transport theory to calculate the density and temperature dependence of the dc conductivity, which is fundamental in understanding the transport properties of a system. Here we focus on the longitudinal part of the dc conductivity assuming time-reversal symmetry with vanishing Hall conductivities.
The Boltzmann transport theory is known to be valid in the high carrier density limit, and we assume that the Fermi energy is away from the Weyl node, as shown in experiments <cit.>. The limitation of the current approach will be discussed later.
For a d-dimensional isotropic system in which only a single band is involved in the scattering, it is well known that the momentum relaxation time at a wavevector k in the relaxation time approximation can be expressed as <cit.>
1τ_ k=∫d^d k' (2π)^d W_kk' (1-cosθ_kk'),
where W_kk'=2πħ n_ imp |V_kk'|^2 δ(ε_k-ε_k'), n_ imp is the impurity density, and V_kk' is the impurity potential describing a scattering from k to k'. The inverse relaxation time is a weighted average of the collision probability in which the forward scattering (θ_kk'=0) receives reduced weight.
For an anisotropic system, the relaxation time approximation Eq. (<ref>) does not correctly describe the effects of the anisotropy on transport. Instead, coupled integral equations relating the relaxation times at different angles need to be solved to treat the anisotropy in the nonequilibrium distribution <cit.>. The linearized Boltzmann transport equation for the distribution function f_k=f^(0)(ε)+δ f_k at energy ε=ε_k balances acceleration on the Fermi surface against the scattering rates
(-e)E·v_kS^(0)(ε)=∫d^d k' (2π)^d W_kk'(δ f_k-δ f_k'),
where
S^(0)(ε)=-∂ f^(0)(ε)∂ε, f^(0)(ε)=[e^β (ε-μ)+ 1]^-1 is the Fermi distribution function at equilibrium, and β=1 k_ B T.
We parametrize δ f_k in the form:
δ f_k=(-e)(∑_i=1^d E^(i) v_k^(i)τ_k^(i))S^(0)(ε),
where E^(i), v_k^(i), and τ_k^(i) are the electric field, velocity, and relaxation time along the i-th direction, respectively.
After matching each coefficient in E^(i), we obtain an integral equation for the relaxation time,
1=∫d^d k' (2π)^d W_kk'(τ_k^(i)-v_k'^(i) v_k^(i)τ_k'^(i)).
For the isotropic case [τ_k^(i)=τ(ε) for a given energy ε=ε_ k], Eq. (<ref>) reduces to Eq. (<ref>).
[See Sec. <ref> in SM <cit.> for applications of Eq. (<ref>) to m-WSMs.]
The current density J induced by an electric field E is then given by
J^(i)=g∫d^d k (2π)^d (-e) v_k^(i)δ f_k≡σ_ijE^(j),
where g is the degeneracy factor and σ_ij is the conductivity tensor given by
σ_ij=g e^2∫d^d k (2π)^d S^(0)(ε) v_k^(i) v_k^(j)τ_k^(j).
For the calculation, we set g=4 and v_z=v_0≡ε_0 ħ k_0.
Density dependence of dc conductivity.
Consider the m-WSMs described by Eq. (<ref>) with chirality J and their dc conductivity as a function of carrier density at zero temperature. Due to the anisotropic energy dispersion with the axial symmetry, for J>1 the conductivity will also be anisotropic, with σ_xx=σ_yy≠σ_zz.
We consider two types of impurity scattering: short-range impurities (e.g., lattice defects, vacancies, and dislocations) and charged impurities distributed randomly in the background. The impurity potential for short-range scatterers is given by a constant V_kk'=V_ short in momentum space (i.e., zero-range delta function in real space), whereas for charged Coulomb impurities in 3D it is given by
V_kk'=4π e^2 ϵ(q)|q|^2,
where ϵ(q) is the dielectric function for q=k-k'. Within the Thomas-Fermi approximation, the dielectric function can be approximated as ϵ(q)≈κ[1+(q_ TF^2/|q|^2)], where κ is the background dielectric constant, q_ TF=√(4π e^2 κ D(ε_ F)) is the Thomas-Fermi wave vector, and D(ε_ F) is the DOS at the Fermi energy ε_ F. The interaction strength for charged impurities can be characterized by an effective fine structure constant α=e^2κħ v_0. Note that q_ TF∝√(gα).
Figure <ref> shows the density dependence of the dc conductivity for charged impurity scattering at zero temperature. Because of the chirality J, m-WSMs have a characteristic density dependence in dc conductivity, which can be understood as follows. From Eq. (<ref>), we expect σ_ii∼ [v_ F^(i)]^2/V_ F^2, where v_ F^(i) is the Fermi velocity along the ith direction and V_ F^2 is the angle-averaged squared impurity potential at the Fermi energy ε_ F.
For m-WSMs, the in-plane component with k_z=0 and out-of-plane component with k_x=k_y=0 for the velocity at ε_ F are given by v_ F^(∥)=J v_0 r_ F^1-1 J and v_ F^(z)=v_0 c_z, respectively, where r_ F=ε_ F/ε_0. (See Sec. <ref> in SM <cit.>.)
For charged impurities, in the strong screening limit (gα≫ 1), V_ F∼ q_ TF^-2∼ D^-1(ε_ F)∼ε_ F^-2 J, thus we find
σ_xx ∼ ε_ F^2(1-1 J)ε_ F^4 J∼ n^2(J+1) J+2,
σ_zz ∼ ε_ F^4 J∼ n^4 J+2.
Here, the DOS is D(ε)∼ε^2 J, thus ε_ F∼ n^J J+2.
In the weak screening limit (gα≪ 1), we expect V_ F∼ε_ F^-2ζ with 1 J≤ζ≤ 1, because the in-plane and out-of-plane components of the wavevector at ε_ F are k_ F^(∥)=k_0 r_ F^1 J and k_ F^(z)=k_0 r_ F/c_z, respectively. Thus, we find
σ_xx ∼ ε_ F^2(1-1 J)ε_ F^4ζ∼ n^2(J-1)+4Jζ J+2,
σ_zz ∼ ε_ F^4ζ∼ n^4Jζ J+2.
(See Sec. <ref> in SM <cit.> for the analytic expressions of the dc conductivity for short-range impurities and for charged impurities in the strong screening limit, and a detailed discussion for charged impurities in the weak screening limit.)
Figure <ref> illustrates the evolution of the power-law density dependence of the dc conductivity as a function of the screening strength characterized by gα. Note that ζ=1 J in Eq. (<ref>) gives the same density exponent as in the strong screening limit in Eq. (<ref>).
Thus, as α increases, the density exponent evolves from that obtained in Eq. (<ref>) with decreasing ζ within the range 1 J≤ζ≤ 1. Here, nonmonotonic behavior in the density exponent originates from the angle-dependent power law in the relaxation time, which manifests in the weak screening limit. (See Sec. <ref> in SM <cit.> for further discussion.)
Similarly, for short-ranged impurities, V_ F is a constant independent of density; in this case we find
σ_xx ∼ ε_ F^2(1-1 J)∼ n^2(J-1) J+2,
σ_zz ∼ ε_ F^0 ∼ n^0.
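The power laws derived above can be summarized compactly; the sketch below (Python; exact rational exponents of σ ∼ n^γ for the strong-screening charged-impurity and short-range cases) tabulates them for the chiralities considered here:

```python
from fractions import Fraction as F

def exponents(J):
    """(gamma_xx, gamma_zz) exponents of sigma ~ n^gamma for chirality J."""
    return {'charged, strong screening': (F(2*(J + 1), J + 2), F(4, J + 2)),
            'short range':               (F(2*(J - 1), J + 2), F(0))}

for J in (1, 2, 3):
    print(J, exponents(J))
# e.g. J = 2: sigma_xx ~ n^(3/2), sigma_zz ~ n^1 (strong screening);
#             sigma_xx ~ n^(1/2), sigma_zz ~ n^0 (short range).
```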
The anisotropy in conductivity can be characterized by σ_xx/σ_zz. Figure <ref> shows σ_xx/σ_zz as a function of density for m-WSMs. Thus, as the carrier density increases, the anisotropy in conductivity increases. Interestingly, σ_xx/σ_zz for both short-range impurities and charged impurities in the strong screening limit is given by
σ_xx/σ_zz∼ε_ F^2(1-1 J)∼ n^2(J-1) J+2.
Note that for arbitrary screening, the values of ζ for σ_xx and σ_zz in Eq. (<ref>) are actually different; they therefore do not cancel in σ_xx/σ_zz, and the power law deviates from that in Eq. (<ref>). (See Sec. <ref> in SM <cit.> for the analytic/asymptotic expressions of the density dependence of σ_xx/σ_zz.)
We consider both the short-range and charged impurities by adding their scattering rates according to Matthiessen's rule assuming that each scattering mechanism is independent. At low densities (but high enough to validate the Boltzmann theory) the charged impurity scattering always dominates the short-range scattering, while at high densities the short-range scattering dominates, irrespective of the chirality J and screening strength.
Temperature dependence of dc conductivity.
In 3D materials, it is not easy to change the density of charge carriers by gating, because of screening in the bulk. However, the temperature dependence of dc conductivity can be used to understand the carrier dynamics of the system.
The effect of finite temperature arises from the energy averaging over the Fermi distribution function in Eq. (<ref>), and the temperature dependence of the screening of the impurity potential for charged impurities <cit.>.
From the invariance of carrier density with respect to temperature, we obtain the variation of the chemical potential μ(T) as a function of temperature T. Then the Thomas-Fermi wavevector q_ TF(T) in 3D at finite T can be expressed as q_ TF(T)=√(4π e^2κ∂ n ∂μ).
In the low- and high-temperature limits, the chemical potential is given by
μ/ε_ F =
1- π^2/3J(T/T_ F)^2 (T≪ T_ F),
1 2η(2 J)Γ(2+2 J)(T/T_ F)^-2 J (T≫ T_ F),
whereas the Thomas-Fermi wave vector is given by
q_TF(T) /q_TF(0) =
1- π^2/6J(T/T_ F)^2 (T≪ T_ F),
√(2η(2 J)Γ(1+2 J))(T/T_ F)^1 J (T≫ T_ F),
where T_ F=ε_ F/k_ B is the Fermi temperature, and Γ and η are the gamma function and the Dirichlet eta function <cit.>, respectively.
(See Sec. <ref> in SM <cit.> for the temperature dependence of the chemical potential and Thomas-Fermi wave vector.)
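The asymptotics above follow from fixing the carrier density, and this can be checked numerically with a short sketch (Python/SciPy; names are ours, energies in units of ε_F and temperatures in units of T_F, with the Weyl node at ε = 0):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit       # expit(x) = 1/(1 + exp(-x))

J = 2                                 # chirality; D(eps) ~ eps^(2/J)

def n_of(mu, T):
    """Electron minus hole density for a particle-hole-symmetric DOS."""
    el = quad(lambda e: e**(2/J)*expit(-(e - mu)/T), 0, np.inf)[0]
    ho = quad(lambda e: e**(2/J)*expit(-(e + mu)/T), 0, np.inf)[0]
    return el - ho

n0 = J/(J + 2)                        # zero-temperature value with mu = 1
for T in (0.1, 0.5, 2.0, 5.0):
    print(T, brentq(lambda m: n_of(m, T) - n0, -10, 10))
# mu falls quadratically at low T and as T^(-2/J) at high T, as above
```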
In a single-band system, q_ TF(T) always decreases with T^-1 at high temperatures, whereas in m-WSMs, q_ TF(T) increases with T^1 J because of the thermal excitation of carriers that participate in the screening.
Figure <ref> shows the temperature dependence of dc conductivity for charged impurities.
We find
σ_xx(T)σ_xx(0) =
1+C_xx(T T_ F)^2 (T≪ T_ F),
D_xx(T T_ F)^2+4ζ-2 J (T≫ T_ F),
σ_zz(T)σ_zz(0) =
1+C_zz(T T_ F)^2 (T≪ T_ F),
D_zz(T T_ F)^4ζ (T≫ T_ F).
As discussed, ζ varies within 1 J≤ζ≤ 1 and approaches 1 J in the strong screening limit (gα≫ 1). Here, the high-temperature coefficients D_ii>0, whereas the low-temperature coefficients C_ii change sign from negative to positive as α increases.
For short-range impurities, we find
σ_xx(T)σ_xx(0) =
1+C_xx^ short(T T_ F)^2 (T≪ T_ F),
D_xx^ short(T T_ F)^2(J-1) J (T≫ T_ F),
σ_zz(T)σ_zz(0) =
1-e^-T_ F/T (T≪ T_ F),
1 2+D_zz^ short(T T_ F)^-2+J J (T≫ T_ F).
Here, C_xx^ short<0 and D_ii^ short>0.
Note that for J=1, Eq. (<ref>a) becomes constant, and reduces to Eq. (<ref>b) if next order corrections are included.
(See Sec. <ref> in SM <cit.> for the analytic/asymptotic expressions of the temperature coefficients, and the evolution of C_ii as a function of gα.)
To understand the temperature dependence, we can consider a situation where the thermally induced charge carriers participate in transport. Then the temperature dependence in the high-temperature limit can be obtained simply by replacing the ε_ F dependence with T in Eqs. (<ref>)-(<ref>), which describe the density dependence of dc conductivity. Similarly as in Fig. <ref>, σ_xx(T)/σ_zz(T) also increases with T at high temperatures.
For the charged impurities at high temperatures, and neglecting the effect of phonons, the conductivity increases with temperature, and mimics an insulating behavior.
By contrast, for short-range impurities at high temperatures, σ_zz(T) decreases with temperature and approaches 0.5 σ_zz(0), thus showing a metallic behavior. Interestingly, σ_xx(T) shows contrasting behavior for J>1 and J=1, increasing (decreasing) with temperature for J>1 (J=1) showing insulating (metallic) behavior at high temperatures.
Discussion.
We find that the dc conductivities in the Boltzmann limit show characteristic density and temperature dependences that depend strongly on the chirality of the system, revealing a signature of m-WSMs in transport measurements, which can be compared with experiments. In real materials with time reversal symmetry, multiple Weyl points with compensating chiralities will be present. The contributions from the individual nodes calculated by our method are additive when the Weyl points are well separated and internode scattering is weak.
Our analysis is based on the semiclassical Boltzmann transport theory with the Thomas-Fermi approximation for screening and corrected for the anisotropy of the Fermi surface in m-WSMs. The Boltzmann transport theory is known to be valid in the high density limit. At low densities, inhomogeneous impurities induce a spatially varying local chemical potential, typically giving a minimum conductivity when the chemical potential is at the Weyl node <cit.> and the problem is treated within the effective medium theory. Note that the Thomas-Fermi approximation used in this work is the long-wavelength limit of the random phase approximation (RPA), and neglects interband contributions to the polarization function <cit.>, thus deviating from the RPA result at low densities. Both simplifications become important in the low-density limit, which will be considered in our future work.
The authors thank Shaffique Adam for helpful discussions and comments.
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant No. 2015R1D1A1A01058071.
E.J.M.'s work on this project was supported by the U.S. Department of Energy, Office of Basic Energy Sciences under Award No. DE-FG02-ER45118.
H.M. acknowledges travel support provided by the University Research Foundation at the University of Pennsylvania while this work was carried out.
999
Xu2011
G. Xu, H. Weng, Z. Wang, X. Dai, and Z. Fang,
Chern Semimetal and the Quantized Anomalous Hall Effect in HgCr_2Se_4,
Phys. Rev. Lett. 107, 186806 (2011).
Fang2012
C. Fang, M. J. Gilbert, X. Dai, and B. A. Bernevig,
Multi-Weyl Topological Semimetals Stabilized by Point Group Symmetry,
Phys. Rev. Lett. 108, 266802 (2012).
Huang2016
S.-M. Huang, S.-Y. Xu, I. Belopolski, C.-C. Lee, G. Chang, T.-R. Chang, B. Wang, N. Alidoust, G. Bian, M. Neupane, D. Sanchez, H. Zheng, H.-T. Jeng, A. Bansil, T. Neupert, H. Lin, and M. Z. Hasan,
New type of Weyl semimetal with quadratic double Weyl fermions,
Proc. Natl. Acad. Sci. 113, 1180 (2016).
Wan2011
X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov,
Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates,
Phys. Rev. B 83, 205101 (2011).
Hosur2012
P. Hosur, S. A. Parameswaran, and A. Vishwanath,
Charge Transport in Weyl Semimetals,
Phys. Rev. Lett. 108, 046602 (2012).
Ryu2014
R. R. Biswas and S. Ryu,
Diffusive transport in Weyl semimetals,
Phys. Rev. B 89, 014205 (2014).
Ominato2014
Y. Ominato and M. Koshino,
Quantum transport in a three-dimensional Weyl electron system,
Phys. Rev. B 89, 054202 (2014).
Sbierski2014
B. Sbierski, G. Pohl, E. J. Bergholtz, and P. W. Brouwer,
Quantum Transport of Disordered Weyl Semimetals at the Nodal Point,
Phys. Rev. Lett. 113, 026602 (2014).
Skinner2014
B. Skinner,
Coulomb disorder in three-dimensional Dirac systems,
Phys. Rev. B 90, 060202(R) (2014).
Ominato2015
Y. Ominato and M. Koshino,
Quantum transport in three-dimensional Weyl electron system in the presence of charged impurity scattering,
Phys. Rev. B 91, 035202 (2015).
DasSarma2015
S. Das Sarma, E. H. Hwang, and H. Min,
Carrier screening, transport, and relaxation in three-dimensional Dirac semimetals,
Phys. Rev. B 91, 035201 (2015).
Ramakrishnan2015
N. Ramakrishnan, M. Milletari, and S. Adam,
Transport and magnetotransport in three-dimensional Weyl semimetals,
Phys. Rev. B 92, 245120 (2015).
Roy2016
B. Roy, R.-J. Slager, and V. Juricic,
Global phase diagram of a dirty Weyl semimetal,
Phys. Rev. B 95, 115104 (2017).
Goswami2015
P. Goswami and A. H. Nevidomskyy,
Topological Weyl superconductor to diffusive thermal Hall metal crossover in the B phase of UPt_3,
Phys. Rev. B 92, 214504 (2015).
Bera2016
S.Bera, J. D. Sau, and B. Roy,
Dirty Weyl semimetals: Stability, phase transition, and quantum criticality,
Phys. Rev. B 93, 201302(R) (2016).
Sbierski2016
B. Sbierski, M. Trescher, E. J. Bergholtz, and P. W. Brouwer,
Disordered double Weyl node: Comparison of transport and density-of-states calculations, arXiv:1606.06941 (2016).
Chen2016
Q. Chen and G. A. Fiete,
Thermoelectric transport in double-Weyl semimetals,
Phys. Rev. B 93, 155125 (2016).
Ahn2016
S. Ahn, E. H. Hwang, and H. Min,
Collective modes in multi-Weyl semimetals,
Scientific Reports 6, 34023 (2016).
SM
See Supplemental Material for details of the eigenstates and density of states for multi-Weyl semimetals, density dependence of dc conductivity in multi-Weyl semimetals at zero temperature, temperature dependence of chemical potential and Thomas-Fermi wavevector in multi-Weyl semimetals, and temperature dependence of dc conductivity in multi-Weyl semimetals.
Lv2015
B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen, Z. Fang, X. Dai, T. Qian, and H. Ding,
Experimental Discovery of Weyl Semimetal TaAs,
Phys. Rev. X 5, 031013 (2015).
Xu2015
S.-Y. Xu, I. Belopolski, N. Alidoust, M. Neupane, G. Bian, C. Zhang, R. Sankar, G. Chang, Z. Yuan, C.-C. Lee, S.-M. Huang, H. Zheng, J. Ma, D. S. Sanchez, B. Wang, A. Bansil, F. Chou, P. P. Shibayev, H. Lin, S. Jia, and M. Zahid Hasan,
Discovery of a Weyl fermion semimetal and topological Fermi arcs,
Science 349, 613 (2015).
Ashcroft1976
N. W. Ashcroft and N. D. Mermin, Solid State Physics, (Brooks-Cole, Pacific Grove, CA, 1976).
Schliemann2003
J. Schliemann and D. Loss,
Anisotropic transport in a two-dimensional electron gas in the presence of spin-orbit coupling,
Phys. Rev. B 68, 165311 (2003).
Vyborny2009
K. Výýorný, A. A. Kovalev, J. Sinova, and T. Jungwirth,
Semiclassical framework for the calculation of transport anisotropies,
Phys. Rev. B 79, 045427 (2009).
Ando1982
T. Ando, A. B. Fowler, and F. Stern,
Electronic properties of two-dimensional systems,
Rev. Mod. Phys. 54, 437 (1982).
DasSarma2011
S. Das Sarma, S. Adam, E. H. Hwang, and E. Rossi,
Electronic transport in two-dimensional graphene,
Rev. Mod. Phys. 83, 407 (2011).
Arfken2012
G. B. Arfken, H. J. Weber, and F. E. Harris,
Mathematical Methods for Physicists, 7th ed., (Academic , New York, 2012).
Supplemental Material:
Semiclassical Boltzmann transport theory for multi-Weyl semimetals
§ EIGENSTATES AND DENSITY OF STATES FOR MULTI-WEYL SEMIMETALS
Let us consider the eigenstates and density of states (DOS) for the low-energy effective Hamiltonian of m-WSMs described by Eq. (<ref>) in the main text:
H_J = ε_0 \begin{pmatrix} c_z k̃_z & k̃_-^J \\ k̃_+^J & -c_z k̃_z \end{pmatrix},
where k̃=k/k_0 and c_z=ħ v_z k_0/ε_0. To avoid difficulties associated with anisotropic dispersions, we consider the following coordinate transformation <cit.>
k_x → k_0(rsinθ)^1 Jcosϕ,
k_y → k_0(rsinθ)^1 Jsinϕ,
k_z →k_0 c_z rcosθ,
which transforms the Hamiltonian into the following form:
H = ε_0 r \begin{pmatrix} cosθ & sinθ e^{-iJϕ} \\ sinθ e^{iJϕ} & -cosθ \end{pmatrix}.
In the transformed coordinates, the energy dispersion is given by ε_±(r)=±ε_0 r and the corresponding eigenstate is given by
|+⟩ = \begin{pmatrix} cos(θ/2) \\ sin(θ/2) e^{iJϕ} \end{pmatrix},
|-⟩ = \begin{pmatrix} -sin(θ/2) \\ cos(θ/2) e^{iJϕ} \end{pmatrix}.
The Jacobian 𝒥 corresponding to this transformation is given by
𝒥 = \begin{vmatrix} ∂ k_x/∂ r & ∂ k_x/∂θ & ∂ k_x/∂ϕ \\ ∂ k_y/∂ r & ∂ k_y/∂θ & ∂ k_y/∂ϕ \\ ∂ k_z/∂ r & ∂ k_z/∂θ & ∂ k_z/∂ϕ \end{vmatrix} = k_0^3 c_z J r^2 Jsin^2 J-1θ≡𝒥(r,θ).
Note that for the + band, the band velocity v_ k^(i)=1ħε_+,k∂ k_i can be expressed as
v_ k^(x) = J v_0 r^1-1 Jsin^2-1 Jθcosϕ,
v_ k^(y) = J v_0 r^1-1 Jsin^2-1 Jθsinϕ,
v_ k^(z) = c_z v_0 cosθ,
where v_0=ε_0ħ k_0.
The DOS at energy ε>0 can be obtained as
D(ε) = g∫d^3k (2π)^3δ(ε-ε_+,k)
= g ∫_0^∞dr ∫_0^πdθ∫_0^2πdϕ𝒥(r,θ) (2π)^3δ(ε-ε_0 r)
= g B(1 2,1 J) 4π^2 c_z Jk_0^3ε_0(εε_0)^2 J,
where g is the number of degenerate Weyl nodes.
Here, we used the relation ∫_0^π/2 dθcos^mθsin^nθ=1 2 B(m+1 2,n+1 2), where B(m,n)=Γ(m)Γ(n) Γ(m+n) is the beta function and Γ(x)=∫_0^∞ dt t^x-1e^-t is the gamma function <cit.>.
Note that the Thomas-Fermi wavevector is determined by the DOS at the Fermi energy ε_ F given by
q_ TF=√(4π e^2 κ D(ε_ F))=k_0 √(g α B(1 2,1 J)π c_z J)(ε_ Fε_0)^1 J,
where α=e^2κħ v_0 is the effective fine structure constant.
The carrier density is then given by
n=∫_0^ε_ Fdε D(ε)=n_0 g B(1 2,1 J) 4π^2 c_z (J+2)(ε_ Fε_0)^2 J+1,
where n_0=k_0^3.
Note that ε_ F∼ n^J J+2 and D(ε_ F)∼ n^2 J+2.
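The relations above invert straightforwardly; a small sketch follows (Python/SciPy; names are ours, energies in units of ε_0, momenta in units of k_0, densities in units of n_0):

```python
import numpy as np
from scipy.special import beta

def fermi_energy(n_tilde, J, g=4, c_z=1.0):
    """Invert n(eps_F): reduced density n/n0 -> reduced Fermi energy eps_F/eps0."""
    pref = g*beta(0.5, 1/J)/(4*np.pi**2*c_z*(J + 2))
    return (n_tilde/pref)**(J/(J + 2))

def q_TF(r_F, J, alpha, g=4, c_z=1.0):
    """Thomas-Fermi wavevector in units of k0, from the DOS at eps_F."""
    return np.sqrt(g*alpha*beta(0.5, 1/J)/(np.pi*c_z*J))*r_F**(1/J)

r_F = fermi_energy(1e-3, J=2)
print(r_F, q_TF(r_F, J=2, alpha=1.0))
```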
§ DENSITY DEPENDENCE OF DC CONDUCTIVITY IN MULTI-WEYL SEMIMETALS AT ZERO TEMPERATURE
In this section, we derive the dc conductivity at zero temperature for 3D anisotropic systems with an anisotropic energy dispersion which has an axial symmetry around the k_z-axis (i.e. independent of ϕ), as in the m-WSMs described by Eq. (<ref>) in the main text. To take into account the anisotropy of the energy dispersion, we express the anisotropic Boltzmann equation in Eq. (<ref>) in the main text using the transformed coordinates in Eq. (<ref>) assuming an axial symmetry around the k_z-axis:
1 = ∫_0^∞dr'∫_0^πdθ'∫_0^2πdϕ'𝒥(r',θ') (2π)^3 W_kk'(τ_k^(i)-v_k'^(i) v_k^(i)τ_k'^(i))
= ∫_0^∞dr'∫_0^πdθ'∫_0^2πdϕ'
k_0^3 r'^2 Jsin^2 J-1θ' (2π)^3 c_z J[2πħ n_ imp |V_kk'|^2 F_kk'δ(ε_0 r-ε_0 r')](τ_k^(i)-d_kk'^(i)τ_k'^(i))
= 2πħ n_ impk_0^3 r^2 J (2π)^2 c_z J ε_0∫_-1^1dcosθ' (1-cos^2θ')^1 J-1∫_0^2πdϕ' 2π |V_kk'|^2 F_kk'(τ_k^(i)-d_kk'^(i)τ_k'^(i)),
where d_kk'^(i)=v_k'^(i)/v_k^(i) and F_kk' = 1/2[1+cosθcosθ' +sinθsinθ' cos J(ϕ - ϕ')] is the square of the wavefunction overlap between k and k' states in the same band.
Let us define ρ_0=k_0^3 (2π)^2 c_z ε_0, V_0=ε_0 k_0^3, and 1τ_0(r)=2πħ n_ imp V_0^2 ρ_0. Then with μ=cosθ, we have
1=r^2 J J∫_-1^1dμ'(1-μ'^2)^1 J-1∫_0^2πdϕ' 2π |Ṽ_kk'|^2 F_kk'(τ̃_k^(i)-d_kk'^(i)τ̃_k'^(i)),
where Ṽ_kk'=V_kk'/V_0 and τ̃_k^(i)=τ_k^(i)/τ_0.
Assuming τ̃_k^(i)=τ̃^(i)(μ) from the axial symmetry,
1=w̃^(i)(μ)τ̃^(i)(μ)-∫_-1^1dμ' w̃^(i)(μ,μ')τ̃^(i)(μ'),
where
w̃^(i)(μ) = r^2 J J∫_-1^1dμ'(1-μ'^2)^1 J-1∫_0^2πdϕ' 2π |Ṽ_kk'|^2 F_kk',
w̃^(i)(μ,μ') = r^2 J J(1-μ'^2)^1 J-1∫_0^2πdϕ' 2π |Ṽ_kk'|^2 F_kk' d_kk'^(i).
Now let us discretize θ or equivalently μ=cosθ to μ_n (n=1,2,⋯,N) with an interval Δμ=2/N. Then for τ̃_n^(i)=τ̃^(i)(μ_n), we have
1=P_n^(i)τ̃_n^(i) - ∑_n' P_nn'^(i)τ̃_n'^(i),
where P_n^(i)=w̃^(i)(μ_n) is an N-vector and P_nn'^(i)=w̃^(i)(μ_n,μ_n')Δμ is an N× N matrix, which together relate the θ-dependent relaxation times. Note that Eq. (<ref>) has a structure similar to that for multiband scattering <cit.>, in which the relaxation time is obtained by solving coupled equations relating the relaxation times of the different energy bands involved in the scattering.
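In practice the discretized equation above is a small linear solve. A minimal sketch (Python/NumPy; the kernels are assumed to be precomputed from the weights w̃ on the μ-grid, and the constant toy kernel below is purely illustrative):

```python
import numpy as np

def solve_tau(P_vec, P_mat):
    """Angle-resolved relaxation times from 1 = P_n tau_n - sum_n' P_nn' tau_n'."""
    return np.linalg.solve(np.diag(P_vec) - P_mat, np.ones_like(P_vec))

# toy check: a constant (isotropic) kernel returns an angle-independent tau
N = 50
w = np.full((N, N), 0.3*2/N)          # w(mu_n, mu_n') * dmu
P_vec = 1.0 + w.sum(axis=1)           # out-scattering part
tau = solve_tau(P_vec, w)
print(tau.std()/tau.mean())           # ~ 0: reduces to the isotropic result
```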
Then the dc conductivity at zero temperature is given by
σ_ij = g e^2 ∫ (d^3k/(2π)^3) δ(ε_k - ε_F) v_k^{(i)} v_k^{(j)} τ_k^{(j)}
= g e^2 ∫_0^∞ dr ∫_0^π dθ ∫_0^{2π} dϕ (k_0^3 r^{2/J} sin^{2/J-1}θ/((2π)^3 c_z J)) δ(ε_0 r - ε_F) v_k^{(i)} v_k^{(j)} τ_k^{(j)}
= (σ_0/J) ∫_0^∞ dr r^{2/J} ∫_{-1}^1 dμ (1-μ^2)^{1/J-1} ∫_0^{2π} (dϕ/2π) δ(r-r_F) ṽ_k^{(i)} ṽ_k^{(j)} τ̃_k^{(j)},
where σ_0 = g e^2 ρ_0 v_0^2 τ_0, r_F = ε_F/ε_0, and ṽ_k^{(i)} = v_k^{(i)}/v_0. Thus, from Eq. (<ref>), we have
σ_xx/σ_0 = (J r_F^2/2) ∫_{-1}^1 dμ (1-μ^2) τ̃^{(x)}(μ),
σ_zz/σ_0 = (c_z^2 r_F^{2/J}/J) ∫_{-1}^1 dμ (1-μ^2)^{1/J-1} μ^2 τ̃^{(z)}(μ).
Note that τ_0, v_0, ρ_0, and σ_0 are density-independent normalization constants in units of time, velocity, DOS, and conductivity, respectively. In addition, from the axial symmetry around the k_z-axis, σ_xx = σ_yy.
For short-range impurities, V_{kk'} is independent of density. Thus, from Eq. (<ref>), w̃^{(i)}(μ) ∼ ε_F^{2/J} and τ̃^{(i)}(μ) ∼ ε_F^{-2/J} at the Fermi energy ε_F. Note that ε_F ∼ n^{J/(J+2)}. Therefore, we have
σ_xx ∼ ε_F^{2-2/J} ∼ n^{2(J-1)/(J+2)},
σ_zz ∼ ε_F^0 ∼ n^0.
For charged impurities in the strong screening limit, V_{kk'} ∼ q_TF^{-2} ∼ D^{-1}(ε_F) ∼ ε_F^{-2/J}, thus w̃^{(i)}(μ) ∼ ε_F^{2/J-4/J} and τ̃^{(i)}(μ) ∼ ε_F^{2/J} at ε_F. Therefore, we have
σ_xx ∼ ε_F^{2+2/J} ∼ n^{2(J+1)/(J+2)},
σ_zz ∼ ε_F^{4/J} ∼ n^{4/(J+2)}.
For charged impurities in the weak screening limit, from V_{kk'} ∼ |k-k'|^{-2} and Eq. (<ref>), we expect the potential average on the Fermi surface to scale as V_F ∼ ε_F^{-2ζ} with 1/J ≤ ζ ≤ 1 (assuming no logarithmic correction), thus w̃^{(i)}(μ) ∼ ε_F^{2/J-4ζ} and τ̃^{(i)}(μ) ∼ ε_F^{4ζ-2/J} at ε_F. Therefore, we have
σ_xx ∼ ε_F^{2+4ζ-2/J} ∼ n^{(2(J-1)+4Jζ)/(J+2)},
σ_zz ∼ ε_F^{4ζ} ∼ n^{4Jζ/(J+2)}.
Here, the value of ζ in σ_xx and that in σ_zz need not be the same, as explained later in this section. Note that ζ = 1/J gives the same density exponent as the strong screening limit.
For short-range impurities, it turns out that the relaxation time is independent of the polar angle θ. Assuming τ^{(i)}(μ) = τ^{(i)} from the beginning, for a short-range impurity potential V_{kk'} = V_short, Eq. (<ref>) reduces to
1/τ̃^{(i)} = (r^{2/J}/J) Ṽ_short^2 ∫_{-1}^1 dμ' (1-μ'^2)^{1/J-1} ∫_0^{2π} (dϕ'/2π) F_{kk'} (1 - d_{kk'}^{(i)}),
where Ṽ_short = V_short/V_0.
Then we find that the relaxation time τ^{(i)}(ε) at energy ε = r ε_0 is
1/τ̃^{(x)}(ε) = (r^{2/J}/(2J)) Ṽ_short^2 B(1/2,1/J) - δ_{J,1} (r^2/3) Ṽ_short^2,
1/τ̃^{(z)}(ε) = (r^{2/J}/(2J)) Ṽ_short^2 B(1/2,1+1/J).
From Eq. (<ref>), we finally obtain
σ_xx/σ_0 = (2J r_F^2/3) τ̃^{(x)}(ε_F),
σ_zz/σ_0 = (c_z^2 r_F^{2/J}/J) B(3/2,1/J) τ̃^{(z)}(ε_F) = J c_z^2/Ṽ_short^2.
Note that σ_xx/σ_0 = Ṽ_short^{-2}, (16/(3π)) r_F Ṽ_short^{-2}, and (12/B(1/2,1/3)) r_F^{4/3} Ṽ_short^{-2} for J = 1, 2, 3, respectively, and the obtained analytic expressions are consistent with the density dependence in Eq. (<ref>). From Eq. (<ref>), we find
σ_xx/σ_zz = 1/c_z^2, 8r_F/(3π c_z^2), and 4r_F^{4/3}/(B(1/2,1/3) c_z^2) for J = 1, 2, 3, respectively; the anisotropy between σ_xx and σ_zz increases as the Fermi energy or the carrier density increases.
For charged impurities in the strong screening limit, the impurity potential becomes V_{kk'} ≈ V^strong_screen ≡ 4π e^2/(κ q_TF^2), which has the same features as the short-range impurity potential. Thus, the relaxation time is also independent of the polar angle, and similar analytic expressions can be obtained by replacing Ṽ_short with Ṽ^strong_screen in Eqs. (<ref>) and (<ref>), where Ṽ^strong_screen = V^strong_screen/V_0 = 4πα k_0^2/q_TF^2. Then the relaxation time is given by
1/τ̃^{(x)}(ε) = (8π^4 c_z^2 J/(g^2 B(1/2,1/J))) r^{2/J} r_F^{-4/J} - δ_{J,1} (4π^4 c_z^2/(3g^2)) r^2 r_F^{-4},
1/τ̃^{(z)}(ε) = (16π^4 c_z^2 J/((J+2) g^2 B(1/2,1/J))) r^{2/J} r_F^{-4/J},
thus, in the strong screening limit, we obtain
σ_xx/σ_0 = (2J r_F^2/3) τ̃^{(x)}(ε_F),
σ_zz/σ_0 = (c_z^2 r_F^{2/J}/J) B(3/2,1/J) τ̃^{(z)}(ε_F) = (g^2 B^2(1/2,1/J)/(16π^4 J)) r_F^{4/J}.
Note that σ_xx/σ_0 = (g^2/(4π^4 c_z^2)) r_F^4, (g^2/(12π^3 c_z^2)) r_F^3, and (g^2 B(1/2,1/3)/(12π^4 c_z^2)) r_F^{8/3} for J = 1, 2, 3, respectively, and the obtained analytic expressions are consistent with the density dependence in Eq. (<ref>). Also note that σ_xx/σ_zz has the same form as that obtained for short-range impurities.
For charged impurities at arbitrary screening, the relaxation time in general depends on the polar angle for J > 1. In addition, as seen in Fig. <ref> in the main text, the density exponent shows non-monotonic behavior as a function of gα. From Eq. (<ref>), for a given wavevector k = (k_0 (r_F sinθ)^{1/J}, 0, (k_0/c_z) r_F cosθ) at the Fermi energy, the average of the squared Coulomb potential over the Fermi surface is given by
⟨V^2(θ)⟩_F = (1/2) ∫_{-1}^1 dcosθ' (1-cos^2θ')^{1/J-1} ∫_0^{2π} (dϕ'/2π) |V_{kk'}|^2.
Then, assuming ⟨V^2(θ)⟩_F ∼ r_F^{-4ζ(θ)}, we can obtain the angle-dependent exponent ζ(θ) with 1/J ≤ ζ(θ) ≤ 1. Figure <ref> shows ζ(θ) for several values of θ = 0, π/6, π/2. This angle-dependent power law gives rise to a significant non-monotonic behavior of τ_z and σ_zz in gα, which originates from the competition between two inverse length scales, q_TF ∼ r_F^{1/J} and k_F^{(z)} ∼ r_F. Note that the in-plane component of the wavevector at the Fermi energy, k_F^{(∥)} ∼ r_F^{1/J}, has the same Fermi energy dependence as q_TF, leading to a monotonic behavior of τ_x and σ_xx in gα. As gα increases, ζ(θ) eventually approaches 1/J irrespective of θ, as obtained in the strong screening limit.
§ TEMPERATURE DEPENDENCE OF CHEMICAL POTENTIAL AND THOMAS-FERMI WAVEVECTOR IN MULTI-WEYL SEMIMETALS
In this section, we derive the temperature-dependent chemical potential and Thomas-Fermi wavevector in a general gapless electron-hole system and apply the results to m-WSMs. Suppose that a gapless electron-hole system has a DOS given by D(ε) = C_α |ε|^{α-1}, where C_α is a constant and the electron (ε > 0) and hole (ε < 0) branches are selected by the step function Θ(±ε). For a d-dimensional electron gas with an isotropic energy dispersion ε ∼ k^J, α = d/J, whereas for m-WSMs, D(ε) ∝ ε^{2/J} from Eq. (<ref>), thus α = 2/J + 1.
When the temperature is finite, the chemical potential μ deviates from the Fermi energy ε_F due to the broadening of the Fermi distribution function f^{(0)}(ε,μ) = [e^{β(ε-μ)} + 1]^{-1}, where β = 1/(k_B T). Since the charge carrier density n does not vary under the temperature change, we have
n = ∫_{-∞}^∞ dε D(ε) f^{(0)}(ε,μ)
= ∫_0^∞ dε D(ε) [f^{(0)}(ε,μ) + f^{(0)}(-ε,μ)] ≡ ∫_{-∞}^{ε_F} dε D(ε).
Then the carrier density measured from the charge neutrality point, Δn ≡ n|_μ - n|_{μ=0}, is given by
Δn = ∫_0^∞ dε D(ε) [f^{(0)}(ε,μ) - f^{(0)}(ε,-μ)] ≡ ∫_0^{ε_F} dε D(ε).
Here, we used f(-ε,μ) = 1 - f(ε,-μ).
Before proceeding further, let us consider the following integral:
∫_0^∞ dx x^{α-1}/(z^{-1}e^x + 1) = ∫_0^∞ dx x^{α-1} z e^{-x}/(1 + z e^{-x}) = -∫_0^∞ dx x^{α-1} Σ_{n=1}^∞ (-z)^n e^{-nx}
= [∫_0^∞ dt t^{α-1} e^{-t}] [-Σ_{n=1}^∞ (-z)^n/n^α] = Γ(α) F_α(z),
where we substituted t = nx, Γ(α) = ∫_0^∞ dt t^{α-1} e^{-t} is the gamma function, and F_α(z) = -Σ_{n=1}^∞ (-z)^n/n^α. Note that Γ(α) = (α-1)Γ(α-1) with Γ(1) = 1 and Γ(1/2) = √π, and F_α(z) = z (∂/∂z) F_{α+1}(z).
Using the above result, we obtain
Δn = C_α (k_B T)^α Γ(α) [F_α(z) - F_α(z^{-1})]
= (C_α/α) ε_F^α,
where z = e^{βμ} is called the fugacity. Thus, we finally have
F_α(z) - F_α(z^{-1}) = (βε_F)^α/Γ(α+1).
By solving the above equation with respect to z for a given T, we can obtain the chemical potential μ = k_B T ln z.
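A concrete way to carry this out numerically (a minimal sketch, assuming SciPy and mpmath are available) is to use F_α(z) = -Li_α(-z), where Li_α is the polylogarithm, together with a bracketed root search in ln z:

import numpy as np
from scipy.optimize import brentq
from mpmath import polylog, gamma

def F(alpha, z):                    # F_α(z) = -Li_α(-z)
    return float((-polylog(alpha, -z)).real)

def mu_over_eF(t, alpha):           # t = T/T_F
    rhs = (1.0 / t) ** alpha / float(gamma(alpha + 1))
    eq = lambda lnz: F(alpha, np.exp(lnz)) - F(alpha, np.exp(-lnz)) - rhs
    lnz = brentq(eq, -50.0, 50.0 / min(t, 1.0))   # the bracket widens at low T
    return t * lnz                  # μ/ε_F = (T/T_F) ln z

alpha = 2.0 / 2 + 1                 # m-WSM with J = 2
for t in (0.1, 1.0, 10.0):
    print(t, mu_over_eF(t, alpha))  # at t = 0.1 this gives ≈ 0.9836, consistent
                                    # with the low-temperature expansion below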
At low temperatures, βμ → ∞ and thus z → ∞. Note that from the Sommerfeld expansion <cit.>
lim_{z→∞} ∫_0^∞ dx H(x)/(z^{-1}e^x + 1) ≈ ∫_0^{βμ} dx H(x) + (π^2/6) ∂H(βμ)/∂x,
where H(x) is a function which diverges no more rapidly than a polynomial as x → ∞.
Then, for H(x) = x^{α-1} and using Eq. (<ref>), Eq. (<ref>) becomes
lim_{z→∞} F_α(z) ≈ ((βμ)^α/Γ(α+1)) [1 + (π^2/6) α(α-1)/(βμ)^2],
whereas F_α(z^{-1}) = z^{-1} - z^{-2}/2^α + ⋯ vanishes as z → ∞.
Thus, we can obtain the low-temperature correction as
μ/ε_F ≈ 1 - (π^2/6)(α-1)(T/T_F)^2,
where T_F = ε_F/k_B is the Fermi temperature.
At high temperatures, βμ → 0 due to the finite carrier density, thus z → 1. From z ≈ 1 + βμ + (1/2)(βμ)^2 for |βμ| ≪ 1,
lim_{z→1} F_α(z) ≈ η(α) + η(α-1) βμ + (1/2) η(α-2)(βμ)^2,
where η(α) = F_α(1) is the Dirichlet eta function <cit.>.
Thus, we have F_α(z) - F_α(z^{-1}) ≈ 2η(α-1) βμ, and obtain the following high-temperature asymptotic form:
μ/ε_F ≈ (1/(2η(α-1)Γ(α+1))) (T_F/T)^{α-1}.
For m-WSMs, α = 2/J + 1 and we obtain
μ/ε_F =
1 - (π^2/(3J))(T/T_F)^2 (T ≪ T_F),
(1/(2η(2/J)Γ(2+2/J)))(T/T_F)^{-2/J} (T ≫ T_F).
Next, consider the temperature-dependent Thomas-Fermi wavevector q_TF(T). Note that in 3D, q_TF^2(0) = (4π e^2/κ) D(ε_F), and at finite T, q_TF^2(T) = (4π e^2/κ) ∂n/∂μ. Thus we have
q_TF^2(T)/q_TF^2(0) = ∂ε_F/∂μ = (Γ(α)/(βε_F)^{α-1}) [F_{α-1}(z) + F_{α-1}(z^{-1})].
For a given T, the chemical potential (or equivalently the fugacity z) is calculated using the density invariance in Eq. (<ref>), and then q_TF(T) is obtained from the above relation.
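Continuing the sketch above (same assumptions, reusing F and mu_over_eF), the screening ratio follows in a few lines:

def qTF2_ratio(t, alpha):           # q_TF²(T)/q_TF²(0)
    lnz = mu_over_eF(t, alpha) / t
    z = np.exp(lnz)
    return float(gamma(alpha)) * (F(alpha - 1, z) + F(alpha - 1, 1.0 / z)) / (1.0 / t) ** (alpha - 1)

print(qTF2_ratio(0.1, 2.0))         # ≈ 0.9836 for J = 2 at t = 0.1, cf. the low-T form below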
At low temperatures, μ(T) is given by Eq. (<ref>), thus
q_TF^2(T)/q_TF^2(0) ≈ (Γ(α)/(βε_F)^{α-1}) [((βμ)^{α-1}/Γ(α))(1 + (π^2/6)(α-1)(α-2)/(βμ)^2) + O(z^{-1} - z^{-2}/2^{α-1})]
≈ (μ/ε_F)^{α-1} + (π^2/6)(α-1)(α-2)/(βε_F)^2 ≈ 1 - (π^2/6)(α-1)(T/T_F)^2.
At high temperatures, μ(T) is given by Eq. (<ref>), thus
q_TF^2(T)/q_TF^2(0) ≈ (Γ(α)/(βε_F)^{α-1}) [2η(α-1) + η(α-3)(βμ)^2]
≈ 2η(α-1) Γ(α) (T/T_F)^{α-1}.
For m-WSMs, we find
q_TF(T)/q_TF(0) =
1 - (π^2/(6J))(T/T_F)^2 (T ≪ T_F),
√(2η(2/J)Γ(1+2/J)) (T/T_F)^{1/J} (T ≫ T_F),
where q_TF(0) = q_TF is given by Eq. (<ref>).
Figure <ref> shows the temperature dependence of the chemical potential and Thomas-Fermi wavevector in m-WSMs.
§ TEMPERATURE DEPENDENCE OF DC CONDUCTIVITY IN MULTI-WEYL SEMIMETALS
From Eq. (<ref>) in the main text, we can easily generalize the conductivity tensor at zero temperature to that at finite temperature. For f^{(0)}(ε) = [z^{-1}e^{βε} + 1]^{-1}, S^{(0)}(ε) = -∂f^{(0)}(ε)/∂ε = β f^{(0)}(ε)(1 - f^{(0)}(ε)) = β z^{-1}e^{βε}/(z^{-1}e^{βε} + 1)^2.
Then the conductivity tensor at finite temperature is given by
σ_ij(T) = g e^2 ∫ (d^3k/(2π)^3) (-∂f^{(0)}(ε_k)/∂ε) v_k^{(i)} v_k^{(j)} τ_k^{(j)}
= g e^2 ∫_0^∞ dr ∫_0^π dθ ∫_0^{2π} dϕ (k_0^3 r^{2/J} sin^{2/J-1}θ/((2π)^3 c_z J)) (β z^{-1}e^{βε_0 r}/(z^{-1}e^{βε_0 r} + 1)^2) v_k^{(i)} v_k^{(j)} τ_k^{(j)}
= (σ_0/J) ∫_0^∞ dr r^{2/J} ∫_{-1}^1 dμ (1-μ^2)^{1/J-1} ∫_0^{2π} (dϕ/2π) (βε_0 z^{-1}e^{βε_0 r}/(z^{-1}e^{βε_0 r} + 1)^2) ṽ_k^{(i)} ṽ_k^{(j)} τ̃_k^{(j)}.
Thus, from Eq. (<ref>), we have
σ_xx(T) = (σ_0 J/2) ∫_0^∞ dr r^2 (βε_0 z^{-1}e^{βε_0 r}/(z^{-1}e^{βε_0 r} + 1)^2) ∫_{-1}^1 dμ (1-μ^2) τ̃^{(x)}(μ),
σ_zz(T) = (σ_0 c_z^2/J) ∫_0^∞ dr r^{2/J} (βε_0 z^{-1}e^{βε_0 r}/(z^{-1}e^{βε_0 r} + 1)^2) ∫_{-1}^1 dμ (1-μ^2)^{1/J-1} μ^2 τ̃^{(z)}(μ).
To derive the asymptotic behaviors of σ_ii(T)/σ_ii(0) at low and high temperatures, let us rewrite Eq. (<ref>) in the main text in the following energy-integral form:
σ_ii(T) = g e^2 I ∫_0^∞ dε (-∂f^{(0)}(ε)/∂ε) D(ε) [v^{(i)}(ε)]^2 τ^{(i)}(ε,T),
where I is a factor from the angular integration. Note that the factor I will later be canceled by σ_ii(0). Assuming that τ^{(i)}(ε,T) can be decomposed as
τ^{(i)}(ε,T) = τ^{(i)}(ε) g^{(i)}(T/T_F),
where g^{(i)}(T/T_F) is the energy-independent correction term from the screening effect with g^{(i)}(0) ≡ 1, we can separate the contributions from the energy averaging over the Fermi distribution and the temperature-dependent screening. Suppose D(ε) ∝ ε^{α-1}, v^{(i)}(ε) ∝ ε^ν, and τ^{(i)}(ε) ∝ ε^γ. Then we can express σ_ii(T) as
σ_ii(T) = C ∫_0^∞ dε (-∂f^{(0)}(ε)/∂ε) ε^{α-1+2ν+γ} g^{(i)}(T/T_F)
= C (k_B T)^δ Γ(δ+1) F_δ(z) g^{(i)}(T/T_F),
where C is a constant and δ ≡ α-1+2ν+γ. Note that Eq. (<ref>) reduces to σ_ii(0) = C ε_F^δ at zero temperature.
Therefore, after eliminating C, we have
σ_ii(T)/σ_ii(0) = (Γ(δ+1) F_δ(z)/(βε_F)^δ) g^{(i)}(T/T_F).
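This ratio can be evaluated with the fugacity solver sketched in the previous section (again an illustrative sketch; the exponent δ and the screening factor g^{(i)} are supplied by hand):

def sigma_ratio(t, alpha, delta, g_corr=lambda t: 1.0):
    lnz = mu_over_eF(t, alpha) / t
    return float(gamma(delta + 1)) * F(delta, np.exp(lnz)) / (1.0 / t) ** delta * g_corr(t)

# e.g. σ_xx for short-range impurities in an m-WSM with J = 2: δ = 2 - 2/J = 1
print(sigma_ratio(10.0, 2.0, 1.0))  # → Γ(2) η(1) (T/T_F) = ln 2 · (T/T_F) at high T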
For short-range impurities, g^(i)(T T_ F)=1.
For charged impurities at low temperatures, from the form of the low-temperature correction for the Thomas-Fermi wavevector in Eq. (<ref>), we expect
g^(i)(T T_ F)≈ 1-A^(i)(T T_ F)^2.
Note that A^(i) depends on the screening strength, and in the strong screening limit, from Eq. (<ref>) we have A^(i)=2π^2 3J.
At high temperatures, however, τ^(i)(ε,T) cannot be simply decomposed as Eq. (<ref>). The energy averaging typically dominates over the screening contribution <cit.>, and the screening correction g(T T_ F) only gives a constant factor without changing the temperature power. Assuming g^(i)(T T_ F)≈ 1 at high temperatures, then in the low and high temperature limits, we have
σ_ii(T)σ_ii(0) =
1+[π^2/6(δ-α)δ-A^(i)] (T/T_ F)^2 (T≪ T_ F),
Γ(δ+1) η(δ)(T/T_ F)^δ (T≫ T_ F).
Now, consider m-WSMs with α = 2/J + 1.
For short-range impurities, g^{(i)}(T/T_F) = 1, and from the energy dependence of the relaxation time in Eq. (<ref>), γ = -2/J. Thus, we find
σ_xx(T)/σ_xx(0) =
1 + (π^2/3)((J-1)/J)((J-4)/J)(T/T_F)^2 (T ≪ T_F),
Γ(3-2/J) η(2-2/J) (T/T_F)^{2-2/J} (T ≫ T_F),
σ_zz(T)/σ_zz(0) =
1 - e^{-T_F/T} (T ≪ T_F),
1/2 + (1/(8η(2/J)Γ(2/J+2))) (T/T_F)^{-(2+J)/J} (T ≫ T_F).
For charged impurities in the strong screening limit, from Eq. (<ref>), τ^{(i)}(ε) ∼ ε^{-2/J}, thus γ = -2/J at low temperatures, whereas at high temperatures τ^{(i)}(ε) ∼ ε^{-2/J+4/J}, giving γ = 2/J, because thermally induced charge carriers participate in transport. Combining this with the temperature-dependent screening correction A^{(i)} = 2π^2/(3J) at low temperatures, we find
σ_xx(T)/σ_xx(0) =
1 + (π^2/3)((J^2-7J+4)/J^2)(T/T_F)^2 (T ≪ T_F),
Γ(3+2/J) η(2+2/J) (T/T_F)^{2+2/J} (T ≫ T_F),
σ_zz(T)/σ_zz(0) =
1 - (2π^2/(3J))(T/T_F)^2 (T ≪ T_F),
Γ(1+4/J) η(4/J) (T/T_F)^{4/J} (T ≫ T_F).
For charged impurities at arbitrary screening, from the Fermi energy dependence of the relaxation time discussed in Sec. <ref>, γ = 4ζ - 2/J with 1/J ≤ ζ ≤ 1 at high temperatures. Thus, we can express the low- and high-temperature asymptotic forms as
σ_xx(T)/σ_xx(0) =
1 + C_xx(T/T_F)^2 (T ≪ T_F),
Γ(3+4ζ-2/J) η(2+4ζ-2/J) (T/T_F)^{2+4ζ-2/J} (T ≫ T_F),
σ_zz(T)/σ_zz(0) =
1 + C_zz(T/T_F)^2 (T ≪ T_F),
Γ(1+4ζ) η(4ζ) (T/T_F)^{4ζ} (T ≫ T_F).
As explained in Sec. <ref>, the value of ζ in σ_xx and that in σ_zz need not be the same.
Note that ζ = 1/J in Eq. (<ref>) gives the same high-temperature exponent as the strong screening limit in Eq. (<ref>), and the temperature-dependent conductivity has the high-temperature asymptotic form given by Eq. (<ref>) with ζ varying within 1/J ≤ ζ ≤ 1 and approaching 1/J in the strong screening limit.
Figure <ref> shows the evolution of the low-temperature coefficients C_xx and C_zz in Eq. (<ref>) for charged impurities as a function of the screening strength gα.
Above a critical gα, C_xx and C_zz become negative; the conductivity then decreases with temperature, showing a metallic behavior. As gα increases further, the low-temperature coefficients eventually approach C_xx = (π^2/3)(J^2-7J+4)/J^2 and C_zz = -2π^2/(3J), as obtained in Eq. (<ref>). The non-monotonic behavior of the low-temperature coefficient C_zz as a function of gα for J > 1 originates from the angle-dependent power law in the relaxation time, similarly as shown in Fig. <ref> in the main text.
999
Ahn2016_SM
S. Ahn, E. H. Hwang, and H. Min,
Collective modes in multi-Weyl semimetals,
Scientific Reports 6, 34023 (2016).
Arfken_SM
G. B. Arfken, H. J. Weber, and F. E. Harris,
Mathematical Methods for Physicists, 7th ed., (Academic , New York, 2012).
Siggia1970_SM
E. D. Siggia and P. C. Kwok,
Properties of Electrons in Semiconductor Inversion Layers with Many Occupied Electric Subbands. I. Screening and Impurity Scattering,
Phys. Rev. B 2, 1024 (1970).
Ashcroft1976_SM
N. W. Ashcroft and N. D. Mermin, Solid State Physics, (Brooks-Cole, Pacific Grove, CA, 1976).
DasSarma2015_SM
S. Das Sarma and E. H. Hwang,
Charge transport in gapless electron-hole systems with arbitrary band dispersion,
Phys. Rev. B 91, 195104 (2015).
|
http://arxiv.org/abs/1701.07737v2 | 20170126152037 | Dilaton Field Released under Collision of Dilatonic Black Holes with Gauss-Bonnet Term | [
"Bogeun Gwak",
"Daeho Ro"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
Dilaton Field Released under Collision of Dilatonic Black Holes with Gauss-Bonnet Term
Bogeun Gwak^a and Daeho Ro^b
======================================================================================
^a[[email protected]] and ^b[[email protected] ]
^aDepartment of Physics and Astronomy, Sejong University, Seoul 05006, Republic of Korea
^bAsia Pacific Center for Theoretical Physics, POSTECH, Pohang, Gyeongbuk 37673, Republic of Korea
We investigate the upper limit of the gravitational radiation released upon the collision of two dilatonic black holes by analyzing the Gauss-Bonnet term. Dilatonic black holes have a dilaton hair coupled with this term. Using the laws of thermodynamics, the upper limit of the radiation is obtained, which reflected the effects of the dilaton hair. The amount of radiation released is greater than that emitted by a Schwarzschild black hole due to the contribution from the dilaton hair. In the collision, most of the dilaton hair can be released through radiation, where the energy radiated by the dilaton hair is maximized when the horizon of one black hole is minimized for a fixed second black hole.
§ INTRODUCTION
Gravitational waves have been detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO)<cit.>. The sources of the waves have been the mergers of binary black holes in which the masses of the black holes have been more than 10 times the mass of the sun. The binary system that caused GW150914 consisted of black holes with masses of approximately 36M_⊙ and 29M_⊙ in the source frame<cit.>. The recently detected gravitational wave, GW151226, was generated by a binary black hole merger involving two black holes with masses of 14.2M_⊙ and 7.5M_⊙<cit.>. The detections of these waves have proven that there are many black holes in our universe and that collisions between them may be frequent events.
For an asymptotic observer, a black hole in the Einstein-Maxwell theory can be distinguished by its conserved quantities: mass, angular momentum, and electric charge<cit.>. This concept is known as the no-hair theorem, in which charges cannot be observed outside of the event horizon of a black hole. In the theory in which gravity is coupled with Maxwell and antisymmetric tensor fields, the dilaton hair concept was first introduced in association with string theory<cit.>. Since then, many kinds of hairs have been described in different gravity theories, such as those involving Maxwell and Yang-Mills fields<cit.>. One of them is dilaton gravity theory, which includes the Gauss-Bonnet term<cit.>, a curvature-squared term given in the effective field theory of a heterotic string theory<cit.> and topological in four dimensions, so that the equations of motion are the same as in Einstein gravity when the dilaton field is turned off<cit.>. In dilaton gravity theory, the black hole solution has a dilaton field outside the black hole horizon<cit.>, and thus, dilaton hairs are an exception to the no-hair theorem. Because dilaton hairs originate from the masses of black holes, dilaton hairs are secondary hairs that grow from the primary hairs of black holes<cit.>. The presence of a dilaton hair changes the physical properties of the corresponding black hole, such as its stability and thermodynamics, which have been studied in various black holes coupled with dilaton fields and Gauss-Bonnet terms<cit.>.
As a counterexample to the no-hair theorem, a dilatonic black hole should be stable in our universe. The stability of a dilatonic black hole can be tested and identified based on the specific range in which the mass lies. The solution for a dilatonic black hole is convergent with that for a Schwarzschild black hole in the large mass limit. Thus, its stability is also similar to that of a Schwarzschild black hole in the same range. When the mass is low, its behavior is different. A dilatonic black hole becomes unstable below a certain mass known as the critical mass. Thus, a dilatonic black hole must possess a certain minimum mass to be stable. In addition, at its critical mass, the black hole possesses minimum entropy and thus can be related to the cosmological remnant<cit.>. On the other hand, the solution includes a naked singularity that is not allowed under the cosmic censorship conjecture<cit.>. The cosmic censorship conjecture for black holes prevents naked singularities, so black holes should have horizons. Kerr black holes were first investigated with reference to the abovementioned conjecture by the inclusion of a particle<cit.>, and many black holes have subsequently been studied in a similar manner<cit.>.
Classically, a black hole cannot emit particles, so its mass is nondecreasing. However, a black hole can radiate particles via the quantum mechanical effect, and its temperature can be defined in terms of the emitted radiation. The Hawking temperature is proportional to the surface gravity of a black hole<cit.>. The horizon areas of black holes can only increase via physical processes, which is similar to the behavior of entropy in a thermal system. Using this similarity, the entropy of a black hole, called Bekenstein-Hawking entropy, is defined as being proportional to its horizon area<cit.>. Then, a black hole can be defined as a thermal system in terms of its Hawking temperature and Bekenstein-Hawking entropy. The nonperturbed stability can be tested based on the thermodynamics of the black hole and can be described by a heat capacity. However, a dilatonic black hole is thermally unstable, as its heat capacity is negative, but at the same time, its Hawking temperature has a finite value<cit.>, which is similar to that of a Schwarzschild black hole. One of other tests for nonperturbed stability is the fragmentation instability of black holes. The fragmentation instability is based on the entropy preference, so a black hole near an extremal bound decays into fragmented black holes that are thermally stable and have greater entropy than a single black hole system. For example, a Myers-Perry (MP) black hole is defined in higher dimensions, and its angular momentum has no upper bound over five dimensions<cit.>. Then, an MP black hole becomes unstable when the large angular momentum is sufficiently large due to its centrifugal force. Thermally, the entropy of one extremal MP black hole is less than that of fragmented MP black holes, so an MP black hole breaks into multiple MP black holes<cit.>. The fragmentation instability gives similar results for perturbation<cit.>. This kind of instability can also be obtained in rotating or charged anti-de Sitter (AdS) black holes<cit.>. A dilatonic black hole with a Gauss-Bonnet term also has a complicated phase diagram related to fragmentation instability<cit.>.
The gravitational radiation released when two black holes collide can be described thermodynamically. The sum of the entropies of the separate black holes in the initial state should be less than the entropy of the final black hole after the collision<cit.>. Using the second law of thermodynamics, the minimum mass of the final black hole can be obtained based on the initial conditions. Thus, the difference between the initial and final masses is the mass released in the form of gravitational radiation. For Kerr black holes, the gravitational radiation depends on the alignments of their rotation axes<cit.>. The dependency also exists for MP black hole collisions<cit.>. Many types of interaction energy can be released in the form of radiation upon collision. One of these types of interaction energy is that of the spin interaction between the black holes. If one of the initial black holes is infinitesimally small, the potential energy of the spin interaction is identical to the radiation energy obtained using thermodynamics in Kerr<cit.> and Kerr-AdS black holes<cit.>. More precise analysis can be conducted using numerical methods in relativity<cit.>. In this case, the waveform of the gravitational radiation can be investigated for different initial conditions<cit.>.
In this study, we investigated the upper limit of the gravitational radiation released due to the collision of two dilatonic black holes through the Gauss-Bonnet term. During the collision process, the energy of the black hole system will be released as radiation. Most of the radiation energy originates from the mass of the system, and the rest comes from various interactions between the black holes. There are many interactions, such as angular momentum and Maxwell charge interactions, that can contribute to the emitted radiation. The dilaton field is also one means through which black holes can interact. To an asymptotic observer, the dilaton charge, which is a secondary hair, is included in the mass of the black hole, so it acts as a mass distribution similar to a dust distribution around the black hole. Although the dilaton field is not observed in our universe, information about the behaviors of dust-like mass distributions in black hole collisions can be obtained. However, in Einstein gravity, a black hole cannot be coupled with a scalar field due to the no-hair theorem. Thus, the extent to which a scalar field can contribute to radiation is not well studied. For this reason, using a black hole solution coupled with a dilaton field through the Gauss-Bonnet term, the contribution of the dilaton field to the radiation can be determined. There are differences between dilatonic and Schwarzschild black holes, based on which the radiation of a dilatonic black hole can be distinguished from that of a Schwarzschild black hole. Therefore, we will show in this report that the dilaton field sufficiently affects the gravitational radiation released due to the collision of two black holes coupled with a dilaton hair through the Gauss-Bonnet term.
This paper is organized as follows. In section <ref>, we review the concept of dilatonic black holes, which can be numerically obtained from the equations of motion in Einstein gravity coupled with a dilaton field through the Gauss-Bonnet term. In addition, the behaviors of dilatonic black hole for given parameters are introduced. In section <ref>, we demonstrate how the upper limit of the gravitational radiation that is thermally allowed can be obtained and employ it to illustrate the differences between dilatonic black holes and black holes in Einstein gravity. In particular, we consider the contribution of the dilaton hair, since the limit is clearly different and distinguishable from that of a Schwarzschild black hole. We also discuss our results along with those of the LIGO experiment. In section <ref>, we briefly summarize our results.
§ DILATONIC BLACK HOLES WITH GAUSS-BONNET TERM
A dilatonic black hole is a four-dimensional solution to the Einstein dilaton theory with the Gauss-Bonnet term<cit.>. The dilaton field is coupled with the Gauss-Bonnet term in the Lagrangian
L = R/2 - (1/2) ∇_μϕ∇^μϕ + f(ϕ) R^2_GB,
where the spacetime curvature and dilaton field are denoted as R and ϕ, respectively. The Einstein constant κ = 8π G is set equal to unity for simplicity. The Gauss-Bonnet term is R^2_GB = R^2 - 4R_μνR^μν + R_μνρσR^μνρσ and is coupled with a function of the dilaton field, f(ϕ) = α e^γϕ. The dilaton field is a secondary hair whose source is the mass, the conserved charge of the black hole. The dilaton hair appears as an element coupled with the Gauss-Bonnet term. The dilaton field equation and Einstein equations can be obtained from Eq. (<ref>) and are as follows:
0 = (1/√(-g)) ∂_μ(√(-g) ∂^μϕ) + f'(ϕ) R^2_GB,
0 = R_μν - (1/2) g_μν R - ∂_μϕ ∂_νϕ + (1/2) g_μν ∂_ρϕ ∂^ρϕ + T^GB_μν,
in which the GB term contributes to the energy-momentum tensor T^GB_μν<cit.>. Then,
T^GB_μν = - 4 (∇_μ∇_ν f(ϕ)) R + 4 g_μν (∇^2 f(ϕ)) R + 8 (∇_ρ∇_μ f(ϕ)) R_ν^ρ + 8 (∇_ρ∇_ν f(ϕ)) R_μ^ρ
- 8 (∇^2 f(ϕ)) R_μν - 8 g_μν (∇_ρ∇_σ f(ϕ)) R^ρσ + 8 (∇^ρ∇^σ f(ϕ)) R_μρνσ,
where only the nonminimally coupled terms in four-dimensional spacetime are presented in <cit.>.
A dilatonic black hole is a spherically symmetric and asymptotically flat solution for which the ansatz is given as<cit.>
ds^2 = - e^X(r) dt^2 + e^Y(r) dr^2 + r^2 (dθ^2 + sin^2 θ dφ^2) ,
where the metric exponents X and Y only depend on the radial coordinate r. Then, the dilaton field equation is
ϕ'' + ϕ'((X' - Y')/2 + 2/r) - (4αγ e^{γϕ}/r^2)(X' Y' e^{-Y} + (1 - e^{-Y})(X'' + (X'/2)(X' - Y'))) = 0,
and the (tt), (rr), and (θθ) components of Einstein's equations are
(r ϕ'^2)/2 + (1 - e^Y)/r - Y'(1 + (4αγ e^{γϕ} ϕ'/r)(1 - 3e^{-Y})) + (8αγ e^{γϕ}/r)(ϕ'' + γϕ'^2)(1 - e^{-Y}) = 0,
(r ϕ'^2)/2 - (1 - e^Y)/r - X'(1 + (4αγ e^{γϕ} ϕ'/r)(1 - 3e^{-Y})) = 0,
X'' + (X'/2 + 1/r)(X' - Y') + ϕ'^2 - (8αγ e^{γϕ-Y}/r)(ϕ' X'' + (ϕ'' + γϕ'^2) X' + (ϕ' X'/2)(X' - 3Y')) = 0.
By taking the derivative of Eq. (<ref>) with respect to r, Y' can be eliminated from the equations of motion. The remaining equations of motion can be written as ordinary coupled differential equations:
ϕ'' = N_1/D and X'' = N_2/D,
where N_1, N_2, and D are functions of X', Y, ϕ, and ϕ' only. The detailed expressions for these functions are given in Appendix <ref>. The Gauss-Bonnet term is a topological term in four-dimensional spacetime, so it cannot affect the equations of motion without the dilaton coupling f(ϕ). This can be shown easily by setting ϕ = 0, where the dilaton field is turned off, so that f(ϕ) = α. However, the Gauss-Bonnet term still exists in Eq. (<ref>). Then, the dilaton field equation vanishes, and the equations of motion from Eqs. (<ref>) to (<ref>) are reduced to
(1 - e^Y)/r - Y' = 0, -(1 - e^Y)/r - X' = 0, X'' + (X'/2 + 1/r)(X' - Y') = 0,
which are equations of motion for Einstein's gravity, G_μν=0. Hence, without the dilaton field, the effect of the Gauss-Bonnet term vanishes from the equations of motion.
The solution for a dilatonic black hole can be obtained by numerically solving Eq. (<ref>). The numerical solution will be found from the outer horizon to infinity, so an initial condition at the outer horizon where the coordinate singularity is located is required. To determine the initial condition for the differential equations, it is necessary to investigate the behavior of a dilatonic black hole in the near-horizon region r_h. For the corresponding parameters at the horizon, the subscript h is used. At the outer horizon, the metric should satisfy the relation g_rr(r_h) = ∞ or g^rr = 0. The metric components can be expanded in the near-horizon limit as
e^{X(r)} = x_1(r-r_h) + x_2 (r-r_h)^2 + ⋯,
e^{-Y(r)} = y_1(r-r_h) + y_2 (r-r_h)^2 + ⋯,
ϕ(r) = ϕ_h + ϕ'_h(r-r_h) + ϕ''_h (r-r_h)^2 + ⋯.
To check the divergence of e^Y(r) at the outer horizon, e^Y can be obtained using Eq. (<ref>):
e^{Y(r)} = (1/4)[(2 - r^2 ϕ'^2 + (2r + 8αγ e^{γϕ} ϕ') X') + √((2 - r^2 ϕ'^2 + (2r + 8αγ e^{γϕ} ϕ') X')^2 - 192 αγ e^{γϕ} ϕ' X')],
where the positive root has been chosen to form the horizon. From Eq. (<ref>), e^{Y(r)} has the same divergence as X'(r). Furthermore, the plus sign was chosen in Eq. (<ref>) to keep e^{Y(r)} positive definite in the limit of X' going to infinity at the outer horizon. The initial value of e^{Y(r_h)} can be determined from the values of the other fields, X'(r_h), ϕ_h, and ϕ'_h. To obtain a general solution, it is necessary to assume that ϕ_h and ϕ'_h are finite and that X' tends to infinity when r approaches the horizon as a result of Eq. (<ref>). By considering a series expansion up to 1/X' near the horizon, Eq. (<ref>) becomes
e^{Y(r)} = (r + 4αγ e^{γϕ} ϕ') X' + (2r - r^3 ϕ'^2 - 16αγ e^{γϕ} ϕ' - 4r^2 αγ e^{γϕ} ϕ'^3)/(2(r + 4αγ e^{γϕ} ϕ')) + O(1/X'),
in which the leading term is X'. To obtain detailed forms of X'(r_h), ϕ_h, and ϕ'_h, Eq. (<ref>) can be expanded at the near-horizon limit after inserting Eq. (<ref>). Then, the leading terms of Eq. (<ref>) are
ϕ'' = ((r + 4αγ e^{γϕ} ϕ')(r^3 ϕ' + 12αγ e^{γϕ} + 4r^2 αγ e^{γϕ} ϕ'^2)/(r^3(r + 4αγ e^{γϕ} ϕ') - 96α^2 γ^2 e^{2γϕ})) X' + O(1),
X'' = ((r^4 + 8r^3 αγ e^{γϕ} ϕ' - 48α^2 γ^2 e^{2γϕ} + 16r^2 α^2 γ^2 e^{2γϕ} ϕ'^2)/(r^3(r + 4αγ e^{γϕ} ϕ') - 96α^2 γ^2 e^{2γϕ})) X'^2 + O(X').
For ϕ”_h to be finite, the factor (r^3 ϕ' + 12 αγ e^γϕ + 4r^2 αγ e^γϕϕ'^2) in Eq. (<ref>) must be assumed to be zero, which simplifies Eqs. (<ref>) and (<ref>) to
ϕ” = 𝒪(1) , X”=X'^2+𝒪(X') ,
where the coefficient in front of X'^2 goes to unity at the near-horizon limit. Now, the differential equations can be solved to obtain the function X' = x_1/(r-r_h) + O(1) at the near-horizon limit, fixing the coefficient x_1 to unity as the initial condition. Furthermore, ϕ_h and ϕ'_h are related at the horizon r_h by the condition for ϕ”_h being finite, which is
ϕ'_h = -(r_h e^{-γϕ_h}/(8αγ))(1 ± √(1 - 192α^2 γ^2 e^{2γϕ_h}/r_h^4)),
where ϕ_h' can be determined by setting r_h and ϕ_h. Then, in the choice of r_h and ϕ_h, ϕ' should be real. Hence, from Eq. (<ref>), possible values of ϕ_h should satisfy
ϕ_h ≤ (1/(2γ)) log(r_h^4/(192α^2 γ^2)),
in which all values of ϕ can solve Eq. (<ref>). The solution for the black hole should satisfy specific X(r), Y(r), and ϕ (r) boundary conditions. In the asymptotic region, r≫ 1, the flatness of the spacetime is ensured by the form of the metric<cit.>
e^{X(r)} = 1 - 2M/r + ⋯,
e^{Y(r)} = 1 + 2M/r + ⋯,
ϕ(r) = Q/r + ⋯,
where M and Q denote the ADM mass and dilaton charge of the dilatonic black hole. Note that the asymptotic form of the dilaton field is proportional to 1/r. This form is different from the logarithmic forms of dilaton fields in other models such as<cit.>, where there is no Gauss-Bonnet term. The form of the dilaton field will depend on the existence of the Gauss-Bonnet term and choice of the metric ansatz. α and γ are fixed to obtain the dilatonic black hole solution. In this case, each value of r_h gives a range of ϕ_h satisfying Eq. (<ref>). A solution can then be obtained for any initial value set (r_h,ϕ_h). However, the dilaton field of the real solution is zero in the asymptotic region, as shown in Eq. (<ref>). The real dilaton solution is the only one for given a set (α, γ, r_h). With (ϕ_h, r_h), ϕ'_h can be obtained from Eq. (<ref>), where the positive sign is selected to retrieve the asymptotic behavior of the dilaton field. The value of X' at the horizon is given by X'_h = 1/ϵ, where ϵ=10^-8 is introduced to avoid the initial singularity from Eq. (<ref>). Since the initial value of e^Y(r) can be obtained from Eq. (<ref>), the initial conditions for the equations of motion are only
ϕ'_h = -(e^{-γϕ_h} r_h/(8αγ))(1 + √(1 - 192α^2 γ^2 e^{2γϕ_h}/r_h^4)), X'_h = 1/ϵ.
To find the dilatonic black hole solution, we used one of the Runge-Kutta methods with a specific parameter set, the Dormand-Prince method. The equations of motion are solved from r_h + ϵ to r_max = 10^6, which is considered to be infinity. After the equations are solved, we obtain numerical functions for X'(r), Y(r), and ϕ(r). The numerical form of X(r) is then obtained by numerical integration of X'(r) with respect to r from r_h + ϵ to r_max. The ADM masses M are obtained by fitting Eq. (<ref>) to the solution. The dilatonic black hole solutions obtained for given values of γ are shown in Fig. <ref> and coincide with the dilatonic black hole solutions reported previously<cit.>.
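Schematically, the shooting procedure described above can be organized as in the following sketch (SciPy's DOP853 integrator is one Dormand-Prince implementation; appendix_exprs is a hypothetical helper returning the closed forms of N_1, N_2, and D from Appendix <ref>, and the parameter values are illustrative):

import numpy as np
from scipy.integrate import solve_ivp

alpha_c, gamma_c, r_h, eps = 1.0, 1.3, 2.0, 1e-8

def rhs(r, y):                      # y = (X', ϕ, ϕ')
    X1, phi, dphi = y
    N1, N2, D = appendix_exprs(r, X1, phi, dphi, alpha_c, gamma_c)  # hypothetical
    return [N2 / D, dphi, N1 / D]

def phi_at_infinity(phi_h):
    dphi_h = -(r_h * np.exp(-gamma_c * phi_h) / (8 * alpha_c * gamma_c)) * (
        1 + np.sqrt(1 - 192 * alpha_c**2 * gamma_c**2 * np.exp(2 * gamma_c * phi_h) / r_h**4))
    sol = solve_ivp(rhs, [r_h + eps, 1e6], [1.0 / eps, phi_h, dphi_h],
                    method="DOP853", rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]             # bisect on ϕ_h until this value vanishes

Once ϕ(r_max) vanishes, X(r) follows by quadrature of X'(r), and the ADM mass M by fitting Eq. (<ref>).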
The mass of the dilatonic black hole M increases as r_h increases. This is expected, because the mass inside the horizon is proportional to the horizon radius in Eq. (<ref>). However, for a small horizon, the mass of the black hole is bounded from below by the minimum mass M_min and is two-valued for a given horizon radius, as shown in Fig. <ref> (a) and (b); this is an important cause of the interesting behavior of the upper limit of the gravitational radiation. The effect of the dilaton hair becomes important for a black hole with a small mass, which has weaker gravity than a more massive one; this effect originates from a black hole of small mass carrying a relatively long hair. Hence, the behavior depends on the coupling γ and disappears for values less than γ = 1.29859, as shown in Fig. <ref> (c). The overall behavior of the metric component is shown in Fig. <ref> (a), where the solution can be recognized as that of a Schwarzschild black hole coupled with a dilaton hair. As the mass of the dilatonic black hole increases, the black hole more closely approximates a Schwarzschild black hole, so the effect of the dilaton hair becomes smaller for more massive dilatonic black holes. In the solutions shown in Fig. <ref>, a black hole with large r_h has a small dilaton field strength ϕ_h at the horizon. The dilaton field then vanishes in the asymptotic region, which is ensured by selecting the appropriate solution to the equations of motion.
The mass of the dilatonic black hole consists of the mass of a Schwarzschild black hole M_BH and the dilaton hair contribution M_d. This characteristic can be seen from the metric component g^{rr} = e^{-Y(r)}. When we write this g^{rr} component as e^{-Y(r)} = 1 - 2M(r)/r, where the mass function M(r) is the mass inside a sphere of radius r, the mass function should satisfy the boundary conditions
M(r_h)=r_h/2 , lim_r→∞ M(r)=M ,
which implies that the ADM mass consists of two contributions, one each from inside and outside the outer horizon. The mass inside the dilatonic black hole is the same as that of a Schwarzschild black hole. Hence, we call it the Schwarzschild mass M_BH = r_h/2. Since the dilaton field is the only component outside the horizon, the difference between M and M_BH is the mass contribution of the dilaton hair stretched outside of the black hole. Therefore, this difference can be set equal to M_d. Then, the mass of the dilatonic black hole can be written as<cit.>,
M = M_BH(r_h) + M_d.
Thus, because the mass of the dilatonic black hole is an arithmetic sum of two contributions, it can be treated separately.
In this work, the analysis focuses on the thermal upper bound of the radiation in which the entropy of the black hole plays an important role. The entropy of a dilatonic black hole is given by<cit.>
S_BH=π r_h^2-16απ e^γϕ_h ,
where the first term is the contribution of the horizon area of the black hole, similar to the Bekenstein-Hawking entropy, and the second term is the correction from the Gauss-Bonnet term. The entropy has two limits related to parameters of the black hole solution. In the limit in which α tends to zero, the Gauss-Bonnet term is removed from the action Eq. (<ref>). According to the no-hair theorem, the metric becomes that of a Schwarzschild black hole in Einstein gravity, and only the area term remains in Eq. (<ref>). The other limit is that in which γ tends to zero. In this limit, ϕ_h is negligible in Eq. (<ref>), and the action becomes that of Einstein gravity coupled with the Gauss-Bonnet term, in which there is no dilaton hair. Although the Gauss-Bonnet term still exists, the metric is the same as that of a Schwarzschild black hole, but the entropy is given by
S_BH=π r_h^2-16απ ,
which has a constant contribution from the Gauss-Bonnet term. This feature caused difficulties in this analysis, which will be discussed in the following section. Therefore, we expect that the effect of the dilaton hair nontrivially appears in the gravitational radiation between dilatonic black holes. In addition, the radiation includes the energy from the dilaton hair released in the process.
§ UPPER LIMIT OF RADIATION UNDER COLLISION OF DILATONIC BLACK HOLES
To find the upper limit of the gravitational radiation released in a dilatonic black hole collision, we define the initial and final states of the process. Then, the limit is obtained using thermodynamic preference between states, and the effect of the dilaton hair in the collisions is determined.
§.§ Analytical Approach to the Collision
We consider the initial state to be one with two dilatonic black holes separated far from each other in flat spacetime. Hence, the interactions between them are considered to be negligible. In the initial state, one black hole is defined as having mass M_1, horizon r_1, and dilaton field strength ϕ_1, while the other has M_2, r_2, and ϕ_2. The total mass of the dilatonic black holes M_tot includes the contribution of the dilaton field determined by ϕ_1 and ϕ_2, making the total mass of the initial state
M_tot=M_1(r_1)+M_2(r_2) .
In the final state, we consider the two black holes to merge into a dilatonic black hole with gravitational radiation released in the process. The energy M_r released as radiation is defined by denoting the final black hole parameters as M_f, r_f, and ϕ_f. In this situation, the conservation of the total mass of the final state can be expressed as
M_tot=M_f(r_1, r_2)+M_r(r_1,r_2) ,
where the minimum mass of the final black hole can be obtained from the inequality between the horizon areas of the initial and final black holes, A_initial and A_final, respectively. The horizon area of the final black hole should be larger than the sum of the areas of the initial black holes according to the second law of thermodynamics<cit.>. Then,
A_initial=4π r_1^2 + 4π r_2^2≤ 4π r_f^2=A_final ,
where the entropies of radiation and turbulence have been assumed to be negligible, since the entropy of the radiation is less than that of the black holes and is very small in actual observations<cit.>. In actual observations, the radiation is about 5%<cit.>, so the contributions of the entropies of radiation and turbulence can be assumed to be sufficiently small compared with those of the black holes in the initial state. The minimum value of the horizon r_f,min can be obtained from the equality in Eq. (<ref>). At r_f,min, the mass of the final black hole is also a minimum, M_f,min. With this minimum mass, the radiation energy attains its maximum value M_rad, which is the upper limit of the radiation from Eq. (<ref>). Therefore, the limit can be expressed as
M_rad=M_tot-M_f,min=M_1+M_2-M_f,min ,
where the masses depend on r_1 and r_2, as shown in Fig. <ref>, so their behaviors are nonlinear.
Since a solution for two interacting dilatonic black holes remains to be obtained in the action in Eq. (<ref>), it is necessary to assume the form of the entropy correction in two dilatonic black holes to describe the increase in entropy between the initial and final states. We focus on the correction term in Eq. (<ref>), which is from the Gauss-Bonnet term of the action in Eq. (<ref>). Hence, we assume the entropy correction to be the same in the initial and final states, causing the corrections to cancel each other. Therefore, the inequality in Eq. (<ref>) can be reduced to the area theorem of black holes, because the correction term results from the Gauss-Bonnet term in the action in Eq. (<ref>). This assumption also gives consistent results for arbitrary values of γ, both zero and nonzero.
At γ=0, the action of Eq. (<ref>) becomes the Einstein gravity coupled with the Gauss-Bonnet term. As a topological term, the Gauss-Bonnet term does not change the equations of motion, because it is removed as a total derivative term in the equations of motion, so it cannot affect the dynamics. However, the entropy in Eq. (<ref>) has a constant correction term, so it gives different results from Einstein gravity for radiation released during collisions. This difference originates from the definition of the initial state. Irrespective of how far the black holes are from each other, the initial black holes are to be just one solution of the action (<ref>), so the correction term can also be considered once. Then, the entropy increase that occurs due to the collision can be expressed as
S_i=π r_1^2+π r_2^2-16απ≤π r_f^2-16απ=S_f ,
which gives the same result as Einstein gravity. If we were to count the correction twice, it would correspond to the result obtained by summing up two different actions at γ = 0,
L = L_1 + L_2 = R_1/2 + α R^2_{1GB} + R_2/2 + α R^2_{2GB},
where the indices 1 and 2 indicate the first and second black holes, respectively. Because the Lagrangians L_1 and L_2 each have a Schwarzschild black hole solution and a correction term, the sum of their entropies contains the value -16απ twice. However, this result is the sum of black hole entropies existing in two different spacetimes 1 and 2; hence, this case can be ruled out. For two black holes in a single spacetime, the action is that of Eq. (<ref>),
L = R/2 + α R^2_GB,
where two black holes far from each other should be a solution to Eq. (<ref>). Thus, the contribution of the Gauss-Bonnet term may be added once to the total entropy. In addition, this is consistent with Einstein gravity, both with and without the Gauss-Bonnet term.
If this assumption is generalized to γ 0, the correction term may be essentially the same in the initial and final states for consistency with the γ=0 case. Since the black hole solution depends on r_h, the initial and final black hole entropies can be expressed as an inequality:
π r_1^2 + π r_2^2 + c_i(r_1,r_2) ≤π r_f^2 -16απ e^γϕ_f .
For consistency with the γ = 0 case, in the limit γ → 0, c_i and -16απ e^{γϕ_f} should converge to -16απ. Furthermore, in the massive limit, r_1, r_2 ≫ 1, a dilatonic black hole tends to a Schwarzschild black hole. In this case, c_i and -16απ e^{γϕ_f} should also converge to -16απ. The exact values of the corrections c_i remain unknown, but they might coincide with each other in these limits. We then checked whether c_i can be approximately equal to -16απ e^{γϕ_f}. As r_h increases, ϕ_h rapidly decreases, as shown in Fig. <ref>, so an asymptotic observer finds the dilaton mass M_d to be slightly greater in the initial state than in the final state, since M_d is proportional to ϕ_h. However, the areas of the initial black holes can be stretched by each other, so these two opposing contributions can be expected to cancel each other. Thus, it can be assumed that c_i ≈ -16απ e^{γϕ_f}, which is consistent with the γ = 0 or M ≫ 1 case. This result is expected based on our assumption, and the exact value must be calculated, which will be done in further work. The contributions of the correction terms cancel out in Eq. (<ref>), yielding Eq. (<ref>). Using Eq. (<ref>), the minimum horizon of the final black hole,
r_f,min=√(r_1^2+r_2^2) ,
can be obtained. The limiting amount of radiation is released when the final black hole is synthesized with the minimum mass given by Eq. (<ref>). The dilaton field strength of the final black hole satisfies
ϕ_h ≤ (1/(2γ)) log((r_1^2 + r_2^2)^2/(192α^2 γ^2)),
where a real black hole solution satisfying the boundary condition in the asymptotic region, lim_r→∞ϕ(r)=0, should be found. The difference between the masses of the initial and final black holes is the released radiation energy. The maximum radiation released in the given conditions is the upper limit that is thermodynamically allowed. The detailed behavior of the radiation will depend on parameters such as the horizon and dilaton field strength, as illustrated in the following sections.
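In practice, once a numerical mass curve M(r_h) is available from the solver of the previous section, the upper limit reduces to a few lines; in the sketch below, mass_of_horizon is a hypothetical interpolant of data such as that shown in Fig. <ref>:

import numpy as np

def radiation_upper_limit(r1, r2, mass_of_horizon):
    rf_min = np.hypot(r1, r2)                  # r_f,min = √(r_1² + r_2²)
    M1, M2 = mass_of_horizon(r1), mass_of_horizon(r2)
    Mf_min = mass_of_horizon(rf_min)
    M_rad = M1 + M2 - Mf_min                   # upper limit of the radiation
    Md_rad = (M1 + M2 - r1 / 2 - r2 / 2) - (Mf_min - rf_min / 2)  # dilaton part
    return M_rad, Md_rad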
§.§ Upper Limit of Radiation with Dilaton Field
The upper limit of the gravitational radiation released by dilatonic black holes with the Gauss-Bonnet term is dependent on the masses and dilaton field strengths of the black holes. For a black hole with a large mass, the minimum horizon of the final black hole is given by Eq. (<ref>) and is proportional to its minimum mass. As its mass increases, the limit of the radiation also increases, as shown in Fig. <ref> (a), which depicts the relation between the mass of the first black hole M_1 and the limit of the radiation energy when the parameters of the second black hole are fixed as follows: r_2 = 2, M_2 = 1.013263, and ϕ_2 = 0.19192. Note that these values are the same from Figs. <ref> to <ref>.
The radiation energy includes the contribution from the dilaton hair, so the limit of the radiation from a dilatonic black hole is greater than that from a Schwarzschild black hole. However, for a dilatonic black hole with a small mass, the limit of the radiation begins at M_min of the first black hole, at which M_rad has its minimum value. This limiting value results from the solution for a dilatonic black hole with the minimum mass M_min, which is shown in Fig. <ref>. Hence, dilatonic black holes exhibit behaviors very different from those of Schwarzschild black holes. In Fig. <ref> (a), the limit for a dilatonic black hole begins at M_min of the first black hole, because the black hole solution does not exist for smaller values of r_h in Fig. <ref>. In addition, the limit of the radiation has a minimum value and a discontinuity at M_h,min, as shown in Fig. <ref> (b), since the black hole solution has a minimum and is two-valued for a given mass M, as shown in Fig. <ref>. Since the horizon of the final black hole, √(r_1^2+r_2^2), is sufficiently large compared with the range containing the two solution values, the limit of the radiation depends on the behavior of M_1(r_1) rather than on M_f,min. In the two-solution range, the mass of the first black hole M_1 is larger at the smallest radius r_1,min, so M_rad becomes large, as shown in Fig. <ref> (b). For a given mass M_1,
M_min≤ M_1(r_1,min) ,
where the equality is satisfied for small γ. Hence, the radiation limit has a discontinuity at M_h,min in the range of the two solutions.
The thermally allowed upper limit of the radiation is represented by the black line in Fig. <ref> (b). M_h,min depends on γ in the range of the two solutions, as shown in Fig. <ref>. For large γ, the discontinuity persists, as shown in Fig. <ref> (a), but it approximates the minimum value of M_1 for small γ. Finally, for γ = 1.29859, the points overlap with each other, and there is no discontinuous upper limit, as shown in Fig. <ref> (b), which has been observed for dilatonic black holes but not Schwarzschild black holes.
The mass of a dilatonic black hole, such as M_1, M_2, or M_f, includes the dilaton mass. Hence, M_rad also includes a contribution from the dilaton hair. Therefore, the mass of the dilaton hair that can be released due to the collision can be determined. The mass of the black hole can be expressed in terms of its own mass M_BH and the dilaton mass M_d. M_BH is half of r_h from Eq. (<ref>):
M=M_BH+M_d=r_h/2+M_d .
Therefore, the energy of the dilaton hair released as radiation M_d,rad can be obtained as shown in Fig. <ref>.
The overall behavior of the upper limit of the dilaton hair radiation is presented in Fig. <ref> (a). Since the mass of the hair in the dilatonic black hole is very small, the dilaton hair released as radiation is also very small compared to the total mass of the initial state. As the mass of the black hole increases, the dilatonic black hole approximates a Schwarzschild black hole. Therefore, the dilaton contribution to the radiation is large when the mass of the initial state is small. The dilaton contribution to the radiation is largest at M_h,min and starts at the minimum mass M_min, as shown in Fig. <ref>. The point of discontinuity disappears for small values of γ, identically to the behavior of the limit of the radiation. In addition, throughout the process, most of the dilaton hair is released, as shown in Fig. <ref> (b), which shows the ratio of the dilaton contribution to the radiation to the dilaton mass of the initial black holes,
M_d,rad(%) = [(M_1 + M_2 - r_1/2 - r_2/2) - (M_f - r_f/2)]/(M_1 + M_2 - r_1/2 - r_2/2).
This ratio is approximately 90%. Hence, most of the dilaton hair in the initial state is radiated out during the collision process. Due to the dilaton effect, the radiation limit still has a point of discontinuity M_h,min, but M_d,rad(%), which is close to 100% in a massive black hole, has no maximum. As the mass increases, a dilatonic black hole more closely approximates a Schwarzschild black hole, so the abovementioned ratio becomes small for a massive final black hole. If the dilatonic black hole is massive, the dilaton hair is also released in larger quantities, which results in the increases shown in Fig. <ref> (b). Therefore, since the mass of the dilatonic black hole increases due to the collision, the final black hole is similar to a Schwarzschild black hole, and no dilaton hair can be detected.
§.§ Notes on GW150914 and GW151226
The radiation released with respect to the total mass of the initial state can be obtained and divided into two parts as
M_rad(%) = ((M_1 + M_2) - M_f)/(M_1 + M_2) = ((r_1/2 + r_2/2) - r_f/2)/(M_1 + M_2) + ((M_1 - r_1/2 + M_2 - r_2/2) - (M_f - r_f/2))/(M_1 + M_2),
where the first term is the contribution of the black hole mass, and the second term is that of the dilaton hair. The mass released due to gravitational radiation is thermally limited at approximately 30% of the total mass of the initial state, as shown in Fig. <ref>. Most of the radiation is from the black hole mass shown in blue.
However, a dilatonic black hole has a contribution from the dilaton hair, which amounts to less than 10% of the upper limit of the total radiation, shown in red in Fig. <ref>. The ratio of the dilaton hair contribution to the total mass of the initial black holes is larger when the mass is smaller, which can be seen from the black hole solution.
If these results are applied to GW150914 and GW151226, which were detected by LIGO<cit.>, the upper limit is consistent with the experimental observations. GW150914 was generated by the merger of two black holes with masses of 39M_⊙ and 32M_⊙ in the detector frame, at a redshift z = 0.09<cit.>. The final state is a black hole with a mass of 68M_⊙ and a 3M_⊙ gravitational wave. In this case, the radiation is approximately 4% of the total mass of the initial state. In a similar way, for GW151226, the radiation is also approximately 4% of the total mass. Therefore, the ratios obtained for the gravitational waves detected by LIGO are consistent with the upper limit of the radiation obtained from the thermodynamic calculations. If the ratio of the dilaton hair is assumed to be the same in the detections as in the upper limit, the contribution of the dilaton hair can comprise up to approximately 10% of the radiation, so that the real contribution could be up to about 0.4% of the total mass; this is significant, corresponding to up to 0.3M_⊙ for GW150914 and 0.1M_⊙ for GW151226, since the masses are in units of solar mass. However, this is based on the maximum ratio, so the exact contribution may be much less than 10%.
§ SUMMARY
We investigated the upper limit of the gravitational radiation released in a dilaton black hole collision using the Gauss-Bonnet term. The solutions for dilatonic black holes were obtained numerically. In the solution, the total mass consists of the black hole and dilaton hair masses. As the black hole horizon becomes larger, the total mass increases, but there are two black hole solutions for a given radius r_h when smaller masses are involved. This feature plays an important role in radiation emission. To determine the upper limit of the radiation that is thermally allowed, we assumed the dilatonic black holes to be far apart from one another and a head-on collision between them to produce the final black hole. The mass difference between the initial and final black holes was determined based on the energy of the gravitational radiation. Since such collisions are irreversible, the entropy of the final black hole should be larger than that of the initial state. In addition, we assumed the correction terms in the entropies to cancel each other in the initial and final states, because such collisions occur in one spacetime and each correction term can be expected to contribute the same value on both sides. Using this thermal preference, the upper limit of the radiation energy in the collision can be obtained.
The upper limit is larger than that of a Schwarzschild black hole, since the radiation includes not only the mass of the black hole, but also its dilaton hair. The upper limit starts at the minimum black hole mass and is proportional to the black hole mass; however, when the mass is small, a point of discontinuity originates from the two solutions for a given mass. The point of discontinuity depends on γ and vanishes for γ<1.29859. Due to the collision, the dilaton hair can be radiated out. Most of the mass from the dilaton hair in the initial state, approximately 90% of the initial hair mass, is released, so the final black hole has a very small amount of hair compared with the initial one. In the total mass of the initial state, the upper limit of the radiation is approximately 30%, and the radiation of the dilaton hair contributes a maximum of 10% to the initial mass. Therefore, the hair contribution should be considered in gravitational radiation. We also discussed the possible mass of the hair radiated out, such as in GW150914 and GW151226 detected by LIGO.
Acknowledgments
This work was supported by the faculty research fund of Sejong University in 2016. BG was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT & Future Planning(NRF-2015R1C1A1A02037523). DR was supported by the Korea Ministry of Education, Science and Technology, Gyeongsangbuk-Do and Pohang City.
§ APPENDIX
N_1 = -2 r X'^2 (e^Y r-8 e^γϕαγϕ') (e^Y r+4 e^γϕ(-3+e^Y) αγϕ')^2
+2 e^Y(-32 e^2 γϕ(12-7 e^Y+e^2 Y) r^2 α^2 γ^2 ϕ'^4
+8 e^γϕ r αγ(7 e^Y r^2-e^2 Y r^2+24 e^γϕαγ^2-48 e^Y+γϕαγ^2+24 e^2 Y+γϕαγ^2) ϕ'^3
-(2 e^2 Y r^4+e^3 Y r^4+16 e^Y+γϕαγ^2 r^2-32 e^2 Y+γϕαγ^2 r^2+16 e^3 Y+γϕαγ^2 r^2+192 e^2 γϕα^2 γ^2
+ 64 e^2 (Y+γϕ)α^2 γ^2 - 256 e^Y+2 γϕα^2 γ^2 ) ϕ'^2 -16 e^Y+γϕ(-1+e^2 Y) r αγϕ' + 2 e^3 Y(-1+e^Y) r^2 )
+ X'(-96 e^2 γϕ(-1+e^Y) r α^2 γ^2 (e^Y r^2-16 e^γϕαγ^2+16 e^Y+γϕαγ^2) ϕ'^4
+ 4 e^γϕ(-3 + e^Y) αγ(e^2 Y r^4-32 e^Y+γϕαγ^2 r^2+32 e^2 Y+γϕαγ^2 r^2 -384 e^2 γϕα^2 γ^2+128 e^Y+2 γϕα^2 γ^2 ) ϕ'^3
+ e^Y r (e^2 Y r^4-32 e^Y+γϕαγ^2 r^2+32 e^2 Y+γϕαγ^2 r^2-1344 e^2 γϕα^2 γ^2+64 e^2 (Y+γϕ)α^2 γ^2
+512 e^Y+2 γϕα^2 γ^2) ϕ'^2 -8 e^2 Y+γϕ(-15+2 e^Y+e^2 Y) r^2 αγϕ'-2 e^3 Y(1+e^Y) r^3 ),
N_2 = -8 e^γϕ r αγ(-e^2 Y(-3+e^Y) r^2-4 e^Y+γϕ(9-2 e^Y+e^2 Y) αγϕ' r +32 e^2 γϕ(3+e^2 Y) α^2 γ^2 ϕ'^2) X'^3
+2 e^Y(16 e^2 γϕ(-9+5 e^Y) r^3 α^2 γ^2 ϕ'^3+2 e^γϕαγ(13 e^Y r^4-3 e^2 Y r^4-768 e^2 γϕα^2 γ^2
+256 e^Y+2 γϕα^2 γ^2 ) ϕ'^2 -r (e^2 Y r^4-96 e^2 γϕα^2 γ^2+32 e^2 (Y+γϕ)α^2 γ^2-192 e^Y+2 γϕα^2 γ^2) ϕ'
-4 e^Y+γϕ(3+e^2 Y) r^2 αγ) X'^2 +e^Y(-8 e^Y+γϕ r^3 αγ(3 r^2+32 e^γϕαγ^2) ϕ'^4
+r^2 (e^2 Y r^4+16 e^Y+γϕαγ^2 r^2 +16 e^2 Y+γϕαγ^2 r^2+192 e^2 γϕα^2 γ^2 +64 e^2 (Y+γϕ)α^2 γ^2
+128 e^Y+2 γϕα^2 γ^2 ) ϕ'^3 +16 e^Y+γϕ(-2+e^Y) r^3 αγϕ'^2 +32 e^Y+γϕ(-1+e^Y)^2 r αγ
-2 (e^2 Y r^4+e^3 Y r^4+192 e^2 γϕα^2 γ^2+192 e^2 (Y+γϕ)α^2 γ^2-384 e^Y+2 γϕα^2 γ^2) ϕ' ) X'
-2 e^2 Y(8 e^γϕ(-3+e^Y) r^4 αγϕ'^4+r^3 (e^Y r^2+16 e^γϕαγ^2-16 e^Y+γϕαγ^2) ϕ'^3
+8 e^γϕ(-1-e^Y+2 e^2 Y) r^2 αγϕ'^2+2 e^Y(-1+e^Y) r^3 ϕ'-16 e^γϕ(-1+e^Y)^2 αγ),
D = 4 r (8 e^γϕ(-1+e^Y) αγ X' (-e^2 Y r^2-4 e^Y+γϕ(-3+e^Y) αγϕ' r+48 e^2 γϕ(-1+e^Y) α^2 γ^2 ϕ'^2)
+e^Y( 4 e^Y+γϕ(-5+e^Y) αγϕ'^2 r^3-32 e^2 γϕ(-3+e^Y) α^2 γ^2 ϕ'^3 r^2+8 e^Y+γϕ(-1+e^Y)^2 αγ r
+(e^2 Y r^4-96 e^2 γϕα^2 γ^2-96 e^2 (Y+γϕ)α^2 γ^2+192 e^Y+2 γϕα^2 γ^2 ) ϕ' ) ).
Abbott:2016blz
B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations],
Phys. Rev. Lett. 116, no. 6, 061102 (2016).
Abbott:2016nmj
B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations],
Phys. Rev. Lett. 116, no. 24, 241103 (2016).
Abbott:2017vtc
B. P. Abbott et al. [LIGO Scientific and VIRGO Collaborations],
Phys. Rev. Lett. 118, no. 22, 221101 (2017).
Ruffini:1971bza
R. Ruffini and J. A. Wheeler,
Phys. Today 24, no. 1, 30 (1971).
Bekenstein:1995un
J. D. Bekenstein,
Phys. Rev. D 51, no. 12, R6608 (1995).
Mayo:1996mv
A. E. Mayo and J. D. Bekenstein,
Phys. Rev. D 54, 5059 (1996).
Gibbons:1982ih
G. W. Gibbons,
Nucl. Phys. B 207, 337 (1982).
Gibbons:1987ps
G. W. Gibbons and K. i. Maeda,
Nucl. Phys. B 298, 741 (1988).
Garfinkle:1990qj
D. Garfinkle, G. T. Horowitz and A. Strominger,
Phys. Rev. D 43, 3140 (1991)
Erratum: [Phys. Rev. D 45, 3888 (1992)].
Droz:1991cx
S. Droz, M. Heusler and N. Straumann,
Phys. Lett. B 268, 371 (1991).
Lee:1991vy
K. M. Lee, V. P. Nair and E. J. Weinberg,
Phys. Rev. D 45, 2751 (1992).
Breitenlohner:1991aa
P. Breitenlohner, P. Forgacs and D. Maison,
Nucl. Phys. B 383, 357 (1992).
Lavrelashvili:1992cp
G. V. Lavrelashvili and D. Maison,
Phys. Lett. B 295, 67 (1992).
Torii:1993vm
T. Torii and K. i. Maeda,
Phys. Rev. D 48, 1643 (1993).
Breitenlohner:1994di
P. Breitenlohner, P. Forgacs and D. Maison,
Nucl. Phys. B 442, 126 (1995).
Kleihaus:2011tg
B. Kleihaus, J. Kunz and E. Radu,
Phys. Rev. Lett. 106, 151104 (2011).
Kleihaus:2014lba
B. Kleihaus, J. Kunz and S. Mojica,
Phys. Rev. D 90, no. 6, 061501 (2014).
Kleihaus:2016rgf
B. Kleihaus, J. Kunz and F. Navarro-Lerida,
Class. Quant. Grav. 33, no. 23, 234002 (2016).
Boulware:1986dr
D. G. Boulware and S. Deser,
Phys. Lett. B 175, 409 (1986).
Callan:1988hs
C. G. Callan, Jr., R. C. Myers and M. J. Perry,
Nucl. Phys. B 311, 673 (1989).
Mignemi:1992pm
S. Mignemi and N. R. Stewart,
Phys. Lett. B 298, 299 (1993).
Campbell:1991kz
B. A. Campbell, N. Kaloper and K. A. Olive,
Phys. Lett. B 285, 199 (1992).
Campbell:1992hc
B. A. Campbell, N. Kaloper, R. Madden and K. A. Olive,
Nucl. Phys. B 399, 137 (1993).
Callan:1985ia
C. G. Callan, Jr., E. J. Martinec, M. J. Perry and D. Friedan,
Nucl. Phys. B 262, 593 (1985).
Zwiebach:1985uq
B. Zwiebach,
Phys. Lett. 156B, 315 (1985).
Gross:1986mw
D. J. Gross and J. H. Sloan,
Nucl. Phys. B 291, 41 (1987).
Kanti:1995vq
P. Kanti, N. E. Mavromatos, J. Rizos, K. Tamvakis and E. Winstanley,
Phys. Rev. D 54, 5049 (1996).
Torii:1996yi
T. Torii, H. Yajima and K. -i. Maeda,
Phys. Rev. D 55, 739 (1997).
Kanti:1997br
P. Kanti, N. E. Mavromatos, J. Rizos, K. Tamvakis and E. Winstanley,
Phys. Rev. D 57, 6255 (1998).
Torii:1998gm
T. Torii and K. -i. Maeda,
Phys. Rev. D 58, 084004 (1998).
Kokkotas:2015uma
K. D. Kokkotas, R. A. Konoplya and A. Zhidenko,
Phys. Rev. D 92, no. 6, 064022 (2015).
Coleman:1991ku
S. R. Coleman, J. Preskill and F. Wilczek,
Nucl. Phys. B 378, 175 (1992).
Coleman:1991jf
S. R. Coleman, J. Preskill and F. Wilczek,
Phys. Rev. Lett. 67, 1975 (1991).
Horne:1992zy
J. H. Horne and G. T. Horowitz,
Phys. Rev. D 46, 1340 (1992).
Mann:1992yv
R. B. Mann,
Phys. Rev. D 47, 4438 (1993).
Lavrelashvili:1992ia
G. V. Lavrelashvili and D. Maison,
Nucl. Phys. B 410, 407 (1993).
Gibbons:1994vm
G. W. Gibbons, G. T. Horowitz and P. K. Townsend,
Class. Quant. Grav. 12, 297 (1995).
Cai:1997ii
R. G. Cai, J. Y. Ji and K. S. Soh,
Phys. Rev. D 57, 6547 (1998).
Cai:2001dz
R. G. Cai,
Phys. Rev. D 65, 084014 (2002).
Cai:2003gr
R. G. Cai and Q. Guo,
Phys. Rev. D 69, 104025 (2004).
Kim:2007iw
H. C. Kim and R. G. Cai,
Phys. Rev. D 77, 024045 (2008).
Goldstein:2009cv
K. Goldstein, S. Kachru, S. Prakash and S. P. Trivedi,
JHEP 1008, 078 (2010).
Cai:2013qga
R. G. Cai, L. M. Cao, L. Li and R. Q. Yang,
JHEP 1309, 005 (2013).
Moura:2006pz
F. Moura and R. Schiappa,
Class. Quant. Grav. 24, 361 (2007).
Moura:2012fq
F. Moura,
Phys. Rev. D 87, no. 4, 044036 (2013).
Penrose:1969pc
R. Penrose,
Riv. Nuovo Cim. 1, 252 (1969)
[Gen. Rel. Grav. 34, 1141 (2002)].
Wald1974548
R. Wald,
Ann. Phys. 82, no. 2, 548 (1974).
Jacobson:2009kt
T. Jacobson and T. P. Sotiriou,
Phys. Rev. Lett. 103, 141101 (2009).
Saa:2011wq
A. Saa and R. Santarelli,
Phys. Rev. D 84, 027501 (2011);
Gao:2012ca
S. Gao and Y. Zhang,
Phys. Rev. D 87, no. 4, 044028 (2013).
Barausse:2010ka
E. Barausse, V. Cardoso and G. Khanna,
Phys. Rev. Lett. 105, 261102 (2010).
Barausse:2011vx
E. Barausse, V. Cardoso and G. Khanna,
Phys. Rev. D 84, 104006 (2011).
Colleoni:2015afa
M. Colleoni and L. Barack,
Phys. Rev. D 91, 104024 (2015).
Colleoni:2015ena
M. Colleoni, L. Barack, A. G. Shah and M. van de Meent,
Phys. Rev. D 92, no. 8, 084044 (2015).
Hubeny:1998ga
V. E. Hubeny,
Phys. Rev. D 59, 064013 (1999).
Isoyama:2011ea
S. Isoyama, N. Sago and T. Tanaka,
Phys. Rev. D 84, 124024 (2011).
Aniceto:2015klq
P. Aniceto, P. Pani and J. V. Rocha,
JHEP 1605, 115 (2016).
Hod:2016hqx
S. Hod,
Class. Quant. Grav. 33, no. 3, 037001 (2016).
Horowitz:2016ezu
G. T. Horowitz, J. E. Santos and B. Way,
Class. Quant. Grav. 33, no. 19, 195007 (2016).
Toth:2015cda
G. Z. Toth,
Class. Quant. Grav. 33, no. 11, 115012 (2016).
Rocha:2011wp
J. V. Rocha and V. Cardoso,
Phys. Rev. D 83, 104037 (2011).
Gwak:2012hq
B. Gwak and B. H. Lee,
Class. Quant. Grav. 29, 175011 (2012).
Gwak:2015sua
B. Gwak and B. H. Lee,
Phys. Lett. B 755, 324 (2016).
BouhmadiLopez:2010vc
M. Bouhmadi-Lopez, V. Cardoso, A. Nerozzi and J. V. Rocha,
Phys. Rev. D 81, 084051 (2010).
Doukas:2010be
J. Doukas,
Phys. Rev. D 84, 064046 (2011).
Lehner:2010pn
L. Lehner and F. Pretorius,
Phys. Rev. Lett. 105, 101102 (2010).
Gwak:2011rp
B. Gwak and B. H. Lee,
Phys. Rev. D 84, 084049 (2011).
Figueras:2015hkb
P. Figueras, M. Kunesch and S. Tunyasuvunakool,
Phys. Rev. Lett. 116, no. 7, 071102 (2016).
Gwak:2015fsa
B. Gwak and B. H. Lee,
JCAP 1602, 015 (2016).
Gwak:2016gwj
B. Gwak,
Phys. Rev. D 95, no. 12, 124050 (2017).
Hawking:1974sw
S. W. Hawking,
Commun. Math. Phys. 43, 199 (1975).
Hawking:1976de
S. W. Hawking,
Phys. Rev. D 13, 191 (1976).
Bekenstein:1973ur
J. D. Bekenstein,
Phys. Rev. D 7, 2333 (1973).
Bekenstein:1974ax
J. D. Bekenstein,
Phys. Rev. D 9, 3292 (1974).
Myers:1986un
R. C. Myers and M. J. Perry,
Annals Phys. 172, 304 (1986).
Emparan:2003sy
R. Emparan and R. C. Myers,
JHEP 09, 025 (2003).
Shibata:2009ad
M. Shibata and H. Yoshino,
Phys. Rev. D 81, 021501 (2010).
Dias:2009iu
O. J. C. Dias, P. Figueras, R. Monteiro, J. E. Santos and R. Emparan,
Phys. Rev. D 80, 111701 (2009).
Dias:2010eu
O. J. C. Dias, P. Figueras, R. Monteiro, H. S. Reall and J. E. Santos,
JHEP 05, 076 (2010).
Dias:2010maa
O. J. C. Dias, P. Figueras, R. Monteiro and J. E. Santos,
Phys. Rev. D 82, 104025 (2010).
Durkee:2010qu
M. Durkee and H. S. Reall,
Class. Quant. Grav. 28, 035011 (2011).
Murata:2012ct
K. Murata,
Class. Quant. Grav. 30, 075002 (2013).
Dias:2014eua
O. J. C. Dias, G. S. Hartnett and J. E. Santos,
Class. Quant. Grav. 31, no. 24, 245011 (2014).
Gwak:2014xra
B. Gwak and B.-H. Lee,
Phys. Rev. D 91, no. 6, 064020 (2015).
Gwak:2015ysa
B. Gwak, B.-H. Lee and D. Ro,
Phys. Lett. B 761, 437 (2016).
Ahn:2014fwa
W.-K. Ahn, B. Gwak, B.-H. Lee and W. Lee,
Eur. Phys. J. C 75, no. 8, 372 (2015).
Hawking:1971tu
S. W. Hawking,
Phys. Rev. Lett. 26, 1344 (1971).
Schiff:1960gi
L. I. Schiff,
Proc. Nat. Acad. Sci. 46, 871 (1960).
Wilkins:1970wap
D. Wilkins,
Annals of Physics 61, no. 2, 277 (1970).
Mashhoon:1971nm
B. Mashhoon,
J. Math. Phys. 12, 1075 (1971).
Wald:1972sz
R. M. Wald,
Phys. Rev. D 6, 406 (1972).
Gwak:2016cbq
B. Gwak and B. H. Lee,
JHEP 1607, 079 (2016).
Gwak:2016icd
B. Gwak and D. Ro,
arXiv:1610.04847 [gr-qc].
Smarr:1976qy
L. Smarr, A. Cadez, B. S. DeWitt and K. Eppley,
Phys. Rev. D 14, 2443 (1976).
Smarr:1977fy
L. Smarr,
Phys. Rev. D 15, 2069 (1977).
Smarr:1977uf
L. Smarr and J. W. York, Jr.,
Phys. Rev. D 17, 2529 (1978).
Witek:2010xi
H. Witek, M. Zilhao, L. Gualtieri, V. Cardoso, C. Herdeiro, A. Nerozzi and U. Sperhake,
Phys. Rev. D 82, 104014 (2010).
Bantilan:2014sra
H. Bantilan and P. Romatschke,
Phys. Rev. Lett. 114, no. 8, 081601 (2015).
Bednarek:2015dga
W. Bednarek and P. Banasinski,
Astrophys. J. 807, no. 2, 168 (2015).
Hirotani:2015fxp
K. Hirotani and H.-Y. Pu,
Astrophys. J. 818, no. 1, 50 (2016).
Sperhake:2015siy
U. Sperhake, E. Berti, V. Cardoso and F. Pretorius,
Phys. Rev. D 93, no. 4, 044012 (2016).
Barkett:2015wia
K. Barkett et al.,
Phys. Rev. D 93, no. 4, 044064 (2016).
Hinderer:2016eia
T. Hinderer et al.,
Phys. Rev. Lett. 116, no. 18, 181101 (2016).
Konoplya:2016pmh
R. Konoplya and A. Zhidenko,
Phys. Lett. B 756, 350 (2016).
Nojiri:2005vv
S. Nojiri, S. D. Odintsov and M. Sasaki, Phys. Rev. D 71, 123509 (2005).
DeWitt:1964
C. DeWitt and B. DeWitt, Relativity, Groups and Topology, vol. 12, Gordon & Breach, p. 719 (1964).
Sudarsky:2002mk
D. Sudarsky and J. A. Gonzalez, Phys. Rev. D 67, 024038 (2003).
Hendi:2010gq
S. H. Hendi, A. Sheykhi and M. H. Dehghani,
Eur. Phys. J. C 70, 703 (2010).
Hendi:2015xya
S. H. Hendi, A. Sheykhi, S. Panahiyan and B. Eslam Panah,
Phys. Rev. D 92, no. 6, 064028 (2015).
Krolak:1987ofj
A. Krolak and B. F. Schutz,
Gen. Rel. Grav. 19, 1163 (1987).
Ade:2015xua
P. A. R. Ade et al. [Planck Collaboration],
Astron. Astrophys. 594, A13 (2016).
|
http://arxiv.org/abs/1701.07613v1 | 20170126082842 | Dispersion and viscous attenuation of capillary waves with finite amplitude | [
"Fabian Denner",
"Gounséti Paré",
"Stéphane Zaleski"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Department of Mechanical Engineering, Imperial College London,
Exhibiton Road, London, SW7 2AZ, United Kingdom
Sorbonne Universités, UPMC Univ Paris 06, CNRS, UMR 7190, Institut
Jean Le Rond d'Alembert, F-75005 Paris, France
We present a comprehensive study of the dispersion of capillary waves
with finite amplitude, based on direct numerical simulations.
The presented results show an increase of viscous attenuation and, consequently,
a smaller frequency of capillary waves with increasing initial wave amplitude.
Interestingly, however, the critical wavenumber as well as the wavenumber at
which the maximum frequency is observed
remain the same for a given two-phase system, irrespective of the wave
amplitude.
By devising an empirical correlation that describes the effect of the wave
amplitude on the viscous attenuation, the dispersion of capillary waves with finite
initial amplitude is shown to be, in very good approximation, self-similar
throughout the entire underdamped regime and independent of the fluid
properties. The results also shown that analytical solutions for capillary waves
with infinitesimal amplitude are applicable with reasonable accuracy
for capillary waves with moderate amplitude.
Dispersion and viscous attenuation of capillary waves with
finite amplitude
Fabian [email protected]
Gounséti Paré2 Stéphane Zaleski2
=============================================================================
§ INTRODUCTION
Waves on fluid interfaces for which surface tension is the main restoring and
dispersive mechanism, so-called capillary waves, play a key role in many
physical phenomena, natural processes and engineering applications.
Prominent examples are the heat and mass transfer between the atmosphere and the
ocean <cit.>, capillary wave turbulence
<cit.> and the stability of liquid and
capillary bridges <cit.>.
The dispersion relation for a capillary wave with small amplitude on a fluid
interface between two inviscid fluids is <cit.>
ω_0^2 = σ k^3/ρ̃ ,
where ω_0 is the undamped angular frequency, σ is the surface
tension coefficient, k is the wavenumber and ρ̃ =ρ_a +
ρ_b is the relevant fluid density, where subscripts a
and b denote properties of the two interacting bulk phases. The
dispersion relation given in Eq. (<ref>) is only valid for waves with
infinitesimal amplitude <cit.>.
In reality, however, capillary waves typically have a finite amplitude.
<cit.> was the first to provide an exact solution for progressive
capillary waves of finite amplitude in fluids of infinite depth. The
frequency of capillary waves with finite amplitude a (measured from the
equilibrium position to the wave crest or trough) and wavelength
λ is given as
<cit.>
ω = ω_0 (1 + π^2 a^2/λ^2)^-1/4 .
The solution of <cit.> was extended to capillary waves on liquid
films of finite depth by <cit.> and to general gravity and
capillary waves by <cit.>.
However, these studies neglected viscous stresses, in order to make an
analytical solution feasible.
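For orientation, the two relations above are straightforward to evaluate numerically. The sketch below computes the undamped frequency and Crapper's finite-amplitude correction; the water-like property values are illustrative assumptions, not taken from the cases considered later.

    import numpy as np

    def crapper_frequency(k, a0, sigma, rho_t):
        # Undamped frequency of an infinitesimal-amplitude capillary wave
        omega0 = np.sqrt(sigma * k**3 / rho_t)
        # Crapper's finite-amplitude correction, with a0 measured from the
        # equilibrium position to the wave crest or trough
        lam = 2.0 * np.pi / k
        return omega0 * (1.0 + np.pi**2 * a0**2 / lam**2)**(-0.25)

    # Illustrative water-air-like values (an assumption for demonstration only)
    sigma, rho_t = 0.072, 1000.0          # N/m, kg/m^3
    lam = 1.0e-3                          # 1 mm wavelength
    k = 2.0 * np.pi / lam
    print(crapper_frequency(k, a0=0.1 * lam, sigma=sigma, rho_t=rho_t))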
Since capillary waves typically have a short wavelength (otherwise the influence
of gravity also has to be considered) and because viscous stresses act
preferably at small lengthscales <cit.>,
understanding how viscous stresses affect the dispersion of capillary waves is
crucial for a complete understanding of the associated processes and for
optimising the related applications. Viscous stresses are known to attenuate the
wave motion, with the frequency of capillary waves in viscous fluids being
ω = ω_0 + i Γ.
This complex frequency leads to three damping regimes: the underdamped regime
for k < k_c, critical damping for k = k_c and the
overdamped regime for k > k_c. A wave with critical wavenumber
k_c requires the shortest time to return to its equilibrium state
without oscillating, with the real part of its complex angular frequency
vanishing, Re(ω) = 0. Critical damping, thus, represents the
transition from the underdamped (oscillatory) regime, with k < k_c
and Re(ω) > 0, to the overdamped (non-oscillatory) regime, with
k>k_c and Re(ω) = 0.
Based on the linearised Navier-Stokes equations, in this context usually
referred to as the weak damping assumption, the dispersion relation of capillary
waves in viscous fluids is given as <cit.>
ω_0^2 + (i ω + 2 ν k^2)^2
- 4 ν^2 k^4 √(1+ i ω/ν k^2) = 0 ,
where ν = μ/ρ is the kinematic viscosity and μ is the dynamic
viscosity.
The damping rate based on Eq. (<ref>) is
Γ = 2 ν k^2, applicable for k ≪√(ω_0/ν)
<cit.>.
Note that Eq. (<ref>) has been derived for a
single fluid with a free surface <cit.>.
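Because the dispersion relation above is transcendental in the complex frequency ω, it is typically solved numerically. A minimal sketch using a standard root finder follows; the single-fluid, water-like property values are assumptions chosen purely for illustration.

    import numpy as np
    from scipy.optimize import fsolve

    def residual(w_ri, k, sigma, rho, nu):
        # Complex frequency w = Re(omega) + i*Gamma, split into two real unknowns
        w = w_ri[0] + 1j * w_ri[1]
        omega0_sq = sigma * k**3 / rho
        res = omega0_sq + (1j * w + 2.0 * nu * k**2)**2 \
            - 4.0 * nu**2 * k**4 * np.sqrt(1.0 + 1j * w / (nu * k**2))
        return [res.real, res.imag]

    sigma, rho, nu = 0.072, 1000.0, 1.0e-6   # water-like free surface (assumed)
    k = 2.0 * np.pi / 1.0e-4                 # 0.1 mm wavelength
    w0 = np.sqrt(sigma * k**3 / rho)
    w_re, gamma = fsolve(residual, x0=[w0, 2.0 * nu * k**2],
                         args=(k, sigma, rho, nu))
    print(w_re, gamma)   # oscillation frequency and damping rate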
Previous analytical and numerical studies showed that the
damping coefficient Γ is not a constant, but is dependent on the
wavenumber and changes significantly throughout the underdamped regime
<cit.>.
<cit.> recently proposed a consistent scaling for
small-amplitude capillary waves in viscous fluids, which leads to a self-similar
characterisation of the frequency dispersion of capillary waves in the entire
underdamped regime. The results reported by <cit.> also suggest that the weak damping
assumption is not appropriate when viscous stresses dominate the dispersion of
capillary waves, close to critical damping.
With regards to finite-amplitude capillary waves in viscous fluids, the
interplay between wave amplitude and viscosity as well as the effect of the
amplitude on the frequency and critical wavelength have yet to be studied and
quantified.
In this article, direct numerical simulation (DNS) is applied to study the
dispersion and viscous attenuation of freely-decaying capillary waves with
finite amplitude in viscous fluids.
The presented results show a nonlinear increase in viscous attenuation and,
hence, a lower frequency for an increasing initial amplitude of capillary waves.
Nevertheless, the critical wavenumber for a given two-phase system is found to
be independent of the initial wave amplitude and is accurately predicted by the
harmonic oscillator model proposed by <cit.>.
An empirical correction to the characteristic viscocapillary timescale is
proposed that leads to a self-similar solution for the dispersion of finite-amplitude
capillary waves in viscous fluids.
In Sect. <ref> the characterisation of capillary waves is
discussed and Sect. <ref> describes the computational
methods used in this study. In Sect. <ref>, the dispersion of
capillary waves with finite amplitude is studied and
Sect. <ref> analyses the validity of linear wave theory
based on an infinitesimal wave amplitude. The article is summarised and
conclusions are drawn in Sect. <ref>.
§ CHARACTERISATION OF CAPILLARY WAVES
Assuming that no gravity is acting, the fluids are free of surfactants and
inertia is negligible, only two physical mechanisms govern the dispersion of
capillary waves; surface tension (dispersion) and viscous stresses
(dissipation).
The main characteristic of a capillary wave in viscous fluids is its frequency
ω = ω_0 √(1-ζ^2) + iΓ ,
with ζ = Γ/ω_0 being the damping ratio.
In the underdamped regime (for k<k_c) the damping ratio is ζ <
1, ζ =1 for critical damping (k=k_c) and ζ >1 in the overdamped regime
(for k>k_c).
As recently shown by <cit.>, the dispersion of capillary
waves can be consistently parameterised by the
critical wavenumber k_c together with an appropriate timescale.
The wavenumber at which capillary waves are critically damped, the so-called
critical wavenumber, is given as
<cit.>
k_c = 2^2/3/l_vc (1.0625 - β) ,
where the viscocapillary lengthscale is
l_vc = μ̃^2/σ ρ̃ ,
with μ̃ = μ_a+μ_b, and
β = ρ_aρ_b/ρ̃^2ν_aν_b/ν̃^2
is a property ratio, with ν̃ = ν_a+ν_b. Note
that l_vc follows from a balance of capillary and viscous timescales
<cit.>. Based on the governing mechanisms, the characteristic
timescale of the dispersion of capillary waves is the viscocapillary timescale
<cit.>
t_vc = μ̃^3/σ^2 ρ̃ .
Defining the dimensionless wavenumber as k̂=k/k_c and the
dimensionless frequency as ω̂ = ω t_vc results in a
self-similar characterisation of the dispersion of capillary waves with small
(infinitesimal) amplitude <cit.>, i.e. there exists a
single dimensionless frequency ω̂ for every dimensionless wavenumber k̂.
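These definitions translate directly into code. The helper below gathers the characteristic quantities for a given two-phase system; it is a plain transcription of the expressions above.

    def capillary_scales(mu_a, mu_b, rho_a, rho_b, sigma):
        mu_t, rho_t = mu_a + mu_b, rho_a + rho_b
        nu_a, nu_b = mu_a / rho_a, mu_b / rho_b
        nu_t = nu_a + nu_b
        l_vc = mu_t**2 / (sigma * rho_t)                  # viscocapillary lengthscale
        t_vc = mu_t**3 / (sigma**2 * rho_t)               # viscocapillary timescale
        beta = (rho_a * rho_b / rho_t**2) * (nu_a * nu_b / nu_t**2)
        k_c = 2.0**(2.0 / 3.0) / l_vc * (1.0625 - beta)   # critical wavenumber
        return l_vc, t_vc, beta, k_c

    # Dimensionless wavenumber and frequency then follow as k/k_c and omega*t_vc.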
§ COMPUTATIONAL METHODS
The incompressible flow of isothermal, Newtonian fluids is governed by the
momentum equations
∂ u_i/∂ t + u_j ∂
u_i/∂ x_j = - 1/ρ∂ p/∂ x_i
+ ∂/∂ x_j[ ν(∂
u_i/∂ x_j + ∂ u_j/∂ x_i)]
+ f_σ,i/ρ
and the continuity equation
∂ u_i/∂ x_i = 0 ,
where x≡ (x,y,z) denotes a Cartesian coordinate system, t
represents time, u is the velocity, p is the pressure and f_σ
is the volumetric force due to surface tension acting at the fluid
interface.
The hydrodynamic balance of forces acting at the fluid
interface is given as <cit.>
(p_a - p_b + σ κ) m̂_i = [ μ_a(∂ u_i/∂ x_j|_a + ∂ u_j/∂ x_i|_a) - μ_b(∂ u_i/∂ x_j|_b + ∂ u_j/∂ x_i|_b) ] m̂_j - ∂σ/∂ x_i ,
where κ is the curvature and m̂ is the unit normal
vector (pointing into fluid b) of the fluid interface.
In the current study the surface tension coefficient σ is taken
to be constant and, hence, ∇σ = 0. <cit.> performed
extensive molecular dynamics simulations, showing that hydrodynamic theory is applicable to capillary waves
in the underdamped regime as well as at critical damping.
§.§ DNS methodology
The governing equations are solved numerically in a single linear system of
equations using a coupled finite-volume framework with collocated variable
arrangement <cit.>, resolving all relevant scales in space and time.
The momentum equations, Eq. (<ref>), are discretised using a Second-Order Backward Euler
scheme for the transient term and convection is discretised using central
differencing <cit.>. The continuity equation,
Eq. (<ref>), is discretised using the
momentum-weighted interpolation method for two-phase flows proposed by
<cit.>, providing an accurate and robust pressure-velocity coupling.
The Volume-of-Fluid (VOF) method <cit.> is adopted to capture the
interface between the immiscible bulk phases.
The local volume fraction of both phases is represented by the colour function
γ, with the interface located in mesh cells with a colour function value
of 0 < γ < 1. The local density ρ and viscosity μ are
interpolated using an arithmetic average based on the colour function γ
<cit.>, e.g. ρ(x) =
ρ_a [1-γ(x)] + ρ_bγ(x) for density.
The colour function γ is advected by the linear advection equation
∂γ/∂ t + u_i ∂γ/∂ x_i
= 0 ,
which is discretised using a compressive VOF method <cit.>.
Surface tension is modelled as a surface force per unit volume, described by
the CSF model <cit.> as
f_s = σ κ ∇γ.
The interface curvature is computed as
κ = h_xx/(1+h_x^2)^3/2,
where h_x and h_xx represent the first and second derivatives of the
height function h of the colour function with respect to the x-axis of
the Cartesian coordinate system, calculated by means of central differences.
No convolution is applied to smooth the colour function field or the surface
force <cit.>.
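As a sketch of the curvature evaluation described above, the snippet below applies central differences to a discrete height function; periodic lateral boundaries are assumed here, consistent with the boundary conditions of the computational domain.

    import numpy as np

    def interface_curvature(h, dx):
        # First and second derivatives of the height function by central differences
        h_x = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
        h_xx = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
        # kappa = h_xx / (1 + h_x^2)^(3/2)
        return h_xx / (1.0 + h_x**2)**1.5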
A standing capillary wave with wavelength λ and initial amplitude a_0
in four different two-phase systems is simulated. The fluid properties of the
considered cases, which have previously also been considered in the study on the
dispersion of small-amplitude capillary waves in viscous fluids by
<cit.>, are given in Table <ref>.
The computational domain, sketched in Fig. <ref>, has the dimensions
λ× 3λ and is represented by an equidistant Cartesian mesh
with mesh spacing Δ x = λ/100, which has previously been shown to
provide an adequate spatial resolution <cit.>. The applied
computational time-step is Δ t = (200 ω_0)^-1, which satisfies
the capillary time-step constraint <cit.> and results in a Courant
number of 𝐶𝑜 = Δ t |u|/Δ x < 10^-2.
The domain boundaries oriented parallel to the interface are treated as
free-slip walls, whereas periodic boundary conditions are applied at the other
domain boundaries.
The flow field is initially stationary and no gravity is acting.
§.§ Analytical initial-value solution
The analytical initial-value solution (AIVS) for small-amplitude capillary waves
in viscous fluids, as proposed by <cit.> based
on the linearised Navier-Stokes equations, for the special cases of a single
fluid with a free-surface (i.e. ρ_b = μ_b = 0)
<cit.> and for two-phase systems with equal bulk phases of
equal kinematic viscosity (i.e. ν_a=ν_b)
<cit.> is considered as reference solution. Since the AIVS is
based on the linearised Navier-Stokes equations, it is only valid in the limit
of infinitesimal wave amplitude a_0 → 0. In the present
study, the AIVS is computed at time intervals Δ t = (200 ω_0)^-1, i.e. with 200 solutions per undamped period, which provides a sufficient
temporal resolution of the evolution of the capillary wave.
§.§ Validation
The dimensionless frequency ω̂=ω t_vc as a function
of dimensionless wavenumber k̂=k/k_c for Case A with initial wave
amplitude a_0 = 0.01 λ is shown in Fig. <ref>,
where the results obtained with the DNS methodology described in
Sect. <ref> are compared against AIVS, see Sect. <ref>, as
well as results obtained with the open-source DNS code Gerris <cit.>.
The applied DNS methodology is in very good agreement with the results
obtained with Gerris and in excellent agreement with
the analytical solution up to k̂≈ 0.9.
For k̂>0.9 the very small amplitude at the first extremum (|a_1| ∼
10^-6) is indistinguishable from the error caused by the underpinning
modelling assumption and numerical discretisation errors. Hence, the applied
numerical method can provide accurate and reliable results for k̂≤
0.9, as previously reported in Ref. <cit.>.
Figure <ref> shows DNS results of the dimensionless
frequency as a function of dimensionless wavenumber for Case A with an initial amplitude of a_0 = 0.1
λ, compared against results obtained with Gerris, exhibiting a very
good agreement.
Note that Gerris has previously been successfully applied to a variety of
related problems, such as capillary wave turbulence <cit.>
and capillary-driven jet breakup <cit.>.
§ DISPERSION AND DAMPING OF FINITE-AMPLITUDE CAPILLARY WAVES
The damping ratio ζ as a function of dimensionless wavenumber k̂
for Cases A and D with different initial wave amplitudes a_0 is shown in
Fig. <ref>. The damping ratio increases with
increasing amplitude for any given wavenumber k̂<1. This trend is
particularly pronounced for smaller wavenumbers, i.e. longer wavelength.
Irrespective of the initial amplitude a_0 of the capillary wave,
however, critical damping (ζ=1) is observed at k̂=k/k_c=1,
with k_c defined by Eq. (<ref>). Hence, the
critical wavenumber and, consequently, the characteristic lengthscale
l_vc remain unchanged for different initial
amplitudes of the capillary wave.
As a result of the increased damping for capillary waves with larger initial
amplitude, the frequency of capillary waves with large initial amplitude is
lower than the frequency of capillary waves with the same wavenumber but
smaller initial amplitude, as observed in
Fig. <ref>, which shows the dimensionless frequency
ω̂ = ω t_vc as a function of dimensionless wavenumber
k̂ = k/k_c for Cases A and D with different initial wave
amplitudes.
Interestingly, the wavenumber at which the maximum frequency is observed,
k_m≈ 0.751 k_c, is unchanged by the initial
amplitude of the capillary wave. This concurs with the earlier observation that
the critical wavenumber is not dependent on the initial wave amplitude.
Furthermore, comparing Figs. <ref> and
<ref> suggests that there exists a single
dimensionless frequency ω̂ = ω t_vc for any given
dimensionless wavenumber k̂ and initial wave amplitude a_0.
Based on the DNS results for the considered cases and different initial
amplitudes, an amplitude-correction to the viscocapillary
timescale t_vc can be devised. Figure <ref>
shows the correction factor C=t_vc^∗/t_vc, where
t_vc^∗ is the viscocapillary timescale obtained from DNS results
with various initial amplitudes, as a function of the dimensionless initial
amplitude â_0=a_0/λ. This correction factor is well approximated
at k̂ = 0.75 by the correlation
C ≈ -4.5 â_0^3 + 5.3 â_0^2 + 0.18 â_0 + 1 ,
as seen in Fig. <ref>. The amplitude-corrected
viscocapillary timescale then readily follows as
t_vc^∗≈ C t_vc .
Thus, the change in frequency as a result of a finite initial wave amplitude is
independent of the fluid properties. Note that this correction is particularly
accurate for moderate wave amplitudes of a_0 ≲ 0.1 λ.
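In code, the amplitude correction is a one-line extension of the viscocapillary timescale; the sketch below simply implements the empirical correlation above.

    def corrected_timescale(t_vc, a0_over_lambda):
        a = a0_over_lambda
        C = -4.5 * a**3 + 5.3 * a**2 + 0.18 * a + 1.0   # empirical correction factor
        return C * t_vc                                 # amplitude-corrected timescale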
By redefining the dimensionless frequency ω̂^∗ = ω
t_vc^∗ with the amplitude-corrected viscocapillary timescale
t_vc^∗ as defined in Eq. (<ref>), a
(approximately) self-similar solution of the dispersion of capillary waves with
initial wave amplitude a_0 can be obtained, as seen in
Fig. <ref>. Thus, for every dimensionless wavenumber
k̂ there exists, in good approximation, only one dimensionless frequency
ω̂^∗.
The maximum frequency is ω̂_m^∗≈ 0.488 at
k̂_m≈ 0.751, as previously reported for small-amplitude
capillary waves <cit.>.
§ VALIDITY OF LINEAR WAVE THEORY
As observed and discussed in the previous section, an increasing initial wave
amplitude results in a lower frequency of the capillary wave.
The influence of the amplitude is small if
μ̃=0. According to the analytical solution derived by
<cit.> for inviscid fluids, see
Eq. (<ref>), the frequency error
ε(k̂, â_0) =
|ω_AIVS(k̂)
-ω(k̂, â_0)|/ω_AIVS(k̂) ,
where ω_AIVS is the frequency according to the AIVS solution, is
ε = 0.61% for a_0 = 0.05 λ and ε = 2.32% for a_0 = 0.10 λ.
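These inviscid values follow directly from Crapper's correction, since ω_AIVS reduces to ω_0 for μ̃=0; the short check below reproduces them.

    import numpy as np

    def inviscid_frequency_error(a0_over_lambda):
        # epsilon = |omega_0 - omega| / omega_0, with omega from Crapper's result
        return 1.0 - (1.0 + np.pi**2 * a0_over_lambda**2)**(-0.25)

    for a_hat in (0.05, 0.10):
        print(a_hat, inviscid_frequency_error(a_hat))   # approx. 0.61% and 2.33%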
The influence of the initial wave amplitude on the frequency of capillary waves
increases significantly in viscous fluids. As seen in
Fig. <ref> for Case A with k̂ = 0.01, the time to the
first extrema increases noticeably for increasing initial wave amplitude a_0. However, this frequency shift
diminishes for subsequent extrema as the wave decays
rapidly and the wave amplitude reduces.
The frequency error ε associated with a finite initial wave
amplitude, shown in Fig. <ref>, is approximately constant for the considered range
of dimensionless wavenumbers and is, hence, predominantly a function of the wave
amplitude. For an initial amplitude of a_0 = 0.05
λ the frequency error is ε≈ 2.5%, as observed in
Fig. <ref>, and rises to ε≈ 7.5% for a_0 =
0.10 λ.
§ CONCLUSIONS
The dispersion and viscous attenuation of capillary waves is considerably
affected by the wave amplitude. Using direct numerical simulation in conjunction
with an analytical solution for small-amplitude capillary waves, we have studied
the frequency and the damping ratio of capillary waves with finite amplitude
in the underdamped regime, including critical damping.
The presented numerical results show that capillary waves with a given
wavenumber experience a larger viscous attenuation and exhibit a lower frequency
if their initial amplitude is increased.
Interestingly, however, the critical wavenumber of a capillary wave
in a given two-phase system is independent of the wave amplitude. Similarly, the
wavenumber at which the maximum frequency of the capillary wave is observed
remains unchanged by the wave amplitude, although the maximum frequency depends
on the wave amplitude, with a smaller maximum frequency for increasing wave
amplitude.
Consequently, the viscocapillary lengthscale
l_vc is independent of the wave amplitude and k∼
l_vc^-1 irrespective of the wave amplitude. The reported reduction
in frequency for increasing wave amplitude has been consistently observed for all
considered two-phase systems, meaning that a larger amplitude leads to an
increase of the viscocapillary timescale t_vc. The viscocapillary
timescale has been corrected for the wave amplitude with an empirical
correlation, which leads to an approximately self-similar solution of the
dispersion of capillary waves with finite amplitude in
arbitrary viscous fluids.
Comparing the frequency of finite-amplitude capillary waves with the analytical
solution for infinitesimal amplitude, we found that the analytical
solution based on the infinitesimal-amplitude assumption is applicable with
reasonable accuracy (ε≲ 2.5 %) for capillary waves with an
amplitude of a_0 ≲ 0.05 λ.
The financial support from the Engineering and
Physical Sciences Research Council through Grant
No. EP/M021556/1 is gratefully acknowledged. Data supporting this publication
can be obtained from https://doi.org/10.5281/zenodo.259434 under a Creative
Commons Attribution license.
[Witting(1971)]Witting1971
authorJ. Witting,
journalJ. Fluid Mech. volume50
(year1971) pages321–334.
[Szeri(1997)]Szeri1997
authorA. Szeri,
journalJ. Fluid Mech. volume332
(year1997) pages341–358.
[Falcon et al.(2007)Falcon, Laroche, and Fauve]Falcon2007
authorE. Falcon, authorC. Laroche,
authorS. Fauve,
journalPhys. Rev. Lett. volume98
(year2007) pages094503.
[Deike et al.(2014)Deike, Fuster, Berhanu, and Falcon]Deike2014
authorL. Deike, authorD. Fuster,
authorM. Berhanu, authorE. Falcon,
journalPhys. Rev. Lett. volume112
(year2014) pages234501.
[Abdurakhimov et al.(2015)Abdurakhimov, Arefin, Kolmakov, Levchenko,
Lvov, and Remizov]Abdurakhimov2015
authorL. Abdurakhimov, authorM. Arefin,
authorG. Kolmakov, authorA. Levchenko,
authorY. Lvov, authorI. Remizov,
journalPhys. Rev. E volume91
(year2015) pages023021.
[Hoepffner and Paré(2013)]Hoepffner2013
authorJ. Hoepffner, authorG. Paré,
journalJ. Fluid Mech. volume734
(year2013) pages183–197.
[Castrejón-Pita et al.(2015)Castrejón-Pita,
Castrejón-Pita, Thete, Sambath, Hutchings, Hinch, Lister, and
Basaran]Castrejon-Pita2015
authorJ. Castrejón-Pita,
authorA. Castrejón-Pita, authorS. Thete,
authorK. Sambath, authorI. Hutchings,
authorJ. Hinch, authorJ. Lister,
authorO. Basaran,
journalProc. Nat. Acad. Sci. volume112
(year2015) pages4582–4587.
[Lamb(1932)]Lamb1932
authorH. Lamb, titleHydrodynamics,
publisherCambridge University Press, edition6th
edition, year1932.
[Crapper(1957)]Crapper1957
authorG. Crapper,
journalJ. Fluid Mech. volume2
(year1957) pages532–540.
[Kinnersley(1976)]Kinnersley1976
authorW. Kinnersley,
journalJ. Fluid Mech. volume77
(year1976) pages229.
[Bloor(1978)]Bloor1978
authorM. Bloor,
journalJ. Fluid Mech. volume84
(year1978) pages167–179.
[Longuet-Higgins(1992)]Longuet-Higgins1992
authorM. Longuet-Higgins,
journalJ. Fluid Mech. volume240
(year1992) pages659–679.
[Levich(1962)]Levich1962
authorV. Levich, titlePhysicochemical Hydrodynamics,
publisherPrentice Hall, year1962.
[Landau and Lifshitz(1966)]Landau1966
authorL. Landau, authorE. Lifshitz,
titleFluid Mechanics, publisherPergamon Press Ltd.,
edition3rd edition, year1966.
[Byrne and Earnshaw(1979)]Byrne1979
authorD. Byrne, authorJ. C. Earnshaw,
journalJ. Phys. D: Appl. Phys. volume12
(year1979) pages1133–1144.
[Jeng et al.(1998)Jeng, Esibov, Crow, and Steyerl]Jeng1998
authorU.-S. Jeng, authorL. Esibov,
authorL. Crow, authorA. Steyerl,
journalJ. Phys. Cond. Matter
volume10 (year1998) pages4955–4962.
[Denner(2016)]DennerCapDisp2016
authorF. Denner,
journalPhys. Rev. E volume94
(year2016) pages023110.
[Levich and Krylov(1969)]Levich1969
authorV. Levich, authorV. Krylov,
titleSurface-Tension-Driven Phenomena,
journalAnnu. Rev. Fluid Mech.
volume1 (year1969) pages293–316.
[Delgado-Buscalioni et al.(2008)Delgado-Buscalioni, Chacón, and
Tarazona]Delgado2008
authorR. Delgado-Buscalioni, authorE. Chacón,
authorP. Tarazona,
journalJ. Phys. Cond. Matter
volume20 (year2008) pages494229.
[Denner and van Wachem(2014)]Denner2014
authorF. Denner, authorB. van Wachem,
journalNumer. Heat Transfer, Part B
volume65 (year2014) pages218–255.
[Denner(2013)]DennerThesis2013
authorF. Denner, titleBalanced-Force Two-Phase Flow
Modelling on Unstructured and Adaptive Meshes, Ph.D. thesis, Imperial
College London, year2013.
[Hirt and Nichols(1981)]Hirt1981
authorC. W. Hirt, authorB. D. Nichols,
journalJ. Comput. Phys.
volume39 (year1981) pages201–225.
[Denner and van Wachem(2014)]Denner2014d
authorF. Denner, authorB. van Wachem,
journalJ. Comput. Phys.
volume279 (year2014) pages127–144.
[Brackbill et al.(1992)Brackbill, Kothe, and Zemach]Brackbill1992
authorJ. Brackbill, authorD. Kothe,
authorC. Zemach,
journalJ. Comput. Phys.
volume100 (year1992) pages335–354.
[Denner and van Wachem(2013)]Denner2013
authorF. Denner, authorB. van Wachem,
journalInt. J. Multiph. Flow
volume54 (year2013) pages61–64.
[Denner and van Wachem(2015)]Denner2015
authorF. Denner, authorB. van Wachem,
journalJ. Comput. Phys.
volume285 (year2015) pages24–40.
[Prosperetti(1976)]Prosperetti1976
authorA. Prosperetti,
journalPhys. Fluids volume19
(year1976) pages195–203.
[Prosperetti(1981)]Prosperetti1981
authorA. Prosperetti,
journalPhys. Fluids volume24
(year1981) pages1217–1223.
[Popinet(2003)]Popinet2003
authorS. Popinet,
journalJ. Comput. Phys.
volume190 (year2003) pages572–600.
[Popinet(2009)]Popinet2009
authorS. Popinet,
journalJ. Comput. Phys.
volume228 (year2009) pages5838–5866.
[Deike et al.(2015)Deike, Popinet, and Melville]Deike2015
authorL. Deike, authorS. Popinet,
authorW. Melville,
journalJ. Fluid Mech. volume769
(year2015) pages541–569.
[Moallemi et al.(2016)Moallemi, Li, and Mehravaran]Moallemi2016
authorN. Moallemi, authorR. Li,
authorK. Mehravaran,
journalPhys. Fluids volume28
(year2016) pages012101.
|
http://arxiv.org/abs/1701.07994v1 | 20170127101247 | Constructive Euler hydrodynamics for one-dimensional attractive particle systems | [
"C Bahadoran",
"H Guiol",
"K Ravishankar",
"E Saada"
] | math.PR | [
"math.PR"
] | |
http://arxiv.org/abs/1701.07810v4 | 20170126184437 | Intelligent Topic Selection for Low-Cost Information Retrieval Evaluation: A New Perspective on Deep vs. Shallow Judging | [
"Mucahid Kutlu",
"Tamer Elsayed",
"Matthew Lease"
] | cs.IR | [
"cs.IR"
] |
Mucahid Kutlu1
Tamer Elsayed1
Matthew Lease2
[1]Dept. of Computer Science and Engineering, Qatar University, Qatar
[2]School of Information, University of Texas at Austin, USA
Intelligent Topic Selection for Low-Cost Information Retrieval Evaluation: A New Perspective on Deep vs. Shallow Judging
========================================================================================================================
While test collections provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional pooling techniques to construct test collections at the scale of today's massive document collections (e.g., ClueWeb12's 700M+ Webpages). This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and deep vs. shallow judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to this over-arching question, we conduct a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments. Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
§ INTRODUCTION
Test collections provide the cornerstone for system-based evaluation of information retrieval (IR) algorithms in the Cranfield paradigm <cit.>. A test collection consists of: 1) a collection of documents to be searched; 2) a set of pre-defined user search topics (i.e., a set of topics for which some users would like to search for relevant information, along with a concise articulation of each topic as a search query suitable for input to an IR system); and 3) a set of human relevance judgments indicating the relevance of collection documents to each search topic. Such a test collection allows empirical A/B testing of new search algorithms and community benchmarking, thus enabling continuing advancement in the development of more effective search algorithms. Because
exhaustive judging of all documents in any realistic document collection is cost-prohibitive, traditionally the top-ranked documents from many systems are pooled, and only these top-ranked documents are judged. Assuming the pool depth is sufficiently large, the reliability of incomplete judging by pooling is well-established <cit.>.
However, if insufficient documents are judged, evaluation findings could be compromised, e.g., by erroneously assuming unjudged documents are not relevant when many actually are relevant <cit.>. The great problem today is that: 1) today's document collections are increasingly massive and ever-larger; and 2)
realistic evaluation of search algorithms requires testing them at the scale of document collections to be searched in practice,
so that evaluation findings in the lab carry-over to practical use. Unfortunately, larger collections naturally tend to contain many more relevant (and seemingly-relevant) documents, meaning human relevance assessors are needed to judge the relevance of ever-more documents for each search topic. As a result, evaluation costs have quickly become cost prohibitive with traditional pooling techniques <cit.>.
Consequently, a key open challenge in IR is to devise new evaluation techniques to reduce evaluation cost while preserving evaluation reliability. In other words, how can we best spend a limited IR evaluation budget?
A number of studies have investigated whether it is better to collect many relevance judgments for a few topics – i.e., Narrow and Deep (NaD) judging – or a few relevance judgments for many topics – i.e., Wide and Shallow (WaS) judging, for a given evaluation budget. For example, in the TREC Million Query Track <cit.>, IR systems were run on ∼10K queries sampled from two large query logs, and shallow judging was performed for a subset of topics for which a human assessor could ascribe some intent to the query such that a topic description could be back-fit and relevance determinations could be made.
Intuitively, since people search for a wide variety of topics expressed using a wide variety of queries, it makes sense to evaluate systems across a similarly wide variety of search topics and queries. Empirically, large variance in search accuracy is often observed for the same system across different topics <cit.>, motivating use of many diverse topics for evaluation in order to achieve stable evaluation of systems.
Prior studies have reported a fairly consistent finding that WaS judging tends to provide more stable evaluation for the same human effort vs. NaD judging <cit.>. While this finding does not hold in all cases, exceptions have been fairly limited. For example, one study achieved the same reliability using 250 topics with 20 judgments per topic (5000 judgments in total) as with 600 topics with 10 judgments per topic (6000 judgments in total).
A key observation we make in this work is noting that all prior studies comparing NaD vs. WaS judging assume that search topics are selected randomly.
Another direction of research has sought to carefully choose which search topics are included in a test collection (i.e., intelligent topic selection) so as to minimize the number of search topics needed for a stable evaluation. Since human relevance judgments must be collected for any topic included, using fewer topics directly reduces judging costs. NIST TREC test collections have traditionally used 50 search topics (manually selected from a larger initial set of candidates), following a simple, effective, but costly topic creation process which includes collecting initial judgments for each candidate topic and manual selection of final topics to keep <cit.>. Early work reported that at least 25 topics are needed for stable evaluation, with 50 being better, while another study showed that one set of 25 topics predicted relative performance of systems fairly well on a different set of 25 topics. A subsequent systematic study showed that evaluating IR systems using the “right” subset of topics yields very similar results vs. evaluating systems over all topics. However, it did not propose a method to find such an effective topic subset in practice. Most recently, an iterative algorithm was proposed to find effective topic subsets, showing encouraging results. A key observation we make is that prior work on intelligent topic selection has not evaluated against shallow judging baselines, which tend to be the preferred strategy today for reducing IR evaluation cost. We argue that one must ask whether it is actually useful to select topics, or should one simply perform WaS judging over many topics?
Our Work. In this article, we propose a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our over-arching question of whether topic selection is actually useful in comparison to WaS judging approaches, we integrate previously disparate lines of research on intelligent topic selection and NaD vs. WaS judging. Specifically, we investigate a comprehensive set of relevant factors never previously considered together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments.
We note that prior work on NaD vs. WaS judging has not considered cost ramifications of how judging depth impacts judging speed (i.e., assessors becoming faster at judging a particular topic as they become more familiar with it). Similarly, prior work on NaD vs. WaS judging has not considered topic construction time; WaS judging of many topics appears may be far less desirable if we account for traditional NIST TREC topic construction time <cit.>. As such, our findings also further inform the broader debate on NaD vs. WaS judging assuming random topic selection.
We begin with our first research question RQ-1: How can we select search topics that maximize evaluation validity given document rankings of multiple IR systems for each topic?
We propose a novel application of learning-to-rank (L2R) to topic selection. In particular, topics are selected iteratively via a greedy method which optimizes accurate ranking of systems (Section <ref>). We adopt MART <cit.> as our L2R model, though our approach is largely agnostic and other L2R models might be used instead. We define and extract 63 features for this topic selection task which represent the interaction between topics and ranking of systems (Section <ref>).
To train our model, we propose a method to automatically generate useful training data from existing test collections (Section <ref>). By relying only on pre-existing test collections for model training, we can construct a new test collection without any prior relevance judgments for it, rendering our approach more generalizable and useful. We evaluate our approach on NIST TREC Robust 2003 <cit.> and Robust 2004 <cit.> test collections,
comparing our approach to recent prior work <cit.> and random topic selection (Section <ref>).
Results show consistent improvement over baselines, with greater relative improvement as fewer topics are used.
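To make the selection procedure concrete, a minimal sketch of the greedy loop is given below. The feature extraction (the 63 features) and the trained ranking model are passed in as placeholders, since their details are developed later in the article; rank_model, extract_features, and the scoring convention here are illustrative assumptions, not a definitive implementation.

    def greedy_topic_selection(candidates, systems, rank_model, extract_features, n_topics):
        selected = []
        for _ in range(n_topics):
            # Score each remaining candidate by how well the enlarged subset is
            # predicted to reproduce the ranking of systems over the full topic set
            scored = [(rank_model.predict([extract_features(selected + [t], systems)])[0], t)
                      for t in candidates if t not in selected]
            best_score, best_topic = max(scored, key=lambda pair: pair[0])
            selected.append(best_topic)
        return selected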
In addition to showing improvement of our topic selection method over prior work, as noted above, we believe it is essential to assess intelligent topic selection in regard to the real over-arching question: what is the best way to achieve cost-effective IR evaluation? Is intelligent topic selection actually useful, or should we simply do WaS judging over many topics? To investigate this, we conduct a comprehensive analysis involving a set of focused research questions not considered by the prior work, all utilizing our intelligent topic selection method:
* RQ-2: When topics are selected intelligently, and other factors held constant, is WaS judging (still) a better way to construct test collections than NaD judging?
When intelligent topic selection is used, we find that NaD judging often achieves greater evaluation reliability than WaS judging for the same budget, contrasting with popular wisdom today favoring WaS judging.
* RQ-3 (Judging Speed and Topic Familiarity): Assuming WaS judging leads to slower judging speed than NaD judging, how does this impact our assessment of intelligent topic selection? Past comparisons between NaD vs. WaS judging have typically assumed constant judging speed <cit.>. However, data reported by <cit.> suggests that assessors may judge documents faster as they judge more documents for the same topic (likely due to increased topic familiarity).
Because we can collect more judgments in the same amount of time with NaD vs. WaS judging, we show that NaD judging achieves greater relative evaluation reliability than shown in prior studies, which did not consider the speed benefit of deep judging.
* RQ-4 (Topic Development Time): How does topic development time in the context of NaD vs. WaS judging impact our assessment of intelligent topic selection? Prior NaD vs. WaS studies have typically ignored non-judging costs involved in test collection construction. While some prior work considers topic development time, the 5-minute time assumed is roughly two orders of magnitude less than the 4 hours NIST has traditionally taken to construct each topic <cit.>.
We find that WaS judging is preferable to NaD judging for short topic development times (specifically ≤5 minutes in our experiments). However, as the topic development cost increases further, NaD judging becomes increasingly preferable.
* RQ-5 (Judging Error): Assuming short topic development times reduce judging consistency, how does this impact our assessment of intelligent topic selection in the context of NaD vs. WaS judging? Several studies have reported calibration effects impacting the decisions and consistency of relevance assessors <cit.>.
While NIST has traditionally included an initial “burn-in” judging period as part of topic generation and formulation <cit.>, we posit that drastically reducing topic development time (e.g., from 4 hours <cit.> to 2 minutes <cit.>) could negatively impact topic quality, leading to less well-defined topics and/or less well-calibrated judges, and thereby less reliable judgments. As suggestive evidence, prior work reports high judging agreement in reproducing a “standard” NIST track, but high and inexplicable judging disagreement on TREC's Million Query track <cit.>, which lacked any burn-in period for judges and had far shorter topic generation times. To investigate this, we simulate increased judging error as a function of lower topic generation times.
We find that it is better to invest a portion of the evaluation budget to increase the quality of topics, instead of collecting more judgments for low-quality topics. This also makes NaD judging preferable in many cases, due to increased topic development cost.
Contributions. Our five research questions address the over-arching goal and challenge of minimizing IR evaluation cost while ensuring validity of evaluation. Firstly, we propose an intelligent topic selection algorithm, as a novel application of learning-to-rank, and show its effectiveness vs. prior work. Secondly, we go beyond prior work on topic selection to investigate whether it is actually useful, or whether one should simply do WaS judging over many topics rather than topic selection. Our comprehensive analysis over several factors not considered in prior studies shows that intelligent topic selection is indeed useful, and contrasts with current wisdom favoring WaS judging.
The remainder of this article is organized as follows. Section <ref> reviews the related work on topic selection and topic set design. Section <ref> formally defines the topic selection problem. In Section <ref>, we describe our proposed L2R-based approach in detail. Section <ref> presents our experimental evaluation. Finally, Section <ref> summarizes the contributions of our work and suggests potential future directions.
§ RELATED WORK
Constructing test collections is expensive in the human effort required. Therefore, researchers have proposed a variety of methods to reduce the cost of creating test collections. Proposed methods include: developing new evaluation measures and statistical methods for the case of incomplete judgments <cit.>, finding the best sample of documents to be judged for each topic <cit.>,
inferring relevance judgments <cit.>,
topic selection <cit.>, evaluation with no human judgments <cit.>, crowdsourcing <cit.>, and others. The reader is referred to <cit.> and <cit.> for a more detailed review of prior work on methods for low-cost IR evaluation.
§.§ Topic Selection
To the best of our knowledge, the first study seeking to select the best subset of topics for evaluation built a system-topic graph representing the relationship between topics and IR systems and ran the HITS algorithm on it, hypothesizing that topics with higher ‘hubness’ scores would better distinguish between systems. However, follow-up experiments showed that this hypothesis does not hold.
Subsequent work experimentally showed that if we choose the right subset of topics, we can achieve a ranking of systems that is very similar to the ranking when we employ all topics; however, it did not provide a solution for finding the right subset of topics. This finding has motivated other researchers to investigate the problem. One line of work stressed generality and showed that a carefully selected subset of topics adequate to evaluate one set of systems can also be adequate to evaluate a different set of systems. Another reported that using the easiest topics, selected with a Jensen-Shannon divergence approach, did not work well to reduce the number of topics. Other studies focused on selecting the subset of topics to extend an existing collection in order to increase its re-usability, investigated how
the capability of topics to predict overall system effectiveness has changed over the years in TREC test collections,
and reduced the cost of preference-based IR evaluation using dissimilarity-based query selection.
The closest study to our own is <cit.>, which employs an adaptive algorithm for topic selection. It selects the first topic randomly. Once a topic is selected, the relevance judgments are acquired and used to assist with the selection of subsequent topics.
Specifically, in the following iterations, the topic that is predicted to maximize the current Pearson correlation is selected. In order to do that, they predict relevance probabilities of qrels for the remaining topics using a Support Vector Machine (SVM) model trained on the judgments from the topics selected thus far. Training data is extended at each iteration by adding the relevance judgments from each topic as it is selected in order to better select the next topic.
Further studies investigated topic selection for other purposes, such as creating low-cost datasets for training learning-to-rank algorithms <cit.>, system rank estimation <cit.>, and selecting training data to improve supervised data fusion algorithms <cit.>. These studies do not consider topic selection for low-cost evaluation of IR systems.
§.§ How Many Topics Are Needed?
Past work has investigated the ideal size of test collections and how many topics are needed for a reliable evaluation. While traditional TREC test collections employ 50 topics, a number of researchers have claimed that 50 topics are not sufficient for a reliable evaluation <cit.>. Many researchers have reported that wide and shallow judging is preferable to narrow and deep judging <cit.>.
The authors of <cit.> experimentally compared deep vs. shallow judging in terms of budget utilization and found that 20 judgments per topic over 250 topics was the most cost-effective configuration in their experiments. <cit.> measured the reliability of TREC test collections with regard to generalization and concluded that the number of topics needed for a reliable evaluation varies across different tasks. <cit.> analyzed different test collection reliability measures with a special focus on the number of topics.
In order to calculate the number of topics required, <cit.> proposed adding topics iteratively until the desired statistical power is reached. Sakai proposed methods based on two-way ANOVA, confidence intervals, and the t-test with one-way ANOVA <cit.>. In his follow-up studies, Sakai investigated the effect of score standardization in topic set design <cit.> and provided guidelines for test collection design under a given fixed budget <cit.>. <cit.> applied the method of <cit.> to decide the number of topics for evaluation measures of a Short Text Conversation task[http://ntcir12.noahlab.com.hk/stc.htm]. <cit.> explored how many topics and IR systems are needed for a reliable topic set size estimation.
While these studies focused on calculating the number of topics required, our work focuses on how to select the best topic set for a given size in order to maximize the reliability of evaluation.
We also investigate further considerations impacting the debate over shallow vs. deep judging: familiarization of users to topics, and the effect of topic development costs on the budget utilization and the quality of judgments for each topic.
The authors of <cit.> investigated the problem of evaluating commercial search engines by sampling queries based on their distribution in query logs. In contrast, our work does not rely on any prior knowledge about the popularity of topics when performing topic selection.
§.§ Topic Familiarity vs. Judging Speed
Prior work <cit.> reported that as the number of judgments per topic increases (when collecting 8, 16, 32, 64, or 128 judgments per topic), the median time to judge each document decreases (respectively: 15, 13, 15, 11, and 9 seconds). This suggests that assessors become more familiar with a topic as they judge more documents for it, and this greater familiarity yields greater judging speed. However, prior work comparing deep vs. shallow judging did not consider this, instead assuming that judging speed is constant regardless of judging depth. Consequently, our experiments in Section <ref> revisit this question, considering how faster judging with greater judging depth per topic may impact the trade-off between deep vs. shallow judging in maximizing evaluation reliability for a given budget of human assessor time.
§.§ Topic Development Cost vs. Judging Consistency
Past work has utilized a variety of different processes to develop search topics when constructing test collections. These different processes explicitly or implicitly enact potentially important trade-offs between human effort (i.e., cost) and the quality of the resultant topics. For example, NIST has employed a relatively costly process in order to ensure creation of very high quality topics <cit.>:
For the traditional ad hoc tasks, assessors generally came to NIST with some rough ideas for topics having been told the target document collection. For each idea, they would create a query and judge about 100 documents (unless at least 20 of the first 25 were relevant, in which case they would stop at 25 and discard the idea). From the set of candidate topics across all assessors, NIST would select the final test set of 50 based on load-balancing across assessors, number of relevant found, eliminating duplication of subject matter or topic types, etc. The judging was an intrinsic part of the topic development routine because we needed to know that the topic had sufficiently many (but not too many) relevant in the target document set. (These judgments made during the topic development phase were then discarded. Qrels were created based only on the judgments made during the official judgment phase on pooled participant results.) We used a heuristic that expected one out of three original ideas would eventually make it as a test set topic. Creating a set of 50 topics for a newswire ad hoc collection was budgeted at about 175-225 assessor hours, which works out to about 4 hours per final topic.
In contrast, the TREC Million Query (MQ) Track used a rather different procedure to develop topics. In the 2007 MQ Track <cit.>, 10000 queries were sampled from a large search engine query log.
The assessment system showed 10 randomly selected queries to each assessor, who then selected one and converted it into a standard TREC topic by back-fitting a topic description and narrative to the selected query. The track organizers <cit.> reported that the median time for developing a topic was roughly 5 minutes. In the 2008 MQ Track <cit.>, assessors could refresh the list of 10 candidate queries if they did not want to judge any of the listed candidates. The organizers reported that the median time for viewing a list of queries was 22 seconds and for back-fitting a topic description was 76 seconds. On average, each assessor viewed 2.4 lists to develop each topic. Therefore, the cost of developing a topic was roughly 2.4 × 22 + 76 ≈ 129 seconds, or 2.1 minutes.
The examples above show a vast range of topic creation times: from 4 hours to 2 minutes per topic. Therefore, in Section <ref>, we investigate deep vs. shallow judging when cost of developing topics is also considered.
In addition to considering topic construction time, we might also consider whether aggressive reduction in topic creation time has other unintended, negative impacts on topic quality. For example, <cit.> reported calibration effects that change judging decisions as assessors familiarize themselves with a topic. Presumably NIST's 4-hour topic creation process provides judges ample time to familiarize themselves with a topic, and as noted above, judgments made during the topic development phase are then discarded. In contrast, it seems MQ track assessors began judging almost immediately after selecting a query for which to back-fit a topic, with no initial topic formation period for establishing the topic and no discarding of judgments made during that period. Further empirical evidence suggesting quality concerns with MQ track judgments was recently reported by <cit.>, who described a detailed judging process they employed to reproduce NIST judgments. While those authors reported high agreement between their own judging and crowd judging vs. NIST on the 2009 Web Track, for NIST judgments from the 2009 MQ track, the authors and crowd judges were both self-consistent while disagreeing often with NIST judges. The authors also reported that even after detailed analysis of the cases of disagreement, they could not find a rationale for the observed MQ track judgments. Taken in sum, these findings suggest that aggressively reducing topic creation time may negatively impact the quality of judgments collected for that topic. For example, while an assessor is still formulating and clarifying a topic for himself/herself, any judgments made at this early stage of topic evolution may not be consistent with judgments made once the topic is further crystallized. Consequently, in Section <ref> we revisit the question of deep judging of few topics vs. shallow judging of many topics, assuming that low topic creation times may also mean less consistent judging.
§ PROBLEM DEFINITION
In this section, we define the topic selection problem.
We assume that we have a TREC-like setup: a document collection has already been acquired, a large pool of topics and ranked lists of IR systems for each topic are also available. Our goal is to select a certain number of topics from the topic pool such that evaluation with those selected topics yields the most similar ranking of the IR systems to the “ground-truth”.
We assume that the ground-truth ranking of the IR systems is the one when we use all topics in the pool for evaluation.
We can formulate this problem as follows.
Let T={t_1,t_2,...,t_N} denote the pool of N topics, S={s_1,s_2,...,s_K} denote the set of K IR systems to be evaluated, and R_<S,T,e> denote the ranking of systems in S when they are evaluated based on evaluation measure e over the topic set T (notation used in equations and algorithms is shown in Table <ref>).
We aim to select a subset P ⊂ T of M topics that maximizes the correlation (as a measure of similarity between two ranked lists) between the ranking of systems over P (i.e., considering only M topics and their corresponding relevance judgments) and the ground-truth ranking of systems (over T). Mathematical definition of our goal is as follows:
max_{P ⊂ T, |P| = M} corr(R_<S,P,e>, R_<S,T,e>)
where corr is a ranking similarity function, e.g., Kendall-τ <cit.>.
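For concreteness, the following Python sketch (not the authors' code) shows how this objective can be computed when per-topic effectiveness scores are available; the dictionary `scores`, mapping each system to its per-topic scores, is an assumed data structure.

```python
from scipy.stats import kendalltau

def system_ranking(scores, topics):
    """Order systems by mean effectiveness over the given topics."""
    means = {s: sum(per_topic[t] for t in topics) / len(topics)
             for s, per_topic in scores.items()}
    return sorted(means, key=means.get, reverse=True)

def subset_objective(scores, subset, all_topics):
    """corr(R_<S,P,e>, R_<S,T,e>) with Kendall's tau as corr."""
    full = system_ranking(scores, all_topics)
    part = system_ranking(scores, subset)
    # Compare the full-pool ranking with the subset-based ranking.
    tau, _ = kendalltau(range(len(full)),
                        [part.index(s) for s in full])
    return tau
```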
§ PROPOSED APPROACH
The problem we are tackling is challenging since we do not know the actual performance of systems (i.e. their performance when all topics are employed for evaluation) and we would like to find a subset of topics that achieves similar ranking to the unknown ground-truth.
To demonstrate the complexity of the problem, let us assume that we obtain the judgments for all topic-document pairs (i.e., we know the ground-truth ranking). In this case, we have to check (N choose M) possible subsets in order to find the optimal one (i.e., the one that produces a ranking that has the maximum correlation with the ground-truth ranking). For example, if N=100 and M=50, we need to check around 10^29 subsets of topics. Since this is computationally intractable, we need an approximation algorithm to solve this problem. Therefore, we first describe a greedy oracle approach to select the best subset of topics when we already have the judgments for all query-document pairs (Section <ref>). Subsequently, we discuss how we can employ this greedy approach when we do not already have the relevance judgments (Section <ref>). Finally, we introduce our L2R-based topic selection approach (Section <ref>).
§.§ Greedy Approach
We first explore a greedy oracle approach that selects topics in an iterative way when relevance judgments are already obtained. Instead of examining all possibilities, at each iteration, we select the 'best' topic (among the currently non-selected ones) that, when added to the currently-selected subset of topics, will produce the ranking that has the maximum correlation with the ground-truth ranking of systems.
Algorithm <ref> illustrates this oracle greedy approach. First, we initialize set of selected topics (P) and set of candidate topics to be selected (P̅) (Line 1).
For each candidate topic t in P̅, we rank the systems over the selected topics P in addition to t (R_<S,P∪{t},e>), and calculate the Kendall's τ achieved with this ranking (Lines 3-4). We then pick the topic achieving the highest Kendall-τ score among other candidates (Line 5) and update P and P̅ accordingly (Lines 6-7). We repeat this process until we reach the targeted subset size M (Lines 2-7).
While this approach has O(M× N) complexity (which is clearly much more efficient compared to selecting the optimal subset), it is also impractical due to leveraging the real judgments (which we typically do not have in advance) in order to calculate the ground-truth ranking and thereby Kendall-τ scores.
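A minimal sketch of this greedy oracle, reusing the hypothetical `subset_objective` helper above and therefore assuming full relevance judgments are available:

```python
def greedy_oracle(scores, all_topics, m):
    """Algorithm 1: iteratively add the topic maximizing Kendall's tau."""
    selected, candidates = [], set(all_topics)
    while len(selected) < m:
        # Evaluate each candidate when added to the current selection.
        best = max(candidates,
                   key=lambda t: subset_objective(scores, selected + [t],
                                                  all_topics))
        selected.append(best)
        candidates.remove(best)
    return selected
```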
§.§ Performance Prediction Approach
One possible way to avoid the need for the actual relevance judgments is to predict the performance of IR systems using automatic evaluation methods <cit.> and then rank the systems based on their predicted performance.
For example, <cit.> predicts relevance probabilities of document-topic pairs by employing an SVM classifier and selects topics in a greedy way similar to Algorithm <ref>. We use their selection approach as a baseline in our experiments (Section <ref>).
§.§ Proposed Learning-to-Rank Approach
In this work, we formulate the topic selection problem as a learning-to-rank (L2R) problem. In a typical L2R problem, we are given a query q and a set of documents D, and a model is learned to rank those documents in terms of relevance with respect to q. The model is trained using a set of queries and their corresponding labeled documents.
In our context, we are given a set of currently-selected topic set P (analogous to the query q) and the set of candidate topics P̅ to be selected from (analogous to the documents D), and we aim to train a model to rank the topics in P̅ based on the expected effectiveness of adding each to P.
The training samples used to train the model are tuples of the form (P, t, corr(R_<S, P∪{t}, e>, R_<S, T, e>)), where the measured correlation is used to label topic t with respect to P. Notice that the correlation is computed using the true relevance judgments in the training data. This enables us to use the wealth of existing test collections to acquire data for training our model, as explained in Section <ref>.
We apply this L2R problem formulation to the topic selection problem using our greedy approach.
We use the trained L2R model to rank the candidate topics and then select the first-ranked one. The algorithm is shown in Algorithm <ref>. At each iteration, a feature vector v_t is computed for each candidate topic t in P̅ using a feature extraction function f (Lines 3-4), detailed in Section <ref>. The candidate topics are then ranked using our learned model (Line 5) and the topic in the first rank is picked (Line 6). Finally, the topic sets P and P̅ are updated (Lines 7-8) and a new iteration is started, if necessary.
§.§.§ Features
In this section, we describe the features we extract in our L2R approach for each candidate topic.
The authors of <cit.> mathematically showed that, in the greedy approach, the topic selected at each iteration should be different from the already-selected ones (i.e., topics in P) while being representative of the non-selected ones (i.e., topics in P̅). Therefore, the extracted set of features should cover the candidate topic as well as the two sets P and P̅. Features should therefore capture the interaction between the topics and the IR systems in addition to the diversity between the IR systems in terms of their retrieval results.
We define two types of feature sets. Topic-based features are extracted from an individual topic while set-based features are extracted from a set of topics by aggregating the topic-based features extracted from each of those topics.
The topic-based features include 7 features that are extracted for a given candidate topic t_c and are listed in Table <ref>.
For a given set of topics (e.g., currently-selected topics P), we extract the set-based features by computing both average and standard deviation of each of the 7 topic-based features extracted from all topics in the set. This gives us 14 set-based features that can be extracted for a set of topics. We compute these 14 features for each of the following sets of topics:
* currently-selected topics (P)
* not-yet-selected topics (P̅)
* selected topics with the candidate topic (P∪{t_c})
* not-selected topics excluding the candidate topic (P̅-{t_c})
In total, we have 63 features for each data record representing a candidate topic: 14 × 4 = 56 features for the above groups + 7 topic-based features. We now describe the seven topic-based features that are at the core of the feature set.
*
Average sampling weight of documents (f_w̅): In the statAP sampling method <cit.>, a weight is computed for each document based on where it appears in the ranked lists of all IR systems. Simply, the documents at higher ranks get higher weights. The weights are then used in a non-uniform sampling strategy to sample more documents relevant to the corresponding topic. We compute the average sampling weight of all documents that appear in the pool of the candidate topic t_c as follows:
f_w̅(t_c) = (1 / |D_t_c|) ∑_{d ∈ D_t_c} w(d,S)
where D_t_c is the document pool for topic t_c and w(d, S) is the weight of document d over the IR systems S. High f_w̅ values mean that the systems have common documents at higher ranks for the corresponding topic, whereas lower f_w̅ values indicate that
the systems return significantly different ranked lists or have only the documents at lower ranks in common.
*
Standard deviation of weight of documents (f_σ_w): Similar to f_w̅, we also compute the standard deviation of the sampling weights of documents for the candidate topic as follows:
f_σ_w(t_c) = σ{w(d,S) | ∀ d ∈ D_t_c}
*
Average τ score for ranked lists pairs (f_τ̅):
This feature computes Kendall's τ correlation between ranked lists of each pair of the IR systems and then takes the average (as shown in Equation <ref>) in order to capture the diversity of the results of the IR systems. The depth of the ranked lists is set to 100. In order to calculate the Kendall's τ score, the documents that appear in one list but not in the other are concatenated to the other list so that both ranked lists contain the same documents. If there are multiple documents to be concatenated, the order of the documents in the ranked list is preserved during concatenation. For instance, if system B returns documents {a,b,c,d} and system R returns {e,a,f,c} for a topic, then the concatenated ranked lists of B and R are {a,b,c,d,e,f} and {e,a,f,c,b,d}, respectively.
f_τ̅(t_c) = (1 / (|S| choose 2)) ∑_{i=1}^{|S|-1} ∑_{j=i+1}^{|S|} corr(L_s_i(t_c), L_s_j(t_c))
where L_s_j(t_c) represents the ranked list resulting from system s_j for the topic t_c.
*
Standard deviation of τ scores for ranked lists pairs (f_σ_τ): This feature computes the standard deviation of the τ scores of the pairs of ranked lists as follows:
f_σ_τ(t_c) =σ{corr(L_s_i(t_c), L_s_j(t_c)) | ∀ i,j ≤ |S|, i ≠ j}
*
Judgment cost of the topic (f_$): This feature estimates the cost of judging the candidate topic as the number of documents in the pool at a certain depth. If IR systems return many different documents, then the judging cost increases; otherwise, it decreases due to having many documents in common. We set pool depth to 100 and normalize costs by dividing by the maximum possible cost (i.e., 100 × |S|).
f_$(t_c) = |D_t_c| / (|S| × 100)
*
Standard deviation of judgment costs of system pairs (f_σ_$): The judgment cost depends on systems participating in the pool. We construct the pool separately for each pair of systems and compute the standard deviation of the judgment cost across pools as follows:
f_σ_$(t_c) =σ{|L_s_i(t_c)∪ L_s_j(t_c)| | ∀ i,j ≤ |S|, i ≠ j}
*
Standard deviation of estimated performance of systems (f_σ_QPP): We finally compute standard deviation of the estimated performances of the IR systems for the topic t_c using a query performance predictor (QPP) <cit.>.
The QPP is typically used to estimate the performance of a single system and is affected by the range of retrieval scores of retrieved documents. Therefore, we normalize the document scores using min-max normalization before computing the predictor.
f_σ_QPP(t_c) = σ{QPP(s_i,t_c) | ∀ i ≤ |S|}
where QPP(s_i,t_c) is the performance predictor applied
on system s_i given topic t_c.
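As an illustration of the definitions above, the sketch below computes two of the seven core features: the pairwise-τ features (f_τ̅ and f_σ_τ, using the list-concatenation step described earlier) and the normalized judgment cost f_$. The structure `runs`, mapping each system to its top-100 ranked list for the candidate topic, is an assumption of this sketch.

```python
from itertools import combinations
from statistics import mean, stdev
from scipy.stats import kendalltau

def concat_missing(a, b):
    """Append b's documents that are unseen in a, preserving b's order."""
    seen = set(a)
    return a + [d for d in b if d not in seen]

def pairwise_taus(runs):
    """Kendall's tau for every pair of systems' ranked lists on one topic."""
    taus = []
    for la, lb in combinations(runs.values(), 2):
        ua, ub = concat_missing(la, lb), concat_missing(lb, la)
        pos = {d: i for i, d in enumerate(ua)}   # rank of each doc in ua
        taus.append(kendalltau(range(len(ua)), [pos[d] for d in ub])[0])
    return taus

def tau_features(runs):
    taus = pairwise_taus(runs)
    return mean(taus), stdev(taus)               # f_tau_bar, f_sigma_tau

def judgment_cost(runs, depth=100):
    pool = {d for lst in runs.values() for d in lst[:depth]}
    return len(pool) / (len(runs) * depth)       # f_$ (normalized)
```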
§.§.§ Generating Training Data
Our proposed L2R approach ranks topics based on their effectiveness when added to some currently-selected set of topics. This makes creating training data for the model a challenging task. First, there is a countless number of possible scenarios (i.e., different combinations of topic sets) that we can encounter during the topic selection process. Second, the training data should specify which topic is more preferable in a given scenario.
We developed a method to generate training data by leveraging existing test collections for which we have both relevance judgments and document rankings from several IR systems (e.g., TREC test collections). We first simulate a scenario in which a subset of topics has already been selected. We then rank the rest of the topics based on the correlation with the ground-truth ranking when each topic is added to the currently-selected subset of topics. We repeat this process multiple times and vary the number of already-selected topics in order to generate more diverse training data. The algorithm for generating training data from one test collection is given in Algorithm <ref>. The algorithm could also be applied to several test collections in order to generate larger training data.
The algorithm first determines the ground-truth ranking of IR systems using all topics in the test collection (Line 1). It then starts the process of generating the data records for each possible topic subset size for the targeted test collection (Line 2).
For each subset size i, we repeat the following procedure W times (Line 3); in each, we randomly select i topics, assuming that these represent the currently-selected subset of topics P (Line 4). For each topic t of the non-selected topics P̅, we rank the systems in case we add t to P and calculate the Kendall's τ score achieved in that case (Lines 6-9). This gives us how effective each of the candidate topics would be in the IR evaluation for this specific scenario (i.e., when those i topics are already selected). This also allows us to compare topics and rank them in terms of their effectiveness. In order to generate labels that can be used in L2R methods, we map each τ score to a value within a certain range. We first divide the range between the maximum and minimum τ scores into K equal bins and then assign each topic to its corresponding bin based on its effectiveness. For example, let K=10, τ_max = 0.9, and τ_min = 0.7. The τ ranges for labeling will be 0=[0.7-0.72), 1=[0.72-0.74), ..., 9=[0.88-0.9]. Topics are then labeled from 0 to (K-1) based on their assigned bin. For example, if we achieve a τ=0.73 score for a particular topic, then the label for the corresponding data record will be 1. Finally, we compute the feature vector for each topic, assign the labels, and output the data records for the current scenario (Lines 10-13). We repeat this process W times (Line 3) to capture more diverse scenarios for the given topic subset size. We can further increase the size of the generated training data by applying the algorithm on different test collections and merging the resulting data.
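The label-binning step (Lines 10-13) can be sketched as follows; this is an illustrative reading of the binning rule that reproduces the worked example above:

```python
def bin_labels(tau_by_topic, k=50):
    """Map each candidate topic's tau score to one of k discrete labels."""
    lo, hi = min(tau_by_topic.values()), max(tau_by_topic.values())
    width = (hi - lo) / k or 1.0     # guard against all taus being equal
    labels = {}
    for t, tau in tau_by_topic.items():
        # The maximum tau falls into the last bin, index k-1.
        labels[t] = min(int((tau - lo) / width), k - 1)
    return labels

# With k=10, lo=0.7, hi=0.9 this reproduces the example in the text:
# tau=0.73 -> int(0.03 / 0.02) = 1.
```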
§ EVALUATION
In this section, we evaluate our proposed L2R topic selection approach with respect to our research questions and baseline methods. Section <ref> details our experimental setup, including generation of training data and tuning of our L2R model parameters.
We present results of our topic selection experiments (RQ-1) in Section <ref>. We report ablation analysis of our features in Section <ref> and discuss the evaluation of the parameters of our approach in Section <ref>. In Section <ref>, we report the results of our experiments for intelligent topic selection with a fixed budget (RQ-2) and considering different parameters in the debate of NaD vs. WaS judging: varying judging speed (RQ-3), topic generation time (RQ-4), and judging error (RQ-5).
§.§ Setup
We adopt the MART <cit.> implementation in the RankLib library[<https://sourceforge.net/p/lemur/wiki/RankLib/>] as our L2R model[We also focused on other L2R models but MART yielded the best results in our initial experiments when developing our method.]. To tune MART parameters, we partition our data into disjoint training, tuning, and testing sets. We assume that the ground-truth ranking of systems is given by MAP@100.
Test Collections. We consider two primary criteria in selecting test collections to use in our experiments: (1) the collection should contain many topics, providing a fertile testbed for topic selection experimentation, and (2) the set of topics used in training, tuning, and testing should be disjoint to avoid over-fitting.
To satisfy these criteria, we adopt the TREC-9 <cit.> and TREC-2001 <cit.> Web Track collections, as well as TREC-2003 <cit.> and TREC-2004 <cit.> Robust Track collections. Details of these test collections are presented in Table <ref>. Note that all four collections target ad-hoc retrieval. We use TREC-9 and TREC-2001 test collections to generate our training data.
Robust2003 and Robust2004 collections are particularly well-suited to topic selection experimentation since they have relatively more topics (100 and 249, respectively) than many other TREC collections.
However, because topics of Robust2003 were repeated in Robust2004, we define a new test collection subset which excludes all Robust2003 topics from Robust2004, referring to this subset as Robust2004_149.
We use Robust2003 and Robust2004_149 collections for tuning and testing. When testing on Robust2003, we tune parameters on Robust2004_149, unless otherwise noted.
Similarly, when testing on Robust2004_149, we tune parameters on Robust2003.
Generation of Training Data. We generate 100K data records for each topic set size from 0-49 (i.e., N=50 and W=100K in Algorithm <ref>) for TREC-9 and TREC-2001 and remove the duplicates. The label range is set to 0-49 (i.e., K = 50 in Algorithm <ref>) since each of TREC-9 and TREC-2001 has 50 topics.
We merge the data records generated from each test collection to form our final training data. We use this training data in our experiments unless otherwise stated.
Parameter Tuning. To tune parameters of MART, we fix the number of trees to 50 and vary the number of leaves from 2-50 with a step-size of 2. For each of those 25 considered configurations, we build a L2R model and select 50 topics (the standard number of topics in TREC collections) using the tuning set. At each iteration of the topic selection process, we rank the systems based on the topics selected thus far and calculate Kendall's τ rank correlation vs. the ground-truth system ranking. Finally, we selected the parameter configuration which achieves the highest average τ score while selecting the first 50 topics.
Evaluation Metrics. We adopt MAP@100 and statAP <cit.> in order to measure the effectiveness of IR systems. In computing MAP, we use the full pool of judgments for each selected topic. In computing statAP, the number of sampled documents varies in each experiment and are reported in the corresponding sections. Because statAP is stochastic, we repeat the sampling 20 times and report average results.
Baselines. We compare our approach to two baselines:
* Baseline 1: Random. For a given topic subset size M, we randomly select topics R times and calculate the average Kendall's τ score achieved over the R trials. We set R to 10K for MAP and 1K for statAP (due to its higher computation cost than MAP).
* Baseline 2: Hosseini et al. <cit.>. We implemented their method using the WEKA library <cit.> since no implementation is available from the authors. The authors do not specify the parameters used in their linear SVM model, so we adopt the default parameters of WEKA's sequential minimal optimization implementation for linear SVMs. Due to its stochastic nature, we run it 50 times and report the average performance.
In addition to these two baselines, we also compare our approach to
the greedy oracle approach defined in Section <ref> (See Algorithm <ref>). This serves as a useful oracle upper-bound, since in practice we would only collect judgments for a topic after it was selected.
§.§ Selecting A Fixed Number of Topics
In our first set of experiments, we evaluate our proposed L2R topic selection approach vs. baselines in terms of Kendall's τ rank correlation achieved as a function of number of topics (RQ-1). We assume the full pool of judgments are collected for each selected topic and evaluate with MAP.
Figure <ref> shows results on Robust2003 and Robust2004_149 collections. Given the computational complexity of Hosseini et al.'s method, which re-trains the classifier at each iteration, we could only select 63 topics for Robust2003 and 77 topics for Robust2004_149 after 2 days of execution[Two days is the time limit for executing programs on the computing cluster we used for experiments.], so its plots terminate early.
The upper-bound Greedy Oracle is seen to achieve 0.90 τ score (a traditionally-accepted threshold for acceptable correlation <cit.>) with only 12 topics in Robust2003 and 20 topics in Robust2004_149.
Our proposed L2R method strictly outperforms baselines for Robust2004_149 and outperforms baselines for Robust2003 except when 70% and 80% of topics are selected.
Relative improvement over baselines is seen to increase as the number of topics is reduced. This suggests that our L2R method becomes more effective as either fewer topics are used, or as more topics are available to choose between when selecting a fixed number of topics.
In our next experiment, instead of assuming the full document pool is judged for each selected topic, we consider a more parsimonious judging condition in which statAP is used to select only 64 or 128 documents to be judged for each selected topic.
The average τ scores for each method are shown in Figure <ref>. The vertical bars represent the standard deviation across trials.
Overall, similar to the first set of experiments, our approach outperforms the baselines in almost all cases and becomes more preferable as the number of selected topics decreases. Similar to the previous experiment with full pooling, our L2R approach performs weakest on Robust2003 when 70 or 80 topics are selected. In this case, our L2R approach is comparable to random selection (slightly above it), whereas in the previous experiment we performed slightly worse than random for 70 or 80 topics on this collection.
We were surprised to see Hosseini et al.'s topic selection method performing worse than random in our experiments, contrary to their reported results. Consequently, we investigated this in great detail. In comparing results of our respective random baselines, we noted that our own random baseline performed τ ≈ 0.12 better on average than theirs over the 20 results they report (using 10, 20, 30, ..., 200 topics), despite our carefully following their reported procedure for implementing the baseline.
To further investigate this discrepancy in baseline performance, we also ran our random baseline on TREC-8 and compared our results with those reported by Guiver et al. <cit.>; our results were quite similar. Hosseini et al. kindly discussed the issue with us, and the best explanation we could find was that they took "special care when considering runs from the same participant", so perhaps different preprocessing of participant runs between our two studies may contribute to this empirical discrepancy.
Overall, our approach outperforms the baselines in almost all cases demonstrated over two test collections. While the baseline methods do not require any existing test collections for training, the existing wealth of test collections produced by TREC and other shared task campaigns make our method feasible.
Moreover, our experiments show that we can leverage existing test collections in building models that are useful for constructing other test collections. This suggests that there are common characteristics across different test collections that can be leveraged even in other scenarios that are out of the scope of this work, such as the prediction of system rankings in a test collection using other test collections.
§.§ Feature Ablation Analysis
In this experiment, we conduct a feature ablation analysis to study the impact of each core feature and also each group of features on the performance of our approach.
We divide our feature set into mutually-exclusive subsets in two ways: core-feature-based subsets, and topic-group-based subsets. Each of the core-feature-based subsets consists of all features related to one of our 7 core features (defined in Table <ref>). That yields 9 features in each of these subsets; we denote each of them by {f}, where f represents a core feature. In the other way, we define 5 groups of the topics: the candidate topic t_c (which has 7 core features) and four other groups of topics defined in Section <ref> (each has a subset of features using average and standard deviation of the 7 core features, yielding a total of 14 features). We denote each of these feature subsets by F(g), where g represents a group of topics.
In our ablation analysis, we apply leave-one-subset-out method in which we exclude one subset of the features at a time and follow the same experimental procedure with the previous experiments using the remaining features. We evaluate the effectiveness of systems using MAP.
For each subset of features, we report the average Kendall's τ correlation over all possible topic set sizes (1 to 100 for Robust2003 and 1 to 149 for Robust2004_149) to see its effect on the performance. The results are shown in Table <ref>.
The table shows four interesting observations. First, {f_σ_$} and {f_σ_w} are the most effective among the core-feature-based subsets, while F(P∪{t_c}) and F(t_c) are the most effective among the topic-group-based subsets, when testing on Robust2003 and Robust2004_149 respectively. Second, {f_σ_τ} has the least impact in both test collections.
Third, the feature subset F(P∪{t_c}) is the best on average over all subsets, which is expected, as it directly captures the effect of adding the candidate topic to the currently-selected topics. Finally, testing on both test collections, we achieve the best performance when we use all features.
§.§ Robustness and Parameter Sensitivity
The next set of experiments we report assess our L2R method's effectiveness across different training datasets and parameterizations. We evaluate the effectiveness of systems using MAP.
In addition to presenting results for all topics, we also compute the average τ score over 3 equal-sized partitions of the topics. For example, in Robust2004, we calculate the average τ scores for each of the following partitions: 1-50 (denoted by τ_1-33%), 51-100 (denoted by τ_34-66%) and 101-149 (denoted by τ_67-100%). These results are presented in a table within each figure.
Effect of Label Range in Training Set: As explained in Section <ref>, we can assign labels to data records in various ranges. In this experiment, we vary the label range parameter (K in Line 12 of Algorithm <ref>) and compare the performance of our approach with the corresponding training data on Robust2003 and Robust2004_149 test collections. The results are shown in Figure <ref>. It is hard to draw a clear conclusion, since each labeling range performs differently in different cases. For instance, when we use only 5 labels (i.e., labeling 0-4), performance is very good with few selected topics; as the number of topics increases, performance becomes very close to the random approach. Considering the results on Robust2003 and Robust2004_149 together, using 50 labels (i.e., labeling 0-49) gives more consistent and better results than the others. Using 25 labels is better than using 10 or 5 labels, in general. Therefore, we observe that fine-grained labels yield better results with our L2R approach.
Effect of Size of Tuning Dataset: In this experiment, we evaluate how robust our approach is to having fewer topics available for tuning.
For this experiment, we randomly select 50 and 75 topics from Robust2003 and remove the non-selected ones from the test collection. We refer to these reduced tuning sets as R3(50) and R3(75), and use them for tuning when testing on Robust2004_149. When testing on Robust2003, we follow a similar procedure: we randomly select 50, 75, and 100 topics from Robust2004_149. We repeat this process 5 times and report the average τ score achieved.
The results are presented in Figure <ref>. The vertical bars represent the standard deviation across 5 different trials. As expected, over Robust2004_149, we achieve the best performance when we tune with all 100 topics (i.e., actual Robust2003); employing 75 topics is slightly better than employing 50 topics.
Over Robust2003, when the number of selected topics is ≤33% of the whole topic pool size, tuning with 149 topics gives the best results. For the rest of the cases, tuning with 75 topics gives slightly better results than others. As expected, tuning with only 50 topics yields the worst results in general. Intuitively, using test collections with more tuning topics is seen to yield better results.
Effect of Test Collections Used in Training:
In this experiment, we fix the training data set size, but vary the test collections used for generating the training data. For the experiments so far, we had generated 100K data records for each topic set size from 0-49 with TREC-9 and TREC-2001 and subsequently combined both (yielding 200K records in total). In this experiment, in addition to this training data, we generate 200K data records for each topic set size from 0-49 from each of TREC-9 and TREC-2001 separately. That is, we have 3 different datasets (namely, T9&T1, T9, and T1), and each dataset has roughly the same number of data records. The results are shown in Figure <ref>. As expected, using more test collections leads to better and more consistent results. Therefore, instead of simply generating more data records from the same test collection, diversifying the test collections present in the training data appears to increase our L2R method's effectiveness.
§.§ Topic Selection with a Fixed Budget
Next, we seek to compare narrow and deep (NaD) vs. wide and shallow (WaS) judging when topics are selected intelligently (RQ-2), considering also familiarization of assessors to topics (RQ-3), the effect of topic generation cost (RQ-4) and judging error (RQ-5).
We evaluate the performance of the methods using statAP <cit.>.
The budget is distributed equally among topics. In each experiment, we exhaust the full budget for the selected topics, i.e., as the number of topics increases, the number of judgments per topic decreases, and vice-versa.
Effect of Familiarization to Topics when Judging: As discussed in Section <ref>, prior work <cit.> found that as the number of judgments per topic increases (when collecting 8, 16, 32, 64, or 128 judgments per topic), the median time to judge each document decreases, respectively: 15, 13, 15, 11, and 9 seconds. Because prior work comparing NaD vs. WaS judging did not consider variable judging speed as a function of topic depth, we revisit this question, considering how faster judging with greater judging depth per topic may impact the trade-off between deep vs. shallow judging in maximizing evaluation reliability for a given assessment time budget.
Using these reported data points, we fit a piece-wise judging-speed function (Equation <ref>) to simulate judging speed as a function of judging depth, as illustrated in Figure <ref>. According to this model, judging a single document takes 15 seconds if there are 32 or fewer judgments per topic (i.e., while the assessor "warms up"). After 32 judgments, the assessor becomes familiar with the topic and starts judging faster. Because judging speed cannot increase forever, we assume that after 128 judgments judging speed becomes stable at 9 seconds per judgment.
f(x) =
  15,                                  if x ≤ 32
  8.761 + 16.856 × e^(-0.0316x),       if 32 < x < 127
  9,                                   otherwise
For the constant judging case, we set the judging speed to 15 seconds per document. For instance, if our total budget is 100 hours and we have 100 topics, then we spend 1 hour per topic. If judging speed is constant, we judge 60 min × 60 sec/min÷ 15 seconds/judgment = 240 judgments for each topic. However, if judging speed increases according to our model, we can judge a larger set of 400 documents per topic in the same one hour.
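A sketch of this accounting, assuming (as the worked example implies) that every judgment of a topic judged to depth x proceeds at the depth-x speed f(x), so that the total judging time per topic is x · f(x):

```python
import math

def seconds_per_judgment(x):
    """Median judging time when a topic is judged to depth x (Eq. above)."""
    if x <= 32:
        return 15.0
    if x < 127:
        return 8.761 + 16.856 * math.exp(-0.0316 * x)
    return 9.0

def max_depth(seconds_per_topic):
    """Largest depth x whose total judging time x * f(x) fits the budget."""
    x = 0
    while (x + 1) * seconds_per_judgment(x + 1) <= seconds_per_topic:
        x += 1
    return x

# max_depth(3600) -> 400 with the speed-up model, vs. 3600 / 15 = 240
# judgments at a constant 15 s/judgment, matching the example above.
```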
We set our budget to 40 hours for both test collections. We initially assume that developing the topics has no cost.
Results are shown in Figure <ref>. We can see that the additional judgments due to faster judging result in a higher τ score after 30 topics, and the effect increases as the number of topics increases. Since the results are significantly different (the p-value of a paired t-test is 0.0075 and 0.0001 in the experiments with Robust2003 and Robust2004_149, respectively), the results suggest that topic familiarity (i.e., judging speed) impacts the optimization of an evaluation budget for deep vs. shallow judging.
Effect of Topic Development Cost: As discussed in Section <ref>, topic development cost (TDC) can vary significantly.
Topic selection is usually considered a method that decreases the cost by reducing the number of judgments, assuming that topic generation has a negligible cost. However, generating topics is an important step of constructing a test collection, which may increase the cost significantly.
For instance, NIST asks topic generators to create a query and judge around 100 documents. If 20 of the first 25 documents are relevant, they discard the idea and start a new one. These initial judgments are used to measure the quality of the candidate topics (i.e., whether a topic has sufficiently many relevant documents, but not too many). The final 50 topics are selected among the candidates. The cost of generating a single final NIST topic is around 4 hours on average. Another way to generate topics is to take a query log and back-fit queries to topic definitions, as in <cit.>. It is reported that the median time of converting a query to a topic is 76 seconds <cit.>. <cit.> also reports that topic development cost is roughly 5 minutes on average and uses it as another parameter of budget analysis.
In order to understand the effect of TDC, we perform intelligent topic selection with our L2R method and vary TDC from 76 seconds (i.e., the time needed to convert a query to a topic, as reported in <cit.>) to 2432 seconds (i.e., 32 × 76) in geometric progression, while fixing the budget to 40 hours for both test collections. Note that TREC spends 4 hours to develop a final topic <cit.>, which is nearly six times 2432 seconds.
We assume that judging speed is constant (i.e., 15 seconds per judgment). For instance, if TDC is 76 seconds and we select 50 topics, then we subtract 50 × 76=3800 seconds from the total budget and use the remaining for document judging.
The results are shown in Figure <ref>. When the topic development cost is ≤152 seconds, results are fairly similar. However, when we spend more time on topic development, τ scores start decreasing after a certain number of topics is selected, due to insufficient budget remaining for judging documents. In Robust2004_149, the total budget is not sufficient to generate more than 118 topics when TDC is 1216 seconds; therefore, no judgments can be collected when the number of topics is 120 or higher. Considering the results for the Robust2004_149 test collection, when TDC is 304 seconds (which is close to the cost mentioned in <cit.>), we are able to achieve better performance with 80-110 topics than with all topics. When TDC is 608 seconds, using only 50 topics achieves a higher τ score than employing all topics (0.888 vs. 0.868). Overall, the results suggest that as topic development cost increases, NaD judging becomes more cost-effective than WaS judging.
Another effect of topic development cost can be observed in the reliability of the judgments, as discussed in Section <ref>.
In the experiments so far, we have not questioned the correctness of the judgments and have treated them as perfect. However, there is always a possibility of mistakes when judging documents. For instance, <cit.> reports that one of its authors judged a number of document-topic pairs from the 2009 TREC Million Query Track and found a high number of inconsistencies between their own judgments and the MQ Track judgments. Specifically, NIST assessors disagreed with the author's judgments in 33% of cases where the author judged relevant and in 70% of cases where the author judged non-relevant. Despite close analysis of the NIST judgments, the authors could not find any explanation for the original judgments.
Even though there can be many reasons for erroneous judgments, we posit that one of them can be poorly defined topics, which lead assessors to misunderstand the topics. This problem can grow especially in cases where topic generation and document judging are performed by different people, such as with crowd-sourced judgments.
In this experiment, we consider a scenario where assessors rely on the topic definitions, so that poorly-defined topics can cause inconsistent judgments. In order to simulate this scenario, we assume that 8% of judgments are inconsistent when TDC = 76 seconds, and that the accuracy of judgments increases by 2% each time TDC doubles, so that when TDC = 1216 seconds assessors achieve perfect judgments.
Note that the assessors in this scenario are much more reliable than what is reported in <cit.>. In order to implement this scenario, we randomly flip judgments in the qrels based on the corresponding judging accuracy. The ground-truth ranking of the systems is based on the original judgments. We set the total budget to 40 hours and assume that judging a single document takes 15 seconds. We use our method to select the topics. We repeat the process 50 times and report the average.
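A sketch of this corruption protocol; the qrels representation (a dict of binary labels) is an assumption of the sketch:

```python
import random

def judging_accuracy(tdc_seconds):
    """92% at TDC=76s, +2 points per doubling, capped at 100% (TDC=1216s)."""
    acc, tdc = 0.92, 76
    while tdc < tdc_seconds and acc < 1.0:
        acc, tdc = acc + 0.02, tdc * 2
    return min(acc, 1.0)

def corrupt_qrels(qrels, tdc_seconds, seed=0):
    """Flip each binary judgment with probability 1 - accuracy(TDC)."""
    rng = random.Random(seed)
    acc = judging_accuracy(tdc_seconds)
    return {key: rel if rng.random() < acc else 1 - rel
            for key, rel in qrels.items()}
```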
The results are shown in Figure <ref>. In Robust2003, the achieved Kendall's τ scores increase as TDC increases from 76 to 608 seconds due to more consistent judging (the opposite of what we observe in Figure <ref>). In Robust2004_149, when the number of topics is 100 or fewer, the Kendall's τ score increases as TDC increases from 76 to 608. However, when the number of topics is more than 100, the τ scores achieved with TDC=608 start decreasing due to an insufficient budget for judging. We observe a similar pattern in Robust2003 with TDC=1216: we achieve the highest τ scores with TDC=1216 and 60 or fewer topics, but performance starts decreasing as more topics are selected. In general, the results suggest that poorly-defined topics should be avoided if they have a negative effect on the consistency of the relevance judgments. However, spending more time to develop high-quality topics can significantly increase the cost. Therefore, NaD becomes preferable over WaS when we target constructing high-quality topics.
Varying budget: In this set of experiments, we vary our total budget from 20 to 40 hours. We assume that the assessors judge faster as they judge more documents, up to a point, based on our model given in Equation <ref>. We also assume that topic development cost is 76 seconds.
The results are shown in Figure <ref>. In Robust2004_149, our approach performs better than the random method in all cases. In Robust2003, our approach outperforms the random method in the selection of the first 50 topics, while both perform similarly when we select more than 50 topics. Regarding WaS vs. NaD judging debate, when our budget is 20 hours, we are able to achieve higher τ scores by reducing the number of topics in both test collections. In Robust2004_149, when our budget is 30 hours, using 90-130 topics leads to higher τ scores than using all topics. When our budget is 40 hours, we are able to achieve similar τ scores by reducing the number of topics to 100.
However, τ scores achieved by the random method monotonically increase as the number of topics increases (except in the 20-hours-budget scenario with Robust2004_149, in which using 100-120 topics achieves very slightly higher τ scores than using all topics). That is, WaS judging leads to a better ranking of systems if we select the topics randomly, as reported by other studies <cit.>. However, if we select the topics intelligently, we can achieve a better ranking by using a smaller number of topics for a given budget.
Re-usability of Test Collections:
In this experiment, we compare our approach with the random topic selection method in terms of re-usability of the test collections with the selected topics. We again set topic development cost to 76 seconds and assume non-constant judging speed. We vary the total budget from 20 hours to 40 hours, as in the previous experiment.
In order to measure the re-usability of the test collections, we adopt the following process. For each topic selection method, we first select the topics for the given topic subset size. Using only the selected topics, we then apply a leave-one-group-out method <cit.>: for each group, we ignore the documents which only that group contributes to the pool and sample documents based on remaining documents. Then, the statAP score is calculated for the runs of the corresponding group. After applying this for all groups, we rank the systems based on their statAP scores and calculate Kendall's τ score compared to the ground-truth ranking of the retrieval systems. We repeat this process 20 times for our method and 5000 times for random method by re-selecting the topics.
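The pooling step of this procedure can be sketched as follows; `pool_contributions`, a bookkeeping structure recording which groups contributed each pooled document, is an assumption of the sketch:

```python
def loo_pool(pool_contributions, held_out_group):
    """Documents that remain poolable when one group's runs are held out.

    pool_contributions: dict mapping doc id -> set of groups whose runs
    contributed that document to the pool.
    """
    return {doc for doc, groups in pool_contributions.items()
            if groups - {held_out_group}}   # kept if any other group found it
```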
The results are shown in Figure <ref>. The vertical bars represent the standard deviation, and the dashed horizontal line represents the performance when we employ all topics. There are several observations we can make from the results. First, our proposed method yields more re-usable test collections than the random method in almost all cases. As the budget decreases, our approach becomes more effective at constructing reusable test collections.
Second, in all budget cases for both test collections, we can reach the same or similar re-usability scores with fewer topics. Lastly, the τ scores achieved by the random topic selection method again increase monotonically as the number of topics increases in almost all cases. However, by intelligently reducing the number of topics, we can increase the reusability of the test collections in all budget scenarios for Robust2004_149 and in the 20-hours-budget scenario for Robust2003. Therefore, the results suggest that NaD judging can yield more reusable test collections than WaS judging when topics are selected intelligently.
§ CONCLUSION
While the Cranfield paradigm <cit.> for systems-based IR evaluation has demonstrated remarkable longevity, it has become increasingly infeasible to rely on TREC-style pooling to construct test collections at the scale of today's massive document collections.
In this work, we proposed a new intelligent topic selection method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrated previously disparate lines of research on intelligent topic selection and NaD vs. WaS judging. While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs. shallow judging has largely argued for shallow judging, but assuming random topic selection. Arguing that ultimately one must ask whether it is actually useful to select topics, or whether one should simply perform shallow judging over many topics, we presented a comprehensive investigation over a set of relevant factors never previously studied together: 1) method of topic selection; 2) the effect of topic familiarity on human judging speed; and 3) how different topic generation processes (requiring varying human effort) impact (i) budget utilization and (ii) the resultant quality of judgments.
Experiments on NIST TREC Robust 2003 and Robust 2004 test collections show that not only can we reliably evaluate IR systems with fewer topics, but also that: 1) when topics are intelligently selected, deep judging is often more cost-effective than shallow judging in evaluation reliability; and 2) topic familiarity and topic generation costs greatly impact the evaluation cost vs. reliability trade-off. Our findings challenge conventional wisdom in showing that deep judging is often preferable to shallow judging when topics are selected intelligently.
More specifically, the main findings from our study are as follows. First, in almost all cases, our proposed approach selects better topics, yielding more reliable evaluation than the baselines. Second, shallow judging is preferable to deep judging if topics are selected randomly, confirming findings of prior work. However, when topics are selected intelligently, deep judging often achieves greater evaluation reliability for the same evaluation budget than shallow judging. Third, assuming that judging speed increases as more documents are judged for the same topic, increased judging speed has a significant effect on evaluation reliability, suggesting that it should be another parameter considered in the deep vs. shallow judging trade-off. Fourth, as topic generation cost increases, deep judging becomes preferable to shallow judging.
Finally, assuming that short topic generation times reduce the quality of topics, and thereby the consistency of relevance judgments, it is better to increase the quality of topics and collect fewer judgments instead of collecting many judgments with low-quality topics. This also makes deep judging preferable to shallow judging in many cases, due to increased topic generation cost.
As future work, we plan to investigate the effectiveness of our topic selection method using other evaluation metrics, and to conduct a qualitative analysis to identify underlying factors which could explain why some topics seem to be better than others in terms of predicting the relative average performance of IR systems. We are inspired here by prior qualitative analysis seeking to understand what makes some topics harder than others <cit.>. Such deeper understanding could provide an invaluable underpinning to guide future design of topic sets and foster transformative insights on how we might achieve even more cost-effective yet reliable IR evaluation.
§ ACKNOWLEDGMENTS
This work was made possible by NPRP grant# NPRP 7-1313-1-245 from the Qatar National Research Fund (a member
of Qatar Foundation). The statements made herein are
solely the responsibility of the authors. We thank the Texas
Advanced Computing Center (TACC) at the University of
Texas at Austin for computing resources enabling this research.
ellenvoorheesemail Personal communication with Ellen Voorhees at NIST.
Allan08millionquery James Allan, Javed A. Aslam, Ben Carterette, Virgil Pavlu, and Evangelos Kanoulas. "Million query track 2008 overview". In: In Proceedings of the Seventeenth Text REtrieval Conference (TREC 2007. 2008.
allan2007million James Allan, Ben Carterette, Javed A Aslam, Virgil Pavlu, Blagovest Dachev, and Evangelos Kanoulas. Million query track 2007 overview. Tech. rep. DTIC Document, 2007.
alonso2009can Omar Alonso and Stefano Mizzaro. "Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment". In: Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation. Vol. 15. 2009, p. 16.
aslam2006statistical Javed A Aslam, Virgil Pavlu, and Emine Yilmaz. "A statistical method for system evaluation using incomplete judgments". In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2006, pp. 541–548.
aslam2007inferring Javed A Aslam and Emine Yilmaz. "Inferring document relevance from incomplete information". In: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management. ACM. 2007, pp. 633–642.
banks99 David Banks, Paul Over, and Nien-Fan Zhang. "Blind men and elephants: Six approaches to TREC data". In: Information Retrieval 1.1-2 (1999), pp. 7–34.
berto2013using Andrea Berto, Stefano Mizzaro, and Stephen Robertson. "On using fewer topics in information retrieval evaluations". In: Proceedings of the 2013 Conference on the Theory of Information Retrieval. ACM. 2013, p. 9.
bodoff2007test David Bodoff and Pu Li. "Test theory for assessing IR test collections". In: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2007, pp. 367–374.
Voorhees06 Chris Buckley, Darrin Dimmick, Ian Soboroff, and Ellen Voorhees. "Bias and the limits of pooling". In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2006, pp. 619–620.
BuckleyVoorhees2000 Chris Buckley and Ellen M Voorhees. "Evaluating evaluation measure stability". In: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2000, pp. 33–40.
buckley2004retrieval Chris Buckley and Ellen M Voorhees. "Retrieval evaluation with incomplete information". In: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2004, pp. 25–32.
carterette2006minimal Ben Carterette, James Allan, and Ramesh Sitaraman. "Minimal test collections for retrieval evaluation". In: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2006, pp. 268–275.
carterette2008evaluation Ben Carterette, Virgil Pavlu, Evangelos Kanoulas, Javed A Aslam, and James Allan. "Evaluation over thousands of queries". In: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2008, pp. 651–658.
carterette2009if Ben Carterette, Virgil Pavlu, Evangelos Kanoulas, Javed A Aslam, and James Allan. "If I had a million queries". In: European conference on information retrieval. Springer. 2009, pp. 288–300.
carterette2007hypothesis Ben Carterette and Mark D Smucker. "Hypothesis testing with incomplete relevance judgments". In: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management. ACM. 2007, pp. 643–652.
cleverdon1959evaluation Cyril W Cleverdon. "The evaluation of systems used in information retrieval". In: Proceedings of the international conference on scientific information. Vol. 1. National Academy of Sciences Washington, DC, 1959, pp. 687–698.
cormack1998efficient Gordon V Cormack, Christopher R Palmer, and Charles LA Clarke. "Efficient construction of large test collections". In: Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 1998, pp. 282–289.
cummins2011improved Ronan Cummins, Joemon Jose, and Colm O'Riordan. "Improved query performance prediction using standard deviation". In: Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval. ACM. 2011, pp. 1089–1090.
friedman2001greedy Jerome H Friedman. "Greedy function approximation: a gradient boosting machine". In: Annals of statistics (2001), pp. 1189–1232.
grady2010crowdsourcing Catherine Grady and Matthew Lease. "Crowdsourcing document relevance assessment with mechanical turk". In: Proceedings of the NAACL HLT 2010 workshop on creating speech and language data with Amazon’s mechanical turk. Association for Computational Linguistics. 2010, pp. 172–179.
guiver2009few John Guiver, Stefano Mizzaro, and Stephen Robertson. "A few good topics: Experiments in topic set reduction for retrieval evaluation". In: ACM Transactions on Information Systems (TOIS) 27.4 (2009), p. 21.
hauff2010case Claudia Hauff, Djoerd Hiemstra, Leif Azzopardi, and Franciska De Jong. "A case for automatic system evaluation". In: European Conference on Information Retrieval. Springer. 2010, pp. 153–165.
hauff2009relying Claudia Hauff, Djoerd Hiemstra, Franciska De Jong, and Leif Azzopardi. "Relying on topic subsets for system ranking estimation". In: Proceedings of the 18th ACM conference on Information and knowledge management. ACM. 2009, pp. 1859–1862.
hawking2000overview David Hawking. "Overview of the TREC-9 Web Track." In: TREC. 2000.
hawking2002overview David Hawking and Nick Craswell. "Overview of the TREC-2001 web track". In: NIST
special publication (2002), pp. 61–67.
hosseini2012uncertainty Mehdi Hosseini, Ingemar J Cox, Natasa Milic-Frayling, Milad Shokouhi, and Emine Yilmaz. "An uncertainty-aware query selection model for evaluation of IR systems". In: Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval. ACM. 2012, pp. 901–910.
hosseini2011selecting Mehdi Hosseini, Ingemar J Cox, Natasa Milic-Frayling, Vishwa Vinay, and Trevor Sweeting. "Selecting a subset of queries for acquisition of further relevance judgements". In: Conference on the Theory of Information Retrieval. Springer. 2011, pp. 113–124.
jones1975report K. Spärck Jones and C. J. van Rijsbergen. "Report on the need for and provision of an "ideal" information retrieval test collection (British Library Research and Development Report No. 5266)". In: (1975), p. 43.
kazai2014dissimilarity Gabriella Kazai and Homer Sung. "Dissimilarity based query selection for efficient preference based IR evaluation". In: European Conference on Information Retrieval. Springer. 2014, pp. 172–183.
kendall1938new Maurice G Kendall. "A new measure of rank correlation". In: Biometrika 30.1/2 (1938), pp. 81–93.
lin2011query Ting-Chu Lin and Pu-Jen Cheng. "Query sampling for learning data fusion". In: Proceedings of the 20th ACM international conference on Information and knowledge management. ACM. 2011, pp. 141–146.
McDonnell2016 Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed. "Why Is That Relevant? Collecting Annotator Rationales for Relevance Judgments". In: Proceedings of the 4th AAAI Conference on Human Computation and Crowdsourcing (HCOMP). AAAI. 2016, pp. 139–148.
mehrotra2015representative Rishabh Mehrotra and Emine Yilmaz. "Representative & Informative Query Selection for Learning to Rank using Submodular Functions". In: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM. 2015, pp. 545–554.
mizzaro2007hits Stefano Mizzaro and Stephen Robertson. "Hits hits TREC: exploring IR evaluation results with network analysis". In: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2007, pp. 479–486.
moffat2007strategic Alistair Moffat, William Webber, and Justin Zobel. "Strategic system comparisons via targeted relevance judgments". In: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2007, pp. 375–382.
moghadasi2013low Shiva Imani Moghadasi, Sri Devi Ravana, and Sudharshan N Raman. "Low-cost evaluation techniques for information retrieval systems: A review". In: Journal of Informetrics 7.2 (2013), pp. 301–312.
nuray2006automatic Rabia Nuray and Fazli Can. "Automatic ranking of information retrieval systems using data fusion". In: Information Processing & Management 42.3 (2006), pp. 595–614.
pavlu2007practical V Pavlu and J Aslam. A practical sampling strategy for efficient retrieval evaluation. Tech. rep. Technical Report, College of Computer and Information Science, Northeastern University, 2007.
robertson2011contributions Stephen Robertson. "On the contributions of topics to system evaluation". In: European conference on information retrieval. Springer. 2011, pp. 129–140.
sakai2007alternatives Tetsuya Sakai. "Alternatives to bpref". In: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2007, pp. 71–78.
sakai2014designing Tetsuya Sakai. "Designing test collections for comparing many systems". In: Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM. 2014, pp. 61–70.
sakai2014CI Tetsuya Sakai. "Designing test collections that provide tight confidence intervals". In: Forum on information technology 2014 RD. Vol. 3. 2014.
sakai2015topic Tetsuya Sakai. "Topic set size design". In: Information Retrieval Journal (2015), pp. 1–28.
sakai2014topic Tetsuya Sakai. "Topic Set Size Design with Variance Estimates from Two-Way ANOVA." In: EVIA@ NTCIR. Citeseer. 2014.
sanderson2010test Mark Sanderson. Test collection based evaluation of information retrieval systems. Now Publishers Inc, 2010.
sanderson2010relatively Mark Sanderson, Falk Scholer, and Andrew Turpin. "Relatively relevant: Assessor shift in document judgements". In: Proceedings of the Australasian Document Computing Symposium. 2010, pp. 60–67.
sanderson2005information Mark Sanderson and Justin Zobel. "Information retrieval system evaluation: effort, sensitivity, and reliability". In: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2005, pp. 162–169.
scholer2013effect Falk Scholer, Diane Kelly, Wan-Ching Wu, Hanseul S Lee, and William Webber. "The effect of threshold priming and need for cognition on relevance calibration and assessment". In: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM. 2013, pp. 623–632.
soboroff2001ranking Ian Soboroff, Charles Nicholas, and Patrick Cahan. "Ranking retrieval systems without relevance judgments". In: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. ACM. 2001, pp. 66–73.
urbano2013measurement Julián Urbano, Mónica Marrero, and Diego Martín. "On the measurement of test collection reliability". In: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval. ACM. 2013, pp. 393–402.
voorhees2004overview Ellen M Voorhees. "Overview of the TREC 2004 Robust Track." In: TREC. Vol. 4. 2004.
voorhees2003overview Ellen M Voorhees. "Overview of TREC 2003." In: TREC. 2003, pp. 1–13.
voorhees2001philosophy Ellen M Voorhees. "The philosophy of information retrieval evaluation". In: Workshop of the Cross-Language Evaluation Forum for European Languages. Springer. 2001, pp. 355– 370.
voorhees2009topic Ellen M Voorhees. "Topic set size redux". In: Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval. ACM. 2009, pp. 806–807.
voorhees2000variations Ellen M Voorhees. "Variations in relevance judgments and the measurement of retrieval effectiveness". In: Information processing & management 36.5 (2000), pp. 697–716.
webber2008statistical William Webber, Alistair Moffat, and Justin Zobel. "Statistical power in retrieval experimentation". In: Proceedings of the 17th ACM conference on Information and knowledge management. ACM. 2008, pp. 571–580.
yilmaz2008estimating Emine Yilmaz and Javed A Aslam. "Estimating average precision when judgments are incomplete". In: Knowledge and Information Systems 16.2 (2008), pp. 173–211.
yilmaz2006estimating Emine Yilmaz and Javed A Aslam. "Estimating average precision with incomplete and imperfect judgments". In: Proceedings of the 15th ACM international conference on In- formation and knowledge management. ACM. 2006, pp. 102–111.
Zobel1998 Justin Zobel. "How reliable are the results of large-scale information retrieval experiments?" In: Proceedings of the 21st annual international ACM SIGIR conference on Re- search and development in information retrieval. ACM. 1998, pp. 307–314.
|
http://arxiv.org/abs/1701.07642v2 | 20170126103339 | Identification of nonclassical properties of light with multiplexing layouts | [
"J. Sperling",
"A. Eckstein",
"W. R. Clements",
"M. Moore",
"J. J. Renema",
"W. S. Kolthammer",
"S. W. Nam",
"A. Lita",
"T. Gerrits",
"I. A. Walmsley",
"G. S. Agarwal",
"W. Vogel"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
Texas A&M University, College Station, Texas 77845, USA
Institut für Physik, Universität Rostock, Albert-Einstein-Straße 23, D-18059 Rostock, Germany
In Ref. <cit.>, we introduced and applied a detector-independent method to uncover nonclassicality.
Here, we extend those techniques and give more details on the performed analysis.
We derive a general theory of the positive-operator-valued measure that describes multiplexing layouts with arbitrary detectors.
From the resulting quantum version of a multinomial statistics, we infer nonclassicality probes based on a matrix of normally ordered moments.
We discuss these criteria and apply the theory to our data which are measured with superconducting transition-edge sensors.
Our experiment produces heralded multi-photon states from a parametric down-conversion light source.
We show that the known notions of sub-Poisson and sub-binomial light can be deduced from our general approach, and we establish the concept of sub-multinomial light, which is shown to outperform the former two concepts of nonclassicality for our data.
Identification of nonclassical properties of light with multiplexing layouts
W. Vogel
December 30, 2023
============================================================================
§ INTRODUCTION
The bare existence of photons highlights the particle nature of electromagnetic waves in quantum optics <cit.>.
Therefore, the generation and detection of photon states are crucial for a comprehensive understanding of fundamental concepts in quantum physics; see Refs. <cit.> for recent reviews on single photons.
Beyond this scientific motivation, the study of nonclassical radiation fields is also of practical importance.
For instance, quantum communication protocols rely on the generation and detection of photons <cit.>.
Yet, unwanted attenuation effects—which are always present in realistic scenarios—result in a decrease of the nonclassicality of a produced light field.
Conversely, an inappropriate detector model can introduce fake nonclassicality even to a classical radiation field <cit.>.
For this reason, we seek robust and detector-independent certifiers of nonclassicality <cit.>.
The basic definition of nonclassicality is that a quantum state of light cannot be described in terms of classical statistical optics.
A convenient way to represent general states is given in terms of the Glauber-Sudarshan P function <cit.>.
Whenever this distribution cannot be interpreted in terms of classical probability theory, the thereby represented state is a nonclassical one <cit.>.
A number of nonclassicality tests have been proposed; see Ref. <cit.> for an overview.
Most of them are formulated in terms of matrices of normally ordered moments of physical observables; see, e.g., Ref. <cit.>.
For example, the concept of nonclassical sub-Poisson light <cit.> can be written and even generalized in terms of matrices of higher-order photon-number correlations <cit.>.
Other matrix-based nonclassicality tests employ the Fourier or Laplace transform of the Glauber-Sudarshan P function <cit.>.
In order to apply such nonclassicality probes, one has to measure the light field under study with a photodetector <cit.>.
The photon statistics of the measured state can be inferred if the used detector has been properly characterized.
This can be done by a detector tomography <cit.>—i.e., measuring a comparatively large number of well-defined probe states to construct a detection model.
Alternatively, one can perform a detector calibration <cit.>—i.e., the estimation of parameters of an existing detection model with some reference measurements.
Of particular interest are photon-number-resolving detectors of which superconducting transition-edge sensors (TESs) are a successful example <cit.>.
Independent of the particular realization, photon-number-resolving devices allow for the implementation of quantum tasks, such as state reconstruction <cit.>, imaging <cit.>, random number generation <cit.>, and the characterization of sources of nonclassical light <cit.>—even in the presence of strong imperfections <cit.>.
Moreover, higher-order <cit.>, spatial <cit.>, and conditional <cit.> quantum correlations have been studied.
So far, we did not distinguish between the detection scheme and the actual detectors.
That is, one has to discern the optical manipulation of a signal field and its interaction with a sensor which yields a measurement outcome.
Properly designed detection layouts of such a kind render it possible to infer or use properties of quantum light without having a photon-number-resolution capability <cit.> or they do not require a particular detector model <cit.>.
For instance, multiplexing layouts with a number of detectors that can only discern between the presence (“on”) or absence (“off”) of absorbed photons can be combined into a photon-number-resolving detection device <cit.>.
Such types of schemes use an optical network to split an incident light field into a number of spatial or temporal modes of equal intensities which are subsequently measured with on/off detectors.
The measured statistics is shown to resemble a binomial distribution <cit.> rather than a Poisson statistics, which is obtained for photoelectric detection models <cit.>; see also Refs. <cit.> in this context.
For such detectors, the positive-operator-valued measure (POVM), which fully describes the detection layout, has been formulated <cit.>.
Recently, the combination of a multiplexing scheme with multiple TESs has been used to significantly increase the maximal number of detectable photons <cit.>.
Based on the binomial character of the statistics of multiplexing layouts with on/off detectors, the notion of sub-binomial light has been introduced <cit.> and experimentally demonstrated <cit.>.
It replaces the concept of sub-Poisson light <cit.>, which applies to photoelectric counting models <cit.>, for multiplexing arrangements using on/off detectors.
Nonclassical light can be similarly inferred from multiplexing devices with non-identical splitting ratios <cit.>.
In addition, the on-chip realization of optical networks <cit.> can be used to produce integrated detectors to verify sub-binomial light <cit.>.
In this paper, we derive the quantum-optical click-counting theory for multiplexing layouts which employ arbitrary detectors.
Therefore, we formulate nonclassicality tests in terms of normally ordered moments, which are independent of the detector response.
This method is then applied to our experiment which produces heralded multi-photon states.
Our results are discussed in relation with other notions of nonclassical photon correlations.
In Ref. <cit.>, we study the same topic as we do in this paper from a classical perspective.
There, the treatment of the detector-independent verification of quantum light is performed solely in terms of classical statistical optics.
Here, however, we use a complementary quantum-optical perspective on this topic.
Beyond that, we also consider higher-order moments of the statistics, present additional features of our measurements, and compare our results with previously known nonclassicality tests as well as simple theoretical models.
This paper is organized as follows.
In Sec. <ref>, the theoretical model for our detection layout is elaborated and nonclassicality criteria are derived.
The performed experiment is described in Sec. <ref> with special emphasis on the used TESs.
An extended analysis of our data, presented in Sec. <ref>, includes the comparison of different forms of nonclassicality.
We summarize and conclude in Sec. <ref>.
§ THEORY
In this section, we derive the general, theoretical toolbox for describing the multiplexing arrangement with arbitrary detectors and for formulating the corresponding nonclassicality criteria.
The measurement layout under study is shown in Fig. <ref>.
Our detection model shows that for any type of employed detector, the measured statistics can be described in the form of a quantum version of a multinomial statistics [Eq. (<ref>)].
This leads to the formulation of nonclassicality criteria in terms of negativities in the normally ordered matrix of moments [Eq. (<ref>)].
Especially, covariance-based criteria are discussed and related to previously known forms of nonclassicality.
§.§ Preliminaries
We apply well-established concepts in quantum optics in this section.
Namely, any quantum state of light ρ̂ can be written in terms of the Glauber-Sudarshan representation <cit.>,
ρ̂=∫ d^2α P(α)|α⟩⟨α|.
From this diagonal expansion in terms of coherent states |α⟩, one observes that one can formulate the detection theory in terms of coherent states.
A subsequent integration over the P function then describes the model for any state.
Furthermore, the definition of nonclassicality is also based on this representation.
Namely, the state ρ̂ is a classical state if and only if P can be interpreted in terms of classical probability theory <cit.>, i.e., P(α)≥0.
Whenever this cannot be done, ρ̂ refers to a nonclassical state.
Moreover, the P function of a state is related to the normal ordering (denoted by :⋯:) of measurement operators.
For a detailed introduction to bosonic operator ordering, we refer to Ref. <cit.>.
It can be shown in general that any classical state obeys <cit.>
⟨:f̂^†f̂:⟩cl.≥0,
for any operator f̂.
In addition, we may recall that expectation values of normally ordered operators in coherent states can be computed simply by replacing the bosonic annihilation operator â and the creation operator â^† with the coherent amplitude α and its complex conjugate α^∗, respectively.
A violation of constraint (<ref>) necessarily identifies nonclassicality, which will be also used to formulate our nonclassicality criteria.
§.§ Multiplexing detectors
The optical detection scheme under study, shown in Fig. <ref>, consists of a balanced multiplexing network which splits a signal into N modes.
Those outputs are measured with N identical detectors which can produce K+1 outcomes, labeled as k=0,…,K.
Let us stress that we make a clear distinction between the well-characterized optical multiplexing, the individual and unspecified detectors, and the resulting full detection scheme.
In the multiplexing part, a coherent-state input |α⟩ is distributed over N output modes.
Further on, we have vacuum |vac⟩=|0⟩ at all other N-1 input ports.
In general, the N input modes—defined by the bosonic annihilation operators â_n,in (â_1,in=â)—are transformed via the unitary U(N)=(U_m,n)_m,n=1^N into the output modes
â_m,out=U_m,1â_1,in+⋯+U_m,Nâ_N,in.
Taking the balanced splitting into account, it holds that |U_m,n|=1/√(N).
Adjusting the phases of the outputs properly, we get the following input-output relation
|α⟩⊗|0⟩^⊗(N-1) ⟼^U(N) |α/√(N)⟩^⊗ N.
Note that a balanced, but lossy network similarly yields |τα⟩⊗⋯⊗|τα⟩ for τ≤ 1/√(N).
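As a numerical illustration of this input-output relation, one may check that any balanced unitary—here the discrete-Fourier-transform matrix, one possible choice—maps a coherent input amplitude to N outputs of equal modulus |α|/√(N), since coherent amplitudes transform linearly under the network (a sketch, not part of the experiment):

```python
import numpy as np

# One balanced choice: the discrete-Fourier-transform unitary with
# |U_{m,n}| = 1/sqrt(N) for all entries.
N = 4
m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
U = np.exp(2j * np.pi * m * n / N) / np.sqrt(N)

alpha = 0.7 + 0.2j
inputs = np.zeros(N, dtype=complex)
inputs[0] = alpha                      # coherent signal; vacuum elsewhere

outputs = U @ inputs                   # coherent amplitudes map linearly
print(np.allclose(np.abs(outputs), abs(alpha) / np.sqrt(N)))   # True
print(np.allclose(U.conj().T @ U, np.eye(N)))                  # unitarity
```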
For describing the detector, we do not make any specifications.
Nevertheless, we will be able to formulate nonclassicality tests.
The probability p_k for the kth measurement outcome (0≤ k≤ K) for any type of detector can be written in terms of the expectation value of the POVM operators :π̂'_k:, p_k=⟨:π̂'_k:⟩.
Note that any operator can be written in a normally ordered form <cit.> and that the POVM includes all imperfections of the individual detector, such as the quantum efficiency or nonlinear responses.
For the coherent states |α/√(N)⟩, we have
p_k(α)=⟨α/√(N)|:π̂'_k:|α/√(N)⟩=⟨α|:π̂_k:|α⟩,
whereby we also define :π̂_k: in terms of :π̂'_k: through the mapping â↦â/√(N).
We find that for a measurement with our N detectors and our coherent output state (<ref>), the probability to measure the outcome k_n with the nth detector—more rigorously a coincidence (k_1,…,k_N) from the N individual detectors—is given by
p_k_1(α)⋯ p_k_N(α)=⟨α|:π̂_k_1⋯π̂_k_N:|α⟩,
where we used the relation ⟨α|:Â:|α⟩⟨α|:B̂:|α⟩=⟨α|:ÂB̂:|α⟩ for any two (or more) operators  and B̂ and Eq. (<ref>).
The Glauber-Sudarshan representation (<ref>) allows one to write for any quantum state ρ̂
p_(k_1,…,k_N)= ∫ d^2α P(α)p_k_1(α)⋯ p_k_N(α)
= ⟨:π̂_k_1⋯π̂_k_N:⟩.
So far we studied the individual parts, i.e., the optical multiplexing and the N individual detectors, separately.
To describe the full detection scheme in Fig. <ref>, we need some additional combinatorics, which is fully presented in Appendix <ref>.
There, the main idea is that one can group the individual detectors into subgroups of N_k detectors which deliver the same outcome k.
Suppose the individual detectors yield the outcomes (k_1,…,k_N).
Then, N_k is the number of individual detectors for which k_n=k holds.
In other words, (N_0,…,N_K) describes the coincidence that N_0 detectors yield the outcome 0, N_1 detectors yield the outcome 1, etc.
Note that the total number of detectors is given by N=N_0+⋯+N_K.
The POVM representation Π̂_(N_0,…,N_K) for the event (N_0,…,N_K) is given in Eq. (<ref>).
In combination with Eq. (<ref>), we get for the detection layout in Fig. <ref> the click-counting statistics of a state ρ̂ as
c_(N_0,…,N_K)=tr[ρ̂Π̂_(N_0,…,N_K)]
= ⟨:(N!/(N_0!⋯ N_K!))π̂_0^N_0⋯π̂_K^N_K:⟩,
which is a normal-ordered version of a multinomial distribution.
The click-counting statistics (<ref>) yields the probability that N_0 times the outcome k=0 together with N_1 times the outcome k=1, etc., is recorded with the N individual detectors.
Using Eq. (<ref>), we can rewrite the click-counting distribution,
c_(N_0,…,N_K)
= ∫ d^2α P(α) (N!/(N_0!⋯ N_K!)) p_0(α)^N_0⋯ p_K(α)^N_K.
In this form, we can directly observe that any classical statistics, P(α)≥0, is a classical average over multinomial probability distributions.
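The structure of this classical average over multinomials can be made explicit with a short numerical sketch; the single-detector probabilities p_k(α) below follow an assumed photoelectric-like model (our own illustrative choice—any POVM would do, the multinomial structure is unchanged):

```python
import itertools
from math import exp, factorial

import numpy as np
from scipy.stats import multinomial

def p_single(alpha, K, eta=0.8, N=2):
    # Assumed photoelectric-like detector: Poisson weights with mean
    # eta*|alpha|^2/N; the last outcome K absorbs all higher counts.
    mu = eta * abs(alpha) ** 2 / N
    p = np.array([mu ** k * exp(-mu) / factorial(k) for k in range(K)])
    return np.append(p, max(0.0, 1.0 - p.sum()))

def click_statistics(alphas, weights, N=2, K=7):
    """c_(N_0,...,N_K) for a classical mixture of coherent states."""
    c = {}
    for counts in itertools.product(range(N + 1), repeat=K + 1):
        if sum(counts) != N:
            continue
        c[counts] = sum(w * multinomial.pmf(counts, N, p_single(a, K, N=N))
                        for a, w in zip(alphas, weights))
    return c

stats = click_statistics([0.5, 1.2], [0.3, 0.7])
print(sum(stats.values()))  # ~1: a proper probability distribution
```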
§.§ Higher-order nonclassicality criteria
Our click-counting model (<ref>) describes a multiplexing scheme and applies to arbitrary detectors.
One observes that its probability distribution is based on normally ordered expectation values of the form ⟨:π̂_0^m_0⋯π̂_K^m_K:⟩.
Hence, we can formulate nonclassicality criteria from inequality (<ref>) while expanding
f̂=∑_m_0+⋯+m_K≤ N/2 f_m_0,…,m_Kπ̂_0^m_0⋯π̂_K^m_K.
This operator is chosen such that it solely includes the operators that are actually measured.
We can write
⟨:f̂^†f̂:⟩
= ∑_m_0+⋯+m_K≤ N/2 ∑_m'_0+⋯+m'_K≤ N/2 f^∗_m_0,…,m_K⟨:π̂_0^m_0+m'_0⋯π̂_K^m_K+m'_K:⟩ f_m'_0,…,m'_K
= f⃗^† Mf⃗,
with a vector f⃗=(f_m_0,…,m_K)_(m_0,…,m_K), using a multi-index notation, and the matrix of normally ordered moments M, which is defined in terms of the elements ⟨:π̂_0^m_0+m'_0⋯π̂_K^m_K+m'_K:⟩.
Also note that the order of the moments is bounded by the number of individual detectors, N≥ m_0+⋯+m_K+m'_0+⋯+m'_K, as the measured statistics (<ref>) only allows for retrieving them.
As the non-negativity of the expression (<ref>) holds for classical states [condition (<ref>)] and for all coefficients f⃗, we can equivalently write the following:
A state is nonclassical if
0≰ M.
Conversely, the matrix of higher-order, normal-ordered moments M is positive semidefinite for classical light.
Note, it can be also shown (Appendix A in Ref. <cit.>) that the matrix of normally ordered moments can be equivalently expressed in a form that is based on central moments, ⟨:(Δπ̂_0)^m_0+m_0'⋯(Δπ̂_K)^m_K+m_K':⟩.
For example and while restricting to the second-order submatrix, we get nonclassicality conditions in terms of normal-ordered covariances,
0≰ M^(2)=(⟨:Δπ̂_kΔπ̂_k':⟩)_k,k'=0,…,K
=[ ⟨:(Δπ̂_0)^2:⟩ ⋯ ⟨:(Δπ̂_0)(Δπ̂_K):⟩; ⋮ ⋱ ⋮; ⟨:(Δπ̂_0)(Δπ̂_K):⟩ ⋯ ⟨:(Δπ̂_K)^2:⟩ ].
The relation ⟨:π̂_K:⟩=1-[⟨:π̂_0:⟩+⋯+⟨:π̂_K-1:⟩] of general POVMs implies that the last row of M^(2) is linearly dependent on the other ones.
This further implies that zero is an eigenvalue of M^(2).
Hence, we get for any classical state that the minimal eigenvalue of this covariance matrix is necessarily zero.
In order to relate our nonclassicality criteria to the measurement of the click-counting statistics (<ref>), let us consider the generating function, which is given by
g(z_0,…,z_K)=⟨z_0^N_0⋯ z_K^N_K⟩
= ∑_N_0+⋯+N_K=N c_(N_0,…,N_K) z_0^N_0⋯ z_K^N_K
= ⟨:(z_0π̂_0+⋯+z_Kπ̂_K)^N:⟩,
where ⟨z_0^N_0⋯ z_K^N_K⟩ denotes the mean with respect to the measured click-counting statistics c_(N_0,…,N_K).
The derivatives of the generating function relate the measured moments with the normally ordered ones,
∂_z_0^m_0⋯∂_z_K^m_K g(z_0,…,z_K)|_z_0=⋯=z_K=1
= ∑_N_0+⋯+N_K=N c_(N_0,…,N_K)N_0!/(N_0-m_0)!⋯ N_K!/(N_K-m_K)!
= ⟨(N_0)_m_0⋯(N_K)_m_K⟩
= (N)_m_0+⋯+m_K⟨:π̂_0^m_0⋯π̂_K^m_K:⟩
for m_0+⋯+m_K≤ N and (x)_m=x(x-1)⋯(x-m+1)=x!/(x-m)! being the falling factorial.
Having a closer look at the second and third lines of Eq. (<ref>), we see that the averaged factorial moments ⟨(N_0)_m_0⋯(N_K)_m_K⟩ can be directly sampled from c_(N_0,…,N_K).
From the last two lines of Eq. (<ref>) follows the relation to the normally ordered moments, which are needed for our nonclassicality tests.
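In practice, this sampling step can be applied directly to the recorded tuples (N_0,…,N_K); a minimal sketch (assuming the measured events are stored row-wise in an integer array):

```python
import numpy as np

def falling(x, m):
    # Falling factorial (x)_m = x (x-1) ... (x-m+1); (x)_0 = 1.
    out = np.ones_like(np.asarray(x, dtype=float))
    for j in range(m):
        out = out * (np.asarray(x, dtype=float) - j)
    return out

def normal_moment(samples, orders):
    """Sample <: pi_0^m_0 ... pi_K^m_K :> from measured click tuples.

    samples: integer array of shape (runs, K+1); each row is one
    recorded tuple (N_0, ..., N_K). orders: (m_0, ..., m_K), sum <= N.
    """
    samples = np.asarray(samples)
    N = int(samples[0].sum())
    prod = np.ones(len(samples))
    for k, m in enumerate(orders):
        prod = prod * falling(samples[:, k], m)
    return prod.mean() / falling(N, sum(orders))
```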
§.§ Second-order criteria
As an example and due to its importance, let us focus on the first- and second-order moments in detail.
In addition, our experimental realization implements a single multiplexing step, N=2, which yields a restriction to second-order moments [see comment below Eq. (<ref>)].
As a special case of Eq. (<ref>), we obtain
⟨:π̂_k:⟩=⟨N_k⟩/N and ⟨:π̂_kπ̂_k':⟩=(⟨N_kN_k'⟩-δ_k,k'⟨N_k⟩)/(N(N-1))
for k,k'∈{0,…,K}.
Hence, our covariances are alternatively represented by
⟨:Δπ̂_kΔπ̂_k':⟩
= (N⟨Δ N_kΔ N_k'⟩-⟨N_k⟩(Nδ_k,k'-⟨N_k'⟩))/(N^2(N-1)).
As the corresponding matrix (<ref>) of normal-ordered moments is nonnegative for classical states, we get
0 cl.≤ N^2(N-1)M^(2)
=(N⟨Δ N_kΔ N_k'⟩-⟨N_k⟩[Nδ_k,k'-⟨N_k'⟩])_k,k'=0,…,K.
The violation of this specific constraint for classical states has been experimentally demonstrated for the generated quantum light <cit.>.
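A sketch of how this scaled covariance matrix N^2(N-1)M^(2) can be estimated from the recorded click tuples (illustrative code, not our analysis software):

```python
import numpy as np

def scaled_covariance_matrix(samples):
    """N^2 (N-1) M^(2), Eq. above, from recorded click tuples.

    samples: array of shape (runs, K+1); each row one coincidence
    (N_0, ..., N_K) with N_0 + ... + N_K = N.
    """
    samples = np.asarray(samples, dtype=float)
    N = samples[0].sum()
    mean = samples.mean(axis=0)                 # <N_k>
    cov = np.cov(samples.T, bias=True)          # <Delta N_k Delta N_k'>
    delta = np.eye(len(mean))
    return N * cov - mean[:, None] * (N * delta - mean[None, :])
```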
Let us consider other special cases of the general criterion.
In particular, let us study the projections that result in a nonclassicality condition
f⃗^ T M^(2)f⃗<0,
see also Eqs. (<ref>) and (<ref>).
Note that M^(2) is a real-valued and symmetric (K+1)×(K+1) matrix.
Thus, it is sufficient to consider real-valued vectors f⃗=(f_0,…,f_K)^T.
Further on, let us define the operator
:μ̂:=f_0:π̂_0:+⋯+f_K:π̂_K:.
Then, we can also read condition (<ref>) as
⟨:(Δμ̂)^2:⟩<0.
That is, the fluctuations of the observable :μ̂: are below those of any classical light field.
In the following, we consider specific choices for f⃗ to formulate different nonclassicality criteria.
§.§.§ Sub-multinomial light
The minimization of (<ref>) over all normalized vectors yields the minimal eigenvalue Q_multi of M^(2).
That is
Q_multi=min_f⃗:f⃗^ Tf⃗=1f⃗^ T M^(2)f⃗=f⃗_0^ TM^(2)f⃗_0,
where f⃗_0 is a normalized eigenvector to the minimal eigenvalue.
If we have M^(2)≱ 0, then we necessarily get Q_multi<0.
For classical states, we get Q_multi=0; see the discussion below Eq. (<ref>).
As this criterion exploits the maximal negativity from covariances of the multinomial statistics, we refer to a radiation field with Q_multi<0 as sub-multinomial light.
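Numerically, Q_multi and the optimal projection f⃗_0 follow from a standard eigendecomposition; a sketch building on the matrix estimate above (the positive scaling N^2(N-1) does not affect the sign of Q_multi):

```python
import numpy as np

# samples and scaled_covariance_matrix are taken from the sketch above.
M2 = scaled_covariance_matrix(samples)
eigvals, eigvecs = np.linalg.eigh(M2)      # eigenvalues in ascending order
Q_multi, f0 = eigvals[0], eigvecs[:, 0]    # minimal eigenvalue and f_0
print("sub-multinomial light" if Q_multi < 0 else "no violation detected")
```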
§.§.§ Sub-binomial light
We can also consider the vector f⃗=(0,1,…,1)^T, which yields :μ̂:=1̂-:π̂_0:.
Hence, we have effectively reduced our system to a detection with a binary outcome, represented through the POVMs :π̂_0: and :μ̂:=1̂-:π̂_0:.
Using a proper scaling, we can write
(N-1)f⃗^TM^(2)f⃗/[⟨:π̂_0:⟩(1-⟨:π̂_0:⟩)]
=(N⟨(Δ B)^2⟩-N⟨B⟩+⟨B⟩^2)/((N-⟨B⟩)⟨B⟩)
= N⟨(Δ B)^2⟩/(⟨B⟩(N-⟨B⟩))-1=Q_bin,
defining B=N_1+⋯+N_K=N-N_0 and using Eq. (<ref>).
The condition Q_bin<0 defines the notion of sub-binomial light <cit.> and is found to be a special case of inequality (<ref>).
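For completeness, a corresponding sketch for Q_bin, assuming the same row-wise array of click tuples as above:

```python
import numpy as np

def sub_binomial_Q(samples):
    """Q_bin = N <(Delta B)^2> / (<B> (N - <B>)) - 1, with B = N - N_0."""
    samples = np.asarray(samples, dtype=float)
    N = samples[0].sum()
    B = N - samples[:, 0]            # number of detectors reporting k > 0
    return N * B.var() / (B.mean() * (N - B.mean())) - 1.0
```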
§.§.§ Sub-Poisson light
Finally, we study criterion (<ref>) for f⃗=(0,1,…,K)^T.
We have :μ̂:=∑_k=0^K k :π̂_k: and we also define
A=∑_k=0^K k N_k.
Their mean values are related to each other,
⟨:μ̂:⟩=∑_k=0^K k⟨N_k⟩/N=⟨A⟩/N.
We point out that ⟨N_k⟩/N can also be interpreted as probabilities, being nonnegative, ⟨N_k⟩/N≥0, and normalized, 1=⟨N_0⟩/N+⋯+⟨N_K⟩/N, since N=N_0+⋯+N_K.
Further, we can write the normally ordered variance (<ref>) in the form
⟨:(Δμ̂)^2:⟩=f⃗^TM^(2)f⃗
= (⟨(Δ A)^2⟩-⟨A⟩)/(N(N-1))
-[(∑_k=0^K k^2⟨N_k⟩/N)-(∑_k=0^K k⟨N_k⟩/N)^2-(∑_k=0^K k⟨N_k⟩/N)]/(N-1).
Again, we can use a proper, nonnegative scaling to find
⟨:(Δμ̂)^2:⟩/⟨:μ̂:⟩= (Q_Pois-Q_Pois')/(N-1),
with
Q_Pois= ⟨(Δ A)^2⟩/⟨A⟩-1
and
Q_Pois'= [(∑_k=0^K k^2⟨N_k⟩/N)-(∑_k=0^K k⟨N_k⟩/N)^2]/(∑_k=0^K k⟨N_k⟩/N)-1.
The parameters Q_Pois and Q_Pois', often denoted as the Mandel or Q parameter, relate to the notion of sub-Poisson light <cit.>.
However, we have a difference of two such Mandel parameters in Eq. (<ref>).
The second parameter Q_Pois' can be considered as a correction, because the statistics of A is only approximately a Poisson distribution.
This is further analyzed in Appendix <ref>.
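Both Mandel-type parameters can be sampled analogously; a sketch under the same data-layout assumption as in the previous code fragments:

```python
import numpy as np

def mandel_parameters(samples):
    """Q_Pois and its correction Q_Pois' from recorded click tuples."""
    samples = np.asarray(samples, dtype=float)
    N = samples[0].sum()
    k = np.arange(samples.shape[1])
    A = samples @ k                      # A = sum_k k N_k, one value per run
    Q_pois = A.var() / A.mean() - 1.0
    pk = samples.mean(axis=0) / N        # <N_k>/N, a probability vector
    m1, m2 = (k * pk).sum(), (k ** 2 * pk).sum()
    Q_pois_prime = (m2 - m1 ** 2) / m1 - 1.0
    return Q_pois, Q_pois_prime

# Classicality requires (Q_pois - Q_pois_prime) / (N - 1) >= 0.
```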
§.§ Discussion
We derived the click-counting statistics (<ref>) for unspecified POVMs of the individual detectors.
This was achieved by using the properties of a well-defined multiplexing scheme.
We solely assumed that the N detectors (with K+1 possible outcomes) are described by the same POVM.
A deviation from this assumption can be treated as a systematic error; see Supplemental Material to Ref. <cit.>.
The full detection scheme was shown to result in a quantum version of multinomial statistics.
This also holds true for an infinite, countable (K=|ℕ|) or uncountable (K=|ℝ|) set of outcomes, for which any measurement run can only deliver a finite sub-sample.
For coherent light |α_0⟩, we get a true multinomial probability distribution; see Eq. (<ref>) for P(α)=δ(α-α_0).
For a binary outcome, K+1=2, we retrieve a binomial distribution <cit.>, which applies, for example, to avalanche photodiodes in the Geiger mode <cit.> or superconducting nanowire detectors <cit.>.
Further on, we derived higher-order nonclassicality tests which can be directly sampled from the data obtained from the measurement layout in Fig. <ref>.
Then, we focused on the second-order nonclassicality probes and compared the cases of sub-multinomial [Eq. (<ref>)], sub-binomial [Eq. (<ref>)], and (corrected) sub-Poisson [Eq. (<ref>)] light.
The latter notion is related to nonclassicality in terms of photon-number correlation functions (see also Ref. <cit.>) and is a special case of our general criteria.
Additionally, our method can be generalized to multiple multiplexing-detection arrangements to include multimode correlations similar to the approach in Ref. <cit.>.
Recently, another interesting application was reported to characterize spatial properties of a beam profile with multipixel cameras <cit.>.
There, the photon-number distribution itself is described in terms of a multinomial statistics, and the Mandel parameter can be used to infer nonclassical light.
Here, we show that a balanced multiplexing and any measurement POVM yield a click-counting statistics—describing a different statistical quantity than the photon statistics of a beam profile—in the form of a quantum version of a multinomial distribution, leading to higher-order nonclassicality criteria.
We also demonstrated that in some special scenarios (Appendix <ref>), a relation between the click statistics and the photon statistics can be retrieved which is, however, much more involved in the general case; see also Sec. <ref>.
§ EXPERIMENT
Before applying the derived techniques to our data, we describe the experiment and study some features of our individual detectors in this section.
Especially, the response of our detectors is shown to have a nonlinear behavior which underlines the need for our nonclassicality criteria which are applicable to any type of detector.
Additional details can be found in Appendix <ref>.
§.§ Setup description and characterization
An outline of our setup is given in Fig. <ref>.
It is divided into a source that produces heralded photon states and a detection stage which represents one multiplexing step.
In total, we use three superconducting TESs.
For generating correlated photons, we employ a spontaneous parametric down-conversion (PDC) source.
Here, we describe the individual parts in some more detail.
§.§.§ The PDC source
Our spontaneous PDC source is a waveguide-written periodically poled potassium titanyl phosphate (PP-KTP) crystal which is 8 mm long.
The type-II spontaneous PDC process is pumped with laser pulses at 775 nm and a full width at half maximum (FWHM) of 2 nm at a repetition rate of 75 kHz.
The heralding idler mode has a horizontal polarization and it is centered at 1554 nm.
The signal mode is vertically polarized and centered at 1546 nm.
A PBS spatially separates the output signal and idler pulses.
An edge filter discards the pump beam.
In addition, the signal and idler are filtered by 3 nm bandpass filters.
This is done in order to filter out the broadband background which is typically generated in dielectric nonlinear waveguides <cit.>.
In general, such PDC sources have been proven to be well-understood and reliable sources of quantum light <cit.>.
Hence, we may focus our attention on the employed detectors.
§.§.§ The TES detectors
We use superconducting TESs as our photon detectors <cit.>.
These TESs are micro-calorimeters consisting of 25 μm × 25 μm × 20 nm slabs of tungsten located inside an optical cavity with a gold backing mirror designed to maximize absorption at 1500 nm.
They are secured within a ceramic ferule as part of a self-aligning mounting system, so that the fiber core is well aligned to the center of the detector <cit.>.
The TESs are first cooled below their transition temperature within a dilution refrigerator and then heated back up to their transition temperature by Joule heating caused by a voltage bias, which is self-stabilized via an electro-thermal feedback effect <cit.>.
Within this transition region, the steep resistance curve ensures that the small amount of heat deposited by photon absorption causes a measurable decrease in current flowing through the device.
After photon absorption, the heat is then dissipated to the environment via a weak thermal link to the TES substrate.
To read out the signal from this photon absorption process, the current change—produced by photon absorption in the TES—is inductively coupled to a superconducting quantum interference device (SQUID) module where it is amplified, and this signal is subsequently amplified at room temperature.
This results in complex time-varying signals of about 5 μs duration.
These signals are sent to a digitizer to perform fast analog-to-digital conversion, where the overlap with a reference signal is computed and then binned.
This method allows us to process incoming signals at a speed of up to 100 kHz.
Our TESs are installed in a dilution refrigerator operating at a base temperature of about 70 mK and a cooling power of 400 μ W at 100 mK.
One of the detectors has a measured detection efficiency of 0.98^+0.02_-0.08 <cit.>.
The other two TESs have identical efficiencies within the error of our estimation.
§.§ Detector response analysis
Even though we will not use specific detector characteristics for our analysis of nonclassicality, it is nevertheless scientifically interesting to study their response.
This will also outline the complex behavior of superconducting detectors.
For the time being, we ignore the detection events of the TESs 1 and 2 in Fig. <ref> and solely focus on the measurement of the heralding TES.
In Fig. <ref>, the measurement outcome of those marginal counts is shown.
A separation into disjoint energy intervals represents our outcomes k∈{0,…,11} (see also Appendix <ref>).
The distribution around the peaked structures can be considered as fluctuations of the discrete energy levels (indicated by vertical dark green, solid lines).
We observe that the difference between two discrete energies E_n is not constant as one would expect from E_n+1-E_n=ħω, which will be discussed in the next paragraph.
In addition, the marginal photon statistics should be given by a geometric distribution for the two-mode squeezed-vacuum state produced by our PDC source; see Appendix <ref>.
In the logarithmic scaling in Fig. <ref>, this would result in a linear function.
However, we observe a deviation from such a model; compare light green, dashed and dot-dashed lines in Fig. <ref>.
This deviation from the expected, linear behavior could have two origins:
The source is not producing a two-mode squeezed-vacuum state (affecting the height of the peaks), or the detector, including the SQUID response, is not operating in a linear detection regime (influence on the horizontal axis).
To counter the latter, the measured peak energies E_n—relating to the photon numbers n—have been fitted by a quadratic response function n=aE_n^2+bE_n+c; see the inset in Fig. <ref>.
As a result of such a calibration, the peaked structure is well described by a linear function in n for the heralding TES as shown in Fig. <ref> (top), which is now consistent with the theoretical expectation.
The same nonlinear energy transformation also yields a linear n dependence for the TESs 1 and 2 (cf. Fig. <ref>, bottom).
Note that those two detectors only allow for a resolution of K+1=8 outcomes and that these two detectors have indeed a very similar response—the depicted linear function is identical for both.
In conclusion, it is more likely that the measured nonlinear behavior in Fig. <ref> can be assigned to the detectors, and the PDC source is operating according to our expectations.
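The calibration step described above amounts to a simple least-squares fit; a sketch with hypothetical peak energies (the listed values are placeholders, not our measured ones):

```python
import numpy as np

# Hypothetical peak energies (arbitrary units); the decreasing spacing
# mimics the measured compression of the higher peaks.
E_peaks = np.array([0.00, 1.00, 1.96, 2.88, 3.76, 4.60])
n = np.arange(len(E_peaks), dtype=float)           # photon numbers 0, 1, ...
a, b, c = np.polyfit(E_peaks, n, deg=2)            # quadratic response fit
n_calibrated = a * E_peaks ** 2 + b * E_peaks + c  # approximately 0, 1, 2, ...
```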
In summary, we encountered an unexpected, nonlinear behavior in our data.
To study this, a nonlinear fit was applied.
This allowed us to make some predictions about the detector response in the particular interval of measurement while using known properties of our source.
However, a lack of such extra knowledge prevents one from characterizing the detector.
In Sec. <ref>, we have formulated nonclassicality tests which are robust against the particular response function of the individual detectors.
They are accessible without any prior detector analysis and include the eventuality of nonlinear detector responses and other imperfections, such as quantum efficiency.
With this general treatment, we also avoid the time-consuming detector tomography.
§ APPLICATION
In this section, we apply the general theory, presented in Sec. <ref>, to our specific experimental arrangement, shown in Fig. <ref>.
In the first step, we perform an analysis to identify nonclassicality which can be related to photon-number-based approaches.
In the second step, we also compare the different criteria for sub-multinomial, sub-binomial, and sub-Poisson light for different realizations of our multi-photon states.
§.§ Heralded multi-photon states
As derived in Appendix <ref>, the connection of the operator (<ref>), for f_k=k, to the photon-number statistics for the idealized scenario of photoelectric detection POVMs is given by :μ̂:=(η/N)n̂, where η is the quantum efficiency of the individual detectors.
This also relates—in this ideal case—the quantities
⟨:μ̂:⟩=(η/N)⟨:n̂:⟩ and ⟨:(Δμ̂)^2:⟩=(η^2/N^2)⟨:(Δn̂)^2:⟩.
Recalling :n̂:=n̂, we see that ⟨:μ̂:⟩ is proportional to the mean photon number in this approximation.
Similarly, we can connect ⟨:(Δμ̂)^2:⟩ to the normally ordered photon-number fluctuations.
They are non-negative for classical states and negative for sub-Poisson light [see Eq. (<ref>)].
An ideal PDC source is known to produce two-mode squeezed-vacuum states,
|q⟩=√(1-|q|^2)∑_n=0^∞ q^n |n⟩⊗|n⟩,
where |q|<1.
One mode can be used to produce multi-photon states by conditioning to the lth outcome of the heralding detector.
Using photoelectric detector POVMs, we get the following mean value and the variances (Appendix <ref>):
⟨:μ̂:⟩=(η/N)(λ̃+l)/(1-λ̃) and ⟨:(Δμ̂)^2:⟩=(η^2/N^2)[(λ̃+l)^2-l(l+1)]/(1-λ̃)^2,
with a transformed squeezing parameter λ̃=(1-η̃)|q|^2 and η̃ being the efficiency of the heralding detector.
Note, we get the ideal lth Fock state, |l⟩, for λ̃→0.
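For comparison with the measured points, this model is readily evaluated; a sketch with illustrative parameter values (η, η̃, and q are not the experimental values):

```python
def heralded_moments(l, q, eta_herald, eta=1.0, N=2):
    """<: mu :> and <: (Delta mu)^2 :> of the equations above for the
    l-th heralded state (photoelectric detector model)."""
    lam = (1.0 - eta_herald) * abs(q) ** 2       # transformed parameter
    mean = (eta / N) * (lam + l) / (1.0 - lam)
    var = (eta / N) ** 2 * ((lam + l) ** 2 - l * (l + 1)) / (1.0 - lam) ** 2
    return mean, var

for l in range(4):
    print(l, heralded_moments(l, q=0.4, eta_herald=0.9))
# var < 0 for l > 0 (nonclassical); lam -> 0 recovers the Fock state |l>.
```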
The experimental result is shown in the top panel of Fig. <ref>.
Using Eq. (<ref>), we directly sampled the mean value and the variance of :μ̂:=0:π̂_0:+⋯+K:π̂_K: from the measured statistics for a heralding with l=0,…,5.
In this plot, l increases from left to right relating to the increased mean photon numbers (including attenuations) of the heralded multi-photon states.
The idealized theoretical modeling [see Eq. (<ref>)] is shown in the bottom part of Fig. <ref>.
Note, details of the error analysis have been formulated previously in Ref. <cit.>.
From the variances, we observe no nonclassicality when heralding to the 0th outcome, which is expected as we condition on vacuum.
In contrast, we can infer nonclassicality for the conditioning to higher outcomes of the heralding TES, ⟨:(Δμ)^2:⟩<0 for l>0.
We have a linear relation between the normally ordered mean and variance of :μ̂:, which is consistent with the theoretical prediction in Eq. (<ref>).
In the ideal case, the normal-ordered variance of the photon number for Fock states also decreases linearly with increasing l, ⟨l|n̂|l⟩=l and ⟨l|:(Δn̂)^2:|l⟩=-l.
It is also obvious that the errors are quite large for the verification of nonclassicality with this particular test for sub-Poisson light.
We will discuss this in more detail in the next subsection.
§.§ Varying pump power
So far, we have studied measurements for a single pump power of the PDC process.
However, the purity of the heralded states depends on the squeezing parameter, which is a function of the pump power.
For instance, in the limit of a vanishing squeezing, we have the optimal approximation of the heralded state to a Fock state.
However, the rate of the probabilistic generation converges to zero in the same limit (Appendix <ref>).
Hence, we have additionally generated multi-photon states for different squeezing levels.
The results of our analysis are shown in Fig. <ref> and will be discussed in the following.
Suppose we measure the counts C_l for the lth outcome of the heralding TES.
The efficiency of generating this lth heralded state reads
η_gen=C_l/∑_l' C_l'.
From the model in Appendix <ref>, we expect that
η_gen=[(1-|q|^2)/(1-|q|^2(1-η̃))] [η̃|q|^2/(1-|q|^2(1-η̃))]^l.
The efficiency decays exponentially with l and the decay is stronger for smaller squeezing or pump power—i.e., a decreasing |q|^2.
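This model curve can be evaluated and checked for normalization as follows (parameter values are illustrative):

```python
def eta_gen(l, q, eta_herald):
    """Modeled heralding efficiency for the l-th outcome."""
    d = 1.0 - abs(q) ** 2 * (1.0 - eta_herald)
    return ((1.0 - abs(q) ** 2) / d) * (eta_herald * abs(q) ** 2 / d) ** l

print([round(eta_gen(l, q=0.4, eta_herald=0.9), 4) for l in range(6)])
print(sum(eta_gen(l, 0.4, 0.9) for l in range(200)))   # ~1 (normalized)
```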
In the left column of Fig. <ref>, we can observe this behavior.
It can be seen in all other parts of Fig. <ref> that η_gen influences the significance of our results.
A smaller η_gen value naturally implies a larger error because of a decreased sample size C_l.
This holds for increasing l and for decreasing squeezing.
In the second column in Fig. <ref>, labeled as “sub-Poisson”, we study the nonclassicality criterion
0>N^2(N-1)f⃗^ TM^(2)f⃗,
for f⃗=(0,1,…, K)^T,
N=2, and K=7, which is related to sub-Poisson light (Sec. <ref>).
The third column in Fig. <ref> correspondingly shows “sub-binomial” light (Sec. <ref>),
0>N^2(N-1)f⃗^ TM^(2)f⃗ for f⃗=(0,1,…, 1)^T.
The last column, “sub-multinomial”, depicts the nonclassicality criterion
0> N^2(N-1)f⃗_0^ TM^(2)f⃗_0,
where f⃗_0 is a normalized eigenvector to the minimal eigenvalue of M^(2) (Sec. <ref>).
For all notions of nonclassicality under study, the heralding to the 0th outcome is consistent with our expectation of a classical state, which also confirms that no fake nonclassicality is detected.
For instance, applying the Mandel parameter to the data of this 0th heralded stated without the corrections derived here [Eq. (<ref>)], we would observe a negative value; see also similar discussions in Refs. <cit.>.
The case of a Poisson or binomial statistics tends to be above zero, whereas the multinomial case is consistent with the value of zero.
This expectation has been justified below Eq. (<ref>).
A lot of information on the quantum-optical properties of the generated multi-photon (l>0) light fields can be concluded from Fig. <ref>.
Let us mention some of them by focusing on a comparison.
We have the trend that the notion of sub-Poisson light has the least significant nonclassicality.
This is due to the vector f⃗ [Eq. (<ref>)], which assigns a higher contribution to the larger outcome numbers.
However, those contributions have lower count numbers, which consequently decreases the statistical significance.
As depicted in Fig. <ref>, this effect is not present for sub-binomial light, which is described by a more or less balanced weighting of the different counts; see vector f⃗ in Eq. (<ref>).
Still, this vector is fixed.
The optimal vector is naturally computed by the sub-multinomial criterion in Eq. (<ref>).
The quality of the verified nonclassicality is much better than for the other two scenarios of sub-Poisson and sub-binomial light in most of the cases.
Let us mention that the normalized eigenvector to the minimal eigenvalue of the sampled matrix M^(2) typically, but not necessarily, yields the minimal propagated error.
Additionally, a lower squeezing level allows for the heralding of a state which is closer to an ideal Fock state.
This results in higher negativities for decreasing squeezing and fixed outcomes l in Fig. <ref>.
However, the heralding efficiency η_gen is also reduced, which results in a larger error.
Finally, we may point out that this comparative analysis of sub-Poisson, sub-binomial, and sub-multinomial light from data of a single detection arrangement would not be possible without the technique that has been elaborated in this paper (Sec. <ref>).
§ SUMMARY
In summary, we constructed the quantum-optical framework to describe multiplexing schemes that employ arbitrary detectors and to verify nonclassicality of generated multi-photon states.
We formulated the theory of such a detection layout together with nonclassicality tests.
Further, we set up an experimental realization and applied our technique to the data.
In a first step, the theory was formulated.
We proved that the measured click-counting statistics of the scheme under study is always described by a quantum version of the multinomial statistics.
In fact, for classical light, this probability distribution can be considered as a mixture of multinomial statistics.
This bounds the minimal amount of fluctuations which can be observed for classical radiation fields.
More precisely, the matrix of higher-order, normally ordered moments, which can be directly sampled from data, can exhibit negative eigenvalues for nonclassical light.
As a particular example, we discussed nonclassicality tests based on the second-order covariance matrix, which led to establishing the concept of sub-multinomial light.
Previously studied notions of nonclassicality, i.e, sub-Poisson and sub-binomial light, have been found to be special cases of our general nonclassicality criteria.
In our second part, the experiment was analyzed.
Our source produces correlated photon pairs by a parametric-down-conversion process.
A heralding to the outcome of a detection of the idler photons with a transition-edge sensor produced multi-photon states in the signal beam.
A single multiplexing step was implemented with a subsequent detection by two transition-edge sensors to probe the signal field.
The complex function of these detectors was discussed by demonstrating their nonlinear response to the number of incident photons.
Consequently, without worrying about this unfavorable feature, we applied our robust nonclassicality criteria to our data.
We verified the nonclassical character of the produced quantum light.
The criterion of sub-multinomial light was shown to outperform its Poisson and binomial counterparts to the greatest possible extent.
In conclusion, we presented a detailed and more extended study of our approach in Ref. <cit.>.
We formulated the general positive-operator-valued measure and generalized the nonclassicality tests to include higher-order correlations which become more and more accessible with an increasing number of multiplexing steps.
In addition, details of our data analysis and a simple theoretical model were considered.
Thus, we described a robust detection scheme to verify quantum correlations with unspecified detectors and without introducing fake nonclassicality.
The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 665148.
A. E. is supported by EPSRC EP/K034480/1.
J. J. R. is supported by the Netherlands Organization for Scientific Research (NWO).
W. S. K is supported by EPSRC EP/M013243/1.
S. W. N., A. L., and T. G. are supported by the Quantum Information Science Initiative (QISI).
I. A. W. acknowledges an ERC Advanced Grant (MOQUACINO).
The authors thank Johan Fopma for technical support.
The authors gratefully acknowledge helpful comments by Tim Bartley and Omar Magaña-Loaiza.
Contributions of this work by NIST, an agency of the U.S. Government, are not subject to U.S. copyright.
§ COMBINATORICS AND POVM ELEMENTS
Here, we provide the algebra that is needed to get from Eq. (<ref>) to Eq. (<ref>).
More rigorously, we use combinatorial methods to formulate the POVM Π̂_(N_0,…,N_K) in terms of the POVM :π̂_k_1⋯π̂_k_N:.
Say N_k is the number of elements of (k_1,…,k_N) which take the value k.
Then, (N_0,…,N_K) describes the coincidence that N_0 detectors yield the outcome 0, N_1 detectors yield the outcome 1, etc.
One specific and ordered measurement outcome is defined by (k_0,1,…,k_0,N), with
k_0,n = 0 for 1≤ n ≤ N_0,
k_0,n = 1 for N_0+1≤ n ≤ N_0+N_1,
 ⋮
k_0,n = K for N_0+⋯+N_K-1+1≤ n ≤ N,
which results in a given (N_0,…,N_K), where the total number of detectors is N=N_0+⋯+N_K.
This specific example can be used to represent all similar outcomes as we will show now.
The (k_1,…,k_N) for the same combination (N_0,…,N_K) can be obtained from (k_0,σ(1),…,k_0,σ(N)) via a permutation σ∈𝒮_N of the elements.
Here 𝒮_N denotes the permutation group of N elements which has a cardinality of N!.
Note that all permutations σ which exchange identical outcomes result in the same tuple.
This means for the outcome defined in Eq. (<ref>) that (k_0,σ(1),…,k_0,σ(N))=(k_0,1,…,k_0,N) for any permutation of the form σ∈𝒮_N_0×⋯×𝒮_N_K.
Therefore, the POVM element for a given (N_0,…,N_K) can be obtained by summing over all permutations σ∈𝒮_N of the POVMs of individual outcomes :π̂_k_0,1⋯π̂_k_0,N: [Eq. (<ref>)] while correcting for the N_0!⋯ N_K! multi-counts.
Explicitly, we can write
Π̂_(N_0,…,N_K)
= 1/N_0!⋯ N_K!∑_σ∈𝒮_N:π̂_k_0,σ(1)⋯π̂_k_0,σ(N):
= N!/N_0!⋯ N_K!:π̂_0^N_0⋯π̂_K^N_K:,
where relations of the form :ÂB̂Â:=:Â^2B̂: have been used.
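As a quick illustration, take N=2 detectors and binary outcomes, K=1. Then Π̂_(2,0) = :π̂_0^2:, Π̂_(0,2) = :π̂_1^2:, and Π̂_(1,1) = (2!/(1!1!)):π̂_0π̂_1: = 2:π̂_0π̂_1:, where the factor 2 counts the two ordered outcomes (k_1,k_2)∈{(0,1),(1,0)} that realize the same coincidence (N_0,N_1)=(1,1).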
§ CORRECTED MANDEL PARAMETER
For the nonclassicality test in Sec. <ref>, we could assume a detector which can discriminate K=∞ measurement outcomes, which are related to measurement operators of a Poisson form, :π̂_k':=:Γ̂^ke^-Γ̂:/k! <cit.>, where Γ̂=ηn̂ is an example of a linear detector response function (η quantum efficiency).
Using the definition (<ref>), we get :π̂_k:=:(Γ̂/N)^ke^-Γ̂/N:/k!, where the denominator N accounts for the splitting into N modes <cit.>.
This idealized model yields ⟨:μ̂:⟩=⟨:(Γ̂/N):⟩ and
∑_k=0^∞ k^2 N_k/N=⟨:Γ̂^2:⟩/N^2+⟨:Γ̂:⟩/N and A^2= ⟨:Γ̂^2:⟩+⟨:Γ̂:⟩.
Hence, we have Q_Pois=⟨: (ΔΓ̂)^2:⟩/⟨:Γ̂:⟩=NQ_Pois' and
⟨:(Δμ)^2:⟩/⟨:μ:⟩=1/NQ_Pois=η/N⟨:(Δn̂)^2:⟩/⟨:n̂:⟩.
Thus, we have shown that for photoelectric detection models, we retrieve the notion of sub-Poisson light, Q_Pois<0, from the general form (<ref>), which includes a correction term.
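As a simple consistency check, for a coherent state and a linear response Γ̂ = ηn̂ we have ⟨:(Δn̂)^2:⟩ = 0, hence Q_Pois = 0 and ⟨:(Δμ̂)^2:⟩/⟨:μ̂:⟩ = 0: the corrected parameter vanishes exactly at the classical Poisson boundary, and any negative value certifies sub-Poisson light.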
§ BINNING AND MEASURED COINCIDENCES
The data in Fig. <ref> (Sec. <ref>) are grouped in disjoint intervals around the peaks, representing the photon numbers.
They define the outcomes k=0,…,K.
Because we are free in the choice of the intervals, we studied different scenarios and found that the given one is optimal from the information-theoretic perspective.
On the one hand, if the current intervals are divided into smaller ones, we distribute the data of one photon number among several outcomes.
This produces redundant information about this photon number.
On the other hand, we have a loss of information about the individual photon numbers if the interval stretches over multiple photon numbers.
This explains our binning as shown in Fig. <ref>.
An example of a measured coincidence statistics for outcomes (k_1,k_2) is shown in Fig. <ref>.
There, we consider a state which is produced by the simplest conditioning, namely on the 0th outcome of the heralding TES.
Based on this plot, let us briefly explain how these coincidences for (k_1,k_2) result in the statistics c_(N_0,…,N_K) for (N_0,…,N_K) and K=7.
The counts on the diagonal, k_1=k_2=k, of the plot yield c_(N_0,…,N_K) for N_k=2 and N_k'=0 for k'≠ k.
For example, the highest counts are recorded for (k_1,k_2)=(0,0) in Fig. <ref> which gives c_(2,0,…,0) when normalized to all counts.
Off-diagonal combinations, k_1≠ k_2, result in c_(N_0,…,N_K) for N_k_1=N_k_2=1 and N_k=0 otherwise.
For example, the normalized sum of the counts for (k_1,k_2)∈{(0,1),(1,0)} yields c_(1,1,0,…,0).
As we have N=2 TESs in our multiplexing scheme and N_0+⋯+N_K=N, the cases k_1=k_2 and k_1≠ k_2 already define the full distribution c_(N_0,…,N_K).
The asymmetry in the counting statistics between the two detectors results in a small systematic error ≲1%.
One should keep in mind that the counts are plotted on a logarithmic scale.
For all other measurements of heralded multi-photon states, this error is of the same order <cit.>.
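The bookkeeping from the coincidences (k_1,k_2) to the statistics c_(N_0,…,N_K) is simple enough to automate. The following minimal sketch (the array layout and names are our own choices, assuming the raw data have already been binned into the outcomes k=0,…,K) performs this mapping for N=2 detectors:

import numpy as np

def click_statistics(counts):
    # counts[k1, k2]: raw coincidence counts of the two TESs, k = 0, ..., K
    K = counts.shape[0] - 1
    total = counts.sum()
    c = {}
    for k1 in range(K + 1):
        for k2 in range(k1, K + 1):
            if k1 == k2:
                # diagonal: N_{k1} = 2 and all other N_k = 0
                c[(k1, k2)] = counts[k1, k2] / total
            else:
                # off-diagonal: N_{k1} = N_{k2} = 1; sum both orderings
                c[(k1, k2)] = (counts[k1, k2] + counts[k2, k1]) / total
    return c

Each key (k_1,k_2) with k_1 ≤ k_2 labels the tuple (N_0,…,N_K) with N_{k_1} = N_{k_2} = 1 (or N_{k_1} = 2 on the diagonal); for instance, the key (0,1) reproduces c_(1,1,0,…,0) from the counts for (0,1) and (1,0), as in the example above.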
§ SIMPLIFIED THEORETICAL MODEL
Let us analytically compute the quantities which are used for the simplified description of the physical system under study.
The PDC source produces a two-mode squeezed-vacuum state (<ref>), where the first mode is the signal and the second mode is the idler or herald.
In our idealized model, the heralding detector is supposed to be a photon-number-resolving detector with a quantum efficiency η̃.
A multiplexing and a subsequent measurement with N photon-number-resolving detectors (K=∞) are employed for the click counting.
Each of the photon-number-resolving detector's POVM elements is described by
:π̂_k:=:(ηn̂/N)^k/k!e^-ηn̂/N:.
In addition, we will make use of the relations :e^yn̂:=(1+y)^n̂ (cf., e.g., Ref. <cit.>) and
∂_z^k:e^[z-1]yn̂:|_z=1= :(yn̂)^k:,
1/k!∂_z^k:e^[z-1]yn̂:|_z=0= :(yn̂)^k/k!e^-yn̂:.
For this model, we can conclude that the two-mode generating function for the considered two-mode squeezed-vacuum state reads
Γ(z,x⃗)=⟨:e^[z-1]η̃n̂⊗ e^[x⃗-1]ηn̂/N:⟩
= (1-|q|^2)/(1-|q|^2(1-η̃+η̃z)(1-η+ηx⃗/N)),
where z∈ [0,1] relates the heralding mode and the components of x⃗∈[0,1]^N (recall that x⃗=∑_n x_n) to the outcomes of the N detectors in the multiplexing scheme.
From this generating function, we directly deduce the different properties that are used in this paper for comparing the measurement with our model.
The needed derivatives are
∂_x⃗^k⃗∂_z^lΓ(z,x⃗) = ∂_x⃗^k∂_z^lΓ(z,x⃗)
= (1-|q|^2) l! k! [(η/N)|q|^2z']^k [η̃|q|^2x']^l / [1-|q|^2x'z']^{k+l+1}
×∑_j=0^min{k,l} ((k+l-j)!/(j!(k-j)!(l-j)!)) [(1-|q|^2x'z')/(|q|^2x'z')]^j,
where k=|k⃗|, x'=1-η+ηx⃗/N, and z'=1-η̃+η̃z.
It is also worth mentioning that the case N=1 yields the result for photon-number-resolving detection without multiplexing.
The marginal statistics of the heralding detector reads
p̃_l = (1/l!) ∂_z^lΓ(z,x⃗)|_z=0, x_1=⋯=x_N=1
= ((1-|q|^2)/(1-|q|^2(1-η̃))) ((η̃|q|^2)/(1-|q|^2(1-η̃)))^l.
The marginal statistics of the nth detector is
(1/k_n!) ∂_x_n^k_nΓ(1,x⃗)|_x_n=0, z=x_1=⋯=x_n-1=x_n+1=⋯=x_N=1
= ((1-|q|^2)/(1-|q|^2(1-η/N))) ((η|q|^2/N)/(1-|q|^2(1-η/N)))^k_n.
In addition, the case of no multiplexing (N=1 and x≅x⃗) yields for the lth heralded state the following first and second normally ordered photon numbers:
⟨:(ηn̂):⟩ = (1/(p̃_l l!)) ∂_x∂_z^lΓ(z,x)|_z=0,x=1
= η(l+λ̃)/(1-λ̃),
⟨:(ηn̂)^2:⟩ = (1/(p̃_l l!)) ∂_x^2∂_z^lΓ(z,x)|_z=0,x=1
= η^2(2(l+λ̃)^2-l(l+1))/(1-λ̃)^2,
with λ̃=(1-η̃)|q|^2.
The corresponding photon distribution (i.e., for η=1) of the lth multi-photon state reads
p̃_k|l = (1/p̃_l)(1/(k!l!)) ∂_x^k∂_z^l Γ(z,x)|_z=x=0
= 0 for k<l, and \binom{k}{l}(1-λ̃)^{l+1}λ̃^{k-l} for k≥ l.
For λ̃→0, we have p̃_k|l=δ_k,l, which is the photon statistics of the lth Fock state.
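As a sanity check of the last two formulas (a numerical illustration, not part of the derivation), one can verify that p̃_k|l is normalized and that its first moment reproduces (l+λ̃)/(1-λ̃) for η = 1; the truncation kmax below is our own choice:

from math import comb

def p_cond(k, l, lam):
    # photon statistics of the l-th heralded state, p̃_{k|l}
    return 0.0 if k < l else comb(k, l) * (1 - lam) ** (l + 1) * lam ** (k - l)

lam, l, kmax = 0.3, 2, 400
probs = [p_cond(k, l, lam) for k in range(kmax)]
mean = sum(k * p for k, p in enumerate(probs))
assert abs(sum(probs) - 1.0) < 1e-12             # normalization
assert abs(mean - (l + lam) / (1 - lam)) < 1e-9  # first moment, eta = 1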
Setal2017
J. Sperling, W. R. Clements, A. Eckstein, M. Moore, J. J. Renema, W. S. Kolthammer, S. W. Nam, A. Lita, T. Gerrits, W. Vogel, G. S. Agarwal, and I. A. Walmsley,
Detector-Independent Verification of Quantum Light,
https://doi.org/10.1103/PhysRevLett.118.163602Phys. Rev. Lett. 118, 163602 (2017).
E05
A. Einstein,
Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt,
http://dx.doi.org/10.1002/andp.19053220607Ann. Phys. (Leipzig) 17, 132 (1905).
BC10
G. S. Buller and R. J. Collins,
Single-photon generation and detection,
http://dx.doi.org/10.1088/0957-0233/21/1/012002Meas. Sci. Technol. 21, 012002 (2010).
CDKMS14
C. J. Chunnilall, I. P. Degiovanni, S. Kück, I. Müller, and A. G. Sinclair,
Metrology of single-photon sources and detectors: A review,
http://dx.doi.org/10.1117/1.OE.53.8.081910Opt. Eng. 53, 081910 (2014).
GT07
N. Gisin and R. Thew,
Quantum communication,
https://doi.org/10.1038/nphoton.2007.22Nat. Photon. 1, 165 (2007).
S09
J. H. Shapiro,
The Quantum Theory of Optical Communications,
http://dx.doi.org/10.1109/JSTQE.2009.2024959IEEE J. Sel. Top. Quantum Electron. 15, 1547 (2009).
SV11
A. A. Semenov and W. Vogel,
Fake violations of the quantum Bell-parameter bound,
https://doi.org/10.1103/PhysRevA.83.032119Phys. Rev. A 83, 032119 (2011).
GLLSSMK11
I. Gerhardt, Q. Liu, A. Lamas-Linares, J. Skaar, V. Scarani, V. Makarov, and C. Kurtsiefer,
Experimentally Faking the Violation of Bell’s Inequalities,
https://doi.org/10.1103/PhysRevLett.107.170404Phys. Rev. Lett. 107, 170404 (2011).
SVA12a
J. Sperling, W. Vogel, and G. S. Agarwal,
True photocounting statistics of multiple on-off detectors,
http://dx.doi.org/10.1103/PhysRevA.85.023820Phys. Rev. A 85, 023820 (2012).
S63
E. C. G. Sudarshan,
Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams,
http://dx.doi.org/10.1103/PhysRevLett.10.277Phys. Rev. Lett. 10, 277 (1963).
G63
R. J. Glauber,
Coherent and incoherent states of the radiation field,
http://dx.doi.org/10.1103/PhysRev.131.2766Phys. Rev. 131, 2766 (1963).
TG86
U. M. Titulaer and R. J. Glauber,
Correlation functions for coherent fields,
http://dx.doi.org/10.1103/PhysRev.140.B676Phys. Rev. 140, B676 (1965).
M86
L. Mandel,
Non-classical states of the electromagnetic field,
http://dx.doi.org/10.1088/0031-8949/1986/T12/005Phys. Scr. T12, 34 (1986).
MBWLN10
A. Miranowicz, M. Bartkowiak, X. Wang, Yu-xi Liu, and F. Nori,
Testing nonclassicality in multimode fields: A unified derivation of classical inequalities,
http://dx.doi.org/10.1103/PhysRevA.82.013824Phys. Rev. A 82, 013824 (2010).
SRV05
E. Shchukin, Th. Richter, and W. Vogel,
Nonclassicality criteria in terms of moments,
http://dx.doi.org/10.1103/PhysRevA.71.011802Phys. Rev. A 71, 011802(R) (2005).
M79
L. Mandel,
Sub-Poissonian photon statistics in resonance fluorescence,
http://dx.doi.org/10.1364/OL.4.000205Opt. Lett. 4, 205 (1979).
AT92
G. S. Agarwal and K. Tara,
Nonclassical character of states exhibiting no squeezing or sub-Poissonian statistics,
https://doi.org/10.1103/PhysRevA.46.485Phys. Rev. A 46, 485 (1992).
RV02
Th. Richter and W. Vogel,
Nonclassicality of Quantum States: A Hierarchy of Observable Conditions,
http://dx.doi.org/10.1103/PhysRevLett.89.283601Phys. Rev. Lett. 89, 283601 (2002).
SVA16
J. Sperling, W. Vogel, and G. S. Agarwal,
Operational definition of quantum correlations of light,
http://dx.doi.org/10.1103/PhysRevA.94.013833Phys. Rev. A 94, 013833 (2016).
S07
C. Silberhorn,
Detecting quantum light,
http://dx.doi.org/10.1080/00107510701662538Contemp. Phys. 48, 143 (2007).
H09
R. H. Hadfield,
Single-photon detectors for optical quantum information applications,
http://dx.doi.org/10.1038/nphoton.2009.230Nat. Photon. 3, 696 (2009).
LS99
A. Luis and L. L. Sánchez-Soto,
Complete Characterization of Arbitrary Quantum Measurement Processes,
https://doi.org/10.1103/PhysRevLett.83.3573Phys. Rev. Lett. 83, 3573 (1999).
AMP04
G. M. D'Ariano, L. Maccone, and P. Lo Presti,
Quantum Calibration of Measurement Instrumentation,
https://doi.org/10.1103/PhysRevLett.93.250407Phys. Rev. Lett. 93, 250407 (2004).
LKKFSL08
M. Lobino, D. Korystov, C. Kupchak, E. Figueroa, B. C. Sanders, and A. I. Lvovsky,
Complete characterization of quantum-optical processes,
http://dx.doi.org/10.1126/science.1162086Science 322, 563 (2008).
LFCPSREPW09
J. S. Lundeen, A. Feito, H. Coldenstrodt-Ronge, K. L. Pregnell, C. Silberhorn, T. C. Ralph, J. Eisert, M. B. Plenio, and I. A. Walmsley,
Tomography of quantum detectors,
http://dx.doi.org/10.1038/nphys1133Nat. Phys. 5, 27 (2009).
ZDCJEPW12
L. Zhang, A. Datta, H. B. Coldenstrodt-Ronge, X.-M. Jin, J. Eisert, M. B. Plenio, and I. A. Walmsley,
Recursive quantum detector tomography,
http://dx.doi.org/10.1088/1367-2630/14/11/115005New J. Phys. 14, 115005 (2012).
BCDGMMPPP12
G. Brida, L. Ciavarella, I. P. Degiovanni, M. Genovese, A. Migdall, M. G. Mingolla, M. G. A. Paris, F. Piacentini, and S. V. Polyakov,
Ancilla-Assisted Calibration of a Measuring Apparatus,
https://doi.org/10.1103/PhysRevLett.108.253601Phys. Rev. Lett. 108, 253601 (2012).
PHMH12
J. Peřina, O. Haderka, V. Michálek, and M. Hamar,
Absolute detector calibration using twin beams,
https://doi.org/10.1364/OL.37.002475Opt. Lett. 37, 2475 (2012).
BKSSV17
M. Bohmann, R. Kruse, J. Sperling, C. Silberhorn, and W. Vogel,
Direct calibration of click-counting detectors
https://doi.org/10.1103/PhysRevA.95.033806Phys. Rev. A 95, 033806 (2017).
LMN08
A. E. Lita, A. J. Miller, and S. W. Nam,
Counting near-infrared single-photons with 95% efficiency,
https://doi.org/10.1364/OE.16.003032Opt. Express 16, 3032 (2008).
Getal11
T. Gerrits, et al.,
On-chip, photon-number-resolving, telecommunication-band detectors for scalable photonic information processing,
https://doi.org/10.1103/PhysRevA.84.060301Phys. Rev. A 84, 060301(R) (2011).
BCDGLMPRTP12
G. Brida, L. Ciavarella, I. P. Degiovanni, M. Genovese, L. Lolli, M. G. Mingolla, F. Piacentini, M. Rajteri, E. Taralli, and M. G. A. Paris,
Quantum characterization of superconducting photon counters,
https://doi.org/10.1088/1367-2630/14/8/085001New J. Phys. 14, 085001 (2012).
RFZMGDFE12
J. J. Renema, G. Frucci, Z. Zhou, F. Mattioli, A. Gaggero, R. Leoni, M. J. A. de Dood, A. Fiore, and M. P. van Exter,
Modified detector tomography technique applied to a superconducting multiphoton nanodetector,
http://dx.doi.org/10.1364/OE.20.002806Opt. Express 20, 2806 (2012).
ZCDPLJSPW12
L. Zhang, H. Coldenstrodt-Ronge, A. Datta, G. Puentes, J. S. Lundeen, X.-M. Jin, B. J. Smith, M. B. Plenio, and I. A. Walmsley,
Mapping coherence in measurement via full quantum tomography of a hybrid optical detector,
https://doi.org/10.1038/nphoton.2012.107Nat. Photon. 6, 364 (2012).
LCGS10
K. Laiho, K. N. Cassemiro, D. Gross, and C. Silberhorn,
Probing the Negative Wigner Function of a Pulsed Single Photon Point by Point,
https://doi.org/10.1103/PhysRevLett.105.253603Phys. Rev. Lett. 105, 253603 (2010).
BGGMPTPOP11
G. Brida, M. Genovese, M. Gramegna, A. Meda, F. Piacentini, P. Traina, E. Predazzi, S. Olivares, and M. G. A. Paris,
Quantum state reconstruction using binary data from on/off photodetection,
http://dx.doi.org/10.1166/asl.2011.1204Adv. Sci. Lett. 4, 1 (2011).
LBFD08
E. Lantz, J.-L. Blanchet, L. Furfaro, and F. Devaux,
Multi-imaging and Bayesian estimation for photon counting with EMCCDs,
http://dx.doi.org/10.1111/j.1365-2966.2008.13200.xMon. Not. R. Astron. Soc. 386, 2262 (2008).
CWB14
R. Chrapkiewicz, W. Wasilewski, and K. Banaszek,
High-fidelity spatially resolved multiphoton counting for quantum imaging applications,
http://dx.doi.org/10.1364/OL.39.005090Opt. Lett. 39, 5090 (2014).
ATDYRS15
M. J. Applegate, O. Thomas, J. F. Dynes, Z. L. Yuan, D. A. Ritchie, and A. J. Shields,
Efficient and robust quantum random number generation by photon number detection,
http://dx.doi.org/10.1063/1.4928732Appl. Phys. Lett. 107, 071106 (2015).
WDSBY04
E. Waks, E. Diamanti, B. C. Sanders, S. D. Bartlett, and Y. Yamamoto,
Direct Observation of Nonclassical Photon Statistics in Parametric Down-Conversion,
http://dx.doi.org/10.1103/PhysRevLett.92.113602Phys. Rev. Lett. 92, 113602 (2004).
HPHP05
O. Haderka, J. Peřina, Jr., M. Hamar, and J. Peřina,
Direct measurement and reconstruction of nonclassical features of twin beams generated in spontaneous parametric down-conversion,
http://dx.doi.org/10.1103/PhysRevA.71.033815Phys. Rev. A 71, 033815 (2005).
FL13
R. Filip and L. Lachman,
Hierarchy of feasible nonclassicality criteria for sources of photons,
https://doi.org/10.1103/PhysRevA.88.043827Phys. Rev. A 88, 043827 (2013).
APHAB16
I. I. Arkhipov, J. Peřina Jr., O. Haderka, A. Allevi, and M. Bondani,
Entanglement and nonclassicality in four-mode Gaussian states generated via parametric down-conversion and frequency up-conversion,
http://dx.doi.org/10.1038/srep33802Sci. Rep. 6, 33802 (2016).
TKE15
S.-H. Tan, L. A. Krivitsky, and B.-G. Englert,
Measuring quantum correlations using lossy photon-number-resolving detectors with saturation,
http://dx.doi.org/10.1080/09500340.2015.1076080J. Mod. Opt. 63, 276 (2015).
ALCS10
M. Avenhaus, K. Laiho, M. V. Chekhova, and C. Silberhorn,
Accessing Higher Order Correlations in Quantum Optical States by Time Multiplexing,
http://dx.doi.org/10.1103/PhysRevLett.104.063602Phys. Rev. Lett. 104, 063602 (2010).
AOB12
A. Allevi, S. Olivares, and M. Bondani,
Measuring high-order photon-number correlations in experiments with multimode pulsed quantum states,
http://dx.doi.org/10.1103/PhysRevA.85.063835Phys. Rev. A 85, 063835 (2012).
SBVHBAS15
J. Sperling, M. Bohmann, W. Vogel, G. Harder, B. Brecht, V. Ansari, and C. Silberhorn,
Uncovering Quantum Correlations with Time-Multiplexed Click Detection,
http://dx.doi.org/10.1103/PhysRevLett.115.023601Phys. Rev. Lett. 115, 023601 (2015).
BDFL08
J.-L. Blanchet, F. Devaux, L. Furfaro, and E. Lantz,
Measurement of Sub-Shot-Noise Correlations of Spatial Fluctuations in the Photon-Counting Regime,
http://dx.doi.org/10.1103/PhysRevLett.101.233604Phys. Rev. Lett. 101, 233604 (2008).
MMDL12
P.-A. Moreau, J. Mougin-Sisini, F. Devaux, and E. Lantz,
Realization of the purely spatial Einstein-Podolsky-Rosen paradox in full-field images of spontaneous parametric down-conversion,
http://dx.doi.org/10.1103/PhysRevA.86.010101Phys. Rev. A 86, 010101(R) (2012).
CTFLMA16
V. Chille, N. Treps, C. Fabre, G. Leuchs, C. Marquardt, and A. Aiello,
Detecting the spatial quantum uncertainty of bosonic systems,
https://doi.org/10.1088/1367-2630/18/9/093004New J. Phys. 18, 093004 (2016).
SBDBJDVW16
J. Sperling, T. J. Bartley, G. Donati, M. Barbieri, X.-M. Jin, A. Datta, W. Vogel, and I. A. Walmsley,
Quantum Correlations from the Conditional Statistics of Incomplete Data,
http://dx.doi.org/10.1103/PhysRevLett.117.083601Phys. Rev. Lett. 117, 083601 (2016).
ZABGGBRP05
G. Zambra, A. Andreoni, M. Bondani, M. Gramegna, M. Genovese, G. Brida, A. Rossi, and M. G. A. Paris,
Experimental Reconstruction of Photon Statistics without Photon Counting,
http://dx.doi.org/10.1103/PhysRevLett.95.063602Phys. Rev. Lett. 95, 063602 (2005).
PADLA10
W. N. Plick, P. M. Anisimov, J. P. Dowling, H. Lee, and G. S. Agarwal,
Parity detection in quantum optical metrology without number-resolving detectors,
http://dx.doi.org/10.1088/1367-2630/12/11/113025New J. Phys. 12, 113025 (2010).
KV16
B. Kühn and W. Vogel,
Unbalanced Homodyne Correlation Measurements,
https://doi.org/10.1103/PhysRevLett.116.163603Phys. Rev. Lett. 116, 163603 (2016).
CKS14
M. Cooper, M. Karpinski, and B. J. Smith,
Quantum state estimation with unknown measurements,
http://dx.doi.org/10.1038/ncomms5332Nat. Commun. 5, 4332 (2014).
AGSB16
M. Altorio, M. G. Genoni, F. Somma, and M. Barbieri,
Metrology with Unknown Detectors,
https://doi.org/10.1103/PhysRevLett.116.100802Phys. Rev. Lett. 116, 100802 (2016).
PTKJ96
H. Paul, P. Törmä, T. Kiss, and I. Jex,
Photon Chopping: New Way to Measure the Quantum State of Light,
http://dx.doi.org/10.1103/PhysRevLett.76.2464Phys. Rev. Lett. 76, 2464 (1996).
KB01
P. Kok and S. L. Braunstein,
Detection devices in entanglement-based optical state preparation,
http://dx.doi.org/10.1103/PhysRevA.63.033812Phys. Rev. A 63, 033812 (2001).
ASSBW03
D. Achilles, C. Silberhorn, C. Śliwa, K. Banaszek, and I. A. Walmsley,
Fiber-assisted detection with photon number resolution,
http://dx.doi.org/10.1364/OL.28.002387Opt. Lett. 28, 2387 (2003).
FJPF03
M. J. Fitch, B. C. Jacobs, T. B. Pittman, and J. D. Franson,
Photon-number resolution using time-multiplexed single-photon detectors,
http://dx.doi.org/10.1103/PhysRevA.68.043814Phys. Rev. A 68, 043814 (2003).
RHHPH03
J. Řeháček, Z. Hradil, O. Haderka, J. Peřina, Jr., and M. Hamar,
Multiple-photon resolving fiber-loop detector,
http://dx.doi.org/10.1103/PhysRevA.67.061801Phys. Rev. A 67, 061801(R) (2003).
CDSM07
S. A. Castelletto, I. P. Degiovanni, V. Schettini, and A. L. Migdall,
Reduced deadtime and higher rate photon-counting detection using a multiplexed detector array,
https://doi.org/10.1080/09500340600779579J. Mod. Opt. 54, 337 (2007).
SPDBCM07
V. Schettini, S.V. Polyakov, I.P. Degiovanni, G. Brida, S. Castelletto, and A.L. Migdall,
Implementing a Multiplexed System of Detectors for Higher Photon Counting Rates,
https://doi.org/10.1109/JSTQE.2007.902846IEEE J. Sel. Top. Quantum Electron. 13, 978 (2007).
KK64
P. L. Kelley and W. H. Kleiner,
Theory of Electromagnetic Field Measurement and Photoelectron Counting,
https://doi.org/10.1103/PhysRev.136.A316Phys. Rev. 136, A316 (1964).
I14
A. Ilyin,
Generalized binomial distribution in photon statistics,
https://doi.org/10.1515/phys-2015-0005Open Phys. 13, 41 (2014).
PZA16
M. Pleinert, J. von Zanthier, and G. S. Agarwal,
Quantum signatures of collective behavior of a coherently driven two atom system coupled to a single-mode of the electromagnetic field,
https://arxiv.org/abs/1608.00137arXiv:1608.00137 [quant-ph].
MSB16
F. M. Miatto, A. Safari, and R. W. Boyd,
Theory of multiplexed photon number discrimination,
https://arxiv.org/abs/1601.05831arXiv:1601.05831 [quant-ph].
HBLNGS15
G. Harder, T. J. Bartley, A. E. Lita, S. W. Nam, T. Gerrits, and C. Silberhorn,
Single-Mode Parametric-Down-Conversion States with 50 Photons as a Source for Mesoscopic Quantum Optics,
https://doi.org/10.1103/PhysRevLett.116.143601Phys. Rev. Lett. 116, 143601 (2016).
SVA12
J. Sperling, W. Vogel, and G. S. Agarwal,
Sub-Binomial Light,
http://dx.doi.org/10.1103/PhysRevLett.109.093601Phys. Rev. Lett. 109, 093601 (2012).
BDJDBW13
T. J. Bartley, G. Donati, X.-M. Jin, A. Datta, M. Barbieri, and I. A. Walmsley,
Direct Observation of Sub-Binomial Light,
http://dx.doi.org/10.1103/PhysRevLett.110.173602Phys. Rev. Lett. 110, 173602 (2013).
LFPR16
C. Lee, S. Ferrari, W. H. P. Pernice, and C. Rockstuhl,
Sub-Poisson-Binomial Light,
https://doi.org/10.1103/PhysRevA.94.053844Phys. Rev. A 94, 053844 (2016).
MGHPGSWS15
T. Meany, M. Gräfe, R. Heilmann, A. Perez-Leija, S. Gross, M. J. Steel, M. J. Withford, and A. Szameit,
Laser written circuits for quantum photonics,
http://dx.doi.org/10.1002/lpor.201500061Laser Photon. Rev. 9, 1863 (2015).
HSPGHNVS16
R. Heilmann, J. Sperling, A. Perez-Leija, M. Gräfe, M. Heinrich, S. Nolte, W. Vogel, and A. Szameit,
Harnessing click detectors for the genuine characterization of light states,
http://dx.doi.org/10.1038/srep19489Sci. Rep. 6, 19489 (2016).
AW70
G. S. Agarwal and E. Wolf,
Calculus for Functions of Noncommuting Operators and General Phase-Space Methods in Quantum Mechanics. I. Mapping Theorems and Ordering of Functions of Noncommuting Operators,
https://doi.org/10.1103/PhysRevD.2.2161Phys. Rev. D 2, 2161 (1970);
ibid.,
II. Quantum Mechanics in Phase Space,
https://doi.org/10.1103/PhysRevD.2.2187Phys. Rev. D 2, 2187 (1970);
ibid.,
III. A Generalized Wick Theorem and Multitime Mapping,
https://doi.org/10.1103/PhysRevD.2.2206Phys. Rev. D 2, 2206 (1970).
VW06
See Ch. 8 in W. Vogel and D.-G. Welsch,
Quantum Optics
(Wiley-VCH, Weinheim, 2006).
LSV15
T. Lipfert, J. Sperling, and W. Vogel,
Homodyne detection with on-off detector systems,
http://dx.doi.org/10.1103/PhysRevA.92.053835Phys. Rev. A 92, 053835 (2015).
SVA13
J. Sperling, W. Vogel, and G. S. Agarwal,
Correlation measurements with on-off detectors,
http://dx.doi.org/10.1103/PhysRevA.88.043821Phys. Rev. A 88, 043821 (2013).
BKSSV17atm
M. Bohmann, R. Kruse, J. Sperling, C. Silberhorn, and W. Vogel,
Probing free-space quantum channels with in-lab experiments,
https://arxiv.org/abs/1702.04127arXiv:1702.04127 [quant-ph].
ZM90
X. T. Zou and L. Mandel,
Photon-antibunching and sub-Poissonian photon statistics,
https://doi.org/10.1103/PhysRevA.41.475Phys. Rev. A 41, 475 (1990).
ECMS11
A. Eckstein, A. Christ, P. J. Mosley, and C. Silberhorn,
Highly Efficient Single-Pass Source of Pulsed Single-Mode Twin Beams of Light,
https://doi.org/10.1103/PhysRevLett.106.013603Phys. Rev. Lett. 106, 013603 (2011).
LCS09
K. Laiho, K. N. Cassemiro, and Ch. Silberhorn,
Producing high fidelity single photons with optimal brightness via waveguided parametric down-conversion,
https://doi.org/10.1364/OE.17.022823Opt. Express 17, 22823 (2009).
KHQBSS13
S. Krapick, H. Herrmann, V. Quiring, B. Brecht, H. Suche and Ch. Silberhorn,
An efficient integrated two-color source for heralded single photons,
http://dx.doi.org/10.1088/1367-2630/15/3/033010New J. Phys. 15, 033010 (2013).
MLCVGN11
A. J. Miller, A. E. Lita, B. Calkins, I. Vayshenker, S. M. Gruber, and S. W. Nam,
Compact cryogenic self-aligning fiber-to-detector coupling with losses below one percent,
https://doi.org/10.1364/OE.19.009102Opt. Express 19, 9102 (2011).
I95
K. D. Irwin,
An application of electrothermal feedback for high resolution cryogenic particle detection,
http://dx.doi.org/10.1063/1.113674Appl. Phys. Lett. 66, 1998 (1995).
HMGHLNNDKW15
P. C. Humphreys, B. J. Metcalf, T. Gerrits, T. Hiemstra, A. E. Lita, J. Nunn, S. W. Nam, A. Datta, W. S. Kolthammer, and I. A. Walmsley,
Tomography of photon-number resolving continuous-output detectors,
http://dx.doi.org/10.1088/1367-2630/17/10/103044New J. Phys. 17, 103044 (2015).
SVA14
J. Sperling, W. Vogel, and G. S. Agarwal,
Quantum state engineering by click counting,
http://dx.doi.org/10.1103/PhysRevA.89.043829Phys. Rev. A 89, 043829 (2014).
|
http://arxiv.org/abs/1701.08188v1 | 20170127205009 | A construction of hyperkähler metrics through Riemann-Hilbert problems I | [
"César Garza"
] | math.DG | [
"math.DG"
] |
Department of Mathematics, IUPUI, Indianapolis, USA
[email protected]
In 2009 Gaiotto, Moore and Neitzke presented a new construction of hyperkähler metrics on the total spaces of certain complex integrable systems, represented as a torus fibration ℳ over a base space ℬ, except for a divisor D in ℬ, over which the torus fiber degenerates into a nodal torus. The hyperkähler metric g is obtained via solutions 𝒳_γ of a Riemann-Hilbert problem. We interpret the Kontsevich-Soibelman Wall Crossing Formula as an isomonodromic deformation of a family of RH problems, therefore guaranteeing continuity of 𝒳_γ at the walls of marginal stability. The technical details about solving the different classes of Riemann-Hilbert problems that arise here are left to a second article. To extend this construction to singular fibers, we use the Ooguri-Vafa case as our model and choose a suitable gauge transformation that allows us to define an integral equation at the degenerate fiber, whose solutions are the desired Darboux coordinates 𝒳_γ. We show that these functions yield a holomorphic symplectic form ϖ(ζ), which, by Hitchin's twistor construction, produces the desired hyperkähler metric.
A construction of hyperkähler metrics through Riemann-Hilbert problems I
C. Garza
========================================================================
§ INTRODUCTION
Hyperkähler manifolds first appeared within the framework of differential geometry as Riemannian manifolds with holonomy group of a special restricted type. Nowadays, hyperkähler geometry forms a separate research subject fusing traditional areas of mathematics such as differential and algebraic geometry of complex manifolds, holomorphic symplectic geometry, Hodge theory and many others.
One of the latest links can be found in theoretical physics: in 2009, Gaiotto, Moore and Neitzke <cit.> proposed a new construction of hyperkähler metrics g on target spaces ℳ of quantum field theories with d = 4, 𝒩 = 2 supersymmetry. Such manifolds were already known to be hyperkähler (see <cit.>), but no explicit hyperkähler metrics had been constructed.
The manifold ℳ is the total space of a complex integrable system and it can be expressed as follows. There exists a complex manifold ℬ, a divisor D ⊂ℬ and a subset ℳ' ⊂ℳ such that ℳ' is a torus fibration over ℬ' := ℬ\ D. On the divisor D, the torus fibers of ℳ degenerate, as Figure <ref> shows.
Moduli spaces ℳ of Higgs bundles on Riemann surfaces with prescribed singularities at finitely many points are one of the prime examples of this construction. Hyperkähler geometry is useful since we can use Hitchin's twistor space construction <cit.> and consider a whole ℂP^1 worth of complex structures at once. In the case of moduli spaces of Higgs bundles, this allows us to consider ℳ from three distinct viewpoints:
* (Dolbeault) ℳ_Dol is the moduli space of Higgs bundles, i.e. pairs (E, Φ), E → C a rank n degree zero holomorphic vector bundle and Φ∈Γ(End(E) ⊗Ω^1) a Higgs field.
* (De Rham) ℳ_DR is the moduli space of flat connections on rank n holomorphic vector bundles, consisting of pairs (E, ∇) with ∇ : E →Ω^1 ⊗ E a holomorphic connection and
* (Betti) ℳ_B = Hom(π_1(C), GL_n(ℂ))/GL_n(ℂ) of conjugacy classes of representations of the fundamental group of C.
All these algebraic structures form part of the family of complex structures making ℳ into a hyperkähler manifold.
To prove that the manifolds ℳ from the integrable systems are indeed hyperkähler, we start with the existence of a simple, explicit hyperkähler metric g^sf on ℳ'. Unfortunately, g^sf does not extend to ℳ. To construct a complete metric g, it is necessary to apply “quantum corrections” to g^sf. These are obtained by solving a certain explicit integral equation (see (<ref>) below). The novelty is that the solutions, acting as Darboux coordinates for the hyperkähler metric g, have discontinuities at a specific locus in ℬ. Such discontinuities cancel the global monodromy around D, and it is thus feasible to expect that g extends to the entire ℳ.
We start by defining a Riemann-Hilbert problem on the ℂP^1-slice of the twistor space 𝒵 = ℳ' ×ℂP^1. That is, we look for functions 𝒳_γ with prescribed discontinuities and asymptotics. In the language of Riemann-Hilbert theory, this is known as monodromy data. Rather than a single Riemann-Hilbert problem, we have a whole family of them parametrized by the manifold ℳ'. We show that this family constitutes an isomonodromic deformation since by the Kontsevich-Soibelman Wall-Crossing Formula, the monodromy data remains invariant.
Although solving Riemann-Hilbert problems in general is not always possible, in this case the problem can be reduced to an integral equation solved by standard Banach contraction principles. We will focus on a particular case known as the “Pentagon” (a case of Hitchin systems with gauge group SU(2)). The family of Riemann-Hilbert problems and their methods of solution is a topic of independent study, so we leave this construction to a second article, which may be of interest in the study of boundary-value problems.
The extension of the manifold ℳ' is obtained by gluing a circle bundle with an appropriate gauge transformation eliminating any monodromy problems near the divisor D. The circle bundle constructs the degenerate tori at the discriminant locus D (see Figure <ref>).
On the extended manifold ℳ we prove that the solutions 𝒳_γ of the Riemann-Hilbert problem on ℳ' extend and the resulting holomorphic symplectic form ϖ(ζ) gives the desired hyperkähler metric g.
Although for the most basic examples of this construction such as the moduli space of Higgs bundles it was already known that ℳ' extends to a hyperkähler manifold ℳ with degenerate torus fibers, the construction here works for the general case of _ℬ = 1. Moreover, the functions 𝒳_γ here are special coordinates arising in moduli spaces of flat connections, Teichmüller theory and Mirror Symmetry. In particular, these functions are used in <cit.> for the construction of holomorphic discs with boundary on special Lagrangian torus fibers of mirror manifolds.
The organization of the paper is as follows. In Section <ref> we introduce the complex integrable systems to be considered in this paper. These systems arose first in the study of moduli spaces of Higgs bundles and they can be written in terms of initial data and studied abstractly. This leads to a formulation of a family of Riemann-Hilbert problems, whose solutions provide Darboux coordinates for the moduli spaces ℳ considered and hence equip the latter with a hyperkähler structure. In Section <ref> we fully work the simplest example of these integrable systems: the Ooguri-Vafa case. Although the existence of this hyperkähler metric was already known, this is the first time it is obtained via Riemann-Hilbert methods. In Section <ref>, we explicitly show that this metric is a smooth deformation of the well-known Taub-NUT metric near the singular fiber of ℳ thus proving its extension to the entire manifold. In Section <ref> we introduce our main object of study, the Pentagon case. This is the first nontrivial example of the integrable systems considered and here the Wall Crossing phenomenon is present. We use the KS wall-crossing formula to apply an isomonodromic deformation of the Riemann-Hilbert problems leading to solutions continuous at the wall of marginal stability. Finally, Section <ref> deals with the extension of these solutions 𝒳_γ to singular fibers of ℳ thought as a torus fibration. What we do is to actually complete the manifold ℳ from a regular torus fibration ℳ' by gluing circle bundles near a discriminant locus D. This involves a change of the torus coordinates for the fibers of ℳ'. In terms of the new coordinates, the 𝒳_γ functions extend to the new patch and parametrize the complete manifold ℳ. We finish the paper by showing that, near the singular fibers of ℳ, the hyperkähler metric g looks like the metric for the Ooguri-Vafa case plus some smooth corrections, thus proving that this metric is complete.
Acknowledgment: The author likes to thank Andrew Neitzke for his guidance, support and incredibly helpful conversations.
§ INTEGRABLE SYSTEMS DATA
We start by presenting the complex integrable systems introduced in <cit.>. As motivation, consider the moduli space ℳ of Higgs bundles on a complex curve C with Higgs field Φ having prescribed singularities at finitely many points. In <cit.>, it is shown that the space of quadratic differentials u on C with fixed poles and residues is a complex affine space ℬ and the map det : ℳ→ℬ is proper with generic fiber Jac(Σ_u), a compact torus obtained from the spectral curve Σ_u := {(z, ϕ) ∈ T^*C : ϕ^2 = u}, a double cover of C branched over the zeroes of the quadratic differential u. Σ_u has an involution that flips ϕ↦ -ϕ. If we take Γ_u to be the subgroup of H_1(Σ_u, ℤ) odd under this involution, Γ forms a lattice of rank 2 over ℬ', the space of quadratic differentials with only simple zeroes. This lattice comes with a non-degenerate anti-symmetric pairing ⟨ , ⟩ from the intersection pairing in H_1. It is also proved in <cit.> that the fiber Jac(Σ_u) can be identified with the set of characters Hom(Γ_u, ℝ/2πℤ). If λ denotes the tautological 1-form in T^* C, then for any γ∈Γ,
Z_γ = 1/π∮_γλ
defines a holomorphic function Z_γ in ℬ'. Let {γ_1, γ_2} be a local basis of Γ with {γ^1, γ^2} the dual basis of Γ^*. Without loss of generality, we also denote by ⟨ , ⟩ the pairing in Γ^*. Let ⟨ dZ ∧ dZ ⟩ be short notation for ⟨γ^1, γ^2 ⟩ dZ_γ_1∧ dZ_γ_2. Since _ℬ' = 1, ⟨ dZ ∧ dZ ⟩ = 0.
This type of data arises in the construction of hyperkähler manifolds as in <cit.> and <cit.>, so we summarize the conditions required:
We start with a complex manifold ℬ (later shown to be affine) of dimension n and a divisor D ⊂ℬ. Let ℬ' = ℬ\ D. Over ℬ' there is a local system Γ with fiber a rank 2n lattice, equipped with a non-degenerate anti-symmetric integer valued pairing ⟨ , ⟩.
We will denote by Γ^* the dual of Γ and, by abuse of notation, we'll also use ⟨ , ⟩ for the dual pairing (not necessarily integer-valued) in Γ^*. Let u denote a general point of ℬ'. We want to obtain a torus fibration over ℬ', so let TChar_u(Γ) be the set of twisted unitary characters of Γ_u[Although we can also work with the set of unitary characters (no twisting involved) by shifting the θ-coordinates, we choose not to do so, as that results in more complex calculations], i.e. maps θ : Γ_u →ℝ/2πℤ satisfying
θ_γ + θ_γ' = θ_γ + γ' + π⟨γ, γ' ⟩.
Topologically, TChar_u(Γ) is a torus (S^1)^2n. Letting u vary, the TChar_u(Γ) form a torus bundle ℳ' over ℬ'. Any local section γ gives a local angular coordinate of ℳ' by “evaluation on γ”, θ_γ : ℳ' →ℝ/2πℤ.
We also assume there exists a homomorphism Z : Γ→ℂ such that the vector Z(u) ∈Γ^*_u ⊗ℂ varies holomorphically with u. If we pick a patch U ⊂ℬ' on which Γ admits a basis {γ_1, …, γ_2n} of local sections in which ⟨ , ⟩ is the standard symplectic pairing, then (after possibly shrinking U) the functions
f_i = Re(Z_γ_i)
are real local coordinates. The transition functions on overlaps U ∩ U' are valued on Sp(2n, ℤ), as different choices of basis in Γ must fix the symplectic pairing. This gives an affine structure on ℬ'.
By differentiating and evaluating in γ, we get 1-forms dθ_γ, d Z_γ on ℳ' which are linear on Γ. For a local basis {γ_1, …, γ_2n} as in the previous paragraph, let {γ^1, …, γ^2n} denote its dual basis on Γ^*. We write ⟨ dZ ∧ dZ ⟩ as short notation for
⟨γ^i , γ^j ⟩ dZ_γ_i∧ dZ_γ_j,
where we sum over repeated indices. Observe that the anti-symmetric pairing ⟨ , ⟩ and the anti-symmetric wedge product of 1-forms makes (<ref>) symmetric. We require that:
⟨ dZ ∧ dZ ⟩ = 0,
By (<ref>), near u, ℬ' can be locally identified with a complex Lagrangian submanifold of Γ^* ⊗_ℤℂ.
In the example of moduli spaces of Higgs bundles, as u approaches a quadratic differential with non-simple zeros, one homology cycle vanishes (see Figure <ref>). This cycle γ_0 is primitive in H_1 and its monodromy around the critical quadratic differential is governed by the Picard-Lefschetz formula. In the general case, let D_0 be a component of the divisor D ⊂ℬ. We also assume the following:
* Z_γ_0(u) → 0 as u → u_0 ∈ D_0 for some γ_0 ∈Γ.
* γ_0 is primitive (i.e. there exists some γ' with ⟨γ_0, γ'⟩ = 1).
* The monodromy of Γ around D_0 is of “Picard-Lefschetz type”, i.e.
γ↦γ + ⟨γ, γ_0 ⟩γ_0
We assign a complex structure and a holomorphic symplectic form on ℳ' as follows (see <cit.> and the references therein for proofs). Take a local basis {γ_1, …, γ_2n} of Γ. If ϵ^ij = ⟨γ_i , γ_j⟩ and ϵ_ij is its dual, let
ω_+ = ⟨ dZ ∧ dθ⟩ = ϵ_ij dZ_γ_i∧ dθ_γ_j.
By linearity on γ of the 1-forms, ω_+ is independent of the choice of basis. There is a unique complex structure J on ℳ' for which ω_+ is of type (2,0). The 2-form ω_+ gives a holomorphic symplectic structure on (ℳ', J). With respect to this structure, the projection π: ℳ' →ℬ' is holomorphic, and the torus fibers ℳ'_u = π^-1(u) are compact complex Lagrangian submanifolds.
Recall that a positive 2-form ω on a complex manifold is a real 2-form for which ω(v,Jv) >0 for all real tangent vectors v. From now on, we assume that ⟨ dZ ∧ dZ̄⟩ is a positive 2-form on ℬ'. Now fix R > 0. Then we can define a 2-form on ℳ' by
ω_3^sf = (R/4)⟨ dZ ∧ dZ̄⟩ - (1/8π^2 R)⟨ dθ∧ dθ⟩.
This is a positive form of type (1,1) in the J complex structure. Thus, the triple (ℳ', J, ω_3^sf) determines a Kähler metric g^sf on ℳ'. This metric is in fact hyperkähler (see <cit.>), so we have a whole -worth of complex structures for ℳ', parametrized by ζ∈. The above complex structure J represents J(ζ = 0), the complex structure at ζ = 0 in . The superscript ^sf stands for “semiflat”. This is because g^sf is flat on the torus fibers ℳ'_u.
Alternatively, it is shown in <cit.> that if
𝒳_γ^sf(ζ) = exp( π R Z_γ/ζ + iθ_γ + π R ζZ̄_γ)
Then the 2-form
ϖ(ζ) = (1/8π^2 R)⟨ dlog𝒳^sf(ζ) ∧ dlog𝒳^sf(ζ) ⟩
(where the DeRham operator d is applied to the ℳ' part only) can be expressed as
-(i/2ζ)ω_+ + ω^sf_3 - (iζ/2)ω_-,
for ω_- = ω̄_+ = ⟨ dZ̄∧ dθ⟩, that is, in the twistor space 𝒵 = ℳ' ×ℂP^1 of <cit.>, ϖ(ζ) is a holomorphic section of Ω_𝒵/ℂP^1⊗𝒪(2) (the twisting by 𝒪(2) is due to the poles at ζ = 0 and ζ = ∞ in ℂP^1). This is the key step in Hitchin's twistor space construction. By <cit.>, ℳ' is hyperkähler.
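To see where this decomposition comes from, note that
dlog𝒳^sf_γ = (π R/ζ) dZ_γ + i dθ_γ + π R ζ dZ̄_γ,
so ⟨ dlog𝒳^sf(ζ) ∧ dlog𝒳^sf(ζ) ⟩ is a Laurent polynomial in ζ with powers ζ^-2,…,ζ^2. The coefficients of ζ^-2 and ζ^2 are multiples of ⟨ dZ ∧ dZ ⟩ and ⟨ dZ̄∧ dZ̄⟩ and vanish because ⟨ dZ ∧ dZ ⟩ = 0, while the three surviving powers give precisely the terms -(i/2ζ)ω_+, ω^sf_3 and -(iζ/2)ω_- above. In particular, ϖ(ζ) has simple poles at ζ = 0 and ζ = ∞, which accounts for the 𝒪(2) twisting.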
We want to reproduce the same construction of a hyperkähler metric now with corrected Darboux coordinates 𝒳_γ(ζ). For that, we need another piece of data. Namely, a function Ω : Γ→ℤ such that Ω(γ;u) = Ω(-γ;u). Furthermore, we impose a condition on the nonzero Ω(γ;u). Introduce a positive definite norm on Γ. Then we require the existence of K > 0 such that
|Z_γ|/‖γ‖ > K
for those γ such that Ω(γ; u) ≠ 0. This is called the Support Property, as in <cit.>.
For a component of the singular locus D_0 and for γ_0 the primitive element in Γ for which Z_γ_0→ 0 as u → u_0 ∈ D_0, we also require
Ω(γ_0; u) = 1 for all u in a neighborhood of D_0
To see where these invariants arise from, consider the example of moduli spaces of Higgs bundles again. A quadratic differential u ∈ℬ' determines a metric h on C. Namely, if u = P(z)dz^2, h = |P(z)| dz dz. Let C' be the curve obtained after removing the poles and zeroes of u. Consider the finite length inextensible geodesics on C' in the metric h. These come in two types:
* Saddle connections: geodesics running between two zeroes of u. See Figure <ref>.
* Closed geodesics: When they exist, they come in 1-parameter families sweeping out annuli in C'. See Figure <ref>.
On the branched cover Σ_u → C, each geodesic can be lifted to a union of closed curves in Σ_u, representing some homology class γ∈ H_1(Σ_u, ℤ). See Figure <ref>.
In this case, Ω(γ,u) counts these finite length geodesics: every saddle connection with lift γ contributes +1 and every closed geodesic with lift γ contributes -2.
Back to the general case, we're ready to formulate a Riemann-Hilbert problem on the ℂP^1-slice of the twistor space 𝒵 = ℳ' ×ℂP^1. Recall that in a RH problem we have a contour Σ dividing a complex plane (or its compactification) and one tries to obtain functions which are analytic in the regions defined by the contour, with continuous extensions along the boundary and with prescribed discontinuities along Σ and fixed asymptotics at the points where Σ is non-smooth.
Define a ray associated to each γ∈Γ_u as:
ℓ_γ(u) = Z_γℝ_-.
We also define a transformation of the functions 𝒳_γ' given by each γ∈Γ_u:
𝒦_γ𝒳_γ' = 𝒳_γ' (1- 𝒳_γ)^⟨γ', γ⟩
Let T_u denote the space of twisted complex characters of Γ_u, i.e. maps 𝒳 : Γ_u →ℂ^× satisfying
𝒳_γ𝒳_γ' = (-1)^⟨γ, γ'⟩𝒳_γ + γ'
T_u has a canonical Poisson structure given by
{𝒳_γ, 𝒳_γ'} = ⟨γ, γ' ⟩𝒳_γ + γ'
The T_u glue together into a bundle over ℬ' with fiber a complex Poisson torus. Let T be the pullback of this system to ℳ'. We can interpret the transformations 𝒦_γ as birational automorphisms of T.
To each ray ℓ going from 0 to ∞ in the ζ-plane, we can define a transformation
S_ℓ = ∏_γ : ℓ_γ(u) = ℓ𝒦_γ^Ω(γ;u)
Note that all the γ's involved in this product are multiples of each other, so the 𝒦_γ commute and it is not necessary to specify an order for the product.
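For example, in the Ooguri-Vafa case recalled below, the only active rays are ℓ_{±γ_e}, and since Ω(±γ_e;u) = 1 we get S_ℓ_γ_e = 𝒦_γ_e, acting by 𝒳_e ↦𝒳_e and 𝒳_m ↦𝒳_m(1 - 𝒳_e)^⟨γ_m, γ_e⟩ = 𝒳_m(1 - 𝒳_e); similarly S_ℓ_-γ_e = 𝒦_-γ_e.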
To obtain the corrected 𝒳_γ, we can formulate a Riemann-Hilbert problem of which these functions are the solutions. We seek a map 𝒳 : ℳ'_u ×ℂ^×→ T_u with the following properties:
* 𝒳 depends piecewise holomorphically on ζ, with discontinuities only at the rays ℓ_γ(u) for which Ω(γ;u) ≠ 0.
* The limits 𝒳^± as ζ approaches any ray ℓ from both sides exist and are related by
𝒳^+ = S_ℓ^-1∘𝒳^-
* 𝒳 obeys the reality condition
𝒳_-γ(-1/ζ̅) = \overline{𝒳_γ(ζ)}
* For any γ∈Γ_u, lim_ζ→ 0𝒳_γ(ζ) / 𝒳^sf_γ(ζ) exists and is real.
In <cit.>, this RH problem is formulated as an integral equation:
𝒳_γ(u,ζ) = 𝒳^sf_γ(u,ζ)exp[ -(1/4π i)∑_γ'Ω(γ';u) ⟨γ, γ' ⟩∫_ℓ_γ'(u) (dζ'/ζ') ((ζ'+ζ)/(ζ'-ζ)) log( 1 - 𝒳_γ'(u,ζ'))],
One can define recursively, setting 𝒳^(0) = 𝒳^sf:
𝒳^(ν+1)_γ(u,ζ) = 𝒳^sf_γ(u,ζ)exp[ -(1/4π i)∑_γ'Ω(γ';u) ⟨γ, γ' ⟩∫_ℓ_γ'(u) (dζ'/ζ') ((ζ'+ζ)/(ζ'-ζ)) log( 1 - 𝒳^(ν)_γ'(u,ζ'))],
More precisely, we have a family of RH problems, parametrized by u ∈ℬ', as this defines the rays ℓ_γ(u), the complex torus T_u where the symplectomorphisms are defined and the invariants Ω(γ;u) involved in the definition of the problem.
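For concreteness, here is a schematic numerical implementation of one step of this iteration for the Ooguri-Vafa data recalled below, where only γ' = ±γ_e contribute, so a single step already gives the exact answer. The discretization parameters and function names are our own choices and are not part of the construction:

import numpy as np

def x_e_pow(zp, a, theta_e, R, sign):
    # (X_e)^{sign} with Z_{gamma_e}(a) = a; the exponent is computed
    # directly so that it decays along the corresponding ray
    return np.exp(sign * (np.pi * R * a / zp + 1j * theta_e
                          + np.pi * R * zp * np.conj(a)))

def log_correction(zeta, a, theta_e, R, n=4000, T=50.0):
    # one Picard step for log(X_m / X_m^sf); zeta must lie off both rays
    b = a / abs(a)
    t = np.linspace(1e-4, T, n)
    corr = 0.0 + 0.0j
    for sign, ray in ((+1, -t * b), (-1, t * b)):   # l_+ then l_-
        f = (ray + zeta) / (ray * (ray - zeta)) \
            * np.log(1 - x_e_pow(ray, a, theta_e, R, sign))
        corr += sign * (1j / (4 * np.pi)) \
            * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(ray))
    return corr   # then X_m = X_m^sf * exp(corr)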
We still need one more piece of the puzzle, since the latter function Ω may not be continuous. In fact, Ω jumps along a real codimension-1 loci in ℬ' called the “wall of marginal stability”. This is the locus where 2 or more functions Z_γ coincide in phase, so two or more rays ℓ_γ(u) become one. More precisely:
W = {u ∈ℬ': ∃γ_1, γ_2 with Ω(γ_1;u) ≠ 0, Ω(γ_2;u) ≠ 0, ⟨γ_1, γ_2⟩≠ 0, Z_γ_1/Z_γ_2∈ℝ_+}
The jumps of Ω are not arbitrary; they are governed by the Kontsevich-Soibelman wall-crossing formula.
To describe this, let V be a strictly convex cone in the ζ-plane with apex at the origin. Then for any u ∉ W define
A_V(u) = ∏^→_γ : Z_γ(u) ∈ V𝒦_γ^Ω(γ;u) = ∏^→_ℓ⊂ V S_ℓ[This product may be infinite. One should more precisely think of A_V(u) as living in a certain prounipotent completion of the group generated by {𝒦_γ}_γ : Z_γ(u) ∈ V as explained in <cit.>]
The arrow indicates the order of the rational maps 𝒦_γ. A_V(u) is a birational Poisson automorphism of T_u. Define a V-good path to be a path p in ℬ' along which there is no point u with Z_γ(u) ∈∂ V and Ω(γ;u) ≠ 0. (So as we travel along a V-good path, no ℓ_γ rays enter or exit V.) If u, u' are the endpoints of a V-good path p, the wall-crossing formula is the condition that A_V(u), A_V(u') are related by parallel transport in T along p. See Figure <ref>.
§.§ Statement of Results
We will restrict in this paper to the case dim_ℂℬ = 1, so n = 1 and Γ has rank 2. We want to extend the torus fibration ℳ' to a manifold ℳ with degenerate torus fibers. To give an example, in the case of Hitchin systems, the torus bundle ℳ' is not the moduli space of Higgs bundles yet, as we have to consider quadratic differentials with non-simple zeroes too. The main results of this paper center on the extension of the manifold ℳ' to a manifold ℳ with an extended fibration ℳ→ℬ such that the torus fibers ℳ'_u degenerate to nodal tori (i.e. “singular” or “bad” fibers) for u ∈ D.
We start by fully working out the simplest example, known as Ooguri-Vafa <cit.>. Here we have a fibration over the open unit disk ℬ := {u ∈ℂ : |u| < 1 }. At the discriminant locus D := { u = 0 }, the fibers degenerate into a nodal torus. The local rank-2 lattice Γ has a basis (γ_m, γ_e) and the skew-symmetric pairing is defined by ⟨γ_m, γ_e ⟩ = 1. The monodromy of Γ around u = 0 is γ_e ↦γ_e, γ_m ↦γ_m + γ_e. We also have functions Z_γ_e(u) = u, Z_γ_m(u) = (u/2π i)( log u - 1) + f(u), for f holomorphic and admitting an extension to ℬ. Finally, the integer-valued function Ω on Γ is here: Ω(±γ_e; u) = 1 and Ω(γ; u) = 0 for any other γ∈Γ_u. There is no wall of marginal stability in this case. The integral equation (<ref>) can be solved after just 1 iteration.
For all other nontrivial cases, in order to give a satisfactory extension of the 𝒳_γ coordinates, it was necessary to develop the theory of Riemann-Hilbert-Birkhoff problems to suit these infinite-dimensional systems (as the transformations S_ℓ defining the problem can be thought of as operators on C^∞(T_u), rather than matrices). It is not clear that such coordinates can be extended, since we may approach the bad fiber from two different sides of the wall of marginal stability and obtain two different extensions. To overcome this first obstacle, we have to use the theory of isomonodromic deformations as in <cit.> to reformulate the Riemann-Hilbert problem in <cit.> independently of the regions determined by the wall.
Having redefined the problem, we want our 𝒳_γ to be smooth on the parameters θ_γ_1, θ_γ_2 and u,
away from where the prescribed jumps are. Even on ℳ', there was no mathematical proof that such a condition must hold. In the companion paper <cit.>, we combine classical Banach contraction methods and Arzela-Ascoli results on uniform convergence in compact sets to obtain:
If the collection J of nonzero Ω(u; γ) satisfies the support property (<ref>) and if the parameter R of (<ref>) is large enough (determined by the values |Z_γ(u)|, γ∈ J), there exists a unique collection of functions 𝒳_γ with the prescribed asymptotics and jumps as in <cit.>. These functions are smooth on u and the torus coordinates θ_1, θ_2 (even for u at the wall of marginal stability), and piecewise holomorphic on ζ.
Since we're considering only the case n=1, Γ is a rank-2 lattice over the Riemann surface ℬ' and the discriminant locus D where the torus fibers degenerate is a discrete subset of ℬ.
From this point on, we restrict our attention to the next nontrivial system, known as the Pentagon case <cit.>. Here ℬ = with 2 bad fibers which we can assume are at u = -2, u = 2 and ℬ' is the twice-punctured plane. There is a wall of marginal stability where all Z_γ are contained in the same line. This separates ℬ in two domains ℬ_out and a simply-connected ℬ_in. See Figure <ref>.
On ℬ_in we can trivialize Γ and choose a basis {γ_1, γ_2} with pairing ⟨γ_1, γ_2⟩ = 1. This basis does not extend to a global basis for Γ since it is not invariant under monodromy. However, the set {γ_1, γ_2, -γ_1, -γ_2, γ_1 + γ_2, -γ_1 - γ_2} is indeed invariant so the following definition of Ω makes global sense:
For u ∈ℬ_in, Ω(γ; u) = 1 for γ∈{γ_1, γ_2, -γ_1, -γ_2}, and Ω(γ; u) = 0 otherwise.
For u ∈ℬ_out, Ω(γ; u) = 1 for γ∈{γ_1, γ_2, -γ_1, -γ_2, γ_1 + γ_2, -γ_1 - γ_2}, and Ω(γ; u) = 0 otherwise.
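These two tables are consistent across the wall precisely because of the pentagon identity 𝒦_γ_1𝒦_γ_2 = 𝒦_γ_2𝒦_γ_1+γ_2𝒦_γ_1 (whence the name of this case), which is the content of the Kontsevich-Soibelman formula here. As an illustration, the identity can be checked directly on the coordinates x = 𝒳_γ_1, y = 𝒳_γ_2: using ⟨γ_1, γ_2⟩ = 1, the twisted relation giving 𝒳_γ_1+γ_2 = -xy, and reading compositions right to left, a short symbolic verification (the script below is only an illustration; the names are ours) confirms it:

import sympy as sp

x, y = sp.symbols('x y')

# the birational maps K_gamma defined above, acting on (x, y)
K1 = lambda v: (v[0], v[1] / (1 - v[0]))                    # K_{gamma_1}
K2 = lambda v: (v[0] * (1 - v[1]), v[1])                    # K_{gamma_2}
K12 = lambda v: (v[0] * (1 + v[0] * v[1]),                  # K_{gamma_1+gamma_2},
                 v[1] / (1 + v[0] * v[1]))                  # using X_{g1+g2} = -xy

lhs = K1(K2((x, y)))          # K_{gamma_1} K_{gamma_2}
rhs = K2(K12(K1((x, y))))     # K_{gamma_2} K_{gamma_1+gamma_2} K_{gamma_1}
assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))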
The Pentagon case appears in the study of Hitchin systems with gauge group SU(2). The extension of ℳ' was previously obtained by hyperkähler quotient methods in <cit.>, but no explicit hyperkähler metric was constructed.
Once the {𝒳_γ_i} are obtained by Theorem <ref>,
it is necessary to do an analytic continuation along ℬ' for the particular 𝒳_γ_i for which Z_γ_i→ 0 as u → u_0 ∈ D. Without loss of generality, we can assume there
is a local basis {γ_1, γ_2} of Γ such that Z_γ_2→ 0 in D. After that, an analysis of the possible divergence of 𝒳_γ as u → u_0 shows the necessity of performing a gauge transformation on the torus coordinates of the fibers ℳ_u that allows us to define an integral equation even at u_0 ∈ D. This series of transformations is defined in (<ref>), (<ref>), (<ref>) and (<ref>), and constitutes a new result that was not expected in <cit.>. We basically deal with a family of boundary-value problems for which the jump function vanishes at certain points and
singularities of a certain kind appear as u → u_0. As this is of independent interest, we leave the relevant results to <cit.> and we show that our solutions contain at worst branch singularities at 0 or ∞ in the ζ-plane. As in the case of normal fibers, we can run a contraction argument to obtain Darboux coordinates even at the singular fibers and conclude:
Let {γ_1, γ_2} be a local basis for Γ in a small sector centered at u_0 ∈ D such that Z_γ_2→ 0 as u → u_0 ∈ D. For the Pentagon integrable system, the local function 𝒳_γ_1 admits an analytic continuation 𝒳̃_γ_1 to a punctured disk centered at u_0 in ℬ. There exists a gauge transformation θ_1 ↦θ'_1 that extends the torus fibration ℳ' to a manifold ℳ that is locally, for each point in D, a (trivial) fibration over ℬ× S^1 with fiber S^1 coordinatized by θ'_1 and with one fiber collapsed into a point. For R > 0 big enough, it is possible to extend 𝒳_γ_1 and 𝒳_γ_2 to ℳ, still preserving the smooth properties as in Theorem <ref>.
After we have the smooth extension of the {𝒳_γ_i} by Theorem <ref>, we can extend the holomorphic symplectic form ϖ(ζ) labeled by ζ∈ℂ^× as in <cit.> for all points except possibly one at the singular fiber. From ϖ(ζ) we can obtain the hyperkähler metric g and, in the case of the Pentagon, after a change of coordinates, we realize g locally as the Taub-NUT metric plus smooth corrections, finishing the construction of ℳ and its hyperkähler metric. The following is the main theorem of the paper.
For the Pentagon case, the extension ℳ of the manifold ℳ' constructed in Theorem <ref> admits, for R large enough, a hyperkähler metric g obtained by extending the hyperkähler metric on ℳ' determined by the Darboux coordinates {𝒳_γ_i}.
§ THE OOGURI-VAFA CASE
§.§ Classical Case
We start with one of the simplest cases, known as the Ooguri-Vafa case, first treated in <cit.>. To see where this case comes from, recall that by the SYZ picture of K3 surfaces <cit.>, any K3 surface ℳ is a hyperkähler manifold. In one of its complex structures (say J^(ζ = 0)), it is elliptically fibered, with base manifold ℬ = ℂP^1 and generic fiber a compact complex torus. There are a total of 24 singular fibers, although the total space is smooth. See Figure <ref>.
Gross and Wilson <cit.> constructed a hyperkähler metric g on a K3 surface by gluing in the Ooguri-Vafa metric constructed in <cit.> with a standard metric g^sf away from the degenerate fiber. Thus, this simple case can be regarded as a local model for K3 surfaces.
We have a fibration over the open unit disk ℬ := {a ∈ℂ : |a| < 1 }. At the locus D : = { a = 0 } (in the literature this is also called the discriminant locus), the fibers degenerate into a nodal torus. Define ℬ' as ℬ\ D, the punctured unit disk. On ℬ' there exists a local system Γ of rank-2 lattices with basis (γ_m, γ_e) and skew-symmetric pairing defined by ⟨γ_m, γ_e ⟩ = 1. The monodromy of Γ around a = 0 is γ_e ↦γ_e, γ_m ↦γ_m + γ_e. We also have functions Z_γ_e(a) = a, Z_γ_m(a) = (a/2π i)( log a - 1). On ℬ' we have local coordinates (θ_m, θ_e) for the torus fibers with monodromy θ_e ↦θ_e, θ_m ↦θ_m + θ_e - π. Finally, the integer-valued function Ω on Γ is here: Ω(±γ_e, a) = 1 and Ω(γ, a) = 0 for any other γ∈Γ_a. There is no wall of marginal stability in this case.
We call this the “classical Ooguri-Vafa” case as it is the one appearing in <cit.> already mentioned at the beginning of this section. In the next section, we'll generalize this case by adding a function f(a) to the definition of Z_γ_m.
Let
𝒳^sf_γ(ζ, a) := exp( π R ζ^-1 Z_γ(a) + iθ_γ + π R ζZ̄_γ(a))
These functions receive corrections defined as in <cit.>. We are only interested in the pair (𝒳_m, 𝒳_e) which will constitute our desired Darboux coordinates for the holomorphic symplectic form ϖ. The fact that Ω(γ_m, a) = 0 gives that 𝒳_e = 𝒳^sf_e. As a → 0, Z_γ_e and Z_γ_m approach 0. Thus 𝒳_e|_a = 0 = e^iθ_e. Since 𝒳_e = 𝒳^sf_e the actual 𝒳_m is obtained after only 1 iteration of (<ref>). For each a ∈ℬ', let ℓ_+ be the ray in the ζ-plane defined by {ζ : a/ζ∈ℝ_- }. Similarly, ℓ_- := {ζ : a/ζ∈ℝ_+}.
Let
𝒳_m = 𝒳^sf_m exp[ (i/4π)∫_ℓ_+ (dζ'/ζ') ((ζ' + ζ)/(ζ' - ζ)) log[1 - 𝒳_e(ζ')] - (i/4π)∫_ℓ_- (dζ'/ζ') ((ζ' + ζ)/(ζ' - ζ)) log[1 - 𝒳_e(ζ')^-1] ].
For convenience, from this point on we assume a is of the form sb, where s is a positive number, b is fixed and |b| = 1. Moreover, in ℓ_+, ζ' = -tb, for t ∈ (0, ∞), and a similar parametrization holds in ℓ_-.
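With this parametrization, π Ra/ζ' + π Rζ' a̅ = -π Rs(t + 1/t) on ℓ_+, so that 𝒳_e(ζ') = e^iθ_e e^-π Rs(t + 1/t) there and dζ'/ζ' = dt/t; on ℓ_- the same substitution gives 𝒳_e(ζ')^-1 = e^-iθ_e e^-π Rs(t + 1/t). These substitutions are used repeatedly below.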
For fixed b, 𝒳_m as in (<ref>) has a limit as |a| → 0.
Writing (ζ' + ζ)/(ζ'(ζ' - ζ)) = -1/ζ' + 2/(ζ' - ζ), we want to find the limit as a → 0 of
∫_ℓ_+ {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
- ∫_ℓ_- {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(-π Ra/ζ' - iθ_e - π Rζ' a̅)] dζ' .
For simplicity, we'll focus on the first integral only; the second one can be handled similarly. Rewrite:
∫_ℓ_+ {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
= ∫_0^-b {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_-b^-b∞ {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
= ∫_0^-b {-1/ζ' + 2/(ζ' - ζ)} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_-b^-b∞ {-1/ζ' + 2/ζ' + 2/(ζ' - ζ) - 2/ζ'} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
= ∫_0^-b (-1/ζ') log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_-b^-b∞ (1/ζ') log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_0^-b (2/(ζ' - ζ)) log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_-b^-b∞ {2/(ζ' - ζ) - 2/ζ'} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
Observe that
∫_0^-b (-1/ζ') log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ' = -∫_0^1 (1/t) log[1 - exp(-π Rs(t + 1/t) + iθ_e)] dt,
and after the change of variables t̃ = 1/t, we get
= -∫_1^∞ (1/t̃) log[1 - exp(-π Rs(t̃ + 1/t̃) + iθ_e)] dt̃
= -∫_-b^-b∞ (1/ζ') log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'.
Thus, (<ref>) reduces to
∫_0^-b (2/(ζ' - ζ)) log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ'
+ ∫_-b^-b∞ {2/(ζ' - ζ) - 2/ζ'} log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)] dζ' .
If θ_e = 0, (<ref>) diverges to -∞, in which case 𝒳_m = 0. Otherwise, 1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅) is bounded away from 0 along ℓ_+. Consequently,
|log[1 - exp(π Ra/ζ' + iθ_e + π Rζ' a̅)]| < C < ∞ in ℓ_+.
As a → 0, the integrals are dominated by
∫_0^-b (2C/|ζ' - ζ|) |dζ'| + ∫_-b^-b∞ (C|ζ/b|/|ζ'(ζ' - ζ)|) |dζ'| < ∞
if θ_e ≠ 0. Hence we can interchange the limit and the integral in (<ref>) and obtain that, as a → 0, this reduces to
2log(1 - e^iθ_e)[∫_0^-b dζ'/(ζ' - ζ) + ∫_-b^-b∞ dζ' {1/(ζ' - ζ) - 1/ζ'}]
= 2log(1 - e^iθ_e)[F(-b) - G(-b)],
where
F(z) := log( 1 - z/ζ), G(z) := log( 1 - ζ/z)
are the (unique) holomorphic solutions in the simply connected domain U := ℂ - {z : z/ζ∈ℝ_+} to the ODEs
F'(z) = 1/(z - ζ), F(0) = 0; G'(z) = 1/(z - ζ) - 1/z, lim_z →∞ G(z) = 0.
This forces us to rewrite (<ref>) uniquely as
2log(1 - e^i θ_e)[log(1 + b/ζ) - log(1 + ζ/b)]
Here log denotes the principal branch of the log in both cases, and the equation makes sense for {b ∈ℂ : b ∉ℓ_+ } (recall that by construction, we have the additional datum |b| = 1).
log(1 + b/ζ) - log(1 + ζ/b) = log(b/ζ),
still using the principal branch of the log. To see this, define H(z) as F(z) - G(z) - log(-z/ζ). This is an analytic function on U and clearly H'(z) ≡ 0. Thus H is constant in U. It is easy to show that the identity holds for a suitable choice of z (for example, if ζ is not real, choose z = 1) and by the above, it holds on all of U; in particular, for z = -b.
All the arguments so far can be repeated for the ray ℓ_- to get the final form of (<ref>):
2{log[b/ζ]log(1 - e^iθ_e)
-log[- b/ζ]log(1 - e^-iθ_e) }, θ_e ≠ 0.
This yields that (<ref>) simplifies to:
𝒳_m = 𝒳^sf_m exp( i/2π{log[b/ζ]log(1 - e^iθ_e)
-log[- b/ζ]log(1 - e^-iθ_e)})
= 𝒳^sf_m exp( i/2π{log[a/|a|ζ]log(1 - e^iθ_e)
-log[- a/|a|ζ]log(1 - e^-iθ_e)})
in the limiting case a → 0.
To obtain a function that is continuous everywhere and whose value at a = 0 is independent of arg a, define regions I, II and III in the a-plane as follows: 𝒳^sf_m has a fixed cut in the negative real axis, both in the ζ-plane and the a-plane. Assuming for the moment that argζ∈ (0,π), define region I as the half plane {a ∈ℂ : Im( a/ζ) < 0 }. Region II is that enclosed by the ℓ_- ray and the cut in the negative real axis, and region III is the remaining domain, so that as we travel counterclockwise we traverse regions I, II and III in this order (see Figure <ref>).
For a ≠ 0, Gaiotto, Moore and Neitzke <cit.> proved that 𝒳_m has a continuous extension 𝒳̃_m to the punctured disk, of the form:
𝒳̃_m = {[ 𝒳_m in region I; (1 - 𝒳^-1_e) 𝒳_m in region II; - 𝒳_e (1 - 𝒳^-1_e) 𝒳_m = (1 - 𝒳_e)𝒳_m in region III ].
(below we keep writing 𝒳_m for this continuation when no confusion can arise).
If we regard ℳ' as an S^1-bundle over ℬ' × S^1, with the fiber parametrized by θ_m, then we seek to extend ℳ' to a manifold ℳ by gluing to ℳ' another S^1-bundle over D × (0,2π), for D a small open disk around a = 0, and θ_e ∈ (0,2π). The S^1-fiber is parametrized by a different coordinate θ'_m, in which the Darboux coordinate 𝒳_m can be extended to ℳ. This is the content of the next theorem.
ℳ' can be extended to a manifold ℳ where the torus fibers over ℬ' degenerate at D = {a = 0} and 𝒳_m can be extended to D, independent of the value of a.
We'll use the following identities:
log(1 - e^iθ_e) = log(1 - e^-iθ_e) +i(θ_e - π), for θ_e ∈ (0, 2π)
log[-a/|a|ζ] = {[ log[a/|a|ζ] + iπ in region I; log[a/|a|ζ] - iπ in regions II and III ].
log [a/ζ] = {[ log a - logζ in regions I and II; log a - logζ + 2π i in region III ].
to obtain a formula for 𝒳_m at a = 0 independent of the region. Formula (<ref>) can be proved with an argument analogous to that used for the proof of (<ref>). Starting with region I, by (<ref>), (<ref>), (<ref>) and (<ref>):
𝒳_m = exp[ iθ_m - 1/2π (θ_e - π) log[a/|a|ζ] + 1/2log(1 - e^-iθ_e) ] in region I.
By (<ref>), = exp[ iθ_m - 1/2π (θ_e - π) log[a/|a|] + θ_e - π/2πlogζ + 1/2log(1 - e^-iθ_e) ]
In region II, by our formulas above, we get
𝒳_m = exp[iθ_m - 1/2π (θ_e - π) log[a/|a|ζ] - 1/2log(1 - e^-iθ_e) ](1 - e^-iθ_e)
= exp[iθ_m - 1/2π (θ_e - π) log[a/|a|ζ] - 1/2log(1 - e^-iθ_e) + log(1 - e^-iθ_e) ]
= exp[ iθ_m - 1/2π (θ_e - π) log[a/|a|] + θ_e - π/2πlogζ + 1/2log(1 - e^-iθ_e) ] in region II.
Finally, in region III, and making use of (<ref>), (<ref>), (<ref>):
𝒳_m = exp[iθ_m - 1/2π (θ_e - π) log[a/|a|ζ] - 1/2log(1 - e^-iθ_e) ](1 - e^iθ_e)
= exp[ iθ_m - 1/2π (θ_e - π) log[a/|a|] + θ_e - π/2πlogζ - i(θ_e - π) .
. - 1/2log(1 - e^-iθ_e) + log(1 - e^-iθ_e) + i(θ_e - π) ]
= exp[ iθ_m - 1/2π (θ_e - π) log[a/|a|] + θ_e - π/2πlogζ + 1/2log(1 - e^-iθ_e) ] .
Observe that, throughout all these calculations, we only had to use the natural branch of the complex logarithm. In summary, (<ref>) works for any region in the a-plane, with a cut in the negative real axis.
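For instance, (<ref>) follows at once from the factorization 1 - e^iθ_e = -e^iθ_e(1 - e^-iθ_e), since the principal branch gives log(-e^iθ_e) = i(θ_e - π) for θ_e ∈ (0, 2π).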
This also suggests the following coordinate transformation
θ'_m = θ_m + i(θ_e - π)/4π( log(a/Λ) - log(a̅/Λ))
Here Λ is the same cutoff constant as in <cit.>. Let φ parametrize the phase of a/|a|. Then (<ref>) simplifies to
θ'_m = θ_m - (θ_e - π)φ/2π
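Indeed, log(a/Λ) - log(a̅/Λ) = log(a/a̅) = 2i arg a = 2iφ, so i(θ_e - π)/4π· 2iφ = -(θ_e - π)φ/2π.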
On a coordinate patch around the singular fiber, θ'_m is single-valued.
Thus, the above shows that we can glue to ℳ' another S^1-bundle over D × (0,2π), for D a small open disk around a = 0, and θ_e ∈ (0,2π). The S^1-fiber is parametrized by θ'_m and the transition function is given by (<ref>), yielding a manifold ℳ. In this patch, we can extend 𝒳_m to a = 0 as:
𝒳_m|_a = 0 = e^iθ'_mζ^(θ_e - π)/2π (1 - e^-iθ_e)^1/2
where the branch of ζ^θ_e - π/2π is determined by the natural branch of the logarithm in the ζ plane. Note that when θ_e = 0, 𝒳_m ≡ 0 in (<ref>) and by definition, 𝒳_e ≡ 1. Since these two functions are Darboux coordinates for ℳ, the S^1 fibration over D × (0, 2π) we glued to ℳ' to get ℳ degenerates into a point when θ_e = 0.
Now consider the case argζ∈ (-π, 0). Label the regions as one travels counterclockwise, starting with the region bounded by the cut and the ℓ_- ray (see Figure <ref>). We can do an analytic continuation similar to (<ref>) starting in region I, but formulas (<ref>), (<ref>) now become:
log[-a/|a|ζ] = {[ log[a/|a|ζ] - iπ in region II; log[a/|a|ζ] + iπ in regions I and III ].
log [a/ζ] = {[ log a - logζ in regions I and II; log a - logζ - 2π i in region III ].
By an argument entirely analogous to the case argζ∈ (0,π), we get again:
𝒳_m|_a = 0 = e^iθ'_mζ^(θ_e - π)/2π (1 - e^-iθ_e)^1/2
The case ζ real and positive is even simpler, as Figure <ref> shows. Here we have only two regions, and the jumps at the cut and the ℓ_+ ray are combined, since these two lines are the same. Label the lower half-plane as region I and the upper half-plane as region II. Start an analytic continuation of 𝒳_m in region I as before, using the formulas:
log[-a/|a|ζ] = {[ log[a/|a|ζ] - iπ in region II; log[a/|a|ζ] + iπ in region I ].
log [a/ζ] = log a - logζin both regions
The result is equation (<ref>) again. The case argζ = π is entirely analogous to this and yields the same formula, thus proving that (<ref>) holds for all ζ and is independent of a.
§.§ Alternative Riemann-Hilbert problem
We may obtain the function 𝒳_m (and consequently, the analytic extension 𝒳_m) at a = 0 through a slightly different formulation of the Riemann-Hilbert problem stated in (<ref>). Namely, instead of defining a jump of 𝒳_m at two opposite rays ℓ_+, ℓ_-, we combine these into a single jump at the line ℓ defined by ℓ_+ and ℓ_-, as in Figure <ref>. Note that because of the orientation of ℓ one of the previous jumps has to be reversed.
For all values a ≠ 0, 𝒳_e = 𝒳_e^sf approaches 0 as ζ→ 0 or ζ→∞ along the ℓ ray due to the exponential decay in formula (<ref>). Thus, the jump function
G(ζ) := {[ 1-𝒳^-1_e for ζ = t a, 0 ≤ t ≤∞; 1- 𝒳_e for ζ = t a, -∞≤ t ≤ 0 ].
is continuous on ℓ regarded as a closed contour on the Riemann sphere, and it approaches the identity transformation exponentially fast at the points 0 and ∞.
The advantage of this reformulation of the Riemann-Hilbert problem is that it can be extended to the case a = 0 and we can obtain estimates on the solutions 𝒳_m even without an explicit formulation. If we fix a and let |a| → 0 as before, the jump function G(ζ) approaches the constant jumps
G(ζ)|_|a|=0 := {[ 1-e^-iθ_e for ζ = t a, 0 < t < ∞; 1- e^iθ_e for ζ = t a, -∞ < t < 0 ].
Thus, G(ζ)|_|a|=0 has two discontinuities, at 0 and ∞. If we denote by
Δ_0 = lim_t → 0^+ G(ζ) - lim_t → 0^- G(ζ), Δ_∞ = lim_t →∞^+ G(ζ) - lim_t →∞^- G(ζ),
then, by (<ref>),
Δ_0 = -Δ_∞
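Explicitly, by (<ref>), Δ_0 = (1 - e^-iθ_e) - (1 - e^iθ_e) = 2i sinθ_e, while at ∞ the two one-sided limits are exchanged, so Δ_∞ = -2i sinθ_e.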
Let D^+ be the region in ℂ bounded by ℓ with the positive, counterclockwise orientation. Denote by D^- the region where ℓ as a boundary has the negative orientation. We look for solutions of the homogeneous boundary problem
X_m^+(ζ) = G(ζ) X_m^-(ζ)
with G(ζ) as in (<ref>). This is Lemma 4.1 in <cit.>.
The solutions X_m^± obtained therein are related to 𝒳_m via 𝒳_m (ζ) = 𝒳^sf_m (ζ) X_m (ζ). Uniqueness of solutions of the homogeneous Riemann-Hilbert problem shows that these are the same functions (up to a constant factor) constructed in the previous section. Observe that the term ζ^θ_e - π/2π appears naturally due to the nature of the discontinuity of the jump function at 0 and ∞. The analytic continuation around the point a = 0 and the gauge transformation θ_m ↦θ'_m are still performed as before.
§.§ Generalized Ooguri-Vafa coordinates
We can generalize the previous extension to the case Z_γ_m := 1/2π ia log a + f(a), where f : ℬ' →ℂ is holomorphic and admits a holomorphic extension into ℬ. In particular,
𝒳_m^sf = exp( -iR/2ζa log a + π R f(a)/ζ + i θ_m + i ζ R/2a̅loga̅ + π R ζ\overline{f(a)})
The value at the singular locus f(0) does not have to be 0. All the other data remains the same.
The first thing we observe is that 𝒳_e remains the same. Consequently, the corrections for the generalized 𝒳_m are as before. Using the change of coordinates as in (<ref>), we can thus write
𝒳_m|_a = 0 = exp[ π R f(0)/ζ + iθ'_m + π R ζ\overline{f(0)} ] ζ^(θ_e - π)/2π (1 - e^-iθ_e)^1/2
§ EXTENSION OF THE OOGURI-VAFA METRIC
§.§ Classical Case
§.§.§ A C^1 extension of the coordinates
In section <ref> we extended the fibered manifold ℳ' to a manifold ℳ with a degenerate fiber at a = 0 in ℬ. We also extended 𝒳_m continuously to this bad fiber. Now we extend the metric by enlarging the holomorphic symplectic form ϖ(ζ). Recall that this is of the form
ϖ(ζ) = -1/4π^2 Rd 𝒳_e/𝒳_e∧d𝒳_m/𝒳_m
Clearly there are no problems extending d log𝒳_e, so it remains only to extend d log𝒳_m.
Let 𝒳̃_m denote the analytic continuation around a = 0 of the magnetic function, as in the last section. The 1-form
d log𝒳_m = d 𝒳_m/𝒳_m,
(where d denotes the differential of a function on the torus fibration ℳ' only) has an extension to ℳ
We proceed as in section <ref> and work in different regions in the a-plane (see Figure <ref>), starting with region I, where 𝒳̃_m = 𝒳_m. Then observe that we can write the corrections on 𝒳_m as a complex number Υ_m(ζ) ∈ (ℳ'_a)^ℂ such that
𝒳_m = exp( -i R /2ζ(alog a - a) + i Υ_m + iζ R/2 (a̅loga̅ - a̅ )).
Thus, by (<ref>) and ignoring the i factor, it suffices to obtain an extension of
d[ - R /2ζ(alog a - a) + Υ_m + ζ R/2 (a̅loga̅ - a̅ ) ]
= -R/2ζlog a da + d Υ_m + ζ R/2loga̅ da̅.
Using (<ref>),
d Υ_m = dθ_m - 1/4π∫_ℓ_+dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳_e/(1-𝒳_e)( π R/ζ' da +idθ_e+ π R ζ' da̅)
+1/4π∫_ℓ_-dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳^-1_e/(1-𝒳^-1_e)( -π R/ζ' da -idθ_e - π R ζ' da̅).
We have to change our θ_m coordinate into θ'_m according to (<ref>) and differentiate to obtain:
d Υ_m =
dθ'_m - i(θ_e - π)/4π( da/a - da̅/a̅) + (arg a)/2π dθ_e
- 1/4π∫_ℓ_+dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳_e/(1-𝒳_e)( π R/ζ' da +idθ_e+ π R ζ' da̅)
+1/4π∫_ℓ_-dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳^-1_e/(1-𝒳^-1_e)( -π R/ζ' da -idθ_e - π R ζ' da̅)
Recall that, since we have introduced the change of coordinates θ_m ↦θ'_m, we are working on a patch on ℳ that contains a = 0 with a degenerate fiber here. It then makes sense to ask if (<ref>) extends to a =0. If this is true, then every independent 1-form extends individually. Let's consider the form involving dθ_e first. By (<ref>), this part consists of:
(arg a)/2π dθ_e - i/4π∫_ℓ_+dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳_e/(1-𝒳_e) dθ_e - i/4π∫_ℓ_-dζ'/ζ'·(ζ'+ζ)/(ζ'-ζ)·𝒳^-1_e/(1-𝒳^-1_e) dθ_e.
We can use the exact same technique as in section <ref> to find the limit of (<ref>) as a → 0. Namely, split each integral into four parts, use the symmetry of 𝒳_e/(1-𝒳_e) between 0 and ∞ to cancel two of these integrals, and take the limit in the remaining ones. The result is:
(arg a)/2π - ie^iθ_e/2π(1-e^iθ_e)·log[ e^iarg a/ζ] -
ie^-iθ_e/2π(1-e^-iθ_e)·log[ -e^iarg a/ζ]
= (arg a)/2π - ie^iθ_e/2π(1-e^iθ_e)·log[ e^iarg a/ζ] +
i/2π(1-e^iθ_e)·log[ -e^iarg a/ζ]
in region I (we omitted the dθ_e factor for simplicity). Making use of formulas (<ref>) and (<ref>), we can simplify the above expression and get rid of the apparent dependence on arg a, finally getting:
-ilogζ/2π - 1/2(1-e^iθ_e), θ_e ≠ 0.
In other regions of the a-plane we have to modify 𝒳_m as in (<ref>). Nonetheless, by (<ref>) and (<ref>), the result is the same and we conclude that at least the terms involving dθ_e have an extension to a=0 for θ_e ≠ 0.
Next we extend the terms involving da. By (<ref>) and (<ref>), these are:
-R/2ζlog a da - i(θ_e - π)/(4π a) da - R/4∫_ℓ_+dζ'/(ζ')^2·(ζ'+ζ)/(ζ'-ζ)·𝒳_e/(1-𝒳_e) da - R/4∫_ℓ_-dζ'/(ζ')^2·(ζ'+ζ)/(ζ'-ζ)·𝒳^-1_e/(1-𝒳^-1_e) da
In what follows, we ignore the da part and focus on the coefficients for the extension. The partial fraction decomposition
ζ'+ζ/(ζ')^2(ζ'-ζ) = 2/ζ'(ζ'-ζ) - 1/(ζ')^2
splits each integral above into two parts. We will consider first the terms
- i(θ_e - π)/(4π a) + R/4∫_ℓ_+dζ'/(ζ')^2·𝒳_e/(1-𝒳_e) + R/4∫_ℓ_-dζ'/(ζ')^2·𝒳^-1_e/(1-𝒳^-1_e).
Use the fact that 𝒳_e (resp. 𝒳^-1_e) has norm less than 1 on ℓ_+ (resp. ℓ_-) and the uniform convergence of the geometric series on ζ' to write (<ref>) as:
- i(θ_e - π)/(4π a) + R/4∑_n=1^∞{∫_ℓ_+dζ'/(ζ')^2exp(
π R n a/ζ' +i n θ_e +π R n ζ' a̅) +
∫_ℓ_-dζ'/(ζ')^2exp(
-π R n a/ζ' -i n θ_e -π R n ζ' a̅)}
= - i(θ_e - π)/(4π a) + (R/4) ( -2|a|/a)∑_n=1^∞( e^inθ_e - e^-inθ_e)K_1(2π R n |a|)
= - i(θ_e - π)/(4π a) - R|a|/2a∑_n=1^∞( e^inθ_e - e^-inθ_e)K_1(2π R n |a|).
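Here the Bessel function K_1 enters through the classical integral representation
K_1(x) = 1/2∫_0^∞ e^-(x/2)(t + 1/t) dt/t^2, x > 0,
applied with x = 2π Rn|a| after parametrizing ℓ_± by ζ' = ∓ t a/|a|.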
Since K_1(x) ∼ 1/x for x real and x → 0^+, we obtain, letting a → 0:
- i(θ_e - π)/(4π a) - R|a|/2a∑_n=1^∞( e^inθ_e - e^-inθ_e)·1/(2π R n |a|)
= - i(θ_e - π)/(4π a) + 1/(4π a)[log(1-e^iθ_e)-log(1-e^-iθ_e)]
and by (<ref>), = - i(θ_e - π)/(4π a) +i(θ_e -π)/(4π a) = 0.
Therefore this part of the da terms extends trivially to 0 in the singular fiber.
It remains to extend the other terms involving da. Recall that by (<ref>), these terms are (after getting rid of a factor of -R/2):
(log a)/ζ + ∫_ℓ_+dζ'/(ζ'(ζ'-ζ))·𝒳_e/(1-𝒳_e) + ∫_ℓ_-dζ'/(ζ'(ζ'-ζ))·𝒳^-1_e/(1-𝒳^-1_e).
We'll focus on the first integral in (<ref>). As a starting point, we'll prove that as a → 0, the limiting value of this integral is the same as the limit of
∫_ℓ_+dζ'/(ζ'(ζ'-ζ))·exp( π R a/ζ' +iθ_e )/(1-exp( π R a/ζ' +iθ_e +π R ζ' a̅)).
It suffices to show that
∫_ℓ_+dζ'/(ζ'(ζ'-ζ))·exp( π R a/ζ')/(1-exp( π R a/ζ' +iθ_e +π R ζ' a̅))·[1-exp(π R ζ' a̅)] → 0, as a → 0, θ_e ≠ 0
To see this, we can assume |a| < 1. Let b = a/|a|. Observe that on the ℓ_+ ray, |exp(π Ra/ζ')| < 1, and since θ_e ≠ 0, we can bound (<ref>) by
const∫_ℓ_+dζ'/(ζ'(ζ'-ζ)) [1-exp(π R ζ' b̅)] < ∞.
Equation (<ref>) now follows from Lebesgue Dominated Convergence and the fact that 1-exp(π R ζ' a̅) → 0 as a → 0. A similar application of Dominated Convergence allows us to reduce the problem to the extension of
∫_ℓ_+dζ'/ζ'(ζ'-ζ)exp( π R a/ζ' +iθ_e )/1-exp( π R a/ζ' +iθ_e ).
Introduce the real variable s = -π R a / ζ'. We can write (<ref>) as:
e^iθ_e∫_0^∞ds/(s[ -π R a/s - ζ])·e^-s/(1-e^iθ_ee^-s)
= -1/ζ∫_0^∞ds/(s+π R a/ζ)·e^-s/(e^-iθ_e-e^-s)
= 1/ζ∫_0^∞ds/(s+π R a/ζ)·1/(1-e^se^-iθ_e)
The integrand of (<ref>) has a double zero at ∞ when a → 0, so the only possibly non-convergent part in the limit a=0 is the integral
1/ζ∫_0^1 ds/(s+π R a/ζ)·1/(1-e^se^-iθ_e).
Since
∫_0^1 ds/s[ 1/(1-e^se^-iθ_e) - 1/(1-e^-iθ_e)] < ∞,
we can simplify this analysis even further and focus only on
1/(ζ(1-e^-iθ_e))∫_0^1 ds/(s+π R a/ζ)
= -log (π R a /ζ)/(ζ(1-e^-iθ_e)) + O(1) as a → 0.
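Indeed, ∫_0^1 ds/(s + w) = log(1 + w) - log w, so with w = π Ra/ζ the divergent part as a → 0 is exactly -log(π Ra/ζ).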
We can apply the same technique to obtain a limit for the second integral in (<ref>). The result is
-log (-π R a /ζ)/(ζ(1-e^iθ_e)),
which means that the possibly non-convergent terms in (<ref>) are:
(log a)/ζ - log a/(ζ(1-e^-iθ_e)) - log a/(ζ(1-e^iθ_e)) = 0.
Note that the corrections of 𝒳_m in other regions of the a-plane as in (<ref>) depend only on 𝒳_e, which clearly has a smooth extension to the singular fiber.
The extension of the da̅ part is performed in exactly the same way as for the da forms. We conclude that the 1-form
d𝒳_m/𝒳_m
has an extension to ℳ; more explicitly, to the fiber at a=0 in the classical Ooguri-Vafa case. This holds true also in the generalized Ooguri-Vafa case since here we simply add factors of the form f'(a)da and it is assumed that f(a) has a smooth extension to the singular fiber.
In section <ref>, we will reinterpret this extension of the derivatives of 𝒳_m by regarding the gauge transformation (<ref>) as a contour integral between symmetric contours. It will then be easier to see that the extension can be made smooth.
§.§.§ Extension of the metric
The results of the previous section already show the continuous extension of the holomorphic symplectic form
ϖ(ζ) = -1/4π^2 Rd 𝒳_e/𝒳_e∧d𝒳_m/𝒳_m
to the limiting case a = 0, but we excluded the special case θ_e = 0. Here we obtain ϖ(ζ) at the singular fiber with a different approach that will allow us to see that such an extension is smooth, without testing the extension for each derivative. Although it was already known that ℳ' extends to the hyperkähler manifold ℳ constructed here, this approach is new, as it gives an explicit construction of the metric, as we will see. Furthermore, the Ooguri-Vafa model can be thought of as an elementary model on which more complex integrable systems are locally modeled (see <ref>).
The holomorphic symplectic form ϖ(ζ) extends smoothly to ℳ. Near a = 0 and θ_e = 0, the hyperkähler metric g looks like a constant multiple of the Taub-NUT metric g_Taub-NUT plus some smooth corrections.
By <cit.>, near a = 0,
ϖ(ζ) = -1/4π^2 Rd 𝒳_e/𝒳_e∧[ idθ_m + 2π i A + π i V(1/ζda - ζ da̅)],
where
A = 1/8π^2( loga/Λ - loga̅/Λ)dθ_e - R/4π( da/a - da̅/a̅)∑_n ≠ 0 (sgn n) e^inθ_e |a| K_1(2π R|na|)
should be understood as a U(1) connection over the open subset of ℂ× S^1 parametrized by (a,θ_e), and V is given by Poisson re-summation as
V = R/4π[ 1/√(R^2|a|^2 + θ_e^2/4π^2) + ∑_n ≠ 0( 1/√(R^2 |a|^2 + (θ_e/2π + n)^2) - κ_n ) ].
Here κ_n is a regularization constant introduced to make the sum convergent, even at a = 0, θ_e ≠ 0. The curvature F of the unitary connection satisfies
dA = *dV.
Consider now a gauge transformation θ_m ↦θ_m + α and its induced change in the connection A ↦ A' = A - dα/2π (see <cit.>). We have idθ'_m + 2π i A' = idθ_m + idα + 2π i A - idα = idθ_m + 2π i A. Furthermore, for the particular gauge transformation in (<ref>), at a = 0 and for θ_e ≠ 0:
A' = A - dα/2π
= 1/8π^2( loga/Λ - loga̅/Λ)dθ_e - 1/8π^2( da/a - da̅/a̅) [ ∑_n = 1^∞e^inθ_e/n - ∑_n = 1^∞e^-inθ_e/n]
- 1/8π^2( loga/Λ - loga̅/Λ)dθ_e - i(θ_e - π)/8π^2( da/a - da̅/a̅),
(here we're using the fact that K_1(x) → 1/x as x → 0) = i(θ_e - π)/8π^2( da/a - da̅/a̅) - i(θ_e - π)/8π^2( da/a - da̅/a̅) = 0.
since the above sums converge to -log(1 - e^iθ_e) + log(1 - e^-iθ_e) = -i(θ_e - π) for θ_e ≠ 0.
Writing V_0 (observe that this only depends on θ_e) for the limit of V as a → 0, we get at a = 0
ϖ(ζ) = -1/4π^2 R( π R/ζda + idθ_e + π R ζ da̅) ∧(
idθ'_m + π i V_0 ( da/ζ - ζ da̅) )
= 1/4π^2 R dθ_e ∧ dθ'_m + iV_0/2da ∧ da̅ -i/4πζda ∧ dθ'_m - V_0/4π Rζda ∧ dθ_e
- iζ/4π da̅∧ dθ'_m + V_0 ζ/4π R da̅∧ dθ_e.
This yields that, at the singular fiber,
ω_3 = 1/4π^2 R dθ_e ∧ dθ'_m + iV_0/2da ∧ da̅
ω_+ = 1/2π da ∧( dθ'_m - iV_0/Rdθ_e )
ω_- = 1/2π da̅∧( dθ'_m + iV_0/Rdθ_e )
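These are read off as the coefficients of -i/2ζ, 1 and -iζ/2 in the decomposition ϖ(ζ) = -i/2ζω_+ + ω_3 - iζ/2ω_-.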
From the last two equations we obtain that dθ'_m - iV_0/Rdθ_e and dθ'_m + iV_0/Rdθ_e are respectively (1,0) and (0,1) forms under the complex structure J_3. A (1,0) vector field dual to the (1,0) form above is then (1/2)(∂_θ'_m + iR/V_0 ∂_θ_e). In particular,
J_3(∂_θ'_m) = -R/V_0∂_θ_e, J_3 (-R/V_0∂_θ_e) = -∂_θ'_m.
With this and (<ref>) we can reconstruct the metric at a = 0. Observe that
g(∂_θ_e, ∂_θ_e) = ω_3(∂_θ_e, J_3(∂_θ_e)) = ω_3(∂_θ_e, V_0/R∂_θ'_m) = V_0/4π^2 R^2
g(∂_θ'_m, ∂_θ'_m) = ω_3(∂_θ'_m, J_3(∂_θ'_m)) = ω_3(∂_θ'_m, -R/V_0∂_θ_e) = 1/4π^2 V_0
Consequently,
g = 1/V_0( dθ'_m/2π)^2 + V_0 dx⃗^2,
where a = x^1 + ix^2, θ_e = 2π R x^3. Since V_0(θ_e) is undefined for θ_e = 0, we have to check that g extends to this point. Let (r,ϑ, ϕ) denote spherical coordinates for x⃗. The formula above is the natural extension of the metric given in <cit.> for nonzero a:
g = 1/V(x⃗)( dθ'_m/2π + A'(x⃗))^2 + V(x⃗) dx⃗^2
To see that this extends to r =0, we rewrite
V = R/4π[ 1/√(R^2 |a|^2 + θ_e^2/4π^2) + ∑_n ≠ 0( 1/√(R^2 |a|^2 + (θ_e/2π + n)^2) - κ_n )]
= 1/4π[ 1/√( |a|^2 + θ_e^2/4R^2 π^2) + R∑_n ≠ 0( 1/√(R^2 |a|^2 + (θ_e/2π + n)^2) - κ_n ) ]
= 1/4π( 1/r + C(x⃗) ),
where C(x⃗) is smooth and bounded in a neighborhood of the origin.
Similarly, we do Poisson re-summation for the unitary connection
A' = - 1/4π( da/a - da̅/a̅) [ i(θ_e - π)/2π + R ∑_n ≠ 0 (sgn n) e^inθ_e |a| K_1(2π R|na|) ].
Using the fact that the inverse Fourier transform of (sgn ξ)e^iθ_e ξ|a|K_1(2π R|aξ|) is
i(θ_e/2π + t)/(2R√(R^2|a|^2 + ( θ_e/2π + t)^2)),
we obtain
A' = - i/8π( da/a - da̅/a̅)∑_n = -∞^∞( (θ_e/2π + n)/√(R^2 |a|^2 + (θ_e/2π + n)^2) - κ_n )
= 1/4π( da/a - da̅/a̅)[ -iθ_e/(4π√(R^2 |a|^2 + (θ_e/2π)^2)) - i/2∑_n ≠ 0( (θ_e/2π + n)/√(R^2 |a|^2 + (θ_e/2π + n)^2) - κ_n )];
since dϕ = d(arg a) = -i/2(da/a - da̅/a̅) and cosϑ = x^3/r, this simplifies to: = 1/4π(cosϑ + D(x⃗))dϕ.
Here κ_n is a regularization constant that makes the sum converge, and D(x⃗) is smooth and bounded in a neighborhood of r = 0. By (<ref>) and (<ref>), it follows that near r = 0
g = V^-1( dθ'_m/2π + A' )^2 + Vdx⃗^2
= 4π( 1/r + C )^-1( dθ'_m/2π + 1/4πcosϑ dϕ + D dϕ)^2 + 1/4π( 1/r + C ) dx⃗^2
= 1/4π[ ( 1/r + C )^-1( 2dθ'_m + cosϑ dϕ + D̃ dϕ)^2
+ ( 1/r + C ) dx⃗^2 ]
= 1/4π g_Taub-NUT + smooth corrections.
This shows that our metric extends to r = 0 and finishes the construction of the singular fiber.
§.§ General case
Here we work with the assumption in subsection <ref>. To distinguish this case to the previous one, we will denote by ϖ_old, g_old, etc. the forms obtained in the classical case.
Let C := -i/2 + π f'(0) and let
B_0 = V_0 + R Im C/π.
We will see that, to extend the holomorphic symplectic form ϖ(ζ) and consequently the hyperkähler metric g to ℳ, it is necessary to impose a restriction on the class of functions f(a) on ℬ for the generalized Ooguri-Vafa case.
In the General Ooguri-Vafa case, the holomorphic symplectic form ϖ(ζ) and the hyperkähler metric g extend to ℳ, at least for the set of functions f(a) as in <ref> with f'(0) > B_0.
By formula (<ref>),
d log𝒳_m^sf = d log𝒳_m, old^sf + R/ζ( -i/2 + π f'(a) )da + Rζ( i/2 + π\overline{f'(a)})da̅
Recall that the corrections of 𝒳_m are the same as the classical Ooguri-Vafa case. Thus, using (<ref>), at a = 0
ϖ(ζ) = ϖ_old(ζ) + iR /2πIm C da ∧ da̅ + i C/4π^2 ζ
da ∧ dθ_e + i ζC̅/4π^2 da̅∧ dθ_e.
Decomposing ϖ(ζ) = -i/2ζω_+ + ω_3 -iζ /2 ω_-, we obtain:
ω_3 = ω_3, old + i R/2πIm C da ∧ da̅,
ω_+ = ω_+, old - C/2π^2 da ∧ dθ_e
ω_- = ω_-, old - C̅/2π^2 da̅∧ dθ_e
By (<ref>) and (<ref>),
dθ'_m - i/R( V_0 - iRC/π)dθ_e and dθ'_m + i/R( V_0 + iRC̅/π)dθ_e
are, respectively, (1,0) and (0,1) forms. It's not hard to see that
-(V_0 π + iR C̅)/(Rπ)·∂_θ'_m - i∂_θ_e
or, rearranging real and imaginary parts,( -V_0/R -Im C/π) ∂_θ'_m -i ( Re C/π∂_θ'_m + ∂_θ_e)
is a (1,0) vector field. This allows us to obtain
J_3[ ( -V_0/R -Im C/π) ∂_θ'_m] = Re C/π∂_θ'_m + ∂_θ_e
J_3[Re C/π∂_θ'_m + ∂_θ_e] = ( V_0/R +Im C/π) ∂_θ'_m.
By linearity,
J_3(∂_θ'_m) = const·∂_θ'_m - Rπ/(V_0 π + RIm C)·∂_θ_e
J_3(∂_θ_e) = ( (V_0 π + RIm C)/(π R) + (Re C)^2 R/(π(V_0 π
+ RIm C)))∂_θ'_m + const·∂_θ_e.
With this we can compute
g(∂_θ'_m, ∂_θ'_m) = ω_3(∂_θ'_m, J_3(∂_θ'_m))
= 1/(4π(V_0 π + RIm C)) = 1/(4π^2 B_0)
g(∂_θ_e, ∂_θ_e) = ω_3(∂_θ_e, J_3(∂_θ_e))
= (V_0 π + RIm C)/(4π^3 R^2) + (Re C)^2/(4π^3(V_0 π + RIm C))
= B_0/(4π^2 R^2) + (Re C)^2/(4π^4 B_0)
We can see that, if B_0 > 0, the metric at a = 0 is
g = 1/B_0( dθ'_m/2π)^2 + B_0 dx⃗^2 + (R·Re C/π)^2 dx_3^2/B_0.
This metric can be extended to the point θ_e = 0 (r = 0 in <ref>) exactly as before, by writing g as the Taub-NUT metric plus smooth corrections and observing that, since lim_θ_e → 0 B_0 = ∞,
lim_θ_e → 0(R·Re C/π)^2 dx_3^2/B_0 = 0.
§ THE PENTAGON CASE
§.§ Monodromy Data
Now we will extend the results of the Ooguri-Vafa case to the general problem. We will start with the Pentagon example, which is presented in detail in <cit.>. By <cit.>, this example represents the moduli space of Higgs bundles with gauge group SU(2) over ℂℙ^1, with 1 irregular singularity at z = ∞.
Here ℬ = ℂ, with discriminant locus a 2-point set, which we can assume is {-2,2} in the complex plane. Thus ℬ' is the twice-punctured plane. ℬ is divided into two domains ℬ_in and ℬ_out by the locus
W = {u : Z(Γ_u) is contained in a line in ℂ}⊂ℬ
See Figure <ref>. Since ℬ_in is simply connected, Γ can be trivialized over ℬ_in by primitive cycles γ_1, γ_2, with Z_γ_1 = 0 at u = -2, Z_γ_2 = 0 at u = 2. We can choose them also so that ⟨γ_1, γ_2 ⟩ = 1.
Take the set {γ_1, γ_2}. To compute its monodromy around infinity, take cuts at each point of D = {-2,2} (see Figure <ref>) and move counterclockwise. By (<ref>), the jump of γ_2 when you cross the cut at -2 is of the form γ_2 ↦γ_1 + γ_2. As you return to the original place and cross the cut at 2, the jump of γ_1 is of the type γ_1 ↦γ_1 - γ_2.
Thus, around infinity, {γ_1, γ_2} transforms into {-γ_2, γ_1 + γ_2}. The set {γ_1, γ_2, -γ_1, -γ_2, γ_1 + γ_2, -γ_1 - γ_2} is therefore invariant under monodromy at infinity and it makes global sense to define
For u ∈ℬ_in, Ω(γ; u) = {[ 1 for γ∈{γ_1, γ_2, -γ_1, -γ_2}; 0 otherwise ].
For u ∈ℬ_out , Ω(γ; u) = {[ 1 for γ∈{γ_1, γ_2, -γ_1, -γ_2, γ_1 + γ_2, -γ_1 - γ_2}; 0 otherwise ].
Let ℳ' denote the torus fibration over ℬ' constructed in <cit.>. Near u=2, we'll denote γ_1 by γ_m and γ_2 by γ_e (the labels will change for u = - 2). To shorten notation, we'll write ℓ_e, Z_e, etc. instead of ℓ_γ_e, Z_γ_e, etc. Let θ denote the vector of torus coordinates (θ_e, θ_m). With the change of variables a := Z_e(u) we can assume, without loss of generality, that the bad fiber is at a = 0 and
lim_a → 0 Z_m(a) = c ≠ 0.
Let T denote the complex torus fibration over ℳ' constructed in <cit.>. By the definition of Ω(γ; a), the functions (𝒳_e, 𝒳_m) both receive corrections. Recall that by (<ref>), for each ν∈ℕ, we get a function 𝒳_γ^(ν), which is the ν-th iteration of the function 𝒳_γ. We can write
𝒳_γ^(ν)(a, ζ, θ) = 𝒳_γ^sf(a, ζ, θ)C_γ^(ν)(a, ζ, θ).
It will be convenient to rewrite the above equation as in <cit.>. For that, let Υ^(ν) be the map from ℳ_a to its complexification ℳ_a^ℂ such that
𝒳_γ^(ν)(a, ζ, θ) = 𝒳_γ^sf(a, ζ, Υ^(ν)).
We'll do a modification in the construction of <cit.> as follows: We'll use the term “BPS ray” for each ray {ℓ_γ : Ω(γ,a) ≠ 0 } as in <cit.>. This terminology comes from Physics. In the language of Riemann-Hilbert problems, these are known as “anti-Stokes” rays. That is, they represent the contour Σ where a function has prescribed discontinuities.
The problem is local on ℬ, so instead of defining a Riemann-Hilbert problem using the BPS rays ℓ_γ, we will cover ℬ' with open sets {U_α : α∈Δ} such that for each α, U_α is compact, U_α⊂ V_α, with V_α open and . ℳ' |_V_α a trivial fibration. For any ray r in the ζ-plane, define ℍ_r as the half-plane of vectors making an acute angle with r. Assume that there is a pair of rays r, -r such that for all a ∈ U_α, half of the rays lie inside ℍ_r and the other half lie in ℍ_-r. We call such rays admissible rays. If U_α is small enough, there exists admissible rays for such a neighborhood. We are allowing the case that r is a BPS ray ℓ_γ, as long as it satisfies the above condition. As a varies in U_α, some BPS rays (or anti-Stokes rays, in RH terminology) converge into a single ray (wall-crossing phenomenon) (see Figures <ref> and <ref>).
For γ∈Γ, we define γ > 0 (resp. γ < 0) as ℓ_γ∈ℍ_r (resp. ℓ_γ∈ℍ_-r). Our Riemann-Hilbert problem will have only two anti-Stokes rays, namely r and -r. The specific discontinuities at the anti-Stokes rays for the function we're trying to obtain are called Stokes factors (see <cit.>). In (<ref>), the Stokes factor was given by S^-1_ℓ.
In this case, the Stokes factors are the concatenation of all the Stokes factors S^-1_ℓ in (<ref>) in the counterclockwise direction:
S_+ = ∏_γ > 0𝒦^Ω(γ; a)_γ
S_- = ∏_γ < 0𝒦^Ω(γ; a)_γ
We will denote the solutions of this Riemann-Hilbert problem by 𝒴. As in (<ref>), we can write 𝒴 as
𝒴_γ(a, ζ, θ) = 𝒳_γ^sf(a, ζ, Θ),
for Θ : ℳ_a →ℳ_a^ℂ.
A different choice of admissible pairs r', -r' gives an equivalent Riemann-Hilbert problem, where the two solutions 𝒴, 𝒴' differ only for ζ in the sector defined by the rays r,r', and one can be obtained from the other by analytic continuation.
In the case of the Pentagon, we have two types of wall-crossing phenomena. Namely, as a varies, ℓ_e moves in the ζ-plane until it coincides with the ℓ_m ray for some value of a in the wall of marginal stability (Fig. <ref> and <ref>). We'll call this type I wall-crossing. In this case we have the Pentagon identity
𝒦_e 𝒦_m = 𝒦_m 𝒦_e+m𝒦_e,
As a goes around 0, the ℓ_e ray then intersects the ℓ_-m ray instead. Because of the monodromy γ_m ↦γ_-e+m around 0, ℓ_m becomes ℓ_-e+m. This second type (type II) of wall-crossing is illustrated in Fig. <ref> and <ref>.
This gives a second Pentagon identity
𝒦_-e𝒦_m = 𝒦_m 𝒦_-e+m𝒦_-e
In any case, the Stokes factors above remain the same even if a is in the wall of marginal stability. The way we defined S_+, S_- makes this true for the general case also.
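Recall that, with the pairing conventions of <cit.>, each factor acts on the Darboux coordinates by the birational transformation
𝒦_γ' : 𝒳_γ↦𝒳_γ(1 - 𝒳_γ')^⟨γ, γ' ⟩,
from which the explicit maps below are computed.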
Specifically, in the Pentagon the two Stokes factors for the first type of wall-crossing are given by the maps:
. [ 𝒴_m ↦𝒴_m(1-𝒴_e(1-𝒴_m))^-1; 𝒴_e ↦𝒴_e(1-𝒴_m) ]} S_+
and, similarly. [ 𝒴_m ↦𝒴_m(1-𝒴^-1_e(1-𝒴^-1_m)); 𝒴_e ↦𝒴_e(1-𝒴^-1_m)^-1 ]} S_-
For the second type:
. [ 𝒴_m ↦𝒴_m(1-𝒴^-1_e); 𝒴_e ↦𝒴_e(1-𝒴_m(1-𝒴^-1_e)) ]} S_+
. [ 𝒴_m ↦𝒴_m(1-𝒴_e)^-1; 𝒴_e ↦𝒴_e(1-𝒴^-1_m(1-𝒴_e))^-1 ]} S_-
§.§ Solutions
In <cit.> we prove the following theorem (in fact, a more general version is proven).
There exist functions 𝒴_m(a, ζ, θ_e, θ_m), 𝒴_e(a, ζ, θ_e, θ_m) defined for a ≠ 0, smooth on a, θ_e and θ_m. The functions are sectionally analytic on ζ and obey the jump condition
[ 𝒴^+ = S_+ 𝒴^-, along r; 𝒴^+ = S_-𝒴^-, along -r ]
Moreover, 𝒴_m, 𝒴_e obey the reality condition (<ref>) and the asymptotic condition <ref>.
Our construction used integrals along a fixed admissible pair r,-r and our Stokes factors are concatenation of the Stokes factors in <cit.>. Thus, the coefficients f^γ' are different here, but they are still obtained by power series expansion of the explicit Stokes factor. In particular, it may not be possible to express
f^γ' = c_γ'γ'
for some constant c_γ'. For instance, in the Pentagon, wall-crossing type I, we have, for 0≤ j≤ i and γ' = γ_ie +jm:
f^γ' = (-1)^j\binom{i}{j}·1/i^2·γ_ie.
Because of this, we didn't use the Cauchy-Schwarz property of the norm in Γ in the estimates above, as in <cit.>. Nevertheless, the tameness condition on the Ω(γ',a) invariants still gives us the desired contraction.
Observe that, since we used admissible rays, the Stokes matrices don't change at the walls of marginal stability and we were able to treat both sides of the wall indistinctly. Thus, the functions 𝒴 in Theorem <ref> are smooth across the wall.
Let's reintroduce the solutions in <cit.>. Denote by 𝒳_e, 𝒳_m the solutions to the Riemann-Hilbert problem with jumps of the form S_ℓ^-1 at each BPS ray ℓ with the same asymptotics and reality condition as 𝒴_e, 𝒴_m. In fact, we can see that the functions 𝒴 are the analytic continuation of 𝒳 up until the admissible rays r, -r.
In a patch U_α⊂ℬ' containing the wall of marginal stability, define the admissible ray r as the ray where ℓ_e, ℓ_m (or ℓ_e, ℓ_-m) collide. Since one is the analytic continuation of the other, 𝒳 and 𝒴 differ only in a small sector in the ζ-plane bounded by the ℓ_e, ℓ_m (ℓ_e, ℓ_-m) rays, for a not in the wall. As a approaches the wall, such a sector converges to the single admissible ray r. Thus, away from the ray where the two BPS rays collide, the solutions 𝒳 in <cit.> are continuous in a.
§ EXTENSION TO THE SINGULAR FIBERS
In this paper we will only consider the Pentagon example and in this section we will extend the Darboux coordinates 𝒳_e, 𝒳_m obtained above to the singular locus D ⊂ℬ where one of the charges Z_γ approaches zero.
Let u be a coordinate for ℬ = ℂ. We can assume that the two bad fibers of ℳ are at -2,2 in the complex u-plane. For almost all ζ∈ℂ, the BPS rays converge at a point of the wall of marginal stability away from any bad fiber:
It is assumed that lim_u → 2 Z_γ_1 exists and is nonzero. If we denote this limit by c = |c|e^iϕ, then for ζ such that argζ→ϕ + π, the ray ℓ_γ_1 emerging from -2 approaches the other singular point u = 2 (see Figure <ref>).
When argζ = ϕ + π, the locus { u : Z_γ(u)/ζ∈ℝ_-}, for some γ such that Ω(γ;u) ≠ 0, crosses u = 2. See Figure <ref>.
As ζ keeps changing, the rays leave the singular locus, but near u = 2 the tags change due to the monodromy of γ_1 around u=2. Despite this change of labels, near u = 2 only the rays ℓ_γ_2, ℓ_-γ_2 pass through this singular point. See Figure <ref>.
In the general case of Figures <ref>, <ref> or <ref>, the picture near u = 2 is like in the Ooguri-Vafa case, Figure <ref>.
In any case, because of the specific values of the invariants Ω, it is possible to analytically extend the function 𝒳_γ_1 around u = 2. The global jump coming from the rays ℓ_γ_2, ℓ_-γ_2 is the opposite of the global monodromy coming from the Picard-Lefschetz monodromy γ_1 ↦γ_1 - γ_2 (see (<ref>)). Thus, it is possible to obtain a function 𝒳̃_γ_1, analytic on a punctured disk in ℬ' near u = 2, extending 𝒳_γ_1.
From this point on, we use the original formulation of the Riemann-Hilbert problem using BPS rays as in <cit.>. We also use a = Z_γ_2(u) to coordinatize a disk near u = 2, and we label {γ_1, γ_2} as {γ_m, γ_e} as in the Ooguri-Vafa case. Recall that, to shorten notation, we write ℓ_e, 𝒳_e, etc. instead of ℓ_γ_e, 𝒳_γ_e, etc.
By our work in the previous section, solutions 𝒳_γ (or, taking logs, Υ_γ) to the Riemann-Hilbert problem are continuous at the wall of marginal stability for all ζ except those in the ray ℓ_m = {ζ : Z_m/ζ∈ℝ_-} = ℓ_e (to be expected by the definition of the RH problem). We want to extend our solutions to the bad fiber located at a=0. We'll see that to achieve this, it is necessary to introduce new θ coordinates.
For convenience, we rewrite the integral formulas for the Pentagon in terms of Υ as in <cit.>. We will only write the ℬ_in part; the ℬ_out part is similar.
Υ_e(a,ζ) = θ_e -
1/4π{∫_ℓ_mdζ'/ζ'ζ' + ζ/ζ' -ζlog[ 1 - 𝒳_m^sf(a,ζ', Υ_m) ] - ∫_ℓ_-mdζ'/ζ'ζ' + ζ/ζ' -ζlog[ 1 - 𝒳_-m^sf(a,ζ', Υ_-m)] },
Υ_m(a,ζ) = θ_m +
1/4π{∫_ℓ_edζ'/ζ'ζ' + ζ/ζ' -ζlog[ 1 - 𝒳_e^sf(a,ζ', Υ_e) ] - ∫_ℓ_-edζ'/ζ'ζ' + ζ/ζ' -ζlog[ 1 - 𝒳_-e^sf(a,ζ', Υ_-e) ] }
We can focus then only on the integrals above, so write Υ_γ(a,ζ) = θ_γ + (1/4π)Φ_γ(a,ζ), for γ∈{γ_m, γ_e}. To obtain the right gauge transformation of the torus coordinates θ, we'll split the integrals above into four parts and then we'll show that two of them define the right change of coordinates (in ℬ_in, with a similar transformation for ℬ_out) that simplifies the integrals and allows an extension to the singular fiber.
By Theorem <ref>, both Υ_m, Υ_e satisfy the “reality condition”, which expresses a symmetry in the behavior of the complexified coordinates Υ:
Υ_γ(a, ζ) = \overline{Υ_γ(a, -1/ζ̅)}, a ≠ 0
Write Υ_0 (resp. Υ_∞) for the asymptotic value of this function as ζ→ 0 (resp. ζ→∞), so that
Υ_0 = θ + 1/4πΦ_0,
for a suitable correction Φ_0; a similar equation holds for the asymptotic value as ζ→∞. By the asymptotic condition <ref>, Φ_0 is imaginary.
Condition (<ref>) also shows that Φ_0 = - Φ_∞. This and the fact that Φ_0 is imaginary give the reality condition
Υ_0 = \overline{Υ_∞}
Split the integrals in (<ref>) into four parts as in (<ref>). For example, if we denote by ζ_e := -a/|a| the intersection of the unit circle with the ℓ_e ray, then
∫_ℓ_edζ'/ζ'ζ' + ζ/ζ' -ζlog( 1 - 𝒳_e^sf(a,ζ', Υ_e) ) =
-∫_0^ζ_edζ'/ζ'log( 1 - 𝒳_e^sf(a,ζ', Υ_e) ) + ∫_ζ_e^ζ_e ∞dζ'/ζ'log( 1 - 𝒳_e^sf(a,ζ', Υ_e) )
+ ∫_0^ζ_e2 dζ'/ζ'-ζlog( 1 - 𝒳_e^sf(a,ζ', Υ_e) ) + ∫_ζ_e^ζ_e ∞ 2dζ' {1/ζ'-ζ -1/ζ'}log( 1 - 𝒳_e^sf(a,ζ', Υ_e) )
We consider the first two integrals apart from the rest. If we take the limit a → 0, the exponential decay in 𝒳_e^sf coming from the factor
exp( π R a/ζ' + π R ζ' a̅)
vanishes and the integrals are no longer convergent.
By combining the two integrals with their analogues in the ℓ_-e ray we obtain:
-∫_0^ζ_edζ'/ζ'log( 1 - 𝒳_e^sf(a,ζ', Υ_e) ) + ∫_ζ_e^ζ_e ∞dζ'/ζ'log( 1 - 𝒳_e^sf(a,ζ', Υ_e) )
+ ∫_0^-ζ_edζ'/ζ'log( 1 - (𝒳_e^sf)^-1(a,ζ', -Υ_e) ) - ∫_-ζ_e^-ζ_e ∞dζ'/ζ'log( 1 - (𝒳_e^sf)^-1(a,ζ', -Υ_e) )
The parametrization in the first pair of integrals is of the form ζ' = tζ_e, and in the second pair ζ' = -tζ_e. Making the change of variables ζ' ↦ 1/ζ', we can pair up these integrals in a more explicit way as:
-∫_0^1 dt/t{log[ 1 - exp( -π R |a| (1/t + t ) +iΥ_e(a,-te^iarg a) ) ]
+ log[ 1 - exp( -π R |a| (1/t + t ) - iΥ_e(a,(1/t) e^iarg a) ) ] }
+ ∫_0^1 dt/t{log[ 1 - exp( -π R |a| (1/t + t ) +iΥ_e(a,-(1/t) e^iarg a) ) ]
+ log[ 1 - exp( -π R |a| (1/t + t ) -iΥ_e(a,te^iarg a) ) ] }
By (<ref>), the integrands come in conjugate pairs. Therefore, we can rewrite (<ref>) as:
-2∫_0^1 dt/tRe{log[ 1 - exp( -π R |a| (1/t + t ) +iΥ_e(a,-te^iarg a) ) ] -
log[ 1 - exp( -π R |a| (1/t + t ) -iΥ_e(a,te^iarg a) ) ] }
= -2∫_0^1 dt/tlog| 1 - exp( -π R |a| (t^-1 + t ) +iΥ_e(a,-te^iarg a) )/1 - exp( -π R |a| (t^-1 + t ) -iΥ_e(a,te^iarg a) )|
Observe that (<ref>) itself suggests the correct transformation of the θ coordinates that fixes this. Indeed, for a fixed a ≠ 0 and θ_e, let Q be the map
Q(θ_m) = θ_m + ψ(a,θ),
where
ψ_in(a,θ) = 1/2π∫_0^1 dt/tlog| 1 - exp( -π R |a| (t^-1 + t ) +iΥ_e(a,-te^iarg a) )/1 - exp( -π R |a| (t^-1 + t ) -iΥ_e(a,te^iarg a) )|
= 1/2π∫_0^1 dt/tlog| 1 - [𝒳_e](-te^iarg a)/1 - [𝒳_-e](te^iarg a)|
for a ∈ℬ_in. For a ∈ℬ_out where the wall-crossing is of type I, let φ = arg(Z_γ_e + γ_m(a)), with ζ'= -t e^iφ parametrizing the ℓ_e + m ray:
ψ_out(a,θ) = 1/2π∫_0^1 dt/t{log| 1 - exp( -π R |a| (t^-1 + t ) +iΥ_e(a,-te^iarg a) )/1 - exp( -π R |a| (t^-1 + t ) -iΥ_e(a,te^iarg a) )|
+ log| 1 - exp( -π R |Z_γ_e + γ_m| (t^-1 + t ) +iΥ_e +m(a,-te^iφ) )/1 - exp( -π R |Z_γ_e + γ_m| (t^-1 + t ) -iΥ_e+m(a,te^iφ) )| }
= 1/2π∫_0^1 dt/t{log| 1 - [𝒳_e](-te^iarg a)/1 - [𝒳_-e](te^iarg a)| + log| 1 - [𝒳_e+m](-te^iφ)/1 - [𝒳_-e-m](te^iφ)| }
Similarly, for wall-crossing of type II, φ = arg(Z_γ_-e + γ_m(a)), with ζ'= -t e^iφ for the ℓ_-e + m ray:
ψ_out(a,θ) = 1/2π∫_0^1 dt/t{log| 1 - exp( -π R |a| (t^-1 + t ) +iΥ_e(a,-te^iarg a) )/1 - exp( -π R |a| (t^-1 + t ) -iΥ_e(a,te^iarg a) )|
+ log| 1 - exp( -π R |Z_γ_-e + γ_m| (t^-1 + t ) +iΥ_-e +m(a,-te^iφ) )/1 - exp( -π R |Z_γ_-e + γ_m| (t^-1 + t ) -iΥ_-e+m(a,te^iφ) )| }
= 1/2π∫_0^1 dt/t{log| 1 - [𝒳_e](-te^iarg a)/1 - [𝒳_-e](te^iarg a)| + log| 1 - [𝒳_-e+m](-te^iφ)/1 - [𝒳_e-m](te^iφ)| }
As a approaches the wall of marginal stability W, arg a →φ. We need to show the following
The two definitions ψ_in and ψ_out coincide at the wall of marginal stability.
First let a approach W from the “in” region, so we're using definition (<ref>). Start with the pair of functions (𝒳_e, 𝒳_m) in the ζ-plane and let 𝒳̃_e denote the analytic continuation of 𝒳_e. See Figure <ref>. When they reach the ℓ_e ray, 𝒳_e has jumped to 𝒳_e(1-𝒳_m) by (<ref>) and (<ref>). Thus 𝒳̃_e = 𝒳_e(1-𝒳_m) along the ℓ_e ray.
Therefore,
ψ_in(a,θ) = 1/2π∫_0^1 dt/tlog| 1 - [𝒳_e(1-𝒳_m)](-te^iarg a)/1 - [𝒳_-e(1-𝒳_m)^-1](te^iarg a)|
Now starting from the “out” region, and focusing on the wall-crossing of type I for the moment, we start with the pair (𝒳_e, 𝒳_m) as before. This time, 𝒳_e has not undergone any jump yet at the ℓ_e ray. See Figure <ref>. Only 𝒳_e+m undergoes a jump at the ℓ_e+m ray, and it is of the form 𝒳_e+m↦𝒳_e+m(1-𝒳_e)^-1.
When a hits the wall W, φ = arg a and the integrals are taken over the same ray. Thus, we can combine the logs and obtain:
ψ_out(a,θ) = 1/2π∫_0^1 dt/t{log| 1 - [𝒳_e](-te^iarg a)/1 - [𝒳_-e](te^iarg a)| + log| 1 - [𝒳_e+m(1-𝒳_e)^-1](-te^iarg a)/1 - [𝒳_-e-m(1-𝒳_e)](te^iarg a)| }
= 1/2π∫_0^1 dt/tlog| 1 - [𝒳_e(1-𝒳_m)](-te^iarg a)/1 - [𝒳_-e(1-𝒳_m)^-1](te^iarg a)|
and the two definitions coincide. For the wall-crossing of type II the proof is entirely analogous.
Q is a reparametrization of θ_m; that is, a diffeomorphism of ℝ/2πℤ.
To show that Q is injective, it suffices to show that |∂ψ/∂θ_m| < 1. We will show this in the ℬ_in region; the proof for the ℬ_out region is similar.
To simplify the calculations, write
ψ(a,θ) = 2∫_0^1 dt/tlog| 1-Cf(θ_m)/1-Cg(θ_m)|
for functions f, g of the form e^iΥ_γ for different choices of γ (they both depend on other parameters, but they're fixed here) and a factor C of the form
C = exp( -π R |a| (t^-1 + t))
Now take partials in both sides of (<ref>) and bring the derivative inside the integral. After an application of the chain rule we get the estimate
| ∂ψ/∂θ_m| ≤ 2∫_0^1 dt/t |C| {|f||∂Υ_e(t)/∂θ_m|/|1-Cf| + |g||∂Υ_e(-t)/∂θ_m|/|1-Cg|}
By the estimates in <cit.>, |∂Υ_e/∂θ_m| < 1. In <cit.>, we show that |f|, |g| can be bounded by 2. The factor C has exponential decay, so if R is big enough we can bound the right-hand side by 1, and injectivity is proved. For surjectivity, just observe that ψ(θ_m + 2π) = ψ(θ_m), so Q(θ_m + 2π) = Q(θ_m) + 2π.
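For fixed a ≠ 0 the decay of C is quantitative: substituting u = 1/t,
∫_0^1 e^-π R|a|(t + 1/t)dt/t ≤∫_1^∞ e^-π R|a|udu/u ≤ e^-π R|a|/(π R|a|),
so the right-hand side of (<ref>) can indeed be made smaller than 1 by taking R large.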
With respect to the new coordinate θ'_m, the functions Υ_e, Υ_m satisfy the equation:
Υ_e(a,ζ) = θ_e +
1/4π∑_γ'Ω(γ';a) ⟨γ_e, γ' ⟩ ∫_γ'dζ'/ζ'ζ' + ζ/ζ' -ζlog[ 1 - 𝒳_γ'^sf(a,ζ', Υ_γ') ]
Υ_m(a,ζ) = θ'_m +
1/2π∑_γ'Ω(γ';a) ⟨γ_m, γ' ⟩{.
∫_0^b'dζ'/ζ' - ζlog[ 1 - 𝒳_γ'^sf(a,ζ', Υ_γ') ] +
. ∫_b'^b' ∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - 𝒳_γ'^sf(a,ζ', Υ_γ') ] } ,
for b' the intersection of the unit circle with the ℓ_γ' ray. The invariants Ω(γ';a) jump at the wall, but in the Pentagon case the sum is finite.
In order to show that Υ converges to some function, even at a = 0, observe that the integral equations in (<ref>) and (<ref>) still make sense at the singular fiber, since in the case of (<ref>), lim_a → 0 Z_m = c ≠ 0 and the exponential decay is still present, making the integrals convergent. In the case of (<ref>), the exponential decay is gone, but the different kernel makes the integral convergent, at least for ζ∈ℂ^×. The limit function lim_a → 0Υ should then be a solution to the integral equations obtained by recursive iteration, as in <cit.>.
We have to be specially careful with the Cauchy integral in (<ref>). It will be better to obtain each iteration Υ^(ν)_m when |a| → 0 by combining the pair of rays ℓ_γ', ℓ_-γ' into a single line L_γ', where in the case of the Pentagon, γ' can be either γ_e or γ_e+m, depending on the side of the wall we're at. We formulate a boundary problem over each infinite curve L_γ' as in <ref>. As in the Ooguri-Vafa case, the jump function[Since we do iterations of boundary problems, we abuse notation and use simply G(ζ) where it should be G^(ν)(ζ). This shouldn't cause any confusion, as our main focus in this section is how to obtain any iteration of 𝒳_m] G(ζ) has discontinuities of the first kind at 0 and ∞, but we also have a new difficulty: For θ_e close to 0, the jump function G(ζ) = 1-e^iΥ^(ν - 1)_γ'(ζ) may be 0 for some values of ζ.
Since the asymptotics of Υ^(ν)_e as ζ→ 0 or ζ→∞ are θ_e ± iϕ_e ≠ 0, the jump function G(ζ) can only attain the 0 value inside a compact interval away from 0 or ∞, hence these points are isolated in L_γ'. By the symmetry relation expressed in Lemma <ref>, the zeroes of G(ζ) come in pairs in L_γ' and are of the form ζ_k, -1/ζ_k. By our choice of orientation for L_γ', one of the jumps is inverted so that G(ζ) has only zeroes along L_γ' and no poles.
Thus, as in <ref>, we have a Riemann-Hilbert problem of the form[To simplify notation, we omit the iteration index ν in the Riemann-Hilbert problem expressed. By definition, 𝒳_m = 𝒳^sf_m X_m, for any iteration ν]
X_m^+(ζ) = G(ζ) X_m^-(ζ)
In <cit.>, we show that the solutions of (<ref>) exist and are unique, given our choice of kernel in (<ref>). We thus obtain each iteration Υ_m^(ν) of (<ref>). Moreover, since by <cit.>, 𝒳_m^+ = 0 at points ζ in the L_e ray where G(ζ) = 0, Υ_m^(ν)+ has a logarithmic singularity at such points.
§.§ Estimates and a new gauge transformation
As we've seen in the Ooguri-Vafa case, we expect our solutions lim_a → 0Υ to be unbounded in the ζ variable.
Define a Banach space X as the completion under the sup norm of the space of functions Φ: ℂ^××𝕋× U →ℂ^2n that are piecewise holomorphic on ℂ^×, smooth on 𝕋× U, for U an open subset of ℬ containing 0, and such that (<ref>), (<ref>) hold.
Like in the Ooguri-Vafa case, let a → 0 fixing arg a. We will later get rid of this dependence on arg a with another gauge transformation of θ_m. The following estimates on Υ^(ν) clearly give us that the sequence converges to some limit Υ.
In the Pentagon case, at the bad fiber a = 0:
Υ_e^(ν + 1) = Υ_e^(ν) + O( e^-2πν R |Z_m|), ν≥ 2
Υ_m^(ν + 1) = Υ_m^(ν) + O( e^-2πν R |Z_m|), ν≥ 1
As before, we prove this by induction. Note that Υ^(1)_m = Υ^OV, the extension of the Ooguri-Vafa case obtained in (<ref>), and Υ^(1)_m differs considerably from θ_m because of the logζ term. Hence the estimates cannot start at ν = 0. For the same reason, Υ^(2)_e differs considerably from Υ^(1)_e, since this is the first iteration where Υ^(1)_m is considered.
Let ν = 1. The integral equations for Υ_e didn't change in this special case. By Lemma 3.3 in <cit.>, we have for the general case:
Υ^(1)_e = θ_e + ∑_γ'Ω(γ',a) ⟨γ_e, γ' ⟩e^-2π R |Z_γ'|/4π i √(R |Z_γ'|)ζ_γ' + ζ/ζ_γ' - ζ e^iθ_γ' + O( e^-2π R |Z_γ'|/R)
where ζ_γ' = -Z_γ'/|Z_γ'| is the saddle point for the integrals in (<ref>), provided ζ is not ζ_γ'. Note that there is no divergence if ζ→ 0 or ζ→∞. If ζ = ζ_γ', again by Lemma 3.3 in <cit.>, we obtain estimates as in (<ref>) except for the √(R) terms in the denominator.
In any case, for the Pentagon, the γ' in (<ref>) are only γ_± m, γ_± (e+m), depending on the side of the wall of marginal stability. At a = 0, Z_e+m = Z_m, so (<ref>) gives that log[1 - e^i Υ^(1)_e] = log[1 - e^iθ_e] + O(e^-2π R |Z_m|) along the ℓ_e ray, and a similar estimate holds for log[1 - e^-i Υ^(1)_e] along the ℓ_-e ray. Plugging this into (<ref>), we get (<ref>) for ν = 1.
For general ν, a saddle point analysis on Υ^(ν)_e can still be performed and obtain as in (<ref>):
Υ^(ν+1)_e = θ_e + e^-2π R |Z_m|/4π i √(R |Z_m|){ζ_m + ζ/ζ_m - ζ e^iΥ^(ν)_m(ζ_m) - ζ_m - ζ/ζ_m + ζ e^-iΥ^(ν)_m(-ζ_m)} + O( e^-2π R |Z_γ'|/R),
from one side of the wall. On the other side (for type I) it will contain the extra terms
e^-2π R |Z_m|/4π i √(R |Z_m|){ζ_m + ζ/ζ_m - ζ e^i(Υ^(ν)_m(ζ_m) + Υ^(ν)_e(ζ_m)) - ζ_m - ζ/ζ_m + ζ e^-i(Υ^(ν)_m(-ζ_m) - Υ^(ν)_e(-ζ_m))}.
Observe that for this approximation we only need Υ^(ν) at the point ζ_m. By the previous part, for ν = 2,
e^iΥ^(2)_m(ζ_m) = e^iΥ^(1)_m(ζ_m)(1 + O( e^-2π R |Z_m|) )
Thus, for ν = 2,
Υ^(3)_e = θ_e + e^-2π R |Z_m|/4π i √(R |Z_m|){ζ_m + ζ/ζ_m - ζ e^iΥ^(1)_m(ζ_m)(1 + O( e^-2π R |Z_m|) )
- ζ_m - ζ/ζ_m + ζ e^-iΥ^(1)_m(-ζ_m)(1 + O( e^-2π R |Z_m|) ) } + O( e^-2π R |Z_m|/R)
= Υ^(2)_e + O( e^-4π R|Z_m|)
and similarly in the other side of the wall. For general ν, the same arguments show that (<ref>), (<ref>) hold after the appropriate ν.
There is still one problem: the limit of 𝒳̃_m, the analytic continuation of 𝒳_m, was obtained as a → 0 only along a fixed ray arg a = constant. To get rid of this dependence, it is necessary to perform another gauge transformation on the torus coordinates θ. Recall that we are restricted to the Pentagon case. Let a → 0 fixing arg a. Let ζ_γ denote Z_γ/|Z_γ|. In particular, ζ_e = a/|a| and this remains constant since we're fixing arg a. Also, ζ_m = Z_m/|Z_m| and this is independent of a since Z_m has a limit as a → 0. The following lemma will allow us to obtain the correct gauge transformation.
For the limit 𝒳_m|_a=0 obtained above, its imaginary part is independent of the chosen ray arg a = c along which a → 0.
Let Υ̃_m denote the analytic continuation of Υ_m yielding 𝒳̃_m. Start with a fixed value arg a ≡ρ_0, for ρ_0 different from arg Z_m(0) and arg(-Z_m(0)). For another ray arg a ≡ρ, we compute Υ_m|_a=0, arg a = ρ - Υ_m|_a=0, arg a = ρ_0 (without analytic continuation for the moment).
The integrals in (<ref>) are of two types. One type is of the form
∫_0^ζ_± edζ'/ζ' - ζlog[ 1 - e^iΥ_± e(ζ')] + ∫_ζ_± e^ζ_± e∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - e^iΥ_± e(ζ')]
The other type appears only in the outside part of the wall of marginal stability. Since Z : Γ→ℂ is a homomorphism, Z_γ_e + γ_m = Z_γ_e + Z_γ_m. At a = 0, Z_e = a = 0, so Z_e+m = Z_m. Hence, ℓ_m = ℓ_e+m at the singular fiber. This second type of integral is thus of the form
∫_0^ζ_± mdζ'/ζ' - ζlog[ 1 - e^iΥ_± (e+m)(ζ')] + ∫_ζ_± m^ζ_± m∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - e^iΥ_± (e+m)(ζ')]
Since the ℓ_m ray stays fixed at a = 0 independently of arg a, (<ref>) does not depend on arg a, so this has a well-defined limit as a → 0. We should focus then only on integrals of the type (<ref>). For a different arg a, ζ_e changes to another point ζ̃_e on the unit circle. See Figure <ref>. The paths of integration change accordingly. We have two possible outcomes: either ζ lies outside the sector determined by the two paths, or ζ lies inside the region.
In the first case (ζ_1 on Figure <ref>), the integrands
log[1-e^iΥ_e(ζ')]/(ζ'-ζ), ζlog[1-e^iΥ_e(ζ')]/(ζ'(ζ'-ζ))
are holomorphic in ζ' on the sector between the two paths. By Cauchy's formula, the difference between the two integrals is just the integration along a path C_± e between the two endpoints ζ_± e, ζ̃_± e. If f(s) parametrizes the path C_e, let C_-e = -1/\overline{f(s)}. The orientation of C_e in the contour containing ∞ is opposite to that of the contour containing 0. Similarly for C_-e. Thus, the difference of Υ_m for these two values of arg a is the integral along C_e, C_-e of the difference of kernels (<ref>), namely:
∫_C_edζ'/ζ'log[1-e^iΥ_e(ζ')] - ∫_C_-edζ'/ζ'log[1-e^-iΥ_e(ζ')]
Even if e^iΥ_e(ζ') = 1 for ζ' in the contour, the integrals in (<ref>) are convergent, so this is well-defined for any value of θ_e ≠ 0. By the symmetry of C_e, C_-e and the reality condition (<ref>), the second integral is the conjugate of the first one. Thus (<ref>) is real.
When ζ hits one of the contours, ζ coincides with a point of the ℓ_e or ℓ_-e rays for some value of arg a. The contour integrals jump, since ζ now lies inside the contour (ζ_2 in Figure <ref>). The jump is by the residue of the integrands (<ref>). This gives the jump of 𝒳_m that the analytic continuation around a = 0 cancels. Therefore, only the real part of Υ_m depends on arg a.
By the previous lemma, Υ_m|_a=0, arg a = ρ - Υ_m|_a=0, arg a = ρ_0 is real and is given by (<ref>). Define then a new gauge transformation:
θ_m = θ'_m - 1/2π{∫_C_edζ'/ζ'log[1-e^iΥ_e(ζ')] + ∫_C_-edζ'/ζ'log[1-e^-iΥ_e(ζ')] }
This eliminates the dependence on arg a for the limit 𝒳_m|_a=0. As we did in <ref> in Theorem <ref>, we can extend the torus fibration ℳ' by gluing an S^1-fiber bundle of the form D × (0, 2π) × S^1, for D a disk around a = 0, θ_e ∈ (0,2π) and θ_m the new coordinate of the S^1 fibers. Using Taub-NUT space as a local model for this patch, the trivial S^1 bundle can be extended to θ_e = 0, where the fiber degenerates into a point (nevertheless, in Taub-NUT coordinates the space is still locally isomorphic to ℂ^2). Since 𝒳_m ≡ 0 if θ_e = 0 as in <ref>, in this new manifold ℳ we thus obtain a well defined function 𝒳_m.
§.§ Extension of the derivatives
So far we were able to extend the functions 𝒳_e, 𝒳_m to ℳ. Unfortunately, we can no longer bound the derivatives of 𝒳_m near a = 0 uniformly in ν, so the Arzela-Ascoli arguments no longer work here. Since there is no difference in the definition of 𝒳_e at a = 0 from that at the regular fibers, this function extends smoothly to a = 0.
We have to obtain the extension of all derivatives of 𝒳_m directly from its definition. It suffices to extend the derivatives of 𝒳_m only, as the analytic continuation doesn't affect the symplectic form ϖ(ζ) (see below).
log𝒳_m extends smoothly to ℳ, for θ_e ≠ 0.
For convenience, we rewrite Υ_m with the final magnetic coordinate θ_m:
Υ_m = θ_m + 1/2π{∫_C_edζ'/ζ'log[1-e^i Υ_e(ζ')] -
∫_C_-edζ'/ζ'log[1-e^-i Υ_e(ζ')] }
+ 1/2π∑_γ'Ω(γ';a) ⟨γ_m, γ' ⟩{∫_0^ζ_γ'dζ'/ζ' - ζlog[ 1 - 𝒳_γ'^sf(a,ζ', Υ_γ') ] . +
. ∫_ζ_γ'^ζ_γ'∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - 𝒳_γ'^sf(a,ζ', Υ_γ') ] }
where e^i Υ_e(ζ') is evaluated only at a = 0. For γ' of the type ±γ_e ±γ_m, 𝒳_γ' and its derivatives still have exponential decay along the ℓ_γ' ray, so these parts in Υ_m extend to a =0 smoothly. It thus suffices to extend only
Υ_m = θ_m + 1/2π{∫_C_edζ'/ζ'log[1-e^i Υ_e(ζ')] -
∫_C_-edζ'/ζ'log[1-e^-i Υ_e(ζ')] .
+ ∫_0^ζ_edζ'/ζ' - ζlog[ 1 - 𝒳_e^sf(a,ζ', Υ_e) ] + ∫_ζ_e^ζ_e∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - 𝒳_e^sf(a,ζ', Υ_e) ]
-. ∫_0^-ζ_edζ'/ζ' - ζlog[ 1 - 𝒳_e^sf^-1(a,ζ', -Υ_e) ] - ∫_-ζ_e^-ζ_e∞ζ dζ'/ζ'(ζ' - ζ)log[ 1 - 𝒳_e^sf^-1(a,ζ', -Υ_e) ]}
together with the semiflat part π R Z_m/ζ + π R ζZ̅_m, which we assume is as in the Generalized Ooguri-Vafa case, namely:
𝒳_m = exp( -i R /2ζ(alog a - a + f(a)) + i Υ_m + iζ R/2 (a̅loga̅ - a̅ + \overline{f(a)} ))
for a holomorphic function f near a = 0 and such that f(0) ≠ 0. The derivatives of the terms involving f(a) clearly extend to a = 0, so we focus on the rest, as in <ref>.
We show first that ∂log𝒳_m/∂θ_e, ∂log𝒳_m/∂θ_m extend to a = 0. Since there is no difference in the proof between the electric and magnetic coordinates, we'll denote by ∂_θ a derivative with respect to either of these two variables.
We have:
∂/∂θlog𝒳_m = -i/2π{∫_C_edζ'/ζ'·e^i Υ_e(ζ')/(1-e^i Υ_e(ζ'))·∂Υ_e(ζ')/∂θ - ∫_C_-edζ'/ζ'·e^-i Υ_e(ζ')/(1-e^-i Υ_e(ζ'))·∂Υ_e(ζ')/∂θ
+ ∫_0^ζ_edζ'/(ζ' - ζ)·𝒳_e(ζ')/(1-𝒳_e(ζ'))·∂Υ_e(ζ')/∂θ + ∫_ζ_e^ζ_e∞ζ dζ'/(ζ'(ζ' - ζ))·𝒳_e(ζ')/(1-𝒳_e(ζ'))·∂Υ_e(ζ')/∂θ
+ ∫_0^-ζ_edζ'/(ζ' - ζ)·𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))·∂Υ_e(ζ')/∂θ + ∫_-ζ_e^-ζ_e∞ζ dζ'/(ζ'(ζ' - ζ))·𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))·∂Υ_e(ζ')/∂θ}
when a → 0, 𝒳_e(ζ')/(1-𝒳_e(ζ')) → e^i Υ_e(ζ')/(1-e^i Υ_e(ζ')). The integrals along C_e and C_-e represent the difference between the integrals along the rays above and those along a fixed pair of rays, as in Figure <ref>. Thus, when a = 0,
2π i ∂/∂θlog𝒳_m|_a =0 = ∫_0^bdζ'/(ζ' - ζ)·𝒳_e(ζ')/(1-𝒳_e(ζ'))·∂Υ_e(ζ')/∂θ + ∫_b^b ∞ζ dζ'/(ζ'(ζ' - ζ))·𝒳_e(ζ')/(1-𝒳_e(ζ'))·∂Υ_e(ζ')/∂θ
+ ∫_0^-bdζ'/(ζ' - ζ)·𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))·∂Υ_e(ζ')/∂θ + ∫_-b^-b ∞ζ dζ'/(ζ'(ζ' - ζ))·𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))·∂Υ_e(ζ')/∂θ
for a fixed point b in the unit circle, independent of a. If e^iΥ_e(ζ') = 1 for a point c in the line L passing through the origin and b, then as seen in <cit.>, the function 𝒳_m develops a zero on the right side of such line. Nevertheless, the analytic continuation of 𝒳_m around a = 0 introduces a factor of the form (1 - 𝒳_e)^-1 when a changes from region III to region I in Figure <ref>, so the pole at c on the right side of L for the derivative ∂/∂θlog𝒳_m coming from the integrand in (<ref>) is canceled by the analytic continuation. Hence, the integrals are well defined and thus the left side has an extension to a = 0.
Now, for the partials with respect to a, a, there are two different types of dependence: one is the dependence of the contours, the other is the dependence of the integrands. The former dependence is only present in (<ref>), as the contours in Figure <ref> change with a. A simple application of the Fundamental Theorem of Calculus in each integral in (<ref>) gives that this change is:
-2π i ∂/∂ alog𝒳_m|_a =0 = log[1-e^-i Υ_e(ζ_e)] - log[1-e^-i Υ_e(ζ_e)]
- log[1-e^-i Υ_e(ζ_e)] + log[1-e^-i Υ_e(ζ_e)] = 0,
where we again used the fact that the integrals along C_e and C_-e represent the difference between the integrals in the other pairs with respect to two different rays, one fixed. By continuity on parameters, the terms are still 0 if Υ_e(ζ_e) = 0. Compare this with (<ref>), where we obtained this explicitly.
Then there is the dependence on a, a̅ in the integrands and the semiflat part. Focusing on a only, we take partials of log𝒳_m in (<ref>) (ignoring constants and parts that clearly extend to a = 0). This is:
(log a)/ζ + ∫_0^ζ_edζ'/(ζ'(ζ' - ζ))·𝒳_e/(1-𝒳_e) + ∫_0^-ζ_edζ'/(ζ'(ζ' - ζ))·𝒳^-1_e/(1-𝒳^-1_e)
This is the equivalent of (<ref>) in the general case. In the limit a → 0, we can do an asymptotic expansion e^i Υ_e(ζ')/(1-e^i Υ_e(ζ')) = e^i Υ_e(0)/(1-e^i Υ_e(0)) + O(ζ'). Clearly, when we write this expansion in (<ref>), the only divergent term at a = 0 is the first-degree approximation in the integral. Thus, we can focus on that and assume that the 𝒳_e/(1-𝒳_e) (resp. 𝒳^-1_e/(1-𝒳^-1_e)) factor is constant. If we do the partial fraction decomposition, we can run the same argument as in Eqs. (<ref>) up to (<ref>) and obtain that (<ref>) is actually 0 at a = 0. The only identity needed is
1/(1-e^iΥ_e(0)) + 1/(1-e^-iΥ_e(0)) = 1
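which is checked directly: writing z = e^iΥ_e(0), 1/(1-z) + 1/(1-z^-1) = 1/(1-z) - z/(1-z) = 1.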
The argument also works for the derivative with respect to a̅, now with an asymptotic expansion of Υ_e around ∞.
This shows that 𝒳_m extends in a C^1 way to a = 0. For the C^∞ extension, derivatives with respect to any θ coordinate work in the same way, since all that was used was the specific form of the contours C_e, C_-e; the same applies to the dependence on those contours. For derivatives with respect to a, a̅ in the integrands, we can again do an asymptotic expansion of Υ_e at 0 or ∞ and compare it to the asymptotics of the corresponding derivative of a log a - a as a → 0.
Nothing we have done in this section is particular to the Pentagon example. We only needed the specific values of Ω(γ;u) given in (<ref>) to obtain the Pentagon identities at the wall and to perform the analytic continuation of 𝒳_m around u = 2. For any integrable-systems data as in section <ref> with suitable invariants Ω(γ;u) allowing the wall-crossing formulas and analytic continuation, we can do the same isomonodromic deformation of putting all the jumps at a single admissible ray, perform saddle-point analysis and obtain the same extensions of the Darboux coordinates 𝒳_γ. This finishes the proof of Theorem <ref>.
What is exclusive to the Pentagon case is that we have a well-defined hyperkähler metric g_OV that we can use as a local model of the metric to be constructed here.
The extension of the holomorphic symplectic form ϖ(ζ) is now straightforward. We proceed as in <cit.> by first writing:
ϖ(ζ) = -1/4π^2 Rd𝒳_e/𝒳_e∧d𝒳_m/𝒳_m
where we used the fact that the jumps of the functions 𝒳_γ are via the symplectomorphisms 𝒦_γ' of the complex torus T_a (see (<ref>)), so ϖ(ζ) remains the same whether we take 𝒳_m or its analytic continuation 𝒳̃_m.
We need to show that ϖ(ζ) is of the form
-i/2ζω_+ + ω_3 -i ζ/2ω_-
that is, ϖ(ζ) must have simple poles at ζ = 0 and ζ = ∞, even at the singular fiber where a = 0.
By definition, 𝒳_e = exp(π R a/ζ + iΥ_e + π R ζa̅). Thus
d𝒳_e(ζ)/𝒳_e(ζ) = π R da/ζ + i dΥ_e(ζ) +π R ζ da̅
By (<ref>), and since lim_a → 0 Z_m ≠ 0, 𝒳_m (resp. 𝒳_-m), of the form exp(π R Z_m(a)/ζ + iΥ_m + π R ζZ̅_m(a)), still has exponential decay when ζ lies in the ℓ_m ray (resp. ℓ_-m), even if a = 0. The differential d Υ_e(ζ) thus exists for any ζ∈ℂ, since the integrals defining it converge for any ζ.
As in <cit.>, we can write
d𝒳_e/𝒳_e∧d𝒳_m/𝒳_m = d𝒳_e/𝒳_e∧( d𝒳^sf_m/𝒳^sf_m + ℐ_±),
for ℐ_± denoting the corrections to the semiflat function. By the form of 𝒳^sf = exp(π R Z_m(a)/ζ + iθ_m + π R ζZ̄_m(a)), the wedge involving only the semiflat part has only simple poles at ζ = 0 and ζ = ∞, so we can focus on the corrections. These are of the form
d𝒳_e(ζ)/𝒳_e(ζ)∧ℐ_± = -i/2π{∫_0^ζ_edζ'/ζ'-ζ𝒳_e(ζ')/1-𝒳_e(ζ')d𝒳_e(ζ)/𝒳_e(ζ)∧d𝒳_e(ζ')/𝒳_e(ζ').
+ ∫_ζ_e^ζ_e ∞ζ dζ'/ζ'(ζ'-ζ)𝒳_e(ζ')/1-𝒳_e(ζ')d𝒳_e(ζ)/𝒳_e(ζ)∧d𝒳_e(ζ')/𝒳_e(ζ')
+ ∫_0^-ζ_edζ'/ζ'-ζ𝒳^-1_e(ζ')/1-𝒳^-1_e(ζ')d𝒳_e(ζ)/𝒳_e(ζ)∧d𝒳_e(ζ')/𝒳_e(ζ')
+ . ∫_-ζ_e^-ζ_e ∞ζ dζ'/ζ'(ζ'-ζ)𝒳^-1_e(ζ')/1-𝒳^-1_e(ζ')d𝒳_e(ζ)/𝒳_e(ζ)∧d𝒳_e(ζ')/𝒳_e(ζ')}
in the “inside” part of the wall of marginal stability; a similar equation holds on the other side. Since d𝒳_e(ζ)/𝒳_e(ζ) ∧ d𝒳_e(ζ)/𝒳_e(ζ) = 0, we can simplify the wedge products above by replacing d𝒳_e(ζ')/𝒳_e(ζ') with the difference (up to a sign, irrelevant for the pole structure)
d𝒳_e(ζ)/𝒳_e(ζ) - d𝒳_e(ζ')/𝒳_e(ζ') = π R [ ( 1/ζ - 1/ζ')da + (ζ - ζ')dā] + i ( dΦ_e(ζ) - dΦ_e(ζ') )
Recall that Φ_e represents the corrections to θ_e, so Υ_e = θ_e + Φ_e and the dθ_e terms cancel in the difference. By <ref>, Φ_e and dΦ_e are defined for ζ = 0 and ζ = ∞ even if a = 0, since lim_a → 0 Z_m(a) ≠ 0 and the exponential decay still present in 𝒳_m^sf guarantees convergence of the integrals in <ref>. Hence, the terms involving dΦ_e(ζ) - dΦ_e(ζ') are holomorphic for any ζ. It thus suffices to consider the other terms. After simplifying the integration kernels, we obtain
π R da/ζ∫_0^ζ_edζ'/ζ'𝒳_e(ζ')/(1-𝒳_e(ζ')) +π R da ∫_ζ_e^ζ_e ∞dζ'/(ζ')^2𝒳_e(ζ')/(1-𝒳_e(ζ'))
π R da/ζ∫_0^-ζ_edζ'/ζ'𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ')) +π R da ∫_-ζ_e^-ζ_e ∞dζ'/(ζ')^2𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))
-π R dā∫_0^ζ_e dζ' 𝒳_e(ζ')/(1-𝒳_e(ζ')) -π R ζ dā∫_ζ_e^ζ_e ∞dζ'/ζ'𝒳_e(ζ')/(1-𝒳_e(ζ'))
-π R dā∫_0^-ζ_e dζ' 𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ')) -π R ζ dā∫_-ζ_e^-ζ_e ∞dζ'/ζ'𝒳^-1_e(ζ')/(1-𝒳^-1_e(ζ'))
The only dependence on ζ is in the factors ζ, 1/ζ. Thus ϖ(ζ) has only simple poles at ζ = 0 and ζ = ∞.
Finally, the estimates in Lemma <ref> show that if we recover the hyperkähler metric g from the holomorphic symplectic form ϖ(ζ) as in <ref> and <ref>, we obtain that the hyperkähler metric for the Pentagon case is the metric obtained in <ref> for the Ooguri-Vafa case plus smooth corrections near a = 0, θ_e = 0, so it extends to this locus.
This gives Theorem <ref>.
|
http://arxiv.org/abs/1701.07992v2 | 20170127100322 | HJB equations in infinite dimension and optimal control of stochastic evolution equations via generalized Fukushima decomposition | [
"Giorgio Fabbri",
"Francesco Russo"
] | math.PR | [
"math.PR"
] |
HJB equations in infinite dimension and optimal control of stochastic evolution equations via generalized Fukushima decomposition
Giorgio Fabbri, Francesco Russo
=====================================
A stochastic optimal control problem driven by an abstract evolution equation in a separable Hilbert space is considered. Thanks to the identification of
the mild solution of the state equation as a ν-weak Dirichlet process,
the value process is proved to be a real weak Dirichlet process.
The uniqueness of the corresponding decomposition is used to prove a
verification theorem.
Through that technique several of the required assumptions are milder than
those employed in previous contributions about
non-regular solutions of Hamilton-Jacobi-Bellman equations.
KEY WORDS AND PHRASES: Weak Dirichlet processes in infinite dimension;
Stochastic evolution equations; Generalized Fukushima decomposition; Stochastic optimal control in Hilbert spaces.
2010 AMS MATH CLASSIFICATION: 35Q93, 93E20, 49J20
§ INTRODUCTION
The goal of this paper is to show that, if we carefully exploit some
recent developments in stochastic calculus in infinite dimension,
we can weaken some of the hypotheses typically demanded in the literature of
non-regular solutions of Hamilton-Jacobi-Bellman (HJB) equations to prove
verification theorems and optimal syntheses of stochastic optimal control problems in Hilbert spaces.
As is well-known, the study of a dynamic optimization problem can be linked, via dynamic programming, to the analysis of the related HJB equation, which is, in the context we are interested in, a second order PDE in infinite dimension. When this approach can be successfully applied, one can prove a verification theorem and express the optimal control in feedback form (that is, at any time, as a function of the state) using the solution of the HJB equation. In this case the latter can be identified with the value function of the problem.
In the regular case (i.e. when the value function is C^1,2, see for instance Chapter 2 of <cit.>) the standard proof of the verification theorem is based on the Itô formula.
In this paper we show that some recent results in stochastic calculus, in particular Fukushima-type decompositions explicitly suited for the infinite dimensional context, can be used to prove the same kind of result for less regular solutions of the HJB equation.
The idea is the following.
In a previous paper (<cit.>) the authors introduced the class of ν-weak Dirichlet processes (the definition is recalled in Section <ref>, ν is a Banach space
strictly associated with a suitable subspace ν_0 of H)
and showed that convolution type processes, and in particular mild solutions
of infinite dimensional stochastic evolution
equations (see e.g. <cit.>, Chapter 4), belong to this class. By applying this result to the solution of the state equation of a class of stochastic optimal control problems in infinite dimension we are able to show that the value process, that is the value of any given solution of the HJB equation computed on the trajectory taken into account[
The expression value process is sometimes used to denote the value function computed on the trajectory; often the two definitions coincide, but this is not always the case.], is a (real-valued) weak Dirichlet process (with respect to a given filtration), a notion introduced in <cit.> and subsequently analyzed in <cit.>. Such a process can be written as the sum of a local martingale and a martingale-orthogonal process, i.e. one having zero covariation with every continuous local martingale. Such a decomposition is
unique and in Theorem <ref>,
we exploit the uniqueness property to characterize the martingale part of
the value process as a suitable stochastic integral
with respect to a Girsanov-transformed Wiener process
which allows us to obtain a substitute of the Itô-Dynkin formula for solutions of the Hamilton-Jacobi-Bellman equation.
This is possible when the value process associated to the optimal control problem can be expressed by
a C^0,1([0,T[ × H) function of the state process, with however a stronger regularity on the first derivative.
We finally use this expression to prove the verification result stated in Theorem <ref>[A similar approach is used, when H is finite-dimensional, in <cit.>. In that case things are simpler and there is no need to use the notion of ν-weak Dirichlet processes or results that are specifically suited to the infinite dimensional case; there, ν_0 is isomorphic to the full space H.].
We think the interest of our contribution is twofold. On the one hand we show that recent developments in stochastic calculus in Banach spaces (see for instance <cit.>, from which we adopt the framework related to generalized covariations and Itô-Fukushima formulae, but also other approaches such as <cit.>) may have important applications in control theory.
On the other hand, the method we present allows us to improve some previous verification results by weakening a series of hypotheses.
We discuss here this second point in detail.
We discuss here this second point in detail.
There are several ways to introduce non-regular solutions of second order HJB equations in Hilbert spaces. They are more precisely surveyed in <cit.> but they essentially are viscosity solutions, strong solutions and the study of the HJB equation through backward SDEs.
Viscosity solutions are defined, as in the finite-dimensional case, using test functions that
locally “touch” the candidate solution. The viscosity solution approach was first adapted to the second order Hamilton
Jacobi equation in Hilbert space in <cit.> and then, for the “unbounded” case (i.e. including a possibly unbounded generator of a strongly continuous semigroup in the state equation, see e.g. equation (<ref>)) in <cit.>.
Several improvements of those pioneering studies have been published, including extensions to several specific
equations but, differently from what happens in the finite-dimensional case, there are no verification theorems available at the moment for stochastic problems in infinite-dimension that use the notion of viscosity solution.
The backward SDE approach can be applied when the mild solution of the HJB equation can be represented
using the solution of a forward-backward system.
It was introduced in <cit.>
in the finite dimensional setting
and developed in several works,
among them <cit.>. This method
only allows one to find optimal feedbacks in classes of problems satisfying a specific “structural condition”, imposing,
roughly speaking, that the control acts within the image of the noise. The same limitation concerns the L^2_μ approach introduced and developed in <cit.> and <cit.>.
In the strong solutions approach, first introduced in <cit.>, the solution is defined as a
proper limit of solutions of regularized problems. Verification results in this framework are given in
<cit.>. They are collected and refined in Chapter 4 of <cit.>.
The results obtained using strong solutions are the main term of comparison for ours, both because in this context the verification results are more developed and because we partially work in the same framework, approximating the solution of the HJB equation by solutions of regularized problems. With respect to them our method has some advantages[Results for specific cases, such as boundary control problems and reaction-diffusion equations (see <cit.>), cannot be treated at the moment with the method we present here.]: (i) the assumptions on the cost structure are milder; notably, they do not include any continuity assumption on the running cost, which is only required to be a measurable function; moreover, the admissible controls are only required to satisfy, together with the related trajectories, a quasi-integrability condition on the functional, see Hypothesis <ref> and the subsequent paragraph; (ii) we work with a bigger set of approximating functions because we do not require the approximating functions and their derivatives to be uniformly bounded; (iii) the convergence of the derivatives of the approximating solutions is not necessary and is replaced by the weaker condition (<ref>).
This convergence, in different possible forms, is unavoidable in the standard structure of the strong solutions approach, and it is avoided here only thanks to the use of the Fukushima decomposition in the proof.
In terms of the last two points, our notion of solution is weaker than those used in the works mentioned above; nevertheless, we need to assume that the gradient of the solution of the HJB equation is continuous as a D(A^*)-valued function.
The example we develop in Section <ref>, even if rather simple, is itself of some interest because, as far as we know, no explicit (i.e. with explicit expressions of the value function and of the approximating sequence) example of a strong solution of a second order HJB equation in infinite dimension has been published so far.
The paper proceeds as follows. Section <ref> is devoted to some preliminary notions, notably
the definition of ν-weak-Dirichlet process and some related results. Section <ref> focuses on the optimal
control problem and the related HJB equation. It includes the key
decomposition Theorem <ref>. Section <ref> concerns the verification theorem. In Section <ref> we provide an example of an optimal control problem that can be solved by using the developed techniques.
§ SOME PRELIMINARY DEFINITIONS AND RESULT
Consider a complete probability space ( Ω,ℱ,ℙ). Fix T>0 and s∈ [0,T[. Let {ℱ^s_t }_t≥ s be a filtration satisfying the usual conditions. Each time we use expressions such as “adapted”, “martingale”, etc., we always mean “with respect to the filtration {ℱ^s_t }_t≥ s”.
Given a metric space S we denote by ℬ(S) the Borel σ-field on S. Consider two real Hilbert spaces H and G. By default we assume that all the processes [s,T]×Ω→ H are
Bochner measurable functions with respect to the product σ-algebra
ℬ([s,T]) ⊗ℱ with values in (H, ℬ(H)).
Continuous processes are clearly Bochner measurable processes.
Similar conventions are done for G-valued processes. We denote by H⊗̂_π G the projective tensor product of H and G, see <cit.> for details.
A continuous real process X: [s,T]×Ω→ℝ is called a
weak Dirichlet process if it can be written as X=M+A, where M is a continuous local martingale and A is a martingale-orthogonal process
in the sense that A(s)=0 and [ A,N] =0 for every continuous local martingale N.
The following result is proved in Remarks 3.5 and 3.2 of <cit.>.
* The decomposition described in Definition <ref> is unique.
* A semimartingale is a weak Dirichlet process.
The notion of weak Dirichlet process constitutes a natural generalization
of the one of semimartingale. To figure out this fact one can start by considering a real continuous semimartingale S = M + V, where M is a local martingale and V is a bounded variation
process vanishing at zero. Given a function f: [0,T] ×ℝ→ℝ
of class C^1,2, Itô's formula shows that
f(·, S) = M^f + A^f
is a semimartingale where M^f_t = f(0,S_0) + ∫_0^t ∂_x f(r,S_r) dM_r
is a local martingale and A^f is a bounded variation process expressed
in terms of the partial derivatives of f.
If f ∈ C^0,1 then (<ref>) still holds with the same M^f, but now A^f is only a martingale orthogonal process; in this case f(·, S) is generally no longer a semimartingale
but only a weak Dirichlet process, see <cit.>, Corollary 3.11.
For this
reason (<ref>) can be interpreted as a generalized Itô formula.
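For comparison, in the C^1,2 case the orthogonal term is explicit: with S = M + V as above, Itô's formula gives
\[
A^f_t=\int_0^t \partial_r f(r,S_r)\,dr+\int_0^t \partial_x f(r,S_r)\,dV_r+\tfrac12\int_0^t \partial^2_{xx} f(r,S_r)\,d[M]_r,
\]
a bounded variation process. When f is only C^{0,1} these three terms lose their individual meaning, and only the martingale-orthogonality of A^f survives.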
Another aspect to be emphasized is that a semimartingale is also a finite quadratic variation process.
Some authors, see e.g. <cit.> have extended the notion
of quadratic variation to the case of stochastic processes taking values in
a Hilbert (or even Banach) space B.
The difficulty is that
the notion of finite quadratic variation process (but also the one
of semimartingale or weak Dirichlet process) is not suitable
in several contexts, in particular in the analysis of mild solutions of evolution equations, which cannot in general be expected to be semimartingales or finite quadratic variation processes.
A way to remain in this spirit is to introduce a notion of quadratic
variation which is associated with a space (called Chi-subspace) χ
of the dual of the tensor product B ⊗̂_π B. In the rare cases
when the process does have a finite quadratic variation,
the corresponding χ is allowed to be the full space
(B ⊗̂_π B)^*.
We recall that, following <cit.>, a Chi-subspace (of (H⊗̂_π G)^*) is defined as any Banach subspace (χ, |·|_χ) which is continuously embedded into (H⊗̂_π G)^* and, following <cit.>, given a Chi-subspace χ we introduce the notion of χ-covariation as follows.
Given two processes 𝕏: [s,T] → H and 𝕐: [s,T] → G, we say that (𝕏, 𝕐) admits a
χ-covariation if the two following conditions are satisfied.
H1 For any sequence of positive real numbers ϵ_n↘ 0 there exists a subsequence ϵ_n_k such that
sup_k∫_s^T | J( (𝕏(r+ϵ_n_k)-𝕏(r))
⊗ (𝕐(r+ϵ_n_k)-𝕐(r)) ) |_χ^∗/ϵ_n_k dr
< ∞ a.s.,
where J: H⊗̂_πG ⟶ (H⊗̂_πG)^∗∗ is the canonical injection between a space and its bidual.
H2
If we denote by [𝕏,𝕐]_χ^ϵ the application
[𝕏,𝕐]_χ^ϵ: χ⟶𝒞([s,T]),
ϕ↦∫_s^· _χ⟨ϕ, J( (𝕏(r+ϵ)-𝕏(r))⊗(𝕐(r+ϵ)-𝕐(r)) )/ϵ⟩_χ^∗ dr,
the following two properties hold.
* (i) There exists an application, denoted by [𝕏,𝕐]_χ, defined on χ with values in 𝒞([s,T]),
satisfying[Given a separable Banach space B and a probability space (Ω, ℙ), a family of processes 𝕏^ϵ: Ω× [0, T] → B is said to converge in the ucp (uniform convergence in probability) sense to 𝕏: Ω× [0, T] → B, when ϵ goes to zero,
if lim_ϵ→ 0sup_t∈ [0,T] |𝕏^ϵ_t - 𝕏_t|_B = 0 in probability i.e. if, for any γ>0, lim_ϵ→ 0ℙ (sup_t∈ [0,T] |𝕏^ϵ_t - 𝕏_t|_B >γ ) = 0.]
[𝕏,𝕐]_χ^ϵ(ϕ) → [𝕏,𝕐]_χ(ϕ) in the ucp sense,
for every ϕ∈χ⊂
(H⊗̂_πG)^∗.
* (ii)
There exists a Bochner measurable process
[𝕏,𝕐]_χ:Ω× [s,T]⟶χ^∗,
such that
* for almost all
ω∈Ω, [𝕏,𝕐]_χ(ω,·) is a (càdlàg) bounded variation process,
* [𝕏,𝕐]_χ(·,t)(ϕ)=[𝕏,𝕐]_χ(ϕ)(·,t) a.s. for all ϕ∈χ, t∈ [s,T].
If (𝕏,𝕐) admits a χ-covariation
we call [𝕏,𝕐]_χ the χ-covariation of (𝕏,𝕐).
If [𝕏,𝕐]_χ vanishes
we also write [𝕏,𝕐]_χ = 0.
We say that a process 𝕏 admits a χ-quadratic variation if (𝕏, 𝕏)
admits a χ-covariation. In that case [𝕏,𝕏]_χ
is called the χ-quadratic variation of 𝕏.
Let H and G be two separable Hilbert spaces.
Let ν⊆ (H⊗̂_π G)^* be a Chi-subspace.
A continuous adapted H-valued process 𝔸: [s,T] ×Ω→ H is said to be
ν-martingale-orthogonal if [𝔸, ℕ]_ν=0 for any G-valued continuous local martingale ℕ.
Let H and G be two separable Hilbert spaces and let 𝕍: [s,T] ×Ω→ H be a bounded variation process.
For any Chi-subspace ν⊆ (H⊗̂_π G)^*, 𝕍 is ν-martingale-orthogonal.
We will prove that, given any continuous process 𝕐: [s,T] ×Ω→ G and any Chi-subspace ν⊆ (H⊗̂_π G)^*, we have [𝕍, 𝕐]_ν = 0. This will hold in particular if
𝕐 is a continuous local martingale.
By Lemma 3.2 of <cit.> it is enough to show that
A(ε) := ∫_s^T sup_Φ∈ν,
‖Φ‖_ν≤ 1 | ⟨ J ( (𝕍(t+ε) - 𝕍(t)) ⊗ (𝕐(t+ε) - 𝕐(t)) ), Φ⟩ | dt → 0
in probability (the processes are extended on ]T,T+ε] by setting, for instance, 𝕍(t)=𝕍(T) and 𝕐(t)=𝕐(T) for any t∈ ]T,T+ε]). Now,
since ν is continuously embedded in (H⊗̂_π G)^*, there exists a constant C such that ‖·‖_(H⊗̂_π G)^*≤ C ‖·‖_ν, so that
A(ε) ≤ C ∫_s^T sup_Φ∈ν,
‖Φ‖_(H⊗̂_π G)^*≤ 1 | ⟨ J ( (𝕍(t+ε) - 𝕍(t)) ⊗ (𝕐(t+ε) - 𝕐(t)) ), Φ⟩ | dt
≤
C ∫_s^T ‖ J ( (𝕍(t+ε) - 𝕍(t)) ⊗ (𝕐(t+ε) - 𝕐(t)) ) ‖_(H⊗̂_π G)^** dt
= C ∫_s^T ‖ (𝕍(t+ε) - 𝕍(t)) ⊗ (𝕐(t+ε) - 𝕐(t)) ‖_(H⊗̂_π G) dt
= C ∫_s^T ‖𝕍(t+ε) - 𝕍(t)‖_H ‖𝕐(t+ε) - 𝕐(t)‖_G dt,
where the last step follows by Proposition 2.1, page 16, of <cit.>. Now, denoting by t↦ |||𝕍|||(t) the real total variation function of an H-valued bounded variation function defined on the interval [s,T], we get
‖𝕍(t+ε) - 𝕍(t)‖_H = ‖∫_t^t+ε d𝕍(r)‖_H ≤∫_t^t+ε d|||𝕍|||(r).
So, by using Fubini's theorem in (<ref>),
A(ε) ≤ C δ(𝕐; ε) ∫_s^T+ε d|||𝕍|||(r),
where δ(𝕐; ε) is the modulus of continuity of 𝕐. Finally, this converges to zero almost surely, and hence in probability.
Let H and G be two separable Hilbert spaces.
Let ν⊆ (H⊗̂_π G)^* be a Chi-subspace.
A continuous H-valued process 𝕏: [s,T] ×Ω→ H is called a ν-weak-Dirichlet process if it is adapted and there exists a decomposition 𝕏 = 𝕄 + 𝔸 where
(i) 𝕄 is an H-valued continuous local martingale,
(ii) 𝔸 is a ν-martingale-orthogonal process with 𝔸(s)=0.
The theorem below was the object of Theorem 3.19 of <cit.>:
it extended Corollary 3.11 in <cit.>.
Let ν_0 be a Banach subspace continuously embedded in H. Define ν:= ν_0⊗̂_πℝ and χ:=ν_0⊗̂_πν_0. Let F: [s,T] × H →ℝ be a C^0,1-function. Denote by ∂_x F the Fréchet derivative of F with respect to x and assume that the mapping (t,x) ↦∂_xF(t,x) is continuous from [s,T]× H to ν_0. Let 𝕏(t) = 𝕄(t) + 𝔸(t), t∈ [s,T], be a ν-weak-Dirichlet process with finite χ-quadratic variation. Then Y(t):= F(t, 𝕏(t)) is a real weak Dirichlet process
with local martingale part
R(t) = F(s, 𝕏(s)) + ∫_s^t ⟨∂_xF(r,𝕏(r)), d𝕄(r) ⟩, t∈ [s,T].
§ THE SETTING OF THE PROBLEM AND HJB EQUATION
In this section we introduce a class of infinite dimensional optimal control problems and we prove a
decomposition result for the strong solutions of the related Hamilton-Jacobi-Bellman equation. We refer the reader to <cit.> and <cit.> respectively for the classical notions of functional analysis and stochastic calculus in infinite dimension we use.
§.§ The optimal control problem
Assume from now on that H and U are real separable Hilbert spaces, Q ∈ℒ(U), U_0:=Q^1/2 (U). Assume that W_Q={W_Q(t):s≤ t≤ T} is a U-valued ℱ^s_t-Q-Wiener process (with W_Q(s)=0, ℙ a.s.) and denote by ℒ_2(U_0, H) the Hilbert space of Hilbert-Schmidt operators from U_0 to H.
We denote by A: D(A) ⊆ H → H the generator of the C_0-semigroup e^tA (for t≥ 0) on H. A^* denotes the adjoint of A. Recall that D(A) and D(A^*) are Banach spaces when endowed with the graph norm. Let Λ be a Polish space.
We formulate the following standard assumptions that will be needed to ensure the existence and the uniqueness of the solution of the state equation.
b: [0,T] × H ×Λ→ H is a continuous function and satisfies, for some C>0,
[ |b(s,x,a) - b(s,y,a)| ≤ C |x-y|,; |b(s,x,a)| ≤ C (1+|x|), ]
for all x,y ∈ H, s∈ [0,T], a∈Λ. σ: [0,T]× H →ℒ_2(U_0, H) is continuous and, for some C>0, satisfies,
[ ‖σ(s,x) - σ(s,y)‖_ℒ_2(U_0, H)≤ C |x-y|,; ‖σ(s,x)‖_ℒ_2(U_0, H)≤ C (1+|x|), ]
for all x,y ∈ H, s∈ [0,T].
Given an adapted process a = a(·): [s,T] ×Ω→Λ, we consider the state equation
{[ dX(t) = ( AX(t)+ b(t,X(t),a(t)) ) dt + σ(t,X(t)) dW_Q(t); X(s)=x. ].
The solution of (<ref>) is understood in the mild sense: an H-valued adapted process X(·) is a solution if
ℙ{∫_s^T (|X(r)| + | b(r,X(r), a(r))| + ‖σ(r,X(r))‖_ℒ_2(U_0, H)^2) dr <+∞} = 1
and
X(t) = e^(t-s)Ax + ∫_s^t e^(t-r)A b(r,X(r),a(r)) dr
+ ∫_s^t e^(t-r)Aσ(r,X(r)) dW_Q(r)
ℙ-a.s. for every t ∈ [s,T].
Thanks to Theorem 3.3 of <cit.>, given Hypothesis <ref>, there exists a unique (up to modifications) continuous (mild) solution X(·; s,x, a(·)) of (<ref>).
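Although no numerics are used in this paper, the mild formulation above lends itself to a simple exponential Euler discretization. The following is a minimal illustrative sketch only, for a spectrally truncated diagonal generator, a scalar Wiener process and hypothetical coefficients b, σ and control; every name and parameter below is an assumption chosen for illustration.

import numpy as np

def mild_solution_path(lam, b, sigma, a_ctrl, x0, s, T, n_steps, rng):
    """Exponential Euler scheme for dX = (A X + b(t,X,a)) dt + sigma(t,X) dW,
    with A = diag(-lam) a truncated diagonal generator (illustrative only)."""
    dt = (T - s) / n_steps
    E = np.exp(-lam * dt)              # action of the semigroup e^{dt A}
    X = np.array(x0, dtype=float)
    t = s
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = b(t, X, a_ctrl(t, X))
        X = E * (X + drift * dt + sigma(t, X) * dW)   # X_{k+1} = e^{dt A}(X_k + ...)
        t += dt
    return X

# toy usage with hypothetical coefficients
rng = np.random.default_rng(0)
lam = np.array([1.0, 4.0, 9.0])        # e.g. Dirichlet Laplacian eigenvalues
b = lambda t, x, a: a * np.array([1.0, 0.5, 0.1])
sigma = lambda t, x: 0.1 * x
a_ctrl = lambda t, x: -x[0]            # a hypothetical feedback
XT = mild_solution_path(lam, b, sigma, a_ctrl, np.ones(3), 0.0, 1.0, 1000, rng)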
Set ν̅_0 = D(A^*), ν = ν̅_0 ⊗̂_πℝ, χ̅= ν̅_0 ⊗̂_πν̅_0.
The process X(·; s,x, a(·)) is a ν-weak-Dirichlet process admitting a χ̅-quadratic variation, with decomposition 𝕄 + 𝔸 where 𝕄 is the local martingale defined by 𝕄(t) = x + ∫_s^t σ (r, X(r)) dW_Q(r) and 𝔸 is a ν-martingale-orthogonal process.
See Corollary 4.6 of <cit.>.
Let l: [0,T] × H ×Λ→ℝ (the running cost) be a measurable function and g: H→ℝ (the terminal cost) a continuous function.
We consider the class 𝒰_s of admissible controls constituted by
the adapted processes a:[s,T] ×Ω→Λ such that (r, ω) ↦ l(r, X(r; s,x, a(·)), a(r)) +
g(X(T; s,x, a(·))) is dr ⊗ dℙ-quasi-integrable. This means that either its positive or its negative part is integrable.
We consider the problem of minimizing, over all a(·) ∈𝒰_s, the cost functional
J(s,x;a(·))=𝔼[ ∫_s^T l(r, X(r;s,x,a(·)), a(r)) dr + g (X(T;s,x,a(·))) ].
The value function of this problem is defined, as usual, as
V(s,x) = inf_a(·)∈𝒰_s J(s,x;a(·)).
As usual, we say that the control a^*(·)∈𝒰_s is optimal at (s,x) if a^*(·) minimizes (<ref>) among the controls in 𝒰_s, i.e. if J(s,x;a^*(·)) = V(s,x). In this case we denote by X^*(·) the process X(·; s,x, a^*(·)), which is then the corresponding optimal trajectory of the system.
§.§ The HJB equation
The HJB equation associated with the minimization problem above is
{[ ∂_s v + ⟨ A^* ∂_x v, x ⟩ + 1/2 Tr [ σ(s,x) σ^*(s,x) ∂_xx^2 v ]; + inf_a∈Λ{⟨∂_x v, b(s,x,a) ⟩ + l(s,x,a) }=0,; [8pt]
v(T,x)=g(x). ].
In the above equation ∂_xv (respectively ∂^2_xx v) is the first
(respectively second) Fréchet
derivatives of v with respect to the x variable.
Given (s,x) ∈ [0,T] × H,
∂_xv(s,x) is identified (via the Riesz Representation Theorem, see <cit.>, Theorem III.3) with an element of H.
∂^2_xx v(s,x) which is a priori an element of
(H ⊗̂_π H)^* is naturally associated
with a symmetric bounded operator on H,
see <cit.>, statement 3.5.7, page 192.
In particular, if h_1,h_2 ∈ H then
⟨∂^2_xx v(s,x), h_1 ⊗ h_2 ⟩≡∂^2_xx v(s,x)(h_1)(h_2).
∂_s v is the derivative with respect to the time variable.
The function
F_CV(s,x,p;a):= ⟨ p, b(s,x,a) ⟩ + l(s,x,a), (s,x,p,a)∈ [0,T]× H × H×Λ,
is called the current value Hamiltonian of the system and its infimum over a∈Λ
F(s,x,p):= inf_a∈Λ{⟨ p, b(s,x,a) ⟩ + l(s,x,a) }
is called the Hamiltonian.
We remark that F:[0,T] × H × H → [-∞, +∞[.
Using this notation the HJB equation (<ref>)
can be rewritten as
{[ ∂_s v+ ⟨ A^* ∂_x v, x ⟩ + 1/2 Tr [ σ(s,x) σ^*(s,x) ∂^2_xv ] + F(s,x,∂_xv)=0,; v(T,x)=g(x). ].
We introduce the operator ℒ_0 on C([0,T]× H) defined as
{[ D(ℒ_0):= {φ∈ C^1,2([0,T]× H) : ∂_xφ∈ C([0,T]× H ; D(A^*)) }; ℒ_0 (φ)(s,x) := ∂_s φ(s,x)+ ⟨ A^* ∂_x φ (s,x), x ⟩ + 1/2 Tr [ σ(s,x) σ^*(s,x) ∂_xx^2 φ(s,x) ], ] .
so that the HJB equation (<ref>) can be formally rewritten as
{[ ℒ_0 (v) (s,x) = - F(s,x, ∂_xv(s,x)); v(T,x) = g(x). ] .
Recalling that we suppose the validity of Hypothesis <ref>
we consider the two following definitions of solution of the HJB equation.
We say that v ∈ C([0,T]× H) is a classical solution of (<ref>) if
(i) v∈ D(ℒ_0)
(ii) The function
{[ [ 0,T] × H →ℝ; (s,x) ↦ F(s,x,∂_xv(s,x)) ] .
is well-defined and finite for all ( s,x) ∈[ 0,T] × H and it is continuous in the two variables
(iii) (<ref>) is satisfied at any ( s,x) ∈[ 0,T] × H.
Given g∈ C(H) we say that v ∈ C^0,1([0,T[ × H) ∩ C^0([0,T] × H) with ∂_x v ∈ UC([0,T[× H; D(A^*))
[The space of uniformly
continuous functions on each ball of [0,T[ × H with values
in D(A^*).]
is a strong solution of (<ref>) if the following properties hold.
(I) The function (s,x) ↦ F(s,x,∂_xv(s,x)) is finite
for all (s,x) ∈[ 0,T [ × H, it is continuous in the two variables
and admits continuous extension on [ 0,T ] × H.
(II) There exist three sequences {v_n }⊆ D(ℒ_0), {h_n}⊆ C([0,T]× H) and { g_n }⊆ C(H) fulfilling the following.
(i) For any n∈ℕ, v_n is a classical solution of the problem
{[ ℒ_0 (v_n) (s,x) = h_n(s,x); v_n(T,x) = g_n(x). ] .
(ii) The following convergences hold:
{[ v_n→ v in C([0,T]× H); h_n→ - F(·,·, ∂_xv(·,·)) in C([0,T]× H); g_n→ g in C(H), ] .
where the convergences in C([0,T]× H) and C(H) are meant in the sense of uniform convergence on compact sets.
The notion of classical solution as defined in Definition <ref> is well established in the literature on second-order infinite dimensional Hamilton-Jacobi equations, see for instance Section 6.2 of <cit.>, page 103. Conversely, the denomination strong solution is used for a certain number of definitions where the solution of the Hamilton-Jacobi equation is characterized by the existence of an approximating sequence (having certain properties and) converging to the candidate solution. The chosen functional spaces and the prescribed convergences depend on the classes of equations, see for instance <cit.>. In this sense the solution defined in Definition <ref> is a form of strong solution of (<ref>) but, differently from all the other papers we know of[Except <cit.>, but there the HJB equation and the optimal controls are finite dimensional.], we do not require any form of convergence of the derivatives of the approximating functions to the derivative of the candidate solution. Moreover, all the results we are aware of use sequences of bounded approximating functions (i.e. the v_n in the definition are bounded) and this is not required in our definition. All in all, the set of approximating sequences that we can manage is bigger than those used in the previous literature, and so the definition of strong solution is weaker.
§.§ Decomposition for solutions of the HJB equation
Suppose Hypothesis <ref> is satisfied.
Suppose that v ∈ C^0,1([0,T[ × H) ∩ C^0([0,T] × H) with ∂_x v ∈ UC([0,T[× H; D(A^*))
is a strong solution of (<ref>). Let X(·):=X(·;s,x,a(·)) be the solution of (<ref>) starting at time s at some x∈ H and driven by some control a(·)∈𝒰_s.
Assume that b is of the form
b(t,x,a) = b_g(t,x,a) + b_i(t,x,a),
where b_g and b_i satisfy the following conditions.
(i) σ(t,X(t))^-1 b_g(t,X(t),a(t)) is bounded (σ(t,X(t))^-1 being the pseudo-inverse of σ(t,X(t)));
(ii) b_i satisfies
lim_n→∞∫_s^·⟨∂_x v_n (r,X(r)) - ∂_x v (r,X(r)), b_i(r,X(r), a(r)) ⟩ dr =0 ucp on [s,T_0],
for each s < T_0 <T.
Then
v(t, X(t)) - v(s, X(s)) = v(t, X(t)) - v(s, x) = - ∫_s^t F(r,X(r), ∂_xv(r,X(r))) dr
+ ∫_s^t ⟨∂_x v(r, X(r)), b(r,X(r), a(r)) ⟩ dr
+ ∫_s^t ⟨∂_x v(r, X(r)), σ (r,X(r)) dW_Q(r) ⟩, t ∈ [s,T[.
We fix T_0 in ]s,T[.
We denote by v_n the sequence of
smooth solutions of the approximating problems prescribed
by Definition <ref>, which converges to v.
Thanks to the Itô formula for convolution type processes (see e.g. Corollary 4.10 in <cit.>), every v_n verifies
v_n(t,X(t)) = v_n(s,x) + ∫_s^t ∂_r v_n(r,X(r)) dr
+ ∫_s^t ⟨ A^* ∂_x v_n(r,X(r)), X(r) ⟩ dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), b(r, X(r), a(r))
⟩ dr
+ 1/2∫_s^t Tr [ ( σ (r, X(r)) Q^1/2 )
( σ (r, X(r)) Q^1/2)^* ∂_xx^2
v_n(r,X(r)) ] dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), σ(r, X(r)) dW_Q(r)
⟩, t ∈ [s,T], ℙ-a.s.
Using Girsanov's Theorem (see <cit.> Theorem 10.14) we can observe that
β_Q(t) := W_Q(t) + ∫_s^t σ(r,X(r))^-1 b_g(r,X(r), a(r)) dr,
is a Q-Wiener process with respect to
a probability ℚ equivalent to ℙ on the whole interval [s,T].
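Concretely, since b_g takes values in the image of σ (as implicit in hypothesis (i)), the substitution underlying the next formula is
\[
\sigma(r,X(r))\,dW_Q(r)=\sigma(r,X(r))\,d\beta_Q(r)-b_g(r,X(r),a(r))\,dr,
\]
so that the b_g part of the drift is absorbed into the stochastic integral with respect to β_Q.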
We can rewrite (<ref>) as
v_n(t,X(t)) = v_n(s,x) + ∫_s^t ∂_r v_n(r,X(r)) dr
+ ∫_s^t ⟨ A^* ∂_x v_n(r,X(r)), X(r) ⟩ dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), b_i(r, X(r), a(r)) ⟩ dr,
+ 1/2∫_s^t Tr [ ( σ (r, X(r)) Q^1/2 )
( σ (r, X(r)) Q^1/2)^* ∂_xx^2 v_n(r,X(r)) ] dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), σ(r, X(r)) dβ_Q(r) ⟩, ℙ-a.s.
Since v_n is a classical solution of (<ref>), the expression
above gives
v_n(t,X(t)) = v_n(s,x) + ∫_s^t h_n(r,X(r)) dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), b_i(r, X(r), a(r)) ⟩ dr
+ ∫_s^t ⟨∂_x v_n(r,X(r)), σ(r, X(r)) dβ_Q(r)⟩.
Since we wish to take the limit for n→∞, we define
M_n(t) := v_n(t,X(t)) - v_n(s,x) - ∫_s^t h_n(r,X(r)) dr
- ∫_s^t ⟨∂_x v_n(r,X(r)), b_i(r, X(r), a(r)) ⟩ dr.
{ M_n }_n∈ℕ is a sequence of real ℚ-local
martingales converging ucp, thanks to the definition of strong solution
and Hypothesis (<ref>), to
M(t) := v(t,X(t)) - v(s,x) + ∫_s^t F(r,X(r), ∂_xv(r,X(r))) dr
- ∫_s^t ⟨∂_x v(r,X(r)), b_i(r, X(r), a(r)) ⟩ dr, t ∈ [s,T_0].
Since the space of real continuous local martingales equipped with
the ucp topology is closed (see e.g. Proposition 4.4 of <cit.>)
M is a continuous ℚ-local martingale indexed by
t ∈ [s,T_0].
We have now gathered all the ingredients to conclude the proof.
We set ν̅_0 = D(A^*),
ν = ν̅_0 ⊗̂_πℝ, χ̅= ν̅_0 ⊗̂_πν̅_0. Proposition <ref> ensures that X(·) is
a ν-weak Dirichlet process admitting a χ̅-quadratic variation, with decomposition 𝕄 + 𝔸 where 𝕄 is the local martingale (with respect to ℙ) defined by 𝕄(t) = x + ∫_s^t σ (r, X(r)) dW_Q(r) and 𝔸 is a ν-martingale-orthogonal process.
Now
X(t) = 𝕄^ℚ(t)
+ 𝕍(t) + 𝔸(t),
t ∈ [s,T_0],
where 𝕄^ℚ(t) = x + ∫_s^t σ (r, X(r)) dβ_Q(r)
and 𝕍(t) = - ∫_s^t b_g(r, X(r),a(r)) dr, t ∈ [s,T_0],
is a bounded variation process. Thanks to <cit.>, Theorem 2.14, pages 14-15, 𝕄^ℚ is a ℚ-local martingale. Moreover
𝕍 is a bounded variation process and then, thanks to Lemma
<ref>, it is a ℚ-ν-martingale-orthogonal process.
So 𝕍 + 𝔸 is again (one can easily verify that the sum of two ν-martingale-orthogonal processes is again a ν-martingale-orthogonal process) a ℚ-ν-martingale-orthogonal process, and X is a ν-weak Dirichlet process with local martingale part 𝕄^ℚ, with respect to ℚ.
Still under ℚ, since v ∈ C^0,1([0,T_0] × H),
Theorem <ref>
ensures that the process v(·, X(·)) is a real
weak Dirichlet process on [s,T_0],
whose local martingale part equals
N(t) = ∫_s^t ⟨∂_x v(r,X(r)), σ(r, X(r)) dβ_Q(r)⟩, t ∈ [s,T_0].
On the other hand, with respect to ℚ, (<ref>) implies that
v(t,X(t)) = [ v(s,x) - ∫_s^t F(r,X(r), ∂_xv(r,X(r))) dr
+ ∫_s^t ⟨∂_x v(r,X(r)), b_i(r, X(r), a(r)) ⟩ dr ] + M(t), t ∈ [s,T_0],
is a decomposition of v(·,X(·)) as a ℚ-
semimartingale, which is also, in particular, a ℚ-weak Dirichlet process.
By Theorem <ref> such a decomposition is unique
on [s,T_0] and so
M(t) = N(t), t ∈ [s,T_0]; since T_0 < T is arbitrary, M(t) = N(t), t ∈ [s,T[.
Consequently
M(t) = ∫_s^t ⟨∂_x v(r,X(r)), σ(r, X(r)) dβ_Q(r)⟩
= ∫_s^t ⟨∂_x v(r,X(r)), b_g(r, X(r),a(r)) ⟩ dr
+ ∫_s^t ⟨∂_x v(r,X(r)), σ(r, X(r))
dW_Q(r)⟩, t ∈ [s,T[,
which, recalling the definition of M, gives (<ref>).
The decomposition (<ref>), with Hypotheses (i) and (ii) of Theorem <ref>, is satisfied if v is a strong solution of the HJB equation in the sense of Definition <ref> and, moreover, the sequence of corresponding functions
∂_x v_n converges to ∂_xv in C([0,T]× H).
In that case we simply set b_g = 0 and b = b_i.
This is the typical assumption required in the standard strong solutions literature.
This is the typical assumption required in the standard strong solutions literature.
Again the decomposition (<ref>) with validity of
Hypotheses (i) and (ii) in Theorem <ref>
is fulfilled
if the following assumption is satisfied.
σ(t,X(t))^-1 b(t,X(t),a(t)) is bounded,
for every choice of admissible control a(·).
In this case we apply Theorem <ref>
with b_i = 0 and b=b_g.
§ VERIFICATION THEOREM
In this section, as anticipated in the introduction, we use the decomposition result of Theorem <ref> to prove a verification theorem.
Assume that Hypotheses <ref> and <ref>
are satisfied and that the value function is finite for any (s,x)∈ [0,T]× H.
Let v ∈ C^0,1([0,T[ × H) ∩ C^0([0,T] × H)
with ∂_x v ∈ UC([0,T[× H; D(A^*)) be a
strong solution of (<ref>) and suppose that there exist two constants M>0 and m∈ℕ such that
| ∂_xv(t,x)| ≤ M(1+|x|^m) for all (t,x)∈ [0,T[× H.
Assume that for all initial data
(s,x)∈ [0,T]× H and every control a(·)∈𝒰_s, b
can be written as b(t,x,a) = b_g(t,x,a) + b_i(t,x,a) with b_i and b_g
satisfying hypotheses (i) and (ii) of Theorem <ref>.
Then
we have the following.
(i) v≤ V on [0,T]× H.
(ii) Suppose that, for some s∈ [0,T[,
there exists a
predictable process a(·) = a^*(·) ∈𝒰_s
such that, denoting X( ·;s,x,a^*(·)) simply
by X^*(·),
we have
F( t, X^*( t) ,∂_xv( t,X^*( t)
) ) =F_CV( t,X^*( t) ,∂_xv(
t,X^*( t) ) ;a^*( t) ),
dt ⊗ dℙ-a.e.
Then a^*(·) is optimal at ( s,x); moreover
v( s,x) =V(s,x).
We choose a control a(·) ∈𝒰_s and denote by X the related trajectory. We make use of (<ref>) in Theorem
<ref>.
Then we need to extend (<ref>) to the case
when t ∈ [s,T].
This is possible since v is continuous,
and (s,x) ↦ F(s,x,∂_xv(s,x)) is well-defined and uniformly continuous on compact sets.
At this point, setting t = T
we can write
g(X(T)) = v(T, X(T)) = v(s, x) - ∫_s^T F (r, X(r), ∂_xv(r,X(r))) dr
+ ∫_s^T ⟨∂_x v(r, X(r)), b(r,X(r), a(r)) ⟩ dr
+ ∫_s^T ⟨∂_x v(r, X(r)), σ (r,X(r)) dW_Q(r) ⟩.
Since both sides of (<ref>) are a.s. finite, we can add
∫_s^T l(r, X(r), a(r)) dr to them, obtaining
g(X(T)) + ∫_s^T l(r, X(r), a(r)) dr
= v(s, x) + ∫_s^T ⟨∂_x v(r, X(r)), σ (r,X(r)) dW_Q(r) ⟩
+ ∫_s^T (- F (r, X(r), ∂_xv(r,X(r))) +
F_CV (r, X(r), ∂_xv(r,X(r));a(r))) dr.
Observe now that, by definition of F and F_CV we know that
- F (r, X(r), ∂_xv(r,X(r))) + F_CV (r, X(r), ∂_xv(r,X(r))
;a(r))
is always nonnegative. So its expectation always exists, even if it
could be +∞; it cannot be -∞ on an event of positive probability.
This shows a posteriori that
∫_s^T l(r, X(r), a(r)) dr cannot be -∞
on a set of positive probability.
By Proposition 7.4 in <cit.>, all the moments of
sup_r ∈ [s,T]|X(r) | are
finite.
On the other hand, σ is Lipschitz-continuous,
v(s,x) is deterministic and,
since ∂_x v has polynomial growth,
𝔼∫_s^T ⟨∂_x v(r, X(r)), ( σ (r,X(r)) Q^1/2 ) ( σ (r,X(r)) Q^1/2 )^* ∂_x v(r, X(r)) ⟩ dr
is finite. Consequently (see <cit.> Sections 4.3, in particular Theorem 4.27 and 4.7),
∫_s^·⟨∂_x v(r, X(r)), σ (r,X(r)) dW_Q(r) ⟩,
is a true martingale vanishing at s. Consequently,
its expectation is zero.
So the expectation of the right-hand side of (<ref>)
exists even if it could be +∞; consequently the same holds for the left-hand side.
By definition of J, we have
J(s,x,a(·)) = 𝔼 [ g(X(T)) + ∫_s^T l(r, X(r), a(r)) dr ] = v(s, x)
+ 𝔼∫_s^T ( - F (r, X(r), ∂_xv(r,X(r))) + F_CV(r, X(r), ∂_xv(r,X(r)); a(r)) ) dr.
So minimizing J(s,x,a(·)) over a(·) is equivalent to minimizing
𝔼∫_s^T ( - F (r, X(r), ∂_xv(r,X(r))) + F_CV(r, X(r), ∂_xv(r,X(r)); a(r)) ) dr,
which is a non-negative quantity.
As mentioned above, the integrand of such an expression is always nonnegative, so a lower bound for (<ref>) is 0. If the conditions of point (ii) are satisfied, such a bound is attained by the control a^*(·), which is thereby proved to be optimal.
Concerning the proof of (i), since the integrand in (<ref>) is nonnegative, (<ref>) gives
J(s,x,a(·)) ≥ v(s, x).
Taking the inf over a(·) we get V(s,x) ≥ v(s,x), which concludes the proof.
* The first part of the proof does not make use that
a(·) belongs to 𝒰_s, but only that
∫_s^T l(r,X(r;s,x,a(·)), a(r)) dr
is a.s. strictly bigger than -∞. Under that assumption alone,
a(·) is forced to be admissible, i.e. to belong to
𝒰_s.
* Let v be a strong solution of the HJB equation.
Observe that the condition (<ref>) can be rewritten as
a^*(t) ∈arg min_a∈Λ [ F_CV( t,X^*( t),
∂_xv(
t,X^*( t) ) ;a ) ]
.
Suppose there exists a Borel function ϕ:[0,T] × H →Λ
such that,
for any (t,y) ∈ [0,T] × H,
ϕ(t,y) ∈arg min_a ∈Λ ( F_CV(t,y,
∂_xv(t,y);a) ).
Suppose that the equation
{[ dX(t) = ( AX(t)+ b(t,X(t),ϕ(t,X(t))) ) dt + σ(t,X(t)) dW_Q(t); X(s)=x, ].
admits a unique mild solution X^*.
We set a^*(t) = ϕ(t, X^*(t)), t ∈ [0,T].
Suppose moreover that
∫_s^T l(r,X^*(r), a^*(r)) dr > - ∞ a.s.
Now (<ref>) and Remark <ref> 1.
imply that a^*(·) is admissible.
Then X^* is the optimal trajectory of the state variable related to the optimal control a^*(·).
The function ϕ is
called the optimal feedback of the system, since it gives
an optimal control as a function of the state.
Observe that, using exactly the same arguments we used in this section one could treat the (slightly) more general case in which b has the form
b(t,x,a)= b_0(t,x) + b_g(t,x,a) + b_i(t,x,a),
where b_g and b_i satisfy condition of Theorem <ref>
and b_0: [0,T] × H → H is continuous. In this case the addendum b_0 can be included in the expression of ℒ_0 that becomes
{[ D(ℒ_0^b_0):= {φ∈ C^1,2([0,T]× H) : ∂_xφ∈ C([0,T]× H ; D(A^*)) }; ℒ_0^b_0 (φ) := ∂_s φ+ ⟨ A^* ∂_x φ, x ⟩ + ⟨∂_x φ, b_0(t,x) ⟩ + 1/2 Tr [ σ(s,x) σ^*(s,x) ∂_xx^2 φ ]. ] .
Consequently, in the definition of regular solution the operator ℒ_0^b_0 appears instead of ℒ_0.
§ AN EXAMPLE
We describe in this section an example where the techniques developed in the previous part of the paper can be applied. It is rather simple but some “missing” regularities and continuities show up so that it cannot be treated by using the standard techniques (for more details see Remark <ref>).
Denote by Θ: ℝ→ℝ the Heaviside function
Θ(y) = 1 if y ≥ 0, Θ(y) = 0 if y < 0.
Fix T>0. Let ρ,β be two real numbers, ψ∈ D(A^*) ⊆ H an eigenvector[Examples where the optimal control distributes as an eigenvector of A^* arise in applied examples, see for instance <cit.> for some economic deterministic examples. In the mentioned cases the operator A is elliptic and self-adjoint.] for the operator A^* corresponding to an eigenvalue λ∈ℝ, ϕ an element of H and W a standard real (one-dimensional) Wiener process. We consider the case where Λ = ℝ (i.e. we consider real-valued controls). Let us take into account a state equation of the following specific form:
{[ dX(t) = ( AX(t)+ a(t)ϕ ) dt + β X(t) dW(t); X(s)=x. ] .
The operator ℒ_0 specifies then as follows:
{[ D(ℒ_0):= {φ∈ C^1,2([0,T]× H) : ∂_xφ∈ C([0,T]× H ; D(A^*)) }; ℒ_0 (φ)(s,x) := ∂_s φ(s,x)+ ⟨ A^* ∂_x φ (s,x), x ⟩ + 1/2β^2 ⟨ x, ∂_xx^2 φ(s,x)(x)⟩. ] .
Denote by α the real constant α := (-ρ + 2λ +
β^2)/⟨ϕ, ψ⟩^2. We take into account the
functional
J(s,x;a(·))=𝔼[ ∫_s^T e^-ρ rΘ ( ⟨X(r;s,x,a(·)), ψ⟩ ) a^2(r) dr
+ e^-ρ TαΘ ( ⟨X(T;s,x,a(·)), ψ⟩ ) ⟨X(T;s,x,a(·)), ψ⟩^2 ].
The Hamiltonian associated to the problem is given by
F(s,x,p):= inf_a∈ℝ F_CV(s,x,p;a),
where
F_CV(s,x,p;a) =
⟨ p, a ϕ⟩ +
e^-ρ sΘ (⟨ x, ψ⟩) a^2.
Standard calculations give
F(s,x,p) = {[ - ∞ : p ≠ 0, ⟨ x, ψ⟩ < 0; - e^ρ s⟨ p, ϕ⟩^2/4 : otherwise. ].
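Indeed, when ⟨x,ψ⟩ ≥ 0 the infimum is a one-dimensional quadratic minimization:
\[
F_{CV}(s,x,p;a)=a\langle p,\phi\rangle+e^{-\rho s}a^{2},\qquad
a^{*}=-\tfrac12 e^{\rho s}\langle p,\phi\rangle,\qquad
F(s,x,p)=-\tfrac14 e^{\rho s}\langle p,\phi\rangle^{2},
\]
while for ⟨x,ψ⟩ < 0 the map a ↦ a⟨p,ϕ⟩ is linear, hence unbounded from below unless ⟨p,ϕ⟩ = 0.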
The HJB equation is
{[ ℒ_0(v)(s,x) = - F(s,x,∂_x v (s,x)),; [8pt]
v(T,x)=g(x) := e^-ρ TαΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2. ].
The function
{[ v [0,T] × H →ℝ; v (s,x) ↦α e^-ρ s{[ 0 if ⟨ x, ψ⟩≤ 0; ⟨ x, ψ⟩^2 if ⟨ x, ψ⟩ >0 ] . ] .
(that we could write in a more compact form as v (s,x) = α e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2) is a strong solution of (<ref>).
We verify all the requirements of Definition <ref>. Given the form of g in (<ref>) one can easily see that g∈ C(H). The first derivatives of v are given by
∂_s v(s,x) = -ρα e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2
and
∂_x v(s,x) = 2α e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩ψ,
so the regularities of v demanded in the first two lines of Definition <ref> are easily verified. Injecting (<ref>) into
(<ref>) yields
F(s,x,∂_x v(s,x)) =
- α^2 Θ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2 ⟨ϕ, ψ⟩^2 e^-ρ s,
so the function (s,x) ↦ F(s,x,∂_xv(s,x))
from [ 0,T] × H to ℝ is finite and continuous.
We define, for any n∈ℕ,
α_n := (-ρ + (2+1/n) λ + (1/2)β^2 (2+1/n)(1+1/n)) / ((1/4) (2+1/n)^2 ⟨ϕ, ψ⟩^2). We consider the approximating sequence
v_n(s,x) := α_n e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2+1/n.
The first derivative of v_n w.r.t. s and and first and second derivative of v_n w.r.t. x are given, respectively, by
∂_s v_n(s,x) = -ρα_n e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2+1/n,
∂_x v_n(s,x) = (2+1/n) α_n e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^1+1/nψ
and
∂_xx^2 v_n(s,x) = (2+1/n) (1+1/n) α_n e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^1/nψ⊗ψ.
so it is straightforward to see that, for any
n ∈ℕ, v_n∈ D(ℒ_0). Moreover, if we define
g_n(x):= e^-ρ Tα_n Θ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩^2+1/n
and
h_n(s,x) := (1/4) α_n^2 e^-ρ s (2+1/n)^2 Θ ( ⟨ x, ψ⟩ ) ⟨ϕ, ψ⟩^2 ⟨ x, ψ⟩^2 + 1/n,
(by an easy direct computation) we can see that v_n is a classical solution of the problem
{[ ℒ_0(v)(s,x) = h_n(s,x),; [8pt]
v(T,x)=g_n(x). ].
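The direct computation consists in plugging the derivatives above into ℒ_0: on the set {⟨x,ψ⟩ > 0},
\[
\mathcal{L}_0(v_n)(s,x)=\alpha_n e^{-\rho s}\langle x,\psi\rangle^{2+\frac1n}
\Big[-\rho+\big(2+\tfrac1n\big)\lambda+\tfrac12\beta^{2}\big(2+\tfrac1n\big)\big(1+\tfrac1n\big)\Big],
\]
and by the definition of α_n the quantity in brackets equals (1/4)α_n(2+1/n)^2⟨ϕ,ψ⟩^2, so that ℒ_0(v_n) = h_n; on {⟨x,ψ⟩ < 0} both sides vanish.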
The convergences asked in point (ii) of part (II) of Definition <ref> are straightforward.
An optimal control of the problem (<ref>)-(<ref>) can be written in feedback form as
a^*(t) = -αΘ ( ⟨X^*(t), ψ⟩ ) ⟨X^*(t),ψ⟩⟨ϕ, ψ⟩.
The corresponding optimal trajectory is given by the unique solution of the mild equation
X^*(t) = e^(t-s)A x - ∫_s^t e^(t-r)AϕαΘ ( ⟨X^*(r), ψ⟩ ) ⟨X^*(r),ψ⟩⟨ϕ, ψ⟩ dr
+ β∫_s^t e^(t-r)AX^*(r) dW(r).
Observe that the hypotheses of Theorem <ref> are verified: the regularity and the growth of v are a simple consequence of its definition (<ref>), and taking b_i (t,x,a) = b(t,x,a) = aϕ the condition (<ref>) is easily verified.
The optimality of (<ref>) is now just a consequence of point 2. of Remark <ref> once we observe that
arg min_a∈ℝ F_CV(s,x,∂_xv(s,x);a)
= arg min_a∈ℝ{ a 2α e^-ρ sΘ ( ⟨ x, ψ⟩ ) ⟨ x, ψ⟩⟨ψ , ϕ⟩ + e^-ρ sΘ ( ⟨ x, ψ⟩ ) a^2 }
= {[ {-α⟨ x,ψ⟩⟨ϕ, ψ⟩} if ⟨ x,ψ⟩≥ 0; ℝ if ⟨ x,ψ⟩ < 0, ] .
so (with a slight abuse of notation, ϕ denoting here the feedback map of Remark <ref>) we can set
ϕ(s,x) = -αΘ ( ⟨ x, ψ⟩ ) ⟨ x,ψ⟩⟨ϕ, ψ⟩.
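For illustration only, the explicit formulas of this example are easy to evaluate in a finite-dimensional (Galerkin) truncation in which ψ is an eigenvector of A^*; all numerical values and helper names below are hypothetical choices, not part of the paper.

import numpy as np

rho, beta, lam = 0.5, 0.3, -1.0
psi = np.array([1.0, 0.0, 0.0])          # eigenvector of A^* (truncated picture)
phi = np.array([0.6, 0.8, 0.0])
alpha = (-rho + 2.0 * lam + beta**2) / np.dot(phi, psi)**2

def v(s, x):
    """Candidate strong solution v(s,x) = alpha e^{-rho s} Theta(<x,psi>) <x,psi>^2."""
    q = np.dot(x, psi)
    return alpha * np.exp(-rho * s) * (q**2 if q >= 0 else 0.0)

def feedback(s, x):
    """Optimal feedback a = -alpha Theta(<x,psi>) <x,psi> <phi,psi>."""
    q = np.dot(x, psi)
    return -alpha * q * np.dot(phi, psi) if q >= 0 else 0.0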
Observe that the elements v_n of the approximating sequence are indeed the value functions of the optimal control problems having the same state equation (<ref>) with running cost function
l_n(r,x,a) = e^-ρ rΘ(⟨ x, ψ⟩)
⟨ x, ψ⟩^1/n a^2
and terminal cost function g_n (defined in (<ref>)). The corresponding Hamiltonian is given by (-h_n) where h_n is defined in (<ref>).
Even if it is a rather simple example, it is itself of some interest because, as far as we know, no explicit (i.e. with explicit expressions of the value function and of the approximating sequence) example of a strong solution for a second order HJB equation in infinite dimension has been published so far.
In the example some non-regularities arise.
(i) The running cost function is
l(r,x,a) = e^-ρ rΘ(⟨ x, ψ⟩) a^2,
so
for any choice of a≠ 0, it is discontinuous at any x∈ H such that ⟨ x , ψ⟩ =0.
(ii) By (<ref>) the Hamiltonian (s,x,p) ↦ F(s,x,p)
is not continuous,
and not even finite: indeed, for any non-zero p∈ H and for any x∈ H with ⟨ x , ψ⟩ < 0,
its value is -∞.
Conversely, F(s,x,∂_xv(s,x)) found in (<ref>) is always finite: observe that for any x∈ H with ⟨ x , ψ⟩≤ 0, ∂_xv(s,x)=0.
(iii) The second derivative of v with respect to x is well-defined on the points (t,x)∈ [0,T]× H such that ⟨ x , ψ⟩ <0 (where its value is 0) and it is well-defined on the points (t,x)∈ [0,T]× H such that ⟨ x , ψ⟩>0 (where its value is 2 α e^-ρ tψ⊗ψ) so it is discontinuous at all the points (t,x)∈ [0,T]× H such that ⟨ x , ψ⟩ =0.
Thanks to points (i) and (ii), one cannot deal with the example by using existing results. Indeed, among the various techniques, only solutions defined through a perturbation approach in spaces of square-integrable functions with respect to some invariant measure (see e.g. <cit.> and Chapter 5 of <cit.>) can deal with non-continuous running costs, but they can (at least for the moment) only deal with problems with additive noise satisfying the structural condition, which is not the case here. Moreover, none of the verification results we are aware of can deal at the moment with Hamiltonians that are discontinuous in the variable p.
ACKNOWLEDGEMENTS: The authors are grateful to the Editor and to the Referees
for having carefully read the paper and stimulated us to improve the first submitted version.
This article was partially written during the stay of the
second named author
at Bielefeld University, SFB 1283 (Mathematik).
|
http://arxiv.org/abs/1701.07440v1 | 20170125190009 | Infalling Young Clusters in the Galactic Centre: implications for IMBHs and young stellar populations | [
"J. A. Petts",
"A. Gualandris"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Infalling Young Clusters in the Galactic Centre: implications for IMBHs and young stellar populations
J. A. Petts, A. Gualandris
=======================================================================================
The central parsec of the Milky Way hosts two puzzlingly young stellar populations, a tight isotropic distribution of B stars around SgrA* (the S-stars) and a disk of OB stars extending to ∼ 0.5pc. Using a modified version of Sverre Aarseth's direct summation code NBODY6 we explore the scenario in which a young star cluster migrates to the Galactic Centre within the lifetime of the OB disk population via dynamical friction. We find that star clusters massive and dense enough to reach the central parsec form a very massive star via physical collisions on a mass segregation timescale. We follow the evolution of the merger product using the most up to date, yet conservative, mass loss recipes for very massive stars. Over a large range of initial conditions, we find that the very massive star expels most of its mass via a strong stellar wind, eventually collapsing to form a black hole of mass ∼ 20 - 400 M⊙, incapable of bringing massive stars to the Galactic Centre. No massive intermediate mass black hole can form in this scenario. The presence of a star cluster in the central ∼ 10pc within the last 15 Myr would also leave a ∼ 2 pc ring of massive stars, which is not currently observed. Thus, we conclude that the star cluster migration model is highly unlikely to be the origin of either young population, and in-situ formation models or binary disruptions are favoured.
stars: winds – stars: evolution – stars: black holes – stars: massive
§ INTRODUCTION
The central parsec of the Milky Way hosts almost two dozen He-1 emission-line stars <cit.> and a population of many other OB stars in a thin clockwise disk extending from ∼0.04-1.0pc <cit.>. <cit.> recently extended the range of observations up to ∼ 4pc^2 centred on SgrA*, the radio source associated with the supermassive black hole (SMBH) at the centre of the Milky Way. The authors show that the OB population is very centrally concentrated, with 90% projected within the central 0.5pc. The clockwise disk exhibits a top heavy mass function (α∼ 1.7, <cit.>). <cit.> estimate the He-1 stars to be only ∼3-7 Myr old, which is puzzling as the tremendous tidal forces in this region make it difficult for a giant molecular cloud (GMC) to remain bound long enough for gas to cool and fragment <cit.>.
There appears to be very few He-1 stars farther than the central parsec, other than inside/near the young Arches <cit.> and Quintuplet <cit.> clusters at ∼ 30 pc. This led <cit.> to postulate that efficient dynamical friction on star clusters forming a few parsecs from SgrA*, where GMCs can more easily cool and fragment, could bring a dense core of massive stars to the central parsec within the age of the He-1 population.
Another model suggests that in-situ formation of the clockwise disk is possible if a tidally disrupted GMC spirals in to form a small gaseous disk, which can be dense enough to become Jeans unstable and fragment into stars <cit.>. The infalling cloud needs to be ∼ 10^5 M⊙ in order to reproduce observations <cit.>. Two large gas clouds of mass ∼5×10^5M⊙, M-0.02-0.07 and M-0.13-0.08, are seen projected at ∼ 7 and ∼ 13 pc from the Galactic Centre, respectively <cit.>. The top heavy mass function can
be reproduced by the in-situ model so long as the gas has a temperature greater than 100K, consistent with observations of the Galactic clouds. The rotation axis of the clockwise disk shows a strong transition from the inner to outer edge <cit.>, suggesting that the disk is either strongly warped, or is comprised of a series of stellar streamers with significant variation in their orbital planes <cit.>. In-situ formation is currently favoured for the clockwise disk, as an infalling cluster would likely form a disk with a constant rotation axis <cit.>. A caveat of in-situ formation is that it requires near radial orbits incident upon SgrA*, perhaps requiring cloud-cloud collisions <cit.>.
Interior to the disk lies a more enigmatic population of B-stars in a spatially isotropic distribution around SgrA*, with a distribution of eccentricities skewed slightly higher than a thermal distribution (<cit.>, <cit.>). These “S-stars” have semi-major axes less than 0.04 pc, with S0-102 having the shortest period of just 11.5±0.3 yrs, and a pericentre approach of just ∼260 AU <cit.>. The S-star population could potentially be older than the disk population, as the brightest star in this population, S0-2, is a main sequence B0-B2.5 V star with an age less than 15 Myr <cit.>. The other stars in this population have spectra consistent with main sequence stars <cit.>, and observational limits require them to be less than 20 Myr old in order to be visible.
The tidal forces in this region prohibit standard star formation, so the S-stars must have formed farther out and migrated inwards. A possible formation mechanism of the S-stars is from the tidal disruption of binaries scattered to low angular momentum orbits, producing an S-star and a hyper-velocity star via the Hills mechanism <cit.>. The captured stars would have initial eccentricities greater than 0.97 <cit.>, but the presence of a cusp of stellar mass black holes around SgrA* could efficiently reduce the eccentricities of these orbits via resonant relaxation within the lifetime of the stars <cit.>. Additionally, <cit.> show that if a binary is not tidally disrupted at first pericentre passage, the Kozai-Lidov (KL) resonance <cit.> can cause the binary to coalesce after a few orbital periods, producing an S-star and no hyper velocity star.
Alternatively, <cit.> show that stars from the clockwise disk can be brought very close to SgrA* via global KL like resonances, if the clockwise disk of gas originally extended down to ∼ 10^-6pc (the lowest stable circular orbit around SgrA*). The authors also show that O/WR stars would be tidally disrupted within the region of the observed S-star cluster due to their large stellar radii, whereas B-stars could survive, in agreement with observations. Recently, <cit.> showed that a clockwise disk with 100% primordial binarity can produce ∼ 20 S-stars in less than 4Myr. KL oscillations can efficiently drag binaries close to SgrA*, producing an S-star and a hyper-velocity star. This mechanism produces S-stars with eccentricities lower than from the disruption of binaries originating from outside the disk. However, in order to thermalize the S-stars, ∼ 500 M⊙ in dark remnants are still required around SgrA* in order to match observations, consistent with Fokker-Planck models <cit.>. Three confirmed eclipsing binaries are observed within the clockwise disk, all being very massive O/WR binaries <cit.>. <cit.> estimate the present day binary fraction of the disk to be 0.3^+0.34_-0.21 at 95% confidence, with a fraction greater than 0.85 ruled out at 99% confidence. More recently, <cit.> predict that the binary fraction must be greater than 32% at 90% confidence.
An additional popular scenario is the transport of stars from young dense star clusters that migrate to the Galactic Centre via dynamical friction, with the aid of an intermediate mass black hole (IMBH). <cit.> showed that to survive to the central parsec from a distance ≥ 10pc, clusters either need to be very massive (∼10^6M⊙) or very dense (central density ρ_c∼ 10^8 M⊙ pc^-3). <cit.> showed that including an IMBH in the cluster means the core density can be lowered, but only if the IMBH contains ∼ 10 % of the mass of the entire cluster, far greater than is expected from runaway collisions <cit.>.
<cit.> (hereafter F09) revisited this problem using the tree-direct hybrid code, BRIDGE <cit.>, allowing the internal dynamics of the star clusters to be resolved. The small tidal limits imposed by SgrA* meant the clusters had core densities greater than 10^7 M⊙pc^-3, leading to runaway collisions on a mass segregation timescale <cit.>. During collisions, the resulting very massive star (VMS) was rejuvenated using the formalism of <cit.>, and collapsed to an IMBH at the end of its main sequence lifetime, extrapolated from the results of <cit.>. The authors found that by allowing the formation of a 3-16 × 10^3 M⊙ IMBH <cit.>, some stars could be carried very close to SgrA* via a 1:1 mean motion resonance with the infalling IMBH. The orbits of these “Trojan stars” were randomised by 3-body interactions with the SMBH and IMBH, constructing a spatially isotropic S-star cluster. F09's simulation “LD64k” transported 23 stars to the central 0.1pc, however, the resolution of the simulation is ∼ 0.2pc, set by the force softening of SgrA*. The simulation also brought 354 stars within 0.5pc of SgrA*, 16 being more massive than 20M⊙, analogous to clockwise disk stars. The IMBH formed in LD64k is more massive than the observational upper limit of ∼ 10^4 M⊙, derived from VLBA measurements of SgrA* <cit.>. However, <cit.> state that an IMBH of 1500 M⊙ is sufficient for the randomisation of stars <cit.>.
Despite the successes of the F09 model, IMBH formation in young dense star clusters may be prohibited. VMSs of the order 10^3 M⊙ are expected to have luminosities greater than 10^7 L⊙ <cit.>, driving strong stellar winds. F09 assumed the mass loss rate of stars more massive than 300M⊙ to be linear with mass, however, recent work on VMS winds show steeper relations for stars that approach the Eddington limit <cit.>. F09's model also neglected the effect of the evolving chemical composition on the luminosity, and hence the mass loss, of the VMS <cit.>. We note that the initial mass function (IMF) used in F09, although employed due to numerical constraints, meant there were ten times more massive stars than expected from a full Kroupa IMF, leading to an increased collision rate and buildup of the VMS mass.
No conclusive evidence for the existence of IMBHs in star clusters has yet been found <cit.>. Sufficiently high mass loss could cause VMSs to end their lives as stellar mass black holes or pair-instability supernovae at low metallicity <cit.>. Pair-instability supernovae candidates have recently been found at metallicities as high as ∼ 0.1 Z⊙ <cit.>, with expected progenitors of several hundred solar masses <cit.>.
The most massive star observed, R136a1, is a 265^+80_-35M⊙ star in the 30 Doradus region of the Large Magellanic Cloud (LMC) <cit.>, with metallicity Z = 0.43 Z⊙. <cit.> suggest that it could be a very rare main sequence star, with a zero age main sequence mass of 320^+100_-40M⊙. However, it could be the collision product of a few massive stars. R136a1 has a large inferred mass loss rate of (5.1^+0.9_-0.8)× 10^-5M⊙yr^-1, ∼ 0.1 dex larger than the theoretical predictions of <cit.>. <cit.> predict that the evolution of all stars more massive than 300 M⊙ is dominated by stellar winds, with similar lifetimes of ∼2-3Myr. As such, it is not surprising that R136a1 is the most massive star currently observed, as more massive VMSs should be rare and short lived.
Whilst it may be unlikely for an IMBH to form at solar metallicity, a VMS could transport stars to SgrA* within its lifetime. In this paper we test the feasibility of the star cluster migration scenario as the origin of either young population in the Galactic Centre.
We evolve direct N-body models of star clusters in the Galactic Centre, using the GPU-accelerated code NBODY6df, a modified version of Sverre Aarseth's NBODY6 <cit.> which includes the effects of dynamical friction semi-analytically <cit.>. In section <ref> we describe the theory behind our dynamical friction and stellar evolution models. In section <ref> we describe the numerical implementation. Section <ref> discusses prior constraints on the initial conditions and describes the parameters of the simulations performed. In sections <ref> and <ref>, we present our results and discuss their implications for the origin of the young populations. Finally, we present our conclusions in section <ref>.
§ THEORY
§.§ Dynamical friction
The dynamical friction model used in this paper is a semi-analytic implementation of Chandrasekhar's dynamical friction <cit.>, described in <cit.>, which provides an accurate description of the drag force on star clusters orbiting in analytic spherical background distributions of asymptotic inner slope γ = 0.5,…,3 <cit.>. The novelty of our model is the use of physically motivated, radially varying maximum and minimum impact parameters (b_max and b_min, respectively), which vary based on the local properties of the background. The dynamical friction force is given by:
dv_cl/dt = -4π G^2 M_clρlog(Λ) f(v_* < v_cl) v_cl/v_cl^3
where v_cl is the cluster velocity, M_cl is the cluster mass, ρ is the local background density and f(v_* < v_cl) is the fraction of stars moving slower than the cluster, assuming a Maxwellian distribution of velocities, valid in the cuspy models explored here. The Coulomb logarithm is given by:
\log(\Lambda) = \log\left(\frac{b_{\mathrm{max}}}{b_{\mathrm{min}}}\right) = \log\left(\frac{\min\left(\rho(R_{\mathrm{g}})/\left|\nabla\rho(R_{\mathrm{g}})\right|,\; R_{\mathrm{g}}\right)}{\max\left(r_{\mathrm{hm}},\; G M_{\mathrm{cl}}/v_{\mathrm{cl}}^2\right)}\right),
where Rg is the galactocentric distance of the cluster and rhm is the half mass radius of the cluster. When coupled with the N-body dynamics, rhm is the live half mass radius, and Mcl is well represented by the cluster mass enclosed within its tidal radius, including stars with energies above the escape energy.
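For concreteness, the drag and Coulomb logarithm above can be evaluated as in the following sketch (illustrative Python, not part of NBODY6df; the function names, the closed-form Maxwellian expression for f(v_* < v_cl) and the unit choices are our own assumptions):

```python
import numpy as np
from math import erf, log, pi, sqrt

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def slow_fraction(v_cl, sigma):
    # Maxwellian fraction of background stars slower than the cluster
    x = v_cl / (sqrt(2.0) * sigma)
    return erf(x) - 2.0 * x * np.exp(-x * x) / sqrt(pi)

def coulomb_log(rho, grad_rho, r_g, r_hm, m_cl, v_cl):
    # radially varying impact parameters, as in the equation above
    b_max = min(rho / abs(grad_rho), r_g)
    b_min = max(r_hm, G * m_cl / v_cl**2)
    return log(b_max / b_min)

def df_acceleration(v_vec, rho, grad_rho, r_g, r_hm, m_cl, sigma):
    # Chandrasekhar drag, directed against the cluster velocity
    v_cl = np.linalg.norm(v_vec)
    ln_lam = coulomb_log(rho, grad_rho, r_g, r_hm, m_cl, v_cl)
    return (-4.0 * pi * G**2 * m_cl * rho * ln_lam
            * slow_fraction(v_cl, sigma) * v_vec / v_cl**3)
```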
§.§ Evolution of very massive stars
<cit.> present similarity theory models of VMSs, for which the stellar properties can be calculated by solving a set of differential equations <cit.>. VMSs are predicted to have large convective cores containing more than 85% of the mass, surrounded by a thin extensive radiative envelope. In such stars the opacity becomes larger than the electron scattering value, and can be considered to come from Thomson scattering alone. Utilising such approximations, the authors provide simple formulae to calculate the core mass and luminosity, as functions of stellar mass and chemical composition.
The luminosity of stars with μ^2M ≥ 100 can be found by substituting equation 36 of <cit.> into their equation 34:
L \approx \frac{64826\, M \left(1 - 4.5/\sqrt{\mu^2 M}\right)}{1 + X},
where L is the luminosity, M is the mass of the VMS, X is the core hydrogen abundance, and μ is the mean atomic mass of the core. Assuming a fully ionised plasma, μ takes the form:
\mu = \frac{4}{6X + Y + 2},
where Y is the core helium abundance. Equation <ref> shows that at very large masses L ∝ M. However, unlike the F09 model, this formulation of the luminosity explicitly includes an L ∝ (1+X)^-1 dependence. As the mass loss rate depends on L, this leads to an increased mass loss rate in the late stages of evolution (see section <ref>).
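The two expressions above translate directly into code; a minimal sketch (illustrative Python; the function names are ours, with L and M in solar units):

```python
from math import sqrt

def mu_chb(X, Y):
    # mean atomic mass of the fully ionised core during CHB
    return 4.0 / (6.0 * X + Y + 2.0)

def vms_luminosity(M, X, Y):
    # approximate VMS luminosity in Lsun, valid for mu^2 * M >= 100
    mu = mu_chb(X, Y)
    return 64826.0 * M * (1.0 - 4.5 / sqrt(mu * mu * M)) / (1.0 + X)
```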
<cit.> (hereafter B07) modelled the evolution of VMSs with zero age main sequence (ZAMS) masses of up to 1000M⊙, assumed to have formed via runaway collisions in a young dense star cluster. The authors numerically evolve the chemical composition of the star through the Core Hydrogen Burning (CHB) and Core Helium Burning (CHeB) phases via conservation of energy and mass loss from the stellar wind. In this section we briefly outline the model of <cit.> and describe how we include stellar collisions and their effect on VMS evolution.
As VMSs have large convective cores, one can reasonably approximate them as homogeneous (verified to be a good approximation down to 120 M⊙, B07). Applying conservation of energy, the hydrogen fraction in the core during CHB evolves as equation 1 of B07:
M_{\mathrm{cc}}(\mu,M)\,\frac{\mathrm{d}X}{\mathrm{d}t} = -\frac{L(\mu,M)}{\epsilon_{\mathrm{H}}},
where Mcc is the mass of the convective core and ϵH is the hydrogen burning efficiency (i.e. the energy released by fusing one mass unit of hydrogen to helium).
When the core is depleted of hydrogen, the VMS burns helium via equation 4 of B07 (see also <cit.>):
M_{\mathrm{cc}}(\mu,M)\,\frac{\mathrm{d}Y}{\mathrm{d}t} = -\frac{L(\mu,M)}{\epsilon_{\mathrm{ratio}}},
\epsilon_{\mathrm{ratio}} = \left[\left(\frac{B_{\mathrm{Y}}}{A_{\mathrm{Y}}} - \frac{B_{\mathrm{O}}}{A_{\mathrm{O}}}\right) + \left(\frac{B_{\mathrm{C}}}{A_{\mathrm{C}}} - \frac{B_{\mathrm{O}}}{A_{\mathrm{O}}}\right) C'(Y)\right],
where ϵratio accounts for the fact that C and O are produced in a non-constant ratio, affecting the energy production per unit mass of helium burnt. Here, A and B are the atomic weights and binding energies of nuclei, with subscripts Y, C and O representing helium, carbon and oxygen respectively. C'(Y) is the derivative of the C(Y) fit from <cit.> with respect to Y (see B07 for the derivation of Equation <ref>). During CHeB, μ is defined as <cit.>:
\mu = \frac{48}{36Y + 28C + 27O},
which by assuming Y + C + O = 1 and using the fit to C(Y) by <cit.>, can be rewritten solely as a function of Y as
\mu = \frac{48}{9Y + C(Y) + 27}.
Subsequent stages of evolution are rapid and explosive. We assume that after core helium burning the remnant collapses to a black hole with no significant mass loss.
§.§.§ Mass loss
The chemical evolution of the VMS is coupled to the mass evolution, as the luminosity of the star sets the wind strength. <cit.> (hereafter V11) show that the wind strength is heavily dependent on the proximity to the Eddington limit, when gravity is completely counterbalanced by the radiative forces, i.e. grad/ggrav = 1, where grad and ggrav are the radiative and gravitational forces, respectively. For a fully ionised plasma, the Eddington parameter, Γe, is dominated by free electrons and is approximately constant throughout the star (V11):
\Gamma_{\mathrm{e}} = \frac{g_{\mathrm{rad}}}{g_{\mathrm{grav}}} = 10^{-4.813}\, (1 + X_{\mathrm{s}}) \left(\frac{L}{L_\odot}\right) \left(\frac{M}{M_\odot}\right)^{-1},
where Xs is the surface hydrogen abundance of the star. V11's fig. 2 shows that the logarithmic difference between the empirical <cit.> (hereafter V01) rates and the VMS rates follows a tight relation with Γe, almost independent of mass. The authors find that the mass loss rate is proportional to:
\dot{M} \propto
\begin{cases}
\Gamma_{\mathrm{e}}^{2.2}, & \text{if } 0.4 < \Gamma_{\mathrm{e}} < 0.7\\
\Gamma_{\mathrm{e}}^{4.77}, & \text{if } 0.7 < \Gamma_{\mathrm{e}} < 0.95.
\end{cases}
During the CHB phase, we model the stellar wind of the VMS using the formulae from V01, whilst correcting for the proximity to the Eddington limit by fitting to the data from table 1 of V11. In this way we obtain a coefficient that allows us to convert the V01 rate to the Γe enhanced rates of stars approaching the Eddington limit <cit.>. V11 modelled stars up to 300 M⊙; however, as the logarithmic difference between the V11 and V01 rates shows little dependence on mass, we extrapolate this approach to higher masses. V11 state that their predicted wind velocities are a factor 2–4 less than derived empirically. The effect of rotation is also neglected. It should be noted that due to these two effects, and our extrapolation of the V11 models, we likely underestimate the mass loss of our VMSs. Therefore the masses of our VMSs and their resulting remnants should be taken as a conservative upper limit at solar metallicity.
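Schematically, the CHB wind prescription looks as follows (an illustrative Python sketch; the V01 base rate and the normalisation constants of the fit to V11's table 1 are treated as externally supplied placeholders, since we do not reproduce the fit here):

```python
def eddington_parameter(L, M, X_s):
    # electron-scattering Eddington parameter; L, M in solar units
    return 10.0**(-4.813) * (1.0 + X_s) * (L / M)

def chb_wind(mdot_v01, gamma_e, c_low, c_high):
    # boost the empirical V01 rate as the star approaches the
    # Eddington limit; c_low and c_high are placeholder fit constants
    if 0.4 < gamma_e < 0.7:
        return mdot_v01 * c_low * gamma_e**2.2
    if 0.7 <= gamma_e < 0.95:
        return mdot_v01 * c_high * gamma_e**4.77
    return mdot_v01  # outside the fitted range, keep the V01 rate
```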
During CHeB VMSs are depleted of hydrogen and are expected to show Wolf-Rayet like features. We follow the approach of <cit.> and extrapolate the mass loss formula of <cit.>:
\log(\dot{M}) = -11 + 1.29 \log(L) + 1.7 \log(Y) + 0.5 \log(Z).
B07 explored models with Wolf-Rayet like mass loss rates (arbitrarily) up to 4 times weaker, which only left a remnant twice as massive. The uncertainty arising from extrapolation of this formula should be of little significance to the transport of young stars to the central parsec, as post main sequence VMSs are not massive enough to experience substantial dynamical friction after the cluster is disrupted (B07). However, if sufficiently chemically rejuvenated, a CHB VMS may be capable of bringing stars to the central parsec before losing most of its mass. Thus the evolution during the CHB stage is of most interest.
We make sure that in both burning phases the predicted mass loss never exceeds the photon tiring limit, the maximum mass loss rate that can theoretically be achieved using 100% of the star's luminosity to drive the wind <cit.>:
\dot{M}_{\mathrm{tir}} = 0.032 \left(\frac{L}{10^6 L_\odot}\right) \left(\frac{R}{R_\odot}\right) \left(\frac{M}{M_\odot}\right)^{-1}.
Here, the radii, R, of stars are taken from the mass-radius relation of <cit.>, which is in excellent agreement with <cit.>'s similarity theory models of VMSs, but requires less computational resources to calculate. The OB disk population is less than 7Myr old. Hence, we assume approximately solar abundances such that X0=0.7, Y0=0.28 and Z0=0.02 <cit.>.
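The cap itself is a one-liner (illustrative Python; L, R and M in solar units, mass loss rates in Msun/yr):

```python
def mdot_tiring(L, R, M):
    # photon tiring limit: the wind cannot carry away more mass than
    # the full stellar luminosity can lift out of the potential well
    return 0.032 * (L / 1.0e6) * R / M

def capped_mdot(mdot_wind, L, R, M):
    return min(mdot_wind, mdot_tiring(L, R, M))
```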
§.§.§ Rejuvenation following collisions
<cit.> show that VMSs have nearly all of their mass in their large convective cores. Repeated collisions can efficiently mix the core and the halo, keeping the star relatively homogeneous. The wind of the VMS also ensures homogeneity, as the loose radiative envelope is shed by the stellar wind, leaving the surface with a composition similar to the core.
We chemically rejuvenate a VMS following a collision with another star. We assume that stars colliding with the VMS efficiently mix with the convective core such that:
X_{\mathrm{new}} = \frac{X_{\mathrm{star}} M_{\mathrm{star}} + X_{\mathrm{VMS}} M_{\mathrm{VMS}}}{M_{\mathrm{VMS}} + M_{\mathrm{star}}}.
Similarly for Y and Z. We approximate Xstar(t) and Ystar(t) for main sequence stars by interpolating the detailed stellar models of <cit.>. If a CHeB VMS collides with a hydrogen rich main sequence star, we assume that CHB is reignited. When two VMSs collide their composition is also assumed to be well mixed.
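The mixing rule can be sketched as below (illustrative Python; the record attributes and phase labels are our own assumptions):

```python
def mix(a_vms, m_vms, a_star, m_star):
    # mass-weighted mixing of one abundance (X, Y or Z)
    return (a_star * m_star + a_vms * m_vms) / (m_vms + m_star)

def rejuvenate(vms, star):
    # mix abundances using the pre-collision masses, then accrete
    for a in ("X", "Y", "Z"):
        setattr(vms, a, mix(getattr(vms, a), vms.mass,
                            getattr(star, a), star.mass))
    vms.mass += star.mass
    if vms.phase == "CHeB" and star.X > 0.0:
        vms.phase = "CHB"  # hydrogen-rich collision reignites CHB
```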
§ NUMERICAL METHOD
To model the effects of dynamical friction on self-consistent star cluster models we use the GPU-enabled direct N-body code NBODY6df <cit.>, which is a modified version of Aarseth's direct N-body code NBODY6 <cit.>. In this paper we model the background as an analytic stellar distribution with a central black hole (see section <ref>). In <cit.> we only tested our dynamical friction model for cases without a central black hole; however, we discuss how to trivially add a black hole to the model in Appendix A. A validation of this approach via comparison with full N-body models computed with GADGET <cit.> is also given in the appendix.
We introduced an additional modification to the code to properly model the evolution of a VMS, as described in section <ref>. When a physical collision creates a star greater than 100M⊙ we flag it as a VMS and treat its evolution separately from the standard SSE package in NBODY6 <cit.> via the method described in section <ref>. As the mass loss can be very large for VMSs, fine time resolution is needed to prevent overestimation of the mass loss. We introduce a new routine which integrates the mass and composition of the star between the dynamical time steps using a time step of 0.1 years, sufficiently accurate to resolve the evolution. V11 predict terminal wind speeds of a few thousand kms^-1 for VMSs, as such we assume that the stellar wind escapes the cluster and simply remove this mass from the VMS. An arbitrary number of VMSs can potentially form and evolve simultaneously in the simulation.
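The sub-stepping between dynamical time steps can be summarised as follows (illustrative Python; wind_rate, vms_radius and burn stand in for the prescriptions of the previous section and are not real NBODY6df routines):

```python
DT_SUB = 0.1  # yr; fine enough to resolve the strongest winds

def advance_vms(vms, dt_dyn):
    # integrate VMS mass and composition between two dynamical steps
    t = 0.0
    while t < dt_dyn and vms.mass > 0.0:
        dt = min(DT_SUB, dt_dyn - t)
        L = vms_luminosity(vms.mass, vms.X, vms.Y)
        mdot = capped_mdot(wind_rate(vms, L), L,
                           vms_radius(vms.mass), vms.mass)
        vms.mass -= mdot * dt  # the fast wind escapes the cluster
        burn(vms, L, dt)       # advance X (CHB) or Y (CHeB)
        t += dt
```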
§ INITIAL CONDITIONS
In NBODY6df the background potential is assumed static and analytic; an assumption valid over the short timescales considered here (less than 7Myr). We adopt a Dehnen model <cit.>, representing the central region of the Galaxy. We use a slope γ = 1.5, scale radius a = 8.625pc and total mass Mg = 5.9× 10^7 M_⊙, which closely reproduces the observed broken power-law profile obtained by <cit.> for the central region of the Galaxy, yet has simple analytic properties. We place a central fixed point mass of 4.3×10^6M⊙ to represent SgrA* <cit.>.
§.§ Physical and numerical constraints on the initial conditions
There are two constraints on the initial conditions of the clusters. Firstly, they must reach the Galactic Centre within the age of the young populations. We therefore wish to model clusters that can potentially reach the Galactic Centre in less than 7Myr, so that we may test the migration model for both the clockwise disk and the S-stars. We obtain tight constraints on the initial orbital parameters by integrating the orbits of point masses in the Galactic Centre potential including dynamical friction. Fig. <ref> shows contours of equal inspiral time for different initial masses, apocentres and initial velocities. Initial conditions to the right of each line are such that the clusters can reach within 0.5pc in less than 7Myr. Arches-like clusters <cit.> could reach the Galactic Centre in less than 7Myr if they formed at ∼5pc, or from 7-10pc if large initial eccentricities were assumed. More massive clusters can easily migrate ∼ 10pc in 7Myr. We note that these inspiral times are lower limits, as real clusters would lose mass from stellar winds and tides. We choose to model only those clusters for which a point mass object of the same mass can reach the Galactic Centre within ∼7Myr.
Secondly, the size of the clusters is limited by their small tidal limits when so close to SgrA*. Approximating the cluster as a point mass, the tidal radius is given by <cit.>:
r_{\mathrm{t}}^3 = \frac{G M_{\mathrm{cl}}}{\omega_{\mathrm{p}}^2 + \left(\mathrm{d}^2\Phi/\mathrm{d}R^2\right)_{\mathrm{p}}},
where ωp and (d^2Φ/dR^2)p are the angular velocity of a circular orbit and the second derivative of the potential at pericentre, respectively. The high mass requirement for fast inspiral, coupled with the small tidal limits, means that all models are inherently very dense and runaway mergers are expected. Although it is unknown whether such dense clusters are likely to form in the Galactic Centre, we explore these initial conditions in order to test the feasibility of the inspiral model.
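For reference, the tidal radius as written above is (illustrative Python, following the sign convention of the equation; G as defined earlier):

```python
def tidal_radius(m_cl, omega_p_sq, d2phi_dr2_p):
    # point-mass tidal limit of the cluster at pericentre
    return (G * m_cl / (omega_p_sq + d2phi_dr2_p))**(1.0 / 3.0)
```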
§.§ Initial Mass function
We sample stars from a Kroupa initial mass function (IMF) with an upper limit of 100 M⊙ <cit.>. A lower mass limit of 0.08 M⊙ would yield the most physically realistic results, but at a computational cost unfeasible for a parameter study of such massive clusters at the current time (365k-730k particles for the most massive models explored). However, truncating the low end of the IMF means that one samples too many massive stars as compared with a full Kroupa IMF. To quantify the difference this has on VMS formation, we ran three test simulations at different mass resolutions, in the absence of a tidal field. Simulations 1lo, 1hi and 1kr have lower cutoffs of 1.0, 0.16 and 0.08 M⊙, respectively. We model star clusters as King models with dimensionless central potential, W0 = 6, and with no primordial mass segregation. The parameters of the isolated simulations are displayed in Table <ref>. Fig. <ref> shows the VMS mass as a function of time for simulations 1lo, 1hi and 1kr, showing that better sampling of the low end of the IMF inhibits the growth of the VMS. This occurs because primarily high mass stars build up the VMS, due to their short dynamical friction timescales and large cross sections for collision. In simulation 1kr, although half the cluster mass is comprised of stars less massive than 0.58 M⊙, only 37 stars less massive than 0.58 M⊙ are consumed throughout the entire lifetime of the VMS. The VMS initially grows very rapidly. However, the late main sequence evolution is dominated by the strong stellar wind of the helium rich VMS. Throughout its lifetime, the VMS in simulation 1kr removes 2244M⊙ of material from the cluster through its stellar wind, ∼ 2% of the cluster mass. During CHeB, simulations 1lo and 1hi reignite CHB via collision with a massive main sequence star, resulting in a lower remnant mass at collapse. The late evolution is very stochastic, however this is not important for the migration of young stars to the Galactic Centre, as the VMS only provides gravitational binding energy comparable to normal cluster stars during its CHeB phase.
Fig. <ref> shows that a lower limit of 0.16 M⊙ is sufficient to resolve the mass evolution of the VMS, and as we are only interested in the final distribution of OB stars, this IMF is sufficient for our simulations. We cannot evolve the most massive clusters at high mass resolution, as these models become too computationally expensive. As a compromise, we test a large range of initial conditions with a lower limit of 1 M⊙, and re-run a selection of initial conditions with a lower limit of 0.16 M⊙ to obtain more realistic results. We can simultaneously use the low resolution simulations to explore the possibility of an initially top heavy mass function for clusters forming close to SgrA*. A very top heavy function is observed for the clockwise disk <cit.>, however, it is unknown whether a top heavy IMF is expected from the collapse of GMCs at ∼5-10 pc from SgrA*.
§.§ Binary fraction
Some simulations include a population of primordial binaries. Binaries are initialised as follows. Firstly all stars more massive than 5 M⊙ are ordered by mass. The most massive star is then paired with the second most massive star, and so on. This choice is motivated by observational data showing that massive OB stars are more likely to form in binary systems with mass-ratios of order unity <cit.>. Once all stars more massive than 5 M⊙ are in binaries, lower mass stars are paired at random until the specified binary fraction (the fraction of stars initially in a binary system) is reached <cit.>. For stars more massive than 5 M⊙, the periods and eccentricities are drawn from the empirical distributions derived in <cit.>, which show that short periods and low eccentricities are preferred in massive binaries. For lower mass stars the periods are drawn from the <cit.> period distribution and are assigned thermal eccentricities. The mass of a binary and its initial position in the cluster are assumed to be independent.
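The pairing scheme amounts to the following (illustrative Python; index pairs are returned, and the period and eccentricity sampling described above is omitted):

```python
import random

def assign_binaries(masses, f_bin, seed=0):
    rng = random.Random(seed)
    order = sorted(range(len(masses)), key=lambda i: masses[i],
                   reverse=True)
    heavy = [i for i in order if masses[i] > 5.0]
    light = [i for i in order if masses[i] <= 5.0]
    # massive stars are paired by adjacent rank (near-unity mass ratios)
    pairs = [(heavy[k], heavy[k + 1])
             for k in range(0, len(heavy) - 1, 2)]
    # lower-mass stars are paired at random up to the requested fraction
    rng.shuffle(light)
    n_pairs = int(f_bin * len(masses)) // 2
    while len(pairs) < n_pairs and len(light) >= 2:
        pairs.append((light.pop(), light.pop()))
    return pairs
```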
§.§ Simulations
The initial conditions are described in Table <ref> and are referred to by the following naming convention: <M><mf><Ra>, where <M> is the cluster mass in units of ∼ 10^5 M⊙, <mf> is the mass resolution of the simulation, and <Ra> is the initial galactocentric distance in pc. For most simulations we sample from a Kroupa IMF with an upper mass cut off, mup = 100M⊙. The “lo” resolution models have a lower mass cut off, mlow = 1M⊙ and mean mass, ⟨ m*⟩= 3.26M⊙. The “hi” resolution models have mlow = 0.16M⊙ and ⟨ m*⟩= 0.81M⊙. For simulations with <mf> = lu we use an IMF identical to the mass function of the clockwise disk <cit.>. The simulation name is followed by a suffix describing additional information about the simulation. The suffix W4 denotes that the dimensionless central potential, W0, is initially 4 instead of 6. The suffix vX indicates an eccentric orbit with initial velocity, 0.X vc (where vc is the circular velocity at the initial position). The suffix “ms” indicates that the cluster is primordially mass segregated. Finally, the suffix “b” denotes the inclusion of primordial binaries (see section <ref>). Most models are Roche-filling at first pericentre passage, apart from runs marked with an asterisk, which are Roche-filling at their initial positions. The model with the suffix “d” is extremely Roche under-filling at its initial position.
§ RESULTS
In all models, the clusters are completely tidally disrupted in less than 7Myr. Massive clusters migrate farther in than lower mass clusters on the same initial orbits, due to shorter dynamical friction timescales and less efficient tidal stripping. However any cluster that reaches ∼ 3 pc is rapidly dissolved by its shrinking tidal limit as it approaches SgrA*. Clusters on eccentric orbits inspiral faster, as they pass through denser regions of the cusp periodically. However, clusters on very eccentric orbits (e.g. 2lo10_v2*, e ∼ 0.9) disrupt on the first few pericentre passages, depositing stars at large distances along the initial cluster trajectory.
Most simulations naturally form a VMS in less than 1Myr due to their high initial densities. However, the initial rapid mass accretion is soon outpaced by the increasing mass loss rate and the relaxation of the cluster, causing the VMS to collapse to a black hole of ∼ 20 - 250 M⊙ after 2-5 Myr (300-400 M⊙ for models with a <cit.> IMF), typically before their parent clusters completely disrupt. Table <ref> shows the maximum mass, remnant mass and lifetime of the VMS formed in each simulation. The clusters become completely unbound at ∼ 2-3 pc, and the IMBHs formed are not massive enough to experience significant dynamical friction and drag stars close to SgrA* (dynamical friction timescales for even a 400 M⊙ IMBH are longer than 100 Myr). Conversely, the evolution of the VMS does not appear to significantly inhibit the inspiral of the cluster, as only ≲ 2% of the initial cluster mass is typically expelled by the VMS throughout its lifetime.
For each model, Table <ref> shows the final distribution of stars after complete cluster dissolution and death of any VMSs. We show the distributions of semi-major axes for all stars and main sequence stars more massive than 8M⊙ at 7 Myr, as well as how many of these stars have final semi-major axes smaller than 1 pc. We use an 8M⊙ cut-off as these are the faintest main sequence stars spectroscopically observable in the Galactic Centre with current telescopes, K ≥ 15.5 <cit.>. Although photometric studies can see objects down to magnitudes of K ∼ 18–19 <cit.>, it is impossible to determine whether these stars are young or old. We also show the projected distributions of visible main sequence stars at 7 Myr and 15 Myr (as the S-star population may be older than the disk population, see section <ref>).
§.§ Low Resolution Models
Fig. <ref> shows the final distributions of the semi-major axes and projected positions of stars for a representative selection of the models with a lower mass cutoff of 1 M⊙ (<mf> = lo). In all models the final distributions are broad, with a standard deviation of ∼ 2 pc. Other simulations show similar distributions, with less massive and less eccentric models dissolving farther out (see Table <ref>).
Models with very eccentric orbits (e.g. 2lo10_v2*) can bring stars close to SgrA*, however, very few stars have final semi-major axes smaller than 1pc. No stars more massive than 8 M⊙ are scattered to semi-major axes smaller than 1 pc in either 1lo10_v2* or 2lo10_v2*. This is likely due to the preferential loss of low mass stars, whereas high mass stars remain inside the cluster for longer, and end up tracing the final cluster orbit.
Simulation 2lo5 is the only non-radial model to bring stars to the central parsec, and the only model that brings a significant number of massive stars. However, one would expect to also see ∼ 3000 massive stars in the range 1-10 pc, about 10 times more than reach the central parsec. The right side of Fig. <ref> shows the distributions of projected distances of stars that are spectroscopically visible at 7 and 15 Myr. The amplitudes of the distributions are normalised to the expected number of stars had the simulation been run with a Kroupa IMF. The stars are projected to rotate clockwise in the sky. It can be seen that for all simulations, more than 1000 young stars are observed out to ∼ 10 pc. Considering current observational limitations, if a cluster were present in the central ∼ 10 pc within the last ∼ 15 Myr, a large number of stars would be observable up to ∼ 10 pc, suggesting it is unlikely that any clusters have inhabited this region in the last ∼ 15 Myr.
Simulations 2lo10 and 2lo10_W4 have the same initial orbit and mass, yet 2lo10_W4 is less concentrated. The lower density and longer relaxation timescale cause 2lo10_W4 to form a less massive VMS than 2lo10. However, the VMS in 2lo10_W4 lives longer as not all the most massive stars are consumed within ∼ 1 Myr. The models end up with similar final distributions of the resulting disk (see Table <ref>). The same trend is seen for the less massive analogues, 1lo10 and 1lo10_W4. The two most massive simulations, 4lo10_W4 and 4lo10_W4v75, are massive enough to reach the central parsec from 15 pc in ∼ 7 Myr, but with central densities low enough to suppress the formation of VMSs. However, these simulations are more susceptible to tides, and are tidally disrupted at large radii.
§.§ Higher resolution models
Fig. <ref> shows a comparison between simulations 1lo10_v5* and 1hi10_v5*, which have the same initial conditions, except 1hi10_v5* better samples the low mass end of the IMF. The panels on the left show the distributions of semi-major axes for all the stars and main sequence stars more massive than 8M⊙ at 7 Myr. The distributions are very similar, however 1hi10_v5* has a smaller ratio of spectroscopically visible stars to all stars due to differences in the IMF. The panels on the right show the projected distributions of main sequence stars visible at 7 and 15 Myr. For the projected distributions, the number of stars is re-normalised to the expected number of stars had the simulation been run with a Kroupa IMF from 0.08-100 M⊙. Although massive stars are consumed to construct the VMS, this is a small fraction of the population. The distributions look very similar in shape and magnitude, indicating that models run with a lower limit of 1 M⊙ produce similar final distributions to simulations that better sample the IMF. This verifies the validity of the normalisation approach used on the projected visible distributions in Fig. <ref>.
Fig. <ref> demonstrates how simulations 1hi5, 1hi10, 1hi10_v5* and 1hi10_v2* evolve with time. The top two panels show the evolution of the Galactocentric distance of the cluster and the mass enclosed within the tidal radius. The bottom two panels show the evolution of the VMS mass and the half mass radius of the cluster. Simulations 1hi5, 1hi10 and 1hi10_v5* quickly form a VMS and expand due to rapid two body relaxation in the dense core. The expansion lowers the core density and thus the collision rate. The reduced collision rate allows the VMSs to rapidly burn their fuel and collapse without significant hydrogen rejuvenation. Simulation 1hi5 forms a more massive VMS than the other simulations as it is initially ∼ 10 times as dense, however the resulting increased luminosity decreases its lifetime. In simulation 1hi10_v2*, the cluster becomes unbound before the massive stars can reach the centre of the cluster, however the initial density is high enough that a 235 M⊙ VMS forms by the first pericentre passage. The self-limiting nature of the VMS formation is discussed in section <ref>.
§.§ Models with extreme initial conditions
The young clockwise disk population exhibits a top heavy mass function, with power law index α∼ 1.7 <cit.>. In the context of the cluster inspiral scenario this has been explained by mass segregation inside the cluster, with the most massive stars reaching the central parsec, and low mass stars being preferentially lost due to tides during inspiral. However, as we have shown in section <ref>, clusters lose massive stars as well as low mass stars throughout inspiral, via dynamical ejections and the shrinking tidal limits as the clusters approach SgrA*. In order to test the effect of mass segregation, we ran simulation 1hi5_ms, which we primordially mass segregated using the method described in <cit.>. For simulations 1hi5 and 1hi5_ms, Fig. <ref> shows the semi-major axes of all stars and main sequence stars more massive than 8M⊙ at T=7Myr, as well as the distributions of projected distances of spectroscopically visible stars at 7 and 15Myr. Their distributions look similar, as simulation 1hi5 has an initial dynamical friction timescale of tdf∼ 0.1 Myr for the most massive stars, causing the cluster to rapidly mass segregate. As such, primordial mass segregation does not significantly enhance the transport of massive stars to the central parsec, as clusters of high enough density become mass segregated before tides become important.
As star formation close to a SMBH is not well understood, we also test a model in which the cluster is born with the top heavy mass function derived in <cit.>. Fig. <ref> shows the evolution of simulations 1lo5, 2lo5, 1lu5 and 2lu5, where the two latter clusters have stars sampled from the <cit.> mass function. Models computed with the top-heavy mass function form VMSs of greater mass, as more massive stars are sampled, and their cross sections for collisions are larger. However, as the cluster mass is distributed amongst fewer stars, simulations 1lu5 and 2lu5 relax faster than 1lo5 and 2lo5 and dissolve more rapidly. A flatter mass function does not help bring stars closer to SgrA*, and leaves more visible stars spread across the central 10 pc.
As a final test of extreme initial conditions we re-run simulation 1hi5 with an initial size and density corresponding to being Roche filling at 1 pc. In simulation 1hi5_W4d, this cluster is placed initially on a circular orbit at 5 pc, so that it is initially very Roche under-filling. Fig. <ref> shows the evolution of Galactocentric distance, cluster mass, VMS mass and half mass radius of simulations 1hi5 and 1hi5_W4d as a function of time. Increasing the density shortens the relaxation time, and after ∼ 2 Myr the clusters in simulations 1hi5 and 1hi5_W4d are similar. The cluster is able to hold onto its mass for slightly longer in 1hi5_W4d, but after the cluster expands it ultimately gets disrupted by the tidal field in the same way as 1hi5. As such, making clusters arbitrarily dense does not help the cluster migration scenario.
§.§ Models with primordial binaries
We include a primordial binary population in three of our simulations 1hi10_b, 1hi10_v2b* and 1hi5_b. The inclusion of primordial binaries is interesting as the clockwise disk has three confirmed eclipsing binaries <cit.>, with a total binary population estimated to be greater than 30% <cit.>. Secondly, a popular formation scenario for the S-stars is from the tidal disruption of binaries by SgrA* via the Hills mechanism <cit.>, where one star is captured and the other is ejected as a hyper-velocity star.
Fig. <ref> shows the final projected distances of binaries in simulations 1hi10_b, 1hi10_v2b* and 1hi5_b. In these models 5% of the stars are initially in binary systems, the properties of which are described in section <ref>. A large number of binary systems survive, despite some being consumed during the formation of the VMS. The final distributions of binaries with main sequence primaries more massive than 8 M⊙ are very similar to the distribution of single stars more massive than 8 M⊙ in models with no primordial binaries.
In all three simulations, no binaries end up with semi-major axes less than 1pc. In 1hi10_v2b* one massive binary of total mass 68.9 M⊙ came within 0.1 pc of SgrA*. Fig. <ref> shows the orbit of this star, which came 0.09 pc from SgrA* at its third pericentre passage. However, the binary remained bound, as its tidal disruption radius by SgrA* was equal to ∼ 10 AU, ∼ 2000 times smaller than its distance. The binary coalesced at apocentre. Due to the scarcity of binaries that approach SgrA*, and the fact that many binaries would be observed beyond the disk, we conclude that if the binary breakup scenario is the origin of the S-stars, it is unlikely that the progenitors originated from nearby star clusters.
§ DISCUSSION
The formation and evolution of a VMS appears to have little effect on cluster inspiral as compared with the collisionless models of <cit.>, yet the suppression of IMBH formation strongly inhibits the radial migration of a sub population of massive stars towards the central parsec (F09). However, even in the case of IMBH formation, one would still observe a broad distribution of massive stars out to ∼ 10 pc, making the scenario unlikely even if IMBH formation were efficient.
All simulations form a large disk of massive stars from ∼ 1 - 10pc, contradicting observations. This implies that no cluster has been present in this region in the past ∼ 15 Myr, as a large population would still be visible with current telescopes. Two ∼ 10^5 M⊙ gas clouds, M-0.02-0.07 and M-0.13-0.08, are seen projected at ∼ 7 and ∼ 13 pc from SgrA* <cit.>, suggesting the presence of GMCs in this region is commonplace. However, the absence of young stars in this region suggests that perhaps GMC collapse is suppressed, only triggering from a tidal shock at close passage to SgrA* <cit.>. Verifying this hypothesis is beyond the scope of this work, and would require further study of GMC collapse in the close vicinity of a massive black hole.
§ CONCLUSIONS
We ran N-body simulations of young dense star clusters that form at distances of 5-15 pc from SgrA* and inspiral towards the Galactic Centre due to dynamical friction. Most models are dense enough that runaway collisions are inevitable, forming a very massive star in less than 1 Myr. However, careful treatment of the evolution of this very massive star shows that it is likely to lose most of its mass through its stellar wind and end its life as a ∼ 50-250 M⊙ black hole. As no intermediate mass black hole can form in this model, clusters dissolve a few pc from SgrA*, leaving a population of bright early type stars that would be observable for longer than the age of both the clockwise disk and S-star population, contradicting observations. It is therefore unlikely that a cluster has inhabited the central 10 pc in the last ∼ 15 Myr, as such the S-stars are unlikely to have formed via disrupted binaries originating from star clusters. Instead, the clockwise disk likely formed in-situ, perhaps from a gas cloud on a radial orbit incident on SgrA* <cit.>, and the S-star cluster is likely to be populated either by dynamical processes in the clockwise disk <cit.>, or through the binary breakup of scattered binaries <cit.>.
§ ACKNOWLEDGEMENTS
JAP would like to thank the University of Surrey for the computational resources required to perform the simulations used in this paper.
§ APPENDIX A: DYNAMICAL FRICTION COMPARISON WITH GADGET
In <cit.>, we only tested our dynamical friction formulation against N-body models of single component Dehnen profiles. In this paper we add an additional central massive black hole. In the Maxwellian approximation, valid for cuspy distributions, this comes into Chandrasekhar's formula via the black hole's contribution to the velocity dispersion of the stars. The addition of a black hole to the model is described in the appendix of <cit.>.
Here we briefly show that our dynamical friction formulation in the vicinity of a black hole is accurate by means of two N-body models of point mass clusters, of mass 10^5M⊙, orbiting the potential described in section <ref>. The N-body models are computed using the mpi-parallel tree-code GADGET2 <cit.>. The stellar background is comprised of 2^24 particles of mass 3.5M⊙, with a central black hole of 4.3×10^6M⊙. The softening of the cluster potential is ϵ = 0.0769pc, corresponding to rhm∼ 0.1pc. The same softening length is used for the background particles. The black hole is given a softening length, ϵ = 0.2pc, to reduce numerical inaccuracies resulting from the large mass ratio of the black hole to background particles. In GADGET2 the force is exactly Newtonian beyond 2.8ϵ, so the semi-analytic and N-body models should agree to ∼0.56pc. The cluster is initially 5pc from SgrA*.
Fig. <ref> shows the orbital evolution of the circular and eccentric cases computed semi-analytically and with GADGET2. Our formalism agrees very well with the N-body models in both cases. In the eccentric case the circularisation appears to be slightly under-predicted. This is likely because we neglect the drag force from stars moving faster than the cluster <cit.>.
| http://arxiv.org/abs/1701.07615v2 | 20170126083411 | On the Design of Distributed Programming Models | ["Christopher S. Meiklejohn"] | cs.DC | ["cs.DC"] |
On the Design of Distributed Programming Models
Christopher S. Meiklejohn
[email protected]
================================================================
Programming large-scale distributed applications requires new abstractions and models if it is to be done well. We demonstrate that such models are possible.
Following from both the FLP result and CAP theorem, we show that concurrent programming models are necessary, but not sufficient, in the construction of large-scale distributed systems because of the problem of failure and network partitions: languages need to be able to capture and encode the tradeoffs between consistency and availability.
We present two programming models, Lasp and Austere, each of which makes a strong tradeoff with respect to the CAP theorem. These two models outline the bounds of distributed model design: strictly AP or strictly CP. We argue that all possible distributed programming models must come from this design space, and present one practical design that allows declarative specification of consistency tradeoffs, called Spry.
§ INTRODUCTION
Languages for building large-scale distributed applications experienced a Golden Age in the 1980s with innovations in networking and the invention of the Internet. These languages tried to ease the development of these applications, influenced by the growing requirements of increased computing resources, larger storage capacity, and the desired high-availability and fault-tolerance of applications.
Two of the most widely known from the era, Argus <cit.> and Emerald <cit.> each took a different approach to solving the problem, and the abstractions that each of these languages provided aimed to simplify the creation of correct applications, reduce uncertainty when dealing with unreliable networks, and alleviate the burden of dealing with low-level details related to a dynamic network topology; Emerald focusing heavily on object mobility and Argus on atomic transactions across objects residing on multiple machines.
As it stands, these languages never saw any adoption[One notable exception here is the Distributed Erlang <cit.> extension for the Erlang programming model, later adopted by Haskell <cit.>.] and most of the large-scale distributed applications that exist today have been built with sequential or concurrent programming languages such as Go, Rust, C/C++, and Java. These languages have taken a library approach to distribution, adopting many ideas from languages such as Emerald and Argus. We can highlight two examples: first, the concept of promises from Argus, is now a standard mechanism relied upon when issuing an asynchronous request <cit.>; second, from Emerald, the concept of a directory service that maintains the current location of mobile processes <cit.>.
Distributed programming today has become “the new normal.” Nowadays, whether you are building a mobile or a rich web application, developers are increasingly trying to provide a “near native” experience <cit.>, where application users feel as if the application is running on their machine. To achieve this, shared state is usually replicated to devices, locally mutated and periodically synchronized with a server. In these scenarios, consistency can become increasingly challenging if the application is to stay available when the server cannot be reached. Therefore, it is now paramount that we have tools for building correct distributed applications.
We argue that the reason that these previous attempts at building languages for large-scale distributed systems have failed to see adoption is that they fail to capture the requirements of today's application developers. For instance, if an application must operate offline, a language should have primitives for managing consistency and conflict resolution; similarly, if a language is to adhere to a strict latency bound for each distributed operation, the language should have primitives for expressing these bounds.
In this paper, we relate these real-world application requirements to the CAP theorem <cit.>, showing that a distributed application must sacrifice consistency if it wishes to remain available under network partitions. We demonstrate that there are several design points for programming models that could, and do, exist within the bounds of the CAP theorem, with two examples that declaratively specify application-level distribution requirements.
§ SEQUENTIAL AND CONCURRENT PROGRAMMING
We explore the challenges when moving from sequential programming to concurrent programming.
§.§ Sequential Programming
Most of the programming models that have widespread adoption today are designed in von Neumann style[Functional and logic programming remain notable exceptions here, although their influence is minimal in comparison.]: computation revolves around mutable storage locations whose values vary with time, aptly called variables, and control statements are used to order assignment statements that mutate these storage locations. Programs are seen to progress in a particular order, with a single thread of execution, and terminate when they reach the end of the ordered list of statements.
§.§ Concurrent Programming
Concurrent programming extends sequential programming with multiple sequential threads of execution: this is done to leverage all available performance of the computer by allowing multiple tasks to execute at the same time. Each of these threads of execution could be executing in parallel, if multiple processors happened to be available, or a single processor could be alternating control between each of the threads of execution, giving a certain amount of execution time to each of the threads.
Concurrent programming is difficult. In the von Neumann model where shared memory locations are mutated by the sequential threads of execution, care must be taken to prevent uncontrolled access to these memory locations, as this may sacrifice correctness of the concurrent program. For instance, consider the case where two sequential threads of execution, executing in parallel, both read the same location, holding the value 1, and write back the value incremented by 1. It is trivial to observe that without controlled access to the shared memory location, both threads could write back 2, effectively “losing” one of the two valid updates to the counter.
Originally described by Dijkstra <cit.>, this “mutual exclusion” problem is the fundamental problem of concurrent computation with shared memory: how to provide a mechanism that allows correct programming where multiple sequential threads of execution can mutate memory that is visible to all of them. Dijkstra innovated many techniques in this area, but the most famous technique he introduced was that of the “mutex” or “mutual exclusion.”
Mutual exclusion is the process of acquiring an exclusive lock to a shared memory location to ensure safe mutation. Returning to our previous example, if each sequential thread of execution was to acquire an exclusive lock to the memory location before reading and updating the value, we no longer have to worry about the correctness of the application. However, mutual exclusion can be difficult to get right when multiple locks are required, if they are not handled correctly to avoid deadlock.
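The lost-update race above, and its classical fix, are a few lines of Python (illustrative only):

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    value = counter       # both threads may read the same value here...
    counter = value + 1   # ...and both write it back, losing one update

def safe_increment():
    global counter
    with lock:            # mutual exclusion: read-modify-write is atomic
        counter += 1
```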
But, how is the programmer to reason about whether their concurrent application has been programmed correctly? Given multiple threads of execution, and a nondeterministic scheduler, there exists an exponential number of possible schedules, or executions, the program may take, where programmers desire that each of these executions result in the same value, regardless of schedule.
Therefore, most programmers desire a correspondence commonly referred to as confluence. Confluence states, simply, that the evaluation order of a program does not impact its outcome. In terms of the correspondence, programmers ideally can write code in a sequential manner that can be executed concurrently, and possibly in parallel, and any of the possible schedules that it may take when executed results in the same outcome as the sequential execution.
From a formal perspective, we have several correctness criteria for expressing whether a concurrent execution was correct. For instance, sequential consistency <cit.> is a correctness criteria for a concurrent program that states that the execution and mutation of shared memory reflects the program order in the textual specification of the program; linearizability <cit.> states that a concurrent execution followed the real time order of shared memory accesses and strengthens the guarantees of sequential consistency.
§ DISTRIBUTED PROGRAMMING
Distributed programming, while superficially believed to be an extension of concurrent programming, has its own fundamental challenges that must be overcome.
§.§ The Reasons For Distribution
Distributed programming extends concurrent programming even further. Given a program that's already concurrent in its execution, programmers distribute the sequential threads of execution across multiple machines that are communicating on a network. Programmers do this for several reasons:
* Working set. The working data set or problem programmers are trying to solve will take too long to execute on a single machine or fit on a single machine in memory, and therefore programmers need to distribute the work across multiple machines.
* Data replication. Programmers need data replication, to ensure that a failure of one machine does not cause the failure of our entire application.
* Inherent distribution. Programmers' applications are inherently distributed; for example, a client application living on a mobile device being serviced by a server application living at a data center.
What programmers have learned from concurrent programming is that accesses to shared memory should be controlled: programmers may be tempted to use the techniques of concurrent programming, such as mutexes, monitors <cit.>, and semaphores, to control access to shared memory and perform principled, safe, mutation.
§.§ The Challenges of Distribution
On the surface, it appears that distribution is just an extension of concurrent programming: we have taken applications that relied on multiple threads of execution to work in concert to achieve a goal and only relocated the threads of execution to gain more performance and more computational resources. However, this point of view is fatally flawed.
As previously mentioned, the challenges of concurrent programming are the challenges of nondeterminism. The techniques pioneered by both Dijkstra and Hoare were mainly developed to ensure that nondeterminism in scheduling did not result in nondeterminism in program output. Normally, we do not want the same application, with fixed inputs, to return different output values across multiple executions because the scheduler happened to schedule the threads in different orders.
Distribution is fundamentally different from concurrent programming: machines that communicate on a network may be, at times, unreachable, completely failed, unable to answer a request, or, in the worst case, permanently destroyed. Therefore, it should follow that our existing tools are insufficient to solve the problems of distributed programming. We refer to these classes of failures in distributed programming as “partial failure”: in an effort to make machines appear as a single, logical unit of computation, individual machines that make up the whole may independently fail.
Distributed systems researchers have realized this, and have identified the core problem of distributed systems as the following: the agreement problem. The agreement problem takes two forms, duals of each other, namely:
* Leader election. The process of selecting an active leader amongst a group of nodes to act as a sequencer or coordinator of operations for those nodes; and
* Failure detection. The process of detecting a node that has failed and can no longer act as a leader.
These problems, and the problems of locking, are only exacerbated by two fundamental impossibility results in distributed computing: the CAP theorem <cit.> and the FLP result <cit.>.
The FLP result demonstrates that on a truly asynchronous system, agreement, when one process in the agreement process has failed, is impossible. To make this a bit more clear, when we can not determine how long a process will take to perform a step of computation, and we can not determine how long a message will take to arrive from a remote party on the network, there is no way to tell if the process is just delayed in responding or failed: we may have to wait an arbitrarily long amount of time.
FLP is solved in practice via randomized timeouts that introduce nondeterminism into the leader election process to prevent infinite elections. Algorithms that solve the agreement protocol, like the Raft consensus protocol <cit.> and the Paxos leader election algorithm <cit.> take these timeouts into account and take measures to prevent a seemingly faulty leader from sacrificing the correctness of a distributed application.
The CAP theorem states another fundamental result in distributed programming. CAP states, simply, that if we wish to maintain linearizability when working with replicated data, we cannot also guarantee that our applications will service requests to that shared memory when some of the processes in the system cannot communicate with each other. Therefore, for distributed applications to be able to continue to operate when not all of the processes in the system are able to communicate – such as when developing large-scale mobile applications or even simple applications where state is cached in a web browser – we have to sacrifice safe access to shared memory.
Both CAP and FLP incentivize developers to avoid using replicated, shared state, if that state needs to be synchronized to ensure consistent access to it. Applications that rely on shared state are bound to have reduced availability, because they either need to wait for timeouts related to failure detection or for the cost of coordinating changes across multiple replicas of the shared state.
§ TWO EXTREMES IN THE DESIGN SPACE
While development of large-scale distributed applications with both sequential and concurrent programming models has been widely successful in industry, most of these successes have been supported by systems that close the gap between a language that has distribution as a first-class citizen, and a concurrent language where tools that solve both failure detection and the agreement problem are used to augment the language.
For programming models to be successful for large-scale distributed programming, they need to embrace the tradeoffs of both the FLP result and the CAP theorem. We believe that there exists a space, in what we refer to as the boundaries of the CAP theorem, where a set of programming models that take into account the tradeoffs of the CAP theorem, can exist and flourish as systems for building distributed applications.
We now demonstrate two extremes in the design space. First, Lasp, a programming model that sacrifices consistency for availability. Second, Austere, a programming model that sacrifices availability for consistency. Both of these models sit at extreme sides of the spectrum proposed by the CAP theorem.
Both of these languages share a common design component: a datastore tracking local replicas of shared state, or “variable” state. To ensure recency of these replicas, a propagation mechanism is used: where strong consistency is required, this protocol may be driven by a consensus protocol such as Raft <cit.> or Paxos <cit.>; where weaker consistency is required, a simple anti-entropy protocol <cit.> may suffice. Where the models differ is in their evaluation semantics. Applications written in these models may choose to synchronize replicas before using a value in a computation or not, depending on whether the model prefers availability or consistency.
Each of these models is a small extension to the λ-calculus. This extension provides named registers pointing to locations in the data store. These locations designate their primary location and data type, and are dereferenced during substitution. In the event the replica has to be refreshed before evaluation, delimited continuations <cit.> are used as a method of interposition to insert the required synchronization code, managed by a scheduling thread. This same mechanism is used to periodically refresh values in the background either through anti-entropy sessions or consensus.
§.§ Lasp
Lasp <cit.> is a programming model designed as part of the SyncFree and LightKone EU projects <cit.> focusing on synchronization-free programming of large-scale distributed applications. Lasp sits at one extreme of the CAP theorem: Lasp will sacrifice consistency in order to remain available.
Lasp's only data abstraction is the Conflict-Free Replicated Data Type (CRDT) <cit.>. A CRDT is a replicated abstract data type that has a well defined merge operation for deterministically combining the state of all replicas, and Lasp builds upon one specific variant of CRDTs: state-based CRDTs. CRDTs guarantee that once all messages are delivered to all replicas, all replicas will converge to the same result.
With state-based CRDTs[Herein referred to as just CRDTs.], each data structure forms a bounded join semilattice, where the join operation computes the least-upper-bound for any two elements. While CRDTs come in a variety of flavors, like sets, counters, and flags (booleans), two main things must be kept in mind when specifying new CRDTs:
* CRDTs are replicated, and by that fact inherently concurrent. Therefore, when building a CRDT version of a set, the developer must define semantics for all possible pairs of concurrent operations: for instance, a concurrent addition and removal of the same element.
* To ensure replica convergence with minimal coordination, it follows from the join-semilattice that the join operation computes a least-upper-bound: therefore, all operations on CRDTs must be associative, commutative, and idempotent.
Lasp is a programming model that allows developers to do basic functional programming with CRDTs, without requiring application developers to work directly with the bounded join-semilattice structures themselves: in Lasp, a developer sees a CRDT set as a sequential set. Given all of the data structures in Lasp are CRDTs themselves, the output of Lasp applications are also CRDTs that can be joined to combine their results.
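To make these constraints concrete, consider a minimal state-based grow-only counter (a Python sketch for illustration; Lasp itself is implemented on the Erlang runtime):

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica; join is the
    element-wise max, which is associative, commutative and
    idempotent, so replicas converge in any message order."""

    def __init__(self, replica_id, n_replicas):
        self.i = replica_id
        self.slots = [0] * n_replicas

    def increment(self):
        self.slots[self.i] += 1   # only the local slot is ever mutated

    def value(self):
        return sum(self.slots)

    def join(self, other):
        self.slots = [max(a, b)
                      for a, b in zip(self.slots, other.slots)]
```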
Lasp never sacrifices availability: updates are always performed against a local replica and state is eventually merged with other replicas. Consistency is sacrificed: while replica convergence is ensured, it may take an arbitrarily long amount of time for convergence to be reached and updates may arrive in any order.
However, Lasp has several strong restrictions given the CRDT foundation that provides its availability: all operations must commute and all data structures must be able to be expressed as a bounded join-semilattice. This obviously rules out several common data structures, including one very important one: the list.
§.§ Austere
Austere is a programming model where all replicated, shared state is synchronized for every operation in the system. Austere sits at another extreme of the CAP theorem: Austere will sacrifice availability in order to preserve consistency.
Before any access or modification to replicated, shared state, Austere will contact all replicas using two-phase locking (2PL) <cit.> to ensure that reads are not interleaved with concurrent writes, and two-phase commit (2PC) <cit.> to commit all modifications. In the event that a replica cannot be contacted, the system will fail to make progress, ensuring a single system image: reducing distributed programming to a single sequential execution to ensure a consistent view across all replicas.
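A caricature of an Austere write illustrates the trade-off (Python sketch; the prepare/commit/abort replica interface is hypothetical):

```python
class ReplicaUnreachable(Exception):
    pass

def two_phase_commit(replicas, update):
    try:
        if all(r.prepare(update) for r in replicas):  # phase one: lock and vote
            for r in replicas:
                r.commit(update)                      # phase two: apply everywhere
            return True
    except ReplicaUnreachable:
        pass  # one unreachable replica blocks the write entirely
    for r in replicas:
        try:
            r.abort(update)  # best-effort rollback
        except ReplicaUnreachable:
            pass
    return False  # the write stays unavailable until all replicas return
```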
Compared to Lasp, Austere shares the common λ-calculus core; however, the gain of semantics in Austere is paid for by reduced availability.
§ “NEXT 700” DISTRIBUTED PROGRAMMING MODELS
The “next 700” <cit.> distributed programming models will sit between the bounds presented by Austere and Lasp: extreme availability where consistency is sacrificed vs. extreme consistency where availability is sacrificed (see Figure <ref>).
More practically, we believe that the most useful languages in this space will allow the developer to specifically trade-off between availability and consistency at the application-level. This is because the unreliability of the network, dynamicity of real networks, and production workloads require applications to remain flexible. These trade-offs should be specified declaratively, close to the application code. Application developers should not have to reason about transaction isolation levels or consistency models when writing application code.
We provide an example of one such language, Spry, that makes a trade-off between availability and consistency based on application-level semantics. We target this language for one use case in particular: Content Delivery Networks (CDNs), where application-level semantics can be used to declaratively specify Service Level Agreements (SLAs.)
§.§ Spry
Spry <cit.> is a programming model for building applications that want to tradeoff availability and consistency at varying points in application code to support application requirements.
We demonstrate two different types of tradeoffs that application developers might make in the same application. Consider the case of a Content Delivery Network (CDN), an extremely large-scale distributed application.
* Availability for consistency. In a CDN, the system tries to ensure that content that is older than a particular number of seconds will never be returned to the user. This is usually specified by the application developer explicitly, by checking the object's staleness and fetching the data from the origin before returning the response to the user.
* Consistency for availability. CDNs usually maintain partitioned inverted indexes that can be queried to search for a piece of content. Because nodes may become unavailable, or respond slowly because of high load, application developers may want to specify that a query returns results from a local cache if fetching the index over the network takes more than a particular amount of time. This is usually specified by the application developer explicitly, by setting a timer, attempting to retrieve the object over the network, reusing cached results if the latency bound can not be met, and returning the response to the user.
Application developers specify these constraints declaratively in Spry. If a replicated value should not be older than a particular number of milliseconds, developers can annotate these values with the bounded staleness requirements. If a replicated value should always be as fresh as it can be within a bound of a number of milliseconds, this can be specified as well. Similarly, these values can be tweaked while the application is running, allowing developers to adjust the system while it is operating, responding to failures or increased load.
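To illustrate, the following Python sketch mimics the two declarative trade-offs described above. Spry's actual syntax is not reproduced here; the cache layout, fetch_from_origin, and the timeout-aware fetch callable are all assumptions of the example.

```python
import time

CACHE = {"index": {"ts": time.time() - 10.0, "value": ["cached", "results"]}}

def fetch_from_origin(key):
    return ["fresh", "results"]          # stand-in for a network call

def read_bounded_staleness(key, max_age_s):
    # Availability traded for consistency: never serve data older than max_age_s.
    entry = CACHE[key]
    if time.time() - entry["ts"] > max_age_s:
        entry = {"ts": time.time(), "value": fetch_from_origin(key)}
        CACHE[key] = entry
    return entry["value"]

def read_latency_bound(key, budget_s, fetch):
    # Consistency traded for availability: past the deadline, serve the cache.
    try:
        return fetch(key, timeout=budget_s)
    except TimeoutError:
        return CACHE[key]["value"]       # possibly stale, but within the SLA

print(read_bounded_staleness("index", max_age_s=5.0))   # refetches: too stale
```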
§ CONCLUSION
We have seen that the move from sequential programming to concurrent programming was fairly straightforward: all that was required was a principled approach to state mutation through the use of techniques like locking to prevent values from being corrupted, which can lead to unsafe programs. However, the move to distributed programming is much more difficult because of the uncertainty that is inherent in distributed programming. For example: will this machine respond in time? Has this machine failed and is it able to respond?
Distributed programming is a different beast, and we need programming models that adapt accordingly. Can we come up with new abstractions and programming models that aid in expressing the application developer's intent declaratively?
We believe that it is possible. We have shown you three different programming model designs that all make different tradeoffs when nodes become unavailable. Two of these, Lasp and Austere, provide the boundaries in model design that are aligned with the constraints of the CAP theorem. One of these, Spry, takes a declarative approach that puts the tradeoffs in the hands of the application developer. All three of these can cohabit the same underlying concurrent language.
§ ACKNOWLEDGEMENTS
We want to thank Zeeshan Lakhani, Justin Sheehy, Andrew J. Stone, and Zach Tellman for their feedback.
|
http://arxiv.org/abs/1701.07851v1 | 20170126192355 | Human-Robot Mutual Adaptation in Shared Autonomy | [
"Stefanos Nikolaidis",
"Yu Xiang Zhu",
"David Hsu",
"Siddhartha Srinivasa"
] | cs.RO | [
"cs.RO"
] |
Human-Robot Mutual Adaptation in Shared Autonomy
Stefanos Nikolaidis
Carnegie Mellon University
[email protected]
Yu Xiang Zhu
Carnegie Mellon University
[email protected]
David Hsu
National University of Singapore
[email protected]
Siddhartha Srinivasa
Carnegie Mellon University
[email protected]
December 30, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================
Shared autonomy integrates user input with robot autonomy in order to control a robot and help the user to complete a task. Our work aims to improve the performance of such a human-robot team: the robot tries to guide
the human towards an effective strategy, sometimes against the human's own preference, while still retaining their trust. We achieve this through a principled human-robot mutual adaptation formalism. We integrate a bounded-memory adaptation model of the human into a partially observable stochastic decision model, which enables the robot to adapt to an adaptable human. When the human is adaptable, the robot guides the human towards a good strategy, possibly unknown to the human in advance. When the human is stubborn and not adaptable, the robot complies with the human's preference in order to retain their trust. In the shared autonomy setting, unlike many other common human-robot collaboration settings, only the robot actions can change the physical state of the world, and the human and robot goals are not fully observable. We address these challenges and show in a human subject experiment that the proposed mutual adaptation formalism improves human-robot team performance, while retaining a high level of user trust in the robot, compared to the common approach of having the robot strictly follow participants' preference.
§ INTRODUCTION
Assistive robot arms show great promise in increasing the independence of
people with upper extremity disabilities <cit.>. However, when a user teleoperates
directly a robotic arm via an interface such as a joystick, the limitation of
the interface, combined with the increased capability and complexity of robot
arms, often makes it difficult or tedious to accomplish complex tasks.
Shared autonomy alleviates this issue by combining direct teleoperation with
autonomous assistance <cit.>. In recent
work by Javdani et al., the robot estimates a distribution of user goals based on the history of user inputs, and assists the user for that distribution <cit.>. The user is assumed to be always right about their goal choice. Therefore, if the assistance strategy knows the user's goal, it will select actions to minimize the cost-to-go to that goal.
This assumption is often not true, however. For instance, a user may choose an unstable grasp when picking up an object (Fig. <ref>), or they may arrange items in the wrong order by stacking a heavy item on top of a fragile one. Fig. <ref> shows a shared autonomy scenario, where the user teleoperates the robot towards the left bottle. We assume that the robot knows the optimal goal for the task: picking up the right bottle is a better choice, for instance because the left bottle is too heavy, or because the robot has less uncertainty about the right bottle's location. Intuitively, if the human insists on the left bottle, the robot should comply; failing to do so can have a negative effect on the user's trust in the robot, which may lead to disuse of the system <cit.>. If the human is willing to adapt by aligning their actions with the robot's, which has been observed in adaptation between humans and artifacts <cit.>, the robot should insist towards the optimal goal. The human-robot team then exhibits a mutually adaptive behavior, where the robot adapts its own actions by reasoning over the adaptability of the human teammate.
Nikolaidis et al. <cit.> proposed a mutual
adaptation formalism in collaborative tasks, e.g., when a human and a robot work together to carry a table out of the room. The robot builds a Bounded-memory Adaptation Model (BAM) of the human teammate, and it integrates the model into a partially observable stochastic process, which enables robot adaptation to the human: If the user is adaptable, the robot will disagree with them, expecting them to switch towards the optimal goal. Otherwise, the robot will align its actions with the user policy, thus retaining their trust.
A characteristic of many collaborative settings is that human and robot both affect the world state, and that disagreement between the human and the robot impedes task completion. For instance, in the table-carrying example, if human and robot attempt to move the table in opposing directions with equal force, the table will not move and the task will not progress. Therefore, a robot that optimizes the task completion time will account for the human adaptability implicitly in its optimization process: if the human is non-adaptable, the only way for the robot to complete the task is to follow the human goal. This is not the case in a shared-autonomy setting, since the human actions do not affect the state of the task. Therefore, a robot that solely maximizes task performance in that setting will always move towards the optimal goal, ignoring human inputs altogether.
In this work, we propose a generalized human-robot mutual adaptation formalism, and we formulate mutual adaptation in the collaboration and shared-autonomy settings as instances of this formalism.
We identify that in the shared-autonomy setting (1) tasks may typically exhibit less structure than in the collaboration domain, which limits the observability of the user's intent, and (2) only robot actions directly affect task progress. We address the first challenge by including the operator goal as an additional latent variable in a mixed-observability Markov decision process (MOMDP) <cit.>. This allows the robot to maintain a probability distribution over the user goals based on the history of operator inputs. We also take into account the uncertainty that the human has on the robot goal by modeling the human as having a probability distribution over the robot goals (Sec. <ref>). We address the second challenge by proposing an explicit penalty for disagreement in the reward function that the robot is maximizing (Sec. <ref>). This allows the robot to infer simultaneously the human goal and the human adaptability, reason over how likely the human is to switch their goal based on the robot actions, and guide the human towards the optimal goal while retaining their trust.
We conducted a human subject experiment (n=51) with an assistive robotic arm on a table-clearing task. Results show that the proposed formalism significantly improved human-robot team performance, compared to the robot following participants' preference, while retaining a high level of human trust in the robot.
§ PROBLEM SETTING
A human-robot team can be treated as a multi-agent system, with world state x_world∈ X_world, robot action a_r ∈ A_r, and human action a_h ∈ A_h. The system evolves according to a stochastic state transition function T : X_world× A_r × A_h →Π(X_world). Additionally, we model the user as having a goal, among a discrete set of goals g ∈ G. We assume access to a stochastic joint policy for each goal, which we call modal policy, or mode m ∈ M. We call m_h the modal policy that the human is following at a given time-step, and m_r the robot mode, which is the [perceived by the human] robot policy towards a goal. The human mode, m_h ∈ M, is not fully observable. Instead, the robot has uncertainty over the user's policy, which can be modeled as a Partially Observable Markov Decision Process (POMDP). Observations in the POMDP correspond to human actions a_h ∈ A_h. Given a sequence of human inputs, we infer a distribution over user modal policies using an observation function O(a_h | x_world, m_h).
Contrary to previous work in modeling human intention <cit.> and in shared autonomy <cit.>, the user goal is not static. Instead, we define a transition function T_m_h : M × H_t × X_world× A_r →Π(M), where h_t is the history of states, robot and human actions (x^0_world, a^0_r,a^0_h, … , x^t-1_world, a^t-1_r, a^t-1_h ). The function models how the human mode may change over time. At each time step, the human-robot team receives a real-valued reward that in the general case also depends on the human mode m_h and history h_t: R(m_h, h_t, x_world, a_r, a_h). The reward captures both the relative cost of each goal g ∈ G, as well as the cost of disagreement between the human and the robot. The robot goal is then to maximize the expected total reward over time: ∑_t=0^∞γ^t R(t), where the discount factor γ∈ [ 0,1) gives higher weight to immediate rewards than future ones.
Computing the maximization is hard: Both T_m_h and R depend on the whole history of states, robot and human actions h_t. We use the Bounded-memory Adaptation Model <cit.> to simplify the problem.
§.§ Bounded-Memory Adaptation Model
Nikolaidis et al. <cit.> proposed the Bounded-memory Adaptation Model (BAM). The model is based on the assumption of “bounded rationality”, first proposed by Herbert Simon: people often do not have the time and cognitive capabilities to make perfectly rational decisions <cit.>. In game theory, bounded rationality has been modeled by assuming that players have a “bounded memory” or “bounded recall” and base their decisions on recent observations <cit.>.
The BAM model allows us to simplify the problem, by modeling the human as making decisions not on the whole history of interactions, but on the last k interactions with the robot. This allows us to simplify the transition function T_m_h and reward function R defined in the previous section, so that they depend on the history of the last k time-steps only. Additionally, BAM provides a parameterization of the transition function T_m_h, based on the parameter α∈𝒜, which is the human adaptability. The adaptability represents one's inclination to adapt to the robot. With the BAM assumptions, we have T_m_h : M ×𝒜× H_k × X_world× A_r →Π(M) and R: M × H_k × X_world× A_r × A_h →ℝ. We describe the implementation of T_m_h in Sec. <ref> and of R in Sec. <ref>.
§.§ Shared Autonomy
The shared autonomy setting allows us to further simplify the general problem: the world state consists only of the robot configuration x_r ∈ X_r, so that x_r ≡ x_world. A robot action induces a deterministic change in the robot configuration. The human actions a_h ∈ A_h are inputs through a joystick interface and do not affect the world state. Therefore, the transition function of the system is deterministic and can be defined as T : X_r× A_r → X_r.
§.§ Collaboration
We include the collaboration setting formalism for completeness. Contrary to the shared-autonomy setting, both human and robot actions affect the world state, and the transition function can be deterministic or stochastic. In the deterministic case, it is T: X_world× A_r × A_h → X_world. Additionally, the reward function does not require a penalty for disagreement between the human and robot modes; instead, it can depend only on the relative cost for each goal, so that R : X_world→ℝ. Finally, if the task exhibits considerable structure, the modes may be directly observed from the human and robot actions. In that case, the robot does not maintain a distribution over modal policies.
§ HUMAN AND ROBOT MODE INFERENCE
When the human provides an input through a joystick interface, the robot makes an inference on the human mode. In the example table-clearing task of Fig. <ref>, if the robot moves to the right, the human will infer that the robot follows a modal policy towards the right bottle. Similarly, if the human moves the joystick to the left, the robot will consider more likely that the human follows a modal policy towards the left bottle. In this section, we formalize the inference that human and robot make on each other's goals.
§.§ Stochastic Modal Policies
In the shared autonomy setting, there can be a very large number of modal policies that lead to the same goal. We use as example the table-clearing task of Fig. <ref>. We let G_L represent the left bottle, G_R the right bottle, and S the starting end-effector position of the robot. Fig. <ref>-left shows paths from three different modal policies that lead to the same goal G_L. Accounting for a large set of modes can increase the computational cost, in particular if we assume that the human mode is partially observable (Section <ref>).
Therefore, we define a modal policy as a stochastic joint-policy over human and robot actions, so that m : X_r × H_t→Π(A_r) ×Π(A_h). A stochastic modal policy compactly represents a probability distribution over paths and allows us to reason probabilistically about the future actions of an agent that does not move in a perfectly predictable manner. For instance, we can use the principle of maximum entropy to create a probability distribution over all paths from start to the goal <cit.>. While a stochastic modal policy represents the uncertainty of the observer over paths, we do not require the agent to actually follow a stochastic policy.
§.§ Full Observability Assumption
While m_r, m_h can be assumed to be observable for a variety of structured tasks in the collaboration domain <cit.>, this is not the case for the shared autonomy setting because of the following factors:
Different policies invoke the same action. Assume two modal policies in Fig. <ref>, one for the left goal shown in red in Fig. <ref>-left, and a symmetric policy for the right goal (not shown). An agent moving upwards (Figure <ref>-right) could be following either of the two with equal probability. In that case, inference of the exact modal policy without any prior information is impossible, and the observer needs to maintain a uniform belief over the two policies.
Human inputs are noisy. The user provides its inputs to the system through a joystick interface. These inputs are noisy: the user may “overshoot” an intended path and correct their input, or move the joystick in the wrong direction. In the fully observable case, this would result in an incorrect inference of the human mode. Maintaining a belief over modal policies allows robustness to human mistakes.
This leads us to assume that modal policies are partially observable. We model how the human infers the robot mode, as well as how the robot infers the human mode, in the following sections.
§.§ Robot Mode Inference
The bounded-memory assumption dictates that the human does not recall the whole history of states and actions, but only a recent history of the last k time-steps. The human attributes the robot actions to a robot mode m_r.
P(m_r | h_k, x_r^t, a_r^t) = P (m_r | x_r^t-k+1, a_r^t-k+1, ... , x_r^t, a_r^t)
= η P (a_r^t-k+1, ... , a_r^t| m_r, x_r^t-k+1, ... , x_r^t)
In this work, we consider modal policies that generate actions based only on the current world state, so that
M : X_r →Π(A_h) ×Π(A_r).
Therefore Eq. <ref> can be simplified as follows, where m_r(x^t_r,a^t_r) denotes the probability of the robot taking action a_r at time t, if it follows modal policy m_r:
P(m_r | h_k, x_r^t, a_r^t) = η m_r(x_r^t-k+1,a_r^t-k+1) ... m_r(x_r^t,a_r^t)
P(m_r | h_k,x_r^t,a_r^t) is the [estimated by the robot] human belief on the robot mode m_r.
§.§ Human Mode Inference
To infer the human mode, we need to implement the dynamics model T_m_h that describes how the human mode evolves over time, and the observation function O that allows the robot to update its belief on the human mode from the human actions.
In Sec. <ref> we defined a transition function T_m_h, that indicates the probability of the human switching from mode m_h to a new mode m'_h, given a history h_k and their adaptability α. We simplify the notation, so that x_r ≡ x^t_r, a_r ≡ a^t_r and x ≡ (h_k, x_r):
T_m_h(x, α,m_h, a_r, m'_h)
= P(m'_h | x, α, m_h,a_r)
=∑_m_r P(m'_h, m_r |x,α,m_h,a_r)
= ∑_m_r P(m'_h | x,α,m_h, a_r,m_r) × P(m_r | x,α,m_h,a_r)
= ∑_m_r P(m'_h | α, m_h, m_r)× P(m_r | x,a_r)
The first term gives the probability of the human switching to a new mode m'_h, if the human mode is m_h and the robot mode is m_r.
Based on the BAM model <cit.>, the human switches to m_r with probability α and stays at m_h with probability 1-α. Nikolaidis et al. <cit.> define α as the human adaptability, which represents their inclination to adapt to the robot. If α=1, the human switches to m_r with certainty. If α=0, the human insists on their mode m_h and does not adapt. Therefore:
P(m'_h|α,m_h,m_r) =
α : m'_h ≡ m_r
1-α : m'_h ≡ m_h
0 : otherwise
The second term in Eq. <ref> is computed using Eq. <ref>, and it is the [estimated by the human] robot mode.
Eq. <ref> shows that the probability of the human switching to the robot mode m_r depends on the human adaptability α, as well as on the uncertainty that the human has about the robot following m_r. This allows the robot to compute the probability of the human switching to the robot mode, given each robot action.
The observation function O : X_r× M →Π(A_h) defines a probability distribution over human actions a_h. This distribution is specified by the stochastic modal policy m_h ∈ M. Given the above, the human mode m_h can be estimated by a Bayes filter, with b(m_h) the robot's previous belief on m_h:
b'(m'_h) = η O(m'_h, x'_r, a_h)∑_m_h ∈ MT_m_h(x, α, m_h,a_r, m'_h) b(m_h)
In this section, we assumed that α is known to the robot. In practice, the robot needs to estimate both m_h and α. We formulate this in Sec. <ref>.
§ DISAGREEMENT BETWEEN MODES
In the previous section we formalized the inference that human and robot make on each other's goals. Based on that, the robot can infer the human goal and it can reason over how likely the human is to switch goals given a robot action.
Intuitively, if the human insists on their goal, the robot should follow the human goal, even if it is suboptimal, in order to retain human trust. If the human is willing to change goals, the robot should move towards the optimal goal. We enable this behavior by proposing in the robot's reward function a penalty for disagreement between human and robot modes. The intuition is that if the human is non-adaptable, they will insist on their own mode throughout the task, therefore the expected accumulated cost of disagreeing with the human will outweigh the reward of the optimal goal. In that case, the robot will follow the human preference. If the human is adaptable, the robot will move towards the optimal goal, since it will expect the human to change modes.
We formulate the reward function that the robot is maximizing, so that there is a penalty for following a mode that is perceived to be different than the human's mode.
R(x, m_h, a_r) =
R_goal : x_r ∈ G
R_other : x_r ∉ G
If the robot is at a goal state x_r ∈ G, a positive reward associated with that goal is returned, regardless of the human mode m_h and robot mode m_r. Otherwise, there is a penalty C<0 for disagreement between m_h and m_r, induced in R_other. The human does not observe m_r directly, but estimates it from the recent history of robot states and actions (Sec. <ref>). Therefore, R_other is computed so that the penalty for disagreement is weighted by the [estimated by the human] probability of the robot actually following m_r:
R_other = ∑_m_rR_m(m_h,m_r)P(m_r | x,a_r)
where R_m(m_h, m_r) =
0 : m_h ≡ m_r
C : m_h ≠ m_r
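As an illustration, the reward above reduces to a few lines; this is a schematic sketch only, with the goal rewards taken from the experiment section below and the perceived-mode probability replaced by a placeholder for the robot-mode inference of Sec. <ref>.

```python
# Schematic sketch of the reward above; state names are invented placeholders.
GOAL_REWARD = {"x_goal_opt": 11.0, "x_goal_sub": 10.0}   # R_opt, R_sub

def p_perceived_robot_mode(m_r, history_k):
    return 0.5   # placeholder for P(m_r | h_k, x_r, a_r)

def reward(x_r, m_h, a_r, history_k, C=-0.32):
    if x_r in GOAL_REWARD:                       # R_goal: reward of the goal
        return GOAL_REWARD[x_r]
    # R_other: penalty C weighted by the perceived chance of disagreement,
    # i.e. C * (1 - P(m_r = m_h | history, a_r)).
    return C * (1.0 - p_perceived_robot_mode(m_h, history_k + [a_r]))

print(reward("x_mid", "m_L", "move_left", []))   # -> -0.16
```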
§ HUMAN-ROBOT MUTUAL ADAPTATION FORMALISM
§.§ MOMDP Formulation
In Section <ref>, we showed how the robot estimates the human mode, and how it computes the probability of the human switching to the robot mode based on the human adaptability. In Section <ref>, we defined a reward function that the robot is maximizing, which captures the trade-off between going to the optimal goal and following the human mode. Both the human adaptability and the human mode are not directly observable. Therefore, the robot needs to estimate them through interaction, while performing the task. This leads us to formulate this problem as a mixed-observability Markov Decision Process (MOMDP) <cit.>. This formulation allows us to compute an optimal policy for the robot that will maximize the expected reward that the human-robot team will receive, given the robot's estimates of the human adaptability and of the human mode. We define a MOMDP as a tuple {X, Y, A_r, 𝒯_x, 𝒯_α, 𝒯_m_h,R,Ω, O}:
* X: X_r × A_r^k is the set of observable variables. These are the current robot configuration x_r, as well as the history h_k. Since x_r transitions deterministically, we only need to register the current robot state and robot actions a^t-k+1_r, ... ,a^t_r.
* Y: 𝒜× M is the set of partially observable variables. These are the human adaptability α∈ A, and the human mode m_h ∈ M.
* A_r is a finite set of robot actions. We model actions as transitions between discrete robot configurations.
* 𝒯_x: X × A_r⟶ X is a deterministic mapping from a robot configuration x_r, history h_k and action a_r, to a subsequent configuration x'_r and history h'_k.
* 𝒯_α: 𝒜× A_r⟶Π(𝒜) is the probability of the human adaptability being α' at the next time step, if the adaptability of the human at time t is α and the robot takes action a_r. We assume the human adaptability to be fixed throughout the task.
* 𝒯_m_h: X ×𝒜× M× A_r ⟶Π(M) is the probability of the human switching from mode m_h to a new mode m'_h, given a history h_k, robot state x_r, human adaptability α and robot action a_r. It is computed using Eq. <ref>, Sec. <ref>.
* R : X × M× A_r ⟶ℝ is a reward function that gives an immediate reward for the robot taking action a_r given a history h_k, human mode m_h and robot state x_r. It is defined in Eq. <ref>, Sec. <ref>.
* Ω is the set of observations that the robot receives. An observation is a human input a_h ∈ A_h (Ω≡ A_h).
* O : M × X_r ⟶Π(Ω) is the observation function, which gives a probability distribution over human actions for a mode m_h at state x_r. This distribution is specified by the stochastic modal policy m_h ∈ M.
§.§ Belief Update
Based on the above, the belief update for the MOMDP is <cit.>:
b'(α', m_h') = η O(m_h',x'_r, a_h) ∑_α∈𝒜∑_m_h ∈ M𝒯_x(x, a_r, x')
𝒯_α(α,a_r,α')𝒯_m_h(x, α,m_h,a_r,m'_h)b(α,m_h)
We note that since the MOMDP has two partially observable variables, α and m_h, the robot maintains a joint probability distribution over both variables.
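For concreteness, the joint belief update over (α, m_h) can be sketched as below. The modal-policy likelihoods and the human's estimate of the robot mode are placeholder stand-ins; only the update structure follows the equations above.

```python
# Numeric sketch of the joint belief update over (alpha, m_h).
ALPHAS = [0.0, 0.25, 0.5, 0.75, 1.0]
MODES = ["m_L", "m_R"]

def p_robot_mode(m_r, history):
    return 1.0 / len(MODES)     # placeholder for P(m_r | h_k, x_r, a_r)

def T_mh(m_next, alpha, m_h, history):
    # Switch to the perceived robot mode with prob. alpha, stay with 1 - alpha,
    # marginalised over the robot modes m_r.
    p = 0.0
    for m_r in MODES:
        if m_next == m_r:
            p += alpha * p_robot_mode(m_r, history)
        if m_next == m_h:
            p += (1.0 - alpha) * p_robot_mode(m_r, history)
    return p

def obs(m_h, a_h):
    # O(m_h, x_r, a_h): likelihood of the human input under modal policy m_h.
    table = {"m_L": {"left": 0.8, "fwd": 0.2}, "m_R": {"right": 0.8, "fwd": 0.2}}
    return table[m_h].get(a_h, 1e-6)

def belief_update(b, a_h, history):
    # alpha is fixed over time (T_alpha is the identity), so it carries over.
    b_new = {}
    for a in ALPHAS:
        for m_next in MODES:
            s = sum(T_mh(m_next, a, m, history) * b[(a, m)] for m in MODES)
            b_new[(a, m_next)] = obs(m_next, a_h) * s
    z = sum(b_new.values())
    return {k: v / z for k, v in b_new.items()}

b = {(a, m): 1.0 / (len(ALPHAS) * len(MODES)) for a in ALPHAS for m in MODES}
b = belief_update(b, "left", history=[])   # mass shifts towards m_h = m_L
```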
§.§ Robot Policy
We solve the MOMDP for a robot policy π^*_r(b) that is optimal with respect to the robot's expected total reward.
The stochastic modal policies may assign multiple actions at a given state. Therefore, even if m_h ≡ m_r, a_r may not match the human input a_h. Such disagreements are unnecessary when human and robot modes are the same. Therefore, we let the robot actions match the human inputs, if the robot has enough confidence that robot and human modes (computed using Eq. <ref>, <ref>) are identical in the current time-step. Otherwise, the robot executes the action specified by the MOMDP optimal policy. We leave for future work adding a penalty for disagreement between actions, which we hypothesize it would result in similar behavior.
§.§ Simulations
Fig. <ref> shows the robot behavior for two simulated users, one with low adaptability (User 1, α = 0.0), and one with high adaptability (User 2, α = 0.75) for a shared autonomy scenario with two goals, G_L and G_R, corresponding to modal policies m_L and m_R respectively. Both users start with modal policy m_L (left goal). The robot uses the human input to estimate both m_h and α. We set a bounded memory of k=1 time-step. If human and robot disagree and the human insists on their modal policy, then the MOMDP belief is updated so that smaller values of adaptability α have higher probability (lower adaptability). If the human aligns their inputs with the robot mode, larger values become more likely. If the robot infers the human to be adaptable, it moves towards the optimal goal. Otherwise, it complies with the human, thus retaining their trust.
Fig. <ref> shows the team-performance over α, averaged over 1000 runs with simulated users. We evaluate performance by the reward of the goal achieved, where R_opt is the reward for the optimal and R_sub for the sub-optimal goal. We see that the more adaptable the user, the more often the robot will reach the optimal goal. Additionally, we observe that for α = 0.0, the performance is higher than R_sub. This is because the simulated user may choose to move forward in the first time-steps; when the robot infers that they are stubborn, it is already close to the optimal goal and continues moving to that goal.
§ HUMAN SUBJECT EXPERIMENT
We conduct a human subject experiment (n=51) in a shared autonomy setting. We are interested in showing that the human-robot mutual adaptation formalism can improve the performance of human-robot teams, while retaining high levels of perceived collaboration and trust in the robot in the shared autonomy domain.
On one extreme, we “fix” the robot policy, so that the robot always moves towards the optimal goal, ignoring human adaptability. We hypothesize that this will have a negative effect on human trust and perceived robot performance as a teammate. On the other extreme, we have the robot assist the human in achieving their desired goal.
We show that the proposed formalism achieves a trade-off between the two: when the human is non-adaptable, the robot follows the human preference. Otherwise, the robot insists on the optimal way of completing the task, leading to significantly better policies, compared to following the human preference, while achieving a high level of trust.
§.§ Independent Variables
No-adaptation session. The robot executes a fixed policy, always acting towards the optimal goal.
Mutual-adaptation session. The robot executes the MOMDP policy of Sec. <ref>.
One-way adaptation session. The robot estimates a distribution over user goals, and adapts to the user following their preference, assisting them for that distribution <cit.>. We compute the robot policy in that condition by fixing the adaptability value to 0 in our model and assigning equal reward to both goals.
§.§ Hypotheses
H1 The performance of teams in the No-adaptation condition will be better than of teams in the Mutual-adaptation condition, which will in turn be better than of teams in the One-way adaptation condition. We expected teams in the No-adaptation condition to outperform the teams in the other conditions, since the robot will always go to the optimal goal. In the Mutual-adaptation condition, we expected a significant number of users to adapt to the robot and switch their strategy towards the optimal goal. Therefore, we posited that this would result in an overall higher reward, compared to the reward resulting from the robot following the participants' preference throughout the task (One-way adaptation).
H2 Participants that work with the robot in the One-way adaptation condition will rate their trust in the robot, as well as their perceived collaboration with the robot, higher than participants in the Mutual-adaptation condition. Additionally, participants in the Mutual-adaptation condition will give higher ratings than participants in the No-adaptation condition.
We expected users to trust the robot more in the One-way adaptation condition than in the other conditions, since in that condition the robot will always follow their preference. In the Mutual-adaptation condition, we expected users to trust the robot more and perceive it as a better teammate, compared with the robot that executed a fixed strategy ignoring users' adaptability (No-adaptation). Previous work in collaborative tasks has shown a significant improvement in human trust when the robot had the ability to adapt to its human partner.
§.§ Experiment Setting: A Table Clearing Task
Participants were asked to clear a table off two bottles placed symmetrically, by providing inputs to a robotic arm through a joystick interface (Fig. <ref>). They controlled the robot in Cartesian space by moving it in three different directions: left, forward and right. We first instructed them in the task, and asked them to do two training sessions, where they practiced controlling the robot with the joystick. We then asked them to choose which of the two bottles they would like the robot to grab first, and we set the robot policy, so that the other bottle was the optimal goal. This emulates a scenario where, for instance, the robot would be unable to grasp one bottle without dropping the other, or where one bottle would be heavier than the other and should be placed in the bin first. In the one-way and mutual adaptation conditions, we told them that “the robot has a mind of its own, and it may choose not to follow your inputs.” Participants then did the task three times in all conditions, and then answered a post-experimental questionnaire that used a five-point Likert scale to assess their responses to working with the robot. Additionally, in a video-taped interview at the end of the task, we asked participants that had changed strategy during the task to justify their action.
§.§ Subject Allocation
We recruited 51 participants from the local community, and chose a between-subjects design in order to not bias the users with policies from previous conditions.
§.§ MOMDP Model
The size of the observable state-space X was 52 states. We empirically found that a history length of k=1 in BAM was sufficient for this task, since most of the subjects that changed their preference did so reacting to the previous robot action. The human and robot actions were {move-left, move-right, move-forward}. We specified two stochastic modal policies {m_L, m_R}, one for each goal. We additionally assumed a discrete set of values of the adaptability α : {0.0,0.25,0.5,0.75,1.0}. Therefore, the total size of the MOMDP state-space was 5 × 2 × 52 = 520 states. We selected the reward so that R_opt = 11 for the optimal goal, R_sub = 10 for the suboptimal goal, and C = -0.32 for the cost of mode disagreement (Eq. <ref>). We computed the robot policy using the SARSOP solver <cit.>, a point-based approximation algorithm which, combined with the MOMDP formulation, can scale up to hundreds of thousands of states <cit.>.
§ ANALYSIS
§.§ Objective Measures
We consider hypothesis H1, that the performance of teams in the No-adaptation condition will be better than of teams in the Mutual-adaptation condition, which in turn will be better than of teams in the One-way adaptation condition.
Nine participants out of 16 (56%) in the Mutual-adaptation condition guided the robot towards the optimal goal, which was different than their initial preference, during the final trial of the task, while 12 out of 16 (75%) did so at one or more of the three trials. From the participants that changed their preference, only one stated that they did so for reasons irrelevant to the robot policy. On the other hand, only two participants out of 17 in the One-way adaptation condition changed goals during the task, while 15 out of 17 guided the robot towards their preferred, suboptimal goal in all trials. This indicates that the adaptation observed in the Mutual-adaptation condition was caused by the robot behavior.
We evaluate team performance by computing the mean reward over the three trials, with the reward for each trial being R_opt if the robot reached the optimal goal and R_sub if the robot reached the suboptimal goal (Fig. <ref>-left). As expected, a Kruskal-Wallis H test showed that there was a statistically significant difference in performance among the different conditions (χ^2(2) = 39.84, p < 0.001). Pairwise two-tailed Mann-Whitney-Wilcoxon tests with Bonferroni corrections showed the difference to be statistically significant between the No-adaptation and Mutual-adaptation (U = 28.5, p < 0.001), and Mutual-adaptation and One-way adaptation (U = 49.5, p = 0.001) conditions. This supports our hypothesis.
§.§ Subjective Measures
Recall hypothesis H2, that participants in the Mutual-adaptation condition would rate their trust and perceived collaboration with the robot higher than in the No-adaptation condition, but lower than in the One-way adaptation condition. Table I shows the two subjective scales that we used. The trust scales were used as-is from <cit.>. We additionally chose a set of questions related to participants' perceived collaboration with the robot.
Both scales had good consistency. Scale items were combined into a score. Fig. <ref>-center shows that both participants' trust (M=3.94, SE=0.18) and perceived collaboration (M=3.91, SE=0.12) were high in the Mutual-adaptation condition. One-way ANOVAs showed a statistically significant difference between the three conditions in both trust (F(2,48)=8.370, p = 0.001) and perceived collaboration (F(2,48)=9.552, p < 0.001). Tukey post-hoc tests revealed that participants of the Mutual-adaptation condition trusted the robot more, compared to participants that worked with the robot in the No-adaptation condition (p = 0.010). Additionally, they rated higher their perceived collaboration with the robot (p = 0.017). However, there was no significant difference in either measure between participants in the One-way adaptation and Mutual-adaptation conditions. We attribute these results to the fact that the MOMDP formulation allowed the robot to reason over its estimate of the adaptability of its teammate; if the teammate insisted towards the suboptimal goal, the robot responded to the input commands and followed the user's preference. If the participant changed their inputs based on the robot actions, the robot guided them towards the optimal goal, while retaining a high level of trust. By contrast, the robot in the No-adaptation condition always moved towards the optimal goal ignoring participants' inputs, which in turn had a negative effect on subjective measures.
§ DISCUSSION
In this work, we proposed a human-robot mutual adaptation formalism in a shared autonomy setting. In a human subject experiment, we compared the policy computed with our formalism, with an assistance policy, where the robot helped participants to achieve their intended goal, and with a fixed policy where the robot always went towards the optimal goal.
As Fig. <ref> illustrates, participants in the One-way adaptation condition had the worst performance, since they guided the robot towards a suboptimal goal. The fixed policy achieved maximum performance, as expected. However, this came to the detriment of human trust in the robot. On the other hand, the assistance policy in the One-way adaptation condition resulted in the highest trust ratings — albeit not significantly higher than the ratings in the Mutual-adaptation condition — since the robot always followed the user preference and there was no goal disagreement between human and robot. Mutual-adaptation balanced the trade-off between optimizing performance and retaining trust: users in that condition trusted the robot more than in the No-adaptation condition, and performed better than in the One-way adaptation condition.
Fig. <ref>-right shows the three conditions with respect to trust and performance scores. We can make the MOMDP policy identical to either of the two policies in the end-points, by changing the MOMDP model parameters. If we fix in the model the human adaptability to 0 and assign equal costs for both goals, the robot would assist the user in their goal (One-way adaptation). If we fix adaptability to 1 in the model (or we remove the penalty for mode disagreement), the robot will always go to the optimal goal (fixed policy).
The presented table-clearing task can be generalized without significant modifications to tasks with a large number of goals, human inputs and robot actions, such as picking good grasps in manipulation tasks (Fig. <ref>): The state-space size increases linearly with (1/dt), where dt is a discrete time-step, and with the number of modal policies. On the other hand, the number of observable states is polynomial in the number of robot actions (O(A_r^k)), since each state includes the history h_k: For tasks with large |A_r| and memory length k, we could approximate h_k using feature-based representations.
Overall, we are excited to have brought about a better understanding of the relationships between adaptability, performance and trust in a shared autonomy setting. We are very interested in exploring applications of these ideas beyond assistive robotic arms, to powered wheelchairs, remote manipulators, and generally to settings where human inputs are combined with robot autonomy.
§ ACKNOWLEDGMENTS
We thank Michael Koval, Shervin Javdani and Henny Admoni for the very helpful discussion and advice.
§ FUNDING
This work was funded by the DARPA SIMPLEX program through ARO contract number 67904LSDRP, National Institute of Health R01 (#R01EB019335), National Science Foundation CPS (#1544797), and the Office of Naval Research. We also acknowledge the Onassis Foundation as a sponsor.
|
http://arxiv.org/abs/1701.07537v1 | 20170126013050 | Radial length, radial John disks and $K$-quasiconformal harmonic mappings | [
"Shaolin Chen",
"Saminathan Ponnusamy"
] | math.CV | [
"math.CV",
"Primary: 30C62, 30C75, Secondary: 30C20, 30C25, 30C45, 30F45, 30H10"
] |
|
http://arxiv.org/abs/1701.07800v2 | 20170126181011 | A multilinear reverse Hölder inequality with applications to multilinear weighted norm inequalities | [
"David Cruz-Uribe",
"Kabe Moen"
] | math.CA | [
"math.CA",
"42B20, 42B25"
] |
|
http://arxiv.org/abs/1701.07778v1 | 20170126171801 | On Number of Rich Words | [
"Josef Rukavicka"
] | math.CO | [
"math.CO",
"68R15"
] |
On Number of Rich Words
Josef Rukavicka
=======================
Any finite word w of length n contains at most n+1 distinct palindromic factors. If the bound n+1 is reached, the word w is called rich.
The number of rich words of length n over an alphabet of cardinality q is denoted R_n(q). For binary alphabet, Rubinchik and Shur deduced that R_n(2)≤ c 1.605^n for some constant c.
We prove that lim_n→∞(R_n(q))^1/n=1 for any q, i.e. R_n(q) has a subexponential
growth on any alphabet.
§ INTRODUCTION
The study of palindromes is a frequent topic and many diverse results may be found.
In recent years, some papers have dealt with so-called rich words, also known as words having palindromic defect 0.
They are words that have the maximum number of palindromic factors.
As noted by <cit.>, a finite word w can contain at most |w|+1 distinct palindromic factors, with |w| being the length of w.
The rich words are exactly those that attain this bound. It is known that over a binary alphabet the set of rich words contains factors of Sturmian words, factors of complementary symmetric Rote words, factors of the period-doubling word, etc., see <cit.>. Over a multiliteral alphabet, the set of rich words contains for example factors of Arnoux–Rauzy words and factors of words coding symmetric interval exchange.
Rich words can be characterized using various properties, see for instance <cit.>.
The concept of rich words can also be generalized to respect so-called pseudopalindromes, see <cit.>.
In this paper we focus on an unsolved question of computing the number of rich words of length n over an alphabet with q>1 letters. This number is denoted R_n(q).
This question is investigated in <cit.>, where J. Vesti gives a recursive lower bound on the number of rich words of length n, and an upper bound on the number of binary rich words.
Both these estimates seem to be very rough.
In <cit.>, C. Guo, J. Shallit and A.M. Shur constructed for each n a large set of rich words of length n. Their construction gives, currently, the best lower bound on the number of binary rich words, namely
R_n(2)≥C^√(n)/p(n),
where p(n) is a polynomial and the constant C ≈ 37. On the other hand, the best known upper bound is exponential. As mentioned in <cit.>, a calculation performed recently by M. Rubinchik provides the upper bound R_n(2)≤ c 1.605^n for some constant c, see <cit.>.
Our main result stated as Theorem <ref> shows that R_n(q) has a subexponential
growth on any alphabet. More precisely, we prove
lim_n→∞(R_n(q))^1/n=1 .
In <cit.>, Shur calls languages with the above property small.
Our result is an argument in favor of a conjecture formulated in <cit.> saying that for some infinitely growing function g(n) the following holds true: R_n(2) = 𝒪((n/g(n))^√(n)).
To derive our result we consider a specific factorization of a rich word into distinct rich palindromes, here called UPS-factorization (Unioccurrent Palindromic Suffix factorization), see Definition <ref>.
Let us mention that other palindromic factorizations have already been studied, see <cit.>: minimal (minimal number of palindromes), maximal (no palindrome can be extended at its position) and diverse (all palindromes are distinct). Note that only the minimal palindromic factorization has to exist for every word.
The article is organized as follows: Section <ref> recalls notation and known results. In Section <ref> we study a relevant property of UPS-factorization. The last section is devoted to the proof of our main result.
§ PRELIMINARIES
Let us start with a couple of definitions:
Let A be an alphabet of q letters, where q>1 and q∈ℕ (ℕ denotes the set of nonnegative integers).
A finite sequence u_1u_2⋯ u_n with u_i ∈ A is a finite word.
Its length is n and is denoted |u_1u_2⋯ u_n| = n.
Let A^n denote the set of words of length n. We define that A^0 contains just the empty word.
It is clear that the size of A^n is equal to q^n.
Given u=u_1u_2⋯ u_n ∈ A^n and v=v_1v_2⋯ v_k ∈ A^k with 0≤ k ≤ n, we say that v is a factor of u if there exists i such that 1≤ i, i+k-1 ≤ n and u_i=v_1, u_i+1=v_2, …, u_i+k-1=v_k.
A word u=u_1u_2⋯ u_n is called a palindrome if u_1u_2⋯ u_n=u_nu_n-1⋯ u_1. The empty word is considered to be a palindrome and a factor of any word.
A word u of length n is called rich if u has n+1 distinct palindromic factors. Clearly, u=u_1u_2⋯ u_n is rich if and only if its reversal u_nu_n-1⋯ u_1 is rich as well.
Any factor of a rich word is rich as well, see <cit.>. In other words, the language of rich words is factorial. In particular it means that
R_n+m(q)≤ R_n(q)R_m(q) for any m, n, q ∈ℕ. Therefore, Fekete's lemma implies the existence of the limit of (R_n(q))^1/n and moreover
lim_n→∞(R_n(q))^1/n= inf{(R_n(q))^1/n : n ∈ℕ}.
For a fixed n_0, one can find the number of all rich words of length n_0 and obtain an upper bound on the limit.
Using a computer, Rubinchik counted R_n(2) for n≤ 60 (see the sequence A216264 in OEIS). As (R_60(2))^1/60 < 1.605, he obtained the upper bound given in the Introduction.
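For small n, such counts can be reproduced directly from the definition; the following brute-force sketch (ours, for illustration; exponential in n, so only practical for short lengths) checks each word against the |w|+1 bound, counting the empty palindrome.

```python
from itertools import product
from string import ascii_lowercase

def is_rich(w):
    pals = {""}   # the empty palindrome counts towards the |w| + 1 bound
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            f = w[i:j]
            if f == f[::-1]:
                pals.add(f)
    return len(pals) == len(w) + 1

def R(n, q=2):
    return sum(is_rich("".join(w))
               for w in product(ascii_lowercase[:q], repeat=n))

print([R(n) for n in range(1, 9)])   # should match OEIS A216264 for q = 2
```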
As shown in <cit.>, any rich word u over an alphabet A is richly prolongable, i.e., there exist letters a, b ∈ A such that aub is also rich. Thus a rich word is a factor of an arbitrarily long rich word. But the question whether two rich words can appear simultaneously as factors of a longer rich word may have a negative answer. It means that the language of rich words is not recurrent. This fact makes the enumeration of rich words hard.
§ FACTORIZATION OF RICH WORDS INTO RICH PALINDROMES
Let us recall one important property of rich words <cit.>: the longest palindromic suffix of a rich word w has exactly one occurrence in w (we say that the longest palindromic suffix of w is unioccurrent in w).
It implies that w=w^(1)w_1, where w_1 is a palindrome which is not a factor of w^(1). Since every factor of a rich word is a rich word as well, it follows that w^(1) is a rich word and thus w^(1)=w^(2)w_2, where w_2 is a palindrome which is not a factor of w^(2). Obviously w_1≠w_2. We can repeat the process until w^(p) is the empty word for some p∈ℕ, p≥ 1. We express these ideas by the following lemma:
Let w be a rich word. There exist distinct non-empty palindromes w_1,w_2,…,w_p such that
w=w_pw_p-1⋯ w_2w_1
and w_i is the longest palindromic suffix of w_pw_p-1⋯ w_i for i=1,2,… ,p.
We define UPS-factorization (Unioccurrent Palindromic Suffix factorization) to be the factorization of a rich word w into the form (<ref>).
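For illustration, the construction above translates directly into a naive quadratic sketch: repeatedly strip the longest palindromic suffix. For rich words, the resulting palindromes are guaranteed to be distinct.

```python
# Naive sketch of the UPS-factorization, as in the construction above.
def ups_factorization(w):
    parts = []
    while w:
        for i in range(len(w)):          # longest palindromic suffix w[i:]
            s = w[i:]
            if s == s[::-1]:
                parts.append(s)
                w = w[:i]
                break
    return parts[::-1]                   # [w_p, ..., w_1]

factors = ups_factorization("0011")      # "0011" is rich
assert factors == ["00", "11"] and "".join(factors) == "0011"
assert len(set(factors)) == len(factors) # palindromes are distinct
```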
Since the palindromes w_i in the factorization (<ref>) are non-empty, it is clear that p≤ n=| w|. From the fact that the palindromes w_i in the factorization (<ref>) are distinct we can derive a better upper bound for p. The aim of this section is to prove the following theorem:
There is a constant c>1 such that for any rich word w of length n the number of palindromes in the UPS-factorization of w=w_pw_p-1⋯ w_2w_1 satisfies
p≤ cn/lnn
Before proving the theorem, we show two auxiliary lemmas:
Let q,n,t∈ℕ such that
∑_i=1^tiq^⌈i/2⌉≥ n
The number p of palindromes in the UPS-factorization w=w_pw_p-1… w_2w_1 of any rich word w with n=| w| satisfies
p≤∑_i=1^tq^⌈i/2⌉
Let f_1,f_2,f_3,… be an infinite sequence of all non-empty palindromes over an alphabet A with q=| A| letters, where the palindromes are ordered in such a way that i<j implies that | f_i|≤| f_j|.
In consequence f_1,…,f_q are palindromes of length 1, f_q+1, …,f_2q are palindromes of length 2, etc.
Since w_1,…,w_p are distinct non-empty palindromes we have ∑_i=1^p| f_i|≤∑_i=1^p| w_i|=n.
The number of palindromes of length i over the alphabet A with q letters is equal to q^⌈i/2⌉ (just consider that the “first half” of a palindrome determines the second half).
The number ∑_i=1^tiq^⌈i/2⌉ equals the length of a word concatenated from all palindromes of length less than or equal to t.
Since ∑_i=1^p | f_i|≤ n ≤∑_i=1^tiq^⌈i/2⌉, it follows that the number of palindromes p is less than or equal to the number of all palindromes of length at most t; this explains the inequality (<ref>).
Let N∈ℕ, x∈ℝ, x>1 such that N(x-1)≥ 2. We have
N x^N/2(x-1)≤∑_i=1^Nix^i-1≤N x^N/(x-1)
The sum of the first N terms of a geometric series with the quotient x is equal to ∑_i=1^Nx^i=(x^N+1-x)/(x-1). Taking the derivative of this formula with respect to x with x>1 we obtain:
∑_i=1^Nix^i-1=(x^N(N(x-1)-1)+1)/(x-1)^2=N x^N/(x-1)+(1-x^N)/(x-1)^2.
It follows that the right inequality of (<ref>) holds for all N∈ℕ and x>1. The condition N(x-1)≥ 2 implies that 1/2N(x-1)≤ N(x-1)-1, which explains the left inequality of (<ref>).
We can start the proof of Theorem <ref>:
Let t∈ℕ be a minimal nonnegative integer such that the inequality (<ref>) in Lemma <ref> holds. It means that:
n>∑_i=1^t-1iq^⌈i/2⌉≥∑_i=1^t-1iq^i/2=q^1/2∑_i=1^t-1iq^i-1/2≥(t-1)q^t/2/2(q^1/2-1)
where for the last inequality we exploited (<ref>) with N=t-1 and x=q^1/2. If q≥ 9, then the condition N(x-1)=(t-1)(q^1/2-1)≥ 2 is fulfilled (it is the condition from Lemma <ref>) for any t≥ 2. Hence let us suppose that q≥ 9 and t≥ 2. From (<ref>) we obtain:
q^t/2/q^1/2-1≤2n/t-1≤4n/t
Since t is such that the inequality (<ref>) holds and i≤ q^i+1/2 for any i∈ℕ and q≥ 2, we can write:
n≤∑_i=1^t iq^i+1/2≤∑_i=1^t q^i+1=q^2q^t-1/q-1≤q^2/q-1q^t≤ q^2t
We apply a logarithm on the previous inequality:
lnn≤ 2tlnq
An upper bound for the number of palindromes p in the UPS-factorization follows from (<ref>), (<ref>), and (<ref>):
p≤∑_i=1^tq^⌈i/2⌉≤∑_i=1^tq^i+1/2≤ q^3/2q^t/2/q^1/2-1≤ q^3/24n/t≤ q^3/28lnqn/lnn
The previous inequality supposes that q≥ 9 and t≥ 2. If t=1 then we can easily derive from (<ref>) that n≤ q and consequently p≤ n≤ q. Thus the inequality p≤ q^3/28lnqn/lnn holds as well for this case.
Since every rich word over an alphabet with the cardinality q<9 is also a rich word over the alphabet with the cardinality 9, the estimate (<ref>) in Theorem <ref> holds if we set the constant c as follows: c=max{8q^3/2lnq, 8 · 9^3/2ln9}.
Theorem <ref> implies that the average length of a palindrome in the UPS-factorization of a rich word of length n is 𝒪(ln(n)).
Note that in <cit.> it is shown that most palindromic factors of a random word of length n are of length close to ln(n).
§ RICH WORDS FORM A SMALL LANGUAGE
The aim of this section is to show that the set of rich words forms a small language, see Theorem <ref>.
We present a recurrent inequality for R_n(q). To ease our notation we omit the specification of the cardinality of alphabet and write
R_n instead of R_n(q).
Denote κ_n= ⌈ cn/lnn⌉, where c is the constant from Theorem <ref> and n≥ 2.
Let n≥ 2, then
R_n≤∑_p=1^κ_n∑_n_1+n_2+… +n_p=n
n_1,n_2,…, n_p≥ 1R_⌈n_1/2⌉R_⌈n_2/2⌉… R_⌈n_p/2⌉
Given p,n_1,n_2,…,n_p, let R(n_1,n_2,…, n_p) denote the number of rich words with UPS-factorization w=w_pw_p-1… w_1, where | w_i|=n_i for i=1,2,…,p.
Note that any palindrome w_i is uniquely determined by its prefix of length ⌈n_i/2⌉; obviously this prefix is rich. Hence the number of words that can appear in the UPS-factorization as w_i is at most R_⌈n_i/2⌉. It follows that R(n_1,n_2,…, n_p)≤ R_⌈n_1/2⌉R_⌈n_2/2⌉… R_⌈n_p/2⌉. The sum of this result over all possible p (see Theorem <ref>) and n_1,n_2,…,n_p completes the proof.
If h>1 and K≥ 1 are such that R_n≤ Kh^n for all n, then lim_n→∞(R_n)^1/n≤√(h).
For any integers p,n_1,…,n_p≥ 1, the assumption implies that
R_⌈n_1/2⌉R_⌈n_2/2⌉… R_⌈n_p/2⌉≤ K^ph^(n_1+1)/2h^(n_2+1)/2… h^(n_p+1)/2≤ K^ph^(n+p)/2.
Exploiting (<ref>) we obtain:
R_n≤ K^κ_nh^(n+κ_n)/2∑_p=1^κ_n∑_n_1+n_2+… +n_p=n
n_1,n_2,…, n_p≥ 1 1
The sum
S_n=∑_n_1+n_2+… +n_p=n
n_1,n_2,…, n_p≥ 11
can be interpreted as the number of ways to distribute n coins among p people in such a way that everyone gets at least one coin. That is why S_n=\binom{n-1}{p-1}.
It is known (see Appendix for the proof) that
∑_i=0^L\binom{N}{i}≤(eN/L)^L for L,N∈ℕ, L≤ N
From (<ref>) we can write:
R_n≤ K^κ_nh^(n+κ_n)/2(en/κ_n)^κ_n. To evaluate (R_n)^1/n, just recall that lim_n→∞(const)^κ_n/n=lim_n→∞(const)^c/lnn=1 for any constant const and moreover lim_n→∞(n/κ_n)^κ_n/n=lim_n→∞(lnn/c)^c/lnn=1.
The main theorem of this paper is a simple consequence of the previous proposition.
Let R_n denote the number of rich words of length n over an alphabet with q letters.
We have lim_n→∞(R_n)^1/n=1.
Let us suppose that lim_n→∞(R_n)^1/n=λ>1. We can find ϵ>0 such that λ+ϵ<λ^2. The definition of a limit implies that there is n_0 such that (R_n)^1/n<λ+ϵ for any n>n_0, i.e. R_n<(λ+ϵ)^n. Let K=max{R_1,R_2,…,R_n_0}. It holds for any n∈ℕ that R_n≤ K(λ+ϵ)^n. Using Proposition <ref> we obtain lim_n→∞(R_n)^1/n≤√(λ+ϵ)<λ, which contradicts our assumption that lim_n→∞(R_n)^1/n=λ>1.
§ APPENDIX
For the reader's convenience, we provide a proof of the well-known inequality we used the proof of Proposition <ref>.
∑_k=0^L\binom{N}{k}≤(eN/L)^L, where L≤ N and L,N∈ℕ.
Consider x∈ (0,1]. The binomial theorem states that
(1+x)^N=∑_k=0^N\binom{N}{k}x^k≥∑_k=0^L\binom{N}{k}x^k
By dividing by the factor x^L we obtain
∑_k=0^L\binom{N}{k}x^k-L≤(1+x)^N/x^L
Since x∈ (0,1] and k-L≤ 0, we have x^k-L≥ 1, and it follows that
∑_k=0^L\binom{N}{k}≤(1+x)^N/x^L
Let us substitute x=L/N∈ (0,1] and let us exploit the inequality 1+x<e^x, that holds for all x>0:
(1+x)^N/x^L≤e^xN/x^L=e^L/(L/N)^L=(eN/L)^L
§ ACKNOWLEDGMENTS
The author wishes to thank Edita Pelantová and Štěpán Starosta for their useful comments.
The authors acknowledges support by the Czech Science
Foundation grant GAČR 13-03538S and by the Grant Agency of the Czech Technical University in Prague, grant No. SGS14/205/OHK4/3T/14.
|
http://arxiv.org/abs/1701.08057v2 | 20170127141217 | Level crossings induced by a longitudinal coupling in the transverse field Ising chain | [
"Grégoire Vionnet",
"Brijesh Kumar",
"Frédéric Mila"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
School of Physical Sciences, Jawaharlal Nehru University (JNU), New Delhi 110067, India.
Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
We study the effect of
antiferromagnetic longitudinal coupling on the one-dimensional transverse
field Ising model with nearest-neighbour couplings. In the topological phase
where, in the thermodynamic limit, the ground state is twofold degenerate,
we show that, for a finite system of N sites, the longitudinal coupling
induces N level crossings between the two lowest lying states as a function
of the field. We also provide strong arguments suggesting that these N
level crossings all appear simultaneously as soon as the longitudinal coupling
is switched on. This conclusion is based on perturbation theory, and a mapping
of the problem onto the open Kitaev chain, for which we write down the
complete solution in terms of Majorana fermions.
Level crossings induced by a longitudinal coupling in the transverse field Ising chain
Frédéric Mila
December 30, 2023
======================================================================================
The topological properties of matter are currently attracting considerable
attention <cit.>. One of the hallmarks of a topologically non
trivial phase is the presence of surface states. In one dimension, the first
example was the spin-1 chain that was shown a long time ago to have a
gapped phase <cit.> with two quasi-degenerate low-lying states
(a singlet and a triplet) on open chains <cit.>. These low-lying
states are due to the emergent spin-1/2 degrees of freedom at the edges of
the chains which combine to make a singlet ground state with an almost
degenerate low-lying triplet for an even number of sites,
and a triplet ground state with an almost degenerate low-lying singlet when
the number of sites is odd. In that system, the emergent degrees of freedom
are magnetic since they carry a spin 1/2, and they can be detected by standard
probes sensitive to local magnetisation such as NMR <cit.>.
In fermionic systems, a topological phase is present if the model includes a
pairing term (as in the mean-field treatment of a p-wave superconductor),
and the emergent degrees of freedom are two Majorana fermions localised at
the opposite edges of the chain <cit.>. Their detection is much less
easy than that of magnetic edge states, and it relies on indirect
consequences such as their impact on the local tunneling density of
states <cit.>, or the presence of two quasi-degenerate
low-lying states in open systems. In that respect, it has been suggested to
look for situations where the low-lying states cross as a function of an
external parameter, for instance the chemical potential, to prove that there
are indeed two low-lying states <cit.>.
In a recent experiment with chains of Cobalt atoms evaporated onto a
Cu_2N/Cu(100) substrate <cit.>, the presence of level crossings
as a function of the external magnetic field has been revealed by scanning
tunneling microscopy, which exhibits a specific signature whenever the
ground state is degenerate. The relevant effective model for that system is
a spin-1/2 XY model in an in-plane magnetic field. The exact diagonalisation
of finite XY chains has indeed revealed the presence of
quasi-degeneracy between the two lowest energy states, that are well separated
from the rest of the spectrum, and a series of level crossings between them
as a function of the magnetic field <cit.>. Furthermore, the
position of these level crossings is in good agreement with the experimental
data. It has been proposed that
these level crossings are analogous to those predicted in topological
fermionic spin chains, and that they can be interpreted as a consequence
of the Majorana edge modes <cit.>.
The topological phase of the XY model in an in-plane magnetic field is
adiabatically connected to that of the transverse field Ising model, in
which the longitudinal spin-spin coupling (along the field) is switched off.
However, in the transverse field Ising model, the two low-lying states never
cross as a function of the field, as can be seen from the magnetisation curve
calculated by Pfeuty a long time ago <cit.>, and which does not show
any anomaly. The very different behaviour of the XY model in an in-plane field
in that respect calls for an explanation. The goal of the present paper is to
provide such an explanation, and to show that the presence of N level
crossings, on a chain of N sites, is generic as soon as an antiferromagnetic
longitudinal coupling is switched on. To achieve this goal, we have studied
a Hamiltonian which interpolates between the exactly solvable transverse field
Ising (TFI) and the longitudinal field Ising (LFI) chains.
The approach that best accounts for these level crossings turns out to be an
approximate mapping onto the exactly solvable Kitaev chain, which contains all
the relevant physics. In the Majorana representation, the level crossings are
due to the interaction between Majorana fermions localised at each end of the
chain.
The paper is organized as follows. In section <ref>, we present the
model and give some exact diagonalisation results on small chains to get an
intuition of the qualitative behaviour of the spectrum. We show in section
<ref> that perturbation theory works in principle but is rather
limited because of the difficulty to go to high order. We then turn to an
approximate mapping onto the open Kitaev chain via a mean-field decoupling in
section <ref>. The main result of this paper is presented in section
<ref>, namely the explanation of the level crossings in a Majorana
representation. Finally, we conclude with a quick discussion of some
possible experimental realisations in section <ref>.
§ MODEL
We consider the transverse field spin-1/2 Ising model with an additional
antiferromagnetic longitudinal spin-spin coupling along the field, i.e. the
Hamiltonian
H=J_x ∑_i=1^N-1S_i^x S^x_i+1 + J_z ∑_i=1^N-1S^z_i S^z_i+1
- h ∑_i=1^N S^z_i
with J_z ≥ 0 [This Hamiltonian is equivalent to an XY model in
an in-plane magnetic field, but we chose to rotate the spins around the
x-axis so that we recover the usual formulations of the TFI and LFI
models as special cases.].
This model can be seen as an interpolation between the TFI model (J_z=0)
and the LFI model (J_x=0). The case J_z=J_x corresponds to the effective
model describing the experiment in Ref. toskovic, up to small
irrelevant terms [In Ref. toskovic it is explained that
the ± 3/2 doublet of the spin 3/2 Cobalt adatoms can be
projected out by a Schrieffer-Wolff transformation due to the strong
magnetic anisotropy. The resulting effective spin 1/2 model is the one of
equation (<ref>) with J_x=J_z and additional nearest-neighbour
out-of-plane and next-nearest-neighbour in-plane Ising couplings.
These additional terms do not lead to qualitative changes because the model is still
symmetric under a π-rotation of the spins around the
z-axis, and since their coupling constants are small (∼ 0.1J_x)
they have only a small quantitative effect in exact diagonalisation results.].
Since we will be mostly interested in the parameter range
0≤ J_z≤ J_x, we will measure energies in units of J_x by setting
J_x=1 henceforth. The spectrum of the Hamiltonian in Eq. (<ref>) is
invariant under h→ -h since the Hamiltonian is invariant if we
simultaneously rotate the spins around the x-axis so that
S^z_i → -S^z_i ∀ i. Hence, we will in most cases quote the results
only for h ≥ 0.
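The finite-size spectra discussed below are straightforward to reproduce by exact diagonalisation. As an illustration, the following Python sketch (ours; the parameter values are arbitrary examples) builds the Hamiltonian of Eq. (<ref>) for a small chain with dense matrices and prints the splitting E_1-E_0:

import numpy as np

sx = np.array([[0., 1.], [1., 0.]]) / 2   # S^x in the S^z product basis
sz = np.array([[1., 0.], [0., -1.]]) / 2  # S^z

def embed(op1, site, N):
    # Tensor product placing a single-site operator at `site` of an N-site chain.
    out = np.array([[1.]])
    for i in range(N):
        out = np.kron(out, op1 if i == site else np.eye(2))
    return out

def hamiltonian(N, h, Jz, Jx=1.0):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H += Jx * embed(sx, i, N) @ embed(sx, i + 1, N)
        H += Jz * embed(sz, i, N) @ embed(sz, i + 1, N)
    for i in range(N):
        H -= h * embed(sz, i, N)
    return H

N, Jz = 6, 0.75
for h in np.linspace(0.0, 1.2, 7):
    E = np.linalg.eigvalsh(hamiltonian(N, h, Jz))
    print(f"h={h:.2f}  E1-E0={E[1] - E[0]:.5f}")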
The TFI limit of H can be solved exactly by Jordan-Wigner mapping onto a
chain of spinless fermions <cit.>. In the thermodynamic limit, it is
gapped with a twofold degenerate ground state for h < h_c=1/2, and undergoes
a quantum phase transition at h=h_c to a non-degenerate gapped ground state
for h>h_c. The twofold degeneracy when h<h_c can be described by two
zero-energy Majorana edge modes <cit.>. As a small positive J_z is
turned on, there is no qualitative change in the thermodynamic limit, except
that h_c increases with J_z. Indeed, the model is then equivalent to the
ANNNI model in a transverse field which has been extensively studied before,
see for example <cit.>. A second order perturbation
calculation in 1/h yields h_c =1/2+(3/4)J_z+O(J_z^2) for small J_z and
h_c=1/2+J_z +O(1/J_z) for large J_z <cit.>. Since, for
J_z ≳ 1, there are other phases arising <cit.>, we shall
mostly consider J_z ≲ 1 in the following in order to stay in the
phase with a degenerate ground state.
For a finite size chain, the twofold degeneracy of the TFI model at 0<h<h_c
is lifted and there is a small non-vanishing energy splitting
ϵ = E_1-E_0 between the two lowest energy states, where the
E_k are the eigenenergies and E_k ≤ E_k+1 ∀ k. This splitting
is exponentially suppressed with the system length,
ϵ∼exp(-N/ξ) <cit.>. These two quasi-degenerate states
form a low energy sector separated from the higher energy states. The spectrum
for J_z=0 and N=3 is shown in Fig. <ref>a. For J_z > 0, the
splitting ϵ has an oscillatory behaviour and vanishes for some values
of h. For N=3, it vanishes once for h>0. See the spectrum for J_z=0.5
and J_z=1 in Figs <ref>b-c. As J_z becomes large, there is no low
energy sector separated from higher energy states any more. In the LFI limit,
J_z→∞, the eigenstates have a well defined magnetisation in the
z-direction and the energies are linear as a function of h, see
Fig. <ref>d. In this limit, the level crossings are obvious.
As the field is increased, the more polarised states become favoured, which
leads to level crossings.
The plots in Fig. <ref> are instructive for very small N but become
messy for larger chains. In Figs <ref>a-b, we show the spectrum relative
to the ground state energy, i.e. E_k-E_0, of a chain of N=6 sites for
J_z=0 and J_z=0.75. The energies E_0 and E_1 are plotted in
Figs <ref>c-d for the same parameters. The structure of the spectrum is
similar to the N=3 case, except that now ϵ vanishes at three points
for h>0. In general, there are N points of exact degeneracy where the
splitting ϵ vanishes since the spectrum is symmetric under h→ -h.
This is shown in Fig. <ref> for 2≤ N≤ 8. For N even, there are
N/2 level crossings for h>0, and for N odd, there are (N-1)/2 level
crossings for h>0 and one at h=0.
As shown in Figs <ref>e-f, the level crossings lead to jumps in the
magnetisation M(h)=-∂ E_0/∂ h. The number of magnetisation
jumps turns out to be independent of J_z for 0<J_z<∞, as illustrated
in Fig. <ref>. In the LFI limit, most of the jumps merge together at
h=J_z, with an additional jump persisting for even N at
h=J_z/2 [In the LFI model the lowest energy with a given
magnetisation is E_{0,M=0}=-J_z(N-1)/4 and
E_{0,M≠ 0} = E_{0,0}+ J_z(|M|-1/2) - Mh. Thus for even N, the ground
state has M=0 for 0<h<J_z/2, M=1 for J_z/2<h<J_z and M=N/2 for
h>J_z, whereas for odd N the ground state has M=1/2 for 0<h<J_z
and M=N/2 for h>J_z.]. In this large J_z region, however, there is no
quasi-degeneracy and the magnetisation jumps indicate level crossings but no
oscillation in contrast to the small J_z region. Since there are no level
crossings in the TFI limit, one might expect the number of crossings to
decrease as J_z decreases. However, the exact diagonalisation results do
not support this scenario, and hint to all level crossings appearing at the
same time as soon as J_z≠ 0. This is a remarkable feature that we shall
explain in the following.
A useful equivalent representation of the Hamiltonian in Eq. (<ref>)
in terms of spinless fermions is obtained by applying the Jordan-Wigner
transformation used to solve exactly the TFI model <cit.>,
S_i^x = 1/2(c_i† + c_i)exp(ıπ∑_j<ic_j† c_j)
S_i^y = 1/2ı(c_i† - c_i)exp(ıπ∑_j<ic_j† c_j)
S_i^z=c_i† c_i - 1/2,
which yields
H = 1/4∑_i=1^N-1(c_i†-c_i)(c_i+1†+c_i+1)
- h ∑_i=1^N (c_i† c_i - 1/2)
+ J_z ∑_i=1^N-1(c_i† c_i
- 1/2)(c_i+1† c_i+1 - 1/2)
where the c_i,c_i† are fermionic annihilation and creation operators.
This is the Hamiltonian of a spinless p-wave superconductor with
nearest-neighbour density-density interaction. As for the simpler TFI model,
the Hamiltonian is symmetric under a π-rotation of the spins around the
z-axis, S^x_i→ -S^x_i and S^y_i→ -S^y_i in the spin
language. This leads to two parity sectors given by the parity operator
P=e^ıπ∑_j=1^N c_j† c_j = (-2)^N S_1^z⋯ S_N^z.
In other words, the Hamiltonian does not mix states with even and odd number
of up spins, or equivalently with even and odd number of fermions. The ground
state parity changes at each point of exact degeneracy, and thus alternates as
a function of the magnetic field for J_z > 0. This can be understood
qualitatively by looking at Fig. <ref>f. The magnetisation plateaus are
roughly at M=0,1,2,3. Hence to jump from one plateau to the next, one spin
has to flip, thus changing the sign of the parity P.
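This parity structure can be checked directly in the sketch above: in the S^z product basis, P is diagonal with entries (-1)^{number of up spins}, it commutes with H, and the ground-state parity flips at each crossing. A minimal Python illustration (ours; it reuses the function hamiltonian defined earlier):

import numpy as np

def parity(N):
    # (-2)^N S^z_1 ... S^z_N is diagonal in the product basis:
    # each basis state gets (-1)^(number of up spins), i.e. of set bits.
    return np.diag([(-1.0) ** bin(s).count("1") for s in range(2**N)])

N, Jz = 6, 0.75
P = parity(N)
for h in np.linspace(0.0, 1.2, 7):
    H = hamiltonian(N, h, Jz)
    assert np.allclose(H @ P, P @ H)       # H never mixes parity sectors
    w, v = np.linalg.eigh(H)
    p0 = v[:, 0] @ P @ v[:, 0]             # +1 or -1 away from exact degeneracies
    print(f"h={h:.2f}  ground-state parity = {p0:+.0f}")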
§ PERTURBATION THEORY
As a first attempt to understand if the N level crossings develop
immediately upon switching on J_z, we treat the
V=J_z ∑_i=1^N-1S^z_i S^z_i+1 term as a perturbation to the
exactly solvable transverse field Ising model. One may naively expect that
degenerate perturbation theory is required since the TFI chain has a
quasi-twofold degeneracy at low field. Fortunately, the two low-energy states
live in different parity sectors <cit.> that are not mixed by the
perturbation V. We can therefore apply the simple Rayleigh-Schrödinger
perturbation theory in the range of parameters we are interested in, i.e.
J_z ≲ 1.
Writing A_i=c_i† + c_i and B_i=c_i† - c_i, the perturbation can be
rewritten as V=(J_z/4) ∑_i=1^N-1B_iA_iB_i+1A_i+1.
The unperturbed eigenstates are |m⟩ = Υ†_m|0⟩ where
|0⟩ is the ground state and the Υ†_m are a product of the
creation operators corresponding to the Bogoliubov fermions.
The matrix elements are then
⟨n|V|m⟩=J_z/4∑_i=1^N-1⟨0|Υ_n B_iA_iB_i+1A_i+1Υ†_m|0⟩
which can be computed by applying Wick's theorem, similarly to how correlation
functions are found in <cit.>. We computed the effect of V up to third
order, with the basis of virtual states slightly truncated, namely by keeping
states with at most three Bogoliubov fermions. Since the more fermions there
are in a state, the larger its energy, we expect this approximation to be
excellent.
As shown in Fig. <ref>, the number of crossings increases with the order
of perturbation, and to third order in perturbation, the results for N=6
sites are in qualitative agreement with exact diagonalisations. From the way
level crossings appear upon increasing the order of perturbation theory, one
can expect to induce up to 2m+1 level crossings if perturbation theory
is pushed to order m, see Fig. <ref>. So these results suggest that the
appearance of level crossings is a perturbative effect, and that, for a given
size N, pushing perturbation theory to high enough order will indeed lead to
N level crossings for small J_z. However, in practice, it is impossible to
push perturbation theory to very high order. Indeed, the results at order 3
are already very demanding. So, these pertubative results are encouraging, but
they call for an alternative approach to actually prove that the number of
level crossings is indeed equal to N, and that these level crossings appear
as soon as J_z is switched on.
§ FERMIONIC MEAN-FIELD APPROXIMATION
In the fermionic representation, Eq. (<ref>), there is a quartic
term that cannot be treated exactly. Here, we approximate it by mean-field
decoupling. In such an approximation, one assumes the system can be well
approximated by a non-interacting system (quadratic in fermions) with
self-consistently determined parameters. For generality, we decouple the
quartic term in all three mean-field channels consistent with Wick's theorem,
c†_i c_ic†_i+1c_i+1≈
⟨c†_ic_i⟩ c†_i+1c_i+1 + ⟨c†_i+1c_i+1⟩ c†_ic_i - ⟨c†_ic_i⟩⟨c†_i+1c_i+1⟩
- ⟨c†_ic†_i+1⟩ c_ic_i+1 - ⟨c_ic_i+1⟩ c†_ic†_i+1 +⟨c†_ic†_i+1⟩⟨c_ic_i+1⟩
+ ⟨c†_ic_i+1⟩ c_ic†_i+1 +⟨c_ic†_i+1⟩ c†_ic_i+1 - ⟨c†_ic_i+1⟩⟨c_ic†_i+1⟩.
Here, ⟨·⟩ denotes the ground state expectation value.
The 3N-2 self-consistent parameters ⟨c†_ic_i⟩,
⟨c†_ic†_i+1⟩ and ⟨c†_ic_i+1⟩ can be found
straightforwardly by iteratively solving the quadratic mean-field Hamiltonian.
As it turns out, it is more instructive to consider only three self-consistent
parameters. To do so, we solve the mean-field approximation of the
translationally invariant Hamiltonian (c_N+1=c_1),
H' = ∑_i=1^N{1/4
(c_i†-c_i)(c_i+1†+c_i+1) - h(c_i† c_i - 1/2) .
. + J_z (c_i† c_i - 1/2)(c_i+1† c_i+1 -
1/2) }
≈∑_i=1^N{
-μ c_i† c_i +( t c_i+1† c_i + h.c. )
- (Δ c_i+1† c†_i + h.c. ) }
+ const,
where μ=h+J_z(1-2⟨c†_ic_i⟩),
t=1/4 - J_z⟨c†_ic_i+1⟩ and Δ=1/4-J_z⟨c_ic_i+1⟩ are determined self-consistently.
These parameters are found to be real, and are shown in Fig. <ref>
as a function h, J_z and N.
Using these self-consistent parameters, the Hamiltonian in
Eq. (<ref>) is then approximated by the following mean-field
problem on an open chain:
H_ MF = -∑_i=1^N μ(c†_ic_i-1/2)
+ ∑_i=1^N-1[(t c†_i+1c_i + h.c.)
- ( Δ c†_i+1c†_i + h.c. ) ],
up to an irrelevant additive constant [In the periodic chain
used to get the mean-field parameters, there is a level crossing when
μ=2t. To get good agreement with exact diagonalisation results and avoid a small
discontinuity, we need to compute the expectation values in the state
adiabatically connected to the ground state at μ<2t. Thus for μ>2t,
the ⟨·⟩ are not computed in the ground state, but in the first
excited state. Since all the level crossings arise for μ< 2t, this has
no influence on the following discussion.].
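For concreteness, the self-consistent loop can be sketched in a few lines of Python. Working at T=0 in the Majorana representation introduced in the next section, one can show that, with the singular value decomposition M = UΣ V^T of the periodic-chain analogue of the matrix M defined below, the ground-state (η-vacuum) correlators read ⟨c†_ic_j⟩ = δ_ij/2 - [(UV^T)_ij+(UV^T)_ji]/4 and ⟨c_ic_j⟩ = [(UV^T)_ji-(UV^T)_ij]/4. These closed forms are our own derivation, and for simplicity the sketch always uses the η-vacuum, i.e. it ignores the μ>2t caveat of the footnote:

import numpy as np

def periodic_M(N, mu, t, delta):
    # Matrix M of the Majorana form (i/2) sum_ij gamma'_i M_ij gamma''_j,
    # with the extra bond between sites N and 1 (periodic boundary conditions).
    M = -mu * np.eye(N)
    for i in range(N):
        j = (i + 1) % N
        M[i, j] += t - delta      # tau_-
        M[j, i] += t + delta      # tau_+
    return M

def self_consistent(N, h, Jz, mix=0.5, tol=1e-12):
    mu, t, delta = h, 0.25, 0.25                  # bare values as starting point
    for _ in range(5000):
        U, s, Vh = np.linalg.svd(periodic_M(N, mu, t, delta))
        G = U @ Vh                                 # (U V^T)_ij
        n    = 0.5 - 0.5 * G[0, 0]                 # <c_i^+ c_i>
        hop  = -0.25 * (G[0, 1] + G[1, 0])         # <c_i^+ c_{i+1}>
        pair = 0.25 * (G[1, 0] - G[0, 1])          # <c_i c_{i+1}>
        new = (h + Jz * (1 - 2 * n), 0.25 - Jz * hop, 0.25 - Jz * pair)
        if max(abs(a - b) for a, b in zip(new, (mu, t, delta))) < tol:
            break
        mu, t, delta = [(1 - mix) * o + mix * nw
                        for o, nw in zip((mu, t, delta), new)]
    return mu, t, delta

for h in (0.1, 0.3, 0.5):
    mu, t, delta = self_consistent(200, h, Jz=0.5)
    print(f"h={h:.1f}  mu={mu:.4f}  t={t:.4f}  Delta={delta:.4f}")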
< g r a p h i c s >
Critical fields, h_ crit, where the degeneracy is exact,
as a function of J_z in the self-consistent mean-field approximation
(<ref>) (blue crosses) compared to the exact diagonalisation result
(red squares) for (a) N=6 and (b) N=7.
< g r a p h i c s >
Energy splitting ϵ=E_1-E_0 as a function of h in the
self-consistent mean-field approximation (<ref>) (blue solid line)
compared to the exact diagonalisation result (red dashed line) for J_z=0.5
and (a) N=6 and (b) N=7.
Since the self-consistent parameters are almost independent of the system size
(see Fig. <ref>c), the boundaries are not very important and the bulk
contribution is dominant. This partly justifies the approximation of
playing with the boundary conditions to get the approximate model
(<ref>) with just three self-consistent parameters. This approximation
is also justified by the great quantitative agreement with the exact
diagonalisation results for the critical fields for J_z ≲ 0.8 (see Fig. <ref>), and to a lesser extent for the energy splitting ϵ=E_1-E_0 between the two lowest energy states, see Fig. <ref>.
For N odd, the degeneracy at h=0 is protected by symmetry for any J_z in the Hamiltonian (<ref>). Indeed, under the transformation S_i^z → -S_i^z ∀ i, the parity operator transforms as P→ (-1)^N P. Hence, for N odd and h=0, the ground state has to be
twofold degenerate. As can be seen in Fig. <ref>b, the critical field h=0 at low J_z evolves to a non-zero value for large J_z, thus showing
that this symmetry is broken by the mean-field approximation (<ref>).
The discrepancy is, however, small for J_z ≲ 0.8 as can also be seen
in Fig. <ref>b.
We observe from Fig. <ref>a that as a function of magnetic field, the
parameters t and Δ are almost constant, whereas μ is almost
proportional to h. Thus, we can understand the physics of the level
oscillations by forgetting about the self-consistency and considering μ, t and Δ as free parameters, i.e. by studying the open Kitaev
chain <cit.>, where the level crossings happen as μ is tuned.
Compared to the TFI model for which Δ=t, the main effect of J_z>0 is to make 0 < Δ < t, which, as we shall see in the next section, is
the condition to see level oscillations.
Such a mapping between the two lowest lying energy states of the interacting
Kitaev chain and of the non-interacting Kitaev chain can be made rigorous for a
special value of h>0, provided the boundary terms in
equation (<ref>) are slightly modified <cit.>.
But this particular exact case misses out on level-crossing oscillations.
§ LEVEL OSCILLATIONS AND MAJORANA FERMIONS
We define 2N Majorana operators γ'_i, γ”_i as:
γ'_i = c_i+ c†_i
γ”_i = -ı (c_i - c†_i)
which satisfy γ'_i†=γ'_i, γ”_i†=γ”_i, {γ'_i,γ”_j}=0 and {γ'_i,γ'_j}=
{γ”_i,γ”_j}=2δ_ij. Since the μ, t, Δ are real, the H_ MF of Eq. (<ref>) reads
H_ MF= ı/2∑_i=1^N-1[ -(t+Δ)γ”_iγ'_i+1
+ (t-Δ )γ'_iγ”_i+1]
-ıμ/2∑_i=1^N γ'_iγ”_i=
ı/2∑_i,j=1^Nγ'_iM_ijγ”_j.
From the singular value decomposition of M, we write M=UΣ V^T,
where U and V are orthogonal matrices and Σ= diag(ϵ_1,…, ϵ_N) with real ϵ_i and |ϵ_i|≤|ϵ_i+1| ∀ i. Thus, the Hamiltonian reads
H_ MF = ı/2∑_i,j,k=1^Nγ'_i U_ikϵ_k V^T_kjγ”_j
=ı/2∑_k=1^Nϵ_k γ̃'_k γ̃”_k
=∑_k ϵ_k (η†_kη_k-1/2)
where
γ̃'_k = ∑_i=1^N γ'_i U_ik, γ̃”_k = ∑_i=1^N γ”_i V_ik
are the rotated Majorana operators, and the η_k = 1/2 (γ̃'_k+ıγ̃”_k) are
fermionic annihilation operators corresponding to the Bogoliubov
quasiparticles.
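In practice the ϵ_k follow from a numerical singular value decomposition. A minimal Python sketch (ours; it uses the explicit matrix M written out in the Appendix, and numpy's convention M = U diag(s) V^T):

import numpy as np

def open_M(N, mu, t, delta):
    # Open-chain matrix M: -mu on the diagonal,
    # tau_- = t - Delta above it and tau_+ = t + Delta below it.
    M = -mu * np.eye(N)
    for i in range(N - 1):
        M[i, i + 1] = t - delta
        M[i + 1, i] = t + delta
    return M

N, t, delta = 6, 1.0, 0.3
for mu in np.linspace(0.0, 2.0, 9):
    s = np.linalg.svd(open_M(N, mu, t, delta), compute_uv=False)
    print(f"mu={mu:.2f}  eps_1={s.min():.3e}")   # splitting E_1 - E_0 = min_k eps_k

The near-zeros of ϵ_1 as μ is scanned trace out the level crossings discussed below.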
As derived in the Appendix, the Majorana operators γ̃'_k and γ̃”_k are in general of the form
γ̃'_k = ∑_j (a_+ x_+^j + b_+ x_+^N+1-j
+ a_- x_-^j + b_- x_-^N+1-j ) γ'_j
γ̃”_k = ∑_j ( a_+ x_+^N+1-j + b_+ x_+^j
+a_- x_-^N+1-j + b_- x_-^j) γ”_j
where the x_±, a_± and b_± are functions of the energy ϵ_k, which is quantised in order to satisfy the boundary
conditions. One can easily solve numerically the nonlinear equation for
the ϵ_k. Here, we will instead focus on a simple analytical
approximation for γ̃'_1, γ̃”_1 and ϵ_1 which works well to discuss the level crossings,
and is equivalent to the Ansatz given in <cit.>.
From Eqs. (<ref>) and (<ref>), we see that for ϵ=0,
we have either a_±=0 or b_±=0. Without loss of generality, we can
choose b_±(ϵ=0) = 0. Since we expect ϵ_1 ≪ 1, we
approximate
b_± (ϵ_1) ≈ b_± (0)=0
and
x_± (ϵ_1) ≈ x_±(0) =
[μ±√(μ^2-4t^2+4Δ^2)]/[2(t+Δ)],
which yields
γ̃'_1 ≈∑_j (a_+ x_+^j + a_- x_-^j ) γ'_j
γ̃”_1 ≈∑_j ( a_+ x_+^N+1-j +a_- x_-^N+1-j )γ”_j
with ∑_j (a_+ x_+^j + a_- x_-^j )^2=1.
The boundary conditions (<ref>) now read
a_+ + a_- = 0
a_+ x_+^N+1 + a_- x_-^N+1 = 0
and in general cannot both be satisfied unless ϵ_1=0 exactly.
If |x_±|<1, γ̃'_1 is localised on the left side of the chain,
with its amplitude ∼ e^-j/ξ as j≫ 1, where ξ=-1/ln(max(|x_+|,|x_-|)). Furthermore, γ̃”_1 is related
to γ̃'_1 by the reflection symmetry j→ N+1-j.
Thus, in the thermodynamic limit, the boundary condition (<ref>) is
irrelevant and ϵ_1 → 0 as N→∞.
Similarly, if |x_±| >1, the boundary condition (<ref>) becomes
irrelevant in the thermodynamic limit.
However, if |x_+|>1 and |x_-|<1, or |x_+|<1 and |x_-|>1, then γ̃'_1, γ̃”_1 have significant weight on both
sides of the chain and both boundary conditions (<ref>) and
(<ref>) remain important in the thermodynamic limit. Hence, the
approximation ϵ_1 ≈ 0 is bad, indicating a gapped system.
As discussed in <cit.>, for |μ|<2|t| we have either |x_±|<1 or |x_±|>1, which yields ϵ_1=0 in the thermodynamic limit. This
is the topological phase with a twofold degenerate ground state. For a finite
system, however, the boundary conditions (<ref>) and (<ref>) are
in general not exactly satisfied and the system is only quasi-degenerate with
a gap ϵ∼ e^-N/ξ.
For |μ|>2|t|, either |x_+|>1 and |x_-|<1, or |x_+|<1 and |x_-|>1, and the system is gapped.
In the topological phase, |μ|<2|t|, there are parameters for which the
boundary conditions (<ref>) can be exactly satisfied even for N<∞ and thus ϵ_1=0 exactly. In such a case, there is an
exact zero mode even for a finite chain. This was previously discussed in
Ref. <cit.>, as well as in <cit.> where a more general method that
applies to disordered systems is described. If x_±∈ℝ, it
is never possible to satisfy the boundary conditions (<ref>) and
therefore the quasi-gap is always finite, ϵ_1 ≠ 0. However, if x_+=re^ıϕ∉ℝ, Eq. (<ref>) yields x_-=x_+^* and (x_+^N+1 - x_-^N+1) ∝ r^N+1sin[(N+1)ϕ].
Thus it may happen for specific parameters that ϵ_1=0 exactly. This
degeneracy indicates a level crossing. The phase ϕ, defined for |μ|<μ_c=2√(t^2-Δ^2), is given by
tanϕ = √((μ_c/μ)^2-1).
It thus goes continuously from ϕ(μ=0^+)=π/2 to ϕ(μ→μ_c) → 0. Hence, there are critical chemical potentials,
0 ≤μ_⌈ N/2⌉ < … < μ_m <…<μ_1<μ_c,
such that ϕ(μ=μ_m)=π m/(N+1) (see Fig. <ref>a).
For these critical μ_m, the system is exactly degenerate, i.e. ϵ_1=0. In the TFI limit, we have Δ=t and μ_c=0,
thus there are no level crossings.
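Inverting Eq. (<ref>) gives μ = μ_c cos ϕ, so within this approximation the critical chemical potentials take the closed form μ_m = μ_c cos[π m/(N+1)]. A short Python check (ours), whose output can be compared with the zeros found from the SVD sketch above:

import numpy as np

t, delta, N = 1.0, 0.3, 6
mu_c = 2.0 * np.sqrt(t**2 - delta**2)
for m in range(1, N // 2 + 1):   # for odd N the last crossing, m=(N+1)/2, sits at mu=0
    print(f"m={m}: mu_m = {mu_c * np.cos(np.pi * m / (N + 1)):.4f}")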
< g r a p h i c s >
(a) Phase ϕ(μ) of x_+=re^ıϕ within the approximation
(<ref>) for several Δ with t=1 and N=6. The horizontal black
dotted lines indicate the values ϕ=π m/N+1. (b) Splitting
E_1-E_0=|ϵ_1| in the Kitaev chain calculated exactly solving
numerically the full self-consistent equations described in Appendix
(blue solid line) and with the analytical approximate result
in Eq. (<ref>) (red dashed line) for N=6, t=1 and Δ=0.3.
For |μ|<2|t|, writing x_+=re^ıϕ with r>0, we have
ϵ_1 =Σ_11=(U^T M V)_11
≈ 4(t+Δ) a_+^2 r^N+2sin(ϕ) sin[(N+1)ϕ],
where we used the approximations (<ref>), (<ref>) and the
boundary condition (<ref>) [respectively (<ref>)] when tΔ>0 (respectively tΔ<0), since in this case |x_±|<1 (respectively |x_±|>1). Note that ϕ(-μ) = ϕ(μ)-π,
and thus ϵ_1 is an odd function of μ for odd N and an
even function of μ for even N. Since ϵ_1 changes sign
whenever sin((N+1)ϕ)=0, the degeneracy points indicate level crossings.
This approximate description works extremely well, as shown in
Fig. <ref>b for Δ=0.3t. Because ϕ takes all the values
in ]0,π/2] for 0 < μ<μ_c, and in ]-π,-π/2] for -μ_c < μ< 0, there are exactly N level crossings as a function
of μ if 0<μ_c∈ℝ, i.e. if |Δ|<|t|, and no level crossings otherwise. At the points of exact degeneracy,
b_±(ϵ=0)=0, and the zero-mode Majorana fermions are localised on
opposite sides of the chain. When the degeneracy is not exact, however,
b_±(ϵ≠ 0) ≠ 0 and the zero-mode Majorana fermions mix
together to form Majoranas localised mostly on one side but also a little
bit on the opposite side.
In the XY model in an out-of-plane magnetic field, which is equivalent to
the non-interacting Kitaev chain <cit.>, these level crossings lead
to an oscillatory behaviour of the spin correlation functions <cit.>.
In the context of p-wave
superconductors, the level oscillations described above also
arise in more realistic models and are considered one of the hallmarks of
the presence of topological Majorana fermions <cit.>. Although
it is still debated whether Majorana fermions have already been observed,
strong experimental evidence for the level oscillations was reported in
<cit.>.
Coming back to the mean-field Hamiltonian of Eq. (<ref>), we can get
the phase ϕ within the approximation (<ref>), i.e. the phase of x_+(ϵ=0), as a function of the physical parameters h, J_z, since
we know how the self-consistent parameters μ, t, Δ depend on them.
We plot in Fig. <ref> the phase ϕ as a function of h for
several J_z, which yields a good qualitative understanding of the sudden
appearance of N level crossings as soon as J_z > 0. As previously
discussed, the self-consistent parameters are almost independent of N and
therefore the curves ϕ(h) are almost independent of N as well. The
main effect of N is to change the condition ϕ(μ=μ_m)=π m/(N+1) for the boundary condition in
Eq. (<ref>) to be satisfied and thus for the system to be exactly
degenerate.
< g r a p h i c s >
Phase ϕ(h) of x_+(ϵ=0) based on the self-consistent
parameters μ, t, Δ of the mean-field decoupling for several J_z
and (a) N=6, (b) N=9, (c) N=12. The horizontal black dotted lines
indicate the values ϕ=π m/N+1.
§ SUMMARY
The main result of this paper is that the level crossings between the two
lowest energy eigenstates of the XY chain in an in-plane magnetic field
are more generally a fundamental feature of the transverse field Ising chain
with an antiferromagnetic longitudinal coupling
howsoever small. These points of level crossings (twofold degeneracy)
correspond to having Majorana edge modes in a Kitaev chain onto which the
problem can be approximately mapped. The level crossings of the XY chains
have been observed experimentally in <cit.> by scanning tunneling
microscopy on Cobalt atoms evaporated onto a Cu_2N/Cu(100) substrate.
By varying the adsorbed atoms and the substrate, it should be possible to
vary the easy-plane and easy-axis anisotropies, and thus to explore the
exact degeneracy points for various values of the longitudinal coupling.
The possibility to probe the two-fold degeneracy of this family of spin
chains is important in view of their potential use for universal quantum
computation <cit.>.
Besides, one could also realise the spinless fermionic Hamiltonian
(<ref>) in an array of Josephson junctions as described in
<cit.>. The advantage of this realisation is that it allows a
great flexibility to tune all the parameters of the model. We hope that
the results of the present paper will stimulate experimental investigations
along these lines.
We acknowledge Somenath Jalal for useful discussions and the Swiss National
Science Foundation for financial support. B.K. acknowledges the financial
support under UPE-II and DST-PURSE programs of JNU.
*
§ MAJORANA SOLUTIONS OF THE KITAEV CHAIN
To solve the Kitaev chain (<ref>), we need to find the singular
value decomposition of
M=[ -μ τ_- 0 ⋯; τ_+ -μ τ_- 0 ⋯; 0 τ_+ -μ τ_- 0 ⋯; ⋱ ⋱ ⋱ ; ⋯ 0 τ_+ -μ τ_-; ⋯ 0 τ_+ -μ ]
with τ_±= t±Δ, i.e. find orthogonal matrices U, V and a real diagonal matrix Σ such that M=UΣ V^T. Writing u⃗_k and v⃗_k for the k^th columns of U and V respectively, they satisfy
Mv⃗_k = ϵ_k u⃗_k
u⃗_k^T M = ϵ_k v⃗_k^T.
Let us find two unit-norm column vectors u⃗, v⃗ and ϵ such that Mv⃗=ϵu⃗ and u⃗^TM=ϵv⃗^T.
First we forget about the normalisation and boundary conditions and focus
on the secular equation.
Setting the components of u⃗, v⃗ as u_j=a x^j and v_j = b x^j,
we have
M v⃗ = (b/a) (τ_+ -μ x +τ_- x^2)/x u⃗ + b.t.
u⃗^T M = (a/b) (τ_- -μ x +τ_+ x^2)/x v⃗^T + b.t.
where b.t. stands for boundary terms. Hence, u⃗ and v⃗ satisfy the secular
equation provided
b/a=√((τ_- -μ x +τ_+ x^2)/(τ_+ -μ x +τ_- x^2))
and
ϵ = 1/x√((τ_- -μ x +τ_+ x^2)
(τ_+ -μ x +τ_- x^2)).
Because of the reflection symmetry j → N+1-j, if x is a solution of
equation (<ref>) for some ϵ, then 1/x is also a solution.
Assuming ϵ known, the solutions are x_±, 1/x_± and satisfy
0 = ϵ^2 x^2 - (τ_- -μ x +τ_+ x^2)
(τ_+ -μ x +τ_- x^2)
∝ (x-x_+)(x-1/x_+)(x-x_-)(x-1/x_-)
which by identification yields, writing ρ_±= x_±+ 1/x_±,
x_± = 1/2(ρ_± +√(ρ_±^2 - 4)),
ρ_± = [μ t ±√((t^2-Δ^2)ϵ^2 +
Δ^2(μ^2-4t^2+4Δ^2))]/(t^2-Δ^2).
Taking into account the reflection symmetry, the general form of the
components of u⃗, v⃗ is thus
u_j = a_+ x_+^j + b_+ x_+^N+1-j + a_- x_-^j + b_- x_-^N+1-j
v_j = a_+ x_+^N+1-j + b_+ x_+^j +a_- x_-^N+1-j + b_- x_-^j
with the ratios b_+/a_+ and b_-/a_- given by equation (<ref>)
with x=x_+ and x=x_- respectively.
Furthermore, we have the boundary conditions
a_+ + b_+x_+^N+1 + a_- + b_-x_-^N+1 = 0
a_+ x_+^N+1 + b_+ + a_-x_-^N+1 + b_- = 0
which set the ratio a_-/a_+ and give the quantisation condition on the
energies ϵ_k.
The last degree of freedom, say a_+, is then set by normalising u⃗ (from equation (<ref>), u⃗ = v⃗ ).
Note that for the special cases t= Δ and μ=0, we have γ̃'_1 = γ'_1 and γ̃”_1=γ”_N with ϵ_1=0. We have a similar result for t=-Δ and μ=0.
For these two cases, the general formalism described above does not apply
since it yields x_±=0,±∞.
kane
M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
zhang
X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
haldane
F. D. M. Haldane, Phys. Lett. A 93, 464 (1983).
kennedy
T. Kennedy, J. Phys. Cond. Mat. 2, 5737 (1990).
tedoldi
F. Tedoldi, R. Santachiara, and M. Horvatić, Phys. Rev.
Lett. 83, 412 (1999).
kitaev
A. Y. Kitaev. Phys.-Usp. 44 131 (2001).
mourik
V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard,
E. P. A. M. Bakkers, and L. P. Kouwenhoven,
Science 336, 1003 (2012).
nadj-perge
S. Nadj-Perge, I. K. Drozdov, J. Li, H. Chen, S. Jeon, J. Seo,
A. H. MacDonald, B. A. Bernevig, and A. Yazdani, Science 346, 602 (2014).
sarma
S. Das Sarma, J. D. Sau, and T. D. Stanescu,
Phys. Rev. B 86, 220506(R) (2012).
toskovic
R. Toskovic, R. van den Berg, A. Spinelli, I. S. Eliens,
B. van den Toorn, B. Bryant, J.-S. Caux, and A. F. Otte,
Nat. Phys. 12, 656 (2016).
dmitriev
D. V. Dmitriev, V. Y. Krivnov, A. A. Ovchinnikov, and A. Langari,
J. Exp. Theor. Phys. 95, 538 (2002).
mila
F. Mila, Nat. Phys. 12, 633 (2016).
pfeuty
P. Pfeuty, Ann. Phys. 57, 79 (1970).
chakrabarti
S. Suzuki, J.-i. Inoue, and B. K. Chakrabarti, Quantum Ising Phases and
Transitions in Transverse Ising Models (Springer, Lecture Notes in Physics,
Vol. 862 (2013)).
jalal
S. Jalal and B. Kumar, Phys. Rev. B 90, 184416 (2014).
rujan
P. Ruján, Phys. Rev. B 24, 6620 (1981).
hassler
F. Hassler and D. Schuricht, New J. Phys. 14, 125018 (2012).
lieb
E. Lieb, T. Schultz, and D. Mattis, Ann. Phys. 16, 407 (1961).
katsura
H. Katsura, D. Schuricht, and M. Takahashi,
Phys. Rev. B 92, 115137 (2015).
kao
H.-C. Kao, Phys. Rev. B 90, 245435 (2014).
hedge
S. S. Hegde, and S. Vishveshwara, Phys. Rev. B 94,
115166 (2016).
barouch
E. Barouch and B. M . McCoy, Phys. Rev. A3, 786 (1971).
loss
D. Rainis, L. Trifunovic, J. Klinovaja, and D. Loss,
Phys. Rev. B 87, 024515 (2013).
markus
S. M. Albrecht, A. P. Higginbotham, M. Madsen, F. Kuemmeth,
T. S. Jespersen, J. Nygård, P. Krogstrup, and C. M. Marcus,
Nature 531, 206 (2016).
loss2 Y. Tserkovnyak and D. Loss, Phys. Rev. A 84, 032333 (2011).] |
http://arxiv.org/abs/1701.07585v1 | 20170126061357 | Bistable director alignments of nematic liquid crystals confined in frustrated substrates | [
"Takeaki Araki",
"Jumpei Nagura"
] | cond-mat.soft | [
"cond-mat.soft"
] |
^1Department of Physics, Kyoto University, Kyoto 606-8502, Japan
^2CREST, Japan Science and Technology Agency, Japan
We studied in-plane bistable alignments of nematic liquid crystals
confined by two frustrated surfaces by means of Monte Carlo simulations
of the Lebwohl-Lasher spin model.
The surfaces are prepared with orientational checkerboard patterns,
on which the director field is
locally anchored to be planar yet orthogonal between the
neighboring blocks.
We found the director field in the bulk tends to be aligned along the
diagonal axes of the checkerboard pattern,
as reported experimentally
[J.-H. Kim et al., Appl. Phys. Lett. 78, 3055 (2001)].
The energy barrier between the two stable orientations
is increased, when the system is brought to the isotropic-nematic
transition temperature.
Based on an elastic theory, we found that the bistability
is attributed to the spatial modulation of the director field
near the frustrated surfaces.
As the block size is increased and/or the elastic modulus is reduced,
the degree of the director inhomogeneity is increased, enlarging the
energy barrier.
We also found that the switching rate between the stable states is
decreased when the block size is comparable to the cell thickness.
64.70.mf,
61.30.Hn,
64.60.De,
42.79.Kr
Bistable director alignments of nematic liquid crystals confined
in frustrated substrates
Takeaki Araki^1,2 and Jumpei Nagura^1
December 30, 2023
==========================================================================================
§ INTRODUCTION
Liquid crystals have been utilized in many applications.
In particular, they are widely used
in optical devices such as flat panel displays
<cit.>.
Because of the softness of the liquid crystal,
its director field is deformed by
relatively weak external fields <cit.>.
To sustain the deformed state, the external field
has to be constantly applied to the liquid crystal substance.
In order to reduce power consumption,
a variety of liquid crystal systems showing
multistable director configurations or storage effects
have been developed <cit.>.
In such systems,
a pulsed external field can induce permanent changes of the
director configurations.
Liquid crystals of lower symmetries,
such as
cholesteric, ferroelectric and flexoelectric phases,
are known to show the storage effects
<cit.>.
A nematic liquid crystal
in a simple geometry, e.g. that
sandwiched
between two parallel plates with homeotropic anchoring,
shows a unique stable director configuration if external
fields are not imposed.
By introducing elastic frustrations,
the nematic liquid crystals can have
different director configurations
of equal or nearly equal elastic energy
<cit.>.
For instance,
either of horizontal or vertical director orientation
is possibly formed
in nematic liquid crystals confined
between two flat surfaces of uniformly tilted but oppositely directed
anchoring alignments <cit.>.
Also, it was shown that the nematic liquid crystal confined in
porous media shows a memory effect <cit.>.
The disclination lines of the director field
can adopt a large number of trajectories running through the
channels of the porous medium
<cit.>.
The prohibition of spontaneous changes of the defect
pattern among the possible trajectories leads to the
memory effect.
Recent evolutions of micro- and nano-technologies
enable us to tailor substrates of inhomogeneous
anchoring conditions, the length scale of which
can be tuned less than the wavelength of visible light.
With them, many types of structured surfaces for the liquid crystals
and the resulting
director alignments have been reported in the past
few decades
<cit.>.
For example, a striped surface, in which the
homeotropic and planar anchorings appear alternatively,
was used to control the polar angle of the
director field in the bulk <cit.>.
Kim et al. demonstrated
in-plane bistable alignments
by using a nano-rubbing technique with an atomic
force microscope <cit.>.
They prepared
surfaces of orientational checkerboard patterns.
The director field in contact to the surfaces
is imposed to be parallel to the
surface yet orthogonal between the neighboring domains.
They found that the director field far from the surface tends to be
aligned along either of the two diagonal axes of the
checkerboard pattern.
More complicated patterns are also possible to prepare <cit.>.
In this paper, we consider the mechanism of
the bistable orientations
of the nematic liquid crystals
confined in two flat surfaces of the checkerboard
anchoring patterns.
We carried out Monte Carlo simulations
of the Lebwohl-Lasher spin model <cit.>
and argued their results
with a coarse-grained elastic theory.
In particular, the dependences of the stability of the
director patterns on the temperature, and the domain size
of the checkerboard patterns are studied.
Switching dynamics between the stable configurations
are also considered.
§ SIMULATION MODEL
We carry out lattice-based Monte Carlo simulations
of nematic liquid crystals confined by two parallel plates
<cit.>.
The confined space is composed of three-dimensional
lattice sites (L× L × H)
and it is denoted by ℬ.
Each lattice site i has a unit
spin vector u_i (|u_i|=1),
and the spins are mutually interacting with those at the
adjacent sites.
At z=0 and z=H+1,
we place substrates,
composed of two-dimensional lattices.
We put unit vectors d_j on the site j on 𝒮,
where 𝒮 represents
the ensemble of the substrate lattice sites.
We employ the following Hamiltonian for u_i,
ℋ =
-ϵ∑_i,j (∈ℬ)
P_2(u_i·u_j)
-∑_i∈ℬP_2(u_i·e)
-w∑_i,j,i∈ℬ, j∈𝒮
P_2(u_i·d_j),
where P_2(x)=3(x^2-1/3)/2 is the second-order Legendre function and
∑_i,j means
the summation over the nearest neighbor site pairs.
We have employed the same Hamiltonian to
study the nematic liquid crystal confined in
porous media <cit.>.
The first term of the right hand side of Eq. (<ref>)
is the Lebwohl-Lasher potential,
which describes the isotropic-nematic
transition <cit.>.
In Fig. <ref>, we plot the temperature dependences of
(a) the scalar nematic order parameter S_ b
and (b) the elastic modulus K
in a bulk system.
The numerical schemes for measuring them
are described in Appendix A.
We note that
a cubic lattice with periodic boundary conditions (L^3=128^3)
is used for obtaining S_ b and K in Fig. <ref>.
As the temperature is increased,
both the scalar order parameter and
the elastic modulus are decreased and show abrupt
drops at the transition temperature T=T_ IN,
which is estimated as
k_ BT_ IN/ϵ≅ 1.12 <cit.>.
The second term of Eq. (<ref>) is the coupling between the
spins in ℬ and an in-plane external field
e.
The last term represents the
interactions between the bulk spins and the surface directors,
that is, the
Rapini-Papoular type anchoring effect <cit.>.
w is the strength of the anchoring interaction.
If d_j is parallel to the substrates and w>0,
the planar anchoring conditions are imposed to the
spins at the ℬ-sites contacting to 𝒮.
This term not only gives the angle dependence of the anchoring effect
in the nematic phase, but also enhances the nematic order near the surface.
In Fig. <ref>(a), we also plot the scalar nematic order parameter
on a homogeneous surface of w=ϵ.
The definition of S_ w is described in Appendix A.
The nematic order on the surface is larger than that in the bulk S_ b
and is decreased continuously with T.
Even at and above T_ IN,
S_ w does not vanish to zero.
When the temperature is far below T_ IN,
on the other hand, it is close to that in the bulk S_ b.
In this study, we prepare two types of anchoring cells.
In type I cells, we set hybrid substrates.
At the bottom surface (z=0), the preferred direction d_j
is heterogeneously patterned like a checkerboard as given
by
d_j(x,y)
= (0,1,0) if ([x/D]+[y/D]) is even, and (1,0,0) if ([x/D]+[y/D]) is odd,
where [X] stands for the largest integer smaller
than a real number X.
D is the unit block size of the checkerboard pattern.
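In code this pattern is a one-liner. A small Python sketch of the checkerboard map (ours, with integer lattice coordinates x, y):

import numpy as np

def anchoring_direction(x, y, D):
    # Checkerboard of mutually orthogonal planar easy axes, block size D.
    if (x // D + y // D) % 2 == 0:
        return np.array([0.0, 1.0, 0.0])
    return np.array([1.0, 0.0, 0.0])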
At the top surface (z=H+1), on the other hand,
the preferred direction is homogeneously set to
d_j=d_ t≡(cosϕ_ tsinθ_ t,
sinϕ_ tsinθ_ t,cosθ_ t).
θ_ t and ϕ_ t are the polar and azimuthal
angles of the preferred direction at the top surface.
In type II cells,
both substrates are patterned like the checkerboard,
according to Eq. (<ref>).
We perform Monte Carlo simulations with
heat bath samplings.
A trial rotation of the i-th spin
is accepted, considering the local
configurations of neighboring spins,
with the probability
p(Δℋ)=1/(1+e^Δℋ/k_ BT),
where Δℋ is the difference of the
Hamiltonian between before and after the trial rotation.
The physical meaning of the temporal evolution of Monte
Carlo simulations is sometimes a matter of debate.
However, we note that the method is known to be
very powerful and useful for studying glassy
systems with slow relaxations,
such as a spin glass <cit.>,
the dynamics of which is dominated by activation processes
overcoming an energy barrier.
In this study
we fix the anchoring strengths at both
the surfaces to w=ϵ, for simplicity.
The lateral system size is L=512 and the
thickness H is changed.
For the lateral x- and y- directions,
the periodic boundary conditions
are employed.
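To make the procedure concrete, a condensed Python sketch of one heat-bath sweep is given below. This is our illustration only: ϵ = k_B = 1, no external field, and the array layout, helper names and loop structure are ours (the production runs use L=512, for which this naive implementation is not optimised):

import numpy as np

rng = np.random.default_rng(1)

def P2(x):
    return 1.5 * x * x - 0.5

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def local_energy(u, i, j, k, spins, d_bot, d_top, w, L, H):
    # Energy of spin u at bulk site (i, j, k); lateral directions are periodic,
    # layers k = 0 and k = H - 1 couple to the substrate directors.
    e = 0.0
    for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        kk = k + dk
        if kk < 0:
            e -= w * P2(u @ d_bot[i, j])
        elif kk >= H:
            e -= w * P2(u @ d_top[i, j])
        else:
            e -= P2(u @ spins[(i + di) % L, (j + dj) % L, kk])
    return e

def sweep(spins, d_bot, d_top, w, T, L, H):
    for _ in range(L * L * H):
        i, j, k = rng.integers(L), rng.integers(L), rng.integers(H)
        trial = random_unit()
        dE = (local_energy(trial, i, j, k, spins, d_bot, d_top, w, L, H)
              - local_energy(spins[i, j, k], i, j, k, spins, d_bot, d_top, w, L, H))
        if rng.random() < 1.0 / (1.0 + np.exp(min(dE / T, 60.0))):   # p(dH)
            spins[i, j, k] = trial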
§ RESULTS AND DISCUSSIONS
§.§ Bistable alignments
First,
we consider nematic liquid crystals confined
in cells with the hybrid surfaces (type I).
Figure <ref>(a)
plots the energies stored in the cell
with respect to the azimuthal anchoring angle
ϕ_ t.
Here the polar anchoring angle is fixed to θ_ t=π/2.
The energy per unit area
ℰ is calculated as
ℰ(θ_ t,ϕ_ t)
=ℋ(θ_ t,ϕ_ t)/L^2-
ℰ_ min,
where X means the spatial average of a variable X.
ℰ_ min is the lowest energy defined as
ℰ_ min=min_θ_ t,ϕ_ tℋ/L^2
at each temperature (see below).
ℰ is obtained after 5× 10^4 Monte Carlo steps
(MCS) in the absence of external fields.
The cell thickness is H=16 and the block size is D=8.
The temperature is changed.
Figure <ref>(a) indicates that the
energy has two minima at ϕ_ t=±π /4,
while it is maximized at
ϕ_ t=0 and ±π/2.
To see the dependence on
the polar angle, we plot ℰ against θ_ t
with fixing ϕ_ t=π/4 in Fig. <ref>(b).
It is shown that ℰ
is minimized at θ_ t=π/2 for ϕ_ t=π/4.
Hence we conclude that
the stored energy is globally lowest
at (θ_ t,ϕ_ t)=(π/2,±π/4),
so that we set
ℰ_ min
=ℋ(θ_ t=π/2,ϕ_ t=π/4)/L^2
in Fig. <ref>.
This global minimum indicates that
the parallel, yet bistable configurations of the director field
are energetically preferred in this cell.
This simulated bistability is in accordance with the
experimental observations reported by Kim et al.<cit.>.
When a semi-infinite cell is used,
the bistable alignments of the director field would be realized.
Hereafter, we express these two stable directions with
n̂_+ and n̂_-.
That is, n̂_±=(1/√(2),± 1/√(2),0).
Figure <ref> also indicates the temperature dependencies
of the stored energies.
When the temperature
is much lower than the transition temperature T_ IN,
the curves of ℰ are rather flat.
As the temperature is increased,
the dependence becomes more remarkable.
Figure <ref>(a)
plots the energy difference between the
maximum and minimum of ℰ
for fixed θ_ t=π/2
as functions of T.
It is defined by
the in-plane rotation of d_ t
as Δℰ=
ℰ(θ_ t=π/2,ϕ_ t=π/4)
-ℰ(π/2,0).
We plot them for several block sizes D, while
the cell thickness is fixed to H=16.
In Fig. <ref>(a), we observe non-monotonic dependences of
the energy difference on the temperature.
Δℰ is almost independent of T
when T/T_ IN<0.6.
In the range of
0.6 ≲ T/T_ IN < 0.9,
it is increased with increasing T.
When T/T_ IN≳ 0.9,
it decreases with T and it almost disappears if T>T_ IN.
When T>T_ IN, the system is in the isotropic state,
and it does not have the long-range order.
Thus, it is reasonable that Δℰ vanishes
when T>T_ IN.
When T<T_ IN,
on the other hand,
it is rather striking that the energy difference
shows the non-monotonic dependences on T,
despite the fact that the long-range order and the resultant
elasticity are reduced monotonically with increasing T
(see Fig. <ref>).
We plot the energy difference Δℰ
as a function of D in Fig. <ref>(b),
where the temperature is T/T_ IN=0.89.
The cell thickness is changed.
It is shown that the energy difference Δℰ is increased
proportionally
to the block size D when D is small.
When the block size is large, on the other hand,
the energy difference is almost
saturated.
The saturated value becomes smaller when the liquid crystal is
confined in the thicker cell.
In order to clarify the mechanisms of the bistable alignments,
we calculate the spatial distribution of the nematic order parameter.
In Fig. <ref>,
we show snapshots of xx- and xy components of a
tensorial order parameter at several planes parallel to the
substrates.
Using u_i(t'), the tensorial order parameter
Q_μν is calculated
by averaging 3/2(u_μ u_ν-δ_μν/3)
in a certain period δ t as,
Q_i,μν(t)=1/δ t∑_t'=t^t+δ t-1 3/2{u_i,μ(t')u_i,ν(t')-1/3δ_μν},
where t' means the Monte Carlo cycle, and
μ and ν stand for x, y and z.
In this study, we set δ t=10^2,
which is chosen so that the system is well thermalized.
The block size is
D=8 in (a) and D=64 in (b),
and the cell thickness is fixed to H=8.
The temperature is set to T/T_ IN=0.89.
The anchoring direction at the top surface is along n̂_+,
and we started the simulation with an initial condition,
in which the director field is
along
n̂_+,
so that the director field is likely
to be parallel to the surface
and along the azimuthal angle ϕ=π/4 on average.
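A compact way to accumulate this time average from a stored history of spin configurations (our sketch, matching the definition above):

import numpy as np

def q_tensor(history):
    # history: spins of shape (dt, L, L, H, 3); returns Q of shape (L, L, H, 3, 3).
    dt = history.shape[0]
    Q = 1.5 * np.einsum('t...a,t...b->...ab', history, history) / dt
    return Q - 0.5 * np.eye(3)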
Q_xx near the bottom surface
shows the checkerboard pattern
as like as that of the imposed anchoring directions d_j.
Q_xy inside the block domains is small
and it is enlarged at the edges between the blocks.
Moving away from the bottom surface,
the inhomogeneity is reduced and the director pattern
becomes homogeneous along n̂_+.
The inhomogeneities in Q_xx and Q_xy
are more remarkable for the larger D than
those for the smaller D.
In Fig. <ref>,
we plot the corresponding
profiles of the spatial modulations of
the order parameter with respect to z.
The degree of the inhomogeneity of Q_μν is defined by
I(z) = 1/L^2S_ b^2∫ dx dy {Q_μν(x,y,z)-Q̅_μν(z)}^2,
where Q̅_μν(z) is the spatial average of Q_μν
in the z-plane and it is given by
Q̅_μν(z) = L^-2∫ dx dy Q_μν(x,y,z).
S_ b is the scalar nematic parameter obtained in the bulk
[see Fig. <ref>(a)].
Since Q_μν∝ S_ b in the bulk,
the profiles are scaled by S^2_ b in Eq. (<ref>),
in order to see the pure degree of the inhomogeneity
of the director field.
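Given the Q array of the previous sketch, I(z) takes a few lines (ours; the sum over μν is implicit in the definition above):

import numpy as np

def inhomogeneity(Q, S_b):
    # Q: (L, L, H, 3, 3).  Plane average Qbar(z), then the scaled variance I(z).
    Qbar = Q.mean(axis=(0, 1), keepdims=True)
    return ((Q - Qbar) ** 2).sum(axis=(3, 4)).mean(axis=(0, 1)) / S_b**2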
In Fig. <ref>(a), we changed the block size D and the
temperature is fixed to T/T_ IN=0.89.
It is shown that the degree of the inhomogeneity
decays with z, and it is larger for
larger D as shown in Fig. <ref>.
Figure <ref>(a) also shows that the
decaying length is also increased with the block size D.
Roughly it agrees with D.
In Fig.<ref>(b), we plot the profiles of I(z)
for different temperatures with fixing D=16.
It is shown that
the spatial modulation is increased as the temperature is increased.
This is because
the nematic phase becomes softer as the temperature
is increased (see Fig. <ref>(b)).
When the elastic modulus is small, the director field
is distorted by the anchoring surface more largely.
Based on these numerical results,
we consider the bistable alignments with a
continuum elasticity theory.
The details of the continuum theory are described in
Appendix B.
In our theoretical argument,
the spatial modulation of the director field due to the
heterogeneous anchoring
plays a crucial role in inducing the bistable
alignments along the diagonal directions.
After some calculations,
we obtained an effective anchoring energy for D≪ H as
g(ϕ_0)=-cW^2D/Ksin^22ϕ_0,
instead of the Rapini-Papoular anchoring energy, -Wcos^2ϕ_0/2.
Here ϕ_0 is the average azimuthal angle
of the director field on the patterned surface.
K is the elastic modulus of the director field
in the one-constant approximation of the elastic theory,
and W represents the anchoring strength in the
continuum description.
c is a numerical factor, which is estimated as
c≅ 0.085 when H/D is large.
g(ϕ_0) has a fourfold symmetry and is lowered
at ϕ_0=±π/4 and ± 3π/4.
The resulting energy difference per unit area is given by
Δℰ_ th=
π^2K/[32H{1+K^2/(8cW^2DH)}].
First we discuss the dependence of the energy difference
on the block size D.
Equation (<ref>) indicates that the energy difference
behaves as Δℰ_ th≈π^2 cW^2D/(4K),
which is increased linearly with D,
when D is sufficiently small.
If D is large enough, on the other hand,
the energy difference converges to Δℰ_ th≈π^2 K/(32H).
The latter energy difference agrees with the deformation
energy of the
director field, which twists along the z axis by ±π/4.
It is independent of D, but
is proportional to H^-1.
The asymptote behaviors for small and large D
are consistent with the numerical results
shown in Fig. <ref>(b).
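The crossover between the two regimes is easy to visualise by evaluating the expression for Δℰ_ th directly. In the short Python sketch below the values of K, W and H are arbitrary placeholders (c ≅ 0.085 as quoted above):

import numpy as np

c, K, W, H = 0.085, 1.0, 0.3, 16.0     # placeholder parameter values
for D in (1, 4, 16, 64, 256):
    full  = (np.pi**2 * K / (32 * H)) / (1 + K**2 / (8 * c * W**2 * D * H))
    lin   = np.pi**2 * c * W**2 * D / (4 * K)     # small-D asymptote
    const = np.pi**2 * K / (32 * H)               # large-D asymptote
    print(f"D={D:4d}  full: {full:.4e}   ~D: {lin:.4e}   ~const: {const:.4e}")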
Next we consider the dependence of Δℰ on the
temperature.
Equation (<ref>) also suggests
Δℰ_ th is proportional to W^2D/K
when W^2DH/K^2 is small.
We have speculated the anchoring strength
is simply proportional to the nematic order as
W∝ S_ b.
If so, the energy difference is expected to be
independent of S_ b
as Δℰ_ th∝ W^2/K
∝ S^0_ b,
since K is roughly proportional to S_ b^2.
This expectation is inconsistent with the dependence of
the numerical results of Δℰ in Fig. <ref>(a).
A possible candidate mechanism in explaining this discrepancy
is that
we should use the nematic order on the surface S_ w,
instead of S_ b, for estimating W.
Since S_ w is dependent on T more weakly
than S_ b near the transition temperature [see Fig. <ref>(a)],
W^2/K can be increased with T.
The curve of S_ w^2/K is drawn in Fig. <ref>(b).
Thus,
the director field is more largely deformed near T_ IN as shown
in Fig. <ref>(b),
so that the resulting energy difference shows the
increase with T.
Also, Fig. <ref>(a) shows
Δℰ turns to decrease to zero,
when we approach to T_ IN more closely.
In the vicinity of T_ IN,
K is so small that W^2/K becomes large.
Then Eq. (<ref>) behaves as
Δℰ_ th∝ K/H.
It is decreased to zero as K with approaching to T_ IN.
In Fig. <ref>(a), we draw the theoretical
curve of Eq. (<ref>) with taking into account
the dependences of W and K on the
temperature.
Here we assume W=W_0 S_ w with W_0 being a
constant.
The theoretical curve reproduces the non-monotonic
behavior of the energy difference qualitatively.
After the plateau of Δℰ_ th
in the lower temperature region,
it is increased with T.
Then it turns to decrease to zero when the temperature
is close to the transition temperature.
Here we use W_0=0.3, which is chosen to adjust
the theoretical curve to the numerical result.
§.§ Switching dynamics
Next we confine the nematic liquid crystals in the
type II cells, both the surfaces of which
are patterned as checkerboard.
As indicated by Eq. (<ref>),
each checkerboard surface gives rise to the effective
anchoring effect with the fourfold symmetry.
Hence,
the director field is expected to
show the in-plane bistable alignments along
n_+ or n_- also in the type II cells.
Figure <ref>(a)
plots
the spatial average of the
xy component of Q_μν at equilibrium
with respect to the block size.
The equilibrium value of Q_xy is
estimated as Q_xy^∞=Q_xy|_t=5× 10^4
in the simulations with no external field.
As the initial condition, we employ the director field
homogeneously aligned along n̂_+,
so that Q_xy^∞ is likely to be positive.
In Fig. <ref>(a),
we also draw a line of 3S_ b/4,
which corresponds to the bulk nematic order when the
director field is along n_+.
It is shown that
Q_xy^∞ is roughly constant and is
close to 3S_ b/4 for D≪ H.
It is reasonable since the inhomogeneity of the director field
is localized within D from the surfaces.
When D>H, on the other hand,
Q_xy^∞ is decreased with D.
When D≫ H, the type II cell can be considered as a
collection of square domains each carrying the uniform
anchoring direction.
Thus, the director field
tends to be parallel to the local anchoring direction
d_j, and then, Q_xy inside each unit block
becomes small locally.
Only on the edges of the block domains, the director fields
are distorted and adopts either of the distorted
states as schematically shown in Fig. <ref>(b).
With scaling D by H, the plots of Q_xy^∞
collapse onto a single curve.
Then we consider the switching dynamics of the director field
between the two stable alignments with imposing in-plane
external fields e in the type II cells.
In Fig. <ref>,
we plot the spatial average of the xy component of the
order parameter Q_xy
in the processes
of the director switching.
The cell size is H=16, the block size is D=16 and
the temperature is T/T_ IN=0.89.
At t=0, we start the Monte Carlo simulation with the same initial condition,
in which the director field is homogeneously aligned along
n̂_+,
in the absence of the external field.
As shown in Fig. <ref>,
the nematic order is relaxed to
a certain positive value,
which agrees with Q_xy^∞ in Fig. <ref>(a).
From t_1=10^4,
we then impose an in-plane external field along
n̂_-, and turn it off at t_2=2× 10^4.
After the system is thermalized during t=2× 10^4 and
t_3=3× 10^4 with no external field,
we apply the second external field along n̂_+
from t_3=3× 10^4
until t_4=4× 10^4.
We change the strength of the external field e.
When the external field is weak (e^2≤ 0.03),
the averaged orientational order is slightly reduced by the external field,
but it recovers the original state after the field is removed.
After a strong field (e^2≥ 0.04) is applied and
is removed off, on the other hand,
Q_xy
is relaxed to another steady state value, which is close to
-Q_xy^∞.
This new state of the negative Q_xy
corresponds to the other bistable alignment along
n̂_-.
After the second field along
n̂_+ is applied,
the averaged orientational order Q_xy
comes back to the positive original value, +Q_xy^∞.
In Fig. <ref>(a),
we show the detailed relaxation behaviors of
Q_xy in the first switching after t_1.
Δ t means the elapsed time in the first switching,
that is Δ t=t-t_1.
Here
we change the block size D, while
we fix the external field at e^2=0.03 and
the cell thickness H=16 (type II).
We note that Q_xy at Δ t=0
depends on D as indicated in Fig. <ref>(a).
In Fig. <ref>(a), it is shown that the switching rate depends
also on the block size D.
Notably, the dependence of the switching behavior is not
monotonic against D.
In Fig. <ref>(b), we plot the characteristic switching time
τ with respect to the block size D in the cells of
H=8, 16 and 32.
The temperature and the field strength are the same those for
Fig. <ref>(a).
The characteristic time τ is defined such that the average orientational
order is equal to zero at τ, Q_xy(Δ t=τ)=0.
Figure <ref>(b) shows the characteristic time is maximized
when the
block size is comparable to the cell thickness.
When D<H, the switching process is slowed down as the block size
is increased.
On the other hand, it is speeded up with D when D>H.
In Fig. <ref>(b), it is suggested that
the dependence of τ on D becomes less significant
as H is increased.
Figure <ref> depicts
snapshots of
Q_xy(t)
at the midplane (z=H/2) during the first switching process.
The parameters are the same as those in Fig. <ref>(a),
so that the pattern evolutions
correspond to the curves of Q_xy in Fig. <ref>(a).
Figure <ref> shows that the switching behavior
is slowed down when D is comparable to H,
in accordance with Fig. <ref>(b).
When D<H, the snapshots implies the switching proceeds
via nucleation and growth mechanism.
From the sea of the positive Q_xy, where the director is
aligned along n̂_+,
the droplets of the negative Q_xy are nucleated.
They grow with
time and cover the whole area eventually.
Under the external field along n̂_-,
the alignment of the director field along n̂_-
is more preferred than that along n̂_+.
Because of the energy barrier between these bistable alignments,
the director field cannot change its
orientation to n̂_- smoothly under
a weak external field.
From Eq. (<ref>), the energy barrier
for the local swiching of the director field
between the two stable states is given by
Δℱ=8D^2 {g(ϕ_0=0)-g(π/4)}=8cW^2D^3/K,
when D<H.
Thus, the slowing down of the switching process
with D is considered to be
attributed to the enhancement of the energy barrier.
Here we note that
a critical field strength
for the thermally activated switching cannot be defined unambiguously.
Since the new alignment is energetically preferred over the
original one even under a weak field,
the director configuration will change its orientation
if the system is annealed for a sufficiently long period.
When the field strength is moderate (e^2≅ 0.035),
the averaged order
goes to an intermediate value,
neither of Q_xy^∞ or -Q_xy^∞ in Fig. <ref>.
Such intermediate values of Q_xy reflect large scale
inhomogeneities of the bistable alignments (see Fig. <ref>).
At each block, the director field adopts either of the
two stable orientations.
The pattern of the intermediate Q_xy
depends not only on the field strength, but also
on the annealed time.
Under large external fields,
on the other hand,
the energy barrier between the two
states can be easily overcome, so that
the switching
occurs without being arrested at the initial orientation
(not shown here).
Regarding the local director field, which adopts
either of the two stable orientations (n̂_+ and
n̂_-), as a binarized spin
at the corresponding block unit, we found a similarity
of the domain growth in our system and that in a two-dimensional
Ising model subject to an external magnetic field.
If the switching of the director field occurs locally
only at each block unit,
there is no correlations between the director fields
in the adjacent block units.
Therefore, the nucleation and growth switching behavior
implies the director field at a block unit prefers to be aligned
along the same orientation as those at the adjacent block units.
We observed string-like patterns,
as shown at Δ t=4000 for D=1 in
Fig. <ref>.
We note that they are not
disclinations of the director field;
they represent domain walls perpendicular to the substrates.
In the type II cells, we have not observed
any topological defects,
although topological defects are sometimes stabilized
in frustrated cells <cit.>.
The string-like patterns remain rather stable over a transient period.
On the other hand,
such string-like patterns are not observed in the switching process
of the Ising model.
This indicates that the binarized-spin description of the bistable
director alignments may not be adequate.
Under the external field along n̂_-,
the director field rotates to the new orientation
either clockwise or counter-clockwise.
New domains that appear via clockwise rotations
have some mismatch with those formed through counter-clockwise
rotations.
Boundaries between such incommensurate domains
are thus formed and
tend to suppress their coalescence to some extent,
although the corresponding energy barriers are not so large.
When D>H,
the switching occurs in a different way.
The director rotations are localized around the
edges of the blocks,
as indicated in Fig. <ref>(b).
As D is increased, the fraction of the
director field that responds to the field
is reduced.
Although
the director fields
around the centers of the blocks
do not show any switching behavior
before and after the field application,
they are slightly distorted toward the field.
We consider that this distortion of the
director field inside the blocks effectively reduces
the energy barrier against the external field.
We have not succeeded in fully explaining
the mechanism of the reduction of the
switching time with D.
When D>H, the inhomogeneous director field
contains higher Fourier modes of the distortion.
The energy barrier
becomes lower for the higher
Fourier modes [see Eq. (<ref>)].
Thus, such higher Fourier modes are more active
against the external field and would
act as a trigger of the switching process.
§ CONCLUSION
In this article, we studied nematic liquid crystals confined
by two parallel checkerboard substrates by means of
Monte Carlo simulation of the Lebwohl-Lasher model.
As observed experimentally by Kim et al.,
we found that the director field in the bulk shows bistable alignment
along either of the two diagonal axes.
We attribute the bistability of the alignments to the
spatial modulation of the director field near the substrates.
Based on the elastic theory, we derived
an effective
anchoring energy with the fourfold symmetry (Eq. (<ref>)).
Its anchoring strength is expected to behave as
W^2D/K, when the block size D is smaller than
the cell thickness.
As the temperature approaches the isotropic-nematic transition
temperature, the elastic modulus K of the nematic phase is reduced,
so that the director field is more strongly deformed near the substrates.
With this effective anchoring effect, we can qualitatively explain
the non-monotonic dependence of the energy stored in this cell.
We also studied the switching dynamics of the director configuration
under imposed in-plane external fields.
Usually, the switching is considered to be associated with the
actual breaking of the anchoring condition.
Thus, the energy barrier for the switching is expected to be
proportional to W <cit.>.
In this article, we propose another possible mechanism of the
switching, in which the anchoring condition is not necessarily broken.
Since the energy barrier increases with the block size,
the switching dynamics becomes notably slower when the block size is
comparable to the cell thickness.
By solving Δℱ=(Δϵ E^2/2)× (4D^2H),
we obtain a characteristic strength of the electric field E
as
E_ c≅{8cW^2D/(Δϵ KH)}^1/2,
where Δϵ is the anisotropy of the dielectric
constant [see Eq. (<ref>)].
If we apply an in-plane external field larger than E_ c,
the switching occurs rather homogeneously
without showing the nucleation and growth processes.
This characteristic strength decreases with decreasing D,
so that a checkerboard pattern with smaller D is preferred
for reducing the required field strength.
With smaller D, however, the stability of the two preferred
orientations is reduced.
If the effective anchoring energy is lower than the thermal energy,
the bistable alignment will be destroyed by thermal fluctuations.
In this sense, the block size D should be larger
than D_ c≈ (Kk_ BT/8cW^2)^1/3, where we
assumed that the switching occurs locally in each block,
that is, Δℱ≈ k_ BT.
For a typical nematic liquid crystal with K=1 pN and
W=10^-5 J/m^2 at room temperature T=300 K,
it is estimated as D_ c≅ 34 nm.
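Both characteristic scales are easy to evaluate numerically. The sketch below uses the parameters quoted above, together with an assumed typical dielectric anisotropy Δϵ≈ 10ϵ_0 and illustrative cell dimensions D=100 nm and H=1 μm (these last three values are our assumptions, not taken from the simulations):

import numpy as np

kB, eps0 = 1.380649e-23, 8.8541878e-12   # J/K, F/m
K, W, c, T = 1.0e-12, 1.0e-5, 0.085, 300.0
d_eps = 10.0 * eps0                      # assumed dielectric anisotropy
D, H = 100e-9, 1e-6                      # illustrative block size and cell thickness

E_c = np.sqrt(8.0 * c * W**2 * D / (d_eps * K * H))
D_c = (K * kB * T / (8.0 * c * W**2))**(1.0 / 3.0)
print(f"E_c ~ {E_c:.2e} V/m")     # a few 1e5 V/m for these parameters
print(f"D_c ~ {D_c*1e9:.0f} nm")  # a few tens of nm; the precise value depends on c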
In our theoretical argument,
we assumed the one-constant approximation for the elastic modulus.
However, the director field cannot be described
by a single deformation mode in the above cells:
the in-plane splay and bend deformations are localized
within a layer of thickness D near the surface,
whereas
the twist deformation is induced by the external field
along the cell-thickness direction.
If the elastic moduli for the three deformation modes
are largely different from each other,
our theoretical argument would be invalid.
We need to improve both the theoretical and numerical
schemes to treat such dependences more correctly.
Also, we considered only the checkerboard substrates.
It is interesting and important
to design other types of patterned surfaces <cit.>
in order to endow liquid crystal devices with further desirable
functions, such as faster responses to external fields.
We hope to report a series of such studies in the near future.
§ ACKNOWLEDGEMENTS
We acknowledge valuable discussions with J. Yamamoto, H. Kikuchi,
I. Nishiyama, K. Minoura and T. Shimada.
This work was supported by KAKENHI
(No. 25000002 and No. 24540433). Computation was done using the
facilities of the Supercomputer Center, the Institute for Solid
State Physics, the University of Tokyo.
§ ESTIMATION OF THE NEMATIC ORDER AND THE ELASTIC MODULUS
In this appendix,
we estimate the scalar nematic order parameter
and the elastic modulus in Fig. <ref>
from the Monte Carlo simulations with Eq. (<ref>)
<cit.>.
First we consider the bulk behavior of nematic liquid crystals,
described by the Lebwohl-Lasher potential.
Here we remove the surface sites 𝒮 and employ
periodic boundary conditions along all the axes (x, y, and z).
We use the initial condition u_i=(1,0,0) and thermalize the
system with heat-bath sampling.
The simulation box size is L^3 with L=128.
It is well known that this Lebwohl-Lasher spin model describes
the first-order transition between isotropic and nematic phases.
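For reference, a minimal Metropolis sketch of this bulk model is given below, assuming the standard Lebwohl-Lasher Hamiltonian H=-ϵ∑_⟨ij⟩P_2(u_i·u_j); our production runs use heat-bath sampling instead, so this Python fragment only illustrates the model:

import numpy as np

def p2(x):
    # second Legendre polynomial P2(x) = (3x^2 - 1)/2
    return 0.5 * (3.0 * x * x - 1.0)

def site_energy(u, i, j, k, eps=1.0):
    # Lebwohl-Lasher energy of one spin with its six neighbors (periodic boundaries)
    L = u.shape[0]
    e = 0.0
    for axis in range(3):
        for step in (-1, 1):
            idx = [i, j, k]
            idx[axis] = (idx[axis] + step) % L
            e -= eps * p2(np.dot(u[i, j, k], u[idx[0], idx[1], idx[2]]))
    return e

def metropolis_sweep(u, beta, delta=0.3):
    # one sweep of single-spin Metropolis updates with small random reorientations
    L = u.shape[0]
    for _ in range(L**3):
        i, j, k = np.random.randint(0, L, size=3)
        old = u[i, j, k].copy()
        e_old = site_energy(u, i, j, k)
        trial = old + delta * np.random.standard_normal(3)
        u[i, j, k] = trial / np.linalg.norm(trial)
        dE = site_energy(u, i, j, k) - e_old
        if dE > 0.0 and np.random.random() > np.exp(-beta * dE):
            u[i, j, k] = old  # reject the move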
In Fig. <ref>, we plot the xx component of the tensorial order
parameter after the thermalization (t≤ 5× 10^4)
as a function of T.
Since the initial condition is along the x axis,
the director field is likely to be aligned along the x axis.
We therefore regard Q_xx
as the scalar nematic order parameter S_ b.
We see an abrupt change of S_ b around k_ BT_ IN≈
1.12ϵ, which is consistent
with previous studies <cit.>.
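For completeness, the tensorial order parameter used here follows from the spin configurations as below (the array layout u[i,j,k,:] is our illustrative assumption):

import numpy as np

def q_tensor(u):
    # Q_ab = <(3 u_a u_b - delta_ab)/2>, averaged over all lattice sites
    v = u.reshape(-1, 3)
    return 1.5 * np.einsum('ia,ib->ab', v, v) / len(v) - 0.5 * np.eye(3)

# with the initial condition along x, S_b is read off as Q[0, 0]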
Above T_ IN, the nematic order almost vanishes, while
it increases with decreasing T when T<T_ IN.
In the nematic phase (T<T_ IN),
the director field n can be defined.
Because of thermal noise,
the local director field fluctuates around the
average director field, and the magnitude of these fluctuations
reflects the elastic modulus.
The elastic modulus of the director field is obtained
by calculating the scattering function of the
tensorial order parameter as <cit.>,
⟨|Q̃_xμ(q)|^2⟩_T=
k_ BT/{A+4L_Qsin^2(|q|a/2)},
for μ=y and z.
Q̃_xμ(q) is the Fourier component
of Q_xμ at a wave vector q.
a is the lattice constant, and ⟨⋯⟩_T denotes
the thermal average.
A and L_Q are the coefficients appearing in the
free energy functional for Q_μν.
In the case of T<T_ IN, the coefficient A vanishes, so that
the inverse scattering function goes to zero for |q|a≅ 0.
Then, we obtain the coefficient L_Q by fitting
⟨|Q̃_xμ|^2⟩^-1_T
with 4(k_ BT)^-1L_Qsin^2(|q|a/2).
L_Q is proportional to the elastic modulus K of the director field
n as L_Q=KS^2_ b.
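In practice, this fit can be performed by averaging the squared Fourier amplitudes of Q_xy over equilibrated configurations and regressing the inverse against 4sin^2(|q|a/2). A sketch under these assumptions (the normalization conventions may differ from those of our actual analysis):

import numpy as np

def fit_LQ(configs, kT, a=1.0):
    # configs: list of spin arrays u[i,j,k,:]; estimates L_Q from <|Q_xy(q)|^2>
    L = configs[0].shape[0]
    s = np.zeros(L)
    for u in configs:
        qxy = 1.5 * u[..., 0] * u[..., 1]   # local Q_xy = 3 u_x u_y / 2
        ft = np.fft.fftn(qxy) / L**1.5      # discrete Fourier transform
        s += np.abs(ft[:, 0, 0])**2         # wave vectors along the x axis
    s /= len(configs)
    q = 2.0 * np.pi * np.fft.fftfreq(L, d=a)
    mask = q > 0
    x = 4.0 * np.sin(np.abs(q[mask]) * a / 2.0)**2
    # inverse scattering function = (A + L_Q x)/kT; with A ~ 0 the slope gives L_Q
    slope, intercept = np.polyfit(x, kT / s[mask], 1)
    return slope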
In Fig. <ref>(b),
the elastic modulus K is plotted with respect to T.
It decreases with increasing T for T<T_ IN.
This indicates the softening of the nematic phase near the transition
temperature.
Next we consider the effect of the surface term.
The surface effect not only induces the angle dependence
of the anchoring in the nematic phase, but also leads to wetting
of the surface by the nematic phase in the isotropic phase
<cit.>.
We set homogeneous surfaces of w=ϵ
at z=0 and z=H+1 as in the main text.
The anchoring direction is d_j=(1,0,0).
Periodic boundary conditions are imposed along the x and y directions
and the initial condition is u_i=(1,0,0).
The profile of Q̅_xx (not shown here) indicates that
Q̅_xx at z=1 becomes larger than its bulk value S_ b.
This value is the surface order S_ w, which is also plotted in
Fig. <ref>(a) with red open squares.
Notably, S_ w remains finite
even when T>T_ IN.
In Fig. <ref>(a), we cannot see any drastic change of S_ w,
which decreases continuously with T.
§ ANALYSIS WITH FRANK ELASTICITY THEORY
Here, we consider the nematic liquid crystal
confined by the checkerboard substrate
on the basis of the Frank elasticity theory.
The checkerboard substrate is placed at z=0,
while we fix the director field at the top surface,
as in the type I cells employed in the simulations.
The free energy of the nematic liquid crystal
is given by
ℱ = K/2∫ dr
(∇n)^2-Δϵ/2∫ dr(n·E)^2
-W∫_z=0dxdy(n·d)^2,
where n is the director field.
The first term on the right-hand side of Eq. (<ref>)
is the elastic energy.
Here we employ the one-constant approximation
with the elastic modulus K.
E and Δϵ are the external electric field
and the anisotropy of the dielectric constant, respectively.
In this appendix
we do not consider the effect of the electric field and set E=0.
The third term in Eq. (<ref>)
represents the anchoring energy in
the Rapini-Papoular form.
W is the anchoring strength and d
is the preferred direction on the surface at z=0.
For the checkerboard substrates,
we set d according to Eq. (<ref>).
At the top surface, we fix the director field
as n(z=H)=d_ t=(cosϕ_ t,sinϕ_ t,0),
and the bottom surface also prefers planar anchoring.
From the symmetry, therefore,
we assume that the director field in the bulk lies parallel
to the substrates everywhere.
Then, we can write it with only the azimuthal angle ϕ as
n=(cosϕ,sinϕ,0).
Also, we assume that the director field
is periodic in the x and y directions,
so that we only have to consider the
free energy in the unit cell (0≤ x, y ≤ 2D).
With these assumptions, the free energy per unit area
is written as
ℰ
=K/8D^2∫_0^2Ddx∫_0^2Ddy∫_0^Hdz (∇ϕ)^2
-W/2D^2∫_0^Ddx
{∫_0^Ddy sin^2ϕ
+∫_D^2Ddycos^2ϕ}|_z=0.
In the equilibrium state,
the free energy is minimized with respect to ϕ(x,y,z).
Inside the cell (0<z<H),
the functional derivative of ℰ gives
the Laplace equation for ϕ,
δℰ/δϕ=-K∇^2ϕ=0.
From the symmetry argument,
we have its solution as
ϕ(x,y,z) = ϕ_0+(ϕ_ t-ϕ_0)z/H+
Δ(x,y,z),
Δ(x,y,z) = ∑_m,n=0^∞Δ_mnsin(2m+1)π x/Dsin(2n+1)π y/D
×sinh(πγ_mn(H-z)/D),
γ_mn = √((2m+1)^2+(2n+1)^2)
where
ϕ_0 and Δ_mn are determined later.
It is not easy to calculate the second term
in Eq. (<ref>) analytically.
Assuming |Δ|≪ 1, we approximate sin^2ϕ as
sin^2(ϕ_0+Δ)
≈sin^2ϕ_0+Δsin 2ϕ_0 +Δ^2cos 2ϕ_0.
Then, we obtain the free energy per unit area as
ℰ = K/2H(ϕ_ t-ϕ_0)^2-W/2
+∑_m,n[
π Kγ_mnΔ_mn^2sinh(2πγ_mnH/D)/16D
-4WΔ_mnsin2ϕ_0 sinh(πγ_mnH/D)/(2m+1)(2n+1)π^2].
First we minimize the free energy with respect to Δ_mn by solving
∂ℰ/∂Δ_mn=0.
Then, we have
Δ_mn=16WD sech(πγ_mnH/D)sin2ϕ_0/(2m+1)(2n+1)γ_mnπ^3 K,
and
ℰ ≈ K/2H(ϕ_ t-ϕ_0)^2-W/2
-(cW^2D/K)sin^22ϕ_0,
c = ∑_m,n32tanh(πγ_mnH/D)/{(2m+1)^2(2n+1)^2π^5γ_mn}.
In the limit of H≫ D, c converges to c≈ 0.085,
while it behaves as c≈ 0.5H/D if H≪ D.
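The double series for c converges quickly and is easy to evaluate numerically; the following sketch (ours) reproduces both limits quoted above:

import numpy as np

def c_prefactor(H_over_D, M=200):
    # partial sum of c = sum_{m,n} 32 tanh(pi gamma H/D) / ((2m+1)^2 (2n+1)^2 pi^5 gamma)
    odd = 2 * np.arange(M) + 1
    mm, nn = np.meshgrid(odd, odd, indexing='ij')
    gamma = np.sqrt(mm**2 + nn**2)
    terms = 32.0 * np.tanh(np.pi * gamma * H_over_D) / (mm**2 * nn**2 * np.pi**5 * gamma)
    return terms.sum()

print(c_prefactor(100.0))        # -> ~0.085 for H >> D
print(c_prefactor(0.01) / 0.01)  # -> ~0.5, i.e. c ~ 0.5 H/D for H << D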
Since c is positive,
the last term on the right-hand side of Eq. (<ref>)
represents the effective anchoring energy
[Eq. (<ref>)] in the main text.
It indicates that
the director field tends to align along the diagonal axes of
the checkerboard surface, ϕ_0=±π/4.
Then, we minimize ℰ with respect to ϕ_0
and obtain
ℰ=-W/2-cW^2D/K+[K/(2H)](ϕ_ t∓π/4)^2/{1+K^2/(8cW^2DH)}.
It corresponds to the plots in Fig. <ref>(a).
Here we assumed |ϕ_0∓π/4|≪ 1,
so that sin^22ϕ_0≅ 1-4(ϕ_0±π/4)^2.
The resulting energy difference is obtained as
Δℰ=π^2K/[32H{1+K^2/(8cW^2DH)}].
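The finite-anchoring factor 1/{1+K^2/(8cW^2DH)} quantifies how much the stored-energy contrast is reduced below its strong-anchoring value π^2K/(32H). A quick numerical look, with the same illustrative material parameters as quoted in the Conclusion:

K, W, c = 1.0e-12, 1.0e-5, 0.085
for D, H in ((100e-9, 1e-6), (1e-6, 1e-6)):
    factor = 1.0 / (1.0 + K**2 / (8.0 * c * W**2 * D * H))
    print(D, H, factor)  # ~0.87 for D = 0.1 um and ~0.99 for D = 1 um (H = 1 um)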
In the strong-anchoring limit,
we can obtain ϕ rigorously as
ϕ(x,y,z)=(ϕ_ t∓π/4)z/H ±π/4
+4/π∑_m,n1/(2m+1)(2n+1)sin(2m+1)π x/Dsin(2n+1)π y/D
×sinh(πγ_mn(H-z)/D)/sinh(πγ_mnH/D).
Its energy difference is then given by Δℰ=π^2K/(32H),
which is consistent with Eq. (<ref>)
in the limit of W→∞.