technicolor/llama3.1_8b_abstract_generation
Task: Text Generation

arXiv ID (float64, 704–1k) | Title (string, lengths 20–207) | Introduction (string, lengths 497–75.7k) | Abstract (string, lengths 91–2.39k) | Prompt (string, lengths 877–76.1k)
---|---|---|---|---|
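A minimal sketch for loading the dataset with the Hugging Face `datasets` library; the `train` split name and the exact column keys are assumptions read off the header above, not taken from an official card.

```python
# Minimal sketch: load the dataset and inspect one row.
# Assumptions: a "train" split exists and the columns match the header above.
from datasets import load_dataset

ds = load_dataset("technicolor/llama3.1_8b_abstract_generation", split="train")

row = ds[0]
print(row["Title"])     # paper title
print(row["Abstract"])  # target abstract
# row["Prompt"] holds the chat messages (user request with title + introduction,
# assistant reply with the abstract) shown in the rows below.
```

The Prompt column mirrors the Introduction and Abstract columns in chat-message form, as the rows below show.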
0802.0330 | Spherical black holes with regular center: a review of existing models including a recent realization with Gaussian sources | Recently there has been a renewed interest in the search for solutions of Einstein equations,
mostly motivated by the study of higher dimensional gravity, for instance in the context
of the brane world scenario [ 104 , 105 ] and of string theory [ 110 ] . Some beautiful
examples of higher dimensional solutions have appeared (see, e.g., [ 68 , 67 ] ), which feature interesting properties
that, in the pure 4-dimensional framework, are absent. This renewed interest is also having
some influence on a rather more specialized, but very interesting, research area, that of regular
black holes.
It is of course well-known that, under rather generic conditions on the energy–matter content
of spacetime, classical solutions of Einstein equations exhibit both future [ 99 ] and past [ 60 , 61 , 62 , 51 , 63 ] singularities [ 65 , 31 , 14 , 114 , 90 , 129 , 64 ] , usually hidden by an event horizon [ 71 ] .
This fact, which is clearly exemplified by the first and probably most well-known solution of Einstein
equations, i.e. the Schwarzschild solution [ 112 , 111 ] ,
has been likely fully appreciated only after the study of its
global analytic extension [ 44 , 46 , 74 , 125 , 95 , 90 , 64 ] ,
which gives a consistent picture
of its properties, like the existence of the region inside the horizon,
in which the radial and time coordinate exchange their character, and the presence of the
central singularity. Although the presence of both the black hole region and the central
singularity has eventually been accepted, i.e. we learned how to classically live with them,
especially the presence of a singularity is a recurrent motivation to underline the inadequacy
of general relativity as a theory of spacetime below some length scale (apart from the theoretical
motivations, it is a fact that, experimentally, gravity can be tested only in
a finite range of scales). On one side, maybe the most well known one, this has
motivated the search for a more complete theory of gravity including also quantum effects
(see, e.g., [ 73 ] for a recent, comprehensive review).
On the other side, it has also sustained many efforts to push as much as possible Einstein gravity
to its limit, trying to avoid, if not the black hole region, at least the central singularity
in a way as consistent as possible with physical requirements (the two points of view just
outlined should be seen as complementary and, often, even integrated). Following some very early
ideas that date back to the work of Sakharov [ 109 ] ,
Gliner [ 53 ] and Bardeen [ 9 ] , solutions having a global
structure very similar to the one of black hole spacetimes, but in which the central singularity
is absent, have been found (references to them will appear in the rest of the paper). In this contribution we are going to briefly review some of these
ideas, but, before concluding this introductory section with the layout of the rest of the paper,
we would like to make a couple of remarks. The first of them is that, in contrast to the fact that
nowadays the world of theoretical physics witnesses a considerable number of strongly believed
(but as yet unproven) conjectures, most of the results about black holes and their properties are,
in fact, theorems. Theorems usually make some hypotheses, which in this case can be roughly
interpreted within a threefold scheme: i) the validity of some geometric properties, usually related
to the behavior of geodesics (for instance the existence of trapped surfaces, and so on);
ii) the validity of some conditions on the matter fields that are coupled to gravity (energy conditions);
iii) the validity of some (more technical) hypotheses about the global/causal structure of spacetime.
It is then clear that the possibility of singularity avoidance, within the context defined by
general relativity, requires the violation of, at least, one of the above conditions.
Since conditions of type iii) are mostly technical,
there has been a great effort to make them as general as possible (although sometimes this means less
immediate) if not to remove them at all, by generalizing the earliest results [ 65 , 64 , 129 ] . Conditions i) are usually
the requirement that some indicator exists, which emphasizes that something a little bit unusual for
a "flat", non-covariant mind is taking place in spacetime, and are usually related to the existence of
horizons, so there is little reason to modify them. It is then natural that, as a possible way to avoid singularities,
a relaxation of conditions of type ii) has been advocated. With a strongly conservative attitude, a word
of caution should be sounded at this point. It is, in fact, known that matter and energy violating some
of the energy conditions have as yet unobserved properties (we will come back to this point
later on, mentioning vacuum and the cosmological constant; a clear discussion of this point can be found
in the standard reference [ 64 ] ; see also the early [ 107 ] for a physically oriented discussion of the implications of a violation of the weak energy condition):
this means that we are not yet able to produce them in a laboratory by a well-known, generally
reproducible procedure. We have, nevertheless, good candidates to realize these violations when we treat at an
effective level the quantum properties of spacetime and matter at some length/energy scales:
this is very suggestive, since it directly connects to the, possibly, incomplete character of
classical general relativity as a theory of spacetime and with the ongoing, diversified, efforts
toward its quantization [ 73 ] . To review in more detail some aspects related
to the above reflections, we plan as follows. In section 2 we review various regular models of spacetime,
centering our attention, almost exclusively, on regular black holes of a very specific type (specified below).
After a review of the earliest ideas (subsection 2.1 ) we analyze their first concrete
(and, perhaps, to some extent independent) realization, known as the Bardeen solution: we review also some
studies, which appeared much later, discussing its global character (subsection 2.2 ); we
then continue with a discussion of black hole interiors (subsection 2.3 ) reporting various early
proposals, which adopted spacetime junctions to get rid of singularities; this brings
us to the central part of
this section (subsection 2.4 ), where some exact solutions are analyzed, together with the
possibility of physical realizations for the energy-matter content which should act as their source
(subsubsection 2.4.1 ). The solutions that we will have described up to this point are not
extemporary realizations, but can be understood in a very interesting, complete and general framework: we
thus review the essence of this framework in subsection 2.5 . This section is concluded
with a very concise summary of the results that we have reviewed (subsection 2.6 ). Then, in section 3 we use a recently obtained solution, which is another
possible realization of the general type of solutions described in subsection 2.5 ,
to perform a simple exercise, i.e. the study of the violation of one of the energy conditions. For completeness,
after introducing the algebraic form of the solution, we quickly construct its global spacetime structure
in subsection 3.1 (this result follows immediately from the results reviewed in subsection 2.5 ); we then show which regions of spacetime are filled with matter violating the
strong energy condition (subsection 3.2 ). The results of this second part of the paper
are summarized in subsection 3.3 . Some general comments and remarks find space in the concise concluding section, i.e. section 4 .
We now conclude this introduction by fixing one notation and one naming convention, as below. 1.1 Conventions and notations. In what follows we will concentrate on spherically symmetric solutions of Einstein equations and restrict
ourselves to media which satisfy the condition that the radial pressure equals the opposite of the energy
density. We will then use, throughout and unless otherwise stated, the coordinate system $(t, r, \vartheta, \varphi)$, in which
the metric can be written in the static form adapted to the spherical symmetry, i.e. $g_{\mu\nu} = \mathrm{diag}\left(-f(r),\, f(r)^{-1},\, r^{2},\, r^{2}\sin^{2}\vartheta\right)$. (1) As apparent from the above definition we adopt the signature $(-,+,+,+)$. We occasionally
will use the name metric function for the function $f(r)$. We do not spend extra
comments about the meaning of the coordinate choice, which is standard and discussed in
detail in various textbooks (see for instance [ 90 ] ; any
other textbook choice will be equivalent); thus, without restating every time our coordinate choice,
in what follows we will specify various metrics just by specifying the corresponding metric function.
In view of the above, when we have to discuss the maximal extension of solutions that
admit an expression of the metric in the form ( 1 ), although we will follow the naming conventions
of the standard reference [ 64 ] for boundaries
at infinity, only at one point of our discussion will we need a few more global ideas than the ones concisely
and effectively discussed in [ 130 ] . We will moreover use the standard notation $T_{\mu\nu}$ for the stress-energy tensor which appears on the right-hand side of Einstein equations. | We review, in a historical perspective, some results about black hole spacetimes with a regular center. We then see how their properties are realized in a specific solution that recently appeared; in particular we analyze in detail the (necessary) violation of the strong energy condition. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*Spherical black holes with regular center: a review of existing models including a recent realization with Gaussian sources* and the introduction:Recently there has been a renewed interest in the search for solutions of Einstein equations,
mostly motivated by the study of higher dimensional gravity, for instance in the context
of the brane world scenario 104 , 105 and of string theory 110 . Some beautiful
examples of higher dimensional solutions have appeared (see, e.g., 68 , 67 ), which feature interesting properties
that, in the pure 4-dimensional framework, are absent. This renewed interest is also having
some influence on a rather more specialized, but very interesting, research area, that of regular
black holes.
It is of course well-known that, under rather generic conditions on the energy–matter content
of spacetime, classical solutions of Einstein equations exhibit both future 99 and past 60 , 61 , 62 , 51 , 63 singularities 65 , 31 , 14 , 114 , 90 , 129 , 64 , usually hidden by an event horizon 71 .
This fact, which is clearly exemplified by the first and probably most well-known solution of Einstein
equations, i.e. the Schwarzschild solution 112 , 111 ,
has been likely fully appreciated only after the study of its
global analytic extension 44 , 46 , 74 , 125 , 95 , 90 , 64 ,
which gives a consistent picture
of its properties, like the existence of the region inside the horizon,
in which the radial and time coordinate exchange their character, and the presence of the
central singularity. Although the presence of both the black hole region and the central
singularity has eventually been accepted, i.e. we learned how to classically live with them,
especially the presence of a singularity is a recurrent motivation to underline the inadequacy
of general relativity as a theory of spacetime below some length scale (apart from the theoretical
motivations, it is a fact that, experimentally, gravity can be tested only in
a finite range of scales). On one side, maybe the most well known one, this has
motivated the search for a more complete theory of gravity including also quantum effects
(see, e.g., 73 for a recent, comprehensive review).
On the other side, it has also sustained many efforts to push as much as possible Einstein gravity
to its limit, trying to avoid, if not the black hole region, at least the central singularity
in a way as consistent as possible with physical requirements (the two points of view just
outlined should be seen as complementary and, often, even integrated). Following some very early
ideas that date back to the work of Sakharov 109 ,
Gliner 53 and Bardeen 9 , solutions having a global
structure very similar to the one of black hole spacetimes, but in which the central singularity
is absent, have been found (references to them will appear in the rest of the paper). In this contribution we are going to briefly review some of these
ideas, but, before concluding this introductory section with the layout of the rest of the paper,
we would like to make a couple of remarks. The first of them is that, in contrast to the fact that
nowadays the world of theoretical physics witnesses a considerable number of strongly believed
(but as yet unproven) conjectures, most of the results about black holes and their properties are,
in fact, theorems. Theorems usually make some hypotheses, which in this case can be roughly
interpreted within a threefold scheme: i) the validity of some geometric properties, usually related
to the behavior of geodesics (for instance the existence of trapped surfaces, and so on);
ii) the validity of some conditions on the matter fields that are coupled to gravity (energy conditions);
iii) the validity of some (more technical) hypotheses about the global/causal structure of spacetime.
It is then clear that the possibility of singularity avoidance, within the context defined by
general relativity, requires the violation of, at least, one of the above conditions.
Since conditions of type iii) are mostly technical,
there has been a great effort to make them as general as possible (although sometimes this means less
immediate) if not to remove them at all, by generalizing the earliest results 65 , 64 , 129 . Conditions i) are usually
the requirement that some indicator exists, which emphasizes that something a little bit unusual for
a "flat", non-covariant mind is taking place in spacetime, and are usually related to the existence of
horizons, so there is little reason to modify them. It is then natural that, as a possible way to avoid singularities,
a relaxation of conditions of type ii) has been advocated. With a strongly conservative attitude, a word
of caution should be sounded at this point. It is, in fact, known that matter and energy violating some
of the energy conditions have as yet unobserved properties (we will come back to this point
later on, mentioning vacuum and the cosmological constant; a clear discussion of this point can be found
in the standard reference 64 ; see also the early 107 for a physically oriented discussion of the implications of a violation of the weak energy condition):
this means that we are not yet able to produce them in a laboratory by a well-known, generally
reproducible procedure. We have, nevertheless, good candidates to realize these violations when we treat at an
effective level the quantum properties of spacetime and matter at some length/energy scales:
this is very suggestive, since it directly connects to the, possibly, incomplete character of
classical general relativity as a theory of spacetime and with the ongoing, diversified, efforts
toward its quantization 73 . To review in more detail some aspects related
to the above reflections, we plan as follows. In section 2 we review various regular models of spacetime,
centering our attention, almost exclusively, on regular black holes of a very specific type (specified below).
After a review of the earliest ideas (subsection 2.1 ) we analyze their first concrete
(and, perhaps, to some extent independent) realization, known as the Bardeen solution: we review also some
studies, which appeared much later, discussing its global character (subsection 2.2 ); we
then continue with a discussion of black hole interiors (subsection 2.3 ) reporting various early
proposals, which adopted spacetime junctions to get rid of singularities; this brings
us to the central part of
this section (subsection 2.4 ), where some exact solutions are analyzed, together with the
possibility of physical realizations for the energy-matter content which should act as their source
(subsubsection 2.4.1 ). The solutions that we will have described up to this point are not
extemporary realizations, but can be understood in a very interesting, complete and general framework: we
thus review the essence of this framework in subsection 2.5 . This section is concluded
with a very concise summary of the results that we have reviewed (subsection 2.6 ). Then, in section 3 we use a recently obtained solution, which is another
possible realization of the general type of solutions described in subsection 2.5 ,
to perform a simple exercise, i.e. the study of the violation of one of the energy conditions. For completeness,
after introducing the algebraic form of the solution, we quickly construct its global spacetime structure
in subsection 3.1 (this result follows immediately from the results reviewed in subsection 2.5 ); we then show which regions of spacetime are filled with matter violating the
strong energy condition (subsection 3.2 ). The results of this second part of the paper
are summarized in subsection 3.3 . Some general comments and remarks find space in the concise concluding section, i.e. section 4 .
We now conclude this introduction by fixing one notation and one naming convention, as below. 1.1 Conventions and notations. In what follows we will concentrate on spherically symmetric solutions of Einstein equations and restrict
ourselves to media which satisfy the condition that the radial pressure equals the opposite of the energy
density. We will then use, throughout and unless otherwise stated, the coordinate system $(t, r, \vartheta, \varphi)$, in which
the metric can be written in the static form adapted to the spherical symmetry, i.e. $g_{\mu\nu} = \mathrm{diag}\left(-f(r),\, f(r)^{-1},\, r^{2},\, r^{2}\sin^{2}\vartheta\right)$. (1) As apparent from the above definition we adopt the signature $(-,+,+,+)$. We occasionally
will use the name metric function for the function $f(r)$. We do not spend extra
comments about the meaning of the coordinate choice, which is standard and discussed in
detail in various textbooks (see for instance 90 ; any
other textbook choice will be equivalent); thus, without restating every time our coordinate choice,
in what follows we will specify various metrics just by specifying the corresponding metric function.
In view of the above, when we have to discuss the maximal extension of solutions that
admit an expression of the metric in the form ( 1 ), although we will follow the naming conventions
of the standard reference 64 for boundaries
at infinity, only at one point of our discussion will we need a few more global ideas than the ones concisely
and effectively discussed in 130 . We will moreover use the standard notation $T_{\mu\nu}$ for the stress-energy tensor which appears on the right-hand side of Einstein equations."},{"role":"assistant","content":"Abstract: We review, in a historical perspective, some results about black hole spacetimes with a regular center. We then see how their properties are realized in a specific solution that recently appeared; in particular we analyze in detail the (necessary) violation of the strong energy condition."}]
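To make the "metric function" convention of the conventions subsection concrete, here is how the Bardeen solution named in this row looks in the form (1); a sketch from the standard regular black hole literature, not spelled out in the row itself ($m$ and $g$ are the mass and magnetic-charge parameters):

```latex
% Bardeen regular black hole written through the metric function f(r) of Eq. (1):
ds^{2} = -f(r)\,dt^{2} + f(r)^{-1}\,dr^{2} + r^{2}\,d\vartheta^{2}
         + r^{2}\sin^{2}\vartheta\,d\varphi^{2},
\qquad
f(r) = 1 - \frac{2\,m\,r^{2}}{\left(r^{2} + g^{2}\right)^{3/2}} .
% Near r = 0 one has f(r) = 1 - (2m/g^{3}) r^{2} + ..., a de Sitter-like regular
% core, while for r >> g the function reduces to Schwarzschild, f = 1 - 2m/r.
```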
0704.0432 | A Survey of Huebschmann and Stasheff's Paper: Formal Solution of the Master Equation via HPT and Deformation Theory | These notes, based on the paper [ 8 ] by Huebschmann and Stasheff, were prepared for a series of talks at Illinois State University with the intention of applying Homological Perturbation Theory (HPT) to the construction of derived brackets [ 11 , 16 ] , and eventually writing Part II of the paper [ 1 ] . Derived brackets are obtained by deforming the initial bracket via a derivation of the bracket. In [ 3 ] it was demonstrated that such deformations correspond to solutions of the Maurer-Cartan equation, and the role of an "almost contraction" was noted. This technique (see also [ 9 ] ) is very similar to the iterative procedure of [ 8 ] for finding the most general solution of the Maurer-Cartan equation, i.e. the deformation of a given structure in a prescribed direction. The present article, besides providing additional details of the condensed article [ 8 ] , forms a theoretical background for understanding and generalizing the current techniques that give rise to derived brackets. The generalization, which will be the subject matter of [ 2 ] , will be achieved by using Stasheff and Huebschmann's universal solution. A second application of the universal solution will be in deformation quantization and will help us find the coefficients of star products in a combinatorial manner,
rather than as a byproduct of string theory which underlies the original solution given by Kontsevich [ 10 ] . HPT is often used to replace given chain complexes by homotopic, smaller, and more readily computable chain complexes (to explore "small" or "minimal" models). This method may prove to be more efficient than "spectral sequences" in computing (co)homology. One useful tool in HPT is Lemma 1 (Basic Perturbation Lemma (BPL)). Given a contraction of $N$ onto $M$ and a perturbation $\partial$ of $d_N$, under suitable conditions there exists a perturbation $d_\partial$ of $d_M$ such that $H(M, d_M + d_\partial) = H(N, d_N + \partial)$. The main question is: under what conditions does the BPL allow the preservation of the data structures (DGAs, DG coalgebras, DGLAs etc.)? (We will use the self-explanatory abbreviations such as DG for "differential graded", DGA for "differential graded (not necessarily associative) algebra", and DGLA for "differential graded Lie algebra".) Another prominent idea is that of a "(universal) twisting cochain"
as a solution of the "master equation": Proposition 1. Given a contraction of $N$ onto $M$ and a twisting cochain $N \rightarrow A$ ($A$ some DGA), there exists a unique twisting cochain $M \rightarrow A$ that factors through the given one and which can be constructed inductively. The explicit formulas are reminiscent of the Kuranishi map [ 13 ] (p. 17), and the
relationship will be investigated elsewhere. Note: we will assume that the ground ring is a field $F$ of characteristic zero.
We will denote the end of an example with the symbol $\Diamond$ and the end of a proof by $\Box$. | These notes, based on the paper "Formal Solution of the Master Equation via HPT and Deformation Theory" by Huebschmann and Stasheff, were prepared for a series of talks at Illinois State University with the intention of applying Homological Perturbation Theory to the derived bracket constructions of Kosmann-Schwarzbach and T. Voronov, and eventually writing Part II of the paper "Higher Derived Brackets and Deformation Theory I" by the present authors. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A Survey of Huebschmann and Stasheff's Paper: Formal Solution of the Master Equation via HPT and Deformation Theory* and the introduction:These notes, based on the paper 8 by Huebschmann and Stasheff, were prepared for a series of talks at Illinois State University with the intention of applying Homological Perturbation Theory (HPT) to the construction of derived brackets 11 , 16 , and eventually writing Part II of the paper 1 . Derived brackets are obtained by deforming the initial bracket via a derivation of the bracket. In 3 it was demonstrated that such deformations correspond to solutions of the Maurer-Cartan equation, and the role of an almost contraction was noted. This technique (see also 9 ) is very similar to the iterative procedure of 8 for finding the most general solution of the Maurer-Cartan equation, i.e. the deformation of a given structure in a prescribed direction. The present article, besides providing additional details of the condensed article 8 , forms a theoretical background for understanding and generalizing the current techniques that give rise to derived brackets. The generalization, which will be the subject matter of 2 , will be achieved by using Stasheff and Huebschmann's universal solution. A second application of the universal solution will be in deformation quantization and will help us find the coefficients of star products in a combinatorial manner,
rather than as a byproduct of string theory which underlies the original solution given by Kontsevich 10 . HPT is often used to replace given chain complexes by homotopic, smaller, and more readily computable chain complexes (to explore small or minimal models). This method may prove to be more efficient than spectral sequences in computing (co)homology. One useful tool in HPT is Lemma 1 (Basic Perturbation Lemma (BPL)). Given a contraction of $N$ onto $M$ and a perturbation $\partial$ of $d_N$, under suitable conditions there exists a perturbation $d_\partial$ of $d_M$ such that $H(M, d_M + d_\partial) = H(N, d_N + \partial)$. The main question is: under what conditions does the BPL allow the preservation of the data structures (DGAs, DG coalgebras, DGLAs etc.)? (We will use the self-explanatory abbreviations such as DG for differential graded, DGA for differential graded (not necessarily associative) algebra, and DGLA for differential graded Lie algebra.) Another prominent idea is that of a (universal) twisting cochain
as a solution of the master equation: Proposition 1. Given a contraction of $N$ onto $M$ and a twisting cochain $N \rightarrow A$ ($A$ some DGA), there exists a unique twisting cochain $M \rightarrow A$ that factors through the given one and which can be constructed inductively. The explicit formulas are reminiscent of the Kuranishi map 13 (p. 17), and the
relationship will be investigated elsewhere. Note: we will assume that the ground ring is a field $F$ of characteristic zero.
We will denote the end of an example with the symbol $\Diamond$ and the end of a proof by $\Box$."},{"role":"assistant","content":"Abstract: These notes, based on the paper "Formal Solution of the Master Equation via HPT and Deformation Theory" by Huebschmann and Stasheff, were prepared for a series of talks at Illinois State University with the intention of applying Homological Perturbation Theory to the derived bracket constructions of Kosmann-Schwarzbach and T. Voronov, and eventually writing Part II of the paper "Higher Derived Brackets and Deformation Theory I" by the present authors."}]
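The Basic Perturbation Lemma quoted in this row has an explicit form that is worth recording; the following is a sketch in the standard HPT notation (the maps $\nabla$, $\pi$, $h$, the side conditions, and the sign conventions are the usual ones, assumed here rather than taken from the row):

```latex
% A contraction of N onto M consists of chain maps \nabla : M -> N, \pi : N -> M
% and a homotopy h on N with
\pi\nabla = \mathrm{id}_{M}, \qquad
\nabla\pi - \mathrm{id}_{N} = d_{N}\,h + h\,d_{N}, \qquad
h\nabla = 0, \quad \pi h = 0, \quad h h = 0.
% Given a perturbation \partial of d_N (so that (d_N + \partial)^2 = 0) with
% h\partial locally nilpotent, set A = \partial\,(1 - h\,\partial)^{-1}. The BPL then gives
d_{\partial} = \pi A \nabla, \qquad
\nabla_{\partial} = \nabla + h A \nabla, \qquad
\pi_{\partial} = \pi + \pi A h, \qquad
h_{\partial} = h + h A h,
% i.e. a new contraction of (N, d_N + \partial) onto (M, d_M + d_\partial).
```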
0802.2726 | A review of spin determination at the LHC | Most models of TeV scale new physics are motivated by solving the
hierarchy problem. Therefore, the most crucial ingredient of all of
them is the mechanism of cancelling the quadratically divergent
correction to the Higgs mass within the Standard Model. In order to
achieve this, a set of new physics particles with the same or similar
gauge quantum numbers as
the Standard Model particles are introduced, whose couplings to the
Higgs are related to those of the Standard Model particles. This
"partner"-like structure of new
physics is very generic in large classes of new physics
scenarios. Well-known examples include the set of
superpartners in low energy supersymmetry [ 1 ] (for a recent review see Ref. [ 2 ] ) , KK
excitations in
extra-dimensional models [ 3 ] , as well as similar
states in little Higgs models [ 4 ] (Ref. [ 5 ] provides a brief review). Due to the similarities in gauge quantum numbers, initial LHC signatures
of new partners are very similar, as they can decay into the
same set of observable final state
particles. The mass spectra of different scenarios can be chosen to
produce similar dominant kinematical features, such as the $p_T$ distribution of the
decay product. For example, a typical gluino decay chain in
supersymmetry is $\tilde{g} \rightarrow q\bar{q} + \tilde{N}_{2}$ followed by $\tilde{N}_{2} \rightarrow \ell\bar{\ell} + \tilde{N}_{1}$. A similar decay chain in universal extra-dimension
models [ 3 ] with KK-gluon ($g^{(1)}$), KK-W ($W^{(1)}_{3}$)
and KK-photon ($\gamma^{(1)}$), $g^{(1)} \rightarrow q\bar{q}\,W^{(1)}_{3}$ followed by $W^{(1)}_{3} \rightarrow \ell\bar{\ell}\,\gamma^{(1)}$, gives
identical final states since both $\tilde{N}_{1}$ and $\gamma^{(1)}$ are
neutral stable particles which escape detection. The mass spectra of both
supersymmetry and UED can be adjusted in such a way that the $p_T$ of
the jets and leptons are quite similar. Some of the similarities in the LHC signature are actually the result
of equivalences in low energy effective theory. For example, it is
known that "theory space" motivated little Higgs models are equivalent to
extra-dimensional models in which
Higgs is a non-local mode in the bulk, via deconstruction [ 6 , 7 , 8 ] . Therefore, they
can actually be described by the same set of low energy ($\sim$ TeV)
degrees of freedom. An important feature of this class of models is
that the partners typically have the same spin as their corresponding
Standard Model particles. However, the difference between this set of models and low energy
supersymmetry is physical and observable with
a sufficiently precise measurement. In particular, the spin of
superpartners differs
from their Standard Model counterparts by half integers. Therefore,
spin measurements are crucial to set these scenarios apart. The conventional way of measuring the spin of a new particle involves
reconstruction of its rest frame using its decay
products and studying the angular distribution about the polarization
axis. For example, in the process $e^{+}e^{-} \rightarrow Z \rightarrow \mu^{+}\mu^{-}$, the $1+\cos^{2}\theta$ distribution of the muon direction in the rest frame of the $Z$ reveals
its vector nature. However, in most new physics scenarios of
interest such a strategy is complicated by the generic existence of
undetectable massive particles. Motivated by electroweak precision
constraints and the existence of
Cold Dark Matter, many such scenarios incorporate some discrete
symmetry which guarantees the existence of a
lightest stable neutral particle. Well-known examples of such discrete
symmetries include R-parity in
supersymmetry, KK-parity of universal extra-dimension
models (UED) [ 3 ] , or similarly, T-parity in Little
Higgs Models [ 9 , 10 , 11 , 12 ] . The
existence of such a neutral particle at the end of the decay chain
results in large missing energy events in which new
physics particles are produced.
This fact helps to separate them from the Standard Model
background. On the other hand, it
also makes the spin measurement more complicated because it is generically
impossible to reconstruct the momentum, and
therefore the rest frame, of the decaying new physics particles. There are two different approaches to measuring spin. First, given the
same gauge quantum numbers, particles with
different spin usually have very different production rates, due to the
difference between fermionic and bosonic couplings and the number of degrees
of freedom. Such an approach could be useful, in particular initially, for colored
particles due to their large (hence more measurable) production
rates. However, a crucial ingredient in such a strategy is the measurement of
the masses of particles produced, as rate can only provide definitive
information once the mass is fixed. Such an effort is made more
difficult owing to the existence of missing massive particles. There
is also
some residual model dependence since, for example, a couple of complex
scalars can fake a Dirac fermion. The second approach is the direct measurement of spin through its effect on angular correlations in decay products. In the absence of a
reconstructed rest frame, one is left to consider Lorentz invariant
quantities which encode angular correlations. As we will see later
in this review, spin correlations typically only exist in certain types
of decays. Furthermore, new physics particles are frequently pair
produced with independent decay chains containing similar
particles. Therefore, a valid spin correlation measurement requires
the ability to identify a relatively pure sample of events where we
can isolate certain decay chains and suppress combinatorics
effectively. Therefore, except for very special
cases, we expect this measurement will require large statistics. At
the same time, as will be clear from our discussion, using the appropriate
variables and correctly interpreting the measured angular distribution
frequently requires at least a partial
knowledge of the spectrum and the gauge quantum numbers. Obtaining
information about the spectrum and the quantum numbers is likely to
require a somewhat lower integrated luminosity than spin measurements
do. Therefore, the order with which we uncover the properties of new
particles is congruent to the order with which we must proceed in the
first place to correctly establish these properties. Thus, we should
clearly focus on mass scales, branching
ratios and gauge quantum numbers first, once new physics is
discovered at the LHC, while keeping an unbiased perspective towards
the underlying model. More refined measurements, such as the ones
described in this review, will enable us to tell the different models
apart thereafter. Such measurements can be useful and even more powerful in a linear collider as was recently proposed in Ref. [ 13 ] . In this review we will concentrate on methods applicable to the LHC. In principle, the production of particles with different spins also leads
to distinguishable angular distributions. This was investigated in the
context of linear colliders in Ref. [ 14 ] . A
similar measurement using the process $pp \rightarrow \tilde{\ell}\tilde{\ell}^{\star}$ at the LHC has been studied in
Ref. [ 15 ] . An analogous measurement in the production
of colored states is more challenging. First, typically several
different initial states and partial waves can contribute to the same
production process. Therefore, it is difficult to extract spin
information from the resulting angular distribution in a model
independent way. Second, as commented above it is often difficult to
reconstruct the direction of the original particles coming out of the
production vertex. As a result, angular correlations are further
washed out. In the rest of this review, we will survey both of these approaches
with slightly heavier emphasis given to the angular correlation technique. For
concreteness, we will compare supersymmetry with another generic
scenario in which the partners, such as the gluon partner $g'$, W-partner $W'$, quark partner $q'$, etc., have the same spin as their
corresponding Standard Model particles. As was pointed out above, this
so called same-spin scenario effectively parameterizes almost all
non-SUSY models which address the hierarchy problem. Spin measurement at the LHC is still a relatively new field where only
first steps
towards a comprehensive study have been taken. We will briefly
summarize these developments in this review. We will focus here on the
theoretical foundations and considerations relevant for the
construction of observables. The potential for measuring spin in many
new decay channels remains to be studied. Important effects, such as
Standard Model background and large combinatorics, deserve careful
further consideration. We outline these issues in connection to
particular channels below. | We review the prospects of direct spin determination of new particles which may be discovered at the LHC. We discuss the general framework and the different channels which contain spin information. The experimental challenges associated with such measurements are briefly discussed and the most urgent unresolved problems are emphasized. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A review of spin determination at the LHC* and the introduction:Most models of TeV scale new physics are motivated by solving the
hierarchy problem. Therefore, the most crucial ingredient of all of
them is the mechanism of cancelling the quadratically divergent
correction to the Higgs mass within the Standard Model. In order to
achieve this, a set of new physics particles with the same or similar
gauge quantum numbers as
the Standard Model particles are introduced, whose couplings to the
Higgs are related to those of the Standard Model particles. This
partner-like structure of new
physics is very generic in large classes of new physics
scenarios. Well-known examples include the set of
superpartners in low energy supersymmetry 1 (for a recent review see Ref. 2 ) , KK
excitations in
extra-dimensional models 3 , as well as similar
states in little Higgs models 4 (Ref. 5 provides a brief review). Due to the similarities in gauge quantum numbers, initial LHC signatures
of new partners are very similar, as they can decay into the
same set of observable final state
particles. The mass spectra of different scenarios can be chosen to
produce similar dominant kinematical features, such as the $p_T$ distribution of the
decay product. For example, a typical gluino decay chain in
supersymmetry is $\tilde{g} \rightarrow q\bar{q} + \tilde{N}_{2}$ followed by $\tilde{N}_{2} \rightarrow \ell\bar{\ell} + \tilde{N}_{1}$. A similar decay chain in universal extra-dimension
models 3 with KK-gluon ($g^{(1)}$), KK-W ($W^{(1)}_{3}$)
and KK-photon ($\gamma^{(1)}$), $g^{(1)} \rightarrow q\bar{q}\,W^{(1)}_{3}$ followed by $W^{(1)}_{3} \rightarrow \ell\bar{\ell}\,\gamma^{(1)}$, gives
identical final states since both $\tilde{N}_{1}$ and $\gamma^{(1)}$ are
neutral stable particles which escape detection. The mass spectra of both
supersymmetry and UED can be adjusted in such a way that the $p_T$ of
the jets and leptons are quite similar. Some of the similarities in the LHC signature are actually the result
of equivalences in low energy effective theory. For example, it is
known that theory space motivated little Higgs models are equivalent to
extra-dimensional models in which
Higgs is a non-local mode in the bulk, via deconstruction 6 , 7 , 8 . Therefore, they
can actually be described by the same set of low energy ($\sim$ TeV)
degrees of freedom. An important feature of this class of models is
that the partners typically have the same spin as their corresponding
Standard Model particles. However, the difference between this set of models and low energy
supersymmetry is physical and observable with
a sufficiently precise measurement. In particular, the spin of
superpartners differs
from their Standard Model counterparts by half integers. Therefore,
spin measurements are crucial to set these scenarios apart. The conventional way of measuring the spin of a new particle involves
reconstruction of its rest frame using its decay
products and studying the angular distribution about the polarization
axis. For example, in the process $e^{+}e^{-} \rightarrow Z \rightarrow \mu^{+}\mu^{-}$, the $1+\cos^{2}\theta$ distribution of the muon direction in the rest frame of the $Z$ reveals
its vector nature. However, in most new physics scenarios of
interest such a strategy is complicated by the generic existence of
undetectable massive particles. Motivated by electroweak precision
constraints and the existence of
Cold Dark Matter, many such scenarios incorporate some discrete
symmetry which guarantees the existence of a
lightest stable neutral particle. Well-known examples of such discrete
symmetries include R-parity in
supersymmetry, KK-parity of universal extra-dimension
models (UED) 3 , or similarly, T-parity in Little
Higgs Models 9 , 10 , 11 , 12 . The
existence of such a neutral particle at the end of the decay chain
results in large missing energy events in which new
physics particles are produced.
This fact helps to separate them from the Standard Model
background. On the other hand, it
also makes the spin measurement more complicated because it is generically
impossible to reconstruct the momentum, and
therefore the rest frame, of the decaying new physics particles. There are two different approaches to measuring spin. First, given the
same gauge quantum numbers, particles with
different spin usually have very different production rates, due to the
difference between fermionic and bosonic couplings and the number of degrees
of freedom. Such an approach could be useful, in particular initially, for colored
particles due to their large (hence more measurable) production
rates. However, a crucial ingredient in such a strategy is the measurement of
the masses of particles produced, as rate can only provide definitive
information once the mass is fixed. Such an effort is made more
difficult owing to the existence of missing massive particles. There
is also
some residual model dependence since, for example, a couple of complex
scalars can fake a Dirac fermion. The second approach is the direct measurement of spin through its effect on angular correlations in decay products. In the absence of a
reconstructed rest frame, one is left to consider Lorentz invariant
quantities which encode angular correlations. As we will see later
in this review, spin correlations typically only exist in certain types
of decays. Furthermore, new physics particles are frequently pair
produced with independent decay chains containing similar
particles. Therefore, a valid spin correlation measurement requires
the ability to identify a relatively pure sample of events where we
can isolate certain decay chains and suppress combinatorics
effectively. Therefore, except for very special
cases, we expect this measurement will require large statistics. At
the same time, as will be clear from our discussion, using the appropriate
variables and correctly interpreting the measured angular distribution
frequently requires at least a partial
knowledge of the spectrum and the gauge quantum numbers. Obtaining
information about the spectrum and the quantum numbers is likely to
require a somewhat lower integrated luminosity than spin measurements
do. Therefore, the order with which we uncover the properties of new
particles is congruent to the order with which we must proceed in the
first place to correctly establish these properties. Thus, we should
clearly focus on mass scales, branching
ratios and gauge quantum numbers first, once new physics is
discovered at the LHC, while keeping an unbiased perspective towards
the underlying model. More refined measurements, such as the ones
described in this review, will enable us to tell the different models
apart thereafter. Such measurements can be useful and even more powerful in a linear collider as was recently proposed in Ref. 13 . In this review we will concentrate on methods applicable to the LHC. In principle, the production of particles with different spins also leads
to distinguishable angular distributions. This was investigated in the
context of linear colliders in Ref. 14 . A
similar measurement using the process $pp \rightarrow \tilde{\ell}\tilde{\ell}^{\star}$ at the LHC has been studied in
Ref. 15 . An analogous measurement in the production
of colored states is more challenging. First, typically several
different initial states and partial waves can contribute to the same
production process. Therefore, it is difficult to extract spin
information from the resulting angular distribution in a model
independent way. Second, as commented above it is often difficult to
reconstruct the direction of the original particles coming out of the
production vertex. As a result, angular correlations are further
washed out. In the rest of this review, we will survey both of these approaches
with slightly heavier emphasis given to the angular correlation technique. For
concreteness, we will compare supersymmetry with another generic
scenario in which the partners, such as the gluon partner $g'$, W-partner $W'$, quark partner $q'$, etc., have the same spin as their
corresponding Standard Model particles. As was pointed out above, this
so called same-spin scenario effectively parameterizes almost all
non-SUSY models which address the hierarchy problem. Spin measurement at the LHC is still a relatively new field where only
first steps
towards a comprehensive study have been taken. We will briefly
summarize these developments in this review. We will focus here on the
theoretical foundations and considerations relevant for the
construction of observables. The potential for measuring spin in many
new decay channels remains to be studied. Important effects, such as
Standard Model background and large combinatorics, deserve careful
further consideration. We outline these issues in connection to
particular channels below."},{"role":"assistant","content":"Abstract: We review the prospects of direct spin determination of new particles which may be discovered at the LHC. We discuss the general framework and the different channels which contain spin information. The experimental challenges associated with such measurements are briefly discussed and the most urgent unresolved problems are emphasized."}] |
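The rest-frame spin measurement that this row uses as its benchmark ($e^{+}e^{-} \to Z \to \mu^{+}\mu^{-}$) can be written out as a one-line worked example; the normalization and the neglect of the parity-violating forward-backward asymmetry are simplifying assumptions:

```latex
% Muon angular distribution in the Z rest frame for e+ e- -> Z -> mu+ mu-,
% keeping only the spin-1 shape (forward-backward asymmetry neglected):
\frac{1}{\sigma}\,\frac{d\sigma}{d\cos\theta} = \frac{3}{8}\left(1 + \cos^{2}\theta\right),
% normalized since \int_{-1}^{+1} (3/8)(1 + x^{2})\, dx = 1. The 1 + cos^2(theta)
% shape signals the vector (spin-1) nature of the Z; a scalar resonance would
% instead give a distribution flat in cos(theta).
```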
0802.1829 | A review of the Statistical Mechanics approach to Random Optimization Problems | I Introduction The connection between the statistical physics of disordered systems
and optimization problems in computer science dates back at least
twenty years Mézard et al. ( 1987 ) . In combinatorial optimization one
is given a cost function (the length of a tour in the traveling salesman
problem (TSP), the number of violated constraints in constraint
satisfaction problems, …)
over a set of variables and looks for the minimal cost over
an allowed range for those variables. Finding the true minimum
may be complicated, and requires bigger and bigger computational
efforts as the number of variables to be minimized over
increases Papadimitriou and Steiglitz ( 1998 ) .
Statistical physics is at first sight very
different. The scope is to deduce the macroscopic, that is, global
properties of a physical system, for instance a gas, a liquid or a solid,
from the knowledge of the energetic interactions of its elementary
components (molecules, atoms or ions). However, at very
low temperature, these elementary components are essentially forced to
occupy the spatial conformation minimizing the global energy
of the system. Hence low temperature statistical physics can
be seen as the search for minimizing a cost function whose expression
reflects the laws of Nature or, more humbly, the degree of accuracy retained
in its description. This problem is generally not difficult to solve
for non disordered systems where
the lowest energy conformations are crystals in which components are
regularly spaced from each other. Yet the presence of disorder,
e.g. impurities, makes the problem very difficult and finding the
conformation with minimal energy is a true optimization problem. At the beginning of the eighties,
following the works of G. Parisi and others on systems called spin
glasses Mézard et al. ( 1987 ) ,
important progresses were made in the statistical physics of
disordered systems. Those progresses made possible the quantitative study of
the properties of systems given some distribution of the disorder
(for instance the location of impurities) such as the
average minimal energy and its fluctuations.
The application to optimization problems was natural and led to
beautiful studies on (among others) the average
properties of the minimal tour length in the TSP,
the minimal cost in Bipartite Matching, for some specific
instance distributions Mézard et al. ( 1987 ) . Unfortunately
statistical physicists and computer scientists did not establish
close ties on a large scale at that time. The reason could
have been of methodological nature Fu and Anderson ( 1986 ) .
While physicists were making statistical
statements, true for a given distribution of inputs, computer scientists were
rather interested in solving one (or several) particular instances of a
problem. The focus was thus on efficient ways to do so, that is, requiring a
computational effort growing not too quickly with the number of data
defining the instance. Knowing precisely the typical properties for
a given, academic distribution of instances did not help much to
solve practical cases. At the beginning of the nineties practitioners in artificial
intelligence realized that classes of random constraint satisfaction
problems used as artificial benchmarks for search algorithms exhibited
abrupt changes of behaviour when some control parameters were finely
tuned Mitchell et al. ( 1992 ) . The most celebrated example was random $k$-Satisfiability, where one looks for a solution to a set of random
logical constraints over a set of Boolean variables. It appeared that,
for large sets of variables, there was a critical value of the number
of constraints per variable below which there almost surely existed
solutions, and above which solutions were absent. An important feature
was that the performances of known search algorithms drastically worsened
in the vicinity of this critical ratio. In addition to its intrinsic
mathematical interest the random $k$-SAT problem was therefore worth
studying for "practical" reasons. This critical phenomenon, strongly reminiscent of phase transitions in
condensed matter physics, led to a revival of the research at the
interface between statistical physics and computer science, which is still
very active. The purpose of the present review is to introduce the non
physicist reader to some concepts required to understand the
literature in the field and to present some major results.
We shall in
particular discuss the refined picture of the satisfiable phase put
forward in statistical mechanics studies and the algorithmic approach
(Survey Propagation, an extension of Belief Propagation used in
communication theory and statistical inference) this picture suggested. While the presentation will mostly focus on the k π k -Satisfiability
problem (with random constraints) we will occasionally discuss
another computational problem, namely, linear systems of Boolean
equations. A good reason to do so is that this problem exhibits some
essential features encountered in random $k$-Satisfiability, while
being technically simpler to study. In addition it is closely related
to error-correcting codes in communication theory. The chapter is divided into four main parts. In Section II we present the
basic statistical physics concepts necessary to understand the onset
of phase transitions, and to characterize the nature of the
phases. Those are illustrated on a simple example of decision problem,
the so-called perceptron problem. In Section III we review the scenario of the various phase transitions taking place in
random $k$-SAT.
Sections IV and V present the techniques used to study various types of algorithms in optimization
(local search, backtracking procedures, message
passing algorithms). We end up with some conclusive remarks in
Sec. VI . | We review the connection between statistical mechanics and the analysis of random optimization problems, with particular emphasis on the random k-SAT problem. We discuss and characterize the different phase transitions that are met in these problems, starting from basic concepts. We also discuss how statistical mechanics methods can be used to investigate the behavior of local search and decimation based algorithms. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A review of the Statistical Mechanics approach to Random Optimization Problems* and the introduction:I Introduction The connection between the statistical physics of disordered systems
and optimization problems in computer science dates back from twenty
years at least MΓ©zard etΒ al. ( 1987 ) . In combinatorial optimization one
is given a cost function (the length of a tour in the traveling salesman
problem (TSP), the number of violated constraints in constraint
satisfaction problems, …)
over a set of variables and looks for the minimal cost over
an allowed range for those variables. Finding the true minimum
may be complicated, and requires bigger and bigger computational
efforts as the number of variables to be minimized over
increases Papadimitriou and Steiglitz ( 1998 ) .
Statistical physics is at first sight very
different. The scope is to deduce the macroscopic, that is, global
properties of a physical system, for instance a gas, a liquid or a solid,
from the knowledge of the energetic interactions of its elementary
components (molecules, atoms or ions). However, at very
low temperature, these elementary components are essentially forced to
occupy the spatial conformation minimizing the global energy
of the system. Hence low temperature statistical physics can
be seen as the search for minimizing a cost function whose expression
reflects the laws of Nature or, more humbly, the degree of accuracy retained
in its description. This problem is generally not difficult to solve
for non disordered systems where
the lowest energy conformations are crystals in which components are
regularly spaced from each other. Yet the presence of disorder,
e.g. impurities, makes the problem very difficult and finding the
conformation with minimal energy is a true optimization problem. At the beginning of the eighties,
following the works of G. Parisi and others on systems called spin
glasses Mézard et al. ( 1987 ) ,
important progresses were made in the statistical physics of
disordered systems. Those progresses made possible the quantitative study of
the properties of systems given some distribution of the disorder
(for instance the location of impurities) such as the
average minimal energy and its fluctuations.
The application to optimization problems was natural and led to
beautiful studies on (among others) the average
properties of the minimal tour length in the TSP,
the minimal cost in Bipartite Matching, for some specific
instance distributions Mézard et al. ( 1987 ) . Unfortunately
statistical physicists and computer scientists did not establish
close ties on a large scale at that time. The reason could
have been of a methodological nature Fu and Anderson ( 1986 ) .
While physicists were making statistical
statements, true for a given distribution of inputs, computer scientists were
rather interested in solving one (or several) particular instances of a
problem. The focus was thus on efficient ways to do so, that is, requiring a
computational effort growing not too quickly with the number of data
defining the instance. Knowing precisely the typical properties for
a given, academic distribution of instances did not help much to
solve practical cases. At the beginning of the nineties, practitioners in artificial
intelligence realized that classes of random constraint satisfaction
problems used as artificial benchmarks for search algorithms exhibited
abrupt changes of behaviour when some control parameters were finely
tuned Mitchell et al. ( 1992 ) . The most celebrated example was random k-Satisfiability, where one looks for a solution to a set of random
logical constraints over a set of Boolean variables. It appeared that,
for large sets of variables, there was a critical value of the number
of constraints per variable below which there almost surely existed
solutions, and above which solutions were absent. An important feature
was that the performances of known search algorithms drastically worsened
in the vicinity of this critical ratio. In addition to its intrinsic
mathematical interest, the random k-SAT problem was therefore worth
studying for practical reasons. This critical phenomenon, strongly reminiscent of phase transitions in
condensed matter physics, led to a revival of the research at the
interface between statistical physics and computer science, which is still
very active. The purpose of the present review is to introduce the non-physicist
reader to some concepts required to understand the
literature in the field and to present some major results.
We shall in
particular discuss the refined picture of the satisfiable phase put
forward in statistical mechanics studies and the algorithmic approach
(Survey Propagation, an extension of Belief Propagation used in
communication theory and statistical inference) this picture suggested. While the presentation will mostly focus on the k-Satisfiability
problem (with random constraints) we will occasionally discuss
another computational problem, namely, linear systems of Boolean
equations. A good reason to do so is that this problem exhibits some
essential features encountered in random k-Satisfiability, while
being technically simpler to study. In addition it is closely related
to error-correcting codes in communication theory. The chapter is divided into four main parts. In Section II we present the
basic statistical physics concepts necessary to understand the onset
of phase transitions, and to characterize the nature of the
phases. These are illustrated on a simple example of a decision problem,
the so-called perceptron problem. In Section III we review the scenario of the various phase transitions taking place in
random k-SAT.
Sections IV and V present the techniques used to study various types of algorithms in optimization
(local search, backtracking procedures, message
passing algorithms). We end with some concluding remarks in
Sec. VI ."},{"role":"assistant","content":"Abstract: We review the connection between statistical mechanics and the analysis of random optimization problems, with particular emphasis on the random k-SAT problem. We discuss and characterize the different phase transitions that are met in these problems, starting from basic concepts. We also discuss how statistical mechanics methods can be used to investigate the behavior of local search and decimation based algorithms."}] |
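The abrupt sat/unsat transition described in the entry above is easy to probe numerically. Below is a minimal sketch, not taken from the paper under review: it draws random 3-SAT formulas at clause-to-variable ratio alpha and estimates the probability of satisfiability by exhaustive search. At these small sizes the crossover around the known 3-SAT threshold alpha ≈ 4.27 is smooth; it sharpens as n grows.

```python
# Minimal illustration (not from the reviewed paper): probability that a random
# 3-SAT formula on n variables with m = alpha * n clauses is satisfiable.
# Exhaustive search keeps this honest but limits n to small values.
import itertools
import random

def random_ksat(n, m, k=3):
    """m random k-clauses; a literal is a (variable, required value) pair."""
    return [[(v, random.choice((True, False)))
             for v in random.sample(range(n), k)]
            for _ in range(m)]

def satisfiable(n, clauses):
    """Brute-force check over all 2^n assignments (fine for n <= ~15)."""
    return any(
        all(any(bits[v] == want for v, want in cl) for cl in clauses)
        for bits in itertools.product((False, True), repeat=n)
    )

if __name__ == "__main__":
    n, trials = 12, 50
    for alpha in (3.0, 4.0, 4.27, 4.6, 5.5):
        sat = sum(satisfiable(n, random_ksat(n, round(alpha * n)))
                  for _ in range(trials))
        print(f"alpha = {alpha:4.2f}  ->  estimated P(sat) = {sat / trials:.2f}")
```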
704.317 | ZIPF'S LAW AND CITY SIZES: A SHORT TUTORIAL REVIEW ON MULTIPLICATIVE PROCESSES IN URBAN GROWTH | Biological populations – and, among them, human communities – are
subject, during their existence, to a multitude of actions of quite
disparate origins. Such actions involve a complex interplay between
factors endogenous to the population and external effects related to
the interaction with the ecosystem and with physical environmental
factors. The underlying mechanism governing the growth or decline of
the population size (i.e., the number of individuals) is however
very simple in essence, since it derives from the elementary events
of reproduction: at a given time, the growth rate of the population
is proportional to the population itself. This statement must be
understood in the sense that two populations formed by the same
organisms and under the same ecological conditions, one of them,
say, twice as large as the other, will grow by amounts also
related by a factor of two. Such proportionality between population
and growth rate, which is empirically verified in practically all
instances of biological systems, defines a multiplicative
process [ 11 ] . Populations whose size is governed by multiplicative processes and
which, at the same time, are subject to environmental random-like
fluctuations, are known to display universal statistical
regularities in the distribution of certain features. Specifically,
those traits which are transmitted vertically, from parents to their
offspring, exhibit broad, long-tailed distributions with stereotyped
shapes – typically, log-normal or power laws. For instance,
consider a human society where, except for some infrequent
exceptions, the surname of each individual is inherited from the
father. Consider moreover the subpopulations formed by individuals
with the same surname. It turns out that the frequency of
subpopulations of size n is approximately proportional to n^{-2} [ 19 , 4 ] . Or take, from the whole human population, the
communities whose individuals speak the same language, which in the
vast majority of the cases is learnt from the mother. The sizes of
those communities are distributed following a log-normal function [ 12 ] . Such statistical regularities are generally referred to
as Zipf's law [ 18 , 19 ] . The derivation of Zipf's law
from the underlying multiplicative processes was first worked out in
detail by the sociologist H. A. Simon, within a set of assumptions
which became known as Simon's model [ 9 ] . A well-documented instance of occurrence of Zipf's law involves the
distribution of city sizes [ 3 , 14 , 1 , 2 ] , where
"size" is here identified with the number of inhabitants. In
practically any country or region over the globe, the frequency of
cities of size n decays as n^{-z} , where the exponent z is
approximately equal to 2 – as in the case of surnames. The
occurrence of Zipf's law in the distribution of city sizes can be
understood in terms of multiplicative processes using Simon's model.
Inspection of current literature on the subject of city size
distributions, however, suggests that the potential of Simon's model
as an explanation of Zipf's law, as well as its limitations, are not
well understood. In a recently published handbook on urban economics [ 2 ] , for instance, we read: "Simon's model
encounters some serious problems. In the limit where it can generate
Zipf's law, it … requires that the number of cities grow
indefinitely, in fact as fast as the urban population." It turns
out that this assertion is wrong: the truth, in fact, happens to be
exactly the opposite! Leaving aside the derivation that may have led
to this false conclusion [ 1 ] , we note that such strong
statements risk becoming dogmatic for the part of the scientific
community which does not have the tools for their critical analysis. With this motivation, the present short review will be devoted to
giving a pedagogical presentation of Simon's model in the framework of the
evolution of city size distributions. The emphasis will be put on a
qualitative description of the basic processes involved in the
modeling. The explicit statement of the hypotheses that define the
model should already expose its limitations but, at the same time,
should clarify its flexibility regarding possible generalizations.
In the next section, an elementary model for the evolution of a
population based on stochastic processes is introduced, and the
concurrent role of multiplicative and additive mechanisms in the
appearance of power-law distributions is discussed. After an outline
of the main features of Zipf's rank plots in the distribution of
city sizes, Simon's model is presented in its original version,
describing its implications for the population distribution in
urban systems. Then, we discuss a few extensions of the model, aimed
at capturing some relevant processes not present in its original
formulation. Finally, we close with a summary of the main results
and some concluding remarks. | We address the role of multiplicative stochastic processes in modeling the occurrence of power-law city size distributions. As an explanation of the result of Zipf's rank analysis, Simon's model is presented in a mathematically elementary way, with a thorough discussion of the involved hypotheses. Emphasis is put on the flexibility of the model, as to its possible extensions and the relaxation of some strong assumptions. We point out some open problems regarding the prediction of the detailed shape of Zipf's rank plots, which may be tackled by means of such extensions. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*ZIPF'S LAW AND CITY SIZES: A SHORT TUTORIAL REVIEW ON MULTIPLICATIVE PROCESSES IN URBAN GROWTH* and the introduction:Biological populations – and, among them, human communities – are
subject, during their existence, to a multitude of actions of quite
disparate origins. Such actions involve a complex interplay between
factors endogenous to the population and external effects related to
the interaction with the ecosystem and with physical environmental
factors. The underlying mechanism governing the growth or decline of
the population size (i.e., the number of individuals) is however
very simple in essence, since it derives from the elementary events
of reproduction: at a given time, the growth rate of the population
is proportional to the population itself. This statement must be
understood in the sense that two populations formed by the same
organisms and under the same ecological conditions, one of them,
say, twice as large as the other, will grow by amounts also
related by a factor of two. Such proportionality between population
and growth rate, which is empirically verified in practically all
instances of biological systems, defines a multiplicative
process 11 . Populations whose size is governed by multiplicative processes and
which, at the same time, are subject to environmental random-like
fluctuations, are known to display universal statistical
regularities in the distribution of certain features. Specifically,
those traits which are transmitted vertically, from parents to their
offspring, exhibit broad, long-tailed distributions with stereotyped
shapes – typically, log-normal or power laws. For instance,
consider a human society where, except for some infrequent
exceptions, the surname of each individual is inherited from the
father. Consider moreover the subpopulations formed by individuals
with the same surname. It turns out that the frequency of
subpopulations of size n is approximately proportional to n^{-2} 19 , 4 . Or take, from the whole human population, the
communities whose individuals speak the same language, which in the
vast majority of the cases is learnt from the mother. The sizes of
those communities are distributed following a log-normal function 12 . Such statistical regularities are generally referred to
as Zipf's law 18 , 19 . The derivation of Zipf's law
from the underlying multiplicative processes was first worked out in
detail by the sociologist H. A. Simon, within a set of assumptions
which became known as Simon's model 9 . A well-documented instance of occurrence of Zipf's law involves the
distribution of city sizes 3 , 14 , 1 , 2 , where
"size" is here identified with the number of inhabitants. In
practically any country or region over the globe, the frequency of
cities of size n decays as n^{-z} , where the exponent z is
approximately equal to 2 – as in the case of surnames. The
occurrence of Zipf's law in the distribution of city sizes can be
understood in terms of multiplicative processes using Simon's model.
Inspection of current literature on the subject of city size
distributions, however, suggests that the potential of Simon's model
as an explanation of Zipf's law, as well as its limitations, are not
well understood. In a recently published handbook on urban economics 2 , for instance, we read: "Simon's model
encounters some serious problems. In the limit where it can generate
Zipf's law, it … requires that the number of cities grow
indefinitely, in fact as fast as the urban population." It turns
out that this assertion is wrong: the truth, in fact, happens to be
exactly the opposite! Leaving aside the derivation that may have led
to this false conclusion 1 , we note that such strong
statements risk to become dogmatic for the part of the scientific
community which does not have the tools for their critical analysis. With this motivation, the present short review will be devoted to
giving a pedagogical presentation of Simon's model in the framework of the
evolution of city size distributions. The emphasis will be put on a
qualitative description of the basic processes involved in the
modeling. The explicit statement of the hypotheses that define the
model should already expose its limitations but, at the same time,
should clarify its flexibility regarding possible generalizations.
In the next section, an elementary model for the evolution of a
population based on stochastic processes is introduced, and the
concurrent role of multiplicative and additive mechanisms in the
appearance of power-law distributions is discussed. After an outline
of the main features of Zipf's rank plots in the distribution of
city sizes, Simon's model is presented in its original version,
describing its implications for the population distribution in
urban systems. Then, we discuss a few extensions of the model, aimed
at capturing some relevant processes not present in its original
formulation. Finally, we close with a summary of the main results
and some concluding remarks."},{"role":"assistant","content":"Abstract: We address the role of multiplicative stochastic processes in modeling the occurrence of power-law city size distributions. As an explanation of the result of Zipfs rank analysis, Simons model is presented in a mathematically elementary way, with a thorough discussion of the involved hypotheses. Emphasis is put on the flexibility of the model, as to its possible extensions and the relaxation of some strong assumptions. We point out some open problems regarding the prediction of the detailed shape of Zipfs rank plots, which may be tackled by means of such extensions."}] |
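Simon's mechanism, as recalled in the entry above, can be reproduced in a few lines. Below is a minimal sketch with illustrative parameter values (ours, not taken from the reviewed paper): each arriving inhabitant founds a new city with probability p or joins an existing city with probability proportional to its size, and the resulting size distribution develops a power-law tail P(n) ~ n^{-z} with z = 1 + 1/(1 - p), approaching the empirical exponent 2 as p -> 0.

```python
# Minimal sketch of Simon's model for city sizes (illustrative parameters only):
# with probability p a new inhabitant founds a new city; otherwise they join an
# existing city chosen with probability proportional to the city's population.
# The size distribution develops a power-law tail P(n) ~ n^{-z}, z = 1 + 1/(1-p).
import random
from collections import Counter

def simon_model(steps, p, seed=0):
    rng = random.Random(seed)
    sizes = [1]   # population of each city; start from a single settler
    home = [0]    # city index of every inhabitant, for size-biased choice
    for _ in range(steps):
        if rng.random() < p:
            sizes.append(1)
            home.append(len(sizes) - 1)
        else:
            # picking a uniformly random inhabitant selects a city
            # with probability proportional to its size
            city = home[rng.randrange(len(home))]
            sizes[city] += 1
            home.append(city)
    return sizes

if __name__ == "__main__":
    sizes = simon_model(steps=200_000, p=0.05)
    counts = Counter(sizes)
    for n in (1, 2, 4, 8, 16, 32):
        print(f"fraction of cities of size {n:2d}: {counts.get(n, 0) / len(sizes):.5f}")
```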
705.0337 | A mathematical and computational review of Hartree–Fock SCF methods in quantum chemistry | In the very active field of computer simulation of biological macromolecules,
available potential energy functions are often not accurate enough to
properly describe complex processes such as the folding of proteins [ 1 , 2 , 3 , 4 , 5 , 6 , 7 ] .
In order to improve the situation, it is convenient to extract ab
initio information from quantum mechanical calculations with the hope
of being able to devise less computationally demanding methods that
can be used to tackle large systems. In this spirit, the effective
potential for the nuclei calculated in the non-relativistic
Born-Oppenheimer approximation is typically considered as a good
reference to assess the accuracy of cheaper potentials [ 8 , 9 , 10 , 11 , 12 , 13 , 14 ] .
The study of molecules at this level of theoretical detail and the
design of computationally efficient approximations for solving the
demanding equations that appear constitute the major part of the field
called quantum chemistry [ 15 , 16 ] . In this
work, we deliberately restrict ourselves to the basic formalism
needed for the ground-state quantum chemical calculations that are
typically performed in this context. For more general expositions, we
refer the reader to any of the thorough accounts in
refs. [ 17 , 18 , 19 ] . In sec. 2 , we introduce the molecular
Hamiltonian and a special set of units (the atomic ones) that are
convenient to simplify the equations. In
sec. 3 , we present in an axiomatic way the
concepts and expressions related to the separation of the electronic
and nuclear problems in the Born-Oppenheimer scheme. In
sec. 4 , we introduce the variational method
that underlies the derivation of the basic equations of the Hartree
and Hartree-Fock approximations, discussed in
secs. 6 and 7 , respectively. The computational implementation of the Hartree-Fock
approximation is tackled in sec. 8 , where the
celebrated Roothaan-Hall equations are derived. In
sec. 9 , the main issues related to the
construction and selection of Gaussian basis sets are discussed, and,
finally, in sec. 10 , the hottest areas
of modern research are briefly reviewed and linked to the issues in
the rest of the work, with a special emphasis on the development
of linear-scaling methods. | We present a review of the fundamental topics of HartreeβFock theory in quantum chemistry. From the molecular Hamiltonian, using and discussing the BornβOppenheimer approximation, we arrive at the Hartree and HartreeβFock equations for the electronic problem. Special emphasis is placed on the most relevant mathematical aspects of the theoretical derivation of the final equations, and on the results regarding the existence and uniqueness of their solutions. All HartreeβFock versions with different spin restrictions are systematically extracted from the general case, thus providing a unifying framework. The discretization of the one-electron orbital space is then reviewed and the RoothaanβHall formalism introduced. This leads to an exposition of the basic underlying concepts related to the construction and selection of Gaussian basis sets, focusing on algorithmic efficiency issues. Finally, we close the review with a section in which the most relevant modern developments (especially those related to the design of linear-scaling methods) are commented on and linked to the issues discussed. The paper is intentionally introductory and rather self-contained, and may be useful for non-experts intending to use quantum chemical methods in interdisciplinary applications. Moreover, much material that can be found scattered in the literature has been put together to facilitate comprehension and to serve as a handy reference. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A mathematical and computational review of HartreeβFock SCF methods in quantum chemistry* and the introduction:In the hot field of computer simulation of biological macromolecules,
available potential energy functions are often not accurate enough to
properly describe complex processes such as the folding of proteins 1 , 2 , 3 , 4 , 5 , 6 , 7 .
In order to improve the situation, it is convenient to extract ab
initio information from quantum mechanical calculations with the hope
of being able to devise less computationally demanding methods that
can be used to tackle large systems. In this spirit, the effective
potential for the nuclei calculated in the non-relativistic
Born-Oppenheimer approximation is typically considered as a good
reference to assess the accuracy of cheaper potentials 8 , 9 , 10 , 11 , 12 , 13 , 14 .
The study of molecules at this level of theoretical detail and the
design of computationally efficient approximations for solving the
demanding equations that appear constitute the major part of the field
called quantum chemistry 15 , 16 . In this
work, we deliberately restrict ourselves to the basic formalism
needed for the ground-state quantum chemical calculations that are
typically performed in this context. For more general expositions, we
refer the reader to any of the thorough accounts in
refs. 17 , 18 , 19 . In sec. 2 , we introduce the molecular
Hamiltonian and a special set of units (the atomic ones) that are
convenient to simplify the equations. In
sec. 3 , we present in an axiomatic way the
concepts and expressions related to the separation of the electronic
and nuclear problems in the Born-Oppenheimer scheme. In
sec. 4 , we introduce the variational method
that underlies the derivation of the basic equations of the Hartree
and Hartree-Fock approximations, discussed in
secs. 6 and 7 , respectively. The computational implementation of the Hartree-Fock
approximation is tackled in sec. 8 , where the
celebrated Roothaan-Hall equations are derived. In
sec. 9 , the main issues related to the
construction and selection of Gaussian basis sets are discussed, and,
finally, in sec. 10 , the hottest areas
of modern research are briefly reviewed and linked to the issues in
the rest of the work, with a special emphasis on the development
of linear-scaling methods."},{"role":"assistant","content":"Abstract: We present a review of the fundamental topics of HartreeβFock theory in quantum chemistry. From the molecular Hamiltonian, using and discussing the BornβOppenheimer approximation, we arrive at the Hartree and HartreeβFock equations for the electronic problem. Special emphasis is placed on the most relevant mathematical aspects of the theoretical derivation of the final equations, and on the results regarding the existence and uniqueness of their solutions. All HartreeβFock versions with different spin restrictions are systematically extracted from the general case, thus providing a unifying framework. The discretization of the one-electron orbital space is then reviewed and the RoothaanβHall formalism introduced. This leads to an exposition of the basic underlying concepts related to the construction and selection of Gaussian basis sets, focusing on algorithmic efficiency issues. Finally, we close the review with a section in which the most relevant modern developments (especially those related to the design of linear-scaling methods) are commented on and linked to the issues discussed. The paper is intentionally introductory and rather self-contained, and may be useful for non-experts intending to use quantum chemical methods in interdisciplinary applications. Moreover, much material that can be found scattered in the literature has been put together to facilitate comprehension and to serve as a handy reference."}] |
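For orientation, the Roothaan-Hall equations around which the computational part of the entry above revolves take the following standard closed-shell form in a finite basis {phi_mu}; this is textbook material, recorded here as a pointer rather than quoted from the review itself.

```latex
% Expanding each molecular orbital as \psi_i = \sum_\mu C_{\mu i}\,\phi_\mu turns
% the Hartree-Fock problem into the Roothaan-Hall generalized eigenvalue problem
\begin{equation}
  F(P)\,C = S\,C\,\varepsilon ,
  \qquad
  S_{\mu\nu} = \langle \phi_\mu | \phi_\nu \rangle ,
  \qquad
  P_{\mu\nu} = 2 \sum_{i\,\in\,\mathrm{occ}} C_{\mu i}\, C_{\nu i}^{*} ,
\end{equation}
% where \varepsilon is the diagonal matrix of orbital energies. Since the Fock
% matrix F depends on the density matrix P, the equations are solved iteratively
% to self-consistency (the SCF loop of the title).
```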
803.18 | SOCIOPHYSICS: A REVIEW OF GALAM MODELS | I Introduction The field of sociophysics emerged in the nineteen-seventies with only a very few scattered papers in the hostile environment of the community of physicists. It started to attract some physicists around the mid-nineties and then took over fifteen years to nucleate, involving a growing number of physicists. Today it is a recognized field of physics anchored within statistical physics. It is flourishing and expanding, with hundreds of papers published in the leading physics journals and quite a few international conferences held each year. The topics covered by sociophysics are becoming numerous and address many different problems, including social networks, language evolution, population dynamics, epidemic spreading, terrorism, voting, coalition formation and opinion dynamics. Among these topics the subject of opinion dynamics has become one of the main streams of sociophysics, producing a great number of research papers, also in this journal, including this issue. This review does not deal with all of these papers because of the restriction made clear by its title. This does not mean that the other papers are less important or worse than those cited here. But we restrict the presentation to the models introduced by Galam and Galam et al over the last twenty-five years, a good part of them being the pioneering works of sociophysics. A Springer book is in preparation on the subject. These models deal with the five subjects of democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism and opinion dynamics. The first class of models v1 ; v2 ; v3 ; v4 ; v5 ; v6 ; v7 ; v8 ; v9 ; v10 ; v11 ; v12 ; v13 considers a population, which is a mixture of two species A and B. A bottom-up hierarchy is then built from the population using local majority rules, with the possibility of some power-inertia bias. Tree-like networks are thus constructed, which combine a random selection of agents at the bottom from the surrounding population with an associated deterministic outcome at the top. The scheme relies on adapting real-space renormalization group techniques to build a social and political structure. The second class s1 ; s2 ; s3 ; s4 ; s5 ; s6 ; s7 ; s8 ; s9 ; s10 tackles the problem of decision making in various settings, including firms and small committees. It uses ferromagnetic Ising spin Hamiltonians with both external and random quenched fields, at both zero and non-zero temperature. The associated phase diagrams are constructed. The effect of reversing an external field on the collective equilibrium state is studied, with an emphasis on the existence of nucleation phenomena. A mean-field treatment is applied. The third class f1 ; f2 ; f3 ; f4 ; f5 introduces a combination of random-bond and random-site spin glasses to describe the formation of coalitions as well as the dynamics of fragmentation among a group of countries. External and local fields are also considered, together with site-dilution effects in mixtures of ferro- and anti-ferromagnetic spin Hamiltonians. Ising and Potts variables are used. The fourth class t1 ; t2 ; t3 ; t4 ; t5 ; t6 studies some aspects of terrorism by focusing on the role of passive supporters in the creation of the open social spaces that are open to terrorist activities. It relies on the theory of percolation and uses the dependence of the percolation threshold upon the space dimensionality.
The fifth class o1 ; o2 ; o3 ; o4 ; o5 ; o6 ; o7 ; o8 ; o9 ; o10 ; o11 ; o12 ; o13 ; o14 ; o15 ; o16 investigates opinion dynamics within reaction-diffusion-like models. Two- and three-state variables are used. Three kinds of agents are also considered, which are, respectively, floaters, contrarians and inflexibles. The dynamics operates via local updates and reshuffling. Techniques from the real-space renormalization group approach are used. For each class of models the precise connection to the original physical model is made. Similarities and differences are outlined, emphasizing the possible novelties with respect to the statistical physics counterparts. The numerous results obtained by each class of models are reviewed, highlighting the novel and counterintuitive aspects with respect to the associated social and political framework. In particular, several major real political events were successfully predicted using these models. These include the victory of the French extreme right party in the 2000 first round of the French presidential elections vp1 ; vp2 ; vp3 ; vp4 ; vp5 ; vp6 , the voting at fifty–fifty in several democratic countries (Germany, Italy, Mexico) op6 ; op7 ; op8 , and the victory of the "no" to the 2005 French referendum on the European constitution op9 . To conclude, the perspectives for making sociophysics a solid predictive field of science are discussed, emphasizing both the challenges and the risks. | We review a series of models of sociophysics introduced by Galam and Galam et al. in the last 25 years. The models are divided into five different classes, which deal respectively with democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism and opinion dynamics. For each class the connection to the original physical model and techniques are outlined, underlining both the similarities and the differences. Emphasis is put on the numerous novel and counterintuitive results obtained with respect to the associated social and political framework. Using these models several major real political events were successfully predicted, including the victory of the French extreme right party in the 2000 first round of the French presidential elections, the voting at fifty–fifty in several democratic countries (Germany, Italy, Mexico), and the victory of the "no" to the 2005 French referendum on the European constitution. The perspectives and the challenges to make sociophysics a solid predictive field of science are discussed. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*SOCIOPHYSICS: A REVIEW OF GALAM MODELS* and the introduction:I Introduction The field of sociophysics emerged in the nineteen-seventies with only a very few scattered papers in the hostile environment of the community of physicists. It started to attract some physicists around the mid-nineties and then took over fifteen years to nucleate, involving a growing number of physicists. Today it is a recognized field of physics anchored within statistical physics. It is flourishing and expanding, with hundreds of papers published in the leading physics journals and quite a few international conferences held each year. The topics covered by sociophysics are becoming numerous and address many different problems, including social networks, language evolution, population dynamics, epidemic spreading, terrorism, voting, coalition formation and opinion dynamics.
Among these topics the subject of opinion dynamics has become one of the main streams of sociophysics, producing a great number of research papers, also in this journal, including this issue. This review does not deal with all of these papers because of the restriction made clear by its title. This does not mean that the other papers are less important or worse than those cited here. But we restrict the presentation to the models introduced by Galam and Galam et al over the last twenty-five years, a good part of them being the pioneering works of sociophysics. A Springer book is in preparation on the subject. These models deal with the five subjects of democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism and opinion dynamics. The first class of models v1 ; v2 ; v3 ; v4 ; v5 ; v6 ; v7 ; v8 ; v9 ; v10 ; v11 ; v12 ; v13 considers a population, which is a mixture of two species A and B. A bottom-up hierarchy is then built from the population using local majority rules, with the possibility of some power-inertia bias. Tree-like networks are thus constructed, which combine a random selection of agents at the bottom from the surrounding population with an associated deterministic outcome at the top. The scheme relies on adapting real-space renormalization group techniques to build a social and political structure. The second class s1 ; s2 ; s3 ; s4 ; s5 ; s6 ; s7 ; s8 ; s9 ; s10 tackles the problem of decision making in various settings, including firms and small committees. It uses ferromagnetic Ising spin Hamiltonians with both external and random quenched fields, at both zero and non-zero temperature. The associated phase diagrams are constructed. The effect of reversing an external field on the collective equilibrium state is studied, with an emphasis on the existence of nucleation phenomena. A mean-field treatment is applied. The third class f1 ; f2 ; f3 ; f4 ; f5 introduces a combination of random-bond and random-site spin glasses to describe the formation of coalitions as well as the dynamics of fragmentation among a group of countries. External and local fields are also considered, together with site-dilution effects in mixtures of ferro- and anti-ferromagnetic spin Hamiltonians. Ising and Potts variables are used. The fourth class t1 ; t2 ; t3 ; t4 ; t5 ; t6 studies some aspects of terrorism by focusing on the role of passive supporters in the creation of the open social spaces that are open to terrorist activities. It relies on the theory of percolation and uses the dependence of the percolation threshold upon the space dimensionality. The fifth class o1 ; o2 ; o3 ; o4 ; o5 ; o6 ; o7 ; o8 ; o9 ; o10 ; o11 ; o12 ; o13 ; o14 ; o15 ; o16 investigates opinion dynamics within reaction-diffusion-like models. Two- and three-state variables are used. Three kinds of agents are also considered, which are, respectively, floaters, contrarians and inflexibles. The dynamics operates via local updates and reshuffling. Techniques from the real-space renormalization group approach are used. For each class of models the precise connection to the original physical model is made. Similarities and differences are outlined, emphasizing the possible novelties with respect to the statistical physics counterparts. The numerous results obtained by each class of models are reviewed, highlighting the novel and counterintuitive aspects with respect to the associated social and political framework.
In particular, several major real political events were successfully predicted using these models. These include the victory of the French extreme right party in the 2000 first round of the French presidential elections vp1 ; vp2 ; vp3 ; vp4 ; vp5 ; vp6 , the voting at fifty–fifty in several democratic countries (Germany, Italy, Mexico) op6 ; op7 ; op8 , and the victory of the "no" to the 2005 French referendum on the European constitution op9 . To conclude, the perspectives for making sociophysics a solid predictive field of science are discussed, emphasizing both the challenges and the risks."},{"role":"assistant","content":"Abstract: We review a series of models of sociophysics introduced by Galam and Galam et al. in the last 25 years. The models are divided into five different classes, which deal respectively with democratic voting in bottom-up hierarchical systems, decision making, fragmentation versus coalitions, terrorism and opinion dynamics. For each class the connection to the original physical model and techniques are outlined, underlining both the similarities and the differences. Emphasis is put on the numerous novel and counterintuitive results obtained with respect to the associated social and political framework. Using these models several major real political events were successfully predicted, including the victory of the French extreme right party in the 2000 first round of the French presidential elections, the voting at fifty–fifty in several democratic countries (Germany, Italy, Mexico), and the victory of the "no" to the 2005 French referendum on the European constitution. The perspectives and the challenges to make sociophysics a solid predictive field of science are discussed."}]
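The flavour of the opinion-dynamics class above is easy to convey in a few lines. The sketch below is a minimal illustration of the simplest variant (floaters in update groups of size 3, optionally mixed with a density a of contrarians; the parameter values are ours, not from the reviewed papers). Linearizing the sweep map around p = 1/2 gives slope (3/2)(1 - 2a), so for a > 1/6 the fifty-fifty state turns from separator into stable attractor, the hung-election scenario recalled above.

```python
# Minimal sketch of Galam-style opinion dynamics with update groups of size 3.
# Floaters only: one sweep maps the A-supporter fraction p to M(p) = p^3 + 3 p^2 (1 - p),
# so p = 1/2 separates flows towards the two unanimous attractors p = 0 and p = 1.
# With a density a of contrarians (who adopt the opposite of their group's majority),
# p' = (1 - a) M(p) + a (1 - M(p)); the slope at p = 1/2 is (3/2)(1 - 2a), so for
# a > 1/6 the mixed state p = 1/2 becomes the stable attractor (fifty-fifty votes).

def majority(p):
    return p**3 + 3 * p**2 * (1 - p)

def sweep(p, a=0.0):
    m = majority(p)
    return (1 - a) * m + a * (1 - m)

for a in (0.0, 0.25):
    p = 0.55
    for _ in range(30):
        p = sweep(p, a)
    print(f"contrarian density a = {a:.2f}: p(0) = 0.55 -> p(30) = {p:.3f}")
```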
705.1787 | Energy-Efficient Resource Allocation in Wireless Networks | I Introduction and Motivation Future wireless networks are expected to support a variety of
services with diverse quality-of-service (QoS) requirements. For
example, a mixture of delay-sensitive applications (e.g., voice
and video teleconferencing) and delay-tolerant ones (e.g., web
browsing and file downloading) must be supported. Given that the
two principal wireless network resources, i.e., bandwidth and energy , are scarce, the main challenge in designing
wireless networks is to use network resources as efficiently as
possible while providing the QoS required by the users. Game-theoretic approaches to radio resource allocation have
recently attracted much attention and will be the focus of this
article. We will show that game theory can be used as a unifying
framework to study radio resource management in a variety of
wireless networks with different service criteria. Our focus will
be on infrastructure networks where users transmit to a common
concentration point such as a base station in a cellular network
or an access point. Since most of the terminals in a wireless
network are battery-powered, energy efficiency is crucial to
prolonging the life of the terminals. Also, in most practical
scenarios, distributed algorithms are preferred over centralized
ones. Centralized algorithms tend to be complex and not easily
scalable. Therefore, throughout this article, we focus on
distributed algorithms with emphasis on energy efficiency. Using a
game-theoretic framework, we demonstrate the impact of advanced
signal processing on energy efficiency and network capacity. The
tradeoffs among throughput, delay, network capacity and energy
efficiency are also discussed. The ideas presented in this paper
can also be applied to wireless ad hoc networks, however, the
topic is beyond the scope of this article (see [ 1 ] for applications of game theory to ad hoc networks). It should be noted that, recently, tools from optimization theory
have also been employed to study resource allocation in wireless
networks using the network utility maximization framework proposed
in [ 2 ] (see for example [ 3 ] ). While there is
considerable overlap between the game-theoretic and
optimization-theoretic approaches, game theory tends to focus on
the multiuser competitive nature of the problem and on the users'
interaction. The rest of this article is organized as follows. In
Section II , we describe how game theory can be used
for studying radio resource management in wireless networks. The
choice of the utility function is discussed in
Section III . In Section IV , we present a
family of power control games for energy-efficient resource
allocation in wireless CDMA networks. Finally, discussions and
conclusions are given in Section V . | This paper shows that game theory can be used as a unifying framework to study radio resource management in a variety of wireless networks with different service criteria. It focuses on infrastructure networks where users transmit to a common concentration point such as a base station in a cellular network or an access point. Since most of the terminals in a wireless network are battery-powered, energy efficiency is crucial to prolonging the life of the terminals. Also, in most practical scenarios, distributed algorithms are preferred over centralized ones. Throughout, the article focuses on distributed algorithms with an emphasis on energy efficiency. A family of power control games is presented for energy-efficient resource allocation in wireless code-division multiple-access (CDMA) networks, and discussions and conclusions are given. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*Energy-Efficient Resource Allocation in Wireless Networks* and the introduction:I Introduction and Motivation Future wireless networks are expected to support a variety of
services with diverse quality-of-service (QoS) requirements. For
example, a mixture of delay-sensitive applications (e.g., voice
and video teleconferencing) and delay-tolerant ones (e.g., web
browsing and file downloading) must be supported. Given that the
two principal wireless network resources, i.e., bandwidth and energy , are scarce, the main challenge in designing
wireless networks is to use network resources as efficiently as
possible while providing the QoS required by the users. Game-theoretic approaches to radio resource allocation have
recently attracted much attention and will be the focus of this
article. We will show that game theory can be used as a unifying
framework to study radio resource management in a variety of
wireless networks with different service criteria. Our focus will
be on infrastructure networks where users transmit to a common
concentration point such as a base station in a cellular network
or an access point. Since most of the terminals in a wireless
network are battery-powered, energy efficiency is crucial to
prolonging the life of the terminals. Also, in most practical
scenarios, distributed algorithms are preferred over centralized
ones. Centralized algorithms tend to be complex and not easily
scalable. Therefore, throughout this article, we focus on
distributed algorithms with emphasis on energy efficiency. Using a
game-theoretic framework, we demonstrate the impact of advanced
signal processing on energy efficiency and network capacity. The
tradeoffs among throughput, delay, network capacity and energy
efficiency are also discussed. The ideas presented in this paper
can also be applied to wireless ad hoc networks, however, the
topic is beyond the scope of this article (see 1 for applications of game theory to ad hoc networks). It should be noted that, recently, tools from optimization theory
have also been employed to study resource allocation in wireless
networks using the network utility maximization framework proposed
in 2 (see for example 3 ). While there is
considerable overlap between the game-theoretic and
optimization-theoretic approaches, game theory tends to focus on
the multiuser competitive nature of the problem and on the users'
interaction. The rest of this article is organized as follows. In
Section II , we describe how game theory can be used
for studying radio resource management in wireless networks. The
choice of the utility function is discussed in
Section III . In Section IV , we present a
family of power control games for energy-efficient resource
allocation in wireless CDMA networks. Finally, discussions and
conclusions are given in Section V ."},{"role":"assistant","content":"Abstract: This paper shows that game theory can be used as a unifying framework to study radio resource management in a variety of wireless networks with different service criteria. It focuses on infrastructure networks where users transmit to a common concentration point such as a base station in a cellular network or an access point. Since most of the terminals in a wireless network are battery-powered, energy efficiency is crucial to prolonging the life of the terminals. Also, in most practical scenarios, distributed algorithms are preferred over centralized ones. Throughout, the article focuses on distributed algorithms with an emphasis on energy efficiency. A family of power control games is presented for energy-efficient resource allocation in wireless code-division multiple-access (CDMA) networks, and discussions and conclusions are given."}]
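To make the power-control game of Section IV concrete, here is a minimal sketch with stand-in parameters: the packet size M, processing gain N, noise power and channel gains are assumptions, and f(gamma) = (1 - e^{-gamma})^M is one common choice of efficiency function, not necessarily the article's. Maximizing the energy-efficiency utility u_k = R f(gamma_k)/p_k over one's own power, with the others fixed, amounts to targeting the SIR gamma* that solves f(gamma) = gamma f'(gamma); iterating the induced best responses converges to the Nash equilibrium when that target is feasible.

```python
# Minimal sketch of an energy-efficient power-control game (illustrative values:
# M, N, SIGMA2 and the channel gains h are arbitrary stand-ins, and
# f(g) = (1 - exp(-g))**M is one common choice of efficiency function).
# Maximizing u_k = R * f(gamma_k) / p_k over one's own power gives a target SIR
# gamma* solving f(g) = g * f'(g), i.e. (1 - exp(-g)) = M * g * exp(-g).
import math

M = 80          # packet size in bits (assumed)
N = 100         # CDMA processing gain (assumed)
SIGMA2 = 1e-10  # background noise power (assumed)

def gamma_star(m=M, lo=1e-6, hi=100.0):
    """Bisection for the root of (1 - e^{-g}) - m g e^{-g} = 0."""
    g = lambda x: (1.0 - math.exp(-x)) - m * x * math.exp(-x)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def best_response_powers(h, iters=50):
    """Iterate p_k = gamma* (sigma^2 + sum_{j != k} h_j p_j) / (N h_k)."""
    gs = gamma_star()
    p = [1e-9] * len(h)
    for _ in range(iters):
        for k, hk in enumerate(h):
            interference = SIGMA2 + sum(h[j] * p[j] for j in range(len(h)) if j != k)
            p[k] = gs * interference / (N * hk)
    return gs, p

if __name__ == "__main__":
    gs, p = best_response_powers(h=[3e-7, 2e-7, 1e-7])
    print(f"target SIR gamma* = {gs:.2f}")
    print("equilibrium powers:", ["%.3e" % x for x in p])
```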
805.1705 | Prospects for producing ultracold NH 3 molecules by sympathetic cooling: A survey of interaction potentials | I Introduction There is great interest at present in producing samples of cold
molecules (below 1 K) and ultracold molecules (below 1 mK). Such
molecules have many potential applications. High-precision
measurements on ultracold molecules might be used to measure
quantities of fundamental physics interest, such as the electric
dipole moment of the electron Hudson etΒ al. ( 2002 ) and the
time-dependence of fundamental constants such as the
electron/proton mass ratio van Veldhoven etΒ al. ( 2004 ) . Ultracold
molecules are a stepping stone to ultracold quantum gases Baranov etΒ al. ( 2002 ) and might have applications in quantum
information and quantum computing DeMille ( 2002 ) . There are two basic approaches to producing ultracold
molecules. In direct methods such as Stark deceleration Bethlem and Meijer ( 2003 ); Bethlem etΒ al. ( 2006 ) and helium buffer-gas
cooling Weinstein etΒ al. ( 1998 ) , preexisting molecules are
cooled from higher temperatures and trapped in electrostatic or
magnetic traps. In indirect methods Hutson and SoldΓ‘n ( 2006 ) , laser-cooled atoms that are already
ultracold are paired up to form molecules by either
photoassociation Jones etΒ al. ( 2006 ) or tuning through
magnetic Feshbach resonances KΓΆhler etΒ al. ( 2006 ) . Indirect methods have already been used extensively to produce
ultracold molecules at temperatures below 1 μK. However,
they are limited to molecules formed from atoms that can
themselves be cooled to such temperatures. Direct methods are
far more general than indirect methods, and can in principle be
applied to a very wide range of molecules. However, at present
direct methods are limited to temperatures in the range 10-100
mK, which is outside the ultracold regime. There is much
current research directed at finding second-stage cooling
methods to bridge the gap and eventually allow directly cooled
molecules to reach the region below 1 μK where quantum
gases can form. One of the most promising second-stage cooling methods that has
been proposed is sympathetic cooling . The hope is that, if a
sample of cold molecules is brought into contact with a gas of
ultracold atoms, thermalization will occur and the molecules will
be cooled towards the temperature of the atoms. Sympathetic
cooling has already been used successfully to cool atomic species
such as 6 Li Schreck et al. ( 2001 ) and 41 K Modugno et al. ( 2001 ) but has not yet been applied to neutral
molecules. Sympathetic cooling relies on thermalization occurring before
molecules are lost from the trap. Thermalization requires elastic collisions between atoms and molecules to redistribute
translational energy. However, electrostatic and magnetic traps
rely on Stark and Zeeman splittings and trapped atoms and
molecules are not usually in their absolute ground state in the
applied field. Any inelastic collision that converts
internal energy into translational energy is likely to kick
both colliding species out of the trap. The ratio of
elastic to inelastic cross sections is thus crucial, and a
commonly stated rule of thumb is that sympathetic cooling will
not work unless elastic cross sections are a factor of 10 to
100 greater than inelastic cross sections for the states
concerned. Inelastic cross sections for atom-atom collisions are sometimes
strongly suppressed by angular momentum constraints. In
particular, for s-wave collisions (end-over-end angular momentum L = 0 ), pairs of atoms in spin-stretched states (with the
maximum possible values of the total angular momentum F and its
projection |M_F| ) can undergo inelastic collisions only by
changing L . Cross sections for such processes are very small
because, for atoms in S states, the only interaction that can
change L πΏ L is the weak dipolar coupling between the electron
spins. However, for molecular collisions the situation is
different: the anisotropy of the intermolecular potential
can change L πΏ L , and this is usually much stronger than spin-spin
coupling. It is thus crucial to investigate the anisotropy of the
interaction potential for systems that are candidates for
sympathetic cooling experiments. In experimental terms, the
easiest systems to work with are those in which molecules that
can be cooled by Stark deceleration (such as NH 3 , OH and NH)
interact with atoms that can be laser-cooled (such as
alkali-metal and alkaline-earth atoms). There has been
extensive work on low-energy collisions of molecules with
helium atoms Balakrishnan etΒ al. ( 1997 , 1999 , 2000 ); Bohn ( 2000 ); Balakrishnan etΒ al. ( 2003 ); Krems etΒ al. ( 2003 ); GonzΓ‘lez-MartΓnez and
Hutson ( 2007 ) , but relatively little on collisions
with alkali-metal and alkaline-earth atoms. SoldΓ‘n and
Hutson SoldΓ‘n and Hutson ( 2004 ) investigated the potential energy
surfaces for Rb + NH and identified deeply bound ion-pair
states as well as weakly bound covalent states. They suggested
that the ion-pair states might hinder sympathetic cooling. Lara et al. ( 2006 , 2007 ) subsequently
calculated full potential energy surfaces for Rb + OH, for both
ion-pair states and covalent states, and used them to
investigate low-energy elastic and inelastic cross sections,
including spin-orbit coupling and nuclear spin splittings. They
found that even for the covalent states the potential energy
surfaces had anisotropies of the order of 500 cm -1 and
that this was sufficient to make the inelastic cross sections
larger than the elastic cross sections at temperatures below 10
mK. Tacconi et al. ( 2007 ) have recently
carried out analogous calculations on Rb + NH, though without
considering nuclear spin. There has also been a considerable
amount of work on collisions between alkali metal atoms and the
corresponding dimers SoldΓ‘n etΒ al. ( 2002 ); QuΓ©mΓ©ner
etΒ al. ( 2004 ); CvitaΕ‘
etΒ al. ( 2005a , b ); QuΓ©mΓ©ner
etΒ al. ( 2005 ); Hutson and SoldΓ‘n ( 2007 ) . One way around the problem of inelastic collisions is to work
with atoms and molecules that are in their absolute ground
state in the trapping field. However, this is quite limiting:
only optical dipole traps and alternating current traps van Veldhoven etΒ al. ( 2005 ) can trap such molecules. It is
therefore highly desirable to seek systems in which the
potential energy surface is only weakly anisotropic. The
purpose of the present paper is to survey the possibilities for
collision partners to use in sympathetic cooling of NH 3 (or
ND 3 ), which is one of the easiest molecules for Stark
deceleration. Even if sympathetic cooling proves to be impractical for a
particular system, the combination of laser cooling for atoms
and Stark deceleration for molecules offers opportunities for
studying molecular collisions in a new low-energy regime. For
example, experiments are under way at the University of
Colorado Lewandowski ( 2008 ) to study collisions
between decelerated NH 3 molecules and laser-cooled Rb atoms. The alkali-metal atom + NH 3 systems have not been
extensively studied theoretically, though there has been
experimental interest in the spectroscopy of the Li-NH 3 complex
as a prototype metal atom-Lewis base complex Wu et al. ( 2001 ) . Lim et al. ( 2007 ) recently calculated electrical
properties and infrared spectra for complexes of NH 3 with
alkali-metal atoms from K to Fr and gave the equilibrium
structures of their global minima. However, to our knowledge,
no complete potential energy surfaces have been published for
any of these systems. The alkaline-earth + NH 3 systems have been
studied even less, and except for an early study of the
Be-NH 3 system Chalasinski et al. ( 1993 ) , there are no previous
results available. | We investigate the possibility of producing ultracold NH3 molecules by sympathetic cooling in a bath of ultracold atoms. We consider the interactions of NH3 with alkali-metal and alkaline-earth-metal atoms, and with Xe, using ab initio coupled-cluster calculations. For Rb-NH3 and Xe-NH3 we develop full potential energy surfaces, while for the other systems we characterize the stationary points (global and local minima and saddle points). We also calculate isotropic and anisotropic van der Waals C6 coefficients for all the systems. The potential energy surfaces for interaction of NH3 with alkali-metal and alkaline-earth-metal atoms all show deep potential wells and strong anisotropies. The well depths vary from 887 cmβ1 for Mg-NH3 to 5104 cmβ1 for Li-NH3. This suggests that all these systems will exhibit strong inelasticity whenever inelastic collisions are energetically allowed and that sympathetic cooling will work only when both the atoms and the molecules are already in their lowest internal states. Xe-NH3 is more weakly bound and less anisotropic. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*Prospects for producing ultracold NH 3 molecules by sympathetic cooling: A survey of interaction potentials* and the introduction:I Introduction There is great interest at present in producing samples of cold
molecules (below 1 K) and ultracold molecules (below 1 mK). Such
molecules have many potential applications. High-precision
measurements on ultracold molecules might be used to measure
quantities of fundamental physics interest, such as the electric
dipole moment of the electron Hudson etΒ al. ( 2002 ) and the
time-dependence of fundamental constants such as the
electron/proton mass ratio van Veldhoven etΒ al. ( 2004 ) . Ultracold
molecules are a stepping stone to ultracold quantum gases Baranov etΒ al. ( 2002 ) and might have applications in quantum
information and quantum computing DeMille ( 2002 ) . There are two basic approaches to producing ultracold
molecules. In direct methods such as Stark deceleration Bethlem and Meijer ( 2003 ); Bethlem etΒ al. ( 2006 ) and helium buffer-gas
cooling Weinstein etΒ al. ( 1998 ) , preexisting molecules are
cooled from higher temperatures and trapped in electrostatic or
magnetic traps. In indirect methods Hutson and SoldΓ‘n ( 2006 ) , laser-cooled atoms that are already
ultracold are paired up to form molecules by either
photoassociation Jones etΒ al. ( 2006 ) or tuning through
magnetic Feshbach resonances KΓΆhler etΒ al. ( 2006 ) . Indirect methods have already been used extensively to produce
ultracold molecules at temperatures below 1 μK. However,
they are limited to molecules formed from atoms that can
themselves be cooled to such temperatures. Direct methods are
far more general than indirect methods, and can in principle be
applied to a very wide range of molecules. However, at present
direct methods are limited to temperatures in the range 10-100
mK, which is outside the ultracold regime. There is much
current research directed at finding second-stage cooling
methods to bridge the gap and eventually allow directly cooled
molecules to reach the region below 1 μK where quantum
gases can form. One of the most promising second-stage cooling methods that has
been proposed is sympathetic cooling . The hope is that, if a
sample of cold molecules is brought into contact with a gas of
ultracold atoms, thermalization will occur and the molecules will
be cooled towards the temperature of the atoms. Sympathetic
cooling has already been used successfully to cool atomic species
such as 6 Li Schreck et al. ( 2001 ) and 41 K Modugno et al. ( 2001 ) but has not yet been applied to neutral
molecules. Sympathetic cooling relies on thermalization occurring before
molecules are lost from the trap. Thermalization requires elastic collisions between atoms and molecules to redistribute
translational energy. However, electrostatic and magnetic traps
rely on Stark and Zeeman splittings and trapped atoms and
molecules are not usually in their absolute ground state in the
applied field. Any inelastic collision that converts
internal energy into translational energy is likely to kick
both colliding species out of the trap. The ratio of
elastic to inelastic cross sections is thus crucial, and a
commonly stated rule of thumb is that sympathetic cooling will
not work unless elastic cross sections are a factor of 10 to
100 greater than inelastic cross sections for the states
concerned. Inelastic cross sections for atom-atom collisions are sometimes
strongly suppressed by angular momentum constraints. In
particular, for s-wave collisions (end-over-end angular momentum L = 0 ), pairs of atoms in spin-stretched states (with the
maximum possible values of the total angular momentum F and its
projection |M_F| ) can undergo inelastic collisions only by
changing L . Cross sections for such processes are very small
because, for atoms in S states, the only interaction that can
change L is the weak dipolar coupling between the electron
spins. However, for molecular collisions the situation is
different: the anisotropy of the intermolecular potential
can change L , and this is usually much stronger than spin-spin
coupling. It is thus crucial to investigate the anisotropy of the
interaction potential for systems that are candidates for
sympathetic cooling experiments. In experimental terms, the
easiest systems to work with are those in which molecules that
can be cooled by Stark deceleration (such as NH 3 , OH and NH)
interact with atoms that can be laser-cooled (such as
alkali-metal and alkaline-earth atoms). There has been
extensive work on low-energy collisions of molecules with
helium atoms Balakrishnan etΒ al. ( 1997 , 1999 , 2000 ); Bohn ( 2000 ); Balakrishnan etΒ al. ( 2003 ); Krems etΒ al. ( 2003 ); GonzΓ‘lez-MartΓnez and
Hutson ( 2007 ) , but relatively little on collisions
with alkali-metal and alkaline-earth atoms. SoldΓ‘n and
Hutson SoldΓ‘n and Hutson ( 2004 ) investigated the potential energy
surfaces for Rb + NH and identified deeply bound ion-pair
states as well as weakly bound covalent states. They suggested
that the ion-pair states might hinder sympathetic cooling. Lara et al. ( 2006 , 2007 ) subsequently
calculated full potential energy surfaces for Rb + OH, for both
ion-pair states and covalent states, and used them to
investigate low-energy elastic and inelastic cross sections,
including spin-orbit coupling and nuclear spin splittings. They
found that even for the covalent states the potential energy
surfaces had anisotropies of the order of 500 cm$^{-1}$ and that this was sufficient to make the inelastic cross sections larger than the elastic cross sections at temperatures below 10 mK. Tacconi et al. (2007) have recently
carried out analogous calculations on Rb + NH, though without
considering nuclear spin. There has also been a considerable
amount of work on collisions between alkali metal atoms and the
corresponding dimers (Soldán et al. 2002; Quéméner et al. 2004; Cvitaš et al. 2005a, b; Quéméner et al. 2005; Hutson and Soldán 2007). One way around the problem of inelastic collisions is to work
with atoms and molecules that are in their absolute ground
state in the trapping field. However, this is quite limiting:
only optical dipole traps and alternating current traps (van Veldhoven et al. 2005) can trap such molecules. It is
therefore highly desirable to seek systems in which the
potential energy surface is only weakly anisotropic. The
purpose of the present paper is to survey the possibilities for
collision partners to use in sympathetic cooling of NH3 (or ND3), which is one of the easiest molecules for Stark
deceleration. Even if sympathetic cooling proves to be impractical for a
particular system, the combination of laser cooling for atoms
and Stark deceleration for molecules offers opportunities for
studying molecular collisions in a new low-energy regime. For
example, experiments are under way at the University of
Colorado (Lewandowski 2008) to study collisions between decelerated NH3 molecules and laser-cooled Rb atoms. The alkali-metal atom + NH3 systems have not been extensively studied theoretically, though there has been experimental interest in the spectroscopy of the Li-NH3 complex as a prototype metal atom-Lewis base complex (Wu et al. 2001). Lim et al. (2007) recently calculated electrical
properties and infrared spectra for complexes of NH3 with
alkali-metal atoms from K to Fr and gave the equilibrium
structures of their global minima. However, to our knowledge,
no complete potential energy surfaces have been published for
any of these systems. The alkaline-earth atom + NH3 systems have been studied even less, and except for an early study of the Be-NH3 system (Chalasinski et al. 1993) there are no previous results available."},{"role":"assistant","content":"Abstract: We investigate the possibility of producing ultracold NH3 molecules by sympathetic cooling in a bath of ultracold atoms. We consider the interactions of NH3 with alkali-metal and alkaline-earth-metal atoms, and with Xe, using ab initio coupled-cluster calculations. For Rb-NH3 and Xe-NH3 we develop full potential energy surfaces, while for the other systems we characterize the stationary points (global and local minima and saddle points). We also calculate isotropic and anisotropic van der Waals C6 coefficients for all the systems. The potential energy surfaces for interaction of NH3 with alkali-metal and alkaline-earth-metal atoms all show deep potential wells and strong anisotropies. The well depths vary from 887 cm$^{-1}$ for Mg-NH3 to 5104 cm$^{-1}$ for Li-NH3. This suggests that all these systems will exhibit strong inelasticity whenever inelastic collisions are energetically allowed and that sympathetic cooling will work only when both the atoms and the molecules are already in their lowest internal states. Xe-NH3 is more weakly bound and less anisotropic."}] |
705.2844 | A Review of integrable deformations in AdS/CFT | In recent years a large number of studies have emerged indicating
that type IIB string theory on $AdS_5 \times S^5$ and $\mathcal{N}=4$ super-Yang-Mills (SYM) theory in four dimensions may be integrable
in the planar limit. The technology of integrable systems has
therefore become extraordinarily useful in studying the AdS/CFT
correspondence in detail. The correspondence equates the spectrum
of operator anomalous dimensions of the gauge theory with the energy
spectrum of the string theory. The rank of the Yang-Mills
gauge group determines the amount of Ramond-Ramond (RR) flux on the $S^5$ subspace in the string theory, and in the planar limit this number is scaled to infinity: $N_c \to \infty$.
The string coupling $g_s$ is related to the gauge theory coupling $g_{\rm YM}$ via the standard relation $g_s = e^{\phi_0} = g_{\rm YM}^2/4\pi$, and the radial scale of both the $AdS_5$ and $S^5$ spaces is given by $R^4 = 4\pi g_s N_c = g_{\rm YM}^2 N_c = \lambda$ (with $\alpha' = 1$).
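For orientation, combining these relations makes the role of the planar limit explicit (a standard observation rather than a statement specific to this review): at fixed 't Hooft coupling $\lambda$, the string coupling is $1/N_c$-suppressed, so string loop corrections vanish in the planar limit:

$$ g_s = \frac{g_{\rm YM}^2}{4\pi} = \frac{\lambda}{4\pi N_c} \;\longrightarrow\; 0 \qquad (N_c \to \infty,\ \lambda\ \text{fixed}). $$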
If these theories are indeed integrable, the dynamics should be encoded in a diffractionless scattering matrix $S$. On the string side, in the strong-coupling limit ($\lambda = g_{\rm YM}^2 N_c \to \infty$), this $S$ matrix can be interpreted as describing the
two-body scattering of elementary excitations on the worldsheet.
As their worldsheet momenta become large, these
excitations are better described as special types of solitonic
solutions, or giant magnons, and the interpolating region is described by the dynamics of the so-called near-flat-space regime. [1, 2] On the gauge theory side, the action
of the dilatation generator on single-trace operators can be equated
with that of a Hamiltonian acting on states of a spin
chain. [3] In this picture, operators in the trace are
represented as lattice pseudoparticles that, like their stringy
counterparts, experience diffractionless scattering encoded by an $S$ matrix.
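A concrete instance of this dictionary is the well-known one-loop result in the su(2) sector (quoted here for illustration; it is not derived in this letter): single-trace operators built from two complex scalars map to states of a length-$L$ spin chain, on which the one-loop dilatation operator acts as the ferromagnetic Heisenberg Hamiltonian,

$$ D = L + \frac{\lambda}{8\pi^2} \sum_{l=1}^{L} \left( 1 - P_{l,\,l+1} \right) + \mathcal{O}(\lambda^{2}), $$

where $P_{l,l+1}$ permutes neighboring sites and site $L+1$ is identified with site $1$ by cyclicity of the trace.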
Proving that the gauge and string theories are identical in the planar limit
therefore amounts to showing that the underlying physics of both
theories is governed by the same two-body scattering matrix. In
fact, symmetry fixes this $S$ matrix up to an overall phase $\sigma$, so what remains is to somehow determine $\sigma$ from basic principles. [4] (Unitarity and crossing
relations, as they exist in this context, constrain this phase to
some extent; see Refs. \refcite Janik:2006dc,Beisert:2006ib,Beisert:2006ez for
recent developments.) An impressive amount of evidence exists in favor of the mutual
integrability of these two theories. If true, this raises the
question of whether these theories can be deformed in a controlled
manner while remaining integrable. One class of interesting
deformations to consider are the marginal $\beta$ deformations of $\mathcal{N}=4$ SYM, also known as Leigh-Strassler deformations. [9] The resulting theories comprise a one-parameter family of $\mathcal{N}=1$ conformal gauge theories (in the case of real $\beta$ deformations). On the gravity side of the correspondence, these correspond to special geometrical deformations of the $S^5$ subspace in the string theory background. [10] In fact, the integrability of the gauge and string theory, to the
extent that it is understood in the undeformed cases, seems to
persist under these deformations. This problem was studied directly
and indirectly, for example, in Refs. \refcite Frolov:2005ty,Plefka:2005bk,Frolov:2005dj,Alday:2005ww,Freyhult:2005ws,Chen:2005sb,Chen:2006bh,Beisert:2005if,Spradlin:2005sv,Bobev:2005cz,Ryang:2005pg (see
also references therein). The dynamics of both theories can be
captured, at least in certain limits, by twisted Bethe
equations. Here we review an analogous class of deformations acting
on the $AdS_5$ subspace of the string theory background, first
studied in Ref. \refcite US. While the corresponding gauge theory
is less well understood (it may be a non-commutative or even
non-associative theory), the string theory seems to be well defined
in the near-pp-wave regime. Furthermore, the string energy spectrum
can be computed precisely in this limit from a discrete Bethe
ansatz, which lends substantial support to the methodology developed
in Refs. \refcite Arutyunov:2004vx,Staudacher:2004tk,Beisert:2005fw. In Section 2 below, TsT deformations of the string
background geometry are reviewed in detail. The classical
integrability of the string sigma model is discussed in Section 3. String energy spectra are computed directly from the deformed Green-Schwarz action in the near-pp-wave limit in Section 4. In Section 5, the thermodynamic Bethe equations are promoted to discrete Bethe equations that correctly reproduce the deformed energy spectra. A brief discussion and thoughts on further research are given in Section 6. This
letter is a review of a seminar based on Ref. \refcite US given in
May, 2006 at the Institute for Advanced Study. | Marginal β deformations of $\mathcal{N}=4$ super-Yang–Mills theory are known to correspond to a certain class of deformations of the S5 background subspace of type IIB string theory in AdS5×S5. An analogous set of deformations of the AdS5 subspace is reviewed here. String energy spectra computed in the near-pp-wave limit of these backgrounds match predictions encoded by discrete, asymptotic Bethe equations, suggesting that the twisted string theory is classically integrable in this regime. These Bethe equations can be derived algorithmically by relying on the existence of Lax representations, and on the Riemann–Hilbert interpretation of the thermodynamic Bethe ansatz. This letter is a review of a seminar given at the Institute for Advanced Study, based on research completed in collaboration with McLoughlin. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A Review of integrable deformations in AdS/CFT* and the introduction:In recent years a large number of studies have emerged indicating
that type IIB string theory on $AdS_5 \times S^5$ and $\mathcal{N}=4$ super-Yang-Mills (SYM) theory in four dimensions may be integrable
in the planar limit. The technology of integrable systems has
therefore become extraordinarily useful in studying the AdS/CFT
correspondence in detail. The correspondence equates the spectrum
of operator anomalous dimensions of the gauge theory with the energy
spectrum of the string theory. The rank of the Yang-Mills
gauge group determines the amount of Ramond-Ramond (RR) flux on the $S^5$ subspace in the string theory, and in the planar limit this number is scaled to infinity: $N_c \to \infty$.
The string coupling $g_s$ is related to the gauge theory coupling $g_{\rm YM}$ via the standard relation $g_s = e^{\phi_0} = g_{\rm YM}^2/4\pi$, and the radial scale of both the $AdS_5$ and $S^5$ spaces is given by $R^4 = 4\pi g_s N_c = g_{\rm YM}^2 N_c = \lambda$ (with $\alpha' = 1$). If these theories are indeed integrable, the dynamics should be encoded in a diffractionless scattering matrix $S$. On the string side, in the strong-coupling limit ($\lambda = g_{\rm YM}^2 N_c \to \infty$), this $S$ matrix can be interpreted as describing the
two-body scattering of elementary excitations on the worldsheet.
As their worldsheet momenta become large, these
excitations are better described as special types of solitonic
solutions, or giant magnons, and the interpolating region is described by the dynamics of the so-called near-flat-space regime. [1, 2] On the gauge theory side, the action
of the dilatation generator on single-trace operators can be equated
with that of a Hamiltonian acting on states of a spin
chain. [3] In this picture, operators in the trace are
represented as lattice pseudoparticles that, like their stringy
counterparts, experience diffractionless scattering encoded by an $S$ matrix. Proving that the gauge and string theories are identical
in the planar limit
therefore amounts to showing that the underlying physics of both
theories is governed by the same two-body scattering matrix. In
fact, symmetry fixes this $S$ matrix up to an overall phase $\sigma$, so what remains is to somehow determine $\sigma$ from basic principles. [4] (Unitarity and crossing
relations, as they exist in this context, constrain this phase to
some extent; see Refs. \refcite Janik:2006dc,Beisert:2006ib,Beisert:2006ez for
recent developments.) An impressive amount of evidence exists in favor of the mutual
integrability of these two theories. If true, this raises the
question of whether these theories can be deformed in a controlled
manner while remaining integrable. One class of interesting
deformations to consider are the marginal $\beta$ deformations of $\mathcal{N}=4$ SYM, also known as Leigh-Strassler deformations. [9] The resulting theories comprise a one-parameter family of $\mathcal{N}=1$ conformal gauge theories (in the case of real $\beta$ deformations). On the gravity side of the correspondence, these correspond to special geometrical deformations of the $S^5$ subspace in the string theory background. [10] In fact, the integrability of the gauge and string theory, to the
extent that it is understood in the undeformed cases, seems to
persist under these deformations. This problem was studied directly
and indirectly, for example, in Refs. \refcite Frolov:2005ty,Plefka:2005bk,Frolov:2005dj,Alday:2005ww,Freyhult:2005ws,Chen:2005sb,Chen:2006bh,Beisert:2005if,Spradlin:2005sv,Bobev:2005cz,Ryang:2005pg (see
also references therein). The dynamics of both theories can be
captured, at least in certain limits, by twisted Bethe
equations. Here we review an analogous class of deformations acting
on the $AdS_5$ subspace of the string theory background, first
studied in Ref. \refcite US. While the corresponding gauge theory
is less well understood (it may be a non-commutative or even
non-associative theory), the string theory seems to be well defined
in the near-pp-wave regime. Furthermore, the string energy spectrum
can be computed precisely in this limit from a discrete Bethe
ansatz, which lends substantial support to the methodology developed
in Refs. \refcite Arutyunov:2004vx,Staudacher:2004tk,Beisert:2005fw. In Section 2 below, TsT deformations of the string
background geometry are reviewed in detail. The classical
integrability of the string sigma model is discussed in Section 3. String energy spectra are computed directly from the deformed Green-Schwarz action in the near-pp-wave limit in Section 4. In Section 5, the thermodynamic Bethe equations are promoted to discrete Bethe equations that correctly reproduce the deformed energy spectra. A brief discussion and thoughts on further research are given in Section 6. This
letter is a review of a seminar based on Ref. \refcite US given in
May, 2006 at the Institute for Advanced Study."},{"role":"assistant","content":"Abstract: Marginal β deformations of $\mathcal{N}=4$ super-Yang–Mills theory are known to correspond to a certain class of deformations of the S5 background subspace of type IIB string theory in AdS5×S5. An analogous set of deformations of the AdS5 subspace is reviewed here. String energy spectra computed in the near-pp-wave limit of these backgrounds match predictions encoded by discrete, asymptotic Bethe equations, suggesting that the twisted string theory is classically integrable in this regime. These Bethe equations can be derived algorithmically by relying on the existence of Lax representations, and on the Riemann–Hilbert interpretation of the thermodynamic Bethe ansatz. This letter is a review of a seminar given at the Institute for Advanced Study, based on research completed in collaboration with McLoughlin."}] |
805.1893 | A brief survey of the renormalizability of four dimensional gravity for generalized Kodama states | Stability of the pure Kodama state: This work continues the line of research from [1], [2]. The main question that we would like to formulate is (i) whether or not the pure Kodama state $\Psi_{Kod}$ is a good ground state of general relativity about which quantum fluctuations constitute a renormalizable theory. The second main question we would like to analyse concerns the relation between the pure and the generalized Kodama states with respect to the vacuum state of quantum general relativity: (ii) Is $\Psi_{GKod}$ in any sense an excited version of $\Psi_{Kod}$, or is it an independent ground state of the gravity-matter system? As a corollary to (ii), does each model for which a $\Psi_{GKod}$ can be constructed constitute an additional vacuum state of general relativity? In [3], Chopin Soo and Lee Smolin present the hypothesis for the treatment of matter fields as a perturbation about DeSitter space satisfying a Schrödinger equation for small fluctuations. In [4], Smolin and Freidel expand $\Psi_{Kod}$ in gravitons about an abelian theory. However, since the generalized Kodama states are designed to incorporate the matter effects to all orders and to enforce the proper semiclassical limit, we expect it to be the case that $\Psi_{GKod}$ is to all orders in the expansion nonperturbatively related to $\Psi_{Kod}$. In this case, one should be able to find a discrete transformation that maps $\Psi_{Kod}$ into $\Psi_{GKod}$ and, more generally, a discrete transformation amongst the generalized Kodama states for different models. The manner in which we address this transformation is to view $\Psi_{Kod}$ as being invariant under a symmetry which is broken due to the presence of matter fields. When one views the effect of the matter fields in terms of backreactions on DeSitter spacetime, then one can see more clearly the link from $\Psi_{GKod}$ to the semiclassical limit below the Planck scale. We provide in this work a brief synopsis of the required transformation in terms of tree networks, and then briefly comment in the discussion on the implications for nonperturbative renormalizability in the Ashtekar variables. The layout of this paper is as follows. In section 2 we review the developments which cast the pure Kodama state into a perspective suitable for posing the question of a stable ground state. In section 3 we discuss in detail the effects and the interpretation of incorporating matter fields, in the good semiclassical limit below the Planck scale, into the fully extrapolated theory of quantized gravity. In section 4 we briefly introduce the quantum theory of fluctuations on DeSitter spacetime and the general structures required. In section 5 we introduce the concept of the tree network, which can be seen as the application of Feynman diagrammatic techniques to the solution of the constraints. We then show how the networks implement the discrete transformation amongst generalized Kodama states.
We argue for the interpretation of general relativity as a renormalizable theory due to its tree network structure when expressed in Ashtekar variables. | We continue the line of research from previous works in assessing the suitability of the pure Kodama state both as a ground state for the generalized Kodama states, as well as characteristic of a good semiclassical limit of general relativity. We briefly introduce the quantum theory of fluctuations about DeSitter spacetime, which enables one to examine some perturbative aspects of the state. Additionally, we also motivate the concept of the cubic tree network, which enables one to view the generalized Kodama states in compact form as a nonlinear transformation of the pure Kodama states parametrized by the matter content of the proper classical limit. It is hoped that this work constitutes a first step in addressing the nonperturbative renormalizability of general relativity in Ashtekar variables. Remaining issues to address, including the analysis of specific matter models, include finiteness and normalizability of the generalized Kodama state as well as reality conditions on the Ashtekar variables, which we relegate to separate works. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*A brief survey of the renormalizability of four dimensional gravity for generalized Kodama states* and the introduction:Stability of the pure Kodama state: This work continues the line of research from [1], [2]. The main question that we would like to formulate is (i) whether or not the pure Kodama state $\Psi_{Kod}$ is a good ground state of general relativity about which quantum fluctuations constitute a renormalizable theory. The second main question we would like to analyse concerns the relation between the pure and the generalized Kodama states with respect to the vacuum state of quantum general relativity: (ii) Is $\Psi_{GKod}$ in any sense an excited version of $\Psi_{Kod}$, or is it an independent ground state of the gravity-matter system? As a corollary to (ii), does each model for which a $\Psi_{GKod}$ can be constructed constitute an additional vacuum state of general relativity? In [3], Chopin Soo and Lee Smolin present the hypothesis for the treatment of matter fields as a perturbation about DeSitter space satisfying a Schrödinger equation for small fluctuations. In [4], Smolin and Freidel expand $\Psi_{Kod}$ in gravitons about an abelian theory. However, since the generalized Kodama states are designed to incorporate the matter effects to all orders and to enforce the proper semiclassical limit, we expect it to be the case that $\Psi_{GKod}$ is to all orders in the expansion nonperturbatively related to $\Psi_{Kod}$. In this case, one should be able to find a discrete transformation that maps $\Psi_{Kod}$ into $\Psi_{GKod}$ and, more generally, a discrete transformation amongst the generalized Kodama states for different models. The manner in which we address this transformation is to view $\Psi_{Kod}$ as being invariant under a symmetry which is broken due to the presence of matter fields.
When one views the effect of the matter fields in terms of backreactions on DeSitter spacetime, then one can see more clearly the link from $\Psi_{GKod}$ to the semiclassical limit below the Planck scale. We provide in this work a brief synopsis of the required transformation in terms of tree networks, and then briefly comment in the discussion on the implications for nonperturbative renormalizability in the Ashtekar variables. The layout of this paper is as follows. In section 2 we review the developments which cast the pure Kodama state into a perspective suitable for posing the question of a stable ground state. In section 3 we discuss in detail the effects and the interpretation of incorporating matter fields, in the good semiclassical limit below the Planck scale, into the fully extrapolated theory of quantized gravity. In section 4 we briefly introduce the quantum theory of fluctuations on DeSitter spacetime and the general structures required. In section 5 we introduce the concept of the tree network, which can be seen as the application of Feynman diagrammatic techniques to the solution of the constraints. We then show how the networks implement the discrete transformation amongst generalized Kodama states.
We argue for the interpretation of general relativity as a renormalizable theory due to its tree network structure when expressed in Ashtekar variables."},{"role":"assistant","content":"Abstract: We continue the line of research from previous works in assessing the suitability of the pure Kodama state both as a ground state for the generalized Kodama states, as well as characteristic of a good semiclassical limit of general relativity. We briefly introduce the quantum theory of fluctuations about DeSitter spacetime, which enables one to examine some perturbative aspects of the state. Additionally, we also motivate the concept of the cubic tree network, which enables one to view the generalized Kodama states in compact form as a nonlinear transformation of the pure Kodama states parametrized by the matter content of the proper classical limit. It is hoped that this work constitutes a first step in addressing the nonperturbative renormalizability of general relativity in Ashtekar variables. Remaining issues to address, including the analysis of specific matter models, include finiteness and normalizability of the generalized Kodama state as well as reality conditions on the Ashtekar variables, which we relegate to separate works."}] |
705.4146 | Dynamical 3-Space: A Review | We review here some of the new physics emerging from the discovery that there exists a dynamical 3-space. This discovery changes all of physics. While at a deeper level this emerges from the information-theoretic Process Physics [2, 5, 6, 7, 8, 9, 10], here we focus on the phenomenological description of this 3-space in terms of the velocity field that describes the internal dynamics of this structured 3-space. It is straightforward to construct the minimal dynamics for this 3-space, and it involves two constants: $G$, Newton's gravitational constant, and $\alpha$, the fine structure constant. $G$ quantifies the effect of matter upon the flowing 3-space, while $\alpha$ describes the self-interaction of the 3-space. Bore hole experiments and black hole astronomical observations give the value of $\alpha$ as the fine structure constant to within observational errors. A major development is that the Newtonian theory of gravity [11] is fundamentally flawed: even in the non-relativistic limit it fails to correctly model numerous gravitational phenomena. So Newton's theory of gravity is far from being "universal". The Hilbert-Einstein theory of gravity (General Relativity - GR), with gravity being a curved spacetime effect, was based on the assumption that Newtonian gravity was valid in the non-relativistic limit. The ongoing effort to save GR against numerous disagreements with experiment and observation led to the invention first of "dark matter" and then "dark energy". These effects are no longer required in the new physics. The 3-space velocity field has been directly detected in at least eight experiments including the Michelson-Morley experiment [3] of 1887, but most impressively by the superb experiment by Miller in 1925/1926 [4]. The Miller experiment was one of the great physics experiments of the 20th century, but has been totally neglected by mainstream physics. All of these experiments detected the dynamical 3-space by means of the light speed anisotropy - that the speed of light is different in different directions, and the anisotropy is very large, namely some 1 part in a 1000. The existence of this 3-space as a detectable phenomenon implies that a generalisation of all the fundamental theories of physics be carried out. The generalisation of the Maxwell equations leads to a simple explanation for gravitational light bending and lensing effects, the generalisation of the Schrödinger equation leads to the first derivation of gravity - as a refraction effect of the quantum matter waves by the time dependence and inhomogeneities of the 3-space, leading as well to a derivation of the equivalence principle. This generalised Schrödinger equation also explains the Lense-Thirring effect as being caused by vorticity in the flowing 3-space. This effect is being studied by the Gravity Probe B (GP-B) gyroscope precession experiment. The generalisation of the Dirac equation to take account of the interaction of the spinor with the dynamical 3-space results in the derivation of the curved spacetime formalism for the quantum matter geodesics, but without reference to the GR equations for the induced spacetime metric. What emerges from this derivation is that the spacetime is purely a mathematical construct - it has no ontological status. That discovery completely overturns the paradigm of 20th century physics.
The dynamical equation for the 3-space has black hole solutions with properties very different from the putative black holes of GR, leading to the verified prediction for the masses of the minimal black holes in spherical star systems. That same dynamics has an expanding 3-space solution - the Hubble effect for the universe. That solution has the expansion mainly determined by space itself. This expansion gives an extremely good account of the supernovae/Gamma-Ray Burst redshift data without the notion of "dark energy" or an accelerating universe. This review focuses on the phenomenological modelling of the 3-space dynamics and its experimental checking. Earlier reviews are available in [2] (2005) and [5] (2003). Page limitations mean that some developments have not been discussed herein. | For some 100 years physics has modelled space and time via the spacetime concept, with space being merely an observer dependent perspective effect of that spacetime - space itself had no observer independent existence - it had no ontological status, and it certainly had no dynamical description. In recent years this has all changed. In 2002 it was discovered that a dynamical 3-space had been detected many times, including the Michelson-Morley 1887 light-speed anisotropy experiment. Here we review the dynamics of this 3-space, tracing its evolution from that of an emergent phenomenon in the information-theoretic Process Physics to the phenomenological description in terms of a velocity field describing the relative internal motion of the structured 3-space. The new physics of the dynamical 3-space is extensively tested against experimental and astronomical observations, including the necessary generalisation of the Maxwell, Schrödinger and Dirac equations, leading to a derivation and explanation of gravity as a refraction effect of quantum matter waves. The flat and curved spacetime formalisms are derived from the new physics, so explaining their apparent many successes. | [{"role":"user","content":"Help me to generate the abstract of a survey paper given the title:*Dynamical 3-Space: A Review* and the introduction:We review here some of the new physics emerging from the discovery that there exists a dynamical 3-space. This discovery changes all of physics. While at a deeper level this emerges from the information-theoretic Process Physics [2, 5, 6, 7, 8, 9, 10], here we focus on the phenomenological description of this 3-space in terms of the velocity field that describes the internal dynamics of this structured 3-space. It is straightforward to construct the minimal dynamics for this 3-space, and it involves two constants: $G$, Newton's gravitational constant, and $\alpha$, the fine structure constant. $G$ quantifies the effect of matter upon the flowing 3-space, while $\alpha$ describes the self-interaction of the 3-space. Bore hole experiments and black hole astronomical observations give the value of $\alpha$ as the fine structure constant to within observational errors. A major development is that the Newtonian theory of gravity [11] is fundamentally flawed: even in the non-relativistic limit it fails to correctly model numerous gravitational phenomena. So Newton's theory of gravity is far from being universal. The Hilbert-Einstein theory of gravity (General Relativity - GR), with gravity being a curved spacetime effect, was based on the assumption that Newtonian gravity was valid in the non-relativistic limit.
The ongoing effort to save GR against numerous disagreements with experiment and observation led to the invention first of dark matter and then dark energy. These effects are no longer required in the new physics. The 3-space velocity field has been directly detected in at least eight experiments including the Michelson-Morley experiment [3] of 1887, but most impressively by the superb experiment by Miller in 1925/1926 [4]. The Miller experiment was one of the great physics experiments of the 20th century, but has been totally neglected by mainstream physics. All of these experiments detected the dynamical 3-space by means of the light speed anisotropy - that the speed of light is different in different directions, and the anisotropy is very large, namely some 1 part in a 1000. The existence of this 3-space as a detectable phenomenon implies that a generalisation of all the fundamental theories of physics be carried out. The generalisation of the Maxwell equations leads to a simple explanation for gravitational light bending and lensing effects, the generalisation of the Schrödinger equation leads to the first derivation of gravity - as a refraction effect of the quantum matter waves by the time dependence and inhomogeneities of the 3-space, leading as well to a derivation of the equivalence principle. This generalised Schrödinger equation also explains the Lense-Thirring effect as being caused by vorticity in the flowing 3-space. This effect is being studied by the Gravity Probe B (GP-B) gyroscope precession experiment. The generalisation of the Dirac equation to take account of the interaction of the spinor with the dynamical 3-space results in the derivation of the curved spacetime formalism for the quantum matter geodesics, but without reference to the GR equations for the induced spacetime metric. What emerges from this derivation is that the spacetime is purely a mathematical construct - it has no ontological status. That discovery completely overturns the paradigm of 20th century physics. The dynamical equation for the 3-space has black hole solutions with properties very different from the putative black holes of GR, leading to the verified prediction for the masses of the minimal black holes in spherical star systems. That same dynamics has an expanding 3-space solution - the Hubble effect for the universe. That solution has the expansion mainly determined by space itself. This expansion gives an extremely good account of the supernovae/Gamma-Ray Burst redshift data without the notion of dark energy or an accelerating universe. This review focuses on the phenomenological modelling of the 3-space dynamics and its experimental checking. Earlier reviews are available in [2] (2005) and [5] (2003). Page limitations mean that some developments have not been discussed herein."},{"role":"assistant","content":"Abstract: For some 100 years physics has modelled space and time via the spacetime concept, with space being merely an observer dependent perspective effect of that spacetime - space itself had no observer independent existence - it had no ontological status, and it certainly had no dynamical description. In recent years this has all changed. In 2002 it was discovered that a dynamical 3-space had been detected many times, including the Michelson-Morley 1887 light-speed anisotropy experiment.
Here we review the dynamics of this 3-space, tracing its evolution from that of an emergent phenomenon in the information-theoretic Process Physics to the phenomenological description in terms of a velocity field describing the relative internal motion of the structured 3-space. The new physics of the dynamical 3-space is extensively tested against experimental and astronomical observations, including the necessary generalisation of the Maxwell, Schrödinger and Dirac equations, leading to a derivation and explanation of gravity as a refraction effect of quantum matter waves. The flat and curved spacetime formalisms are derived from the new physics, so explaining their apparent many successes."}]